Before getting into machine arithmetic considerations, suppose that, with sigfig, we want to compare x to y. Looking at y, floor(log10(y)) tells us where its first significant digit is. For example,

floor(log10(0.234)) = -1

floor(log10(2.34)) = 0

floor(log10(234)) = 2

floor(log10(100)) = 2

So then if the tolerance is, say, 3 (requiring 3 significant figures to be correct), compare x and y using the difference between them, with an absolute tolerance (distinct from the sig-fig tolerance value) of 5*10**(floor(log10(y)) - 3)

For example, in comparing x to 0.0234 with 3 sigfigs, x would be equivalent to 0.0234 if

abs(x - 0.0234) < 5*10**(floor(log10(0.0234)) - 3)

abs(x - 0.0234) < 5*10**(-2 - 3)

abs(x - 0.0234) < 5*10**(-5)

abs(x - 0.0234) < 0.00005

This might need some tweaking for cases where we use 3 sigfigs but the y in the example is something like 0.02344444...; I haven't thought it through completely. But basically you are using an absolute tolerance comparison, where the absolute tolerance changes depending on the order of magnitude of one of the numbers being compared.
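The scheme above can be sketched as follows (a sketch only: the function name is mine, and y = 0 would need a separate zeroLevel-style check, since log10 is undefined there):

```python
import math

def sig_fig_equal(x, y, sigfigs=3):
    """Compare x to the correct answer y, requiring agreement to
    `sigfigs` significant figures of y.  The absolute tolerance is
    5 * 10**(floor(log10(|y|)) - sigfigs): half a unit in the digit
    just past the last required significant figure of y."""
    tol = 5 * 10 ** (math.floor(math.log10(abs(y))) - sigfigs)
    return abs(x - y) < tol

# The worked example: tolerance for y = 0.0234 is 5e-5
# sig_fig_equal(0.02341, 0.0234)  -> True   (|diff| = 1e-5 < 5e-5)
# sig_fig_equal(0.0235,  0.0234)  -> False  (|diff| = 1e-4 > 5e-5)
```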

----------

With percentages, consider a context where only answers in [0, 1] are appropriate in the first place. Like computing a probability. (In fact, 'probability' might be a better name for the tolType.)

With tolType => absolute,

With tolerance => 0.01, a common correct answer in stats, such as 0.997, would let 1.000 or 0.999 get credit. While they are numerically close, they are galaxies away in a deeper sense. For answers near 100% or 0%, you tend to want high precision.

One could alleviate this by using tolerance => 0.0005, but then what if the problem allows the same answer variable to be 0.5123? We simultaneously don't want to force students to use all that precision when the answer is not close to 100% or 0%. In that case I think many would be happy with an answer of 51%.

Worst of all, no matter what you use for tolerance, including 0.0005, what if the answer ends up being 0.999992? Is it OK to answer 100%? And with an answer of 0.00003, is it OK to answer 0%? Both feel wrong, since in this context 0% and 100% are so different from anything else.
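A quick numerical illustration of the problem (plain Python floats; `abs_equal` is just shorthand for the absolute-tolerance check, not an actual tolType implementation):

```python
def abs_equal(x, y, tolerance):
    """Absolute-tolerance comparison, as with tolType => absolute."""
    return abs(x - y) < tolerance

# Near the middle of [0, 1], tolerance 0.0005 is already quite strict:
abs_equal(0.512, 0.5123, 0.0005)   # True: 51.2% gets credit, as desired
# But near the endpoints, even this strict tolerance lets the
# categorically-different answers 100% and 0% through:
abs_equal(1.0, 0.999992, 0.0005)   # True: 100% gets credit
abs_equal(0.0, 0.00003, 0.0005)    # True: 0% gets credit
```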

With tolType => relative,

With, say, tolerance => 0.001,

Then these issues go away at the end of the spectrum close to 0%. For an answer close to 0%, a student won't get away with 0% until the numbers are so low that we are down at the zeroLevel.

But the issues are still there at the end of the spectrum close to 100%. Now a correct answer of 0.9992 would allow 100% to be counted correct.
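The asymmetry can be seen numerically (again a sketch; `rel_equal` is my shorthand for the relative check, ignoring zeroLevel):

```python
def rel_equal(x, y, tolerance):
    """Relative-tolerance comparison, as with tolType => relative."""
    return abs(x - y) <= tolerance * abs(y)

# Near 0% the relative check is appropriately strict:
rel_equal(0.0, 0.00003, 0.001)   # False: 0% is rejected (tol shrinks with y)
# Near 100% it is not: measured relative to y = 0.9992, the gap to
# 1.0 is well within 0.1%:
rel_equal(1.0, 0.9992, 0.001)    # True: 100% still gets credit
```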

So in the end, when comparing the student's x to the answer y, what feels right is to:

- use a relative precision (or sigfig precision) comparison of x and y when y is less than 50%,
- use a relative precision (or sigfig precision) comparison of (1-x) and (1-y) when y is greater than 50%,
- or maybe for simplicity, declare the values equivalent if they pass two comparison checks: one of x with y and one of (1-x) with (1-y), regardless of position relative to 50%.