## Features & Development

### relative error tolerance results in inconsistent error checking

by Gavin LaRose -
Number of replies: 4
Hi all,

This has now come up twice this week, so I thought I'd float it. The use of function evaluation at points to determine the correctness of student answers is, I've found, generally very good. But it has the undesirable side-effect that an inexact student answer may be marked correct or incorrect on different submissions.

Consider the problem snippet:

```perl
$a = random(1,9);
BEGIN_TEXT
\( f(x) = $a - e^x \) $BR
Find the equation of the tangent to \( f \)
where it crosses the \( x \) axis:
$BR \( y = \) \{ ans_rule(20) \}
END_TEXT
ANS( Compute("-$a*(x - ln($a))")->cmp() );
```

If $$a = 4$$, the correct answer is $$y = -4x + 4\ln(4)$$, or approximately $$y = -4x + 5.54518$$. We had a student enter $$y = -4x + 5.545$$; on one of six attempts (there are other parts to the problem), one of the evaluation points fell close to $$x = 1.38625$$, where the student's answer evaluates to zero. As a result, the error relative to the (non-zero) correct answer exceeded the allowed tolerance, and the answer was marked wrong.
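The failure mode is easy to reproduce outside WeBWorK. The sketch below (plain Python, not PG code; the function names are mine) compares the correct tangent line against the rounded student answer. Away from the shared zero the relative error is negligible, but near $$x = \ln(4)$$ both lines pass through zero and the same tiny absolute error becomes a huge relative error.

```python
import math

a = 4

def correct(x):
    # exact tangent line: y = -4x + 4*ln(4)
    return -a * (x - math.log(a))

def student(x):
    # the student's answer, with the constant term rounded to 5.545
    return -4 * x + 5.545

def rel_err(x):
    # error relative to the correct value, as a relative-tolerance check computes it
    return abs(student(x) - correct(x)) / abs(correct(x))

# Far from the common zero the relative error is tiny (roughly 3e-5 at x = 0),
# well inside a 0.1% tolerance.
print(rel_err(0.0))

# Near x = ln(4) ~ 1.3863 both lines cross zero, so the same small absolute
# error yields a relative error of roughly 0.47 -- a guaranteed failure if an
# evaluation point happens to land here.
print(rel_err(1.3862))
```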

The tricky thing here is that the singularity is a function of the student's answer, so it's not entirely straightforward to avoid it when answer checking. That's not completely true, of course: a nearly correct student answer has its zeros close to those of the correct answer, so if we avoid zeros of the correct solution, we also avoid approximate zeros of the student's solution.

So: is this the best work-around? (That is, when the correct solution has a zero, be careful to check the answer away from it?) Is there a better one? Can or should the answer checking be modified to avoid students' answers being marked differently on different submissions?

I think this last issue, that a student's answer can be marked differently on subsequent submissions, is quite significant.

Thanks,
Gavin

### Re: relative error tolerance results in inconsistent error checking

by Michael Gage -
I agree that the worst aspect of this is that the question is marked differently on subsequent identical submissions.  In many cases the numerical evaluation is done at "random points",  but since these random points are determined by the problem's seed these random points are exactly the same every time for a given problem/seed pair.  You could get an incorrect evaluation, but you should always get the same incorrect evaluation.
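A minimal illustration of the seeded scheme described above (plain Python, not the actual PG implementation; the function name and parameters are hypothetical): deriving the evaluation points from the problem's seed makes every check of the same problem/seed pair deterministic.

```python
import random

def eval_points(problem_seed, llimit=0.0, ulimit=2.0, n=5):
    # Hypothetical sketch: draw the evaluation points from a generator
    # seeded with the problem's seed, so every submission of the same
    # problem/seed pair is checked at exactly the same points.
    rng = random.Random(problem_seed)
    return [rng.uniform(llimit, ulimit) for _ in range(n)]

# Same seed -> identical points, so even a wrong grade is reproducible.
assert eval_points(1234) == eval_points(1234)

# An unseeded generator (e.g. effectively seeded by the submission time)
# would give different points on each attempt, which is what lets a
# borderline answer be graded differently on identical resubmissions.
```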

Apparently "truly random" points are being used in this case (determined by the time of submission, or something like that), and this example is a good argument for not using this kind of randomness when checking answers.

I haven't checked the code behind this example, so there may be more to it. In particular, I would assume that when checking relative error the absolute error is divided by the correct answer, not the student's answer. So as long as the evaluation points stay away from points where the correct answer is zero, you should be OK.
I'm not convinced that the zero in the student's answer is causing the problem (it shouldn't be), but I haven't checked the code.

-- Mike

### Re: relative error tolerance results in inconsistent error checking

by Gavin LaRose -
Hi Mike,

Attached are screenshots of the diagnostics for two successive submissions of the problem; as noted, the first is marked incorrect and the second correct. And I think you're right: the problem may be the zero in the correct solution, which magnifies the relative error.

As a side note, we had a report of the same issue on one of our gateway tests as well, something we hadn't heard before. This may suggest that something subtly changed in the evaluation code, causing the evaluation points to be "truly" random now.

Gavin

### Re: relative error tolerance results in inconsistent error checking

by Michael Gage -
Gavin,

The screen shots seem to indicate that the evaluation points are different from one submission to the next.  This might be an unintended consequence of some other change -- I'll have to search through the code to see what causes this. My memory is that at one point at least the points were completely determined by the problem/seed for the problem instance and I'm not sure why that has changed.

It may be this weekend before I get a chance to look at it.

-- Mike

### Re: relative error tolerance results in inconsistent error checking

The old-style answer checker takes these parameters:

```perl
function_cmp($correctFunction, $var, $llimit, $ulimit,
             $relpercentTol, $numOfPoints, $zeroLevel, $zeroLevelTol)
```

Math Objects (which now lie underneath function_cmp) have the same parameters with, I assume, the same names. Upping the $zeroLevelTol and maybe the $zeroLevel might help in this situation.
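My understanding of how those two parameters interact, sketched in Python rather than PG (the branch logic and default values here are assumptions, not checked against the PG source): when the correct value at an evaluation point is smaller than $zeroLevel, the checker switches from a relative comparison to an absolute one with tolerance $zeroLevelTol, which is why raising those values can rescue points that land near a common zero.

```python
def values_match(correct, student,
                 rel_tol=0.001,       # relative tolerance (relpercentTol / 100)
                 zero_level=1e-14,    # below this, treat the correct value as "zero"
                 zero_level_tol=1e-12):
    # Hypothetical sketch of the zeroLevel logic, not the actual PG code.
    if abs(correct) < zero_level:
        # near zero, compare absolutely so the relative error cannot blow up
        return abs(student - correct) < zero_level_tol
    return abs(student - correct) / abs(correct) < rel_tol

# Values from Gavin's example near x = ln(4): both answers are nearly zero,
# so with the defaults the relative branch fires and the check fails...
assert not values_match(3.77e-4, 2.0e-4)

# ...but a larger zero_level routes the comparison to the absolute branch
assert values_match(3.77e-4, 2.0e-4, zero_level=1e-3, zero_level_tol=1e-3)
```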