Some of my students are having correct answers marked wrong.
For the problem in question, my latest attempt to fix this is to include this block in the setup:
Context()->flags->set(
    tolerance    => 0.001,
    tolType      => "relative",
    zeroLevel    => 1E-7,
    zeroLevelTol => 1E-7,
);
I'm mostly comfortable using relative error to compare answers, but being nudged to increase zeroLevel again and again makes me uneasy.
I understand that WW evaluates the student's expression and the correct one at several random test points and compares the results. Usually the test points produce values far enough from 0 that the relative error comes out small and all is well. But some unlucky combinations produce values around 1E-8, and for some of these the relative error exceeds any tolerance I would be comfortable allowing.
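To make the failure mode concrete, here is a small sketch (plain Python, not WeBWorK's actual code) of how relative error blows up when a test point happens to land near zero, even though the two answers differ only by round-off:

```python
# Illustration only: relative error between two nearly equal values
# that both happen to be close to zero at an unlucky test point.
def rel_err(student, correct):
    return abs(student - correct) / abs(correct)

correct = 1.0e-8            # value of the correct answer at the test point
student = correct + 3e-10   # same answer, off only by round-off noise

print(rel_err(student, correct))  # about 0.03 -- far above a 0.001 tolerance
```

The absolute difference here (3E-10) is negligible, yet the relative error is about 0.03, thirty times my tolerance of 0.001.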
- What do the flags "zeroLevel" and "zeroLevelTol" really mean and do?
- Is there a simple standard way to accept an answer that has either a small relative error or a small absolute error?
- Do you know a better question I should be asking? If so, please share it ... and the answer, too, if known.
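For the second question, the acceptance test I have in mind is roughly the following (again plain Python, not PG; the function name and defaults are mine):

```python
def close_enough(student, correct, rel_tol=0.001, abs_tol=1e-7):
    """Accept if EITHER the absolute error or the relative error is small."""
    abs_err = abs(student - correct)
    if abs_err <= abs_tol:              # small absolute error near zero
        return True
    return abs_err <= rel_tol * abs(correct)  # otherwise small relative error

print(close_enough(1.03e-8, 1.0e-8))  # True: rescued by the absolute test
print(close_enough(1.1, 1.0))         # False: both tests fail
```

This is similar in spirit to Python's math.isclose(a, b, rel_tol=..., abs_tol=...), which also passes a comparison when either criterion is met. Is there a standard flag or idiom in WW that does the same thing?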