I want to point out something else that I have experienced with statistics questions and quartiles. While your class and your textbook may have one specific procedure for calculating them, the resources my students turn to (the internet, software, calculators) use at least four different methods that I've catalogued.
Your data set has eight values. Consider this eight-value data set:
1 1 5 5 5 5 5 5
It looks like you would expect Q1 to be 3, the average of 1 and 5, which is quite a reasonable method. But you should know that even mainstream software like Excel, which has three quartile functions: quartile() [which is deprecated], quartile.inc(), and quartile.exc(), outputs either 2 or 4 for Q1, depending on which function you use. The logic behind those values is going 25% or 75% of the way from 1 to 5.
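If you want to see the competing methods side by side, Python's standard library happens to implement both interpolation schemes, and the split-the-halves textbook method is easy to do by hand. (The mapping to Excel's functions in the comments is my understanding of their behavior, worth double-checking.)

```python
from statistics import median, quantiles

data = [1, 1, 5, 5, 5, 5, 5, 5]

# Textbook "split the halves" method: Q1 is the median of the lower half.
q1_halves = median(sorted(data)[: len(data) // 2])          # (1 + 5) / 2 = 3.0

# Interpolation going 25% of the way from 1 to 5
# (this matches Excel's quartile.exc, as far as I know):
q1_exclusive = quantiles(data, n=4, method="exclusive")[0]  # 2.0

# Interpolation going 75% of the way from 1 to 5
# (matches quartile.inc and the deprecated quartile):
q1_inclusive = quantiles(data, n=4, method="inclusive")[0]  # 4.0

print(q1_halves, q1_exclusive, q1_inclusive)                # 3.0 2.0 4.0
```

So three different answers, each produced by a method a student could legitimately have been taught.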
So students could reasonably (imho) find Q1 to be 2, 3, or 4. Now you have a completely different kind of tolerance issue: the answer is 3, but they enter 2 or 4.
You could deal with this by being very clear with your students about your specifications for finding quartiles. Or do what I did: use parserOneOf.pl and make the answer OneOf("2,3,4"). Then any answer within the usual tolerance of any one of those values is accepted.
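This isn't PG code, but the acceptance logic OneOf gives you looks roughly like the sketch below. I'm assuming a 0.1% relative tolerance as the default; check your Context settings.

```python
def one_of_ok(student, accepted=(2, 3, 4), rel_tol=0.001):
    # Accept the answer if it is within relative tolerance of ANY of the
    # accepted values. The 0.1% relative tolerance is an assumption about
    # the default, not something guaranteed by this post.
    return any(abs(student - a) <= rel_tol * abs(a) for a in accepted)

print(one_of_ok(2.001))   # True: within 0.1% of 2
print(one_of_ok(3.001))   # True: within 0.1% of 3
print(one_of_ok(3.5))     # False: not close to 2, 3, or 4
```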
Not that you asked, but if you are coding statistics questions, here are the other tolerance issues I've catalogued that will require special care. They fall into four categories:
- The quartiles issue
- Conversion of z-scores (and t-scores, etc.) to probabilities (and vice versa) sometimes relies on tables, where excessive rounding happens; sometimes on more accurate decimal values, say from calculators; and sometimes on the normal approximation to the binomial distribution [with or without the continuity correction of 0.5]. Because of this, it can be reasonable for four students to have each done something "right", yet have decimal answers that are slightly out of tolerance from one another. As with quartiles, I just use OneOf to deal with this.
- With regression lines covering a range of x-value data that is far from x = 0, a small rounding error in the slope can result in a large relative error at the x-values near where the data actually is. So with these, I make sure to change the domain of comparison so it surrounds x-bar, rather than using the default [-2,2].
- What should be done when a probability answer is something like 0.9999? With default tolerances, 1 is acceptable. Maybe that is OK, but there is a big conceptual difference between a probability of 1 and a probability of 0.9999. Even worse, 1.0008, which is not even a valid probability, would be acceptable too. [I have not yet come up with a context-based solution for this issue.]
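To make the z-to-probability spread in the second bullet concrete, here is a sketch with made-up numbers (not from the post), using Python's `statistics.NormalDist`: approximating P(X ≤ 55) for X ~ Binomial(100, 0.5) with and without the 0.5 continuity correction already yields two defensible answers, before table rounding even enters the picture.

```python
from statistics import NormalDist

# X ~ Binomial(n=100, p=0.5): mu = 50, sigma = 5 (hypothetical example)
mu, sigma = 50, 5
phi = NormalDist().cdf   # standard normal CDF

# P(X <= 55) via the normal approximation, without and with the
# continuity correction of 0.5:
p_plain     = phi((55   - mu) / sigma)   # z = 1.0
p_corrected = phi((55.5 - mu) / sigma)   # z = 1.1

# A student reading z from a 4-decimal table would report 0.8413 or
# 0.8643; both are "right", but they are far outside tolerance of
# each other.
print(round(p_plain, 4), round(p_corrected, 4))   # 0.8413 0.8643
```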
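The regression bullet is also easy to see numerically. In this toy example (the intercept, slope, and x-bar = 100 are all invented), a slope rounded from 2.345 to 2.35 is nearly invisible on the default comparison interval [-2, 2] but is off by a much larger relative amount near the data:

```python
b0 = 10.7                           # intercept (made-up)
b1_exact, b1_rounded = 2.345, 2.35  # true slope vs slope rounded to 2 places

def exact(x):
    return b0 + b1_exact * x

def student(x):
    return b0 + b1_rounded * x

# On the default comparison interval [-2, 2] the lines nearly coincide:
# the difference is at most 0.01, under 0.1% of y(2) = 15.39.
print(abs(exact(2) - student(2)))

# Near x-bar = 100, where the data lives, the difference is 0.5, about
# 0.2% of y(100) = 245.2, so a relative-tolerance check behaves very
# differently there.
print(abs(exact(100) - student(100)))
```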
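The arithmetic behind that last bullet is easy to verify. A sketch of a relative-tolerance check (again assuming 0.1% relative tolerance as the default) shows both 1 and 1.0008 passing against a correct answer of 0.9999:

```python
correct = 0.9999
rel_tol = 0.001   # assuming a 0.1% relative tolerance as the default

def within_tol(student):
    # The usual relative-tolerance comparison.
    return abs(student - correct) <= rel_tol * abs(correct)

print(within_tol(1.0))      # True: probability 1 is accepted
print(within_tol(1.0008))   # True: accepted, although it is not even
                            # a valid probability
```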