There have always been strong recommendations to use
$showPartialCorrectAnswers = 0;
with templates for sets of True/False or Matching tasks (and to restrict the number of submissions with an appropriate entry in a DEF file).
Is there a way to also inhibit display of partial credit score until 100% success or number of tries is exhausted?
If not, then I suggest adding it to the current stack of feature-requests --- at the bottom because I'm unsure whether it would aid or hinder student attempts at self-assessment of their understanding. If it were there, then I could experiment with its effect on student response. On the other hand, I usually need to explain the distinction between "score for this attempt" and "overall recorded score" --- perhaps a further complication is unwarranted.
++++++++++
In a slightly different vein:
Seeing "Note: You can earn partial credit on this problem" displayed within a problem made me realize that I have forgotten how to inhibit partial credit on a problem. Is there a standard grader, or system variable, which lets me do that?
If you don't want to give students partial credit, use the "std_problem_grader" grader.
You can use this in a problem with:
install_problem_grader(~~&std_problem_grader);
See e.g. http://webwork.maa.org/wiki/WeightedGrader
Thanks for the link to a description of several ways to weight multiple responses. For all or nothing scoring, std_problem_grader will suffice.
On the other hand, I disagree with the idea (first bullet on the linked page) that all-or-nothing scoring is appropriate for collections of matching or True/False tasks. I agree with providing minimal feedback (e.g., "at least one of your answers is NOT correct") before a student is done (all correct or tries exhausted), but I am willing to give some partial credit. That is, I would like to distinguish minimal feedback while a student's thinking is still in progress from a percentage score once the student is done.
That, however, leads to complications in combining scores for separate attempts into an overall score --- consider a problem with 5 parts and 3 allowable attempts: [c=correct, w=wrong]
part:    A B C D E
try 1:   c c w w w
try 2:   c w c c w
try 3:   w c w w w

correct at least once        = 4
maximal correct in a try     = 3
correct on majority of tries = 2
correct on all tries         = 0
Bottom Line: although I would like to award some partial credit in this scenario, only "maximal correct in a try" seems a good alternative to "all-or-nothing", but later thoughts should be given more credence than earlier --- perhaps STET (i.e., use std_problem_grader) is an acceptable & simpler compromise.
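To make the comparison concrete, here is a small Python sketch (illustration only, not WeBWorK/PG code; the function names are mine) that computes each of the four aggregation schemes above from the try-by-part table:

```python
# Rows = tries, columns = parts A-E; True = correct, False = wrong.
# This reproduces the 3-try, 5-part example from the discussion.
tries = [
    [True,  True,  False, False, False],  # try 1
    [True,  False, True,  True,  False],  # try 2
    [False, True,  False, False, False],  # try 3
]

def correct_at_least_once(tries):
    # A part counts if it was correct on any try.
    return sum(any(col) for col in zip(*tries))

def maximal_correct_in_a_try(tries):
    # Best single attempt: the try with the most correct parts.
    return max(sum(row) for row in tries)

def correct_on_majority_of_tries(tries):
    # A part counts if it was correct on more than half the tries.
    n = len(tries)
    return sum(sum(col) > n / 2 for col in zip(*tries))

def correct_on_all_tries(tries):
    # A part counts only if it was correct on every try.
    return sum(all(col) for col in zip(*tries))

print(correct_at_least_once(tries))         # 4
print(maximal_correct_in_a_try(tries))      # 3
print(correct_on_majority_of_tries(tries))  # 2
print(correct_on_all_tries(tries))          # 0
```

Running this on the example table reproduces the four tallies (4, 3, 2, 0) given above.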
I've been sitting on my hands out of a desire not to be polemical, but I'm giving up.
My contention is that collections of matching or True/False tasks are almost always bad homework problems. (And they are certainly problems that don't use the power of WeBWorK as a pedagogical tool well.)
This comes from my belief that the _point_ of a homework problem is to give students an opportunity to learn and practice skills. The point of a matching task, on the other hand, is to test whether students have successfully mastered a skill (or a set of skills). Thus matching problems make sense on a test or quiz, but not as homework exercises.
Obviously things are not quite that black and white, but I think they're close.
(Note: if we had the power to make our students behave as we wish they would, this might be different. But we don't have that power, even with tools like WeBWorK.)
-Hal
Hi,
This may not be exactly what you want, but it is possible to give incremental credit and withhold feedback. For example, you can use the custom problem grader custom_problem_grader_fluid in PGgraders.pl (the newest version is necessary; see http://webwork.maa.org/viewvc/system/trunk/pg/macros/PGgraders.pl?view=log). An example is given at
http://webwork.maa.org/wiki/PopUpListsLong
and the relevant portion of code is:
loadMacros(
    "PGstandard.pl",
    "PGchoicemacros.pl",
    "PGgraders.pl",
);

install_problem_grader(~~&custom_problem_grader_fluid);
$ENV{'grader_numright'} = [2,5,7,8];
$ENV{'grader_scores'} = [0.1,0.6,0.8,1];
$ENV{'grader_message'} = "You can earn " .
    "10% partial credit for 2 - 4 correct answers, " .
    "60% partial credit for 5 - 6 correct answers, and " .
    "80% partial credit for 7 correct answers.";
$showPartialCorrectAnswers = 0;

ANS( ... );
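For readers unfamiliar with the fluid grader, here is a rough Python sketch of the threshold logic (not the actual PG implementation; it assumes the grader awards the highest score whose number-correct threshold is met, matching the grader_message in the example above):

```python
# Threshold lists from the PG example above:
# at least 2 correct -> 10%, 5 -> 60%, 7 -> 80%, 8 (all) -> 100%.
NUMRIGHT = [2, 5, 7, 8]
SCORES = [0.1, 0.6, 0.8, 1.0]

def fluid_score(num_correct, numright=NUMRIGHT, scores=SCORES):
    """Return the highest score whose minimum-correct threshold is met.

    Below the first threshold the score is 0.
    """
    score = 0.0
    for threshold, s in zip(numright, scores):
        if num_correct >= threshold:
            score = s
    return score

for n in range(9):
    print(n, fluid_score(n))
# 0 and 1 correct -> 0.0; 2-4 -> 0.1; 5-6 -> 0.6; 7 -> 0.8; 8 -> 1.0
```

With $showPartialCorrectAnswers = 0, the student sees only the overall score computed this way, not which individual answers were right.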