## WeBWorK Problems

by Michael Gallis -
Number of replies: 16
I am trying to write a custom answer evaluator (for ranking tasks) but I am having trouble getting a simple template evaluator to work. MathObjects do not seem suited to this, because I will be evaluating a string response for which there are multiple right answers.

I am trying to build a simple custom answer evaluator template to get things started, but have run into trouble. I started with some (really old?) examples which had some typos/issues, and merged that with a palindrome example (don't remember where that came from...original distro maybe?)

The problem code at the bottom works, but when I try adding a passed correct answer in what seems (to me) to be the obvious way, I get errors.

For example I try

    $test_sub = sub {
        my $CorrectAnswer = shift @_;
        my $in = shift @_;
        etc.

with

    ANS($ans->$test_sub);

I get

    ### Warning messages
    * Error in Translator.pm::process_answers: Answer AnSwEr1:
      Unrecognized evaluator type |AnswerHash| at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1156
    * Error in Translator.pm::process_answers: Answer AnSwEr1:
      Answer evaluators must return a hash or an AnswerHash type, not type || at /opt/webwork/pg/lib/WeBWorK/PG/Translator.pm line 1161

I suspect my problem may be in how I'm trying to use Perl, but I don't have access to any Perl gurus. I'd appreciate any help, or any pointers to working simple examples of custom evaluators that have parameters passed from the question to the evaluating function. For the record, I am using webwork-2.4.1. Thanks in advance for any help.

-Mike Gallis

Functioning example below:

    DOCUMENT();
    loadMacros(
        "PGstandard.pl",
        "MathObjects.pl",
        "PGanswermacros.pl",
    );
    TEXT(&beginproblem);
    $showPartialCorrectAnswers = 1;

    $ans = "Mozart";

    BEGIN_TEXT
    test_sub test: enter $ans $BR
    \{ ans_rule(60) \}
    END_TEXT

    $test_sub = sub {
        my $in = shift @_;
        my $CorrectAnswer = "Mozart";
        my $correctQ = ($in eq $CorrectAnswer) ? 1 : 0;
        my $ansMsg = "the message";
        my $rh_answer = new AnswerHash(
            score       => $correctQ,
            correct_ans => $CorrectAnswer,
            student_ans => $in,
            ans_message => $ansMsg,
            type        => 'custom'
        );
        $rh_answer;
    };

    ANS($test_sub();

    ENDDOCUMENT();

In reply to Michael Gallis

### Re: Custom Answer Checker
by Gavin LaRose -

Hi Michael,

This might be easier if you wanted to use the (newer) MathObjects instead of the old-style answer checkers. There is a wiki page on custom evaluators using MathObjects at http://webwork.maa.org/wiki/CustomAnswerCheckers , which might be a good place to start.

Gavin

In reply to Gavin LaRose

### Re: Custom Answer Checker
by Davide Cervone -

Gavin: Michael is right that trying to use a custom checker with a MathObject string is not the way to go here, because he wants to check arbitrary strings from the student, and MathObjects doesn't have a mechanism for doing that.

The proper way to use MathObjects here would be to define a new Context in which you had new operators for > and = that produced some form of List that could be compared against the correct answer. That is a non-trivial undertaking, and not one for which there is any documentation. I had wanted to do it when Michael originally posted about this nearly a year ago (see http://wwrk.maa.org/moodle/mod/forum/discuss.php?d=337). I see that my response there was truncated (due to an unescaped character, no doubt) and I didn't catch it at the time. I wish I knew what I had said. In any case, I never got back to doing the context for him, and so he is now trying to write his own string-based answer evaluator.

Davide

In reply to Gavin LaRose

### Re: Custom Answer Checker
by Davide Cervone -

Well, it turns out that there is a way to use MathObjects with a custom checker to do this, and you might want to try it out, Michael. Here's the key: you need to modify the Context so that it has a new pattern for matching strings, and that pattern should match anything. This means that the student (and professor) answers will always be packaged up as a String object, but you don't have to have them predefined. It turns out this is not hard to do:

    $context = Context("Numeric");
    $context->strings->{patterns}{'^.*$'} = [-20,'str'];
    $context->update;

will do it. Technically it is better to use

    $context = Context("Numeric");
    $context->parens->clear();
    $context->variables->clear();
    $context->constants->clear();
    $context->operators->clear();
    $context->functions->clear();
    $context->strings->clear();
    $context->{pattern}{number} = '^$';
    $context->variables->{patterns} = {};
    $context->strings->{patterns}{'^.*$'} = [-20,'str'];
    $context->update;

because this will clear out all the other tokens and make for a cleaner pattern, but the -20 in the string pattern means it is searched before everything else, so none of the others really matter.

Once you have this context set up, you can define any string answer using Compute("..."), for example

    ANS(Compute("B>A=D>C")->cmp);

and the student would have to answer that exact string. Note that any input by the user is valid, and so nothing will produce any error messages. (If you want error messages when the student uses wrong variables and so on, then it is possible to use a more limited pattern for the strings, but there are issues about what operators and other things to leave defined in order to get appropriate error messages. Ask for more details if that is what you want to do.)

Now you can apply your own custom answer checker using the checker option of the cmp method. For example:

    ANS(Compute("A>B")->cmp(checker => sub {
        my ($correct,$student,$ans) = @_;
        $correct = $correct->value; $correct =~ s/ //g;
        $student = $student->value; $student =~ s/ //g;
        return $student eq $correct;
    }));

would check the answers while ignoring spaces. The first line gets the correct and student answers, plus a reference to the AnswerHash for the answer checker (there are reasons you might need access to that). Note that the correct and student answers are MathObjects, so to get the Perl string from them, we use their value methods in the second and third lines. The final line returns true when the answer is correct and false otherwise. Of course, you will need to do whatever you need to do to check the validity of your answers (taking into account equal signs and so on).

Hope that does what you want.

Davide

In reply to Davide Cervone

### Re: Custom Answer Checker
by Michael Gallis -

I'm not sure the MathObjects approach above will be sufficient for what I have in mind for the ranking-tasks evaluator. Let me outline what my approach has been.

My first mental prototype is to present the students a graph representing the position of an object as a function of time. The graph consists of intervals of straight line segments, so visual comparison of the slopes of different sections is fairly easy. Students are asked to rank the velocities of the intervals. I do have a variety of other examples in mind that would recycle the ranking-task evaluator part of the code.

On the question-code side, we generate a list of items/keys (each represented by a letter or perhaps a string) with corresponding numerical values (which would be the slopes of the straight-line intervals). To me, this is best represented by a hash. On the student side, they will input text specifying their evaluation of the ranking, much as Davide wrote above. However, if the answer B>A=D>C is correct, then B>D=A>C must also be acceptable. It is this multiplicity of correct answers which may be making this problem more ... interesting? (well, at least to me).
So a rank value (1, 2, etc.) is assigned to each key by taking the list of items/keys, sorting them by value, and then going through the list in order assigning ranks. During the ranking process the rank value is only incremented when the values differ by more than some specified tolerance, since the computer can differentiate values more easily than the students can, and may generate false differences through round-off. This was the part I was struggling with last year (it seems I only get to play with this during winter break), but I now have the code working. The second (easier) part is to parse the students' answer string to assign their ranks to the keys. The student's answer is correct if their ranks are the same as the ranks determined by the question's hash.

I'd like to do all this in the custom evaluator subroutine, to which we pass the question hash, the tolerance, and the student response. I think I can get it to work for myself just using the scope of the variables to grab the hash and tolerances from within the sub, but that seems ... clunky and not good practice. I would eventually like to share this with colleagues if things work out.

I am not asking for this to be done for me; I just need some help opening some doors on the basics of custom evaluators. I think I can see how to adapt the MathObjects approach, but it would mean having a lot more of the legwork code appearing in the question code (and generating the correct answer string within the question code) rather than stuffing all the nitty-gritty into the answer evaluator itself. I'm also not sure how to set the answer message with the MathObjects approach, and I suspect I'll want to put in a whole bunch of hints to the students based upon the text of their response (things like ranking in descending order when ascending was asked for, using symbols not in the list of items to be ranked, and answers that do not parse correctly as rankings would all get different responses).
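The tolerance-based rank assignment described here could be sketched roughly as follows (illustrative Perl only, not code from the thread; the subroutine name `assign_ranks` and its interface are made up):

```perl
# Sketch: assign ranks to the keys of a hash of numerical values,
# advancing the rank only when consecutive (sorted) values differ
# by more than the given tolerance.
sub assign_ranks {
    my ($values, $tol) = @_;    # hash reference of key => value, plus tolerance
    # Sort keys from greatest to least value.
    my @keys = sort { $values->{$b} <=> $values->{$a} } keys %$values;
    my (%rank, $prev);
    my $r = 0;
    foreach my $k (@keys) {
        # Nearly equal values (within $tol of the previous one) share a rank.
        $r++ if !defined($prev) || $prev - $values->{$k} > $tol;
        $rank{$k} = $r;
        $prev = $values->{$k};
    }
    return \%rank;   # e.g. {A=>2, B=>3, C=>1, D=>2} gives {B=>1, A=>2, D=>2, C=>3}
}
```

One caveat with this particular sketch: it compares each value only to the previous one, so a chain of values each within tolerance of its neighbor will all share a rank even if the two ends differ by more than the tolerance.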
Anyway, you can see the "big picture" which generated my original post. I still very much want to play with custom evaluators in a general way, and my biggest stumbling block (I think) has been how to get extra parameters to that evaluator.

Thanks
-Mike Gallis

In reply to Michael Gallis

### Re: Custom Answer Checker
by Davide Cervone -

Certainly I wasn't suggesting that this checker was the solution to your problem. I was only putting the framework in place for you to fill in the routine that you wanted, as an example of how to use MathObjects to handle arbitrary input from a user. The routine you have been trying to write would get the student's answer as a string (and have the correct answer as a known value) and would process both to determine whether the student answer is correct. What I have given you here is a means of doing that through MathObjects. You get the correct and student answers as strings, and then do whatever you want to compare them to see if they match. My example simply removed spaces and compared as strings, but certainly your routine would need to be more sophisticated than that. (I was leaving that to you, since only you know what you want the comparison to do.) I merely intended to give an example into which you could insert your code.

The advantage of using MathObjects in this way is that the details of making the answer checker are handled by MathObjects, and all you really have to concentrate on is checking the student answer against the correct answer (not all the stuff about what you need to pass to ANS(), or what an AnswerHash does). That seems to be a savings to me.

In terms of passing parameters to the checker, that can be done through the parameters to the cmp() method; they will become entries in the $ans hash passed to the custom checker. For example:

    ANS(Compute("A>B")->cmp(
        myHash => ~~%hash_of_values,
        myTolerance => .001,
        checker => sub {
            my ($correct,$student,$ans) = @_;
            my $hashRef = $ans->{myHash};
            my $tol = $ans->{myTolerance};
            ....
            return $isCorrect;   # 1 or 0 for right or wrong
        }
    ));


One could package the checker into a macro file as

    $orderChecker = sub {
        my ($correct,$student,$ans) = @_;
        my $hashRef = $ans->{myHash};
        my $tol = $ans->{myTolerance};
        ....
        return $isCorrect;   # 1 or 0 for right or wrong
    };

(along with the necessary Context code) and then use

    ANS(Compute("A>B")->cmp(
        myHash => ~~%hash_of_values,
        myTolerance => .001,
        checker => $orderChecker,
    ));

in the main problem code, for example. It would also be possible to define something like
    sub ordering_cmp {
        my $order = shift;
        my $hash = shift;
        my $tol = shift;
        return Compute($order)->cmp(
            @_,   # include any other user options
            myHash => $hash,
            myTolerance => $tol,
            checker => $orderChecker,
        );
    }

in the macro file and use

    ANS(ordering_cmp("A>B",~~%hash_of_values,.001));

in your problem, but this is going backward in suggested programming style for PG routines. Better would be:

    sub ordering_cmp {
        my $order = shift;
        return Compute($order)->cmp(
            @_,   # include any other user options
            checker => $orderChecker,
        );
    }

with
    ANS(ordering_cmp("A>B",
        myHash => ~~%hash_of_values,
        myTolerance => .001,
    ));


Those are ways to do it using MathObjects custom checkers. If you want to do hand-coded ones, you could use a routine that returns the CODE reference that is the answer checker, and that code could be a closure over the local namespace of the routine that creates it. For example:

    sub ordering_cmp {
        my $hash = shift;
        my $tolerance = shift;
        return sub {
            my $student = shift;
            ... (your code here using $hash and $tolerance) ...
            my $ans = new AnswerHash(...);
            ... (set $ans return values) ...
            return $ans;
        };
    }

and then use
    ANS(ordering_cmp(~~%hash_of_values,.001));

Note, however, that you have to initialize a lot of stuff in the AnswerHash, such as the LaTeX preview, the student answer, the correct answer, the error messages, and the score, to name a few. (MathObjects takes care of that for you in the other examples above.)
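To give an idea of the bookkeeping involved, a hand-coded evaluator of this kind might fill in the AnswerHash along these lines (a sketch only, assuming the PG environment; the field names follow the AnswerHash conventions used earlier in the thread, and the plain string comparison and message text are placeholders):

```perl
$hand_checker = sub {
    my $student = shift;              # ANS() supplies the student's answer
    my $correct = "B>A=D>C";          # known to the problem (e.g. via a closure)
    my $score   = ($student eq $correct) ? 1 : 0;
    return new AnswerHash(
        score                => $score,
        correct_ans          => $correct,   # shown in the correct-answer column
        student_ans          => $student,   # what the student typed, after cleanup
        preview_text_string  => $student,   # previews shown on submission
        preview_latex_string => $student,
        ans_message          => $score ? "" : "That is not the required ordering",
        type                 => 'custom',
    );
};
ANS($hand_checker);
```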

Davide

### Re: Custom Answer Checker
by Davide Cervone -
I mentioned before that the proper way to use MathObjects for this was really to use a specialized context, and that the string approach I gave above is really just a hack.

I decided I would write the context to help you out, and to see how hard it would be. It took me the morning to do it, but I think the result is pretty much what you need. I have added it to the pg/macros directory (you can get it from the CVS repository) as contextOrdering.pl. See the comments at the top of the file for how to use it. The basic idea is that you can give the orderings either as a string like the student types, or as a list of letter => value pairs (like you describe).

For example:

    loadMacros("contextOrdering.pl");

Context("Ordering");
    $ans = Ordering("D > A = B > C");
    ....
    ANS($ans->cmp);

or
    loadMacros("contextOrdering.pl");

Context("Ordering");
    $ans = Ordering(A => 2, B => 2, C => 1, D => 3);
    ....
    ANS($ans->cmp);

or
    loadMacros("contextOrdering.pl");

Context("Ordering");
%ans = (
A => 2,
B => 2,
C => 1,
D => 3,
);
    $ans = Ordering(%ans);
    ....
    ANS($ans->cmp);

The tolerance is controlled by the context's tolerance setting. You should get reasonable error messages for syntactic problems (like missing operands, or use of undefined letters). If you want additional messages (like for the wrong order), you could use the answerHints macros to post-process the answer when it is incorrect. You can give both helpful hints and partial credit that way. See the pg/macros/answerHints.pl file for more details.
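For instance, answerHints.pl can attach a message to an anticipated wrong answer roughly like this (a sketch; the reversed ordering and the message text here are invented for illustration):

```perl
loadMacros("contextOrdering.pl", "answerHints.pl");

Context("Ordering");
$ans = Ordering("D > A = B > C");

ANS($ans->cmp->withPostFilter(AnswerHints(
    # Anticipated mistake: the same ranking given from least to greatest.
    Ordering("C > A = B > D") =>
        "It looks like you may have ranked from least to greatest; ".
        "rank from greatest to least.",
)));
```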

### Re: Custom Answer Checker
by Davide Cervone -
I have recently updated the contextOrdering.pl file to include warning messages when a letter is used more than once in an ordering, or when the student's ordering doesn't include all the letters in the correct answer. You may want to update your copy if you have already downloaded it.

Davide

### Re: Custom Answer Checker
by Michael Gallis -
Thanks for your efforts on this! I feel as though I've not only gotten some answers, but have gained a co-conspirator. I've only now gotten to download the contextOrdering.pl file, but I am having some trouble. My simple code is (largely cut and paste from above):
DOCUMENT();
loadMacros(
    "PGstandard.pl",
    "MathObjects.pl",
    "contextOrdering.pl",
);

TEXT(&beginproblem);

Context("Ordering");
$ans = Ordering(A => 2, B => 2, C => 1, D => 3);

BEGIN_TEXT
The ordering is $ans $BR
The order is (from greatest to least): \{ ans_rule() \} $BR
END_TEXT
ANS($ans->cmp);

ENDDOCUMENT();

But this generates the errors:

    Can't use string ("Numeric") as a HASH ref while "strict refs" in use at [PG]/lib/Parser/Context.pm line 172
    Died within Parser::Context::getCopy called at line 98 of [PG]/macros/contextOrdering.pl
    from within context::Ordering::Init called at line 83 of [PG]/macros/contextOrdering.pl
    from within main::_contextOrdering_init called at line 296 of [PG]/macros/dangerousMacros.pl
    from within main::loadMacros called at line 6 of [TMPL]/setranking_test/neoranktest1.pg

So it seems it's not loading correctly. Is this a problem with using WeBWorK 2.4.0 versus 2.4.5? I'm pretty excited about the opportunities this new setup will give me, but I don't want to chew up too much of your time.

-Mike Gallis

In reply to Michael Gallis

### Re: Custom Answer Checker
by Davide Cervone -

You are right, this is a version problem. There was a change in the getCopy method of the Parser::Context object that is not in version 2.4.0. The change is backward compatible, so that old problems will run in the new version, but not forward compatible (old systems won't run new problems). There are two possible solutions. The first is to update PG to a later version, say rel-2-4-patches. This can be done without updating all of WeBWorK (just update the pg directory). The two halves of WeBWorK are pretty well isolated from each other, and you should be able to update pg to 2.4.5 while leaving webwork2 at 2.4.0.
Of course, you would want to back up your pg directory, just in case you needed to go back.

The other possibility would be to edit the contextOrdering.pl file and change line 96 from

    my $context = $main::context{Ordering} = Parser::Context->getCopy("Numeric");

to

    my $context = $main::context{Ordering} = Parser::Context->getCopy(undef,"Numeric");

If I remember correctly, that would work with the older version. Give it a try. I'm not sure if there is anything else new that I used in the Ordering context, as I can no longer remember exactly when each new feature was added. I do tend to use the most current features, however, so you might find that you will need to update before the new context works for you.

Davide

In reply to Davide Cervone

### Re: Custom Answer Checker
by Michael Gallis -

Sweet!!! With the CVS updates, everything works like a charm (I've always installed from tarballs in the past, so that was also a new experience for me). I've used the code snippet to play, and everything works as I'd expect. I even got to play with the tolerance settings and got what I expected. Hopefully I'll get to set up a few of the ranking-task questions I've been thinking about over this weekend. Should I post any follow-up here?

Not to be greedy (I'm very impressed with what you've "thrown together over a weekend"), but would it be possible to use strings for the keys instead of a single character? It occurred to me that I might want students to assess kinetic and potential energies at different points, so intuitive labels might be KE1, PE1, KE2, PE2, etc. If you don't have the time, I most certainly understand.

With much gratitude,
Mike Gallis

In reply to Michael Gallis

### Re: Custom Answer Checker
by Davide Cervone -

Yes, CVS is pretty cool.

As for the other labels, you can do that now. There are only two places where the Ordering context assumes the values are single letters.
The first is when you specify the order by a string passed to Ordering(), and I have updated the contextOrdering.pl file to allow multi-letter labels; use CVS to get the latest copy.

The other is in the error messages, which all refer to "letters". You could either make a local copy of contextOrdering.pl and edit the error messages in it (not recommended), or make a separate macro file (say Ordering.pl, without the "context" prefix) that contains the following:

    loadMacros("contextOrdering.pl");

    sub _Ordering_init {
        $context{Ordering} = Parser::Context->getCopy("Ordering")
            unless defined $context{Ordering};
        my $context = $context{Ordering};
        $context->{error}{msg}{"Missing operand before '%s'"} = "Missing label before '%s'";
        $context->{error}{msg}{"Missing operand after '%s'"} = "Missing label after '%s'";
        $context->{error}{msg}{"Operands of %s must be letters"} = "Operands of %s must be labels";
        $context->{error}{msg}{"Each letter may appear only once in an ordering"} =
            "Each label may appear only once in an ordering";
        $context->{error}{msg}{"Your ordering should include all the letters"} =
            "Your ordering should include all the labels";
    }

    1;

and load that macro file instead of contextOrdering.pl. This will replace the error messages that refer to "letters" with ones that refer to "labels" instead. Use whatever word best represents your situation.

Note that if you change the name of the file you will also need to change the name of the _init routine to correspond to the new name.

Hope that gets you what you need.

Davide

PS, I would start a new thread for follow-ups. This one has gotten pretty long and complicated.

### Re: Custom Answer Checker
by Davide Cervone -
It's been a long time since I wrote an answer checker that was a raw subroutine (the modern approach is to use an AnswerEvaluator object). The error message says that the thing you have passed to ANS() is neither a CODE reference nor an AnswerEvaluator object, so we need to look at your ANS() call for the problem.

Unfortunately, I think you have not copied the code correctly, because that line is

    ANS($test_sub();

which is a syntax error, so would not have run. I suspect that you actually had

    ANS($test_sub());

in your code, which is not correct. Here, you have called the $test_sub routine, passing it no parameters, and the result of the call is being passed to ANS(). That result is an AnswerHash object, which corresponds to the error message ("unrecognized evaluator type |AnswerHash|"). The correct form would be

    ANS($test_sub);

which passes the CODE reference itself to ANS() rather than the result of calling the code. That is what you want to do, as the code will be called later when the problem is graded.

Hope that helps.

Davide

### Re: Custom Answer Checker
by Michael Gallis -
Sorry,

Yes, this part I have working. I was trying some changes and didn't back out of all the edits before pasting the code into the board. The difficulty I am having is passing (extra) parameters to the custom evaluator code. I've tried things like

    ANS($ans->$test_sub);

then

    ANS(test_sub($ans));

which barfed more severely, so I changed the top of the sub def to

    sub test_sub {
        my $CorrectAnswer = shift @_;
        my $in = shift @_;

with

    ANS(test_sub($ans));

which also gave me the errors described in my original post. So I guess I don't understand what ANS() is really looking for and what it does with what it gets.

I was guessing that it called its argument and appended the student response to the routine's @_, and that it was looking for a hash (specifically an AnswerHash?) as the return value. I tried adapting what I saw in both the old docs and the custom-evaluator page in the wiki that Gavin referred to, to no avail. I suspect the real approach is much more object oriented, which I struggle with, being born and raised on FORTRAN many moons ago.

I would like to be able to play around with some simple custom evaluators but I haven't yet seen an example that I can get to work where additional parameters are passed to the subroutine. I'm sorry if I am a bit dense on this.

I am trying to do a ranking tasks evaluator, I think there is a better post in this thread to follow up on with the details.

-Mike Gallis

### Re: Custom Answer Checker
by Davide Cervone -
    ANS($ans->$test_sub);

would work only if $test_sub were a reference to a subroutine that returned either a CODE reference or an AnswerEvaluator object, and that took the $ans as an argument. But your $test_sub is a reference to a subroutine that returns an AnswerHash, which is not the correct type of object. Your code

    ANS(test_sub($ans));

would only work if test_sub were a defined subroutine that returned a CODE reference or an AnswerEvaluator (and used $ans to construct it). This is like the ordering_cmp routine I defined in my message above. You don't have a test_sub routine, you have$test_sub, which is a scalar variable holding a reference to an anonymous (i.e., unnamed) subroutine. You could try
   ANS($test_sub($ans));

but this is equivalent to your first example; it calls $test_sub passing it$ans as a parameter, and takes its return value and passes that to ANS().

Your third attempt, where you define test_sub, at least makes ANS(test_sub($ans)) run, but it has the same problem as the previous examples in that test_sub returns the wrong type of value.

What ANS() is looking for is either an AnswerEvaluator object or a CODE reference to the routine to be called to check the answer. What it does with it is store it in a hash (along with the label for the answer blank it is associated with) so that when the problem is submitted by the student, that evaluator or code reference can be called. It is passed the student's answer and the label for the answer blank where the answer was entered. It is not called at the time ANS() is executed; that happens much later, after the problem has been completely processed.

If you are passing a CODE reference (like the result of a sub command), then your code will be called with exactly two items: the student's answer and the answer-blank label. There is no way to pass it additional data when it is called. The way you give it additional data is to build it into the code itself (via a closure, for example), or have it use the answer label as an index into an array of data to be used, or use global variables. That certainly is a limitation of the code-reference approach, and (I suspect) one of the reasons for the development of the AnswerEvaluator object. That is documented in the pg/lib/AnswerHash.pm file (along with its associated AnswerHash structure), but it is complicated, and is based on lists of filters that you attach to the AnswerEvaluator. These are themselves code references, together with data that gets passed to them. I, personally, find them hard to manage, and am glad that I have packaged up most of the details in the MathObjects cmp method, where I don't have to worry about it any more.

Both approaches require that you return an AnswerHash object. This encodes the results of checking the student's answer against the correct one, and includes things like the preview string, the evaluated answer, the correct answer string, the error messages, the score, and so on. The AnswerHash is not the evaluator itself; it is the result of the evaluation of the answer. The AnswerEvaluator (or code reference) is the actual answer checker, and is what ANS() needs to be given.

Hope that helps clear things up.

Davide

In reply to Davide Cervone

### Re: Custom Answer Checker
by Michael Gallis -

Maybe one day, when I have a LOT more time, I'll go diving back into the brute-force method. As it stands, I have the following "Hello World" type template for prototyping, which seems to work and gives me a starting point for future play:

DOCUMENT();
loadMacros(
    "PGstandard.pl",
    "MathObjects.pl",
    "PGanswermacros.pl",
);
TEXT(&beginproblem);
$showPartialCorrectAnswers = 1;

$context = Context("Numeric");
$context->strings->{patterns}{'^.*$'} = [-20,'str'];
$context->update;

$desired_response="Mozart";$extra="Wolfgang";

BEGIN_TEXT
test_sub test: enter $desired_response $BR
\{ans_rule(60) \}
END_TEXT

$checker_template = sub {
    my ($correct, $student, $ans) = @_;
    my $xtra = $ans->{myXtra};
    my $fullcorrect = Compute($correct.$xtra);
    if ($fullcorrect == $student) {
        return 1;
    } else {
        return 0;
    }
};

ANS(Compute($desired_response)->cmp(
    myXtra  => $extra,
    checker => $checker_template
));

ENDDOCUMENT();

This was pared down from your several helpful suggestions. It shows me how to get extra parameters to the custom checker, along with a few other simple aspects.

I've just started trying your new ordering context, but I'll place those comments/questions in a more appropriate place in this thread.

-Mike