Automated Problem Checking

From WeBWorK_wiki
Latest revision as of 12:22, 16 June 2021

This article has been retained as a historical document. It is not up-to-date and the formatting may be lacking. Use the information herein with caution.

Report from the AIM-WeBWorK meeting August 2007

Auto-checking of WeBWorK Problems

Mike has a tool that runs a WeBWorK problem through a limited PG environment to see whether it compiles properly. We assume the existence of this tool in what follows. Given that, a reasonable auto-checker would be one that:

a) Checks the input problem with a large number of different seeds (say, 1000) to verify that it has no compile-time errors;

b) Generates a hardcopy version of the problem once, to verify that there are no hidden errors in the TeX generation for the problem;

c) Reports any errors along with, perhaps, a summary of the variation found in the problem. For example, count the distinct problem solutions generated over the 1000 seeds and report the distribution. For formula answers, this might be done by counting the number of unique formulas generated and the number of times each appeared. We aren't concerned with presenting the answers themselves, just with giving a sense of how much the problem actually varies.
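The bookkeeping in steps (a) and (c) can be sketched as follows. This is not Mike's tool; `render` is a hypothetical stand-in for rendering one seed of a problem (raising an exception on a compile-time error), and `toy_render` merely illustrates the seed-to-answer variation being counted:

```python
import random
from collections import Counter

def check_problem(render, seeds=range(1000)):
    """Render a problem under many seeds, collecting any errors
    and the distribution of distinct answers produced (step c)."""
    errors = []           # (seed, exception) pairs from failed renders
    answers = Counter()   # answer string -> number of seeds producing it
    for seed in seeds:
        try:
            answers[render(seed)] += 1
        except Exception as exc:
            errors.append((seed, exc))
    return errors, answers

def toy_render(seed):
    # Toy stand-in for a PG problem: a formula answer that
    # varies with the seed through one random coefficient.
    rng = random.Random(seed)
    a = rng.randint(1, 5)
    return f"2*x + {a}"

errors, answers = check_problem(toy_render)
print(f"{len(errors)} errors; {len(answers)} distinct answers")
for formula, count in answers.most_common():
    print(f"  {formula}: {count} seeds")
```

The report at the end gives exactly the kind of summary described above: no answer values need to be judged correct or incorrect, only tallied, so the author can see whether the randomization is actually producing variation.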