# Automated Problem Checking

From WeBWorK


## Revision as of 07:53, 13 June 2008

#### Report from the AIM-WeBWorK meeting August 2007

##### Auto-checking of WeBWorK Problems

Mike has a tool that runs a WeBWorK problem through a limited PG environment to check that it compiles properly; we assume its existence in what follows. Given that, a reasonable auto-checker would be one that:

* Checked the input problem for a large number of (say, 1000) different seeds to verify that it had no compile-time errors;
* Generated a hardcopy version of the problem once, to verify that there are no hidden errors in the TeX generation for the problem; and
* Reported any errors along with, perhaps, a summary of the variation found in the problem. For example, count the different problem solutions that are generated in the course of the 1000 different seeds and report the distribution. For formula answers this might be done by counting the number of unique formulas generated and the number of times each appeared. We aren't concerned with presenting what answer is produced, just with giving a sense of the extent to which the problem is actually varying.
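The checker described above could be sketched as a simple seed loop. This is only an illustration, not the actual tool: `render_problem` is a hypothetical stand-in for invoking the limited PG environment, and the answer it returns is fabricated so the example runs on its own.

```python
from collections import Counter

def render_problem(seed):
    """Hypothetical stand-in for compiling a PG problem with a given seed.
    A real checker would invoke the limited PG environment here; this stub
    derives a deterministic fake 'answer' from the seed so the loop has
    something to tally. A compile-time error would raise an exception."""
    return f"x^{seed % 7 + 1}"

def check_problem(num_seeds=1000):
    """Run the problem for many seeds, collecting any errors and the
    distribution of distinct generated answers."""
    errors = []
    answers = Counter()
    for seed in range(num_seeds):
        try:
            answers[render_problem(seed)] += 1
        except Exception as exc:
            errors.append((seed, str(exc)))
    return errors, answers

errors, answers = check_problem()
print(f"{len(errors)} errors; {len(answers)} distinct answers over 1000 seeds")
for ans, count in answers.most_common():
    print(f"  {ans!r}: {count} seeds")
```

The report at the end is exactly the kind of summary proposed: not the answers themselves, but how many distinct ones appeared and how often, which gives a quick sense of whether the problem is actually varying with the seed.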