I know this topic has been discussed before, but we're trying to do something slightly different from what I've read about.

Specifically, my colleague Mike Price is trying to implement placement exams on WeBWorK in such a way that, after taking the exam, the student can find out both how he or she has been placed and what topics he or she needs to review or study.

This means we need a bit of code that will analyze the detailed results of the exam (not just the total right or wrong, but which problems).

Of course this data is in the system, so this should be possible. What I'd like to do is the following:

1) have the students take the exam of (let's say) 20 questions.

2) Then have them click on a link (which will really be another WeBWorK problem) that tells them how they placed and what to review. For this to work, the PG code in the problem they have just clicked on will have to be able to retrieve their vector of scores from the first 20 questions (and then do some analysis, but the analysis is easy to program).

Has anyone done anything like this, or does anyone have any idea how to extract this vector of scores from within a WeBWorK problem?

thanks, Hal

I don't know of anyone who has done this (our placement test combines the score on the test with some external factors, and we therefore just don't report a placement to students---they get the placement when they meet with an advisor).

Can you give an example of what information you would like to have students see when they finish the exam? Are the placement and review recommendations a function of their scores on different subsets of the problems on the test?

The problem scores are calculated in the GatewayQuiz Content Generator; when submit is pressed, an array @probStatus is created that gives the scores on each of the problems. So these data are in the code already, and it shouldn't be too hard to push that out to generate an appropriate placement message in some manner or another.

Getting a better picture of the goal will probably help figure out if this should be a general solution or a one-off code edit here.

Thanks,

Gavin

Dear Gavin,

Thanks for the interest. What we are trying to do is the following:

We're on the quarter system. We have two remedial courses we offer (some students need both, some students just need the more advanced remedial course, and of course many students need neither). Then we have "college algebra," pre-calculus, and calculus, in that order, all of which are for credit.

So we'd like to do two things:

1) tell students whether they should take elementary algebra or intermediate algebra (both remedial), college algebra, pre-calculus or calculus.

and

2) tell them (in case they are motivated to do remedial work on their own) what areas they need work in to place into the lowest level credit bearing course (college algebra).

Furthermore, we want to make the test and the diagnostics self-contained (without requiring the intervention of an advisor) so that a high school senior (or better yet a high school junior) can take the test and find out what he or she is ready for at our university.

So the goal is to have the results reported to the test-taker (along with suggestions for remediation) automatically and immediately.

Thanks, Hal

Thanks. So I'm guessing you have an N question placement test, with problems on the test divided into up to 5 different topics (elementary algebra, intermediate algebra, etc.). Then students' scores on these different topics form an (up to) five-dimensional input to the placement function, which generates a placement (the course name) and recommended review topics (which could be empty). Is that a reasonable facsimile of your situation?
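To make the idea concrete, here is a sketch (in Python rather than PG, just for illustration) of the kind of placement function described above: topic subscores go in, and a course placement plus a possibly empty review list come out. The topic names, course ladder, and the 0.7 mastery cutoff are all invented for this example.

```python
CUTOFF = 0.7  # assumed mastery threshold per topic (invented)

# Topics in increasing order of difficulty, each paired with the course
# a student places into once every topic up to that point is mastered.
LADDER = [
    ("elementary algebra", "Intermediate Algebra"),
    ("intermediate algebra", "College Algebra"),
    ("college algebra", "Pre-Calculus"),
    ("pre-calculus", "Calculus"),
]

def place(scores):
    """scores: dict mapping topic name -> fraction correct (0..1).
    Returns (course placement, list of topics to review)."""
    placement = "Elementary Algebra"  # default: bottom remedial course
    for topic, next_course in LADDER:
        if scores.get(topic, 0.0) < CUTOFF:
            break
        placement = next_course
    review = [t for t, _ in LADDER if scores.get(t, 0.0) < CUTOFF]
    return placement, review

course, review = place({"elementary algebra": 0.9,
                        "intermediate algebra": 0.5,
                        "college algebra": 0.8,
                        "pre-calculus": 0.2})
print(course, review)  # -> Intermediate Algebra ['intermediate algebra', 'pre-calculus']
```

The same shape of function could be written in PG/Perl once the score vector is in hand; the hard part of the original question is getting the scores, not the analysis.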

Do you use WeBWorK to give tests in other contexts (e.g., in classes)?

Thanks,

Gavin

Dear Gavin,

That's about right for our current placement system (which does not use WeBWorK). [It is actually 4 levels of questions - no score places you into the bottom remedial course, and then the 4 groups of questions are to place you above that.]

I'd like the dimension of the score array on our new test to be higher than the number of placement levels - ideally the components should correspond to the topics on which we are testing, so that we can give students feedback on what should be learned/reviewed that is as precise as possible.

The limitations there are not technical ones though, but rather pedagogical.

We haven't been using WeBWorK to give tests in classes. It is something we've been considering, but proper proctoring is a bigger issue there. I think our university probably has the resources to do this sensibly (the proper proctoring, that is) but the logistical issues are significant.

It has occurred to me that the right way to solve our problem might be to simply have a web link the student logs into which runs a program that uses wwdb or something like that to pull the placement exam results out of the database and then analyzes them.

Hal

For what it's worth, we also use WeBWorK to administer our placement exam, and our placement exam is internally divided into 5 parts covering various areas of high school mathematics. A placement code is computed using a slightly complex nested if-then condition on the scores in the various subsections which tells a student what math courses he or she is ready for. This sounds like your situation.

We set up a WeBWorK course with a large number of student accounts for administering the actual test. A student first logs into an external web site (powered by PHP) which does the following: (1) if a student has never logged in before, it associates the student with one of the unused WeBWorK accounts and provides a link for logging into the WeBWorK course. (2) if a student has logged in before, it reports the results (placement codes) for all previous graded attempts at taking the placement exam. A grading program is run which grabs the problem vectors directly from the MySQL database that WeBWorK is using, and there is a link on the page allowing the student to invoke the grading program explicitly.
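For readers who want a picture of what such a grading program does, here is a minimal sketch (Python, with sqlite3 standing in for MySQL so the example is self-contained). The table and column names follow WeBWorK's usual `<course>_problem_user` layout, where `status` is the fractional score recorded per problem, but they are assumptions and should be checked against your own installation's schema.

```python
import sqlite3

# In-memory stand-in for the WeBWorK MySQL database; the table name
# "placement_problem_user" and its columns are assumed, not verified.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE placement_problem_user
                (user_id TEXT, set_id TEXT, problem_id INTEGER,
                 status REAL)""")
conn.executemany(
    "INSERT INTO placement_problem_user VALUES (?, ?, ?, ?)",
    [("student1", "PlacementTest", 1, 1.0),
     ("student1", "PlacementTest", 2, 0.0),
     ("student1", "PlacementTest", 3, 1.0)])

def score_vector(conn, user, set_id="PlacementTest"):
    """Return the student's per-problem scores, ordered by problem number."""
    rows = conn.execute(
        """SELECT status FROM placement_problem_user
           WHERE user_id = ? AND set_id = ?
           ORDER BY problem_id""", (user, set_id))
    return [r[0] for r in rows]

print(score_vector(conn, "student1"))  # the raw vector the grader analyzes
```

The real script would connect to the live MySQL database instead, then feed the vector into the nested if-then placement logic.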

This is workable. On the other hand, it would certainly be nice if the placement code could be computed from within WeBWorK and reported to the student immediately after finishing the test. Some students have a lot of trouble figuring out how to get their placement code after finishing the test (i.e., by returning to the original site.) Our secretaries have become adept at explaining this.

Actually, WeBWorK already allows for configurable problem graders. It seems to me that a configurable assignment grader is an obvious generalization of this, but I'm sure this is not high on anyone's to-do list.

Regards,

Bob Byerly

Dear Bob,

That's interesting. That sounds a lot like what I'm imagining might be the easiest to implement, though not necessarily the ideal.

How would you feel about sharing the code for your grading program (and maybe for the external web page)?

thanks, Hal

A sneaky trick that might be somewhere between embedding the placement algorithm in WeBWorK and having it completely outside might be to have a link in the test that points to a pop-up that gives the placement results.

The simplest way to do this would be to have it embedded in the solution or answer message in the first problem on the test. It's harder to get it to show up at the top of the page, which is where one would want it, but that would be relatively easy to hack if one wanted to do it on a one-off basis. I'm thinking of something like a javascript window.open() call to pull up a window that contains the results of something like Bob's placement php script. In the test we have access to the student's scores on the test, so we could embed those in the query string sent to the script, but it's probably just as easy to have the script do the score lookup on its own.

To embed the link at the top of the page would require that we edit the GatewayQuiz Content Generator, I think. Maybe something like

```perl
if ( $set->id eq 'PlacementTest' ) {
    print CGI::a( { -href => $diagnosticURL }, "Click here for your placement." );
}
```

If that sounds useful it should be easy enough to iron out the details to do it a little more gracefully than that.

Gavin

Basically, I would like the students to see two things when they finish the test: (1) their score, and (2) a list of topics that they should study, based on their incorrect answers to the test questions. To make things easy, I could associate a topic or two with each exam question.
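That per-question topic tagging is easy to sketch (again in Python for illustration, not PG; the question numbers and topic names are invented): map each question to its topics, then collect the topics attached to missed questions.

```python
# Hypothetical question -> topic(s) tagging, one or two topics each.
TOPICS = {
    1: ["fractions"],
    2: ["linear equations"],
    3: ["exponents", "radicals"],
    4: ["factoring"],
}

def study_list(scores):
    """scores: dict mapping question number -> 1 (correct) / 0 (incorrect).
    Returns the de-duplicated topics attached to missed questions."""
    topics = []
    for q, score in sorted(scores.items()):
        if score < 1:
            for t in TOPICS.get(q, []):
                if t not in topics:
                    topics.append(t)
    return topics

print(study_list({1: 1, 2: 0, 3: 0, 4: 1}))
```

The score dict here is exactly the per-problem vector discussed earlier in the thread, so the only missing piece is still getting that vector out of WeBWorK.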

Any ideas you have would be great.

--Tim