WeBWorK Main Forum

The WW "Statistics" Tool -- what does it really show?

by Ted Cox -
Number of replies: 6
I find that the "Statistics" function in WW is not very useful; in fact, it is misleading. Perhaps the exact meaning of "active student" is the problem, but I find that "The percentage of active students with correct answers for each problem" is always much higher than it should be.

For example, in one set it shows 100% getting problem 1 correct, but when I look in "Student Progress" I find 6 or 7 students who have not gotten it correct. Does this 100% figure come from counting only students who have attempted that particular problem? If not, EXACTLY how is the number computed? When I see 100% correct I think, fine, everyone got it, so I don't work it out in class. But they didn't all get it, and I have missed the chance to explain it.

No matter where it comes from, I don't understand why it is shown instead of the more useful figure: what percentage of your students have REALLY answered the problem correctly, whether they have attempted it or not, or whatever criterion is being used to exclude students from the calculation.

Is it possible to get a patch for this? Or some clues on how to add such a function by customizing the code?

Ted Cox
In reply to Ted Cox

Re: The WW "Statistics" Tool -- what does it really show?

by Chrissy Safranski -
This is a super old thread, but there weren't any replies, and I have this problem, too. The Student Progress feature seems to be working fine, but when I go to Statistics, the first two charts shown are "Percentage of Active Students with Correct Answers" (one is a bar graph and the other is a table of the same values), and they both always show 100% for almost all the problems. That does not at all match what the students have actually done on each problem according to the Student Progress feature.

Is there something I can do to make those charts accurately reflect the percentage of students who have answered the problem correctly?
In reply to Chrissy Safranski

Re: The WW "Statistics" Tool -- what does it really show?

by James Morski -
It looks like it has been a few years since this thread was active, so it seems time to bump it. I echo the statements of Chrissy and Ted. Can anybody shed some light on this topic?
In reply to James Morski

Re: The WW "Statistics" Tool -- what does it really show?

by Danny Glin -
Looking at the code, Ted's assessment appears to be correct: the "Percentage of Active Students with Correct Answers" is based only on students who attempted a given problem. This may be what is meant by "active students", though I tend to think of active students as those who are still enrolled in the course (i.e. have not dropped/withdrawn).

The code for the statistics page can be found at /opt/webwork/webwork2/lib/WeBWorK/ContentGenerator/Instructor/Stats.pm
You can try making modifications there to see if you can tailor it to your purposes. It's not a one-line fix to change the bar graph from ranging over students who attempted a problem to all students, since the number of "active" students is also used in calculating the average number of attempts per student. For a question where almost all students who attempted it got it correct, this number currently tells me how many tries it took them to get it right, which wouldn't be the case if a bunch of students with 0 attempts were factored into the calculation.
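To make the distinction concrete, here is a small illustrative sketch (in Python, not the actual Stats.pm code, which is Perl; the data layout and function name are my own invention). It contrasts the percentage taken over students who attempted a problem with the percentage taken over all enrolled students, and shows why the average-attempts figure also changes:

```python
def problem_stats(records):
    """records: list of (attempts, got_it_correct) tuples, one per enrolled student.

    Returns (percent correct among attempters, percent correct among all
    enrolled students, average attempts among attempters).
    """
    attempters = [r for r in records if r[0] > 0]
    correct = [r for r in records if r[1]]
    pct_of_attempters = 100 * len(correct) / len(attempters) if attempters else 0.0
    pct_of_all = 100 * len(correct) / len(records) if records else 0.0
    avg_attempts = (sum(r[0] for r in attempters) / len(attempters)
                    if attempters else 0.0)
    return pct_of_attempters, pct_of_all, avg_attempts

# 10 enrolled students: 4 solved the problem, 1 tried and failed,
# and 5 never attempted it at all.
records = [(2, True), (1, True), (3, True), (5, True), (4, False)] + [(0, False)] * 5
print(problem_stats(records))  # (80.0, 40.0, 3.0)
```

So the same class produces "80% correct" under the attempters-only reading and "40% correct" under the all-students reading, which is exactly the discrepancy Ted and Chrissy describe. Note that including the five non-attempters in the average-attempts denominator would drop it from 3.0 to 1.5, which is why that change isn't a one-liner.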

One of the issues I see is that different people use this information differently. Looking at the statistics for a set while the set is still open, I would get more useful information from the current graph. Including students who have not yet attempted the problem would cause the average score to be artificially low. In our large classes we also have a number of students who "give up" on the course but do not formally withdraw until later in the term, so even after the deadline they would bring down the average for a question.

I do agree that the information presented can be misleading, since it does not account for students who thought a question was too difficult and thus didn't even attempt it. Any summary statistic is limited in the information it can provide, so switching this to a different calculation would solve that issue but may introduce other confusion.

Whoever originally coded the statistics page set it up the way they did because that was the information they wanted, and this is the way it has been presented for a long time, so there are almost certainly people using WeBWorK who expect that information to be presented. Before changing the statistics page, it's worth asking the question: what is the most meaningful data that can be shown simply on this page?
In reply to Ted Cox

Re: The WW "Statistics" Tool -- what does it really show?

by Thomas Wong -

I have a similar question regarding the statistics shown in WeBWorK.

Is the exact computation of the "success index" shown anywhere? The closest I have found is the description given in the "Getting Started with WeBWorK" document, namely: "it is roughly the inverse of the number of incorrect attempts in a problem set and for a given problem set gives some indication of how difficult a student found the set compared to his peers".

Thank you very much.
In reply to Thomas Wong

Re: The WW "Statistics" Tool -- what does it really show?

by Danny Glin -
I thought that there used to be a description on one of the scoring pages of how the success index is calculated, but I can't find it right now, so it may have been removed at some point.

There is a comment in the code which describes it as follows:
index = ( total_status / total_value )**2 / average_number_of_attempts

Based on a quick look at the code, that seems to match the calculation being done.
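For anyone who wants to reproduce the number by hand, here is a minimal sketch of that formula (in Python rather than the Perl of the actual code; my reading of the comment is that total_status is the points earned on the set and total_value the points possible, which is an assumption on my part):

```python
def success_index(total_status, total_value, avg_attempts):
    """index = (total_status / total_value)**2 / average_number_of_attempts,
    per the comment in the WeBWorK source. Guards against division by zero."""
    if total_value == 0 or avg_attempts == 0:
        return 0.0
    return (total_status / total_value) ** 2 / avg_attempts

# A student who earned 5/10 points with an average of 2 attempts per problem:
print(success_index(5, 10, 2))  # 0.125
```

Squaring the score fraction means partial credit is penalized more than linearly, and dividing by the average attempts is what gives the "roughly the inverse of the number of incorrect attempts" behavior described in the getting-started document.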