Thanks, Ken, for bringing up a very important topic. As I see it, the creation
and distribution of good problems has a number of aspects:
1. Conceive of an educationally appropriate mathematics question for
the course. -- (or take one of the existing problems in books, one's
earlier tests, etc.)
2. Write the PG code which displays this problem and checks the answer.
-- (or find a similar one in the library) -- If it's a new problem or a
significant modification of an old one, add the problem to the bank of
problems.
3. Make sure that the code is debugged -- in particular that it checks
the answer correctly in all situations. -- Transmit any corrections
back to the bank of problems. (Double check that these are really
correct corrections. :-) )
4. Revise the wording of the problem to make it more effective. --
Transmit revisions back to the bank of problems (as a new version of
the problem perhaps? or as a "debugged" version of the old one?)
5. Write a new problem inspired by the existing one. -- Enter this into
the data bank (as a new problem? a variant of an old one? a variant of
a combination of several existing problems?)
Aspect 1 is, and always has been, the central part of creating good
problem sets, and software doesn't significantly change this aspect of
the process, except perhaps insofar as it makes certain questions
infeasible to ask and check, or, conversely, makes feasible questions
that previously were not.
Aspect 2: Coding a new problem requires some new skills, but we're
actively working to minimize the amount of work required to go from a
good mathematics question, clearly expressed on paper, to a good PG
problem which can be asked and checked over the web. At the very least we
want to be able to ask all of the types of questions we asked earlier
with pencil and paper (and checked by hand if the class was small
enough) and perhaps in some cases ask questions that were previously
not feasible. All in all, WeBWorK and the PG language seem to pose few
barriers for asking and checking questions relevant to pre-calculus,
calculus and other similar courses. The learning curve for writing
these problems is still steeper than I would like but it's getting
better. Adding a problem to the data bank is easy. Deciding whether the
problem is new or a significant modification of an existing one will
probably require a human editor. Good indexes will help this editor, and
will also help in discovering whether a particular mathematical question
has already been coded.
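To give a rough sense of scale, here is a sketch of what a minimal PG
problem of this kind looks like; the particular macro files, numbers,
and answer checker are merely illustrative:

    DOCUMENT();        # the first executable line in every PG problem

    loadMacros(
      "PG.pl",
      "PGbasicmacros.pl",
      "PGanswermacros.pl"
    );

    TEXT(beginproblem());

    $a = random(2,9,1);    # each student sees different coefficients
    $b = random(2,9,1);

    BEGIN_TEXT
    Find the derivative of \( f(x) = $a x^{$b} \) at \( x = 1 \).
    $BR $BR
    \( f'(1) = \) \{ ans_rule(20) \}
    END_TEXT

    ANS(num_cmp($a*$b));   # numerical answer checker, default tolerance
    ENDDOCUMENT();         # the last executable line

Most of this is boilerplate; the mathematical content is three or four
lines, which is roughly the ratio we are aiming for.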
Aspect 3: The problem of archiving and retrieving very similar
collections of code, each purporting to have fixed a bug in previous
pieces of code, seems very close to the problems faced by the open
source development groups surrounding projects such as apache, perl,
mozilla, etc. (see http://www.sourceforge.org)
As a first attempt at a storage method for new problems, it seems that
using the Concurrent Versions System (CVS) used by these groups would
allow us to build on their expertise. (See http://www.cvs.org.)
These systems allow many different people to co-operatively edit and
improve a common database of code packages. They also have the
capability of "rolling back" to earlier versions if the "fixes" turn
out to be mistakes.
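For example, the day-to-day cycle for a repository of problem templates
might look like the following (the module and file names are invented
for illustration):

    cvs checkout problib                    # get a working copy
    cd problib
    (edit calculus/derivative1.pg)
    cvs commit -m "Fix answer checker" calculus/derivative1.pg
    cvs log calculus/derivative1.pg         # inspect the revision history
    # roll back revision 1.4 by merging its changes out again:
    cvs update -j 1.4 -j 1.3 calculus/derivative1.pg
    cvs commit -m "Undo 1.4; the 'fix' broke the checker"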
Aspect 4: The revision problem is also somewhat related to the problem
of coordinating a large coding project among many people. The CVS
implementations allow one to develop "branches" -- parallel lines of
development of code packages with different features -- and to merge
these branches later if that becomes desirable.
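In CVS terms (again with invented names), the branch-and-merge cycle is:

    cvs tag -b better-wording derivative1.pg      # create a branch
    cvs update -r better-wording derivative1.pg   # switch the working copy to it
    (edit and commit on the branch)
    cvs update -A derivative1.pg                  # return to the main trunk
    cvs update -j better-wording derivative1.pg   # merge the branch in
    cvs commit -m "Merge improved wording from branch"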
Whether this is enough flexibility, or the right kind, for revising
problems remains to be seen.
Aspect 5: In this case one has a new problem, but one would like to
maintain some connection with the problems that inspired it. It's not
clear whether the "branch" mechanism of the CVS is flexible enough in
this case.
--
Which leads to the question of indexing and cross-indexing, which is
really at the heart of Ken's comments above. One way to do "on the fly"
indexing is with a search engine -- and storing the problems in an SQL
(Structured Query Language) database would facilitate this. (Quite likely
you would only want the "latest" representative of each problem -- not
the previous "buggy" or "confusing" versions of the problems
represented in this database.) This kind of searching and indexing
takes some skill and can produce variable results; it is pretty
flexible, however, since one can always think up new combinations of
keywords to search for. With the help of this search engine, individual
researchers/instructors could compile indices into the problem data bank
designed for a specific course, to accompany a specific book, or to
complement a certain style of teaching. Because the
indices would point to a development "tree", rather than a single piece
of code, improvements to the code -- say code that worked faster, gave
better error messages when checking answers, or had improved wording --
would automatically be incorporated into the indices.
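As a sketch of the kind of search this would enable, here is how an
instructor's tool might query such a database from Perl using the
standard DBI module. The database, table, and column names are pure
invention; a real schema would have to be hashed out:

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Connect to the (hypothetical) problem library database.
    my $dbh = DBI->connect("dbi:mysql:database=problib",
                           "wwuser", "secret", { RaiseError => 1 });

    # Find problems tagged with both keywords.
    my $sth = $dbh->prepare(
        "SELECT path, author, difficulty FROM problems
          WHERE keywords LIKE ? AND keywords LIKE ?");
    $sth->execute('%derivative%', '%chain rule%');

    while (my ($path, $author, $difficulty) = $sth->fetchrow_array) {
        print "$path (by $author, difficulty $difficulty)\n";
    }
    $dbh->disconnect;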
-
So (finally :-) ) here is an initial suggestion for archiving problems:
Bottom layer: A CVS repository which holds the problem templates. It
maintains versions of the problems, including both coding corrections
and mathematical/educational corrections. Many users can build branches
"improving" existing problems. An editorial board decides whether the
branches should be folded into the main branch, entered as separate
problems, or pruned. The repository might be mirrored at more than one
institution for reliability.
Middle layer: An SQL database which points to the "current" version of
the problem in the CVS and contains additional data: who wrote the
problem, keywords describing the content of the problem, the level of
difficulty, the availability of the problem (some authors might not
allow global availability), possibly information about the problem it
was derived from, and a long list of other attributes to be determined.
A library board would check in new problems, verify the accuracy of
attributes, add new attribute categories, etc.
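To make the middle layer concrete, a first guess at what the table
might contain -- every column here is a placeholder for the "long list
of other attributes to be determined":

    CREATE TABLE problems (
        path         VARCHAR(255),  -- current version's location in the CVS
        author       VARCHAR(80),   -- who wrote the problem
        keywords     VARCHAR(255),  -- e.g. 'derivative,chain rule'
        difficulty   INTEGER,       -- e.g. 1 (routine) through 5 (challenging)
        availability VARCHAR(20),   -- 'global', 'local', ...
        derived_from VARCHAR(255)   -- problem this one was derived from, if any
    );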
Top layer: Instructors and authors would create and make available
lists of problems that they thought appropriate for their course or for
a specific text book. These could be stored anywhere on the web, including
in an SQL database for easy searching. This is where a new instructor
for a course would go first to obtain a good starting point for
designing a course. From there they could follow links and keywords in
the SQL database to find other similar problems, possibly write some of
their own and generally customize the problem sets for their course to
their liking.
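As a concrete (and entirely invented) example, such a list could be as
simple as a text file of pointers into the library:

    # Calculus I, week 3: differentiation
    calculus/derivatives/powerrule1.pg
    calculus/derivatives/productrule2.pg
    calculus/derivatives/chainrule1.pg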
---
I'm anxious to hear further thoughts on this subject. Is this scheme
too complicated? Not inclusive enough? Too rigid? Are there suggestions
for refining and implementing the scheme (both the software support and
the organizational structure), or should we consider a completely
different scheme?
Thanks for your attention.
-- Mike Gage