Model Course Notes
Notes from Web Conference 3
- Good problems follow-up
- Problem authoring discussion
- Model Courses
- The heuristics that we discussed last time shape fairly easily into a "rubric" that may be useful both when writing new problems and when assessing what existing problems have or lack.
- Learning objective -- could be very simple; it might also be made available to students (though obviously in some cases that would be counterproductive)
- Related to the suggestion that numbers be "nice enough," in some cases a problem can be written with distinct parameter values that let an instructor track a student's work through the problem
- Is a test suite possible? Can we check for nice numbers? For robustness? It would be nice to have a problem checker that automates verifying that a problem we are authoring has no hidden singularities and that the values it generates are "nice"
- Students and nice numbers: is there information about how students react to problems, so we can tell which problems are effective and which turn students off?
- Metadata for problems: could be part of a new NPL, and could include
- Learning objectives (possibly available to students, though in some cases that might not be a good thing)
- Quality measures
- Information on which problems have substantial added information, hints, or instruction. These might be problems that should be embedded in a set.
- A measure of the difficulty of the problem---maybe the number or percent of incorrect submissions seen on the problem
- A measure of the number of uses of the problem
- We may need a better mapping of problems to course sections---e.g., a better generic course/chapter/section list
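The test-suite idea above could be prototyped outside of WeBWorK itself. The sketch below is a hypothetical Python checker (all names and the "niceness" criterion are my own assumptions, not an existing WeBWorK tool): it runs a problem's parameter generator over many seeds and flags seeds that hit a hidden singularity or produce values that are not "nice" (here, arbitrarily, fractions with small denominators and modest magnitude).

```python
import random
from fractions import Fraction

def nice(value, max_denominator=4, max_magnitude=100):
    """One possible definition of a 'nice' value: a fraction with a
    small denominator and modest magnitude. Other rubrics are possible."""
    f = Fraction(value).limit_denominator(1000)
    return f.denominator <= max_denominator and abs(f) <= max_magnitude

def check_problem(generate, n_trials=1000):
    """Run a problem's parameter generator over many seeds, collecting
    seeds that produce singularities (exceptions) or non-nice values."""
    bad = []
    for seed in range(n_trials):
        rng = random.Random(seed)
        try:
            values = generate(rng)
        except ZeroDivisionError:
            bad.append((seed, "singularity"))
            continue
        if not all(nice(v) for v in values):
            bad.append((seed, "values not nice"))
    return bad

# Example: the slope of a line through two random points is undefined
# when the x-coordinates coincide -- a classic hidden singularity.
def slope_problem(rng):
    x1, y1 = rng.randint(-5, 5), rng.randint(-5, 5)
    x2, y2 = rng.randint(-5, 5), rng.randint(-5, 5)
    slope = Fraction(y2 - y1, x2 - x1)  # raises ZeroDivisionError if x1 == x2
    return [x1, y1, x2, y2, slope]

bad_seeds = check_problem(slope_problem)
```

A real checker would need to wrap PG problem code rather than a Python function, but the structure (many seeded trials, explicit failure categories) carries over.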
Good NPL Problems
- Are solutions good for students? Are there data that substantiate the value of solutions to student learning?
- The solution -> new problem model. This may be very useful in some cases, though not necessarily all. (I wrote this down, but I'm not sure what it means now...)
- Note that searching for problems depends on the tagging, and that the displayed directory structure may not reflect the actual database chapter/section/problem numbers (this may be a fault in how the files in the NPL are organized)
- Finding problems that are similar to a given model problem, or that have characteristics that we want. The keyword search might be a good option for this
- One aspect of developing model courses is translating static textbook problems into parameterized, algorithmic WeBWorK problems
- This translation lets us do more with the problems---e.g., permit negative parameters, or vary the problems to challenge students
- Testing problems becomes an issue: ensuring that the problems are consistent and have no singularities
- Making the format and numbers that show up in the problems "nice" can be a significant time drain
- There are some model courses currently available: Calculus I
- For the Calculus I model course, the problems are set up so that the problem paths are visible, and the source for the problems is visible
- Things that we might want in a model course:
- Sample problem sets
- Textbook notes
- Assumptions about how the problems are picked and assigned
- Assignment information and related data that are provided to students when using the problems
- That it be a course that actually has been used (and tested)
- Set header files giving information about the sets
- How are these stored?
- A courses repository? This could include metadata, including textbook information, philosophy, etc.
- The Moodle course model is a good one: it exposes a lot of metadata about the course and the sets that are given
- Problem sets can be stored in an archive file that can be downloaded and installed in a course. Is there a better way than a tgz file?