I would like to comment and ask about "best practices".
1) Most of the 2000+ NPL problems I viewed last semester used textbook phrasing for DBsection rather than something in the "official" list. Since the regular Library Browser copes with that, I think that is a reasonable practice.
2) OTOH, many of those problem templates also had KEYWORDS that were generic to the section rather than specific to the particular problem. Although I welcome the contribution of all problem templates, that lazy tagging decreases their usefulness. E.g., try an Advanced Search with Precalculus as the subject and "concavity" as the keyword to get an idea of the magnitude of irrelevant "hits".
3) Is DESCRIPTION worth writing? Its content is not presented in the Library Browser --- it is only seen by those viewing the code for the problem template. Almost all uses seem banal: "a problem from text ABC by XYZ, written by ME@pdq.edu", with information that is duplicated in subsequent tags. I prefer to write something of the form
#### locate inflection & determine concavity for a t + b (sin t)^2
or
#### Newton's method: apply to a quadratic, approximate the larger zero
(plus comments within the code wherever the logic is non-trivial). A sketch of the kind of tagging header I have in mind appears after 7) below.
4) What types of tags would be useful to add to the system (with appropriate queries by a Library Browser)?
5) What steps, if any, are worth taking to prune the NPL of duplicate templates? of near-duplicates?
6) Most current problem templates are written in English. There are some projects starting to produce translations into other languages. What can we do to identify our "best problems" as candidates for initial attention by those projects?
7) Please add your questions, comments, and suggestions.
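To make 2) and 3) concrete, here is a minimal sketch of the kind of tagging header I have in mind for a .pg template. The tag names follow the usual NPL conventions (DESCRIPTION/ENDDESCRIPTION, DBsubject, DBchapter, DBsection, KEYWORDS), but the particular subject, chapter, section, and keyword values are invented for illustration and not copied from any actual file:

## DESCRIPTION
## locate inflection & determine concavity for a t + b (sin t)^2
## ENDDESCRIPTION

## DBsubject('Calculus')
## DBchapter('Applications of Differentiation')
## DBsection('Concavity and Points of Inflection')
## KEYWORDS('concavity','inflection point','second derivative','trigonometric function')

The point is only that the KEYWORDS line names what this particular template actually asks, and the DESCRIPTION line carries the one-line summary from 3) rather than repeating the textbook information that already lives in the TitleText1 and related tags.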
2. I am not exactly sure how the search feature works, but I think relying on the tags and keywords is too restrictive, mostly because the keywords in many problems are not well chosen. It would be desirable to search through the whole body of the pg file, and perhaps even the directory names of the pg files (a rough sketch of such a scan follows these comments).
6. It would not be too hard to collect information automatically on what pg files are assigned most often.
7. I often get a surprise in a problem because of the randomization. For example, I assign a true/false question in linear algebra that looks reasonable in the Library Browser, but one of the randomized versions uses terminology that I have not defined in class. It would be nice to have a button in the Library Browser next to each displayed problem that would re-randomize that single problem. I know I can re-randomize the whole page, but scrolling up, pushing the re-randomize button, and then scrolling back down to the problem takes a lot of time.
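On 2. above, here is a minimal stand-alone sketch of a whole-body search, written as plain Perl with no assumptions about the Library Browser internals. It only assumes the library is a directory tree of .pg files; the path and the search term come from the command line:

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

my ($library, $pattern) = @ARGV;
die "usage: $0 /path/to/library search-term\n"
    unless defined $library && defined $pattern;
my $re = qr/\Q$pattern\E/i;             # literal, case-insensitive match

find(sub {
    return unless -f $_ && /\.pg$/;
    my $path = $File::Find::name;
    if ($path =~ $re) {                 # hit in the directory or file name
        print "$path  (name)\n";
        return;
    }
    open my $fh, '<', $_ or return;
    local $/;                           # slurp the whole file
    my $body = <$fh>;
    print "$path  (body)\n" if defined $body && $body =~ $re;
}, $library);

On 6., a similar scan of courses' set definition files, or a query against the course database, could tally how often each source file is assigned, but the details depend on the local installation, so I will not guess at them here.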
2) An advanced search with Keywords is literal. For example, with NPL at revision 2876,
a) "point-slope form" gets 25 hits while "point-slope" gets none
b) "slope-intercept" finds 37 while "slope-intercept form" finds 20 others
c) "piecewise defined function" finds none and "Piecewise Defined Functions" finds only 1 although there is a Precalculus section with that title.
I see good tagging as an area for improving the NPL. Then the Library Browser can be an even better replacement for repeated searches of Rochester_problib.pdf.
The Keywords file is descriptive rather than prescriptive --- it includes some misspellings, such as "piecwise", and some items with an embedded comma.
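One way to soften the literal matching, sketched here as ordinary Perl with no claim about how the Library Browser actually implements its search: normalize both the stored keyword and the query before comparing, so that differences in case, hyphenation, and crude plurals do not block a match, and a query that is contained in a longer keyword (or vice versa) still counts as a hit.

# normalize a keyword or query before comparison: lower-case,
# treat hyphens as spaces, drop other punctuation, strip a crude
# plural 's' from the end of each word, collapse whitespace
sub normalize_keyword {
    my ($s) = @_;
    $s = lc $s;
    $s =~ tr/-/ /;
    $s =~ s/[^a-z0-9 ]//g;
    $s =~ s/s\b//g;
    $s =~ s/\s+/ /g;
    $s =~ s/^\s+|\s+$//g;
    return $s;
}

# treat two strings as matching if one normalized form contains the other
sub keyword_match {
    my ($query, $stored) = @_;
    my ($q, $k) = (normalize_keyword($query), normalize_keyword($stored));
    return index($k, $q) >= 0 || index($q, $k) >= 0;
}

With something like this, "point-slope" would hit the templates keyed as "point-slope form", and "piecewise defined function" would match "Piecewise Defined Functions"; genuine misspellings such as "piecwise" would still have to be corrected in the Keywords file itself.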
5) I've not yet seen a near-duplicate in the NPL that includes a comment about how it differs from its source. E.g., a colleague recently made some local edits involving "antiderivative" versus "indefinite integral" so that his class saw consistent terminology --- should those changes propagate to the NPL? A pedagogical choice about whether endpoints qualify for "local extreme" status (or for inclusion in an interval where a function is monotonic) is more substantive --- have you found problem templates that include an author's comment about making that type of policy decision?
A recent bug report (bugzilla #2271) included the comment that 4 near-duplicates had been found --- 2 with the reported bug and 2 without. Perhaps some use of svn (e.g., "annotate" aka "blame") might identify reasons for differences among those 4 files. In any event, near-duplicates complicate the task of maintaining the NPL.
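As a starting point for pruning, here is a rough sketch in plain Perl, with no NPL-specific tooling assumed, that groups .pg files whose bodies coincide after discarding ## tag/comment lines and collapsing whitespace. Anything it groups together still needs a human look, and near-duplicates that differ in actual PG code would need a fuzzier comparison (or the svn log and annotate inspection mentioned above).

#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use Digest::MD5 qw(md5_hex);

my $library = shift @ARGV or die "usage: $0 /path/to/library\n";
my %group;                               # digest of normalized body => list of paths

find(sub {
    return unless -f $_ && /\.pg$/;
    open my $fh, '<', $_ or return;
    local $/;                            # slurp the whole file
    my $body = <$fh>;
    return unless defined $body;
    $body =~ s/^\s*##.*$//mg;            # drop ## tag/comment lines
    $body =~ s/\s+/ /g;                  # ignore whitespace differences
    push @{ $group{ md5_hex($body) } }, $File::Find::name;
}, $library);

for my $paths (values %group) {
    next unless @$paths > 1;             # report only groups with more than one file
    print "possible duplicates:\n";
    print "  $_\n" for sort @$paths;
    print "\n";
}

Run against a checkout of the library, each reported group is a candidate either for consolidation or for an explanatory comment about why the variants differ.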
a) "point-slope form" gets 25 hits while "point-slope" gets none
b) "slope-intercept" finds 37 while "slope-intercept form" finds 20 others
c) "piecewise defined function" finds none and "Piecewise Defined Functions" finds only 1 although there is a Precalculus section with that title.
I see good tagging as an area for improving the NPL. Then the Library Browser can be an even better replacement for repeated searches of Rochester_problib.pdf.
The Keywords file is descriptive rather than prescriptive --- it includes sum missspellinks sutch az "piecwise" and some items with an embedded comma.
5) I've not yet seen a near-duplication in the NPL which included a comment about the changes. E.g., a colleague recently did some local edits involving "antiderivative" versus "indefinite integral" so that his class saw consistent terminology --- should those changes propagate to the NPL? A pedagogical choice about endpoints qualifying for "local extreme" status (or inclusion in an interval where a function is monotonic) is more substantive --- have you found problem templates which include an author's comment about making that type of policy decision?
A recent bug report (bugzilla #2271) included the comment that 4 near-duplicates had been found --- 2 with the reported bug and 2 without. Perhaps some use of svn (e.g., "annotate" aka "blame") might identify reasons for differences among those 4 files. In any event, near-duplicates complicate the task of maintaining the NPL.
I strongly support the idea of having a "re-randomize" button on the Library Browser page. I too have run into several problems that looked doable on my end, but certain students' versions were much harder (unfortunately, I did not keep a record of those problems).