WeBWorK Main Forum

memory, swap space, and hard copy generation

by Andras Balogh -
Number of replies: 3

I found several posts about memory issues: some older posts indicating a possible memory leak, and some unanswered questions about memory (for example, http://webwork.maa.org/moodle/mod/forum/discuss.php?d=3183).

The biggest problem for us right now is that if instructors try to generate a hard copy of a large collection of assignments with a large number of problems (a few hundred in total), the server runs out of its 10GB of memory, starts using swap space, and performance degrades. The system does not seem to recover until it is rebooted.

1. Is there any way to reduce memory usage during hard copy generation?
2. Is there a recommendation for the frequency of rebooting the server?
3. Is there a recommendation for the frequency of restarting apache?


-
Andras



In reply to Andras Balogh

Re: memory, swap space, and hard copy generation

by Michael Gage -
Thanks Andras for this report.

This tends to confirm something we've suspected: hardcopy generation, particularly of large sets, behaves like a memory leak because the process reads everything into memory and then runs TeX. A more sophisticated process would break the job into smaller pieces, but that is not a project that can be done quickly.

From my reading it appears that Perl returns the freed memory to the Perl child process doing the work, but does not return it to the operating system. So if one child processes two large homework sets, it ends up about the same size as if it had processed just one. But if two successive children are used to process the homework sets (which is likely), there are now two large children on the system, and it doesn't take long to run out of memory.

The systems I've worked with have restarted apache every 24 hours.  In my experience rebooting the server was not necessary. Restricting the total number of children allowed by apache helps prevent swapping.  Periodic printing of sets of 10 or 20 problems has not caused a lot of difficulties for us.  Printing out entire sections of the OPL is another matter.

Some people have also found apache modules that spawn a new process and kill an old one if the old process gets too big.  I'll let others chime in on this.
In reply to Michael Gage

Re: memory, swap space, and hard copy generation

by Danny Glin -
As you have both noted, generating hardcopies with lots of problems causes the apache child process doing the work to use more memory, which doesn't get released until that process dies. In fact, any action that generates a lot of problems at once has the same consequences. The other two common culprits are loading the Library Browser with many problems and generating a gateway set with many problems (when a student first clicks on "Take ... test").

This can be mitigated with the right combination of settings in your apache configuration. In particular, you can control how many child processes are allowed (MaxClients, or MaxRequestWorkers in Apache 2.4) and how many requests each child process serves before it is killed and restarted (MaxRequestsPerChild, or MaxConnectionsPerChild in Apache 2.4).

Based on my investigations on my Red Hat (CentOS) systems, generating a page with ~50 problems can cause an apache process to grow to over 100MB of memory usage (you can use the top command to see the current memory usage of each process; press 'M' to sort by memory usage). From some conversations and brief investigations, it looks like apache processes use more memory on Ubuntu with its default installation, so there you may want to put this high-end estimate closer to 200MB.
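
As a quick alternative to watching top interactively, something like the following (assuming a Linux system with GNU ps; the process name is typically httpd on Red Hat/CentOS and apache2 on Ubuntu) lists the Apache children sorted by resident memory:

    ps aux --sort=-rss | grep -E 'httpd|apache2' | head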

Depending on how frequently such large pages are being generated, you may see more of your child processes with large memory footprints.
Reducing MaxConnectionsPerChild will cause these processes to be killed more frequently, resetting them to their starting memory footprint.  The tradeoff is that re-spawning a child process takes processing power and time, so requests will be served more slowly while this is happening.
Reducing MaxRequestWorkers means there are fewer processes taking up memory, so even if each process grows large, you can ensure there are not enough of them to exceed the available memory. The tradeoff here is that your server can only serve as many requests simultaneously as there are child processes available. If more requests come in than there are child processes, the later requests are queued until a process becomes available.
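
To put rough numbers on that tradeoff (using the ~200MB worst case estimated above, which is only an estimate for your workload): the worst-case Apache memory usage is roughly MaxRequestWorkers times the peak size of a child, so 50 children × 200MB is about 10GB. You want that product to stay comfortably below physical RAM, since that worst case is exactly what pushes a server into swap.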

The starting point we have been using for MaxConnectionsPerChild is 100. This has been effective on servers where there are a lot of students doing regular assignment work and only occasionally does someone generate a page with a large number of problems. It works because most requests do not cause large memory increases, so over time the big apache processes die off. If your server sees a higher proportion of these "big" requests relative to "small" ones (such as viewing assignments or individual questions), you may want to lower this number.

With 10GB of RAM, MaxRequestWorkers at 50, and MaxConnectionsPerChild at 100, I would think your server should be able to handle the load, unless a lot of requests for many problems arrive in a short period of time.
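
For reference, here is a minimal sketch of those two settings for the prefork MPM in Apache 2.4 (mod_perl installations normally run the prefork MPM; where this block lives varies by distribution, e.g. an mpm_prefork conf file on Ubuntu or httpd.conf on CentOS, so treat the location as an assumption):

    <IfModule mpm_prefork_module>
        # cap the number of simultaneous child processes
        MaxRequestWorkers       50
        # recycle each child after it has served 100 requests
        MaxConnectionsPerChild 100
    </IfModule>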

I am not a fan of periodic restarts of apache, since you can accomplish the same goals more efficiently with the configuration settings above. If you do choose to schedule apache restarts, make sure you use the "graceful" command rather than "restart". Graceful allows each process to finish serving the page it is working on before restarting, which prevents submissions from being dropped in the middle.
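
For what it's worth, if you do decide to schedule restarts, a crontab entry along these lines does a nightly graceful restart (the path and tool name are assumptions: apachectl on Red Hat/CentOS, apache2ctl on Debian/Ubuntu):

    # graceful restart every night at 4am; in-flight requests are allowed to finish
    0 4 * * * /usr/sbin/apachectl graceful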
In reply to Danny Glin

Re: memory, swap space, and hard copy generation

by Andras Balogh -
Mike and Danny,

Thank you both for the answers.
For us, MaxClients was definitely set too high at 150.

Andras