In fact, last summer we had only instructors working and saw the slowdowns. At the time we thought it was just a server issue and mostly resolved the problem by adjusting the number of available threads and increasing the amount of memory assigned to the virtual server.
This was going to be one of my summer activities, trying to replicate the issue and pin down the problem.
D. Brian Walton
James Madison University
Debian Linux 6 (squeeze)
mysql Ver 14.14 Distrib 5.1.61
On the installation that we use at UW-Stout, any display of problems through the Library Browser causes the apache process to grow in size (i.e., RAM use). The more problems are displayed on one screen, the larger the growth of the apache process.
If one views about 20 problems, the process size grows from about 65m to 110m (I am talking about 'resident' size; 'virtual' size is obviously larger).
If about 50 problems are viewed through the Library Browser, the process size grows to about 206m.
We had a situation where an instructor attempted to view about 500 problems at once, which made the apache process grow beyond the 2g limit, so it was killed by the kernel, which logged a 'segfault' complaint.
My first guess would be that repeated use of the Library Browser causes a lot of apache processes to become really large, which consumes all of the available RAM on the server; the server then starts to use swap, and the intensive swapping slows the server down.
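One quick way to check this guess is to watch the resident sizes of the apache workers directly. This is just a sketch; the process name is an assumption (Debian names it apache2, Red Hat-style systems use httpd):

```shell
# List apache workers sorted by resident size (RSS, in KB), largest first.
# Swap "apache2" for "httpd" on Red Hat-style systems.
ps -C apache2 -o pid,rss,vsz,cmd --sort=-rss | head -n 10

# Overall RAM and swap pressure, in megabytes.
free -m
```

If the largest workers keep growing between refreshes of the Library Browser, that points at the same per-process bloat described above.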
We are using Apache2::SizeLimit to terminate any 'huge' apache processes to avoid this type of situation.
Depending on the apache server configuration, it might be quite easy to replicate the problem: one would repeatedly view problems through the Library Browser until most of the apache processes consume all of the available RAM.
Yes, we also restart the apache server every 24 hours -- just in case.
I have not had a chance to work with Library Browser 3, as we have not updated our webwork code for a while. If and when I get a chance to play with the new Browser, I'll try to report back on its memory usage.
You can use svn to get the latest version from github.com/openwebwork.
For instructions on using svn with github, see github's own documentation; the instructions are pretty straightforward.
There is a README in the webwork2/conf directory for working with the new configuration files.
It's best to get new versions of both the webwork2 repository and the pg repository from github.com/openwebwork.
Complete docs still need to be written, which is part of why the newest version hasn't been pushed to the svn repository yet. (Exam season is the other reason :-) )
Looking at the installation documentation in the tarball for Apache2::SizeLimit, it looks like you have to pass it some info about apache:
$ tar zxvf Apache-SizeLimit-0.XX.tar.gz
$ cd Apache-SizeLimit-0.XX
$ perl Makefile.PL
    (for static mod_perl use -httpd /path/to/httpd;
     for dynamic mod_perl use -apxs /path/to/apxs)
$ make
$ sudo make install
I don't know how to do that in cpan (it's probably easy), so I'd just suggest downloading the tarball and doing the steps above.
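For what it's worth, one way to pass those Makefile.PL arguments through the cpan shell is its makepl_arg config option (the apxs path below is an assumption; adjust it for your system):

```
$ cpan
cpan> o conf makepl_arg '-apxs /usr/bin/apxs2'
cpan> install Apache2::SizeLimit
cpan> o conf makepl_arg ''     # clear it again afterwards
```

Clearing makepl_arg afterwards matters, since it would otherwise be applied to every subsequent module build.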
Hope this helps,
Anyway, here are the edits that I made to the webwork.apache2-config file:
after the line with
my $webwork_dir = "whatever"
insert
# size limiter for Apache2
use Apache2::SizeLimit;
Then, somewhere close to the end of the <Perl> section (I have it just above the </Perl> line), insert the actual configuration:
# sizes are in KB
$Apache2::SizeLimit::MAX_PROCESS_SIZE = 120000;
$Apache2::SizeLimit::MAX_UNSHARED_SIZE = 120000;
$Apache2::SizeLimit::CHECK_EVERY_N_REQUESTS = 20;
The maximums are really what one needs to check carefully. Those are the 'virtual' process sizes; on a 64-bit installation they might be four times the real resident size.
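As an aside, newer versions of Apache2::SizeLimit deprecate those package variables in favor of class methods. Per its documentation, an equivalent setup would look roughly like this (a sketch; check the method names against the POD of your installed version):

```perl
# Class-method equivalent of the package-variable settings above.
use Apache2::SizeLimit;

Apache2::SizeLimit->set_max_process_size(120_000);    # KB
Apache2::SizeLimit->set_max_unshared_size(120_000);   # KB
Apache2::SizeLimit->set_check_interval(20);           # check every 20 requests
```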
Then, in the /etc/apache2/apache2.conf file I added a directive
to turn the use of Apache2::SizeLimit on.
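For reference, the Apache2::SizeLimit documentation enables the check by registering the module as a cleanup handler, so the directive would be along these lines (placement in apache2.conf is an assumption):

```
# in /etc/apache2/apache2.conf
PerlCleanupHandler Apache2::SizeLimit
```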
To see if the configuration really does what it is supposed to do, one can look through the apache error logs for lines similar to:
[Wed Apr 11 14:15:21 2012] (19620) Apache2::SizeLimit httpd process too big, exiting at SIZE=315752/120000 KB SHARE=5856/0 KB UNSHARED=309896/120000 KB REQUESTS=61 LIFETIME=5166 seconds
If one is really curious, it should be possible to track the pages (through the access logs) that such killed processes had generated, to see if there is a pattern.
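If you want a rough sense of how often the limit fires, a quick way is to count those kills per day in the error log (the log path is an assumption; adjust for your layout):

```shell
# Count Apache2::SizeLimit kills per day, assuming the standard Debian log path.
# The awk fields pick out the "[Wed Apr 11" part of the timestamp.
grep 'Apache2::SizeLimit httpd process too big' /var/log/apache2/error.log \
  | awk '{print $1, $2, $3}' \
  | sort | uniq -c
```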
There is also a program called monit (http://mmonit.com/monit/) that can be set up to monitor server resources in almost real time. It can be configured to send an email when available RAM drops below a certain percentage, etc. This could be a way to get more information about what is happening with your webwork server.
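As a sketch of what such a monit setup might look like (the thresholds, check interval, address, and file location are all assumptions to adapt):

```
# fragment of /etc/monit/monitrc
set daemon 120                        # check every two minutes
set mailserver localhost
set alert webmaster@example.edu       # hypothetical address

check system localhost
    if memory usage > 85% then alert
    if swap usage > 25% then alert
    if loadavg (5min) > 4 then alert
```

An alert on swap usage in particular would catch the swap-thrashing slowdown described earlier in this thread before users start reporting it.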