
Hi Christian,

We had some slowdowns on our server and followed Alex Basyrov's recommendation here to configure the Apache SizeLimit module. Try adding this to the end of the <Perl>...</Perl> section of webwork.apache2-config:

use Apache2::SizeLimit;

$Apache2::SizeLimit::MAX_PROCESS_SIZE = 120000;
$Apache2::SizeLimit::MAX_UNSHARED_SIZE = 120000;
$Apache2::SizeLimit::CHECK_EVERY_N_REQUESTS = 20;

We have not had any issues since.

Lars.
Hi Lars,

Great!

To see whether the configuration really does what it is supposed to, one can look through the Apache error logs for lines similar to

[Wed Apr 11 14:15:21 2012] (19620) Apache2::SizeLimit httpd process too big, exiting at SIZE=315752/120000 KB SHARE=5856/0 KB UNSHARED=309896/120000 KB REQUESTS=61 LIFETIME=5166 seconds

If one is really curious, it should be possible to track (through the access logs) which pages such killed processes had generated, to see if there is a pattern.
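A quick way to count such kills is to grep the error log for the SizeLimit message. The sketch below runs the grep against the sample log line quoted above; the log path in the comment is an assumption and may differ on your system:

```shell
# Against a live server, the check might look like this (log path is an assumption):
#   grep -c 'Apache2::SizeLimit.*process too big' /var/log/apache2/error.log
# Here we run the same grep against the sample log line quoted above:
line='[Wed Apr 11 14:15:21 2012] (19620) Apache2::SizeLimit httpd process too big, exiting at SIZE=315752/120000 KB SHARE=5856/0 KB UNSHARED=309896/120000 KB REQUESTS=61 LIFETIME=5166 seconds'
hits=$(printf '%s\n' "$line" | grep -c 'Apache2::SizeLimit.*process too big')
echo "matching lines: $hits"   # prints "matching lines: 1"
```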

There is also a program called monit, http://mmonit.com/monit/ , that can be set up to monitor server resources in near real time. It can be configured to send an email when available RAM drops below a certain percentage, and so on. This could be a way to get more information about what is happening on your WeBWorK server.
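For illustration, a minimal monitrc fragment for that kind of alert might look like the following; the host name, mail server, and alert address are placeholders, and the thresholds are arbitrary examples:

```
# /etc/monit/monitrc fragment (placeholders: myserver, admin@example.com)
set mailserver localhost
set alert admin@example.com

check system myserver
    if memory usage > 80% then alert
    if swap usage > 25% then alert
```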

-- Alex
I did not have to install Apache2::SizeLimit separately; it was already available as part of the mod_perl installation.

Anyway, these are the edits I made to the webwork.apache2-config file:

after the line with
my $webwork_dir = "whatever"
add
# size limiter for Apache2
use Apache2::SizeLimit;

then somewhere close to the end of the <Perl> section (I have it just above the </Perl> line), insert the actual configuration:

# sizes are in KB
$Apache2::SizeLimit::MAX_PROCESS_SIZE = 120000;
$Apache2::SizeLimit::MAX_UNSHARED_SIZE = 120000;
$Apache2::SizeLimit::CHECK_EVERY_N_REQUESTS = 20;
</Perl>


The maximums are what one really needs to check carefully. Those are the 'virtual' process sizes; on a 64-bit installation they might be four times the real resident size.

Then, in the /etc/apache/apache2.conf file I added
PerlCleanupHandler Apache2::SizeLimit
to enable Apache2::SizeLimit.
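Putting the edits from this post together, the changes span two files. This is a sketch of the final layout, using the values quoted above; exact placement and paths may vary by installation:

```
# In webwork.apache2-config, inside the <Perl>...</Perl> section:
#   after the "my $webwork_dir = ..." line near the top:
use Apache2::SizeLimit;
#   just above </Perl> near the bottom (sizes are in KB):
$Apache2::SizeLimit::MAX_PROCESS_SIZE = 120000;
$Apache2::SizeLimit::MAX_UNSHARED_SIZE = 120000;
$Apache2::SizeLimit::CHECK_EVERY_N_REQUESTS = 20;

# In apache2.conf (outside any <Perl> section), enable the cleanup handler:
PerlCleanupHandler Apache2::SizeLimit
```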




Hi Lars,

Looking at the installation documentation in the tarball for Apache2::SizeLimit, it looks like you have to pass it some information about Apache:

$ tar zxvf Apache-SizeLimit-0.XX.tar.gz
$ cd Apache-SizeLimit-0.XX

$ perl Makefile.PL
For static mod_perl use -httpd /path/to/httpd
For dynamic mod_perl use -apxs /path/to/apxs
$ make
$ sudo make install

I don't know how to do that in cpan (it's probably easy), so I'd just suggest downloading the tarball and doing the steps above.

Hope this helps,
Jason
My two cents.

Our installation:
Debian Linux 6 (squeeze)
perl v5.10.1
mysql Ver 14.14 Distrib 5.1.61

On the installation we use at UW-Stout, any display of problems through the Library Browser causes the Apache process to grow in size (i.e., RAM use). The more problems are displayed on one screen, the larger the increase in the size of the Apache process.

If one views about 20 problems, the process size grows from about 65m to 110m (I am talking about the 'resident' size; the 'virtual' size is obviously larger).

If about 50 problems are viewed through the Library Browser, the process size grows to about 206m.

We had a situation where an instructor attempted to view about 500 problems at once, which made the Apache process grow beyond the 2g limit, so it was killed by the kernel, which logged a 'segfault' complaint.

My first guess would be that repeated use of the Library Browser causes many Apache processes to become really large, which consumes all of the available RAM on the server; the server starts to use swap, and the intensive use of swap slows the server down.

We are using Apache::SizeLimit to terminate any 'huge' Apache processes to avoid this type of situation.

Depending on the Apache server configuration, it might be quite easy to replicate the problem: repeatedly view problems through the Library Browser until most Apache processes have consumed all of the available RAM.

WeBWorK Main Forum -> frequent crashes -> Re: frequent crashes

by William Wheeler
At Indiana University Bloomington we use Apache::SizeLimit in combination with the MaxClients setting to solve this problem when using Apache 1.

To determine a value for MaxClients, we find out how much RAM is available when everything except Apache is running, divide that by 120M, round down, and then subtract 3, 4, or 5.
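That rule of thumb can be sketched as shell arithmetic; the free-RAM figure below is a made-up example, not a measurement from Bill's servers:

```shell
# Hypothetical figure: 8192 MB free with everything except Apache running.
free_mb=8192
per_child_mb=120   # the 120M divisor from the rule of thumb above
headroom=4         # subtract 3, 4, or 5 as a safety margin

# Integer division rounds down automatically: 8192 / 120 = 68, minus 4.
maxclients=$(( free_mb / per_child_mb - headroom ))
echo "MaxClients $maxclients"   # prints "MaxClients 64"
```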

Then we configure Apache::SizeLimit to kill Apache children when they exceed 100M.

Also, at 5 am each morning we stop Apache, rotate the log files, and then restart Apache.

Under normal circumstances, this has solved the problem even for servers with as many as 1,400 users and twice- or thrice-weekly assignments.

The only failures are usually due to inadvertent denial-of-service attacks caused by deficiencies in some browsers.

Sincerely,

Bill Wheeler