## Forum archive 2000-2006

### Bob Byerly - Server maintenance -- memory leaks?

by Arnold Pizer -
Number of replies: 0
Topic started 2/14/2005; 11:51:23 AM. Last post 8/29/2005; 12:12:23 AM.
**Bob Byerly - Server maintenance -- memory leaks?** 2/14/2005; 11:51:23 AM (reads: 2725, responses: 12)

I hope this isn't off-topic, since it concerns a WebWork server rather than WebWork itself, but I'm hoping that readers will have suggestions and comments from their own experience. We've had several crashes (about every two weeks) this semester on our WebWork server that I believe are due to memory leaks in the apache/mod_perl processes. (The server, a 4-processor Xeon with 2G of memory, is not rebooting automatically, nor is it recording any error messages. The reason I think this is software-related is that I can cause the same symptoms with a badly written pg file.) I was wondering whether other people have had similar problems and what they are doing about them.

Things we've tried:

- I looked at Arnold Pizer's message in http://webhost.math.rochester.edu/webworkdocs/discuss/msgReader$701 and set our Apache parameters similarly. The main difference (and this may have been a mistake) is that I set MaxRequestsPerChild considerably higher (to 200). Since Arnold's message is from the pre-mod_perl days, things may have changed. What do others find optimal for this and other parameters?
- We're also rebooting daily in the wee hours from a cronjob. (Except Sunday mornings. The last crash happened Sunday evening. Hmm.)
- Setting the Linux kernel "panic" parameter to a non-zero value so that it would reboot automatically after a kernel panic had no effect.

I'm considering:

- starting the Apache processes with a hard memory limit (via ulimit). Are there any WebWork issues involved with this?
- getting a watchdog card. (I'm aware they exist but don't know much about them.)

Any suggestions will be appreciated. (If we get enough good suggestions on this topic, a WebWork FAQ might not be a bad idea!)

**Michael Gage - Re: Server maintenance -- memory leaks?** 2/14/2005; 12:19:05 PM (reads: 2996, responses: 0)

If there is a really bad memory leak, then using a utility such as top you should be able to watch the memory size of the child grow. We could see this with webwork1.9 when we were caching code and reusing it; the size of the process would grow visibly as we watched. I haven't seen that happen with mod_perl and webwork2. Type top into a unix window to see the most active processes displayed.

What do you put into a .pg file to cause the crash? An infinite loop?

--Mike

**Bob Byerly - Re: Server maintenance -- memory leaks?** 2/14/2005; 1:19:59 PM (reads: 2973, responses: 0)

I have been using top, and I do see the size of the apache processes grow -- usually slowly. On a freshly started server there are about 7 apache processes, each with a resident memory size of about 27m. Right now I'm looking at some logs of memory usage I've been keeping (redirecting top in batch mode to a file) and observing one apache process whose resident memory usage has grown from 30m to 60m over the course of an hour.

If I recall correctly (I don't want to try this now on our working server, and our development machine is being upgraded now), the pg error that caused runaway memory usage was something like passing the Matrix function in the new parser package a string rather than the array reference it was expecting, e.g.

    $A = Matrix( "[[1,2], [3,4] ]" );

rather than the correct

    $A = Matrix( [[1,2], [3,4] ] );

When our development server is back up I'll try this again. At least once I was able to observe memory usage growing rapidly after such an error. If nobody else is having these problems, we'll just have to assume it's some problem with our setup and try some upgrades. But I thought I'd check with others first. FWIW, we're using Apache 1.3.33, which we compiled ourselves with mod_perl-1.29, mod_ssl-2.8, and php-4.3.9.
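Bob's technique of redirecting top in batch mode to a file can be sketched as a small logging script. This is my own sketch, not from the thread: it uses ps rather than top for easier parsing, and it assumes the Apache children have the command name httpd (as in a typical Apache 1.3 build); both the script name and the log path are hypothetical.

```shell
#!/bin/sh
# memlog.sh (hypothetical name) -- print a timestamped snapshot of the
# resident set size (RSS, in KB) of every process whose command name
# matches $1 (default "httpd"). Redirect the output to a log file from
# cron and grep it later to spot children whose memory keeps growing.
NAME="${1:-httpd}"
date
# -C selects processes by command name; the trailing "=" after each
# column name suppresses the header line, so the log stays greppable.
ps -o pid=,rss=,comm= -C "$NAME" || echo "no $NAME processes found"
```

A crontab entry along the lines of `*/5 * * * * /usr/local/sbin/memlog.sh >> /var/log/httpd-mem.log` would then build up the same kind of record Bob describes keeping.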

**Michael Gage - Re: Server maintenance -- memory leaks?** 2/14/2005; 1:39:46 PM (reads: 2997, responses: 0)

For reference, here is the child configuration we are using on our slower machine with 1/2 Gig of memory -- the one that runs hosted.webwork.rochester.edu:

    StartServers 7
    MinSpareServers 7
    MaxSpareServers 9
    MaxClients 10
    MaxRequestsPerChild 100

If we leave too many servers going, swapping occurs -- hosted actually runs faster with fewer servers. If we had more memory, we'd leave more servers open. If you suspect a memory leak, then cut down on the number of requests per child. Our observations are similar to yours -- there may be a slow memory leak, but not a very fast one. The configuration above is one we switched to a few weeks ago to resolve some slow response on the hosted machine -- we reduced the number of spare servers. It seems to have helped. Let us know how your situation evolves. -- Mike
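As a gloss on Mike's settings: in the Apache 1.3 prefork model these five directives together bound both memory use and leak damage. The comment annotations below are my own summary, not part of the original post.

```apache
# Apache 1.3 child-process tuning (values from Mike's post above)
StartServers         7    # children launched at startup
MinSpareServers      7    # keep at least this many idle children ready
MaxSpareServers      9    # kill off idle children beyond this many
MaxClients          10    # hard cap on simultaneous children, hence on
                          #   total resident memory for Apache
MaxRequestsPerChild 100   # recycle each child after 100 requests, so a
                          #   slowly leaking child is replaced before it
                          #   grows large enough to cause swapping
```

The trade-off Mike describes follows directly: more children improve concurrency until their combined resident size exceeds RAM, at which point swapping makes everything slower, so a small-memory machine runs faster with fewer children.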

**Bob Byerly - Re: Server maintenance -- memory leaks?** 2/14/2005; 1:50:48 PM (reads: 3009, responses: 0)

Thanks Mike. I'll try your parameters and see what happens. Bob

**Davide P. Cervone - Re: Server maintenance -- memory leaks?** 2/14/2005; 9:43:20 PM (reads: 2991, responses: 0)

Bob: The problem you reported with Matrix("[[1,2],[3,4]]") turns out to be a bug in the Parser, and it did cause an infinite loop that eats up memory. I have fixed the error and submitted the changes to the CVS repository. In addition to preventing the infinite loop, I have arranged for Matrix() to evaluate the string to produce the matrix, rather than produce an error (which is what it should have done before). Similarly for Point(), Vector() and Real(). Hope that helps. Davide
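For reference, the failing and working calls from the thread would look something like this inside a PG problem file. This is a sketch only: it assumes the Parser macros are loaded (the macro file name and Context setup varied between WeBWorK versions), and the behavior comments reflect what Davide describes above.

```perl
loadMacros("Parser.pl");   # assumed setup; macro file name varies by version
Context("Matrix");

# Correct form: pass a reference to an array of row arrays.
$A = Matrix([[1, 2], [3, 4]]);

# Before Davide's fix, passing a string instead sent the Parser into an
# infinite loop that consumed memory; after the fix, the string is parsed
# and evaluated to produce the same matrix.
$B = Matrix("[[1,2],[3,4]]");
```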

**Bob Byerly - Re: Server maintenance -- memory leaks?** 8/23/2005; 8:51:32 AM (reads: 2094, responses: 2)

Actually, we've set MaxRequestsPerChild to 15 and MaxSpareServers to 10. We very occasionally get a message in our log files that a connection was denied because there weren't enough servers available, but apparently this doesn't happen often enough to provoke student complaints :).

The solution that I think really made the difference for us, though, was simply to make sure that a server process is killed and a new one respawned whenever it gets too big. We inserted the following in our httpd.conf:

    PerlSetEnv PERL_RLIMIT_AS 100:120
    PerlModule Apache::Resource

You will need to make sure you have the perl module Apache::Resource for this to work. There should be documentation with this module that explains these lines, but this is supposed to set the soft limit for an httpd child to 100 MB and the hard limit to 120 MB. You may want to adjust these depending on how many servers you're running and your memory size. Bob
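For completeness, the mod_perl 1.x documentation for Apache::Resource applies the limits from a child-init handler. A fuller httpd.conf fragment along the lines of Bob's might look like the following; the handler line and the comments are my additions based on that documentation, not from Bob's post.

```apache
# Limit each httpd child's address space: soft limit 100 MB, hard 120 MB.
# PERL_RLIMIT_AS takes "soft:hard" values in megabytes.
PerlSetEnv PERL_RLIMIT_AS 100:120
PerlModule Apache::Resource
# Install the limits when each child process starts
# (shown in the Apache::Resource documentation for mod_perl 1.x).
PerlChildInitHandler Apache::Resource
```

A child that exceeds the hard limit is killed by the kernel and Apache respawns a fresh one, which is exactly the recycle-on-bloat behavior Bob describes.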
**Sam Hathaway - Re: Server maintenance -- memory leaks?** 8/27/2005; 11:08:39 AM (reads: 1915, responses: 0)

Bob, you can enable the timing log in global.conf. This will give you a log of each problem rendered, and how long it took. -sam