I had been using JMeter to generate load against my production server to test my application. The test plan has 13+ HTTP Samplers making different requests and one Regular Expression Extractor that pulls a value out of a response; that value is then used in the subsequent HTTP Samplers. The test case is simple and straightforward. Initially I used 200 JMeter threads to simulate 200 users. The server handled that many requests easily, but when the number of threads was increased it couldn't cope: the JMeter threads were waiting for a connection, never got one, and so waited indefinitely. Clearly something was going on. To avoid this situation I introduced an "HTTP Request Defaults" element with connection and response timeouts. That solved one problem, the threads no longer hung forever, but now they were timing out with the following exception.

java.net.SocketTimeoutException: Read timed out
	at java.net.SocketInputStream.socketRead0(Native Method)
	at java.net.SocketInputStream.read(SocketInputStream.java:129)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
	at org.apache.commons.httpclient.HttpParser.readRawLine(HttpParser.java:78)
	at org.apache.commons.httpclient.HttpParser.readLine(HttpParser.java:106)
	at org.apache.commons.httpclient.HttpConnection.readLine(HttpConnection.java:1116)
	at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1973)
	at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1735)
	at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1098)
	at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
	at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
	at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
	at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
	at org.apache.jmeter.protocol.http.sampler.HTTPHC3Impl.sample(HTTPHC3Impl.java:258)
	at org.apache.jmeter.protocol.http.sampler.HTTPSamplerProxy.sample(HTTPSamplerProxy.java:62)
	at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1088)
	at org.apache.jmeter.protocol.http.sampler.HTTPSamplerBase.sample(HTTPSamplerBase.java:1077)
	at org.apache.jmeter.threads.JMeterThread.process_sampler(JMeterThread.java:428)
	at org.apache.jmeter.threads.JMeterThread.run(JMeterThread.java:256)
	at java.lang.Thread.run(Thread.java:662)
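
As an aside, the connection and response timeouts configured in HTTP Request Defaults end up in the saved .jmx file roughly like the fragment below; the 5000/30000 ms values are only placeholders, not a recommendation.

<ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement" testname="HTTP Request Defaults" enabled="true">
    <!-- fail the sampler if a TCP connection cannot be established within 5 s -->
    <stringProp name="HTTPSampler.connect_timeout">5000</stringProp>
    <!-- fail the sampler if the response does not arrive within 30 s -->
    <stringProp name="HTTPSampler.response_timeout">30000</stringProp>
</ConfigTestElement>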

Definitely some part of the server stack (Apache 2.2.14 + Apache Tomcat 7.0.11) was creaking, but I wasn't sure which part. There was clearly a bottleneck somewhere in the setup. In this setup Apache forwards requests to the Tomcat engine, so one of the two was unable to handle 200+ simultaneous requests. I changed the setup slightly to send all requests directly to Tomcat, and it coped fine, which meant Apache was the one slacking. I quickly checked the Apache error log at /var/log/apache2/error.log and found the following lines.

[Wed Jun 26 16:46:19 2013] [error] server is within MinSpareThreads of MaxClients, consider raising the MaxClients setting
[Wed Jun 26 16:46:20 2013] [error] server reached MaxClients setting, consider raising the MaxClients setting
[Wed Jun 26 17:24:42 2013] [error] server is within MinSpareThreads of MaxClients, consider raising the MaxClients setting
[Wed Jun 26 17:24:43 2013] [error] server reached MaxClients setting, consider raising the MaxClients setting
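
A convenient way to keep an eye on these messages while the test is running is simply to tail the log (standard Debian/Ubuntu path; adjust for your distribution):

/> tail -f /var/log/apache2/error.log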

This clearly indicated that the MaxClients value should be increased, which in turn increases the number of Apache worker threads. I quickly edited the apache2.conf file, adding the following configuration, and executed "apache2ctl -t" to confirm that the configuration changes were valid.

<IfModule mpm_worker_module>
    # number of child processes created at startup
    StartServers          2
    # minimum number of idle worker threads kept available
    MinSpareThreads      25
    # maximum number of idle worker threads before surplus children are reaped
    MaxSpareThreads     100
    # upper bound for ThreadsPerChild (must be >= ThreadsPerChild)
    ThreadLimit         800
    # worker threads created by each child process
    ThreadsPerChild     800
    # maximum simultaneous connections (children x ThreadsPerChild)
    MaxClients         2400
    # 0 = never recycle child processes
    MaxRequestsPerChild   0
</IfModule>
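
The syntax check itself is a one-liner; "Syntax OK" is the output you want to see before reloading Apache:

/> apache2ctl -t
Syntax OK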

I relaxed a bit, thinking that Apache would now definitely handle the load. But to my surprise it couldn't, and JMeter hit the same exception again. At this point I wanted to check Apache's behaviour by enabling server-status, which gives a clear picture of the connection states. I put the following lines in /etc/apache2/mods-available/status.conf.

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from  .your.domain.here.com localhost
</Location>
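
Note that on Debian/Ubuntu the status module itself must also be enabled for this handler to work. It usually is by default; if not, something like this should take care of it before the restart:

/> sudo a2enmod status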

I restarted Apache and Tomcat by executing the following command:

/> sudo service apache2 restart; sudo service tomcat restart;

Once Apache was back up, I hit the following URL to get the server-status page.


http://your.server.name/server-status?refresh=3
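
By default the status page only shows the worker scoreboard summary. If you also want per-connection detail (client, virtual host, the request currently being served), ExtendedStatus can be switched on server-wide, for example in apache2.conf, at a small performance cost:

ExtendedStatus On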

After running JMeter with 200+ threads I noticed this on the status page.

[Image: Apache server status page]


The status page showed a lot of connections in the "W" (Sending Reply) state. This state can have several causes; I tried to Google it but couldn't find a definite answer. One thing was clear, though: Apache itself was no longer the problem, so the bottleneck had to be between Apache and Tomcat. I remembered that the JMeter test plan works fine with 200 threads and that Tomcat's connector defaults to 200 threads as well. So I took a chance and increased the thread count to 400 by editing the "<APACHE_TOMCAT_HOME>/conf/server.xml" file.

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" maxThreads="400" minSpareThreads="20" />
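
Tomcat has to be restarted before the new maxThreads value takes effect; the same service command used earlier works (the service name may differ depending on how Tomcat was installed):

/> sudo service tomcat restart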

Now the server handles the JMeter load easily, and the Apache connections stay in a healthy state even after the test plan finishes.
Although the problem was fixed, I still couldn't understand why Tomcat was able to handle the load when JMeter sent all requests directly to it, but not when Apache was forwarding the requests to it. If you have any explanation for this behavior, please share it in the comments section below.
Hope this blog helped you in some way. If you liked it, please share it. You can also leave your comment below.


4 comments until now

  1. I don't think that Tomcat has 200 threads by default. It has a thread pool of 15-20 threads (but I am not sure). Have you tried setting it to 201 threads instead of 400?
    Have you tried 150 threads for Apache and 400 for Tomcat? Use VisualVM to diagnose thread usage. If the obvious variant doesn't work, try different combinations, not just the one 200/400 setup.

  2. @Yury, thanks for your comment. I hadn't tried these things, but I will try them and post my experience here.

  3. I tried setting Tomcat to 201 threads but the result is still the same; somehow Apache doesn't like it.
    Apache with 150 threads and Tomcat with 400 threads does work well. I am not sure how I can use VisualVM to diagnose this. Are you suggesting using VisualVM with Apache to monitor its threads and figure out why Apache is acting weird?

  4. I think the reason why Tomcat couldn't handle the forwarded requests is that you were stressing two servers from localhost; from the other side it looks like a DDoS attack or an endless while loop. Actually, I don't understand the point of forwarding requests from Apache to Tomcat. Also, I think it's not a good idea to load your localhost without tuning the JMeter script.
