Jetty/Howto/High Load

Revision as of 20:48, 23 February 2011




Introduction

Configuring Jetty for high load, whether for load testing or for production, requires that the operating system, the JVM, Jetty, the application, the network and the load generation all be tuned.

Load Generation for Load Testing

  • The load generation machines must have their OS, JVM etc tuned just as much as the server machines.
  • The load generation should not be run on the server machine itself over the local network, as this has unrealistic performance and latency, as well as different packet sizes and transport characteristics.
  • The load generator should generate a realistic load:
    • A common mistake is that load generators often open relatively few connections that are kept totally busy sending as many requests as possible over each connection. This causes the measured throughput to be limited by request latency (see Lies Damned Lies and Benchmarks for an analysis of such an issue).
    • Another common mistake is to use a new TCP/IP connection for each request, opening many short-lived connections. This will often result in accept queues filling and limitations due to file descriptor and/or port starvation.
  • A load generator should model the traffic profile of the server's normal clients. For browsers, this is typically between 2 and 6 connections per client that are mostly idle and are used in sporadic bursts with read times in between. The connections are mostly long-held HTTP/1.1 connections.
  • Load generators should be written in an asynchronous programming style, so that a limited number of threads does not limit the maximum number of users that can be simulated. If the generator is not asynchronous, then a thread pool of 2000 may only be able to simulate 500 or fewer users. The Jetty HttpClient is an ideal basis for building a load generator, as it is asynchronous and can be used to simulate many thousands of connections (see the Cometd Load Tester for a good example of a realistic load generator).
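
As a quick illustration of the persistent-connection point (curl here is only a stand-in for a real load generator, and http://localhost:8080/ is a placeholder URL):

```shell
# Two requests over ONE persistent HTTP/1.1 connection: curl keeps the
# connection open and reuses it for the second URL (one -o per URL)
curl -s -o /dev/null -o /dev/null -w '%{http_code}\n' \
    http://localhost:8080/ http://localhost:8080/

# The anti-pattern: a fresh TCP connection per request, which churns
# through ephemeral ports and fills accept queues under load
for i in $(seq 1 10); do
    curl -s -o /dev/null http://localhost:8080/
done
```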

Operating System Tuning

Both the server machine and any load generating machines need to be tuned to support many TCP/IP connections and high throughput.

Linux

Linux does a reasonable job of self-configuring TCP/IP, but there are a few limits and defaults that are best increased. These can mostly be configured in /etc/security/limits.conf or via sysctl.
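
One practical note (standard Linux administration, not specific to Jetty): settings written with sysctl -w last only until reboot. To make them permanent, the same keys can be placed in /etc/sysctl.conf and reloaded:

```shell
# Persist two of the tuned keys across reboots (run as root; the values
# are the ones suggested in the sections below)
cat >> /etc/sysctl.conf <<'EOF'
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF

# Re-apply everything in /etc/sysctl.conf immediately
sysctl -p
```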

TCP Buffer Sizes

These should be increased to at least 16MB for 10G paths, with the autotuning maximums raised to match (although buffer bloat now needs to be considered):

 sysctl -w net.core.rmem_max=16777216
 sysctl -w net.core.wmem_max=16777216
 sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
 sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
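
To verify that the changes took effect, the current values can be read back (reading does not require root; sysctl -n prints just the value):

```shell
# Maximum socket receive and send buffer sizes, in bytes
sysctl -n net.core.rmem_max
sysctl -n net.core.wmem_max

# tcp_rmem and tcp_wmem each print three values: min, default, max
sysctl -n net.ipv4.tcp_rmem
sysctl -n net.ipv4.tcp_wmem
```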

Queue Sizes

net.core.somaxconn controls the size of the connection listening queue. The default value is 128; if you are running a high-volume server and connections are being refused at the TCP level, you will want to increase it. The setting is very tweakable in such a case: too high and you'll get resource problems as the kernel notifies the server of a large number of connections, many of which will remain pending; too low and you'll get refused connections:

 sysctl -w net.core.somaxconn=4096
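
Whether the accept queue is actually overflowing can be checked from the kernel's counters before reaching for a bigger value; a diagnostic sketch (the exact counter wording varies between kernel versions):

```shell
# The current accept-queue limit
sysctl -n net.core.somaxconn

# Cumulative listen-queue overflow counters since boot; if these grow
# while the server is under load, the queue is too small
netstat -s | grep -i -E 'overflowed|SYNs to LISTEN'
```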

The net.core.netdev_max_backlog parameter controls the size of the incoming packet queue for upper-layer (Java) processing. The default (2048) may be increased, and other related parameters (TODO MORE EXPLANATION) adjusted, with:

 sysctl -w net.core.netdev_max_backlog=16384
 sysctl -w net.ipv4.tcp_max_syn_backlog=8192
 sysctl -w net.ipv4.tcp_syncookies=1

Ports

If many outgoing connections are made (e.g. on load generators), then the operating system may run low on ports. Thus it is best to increase the port range used and allow reuse of sockets in TIME_WAIT (note that tcp_tw_recycle is known to cause problems for clients behind NAT, and has been removed from recent Linux kernels):

 sysctl -w net.ipv4.ip_local_port_range="1024 65535"
 sysctl -w net.ipv4.tcp_tw_recycle=1
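
Port pressure can be observed with two read-only checks; a sketch using the /proc interface and ss (the successor to netstat):

```shell
# The ephemeral port range in use: low and high bounds
cat /proc/sys/net/ipv4/ip_local_port_range

# How many sockets are currently parked in TIME_WAIT; if this number
# approaches the size of the port range, new outgoing connections fail
ss -tan state time-wait | wc -l
```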

File Descriptors

Busy servers and load generators may run out of file descriptors as the system defaults are normally low. These can be increased for a specific user in /etc/security/limits.conf:

 theusername		hard nofile	40000
 theusername		soft nofile	40000
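
The limits.conf entries take effect at the next login of theusername; the active limits for a session can then be verified with the shell's ulimit builtin:

```shell
# Soft limit on open file descriptors for the current shell session
ulimit -n

# Hard limit: the ceiling to which the soft limit may be raised
ulimit -Hn
```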


Congestion Control

Linux supports pluggable congestion control algorithms. To get a list of congestion control algorithms that are available in your kernel run:

 sysctl net.ipv4.tcp_available_congestion_control

If cubic and/or htcp are not listed, you will need to research the control algorithms for your kernel. You can try setting the control to cubic with:

 sysctl -w net.ipv4.tcp_congestion_control=cubic
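
On many distributions the non-default algorithms are built as loadable kernel modules and only show up in the available list once loaded. A hedged sketch (requires root, and assumes the tcp_htcp module was built for your kernel):

```shell
# Load the H-TCP congestion control module, then re-check what the
# kernel advertises as available
modprobe tcp_htcp
sysctl net.ipv4.tcp_available_congestion_control

# Select it as the default for new connections
sysctl -w net.ipv4.tcp_congestion_control=htcp
```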


Mac OS

Windows

Seriously???


Network Tuning

  • Intermediaries such as nginx may use non-persistent HTTP/1.0 connections by default. Make sure that persistent HTTP/1.1 connections are used.

Jetty Tuning

Connectors

Acceptors

Low Resource Limits

Thread Pool