{{Jetty Reference
| introduction =
{{Jetty Redirect|http://www.eclipse.org/jetty/documentation/current/qos-filter.html}}

Jetty supports [[Jetty/Feature/Continuations|Continuations]], which allow non-blocking handling of HTTP requests, so that threads can be allocated in a managed way to provide application-specific [http://en.wikipedia.org/wiki/Quality_of_service Quality of Service] (QoS). The QoSFilter is a utility servlet filter that implements some QoS features.
 
| body =

==Understanding the Problem==
 
===Waiting for Resources===
Web applications frequently use JDBC connection pools to limit the simultaneous load on the database. This protects the database from peak loads, but makes the web application vulnerable to thread starvation. Consider a pool with 20 connections, used by a web application that typically receives 200 requests per second, where each request holds a JDBC connection for 50ms. Each connection can serve 1000/50 = 20 requests per second, so such a pool can service on average 20 &times; 20 = 400 requests per second.

However, if the request rate rises above 400 per second, or if the database slows down (due to a large query) or becomes momentarily unavailable, the thread pool can very quickly accumulate many waiting requests. If, for example, the website is slashdotted or experiences some other temporary burst of traffic and the request rate rises from 400 to 500 requests per second, then 100 requests per second join those waiting for a JDBC connection. Typically, a web server's thread pool contains only a few hundred threads, so a burst or slow DB need only persist for a few seconds to consume the entire web server's thread pool. This is called thread starvation.

The key issue with thread starvation is that it affects the entire web application, and potentially the entire web server. Even if the requests using the database are only a small proportion of the total requests on the web server, all requests are blocked, because all the available threads are waiting on the JDBC connection pool. This represents non-graceful degradation under load and provides a very poor quality of service.
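The capacity arithmetic above can be checked with a few lines of Java. This is purely illustrative; the class and method names are invented for this example and are not part of Jetty:

```java
// Illustrative check of the pool-capacity arithmetic; the names here
// are invented for this example, not part of Jetty.
public class PoolCapacity
{
    // Requests per second a pool can service, given the number of
    // pooled connections and how long each request holds one (ms).
    static int requestsPerSecond(int connections, int holdMillis)
    {
        // each connection serves 1000/holdMillis requests per second
        return connections * (1000 / holdMillis);
    }

    public static void main(String[] args)
    {
        // 20 connections, 50ms per request -> 400 requests/second
        System.out.println(requestsPerSecond(20, 50));
    }
}
```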
 
===Prioritizing Resources===
Consider a web application that is under extreme load. This load might be due to a popularity spike (slashdot), a usage burst (Christmas or close of business), or even a denial-of-service attack. During such periods of load, it is often desirable not to treat all requests as equals, and to give priority to high-value customers or administrative users.
  
The typical behaviour of a web server under extreme load is to use all its threads to service requests and to build up a backlog of unserviced requests. If the backlog grows deep enough, then requests start to time out and users experience failures as well as delays.
  
Ideally, the web application should be able to examine the requests in the backlog, and give priority to high-value customers and administrative users. But with the standard blocking servlet API, it is not possible to examine a request without allocating a thread to that request for the duration of its handling. There is no way to delay the handling of low-priority requests, so if the resources are to be reallocated, then the low-priority requests must all be failed.
  
==Applying the QoSFilter==
  
The Quality of Service Filter (QoSFilter) uses [[Jetty/Feature/Continuations|Continuations]] to avoid thread starvation, prioritize requests and degrade gracefully under load, so that a high quality of service can be provided. When you apply the filter to specific URLs within a web application, it limits the number of active requests being handled for those URLs. Any requests in excess of the limit are suspended. When a request completes handling of a limited URL, one of the waiting requests resumes and can be handled. You can assign a priority to each suspended request, so that high-priority requests resume before lower-priority requests.
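The suspend/resume behaviour described above can be pictured as one FIFO queue per priority level, where each release serves the highest non-empty queue first. The sketch below illustrates that idea only; it is not Jetty's implementation, and the class name is invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of per-priority FIFO queues for suspended requests.
// Illustration of the idea only, not Jetty's implementation.
public class SuspendedQueue<T>
{
    private final Deque<T>[] queues;

    @SuppressWarnings("unchecked")
    public SuspendedQueue(int maxPriority)
    {
        queues = new Deque[maxPriority + 1];
        for (int i = 0; i < queues.length; i++)
            queues[i] = new ArrayDeque<T>();
    }

    // A request over the limit is parked at its priority level.
    public void suspend(T request, int priority)
    {
        queues[priority].addLast(request);
    }

    // When an active request completes, resume the longest-waiting
    // request from the highest non-empty priority level.
    public T resumeNext()
    {
        for (int p = queues.length - 1; p >= 0; p--)
            if (!queues[p].isEmpty())
                return queues[p].pollFirst();
        return null; // nothing waiting
    }
}
```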
===Required JARs===
To use the QoSFilter, these JAR files must be available in WEB-INF/lib:

* $JETTY_HOME/lib/ext/jetty-util.jar
* $JETTY_HOME/lib/ext/jetty-servlets.jar (contains QoSFilter)
  
 
===Sample Configuration===
  
Place the configuration in the webapp's web.xml or [http://wiki.eclipse.org/Jetty/Reference/jetty-web.xml jetty-web.xml]. The default configuration processes ten requests at a time, servicing more important requests first and queuing up the rest. This example raises the limit to fifty concurrent requests:

<source lang="xml">
  <filter>
    <filter-name>QoSFilter</filter-name>
    <filter-class>org.eclipse.jetty.servlets.QoSFilter</filter-class>
    <init-param>
      <param-name>maxRequests</param-name>
      <param-value>50</param-value>
    </init-param>
  </filter>
</source>
===Configuring QoS Filter Parameters===
A [http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Semaphore.html semaphore] polices the maxRequests limit. The filter waits a short time while attempting to acquire the semaphore; the waitMs init parameter controls this wait, avoiding the expense of a suspend if the semaphore soon becomes available. If the semaphore cannot be obtained, Jetty suspends the request for the container's default suspend period, or for the value set as the suspendMs init parameter.

The QoS filter uses the following init parameters:

* <tt>maxRequests</tt> - the maximum number of requests to be serviced at a time. The default is 10.
* <tt>maxPriority</tt> - the maximum valid priority that can be assigned to a request. A request with a high priority value is more important than a request with a low priority value. The default is 10.
* <tt>waitMs</tt> - the length of time, in milliseconds, to wait while trying to accept a new request, used when the maxRequests limit is reached. The default is 50 ms.
* <tt>suspendMs</tt> - the length of time, in milliseconds, for which a request is suspended if it is not accepted immediately. If not set, the container's default suspend period applies. The default is -1 ms.
* <tt>managedAttr</tt> - if set to true, the filter is set as a [http://download.oracle.com/javaee/5/api/javax/servlet/ServletContext.html ServletContext] attribute, with the filter name as the attribute name. This allows a context-external mechanism (for example, JMX via ContextHandler.MANAGED_ATTRIBUTES) to manage the configuration of the filter.
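The accept step described above can be modelled with java.util.concurrent.Semaphore. This is a simplified sketch, not Jetty's actual code: the real filter suspends a rejected request via Continuations rather than simply refusing it, and the class name here is invented:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Simplified model of the accept step; not Jetty's actual code.
// The real filter suspends rejected requests instead of refusing them.
public class AcceptGate
{
    private final Semaphore passes;
    private final long waitMs;

    public AcceptGate(int maxRequests, long waitMs)
    {
        this.passes = new Semaphore(maxRequests, true); // fair ordering
        this.waitMs = waitMs;
    }

    // Wait up to waitMs for a pass; true means the request may be
    // handled now, false means it would be suspended.
    public boolean tryAccept()
    {
        try
        {
            return passes.tryAcquire(waitMs, TimeUnit.MILLISECONDS);
        }
        catch (InterruptedException e)
        {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // Called when a handled request completes, freeing a pass.
    public void release()
    {
        passes.release();
    }
}
```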
  
 
===Mapping to URLs===
  
You can use the <code><filter-mapping></code> syntax to map the QoSFilter to a servlet, either by using the servlet name or by using a URL pattern. In this example, a URL pattern applies the QoSFilter to every request within the web application context:

<source lang="xml">
  <filter-mapping>
    <filter-name>QoSFilter</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>
</source>
  
===Setting the Request Priority===
Requests with higher values have a higher priority. The default request priorities assigned by the QoSFilter are:

* 2 - for any authenticated request
* 1 - for any request with a non-new valid session
* 0 - for all other requests
  
To customize the priority, subclass QoSFilter and override the getPriority(ServletRequest request) method to return an appropriate priority for the request. You can then use this subclass as your QoS filter. Here is a trivial example:

<source lang="java">
import javax.servlet.ServletRequest;
import javax.servlet.http.HttpServletRequest;

import org.eclipse.jetty.servlets.QoSFilter;

public class ParsePriorityQoSFilter extends QoSFilter
{
    protected int getPriority(ServletRequest request)
    {
        String p = ((HttpServletRequest)request).getParameter("priority");
        if (p != null)
            return Integer.parseInt(p);
        return 0;
    }
}
</source>
}}
Latest revision as of 16:41, 24 April 2013


