EclipseLink/UserGuide/JPA/Advanced JPA Development/Data Partitioning

From Eclipsepedia

Data Partitioning

With data partitioning, you can subdivide a database table, index or index-organized table into smaller units. That makes it possible to manage and access those objects at a finer level of granularity, thereby improving manageability, performance, and availability. For example, data partitioning facilitates load-balancing and replicating data across multiple different databases or across a database cluster.

Partitioning Policies

You configure data partitioning using partitioning policies. The different kinds of policies are:

  • CustomPartitioningPolicy - Defines a user-defined partitioning policy. Loading of the policy class is deferred until initialization so that it can be configured through metadata.
  • FieldPartitioningPolicy - Partitions access to a database cluster by a field value from the object, such as the object's ID, location, or tenant. All write or read requests for objects with a given field value are sent to the same server. If a query does not include the field as a parameter, it can either be sent to all servers and unioned, or left to the session's default behavior.
  • HashPartitioningPolicy - Partitions access to a database cluster by the hash of a field value from the object, such as the object's location or tenant. The hash indexes into the list of connection pools. All write or read requests for objects with that hash value are sent to the same server. If a query does not include the field as a parameter, it can either be sent to all servers and unioned, or left to the session's default behavior.
  • PartitioningPolicy - Partitions the data for a class across multiple different databases or across a database cluster such as Oracle Real Application Clusters (RAC). Partitioning can provide improved scalability by allowing multiple database machines to service requests. (If multiple partitions are used to process a single transaction, JTA should be used for proper XA transaction support.)
  • RangePartitioningPolicy - Partitions access to a database cluster by a field value from the object, such as the object's ID, location, or tenant. Each server is assigned a range of values, and all write or read requests for objects with a value in that range are sent to that server. If a query does not include the field as a parameter, it can either be sent to all servers and unioned, or left to the session's default behavior. (Documented in Javadoc, not in the specification.)
  • ReplicationPartitioningPolicy - Sends requests to a set of connection pools. This policy is for replicating data across a cluster of database machines. Only modification queries are replicated.
  • RoundRobinPartitioningPolicy - Sends requests in a round-robin fashion to the set of connection pools. It is for load balancing read queries across a cluster of database machines. It requires that the full database be replicated on each machine, so it does not support partitioning. The data should either be read-only, or writes should be replicated on the database.
  • UnionPartitioningPolicy - Sends queries to all connection pools and unions the results. This is for queries or relationships that span partitions when partitioning is used, such as on a ManyToMany cross partition relationship.
  • ValuePartitioningPolicy - Partitions access to a database cluster by a field value from the object, such as the object's location or tenant. Each value is assigned a specific server. All write or read requests for objects with that value are sent to the server. If a query does not include the field as a parameter, then it can be sent to all servers and unioned, or it can be left to the session's default behavior.
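The hash-, value-, and range-based policies above all share the same routing idea: derive an index into the list of connection pools from a field value, so that the same value always reaches the same node. A minimal sketch of the hash-based routing step (the pool names and the modulo scheme are illustrative assumptions, not EclipseLink's actual implementation):

```java
import java.util.List;

// Illustrative sketch of hash-based routing across connection pools.
// EclipseLink's HashPartitioningPolicy is more involved; this only shows
// the core idea: the same field value always maps to the same pool.
public class HashRouting {
    private final List<String> connectionPools;

    public HashRouting(List<String> connectionPools) {
        this.connectionPools = connectionPools;
    }

    /** Map a partition field value (e.g. a tenant ID) to a connection pool. */
    public String route(Object fieldValue) {
        int index = Math.abs(fieldValue.hashCode() % connectionPools.size());
        return connectionPools.get(index);
    }

    public static void main(String[] args) {
        HashRouting router = new HashRouting(List.of("node1", "node2", "node3"));
        // The same tenant always routes to the same node within a run.
        System.out.println(router.route("tenantA").equals(router.route("tenantA"))); // prints "true"
    }
}
```

A query that does not carry the partition field cannot be routed this way, which is why the policies fall back to unioning across all pools or to the session default.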

All policies provide an exclusive connection option. This assigns an accessor to the client session on the first query execution and uses that connection for the duration of the session. This ensures that the entire transaction stays on the same node.

Partitioning policies are globally-named objects in a persistence unit and are reusable across multiple descriptors or queries. This improves the usability of the configuration, specifically with JPA annotations and XML.

The persistence unit properties support adding named connection pools in addition to the existing configuration for read/write/sequence.
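For example, named pools for two cluster nodes might be declared through persistence unit properties passed to `Persistence.createEntityManagerFactory`. The `eclipselink.connection-pool.<name>.*` key pattern below is an assumption based on EclipseLink's property naming conventions; verify the exact keys against your release's `PersistenceUnitProperties`:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: declaring two named connection pools ("node1", "node2") via
// persistence unit properties. The key pattern is an assumption drawn
// from EclipseLink's "eclipselink.connection-pool" property namespace.
public class NamedPoolProperties {
    public static Map<String, String> build() {
        Map<String, String> props = new HashMap<>();
        props.put("eclipselink.connection-pool.node1.url", "jdbc:oracle:thin:@rac-node1:1521:orcl");
        props.put("eclipselink.connection-pool.node1.initial", "2");
        props.put("eclipselink.connection-pool.node1.max", "10");
        props.put("eclipselink.connection-pool.node2.url", "jdbc:oracle:thin:@rac-node2:1521:orcl");
        props.put("eclipselink.connection-pool.node2.initial", "2");
        props.put("eclipselink.connection-pool.node2.max", "10");
        return props;
    }
}
```

The pool names ("node1", "node2") are then what the partitioning policies reference as connection pools.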

Data Affinity and Oracle RAC Support

Some cluster-enabled databases define their own cluster-aware DataSource implementation. Some support data affinity and integrate with a data affinity service such as the one EclipseLink provides. Oracle RAC is supported through the Oracle JDBC Universal Connection Pool (UCP). UCP supports a single DataSource into the RAC and can perform its own load balancing and failover. UCP also supports a data affinity callback API: a callback can be registered to give UCP a hint as to which node to direct each connection request to.

A DataSource is required for every node.

A generic DataPartitioningCallback interface is defined in EclipseLink (platform.database.partitioning) to support integration with an external DataSource's data affinity support. The callback is given a chance to register itself with the DataSource on connect. The partitioning policies then set the partition ID on the callback instead of acquiring a connection from a connection pool.
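Wiring a callback in could look like the following sketch. Both the `eclipselink.partitioning.callback` property key and the UCP callback class name are assumptions based on EclipseLink's naming conventions; check `PersistenceUnitProperties` and the package layout of your EclipseLink release before relying on them:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: enabling a data affinity callback through a persistence unit
// property (the map would be passed to Persistence.createEntityManagerFactory).
// The key and class name are assumptions to verify against your release.
public class AffinityCallbackProperty {
    public static Map<String, String> build() {
        Map<String, String> props = new HashMap<>();
        // EclipseLink registers the named callback with the DataSource when
        // connections are first acquired.
        props.put("eclipselink.partitioning.callback",
                  "org.eclipse.persistence.platform.database.oracle.ucp.UCPDataPartitioningCallback");
        return props;
    }
}
```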

Replication is not required.

Configuration Files

orm.xml

<partitioning-policy class="org.acme.MyPolicy"/>
<round-robin-policy replicate-writes="true">
  <connection-pool>node1</connection-pool>
  <connection-pool>node2</connection-pool>
</round-robin-policy>
<random-policy replicate-writes="true">
  <connection-pool>node1</connection-pool>
  <connection-pool>node2</connection-pool>
</random-policy>
<replication-policy>
  <connection-pool>node1</connection-pool>
  <connection-pool>node2</connection-pool>
</replication-policy>
<range-partitioning-policy parameter-name="id" exclusive-connection="true" union-unpartitionable-queries="true">
  <range-partition connection-pool="node1" start-value="0" end-value="100000" value-type="java.lang.Integer"/>
  <range-partition connection-pool="node2" start-value="100001" end-value="200000" value-type="java.lang.Integer"/>
  <range-partition connection-pool="node3" start-value="200001" value-type="java.lang.Integer"/>
</range-partitioning-policy>
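The range-partitioning example above can also be expressed with EclipseLink annotations instead of orm.xml. A sketch, assuming the org.eclipse.persistence.annotations partitioning annotations introduced in EclipseLink 2.2 (the entity, column, and policy names are illustrative, and the attribute names should be checked against your version's Javadoc):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Partitioned;
import org.eclipse.persistence.annotations.RangePartition;
import org.eclipse.persistence.annotations.RangePartitioning;

// Sketch of the orm.xml range-partitioning example as annotations:
// three ranges of ORDER_ID values, each routed to its own connection pool.
@Entity
@RangePartitioning(
    name = "OrderPartitioning",
    partitionColumn = @Column(name = "ORDER_ID"),
    partitionValueType = Integer.class,
    unionUnpartitionableQueries = true,
    partitions = {
        @RangePartition(connectionPool = "node1", startValue = "0", endValue = "100000"),
        @RangePartition(connectionPool = "node2", startValue = "100001", endValue = "200000"),
        @RangePartition(connectionPool = "node3", startValue = "200001")
    })
@Partitioned("OrderPartitioning")
public class Order {
    @Id
    @Column(name = "ORDER_ID")
    private Integer id;
}
```

Because the policy is a named, globally visible object, the same `@Partitioned("OrderPartitioning")` reference can be reused on other entities or queries.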


<!--

Deprecated Connection Pool Properties

These properties are now under the "connection-pool" category and are deprecated.

  • "eclipselink.connection-pool.max"
  • "eclipselink.connection-pool.write.max"
  • "eclipselink.connection-pool.read.max"

-->

Version: 2.2.0 DRAFT