This plan lays out the work areas required to re-architect the current Orion Java server for very large scale deployment. Not all of this work can be scoped within a single release; rather, this document is intended as a roadmap of the work required to make this happen. Work items are roughly sorted in priority order.
- 1 Assessment of current state
- 2 Work areas for scalability
Assessment of current state
Much of the current Orion Java server is designed to enable scalability. There are a number of problem areas detailed in the next section, but the current server has some existing strengths:
- Nearly stateless. Aside from Java session state for authentication and some metadata caching for performance, the current server is stateless. The server can be killed and restarted at any time with no data loss. All data is persisted to disk at the end of each HTTP request, with the exception of asynchronous long-running operations that occur in a background thread and span multiple HTTP requests.
- Fast startup. The current server starts within 2-3 seconds, most of which is Java and OSGi startup. No data is loaded until client requests start to arrive. The exception is the background search indexer, which starts crawling in a background thread immediately. If the search index has been deleted (common practice on server upgrades) it can take several minutes to complete indexing. This in no way blocks client requests; search results will just be potentially incomplete until the index is fully populated.
- Externalized configuration. Most server configuration is centrally managed in the orion.conf file. There are a small number of properties specified in orion.ini as well. Generally server configuration can be managed without code changes.
- Scalable core technologies. All of the core technologies we are using on the server are known to be capable of significant scaling: Equinox, Jetty, JGit, and Apache Solr. In some cases we will need to change how these services are configured, but there is nothing we need to throw away and rebuild from scratch. Solr does not handle search crawling itself, but we could use complementary packages such as Apache Nutch for crawling if needed.
Work areas for scalability
Metadata backing store
As of Orion 2.0, the server metadata is stored in Equinox preferences. There are four preference trees for the various types of metadata: Users, Workspaces, Projects, and Sites. These are currently persisted in four files, one per preference tree. There are a number of problems with this implementation:
- Storing large amounts of metadata in a single file that needs to be read on most requests is a severe bottleneck.
- Migration of metadata format across builds/releases is ad-hoc and error-prone
- Individual preference node access/storage is thread safe, but many Orion API calls require multiple reads and writes across multiple nodes. There is no way to synchronize access and updates across nodes.
- Metadata file format does not lend itself to administrative tools for managing users, migrating, etc.
- There is no clean API to the backing store and the interaction between the request handling services and the backing store is tangled.
We need a clean backing store API, and a scalable implementation of that API. The actual backing storage should be pluggable, for example raw files or a database. Interaction with the API needs to be performant and transactional. There also needs to be an easy way to administer the metadata, for example deleting inactive accounts or migrating account data.
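As a sketch of what a clean, pluggable backing store API might look like, the interface and in-memory implementation below are illustrative only; the names (IMetadataStore, putUser, and so on) are assumptions, not the actual Orion API, and a real backend could persist to raw files or a database behind the same interface.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical backing store API; names are illustrative, not Orion's real API.
interface IMetadataStore {
    void putUser(String userId, Map<String, String> properties);
    Map<String, String> getUser(String userId); // null if absent
    void deleteUser(String userId);
}

// Minimal in-memory implementation for illustration. A raw-file or database
// backend would implement the same interface, keeping callers unchanged.
class InMemoryMetadataStore implements IMetadataStore {
    private final Map<String, Map<String, String>> users = new ConcurrentHashMap<>();

    public void putUser(String userId, Map<String, String> properties) {
        users.put(userId, properties);
    }

    public Map<String, String> getUser(String userId) {
        return users.get(userId);
    }

    public void deleteUser(String userId) {
        users.remove(userId);
    }
}
```

Administrative tools (deleting inactive accounts, migrating account data) would then be written against the interface rather than against a particular file format.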
Move search engine/indexer to separate process(es)
Our interaction with the search engine is entirely over HTTP to an in-process Solr server. The Solr server should be forked out of process so it can be scaled out independently. The indexer is currently run as a pair of background threads in the Orion server process (one for updating, one for handling deletion). This should also be able to run from a separate process.
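Since the interaction is already over HTTP, moving Solr out of process is largely a matter of pointing requests at a configured external base URL. The sketch below only shows building a query URL against a remote Solr instance; the SolrQueryBuilder class and the example host are hypothetical, not existing Orion code.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: with Solr forked out of process, the front end issues queries to a
// remote Solr base URL read from server configuration. The class and URL
// layout here are illustrative assumptions.
class SolrQueryBuilder {
    static String buildQueryUrl(String solrBaseUrl, String userQuery) {
        // Encode the raw user query so it is safe in a URL query parameter.
        String q = URLEncoder.encode(userQuery, StandardCharsets.UTF_8);
        return solrBaseUrl + "/select?wt=json&q=" + q;
    }
}
```

The same indirection applies to the indexer processes: they would push updates and deletions to the external Solr URL instead of an in-process server.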
Move long running Git operations out of process
Long-running Git operations are currently run in background threads within the Orion server process. These should be moved to separate Git processes so they can be scaled out independently. This would also enable Git operations to continue running if the front-end node falls over or is switched out. This can be implemented with a basic job queue: the Orion front-end server writes a description of the long-running Git operation to a persistent queue, and a separate Git server polls that queue and performs the operations.
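A minimal sketch of the job-queue idea, using a shared directory as the persistent queue; the FileJobQueue class, the file naming, and the job format are illustrative assumptions, and a production deployment would more likely use a dedicated queueing system. The key property shown is that a job survives a front-end restart and is claimed atomically by a worker.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Optional;
import java.util.stream.Stream;

// Sketch of a persistent job queue backed by a shared directory.
class FileJobQueue {
    private final Path dir;

    FileJobQueue(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    // Front-end side: persist the job description so it survives restarts.
    Path enqueue(String jobDescription) throws IOException {
        Path job = dir.resolve("job-" + System.nanoTime() + ".txt");
        return Files.write(job, jobDescription.getBytes());
    }

    // Git-worker side: claim the oldest job by renaming it, so two workers
    // cannot pick up the same job.
    Optional<String> poll() throws IOException {
        try (Stream<Path> jobs = Files.list(dir)) {
            Optional<Path> next = jobs
                .filter(p -> p.toString().endsWith(".txt"))
                .sorted()
                .findFirst();
            if (next.isEmpty()) {
                return Optional.empty();
            }
            Path claimed = next.get()
                .resolveSibling(next.get().getFileName() + ".claimed");
            Files.move(next.get(), claimed, StandardCopyOption.ATOMIC_MOVE);
            return Optional.of(new String(Files.readAllBytes(claimed)));
        }
    }
}
```

Because the queue is persistent, a Git worker can finish (or retry) an operation even if the front-end node that enqueued it has been recycled.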
Move Java session state out of process
Authentication between the client and the Orion server is based on Java session state. We should investigate externalizing this state to external storage such as memcached. This would allow sessions to persist while Orion servers are recycled, and allow subsequent requests from the same client to be handled by a different Orion server instance than the one that initiated authentication.
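The sketch below illustrates the shape of an externalized session store: any front-end instance resolves a session ID against a shared store instead of its own JVM memory. The SessionStore interface and its in-memory stand-in are hypothetical, with memcached (as suggested above) being one possible real backend.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical externalized session store; any Orion instance can resolve a
// session by ID, so requests need not stick to the authenticating node.
interface SessionStore {
    void put(String sessionId, String userId, long expiresAtMillis);
    String get(String sessionId, long nowMillis); // null if absent or expired
}

// In-memory stand-in for the external store so the sketch is self-contained.
class InMemorySessionStore implements SessionStore {
    private static final class Entry {
        final String userId;
        final long expiresAt;
        Entry(String userId, long expiresAt) {
            this.userId = userId;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> sessions = new ConcurrentHashMap<>();

    public void put(String sessionId, String userId, long expiresAtMillis) {
        sessions.put(sessionId, new Entry(userId, expiresAtMillis));
    }

    public String get(String sessionId, long nowMillis) {
        Entry e = sessions.get(sessionId);
        if (e == null || e.expiresAt <= nowMillis) {
            return null; // expiry mirrors memcached-style TTL behavior
        }
        return e.userId;
    }
}
```

With this indirection in place, recycling an Orion server does not invalidate sessions, and a load balancer is free to route each request to any instance.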
Externalize log configuration
Orion primarily uses SLF4J for logging, with output written to stdout/stderr of the Orion server process. The log configuration is currently stored in a bundle of the Orion server. It needs to be possible to configure logging externally to the Orion build itself; that is, the Orion server configuration controls the logging rather than settings embedded in the code. We should continue writing log data to stdout so that external applications can be used to persist logs for remote monitoring and debugging.
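Assuming a Logback backend behind SLF4J, the configuration could be moved out of the bundle and referenced at launch with Logback's standard -Dlogback.configurationFile system property; the file path and pattern below are examples only.

```xml
<!-- Example logback.xml kept outside the Orion build, referenced at launch:
     -Dlogback.configurationFile=/etc/orion/logback.xml
     (the /etc/orion path is illustrative) -->
<configuration>
  <!-- Keep writing to stdout so external tools can collect and persist logs -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

Administrators can then change log levels or formats without rebuilding or patching any Orion bundle.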
Search for implicit assumptions about system tools (exec calls)
The Orion server is pure Java code, so it should be able to run anywhere that can run a modern JVM (generally Java 7 or higher). We need to validate that we make no assumptions about external system tools (i.e., Runtime.exec calls). We should bundle the Java 7 back end of the Eclipse file system API to enable running any Orion build on arbitrary hardware.
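One way to audit for such assumptions is a simple source scan for process-spawning calls (Runtime.exec and ProcessBuilder); the command below is an illustrative starting point, with SRC standing in for the actual source tree location.

```shell
# Scan Java sources for process-spawning calls that may assume external
# system tools. SRC is a placeholder for the real source tree.
SRC="${SRC:-.}"
grep -rn --include='*.java' \
  -E 'Runtime\.getRuntime\(\)\.exec|new ProcessBuilder' "$SRC" || true
```

Each hit should be reviewed to confirm the invoked tool is either bundled with Orion or genuinely optional on the target platform.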