Orion/Plan/3.0/Server scalability
Revision as of 09:50, 4 April 2013
This plan lays out the work areas required to re-architect the current Orion Java server for very large-scale deployment. Not all of this work can be scoped within a single release; rather, this plan is intended as a roadmap of the work required over the coming releases. Work items are roughly sorted in order of priority.
Metadata backing store
As of Orion 2.0, the server metadata is stored in Equinox preferences. There are four preference trees for the various types of metadata: Users, Workspaces, Projects, and Sites. These are currently persisted in four files, one per preference tree. There are a number of problems with this implementation:
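As an illustration of this layout, the following sketch mimics the per-tree node structure using the JDK's java.util.prefs as a stand-in for Equinox preferences; the node paths and property keys are illustrative, not the actual Orion 2.0 keys.

```java
import java.util.prefs.Preferences;

public class PrefsMetadataDemo {
    // Each top-level node stands in for one of the four persisted trees
    // (Users, Workspaces, Projects, Sites).
    static Preferences tree(String name) {
        return Preferences.userRoot().node("orion-demo/" + name);
    }

    public static void writeUserProperty(String user, String key, String value) {
        // A real flush() would rewrite the entire backing file for the tree --
        // the single-file bottleneck described above.
        tree("Users").node(user).put(key, value);
    }

    public static String readUserProperty(String user, String key, String def) {
        return tree("Users").node(user).get(key, def);
    }

    public static void main(String[] args) {
        writeUserProperty("alice", "FullName", "Alice");
        System.out.println(readUserProperty("alice", "FullName", ""));
    }
}
```

Note how every request that consults user metadata walks the same tree, so the whole tree effectively becomes a shared, file-backed hot spot.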
- Storing large amounts of metadata in a single file that must be read on most requests is a severe bottleneck.
- Migration of the metadata format across builds/releases is ad hoc and error-prone.
- Access to an individual preference node is thread safe, but many Orion API calls require multiple reads and writes across multiple nodes, and there is no way to synchronize access or updates across nodes.
- The metadata file format does not lend itself to administrative tooling for managing users, migration, etc.
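To make the cross-node synchronization gap concrete: an API call such as creating a workspace must update both the Users and Workspaces trees, and nothing today makes that pair of writes atomic. One possible stopgap, sketched below with a hypothetical in-memory store (the class and method names are illustrative, not Orion code), is a store-wide read/write lock around multi-node operations.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

// Hypothetical store: per-node access is already safe, but callers need a way
// to group several node reads/writes into one atomic unit.
public class LockedMetadata {
    private final Map<String, Map<String, String>> trees = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    private Map<String, String> tree(String name) {
        return trees.computeIfAbsent(name, k -> new HashMap<>());
    }

    // Run a multi-node update atomically with respect to other callers.
    public void update(Runnable multiNodeWrite) {
        lock.writeLock().lock();
        try { multiNodeWrite.run(); } finally { lock.writeLock().unlock(); }
    }

    // Run a multi-node read under the shared lock.
    public <T> T read(Supplier<T> multiNodeRead) {
        lock.readLock().lock();
        try { return multiNodeRead.get(); } finally { lock.readLock().unlock(); }
    }

    public void put(String treeName, String key, String value) {
        tree(treeName).put(key, value);
    }

    public String get(String treeName, String key) {
        return tree(treeName).get(key);
    }
}
```

A store-wide lock is simple but serializes all writers, which is exactly why a backing store with real transactions is the more scalable direction.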
Work items
1. Metadata access through a clear API - transactional, CRUD, etc., capable of swapping in a DB implementation
2. Search out of process
3. Long-running Git operations out of process (tasks)
4. Session state out of process (Memcached?)
5. Externalize log configuration
6. Search for implicit assumptions about system tools (exec calls)
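Item 1 above could be shaped roughly as follows: a narrow CRUD interface that callers program against, with a file-backed implementation today and a database-backed one later. This is a hedged sketch, not the actual Orion 3.0 API; the names (MetadataStore, InMemoryMetadataStore) are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative CRUD contract for user metadata; workspace/project/site
// operations would follow the same pattern.
interface MetadataStore {
    void createUser(String uid, Map<String, String> properties);
    Optional<Map<String, String>> readUser(String uid);
    void updateUser(String uid, Map<String, String> properties);
    void deleteUser(String uid);
}

// In-memory reference implementation. A JDBC-backed store could be swapped
// in behind the same interface without touching callers.
class InMemoryMetadataStore implements MetadataStore {
    private final Map<String, Map<String, String>> users = new HashMap<>();

    public synchronized void createUser(String uid, Map<String, String> props) {
        users.put(uid, new HashMap<>(props));
    }

    public synchronized Optional<Map<String, String>> readUser(String uid) {
        // Return a defensive copy so callers cannot mutate the store directly.
        return Optional.ofNullable(users.get(uid)).map(HashMap::new);
    }

    public synchronized void updateUser(String uid, Map<String, String> props) {
        users.computeIfAbsent(uid, k -> new HashMap<>()).putAll(props);
    }

    public synchronized void deleteUser(String uid) {
        users.remove(uid);
    }
}
```

Keeping the interface this small is what makes the later work items tractable: out-of-process search, task, and session services can all talk to the same store abstraction rather than to preference files.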