E4/Resources/Semantic File System/Use Cases
Semantic File System addresses some requirements and use cases elaborated at E4/Resources/Work Areas and E4/Resources/Requirements, as well as further requirements coming from RAP, RCP, and from supporting REST-based access to remote content.
Support new distributed and collaborative scenarios
The IT world moves towards new scenarios that are distributed and collaborative (e.g. clouds, client/server, web-based). In order to keep pace with these developments, it is necessary to enable resource-based access to content that resides outside the local file system. A distributed, collaborative environment has some major differences compared to classical file-based development scenarios:
- It is often not known in advance which content is needed, and it is not feasible to fetch all content up front; in classical file-based development scenarios, by contrast, everything is structured into projects or OSGi bundles.
- Content often resides in many places/repositories, whereas in classical file-based development all content of an Eclipse project resides in a single file-based version control system.
- Collaboration happens much more frequently and requires automated solutions, in contrast to the rather infrequent and manual check-ins/check-outs of source code. For example, email or instant messaging clients automatically download new mails/posts without forcing the user to press a button or type a command.
- Collaboration very often means concurrent read/write access to the same remote content, requiring means to control such access, e.g. pessimistic resource locking, as opposed to the optimistic change model of file-based source repositories.
SFS addresses the above requirements by
- providing an ability and APIs to lazily retrieve remote content
- providing an ability to flexibly combine content from different sources within one resource hierarchy at arbitrary granularity (down to a single folder or file)
- providing an API to programmatically control the bidirectional content transfer between client and server
- providing an API for pessimistic and optimistic resource locking
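To make the lazy-retrieval and locking abilities above concrete, here is a minimal, self-contained sketch; all names (LazyRemoteStore, getContent, lock) are hypothetical illustrations and do not reflect the actual SFS API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: a store that fetches remote content lazily and
// supports pessimistic locking. Names do not reflect the real SFS API.
class LazyRemoteStore {
    private final Map<String, String> remote;                   // simulates the server
    private final Map<String, String> cache = new HashMap<>();  // local content cache
    private final Set<String> locked = new HashSet<>();
    public int remoteFetches = 0;                               // counts "network" round trips

    LazyRemoteStore(Map<String, String> remoteContent) {
        this.remote = remoteContent;
    }

    // Content is transferred only on first access, then served from the cache.
    public String getContent(String path) {
        return cache.computeIfAbsent(path, p -> {
            remoteFetches++;                                    // network call happens here
            return remote.get(p);
        });
    }

    // Pessimistic lock: fails if the lock is already held.
    public boolean lock(String path) {
        return locked.add(path);
    }

    public void unlock(String path) {
        locked.remove(path);
    }
}
```

With such a store, a second getContent call for the same path is served from the local cache without network communication, and a lock request fails while the lock is held elsewhere.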
Citing requirements from E4/Resources/Work Areas#Non-Local_Resources:
Allow parts of the workspace to be virtual or non-local, represented by "The Network". Make non-local / non-physical elements first class citizens.
EFS has shown that transparently adding remote stuff is problematic: In terms of backward compatibility, old-style clients of the old API will never treat network failures and latency properly for non-local resources. It must be explicit.
But the concept of "Deep Refresh" is problematic with remote resources and new concepts may be needed.
Caching and Synchronization of workspace resources with other partners. Currently exposed by ISynchronizer in core.resources but isn't that another layer? In terms of separation of concerns, think about layers for remote support.
SFS provides a new approach to handle non-local resources that reside on "The Network" by
- clearly separating which calls are executed locally and which require network communication
- introducing the "client view" that flexibly maps pieces of "The Network" into the local resource hierarchy
- introducing a new API to maintain the "client view"
- introducing a cache service for content caching within the "client view"
- handling "Deep Refresh" locally on the "client view" without network communication
- introducing a new API for explicit synchronization between the "client view" and "The Network" (including cache updates)
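As an illustration of the prefix-based mapping a "client view" might perform, here is a small sketch; the class and its methods are hypothetical and do not reflect the real SFS API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a "client view": pieces of "The Network" are
// mapped into the local resource hierarchy by path prefix.
class ClientView {
    // local path prefix -> remote root URI
    private final Map<String, String> mappings = new HashMap<>();

    public void map(String localPrefix, String remoteRoot) {
        mappings.put(localPrefix, remoteRoot);
    }

    // Resolve a local path to a remote URI via the longest matching prefix,
    // so nested mappings can override outer ones at finer granularity.
    public String resolve(String localPath) {
        String best = null;
        for (String prefix : mappings.keySet()) {
            if (localPath.startsWith(prefix)
                    && (best == null || prefix.length() > best.length())) {
                best = prefix;
            }
        }
        if (best == null) return null; // purely local resource
        return mappings.get(best) + localPath.substring(best.length());
    }
}
```

Because the mapping table itself is local, operations such as a "Deep Refresh" over the view can be answered without any network communication; only an explicit synchronization would contact "The Network".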
Integration with remote repositories and other sources of information via REST
Addressing and Hierarchies
EFS, WebDAV and the like all work with rather deep and well-balanced file-folder hierarchies; having more than a thousand files in one folder is rather an exception. REST is inherently flat. A REST URI may contain slashes like a file system path, but there is no relationship between resources that share the same URI prefix. There is no counterpart for Folder.getChildren() in REST. This is intentional, in order to handle situations where there is a huge or even infinite number of resources beneath a root URI.
SFS addresses the above by implementing a flexible and programmable mapping between Eclipse Resource hierarchies and flat REST-based content. SFS also provides a service to translate between REST URIs and local Resource paths.
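A translation service of this kind could look roughly like the following sketch; the class name, the fixed base/root pair, and the choice of percent-encoding slashes in REST resource ids are all assumptions for illustration, not the real SFS service:

```java
// Hypothetical sketch of a translation service between flat REST URIs
// and local resource paths. Not the real SFS API.
class RestPathTranslator {
    private final String restBase;   // e.g. "http://host/api/items" (illustrative)
    private final String localRoot;  // e.g. "/project/items" (illustrative)

    RestPathTranslator(String restBase, String localRoot) {
        this.restBase = restBase;
        this.localRoot = localRoot;
    }

    // REST ids may contain slashes without implying hierarchy, so we
    // percent-encode them to keep the id in a single local file name.
    public String toLocalPath(String uri) {
        String id = uri.substring(restBase.length() + 1);
        return localRoot + "/" + id.replace("/", "%2F");
    }

    public String toRestUri(String localPath) {
        String name = localPath.substring(localRoot.length() + 1);
        return restBase + "/" + name.replace("%2F", "/");
    }
}
```

The two methods are inverses, so a resource can round-trip between its REST URI and its local path without losing the flat REST structure.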
HTTP/MIME Content-Types vs. Eclipse Content Types
Eclipse content type detection is based on file extensions and analysis of resource content, whereas the HTTP/REST world uses MIME-based content types that are exchanged between server and client. SFS provides the ability to work with MIME-based content types for SFS-controlled resources.
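One plausible shape for this is a resolver that prefers the server-provided MIME type and only falls back to extension heuristics; the mapping table and method names below are illustrative assumptions, not the real SFS API (the content type ids are standard Eclipse ones):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: content type detection that prefers the MIME type
// delivered by the server over file-extension heuristics.
class ContentTypeResolver {
    private static final Map<String, String> MIME_TO_CONTENT_TYPE = new HashMap<>();
    private static final Map<String, String> EXTENSION_TO_CONTENT_TYPE = new HashMap<>();
    static {
        // Illustrative entries; a real table would be contributed/extensible.
        MIME_TO_CONTENT_TYPE.put("application/xml", "org.eclipse.core.runtime.xml");
        MIME_TO_CONTENT_TYPE.put("text/plain", "org.eclipse.core.runtime.text");
        EXTENSION_TO_CONTENT_TYPE.put("xml", "org.eclipse.core.runtime.xml");
        EXTENSION_TO_CONTENT_TYPE.put("txt", "org.eclipse.core.runtime.text");
    }

    // For SFS-controlled resources the server-provided MIME type wins;
    // resources without one fall back to the file extension.
    public static String resolve(String mimeType, String fileName) {
        if (mimeType != null && MIME_TO_CONTENT_TYPE.containsKey(mimeType)) {
            return MIME_TO_CONTENT_TYPE.get(mimeType);
        }
        int dot = fileName.lastIndexOf('.');
        String ext = dot < 0 ? "" : fileName.substring(dot + 1);
        return EXTENSION_TO_CONTENT_TYPE.getOrDefault(ext, "unknown");
    }
}
```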
RAP and other server-side Scenarios
RAP and other Eclipse-based server-side scenarios would like to be able to run the e4 workbench/workspace within the Eclipse server. A server-side solution rules out using a real file system to store resource content and metadata. SFS provides a file-system-independent implementation that may run on a server, with pluggable persistence of resource metadata (e.g. CDO or a database) and cached content (e.g. memory or a database).
RCP scenarios have several major differences from IDE scenarios:
- RCP users are often non-developers and are not used to (and not willing to) work with file systems.
- RCP users expect automated and implicit synchronization of content between RCP client and server, in contrast to developers who need and want full control over check-in/check-out of sources.
- RCP users expect that, at the UI level, they deal not with projects, folders, files and their hierarchies but with model objects such as mails, appointments, sales orders, data types, etc.
SFS addresses the above requirements by
- providing an implementation that fully virtualizes and hides the file system from the user
- providing an API to control and automate content synchronization between client and server
- providing all functionality via APIs in order to allow building alternative UIs for RCP apps
One of the important constraints is a smooth, stepwise migration of existing 3.x software. There are many tools and frameworks built on top of Eclipse Resources that cannot be migrated in one shot. SFS introduces new concepts without breaking basic assumptions that clients of Eclipse Resources have today:
- Fast access to resource hierarchy and content
- Eclipse Resources as single medium shared between tools with concurrent access protected by scheduling rules
- Usage of resource change events, markers, content types, IFileEditorInput
- Team Support
SFS fully fulfills the EFS contract so that all toolsets that work with resources solely via Eclipse Resources APIs can run on SFS. For example, it is possible to have a JDT project working on SFS without any changes to JDT tools.
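The principle can be illustrated with a toy interface: a tool coded purely against a store abstraction runs unchanged no matter what backs the store, just as JDT runs unchanged on any file system that fulfills the EFS contract. The interface below is deliberately simplified and is not the real org.eclipse.core.filesystem API:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Toy illustration of the idea behind the EFS contract: tools see only
// the abstract store, so the backing implementation is interchangeable.
interface Store {
    byte[] fetchContent(String path);
}

// A "semantic" store could serve content from a remote cache instead of disk.
class InMemorySemanticStore implements Store {
    private final Map<String, String> cache;

    InMemorySemanticStore(Map<String, String> cache) {
        this.cache = cache;
    }

    public byte[] fetchContent(String path) {
        return cache.get(path).getBytes(StandardCharsets.UTF_8);
    }
}

// A tool that only sees the Store interface, analogous to JDT only
// seeing Eclipse Resources/EFS.
class WordCountTool {
    static int countWords(Store store, String path) {
        String text = new String(store.fetchContent(path), StandardCharsets.UTF_8).trim();
        return text.isEmpty() ? 0 : text.split("\\s+").length;
    }
}
```

Swapping InMemorySemanticStore for a disk-backed implementation would not require touching WordCountTool at all, which is exactly the property the EFS contract gives existing toolsets on SFS.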