
Difference between revisions of "SMILA/Documentation/DataObjectTypesAndBuckets"


Revision as of 05:43, 12 July 2011

Buckets and Data Object Types

Buckets

A bucket is a data container comprising logically grouped data objects that are to be processed by some asynchronous workflow in SMILA. All data objects in a bucket are physically located in the same store and therefore share the same naming convention. For example, data objects could be sequences of records (so-called "record bulks") or indices. Also, the contents within one bucket have the same structure, as determined by the bucket's data object type. The actual data object types from which you can select when creating a bucket are predefined by the software and cannot be changed at runtime.

An important aspect of buckets is that they can be persistent or transient: Objects in transient buckets are deleted automatically when the workflow run that created them has ended, while objects in persistent buckets survive until they are deleted explicitly or another workflow uses them. Whereas persistent buckets have to be created explicitly via the respective REST/JSON API call (see below) before they can be used in a workflow, transient ones are generated automatically by the system based on the definition of the respective workflow and need not (and also cannot) be created explicitly via this API. Similarly, a store referenced by a transient bucket is created automatically by the Job Manager, but a store referenced by a persistent bucket must be created beforehand.

Persistent buckets can have parameters that are required by the referenced data object type or by the involved workers when the bucket is referenced in a workflow.

List, create, and modify buckets

All buckets

Use a GET request to list all persistent buckets. Transient buckets are not shown in the list.

Use POST to add new persistent buckets or to edit them. Transient buckets cannot be created explicitly via this API.

Supported operations:

  • GET: Returns a list of all buckets. If there are no buckets defined, you will get an empty list.
  • POST: Add a new persistent bucket or edit an existing one. The bucket definition must at least contain the name and the data object type of the bucket. Bucket parameters are optional. If the bucket already exists, it will be updated after successful validation. However, the changes will not apply until the next job run, i.e. the current job run is not influenced by the changes.

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/buckets.
  • Allowed methods:
    • GET
    • POST
  • Response status codes:
    • 200 OK: Upon successful execution (GET).
    • 201 CREATED: Upon successful execution (POST). The response body is a JSON object giving the name and URI of the created bucket.
    • 400 Bad Request: If the parameters in the bucket definition would result in incorrect store names. The response body contains an error message in JSON format.


Examples:

To list all buckets:

GET /smila/jobmanager/buckets/

The result would be:

HTTP/1.x 200 OK

{
   "buckets":[
      {
         "name":"myBucket",
         "url":"http://localhost:8080/smila/jobmanager/buckets/myBucket/"
      },
      {
         "name":"myOtherBucket",
         "url":"http://localhost:8080/smila/jobmanager/buckets/myOtherBucket/"
      }
   ]
}

To create a bucket:

POST /smila/jobmanager/buckets/

{
  "name": "myBucket",
  "type": "recordBulks",
  "parameters": 
  {
     "store": "mystore"
  }
}

The result would be:

HTTP/1.x 201 CREATED

{
  "name" : "myBucket",
  "url" : "http://localhost:8080/smila/jobmanager/buckets/myBucket/"
}
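
For scripted use, the create call above can be issued from Python's standard library. The following is only a sketch, not part of SMILA: the helper names `bucket_definition` and `create_bucket` are hypothetical, and `create_bucket` assumes a SMILA instance listening at the given base URL.

```python
import json
from urllib.request import Request, urlopen

def bucket_definition(name, data_object_type, **parameters):
    """Build the JSON body for POST /smila/jobmanager/buckets/.

    "name" and "type" are required; "parameters" is optional.
    """
    definition = {"name": name, "type": data_object_type}
    if parameters:
        definition["parameters"] = parameters
    return definition

def create_bucket(base_url, definition):
    """POST the definition; on 201 CREATED the response body holds name and url."""
    request = Request(
        base_url + "/smila/jobmanager/buckets/",
        data=json.dumps(definition).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(request) as response:
        return json.load(response)

# Matches the create-bucket example above:
payload = bucket_definition("myBucket", "recordBulks", store="mystore")
```

For example, `create_bucket("http://localhost:8080", payload)` would then return the name/url object shown in the result above.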

Specific buckets

Use a GET request to get the definition of a bucket. Use DELETE to delete a bucket.

Supported operations:

  • GET: Returns the definition of the given bucket.
  • DELETE: Deletes a bucket. Buckets which are still used in a workflow definition cannot be deleted.

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/buckets/<bucket-name>.
  • Allowed methods:
    • GET
    • DELETE
  • Response status codes:
    • 200 OK: Upon successful execution (GET, DELETE). When trying to delete a bucket that does not exist, the call will be ignored (DELETE) and 200 OK is returned nevertheless.
    • 404 Not Found: In case <bucket-name> does not exist (GET).
    • 400 Bad Request: If the bucket is referenced by an existing workflow (DELETE). A bucket that is predefined in the configuration cannot be removed either.

Examples:

To get a bucket definition:

GET /smila/jobmanager/buckets/myBucket/

The result would be:

HTTP/1.x 200 OK

{
  "name" : "myBucket",
  "type" : "recordBulks",
  "parameters" : {
    "store" : "mystore"
  }
}

To delete a bucket:

DELETE /smila/jobmanager/buckets/myBucket/

The result would be:

HTTP/1.x 200 OK

Data Object Types

The data object type definition is provided with the software and cannot be added at runtime. It defines the default data object modes and default name patterns required by the software.

  • The data object type definition should not be changed by the user.
  • Data object types contain parameter variables denoted by "${...}". System parameter variables (names starting with "_") are resolved automatically. Values for other variables must be set as a bucket parameter or in a higher-level definition, e.g. as a workflow or job parameter. Where a type specifies both persistent and transient data, you only have to resolve the parameter variables defined for the respective mode.
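
To illustrate the expansion, here is a minimal Python sketch of that substitution (a simplification; the `resolve` helper and the example system-variable values are hypothetical, not SMILA code):

```python
import re

# System parameter variables (leading "_") are supplied by the runtime;
# the values here are hypothetical examples.
SYSTEM_VARS = {"_bucketName": "myBucket", "_uuid": "example-uuid"}

def resolve(pattern, bucket_params):
    """Substitute ${...} variables in a store or object name pattern."""
    def lookup(match):
        name = match.group(1)
        if name.startswith("_"):
            return SYSTEM_VARS[name]   # system variables: resolved automatically
        return bucket_params[name]     # others: from bucket/workflow/job parameters
    return re.sub(r"\$\{([^}]+)\}", lookup, pattern)

# Patterns from the persistent "recordBulks" definition:
store = resolve("${store}", {"store": "mystore"})      # -> "mystore"
obj = resolve("${_bucketName}/${_uuid}", {})           # -> "myBucket/example-uuid"
```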

List data object types

All data object types

Use a GET request to retrieve information about all data object types. This API is read-only: You cannot add or modify data object types at runtime.

Supported operations:

  • GET: Returns a list of all data object types.

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/dataobjecttypes/.
  • Allowed methods:
    • GET
  • Response status codes:
    • 200 OK: Upon successful execution.

Examples:

To list all data object types:

GET /smila/jobmanager/dataobjecttypes/

The result would be:

HTTP/1.x 200 OK

{
   "dataObjectTypes":[
      {
         "name":"recordBulks",
         "url":"http://localhost:8080/smila/jobmanager/dataobjecttypes/recordBulks/"
      }
   ]
}

Specific data object type

Use a GET request to retrieve information about a specific data object type.

Supported operations:

  • GET: Returns the definition of a specific data object type.

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/dataobjecttypes/<dataobjecttype-name>/.
  • Allowed methods:
    • GET
  • Response status codes:
    • 200 OK: Upon successful execution.

Examples:

To get the definition of one data object type:

GET /smila/jobmanager/dataobjecttypes/recordBulks/

The result would be:

HTTP/1.x 200 OK

{
   "name":"recordBulks",
   "persistent":{
      "object":"${_bucketName}/${_uuid}",
      "store":"${store}"
   },
   "transient":{
      "object":"${_bucketName}/${_uuid}",
      "store":"${tempStore}"
   }
}

Available data object types:

Currently, there is only one data object type available, namely the type "recordBulks" (see its definition above).

The "recordBulks" type allows for sequences of records (record bulks). It is the standard intermediate object type in workflows, meaning there can be workers in a workflow that use objects of the "recordBulks" type as their input data and also workers that write objects of the same type as their result.

The "recordBulks" type allows both transient and persistent data. If a persistent bucket uses this type, you have to set the value of the ${store} variable. Conversely, when a transient bucket uses this type, you have to set the value of the ${tempStore} variable. The variables ${store} and ${tempStore} define the name of the object store in which the respective data objects are stored.

The system variable ${_uuid} need not be set by the user; it is set automatically by the system. A new uuid is generated only when creating a new bulk. When transforming an existing bulk, its uuid is reused.
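
That uuid-reuse rule can be sketched as follows (again a simplification in Python; `bulk_object_name` is a hypothetical helper, not SMILA code):

```python
import uuid

def bulk_object_name(bucket_name, existing_uuid=None):
    """Build an object name per the "${_bucketName}/${_uuid}" pattern.

    A fresh uuid is generated only for a new bulk; when transforming an
    existing bulk its uuid is reused, so the object name stays stable.
    """
    bulk_uuid = existing_uuid or str(uuid.uuid4())
    return "%s/%s" % (bucket_name, bulk_uuid), bulk_uuid

name_new, u = bulk_object_name("myBucket")                    # new bulk: fresh uuid
name_same, _ = bulk_object_name("myBucket", existing_uuid=u)  # transform: uuid reused
```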
