SMILA/Documentation/WorkerAndWorkflows

Available since SMILA 0.9!


Workers and Workflows

Workers

Worker definition

A worker definition describes the input and output behavior as well as the required parameters of a worker. The definitions are provided with the software and must be known in the system before a worker can be added as an action to a workflow. They cannot be added or edited at runtime and are therefore not intended to be manipulated by the user.

Typically, a worker definition consists of the following parts:

  1. A parameter section declaring the worker's parameters: These parameters must be set either in the workflow or in the job definition when using this worker.
  2. An input slot describing the type of input objects that the worker is able to consume: All input slots must be connected to buckets in a workflow definition that wants to use this worker.
  3. An output slot describing the type of output objects that the worker generates: All output slots must be connected to buckets in a workflow definition that wants to use this worker. Exceptions to this rule are output slots that are marked as optional in the worker definition and output slots that belong to another slot group (see below).

Slot groups

As an advanced feature, output slots can be associated with a group label. Slots with the same group label belong to the same group. Grouping defines which slots may be used together in the same workflow: slots without a group label implicitly belong to every group and can be combined freely, whereas slots from different groups cannot be used in the same workflow. When using groups, the rules for optional and mandatory output slots are as follows (a sketch follows the list below):

  • A mandatory slot without a group label must always be connected to a bucket.
  • An optional slot without a group label is allowed in any combination with any group's slots.
  • If a particular group shall be used, all mandatory slots of the group must be connected to a bucket.
  • If each group contains at least one mandatory slot, at least one group must be connected. In that case it is not possible to connect only the slots without a group label.
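
For illustration, here is a sketch of a worker definition with grouped output slots; the worker name, slot names, and group labels are invented for this example:

{
  "name" : "groupedWorker",
  "input" : [
    { "name" : "inputRecords", "type" : "recordBulks" }
  ],
  "output" : [
    { "name" : "allRecords", "type" : "recordBulks" },
    { "name" : "textRecords", "type" : "recordBulks", "group" : "splitByType" },
    { "name" : "binaryRecords", "type" : "recordBulks", "group" : "splitByType" },
    { "name" : "failedRecords", "type" : "recordBulks", "group" : "failures" }
  ]
}

Assuming all slots are mandatory, a workflow using this worker must connect allRecords to a bucket in any case and must additionally connect all slots of exactly one of the two groups; it cannot mix slots from splitByType and failures in the same workflow.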

Worker properties in detail

  • name: Required. Defines the name of the worker. Can be used in a workflow to add the worker as an action.
  • modes: Optional. Sets one or more modes on the worker, controlling special behavior.
  • parameters: Optional. Gives the worker's parameter descriptions. The description can contain the following properties:
    • name: the parameter name (required)
    • optional: flag telling whether this parameter is optional (true) or required (false, default)
  • taskGenerator: Optional. Defines the name of the OSGi service which should be used to create the tasks whenever there are changes in the respective input buckets. If the taskGenerator is not set, the default task generator is used.
  • input: Optional. Describes the input slots:
    • name: Gives the name of a slot. Has to be bound as a parameter key to an existing bucket in a workflow.
    • type: Gives the required data object type of the input slot. The bucket bound in an actual workflow must comply with this type.
    • mode: Sets the mode(s) of the respective input slot, controlling a special behavior.
  • output: Optional. Describes the output slots:
    • name: Gives the name of the slot. Has to be bound as a parameter key to an existing bucket in a workflow.
    • type: Gives the required data object type of the output slot. The bucket bound in an actual workflow must comply with this type.
    • group: Gives the group label of this slot (see above).
    • mode: Sets the mode(s) of the respective output slot, controlling a special behavior.

Worker definitions can include additional information (e.g. comments or layout information for graphical design tools), but a GET request will return only the relevant information (i.e. the above attributes). If you want to retrieve the additional info that is present in the JSON file, add returnDetails=true as a request parameter.

Example

An exemplary worker definition:

{
  "name" : "exampleWorker",
  "readOnly": true,
  "parameters":[
    { "name": "parameter1" , "optional": "true"},
    { "name": "parameter2" }            
  ],
  "input" : [ {
    "name" : "inputRecords",
    "type" : "recordBulks"
  } ],
  "output" : [ {
    "name" : "outputRecords",
    "type" : "recordBulks"
  }]
}

As workers currently can be defined in the system configuration only, they are all marked as "readOnly".

List workers

All workers

Use a GET request to list all worker definitions.

Supported operations:

  • GET: Returns a list of all worker definitions. If you want to retrieve the additional information (if present), add returnDetails=true as a request parameter.

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/workers/
  • Allowed methods:
    • GET
  • Response status codes:
    • 200 OK: Upon successful execution.
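
For example, assuming SMILA runs locally on the default port, the following request lists all worker definitions including any additional details (hostname and port are placeholders for your installation):

GET http://localhost:8080/smila/jobmanager/workers/?returnDetails=true

The response body is a JSON document containing the worker definitions in the format shown in the example above.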

Specific worker

Use a GET request to list the definition of a specific worker.

Supported operations:

  • GET: Returns the definition of the given worker. Optional parameter: returnDetails: true or false (default)

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/workers/<worker-name>/
  • Allowed methods:
    • GET
  • Response status codes:
    • 200 OK: Upon successful execution.
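
For example, to retrieve the definition of the exampleWorker shown above from a local default installation:

GET http://localhost:8080/smila/jobmanager/workers/exampleWorker/

The response is the worker definition as a single JSON document; adding returnDetails=true as a request parameter additionally returns any extra attributes present in the configuration file.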

Workflows

Workflow definition

A workflow definition describes the individual actions of an asynchronous workflow by connecting the input and output slots of workers to buckets. Which slots have to be connected depends on the workers you are using and is defined by their worker definitions. Typically, all input and output slots of a used worker must be associated with buckets, and the type of each connected bucket must match the type given in the worker's definition.

A workflow run starts with the start-action. The order of the other actions is determined by their inputs and outputs.

Connecting a workflow to another workflow

A workflow can be linked to another workflow when both share the same persistent bucket. To give an example, let's assume a workflow named A and a workflow named B sharing the same bucket. If the first workflow A then adds an object into the shared bucket, the second workflow B is triggered to process this data. To be able to connect workflow A and B, the following prerequisites must be fulfilled:

  • The shared bucket must be a persistent one.
  • The definition of workflow A must define the shared bucket as an output bucket of an action. This can be any action in the workflow chain, i.e. not necessarily the first or the last one.
  • The definition of workflow B must state the shared bucket as the input bucket of its start action. Other positions in the workflow definition will not do.
  • Individual jobs must be created for both the triggering (A) and the triggered workflow (B).
  • The parameters used for the store and object name in the data object type definition of the shared bucket must be identical in both job definitions.
  • The job runs must fulfill the following conditions to allow for the triggering of a connected workflow:
    • The status of the job run using workflow A must be RUNNING or FINISHING.
    • The status of the job run using workflow B must be RUNNING.

Warning: As there is no explicit chaining of workflows, you have to be very careful when using the same bucket name in multiple workflow definitions. This might result in the triggering of jobs which were not meant to be triggered at all.
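
As a sketch, two workflow definitions connected through a shared persistent bucket could look like the following. All worker, slot, bucket, and workflow names are invented for this illustration, and sharedBucket is assumed to be defined as a persistent bucket:

{
   "name":"workflowA",
   "startAction":{
      "worker":"worker1",
      "input":{ "slotA":"someInputBucket" },
      "output":{ "slotB":"sharedBucket" }
   }
}

{
   "name":"workflowB",
   "startAction":{
      "worker":"worker2",
      "input":{ "slotC":"sharedBucket" },
      "output":{ "slotD":"someOutputBucket" }
   }
}

With jobs defined for both workflows and identical store and object name parameters for sharedBucket in both job definitions, a job run of workflowA in state RUNNING or FINISHING that writes an object into sharedBucket triggers a RUNNING job of workflowB to process it.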

Workflow properties in detail

Description of a workflow:

  • name: Required. Gives the name of the workflow.
  • parameters (MAP): Optional. Sets the global workflow parameters. They apply to all actions in the workflow as well as to the buckets used by these workers.
  • startAction (MAP): Required. Defines the starting action of the workflow. There can be only one starting action within the workflow.
  • actions (LIST of MAPs): Optional. Defines the follow-up actions of the workflow.
  • timestamp: The (read-only) timestamp that is created by the system when the workflow has been pushed to the system (initial creation or last update). Read-only workflows (i.e. workflows initially loaded from the workflow.json file) have no timestamp property. The value cannot be set manually; it is defined by the system.
  • Additional properties can be provided, but will only be listed when returnDetails is set to true. This could be used by a designer tool to add layout information or comments.

Description of startAction and actions:

  • worker: Gives the name of a worker. This name must comply with the name given in the worker definition.
  • parameters: Sets the local worker parameters. They apply to the referenced worker but not to the buckets used by this worker.
  • input (MAP): Maps the worker's named input slot(s) to an existing bucket definition. The name of an input slot must be the key and the name of the bucket must be the value of that key. All of the worker's named input slots have to be resolved against existing buckets of the expected type.
  • output (MAP): Maps the worker's named output slot(s) to an existing bucket definition. The name of an output slot must be the key and the name of the bucket must be the value of that key. All of the worker's named output slots have to be resolved against existing buckets of the expected type.

Workflow definitions can include additional information (e.g. comments or layout information for graphical design tools), but a GET request will return only the relevant information (i.e. the above attributes). If you want to retrieve the additional info that is present in the JSON file or has been posted with the definition, add returnDetails=true as a request parameter.

Example

An exemplary workflow definition:

{
   "name":"myWorkflow",
   "parameters":{
      "paramKey2":"paramValue2",
      "paramKey1":"paramValue2"
   },
   "startAction":{
      "worker":"worker1",
      "input":{
         "slotA":"myBucketA"
      },
      "output":{
         "slotB":"myBucketB"
      }
   },
   "actions":[
      {
         "worker":"worker2",
         "parameters":{
            "paramKey3":"paramValue3"
         },
         "input":{
            "slotC":"myBucketB"
         },
         "output":{
            "slotD":"myBucketC"
         }
      },
      {
         "worker":"worker3",
         "input":{
            "slotE":"myBucketC"
         },
         "output":{
            "slotF":"myBucketD"
         }
      }
   ],
   "timestamp" : "2011-07-25T08:57:47.628+0200"
}

List, create, and modify workflows

All workflows

Use a GET request to list the definitions of all workflows. If the timestamps (if present) or any other additional information contained in the definition should also be displayed, the request parameter returnDetails must be set to true. Use POST for adding or updating a workflow definition.

Supported operations:

  • GET: Returns a list of all workflow definitions. If there are no workflows defined, you will get an empty list. Optional request parameter: returnDetails: true or false (default).
  • POST: Create a new workflow definition or update an existing one. If the workflow already exists, it will be updated after successful validation. However, the changes will not apply until the next job run, i.e. the current job run is not influenced by the changes. Only workers for which worker definitions exist can be added to the workflow definition as actions. When adding a worker, all parameters defined in the worker's definition have to be satisfied: if they are not set in the global or local parameter sections of the workflow definition itself, they must be set later in the job definition. Also, all input and output slots have to be connected to existing buckets if the buckets are persistent; for transient buckets it is sufficient to provide a bucket name. Exceptions to this rule are optional slots and slots of unused slot groups, which need not (and in the latter case must not) be connected to buckets. An error will be thrown:
    • If a required slot is not connected to a bucket.
    • If a referenced bucket, defined as persistent one, does not exist.

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/workflows/
  • Allowed methods:
    • GET
    • POST
  • Response status codes:
    • 200 OK: Upon successful execution (GET).
    • 201 CREATED: Upon successful execution (POST).
    • 400 Bad Request: name and startAction are mandatory fields. If they are not set, an HTTP 400 Bad Request including an error message in the response body will be returned.
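
For example, the myWorkflow definition shown above could be created or updated on a local default installation by posting it to the workflows resource (hostname and port are placeholders):

POST http://localhost:8080/smila/jobmanager/workflows/

with the JSON workflow definition from the example above as the request body. On success the server answers with 201 CREATED; if e.g. name or startAction is missing, it answers with 400 Bad Request and an error message in the response body.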

Specific workflow

Use a GET request to retrieve the definition of a specific workflow. Use DELETE to delete a specific workflow.

Supported operations:

  • GET: Returns the definition of the given workflow.
    • You can set the URL parameter returnDetails to true to return additional information that might have been provided when creating the workflow. If the parameter is omitted or set to false, only the relevant information (name, parameters, startAction, actions, timestamp) is returned.
  • DELETE: Deletes the given workflow.

Usage:

  • URL: http://<hostname>:8080/smila/jobmanager/workflows/<workflow-name>/
  • Allowed methods:
    • GET
    • DELETE
  • Response status codes:
    • 200 OK: Upon successful execution (GET, DELETE). If the workflow to be deleted does not exist, you will get 200 anyway (DELETE).
    • 404 Not Found: If the workflow does not exist (GET).
    • 400 Bad Request: If the workflow to be deleted is still referenced by a job definition (DELETE).
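
For example, retrieving and then deleting the myWorkflow definition from the example above on a local default installation (hostname and port are placeholders):

GET http://localhost:8080/smila/jobmanager/workflows/myWorkflow/?returnDetails=true
DELETE http://localhost:8080/smila/jobmanager/workflows/myWorkflow/

The DELETE request returns 200 OK even if myWorkflow does not exist, but fails with 400 Bad Request as long as a job definition still references the workflow.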
