SMILA/Documentation/Architecture Overview

This page gives a short overview of SMILA's current architecture.

Introduction

SMILA is a framework for creating scalable server-side systems that process large amounts of unstructured data in order to build applications in the area of search, linguistic analysis, information mining, or similar. The goal is to let you easily integrate data source connectors, search engines, sophisticated analysis methods, and more, while gaining scalability and reliability out of the box.

As such, SMILA provides these main parts:

  • JobManager: a system for asynchronous, scalable processing of data using configurable workflows. The system is able to reliably distribute the tasks to be done on big clusters of hosts. The workflows orchestrate easy-to-implement workers that can be used to integrate application-specific processing logic.
  • Crawlers: concepts and basic implementations for scalable components that extract data from data sources.
  • Pipelines: a system for processing synchronous requests (e.g. search requests) by orchestrating easy-to-implement components (pipelets) in workflows defined in BPEL.
  • Storage: concepts for integrating big-data storages for efficient persistence of the processed data.

Eventually, all SMILA functionality will be accessible for external clients via an HTTP ReST API using JSON as the exchange data format.
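
As a rough illustration of what such a client call might look like, the following Java sketch sends a plain HTTP GET to a job monitoring URL and prints the JSON answer. The host, port, and path used here are assumptions made for the example and need to be adapted to a concrete SMILA installation.

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  public class JobStatusClient {
    public static void main(String[] args) throws Exception {
      // Hypothetical monitoring URL; adjust host, port and path to your installation.
      URI jobRunUrl = URI.create("http://localhost:8080/smila/jobmanager/jobs/indexUpdate");

      HttpClient client = HttpClient.newHttpClient();
      HttpRequest request = HttpRequest.newBuilder(jobRunUrl)
          .header("Accept", "application/json")
          .GET()
          .build();

      // The response body is expected to be a JSON document describing the job and its runs.
      HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println(response.statusCode() + "\n" + response.body());
    }
  }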

As an Eclipse project, SMILA is built on OSGi and makes heavy use of the OSGi service component model.
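
To give an idea of what this looks like in code, here is a minimal sketch of an OSGi Declarative Services component. The Analyzer interface is made up just to keep the example self-contained; SMILA's real service interfaces differ, but they are registered with the runtime in the same way.

  import org.osgi.service.component.annotations.Activate;
  import org.osgi.service.component.annotations.Component;

  // Hypothetical service interface, defined here only to keep the sketch self-contained.
  interface Analyzer {
    String analyze(String text);
  }

  // Registers the class as an OSGi service component providing the Analyzer service.
  @Component(service = Analyzer.class)
  public class SimpleAnalyzer implements Analyzer {

    @Activate
    void activate() {
      // Called by the OSGi runtime when the component is activated.
      System.out.println("SimpleAnalyzer activated");
    }

    @Override
    public String analyze(String text) {
      return text == null ? "" : text.trim().toLowerCase();
    }
  }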

Architecture Overview

SMILA Architecture Overview 1.0.png

Download SMILA Architecture.zip with the original PowerPoint file of this slide.

A SMILA system usually consists of two parts:

  • First, data has to be imported into the system and processed to produce a search index, an ontology, or whatever else can be learned from the data.
  • Second, the learned information is used to answer retrieval requests from users, for example search or ontology exploration requests.

In the first process, usually some data source is crawled or an external client pushes data from the source into the SMILA system using the HTTP ReST API. Often the data consists of a large number of documents (e.g. a file system, web site, or content management system). To be processed, each document is represented in SMILA by a record describing the metadata of the document (name, size, access rights, authors, keywords, ...) and the original content of the document itself.
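
A record can be pictured roughly like this; the attribute names below are purely illustrative and do not claim to match SMILA's actual record schema.

  import java.util.LinkedHashMap;
  import java.util.List;
  import java.util.Map;

  public class RecordSketch {
    public static void main(String[] args) {
      // Illustrative record metadata: the attribute names are made up for the example.
      Map<String, Object> metadata = new LinkedHashMap<>();
      metadata.put("_recordid", "file:/docs/report.pdf");
      metadata.put("Title", "Quarterly Report");
      metadata.put("Size", 128934L);
      metadata.put("Authors", List.of("A. Example"));
      metadata.put("AccessRights", List.of("group:finance"));

      // The original document content travels alongside the metadata,
      // e.g. as a binary attachment.
      byte[] content = "raw document bytes ...".getBytes();

      System.out.println(metadata + " (content: " + content.length + " bytes)");
    }
  }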

To process large amounts of data, SMILA must be able to distribute the work to be done on multiple SMILA nodes (computers). Therefore the bulkbuilder separates the incoming data into bulks of records of a configurable size and writes them to an ObjectStore. For each of these bulks the JobManager creates tasks for workers to process the bulk and produce other bulks with the result. When a worker is available, it asks the TaskManager for open tasks, does the work, and finally notifies the TaskManager about the result. Workflows define which workers should process a bulk in what sequence. Whenever a worker finishes a task for a bulk successfully, the JobManager can create follow-up tasks based on such a workflow definition. In case a worker fails its task (because the process or machine crashes, or because of a network problem), the JobManager can decide to retry the task later and thus ensure that the data is processed even in error conditions. The processing of the complete data set using such a workflow is called a job run, and monitoring the current state of such a job run is easily possible via the HTTP ReST API.
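
The bulk building idea itself is simple and can be sketched in a few lines of plain Java: records are collected until a configured bulk size is reached, then the bulk is released to the next processing step. This is only a sketch of the concept, not SMILA's actual bulkbuilder code.

  import java.util.ArrayList;
  import java.util.List;
  import java.util.function.Consumer;

  public class BulkBuilderSketch {
    private final int bulkSize;
    private final Consumer<List<String>> bulkSink;   // e.g. "write bulk to the ObjectStore"
    private List<String> currentBulk = new ArrayList<>();

    public BulkBuilderSketch(int bulkSize, Consumer<List<String>> bulkSink) {
      this.bulkSize = bulkSize;
      this.bulkSink = bulkSink;
    }

    // Add one record; release the bulk as soon as the configured size is reached.
    public void add(String record) {
      currentBulk.add(record);
      if (currentBulk.size() >= bulkSize) {
        bulkSink.accept(currentBulk);
        currentBulk = new ArrayList<>();
      }
    }

    public static void main(String[] args) {
      BulkBuilderSketch builder =
          new BulkBuilderSketch(3, bulk -> System.out.println("bulk released: " + bulk));
      for (int i = 1; i <= 7; i++) {
        builder.add("record-" + i);
      }
      // In a real system a time limit would also release the last, partially filled bulk.
    }
  }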

JobManager and TaskManager use Apache ZooKeeper to coordinate the state of a job run and the to-do and in-progress tasks across multiple computers, so the job processing is distributed over the whole cluster.
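
The coordination idea can be illustrated with the plain Apache ZooKeeper client API: a worker claims a task by creating an ephemeral node, and if the worker or its machine dies, the node disappears together with its session, so the task can be handed out again. The connection string and paths below are invented for the example and do not reflect SMILA's internal data layout.

  import org.apache.zookeeper.CreateMode;
  import org.apache.zookeeper.ZooDefs;
  import org.apache.zookeeper.ZooKeeper;

  public class TaskClaimSketch {
    public static void main(String[] args) throws Exception {
      // Connection string is an assumption; the parent path is expected to exist already.
      ZooKeeper zk = new ZooKeeper("localhost:2181", 3000, event -> { });

      // An ephemeral node marks the task as "in progress" by this worker. If the worker's
      // session ends (crash, network loss), ZooKeeper removes the node automatically and
      // the task can be retried elsewhere.
      String node = zk.create("/tasks/inprogress/task-0001", "worker-1".getBytes(),
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
      System.out.println("claimed: " + node);

      zk.close();
    }
  }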

To make implementing workers easy, the SMILA JobManager system contains the WorkerManager, which lets you concentrate on the actual worker functionality without having to worry about getting the TaskManager and ObjectStore interaction right.
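
Conceptually, a worker then boils down to a single method that reads an input bulk and writes a result bulk. The Worker and TaskContext types in this sketch are simplified stand-ins defined only for the example; SMILA's real worker API uses different names and richer parameters.

  import java.util.ArrayList;
  import java.util.List;
  import java.util.Locale;
  import java.util.stream.Collectors;

  public class UppercaseWorkerSketch {

    // Simplified stand-ins for the framework types handed to a worker.
    interface TaskContext {
      List<String> readInputBulk();            // records of the input bulk
      void writeOutputBulk(List<String> out);  // records of the result bulk
    }

    interface Worker {
      void perform(TaskContext context) throws Exception;
    }

    // The actual worker logic: transform every record of the input bulk into a result bulk.
    static class UppercaseWorker implements Worker {
      @Override
      public void perform(TaskContext context) {
        List<String> output = context.readInputBulk().stream()
            .map(record -> record.toUpperCase(Locale.ROOT))
            .collect(Collectors.toList());
        context.writeOutputBulk(output);
      }
    }

    public static void main(String[] args) throws Exception {
      List<String> input = List.of("alpha", "beta");
      List<String> captured = new ArrayList<>();
      TaskContext context = new TaskContext() {
        public List<String> readInputBulk() { return input; }
        public void writeOutputBulk(List<String> out) { captured.addAll(out); }
      };
      new UppercaseWorker().perform(context);
      System.out.println(captured); // prints [ALPHA, BETA]
    }
  }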

To extract large amounts of data from the data source, the asynchronous job framework can also be used to implement highly scalable crawlers. Crawling can be divided into several steps:

  • getting the names of elements from the data source
  • checking whether an element has changed since a previous crawl run (delta check)
  • getting the content of changed or new elements
  • pushing the elements to a processing job.

These steps can be implemented as separate workers, too, so the crawl work can be parallelized and distributed quite easily. By using the JobManager to control the crawling, we gain the same reliability and scalability for the crawling as for the processing. And: implementing new crawlers is just as easy as implementing new workers.
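
The delta check step, for example, can be pictured as comparing a cheap change marker (such as the last modification time) against the value remembered from the previous crawl run, and only pushing elements whose marker changed. The in-memory map used as a "delta store" here is, of course, just a stand-in for a persistent store.

  import java.util.HashMap;
  import java.util.Map;

  public class DeltaCheckSketch {
    // Stand-in for a persistent delta store: element name -> last seen modification time.
    private final Map<String, Long> deltaStore = new HashMap<>();

    // Returns true if the element is new or has changed and therefore must be processed.
    public boolean hasChanged(String elementName, long lastModified) {
      Long previous = deltaStore.put(elementName, lastModified);
      return previous == null || previous != lastModified;
    }

    public static void main(String[] args) {
      DeltaCheckSketch delta = new DeltaCheckSketch();
      System.out.println(delta.hasChanged("file:/docs/a.txt", 1000L)); // true: first crawl run
      System.out.println(delta.hasChanged("file:/docs/a.txt", 1000L)); // false: unchanged
      System.out.println(delta.hasChanged("file:/docs/a.txt", 2000L)); // true: modified since
    }
  }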

Eventually, the final step of such an asynchronous processing workflow writes the processed data to some target system, for example a search engine, an ontology manager, or a database, where it can be used to process retrieval requests; and so we get to the second part of the system. Such requests come from an external client application via the HTTP ReST API. They are usually of a synchronous nature, meaning that a client sends a request and waits for the result to present it to a user, and it expects the result to be produced rather quickly. On the other hand, we want the same flexibility for configuring the processing of such synchronous requests as we have for the asynchronous job processing. Therefore we use a different workflow processor here, which is based on a BPEL engine. The BPEL workflows (which we call pipelines) in this processor orchestrate so-called pipelets to perform the different steps needed to enrich and refine the original request and to produce the result. Implementing such a pipelet is probably even easier than implementing a worker ;-)
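
Conceptually, a pipelet is just a small class that transforms the records handed to it by the pipeline. The Pipelet interface and record shape below are simplified placeholders invented for the sketch, not SMILA's real processing API.

  import java.util.HashMap;
  import java.util.Map;

  public class PipeletSketch {

    // Simplified placeholder for the framework's pipelet contract.
    interface Pipelet {
      void process(Map<String, Object> record) throws Exception;
    }

    // Enriches a record with a normalized copy of its title, e.g. for case-insensitive search.
    static class NormalizeTitlePipelet implements Pipelet {
      @Override
      public void process(Map<String, Object> record) {
        Object title = record.get("Title");
        if (title instanceof String) {
          record.put("TitleNormalized", ((String) title).trim().toLowerCase());
        }
      }
    }

    public static void main(String[] args) throws Exception {
      Map<String, Object> record = new HashMap<>();
      record.put("Title", "  Quarterly Report ");
      new NormalizeTitlePipelet().process(record);
      System.out.println(record);
    }
  }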

Finally, it's even possible to combine both workflow variants, because there is a PipelineProcessing worker in the asynchronous system that performs a task by executing a synchronous pipeline. So it's possible to implement only a pipelet and have the functionality available in both kinds of workflows. Additionally, there is a PipeletProcessing worker available that executes just a single pipelet and so saves the overhead of the synchronous workflow processor if one pipelet is sufficient to execute a task.


Want to know more?

For further, up-to-date documentation of all implemented components, please see:
