


SMILA/Documentation/Management


Revision as of 02:24, 20 March 2009

SMILA is a framework with a lot of functionality. Most of it is invoked automatically by internal operations. Nevertheless, the user has to configure and start an initial operation. All functions a user can execute are accessible via the JMX Management Agent. On the following pages you will learn how to use SMILA with the aid of Java's built-in JConsole, and how to use the JMXClient, which provides access to SMILA commands via batch files.

Management with the aid of jconsole

JConsole is a small tool for monitoring Java applications that is bundled with the JDK. Via a JMX connection it is possible to attach JConsole's Swing UI to a running application. If you start up the SMILA engine and open JConsole, you can connect JConsole to SMILA immediately.
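Under the hood, JConsole simply talks to an MBean server. The following sketch uses only standard JDK APIs to list the MBeans of the local JVM's platform MBean server; these are the same kind of entries JConsole shows in its MBeans tree (when attached to SMILA you would see SMILA's MBeans instead of just the JVM's own):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListMBeans {

    // Returns the number of MBeans the local platform MBean server exposes.
    static int mbeanCount() {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.getMBeanCount();
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // queryNames(null, null) matches every registered MBean;
        // JConsole renders this same set as its MBeans tree.
        for (ObjectName name : server.queryNames(null, null)) {
            System.out.println(name);
        }
        System.out.println("total MBeans: " + mbeanCount());
    }
}
```

Every JVM registers at least the java.lang platform MBeans (memory, threads, etc.), so the list is never empty.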

(Screenshot: JConsole)

After connecting, you can find the SMILA operations on the MBeans tab, in the tree on the left side.

(Screenshot: SMILA manageable components)

There are four components of SMILA which you can access via JConsole.

CrawlerController

Here you can manage the crawling jobs. The following commands are available:

  • startCrawl(String dataSourceID): starts a crawling job for the given dataSourceID, for example file or web.
  • stopCrawl(String dataSourceID): stops the crawling job for the given dataSourceID. Note: the crawler is only signaled to stop and may do so at its own discretion. In other words: depending on the implementation it may take a while until it actually stops crawling. This gives the crawler the chance to clean up all open resources and finish whatever business it needs to.
  • getActiveCrawls(): opens a dialog showing a list of the dataSourceIDs of all active crawl jobs. If no job is running, the dialog shows null.
  • getActiveCrawlsStatus(): opens a dialog telling you how many crawl jobs are currently active.
  • getStatus(String dataSourceID): opens a dialog showing the status of the crawling job for the given dataSourceID. Possible states are: RUNNING, FINISHED, STOPPED, and ABORTED.
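Each JConsole button maps directly to an MBeanServer.invoke call. As a sketch of that mechanics, the example below registers a simplified stand-in for the CrawlerController (the interface, ObjectName, and return values here are illustrative, not SMILA's real ones) and invokes startCrawl and getStatus exactly the way JConsole would:

```java
import java.lang.management.ManagementFactory;
import java.util.HashSet;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class CrawlDemo {

    // Simplified management interface; operation names follow the list above.
    public interface CrawlerControllerMBean {
        String startCrawl(String dataSourceID);
        String getStatus(String dataSourceID);
    }

    // Stand-in implementation; the real SMILA CrawlerController differs.
    public static class CrawlerController implements CrawlerControllerMBean {
        private final Set<String> active = new HashSet<>();

        public String startCrawl(String dataSourceID) {
            active.add(dataSourceID);
            return "Crawl started for " + dataSourceID;
        }

        public String getStatus(String dataSourceID) {
            return active.contains(dataSourceID) ? "RUNNING" : "FINISHED";
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Illustrative ObjectName; check JConsole's MBeans tree for SMILA's real one.
        ObjectName name = new ObjectName("SMILA:type=CrawlerController");
        server.registerMBean(new CrawlerController(), name);

        // This is the call JConsole issues when you press an operation button:
        System.out.println(server.invoke(name, "startCrawl",
                new Object[] {"file"}, new String[] {"java.lang.String"}));
        System.out.println(server.invoke(name, "getStatus",
                new Object[] {"file"}, new String[] {"java.lang.String"}));
    }
}
```

The string array in invoke() names the parameter types so the MBean server can resolve the right operation signature.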

RecordRecycler

The RecordRecycler gives you the possibility to push already crawled records into the data flow process again. This can be useful, for example, if you want to modify records in the index with another pipeline. The following operations are available to control the RecordRecycler:

  • startRecycling(String configurationID, String dataSourceID): fires a recycling event with the given configurationID (the configurationID must match the name of a configuration file located at configuration/org.eclipse.smila.connectivity.framework.queue.worker/recyclers) and dataSourceID (records with this dataSourceID are fetched from the RecordStorage). See the QueueWorker documentation for further details on the Recycler.
  • stopRecycling(String dataSourceID): stops the recycling event for the given dataSourceID.
  • getRecordsRecycled(String dataSourceID): opens a dialog showing how many records have been recycled.
  • getConfigurations(String dataSourceID): shows a list containing all available recycler configuration files.
  • getStatus(String dataSourceID): opens a dialog showing the status of the recycling event for the given dataSourceID. Possible states are: STARTED, IN_PROCESS, STOPPING, STOPPED, and FINISHED.

DeltaIndexingManager

The DeltaIndexingManager stores a hash value for each record. It is part of the Connectivity Framework and signals a crawler whether a given record has changed since the last crawl. See the DeltaIndexing documentation. Within JConsole you can use the following commands:

  • clearAll(): clears all hashes, thus enabling all records to be reprocessed.
  • unlockAll(): unlocks all data sources.
  • clear(String dataSourceID): same as clearAll() but limited to one data source.

Lucene

With Lucene you have the possibility to invoke several methods concerning the index. The following operations are available:

  • deleteIndex(String indexName): removes the index with the given name, if available. Otherwise an error dialog is shown.
  • indexExists(String indexName): asks the framework whether the given index exists. Returns true or false.
  • createIndex(String indexName): creates an index with the given name.
  • reorganizeIndex(String indexName): reorganizes the index with the given name. This cleans up the index: deleted entries are physically removed, resulting in a smaller index size.
  • renameIndex(String currentIndexName, String newIndexName): renames the index with the given name (currentIndexName) to the value of newIndexName.
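These operations can also be scripted without JConsole, which is essentially what the JMXClient batch files mentioned above do. A minimal sketch using a remote JMX connection; the service URL, the port 9004, the ObjectName, and the index name are all assumptions here, so check your SMILA configuration and JConsole's MBeans tree for the real values:

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteInvoke {

    // Hypothetical host/port; adjust to your SMILA JMX agent settings.
    static final String SERVICE_URL =
            "service:jmx:rmi:///jndi/rmi://localhost:9004/jmxrmi";

    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(SERVICE_URL);
        // JMXConnector is Closeable, so try-with-resources closes the connection.
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Illustrative ObjectName and index name; not SMILA's real values.
            ObjectName lucene = new ObjectName("SMILA:type=Lucene");
            Object exists = conn.invoke(lucene, "indexExists",
                    new Object[] {"test_index"}, new String[] {"java.lang.String"});
            System.out.println("indexExists: " + exists);
        }
    }
}
```

The connect() call will fail unless a JMX agent is actually listening at the given URL; the point of the sketch is the invocation pattern, which is identical for all four SMILA components described above.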

External links

* [http://java.sun.com/javase/technologies/core/mntr-mgmt/javamanagement/ Java Management Extensions (JMX)]
* [http://java.sun.com/developer/technicalArticles/J2SE/jconsole.html Using JConsole to Monitor Applications]

[[Category:SMILA]]
