This page contains frequently asked questions about the SMILA project.
General hint: If you run into problems during a SMILA launch or run, please have a look at the SMILA log file first (<SMILA>/SMILA.log).
- 1 Building SMILA
- 2 Launching SMILA
- 2.1 Linux
- 2.2 Bundles
- 2.2.1 Launching crawler bundles - How to solve: "Could not find crawler id" error message?
- 2.2.2 I changed the implementation of a bundle, deployed it and restarted SMILA, but SMILA still seems to use the old bundle
- 2.2.3 I changed bundle settings in my config.ini, but after SMILA restart nothing changed
- 2.3 JConsole
- 3 Configuring/Running SMILA
- 3.1 Crawler
- 3.2 Pipeline
- 3.3 Lucene Indexing / Search
- 3.3.1 How can I browse/search an existing Lucene index without SMILA (for debug purposes)?
- 3.3.2 Why are attributes in my Lucene index missing / permuted?
- 3.3.3 SMILA doesn't return search results, although I see appropriate entries for (some fields of) my query in the Lucene index
- 3.3.4 Test search application (http://localhost:8080/SMILA/search) returns HTTP Status 500 with the stack trace
- 4 Implementing Pipelets / Processing Services / Bundles
- 4.1 Configuration
- 4.2 Deploy / Launch
I receive an Out of Memory error. What can I do?
While building with SMILA.builder I receive the following error message:
Build Failed - Out of Memory - Java heap space
The reason for this is that Ant does not have enough heap space to build the project. You have to expand the heap space by setting the VM arguments accordingly. In Eclipse, try the following:
- Open the external tools dialog and select your Ant build profile.
- Switch to the JRE tab and add the following VM arguments: -Xms40m -Xmx512m.
- Save and build again.
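Outside Eclipse, when running Ant from the command line, the equivalent fix (assuming a standard Ant installation) is to pass the same VM arguments via the ANT_OPTS environment variable. The build file name below is only an example:

```shell
# Give Ant a larger Java heap before starting the build.
export ANT_OPTS="-Xms40m -Xmx512m"
# Then run the SMILA.builder target as usual, for example:
# ant -f make.xml
```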
How to start/stop and manage SMILA as a background process on a Linux machine?
The default configuration of the OSGi runtime launcher (in our case Equinox), stored in SMILA.ini, expects that you execute it in the foreground, with an OSGi console running in your shell and listening on standard input. Therefore, the first thing to do is to advise the launcher (and thereby Equinox) to listen on a TCP port instead. This is done by adding a new line with the port number just after the "-console" line.
For example, to set console to listen at TCP port 9999, SMILA.ini would look like this:
-console
9999
...
Now, after SMILA has been started with “$ nohup ./SMILA &”, the console can be accessed from any computer simply by opening a telnet session:
$ telnet <smila_host_name> <console_port>
For the complete documentation on Eclipse runtime options, please see: http://help.eclipse.org/ganymede/index.jsp?topic=/org.eclipse.platform.doc.isv/reference/misc/runtime-options.html
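Putting the pieces together, a typical background session could look like the sketch below; the install path, host, and port are examples. Typing close at the osgi> prompt shuts the framework (and thereby SMILA) down gracefully:

```
$ cd /opt/SMILA                    # example install path
$ nohup ./SMILA > smila.out 2>&1 &

$ telnet localhost 9999            # connect to the OSGi console
osgi> close                        # shut SMILA down
```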
Launching crawler bundles - How to solve: "Could not find crawler id" error message?
While launching SMILA I receive the following error message:
Could not find crawler id
If you started SMILA.launch to launch SMILA: The launcher didn't start your new crawler bundle. Try this:
- Add your bundle by selecting "Open Run dialog" in Eclipse and choosing your SMILA launch profile.
- Select your bundle in the list and set the checkmark.
- Set the start level to "4" and the autostart to "true".
If you started SMILA.exe to launch SMILA: Your bundle isn't defined in config.ini or its start level isn't correct. Try this:
- Open the file config.ini and add an entry for your bundle with the appropriate start level.
- Open the build.properties file of your bundle and include the folders schemas/, OSGI-INF/, and the file plugin.xml.
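A config.ini bundle entry of the kind mentioned in the first step might look like the following sketch; the bundle name is a placeholder for your own crawler bundle, and 4 is the start level used in the steps above:

```ini
# Append your bundle to the (comma-separated) osgi.bundles list,
# giving it start level 4 and marking it for autostart:
osgi.bundles=<existing entries>,org.eclipse.smila.mycrawler@4:start
```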
I changed the implementation of a bundle, deployed it and restarted SMILA, but SMILA still seems to use the old bundle
Close SMILA, delete the following directories in your configuration folder and restart SMILA again:
I changed bundle settings in my config.ini, but after SMILA restart nothing changed
Check your config.ini for unusual whitespaces (e.g. a tab) between the (edited) bundle entries - and remove them.
If that doesn't help, see the answer to the previous question.
Why is the SMILA package not in the JConsole tree?
I've started SMILA.exe but the SMILA package isn't in the tree of JConsole.
To solve this try the following:
- Create a new connection.
- Change your connection by setting the port "9004" on the Remote tab.
- Click the Connect button, switch to the MBeans tab, and check the tree again.
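If the simple host:port form does not work, you can also try entering the full JMX service URL in JConsole's remote connection field. This assumes the default RMI connector and the port 9004 mentioned above:

```
service:jmx:rmi:///jndi/rmi://<smila_host>:9004/jmxrmi
```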
Why is the SMILA/CrawlerController MBean not in the JConsole tree?
Check if all needed bundles are active. Open the equinox console and type:
- ss crawler
- ss deltaindexing.impl
If one of these is not active:
- check the configuration/config.ini
- check the log file for errors
Cannot connect with JConsole to a remote machine running Ubuntu 8.04 or newer
For some reason, JConsole cannot make a remote connection to a JVM running on Ubuntu 8.04 or newer. (We did some tests with SuSE Linux and had no problems.)
This can easily be circumvented by modifying the /etc/hosts file: simply replace 127.0.1.1 with the real IP address of your Ubuntu machine and you're ready to go.
For example, if you have a line in /etc/hosts that looks like this:
127.0.1.1 jupiter
(where "jupiter" is the actual name of the Ubuntu machine in this example; you will almost certainly have some other name here)
then replace it with:
192.168.220.101 jupiter
(where 192.168.220.101 is the actual IP address of the Ubuntu machine in this example).
I tried to crawl/index a data source, JConsole says "Crawl ... successfully started" but nothing seems to happen
Check if your queue is receiving records:
- Open JConsole's MBeans tab and navigate to your queue's MBean. If its EnqueueCount attribute increases, your queue is receiving records, so it's not a crawler issue.
If your queue didn't receive any records from the crawler:
- Did you crawl the data source before? If so, try a clearAll on the DeltaIndexing MBean in JConsole.
- Check your crawler configuration file (see configuration/org.eclipse.smila.connectivity.framework/)
- Check the queue routing configuration: org.eclipse.smila.connectivity.queue.worker.jms/QueueWorkerRouterConfig.xml
- Check the queue connection: org.eclipse.smila.connectivity.queue.worker.jms/QueueWorkerConnectionConfig.xml
- If the queue is running in a separate SMILA instance (e.g. on another PC), make sure that the "Queue-SMILA" instance is started before the "Crawler-SMILA" instance.
Why do I get a timeout exception during a (long running) pipeline execution?
In SMILA, a timeout is configured for pipeline executions. If a pipeline runs longer than this timeout, the execution fails with a timeout exception; increase the configured timeout value accordingly.
Lucene Indexing / Search
How can I browse/search an existing Lucene index without SMILA (for debug purposes)?
One option is Luke, the Lucene index toolbox, which can open an index directory directly and lets you browse documents and fields and run test queries against the index.
Why are attributes in my Lucene index missing / permuted?
Open your Lucene configuration files and check:
- whether all index fields are specified in both files
- whether the field numbers in the two files are compatible with each other
Still the same problem?
- Close SMILA, remove the file workspace/org.eclipse.smila.search.datadictionary/DataDictionary.xml (if it exists), and restart SMILA.
SMILA doesn't return search results, although I see appropriate entries for (some fields of) my query in the Lucene index
Open the file: configuration/org.eclipse.smila.search.datadictionary/DataDictionary.xml
- Check the Constraint entries for all fields: if a constraint is set to required and this field is set in your query, the query value must match the indexed value (regardless of other fields)!
Still problems? Try removing the workspace version of that file if it exists:
- Close SMILA, remove workspace/org.eclipse.smila.search.datadictionary/DataDictionary.xml, and restart SMILA.
Test search application (http://localhost:8080/SMILA/search) returns HTTP Status 500 with the stack trace
This error usually happens when no index has been built yet but the user tries to search (e.g. directly after installing SMILA) by calling the test search application. Prior to 0.5 M3, the user got a long stack trace in this case, with the real cause of the problem at the end: org.eclipse.smila.search.index.IndexException: index does not exist in data dictionary [test_index]. This means that the test index named "test_index" has not been created yet, which can easily be done by starting a crawler or an agent. In 0.5 M3 we improved the error handling in the search servlet so that only error messages are displayed and highlighted. (The stack trace can still be seen by clicking on "Details...".)
Implementing Pipelets / Processing Services / Bundles
I want to use the ConfigUtils class in my pipelet to read the configuration. Where do I have to put my configuration files?
Configuration files are searched for in the following order:
- <config-file> in the root path of the bundle jar-file
Deploy / Launch
I implemented/deployed an OSGi service in a new bundle, but the SMILA log says that it couldn't be found
Check your new bundle: it should contain a Declarative Services component description file in its OSGI-INF/ folder.
In this file your new service has to be referenced. If you have copied the file from some other service, be sure to change the component name in the root element to something unique, because DS does not start multiple services with the same component name.
- <component name="<myService>" immediate="true">
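A minimal component description might look like the following sketch; all names (component, implementation class, service interface) are placeholders to be replaced with those of your own bundle:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- OSGI-INF/MyNewService.xml: component name must be unique -->
<component name="MyNewService" immediate="true">
  <implementation class="org.example.mybundle.MyNewServiceImpl"/>
  <service>
    <provide interface="org.example.mybundle.MyNewService"/>
  </service>
</component>
```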
Also the file has to be referenced from the MANIFEST.MF file of your bundle as a service component:
- Service-Component: OSGI-INF/<myService>.xml
Also, you may need to add Import-Package declarations for the super-classes of your service implementation class, even if there are no compile errors.
On the "Build" page of the manifest editor, you must add the OSGI-INF directory to the binary build.
And finally, your bundle has to be started at SMILA launch, e.g. by adding it to the config.ini.
I get classloading errors in invocations of my own Pipelet when running SMILA outside the IDE. In the IDE it works
The error could look like this:
2010-11-19 11:28:36,101 ERROR [ODEServerImpl-1] vpu.JacobVPU - Method "run" in class "org.apache.ode.bpel.rtrep.v2.EXTENSIONACTIVITY" threw an unexpected exception.
java.lang.LinkageError: loader constraint violation: loader (instance of org/eclipse/osgi/internal/baseadaptor/DefaultClassLoader) previously initiated loading for a different type with name "org/w3c/dom/Document"
We are not completely sure, why this happens, but a solution is to set this system property in the SMILA.ini file:
Thanks to Bogdan Sacaleanu for the solution. See this thread in the smila-dev mailing list for additional details.