
Aperi Data Server R2 Extensibility



Introduction

The purpose of this page is to present information on how to extend the functionality provided by the Aperi Data Server. More than anything, this page should be viewed as a series of HOWTOs. It assumes knowledge of key Equinox / OSGi concepts (e.g., plugins, extension points, etc.), as well as a basic understanding of Aperi’s architecture. Below is a brief, high-level outline of the topics this page addresses:

  • General Infrastructure
    • Deployment
    • Serviceability
  • Service Infrastructure
  • Resource Attribute Infrastructure
  • Job / Schedule Infrastructure
  • Alert Infrastructure

As with all software projects, Aperi will evolve over time. However, the content of this page is specific to its 0.2 release. This page should be viewed as a living document. It will evolve in response to the needs of developers looking to build on top of the Aperi platform. Please feel free to share any questions, comments, and / or concerns on either the Aperi Development mailing list or the Aperi newsgroup. Any and all feedback is welcome.

General Infrastructure

This section of the page presents general information about the Data Server, addressing such topics as deployment and serviceability. Though presented in the context of the Data Server, some of what is discussed here applies to other high-level Aperi components (e.g., Agent, Device Server, etc.). Having some understanding of the information presented in this section is important for those looking to write, package, and distribute code that extends the functionality of the Data Server.

Deployment

Aperi leverages Equinox / OSGi technology. It is built as a set of plugins. Extending Data Server functionality will generally be accomplished by writing and deploying new plugins, rather than editing existing ones. OK, great. But how are plugins deployed to the Data Server? The answer to that question has two parts. First, you need to know what to do when you're working inside the IDE. If you've written a new plugin for the Data Server, how do you go about deploying and testing it within your development environment? Second, you need to understand what to do in terms of actual packaging. What steps do you need to take to allow someone to take your plugin and add it to an installed and running instance of the Data Server?

How do you deploy a new plugin within the IDE?

Working within the IDE is fairly straightforward. When working on a Data Server plugin, it is recommended that you pull all of the Aperi source code from CVS into the workspace where development is being done. When you extract Aperi from CVS, you are presented with several Equinox OSGi Framework run configurations. The RunAperiDataServer configuration is, as its name implies, used to run the Data Server inside the IDE. If you want to deploy the plugin you're working on to the Data Server, open the Data Server run configuration and check the box next to its name. If your plugin depends upon any other plugins, the boxes next to their names must be checked as well. The Add Required Plug-ins button can be used to automatically resolve plugin dependencies. However, be wary of its results. It often brings in more than the true minimum set of plugins required for successful operation. Note that using both the default start level and default start flag is acceptable. Doing so should not be problematic. With the appropriate boxes checked, the next time the Data Server is launched via the RunAperiDataServer configuration, your plugin will automatically be deployed and started. Proper operation can be verified by using the OSGi console to list the states of all of the plugins in the container and checking for error output in the OSGi configuration area associated with the RunAperiDataServer configuration (AperiDebug/datasvr/tmp).

Provide screenshot-based example of run configuration modification...

How do you deploy a new plugin outside of the IDE?

How about deploying a new plugin outside of the IDE? Assume that Aperi has already been installed. If such is the case, deploying a new plugin involves doing the following:

  • [1] Place the plugin JAR / folder in the datasvr/plugins folder of the Aperi installation. Currently, the datasvr/plugins folder contains the plugins used to run the Data Server. Long term, given the overlap in the plugins used by the Data Server and other high-level Aperi components (e.g., Agent, Device Server, etc.), some form of consolidation is likely. Regardless, it will always be necessary to drop the plugin JAR / folder in a plugins folder located within the installation.
  • [2] Manually add a reference to the plugin in the Data Server config.ini file. With the IDE, simply dropping the plugin JAR / folder in the plugins folder is enough. The IDE will detect its availability and automatically load it upon restart. The Data Server is not yet there. Currently, for a plugin to be loaded, it must be specified as part of the osgi.bundles property in the Data Server config.ini file. That file is available in the datasvr/configuration folder of an Aperi installation.
Provide example of adding entry to Data Server config.ini file...
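For example, to load a hypothetical plugin named com.example.aperi.sample, its entry would be appended to the existing osgi.bundles value (the plugin name shown here is illustrative; the existing entries are left exactly as they appear in the installed file):

# datasvr/configuration/config.ini (excerpt)
osgi.bundles=<existing entries>, com.example.aperi.sample

Standard Equinox syntax also allows a start level and start flag to be specified for an individual entry (e.g., com.example.aperi.sample@4:start); see the note on start levels below.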

A brief note on start-levels... The default start level for plugins deployed to the Data Server is four. The Data Server will automatically start any bundles with a start level at or below six. By default, any plugin with a start level greater than six must be started manually, either programmatically or via the OSGi console.

Serviceability

Logging

How do you add logging to Java code?
How do you configure Java logging?
How do you add logging to native code?
How do you configure native logging?
How do you add new messages?

Adding new messages to the Data Server can be accomplished by doing the following:

  • [1] Create a resource bundle for the messages. The resource bundle can be either a Java properties file or a class that extends ListResourceBundle. Note that the keys associated with the messages in the resource bundle must share a common prefix.
  • [2] If one does not already exist, create a message bundle adapter in the plugin containing the resource bundle created in the first step. A message bundle adapter is required to register the resource bundle with the message log infrastructure of the Data Server. When working outside the org.eclipse.aperi.common plugin, one can be created by simply extending MessageBundleAdapter. It is not necessary to override any of the methods defined in that class. Why not just use MessageBundleAdapter? Because the message bundle adapter must be able to access the resource bundle created in the first step. Since it is defined in the org.eclipse.aperi.common plugin, it does not have access to resource bundles defined in other plugins (even if those plugins export the packages containing their resource bundles; the org.eclipse.aperi.common plugin does not leverage the dynamic import functionality of OSGi).
  • [3] Using the message bundle adapter, register the resource bundle with the messageSources extension point defined in the org.eclipse.aperi.common plugin. Below is an excerpt from the plugin.xml file of the org.eclipse.aperi.common plugin.
<extension
    id="org.eclipse.aperi.core.MessageSources"
    name="Aperi Core Message Sources"
    point="org.eclipse.aperi.common.messageSources">
        <messageSource
            name="org.eclipse.aperi.xmsg.AGT"
            prefix="AGT"
            source="org.eclipse.aperi.xmsg.MessageBundleAdapter"/>
        <messageSource
            name="org.eclipse.aperi.xmsg.GEN"
            prefix="GEN"
            source="org.eclipse.aperi.xmsg.MessageBundleAdapter"/>
        <messageSource
            name="org.eclipse.aperi.xmsg.SRV"
            prefix="SRV"
            source="org.eclipse.aperi.xmsg.MessageBundleAdapter"/>
</extension>

The name attribute of the messageSource tag must be set to the fully-qualified name of the resource bundle created in the first step. The prefix attribute must be set to the message key prefix used for all messages in that bundle. The source attribute must be set to the fully-qualified name of the message bundle adapter created in the second step.

In the resource bundles that ship with the Data Server, message keys generally consist of three parts: a prefix, a numeric identifier, and a suffix. Consider, for example, the message key SRV0001E. SRV is its prefix. The Data Server has several message prefixes. SRV represents a generic server message. The numeric identifier is 0001. The suffix is E, which indicates that the associated message is an error message. There are four different suffixes:

  • E - Signifies an error message.
  • W - Signifies a warning message.
  • I - Signifies an informational message.
  • L - Signifies a message that is used as a label within the GUI.
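To make the first two steps concrete, below is a minimal sketch of a resource bundle and its message bundle adapter for a hypothetical plugin. The package, class names, message text, and the XYZ prefix are all illustrative; only ListResourceBundle and MessageBundleAdapter come from the infrastructure described above.

// File: XYZ.java -- the resource bundle; every key shares the XYZ prefix
// and follows the prefix / numeric identifier / suffix convention.
package com.example.aperi.sample.msgs;

import java.util.ListResourceBundle;

public class XYZ extends ListResourceBundle {
    protected Object[][] getContents() {
        return new Object[][] {
            { "XYZ0001E", "The sample operation failed." },
            { "XYZ0002I", "The sample operation completed successfully." }
        };
    }
}

// File: XYZMessageBundleAdapter.java -- defined in the same plugin as the
// resource bundle; no methods need to be overridden.
package com.example.aperi.sample.msgs;

import org.eclipse.aperi.xmsg.MessageBundleAdapter;

public class XYZMessageBundleAdapter extends MessageBundleAdapter {
}

The corresponding messageSource entry would then set name to com.example.aperi.sample.msgs.XYZ, prefix to XYZ, and source to com.example.aperi.sample.msgs.XYZMessageBundleAdapter.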

Present list of resource bundles that are registered with messageSources extension point.
A plugin can use any of the messages that ship with the Data Server.
Check legacy documentation for prefix information (i.e., an indication of what prefixes represent).

Tracing

How do you add tracing to Java code?

Tracing in the Data Server is accomplished using TraceLogger. TraceLogger contains the following five methods:

  • entry(). The entry() method is used to trace method entry. It takes three arguments. First, a string that contains the name of the class in which the method is defined. Second, a string that holds the name of the method. And finally, a string used to represent the arguments of the method. The final argument is generally used to house argument names. For example, for a method that takes a single integer argument named id, the third argument passed to entry() would be "id". However, the fact that the final argument is a string yields flexibility. Argument values, or even more complex strings, can be used in place of argument names. Going back to the example, the string "id (value = " + id + ")" might be used in place of just "id". Note that if entry tracing is added to a method that does not take any arguments, the third argument passed to entry() should be the empty string, rather than null.
  • exit(). The exit() method is used to trace method exit. It is overloaded, with variations that take either two or three arguments. The first two arguments are the same across all definitions of exit(). The first argument is a string that contains the name of the class in which the method is defined. The second argument is a string that holds the name of the method. For definitions of exit() that take three arguments, the third argument is the value returned by the method. The value specified for the third argument can be null.
  • exception(). The exception() method is used to trace exceptions caught within a method. It takes three arguments. First, a string that contains the name of the class in which the method is defined. Second, a string that holds the name of the method. And finally, the exception to be traced. Both the message and stack trace of the exception are included in the output produced by exception().
  • traceMessage(). The traceMessage() method is used to produce custom trace output. It takes four arguments. The first argument corresponds to the debug level that should be associated with the message. The specified value must be one of the following three integer constants defined in TraceLogger: DEBUG_MIN, DEBUG_MID, and DEBUG_MAX. Setting the level of trace output on the Data Server to DEBUG_MIN (the default whenever tracing is enabled) produces the least verbose output. Setting it to DEBUG_MAX produces the most verbose output. Those facts should be kept in mind when deciding what value to pass as the first argument to traceMessage(). The second argument is a string that contains the name of the class in which the calling method is defined. The third argument is a string that holds the name of the calling method. The last argument is a custom message string.
  • auditMessage(). The auditMessage() method is used to tie activity in the Data Server to a specific user. The output it produces is sent to a different file than the output of the four methods discussed above. The auditMessage() method takes three arguments. The first argument is a string containing the name of the class in which the calling method is defined. The second argument is a string that holds the name of the calling method. The last argument is a custom message string. The custom message string should contain any information needed to tie a given action to a specific user.

For performance reasons, every call to either entry(), exit(), exception() or traceMessage() should only be made after checking whether or not the enableTrace flag of TraceLogger is set to true. Similarly, every call to auditMessage() should only be made after checking the state of the enableAudit flag. The following code snippet brings everything together:

Insert code sample...
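In lieu of the missing sample, here is a minimal sketch based on the method descriptions above. The class, method, and argument values are hypothetical; only the TraceLogger calls and the enableTrace / enableAudit checks reflect the infrastructure described in this section.

// Hypothetical class used only to illustrate the tracing calls.
public class SampleRegistrar {
    private static final String CLASS_NAME = SampleRegistrar.class.getName();

    public int register(String hostName) {
        if (TraceLogger.enableTrace) {
            // Trace method entry; the third argument describes the arguments
            TraceLogger.entry(CLASS_NAME, "register",
                    "hostName (value = " + hostName + ")");
        }

        int id = -1;
        try {
            if (TraceLogger.enableTrace) {
                // Custom debug output at the least verbose level
                TraceLogger.traceMessage(TraceLogger.DEBUG_MIN, CLASS_NAME,
                        "register", "Registering " + hostName);
            }

            id = doRegister(hostName); // hypothetical helper

            if (TraceLogger.enableAudit) {
                // Tie the action to a specific user
                TraceLogger.auditMessage(CLASS_NAME, "register",
                        "Host " + hostName + " registered by user administrator");
            }
        } catch (Exception e) {
            if (TraceLogger.enableTrace) {
                // Trace the caught exception (message and stack trace)
                TraceLogger.exception(CLASS_NAME, "register", e);
            }
        }

        if (TraceLogger.enableTrace) {
            // Trace method exit along with the value being returned
            TraceLogger.exit(CLASS_NAME, "register", Integer.valueOf(id));
        }

        return id;
    }

    private int doRegister(String hostName) throws Exception {
        return 0; // placeholder
    }
}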
How do you configure Java tracing?

Configuration for Java entry, exit, exception and debug message tracing is set at the service provider level. It is discussed below. The enableTrace flag of TraceLogger is set to true if tracing is enabled for at least one service provider. Audit tracing configuration is set for the entire server. Whether audit tracing is enabled is controlled by the ITSRM.logger.audit.logging property in ServerTraceLog.config. It is mirrored by the enableAudit flag defined in TraceLogger. ServerTraceLog.config is stored in the datasvr/configuration directory of an Aperi installation.

How do you add tracing to native code?

Native tracing functionality is provided by the DataLog shared library (lib/w32-ix86/DataLog.dll on Windows and lib/linux-ix86/libDataLog.so on Linux). You can add tracing to your native code by linking to that shared library and using the macros defined in LogManagerUtil.h. To see those macros in action, check out the Java_org_eclipse_aperi_logging_NativeLogManager_executeTraceMacroTest() function in LogManager.c.

How do you configure native tracing?

Discuss nativelog.config configuration file.
Go through srv.* properties.
Note that some level of tracing is always enabled (default: DEBUG_MIN).
Native tracing is configured at the JVM level. One setting is used for the entire Data Server.

Miscellaneous

How do you send an e-mail from the Data Server?

Service Infrastructure

The key service infrastructure building blocks are service providers, request handlers, request objects, and response objects. Service providers handle thread management for, and coordinate message log output across, a set of request handlers. Request handlers are responsible for processing incoming request objects and producing corresponding response objects. This section focuses on answering the following questions related to the service infrastructure:

How do you create and register a new service provider?

Before getting into a discussion of how to create and register a new service provider, it is important to note that the Data Server ships with the following service providers:

  • AgentSvp. The request handlers associated with the Agent service provider process non-GUI-originated requests. They take care of requests submitted by agents and handle several internal server-side jobs.
  • CimomSvp. The request handlers associated with the CIMOM service provider are responsible for requests and internal server-side jobs that require communication with a CIMOM. The CIMOM service provider is intended to be used to process long-running requests, isolating them from others that can be processed more quickly.
  • GuiSvp. The request handlers associated with the GUI service provider are designed to handle requests submitted by the GUI.
  • SchedulerSvp. The request handlers associated with the Scheduler service provider handle requests related to the Data Server scheduler (e.g., requests to obtain information about long running jobs, etc.).
  • ServerSvp. The request handlers associated with the Server service provider handle generic management tasks (e.g., service provider startup / shutdown, agent registration, etc.).

The availability of the service providers listed above largely eliminates the need to implement new service providers. Service providers and request handlers are loosely coupled. A given request handler can be associated with any existing service provider.

All of the above said, there may be certain situations that require the creation and registration of a new service provider. For example, there may be a desire to logically isolate some set of request handlers from all of the other request handlers in the Data Server. Or, there may be a requirement that the message log output produced by some set of request handlers appear in a specific file (all of the message log output produced by the request handlers associated with a given service provider is written to a single file). Whatever the case may be, creating and registering a new service provider can be accomplished by carrying out the following steps:

  • [1] Define a request type constant for the service provider. It must be a string value that is unique to the service provider, at least within the JVM. In support of uniqueness, the use of a Java-based naming scheme is recommended. For example, the request type constant associated with the Agent service provider is org.eclipse.aperi.agent.svp.AgentSvp, the fully qualified name of the Agent service provider implementation class. The request type constant should be placed in a class defined in a package that is exported by the plugin in which it resides. Request type constants are used to set the typeCode instance variable of Request objects, which is involved in routing requests, within the Data Server, to the appropriate service provider. Here is a sample request type constant definition:
Insert sample code...
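In lieu of the missing sample, a request type constant for a hypothetical service provider might be defined as follows (the package, class, and constant names are all illustrative):

package com.example.aperi.sample;

// Holds the request type constant for the hypothetical sample service
// provider. The containing package must be exported by its plugin.
public class SampleRequestType {
    // Java-style value; by convention it matches the fully-qualified name
    // of the service provider implementation class.
    public static final String SAMPLE_SVP =
            "com.example.aperi.sample.svp.SampleSvp";
}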
  • [2] Define a request type metadata source for the service provider. There are three key pieces of metadata associated with request type constants. First there is the internal name, used to identify the service provider in trace output and error messages produced by the Data Server. Next, there is the external name, used to identify the service provider in the list of Data Server services presented in the GUI. And finally, there is the log prefix, used to name the message log files associated with the service provider. The metadata source should implement IMetadataSource and look like the RequestTypeSource. The plugin containing the metadata source must import the appropriate packages from the org.eclipse.aperi.common plugin. A request type metadata source might look something like this:
Insert sample code...
Present GUI screenshot (list of Data Server services)...
  • [3] Register the request type metadata source associated with the service provider. The org.eclipse.aperi.common plugin defines the requestTypeSource extension point for registering request type metadata sources. The following XML snippet would be added to the plugin.xml of the plugin containing the metadata source presented above:
Insert sample XML content...

Request type metadata is used not only in the Data Server, but also in the Agent and the GUI. That means that the request type metadata source must be registered in multiple JVMs. Given that requirement, it makes sense to place the request type constant, and its associated metadata source, in a plugin separate from the one containing the actual service provider implementation. For example, the request type constants and metadata sources associated with the service providers that ship with the Data Server are defined in the org.eclipse.aperi.common plugin (see RequestType and RequestTypeSource), while their implementations are in the org.eclipse.aperi.server.data plugin (e.g., AgentSvp).

  • [4] Create a request handler extension point for the service provider. The extension point is used to register request handlers with the service provider. Its definition should leverage the requestHandler extension point defined in the org.eclipse.aperi.server.data plugin, and look something like this:
Insert screenshot of extension point wizard...
Insert sample code (stripped-down schema) (highlight schema inclusion)...

For the sake of reference, the request handler extension point schemas associated with the service providers that ship with the Data Server are available in the schema folder of the org.eclipse.aperi.server.data plugin.

  • [5] Create the service provider implementation. Every service provider must extend the abstract ServiceProvider class, which is reproduced below:

/**
 * This abstract class must be extended by all service providers.
 * The methods declared in this class are used by the service
 * controller to initiate and terminate a service provider.
 */
public abstract class ServiceProvider
{
    /**
     * Must return the service ID that can be used to look up this
     * provider's name and major request code. The major request code
     * describes the category of requests that this provider is capable 
     * of processing.
     */
    public abstract String getTypeCode();

    /**
     * Should spawn 1 or more threads to handle the requests placed on
     * the passed ServiceQueue. Threading & routing of requests to your
     * service's request handlers can be managed by the RequestManager
     * class. Should return to caller after kicking off your threads.
     */
    public abstract void startup(ServiceQueue queue) throws Exception;

    /**
     * Should terminate all threads of execution, release all held 
     * resources and return to caller. The shutdownOption will be one of
     * Constants.SHUTDOWN_NORMAL or Constants.SHUTDOWN_IMMEDIATE.
     */
    public abstract void shutdown(byte shutdownOption);
}

The getTypeCode() method must return the request type constant associated with the service provider. The startup() method should instantiate and initialize a new RequestManager. The RequestManager class manages the request handlers registered against the service provider. It takes care of thread management and is responsible for routing incoming requests to the correct handlers. The RequestManager must be constructed using the ID of the extension point set up to accommodate request handler registration for the service provider. The shutdown() method simply needs to dispose of the RequestManager created in startup(). A full service provider implementation might look something like this:

Insert sample code (see ServerSvp)
(use pseudocode comment to refer to loading service provider configuration) 
(comment parameters passed to RequestManager constructor)
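In lieu of the missing sample, here is a bare-bones sketch of a service provider implementation. The class name and request type constant are hypothetical, imports of the Aperi server classes are omitted, and the RequestManager construction is left as a comment because its exact constructor arguments are not reproduced on this page (see ServerSvp for a complete example).

package com.example.aperi.sample.svp;

// Hypothetical service provider; imports of the Aperi server classes
// (ServiceProvider, ServiceQueue, RequestManager) are omitted.
public class SampleSvp extends ServiceProvider {
    // Manages threading and routing for the registered request handlers
    private RequestManager requestManager;

    public String getTypeCode() {
        // Return the request type constant defined for this service provider
        return SampleRequestType.SAMPLE_SVP;
    }

    public void startup(ServiceQueue queue) throws Exception {
        // Construct and initialize a RequestManager here, passing (among
        // other things) the ID of the request handler extension point set
        // up for this service provider, then return once its threads are
        // running. For example:
        // requestManager = new RequestManager(/* see ServerSvp */);
    }

    public void shutdown(byte shutdownOption) {
        // Dispose of the RequestManager created in startup(), releasing its
        // threads and any other held resources.
        requestManager = null;
    }
}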
  • [6] Register the service provider implementation. The org.eclipse.aperi.server.data plugin defines the serviceProvider extension point for service provider registration. The following XML snippet would be added to the plugin.xml containing the service provider implementation presented above:
Insert sample XML content...

How do you create and register a new request handler?

Request handlers are responsible for processing the requests sent to the Data Server. To do any work, they must be registered against an existing service provider. The Data Server ships with several built-in request handlers, available in the org.eclipse.aperi.server.handler package and subpackages of the org.eclipse.aperi.server.data plugin. How does one create and register a new request handler? By doing the following:

  • [1] Define a request handler constant for the request handler. It must be a string value unique to the request handler within the scope of such values associated with the service provider against which it will be registered. In support of uniqueness, the use of a Java-based naming scheme is recommended. For example, the request handler constant associated with the server status request handler registered against the Server service provider is org.eclipse.aperi.server.handler.server.ServerStatusHndlr, its fully-qualified class name. The request handler constant should be placed in a class defined in a package that is exported by the plugin in which it resides. Request handler constants are used to set the subType instance variable of Request objects, which is involved in routing requests, within a service provider, to the appropriate request handler. Here is a sample request handler constant definition:
Insert sample code...
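In lieu of the missing sample, a request handler constant might look like this (the class and value shown are illustrative):

// Hypothetical holder class; its package must be exported by the plugin.
public class SampleHandlerType {
    // By convention the value matches the handler's fully-qualified class name.
    public static final String SAMPLE_STATUS_HNDLR =
            "com.example.aperi.sample.handler.SampleStatusHndlr";
}

The next step is to create the request handler implementation itself, which must implement the RequestHandler interface (or, as discussed below, extend DeviceRequestHndlr):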
/**
 * Describes an object capable of processing a work request. Optionally, an
 * implementation of this class can implement either the ThreadSafe interface
 * or the ThrowAway interface. Doing so affects caching in the request manager.
 * By default, request handler instances are cached in request manager threads.
 * If the ThreadSafe interface is implemented, an instance of the request
 * handler class will be cached by the request manager and shared across
 * all of its active threads. If the ThrowAway interface is implemented, no
 * instance of the request handler class will be cached. The ThrowAway
 * interface must take priority over the ThreadSafe interface.
 */
public interface RequestHandler {
    Response handle(Request request, Transceiver transceiver);
}

The RequestHandler interface is fairly straightforward, requiring implementation of just a single method, handle(). The Request argument contains the information put together by the party that submitted the request to the Data Server. The request handler returns the results of its work with the Response object returned from handle(). The Transceiver argument is present to support two-way communication between the caller and the request handler during request processing, which is required in certain scenarios.

Notice that the handle() method does not declare any checked exceptions. All checked exceptions must be caught within handle(). Runtime exceptions and errors do not need to be taken care of by handle(). It should be noted, however, that uncaught runtime exceptions and errors result in termination of the request manager thread from which the call to handle() was made. Additionally, they result in service provider shutdown for any service provider that initializes its request manager with an instance of SvpThreadGroup. Such is true for the service providers that ship with the Data Server (e.g., ServerSvp, GuiSvp, etc.). For service providers that use SvpThreadGroup, an out of memory error takes down the entire Data Server.

The Javadoc on the RequestHandler interface presented above references two additional interfaces, ThreadSafe and ThrowAway. Both interfaces are reproduced below:

/**
 * Marker interface used to flag an object that can be driven by multiple
 * threads simultaneously. If a request handler class implements this
 * interface, its instances will be cached by the request manager and
 * shared across all of its active threads. The ThrowAway interface must
 * take priority over this interface.
 */
public interface ThreadSafe {
}
/**
 * Marker interface used to flag an object that should be discarded when it
 * is no longer needed. If a request handler class implements this interface,
 * its instances will not be cached in either a request manager or any of its
 * threads. This interface must take priority over the ThreadSafe interface.
 */
public interface ThrowAway {
}

The ThreadSafe and ThrowAway interfaces can be used to control the behavior of the request handler in the context of the thread management performed by a request manager, on behalf of a service provider. By default, a single instance of the request handler is allocated for, and cached within, every request manager thread. If the request handler is marked as thread safe, then a single instance is allocated and shared across all request manager threads. If the request handler is marked as throw away, then a new instance is allocated for every request. No caching is performed. It does not make sense for a request handler to implement both interfaces. However, if such is done, the request manager errs on the side of safety, giving priority to the ThrowAway interface. As an aside, it should be noted that the request manager does all request handler allocation in a lazy fashion (i.e., only when necessary).

The request handler can be either single-phase or multi-phase. With a single-phase request handler, the caller is expected to send all of the data required to process its request in a single transmission. In turn, the request handler responds with a single transmission, via the Response object returned from its implementation of handle(). With a multi-phase request handler, the caller and request handler use a Transceiver to communicate and exchange data while a given request is being processed. Sample1PhaseHandler is an example of a single-phase request handler. Sample2PhaseHandler is an example of a multi-phase request handler.
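As a rough sketch, a trivial single-phase, thread-safe request handler might look like the following. The class name and response payload are hypothetical; only RequestHandler, ThreadSafe, and Response.getResponse() come from the infrastructure described above, and the direct assignment to responseData assumes the public instance variable discussed later on this page.

// Hypothetical single-phase request handler. Implementing ThreadSafe lets
// the request manager cache one instance and share it across its threads.
public class SampleStatusHndlr implements RequestHandler, ThreadSafe {
    public Response handle(Request request, Transceiver transceiver) {
        // Assume success; the status would be changed if processing failed
        Response response = Response.getResponse(Response.SUCCESS, null);

        // Place the (serializable) result data in the response for the caller
        response.responseData = "OK";

        return response;
    }
}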

It may be the case that the request handler must communicate with the Device Server to successfully process a request. If such is the case, rather than implementing the RequestHandler interface, the request handler should extend the abstract DeviceRequestHndlr class and implement the abstract deviceAPI() method:

/**
 * An implementation of this method should invoke Device Server API, place
 * the returned data in the response object and return. Any error condition
 * should cause Response.status to be set to Response.ERROR and an
 * appropriate message to be placed in Response.errorMessage.
 */
public abstract int deviceAPI(DeviceRequest request, Response response);

DeviceRequestHndlr handles the common task of initializing communication with the Device Server.

  • [3] Register the request handler implementation with an existing service provider. Every service provider should have an associated extension point for request handler registration. For example, the following XML snippet is used to register the server status handler with the Server service provider in the org.eclipse.aperi.server.data plugin:
<extension
    id="org.eclipse.aperi.server.handler.server.ServerStatusHndlr"
    name="org.eclipse.aperi.server.handler.server.ServerStatusHndlr"
    point="org.eclipse.aperi.server.data.serverHandler">
    <requestHandler impl=
        "org.eclipse.aperi.server.handler.server.ServerStatusHndlr"/>
</extension>

The request handler extension point associated with the Server service provider is serverHandler. The id attribute of the extension tag is set to the request handler constant string value. The impl attribute of the requestHandler tag (defined in the generic request handler extension, requestHandler), is set to the name of the class that implements the RequestHandler interface. In line with the naming convention discussed above, it matches the request handler constant string value. The value assigned to the name attribute of the extension tag is not important.

It is important to note that request handlers and service providers are loosely coupled. In order to do anything useful, the request handler must be registered against at least one service provider. However, there is nothing that prevents it from being registered against multiple service providers. In fact, a request handler can even be registered against the same service provider multiple times, provided that each registration is performed under a different request handler constant (e.g., the built-in configuration settings handler of the Data Server, ConfigSettingsHndlr).

How do you create a custom request / response object?

A requestData instance variable is defined in Request. It can be used to pass custom request data objects to a request handler. If a request handler requires specification of a custom request data object, it should check the validity of an incoming request using the isDesiredType() method of RequestChecker. For example, the GUI login validation request handler, SignonHndlr, expects that requestData be an instance of SignonReq. As such, it begins with the following block of code (comments added for explanation):

// Create response object assuming request will result in success
Response response = Response.getResponse(Response.SUCCESS, null);

// Check whether or not request.requestData is an instance of SignonReq
// The status of the response is set to error if such is not the case
if (!RequestChecker.isDesiredType(SignonReq.class, request, response)) {
    // The type of request.requestData is invalid
    // Return the error response to the caller
    // The response contains a message indicating what went wrong
    Response traceResult = response;
    if (TraceLogger.enableTrace) {
        TraceLogger.exit(
            SignonHndlr.class.getName(),
            "handle",
            traceResult);
    }

    return traceResult;
}

// The type of request.requestData is valid
// Obtain the custom request data object and move on ...
SignonReq sr = (SignonReq) request.requestData;

Custom request data objects must implement the Serializable interface and appropriately set the serialVersionUID class variable. Such is required because communication with the Data Server involves the use of object serialization.
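A custom request data object might therefore look something like this (the class name and fields are illustrative):

import java.io.Serializable;

// Hypothetical custom request data object passed via Request.requestData.
public class SampleStatusReq implements Serializable {
    // Required because requests are exchanged using Java object serialization
    private static final long serialVersionUID = 1L;

    public String hostName;
    public int timeoutSeconds;
}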

The responseData instance variable defined in Response is similar to the requestData instance variable defined in Request. However, there is nothing equivalent to the isDesiredType() method of RequestChecker for custom response data objects. The responseData instance variable can be used to pass custom response data objects from a request handler to a caller. It is important to note that custom request / response data objects must be available to both the code that submits requests / handles responses and their corresponding request handlers. Several request / response data objects are available in the org.eclipse.aperi.common plugin (e.g., see the org.eclipse.aperi.server.req package).

In addition to the creation of custom request / response data objects, as discussed above, it is also possible to extend Request and Response. For example, that is what was done to create DeviceRequest, used for requests sent to the Data Server to be routed to the Device Server.

How do you configure and initialize Java logging for a service provider?

As part of its serviceability infrastructure, the Data Server provides a mechanism to add logging to Java code. Messages are logged using the methods defined in MessageLog, discussed above. Each service provider has an associated log file. The name of that log file is controlled by the log prefix metadata value specified for the request type constant of the service provider (see the LOG_PREFIX constant in RequestTypeManager). The server.config file, stored in the datasvr/configuration directory of an Aperi installation, contains the following two logging configuration parameters: logsKept and messagesPerLog. The messagesPerLog parameter sets a limit on the number of messages stored in a given log file. A new log file is created if that limit is reached. A new log file is also created whenever a service provider starts. The logsKept parameter controls how many log files the Data Server keeps around for a given service provider. The service manager automatically initializes logging for a service provider whenever it starts.
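Assuming the standard properties-file format, the relevant portion of server.config might look like the following excerpt (the values shown are illustrative, not the shipped defaults):

# datasvr/configuration/server.config (excerpt)
logsKept=5
messagesPerLog=100000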

How do you configure and initialize Java tracing for a service provider?

As part of its serviceability infrastructure, the Data Server provides a mechanism to add entry, exit, exception and debug trace messages to Java code. That functionality is discussed above. Tracing is configured at the service provider level. All of the threads associated with a given service provider share the same tracing configuration. OK, great. But how do you configure and initialize tracing for a service provider and its associated threads? By doing the following:

  • [1] Create a JLog configuration file for the service provider. How? Information on creating a JLog configuration file is available in the JLog User's Guide. You can also refer to the ServerTraceLog.config file as an example. The JLog configuration file for the service provider might look something like this:
MyServiceProvider.trace.className=com.ibm.log.PDLogger
MyServiceProvider.trace.listenerNames=MyServiceProvider.handler.file
MyServiceProvider.trace.logging=false
MyServiceProvider.trace.level=DEBUG_MAX

MyServiceProvider.handler.file.className=com.ibm.log.FileHandler
MyServiceProvider.handler.file.formatter=MyServiceProvider.trace.formatter
MyServiceProvider.handler.file.appending=true
MyServiceProvider.handler.file.encoding=UTF8
MyServiceProvider.handler.file.maxFileSize=20480
MyServiceProvider.handler.file.maxFiles=5

MyServiceProvider.trace.formatter.className=org.eclipse.aperi.logging.TraceFormat
MyServiceProvider.trace.formatter.timeFormat=HH\:mm\:ss.SSS
MyServiceProvider.trace.formatter.dateFormat=yyyy-MM-dd
  • [2] Initialize tracing by invoking the two argument version of the initializeTraceLogger() method defined in TraceLoggerFactory. The first argument must be a string specifying the location of the configuration file created in the previous step. (Note that the tracing configuration file cannot be packaged inside a plugin.) The second argument must be a string representing the full path to the location where trace output is to be stored. initializeTraceLogger() only needs to be called once. It should be invoked from a static initialization block inside the service provider class. An error message is logged to the main server message log file if an error occurs while attempting to load the tracing configuration.
  • [3] Start tracing by invoking the startTraceLogger() method defined in TraceLoggerFactory. The first argument must be a string containing the name of the JLog tracing object. For the sample configuration presented above, that would be MyServiceProvider.trace. The second argument must be the request type string constant associated with the service provider. Similar to initializeTraceLogger(), an error message is logged to the main server message log file if an error occurs while attempting to start tracing. The startTraceLogger() method should be the first thing called from the startup() method of the service provider implementation. To match that invocation of startTraceLogger(), the stopTraceLogger() method of TraceLoggerFactory, which takes a single string argument containing a request type constant, should be the last thing called from the shutdown() method of the service provider implementation.

After taking the steps above, the output produced by any call to a method defined in TraceLogger from a thread associated with the service provider is written to the location specified in the call to initializeTraceLogger(). Tracing can be turned on by setting the logging property of the service provider trace object (e.g., MyServiceProvider.trace) to true. Verbosity can be controlled by setting the level property of the service provider trace object to either DEBUG_MIN, DEBUG_MID, or DEBUG_MAX.
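Bringing the three steps together, the tracing-related portions of the hypothetical SampleSvp sketched earlier might look like this. The file paths are illustrative, and only the TraceLoggerFactory calls follow the descriptions above.

// Hypothetical service provider; request handling setup is omitted (see the
// service provider skeleton earlier on this page).
public class SampleSvp extends ServiceProvider {
    static {
        // Step 2: load the JLog configuration (which cannot be packaged
        // inside a plugin) and choose where trace output is written.
        TraceLoggerFactory.initializeTraceLogger(
                "C:/Aperi/datasvr/configuration/SampleSvpTrace.config",
                "C:/Aperi/datasvr/log/trace");
    }

    public void startup(ServiceQueue queue) throws Exception {
        // Step 3: start tracing before doing anything else
        TraceLoggerFactory.startTraceLogger(
                "MyServiceProvider.trace", SampleRequestType.SAMPLE_SVP);
        // ... RequestManager setup (omitted) ...
    }

    public void shutdown(byte shutdownOption) {
        // ... RequestManager teardown (omitted) ...
        // Stop tracing as the last thing done during shutdown
        TraceLoggerFactory.stopTraceLogger(SampleRequestType.SAMPLE_SVP);
    }

    public String getTypeCode() {
        return SampleRequestType.SAMPLE_SVP;
    }
}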

Resource Type Infrastructure

How do you define a new resource type?

A resource type constant can be defined anywhere. However, the value assigned to the constant should be unique. The fact that a resource type is a numeric value makes that difficult. If you're sharing an instance of Aperi with other plugins, you can't currently know what resource type constants they've defined. Even if you did, your code is already written. At some point, we may wish to define some sort of public registry. As it stands, however, sharing an instance of Aperi might be complicated. Note that moving away from the use of numeric values is complicated by the fact that resource types are used all over the database. A fair amount of change would be required to the code and database schema to move to something more flexible, like string values.
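As things stand, then, defining a new resource type amounts to picking a numeric constant and checking, as best as possible, that it does not collide with anything else (the constant name and value below are illustrative):

// Hypothetical resource type constant. The numeric value must not collide
// with the resource types defined by Aperi or by any other deployed plugin;
// there is currently no central registry that guarantees uniqueness.
public static final int SAMPLE_RESOURCE_TYPE = 90001;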

With respect to where a resource type constant is defined, touch on possibility of just editing existing ResourceType class, instead of defining one specific to bundle. Note that such goes against principle of not touching base code, but is an easier solution if you're not concerned about someone being able to share your modified version of Aperi with plugins from another developer. Editing the core Aperi code, without checking those changes back into CVS, kills the ability to share the container.

Touch on use of metadata infrastructure. Mention that it is not necessary to set values for all of the metadata keys specified in ResourceTypeManager. It is only necessary to define metadata for the context in which the resource type will be used. Additionally, note that developers are not limited to the set of keys defined in ResourceTypeManager. The metadata infrastructure is flexible enough to support new keys specific to the work done by a given plugin. Provide an example of defining a new key for a new resource type constant. Mention that it is possible to define new pieces of metadata for existing resource type constants. Consider breaking this one question into multiple smaller questions.

Job / Scheduler Infrastructure

How do you create and register a new job for execution on the Agent?

The following steps are required to create and register a new job for execution
on the Agent:

1. Create job on agent. Implement IExecutable interface.
2. Register job implementation with job extension point on Agent.
3. Create resource type for job.
4. Set basic metadata values for resource type (e.g., NAME).
     (Check common & server ResourceTypeSource to see important metadata values)
     (Note that a lot of the metadata associated with resource types is used in GUI)
     (Refer reader to resource type section for additional discussion)
5. Set agent job specific metadata values for resource type in plugin resource type source:
     (Check server ResourceTypeSource for examples)
     (Remember to describe what each metadata key represents)
       - JOB_NAME
       - AGENT_JOB_RUN_NAME
       - AGENT_JOB_RUN_COMMAND
6. Register plugin resource type source with resource type source extension point
   in org.eclipse.aperi.common plugin.

Discuss how to pass parameters to job. Currently, there doesn't appear to be a way
to do so. Room for potential enhancement? How do things work? Consider the probe
job. Information sent to agent to launch job includes reference to schedule entry
associated with probe. The schedule entry is stored in the database and contains
information associated with the probe job. Instead of information being passed to
Agent with request to launch job, nothing is sent and the Agent turns around to ask
the Data Server for information. The idea was to reduce memory usage in the Agent JVM,
by allowing it to avoid holding on to the data associated with a job while the job
is queued. The way things are set up, the agent only needs to maintain information /
job parameters associated with active jobs, after it has been retrieved from the Data
Server.

Consider whether or not there is anything else that must be brought up for Agent jobs.

How do you create and register a new job for execution on the Data Server?

The steps required to create a job for execution on the Data Server are very
similar to those required to create a job for execution on the Agent. There are
a few key differences. First, instead of implementing the IExecutable interface,
the job implementation must implement the ServerJob interface. Second, instead
of registering with the job extension point on the Agent, the implementation
must be registered against the job extension point of the Data Server. And third,
the metadata values that must be set for the resource type constant are different.
JOB_NAME still needs to be set. However, in place of AGENT_JOB_RUN_NAME and
AGENT_JOB_RUN_COMMAND, the following four metadata values must be set: SERVER_JOB_RUN_NAME,
SERVER_JOB_RUN_COMMAND, SERVER_JOB_RUN_REQUEST_CODE and SERVER_JOB_RUN_HANDLER_CODE. All
four metadata values must be explained.

If it isn't discussed elsewhere, it might make sense to touch on the VMTransceiver
in this section of the page, as a means by which requests can be transmitted within
the JVM. Such is done to support server side job execution. However, a discussion
of VMTransceiver, and communication in general, may be better off in its own section.

Note that server jobs face a similar parameter passing issue to agent jobs. There
isn't really any way to pass parameters to server jobs. However, such may not be
all that big a deal, because the jobs are running in the Data Server and can
access whatever information they need.

Either find or come up with a good example to run through for this section.

How do you create and register a new job for execution on the Device Server?

How do you schedule a job for immediate execution?

Distinguish between what needs to be done for the Agent / Data Server and the Device Server. For the Agent / Data Server, you go through the SchRunNowHandler. For the Device Server, you can just set up a Data Server request handler that interacts with the job service on the Device Server (e.g., see / discuss SubsystemActionHandler).

How do you schedule a job for future execution?

Note that this can only be done with Agent and Data Server jobs. The job must implement either the IExecutable interface (for Agent jobs) or the ServerJob interface (for Data Server jobs).

How do you check job status?

Distinguish between what needs to be done for the Agent / Data Server and the Device Server.

How do you cancel a job?

Alert Infrastructure

The Alert Infrastructure has issues similar to those of the Resource Type Infrastructure, especially given its use of the metadata infrastructure. Re-emphasize those issues. Discuss any limitations on extensibility (e.g., the static arrays in TAlertLog, etc.). Touch on relationship between GUI and Data Server with respect to defining alerts.

How do you define a new alert product?

How do you define a new alert type?

How do you define a new alert condition?

How do you define a new alert class?

How do you define an alert?

How do you trigger an alert?
