OSEE/ReqAndDesign

From Eclipsepedia

Revision as of 16:32, 31 October 2013


Logging

Requirements

  • shall handle creation/update of fine-grained log entries for at least 500 concurrent users
  • shall support logging by OSEE and other applications
  • log entries related to an individual instance of a user request shall be able to be logically grouped and accessed
  • log entries shall be quickly accessible based on any combination of source, user, timestamp, log type, duration, status
  • log entries shall be accessible (especially) when an application server is unresponsive
  • log entries shall be available until they are deleted by an admin or admin policy (applied by server automatically)
  • at run-time logging shall be enabled/disabled based on any combination of user, source, log level, and type
  • access control shall be applied on a per-log-entry-type basis

Design

Log entry in DB: entry_id, parent_id, timestamp, duration, agent_id, type_id, status, details in JSON format
Log entry in Java: long entryId, long timestamp, long duration, long agentId, long typeId, long status, long parentId, String message
Log entry type in DB: type_id, log_level, message_format
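
The entry shape above can be sketched as a plain Java value class. This is a hypothetical illustration: the class name, constant names, and constructor are assumptions, not the actual OSEE types; the status constants follow the encoding given under Status below.

```java
// Hypothetical sketch of the log entry described above; class and constant
// names are assumptions, not actual OSEE types.
public final class LogEntry {
    // status encoding from the design: 0 initial, 1-99 percent complete,
    // 100 completed normally, 101 completed abnormally
    public static final long STATUS_INITIAL = 0;
    public static final long STATUS_COMPLETED_NORMALLY = 100;
    public static final long STATUS_COMPLETED_ABNORMALLY = 101;

    public final long entryId;
    public final long parentId;   // negative for root entries
    public final long timestamp;  // ms since epoch
    public final long agentId;    // numeric OSEE user id known by all systems
    public final long typeId;     // application-defined type token
    public long duration = -1;    // stays -1 unless the associated job reports one
    public long status = STATUS_INITIAL;
    public String message;

    public LogEntry(long entryId, long parentId, long agentId, long typeId,
            String message) {
        this.entryId = entryId;
        this.parentId = parentId;
        this.timestamp = System.currentTimeMillis();
        this.agentId = agentId;
        this.typeId = typeId;
        this.message = message;
    }

    public boolean isRoot() {
        return parentId < 0;
    }
}
```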
  • id - random long returned for log method call
  • parent_id - id of the entry used to group this and related entries. It is negative for root entries, in which case it is the random id of the entry's source (client or server); id ranges are used to group by client/server kind (IDE client, app server, REST client).
  • timestamp - long with ms since epoch
  • agent_id - long agent id (numeric osee user id known by all systems using this log)
  • log_level - as defined by java.util.logging.Level
  • type_id - a fine-grained application defined type, random id, defined as tokens and stored in the db for cross application support
  • duration - starts at -1 and is never updated if duration does not apply; otherwise it is updated with the duration in ms when the associated job ends
  • Status:
 0     initial value
 1-99  percent complete
 100   completed normally
 101   completed abnormally
  • high performance
    • 2 identical pairs of hashmaps are allocated with a configurable initial size
    • newly created log entries are added to a ConcurrentHashMap using the entry_id as the key and the array of SQL insert parameters as the value
    • updated log entries are checked for in the new-entries map first and updated there if present; otherwise the update map is updated if the entry exists, else the entry is added to the update map
    • A timer task runs at a configurable (short) periodic rate, batch-inserts the log entries in the insert map, and then runs the updates. This means that any update to a log entry occurring within this configured interval requires no separate database update (e.g. writing the duration of a short operation). It also means only one thread writes to the log table per server.
    • data structure options
    • Optimize JDBC Performance
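
The two-map batching scheme above can be sketched as follows. All names are assumptions, and the JDBC flush calls are left as comments so the single-writer structure stands out; the real service would issue batched INSERT/UPDATE statements at those points. The sketch assumes the caller supplies the full, current parameter row on each update.

```java
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the insert/update write buffer described above (assumed names).
public final class LogWriteBuffer {
    // new entries keyed by entry_id; value is the array of SQL insert parameters
    private final Map<Long, Object[]> inserts = new ConcurrentHashMap<>(1024);
    private final Map<Long, Object[]> updates = new ConcurrentHashMap<>(1024);

    public void entryCreated(long entryId, Object[] insertParams) {
        inserts.put(entryId, insertParams);
    }

    public void entryUpdated(long entryId, Object[] updatedParams) {
        // check the new-entries map first: if the row has not been flushed yet,
        // fold the update into the pending insert so no UPDATE is ever issued
        if (inserts.replace(entryId, updatedParams) != null) {
            return;
        }
        // otherwise coalesce into the update map (last write per interval wins)
        updates.put(entryId, updatedParams);
    }

    // a single timer thread per server performs all writes to the log table
    public void start(long flushPeriodMs) {
        Timer timer = new Timer("log-flush", true);
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() { flush(); }
        }, flushPeriodMs, flushPeriodMs);
    }

    public void flush() {
        for (Long entryId : inserts.keySet()) {
            Object[] row = inserts.remove(entryId);
            if (row == null) {
                continue;  // already drained by a concurrent flush
            }
            // a batched JDBC INSERT of `row` would go here
        }
        for (Long entryId : updates.keySet()) {
            Object[] row = updates.remove(entryId);
            if (row == null) {
                continue;
            }
            // a batched JDBC UPDATE of `row` would go here
        }
    }

    public int pendingInserts() { return inserts.size(); }
    public int pendingUpdates() { return updates.size(); }
}
```

Note how an update arriving before the first flush leaves the update map empty: a short operation costs exactly one insert and no update, as the design intends.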


 long createEntry(long userId, long typeId);
 
 long createEntry(long userId, long typeId, long parentId);
 
 long createEntry(long userId, long typeId, Object... messageArgs);
 
 long createEntry(long userId, long typeId, long parentId, Object... messageArgs);
 
 void updateEntry(long entryId, long status, Object... messageArgs);
 
 void updateEntry(long entryId, long status);
 
 RuntimeException createEntry[Level](long userId, long typeId, Throwable throwable);
 
 RuntimeException createEntry[Level](long userId, long typeId, long parentId, Throwable throwable);
 
 RuntimeException updateEntry(long entryId, Throwable throwable);
  • The first interface to the logging data can be the basic REST navigation
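
To show the intended call pattern for the createEntry/updateEntry signatures above, here is a hypothetical in-memory stand-in; the class name and the status accessor are assumptions, and the real service would write the database rows described earlier rather than a map.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical in-memory stand-in for two of the signatures above, used only
// to illustrate the call pattern; not the actual OSEE logging service.
public final class InMemoryLog {
    private final AtomicLong nextEntryId = new AtomicLong(1);
    private final Map<Long, Long> statusByEntry = new HashMap<>();

    public long createEntry(long agentId, long typeId, Object... messageArgs) {
        long entryId = nextEntryId.getAndIncrement();
        statusByEntry.put(entryId, 0L);  // status starts at 0 (initial value)
        return entryId;
    }

    public void updateEntry(long entryId, long status, Object... messageArgs) {
        statusByEntry.put(entryId, status);
    }

    public long status(long entryId) {
        return statusByEntry.get(entryId);
    }
}
```

A caller would typically bracket a unit of work: createEntry at the start, then updateEntry(entryId, 100) on normal completion or updateEntry(entryId, 101) on abnormal completion, with intermediate percent-complete updates as needed.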

Exception Handling

Requirements

  • avoid unnecessary wrapping of exceptions

Design

Background reading:

  • Checked exceptions I love you, but you have to go
  • Why should you use Unchecked exceptions over Checked exceptions
  • Clean Code by Example: Checked versus unchecked exceptions

  • Use application-specific exceptions that extend RuntimeException - an application-specific type allows setting exception breakpoints in the debugger
  • Do not declare any run-time exceptions in any method signatures
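
A minimal sketch of the two rules above; the class name is an assumption, not an actual OSEE exception type. Because it is unchecked, no method signature has to declare it, and a debugger exception breakpoint on this exact type catches only application failures.

```java
// Hypothetical application-specific unchecked exception (name is assumed).
public class AppRuntimeException extends RuntimeException {
    public AppRuntimeException(String message) {
        super(message);
    }

    // wrap a low-level checked exception once, at the point it is caught,
    // to avoid the unnecessary re-wrapping the requirement above rules out
    public AppRuntimeException(String message, Throwable cause) {
        super(message, cause);
    }
}
```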