CDT/MultiCoreDebugWorkingGroup/SynchronizedOperations

Latest revision as of 01:05, 7 February 2011

Synchronized Run Control Operations

Some multi-core debuggers allow performing synchronized operations on multiple cores: each run control operation affects all chosen cores synchronously. Letting the debugger or the hardware operate on multiple cores at once minimizes the skid, i.e. the time lag between when the operation takes effect on the different cores.

Grouped Run Control Operations

Run control operations (step, resume, suspend) should be allowed on multiple debug entries in a single user operation. The user should be able to perform such operations on one or more processes/threads/cores/groups.
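As a minimal sketch of what "one user operation, many debug entries" means, the following fans a single resume out over a multi-selection. `IExecution` and `StubExecution` are illustrative stand-ins, not actual CDT or DSF types:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a grouped run-control command: one user action
// fans out to every selected debug entry (process/thread/core/group).
public class GroupedResume {
    public interface IExecution {
        String name();
        void resume();
        boolean isSuspended();
    }

    // Minimal stand-in for a selectable debug entry.
    public static class StubExecution implements IExecution {
        private final String name;
        private boolean suspended = true;
        public StubExecution(String name) { this.name = name; }
        public String name() { return name; }
        public void resume() { suspended = false; }
        public boolean isSuspended() { return suspended; }
    }

    // Apply one user "resume" to every entry in the selection,
    // returning the names of the entries that were resumed.
    public static List<String> resumeAll(List<? extends IExecution> selection) {
        List<String> resumed = new ArrayList<>();
        for (IExecution e : selection) {
            e.resume();
            resumed.add(e.name());
        }
        return resumed;
    }
}
```

In a real implementation the loop body would delegate to the debugger's run-control service for each selected element rather than flipping a flag.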

Related Bugzilla entries:

Bug 330974 If the user selects two nodes in the debug view most debug commands are disabled

What is Tensilica currently providing?

Tensilica’s Xtensa Xplorer currently uses the CDT CDI implementation. Xplorer provides facilities to debug multi-core applications that run on an Xtensa system. At present the MP system can be actual hardware, an emulation, or a simulation of the Xtensa system.

Depending on the implementation of the MP system, it can suspend/run synchronously or asynchronously. For hardware and emulation, Xtensa cores can optionally be wired up with break-in/out ports connected such that one core halting propagates the halt to the other cores, giving “synchronous halt/run”. If the hardware is not wired up like this, synchronized debug is not really possible: the UI can suspend the other debuggers, but only after a relatively long delay. In simulation, there is flexible control over whether halting a core halts the clock for just that core or for all of them.

The model we use is one gdb (which is not thread aware) per target processor. Targets within a system can, of course, be heterogeneous. Thus, there may be one UI with 10 gdbs all connected to one MP simulator (or one JTAG chain controller). This approach has its difficulties when it comes to synchronous debugging. An Xtensa MP system configured to run synchronously will stop all cores when one core hits a breakpoint, but the UI does not reflect this automatically, so we explicitly have to pause all the other gdbs to reflect it.

When the user wishes to continue (step or run), we need to issue the continue to every gdb.
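The coordination described above can be sketched as a small controller over one session per core: when one core reports a suspend, every other gdb is interrupted so the UI catches up, and a user continue is fanned out to every gdb. `GdbSession` and its methods are illustrative assumptions, not Xplorer or DSF APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the per-core coordination: one gdb session per core.
// On a synchronous stop the target has already halted the other cores,
// but each gdb must still be interrupted so the UI reflects the stop.
public class SyncCoordinator {
    public static class GdbSession {
        private final String core;
        private boolean suspended = false;  // core starts out running
        public GdbSession(String core) { this.core = core; }
        public String core() { return core; }
        public boolean isSuspended() { return suspended; }
        void interrupt() { suspended = true; }   // e.g. send an interrupt to gdb
        void resume()    { suspended = false; }  // e.g. issue "continue"
    }

    private final List<GdbSession> sessions = new ArrayList<>();
    public void add(GdbSession s) { sessions.add(s); }

    // One core hit a breakpoint: mark it suspended and explicitly pause
    // every other gdb so the debug view shows the synchronous stop.
    public void onSuspended(GdbSession source) {
        source.interrupt();
        for (GdbSession s : sessions)
            if (s != source && !s.isSuspended()) s.interrupt();
    }

    // A user continue/step is issued to every gdb.
    public void resumeAll() {
        for (GdbSession s : sessions) s.resume();
    }
}
```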

Our current implementation has the following assumptions/limitations:

  1. All cores within one launch are assumed to be either all synchronized or all unsynchronized within the target system/simulation. There is no notion of the UI deciding that some set of cores is synchronized when they are not synchronized “within” the system.
  2. All cores are treated equally; there is no concept of a leader core, with synchronization driven by the leading core’s activity.
  3. There is no option to temporarily exclude one core from the synchronized refresh of the UI, i.e. to exclude a core (or, more specifically, the gdb connected to that core) from being paused during a synchronous stop. In certain cases where gdb is not responsive (such as a core in RunStall), any attempt to do anything on that gdb causes trouble. The user may know about such situations and want to exclude that core from “sync” for a while.
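The exclusion capability missing from limitation 3 could look something like the following: a sync group that skips excluded cores (say, one whose gdb is wedged because the core is in RunStall) when computing which gdbs to pause after a synchronous stop. All names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of a sync group with temporary per-core exclusion.
public class SyncGroup {
    private final Set<String> members = new LinkedHashSet<>();
    private final Set<String> excluded = new HashSet<>();

    public void add(String core)     { members.add(core); }
    public void exclude(String core) { excluded.add(core); }   // leave its gdb alone
    public void include(String core) { excluded.remove(core); } // rejoin the sync group

    // Cores whose gdb should be paused when 'source' stops: everyone in the
    // group except the stopping core itself and any excluded cores.
    public List<String> coresToPause(String source) {
        List<String> out = new ArrayList<>();
        for (String c : members)
            if (!c.equals(source) && !excluded.contains(c))
                out.add(c);
        return out;
    }
}
```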

Where we want to go as we move to DSF

We will be moving our current implementation to DSF and are happy to contribute it, but of course others probably need different kinds of flexibility. So we are interested in working to establish a framework that supports the different kinds of needs if that seems possible.

I guess the first challenge is to work out how diverse the needs are. We are pretty low level (not thread aware), so I would expect big differences between how we and someone debugging an MP Linux system see things.

From this project we should be able to get an extensible framework, in the form of a DSF service or an extension point, where all the generic tasks related to synchronous debugging are handled by CDT and vendors can extend them to provide the specific behavior they want.
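One speculative shape for such a split: CDT owns the generic fan-out of a synchronous operation, and a vendor contributes a policy (through a DSF service or an extension point) that decides which cores take part. `ISyncPolicy` and the method names are assumptions for illustration, not existing CDT API:

```java
import java.util.ArrayList;
import java.util.List;

// Generic framework part plus a vendor-extensible policy hook.
public class SyncFramework {
    // Vendor-supplied trait: does 'core' participate in a sync operation
    // triggered by 'trigger'? A vendor could implement leader-core behavior,
    // temporary exclusion, or hardware-specific grouping here.
    public interface ISyncPolicy {
        boolean participates(String core, String trigger);
    }

    // Generic part, implemented once by the framework: given the cores in a
    // launch and the core that stopped, compute which cores to suspend.
    public static List<String> coresToSuspend(List<String> cores, String trigger,
                                              ISyncPolicy policy) {
        List<String> out = new ArrayList<>();
        for (String c : cores)
            if (!c.equals(trigger) && policy.participates(c, trigger))
                out.add(c);
        return out;
    }
}
```

For example, a vendor wanting leader-core semantics (limitation 2 above) could supply a policy that returns true only when the triggering core is the designated leader.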