TPTP-PMC-F2F-200803
Revision as of 14:57, 25 March 2008

Starting Logistics

Attending: Harm, Oliver, AlexA, MikhailV, Chris, Joanna (phone). Regrets: Paul, AlexN

  • Joanna is on the road but has no time constraints
  • Oliver asks if there are any additional agenda items.
    • Chris wanted to add an item about a specific enhancement

Discussion of 4.5 Release

Chris raised question about stack map attribute enhancement request

  • A few weeks ago IBM mentioned that they really wanted this to be in the release.
  • We wanted to clarify that this defect is actually about forward-looking support for when VMs do not support fallback verification for classes missing StackMap attributes.
  • Wanted to find if there is some immediate consumption from IBM for this enhancement
    • Is there a Java6 JRE that does not implement the fallback verifier for missing Stackmaps?
    • Or is it just the risk of the fallback going away in some future Java release?
  • Feature is currently on track but engineering is not yet 100% and IPzilla still needs approval so there is some risk.
  • Harm mentioned that this was one of the few bugzilla entries that spoke directly to Java6 support and that IBM wants to be sure of Java6 support in 4.5
    • Does not think that this particular feature is critical but will check to verify
    • Joanna raised question regarding line level coverage
      • There was a problem with Emma at one point that might have required this feature.
      • Harm noted that there are folks that use Emma today.
      • Joanna noted that code coverage seems to work for 1.6 now.
      • However, it is not really supported by developers
      • There is no commitment to get support for Java6+ (probekit) for line level coverage.
      • IBM builds line level coverage on top of TPTP
  • AI to Joanna to confirm that the issues are long term, so that if there does end up being a need to defer due to legal or technical difficulties it will not cause a panic.

Question whether this is the same as or different from the issue around probekit using deprecated byte codes.

  • Different Item

Oliver asks if we know what big items need to fall off the list?

  • Leads should be working thru the process of identifying items
  • Joanna and Chris should be involved when removal happens

Staffing is open/questionable on this feature.

  • Staffing issue (committers resigned)
  • Tivoli trying to find resource but if they cannot then feature will not be in.
    • Oliver asks if IBM understands and agrees to risk
    • Harm says that they have to be okay with it because there is no choice
    • Intel has no dependence on this bugzilla.

Enhancements slipping out of I6

  • Filtering (updates to predefined filters) to make them more generic and "obvious"
    • Enriching command line will probably involve adding more parameters
    • This will make command lines more complex
      • Will need some way to simplify the command line
  • JVMTI profiler documentation and command line help/wizard

Team discussed possibility of deferring some of the documentation and command line wizards to 4.5.1

  • Too early to tell what translation support will be in 4.5.1
  • Could in theory do documentation part on the web site and roll in when translation support is provided
  • Alternately, can point to wiki documentation and bypass translation processes altogether
    • Some other projects are doing this.
    • Would enable us to be most agile.

For documentation that the team will provide, we need to figure out how to review for English as a second language.

  • In discussion there was a general belief that if it is a reasonable amount of content and we have a few weeks notice we can probably get reviews/edits from folks on team with English as a first language.
    • Easiest way would be to do it on the wiki and let us just edit it in place.

Andreas arrived so we tabled 4.5 discussion.

Memory Analyzer Discussion

Andreas (SAP) came in to discuss their memory analyzer that extracts interesting information from heap dumps and visualizes it.

If one has a 4g heap dump and uses brute force techniques, one will need 4-5x that much space for analysis. They reduce this overhead with intelligence in the parser/indexer. This reduces the amount of information available but makes it quite efficient.

  • classloader details, lifetimes
  • dominator tree to identify heap details (probable heap reduction from freeing objects)
    • Identify what objects maintain references, their sizes, classloaders, etc., to spot the possible bad guy.
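The dominator-tree bullets above can be made concrete with a small sketch: an object B is retained by A exactly when A dominates B in the reference graph, so the "probable heap reduction from freeing objects" is the sum of shallow sizes over the dominated set. This is toy Python over invented heap data, not the analyzer's actual algorithm or data model:

```python
# Toy sketch of the dominator idea behind retained-size analysis.
# The heap shape and sizes below are invented for illustration.

def dominators(graph, root):
    """Iteratively compute the full dominator set of each node."""
    nodes = set(graph)
    dom = {n: set(nodes) for n in nodes}
    dom[root] = {root}
    changed = True
    while changed:
        changed = False
        for n in nodes - {root}:
            preds = [p for p in nodes if n in graph[p]]
            new = {n}
            if preds:
                new |= set.intersection(*(dom[p] for p in preds))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

def retained_size(graph, shallow, root, obj):
    """Total shallow size of everything that would be freed with obj."""
    dom = dominators(graph, root)
    return sum(shallow[n] for n in graph if obj in dom[n])

# ROOT -> A -> {B, C}; B and C both reference D, so only A dominates D.
heap = {"ROOT": ["A"], "A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
shallow = {"ROOT": 0, "A": 16, "B": 24, "C": 24, "D": 100}
print(retained_size(heap, shallow, "ROOT", "A"))  # 16 + 24 + 24 + 100 = 164
```

Real analyzers compute immediate dominators with near-linear algorithms (e.g., Lengauer-Tarjan) over on-disk indexes; the naive fixpoint here is only for readability.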

Andreas provided a demo of a 2.4g heap system.

  • currently tied to the Sun heap dump format
  • talking also (for example) to the IBM J9 guys about other heap dump formats
    • IBM vm missing some information about GC roots
    • J9 open to changing.
    • AI to Eugene to try out the heap analyzer on trace leak issue
    • AI Harm to fwd to leakbot team at IBM
    • AI Chris to fwd to Intel memory profiler guys (Sri)

Everyone was quite excited. The project has a finely tuned data model for indexing heap data. It is unclear how much these models could be fully integrated with the models for Trace. There was discussion around when and how to incubate the project. TPTP is definitely a possible host for the effort. The suggestion of talking to the JDT guys also came up.

Netbeans Comparison

An OC Systems engineer dialed in and discussed their initial comparison of the Netbeans profiler to the Eclipse profiler. He did several demos and then we discussed and gave several ARs for followup.

Questions that arose:

  • Can one run in command line mode and then do post-hoc analysis w/ Netbeans?
    • unsure... looked at web documentation... did not find such a mechanism
  • Is the data stream in a format where it can be externally consumed?
  • Does the data stream consist of aggregated stats or the full data streamed?
    • Suspect aggregation, since there are no views that show overall execution sequences, which would require the full sequence.
  • Do we have any idea of the overhead of their data transport model (e.g., sockets, sharedmem/etc). We wonder if there is some magic there.

Ability to attach to a running JVM w/ no command line options in Netbeans

  • Chris noted that before Java6 Netbeans would ship with a custom Sun JVM to allow this.
    • In Java6 Sun partially documented in the JVMTI spec the feature that the JVM would need to implement to get this attach to work. Unfortunately the specification is incomplete so it is unclear how other VMs could implement the functionality.

Working Globally

Oliver had asked each lead to come prepared to discuss methods they use for working with globally distributed teams. Oliver asks: You have people that you work with all over the world. How do you deal with the folks that you only see virtually?

Platform does not have a huge problem. The majority of engineers involved report directly to Joanna

  • One exception is Eugene and there are no communication issues there
  • Also the Intel folks do not report to Joanna but she is not having issues there right now either
    • A few times has asked Chris to get involved but those times are rare.
  • The largest examples of concern related to the code analysis team in Ottawa
  • Paul had seen some difficulty because although he is the TPTP lead for his project he is not a manager at IBM
    • This caused some confusion in working relationships
    • Joanna has had to step in and explain Paul's role to the IBM management chain so that he gets the attention he should

Oliver asks: How do you know if someone is fulfilling their potential? (i.e., is 1 or 2 bugs a week good or not?)

  • Joanna runs reports about bug fix rates and passes them to the mgt chain
  • This process calls attention to outliers
  • Typically when the numbers are off it means that someone is working on a single large defect
    • Otherwise (rare) she will follow up.
    • In severe cases at IBM she would cc Harm to get additional questions asked
    • With Intel, regular meetings are used to raise most problems. In a few cases she has involved Chris (quite rare)
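The reporting flow above could look something like a simple outlier check on weekly fix counts (a hypothetical sketch; the minutes do not specify the actual reports, metrics, or thresholds):

```python
# Hypothetical sketch of flagging fix-rate outliers for followup.
# Names, counts, and the threshold are made up for illustration.
from statistics import mean, stdev

def flag_outliers(fixes_per_week, threshold=1.3):
    """Return people whose weekly fix count is far from the team mean;
    per the discussion, an outlier often just means one large defect."""
    rates = list(fixes_per_week.values())
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [name for name, r in fixes_per_week.items()
            if abs(r - mu) > threshold * sigma]

weekly = {"alice": 5, "bob": 4, "carol": 6, "dave": 0}
print(flag_outliers(weekly))  # ['dave'] -- worth a followup question
```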

Paul apparently has the team do some tracking of hours worked on defects within their bugzillas to call out defects that are taking "too long" to complete and give collateral for discussion.

Joanna looks at imbalances in upcoming defect backlog and reassigns defects accordingly

Oliver asks what metrics Joanna uses to track Intel team.

  • AlexA owns the majority of platform defects
  • Some rerouting of defects between Igor and Stanislav (Agent controller support)
    • This is becoming more clear nowadays because Stanislav is on the AC now and Igor on profiler

Oliver asks what Joanna knows about AlexN's processes

  • As far as she knows, they are similar to what Joanna does for platform

Oliver doesn't want to break stuff but asks if it would be valuable for leads to gather the same metrics.

  • Not all folks are diligent about sizing defects (some overestimate, some underestimate)
  • Deferral rates might be interesting
  • Chris suggests that at the very least leads should exchange their methodologies for tracking

Resourcing Discussion

The question of what constitutes 10% of Joe Toomey's time came up.

  • In the end Rational and Tivoli talked and came up w/ Richard to support Joe's feature
  • There was an argument that what got the 10% of value from Toomey was his discussion of test project defects
    • Team discussed the fact that in the plan Joe's 10% was to pursue a specific action item.
    • Spending time on another project, while a real portion of his time, doesn't really accomplish the original assignment.
      • Chris' commentary as he transcribes this: perhaps we need to be a bit better about annotating specific line items/expectations for the 10%

Richard is 70% TPTP

  • Time is split between monitoring and trace projects
  • This split makes it difficult to track whether (a) the aggregate 70% is done and (b) sufficient time is reserved for accomplishing the trace deliverable
    • AI: AlexA and AlexN to cooperate to define responsibilities and metrics for Richard

Component Assignments to Projects

The question came up of whether we are being sufficiently efficient in our project management. Is there extra, unnecessary cost associated with managing projects from a different site than where the engineering is done?

  • It is time to review the association of plugin components to projects
  • Platform used to be where things used by more than one project lived
    • We may be taking this rule of thumb too seriously
    • For example, there are other projects that use trace but the vast majority is currently in the profiling project
      • There had been an intent to use more of the platform trace views across test, monitoring, etc.
      • The intent was not completely followed through and does not really have resourcing today.
      • Monitoring does use some of it (trace) but the vast majority is profiling related
  • The platform has gotten huge and it is time to discuss the best location for stuff like the AC
  • It is somewhat strange to have AlexA leading Trace, where pretty much all the resourcing is from IBM, while most of the profiling work is done under Joanna's project with Intel resources.

Mikhail believes that with an active AG we can handle integration problems between the projects

  • We used to have these weekly forums but they went away for a while

Are there specific points to fix now versus things to plan to do?

  • Main buckets: profiler and test
    • Monitoring is also pretty separable
  • Harm suggests that we don't rename plugins but start managing more of profiling stuff under trace

Harm is okay w/ 3 basic components/use cases

  • Test -- testing stuff
  • Trace -- profiler stuff (gets bigger)
  • Platform -- truly common stuff (gets smaller)
  • Chris' commentary while transcribing: I think we also agreed that Monitoring should be a 4th bucket here.

AI to each lead -- AlexA, Paul, AlexN, Joanna: Create a strawman proposal for items that should be under your project. Probably use AG for followup discussions before we take official action.

  • Sample question: where for example does probekit fall?

Mikhail notes that we could do this in two phases. Longer term bucketing design above is good but there are some specific inefficiencies now. Can we do a few critical tweaks now and strategic updates later?

  • General consensus this is okay.
    • AI AlexA/Joanna/Mikhail to discuss near term efficiency ideas.
      • Bring joint proposal to pmc and just make it so.

Trace Consolidation Discussion

Trace conference -- Erikson discussion
  • setting up non-Java tracing requirements from the tracing summit
    • results from summit
  • debugging components will be done this summer
    • then will pursue tracing
  • not a slice of existing tool being open sourced
  • research project w/ universities is starting as well.
    • hope for some synergy
  • team also doing Linux next-gen tracing (LTTng)
    • this has involved mods to the Linux kernel

Harm, on followup from looking at the wiki

  • does both stacks as well as logging of system behavior
    • main component is tracing itself, including the use of static markers in the kernel.

Question: full sequence and ordering or statistical aggregates?

  • eventually a framework should handle both
    • there are many layers: OS, hypervisor, etc.
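The full-sequence vs. statistical-aggregates distinction can be illustrated with a toy event stream (the event shape here is invented for illustration; it is not TPTP's or any real framework's format):

```python
# Toy contrast between a full trace (ordering preserved) and an
# aggregated view (cheap to transport, but ordering is lost).
from collections import defaultdict

full_sequence = [  # every enter/exit, in order -- supports call-tree views
    ("enter", "main"), ("enter", "parse"), ("exit", "parse"),
    ("enter", "parse"), ("exit", "parse"), ("exit", "main"),
]

def aggregate(events):
    """Collapse the stream into per-method call counts."""
    counts = defaultdict(int)
    for kind, method in events:
        if kind == "enter":
            counts[method] += 1
    return dict(counts)

print(aggregate(full_sequence))  # {'main': 1, 'parse': 2}
```

A framework that handles both would keep the full sequence at capture time and derive aggregates on demand.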

Interested in more than just low level devices.

  • For example, combine with data from additional data sources (e.g., memory analyzer)

When and what do we do?

  • they intend to go talk to other vendors to collect details

June 18th

  • all agree the community is important but we need to coordinate w/ the different companies' teams
  • some questions have come up regarding review by companies before/after
    • Oliver is in control

Need to address the community as a target.

  • get board to say that TPTP seems to be doing the right things

Possible IBM perspective: anything not for a consuming product would not be desirable to resource

Why ask the board?

  • we are seeing companies who want all these community things built up and working but they don't want to pay for them w/ resources.
    • What is the board doing to increase the desirability of participating in the community?

pog lead user -- harm

  • big blocking defects from first few passes pretty much okay
  • touching on bits and pieces of data correctness
  • by end of I6 will have done things that impact UI
    • using wiki will avoid some issues w/ translation
  • Harm was looking at model/loader

Sasha to continue driving the effort

  • triage all existing defects
  • keep fixing important defects
  • How to deal with lead user issues in this 4.5 timeframe?
    • I7/I8 development

needs to be possible to revert behavior

WTP meeting discussion --

  • TPTP part of EPP for J2EE.
    • some subset of TPTP good for profiling and/or junit stuff
      • currently includes Mylyn
    • requirements
      • (1) say which features to include
        • pulls from the Ganymede update site
      • (2) a bit more involvement --
        • document (preferably with a tutorial) the use case scenario
        • someone who tests on a weekly basis (each integration build), O(30min/wk)
        • Before each milestone, someone goes and stresses it.
          • Currently there is a wiki page in WTP where people document test updates

Where should the tutorial go? Our website or elsewhere?

  • Probably sufficient.

EPP is home site for the builds

How to deal with native code within the targets?

  • features thru update mgr -- doing work to get targeted platform sensitivity
  • EPP is all in one.
    • EPP makes zips for each platform (windows, linux, linux-gtk, possibly mac)
    • zips are made via update mgr so it might "just work"

Use test cases are things provided today.

Try to get Joel to pull from JEE EPP

  • AI for Joel to come to PMC and discuss

From a build point of view we need to provide update sites for Ganymede

  • may need to start dropping more builds up to site.

Any tutorial content that will show up on the welcome page?
