PTP/designs/perf
Latest revision as of 14:42, 30 July 2007

Case Studies

These case studies are examples of how the parallel performance tool integration in Eclipse might work. Each illustrates how a user might use the combined Eclipse PTP + performance tools.

For an outline of Performance Framework integration points (July '07), see http://wiki.eclipse.org/PTP/planning/2.0#Performance_Analysis_Framework

Case Study (Workload Characterization)

  1. Select a parallel project
    1. Specify application metadata (for performance data management)
  2. Open analysis creation wizard
    1. Select workload characterization from the list of analysis tasks
      1. Select performance measurement/monitoring tool
      2. Select hardware counter set
  3. Build
    1. Create make target (for standard make)
    • Application may or may not need to be recompiled/rebuilt
  4. Run
    1. Create a launch configuration
      1. Specify performance tool options (if desired)
      2. Specify other parallel execution parameters
      3. Select run sets (number of processors)
      • Generates batch script
  5. Analyze
    1. View data in analysis tool
    2. Create HTML report
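The "Generates batch script" step above is left abstract; as a hedged sketch, assuming a PBS-style scheduler and a TAU-like `tau_exec` wrapper (the job name, wrapper, node geometry, and `./myapp` binary are all illustrative assumptions, not PTP's actual output), the generated script for a 16-process run set might be produced like this:

```shell
#!/bin/sh
# Hypothetical sketch of the batch script PTP might emit at the
# "Generates batch script" step. All names here are illustrative.
NPROCS=16          # taken from the selected run set
APP=./myapp        # the binary chosen in the launch configuration

cat > run_perf.pbs <<EOF
#PBS -N perf_experiment
#PBS -l nodes=$((NPROCS / 4)):ppn=4
#PBS -l walltime=00:30:00
cd \$PBS_O_WORKDIR
mpirun -np $NPROCS tau_exec $APP
EOF
```

The point of generating rather than hand-writing the script is that the run-set selection in the wizard (step 4.1.3) directly determines the processor count and node request.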

Case Study #2 (Memory leak detection)

  1. Select a parallel project
    1. Specify application metadata (for performance data management)
  2. Open analysis wizard
    1. Select the 'detect memory leaks' checkbox
    2. Select any other measurement options
  3. Build
    1. Create make target (for standard make)
    • Application may or may not need to be recompiled/rebuilt
  4. Run
    1. Create a launch configuration
      1. Specify performance tool options (if desired)
      2. Specify other parallel execution parameters
      3. Select run sets (number of processors)
      • Generates batch script
  5. Analyze
    1. View data in analysis tool
  6. Return to source code
    1. Fix memory leaks
  7. Rebuild and launch to confirm fixes


Case Study #3 (batch + integrated run/build/launch)

  1. Select the "analysis" button (like the debug/run buttons)
    • This opens the integrated analysis wizard
    1. Select profiling set from the analysis tab
    2. Select the Make target and binary executable
  2. Create more analysis experiments
  3. Select the "batch" button from the analysis menu
    1. Select parameterizations (numbers of processors, binaries)
      • generates batch script
    2. Run the batch script
  4. Look at your data, run PerfExplorer

Case Study #3b (scalability analysis)

  1. Select the "analysis" button (like the debug/run buttons)
    • This opens the integrated analysis wizard
    1. Select 'scalability analysis'
    2. Select the profiling options
    3. Select the Make target and binary executable
    4. Select parameterizations (numbers of processors, binaries)
  2. Run the experiment
    • generates batch script
    1. Run the batch script
  3. Analyze data
    • Use cross-experiment analysis tools
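For the scalability study, the parameterization step could expand into one launch per processor count for the same binary, giving the cross-experiment tools a series to compare. A minimal sketch (the processor counts and the `./myapp` name are assumptions):

```shell
#!/bin/sh
# Sketch of the "parameterizations" step: emit one launch line per
# processor count for the same binary. Counts and binary are illustrative.
rm -f scaling_runs.sh
for np in 1 2 4 8 16; do
  echo "mpirun -np $np ./myapp" >> scaling_runs.sh
done
```

Each line of `scaling_runs.sh` would become one entry in the generated batch submission, and each run's profile would be archived as a separate experiment.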

Case Study #4 (Routine and outer loop profiling)

  1. Invoke the analysis wizard
  2. Select execution time analysis/routine level
  3. Optionally select hardware counters for monitoring
  4. Run
  5. Analyze
    1. View data in analysis tool
  6. Choose routines for outer loop profiling
    • Optionally select specific loops for profiling
  7. Select hardware counters for monitoring
  8. Run
  9. Analyze
    1. View new data in analysis tool
    2. View relevant source code regions

Case Study #5 (Communication analysis)

  1. Invoke the analysis wizard
  2. Select 'execution/communication time analysis'
  3. Run
    1. Generate routine level callpath profile
    2. Automated tool analyzes profile output and generates selective instrumentation file
    3. Automated re-run using tracing (in format suitable for automated trace analysis)
  4. Analyze
    1. Automatically perform trace analysis to identify communication bottlenecks
    2. View bottleneck analysis result
    3. Optionally view trace in graphical trace viewer (after conversion if necessary)
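The selective instrumentation file produced in step 3.2 is tool-specific. With TAU (the tool named in the TODO list below), such a file might look like the following sketch, which excludes an inexpensive helper routine from instrumentation and requests loop-level instrumentation for one routine; the routine and file names are made up for illustration:

```text
# Hypothetical TAU selective instrumentation file (names are illustrative)
BEGIN_EXCLUDE_LIST
void swap_bytes(char *, int)
END_EXCLUDE_LIST

BEGIN_INSTRUMENT_SECTION
loops file="solver.c" routine="exchange_halo"
END_INSTRUMENT_SECTION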

Case Study #6 (Tool needs data)

  1. Data mining tool determines that new performance data needs to be generated
  2. Tool somehow communicates this to Eclipse PTP
  3. Eclipse PTP performs necessary build & run
  4. Tool gets data

Case Study #7 (Data Analysis)

  1. Go to the performance data manager panel
  2. Select data to be analyzed
  3. Launch performance data analysis tool
    • Data are auto-converted if necessary


Two-step usage

  1. Right-click project to open analysis configuration (Wizard)
    • This creates a ...
  2. Right-click on binary to set up analysis setting (may require rebuilding project)

TODO

  1. TAU on update site
  2. System PTP Preferences (TAU location)
  3. User Preferences
  4. Project Preferences
  5. TAU Marker in source editor
  6. Analysis Perspective
  7. MPI tool-chain (for managed and standard make projects)
  8. Extension PT Project

UI designs

  1. Performance Analysis Perspective
  2. Wizards to set up instrumentation/measurement options during both builds and run phases.
  3. Templates to set up common multiple-build experiments (e.g. scaling, optimization levels, etc.)
  4. Let the user choose the TAU makefile when adding a TAU configuration to an Eclipse project.
  5. Add TAU run-time options to the run configuration

Standard Makefile Automation

  1. Add an 'include eclipse.env' line to pull in variables for building with different profiling/tracing options.
  2. Have users use the variables set in eclipse.env when writing their makefiles.
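As a hedged sketch of this scheme (the variable names `CC` and `PROFILE_FLAGS` and the `tau_cc.sh` compiler wrapper are assumptions, not a prescribed interface), Eclipse would rewrite eclipse.env whenever the user switches profiling configurations, and the user's makefile would consume the variables unchanged:

```makefile
# eclipse.env -- hypothetical fragment written by Eclipse for the
# currently selected profiling/tracing option (names are illustrative)
CC = tau_cc.sh
PROFILE_FLAGS = -g

# Makefile -- the user's makefile picks the variables up
include eclipse.env

myapp: main.c
	$(CC) $(PROFILE_FLAGS) -o myapp main.c
```

Switching between an instrumented and a plain build then requires no makefile edits, only a regenerated eclipse.env.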
