PTP/designs/perf
== Case Studies ==
These case studies illustrate how the parallel performance tool integration in Eclipse might work. Each describes how a user might work with the combined Eclipse PTP and performance tools.

For an outline of the Performance Analysis Framework integration points (July '07), see [http://wiki.eclipse.org/PTP/planning/2.0#Performance_Analysis_Framework]

== Case Study #1 (Workload Characterization) ==
# Select a parallel project
## Specify application metadata (for performance data management)
# Open the analysis creation wizard
## Select workload characterization from the list of analysis tasks
### Select the performance measurement/monitoring tool
### Select the hardware counter set
# Build
## Create a make target (for standard make)
#* The application may or may not need to be recompiled/rebuilt
# Run
## Create a launch configuration
### Specify performance tool options (if desired)
### Specify other parallel execution parameters
### Select run sets (number of processors)
##* Generates a batch script (a sketch of such a script follows this list)
# Analyze
## View data in the analysis tool
## Create an HTML report

== Case Study #2 (Memory leak detection) ==
# Select a parallel project
## Specify application metadata (for performance data management)
# Open the analysis wizard
## Select the 'detect memory leaks' checkbox
## Select any other measurement options
# Build
## Create a make target (for standard make)
#* The application may or may not need to be recompiled/rebuilt
# Run
## Create a launch configuration
### Specify performance tool options (if desired)
### Specify other parallel execution parameters
### Select run sets (number of processors)
##* Generates a batch script
# Analyze
## View data in the analysis tool
# Return to the source code
## Fix the memory leaks (an example of this kind of defect appears after this list)
# Rebuild and launch to confirm the fixes

== Case Study #3 (Batch + integrated run/build/launch) ==
# Select the "analysis" button (like the debug/run buttons)
#* This opens the integrated analysis wizard
## Select a profiling set from the analysis tab
## Select the make target and binary executable
# Create more analysis experiments
# Select the "batch" button from the analysis menu
## Select parameterizations (numbers of processors, binaries)
##* Generates a batch script
## Run the batch script
# Look at your data and run PerfExplorer

== Case Study #3b (Scalability analysis) ==
# Select the "analysis" button (like the debug/run buttons)
#* This opens the integrated analysis wizard
## Select 'scalability analysis'
## Select the profiling options
## Select the make target and binary executable
## Select parameterizations (numbers of processors, binaries)
# Run the experiment
#* Generates a batch script (see the sketch after this list)
## Run the batch script
# Analyze data
#* Use cross-experiment analysis tools

== Case Study #4 (Routine and outer loop profiling) ==
# Invoke the analysis wizard
# Select execution time analysis at the routine level
# Optionally select hardware counters for monitoring
# Run
# Analyze
## View data in the analysis tool
# Choose routines for outer loop profiling (a selective instrumentation sketch follows this list)
#* Optionally select specific loops for profiling
# Select hardware counters for monitoring
# Run
# Analyze
## View the new data in the analysis tool
## View the relevant source code regions

== Case Study #5 (Communication analysis) ==
# Invoke the analysis wizard
# Select 'execution/communication time analysis'
# Run
## Generate a routine-level callpath profile
## An automated tool analyzes the profile output and generates a selective instrumentation file (see the sketch after this list)
## Automated re-run using tracing (in a format suitable for automated trace analysis)
# Analyze
## Automatically perform trace analysis to identify communication bottlenecks
## View the bottleneck analysis result
## Optionally view the trace in a graphical trace viewer (after conversion if necessary)

== Case Study #6 (Tool needs data) ==
# Data mining tool determines that new performance data needs to be generated
# Tool somehow communicates this to Eclipse PTP
# Eclipse PTP performs the necessary build & run
# Tool gets the data

== Case Study #7 (Data Analysis) ==
# Go to the performance data manager panel
# Select the data to be analyzed
# Launch the performance data analysis tool
#* Data are auto-converted if necessary (an illustrative conversion command follows this list)

== Two step usage ==
# Right-click the project to open the analysis configuration (wizard)
#* This creates a ...
# Right-click the binary to set up analysis settings (may require rebuilding the project)

== TODO ==
* TAU on update site
* System PTP Preferences (TAU location)
* User Preferences
* Project Preferences
* TAU Marker in source editor
* Analysis Perspective
* MPI tool-chain (for managed and standard make projects)
* Extension PT Project

== UI designs ==
* Performance Analysis Perspective
* Wizards to set up instrumentation/measurement options during both the build and run phases
* Templates to set up common multiple-build experiments (e.g. scaling, optimization levels, etc.)
* Let the user choose the TAU makefile when adding a TAU configuration to an Eclipse project
* Add TAU run-time options to the run configuration

== Standard Makefile Automation ==
# Add an 'include eclipse.env' line so that variables for different profiling/tracing options can be pulled in at build time. Have users use the variables set in eclipse.env when writing their makefiles (a sketch follows below).
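A minimal sketch of how this could work, assuming Eclipse writes (or rewrites) an eclipse.env fragment next to the user's makefile; the variable names and the use of the TAU compiler wrapper are illustrative, not a defined interface.

<pre>
# eclipse.env -- written/updated by Eclipse for the selected analysis configuration
# (contents illustrative)
CC            = tau_cc.sh
PROFILE_FLAGS = -g
</pre>

<pre>
# Makefile -- user-written, but parameterized by eclipse.env
# (recipe lines must be indented with a tab character)
include eclipse.env

CFLAGS = -O2 $(PROFILE_FLAGS)

myapp: main.o solver.o
	$(CC) $(CFLAGS) -o $@ main.o solver.o

%.o: %.c
	$(CC) $(CFLAGS) -c $<
</pre>

Rebuilding with a different analysis configuration then only requires Eclipse to rewrite eclipse.env and re-run make; the user's makefile does not change.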