Setup for PTP SC12 remote targets for tutorial

Sample app: shallow

  • from cvs.ncsa.uiuc.edu, repository path: CVS/ptp-samples; open HEAD, open sample, and use 'Check out as...' on shallow (a command-line equivalent is sketched after this list)
  • note: the setup slides say to check out as a C project and then convert it to a synchronized project. Alan's GEM slides say to check out as a C/C++ synchronized project directly, which sounds simpler but has not been tested yet.
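For reference, a hedged command-line equivalent of that checkout. The anonymous pserver access method is an assumption; the tutorial itself uses the Eclipse CVS wizard.

    # Hypothetical command-line checkout of the shallow sample.
    # Assumes anonymous pserver access; the tutorial uses the Eclipse CVS wizard.
    cvs -d :pserver:anonymous@cvs.ncsa.uiuc.edu:/CVS/ptp-samples login
    cvs -d :pserver:anonymous@cvs.ncsa.uiuc.edu:/CVS/ptp-samples checkout shallow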

Client: use new almost-6.0.3 builds

For each system we need to document:
  1. toolchain recommended and/or documented
  2. include files - location for setup, using UNC notation //connection-name/path/to/include/files (one way to find these paths is sketched below)
  3. resource manager(s) to use
  4. which perf tools from the tutorial work there
  5. anything else we need to know to use the system
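One way to find the include directories for item 2 on an OpenMPI-based target (a sketch; --showme:incdirs is an Open MPI-specific wrapper flag, and the comments are illustrative):

    # On the remote system: locate the system and MPI include directories.
    which mpicc               # shows where the MPI wrapper compiler lives
    mpicc --showme:incdirs    # Open MPI only: prints the wrapper's include dirs
    ls /usr/include/stdio.h   # confirms the system include directory
    # Each directory becomes a UNC-style entry in 'Paths and Symbols', e.g.
    # /usr/include on a connection named "trestles" -> //trestles/usr/include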

SDSC Trestles

trestles.sdsc.edu

Update PTP from http://download.eclipse.org/tools/ptp/builds/juno/nightly to get the two new Trestles target configurations

  1. toolchain - Linux GCC (Remote Linux GCC also builds OK, but it puts a number of non-UNC paths into the 'Paths and Symbols' includes)
  2. include files -
      • //trestles/usr/include --and-- //trestles/opt/openmpi/include
      • hyperlink navigation in the editor now works
  3. Resource manager -
    • Run - edu.sdsc.trestles.torque.batch - new contributed resource manager in the latest PTP builds
      • Other config details: Queue: shared; Number of nodes: 1:ppn=5; MPI command: mpirun; MPI number of cores: 5 (see the batch-script sketch after this list)
    • Debug - edu.sdsc.trestles.torque.interactive.openmpi - details same as above, using sdm in /home/grw/sdm
  4. What works here? GEM=yes LinuxTools=yes
    • TAU is installed and functioning. PAPI is not available. To run successfully with TAU from PTP, the .bashrc must include these lines: "module unload pgi ; module load gnu/4.6.1 openmpi tau"
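A hand-written Torque batch script equivalent to the run configuration above would look roughly like this (a sketch only: PTP's resource manager generates the actual submission script, and the executable name is an assumption):

    #!/bin/bash
    # Hypothetical Torque script mirroring the PTP run settings above.
    #PBS -q shared                # Queue: shared
    #PBS -l nodes=1:ppn=5         # Number of nodes: 1:ppn=5
    cd $PBS_O_WORKDIR
    module unload pgi             # modules per the TAU note above
    module load gnu/4.6.1 openmpi tau
    mpirun -np 5 ./shallow        # MPI command: mpirun; number of cores: 5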

I am using a shell via MyProxy/GSI Auth in the NCSA plugin (per instructions in ptp-07-ncsa.ppt)

SDSC Gordon

gordon.sdsc.edu

  1. toolchain
  2. include files
  3. Resource manager
    • Run -
    • Debug -
  4. What works here? GEM? LinuxTools?
    • TAU is installed and works correctly

VirtualBox torque by Dennis

Download and setup info: https://www.cct.lsu.edu/~dcastl2/ptp/vbox.php (Summary: download VirtualBox, download torque-vm.zip, install VirtualBox and the torque VM, launch it, and log in with userid ptp and password ptp)

  1. toolchain - I have tried Linux GCC; it builds OK.
  2. include files - stdio.h is in /usr/include and mpi.h is in /usr/local/include (a quick smoke test is sketched after this list)
  3. Resource managers
    • Run - torque-generic-batch; queue: batch; number of nodes: 1; MPI command: mpirun; MPI number of tasks: 5
    • Debug -
  4. What works here? GEM? LinuxTools?
    • TAU is not installed. A local test install had unusual issues.
  5. Other - OpenMPI 1.4.2 is installed
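A quick smoke test inside the VM (a sketch; hello.c stands for any trivial MPI program and is not part of the tutorial materials):

    # Log in as ptp/ptp, then confirm the include locations and a trivial run.
    ls /usr/include/stdio.h /usr/local/include/mpi.h
    mpicc -o hello hello.c    # the wrapper supplies /usr/local/include itself
    mpirun -np 5 ./hello      # matches the run configuration above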

Supercomputer: the roving laptop by Galen

Temporary IP address; contact Galen for access

  1. toolchain
  2. include files - /usr/include and /usr/include/openmpi-x86_64 (UNC equivalents are sketched after this list)
  3. Resource manager - openmpi
    • Run -
    • Debug -
  4. What works here? GEM? *LinuxTools*
    • TAU installed and functional
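A hedged sketch of how those include directories map to 'Paths and Symbols' entries, assuming the connection to the laptop is named galen (the connection name is hypothetical):

    # Verify the two include directories from item 2 above.
    ls /usr/include/stdio.h
    ls /usr/include/openmpi-x86_64/mpi.h   # packaged Open MPI layout
    # 'Paths and Symbols' entries for a connection named "galen" (hypothetical):
    #   //galen/usr/include
    #   //galen/usr/include/openmpi-x86_64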
