General characteristics
- Users need to get an account first (see W. Alef or H. Rottmann).
- Users should log into the frontend.
- Home directories reside under /home, which is visible to all nodes.
- Login to the nodes from the frontend works without a password.
- All nodes have 16 GB of memory
- Each node has a scratch disk (~100 GB)
- For large amounts of disk space, two 20 TB RAIDs are available (mounted under /data and /data1). If you need space, a directory can be created for you.
- Software available:
- gcc 4.2.1 (g++, gfortran)
- gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
- OpenMPI, MPICH (OpenMPI automatically uses Infiniband; see the test program after this list)
- Intel Performance Library
- The Torque batch system with load balancing could be configured and enabled if the need arises.
- One 20 TB RAID for geodesy (IO03)
- One 20 TB RAID and one 40 TB RAID for pulsar data reduction, each with 12 cores (IO04 and IO05)
- Four 40 TB RAIDs for LOFAR data reduction, each with 12 cores (IO06-IO09)
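Since OpenMPI automatically uses the Infiniband interconnect, a short MPI test program is a quick way to confirm that multi-node runs work. The sketch below is a generic MPI hello-world, not cluster-specific code; the source and hostfile names are examples, and the exact mpirun options may vary with the installed OpenMPI version.

    /* mpi_hello.c - minimal MPI sanity check.
     * Compile with the default gcc 4.2.1 toolchain:
     *   mpicc mpi_hello.c -o mpi_hello
     * Run across the nodes listed in a hostfile (the name is an example):
     *   mpirun -np 4 --hostfile myhosts ./mpi_hello
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        /* Each rank reports which node it landed on; a multi-node run
         * should print several different node hostnames. */
        printf("Rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }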
Cluster Booking
Please contact the VLBI correlator operators (Tel. 220, email: mk4) if you would like to allocate time on the cluster.
Check the booking schedule to see when you can use which nodes and how much disk space is available on the cluster.
Defective nodes: none
Available nodes: 1 - 60
Disk space: ~200 GB on /home; /scratch on the nodes; more disk space on request.
Ask Helge Rottmann.
Cluster Layout
[Layout diagram: Infiniband switches (with 4x Infiniband links) connect the frontend, the FXmanager appliance, the compute nodes (1-40 and 41-60), and the RAID servers IO01-IO12 (capacities from 20 TB up to 164 TB).]
More information about the cluster can be found on the intranet: http://intra.mpifr-bonn.mpg.de/div/v...nal/index.html