General characteristics
- Users need to get an account first (see W. Alef or H. Rottmann).
- Users should log into the frontend node.
- Home directories reside under /home, which is visible to all nodes.
- Logging into the nodes from the frontend works without a password.
- All nodes have 16 GB of memory.
- Each node has a scratch disk (~100 GB).
- For large amounts of disk space, two 20 TB RAIDs are available (mounted under /data and /data1). If you need space, a directory can be created for you.
- Software available:
  - gcc 4.2.1 (g++, gfortran)
  - gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
  - OpenMPI, MPICH (OpenMPI automatically uses InfiniBand; see the sketch after this list)
  - Intel Performance Library
- A batch system with load balancing can be turned on if the need arises.
- Two 20 TB RAIDs for pulsar data reduction.
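To verify that OpenMPI works across the nodes, a minimal MPI test program like the one below can be compiled and run from the frontend. This is a sketch, not part of the official cluster setup: the hostfile name `myhosts` is an assumption for illustration, and the commands assume OpenMPI's standard `mpicc`/`mpirun` wrappers are in your PATH.

```c
/*
 * Minimal MPI "hello world" to check that jobs launch on the nodes
 * (OpenMPI picks the InfiniBand transport automatically).
 *
 * Compile:  mpicc -o hello hello.c
 * Run:      mpirun -np 4 --hostfile myhosts ./hello
 *           ("myhosts" is a hypothetical file listing node names,
 *            one per line; passwordless login from the frontend
 *            lets mpirun start processes on the nodes directly.)
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
    MPI_Get_processor_name(name, &len);     /* node the rank runs on */

    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```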
Cluster Booking
Please contact the VLBI correlator operators (Tel. 220, email: mk4) if you would like to allocate time on the cluster.
The table below shows when you can use which nodes and how much disk space is available on the cluster.
Defective nodes: none
Available nodes: 1 - 60
Disk space: ~200 GB on /home; more disk space on request
| Time period | Nodes assigned | Who      | What        | Disk space | Comment      |
|-------------|----------------|----------|-------------|------------|--------------|
| 1.1.-       | 21-60          | Rottmann | Correlation |            | top priority |
|             |                | Reich    | Simulations | small      |              |
|             | frontend       | Fromm    | Test        |            |              |
Cluster Layout
[Layout diagram] Nodes 1-40 are attached to IO01 (20 TB); nodes 41-60 are attached to IO02 (20 TB). In addition there are the Appliance, the Frontend, and the FXmanager.
More information about the cluster can be found on the intranet: http://intra.mpifr-bonn.mpg.de/div/v...nal/index.html