VLBI HPC cluster

    General characteristics

    • Users need to get an account first (see W. Alef or H. Rottmann).
    • Users should log in to the frontend.
    • Home directories reside under /home, which is visible to all nodes.
    • Logging in to the nodes from the frontend works without a password.
    • All nodes have 16 GB of memory.
    • Each node has a scratch disk (~100 GB).
    • For large amounts of disk space, two 20 TB RAIDs are available (mounted under /data and /data1). If you need space, a directory can be created for you.
    • Software available:
      • gcc 4.2.1 (g++, gfortran)
      • gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
      • OpenMPI, MPICH (OpenMPI automatically uses InfiniBand); a minimal example follows this list
      • Intel Performance Library
      • The Torque batch system with load balancing could be configured and enabled if the need arises.
    • One 20 TB RAID for geodesy (IO03)
    • One 20 TB RAID and one 40 TB RAID for pulsar data reduction, each with 12 cores (IO04 and IO05)
    • Four 40 TB RAIDs for LOFAR data reduction, each with 12 cores (IO06-IO09)
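
    A minimal MPI example in C, assuming the installed OpenMPI stack, is sketched below; the file name hello_mpi.c and the process count are illustrative only. Compile it with the OpenMPI wrapper (e.g. mpicc hello_mpi.c -o hello_mpi) and start it with mpirun (e.g. mpirun -np 4 ./hello_mpi); OpenMPI then uses InfiniBand for communication automatically.

        /* hello_mpi.c -- minimal MPI sketch; file name and process count are assumptions */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);               /* set up the MPI environment */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
            MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
            printf("Hello from rank %d of %d\n", rank, size);
            MPI_Finalize();                       /* shut down MPI */
            return 0;
        }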

    Cluster Booking

    Please contact the VLBI correlator operators (Tel. 220, email: mk4) if you would like to allocate time on the cluster.

    Check the overview below to see when you can use which nodes and how much disk space is available on the cluster.

    Defective nodes: none

    Available nodes: 1 - 60

    Disk space: ~200 GB on /home; /scratch on the nodes; more disk space on request

    Ask Helge Rottmann.

    Cluster Layout

    [Cluster.png: cluster layout]

    • IO02 / 20 TB
    • Nodes 41-60, IO01 / 20 TB, Appliance, Frontend, FXmanager
    • Nodes 1 - 40

    More information about the cluster can be found on the intranet: http://intra.mpifr-bonn.mpg.de/div/v...nal/index.html