VLBI HPC cluster

    General information

    The main use case of the VLBI HPC cluster is VLBI correlation. However, some of the cluster resources can be used for other computationally intensive tasks, e.g. simulations. For an overview of the cluster capabilities see XXX.

    • Users need to get an account first (see W. Alef or H. Rottmann).
    • Users should log in to frontend.
    • The home directories reside under /home, which is visible to all nodes.
    • Login to the nodes from frontend works without a password.
    • All nodes have 16 GB of memory
    • Each node has a scratch disk (~100 GB)
    • For large amounts of disk space, two 20 TB RAIDs are available (mounted under /data and /data1). If you need space, a directory can be created for you.
    • Software available:
      • gcc 4.2.1 (g++, gfortran)
      • gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
      • OpenMPI, MPICH (OpenMPI automatically uses Infiniband; see the MPI example below this list)
      • Intel Performance Library
      • The Torque batch system with load balancing could be configured and enabled if the need arises.
    • One 20 TB RAID for geodesy (IO03)
    • One 20 TB RAID and one 40 TB RAID for pulsar data reduction, each with 12 cores (IO04 and IO05)
    • 4 x 40 TB RAIDs for LOFAR data reduction, each with 12 cores (IO06 - IO09)
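
    Since OpenMPI is installed and automatically uses the Infiniband interconnect, a small MPI test program is a quick way to check that a job really gets distributed over the nodes. The following is only an illustrative sketch (the file name mpi_hello.c and the process count are arbitrary examples, not part of the cluster setup); it prints the rank and the node name of every MPI process.

        /* mpi_hello.c -- minimal MPI check (illustrative sketch only).
         * Compile:  mpicc -o mpi_hello mpi_hello.c
         * Run:      mpirun -np 4 ./mpi_hello
         */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, size, namelen;
            char hostname[MPI_MAX_PROCESSOR_NAME];

            MPI_Init(&argc, &argv);                      /* start the MPI runtime          */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);        /* rank of this process           */
            MPI_Comm_size(MPI_COMM_WORLD, &size);        /* total number of processes      */
            MPI_Get_processor_name(hostname, &namelen);  /* name of the node it runs on    */

            printf("Rank %d of %d running on %s\n", rank, size, hostname);

            MPI_Finalize();                              /* shut down the MPI runtime      */
            return 0;
        }

    With a hostfile listing several nodes (e.g. mpirun -np 16 --hostfile mynodes ./mpi_hello, where mynodes is a hypothetical file with one node name per line), every rank should report a different node; this also exercises the passwordless login from frontend to the nodes.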

    Cluster Booking

    Please contact the VLBI correlator operator (Tel. 220, email: hsturm) if you would like to allocate time on the cluster.

    Defective nodes: none

    Available nodes: 1 - 68

    Disk space: to be negotiated (ask Helge Rottmann).

    Cluster RAIDs

     

    RAID number   Size in TB   VLBI (TB)
    1                  162        162
    2                  162        162
    3                   73         73
    4                   20          -
    5                   40          -
    6                   40          -
    7                   40          -
    8                   40          -
    9                   40          -
    10                  46         46
    11                  82         82
    12                  82         82
    13                  66         66
    14                  66         66
    Total              959        739

    Cluster Layout

    (Image: ClusterRack.jpg)

     

                                  Infiniband Switch     4x Infiniband Switch           4x Infiniband Switch
    IO11 / 82 TB, IO12 / 82 TB                          Nodes 29 - 48                  Nodes 1 - 28
    IO10 / 46 TB
    IO05 - IO09 / 40 TB           Nodes 53 - 68         4 CASA nodes                   IO01 / 164 TB, IO02 / 164 TB

    IO10 / 52 TB, IO01 / 20 TB,   IO03 & IO04 / 20 TB,  Nodes 41 - 60, IO01 / 20 TB,   Nodes 1 - 40,
    IO05 - IO09 / 40 TB           FXmanager             IO02 / 20 TB                   Appliance, Frontend,
                                                                                       Infiniband switch

    More information about the cluster can be found on the intranet: http://intra.mpifr-bonn.mpg.de/div/v...nal/index.html