VLBI HPC cluster

    General information

    The main use case of the VLBI HPC cluster is VLBI correlation. However, some of the cluster resources can be used for other computationally intensive tasks, e.g. simulations. For an overview of the cluster capabilities see XXX.

    If you want access to the cluster, please get in touch with H. Rottmann (see section Contact).

    Contact

    Who              What                           Phone
    Helge Rottmann   main cluster administrator     123
    Walter Alef      deputy cluster administrator   289
    Hermann Sturm    operator                       220

    Using the cluster

    • Users need to get an account first (see H. Rottmann / W. Alef).
    • Once your account has been set up you can log in to the cluster via ssh to frontend (see the sketch after this list).
    • /home contains your home directory and is visible on all cluster nodes.
    • /cluster contains software installations and is visible on all cluster nodes.
    • Printing to any CUPS printer of the MPIfR should be possible from frontend.
    • If you need to install software, please consult the administrators first.
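
    A minimal example session might look like the following; the host name frontend and the paths are taken from the list above, while the user name jdoe and the printer name myprinter are placeholders:

      # Log in to the cluster front end (replace jdoe with your account name)
      ssh jdoe@frontend

      # Both shared file systems are visible from every node
      ls /home/jdoe     # your home directory
      ls /cluster       # shared software installations

      # Print from frontend to an MPIfR CUPS printer (myprinter is a placeholder)
      lp -d myprinter report.pdf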

    Rules of conduct

    • Do not store datasets in your home directory. If you need storage capacity, talk to the cluster administrators first.
    • Use only those cluster nodes that have been assigned to you by the cluster administrators.
    • When done with your project tell the administrators so that the nodes can be used for other tasks.
    • VLBI correlation always has first priority. When asked by the administrators, users have to interrupt their jobs and return the assigned nodes to the correlator (this happens rarely).

    Cluster Specs

    • The cluster has 68 compute nodes, each equipped with 20 cores. These are named node01 to node68. The nodes can be accessed from frontend.
    • All nodes are equipped with 16 GB of memory.
    • Each node has a scratch disk (~100 GB).
    • Software available:
      • gcc 4.2.1 (g++, gfortran)
      • gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
      • OpenMPI (various versions)
      • Intel Integrated Performance Primitives (IPP)
      • The Torque batch system with load balancing could be configured and enabled if the need arises.
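
    As a sketch of how the installed toolchain might be used: mpicc and mpirun are the standard OpenMPI wrapper and launcher, but the file name hello.c, the node names in the host file, and the process count are made-up examples. Run this from frontend on nodes assigned to you.

      # Assuming a small MPI test program hello.c in the current directory,
      # compile it with the OpenMPI wrapper around the default gcc:
      mpicc -O2 -o hello hello.c

      # To build with the older compiler instead, prepend it to the path:
      #   export PATH=/cluster/gcc/3.3.5/bin:$PATH

      # List the nodes assigned to you (names are examples), then run 4 processes:
      printf 'node01\nnode02\n' > myhosts
      mpirun -np 4 --hostfile myhosts ./hello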

    Cluster Booking

    Please contact the VLBI correlator operator (Tel. 220, email: hsturm) if you would like to allocate time on the cluster.

    Defective nodes: none

    Available nodes: 1 - 68

    Disk space: to be negotiated (ask Helge Rottmann).

    Cluster RAIDS

     

    RAID number   Size (TB)   VLBI (TB)
    1               162         162
    2               162         162
    3                73          73
    4                20           -
    5                40           -
    6                40           -
    7                40           -
    8                40           -
    9                40           -
    10               46          46
    11               82          82
    12               82          82
    13               66          66
    14               66          66
    Total           959         739
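
    As a quick sanity check, the column totals can be reproduced by summing the rows above (rows 4 - 9 have no VLBI allocation):

      printf '%s\n' '162 162' '162 162' '73 73' '20' '40' '40' '40' '40' '40' \
                    '46 46' '82 82' '82 82' '66 66' '66 66' \
        | awk '{size += $1; vlbi += $2} END {print size, vlbi}'   # prints: 959 739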

    Cluster Layout

    [Image: ClusterRack.jpg]

     

    [Rack layout: Infiniband switches; compute nodes 1 - 28, 29 - 48 and 53 - 68; 4 CASA nodes; I/O servers IO01 - IO12 (20 to 164 TB each); FXmanager; Appliance; Frontend]

    More information about the cluster can be found on the intranet: http://intra.mpifr-bonn.mpg.de/div/v...nal/index.html