VLBI HPC cluster

    Version as of 01:37, 29 Mar 2024


    General information

    The main use case of the VLBI HPC cluster is VLBI correlation. However, some of the cluster resources can be used for other computationally intensive tasks, e.g. simulations. For an overview of the cluster's capabilities see XXX.

    If you want access to the cluster, please get in touch with H. Rottmann (see section Contact).

    Contact

    Who              What                   Phone
    Helge Rottmann   cluster administrator  123
    Walter Alef      cluster administrator  289
    Hermann Sturm    operator               220

    Using the cluster

    • Users need to get an account first (contact H. Rottmann / W. Alef).
    • Once your account has been set up, you can log in to the cluster via ssh to frontend.
    • /home contains your home directory and is visible on all cluster nodes.
    • /cluster contains software installations and is visible on all cluster nodes.
    • Printing to any CUPS printer of the MPIfR should be possible from frontend.
    • If you need to install software, please consult the administrators first.
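
    As a convenience for the ssh login step above, an entry in your local ~/.ssh/config can save typing. This is only a sketch: the alias vlbi-frontend is made up here, "username" is a placeholder for your actual account name, and the fully qualified hostname of frontend may differ — ask the administrators.

    ```
    # ~/.ssh/config — hypothetical entry; the real hostname may differ.
    Host vlbi-frontend
        HostName frontend
        User username
    ```

    With this in place, `ssh vlbi-frontend` logs you straight into the cluster front end.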

    Rules of conduct

    • Do not store datasets in your home directory. If you need storage capacity, talk to the cluster administrators first.
    • Use only those cluster nodes that have been assigned to you by the cluster administrators.
    • When done with your project, tell the administrators so that the nodes can be used for other tasks.
    • VLBI correlation always has first priority. When asked by the administrators, users have to interrupt their jobs and return the assigned nodes to the correlator (this happens rarely).

    Cluster Specs

    • The cluster has 68 compute nodes, each equipped with 20 cores. They are named node01 to node68 and can be accessed from frontend.
    • All nodes are equipped with 16 GB of memory.
    • Each node has a scratch disk (~100 GB).
    • The cluster interconnect is FDR InfiniBand at 56 Gbps.
    • User data storage is typically available under /data11/users/username (consult the admins if you need storage capacity).
    • Software available:
      • gcc 4.2.1 (g++, gfortran)
      • gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
      • OpenMPI (various versions)
      • Intel Integrated Performance Primitives (IPP)
      • The Torque batch system with load balancing could be configured and enabled if the need arises.
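
    Should the Torque batch system be enabled, submitting an OpenMPI job could be sketched as below. This is a hypothetical script, not the site configuration: the job name, walltime limit, and program name are placeholders, and the actual queue policies would be set by the administrators. The nodes=2:ppn=20 line matches the 20 cores per node listed above.

    ```
    #!/bin/bash
    # Hypothetical Torque/PBS job script — a sketch only; consult the
    # administrators for the actual queues and resource limits.
    #PBS -N my_simulation
    #PBS -l nodes=2:ppn=20       # two nodes, all 20 cores on each
    #PBS -l walltime=02:00:00
    #PBS -j oe                   # merge stdout and stderr

    cd $PBS_O_WORKDIR
    # OpenMPI is available in various versions; which mpirun you get
    # depends on the installation you put in your PATH.
    mpirun -np 40 ./my_program
    ```

    The script would be submitted with `qsub`; until Torque is enabled, jobs are run directly on the nodes assigned by the administrators.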

    Cluster Layout

    [Figure: ClusterRack.jpg — rack layout of the cluster, showing the InfiniBand switches, compute nodes 1-68, the 4 CASA nodes, the FXmanager, frontend, and appliance machines, and the IO storage servers (IO01-IO12, ranging from 20 TB to 164 TB each).]

    More information about the cluster can be found on the intranet: http://intra.mpifr-bonn.mpg.de/div/v...nal/index.html