VLBI HPC cluster

    Version as of 14:11, 29 Mar 2024


    General characteristics

    • Users need to get an account first (see W. Alef or H. Rottmann).
    • Users should log into the frontend.
    • Home directories reside under /home on the cluster; /home is visible to all nodes.
    • Logging into the nodes from the frontend works without a password.
    • All nodes have 16 GB of memory.
    • Each node has a scratch disk (~100 GB).
    • For large amounts of disk space, a 20 TB RAID is available (mounted under /data). If you need space, a directory can be created for you.
    • Software available:
      • gcc 4.2.1 (g++, gfortran)
      • gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
      • OpenMPI, MPICH (OpenMPI automatically uses Infiniband)
      • Intel Performance Library
      • A batch system with load balancing can be turned on if the need arises.
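Since OpenMPI selects Infiniband automatically, launching a job on assigned nodes needs little more than a hostfile. A minimal, hypothetical sketch, assuming an allocation of nodes 51-60 and the nodeNN host naming used elsewhere on this page (e.g. node07); verify the exact host names with the operators:

```
# ~/myjob.hosts -- one line per assigned node (names are assumptions)
node51 slots=1
node52 slots=1
# ... continue through node60

# Compile and launch from the frontend; OpenMPI uses Infiniband automatically.
# mpicc -O2 -o hello hello.c
# mpirun --hostfile ~/myjob.hosts -np 10 ./hello
```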

    Cluster Booking

    Please contact the VLBI correlator operators (Tel. 220, email: mk4) if you would like to allocate time on the cluster.

    Check the allocation table below to see when you can use which nodes and how much diskspace on the cluster.

    Defective nodes: node07 shows elevated error rates on Infiniband; this will be investigated.

    Available nodes: 1 - 60

    Diskspace: ~200 GB on /home; more diskspace on request


    Time period    Nodes assigned   Who        What          Diskspace   Comment
    10.12.-        all              Rottmann   Correlation               top priority
    2.9.-??.2009   51-60            Reich      Simulations   small
    3.11.          frontend         Fromm      Test

    Cluster Layout

    [Cluster layout diagram: alef02.png]

    IO02 / 20 TB   |   Nodes 41-60, IO01 / 20 TB, Appliance, Frontend, FXmanager   |   Nodes 1-40

    More information about the cluster can be found on the intranet: http://intra.mpifr-bonn.mpg.de/div/v...nal/index.html