VLBI HPC cluster

    • The cluster has 68 compute nodes, each equipped with 20 cores. The nodes are named node01 to node68 and can be accessed from the frontend.
    • All nodes are equipped with 64 GB of memory.
    • Each node has a local scratch disk (~1 TB).
    • The cluster interconnect is FDR InfiniBand at 56 Gbps.
    • User data storage is typically available under /data11/users (consult the admins if you need storage capacity).
    • Software available:
      • gcc 4.2.1 (g++, gfortran)
      • gcc 3.3.5 (g++, gfortran; under /cluster/gcc/3.3.5/bin/)
      • OpenMPI (various versions); see the minimal example after this list
      • Intel Performance Library (IPP)
      • The SLURM batch system could be configured and enabled if the need arises.
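
    The sketch below is a minimal MPI "hello world" in C for getting started with the installed OpenMPI stack. It assumes that mpicc and mpirun from one of the available OpenMPI versions are on your PATH on the frontend (the exact install prefix is not documented here, so check with the admins); the hostfile contents in the comments are illustrative only, with node names and slot counts taken from the node list above.

      /*
       * mpi_hello.c -- minimal OpenMPI test program for the VLBI cluster.
       *
       * Compile on the frontend (assuming mpicc is on your PATH):
       *   mpicc -O2 -o mpi_hello mpi_hello.c
       *
       * Example hostfile "hosts" (node names from the list above, 20 cores per node):
       *   node01 slots=20
       *   node02 slots=20
       *
       * Run across two nodes:
       *   mpirun --hostfile hosts -np 40 ./mpi_hello
       */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          int rank, size, len;
          char host[MPI_MAX_PROCESSOR_NAME];

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
          MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
          MPI_Get_processor_name(host, &len);     /* node this rank runs on */

          printf("rank %d of %d running on %s\n", rank, size, host);

          MPI_Finalize();
          return 0;
      }

    Each rank prints the node it landed on, which is a quick way to confirm that processes really are being spread over the compute nodes via the InfiniBand interconnect rather than all running on the frontend.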
