Computing capacity

HW available through MetaCentrum

The computing equipment falls into several categories:

  • High density (HD) clusters have two CPUs (i.e., 8-20 cores) in shared memory (96-128 GB), offering the maximal computing power/price ratio. They cover the needs of applications with limited or no internal parallelism that can make use of running many simultaneous instances (high-throughput computing). Another important class of applications is those that use commercial software with per-core licensing. Some nodes are equipped with GPU accelerators.
  • Symmetric Multiprocessing (SMP) clusters have more CPUs (40-80 cores) in shared memory (up to 1250 GB) and are oriented towards memory-demanding applications and applications requiring a larger number of CPUs communicating via shared memory. SMP nodes are suitable for applications with finer-grained parallelization on a higher number of cores.
  • Special SMP machines (SGI UV2) with extremely large shared memory (6 TB) and a reasonably high number of CPU cores (up to 384). These machines are available for extremely demanding applications, either in terms of parallelism or memory. Typical examples of such applications are quantum chemical calculations, fluid dynamics (including climate simulations) or bioinformatics code.
  • Clusters with Xeon Phi many-core processors - Xeon Phi is a massively parallel architecture consisting of a high number of x86 cores. Unlike the previous generation, the new Xeon Phi (based on the Knights Landing architecture) is a self-booting system (no conventional host CPU is needed) that is fully compatible with the x86 architecture.

Cluster nodes and data storage are mutually interconnected with 1 Gbit/s and 10 Gbit/s Ethernet as well as a local InfiniBand network with higher bandwidth (40 Gbit/s) and lower latency.

Special machines

Two special SMP machines (SGI UV2) with extremely large shared memory (6 TB each) and a reasonably high number of CPU cores (currently 384 and 288) are available for extremely demanding applications, either in terms of parallelism or memory, and form a bridge towards a “classical” supercomputing environment. Naturally, applications must be tuned specifically to run efficiently on such machines; in particular, they must be aware of non-uniform memory access times (the NUMA architecture). Typical examples of such applications are quantum chemical calculations, fluid dynamics (including climate simulations) or bioinformatics code.
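
Because each UV2 machine presents its 6 TB of memory as a single address space spread over many processor sockets, access latency depends on which socket a given memory page lives on. A common, portable tuning technique is "first touch" initialization, shown in the minimal OpenMP sketch below; this is a generic illustration, not CERIT-SC-supplied code, and the compile line is only one possible example.

    /* Minimal sketch of NUMA-aware "first touch" initialization with OpenMP.
     * Assumption: Linux places a memory page on the NUMA node of the thread
     * that first writes to it, so initializing the array with the same static
     * loop schedule as the compute loop keeps most accesses node-local.
     * Example compile line: gcc -O2 -fopenmp first_touch.c
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void)
    {
        const size_t n = 1UL << 28;            /* 268M doubles, roughly 2 GB */
        double *a = malloc(n * sizeof *a);
        if (!a) return 1;

        /* First touch: each thread initializes the chunk it will later compute on. */
        #pragma omp parallel for schedule(static)
        for (size_t i = 0; i < n; i++)
            a[i] = 1.0;

        /* Compute loop with the same schedule, so data stays local to each socket. */
        double sum = 0.0;
        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (size_t i = 0; i < n; i++)
            sum += a[i];

        printf("threads = %d, sum = %.0f\n", omp_get_max_threads(), sum);
        free(a);
        return 0;
    }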

  • urga.cerit-sc.cz (384 CPU, 1 node, January 2015)
    • 48x 8-core Intel Xeon E5-4627v2 3.30GHz
    • 6 TB RAM
    • 72 TB scratch
    • 6x Infiniband 40 Gbit/s, 2x Ethernet 10 Gbit/s
    • The performance of the node (as measured during the acceptance tests) reaches 13198 points in the SPECfp2006 base rate benchmark (34.37 per core)
  • ungu.cerit-sc.cz (288 CPU, 1 node, December 2013)
    • 48x 6-core Intel Xeon E5-4617 2.9GHz
    • 6 TB RAM
    • 72 TB scratch
    • 6x Infiniband 40 Gbit/s, 2x Ethernet 10 Gbit/s
    • The performance of the node (as measured during the acceptance tests) reaches 8830 points in the SPECfp2006 base rate benchmark (30.66 per core; see the note after this list)
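
The "per core" values quoted for each machine are simply the node's SPECfp2006 base rate result divided by its core count; a trivial illustrative check:

    /* Per-core SPEC rate = node result / number of cores (illustration only). */
    #include <stdio.h>

    int main(void)
    {
        printf("urga: %.2f per core\n", 13198.0 / 384);   /* 34.37 */
        printf("ungu: %.2f per core\n",  8830.0 / 288);   /* 30.66 */
        return 0;
    }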

A cluster with six of the newest Intel Xeon Phi 7210 many-core processors. Xeon Phi is a massively parallel architecture consisting of a high number of x86 cores. Unlike the previous generation, the new Xeon Phi (based on the Knights Landing architecture) is a self-booting system (no conventional host CPU is needed) that is fully compatible with the x86 architecture. Thus, you can submit jobs to Xeon Phi nodes in the same way as to CPU-based nodes, using the same applications. No recompilation or algorithm redesign is needed, although it may be beneficial.
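
Recompiling for the Knights Landing target mainly lets the compiler use the 512-bit vector units. The sketch below is a generic, hypothetical example of such a loop rather than MetaCentrum-specific code; the Intel and GNU flags shown do exist, but whether recompilation pays off depends on the application.

    /* saxpy.c - ordinary portable C; it runs on Xeon Phi nodes unchanged, but
     * recompiling with AVX-512 enabled lets one instruction process 16 floats.
     * Example build lines (illustrative):
     *   icc -qopenmp -O3 -xMIC-AVX512 saxpy.c
     *   gcc -fopenmp  -O3 -march=knl  saxpy.c
     */
    #include <stdio.h>
    #include <stdlib.h>

    static void saxpy(size_t n, float a, const float *x, float *y)
    {
        /* Parallelize across cores and ask the compiler to vectorize each chunk. */
        #pragma omp parallel for simd
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        size_t n = 1 << 24;
        float *x = malloc(n * sizeof *x), *y = malloc(n * sizeof *y);
        if (!x || !y) return 1;
        for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(n, 3.0f, x, y);
        printf("y[0] = %.1f\n", y[0]);   /* expect 5.0 */
        free(x); free(y);
        return 0;
    }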

  • phi.cerit-sc.cz (384 CPU, 6 nodes, December 2016), each of the nodes has the following hardware specification:
    • 1x 64-core Intel Xeon Phi 7210, 1.30 GHz
    • 208 GB RAM
    • 1x 800 GB SSD, 2x 3 TB in RAID-0, giving 6.8 TB in every node
    • Ethernet 1 Gbit/s
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 748 points in the SPECfp2006 base rate benchmark (11.7 per core)
    • Xeon Phi 7210 has 256 virtual cores (64 physical) running at 1.3 GHz with an overall performance of 2.66 TFLOPS in double precision and 5.32 TFLOPS in single precision; the arithmetic behind these figures is sketched after this list.
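
The quoted throughput corresponds to the theoretical peak and follows from the core count, clock rate and SIMD width. A minimal sketch of the arithmetic, assuming two 512-bit FMA vector units per core (the published Knights Landing configuration):

    /* Back-of-the-envelope peak-FLOPS estimate for one Xeon Phi 7210 node.
     * Assumption: 2 vector units/core * 8 doubles/vector * 2 FLOPs per FMA
     * = 32 double-precision FLOPs per cycle per core. */
    #include <stdio.h>

    int main(void)
    {
        const double cores = 64, clock_ghz = 1.3;
        const double dp_flops_per_cycle = 2 * 8 * 2;
        double dp_tflops = cores * clock_ghz * dp_flops_per_cycle / 1000.0;
        printf("double precision: %.2f TFLOPS\n", dp_tflops);        /* 2.66 */
        printf("single precision: %.2f TFLOPS\n", 2.0 * dp_tflops);  /* 5.32 */
        return 0;
    }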

SMP clusters

Symmetric Multiprocessing (SMP) clusters (currently 29 nodes, 1960 cores in total) have more CPUs (40-80 cores) in shared memory (up to 1250 GB) and are oriented towards memory-demanding applications and applications requiring a larger number of CPUs communicating via shared memory. On the other hand, due to technical restrictions, only CPUs with slightly lower per-core performance can be used in such systems. SMP nodes are therefore suitable for applications with finer-grained parallelization on a higher number of cores.

  • zorg.priv.cerit-sc.cz (40 CPU, 1 node)
    • 4x 10-core Intel Xeon E7-8891 v2 3.20 GHz
    • 512 GB RAM,
    • 4x 1.2 TB 10k in RAID-5
    • 1x InfiniBand, 1x 10 Gbit/s Ethernet
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 1400 points in the SPECfp2006 base rate benchmark (35 per core)
  • zefron.priv.cerit-sc.cz (320 CPU, 8 nodes, January 2016), each of the nodes has the following hardware specification:
    • 4x 10-core Intel Xeon E5-4627 v3 @ 2.60GHz
    • 1 TiB RAM
    • 4x 1 TB 10k, 2x 480 GB SSD
    • 1x Infiniband 40 Gbit/s, 1x Ethernet 10 Gbit/s, 1x Ethernet 1 Gbit/s
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 1370 points in the SPECfp2006 base rate benchmark (34 per core)
    • The node zefron8 contains one NVIDIA Tesla K40 GPU card.
  • zewura.cerit-sc.cz (560 CPU, 7 nodes, from December 2011)
    • 8x Intel Xeon E7-2860 processors (10 CPU cores, 2.26 GHz),
    • 512 GB RAM,
    • 20x 900 GB hard drives to store temporary data (/scratch), configured in RAID-10, thus having 8 TB capacity.
    • InfiniBand: two 4xQDR interfaces per node, 10 Gbit/s Ethernet
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 1250 points in the SPECfp2006 base rate benchmark (15.62 per core)
  • zebra.priv.cerit-sc.cz (960 CPU, 24 nodes, from June 2012)
    • 8x Intel Xeon E7-4860 processors (10 CPU cores, 2.26 GHz),
    • 512 GB RAM
    • 12x 900 GB hard drives to store temporary data (/scratch), configured in RAID-5, thus having 9.9 TB capacity.
    • InfiniBand: two 4xQDR interfaces per node, 10 Gbit/s Ethernet
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 1220 points in the SPECfp2006 base rate benchmark (15.25 per core).

HD clusters

High density (HD) clusters (currently 192 nodes, 2624 CPU cores in total) have two CPUs (i.e., 8-20 cores) in shared memory (96-128 GB), offering the maximal computing power/price ratio. They cover the needs of applications with limited or no internal parallelism that can make use of running many simultaneous instances (high-throughput computing). Another important class of applications is those that use commercial software with per-core licensing.
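
High-throughput usage typically means submitting many independent instances of the same serial program, each working on its own piece of the input. The sketch below shows one common pattern: the program picks its work item from a job-array index in the environment. The variable names PBS_ARRAY_INDEX (PBS Pro) and PBS_ARRAYID (Torque) are real, but the input-file naming scheme is purely hypothetical.

    /* worker.c - one serial instance of a high-throughput (job-array) run.
     * The batch system starts many copies; each reads its array index from the
     * environment and processes the corresponding (hypothetical) input file. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *idx = getenv("PBS_ARRAY_INDEX");      /* PBS Pro */
        if (!idx) idx = getenv("PBS_ARRAYID");            /* Torque fallback */
        if (!idx) idx = "0";                              /* local test run */

        char path[256];
        snprintf(path, sizeof path, "input_%s.dat", idx); /* hypothetical naming */

        FILE *f = fopen(path, "r");
        if (!f) { fprintf(stderr, "cannot open %s\n", path); return 1; }
        /* ... process the file serially ... */
        fclose(f);
        printf("instance %s finished\n", idx);
        return 0;
    }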

  • hdc.cerit-sc.cz (1744 CPU, 109 nodes, zapat, from January 2013), each of the nodes has the following hardware specification:
    • 2x 8-core Intel E5-2670 2.6GHz
    • 128 GB RAM
    • 2x 600 GB 15k
    • location Jihlava
    • 1x Infiniband 40 Gbit/s, 2x Ethernet 1 Gbit/s
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 471 points in the SPECfp2006 base rate benchmark (29 per core)
  • hdb.cerit-sc.cz (256 CPU, 32 nodes, zigur, from January 2013), each of the nodes has the following hardware specification:
    • 2x 4-core Intel E5-2643 3.3GHz
    • 128 GB RAM
    • 2x 600 GB 15k
    • location Jihlava
    • 1x Infiniband 40 Gbit/s, 2x Ethernet 1 Gbit/s
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 327 points in the SPECfp2006 base rate benchmark (41 per core)
  • hda.cerit-sc.cz (576 CPU, 48 nodes, zegox, from July 2012), each of the nodes has the following hardware specification:
    • 2x Intel E5-2620 (6 cores)
    • 96 GB RAM
    • 2x 600 GB hard drives to store temporary data (/scratch)
    • InfiniBand: 1x per node; 10 Gbit/s Ethernet: 8 nodes only; 1 Gbit/s Ethernet: two interfaces per node.
    • The performance of each node (as measured during the acceptance tests of the cluster) reaches 250 points in the SPECfp2006 base rate benchmark (20.83 per core).

The CERIT-SC Centre is an integral part and the most powerful node of the National Grid Infrastructure MetaCentrum, operated by CESNET.

As an experimental facility, the Centre is an indispensable part of the national e-infrastructure, complementary to the CESNET association and the IT4Innovations supercomputing centre. This e-infrastructure is a complex system of interconnected computation and storage capacities and related services for the research community of the Czech Republic.

Registration & usage rules

Ask for access to HW and SW resources

Statistics

Consolidated report on computing resources usage in 2012-16
