
Durham University

Advanced Research Computing

Systems

The Hamilton HPC service has evolved through a number of hardware generations. The latest generation is called Hamilton8 and provides a total of 15,616 CPU cores, 36TB RAM and 1.9PB disk space. It runs a Linux operating system (Rocky Linux 8).

Hamilton8 is formed of many powerful servers, which are connected to each other via high-performance interconnects and share access to a fast filestore. Access to the bulk of its compute resources is provided by logging into one of the login servers (login nodes) and using the SLURM resource management system.
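Work is typically submitted to SLURM as a batch script. The sketch below is a hypothetical minimal example, not taken from the Hamilton documentation: the job name, time limit and program name are placeholders, and site-specific directives (such as partition or account) would normally also be required.

```shell
#!/bin/bash
# Hypothetical SLURM batch script; all values here are illustrative placeholders.
#SBATCH --job-name=example        # placeholder job name
#SBATCH --nodes=1                 # one compute node
#SBATCH --ntasks-per-node=128     # one task per core on a Hamilton8 node
#SBATCH --time=01:00:00           # placeholder wall-clock limit

# Launch the (placeholder) program across the allocated tasks.
srun ./my_program
```

Such a script would be submitted from a login node with `sbatch script.sh` and the job then queued until resources become available.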

  • Hamilton8 consists of:
    • 120 standard compute nodes, each with 128 CPU cores (2x AMD EPYC 7702), 256GB RAM and 400GB local SSD storage.
    • 2 high-memory compute nodes, each with 128 CPU cores (2x AMD EPYC 7702), 2TB RAM and 400GB local SSD storage.
    • InfiniBand HDR 200Gb/s high-performance interconnect, with a 2.6:1 fat-tree topology formed of non-blocking islands of up to 26 nodes.
  • Each compute node has the following internal structure:
    • 2 CPU sockets.
    • Each socket is divided into 4 NUMA domains, each with its own memory channels.
    • Each NUMA domain is split into 4 groups (or "chiplets"), each with its own 16MB L3 cache.
    • Each group contains 4 CPU cores, each with its own 512KB L2 cache and 32KB L1 cache.
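The hierarchy above multiplies out to the per-node and system-wide core counts quoted earlier; a quick sanity check in Python (the constant names are our own, introduced only for this illustration):

```python
# Per-node CPU hierarchy of Hamilton8, as described in the list above.
SOCKETS_PER_NODE = 2     # 2x AMD EPYC 7702
NUMA_PER_SOCKET = 4      # each NUMA domain has its own memory channels
CHIPLETS_PER_NUMA = 4    # each chiplet has its own 16MB L3 cache
CORES_PER_CHIPLET = 4    # each core has 512KB L2 and 32KB L1 cache

cores_per_node = (SOCKETS_PER_NODE * NUMA_PER_SOCKET
                  * CHIPLETS_PER_NUMA * CORES_PER_CHIPLET)
print(cores_per_node)   # 128

# 120 standard nodes plus 2 high-memory nodes give the advertised total.
total_cores = (120 + 2) * cores_per_node
print(total_cores)      # 15616
```

This confirms that the quoted system total of 15,616 cores is exactly 122 nodes of 128 cores each.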

The previous hardware generation, Hamilton7, provides 2816 CPU cores, 13TB RAM and 700TB of disk space. It is due to be phased out in 2022.

  • Hamilton7:
    • 112 compute nodes, each with 24 CPU cores (2x Intel Xeon E5-2650v4 CPUs) and 64GB RAM.
    • Intel Omni-Path 100Gb/s high-performance interconnect.
    • 2 high-memory nodes, each providing 3TB RAM.