core-to-core-latency: A Nice Little Tool!

Saturday, 23rd Sept 2023: I've been staring curiously at my blog for quite some time, and it reminds me over and over again that it's been nearly two years since I managed to write new content here 😞. I have a few work-in-progress articles, but unfortunately they've remained incomplete for quite some time. It's been a bit challenging to find the dedicated long weekend hours needed to write the detailed posts that I really love. But "not having enough time" is most likely just another excuse and a form of procrastination.

So, I've decided to try a different strategy: instead of attempting to write long and detailed posts that need multiple weekends, I will write multiple shorter ones that I can easily complete over a weekend. Still revolving around the same theme of performance engineering topics and tools that I love!

Let's see how this strategy unfolds in the coming months...🚀

Background

A brief background on why we are keenly interested in measuring core-to-core latency and how a tool like this is helpful.

NUMA

In the previous article on LIKWID, there is a sufficiently detailed discussion of the Non-Uniform Memory Access (NUMA) architecture; see the sections titled "II. Brief Overview of CPU Architecture" and "13. NUMA Effect: How to measure the performance impact?". In summary, NUMA allows a processor to access the memory attached to its own socket as well as the memory attached to all other sockets within a given compute node. However, accessing local memory on a socket is faster than accessing memory on a remote socket, because remote accesses must traverse the inter-socket link.

We can use tools such as numactl to examine the distance matrix between NUMA nodes or domains:
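Below is a hedged sketch of what the relevant part of the output looks like on this kind of two-socket node (only the distance table is shown; the values match the discussion that follows):

```bash
$ numactl --hardware
available: 2 nodes (0-1)
...
node distances:
node   0   1
  0:  10  21
  1:  21  10
```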

The same can also be seen from sysfs:
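Each NUMA node exports its distance vector through a sysfs file; on this node it would look roughly like this:

```bash
$ cat /sys/devices/system/node/node*/distance
10 21
21 10
```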

The above example is from a compute node with two Cascade Lake 6248 CPUs (20 physical cores each). The memory access distance within a NUMA domain/node is normalized to 10 (1.0x), and the distances between the other NUMA domains in the node are scaled relative to this base value. For example, the value 21 indicates that NUMA node 0 accessing remote memory on NUMA node 1 incurs roughly 2.1x the latency of accessing its own local memory.

We commonly use memory benchmarks like STREAM to understand aspects such as achievable memory bandwidth (with a single core, with multiple cores, under NUMA effects, etc.), but they do not readily expose other architectural aspects, such as the interplay between the cache hierarchies of different cores.

Cache Coherency and Core-to-Core Latencies

Inter-socket links such as the Quick Path Interconnect (QPI) provide point-to-point connectivity between processors, but they introduce a new challenge: keeping a uniform view of memory across all processors. To address this, Cache Coherent NUMA (CC-NUMA) systems use inter-processor communication to keep a consistent view of memory when the same memory location is stored or manipulated in the caches of different processors.

The complexity of modern memory subsystems, with their multiple cache levels, imposes a substantial overhead for maintaining cache coherence in shared-memory systems. Consequently, when multiple cores access or modify the same memory location in rapid succession, particularly when the cache lines involved are in different coherency states, memory access latency can increase significantly and performance can suffer. This is why a quick and easy latency measurement tool is useful to know about!
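To build some intuition for what such a measurement does, here is a minimal Rust sketch (my own illustration, not the implementation of the tool discussed below): two threads bounce a single atomic variable back and forth with compare-and-swap, so every round trip forces the cache line to migrate between the two cores' caches. The real tool additionally pins each thread to a specific core pair and reports per-pair statistics, which this sketch omits.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Instant;

const ROUND_TRIPS: u64 = 100_000;

fn main() {
    // Shared flag that the two threads bounce back and forth with CAS.
    let flag = Arc::new(AtomicU64::new(0));
    let flag2 = Arc::clone(&flag);

    // "Pong" thread: waits for the flag to become 1, then sets it back to 0.
    let pong = thread::spawn(move || {
        for _ in 0..ROUND_TRIPS {
            while flag2
                .compare_exchange(1, 0, Ordering::AcqRel, Ordering::Acquire)
                .is_err()
            {}
        }
    });

    // "Ping" (main thread): waits for 0, sets it to 1, and times the whole run.
    let start = Instant::now();
    for _ in 0..ROUND_TRIPS {
        while flag
            .compare_exchange(0, 1, Ordering::AcqRel, Ordering::Acquire)
            .is_err()
        {}
    }
    pong.join().unwrap();
    let elapsed = start.elapsed();

    // One round trip involves two cache-line ownership transfers,
    // so divide by 2 * ROUND_TRIPS for a rough one-way estimate.
    println!(
        "avg one-way latency: ~{:.1} ns",
        elapsed.as_nanos() as f64 / (ROUND_TRIPS as f64 * 2.0)
    );
}
```

Pinning the two threads to chosen cores (for example with taskset or an affinity crate) is what turns this toy into the intra-socket vs. inter-socket comparison shown later.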

Enter "core-to-core-latency" Tool

There are various tools to measure metrics like core-to-core latency. Recently, I came across the core-to-core-latency tool written by Nicolas Viennot. Even though it's a small utility, it is simple to use and generates a nice, easy-to-read latency heatmap. So I thought it was a perfect candidate for a short blog post!

Install

The tool is written in Rust and can be easily installed using the cargo package manager. While I do not have prior experience with Rust, setting it up appears to be a straightforward process:
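For reference, this is the usual rustup-based installation (the command comes from rustup.rs; run it only if you are comfortable piping a script into your shell):

```bash
# Install the Rust toolchain (rustup, cargo, rustc) via the official installer
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```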

Configure the shell as suggested and use the cargo command to install the tool:
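On my setup that amounted to something like the following (the crate name matches the tool's README):

```bash
# Make cargo available in the current shell, then install the tool from crates.io
source "$HOME/.cargo/env"
cargo install core-to-core-latency
```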

Use

The CLI options are few and straightforward:
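I won't reproduce the full help text here; it can always be pulled up with:

```bash
core-to-core-latency --help
```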

We will use the benchmark that performs a Compare-And-Swap (CAS) operation between cores of the same socket and cores of different sockets. We will continue using the dual-socket Cascade Lake 6248 node as before:
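If I remember the README correctly, the CAS test is the tool's default benchmark, so a plain invocation like the one below prints the pairwise latency matrix (check --help for selecting other benchmarks or tweaking the number of samples); the numbers discussed next come from such a run:

```bash
# CAS is the default benchmark; this prints the core-to-core latency matrix
core-to-core-latency
```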

Understanding the output is straightforward: cores 0-3 belong to the first socket, while cores 20-23 are part of the second socket. CAS latencies between cores on the same socket are approximately 50 ns, but when cores from different sockets are involved, the latency increases to around 150 ns. I made a quick attempt to cross-check these figures with Intel's Memory Latency Checker tool, but I'll delve into that later (otherwise, this post could become long and I would find myself in a perpetual waiting cycle!). Anyway, if you are curious and want to see previously asked questions, take a quick look at the GitHub issues.

Let's gather results for a few more configurations and generate some nice plots (a rough sketch of how these runs can be driven follows the list):

  • A) Utilizing all 8 cores of the first socket
  • B) Allocating 4 cores from the first socket and another 4 cores from the second socket
  • C) Utilizing the full node, which comprises 20 cores from the first socket and 20 cores from the second socket
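Here is a hedged sketch of how these runs can be driven. I restrict the benchmark to the desired cores with taskset, assuming the tool only measures the cores it is allowed to run on (it may also offer a native core-selection option; check --help), and I assume a --csv-style flag for machine-readable output, since the plotting notebook consumes CSV files. The core ID ranges follow this node's numbering (cores 0-19 on socket 0, cores 20-39 on socket 1), and the output file names are just placeholders:

```bash
# A) 8 cores of the first socket
taskset -c 0-7 core-to-core-latency --csv > config_a.csv

# B) 4 cores from each socket
taskset -c 0-3,20-23 core-to-core-latency --csv > config_b.csv

# C) the full node: all 40 cores across both sockets
core-to-core-latency --csv > config_c.csv
```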

Now, we can take the generated CSV reports and feed them into the plotting notebook, which produces the latency heatmaps below:

Latency plots for the first two 8-core configurations

and the third configuration with all 40 cores across the two sockets:

Latency plots for the third configuration of the full node

Aren't these plots self-explanatory? I love it when performance tools present data in an intuitive manner that aligns with our mental models! Feel free to explore the tool further.

That's all for today! I think this gives sufficient background about the tool, and it allows me to wrap up this short post within the same weekend! 🚀

Credit

All credit goes to Nicolas Viennot, who put together this tool and made it available in the GitHub repo. Thanks, Nicolas!