Research group

Concurrent Computing Lab

Concurrent programming has gained popularity over the past decades. Every major language and platform provides concurrency primitives, yet these primitives become increasingly hard to use efficiently as systems grow more complex, with multiple NUMA nodes and relaxed memory models. Several important practical questions arise. How do we build efficient concurrent algorithms today? What is the best trade-off between progress guarantees, efficiency, and fairness? How do we check these algorithms for correctness? How do we benchmark them? While academia has partially answered some of these questions, many practical problems remain open. Our primary focus is to answer them by providing practically useful and theoretically valuable solutions, as well as high-quality tools that help other researchers and developers in the field of concurrency.

Our topics of interest include:

  • Concurrent algorithms and data structures
  • Non-volatile memory (NVM)
  • Testing and verification
  • Performance analysis, debugging, and optimization
  • Parallel programming languages and models
  • Memory reclamation

See the full list of publications here.

Main projects

Synchronization and communication primitives for Kotlin Coroutines

Traditional concurrent programming involves manipulating shared mutable state. An alternative to this style is the communicating sequential processes (CSP) model, in which processes share data via explicit communication. Kotlin Coroutines is a library that brings this model to the Kotlin language, where processes are represented by coroutines, light-weight threads. Communication and synchronization between coroutines require several abstractions, ranging from the well-known locks and semaphores to channels, special producer-consumer structures that transfer data among coroutines. This project is devoted to developing efficient primitives of this kind and adapting them for Kotlin Coroutines.
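To illustrate the channel contract described above, here is a minimal sketch using a JDK blocking queue instead of the real Kotlin Coroutines channel: real channels suspend coroutines rather than block threads, but the basic `send`/`receive` contract, transferring elements in FIFO order between a producer and a consumer, is the same. The `transfer` function and its parameters are illustrative, not part of any library API.

```kotlin
import java.util.concurrent.ArrayBlockingQueue
import kotlin.concurrent.thread

// Sketch of the channel abstraction: a bounded buffer shared between a
// producer and a consumer thread. put() blocks when the buffer is full,
// take() blocks when it is empty; a coroutine channel suspends instead.
fun transfer(items: List<Int>, capacity: Int = 2): List<Int> {
    val channel = ArrayBlockingQueue<Int>(capacity)
    val received = mutableListOf<Int>()

    val producer = thread { for (x in items) channel.put(x) }
    val consumer = thread { repeat(items.size) { received += channel.take() } }

    producer.join()
    consumer.join()
    return received
}

fun main() {
    // Single producer, single consumer: elements arrive in FIFO order.
    println(transfer(listOf(1, 2, 3, 4, 5))) // [1, 2, 3, 4, 5]
}
```

Even in this thread-based sketch, the back-pressure behavior carries over: a small buffer capacity makes a fast producer wait for the consumer, which is exactly the role of buffered channels in the coroutine setting.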

Lincheck: testing concurrency on the JVM

Lincheck is a practical tool for testing concurrent algorithms in JVM-based languages; it supports both stress testing and bounded model checking. A key advantage of Lincheck over popular testing tools such as JCStress is its declarative way of writing concurrent tests: the user specifies the operations to be examined, their correctness properties, and the number of invocations per scenario. Roughly, Lincheck generates a series of concurrent scenarios, executes them in either stress-testing or model-checking mode, and checks whether the observed results can be explained by some sequential execution that satisfies the specified correctness property. In addition, Lincheck is the first testing tool to support relaxed data structure semantics, dual data structure designs popular in producer-consumer and coroutine implementations, as well as durable data structures designed for NVRAM.
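A declarative Lincheck test typically looks roughly like the sketch below (assuming the `kotlinx-lincheck` and JUnit dependencies are on the classpath; the counter under test is a stand-in for a real concurrent data structure). The `@Operation` methods define the tested interface; Lincheck generates concurrent scenarios over them and checks the outcomes for linearizability.

```kotlin
import org.jetbrains.kotlinx.lincheck.annotations.Operation
import org.jetbrains.kotlinx.lincheck.check
import org.jetbrains.kotlinx.lincheck.strategy.stress.StressOptions
import org.junit.Test
import java.util.concurrent.atomic.AtomicInteger

// A declarative Lincheck test for a trivial atomic counter: no manual
// scenarios are written; Lincheck generates and executes them itself.
class CounterTest {
    private val counter = AtomicInteger(0)

    @Operation fun inc(): Int = counter.incrementAndGet()
    @Operation fun get(): Int = counter.get()

    @Test
    fun stressTest() = StressOptions()
        .iterations(50)                // number of generated scenarios
        .invocationsPerIteration(1000) // runs per scenario
        .check(this::class)
}
```

Replacing `StressOptions` with `ModelCheckingOptions` switches the same test to the bounded model-checking mode, which additionally produces an interleaving trace when an error is found.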

Concurrent graph algorithms

Parallel graph processing is a fundamental and well-studied topic in academia, in both its theoretical and practical aspects. However, some applications, such as social network analysis and compilers, require these algorithms to be online and, thus, concurrent. This project aims to build practically efficient concurrent algorithms for different aspects of graph processing, such as using multi-queues as priority schedulers and solving online dynamic connectivity under edge insertions and deletions. Since most modern hardware platforms have several NUMA sockets, it is also essential to make these algorithms NUMA-friendly.
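The multi-queue idea mentioned above can be sketched as follows: keep several sequential priority queues, insert into a randomly chosen one, and delete from the better of two randomly chosen ones. This relaxes strict priority order (deleteMin returns an element *close to* the global minimum) in exchange for scalability. The class below is an illustrative sketch, not the group's implementation.

```kotlin
import java.util.PriorityQueue
import java.util.concurrent.ThreadLocalRandom

// Sketch of a multi-queue: k lock-protected sequential priority queues.
class MultiQueue<T : Comparable<T>>(private val k: Int = 4) {
    private val queues = List(k) { PriorityQueue<T>() }

    // Insert into a uniformly random queue.
    fun insert(x: T) {
        val q = queues[ThreadLocalRandom.current().nextInt(k)]
        synchronized(q) { q.add(x) }
    }

    // Pick two random queues and remove the smaller of their minima.
    // May return null even if other queues are non-empty (retry then).
    fun deleteMin(): T? {
        val rnd = ThreadLocalRandom.current()
        var q1 = queues[rnd.nextInt(k)]
        var q2 = queues[rnd.nextInt(k)]
        // Lock in a fixed global order to avoid deadlock (a sketch-level
        // policy; identityHashCode ties are ignored here for brevity).
        if (System.identityHashCode(q1) > System.identityHashCode(q2)) {
            val t = q1; q1 = q2; q2 = t
        }
        synchronized(q1) {
            synchronized(q2) {
                val best = sequenceOf(q1, q2)
                    .filter { it.isNotEmpty() }
                    .minByOrNull { it.peek() }
                return best?.poll()
            }
        }
    }
}

fun main() {
    val mq = MultiQueue<Int>(3)
    for (x in listOf(5, 1, 4, 2, 3)) mq.insert(x)
    val out = mutableListOf<Int>()
    while (out.size < 5) mq.deleteMin()?.let { out += it }
    // All inserted elements come back, in roughly (not strictly) sorted order.
    println(out.sorted()) // [1, 2, 3, 4, 5]
}
```

Because each operation touches at most two of the k queues, threads rarely contend on the same lock, which is what makes this scheme attractive as a relaxed priority scheduler for concurrent graph algorithms.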

Student Internship

We are hiring interns all year round. The ideal candidate has a strong background in CS or Math and some knowledge of concurrent programming. Please email Nikita Koval for details.