Concurrency allows multiple tasks within a program to run simultaneously while communicating with each other. C++11 first introduced concurrency features such as threads, mutexes, and condition variables; C++17 expanded this with parallel algorithms. Concurrency overlaps the execution of tasks, while parallelism runs multiple computations on different data simultaneously. The two main ways to exploit this in C++ are multithreading, where thread objects execute delegated sub-tasks independently, and parallelism, where annotations such as execution policies mark work for potential concurrent execution. Typical uses include email servers and web crawlers, which must handle many simultaneous requests. Concurrency allows a program to utilize multiple CPU cores efficiently for performance gains, and as core counts increase it is becoming more and more important for C++ developers.
The C++ language has continuously evolved since its inception, and each new standard brings fresh features and advanced concepts that help in writing more efficient, reliable, and expressive code. In this blog post, we will delve into some of the advanced concepts of modern C++ that every seasoned developer should be familiar with.

Cheatsheets: https://hackingcpp.com/cpp/cheat_sheets.html
1. Lambda Expressions:
Lambda functions or lambda expressions allow you to define anonymous functions inline. They are particularly useful for short functions that are used a limited number of times.
auto add = [](int a, int b) -> int { return a + b; };
std::cout << add(5, 3); // Outputs: 8
2. Rvalue References and Move Semantics:
Introduced in C++11, rvalue references enable the creation of move constructors and move assignment operators. This means you can transfer resources from one object to another without expensive deep copying.
std::string str1 = "Hello";
std::string str2 = std::move(str1); // Uses move semantics; str1 is left in a valid but unspecified state (typically empty)
3. Smart Pointers:
Gone are the days of raw pointers with their associated risks. Smart pointers, like std::shared_ptr, std::unique_ptr, and std::weak_ptr, manage the lifetime of the objects they point to, ensuring no memory leaks.
std::unique_ptr<int> ptr = std::make_unique<int>(5); // prefer make_unique (C++14) over raw new
4. Variadic Templates:
Variadic templates allow functions and classes to accept any number of template arguments, making it easier to create generic and reusable components.
template<typename... Args>
void print(Args... args) { (std::cout << ... << args); }
5. constexpr:
With constexpr
, you can compute values at compile-time. It can be used with functions or variables, ensuring that they are evaluated at compile time.
constexpr int factorial(int n) { return (n <= 1) ? 1 : n * factorial(n - 1); }
6. Attributes:
Attributes provide a universal syntax to communicate additional information to the compiler. One of the best known is [[nodiscard]], which warns developers if the return value of a function is discarded.
[[nodiscard]] int compute() { return 42; }
7. Structured Bindings:
C++17 introduced structured bindings that allow you to decompose objects into their individual parts.
std::pair<int, std::string> p = {5, "five"}; auto [number, word] = p;
8. Concepts:
C++20 brought about concepts, which are a way to specify constraints on template parameters, making template code more readable and safer.
template<typename T> concept Numeric = std::is_arithmetic_v<T>; template<Numeric N> N square(N n) { return n * n; }
9. Coroutines:
Coroutines, introduced in C++20, are a form of function that can be exited and later re-entered. They pave the way for asynchronous programming.
// Note: a generator type is not standard until C++23 (std::generator); before that you need a hand-written or library-provided one.
generator<int> generateNumbers(int count) { for (int i = 0; i < count; ++i) { co_yield i; } }
Concurrency in C++ enables multiple threads to execute in parallel, making efficient use of multi-core processors and enhancing performance for compute-bound tasks. The C++11 standard introduced a concurrency library, which has since been extended in subsequent standards. Let’s delve into the main components of C++ concurrency:
1. Threads:
The basic unit of concurrency is the thread. You can create and manage threads using the std::thread class.
#include <iostream>
#include <thread>

void helloFunction() {
    std::cout << "Hello from another thread!" << std::endl;
}

int main() {
    std::thread t(helloFunction);
    t.join(); // Wait for the thread to finish
    return 0;
}
2. Mutex:
Mutex (mutual exclusion) is a synchronization primitive used to prevent multiple threads from simultaneously executing critical sections of code.
#include <mutex>

std::mutex mtx;

void safeFunction() {
    mtx.lock();
    // Critical section of code
    mtx.unlock(); // Danger: never reached if the critical section throws
}

// Preferred: RAII with std::lock_guard
void saferFunction() {
    std::lock_guard<std::mutex> lock(mtx);
    // Critical section of code; unlocked automatically on scope exit
}
3. Condition Variables:
Condition variables are synchronization primitives used to block a thread until notified by another thread.
#include <condition_variable>
#include <mutex>
#include <queue>

std::queue<int> dataQueue;
std::mutex dataMutex;
std::condition_variable dataCondition;

void dataProcessingThread() {
    std::unique_lock<std::mutex> lock(dataMutex);
    dataCondition.wait(lock, []{ return !dataQueue.empty(); }); // predicate guards against spurious wakeups
    // Process data from the queue
}
4. Future, Promise, and async:
- Future: Represents a result from a potentially asynchronous computation; a std::future<T> lets you retrieve the value once it is ready.
- Promise: An object that can store a value of type T to be retrieved by a std::future.
- async: A function that executes a callable asynchronously (potentially in a new thread) and returns a std::future that will hold the result.
#include <future>

int computeFunction() { return 42; }

int main() {
    std::future<int> result = std::async(computeFunction);
    int value = result.get(); // This will block until the value is ready
    return 0;
}
5. Atomic Operations:
Atomic operations execute as single, indivisible steps: no other thread can ever observe them half-complete. This makes simple shared counters and flags safe without a mutex.
#include <atomic>

std::atomic<int> counter(0);

void increment() {
    for (int i = 0; i < 1000000; ++i) {
        ++counter; // atomic read-modify-write; no lock required
    }
}
6. Memory Model and Ordering:
C++11 introduced a memory model that describes the interaction of threads through memory and their ordering constraints. The key concept here is the “happens-before” relationship, which guarantees memory consistency.
For instance, std::memory_order_acquire and std::memory_order_release are ordering constraints that control the ordering of reads and writes around atomic operations.
7. Parallel Algorithms (C++17 and beyond):
The C++17 standard introduced parallel algorithms in the Standard Library. Many algorithms in the <algorithm> header now accept an execution policy and can utilize multiple threads.
#include <algorithm>
#include <execution> // required for execution policies
#include <vector>

int main() {
    std::vector<int> v = { /*... some data ...*/ };
    // Parallel sort
    std::sort(std::execution::par, v.begin(), v.end());
    return 0;
}
To recap: concurrency allows multiple tasks to execute simultaneously, improving performance and responsiveness. C++11 introduced the core features (std::thread, mutexes, condition variables), and C++17 expanded support. std::thread spawns a new thread of execution that runs a given function concurrently; any access to shared data must be synchronized. A mutex (mutual exclusion object) locks a section of code so that only one thread can hold the lock at a time, and locking it before touching shared data prevents race conditions.
To use a mutex directly, call lock() before accessing the shared resource and unlock() when done; calling lock() on an already locked mutex blocks until it is released. std::lock_guard and std::unique_lock are RAII wrappers over a mutex that simplify locking and unlocking. std::condition_variable lets threads wait for notifications from other threads and is used together with a mutex to implement synchronized data access; common patterns such as producer-consumer queues combine the two. Additional tools like std::future, std::async, and std::packaged_task are provided for synchronizing execution. Used properly, these features prevent race conditions and allow safe, performant concurrent C++ code.
Most Common Interview Questions (Java Multithreading)
- Multitasking is the ability of an operating system to execute multiple processes or tasks simultaneously.
- Multithreading is a programming technique to run multiple threads in a single process, with each thread executing independently. It is used for improving performance, responsiveness, and resource utilization in areas like web servers, application servers, GUI apps etc.
- Advantages of multithreading include increased performance by utilizing multiple CPUs, improved responsiveness, and simplified modeling of concurrent problems.
- Java provides built-in multithreading capabilities like the Thread class, while C++ historically relied on libraries such as Pthreads; since C++11, however, the C++ standard library includes std::thread. Java's built-in synchronization is also comparatively safe and easy to use.
- There are two ways to define a thread in Java – extend the Thread class or implement the Runnable interface.
- Implementing Runnable is preferred because Java does not support multiple inheritance. Runnable can be implemented by any class including one already extending another class.
- t.start() starts the execution of the thread while t.run() simply executes the run() method on the current thread.
- Thread scheduler is the part of the OS that allocates CPU time to threads and switches between them based on scheduling algorithms.
- If run() is not overridden, then nothing will happen when the thread is started.
- Overloading run() is technically possible, but start() only ever invokes the no-argument run(); any overloaded version must be called explicitly like a normal method.
- Overriding start() will prevent the thread from starting since start() is responsible for actually creating the thread.
- The classic thread lifecycle consists of states like new, runnable, running, blocked, and dead; Java's Thread.State enum formalizes these as NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED.
- start() executes the run() method on a new thread. This allows concurrency with the main thread.
- Trying to restart a thread after it has started will throw IllegalThreadStateException.
- Thread class constructors allow setting thread name, priority etc. when creating a thread.
- getName() and setName() methods are used to get and set the name of a thread.
- Thread priorities are generally used in scheduling to determine order of execution. Higher priority threads are executed first.
- Main thread priority is 5 (NORM_PRIORITY).
- Priority of new thread is same as the thread that created it.
- getPriority() and setPriority() methods are used to get and set thread priority.
- Setting a thread priority outside the range MIN_PRIORITY (1) to MAX_PRIORITY (10) throws IllegalArgumentException.
- A higher priority thread is given preference for execution over lower priority threads, though priority is only a hint to the scheduler.
- For same priority threads, scheduling is arbitrary and depends on OS implementation.
- We can prevent thread execution by not calling start() or calling wait()/sleep() after starting.
- yield() causes the current thread to relinquish CPU voluntarily to allow equal sharing between threads.
- Yes, join() is overloaded to allow specifying timeout.
- sleep() temporarily blocks the thread for the specified duration to allow other threads to execute.
- synchronized provides locking so only one thread can access shared data at a time. Disadvantage is performance overhead.
- Object level lock is required when multiple threads operate on a shared object to prevent data inconsistency.
- Class level lock prevents concurrent access to static fields/methods since they are shared by all instances.
- No, only one thread can execute a synchronized method on an object at a time.
- Synchronized static methods lock at class level while non-static methods lock at object level.
- Synchronized blocks allow more fine-grained control over locking compared to synchronized methods.
- Synchronized statement provides syntactic sugar for synchronized blocks to lock on a common object.
- wait(), notify() and notifyAll() allow threads to communicate by waiting for signals.
- These methods are defined in the Object class.
- Defining them in Object allows all classes in Java to use these methods for inter-thread communication.
- No, wait() must be called from a synchronized block/method that holds the lock.
- After getting notified, a waiting thread enters RUNNABLE state and competes for the lock before resuming.
- A thread releases lock on exit from synchronized method/block or by calling wait().
- wait() – pause thread execution and release lock, notify() – wake up one waiting thread, notifyAll() – wake up all waiting threads.
- notify() wakes up a single arbitrary waiting thread while notifyAll() wakes up all waiting threads.
- The scheduler picks which waiting thread is woken; we cannot force a specific thread to be notified.
- interrupt() method can be used to interrupt a sleeping or waiting thread.
- Deadlock occurs when two or more threads wait indefinitely due to cyclic lock dependency. This can be resolved by proper lock ordering.
- synchronized keyword can cause deadlock if used incorrectly.
- Thread’s stop() method can be used to explicitly stop a thread but it is deprecated as it can cause instability.
- suspend() and resume() have been deprecated because they can potentially leave shared data in inconsistent state.
- Starvation happens when a thread is unable to gain regular access to shared resources and is unable to make progress. Deadlock is a permanent blocking state.
- Race condition occurs when multiple threads access and manipulate shared data concurrently leading to unexpected behavior.
- Daemon threads are service provider threads that provide services to user threads. An example is garbage collection.
- isDaemon() checks daemon status, setDaemon() can change status. Main thread is non-daemon.
- ThreadGroup is used to group related threads together. It provides methods to manage threads as a group.
- ThreadLocal allows creating thread-specific data – a separate copy of the variable for each thread.
- Threads can be started with start() and stopped cooperatively (stop() is deprecated); once a thread has terminated, it cannot be restarted.
Modern C++ is not just about the basics. With its rich set of advanced features, it allows developers to write performant, safe, and expressive code. The key is to stay updated and practice these concepts to truly grasp their power and utility. Whether you’re working on large software projects or small utility applications, leveraging these advanced C++ concepts can drastically improve your code’s quality and efficiency.