Concurrency lets a program make progress on multiple tasks at once, with those tasks communicating and coordinating as needed. C++11 first introduced standard concurrency features such as threads, mutexes, and condition variables, and C++17 expanded this with parallel algorithms. Concurrency overlaps the execution of tasks; parallelism executes work simultaneously, for example on different cores or on different pieces of data. The two main ways to exploit this in C++ are multithreading and parallel algorithms: multithreading creates thread objects that execute delegated sub-tasks independently, while parallel algorithms let you mark work with execution policies so the library may run it concurrently. Typical applications include email servers and web crawlers that must handle many simultaneous requests. Used well, concurrency exploits multiple CPU cores for real performance gains, and as core counts keep rising it is becoming ever more important for C++ developers.

The C++ language has continuously evolved since its inception, and with each new standard update, it brings fresh features and advanced concepts that help in writing more efficient, reliable, and expressive code. In this blog post, we will delve into some of the advanced concepts of modern C++ that every seasoned developer should be familiar with. Cheatsheets: https://hackingcpp.com/cpp/cheat_sheets.html

1. Lambda Expressions:

Lambda functions or lambda expressions allow you to define anonymous functions inline. They are particularly useful for short functions that are used a limited number of times.

auto add = [](int a, int b) -> int { return a + b; };
std::cout << add(5, 3); // Outputs: 8

2. Rvalue References and Move Semantics:

Introduced in C++11, rvalue references enable the creation of move constructors and move assignment operators. This means you can transfer resources from one object to another without expensive deep copying.

std::string str1 = "Hello"; std::string str2 = std::move(str1); // Uses move semantics; str1 is now empty

3. Smart Pointers:

Gone are the days of raw pointers with their associated risks. Smart pointers, like std::shared_ptr, std::unique_ptr, and std::weak_ptr, manage the lifetime of objects they point to, ensuring no memory leaks.

std::unique_ptr<int> ptr(new int(5));

4. Variadic Templates:

Variadic templates allow functions and classes to accept any number of template arguments, making it easier to create generic and reusable components.

template<typename... Args> void print(Args... args) { (std::cout << ... << args); }

5. constexpr:

With constexpr, you can compute values at compile time. It can be applied to both variables and functions: a constexpr variable must be evaluated at compile time, and a constexpr function can be, whenever its arguments are constant expressions.

constexpr int factorial(int n) { return (n <= 1) ? 1 : n * factorial(n - 1); }
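
As a quick illustration, the factorial above can feed a static_assert, which must be satisfied during compilation; the same function still works with run-time arguments:

constexpr int fact5 = factorial(5);            // computed at compile time
static_assert(fact5 == 120, "factorial(5) should be 120");

int n = 6;
int atRuntime = factorial(n);                  // ordinary run-time call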

6. Attributes:

Attributes provide a universal syntax for communicating additional information to the compiler. One of the best known is [[nodiscard]], which makes the compiler warn when the return value of a function is discarded.

[[nodiscard]] int compute() { return 42; }

7. Structured Bindings:

C++17 introduced structured bindings that allow you to decompose objects into their individual parts.

std::pair<int, std::string> p = {5, "five"}; auto [number, word] = p;

8. Concepts:

C++20 brought about concepts, which are a way to specify constraints on template parameters, making template code more readable and safer.

template<typename T> concept Numeric = std::is_arithmetic_v<T>; template<Numeric N> N square(N n) { return n * n; }

9. Coroutines:

Coroutines, introduced in C++20, are a form of function that can be exited and later re-entered. They pave the way for asynchronous programming.

generator<int> generateNumbers(int count) { for (int i = 0; i < count; ++i) { co_yield i; } }

Concurrency in C++ enables multiple threads to execute in parallel, making efficient use of multi-core processors and enhancing performance for compute-bound tasks. The C++11 standard introduced a concurrency library, which has since been extended in subsequent standards. Let’s delve into the main components of C++ concurrency:

1. Threads:

The basic unit of concurrency is the thread. You can create and manage threads using the std::thread class.

#include <iostream> #include <thread> void helloFunction() { std::cout << "Hello from another thread!" << std::endl; } int main() { std::thread t(helloFunction); t.join(); // Wait for the thread to finish return 0; }

2. Mutex:

Mutex (mutual exclusion) is a synchronization primitive used to prevent multiple threads from simultaneously executing critical sections of code.

#include <mutex> std::mutex mtx; void safeFunction() { mtx.lock(); // Critical section of code mtx.unlock(); } // or using lock_guard void saferFunction() { std::lock_guard<std::mutex> lock(mtx); // Critical section of code }

3. Condition Variables:

Condition variables are synchronization primitives used to block a thread until notified by another thread.

#include <condition_variable> #include <queue> std::queue<int> dataQueue; std::mutex dataMutex; std::condition_variable dataCondition; void dataProcessingThread() { std::unique_lock<std::mutex> lock(dataMutex); dataCondition.wait(lock, []{ return !dataQueue.empty(); }); // Process data from the queue }

4. Future, Promise, and async:

std::future, std::promise, and std::async provide a higher-level way to obtain results from asynchronous work: a promise produces a value that the associated future consumes, and std::async launches a task and returns a future for its result.

#include <future> int computeFunction() { return 42; } int main() { std::future<int> result = std::async(computeFunction); int value = result.get(); // This will block until value is ready return 0; }

5. Atomic Operations:

Atomic operations are indivisible: they complete without interference from operations in other threads, so individual reads and writes on shared data can be performed safely without a lock.

#include <atomic> std::atomic<int> counter(0); void increment() { for (int i = 0; i < 1000000; ++i) { counter++; } }

6. Memory Model and Ordering:

C++11 introduced a memory model that describes the interaction of threads through memory and their ordering constraints. The key concept here is the “happens-before” relationship, which guarantees memory consistency.

For instance, std::memory_order_acquire and std::memory_order_release are ordering constraints to control the ordering of reads and writes to atomic variables.
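
As an illustrative sketch (the data and ready names are hypothetical): the release store publishes the ordinary write to data, and any thread that observes the flag with an acquire load is guaranteed to see that write as well:

#include <atomic>
#include <cassert>
#include <thread>

int data = 0;
std::atomic<bool> ready(false);

void producer() {
    data = 42;                                     // ordinary write
    ready.store(true, std::memory_order_release);  // publish: happens-before the acquire load
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {
        // spin until the flag becomes visible
    }
    assert(data == 42); // guaranteed by the release/acquire pairing
}

int main() {
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
    return 0;
}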

7. Parallel Algorithms (C++17 and beyond):

The C++17 standard introduced parallel algorithms in the Standard Library. Many algorithms in the <algorithm> and <numeric> headers can now utilize multiple threads when passed an execution policy from the <execution> header.

#include <algorithm> #include <vector> int main() { std::vector<int> v = { /*... some data ...*/ }; // Parallel sort std::sort(std::execution::par, v.begin(), v.end()); return 0; }

Concurrency allows multiple tasks to be executed simultaneously, improving performance and responsiveness. C++11 introduced the core concurrency features (std::thread, mutexes, condition variables), and C++17 expanded this support. std::thread spawns a new thread of execution that runs concurrently with the rest of the program; it is passed a function to execute, and access to shared data must be synchronized. A mutex (mutual exclusion object) locks a section of code to prevent simultaneous access: std::mutex provides exclusive access, so only one thread can acquire the lock at a time, and locking it before touching shared data prevents race conditions.

To use a mutex, call lock() before accessing the shared resource and unlock() when done; calling lock() on an already locked mutex blocks until it is released. std::unique_lock is a RAII wrapper over a mutex that simplifies locking and unlocking. std::condition_variable allows threads to wait for notifications from other threads; condition variables are used together with mutexes to implement synchronized access to data, and common patterns such as producer-consumer queues are built from a mutex plus a condition variable. Additional tools such as std::future, std::async, and std::packaged_task are provided for synchronizing execution; a packaged_task sketch follows below. Properly using mutexes and these other features prevents race conditions and allows safe, performant concurrent C++ code.
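
std::packaged_task, mentioned above but not demonstrated earlier, wraps a callable so that its result is delivered through a future. A minimal illustrative sketch:

#include <future>
#include <iostream>
#include <thread>

int add(int a, int b) {
    return a + b;
}

int main() {
    std::packaged_task<int(int, int)> task(add);   // wrap the callable
    std::future<int> result = task.get_future();   // future tied to the task's result

    std::thread worker(std::move(task), 2, 3);     // run the task on another thread
    std::cout << result.get() << '\n';             // prints 5 once the task has run
    worker.join();
    return 0;
}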

Most Common Interview Questions (Java Multithreading)

  1. Multitasking is the ability of an operating system to execute multiple processes or tasks simultaneously.
  2. Multithreading is a programming technique to run multiple threads in a single process, with each thread executing independently. It is used for improving performance, responsiveness, and resource utilization in areas like web servers, application servers, GUI apps etc.
  3. Advantages of multithreading include increased performance by utilizing multiple CPUs, improved responsiveness, and simplified modeling of concurrent problems.
  4. Java provides built-in multithreading via the Thread class; C++ historically relied on libraries such as Pthreads, although std::thread has been part of the standard since C++11. Java synchronization is also generally considered safer and easier.
  5. There are two ways to define a thread in Java – extend the Thread class or implement the Runnable interface.
  6. Implementing Runnable is preferred because Java does not support multiple inheritance. Runnable can be implemented by any class including one already extending another class.
  7. t.start() starts the execution of the thread while t.run() simply executes the run() method on the current thread.
  8. Thread scheduler is the part of the OS that allocates CPU time to threads and switches between them based on scheduling algorithms.
  9. If run() is not overridden, then nothing will happen when the thread is started.
  10. Overloading run() is possible, but start() only invokes the no-argument run(); any overloaded version has to be called explicitly.
  11. Overriding start() without calling super.start() prevents a new thread from being created, since start() is responsible for actually creating the thread.
  12. The thread lifecycle consists of states like new, runnable, running, blocked, dead etc.
  13. start() executes the run() method on a new thread. This allows concurrency with the main thread.
  14. Trying to restart a thread after it has started will throw IllegalThreadStateException.
  15. Thread class constructors allow setting thread name, priority etc. when creating a thread.
  16. getName() and setName() methods are used to get and set the name of a thread.
  17. Thread priorities are generally used in scheduling to determine order of execution. Higher priority threads are executed first.
  18. Main thread priority is 5 (NORM_PRIORITY).
  19. Priority of new thread is same as the thread that created it.
  20. getPriority() and setPriority() methods are used to get and set thread priority.
  21. Setting a thread priority outside the range MIN_PRIORITY (1) to MAX_PRIORITY (10) throws an IllegalArgumentException.
  22. Higher priority thread gets chance for execution compared to lower priority threads.
  23. For same priority threads, scheduling is arbitrary and depends on OS implementation.
  24. We can prevent thread execution by not calling start() or calling wait()/sleep() after starting.
  25. yield() causes the current thread to relinquish CPU voluntarily to allow equal sharing between threads.
  26. Yes, join() is overloaded to allow specifying timeout.
  27. sleep() temporarily blocks the thread for the specified duration to allow other threads to execute.
  28. synchronized provides locking so only one thread can access shared data at a time. Disadvantage is performance overhead.
  29. Object level lock is required when multiple threads operate on a shared object to prevent data inconsistency.
  30. Class level lock prevents concurrent access to static fields/methods since they are shared by all instances.
  31. No, only one thread can execute a synchronized method on an object at a time.
  32. Synchronized static methods lock at class level while non-static methods lock at object level.
  33. Synchronized blocks allow more fine grained control over locking compared to synchronized methods.
  34. Synchronized statement provides syntactic sugar for synchronized blocks to lock on a common object.
  35. wait(), notify() and notifyAll() allow threads to communicate by waiting for signals.
  36. These methods are defined in the Object class.
  37. Defining them in Object allows all classes in Java to use these methods for inter-thread communication.
  38. No, wait() must be called from a synchronized block/method that holds the lock.
  39. After getting notified, a waiting thread enters RUNNABLE state and competes for the lock before resuming.
  40. A thread releases lock on exit from synchronized method/block or by calling wait().
  41. wait() – pause thread execution and release lock, notify() – wake up one waiting thread, notifyAll() – wake up all waiting threads.
  42. notify() wakes up one random thread while notifyAll() wakes up all waiting threads.
  43. Scheduler picks a random waiting thread. We cannot force a specific thread to be notified.
  44. interrupt() method can be used to interrupt a sleeping or waiting thread.
  45. Deadlock occurs when two or more threads wait indefinitely due to cyclic lock dependency. This can be resolved by proper lock ordering.
  46. synchronized keyword can cause deadlock if used incorrectly.
  47. Thread’s stop() method can be used to explicitly stop a thread but it is deprecated as it can cause instability.
  48. suspend() and resume() have been deprecated because they can potentially leave shared data in inconsistent state.
  49. Starvation happens when a thread is unable to gain regular access to shared resources and is unable to make progress. Deadlock is a permanent blocking state.
  50. Race condition occurs when multiple threads access and manipulate shared data concurrently leading to unexpected behavior.
  51. Daemon threads are service provider threads that provide services to user threads. An example is garbage collection.
  52. isDaemon() checks daemon status, setDaemon() can change status. Main thread is non-daemon.
  53. ThreadGroup is used to group related threads together. It provides methods to manage threads as a group.
  54. ThreadLocal allows creating thread-specific data – a separate copy of the variable for each thread.
  55. Threads are started with start() and can be stopped with stop() (deprecated); a thread cannot be restarted once it has terminated.

Modern C++ is not just about the basics. With its rich set of advanced features, it allows developers to write performant, safe, and expressive code. The key is to stay updated and practice these concepts to truly grasp their power and utility. Whether you’re working on large software projects or small utility applications, leveraging these advanced C++ concepts can drastically improve your code’s quality and efficiency.