Concurrency
Multiple threads run simultaneously and need to coordinate access to shared state — concurrency patterns replace busy-waiting and raw locks with composable primitives that enforce ordering, alternation, and resource limits.
- Replaces spin loops and synchronized boilerplate with Semaphore, ReentrantLock, and barrier classes that express intent clearly
- Two modes: semaphore-based signaling (ordering, alternation, group formation) and lock-based conditional waiting (producer-consumer, bounded buffers)
- Reach for it when threads must take turns, wait for others to finish, or share a bounded resource without racing
Quick Revision - Must Do Problems
If short on time, solve only these. They cover ~90% of concurrency patterns.
| # | Problem | Pattern | Why It's Essential | Time to Solve |
|---|---|---|---|---|
| 1 | Print in Order | Semaphore signaling | Foundation — every other pattern builds on this signal/wait idiom | 5–10 min |
| 2 | Print FooBar Alternately | Semaphore ping-pong | The alternation template used in dozens of variants | 10–15 min |
| 4 | Building H2O | Counting semaphore + CyclicBarrier | Only problem that combines ratio enforcement with a group barrier | 15–20 min |
| 6 | Design Bounded Blocking Queue | ReentrantLock + Condition | Canonical producer-consumer — asked in almost every concurrency interview | 15–20 min |
| 7 | The Dining Philosophers | Deadlock avoidance | The classic deadlock problem — interviewers expect you to know resource ordering | 20–25 min |
| 8 | Connection Pool | Resource pooling (BlockingQueue) | Core LLD problem — object reuse + timeout handling under concurrency | 15–20 min |
| 9 | Concurrent Rate Limiter | Check-then-act under lock | Exposes the most common thread safety bug — check and update must be atomic | 10–15 min |
Practice Tips
How to Identify a Concurrency Problem
- The problem gives you multiple threads calling methods concurrently
- Output must respect a global ordering despite arbitrary thread scheduling
- A shared resource (queue, counter, seat) must be accessed without races
- You see words like "thread-safe", "atomic", "blocking", or "synchronize"
Three Problem Types
Almost all concurrency interview problems fall into one of three categories. Identify the type first — the solution follows from it.
| Type | What Goes Wrong | Fix |
|---|---|---|
| Thread Safety (Protect Shared State) | Shared state gets corrupted. Two threads both check a seat is available, both book it. | Lock across both the check and the update — never release between them. |
| Ordering & Signaling (Thread Collaboration) | Threads need to hand off work or signal each other. A producer generates tasks faster than consumers process them. | Bounded BlockingQueue — producers block on put() when full, consumers block on take() when empty. |
| Resource Management (Concurrency Limiting) | Resources are limited. 100 requests, 10 database connections. Some must wait. | Semaphore to cap concurrent ops; BlockingQueue to pool and dispense actual objects. |
Most questions start with thread safety. Ordering & signaling and resource management often appear as follow-ups once throughput increases.
Concurrency vs Other Techniques
| If you see... | Use... | Not... |
|---|---|---|
| Thread A must run before Thread B | Semaphore signaling | volatile flag (busy-wait) |
| Two threads strictly alternating | Semaphore ping-pong | synchronized + notifyAll (works but verbose) |
| N threads must all arrive before any proceeds | CyclicBarrier | CountDownLatch (not reusable) |
| Wait for N events, then unblock a waiter | CountDownLatch | CyclicBarrier (different semantics) |
| Bounded shared buffer, block on full/empty | ReentrantLock + Condition | Semaphore alone (can't express two conditions cleanly) |
| Simple thread-safe counter | AtomicInteger | synchronized int (overkill for one field) |
| Reads vastly outnumber writes (e.g. cache, config) | ReentrantReadWriteLock | Plain ReentrantLock (blocks readers unnecessarily) |
| Limit N concurrent operations (rate limit, download cap) | Semaphore(N) | One lock per op (serializes; defeats the purpose) |
| Pool of expensive objects to reuse (connections, GPU handles) | BlockingQueue of objects | New object per request (prohibitively expensive) |
Interview Approach
- List every thread and what each one is responsible for
- Identify the synchronization constraint: strict ordering? alternation? group? resource limit?
- Pick the right primitive based on the constraint (see table above)
- Sketch the acquire/release pattern before writing code
- Check termination: how do waiting threads know when to stop?
- Review for deadlock: do any two threads acquire multiple locks? Is the order consistent?
5 Main Patterns
Pattern 1: Semaphore Signaling
Thread B must wait for thread A to complete an action before proceeding. Use a semaphore initialized to 0 — B blocks on acquire(), A unblocks it with release().
Semaphore signal = new Semaphore(0); // starts blocked
// Thread A
doWork();
signal.release(); // unblocks B
// Thread B
signal.acquire(); // waits for A's signal
doWork();
Chain signals to enforce an ordered sequence across N threads.
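A minimal sketch of that chaining, with hypothetical names (ChainDemo, runChain): thread i blocks on its own semaphore and releases thread i+1's, so only thread 0 starts unblocked and the rest proceed in sequence no matter how they are scheduled.

```java
import java.util.concurrent.Semaphore;

public class ChainDemo {
    static String runChain(int n) throws InterruptedException {
        // sem[i] gates thread i; thread 0 starts with a permit, the rest wait
        Semaphore[] sems = new Semaphore[n];
        for (int i = 0; i < n; i++) sems[i] = new Semaphore(i == 0 ? 1 : 0);
        StringBuffer order = new StringBuffer(); // thread-safe append
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            threads[i] = new Thread(() -> {
                try {
                    sems[id].acquire();                      // wait for predecessor's signal
                    order.append(id);
                    if (id + 1 < n) sems[id + 1].release();  // unblock the next thread
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        return order.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runChain(4)); // 0123 — order enforced regardless of start order
    }
}
```

This is exactly how Print in Order (Problem 1 below) generalizes from three threads to N.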
Pattern 2: Semaphore Ping-Pong
Two threads must strictly alternate. Use two semaphores — one starts at 1 (goes first), the other at 0 (blocked). Each thread acquires its own semaphore and releases the other's.
Semaphore aSem = new Semaphore(1); // A goes first
Semaphore bSem = new Semaphore(0); // B blocked
// Thread A (repeats n times)
aSem.acquire();
doA();
bSem.release();
// Thread B (repeats n times)
bSem.acquire();
doB();
aSem.release();
Pattern 3: Counting Semaphore + CyclicBarrier
Enforce a fixed ratio of thread types (e.g. 2H : 1O) and release them as a group. Counting semaphores cap how many of each type can enter; the barrier ensures all proceed together.
Semaphore aSem = new Semaphore(2); // at most 2 of type A
Semaphore bSem = new Semaphore(1); // at most 1 of type B
CyclicBarrier barrier = new CyclicBarrier(3); // releases when 2A + 1B arrive
// Thread type A
aSem.acquire();
doA();
barrier.await(); // wait for full group
aSem.release(); // open slot for next A (after all three passed barrier)
// Thread type B
bSem.acquire();
doB();
barrier.await();
bSem.release();
Pattern 4: ReentrantLock + Condition
Explicit conditional waiting with multiple wait queues. Use when a thread must block until a specific state is true (e.g. buffer not full, buffer not empty).
ReentrantLock lock = new ReentrantLock();
Condition notFull = lock.newCondition();
Condition notEmpty = lock.newCondition();
// Producer
lock.lock();
try {
while (isFull()) { // ALWAYS while — never if (spurious wakeups)
notFull.await(); // releases lock, waits, reacquires on signal
}
add(item);
notEmpty.signal(); // wake one consumer
} finally {
lock.unlock(); // ALWAYS in finally
}
// Consumer
lock.lock();
try {
while (isEmpty()) {
notEmpty.await();
}
item = remove();
notFull.signal();
} finally {
lock.unlock();
}
Pattern 5: ReentrantReadWriteLock
Multiple readers can hold the read lock simultaneously. The write lock is exclusive — blocks all readers and other writers. Use when reads vastly outnumber writes (e.g. 100:1 or more); at roughly equal read/write rates, a plain ReentrantLock is usually faster due to lower overhead.
ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
// Multiple threads can read concurrently
public String get(String key) {
rwLock.readLock().lock();
try {
return data.get(key);
} finally {
rwLock.readLock().unlock();
}
}
// Write lock is exclusive — waits for all readers to finish
public void put(String key, String value) {
rwLock.writeLock().lock();
try {
data.put(key, value);
} finally {
rwLock.writeLock().unlock();
}
}
General Templates
CountDownLatch Template
One-shot barrier: N workers count down, one waiter unblocks when count hits zero. Not reusable — use CyclicBarrier if you need reset.
CountDownLatch latch = new CountDownLatch(n);
// Each worker thread
doWork();
latch.countDown(); // decrements; never blocks
// Waiting thread
latch.await(); // blocks until count == 0
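For contrast, a reusable-barrier sketch (class and method names here are illustrative): the same CyclicBarrier trips once per round and resets itself automatically, which a CountDownLatch cannot do. The optional barrier action runs once per trip, after all parties arrive.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    static int run(int parties, int rounds) throws InterruptedException {
        AtomicInteger trips = new AtomicInteger();
        // barrier action fires once per trip, before any party is released
        CyclicBarrier barrier = new CyclicBarrier(parties, trips::incrementAndGet);
        Thread[] threads = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            threads[i] = new Thread(() -> {
                try {
                    for (int r = 0; r < rounds; r++) {
                        barrier.await(); // same barrier, reused every round
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();
        return trips.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("trips: " + run(3, 2)); // trips: 2 — one per completed round
    }
}
```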
AtomicInteger / CAS Quick Reference
AtomicInteger count = new AtomicInteger(0);
count.get(); // read
count.incrementAndGet(); // ++count, returns new value
count.getAndIncrement(); // count++, returns old value
count.addAndGet(delta); // count += delta, returns new value
count.compareAndSet(expected, update); // CAS: only updates if current == expected
Use AtomicInteger for a single shared counter. Use ReentrantLock when you need to protect a compound operation (check-then-act on multiple fields).
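That boundary can be pushed slightly: a check-then-act on a single field can stay lock-free by retrying compareAndSet in a loop. A sketch under that assumption (class name hypothetical) — a capped counter that never exceeds max even under contention:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCap {
    private final AtomicInteger count = new AtomicInteger(0);
    private final int max;

    public CasCap(int max) { this.max = max; }

    // Lock-free check-then-act on ONE field: read, check, then CAS.
    // If the CAS loses a race, retry with the fresh value.
    public boolean tryIncrement() {
        while (true) {
            int current = count.get();
            if (current >= max) return false;                            // check
            if (count.compareAndSet(current, current + 1)) return true;  // act
            // CAS failed — another thread moved count; loop and re-check
        }
    }

    public int get() { return count.get(); }

    public static void main(String[] args) {
        CasCap cap = new CasCap(3);
        for (int i = 0; i < 5; i++) {
            System.out.println(cap.tryIncrement()); // true, true, true, false, false
        }
    }
}
```

Once the check involves two or more fields (e.g. a count and a timestamp), there is no single CAS that covers both — that is where ReentrantLock takes over.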
Worker Thread Shutdown Patterns
Three ways to stop a worker blocked in queue.take():
// Option 1 — Interrupt the thread (emergency stop)
workerThread.interrupt();
// take() throws InterruptedException, worker exits
// Option 2 — Poison pill (preferred for clean shutdown)
static final Runnable POISON_PILL = () -> {};
queue.put(POISON_PILL); // submit one per worker thread
// Worker checks for poison pill:
while (true) {
Runnable task = queue.take();
if (task == POISON_PILL) {
return; // exit cleanly
}
task.run();
}
// Option 3 — Poll with flag (finish current batch before stopping)
volatile boolean shutdown = false;
while (!shutdown) {
Runnable task = queue.poll(1, TimeUnit.SECONDS);
if (task != null) {
task.run();
}
}
Use poison pill when you control what enters the queue and want workers to drain remaining tasks first. Use interrupt for immediate stop. Use poll + flag when workers must finish their current batch before stopping.
Practical BlockingQueue Usage
BlockingQueue<Task> queue = new LinkedBlockingQueue<>(1000); // ALWAYS specify capacity
queue.put(task); // blocks if full (backpressure)
queue.offer(task, 100, TimeUnit.MILLISECONDS); // timeout — use on request paths
Task t = queue.take(); // blocks if empty (efficient wait)
Task t = queue.poll(1, TimeUnit.SECONDS); // timeout — use when shutdown flag needed
Unbounded queue (new LinkedBlockingQueue<>() without capacity) = OutOfMemoryError under sustained overload. Always size it based on: burst_duration_seconds × peak_produce_rate.
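As a rough illustration of that sizing formula, with assumed numbers (measure your own peak rate and burst length):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueSizing {
    public static void main(String[] args) {
        // Hypothetical measurements for some service
        int peakProduceRatePerSec = 2_000; // tasks/s at peak load
        int burstDurationSec = 3;          // longest sustained burst observed
        int capacity = burstDurationSec * peakProduceRatePerSec; // 6000
        BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>(capacity);
        System.out.println("capacity=" + queue.remainingCapacity()); // capacity=6000
    }
}
```

The queue absorbs the burst; anything beyond it blocks the producer (backpressure) instead of growing the heap.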
Problems in this guide
Problem 1: Print in Order ⭐ MUST DO
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Easy | 5–10 min | Semaphore signaling |
Three threads call first(), second(), and third() in arbitrary order. Guarantee they print in the sequence first → second → third.
Example:
Input: [1, 3, 2] (thread order)
Output: "firstsecondthird"
Solution:
import java.util.concurrent.Semaphore;
class Foo {
// s12 blocks second() until first() signals
// s23 blocks third() until second() signals
private Semaphore s12 = new Semaphore(0);
private Semaphore s23 = new Semaphore(0);
public void first(Runnable printFirst) throws InterruptedException {
printFirst.run();
s12.release(); // unblock second
}
public void second(Runnable printSecond) throws InterruptedException {
s12.acquire(); // wait for first
printSecond.run();
s23.release(); // unblock third
}
public void third(Runnable printThird) throws InterruptedException {
s23.acquire(); // wait for second
printThird.run();
}
public static void main(String[] args) throws InterruptedException {
Foo foo = new Foo();
// Start threads in reverse order — proves ordering works regardless of start order
Thread t3 = new Thread(() -> {
try { foo.third(() -> System.out.print("third")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t2 = new Thread(() -> {
try { foo.second(() -> System.out.print("second")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t1 = new Thread(() -> {
try { foo.first(() -> System.out.print("first")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
t3.start(); t2.start(); t1.start();
t1.join(); t2.join(); t3.join();
// Output: firstsecondthird
}
}
Complexity:
| Time | O(1) per call — each thread blocks at most once |
| Space | O(1) — two semaphores |
Problem 2: Print FooBar Alternately ⭐ MUST DO
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Medium | 10–15 min | Semaphore ping-pong |
Two threads share a FooBar instance. One calls foo(), the other calls bar(). Print "foobar" n times — they must strictly alternate, foo before bar each round.
Example:
Input: n = 3
Output: "foobarfoobarfoobar"
Key Insight:
- Two semaphores act as a toggle: foo's semaphore starts at 1 (goes first), bar's starts at 0 (blocked). Each thread acquires its own semaphore and releases the other's — they hand off control back and forth.
- The initial values determine who goes first. Flipping them would print bar first.
Solution:
import java.util.concurrent.Semaphore;
class FooBar {
private int n;
private Semaphore fooSem = new Semaphore(1); // foo goes first
private Semaphore barSem = new Semaphore(0); // bar blocked initially
public FooBar(int n) {
this.n = n;
}
public void foo(Runnable printFoo) throws InterruptedException {
for (int i = 0; i < n; i++) {
fooSem.acquire();
printFoo.run();
barSem.release(); // hand off to bar
}
}
public void bar(Runnable printBar) throws InterruptedException {
for (int i = 0; i < n; i++) {
barSem.acquire();
printBar.run();
fooSem.release(); // hand off back to foo
}
}
public static void main(String[] args) throws InterruptedException {
FooBar fb = new FooBar(3);
Thread t1 = new Thread(() -> {
try { fb.foo(() -> System.out.print("foo")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t2 = new Thread(() -> {
try { fb.bar(() -> System.out.print("bar")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
t1.start(); t2.start();
t1.join(); t2.join();
// Output: foobarfoobarfoobar
}
}
Complexity:
| Time | O(n) — each thread loops n times |
| Space | O(1) — two semaphores |
Problem 3: Print Zero Even Odd
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Medium | 15–20 min | Semaphore multi-thread |
Three threads share a ZeroEvenOdd instance: zero() prints "0", even() prints even numbers, odd() prints odd numbers. Print the sequence 0102030405... for n = 5.
Example:
Input: n = 5
Output: "0102030405"
Key Insight:
- The zero thread is the dispatcher. After printing 0, it knows which number comes next (odd or even) and signals exactly that thread. The number thread signals back to zero when done. Three semaphores: zeroSem (starts at 1, zero goes first), oddSem and evenSem (start at 0).
Solution:
import java.util.concurrent.Semaphore;
import java.util.function.IntConsumer;
class ZeroEvenOdd {
private int n;
private Semaphore zeroSem = new Semaphore(1); // zero goes first
private Semaphore oddSem = new Semaphore(0);
private Semaphore evenSem = new Semaphore(0);
public ZeroEvenOdd(int n) {
this.n = n;
}
public void zero(IntConsumer printNumber) throws InterruptedException {
for (int i = 1; i <= n; i++) {
zeroSem.acquire();
printNumber.accept(0);
// Signal the right thread based on the upcoming number
if (i % 2 == 1) {
oddSem.release();
} else {
evenSem.release();
}
}
}
public void odd(IntConsumer printNumber) throws InterruptedException {
for (int i = 1; i <= n; i += 2) {
oddSem.acquire();
printNumber.accept(i);
zeroSem.release(); // hand back to zero
}
}
public void even(IntConsumer printNumber) throws InterruptedException {
for (int i = 2; i <= n; i += 2) {
evenSem.acquire();
printNumber.accept(i);
zeroSem.release(); // hand back to zero
}
}
public static void main(String[] args) throws InterruptedException {
ZeroEvenOdd zeo = new ZeroEvenOdd(5);
Thread t1 = new Thread(() -> {
try { zeo.zero(System.out::print); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t2 = new Thread(() -> {
try { zeo.odd(System.out::print); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t3 = new Thread(() -> {
try { zeo.even(System.out::print); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
t1.start(); t2.start(); t3.start();
t1.join(); t2.join(); t3.join();
// Output: 0102030405
}
}
Complexity:
| Time | O(n) — n zeros, n numbers printed total |
| Space | O(1) — three semaphores |
Problem 4: Building H2O ⭐ MUST DO
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Medium | 15–20 min | Counting semaphore + CyclicBarrier |
Threads call hydrogen() or oxygen(). Release threads in groups of exactly 2 hydrogen + 1 oxygen. All three in a group must call their release function before the next group can form.
Example:
Input: "OOHHHH"
Output: "HHOHHO" or any valid ordering with 2H before/after each O
Key Insight:
- Two layers of control: the counting semaphores enforce the 2:1 ratio by capping how many threads of each type can enter simultaneously. The CyclicBarrier(3) ensures all three release together as a unit. Without the barrier, two hydrogens could run releaseHydrogen before any oxygen arrives. The semaphore releases happen after the barrier — they reopen slots for the next molecule only after the current group is complete.
Solution:
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;
class H2O {
private Semaphore hSem = new Semaphore(2); // at most 2 hydrogens waiting
private Semaphore oSem = new Semaphore(1); // at most 1 oxygen waiting
private CyclicBarrier barrier = new CyclicBarrier(3); // 2H + 1O must all arrive
public void hydrogen(Runnable releaseHydrogen) throws InterruptedException, BrokenBarrierException {
hSem.acquire();
releaseHydrogen.run();
barrier.await(); // wait for full 2H + 1O group
hSem.release(); // reopen hydrogen slot for next molecule
}
public void oxygen(Runnable releaseOxygen) throws InterruptedException, BrokenBarrierException {
oSem.acquire();
releaseOxygen.run();
barrier.await(); // wait for full group
oSem.release(); // reopen oxygen slot for next molecule
}
public static void main(String[] args) throws InterruptedException {
H2O h2o = new H2O();
Runnable hTask = () -> {
try { h2o.hydrogen(() -> System.out.print("H")); }
catch (Exception e) { Thread.currentThread().interrupt(); }
};
Runnable oTask = () -> {
try { h2o.oxygen(() -> System.out.print("O")); }
catch (Exception e) { Thread.currentThread().interrupt(); }
};
// Input: OOHHHH — 2 molecules worth
Thread[] threads = {
new Thread(oTask), new Thread(oTask),
new Thread(hTask), new Thread(hTask),
new Thread(hTask), new Thread(hTask)
};
for (Thread t : threads) { t.start(); }
for (Thread t : threads) { t.join(); }
// Output: any valid grouping e.g. HHOHHO
}
}
Complexity:
| Time | O(n) total calls — each thread blocks at most once per molecule |
| Space | O(1) — two semaphores and one barrier |
Problem 5: Fizz Buzz Multithreaded
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Medium | 20–25 min | Semaphore dispatcher |
Four threads: fizz(), buzz(), fizzbuzz(), number(). Print the FizzBuzz sequence from 1 to n using exactly these threads — each number must be printed by the correct thread.
Example:
Input: n = 15
Output: "1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz"
Key Insight:
- The number thread acts as the coordinator. For each i from 1 to n, it determines which thread should print and signals exactly that thread via a dedicated semaphore. It then waits for that thread to finish (printDone) before moving to i+1. When done, it sets a flag and releases all semaphores once to unblock waiting threads so they can exit.
Solution:
import java.util.concurrent.Semaphore;
import java.util.function.IntConsumer;
class FizzBuzz {
private int n;
private volatile boolean done = false;
private Semaphore fizzSem = new Semaphore(0);
private Semaphore buzzSem = new Semaphore(0);
private Semaphore fizzBuzzSem = new Semaphore(0);
private Semaphore printDone = new Semaphore(0);
public FizzBuzz(int n) {
this.n = n;
}
public void fizz(Runnable printFizz) throws InterruptedException {
while (!done) {
fizzSem.acquire();
if (!done) {
printFizz.run();
printDone.release();
}
}
}
public void buzz(Runnable printBuzz) throws InterruptedException {
while (!done) {
buzzSem.acquire();
if (!done) {
printBuzz.run();
printDone.release();
}
}
}
public void fizzbuzz(Runnable printFizzBuzz) throws InterruptedException {
while (!done) {
fizzBuzzSem.acquire();
if (!done) {
printFizzBuzz.run();
printDone.release();
}
}
}
public void number(IntConsumer printNumber) throws InterruptedException {
for (int i = 1; i <= n; i++) {
if (i % 15 == 0) {
fizzBuzzSem.release();
printDone.acquire();
} else if (i % 3 == 0) {
fizzSem.release();
printDone.acquire();
} else if (i % 5 == 0) {
buzzSem.release();
printDone.acquire();
} else {
printNumber.accept(i); // number thread prints directly, no dispatch
}
}
// Signal all waiting threads to exit
done = true;
fizzSem.release();
buzzSem.release();
fizzBuzzSem.release();
}
public static void main(String[] args) throws InterruptedException {
FizzBuzz fb = new FizzBuzz(15);
Thread t1 = new Thread(() -> {
try { fb.fizz(() -> System.out.print("Fizz ")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t2 = new Thread(() -> {
try { fb.buzz(() -> System.out.print("Buzz ")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t3 = new Thread(() -> {
try { fb.fizzbuzz(() -> System.out.print("FizzBuzz ")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread t4 = new Thread(() -> {
try { fb.number(i -> System.out.print(i + " ")); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
t1.start(); t2.start(); t3.start(); t4.start();
t1.join(); t2.join(); t3.join(); t4.join();
// Output: 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
}
}
Complexity:
| Time | O(n) — number thread loops n times, each dispatch is O(1) |
| Space | O(1) — four semaphores |
Problem 6: Design Bounded Blocking Queue ⭐ MUST DO
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Medium | 15–20 min | ReentrantLock + Condition |
Implement a thread-safe bounded blocking queue. enqueue(item) blocks if the queue is full. dequeue() blocks if the queue is empty. size() returns current count.
Example:
BoundedBlockingQueue q = new BoundedBlockingQueue(2);
enqueue(1) → [1]
enqueue(0) → [1, 0]
dequeue() → returns 1, queue = [0]
enqueue(2) → [0, 2] (was blocked until dequeue freed a slot)
Key Insight:
- Two separate conditions (notFull, notEmpty) let producers and consumers wait on independent queues — when a consumer removes an item, it signals only notFull (producers), not all threads. The while loop on the condition check is mandatory: a signaled thread must re-check because another thread may have taken the slot before it reacquires the lock (spurious wakeup is also possible).
Solution:
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;
class BoundedBlockingQueue {
private final Queue<Integer> queue = new LinkedList<>();
private final int capacity;
private final ReentrantLock lock = new ReentrantLock();
private final Condition notFull = lock.newCondition();
private final Condition notEmpty = lock.newCondition();
public BoundedBlockingQueue(int capacity) {
this.capacity = capacity;
}
public void enqueue(int element) throws InterruptedException {
lock.lock();
try {
while (queue.size() == capacity) { // while, not if
notFull.await();
}
queue.offer(element);
notEmpty.signal(); // wake one blocked dequeue
} finally {
lock.unlock();
}
}
public int dequeue() throws InterruptedException {
lock.lock();
try {
while (queue.isEmpty()) { // while, not if
notEmpty.await();
}
int val = queue.poll();
notFull.signal(); // wake one blocked enqueue
return val;
} finally {
lock.unlock();
}
}
public int size() {
lock.lock();
try {
return queue.size();
} finally {
lock.unlock();
}
}
public static void main(String[] args) throws InterruptedException {
BoundedBlockingQueue q = new BoundedBlockingQueue(2);
Thread producer = new Thread(() -> {
try {
for (int i = 1; i <= 4; i++) {
q.enqueue(i);
System.out.println("enqueued " + i + ", size=" + q.size());
}
} catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
Thread consumer = new Thread(() -> {
try {
Thread.sleep(50); // let producer fill the queue first
for (int i = 0; i < 4; i++) {
System.out.println("dequeued " + q.dequeue());
}
} catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
producer.start(); consumer.start();
producer.join(); consumer.join();
// Producer blocks on enqueue(3) until consumer dequeues — capacity enforced
}
}
Complexity:
| Time | O(1) per enqueue/dequeue — queue operations are O(1) |
| Space | O(capacity) — the queue itself |
Problem 7: The Dining Philosophers ⭐ MUST DO
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Hard | 20–25 min | Deadlock avoidance (resource ordering) |
Five philosophers sit at a round table with 5 forks. Each philosopher needs both their left fork (index i) and right fork (index (i+1) % 5) to eat. Implement wantsToEat() without deadlock or starvation.
Example:
Philosopher 0: left fork = 0, right fork = 1
Philosopher 4: left fork = 4, right fork = 0
Deadlock: all 5 pick up their left fork — nobody can get the right fork
Key Insight:
- Each philosopher's left fork index = their own index. Right fork = (index + 1) % 5. Fork 1 is both P0's right fork and P1's left fork — that's the contention.
- The deadlock scenario: if all 5 grab their left fork simultaneously, P0 holds fork0 waiting for fork1, P1 holds fork1 waiting for fork2 ... P4 holds fork4 waiting for fork0. Circular wait — nobody can proceed.
- The fix: always acquire the lower-numbered fork first. For P0–P3 nothing changes (left is already lower). P4's left=4, right=0 — since 0 < 4, P4 grabs fork0 first (its right fork), then fork4.
- Why this breaks the deadlock: P0 and P4 now both want fork0 first. One wins, the other blocks and holds nothing. The chain becomes P3 → P2 → P1 → P0 → (P4 waiting at fork0 with nothing held). P3 always has fork4 available — it eats, releases, and the chain unblocks one by one. No circular dependency.
Solution:
import java.util.concurrent.locks.ReentrantLock;
class DiningPhilosophers {
private final ReentrantLock[] forks = new ReentrantLock[5];
public DiningPhilosophers() {
for (int i = 0; i < 5; i++) {
forks[i] = new ReentrantLock();
}
}
public void wantsToEat(int philosopher,
Runnable pickLeftFork, Runnable pickRightFork,
Runnable eat,
Runnable putLeftFork, Runnable putRightFork)
throws InterruptedException {
int left = philosopher;
int right = (philosopher + 1) % 5;
// Always acquire lower-numbered fork first — breaks the circular dependency
int first = Math.min(left, right);
int second = Math.max(left, right);
forks[first].lock();
forks[second].lock();
try {
// Execute pick actions matching which physical fork is first/second
if (first == left) {
pickLeftFork.run();
pickRightFork.run();
} else {
// Philosopher 4: first=0=right, second=4=left
pickRightFork.run();
pickLeftFork.run();
}
eat.run();
putLeftFork.run();
putRightFork.run();
} finally {
forks[second].unlock();
forks[first].unlock(); // unlock in reverse order of acquisition
}
}
public static void main(String[] args) throws InterruptedException {
DiningPhilosophers dp = new DiningPhilosophers();
Thread[] threads = new Thread[5];
for (int i = 0; i < 5; i++) {
final int id = i;
threads[i] = new Thread(() -> {
try {
for (int round = 0; round < 3; round++) {
final int r = round; // lambda below needs an effectively final copy
dp.wantsToEat(id,
() -> {}, () -> {},
() -> System.out.println("Philosopher " + id + " eating (round " + r + ")"),
() -> {}, () -> {}
);
}
} catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}, String.valueOf(id));
}
for (Thread t : threads) { t.start(); }
for (Thread t : threads) { t.join(); }
// All 5 philosophers eat 3 rounds each — no deadlock
}
}
Complexity:
| Time | O(1) per call — constant lock/unlock operations |
| Space | O(1) — 5 locks regardless of philosopher count |
Problem 8: Connection Pool ⭐ MUST DO
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Medium | 15–20 min | Resource pooling (BlockingQueue) |
Design a thread-safe connection pool. acquire() returns a connection, blocking if none are available. release(conn) returns it to the pool. Connections are expensive to create — the pool reuses a fixed set across all threads.
Example:
pool = ConnectionPool(size=3, timeoutMs=500)
Thread A: acquire() → conn1 (2 remaining)
Thread B: acquire() → conn2 (1 remaining)
Thread C: acquire() → conn3 (0 remaining)
Thread D: acquire() → blocks ... Thread A: release(conn1) → Thread D gets conn1
Thread E: acquire() → blocks 500ms → throws (all connections still in use)
Key Insight:
- The BlockingQueue stores the actual connection objects — threads get the object, not just permission to proceed. take() blocks when the queue is empty (all connections in use) and unblocks the instant one is returned.
- Use poll(timeout), not take(), on request paths. take() blocks indefinitely — if all connections are stuck on slow queries, users wait forever. A timeout lets you fail fast and return a 503 to the caller.
Solution:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
class ConnectionPool {
// Stub — in production this would be a real DB connection
interface Connection {
void query(String sql);
}
private final BlockingQueue<Connection> pool;
private final long timeoutMs;
public ConnectionPool(int size, long timeoutMs) {
this.pool = new LinkedBlockingQueue<>(size);
this.timeoutMs = timeoutMs;
for (int i = 0; i < size; i++) {
pool.add(createConnection()); // create all upfront — no lazy-init races
}
}
private Connection createConnection() {
return sql -> System.out.println(Thread.currentThread().getName() + " → " + sql);
}
public Connection acquire() throws InterruptedException {
Connection conn = pool.poll(timeoutMs, TimeUnit.MILLISECONDS);
if (conn == null) {
throw new RuntimeException("No connection available within " + timeoutMs + "ms");
}
return conn;
}
public void release(Connection conn) {
pool.offer(conn); // non-blocking — pool never exceeds its own capacity
}
// Preferred entry point: handles release automatically, even on exception
public void execute(Consumer<Connection> work) throws InterruptedException {
Connection conn = acquire();
try {
work.accept(conn);
} finally {
release(conn); // ALWAYS in finally — a leaked connection is gone forever
}
}
public static void main(String[] args) throws InterruptedException {
ConnectionPool pool = new ConnectionPool(2, 500);
Thread[] threads = new Thread[5];
for (int i = 0; i < 5; i++) {
final int id = i;
threads[i] = new Thread(() -> {
try {
pool.execute(conn -> conn.query("SELECT * FROM users WHERE id=" + id));
} catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}, "Thread-" + i);
}
for (Thread t : threads) { t.start(); }
for (Thread t : threads) { t.join(); }
// 5 threads share 2 connections — later arrivals block until a connection is released
}
}
Complexity:
| Time | O(1) per acquire/release — BlockingQueue operations |
| Space | O(size) — pool holds exactly size connection objects |
Problem 9: Concurrent Rate Limiter ⭐ MUST DO
| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Medium | 10–15 min | Check-then-act under lock |
Design a thread-safe rate limiter. allowRequest(userId) returns true if the user is under their request limit, false if they've exceeded it. Multiple threads may call this simultaneously for different users.
Example:
limiter = RateLimiter(maxRequests=3)
Thread A: allowRequest("alice") → true (alice: 1)
Thread B: allowRequest("alice") → true (alice: 2) ← concurrent with A
Thread C: allowRequest("alice") → true (alice: 3)
Thread D: allowRequest("alice") → false (at limit)
Thread E: allowRequest("bob") → true (bob: 1) ← bob unaffected
Key Insight:
- The check (count < max) and the update (count + 1) must happen atomically under the same lock — this is the check-then-act pattern. Without a lock: Thread A reads count=2, Thread B reads count=2, both see it's under the limit of 3, both increment to 3. The user made 4 requests against a limit of 3.
- Start with one coarse-grained lock (all users share one lock). For high throughput, upgrade to per-user locks — Alice and Bob can proceed simultaneously since their counters are independent. Fine-grained adds complexity; only reach for it when the interviewer pushes on scale.
Solution (coarse-grained — start here):
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

class RateLimiter {
    private final Map<String, Integer> counts = new HashMap<>();
    private final int maxRequests;
    private final ReentrantLock lock = new ReentrantLock();

    public RateLimiter(int maxRequests) {
        this.maxRequests = maxRequests;
    }

    public boolean allowRequest(String userId) {
        lock.lock();
        try {
            int count = counts.getOrDefault(userId, 0);
            if (count >= maxRequests) {
                return false;
            }
            counts.put(userId, count + 1);
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RateLimiter limiter = new RateLimiter(3);
        String[] users = {"alice", "alice", "alice", "alice", "bob"};
        Thread[] threads = new Thread[users.length];
        for (int i = 0; i < users.length; i++) {
            final String user = users[i];
            threads[i] = new Thread(() ->
                System.out.println(user + ": " + limiter.allowRequest(user))
            );
        }
        for (Thread t : threads) { t.start(); }
        for (Thread t : threads) { t.join(); }
        // alice gets true x3, false x1; bob gets true x1
    }
}
```
Scale-up — fine-grained per-user locks:
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

class RateLimiterFineGrained {
    private final ConcurrentHashMap<String, ReentrantLock> userLocks = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
    private final int maxRequests;

    public RateLimiterFineGrained(int maxRequests) {
        this.maxRequests = maxRequests;
    }

    public boolean allowRequest(String userId) {
        ReentrantLock lock = userLocks.computeIfAbsent(userId, k -> new ReentrantLock());
        lock.lock();
        try {
            int count = counts.getOrDefault(userId, 0);
            if (count >= maxRequests) {
                return false;
            }
            counts.put(userId, count + 1);
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        RateLimiterFineGrained limiter = new RateLimiterFineGrained(3);
        String[] users = {"alice", "alice", "alice", "alice", "bob", "bob"};
        Thread[] threads = new Thread[users.length];
        for (int i = 0; i < users.length; i++) {
            final String user = users[i];
            threads[i] = new Thread(() ->
                System.out.println(user + ": " + limiter.allowRequest(user))
            );
        }
        for (Thread t : threads) { t.start(); }
        for (Thread t : threads) { t.join(); }
        // alice and bob requests proceed in parallel (different locks)
    }
}
```
Complexity:
| Metric | Analysis |
|---|---|
| Time | O(1) per call — hash map lookup + lock acquire |
| Space | O(U) — one entry per unique user |
Quick Reference Table
| # | Problem | Pattern | Key Technique | Time |
|---|---|---|---|---|
| 1 | Print in Order ⭐ | Semaphore signaling | Semaphore(0): B acquires, A releases | O(1) |
| 2 | Print FooBar Alternately ⭐ | Semaphore ping-pong | semA(1)/semB(0): each releases the other's | O(n) |
| 3 | Print Zero Even Odd | Semaphore multi-thread | zero dispatches to odd/even based on i%2 | O(n) |
| 4 | Building H2O ⭐ | Counting sem + CyclicBarrier | hSem(2)/oSem(1) + barrier(3); release after barrier | O(n) |
| 5 | Fizz Buzz Multithreaded | Semaphore dispatcher | number thread dispatches + waits for printDone | O(n) |
| 6 | Design Bounded Blocking Queue ⭐ | ReentrantLock + Condition | notFull/notEmpty conditions; while loop on check | O(1) per op |
| 7 | The Dining Philosophers ⭐ | Deadlock avoidance | Always acquire lower-numbered fork first | O(1) |
| 8 | Connection Pool ⭐ | Resource pooling (BlockingQueue) | poll(timeout) not take(); release() always in finally | O(1) per op |
| 9 | Concurrent Rate Limiter ⭐ | Check-then-act under lock | Lock wraps both check and update; per-user locks for scale | O(1) |
When to Use Each Pattern
Semaphore Signaling
- One thread must wait for a specific action in another thread to complete
- Ordering guarantees across 2 or more threads (chain releases for N-step ordering)
- One-time or repeated — unlike CountDownLatch, Semaphore can be reused
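A minimal sketch of the signal/wait idiom; the `SignalDemo` class and its `first`/`second` method names are illustrative, not from a specific problem:

```java
import java.util.concurrent.Semaphore;

class SignalDemo {
    private final Semaphore firstDone = new Semaphore(0); // no permits: second() must wait
    final StringBuilder out = new StringBuilder();

    void first() {
        out.append("first");
        firstDone.release(); // signal: first() has completed
    }

    void second() throws InterruptedException {
        firstDone.acquire(); // blocks until first() releases
        out.append("second");
    }

    public static void main(String[] args) throws InterruptedException {
        SignalDemo demo = new SignalDemo();
        Thread b = new Thread(() -> {
            try {
                demo.second();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        b.start();   // even if b is scheduled first, it parks on acquire()
        demo.first();
        b.join();
        System.out.println(demo.out); // always prints: firstsecond
    }
}
```

Unlike a flag plus a spin loop, the semaphore both blocks efficiently and establishes a happens-before edge from `release()` to `acquire()`.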
Semaphore Ping-Pong
- Exactly two threads must strictly alternate
- Each round: A runs, then B runs, then A again — never two A's or two B's in a row
- Extend to N threads by chaining: A releases B, B releases C, C releases A
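A sketch of the two-thread alternation template; the `PingPong` class is hypothetical and records output into a string so the alternation is easy to verify:

```java
import java.util.concurrent.Semaphore;

class PingPong {
    private final Semaphore fooTurn = new Semaphore(1); // foo goes first
    private final Semaphore barTurn = new Semaphore(0); // bar must wait
    final StringBuffer out = new StringBuffer();        // thread-safe appends

    String run(int rounds) throws InterruptedException {
        Thread foo = new Thread(() -> loop(rounds, fooTurn, barTurn, "foo"));
        Thread bar = new Thread(() -> loop(rounds, barTurn, fooTurn, "bar"));
        foo.start();
        bar.start();
        foo.join();
        bar.join();
        return out.toString();
    }

    private void loop(int rounds, Semaphore mine, Semaphore other, String word) {
        for (int i = 0; i < rounds; i++) {
            try {
                mine.acquire();  // wait for my turn
                out.append(word);
                other.release(); // hand the turn to the other thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new PingPong().run(2)); // foobarfoobar
    }
}
```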
Counting Semaphore + CyclicBarrier
- A fixed ratio of thread types must be grouped together before any can proceed
- The semaphore enforces the ratio; the barrier enforces the "proceed together" constraint
- Reusable: barrier resets after each group, enabling the next molecule/batch
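A sketch of the ratio-plus-barrier shape for a single molecule; `H2OSketch` and its single-character output are an illustrative simplification of Problem 4, not the full solution:

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Semaphore;

class H2OSketch {
    private final Semaphore hSem = new Semaphore(2);         // ratio: at most 2 H per molecule
    private final Semaphore oSem = new Semaphore(1);         // ratio: at most 1 O per molecule
    private final CyclicBarrier bond = new CyclicBarrier(3); // 2H + 1O proceed together
    final StringBuffer out = new StringBuffer();

    void hydrogen() throws Exception {
        hSem.acquire(); // take one of the two hydrogen slots
        bond.await();   // block until the full 2H+1O group has formed
        out.append('H');
        hSem.release(); // free the slot for the next molecule
    }

    void oxygen() throws Exception {
        oSem.acquire();
        bond.await();
        out.append('O');
        oSem.release();
    }

    public static void main(String[] args) throws Exception {
        H2OSketch h2o = new H2OSketch();
        Runnable h = () -> { try { h2o.hydrogen(); } catch (Exception e) { throw new RuntimeException(e); } };
        Runnable o = () -> { try { h2o.oxygen(); } catch (Exception e) { throw new RuntimeException(e); } };
        Thread[] ts = { new Thread(h), new Thread(h), new Thread(o) };
        for (Thread t : ts) t.start();
        for (Thread t : ts) t.join();
        // out now holds one molecule: two H and one O, in some order
        System.out.println(h2o.out);
    }
}
```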
ReentrantLock + Condition
- A thread must block until a specific data condition is true (not just another thread's action)
- Multiple distinct wait conditions (e.g. not-full AND not-empty — can't express with one `synchronized` block cleanly)
- Need `signal()` to wake a specific type of waiter, not all waiting threads
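These bullets come together in a stripped-down bounded buffer; `TinyBuffer` is an illustrative sketch, not the full Problem 6 solution:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class TinyBuffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  // producers wait here
    private final Condition notEmpty = lock.newCondition(); // consumers wait here

    TinyBuffer(int capacity) { this.capacity = capacity; }

    void put(int x) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) { // while, not if: recheck after waking
                notFull.await();
            }
            items.addLast(x);
            notEmpty.signal(); // wake one consumer, not every waiter
        } finally {
            lock.unlock();
        }
    }

    int take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            int x = items.removeFirst();
            notFull.signal(); // a slot just opened: wake one producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```

The two Condition objects let producers wake only consumers and vice versa, which a single wait set cannot express.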
CountDownLatch
- One waiter must block until N workers have each finished a one-time event
- Not reusable — count goes to zero and stays there
- Classic use: wait for N services to start, then open traffic
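A minimal sketch of that classic use; `StartupGate` and `waitForAll` are hypothetical names:

```java
import java.util.concurrent.CountDownLatch;

class StartupGate {
    // Returns only after all `services` worker threads have counted down.
    static long waitForAll(int services) throws InterruptedException {
        CountDownLatch allUp = new CountDownLatch(services);
        for (int i = 0; i < services; i++) {
            new Thread(() -> {
                // ... service initialization would happen here ...
                allUp.countDown(); // one-time event: this service is ready
            }).start();
        }
        allUp.await();           // blocks until the count reaches zero
        return allUp.getCount(); // 0; the latch stays at zero and cannot be reset
    }

    public static void main(String[] args) throws InterruptedException {
        waitForAll(3);
        System.out.println("all services up, opening traffic");
    }
}
```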
ReentrantReadWriteLock
- Reads vastly outnumber writes — readers don't need to block each other
- The invariant: multiple readers can hold the lock simultaneously; writers are exclusive
- If read/write ratio is close to 1:1, a plain `ReentrantLock` is usually faster (less overhead)
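A sketch of the read-mostly shape; `ReadMostlyCache` is an illustrative wrapper around a plain `HashMap`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadMostlyCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    String get(String key) {
        rw.readLock().lock(); // shared: any number of readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    void put(String key, String value) {
        rw.writeLock().lock(); // exclusive: blocks all readers and all writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```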
Resource Pooling (BlockingQueue of Objects)
- A fixed set of expensive objects must be shared across many threads (connections, GPU handles)
- Threads take an object, use it, return it — the queue handles blocking and waking
- Always: `poll(timeout)` on request paths (fail fast), `take()` only for internal pipelines where indefinite blocking is acceptable
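The pooling skeleton can be sketched as follows; `PoolSketch` is a generic, illustrative reduction of the Problem 8 structure:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

class PoolSketch<T> {
    private final BlockingQueue<T> available;

    PoolSketch(List<T> resources) {
        // Pre-fill the queue with the fixed set of pooled objects.
        available = new ArrayBlockingQueue<>(resources.size(), false, resources);
    }

    // Request path: fail fast rather than hang if the pool stays exhausted.
    T acquire(long timeoutMs) throws InterruptedException {
        T r = available.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (r == null) {
            throw new IllegalStateException("timed out waiting for a pooled resource");
        }
        return r;
    }

    void release(T r) {
        available.offer(r); // capacity equals pool size, so this never fails
    }
}
```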
Common Mistakes to Avoid
General Concurrency Mistakes
- Using `if` instead of `while` for condition checks with `await()` — another thread may take the resource between the signal and this thread reacquiring the lock. Always: `while (!conditionHolds) { condition.await(); }`
- Forgetting `unlock()` in `finally` — any exception between `lock()` and `unlock()` leaves the lock held forever. Always put `unlock()` in a `finally` block.
- Calling `notify()` when multiple threads wait on different conditions — `notify()` wakes an arbitrary thread; if a producer wakes another producer instead of a consumer, progress stalls. Use separate `Condition` objects with `signal()`, or use `signalAll()` if conditions overlap.
Semaphore Mistakes
- Starting both ping-pong semaphores at 1 — both threads enter simultaneously and race; the alternation is lost. One must start at 0.
- Releasing the semaphore before doing the work — fires the signal before the action is complete; the next thread runs before the current one finishes.
- No termination signal for worker threads — fizz/buzz/fizzbuzz threads loop on `acquire()` forever. The coordinator must set a flag and release each semaphore once after finishing, so waiting threads can exit.
ReentrantLock Mistakes
- Calling `lock()` inside the `try` block — if `lock()` somehow fails (it won't, but the pattern matters for `lockInterruptibly()`), `unlock()` gets called on a lock that was never acquired. Call `lock()` before the `try`.
- Acquiring the same `ReentrantLock` twice from the same thread without releasing — ReentrantLock is reentrant, so it won't deadlock with itself, but the hold count increments and must be released an equal number of times. Unintentional re-entry hides bugs.
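The correct placement, sketched with the hypothetical `LockPlacement` class and `lockInterruptibly()`, where the ordering genuinely matters:

```java
import java.util.concurrent.locks.ReentrantLock;

class LockPlacement {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    int increment() throws InterruptedException {
        lock.lockInterruptibly(); // may throw BEFORE the lock is held...
        try {                     // ...so enter the try only after acquisition succeeds
            return ++counter;
        } finally {
            lock.unlock();        // runs only when we actually hold the lock
        }
    }
}
```

If `lockInterruptibly()` sat inside the `try`, an interrupt during acquisition would route through `finally` and call `unlock()` on a lock this thread never held, throwing `IllegalMonitorStateException`.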
Deadlock Mistakes
- Acquiring multiple locks in inconsistent order across threads — the classic deadlock. A holds lock-1 waiting for lock-2; B holds lock-2 waiting for lock-1. Fix: enforce a global acquisition order (by index, by name, by any consistent rule).
- No timeout on lock acquisition — a deadlocked thread hangs silently. In production, use `lock.tryLock(timeout, unit)` for any lock that could potentially deadlock; log a warning and back off.
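A sketch of global acquisition ordering by index; `OrderedLocks.withBoth` is a hypothetical helper:

```java
import java.util.concurrent.locks.ReentrantLock;

class OrderedLocks {
    // Deadlock-free pairwise locking: always take the lower-indexed lock first,
    // no matter which order the caller names them in.
    static void withBoth(ReentrantLock[] locks, int a, int b, Runnable action) {
        int lo = Math.min(a, b);
        int hi = Math.max(a, b);
        locks[lo].lock();
        try {
            locks[hi].lock();
            try {
                action.run();
            } finally {
                locks[hi].unlock();
            }
        } finally {
            locks[lo].unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock[] locks = { new ReentrantLock(), new ReentrantLock() };
        int[] done = new int[1];
        // Without the ordering, (0,1) vs (1,0) is the classic two-lock deadlock.
        Thread t1 = new Thread(() -> withBoth(locks, 0, 1, () -> done[0]++));
        Thread t2 = new Thread(() -> withBoth(locks, 1, 0, () -> done[0]++));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("both finished: " + done[0]); // both finished: 2
    }
}
```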
Thread Safety Bug Patterns
- Check-then-act split across a lock boundary — reading a condition, releasing the lock, then acting on it. Another thread invalidates the condition between the release and the act. The check and the update must happen under the same lock, held continuously between them.
- Read-modify-write without atomicity — reading a value, computing a new one, and writing back as separate steps. Two threads read the same value, both compute from it, both write — one update is lost. For a single counter: `AtomicInteger`. For multi-field updates: a lock across all three steps.
- Different locks for related state — `synchronized(this)` in one method and `synchronized(otherObject)` in another. These don't coordinate. All operations that share an invariant must use the exact same lock object.
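For the single-counter case, a sketch showing `AtomicInteger` absorbing the read-modify-write into one atomic step (`CounterDemo` is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

class CounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.incrementAndGet(); // one atomic read-modify-write step
                }
            });
        }
        for (Thread t : threads) { t.start(); }
        for (Thread t : threads) { t.join(); }
        // A plain `int` with counter++ here would typically lose updates.
        System.out.println(counter.get()); // 40000
    }
}
```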