Java Concurrency in Practice is the bible of Java multi-threading. It’s dense, thorough, and frankly, a bit intimidating.
For years, I relied heavily on Java’s java.util.concurrent package (ConcurrentHashMap, my beloved), ignoring the low-level mechanics underneath. Why reinvent the wheel when the standard library is so good?
But to truly master scalability, you have to understand what happens under the hood. Recently, I decided to peel back the layers and revisit the building blocks of Java concurrency: synchronized and ReentrantLock.
Here is what I relearned about locking, implementing blocking queues, and why “newer” isn’t always “faster.”
The Old Guard: Synchronized
synchronized has been with us since day one. It is an intrinsic lock—a keyword that tells the JVM: “Only one thread can execute this block at a time.”
It’s simple, effective, and surprisingly resilient.
Let’s look at a basic thread-safe queue implementation. It uses a private internal lock object to protect the queue.
```java
import java.util.LinkedList;

public class ThreadSafeQueue<T> {
    private final LinkedList<T> queue = new LinkedList<>();
    private final Object lock = new Object(); // Dedicated lock object

    public void push(T item) {
        synchronized (lock) {
            if (item == null) {
                throw new NullPointerException("Item cannot be null");
            }
            queue.addLast(item);
        }
    }

    // ... other methods
}
```
You might also see it on the method signature directly. Note that a synchronized method locks on the instance itself (this), not on a private lock object:

```java
public synchronized void push(T item) {
    if (item == null) throw new NullPointerException();
    queue.addLast(item);
}
```
When is synchronized enough?
For simple critical sections, synchronized is great because:
- It’s automatic: The JVM handles the lock release, even if an exception is thrown. No memory leaks from forgotten unlocks.
- It’s readable: One keyword makes the intent clear.
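To make the "automatic release" point concrete, here is a minimal sketch (the class name AutoReleaseDemo is invented for illustration): an exception thrown inside a synchronized block releases the intrinsic lock on the way out, so a later caller can still acquire it.

```java
import java.util.LinkedList;

// Illustrative sketch: the intrinsic lock is released automatically when an
// exception propagates out of the synchronized block, so a subsequent caller
// is never deadlocked on a "forgotten" lock.
public class AutoReleaseDemo {
    private final LinkedList<String> queue = new LinkedList<>();
    private final Object lock = new Object();

    public void push(String item) {
        synchronized (lock) {
            if (item == null) {
                // Throwing here still releases the lock on exit
                throw new NullPointerException("Item cannot be null");
            }
            queue.addLast(item);
        }
    }

    public int size() {
        synchronized (lock) {
            return queue.size();
        }
    }

    public static void main(String[] args) {
        AutoReleaseDemo q = new AutoReleaseDemo();
        try {
            q.push(null); // throws, but the lock is released on the way out
        } catch (NullPointerException expected) {
            // ignored
        }
        q.push("hello"); // would hang forever if the lock had leaked
        System.out.println(q.size()); // prints 1
    }
}
```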
The Challenger: ReentrantLock
Enter ReentrantLock (from java.util.concurrent.locks). It offers everything synchronized does, but with manual control.
Here is that same queue using ReentrantLock:
```java
import java.util.LinkedList;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ExplicitLockQueue<T> {
    private final LinkedList<T> queue = new LinkedList<>();
    private final Lock lock = new ReentrantLock();

    public void push(T item) {
        lock.lock(); // Manually acquire
        try {
            if (item == null) throw new NullPointerException();
            queue.addLast(item);
        } finally {
            lock.unlock(); // CRITICAL: Must manually release!
        }
    }
}
```
Why bother with the extra boilerplate?
ReentrantLock provides capabilities that synchronized simply cannot:
- Fairness: You can construct it as new ReentrantLock(true) to ensure threads acquire the lock in the order they requested it (first-in-first-out). Note: this comes with a performance penalty.
- Non-blocking attempts: tryLock() allows a thread to say, "If the lock is busy, I'll go do something else rather than wait forever."
- Interruptibility: lockInterruptibly() allows a waiting thread to be interrupted and wake up, rather than hanging indefinitely.
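Here is a quick sketch of the non-blocking style. The timed tryLock(long, TimeUnit) call is the real java.util.concurrent.locks.Lock API; the class name and scenario are invented for illustration.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch: a timed acquisition attempt that gives up instead of
// blocking forever. The fairness flag mirrors the bullet above.
public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock(true); // fair mode

    public static String attemptWork() throws InterruptedException {
        // Timed attempt: wait at most 100 ms for the lock.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                return "did work";
            } finally {
                lock.unlock();
            }
        }
        return "lock busy, did something else";
    }

    public static void main(String[] args) throws InterruptedException {
        // Uncontended, so the lock is acquired immediately.
        System.out.println(attemptWork());
    }
}
```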
The Real Test: Implementing a Blocking Queue
The differences become stark when we try to implement a Blocking Queue—a queue that waits when it’s empty (for consumers) or full (for producers).
Approach 1: Synchronized + wait/notify
The classic approach uses Object.wait() and Object.notifyAll().
```java
public class SyncBlockingQueue<T> {
    private final Object lock = new Object();
    private final T[] buffer;
    private int head = 0, tail = 0, count = 0;

    @SuppressWarnings("unchecked")
    public SyncBlockingQueue(int capacity) {
        buffer = (T[]) new Object[capacity];
    }

    public void put(T item) throws InterruptedException {
        synchronized (lock) {
            while (count == buffer.length) {
                lock.wait(); // Queue full? Go to sleep.
            }
            buffer[tail] = item;
            tail = (tail + 1) % buffer.length;
            count++;
            // Wake up EVERYONE, even other producers who can't do anything yet
            lock.notifyAll();
        }
    }

    public T take() throws InterruptedException {
        synchronized (lock) {
            while (count == 0) {
                lock.wait(); // Queue empty? Go to sleep.
            }
            T item = buffer[head];
            head = (head + 1) % buffer.length;
            count--;
            // Wake up EVERYONE, even other consumers
            lock.notifyAll();
            return item;
        }
    }
}
```
The Problem: notifyAll() is a sledgehammer. When we put an item, we only really need to wake up a consumer (who is waiting for data). But notifyAll() wakes up everyone waiting on that lock, including other producers. This leads to “Thundering Herd” inefficiency and unnecessary context switching.
Approach 2: ReentrantLock + Conditions
ReentrantLock fixes this with Condition objects. We can have multiple “waiting rooms” associated with a single lock.
```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockBlockingQueue<T> {
    private final T[] buffer;
    private int head = 0, tail = 0, count = 0;
    private final ReentrantLock lock = new ReentrantLock();

    // Specific waiting rooms
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    @SuppressWarnings("unchecked")
    public LockBlockingQueue(int capacity) {
        buffer = (T[]) new Object[capacity];
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == buffer.length) {
                notFull.await(); // Wait specifically for space
            }
            buffer[tail] = item;
            tail = (tail + 1) % buffer.length;
            count++;
            notEmpty.signal(); // Wake up ONLY a consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await(); // Wait specifically for data
            }
            T item = buffer[head];
            head = (head + 1) % buffer.length;
            count--;
            notFull.signal(); // Wake up ONLY a producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```
This is significantly more efficient for complex coordination because signals are targeted.
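To see the targeted signaling in action, here is a compact, self-contained variant of the condition-based queue (the names TinyQueue and QueueDemo are invented), driven by a producer thread while the main thread consumes. With a capacity of 2, the producer is forced to block on notFull until the consumer frees a slot.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Usage sketch: a producer thread fills a tiny condition-based queue while
// the main thread drains it. put() blocks when full, take() blocks when empty.
public class QueueDemo {
    static class TinyQueue {
        private final Object[] buf;
        private int head, tail, count;
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition notFull = lock.newCondition();
        private final Condition notEmpty = lock.newCondition();

        TinyQueue(int cap) { buf = new Object[cap]; }

        void put(Object x) throws InterruptedException {
            lock.lock();
            try {
                while (count == buf.length) notFull.await();
                buf[tail] = x; tail = (tail + 1) % buf.length; count++;
                notEmpty.signal(); // wake one consumer
            } finally { lock.unlock(); }
        }

        Object take() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0) notEmpty.await();
                Object x = buf[head]; head = (head + 1) % buf.length; count--;
                notFull.signal(); // wake one producer
                return x;
            } finally { lock.unlock(); }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TinyQueue q = new TinyQueue(2); // small capacity forces the producer to block
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) q.put(i);
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) sum += (Integer) q.take();
        producer.join();
        System.out.println(sum); // 0+1+2+3+4 = 10
    }
}
```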
Performance: The “30% Faster” Myth
You might read older articles claiming ReentrantLock is 30-50% faster than synchronized.
Be careful with this heuristic.
In modern JDKs (Java 17, 21+), synchronized is heavily optimized. For low-to-moderate contention, the JVM often makes synchronized nearly free through techniques like lock elision, adaptive spinning, and lock coarsening. (Biased locking, another classic optimization you'll see cited in older articles, was disabled by default in JDK 15 and has since been removed.)
However, under high contention with many threads fighting for the same resource, ReentrantLock (or alternatives such as StampedLock, or lock-free designs built on VarHandle) often scales better because it offers more predictable scheduling policies.
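If you want to test this on your own machine, here is a rough, hedged sketch (class name invented) that times many threads bumping a shared counter under each lock. Numbers from a loop like this are only indicative; use JMH for anything you intend to quote.

```java
import java.util.concurrent.locks.ReentrantLock;

// Rough contention sketch, NOT a rigorous benchmark: no warmup, no JIT
// isolation. It only shows the shape of such a comparison.
public class ContentionSketch {
    static long syncCounter = 0;
    static long lockCounter = 0;
    private static final Object monitor = new Object();
    private static final ReentrantLock lock = new ReentrantLock();

    static long timeSynchronized(int threads, int iters) throws InterruptedException {
        syncCounter = 0;
        long start = System.nanoTime();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < iters; j++) {
                    synchronized (monitor) { syncCounter++; }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return System.nanoTime() - start;
    }

    static long timeReentrantLock(int threads, int iters) throws InterruptedException {
        lockCounter = 0;
        long start = System.nanoTime();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < iters; j++) {
                    lock.lock();
                    try { lockCounter++; } finally { lock.unlock(); }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("synchronized:  %d ms%n", timeSynchronized(8, 100_000) / 1_000_000);
        System.out.printf("ReentrantLock: %d ms%n", timeReentrantLock(8, 100_000) / 1_000_000);
    }
}
```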
The Rule of Thumb:
- Start with concurrent collections (ConcurrentHashMap, BlockingQueue); they are written by experts.
- Use synchronized for simple mutual exclusion; it's cleaner and safer.
- Use ReentrantLock when you need Conditions (like our queue example), fairness, or lock polling (tryLock).
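The first rule deserves emphasis: the JDK already ships a blocking queue that does everything we hand-rolled above. A minimal sketch using the real java.util.concurrent.ArrayBlockingQueue (which even exposes a fairness flag, mirroring new ReentrantLock(true)):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Reach for the standard library first: ArrayBlockingQueue is a bounded,
// lock-based blocking queue maintained by the JDK authors.
public class StdlibFirst {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new ArrayBlockingQueue<>(10, true); // capacity 10, fair mode
        q.put("task-1");       // blocks if the queue is full
        String job = q.take(); // blocks if the queue is empty
        System.out.println(job);
    }
}
```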
Key Takeaways
- Scalability is about granularity: Keeping the locked section small is more important than which lock you use.
- Conditions are powerful: Separation of concerns (Wait for Empty vs Wait for Full) is the killer feature of explicit locks.
- Don’t guess, measure: Performance varies wildly based on JDK version and hardware.
If you enjoyed this deep dive, check out my previous post: Sneak peek at the asynchronous Java, where I explore how modern Java is evolving beyond traditional threading models.
Happy coding!