Java Series: Basics Q&A Part 4

Oliver Hu
5 min read · Jul 22, 2018

Q16. How is synchronized implemented, and what is the upgrade/downgrade of a lock?

This is a fairly complicated topic, and I'm going to expand on it in a separate post. Here, I'll list a couple of things I've learned.

synchronized is implemented with monitors, and the underlying lock can be in one of three states: biased lock, lightweight lock, and heavyweight lock. The upgrade/downgrade of a lock is how the JVM optimizes the synchronized mechanism.

The JVM defaults to a biased lock when there is no contention: it uses CAS to set the Mark Word in the object header to the owning thread's ID, indicating that the object is biased toward that thread. When another thread tries to acquire the lock, the JVM revokes the bias and switches to a lightweight lock, which spins with CAS to acquire it. If the retries fail, the lock is further inflated into a heavyweight lock.
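If you want to observe this, the OpenJDK JOL tool can print an object's header, where the Mark Word encodes the current lock state. A minimal sketch, assuming org.openjdk.jol:jol-core is on the classpath (exactly what you see depends on the JVM version and flags, since biased locking has been removed from recent JDKs):

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Header of an object no thread has locked yet.
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // While the monitor is held, the Mark Word reflects the lock record.
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}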

Besides synchronized and ReentrantLock, which were covered in the previous post, there are two other common locks: ReadWriteLock and StampedLock. StampedLock also supports read/write modes and is optimized for reads via optimistic read stamps.
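A minimal StampedLock sketch, closely following the pattern from its Javadoc: reads are first attempted optimistically, and only fall back to a full read lock if a write intervened.

import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();          // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();  // optimistic read, no blocking
        double curX = x, curY = y;
        if (!sl.validate(stamp)) {            // a write happened; fall back to a real read lock
            stamp = sl.readLock();
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}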

Q17. What happens when a thread invokes start() twice?

A Java thread cannot be started twice. A second call to start() throws an IllegalThreadStateException, because the thread is no longer in the NEW state.
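A quick demonstration:

public class DoubleStartDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> System.out.println("running"));
        t.start();        // first start: OK
        try {
            t.start();    // second start: the thread has already left the NEW state
        } catch (IllegalThreadStateException e) {
            System.out.println("Cannot start a thread twice: " + e);
        }
    }
}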

What is a thread?

A thread is the smallest unit of execution the operating system can schedule. Each thread has its own stack, registers, thread-local storage, etc., but it shares file descriptors and the virtual address space with the other threads in the same process.

Much of the logic in java.lang.Thread is implemented through JNI calls into native code. Be cautious with ThreadLocal, which stores per-thread private data: entries must be released explicitly (via ThreadLocal.remove()), otherwise the values linger in the ThreadLocalMap until the thread exits. In long-lived thread pools this is a common source of memory leaks and OOM errors.
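A minimal sketch of the safe pattern, using a hypothetical per-request buffer stored in a ThreadLocal:

public class ThreadLocalDemo {
    // Hypothetical per-thread buffer. In a thread pool the worker threads are long-lived,
    // so the value stays reachable through the ThreadLocalMap unless it is removed.
    private static final ThreadLocal<byte[]> BUFFER =
            ThreadLocal.withInitial(() -> new byte[1024 * 1024]);

    static void handleRequest() {
        try {
            byte[] buf = BUFFER.get();
            // ... use buf ...
        } finally {
            BUFFER.remove();   // release the entry so the value can be garbage collected
        }
    }
}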

Q18. Under what scenarios would a Java program deadlock, how to locate & recover?

Deadlock is a state in which execution can no longer make progress. Generally speaking, there are two kinds: multiple threads wait on each other so that none of them can proceed, or a single thread deadlocks on itself, for example by trying to re-acquire a non-reentrant lock it already holds.

A common scenario of deadlock in a multithreading environment is:

Thread A owns lock A, waiting for lock B.
Thread B owns lock B, waiting for lock A.

In most cases we debug deadlocks with jstack. Below is an example of a program that deadlocks:

public class DeadLockSample extends Thread {
    private String first;
    private String second;

    public DeadLockSample(String name, String first, String second) {
        super(name);
        this.first = first;
        this.second = second;
    }

    public void run() {
        synchronized (first) {
            System.out.println(this.getName() + " obtained: " + first);
            try {
                Thread.sleep(1000L);
                synchronized (second) {
                    System.out.println(this.getName() + " obtained: " + second);
                }
            } catch (InterruptedException e) {
                // Do nothing
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        String lockA = "lockA";
        String lockB = "lockB";
        // The two threads acquire the same locks in opposite order, so they deadlock.
        DeadLockSample t1 = new DeadLockSample("Thread1", lockA, lockB);
        DeadLockSample t2 = new DeadLockSample("Thread2", lockB, lockA);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
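Running jstack <pid> against the process above reports the deadlock. It can also be detected programmatically with ThreadMXBean; a minimal sketch (the 3-second sleep is just to give DeadLockSample time to deadlock):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockDetector {
    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(3000L);   // let the deadlock form first
        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        long[] deadlocked = mxBean.findDeadlockedThreads();   // null if no deadlock
        if (deadlocked != null) {
            for (ThreadInfo info : mxBean.getThreadInfo(deadlocked)) {
                System.out.println(info.getThreadName()
                        + " is waiting on " + info.getLockName()
                        + " held by " + info.getLockOwnerName());
            }
        }
    }
}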

How to avoid deadlock?

  1. Avoid using multiple locks where possible.
  2. Design the order in which locks are acquired; the Banker's Algorithm is one way to reason about a safe ordering.
  3. Wait for locks with timeouts instead of blocking forever. Object.wait(timeout) and CountDownLatch.await(timeout, unit) have timed variants, and Lock.tryLock(timeout, unit) can give up on acquisition (see the sketch below).
  4. Use static analysis tools.
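For point 3, a minimal sketch using ReentrantLock.tryLock with a timeout; the one-second timeout and the transfer() method are illustrative:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSample {
    private final Lock lockA = new ReentrantLock();
    private final Lock lockB = new ReentrantLock();

    // Acquire both locks, or give up and back off instead of deadlocking.
    public boolean transfer() throws InterruptedException {
        if (lockA.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        // ... critical section that needs both locks ...
                        return true;
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        return false;   // caller can retry, ideally with some random backoff
    }
}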

Q19. Utils in Java Concurrency Packages

A couple of categories:

  1. Thread synchronization: CountDownLatch, CyclicBarrier, Semaphore.
  2. Thread-safe containers: ConcurrentHashMap, ConcurrentSkipListMap, CopyOnWriteArrayList.
  3. Thread-safe queues: ArrayBlockingQueue, SynchronousQueue, PriorityBlockingQueue.
  4. Executor services.

Detailed introduction to the thread synchronization utilities:

a. CountDownLatch: allows one or more threads to wait until a set of operations in other threads completes.

b. CyclicBarrier: a synchronization aid that lets a group of threads wait for each other at a common barrier point; unlike a latch, it can be reused.

c. Semaphore: Java's counting semaphore; it limits how many threads can access a resource at the same time (a short sketch follows).
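A minimal Semaphore sketch; the permit count of 3 and the simulated work are hypothetical:

import java.util.concurrent.Semaphore;

public class SemaphoreSample {
    public static void main(String[] args) {
        // Only 3 threads may hold a permit at the same time.
        Semaphore semaphore = new Semaphore(3);
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                try {
                    semaphore.acquire();          // blocks if no permit is available
                    System.out.println(Thread.currentThread().getName() + " acquired a permit");
                    Thread.sleep(500L);           // simulate work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    semaphore.release();          // return the permit
                }
            }).start();
        }
    }
}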

An example of coordinating 10 people boarding a bus in 2 batches with CountDownLatch:

import java.util.concurrent.CountDownLatch;

public class LatchSample {
    public static void main(String[] args) throws InterruptedException {
        // Count of 6: five first-batch workers plus one final countDown from main.
        CountDownLatch latch = new CountDownLatch(6);
        for (int i = 0; i < 5; i++) {
            Thread t = new Thread(new FirstBatchWorker(latch));
            t.start();
        }
        for (int i = 0; i < 5; i++) {
            Thread t = new Thread(new SecondBatchWorker(latch));
            t.start();
        }
        // Wait until only the final count remains, i.e. the first batch is done.
        while (latch.getCount() != 1) {
            Thread.sleep(100L);
        }
        System.out.println("Wait for first batch finish");
        latch.countDown();   // release the second batch
    }
}

class FirstBatchWorker implements Runnable {
    private CountDownLatch latch;

    public FirstBatchWorker(CountDownLatch latch) {
        this.latch = latch;
    }

    @Override
    public void run() {
        System.out.println("First batch executed!");
        latch.countDown();
    }
}

class SecondBatchWorker implements Runnable {
    private CountDownLatch latch;

    public SecondBatchWorker(CountDownLatch latch) {
        this.latch = latch;
    }

    @Override
    public void run() {
        try {
            latch.await();   // blocks until the latch reaches zero
            System.out.println("Second batch executed!");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

With CyclicBarrier:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierSample {
    public static void main(String[] args) throws InterruptedException {
        // The barrier action runs each time all 5 parties arrive; the barrier then resets.
        CyclicBarrier barrier = new CyclicBarrier(5, new Runnable() {
            @Override
            public void run() {
                System.out.println("Action...GO again!");
            }
        });
        for (int i = 0; i < 5; i++) {
            Thread t = new Thread(new CyclicWorker(barrier));
            t.start();
        }
    }

    static class CyclicWorker implements Runnable {
        private CyclicBarrier barrier;

        public CyclicWorker(CyclicBarrier barrier) {
            this.barrier = barrier;
        }

        @Override
        public void run() {
            try {
                // Each worker hits the barrier three times, showing that it is reusable.
                for (int i = 0; i < 3; i++) {
                    System.out.println("Executed!");
                    barrier.await();
                }
            } catch (BrokenBarrierException e) {
                e.printStackTrace();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Java also provides Phaser, which works much like CountDownLatch but allows parties to register with (and deregister from) it dynamically.
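A minimal Phaser sketch showing dynamic registration; the five workers and the print statements are illustrative:

import java.util.concurrent.Phaser;

public class PhaserSample {
    public static void main(String[] args) {
        // Register the main thread as one party; workers register themselves dynamically.
        Phaser phaser = new Phaser(1);
        for (int i = 0; i < 5; i++) {
            phaser.register();                     // dynamic registration, unlike CountDownLatch
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " ready");
                phaser.arriveAndAwaitAdvance();    // wait until every registered party arrives
                System.out.println(Thread.currentThread().getName() + " running");
            }).start();
        }
        phaser.arriveAndDeregister();              // main thread arrives and drops out
    }
}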

A couple of notes on the concurrent containers:

  1. If you care more about access and insertion performance and don't need ordering, use ConcurrentHashMap; if you need the keys kept in sorted order, use ConcurrentSkipListMap.
  2. A skip list is much simpler to make thread safe than a red-black tree, which is why the JDK provides ConcurrentSkipListMap rather than a concurrent TreeMap.
  3. CopyOnWrite containers handle every modification by copying the underlying array, applying the change to the copy, and then swapping it in for the original (see the sketch after this list).
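A simplified sketch of the copy-on-write idea (not the actual CopyOnWriteArrayList source):

import java.util.Arrays;

public class SimpleCopyOnWriteList<E> {
    private volatile Object[] elements = new Object[0];

    // Writes copy the whole backing array and swap it in under a lock.
    public synchronized void add(E e) {
        Object[] newElements = Arrays.copyOf(elements, elements.length + 1);
        newElements[newElements.length - 1] = e;
        elements = newElements;           // readers see either the old or the new array
    }

    // Reads are lock-free: they just dereference the current array.
    @SuppressWarnings("unchecked")
    public E get(int index) {
        return (E) elements[index];
    }

    public int size() {
        return elements.length;
    }
}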

Q20. ConcurrentLinkedQueue & LinkedBlockingQueue?

There are two broad types of concurrent containers: Concurrent containers and Blocking containers.

Concurrent types focus on lock-free (or low-lock) algorithms, trading some consistency for higher throughput. They have a few characteristics:

a. Concurrent types have a lower modification cost compared with CopyOnWrite containers.

b. Concurrent types offer weaker traversal consistency: their iterators are weakly consistent rather than fail-fast, so they tolerate concurrent modification instead of throwing ConcurrentModificationException.

c. size() is not guaranteed to be accurate.

d. Read performance is less predictable.

Blocking types are based on locks and provide stronger consistency.

Most of them implement the BlockingQueue interface, whose operations can block until an element becomes available (take) or until space frees up in the queue (put).

BlockingQueues come in two flavors: bounded and unbounded.

  • ArrayBlockingQueue is bounded: it stores its elements in a final array of fixed capacity (see the sketch below).
  • LinkedBlockingQueue is technically bounded too, but its default capacity is Integer.MAX_VALUE, so it behaves as effectively unbounded unless you pass a capacity to the constructor.
  • SynchronousQueue has a capacity of 0; every put must wait for a matching take.
  • PriorityBlockingQueue is unbounded.
  • DelayQueue and LinkedTransferQueue are unbounded as well.
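A minimal sketch of the blocking behavior using a bounded ArrayBlockingQueue; the capacity of 2 and the sleeps are just to make the back-pressure visible:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueSample {
    public static void main(String[] args) {
        // Bounded queue: put() blocks when the queue is full, take() blocks when it is empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("item-" + i);       // blocks once two items are pending
                    System.out.println("produced item-" + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed " + queue.take());   // blocks while empty
                    Thread.sleep(200L);           // slow consumer to show back-pressure
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}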
