JBay Solutions Development Blog on Java, Android, Play2 and others

Java Threading and Concurrency Introduction

Multithreading is one of the most interesting subjects when developing applications, be it in Java or any other language: the ability to have one's software performing multiple tasks at the same time, maybe handling multiple requests from users, processing things in parallel... wicked stuff really!

On the other hand... when things go wrong with multithreading, they go wrong really fast, they go wrong all over the place, it is hard to debug (especially if threads "share" stuff) and you get the usual comment: "I just don't get why this is doing this". Multithreading is not without challenges!

In this Tutorial we'll be going through a bit of theory regarding threads and the Java memory model, but I'll try to make it as interesting as I can with examples and demonstrations of concepts. We'll see how stuff can go wrong, how to avoid those situations, and we'll play with threads just to see them working! A more advanced tutorial will follow (at some point) in which I'll discuss more advanced parts of Threads and Concurrency.

All the source code used in this Tutorial can be found HERE. It is a NetBeans project and its structure should be pretty self-explanatory.

Right, let's go!

So, when one writes a simple program in Java (not using application servers or other frameworks that might make use of threads) the execution of the code relies solely on the main thread. This is not to say that there are no other threads working on that JVM, there are, but our code will all be run on the main execution thread.

A Thread is an object; it exists on the heap memory, but a thread has its own call stack. Each thread has its own call stack, and you get to see it every time there is an exception and a stack trace is dumped. For example, if the exception occurs on the main thread you get a dump that ends with the main() call (check the end of the stack), because on the main thread the main() method was the first to be called. On the other hand, when an exception happens on a different thread the call stack starts with run(), and you'll understand why later on. The one thing to keep in mind from this is that the execution of each thread is independent.

So, (Part1 of the SourceCode) if we had a simple program that said simply:

 public static void main(String[] args) {
     for (int count = 0; count < 5; count++) {
         System.out.println("Going " + count);
     }
 }

We would be right to expect the output to be :

 Going 0
 Going 1
 Going 2
 Going 3
 Going 4

Simple. Predictable. Zen even... One Thread... one line of execution... no surprises. But when we have more than one thread running things are a bit different. But before we get into that, how do we create a new thread?

A new Thread

The only way to create a new thread is by making use of the Thread class. A thread should have a workload (stuff to do) and to create that bit of code there are two alternatives:

  • extending the Thread class

  • implementing the Runnable interface

So, telling a Thread what to do by extending the Thread class is done by overriding the run() method. The Thread class is not abstract, and the run() method does exist, but it simply does nothing. In the subpackage part2 of the example source code we have the class **SimpleThread**:

 public class SimpleThread extends Thread {
     public void run() {
         for (int count = 0; count < 5; count++) {
             System.out.println("T - Going " + count);
         }
     }
 }

Running this thread is done like so:

 SimpleThread thread1 = new SimpleThread();
 thread1.start();

Defining the workload by implementing the Runnable interface is done by providing the implementation of its sole method, run(); a new Thread object must then be created and this Runnable object passed to its constructor. In fact, Thread itself implements Runnable, so when I said earlier that we need to override the run() method when extending the Thread class, we are overriding Thread's implementation of the run() method from the Runnable interface.

In the subpackage part2 of the example source code we have the class SimpleRunnable:

 public class SimpleRunnable implements Runnable {
     public void run() {
         for (int count = 0; count < 5; count++) {
             System.out.println("R - Going " + count);
         }
     }
 }

Using this Runnable class is done like so:

 Thread thread2 = new Thread(new SimpleRunnable());
 thread2.start();

Running both threads in the same program will generate an unpredictable output because... both threads are running at the same time and therefore we cannot predict which will output stuff to the console first and in what order.

 SimpleThread thread1 = new SimpleThread();
 Thread thread2 = new Thread(new SimpleRunnable());
 thread1.start();
 thread2.start();

The output from one of my runs over here was, for example, this (**MainPart2.java** file):

 R - Going 0
 T - Going 0
 T - Going 1
 T - Going 2
 R - Going 1
 R - Going 2
 R - Going 3
 R - Going 4
 T - Going 3
 T - Going 4

That is pretty simple till here, right? Bear in mind that these two threads that we created do not share Objects between them, things get more complicated when they do and we'll get there in a minute or two.

So, to start the execution of a thread object we call the start() method, but there is a bit more one should know about this as well. When the start() method is called, the JVM allocates resources for the thread, schedules it and calls the thread object's run() method; the thread object is then said to be in the **Runnable** state. Now one could ask: **Runnable** state? What other states are there? Schedules what? First things first... the Scheduler!

The Thread Scheduler

Since on a processor only one thing can be running at any given moment, this is the bit of the JVM that decides which thread is executing on a processor at a given time. So the Thread Scheduler looks at all eligible threads (those in the Runnable state, which sit in the Runnable Pool) and decides which one is to execute at a specific moment, without guaranteeing the order in which eligible threads are run.

As said, the order of execution cannot be guaranteed (run the Part2 example several times and notice the different execution results); the thread scheduler decides on its own which thread will run... but we can influence the behaviour of the thread scheduler a bit with a few methods, which will also allow us to understand the states a Thread can be in.

These are the methods:

  • sleep() from the Thread class

  • yield() from the Thread class

  • join() from the Thread class

  • setPriority() from the Thread class

  • wait() from the Object class

  • notify() from the Object class

  • notifyAll() from the Object class


The sleep() method causes the currently executing thread to go to sleep (stop) for a specific amount of time (the two sleep overloads take as arguments milliseconds, and milliseconds plus nanoseconds). While a thread is sleeping it does not release the locks it holds on objects. Locks are also called monitors and we'll get into that as well, but let's not get sidetracked here and carry on.

This is a static method! Calling sleep() on a reference to another thread does not make that other thread go to sleep; it makes the currently executing thread sleep.
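To make that point concrete, here is a minimal sketch (the class PauseDemo and its method name are mine, not from the tutorial's source) showing that Thread.sleep() always pauses the calling thread:

```java
public class PauseDemo {

    // Sleeps the calling thread for ~200 ms and reports how long it actually stopped.
    static long pauseAndMeasure() throws InterruptedException {
        long before = System.currentTimeMillis();
        Thread.sleep(200); // static call: pauses the *current* thread, nothing else
        return System.currentTimeMillis() - before;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Slept for roughly " + pauseAndMeasure() + " ms");
    }
}
```

Note that even writing `someOtherThread.sleep(200)` would compile, but it would still put the current thread to sleep, which is exactly why calling static methods through instance references is considered bad style.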


The **yield()** method causes the currently executing thread to stop for a bit and give other threads in the Runnable Pool a chance to run.

This is also a static method! Calling yield() on a reference to another thread does not make the other thread yield; it makes the currently executing thread yield.


The **join()** method allows one thread to wait for the completion of another. So, imagine two threads A and B: if inside the execution of thread A we call b.join(), thread A stops and waits until B finishes.
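A minimal sketch of this behaviour (the class JoinDemo is my own illustration, not part of the tutorial's source): the main thread plays the role of thread A and joins on thread b, so its own work is guaranteed to happen after b's.

```java
public class JoinDemo {

    static final StringBuilder log = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        Thread b = new Thread(new Runnable() {
            public void run() {
                log.append("B done; ");
            }
        });
        b.start();
        b.join();                  // main ("thread A") blocks here until b's run() finishes
        log.append("A continues"); // guaranteed to be appended after b's text
        System.out.println(log);   // always prints: B done; A continues
    }
}
```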


The **setPriority()** method allows setting the priority of a thread for the Thread Scheduler. It takes as input an integer value between 1 (minimum) and 10 (maximum), with the default being 5.
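For illustration (PriorityDemo is a hypothetical class of mine), the Thread class exposes those bounds as constants, which beats hard-coding the numbers:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(new Runnable() {
            public void run() { /* workload irrelevant for this demo */ }
        });
        // Constants: MIN_PRIORITY = 1, NORM_PRIORITY = 5, MAX_PRIORITY = 10
        t.setPriority(Thread.MAX_PRIORITY);
        System.out.println("Priority set to " + t.getPriority());
    }
}
```

Bear in mind priority is only a hint: how much it matters depends on the OS scheduler underneath, so never rely on it for correctness.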


Some could say the wait() method is somewhat like the sleep() method, but that is not the case at all. Yes, both stop the execution of a thread, but that's as far as the similarities go!

The wait() method stops a thread for a specific amount of time, or until notify() or notifyAll() is called from another thread. The thread where wait() was called releases the monitor of the object it is waiting on and is not eligible for scheduling. When the specified amount of time has gone by, or one of the notify methods is called, the thread becomes eligible again and is placed in the Runnable Pool for processor time allocation. On scheduling, it attempts to reacquire the locks of the objects it requires and, when it finally gets them, resumes execution where it stopped.


The notify() method notifies one waiting thread that execution can proceed (of all the waiting threads, the one woken is chosen arbitrarily). Check the description given before on the wait() method.


The notifyAll() method notifies all waiting threads that execution can proceed. Check the description given before on the wait() method.
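Putting wait() and notifyAll() together, here is a small sketch of the usual pattern (class and method names are mine, not from the tutorial's source): one thread waits on a shared lock until another publishes a value and notifies. The while loop is important, since wait() can wake up spuriously.

```java
public class WaitNotifyDemo {

    private final Object lock = new Object();
    private boolean ready = false;
    private int result = 0;

    // Blocks until publish() has run; must loop to guard against spurious wakeups.
    public int awaitResult() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {
                lock.wait();        // releases lock's monitor while waiting
            }
            return result;
        }
    }

    public void publish(int value) {
        synchronized (lock) {       // must own the monitor to call notifyAll()
            result = value;
            ready = true;
            lock.notifyAll();       // wake every thread waiting on this lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread waiter = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("Waiter got " + demo.awaitResult());
                } catch (InterruptedException ignored) { }
            }
        });
        waiter.start();
        demo.publish(42);
        waiter.join();
    }
}
```

Calling wait() or notify() without holding the object's monitor throws IllegalMonitorStateException, which is why both methods above synchronize on the same lock.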

Thread States

Now, bearing in mind what was just explained in the Thread Scheduler bit, the following diagram shows the 5 states that a Thread can be in (similarities between this diagram and the one in the Katherine Sierra book (in the references) are not a coincidence):

So, between the time one creates a new thread object and calls the start() method, a thread is said to be in the **New** state.

As explained before, calling start() on a thread does several things, including moving it into the **Runnable** state, which is the same as saying that the thread is eligible for the Thread Scheduler to schedule for execution and, at that point, move it into the **Running** state.

After the run() method finishes executing, the thread is said to be dead, i.e. in the **Dead** state. As with anything that is dead, it can't be brought back to life... so once a Thread is dead, it is dead, and it won't ever run again (no, not even if you call start() on it again!). The thread object is still a perfectly good object, but it won't ever be coming back to life.
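We can check the "dead threads stay dead" rule directly. This small sketch (RestartDemo is my own example, not from the tutorial's source) shows the JVM refusing a second start() with an IllegalThreadStateException:

```java
public class RestartDemo {

    // Returns true only if a finished thread could be started again (it can't).
    public static boolean canRestart() throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() { /* finish immediately */ }
        });
        t.start();
        t.join();          // wait until t is in the Dead state
        try {
            t.start();     // second start() on a dead thread
            return true;
        } catch (IllegalThreadStateException e) {
            return false;  // the JVM refuses to revive it
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Can restart a dead thread? " + canRestart());
    }
}
```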

The **Waiting** state is when a thread is not New, not Dead and not in the Runnable state, but can be in the future. So, when does this actually happen? Well, when a thread is sleeping (see the earlier explanation of the sleep() method), when a thread is waiting (see the earlier explanation of the wait() method), when a thread cannot access the resources it needs and becomes blocked, etc.

Sharing Stuff

Now, because we want to build slightly more complex multi-threaded programs, a natural thing to do is to share object references between threads in order to share data and resources inside your software. This should be a pretty easy thing to do, and it is, but with multithreading a few concepts and notions must be kept in mind when designing your software.

Memory that can be shared between threads is called shared memory or heap memory. All instance fields, static fields and array elements are stored in heap memory.

Because different threads can now be accessing the same objects and performing operations on them at the same time, and because one cannot know which thread will be executing when (the Thread Scheduler handles this selection by itself), a poorly designed program could (and probably would) leave the shared objects inconsistent and end up handling bad data.

A good example would be two threads calling the same method "at the same time" (strictly speaking, only one thing executes on a processor at a given instant) where that method modifies a property of the object. Because of this interleaved execution, the property of the object might not end up representing exactly what has happened to it.

Thread Interference and Memory Consistency Errors

Imagine the following scenario: we have two threads, Green Thread and Red Thread. Both take an object of type SharedObject in their constructor. In our example both are given a reference to the same SharedObject, therefore both can access the same object:

The SharedObject class has only one method, called incrementValue(); its implementation is like so:

 public class SharedObject {
     int value = 0;

     public void incrementValue() {
         int temp = value;
         temp = temp + 1;
         value = temp;
     }
 }

The implementation of both threads just calls incrementValue() once each. So, running our software with both threads at the same time, we would expect the value of the field "value" on the SharedObject (on which both threads call the incrementValue() method) to end up as 2. This is not always the case! The reason is that both threads are executing the same method at the same time and:

  • might end up overwriting each other's actions on the SharedObject

  • might not be aware of changes being performed by another thread

  • can be out of sync with the rest of the executing threads

All of this depends on how the Thread Scheduler schedules the execution of the several statements inside incrementValue() for each thread. Because we cannot guarantee that one thread executes everything before the other thread accesses the same object and sees the already-performed changes, we might end up with inconsistent results in our SharedObject.

The following diagram explains graphically what I'm trying to put into words:

So, because the Green thread started executing the method before the Red thread finished (steps 1 and 2), the Green thread has access to a value field which has not yet been updated with the changes the Red thread is going to perform, and in our example the Green thread ends up overwriting the value that the Red thread updated on the SharedObject (steps 7 and 8).

Keep in mind that the last diagram is just an example and that nothing guarantees that, just because the Red thread started execution first, it will be the first to finish. Also, because of fast CPUs and thread-management optimisations by the JVM (possibly allowing one thread to finish all its execution before the other thread starts), we could end up with value = 2. But the point is: if it does, it is by chance, and not because the code will always produce the same result. It is unpredictable!

Part 3 of the source code demonstrates this; please do give it a run and see for yourselves. Bear in mind that the code in Part3 is a bit more deliberate than the snippet above, in order to generate inconsistency errors more frequently. Still, you might want to run it a few times to see the several results it generates.
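If you just want a quick feel for the race without opening Part3, here is a condensed sketch of the same idea (the class name and loop counts are mine, not the tutorial's): two threads hammer the unsynchronized read-modify-write, and updates routinely get lost.

```java
public class LostUpdateDemo {

    static int value = 0;   // shared between threads and deliberately unsynchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    int temp = value;    // read...
                    temp = temp + 1;     // ...modify...
                    value = temp;        // ...write: may clobber the other thread's update
                }
            }
        };
        Thread green = new Thread(work);
        Thread red = new Thread(work);
        green.start();
        red.start();
        green.join();
        red.join();
        // With 2 x 100000 increments we'd expect 200000, but lost updates
        // usually leave it lower -- and the exact number changes every run.
        System.out.println("Expected 200000, got " + value);
    }
}
```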

Another problem that can happen is that more than one thread sees the same object in different states. This happens mainly due to performance features such as the caching of objects by JVMs.

Take for example this next class (example from Synchronization and Volatile, in the References):

 public class StoppableTask extends Thread {
   private boolean pleaseStop;

   public void run() {
     while (!pleaseStop) {
       // do some stuff...
     }
   }

   public void tellMeToStop() {
     pleaseStop = true;
   }
 }

Because the VM is free to cache data (some of it; we'll discuss this later), it is very possible that the VM could cache the pleaseStop variable on entering the while loop in order to achieve greater performance, but in doing so create an infinite loop, because no change made to the variable through the **tellMeToStop()** method would be visible.

How can this be fixed? By locking access to methods being executed by one thread, making other threads wait their turn, and also by making some variables uncacheable. This is done using synchronisation and by making variables volatile.


Synchronization

Synchronization works by locking an object so that it is accessible to no thread except the one that has locked it. We've seen before the problem with allowing several threads to access the same object at the same time and perform changes to it. The solution often resides in sound object design (sometimes the problem can be overcome by simply changing strategy) and also in synchronization.

So, what is synchronization all about? The first thing to keep in mind are the locks (or monitors). Each and every object instance has one and only one lock; this is a core concept that must be grasped before we go any further: one object, one lock. What synchronization does is restrict access to the synchronized bits of an object to the thread that holds that object's lock.

The way to get the lock of an object is by entering a synchronized bit of code while the lock is available. When, for example, a thread calls a synchronized method on an object whose lock is available, that thread acquires the lock on that object, and it releases the lock once the synchronized method (or block of code) finishes. If another thread tries to enter a synchronized part of that object's code, it is held back until the lock becomes available, at which point it is allowed to continue its execution.

Because the Thread Scheduler iterates between threads to allocate CPU time, it can happen that an object's lock belongs to a thread that is not running. What is meant by this is that a thread that goes into the **Waiting** state will not release its locks just because it is not in the **Running** state. This also calls attention to the following: overusing synchronization can lead to problems.

Synchronization can be applied only to methods and blocks of statements, and not everything in an object needs to be synchronized: we can have both synchronized and unsynchronized methods and blocks in the same object, and threads that do not hold the lock on an object can still execute its unsynchronized code.

For example, in the previous diagram, if Thread1 went into the Waiting state between steps 2 and 3, it would still retain the locks on Object1 and Object2 during that time, and would not allow other threads to call synchronized parts of those objects. It also goes to show that a thread can hold more than one lock.

Now onwards with an example. In Part3 of the source code (the one we used before to test things going wrong) we have an object called SharedObject. In that object we had a bit of code like this:

 int value = 0;

 public void incrementValue() {
     int tempValue = value;
     tempValue = tempValue + 1;
     value = tempValue;
 }

So, to make this bit of code "Thread Safe" (get used to this term as well), we not only need to define the **incrementValue()** method as **synchronized** but also to make the variable value private (it was left non-private on purpose). The code would then look like this:

 private int value = 0;

 public synchronized void incrementValue() {
     int tempValue = value;
     tempValue = tempValue + 1;
     value = tempValue;
 }

What this basically means is that any thread that wants to call the incrementValue() method needs to hold the lock on this object (therefore, no other thread can be holding it), and that the variable value cannot be modified in any way other than by calling this method (hence we made it private). Any other methods that modify this variable should be made synchronized as well.

There is another method on the SharedObject object which is the getValue() method, which is defined like so:

 public int getValue() {
     return value;
 }

Should this method be synchronized as well? The answer depends on how correct you need the return of this method to be! Not making it synchronized means that any other thread can call it during its allocated processing time (scheduled by the Thread Scheduler) even if that thread does not hold the lock on the object. Which in turn means that if Thread 1 has started calling the **incrementValue()** method but has not yet finished (and is in the Runnable state), and Thread 2 is in the Running state and calls this getValue() method, it will return a value that does not reflect the action already initiated by Thread 1.

To cut a long story short: if you need getValue() to represent the exact number of times **incrementValue()** was called, then you must synchronize it. If you don't, then you shouldn't make it synchronized.
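If you do need that guarantee, a minimal self-contained version could look like this (the class name Counter is mine, not from the tutorial's source); both methods synchronize on the same instance lock, so a reader can never observe a half-finished increment:

```java
public class Counter {

    private int value = 0;

    public synchronized void incrementValue() {
        value = value + 1;
    }

    public synchronized int getValue() {   // takes the same lock as incrementValue()
        return value;
    }
}
```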

Making unnecessary things synchronized is a design flaw and will lead to performance problems and possibly to a deadlocked program.

Now that we are talking about overdoing things, imagine that the incrementValue() method were like this:

 public void incrementValue() {
     String threadName = Thread.currentThread().getName();
     System.out.println("Incrementing Value called by " + threadName);

     int tempValue = value;
     tempValue = tempValue + 1;
     value = tempValue;

     System.out.println("Finished Incrementing Value by " + threadName);
 }

This new **incrementValue()** method does a bit more than the original one, and the extra things it now does maybe don't need to be synchronized. We just need to synchronize the bit of code that actually performs changes on the object. So instead of making the complete method synchronized, one can decide to synchronize just the bit of code that matters:

     public void incrementValue() {
         String threadName = Thread.currentThread().getName();
         System.out.println("Incrementing Value called by " + threadName);

         synchronized( this ) {
             int tempValue = value;
             tempValue = tempValue + 1;
             value = tempValue;
         }

         System.out.println("Finished Incrementing Value by " + threadName);
     }

And this is the notation for creating **synchronized blocks**. The interesting thing about **synchronized blocks** is that the lock required to execute that bit of code can be chosen by the programmer, which is not the case with **synchronized methods**! On a synchronized method, the lock necessary to execute the method is the lock belonging to the instance the method is being called on. In a synchronized block we can define which object's lock we require, and in the previous example the lock required is that of the instance of the object in question:
     synchronized( this ) {

The programmer could specify that the necessary lock is that of another object:

     synchronized( otherObject ) {
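A common refinement along these lines (my example, not from the tutorial's Part4) is to synchronize on a dedicated private lock object rather than on `this`, so no outside code can grab your monitor and interfere with your locking:

```java
public class PrivateLockCounter {

    private final Object lock = new Object();   // dedicated monitor, invisible to callers
    private int value = 0;

    public void incrementValue() {
        synchronized (lock) {                   // we take the lock of "lock", not of "this"
            value = value + 1;
        }
    }

    public int getValue() {
        synchronized (lock) {
            return value;
        }
    }
}
```

With synchronized methods (or `synchronized(this)`), any code holding a reference to the object could lock it and stall your threads; a private lock object rules that out.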

Go check Part4 of the source code for a version of Part3 that is synchronized.


Volatile

Yeah, this is a tricky one. Basically, this keyword can be applied to variables to make them volatile, which means that such a variable is always synchronized with main memory when accessed, therefore allowing no caching of that variable.
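Applied to the earlier StoppableTask example, the fix is a single keyword (this is my sketch of the corrected class): marking pleaseStop as volatile forces every read in the while loop to go to main memory, so the flag set by tellMeToStop() becomes visible and the loop terminates.

```java
public class StoppableTask extends Thread {

    private volatile boolean pleaseStop;   // volatile: never cached, always read from main memory

    public void run() {
        while (!pleaseStop) {
            // do some stuff...
        }
    }

    public void tellMeToStop() {
        pleaseStop = true;
    }
}
```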

More Advanced Topics

In the following tutorial on threads and concurrency we'll be looking at such topics as :

  • Daemon threads

  • Thread pools

  • Executors

  • Atomic variables

  • And other new stuff that also got introduced from Java 5


References

Programming Concurrency on the JVM - Subramaniam, Venkat - Pragmatic Bookshelf - 2011

Oracle Concurrency Lessons - http://download.oracle.com/javase/tutorial/essential/concurrency/

The Java Language Specification - http://java.sun.com/docs/books/jls/third_edition/html/memory.html

Revising the Java Thread / Memory Model - http://www.cs.umd.edu/~pugh/java/memoryModel/JavaOneBOF/BOF.pdf

SCJP Sun Certified Programmer for Java 6 Study Guide (CX-310-065): Exam 310-065  -  Sierra, Katherine; Bates, Bert - McGraw-Hill Osborne 2008

Java SE 6 API - http://download.oracle.com/javase/6/docs/api/java/lang/Thread.html

Synchronization and Volatile - http://www.javamex.com/tutorials/synchronization_volatile.shtml
