Async programming in Java
Async programming in Java with CompletableFuture.
CompletableFuture was introduced in Java 8 to support asynchronous programming.
The @Async annotation is another option for asynchronous calls when using the Spring framework, but CompletableFuture is applicable to any framework as long as Java is being used.
An executor must be configured to use the @Async annotation.
However, no executor needs to be configured to use CompletableFuture.
If no executor is supplied, the common Fork/Join pool is used as the default thread pool.
Its default parallelism is (available processors) - 1.
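If you want to see what that works out to on a given machine, you can print the common pool's parallelism directly; this is just a throwaway check, and the class name is only for illustration.

import java.util.concurrent.ForkJoinPool;

public class CommonPoolCheck {
    public static void main(String[] args) {
        // supplyAsync falls back to the common Fork/Join pool when no executor is passed in
        System.out.println("Available processors: "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("Common pool parallelism: "
                + ForkJoinPool.commonPool().getParallelism());
    }
}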
The basic code looks like this.
public CompletableFuture<ResponseEntity<String>> someAsyncMethod() {
    return CompletableFuture.supplyAsync(() -> someService.someServiceMethod())
            .thenApply(response -> ResponseEntity.ok(response));
}
You can use the configured executor in the method if you prefer to use a fixed thread pool instead of the Fork/Join pool.
private final Executor executor;

public CompletableFuture<ResponseEntity<String>> someAsyncMethod() {
    return CompletableFuture.supplyAsync(() -> someService.someServiceMethod(), executor)
            .thenApply(response -> ResponseEntity.ok(response));
}
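The executor above has to be defined somewhere. One way of doing that, assuming a Spring setup, is a small configuration class like the sketch below; the bean name, pool size, and thread name prefix are placeholders.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.TaskExecutor;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class AsyncConfig {

    // Hypothetical bean; the pool size and thread name prefix are placeholders.
    @Bean
    public TaskExecutor uploadExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(20);
        executor.setMaxPoolSize(20);
        executor.setThreadNamePrefix("upload-");
        executor.initialize();
        return executor;
    }
}

Because TaskExecutor extends java.util.concurrent.Executor, this bean can be injected straight into the Executor field shown above.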
Please make sure you return the CompletableFuture itself rather than calling get on it and returning the unwrapped response.
Calling get blocks the calling thread until the result arrives, which defeats the purpose of the asynchronous call.
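For comparison, this is the blocking version to avoid, using the same placeholder names as above.

// Anti-pattern: calling get() blocks the calling thread until the result arrives,
// so wrapping the call in a CompletableFuture no longer buys you anything.
public ResponseEntity<String> someBlockingMethod() throws Exception {
    return CompletableFuture.supplyAsync(() -> someService.someServiceMethod())
            .thenApply(response -> ResponseEntity.ok(response))
            .get(); // blocks here
}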
The choice of pool type should be based on the type of job it performs.
The Fork/Join pool utilizes a divide-and-conquer algorithm to break down jobs into smaller pieces and execute them concurrently.
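That divide-and-conquer style is easiest to see with a plain RecursiveTask, unrelated to the upload test. The sketch below sums a range of numbers by splitting it until the pieces are small enough to compute directly; the threshold value is arbitrary.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a range of numbers by splitting it in half until it is small enough.
public class RangeSumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long from;
    private final long to;

    public RangeSumTask(long from, long to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (long i = from; i <= to; i++) {
                sum += i;
            }
            return sum;
        }
        long mid = (from + to) / 2;
        RangeSumTask left = new RangeSumTask(from, mid);
        RangeSumTask right = new RangeSumTask(mid + 1, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute the right half, then combine
    }

    public static void main(String[] args) {
        long result = ForkJoinPool.commonPool().invoke(new RangeSumTask(1, 1_000_000));
        System.out.println(result); // 500000500000
    }
}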
In my test case, the API uploaded a file to an S3 bucket, and logic to check for identical file names was also added.
The first row is a synchronous call without any thread pool.
The second row uses the Fork/Join pool with a size of 15.
The third row uses a fixed thread pool with a size of 20.
The results show that the first and third cases are unstable, while the second case, using the Fork/Join pool, is stable.
It demonstrates that the algorithm of the Fork/Join pool can introduce additional overhead for relatively smaller jobs, resulting in lower TPS.
The synchronous call has a high TPS since it can use the entire resources of the machine, unrestricted by any pool size, but it is very unstable due to the blocking calls.
If a fixed-size thread pool is being used, what should be the optimal size for the pool?
Does increasing the pool size result in better performance?
The last row uses the Fork/Join pool with a size of 15.
A fixed pool size of 10 yields a higher TPS than the Fork/Join pool with a size of 15.
This is what I mentioned earlier: the Fork/Join pool's extra overhead shows up most for relatively smaller jobs.
The highest TPS result is achieved with a pool size of 20, not with pool sizes of 30 or 40.
This indicates that there is a certain pool size threshold that yields the best outcome.
Since the machine's resources are limited, there comes a point where the overhead of additional threads outweighs the performance gain.
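One practical takeaway is to keep the pool size externally configurable so it can be re-tuned with load tests instead of being hard-coded. A minimal sketch, assuming Spring and an invented property name, could look like this.

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UploadPoolConfig {

    // "upload.pool-size" is a made-up property name; 20 is only the default
    // because it happened to perform best in this particular test.
    @Bean
    public Executor tunableUploadExecutor(@Value("${upload.pool-size:20}") int poolSize) {
        return Executors.newFixedThreadPool(poolSize);
    }
}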
Although we may believe we’ve found the optimal size through testing, real-life results can differ, as this API is not the only one used in the system.
However, load tests can serve as a useful reference.
This article was created by Crocoder7. It is not to be copied without permission.