
Asynchronous processing of RESTful web services requests in Spring and Java EE worlds wrapped in Docker – part 2: JAX-RS

14th June 2016

A little while has passed, and many great things have happened 🙂

Back in the Java EE world, when talking about cloud, containers and Docker we should consider WildFly Swarm. This JBoss project modularizes Java EE well beyond the standard Java EE profiles (web, enterprise, etc.). The main features of Swarm are:

  • Capability to package the application as a fat jar
  • The fat jar contains only the required Java EE specifications (with their implementations, of course) and frameworks (e.g. Logstash)
  • Capability to add the necessary dependencies based on the build automation tool's descriptor (Maven, Gradle)
  • Capability to construct customized deployments (via ShrinkWrap)

Some of these features make WildFly Swarm a little similar to Spring Boot, though of course there are differences, as configuration and deployment of a Java EE application differ from those of a Spring application. Consider configuration: in Spring you can use Java configuration or XML (if you like), and every configuration style looks much the same. In Java EE there are different types and formats of configuration. EJBs and Servlets mostly don't need XML nowadays, but with Java EE Batch you are stuck with XML. When using WildFly Swarm it is possible to add extra configuration that augments the Java EE configuration, but I would not bet that the Swarm or ShrinkWrap APIs will not change.

Spring Boot can probably do more automatic configuration than WildFly Swarm can. This lack of belief is dictated by the way Java EE and Spring are configured. If we consider CDI, we are not forced to add any configuration (starting from Java EE 7 even the beans.xml descriptor is not required). Looking at JPA, we can add datasources (the Swarm-specific part) and define persistence units (the Java EE part). So we can configure both the application and how it is deployed; in my opinion Spring Boot's flexibility comes from Spring, not the other way around.

Enough of theoretical considerations. Let's code. Spring Boot has the excellent start.spring.io; WildFly Swarm has the WildFly Swarm Project Generator. The name is maybe not as catchy as Spring's, but Swarm's web app does basically the same thing, and it is really smart to use it instead of doing the basic configuration yourself. For the demo app we need just the JAX-RS with CDI dependency. After downloading the application's zip we need to move it where we want and import it into the IDE. Below is the pom file that I enhanced with Swarm's plugin configuration; the rest is generated:
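The original pom listing did not survive in this copy of the post, so here is a sketch of the relevant parts, reconstructed from the bullet points below; group IDs are real, but the version, fraction artifact IDs and plugin configuration keys are illustrative and should be checked against the Swarm documentation for your version:

```xml
<!-- Sketch only: versions and configuration keys are illustrative -->
<packaging>war</packaging>

<dependencyManagement>
  <dependencies>
    <!-- WildFly Swarm BOM keeps all fraction versions aligned -->
    <dependency>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>bom</artifactId>
      <version>1.0.0.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- only the fractions the demo needs: JAX-RS and CDI -->
  <dependency>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>jaxrs</artifactId>
  </dependency>
  <dependency>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>cdi</artifactId>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- builds the -swarm.jar and provides the run/start goals -->
    <plugin>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>wildfly-swarm-plugin</artifactId>
      <configuration>
        <properties>
          <!-- bind address configured here; debug port can be set similarly -->
          <swarm.bind.address>0.0.0.0</swarm.bind.address>
        </properties>
      </configuration>
      <executions>
        <execution>
          <goals><goal>package</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```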

Important parts:

  • We import the WildFly Swarm BOM to ensure matching versions are used. This is similar to the Spring IO Platform BOM
  • We use WAR packaging – more on this below
  • We can configure the debug port, and we do configure the bind address

The packaging set in Maven affects your application's packaging, not the fat jar's packaging. With WildFly Swarm we always get both the packaged application and the WildFly Swarm-generated fat jar. If we used jar packaging, we would need to configure a custom deployment using ShrinkWrap, as in this example. Despite what is stated in the WildFly Swarm docs, I could not run the demo application using jar packaging. So as far as I know, if you use jar packaging you are forced to write your own main method that deploys your app.

What we have done so far is create an application that deploys JAX-RS with CDI and the other required dependencies, and only these. No EJBs, no JMS, no JSP. Great! But if you peek into the WAR's lib directory, there are some libraries that could be excluded, like the Jackson JAXB module. The dependency metadata should probably be stated more precisely in order to exclude some of these. But some, like the websockets lib, could certainly be excluded: the Java API for WebSockets is a separate specification and could be specified as a separate WildFly Swarm dependency.

Passing on to the Java code, we need to create the JAX-RS application class:
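The class itself is missing from this copy of the post; a minimal JAX-RS activator would look like the sketch below. The class name and application path are assumptions, not the original code:

```java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Activates JAX-RS; an empty subclass is enough, resource classes
// are discovered automatically. The path is an assumption.
@ApplicationPath("/")
public class TaskApplication extends Application {
}
```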

And then the JAX-RS service:
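The service class is also missing here; the sketch below matches the description that follows (an AsyncResponse injected with @Suspended, an optional timeout handler, and a hand-rolled thread pool). Class, method and parameter names are illustrative, though the query parameter names follow the demo URL at the end of the post:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.Response;

@Path("/task-service")
public class TaskService {

    // Hand-rolled pool; a possible upgrade is a container-managed executor
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    @GET
    @Path("/process-task")
    public void processTask(@Suspended final AsyncResponse asyncResponse,
                            @QueryParam("task-name") String taskName,
                            @QueryParam("task-param") String taskParam) {
        // Optional timeout with a handler that fires if we never resume
        asyncResponse.setTimeout(30, TimeUnit.SECONDS);
        asyncResponse.setTimeoutHandler(ar ->
                ar.resume(Response.status(Response.Status.SERVICE_UNAVAILABLE)
                        .entity("Processing timed out").build()));

        // Free the HTTP thread now; resume when the result is ready
        executor.submit(() -> {
            String result = "Processed task " + taskName
                    + " with param " + taskParam;
            asyncResponse.resume(result);
        });
    }
}
```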

There is only one way to do asynchronous processing in JAX-RS, and that is to use an AsyncResponse, which can be injected using the @Suspended annotation. AsyncResponse allows us to resume processing when the response is ready, just like the Servlet 3.x asynchronous API. Optionally, we can specify a timeout and a timeout handler that will be executed when processing times out.

The example code also contains a synchronous version of the method. Possible upgrades include using managed thread pools in Swarm instead of creating our own thread pool.
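As a sketch of that upgrade, the Java EE Concurrency Utilities (JSR 236) let the container own the pool; this assumes the corresponding Swarm fraction is on the classpath, and the class name is illustrative:

```java
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;

public class ManagedTaskService {

    // Container-managed pool: sizing and lifecycle are configured
    // on the server instead of hard-coded in the application
    @Resource
    private ManagedExecutorService managedExecutor;

    public void submitWork(Runnable work) {
        managedExecutor.submit(work);
    }
}
```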

I had some trouble with a VirtualBox VM with Docker, so the app has not been tested with Docker yet; I will do it in a few days. The code is available on GitHub, and I will update it in a few days. In order to run the application, the wildfly-swarm:run goal needs to be executed after the project is built. I tried using the wildfly-swarm:start goal, but WildFly was killed right after the build completed. A dockerized WildFly example is available on Markus Eisele's blog.

Another way is to execute swarm-demo-swarm.jar from the target directory:

java -jar swarm-demo-swarm.jar

Once WildFly is up and running, the service can be accessed at http://localhost:8080/swarm/task-service/process-task?task-name=YOUR_TASK_NAME&task-param=parameter

To be continued… 🙂

Variants of SEDA

17th April 2016 Architecture, EDA, JMS

SEDA (Staged Event-Driven Architecture) was described in Matt Welsh's Ph.D. thesis. I recently read his blog post describing the various ways SEDA has been perceived and implemented. One thing that comes to mind after reading it is that SEDA, as a tool, has been used in more or less correct ways, like many other patterns and architecture models.

That is because SEDA is a design model and architecture pattern, one that can be used together with other models like SOA. In fact some ESBs use SEDA as a processing model, like Mule ESB or ServiceMix; SEDA is one of a few processing models in ServiceMix. But SEDA can be used elsewhere, for example with Oracle Service Bus, and in more than one way:

  1. SEDA using messaging – here we would add a queue in front of the service that consumes event messages, possibly using SOAP/JMS binding. We can then add additional stages for events processed by this first service.
  2. Throttling capabilities of OSB – here we do not have a real thread pool, but using WebLogic's Work Managers we achieve a similar goal; threads will be taken from the WebLogic thread pool. This processing model will work with every routing action.

I have also used a variant of the first model, where services were exposed as SOAP/HTTP services, published a message on a queue, and then sent a reply saying the message had been queued. This way we can still use SOAP/HTTP messages, not forcing other systems to use messaging – whether JMS, AMQP, STOMP or something else.

When done right, SEDA will improve a system's scalability and extensibility. Scalability is higher because if a peak of messages happens, they will be queued up to the limit of the storage quota. Messages that can't be processed upon arrival at a queue will wait for their turn. With the addition of reliable messaging, like JMS persistent delivery with an exactly-once guarantee, we can have a scalable and reliable message processing mechanism. Adding additional queues depends on the requirements of the specific domain or application. It may be viable to add queues not only for scalability but also from an extensibility point of view. We would use topics here, or publish events to multiple queues, but the main point is that it would be possible and quite easy to add an additional event consumer in this model.
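The staged model described above can be sketched in plain Java: each stage owns its own queue and worker thread, and events flow from stage to stage. This is a minimal illustration, not production code; stage names, the transformations and the poison-pill shutdown are all illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.UnaryOperator;

public class SedaPipeline {

    private static final String END = "__END__"; // poison pill for shutdown

    // A stage: consume from the inbound queue, transform the event,
    // publish to the next queue -- the essence of SEDA
    static void stage(BlockingQueue<String> in, BlockingQueue<String> out,
                      UnaryOperator<String> transform) {
        Thread worker = new Thread(() -> {
            try {
                for (String event = in.take(); !event.equals(END); event = in.take()) {
                    out.put(transform.apply(event));
                }
                out.put(END); // propagate shutdown to the next stage
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
    }

    public static List<String> process(List<String> events) throws InterruptedException {
        BlockingQueue<String> q1 = new LinkedBlockingQueue<>();
        BlockingQueue<String> q2 = new LinkedBlockingQueue<>();
        BlockingQueue<String> done = new LinkedBlockingQueue<>();
        stage(q1, q2, e -> "validated:" + e);   // stage 1
        stage(q2, done, e -> "processed:" + e); // stage 2
        for (String e : events) q1.put(e);
        q1.put(END);
        List<String> out = new ArrayList<>();
        for (String r = done.take(); !r.equals(END); r = done.take()) out.add(r);
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(process(Arrays.asList("a", "b")));
    }
}
```

A peak of events simply accumulates in the queues (up to their capacity) rather than overwhelming the workers, and adding a consumer means attaching another stage to an existing queue.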

SEDA may also be good for asynchronous processing of requests with long processing times, with the addition of NIO2 and the Servlet 3.x asynchronous processing model. In this case we would accept a request at some endpoint, like a Spring controller method, and then invoke an asynchronous method to do the backend processing. The HTTP thread could then process another request for another, potentially synchronous, endpoint. The backend processing service would process incoming requests and invoke completion handlers as it finishes (using Spring infrastructure). Here we would have at least two queues – the HTTP thread queue and the backend processing service queue. Both could be managed and tuned for higher scalability.
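A minimal sketch of this flow with plain java.util.concurrent (the Spring controller and completion-handler plumbing are left out; class and method names are illustrative): the "controller" hands the request to a backend pool and returns immediately, and the caller is notified when the future completes.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncEndpoint {

    // The backend stage with its own (daemon) thread pool -- the second queue
    static final ExecutorService backend = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // The "controller": returns at once, freeing the HTTP thread
    static CompletableFuture<String> handleRequest(String payload) {
        return CompletableFuture.supplyAsync(() -> {
            // long-running backend processing would happen here
            return "done:" + payload;
        }, backend);
    }

    public static void main(String[] args) {
        // The completion-handler analogue: react once the result is ready
        handleRequest("req-1").thenAccept(System.out::println).join();
    }
}
```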

When considering SEDA it is important to remember that request processing will be slower than processing with one thread and no queues along the way. This is quite obvious, as dispatching, consuming and confirming an event message has its performance impact. SEDA can also affect a system's maintainability, as event messages may get stuck in some queues or be redirected to dead letter queues, and this must be monitored. Fortunately this is not a big problem, especially on OSB (we can use alerts).

SEDA can be used incorrectly, just like EDA, CEP or SOA. Once again, architecture design is probably the most important thing in a system's development.

Have a nice day !