Frameworks and solutions

Microservices, API Gateways and DevOps

11th November 2015 Frameworks and solutions

My laptop didn’t return from the service center. RIP. I’m waiting for a new Dell now to get back on track with high performance 😉

While browsing the internet on a borrowed laptop (my wife’s), I came across an article by Markus Eisele. I like Markus’ blog and perceive him as one of the leading evangelists in Java EE. I was happy to read that Markus also thinks that a DDD Bounded Context is a good way to start when choosing which parts of a monolith to include in a microservice – whether an existing monolith is to be divided into microservices, or we are convinced to start with microservices from the beginning.

From a technical, day-to-day perspective I still see the need for ESBs. Things like security policies and throttling can be done in an API gateway, but some things, like transformations or protocol translation, are not supported by every API gateway (are there any that support transformations? – it would go against smart endpoints and dumb pipes). So the default starting point when considering a solution would be to use an API gateway on the border of the integrated systems solution and an ESB inside, as the ESB still offers more features. API gateways are more specialized for external usage (e.g. provisioning policies) and are better fitted for smart-endpoints-and-dumb-pipes solutions. When using microservices with smart endpoints and dumb pipes you have to accept some possible code duplication, like aggregating calls to two services in a few other services. So one should be careful not to end up writing an ESB at some point 😉

Microservices are very popular nowadays. They are a tool that must be used wisely. I like the In defense of a monolith article very much and think alike. You know, the paper by Winston Royce that made the waterfall model popular was actually showing the downsides of this model. But the buzz took off 🙂

Documenting decisions in architectural issues log

28th July 2015 Frameworks and solutions

Life of an application

Life is about choices. Architecture is about making the best possible choices in the context of current requirements and constraints. Requirements can be functional or non-functional, and constraints limit our solution space. I guess we all know that, and writing yet another article about it would not be interesting. But every architect should document the reasons for the important decisions that are made. This article will focus on how to do it in Sparx Enterprise Architect, with references to project requirements.

The reasoning

As an architect, one needs to document important issues, possible solutions and the reasons for choosing one of the solutions. There are various reasons to do it:

  • The reasoning that stood behind a choice will blur with time, so the architect will not be able to precisely defend the decisions made
  • An architect may need to defend, or just describe, his decisions on various occasions: another architect joins the team, or the client (or a consultancy company hired by the client) has questions
  • Requirements and constraints may change over time and decisions may have to be reviewed; it is easier to review a decision when we know why it was made
  • An architect may choose to leave the project and someone may have to take over

I have experienced the last situation from both sides. Luckily, when I was leaving a project developed for one of the Polish banks, I left my architecture log, so my colleagues knew how to describe and defend the decisions. But imagine a poor architect who takes over a non-trivial project and has to defend choices made by the previous architect. Some may be questionable, some not, but leaving this poor guy without any data on why these decisions were made is something we should avoid.

Decisions can be kept in various places:

  • Some text files
  • Excel sheets
  • CASE tools

Using the CASE tool that is already used to model requirements has the advantage of keeping all data together and allows one to search the data easily.

Sparx Systems Enterprise Architect is an example of such a tool. It is often used to manage requirements, design the solution’s architecture and convey the design to development teams. It allows quite a lot of project views and aspects to be modeled: requirements, physical and logical architecture, 4+1 architecture views, TOGAF models and a lot more. Using Sparx EA makes it possible to keep the requirements and the architectural issues log in one place and to create connections between issues and requirements. This allows us to partially or fully automate the process of selecting issues that need to be reviewed, if that is required. As always, this possibility is just a tool, and one should keep in mind that in some cases or stages of the project full automation can be replaced with more agile, face-to-face interactions.

Using Sparx EA and UML Profile

To efficiently store architectural decisions in EA, one should create a profile. It is possible to create an architectural decisions log without a profile, using other UML extension mechanisms such as plain stereotypes, but using a profile gives us better separation between model elements, and this is the reason I like this approach. Creating UML profiles is described well in the EA docs, so there is no need for me to repeat this information. The architecture decision log profile should contain five classes:

  • Architectural issues log – this stereotype extends package
  • Architectural issue – this stereotype extends package
  • Architectural context – this stereotype extends artifact
  • Architectural decision – this stereotype extends artifact
  • Architectural alternative solution – this stereotype extends artifact

Architectural issues log and Architectural issue stereotypes extend package in order to be containers for other elements. Architectural issues log will contain many Architectural issues.

Each issue will have its own package in order to lessen the maintenance cost of the Enterprise Architect model in the context of document generation. If we kept them all in one package, ordering the elements would be problematic, especially if new alternative solutions were added later.

Architectural issue will store a description of the problem and a short title (we want to keep the log simple). Architectural issue will also group the architectural issue context, the decision and the alternative solutions. Each context must be connected via a trace relation to the requirements that engendered the issue.

Architectural context element will specify known constraints (both technical and non-technical) that limit possible solution space.

Architectural decision will contain the reasoning on why the solution was chosen. This should be a short text, as every alternative solution should list the advantages and disadvantages that give good insight into that solution. This model makes it easier to work with alternative solutions: each solution has its own dedicated element, so they do not get mixed up and become harder to read.

Each Architectural issue will contain a number of Architectural alternative solution elements. Each solution alternative is an independent solution option.

Architectural issues log is a package that will group various architectural issues.

Custom model search

Once we have the profile in place, we can create a custom search that will allow us to find the decisions for which requirements have changed. Both elements have a modification date, so it is easy to track down issues that need to be reviewed.

We can do it in several ways:

  • Building a custom application that uses the EA automation interface to search the repository
  • Searching the EA database directly (not the most elegant solution – if the EA database model changes, the tool will also have to be updated)
  • Creating a plug-in
  • Doing a custom model search using EA extension features

The last solution is quite simple, and thanks to the fact that it uses EA extension features, the user will work only within EA to manage the architecture decisions log.

Such query over EA model can be done in two ways:

  • With query designer
  • With SQL queries

The second option is more powerful but a little bit more difficult. There is a very good description of how to create such a query on the Bellekens blog. In our case we want to search over objects with the ArchitectureIssue stereotype and look for those for which the requirements have changed, so the query needs to look like the one below:
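
The original query was shown as a screenshot; below is a minimal sketch of what such a custom search could look like. It assumes, for simplicity, that the trace relation is drawn from the issue itself to the requirement (in the profile above it is the context element that traces to requirements, so you may need one more join), and that the CLASSGUID/CLASSTYPE aliases that EA custom SQL searches expect are in place:

```sql
-- Sketch: find ArchitectureIssue elements that were last modified before
-- the requirements they trace to. CLASSGUID and CLASSTYPE are the column
-- aliases EA expects in custom SQL search results.
SELECT issue.ea_guid AS CLASSGUID,
       issue.Object_Type AS CLASSTYPE,
       issue.Name AS Name
FROM t_object issue
     INNER JOIN t_connector trace
         ON trace.Start_Object_ID = issue.Object_ID
     INNER JOIN t_object requirement
         ON requirement.Object_ID = trace.End_Object_ID
WHERE issue.Stereotype = 'ArchitectureIssue'
  AND requirement.Object_Type = 'Requirement'
  AND issue.ModifiedDate < requirement.ModifiedDate
```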

What we do here is simply search for all Architectural issues that were modified earlier than the last modification of the requirements they were meant to address.

A good thing about this query is that it will work on most, if not all, databases supported by EA. If you find yourself needing DB-vendor-specific statements, you can use the #DB# macro supported by EA.

Columns and tables used:

  • t_object – this is a table that stores information about elements in model
    • ea_guid – UUID in EA model
    • name – name of element
    • Object_ID – object identifier in the EA repository
    • Object_Type – type of object (class, artifact, interface, etc.)
    • Stereotype – stereotype of this object
    • ModifiedDate – date of modification
  • t_connector – this is a table that stores information about relations (which are also elements in the model)
    • End_Object_ID – id of object at the end of relation
    • Start_Object_ID – id of object at the start of relation

Using this simple query we can quickly validate our model and see what needs to be reviewed.

If you want to find the meaning of column names not covered in this query, then please take a look at the Inside Enterprise Architect site.

Conclusions

Keeping project documentation together is important. Thanks to EA’s powerful reporting features and tools such as EA Docx, we can generate documentation for our projects straight out of EA, and we can create our own tools to cover features not included in EA. Architecture decisions are a very important part of project documentation, so they need to be included. Just remember to keep things simple enough to actually document those decisions.

It’s all about context

21st June 2015 Frameworks and solutions, Java EE, Weblogic

Many applications need some kind of context propagation without explicitly passing context objects down the invocation stack. The context can contain data about the logged-in user, request data, performance data or some other required information. In most cases we need to propagate the context within one JVM instance and one thread. As with every task in software development, we have a variety of tools at our disposal to help us with context propagation. This article will discuss some of them and give some hints on when to use each.

ThreadLocals

The most basic and well-known context propagation tools are ThreadLocal and InheritableThreadLocal. Both of these classes use language features to propagate context. A thread local variable accessed by different threads returns a separate copy of the variable for each thread. Inheritable thread locals expose this copy to threads spawned from the parent thread. But when using ThreadLocal variables, one must be aware of the pitfalls associated with them:

  • Failing to remove a thread local variable – this can happen when components that set a ThreadLocal variable do not clean up after themselves in a reliable way. Threads are pooled in most web and application servers, so these variables will remain associated with the threads and will be visible to other requests. The classloader that loaded the application will not be garbage collected as long as there is a ‘live’ thread local reference associated with it. As a consequence, the application can suffer from memory leaks (both heap and perm gen) and heisenbugs.
  • Increase in complexity – thread local variables should be used to pass context data. If they are used to pass arbitrary data, the complexity of the application will increase.

Remember to clean up your thread locals after request processing is done!
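
A minimal sketch of this cleanup pattern (the RequestContext holder and the filter are hypothetical names I use for illustration): a ThreadLocal plus a servlet filter that guarantees removal in a finally block, so pooled threads never carry stale context to the next request:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

// Hypothetical ThreadLocal-based context holder.
final class RequestContext {
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    static void setUser(String user) { CURRENT_USER.set(user); }
    static String getUser() { return CURRENT_USER.get(); }
    // remove() is crucial on pooled threads – otherwise the value (and the
    // classloader it references) leaks to the next request using this thread
    static void clear() { CURRENT_USER.remove(); }
}

public class ContextFilter implements Filter {
    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        try {
            RequestContext.setUser(((HttpServletRequest) req).getRemoteUser());
            chain.doFilter(req, resp); // context is visible down the call stack
        } finally {
            RequestContext.clear(); // always runs, even when an exception is thrown
        }
    }

    @Override
    public void destroy() { }
}
```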

TransactionSynchronizationRegistry

Another tool at our disposal is TransactionSynchronizationRegistry. This object is associated with a JTA transaction and allows us to put context objects into, and get them from, the TransactionSynchronizationRegistry instance. Passing context objects this way will work across different threads as long as they are associated with the same transaction. But semantically, using this object for context propagation is not quite right, and it may encourage weird ideas about spanning transactions across all layers of the application, starting them at the web layer when it’s not really needed.

  • Do not use TransactionSynchronizationRegistry when your transactions are short-lived and you would need to make them longer just to use it
  • Think twice before using it, as it is not meant to be an application-level context carrier
  • When you use TransactionSynchronizationRegistry with distributed transactions, prefix your context keys to avoid key name collisions with other apps

Here’s how to use it (@Resource is a Java EE annotation – the code must be run inside a Java EE container). The snippets below are minimal sketches; the bean names and keys are made up for illustration.

Setting context:
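
```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.transaction.TransactionSynchronizationRegistry;

@Stateless
public class OrderService { // hypothetical bean starting the work

    // injected by the container; tied to the current JTA transaction
    @Resource
    private TransactionSynchronizationRegistry registry;

    public void placeOrder(String orderId) {
        // the key is prefixed with the application name to avoid collisions
        registry.putResource("myapp.orderId", orderId);
        // ... business logic running in the same transaction ...
    }
}
```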

Getting context later in request processing pipeline:
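
```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.transaction.TransactionSynchronizationRegistry;

@Stateless
public class AuditService { // hypothetical bean invoked later in the same transaction

    @Resource
    private TransactionSynchronizationRegistry registry;

    public void audit() {
        // returns the object put earlier in this transaction, or null
        String orderId = (String) registry.getResource("myapp.orderId");
        // ... use the context value ...
    }
}
```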

Weblogic WorkContext

If you do not have a transaction associated with every request being processed (or the transaction is not associated with the request later in its processing), and if you are fortunate enough to use Weblogic, then you can use WorkContext. This patented feature allows for context propagation like TransactionSynchronizationRegistry, but it does not require a transaction context and can be propagated to other Weblogic cluster servers using a variety of protocols, including JMS, SOAP, thread and transaction to name a few – see PropagationMode for the full list.

There is a caveat: you must have a context key to get your context object. So what if you don’t have such a key passed on as a parameter? A solution might be to use a framework feature like RequestContextHolder or the JAAS security context and use data passed in that context as a key, if the request is processed in one thread. Another might be to use some request-unique data as the key. But what if you use thread pools (or thread managers like Work Managers on WLS) and do some work in different threads than the one processing your request? If you can use Work Managers instead of Java SE thread pools, then do it. Java EE apps should use managed thread pools, and Work Managers allow you to manage threads (strictly speaking, Work Managers are NOT thread pools – they are more like rule sets for Weblogic). If you do so, then a WorkContext in propagation mode Work will be available in work instances executed by Work Managers. If you can’t use Work Managers, then you should consider passing the context data as a parameter to the slave thread.

Usage of this neat feature is described in the Oracle WLS docs.
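
A rough sketch of the API, based on how I remember the docs (the JNDI name and PropagationMode flags should match the WLS documentation, but the key and value here are made up – double-check against the docs for your WLS version):

```java
import javax.naming.InitialContext;
import weblogic.workarea.PrimitiveContextFactory;
import weblogic.workarea.PropagationMode;
import weblogic.workarea.WorkContext;
import weblogic.workarea.WorkContextMap;

public class WorkContextSample { // hypothetical helper

    public void setContext() throws Exception {
        // WLS binds the map under a well-known JNDI name
        WorkContextMap map =
                (WorkContextMap) new InitialContext().lookup("java:comp/WorkContextMap");
        // propagate within local calls and Work Manager work instances
        map.put("myapp.userId",
                PrimitiveContextFactory.create("jdoe"),
                PropagationMode.LOCAL | PropagationMode.WORK);
    }

    public void readContext() throws Exception {
        WorkContextMap map =
                (WorkContextMap) new InitialContext().lookup("java:comp/WorkContextMap");
        // the concrete WorkContext subtype depends on the factory used to create it
        WorkContext ctx = map.get("myapp.userId");
        // ... unwrap the value ...
    }
}
```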

  • WLS is necessary to use WorkContext – you bind yourself to WLS by using it, but you can shield yourself partially by extracting this feature into a separate infrastructure service component (maybe a custom CDI scope).
  • Propagating contexts other than simple ones in a propagation mode like SOAP will most probably result in lower performance – but we can use the thread mode.
  • Keep in mind to use an application prefix for WorkContext keys to prevent name collisions, if there is a possibility that multiple apps will use this feature.
  • It is mostly useful if you want to send context data to some other app – so mostly for technical stuff, not business-related data.
  • It can be propagated between work instances in WLS.

CDI

Last but not least: CDI is the Java EE specification for dependency injection and contextual data (Contexts and Dependency Injection). CDI, among other neat features, specifies a well-defined contextual lifecycle of CDI beans in well-defined scopes, and a way to inject them into managed beans. Since Java EE 7, CDI support is described in other specifications such as EJB, and this gives solid foundations to build our apps on. Of course CDI was supported in Java EE 6 (EJB 3.1), but it was not properly described in the specification, and implementations of this support were sometimes not ideal. Using CDI we can inject beans in request, conversation, session or application scope and the dependent pseudo-scope, with the possibility to add our own scopes. In order to use CDI to pass context data, we need to use the proper scope and inject our variable into a Java EE container-managed bean. Session and conversation contexts are available only when the injection is made into a managed bean while processing a servlet request. According to the CDI spec, the request and application scopes should also be available for web service or EJB remote invocations (section 6.7.1, Request context lifecycle, of the CDI spec), but in Weblogic 12c (12.1.3) this still does not work properly.

Here’s how to use it. The snippets below are minimal sketches; the bean and class names are made up for illustration.

Context definition:
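
```java
import java.io.Serializable;
import javax.enterprise.context.RequestScoped;

// Hypothetical context bean: one instance per request, injectable anywhere
// in the container-managed call chain of that request.
@RequestScoped
public class UserContext implements Serializable {

    private String userId;

    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }
}
```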

Setting context:
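
```java
import java.io.IOException;
import javax.inject.Inject;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

@WebServlet("/orders") // hypothetical entry point
public class FrontServlet extends HttpServlet {

    @Inject
    private UserContext userContext; // container injects the request-scoped instance

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        userContext.setUserId(req.getRemoteUser());
        // ... continue request processing; no need to pass the context down ...
    }
}
```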

Getting context later in request processing pipeline:
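
```java
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class InvoiceService { // hypothetical bean deeper in the call stack

    @Inject
    private UserContext userContext;

    public void createInvoice() {
        // same request-scoped instance that the servlet populated
        String user = userContext.getUserId();
        // ...
    }
}
```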

Conclusions

In most cases ThreadLocals are sufficient, but in my opinion CDI will soon be the better solution for Java EE apps, and in some cases (like when your context does not start at some web service level) it is a neat feature already. In Spring you have similar possibilities, as you can have request and session scoped beans; with Spring Web Flow you also get conversation scope and flow scope beans, and you can create custom scopes as well. I do not cover it here, as it is well documented in the Spring docs, and Spring itself uses ThreadLocals to achieve context propagation in many cases (see the RequestContextHolder implementation).