Month: June 2015

WS-Security Policy – what’s your algorithm?

28th June 2015 SOAP Web Services, Software development No comments

This will not be a long post. I won't describe WS-Security Policy and its usage, as you can find many sources that already do – like this blog post by Ross Lodge.

When creating a client for a web service that has security requirements specified in a WS-Security Policy document, you may need to know the default values of the various algorithms, so that you don't search blindly for a combination that works. It's basic information, but only one blog that I came across mentions it, and it's buried deep in the article. I overlooked it when I was searching for the default algorithms myself (I don't know why I didn't check the spec in the first place :)). So this post is meant to be short and focused, exposing this one piece of information.

Algorithm names for a given algorithm suite are described in the WS-Security Policy specification. Let's discuss an example – a web service is secured using an X509v3 token with the Basic256 suite and symmetric key wrap. To find the algorithm names that the client must use, go to the table at the end of chapter 6.1 and look at the row for the Basic256 algorithm suite:

Algorithm Suite | [Dig] | [Enc] | [Sym KW] | [Asym KW] | [Enc KD] | [Sig KD] | [Min SKL]
Basic256 | Sha1 | Aes256 | KwAes256 | KwRsaOaep | PSha1L256 | PSha1L192 | 256

Acronyms for algorithms are explained above the table:
Digital signature – Sha1 (http://www.w3.org/2000/09/xmldsig#sha1)
Xml Encryption – Aes256 (http://www.w3.org/2001/04/xmlenc#aes256-cbc)
Symmetric Key wrap – KwAes256 (http://www.w3.org/2001/04/xmlenc#kw-aes256)
Minimum symmetric key length – 256 bits
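
For instance, if your client is built with Apache CXF and WSS4J, the URIs from the table end up in the WSS4J handler properties. A minimal sketch (the property names and the action list should be verified against the WSS4J version you use):

```java
import java.util.HashMap;
import java.util.Map;

public class Basic256OutProperties {

    // Sketch of WSS4JOutInterceptor properties for the Basic256 suite.
    // "encryptionSymAlgorithm" / "encryptionKeyTransportAlgorithm" are standard
    // WSS4J handler property names; check them for your version.
    public static Map<String, Object> create() {
        Map<String, Object> props = new HashMap<String, Object>();
        props.put("action", "Signature Encrypt");
        // [Enc] for Basic256
        props.put("encryptionSymAlgorithm", "http://www.w3.org/2001/04/xmlenc#aes256-cbc");
        // [Asym KW] for Basic256 (KwRsaOaep)
        props.put("encryptionKeyTransportAlgorithm", "http://www.w3.org/2001/04/xmlenc#rsa-oaep-mgf1p");
        return props;
    }
}
```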

Katana vs a broadsword

21st June 2015 History No comments

After seeing one test on 9gag I dug a little around it, and it turns out that katanas were not that good – just a Hollywood fairy tale. You can find some material on YouTube; I will not include all the links, so that I don't share a link to some illegal material, but search around the "katana vs. longsword" keywords. One thing I can share:

There was a test in a German TV show comparing the katana and the longsword, and it turned out that:

  • Both had similar cutting ability
  • There was a variety of European swords, not all of them meant to stab. The katana can stab, but some European swords can stab better.
  • Katanas were flexible, but would not spring back when the force that bent them was removed
  • A European sword could cut into another blade; a katana most probably wouldn't

It’s all about context

21st June 2015 Frameworks and solutions, Java EE, Weblogic No comments

Many applications need some kind of context propagation without explicitly passing context objects down the invocation stack. A context can contain data of the logged-in user, request data, performance data or some other required information. In most cases we need to propagate the context within one JVM instance and one thread. As with every task in software development, we have a variety of tools at our disposal to help us with context propagation. This article will discuss some of them and give some hints on when to use each.

ThreadLocals

The most basic and well-known context propagation tools are ThreadLocal and InheritableThreadLocal. Both of these classes use core JDK mechanisms to propagate context. A thread-local variable accessed by different threads returns a separate copy of the variable for each thread. Inheritable thread locals additionally expose this copy to threads spawned from the parent thread. But when using ThreadLocal variables one must be aware of the pitfalls associated with them:

  • Failing to remove thread-local variables – this can happen when components that set a ThreadLocal variable do not clean up after themselves in a reliable way. Threads are pooled in most web and application servers, so these variables will remain associated with the threads and will be visible to other requests processed by them. The classloader that loaded the application will not be garbage collected, as there is a 'live' thread-local reference associated with it. As a consequence the application can suffer from memory leaks (both heap and perm gen) and heisenbugs.
  • Increase in complexity – thread-local variables should be used to pass context data. If they are used to pass arbitrary data, the complexity of the application will increase.

Remember to clean up your thread locals after request processing is done!
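
A minimal sketch of this pattern (class names are made up): the context lives in a ThreadLocal for the duration of request processing and is always removed in a finally block, so pooled threads don't leak it into the next request.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;

public class UserContextHolder {

    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void set(String user) { CURRENT_USER.set(user); }

    public static String get() { return CURRENT_USER.get(); }

    public static void clear() { CURRENT_USER.remove(); }
}

@WebFilter("/*")
class UserContextFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        UserContextHolder.set(((HttpServletRequest) req).getRemoteUser());
        try {
            chain.doFilter(req, res);
        } finally {
            UserContextHolder.clear();   // crucial with pooled threads
        }
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
```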

TransactionSynchronizationRegistry

Another tool at our disposal is the TransactionSynchronizationRegistry. This object is associated with a JTA transaction and allows us to put and get context objects from the TransactionSynchronizationRegistry instance. Passing context objects using the TransactionSynchronizationRegistry will work across different threads as long as they are associated with the same transaction. But semantically, using this object for context propagation is not quite right and may encourage weird ideas about spanning transactions across all layers of the application, starting them at the web layer when it's not really needed.

  • Do not use TransactionSynchronizationRegistry when your transactions are short and you would need to make them longer just to use it
  • Think twice before using it, as it is not meant to be an application-level context carrier
  • When you use TransactionSynchronizationRegistry with distributed transactions, prefix your context keys to avoid key name collisions with other apps

Here’s how to use it (@Resource is a Java EE annotation – code must be run inside a Java EE Container)

Setting context:
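
A minimal sketch (bean and key names are made up):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.transaction.TransactionSynchronizationRegistry;

@Stateless
public class OrderService {

    @Resource
    private TransactionSynchronizationRegistry txRegistry;

    public void placeOrder(String orderId, String userId) {
        // key is prefixed with the application name to avoid collisions
        txRegistry.putResource("myapp.currentUser", userId);
        // ... business logic running in the same JTA transaction
    }
}
```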

Getting context later in request processing pipeline:
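
And reading it later – again a sketch, using the same made-up key:

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.transaction.TransactionSynchronizationRegistry;

@Stateless
public class AuditRepository {

    @Resource
    private TransactionSynchronizationRegistry txRegistry;

    public void saveAuditEntry(String action) {
        // available as long as we are still in the same JTA transaction
        String userId = (String) txRegistry.getResource("myapp.currentUser");
        // ... persist the audit entry for userId and action
    }
}
```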

Weblogic WorkContext

If you do not have a transaction associated with every request being processed, or the transaction is not associated with the request later in processing, and if you are fortunate enough to use Weblogic, then you can use WorkContext. This patented feature allows for context propagation like TransactionSynchronizationRegistry, but it does not require a transaction context and can be propagated to other Weblogic cluster servers using a variety of protocols, including JMS, SOAP, thread and transaction to name a few – see PropagationMode for the full list. There is a caveat – you must have a context key to get your context object. So what if you don't have such a key passed on as a parameter? One solution might be to use a framework feature like RequestContextHolder or the JAAS security context and use data passed in that context as a key, if the request is processed in one thread. Another may be to use some request-unique data as the key.

But what if you use thread pools (or thread managers like Work Managers on WLS) and do some work in threads other than the one processing your request? If you can use Work Managers instead of Java SE thread pools, then do it. Java EE apps should use managed thread pools, and Work Managers allow you to manage threads (strictly speaking, Work Managers are NOT thread pools, they are more like rule sets for Weblogic). If you do so, then a WorkContext in propagation mode Work will be available in the work instances executed by Work Managers. If you can't use Work Managers, then you should consider passing context data as a parameter to the worker thread.

Usage of this neat feature is described in the Oracle WLS docs.
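
A minimal sketch, assuming the weblogic.workarea API (the key name is made up – check the exact types and exceptions in the WLS docs):

```java
import weblogic.workarea.PrimitiveContextFactory;
import weblogic.workarea.PrimitiveWorkContext;
import weblogic.workarea.PropagationMode;
import weblogic.workarea.WorkContextHelper;
import weblogic.workarea.WorkContextMap;

public class RequestIdWorkContext {

    // application-prefixed key to avoid collisions with other apps
    private static final String KEY = "com.example.myapp.requestId";

    public static void set(String requestId) throws Exception {
        WorkContextMap map = WorkContextHelper.getWorkContextHelper().getWorkContextMap();
        // PropagationMode.WORK makes the context visible in Work Manager work instances
        map.put(KEY, PrimitiveContextFactory.create(requestId), PropagationMode.WORK);
    }

    public static String get() throws Exception {
        WorkContextMap map = WorkContextHelper.getWorkContextHelper().getWorkContextMap();
        PrimitiveWorkContext ctx = (PrimitiveWorkContext) map.get(KEY);
        return ctx == null ? null : (String) ctx.get();
    }
}
```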

  • WLS is necessary to use WorkContext – you bind yourself to WLS by using it, but you can shield yourself partially by extracting this feature into a separate infrastructure service component (maybe a custom CDI scope).
  • Propagating contexts other than simple ones in a propagation mode like SOAP will most probably result in lower performance – but we can use the thread mode.
  • Keep in mind to use an application prefix for WorkContext keys to prevent name collisions, if there is a possibility that multiple apps will use this feature.
  • It is mostly useful if you want to send context data to some other app – so mostly for technical stuff, not business related.
  • It can be propagated between work instances in WLS

CDI

Last but not least. CDI is the Java EE specification for dependency injection and contextual data (Contexts and Dependency Injection). CDI, among other neat features, specifies a well-defined contextual lifecycle of CDI beans in well-defined scopes and a way to inject them into managed beans. Since Java EE 7, CDI support is described in other specifications such as EJB, and this gives solid foundations to build our apps on. Of course CDI was supported in Java EE 6 (EJB 3.1), but it was not properly described in the specifications and implementations of this support were sometimes not ideal. Using CDI we can inject beans in request, conversation, session or application scope and the dependent pseudo-scope, with the possibility to add our own scopes. In order to use CDI to pass context data, we need to use the proper scope and inject our variable into a Java EE container-managed bean. Session and conversation contexts are available only when the injection is made into a managed bean while processing a servlet request. According to the CDI spec, request and application scopes should be available for web service invocations or EJB remote invocations (6.7.1 Request context lifecycle of the CDI spec), but in Weblogic 12c (12.1.3) this still does not work properly.

Here’s how to use it.

Context definition:
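
For example, a request scoped holder bean (names are made up):

```java
import javax.enterprise.context.RequestScoped;

@RequestScoped
public class RequestContext {

    private String userId;

    public String getUserId() {
        return userId;
    }

    public void setUserId(String userId) {
        this.userId = userId;
    }
}
```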

Setting context:
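
For example, in a servlet filter at the start of request processing (a sketch – CDI injection into servlet components works inside a Java EE container):

```java
import java.io.IOException;
import javax.inject.Inject;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;

@WebFilter("/*")
public class RequestContextFilter implements Filter {

    @Inject
    private RequestContext requestContext;   // CDI proxy bound to the current request

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        requestContext.setUserId(((HttpServletRequest) req).getRemoteUser());
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}
```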

Getting context later in request processing pipeline:
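
For example, in an EJB further down the call stack (a sketch, reusing the RequestContext bean defined above):

```java
import javax.ejb.Stateless;
import javax.inject.Inject;

@Stateless
public class AuditService {

    @Inject
    private RequestContext requestContext;

    public void audit(String action) {
        // resolves to the bean instance bound to the request being processed
        String user = requestContext.getUserId();
        // ... store the audit entry for 'user' and 'action'
    }
}
```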

Conclusions

In most cases ThreadLocals are sufficient, but in my opinion CDI will soon be a better solution for Java EE apps, and in some cases (like when your context does not start at some web service level) it is a neat feature already. In Spring you have similar possibilities, as you can have request and session scoped beans, and with Spring Web Flow you also get conversation scope and flow scope beans. You can also create custom scopes. I do not describe it here, as it is well documented in the Spring docs, and Spring itself uses ThreadLocals to achieve context propagation in many cases (see the RequestContextHolder implementation).

Domain Driven Design, Microservices and Monoliths

14th June 2015 Domain driven design, Software development 4 comments

The DDD Reference is quite a good reminder of the rules and practices of DDD. You can get it from the Domain Language site, but do be aware that it is not meant as a DDD introduction or a full description. This post is also meant to be a reminder of the most overlooked and crucial aspects of DDD and a small discussion of the new terms introduced in the Domain Driven Design Reference. I won't introduce DDD here – there is an introduction to DDD available on half a million blogs (more or less), and there is a link to a presentation on my LinkedIn profile.

I recently read a few interesting articles about architecture and DDD (Domain-Driven Design Revisited, The Biggest Flaw of Spring Web Applications, Whoops! Where did my architecture go). These show how complex it is to design good software. Many projects end up as a big ball of mud, and we keep following new approaches (see Monolith First, the response to this article, or the interesting view by Sander Mak). My opinion is similar to Sander's. Most of the time we develop software we do have the tools; we just don't use the right ones, or we are constrained from using them by standards, lack of people or stubbornness of senior staff, maybe lack of time to find the proper tool using exploratory development or a sacrificial architecture. There are many different reasons.

Should you start with a monolith or with microservices – it depends. But you should start with a good design, and here Domain Driven Design can help us. One of the fundamental building blocks of DDD is the bounded context: an explicitly defined context that distinguishes and separates parts of a bigger model. Each bounded context can be developed independently and can use a different implementation. Integration between bounded contexts is described by a context map. So one way to approach the monolith vs. microservices dilemma is to start with designing bounded contexts, design modules accordingly (bounded context <> module), and then we have a good foundation to go towards more coarse-grained or more fine-grained modules. The DDD context map will set constraints on integration, and this way we can avoid tangled modules that would not allow us to extract microservices, use heterogeneous cluster deployments (where one service or set of services is deployed on more instances because it needs to process more requests), or reap other benefits of a microservices architecture.

Domain Driven Design also gives hints on designing each of the bounded contexts. I would like to focus on layered architecture and the use of packages as a way to hide classes that should not be visible outside the package. This rule, about which Oliver Gierke reminds us, is an excellent way to mark boundaries and interfaces between bounded contexts in monolithic architectures, allowing us to evolve towards microservices should it prove to be viable. This simple rule can shield the internal details of our bounded context from being used in ways we did not intend. In a non-trivial project this will be even more beneficial. We would not need extra tools to check the rules set by the designer of the bounded context, because the compiler would not allow introducing invalid dependencies. On the other hand, checking dependency correctness can be a little bit harder, as some rules will not be checked by the compiler inside the slice. Fortunately there are tools to help us here, like Sonar, Structure101, Dependometer or Sonargraph to name a few.
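
For illustration (package and class names are made up): only the facade of a context is public, while its internals stay package-private.

```java
package com.example.shipping;

// The published entry point of the 'shipping' bounded context.
public class ShippingService {

    private final RoutePlanner planner = new RoutePlanner();

    public String planRoute(String orderId) {
        return planner.plan(orderId);
    }
}

// Package-private: the compiler rejects any reference to it
// from outside com.example.shipping.
class RoutePlanner {

    String plan(String orderId) {
        return "route-for-" + orderId;
    }
}
```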

In my opinion Domain Driven Design gives us a lot of good rules to follow in order to build good software. Going towards a monolith or towards microservices should not be a problem once you have solid foundations. But remember that Domain Driven Design is a tool – if used in the wrong context, it can be a burden.

Findbugs pre-commit checks on developers sandbox

7th June 2015 Quality management, Software development No comments

I like to check my code for potential bugs with Findbugs before each commit. As a Findbugs analysis can be time consuming, I check only high priority potential bugs and let the CI server do the rest of the code analysis and tests. A proper way to use Findbugs in this context is to enable it as part of a dedicated profile with a filter file specified – the goal is to fix the most dangerous issues fast and to break the build due to static analysis less often on the CI server. Doing full-blown static code analysis before every commit would be counterproductive in most cases.

There is a simple way to execute Findbugs with a filter file on your developer sandbox, even in multi-module projects. One way is described on the Findbugs Maven plugin site. But as I experimented with it, I found out that the tools project does not need to be part of the project structure that is being checked; it only needs to be added as a dependency. This gives more possibilities for managing Findbugs filter files, as you don't need to include another module in the aggregating project or use the Maven extension mechanism. You just include one library and then specify which filter file you want to use. Below is a snippet of the aggregating project that gathers all modules and uses Findbugs for pre-commit analysis:
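
Something along these lines (groupIds, versions and the filter path are placeholders for whatever your build-tools project uses):

```xml
<!-- Sketch of the pre-commit profile in the aggregating pom. -->
<profiles>
  <profile>
    <id>precommit</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <version>3.0.1</version>
          <configuration>
            <!-- only high priority bugs for the fast pre-commit check -->
            <threshold>High</threshold>
            <!-- resource path inside the build-tools jar -->
            <includeFilterFile>findbugs/pre-commit-filter.xml</includeFilterFile>
          </configuration>
          <dependencies>
            <!-- the filter file travels in this artifact -->
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>build-tools</artifactId>
              <version>1.0</version>
            </dependency>
          </dependencies>
          <executions>
            <execution>
              <phase>verify</phase>
              <goals>
                <goal>check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```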

The filter file is located in the src/main/resources folder of the build-tools project.

Findbugs is enabled automatically, and the build will be broken if any potential bugs with High priority are found. If you would like to create a Findbugs report in addition to performing the check, then you need to add a dependency as a profile element child node:
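
That could look roughly like this (coordinates are placeholders; the build/plugins part stays as shown above):

```xml
<profile>
  <id>precommit</id>
  <!-- build/plugins section with the findbugs-maven-plugin as above -->
  <dependencies>
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>build-tools</artifactId>
      <version>1.0</version>
    </dependency>
  </dependencies>
</profile>
```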

Continuous Quality Monitoring (part 2)

7th June 2015 Continuous Quality Monitoring, Quality management, Software development No comments

In the first post I discussed different kinds of quality and gave some reasons why quality tends to suffer as a project grows. In this post I'll try to give some hints on how to keep quality high. If we have our quality goal set, then we must use the proper tools and actions to enforce that quality. Enforce is a good word here, because quality must be enforced after being measured – there are a lot of different inputs and changes in today's projects. Requirements change, developers change, libraries change. We must adapt quickly and provide fast feedback about quality. Sounds like Continuous Integration? 🙂  Traditional Continuous Integration does include running various tests – mostly unit and integration tests, but other tests can also be executed. In the case of some projects, whose quality goals include matters like high performance or high security requirements, we must execute performance and security tests on the CI server. Not on every build, not even in a build pipeline triggered by an SCM change, but according to a schedule, like nightly builds. If we add this and add some static analysis tools, then the process is something more than Continuous Integration – it is Continuous Inspection.

Why do these inspections on a CI server? Well, I'm a fan of automation – Jenkins, Hudson, Bamboo or some other CI server does not get tired (if we don't count build queues and garbage collection 😉 – but here we can change the CI process or add hardware to make it run more smoothly), it does not have bad days, does not need coffee and doesn't fall asleep when reviewing class change number 53. But let's get down to the trenches.

Here’s what we can monitor (a non-comprehensive list):

  • Code standards – we can use Checkstyle, PMD or some other tool
  • Code metrics – here we can use Checkstyle, Sonar, Dependometer, JDepend
  • Library and classpath issues – JBoss Tattletale
  • Architecture and design issues – Sonar, Dependometer, Structure101, Sonargraph, JDepend
  • Potential bugs – Findbugs, Sonar (including security issues)
  • Bad practices – PMD
  • Potential security vulnerabilities – Sonar, OWASP Zed Attack Proxy
  • Insufficient test coverage – JaCoCo (we can break the build if the coverage threshold is not met – see the sketch below)
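
For example, the jacoco-maven-plugin check goal can fail the build when a coverage limit is not met (the version and the 80% line ratio below are just placeholders):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.7.4.201502262128</version>
  <executions>
    <execution>
      <id>prepare-agent</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>coverage-check</id>
      <goals>
        <goal>check</goal>
      </goals>
      <configuration>
        <rules>
          <rule>
            <element>BUNDLE</element>
            <limits>
              <limit>
                <counter>LINE</counter>
                <value>COVEREDRATIO</value>
                <minimum>0.80</minimum>
              </limit>
            </limits>
          </rule>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```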

There are more tools (see this link), but I focus here on tools that I have experience with and that can be integrated with a Continuous Integration server. Check out Google CodePro Analytix – an excellent developer desktop tool to analyze code – plus it helps to write unit tests and generate comment skeletons. Check out the tools for architects that can compute quality metrics, visualize code structure and perform simulated refactorings – Dependometer, Sonargraph (a.k.a. SonarJ) and Structure101. Of course JBoss Tattletale also does dependency visualization, but it can't simulate refactorings.

In my opinion Sonar is an underestimated tool, for a few reasons:

  1. It still grows and new things are being added that go unnoticed (like provisioning and disabling automatic project creation)
  2. It often gets misused – I've seen Sonar many times without custom rule sets, showing something like half a million Dead Local Store or unnecessary boxing (BX_BOXING_IMMEDIATELY_UNBOXED) issues reported at critical (DLS) or major (boxing) level, which, while important and a code smell, are not critical or even major for 99% of the projects out there. Here the problem is half a million reports of one issue, not that the code boxes a value and immediately unboxes it.
  3. Often the quality gates feature is not used
  4. Sonar can and will report false positives, so again it must be configured and used appropriately

To use Sonar effectively we must configure it according to our project's standards and use quality gates. It should be plugged into the CI process – that is the main point of this discussion, but it can also be used by developers on their desktops. And since usage via a CI server plugin is already quite popular, I will discuss the Maven plugin, which can be used as part of a CI build pipeline or on a developer desktop.

To maximize the benefits of introducing Sonar we want to automate the process, and we want the CI server to notify us of errors found. We can do this using Maven and Sonar – if the quality of the project is below some threshold, then the build will be broken by the Sonar or Maven plugin. Static analysis of code with Sonar (including Checkstyle, PMD and Findbugs) is pretty popular, so I won't write about it. What is maybe not so well known and popular is that we can use Sonar to check some architectural constraints like:

  • Layering
  • Dependency direction

These checks can be implemented in Dependometer or straight in Sonar. Using Dependometer gives more flexibility (like Structure101), but Sonar provides the possibility to break the build as feedback. I like this option, as it forces attention. Sadly there is no Dependometer plugin for Sonar, so we must integrate it with Maven and build the project with Maven on the CI server.

In order to analyze a project in Sonar we can use a CI server plugin or Maven. Below is an example of the Maven configuration:
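
For example, properties like these in the pom (or in settings.xml) – the host, database URL and credentials are placeholders – and then the analysis is run with mvn clean install sonar:sonar:

```xml
<properties>
  <sonar.host.url>http://sonar.example.com:9000</sonar.host.url>
  <!-- pre-5.2 SonarQube analyzers talk to the Sonar database directly -->
  <sonar.jdbc.url>jdbc:mysql://sonar.example.com:3306/sonar?useUnicode=true&amp;characterEncoding=utf8</sonar.jdbc.url>
  <sonar.jdbc.username>sonar</sonar.jdbc.username>
  <sonar.jdbc.password>sonar</sonar.jdbc.password>
  <sonar.projectName>my-project</sonar.projectName>
</properties>
```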

You can find the full list of analysis parameters in the Sonar documentation, I don't want to repeat those docs 🙂 The important thing to note is that we provide the Sonar database properties – the Sonar Maven plugin is an analyzer and stores information in the database, so it needs to point to the same database as the one used by the Sonar web app.

We can also use e.g. the Jenkins plugin to integrate Sonar with Jenkins. After we do this we are able to inspect the code on various levels, as Sonar has plugins for Findbugs, Checkstyle, Structure101, JBoss Tattletale (unofficial) and PMD, and has a large rule base of its own (e.g. OWASP Top 10 violations).

Adding an architectural constraint in Sonar is simple – in Rules we search for "Architectural constraint" and click on the rule.

[screenshot: sonar-arch-rule]

Then we enter the name patterns of the source and target packages (e.g. fromClasses **.web.** and toClasses **.dao.**), specifying that the source packages can't depend on the target packages.
[screenshot: sonar-arch-rule-activation]

In Dependometer we can enter allowed dependencies and specify thresholds on various metrics. Specifying allowed dependencies is more convenient in my opinion, but both ways give us what we want. We monitor not only project standards, inspect code for bad practices and check for possible bugs, but also look for layering violations, cycles, and code that can be difficult to extend or maintain (complex, tangled, unstable). Tattletale can check if there are duplicated libraries in our application deployment archive (WAR, EAR – after doing some tricks). Now we have Continuous Inspection 🙂

Have a nice day!

Continuous Quality Monitoring (part 1)

5th June 2015 Continuous Quality Monitoring, Quality management, Software development No comments

This post will be about monitoring project quality on various levels. It is an introduction to the subject of continuous quality monitoring. Next I will give some technical details on how to monitor this quality. In later posts I'll write about my idea of a development process and the place of quality management in it, so if you're interested, check out this blog in a few days. Please check the posts given in the references at the end of the article. They discuss important aspects of quality management that will give you yet another perspective. I would like to focus more on the technical details of monitoring.

We’ve seen some projects that had high quality, some with low quality. Most projects start with high quality expectations and promises. Some have standards setup by architect or applied as part of corporate policy. Most project have some sort of automated tests. Sadly many projects lack proper monitoring and continuous quality monitoring. I guess most of us experiences cutting down the effort spend on unit or functional testing. Architecture and quality metrics are measured even less often.

Let’s set common definition of quality from the perspective of quality assurance:

  • internal quality – this is measured at the code level. Here we measure quality metrics like distance from the main sequence, number of cyclic relations, afferent coupling, efferent coupling, instability etc. We can also check if the code is written according to the standards chosen for the project. Business people do not see this quality, but are nevertheless affected by it in the end.
  • external quality – here we measure whether the software does what it is meant to do and has the proper functional and non-functional characteristics. Business people see this type of quality, and it gets more attention. This type of quality is about functional testing, running various security checks, and performance, load and stress testing of the application. We can also measure some architecture-level quality attributes like security, reliability and performance.

For the majority of projects we cannot completely separate the two, because they have the same source. If the code has poor quality, then it probably will not be scalable, secure or reliable. It will have a lot of bugs and will be difficult to maintain. And most projects start with high expectations and good will. So what goes wrong? It is hard, if at all possible, to give a definitive answer, but from my experience I can pinpoint a few reasons:

  • Lack of a common quality vision and commitment to this vision – management, business and engineers must have a common understanding of quality, the consequences of low quality, and the level of quality they want to achieve. Showing examples of failed projects will help to reach a common understanding. On the other hand, engineers have to remember that software is not a work of art made for the sake of making software.
  • Lack of continuous inspection – continuous integration with continuous inspection is a must in today's projects. We simply don't have the time and money to discover failures later in the process of making software.
  • Lack of proper quality assurance – this is an interesting point. A common example is writing tests just to achieve x% coverage, which is simply nothing more than throwing away money. Show me a business that wants to throw away money 😉 Another example, mentioned by Petri, is incorrectly introduced code reviews.
  • Lack of proper architecture and monitoring of the implementation of that architecture – requirements will change, especially if the product is successful, so it must be prepared to be extended and modified without harm to its quality.

What can we do? We can focus on our quality goal and measure it. I won't discuss how to set up the quality goal, as I believe this depends on the organization, its standards and culture. I want to focus on the technical aspects of measurement.

References to articles:

Multiverse and the Witcher

1st June 2015 Astrophysics, Science No comments

Many people are playing Witcher 3 now. It has been about two weeks since the premiere of this game. In the Witcher universe, monsters (vampires, werewolves, etc.) came into existence in our world due to the Conjunction of the Spheres – a dramatic incident when two dimensions collided. I don't know if Andrzej Sapkowski, the author of the Witcher series, ever heard about the multiverse, but the idea of multiple universes is a real physical theory. I've seen a documentary – Universe or Multiverse – hosted by Brian Greene, which explains the idea step by step. It is thrilling and exciting. The idea of the multiverse arises from three major lines of thought – string theory, eternal inflation and dark energy. I'm not going to repeat the principles here, but just to give a scratch of the information let me write this – the level of dark energy is crucial for life in each of the universes that make up the multiverse, eternal inflation explains why the universe keeps expanding, and string theory describes the construction of matter and the multiple possibilities of that construction. And the proof of the existence of the multiverse may be evidence that two universes did collide and left an energy trace of it – a cosmic fingerprint. If you like astrophysics and science – read about it. If you play the Witcher – also consider it 🙂 And watch out for ghouls 😉