Apache HTTP Client – Client side SSL certificate

Hello! I’m sharing this snippet with you because it’s been a little tricky to make it work, so I hope to save you some time.
Note: I’m referring to Apache HTTP Client 4.3. Its API is pretty “lively”, so before you dive in, make sure this applies to the version you’re using as well.


Creating an HTTP connection using Apache HTTP Client is pretty straightforward, and that’s why we love it. Fine-tuning how the client works is essential in production environments, and it does require a bit of tweaking, but the results are excellent.

It’s no surprise that client-side SSL certificates (also known as 2-way SSL or mutual SSL authentication) are doable, but of course, being something that interferes with the intimate nature of the HTTP conversation, the process isn’t exactly straightforward.

If you’re unaware of this security practice, here’s the shortest possible summary:
to provide stricter security for HTTPS connections involving a very select set of parties, you can give each of them an SSL client certificate that identifies them. The SSL handshake will not succeed if the server and client certificates don’t match. This way, no credential is sent over the Internet, just encrypted data that will be decrypted correctly only if the certificates match.
If you want to know more, here’s a useful article by Robin Howlett.


Certificates are a pain already: incomplete chains, root certificates that need updating, and so on. So you can assume this isn’t going to be a piece of cake, right?

First off, let’s be clear about the fact that you are not going to configure this for a specific connection. The HttpClient instance itself will be built around the fact that it supports that certificate.

First, let’s create a keystore:

KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());

The factory will create a JKS keystore by default. You can change which one you get by passing a different type as the parameter; PKCS12, for example, is an option. I’m sticking with JKS for this example. This implies you already created a keystore and added the certificate to it.
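If you want to sanity-check keystore types before wiring anything into the HTTP client, here’s a tiny pure-JDK sketch (no Apache libraries involved). Note that passing null to load() is just a trick to initialize an empty store in memory; with a real client certificate you would pass an InputStream and the store password.

```java
import java.security.KeyStore;

public class KeystoreTypes {
    public static void main(String[] args) throws Exception {
        // Ask for explicit types instead of the platform default.
        // "JKS" and "PKCS12" are both shipped with standard JDKs.
        KeyStore jks = KeyStore.getInstance("JKS");
        KeyStore p12 = KeyStore.getInstance("PKCS12");

        // load(null, null) initializes an empty keystore in memory.
        jks.load(null, null);
        p12.load(null, null);

        System.out.println(jks.getType() + " / " + p12.getType());
        System.out.println("entries: " + jks.size()); // 0 entries in a fresh keystore
    }
}
```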

Second, we load the keystore:

ks.load(inputStream, "password".toCharArray());

Any InputStream will do. We need to provide a password to decode the keystore, of course.

SSLContext sslContext = SSLContexts.custom()
                          .loadKeyMaterial(ks, "password".toCharArray())
                          .loadTrustMaterial(null, new TrustSelfSignedStrategy())
                          .build();

We basically create a custom SSL context: we load the keystore as key material, then load the trust material (the null parameter defaults to the cacerts file).

SSLConnectionSocketFactory sslConnectionFactory =
                                new SSLConnectionSocketFactory(sslContext,
                                        SSLConnectionSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);

We create a socket connection factory providing the SSL context. The hostname verifier is up to you, and it pretty much depends on the service you will be interacting with. The ALLOW_ALL_HOSTNAME_VERIFIER is loose, so you might want to consider moving to the strict one if the server allows you to.

Registry<ConnectionSocketFactory> registry = RegistryBuilder.<ConnectionSocketFactory>create()
				.register("https", sslConnectionFactory)
				.register("http", new PlainConnectionSocketFactory())
				.build();

This registry maps each protocol to the socket factory to use.

BasicHttpClientConnectionManager connManager = new BasicHttpClientConnectionManager(registry);

We instantiate a connection manager that will use the registry. We’re using the basic connection manager here because it’s simpler for the example. Connection managers are a vital part of tuning HTTP client, so make sure you’re using something that suits your use case.

HttpClient client = HttpClients.custom()
                        .setConnectionManager(connManager)
                        .build();

We finally instantiate our HTTP client, which will eventually succeed in talking to our highly secure peer. As we said before, the hostname verifier has to be chosen based on your needs.
Note: this example contains only the commands strictly necessary to make it work. A lot of other configuration options may be needed in your context.

And we should be set to go.

Let me repeat it: before you adopt this practice, make sure it suits your HTTP Client version. Various refactorings and API changes may strongly impact the way to do it.

Hope this helps! Take care.


Lonely as ASDF

I’ve hinted at this concept a few times in the past.
Being a software engineer can be extremely lonely sometimes.

This time I’m not trying to be funny; I want to share something important that may help you mitigate this feeling.

There are a few quite common scenarios.

The most typical and least severe one is when you get an assignment you still cannot fully grasp. You’ve already asked for clarification a few times, your source of information is now irritated by it, and you are jumping from one obscure item to another. You feel stupid and lost.

Good news is this is often temporary!
There’s a bunch of good reasons why you’re not grasping it, and some of them might be easily solvable. For example, I’m totally unreceptive when I surpass a certain level of stress. The next day? I totally get it.
Another problem is developers’ urge to receive highly detailed information. We struggle to accept that most people don’t think exactly like us. Relax, you haven’t written a single line of code yet; there’s time for details, just grasp the concept for now.
But the thing that works extremely well for me is letting the information settle. You’re overwhelmed by new data now, but a dinner out and a good night’s sleep will bring order to that chaos.

The second one is scarier.
It happens when you’re the one in charge of making certain decisions, you’re asked to make a bold one, and you think you have no one to consult.
You know how your decision will impact the company, but most of all the work of your mates. You can’t forget that you will be held responsible for it, and that there may or may not be a remedial action.
This situation is common to all people in command, but in software it seems even worse because people will build stuff that will need to work for years on top of your decisions.

The good news is you are in charge. It’s way worse when someone makes shitty decisions for you. Of course being “The Man” comes with a price, but no one will yell “not guilty!” pointing at you when you’re not in charge yet followed the wrong directions of the man who is.
Also, sometimes you may be led to think it’s super important that you decide super quickly. If you don’t feel sure, ignore everything and take your time to collect more info. Information collection is one of the most fundamental skills when you’re in command.
Finally, you’re not so alone. Of course you have the last word, but you are probably surrounded by geeks. If you’ve built a healthy relationship with them, they will volunteer to help you out. And by doing so, you might discover talents you had overlooked.

The third one is shattering.
You just realized you’ve done something horribly wrong and it’s impacting customers and users right now, but the problem is so intricate you have no idea how to fix it, and certainly no one but you is supposed to know it. That’s complete desolation. Heart rate skyrocketing, tunnel vision, shaking hands. You will rarely feel so lonely in your whole life; the problem is so scary and you’re so freaked out that your mates stay away from you or even flee.

The good news is that unless you’re in the medical, military or aviation fields, no lives will be lost because of your mistake. If no one dies, everything is going to be fine.
Panicking will not solve the problem, it’ll just make it tougher, and since the issue is already rampaging, whether it takes 5 minutes or 30 minutes to solve won’t make that huge a difference.
And even if everybody seems to be fleeing, no one will refuse to help you out if you explicitly ask for it. Often, just explaining the problem to another human being brings you closer to the solution.

The last one is a real problem.
You get to the office, sit in front of your computer and feel as lonely as you can feel. Nothing makes sense. The work, the office, the keyboard, the colleagues. Literally nothing has meaning to you. The reasons may vary and may come from inside or outside the workplace, but if you recognize that it’s the workplace causing you such distress, then it’s time to leave and never come back.
We are social animals for a good reason: we are more than the sum of our parts.
Feeling lonely because at times you don’t think you can rely on your pals is one thing, but feeling segregated and helpless is a whole different story.
Retracing what broke the idyll can be important to keep it from happening again, but my most sincere advice is: leave.
This kind of toxic situation can lead you to depression and that’s a passage you want to avoid at all costs.
Don’t ask yourself what’s going to happen next if you leave, because I can tell you what will happen next if you don’t: decay.
Remember that if you’re in such a situation you’re one step away from an actual mental illness, and ANY forecast of your future will look dark, so just accept as dogma the fact that ANYTHING is better than what awaits you if you stay.

So you might have noticed that the three key things in all these scenarios are: time, communication and desperation.

Time. This is not a hymn to laziness or procrastination, all I want to say is something you might not realize:

We do a job that pushes our brain to work FASTER

Coding is solving complex problems by formalizing concepts and wiring them together. It becomes easier and easier when you do it for a long time, and your ability to keep control of your overall view becomes stronger. As it grows stronger, you push faster without even knowing it.
When you’re in danger, your adrenaline pumps; it’s our ancestral biology. You become more reactive but less clear-headed. Your perception of time compresses: you’re moving as fast as light, and everything else is still.
In such conditions, you would be able to run from a lion hunting you, but definitely not solve a jigsaw.
Advice: accept this fact and learn to control it, at least in part. Action sports may help a lot in the process.
Understand that the unknown is not one of your daily challenges; it requires time. If you look at the problem-solving approaches of people in other roles, like a project manager or a sales engineer, they don’t rush as much as you do. Not because they don’t care, but because they tackle it in another way.
Welcome the idea that too little information and too much are equally inadequate for commencing your task. You first have to integrate or skim.

Communication. Of course not being supported is a problem, but sometimes you are the one not communicating at all. Frantically yelling at your screen is not communicating, and it scares people. Moreover, when you’re drowning in shit and the world is in time lapse, they all look like lazy asses doing nothing and not caring at all. Trust me, it is you. You can’t expect them to be emotionally dragged into your madness, and if they were, they would become completely useless.
Be humble and generous when things are OK, and people will try to help you when things are KO, whether you want their help or not.

Desperation. When you’re desperate no decision is good. But between one desperation cycle and the next, there’s a time when you feel less miserable, and therefore you’re more clear-headed. If you’re really down, postpone any critical decision that can be postponed to a better time, but don’t forget about them. When you feel you can handle them lucidly, do so: be logical, brave and definitive.
You should never let desperation be an inevitable part of your week, because the lucid times will get shorter and shorter, down to depression.

Finally, listen to this advice very carefully, it’s the most important one.

Never let a workplace that makes you feel lonely poison you to the point you hate what you used to love most.

How (not) to tell your mom how you earn your living

Your mom won’t feel fulfilled until you tell her you’re a “manager”.
The word “manager” has a clear etymology: (person) who manages (persons/things), but
in every non-English-speaking country that actively uses the English word “manager”, that word means: wealthy, money-handling, rampant white collar.

But you don’t want to be a manager, you want to be a software engineer, a web developer, a data analyst.

“Mom, I am a software engineer,” you timidly come out.
“So what do you do exactly?” she asks, emphasizing the word “exactly”.
BEWARE: she’s not really interested in what you do, but since you’re not a manager, she wants to make sure you’re doing fine, and one way to find out is to understand what you do to earn your living, in detail.

This is the origin of some of the most hilarious and annoying anecdotes I’ve ever heard. And yes, some of them involve Italian moms. If your parents are now 65+, this has certainly happened to you as well.

Here are some of the best. These are ALL REAL:

You: I do websites for mobile devices
Mom: Like websites for the computer?
You: Yeah, for mobile devices
Mom: How is that even possible… oh you mean texts!

You: We build a social network
Mom: Facebook?
You: No, not Facebook. It’s…
Mom: Is it like Facebook?
You: No…
Mom: But you said it’s a social network

You: I work at a startup
Mom: Oh God, that’s a problem, right?

You: It’s a tool that helps people share their cooking recipes
Mom: Good lord, you could ask your mom
You: Mom, it’s not for me it’s for other people
Mom: You can’t even pour your cereal and you give advice to others?

You: I’m a software engineer
Mom: Basically computers, right?
You: Yes
Mom: Good, the boy that fixes my laptop from viruses is very pricey

You: Yeah, in my work I need to use the Internet a lot
Mom: Be careful

You: I’m a freelance web developer
Mom: So you have to find your own customers
You: Yes
Mom: But how do you do that? You have to be skilled to do that

You: Yes mom, it’s an intensively intellectual work
Mom: All that thinking must be very stressful. I wonder why you never looked into that postal office job offer I told you about

And I could go on and on.

At the end of the day, even if some of them are tragically annoying, they are all driven by love. Don’t get mad at your mom; hug her (and maybe she will stop talking for a minute or two).

The “Arrival” Theory

If you haven’t seen the movie Arrival, shame on you. I didn’t enjoy it as much as I thought I would, but it’s very good and it’s definitely food for the brain. Anyway, if you’re considering watching it, stop reading here and maybe read this article AFTER the movie, because spoilers follow.

So, in very short, in the movie these guys meet aliens. They eventually realize the aliens are multi-dimensional creatures, and therefore the time dimension is just one of the many dimensions they perceive.
In the process of getting to know them, they learn the weird alien writing system which is non-linear, pictogram based, and seems to express concepts as events crossing time. Whatever I meant here, seriously, I don’t understand what I just wrote.

And here’s the reason why I’m talking about this. The main character of the movie starts getting glimpses of the future in her head as she’s learning the alien language. Not because of magic, but because “when you learn a foreign language, you slowly understand the way of thinking of an entire People”.

I think there’s a certain degree of truth in this idea and even before the movie itself, I could clearly feel it. The natural complexity of my mother tongue that sometimes is motivated by the mere beauty of its sound is an excellent representation of the Italian aesthetic sense. Then, gaining a certain skill in the English language taught me more of the Anglo-Saxon culture than most of my direct experiences.

But since I deal with weird languages every day, a question arose in my head.

Does this apply to programming languages as well?

Geeks are a subculture, but if you get to know developers well, you quickly realize every programming language creates pocket subcultures as well. This is even more interesting considering we live in different countries, nonetheless some peculiarities cross oceans. It’s one of the purest, most genuine forms of globalisation.

Ignore anyone you know who’s doing this job just because “life”, and consider everyone who’s doing it for passion. Now group them by the programming language they dig, and you will soon discover that you’re outlining tribes.
I’m not talking about just programming mindset, but everyday life.

I don’t know if they chose the programming language or the programming language chose them, but in the end you can see behavioral patterns.
Note: often times a programming language implies a field of application, so that should be part of the schema as well.

I must admit I didn’t deal with tons of programming languages or programmers in general, but for the pure fun of it, read my personal conclusions. Let’s be clear, this is just for fun, don’t get offended or anything!

Javascript. The Artists. The adopters of the most popular programming language in the world (as we speak) are characterized by a big ego and pride in anything they do. They’re always willing to demonstrate that theirs is the biggest, but for a noble reason: an urge of competitive creativity. Creativity and the need to turn ugly things into beautiful things is in their DNA (is that why they chose Javascript?), and this applies to work and personal life.
Directly controlling everything is part of their mindset, so if something happens in their kingdom, they know.

Java. The Architect. Adopters like myself are thinkers, sometimes over-thinkers. We don’t like to hurry and love planning stuff in every minuscule detail, in programming and in the real world, to the point we explode in rage when things don’t go as planned. Many times the planning is so important it brings us nowhere, like when we’re at the grocery store with a list of things to buy: we literally take them in list order and ignore proximity.
We always think big, and every dream has to do with radical changes, not everyday battles.
Delegation and trust are more than fine with us, also due to our natural-born laziness.

Scala. The Orchestra Director. They are pretty much like the Java guys, with an even stronger need to orchestrate everybody and everything; control maniacs, maybe? But somehow they balance usefulness and taste in a very peculiar way. All the Scala devs I’ve met are grammar nazis, guess why.

Erlang. The Scientist. Though I don’t have lots of friends digging Erlang, they all seem to be strange creatures, living between a strong mathematical mindset and an attitude to be free thinkers. Among all developers, they’re the most interesting ones, both professionally and from a personal point of view.

PHP. The Swiss Army Knife. Problems need to get solved, whatever tool or trick it takes. Independent, generous and frenetic, travelers and open minded, they are often all over the place. They sometimes try to act cool, but the reality is they’re very practical. Planning is not their thing, and most of the time they get away with it thanks to an astonishing instinct.

C++. The Guru. They should put C++ developers in the dictionary as a synonym for enlightenment. Pragmatic, calm, to the point you want to strangle them. These developers are rare people, as professionals and as humans. Often old fashioned, their tendency to dig under the surface of things makes them reliable and a source of interesting conversations.

Wish I knew more Go and Ruby developers, so I could talk about them too. I met some, but not enough to form an opinion.

You’re allowed to think I’m disgustingly romantic, but I think there’s a beautiful thread connecting us geeks, based on how we solve problems.
It’s comforting to think that on the other side of the world there’s a Bay Area rock star or an industrious Indian pal that knows my daily challenges and shares my mindset in problem solving.
Trust me when I say this work can be pretty lonely, and this loneliness can get scary when you need to make future-changing decisions. But you know what makes it easier? Another Java developer on StackOverflow saying you’re an “idiot” because you didn’t consider the obvious. A C++ developer would say “You have a long road ahead”, but you get the concept!


Spring: a foundation block for Java programming


I want to clarify that this article comes from a question I got during a consulting job for a local software company. They needed some updates on the “real world” before diving into a new project. When I introduced the DI/IoC pattern with Spring, they raised their eyebrows and asked “Is it worth it?”. Keep in mind their team was maintaining software originally built in the early 2000s.
Note: if you’re a software engineering veteran, you might ask yourself if there really is someone out there not using Spring, and the answer is: you can’t imagine how many.

I could end it here and say “It is”, but I think one of the major obstacles people face when adopting a technology that changes the way you program is motivation. If the motivation is high and understood, people will simply go for it without further ado.

But the benefits of a design pattern are not like a new reporting library. The benefits are subtle and show up in the long term.

What I’d like to do here is guide you through the motivations for adopting Spring as a foundation item of your projects, even the simplest ones.

This article won’t dig into incredible findings or things you didn’t already know if you’re a Spring aficionado, as it’s dedicated to people who don’t know anything about Spring and are interested in learning about it.


Spring is a robust, powerful implementation of the Dependency Injection pattern.

In the dependency injection pattern, classes just declare which dependencies they require in order to work, but do not express how those should be instantiated or which implementation should be used.
The class has to know about the “contract” of its dependencies, a “recipe” for how methods should be invoked. In Java, these are typically interfaces.
When a class with dependencies is instantiated, an injector takes care of resolving the dependencies and fulfilling the contracts. Giving up control over the instantiation of these dependencies implements the Inversion of Control pattern.
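To make the pattern concrete before any Spring enters the picture, here’s a minimal, framework-free sketch. All the names (MessageSink, ConsoleSink, Greeter) are made up for illustration; the point is that Greeter declares its dependency and never decides which implementation it gets.

```java
// A minimal, framework-free sketch of dependency injection.
interface MessageSink {
    void accept(String message);
}

class ConsoleSink implements MessageSink {
    public void accept(String message) {
        System.out.println(message);
    }
}

class Greeter {
    private final MessageSink sink;

    // The class declares its dependency via the constructor and never
    // decides which implementation it receives: that's the injector's job.
    Greeter(MessageSink sink) {
        this.sink = sink;
    }

    void greet(String name) {
        sink.accept("Hello, " + name);
    }
}

public class DiSketch {
    public static void main(String[] args) {
        // Here main() plays the role of the injector; Spring replaces this
        // manual wiring with configuration (beans.xml or annotations).
        Greeter greeter = new Greeter(new ConsoleSink());
        greeter.greet("world");
    }
}
```

Swapping ConsoleSink for any other MessageSink implementation requires no change to Greeter at all, which is the whole point.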

Why all this mess?

This is probably what those guys thought. So let me explain some very practical advantages of using this pattern.

Implementation swapping: this is one of the most immediate benefits of DI. As long as the contract does not reveal any implementation details, you can have different implementations ready to be used. Which one is chosen depends on configuration factors, not on the code using the contract.
There are several basic scenarios of this I personally experienced, such as:

  • Data sources. Whether it’s databases, files or streams, you want to be free to use what works best for you in development, and not be bound to that choice when you want to switch to another service, even in production.
  • Logging. If you’ve set up sophisticated, centralized, cluster-wide logging for your application, it turns into a major obstacle during the development phase, when your Log4J could do all the work. Moreover, enterprise software logging is something you might want to switch or change as your software progresses.
  • Notifications. Oh God, you don’t want to be flooded with stupid notification emails caused by your QA team on a staging environment, right? Then you’d better have a mock implementation of them. Also, mailing services can be something you might want to change as time passes. I personally changed 3 API-controlled mailing services in 3 years.
  • Mocking. Sometimes you can’t really have all of your flows running in development, because some of them depend on other software you can’t hit from your desktop. What you can do is put your real implementation aside and build a mock version of it, so that you can still do whatever you need to do.
  • Customization. This sucks but it happens. Sometimes your software needs some adjustments for one specific customer. Being able to swap which implementation to use based on a configuration file prevents a lot of trouble.
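The scenarios above all reduce to one move: the consumer sees a contract, and configuration picks the implementation. Here’s a framework-free sketch of that move, using a system property as the “configuration file” stand-in; the names (Notifier, LiveNotifier, MockNotifier) are hypothetical.

```java
// Configuration-driven implementation swapping, sketched without Spring.
interface Notifier {
    String notifyMessage(String message);
}

class LiveNotifier implements Notifier {
    // The real thing: in production this would hit a mailing service.
    public String notifyMessage(String message) {
        System.out.println("sending: " + message);
        return "live:" + message;
    }
}

class MockNotifier implements Notifier {
    // A mock for staging: swallows everything, so QA can't flood anyone's inbox.
    public String notifyMessage(String message) {
        return "mock";
    }
}

public class SwapDemo {
    static Notifier pickNotifier() {
        // The choice comes from configuration (here a system property),
        // never from the code that consumes the contract.
        String impl = System.getProperty("notifier.impl", "mock");
        return impl.equals("live") ? new LiveNotifier() : new MockNotifier();
    }

    public static void main(String[] args) {
        Notifier notifier = pickNotifier();
        System.out.println(notifier.notifyMessage("deploy finished"));
    }
}
```

Run with -Dnotifier.impl=live and the behavior changes without touching a single consumer.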

Isolation of technicalities / blackboxing: if a service you’re working on has to be injected in classes you do not control and will compete with other services implementing the same contract, then you need to make sure it doesn’t require or produce any implementation specific data item. By doing so, you’re literally freeing the calling classes from the technical knowledge of specific implementations. This strong segmentation simplifies the software itself, and everybody else’s work.
While this is a good practice in general, it’s strongly enforced while using IoC/DI. After all, design patterns are not just a way to achieve a result, but also a way to improve the rest of your coding routine.

Instantiation control: while good diligence can lead to proper instantiation control, a good factory pattern helps a lot. Since dependency injection can only happen (with some exceptions) if objects are created using a factory, the factory has complete control over how and when they are created. Singletons and preemptive creation, for example, can be dealt with in a simple configuration file. Moreover, if the use of the IoC pattern spreads throughout the whole software (as it should), you can decide to extend the factory’s capabilities to introduce unusual creation logic, such as creating no more than X instances, or keeping track of them.
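The mechanics are easy to see in miniature. This toy factory (an illustration, not how Spring is implemented) keeps one singleton per ID, creates lazily, and tracks how many instances it has built:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy bean factory sketching instantiation control:
// singletons per ID, lazy creation, and creation tracking.
public class TinyFactory {
    private final Map<String, Supplier<Object>> recipes = new HashMap<>();
    private final Map<String, Object> singletons = new HashMap<>();
    private int created = 0;

    void register(String id, Supplier<Object> recipe) {
        recipes.put(id, recipe);
    }

    Object getBean(String id) {
        // Beans are singletons here: create once, then reuse.
        return singletons.computeIfAbsent(id, key -> {
            created++;
            return recipes.get(key).get();
        });
    }

    int instancesCreated() {
        return created;
    }

    public static void main(String[] args) {
        TinyFactory factory = new TinyFactory();
        factory.register("date", java.util.Date::new);
        Object a = factory.getBean("date");
        Object b = factory.getBean("date");
        System.out.println(a == b);                     // true: same instance both times
        System.out.println(factory.instancesCreated()); // 1
    }
}
```

Capping the instance count or logging every creation is a two-line change inside getBean, which is exactly the “unusual creation logic” argument.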

Testing: again, good diligence vs enforcement. When you’re creating unit tests, having sharply defined services improves your testing habits. Moreover, being able to mock some other services allows you to test the most intricate routines you would generally discard as “whatever”.

More advantages come in when dealing with the specifics of the Spring framework.

The Spring Framework

I’m really not going to tell you the story of the glorious Spring Framework. Suffice it to say it’s the most reputable implementation of DI for Java and a de facto industry standard.
The framework evolved over time, introducing more and more features to improve the productivity of developers, opting for a more “convention over configuration” approach.

This is not rocket science, so a good guided example is way more exciting than all this talking.

A lightning fast example

This example is pretty dumb, but should expose the very core concept. You can access the full source code here.

Two key elements you will use are:

  • the beans: the classes Spring is going to build for you and inject where they need to go;
  • the context: this is where the knowledge of what’s available and what’s instantiated resides, and it therefore holds the capability of building beans.

The context is initialized by providing a configuration. There are multiple ways to do it, but the one I still like best is the classic beans.xml configuration file. If you’re not going to hot swap it, placing it in the classpath is a good way to go.
The same file will hold details on how to build some of the beans, by introducing entries like:

<bean id="iNotifier" class="com.simonepezzano.lessons.springfundamentals.spring1.notifiers.impl.ConsoleNotifier"/>

What this does is inform the Spring context that when the “iNotifier” ID is requested, it needs to return a ConsoleNotifier instance. If the context has already created one, it will return it, because beans are singletons by default. If it hasn’t, it will create a new one. By changing the class attribute, I can use any implementation.

Of course, to allow other classes to use ConsoleNotifier without knowing about it, ConsoleNotifier will need to implement an interface, so that the dependency consumer only knows about the interface.


public interface INotifier {

    /**
     * Notifies a message
     * @param message the message
     */
    void notifyMessage(String message);

    /**
     * Notifies a checkpoint
     */
    void notifyCheckpoint();

    /**
     * Notifies an error
     * @param e the exception
     */
    void notifyError(Exception e);
}

public class ConsoleNotifier implements INotifier {

    public void notifyMessage(String message) {
        System.out.println(message);
    }

    public void notifyCheckpoint() {
        System.out.println("Checkpoint "+new Date());
    }

    public void notifyError(Exception e) {
        e.printStackTrace();
    }
}

You can implement as many notifiers as you want and turn them on and off by using the configuration file, as long as they implement the INotifier interface.

To inject the dependency where it has to be, there are -again- many ways, but the one that best combines simplicity and effectiveness for this case is annotations.

@Component("process")
public class UselessProcess implements Runnable {

    /**
     * The iNotifier dependency. You can find out which implementation is going to be used by
     * looking at the beans.xml file.
     * By convention, the name of the instance needs to match the ID defined in the beans configuration file.
     */
    @Autowired
    INotifier iNotifier;

    public void run() {
        /*
         * For 5 times...
         */
        for(int i=0;i<5;i++){
            // Send a notification message
            iNotifier.notifyMessage("Running message "+i);
            // Every time i is even, send a checkpoint
            if(i%2==0)
                iNotifier.notifyCheckpoint();
            // Wait a bit
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                iNotifier.notifyError(e);
            }
        }
    }
}

Let’s start with the injection. By adding the @Autowired annotation to my INotifier instance, I’m telling Spring to inject the right dependency into this object. Spring matches candidates by type and, when more than one implementation is available, falls back to the instance name, which is why it helps to match the ID we declared in our beans.xml file.

Another interesting item is that I added the @Component(“process”) annotation to the class declaration.
Two facts:

  • I will repeat it once more. If I want Spring to play its dependency injection game, the class where the dependencies are injected needs to be created by the Spring factory as well.
  • Since I’m not interested in centralizing the configuration of this bean, I’m declaring “this is a bean that will respond to the ‘process’ ID”.

If I want this @Component magic to work, I need to add 2 instructions to my beans.xml file, like so:

<context:annotation-config />
<context:component-scan base-package="com.simonepezzano.lessons.springfundamentals.spring1.processes"/>

Here I essentially tell the engine to look for beans starting from that package.

Finally, to get everything started, we need to have at least our context running and instantiate our first bean:

/*
 * We create a context using 'beans.xml' as the constructor's manual
 */
ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");

/*
 * Obtaining a process object is simple. "process" is not defined as a bean in beans.xml but it's
 * annotated. Being a Runnable, we don't even know what class implements it.
 */
Runnable process = (Runnable) context.getBean("process");
process.run();

You can access the full code of the example on my GitHub account.

The obvious

Now that you’ve seen how it works with real code, you can tell it’s really easy to use, and it has to be, because adopting it will have an impact across your software and your company’s departments.

The obvious advantages are the ones we previously mentioned while talking about DI. Strong code encapsulation/blackboxing, reusability, “swappability”, logical separation of business logic and technical aspects.

The less obvious

In the example we’ve seen how we can decide which dependency implementation to inject based on a configuration file, but we apparently lose this flexibility when using annotations. Yes, we can use the @Qualifier annotation to determine which implementation to use, as in:

@Autowired
@Qualifier("iNotifier")
INotifier iNotifier;

Yet a big chunk of the power of Spring is lost here. However, the annotation methodology leads us to two other ways to use Spring, even more efficiently.
While it’s perfectly OK to select one implementation or another at runtime, it would be reasonable to decide which ones to package, say, at build time.

Let’s start with the first scenario.

  • You have a JAR containing contracts for ISerializer and ILogger that you provide to your team, an external team, or multiple teams. They will produce implementations in different Maven projects
  • You blindly reference them in your main application using the @Autowired annotation
  • At build time, one dependency is chosen so that only the JAR you want to use is bundled

What’s present in the classpath will be used.
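This classpath-driven selection is what Spring’s component scan does for you. As a rough plain-JDK analogue (this is not Spring’s actual mechanism, and the class name probed for is hypothetical), here is the same principle at work with a reflective probe:

```java
// Plain-JDK sketch of "what's on the classpath wins": probe for a
// hypothetical implementation class and fall back when it's absent.
public class ClasspathProbe {

    static Runnable pickProcess() {
        try {
            // Hypothetical implementation shipped in an optional JAR.
            Class<?> impl = Class.forName("com.example.FancyProcess");
            return (Runnable) impl.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // The fancy implementation isn't bundled: use a default.
            return () -> System.out.println("default process");
        }
    }

    public static void main(String[] args) {
        pickProcess().run(); // prints "default process" when the JAR is absent
    }
}
```

Spring does this with far more ceremony and far more safety, but the build-time decision is the same: bundle a different JAR, get a different behavior, with zero changes to the calling code.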

The second scenario is a bit trickier. Mind that this is a simplification: there are more elegant ways to do it, but I want to keep the technicalities low in this article while still achieving the objective. I leave the search for the best way to you.

Assume software packages serving content to the outer world, such as a web server (or a servlet container), are delivered with Spring annotations.
In this case the control immediately looks -again- inverted. While in our previous scenario our main software is using routines delivered via Spring, in this case it’s the web server that receives a stimulus from the outer world and runs a callback for that input.

See how, even though My Application IS the application and has control over what happens, it is also providing some dependencies to the Servlet Container JAR, which will autowire them in.
The Servlet Container is at the same time a dependency of our main application and a consumer of dependencies we need to provide.

In this example we experience the true nature of Spring.

  • Whoever builds a compliant servlet container is not required to expose the nature of its dialogue with the outer world, and delegates the handling of the request to a dependency
  • Moreover, the servlet container can rely on a logging system that will be defined by the software embedding it
  • The main application can embed any servlet container implementation that meets a contract
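To make the inversion concrete, here’s a minimal plain-Java sketch (no Spring, and all names are hypothetical) of a “container” that owns the dialogue with the outer world and calls back into a handler supplied by the embedding application:

```java
import java.util.function.Function;

// The "servlet container": it owns the conversation with the outer
// world and knows nothing about the business logic it runs.
public class TinyContainer {

    private final Function<String, String> handler; // provided by the main application

    public TinyContainer(Function<String, String> handler) {
        this.handler = handler;
    }

    // Simulates a request arriving from outside: the container
    // delegates it to the injected handler.
    public String receive(String request) {
        return handler.apply(request);
    }

    public static void main(String[] args) {
        // The main application embeds the container and supplies the dependency.
        TinyContainer container = new TinyContainer(req -> "handled: " + req);
        System.out.println(container.receive("GET /"));
    }
}
```

Swap the lambda for a Spring-managed bean and the handcrafted constructor for @Autowired, and you have the scenario described above: the container is both our dependency and a consumer of dependencies we provide.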

You can find a very minimal example on my GitHub page.

Spring.io projects

I’m definitely not going into the details of all the activities the guys at Pivotal are doing around Spring. But given what we’ve seen in the previous chapter, it seems pretty straightforward that the “pluggability” described can certainly lead to the creation of a number of modular software packages, built around Spring, allowing you to seamlessly introduce new features to your application.
You are generally used to importing new dependencies using Maven or Gradle and then using them within your application, but this goes way beyond that, because when you import a Spring module, it literally starts to interact with your code with very little effort, introducing new complex features that often require no more than some POJOs.

Some very popular modules I have personally used are:

  • Spring Boot, which allows you to embed a servlet container within your application and interact with it in an astonishingly simple way. Very useful for building Java microservices.
  • Spring Security, a respected, rock-solid system to manage access to your resources and prevent exploits.
  • Spring Data, a rational, consistent abstraction layer to access your databases, relational or not.

And many more.


Maybe all this talking has led me away from the main discussion -why Spring is a foundation block of Java programming- but I have a good excuse. Any foundation block is a fine balance between what it provides and what effort is required to adopt it. The reason why I wanted to give you some hands-on examples is to demonstrate how adoption is really straightforward, and to tempt you to try it right away, if you haven’t already.
The “what it provides” is the toughest part, because at a glance you might be fooled into thinking that what it gives you is not much. But this kind of pattern is something that grows on you as you use it, and it changes the way you think of software architecture.

To conclude, it is a foundation block because it brings order to chaos, in your code and in your mental processes.

How APIs changed modern technology

To be clear, this article talks about web services (mostly RESTful), often called APIs or web APIs.
The intimate difference between the web service and API definitions is very relevant to understanding how this revolution changed literally everything in our industry.

Wikipedia describes web services as:

A web service is a service offered by an electronic device to another electronic device, communicating with each other via the World Wide Web. In a Web service, Web technology such as HTTP, originally designed for human-to-machine communication, is utilized for machine-to-machine communication, more specifically for transferring machine readable file formats such as XML and JSON.

I think I couldn’t say it any better.
This is their definition of API:

[…] is a set of subroutine definitions, protocols, and tools for building application software. In general terms, it’s a set of clearly defined methods of communication between various software components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. […]

This sounds a little more obscure. In short, it’s how a piece of software can interact with another piece of software, internally or through a medium.

This clearly demonstrates how web services are a subset of APIs.

What’s really awesome about web services is that in modern, standardized implementations, the protocols and formats are platform independent.
This specific aspect has had a huge impact in tech because it allows heterogeneous systems to talk to each other:
SalesForce talking to SAP, talking to Jira, talking to email services, without any actual need for each system to know the others’ specifics, other than being able to use a standardized, platform-independent API.

But this fact has other, more mundane implications. For example, you will probably build your back end in, say, Java, Node, Python, whatever, but your consumers are both an iPhone and an Android app. It appears obvious that having a common ground is what makes all this possible.

None of our modern mobility culture would exist without this kind of API. Or is it vice versa?

I’m not saying the idea of a standardized, common API language is genius, on the contrary it’s plain logic. It has been the need to do more that generated web APIs.

But there’s more to it.

Changing the way we write code

Whether you’re exposing a service for the masses or specialized software for businesses, web APIs are now non-optional.
We’ve seen big software packages struggling to provide them, sometimes using terrifying add-ons that produced terrible side effects. But as of now, I think everybody has complied.

As software engineers, when we write core services or business logic we have to meet a certain quality standard and employ certain good practices. These are good practices in general, but they become strictly necessary (no shortcuts) when you’re exposing APIs to the outer world.

  • Dependencies. Every routine necessarily relies on other routines and libraries. Bad code (or “classic code”, to be gentle) expects dependencies to be passed somehow in its context by its invoker. While this is a bad practice in modern software -easily solved by the dependency injection pattern- it becomes a major problem here, as the API producer ends up knowing more of the domain than it should. Keep it clean: the API producer knows what to call, the service knows what to do, and the right dependencies are injected, not passed.
  • Parallelism and concurrency. As a general rule, you should never, ever be tempted to think “users will never step on each other’s feet”. Remember, users will use your software in ways you cannot imagine. This is even more true when it comes to APIs. While users will follow the path your team designed in the UI (and will try to break it), APIs are meant to expose internal procedures to be consumed by other machines. You have no idea what they’re going to do. This means you need to consider what would happen if a certain API is activated one million times in a minute, used improperly, or called in a loop. What are you going to do? How will you manage the situation? And even more important, is there even a slight chance of causing a race condition? Also, when performing long-running tasks, will we have the API hang there for minutes, or are we going to return a task ID and release the resource? When you come up with a good parallelization strategy, it will have a positive impact on every corner of the package.
  • Façading, blackboxing. Another practice that has always been good, but is now mandatory. APIs can expose the very same content you generally offer via a UI. Or more. When there’s more to it, you really must create solid gates so that the API producer interacts with just the routines it should talk to, passing very plain, non-domain-specific objects, and returning harmless content. A very simple way to do it is creating façades that are the only point of contact between the API producer and the core. Everything behind the façade has to be a black box. Done right, the gate will hide routines that must not be invoked directly, and provide the needed coordination for input verification, authentication and security.
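A minimal sketch of the façade idea (all class names here are hypothetical): the API layer only ever sees the gate, which takes plain inputs, validates them, and returns harmless content, while the domain object behind it stays a black box:

```java
// The domain class: rich, dangerous, not to be touched directly.
class AccountCore {
    private double balance = 100.0;

    double rawBalance() { return balance; }      // must not leak outside
    void applyDelta(double d) { balance += d; }  // must not be invoked directly
}

// The façade: the only point of contact for the API producer.
public class AccountFacade {
    private final AccountCore core = new AccountCore();

    // Plain input in, harmless string out; validation happens at the gate.
    public String deposit(double amount) {
        if (amount <= 0) {
            return "rejected";
        }
        core.applyDelta(amount);
        return "ok";
    }

    public static void main(String[] args) {
        AccountFacade facade = new AccountFacade();
        System.out.println(facade.deposit(50));   // prints "ok"
        System.out.println(facade.deposit(-5));   // prints "rejected"
    }
}
```

Nothing about AccountCore -its balance, its mutators- is reachable from the API side; if the domain changes, only the façade needs to absorb it.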

Changing the way we see a software package

As web APIs evolve, get faster and more accurate, so does the software using web APIs as a communication protocol. This made web APIs a viable media also for internal software communication, so that many software packages are now smaller standalone services, collaborating with each other: the microservices.

This stands in opposition to the classic view of the big-ass package with everything in it, running as a single massive process on a huge server. Pros and cons, as in every radical architectural decision.
Certainly, big monolithic packages have the positive effect of having the whole domain in one place, which is a pleasing feeling, especially when you’re dealing with a big database schema controlled by an ORM. They will probably run slightly faster and won’t incur communication problems related to the network.
On the other hand (which is a big hand, actually), splitting applications into microservices provides a high degree of modularity (and any Java developer knows what that means to us), which implies the ability to roll out code changes that do not impact the whole platform, in a high-availability flavor, allowing simplified parallel development trains and releases. Moreover, high availability and disaster recovery from crashing hardware become simpler, as the whole system won’t live in one place, and moving microservices from one place to another is generally faster.

And finally Docker. Of course containers are exceptionally useful even if you’re not doing microservices, but one of the major uses of Docker is indeed containerization of microservices.

Changing our interest in programming languages

I’m a Java veteran and, like all Java veterans, I look at other -newer- programming languages with a little bit of suspicion. You know what you leave, but you don’t know what you get.
What is certain, though, is that more modern programming languages nowadays live around concepts that make working with microservices a breeze.
There’s no doubt that platforms like Node are phenomenal when JSON APIs are to be generated or read, and it’s clearer than ever that languages with lightning-fast bootstrap (Go, for example) are the way to go, considering microservices lifecycles can have very fast turnarounds.

But the real life changing effects that APIs had on programming in general is that any software written with any web API capable language (probably 99.9%) can cooperate seamlessly with any other software written with any web API capable language.
While this sounds like a repetition of what we previously said, this sentence takes a completely different taste when the topic is microservices.

Choosing a programming language (and platform) was a terrifying step in the past (given that you had a way smaller pool to choose from) because it involved many aspects like:

  • does it have all that’s needed to do the job? And how hard is it going to be?
  • how will it scale?
  • how modular is it?
  • how easy is it to debug?
  • how many experts can I find in my area and outside my area?

While this is still a very important step, in enterprise software you’re now given the possibility to design a greater plan that expands over time, where each programming language or platform can do what it’s good at.
Erlang is good at parallel computing and orchestration? Fine. Python is good at data crunching? Great. Node is good at building APIs? Excellent. Java is the industry standard? Go for it.
If you’re designing a large system and you have a very well-organized mind, you might not need to give up any of the benefits these languages offer you.

Changing the way we test

Now, if you live in the same timeline as mine, testing software is not optional. Unit testing, data driven testing, integration testing, user driven testing, you name it.
But there’s certainly a wet dream most developers have had since software testing existed: being able to evaluate tests as the software is running, on live data. Even though it’s always been perfectly doable (remember the “It’s just software” mantra), we can’t really say it’s ever been a real option.
With web APIs and microservices this is becoming a real deal, because you can potentially intercept messages going in and out and test them. Of course, the way you test is different, as you don’t have power over input values, but you can certainly introduce enough logic to verify how the APIs are working, adding detail where needed.
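A plain-Java sketch of the idea (the names are hypothetical): wrap any request handler so that every live request and response passes through a checking layer, without the wrapper knowing anything about the business logic it audits:

```java
import java.util.function.Function;

public class LiveAudit {

    // Wraps a request handler with checks that run on live traffic.
    static Function<String, String> audited(Function<String, String> inner) {
        return request -> {
            // Inbound check: we can't control input values, but we can
            // verify they meet our structural expectations.
            if (request == null || request.isEmpty()) {
                throw new IllegalArgumentException("empty request");
            }
            String response = inner.apply(request);
            // Outbound check: every live response must be well-formed.
            if (response == null) {
                throw new IllegalStateException("null response");
            }
            return response;
        };
    }

    public static void main(String[] args) {
        Function<String, String> handler = audited(req -> "{\"ok\":true}");
        System.out.println(handler.apply("GET /status"));
    }
}
```

In a real deployment this would sit in a filter, interceptor, or API gateway rather than a lambda wrapper, but the principle is the same: the tests travel with the live traffic.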

Changing the way we orchestrate

This last “change” is very close to my heart, and probably one of the most underestimated ones in the list.
Whether you are running microservices or a software beast packed with lots of different multi-vendor packages, the way each item communicates with the others can be orchestrated in different ways.
Yes, you can write code to have them talk, determine when this happens synchronously or not, how many simultaneous calls are allowed, and what’s meant to happen when calls have to wait in a queue, but is it worth it?

Three years ago I met RabbitMQ and you can read about it in a previous article. Two years ago I had the pleasure to deal with MuleSoft‘s Mule and Apache Camel.
Whether it’s a queue manager like RabbitMQ (its logical connection with web APIs is a bit of a stretch) or a fully featured enterprise service bus like Mule, there really is no reason for you to write code to orchestrate your web API driven software.
Operations such as authentication, thread and concurrency management, or queuing strategies follow very precise patterns you don’t need to write every time and carry the burden of maintaining, considering there are combat-proven pieces of software out there that can do it for you.

If you can keep your code clean of details that are not relevant to what the software has to do, you’d better go for it.


It’s not rocket science. Actually, it’s less than brilliant. Wait, this is really boring.

But this boring, less-than-brilliant advancement gave us access to natural modularity, platform independence, language independence, focus on the software’s real aim, and operation teams that can sleep -almost- peacefully.

It’s not the invention, it’s the infinite effects it will have on the world.