How APIs changed modern technology

To be clear, this article talks about web services (mostly RESTful), often called APIs or web APIs.
The subtle difference between the definitions of web service and API is key to understanding how this revolution changed practically everything in our industry.

Wikipedia describes web services as:

A web service is a service offered by an electronic device to another electronic device, communicating with each other via the World Wide Web. In a Web service, Web technology such as HTTP, originally designed for human-to-machine communication, is utilized for machine-to-machine communication, more specifically for transferring machine readable file formats such as XML and JSON.

I think I couldn’t say it any better.
This is their definition of API:

[…] is a set of subroutine definitions, protocols, and tools for building application software. In general terms, it’s a set of clearly defined methods of communication between various software components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer. […]

This sounds a little more obscure. In short, it’s how a piece of software can interact with another piece of software, either internally or over some medium.

This clearly demonstrates how web services are a subset of APIs.

What’s really awesome about web services is that, in modern, standardized implementations, the protocols and formats are platform independent.
This specific aspect has had a huge impact in tech because it allows heterogeneous systems to talk to each other.
Salesforce talking to SAP, talking to Jira, talking to email services, without any system actually needing to know the others’ specifics, other than being able to use a standardized, platform-independent API.

But this fact has other, more mundane implications. For example, you will probably build your back end in, say, Java, Node, Python, whatever, but your consumers are both an iPhone and an Android app. It seems obvious that having a common ground is what makes all this possible.

None of our modern mobility culture would exist without this kind of API. Or is it vice versa?

I’m not saying the idea of a standardized, common API language is genius; on the contrary, it’s plain logic. It was the need to do more that generated web APIs.

But there’s more to it.

Changing the way we write code

Whether you’re exposing a service for the masses or specialized software for businesses, web APIs are no longer optional.
We’ve seen big software packages struggle to provide them, sometimes using terrifying add-ons that produced terrible side effects. But as of now, I think everybody has complied.

As a software engineer, when you write core services or business logic you have to meet a certain quality standard and employ certain good practices. These are good practices in general, but they become strictly necessary (no shortcuts) when you’re exposing APIs to the outer world.

  • Dependencies. Every routine necessarily relies on other routines and libraries. Bad code (or “classic code”, to be gentle) expects dependencies to be passed somehow into its context by its invoker. While this is a bad practice in modern software in general (easily solved by the dependency injection pattern), it becomes a major problem here, as the API producer ends up knowing more of the domain than it should. Keep it clean: the API producer knows what to call, the service knows what to do, and the right dependencies are injected, not passed.
  • Parallelism and concurrency. As a general rule, you should never ever be tempted to think “users will never step on each other’s feet”. Remember, users will use your software in ways you cannot imagine. This is even more true when it comes to APIs. While users will follow the path your team designed in the UI (and will try to break it), APIs are meant to expose internal procedures to be consumed by other machines. You have no idea what they’re going to do. This means you need to consider what would happen if a certain API is activated one million times in a minute, used improperly, or called in a loop. What are you going to do? How will you manage the situation? And even more important, is there even a slight chance of causing a race condition? But also, when performing long-running tasks, will we have the API hang there for minutes, or are we going to return a task ID and release the resource? When you come up with a good parallelization strategy, it will have a positive impact on every corner of the package.
  • Façading, blackboxing. Another practice that has always been good, but is now mandatory. APIs can expose the very same content you generally offer via a UI. Or more. When there’s more to it, you really must create solid gates so that the API producer interacts only with the routines it should talk to, passing very plain, non-domain-specific objects, and returning harmless content. A very simple way to do it is creating façades that are the only point of contact between the API producer and the core. Everything behind the façade has to be a black box. Done right, the gate will hide routines that must not be invoked directly and coordinate input verification, authentication and security.
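The dependency rule above can be sketched in a few lines. This is a minimal, hypothetical example (all class and function names are illustrative, not from any real framework): the API handler knows what to call, the service knows what to do, and the repository is injected at construction time rather than passed through by the caller.

```python
# Hypothetical dependency-injection sketch: the API layer never sees
# or carries the data-access detail; it is injected into the service.

class UserRepository:
    """Data-access detail the API layer should never know about."""
    def find(self, user_id):
        return {"id": user_id, "name": "Alice"}  # stubbed for illustration

class UserService:
    def __init__(self, repository):
        # The repository is injected once, at construction time,
        # not passed along by whoever invokes the API.
        self._repository = repository

    def get_user(self, user_id):
        return self._repository.find(user_id)

# Composition root: wiring happens once, at the edge of the application.
service = UserService(UserRepository())

# The API handler only knows which service method to call.
def handle_get_user(user_id):
    return service.get_user(user_id)

print(handle_get_user(42))  # {'id': 42, 'name': 'Alice'}
```

The point is that `handle_get_user` carries no domain knowledge: swap the repository for a real database implementation and the API layer is untouched.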
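The long-running-task strategy from the concurrency point can also be sketched. This is an in-process illustration under stated assumptions (the task store, names, and timings are all made up): the API returns a task ID immediately and releases the connection, and the client polls for the result.

```python
# Sketch of the "return a task ID and release the resource" pattern:
# the API call does not hang while the work runs in the background.
import threading
import time
import uuid

tasks = {}  # task_id -> {"status": ..., "result": ...} (illustrative store)

def start_long_task(payload):
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"status": "running", "result": None}

    def worker():
        time.sleep(0.1)  # stands in for real, slow work
        tasks[task_id] = {"status": "done", "result": payload.upper()}

    threading.Thread(target=worker).start()
    return task_id  # the API releases the connection right away

def poll(task_id):
    return tasks[task_id]["status"]

tid = start_long_task("report")
print(poll(tid))   # most likely "running", right after submission
time.sleep(0.5)
print(poll(tid))   # "done"
```

A real implementation would persist the task store and expose `poll` as its own endpoint, but the resource-release idea is the same.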
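Finally, the façade gate can be made concrete. A minimal sketch, assuming a hypothetical inventory core (every name here is illustrative): the façade is the only contact point, it verifies input at the gate, and it returns only plain, harmless data.

```python
# Hypothetical façade sketch: everything behind it is a black box.

class _InventoryCore:
    """Black box: must never be invoked directly by the API layer."""
    def reserve(self, sku, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        return {"sku": sku, "reserved": qty, "_internal_shelf": "B-17"}

class InventoryFacade:
    def __init__(self):
        self._core = _InventoryCore()

    def reserve(self, sku: str, qty: int) -> dict:
        # Input verification happens at the gate...
        if not isinstance(sku, str) or not sku:
            raise ValueError("invalid sku")
        result = self._core.reserve(sku, int(qty))
        # ...and only plain, non-domain-specific content comes back out.
        return {"sku": result["sku"], "reserved": result["reserved"]}

facade = InventoryFacade()
print(facade.reserve("ABC-123", 3))  # {'sku': 'ABC-123', 'reserved': 3}
```

Note how the internal `_internal_shelf` detail never leaves the gate: the API producer sees only what the façade chooses to return.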

Changing the way we see a software package

As web APIs evolve, getting faster and more accurate, so does the software using web APIs as a communication protocol. This made web APIs a viable medium for internal software communication as well, so that many software packages are now smaller standalone services collaborating with each other: the microservices.
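The idea of small services cooperating over a web API can be shown with nothing but the standard library. This is a toy sketch (route, payload, and port choice are arbitrary assumptions): one process exposes JSON over HTTP, and a consumer — which could be written in any language — reads it.

```python
# A minimal "microservice" and client in one script, using only stdlib.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "ABC-123", "price": 9.99}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PriceHandler)  # 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other service, in any language, could consume this endpoint.
url = f"http://127.0.0.1:{server.server_port}/price"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(data)  # {'sku': 'ABC-123', 'price': 9.99}

server.shutdown()
```

The transport is plain HTTP and JSON, which is exactly why the consumer's language and platform don't matter.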

This opposes the classic view of the big-ass package with everything in it, running as a single massive process on a huge server. Pros and cons, as in every radical architectural decision.
Certainly, big monolithic packages have the positive effect of having all the domain in one place, which is a pleasing feeling, especially when you’re dealing with a big database schema controlled by an ORM. They will probably run slightly faster and won’t incur network-related communication problems.
On the other hand (which is a big hand, actually), splitting applications into microservices provides a high degree of modularity (and any Java developer knows what that means to us). This implies the ability to roll out code changes that do not impact the whole platform, in a high-availability fashion, allowing simplified parallel development trains and releases. Moreover, high availability and disaster recovery from crashing hardware become simpler, as the whole system no longer lives in one place, and moving microservices from one place to another is generally faster.

And finally, Docker. Of course containers are exceptionally useful even if you’re not doing microservices, but one of the major uses of Docker is indeed the containerization of microservices.

Changing our interest in programming languages

I’m a Java veteran and, like all Java veterans, I look at other, newer programming languages with a little bit of suspicion. You know what you leave, but you don’t know what you get.
What is certain, though, is that modern programming languages are nowadays built around concepts that make working with microservices a breeze.
There’s no doubt that platforms like Node are phenomenal when JSON APIs are to be generated or read, and it’s clearer than ever that languages with lightning-fast bootstrap (Go, for example) are the way to go, considering microservice lifecycles can have very fast turnarounds.

But the real life-changing effect that APIs had on programming in general is that any software written in any web-API-capable language (probably 99.9% of them) can cooperate seamlessly with any other software written in any web-API-capable language.
While this sounds like a repetition of what we previously said, the sentence takes on a completely different flavor when the topic is microservices.

Choosing a programming language (and platform) was a terrifying step in the past (given that you had a way smaller pool to choose from), because it involved many aspects, such as:

  • does it have all that’s needed to do the job? And how hard is it going to be?
  • how will it scale?
  • how modular is it?
  • how easy is it to debug?
  • how many experts can I find in my area and outside my area?

While this still is a very important step, in enterprise software you’re now given the possibility to design a greater plan that extends over time, where each programming language or platform can do what it’s good at.
Erlang is good in parallel computing and orchestration? Fine. Python is good at data crunching? Great. Node is good at building APIs? Excellent. Java is the industry standard? Go for it.
If you’re designing a large system and you’re a very well organized mind, you might not need to give up any of the benefits these languages offer you.

Changing the way we test

Now, if you live in the same timeline as I do, testing software is not optional. Unit testing, data-driven testing, integration testing, user-driven testing, you name it.
But there’s certainly a dream most developers have had since software testing existed: being able to evaluate tests as the software is running, on live data. Even though it’s always been perfectly doable (remember the “It’s just software” mantra), we can’t really say it’s ever been a real option.
But with web APIs and microservices this is becoming a real option, because you can potentially intercept messages going in and out and test them. Of course, the way you test is different, as you have no power over input values, but you can certainly introduce enough logic to verify how the APIs are working, adding detail where needed.
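The interception idea above can be sketched as a wrapper around an API handler. This is illustrative only (handler, invariants, and names are assumptions): every request/response pair flowing through live traffic is checked against properties that must always hold, without touching the traffic itself.

```python
# Sketch of live-traffic verification: wrap a handler so each
# request/response pair is checked against invariants as it passes.

violations = []

def verified(handler):
    def wrapper(request: dict) -> dict:
        response = handler(request)
        # We can't control input values on live traffic, but we can
        # assert properties that must hold for every response.
        if "id" not in response:
            violations.append(("missing id", request))
        if response.get("total", 0) < 0:
            violations.append(("negative total", request))
        return response  # traffic flows through untouched
    return wrapper

@verified
def get_order(request):
    return {"id": request["order_id"], "total": 25.0}

print(get_order({"order_id": 7}))  # {'id': 7, 'total': 25.0}
print(violations)                  # [] -- no invariant was broken
```

In a real system the same check would live in an HTTP middleware or proxy rather than a decorator, but the principle — test on live data by observing, not by supplying inputs — is identical.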

Changing the way we orchestrate

This last “change” is very close to my heart, and probably one of the most underestimated ones in the list.
Whether you are running microservices or a software beast packed with lots of different multi-vendor packages, the way each item communicates with the others can be orchestrated in different ways.
Yes, you can write code to have them talk, determine whether it happens synchronously or not, how many simultaneous calls are allowed, and what happens when calls have to wait in a queue, but is it worth it?

Three years ago I met RabbitMQ and you can read about it in a previous article. Two years ago I had the pleasure to deal with MuleSoft‘s Mule and Apache Camel.
Whether it’s a queue manager like RabbitMQ (its logical connection with web APIs is a bit of a stretch) or a fully featured enterprise service bus like Mule, there really is no reason for you to write code to orchestrate your web-API-driven software.
Operations such as authentication, thread and concurrency management, or queuing strategies follow very precise patterns that you don’t need to rewrite every time and carry the burden of maintaining, considering there are combat-proven pieces of software out there which can do that for you.
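To make the queuing pattern concrete without requiring a running broker, here is an in-process stand-in (Python’s `queue` module in place of something like RabbitMQ; names and messages are invented): a producer publishes messages and a worker consumes them, and neither side contains code to manage the other.

```python
# In-process sketch of broker-style decoupling: the queue handles
# buffering, waiting, and delivery so application code doesn't have to.
import queue
import threading

work_queue = queue.Queue()
results = []

def worker():
    while True:
        message = work_queue.get()
        if message is None:   # shutdown signal
            break
        results.append(f"processed {message}")
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# The producer just publishes; queuing and delivery are the
# broker's problem, not the application's.
for sku in ["A-1", "B-2", "C-3"]:
    work_queue.put(sku)

work_queue.join()    # wait until every message has been consumed
work_queue.put(None)
t.join()
print(results)  # ['processed A-1', 'processed B-2', 'processed C-3']
```

A real broker adds persistence, routing, and acknowledgement policies on top of this, which is exactly the code you don’t want to write and maintain yourself.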

If you can keep your code clean of details that are not relevant to what the software has to do, you’d better go for it.

Conclusions

It’s not rocket science. Actually, it’s less than brilliant. Wait, this is really boring.

But this boring, less-than-brilliant advancement gave us natural modularity, platform independence, language independence, focus on the software’s real aim, and operations teams that can sleep (almost) peacefully.

It’s not the invention, it’s the infinite effects it will have on the world.
