When you work for the same company for some time, you might end up reviewing code you wrote years ago. The feeling is always shocking: imprudent, brave, naive, it’s like watching the first season of “The Simpsons” again.
Among the embarrassing things you find in your “old” code, over-engineering is possibly the most dramatic, because it can compromise years of future work. It drags your attention away from the actual task into an endless number of technicalities, unexplainable maintenance and constant improvements that don’t really improve anything. It’s drowning in poo, literally.
When project complexity and the will to reinvent the wheel love each other very much, over-engineering is born. Every technique, theory or library that can stop your team from bringing chaos into your project is more than welcome.
First off, I’m not going to talk about this topic scientifically. I’m no scientist, I’m a software artisan, so all you’re going to read is about how certain solutions solved specific problems for me.
If your plan is to comment something like “hey, that is partly wrong if you apply it to a functional context!” or “this is far from the original concept formulated in 1973!”, keep in mind that: a) you’re not helping anyone by doing so; b) you can pretty much go fuck yourself.
Our objective is standardizing a number of tasks that make up an extensive part of our everyday life in server-side programming. If not properly approached, these tasks can be the source of an exponential growth of stacked, generational, over-engineered code.
- Modularization. Dividing the project into smaller semi-autonomous tasks and boxing them accordingly is vital for simplicity, robustness and testability. If you don’t, you’re in big trouble; but even when you do, are you using the most rational criteria?
- Interfacing. Deciding how these boxes will interact with each other is something you feel the urge to do quickly, when you start assembling your components. If modularization is done properly, your component has few entry points, but this decision is more critical than it seems. Refactoring and extending those interfaces over and over becomes a big problem once they start getting used in multiple locations.
- Parallelism. Let’s face it, this is something you can rarely avoid in modern server-side programming. And even though it might look like work you can delay, remember that parallelization requires you to code in a certain way. If you treat it as an “add-on” you will need to re-engineer your code to make it happen. Moreover, going parallel is not just thread spawning; controlling the flow, the number of parallel events and their status cannot be ignored.
- Failure recovery. Assuming you’re doing all of the above pretty well, there is still one thing that can drive you nuts: how to recover from a failure. Now, try..catch might sound like an answer as long as the failure is limited to a “boom” in a stateless piece of code, but what happens if you need to roll back a complex state, or the failure is related to the current state of an object? You will need to revert data changes and reinitialize your code. And how will you notice that something bad happened? Are you going to wait for a terrifying customer call?
Of course these four items are just a glimpse of the great challenges in software development, but they represent very well how a software development model can avoid the mess we often introduce without even knowing.
The actor model is a concurrent programming strategy where the fundamental computational unit is -indeed- the actor.
An actor encapsulates the necessary code to perform a task, runs as an independent worker from the calling thread, and is triggered by a message.
Actors are often stateful and won’t work on a second task until the completion of the previous operation. If you trigger an actor while it’s still working on something, the subsequent call will be queued in what many implementations call a “mailbox”. Therefore, if you want to be able to run two tasks of the same type simultaneously, you will need to instantiate two actors.
Actors are not threads but use threads to achieve asynchrony, so their flow is also bound to the availability of threads.
The communication between actors happens with messages. In all the implementations I’ve seen, a message can be anything: an object, a string, a number. You don’t need to declare an actual interface, but rather set up data items that work like letters. It is the actor’s duty to verify whether the letter it just received is OK or not. Of course, to make everything work, you are expected to send messages containing at least what an actor needs to do its job.
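To make the mailbox and “letter” ideas concrete, here’s a minimal, framework-free sketch in plain Python (standard library only; the `Actor`, `tell` and `Greeter` names are mine for illustration, not any real library’s API): one queue acts as the mailbox, one thread drains it, and messages are handled one at a time in arrival order.

```python
import queue
import threading

class Actor:
    """A minimal actor sketch: one mailbox, one thread, one message at a time."""

    def __init__(self):
        self._mailbox = queue.Queue()  # the "mailbox": pending letters
        threading.Thread(target=self._run, daemon=True).start()

    def tell(self, message):
        """Fire-and-forget: drop a letter in the mailbox."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()  # blocks until a letter arrives
            try:
                self.receive(message)      # one task at a time, in order
            finally:
                self._mailbox.task_done()

    def receive(self, message):
        raise NotImplementedError

class Greeter(Actor):
    def receive(self, message):
        # The actor's duty: check that the letter contains what it needs
        if isinstance(message, dict) and "name" in message:
            print(f"Hello, {message['name']}!")

greeter = Greeter()
greeter.tell({"name": "Ada"})  # handled by the actor's own thread
greeter.tell(42)               # malformed letter: this actor just ignores it
greeter._mailbox.join()        # demo only: wait for the mailbox to drain
```

Note how there is no declared interface anywhere: the caller just sends whatever it wants, and `receive` decides whether the letter is usable.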
Each actor should be good at solving one problem, or performing one task, but this does not necessarily mean it will take responsibility for a whole process. It is a common -if not recommended- pattern to have actors mailing other actors as part of their tasks, therefore building chains of asynchronous events.
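Such a chain can be sketched with nothing but the Python standard library (the `spawn` helper and the `trim`/`shout`/`archive` actors are illustrative names of my own, not a real framework): each actor does its one job, then mails the half-processed artifact to the next one.

```python
import queue
import threading

def spawn(handler):
    """Spawn a tiny actor: a daemon thread draining a mailbox with `handler`.
    Returns the mailbox so other actors can send it messages."""
    mailbox = queue.Queue()

    def loop():
        while True:
            message = mailbox.get()
            try:
                handler(message)
            finally:
                mailbox.task_done()

    threading.Thread(target=loop, daemon=True).start()
    return mailbox

results = []

# Last link of the chain: archives the finished artifact
archive = spawn(lambda text: results.append(text))

# Middle link: transforms the text, then mails it onward
shout = spawn(lambda text: archive.put(text.upper()))

# First link: cleans the raw input and passes it along
trim = spawn(lambda text: shout.put(text.strip()))

trim.put("  hello actors  ")  # kicks off the asynchronous chain
```

The caller only ever talks to the first mailbox; everything downstream is an implementation detail of the chain.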
Failure management is another interesting facet of actor programming. As every actor lives its very own life, a health issue one actor encounters does not directly influence the health of the whole system. Of course a failing actor is a bad thing, but unless that actor is the keystone of your project, its failures shouldn’t be able to kill the whole application.
An interesting fact about actors and failures is that it’s common practice to let actors fail and not recover autonomously, in favor of a strategy where a supervisor handles whatever needs to be done to bring the actor back to full working order. In most implementations, a failure deactivates the actor, which will not be able to handle further messages.
This is a shocking perspective as you climb the actor model learning curve, but you can clearly see it makes a lot of sense: if an actor’s ability to perform is related to its state, then when it fails there’s a tangible possibility the state is corrupt. Hence you don’t really want to simply “try-catch” the main task code, but manage the reinitialization of the actor!
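Here’s a deliberately simplified, hand-rolled sketch of that idea (not how any real framework spells it): the worker doesn’t try-catch its way back to health; it reports the failure and goes silent, and the supervisor rebuilds it with a clean state.

```python
import queue
import threading
import time

class Worker:
    """A stateful actor: on failure it stops handling messages and reports
    to its supervisor instead of trying to patch itself up."""

    def __init__(self, supervisor_mailbox):
        self.state = 0                 # mutable state the task depends on
        self.mailbox = queue.Queue()
        self._supervisor = supervisor_mailbox
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            message = self.mailbox.get()
            try:
                self.state += message  # blows up if the message isn't a number
            except TypeError:
                self._supervisor.put("worker-failed")  # escalate, don't self-heal
                return                 # the actor is now deaf to new messages

class Supervisor:
    """Brings a failed worker back to full working order by rebuilding it."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.worker = Worker(self.mailbox)
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            if self.mailbox.get() == "worker-failed":
                self.worker = Worker(self.mailbox)  # fresh actor, clean state

supervisor = Supervisor()
supervisor.worker.mailbox.put(1)       # fine: state becomes 1
supervisor.worker.mailbox.put("boom")  # the worker dies instead of limping on
time.sleep(0.2)                        # give the supervisor time to react
```

The key design choice is that recovery logic lives outside the failing code: the worker never touches its own corrupt state, and the supervisor’s only job is to decide what “back in working order” means.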
At this point you can pretty much see how the actor model looks like a representation of a real-world factory. It’s kinda funny and disturbing, I would say. Actors:
- are specialized workers
- are finite
- are supervised by other actors
- perform one task at a time
- receive a heterogeneous letter containing the details of the task to perform, which they need to interpret
- have a mutable knowledge of their condition
- need an available workbench to perform their job (threads)
- may be asked to inform other actors of events concerning their job, or to pass other actors a half-processed artifact
- if hurt, they stop working and inform the supervisor
Looks pretty intriguing already. Thinking of a piece of software like this, you can clearly see how much easier it is to control the behavior of your “factory”.
What else? Oh well. A factory can have offsite workers, and so does our model.
I kept this topic as the last theory class mostly because not all frameworks implement this, and not all do it in the same way, so I’m going to keep the description pretty abstract.
You now have a full picture of how the model works and how fine the grain of your control is. You have also seen how the adoption of a good actor framework can keep you from over-engineering simple things that are already optimal in the model.
Now let’s add something that is -again- a source of horrendous over-engineering pain and insomnia: scalability.
As we previously said, actors talk to each other, and each instance is responsible for performing the task it’s been instructed to do, potentially interacting with other actors along the way. So far, though, we’ve always considered this activity as an abstraction inside one running process. Or maybe not?
Maybe not. Many actor model libraries also implement the ability to create a cluster of agents deploying actors. These agents won’t run in the same process and won’t necessarily reside on the same server. Since the communication between actors is performed via messages, actors don’t really need to know each other, so an actor can definitely talk to another actor whose location and nature are completely unknown.
This allows you to achieve three great things:
- A micro-agent system. Smaller, specialized software is easier to maintain, faster to deploy, debug, and potentially outsource. Grouping actors by analogy into micro-agents can be a winning strategy, as they can be deployed on different machines and therefore provide better performance. The micro-agent philosophy is not an effect of actor programming and the model can be achieved in a number of ways, but the actor model definitely pushed me in that direction.
- Redundancy. Micro-agent deployments don’t need to be unique and don’t need to reside on the same server. An imaginary “Mailer” agent, implementing the FailureMailActor, SuccessMailActor and DailyMailActor, doesn’t need to be unique, and if one instance goes down for whatever reason, the others will still perform.
- Load sharing. Whether you embrace the micro-agent philosophy or not, exact copies of the whole software, deployed on multiple servers and implementing all the possible actors, can collaborate with each other and allow you to share the load between multiple nodes.
As always, if you’re looking for the model that will save us all, you probably have bigger problems you should talk about with a psychologist. There’s no panacea in this world, and you should be aware of it by now. What is certain is that the actor model solves the problems we’ve been talking about with swag, makes the flow control clearer, allows a better use of resources and forces the developer to be tidier.
It definitely worked for me all the way and has become foundational for all the software I’ve been writing.
In the next article, we’ll deep-dive into some examples using my favorite implementation! Until then, I strongly suggest you start looking around and see what the gods of software have created for you to start working with actors.