It seems that in recent times the term “monolith” has become a dirty word: a behemoth, a beast, something to slay and revile. The “obvious” antithesis is the trendy “microservice”: lean, agile, forward-thinking, future-proof. Watching tech talks from giants such as Netflix, we get a glimpse into how well-architected systems can scale globally, with distinct, isolated units working together to deliver a lifetime’s worth of content to all our devices.
Great, you might think, that is how my project should be structured. SOA at its finest. The problem is… you are not Netflix.
For the majority of companies I’ve worked for and with, the average development team size has been in the region of 3 – 10, with applications that need to handle hundreds of users daily.
Day to day, these developers build and maintain relatively straightforward applications to fulfil business and customer needs, or help streamline internal workflows.
I’d suggest this is the environment most developers will find themselves in. Even within a larger company, it’s likely you’ll be placed in a team focusing on a specific product.
It’s at this scale that I question whether microservices actually offer any benefit over the complexities they introduce. Jimmy Bogard’s NDC 2017 talk, “Avoiding Microservice Megadisasters”, really highlights just how disastrous microservices can be when done wrong. The horrifying reality from Jimmy’s cautionary tale is this: 9.5 minutes to render a homepage, with enough HTTP requests bouncing around to saturate the internal network. Contrast this with their existing “monolithic” WebForms site, which was servicing thousands of requests and still generating billions in revenue, albeit while showing signs of ageing, decay and neglect.
In reality the developers involved probably had the best intentions. Some might have wanted to show their ‘seniority’ and ability to formulate incomprehensible logic flows and network diagrams; others might have been unsure and simply doing as they were directed. In the story told, the main architect jumped ship prior to the ill-fated maiden voyage, but with an 18-month development cycle it’s likely many other developers also left within that time.
The talk suggests scope creep and developers “inventing” requirements in order to further their own ambitions within the business, or to add the latest buzzwords to their soon-to-be-recirculated CVs. None of these are good “business reasons” to adopt such a risky strategy.
My experience suggests asking the following questions before deciding to dive head first into microservice architecture:
What problem are you attempting to solve?
This is the first question when deciding to introduce any change to existing processes and procedures. Without a clear goal and a means of measuring success you are likely setting yourself up for failure.
Has the problem been identified through the collection of metrics, or is it just based on gut feeling and intuition?
It’s easy to point the finger and blame one part of the application for causing performance issues, but without hard evidence it’s just noise. Without a current metric and a desired outcome, how can you measure success?
Is there a simpler solution which doesn’t fragment the existing infrastructure?
It’s surprising how often the solution to a bottleneck is as simple as adding a missing database index or lending some careful attention to unnecessarily repetitive or cumbersome logic. Try adding logging to capture how long particular functions/IO operations are actually taking, and identify where you can get the biggest wins.
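As a rough sketch of that kind of logging, a small helper can wrap any async operation and record how long it took. The `timed` helper and `fetchOrders` below are illustrative names, not from any particular codebase:

```typescript
// Wrap an async operation and log its duration.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    // Log on success and failure alike, so slow errors are visible too.
    console.log(`${label} took ${Date.now() - start}ms`);
  }
}

// Stand-in for a slow database query or HTTP call.
function fetchOrders(): Promise<number[]> {
  return new Promise((resolve) => setTimeout(() => resolve([101, 102]), 50));
}

// Wrap suspect IO calls, then compare the durations in your logs.
timed("fetchOrders", fetchOrders).then((orders) =>
  console.log(`fetched ${orders.length} orders`)
);
```

A day or two of timings like these usually narrows the “biggest win” down to a handful of queries or loops, with no architectural change required.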
How much knowledge is there of microservice architecture within the team?
A single point of knowledge suggests there may be a skill shortage within your team, and that initial training is required before moving forward. This training will empower the team to make better decisions and help handle any bumps along the way.
Is your existing deployment process automated and well-oiled?
If you are performing manual deployments of your existing application, adding more manual deployments will only compound your existing problems; even if a microservice is the right solution, you’ll simply move the problem to deployments. More troublesome deployments will likely lead developers to deploy less frequently, reducing the business’s agility and its ability to implement new ideas and improvements quickly.
What monitoring and alerting is in place for existing infrastructure?
If there is little to no monitoring of your existing applications, servers, databases and services, adding more of any of these will lead to problems that are more frequent and harder to identify. Create a baseline of what good monitoring looks like, then ensure it is met on existing applications and infrastructure before adding more.
Are you adding the appearance of separation, but still maintaining a single point of failure?
If your microservice relies on the single “main” database or on another microservice, there is still a single point of failure, and it is unlikely the microservice will actually offer any benefit. A microservice should operate independently, and any errors that occur should be handled gracefully by all consumers.
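To make “handled gracefully” concrete, here is a minimal sketch of a consumer that degrades rather than fails when a dependency is down. The `recs.internal` URL, the injected fetcher interface and the function names are all hypothetical:

```typescript
// A minimal fetch-like interface so the dependency can be swapped or stubbed.
type Fetcher = (url: string) => Promise<{ ok: boolean; json: () => Promise<string[]> }>;

// If the recommendations service is down, return an empty list so the
// rest of the page still renders, rather than propagating the failure.
async function getRecommendations(userId: string, fetcher: Fetcher): Promise<string[]> {
  try {
    const res = await fetcher(`http://recs.internal/users/${userId}`);
    if (!res.ok) throw new Error("recommendations service returned an error");
    return await res.json();
  } catch {
    return []; // graceful fallback: no recommendations, not a broken page
  }
}
```

The same idea extends to timeouts and circuit breakers; the point is that every consumer has a sensible answer to “what happens when this service is unavailable?”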
There are definitely circumstances where microservices do make sense. Scaling horizontally is more efficient and allows you to handle spikes in traffic without dealing with costly infrastructure on-site or in the cloud.
A microservice has the potential to increase security and reduce duplication. For instance, centralising authentication and authorisation into a single microservice can mitigate a rogue code change opening up your sensitive data to everyone and their dog, and save multiple developers reimplementing the same logic time and again. Obviously there are other processes that need to be applied, and adding a microservice won’t suddenly solve those problems too.
Likewise, the simpler solution might be to version and bundle your authentication code into an npm/NuGet package and import it where required. If spikes of traffic are tanking performance, maybe try piping requests to a queue and adopting an “eventually consistent” approach to your database reads, throttling your requests to maintain overall system performance while still allowing business-critical functions to continue.
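A throttled queue of that sort can be sketched in a few lines. The `Job` type and the concurrency limit here are assumptions for illustration; a production system would more likely lean on an existing broker such as RabbitMQ or a managed queue service:

```typescript
type Job = () => Promise<void>;

// Accepts work immediately but only runs a bounded number of jobs at once,
// so traffic spikes queue up instead of overwhelming the database.
class ThrottledQueue {
  private pending: Job[] = [];
  private running = 0;

  constructor(private readonly maxConcurrent: number) {}

  push(job: Job): void {
    this.pending.push(job);
    this.drain();
  }

  private drain(): void {
    while (this.running < this.maxConcurrent && this.pending.length > 0) {
      const job = this.pending.shift()!;
      this.running++;
      const onDone = () => {
        this.running--;
        this.drain();
      };
      job().then(onDone, onDone);
    }
  }
}

// Usage: writes go through the queue, while reads see a possibly slightly
// stale view of the data — the "eventually consistent" trade-off.
const writes = new ThrottledQueue(2);
writes.push(async () => console.log("order saved"));
```

Callers get an immediate acknowledgement that the work has been accepted, and the queue smooths the spike out over time instead of letting it tank the whole system.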
In a 2016 blog post titled “The Majestic Monolith”, David Heinemeier Hansson discusses how Basecamp has continued with their “Majestic Monolith”, delivering a product available over the web, native mobile apps and desktop apps on Windows and Mac. At the time of writing, a team of 12 developers were maintaining and developing their software, supporting millions of users. Granted, it appears they have introduced a few “shared services” where appropriate; Basecamp ID is noted as falling into this camp, handling shared authentication for all generations of the Basecamp app. This wasn’t without a cost, though, with the smaller systems making it much easier to silo knowledge and responsibilities.
He also raises an interesting point: keeping your system as a monolith can help avoid that problem, keeping responsibility for the product firmly in the “team” realm rather than with the individual.
Ultimately, my final suggestion would be: walk before you run. Ticking the microservices box, just because, could sap valuable resources from other endeavours which might benefit your team, business and code base more significantly. Prefer quick “wins” that solve identified problems over adding extra complexity and making significant changes to your architecture. Once your team is confident there are no other improvements to make, maybe take a look at microservices. Or then again, maybe don’t.