I've only seen bad implementations out there (tens of products), and I have yet to encounter one that looks reasonable. If your average developer has trouble implementing something sane when the only problem is splitting data and functions locally into different components, why in the world would you assume that putting in a network layer would be a good thing? You're multiplying the failure rates, losing good stack traces, and turning any refactoring that has to move data from one part of the architecture to another into a pain in the butt (because requirements never change, right?).
The horrors I've seen... oh boy, the horrors.
The people I've heard praising microservice architectures so far fall into two categories: (1) ex-developers who moved into management, read a few articles or bought a book on the subject, and think it's a great idea (but have no actual practical experience); (2) developers who have just started working on a new one and are still in the honeymoon phase (mind you, not developers who have started working on an __existing__ one that is a few years old; those figure out the mess they are in pretty quickly).
You can screw up not-so-micro services too, but it's far easier to screw up a microservice architecture, and it is far, far harder to fix down the line once you figure out how you messed up, or to re-architect when the thing the PM told you would never be a use case becomes a use case (after they tell you they're sorry, of course).
The issues with microservices are so unbelievably large that it would take too long to enumerate them here, but for me the largest one is this: you are putting arbitrary network partitions into your application, and you CANNOT possibly know when those partitions will suddenly make an operation you need to be atomic impossible (or at best eventually consistent).
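To make that concrete, here's a minimal sketch (Python, with hypothetical service URLs and table names): the same "place an order" operation is trivially atomic inside one process and database, and becomes a compensation-logic problem the moment inventory and orders sit behind different services.

```python
import sqlite3
import requests  # the service URLs below are invented for illustration

# In a monolith: one database, one transaction. Either both writes
# happen or neither does -- atomicity is free.
def place_order_monolith(db: sqlite3.Connection, item_id: int, qty: int) -> None:
    with db:  # BEGIN ... COMMIT/ROLLBACK around the block
        db.execute(
            "UPDATE inventory SET stock = stock - ? WHERE id = ? AND stock >= ?",
            (qty, item_id, qty),
        )
        db.execute(
            "INSERT INTO orders (item_id, qty) VALUES (?, ?)",
            (item_id, qty),
        )

# Across microservices: two network calls, no shared transaction.
# If the second call fails you need compensation logic (a "saga")
# and you live with a window of inconsistency either way.
def place_order_microservices(item_id: int, qty: int) -> None:
    requests.post("http://inventory/reserve",
                  json={"id": item_id, "qty": qty}).raise_for_status()
    try:
        requests.post("http://orders/create",
                      json={"id": item_id, "qty": qty}).raise_for_status()
    except Exception:
        # Compensating action -- and this release call can fail too.
        requests.post("http://inventory/release", json={"id": item_id, "qty": qty})
        raise
```

And note that the compensating call in the `except` branch can itself fail, which is exactly the class of failure mode you signed up for.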
In general, it is also excruciatingly difficult to iterate on microservices.
Then there is deployment, service meshes, k8s, etc. that you have to wrangle with, all because people refused to draw bounded contexts inside their application without turning them into separate services. It is much simpler to design your monolith around separate contexts than to introduce all the error handling you need for network partitions.
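What that looks like in practice, roughly (context and function names invented): the contexts stay separate at the module boundary, and the only thing crossing that boundary is a plain function call.

```python
# One process, separate contexts. "billing" and "shipping" only talk
# through explicit function calls -- a refactor is a rename, an
# "outage" is a stack trace, and moving data between contexts is an
# ordinary code change, not a cross-repo migration.

# --- billing context (in a real codebase, the package billing/) ---
def create_invoice(order_id: int, amount_cents: int) -> dict:
    return {"order_id": order_id, "amount_cents": amount_cents, "status": "open"}

# --- shipping context (the package shipping/) ---
def schedule_shipment(order_id: int) -> dict:
    return {"order_id": order_id, "status": "scheduled"}

# --- application layer wiring the contexts together, in-process ---
def complete_order(order_id: int, amount_cents: int) -> None:
    invoice = create_invoice(order_id, amount_cents)  # plain call, no retries
    shipment = schedule_shipment(order_id)            # no mesh, no partial failure
```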
I personally don't understand why Elixir isn't more loved by these microservice zealots: it lets you get extremely far without adding network partitions, then switch to message passing when you need it, and then to networked message passing, all extremely smoothly, as it makes sense for your app.
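I can't do Elixir justice in a few lines, but the shape of the idea, sketched in Python with invented names, is this: write your handlers against a send/receive interface, so the transport can go from an in-process queue to a networked one without the business logic changing. OTP gives you that progression natively; this is only the outline.

```python
import queue
from typing import Protocol

class Mailbox(Protocol):
    def send(self, msg: dict) -> None: ...
    def receive(self) -> dict: ...

# Day 1: in-process mailbox, zero network partitions.
class LocalMailbox:
    def __init__(self) -> None:
        self._q: queue.Queue = queue.Queue()
    def send(self, msg: dict) -> None:
        self._q.put(msg)
    def receive(self) -> dict:
        return self._q.get()

# Later, if (and only if) you need it: a networked mailbox with the
# same interface. The handler below never has to change.
def handle_orders(mailbox: Mailbox) -> None:
    while True:
        msg = mailbox.receive()
        if msg.get("type") == "stop":
            break
        # ... process the order message ...
```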
The architecture in the article here seems to be driven by a certain type of OO data/interface-modelling:
In school/courses/tutorials/books we are initially taught a type of object-oriented data-modelling at the level of: objects in the world are objects in our programs. A person is a Person class, a shopping cart is a ShoppingCart class. You can put stuff into the ShoppingCart, it belongs to a Person and so on.
I call this "Kindergarten-OO": It puts labels on things in the world and then models software interactions 1:1 based on that, as if software is a simulation of the world.
This conflates all kinds of things and the actual engineering and user-interaction requirements get smeared all over your codebase.
Now, this kind of microservice design looks a lot like that, but lifted up to a network of computers.
You end up with suggestions like "Design applications around eventual consistency" (from the article). Why? Because you didn't design your architecture around timing, processes, data structures/flow, and interactions, but around this made-up simulation with ~N^2 communication channels.
A process that takes a longish time and maybe doesn't need to be real-time consistent: analytics. Let that be eventually consistent; close the feedback loop each Friday and enrich your data, which then informs the rendering (ordering, emphasis of products and so on) on your front-end.
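In code, that feedback loop is nothing more exotic than a batch job (sketch below, with made-up table names), run from cron on Fridays rather than wired through a real-time event mesh:

```python
import sqlite3
from collections import Counter

# Weekly batch job: fold the week's click events into a small
# "popularity" table the front-end reads when ordering products.
# Nothing here needs to be consistent within seconds -- Friday is fine.
def enrich_product_popularity(db: sqlite3.Connection) -> None:
    counts = Counter(
        row[0] for row in db.execute(
            "SELECT product_id FROM click_events WHERE ts >= date('now', '-7 days')"
        )
    )
    with db:
        for product_id, clicks in counts.items():
            db.execute(
                "INSERT INTO product_popularity (product_id, weekly_clicks) "
                "VALUES (?, ?) ON CONFLICT(product_id) DO UPDATE SET weekly_clicks = ?",
                (product_id, clicks, clicks),
            )
```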
An architecture around interactions, flow, data-processing, security/auth and so on looks much more like a decision tree or state machine than like a zettelkasten graph.
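To give a feel for the difference, here's a toy checkout flow as an explicit state machine (all states invented): the entire set of legal interactions sits in one readable, testable table instead of being implied by ~N^2 chatter between "noun" services.

```python
from enum import Enum, auto

class Checkout(Enum):
    CART = auto()
    PAYMENT_PENDING = auto()
    PAID = auto()
    SHIPPED = auto()
    CANCELLED = auto()

# The whole interaction surface, in one place you can read and test.
TRANSITIONS: dict[Checkout, set[Checkout]] = {
    Checkout.CART: {Checkout.PAYMENT_PENDING, Checkout.CANCELLED},
    Checkout.PAYMENT_PENDING: {Checkout.PAID, Checkout.CANCELLED},
    Checkout.PAID: {Checkout.SHIPPED},
    Checkout.SHIPPED: set(),
    Checkout.CANCELLED: set(),
}

def advance(state: Checkout, target: Checkout) -> Checkout:
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state.name} -> {target.name}")
    return target
```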
Even there, the "clear" separation of concerns means it becomes difficult to do some things that seem like straightforward business goals. You'd like to bounce mail to a user's aliases when they are over quota, but still allow messages to that user from support. Unfortunately the milter and the alias-resolution hooks are different services and don't get to communicate, unless you use some other functionality... I'm a bit hazy on the specifics, but I remember needing to bend over backwards sometimes because you couldn't chain additional information through. This comes up quite a bit when people compare it to exim, an SMTP monolith where you can more easily use information from one service in another.
I think postfix is well built, but the architectural choice has consequences and it isn't a silver bullet.
I only have a problem with the label "microservice". A service is an independent product, that's all; "micro" says absolutely nothing about the size of that product. It's like "serverless": it doesn't explain what the thing actually is, and so it quickly becomes a marketing buzzword.
What is key is to have a scalable time-series DB to monitor the interactions of the components and services (and microservices) that comprise your platform. Measure the KPIs around hardware, OS, software, and service interactions. Monitor and alert on them via real-time dashboards.
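As a hedged example of what instrumenting those interactions can look like, here's a sketch using the prometheus_client Python library (the metric names are made up); a time-series backend such as Prometheus then scrapes, stores, graphs, and alerts on the numbers:

```python
import time
from prometheus_client import Counter, Histogram, start_http_server

# Invented metric names; the point is to measure every cross-service call.
REQUESTS = Counter("svc_requests_total", "Requests handled", ["service", "status"])
LATENCY = Histogram("svc_request_seconds", "Request latency in seconds", ["service"])

def instrumented_call(service: str, fn):
    """Wrap any call to another component, recording outcome and latency."""
    start = time.monotonic()
    try:
        result = fn()
        REQUESTS.labels(service=service, status="ok").inc()
        return result
    except Exception:
        REQUESTS.labels(service=service, status="error").inc()
        raise
    finally:
        LATENCY.labels(service=service).observe(time.monotonic() - start)

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for the scraper to pull
```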