“Digital Transformation” of Boilerplate CRUD “Middle Tier” into 100x More Useless Cloud-Native Plumbing
Let’s turn a boring and not very scalable monolithic washing machine into a robust enterprise-grade mesh of event-driven microservices… Did I get your attention with this Cloud-native digital transformation initiative?
I’d like to make just one comment about my blog before I embark on that fascinating re-architecture journey. I hope you like the topics I choose. My recent job search prompted this post, because not only did I have to recall the common IT wisdom I rejected when I started working on Px100, but I also had to deliver a Hollywood-level performance to convince my interviewers how much I love coding by the book, e.g. Spring Boot using Spring Data to connect to Mongo.
I’m not a nihilist or contrarian. I’m merely an average lazy dev trying to write less code. 100x less, which, as you can imagine, requires thinking a bit outside of the box. Forgive me for not writing enthusiastic articles on the inner workings of, say, AWS API Gateway, or even the Spring framework I worship because it made my code-shrinking Px100 platform possible. There is enough writing on those topics already: from the official documentation to ecstatic blog posts. I’m more interested in controversial, interview-killing (technical) topics that may seem high-level compared to describing the intricate details of some framework or Cloud service.
The Problem
I do use Spring Boot in Px100-based projects. I just don’t consider it the best thing since sliced bread compared to the true marvel: the core Spring framework, which I embraced at version one back in 2002 (vs. Boot’s 2014 debut) amid the J2EE (Java 2 Enterprise Edition) EJB era. “J2EE” still appears in hiring managers’ job descriptions even though the current Java version is 16, and the “EE” part was euthanized in version 8. I hope that puts things in perspective for you. This post, however, is about the newer not being better. Amateur J2EE monoliths will long outlive current API gateways and “service meshes”, like COBOL outlived C++ and maybe even Java.
Here’s how the problem typically manifests itself. Have you ever wondered…
Why does it take several days (or worse, weeks) to add a new data entry field or conditionally hide/show a section of the screen based on the user role and current workflow status? It should be a five-minute change in well-designed code, affecting only a couple of lines of said code instead of Martin Fowler’s “Shotgun Surgery” (an anti-pattern from his seminal Refactoring book).
That happens because of the dramatic (10–100x) increase in codebase size, primarily to connect services to each other, almost always through several proxies, gateways, and other couplings. Something that wasn’t even a REST service before is now 10+ services with their own APIs and other (internal) plumbing, let alone the external kind: API gateways, containers, orchestrators, etc. To be completely honest, the issue above has existed for decades due to low code quality alone. Fowler wrote his book long before the microservice hype. Microservices simply increased the amount of the same mediocre code and added more points of failure due to more linkages between services. The biggest question, however, is who needed them in the first place.
Why would one separate a typical (sorry for a beat-up example) Customers+Orders “monolith” into two: Customer and Order REST services? For a reason other than connecting them via two Kafka topics to recreate the Event-Driven aka Saga pattern from one of the countless “Enterprise/Advanced/Distributed/Cloud/Whatever Microservice Architecture Patterns” books, whose authors also offer courses on those patterns at $400+ a pop to corporate “citizen developers”. Such a reason must be Separation of Concerns, because those damn Customers and Orders have tons of intricate “business logic” in their “middle tier”, right? No longer comprehensible by the dev team, so it should be split into smaller comprehensible pieces (and more “resources” hired to comprehend those, I guess).
Spoiler alert: that mythical logic is nowhere near the “back end”. The developers know that, yet they are still willing to pay the price of breaking the monolith: millions in man-hours to add and maintain the additional code and DevOps configuration. Obviously it is not coming out of their pockets, nor have they ever cared about the company’s bottom line, because the company never cared in turn (a chicken-and-egg problem).
One way or another, it is the typical failure to “align” the employee’s goals with the company’s, which tons of HR- and PM-flavored MBA dissertations have already been written about. Don’t we all wish for everyone in the company to feel like its founder/owner, meaning to care about the end result both in terms of completeness/quality and time? Unfortunately (and I am talking about happy software engineers, not disgruntled underpaid/overworked ones), the senior ones aka slash-architects care about the following at best: trendy new languages, nicely formatted code, unit test coverage, cool new frameworks and external services to do neat things (typically of the logging or service orchestration nature), the new (NoSQL) database paradigm, and all other tricks and “patterns” learned from the brightest industry minds in their clever blog articles and tech conference presentations.
There is nothing wrong with that, other than those bright minds are likely to talk in great technical detail about some single framework’s greatness and neat tricks: all little things compared to the big picture. Though any dignified architect will always argue how infrastructure/deployment excellence can save the day, because it helped Netflix and other big corporations with their massive scalability challenges. Well, first, are you Netflix size-wise? Second, are you in the same infrastructure-centric business as Netflix? And third, knowing that you are not, how exactly does it solve your customer’s more pressing (application functionality) needs and problems, and what was so terribly wrong with the old (also Cloud) “monolithic” deployment?
No self-respecting developer, nor any of his/her role models speaking at conferences, wants to focus on the employer’s boring “business logic”; they would rather improve the technical plumbing. Yet again. Investing time into learning the business doesn’t help engineers put trendy abbreviations on their resume to command a higher salary at their next job, which is likely to automate different business processes anyway. Sadly, outside FAANG with their unquestionable career future, the only (financial) reward for learning the business and caring is, well, keeping your job.
Not surprisingly, the business-focused “big picture” has become the vague non-technical executive consulting specialty of Gartner, Deloitte, and the like. My personal down-to-earth understanding of the “big picture” comes from my bootstrapped startup founder experience. When it’s my money invested in some project, I care about two things:
- Comprehensive and robust coverage of my customers’ automation needs: delivering the functionality that no conventional 100+ dev team can ever build, for reasons I’ll explain below.
- The shortest time to market (TTM). Because it’s my money, duh. I will not, however, cut any corners, because as an engineer I know it’ll cost me more down the line.
I fulfill these two impossible requirements by inventing new tech to do the heavy lifting of what matters most: #1 above. Operations/DevOps come second. Mind you, I’m not an abstract, say, car designer. I do care about the factory my cars are going to be mass-produced at. That factory (packaging/containerization/orchestration, Cloud deployment, etc.) is fine with a few little improvements to cover my new needs.
It’s not just the allegedly demotivated developers who stopped caring about the core customer needs: the business logic. In a silly effort to eradicate “expensive” programming, the whole business software industry shifted its focus from engineering work (developing programs for real customers/users) to technician responsibilities focused on infrastructure, whereby DevOps is supposed to magically fix slow or malfunctioning code by deploying it to an AI-backed, self-healing, elastic, etc. “Cloud”. Predictably, the infrastructure suffered too, since more moving pieces (now of the DevOps variety) make the end result fragile.
I mentioned that I do not enjoy being a contrarian. However, that is the only way right now to stay a true engineer (vs. technician) and focus on the solid product first. I am writing this post in the hope that my engineer’s point of view will resonate with IT leadership. I rest my case if the engineering of the core product is not important at all vs. the technician responsibilities of hosting such a SaaS product: the company growing by pumping money into marketing and sales to push its mediocre product, while hiring more and more programmers to fix more bugs, as every new feature breaks several modules.
Finding and fixing bugs in that already spaghetti code, BTW, becomes exponentially harder, as the programmer now needs to recreate most of the Cloud DevOps complexity on his/her local computer: Docker containers, imitated Cloud services (e.g. LocalStack), message queues like Kafka, and so on. You wish you only had to go through the painful process of setting up the aforementioned services once. Some of the configuration needs to be updated daily due to expiring credentials.
Those expiring credentials make sense, like everything else done by the book. No one has ill intentions implementing book/conference goodness. “Architects” just tend to forget about the consequences and the real business cost of their ivory towers. Doing more things by trendy books outside of the business revenue/cost context means wasting precious time (meaning money) on nice-to-have things. Like 90% of the effort wasted on irrelevant plumbing, even when justified by a valid issue, e.g. slow SQL queries.
I’ll skip NoSQL databases, which eliminate many eternal SQL problems. Writing efficient queries (uhm, a little shorter than a page of 20 nested selects) still matters. API gateways and elastic container orchestrators are not going to make poorly written queries any faster. In fact, splitting your “monolith” into “microservices” with their own separate databases breaks the relational nature of SQL databases and instantly creates the infamous N+1 selects issue. Only this time you can do nothing about it, e.g. by rewriting 20+ nested sub-selects into joins. Granted, if one didn’t have a clue how to join tables in the first place, it’ll never matter.
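To make the N+1 issue concrete, here is a minimal sketch with hypothetical Customer/Order names; two in-memory maps stand in for the separate “microservice” databases. Once Customers and Orders live in different services, the report has no choice but to issue one round trip per customer on top of the initial list query, where a single SQL JOIN used to suffice:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical in-memory stand-ins for two separate "microservice" databases.
public class NPlusOneDemo {
    static final List<String> CUSTOMERS = List.of("alice", "bob", "carol");
    static final Map<String, Integer> ORDER_COUNTS =
            Map.of("alice", 2, "bob", 0, "carol", 5);

    static int queryCount = 0;

    // Each call simulates a remote query to the separate Order service.
    static int fetchOrderCount(String customer) {
        queryCount++;
        return ORDER_COUNTS.getOrDefault(customer, 0);
    }

    // The N+1 pattern: one query for the customer list, then one per customer.
    static List<String> report() {
        List<String> lines = new ArrayList<>();
        queryCount++; // the "1": SELECT * FROM customers
        for (String c : CUSTOMERS) {
            lines.add(c + ": " + fetchOrderCount(c)); // the "N"
        }
        return lines;
    }

    public static void main(String[] args) {
        report();
        // With one SQL database this would be a single JOIN;
        // across two services it is 1 + N round trips.
        System.out.println("queries issued: " + queryCount); // prints: queries issued: 4
    }
}
```

With 3 customers that is 4 round trips; with 10,000 customers it is 10,001, and no amount of gateway or orchestrator tuning brings the JOIN back.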
This is just one example of how unnecessary complexity of the DevOps kind not only wastes developers’ time that could have been applied to, e.g., rewriting the monstrous queries mentioned above, but also leads to more such atrocities, i.e. it affects the code itself, not just the (more complex and fragile) infrastructure.
Ever Thought of Applying IT Architectures to Normal Tangible Products?
Complex machines like cars are often used to explain design decisions in robust enterprise software, though the predictable, straightforward nature of mechanical, electrical, and civil engineering raises the apples vs. oranges concern. I have a slightly simpler example, perfect for this post: a washing machine.
Take a look at the picture above, showing a typical washing machine “architecture”. Other components are omitted for simplicity, like various valves, hoses, and wires. “Deployment”-wise it is hooked into your hot- and cold-water supply plus the electrical outlet, just like your typical (please forgive me for saying this terrible word: monolithic) application happily living on some server, hooked up to the database and OS (file system, network, etc.). Let’s embrace the Cloud era and break up that monolith, turning it into a trendy microservice “mesh”, shall we?
I’ll start with the classic Separation of Concerns perspective, which, according to Martin Fowler (whom I have tremendous respect for), was the original reason behind the microservice idea at Netflix. According to that hard-to-believe-nowadays story, the original Netflix microservices had nothing to do with performance, failover, resilience, and the other scalability concerns every reputable Enterprise Architect impresses his C-level stakeholders with when the topic switches to microservices. Those concerns are often valid. The question is whether they apply to your problem at hand, e.g. delivering a robust end solution to your customers. Shouldn’t you always start with the functional requirements (engineering)? Unless, that is, you are in the technician’s business of non-functional facilitation, administration, and maintenance.
It is all about context, e.g. engineer vs. technician responsibilities. Take for example Scrum (sorry, couldn’t resist), whose creators co-authored the Agile Manifesto with Fowler, if you didn’t know. Does it work for a highly motivated (starting with money) lean team of engineering experts, the perfect self-managing team Fowler and the other authors of the Agile Manifesto envisioned? Have you seen such teams? Obviously Scrum has the opposite (demotivating) effect on a typical team of underpaid and abused code monkeys (lighthearted industry jargon; I consider PM lingo like “resource” more derogatory) churning out boilerplate code, annoyed daily (or twice a day, as I’ve learned recently) by a micromanaging Product Owner (I will never stop laughing at the audacity of this title) or Scrum Master. Same with microservices: valid at Netflix to seamlessly stream videos to billions of subscribers, but possibly overkill in your workflow- and regulation-heavy, classic data-entry-and-reports-centric business application. Let alone a washing machine.
Let’s continue our exciting re-architecture of it, starting with separating all of the “concerns” (functions):
- Sense the load and fill the tub with the appropriate amount of water and detergent.
- Wash the load by wiggling/jerking back and forth or similar motion.
- Spin to empty the water.
- Rinse the clothes with clean water.
- Spin again.
- Repeat steps 2–5 as necessary.
- The next logical step is adding the gas or electric drying capability, as it’d essentially mean pumping hot air through the same tub while gently spinning it (my amateur understanding of today’s advanced all-in-one washer/dryer appliances).
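For contrast, here is roughly what the whole “monolithic” wash program above amounts to in code: a trivial sequential loop. The class and step names are mine and purely illustrative; the point is that the orchestration we are about to “distribute” fits in a dozen lines.

```java
import java.util.ArrayList;
import java.util.List;

// The entire "monolithic" wash program from the steps above, as one plain loop.
public class WasherCycle {
    public static List<String> run(int cycles) {
        List<String> log = new ArrayList<>();
        log.add("fill");         // step 1: sense the load, fill water/detergent
        for (int i = 0; i < cycles; i++) { // step 6: repeat steps 2-5 as necessary
            log.add("wash");     // step 2
            log.add("spin");     // step 3: empty the water
            log.add("rinse");    // step 4
            log.add("spin");     // step 5
        }
        log.add("dry");          // step 7 (optional): pump hot air through the same tub
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run(1)); // prints: [fill, wash, spin, rinse, spin, dry]
    }
}
```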
If we were to “architect” a washer machine in a microservice manner, we’d do the following:
Yeah, I know, the icons. Can you name their AWS acronyms? Good for you (interview-wise). When you stop laughing, let me point out the reason why I chose a washing machine vs., e.g., a more complex car. It processes the load in stages, pretty much like most software systems process data, so all the streaming, pub/sub, and other paradigms apply.
Unfortunately, the unnecessary complexity above is only the beginning. In order for that, uhm… “modular design” to function, it needs lots of additional pieces in between, passing the load along in a resilient, secure, and performant manner according to your SLAs. I wish I were just making fun of those “non-functional requirements” (vs. the functional one: washing your damn clothes). Sadly, every improvement of the plumbing between our contraption’s modules is now 100% justified. It’s kind of like arithmetic. Once you’ve made the initial mistake and gone into negative territory, multiplying that negative number by positive ones only makes it more negative. No matter how you improve the modules themselves or the plumbing between them, you are doomed, because the result will always be a negative number. Kind of like drowning in a swamp: any movement drags you down.
The evolution of the “distributed” washing machine above is pretty obvious. You’ll need to add sophisticated transport means like pipes or conveyor-like belts, with all kinds of optical checking and fault-mitigation mechanisms, e.g. some robotic arm to reach down and pick up the piece of clothing that fell off the belt. On top of that you’ll need to add smart rerouting, i.e. if something is still dirty, it needs to be routed back to the washer. Overall it’ll quickly become very complicated, meaning impressive enough for your bosses to justify your (e.g. consulting) pay.
They’re not, however, going to be happy with the final cost: neither the initial development, nor the orders-of-magnitude more expensive ongoing maintenance of something so complex and fragile. The latter typically comes as a surprise to IT decision makers of the “no one got fired for choosing IBM” kind. Replace “IBM” with “AWS” in 2021.
That “distributed” washing machine can easily cost 10–100x the most advanced “monolithic” appliance of the same kind. You (the architect) can argue that the modular design allows you to utilize all functions simultaneously, splitting the load into, say, 10 parts and concurrently processing it 10 times faster, assuming all of the pieces and the plumbing between them work flawlessly, which is a pipe dream, pun intended. That hypothetical scalability could even be a step in the right direction if we were designing a “wet-cleaning” factory to replace an old dry-cleaning one: some kind of next-gen automated laundromat, where the customer dumps the dirty clothing into some loading bin, and the system processes it in a “multi-tenant” manner, keeping track of who submitted which load.
Are you in that (laundromat) business? Netflix had its reasons for microservices. It doesn’t develop multi-tenant business process automation SaaS. If you simply want to improve your personal laundry process (and have space in your house), you can buy 10 of the best-in-class washer appliances for the price of the contraption above. There are your failover and scalability, in a straightforward, self-contained, horizontal manner. Just start the next washing machine if your load is too big or requires a different processing cycle. But those 10 machines would be old-school monoliths, wouldn’t they? Nothing to impress your superiors with.
Modularization still matters: the most efficient composition of the tub, motor, and pump with the minimum couplings, hoses, and valves connecting them. Yes, minimalism instead of the typical “the more the merrier” microservice approach.
Anyone Ordered Italian?
Chances are you’ve seen this picture before and understand the root cause of the problem: spaghetti (code). Everyone knows that. Repackaging the same ugly code as microservice ravioli makes things exponentially worse due to the tons of DevOps plumbing added to each module.
If I am to speculate about the Italian dish of the 2020s, it’ll be an all-topping pizza of little pepperoni/ham/sausage slices (lambdas: the expected meat of your application) drowned in 10 marinara and cheese sauces: VPCs, subnets, and containers of all kinds. Plus all kinds of onions, pineapples, dried tomatoes, and olives: various API gateways, message queues, identity providers, and other Cloud services; with the dough roughly comparable to orchestrators of the universal (Kubernetes) and Cloud-native (Fargate) kind. That “everything on the menu” pizza is known today as the Service Mesh, of the same hastily and mindlessly mashed-together spaghetti kind. Conventional thinking (assembling third-party pieces w/o inventing your own) always leads to spaghetti.
However, spaghetti is not the main problem IMO. The real question you should be asking is not whether the CRUD plumbing you call your “back end” consists of quality code, but whether you need to write that code at all, let alone build DevOps “meshes” on top of it. The best code is the code you don’t have to write.
My vision of the perfect world is an eternally lean startup founded by five expert programmers doing the work of a typical not-so-lean eternal “startup” of the cheap Initech kind, which employs 100+ engineers and hires more to keep up with never-ending firefighting, as every bugfix or piece of new functionality results in five new bugs.
I’ll leave out the self-explanatory financial side of paying the 5 FAANG-level experts 2–3x more (which is what separates dysfunctional Initechs from FAANG compensation-wise), while saving millions by not employing the 95 code monkeys. The most important part business-wise is the revenue/cost dynamic over time: infinite (sales) growth at an eternally fixed headcount. Alright, even if the headcount increases 20% (meaning one person), it’s not going to break the bank, is it? And that’s just the beginning: the freed-up time and a good extension-centric design (which API gateways, orchestrators, and VPCs are not going to help you with) lead to quick implementation of critical unique functionality for the new customers (SaaS tenants) and verticals you’d never have dared to approach with your dysfunctional traditional 100+ team.
Before I continue, I need to remind you of the most important nature of business process automation SaaS: every client aka tenant has a unique process. The requirements may be only 20% unique (using the 80/20 rule) or even 5%, but that 5% always requires effort comparable to the 95% of common functionality. Building enterprise SaaS is an exercise in ultimate code reuse and infinite extensibility — vs. the mythical Swiss-knife “flexibility” (Fowler’s Speculative Generality anti-pattern) that manifests itself in hundreds of checkbox-filled config screens to turn features on and off, when only one of those checkboxes is relevant for a given tenant due to its unique functionality.
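Here is a minimal sketch of what I mean by extension-centric design, with purely hypothetical names: the common 95% lives in one default policy, and a tenant’s unique 5% is a few lines of real code registered for that tenant, instead of yet another checkbox screen.

```java
import java.util.Map;

public class TenantExtensionDemo {
    // The extension point: one small interface, not a screen of checkboxes.
    interface InvoicePolicy {
        double total(double amount);
    }

    // The common 95%: every tenant gets this by default (flat $5 shipping).
    static final InvoicePolicy DEFAULT = amount -> amount + 5.0;

    // The unique 5%: one tenant's special rule (free shipping over $100),
    // expressed as a few lines of real code, not configuration.
    static final InvoicePolicy ACME = amount -> amount >= 100 ? amount : amount + 5.0;

    static final Map<String, InvoicePolicy> OVERRIDES = Map.of("acme", ACME);

    static double invoice(String tenant, double amount) {
        return OVERRIDES.getOrDefault(tenant, DEFAULT).total(amount);
    }

    public static void main(String[] args) {
        System.out.println(invoice("globex", 100.0)); // prints: 105.0 (default policy)
        System.out.println(invoice("acme", 100.0));   // prints: 100.0 (tenant override)
    }
}
```

Onboarding the next tenant with a unique rule means adding one small class, not touching the shared 95% or flipping hundreds of switches.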
Hope I explained that clearly. Almost-turnkey ERPs and magic no-code tools are as much of a myth as 1990s 4GLs and the rest of the never-ending attempts to eradicate “expensive” programmers. I’m all for the elimination of my profession. I’m not the “job security” (via creating something complicated only I understand) kind of employee. Unfortunately, short of true AI writing all the code, the only Word- and Excel-like shrink-wrapped “universal” business software is, well, Excel.
Do Node and React address the issue of infinite extensibility? No, they do not, even with clever state management concepts like Redux and cutting-edge paradigms like Functional Reactive Programming. It is still your job to cleverly apply those in your specific business domain. Nor does the Spring Boot documentation even remotely talk about such topics.
The core Spring framework, based on the Inversion of Control (IoC) and Dependency Injection (DI) patterns, does. However, only 0.0001% of Java developers (including yours truly) use it this way. Why not? Because they don’t focus on the meticulous object-oriented modeling of an intricate business process that leads to an elegant orthogonal design with infinite extensibility. They don’t care about the business process or customers at all. That’s the Product Manager’s, Product Owner’s, etc. (titles that replaced the good old Business Analyst) job, right? Compensation (and hence survival vs. creativity) issues aside, chances are the creativity was not required in the first place, as devs were handed an endless “backlog” of bugs to fix “yesterday”. Even when it was a greenfield project, they copy-pasted the code from their previous rushed bugfixing projects. Plus, there aren’t books on the topic anymore. You won’t find a quality OOP guide in the Spring documentation. It just explains how to configure data sources.
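A tiny framework-free sketch of that IoC/DI idea, with hypothetical names: the business class never constructs its own collaborators, so extending the system means adding a class, not editing existing ones.

```java
public class IocDemo {
    interface Formatter { String format(String name, double value); }

    // Inversion of Control: ReportService doesn't construct its collaborator.
    // The Formatter is injected, so new output formats are new classes,
    // not edits to this one (the orthogonal, extension-centric design).
    static class ReportService {
        private final Formatter formatter;
        ReportService(Formatter formatter) { this.formatter = formatter; }
        String report(String name, double value) {
            return formatter.format(name, value);
        }
    }

    static final Formatter CSV = (n, v) -> n + "," + v;
    static final Formatter JSON = (n, v) -> "{\"" + n + "\":" + v + "}";

    public static void main(String[] args) {
        System.out.println(new ReportService(CSV).report("revenue", 9.5));
        // prints: revenue,9.5
        System.out.println(new ReportService(JSON).report("revenue", 9.5));
        // prints: {"revenue":9.5}
    }
}
```

Spring’s container does the `new ReportService(...)` wiring for you, but the design discipline, programming against small interfaces and injecting implementations, is what matters, not the container.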
To be fair, the art of OOP has never been popular among “enterprise” “citizen developers”, and if you ask one about it, you’d get a typical “what do you want from me?” look. OOP is not even remotely worshipped like the far less important TDD. To test what — your average boilerplate CRUD code copy-pasted from another project?
The grand mommy of modern distributed architectures, Microsoft’s three-tier DNA of the late ’90s, was all about DCOM plumbing. Then came its Java reincarnation: EJBs. The EJB-killing Spring was based on entirely different paradigms like IoC that beg for good OO design. Unfortunately, handed to the same mediocre developers thinking in three tiers, it was and still is used in the same DCOM and EJB plumbing manner, only with 10x more moving parts.
I bet most Java developers with Spring on their resume knew nothing about it before Boot (2014). They only refresh revolutionary Spring concepts like IoC for interviews, the way they cram the GoF patterns book to answer interview questions. FYI, Spring was invented back in 2002 and underwent several radical improvements long before the also revolutionary, but still only packaging/deployment-focused, Boot.
Try to Own Your Work Business-Wise, Even on a Being-Owned Salary
I tried my best to avoid turning this post into leader vs. follower rhetoric. The majority of developers do nothing wrong coding their plumbing by the book, like generations of developers did before them. A “generation” here means 18 months: the typical IT project timeframe before the project predictably fails, with about 70% probability. Google Michael Krigsman’s blog on ZDNet; that number used to be there, not to mention the MBA-flavored Change Management studies acknowledging the same statistics. Corporate IT departments keep living off those failures, essentially being the R&D money sink — with kickback-heavy money transfers to buddies and relatives via staff augmentation and contracts. One’s wasted money is another’s earnings.
That failure-forgiving embezzlement commotion (culminating every few years in debt write-offs aka bailouts) means no real engineering, since the end result (reining in the out-of-control business process complexity and convoluted regulations) no longer matters. The budget-eating process does. Without the need to solve the problem, there is no need for new technology to make it happen. Business software development tools and technologies, stagnant for 20+ years, are tailored towards corporate IT headcount schemes.
Unfortunately, a legit software product company has no alternative in the same space but to “buy IBM” (AWS), so to speak. Ever wondered why Google and Amazon started inventing almost all of their internal tools and infrastructure, from UI to databases, thankfully offering much of it to mankind as open source? Because the IBMs, Oracles, and Salesforces of the world had nothing to offer to achieve the end result, choosing to focus on the hourly-billed process instead.
If one, however (sorry for the corny phrase), does think outside of that rusty and moldy 20+ year old box, i.e. does not limit him/herself to AWS or even “open-source” offerings, there are slightly different books that have been available for a while: Bjarne Stroustrup’s, Grady Booch’s, Erich Gamma’s, Joshua Bloch’s, and Martin Fowler’s. Instead of the average trendy “Enterprise Microservice Patterns” variety to cram for job interviews and C-audience presentations, or the algorithmic puzzle ones to ace FAANG interviews, equally detached from the end goal: a working application.
Revisit the OOP fundamentals and dare to design and build your own “frameworks”: small and targeted. Trust me, it’ll lead to 100x less code overall, compared to relying on the limited choices in someone else’s “ecosystem”. Don’t reinvent the wheel. Just question the tools available to you. Have you ever?
Hope I convinced you to use minimalism as your main code quality metric. Again, for all of the solution’s code, as if you owned the company that serves your customers. Just pretend, can you? Forget the low pay and the push to dilute it even more via unpaid overtime at your “day job”. Technical breakthroughs only happen when you own your (technology) business one way or another. Otherwise this (technical) discussion would not be pure, because your true goals would differ from delivering the best solution for your customer with minimal effort, which over time means fewer bugs, because you didn’t cut any corners (and still wrote 100x less code, working “smart” instead of “hard”).
Those other considerations, goals, and hence decisions to complicate the already complex yet useless plumbing (by adding, e.g., a layer of API gateways on top of it) are: learning trendy stuff to quit and apply for a better-paying job, escaping your messy codebase to write “cutting edge” stuff, and, last but not least, “why not”.
I know, a poor underpaid and overworked dev never questions things, and probably thinks he’s in the business of rewriting some old mainstream code into the new mainstream kind, e.g. converting monoliths to microservices. Especially if those rewrites of the failed (at a 70–90% rate) projects happen every 18 months anyway, duh.
In reality, like I said, a completely different entrepreneurial mindset is required, even from a purely engineering point of view. I know, my (failed, but still a founder’s) definition of “entrepreneurial” is vastly different from the typical employer’s definition: coping with longer hours and lower compensation. I get it, your employer never pays you to even remotely feel like a cofounder.
Before I get off my soapbox (as I hate mixing technology and meritocracy), since we brought up money in the context of this chicken-and-egg problem (should the employer pay first to inspire you to invent, or do you need to invent first and negotiate the price with the employer?)… Don’t you think any negotiation (initial offer or pay raise) should be in the context of a groundbreaking invention?
If your side of the bargain is to write the same unneeded CRUD code “better”, how does it justify your pay raise business-wise? Because you mastered a more modern way to write the same spaghetti plumbing, instead of the art of modeling the business logic via meticulous fine-grained OO hierarchies? CRUD code monkeys deserve to be “outsourced”. Or better yet, eradicated from the face of the Earth, freeing space for the true programmers our planet has a limited supply of, India included. At the end of the day it is all about true (quality by definition) code, not DevOps or other magic to turn turds into candy by wrapping them in API gateways.
Effort and Cost Implosion
“Robust orchestration” of (the more the merrier) microservices always makes sense at the C-level, from 35,000 feet, during an elevator pitch, whatever you call it. It’s logical, the icons (above) look great, and some Gartner or Accenture consultant’s pitch delivery is slick. Only after one has written all of the code and config files to turn such a dignified “architecture” into a working solution does he/she realize how much tedious typing that allegedly time-saving, reuse-driven approach requires.
If I’ve convinced you to write less code, it goes like this. First, you’ll write less plumbing code and author less Cloud config, which can also be viewed as (and often is) code: Terraform and the likes. Second, you’ll use that freed-up “bandwidth” to code your business logic properly: via meticulous and intricate (fine-grained) object-oriented modeling, whether it’s a robust React/Redux SPA (Single Page Application) UI or a true (e.g. Java) “Middle Tier”. Plumbing or not, over 25 years in the US alone, spanning C++, C#, and Java, I have yet to see any non-plumbing code in the Middle Tier. Have you ever heard of anyone modeling Customers/Orders/Bills/etc. as smart classes with state, identity, and behavior (Booch)? Let alone pub-sub, visitor, and other patterns at that level, where they matter most. Your “Customers” and “Orders” are dumb Lombok-annotated database entities, if not even dumber DTOs (the Java reincarnation of C structs), aren’t they? Because the “logic” belongs to the procedural “services”, where it is predictably never found either, as your services form a wrapper-on-top-of-wrapper DTO passthrough. All of that can go away.
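To illustrate the Booch-style “smart” class I’m contrasting with a Lombok-annotated DTO, here is a hypothetical Order modeled with state, identity, and behavior in one place, so the invariants live next to the data they protect instead of in some procedural “service”:

```java
import java.util.ArrayList;
import java.util.List;

// A Booch-style class: state, identity, and behavior together,
// instead of a getter/setter DTO whose logic lives in a remote "service".
public class Order {
    public enum Status { OPEN, SUBMITTED, CANCELLED }

    private final String id;                              // identity
    private final List<Double> lines = new ArrayList<>(); // state
    private Status status = Status.OPEN;

    public Order(String id) { this.id = id; }

    // Behavior: the invariant (no changes to a closed order) is enforced
    // by the object itself, not by every caller remembering to check.
    public void addLine(double price) {
        if (status != Status.OPEN)
            throw new IllegalStateException("order " + id + " is closed");
        lines.add(price);
    }

    public double total() {
        return lines.stream().mapToDouble(Double::doubleValue).sum();
    }

    public void submit() {
        if (lines.isEmpty())
            throw new IllegalStateException("nothing to submit");
        status = Status.SUBMITTED;
    }

    public Status status() { return status; }

    public static void main(String[] args) {
        Order o = new Order("ord-1");
        o.addLine(10.0);
        o.addLine(5.0);
        o.submit();
        System.out.println(o.total() + " " + o.status()); // prints: 15.0 SUBMITTED
    }
}
```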
The effort/cost implosion doesn’t stop there. Less code, and especially fewer unneeded communication channels between your ravioli, means fewer bugs now, and the well-thought-out OOP modeling of the business logic in an easily extensible manner means no bugs in the future as you keep adding features. Then, of course, the company will need fewer people, and the remaining lean team of, say, five can implement perfect (Fowler’s) Scrum as a true “self-managing team”. Sorry, PMs renamed POs, and even, dare I say, functional managers. Self-managing means exactly that, i.e. no coaches or coordinators to help “self-manage” or facilitate “teamwork”.
My Px100 experience proved that code amount and complexity implode just as exponentially as they typically explode: e.g. using DevOps to mitigate already monstrous codebases, then higher-level DevOps orchestration to mitigate the deficiencies of the previous low-level DevOps mitigation, and so on, and so forth, once you find yourself in that vicious cycle.
The main problem with doing things by the book (the Spring Data one, the Boot microservices one, or another) is that you’ll always be catching up: fixing your previous fixes and mitigating your mitigation attempts. The solution? Eliminate the root cause — the need for that very first mitigation.
CRUD “Middleware” Anatomy
Let’s look at what your “Middle Tier” does: the controller calls an interface of a single-implementation service, which makes calls to DAOs, which call repositories, which execute a single database query… That’s six classes at the minimum — including the dumb Lombok-annotated database entity, but not the even dumber DTOs created just in case, for a mythical “why not” CQRS flexibility. Since all of that plumbing doesn’t achieve much beyond a single SQL query or update, why do we need it at all? And that’s just the “bare minimum” according to the unquestioned “best practices”. Then come “decoupling” and “resilience”: making two services call each other through a Kafka topic instead of direct REST. You can only hope such pub-sub is not blindly extended to all inter-service communication as the “enterprise messaging standard”.
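To make that layer count concrete, here is a minimal sketch (all class and method names are mine, not from any real project; an in-memory Map stands in for the database) of the same read operation written twice: once as the conventional controller/service/DAO/repository chain, and once collapsed into a single method.

```java
import java.util.Map;

// Hypothetical sketch: the conventional layered chain vs. a collapsed handler.
// Every layer except the repository is a pure passthrough.
public class CrudAnatomy {
    static final Map<Integer, String> CUSTOMER_TABLE =
        Map.of(1, "{\"id\":1,\"name\":\"Acme\"}"); // stands in for the database

    // The "best practices" chain: four hops to run one query.
    interface CustomerRepository { String findById(int id); }
    static class CustomerRepositoryImpl implements CustomerRepository {
        public String findById(int id) { return CUSTOMER_TABLE.get(id); } // the only real work
    }
    interface CustomerDao { String load(int id); }
    static class CustomerDaoImpl implements CustomerDao {
        final CustomerRepository repo = new CustomerRepositoryImpl();
        public String load(int id) { return repo.findById(id); }          // passthrough
    }
    interface CustomerService { String getCustomer(int id); }
    static class CustomerServiceImpl implements CustomerService {
        final CustomerDao dao = new CustomerDaoImpl();
        public String getCustomer(int id) { return dao.load(id); }        // passthrough
    }
    static class CustomerController {
        final CustomerService service = new CustomerServiceImpl();
        String get(int id) { return service.getCustomer(id); }            // passthrough
    }

    // The collapsed version: the whole "Middle Tier" in one method.
    static String getCustomer(int id) { return CUSTOMER_TABLE.get(id); }

    public static void main(String[] args) {
        String viaLayers = new CustomerController().get(1);
        String direct = getCustomer(1);
        System.out.println(viaLayers.equals(direct)); // prints true: same result, five fewer classes
    }
}
```

The point is not that layers are always wrong, but that here every layer except the innermost one adds zero behavior.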
A modern JavaScript front-end, e.g. one implemented with React, needs nothing but JSON, so why not have a dead-simple BFF in the same Node instance that serves the React UI — to call Mongo? Before you accuse me of cutting corners (or better said, lasagna layers) and shying away from writing the business logic code, look at your back-end code again. I dare you to find any logic there except for unneeded (eliminated by document databases like Mongo) automatic (yet not bug-free) ORM. So where is the mysterious business logic, if it’s never found in the so-called Middle Tier? It’s in your React front end. Where else can it be?
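The payload-agnostic idea can be sketched in a few lines (in Java for consistency with the rest of this post, though the author’s suggestion is a Node BFF; the names are mine, and a plain Map stands in for Mongo). Because the handler never enumerates fields, adding a brand-new field on the front end requires zero middle-tier changes: the new key simply flows through to the store.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a payload-agnostic BFF save/load path.
// No DTO, no entity, no mapper: the JSON-like payload passes through untouched.
public class PassthroughBff {
    static final Map<String, Map<String, Object>> STORE = new HashMap<>(); // stands in for Mongo

    // The entire "service layer": store whatever the UI sent.
    static void save(String id, Map<String, Object> payload) {
        STORE.put(id, payload);
    }

    static Map<String, Object> load(String id) { return STORE.get(id); }

    public static void main(String[] args) {
        Map<String, Object> customer = new HashMap<>();
        customer.put("name", "Acme");
        customer.put("middleName", "Corp"); // a brand-new field: no back-end code change needed
        save("42", customer);
        System.out.println(load("42").get("middleName")); // prints Corp
    }
}
```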
I am an old-school strongly-typed Java fan (“Java” here meaning the combination of Java and JavaScript, since there is no other option for the browser or hybrid mobile front end). I am offended by today’s JavaScript kids making fun of Java — butchered over the last 20 years by “discount resources” from “offshore” mindlessly banging on keyboards to fulfill their American middle managers’ dream of solving the Infinite Monkey Theorem.
It’s not the language’s fault, you know. Unfortunately, considering the enthusiasm with which the same code monkeys are cramming Angular and React, those technologies are going to share the same fate. But at least no one (so far) types layers of CRUD emperor’s clothes in React. It may not be the perfect OO code, but it does implement the business logic. Because the business logic — the only logic that matters to the customer — is what controls the fields he/she sees on the screen, how they are validated, where the user can navigate in the current application’s state, and other tangible aspects of the UX.
I did it with good old Java (way outside of the back-end CRUD box) by creating a better-than-React (IMHO) strongly-typed, configuration-driven mechanism to declaratively define business logic in its most natural and comprehensible form: as UX. Px100 generates the UI from Java (Spring IoC) configuration with lambdas and other OO and functional goodies. I called it Productivity x100, because my benchmark was to completely eliminate the CRUD plumbing, thus reducing the codebase, effort, and ultimately headcount 100 times — compared to the worst dysfunctional IT department or a stale startup aka “Initech”. I don’t want to push my way on you: the “inverted” generation of UI from the Java Middle Tier. A quality mainstream top-down (React) UI is a solid option too. In that case, if you are still obsessed with the Cloud, e.g. AWS, make your BFF a lambda (accessing Mongo), put an API Gateway façade on it, and call it a day. No “event-driven” Boot microservices required. It’d still be 10x instead of Px100’s no-compromise 100x code minimalism. Do your cost math and decide, but make that decision once. And early.
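Px100’s actual API aside, a strongly-typed declarative screen definition with lambdas might look something like the following sketch. To be clear, every name here is hypothetical, invented for illustration, not taken from Px100: the idea is merely that each field carries its validation and visibility logic as lambdas, so a UI can be generated from the config.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch (NOT Px100's real API) of a declaratively defined
// data entry screen: fields with lambda-based validation and visibility.
public class DeclarativeScreen {
    record Field(String name,
                 Predicate<String> valid,                       // field-level validation
                 Predicate<Map<String, String>> visible) {}     // state-based visibility

    static final List<Field> CUSTOMER_SCREEN = List.of(
        new Field("name", v -> !v.isBlank(), item -> true),
        // taxId is only shown for business customers, and must match NN-NNNNNNN:
        new Field("taxId", v -> v.matches("\\d{2}-\\d{7}"),
                  item -> "business".equals(item.get("type"))));

    public static void main(String[] args) {
        Map<String, String> item = Map.of("type", "business"); // current work item state
        for (Field f : CUSTOMER_SCREEN) {
            System.out.println(f.name() + " visible=" + f.visible().test(item));
        }
    }
}
```

The business logic (what is shown, what is valid, and when) lives in one typed config the compiler can check, rather than being scattered across layers.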
Getting rid of any massive codebase eliminates the main issue and the very reason for microservices: Separation of Concerns. Once your code shrinks 100x, that small codebase, comprehensible by a single developer, no longer needs to be split into several pieces. That in turn means no external DevOps plumbing around each microservice: secure API gateways, service meshes, Kafka topics, etc., etc.
IMHO those, especially the latter, should never be built anyway. Unless you are in the business of impressing semi-technical decision makers with your knowledge of AWS components via PowerPoint diagrams of colored boxes.
If you however go the opposite, microservice way, there are ugly anti-patterns waiting around the corner. An average rushed and abused developer is likely to duplicate a database field or an entire table, adding to the impressive collection of triple- and quadruple-duplicated ones, because it’s impossible to find the right table or field in that 1000-table database spaghetti. A scenario of copy-pasting an entire microservice (with its own separate database, of course) when tasked to implement a new little feature is even likelier — out of fear of breaking some unmaintainable existing spaghetti code. A precursor to Shotgun Surgery from Fowler’s book, don’t you think? And IMO it should be in its next edition.
Here’s another one — lost traceability. Good luck mitigating it with debugging tools like Zipkin, in the same manner some die-hard relational database fans devise sophisticated automatic processes of adding/removing fields based on the changed ORM entity model — instead of embracing schema-agnostic Mongo. Explain to me what’s fundamentally wrong with one REST service calling another (if you absolutely have to make them such services to begin with). Why change the communication between your services from direct calls to something more “resilient”, “decoupled”, and “event-driven” like Kafka or Kinesis? If you do, you have instantly lost the traceability. You know, the kind you rely on when stepping through method calls to debug something, or Ctrl-clicking in your IDE to navigate deeper into the heart of your application.
Let’s address the three concerns above:
Resilience: should I point out the much easier ways to make something “resilient”: from straightforward horizontal clustering to self-healing Kubernetes or old-school Akka?
Decoupling: perhaps the biggest programmer’s folly, on the same level as our semi-technical (at best) IT bosses’ “low-code/no-code” delusion. What happens when you need to add a new data entry field to your Node-hosted front-end? Right, you add it to all DTOs and entities in between, to all “messages” and “events” passed via message queues, and finally to your relational database. When you could simply use Mongo in your payload-agnostic BFF lambda, passing JSON to and from the database directly. That’s the real, Shotgun-Surgery-free decoupling.
“Event-driven” everything: ever googled the difference between a message (command) and an event? The former is passed from the Customer to the Order service. The latter, 100% unpredictable, is posted on the operating room’s Kafka topic by the IoT vitals monitor when it detects a cardiac arrest (the patient’s heart stopped during the surgery) and is delivered to at least two subscribers: the hospital PA system, which announces Code Blue, and the IoT crash cart nearby, which starts charging the defib paddles before the surgeon even reaches for them. But hey, it never hurts to generalize a “robust” pipeline connecting your Customers and Orders in a pub-sub manner. Just in case. Or because it sounded so cool at some conference presentation of the Saga pattern. Which you will bring up at your next interview to show how well you know “enterprise microservice architecture” — to ask for a $5K raise.
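The message-vs-event distinction fits in a few lines. Here is a toy in-memory topic (no Kafka required; the scenario and names follow the Code Blue example above, the class itself is my invention): a genuine event is one unpredictable occurrence fanned out to multiple independent subscribers the publisher doesn’t know about, whereas a Customer-to-Order command has exactly one known recipient and gains nothing from this machinery.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A toy in-memory pub-sub topic illustrating a *genuine* event.
public class CodeBlueTopic {
    final List<Consumer<String>> subscribers = new ArrayList<>();

    void subscribe(Consumer<String> s) { subscribers.add(s); }
    void publish(String event) { subscribers.forEach(s -> s.accept(event)); }

    public static void main(String[] args) {
        CodeBlueTopic topic = new CodeBlueTopic();
        List<String> actions = new ArrayList<>();
        // Two subscribers that know nothing about each other:
        topic.subscribe(e -> actions.add("PA system announces " + e));
        topic.subscribe(e -> actions.add("crash cart charges paddles on " + e));
        // The IoT vitals monitor detects an unpredictable event and publishes it:
        topic.publish("CODE BLUE");
        actions.forEach(System.out::println); // both subscribers reacted to one event
    }
}
```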
How about another issue: the infamous N+1 selects? Obvious, isn’t it? You’ve just thrown away the main feature of your prehistoric relational database: automatic relation between the Customer and Order. Oops, they are now two separate services with their own databases.
What’s Inside Your Ravioli?
What’s inside your microservice ravioli or monolithic pasta dish? Meat? Or another similar-sounding four-letter substance?
I’m sure you are familiar with Russian matryoshka dolls, which IMO perfectly model the Middle Tier’s layers of wrappers around invocations of single-implementation interfaces.
Here is an example of a gem I found on more than one occasion (different jobs/projects) in the inner-most matryoshka, the one actually responsible for the “business logic”. Well, in the 1% of cases when that code is not a single database query or update.
// code before the gem
…
// the gem
boolean someFlag = ((blah-blah-blah || blah-blah) && blah-blah || …) ? true : false;
// if-else code using the gem: someFlag
…
I still remember one Quora purist who bashed “else” clauses for ruining code elegance. He would have had a heart attack after seeing the boolean flag above. If it’s not apparent to you: first, the use of such flags indicates a spaghetti flow, which would surely rely on gotos if those existed in Java. Second, the “true : false” part I cannot even call amateur. The person who wrote that gem obviously turned off IntelliJ’s Code Insights, tired of seeing olive highlighting (suggestions to rewrite/simplify the statement) on every second line. How do I know every second line was olive? Because I didn’t turn mine off.
Let me ask (and I know it’s a rhetorical question, at least for logical engineers, but I’m not asking them, since they don’t have any decision power): do code-validating tools, from the aforementioned IDE code insights to trendier “static analysis” CI checkers like Coverity or SonarQube, automatically ensure code quality from your slightly-less-than-averagely paid (and skilled) code monkeys? Vs. the aforementioned team of five experts, from both the budget/resources perspective and your own stress perspective. You know, the stress robbing you of years of life by needlessly killing your brain cells and causing hypertension.
Are you OK with that cost of never-ending firefighting? Or would you pay FAANG wages to your eternally lean engineering team of five, whom you can task with something in a fire-and-forget manner and go on to strategizing with other visionary leaders, like the CIOs of your biggest customers, at a golf course? I tried to cautiously ask that question at a couple of interviews by telling my stress-free grandfather story. I guess the message was too subtle. So let me ask directly: how much are those 10–15 years of your life worth? An extra $150K to each of the five developers: $750K total? After saving the $10M or so you currently pay to a hundred code monkeys.
In any case it is always the code, not the packaging and orchestration of your modules, that makes something work. You need to take care of that first, before embarking on your exciting DevOps journey. Properly designing the core appliance is how Google engineers approach a problem. Write good code first; then, if it is not enough, connect it to other services covering the functionality the current tools/technologies didn’t allow you to implement in your own code.
The biggest disconnect between software engineers and their bosses (one that has unfortunately spread everywhere in the industry) is the expectation of quality code “by default”, as if the only thing left to worry about were adding layers (and layers) of DevOps and other plumbing to integrate and orchestrate the modules, which are supposed to cover the functional requirements 100% and perform reliably. Well, that is not automatic, even when one hires some “senior” dev. There are a few well-known reasons someone may be unable or unwilling to perform at the level his/her salary is supposed to guarantee. And quality code matters first, before any DevOps excellence.
Let’s start with defining the kind of code you need to write well. There is only one kind left after you’ve eliminated the plumbing: your core business logic.
The Mysterious Business Logic
Bet you imagined something like this: a flowchart — the most natural way to capture a business process at a high level. Unfortunately, that’s where its intuitiveness ends for anyone embarking on a journey to implement such a workflow using computers. 200 years from now, sure: you show that flowchart to the AI eye (one of your mobile apps), and just like scanning a QR code, boom… the magic happens, and a second later the complete system appears on your screen.
Until then, however, a workflow-centric system needs to be modeled differently — as a series of state machines. Any seasoned programmer already knows that. Even Amazon’s workflow engine, Step Functions, uses the term “state machine”. Flowcharts model batch processing with no humans involved. Batch was all the rage during the punch card era of the 1960s-80s: load your deck of cards, run the program, and pick up the printout of the results. So why does one need to translate a flowchart diagram into a state machine one for the computer to implement more robust real-life business processes? A naïve answer would be: because the computer “brain” is wired that way. Computers think in terms of state transitions, right?
Not just computers. Look around. Every appliance in your house is a state machine. Any tool known to man is. Any technology product — of mechanical, electrical, or another nature. Why? Let’s imagine your life if they weren’t.
Here’s a simple morning routine:
A rather simple flowchart, isn’t it? Now imagine some futuristic nightmare, where you are woken up by a robotic hand, picked up by a robotic arm, and carried over to the shower. I’ll skip the intimate steps like undressing and toilet business, which can also be “automated”. You are washed, dried, carried or somehow forced to the kitchen table and fed breakfast, then finally put in your self-driving car taking you to work. Complete the picture with Orwellian concrete slab walls.
That’s how a direct implementation of the workflow would look: performed on a human who is supposed to interact with the system instead of being processed by it step by step. Free will rhetoric aside (e.g. wanting to eat breakfast before the shower), imagine the complexity of such an automated system.
Instead, a win-win for everyone: the house resident with his/her free will, and the engineer designing orders-of-magnitude simpler appliances, each of which is a state machine: e.g. a faucet or shower with distinct open/running and closed states. There’s plenty of business logic — so intuitive, you take it for granted. E.g. you can only open the washing machine (you hoped I forgot about them, didn’t you?) after pausing or stopping it. What does all this business logic have in common? It’s user-centric and user-driven.
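The washer door interlock is a textbook state machine, small enough to sketch in full (the enum and method names are mine; a real appliance controller would obviously track more states). The point: the “business logic” — which actions are valid right now — lives in one place, keyed off the current state.

```java
// A sketch of the washing machine door interlock as a state machine.
public class Washer {
    enum State { STOPPED, RUNNING, PAUSED }

    private State state = State.STOPPED;

    void start() { state = State.RUNNING; }
    void pause() { if (state == State.RUNNING) state = State.PAUSED; }
    void stop()  { state = State.STOPPED; }

    // User-centric, user-driven logic: valid actions depend on the state.
    boolean canOpenDoor() { return state != State.RUNNING; }

    public static void main(String[] args) {
        Washer w = new Washer();
        w.start();
        System.out.println(w.canOpenDoor()); // prints false: mid-cycle, door locked
        w.pause();
        System.out.println(w.canOpenDoor()); // prints true: paused, safe to open
    }
}
```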
Not surprisingly, the real business logic of any human-centric workflow automation system is in the User Interface: starting from the state-based navigation between data entry screens and going all the way down to the field level: showing/hiding or enabling/disabling fields based on… right, the specific state of the work item (patient chart, application for employment, frigging customers and orders of all kinds, etc., etc.). There’s no escape from the state machine paradigm.
Flowcharts do have their purpose: as the initial reference for the entire process, or (unfortunately unrealistic w/o real AI) as a BDD guide to test the entire system by validating the state progression. Outside of that testing context, however, a verbal description of the process by the end user is as good as a flowchart — for an engineer to build the proper state machine from.
It gets worse. Just like most tests of a typical TDD project, the flowchart quickly gets out of sync with the rapidly evolving real implementation. I hope you understand by now that generating UI from a flowchart in a “low-code/no-code” manner is one of the biggest (Gartner-level) lies, though that BS has been successfully sold to non-technical IT bosses (who understand flowcharts, but not state machines) for decades as a magic DIY silver bullet. A real project will always have some button on the UI screen to change the work item’s state and navigate to the new screen — moving through the flowchart. The flowchart itself doesn’t imply any UI (screens or buttons), does it?
In a desperate attempt to generate UI (or any code/config/markup) from a flowchart, one can make the naïve assumption of getting away with one generic button, e.g. “Submit” or “Approve”, that triggers the transition to the next box or diamond in the flowchart, e.g. for the approval by the next boss. What if that stage requires two managers of different roles reviewing the item collaboratively, while approving it independently? How would you model that via flowchart blocks?
And what about the free will I mentioned above, when e.g. you want to review the item and maybe change something (and save), without approving it? Will it require another “Just Save” button? What if you don’t work on items in your queue sequentially? What if you come to that data entry screen through a completely different list of “urgent items”?
Just like your real morning routine of showering before breakfast or vice versa, real-life UX is unpredictable and cannot fit into a flowchart. So yes, you’ll have your flowchart, and you’ll have the UI, which supersedes it, becoming the real source of truth: of what the user can and cannot do at the moment, and where he can and cannot go. And every time you change something in the UI, you’ll need to update the flowchart.
So, what’s the purpose of flowchart-drawing tools like your typical “no-code” “studio” or even the minimalistic AWS Step Functions? Other than to give the non-technical CIO the warm and fuzzy feeling of seeing a familiar flowchart image instead of a completely foreign programming language. To be fair, I was once (circa 2003) mesmerized by those flowchart-centric BPM tools. Until I realized the difference between flowcharts and state machines. In any case, UI is king. This is where all business logic is defined, and this is where the magic happens.
The UI code should be the most respected part of the application codebase. It needs to be well-designed from the OO point of view, meaning backed by a robust custom framework on top of the generic ones like the React/Redux combo or Spring IoC. Such declarative config (with the business logic precisely injected into the UI via e.g. Java lambdas or Spring EL) allows, among other things, easy programmatic traversal of screens, e.g. to determine that some action is permitted because it came from a button on a screen the user is permitted to see. How do we know that? Because that screen was accessible via a button (tab, link, or another navigation element) from the previous valid screen, and so on, and so forth, all the way back to the main menu for that specific user role.
A nice-to-have feature? Not really. I consider such checks a necessity to close any back door into the application. Think about all of the manual service-method role-based authorization checks you can eliminate with this automatic logic. Or the entire awkward “Role-Based Authorization” service, since you do everything in a microservice fashion. I mean the checks you add (as yet another urgent fix) after some breach, when a user sees content he/she is not supposed to see. If you attempt to implement 100% of such authorization logic upfront via the good old Spring Security, that alone can easily become 60% or more of your already massive codebase. I hope you understand my mindset by now. Yes, I started with Spring Security and Spring Data, like any disciplined “citizen developer”. Then I realized I could write (yes, 100x) less code by not using them, while covering e.g. the aforementioned authorization 100%, at a level I’d never dreamt of with Spring Security.
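The screen-traversal check described above boils down to graph reachability. Here is a minimal sketch (the screen names and graph shape are hypothetical, invented for illustration): navigation elements form a directed graph per role, and an action is permitted only if its screen is reachable from that role’s main menu. Anything unreachable is, by definition, a back door.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the "no back doors" reachability check over a
// declarative navigation config: screen -> screens its buttons/tabs lead to.
public class ScreenGraph {
    static final Map<String, List<String>> NAV = Map.of(
        "mainMenu.admin", List.of("customers", "billing"),
        "mainMenu.clerk", List.of("customers"),
        "customers", List.of("orderDetails"),
        "billing", List.of("invoiceDetails"),
        "orderDetails", List.of(),
        "invoiceDetails", List.of());

    static boolean permitted(String role, String screen) {
        Deque<String> toVisit = new ArrayDeque<>(List.of("mainMenu." + role));
        Set<String> seen = new HashSet<>();
        while (!toVisit.isEmpty()) {        // breadth-first walk from the role's menu
            String s = toVisit.poll();
            if (s.equals(screen)) return true;
            if (seen.add(s)) toVisit.addAll(NAV.getOrDefault(s, List.of()));
        }
        return false;                       // unreachable screen: a back door, deny
    }

    public static void main(String[] args) {
        System.out.println(permitted("admin", "invoiceDetails")); // prints true
        System.out.println(permitted("clerk", "invoiceDetails")); // prints false
    }
}
```

One generic walk over the navigation config replaces per-method authorization annotations scattered across every service.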
Good UI modeling (via solid tools like React) makes everything clean. You can even generate your flowchart from the UI (but not vice versa) if you want to present it in a simplified manner to people who only understand flowcharts. It will look nice and straightforward, just like generating an ER diagram from a perfect relational database, or reverse-engineering the Spring controller mess (I meant robust “mesh”) into clean and orthogonal Swagger API documentation via SpringFox. Code is always the source of truth.
Either traditional (e.g. React’s) top-down design or Px100’s unconventional inside-out IoC data entry screen modeling achieves the perfect marriage of configurable, easy-to-evolve business logic with a slick modern UI. As far as scalability goes, just add more servers to distribute the load horizontally onto this classic stateless cluster of… yes, monoliths. BTW, no one is stopping you from splitting one monolithic multi-tab “page” into a series of independent tab-level “pages” — a classic separation of concerns, if you ask me. It should be straightforward if your OO UI code is clean — whether it’s written in React JS or Java doesn’t matter.
Of course, the product’s nature dictates where the logic lives and the (deployment) structure. I’m only talking here about my area: complete end-user business process automation. Complex service meshes and routing patterns like Circuit Breaker are legit for other products, e.g. at the Motherland of microservices: Netflix. You may also be in the trendy business of “analytics”: of click logs, telemetry, and other massive data streams. Meaning there won’t be any front-end, just 100% server-side, human-less, batch-like data processing pipelines. I highly doubt, though, that wasting your keystrokes on dumb CRUD ravioli will help you with such 100% back-end processing logic any more than those emperor’s clothes ever solved any real-life challenge of exchanging data between the front-end and back-end.
In any case, while I do respect everyone’s niche, at the end of the day it is the customer’s (user’s) needs that dictate how to write customer-facing programs, which in turn should dictate which commercial back-end services to use and how. Not the other way around: building your customer-facing system around some data analytics or similar engine capabilities. I’m biased towards customer-facing software, because it’s my specialty, sorry. Well… not really. I am right about what should come first. Somehow we lost track of what’s important, stressing the technician’s domain (trendy services and infrastructures) instead of the applications real people use. Imagine if the auto business were all about car repair and maintenance facilities instead of, you know, designing and building never-before-seen cars.
Write Programs. Not Webpages, Services, or APIs.
I’m old. Though not COBOL-old. I missed punch cards in college by just one year, and that was 30+ years ago. I thought the mainframe era was gone. I was shocked to see 80-year-old software engineers pushing walkers and carrying oxygen tanks in 2021. Though I never laughed at them like some of my clueless colleagues did. Those people keep the company running. Not the Enterprise Architects with their vague Reference Architecture diagrams and other PowerPoint “deliverables”. Not the mediocre, allegedly “full-stack” developers still writing only boilerplate back-end plumbing instead of, for starters, designing the UI screens themselves rather than having Web or mobile artists tell the “dev” where each field should be. Why not? It’s just uniform Material- or Bootstrap-themed data entry. How hard is it to make those screens look clean and slick? Obviously hard for someone who’s never written clean back-end code. While no “team lead” or “architect” questioned gems like the one I posted above, as long as there was perfect unit-test coverage, right?
I have enormous respect for the elderly mainframe developers, unlike their incompetent bosses, who’ve failed to replace the mainframe with 21st Century tech. Which was supposed to happen before Y2K, you know. Though to be fair, the late-90s three-tier DNA (the first lasagna), while a step in the right direction, wasn’t robust enough OOP-wise, compared to Spring with IoC and DI (which no one leveraged anyway, but still), let alone Mongo and other NoSQL databases, FRP, and the rest of the tech that came from FAANG — and simply didn’t exist 21 years ago. Plumbing-heavy DNA was headcount-centric (meaning hiring as many as you can, as cheap as you can), which predictably failed and will fail each and every time in our creative occupation.
There is no magic way to make one “architect” invent something and throw it over the fence to “outsourced” (or just found cheap) code monkeys, micromanaged in old and new ways to keep churning out code and fixing urgent bugs after hours and on weekends. It only works with everyone in the team being an “architect” — and paid accordingly.
Of course, where would one find that many picky slash-architect aka slash-lead level developers? Let alone making those primadonnas work together. The first issue is easy. I’m not going to repeat myself. You won’t need many to write and maintain 100x less code. The second solution is, well, bad news for “project” and other kinds of managers, whose job is to motivate and coordinate. Motivation starts and pretty much ends with FAANG-level salary.
Coordination and collaboration are pure self-managed-team Scrum, the way it was envisioned by engineers (like Fowler) and for engineers in the Agile Manifesto. Before the Project Management Institute and various paid Agile Coaches “reimagined” it the corporate way, retitling redundant managers into Product Owners. Isn’t it obvious? If the money and career future are taken care of (like at FAANG), the engineer’s ownership of the product, and thus 100% alignment with the company’s goals (revenue, growth, etc.), is automatic. No managers required to make anyone “fall in line”.
I’ve witnessed countless attempts to replace the mainframe with both third-party and home-cooked lasagna and ravioli over the 20+ years since Y2K. First came the brute force (offshoring), then “nearshoring”, “chasing the sun” and other staff augmentation experiments. Now it is magic DevOps: API Gateways and service meshes. Throwing darts in the dark, as always.
There is a clear engineering reason behind today’s 90% project failure rate. No statistical KPIs are required to understand why the conventional business software development technology/process fails each and every time. But hey, whether one knows why or even cares, he/she already expects the project to fail and protects him/herself by avoiding the delivery of the complete system, concentrating instead on intermediate tools or modules (services, APIs, infrastructure) that someone else is expected to put together — and take the fall for.
In the meantime, while the industry is busy spending trillions in billed man-hours to develop all kinds of that middlewa… I’m sorry, vaporware, COBOL is alive and well in 2021 for the simplest of reasons. People write real, complete programs in it, delivering the said end result. Yes, programs — you know, something that is the product of programming. Not services, not apps, not pages, not APIs, not “Infrastructure as Code”. Programs — the things human users interact with directly.
Don’t you wish software, at least of the business kind (forms, databases, reports, etc.), were as straightforward as mechanical and electrical machines like cars and washers/dryers? Programs are — unlike mysterious “resilient service meshes”. It went full circle. First there were those programs — call them “desktop” or another term — coded in COBOL, C, C++, or Java, relying on rich desktop UI libraries: from the early-90s TurboVision through the mid-nineties MFC and early-2000s Swing, all the way to modern Angular and React. Let alone native mobile apps — the reincarnation of the good old exe files. The era of “thin” browser UI was short-lived. It went back to rich, MFC- and Swing-like Single Page Applications (SPAs) running in the browser. At the end of the day, that’s what the user needs — a robust program that, while constantly connected to the Internet, functions like a self-contained desktop application instead of a web page.
I wish I could turn back the clock and relive the 15 years I wasted on “enterprise” consulting, chasing the trendiest lasagna and ravioli coding to impress corporate IT bosses and justify my ever-shrinking six-figure salary. I did not deserve that salary anyway, because I solved no real (customer) problems. I wrote complicated glue code for a living. Don’t bitch and moan about IT outsourcing and stagnating middle-class income, which is now half of what it was in IT 20 years ago (adjusted for inflation) — while kept at the same inflation-adjusted level at FAANG companies. Honestly ask yourself what you are doing — for your customers — to deserve your pay. I’ve been writing real programs for real customers for the last 10 years. Of the monolithic kind — the kind that’ll outlive all the hourly-paid lasagna and ravioli BS.
Excuses, Excuses…
Our industry is diverse. This uncompromising, end-user-biased approach is not for everyone. There are lots of moving parts from different vendors and service providers that a robust business application ultimately consumes, customizes, and orchestrates. I am not foolishly suggesting going back to the COBOL or MFC days, when all of the application code was written by the same person/team using one tool, e.g. Visual Studio, and a restricted set of libraries — the Microsoft software development ecosystem of old. Composition, reuse, and collaboration are the most fundamental concepts of any engineering field, not just software development. However, there are rather embarrassing excuses to avoid writing complete customer-facing systems, and they need to be pointed out.
Folly #1: Data is Everything
During my last job search (for positions with Java and Spring keywords in a couple of major US metropolitan areas) there were noticeably more jobs of the data lake/warehouse, pipeline, and BI/analytics variety than ones to build the mission-critical systems: the programs I mentioned above, used daily by company employees to do their jobs, i.e. the software that generates all the data for the pretty reports and dashboard charts.
Understandably, the corporate decision makers at the top rarely, if ever, touch the typically prehistoric and otherwise awkward programs their subordinates have to struggle with daily. All that the captains of industry see are pretty dashboards of earnings and other KPIs. Well, since they are the real “paying customers” with spending decision power, the monitoring becomes more important than the core business functionality being monitored. Kind of like the overall stagnated technology situation in the industry, since it is the sales (to decision makers) that dictate what’s important.
Another, even harder to swallow, contributing factor is the current lazy Western Civilization demographics. I suspect most of the real programming jobs (vs. superficial “data analytics” ones) that I missed simply moved “offshore” or otherwise became the specialty of “discount resources” — way outside of my comp range. With predictable quality consequences.
All because, you know, ever since the prosperous 1960s, we Americans were born managers — captains of industry observing those KPI dashboards and “delegating” the less important responsibilities (to actually build a working system) to the lower caste. Don’t hate me for pointing out this typical dignified excuse. Any engineer (and, I suspect, any non-engineer IT boss hating engineers for being able to make the mental effort he/she never will) knows that a complete customer-facing program requires significantly more effort and ingenuity than a dashboard monitoring its performance or analyzing its data.
Folly #2: AI Emperor Clothes
This one is self-explanatory. Thankfully, Accel and Sequoia are back to churning the government stimulus/incentive money steered to fund healthcare vaporware. Though the AI emperor’s clothes are still central in Gartner’s “invaluable” executive-level advice to corporate IT leadership, which is pitched the mythical vision of a future where magic AI “no-code” tools write programs on their own.
The brute-force approach that “solved” the Y2K crisis (instead of those then 30-year-old systems being rewritten in modern programming languages, as their engineers envisioned back in the 1970s) gave birth to widespread “offshoring”, when the same visionary leadership realized its long-term dream of successfully using slave labor in our creative occupation. Predictably, it was short-lived, with the eternal 70% project failure rate jumping to pretty much 100% (witnessed personally during 2002–2008 at a handful of Fortune 100 corporations everyone knows). Now it is magic AI that is supposed to take care of something as insignificant as the business automation programs used by a corporation’s entire staff, compared to the critical pretty enterprise data dashboard used by its 10 executives.
Folly #3: Too Complex to Build, Let’s Assemble It from Third-Party Pieces
Everything is about integrating and orchestrating external services/APIs, right? Worldwide Web, connected cars, Internet of Things, blah, blah, blah. Leave that generic five-year-old’s logic to Forbes and Gartner. Neither technical incompetence nor being too cheap to pay your engineers is a valid reason to choose “buy” over “build”. If something can be built quicker and better than integrating and customizing a third-party product/service, it should be. In a perfect world where engineers run the show, anyway.
Which is kind of what made me an entrepreneur. I’m not like Elon, who couldn’t stand working for the man a single day in his life. I have nothing against fruitful (for everyone involved) employment, only bad luck finding that “perfect” employer over my 25 years in American IT: e.g. a startup cofounded by, dare I say, a real software engineer, with all of the middle managers (if a self-managed team of five even needs them) being active engineers as well. I had no choice but to found my own.
Speaking of third-party vs. in-house, all my Px100-based systems include a custom CRM, one of the most essential pieces of any business process automation. It’s just an internal module for me. Not even a “microservice”. I build it… because I can, being 100x more productive without breaking a sweat. I get it: mainframe-era CRMs like Siebel were prohibitively expensive (in my SMB automation niches). Salesforce is 10x more efficient and affordable, meaning $10M instead of $100M. It still requires consultants and an IT department, like MS Dynamics, NetSuite, and the likes, all built with headcount-centric conventional technology.
Start researching the new software development tools and processes used by Google and Facebook (sadly, only within their consumer domains), and you’ll discover how to write 100x less code and become a one-man army (startup). Which the Great IT Consulting Food Chain (Salesforce, Microsoft, Oracle/NetSuite, and others) of course doesn’t want you to find out, since it’d threaten their quarterly man-hour quotas.
Folly #4: Customizable and Integrate-able “Almost-Turnkey” “Packages” and “Services” of the ERP Kind
This one is as old as IT (departments). Other attempts to turn software development into a DIY activity for the non-technical “business” fall into this category too: from the early-1990s 4GLs to modern mythical “low-code” and “no-code” tools.
Remember what I said about business processes being unique? Again, English is my second language, though I believe the words “customer” and “custom” are related. Every business customer’s automation requirements are unique and require a precise custom implementation. Kind of like bespoke Savile Row suits, if you can afford one (I can’t, if you are wondering).
The situation is thankfully the opposite in software development (vs. the clothing business), where a precise custom solution is orders of magnitude faster to develop, more robust functionality- and extension-wise, more reliable, and (it goes without saying) more performant and scalable due to its inherent simplicity: no backwards-compatibility baggage or generic functionality to satisfy the whole world.
Provided you know what you are doing and care (have the entrepreneurial mindset I mentioned), custom development done right is orders of magnitude less expensive than “customizing” any allegedly “universal” ERP. The worst part: even after all the customizations (months of “tweaks” by a team of a hundred), the said ERP is likely to cover (per the 80/20 rule) only 80% of the requirements at best. Meaning you’ll need to “integrate” it with another “almost turnkey” system to cover 80% of the remaining 20%, and so on, and so forth. Great for maximizing billable man-hour revenue. Not so much for building a reliably working product.
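The 80/20 compounding above is easy to check with a little arithmetic. A minimal illustrative sketch (my numbers, not from any vendor): if each additional “almost turnkey” system covers 80% of whatever requirements remain, total coverage after n systems is 1 − 0.2ⁿ, so each new integration buys less coverage at full integration cost.

```java
// Illustrative arithmetic only: each "almost turnkey" system covers 80%
// of the requirements still uncovered, so n systems cover 1 - 0.2^n.
public class EightyTwentyDemo {
    static double coverage(int systems) {
        return 1.0 - Math.pow(0.2, systems);
    }

    public static void main(String[] args) {
        // 1 system: 80%, 2 systems: 96%, 3 systems: 99.2% - diminishing
        // returns, while each integration costs roughly the same man-hours.
        for (int n = 1; n <= 3; n++) {
            System.out.printf("%d system(s): %.1f%% of requirements covered%n",
                              n, coverage(n) * 100);
        }
    }
}
```

The last few percent never arrive, which is exactly why the billable “integration” never ends.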
Again, I’m not trying to undermine the fundamental modularization principle to reincarnate MFC-era monoliths as React SPAs. There should be very specialized e.g. science-heavy services like NLP, video chats/streaming, databases of course, and many more, so no one has to reinvent the wheel. But somehow the art of writing a complete user-facing program was lost in favor of plugins to be used by others.
There should be some layers. An SPA should have its Best Friend Forever (aka Back-end For Front-end) service. I don’t question that. I question the “service meshes” underneath the BFF, connected via Kafka queues so you get the Event-Driven Microservices (resume) cred.
The same goes for CQRS and model-separation purism to serve e.g. the rich Web UI displaying 20 Customer fields vs. the mobile companion app (aka “microsite”) showing only five of those on a tiny phone screen. Tell me why both cannot request the same full Customer entity from the database, served as JSON by the BFF. So the mobile app won’t use the 15 extra fields passed to it; is that such a big deal? Those extra 400 bytes of data are going to slow the network down by what, 0.0000001 of a nanosecond? Writing and maintaining another (DTO) class and the plumbing logic to serve a different model will cost you way more in man-hours, let alone adding fields to three classes instead of one, and G-d knows how many services and controllers inside the Java code, plus the DevOps plumbing on top of it. Assuming that additional code is bug-free: initially and after frequent changes, e.g. to add/remove fields.
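To make the single-model argument concrete, here is a minimal sketch. All names are hypothetical, and a plain Java map stands in for the BFF’s real JSON serialization: one Customer entity, one payload, and each client simply reads the fields it needs instead of getting its own DTO.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SingleModelDemo {
    // One entity carries all the fields (abbreviated here) - no per-client DTOs.
    record Customer(String id, String name, String email, String phone,
                    String address, String segment /* ...14 more fields */) {}

    // The BFF serves the full entity; the map is a stand-in for JSON output.
    static Map<String, Object> serve(Customer c) {
        Map<String, Object> json = new LinkedHashMap<>();
        json.put("id", c.id());
        json.put("name", c.name());
        json.put("email", c.email());
        json.put("phone", c.phone());
        json.put("address", c.address());
        json.put("segment", c.segment());
        return json;
    }

    public static void main(String[] args) {
        Customer c = new Customer("42", "Acme", "a@acme.com", "555-0100",
                                  "1 Main St", "SMB");
        Map<String, Object> payload = serve(c); // same payload for every client

        // The rich web UI reads all the fields; the mobile app reads a couple
        // and ignores the rest - no second DTO class, no mapping code to maintain.
        String mobileTitle = payload.get("name") + " (" + payload.get("segment") + ")";
        System.out.println(mobileTitle);
    }
}
```

Adding a field now means touching exactly one class; every client that cares picks it up from the same payload, and every client that doesn’t keeps ignoring it.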
All of that means man-hours. But hey, it’s not your money, and that additional complexity is your job security, right? And you’ve always worked on some JIRA ticket/story assigned to you or otherwise written by someone else? Those little savings of not having to create two DTOs, then the code to populate them, then the code to invoke that code, and so on, and so forth add up, and before you (I mean your CFO, since you don’t give a damn about the “big picture”) know it, the codebase is 100x smaller and five people can do the job of 500, regardless of the growing (SaaS) customer base, each customer with its unique requirements.
Uncool Things Scientific Google Won’t Touch with a 10-Foot Pole
The reason I listed the follies above was to show you how unpopular (being hard and not very well appreciated, compensation-wise) real programs are. But someone has to write them, so integrators can connect those systems, and data aggregators can show their pretty charts of the data originating in a system written by a fool like myself.
Most importantly though, business software is uncool: its incomprehensible business logic and convoluted government regulations are way outside the attention span of the average hip, tiger-mom-bred MIT grad at Google or Facebook. That created a vast niche for me, someone not afraid to do uncool things not appreciated (yet) by my employer.
Are you going to cram Gayle McDowell’s green book and keep applying to FAANG every year, competing with millions of tiger-mom-raised book crammers in the hope of surviving among them and eventually making half a mil a year? I’d rather found my own Google in one of the countless business process automation niches starved of new technology for 20+ years, ever since the Y2K solution paved the way to offshoring and wage reduction, while FAANG grabbed the good engineers unhappy with their shrinking salaries.
That field is ripe for the taking from the cheap Initechs, which are only digging a deeper hole by refactoring, if not just lifting and shifting, Win95-era (and similarly archaic) codebases into 100x more lines of same-quality code, plus an equal number of lines of JSON/YAML config to wrap their microservices in API gateways.