When "SSL Handshake Failed (525)" Isn't Actually SSL
I want to tell you about a bug that started with a simple Cloudflare error and ended with me staring at post-quantum cryptography specs at 2 AM, wondering what year it is.
Last week I found a cryptominer running on my staging server. Here's what happened and how I fixed it.
Two years ago I wrote about why reactive autoscaling falls short and what ML brings to the table. A lot has changed. LLMs are now a primary workload in most cloud fleets, and they break almost every assumption the classic autoscaling stack was built on. Here's what's actually different, and where Model Context Protocol fits into the picture.
Part 1 and Part 2 covered the theory and one major commercial platform. Now for the practical question: what does the open-source Kubernetes ecosystem actually give you for intelligent autoscaling in 2024, and where is the ML layer starting to plug in? The answer is more composable, and more interesting, than it was two years ago.
I spent seven years at Turbonomic — back when it was still called VMTurbo, through the rebranding, through the IBM acquisition in 2021, and for a few years after that. So writing about autoscaling without touching what I actually worked on every day would feel dishonest. This is the insider perspective: what Turbonomic actually does, why the economic model it's built on is genuinely clever, and where the edges of that model sit.