Anshuman Biswas

@anchoo2kewl

Administrator since February 2024

Engineering leader specializing in threat detection, security engineering, and building enterprise B2B systems at scale. Deep hands-on roots in software architecture and AI tooling; currently exploring the frontier of AI agents as co-founder of AI Agent Lens.

12 Posts
4 Slides
2 Guides
8 Contributions
2 Comments

Posts

The Knowledge Spine: Why Organized Context Beats Raw Intelligence for AI Code Quality
Featured
April 18, 2026
Claude Mythos Just Changed Cybersecurity
Featured
April 8, 2026
The Complete Engineer's Guide to AI Agents
April 6, 2026
The SaaS Moat is Draining
March 27, 2026
Securing AI Agents: From Code Scanning to Runtime Enforcement
March 13, 2026
Tests Are the New Source Code
Featured
March 7, 2026
When "SSL Handshake Failed (525)" Isn't Actually SSL
February 28, 2026
My Server Got Cryptojacked Through a Next.js Vulnerability
Featured
February 21, 2026
Autoscaling Revisited: LLMs, MCP, and the Stack
February 18, 2026
The Open-Source Autoscaling Stack in 2024
Featured
October 22, 2024
Autoscaling From the Inside: Seven Years at Turbonomic
June 19, 2024
Why Reactive Autoscaling Isn't Enough — and How ML Changes That
Featured
March 16, 2024

Presentations

SaaS Engineering Portfolio
4 production SaaS platforms built with Go, React, and PostgreSQL
March 19, 2026
Full-Stack Vibe Coding Bootcamp
March 19, 2026
Vibe Coding to Production: Mastering Cursor + AI
October 16, 2025
Vibe Coding to Production
September 11, 2025

Guides

The Complete Engineer's Guide to AI Agents — From Zero to Production
Everything you need to build production-grade AI agents in Go — from the ReAct loop to multi-agent orchestration, knowledge graphs, RAG, determinism techniques, security, cost optimization, and real-world patterns. With interactive diagrams and fully working code.
April 6, 2026
The Complete Guide to Claude Code — Tips, Tricks & Advanced Workflows
Everything you need to master Claude Code — from setup to advanced multi-agent workflows, MCP servers, hooks, memory systems, and the daily workflow of a power user.
April 1, 2026

Contributions


Comments

Tests Are the New Source Code
Totally, Chris — that's a genuinely uncomfortable extension of my own argument, and I think you're right to push it. I hedged. "Never zero" was comfortable to write because it felt safe. But you've identified the actual fault line: if AI reaches the point where it can solve Millennium Prize Problems — problems where we don't even fully understand the solution space — then "you'll always need humans who can verify the logic" stops being a guarantee and starts being wishful thinking.

The honest answer is: I don't know where the ceiling is. Nobody does. What I do believe is that we're not there yet, and the path between "very good at logical reasoning" and "capable of novel mathematical discovery at that level" is longer than the current hype cycle suggests. But "not yet" is very different from "never" — and I conflated the two. We'll get there someday!

The deeper point you're making is essentially this: if the verification itself can be automated, the human role collapses from "understands the logic" to "understands that the requirements were correctly specified." Which is a much thinner foothold. I'll sit with that. It's the most honest challenge to the piece I've encountered.
Tests Are the New Source Code
This is exactly the kind of pushback I was hoping for — thank you. The tests vs. architecture distinction is on point. Tests tell the agent what; architecture tells it where and how — I'm going to be thinking about that framing for a while. You're right that the real leverage isn't in cloning existing logic, it's in extending it. And extension without architectural clarity is just chaos at scale.

On agentic engineering vs. vibecoding — fair. I think we're in that uncomfortable middle period where the terminology hasn't caught up to the practice. What you're describing (structured context, explicit rules, human review) is meaningfully different from the prompt-and-hope era. It deserves a better name!

The world-model point on AI slop is the most interesting challenge to the evolution analogy. You're right — evolution tests against the real world, while LLMs pattern-match against an abstraction of it. The test suite and the experienced human aren't just quality filters, they're the grounding mechanism. That's doing a lot of work that we probably undercount.

This line is absolutely killer: "The tools have changed. Good software engineering hasn't." SOLID, DDD, TDD — those aren't legacy ideas, they're the scaffolding that lets agents reason cleanly. Couldn't agree more. Thank you for reading!