Human-AI Intersection · Thought Leadership · AI Weekly

AI News Weekly - 100 Years From Now: The Case for Artificial Stupidity - Mar 23rd 2026


Why I picked this

Victor flags this because it inverts the entire AI capability race. While everyone's optimizing for speed and autonomy, this piece asks: what if we're building the wrong thing? The 'artificial stupidity' frame isn't cute contrarianism — it's a serious design question about intentional friction. The author's exploring what happens when we optimize for human agency preservation instead of task completion velocity. This matters now because we're hardcoding automation assumptions into systems that will compound for decades. The philosophical framing ('100 years from now') gives permission to question premises we're treating as axioms in 2025. Worth reading not for predictions but for the design principles it surfaces: when should AI deliberately slow down, ask dumb questions, or force human decision points? That's the kind of systems thinking that separates builders from feature shippers.

human-in-the-loop design · automation philosophy · intentional friction · AI capability constraints · long-term systems thinking

Three lenses

Builder

The 'worse on purpose' constraint is actually a product spec — I'd prototype an AI assistant that requires human confirmation on every third action, measure task completion vs. error rate, and see if intentional friction creates better outcomes than full automation. Deployable this quarter.
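The Builder's "confirmation on every third action" spec can be sketched in a few lines. This is a hypothetical prototype shape, not anyone's shipping code: `FrictionGate`, `run_with_friction`, and the callbacks are all illustrative names, and a real assistant would wrap actual agent calls instead of plain functions.

```python
# Hypothetical sketch of the Builder prototype: wrap an agent's action loop
# so every third action requires human confirmation, while tracking the
# completion-vs-error numbers the lens says to measure.

class FrictionGate:
    """Pauses for human confirmation every `interval` actions."""

    def __init__(self, interval: int = 3):
        self.interval = interval
        self.count = 0
        self.completed = 0
        self.errors = 0

    def should_confirm(self) -> bool:
        self.count += 1
        return self.count % self.interval == 0

    def record(self, ok: bool) -> None:
        if ok:
            self.completed += 1
        else:
            self.errors += 1


def run_with_friction(actions, execute, confirm, interval: int = 3):
    """Run actions; route every `interval`-th one through a human `confirm` callback.

    `execute(action)` returns True on success; `confirm(action)` returns the
    human's yes/no. A vetoed action is skipped, not counted as an error.
    """
    gate = FrictionGate(interval=interval)
    for action in actions:
        if gate.should_confirm() and not confirm(action):
            continue  # human vetoed the action
        gate.record(execute(action))
    return gate.completed, gate.errors
```

The interesting measurement is then one line: compare `(completed, errors)` from this gated loop against the same run with `confirm=lambda a: True` (full automation) and see whether the friction actually buys a lower error rate.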

Revenue Leader

Philosophically interesting, operationally vague. Show me the pilot where 'artificial stupidity' improved win rates or reduced churn, then we'll talk about rolling it out. Until then, this is a dinner party conversation, not a deployment strategy.

Contrarian

Everyone will nod along to this and then immediately go back to automating everything because 'intentional friction' doesn't show up in velocity metrics. The real test: name one company that's actually shipping AI that's deliberately less capable. You can't, because the incentives don't support it.


Why this matters for operators: Surfaces the design question operators aren't asking: when should AI deliberately not automate? Relevant for teams building internal tools where error cost exceeds speed benefit.

I cover AI×GTM intelligence like this every Wednesday.

Get STEEPWORKS Weekly

More picks

GTM Ops · Demand Gen Report · Victor's pick

Trust is the New Currency in B2B Buying: SurveyMonkey, Reddit

These are high-percentage stats confirming what we already implicitly know:

  • Peer validation (73% trust) now dramatically outweighs traditional vendor marketing (55% trust vendor sites, 39% trust AI chatbots, 36% trust social media) in early-stage B2B buying
  • 83% of B2B buyers complete self-directed research before sales engagement, with high-stakes categories (software, professional services, HR) taking several weeks to months in extended evaluation
  • Search engines serve as a navigation layer, not a destination — buyers use search to identify options, then validate through peer communities like Reddit (121M daily users, 19% YoY growth), creating an imperative for authentic community presence
community-led-growth · back-to-basics-gtm · human-first-sales
AI Development · GTM AI Podcast & Newsletter · Victor's pick

Claude Channels

The move from user-initiated to automated workflows is, IMO, one of the main transitions enabled by current agentic capabilities

  • Claude Channels (launched March 20, 2026) enables event-driven AI automation via MCP protocol, shifting from pull-based (user-initiated) to push-based (event-triggered) workflows
  • Practical use case: CI/CD failures can trigger autonomous investigation, fix deployment, and resolution without human intervention - reducing 12-hour incident windows to near-zero
  • Technical implementation uses MCP servers connecting Claude Code to messaging platforms (Telegram/Discord at launch), with Bun runtime for 4x faster cold-start performance vs Node
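The pull-to-push shift the bullets describe can be illustrated with a tiny event dispatcher. To be clear, this is a generic sketch, not the actual Claude Channels or MCP API: `EventBus`, `investigate_failure`, and the `"ci.failure"` event name are all made up for illustration; a real setup would have an MCP server publishing the event and an agent run behind the handler.

```python
# Generic illustration of push-based (event-triggered) automation, as opposed
# to pull-based (user-initiated) workflows. All names here are hypothetical;
# this is not the Claude Channels / MCP API.

from typing import Callable, Dict, List

class EventBus:
    """Minimal dispatcher: handlers subscribe to event types, events push to them."""

    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._handlers.get(event_type, []):
            handler(payload)


resolved: List[str] = []

def investigate_failure(payload: dict) -> None:
    # Placeholder for an autonomous agent run: diagnose, fix, redeploy.
    resolved.append(payload["build_id"])

bus = EventBus()
bus.subscribe("ci.failure", investigate_failure)

# A CI webhook would publish this; no human initiates the workflow.
bus.publish("ci.failure", {"build_id": "build-1234", "branch": "main"})
```

The point the newsletter is making lives in that last line: the trigger is the CI system, not a person typing a prompt, which is what collapses the 12-hour incident window.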
ai-coding-tools · automation-stacks · signal-infrastructure
AI×GTM · The Information · Victor's pick

AWS Accelerates Internal AI Agents Following Staff Cuts

If you think white-collar job displacement is a joke, or a distant-future concern, this is one more sign that it is most definitely NOT. It's here.

  • AWS is deploying AI agents to handle technical sales support functions previously performed by thousands of specialists
  • The AI automation directly correlates with recent layoffs of hundreds in sales, business development, and technical specialist roles
  • Major cloud provider is using its own AI capabilities to reduce headcount in customer-facing technical roles, signaling broader industry trend
ai-sdr-adoption · automation-stacks · back-to-basics-gtm

This analysis was produced using the STEEPWORKS system — the same agents, skills, and knowledge architecture available in the GrowthOS package.