192 points on Hacker News. 113 comments. In under 12 hours. A project called “Get Shit Done” just hit the front page — and developers are losing their minds over it.

The pitch is deceptively simple: describe what you want, and it builds it. No sprint ceremonies. No story points. No Jira workflow from hell. But unlike the hundreds of “vibecoding” tools that promise the same thing and deliver spaghetti code, GSD actually works at scale. Here’s why.

The Problem Nobody Talks About: Context Rot

Every AI coding tool degrades as your session gets longer. Claude starts strong — clean code, smart decisions, proper structure. Then after 30 messages, it forgets the architecture you established. After 60, it starts conflicting with its own earlier choices. By 100, you’re babysitting a confused intern.

This is context rot — the quality degradation that happens as an AI fills its context window. The model literally runs out of room to think clearly, and nobody building these tools talks about it enough.

Most developers work around it by starting new sessions, copy-pasting context, or just tolerating increasingly mediocre output. GSD treats context rot as the core problem to solve, not an edge case to ignore.
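The dynamic is easy to see with a toy token budget. This sketch is illustrative only — the window size and per-message cost are made-up round numbers, not measurements of any real model — but it shows why a fixed context window squeezes out room for the reply as a session grows:

```python
# Toy model of context rot: a fixed context window fills up as the
# conversation history grows, leaving less headroom for each reply.
WINDOW = 200_000            # hypothetical context window, in tokens
TOKENS_PER_MESSAGE = 1_500  # rough average for a code-heavy exchange

def room_left(messages: int) -> int:
    """Tokens still available after the history is loaded."""
    return max(WINDOW - messages * TOKENS_PER_MESSAGE, 0)

for n in (30, 60, 100, 130):
    print(f"{n:>3} messages -> {room_left(n):>7,} tokens of headroom")
```

Fresh sessions and pasted summaries work because they reset the left-hand side of that subtraction. GSD's bet is different: keep each agent's working history small by design instead of asking you to manage it manually.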

What GSD Actually Does

GSD (Get Shit Done) is a meta-prompting and context engineering system created by a developer called TÂCHES. It sits on top of Claude Code, OpenCode, Gemini CLI, Codex, Copilot, and Antigravity — giving any of them a structured workflow that prevents context decay.

The core idea: before any coding happens, the system extracts everything it needs to know through structured prompts. Your project’s architecture, constraints, coding standards, and goals get encoded into the system. Then Claude works within those rails.

# One command to install
npx get-shit-done-cc@latest

The installer asks which runtime you want to target and whether to install globally or per-project. That’s it.

The Commands That Matter

GSD gives you a handful of commands that replace an entire project management workflow:

  • /gsd:init — Extracts requirements from your description. Asks the right questions. Produces a spec.
  • /gsd:build — Takes the spec and builds it. Subagents handle different parts in parallel.
  • /gsd:test — Verifies the output against your spec. Catches regressions.
  • /gsd:ship — Prepares for deployment with a final review pass.

No standups. No tickets. No “let’s circle back on this.” Just structured execution.

# Non-interactive installs for CI/Docker
npx get-shit-done-cc --claude --global
npx get-shit-done-cc --opencode --global
npx get-shit-done-cc --gemini --global

Why This Hits Different

I’ve tested SpecKit, BMAD, OpenSpec, and Taskmaster. They all share the same DNA: they try to turn your coding workflow into a scaled agile process. Sprint ceremonies. Stakeholder syncs. Retrospectives. Enterprise theater for solo developers building side projects.

GSD’s creator nailed it in the README:

“I’m not a 50-person software company. I don’t want to play enterprise theater. I’m just a creative person trying to build great things that work.”

The complexity is in the system, not your workflow. Behind the scenes, GSD handles context engineering, XML prompt formatting, subagent orchestration, and state management. What you see is a few commands that just work.
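GSD's internal formats aren't documented here, but the general technique behind "XML prompt formatting" — serializing project context into explicitly tagged sections so the model can't blur instructions, constraints, and goals together — looks roughly like this. The tag names and fields are illustrative, not GSD's actual schema:

```python
# Sketch of XML-style prompt formatting: each piece of project
# context is wrapped in an explicit tag so sections stay distinct.
from xml.sax.saxutils import escape

def build_prompt(spec: dict) -> str:
    sections = []
    for tag in ("architecture", "constraints", "standards", "goal"):
        body = escape(spec.get(tag, ""))  # escape <, >, & in user text
        sections.append(f"<{tag}>\n{body}\n</{tag}>")
    return "\n".join(sections)

prompt = build_prompt({
    "architecture": "Next.js frontend, FastAPI backend",
    "constraints": "No new runtime dependencies",
    "standards": "Type hints everywhere; tests for public functions",
    "goal": "Add CSV export to the reports page",
})
print(prompt)
```

The payoff of tagged sections is that every agent downstream receives the same unambiguous structure, no matter how messy the original description was.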

What makes the output actually reliable:

  1. Structured extraction — The system doesn’t just take your prompt and run. It asks clarifying questions and builds a proper spec before writing any code.
  2. Subagent orchestration — Different parts of the build run in parallel sub-agents, each with focused context. No single agent drowning in information.
  3. State management — Progress is tracked across sessions. Start where you left off without re-explaining everything.
  4. Verification loops — The system tests its own output against the spec, catching issues before you see them.
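Point 3 is the piece that survives between sessions. A minimal version of cross-session state — the file name and fields here are hypothetical, this is the pattern rather than GSD's real format — is just a progress file on disk:

```python
# Sketch of session state: persist progress so a new session can
# resume where the last one stopped, without re-explaining anything.
import json
from pathlib import Path

STATE_FILE = Path(".gsd-state.json")  # hypothetical location

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state, indent=2))

def load_state() -> dict:
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"completed": [], "next": None}

# Record that the spec phase finished and the build phase is next.
state = load_state()
state["completed"].append("spec")
state["next"] = "build"
save_state(state)
```

Because the state lives in the repo rather than the context window, resuming costs a few hundred tokens instead of a full re-briefing.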

The Numbers

The project claims adoption by engineers at Amazon, Google, Shopify, and Webflow. The Hacker News thread is full of comparisons:

“I’ve done SpecKit, OpenSpec and Taskmaster — this has produced the best results for me.”

“By far the most powerful addition to my Claude Code. Nothing over-engineered. Literally just gets shit done.”

The creator recommends running Claude Code with --dangerously-skip-permissions for the full automated experience:

claude --dangerously-skip-permissions

If that makes you nervous (fair), you can add granular permissions in .claude/settings.json instead:

{
  "permissions": {
    "allow": ["Bash(date:*)", "Bash(echo:*)", "Bash(git commit:*)", "Bash(npm test:*)"]
  }
}

GSD vs. The Competition

Feature               GSD            SpecKit           BMAD                Taskmaster
Setup complexity      One command    Multiple configs  Heavy scaffolding   Moderate
Context engineering   Core feature   Partial           Basic               Basic
Multi-runtime         6 runtimes     Claude only       Claude only         Limited
Subagent support      Built-in       Manual            Manual              No
Enterprise theater    None           Some              Lots                Some
HN reception          192 pts 🔥     Mixed             Niche               Moderate

How to Set It Up

Install globally for all projects:

npx get-shit-done-cc@latest

Or target a specific runtime:

# Claude Code only
npx get-shit-done-cc --claude --global

# OpenCode (open source, free models)
npx get-shit-done-cc --opencode --global

# All supported runtimes
npx get-shit-done-cc --all --global

Verify the installation:

# In Claude Code or Gemini
/gsd:help

# In OpenCode
/gsd-help

# In Codex
$gsd-help

Then describe your project and let the system do the rest:

/gsd:init I want to build a real-time dashboard that shows
GitHub Actions pipeline status across multiple repos with
alerting for failed builds.

The system will ask clarifying questions, build a spec, and hand off to /gsd:build for implementation.

The Bigger Picture

GSD represents a shift in how we think about AI coding tools. The first wave was raw access — just give the model a prompt and pray. The second wave was chat-based refinement — iterative back-and-forth. We’re entering the third wave: structured context engineering.

The models are good enough. The bottleneck is how we feed them information. GSD solves this by treating the prompt pipeline as a real engineering problem — with extraction, validation, state management, and verification.
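The last stage of that pipeline — verification — reduces to checking generated output against the spec's requirements and feeding failures back for another pass. The requirement format and check below are invented for illustration; real verification would run tests, not string matches:

```python
# Sketch of a verification loop: compare a draft against the spec's
# requirements and collect whatever is still missing.
def verify(output: str, requirements: list[str]) -> list[str]:
    """Return the requirements the output does not yet mention."""
    return [r for r in requirements if r.lower() not in output.lower()]

spec = ["retry on failure", "log each request", "timeout"]
draft = "Adds a timeout and will log each request."

failures = verify(draft, spec)
print(failures)  # -> ['retry on failure']
```

The point is the loop shape: extraction produces checkable requirements up front, so "done" becomes a computed answer instead of a feeling.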

This is where the industry is heading. The tools that win won’t be the ones with the best models — they’ll be the ones that manage context most effectively.

The Verdict

GSD is the first spec-driven dev system that feels like it was built by someone who actually uses AI coding tools daily. No bloat. No enterprise cosplay. Just a clean abstraction over the real problem: making AI agents produce consistent, high-quality output across long sessions.

If you’re using Claude Code, Copilot, or any supported runtime and you’re tired of context rot killing your productivity after 30 minutes — install GSD. It takes 30 seconds, and it might fundamentally change how you build software.

The project is open source, evolving fast, and already backed by serious developers. Update periodically with npx get-shit-done-cc@latest — it ships improvements regularly.

One-line verdict: The first AI coding tool that solves the actual problem instead of adding more ceremony around it.