When Machines Write Software: Why Anthropic Says Engineers Still Matter
Software engineers – people who design, build, and maintain the digital systems underpinning modern life – have long been associated with writing lines of code by hand. But with the rapid rise of artificial intelligence systems that can generate software automatically, that traditional image is changing. Nowhere is this shift more visible than at leading AI company Anthropic, where AI tools now produce most of the organization's code. Yet despite this dramatic technological change, Anthropic's leaders are emphatic: human engineers matter more than ever.
This article explores that apparent paradox: why a company built around AI believes people remain central to software development; how this shift came about; who is affected; and what this signals for the broader technology industry, workers, and society.
What Is Happening at Anthropic?
Anthropic, the U.S.-based artificial intelligence firm known for its "Claude" series of generative models, has revealed that a large majority of its internal code is generated by AI. According to company executives, tools such as Claude Code now produce roughly 70–90% of the code engineers work with, and by some team leads' accounts, individual engineers rarely write code by hand anymore.
This means that rather than typing instructions in programming languages like Python or Java themselves, Anthropic engineers are increasingly relying on AI to create, refactor, or polish code. In internal discussions and public statements, the company’s leaders compare this to a transformation of the engineering role itself: from crafting code to guiding AI that does so.
What sets Anthropic’s situation apart is not just that AI is generating software, but that this transformation is occurring within a company that builds the very AI technologies responsible for the shift. Tools like Claude Code are not niche assistants but central pillars of the internal development workflow.
Why This Change Exists
A Decade of Progress in AI
The ability of AI systems to generate human-readable text has been developing for many years — from early neural networks trained for language modeling to large transformer models that power systems like GPT and Claude. Starting in the early 2020s, these models began to show competence at simple coding tasks: generating small functions or answering questions about software.
By the mid-2020s, advances in model size, training data, and fine-tuning techniques made it possible for generative AI to tackle increasingly complex programming tasks, including reading codebases, understanding requirements, producing multi-file systems, and repairing bugs.
Two key factors drove this evolution:
- Massive training on code repositories: AI models learned from hundreds of millions of lines of open source code and developer discussions.
- Improved prompting and tools: Developers learned to interact with AI systems more effectively through refined prompts, toolchains, and evaluation practices; a minimal sketch of prompt-driven code generation follows this list.
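To make the second point concrete, the sketch below shows what prompt-driven code generation can look like through the Anthropic Python SDK. The prompt, the task, and the model identifier are illustrative assumptions, not a description of Anthropic's internal Claude Code workflow.

```python
# A minimal sketch of prompt-driven code generation using the Anthropic
# Python SDK. The prompt, task, and model identifier are illustrative
# assumptions, not a depiction of Anthropic's internal tooling.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = (
    "Write a Python function slugify(title: str) -> str that lowercases the "
    "input, replaces runs of non-alphanumeric characters with single hyphens, "
    "and strips leading and trailing hyphens. Return only the code."
)

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; substitute a model available to your account
    max_tokens=512,
    messages=[{"role": "user", "content": prompt}],
)

generated_code = response.content[0].text
print(generated_code)  # a human still reviews this before it enters a codebase
```

In practice, the prompt, the surrounding toolchain, and how the output is evaluated matter as much as the model itself.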
Anthropic itself had predicted that AI would soon be writing the majority of its software code internally. Inside the company, that prediction has now been realized.
Economic and Competitive Pressures
Beyond pure technical capability, broader economic incentives accelerated adoption. Software development is expensive — especially for large enterprises. AI offers a way to reduce repetitive work and increase productivity. Industries from financial services to consumer tech are adopting AI coding tools not just as curiosities, but as productivity multipliers.
Companies such as Spotify have publicly acknowledged that their senior developers rarely write traditional code by hand anymore, relying instead on AI to generate it under human supervision.
Engineers’ Roles Are Changing — Not Disappearing
When AI begins writing code, the intuitive fear is that people who used to write code will no longer be needed. But at Anthropic and other companies, that’s not how executives frame this transition. Instead, they emphasize that engineers are more important today — just in different ways.
A Shift from Typing to Thinking
While AI tools can generate syntax and boilerplate code, they do not yet make judgment calls about product goals, customer needs, ethical constraints, or system architecture. These higher-level cognitive tasks are where human engineers continue to contribute:
- Prompting and supervising AI: Engineers must craft effective instructions that guide AI systems toward correct and useful outputs.
- Design and architecture: Decisions about how software should be structured, what features are needed, and how systems integrate remain human responsibilities.
- Quality checking and validation: AI outputs often require human review to catch errors, misbehavior, or safety issues; a sketch of an automated check that supports this review appears after this list.
- Cross-team coordination: Humans communicate with product managers, designers, and other stakeholders to ensure software meets user needs.
- Innovation leadership: Engineers are expected to push technology boundaries, not just execute predefined tasks.
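To ground the quality-checking point above, here is a minimal, hypothetical gate that runs a linter and the test suite before an AI-generated change is accepted. The choice of ruff and pytest, and the idea that such a script sits in front of human review, are assumptions for illustration rather than a description of any company's actual workflow.

```python
"""A minimal, hypothetical quality gate for AI-generated changes.

Assumes the project uses ruff for linting and pytest for tests; both tool
choices are illustrative, not a description of any particular company's
review process.
"""
import subprocess
import sys


def run_check(name: str, command: list[str]) -> bool:
    """Run one check and report whether it passed."""
    print(f"Running {name}: {' '.join(command)}")
    result = subprocess.run(command)
    passed = result.returncode == 0
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return passed


def main() -> int:
    checks = [
        ("lint", ["ruff", "check", "."]),  # static scan for style and obvious errors
        ("tests", ["pytest", "--quiet"]),  # full test suite
    ]
    results = [run_check(name, command) for name, command in checks]
    if all(results):
        print("All checks passed; the change is ready for human review.")
        return 0
    print("Checks failed; return the output to the model or fix by hand.")
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

A gate like this does not replace review; it narrows what human reviewers have to look for.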
Boris Cherny, who leads Claude Code at Anthropic, summed up this view by noting that engineers now focus on deciding what to build next, talking to customers, and coordinating work across teams — roles that AI cannot yet fulfill independently.
Not Just More Important, but Different
Anthropic continues to hire engineers. The company currently lists more than 100 open developer roles, even as the share of code generated by AI increases. This reflects the belief that engineering work is evolving, not disappearing.
Industry research supports the idea that roles will shift toward orchestration, review, and problem framing — tasks requiring human judgment and context awareness that AI models currently lack.
Who Is Affected and How
Software Engineers on the Ground
Engineers across the industry are among the first to feel the impact. For entry-level developers who traditionally learned by writing code, the shift toward AI-generated programming presents both opportunities and challenges:
Pros:
- Lower barrier to entry for completing basic tasks.
- Increased productivity and ability to focus on conceptual work.
- More rapid prototyping and iteration.

Cons:
- Potential skill atrophy in foundational coding abilities.
- Loss of traditional apprenticeships where newcomers learn by writing and debugging.
- New expectations to master AI tools and prompting as core competencies.
Some engineers report AI fatigue — the cognitive strain of constantly reviewing and correcting machine-produced code, rather than crafting solutions themselves.
Technology Companies
For companies, adopting AI coding tools means rethinking hiring, training, and workflow design. Where teams previously scaled by adding more human coders, many firms are now experimenting with smaller groups of engineers working in tandem with AI.
This shift influences budgeting, project timelines, and organizational structure. Employers may seek engineers with strengths in system design, AI oversight, and cross-disciplinary skills rather than narrow coding expertise.
Broader Workforce and Economy
Beyond software engineers, the automation of coding tasks signals ripple effects throughout the labor market and economy.
Job Market Implications
- Potential displacement: Entry-level jobs in programming and related fields could shrink if AI can handle many tasks traditionally done by humans.
- New job types: Roles in AI management, prompt engineering, and system verification may grow.
- Geographic impact: Regions that depend heavily on IT jobs might see changing employment patterns.
Economic projections suggest that some white-collar jobs may face automation pressures similar to those earlier seen in manufacturing and routine office tasks.
Educational Impact
Curricula at universities and coding bootcamps may need to evolve. Instead of drilling syntax, programs may increasingly emphasize systems thinking, AI interaction, quality assurance, and ethical design.
The Broader Impacts on People and Society
The transition to AI-driven code generation raises questions that go beyond technology firms:
Productivity vs. Skill Development
AI can dramatically accelerate software development — allowing prototypes to be built in hours rather than weeks. Once exclusively human labor, coding is rapidly becoming a hybrid human–machine task. While this boosts productivity, it alters how new developers learn the craft.
There is a concern that younger programmers might not develop deep expertise if they rely heavily on AI for routine tasks. This could create a future skills gap in advanced areas where human intuition and reasoning are critical.
Ethical, Safety, and Bias Issues
Automated code generation introduces new challenges in ensuring software safety and ethical behavior. AI systems can inadvertently embed biases, security flaws, or unintended functionalities unless carefully monitored by humans.
Thus, the role of engineers as guardians of quality and safety becomes crucial — not just a convenience but a necessity.
Economic and Social Disruption
As roles evolve and some traditional jobs contract, social systems — including employment policies, unemployment safety nets, and retraining programs — may need to adapt. Workers displaced from routine coding roles might require substantial retraining to transition into areas involving AI supervision, design thinking, and cross-disciplinary problem solving.
What Comes Next: A Balanced Outlook
Potential Scenarios
1. Continued Collaboration, Not Replacement
AI becomes an integrated partner in software development, with humans directing high-level work and machines managing routine tasks. This synergy could raise productivity while keeping engineers central to innovation.
2. Rapid Disruption in Entry-Level Jobs
If AI continues to automate foundational coding tasks, entry-level positions could diminish substantially. Training pathways and career ladders in software engineering might need a major rethink.
3. New Ethical and Regulatory Frameworks
As AI systems assume larger roles, governments and industry may enforce standards for safety, accountability, transparency, and worker protection.
Challenges Ahead
- Avoiding skill erosion among new programmers.
- Ensuring AI tools are safe, reliable, and free from harmful biases.
- Developing educational systems that prepare students for hybrid human–AI roles.
- Maintaining economic stability in sectors affected by automation.
Opportunities
- Engineers can focus more on creative, high-impact work.
- Organizations can deliver software faster and with potentially fewer errors.
- New professions and specializations may emerge centered on AI oversight and system design.
Conclusion
The story unfolding at Anthropic — where AI now writes most internal code — is more than a technological curiosity. It reflects a broader transformation in how work is done, how tools augment human roles, and how society must adapt to rapid automation.
Anthropic and similar companies argue that, far from making human engineers obsolete, this shift amplifies the need for human judgment, creativity, and oversight. What changes is not the importance of engineers, but the nature of their contribution.
As AI continues to evolve, the future of software development will likely be defined not by machines replacing people, but by how effectively humans and machines can collaborate — with engineers steering, validating, and shaping an increasingly automated world.
