30 Agent-led Future of Work Thoughts
We're going to be bringing our own agents and hopefully keeping software careers human-centric
Random Observation/Comment #928: The singularity is near for developers. We’re all builders now.
//Agentic collaboration led by humans
Why this list?
What does an “Agent-Augmented Enterprise” look like? So far, we’ve only seen high efficiency from solo entrepreneurs, but at some point we need to build a family of contributors. It can’t just be one person and 100 agents doing everything, or the rest of us won’t feel included in the project’s strategy and trajectory.
The “efficiency-only” model of AI, where you just dictate to agents, leads to a small company with very high infrastructure costs. What we want to keep in the complexity of enterprise work is the vibrancy, diverse views, and entropy that come from human collaboration. Agents act only as tools of the individuals the company hired, not as a replacement for creativity and headcount.
I hope the future has a “Bring Your Own Agents” (BYOA) ecosystem at the Enterprise’s core. Humans still decide what to contribute from their agentic operating system or tech stack preferences. The software product itself will probably be a hive-mind collaboration tool that also fosters a culture of understanding the product.
The Philosophy of Human-Agent Teams
BYOA (Bring Your Own Agent) - Employees are encouraged to curate their own digital “staff” that reflects their unique working style. They may be more familiar with how these agents are optimized and how work can be split across them to reduce context window usage and manage their tokens.
The Anti-Monolith - We reject the single “Corporate GPT” in favor of a diverse “Agent Marketplace” within the company. A single GPT that holds the whole company’s codebase doesn’t really make sense because it would discourage diverse thinking. If your whole company is just your supercomputer, then is the human job now just scheduling the tasks the supercomputer executes? That sounds like a one-person job, and that one person can get disillusioned with the power.
Entropy as an Asset - Multiple agents with different “personalities” or logic models prevent groupthink and boring, mid-level outputs. Entropy comes from human understanding, which breeds wrong answers and problems that require unique solutions. We’re excellent random generators.
The Agent-to-Human Ratio - Success is measured by how many agents a person can lead, not how many humans they can replace. If you’re able to mentally cover the deliverables while delegating your tasks to multiple agents, then perhaps you can have a one-person marketing team. In many cases, smaller companies give a head of Marketing a large budget so they can delegate chunks of work out to consultants. Agent management will be similar: a single person is still accountable for the on-time delivery.
Agency over Autonomy - Humans have the final agency; agents have the task-level autonomy.
Human-in-the-Loop-by-Design - Software must require human “taste” and “ethical sign-off” at critical forks in the road.
The Software & Product Mechanics
Contribution Commitments - The collaborative product we build is a GitHub/source control for ideas. Individuals “commit” agent-generated drafts to a shared enterprise goal. Maybe it’s a Slack plug-in for discussion. Maybe it gets saved into a Notion database and we spend time in meetings reviewing these core pieces. Maybe drafts autogenerate into slides and talking through slides becomes the job.
The “Agent Handshake” - Interoperability protocols that allow my research agent to talk to your design agent seamlessly. I think this type of collaboration is more important than agentic commerce, where my agents go off and try to spend my money on things.
Attribution Ledger - Every final product clearly tracks which human led which agent to create which specific component. This is like the break-glass controls we used to have for establishing proper accountability. I guess it can also be a shared log.
Agent “Forking” - If a teammate has a highly effective agent setup, others can “fork” it and adapt it to their own style. It would be awesome if the original creator and trainer of the agent were paid for their fork.
The Sandbox Buffer - A private space where users can experiment with their agents before “submitting” results to the team. Perhaps we all work in a new Slack with our agent teams, using this space as a discussion area for outputs.
Collective Memory - An enterprise layer that learns from the output of the agents, not the private prompts of the individuals. We can only learn from what the humans submit since they are the ones ultimately accountable.
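To make the “shared log” idea above a little more concrete, here is a minimal sketch of what an attribution ledger could look like: an append-only record of which human led which agent to produce which component. Everything here is hypothetical (the `LedgerEntry` and `AttributionLedger` names, the people and agent names); it’s one possible shape, not a spec.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LedgerEntry:
    """One append-only record: which human led which agent to make what."""
    human: str      # the accountable person
    agent: str      # the agent configuration they were leading
    artifact: str   # the component that was committed
    at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AttributionLedger:
    """A shared log: entries are only appended, never edited or deleted."""
    def __init__(self) -> None:
        self._entries: list[LedgerEntry] = []

    def commit(self, entry: LedgerEntry) -> None:
        self._entries.append(entry)

    def by_human(self, human: str) -> list[LedgerEntry]:
        """Break-glass view: everything one person is accountable for."""
        return [e for e in self._entries if e.human == human]

# Hypothetical usage
ledger = AttributionLedger()
ledger.commit(LedgerEntry("darlene", "research-agent-v2", "q3-market-brief"))
ledger.commit(LedgerEntry("sam", "design-agent", "landing-page-mock"))
print([e.artifact for e in ledger.by_human("darlene")])
```

The key design choice is the append-only constraint: accountability only works if the record of who led which agent can’t be quietly rewritten after the fact.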
Preserving People Skills & Growth
The “Manager of Machines” Role - I think this is one of the hardest things, because engineering managers usually build empathy through the experience of being individual contributors. This younger generation of workers may need to learn management skills before they ever get to mature into that role. Being an “Agent Architect” will likely become second nature to them, just as being a task-taker was easy for our generation.
Prompt Empathy - Learning to communicate clearly with AI actually improves how we communicate requirements to other humans. We also want clear communication in order to get clear results. It will pay (literally) to be eloquent and expressive.
The Value of “The Mess” - Preserving whiteboards and brainstorming sessions where agents are barred, to keep human friction alive. There’s definitely a human premium here, but I think we build the best collaborative understanding together.
Skill Longevity - Focusing training on high-level strategy, ethics, and “taste”: the things agents can’t yet replicate.
Peer Review 2.0 - Teammates review the logic behind a peer’s agent workflow, not just the final text. I think it’s interesting to see why the queries were shaped the way they were and what that means for the final output.
The Mentorship Loop - Seniors teach juniors not how to do the work, but how to guide the agents to do the work excellently. There’s a mental maturity that will be needed to have a successful and collaborative product.
The New Economic & VC Model
The “Non-Solo” Solo-Preneur - A startup of 3 people that operates with the output of 30, requiring (and deserving) VC-scale backing.
IP Ownership - Individuals “own” their specific agent-configurations (the “how”), while the enterprise owns the result (the “what”).
The Dividend of Time - Using AI-gained efficiency to fund 4-day weeks or “creative sabbaticals” rather than just more tasks.
Agent Diversity Scores - Measuring a team’s health by the variety of AI models and workflows they utilize. This might be a stretch, but we’ll probably find cycles to try out different models.
The “Quality over Speed” Metric - Judging teams by the depth of their output, given that speed is now a commodity.
Culture & Human Value
Cognitive Sovereignty - No one is forced to use a specific AI tool. You own your thought process.
The Emotional Layer - Agents handle the data while the humans handle the morale, the conflict resolution, and the vision. I don’t know if we’ll “miss” the way we work together now, but there is something special about making friends with coworkers and having the opportunities to grow deeper than the paycheck.
Agent Transparency - Being open about when an agent was used, so human “raw” talent is still recognized and celebrated. I bet everyone thinks I don’t write lists of 30 anymore (but I do).
The “Human-Only” Vault - Creating space for ideas that are deliberately developed without AI interference.
Collaborative Ownership - The product makes it easy to see how 10 different people (and their 50 agents) built a single masterpiece.
Anti-Isolation Protocols - Software features that require human-to-human syncs to unlock certain agent capabilities.
The “Soul” Check - A final project phase where the team asks: “Is this technically perfect but soulless? How do we make it human?”
~See Lemons Prepare for the Agentic Future of Work


