My exact words to a small group of our finance, legal and talent colleagues last week: ‘You are committing career suicide if you’re not aggressively experimenting with AI.’
The reality is, we don’t know today how much AI will do in the future, but lots of companies are betting on a lot. ‘Even if AI doesn’t take your job, someone who is using AI will’ is the mantra I hear most repeated.
And then, almost overnight, business leaders see the savings of replacing humans with AI — and do this en masse. They stop opening up new jobs, stop backfilling existing ones, and then replace human workers with agents or related automated alternatives.
Be it automation, offshoring, or just plain ‘do more with less’, AI is accelerating the automation of many tasks. I know the long view is that technology results in more jobs.
During a fireside chat with Meta CEO Mark Zuckerberg at Meta’s LlamaCon conference on Tuesday, Microsoft CEO Satya Nadella said that 20% to 30% of code inside the company’s repositories was “written by software” — meaning AI.
I wish businesses had learned their lesson from the era when they tried to measure output by lines of code (LOC). Now we have the made-up metric of ‘accepted’ suggestions. There are lots of better ways to measure effectiveness; how fast the printer goes isn’t one of them.
Anthropic expects AI-powered virtual employees to begin roaming corporate networks in the next year, the company’s top security leader told Axios in an interview this week.
Remind me to check in with Anthropic, which currently has over 100 open positions on its careers page, to see how this is going.
Quote Citation: Sam Sabin, “Exclusive: Anthropic warns fully AI employees are a year away”, Apr 22, 2025, https://www.axios.com/2025/04/22/ai-anthropic-virtual-employees-security
Artificial intelligence, as it exists and is useful now, is probably already baked into your business’s software supply chain. Your managed security provider is probably using algorithms cooked up in a lab to detect anomalous traffic, and here’s a secret: they didn’t do much AI work either. They bought software from the tiny sector of the market that actually does need to employ data scientists.
I think this is the most responsible take on AI.
Whenever a new technology is invented, the first tools built with it inevitably fail because they mimic the old way of doing things.
I think I read elsewhere that the true power of AI will come when it finds its application niche, not in writing emails.
Quote Citation: Pete Koomen, “AI Horseless Carriages”, April 2025, https://koomen.dev/essays/horseless-carriages/
AI is creating new work that cancels out some potential time savings from using AI in the first place.
Adoption of AI is going gangbusters, but results in the marketplace aren’t dramatic. The best case I’ve seen is that AI is like a lot of other automation: it frees time for more work, not less.
Quote Citation: Thomas Claburn, “Generative AI is not replacing jobs or hurting wages at all, economists claim”, Apr 29, 2025, https://www.
It’s really much too long to quote only one or two sections, but it draws a line in the sand: five to ten years to AI-driven economies.
Quote Citation: Dario Amodei, “Machines of Loving Grace”, October 2024, https://www.darioamodei.com/essay/machines-of-loving-grace
A real-estate lawyer might have provided a better analysis, I thought—but not in three minutes, or for two hundred bucks. (The A.I.’s analysis included a few errors—for example, it initially overestimated the size of the property—but it quickly and thoroughly corrected them when I pointed them out.)
There’s a lot going on in this article, but the point is that ChatGPT and its ilk can summon up whatever answer you guide them toward.
Developer frustrations with AI mandates often surface due to their being handed down by company leaders who don’t have close visibility into engineering workflows. Developers describe executives instituting OKRs and tracking AI usage without any regard for whether it’s actually helping, let alone where it may be making things worse. Code acceptance rate (how often developers accept the code suggestions an AI tool makes) is a popular adoption metric, but some argue it’s a poor measure because it counts people accepting suggestions that may be problematic.
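To make the complaint about acceptance rate concrete, here’s a minimal sketch with made-up numbers (the data, field names, and 30-day retention window are all hypothetical, not from any vendor’s tooling). It shows how the headline metric can look great even when half of the accepted code gets reverted or rewritten later:

```python
# Hypothetical illustration: "acceptance rate" counts every accepted
# suggestion, including ones a developer later reverts or rewrites.
# Each tuple is (was_accepted, still_in_codebase_after_30_days).
suggestions = [
    (True, True),
    (True, False),   # accepted, then reverted
    (True, False),   # accepted, then rewritten from scratch
    (False, False),  # rejected outright
    (True, True),
]

accepted = [s for s in suggestions if s[0]]
acceptance_rate = len(accepted) / len(suggestions)

# A (still imperfect) alternative: of the accepted suggestions,
# how many actually survived in the codebase?
retained = [s for s in accepted if s[1]]
retention_rate = len(retained) / len(accepted)

print(f"acceptance rate: {acceptance_rate:.0%}")  # 80%
print(f"retention rate:  {retention_rate:.0%}")   # 50%
```

Same five suggestions, two very different stories: the dashboard metric says 80%, while the code that actually stuck says 50%.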