Welcome to my public knowledge archive, where I document insights from articles, research, and ideas worth remembering.
Developer frustrations with AI mandates often stem from their being handed down by company leaders who lack close visibility into engineering workflows. Developers describe executives instituting OKRs and tracking AI usage without regard for whether it’s actually helping, let alone where it may be making things worse. Code acceptance rate (how often developers accept the code suggestions an AI tool makes) is a popular adoption metric, but some argue it’s a poor measure because it also counts acceptances of suggestions that turn out to be problematic.
At some point in the very, very early days of GitHub, Tom was looking for anything that could be anthropomorphized into a Git totem, and “Octopus” was the only term in the Git lexicon that seemed to fit the bill. Tom searched for clipart featuring an octopus, and this Simon Oxley image was the cutest of the bunch. So the “octocat” was born.
Fun, quick read on the early days of GitHub and the origin of the famous Octocat.
A report by scientists and experts commissioned by the French president, Emmanuel Macron, last year concluded that children should not be allowed to use smartphones until they were 13 and should be banned from accessing conventional social media such as TikTok, Instagram and Snapchat until they were 18. No child should have a phone before age 11, the report said, and they should only have a handset without access to the internet before 13.
Ubisoft removed the game from customers’ Ubisoft Connect libraries, offering refunds only to those who purchased it recently.
If it’s not physical, I don’t want it.
Quote Citation: Daniel Sims, “Ubisoft argues players don’t own their games in wake of The Crew lawsuit”, April 10, 2025 at 1:47 PM, https://www.techspot.com/news/107502-ubisoft-argues-players-dont-own-their-games-wake.html
Questions of AI authorship and ownership can be divided into two broad types. One concerns the vast troves of human-authored material fed into AI models as part of their “training” (the process by which their algorithms “learn” from data). The other concerns ownership of what AIs produce.
Fully aware that vast data scraping is legally untested—to say the least—developers charged ahead anyway, resigning themselves to litigating the issue in retrospect.
By the end of the period we analyzed, we estimate that about 18% of the data in the financial dataset was generated by LLMs, around 24% in company press releases, up to 15% in job postings from young and small companies, and 14% for international organizations.
Hard to say how accurate this is, as I don’t know that AI detection models are that reliable. But regardless of the exact adoption rate, there is a surge of usage followed by a plateau, reflecting that not everything can be solved by AI.
Ginsparg was frustrated because he couldn’t understand why implementing features that used to take him a day now took weeks. I challenged him on this, asking if there was any documentation for developers to onboard the new code base. Ginsparg responded, “I learned Fortran in the 1960s, and real programmers didn’t document,” which nearly sent me, a coder, into cardiac arrest.
Interview with the creator of arXiv, which I’ve learned is pronounced ‘archive’.
Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing? When a house is being built, tons of people are involved: architects, civil engineers, plumbers, electricians, bricklayers, interior designers, roofers, surveyors, pavers, you name it. You don’t expect a single person, or even a whole single company, to be able to do all of those.
I mean, this is the business of software engineering.
The core idea is to separate the process into distinct components: a Planner, an Evaluator, and an Executor. The Planner generates a plan based on the user’s query. The Evaluator validates the generated plan. The Executor only executes plans that have been validated, ensuring that only sound plans are carried out.
And I guess the human rubber-stamps it? Nowhere is controlling for mistakes mentioned.
Quote Citation: Cedric Chee, “The DNA of AI Agents: Common Patterns in Recent Design Principles”, Dec 24, 2024, https://cedricchee.
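To make the pattern concrete for future me, here’s a minimal sketch of the Planner / Evaluator / Executor separation in Python. All of it is my own illustration, not code from the post; the `plan`, `evaluate`, and `execute` functions stand in for what would be LLM calls and real tool use.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    steps: list[str]

def plan(query: str) -> Plan:
    # Planner: turn the user's query into an ordered list of steps.
    # In a real agent this would be an LLM call.
    return Plan(steps=[f"look up: {query}", f"summarize: {query}"])

def evaluate(p: Plan) -> bool:
    # Evaluator: validate the plan before anything runs,
    # e.g. non-empty and within a step budget.
    return 0 < len(p.steps) <= 10

def execute(p: Plan) -> list[str]:
    # Executor: only ever runs plans the Evaluator approved.
    return [f"done: {step}" for step in p.steps]

def agent(query: str) -> list[str]:
    p = plan(query)
    if not evaluate(p):
        raise ValueError("plan rejected by evaluator")
    return execute(p)

print(agent("The DNA of AI Agents"))
```

The design point is that `execute` never sees a plan the Evaluator hasn’t approved, which is where that “only sound plans are carried out” guarantee is supposed to come from.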
Compound mistakes: an agent often needs to perform multiple steps to accomplish a task, and the overall accuracy decreases as the number of steps increases. If the model’s accuracy is 95% per step, over 10 steps, the accuracy will drop to 60%, and over 100 steps, the accuracy will be only 0.6%.
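A quick back-of-the-envelope check of that compounding in Python (the 95% per-step figure is the quote’s; the script is just mine):

```python
per_step = 0.95  # per-step accuracy from the quote

for steps in (1, 10, 100):
    overall = per_step ** steps  # independent steps compound multiplicatively
    print(f"{steps:>3} steps: {overall:.1%}")

#   1 steps: 95.0%
#  10 steps: 59.9%
# 100 steps: 0.6%
```

Which matches the 60% and 0.6% figures in the quote.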
Herein lies the rub with agents. Once they tumble down a bad path, how can they recover? Reminds me of the rumor(?