Good arguments for ADRs being more critical with agentic workflows, but not relying on “Sarah” was always worth doing. I’ve been Sarah, and left, leaving a gap despite my best intentions.
A STAG is absolutely not a New Zealand EGOT. The point of the EGOT is that you’re award-winning across multiple genres. A STAG means you’re award-winning within music.
A real NZ EGOT would be something like a NZ Screen Award in both TV and film categories, an Aotearoa Music Award, and I guess a regional theatre award, or maybe even take a slightly different tack and use a NZ Comedy Trust Award.
And:
Great engineering is not deployments. It’s not monitoring, not dashboards. It’s understanding. Knowing how the pieces connect, who owns what, how changes spread, and where risk has quietly been building for months until it suddenly matters.
Posing interesting questions.
Very true in the AI context of the article, but also elsewhere, for example in reviews of things the reviewer has only used for hours or a day.
“ACE”, a project at GitHub Next, looks really interesting. I think a lot of this would be really nice even without the AI element. It’s funny how so much of the best software engineering tooling is really old (mature?) tooling, with splashes of newness here and there.
And I’ll be honest: if I could set aside the ethical, legal, economic, and environmental issues with generative AI, it’d be pretty damn cool, too. Large language models give us a quantum leap in natural language processing, proofreading, transcription, translation, and summarization. Yes, I know all the ways in which LLMs are “bad” at all those things, but in comparison to the previous state of the art in machine-powered proofing, transcription, translation, and summarization, it’s just no contest. … But of course, you can’t set aside the ethical, legal, economic, and environmental issues with generative AI.
This is how I feel, too. Lots of other thoughts around the intersection of generative AI and capitalism worth reading.
We did not align politically on many things, him being a “hyper neoliberal” and me being a “social democrat” (at least according to what I feel was our mutual impression of each other). Any time I saw that @mitsuhiko handle in a thread, I felt the urge to tell someone they are wrong on the internet. The one thing that differentiated Armin from other internet trolls was the way he conducted himself in these heated discussions. He was never emotional or aggressive. Our discussions would either end in cordial disagreement, or a newfound common understanding. That’s extremely rare on the internet.
This has almost nothing to do with the post topic (pi moving to Earendil), but it’s what the world needs more of. Some of the best working relationships I’ve had have been with people with whom I had almost no alignment at the time, and some of those also turned into great friendships outside of work. It seems like we (the world) have somehow lost the ability to get along and collaborate while still disagreeing.
There are obviously some xkcd comics that are iconic, but their humour doesn’t always hit with me personally. However, today’s dark mode one is both amusing and one of the really impressive ones (you need to use the page itself, not view it in an RSS reader, as I normally do).
Regardless, the raw economics of the AI data center boom are horrendous. Even in the best-case scenario, with guaranteed tenancy for years, these data centers are so debt-dependent and drenched in depreciation, opex, and maintenance that even the most meager margins are hard to attain.
I’m sad that none of our local media seem to be addressing the actual viability (🔒) of this new “AI factory” data centre in Southland. There’s at least some coverage of concerns about water (and perhaps electricity, although if we did sensible things in our energy usage that could go away; but we need a better government than the current one for that) – media seem to just accept that it’s a good idea financially, whereas the evidence doesn’t really back that up.
In any case, there is no future for any AI company that uses a subscription-based approach, at least not one where they don’t directly pass on the cost of compute. This is a huge problem for both Anthropic and OpenAI, as their scurrilous growth-lust means that they’ve done everything they can to get customers used to paying a single monthly cost that directly obfuscates the cost of doing business.
Ed Zitron always has a lot of (probably too many) words and I think there is more value in present LLM output than he sees, but his financial analysis I very much believe. This section on subscriptions and token cost seems spot on, and I also don’t know how it can possibly get resolved without some magic fix appearing.
Parents are still buying weirdly formal and awkwardly-posed school photos. Why?
An excellent question. For me, I think it was this (while he was still at school):
The third reason is my own feeble sentimentality. If there is a picture that exists out there of my precious offspring, how can I say ‘no, thank you I don’t want it’?
I did love how Ahuroa School did their own photographs, and they were always vastly better than the “professional” ones. In nature, moderately natural, not in uniform, fairly relaxed. I’m not sure if they still do that or not – part of it would have been that a parent was an excellent photographer, but surely that will continue to happen, to some degree of “excellent”.
I don’t understand why schools don’t do this themselves more. Some staff member or student must be a decent photographer with reasonable equipment, and then they could either sell them for peanuts, with the money going to the school, or just give them away to parents who have already forked out for enough school stuff.
I’m not sure everything else flows from that, but I agree this is a big part.
And:
Some problems, however, only reveal their shape over long horizons.
And:
When you have spent years delivering reliable tools, you earn the political capital to say “No” to the spotlight when it threatens the product.
Lots of good points about how Staff+ works differently outside of “product”.
Interesting thoughts on tracing.
Semver was an attempt to make versioning true-or-false: either a release breaks compatibility or it doesn’t. In practice it became good-or-bad anyway. Different ecosystems interpret it differently. Some communities treat it as sacred law, others as rough guidance. The spec’s existence didn’t settle the question. Hyrum’s Law makes it worse: a bug fix for one user is a breaking change for someone who depended on that bug. There’s no objectively correct version bump, only outcomes that help or hurt specific people.
This and other good points, in Package Management is a Wicked Problem.
The stories of the early days of Slack have reached the icky stage.
This is specifically about Git, a notoriously tricky tool, but is very true for many others too.
Solid advice for writing software to last 10+ years. I’ve done this once; I hope to manage to do it again before retiring.
I also find it annoying when Claude Code picks an old action version, but dependabot immediately opens a PR to fix it, so that seems about as convenient as remembering to point it at something like this. It was more annoying when it used to do web searches to try to find the hash to pin to (which was always wrong); these days I think there’s a baked-in skill that knows how to use Git to get the right one.
I like “numbers you should know”, like this Python one or this latency one, but I feel you only need a sense of the relative differences, and maybe some idea of the algorithmic complexity and CPU/memory trade-offs for some of them, rather than actually knowing the numbers. You can always check them, or look up pages like this.
(The Python one is a very nicely put together page, though.)
Interesting insights into managing Go dependency updates particularly in light of the recent Trivy issues.