There are obviously some xkcds that are iconic, but their humour doesn’t always land with me personally. However, today’s dark mode one is both amusing and one of the really impressive ones (you need to use the actual page, not view it in an RSS reader, as I normally do).
Regardless, the raw economics of the AI data center boom are horrendous. Even in the best-case scenario, with guaranteed tenancy for years, these data centers are so debt-dependent and drenched in depreciation, opex, and maintenance that even the most meager margins are hard to attain.
I’m sad that none of our local media seem to be addressing the actual viability (🔒) of this new “AI factory” data centre in Southland. There’s at least some coverage of concerns about water (and perhaps electricity – although if we did sensible things with our energy usage that concern could go away; we’d need a better government than the current one for that). The media seem to just accept that it’s a good idea financially, whereas the evidence doesn’t really back that up.
In any case, there is no future for any AI company that uses a subscription-based approach, at least not one where they don’t directly pass on the cost of compute. This is a huge problem for both Anthropic and OpenAI, as their scurrilous growth-lust means that they’ve done everything they can to get customers used to paying a single monthly cost that directly obfuscates the cost of doing business.
Ed Zitron always has a lot of (probably too many) words, and I think there is more value in present-day LLM output than he sees, but I very much believe his financial analysis. This section on subscriptions and token cost seems spot on, and I also don’t know how it can possibly be resolved without some magic fix appearing.
Parents are still buying weirdly formal and awkwardly-posed school photos. Why?
An excellent question. For me, I think it was this (while he was still at school):
The third reason is my own feeble sentimentality. If there is a picture that exists out there of my precious offspring, how can I say ‘no, thank you I don’t want it’?
I did love how Ahuroa School did their own photographs, and they were always vastly better than the “professional” ones: out in nature, moderately natural, not in uniform, fairly relaxed. I’m not sure if they still do that or not – part of it was that one parent was an excellent photographer, but surely that will continue to happen, to some degree of “excellent”.
I don’t understand why more schools don’t do this themselves. Some staff member or student must be a decent photographer with reasonable equipment; then either sell the photos for peanuts, with the money going to the school, or just give them to the poor parents who have already forked out for enough school stuff.
I’m not sure everything else flows from that, but I agree this is a big part.
And
Some problems, however, only reveal their shape over long horizons.
And
When you have spent years delivering reliable tools, you earn the political capital to say “No” to the spotlight when it threatens the product.
Lots of good points about how Staff+ works differently outside of “product”.
Interesting thoughts on tracing.
Semver was an attempt to make versioning true-or-false: either a release breaks compatibility or it doesn’t. In practice it became good-or-bad anyway. Different ecosystems interpret it differently. Some communities treat it as sacred law, others as rough guidance. The spec’s existence didn’t settle the question. Hyrum’s Law makes it worse: a bug fix for one user is a breaking change for someone who depended on that bug. There’s no objectively correct version bump, only outcomes that help or hurt specific people.
This and other good points, in Package Management is a Wicked Problem.
The stories of the early days of Slack have reached the icky stage.
This is specifically about Git, a notoriously tricky tool, but is very true for many others too.
Solid advice for writing software to last 10+ years. I’ve done this once; I hope to manage to do it again before retiring.
I also find it annoying when Claude Code picks an old action version, but dependabot immediately opens a PR to fix it, which seems about as convenient as remembering to point Claude at something like this. It was more annoying when it used to do web searches to try to find the hash to pin to (which was always wrong), but these days I think there’s a baked-in skill that knows how to use Git to get the right one.
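For what it’s worth, the trick (as I understand it) is just `git ls-remote` against the action’s repository – here’s a sketch, with my own hypothetical function names, of resolving the commit SHA behind a tag so you can pin `uses: actions/checkout@<sha>` instead of a mutable tag:

```python
import subprocess

# Illustrative sketch only: function names are mine, not from Claude Code
# or dependabot. The idea is that `git ls-remote` tells you which commit
# a tag points at, with no API token required.

def sha_from_ls_remote(output: str, tag: str) -> str:
    """Pick the SHA for `tag` out of `git ls-remote` output lines."""
    for line in output.splitlines():
        sha, _, ref = line.partition("\t")
        if ref == f"refs/tags/{tag}":
            return sha
    raise ValueError(f"tag {tag} not found")

def resolve_action_sha(repo: str, tag: str) -> str:
    """Ask GitHub (via git) which commit a tag of `owner/name` points at."""
    out = subprocess.run(
        ["git", "ls-remote", f"https://github.com/{repo}", f"refs/tags/{tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sha_from_ls_remote(out, tag)
```

Something like `resolve_action_sha("actions/checkout", "v4")` then gives you the hash to pin to.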
I like “numbers you should know” pages, like this Python one or this latency one, but I feel you only need a sense of the differences, and maybe some idea of the algorithmic complexity and CPU/memory trade-offs for some of them – you don’t need to actually know the numbers. You can always check them, or look up pages like this.
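As a sketch of that “check them when you need them” idea (my own toy benchmark, not from either page): membership in a Python list is O(n) while a set probe is O(1), and a quick `timeit` recovers the size of the gap without memorising any figures.

```python
import timeit

# Toy micro-benchmark: list membership scans every element (O(n)),
# set membership is a single hash probe (O(1)).
n = 100_000
data_list = list(range(n))
data_set = set(data_list)
target = n - 1  # worst case for the list scan

list_time = timeit.timeit(lambda: target in data_list, number=100)
set_time = timeit.timeit(lambda: target in data_set, number=100)
# Expect the list scan to be orders of magnitude slower; the exact
# numbers vary by machine, which is rather the point.
```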
(The Python one is a very nicely put together page, though.)
Interesting insights into managing Go dependency updates, particularly in light of the recent Trivy issues.
First and most obvious, when we read to hit a goal rather than simply for pleasure, everybody reads as fast as possible to hike up their numbers.
I don’t find this to be true. If I wanted to bump my numbers, I would read short books. The goal encourages me to read rather than scrolling TikTok or consuming time in other less healthy ways.
More worrisome, when we read fast, we experience nothing. The book does not have a chance to burrow into our heart.
I don’t think this is true, either. I have read fast my entire life, and absolutely there have been books that burrowed into my heart.
If we’re gamifying our reading, we stop reading widely: we pick different versions of a story that we are guaranteed to like, and with that we lose a sense of well-roundedness, a sense of discovery and surprise.
Surprise, I disagree with this too. I read more widely now than I did in years past, and I track my reading now and didn’t in the past. I don’t believe there is any connection at all between these two things.
From What We Lose When We Gamify Reading - I’m not sure I would classify tracking reading and setting an annual goal as gamifying, either.
I don’t love Spotify, and I have issues with Wrapped, but this is an interesting insight into producing more than a billion AI-generated reports.
A bit like a Mainland ad but for AI (and more thoughtful).
Clearly a bit biased, but good advice on feature flags that goes beyond Posthog.
I used a feature flag system like the one described here for something like a decade before moving to LaunchDarkly, and it worked amazingly well. I’m not sure the move to LaunchDarkly actually paid off.
An interesting summary of expression-end detection across a bunch of programming languages, including Odin, which I had never heard of previously.