Global Electricity Review 2025

Alternatively, see Hank Green’s short overview/reading of the article here. In short: solar power ftw; almost half of newly added electricity generation comes from renewables.

A Song for Two Voices

This is a ratfic based on the “Heralds of Valdemar” series by Mercedes Lackey. I have not read canon; this was a recommendation from someone I met at EAG, and I absolutely loved it. All great ratfics tend towards AI, but this one took its sweet time and covered a varied range of topics, wending its way through mental health, soul-deep love, statecraft, purpose, parenthood, and more, to now hold a very special place in my mind. It was longer than I expected (I misread the word count as one digit shorter than it actually was), but all the sweeter for its length. It was fun to see how much time the story spanned; many ratfics feel accelerated compared to canon or more standard stories, so it was good to see one that was not. I’m a little at a loss as to where to go from here, but I might take a break from ratfics/reading on my phone for a while and pick up one of the American classics I bought while in the US, like All the King’s Men. (I started The Hero with a Thousand Faces but got really put off by its strong Freudian leaning.) Anyways, nihil supernum.

How do we solve the alignment problem?, Joe Carlsmith

Carlsmith’s recent series of essays tries to answer its titular question. Some important things I’ve taken from it are the maps he draws - the ways he subdivides the problem and the potential solution attempts. It has made me take the empirical approaches to alignment more seriously, because what we need to do is not align a superintelligence, but align the first nascent intelligence strong enough to help us align the next one. That might be simpler in important ways. This is a lesson echoed by e.g. the new AI Control agenda. Presupposing that the object system is already superintelligent no longer seems the most fruitful or plausible framing to me.

Intro to Brain-Like AGI Safety, by Steven Byrnes

I finally sat down and read Steve’s theory of Brain-like AGI Safety (partly in preparation for a 1-1 with him at EAG earlier this spring), and found it an interesting read. The things he says about AGI safety were mostly known to me, but I appreciated the framing. The things he says about brain-like AGI were cool, and the theory of human short- and long-scale learning was interesting, though I don’t have the neuroscience chops to actually evaluate it.

More Steven Byrnes

I’ve been enjoying reading more of Steven’s work, like his comments on recent SMTM work on seeing humans as control (feedback) systems, and his opinionated review of “sharp left turn” discourse, on making analogies from evolution to the development of TAI.

ChatGPT and the environment, by Andy Masley

A short article covering all your information needs about personal LLM usage’s impact on the environment. Spoiler: it’s negligible. (Industrial-scale development, training, and usage is another matter, but not what’s being discussed here. Long-term risks from AI are also another matter, but likewise not what’s being discussed here.)

How to Save 400 000 babies a year

A Vox article by Dylan Matthews on using advance market commitments to raise money for a new cure for neonatal sepsis. The kicker: they’re not relying on Western (read: American) money to do this, but going straight to middle-income countries like Kenya, India, and South Africa to raise the money, because they don’t need that much. They’re only asking for ~$120 million, which, for governments the size of the ones mentioned, is not that much. Let’s go AMCs; I hope this project gets off the ground.

Why We Think, Lilian Weng

Lilian Weng presents an overview of reasoning model development. Relatively short and succinct; it felt basic, but maybe reasoning models are just (marginally) basic.

The “The Daily Show” Shrimp video

Just watch it.

On thinking in the limit, by Matt Reardon

When we imagine the future, we often think of it as an equilibrium state: it is very unlikely to be one. And actually imagining equilibrium states - no more scientific or technological development, etc. - means that we quickly run into the limits of what is physically possible. The kicker is that this kind of thinking is hard, and that properly propagating effects quickly leads to futures wildly different from what we might naively expect.

“You’re being lied to about Protein”, from Vox

I got a Vox subscription just to read this article. It’s partly what it says on the tin - expert guidance on recommended protein intake - and partly vegan/vegetarian legume propaganda (affectionately).

The Indifference Engine

This short essay has been passed around a bit, and it’s as good as they say. Read read read. The future is here, and it’s immensely sci-fi and immensely normal.

The Science of Woo

Not sure the article’s title is the best description - this is, in particular, about how modern meditation practices and their menagerie of oriental terms can be understood with current science, and about the limits of our understanding - where deep practice begins.

It’s not the Incentives, It’s You

Yarkoni talks about the incentives in academia: how other professions seem to manage to maintain an actual code of conduct - read: avoiding the rat race - in the face of selfish incentives, while academia is the only one that a) has a real problem(?) with large parts of its population “just following the incentives”, and b) has another large population complaining about it loudly all the time. He doesn’t really go into what the causes might be, though. Maybe it connects to Scott Alexander’s theory of speaking mental illness into existence.

AI Village, by AI Digest

Four LLMs set up with computer use and given the task of raising money for a charitable cause of their choosing. Still ongoing.

Terence Tao vibe-coding a proof in Lean

Self-explanatory. Cool to see him play around with it. The future is here, etc. etc.
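
For flavour, here is a toy sketch of what small Lean 4 proofs look like (my own illustrative snippet, assuming plain Lean 4 without Mathlib; it is not anything from the video), just to give a sense of the kind of formal statements one might vibe-code with an assistant:

```lean
-- Illustrative only: tiny Lean 4 statements of the sort one might ask an AI assistant to help prove.
-- Commutativity of addition on the naturals, closed by a lemma from the core library.
example (m n : Nat) : m + n = n + m := Nat.add_comm m n

-- Small decidable facts can be discharged automatically.
example : 2 + 2 = 4 := by decide
```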