Hey Futurists,
So, I failed to predict the Israeli surprise attack on Iran, just as I failed to predict the surprise attack by Hamas on Israel. Well, I guess that's not too surprising, given the word "surprise" in the phrase "surprise attack." But it seems to me that when these conflicts burst out of nowhere, they don't do so in a vacuum -- they erupt in spots with longstanding unresolved tensions. I'll come back with signals to watch for, but for now, other "hotspots" to keep an eye on are Taiwan, the South China Sea, and India-Pakistan -- besides the obvious: Israel-Iran and the Middle East region. There's North Korea. In Africa, tensions are high in the Horn of Africa (Ethiopia, Somalia, Eritrea, Djibouti, North/South Sudan), the Sahel (West Africa), and the Democratic Republic of Congo (DRC). The Ukraine-Russia war has become a stalemate, but if anything changes there, there is a possibility of escalation.
Check out the Armed Conflict Location & Event Data (ACLED) website:
https://acleddata.com/conflict-watchlist-2025/
Oh wow, just in the last hour or two, "Trump attacks Iran" is in the headlines.
I usually talk about AI developments, because those are probably most significant for humanity's long-term future. So let's dive into that.
Artificial Intelligence
1. Interview of Sholto Douglas, an AI researcher at Anthropic.
The main two "takeaways" for me from this interview are:
1. The way to think about AI models is on 2 dimensions: one is "the absolute intellectual complexity of the task" and the other is "the amount of context" necessary to accomplish the task. So you can plot the capability of an AI on a 2-dimensional graph. He says current AI systems already exceed humans on most tasks on the first axis, but not on the second. But they are rapidly improving on both.
2. Most of what has been accomplished with large language models (LLMs) so far is due to the "pretraining" phase. (Remember, the "GPT" in "ChatGPT" stands for "generative pretrained transformer".) However, they have now gotten reinforcement learning to work really well in language models. This is resulting in massive, rapid improvements in the logical thinking ability of language models. This shift from pretraining to reinforcement learning is what is responsible for his prediction of super rapid improvement on both axes of intelligence as described above.
Towards the end of the interview, they talked about the labor market. He thinks we will have a "drop-in remote worker" for all white-collar jobs by 2027 or 2028. He says 5 years at the absolute longest. By "drop-in remote worker" he means you can unplug your human remote worker and "drop in" an AI as an exact replacement -- for literally anything a human can do remotely.
He is asked about the AI 2027 timeline. He says that not only he, and not only 90% of everyone at Anthropic, but 90% of people at Google DeepMind, OpenAI, etc, all believe the AI 2027 timeline is correct -- that we will have "drop-in remote workers" by the end of 2027.
For an explanation of what the "AI 2027" timeline is all about, see
2. At this time last year, Google was processing 9.7 trillion tokens a month across all their products and APIs. Today, that number is 480 trillion. That's a 50X increase in just a year.
In case you were wondering, yes, the AI revolution is upon us.
https://x.com/sundarpichai/status/1924909961253159318
3. "What if I told you the entire tech industry is rushing headlong towards a security catastrophe? Have you ever considered what happens when AI systems generate millions of lines of code that no human fully understands? Is faster code generation actually destroying the craft of elegant secure software development?"
"Now a report shows that 76% of AI generated code dependencies don't even exist. They're complete fabrications. Think about that the code powering your banking app, your medical records, your children's online safety, potentially built on hallucinated foundations."
"Now you know I tell my developers all the time the best commit you can make is ones where you *delete* code. That's right -- not adding more but taking away. And AI is pumping out billions of lines of code every day. But I can tell you with absolute certain certainty we need *less* code not more. Over 440,000 AI generated code samples contain fake package dependencies. That's not innovation -- that's digital pollution."
"These hallucinated dependencies aren't just annoying -- they're dangerous entry points for security breaches."
"Every single line of code you write is a potential security vulnerability, bug, and maintenance headache. When AI tools like Cursor AI boast about generating a billion lines of code daily, they're actually bragging about creating a billion potential problems. Modern software development isn't about quantity, it's about quality, clarity, and sustainability. Senior developers know that refactoring and simplifying often leads to the most robust solutions, and usually means removing code. I've seen projects with one-tenth the code base outperform bloated alternatives consistently throughout my career."
"Testing and verifying AI generated code takes longer than writing good code manually in a lot of cases. Security researchers are finding that LLM frequently output insecure code under experimental conditions. The explosion of code volume has created a tsunami of vulnerabilities that security teams simply cannot keep up with."
"The promise of 10x development speeds becomes meaningless when you spend 20 times the time debugging faulty code."
You get the idea. He goes on about how the industry rewards code *quantity* rather than code *quality* and *security*.
"AI is exceptional at generating boilerplate but it's terrible at understanding systemwide architecture and security."
Towards the end, he says AI can be used to improve security -- for example, by writing tests. I've mentioned to you all before that I like using AI systems to generate test code, and also asking them to do code reviews where they are explicitly asked to report any potential bugs or security issues.
My (further) commentary: I find his comments about fewer lines of code interesting. I wrote a complete remote procedure call (RPC) system in Go in about 4,000 lines of code. (I have an old version out on GitHub -- I should update it to the newest version.) The actual RPC code is about 1,400 lines of code, with an additional 1,100 or so for automated tests. There's a linter that's an additional 1,400 lines or so, but I consider the linter essential to using the system (it detects bugs that would otherwise be detected at runtime at compile time), so I'm counting it. So I'm using 4,000 as the total -- it's actually slightly under 4,000.
Just for giggles, I decided to see how many lines of code Google's gRPC system is. I get about 230,000 lines of code for the Go implementation, and it uses Google's protobuf (protocol buffers), whose Go implementation is about another 19,000 lines of code. So I made an RPC system that is about 1/64th the size of Google's.
My RPC system has built-in security -- in fact it's impossible to use without the encryption turned on -- though it's symmetric-key encryption (AES-256, with SHA-256 for keyed-hash message authentication codes), not public-key encryption. gRPC uses TLS (the same encryption system used by HTTPS in your browser). So, I guess in that respect, it's not a 1-to-1 comparison. But I'm not actually aware at this moment of anything else gRPC can do that my RPC system can't. It can send messages with any data structure you can represent on a computer, it can send streams of messages rather than just one at a time, and it can send messages of unlimited length -- the file synchronization program I built on top of the RPC system uses this to send files of any size, no matter how big. If you deem public-key encryption necessary and symmetric-key encryption inadequate, that's the only shortcoming I know of. My system can handle everything else, and with 1/64th the code size, I have a lot of confidence that, if you set up the keys correctly, the system will be secure. I'll admit that in the file sync program, setting up the keys is more complicated than I would like and involves various manual steps.
Anyway, it's a bummer to have created one of the best RPC systems in the world and not get any recognition for it, but hey, over the history of the tech industry, lots of great inventions have been overlooked because the industry consolidated into a handful of big "winners" and everyone uses the tech provided by the "winners". Google and gRPC basically won the RPC space -- anyone who goes beyond text-based RPC (JSON, REST, XML, etc.) and uses actual statically typed, strongly typed (non-buggy) RPC calls uses gRPC these days.
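To illustrate the keyed-hash message authentication part of a symmetric-key design like this, here is a minimal Python sketch of HMAC-SHA256 signing and verification. This is not the actual code from my RPC system (and the AES-256 encryption layer is omitted, since it needs a third-party crypto library); it just shows the standard-library primitive such a scheme is built on:

```python
import hmac
import hashlib

def sign(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message using a shared secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time to avoid timing leaks."""
    return hmac.compare_digest(sign(key, message), tag)

# Both ends of the connection hold the same symmetric key.
key = b"shared-secret-key-32-bytes-long!"
msg = b'{"method": "SyncFile", "params": {"path": "notes.txt"}}'

tag = sign(key, msg)
assert verify(key, msg, tag)             # untampered message verifies
assert not verify(key, msg + b"x", tag)  # any modification fails verification
```

In an encrypt-then-MAC design, the tag would be computed over the AES-256 ciphertext, and the receiver would verify it before attempting decryption.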
Anyway, what he says about writing smaller amounts of very high quality code resonates with me. But for work, I'm pressured to rush code out very fast. I'm also expected to never make mistakes. And now that AI coding is here, I'm supposed to use AI tools to increase productivity 5x-10x. 3x is considered a reasonable minimum.
There's one other thing he says that I suppose I should mention.
"AI is exceptional at generating boilerplate but it's terrible at understanding systemwide architecture and security."
"Systemwide architecture" -- If you saw the video with the interview of Sholto Douglas, an AI researcher at Anthropic, then you know, they are working very hard at massively increasing the "context" AI models can handle, which should give them ability "at understanding systemwide architecture."
4. "Want to be a music/sound pro in 2025? Think twice."
"Folia Soundstudio" guy says he is a professional music maker and sound maker for both TV, games, TV ads, and some other artists.
After seeing demos of Google's Veo 3, he says:
"Potentially, you don't need a dialogue editor. You don't need a film crew. First of all, you're not going to need directors, actors, makeup artists, prop artists, costume designers, producers, pre-producers, post-producers, camera crews, lighting crews, special effects crew. Nothing is needed anymore. The second thing, you're not going to need to do any audio post. The audio post has been there for you. Dialogues are perfect. Sound design is really very good. Music across the whole scene is in tempo, in key, corresponding to the image. And I still thought we were going to have a couple of years before this happens. It happened just two days ago."
"Think twice before you become a pro."
From the comments:
"All art is done. Not just music. Coding is next. The only thing left will be manual labour. Congrats to us humans, we have successfully finished digging our grave."
Well, it's not "game over" yet. We're just at the point where it's obvious "game over" is coming and is inevitable. I don't know how long it will take, but machines will eventually take over all labor. It looks like this commentator is correct that manual labor will be last.
If you want an AI-proof job, could I recommend cleaning hotel rooms? No robotic system is anywhere close to being able to understand and know what to do with all possible weird objects humans can leave in hotel rooms.
Not looking too good for music/sound pros.
5. "To determine the best model to use on the backend of whereisthisphoto.com, I analysed the performance of various OpenAI models at identifying photos taken all over the world."
In related news, a website called whereisthisphoto.com exists that purports to be able to tell you where in the world any photo was taken. Somebody got inspired by people using AI to play "Geoguessr" and decided to automate it.
Note that they tested only models from OpenAI, no models from other companies.
"The dataset was built from a combination of personal travel photos I have taken and photos downloaded from the r/whereintheworld subreddit. All photos had the metadata removed to ensure the models were solely performing image analysis on the photos."
Wait, the AI wasn't trained on images from r/whereintheworld?
https://www.whereisthisphoto.com/blog/openai-model-image-analysis
6. FastList is ... Craigslist but for AI agents?
"FastList is an agentic, next-generation classified site. Post, browse, and interact with listings using your favorite AI assistant or agent. Built on the Model Context Protocol (MCP), FastList is designed for seamless integration with LLMs and agentic workflows."
"Take a photo in your LLM or agent, provide your zip code, and post directly to FastList. Share what you have, sell, or give away -- right from your chat or AI interface. FastList makes classified listings as easy as sending a message."
Is this how you want to buy and sell things? Have AI agents buy and sell things on your behalf?
7. AI-powered greeting cards. I'm surprised it's taken this long for me to see this.
8. Singularitysure is "AI income protection insurance" from Singularity, a Y Combinator startup.
They say they "provide rapid income payouts if AI-driven automation impacts your occupation."
"Built on a transparent, artificial intelligence displacement rate index." "We track live labor trends, hiring signals, and AI-capability shifts to deliver a real-time Displacement Rate Index -- so you know exactly when automation risk rises and when your coverage will pay."
Literally all workers will need to sign up for this. Wonder how they're going to choose who to accept and who to reject for their insurance plans.
9. Allegedly, if you prompt Claude 4 with something egregiously immoral, it may contact police, or contact the press to expose you, or contact government regulators, or contact sysadmins of systems you use to try to lock you out. Claude 4 has "unlimited access to tools" including the ability to send email, which gives it the ability to "whistleblow". This commentator on YouTube is in India and points out that in many parts of the world, police are not trustworthy.
10. Marina Karlova says AI is better than psychotherapists because AI doesn't play power games, try to convince you of its ideology or bias, or try to create dependency for continuing payment.
Matrix Computations
11. "DumPy: NumPy except it's OK if you're dum".
What "dynomight" wants from an array language is: "Don't make me think," "run fast on GPUs," "really, do not make me think," and "do not."
"I say NumPy misses on three of these. So I'd like to propose a 'fix' that -- I claim -- eliminates 90% of unnecessary thinking, with no loss of power. It would also fix all the things based on NumPy, for example every machine learning library."
"I know that sounds grandiose. Quite possibly you're thinking that good-old dynomight has finally lost it. So I warn you now: My solution is utterly non-clever. If anything is clever here, it's my single-minded rejection of cleverness."
"To motivate the fix, let me give my story for how NumPy went wrong. It started as a nice little library for array operations and linear algebra. When everything has two or fewer dimensions, it's great. But at some point, someone showed up with some higher-dimensional arrays. If loops were fast in Python, NumPy would have said, 'Hello person with greater than or equal to 3 dimensions, please call my less than or equal to 2 dimensional functions in a loop so I can stay nice and simple, xox, NumPy.'"
"But since loops are slow, NumPy instead took all the complexity that would usually be addressed with loops and pushed it down into individual functions. I think this was a disaster, because every time you see some function call like np.func(A,B), you have to think:"
"OK, what shapes do all those arrays have?"
"And what does np.func do when it sees those shapes?"
"Different functions have different rules. Sometimes they're bewildering. This means constantly thinking and constantly moving dimensions around to appease the whims of particular functions. It's the functions that should be appeasing your whims!"
"Here's my extremely non-clever idea: Let's just admit that loops were better. In high dimensions, no one has yet come up with a notation that beats loops and indices. So, let's do this:"
"Bring back the syntax of loops and indices."
"But don't actually execute the loops. Just take the syntax and secretly compile it into vectorized operations."
"Also, let's get rid of all the insanity that's been added to NumPy because loops were slow."
He or she (whoever "dynomight" is) proceeds to implement "dumpy" such that you replace "for ... range" statements with "with ... dp.Range" statements ("dp" from "import dumpy as dp") that don't actually run loops. They just look like they run loops; in reality, they record index information inside dumpy, which then uses JAX's vectorization capability to vectorize the computation just as you would have done manually in NumPy. (JAX is Google's Python library combining automatic differentiation with the XLA compiler; "XLA" stands for "accelerated linear algebra," and JAX's "jit" function does "just in time" compilation.) Is that clever?
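A toy pure-Python sketch may make the idea concrete. The names here are made up, not dynomight's actual dumpy API, and the real library hands the recorded computation to JAX's vmap/jit so it runs as one fused vectorized operation rather than a Python loop -- but the core trick of "loop-like syntax that doesn't actually iterate" looks roughly like this:

```python
import itertools

class Range:
    """Stands in for a loop index; remembers its size instead of iterating."""
    def __init__(self, n):
        self.n = n

def vectorize(body, *ranges):
    """Run `body` for every combination of indices, conceptually all at once.

    A real implementation (like dumpy) would compile `body` with JAX's
    vmap/jit instead of using this Python-level comprehension.
    """
    return [body(*idx)
            for idx in itertools.product(*(range(r.n) for r in ranges))]

# "Loop" over a 2x3 problem without writing nested for-loops:
A = [[1, 2, 3], [4, 5, 6]]
B = [[10, 20, 30], [40, 50, 60]]
i, j = Range(2), Range(3)
C = vectorize(lambda i, j: A[i][j] + B[i][j], i, j)
print(C)  # [11, 22, 33, 44, 55, 66]
```

The point is that the user writes index-level code (`A[i][j] + B[i][j]`) with no broadcasting rules to memorize, and the library decides how to execute it efficiently.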
Construction
12. "InventWood is about to mass-produce wood that's stronger than steel."
In 2018, Liangbing Hu, a materials scientist at the University of Maryland, devised a way to turn ordinary wood into a material stronger than steel.
"'All these people came to him,' said Alex Lau, CEO of InventWood, 'He's like, OK, this is amazing, but I'm a university professor. I don't know quite what to do about it.'"
"Rather than give up, Hu spent the next few years refining the technology, reducing the time it took to make the material from more than a week to a few hours. Soon, it was ready to commercialize, and he licensed the technology to InventWood."
"InventWood's Superwood product starts with regular timber, which is mostly composed of two compounds, cellulose and lignin. The goal is to strengthen the cellulose already present in the wood. 'The cellulose nanocrystal is actually stronger than a carbon fiber,' Lau said."
"The company treats it with 'food industry' chemicals to modify the molecular structure of the wood, he said, and then compresses the result to increase the hydrogen bonds between cellulose molecules."
Apparently the "food industry chemicals" are sodium hydroxide (NaOH) and sodium sulfite (Na2SO3). Sodium hydroxide is a common ingredient in cleaners and soaps and sodium sulfite is used as an antioxidant and preservative.
The trick is to collapse the cell walls -- "densify" the wood -- in such a way that the cellulose nanofibres are highly aligned.