Hey Futurists,
I suppose you all are wondering what my take is on the "DeepSeek" situation, the stock market drop, and whether China has taken the lead in AI? What? Nobody cares? Well, I'm going to give it to you anyway.
I was actually genuinely surprised the stock market went down, especially for Nvidia, because from the beginning, I figured the system was built on "foundation models" that cost hundreds of millions to make, and the $5.6 million figure just referred to something they did as a last step. In the technical report that they released, it looks like they used DeepSeek-V3 as the foundation model, and they also did various "distillation" experiments with various other models. It doesn't seem to me that they released a model competitive with, say, OpenAI's o3 that was built from scratch for a tiny fraction of what OpenAI spent on the actual o3. The stock market selloff seems to have been driven by people who don't understand that. The stock price of Nvidia does seem to have rebounded a bit and stabilized. It could be that Nvidia was overhyped and due for a more realistic price "correction", and DeepSeek is just what prompted it. Or not -- who can read the mind of the market?
Having said that, it is my general feeling that China is more or less on par with the US on language models. I've been using DeepSeek for a long time -- since December of 2023 -- as well as ChatGLM, another Chinese model, and Qwen more recently. Their output seems to be on par with US models like ChatGPT, Gemini, and Claude. So if anyone tells you China is 6 months behind, or 12 months, or 18 months, that's not true. But they haven't leapt ahead, either. I'm speaking subjectively here. Your mileage may vary, as the expression goes.
AI + Geopolitics
1. Do the models DeepSeek released undermine the case for export control policies on chips? Dario Amodei, CEO of Anthropic, which makes the Claude models, doesn't think so. If anything, he thinks they make export control policies more important than they were a week ago.
"Export controls serve a vital purpose: keeping democratic nations at the forefront of AI development. To be clear, they're not a way to duck the competition between the US and China. In the end, AI companies in the US and other democracies must have better models than those in China if we want to prevail. But we shouldn't hand the Chinese Communist Party technological advantages when we don't have to."
"Before I make my policy argument, I'm going to describe three basic dynamics of AI systems that it's crucial to understand:"
The "three basic dynamics" are: Scaling laws ("all else equal, scaling up the training of AI systems leads to smoothly better results on a range of cognitive tasks, across the board"), shifting the curve ("if the innovation is a 2x 'compute multiplier' (CM), then it allows you to get 40% on a coding task for $5M instead of $10M; or 60% for $50M instead of $100M, etc."), and shifting the paradigm ("the idea of using reinforcement learning to train models to generate chains of thought has become a new focus of scaling").
"The three dynamics above can help us understand DeepSeek's recent releases. About a month ago, DeepSeek released a model called 'DeepSeek-V3' that was a pure pretrained model -- the first stage described in #3 above. Then last week, they released 'R1', which added a second stage."
"DeepSeek-V3 was actually the real innovation and what should have made people take notice a month ago (we certainly did). As a pretrained model, it appears to come close to the performance of state of the art US models on some important tasks, while costing substantially less to train (although, we find that Claude 3.5 Sonnet in particular remains much better on some other key tasks, such as real-world coding). DeepSeek's team did this via some genuine and impressive innovations, mostly focused on engineering efficiency. There were particularly innovative improvements in the management of an aspect called the 'Key-Value cache', and in enabling a method called 'mixture of experts' to be pushed further than it had before."
However...
"DeepSeek does not 'do for $6M5 what cost US AI companies billions'. I can only speak for Anthropic, but Claude 3.5 Sonnet is a mid-sized model that cost a few $10M's to train." "Sonnet's training was conducted 9-12 months ago, and DeepSeek's model was trained in November/December, while Sonnet remains notably ahead in many internal and external evals. Thus, I think a fair statement is 'DeepSeek produced a model close to the performance of US models 7-10 months older, for a good deal less cost (but not anywhere near the ratios people have suggested)'."
"If the historical trend of the cost curve decrease is ~4x per year, that means that in the ordinary course of business -- in the normal trends of historical cost decreases like those that happened in 2023 and 2024 -- we'd expect a model 3-4x cheaper than 3.5 Sonnet/GPT-4o around now. Since DeepSeek-V3 is worse than those US frontier models -- let's say by ~2x on the scaling curve, which I think is quite generous to DeepSeek-V3 -- that means it would be totally normal, totally 'on trend', if DeepSeek-V3 training cost ~8x less than the current US models developed a year ago."
"Both DeepSeek and US AI companies have much more money and many more chips than they used to train their headline models. The extra chips are used for R&D to develop the ideas behind the model, and sometimes to train larger models that are not yet ready (or that needed more than one try to get right). It's been reported -- we can't be certain it is true -- that DeepSeek actually had 50,000 Hopper generation chips, which I'd guess is within a factor ~2-3x of what the major US AI companies have (for example, it's 2-3x less than the xAI 'Colossus' cluster)7. Those 50,000 Hopper chips cost on the order of ~$1B. Thus, DeepSeek's total spend as a company (as distinct from spend to train an individual model) is not vastly different from US AI labs."
"R1, which is the model that was released last week and which triggered an explosion of public attention (including a ~17% decrease in Nvidia's stock price), is much less interesting from an innovation or engineering perspective than V3. It adds the second phase of training -- reinforcement learning, described in #3 in the previous section -- and essentially replicates what OpenAI has done with o1 (they appear to be at similar scale with similar results). However, because we are on the early part of the scaling curve, it's possible for several companies to produce models of this type, as long as they're starting from a strong pretrained model. Producing R1 given V3 was probably very cheap. We're therefore at an interesting 'crossover point', where it is temporarily the case that several companies can produce good reasoning models. This will rapidly cease to be true as everyone moves further up the scaling curve on these models."
"Making AI that is smarter than almost all humans at almost all things will require millions of chips, tens of billions of dollars (at least), and is most likely to happen in 2026-2027. DeepSeek's releases don't change this, because they're roughly on the expected cost reduction curve that has always been factored into these calculations."
"This means that in 2026-2027 we could end up in one of two starkly different worlds. In the US, multiple companies will definitely have the required millions of chips (at the cost of tens of billions of dollars). The question is whether China will also be able to get millions of chips."
https://darioamodei.com/on-deepseek-and-export-controls
2. Lei (of Lei's Real Talk YouTube channel) looks at what AI professionals and the Chinese media inside China are saying about DeepSeek.
AI professionals are very skeptical. However, the mainland Chinese media is in full hype mode.
Videos of Chinese AI professionals expressing skepticism of the claims were removed from Chinese social media.
She (Lei) goes on to speculate about whether DeepSeek has GPUs obtained in violation of sanctions and whether the Chinese government (CCP) may have been involved.
This person, Lei, is a member of the Falun Gong. The Falun Gong is extremely critical of the Chinese Communist Party (CCP). She talks about her Falun Gong membership in this video:
For an outsider's perspective on Falun Gong (a cult), see:
Lei made a follow-up video where she comments more on DeepSeek's GPUs and examines DeepSeek's complex ownership structure. She speculates that the complex ownership structure may exist to hide owners the company doesn't want made public. The company's headquarters in Hangzhou could be just a shell company, with the real company in Beijing.
She ends with a rumor that the Chinese are working on language processing units (LPUs), hoping to undercut Nvidia's market dominance by flooding the market with cheap AI hardware (and make the sanctions irrelevant in the process).
3. Alexandr Wang, CEO of Scale AI, says:
"Contrary to some lazy takes I've seen, DeepSeek R1 was trained on a shit ton of human-generated data -- in fact, the DeepSeek models are setting records for the disclosed amount of post-training data for open-source models."
"The reasoning dataset is actually quite large -- 600k reasoning samples is a LOT."
https://x.com/alexandr_wang/status/1884440764677251515
4. Open-R1 is a new project that aims to replicate DeepSeek-R1 in a fully open source manner. If you're thinking, but wait, isn't DeepSeek-R1 already open source? Not exactly. What's open is the model parameters (also called the model weights); DeepSeek did not release the complete source code and training data used to create R1. DeepSeek-R1 was released with a detailed "technical report" that explained the key steps behind its creation. But was it enough information for others to replicate the process? That's what we're going to find out. Among the open questions the project lists:
"Data collection: How were the reasoning-specific datasets curated?"
"Model training: No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales."
"Scaling laws: What are the compute and data trade-offs in training reasoning models?"
"DeepSeek-R1 is a reasoning model built on the foundation of DeepSeek-V3. Like any good reasoning model, it starts with a strong base model, and DeepSeek-V3 is exactly that. This 671B Mixture of Experts (MoE) model performs on par with heavyweights like Sonnet 3.5 and GPT-4o. What's especially impressive is how cost-efficient it was to train -- just $5.5M -- thanks to architectural changes like Multi Token Prediction (MTP), Multi-Head Latent Attention (MLA) and a LOT (seriously, a lot) of hardware optimization."
"DeepSeek also introduced two models: DeepSeek-R1-Zero and DeepSeek-R1, each with a distinct training approach. DeepSeek-R1-Zero skipped supervised fine-tuning altogether and relied entirely on reinforcement learning (RL), using Group Relative Policy Optimization (GRPO) to make the process more efficient. A simple reward system was used to guide the model, providing feedback based on the accuracy and structure of its answers."
"DeepSeek-R1 started with a 'cold start' phase, fine-tuning on a small set of carefully crafted examples to improve clarity and readability. From there, it went through more RL and refinement steps, including rejecting low-quality outputs with both human preference based and verifiable reward, to create a model that not only reasons well but also produces polished and consistent answers."
"Here's our plan of attack:"
"Step 1: Replicate the R1-Distill models by distilling a high-quality reasoning dataset from DeepSeek-R1."
"Step 2: Replicate the pure RL pipeline that DeepSeek used to create R1-Zero. This will involve curating new, large-scale datasets for math, reasoning, and code."
"Step 3: Show we can go from base model -> SFT -> RL via multi-stage training."
Note that the people involved in this project (Elie Bakouch, Leandro von Werra, and Lewis Tunstall) work for Hugging Face itself -- not some company that publishes models on Hugging Face.
https://huggingface.co/blog/open-r1
Geopolitics
5. Russian programmers were banned from contributing to the Linux kernel by the Linux Foundation a few months ago, and the reason was in fact US sanctions. The Linux Foundation finally made a statement on the issue.
Everyone who manages an open-source project and lives in the US or is otherwise subject to US sanctions controls has to pay attention to this issue.
"With the US and international sanctions targeting technology companies based in Russia, this issue has become a topic in certain open source communities that have participation from entities targeted by such sanctions."
"The OFAC sanctions rules are 'strict liability', which means it does not matter whether you know about them or not. Violating these rules can lead to serious penalties, so it's important to understand how they might affect your open source work. Many OFAC sanctions restrictions generally do not care if software or technology is public or published (although US export controls generally do) and are usually completely separate and independent of any Export Administration Regulations (EARs), which the LF has published guidance about In the past. It is important to note that the OFAC SDN List for sanctions programs is very different from the BIS's Entity List for Export Controls. Entities on the BIS's Entity List are not affected by the OFAC sanctions unless they are also added by OFAC to the SDN List."
"OFAC sanctions are US government regulations that restrict or prohibit transactions with certain countries, entities, and individuals ('sanctions targets')."
"For developers, this means you need to be cautious about who you interact with and where your contributions come from. OFAC sanctions can target specific countries, regions, and individuals or organizations, with many of the individuals and organizations listed on the Specially Designated Nationals and Blocked Persons ('SDN') List. OFAC updates this list regularly, adding or removing names as global situations change. OFAC sanctions also apply to all entities that are owned 50% or more directly or indirectly by one or more SDN individuals or entities, whether or not the owned entity shows up on the SDN List. Some entities, such as entities owned or controlled by OFAC-sanctioned governments, may also be sanctioned but not included on the SDN List."
"Many other countries also have similar sanctions programs in place, including the European Union, United Kingdom, Japan, Australia, Switzerland, China, and many more."
"It is disappointing that the open source community cannot operate independently of international sanctions programs, but these sanctions are the law of each country and are not optional."
"It is important to understand the basics of these sanctions programs. OFAC's prohibited transactions often include: Financial investments and transactions (e.g. payment for services), trade (import or export) of goods, technology, or services, and any other property transactions (including intellectual property and contracts)."
"The OFAC sanctions may impact common community behaviors, such as two-way collaboration on a proposed software change which could be interpreted as a prohibited provision of services. It would be an issue for developers to provide services to a developer who is employed (directly or indirectly) by an SDN or an entity directly or indirectly owned 50% or more by an SDN."
"OFAC does publish an SDN List, and also offers an OFAC SDN list search tool. Using the search tool, a user can check if an organization is on the OFAC SDN List. OFAC SDN entities show up with 'SDN' under the column 'List'. Please note this tool also searches other 'Non-SDN Lists' that is not affected by the prohibited 'transaction' outlined in this blog and all search hits are not necessarily for the OFAC SDN List."
"The OFAC SDN list and search tool are also not exhaustive and any analysis cannot solely rely on this list."
"Just because an individual or entity is not on the SDN List today does not mean they, or their owner, will not be added tomorrow. In some weeks, OFAC may add hundreds of individuals or entities to the list. Finally, just because an individual or entity is not subject to US OFAC sanctions does not mean that another country has not sanctioned the individual or entity."
"Most OFAC sanctions contain an exemption for the importation and exportations of 'informational materials.' Open source code appears to generally be considered 'informational material' by OFAC, and a one-way receipt of source code via an SDN therefore should be exempt from OFAC sanctions. However, this would only apply to existing code sent by an SDN, and it would not apply if you requested a developer working for an SDN to create new code or to modify code."
"Many open source projects require a Contributor License Agreement (CLA), which would bar contributors from OFAC sanctioned entities. OFAC sanctions bar transacting in intellectual property, and specifically, any agreement in intellectual property or any other contract with an SDN. If your open source project requires a CLA, make sure your CLA validation process includes compliance checks against the SDN List."
We're probably going to see a Russian-Chinese Linux coming out this year that is made by Russian and Chinese programmers (and other "BRICS" programmers) that US programmers don't contribute to. So Linux is going to fork into US/European Linux and Russian/Chinese Linux.
https://www.linuxfoundation.org/blog/navigating-global-regulations-and-open-source-us-ofac-sanctions
ATMs getting blown up in Germany
6. "In Germany, the number of ATMs blown up fell slightly in 2023 -- to 461 cases, according to the BKA. Solid explosives were almost always used, which caused great damage."
Um. What? ATMs are getting blown up in Germany? And 461 ATMs blown up in a year (2023) represents a *decrease*? I had never heard of this. This is from an article written on August 29, 2024.
Way back in the late 90s, I went to a computer security talk where a security researcher told of a time when criminals put on hard hats and construction uniforms and used a construction machine to scoop up an ATM. His point was that ATMs were robbed in many ways other than breaking the encryption between the machine and the bank. People discovered tricks like putting in ATM cards with test codes on the magnetic strip and punching in special test codes that would get the machine to, for example, pop out 10 bills of the lowest denomination. But nobody ever broke the encryption. People focus a lot of effort on the encryption algorithms, but if the *rest* of the system isn't also secure, it doesn't matter how good the encryption algorithm is -- people will still be able to break the system. I was under the impression that scooping up the machine with construction equipment was a one-time event. Maybe it was, but apparently in Germany, simply blowing up ATMs with explosives is a regular thing?
"Bank robbers often come at night and let it rip. As the latest figures from the Federal Criminal Police Office (BKA -- for "Bundeskriminalamtes") show, ATMs remain a popular target for bombing and robbery. In 2023, a total of 461 ATMs were blown up. After the record year of 2022 with 496 attempted and completed explosions, the number of cases fell by 7.1 percent. This is evident from the BKA's 2023 Federal Situation Report. One reason for the decline: banks and savings banks have been taking more active steps to combat the problem for some time now. They rely on secure devices or lock their branches at night. Not least because the explosions repeatedly endangered residents and neighboring houses."
"As the BKA further explained, the amount of cash stolen by perpetrators was also somewhat lower last year. Compared to the previous year, it fell by 5.1 percent to 28.4 million euros. However, the sum remains 'at a comparably high level,' the authority said. The reason is the 'high proportion of cases' in which perpetrators obtained cash after a successful explosion. This was achieved in a total of 276 crimes."
"According to official statistics, solid explosives with high detonation energy were used in 87 percent of all explosions. According to the BKA, pyrotechnic devices are used in particular, but military explosives and, in rare cases, homemade explosive devices are also increasingly used. This approach caused 'significant damage' and exposed emergency personnel and bystanders to 'great danger,' the BKA explained. In contrast, it is becoming increasingly rare for a gas or gas mixture to be introduced into the ATM and then ignited. This could also be due to the fact that the failure rate is significantly higher when using gas mixtures."
"The suspects' propensity to violence remains high, according to the BKA. Last year, fatal traffic accidents were associated with "risky escape behavior" for the first time."
"According to the BKA, the police managed to identify more suspects last year. The number rose by 57 percent to 201 compared to 2022. Almost 90 percent of them traveled from abroad to commit the crime. 160 of the suspects identified had their main residence in the Netherlands -- the vast majority. Many perpetrators belong to professionally organized gangs."
So, blame the Netherlands. Alrighty then.
One possibility for banks to improve the technical security of their ATMs "is systems that automatically color banknotes in the event of an explosion, thus making them unusable for the perpetrators."
"In July, the federal government also decided to take action. In future, anyone who blows up an ATM will be punished with a prison sentence of at least two years."
Coming from a US perspective, two years doesn't seem like much.
"The draft law presented at that time also provides for changes to the Explosives Act."
Whatever that is. ("Das Sprengstoffgesetz".)
Link goes to an article in German. Translation by Google Translate.
https://www.tagesschau.de/inland/gesellschaft/bka-geldautomaten-sprengungen-100.html
"The Explosives Act (Law on Explosive Substances) regulates the civil handling and trade of, as well as the import and transit of, explosive substances and explosive accessories in Germany. It is the most important legal source of German explosives law."
https://de.wikipedia.org/wiki/Sprengstoffgesetz_(Deutschland)
AI + Domestic Politics
7. "Do OpenAI's new reasoning models (o1 series) differ politically from their predecessors?"
Nope.
8. There is a plot afoot to sabotage LLMs.
"Here is a curated list of strategies, offensive methods, and tactics for (algorithmic) sabotage, disruption, and deliberate poisoning."
What follows is a list of tools that generate garbage pages for LLM scrapers to ingest, with robots.txt used to steer the scrapers toward the garbage.
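To make the trick concrete, here's a minimal, hypothetical sketch of the kind of Markov-chain "babble" generator these tools are built around (the seed file name is a placeholder; the listed tools are each more elaborate):

```python
# Generate endless plausible-looking gibberish from a seed text by
# chaining words that followed each other in the seed -- cheap to serve,
# worthless to train on.
import random

def build_chain(text, order=2):
    words = text.split()
    chain = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain.setdefault(key, []).append(words[i + order])
    return chain

def babble(chain, n_words=200, order=2):
    out = list(random.choice(list(chain.keys())))
    for _ in range(n_words):
        next_words = chain.get(tuple(out[-order:]))
        if not next_words:   # dead end: jump to a random state
            out.extend(random.choice(list(chain.keys())))
            continue
        out.append(random.choice(next_words))
    return " ".join(out)

seed = open("any_public_domain_text.txt").read()   # placeholder seed corpus
print(babble(build_chain(seed)))
```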
https://tldr.nettime.org/@asrg/113867412641585520
Domestic Politics
9. neveragain.tech: "We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration's proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs."
Signed by 2,842 people, from many companies, universities, organizations I've heard of, including Apple, Google, Facebook, Nvidia, Dell, Adobe, Yahoo, GitHub, GitLab, Intuit, Airbnb, Slack, IBM, Red Hat, MITRE, IEEE, MIT, GE, Oracle, Fastly, Docker, Uber, Lyft, Medium, Meetup, Stripe, Xilinx, Dropbox, VMWare, MongoDB, Akamai, Heroku, Autodesk, LinkedIn, Palantir, Synopsys, Accenture, Pivotal, Intel, Atlassian, Canonical, Instacart, Microsoft, Rackspace, Automattic, Change.org, Cloudflare, Foursquare, Home Depot, Salesforce, Squarespace, Khan Academy, Walmart Labs, Charles Schwab, Northrop Grumman, Tableau Software, the Lifeboat Foundation, the Wikimedia Foundation, Booz Allen Hamilton, Cornell University, Harvard University, the Wharton School, Brandeis University, Stanford University, the University of Delhi, the University of Maryland, Oregon State University, the University of Washington, the University of Pennsylvania, National Institutes of Health, Illinois Institute of Technology, SUNY, Carnegie Mellon University (CMU), and others.
"We are no longer publishing new signatures to the pledge on this website, but you can still support our movement."
AI
10. "Modern continuous integration (CI) systems have become incredibly efficient at detecting issues before they land in production. Yet most pipelines still rely on manual triage when something goes wrong. Teams lose hours or even days reverting bad commits and fixing failing tests. Fortunately, with today's AI coding agents, we can now build self-healing CI pipelines capable of automatically diagnosing failures and proposing (or even implementing) fixes -- all before blocking the rest of your team."
"The core self-healing workflow:"
"1. Failure Detected: The pipeline collects logs, stack traces, and any relevant code diffs."
"2. AI Analysis: The system prompts an AI agent -- such as a GPT-based coding model -- to analyze the error and propose a fix."
"3. Candidate Fix Generated: The AI creates a patch, which could involve adjusting the failing test, the underlying application code, or both."
"4. Ephemeral Environment Validation: The fix is tested in a clean, temporary environment."
"5. Developer Review: The suggested fix is opened as a commit or pull request. A human developer can approve, refine, or reject it."
What do y'all think? Are we ready to plug AI directly into the software integration and deployment system?
https://qckfx.com/blog/how-ai-driven-self-healing-ci-pipelines-can-transform-development
11. "NYCerebro is a CLIP-powered search engine for NYC traffic cameras. It uses AI to find camera views matching your text descriptions."
Built in two hours in a hackathon.
"The 'magic' is that we use OpenAI's CLIP model to embed a semantic representation of each traffic camera's current image and then compare that with a the text vector of the user's search query. By indexing all of the camera images' vectors in a vector database we can find the 'most similar' images for a search query."
"The craziest part of this hack? All of the frontend code was 100% written with Vercel's v0! With some detailed prompting, a bit of back and forth debugging, and the occasional escalation to OpenAI o1 (and copy/pasting back its reply to v0) we were able to create a novel app in just two hours."
"For the backend, we used a Roboflow Workflow to calculate the CLIP embeddings and a Custom Python Block to save the results to our Supabase database (with pgvector as the vector store)."
To give it a whirl, I punched in "construction and bridge". I got back a lot of cameras that all had bridges in them, but usually not any construction going on.
CLIP stands for "Contrastive Language-Image Pre-training". The main idea is that you train an image encoder and a text encoder together so that matching image-caption pairs end up close together in a shared embedding space, while mismatched pairs -- an image paired with some *other* image's caption -- end up far apart. That "contrast" between matching and non-matching pairs is the training signal. This technique, CLIP, was instrumental in making models that generate images from text, like OpenAI's DALL-E, Google's Imagen, Stable Diffusion, etc.
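The training objective itself fits in a dozen lines. Here's a sketch of the symmetric contrastive loss CLIP uses, assuming you already have an image encoder and a text encoder producing same-sized embeddings for a batch of matched image-caption pairs:

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    # Normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Similarity of every image to every caption in the batch.
    logits = image_emb @ text_emb.t() / temperature
    # Matching pairs sit on the diagonal; every other pairing is a negative.
    targets = torch.arange(len(logits), device=logits.device)
    loss_images = F.cross_entropy(logits, targets)      # image -> its caption
    loss_texts = F.cross_entropy(logits.t(), targets)   # caption -> its image
    return (loss_images + loss_texts) / 2
```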
Satellites
12. "Satellite firm bucks miniaturization trend, aims to build big for big rockets"
"Over the last decade, much of the satellite industry has gone smaller. Similar to the trend in consumer electronics, in which more computing power and other capability can be packed into smaller devices, satellites have also gotten smaller and cheaper."
"Smaller satellites typically sacrifice a lot of power, going from as much as 20 kilowatts down to 1 or 2 kW, Karan Kunjur, co-founder and chief executive of K2, said. They also often have a smaller aperture (such as a lens in a telescope), reducing the quality of observations. And they must make difficult trades between payload capacity and on-board propellant."
"The reaction wheels that Honeywell Aerospace sells to Lockheed cost approximately $500,000 to $1 million apiece. K2 is now on its fourth iteration of an internally built reaction wheel and has driven the cost down to $35,000. Kunjur said about 80 percent of K2's satellite production is vertically integrated."
The article goes on to describe the company's 'Mega Class' satellite bus -- a satellite bus is "the main structural component of a satellite, upon which payloads are hosted" -- which they claim will have similar capabilities as Lockheed's LM2100: "20 kW of power, 1,000 kg of payload capacity, and propulsion to move between orbits" but at a fraction of the cost.
Astronomy
13. Exocomets -- comets in solar systems other than our own -- have been spotted around 74 nearby stars.
"Exocomets are icy bodies at least one kilometer in size orbiting stars other than our Sun. While they are too small to observe directly, these bodies occasionally collide with one another, releasing copious amounts of dust and pebbles. The exocomets and the debris they shed tend to orbit stars in belts, akin to our solar system's Kuiper Belt, and those belts are within reach of modern-day telescopes."
"Astronomers used the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile and the Submillimeter Array (SMA) in Hawai'i to observe these dusty belts."
"The radio wavelengths these observatories detect come from the faint glow of warm dust within the disks. Moreover, by combining data from several dishes, the observatories can make out fine details in the disk structures."
https://skyandtelescope.org/astronomy-news/new-images-reveal-exocomets-around-74-nearby-stars/
3D Printing
14. New metal 3D printing technique.
"Metal 3D printing is currently dominated by powder-based techniques like SLM (Selective Laser Melting). These processes yield incredibly precise parts, but the build times are slow. Furthermore, dealing with the powder increases manufacturing complexity: Whenever the powder is transported, loaded into the machine, or cleaned up afterwards, rigorous steps must be followed to prevent the loose spread of powder. (The powders are flammable and present an explosion risk, and can also cause respiratory issues for workers.) Following all of these steps adds cost, time, and risk."
"A Spain-based company called Meltio has developed a new metal 3D printing technology that does away with powder-based hassles. Rather than powder, Meltio's feedstock is metal wire, which is easy to handle on spools. The wire is fed into a point where three to six low-power diode lasers converge, creating what's known as a 'melt pool.' This turns the wire into molten metal that is delivered in layers, as with Fused Deposition Modeling (FDM) 3D printing."
Fused Deposition Modeling is the name of the technique that melts plastic filament off spools to 3D print plastic doodads.
https://www.core77.com/posts/135194/LMD-A-New-Less-Wasteful-Metal-3D-Printing-Technique
Electric Vehicles
15. "The sci-fi motor design that could help save the EV industry."
Donut Labs made an in-wheel motor that's cheap (because it's made from common and inexpensive materials, so they claim) and light. The motorcycle wheel weighs 21 kg (46 lbs), while the car wheel weighs 40 kg (88 lbs). The motorcycle wheel is already in production in an electric motorcycle sold in Sweden called the Verge TS Pro.
"Historically, unsprung mass has been a major obstacle for in-wheel motors. This is an important parameter in handling, or how a vehicle feels like to drive. All mass that is in direct contact with the road without going through suspension is unsprung mass: a wheel's tires, in-wheel motors, brake rotor, control arms, steering arms, etc."
Computer Science
16. NootCode: Non-algorithmic LeetCode?
"Becoming an exceptional software engineer requires more than just acing algorithm quizzes. It requires mastery of crucial skills including Computer Science Fundamentals, System Design, Scenario Analysis and more. Just passive learning is not enough. NootCode offers an online judging and coaching platform where you can master all these non-algorithmic skills through hands-on exercises just like practicing coding on LeetCode."