Boulder Future Salon

Someone attempted to submit code to systemd with a new feature called "detect-fash," "which scans a system for the presence of software and configurations known to be associated with fascist ideologies."

systemd (no capitalization) is a system for booting up Linux machines and managing the services that run once the computer is up.

Inside the "detect-fash" submission, we see functions named:

detect_omarchy
detect_ladybird
detect_hyprland
detect_dhh

Omarchy is a new Linux distribution created by David Heinemeier Hansson (who goes by the initials DHH, so I will henceforth just call him DHH) which I've heard is supposed to be easy for Mac users to migrate to, kind of like how Linux Mint is easy for Windows users to migrate to. It is said to be very "opinionated" with all sorts of user interface decisions made for you, although since it is Linux under the hood, it is actually possible to customize it all.

Ladybird is an open-source web browser made by Andreas Kling.

Hyprland is a "Wayland compositor", meaning a display server that implements the Wayland display server protocol. Wayland is a replacement for the X server protocol that most Linux systems currently use; people are trying to migrate to Wayland, which is newer and supposed to be better.

The "detect_dhh" function checks whether systemd is running on DHH's own computer by looking for his public ssh key.

DHH is the creator of Rails (as in "Ruby on Rails" -- he did not create the Ruby programming language, which was created by Yukihiro Matsumoto, but rather the "Rails" framework built on top of it), and I have a link below that explains why people think he's a "fascist". He's the only one of the three I understand; the others I have no idea about. (If you know, please explain to me.)

There is an additional interesting twist on this. If you click over to the GitHub account that issued the pull request, you'll see it's an account with Russian writing. Transliterated into our alphabet, it says "otrodyas takogo ne bylo, i vot - opyat", which Google Translate renders as "I've never seen anything like this before, and here it is again," and which Wikipedia translates as "The thing that never happens just happened again." The quote is attributed to Viktor Chernomyrdin, Prime Minister of Russia from 1992 to 1998, who was known for comedic sayings. Another, given on his Wikipedia page (link below), is "We wanted the best, but it turned out like always."

The fact that the submitter is (probably) Russian is hugely significant. Open source project maintainers in the United States are prohibited by sanctions laws from accepting submissions from anyone connected with the Russian government, which is on the US government's list of officially prohibited entities, and the penalties for violating these laws are said to be severe. So rather than being an attempt to combat the use of, or contribution to, open source software by "fascists", this could actually be an attempt to take out the leadership of the systemd project by getting the people who run it punished by the US government. As I understand it, the primary people who run the systemd project work at Red Hat and are located in the US.

If you've heard that geofencing was used by Israel to target advertising at Christians, it looks like that's true, and the reason we know is a Foreign Agents Registration Act filing, something I previously knew nothing about. The Foreign Agents Registration Act (FARA) requires 'foreign agents' to register with the Department of Justice (DOJ) and disclose their activities and financial compensation.

My point here isn't to make any political or religious statement; I just think it's interesting that "geofencing" can be used for targeted advertising, and that FARA exists and can reveal a foreign entity using it (although you have to wonder if, after this, such activity will get hidden behind a chain of shell companies). The idea is that when people go to church, the GPS coordinates from their mobile phones fall inside the "geofenced" area of the church grounds, identifying them as attendees of that church. This could be used for anything, not just churches, and when I mentioned this to some friends, they told me geotargeted advertising has actually been a thing for a long time. I naïvely thought that just meant you travel to city X, you get ads for restaurants in city X, something like that, but apparently geofencing is much more sophisticated than that now. You enter the grounds of a specific church, and computers somewhere remember forever that you're a Christian and a member of that church, and you get targeted advertising on that basis. Of course, it seems rather obvious when you spell it out like that. The FARA filing (link below) lists the specific churches targeted (starting on page 34). (Scottsdale Bible Church, Scottsdale, AZ; North Phoenix Baptist Church, Phoenix, AZ; ...) Once identified as a Christian, a person can receive targeted advertising with pro-Israel messages from the government of Israel.
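At its core, the membership test is simple geometry. Here is a minimal sketch (the fence name, coordinates, and radius are made up; real ad-tech systems typically use polygon fences around property lines, joined against persistent device IDs) of checking whether a GPS ping falls inside a circular fence:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Made-up fence: center latitude/longitude plus a radius in meters.
GEOFENCES = {
    "example_church": (33.5000, -112.0000, 150.0),
}

def fences_containing(lat, lon):
    """Return the names of all fences this GPS ping falls inside."""
    return [name for name, (clat, clon, radius) in GEOFENCES.items()
            if haversine_m(lat, lon, clat, clon) <= radius]
```

An ad platform then only has to join fence hits like these against a device identifier to build a durable audience segment.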

The FARA filing even describes some of what those messages are: Educational messages about the history of Jews in the region, before and after the creation of the state of Israel in 1948; educational messages about the history of the creation of Israel, its legitimacy as a power in the region, and its protection of non-Jewish populations; education about ongoing activities to protect civilians and maintain moral superiority; information about democratic freedoms in Israel including religious and non-religious freedoms; question the longstanding policy of a 2-state solution; highlight historical co-existence between Jews and Arabs continuing into the creation of Israel and the many concessions made by Israel in exchange for peace; Information about the great partnership between Americans and Israelis internationally; Christians In Israel and the Birthplace of Jesus Christmas Message; ...

Tiny Recursive Models beat large language models on the ARC-AGI tests of intelligence.

"With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters."

The wording of that is very careful. The best LLM/multi-modal model on both ARC-AGI-1 and ARC-AGI-2 is a version of Grok 4 custom-trained for the ARC-AGI-1 and ARC-AGI-2 tests. It gets scores of 79.6 on ARC-AGI-1 and 29.4 on ARC-AGI-2. However, this model has 1.7 trillion parameters. Tiny Recursive Models are able to get 44.6 on ARC-AGI-1 and 7.8 on ARC-AGI-2 with only 7 million parameters. The ability to do so well with so few parameters is what's noteworthy.
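The parameter gap is easy to check (taking the post's 1.7-trillion figure for Grok 4 at face value; xAI has not published an official count):

```python
trm_params = 7e6        # Tiny Recursive Model, per the paper
grok4_params = 1.7e12   # figure cited in this post, not officially confirmed
ratio = trm_params / grok4_params
print(f"{ratio:.6%}")   # → 0.000412%, comfortably under the "less than 0.01%" claim
```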

"ARC-AGI-1 and ARC-AGI-2 are geometric puzzles involving monetary prizes. Each puzzle is designed to be easy for a human, yet hard for current AI models. Each puzzle task consists of 2-3 input-output demonstration pairs and 1-2 test inputs to be solved. The final score is computed as the accuracy over all test inputs from two attempts to produce the correct output grid. The maximum grid size is 30x30. ARC-AGI-1 contains 800 tasks, while ARC-AGI-2 contains 1120 tasks. We also augment our data with the 160 tasks from the closely related ConceptARC dataset. We provide results on the public evaluation set for both ARC-AGI-1 and ARC-AGI-2."

"While these datasets are small, heavy data-augmentation is used in order to improve generalization. ARC-AGI uses 1000 data augmentations (color permutation, dihedral-group, and translations transformations) per data example. The dihedral-group transformations consist of random 90-degree rotations, horizontal/vertical flips, and reflections."

"Tiny Recursive Model with self-attention obtains 44.6% accuracy on ARC-AGI-1, and 7.8% accuracy on ARC-AGI-2 with 7M parameters. This is significantly higher than the 74.5%, 40.3%, and 5.0% obtained by Hierarchical Reasoning Model using 4 times the number of parameters (27M)."

How does it work?

Well, the actual paper talks a lot about a previous model (which you just saw mentioned in that last quote) called Hierarchical Reasoning Model. Tiny Recursive Model was created by improving upon Hierarchical Reasoning Model.

The philosophy of Hierarchical Reasoning Model is that you actually have two models. One processes inputs at a very high frequency. The second processes outputs from the first at a low frequency. In this manner, you establish a clear hierarchy.

The Tiny Recursive Model dispenses with the explicit hierarchy in favor of "recursion". There's a single network. It contains a transformer "attention" system, but combines that with the input (reminiscent of residual networks), the current best answer, and a hidden latent state (reminiscent of recurrent networks -- which attention-based "transformers" made just about completely disappear).

Hierarchical Reasoning Models require a complex inner loop with fixed parameters controlling when the high-level network runs. The Tiny Recursive Model has a simpler inner loop, though it still has a fixed parameter for the number of updates to the hidden latent state (6 times through the loop) and another for the number of times it does the "deep recursion" incorporating the input, current best answer, and hidden state (3 times through that loop).

The Hierarchical Reasoning Model has a complex early-stopping mechanism, which the creators of the Tiny Recursive Model say in their paper was both "biologically inspired" (using ideas from neuroscience) and inspired by Q-learning from reinforcement learning. It is computationally expensive to calculate whether to "halt". The new Tiny Recursive Model instead uses simple binary cross-entropy, a commonly used loss function in machine learning, to train a halting score. The score goes through a sigmoid function, and if the result is more than 0.5 (potentially another fixed parameter), the model considers its answer confident enough to stop.

The Hierarchical Reasoning Model outputs its final answer only from the network at the top of the hierarchy. The Tiny Recursive Model, in contrast, maintains the "current best answer" throughout the process. It maintains latent state throughout the process as well, allowing it to continuously maintain inner "thinking" that is not part of the final answer.
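Putting those pieces together, here is a toy sketch of the control flow as I understand it (the real model uses a small trained transformer; the functions, dimensions, and random weights below are stand-ins, while the loop counts of 6 and 3 and the 0.5 halting threshold come from the description above):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # toy width

# Random stand-in weights for the single tiny network in its two roles:
# updating the latent state z, and refining the current answer y.
W_z = rng.normal(0, 0.1, (3 * D, D))
W_y = rng.normal(0, 0.1, (2 * D, D))
w_halt = rng.normal(0, 0.1, D)

def update_latent(x, y, z):
    # z <- f(x, y, z): latent "thinking" step, conditioned on input and answer
    return np.tanh(np.concatenate([x, y, z]) @ W_z)

def update_answer(y, z):
    # y <- g(y, z): refine the current best answer from the latent state
    return np.tanh(np.concatenate([y, z]) @ W_y)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def trm_forward(x, n_latent=6, n_recursions=3, halt_threshold=0.5):
    y = np.zeros(D)  # current best answer, maintained throughout
    z = np.zeros(D)  # hidden latent state, maintained throughout
    for _ in range(n_recursions):      # "deep recursion" over (x, y, z)
        for _ in range(n_latent):      # 6 latent updates per recursion
            z = update_latent(x, y, z)
        y = update_answer(y, z)
        if sigmoid(z @ w_halt) > halt_threshold:  # confident enough to stop early
            break
    return y
```

The point of the sketch is just the shape of the loop: one network, a persistent answer and latent state, and a cheap learned halting test.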

It remains to be seen whether this approach will revolutionize the field of AI. Since these models are so small, there would seem to be tremendous headroom to scale them up and potentially crush humans on the ARC-AGI-1 and ARC-AGI-2 tests.

"User ban controversy reveals Bluesky's decentralized aspiration isn't reality. Bluesky's protocol is so complicated that not even the biggest alternative network has figured out how to become independent."

"Bluesky's engineering team has been moving ahead with its long-promised open source efforts, breaking up its software stack into several pieces to enable a federated Authenticated Transfer Protocol (ATProto) network where anyone with the know-how and funds could run their own copy of Bluesky."

But...

"The only completely independent implementation of ATProto is Bluesky. But that isn't for want of trying on the part of Rudy Fraser, the creator of Blacksky."

"Despite Fraser's efforts to implement his own PDS, Relay, and App View, however, Blacksky still remains partially dependent upon Bluesky's application server, largely because while the code to implement the dataplane of posts and users within an application server is released, the open-source version is slower. As a result, Blacksky is dependent on Bluesky's application server to give users a fast experience, which also means that it is dependent on Bluesky's labeling system and its moderation choices."

And the government is trying to influence those moderation choices.

"Federal Communications Commission Brendan Carr's threats against late night comedian Jimmy Kimmel led to his temporary suspension by ABC, and he was far from the only Republican to issue them. Louisiana Rep. Clay Higgins, chair of the House subcommittee on federal law enforcement, sent a menacing letter to Bluesky and other social media networks demanding that they identify and ban anyone deemed to be celebrating Charlie Kirk's killing."

"Today's LLMs are the epicycles of intelligence: extraordinarily useful for navigation through language, capable of producing predictive charts of our symbolic universe -- but like their astronomical predecessors, perhaps working well without being fundamentally correct."

"In astronomy, it took two orthogonal insights -- Copernicus's heliocentrism and Kepler's ellipses -- spread over seventy years to break free from epicycles, and another eighty for Newton to reveal the logic behind them. By analogy, we may still be in AI's pre-Copernican era, using parameter-rich approximations that will eventually give way to a more compact and principled foundation."

Is the possibility that gradient descent and backpropagation aren't the foundations of intelligence itself keeping you up at night?

camfer (no capitalization) is an AI CAD tool that works with SolidWorks on Windows.

If you're a SolidWorks user and give it a whirl, let me know how it goes.

"The AI boom's reliance on circular deals is raising fears of a bubble."

"Nvidia plans to invest in OpenAI, which is buying cloud computing from Oracle, which is buying chips from Nvidia, which has a stake in CoreWeave, which is providing artificial intelligence infrastructure to OpenAI."

"If it starts to become clear that AI productivity gains -- and thus the return on investment -- may be limited or delayed, 'a sharp correction in tech stocks, with negative knock-ons for the real economy, would be very likely,' analysts with Oxford Economics research group wrote in a recent note."

AI GIF Generate is an AI animated GIF generator.

In the discussion between Richard Sutton, pioneer of reinforcement learning, and Dwarkesh Patel, YouTuber, the two spoke past each other because they were "speaking two different languages", says Ksenia Se of "Turing Post".

Words like "prediction", "goal", "imitate", "world model", and "priors", have different meanings in the minds of Richard Sutton and Dwarkesh Patel.

Richard Sutton thinks of them in terms of reinforcement learning, and having studied part of his textbook (co-authored with Andrew Barto) (I read about half of it and confess to not having done most of the exercises -- they are quite challenging!), I understand him very clearly, while Dwarkesh Patel thinks in terms of the current large language models.

To me, Dwarkesh Patel's thinking seems limited because he's not able to see beyond large language models and their token-oriented, self-supervised training system. That may be fine for language, but other techniques, which I expect to come primarily from the reinforcement learning research community, seem likely to me to make robots competitive with humans in terms of physical dexterity in the physical world.

"How functional programming shaped (and twisted) frontend development."

If it seems like ideas in React and Redux resemble ideas from the "functional languages paradigm" in languages like Haskell, it's not your imagination.

Some choice quotes:

"There's a strange irony at the heart of modern web development. The web was born from documents, hyperlinks, and a cascading stylesheet language. It was always messy, mutable, and gloriously side-effectful. Yet over the past decade, our most influential frontend tools have been shaped by engineers chasing functional programming purity: immutability, determinism, and the elimination of side effects."

"The web is fundamentally side-effectful. CSS cascades globally by design. Styles defined in one place affect elements everywhere, creating emergent patterns through specificity and inheritance. The DOM is a giant mutable tree that browsers optimize obsessively; changing it directly is fast and predictable. User interactions arrive asynchronously and unpredictably: clicks, scrolls, form submissions, network requests, resize events. There's no pure function that captures 'user intent.'"

"This messiness is not accidental. It's how the web scales across billions of devices, remains backwards-compatible across decades, and allows disparate systems to interoperate. The browser is an open platform with escape hatches everywhere. You can style anything, hook into any event, manipulate any node. That flexibility and that refusal to enforce rigid abstractions is the web's superpower."

"Functional programming revolves around a few core principles: functions should be pure (same inputs yields same outputs, no side effects), data should be immutable, and state changes should be explicit and traceable. These ideas produce code that's easier to reason about, test, and parallelize, in the right context of course."

"CSS was designed to be global. Styles cascade, inherit, and compose across boundaries. This enables tiny stylesheets to control huge documents, and lets teams share design systems across applications. But to functional programmers, global scope is dangerous. It creates implicit dependencies and unpredictable outcomes."

"React introduced synthetic events to normalize browser inconsistencies and integrate events into its rendering lifecycle. Instead of attaching listeners directly to DOM nodes, React uses event delegation. It listens at the root, then routes events to handlers through its own system."

"This feels elegant from a functional perspective. Events become data flowing through your component tree. You don't touch the DOM directly. Everything stays inside React's controlled universe."

"But native browser events already work. They bubble, they capture, they're well-specified. The browser has spent decades optimizing event dispatch."

It is alleged (by The Citizen Lab, at the Munk School of Global Affairs and Public Policy at the University of Toronto), that Israel is using AI to create online "influence operations" aimed at "regime change" in Iran, starting with a deepfake of IDF air strikes on Evin Prison in Tehran.

MEMS lidar.

"Five years ago, Eric Aguilar was fed up."

"He had worked on lidar and other sensors for years at Tesla and Google X, but the technology always seemed too expensive and, more importantly, unreliable. He replaced the lidar sensors when they broke -- which was all too often, and seemingly at random -- and developed complex calibration methods and maintenance routines just to keep them functioning and the cars drivable."

"So, when he reached the end of his rope, he invented a more robust technology -- what he calls the 'most powerful micromachine ever made.'"

"Aguilar and his team at startup Omnitron Sensors developed new micro-electro-mechanical systems (MEMS) technology that he claims can produce more force per unit area than any other."

Allegedly, lidar built with this MEMS technology will be more robust to road vibrations, thermal cycles, and rain than conventional lidar.

"New research by LayerX shows how a single weaponized URL, without any malicious page content, is enough to let an attacker steal any sensitive data that has been exposed in Perplexity's Comet AI browser."

"For example, if the user asked Comet to rewrite an email or schedule an appointment, the email content and meeting metadata can be exfiltrated to the attacker."

"An attacker only needs to get a user to open a crafted link, which can be sent via email, an extension, or a malicious site, and sensitive Comet data can be exposed, extracted, and exfiltrated."

It's only been days since I found out Perplexity's Comet AI browser exists. Comet is supposed to be a browser that acts as an AI agent, taking actions on the internet on your behalf.

The claim is being made that the government of the Caribbean island of Anguilla now gets 47% of its income from registrations of .ai domains.

Honorable mentions in the comments section: .io (British Indian Ocean Territory), .fm (Federated States of Micronesia), and .tk (Tokelau).

The claim is being made that at JPMorgan, the shift to agentic AI "favors those who work directly with clients -- a private banker with a roster of rich investors, traders who cater to hedge fund and pension managers, or investment bankers with relationships with Fortune 500 CEOs, for instance."

"Those at risk of having to find new roles include operations and support staff who mainly deal in rote processes like setting up accounts, fraud detection or settling trades."

"AOL's dial-up internet service is shutting down Tuesday, ending one of the web's first mainstream access points."

By "Tuesday", they mean September 30th, so it's already shut down by the time you read this.

End of an era.