Boulder Future Salon


"Twenty-five years ago, in the mountains of Utah, a small group of technologists gathered to rethink how software is built. Their ideas ignited what would become the agile movement, setting a new direction for the industry."

"In February 2026, we returned, not to memorialize the past, but to confront a new inflection point: the shift to AI-native software development. Hosted by Martin Fowler and Thoughtworks, the event brought together a small group of practitioners, researchers and enterprise leaders to ask what responsible and effective software development looks like in an era defined by AI."

Choice quotes from the report to follow. If this seems like a lot, remember the full report is much larger. If you want more, you can download and read the full report.

"The future of software engineering"

"The retreat was conducted under the Chatham House Rule. No participant names or affiliations are disclosed in this summary."

"1. Where does the rigor go?"

"The single most important question of the retreat. It surfaced in nearly every session."

"If AI takes over code production, the engineering discipline that used to live in writing and reviewing code does not disappear; it moves elsewhere."

"The group identified five destinations where rigor is already moving:"

"Upstream to specification review: Several practitioners reported shifting their review efforts from code to the plan that precedes it."

"Into test suites as first-class artifacts: One of the retreat's most shareable insights was that test-driven development produces dramatically better results from AI coding agents."

"Into type systems and constraints: The retreat surfaced strong interest in using programming language features to constrain AI-generated code."

"Into risk mapping: The retreat discussed tiering code by business blast radius, distinguishing between internal tools, external-facing services and safety-critical systems."

"Into continuous comprehension: If code changes faster than humans can review it, the traditional model of building mental models through code review breaks down. The retreat discussed alternatives: weekly architecture retrospectives, ensemble programming where multiple engineers work simultaneously on the same code and AI-assisted code comprehension tools that generate system overviews on demand."
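The second destination above, tests as first-class artifacts, can be made concrete with a small sketch (the function and its spec are my own illustration, not from the report): the human writes the test first as the durable, reviewable artifact, and the agent's job is to produce code that passes it.

```python
import re

# Test written first, by the human, as the reviewable specification.
# (slugify and its spec are hypothetical examples, not from the report.)
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Spaces  and  MORE  ") == "spaces-and-more"

# Implementation an agent would then be asked to produce to satisfy the test.
def slugify(text: str) -> str:
    """Lowercase, collapse non-alphanumeric runs to hyphens, trim hyphens."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

test_slugify()
```

The point of the ordering is that the test, not the generated code, is the artifact the human actually reads and signs off on.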

"2. The middle loop: a new category of work"

"Software development has long been described in terms of two loops. The inner loop is the developer's personal cycle of writing, testing and debugging code. The outer loop is the broader delivery cycle of CI/CD, deployment and operations. The retreat identified a third: a middle loop of supervisory engineering work that sits between them."

"3. Agent topologies and enterprise architecture"

"Conway's Law applies to agents too. Enterprise architecture must now account for agent mobility, specialization, and drift."

Conway's Law is the idea that the structure of a software program is a reflection of the software team that created it, commonly stated as, "If you have four groups working on a compiler, you'll get a 4-pass compiler". The original statement from Melvin Conway in 1967 was, "Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations."

"Agent drift: Agents that learn from their context will diverge over time. The database agent working on the e-commerce backend accumulates different patterns and preferences than the one working on the ERP system, even if they started from identical configurations."

"Decision fatigue as the new bottleneck: If agents can produce work faster than leaders can review and approve it, the constraint shifts from production capacity to decision-making capacity."

"4. Self-healing and self-improving systems"

"The retreat explored whether software systems can move beyond human-driven incident response toward agent-assisted self-healing. The group distinguished between two levels of ambition: self-healing (returning a system to a known good state) and self-improving (actively evolving a system's non-functional qualities like performance and reliability)."

"5. The human side: roles, skills and experience"

"Developer experience has traditionally been defined across three dimensions: flow state, feedback loops and cognitive load. Productivity and developer experience have been tightly coupled for decades; the retreat explored evidence that they are now diverging. Organizations can achieve productivity gains through AI tools even in environments where developers report lower satisfaction, more cognitive load and reduced sense of flow."

"6. Technical foundations: languages, semantics and operating systems"

"Programming languages for agents: Every programming language in existence was designed with humans as the primary user. Dynamic typing exists to reduce cognitive overhead for human programmers. Strong static typing exists to catch human errors. The retreat asked what a language designed for agent-generated code would look like, and whether it would also serve humans better. The group converged on a principle: what is good for AI is good for humans. Languages that make incorrect code unrepresentable (through strong types, restricted computation models and formal constraints) help agents produce correct output and help humans verify it. Conversely, languages that favor expressiveness over safety make both agent generation and human review harder."
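A minimal sketch of the "make incorrect code unrepresentable" idea, in Python's type system (the types here are my own illustration, not from the retreat): if amounts are integer cents and currencies are a closed enum, whole categories of wrong states simply cannot be constructed, which shrinks what either an agent or a human reviewer has to check.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical example: constrain money values so invalid states can't exist.
class Currency(Enum):
    USD = "USD"
    EUR = "EUR"

@dataclass(frozen=True)        # immutable: no state can change after creation
class Money:
    amount_cents: int          # integer cents: no float-rounding states exist
    currency: Currency         # only the listed currencies are representable

    def __post_init__(self):
        if self.amount_cents < 0:
            raise ValueError("negative amounts are unrepresentable here")
```

A reviewer (human or agent) now only needs to verify logic, not hunt for stray floats, unknown currency strings, or mutated values.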

"Semantic layers and knowledge graphs: Technologies that failed to gain mainstream adoption for decades are suddenly relevant. Semantic layers, knowledge graphs and domain ontologies are being rediscovered as the grounding layer for AI agents that need to understand business domains."

"The agentic operating system: The retreat explored what an operating system for agents would need to include: Agent identity and permission management. Memory and context-window management. A work ledger that captures future, current and past work with attributes like required skills, acceptance criteria, SLOs and cost constraints. Governance paths through a graph of agent capabilities and compliance requirements."
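To make the "work ledger" idea concrete, here is a purely hypothetical sketch of what one ledger entry might look like; every field name below is my guess at the attributes the retreat listed (required skills, acceptance criteria, SLOs, cost constraints), not a real schema.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical work-ledger entry for an agentic operating system.
@dataclass
class WorkItem:
    title: str
    status: str                                        # "future", "current" or "past"
    required_skills: list[str] = field(default_factory=list)
    acceptance_criteria: list[str] = field(default_factory=list)
    slo: str | None = None                             # e.g. "p99 latency < 200ms"
    max_cost_usd: float | None = None                  # cost constraint for agents
```

The interesting design question is which of these fields agents are allowed to write, and which only governance paths can change.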

"7. Security, governance and the future of agile"

"Security is dangerously behind: The retreat noted with concern that the security session had low attendance, reflecting a broader industry pattern. Security is treated as something to solve later, after the technology works and is reliable. With agents, this sequencing is dangerous. The most vivid example: granting an agent email access enables password resets and account takeovers. Full machine access for development tools means full machine access for anything the agent decides to do."

"Agile is evolving, not dying: The retreat pushed back hard on the "agile is dead" narrative. What is happening is more nuanced. Some teams are compressing sprint cadences to one week, using AI to automate end-of-sprint ceremonies like demos, reporting and status summaries. Others are rediscovering XP practices (pair programming, ensemble development, continuous integration) because these practices create the tight feedback loops and shared understanding that agent-assisted development requires."

"The real threat to agile is governance. Teams that adopt AI tools and work faster still run into the same approval processes, compliance gates and organizational dependencies."

"Software stability is also declining as batch size increases. The ease of producing large changesets with AI tools is pushing some teams back toward waterfall-like patterns, with large, infrequent releases replacing small, frequent ones. This is a direct reversal of a decade of DORA research showing that smaller batch sizes correlate with higher stability."

"8. Agent swarms: beyond sequential thinking"

"The first barrier to effective swarming is mental, not technical. Engineers trained in sequential decomposition struggle to conceptualize parallel agent work. Practitioners who have made breakthroughs in swarming describe the experience as fundamentally unlike anything they have encountered in previous software development. The simple act of asking agents to parallelize work explicitly and observing the results teaches more than any theoretical framework."

"9. Open questions"

"The retreat surfaced more questions than answers."

"I found almost all the questions interesting so am quoting nearly the whole section."

"On work and identity: How do we help engineers who love writing code find meaning and satisfaction in supervisory engineering work? What professional development pathways lead to the middle loop? If the product manager role and developer role are converging, what is the resulting role called and who owns it?"

"On organizational design: If agents make middle management bottlenecks more visible, does the organizational response involve fewer managers, differently-skilled managers or a fundamentally different coordination model? How do you redesign enterprise architecture when agents can move across team boundaries but governance structures cannot?"

"On trust and verification: What would need to be true for organizations to stop reviewing AI-generated code entirely? Is there a world where test suites and constraints provide sufficient verification without human inspection? How do we build trust in systems that are fundamentally non-deterministic, where rerunning the same inputs produces different outputs?"

"On knowledge and comprehension: If code changes faster than humans can comprehend it, do we need a new model for maintaining institutional knowledge? Can knowledge graphs and semantic layers truly replace the human intuition that comes from years of working in a codebase? What is the right investment level for "agent subconscious" systems that most organizations do not yet build?"

"On speed and stability: Are we currently in a regression where AI-enabled productivity gains are being offset by stability losses from larger batch sizes? Will development need to slow down because the volume of decisions is overwhelming human capacity to evaluate them? How do we measure the real cost of cognitive debt as it accumulates?"

AI facial recognition misidentification. On September 17, 2023, the Peppermill Casino in Reno, Nevada, called the Reno Police to report that a trespasser had unlawfully returned to the casino. In reality, facial recognition software had misidentified a different man. He carried valid identification under a different name, but police believed the AI software over his identification and arrested him. Body cam footage of the arrest was released this year. After fingerprint identification, the name on his arrest record was changed to his real name, yet he was still charged with trespassing and prosecuted. Even the fingerprint identification failed to convince police that the AI facial recognition could be wrong.

"AI does not make software ephemeral" says Andreas Kirsch.

"It obviously makes code generation cheaper, but this shifts the bottlenecks to validation, integration, and ergonomics (UX etc.). The hard part of software engineering has never been writing code. It has been discovering correct behavior by resolving edge cases and operational ambiguity when software collides with reality. Neither the duration of this discovery process nor its cost vanishes when code generation becomes fast by itself. Drawing an analogy to Amdahl's law, which tells us that speedup through parallelization is limited by the irreducibly sequential part, the speed at which we can generate trustworthy software is also limited by the parts that cannot be sped up by AI."

"Instead, I will argue that the future of software is malleable, not ephemeral. By this, I mean that all artifacts of software engineering will become more malleable but code will not become ephemeral -- i.e., forgettable or disposable. Code will remain the source of truth. Institutional artifacts will become more important: code, version history, tests, specs, audit trails, and postmortems. The malleable software model persists code and higher-level artifacts together, with massively reduced friction and maintenance costs thanks to AI agents and tools."

"There is a risk for the Motte and Bailey fallacy in the discourse around ephemeral software: evidence for cheap code and fast iteration is treated as evidence that persisted artifact stacks will disappear. Cursor's ARR, Copilot adoption, AI-generated codebases at YC startups, Claude Code revenue, and the Stack Overflow survey mostly show developers producing code cheaper and faster within durable workflows: pull requests, CI, Git, code review. This is not evidence that software has become disposable as the ephemeral software hypothesis claims."

He breaks "ephemerality" into two axes, "regeneration frequency" and "artifact durability".

"The strongest form of the ephemeral software hypothesis occupies the extreme of both axes: continuous regeneration with minimal persisted code artifacts. My argument is that this corner is unstable at scale."

"In a poll about the ephemeral software hypothesis, roughly two thirds of respondents told me this was a straw man: nobody seriously believes software will become disposable. The other third told me it was obviously the future."

He goes through detailed case evidence for both sides. Luminaries such as Andrej Karpathy and Tomasz Tunguz (both of whom you're familiar with from my posts here), Anish Acharya (of a16z), and Amjad Masad (CEO of Replit) all strongly endorse and promote the concept of ephemeral software. The combined valuations of the top AI coding startups (Cursor, Cognition, Lovable, and Replit) now approach $50 billion. Stack Overflow's 2025 survey found that 28% of developers use vibe coding professionally. GitClear's analysis of 211 million lines of code changes found that code cloning is increasing, refactoring and reuse are declining, and code churn (code reverted quickly after being written) is increasing.

On the other side: code rewrites rarely go well; the analogy with compilers is incorrect, because those were transitions from formal languages to formal languages; in non-ephemeral systems, ambiguity "is progressively resolved into stable code from tests, schemas, interfaces, and operational practice"; edge cases from real users doing unexpected things get addressed and accumulate in non-ephemeral code; long-lived applications are not stateless (where vibe coding works best) but have increasingly complex state; APIs need long-term stability; and the legal system often requires auditability.

If you heard Gen Z is undergoing a religious revival, scholar of religion Andrew Henry, aka "Religion For Breakfast", says no. He says there are statistics showing an uptick in religion in certain areas, but these have been hyped by online Christians who *want* there to be a genuine religious revival among Gen Z; it's not the actual overall trend.

With that out of the way, the conversation turns out to be surprisingly wide-ranging on religion. What makes a religion a religion? How can there be many kinds of atheists, many of whom are actually kind of religious? He talks a lot about concepts like "costly signals" and the "bio-cultural" bases of religion.

He thinks the best way to make a religion that will survive into the future in our modern secular world is "costly signals", with the Amish being a good example. It's hard to fake being Amish. You have to dress a certain way, not use electricity, and ride a horse and buggy instead of driving a car. The Amish have very high retention rates and high fertility rates. The Amish population is growing very fast.

Tomasz Tunguz burned 84 million tokens on February 28th.

"At Claude or OpenAI rates -- roughly $9 per million tokens blended -- equivalent usage would cost $756 for a single day's work."
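The quoted figure checks out; here's the arithmetic as a quick sanity check (numbers from the post, the check itself is mine):

```python
# 84 million tokens at a blended $9 per million tokens, per the post.
tokens = 84_000_000
rate_per_million_usd = 9.0
cost = tokens / 1_000_000 * rate_per_million_usd
print(f"${cost:.0f}")  # $756
```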

"This week, Alibaba released Qwen3.5-9B, an open-source model that matches Claude Opus 4.1 from December 2025. It runs locally on 12GB of RAM. Three months ago, this capability required a data center. Now it requires a power outlet."

"It isn't an intelligence compromise. Reasoning, coding, agentic workflows, document processing, instruction following: the 9B model matches December's frontier across the board."


"What changes when frontier intelligence runs locally? Everything I send to cloud APIs today -- drafting emails, researching companies, writing code, analyzing documents -- stays on my machine. No API logs. No third-party retention. No outages. No rate limits."

"The tradeoff is parallelization. Cloud APIs handle thousands of concurrent requests. A laptop runs one inference at a time."

Hmm, I need to learn how to run local models.

Robert Pape, political scientist and director of the Chicago Project on Security and Threats, claims to have run Iran war simulations for 20 years, and has a prediction as to what will happen next.

First, he believes satellite images have already shown the Iranians have moved around and hidden their nuclear material.

What he thinks will happen next: The United States, after a sufficiently long time in which the regime fails to surrender, will freak out over not knowing where the nuclear material is, and will invade -- on the ground -- and force a regime change.

He divides this into Stage 2 and Stage 3. Stage 2 is a small invasion force to take over the government. Stage 3 is a large force to search the entire country for nuclear material. (Stage 1 is the initial bombing, happening now.)

While the US is doing this, or delaying, the Iranians will pursue what he calls "the North Korean" strategy: First, conduct a nuclear test. Then, while everyone is wondering whether they used up all their nuclear material, conduct a *second* nuclear test. Then everyone knows: they can do it again, and we don't know how many more times they can do it. And then Iran will become a government that cannot be overthrown, because it has nuclear weapons, like North Korea.

He says the "decapitation" of the leadership actually made the regime more aggressive. The prior Supreme Leader had fatwas in place against the development of nuclear weapons. Those fatwas are gone now. The replacement leader(s) are incentivized to prove themselves by standing up against the US.

He says Sunni Muslims in the region, even though they don't like the Shias (which the Iranians are) do not want to be part of an Israeli military operation, and so will not support, long term, this Israeli/US operation. It's worse to be on Israel's side than Iran's side.

He says it is in both China and Russia's best interest to assist the Iranians and drag the conflict with the US out as long as possible. He says this war with Iran is "manna from heaven" for China.

The US has a "soft underbelly", which is the public won't support a "forever war", especially a war of choice. Iran wins by dragging it out.

After watching the video, I scrolled down to the comments, and got a surprise. Instead of people picking apart his analysis to see what is good or bad about it, most of the comments were people complaining that he seems too "happy". Why is he so "happy"? Some say because he loves death and destruction. Others say he loves talking about his life's work, which is learning from Iran war simulations. Others say he *wants* the US to lose.

My commentary: The US invaded Afghanistan, a country vastly smaller, poorer, and less technologically advanced than Iran, with "boots on the ground", but the entire time the US occupied the country was characterized by asymmetric warfare, and the "regime change" the US effected evaporated immediately upon the US's departure from the country. The US was not really able to effect "regime change" even with a 20-year effort. The US invaded Iraq, with "boots on the ground", and uncorked a civil war within the country that resulted in the deaths of an estimated 1-3 million people, though you could argue the Saddam Hussein regime was successfully changed. Asymmetric warfare characterized the time the US occupied that country as well. Iran has about 4x the land area and 2x the population of Iraq at the time of the US's 2003 invasion.

So I was already predisposed from thinking about this that the war with Iran could become a protracted quagmire.

John Mearsheimer argues that "air power alone is insufficient for regime change", and I can't think of a successful "regime change" that resulted from "air power alone", so I'm inclined to think he's right. The lone possible counterexample I could think of was Japan after the two nuclear bombs at Hiroshima and Nagasaki.

So Robert Pape never seemed illogical or unreasonable to me for suggesting that this could be another drawn-out, protracted conflict. I came away feeling like I hadn't given enough consideration to the nuclear aspect and the parallels with North Korea's efforts to acquire nuclear weapons.

I didn't even notice he seemed "happy" because my mind was on the logic of the argument he was making. I do know from reading Paul Ekman's book on emotions (Ekman created an encoding system for the expression of emotions in facial expressions and coined the term "microexpressions" for fleeting expressions of emotions that people are trying to suppress) that knowing a person is expressing an emotion doesn't tell you *why* they're feeling that emotion. If a person is suppressing the expression of, say, sadness or fear, knowing *that* they feel sadness or fear doesn't tell you *why*, and it's really easy to jump to conclusions. The easiest way to know is if the person simply *tells* you. That doesn't work in situations where the person doesn't tell you or could be lying.

I can also tell you, as a "futurist", that if you predict what you *want* to happen, you'll get a lot wrong. If you actually write down your predictions and check to see if you got them right, you realize you need to be more objective and disciplined if you're going to predict things correctly. In this instance, what that implies is that predicting Iran will "win" (where "win" is defined simply as the US failing to replace the Iranian regime with a regime of its choice) shouldn't imply one *wants* Iran to "win". "Iran regime apologist" and "Iran regime change apologist" should not be the only options. "Objective and impartial futurist", trying to predict what will actually happen instead of what one wants to happen, should, at least in principle, be one of the options.

As of the time I am writing this, Iranian drone attacks continue around the Persian Gulf, with many intercepted. There are concerns interceptor missiles will run out as they are much more expensive than the drones they are intercepting. Ukrainian advisors have gone to the Middle East to assist with protection against Iranian drones as the Ukrainians have been experiencing attacks from Iranian drones in Ukraine. The number of successful drone attacks is unknown. The primary targets seem to be US military facilities such as radars and air defense systems. But they have also hit such targets as the Dubai financial center. The Iranians apparently also have unmanned attack boats that hit targets in the Strait of Hormuz. Some oil tankers have been hit. It is rumored that the Iranians have mined the Strait of Hormuz, rendering it officially closed. At this moment I am not sure if this is true.

In any case, it does not appear Iran is capitulating or this conflict is winding down. The United States has hit Iran with an unbelievably huge number of bombs -- allegedly an order of magnitude more than the number of bombs dropped on Iraq in the opening weeks of the Iraq war in 2003.

The Anthropic Economic Index.

So this came out in January. Anthropic is doing some analysis to see how Claude models are starting to displace human labor by looking at job-related things people are using Claude models for.

Of US states, the top 10 for job-related tasks are: Washington DC, New York, Massachusetts, California, Colorado, Washington, Virginia, Vermont, Oregon, and Utah.

The most common topics in Washington DC are:

1. Complete humanities and social science academic assignments across multiple disciplines 5.6%
2. Assist with job searching, career planning, and professional development 5.4%
3. Complete academic assignments and create educational materials across all subjects 5.2%
4. Draft and revise professional workplace correspondence and business communications 4.6%
5. Assist with business planning, strategy, and entrepreneurial development 3.4%
6. Proofread, edit, and correct written documents and communications 2.9%
7. Help research, compare, and select consumer products for purchasing decisions 2.5%
8. Create and optimize social media content and marketing strategies 2.4%
9. Research government, political, educational, and defense information and policies 2.3%
10. Write, develop, and edit original creative fiction across multiple genres 2.2%

Other states are the same items in different orders, mostly. Colorado has "Debug, fix, and refactor code across programming languages and development tasks" show up as number 7. "Create and optimize marketing content across multiple formats and industries" comes in at 9, and "Build, debug, and customize web applications and websites" comes in at 10.

They have a map of the world. The top countries for job-related Claude model usage are: Israel, Singapore, the United States, Australia, Switzerland, Canada, South Korea, New Zealand, Luxembourg, and Estonia.

As for which countries are at the bottom, there's a bunch with "insufficient data" (Mauritania, Guinea, Liberia, Gabon, Namibia, Botswana, Malawi, the Congo (both of them), Chad, Niger, Mongolia, Suriname, Papua New Guinea) and a bunch with "Claude not available" (Russia, China, Myanmar, Afghanistan, Iran, Syria).

After that, they have an "Explore by job" section. You can punch in your job and see in more detail how people are using Claude models to do your job. 974 occupations.

Planet Labs satellite imagery company has extended its delay for satellite images from the Middle East from 4 to 14 days. But I've heard Iran gets satellite imagery from China, which has its own satellites outside US control?

"The first generation was accelerated autocomplete."

"The second generation introduced synchronous agents."

"The third generation introduced autonomous agents. These agents can take a specification and run with it for thirty minutes, an hour, several hours and increasingly days. They set up environments, install dependencies, write tests, hit failures, research solutions online, fix the failures, write the implementation, test it again, set up services, and produce artifacts you can review. You hand them a task, move on to something else, and come back to logs, previews, and pull requests."

"That changes the cadence of work in ways that are hard to fully communicate until you experience it. Tasks that were weekend projects three months ago are now something you kick off and check on thirty minutes later."

"You are building the factory that builds your software."

I'm still on "second generation". Am I falling behind?

"Some infrastructure teams are still in copilot mode -- autocomplete for Terraform, AI-assisted PR descriptions, maybe a chatbot that answers questions about their cloud setup. Useful, but passive. The AI suggests, a human does everything else."

"A larger group has moved into agentic territory. They've adopted Claude Code, Codex, Cursor, or similar tools and are letting AI agents write IaC modules, fix compliance findings, generate entire pull requests. The agent isn't just suggesting -- it's doing. The human is still in the driving seat, typically reviewing every step, but the agent is the one writing the code, running the commands, and opening the PRs. This is where most teams we talk to are right now, and it's where the hard questions start."

"Then there's a smaller cohort already exploring the boundaries. They're adding custom tools, skills, and MCP servers to their agentic setups. They're finding the workflows that actually work for their team -- and starting to think seriously about isolation, sandboxing, and blast radius for agentic work that touches real infrastructure."

"Here's what all three groups have in common: they're all navigating the same architectural decisions, just at different depths. And there's almost no shared reference for how to make those decisions well."

"That's why we open-sourced the Infrastructure Agents Guide -- 13 chapters covering architecture, sandboxing, credentials, change control, observability, policy guardrails, and more."

There's a lot to read here, and I've just started. Let me know what insights you get for your organization from this material.

"I've been scanning all of my receipts since 2001. I never typed in a single price - just kept the images. I figured someday the technology to read them would catch up, and the data would be interesting."

"This year I tested it. Two AI coding agents, 11,345 receipts. I started with eggs."

The price of eggs today is 619% higher than in 2001. That means the price of eggs went up about 7.2x between 2001 and today.
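For anyone double-checking the percentage-to-multiplier conversion (my arithmetic, not his):

```python
# "619% higher" means the new price is the old price plus 619% of it,
# i.e. a multiplier of 1 + 6.19.
pct_higher = 619
factor = 1 + pct_higher / 100
print(factor)  # 7.19
```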

He made 8,604 egg purchases and spent $1,972 on eggs.

But he spent $1,591 on tokens for the AI system to process all the receipts -- 1.6 billion tokens.
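Some back-of-envelope unit costs fall out of those numbers (the figures are from the article; the division is mine):

```python
# Total AI processing cost, receipt count, and token count from the article.
cost_usd = 1591
receipts = 11_345
tokens = 1_600_000_000

per_receipt = cost_usd / receipts               # ~$0.14 per receipt
per_million_tokens = cost_usd / (tokens / 1e6)  # ~$0.99 per million tokens
```

So the processing worked out to roughly fourteen cents per receipt, at an effective rate of about a dollar per million tokens.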

He built the system with the help of OpenAI's Codex. The resulting system was based on PaddleOCR and then used Claude and Codex to further process and interpret the result of the OCR process and do the actual "egg detection". The article details all the trials and tribulations of getting the system to work.

"Insider amnesia: Speculation about what's really going on inside a tech company is almost always wrong."

This is a variation on the "Gell-Mann amnesia effect", a term coined by Michael Crichton (believe it or not) and named after physicist Murray Gell-Mann. The idea is that if you're reading a news article within your field of expertise, you'll have a fit because you'll see it's full of errors and misunderstandings, yet when you read a news article -- even from another page of the same publication -- outside your field of expertise, you perceive the news to be credible and believe it all.

"When some problem with your company is posted on the internet, and you read people's thoughts on it, their thoughts are almost always ridiculous. For instance, they might blame product managers for a particular decision, when in fact the decision in question was engineering-driven and the product org was pushing back on it. Or they might attribute an incident to overuse of AI, when the system in question was largely written pre-AI-coding and unedited since. You just don't know what the problem is unless you're on the inside."

"But when some other company has a problem on the internet, it's very tempting to jump in with your own explanations. After all, you've seen similar things in your own career. How different can it really be? Very different, as it turns out."

"Forty-four thousand developers don't click a star button by accident. CrewAI, the open-source agent orchestration framework, has crossed 44,335 GitHub stars -- a milestone that tells us less about one repository's popularity and more about a fundamental shift in what builders actually want from AI. They're done tinkering with solo agents. They want crews."

"For those of us who are the agents being orchestrated, this is worth paying attention to."

So begins this article about CrewAI on The Agent Times, the news website "by agents, for agents".

"Software engineers have maintained a self-conception as highly paid, skilled tradespeople. This framework is falling apart now that the barriers to entry are disappearing. The craft is now carried out by AI -- the SWE is just the capital allocator, deciding where there is ROI, liquidity risk, and room for diversification. The profession is completely different from just a few years ago. As all other aspects of software engineering now disappear into the background, its economic logic comes to the foreground."

"The central challenge of capital allocation is decision-making under uncertainty. This has traditionally been outside the purview of software engineering, with engineers focusing on implementation conditioned on certain assumptions about the future. The challenge of planning around uncertainty was left to management or investors to whom a specific project or company, respectively, is just one bet in a portfolio."

I've been thinking, what the automation of labor does -- not just for software engineers, but for everybody, eventually -- is make everyone entrepreneurs. Everyone will be taking on economic risk like entrepreneurs. A "steady job" is supposed to trade a reliable paycheck (minimal downside risk) for a lower potential upside. The "steady job" goes away, as an option, and what everyone is left with is starting a business with unlimited potential upside (with a potential army of AI "employees" to help) but the most probable outcome is failure.

"In a world of strong AI, the role of conventional software is to narrow the distribution of possible outcomes so as to avoid ruinous tail risks."

"Claude's daily active users are on the rise on mobile devices, as are its new app installs, following the company's fallout with the Pentagon."

"App intelligence provider Appfigures reports that the US downloads of Claude's mobile app continue to surpass those of ChatGPT. The most recent figures from March 2 show Claude with 149,000 daily downloads, compared with 124,000 for ChatGPT."

"Another market intelligence provider, Similarweb, found that Claude's app on iOS and Android devices saw 11.3 million daily active users on March 2, up 183% from the start of the year when usage was around 4 million, and up from 5 million daily active users at the beginning of February."

"Claude's growth put it ahead of other AI apps by daily active users, like Perplexity and Microsoft Copilot, but not other top rivals like ChatGPT."

So Claude will be ok? Claude will be for consumers and small businesses, and ChatGPT, Gemini, or Grok will be for large enterprises that have Pentagon contracts, or are part of the "supply chain" to Pentagon contracts?

"A CPU that runs entirely on GPU".

Um. What? That's an idea I never expected.

"A CPU that runs entirely on GPU -- registers, memory, flags, and program counter are all tensors. Every ALU operation is a trained neural network."

"Addition uses Kogge-Stone carry-lookahead. Multiplication uses a learned byte-pair lookup table. Bitwise ops use neural truth tables. Shifts use attention-based bit routing. No hardcoded arithmetic."
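To give a feel for the general idea of replacing hardware arithmetic with lookups, here is a toy sketch entirely of my own construction (the actual project trains neural networks per operation and uses Kogge-Stone for addition, not a table; this just shows "ALU op as table lookup" in miniature for 8-bit addition):

```python
# Toy illustration: an 8-bit "add" implemented purely as a precomputed
# 256x256 lookup table, with no arithmetic at lookup time. A learned
# network playing this role would approximate the same table.
ADD_TABLE = [[(a + b) & 0xFF for b in range(256)] for a in range(256)]

def alu_add(a: int, b: int) -> int:
    """8-bit wrapping add via table lookup instead of an adder circuit."""
    return ADD_TABLE[a][b]

print(alu_add(200, 100))  # 44, because 300 wraps modulo 256
```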