Apollo & Dionysus
The first portrait of Gen. Milley, from his time as the U.S. military's top officer, was removed from the Pentagon last week on Inauguration Day less than two hours after President Trump was sworn into office.
The now-retired Gen. Milley and other former senior Trump aides had been assigned personal security details ever since Iran vowed revenge for the killing of Qasem Soleimani in a drone strike in 2020 ordered by Trump in his first term.
On "Fox News Sunday," the chairman of the Senate Intelligence Committee, Tom Cotton, said he hoped President Trump would "revisit" the decision to pull the protective security details from John Bolton, Mike Pompeo and Brian Hook, who previously served under Trump.
Asked why these actions were being taken, a senior administration official who requested anonymity replied, "There is a new era of accountability in the Defense Department under President Trump's leadership—and that's exactly what the American people expect."
Gen. Milley served as chairman of the Joint Chiefs of Staff from 2019 to 2023 under both Presidents Trump and Biden.
-- Fox News
The evolution of computing has been largely defined by the interplay between servers, browsers, and search engines, forming a cycle of querying and retrieval that has structured the way humans interact with information. Historically, the browser served as an access point for human-driven queries, acting as a window into the vast digital repository of indexed data. The client-server model reinforced this structure, with users issuing search requests through their browsers, which then routed these requests to centralized servers for processing and retrieval. The process was demand-driven—users needed information, and search engines acted as intermediaries, retrieving and ranking the most relevant responses.

Fig. 15 An essay exploring the relationship between servers, browsers, search mechanics, agentic models, and the growing demand for distributed compute. It sets up a discussion that touches on the architecture of information retrieval, the role of AI as an autonomous agent in querying and decision-making, and the economic implications of compute optimization. The key tension lies in the shift from user-initiated queries to AI-driven agentic interactions, where the browser becomes less a search-engine interface and more an intermediary between users and computational agents. The equation Intelligence = Log(Compute) suggests a logarithmic efficiency in intelligence gains relative to compute expansion, an interesting angle in light of the exponential growth in AI capabilities.
However, the landscape is undergoing a radical transformation with the rise of AI as an autonomous agent. The traditional client-server dynamic is shifting toward an agentic model where AI itself queries and interprets information on behalf of users. Instead of individuals actively searching for data, AI agents anticipate needs, optimize queries, and synthesize responses in real time. This shift represents a fundamental departure from the passive search paradigm: instead of computing power being allocated only when a user makes a request, continuous AI-driven inference requires persistent and distributed compute resources. The demand for compute is no longer limited to discrete search queries but extends into ongoing, real-time decision-making.
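To make the contrast concrete, here is a deliberately toy sketch: the two-entry `index`, the `search` function, and the `Agent` class are all invented stand-ins for a retrieval backend and an agentic intermediary, not any real search API. The point is only the structural difference: the client model waits for one query, while the agent decomposes a goal, issues its own queries, and keeps persistent context (which is what keeps compute demand persistent too).

```python
# Invented toy corpus; nothing here models a real search engine.
index = {
    "gpu pricing": "Spot GPU prices fell 12% this quarter.",
    "model scaling": "Larger models need disproportionate compute.",
}

def search(query):
    """Classic client-server retrieval: one query in, one response out."""
    return index.get(query, "no result")

class Agent:
    """Agentic model: decomposes goals, issues queries itself, keeps state."""
    def __init__(self):
        self.context = []  # persistent state implies persistent compute

    def act(self, goal):
        # Rewrite one user goal into several queries on the user's behalf
        queries = [q for q in index if any(w in q for w in goal.split())]
        results = [search(q) for q in queries]
        self.context.extend(results)
        return " ".join(results)

agent = Agent()
print(agent.act("gpu scaling"))  # synthesizes both corpus entries
```

The asymmetry is the essay's point: `search` consumes compute only at request time, while the agent's loop and growing `context` consume it continuously.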
Tip
Server 🖥️
Browser, Search 🔍
Client vs. Agentic model 🤖
Querying ❓
Demand for Compute ⚡
Training data needs storage ☁️ 💾
Quantum computing: generate data for AI training ⚛️
At the core of this transformation lies the relationship between intelligence and compute, often framed as Intelligence = Log(Compute). This suggests that intelligence scales with compute but at a diminishing rate, implying that while increased computational resources drive intelligence growth, the efficiency of that growth follows a logarithmic curve. In practical terms, this means that as AI systems advance, they require exponentially more compute to make marginal gains in intelligence. This relationship has profound implications for infrastructure: data centers, cloud computing, and distributed networks must all scale accordingly to meet the escalating demands of AI inference and training.
With AI taking over as the primary agent of search and computation, the demand for distributed compute is poised to increase exponentially. Unlike traditional search engines that rely on centralized indexing and retrieval, AI-driven systems require real-time adaptation, multimodal processing, and context-aware reasoning. This necessitates a decentralized, edge-compute approach where processing occurs closer to the user rather than being funneled through a single server or cloud provider. The shift toward agentic AI intensifies the strain on hardware and network resources, making distributed compute not just an optimization but an inevitability.
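As a sketch of what "processing closer to the user" can mean in scheduling terms, the toy placer below sends each request to the lowest-latency node that still has capacity and spills over to the cloud once the edge fills up. All node names, latencies, and slot counts are invented for illustration:

```python
# Invented inventory: two small edge sites and one large cloud region.
nodes = [
    {"name": "edge-west", "latency_ms": 8, "free_slots": 1},
    {"name": "edge-east", "latency_ms": 12, "free_slots": 2},
    {"name": "central-cloud", "latency_ms": 60, "free_slots": 100},
]

def place():
    """Greedy edge-first placement: lowest latency among nodes with capacity."""
    candidates = [n for n in nodes if n["free_slots"] > 0]
    best = min(candidates, key=lambda n: n["latency_ms"])
    best["free_slots"] -= 1
    return best["name"]

assignments = [place() for _ in range(4)]
print(assignments)
# ['edge-west', 'edge-east', 'edge-east', 'central-cloud']
# Edge capacity absorbs the first three requests; the fourth
# falls back to the higher-latency central cloud.
```

The greedy rule is the simplest possible stand-in for the edge-compute argument: latency pulls work outward, but scarce edge capacity means the centralized tier never disappears.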
The consequences of this shift extend beyond technological infrastructure to economic and strategic considerations. Compute is no longer a passive resource but an active commodity, with access to scalable, high-efficiency processing power determining competitive advantage. Cloud providers, semiconductor manufacturers, and AI research labs are all locked in a race to develop more efficient architectures, including specialized AI chips, neuromorphic computing, and quantum acceleration. The demand for compute is not just growing—it is fragmenting, with specialized workloads requiring different types of processing power, from low-latency inference engines to high-performance training clusters.
As AI continues to embed itself into the fabric of digital interactions, the fundamental nature of querying itself is changing. The traditional notion of a “search” as a discrete action initiated by a human is giving way to a more fluid and continuous process, where AI agents navigate vast datasets, optimize queries autonomously, and anticipate information needs before they even arise. This transition challenges existing paradigms of access, privacy, and infrastructure, forcing a reevaluation of how computational resources are allocated and governed.
In a world where AI agents drive information retrieval, computing infrastructure must evolve to accommodate the new reality of exponential demand. Intelligence may be logarithmically bound to compute, but the expansion of AI-driven agency ensures that the hunger for compute will only accelerate. The future will belong to those who can most efficiently distribute and optimize this resource, turning raw computational power into structured, scalable intelligence.
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx

# Define the neural network layers
def define_layers():
    return {
        'Suis': ['Foundational', 'Grammar', 'Syntax', 'Punctuation', 'Rhythm', 'Time'],  # Static
        'Voir': ['Syntax.'],
        'Choisis': ['Punctuation.', 'Melody'],
        'Deviens': ['Adversarial', 'Transactional', 'Rhythm.'],
        "M'élève": ['Victory', 'Payoff', 'Loyalty', 'Time.', 'Cadence']
    }

# Assign colors to nodes
def assign_colors():
    color_map = {
        'yellow': ['Syntax.'],
        'paleturquoise': ['Time', 'Melody', 'Rhythm.', 'Cadence'],
        'lightgreen': ['Rhythm', 'Transactional', 'Payoff', 'Time.', 'Loyalty'],
        'lightsalmon': ['Syntax', 'Punctuation', 'Punctuation.', 'Adversarial', 'Victory'],
    }
    return {node: color for color, nodes in color_map.items() for node in nodes}

# Define edge weights (hardcoded for editing)
def define_edges():
    return {
        ('Foundational', 'Syntax.'): '1/99',
        ('Grammar', 'Syntax.'): '5/95',
        ('Syntax', 'Syntax.'): '20/80',
        ('Punctuation', 'Syntax.'): '51/49',
        ('Rhythm', 'Syntax.'): '80/20',
        ('Time', 'Syntax.'): '95/5',
        ('Syntax.', 'Punctuation.'): '20/80',
        ('Syntax.', 'Melody'): '80/20',
        ('Punctuation.', 'Adversarial'): '49/51',
        ('Punctuation.', 'Transactional'): '80/20',
        ('Punctuation.', 'Rhythm.'): '95/5',
        ('Melody', 'Adversarial'): '5/95',
        ('Melody', 'Transactional'): '20/80',
        ('Melody', 'Rhythm.'): '51/49',
        ('Adversarial', 'Victory'): '80/20',
        ('Adversarial', 'Payoff'): '85/15',
        ('Adversarial', 'Loyalty'): '90/10',
        ('Adversarial', 'Time.'): '95/5',
        ('Adversarial', 'Cadence'): '99/1',
        ('Transactional', 'Victory'): '1/9',
        ('Transactional', 'Payoff'): '1/8',
        ('Transactional', 'Loyalty'): '1/7',
        ('Transactional', 'Time.'): '1/6',
        ('Transactional', 'Cadence'): '1/5',
        ('Rhythm.', 'Victory'): '1/99',
        ('Rhythm.', 'Payoff'): '5/95',
        ('Rhythm.', 'Loyalty'): '10/90',
        ('Rhythm.', 'Time.'): '15/85',
        ('Rhythm.', 'Cadence'): '20/80'
    }

# Calculate evenly spaced vertical positions for the nodes in a layer
def calculate_positions(layer, x_offset):
    y_positions = np.linspace(-len(layer) / 2, len(layer) / 2, len(layer))
    return [(x_offset, y) for y in y_positions]

# Create and visualize the neural network graph
def visualize_nn():
    layers = define_layers()
    colors = assign_colors()
    edges = define_edges()
    G = nx.DiGraph()
    pos = {}
    node_colors = []

    # Add nodes and assign positions, one column per layer
    for i, (layer_name, nodes) in enumerate(layers.items()):
        positions = calculate_positions(nodes, x_offset=i * 2)
        for node, position in zip(nodes, positions):
            G.add_node(node, layer=layer_name)
            pos[node] = position
            node_colors.append(colors.get(node, 'lightgray'))

    # Add edges with their weight labels
    for (source, target), weight in edges.items():
        if source in G.nodes and target in G.nodes:
            G.add_edge(source, target, weight=weight)

    # Draw the graph
    plt.figure(figsize=(12, 8))
    edge_labels = {(u, v): d["weight"] for u, v, d in G.edges(data=True)}
    nx.draw(
        G, pos, with_labels=True, node_color=node_colors, edge_color='gray',
        node_size=3000, font_size=9, connectionstyle="arc3,rad=0.2"
    )
    nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels, font_size=8)
    plt.title("Grammar is the Ecosystem", fontsize=15)
    plt.show()

# Run the visualization
visualize_nn()


Fig. 16 Change of Guards. In Grand Illusion, Renoir was dealing the final blow to the Ancien Régime. And in Rules of the Game, he was hinting at another change of guards, from agentic mankind to one in a mutualistic bind with machines (unsupervised pianos & supervised airplanes). How prescient!