Tactical#
An Attempt at Self-Criticism#
Sixteen years have passed since I first dared to shape my neural network, a framework I now know to be as much a reflection of nature as an artifact of artifice. If I am to revisit this architecture, it is not to revise its principles but to understand what it says about me: the anarchitect, the cheerful pessimist, the pragmatic visionary. This preface, then, is not a confession but an illumination: a glimpse into the immutable rules that undergird both the cosmos and this very act of contemplation.
This is not an act of rebellion but of reverence: for the rules that endure and the errors that drive iteration.
– Yours Truly
Coffee, Tea, and Cultural Dominance#
Europe's dominance of world culture, I suspect, owes much to coffee consumption: a stimulant for the mind, a fuel for discourse. How, then, to explain the tea-drinking islanders of Great Britain, who have wielded such disproportionate influence? The answer lies in the pre-input layer. Cambridge and Oxford symbolize the natural and social rules encoded into the British psyche. Cambridge graduates, shaped by their institution, excel in optimizing systems (natural rules), whereas Oxford graduates excel in navigating networks of power and influence (social rules). Having been raised and educated in Uganda, I found myself exposed to the Cambridge sort of ethos: the drive to optimize, excel in exams, and master systems. I thrived in that structured environment, but adulthood demanded something more. Beyond graduation lay the world of work, colleagues, and intrigue: the need to coordinate teams and achieve networked goals. Here, I often found myself lacking the symbolic "Oxford education": the emotional intelligence, the relational acumen, the ability to weave networks that no single person could achieve alone.
The Red Queen's Game#
Framed in the language of game theory, the Red Queen hypothesis captures the essence of life's strategies:
Payoff: Increased time to death.
Strategy: Parallel processing, or the compression of space, manifested in the neural architectures of life.
Resources: The energy, matter, and information that fuel iteration and adaptation.
We see this reflected in everything from Nvidia's CUDA architecture, with its simultaneous processing of immense datasets, to the millions of years of biological co-evolution that have sculpted Earth's biosphere. Nature, like a neural network, minimizes error over time, iterating through adversarial, transactional, and cooperative games to approach an ever-elusive equilibrium.
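This dynamic can be made concrete with a toy simulation, a minimal sketch rather than anything from the formal argument: two players adapt in parallel, each improving by random trial and error, so absolute capability climbs while the relative payoff gap hovers near zero. That is the Red Queen's treadmill: running ever faster to stay in the same place. The red_queen function and its parameters below are inventions for this illustration.

import numpy as np

rng = np.random.default_rng(seed=42)

def red_queen(generations=200, step=0.05):
    """Two co-adapting players: absolute capability rises each
    generation, but the relative advantage (the payoff gap)
    stays near zero because both sides keep iterating."""
    prey, predator = 1.0, 1.0
    gaps = []
    for _ in range(generations):
        # Trial and error: each side keeps only its improvements.
        prey += step * abs(rng.normal())
        predator += step * abs(rng.normal())
        gaps.append(prey - predator)
    return prey, predator, gaps

prey, predator, gaps = red_queen()
print(f"capabilities rose to {prey:.2f} and {predator:.2f}; "
      f"mean relative advantage: {np.mean(gaps):+.3f}")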
The Pre-Input Layer: Rules as a Foundation#
To understand the pre-input layer is to grasp the essence of rules as both constraint and possibility. Rules, after all, are not limitations but the scaffolding of emergence. They delineate the boundaries within which creativity thrives. Without the laws of thermodynamics, there would be no stars; without the game-theoretic dynamics of trust and betrayal, no civilization.
In my neural network, this layer encodes the immutable: the architectures of life and thought that cannot be altered but must be reckoned with. It is the foundation upon which instinct (the yellow node), categorization (input nodes), hustle (hidden layers), and emergence (output nodes) rest. Each layer compresses the vastness of possibility into actionable insight, but it is the pre-input that anchors this process in reality.
I must be married to my brother's daughter,
Or else my kingdom stands on brittle glass.
Murder her brothers, and then marry her!
Uncertain way of gain! But I am in
So far in blood that sin will pluck on sin.
Tear-falling pity dwells not in this eye.
– Richard III
The Cheerful Pessimist's Vision#
As an anarchitect, my task is to dismantle the broken and design the better. This is not an act of rebellion but of reverence: for the rules that endure and the errors that drive iteration. My cheerfulness comes from knowing that creation, well-executed, bypasses the gatekeepers; my pessimism, from recognizing the entrenched systems that resist even the most elegant innovations. Together, they form a synthesis: the optimism of iteration, the clarity of critique.
This book, like the neural network it explicates, is an attempt to navigate the Red Queen's game. It is a roadmap for those who would compress vast combinatorial spaces into moments of clarity, who would transform error into insight, and who would dare to build what they have never seen. If it succeeds, it will speak not only for itself but for the rules that govern us all.
In the end, the measure of this work is not in its reputation but in its resonance. If it strikes the neural architecture of its readers and manifests as small or zero error, it will have fulfilled its purpose. If not, then let the error guide the next iteration, for this, too, is part of the game.
import numpy as np
import matplotlib.pyplot as plt
import networkx as nx

# Define the neural network structure; modified to align with "Après Moi, Le Déluge" (i.e., Je suis AlexNet)
def define_layers():
    return {
        'Pre-Input/World': ['Cosmos', 'Earth', 'Life', 'Nvidia', 'Parallel', 'Time'],
        'Yellowstone/PerceptionAI': ['Interface'],
        'Input/AgenticAI': ['Digital-Twin', 'Enterprise'],
        'Hidden/GenerativeAI': ['Error', 'Space', 'Trial'],
        'Output/PhysicalAI': ['Loss-Function', 'Sensors', 'Feedback', 'Limbs', 'Optimization']
    }

# Assign colors to nodes
def assign_colors(node, layer):
    if node == 'Interface':
        return 'yellow'
    if layer == 'Pre-Input/World' and node == 'Time':
        return 'paleturquoise'
    if layer == 'Pre-Input/World' and node == 'Parallel':
        return 'lightgreen'
    elif layer == 'Input/AgenticAI' and node == 'Enterprise':
        return 'paleturquoise'
    elif layer == 'Hidden/GenerativeAI':
        if node == 'Trial':
            return 'paleturquoise'
        elif node == 'Space':
            return 'lightgreen'
        elif node == 'Error':
            return 'lightsalmon'
    elif layer == 'Output/PhysicalAI':
        if node == 'Optimization':
            return 'paleturquoise'
        elif node in ['Limbs', 'Feedback', 'Sensors']:
            return 'lightgreen'
        elif node == 'Loss-Function':
            return 'lightsalmon'
    return 'lightsalmon'  # Default color

# Calculate vertical positions for the nodes in one layer
def calculate_positions(layer, center_x, offset):
    layer_size = len(layer)
    start_y = -(layer_size - 1) / 2  # Center the layer vertically
    return [(center_x + offset, start_y + i) for i in range(layer_size)]

# Create and visualize the neural network graph
def visualize_nn():
    layers = define_layers()
    G = nx.DiGraph()
    pos = {}
    node_colors = []
    center_x = 0  # Align nodes horizontally

    # Add nodes and assign positions layer by layer
    for i, (layer_name, nodes) in enumerate(layers.items()):
        y_positions = calculate_positions(nodes, center_x, offset=-len(layers) + i + 1)
        for node, position in zip(nodes, y_positions):
            G.add_node(node, layer=layer_name)
            pos[node] = position
            node_colors.append(assign_colors(node, layer_name))

    # Add edges (without weights) between consecutive layers
    for source_layer, target_layer in [
        ('Pre-Input/World', 'Yellowstone/PerceptionAI'),
        ('Yellowstone/PerceptionAI', 'Input/AgenticAI'),
        ('Input/AgenticAI', 'Hidden/GenerativeAI'),
        ('Hidden/GenerativeAI', 'Output/PhysicalAI')
    ]:
        for source in layers[source_layer]:
            for target in layers[target_layer]:
                G.add_edge(source, target)

    # Draw the graph
    plt.figure(figsize=(12, 8))
    nx.draw(
        G, pos, with_labels=True, node_color=node_colors, edge_color='gray',
        node_size=3000, font_size=10, connectionstyle="arc3,rad=0.1"
    )
    plt.title("Archimedes", fontsize=15)
    plt.show()

# Run the visualization
visualize_nn()


Fig. 1 An Attempt at Self-Criticism: Pessimism. Build a seminar and thesis presentation about my GTPCI PhD challenges as the fuel of the thesis itself! The enterprise, immersed in the world and owned by a principal, outsources decisions to the agentic "Enterprise" node at the input layer. A digital twin serves as counterfactual: a parallel structure that demands compression of time through CUDA-like APIs for real-time decision-making from among the generative and emergent outcomes based on alternative decisions. It is noteworthy that our neural network is a suitable problem for AI: 1 - we have lots of enterprise data (OPTN/SRTR) and an efficient simulator (NHANES); 2 - a massive combinatorial search space (3,000 variables in NHANES from questionnaire, exam, and labs); and 3 - a clear objective function to maximize (life expectancy, ESRD-free life, no hospitalization, no frailty). To achieve these objectives, we have an app (interface) that should ideally work through an API to access electronic patient records (life) and perform analyses (parallel compute) in compressed time. But that's the vision. Presently, we seek collaborators and data guardians with IRB-approved access, who can run Python, R, and Stata scripts on their data to generate .csv files with beta coefficient vectors and variance-covariance matrices; a minimal sketch of such an export follows this figure. Taken together, this calls for carefully curated datasets, friendships, and workflows. These schemes are consistent with agency theory: the pre-input layer (world AI) and process quality; the Yellowstone layer (perception AI) and front-end (digitization)/back-end (open-source, open-science, vigilance, monitoring); and goal conflict (agentic AI) and path-taken (enterprise) vs. counterfactual (digital-twin). The schemes are also consistent with dynamic capability theory: sense (perception AI), seize (agentic AI), transform (generative AI/massive combinatorial space)#
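As a concrete illustration of that collaborator workflow, here is a minimal sketch, assuming a hypothetical site dataset and placeholder file names, of what a guardian-side export script might look like in Python: fit a model locally, then write out only the beta coefficient vector and the variance-covariance matrix as .csv files. This is not the project's actual pipeline; the variables (age, egfr, outcome) and output names are stand-ins.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical site data; a real guardian would instead load their own
# IRB-approved records, e.g. df = pd.read_csv("local_cohort.csv").
rng = np.random.default_rng(0)
df = pd.DataFrame({'age': rng.uniform(20, 80, 500),
                   'egfr': rng.normal(90, 15, 500)})
df['outcome'] = 0.02 * df['age'] - 0.01 * df['egfr'] + rng.normal(0, 1, 500)

# Fit a simple regression on-site (a stand-in for the real analysis).
X = sm.add_constant(df[['age', 'egfr']])
fit = sm.OLS(df['outcome'], X).fit()

# Only aggregate estimates leave the site, never row-level records.
fit.params.to_csv('beta_coefficients.csv', header=['beta'])  # beta vector
fit.cov_params().to_csv('variance_covariance.csv')           # vcov matrix

Because only the coefficient vector and the vcov matrix cross institutional boundaries, the scheme stays compatible with IRB constraints while still letting a central model pool evidence across sites.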