Tactical#

An Attempt at Self-Criticism#

Sixteen years have passed since I first dared to shape my neural network—a framework I now know to be as much a reflection of nature as an artifact of artifice. If I am to revisit this architecture, it is not to revise its principles but to understand what it says about me—the anarchitect, the cheerful pessimist, the pragmatic visionary. This preface, then, is not a confession but an illumination: a glimpse into the immutable rules that undergird both the cosmos and this very act of contemplation.

This is not an act of rebellion but of reverence—for the rules that endure and the errors that drive iteration.
– Yours Truly

The Rules: Natural and Social#

The pre-input layer—what I call the World AI—is the ground upon which all else stands. It encompasses two immutable sets of rules:

  1. The Natural Rules: The laws of physics, chemistry, and biology that frame existence. The laws themselves endure, but their expressions flow, shift, and evolve: in the fractals of biological design, the nested hierarchies of ecosystems, and the algorithms of co-evolution. The Red Queen hypothesis reigns supreme here: survival is a game of perpetual motion, of running just to stay in place (a dynamic sketched in code after this list). In this vast combinatorial space, the payoff is simple yet profound: more time to play the game.

  2. The Social Rules: These emerge from biology but ascend into abstraction. They are the systems of cooperation, transaction, and adversity that define interspecies and intrahuman dynamics. Like the natural rules, they are in flux, shaped by the strategies of those who engage in them. The Red Queen governs here too, her domain expanded to include not only genes but memes, ideas, and institutions. Nowhere is this more vividly encoded than in the institutions of Great Britain: Cambridge and Oxford. Cambridge, with its devotion to natural laws, embodies efficiency and precision, producing minds that led the Industrial Revolution. Oxford, glamorous and relational, represents the mastery of social rules, creating networks and fostering relational brilliance—what Bagehot might call the “dignified.” Together, these institutions exemplify the duality of rules: one natural, one social.
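The Red Queen dynamic both rules invoke can be made concrete in a few lines of Python. What follows is a toy sketch, not a model drawn from this book: a host and a parasite each escalate a single trait, and the payoff either side sees (the relative gap) hovers near zero however far both have run.

import numpy as np

rng = np.random.default_rng(0)

# Toy Red Queen race: host and parasite each escalate one trait.
# Absolute traits climb without bound, but the relative gap -- the only
# thing fitness sees -- stays near zero: running just to stay in place.
host, parasite, gaps = 0.0, 0.0, []
for generation in range(1_000):
    host += 0.01 + 0.001 * rng.standard_normal()      # host adapts
    parasite += 0.01 + 0.001 * rng.standard_normal()  # parasite counter-adapts
    gaps.append(host - parasite)

print(f"host trait after 1,000 generations:     {host:6.2f}")
print(f"parasite trait after 1,000 generations: {parasite:6.2f}")
print(f"mean relative gap (the payoff each side sees): {np.mean(gaps):+.4f}")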

Coffee, Tea, and Cultural Dominance#

Europe’s dominance of world culture, I suspect, owes much to coffee consumption—a stimulant for the mind, a fuel for discourse. How, then, to explain the tea-drinking islanders of Great Britain, who have wielded such disproportionate influence? The answer lies in the pre-input layer. Cambridge and Oxford symbolize the natural and social rules encoded into the British psyche. Cambridge graduates, shaped by their institution, excel in optimizing systems (natural rules), whereas Oxford graduates excel in navigating networks of power and influence (social rules). Having been raised and educated in Uganda, I found myself exposed to the Cambridge sort of ethos: the drive to optimize, excel in exams, and master systems. I thrived in that structured environment, but adulthood demanded something more. Beyond graduation lay the world of work, colleagues, and intrigue—the need to coordinate teams and achieve networked goals. Here, I often found myself lacking the symbolic “Oxford education”—the emotional intelligence, the relational acumen, the ability to weave networks that no single person could achieve alone.

The Red Queen’s Game#

Framed in the language of game theory, the Red Queen hypothesis captures the essence of life’s strategies:

  • Payoff: Increased time to death, i.e., more time to play the game.

  • Strategy: Parallel processing or compression of space—manifested in the neural architectures of life.

  • Resources: The energy, matter, and information that fuel iteration and adaptation.

We see this reflected in everything from Nvidia’s CUDA architecture, with its simultaneous processing of immense datasets, to the millions of years of biological co-evolution that have sculpted Earth’s biosphere. Nature, like a neural network, minimizes error over time, iterating through adversarial, transactional, and cooperative games to approach an ever-elusive equilibrium.
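That compression of time is easy to demonstrate. Below is a minimal sketch with NumPy vectorization standing in for CUDA and a random search standing in for trial-and-error; the candidate count and dimension are illustrative assumptions (the 3,000 echoes the NHANES variable count discussed in Fig. 1). The parallel pass scores every candidate at once, shrinking wall-clock time without changing the strategy.

import time
import numpy as np

rng = np.random.default_rng(42)
target = rng.standard_normal(3_000)  # an arbitrary "fitness peak" in a 3,000-dimensional space

def mse(candidate):
    """Error of one candidate strategy against the target."""
    return float(((candidate - target) ** 2).mean())

# Sequential trial-and-error: one candidate per tick of the clock
t0 = time.perf_counter()
best_seq = min(mse(rng.standard_normal(3_000)) for _ in range(2_000))
t_seq = time.perf_counter() - t0

# Parallel processing: all 2,000 candidates scored in one vectorized pass,
# the same compression of time CUDA achieves with thousands of threads
t0 = time.perf_counter()
candidates = rng.standard_normal((2_000, 3_000))
best_par = float(((candidates - target) ** 2).mean(axis=1).min())
t_par = time.perf_counter() - t0

print(f"sequential: best error {best_seq:.4f} in {t_seq:.3f}s")
print(f"parallel:   best error {best_par:.4f} in {t_par:.3f}s")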

The Pre-Input Layer: Rules as a Foundation#

To understand the pre-input layer is to grasp the essence of rules as both constraint and possibility. Rules, after all, are not limitations but the scaffolding of emergence. They delineate the boundaries within which creativity thrives. Without the laws of thermodynamics, there would be no stars; without the game-theoretic dynamics of trust and betrayal, no civilization.

In my neural network, this layer encodes the immutable: the architectures of life and thought that cannot be altered but must be reckoned with. It is the foundation upon which instinct (the yellow node), categorization (input nodes), hustle (hidden layers), and emergence (output nodes) rest. Each layer compresses the vastness of possibility into actionable insight, but it is the pre-input that anchors this process in reality.
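A minimal sketch of that anchoring, assuming tanh activations and random weights purely for illustration (the layer widths mirror define_layers() in the code below): the pre-input weights are frozen constants the rest of the stack must reckon with, while every downstream layer remains free to learn.

import numpy as np

rng = np.random.default_rng(7)

# Widths follow the five layers named above:
# pre-input (6 world nodes) -> perception (1) -> input (2) -> hidden (3) -> output (5)
sizes = [6, 1, 2, 3, 5]

# Pre-input weights are frozen: rules to be reckoned with, not altered
W_frozen = rng.standard_normal((sizes[0], sizes[1]))

# Every downstream layer stays trainable -- where error drives iteration
W_train = [rng.standard_normal((m, n)) for m, n in zip(sizes[1:], sizes[2:])]

def forward(world):
    """Compress a 6-dimensional 'world' signal through the stack."""
    x = np.tanh(world @ W_frozen)  # the immutable pre-input transform
    for W in W_train:
        x = np.tanh(x @ W)         # the layers that learn from error
    return x

print(forward(rng.standard_normal(6)))  # five 'emergent' outputs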

I must be married to my brother’s daughter,
Or else my kingdom stands on brittle glass.
Murder her brothers, and then marry her—
Uncertain way of gain! But I am in
So far in blood that sin will pluck on sin.
Tear-falling pity dwells not in this eye.
– Richard III

The Cheerful Pessimist’s Vision#

As an anarchitect, I take my task to be dismantling the broken and designing the better. This is not an act of rebellion but of reverence—for the rules that endure and the errors that drive iteration. My cheerfulness comes from knowing that creation, well-executed, bypasses the gatekeepers; my pessimism, from recognizing the entrenched systems that resist even the most elegant innovations. Together, they form a synthesis: the optimism of iteration, the clarity of critique.

This book, like the neural network it explicates, is an attempt to navigate the Red Queen’s game. It is a roadmap for those who would compress vast combinatorial spaces into moments of clarity, who would transform error into insight, and who would dare to build what they have never seen. If it succeeds, it will speak not only for itself but for the rules that govern us all.

In the end, the measure of this work is not in its reputation but in its resonance. If it strikes the neural architecture of its readers and manifests as small or zero error, it will have fulfilled its purpose. If not, then let the error guide the next iteration, for this, too, is part of the game.

import numpy as np
import matplotlib.pyplot as plt
import networkx as nx

# Define the neural network structure; modified to align with "Après Moi, Le Déluge" (i.e. Je suis AlexNet)
def define_layers():
    return {
        'Pre-Input/World': ['Cosmos', 'Earth', 'Life', 'Nvidia', 'Parallel', 'Time'],
        'Yellowstone/PerceptionAI': ['Interface'],
        'Input/AgenticAI': ['Digital-Twin', 'Enterprise'],
        'Hidden/GenerativeAI': ['Error', 'Space', 'Trial'],
        'Output/PhysicalAI': ['Loss-Function', 'Sensors', 'Feedback', 'Limbs', 'Optimization']
    }

# Assign colors to nodes
def assign_colors(node, layer):
    if node == 'Interface':
        return 'yellow'
    if layer == 'Pre-Input/World' and node == 'Time':
        return 'paleturquoise'
    if layer == 'Pre-Input/World' and node == 'Parallel':
        return 'lightgreen'
    elif layer == 'Input/AgenticAI' and node == 'Enterprise':
        return 'paleturquoise'
    elif layer == 'Hidden/GenerativeAI':
        if node == 'Trial':
            return 'paleturquoise'
        elif node == 'Space':
            return 'lightgreen'
        elif node == 'Error':
            return 'lightsalmon'
    elif layer == 'Output/PhysicalAI':
        if node == 'Optimization':
            return 'paleturquoise'
        elif node in ['Limbs', 'Feedback', 'Sensors']:
            return 'lightgreen'
        elif node == 'Loss-Function':
            return 'lightsalmon'
    return 'lightsalmon'  # Default color

# Calculate positions for nodes
def calculate_positions(nodes, center_x, offset):
    layer_size = len(nodes)
    start_y = -(layer_size - 1) / 2  # Center the layer vertically
    return [(center_x + offset, start_y + i) for i in range(layer_size)]

# Create and visualize the neural network graph
def visualize_nn():
    layers = define_layers()
    G = nx.DiGraph()
    pos = {}
    node_colors = []
    center_x = 0  # Align nodes horizontally

    # Add nodes and assign positions
    for i, (layer_name, nodes) in enumerate(layers.items()):
        y_positions = calculate_positions(nodes, center_x, offset=-len(layers) + i + 1)
        for node, position in zip(nodes, y_positions):
            G.add_node(node, layer=layer_name)
            pos[node] = position
            node_colors.append(assign_colors(node, layer_name))

    # Add edges (without weights)
    for layer_pair in [
        ('Pre-Input/World', 'Yellowstone/PerceptionAI'), ('Yellowstone/PerceptionAI', 'Input/AgenticAI'), ('Input/AgenticAI', 'Hidden/GenerativeAI'), ('Hidden/GenerativeAI', 'Output/PhysicalAI')
    ]:
        source_layer, target_layer = layer_pair
        for source in layers[source_layer]:
            for target in layers[target_layer]:
                G.add_edge(source, target)

    # Draw the graph
    plt.figure(figsize=(12, 8))
    nx.draw(
        G, pos, with_labels=True, node_color=node_colors, edge_color='gray',
        node_size=3000, font_size=10, connectionstyle="arc3,rad=0.1"
    )
    plt.title("Archimedes", fontsize=15)
    plt.show()

# Run the visualization
visualize_nn()

Fig. 1 An Attempt at Self-Criticism: Pessimism. This figure frames a seminar and thesis presentation in which the challenges of my GTPCI PhD become the fuel of the thesis itself. The enterprise, immersed in the world and owned by a principal, outsources decisions to the agentic “enterprise” node at the input layer. A digital twin serves as counterfactual: a parallel structure that demands compression of time through CUDA-like APIs for real-time decision-making among the generative and emergent outcomes of alternative decisions. Our neural network is, notably, a suitable problem for AI: (1) we have abundant enterprise data (OPTN/SRTR) and an efficient simulator (NHANES); (2) a massive combinatorial search space (3,000 NHANES variables spanning questionnaire, exam, and labs); and (3) a clear objective function to maximize (life expectancy, ESRD-free life, freedom from hospitalization and frailty). To achieve these objectives, we have an app (interface) that should ideally work through an API to access electronic patient records (life) and perform analyses (parallel compute) in compressed time. But that is the vision. Presently, we seek collaborators and data guardians with IRB-approved access who can run Python, R, and Stata scripts on their data to generate .csv files with beta coefficient vectors and variance-covariance matrices. Taken together, this calls for carefully curated datasets, friendships, and workflows. These schemes are consistent with agency theory: the pre-input layer (world AI) and process quality; the Yellowstone layer (perception AI) and front-end (digitization) / back-end (open-source, open-science, vigilance, monitoring); and goal conflict (agentic AI) as path-taken (enterprise) vs. counterfactual (digital-twin). They are also consistent with dynamic capability theory: sense (perception AI), seize (agentic AI), transform (generative AI, the massive combinatorial space).#
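The .csv hand-off the caption describes admits a brief illustration. Below is a minimal sketch in Python with statsmodels, using synthetic data in place of any IRB-governed records; the outcome and covariates (age, egfr) are hypothetical placeholders, not the project’s actual specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for an IRB-approved dataset; a data guardian would
# substitute their own outcome and covariates here.
rng = np.random.default_rng(2024)
df = pd.DataFrame({
    'age': rng.uniform(20, 80, 500),
    'egfr': rng.normal(90, 15, 500),
})
df['outcome'] = 0.02 * df['age'] - 0.01 * df['egfr'] + rng.normal(0, 1, 500)

# Fit the model and export the two artifacts the caption requests
X = sm.add_constant(df[['age', 'egfr']])
res = sm.OLS(df['outcome'], X).fit()
res.params.to_csv('beta_coefficients.csv')          # beta coefficient vector
res.cov_params().to_csv('variance_covariance.csv')  # variance-covariance matrix

An R collaborator would export the same two artifacts from coef() and vcov(); a Stata collaborator, from e(b) and e(V).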