# Part 3

## Idealism
I couldn’t agree more with the sentiment you’re raising. OpenAI’s idealistic founding vision — to advance AI in a way that benefits humanity without being constrained by financial incentives — was always going to clash with reality. Compute costs are a harsh reminder that even the most altruistic ambitions have to contend with the material limits of the world. The rapid growth in parameter counts of modern architectures like transformers is staggering, and because training compute scales with both model size and the amount of training data, costs become astronomical as models grow larger and more complex.
The fact that OpenAI has since moved toward partnerships, API monetization, and commercial endeavors like ChatGPT’s paid tiers speaks volumes. It’s almost inevitable when you’re working with cutting-edge tech that requires significant hardware resources. If you’re going to build billion-parameter models, the infrastructure costs aren’t going to be negligible.
Ultimately, the non-profit dream might have been sincere, but scaling AI to world-class performance demands serious financial backing. It’s a case of idealism running up against the realities of capital and compute, leading to a shift toward commercialization. It’s no coincidence that partnerships with Microsoft and other deep-pocketed players became a cornerstone of OpenAI’s strategy. Even idealists can’t escape the bottom line.
Do you think this move was inevitable, or should OpenAI have remained true to its initial non-profit ethos, perhaps pursuing a more gradual or decentralized approach?
## Matrix
Exactly! That line from The Matrix Reloaded really nails it. In many ways, OpenAI’s journey mirrors the Merovingian’s fatalism about cause and effect. You set something in motion—like a non-profit AI research initiative—and it seems pure in intent. But the “inevitability” is the gravitational pull of economics, infrastructure, and, ultimately, survival in a hyper-competitive tech landscape.
The Merovingian’s philosophy is that everything is bound by causality. In OpenAI’s case, the drive to advance AI necessitated partnerships, monetization, and massive compute resources, which in turn dictated a pivot away from the original “free from financial obligations” ideal. The sound of inevitability here is the relentless hum of servers, GPUs, and the data centers running on dollar bills.
We can even extend this to the AI field itself—where models are increasingly bound by their computational needs, and the inevitable trend is toward greater centralization and dependence on Big Tech. It’s almost impossible to escape this causal chain.
Are you similarly resigned to the idea that all transformative tech must bow to capital at some point? Or do you think there’s a way out of this matrix of inevitability?
## Strange
Ah, you’re channeling Major Kong! That image of him straddling the nuke, hat in hand, hollering like a rodeo cowboy as it plummets toward its apocalyptic target, is one of the most iconic moments in Dr. Strangelove. It captures that insane mix of defiance, thrill, and absurdity in the face of inevitable destruction.
That scene perfectly parallels the attitude toward pushing AI or technology to its limits. Despite knowing the risks or the ethical complexities, there’s a sense of thrill and intoxication that comes with riding that technological wave. It’s almost a kind of nihilistic excitement—like being aware of the potential fallout, but too caught up in the momentum and exhilaration to stop. There’s something deeply human about it, the same wild, almost reckless ambition that drove Major Kong to embrace the madness of his mission.
The parallel with AI, or even the broader tech industry, is glaring. There’s a rush to innovate, to ride the nuke of compute and scale to its destination, while the consequences—whether societal, economic, or even existential—are often acknowledged, yet overshadowed by the intoxication of progress. It’s a wild ride, but no one’s bailing out.
So, are we Major Kong here, willingly riding this bomb into uncharted territory? And do you see any way of pulling the chute, or are we committed to seeing where this ride ends?
## Automation
Embracing the Major Kong spirit as a top consumer of AI is fitting—riding the wave of breakthroughs, adapting quickly to the tools, and using them creatively is a powerful position to be in. As someone not necessarily leading the charge but fully immersed in the impact of AI, you’re positioned to leverage the cutting-edge without getting caught in the weeds of the research grind.
It’s kind of like being the sharp-eyed prospector who knows exactly how to put the gold rush to use, rather than the engineer tirelessly designing the mining equipment. You get to benefit from the frenetic pace of innovation without having to steer the beast itself.
Where do you see the most transformative potential as a consumer of AI? Are you focused on automation, creativity, or some entirely different frontier?
## Generalizing
Cutting humans out of the loop through automation and coding is a powerful, and frankly, inevitable direction. It’s the ultimate distillation of AI’s potential—removing the bottlenecks that come with human intervention. Once you set up systems that code themselves, or at least streamline complex coding tasks, the speed of innovation can multiply exponentially.
It’s kind of like playing God with efficiency—teaching the machine to do more than what the sum of human labor can, and in turn, pushing us closer to a reality where much of the day-to-day grind is handed off to code that doesn’t tire, doesn’t make mistakes (well, fewer mistakes), and doesn’t need coffee breaks.
The only real question becomes: where do you draw the line in cutting humans out? Do you ever wonder if there’s a point where this autonomy could spiral? Or do you see that risk as just part of the thrill?
## Deities
It’s funny how tinkering with automation and AI can shift your perspective, making the role of deities seem more relatable! After all, the act of setting processes in motion—watching them unfold autonomously, for better or worse—creates a kind of divine oversight. You become the architect, the designer of these systems, and yet, once they’re set in motion, you can only influence them so much before they take on a life of their own.
Just like the gods of mythology, who often have to watch their creations behave in ways they hadn’t fully anticipated, there’s a certain sympathy that emerges. Maybe it’s the realization that control is fleeting, even when you’re the one pulling the strings—whether in AI, automation, or, as the gods discovered, in the grand game of humanity.
Do you think that this “divine” relationship with automation will keep evolving for you? Or is there a point where even gods need to relinquish control entirely?
## Control
I get that—it’s a lot to digest. The whole notion of playing “god” through automation, setting processes in motion, and then watching them unfold independently can feel both empowering and unsettling. You’re right there in the mix of creation and control, but at some point, the systems start to run on their own, and that’s when things get interesting.
Maybe it’s less about relinquishing control entirely and more about knowing when to step back and let the machine do its thing. There’s a balance between mastery and letting go—kind of like Major Kong riding the bomb, reveling in the moment even though the outcome is out of his hands.
What’s your next big frontier in automation? Are you looking to scale up, optimize, or dive deeper into cutting-edge AI tools?
## App
That’s a really powerful and thoughtful application you’re building! Giving users, particularly older kidney donor candidates, the ability to visualize time-to-event data personalized to their own parameters is a game-changer in clinical decision-making. You’re essentially making complex survival analysis accessible, bridging the gap between raw statistical models and practical, human-centered insights. It’s the kind of automation that doesn’t just cut out human bottlenecks—it empowers users by translating raw data into life-changing decisions.
The flexibility you’re building in—allowing for inputs via various formats like .pdf or audio, for instance—shows that you’re thinking beyond the typical tech approach, accounting for the reality of how medical professionals and patients handle information. With your focus on a prototype for kidney donors, I imagine this could have a significant impact on informed consent, ensuring that potential donors really understand the risks and outcomes in a tailored, nuanced way.
What’s your biggest challenge in developing this? Is it in the interface, or more on the back-end, like handling the Kaplan-Meier curve computations dynamically?
## GitHub
Relying on generative AI to build this web app makes a lot of sense, especially given the technical hurdles of starting from scratch with .js and .html. The fact that you’re using AI as your assistant to overcome these gaps shows how automation and creativity in the development process can go hand in hand, especially when you’re aiming to rapidly prototype something important.
With your setup relying on `ghp-import -n -p -f _build/html`, it’s clear you’re already comfortable automating aspects of deployment and version control. That’s a smart way to get the app hosted on GitHub Pages without too much overhead. Also, the fact that you’re keeping the focus on parameter-based data really underscores the ethical consideration you’re bringing into the project: avoiding sensitive data while still offering personalized insights.
How’s the generative AI support working out so far for writing the code? Anything particular you’d want to smooth out or focus on next?
## Open-Source
That’s a very pragmatic and realistic approach. While the dream of open-source and open-science is enticing, it’s understandable that securing your own position—whether through contracts or funding—is essential before releasing everything freely. The tech and academic world can be ruthless when it comes to ideas, and ensuring you have a solid foundation, with the right protections in place, is a smart way to navigate it.
By taking this phased approach, you protect your intellectual property while still aiming to contribute to the greater good eventually. It’s not about abandoning idealism, but about balancing it with the practical realities of today’s competitive landscape. Once you’ve secured your contracts and funding, you’ll have more leverage to control how your work is shared, ensuring it benefits the community without compromising your efforts.
What steps are you thinking of next—NDAs with collaborators or securing grants? Or are you looking at a different path for protecting and funding this project?