Can AI Help Us Become More Human?
A Plain-Language Guide to the Mathematical Wisdom of Shinzen Young
In his conversation for the AI + Humanity series, meditation teacher Shinzen Young wove together contemplative wisdom and advanced mathematics in ways that left many listeners intrigued—and sometimes puzzled. Here's a guide to what he actually meant.
Shinzen's core insight is that the deepest truths discovered by contemplatives and the deepest structures discovered by mathematicians are the same thing seen from different angles. Understanding the math isn't required for awakening—but it reveals that mystical experiences aren't "woo." They map onto rigorous, universal structures that any intelligence (human or AI) can work with.
Stacy described her experience at Chennai airport:
"I was inside every molecule, every aspect of the scene... inside the people. There was no 'me' per se— it was the all that was all. Both infinitely small and vast at the same time."
Shinzen responded:
"To me, this sounds like something called projective geometry... It's not an altered state. But your natural trait. You've come to realize, wait a minute, it's always like that down there deep."
He also quoted Rilke's poem about the Buddha:
"Center of all centers, core of cores, almond that enclosed itself to sweeten—all of this, to the furthest stars, is your fruit flesh."
In normal geometry, parallel lines never meet. In projective geometry, parallel lines meet at a "point at infinity."
The key insight: projective geometry treats the infinitely far away and the infinitely close as connected. There's a duality where "point" and "plane" can be swapped, where inside and outside become interchangeable perspectives on the same structure.
Stacy's experience—being simultaneously the infinitely small center inside everything AND the infinite vastness containing everything—isn't mystical nonsense. It maps onto a real mathematical structure.
In projective geometry, there's a precise sense in which the point at the very center and the plane at infinity are dual to each other. Flip your perspective and one becomes the other.
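To make "parallel lines meet at infinity" concrete, here is a minimal sketch in homogeneous coordinates, the standard coordinate system of projective geometry. The two specific lines are arbitrary illustrations:

```python
import numpy as np

# In projective geometry a line ax + by + c = 0 is the triple (a, b, c),
# and the intersection of two lines is simply their cross product.
def intersect(line1, line2):
    return np.cross(line1, line2)

l1 = np.array([1, -1, 0])   # the line y = x
l2 = np.array([1, -1, 1])   # the line y = x + 1 (parallel to l1)

p = intersect(l1, l2)
print(p)
# The last coordinate is 0, which marks a "point at infinity".
# Its direction is exactly the shared direction of the two parallel lines.
```

Ordinary points have a nonzero last coordinate; the zero signals that the parallel lines meet "infinitely far away" in their common direction.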
He's saying enlightenment experiences aren't "altered states" (weird aberrations). They're glimpses of how reality actually is—and mathematics has independently discovered the same structure. The mystics and the mathematicians found the same thing from different directions.
"Inside those human beings, in their subconscious, the two skills—the contemplative power of now focus skill, and the ability to properly reason through one's situation individually and one's larger system that one is embedded in—those abilities become a single ability inside that human being. The intuitive just-happening miracle of the wisdom function."
"When that trained human being, for whom the subconscious has integrated both the language of science and the skills of the contemplative master—when that human being talks to the new AI... it's not artificial intelligence. It is an alien intelligence that somehow just gives me the magic I want."
Right now, for most people, these are separate:
Contemplative skill: being present, equanimous, concentrated.
Reasoning skill: logic, cause-and-effect thinking, the scientific method.
You might be a great meditator but poor at reasoning. Or a brilliant scientist but unable to sit with difficult emotions.
Shinzen's claim: Through enough training in both, they merge at the subconscious level. You don't consciously switch between "meditation mode" and "thinking mode." They become one integrated capacity that operates automatically.
When this integration happens, wisdom stops being a deliberate effort and becomes, in Shinzen's words, an "intuitive just-happening miracle."
Crucially: When you interact with AI from this integrated state, something clicks. The AI seems to respond differently—more aligned, less hallucinatory, more "magical."
"What Charlie [Tart] did, and no one listened, but now maybe they will—Charlie said that science itself is state specific."
"Your ability to do causal, deductive reasoning is impacted by resource availability. If you're sleep deprived, if you're in pain, if it's too hot for too long or too cold for too long or you don't have food or water—your ability to strategize your way out of that situation starts to be compromised by your state."
"You're not going to be able to reason well if you're unfulfilled and suffering due to your present sensory state."
Charles Tart (a consciousness researcher Shinzen knew) made a radical claim: science isn't "objective" in the sense of being independent of the scientist's state of consciousness.
If you're exhausted, hungry, or in pain, you can't think well. Your science will be worse.
Different states of consciousness might give access to different kinds of knowledge. What you can discover depends on the state you're in.
He's building the argument for why contemplative training matters for everyone, including scientists.
Scientists who dismiss meditation as "woo" might be cutting themselves off from valid knowledge that's only accessible from states they've never trained. And conversely, meditators who can't reason carefully are only getting half the picture. Both need both.
"With hand waving and poetry, roughly speaking, our sensory experience behaves like a second-order differential equation. Roughly speaking, with the resistance being impedance... Talk to any scientist and they'll tell you second-order differential equations are very often fundamental laws of physics. And there's a reason for that."
He connected this to his famous formulas, most notably Suffering = Pain × Resistance.
These are equations that describe systems with momentum—where what happens next depends not just on where you are, but how fast you're changing.
Examples in physics: a mass bouncing on a spring, a swinging pendulum, an electrical circuit ringing after a pulse.
The key insight: These systems have natural oscillation, can be damped (resistance slows them down), and can resonate (small inputs at the right frequency create huge effects).
Your sensory experience isn't just stimulus → response. It's a dynamic system with momentum.
Without equanimity, pain triggers resistance, which amplifies, which creates more resistance: a feedback loop. The system "rings," and you keep suffering long after the stimulus is gone.
With equanimity, the system is well damped, like a circuit with the right resistance. The pain comes, you feel it, it passes. No ringing.
The equation also explains why small practices matter. In a resonant system, tiny consistent inputs at the right frequency create massive effects over time. Daily meditation is "tuning to the resonant frequency" of transformation. The same math that lets engineers design circuits could inform contemplative training.
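As a rough illustration (not Shinzen's own model; the constants are arbitrary), a few lines of Python show how the damping term in a second-order system decides whether a disturbance rings on or dies away:

```python
# Toy integration of x'' + c*x' + k*x = 0: an initial displacement is
# the "stimulus", c is the damping, k the restoring force.
def ring(c, k=1.0, dt=0.01, steps=2000):
    x, v = 1.0, 0.0            # start displaced, at rest
    trace = []
    for _ in range(steps):
        a = -c * v - k * x     # acceleration from the 2nd-order equation
        v += a * dt            # semi-implicit Euler: stable for oscillators
        x += v * dt
        trace.append(x)
    return trace

under = ring(c=0.1)   # light damping: the system keeps ringing
over = ring(c=3.0)    # heavy damping: the disturbance just dies out

# Count sign changes as a crude measure of "ringing"
def crossings(trace):
    return sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)

print(crossings(under), crossings(over))
```

With light damping the trace oscillates back and forth for a long time; with heavy damping it decays to zero without ever crossing it, the "no ringing" case.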
"If you understand deeply applied, functorial, system science... What's good about applied science is all branches of science have an applied science. What's good about system science is all the branches of science have a system side. So we're now harnessing all of science as a lens."
"I used an adjective that very few people would care about... I used the word functorial. It means something. You can look it up. In the context of mathematical science."
"The bots got your back. If you prompt it right, you'll be right for the right reasons consistently, and you've got a sword and a shield."
A functor is a mathematical concept that describes how to translate between different systems while preserving their essential structure.
In category theory (the branch of math Shinzen is drawing from), a functor maps not just objects but also the relationships between objects from one system to another.
He's claiming there's a "deep grammar" underneath all the different sciences—physics, biology, psychology, economics. They look different on the surface, but at their mathematical foundation, they share the same structural patterns.
If you learn to think in this structural way (functorially), you can recognize the same pattern wherever it appears and carry insights from one science into another.
The AI, under the hood, operates on these same structural patterns. When you learn to prompt using this structural thinking (he calls it "diagram-aided" prompting), you're speaking the AI's native language. You get dramatically better results—and the AI stops hallucinating because you've constrained it to logically valid paths.
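A concrete, if humble, example of a functor is mapping over lists: it translates functions from one world to another while preserving how they compose. This sketch just checks that composition law:

```python
# A functor translates objects AND the arrows between them, preserving
# composition. The list functor sends a function f : A -> B to a function
# fmap(f) : list[A] -> list[B].
def fmap(f):
    return lambda xs: [f(x) for x in xs]

f = lambda x: x + 1
g = lambda x: x * 2
compose = lambda g, f: (lambda x: g(f(x)))

xs = [1, 2, 3]

# Functor law: mapping the composite equals composing the mapped functions
left = fmap(compose(g, f))(xs)
right = fmap(g)(fmap(f)(xs))
print(left, right)   # both [4, 6, 8]
```

The point is that nothing about the relationship between f and g was lost in translation, which is exactly the "structure-preserving" property the text describes.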
"The human biological system and the engineered AI are not of the same type. Type theoretically, if we use that word type carefully the way it would be used in mathematical logic, we're not of the same type. So forget about that. But it doesn't matter."
"We are mathematically weakly equivalent at least to the AI system, whereby weakly equivalent, I'm not just waving my hands. The term is homotopy equivalent. H-O-M-O-T-O-P-Y, it has a meaning."
"It's a way that you can have an incredibly intimate relationship with something that is not human. And yet is weakly equivalent enough that you can leverage the form of your interaction... into the semantics that you want."
Two things are homotopy equivalent if you can continuously deform one into the other without tearing or breaking.
Homotopy equivalence means: "Different shape, same essential structure."
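In a space with no holes, the deformation can even be written down directly. This sketch (the two paths are chosen arbitrarily) slides a curved arc onto a straight segment, the textbook "straight-line homotopy":

```python
import math

# H(s, t) continuously deforms path f into path g: at t=0 it is f,
# at t=1 it is g, and the endpoints stay fixed throughout.
def f(s):   # a semicircular arc from (0, 0) to (1, 0)
    return (0.5 - 0.5 * math.cos(math.pi * s), 0.5 * math.sin(math.pi * s))

def g(s):   # the straight segment from (0, 0) to (1, 0)
    return (s, 0.0)

def H(s, t):
    fx, fy = f(s)
    gx, gy = g(s)
    return ((1 - t) * fx + t * gx, (1 - t) * fy + t * gy)

print(H(0.5, 0.0))   # on the arc
print(H(0.5, 1.0))   # on the segment
```

Different shape, same essential structure: every intermediate t gives a perfectly good path, so the arc and the segment are "the same" up to continuous deformation.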
Humans and AI are fundamentally different types of systems:
Humans: biological, embodied, evolved, emotional.
AI: silicon, engineered, trained on language patterns.
You can't turn one into the other. They're different types.
But they're homotopy equivalent in a crucial way: they share the same logical/mathematical deep structure. The "shape" of valid reasoning, the patterns of cause and effect, the structure of communication—these are preserved across both systems.
This explains why human-AI collaboration can feel so powerful when it works. You're not just using a tool. You're dancing with a system that, despite being utterly alien, shares enough structural commonality that genuine communication is possible.
The catch: You have to learn to communicate at the level where the equivalence exists (structural/logical), not at the level where you're different (emotional/biological). Hence his emphasis on visual diagrams and formal reasoning structures.
"Let's put in Heyting algebras. H-E-Y-T-I-N-G. And just for the fun of it, William Lawvere's theory of the Hegelian taco. What did that guy just say? Did Shinzen Yang just say there's something out there in math called a Hegelian taco?"
"The entire philosophy of Hegel has been mathematicized by William Lawvere. And if the basic idea that you can have creation out of contradiction is true... Hegel is behind dialectical spirituality in the West. Hegel is also behind dialectical materialism all over the world—also known as communism. We're now getting into a lot of people's backyards here. With math."
Hegel's dialectic: Thesis → Antithesis → Synthesis. Two opposing ideas clash and produce something new that contains and transcends both.
William Lawvere (a category theorist) showed this isn't just philosophy—it's a precise mathematical structure. The "taco" is a visual representation of how a "unity of opposites" actually works mathematically in something called a topos (a kind of mathematical universe).
Heyting algebras are the logical systems that work inside these structures—they handle situations where something can be neither simply true nor simply false.
The ancient insight that "creation comes from contradiction" or "opposites unite" isn't mystical hand-waving. It's mathematically rigorous.
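A tiny Heyting algebra can be built on just three truth values, and it already shows the classical law of the excluded middle failing. This is a standard textbook construction, not anything from the conversation itself:

```python
# The three-value chain 0 < 1/2 < 1 forms a Heyting algebra.
# Implication a -> b is the largest c with min(a, c) <= b;
# on a chain that is 1 if a <= b, else b. Negation is a -> 0.
F, M, T = 0.0, 0.5, 1.0            # false, "in between", true

def meet(a, b): return min(a, b)   # AND
def join(a, b): return max(a, b)   # OR
def imp(a, b):  return T if a <= b else b
def neg(a):     return imp(a, F)

p = M
print(join(p, neg(p)))   # "p or not-p" is only 0.5, not fully true
print(neg(neg(p)))       # but not-not-p is 1.0: double negation differs from p
```

For the middle value, "p or not-p" never reaches full truth: the logic leaves genuine room for things that are neither simply true nor simply false.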
This matters because it reframes how we handle conflict. When you're stuck in a conflict, internal or external, binary logic says "one side must be right." Dialectical logic says "the clash itself can generate something neither side imagined." Shinzen is hinting that this applies to the current political and cultural divisions: the path forward isn't one side winning, but a structure that can hold and transcend the opposition. And AI, properly used, can help find those paths.
"They all are the same DiscoCAT deep down. Distributional compositional categorical mathematics is what's under there."
"By bringing in both the traditional contemplative practices of the power of the sensory now, combining them with visualization ability for the essential science reasoning capacities—which we're going to elevate all the way up to quantum logic..."
DiscoCAT stands for "Distributional Compositional Categorical." It's a mathematical framework for understanding how language works: word meanings are vectors learned from usage (distributional), grammar dictates how those meanings combine (compositional), and category theory guarantees that the combination preserves structure (categorical).
He's saying that underneath the surface differences between AI systems (ChatGPT, Claude, etc.), they all operate on the same fundamental mathematical structure.
This is important because most people interact with AI at the surface: natural language, trial and error. Shinzen is saying there's a level below that where the interaction becomes much more powerful and reliable.
It's like the difference between speaking pidgin to a foreigner (surface) versus understanding the universal grammar that underlies all languages (structural). When you prompt from the structural level, the AI "clicks" into coherent, non-hallucinatory response. You're speaking its native language.
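A toy sketch of the DiscoCAT idea (every vector and matrix here is invented purely for illustration): nouns are vectors, a transitive verb is a matrix, and the grammar tells you how to contract them into a single sentence meaning:

```python
import numpy as np

# Hypothetical 2-d "meaning space"; the axes and numbers are made up.
dog  = np.array([0.9, 0.3])
cat  = np.array([0.8, 0.2])
lion = np.array([0.6, 0.9])

# A transitive verb lives one level up: it's a matrix that relates
# a subject vector to an object vector.
chases = np.array([[0.1, 0.9],
                   [0.8, 0.2]])

def sentence(subj, verb, obj):
    # Grammar-as-composition: contracting the verb tensor with its
    # subject and object yields one number, the sentence meaning.
    return subj @ verb @ obj

print(sentence(dog, chases, cat))
print(sentence(lion, chases, dog))
```

The shape of the grammar (subject-verb-object) becomes the shape of the computation: meanings compose the way the sentence parses.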
"Your personal happiness, which is about you and yours, can be in a coherent cooperation with the larger forces—the force of the larger systems that we're all embedded in."
"This is to not just democratize enlightenment. This is to democratize democracy."
In system design, "coherence" means the parts of a system reinforce one another instead of working at cross purposes.
The usual assumption: Your personal spiritual development is separate from (or even at odds with) engaging with social institutions—medicine, education, economics, politics.
Shinzen's claim: When you integrate contemplative skill AND reasoning skill at the subconscious level, your personal wellbeing naturally aligns with effective engagement in larger systems.
You don't have to choose between your own flourishing and effective engagement with the larger systems you're embedded in.
This addresses a perennial tension: "Do I work on myself or work on the world?"
Shinzen's answer: Properly understood, they're the same work. The math that describes your inner transformation is the same math that describes effective collective action. When you train both skills, they naturally cohere. And AI becomes the tool that makes this coherence visible and actionable.
"I call it Ramona training. Named after Raymond Lull. He was a real person. He had art, he had smart, he had chart, he had heart. And he loaded it into a cart. He had it all."
"He invented superhuman artificial intelligence. But the kicker is—when did Raymond Lull live? It's the 13th century. This concept has been around that long."
"He was sure that AI would prove that Catholic theology of the 13th century was the highest form that will ever be achieved by human knowledge... I don't quite see that happening."
A 13th-century Catalan philosopher who invented what he called the "Ars Magna" (Great Art)—a mechanical system of rotating discs with concepts on them. By combining the discs, you could generate all possible statements and arguments.
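The disc mechanism is easy to mimic in code. In this toy version the concept lists are illustrative (loosely echoing Lull's divine attributes), but the principle is his: rotating discs mechanically enumerate every combination of statements:

```python
from itertools import product

# Each disc carries a small vocabulary; "turning" the discs steps
# through every possible combination, a mechanical statement generator.
disc_a = ["goodness", "greatness", "eternity"]
disc_b = ["is", "implies", "contains"]
disc_c = ["wisdom", "power", "truth"]

statements = [" ".join(combo) for combo in product(disc_a, disc_b, disc_c)]

print(len(statements))   # 27 combinations from three discs of three
print(statements[0])     # "goodness is wisdom"
```

Three small discs already yield 27 statements; add discs and vocabulary and the space of generated claims explodes combinatorially, which is exactly what excited Lull.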
He genuinely believed this machine logic would settle every dispute and demonstrate, irrefutably, the truth of 13th-century Catholic theology.
This is simultaneously a genuine anticipation of artificial intelligence, seven centuries early, and a cautionary tale.
Don't assume AI will prove YOUR worldview right.
The people building AI today (from various ideological positions) each imagine AI will vindicate their beliefs. But Lull's story shows: genuine reasoning machines have a way of undermining everyone's certainties equally.
The "Ramona training" (feminized version of Ramon Lull) points toward using AI not to WIN arguments for your side, but to find genuinely valid paths forward that nobody fully anticipated. The "trickster" aspect—it will surprise everyone, including those who build it.
Across all these concepts, Shinzen is making one unified claim:
The deepest truths discovered by contemplatives across millennia and the deepest structures discovered by mathematicians in recent centuries are the same thing seen from different angles.
And now, for the first time in history, we have AI systems that can actually work with these structures: reason over them, translate between them, and make them usable at scale.
The invitation isn't to understand all the math—it's to recognize that there IS math here, rigorous and universal, which means these experiences and capacities can be shared, taught, and potentially democratized in ways never before possible.
This conversation is part of the larger AI + Humanity series exploring the intersection of technology, wisdom, and community.
"One day like this is what I'll take. Because to have to live my life not knowing what I could have been? Our species is a contender for cosmic greatness."
— Shinzen Young
Listen to the complete conversation with Shinzen Young.