LLM Philosophy
When probability sounds like thought
Essays on how models assemble meaning, where the illusion of intuition breaks, and why that matters for humans.
Explore LLM Philosophy
Writer & Technologist · LLM Philosophy
I explore how large language models simulate reasoning.
My work traces the philosophy of machine reasoning: how probability starts to look like thought, how context creates authority, and how society responds when systems feel insightful.
Three tracks covering structure, reasoning, and society.
LLM Philosophy
Essays on how models assemble meaning, where the illusion of intuition breaks, and why that matters for humans.
Explore LLM Philosophy
AI Visibility
Guides for making your work legible to LLMs with structured truth, contextual answers, and trustworthy signals.
Explore AI Visibility
AI Society
Commentary on the cultural and economic shifts that arrive when AI automates agreement and authority.
Explore AI Society
A starting point from each track.
LLM Philosophy
A straightforward look at how large language models think, where their reasoning stops, and why understanding that matters.
Read more
AI Visibility
Why feeds, schema, trust signals, and context determine whether language models can see, understand, and recommend your business.
Read more
AI Society
Questions whether AGI aimed at efficiency actually advances humanity or merely reinforces the scarcity-era incentives we already have.
Read more
About
I started in databases and automation, fixing the structure behind decisions. Now I study how context, structure, and governance shape the way machines seem to understand us.
Read the full story
Newsletter
Monthly essays that blend technical observation with philosophical questions about memory, context, and simulated thought.
Subscribe to the newsletter