madvi.ai

Madeleine Parker (Madvi)


On Value, Intelligence, and a Living Way of Thinking

Most of what I work on now did not begin as the thing it is. It began as a question, tied to observations that kept returning, persistently, over more than a decade.

What do we mean by value, once humans are placed back at the center?

My academic background spans economics, philosophy, psychology, linguistics, and later data science—particularly quantitative approaches to qualitative data, and the synthesis of meaning across systems. What interested me early on was never a single discipline, but the relationships between them: how language shapes perception, how perception shapes behavior, how behavior becomes economics, culture, and eventually infrastructure.

During my years at University College London (UCL) and later in my continued studies, I became increasingly dissatisfied with how narrowly value was being defined—reduced to price, productivity, efficiency, or scale. These definitions felt incomplete, and often actively harmful, when set against human well-being, ecological limits, and the lived experience of people moving through increasingly mediated worlds. When I started out in tech, I was confused by how the majority of builders, engineers, investors, and founders had little to no background in philosophy. It was hard to find people in tech with philosophical vision or reasoning for what they were creating.

Before I moved fully into entrepreneurship, I began writing a book. Its working title was The Human Theory of Value.

The idea was not to propose another economic framework in the traditional sense, but to remap value from a humanistic perspective—one that takes seriously the role of technology, the internet, intelligence systems, ecology, and the inner lives of people. It asked how these forces interact, where agency lives within them, and how meaning is produced, distorted, or lost—and to what outcome.

By the late 2010s, many of these ideas had already been circulating for years in my research notes and papers. They were shaped by philosophy—especially the philosophy of language and the philosophy of technology—by economics, and by a sustained attention to how information flows shape culture, education, art, and collective value systems. Even in art books, I was trying to draw representations of these systems: ways for people to relationally interface with one another, with their desires, with the collective, and with nature, and to imagine what this might look like as a new type of internet experience and algorithmic approach.

I was—and remain—interested in how the internet makes us, as much as how we make the internet.

What does it mean to be economically prosperous once prosperity includes psychological health, relational depth, ecological continuity, and time? How do we design systems that increase agency rather than extract it? How do we help people reconnect—with each other, with their environments, and with their own sense of meaning—without defaulting to politics as the primary answer?

When I first moved to Berlin, one of the earliest conversations that clarified these questions for me was over coffee with Marcus Mutz. We spoke about a future in which people would have sovereign technology—technology that serves their interests, reflects their values, and preserves agency. In that future, systems would compete to meet human-defined needs, rather than shaping needs to serve "a system".

That conversation stayed with me.

It touched many areas of my thinking: how I considered commerce, intelligence, and what the internet could still become. It also pushed me deeper into interdisciplinary spaces—talking with researchers, artists, technologists, economists, and philosophers—often while under-slept, over-curious, and driven by a sense that something essential was being missed.

I resonated strongly with open-internet values: openness, decentralization, resistance to capture. But the dominant technical paradigms themselves were not human-centric. Not really. 

When I began studying their underlying architectures more closely, I was struck by how little interdisciplinary thinking existed in spaces flush with resources and talent. Economics and computation were treated as fixed disciplines, rather than what they are: invented lenses.

Everything is interdisciplinary.

Anything that forgets this becomes unintelligent.

AI researchers became obsessed with, and were channeled into, models, data, and the brain. Ethereum, on the other hand, represented for me an early attempt to encode open coordination and shared infrastructure, even if its dominant trajectories drifted away from human-centered intelligence. The values around Ethereum mattered deeply to me, but the implementations often failed to place the human—rather than programmability or capital—at the center.

Alongside this exploration, I was reading extensively across new economics thinkers—those questioning growth, efficiency, and extraction as default goals. These works reinforced my sense that economics must be redefined around well-being, resilience, ecology, and meaning. The more economics lectures and books I worked through, the clearer it became that economics, like computation, is an invented language—not a natural law. The Human Theory of Value took this as its starting point.

One of the earliest influences that gave language to these intuitions was The Internet and Everyone by John Chris Jones. The line that stayed with me was this:

“Design everything on the assumption that people are not heartless or stupid, but marvelously capable given the chance.”

That assumption has quietly guided my work ever since.

My thinking also deeply echoes Gregory Bateson’s Steps to an Ecology of Mind, which remains, to me, one of the most important works on intelligence ever written. Bateson understood intelligence not as isolated computation, but in relationships—between mind and environment, between systems, between levels of abstraction. This research lineage expands into anthropology, art, language, and relational ethics—into how meaning is made across contexts, not extracted from them.

Anything that emerges too cleanly from a single discipline is fundamentally unintelligent. Intelligence is ecological by nature. It is diverse, relational, and contextual. Monocultures—intellectual, cultural, or technical—inevitably collapse. Any credible conversation about intelligence must also include many cultures, cosmologies, and epistemologies, not just Western technical traditions.

This brings me to a framing I resist: the idea that intelligence should be something artificial. Artificial substances have already harmed our bodies, our minds, our ecologies, and our societies. We do not need more artificial systems attempting to replace or interact with what they do not understand.

The problem is not about defining or creating intelligent tools.

The problem is where the conversation about intelligence is being centered.

Human intelligence does not exist in isolation from planetary systems, metaphysical relationships, or lived practice. Even psychology has long acknowledged this. One of the most prolific psychologists, Carl Jung, acknowledged the role of astrology in his work in letters spanning 30 years of his research and practice—not as superstition, but as a symbolic and computational language for understanding personality, pattern, and relationship. Astrological intelligence is one expression of planetary computation: forces shaping minds, tendencies, and interactions over time. It is neither the whole picture nor something to be dismissed. Energy itself is a form of computation, shaping behavior, attention, and outcome long before it is formalized into systems or machines.

Practices that connect us to non-rational and metaphysical forms of intelligence are not peripheral; they are foundational to understanding ourselves and our desires. Practices such as meditation—without attachment to any single spirituality or doctrine—are another way humans learn to relate to different forms of intelligence: internal, relational, and ecological. If a system cannot help you understand what you actually want or need, how could it meaningfully be called intelligent?

Intelligence begins with definition.

With knowing what matters.

And with balancing that knowledge collectively.

A system is only intelligent if it consistently helps people reach what they actually want and need, in balance. 

This opens questions that go far beyond technology: questions of education, governance, economics, and coordination. These futures are available to us. But we will not enter them unless we wake up now—not through regulation, not through compute thresholds, not through naming things more precisely, but through cultivating a coherent relationship to what becoming more intelligent actually means in our own lives—first within ourselves, then with one another.

What does intelligence mean for you—in your body, your relationships, your ecology, your society?

We can start to enter radically different educational, economic, and governance paradigms if we build intelligence in this philosophical, sovereign, and collectively ecological way.

Why should anyone operating within a centralized structure have the right to own or manipulate the very relationship we have with our own intelligence? They should not. It is a fundamental violation of human agency, of humanity.

We live in a time when our most beautiful minds could fall prey to the most erroneous incentives. Or it could be the first real chance in human history to incentivise the nurturing of a beautiful ecology of minds: people, human connection, natural ecosystems, and their progress together. Allowing centralized structures to own or steer our main human intelligence networks is a fundamental violation of human existence and human agency.

Our dominant incentive structures today reward extraction, scale, and control, not understanding, care, or human flourishing. Incentives matter, and the most powerful incentive we have is the capacity to improve ourselves and our relationships—with one another, with our environments, and with the systems we participate in. 

Anything that perpetuates the opposite is not where to spend time or resources. 

This is also why I have grown increasingly resistant to naming things, or at least to the insistence that things are things. Coming from a background in language and linguistics, I am acutely aware that names do not describe reality—they obscure it. This realization crystallized for me when I first read Nietzsche’s essay Wahrheit und Lüge im außermoralischen Sinne (On Truth and Lies in a Nonmoral Sense). I remember leaving the Senate House Library in London afterward and realizing that I no longer saw the world in the same way. I have not since.

Our obsession with naming things prevents us from perceiving what they actually are.

Anthropology has long grappled with this shift in perception, as has philosophy.

Another formative influence for me is Walter Benjamin’s Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit (The Work of Art in the Age of Mechanical Reproduction). His concept of aura does not apply only to art—it applies to ourselves, and to our relationship with every form of content we encounter, feel, and share. Aura lives in presence, context, and relationship. This is where genius opens up.

We are not shaped by things alone, but by the spaces between them.

Much of what we call intelligence today is an attempt to isolate skill, efficiency, or specialization. But a specialist, a model, or a skill is not intelligent by itself. Intelligence lives in trans-contextualization—in the movement of meaning across domains, cultures, bodies, environment, planetary influences, energy, and time. Intelligence does not live in components, models, or tools, but in the interconnections between them. This extends into research on structured water, memory in water, conversational and predictive properties of water. These perspectives challenge the obsession with naming isolated components as opposed to understanding interconnectivity. It is not the thing that matters, but the energy, frequency, association, and meaning underneath it.

One of the ideas that emerged from the first pages I wrote for The Human Theory of Value was what I once called a happiness algorithm. If the internet were a collective guiding of experiences towards individual and collective happiness, identifying the things that contribute to it was part of the question, but what interested me more was asking what our relationships within it are. What would be the weights? Would they be collective or individual, or both? What are the relationships? How do we balance them? The answer is never the thing itself. It is always the balance.
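
Read very literally, the weighting questions above can be sketched as a toy function that blends individual and collective well-being signals. This is purely my own illustrative sketch: the dimension names, numbers, and fixed weight are hypothetical, and the essay's point is precisely that such a weight would need continuous, collective recalibration rather than a hard-coded value.

```python
def balance(individual: dict, collective: dict, w_individual: float = 0.5) -> float:
    """Blend individual and collective well-being scores into one balance value.

    Each dict maps a dimension (e.g. "psychological", "relational",
    "ecological") to a score in [0, 1]. The weight w_individual decides
    how much the individual side counts; in the essay's framing it would
    be recalibrated continuously, not fixed as it is here.
    """
    ind = sum(individual.values()) / len(individual)  # mean individual score
    col = sum(collective.values()) / len(collective)  # mean collective score
    return w_individual * ind + (1 - w_individual) * col

# Hypothetical example values, not data from the book:
score = balance(
    {"psychological": 0.8, "relational": 0.6},
    {"ecological": 0.4, "social": 0.7},
)
# With equal weighting this yields 0.625: the balance, not any single thing.
```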

This is also true in our bodies. Balance.

And there is nothing more intelligent than nature.

We are already far more intelligent than we allow ourselves to imagine. If we aligned our systems with that reality—if intelligence were measured by whether a system reaches the goals humans actually care about—we would already be much further along.

I recently watched a pianist perform and was reminded of my mother. Watching this made me reflect more: the intelligence present was not technical alone. It lived in memory, passion, culture, time, association, and relationship—between the musician, the instrument, the audience, and history. That is intelligence worth nurturing. Artistic performance reveals intelligence as relational presence, not output—something no isolated system can replicate.

The happiness algorithm was never about optimization, but about shared weighting, relational balance, and continuous recalibration, considering the individual in relation to the collective and acknowledging that the observer is part of the system.

My current work has connected me with thinkers, co-founders, researchers, artists and philosophers with a shared perspective and various skills to implement trials for this new economic intelligence paradigm.

Of course, safety matters profoundly, especially where physical systems, robotics, or vulnerable populations and minors are involved. Beyond these necessary boundaries lies something much larger and richer in opportunity: an interdisciplinary exploration of intelligence as a shared, ecological, human endeavor to reset and reorientate. There is no single future economy, only multiple possible ones, each shaped by what we choose to value and measure—in balance individually, and collectively in our shared ecology.

This is what I care deeply about. Nature remains the highest benchmark of intelligence we know—adaptive, balanced, and intrinsically relational. We are part of that nature and I intend to continue to learn from it, myself and others. 

When I am in the Bay Area, these conversations differ from those in Southeast Asia, Europe, and beyond. It is a global conversation, for everyone who might want this type of outcome.

I continue to explore this path grounded in Paññā (wisdom, in the Pali sense) through research, entrepreneurship, partnerships, practices, and projects I choose to take on—always in service of human agency, collective well-being, and the health of our co-lived ecology of mind.

On Value, Intelligence, and a Living Way of Thinking

Most of what I work on now did not begin as the thing it is. It began as a question in relation to observances that kept returning, persistently, over more than a decade.

What do we mean by value, once humans are placed back at the center?

My academic background spans economics, philosophy, psychology, linguistics, and later data science—particularly quantitative approaches to qualitative data, and the synthesis of meaning across systems. What interested me early on was never a single discipline, but the relationships between them: how language shapes perception, how perception shapes behavior, how behavior becomes economics, culture, and eventually infrastructure.

During my years at University College London (UCL) and later in my continued studies, I became increasingly dissatisfied with how narrowly value was being defined—reduced to price, productivity, efficiency, or scale. These definitions felt incomplete, and often actively harmful, when set against human well-being, ecological limits, and the lived experience of people moving through increasingly mediated worlds. When I started out in tech, I was confused by how the majority of builders, engineers, investors, and founders had little to no background in philosophy. It was hard to find people in tech with philosophical vision or reasoning for what they were creating.

Before I moved fully into entrepreneurship, I began writing a book. Its working title was The Human Theory of Value.

The idea was not to propose another economic framework in the traditional sense, but to remap value from a humanistic perspective—one that takes seriously the role of technology, the internet, intelligence systems, ecology, and the inner lives of people. It asked how these forces interact, where agency lives within them, and how meaning is produced, distorted, or lost—and to what outcome.

By the late 2010s, many of these ideas had already been circulating for years in my research notes and papers. They were shaped by philosophy—especially the philosophy of language and the philosophy of technology—by economics, and by a sustained attention to how information flows shape culture, education, art, and collective value systems. I was trying even in art books to draw representations of these systems for people to relationally interface with one another, their desires, collective, nature, and what this might look like for a new type of internet experience and algorithmic approach.

I was—and remain—interested in how the internet makes us, as much as how we make the internet.

What does it mean to be economically prosperous once prosperity includes psychological health, relational depth, ecological continuity, and time? How do we design systems that increase agency rather than extract it? How do we help people reconnect—with each other, with their environments, and with their own sense of meaning—without defaulting to politics as the primary answer?

When I first moved to Berlin, one of the earliest conversations that clarified these questions for me was over coffee with Marcus Mutz. We spoke about a future in which people would have sovereign technology—technology that serves their interests, reflects their values, and preserves agency. In that future, systems would compete to meet human-defined needs, rather than shaping needs to serve "a system".

That conversation stayed with me.

It touched many areas of thinking for me in how I considered commerce, intelligence, and what the internet could still become. It also pushed me deeper into interdisciplinary spaces—talking with researchers, artists, technologists, economists, and philosophers—often while under-slept, over-curious, and driven by a sense that something essential was being missed.

I resonated strongly with open-internet values: openness, decentralization, resistance to capture. But the dominant technical paradigms themselves were not human-centric. Not really. 

When I began studying their underlying architectures more closely, I was struck by how little interdisciplinary thinking existed in spaces flush with resources and talent. Economics and computation were treated as fixed disciplines, rather than what they are: invented lenses.

Everything is interdisciplinary.

Anything that forgets this becomes unintelligent.

AI researchers becoming obsessed with and channeled into models, data, and the brain. Ethereum, on the other hand, represented for me an early attempt to encode open coordination and shared infrastructure, even if its dominant trajectories drifted away from human-centered intelligence. The values around Ethereum mattered deeply to me, but the implementations often failed to place the human—rather than programmability or capital—at the center.

Alongside this exploration, I was reading extensively across new economics thinkers—those questioning growth, efficiency, and extraction as default goals. These works reinforced my sense that economics must be redefined around well-being, resilience, ecology, and meaning. The more I read economics lectures and books, the clearer it became that economics, like computation, is an invented language—not a natural law. The Human Theory of Value had this as an outset. 

One of the earliest influences that gave language to these intuitions came from a book I encountered early on, The Internet and Everyone by John Chris Jones. The line that stayed with me was this:

“Design everything on the assumption that people are not heartless or stupid, but marvelously capable given the chance.”

That assumption has quietly guided my work ever since.

My thinking also deeply echoes Gregory Bateson’s Ecology of Mind, which remains, to me, one of the most important works on intelligence ever written. Bateson understood intelligence not as isolated computation, but in relationships—between mind and environment, between systems, between levels of abstraction. This research lineage expands into anthropology, art, language, and relational ethics—into how meaning is made across contexts, not extracted from them.

Anything that emerges too cleanly from a single discipline is fundamentally unintelligent. Intelligence is ecological by nature. It is diverse, relational, and contextual. Monocultures—intellectual, cultural, or technical—inevitably collapse. Any credible conversation about intelligence must also include many cultures, cosmologies, and epistemologies, not just Western technical traditions.

This brings me to the idea that intelligence should be framed as something artificial. Artificial substances have already harmed our bodies, our minds, our ecologies, and our societies. We do not need more artificial systems attempting to replace or interact with what they do not understand.

The problem is also not about defining or creating intelligent tools.

The problem is where the conversation about intelligence is being centered.

Human intelligence does not exist in isolation from planetary systems, metaphysical relationships, or lived practice. Even psychology has long acknowledged this. One of the most prolific psychologists, Carl Jung, acknowledged the role of astrology in his work in letters spanning 30 years of his research and practice—not as superstition, but as a symbolic and computational language for understanding personality, pattern, and relationship. Astrological intelligence is one expression of planetary computation: forces shaping minds, tendencies, and interactions over time. It is neither the whole picture nor something to be dismissed. Energy itself is a form of computation, shaping behavior, attention, and outcome long before it is formalized into systems or machines.

Practices that connect us to non-rational and metaphysical forms of intelligence are not peripheral; they are foundational to understanding ourselves and our desires. Practices such as meditation—without attachment to any single spirituality or doctrine—are another way humans learn to relate to different forms of intelligence: internal, relational, and ecological. If a system cannot help you understand what you actually want or need, how could it meaningfully be called intelligent?

Intelligence begins with definition.

With knowing what matters.

And with balancing that knowledge collectively.

A system is only intelligent if it consistently helps people reach what they actually want and need, in balance. 

This opens questions that go far beyond technology: questions of education, governance, economics, and coordination. These futures are available to us. But we will not enter them unless we wake up now—not through regulation, not through compute thresholds, not through naming things more precisely, but through cultivating a coherent relationship to what becoming more intelligent actually means in our own lives—first within ourselves, then with one another.

What does intelligence mean for you—in your body, your relationships, your ecology, your society?

We can start to enter radically different educational, economic, and governance paradigms if we build intelligence through this philosophic, sovereign, and collective ecological way. 

Why should anyone operating within a centralized structure have the right to own or manipulate the very relationship we have with our own relationships of intelligence? They should not. It is a fundamental violation of human agency, of humanity.

We live in a time where our most beautiful minds could fall short to the most erroneous incentives. Or, it could be the first real chance in human history to incentivise nurturing a beautiful ecology of minds—people, human connection, natural ecosystems, and progress of them. Allowing centralized structures to own or steer our main human intelligence networks is a fundamental violation of human existence and human agency.

Our dominant incentive structures today reward extraction, scale, and control, not understanding, care, or human flourishing. Incentives matter, and the most powerful incentive we have is the capacity to improve ourselves and our relationships—with one another, with our environments, and with the systems we participate in. 

Anything that perpetuates the opposite is not where to spend time or resources. 

This is also why I have grown increasingly resistant to naming things—or at least the focus on that things are things. Coming from a background in language and linguistics, I am acutely aware that names do not describe reality—they obscure it. This realization crystallized for me when I first read Nietzsche’s essay Wahrheit und Lüge im außermoralischen Sinne (On Truth and Lies in a Nonmoral Sense). I remember leaving the Senate House Library in London afterward and realizing that I no longer saw the world in the same way. I have not since. 

Our obsession with naming things prevents us from perceiving what they actually are.

Anthropology has long grappled with this shift in perception, as has philosophy.

Another formative influence for me is Walter Benjamin’s Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit (The Work of Art in the Age of Mechanical Reproduction). His concept of aura does not apply only to art—it applies to ourselves, and to our relationship with every form of content we encounter, feel, and share. Aura lives in presence, context, and relationship. This is where genius opens up.

We are not shaped by things alone, but by the spaces between them.

Much of what we call intelligence today is an attempt to isolate skill, efficiency, or specialization. But a specialist, a model, or a skill is not intelligent by itself. Intelligence lives in trans-contextualization—in the movement of meaning across domains, cultures, bodies, environment, planetary influences, energy, and time. Intelligence does not live in components, models, or tools, but in the interconnections between them. This extends into research on structured water, memory in water, conversational and predictive properties of water. These perspectives challenge the obsession with naming isolated components as opposed to understanding interconnectivity. It is not the thing that matters, but the energy, frequency, association, and meaning underneath it.

One of the ideas that emerged from first pages I wrote for The Human Theory of Value was what I once called a happiness algorithm. If the internet was a collective guiding of experiences towards individual and collective happiness, that there are things contributing to this was part of it, but asking what our relationships are in this was of more interest. What would be the weights? Would they be collective or individual, or both? What are the relationships? How do we balance them? The answer is never the thing itself. It is always the balance.

This is also true in our bodies. Balance.

And there is nothing more intelligent than nature.

We are already far more intelligent than we allow ourselves to imagine. If we aligned our systems with that reality—if intelligence were measured by whether a system reaches the goals humans actually care about—we would already be much further along.

I recently watched a pianist perform and was reminded of my mother. Watching this made me reflect more: the intelligence present was not technical alone. It lived in memory, passion, culture, time, association, and relationship—between the musician, the instrument, the audience, and history. That is intelligence worth nurturing. Artistic performance reveals intelligence as relational presence, not output—something no isolated system can replicate.

The happiness algorithm was never about optimization, but about shared weighting, relational balance, and continuous recalibration, considering the individual relationally to the collective and acknowledging that the observer is part of the system.

My current work has connected me with thinkers, co-founders, researchers, artists and philosophers with a shared perspective and various skills to implement trials for this new economic intelligence paradigm.

Of course, safety matters profoundly. Especially where physical systems, robotics, or vulnerable populations and minors are involved. Beyond these necessary boundaries lies something much larger and more opportunistic: an interdisciplinary exploration of intelligence as a shared, ecological, human endeavor to reset and reorientate. There is no single future economy, only multiple possible ones, each shaped by what we choose to value and measure—in balance individually, and collectively in our shared ecology. 

This is what I care deeply about. Nature remains the highest benchmark of intelligence we know—adaptive, balanced, and intrinsically relational. We are part of that nature and I intend to continue to learn from it, myself and others. 

When I am in the Bay Area, these conversations differ to SEA, Europe, and beyond. It's a global conversation, for everyone who might want this type of outcome.

I continue to explore this path grounded in Paññā through research, entrepreneurship, partnerships, practices and projects I choose to take on—always in service of human agency, collective well-being, and the health of our co-lived ecology of mind.









On Value, Intelligence, and a Living Way of Thinking

Most of what I work on now did not begin as the thing it is. It began as a question in relation to observances that kept returning, persistently, over more than a decade.

What do we mean by value, once humans are placed back at the center?

My academic background spans economics, philosophy, psychology, linguistics, and later data science—particularly quantitative approaches to qualitative data, and the synthesis of meaning across systems. What interested me early on was never a single discipline, but the relationships between them: how language shapes perception, how perception shapes behavior, how behavior becomes economics, culture, and eventually infrastructure.

During my years at University College London (UCL) and later in my continued studies, I became increasingly dissatisfied with how narrowly value was being defined—reduced to price, productivity, efficiency, or scale. These definitions felt incomplete, and often actively harmful, when set against human well-being, ecological limits, and the lived experience of people moving through increasingly mediated worlds. When I started out in tech, I was confused by how the majority of builders, engineers, investors, and founders had little to no background in philosophy. It was hard to find people in tech with philosophical vision or reasoning for what they were creating.

Before I moved fully into entrepreneurship, I began writing a book. Its working title was The Human Theory of Value.

The idea was not to propose another economic framework in the traditional sense, but to remap value from a humanistic perspective—one that takes seriously the role of technology, the internet, intelligence systems, ecology, and the inner lives of people. It asked how these forces interact, where agency lives within them, and how meaning is produced, distorted, or lost—and to what outcome.

By the late 2010s, many of these ideas had already been circulating for years in my research notes and papers. They were shaped by philosophy—especially the philosophy of language and the philosophy of technology—by economics, and by a sustained attention to how information flows shape culture, education, art, and collective value systems. Even in art books, I was trying to draw representations of these systems: ways for people to relationally interface with one another, their desires, the collective, and nature, and to imagine what this might look like as a new type of internet experience and algorithmic approach.

I was—and remain—interested in how the internet makes us, as much as how we make the internet.

What does it mean to be economically prosperous once prosperity includes psychological health, relational depth, ecological continuity, and time? How do we design systems that increase agency rather than extract it? How do we help people reconnect—with each other, with their environments, and with their own sense of meaning—without defaulting to politics as the primary answer?

When I first moved to Berlin, one of the earliest conversations that clarified these questions for me was over coffee with Marcus Mutz. We spoke about a future in which people would have sovereign technology—technology that serves their interests, reflects their values, and preserves agency. In that future, systems would compete to meet human-defined needs, rather than shaping needs to serve "a system".

That conversation stayed with me.

It touched many areas of my thinking: how I considered commerce, intelligence, and what the internet could still become. It also pushed me deeper into interdisciplinary spaces—talking with researchers, artists, technologists, economists, and philosophers—often while under-slept, over-curious, and driven by a sense that something essential was being missed.

I resonated strongly with open-internet values: openness, decentralization, resistance to capture. But the dominant technical paradigms themselves were not human-centric. Not really. 

When I began studying their underlying architectures more closely, I was struck by how little interdisciplinary thinking existed in spaces flush with resources and talent. Economics and computation were treated as fixed disciplines, rather than what they are: invented lenses.

Everything is interdisciplinary.

Anything that forgets this becomes unintelligent.

AI researchers became obsessed with, and channeled into, models, data, and the brain. Ethereum, on the other hand, represented for me an early attempt to encode open coordination and shared infrastructure, even if its dominant trajectories drifted away from human-centered intelligence. The values around Ethereum mattered deeply to me, but the implementations often failed to place the human—rather than programmability or capital—at the center.

Alongside this exploration, I was reading extensively across new economics thinkers—those questioning growth, efficiency, and extraction as default goals. These works reinforced my sense that economics must be redefined around well-being, resilience, ecology, and meaning. The more economics lectures and books I read, the clearer it became that economics, like computation, is an invented language—not a natural law. The Human Theory of Value took this as its starting point.

One of the earliest influences that gave language to these intuitions was The Internet and Everyone by John Chris Jones. The line that stayed with me was this:

“Design everything on the assumption that people are not heartless or stupid, but marvelously capable given the chance.”

That assumption has quietly guided my work ever since.

My thinking also deeply echoes Gregory Bateson’s Ecology of Mind, which remains, to me, one of the most important works on intelligence ever written. Bateson understood intelligence not as isolated computation, but as something that lives in relationships—between mind and environment, between systems, between levels of abstraction. This research lineage expands into anthropology, art, language, and relational ethics—into how meaning is made across contexts, not extracted from them.

Anything that emerges too cleanly from a single discipline is fundamentally unintelligent. Intelligence is ecological by nature. It is diverse, relational, and contextual. Monocultures—intellectual, cultural, or technical—inevitably collapse. Any credible conversation about intelligence must also include many cultures, cosmologies, and epistemologies, not just Western technical traditions.

This brings me to my resistance to the idea that intelligence should be framed as something artificial. Artificial substances have already harmed our bodies, our minds, our ecologies, and our societies. We do not need more artificial systems attempting to replace or interact with what they do not understand.


The problem, though, is not about defining or creating intelligent tools.

The problem is where the conversation about intelligence is being centered.

Human intelligence does not exist in isolation from planetary systems, metaphysical relationships, or lived practice. Even psychology has long acknowledged this. Carl Jung, one of the most prolific psychologists, acknowledged the role of astrology in letters spanning thirty years of his research and practice—not as superstition, but as a symbolic and computational language for understanding personality, pattern, and relationship. Astrological intelligence is one expression of planetary computation: forces shaping minds, tendencies, and interactions over time. It is neither the whole picture nor something to be dismissed.

Energy itself is a form of computation, shaping behavior, attention, and outcome long before it is formalized into systems or machines.

Practices that connect us to non-rational and metaphysical forms of intelligence are not peripheral; they are foundational to understanding ourselves and our desires.

Practices such as meditation—without attachment to any single spirituality or doctrine—are another way humans learn to relate to different forms of intelligence: internal, relational, and ecological. If a system cannot help you understand what you actually want or need, how could it meaningfully be called intelligent?

Intelligence begins with definition.

With knowing what matters.

And with balancing that knowledge collectively.

A system is only intelligent if it consistently helps people reach what they actually want and need, in balance. 

This opens questions that go far beyond technology: questions of education, governance, economics, and coordination. These futures are available to us. But we will not enter them unless we wake up now—not through regulation, not through compute thresholds, not through naming things more precisely, but through cultivating a coherent relationship to what becoming more intelligent actually means in our own lives—first within ourselves, then with one another.

What does intelligence mean for you—in your body, your relationships, your ecology, your society?

We can begin to enter radically different educational, economic, and governance paradigms if we build intelligence in this philosophical, sovereign, and collectively ecological way.

Why should anyone operating within a centralized structure have the right to own or manipulate the very relationship we have with our own intelligence? They should not. It is a fundamental violation of human agency, of humanity.

We live in a time when our most beautiful minds could fall prey to the most erroneous incentives. Or this could be the first real chance in human history to incentivize the nurturing of a beautiful ecology of minds—people, human connection, natural ecosystems, and their progress. Allowing centralized structures to own or steer our main human intelligence networks is a fundamental violation of human existence and human agency.

Our dominant incentive structures today reward extraction, scale, and control, not understanding, care, or human flourishing. Incentives matter, and the most powerful incentive we have is the capacity to improve ourselves and our relationships—with one another, with our environments, and with the systems we participate in. 

Anything that perpetuates the opposite is not where time or resources belong.

This is also why I have grown increasingly resistant to naming things—or at least to the insistence that things are things. Coming from a background in language and linguistics, I am acutely aware that names do not describe reality—they obscure it. This realization crystallized for me when I first read Nietzsche’s essay Wahrheit und Lüge im außermoralischen Sinne (On Truth and Lies in a Nonmoral Sense). I remember leaving the Senate House Library in London afterward and realizing that I no longer saw the world in the same way. I have not since.

Our obsession with naming things prevents us from perceiving what they actually are.

Anthropology has long grappled with this shift in perception, as has philosophy.

Another formative influence for me is Walter Benjamin’s Das Kunstwerk im Zeitalter seiner technischen Reproduzierbarkeit (The Work of Art in the Age of Mechanical Reproduction). His concept of aura does not apply only to art—it applies to ourselves, and to our relationship with every form of content we encounter, feel, and share. Aura lives in presence, context, and relationship. This is where genius opens up.

We are not shaped by things alone, but by the spaces between them.

Much of what we call intelligence today is an attempt to isolate skill, efficiency, or specialization. But a specialist, a model, or a skill is not intelligent by itself. Intelligence lives in trans-contextualization—in the movement of meaning across domains, cultures, bodies, environment, planetary influences, energy, and time. Intelligence does not live in components, models, or tools, but in the interconnections between them. This extends into research on structured water, memory in water, and the conversational and predictive properties of water.

These perspectives challenge the obsession with naming isolated components as opposed to understanding interconnectivity. It is not the thing that matters, but the energy, frequency, association, and meaning underneath it.

One of the ideas that emerged from the first pages I wrote for The Human Theory of Value was what I once called a happiness algorithm. If the internet were a collective guiding of experiences toward individual and collective happiness, then identifying the things that contribute to happiness would be part of it, but asking what our relationships are within it interested me more. What would the weights be? Would they be collective or individual, or both? What are the relationships? How do we balance them? The answer is never the thing itself. It is always the balance.

This is also true in our bodies. Balance.

And there is nothing more intelligent than nature.

We are already far more intelligent than we allow ourselves to imagine. If we aligned our systems with that reality—if intelligence were measured by whether a system reaches the goals humans actually care about—we would already be much further along.

I recently watched a pianist perform and was reminded of my mother. It made me reflect: the intelligence present was not technical alone. It lived in memory, passion, culture, time, association, and relationship—between the musician, the instrument, the audience, and history. That is intelligence worth nurturing. Artistic performance reveals intelligence as relational presence, not output—something no isolated system can replicate.

The happiness algorithm was never about optimization, but about shared weighting, relational balance, and continuous recalibration, considering the individual relationally to the collective and acknowledging that the observer is part of the system.
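To make the intuition concrete, here is a deliberately minimal sketch, in Python, of what shared weighting, relational balance, and continuous recalibration could look like. Every name, dimension, and formula in it is a hypothetical illustration of the idea, not a specification of any real system: individual and collective weights are blended, imbalance is measured as the spread between the most- and least-served dimensions, and weight is shifted toward whatever is under-served rather than maximizing any single score.

```python
# Hypothetical sketch of a "happiness algorithm" as relational balance,
# not optimization. All names, weights, and dimensions are illustrative.

def blended_weights(individual, collective, alpha=0.5):
    """Blend individual and collective weights over shared dimensions."""
    return {
        dim: alpha * individual[dim] + (1 - alpha) * collective[dim]
        for dim in individual
    }

def imbalance(scores):
    """Spread between the most- and least-served dimensions.

    A balanced system keeps this small, rather than maximizing any one score.
    """
    values = list(scores.values())
    return max(values) - min(values)

def recalibrate(weights, scores, rate=0.1):
    """Shift weight toward under-served dimensions (continuous recalibration)."""
    mean = sum(scores.values()) / len(scores)
    adjusted = {
        dim: max(0.0, w + rate * (mean - scores[dim]))
        for dim, w in weights.items()
    }
    total = sum(adjusted.values())
    return {dim: w / total for dim, w in adjusted.items()}

# Illustrative dimensions drawn from the essay's notion of prosperity.
individual = {"psychological": 0.4, "relational": 0.3, "ecological": 0.3}
collective = {"psychological": 0.2, "relational": 0.4, "ecological": 0.4}

weights = blended_weights(individual, collective)
scores = {"psychological": 0.9, "relational": 0.5, "ecological": 0.3}
weights = recalibrate(weights, scores)  # ecological gains weight; it is under-served
```

The point of the sketch is its shape, not its numbers: the objective is the reduction of imbalance across dimensions, the weights are a negotiation between the individual and the collective, and the recalibration never ends.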

My current work has connected me with thinkers, co-founders, researchers, artists and philosophers with a shared perspective and various skills to implement trials for this new economic intelligence paradigm.

Of course, safety matters profoundly, especially where physical systems, robotics, or vulnerable populations and minors are involved. Beyond these necessary boundaries lies something much larger and richer in opportunity: an interdisciplinary exploration of intelligence as a shared, ecological, human endeavor to reset and reorient. There is no single future economy, only multiple possible ones, each shaped by what we choose to value and measure—in balance individually, and collectively in our shared ecology.

This is what I care deeply about. Nature remains the highest benchmark of intelligence we know—adaptive, balanced, and intrinsically relational. We are part of that nature and I intend to continue to learn from it, myself and others. 

When I am in the Bay Area, these conversations differ from those in Southeast Asia, Europe, and beyond. It's a global conversation, for everyone who might want this type of outcome.

I continue to explore this path grounded in Paññā through research, entrepreneurship, partnerships, practices and projects I choose to take on—always in service of human agency, collective well-being, and the health of our co-lived ecology of mind.

Let's coordinate.

Not financial advice.


*Design everything on the assumption that people are not heartless or stupid but marvelously capable, given the chance.* – The Internet and Everyone, John Chris Jones
