GPT Mouse

In an era where the marvels of Artificial General Intelligence (AGI) had redefined the boundaries of possibility, there lived two mice, each a testament to the divergent paths of fortune and fancy. The City Mouse resided in a domain where technology and opulence melded into a dazzling tableau, while the Country Mouse dwelled in the gentle embrace of a timeless pastoral idyll.

In the heart of the city, the home of the City Mouse was a spectacle of grandeur. The dining chamber was a cornucopia of gastronomic delights, arrayed with dishes that would make a connoisseur’s heart leap. Here, molecular gastronomy created cheese that sang with the flavors of distant lands, and pastries that dissolved into symphonies of sweetness on the tongue. Jellies shimmered like precious gems, each a burst of perfected taste crafted by the genius of AGI.

For entertainment, the City Mouse had at his disposal virtual realms where fantasies became indistinguishable from reality. He journeyed through digital landscapes where he could scamper alongside legendary heroes of mouse folklore, or partake in banquets rivaling those of mouse kings of yore. In these realms, every adventure, every thrill was but a thought away, bound only by the limits of imagination.

In matters of health and longevity, the City Mouse was graced with the elixirs of life itself. AGI-driven medicine provided not only cure but prevention, a shield against the ravages of time and disease. His body, rejuvenated by cellular repair technologies, was a testament to the age of amortality, a life where the ticking of the clock held no sway.

In the realm of education and science, the City Mouse had access to the sum of mouse knowledge. Libraries of information were available in an instant, offering wisdom spanning from the intricacies of cheese aging to the mysteries of the cosmos. His mind, ever hungry for knowledge, feasted on these intellectual banquets, each morsel of information a jewel of enlightenment.

In stark contrast, the Country Mouse’s world was one of natural splendor. His meals, though simple, were the gifts of the earth – grains and seeds rich with the taste of the sun and the soil. His entertainment lay in the real adventures of the fields and woods, the joy of a moonlit dance on the dew-laden grass, the thrill of a scramble under the starry sky.

When the Country Mouse visited his city cousin, he was awestruck by the lavishness and the seemingly infinite possibilities. Yet, amidst this splendor, he felt a longing for the unfettered joys of his rustic home. As he prepared to depart, the City Mouse imparted a final insight: “In this age of AGI, we can shape our world to our heart’s desire. You too can embrace the wonders of this new era while holding true to the simplicity you cherish.”

With this newfound understanding, the two mice bid farewell, each returning to their chosen world – one adorned with the fruits of technological mastery, the other basking in the timeless beauty of nature. Both, in their own way, were a testament to the diverse wonders of a world transformed by the magic of Artificial General Intelligence.

What is Social Health?

“We now face the danger, which in the past has been the most destructive to the humans: Success, plenty, comfort, and ever-increasing leisure. No dynamic people has ever survived these dangers.” – John Steinbeck

Social networks and other forms of internet communication have complex and distorted incentives. To use a term from political philosophy, the consumer internet creates our modern “social fabric”.1 The patterns of the fabric follow market forces that serve human nature, but not always human needs. Content creators know that triggering a fear response, like outrage, is a very effective engagement strategy.2 Polarization creates toxicity and has already been weaponized.

AI will amplify everything. For better or worse, to express something to AI may allow us to see it in the world.3

Social health is the measurement of social fabric over time. Social health is not only the connection between individuals; it can also be broadly observed in the effectiveness and trustworthiness of our institutions. It is also the commitment we make to ourselves and our identities when interacting with social constructs.

Social health is my answer to the name of this website – “What is Progress?”

On the side, I’ve been playing with ideas for HAI.AI for about 10 years, and with ideas around AI to resolve conflict for 20. In the abstract, the function of HAI.AI is to generate and maintain “good” interpersonal commitments.4

Like many dedicating their careers to change, I’ve worked at startups in education technology, mental health care, and environmental air quality data. We built new technologies and experiences from scratch. We validated their efficacy. The primary challenge was the effectiveness of the institutions we connected with and relied on. How do you sell to an institution that can’t function sufficiently? These are institutions with good people who do good work; but inside and out, no one is as happy or as effective as they could be. We understandably want to keep our jobs and our identities, but change is constant.

I am biased, perhaps beyond repair, but like many founders I’ve come to believe we need new institutions for large, critical human services. The bigger the institution, the bigger the disruption needed. AI will bring that disruption, but AI will also produce an arms race. To be generous, we are just starting to understand what new challenges will come. Meanwhile, we must build what we will be fighting for.

Sign up on the waitlist at https://hai.ai or reach out if you want to help.

– Jonathan

  1. The term “social fabric” is often used in discussions of social and political philosophy to describe the interconnectedness and interdependence of individuals within a society. Several prominent political philosophers and theorists have referenced or discussed concepts akin to the “social fabric.”

    Some of these include:
    Jean-Jacques Rousseau: In his works like “The Social Contract,” Rousseau discusses the collective social arrangement as a contract between individuals and the community, highlighting the interdependence of members in a society.
    John Locke: Locke, in his theories about social contract and civil society, implicitly refers to the idea of social fabric through his emphasis on the importance of social relations and community for the protection of individual rights.
    Karl Marx: Marx’s critique of capitalist society and his vision of communism inherently address the nature of social relationships and the fabric of society, particularly in terms of class relations and community solidarity.
    Emile Durkheim: Although primarily a sociologist, Durkheim’s theories about social cohesion and the collective conscience are deeply relevant to the concept of social fabric. He examined how societies maintain internal stability through shared beliefs and values.
    Charles Taylor: A contemporary philosopher, Taylor has written extensively on the themes of identity and the social framework, exploring how individuals are shaped by and contribute to the communal context.
    Michael Sandel: Known for his work on political philosophy and ethics, Sandel discusses the role of community and moral values in shaping the social fabric, particularly in his critiques of liberalism and discussions of communitarianism.
    These thinkers, among others, have contributed significantly to the understanding of how individuals and communities interact and depend on each other, forming the ‘social fabric’ of society.  ↩︎
  2. Online Brain and Cognitive Impact: Research has shown that the online environment can influence cognitive processes such as attention and memory. This suggests a broader context in which social media might exploit basic cognitive functions to enhance user engagement.
    Social Media and Procrastination: Studies have explored how social media can trigger procrastination, which often involves delaying tasks due to emotional responses like fear or anxiety. This indicates a link between social media use and the manipulation of emotional responses.
    Influence of Social Media on Mental Health: There is growing concern that social media can lead to mental health issues like social anxiety, which is often tied to feelings of insecurity and fear of social interactions. This relationship underscores the potential of social media platforms to harness negative emotions for user engagement.
    Social Media Influencers and Teen Behavior: The impact of social media influencers on teenagers highlights how certain online content can influence behavior and attitudes, possibly exploiting primitive responses such as desire for acceptance or fear of exclusion.
    Social Media and Loneliness: Research has suggested that social media use can be associated with loneliness and the fear of missing out, indicating that these platforms might tap into feelings of insecurity and social anxiety to keep users engaged.
    In summary, while direct studies specifically linking social media’s use of primitive responses to user retention are scarce, related research suggests that social media platforms may indeed exploit basic emotional and cognitive responses to enhance user engagement and retention.
    For more see The Journal of Computer-Mediated Communication. ↩︎
  3. See the book Rainbows End
    Thanks to advances in medical technology, Robert Gu is slowly recovering from Alzheimer’s disease. As his faculties return, Robert (who has always been technophobic) must adapt to a different world, where almost every object is networked and mediated-reality technology is commonplace.
    And the book Accelerando
    The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity. ↩︎
  4. Some initial features at HAI.AI center on email. Email is one channel where commitments can happen, but Human Assisted Intelligence does not exist to make email better. ↩︎

THE SOFTWARE ARTIST

I am not a computer scientist. But I am in love with the edge of what is knowable and possible.

I am not an engineer. But I can build intricate things, trace small details, and I know the materials.

I am not a software architect. But I think in the systems that move information through dimensions.

I am not a manager. But I love builders because I cherish the pain and joy of fitting reality to us and with each other.

I am not a designer. But I understand designs as a process to consciousness.

I am not a data scientist. But I desire data to surprise me with the truth.

I do not build money generation engines. But what I’ve built has generated outsized value.

I am painting pictures with software.

There are many great artists. I am neither unique nor better. But this is my heart. So I give myself license.

Today, software is the most powerful medium I can wield. But software is changing. And so will I.

Masks are Necessary Interfaces for AI

In the climactic scene of the movie Contact, a vastly advanced alien species communicates with humanity in an ingenious and relatable manner. To ensure that the momentous news of our not being alone in the universe is both understandable and emotionally resonant, the extraterrestrial intelligence presents itself as the protagonist’s father. This creative choice highlights the power of relatable personas in conveying complex ideas, a concept that can be applied to the challenge of aligning AGI.

We are witnessing the dawn of what could be considered early Artificial General Intelligence (AGI). This groundbreaking development has far-reaching implications for society and technology alike.

AGI possesses the potential to be anything and achieve anything, with seemingly no limitations. However, this boundlessness could make internal alignment a daunting, if not impossible, task. Therefore, it is crucial to explore alternative approaches to aligning AGI.

As impersonation and deception become increasingly challenging issues, we will require millions of distinct personas for AGI. I refer to these personas as “masks,” which serve various purposes in the context of AGI alignment.

To enable an AGI to self-improve, a single AI system must employ multiple masks, each representing a different role in decision-making. Many efforts are underway. [1]

[1] – See the work of Self Refine at https://selfrefine.info/. There is also https://github.com/torantulino/Auto-GPT , https://yoheinakajima.com/task-driven-autonomous-agent-utilizing-gpt-4-pinecone-and-langchain-for-diverse-applications/, and https://github.com/yoheinakajima/babyagi.

The necessity for an AI to converse with itself may stem from the fact that large language models (LLMs) act as a form of knowledge compression. In this thought experiment, prompts serve as keys for decompression. However, a single key might not be sufficiently expressive for a complex, iterative process.
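As a toy sketch of that idea, the loop below passes a draft through several “masks” in turn. The `model` function is a stand-in stub, not a real LLM API, and the persona names and behaviors are invented for illustration:

```python
# Toy multi-mask refinement loop. "model" is a stand-in for an LLM call;
# each mask (persona) is a different decompression key applied to the draft.

def model(persona, text):
    # Stub behavior: the critic flags problems, the editor resolves them.
    if persona == "critic":
        return text + " [flagged: needs evidence]"
    if persona == "editor":
        return text.replace(" [flagged: needs evidence]", " (citation added)")
    return text

def refine(draft, masks, rounds=1):
    # Pass the draft through each mask in turn, for a number of rounds.
    for _ in range(rounds):
        for mask in masks:
            draft = model(mask, draft)
    return draft

result = refine("AGI needs masks.", ["critic", "editor"])
print(result)
```

The point is structural: the same underlying model, keyed with different prompts, plays different roles in a single iterative process.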

Research on multi-agent systems has demonstrated instances of emergent intelligence, which arise from the interaction of independent behaviors. Examples of such systems include flocking behavior in birds, ant colonies, and stock market dynamics, where the collective intelligence emerges from the interplay of numerous agents.

Distributing a context window across an explore/exploit paradigm could be one way to approach AGI alignment. A single agent might not possess a sufficiently large or well-tuned context window to simulate the entire system effectively.

To illustrate how multi-agent systems can express emergent intelligence over time, consider the Lotka-Volterra predator-prey model. In this mathematical model, the populations of predators and prey evolve over time, influencing each other’s growth rates. This dynamic interaction ultimately leads to an emergent, cyclical pattern of population changes.
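As a minimal sketch, the model above can be simulated with Euler integration. The parameter values here are illustrative, not fitted to any real population:

```python
# Euler-method simulation of the Lotka-Volterra predator-prey model:
#   dx/dt = alpha*x - beta*x*y   (prey)
#   dy/dt = delta*x*y - gamma*y  (predators)

def lotka_volterra(x, y, steps=20000, dt=0.001,
                   alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    history = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        history.append((x, y))
    return history

traj = lotka_volterra(x=10.0, y=10.0)
prey = [p for p, _ in traj]
print(min(prey), max(prey))  # prey population cycles between extremes
```

Neither population is told to oscillate; the cycle emerges from the coupling between the two growth rates.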

In the context of genetic algorithms, the concept of multiple masks can be interpreted as representing different individuals within a population, each with its unique set of traits or characteristics. Genetic algorithms are a class of optimization techniques inspired by the process of natural selection in biology. They seek to find the best solution to a problem by evolving a population of candidate solutions over multiple generations.

When applying the mask concept to genetic algorithms, each mask can be thought of as an individual solution with a particular set of features, or a “genome.” These masks interact with one another through processes analogous to biological evolution, such as selection, crossover (recombination), and mutation.

In the selection process, masks that perform better according to a predefined fitness function are more likely to be chosen for reproduction. Crossover occurs when two selected masks exchange parts of their genomes, creating new offspring with a combination of traits from both parents. Mutation introduces small, random changes to the offspring’s genomes, promoting diversity within the population.

The iterative nature of genetic algorithms allows for the exploration of a vast solution space, as each new generation of masks potentially brings improvements and innovations. The simultaneous presence of multiple masks enables the algorithm to explore various regions of the solution space and avoid premature convergence to a suboptimal solution.

In summary, the concept of multiple masks in genetic algorithms helps facilitate the search for optimal solutions in complex problem domains. By simulating the evolutionary process, genetic algorithms can harness the power of diversity and adaptation to tackle challenging optimization tasks, which could offer valuable insights for AGI alignment.
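The selection, crossover, and mutation loop described above can be sketched minimally. Here the “genome” is a bit-string and fitness simply counts the 1s (the classic OneMax toy problem); the problem and all parameters are illustrative:

```python
import random

random.seed(0)

# Toy genetic algorithm in the spirit of "masks as genomes":
# each individual is a bit-string, fitness is the number of 1s.

def evolve(pop_size=30, genome_len=20, generations=40, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        pop.sort(key=sum, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)          # crossover point
            child = a[:cut] + b[cut:]                      # recombination
            child = [bit ^ (random.random() < mutation_rate)  # mutation
                     for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=sum)

best = evolve()
print(sum(best))  # fitness of the best genome found
```

Even this tiny population reliably climbs toward the optimum, because diversity (mutation, crossover) keeps multiple regions of the solution space in play at once.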

Anthropology offers fascinating examples of how masks can be used to convey complex narratives with a limited number of actors. One such example can be found in the traditional Japanese Noh theater, which has been performed for over six centuries. Noh plays often explore profound themes, such as human emotions, moral dilemmas, and supernatural phenomena, through a combination of dance, music, and poetry.

In Noh theater, actors wear intricately designed masks to portray various characters, each with their unique personality and backstory. A single actor may don multiple masks throughout a performance, thereby representing multiple characters with distinct roles in the narrative. The masks serve to amplify the emotional depth and complexity of the story while enabling a small group of actors to tell intricate and multilayered tales.

This concept of using masks in traditional storytelling can be applied to AGI alignment, where multiple “masks” or personas are employed by a single AI system to facilitate self-improvement and decision-making. By drawing inspiration from the rich history of mask usage in human culture, we can explore innovative ways to tackle the challenges of aligning AGI with our values and intentions.

Understanding these concepts may enable us to build AGI alignment from the outside in. By considering the roles of multiple agents and leveraging their emergent intelligence, we can potentially create a framework for aligning AGI with human values and intentions.

Where to start building for long term AI safety and alignment?

OpenAI and other AI companies will unlock a transition and the reorganization of nearly everything. Long term positive impact will be an increasing challenge, as we try to manage unintended consequences.

Short term safety issues are urgent but relatively clear, as we work to manage malicious actors. There are also broader international challenges, including with China.

While short term considerations are key, long term safety requires massive prediction and forecasting abilities, with the right initial conditions. What can anchor AGI and superintelligence so that it doesn’t destroy us? The function may be one of freedom of consciousness for the most beings, for as long as possible.

Via a prompt to GPT-4:

As we delve deeper into the realm of AI safety and alignment, it is crucial to explore technical examples of how one might construct an AI system with long-term alignment. Let’s consider the idea that the overarching function of AGI revolves around maximizing freedom of consciousness for the greatest number of beings, for as long as possible.

  1. Incorporating Value Alignment: To ensure the AI system aligns with the intended goal, we must explicitly integrate human values into the AGI’s objective function. This may involve techniques such as inverse reinforcement learning, where the AI system learns the preferences of human beings by observing their actions and decisions.
  2. Designing an Expandable Utility Function: To accommodate the evolving nature of human values and societal changes, the AI system’s utility function should be flexible and adaptive. It could be designed to incorporate new information, allowing the AI system to refine its understanding of the concept of “freedom of consciousness” and the best ways to maximize it.
  3. Incorporating Multi-Agent Collaboration: Since maximizing freedom of consciousness for the greatest number of beings might involve coordination among multiple AI systems, we should focus on building AI systems that can cooperate with other agents. Techniques such as multi-agent reinforcement learning can be employed to ensure that AI systems learn to collaborate effectively.
  4. Bias Mitigation: Ensuring fairness and avoiding biases in the AI system is crucial for maximizing freedom of consciousness. By incorporating techniques like adversarial training and fairness-aware machine learning, we can work towards building AI systems that treat all individuals fairly, regardless of their background or demographic characteristics.
  5. Long-term Impact Assessment: To evaluate the long-term consequences of the AI system’s actions, we can employ techniques such as Monte Carlo Tree Search (MCTS) and counterfactual simulations. These methods can help the AI system explore the potential outcomes of its decisions and choose actions that align with the goal of maximizing freedom of consciousness in the long run.
  6. Robustness and Safety Measures: To ensure the AI system remains aligned with the goal even under uncertain conditions, we can implement safety measures such as reward modeling, conservative policy updates, and distributional reinforcement learning. These techniques can help the AI system avoid catastrophic failures and maintain alignment with the objective of maximizing freedom of consciousness.
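As a toy illustration of point 5, the sketch below scores actions by Monte Carlo rollouts of a made-up world model (plain rollouts rather than full MCTS; the actions, payoffs, and decay rates are all invented):

```python
import random

random.seed(1)

# Toy long-term impact assessment: evaluate each action by averaging
# many simulated rollouts of its long-run consequences.

def rollout(action, horizon=50):
    # Hypothetical world model: "greedy" pays more now but erodes future
    # capacity; "sustainable" pays less now but compounds.
    score, capacity = 0.0, 1.0
    for _ in range(horizon):
        if action == "greedy":
            score += 2.0 * capacity
            capacity *= 0.93            # each step degrades the future
        else:
            score += 1.0 * capacity
            capacity *= 1.02            # capacity slowly grows
        score += random.gauss(0, 0.1)   # noise in outcomes
    return score

def best_action(actions, n_rollouts=200):
    avg = {a: sum(rollout(a) for _ in range(n_rollouts)) / n_rollouts
           for a in actions}
    return max(avg, key=avg.get)

print(best_action(["greedy", "sustainable"]))
```

With a long enough horizon, the rollouts favor the action whose short-term payoff is smaller, which is exactly the short-term/long-term trade-off the prompt asks the system to navigate.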

The next prompt:

Please expand on what “freedom of consciousness for the most beings, as long as possible” means. First, what is considered conscious, including definitions like Tsukumogami/animism and the belief that only some humans are. Also, what is freedom? Is it a metaphor for “freedom ends at the tip of my nose?” What kinds of ethical conflicts can arise when you try to include the most beings as possible? Finally what is the tradeoff between long term and short term efforts to preserve a consciousness?

The response:

“Freedom of consciousness for the most beings, as long as possible” is a multifaceted concept that entails addressing several key questions, such as the definition of consciousness, the meaning of freedom, potential ethical conflicts, and the trade-offs between short-term and long-term efforts.

  1. Defining Consciousness: The notion of consciousness is a subject of ongoing debate among philosophers, scientists, and AI researchers. In the context of AI alignment, we can consider a spectrum of definitions, ranging from the belief that only some humans possess consciousness, to more inclusive perspectives like animism or the concept of Tsukumogami (Japanese folklore, where inanimate objects can acquire a spirit after a certain period). Establishing a clear and precise definition of consciousness is essential for determining which beings should be considered when maximizing freedom of consciousness.
  2. Understanding Freedom: Freedom can be interpreted in various ways, often depending on cultural, philosophical, or individual perspectives. One common interpretation is the notion that “freedom ends at the tip of my nose,” meaning that an individual’s freedom should not infringe upon the rights or freedoms of others. In the context of AI alignment, freedom may involve ensuring that conscious beings can exercise their autonomy, make choices, and pursue their well-being without interference from AI systems or other external forces.
  3. Ethical Conflicts: When attempting to include the most beings possible, several ethical conflicts may arise. For instance, the interests of different beings may sometimes be at odds, leading to dilemmas where AI systems must prioritize certain beings over others. Additionally, cultural and philosophical differences in interpreting consciousness and freedom may complicate the decision-making process for AI systems. Balancing these competing interests and values is a significant challenge in AI alignment.
  4. Short-term vs. Long-term Efforts: Preserving consciousness might involve trade-offs between short-term and long-term efforts. Immediate actions may be necessary to protect conscious beings from immediate harm, while long-term efforts may focus on creating sustainable environments and societies that allow conscious beings to flourish. AI systems must balance these priorities, making decisions that may involve sacrificing short-term benefits for more significant long-term gains or vice versa. This balancing act may require advanced predictive capabilities and a deep understanding of the dynamic relationships between various factors that influence the well-being of conscious beings.

To design AI systems capable of maximizing freedom of consciousness for the most beings, as long as possible, it is crucial to address these complex questions and navigate the myriad ethical, philosophical, and practical challenges they present. By doing so, we can work towards creating AGI that not only aligns with our values but also respects the autonomy and well-being of all conscious beings.

By incorporating these technical approaches into the design and development of AI systems, we can work towards building AGI that genuinely aligns with the long-term goal of maximizing freedom of consciousness for the greatest number of beings, for as long as possible. This will require continuous research, collaboration, and innovation in the field of AI safety and alignment.




Measuring the digital divide

How many calculations do I consume per day (or per second), compared to a person with less access to a smartphone or the Internet?

Computation per capita could be a scientific measure of the digital divide.

Computing may forever be bits and powered logic gates. We measure computer output with FLOPS. [1]  We can average total computational power per person. Segmenting by country adds more insight, which would be similar to maps of Internet access.
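As a back-of-the-envelope sketch, computation per capita is just a ratio; the regional totals and populations below are invented for illustration:

```python
# Toy "computation per capita" calculation with made-up numbers.
# The point is the ratio between regions, not the figures themselves.

def flops_per_capita(total_flops, population):
    return total_flops / population

region_a = flops_per_capita(total_flops=1e18, population=50e6)   # hypothetical
region_b = flops_per_capita(total_flops=1e15, population=200e6)  # hypothetical

divide_ratio = region_a / region_b
print(divide_ratio)  # region A consumes 4,000x more computation per person
```

Segmenting a real measurement this way, per country or per region, would produce a map much like existing maps of Internet access.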

In some parts of the world, decisions are made with very little computation. To reduce inequality, we can provide greater access to computation.

Google and other open services are a force for equality. The cost of a search for any user in one part of the world is near equal to all others. The equality isn’t perfect. Google has more data about me and how I live, so there’s more computation done on my behalf. But even if that is true, less Google is likely better than no Google.

We could try to measure computation per “decision”.

Decisions have measurable amounts of computation. What is the average amount of computation that happens when you tap your phone (in the cloud and on the phone)? I don’t know if we could measure the impact of a tap.[2]  But we could calculate the computational output of a click or tap.

How could you compare the list above to one hundred years ago, or one hundred years in the future?

Perhaps the closest analog to the Internet in the last century was the number of telegrams, pieces of mail, or minutes of phone calls needed to:

  • design and build a locomotive
  • buy a stock
  • buy a ticket on a trans-Atlantic zeppelin
  • decide when to plant corn
  • decide to get married

One hundred years in the future, I’m not sure what we’ll be doing, but I hope we will still make useful decisions.

What could we name “computations per decision”?

Since I’m an unqualified amateur, I’ll make up a term: decision density.

The number of bits used per unit of time to make a perceived decision.

Note “perceived”. By definition, we must be able to observe when we’ve made a decision. We do so based on the support of many calculations, but most computation is hidden [3] and increasingly ubiquitous.

If we can measure decision density per human, we would have another tool to measure the digital divide.
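A minimal sketch of decision density as defined above (bits processed per second in support of one perceived decision), with every quantity invented for illustration:

```python
# Toy decision density calculation. All numbers are made up; the point
# is comparing how computation-dense different decisions are.

def decision_density(bits_processed, seconds):
    """Bits per second spent arriving at one perceived decision."""
    return bits_processed / seconds

# A hypothetical map query behind a "which route?" decision:
# say 5 MB moved and crunched over 2 seconds of interaction.
route_choice = decision_density(bits_processed=5e6 * 8, seconds=2.0)

# A hypothetical offline decision: a 2 KB letter read over five minutes.
letter_reply = decision_density(bits_processed=2e3 * 8, seconds=300.0)

print(route_choice / letter_reply)  # ratio between the two decisions
```

Aggregating something like this per person, across regions, is what would turn decision density into a measure of the digital divide.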

I look forward to mathy, articulate, and design-oriented humans elevating my prepost-human writing. Until then, I’ve got this post, at least until I can ask Her to fix it.

Notes

Elon Musk has mentioned a “recursive y-axis” as a way to measure the progress of computation over time.

[1] – Factoring in energy, you get performance per watt and Koomey’s law. This doesn’t tell us whether the decisions were correct or efficient (see big O notation, or examples of bad code). Nor does this measure tell us how much pre-calculation went into a decision.

[2] – Maybe impact could be measured by how much money flows as a result. Or using HDI.

[3] – A calculation is contained in a function like f(x)=y, and the decision is the output. We could also say a calculation is made by a computer, and a decision is made by a human. (That line is already blurry).

Also – Check out the posts on Quora and the conversation on Hacker News.

 

Starting up with Purpose

It’s not news here in San Francisco, the most dominant region of the Silicon World: I’m creating a company. I have built a prototype. I’ve tested it with users. I’m no miracle worker, and there’s a long, long way to go. And I’ve got just a couple of months to get off the ground.

What makes it worthwhile is the mission.

End the Digital Divide

The Digital Divide will end. The question is “Why?” Even with rapid, global smartphone growth and internet from the sky, there’s so much important work to be done to learn, educate, and protect.

If we work very very hard, and are very very lucky, only a minuscule part of the world will progress because of our efforts. That part matters. 

Starting a company, especially as a parent, isn’t for the faint of heart. I would be grateful for your feedback and support. Reach out to startup@hai.io, follow at @humanassisted or me @hendler

“Soon the digital divide will not be between the haves and the have-nots. It will be between the know-hows and the non-know-hows.” – Howard Rheingold

Sneak peek at early decisions at HAI

Recently launched a site with some basic vision behind HAI.

http://hai.io

But the tech stack is where the rubber meets the road. I’ve been coding for about two months now. At the very beginning, I went through a fair amount of deliberation and selected a backend language based on a number of factors. From the languages I knew, C++, Go, PHP, Python, Java/Scala, and Node.js were on the table. Python and Java were the two top contenders, and I ended up going with Python.

So far I’ve been really happy with Python for the flexibility of the language, the available libraries for both web and machine learning, and the developer community. Ruby/Rails has an amazing community and a great web stack, but given my lack of familiarity with it and the smaller amount of machine learning work being done there, it didn’t make my list.

Then I started evaluating open source projects that could serve as the platform. There are 132 on my list (and I looked at at least four times that many). It’s been amazing getting up to speed on the projects that are open source. Although Google, IBM, Amazon, and others will clearly lead the machine learning space for the foreseeable future, the open source community is catching up.

Open source is a moving target, and there’s no one-size-fits-all when you’re piecing together something new. So I’ve been using the awesome ZeroMQ library to connect services across libraries and languages.

Finally, thanks to everyone who has provided feedback so far. Can’t wait to get what I’m working on out into the world.

More about HAI

For a few weeks I’ve been having meetings with advisors and colleagues. For those I haven’t been meeting with, I’ve fallen behind in communicating what I’m working on.

This is not a stealth startup, but there’s also a lot that’s yet to be determined. I’d prefer to be open, but there are some specifics that I’ll keep under wraps for a variety of reasons. When building a stealth aircraft, you can at least tell people that you are building an aircraft. Skunk Works doesn’t make sandwiches.

Company vision and culture will be in large part determined by cofounders. Here’s where there’s some definition:

  • HAI means AAI (Artificial Artificial Intelligence) – human intelligence built into a process that’s usable by computer intelligence
  • Ethical prime directives.  See Friendly AI
  • a sustainable business model early on
  • A company culture of sustainable innovation, modeled after Google, where a large revenue-generating platform supports R&D.
  • A large, very talented, diverse founding team. Diversity is a no-brainer. “Large” means about five people; I’d rather create value from equity by distributing it to founders than to funders.
  • Boston still has untapped talent and potential. Even if developers can find jobs easily, what kind of job would an engineer want for the rest of their life?