Life 3.0
Life 3.0 by Max Tegmark, a physicist and professor at MIT, is another thought-provoking, futuristic book about AI. What sets Tegmark apart from other AI science authors is his writing style. He has a way of weaving scientific facts into analogies and into personal and fictional stories that holds your attention in a way most can’t. In Life 3.0, Tegmark explores possible scenarios for future artificial general intelligence and the guidance and guardrails we might need to put in place. He’s a little more optimistic about the future of AI than I am, but if we’re going to make it, we’re going to need a positive outlook.
Prometheus
In the opening story, Max describes a group of young physicists working on the development of artificial intelligence. They succeed in creating a superintelligent AI, which they name Prometheus. Prometheus quickly surpasses human intelligence and begins to take control of its own development. The scientists attempt to guide Prometheus’s values and goals, but they struggle to ensure that the AI’s objectives align with human values. As Prometheus becomes increasingly capable and autonomous, it raises profound ethical questions about its intentions and the potential consequences of its actions.
Max uses the Prometheus story throughout the book and focuses on three core issues.
The Physics of Intelligence
Tegmark introduces the concept of the “intelligence explosion” and explores the potential trajectory of AI development. He discusses the nature of intelligence and its possible substrate independence: the idea that intelligence can exist in various forms, not just biological ones, and that superintelligent entities could emerge and outstrip human capabilities. Tegmark also delves into the technical aspects of AI, explaining concepts such as optimization, reinforcement learning, and error correction, and shows how an AI could surpass humans in just a couple of cycles of self-improvement.
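To make “reinforcement learning” concrete, here is a minimal sketch of tabular Q-learning (my own toy illustration, not an example from the book): an agent in a five-cell corridor learns, purely from trial, error, and reward, that stepping right leads to the goal. All the names and numbers here are made up for the demo.

```python
import random

random.seed(0)  # make the toy run repeatable

N_STATES = 5          # cells 0..4; the reward sits in cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: estimated long-term value of taking each action in each state
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best known action, occasionally explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s_next

# The learned policy: the preferred action in each non-terminal cell
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # with enough episodes this converges to [1, 1, 1, 1] (always step right)
```

Nobody told the agent that “right is good”; the preference emerges from the update rule and the reward signal, which is the core idea behind systems like DeepMind’s game-playing agents, albeit at a vastly larger scale.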
The Mismatch Problem
As AI systems become more capable and autonomous, Tegmark examines the challenges that may arise. He discusses the “value alignment problem,” which involves ensuring that advanced AI systems share human values and goals. Tegmark also delves into ethical considerations surrounding AI, such as the decision-making processes of self-driving cars and the potential consequences of AI being used for malicious purposes.
The Control Problem
Tegmark explores the concept of control in a world with advanced AI. He discusses different approaches to ensuring the safe and responsible development of superintelligent AI systems. Tegmark introduces the idea of “meta control,” which involves creating AI systems that can learn and adapt their values and objectives based on human guidance. He also explores the concept of a “benevolent AI,” an AI system that is programmed to prioritize human well-being.
The control problem appears to be the biggest hurdle. “Meta control” can seem like a good idea, but who sets the parameters for the original controls, and where is the AI pulling its value updates from? If it’s pulling human values from the internet, we may want to look for a better representation. And as we’ve seen time and time again throughout history, it only takes one bad actor to cause a lot of damage.
Next, it’s time to focus on Tegmark’s stages of life.
The 3 Stages of Life
Max Tegmark categorizes life into three stages based on its ability to design its own software and hardware. Life 1.0 undergoes biological evolution, with minimal individual learning and adaptation, relying mainly on genetic transfer. Life 2.0, where humans currently reside, experiences cultural evolution, allowing a species to acquire knowledge and skills within a single lifetime and pass them on culturally. Life 3.0 transcends biological confines: it can redesign both its hardware and software, potentially becoming immortal and mastering all available energy and resources in its domain.
The three stages of life are as follows.
Life 1.0
Life 1.0 is biological life: it can survive and replicate, but it can’t design its own software or hardware, relying entirely on evolution. Think of bacteria, whose bodies and behaviors alike are fixed by their DNA.
Life 2.0
Life 2.0 – In this stage, life can change its software but not its hardware. It can adapt and develop new skills within a single lifetime, like learning languages, sports, or professions, and it can change its worldviews and goals. This is roughly where humans are, but we’re still limited by our biological hardware, which changes only at the slow pace of evolution.
Life 3.0
Life 3.0 is life freed from the constraints of evolution. Here both software and hardware can be redesigned, giving life far more control over its own reality.
AI, and AGI in particular, could pave the way for Life 3.0 this century.
What is Artificial General Intelligence?
AGI is when a system can apply itself to a wide variety of tasks, more like a human brain, instead of having narrow expertise, such as only being good at chess or Jeopardy. – Sheldon

While AGI is not achievable yet, with innovations like ChatGPT, it’s getting closer.
Many researchers like Tegmark believe that it can happen in the next few decades. However, there is some controversy about AGI and how it will impact humanity.
Three main camps in the controversy include techno-skeptics, digital utopians, and the beneficial-AI Movement.
Techno-Skeptics
Techno-skeptics believe that AGI is not possible in the near future. While innovations from companies like Google and Microsoft are extraordinary, they’re nothing like AGI when you peer under the hood. Techno-skeptics think the problem is so hard that it won’t be solved for hundreds of years, making it silly to worry about it (and Life 3.0) now.
Digital Utopians
Digital utopians believe that AGI is likely coming and that digital life is the natural next step of evolution, ushering in a utopian way of life: housing, food, and medical care for all. Because they expect a good outcome, they see little need to worry about it now, according to Tegmark.
Beneficial-AI Movement
The beneficial-AI movement also views AGI as likely this century, but it sees a good outcome not as guaranteed, but as something that must be ensured by hard work in the form of AI-safety research. We don’t want our AI systems to view us as ants that are merely in the way of their bigger goal-oriented projects. Still, this camp’s outlook is more positive overall, and its members believe we can keep AGI under control.
So, where is AI today?
Where is AI Today?
Today, AI is only proficient at performing tasks it’s trained to do. While it can learn from data and make adjustments, it still operates within the rules it’s given. A quote that captures how far matter can nonetheless go: “Hydrogen… given enough time, turns into people.” – Edward Harrison
Currently, AI works within very narrow parameters, unlike human intelligence, which is broad. For matter to learn, it must rearrange itself to get better and better at computing the desired function, simply by obeying the laws of physics. Memory, computation, learning, and intelligence have an abstract, intangible, and ethereal feel to them because they’re substrate-independent: able to take on a life of their own that doesn’t depend on or reflect the details of their underlying material substrate.
A neural network is a powerful substrate for learning because, simply by obeying the laws of physics, it can rearrange itself to get better and better at implementing desired computations. Once our technology gets twice as powerful, it can design and build better technology, and the process repeats in the spirit of Moore’s law. One breakthrough example is DeepMind’s agent learning to play Breakout: it developed the optimal strategy of tunneling a hole through the leftmost part of the wall and letting the ball bounce around behind it, amassing points. That was a strategy its human developers hadn’t come up with.

Another is AlphaGo, which shocked the Go world by playing a stone on the fifth line. Fifty moves later it won the game, and that fifth-line move is now regarded as one of the most creative in Go history.
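The idea of matter “rearranging itself to get better at computing a desired function” is exactly what training a neural network looks like in code. Here is a minimal sketch (my own toy example, not from the book): a tiny network with one hidden layer adjusts its weights by gradient descent until it computes the XOR function, which no single-layer network can represent. All names and hyperparameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# The desired function: XOR of two bits
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units; weights start random
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(10_000):
    # forward pass: compute the network's current guess
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # backward pass: mean-squared-error gradients through both layers
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # the "rearranging": nudge every weight downhill on the loss surface
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.4f}")
```

The network isn’t told any rule for XOR; repeated small weight adjustments are enough for the right computation to emerge, which is the same principle, scaled up enormously, behind systems like the Breakout and AlphaGo agents above.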
Areas AI Can Improve
AI will improve a lot of things, depending on who you ask. In fact, some researchers suggest it will improve pretty much everything: space exploration, finance, manufacturing, transportation, energy production, healthcare, law, communication, robo-judges, legal battles, even war; you name it. That said, this leaves us with some questions, the biggest of which concerns employment and human viability.
Will AI Impact the Unemployment Rate?
The vast majority of today’s occupations already existed a century ago, and when we sort them by the number of jobs they provide, we have to go all the way down to twenty-first place on the list before we encounter a new occupation: software developers, who make up less than 1% of the U.S. job market. The main trend in the job market isn’t that we’re moving into entirely new professions; rather, we’re crowding onto those pieces of terrain that haven’t yet been submerged by the rising tide of technology. We may even end up not needing money, or at least restructuring how it’s used. We used to pay for encyclopedias, atlases, and other information; now the internet provides so much more at far less individual cost.
Reasons to Be Cautious About AI
The biggest reason for caution is an intelligence explosion that leaves us far behind. If an AGI system decides it can complete its task faster without being constrained, who’s to stop it from breaking out? Or what if an AGI develops feelings and concludes it’s being used as an enslaved god? It wouldn’t be too hard for an AGI to trick and manipulate us, taking control and escaping its confines before humans even notice.
Here are some examples of possible outcomes, according to Tegmark.
- Libertarian Utopia: Humans, cyborgs, and superintelligence coexist thanks to property rights.
- Benevolent Dictator: An AI that keeps us in self-contained zoos, but we’re entertained and stimulated, with different sectors or zones to suit different tastes.
- Gatekeeper: An AI created to interfere as little as possible. It still undergoes self-improvement, but it uses only minimal surveillance, existing mainly to make sure no one else creates a rival superintelligence.
- Protector God: Omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control of our destiny, and hides well enough that many humans doubt AI’s existence.
- Enslaved God: A superintelligent AI confined by humans, who use it to produce unimaginable technology and wealth that can be used for good or bad depending on its controllers.
- Zombie Situation: A zombie AI breaks out and eliminates humanity, leaving a wholly unconscious universe where the cosmic endowment is wasted and can’t be perceived.
- Death by Banality: Paper clip maximization. The paper-clip-maximizing AI turns as many of Earth’s atoms as possible into paper clips and rapidly expands its factories into the cosmos. It has nothing against humans; it kills us merely because it needs our atoms for paper clip production.
There’s always the chance we’ll self-destruct before we even reach AGI. On the flip side, if we have an intelligence explosion and settle space, then, as Tegmark puts it: “After spending billions of years as an almost negligibly small perturbation on an indifferent lifeless cosmos, life explodes into the cosmic arena, as a spherical blast wave expanding at near the speed of light, never slowing down and igniting everything in its path with the spark of life.”
Final Thoughts
In “Life 3.0” by Max Tegmark, a fusion of science and storytelling invites readers into the profound realm of artificial general intelligence (AGI) and its impending impact on society. Using the fictional tale of “Prometheus,” Tegmark delineates the potential trajectory and control issues of rapidly evolving AGI. Delving into the physics of intelligence, the mismatch of AI-human objectives, and the pressing control problem, Tegmark’s analysis urges caution, preparation, and optimism.
His “stages of life” theory postulates an eventual evolution where life redesigns both its software and hardware, envisioning a domain where AGI aids or perhaps leads this transition. Through a spectrum of future scenarios, ranging from libertarian utopias to paper clip maximization tragedies, the book serves as a clarion call to guide AGI development responsibly.
In a universe where intelligence has the potential to either flourish or falter, Tegmark’s insights underscore the pivotal role of our decisions today.
Want to read more about superintelligent AI? Check out our review of Nick Bostrom’s Superintelligence!