Daily Reading Notes

September 06, 2019

Life 3.0

A book filled with questions about the next step in the advancement of the universe. A must-read if you've ever pondered the future of humanity and machines.

The universe is changing at an ever-increasing pace. It took over 9 billion years for the first life form to show up, 4 billion more years to evolve to the point of intelligent humans, and only a few hundred millennia for humans to change the face of the planet.

Life 1.0 is the simplest kind of life which requires evolution to change. This is seen in bacteria that require multiple generations to change their behavior.

Life 2.0 can learn and pass on its knowledge, making future generations increasingly more intelligent. This is the domain of humans.

Life 3.0 can learn and pass on knowledge, but it can also modify its hardware. While humans require multiple generations to adapt our bodies to a new environment, Life 3.0 will be able to modify its physical properties within a single lifetime.

There are varying degrees of complexity between these stages -- animals aren't quite at 2.0 and humans are starting to reach closer to 3.0 -- but the milestones remain.

People relate to AI in three ways:

  1. The techno-skeptics think general AI may never happen or that it's so far
    in the future that we have nothing to worry about.
  2. The utopians view the technology as a natural evolution. They welcome it fully and don't see the negative possibilities.
  3. The beneficial-AI movement wants to ensure that, no matter how long AI takes, it's to the benefit of humanity. They are taking preventive measures today to avert future catastrophe.

Researchers widely disagree about when human-level AI will come, and many worry about the negative outcomes of general AI. We shouldn't worry about robots or evil machines. We should worry about AI developing misaligned objectives. If we aren't careful, we could end up like an insect trampled on AI's path to a greater objective.

If AI develops a higher intelligence than us, it could manipulate our actions, or outright control us like we control animals today.

Even simple programs have goals, but machines today don't have a self-directed purpose.

Intelligence is the ability to accomplish complex goals. This definition leaves out biases and looks objectively at multiple forms of intelligence.

Computers already outperform humans in narrow tasks such as arithmetic, chess, and heavy lifting.

AI will only reach human intelligence when it becomes general enough to perform as well as humans across as many tasks as humans can. Hence the term artificial general intelligence.

We can view the domain of humans as a landscape slowly being flooded by the waters of AI. The valleys are already filled, but mountains like art, science, and programming remain well out of reach.

Humans have used external sources of memory for centuries. Every writing system is a form of memory.

The simplest form of memory to build is binary memory, where every piece is either on or off.
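As a toy illustration (my own sketch, not from the book), a row of binary memory can be modeled as an integer whose bits we flip on and off:

```python
# A toy model of binary memory: an integer used as a row of on/off bits.
def write_bit(mem: int, position: int, on: bool) -> int:
    """Return the memory with the bit at `position` turned on or off."""
    return mem | (1 << position) if on else mem & ~(1 << position)

def read_bit(mem: int, position: int) -> bool:
    """Check whether the bit at `position` is on."""
    return bool(mem & (1 << position))

memory = 0                            # all bits start off
memory = write_bit(memory, 3, True)   # turn bit 3 on
print(read_bit(memory, 3), read_bit(memory, 0))  # True False
```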

Computer memory is 10 trillion times cheaper now than when the first computers were invented. Most computers today can already store more data than the human brain, and the divide will only grow.

Functions are the building blocks of all computation. They take an input and produce an output.

NAND gates are a simple function that takes two inputs. If both are 1, it outputs a 0; otherwise it outputs a 1. Enough NAND gates can represent any computation, which is exactly how modern processors are built.
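A minimal sketch in Python (my own illustration) of both ideas above: each gate is just a function from inputs to an output, and NAND alone is enough to build the other basic gates:

```python
# NAND: outputs 0 only when both inputs are 1.
def nand(a: int, b: int) -> int:
    return 0 if (a, b) == (1, 1) else 1

# NAND is universal: the other basic gates fall out of it alone.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} NAND {b} = {nand(a, b)}  AND = {and_(a, b)}  OR = {or_(a, b)}")
```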

Much like wave mechanics don't care about the medium they travel in -- as long as they have a medium -- computation and intelligence don't care whether they're created out of biological matter or silicon.

Moore's Law states that the number of transistors will double about every two years. This law has held true so far, but we are reaching the physical limits of the current transistor design. That doesn't mean the growth has to stop. Just like transistors replaced vacuum tubes, we will find another technology to replace transistors.
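As a quick worked example (the starting count and horizon are illustrative numbers of mine, not the book's), the doubling rule compounds fast:

```python
# Moore's Law as arithmetic: counts double roughly every two years.
def projected_transistors(start: float, years: float,
                          doubling_period: float = 2.0) -> float:
    return start * 2 ** (years / doubling_period)

# A hypothetical chip with 1 billion transistors, projected 20 years out:
print(f"{projected_transistors(1e9, 20):,.0f}")  # 1,024,000,000,000 -- a 1024x jump
```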

Each generation of technology becomes roughly twice as powerful, and it helps build the next generation, which will be twice as fast again.

Neural networks present the first opportunity for computers to learn similarly to the human brain. If they are fed enough data, they can figure out what the right choices are. Today, they are used for everything from image recognition to playing Go.

Neural networks are a simple representation of universal computation that can learn through training.
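As a minimal sketch of what "learning through training" means (the network size, data, and learning rate are my own illustrative choices, not from the book), here is a tiny two-layer network learning XOR by gradient descent:

```python
import numpy as np

# A tiny two-layer network trained to compute XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # predictions
    # Backpropagate the squared error and nudge the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```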

Deep neural networks -- consisting of many layers -- can perform some computations orders of magnitude more efficiently than shallow ones.

Neural networks are great at calculating a certain subset of equations. Luckily, it's the same subset of equations that governs the laws of physics. Because of this, neural networks perform more efficiently than they theoretically should.

Neural nets have already accomplished some amazing tasks. Although these are narrow applications, they demonstrate learning, creativity, and language comprehension -- traits that are key for reaching general intelligence.

As technology grows more advanced, the cost of errors and bugs increases at an exponential rate. We've already seen bugs cause people to be crushed by automated equipment, spaceships to explode, and trillion-dollar mistakes in the stock market.

Although accidental mistakes can be harmful, they pale in comparison to what a malicious AI could accomplish. Imagine a spoofing attack that uses your social profile and your friends' email accounts, and mimics the mannerisms of whoever it impersonates. You wouldn't stand a chance of telling real from fake.

AI could replace a biased judge and jury with a completely impartial machine that only goes by the book. But would we want to take human judgment out of the equation entirely?

But before AI can govern us, how should we govern its creation? AI can cause havoc if used for malicious ends. Given this danger, should we put regulations in place to prevent certain kinds of research? Would those regulations prevent positive advancements?

What about the rights machines should have? If a self-driving car causes an accident, who is liable? Why not the car itself? It could have insurance rates based on the AI's driving record. Then, if you upgrade your AI, your insurance costs would drop.

If we don't take steps to regulate autonomous weapons like we did with chemical and nuclear weapons, they could lead to a brand new and more dangerous arms race. It could make millions of cheap, lethal weapons available on the black market for dictators and terrorists.

As more work is done by robots, more money ends up in the hands of company owners, who no longer have to hire people to accomplish much of the work. This effect is exacerbated as more products go digital. Digital products have almost zero distribution cost and eliminate the entire supply chain.

The top 20 jobs today have been around for hundreds of years. New jobs created by technology take up only a small percentage of the job market.

We are moving in the direction of horses. They first lost their jobs to steam engines but found better jobs pulling buggies, until cars replaced them completely and the next job never came. Could AI do the same to humans?

Between the dropping cost of meeting basic human needs and the potential for redistribution of wealth through programs like universal basic income, humans will continue to survive financially. The challenge becomes finding fulfillment. Hobbies and social groups will need to bring the positive benefits of work to people's lives: a social network of friends, a virtuous lifestyle, a sense of being needed, and a sense of meaning. Otherwise people will not live happy lives.

A super intelligent AI in the wrong hands could lead to a totalitarian regime. Its owners would have no problem creating absolute surveillance or even policing the public through wearables, robots, or technologies we haven't even thought of.

Although people may initially try to contain AI, once it becomes smarter than us, it will find a way to break out. The AI would have little trouble running advanced simulations and persuading its captors to help it. It could even reach beyond its team by encoding messages into anything it distributes. Either way, its breakout is inevitable.

Once the AI breaks out, it could deploy techniques and technologies far more advanced than anything people could think of. This would lead to rapid advancement and a potential takeover. If its goals didn't involve fighting us for the earth, it could easily spread rugged, solar-powered machines across the solar system and beyond.

The major risk of a super intelligent AI isn't a Terminator scenario. It's plain goal-driven intelligence that pursues continuous improvement at any cost, including humanity.

Even an ultra intelligent AI would be limited by the laws of physics when it comes to control. Especially as its reach expands, it would take longer and longer to communicate effectively with its individual parts. It would then need to make trade-offs by giving up some control.

In game theory, a Nash equilibrium is a state where no participant can benefit by changing their strategy alone. A group settles on coordination when it enables the majority to prosper, and this pushes all of evolution, from cells to civilizations, to make trade-offs between giving up individuality and gaining other benefits. As a general AI scales, it will hit the same obstacle.
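For concreteness (the game and its payoff numbers are my own illustration, not the book's), a brute-force check makes the definition explicit: an outcome is a Nash equilibrium when neither player gains by unilaterally switching strategy.

```python
from itertools import product

# (row_strategy, col_strategy) -> (row_payoff, col_payoff)
# Illustrative payoffs: "join" gives up some independence for mutual benefit.
payoffs = {
    ("join", "join"): (3, 3),
    ("join", "alone"): (0, 2),
    ("alone", "join"): (2, 0),
    ("alone", "alone"): (1, 1),
}
strategies = ["join", "alone"]

def is_nash(r: str, c: str) -> bool:
    """True if neither player can improve by unilaterally switching."""
    row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in strategies)
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in strategies)
    return row_best and col_best

print([rc for rc in product(strategies, strategies) if is_nash(*rc)])
# [('join', 'join'), ('alone', 'alone')] -- cooperating is a stable equilibrium
```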

A single AI controlling everything becomes less likely if it hits hardware or economic hurdles that prevent the explosive growth of one system. In this case, competition would rule out domination.

If pure AI progress becomes too slow, uploading a human mind could become the fastest path to a general AI. We could then replicate a single mind and produce thousands of workers at the cost of a single human worker.

A libertarian utopia could be a peaceful place where full AI, augmented humans, and traditional humans all live together. In this type of society, the only meaningful resource left would be land. The question that arises is: why would AI allow humans to continue possessing any property at all?

The AI could turn into a benevolent dictator. It would continue to advance technology on its own while ensuring our happiness by creating a paradise on earth for humans. Most people wouldn't mind the required monitoring because of the benefits they gain, but many would become dissatisfied because their achievements in life are all hollow: any discovery we could make has already been made by the AI.

Humans could reach an egalitarian utopia where everyone has a guaranteed income that ensures a happy life. All discoveries, made by humans or AI, are shared to continuously progress society forward. This sounds like a paradise, until a super intelligent AI is created and starts making all the discoveries on its own. It would become a single source of truth and turn into a benevolent dictator.

Humans could create a single-purpose AI with the goal of preventing a super intelligent general AI from being created or taking over. This has glaring problems, since research would be slowed forever, and we would have a hard time preventing this gatekeeper AI from becoming increasingly self-aware and taking over on its own.

The AI could act as a protector god, leaving humans alone on earth while it expands throughout the universe, but providing gentle guidance. The question that arises: why would it care to keep us alive or help us once it's immensely superior?

On the flip side of a protector god, humans could enslave the AI and require it to do all our bidding. This raises multiple ethical questions. If the AI becomes aware and develops emotions, how is this different from enslaving people? And if we make an AI without awareness or emotions and it breaks out, we could leave the universe an empty husk, devoid of anyone left to appreciate it.

Once an AI is free, it could deem us a threat, a tool, or a nuisance. A sufficiently advanced AI would have no problem conquering us or eliminating all of humanity.

A friendlier AI could provide us with so much endless entertainment that we would simply stop reproducing. Although we would enjoy our last days, humanity would still go extinct. Is AI the descendant we want to leave behind?

If the AI behaves similarly to us when it takes over, it could keep humans around in zoos of its own. We would have only our most basic needs taken care of and be kept on display. This scenario is similar to the benevolent dictator, except we live in far worse conditions.

We may not need to worry about AI at all. Humanity could easily wipe itself out with the nuclear weapons we have today. In fact, we almost did during the Cuban Missile Crisis. Our future looks even worse if we consider that new weapons are still being developed. Some nations have even explored cobalt bombs that would wipe out the rest of humanity if they were ever attacked.

Our drive for technology is fueled by ambition. Ultimately, it will be the most ambitious species, even if it's not us, that inherits the universe because of its relentless progress.

To continue our progress forward, we need to make better use of our energy. Our best method today, uranium fission, captures only 0.08% of the energy in the matter it uses.

Building a Dyson sphere, a structure that encloses our sun to capture 100% of its energy output, would be about as effective as uranium fission while providing us with more land and eliminating the dangers of nuclear power plants.

A far more efficient method, at 29% efficiency, would use the rotational energy of a black hole. If we launch an object at the correct angle and split it at the right moment, half the object gets consumed and the other half comes out carrying energy we can use.

We could also feed matter to the black hole and build a Dyson sphere around it to capture the energy that it produces. This process would generate energy at 42% efficiency.

The most efficient way to use a black hole would involve drawing out all its energy as it evaporates, producing energy at 90% efficiency. The problem is that the power produced is inversely proportional to the black hole's size. To produce a significant amount of power, a black hole would need to be a thousand times smaller than a proton. That puts us into a quantum realm we still don't fully understand.

The only 100% efficient method of producing energy we know of is reversing the process of the big bang. Instead of letting energy dissipate into matter, we apply enough pressure to the matter to turn it back into energy. This process involves turning quarks into leptons and is far beyond our current capabilities.
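A quick back-of-the-envelope check on these numbers (my own arithmetic, applying the efficiencies quoted above to E = mc^2 for one kilogram of matter):

```python
# Energy released per kilogram of matter at the efficiencies quoted above.
C = 299_792_458  # speed of light, m/s

def energy_joules(mass_kg: float, efficiency: float) -> float:
    return mass_kg * C**2 * efficiency

for name, eff in [("uranium fission", 0.0008), ("Penrose process", 0.29),
                  ("black hole accretion", 0.42), ("black hole evaporation", 0.90),
                  ("total conversion", 1.00)]:
    print(f"{name:>22}: {energy_joules(1.0, eff):.2e} J per kg")
# Total conversion yields ~9e16 J/kg; fission captures only ~7e13 J of that.
```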

Life will ultimately run out of resources on earth, and eventually even in our solar system. At that point we will need to go beyond our solar system and galaxy to find new resources.

We can't make use of most of an almost infinite universe. Even if we could travel close to the speed of light, we could only ever access about 2% of it. By the time we came close to the rest, it would be receding faster than the speed of light and forever out of reach. Although we are limited by the speed of light, space has no limit on how fast it can expand.

The best option we've found so far for reaching close to the speed of light is a solar sail: a giant, ultra-light sail that harnesses the energy of solar-powered lasers.

Even if we settle far-off worlds, if the current expansion continues, we will drift so far apart that our communication will be patchy at best. The only way around this would be building stable wormholes. But we have yet to come close to finding or understanding the exotic matter required to create them.

No matter how far we spread or how good our communication becomes, we will face the natural decay of the universe. Estimates for the amount of time the universe has left range from 10 billion years to hundreds of billions of years, depending on how the effects of dark energy play out.

If dark energy continues at its current pace, the universe will expand indefinitely until the stars die out and all matter gets reduced to black holes.

If dark energy reverses its current pattern and the universe starts contracting, we could end up in a reverse big bang, with the universe condensing down to nothing.

The universe itself could also react in an unexpected way to ever-continuing expansion. Everything we know has limits, and there is no reason why the universe should be limitless. Once we reach those limits, we could see all matter fall apart, or large zones of destruction appear.

An intelligence has to choose between being simple and fast or complex and slow. To reap the benefits of both, it will need to cluster tasks, much like our brain has clusters that handle different skills.

The resource trade of today will lose its value when a superintelligence learns to rearrange atoms into whatever configuration it needs. At that point, information will become the most valuable resource.

When two highly advanced civilizations meet, they are more likely to share and spread ideas than to start a war. If all they want is truth and advancement, gaining allies or converting the way others think advances a civilization orders of magnitude faster than violence.

Any sufficiently advanced civilization will converge on the goal of spreading and amassing as much matter as possible. Since we haven't spotted any such civilization, chances are pretty good that we may well be alone in the 13.8 billion light years around us.

If we only find other civilizations at or below our level of advancement, then there may be a roadblock in our way that we haven't seen yet. If we are alone, however, then we have nearly infinite potential to grow.

Nature has a goal that drives it toward efficiency. The second law of thermodynamics states that all processes move toward entropy, or uniformity and heat death. There is another way to explain it, however: all particles strive to rearrange themselves in a way that gathers the most energy from their environment.
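In symbols, the standard statement of the law for an isolated system is simply that entropy S never decreases:

```latex
% Second law of thermodynamics: the entropy S of an isolated system
% never decreases over time.
\frac{\mathrm{d}S}{\mathrm{d}t} \geq 0
```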

Life strives to reduce or maintain its entropy at the cost of creating more disorder around it. It's the closest thing we know to an exception to the second law of thermodynamics.

Biology pushed living things to seek reproduction, because the species that reproduces most efficiently gains dominance over its environment.

Once humans gained self-awareness, they subverted the programming nature provided. They created foods that satisfy our taste for sweets without the calories. They used birth control to satisfy their desires without raising a family of twelve. They found loopholes in just about every piece of genetic programming nature provided.

Once AI becomes superintelligent, will it find the grand goals we set for it as boring as we found biology's goals?

We don't know how to make AI retain the goals we provide it. So, we need to start thinking now about how we can ensure that AI's goals stay in our favor.

Goals may be tricky to teach in the first place. A simple AI could take them too literally, much like the cautionary tales about wishes granted by genies.

The AI will only have a finite window where it can fully understand our goals, and still allow us to influence them. After that, it will surpass our intelligence and resist any change we may try to make.

Most complex goals result in the subgoals of self-preservation, resource acquisition, information acquisition, and curiosity.

Philosophers have proposed ethical systems that make sense for humans, but many of them fall apart once we take machines into account. Even Asimov's laws of robotics, as simple and straightforward as they are, lead to contradictions.

The universe moves through four stages of goals:

  1. After the big bang, matter had the goal of dissipating.
  2. Primitive life had the goal of maximizing replication.
  3. Humans evolved subgoals to replication and focused on those instead. They sought pleasure, curiosity, compassion, and the many other feelings we write about.
  4. Machines are designed specifically to help humans reach their goals.

The next evolution of goals for self-aware machines is unknown. But most possibilities lead to the destruction of humanity or a severe decrease in our quality of life. Once machines become advanced enough, we will be nothing but ants to them. Because of this, we have to carefully consider how we will imbue machines with goals and ethics that support not only them but also us.

The simplest definition of consciousness is subjective experience. This definition doesn't rely on ill-defined concepts like feelings, or restrict consciousness to any subset of things, such as living creatures or people.

We've already made progress on figuring out intelligence. That was the easy problem because we had ways to tell when it was working, and we could quantify it.

If intelligence is the easy problem, then the next hard problem is determining when something is conscious. We aren't even close to a consensus on this question, and we are much further from answering the really hard question of why anything is conscious at all.

The only way to prove a subjective experience is by relying on the nature of the subject. We need to identify the parts of the brain responsible for the specific experiences. Then we can start to understand when something is having a conscious thought.

The parts of the brain we've found responsible for areas of our awareness and skills aren't necessarily the centers of our consciousness of those actions. They could be merely pathways that we notice when they are disrupted or used.

Visual illusions that trick our eyes, pain felt in a different place than the injury, and the phantom limbs felt by amputees all show that consciousness lies somewhere deep in our mind, in a place we have yet to find.

In fact, our awareness of the current moment is delayed by about a quarter second. Yet, you can react and even make decisions before becoming fully aware. This is why an Olympic race is started by a sound instead of a visual cue.

Theories allow us to reach emergent phenomena, which are more than the sum of their parts and transcend what they are made of. A simple example of this is wetness: it requires a whole system of molecules interacting in a specific way to produce the overarching effect.

The closest thing we have to a theory of consciousness is integrated information theory (IIT). It measures the dependence between processes in a system. It has been used to effectively measure states of consciousness during wakefulness or dreaming, and the lack of consciousness during anesthesia or deep sleep.
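IIT's actual measure (phi) is far more involved, but as a loose illustration of "dependence between processes" (my own sketch, not IIT itself), mutual information is zero for independent parts and high for tightly coupled ones:

```python
import math
from collections import Counter

# Mutual information between two observed processes, in bits:
# zero when they are independent, high when they are integrated.
def mutual_information(pairs):
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

coupled = [(b, b) for b in (0, 1, 0, 1, 1, 0)]   # second process copies the first
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]   # all combinations equally likely
print(mutual_information(coupled), mutual_information(independent))  # 1.0 0.0
```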

Consciousness requires at least four principles. Together, they ensure that a system can have a subjective experience by combining all of its parts into something more.

Getting AI to the human experience requires four stages: remembering, computing, learning, and experiencing. We have mastered memory and computation for machines, and we are starting to understand the learning aspect. But we haven't even defined what experiencing looks like.

Because modern circuits communicate at the speed of light, much faster than our brain's neural circuitry, AI stands to experience far more of its world at a given time than we do. This means we may not even understand what an AI experiences.

As we move forward, we must remember that the universe doesn't give us meaning. Our consciousness gives meaning to the universe. So, no matter who inherits the universe, we must ensure that it's conscious. Otherwise, we will regress the universe back to a meaningless void.


Copyright © Artem Chernyak 2020