LIFE 3.0 - Max Tegmark Notes

Ayushi Mishra
4 min read · Sep 12, 2021


*This book is really interesting and well suited for a beginner (to physics non-fiction). I have not written what follows myself; these are notes from the book. This blog does not contain any “spoilers” because it’s non-fiction. You can absolutely read the blog and then decide whether you want to read the book or not. If you do read the book, you can refer back here whenever you forget something. I have jotted this down because these are really interesting facts that I like to revise. If you are not a reader but still want to know what an interesting book like this contains, then you should read on. Do share your thoughts in the contact or comment section.*

The basics start as follows:

When a bacterium makes a copy of its DNA, no new atoms are created, but a new set of atoms are arranged in the same pattern as the original, thereby copying the information. In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

“Life 1.0”: life where both the hardware and software are evolved rather than designed.

“Life 2.0”: life whose hardware is evolved, but whose software is largely designed. By your software, the author means all the algorithms and knowledge that you use to process the information from your senses and decide what to do.

Your synapses store all your knowledge and skills as roughly 100 terabytes’ worth of information, while your DNA stores merely about a gigabyte, barely enough to store a single movie download. So it’s physically impossible for an infant to be born speaking perfect English and ready to ace her college entrance exams: there’s no way the information could have been preloaded into her brain, since the main information module she got from her parents (her DNA) lacks sufficient information-storage capacity.
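The rough numbers above can be sanity-checked with a quick back-of-envelope calculation. The synapse count, bits per synapse and genome size below are ballpark estimates I am assuming for illustration, not figures quoted from the book:

```python
# Rough back-of-envelope for the storage comparison above.
# The synapse count, bits-per-synapse and base-pair figures are common
# ballpark estimates (assumptions), not numbers quoted from the book.

SYNAPSES = 1e14          # ~10^14 synapses in an adult human brain (rough estimate)
BITS_PER_SYNAPSE = 10    # a few bits of state per synapse (rough estimate)
BASE_PAIRS = 3.2e9       # ~3.2 billion base pairs in the human genome
BITS_PER_BASE_PAIR = 2   # 4 possible bases -> 2 bits each

brain_bytes = SYNAPSES * BITS_PER_SYNAPSE / 8
dna_bytes = BASE_PAIRS * BITS_PER_BASE_PAIR / 8

print(f"Brain (synapses): ~{brain_bytes / 1e12:.0f} TB")   # roughly 100 TB
print(f"DNA (genome):     ~{dna_bytes / 1e9:.1f} GB")      # roughly 1 GB
```

With these assumptions the brain comes out around a hundred terabytes and the genome under a gigabyte, which is why the book says an infant simply cannot be born preloaded with all that knowledge.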

The AI talk starts here

One popular myth is that we know we’ll get superhuman AGI (artificial general intelligence) this century. In fact, history is full of technological over-hyping. Physicists know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs. Information can take on a life of its own, independent of its physical substrate!

A computation is a transformation of one memory state into another. In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.

The hardware is the matter and the software is the pattern. This substrate independence of computation implies that AI is possible: intelligence doesn’t require flesh, blood or carbon atoms.
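As a toy sketch of my own (not from the book) to make substrate independence concrete: the same half-adder computation can be realized with boolean logic or with a bare lookup table, and the input-to-output pattern is identical either way.

```python
# Toy illustration of substrate independence: the same computation (a half-adder)
# implemented on two different "substrates". What matters is the input->output
# pattern, not how it is realized.

def half_adder_logic(a: int, b: int):
    """Half-adder built from boolean operations."""
    total = a ^ b        # XOR gives the sum bit
    carry = a & b        # AND gives the carry bit
    return total, carry

# The same computation as a bare lookup table -- a completely different "substrate".
HALF_ADDER_TABLE = {
    (0, 0): (0, 0),
    (0, 1): (1, 0),
    (1, 0): (1, 0),
    (1, 1): (0, 1),
}

def half_adder_table(a: int, b: int):
    return HALF_ADDER_TABLE[(a, b)]

# Both realizations embody the exact same pattern.
for a in (0, 1):
    for b in (0, 1):
        assert half_adder_logic(a, b) == half_adder_table(a, b)
print("Same pattern, different substrate.")
```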

Quantum computing pioneer David Deutsch controversially argues that “quantum computers share information with huge numbers of versions of themselves throughout the multiverse,” and can get answers faster here in our Universe by in a sense getting help from these other versions.

Deep-learning neural networks (they’re called “deep” if they contain many layers) are much more efficient than shallow ones for many functions of interest. David Rolnick showed that the simple task of multiplying n numbers requires a whopping 2^n neurons for a network with only one layer, but takes only about 4n neurons in a deep network (a small counting sketch follows below). NAND gates and neurons are two important examples of universal “computational atoms” from which any computation can be built.

Then there is what the DeepMind team had done with the Atari game Breakout. Instead of hand-coding any game knowledge, they’d created a blank-slate AI that knew nothing about this game, or about any other games, or even about concepts such as games, paddles, bricks or balls. All their AI knew was that a long list of numbers got fed into it at regular intervals: the current score and a long list of numbers which we (but not the AI) would recognize as specifications of how different parts of the screen were colored. The AI was simply told to maximize the score by outputting, at regular intervals, numbers which we (but not the AI) would recognize as codes for which keys to press. DeepMind soon published their method and shared their code, explaining that it used a very simple yet powerful idea called deep reinforcement learning.
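Coming back to the depth-versus-width claim, here is the small counting sketch mentioned above. It assumes, roughly in the spirit of Rolnick’s result, that each pairwise product costs about 4 neurons; the exact constants are my assumption, and only the linear-versus-exponential contrast is the point.

```python
# Counting sketch for the depth-efficiency claim (my own illustration).
# Assumption: one pairwise product x*y can be built from about 4 neurons,
# e.g. via the identity x*y = ((x + y)**2 - (x - y)**2) / 4.

def deep_neuron_count(n: int) -> int:
    """Multiply the n inputs pairwise in a binary tree: n-1 products, ~4 neurons each."""
    return 4 * (n - 1)                      # grows linearly, about 4n

def shallow_neuron_count(n: int) -> int:
    """A single hidden layer needs on the order of 2^n neurons for the same task."""
    return 2 ** n                           # grows exponentially

for n in (4, 8, 16, 32):
    print(f"n={n:>2}:  deep ~{deep_neuron_count(n):>4} neurons   "
          f"shallow ~{shallow_neuron_count(n):,} neurons")
```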

Basic reinforcement learning is a classic machine-learning technique inspired by behaviorist psychology, where getting a positive reward increases your tendency to do something again, and vice versa. There are vastly more possible Go positions than there are atoms in our Universe, which means that trying to analyze all interesting sequences of future moves rapidly becomes hopeless.
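As a minimal sketch of that reward idea (my own toy example, nothing to do with DeepMind’s code): an epsilon-greedy agent on a two-armed bandit raises its value estimate for whichever action pays off, and therefore picks that action more often.

```python
import random

# Minimal sketch of basic reinforcement learning on a two-armed bandit:
# actions that yield reward have their value estimates raised, so they get chosen more often.

random.seed(0)
true_payout = {"left": 0.3, "right": 0.7}   # hidden reward probabilities (assumed for the toy)
value = {"left": 0.0, "right": 0.0}         # the agent's learned estimates
epsilon, learning_rate = 0.1, 0.1

for step in range(2000):
    # Explore occasionally, otherwise exploit the currently best-looking action.
    if random.random() < epsilon:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_payout[action] else 0.0
    # Nudge the estimate toward the observed reward.
    value[action] += learning_rate * (reward - value[action])

print(value)   # the "right" arm ends up valued higher, so it gets picked more often
```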

To read the full story, head on over to the link below.

Read more.
