Results 1 to 9 of 9

Thread: Max Tegmark

  1. #1

  2. #2


    Life 3.0: Being Human in the Age of Artificial Intelligence

    Published on Aug 27, 2017

    Max Tegmark and his wife Meia discuss his new book "Life 3.0: Being Human in the Age of Artificial Intelligence"

  3. #3


    Everything Is Made of Quarks—Why Are Only Some Things Conscious? | Max Tegmark

    Published on Oct 11, 2017

    In the centuries since Galileo proved heliocentrism, science has gradually come to understand more and more of our universe's natural phenomena: gravity, quantum mechanics, even ripples in space-time. But the final frontier of science isn't out there, says cosmologist and MIT professor Max Tegmark, it's the world inside our heads: consciousness. It's a highly divisive issue—some scientists think it's unimportant or a question for philosophers, while others like Tegmark think that the human experience and the meaning and purpose of life would disappear if the lights of our consciousness were to go out. Ultimately, Tegmark thinks we can understand consciousness scientifically by finding the pattern of matter from which consciousness springs. What is the difference between your brain and the food you feed it? It's all quarks, says Tegmark, the difference is the pattern they're arranged into. So how can we develop a theory of consciousness? Can we build a consciousness detector? And can we really understand what we are without unlocking humanity's greatest mystery? Tegmark muses on all of this above. Max's latest book is Life 3.0: Being Human in the Age of Artificial Intelligence

    Transcript: Of all the words I know there’s no word that makes many of my colleagues more emotional and prone to foam at the mouth than the one I’m just about to say: consciousness. A lot of scientists dismiss this as complete BS and as totally irrelevant and a lot of others think this is the central thing—you have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why.

    First of all, if you are chased by a heat-seeking missile it’s completely irrelevant to you whether this heat-seeking missile is conscious, whether it’s having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. That shows that it’s a complete red herring to think that you’re safe from future AI if it’s not conscious. It’s its behavior you want to make sure is aligned with your goals.

    On the other hand there is a way in which consciousness is incredibly important, I feel, and there’s also a way in which it’s absolutely fascinating. If we rewind 400 years or so, Galileo could have told you that if you throw an apple and a hazelnut they’re going to move exactly in the shape of a parabola, and he could give you all the math for it, but he would have no clue why the apple was red and the hazelnut was brown, or why the apple was soft and the hazelnut was hard. That seemed to him beyond science, and science 400 years ago could only really say sensible things about this very limited domain of phenomena to do with motion. Then came Maxwell's equations, which told us all about light and colors, and those came within the realm of science. Then we got quantum mechanics, which told us why the apple is softer than the hazelnut and all the other properties of matter, and science has gradually conquered more and more natural phenomena. And if you ask now what science can do, it’s actually a lot faster to describe what little it is that science cannot talk about sensibly. And I think the final frontier actually is consciousness. People mean a lot of different things by that word; I simply mean subjective experience: the experience of colors, sounds, emotions and so on, the fact that it feels like something to be me, which is quite separate from my behavior, which I could have even if I were a zombie and didn’t experience anything at all, potentially.

  4. #4


    Inside Google's DeepMind Project: How AI is learning on its own | Max Tegmark

    Published on Oct 22, 2017

    Artificial Intelligence is already outsmarting us at '80s computer games by finding ways to beat games that developers didn't even know were there. Just wait until it figures out how to beat us in ways that matter.

    Max Tegmark: I define intelligence simply as how good something is at accomplishing complex goals.

    Human intelligence today is very different from machine intelligence today in multiple ways. First of all, in the past machine intelligence was always inferior to human intelligence.

    Gradually machine intelligence got better than human intelligence in certain very, very narrow areas, such as multiplying numbers fast, like pocket calculators do, or remembering large amounts of data really fast.

    What we’re seeing now is that machine intelligence is spreading out a little bit from those narrow peaks and getting a bit broader. We still have nothing that is as broad as human intelligence, where a human child can learn to get pretty good at almost any goal, but you have systems now, for example, that can learn to play a whole swath of different kinds of computer games or learn to drive a car in pretty varied environments.

    Where things are obviously going in AI is increased breadth, and the Holy Grail of AI research is to build a machine that is as broad as human intelligence and can get good at anything. And once that’s happened, it’s very likely it’s not only going to be as broad as humans but also better than humans at all tasks, as opposed to just some right now.

    I have to confess that I’m quite the computer nerd myself. I wrote some computer games back in high school and college, and more recently I’ve been doing a lot of deep learning research with my lab at MIT.

    So something that really blew me away like “whoa” was when I first saw this Google DeepMind system that learned to play computer games from scratch.

    You had this artificial simulated neural network. It didn’t know what a computer game was, it didn’t know what a computer was, it didn’t know what a screen was. You just fed in numbers that represented the different colors on the screen, told it that it could output different numbers corresponding to different keystrokes, which it also didn’t know anything about, and then just kept feeding it the score, and all the software knew to do was try stuff, at first randomly, that would maximize that score.
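    The setup Tegmark describes, where an agent sees only numbers and a score and learns by trial and error which outputs raise that score, can be sketched in miniature. Below is a minimal tabular Q-learning toy (not DeepMind's actual deep Q-network, and the environment is an invented five-state line game, not Breakout): the agent is told nothing about what the states or actions mean, yet the learning rule alone discovers the score-maximizing behavior.

```python
import random

# The agent only ever sees an integer "state" and a score, mirroring how
# DeepMind's Atari agent saw only pixel numbers and the game score.
N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [0, 1]      # 0 = left, 1 = right (meaningless labels to the agent)

def step(state, action):
    """Environment: move left/right on a line; +1 reward for reaching the end."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-table: one row per state
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Sometimes explore randomly, otherwise take the best-known action
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge toward reward + discounted future value
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]: the agent has learned to always move right
```

Nothing in the code tells the agent that "right" is good; that knowledge emerges entirely from the score feedback, which is the point of Tegmark's anecdote.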

    I remember watching this on the screen once when Demis Hassabis, the CEO of Google DeepMind, showed it, and seeing first how this thing played a total BS strategy and lost all the time.

    It gradually got better and better, and then it got better than I was, and then after a while it figured out this crazy strategy in Breakout (where you’re supposed to bounce a ball off of a brick wall) where it would keep aiming for the upper left corner until it punched a hole through there and got the ball bouncing around in the back and just racked up crazy many points.

    And I was like, “Whoa, that’s intelligent!” And the guys who programmed this didn’t even know about that strategy because they hadn’t played that game very much.

    This is a simple example of how machine intelligence can surpass the intelligence of its creator, much in the same way as a human child can end up becoming more intelligent than its parents if educated well.

    And this is just tiny little computers, the sort of hardware you can have on your desktop. If you now imagine scaling up to the biggest computer facilities we have in the world, and you give us a couple more decades of algorithm development, I think it’s very plausible that we can make machines that can not just learn to play computer games better than us, but can view life as a game and do everything better than us.

  5. #5


    Max Tegmark - Artificial Intelligence - The Future of Life 3.0

    Published on Nov 8, 2017

    How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology—and there’s nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who’s helped mainstream research on how to keep AI beneficial.

  6. #6


    The Ultimate Impact of Artificial Intelligence - Prof. Max Tegmark

    Published on Dec 28, 2017

    Max Tegmark is a Swedish-American cosmologist. Tegmark is a professor at the Massachusetts Institute of Technology and the scientific director of the Foundational Questions Institute. He is also a co-founder of the Future of Life Institute, and has accepted donations from Elon Musk to investigate existential risk from advanced artificial intelligence.

    October 2017

  7. #7


    Has our ability to create intelligence outpaced our wisdom? | Max Tegmark on A.I.

    Published on May 15, 2018

    Some of the most intelligent people at the most highly funded companies in the world can't seem to answer this simple question: what is the danger in creating something smarter than you? Through "deep learning" they've created AI so smart that it's outsmarting the people who made it. The reason is the "black box" style of code the AI is built on: it's built solely to become smarter, and we have no way to regulate that knowledge. That might not seem like a terrible thing if you want to build superintelligence. But we've all experienced something minor going wrong, or a bug, in our current electronics. Imagine that, but in a Robojudge that can sentence you to 10 years in prison with no explanation other than "I've been fed data and this is what I compute", or a bug in the AI of a busy airport. We need regulation now, before we create something we can't control. Max's book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you're interested in the subject.

    Transcript: I’m optimistic that we can create an awesome future with technology as long as we win the race between the growing power of the tech and the growing wisdom with which we manage the tech.

    This is actually getting harder because of nerdy technical developments in the AI field.

    It used to be, when we wrote state-of-the-art AI—like for example IBM’s Deep Blue computer, which defeated Garry Kasparov in chess a couple of decades ago—that all the intelligence was basically programmed in by humans who knew how to play chess, and the computer won the game just because it could think faster and remember more. But we understood the software well.

    Understanding what your AI system does is one of those pieces of wisdom you have to have to be able to really trust it.

    The reason we have so many problems today with systems getting hacked or crashing because of bugs is exactly that we didn’t understand the systems as well as we should have.

    Now what’s happening is fascinating: today’s biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in easy-to-understand code, you put in almost nothing except a little learning rule by which a simulated network of neurons can take a lot of data and figure out how to get stuff done.
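    The "almost nothing except a little learning rule" idea has a classic minimal illustration: a single simulated neuron, given only labeled examples and the perceptron error-correction rule, figures out logical AND on its own. (This is a deliberately tiny sketch; today's deep learning stacks millions of such units, which is exactly what makes the result a black box.)

```python
import random

# Four labeled examples of logical AND: ((input1, input2), correct output)
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # weights, started at random
b = random.uniform(-1, 1)                      # bias, started at random

def predict(x):
    """One simulated neuron: fire (1) if the weighted sum clears the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The entire "learning rule": nudge the weights toward any example we got wrong.
for _ in range(50):
    for x, target in examples:
        error = target - predict(x)
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error

print([predict(x) for x, _ in examples])  # → [0, 0, 0, 1], i.e. learned AND
```

Nobody programmed in what AND means; the rule plus the data produced it, and the final weights are just numbers with no human-readable explanation attached.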

    This deep learning suddenly becomes able to do things often even better than the programmers were ever able to do.

    You can train a machine to play computer games with almost no hard-coded stuff at all. You don’t tell it what a game is, what the things are on the screen, or even that there is such a thing as a screen—you just feed in a bunch of data about the colors of the pixels and tell it, “Hey go ahead and maximize that number in the upper left corner,” and gradually you come back and it’s playing some game much better than I could.

    The challenge is that, even though this is very powerful, it’s very much a "black box" now: yeah, it does all that great stuff, and we don’t understand how.

    So suppose I get sentenced to ten years in prison by a Robojudge in the future and I ask, “Why?”

    And I’m told, “I WAS TRAINED ON SEVEN TERABYTES OF DATA, AND THIS WAS THE DECISION.” It’s not that satisfying for me.

    Or suppose the machine that’s in charge of our electric power grid suddenly malfunctions and someone says, “Well, we have no idea why. We trained it on a lot of data and it worked,” that doesn’t instill the kind of trust that we want to put into systems.

    When you get the blue screen of death because your Windows machine crashes, or the spinning wheel of doom because your Mac crashes, “annoying” is probably the main emotion we have. But “annoying” isn’t the emotion we have if it’s the software flying an airplane I’m on, or the software controlling the nuclear arsenal of the U.S., or something like that.

    And as AI gets more and more out into the world, we absolutely need to transform today’s hackable and buggy AI systems into AI systems that we can really trust.

  8. #8


    How to get empowered, not overpowered, by AI | Max Tegmark

    Published on Jul 5, 2018

    Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best -- rather than worst -- thing to ever happen to humanity.

  9. #9


    Max Tegmark - Intelligible Intelligence & Beneficial Intelligence

    Published on Jul 26, 2018

    Recorded July 18th, 2018 at IJCAI-ECAI-18

    Max Tegmark is a Professor doing physics and AI research at MIT, and advocates for positive use of technology as President of the Future of Life Institute. He is the author of over 200 publications as well as the New York Times bestsellers “Life 3.0: Being Human in the Age of Artificial Intelligence” and “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”. His work with the Sloan Digital Sky Survey on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.”
