
Thread: Ben Goertzel

  1. #1

  2. #2


    Singularity or Bust

    Nov 3, 2013

    In 2009, filmmaker and former AI programmer Raj Dye spent his summer following futurist AI researchers Ben Goertzel and Hugo de Garis around Hong Kong and Xiamen, documenting their doings and gathering their perspectives. The result, after some work by crack film editor Alex MacKenzie, was the 45-minute documentary Singularity or Bust — a uniquely edgy, experimental Singularitarian road movie, featuring perhaps the most philosophical three-foot-tall humanoid robot ever, a glance at the fast-growing Chinese research scene in the late aughts, and even a bit of a real-life love story. The film was screened in theaters around the world, and won the Best Documentary award at the 2013 LA Cinema Festival of Hollywood and the LA Lift Off Festival. And now it is online, free of charge, for your delectation.

    Singularity or Bust is a true story pertaining to events occurring in the year 2009. It captures a fascinating slice of reality, but bear in mind that things move fast these days. For more recent updates on Goertzel and de Garis's quest for transhuman AI, you'll have to consult the Internet, or your imagination.

    © 2012 Raj Dye
    "Singularity or Bust", Raj Dye, 2012 on IMDb

  3. #3

  4. #4


    Ben Goertzel - Emergence, Reduction & Artificial Intelligence

    Published on Jul 21, 2015

    The concept of emergence is controversial to some - for example Eliezer Yudkowsky, who favors reductionism, wrote a critique at LessWrong (see link below). Do reductionists often dismiss emergence?

  5. #5


    The Possibility of Telepathy in Robots with Ben Goertzel

    Published on May 30, 2016

    Ben Goertzel, PhD, is the author of many books on artificial intelligence, including Ten Years to the Singularity If We Really Really Try; Engineering General Intelligence, Vols. 1 and 2; The Hidden Pattern: A Patternist Philosophy of Mind; and The Path to Posthumanity. He is also editor (with Damien Broderick) of an anthology about parapsychology titled Evidence for Psi: Thirteen Empirical Research Reports. He is chief scientific officer for Hanson Robotics in Hong Kong.

    Here he points out that, while the question of consciousness in robots is problematic, there are similar problems when exploring the question of consciousness in humans. He postulates that AI machines will develop some forms of awareness, and suggests thought experiments involving plugging the human brain directly into computers. He discusses research on extrasensory perception that he considers credible, and suggests that people working in artificial intelligence and cognitive science will need to confront this data. He then speculates about the prospects for telepathic robots.

    New Thinking Allowed host Jeffrey Mishlove, PhD, is the author of The Roots of Consciousness, Psi Development Systems, and The PK Man. Between 1986 and 2002 he hosted and co-produced the original Thinking Allowed public television series. He is the recipient of the only doctoral diploma in "parapsychology" ever awarded by an accredited university (University of California, Berkeley, 1980). He is also past president of the non-profit Intuition Network, an organization dedicated to creating a world in which all people are encouraged to cultivate and apply their inner, intuitive abilities.

    (Recorded on April 29, 2016)

  6. #6


    Will you ever love a robot?

    Published on Jul 28, 2016

    Ben Goertzel talks about the coming change in how we understand what is a machine and what is not.

  7. #7


    Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel

    Published on Feb 5, 2017

    For all the talk of AI, it always seems that gossip is faster than progress. But it could be that within this century, we will fully realize the visions science fiction has promised us, says Dr. Ben Goertzel – for better or worse. Humanity will always create and invent, but the last invention of necessity will be a human-level Artificial General Intelligence mind, which will be able to create a new AGI with super-human intelligence, and continually create smarter and smarter versions of itself. It will provide all basic human needs – food, shelter, water – and those of us who wish to experience a higher echelon of consciousness and intelligence will be able to upgrade to become super-human. Or, perhaps there will be war – there’s a bit of uncertainty there, admits Goertzel. “There’s a lot of work to get to the point where intelligence explodes… But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting,” he says. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

    Transcript: The mathematician I.J. Good back in the mid-1960s introduced what he called the intelligence explosion, which in essence was the same as the concept that Vernor Vinge later introduced and Ray Kurzweil adopted and called the technological singularity. What I.J. Good said was the first intelligent machine will be the last invention that humanity needs to make. Now in the 1960s the difference between narrow AI and AGI wasn’t that clear, and I.J. Good wasn’t thinking about a system like AlphaGo that could win at Go but couldn’t walk down the street or add five plus five. In the modern vernacular what we can say is the first human level AGI, the first human level artificial general intelligence, will be the last invention that humanity needs to make.

    And the reason for that is once you get a human level AGI you can teach this human level AGI math and programming and AI theory and cognitive science and neuroscience. This human level AGI can then reprogram itself and it can modify its own mind and it can make itself into a yet smarter machine. It can make 10,000 copies of itself, some of which are much more intelligent than the original. And once the first human level AGI has created the second one which is smarter than itself, well, that second one will be even better at AI programming and hardware design and cognitive science and so forth and will be able to create the third human level AGI which by now will be well beyond human level.

    So it seems that it’s going to be a laborious path to get to the first human level AGI. I don’t think it will take centuries from now, but it may be decades rather than years. On the other hand, once you get to a human level AGI I think you may see what some futurists have called a hard takeoff, where you see the intelligence increase literally day by day as the AI system rewrites its own mind. And this – it’s a bit frightening, but it’s also incredibly exciting. Does that mean humans will not ever make any more inventions? Of course it doesn’t. But what it means is if we do things right we won’t need to. If things come out the way that I hope they will, what will happen is we’ll have these superhuman minds and largely they’ll be doing their own things. They will also offer to us the possibility to upload or upgrade ourselves and join them in realms of experience that we cannot now conceive in our current human forms. Or these superhuman AGIs may help humans to maintain a traditional human-like existence.

    I mean if you have a million times human IQ and you can reconfigure elementary particles into new forms of matter at will, then supplying a few billion humans with food and water and video games, virtual reality headsets and national parks and flying cars and whatnot – this would be trivial for these superhuman minds. So if they’re well disposed toward us, people who choose to remain in human form could have a simply much better quality of life than we have now. You don’t have to work for a living. You can devote your time to social, emotional, spiritual, intellectual and creative pursuits rather than laboriously doing things you might rather not do just in order to get food and shelter and an internet connection. So I think there are tremendous positive possibilities here, and there’s also a lot of uncertainty, and there’s a lot of work to get to the point where intelligence explodes in the sense of a hard takeoff. But I do think it’s reasonably probable we can get there in my lifetime, which is rather exciting.

  8. #8


    Will Superhuman Intelligence Be Our Friend or Foe? | Ben Goertzel

    Published on Feb 12, 2017

    Let's just go ahead and address the question on everyone’s mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg-up in 'the singularity', or will it look at our collective track record of harming our own species, other species, the world that gave us life, etc., and exterminate us like pests? AI expert Ben Goertzel believes we’ve been at this point of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk – no one knew it would lead to cities and space flight. When we spoke the first word, took up agriculture, invented the printing press, flicked the internet on-switch – all of these things could have led to our demise, and in some sense, our eventual demise can be traced all the way back to the day ancient humans learned how to make fire. Progress helps us, until the day it kills us. That said, fear of negative potential cannot stop us from attempting forward motion – and by now, says Goertzel, it’s too late anyway.

    Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil, Nigeria would march on. We know there are massive benefits – both humanitarian and corporate – and we have latched onto the idea. “The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand,” says Goertzel, and for better or worse, “that’s what we’re going to keep on doing.” Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

  9. #9


    How to build an A.I. brain that can surpass human intelligence

    Published on May 7, 2018

    Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially.

    If you think much about physics and cognition and intelligence it’s pretty obvious the human mind is not the smartest possible general intelligence any more than humans are the highest jumpers or the fastest runners. We’re not going to be the smartest thinkers.

    If you are going to work toward AGI rather than focusing on some narrow application, there are a number of different approaches that you might take. And I’ve spent some time just surveying the AGI field as a whole and organizing an annual conference on AGI. And then I’ve spent a bunch more time on a specific AGI approach based on OpenCog, an open-source software platform. In the big picture, one way to approach AGI is to try to emulate the human brain at some level of precision. And this is the approach I see, for example, Google DeepMind taking. They’ve taken deep neural networks, which in their common form are mostly a model of visual and auditory processing in the human brain. And now in their recent work, such as the DNC, the differentiable neural computer, they’re taking these deep networks that model visual or auditory processing and coupling them with a memory matrix, which models some aspect of what the hippocampus does, which is the part of the brain that deals with working memory and short-term memory, among other things. So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe you take more and more neural networks emulating different parts of the human brain, and you try to get them all to work together, not necessarily doing computational neuroscience, but trying to emulate the way different parts of the brain do their processing and the way they talk to each other.
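    To make "coupling a deep network with a memory matrix" concrete, here is a minimal sketch of content-based addressing, the differentiable read operation at the core of DNC-style external memory. The function names and dimensions are illustrative assumptions, not DeepMind's actual code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_read(memory, key, beta):
    # Compare the query key against every memory row by cosine
    # similarity, sharpen the scores with beta, and return a weighted
    # blend of the rows: a soft, differentiable "lookup" that a
    # controller network can learn to drive.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = (memory @ key) / norms
    weights = softmax(beta * similarity)
    return weights @ memory

# Toy usage: in a real DNC the controller network emits `key` and
# `beta`; here we query a 4-slot memory of 3-dim vectors by hand.
memory = np.random.randn(4, 3)
read_vector = content_read(memory, key=np.array([1.0, 0.0, 0.0]), beta=5.0)
```

    Because every step is differentiable, gradients flow from the read vector back into whatever network produced the key, which is what lets the memory and the controller network be trained together.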

    A totally different approach is being taken by a guy named Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhuman, infinitely intelligent thinking machine in like 50 lines of code. The problem is it would take more computing power than there is in the entire universe to run. So it’s not practically useful, but researchers are then trying to scale down from this theoretical AGI to find something that will really work.
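    For the curious: the "beautiful book" is Hutter's Universal Artificial Intelligence (2005), and the agent it defines is called AIXI. Stated from memory in roughly Hutter's notation (a paraphrase, not an exact citation), the action rule at time step k with horizon m is:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left( r_k + \cdots + r_m \right)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

    Here U is a universal Turing machine, q ranges over all programs (candidate environment models) consistent with the history of actions, observations and rewards, and l(q) is the length of q, so shorter world-models get exponentially more weight (Solomonoff's universal prior). The sum over all programs is what makes AIXI incomputable, which is exactly the "more computing power than there is in the entire universe" problem mentioned above.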

    Now the approach we’re taking in the OpenCog project is different than either of those. We’re attempting to emulate at a very high level the way the human mind seems to work as an embodied social generally intelligent agent which is coming to grips with hard problems in the context of coming to grips with itself and its life in the world. We’re not trying to model the way the brain works at the level of neurons or neural networks. We’re looking at the human mind more from a high-level cognitive point of view. What kinds of memory are there? Well, there’s semantic memory about abstract knowledge or concrete facts. There’s episodic memory of our autobiographical history. There’s sensory-motor memory. There’s associative memory of things that have been related to us in our lives. There’s procedural memory of how to do things.

    And we then look at the different kinds of learning and reasoning the human mind can do. We can do logical deduction sometimes. We’re not always good at it. We make emotional intuitive leaps and strange creative combinations of things. We learn by trial and error and habit. We learn socially by imitating, mirroring, emulating or opposing others. These different kinds of memory and learning that the human mind has – one can attempt to achieve each of those with a cutting-edge computer science algorithm, rather than trying to achieve each of those functions and structures in the way the brain does.
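    Purely as an illustration of that idea, here is one rough pairing of cognitive functions with algorithm families in the OpenCog tradition. PLN, MOSES, and ECAN are real OpenCog components, but this tidy one-to-one table is a simplification, not an official architecture diagram:

```python
# Illustrative only: a simplified mapping from kinds of memory and
# learning to the algorithm families an OpenCog-style system might
# pair with them.
memory_to_algorithm = {
    "declarative/semantic": "probabilistic logic inference (PLN)",
    "procedural":           "evolutionary program learning (MOSES)",
    "episodic":             "replay and simulation over stored experience",
    "associative":          "attention-driven spreading activation (ECAN)",
    "sensory-motor":        "deep neural perception and actuation models",
}

for memory_kind, algorithm in memory_to_algorithm.items():
    print(f"{memory_kind}: {algorithm}")
```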

    So in OpenCog we have a central knowledge repository, which is very dynamic and lives in RAM on a large network of computers, which we call the AtomSpace. And for the mathematicians or computer scientists in the audience, the AtomSpace is what you’d call a weighted labeled hypergraph. So it has nodes. It has links. A link can go between two nodes, or a link could go between three, four, five or 50 nodes. Different nodes and links have different types, and the nodes and links can have numbers attached to them. A node or link could have a weight indicating a probability or a confidence. It could have a weight indicating how important it is to the system right now, or how important it is in the long term, so that it should be kept around in the system’s memory.
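    As a rough sketch of what such a weighted labeled hypergraph can look like in code, here is a minimal Python version. The class and field names follow the description above; the real AtomSpace API and type system are considerably richer:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Atom:
    atom_type: str           # the label, e.g. "ConceptNode" or "InheritanceLink"
    strength: float = 1.0    # probability-like weight
    confidence: float = 1.0  # how much evidence backs that strength
    sti: float = 0.0         # short-term importance: matters right now
    lti: float = 0.0         # long-term importance: worth keeping in memory

@dataclass
class Node(Atom):
    name: str = ""

@dataclass
class Link(Atom):
    # A link can join any number of atoms: 2, 3, 5 or 50.
    targets: Tuple[Atom, ...] = ()

# Toy hypergraph: "cats are animals", believed with strength 0.9.
cat    = Node("ConceptNode", name="cat")
animal = Node("ConceptNode", name="animal")
is_a   = Link("InheritanceLink", strength=0.9, confidence=0.8,
              targets=(cat, animal))
```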

  10. #10


    Will robots liberate humans? - Interview with Dr. Ben Goertzel at #WebSummit18

    Published on Nov 21, 2018

    In this interview at Web Summit 2018, Dr. Ben Goertzel shares some thoughts about the technological singularity, the role of robots and AI, and SingularityNET.
