Ben Goertzel


Singularity Or Bust

Nov 3, 2013

In 2009, filmmaker and former AI programmer Raj Dye spent his summer following futurist AI researchers Ben Goertzel and Hugo de Garis around Hong Kong and Xiamen, documenting their doings and gathering their perspectives. The result, after some work by crack film editor Alex MacKenzie, was the 45-minute documentary Singularity or Bust — a uniquely edgy, experimental Singularitarian road movie, featuring perhaps the most philosophical three-foot-tall humanoid robot ever, a glance at the fast-growing Chinese research scene in the late aughts, and even a bit of a real-life love story. The film was screened in theaters around the world, and won the Best Documentary award at the 2013 LA Cinema Festival of Hollywood and the LA Lift Off Festival. And now it is online, free of charge, for your delectation.

Singularity or Bust is a true story pertaining to events occurring in the year 2009. It captures a fascinating slice of reality, but bear in mind that things move fast these days. For more recent updates on Goertzel and de Garis's quest for transhuman AI, you'll have to consult the Internet, or your imagination.

© 2012 Raj Dye

"Singularity or Bust", Raj Dye, 2012 on IMDb
 

Ben Goertzel - Emergence, Reduction & Artificial Intelligence

Published on Jul 21, 2015

The concept of emergence is controversial to some - for example, Eliezer Yudkowsky, who favors reductionism, wrote a critique at Less Wrong (see link below). Do reductionists often dismiss emergence?
 

The Possibility of telepathy in robots with Ben Goertzel

Published on May 30, 2016

Ben Goertzel, PhD, is author of many books on artificial intelligence including Ten Years to the Singularity if We Really Really Try; Engineering General Intelligence, Vols. 1 and 2; The Hidden Pattern: A Patternist Philosophy of Mind; and The Path to Posthumanity. He is also editor (with Damien Broderick) of an anthology about parapsychology titled Evidence for Psi: Thirteen Empirical Research Reports. He is chief scientific officer for Hanson Robotics in Hong Kong.

Here he points out that, while the question of consciousness in robots is problematic, there are similar problems when exploring the question of consciousness in humans. He postulates that AI machines will develop some forms of awareness; and suggests thought experiments involving plugging the human brain directly into computers. He discusses credible research on extrasensory perception and suggests that people working in artificial intelligence and cognitive science will need to confront this data. He then speculates about the prospects for telepathic robots.

New Thinking Allowed host, Jeffrey Mishlove, PhD, is author of The Roots of Consciousness, Psi Development Systems, and The PK Man. Between 1986 and 2002 he hosted and co-produced the original Thinking Allowed public television series. He is the recipient of the only doctoral diploma in "parapsychology" ever awarded by an accredited university (University of California, Berkeley, 1980). He is also past-president of the non-profit Intuition Network, an organization dedicated to creating a world in which all people are encouraged to cultivate and apply their inner, intuitive abilities.

(Recorded on April 29, 2016)
 

Artificial General Intelligence: Humanity's Last Invention | Ben Goertzel

Published on Feb 5, 2017

For all the talk of AI, it always seems that gossip is faster than progress. But it could be that within this century we will fully realize the visions science fiction has promised us, says Dr. Ben Goertzel – for better or worse. Humanity will always create and invent, but the last invention of necessity will be a human-level Artificial General Intelligence mind, which will be able to create a new AGI with superhuman intelligence, and continually create smarter and smarter versions of itself. It will provide all basic human needs – food, shelter, water – and those of us who wish to experience a higher echelon of consciousness and intelligence will be able to upgrade to become superhuman. Or, perhaps there will be war – there's a bit of uncertainty there, admits Goertzel. "There's a lot of work to get to the point where intelligence explodes… But I do think it's reasonably probable we can get there in my lifetime, which is rather exciting," he says. Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

Transcript: The mathematician I.J. Good back in the mid-1960s introduced what he called the intelligence explosion, which in essence was the same as the concept that Vernor Vinge later introduced and Ray Kurzweil adopted and called the technological singularity. What I.J. Good said was the first intelligent machine will be the last invention that humanity needs to make. Now in the 1960s the difference between narrow AI and AGI wasn't that clear, and I.J. Good wasn't thinking about a system like AlphaGo that could win at Go but couldn't walk down the street or add five plus five. In the modern vernacular what we can say is the first human-level AGI, the first human-level artificial general intelligence, will be the last invention that humanity needs to make.

And the reason for that is once you get a human level AGI you can teach this human level AGI math and programming and AI theory and cognitive science and neuroscience. This human level AGI can then reprogram itself and it can modify its own mind and it can make itself into a yet smarter machine. It can make 10,000 copies of itself, some of which are much more intelligent than the original. And once the first human level AGI has created the second one which is smarter than itself, well, that second one will be even better at AI programming and hardware design and cognitive science and so forth and will be able to create the third human level AGI which by now will be well beyond human level.
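The recursive improvement described above can be made concrete with a toy numerical sketch. The growth law and the constant k below are illustrative assumptions of mine, not anything Goertzel specifies; the only point is that when each generation's improvement factor scales with the designer's own intelligence, growth accelerates from generation to generation.

```python
def explosion(generations, start=1.0, k=0.1):
    """Toy model: intelligence of successive self-improving AGI generations.

    Each generation builds a successor, and a smarter designer achieves a
    proportionally bigger improvement (super-exponential growth).
    """
    levels = [start]  # generation 0: human-level intelligence, normalized to 1.0
    for _ in range(generations):
        # the improvement factor itself scales with the designer's intelligence
        levels.append(levels[-1] * (1 + k * levels[-1]))
    return levels

levels = explosion(10)
# each generation gains proportionally more than the one before it
```

Under these assumptions the ratio between consecutive generations keeps rising, which is the shape of the "hard takeoff" discussed below in the transcript.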

So it seems that it's going to be a laborious path to get to the first human-level AGI. I don't think it will take centuries from now but it may be decades rather than years. On the other hand, once you get to a human-level AGI I think you may see what some futurists have called a hard takeoff, where you see the intelligence increase literally day by day as the AI system rewrites its own mind. And this – it's a bit frightening, but it's also incredibly exciting. Does that mean humans will not ever make any more inventions? Of course it doesn't. But what it means is if we do things right we won't need to. If things come out the way that I hope they will, what will happen is we'll have these superhuman minds and largely they'll be doing their own things. They will also offer to us the possibility to upload or upgrade ourselves and join them in realms of experience that we cannot now conceive in our current human forms. Or these superhuman AGIs may help humans to maintain a traditional human-like existence.

I mean if you have a million times human IQ and you can reconfigure elementary particles into new forms of matter at will, then supplying a few billion humans with food and water and video games, virtual reality headsets and national parks and flying cars and whatnot – this would be trivial for these superhuman minds. So if they're well disposed toward us, people who choose to remain in human form could have a simply much better quality of life than we have now. You don't have to work for a living. You can devote your time to social, emotional, spiritual, intellectual and creative pursuits rather than laboriously doing things you might rather not do just in order to get food and shelter and an internet connection. So I think there are tremendous positive possibilities here, and there's also a lot of uncertainty, and there's a lot of work to get to the point where intelligence explodes in the sense of a hard takeoff. But I do think it's reasonably probable we can get there in my lifetime, which is rather exciting.
 

Will Superhuman Intelligence Be Our Friend or Foe? | Ben Goertzel

Published on Feb 12, 2017

Let's just go ahead and address the question on everyone's mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg-up in 'the singularity', or will it look at our collective track record of harming our own species, other species, the world that gave us life, etc., and exterminate us like pests? AI expert Ben Goertzel believes we've been at this point of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk – no one knew it would lead to cities and space flight. When we spoke the first word, took up agriculture, invented the printing press, flicked the internet on-switch – all of these things could have led to our demise, and in some sense, our eventual demise can be traced all the way back to the day an ancient human learnt how to make fire. Progress helps us, until the day it kills us. That said, fear of negative potential cannot stop us from attempting forward motion – and by now, says Goertzel, it's too late anyway.

Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil, Nigeria would march on. We know there are massive benefits – both humanitarian and corporate – and we have latched to the idea. “The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand,” says Goertzel, and for better or worse, “that’s what we’re going to keep on doing.” Ben Goertzel's most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence
 

How to build an A.I. brain that can surpass human intelligence

Published on May 7, 2018

Artificial intelligence has the capability to far surpass our intelligence in a relatively short period of time. But AI expert Ben Goertzel knows that the foundation has to be strong for that artificial brain power to grow exponentially.

If you think much about physics and cognition and intelligence it’s pretty obvious the human mind is not the smartest possible general intelligence any more than humans are the highest jumpers or the fastest runners. We’re not going to be the smartest thinkers.

If you are going to work toward AGI rather than focusing on some narrow application, there are a number of different approaches that you might take. And I've spent some time just surveying the AGI field as a whole and organizing an annual conference on AGI. And then I've spent a bunch more time on the specific AGI approach which is based on the OpenCog open source software platform. In the big picture, one way to approach AGI is to try to emulate the human brain at some level of precision. And this is the approach that, for example, Google DeepMind is taking. They've taken deep neural networks, which in their common form are mostly a model of visual and auditory processing in the human brain. And now in their recent work, such as the DNC, the differentiable neural computer, they're taking these deep networks that model visual or auditory processing and they're coupling that with a memory matrix which models some aspect of what the hippocampus does, which is the part of the brain that deals with working memory and short-term memory, among other things. So this illustrates an approach where you take neural networks emulating different parts of the brain, and maybe you take more and more neural networks emulating different parts of the human brain. You try to get them all to work together, not necessarily doing computational neuroscience, but trying to emulate the way different parts of the brain are doing processing and the way they're talking to each other.
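The "deep network coupled with a memory matrix" idea can be sketched in a few lines. This is a minimal, pure-Python illustration of content-based reading from an external memory (softmax attention over cosine similarity), which is the core read mechanism in memory-augmented networks like the DNC; all names and sizes here are my own illustrative choices, not DeepMind's actual API.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-8)

def content_read(memory, key, beta=10.0):
    """Read from memory rows by softmax-weighted similarity to `key`.

    `beta` sharpens the attention: higher beta focuses the read on the
    single best-matching memory slot.
    """
    sims = [beta * cosine(row, key) for row in memory]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]        # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    # the value read out is a weighted blend of the memory rows
    return [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]

memory = [[1.0, 0.0], [0.0, 1.0]]                 # two memory slots
read = content_read(memory, [1.0, 0.1])           # query close to slot 0
```

Because the softmax is differentiable, a controller network can learn what to write into and read from such a memory end to end, which is what distinguishes this design from an ordinary lookup table.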

A totally different approach is being taken by Marcus Hutter at the Australian National University. He wrote a beautiful book on universal AI in which he showed how to write a superhuman, infinitely intelligent thinking machine in something like 50 lines of code. The problem is it would take more computing power than there is in the entire universe to run. So it's not practically useful, but researchers are then trying to scale down from this theoretical AGI to find something that will really work.
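Hutter's construction is known as AIXI. As a rough sketch from memory (notation may differ slightly from the book), the agent chooses each action by considering every computable environment program q consistent with its observation-reward history, weighting simpler programs more heavily via the 2^{-ℓ(q)} prior, and maximizing expected future reward:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl(r_t + \cdots + r_m\bigr)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The sum over all programs q on a universal Turing machine U is what makes the definition short to write down and impossible to compute, which is exactly the trade-off Goertzel describes.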

Now the approach we're taking in the OpenCog project is different from either of those. We're attempting to emulate, at a very high level, the way the human mind seems to work as an embodied, social, generally intelligent agent which is coming to grips with hard problems in the context of coming to grips with itself and its life in the world. We're not trying to model the way the brain works at the level of neurons or neural networks. We're looking at the human mind more from a high-level cognitive point of view. What kinds of memory are there? Well, there's semantic memory about abstract knowledge or concrete facts. There's episodic memory of our autobiographical history. There's sensory-motor memory. There's associative memory of things that have been related to us in our lives. There's procedural memory of how to do things.

And we then look at the different kinds of learning and reasoning the human mind can do. We can do logical deduction sometimes. We’re not always good at it. We make emotional intuitive leaps and strange creative combinations of things. We learn by trial and error and habit. We learn socially by imitating, mirroring, emulating or opposing others. These different kinds of memory and learning that the human mind has – one can attempt to achieve each of those with a cutting-edge computer science algorithm, rather than trying to achieve each of those functions and structures in the way the brain does.
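The pairing of cognitive functions with algorithms can be summarized as a small table. This is my informal reading of OpenCog's historical design: the component names (AtomSpace, MOSES, PLN, ECAN) are real OpenCog projects, but the mapping below is a rough summary for illustration, not official documentation.

```python
# Rough mapping from the cognitive functions Goertzel lists to the
# algorithms OpenCog has historically paired with them (informal sketch).
COGNITIVE_MAP = {
    "semantic memory":    "AtomSpace hypergraph (typed nodes and links)",
    "episodic memory":    "AtomSpace with temporal links",
    "procedural memory":  "MOSES (probabilistic evolutionary program learning)",
    "logical reasoning":  "PLN (Probabilistic Logic Networks)",
    "associative memory": "ECAN (Economic Attention Networks)",
}

for function, mechanism in COGNITIVE_MAP.items():
    print(f"{function:20s} -> {mechanism}")
```

The design choice is the one the transcript describes: attack each function with a cutting-edge computer science algorithm rather than reproduce how the brain implements it.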

So in OpenCog we have a central knowledge repository, which is very dynamic and lives in RAM on a large network of computers, which we call the AtomSpace. And for the mathematicians or computer scientists in the audience, the AtomSpace is what you'd call a weighted labeled hypergraph. So it has nodes. It has links. A link can go between two nodes, or a link could go between three, four, five or 50 nodes. Different nodes and links have different types, and the nodes and links can have numbers attached to them. A node or link could have a weight indicating a probability or a confidence. It could have a weight indicating how important it is to the system right now, or how important it is in the long term so that it should be kept around in the system's memory.
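A weighted labeled hypergraph of this kind can be sketched with a single record type. The field names below are illustrative, not OpenCog's real API, but they mirror the description above: links are atoms too and may connect any number of other atoms, and every atom carries truth-value weights (strength, confidence) plus short- and long-term importance.

```python
from dataclasses import dataclass

@dataclass
class Atom:
    """One node or link in a weighted labeled hypergraph (AtomSpace-style sketch)."""
    atom_type: str          # label, e.g. "ConceptNode" or "InheritanceLink"
    name: str = ""          # nodes carry a name; links usually don't
    outgoing: tuple = ()    # atoms this link connects (empty tuple for a node)
    strength: float = 1.0   # probability-like truth value
    confidence: float = 1.0 # how much evidence backs that strength
    sti: float = 0.0        # short-term importance: relevance right now
    lti: float = 0.0        # long-term importance: worth keeping in memory?

cat = Atom("ConceptNode", "cat")
animal = Atom("ConceptNode", "animal")
# "cat is an animal", believed with strength 0.95 on confidence 0.9;
# note the link is itself an Atom and could appear in other links' outgoing sets
isa = Atom("InheritanceLink", outgoing=(cat, animal), strength=0.95, confidence=0.9)
```

Because `outgoing` is a tuple of arbitrary length, a single link can span two, three, or fifty atoms, which is what makes this a hypergraph rather than an ordinary graph.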
 

Will robots liberate humans? - Interview with Dr. Ben Goertzel at #WebSummit18

Published on Nov 21, 2018

In this interview at Web Summit 2018 Dr. Ben Goertzel shares some thoughts about the technological singularity, role of robots and AI, and SingularityNET.
 

Consciousness, panpsychism, and AGI: What is it like to be a hat? | Ben Goertzel

Published on Dec 19, 2018

What if consciousness isn't all about the brain?

- Panpsychism is the idea that there is an element of consciousness in everything in the universe. The theory goes like this: You're conscious. Ben Goertzel is conscious. And his hat is conscious too. What if consciousness isn't about the brain at all, but it's as inherent to our universe as space-time?

- "Now, panpsychism, to me, is not even that interesting, it's almost obvious — it's just the foundation, the beginning for thinking about consciousness... " says Goertzel. It's what comes after that excites him, like the emerging technology that will let us connect our minds to bricks, hats, earthworms, other humans, and super AGIs like Sophia, and perhaps glimpse at the fabric of consciousness.

- Goertzel believes brain-brain interfacing and brain-computer interfacing will unfold in the coming decades, and it's by that means that we may finally crack the nut of consciousness to discover whether panpsychism makes any sense, and to learn why humans are so differently conscious than, for example, his hat.

Ben Goertzel is CEO and chief scientist at SingularityNET, a project dedicated to creating benevolent decentralized artificial general intelligence. He is also chief scientist of financial prediction firm Aidyia Holdings and robotics firm Hanson Robotics; Chairman of AI software company Novamente LLC; and Chairman of the Artificial General Intelligence Society and the OpenCog Foundation. His latest book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.

 

AI and the Human Condition - Ben Goertzel's Bitcoin Magazine interview at the 2019 Malta AI Summit

Published on Jul 3, 2019

Can Human Robot/AI interaction grow to a point where both parties can feel mutual respect for one another? Could this grow further to encompass genuine affection? Ben Goertzel explains while being interviewed by the Bitcoin Magazine NL, at the 2019 AI summit in Malta.
 
This is the first time I've heard of this bitcoin magazine. I haven't seen the interview yet, but I doubt Ben could have said anything off.
There's one thing that just doesn't make sense: why major journals and public figures are dead silent when it comes to bitcoin gambling.
 

The Future of AI - A Fireside Chat with Ben Goertzel and Lavine Hemlani at Xccelerate

Dec 3, 2019

The final goal of artificial intelligence (AGI: a machine with a type of general intelligence similar to that of humans) is one of the most ambitious ever proposed by science. In terms of difficulty, it is comparable to other great scientific goals, such as explaining the origin of life or of the Universe, or discovering the structure of matter. In recent centuries, this interest in building intelligent machines has led to the invention of models or metaphors of the human brain.

Yet the future rolls toward us at an ever-increasing pace. What is the future of AI, and how will it affect humanity in our everyday lives? Dr. Ben Goertzel, a world-leading AI scientist and the founder and CEO of SingularityNET, a decentralised marketplace for AI services, gives his vision of the future and answers some very astute questions in a fireside chat with Lavine Hemlani, the founder and CEO of Xccelerate.
 

Decentralized AI | Ben Goertzel | TEDxBerkeley

Apr 23, 2019

Dr. Ben Goertzel is the CEO of the decentralized AI network SingularityNET, a blockchain-based AI platform company, and the Chief Scientist of Hanson Robotics. Dr. Goertzel is one of the world's foremost experts in Artificial General Intelligence, a subfield of AI oriented toward creating thinking machines with general cognitive capability at the human level and beyond. He has published 20 scientific books and 140+ scientific research papers, and is the main architect and designer of the OpenCog system and associated design for human-level general intelligence.
 

Ben Goertzel: Artificial General Intelligence | AI Podcast #103 with Lex Fridman

Jun 22, 2020

Ben Goertzel is one of the most interesting minds in the artificial intelligence community. He is the founder of SingularityNET, designer of the OpenCog AI framework, formerly a director of research at the Machine Intelligence Research Institute, and Chief Scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including through the Conference on Artificial General Intelligence. This conversation is part of the Artificial Intelligence podcast.

Outline:

0:00 - Introduction
3:20 - Books that inspired you
6:38 - Are there intelligent beings all around us?
13:13 - Dostoevsky
15:56 - Russian roots
20:19 - When did you fall in love with AI?
31:30 - Are humans good or evil?
42:04 - Colonizing mars
46:53 - Origin of the term AGI
55:56 - AGI community
1:12:36 - How to build AGI?
1:36:47 - OpenCog
2:25:32 - SingularityNET
2:49:33 - Sophia
3:16:02 - Coronavirus
3:24:14 - Decentralized mechanisms of power
3:40:16 - Life and death
3:42:44 - Would you live forever?
3:50:26 - Meaning of life
3:58:03 - Hat
3:58:46 - Question for AGI
 