Miscellaneous


Accepting Artificial Intelligence - A robot designed to decrease the fear of future robot technology
February 6, 2014

This video shows the result of my graduation project: a robot with face detection, designed to reduce the fear of future (robot) technology in people with technophobia.

About the project: Accepting Artificial Intelligence
Because technology is changing more rapidly every day, not everyone has a positive view of the future. Some people fear and distrust technology, especially the technology of the future; this is called technophobia. And because there is a good chance that we will reach AI with human intelligence this century, which can give us many benefits, it is important for the people who fear this development to get a more positive view of the future. Therefore I developed a robot that will help take away the fear of people with technophobia.

Video production:
Robin de Bruin & Sara Dubbeldam

Music:
Amon Tobin - Piece of Paper
 

The Terrifying Promise of Robot Bugs

Published on May 5, 2013

Imitating nature to build a better (or possibly more terrifying) future. We've been trying to build flapping-wing robots for hundreds of years, and now, ornithopters are finally being developed, and may be used mostly for military purposes.

Piezoelectrics make those little bugs possible, and also enhance the ability of robot arms to feel, in other news from the International Journal of Robotics.
 

Killer robots, the end of humanity, and all that: What should a good AI researcher do?

Published on Aug 12, 2015

Buenos Aires, July 29, 2015.

Talk by Stuart Russell, Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley; Adjunct Professor of Neurological Surgery, University of California, San Francisco.

Hear an update on the campaign to ban lethal autonomous weapons, as well as the fears that AI poses an existential threat to mankind.
 

Why is Elon Musk afraid of A.I.?

Published on May 26, 2016

Elon Musk, along with a bevy of smart people, has expressed concern over our experiments with artificial intelligence, particularly the weaponization of sentient AI.

So have these guys been watching too much Terminator, or is there a larger existential crisis we should be worried about?
 

Prof. Max Tegmark and Nick Bostrom speak to the UN about the threat of AI

Published on Jun 10, 2016

Rising to the Challenges of International Security and the Emergence of Artificial Intelligence

7 October 2015, United Nations Headquarters, New York
 

Can we build AI without losing control over it? | Sam Harris

Published on Oct 19, 2016

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.
 

Cafe Neu Romance 2016: Philip Hilm: The existential risk from artificial intelligence

Published on Nov 9, 2016

On 25 October 2016, Philip Hilm presented his lecture The Existential Risk from Artificial Intelligence at the Institute of Intermedia of the Czech Technical University in Prague.

Philip Hilm earlier had a career as a professional poker player and is now an artificial intelligence researcher.
 

AI Can Now Self-Reproduce—Should Humans Be Worried? | Eric Weinstein

Published on May 22, 2017

Those among us who fear world domination at the metallic hands of super-intelligent AI have gotten a few steps ahead of themselves. We might actually be outsmarted first by fairly dumb AI, says Eric Weinstein. Humans rarely create products with a reproductive system: you never have to worry about waking up one morning to see that your car has spawned a new car in the driveway (and if it did: cha-ching!). But artificial intelligence has the capability to respond to selective pressures, to self-replicate, and to spawn daughter programs that we may not easily be able to terminate. Furthermore, there are examples in nature of organisms without brains parasitizing more complex and intelligent organisms, like the mirror orchid. Rather than spend its energy producing costly nectar as a lure, it merely fools the bee into mating with its lower petal through pattern imitation: this orchid hijacks the bee's brain to meet its own agenda. Weinstein believes all the elements necessary for AI programs to parasitize humans and have us serve their needs already exist, and although it may be a "crazy-sounding future problem which no humans have ever encountered," Weinstein thinks it would be wise to devote energy to these possibilities that are not as often in the limelight.

Transcript: There are a bunch of questions next to or adjacent to general artificial intelligence that have not gotten enough alarm because, in fact, there’s a crowding out of mindshare. I think that we don’t really appreciate how rare the concept of selection is in the machines and creations that we make. So in general, if I have two cars in the driveway I don’t worry that if the moon is in the right place in the sky and the mood is just right that there’ll be a third car at a later point, because in general I have to go to a factory to get a new car. I don’t have a reproductive system built into my sedan. Now almost all of the other physiological systems—what are there, perhaps 11?—have a mirror.

So my car has a brain, so it’s got a neurological system. It’s got a skeletal system in its steel, but it lacks a reproductive system. So you could ask the question: are humans capable of making any machines that are really self-replicative? And the fact of the matter is that it’s very tough to do at the atomic layer, but there is a command in many computer languages called Spawn. And Spawn can effectively create daughter programs from a running program.
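The "spawn" idea Weinstein describes can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the talk: a running program launches genuinely separate daughter processes, with a generation limit so the chain of children terminates.

```python
import subprocess
import sys

def spawn_daughters(max_generations: int = 3) -> list[str]:
    """Launch a short chain of daughter Python processes, one per
    generation, and collect what each one prints."""
    outputs = []
    for generation in range(1, max_generations + 1):
        # Each daughter is a separate OS process started by the running
        # program -- the "spawn" idea from the transcript. The generation
        # cap is the safeguard that lets us terminate the chain.
        child = subprocess.run(
            [sys.executable, "-c",
             f"print('daughter process, generation {generation}')"],
            capture_output=True, text=True, check=True,
        )
        outputs.append(child.stdout.strip())
    return outputs

print(spawn_daughters())
```

The point of the sketch is only that reproduction is trivial in software: remove the generation cap (or let the daughters call `spawn_daughters` themselves) and nothing in the language stops the chain from continuing.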

Now as soon as you have the ability to reproduce you have the possibility that systems of selective pressures can act, because the abstraction of life will be just as easily handled whether it’s based in our nucleotides, in our As, Cs, Ts and Gs, or whether it’s based in our bits and our computer programs. So one of the great dangers is that what we will end up doing is creating artificial life, allowing systems of selective pressures to act on it, and finding that we have been evolving computer programs that we may have no easy ability to terminate, even if they’re not fully intelligent.

Further if we look to natural selection and sexual selection in the biological world we find some very strange systems, plants or animals with no mature brain to speak of effectively outsmart species which do have a brain by hijacking the victim species’ brain to serve the non-thinking species. So, for example, I’m very partial to the mirror orchid which is an orchid whose bottom petal typically resembles the female of a pollinator species. And because the male in that pollinator species detects a sexual possibility the flower does not need to give up costly and energetic nectar in order to attract the pollinator. And so if the plant can fool the pollinator to attempt to mate with this pseudo-female in the form of its bottom petal, it can effectively reproduce without having to offer a treat or a gift to the pollinator but, in fact, parasitizes its energy. Now how is it able to do this? Because if a pollinator is fooled then that plant is rewarded. So the plant is actually using the brain of the pollinator species, let’s say a wasp or a bee, to improve the wax replica, if you will, which it uses to seduce the males.
 