World Economic Forum (WEF), Davos, Switzerland

Website - weforum.org

youtube.com/@wef

facebook.com/worldeconomicforum

x.com/wef

linkedin.com/company/world-economic-forum

instagram.com/worldeconomicforum

World Economic Forum on Wikipedia

Founder and Executive Chairman - Klaus Schwab

World Economic Forum Annual Meeting 2026 - January 19-23, 2026, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2025 - January 20-24, 2025, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2024 - January 15-19, 2024, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2023 - January 16-20, 2023, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2022 - May 22-26, 2022, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2020 - January 21-24, 2020, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2019 - January 22-25, 2019, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2018 - January 23-26, 2018, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2017 - January 17-20, 2017, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2016 - January 20-23, 2016, Davos-Klosters, Switzerland

World Economic Forum Annual Meeting 2015 - January 21-24, 2015, Davos-Klosters, Switzerland
 

The Global Risks Report 2015

Published on Jan 15, 2015

The 2015 edition of the Global Risks Report completes a decade of highlighting the most significant long-term risks worldwide, drawing on the perspectives of experts and global decision-makers.
 

Davos 2015 - Issue Briefing: Artificial Intelligence

Published on Feb 2, 2015

World-leading experts provide a briefing and answer questions related to the latest developments in artificial intelligence.
Speakers
• Ken Goldberg, Professor, University of California, Berkeley, USA
• Alison Gopnik, Professor of Psychology, University of California, Berkeley, USA
Moderated by
• Oliver Cann, Director, Media Relations, World Economic Forum.
 

Designing robots as smart as babies | Alison Gopnik

Published on Feb 24, 2015

Berkeley psychologist Alison Gopnik says computers may be able to play chess or drive, but they’re still not as smart as a two-year-old. Gopnik says that even the youngest babies learn from imitation and interaction, and are smarter than machines, and most adults, too.
 

Power dynamics: Who controls the robots controls the future | Anthony Stentz

Published on Feb 24, 2015

Manage and mentor robots from the comfort of your own home, while robots half a globe away slog it out in the trenches. That’s the message from Anthony Stentz, Carnegie Mellon University, USA. He says robots are fast, strong and accurate, and the opportunities for future applications outweigh the risks.
 

The Automated Economy | Illah R. Nourbakhsh

Published on Feb 24, 2015

Put people ahead of profits and make robots that are not products, but raw materials for people to create a new society, says Illah Nourbakhsh, from Carnegie Mellon University. He says we must use robots to dignify and empower people, but warns we may lose our identity if machines start to look more like us.
 

Social Artificial Intelligence

Published on Feb 24, 2015

We need a different kind of Artificial Intelligence, with social reasoning, says Justine Cassell from Carnegie Mellon University. She says studying people’s social interactions, goals and desires can help us build better robots.
 

The Neuroscience of Compassion

Published on Mar 9, 2015

Can training our brains help make the world a better place? Tania Singer from the Max Planck Institute for Human Cognitive and Brain Sciences thinks it can. She’s a social neuroscientist and psychologist who says the brain’s plasticity means it can be trained to make us less selfish and more compassionate. In this video for the World Economic Forum, Singer shows how our decision making is driven by a set of psychological motivations - from power to fear - that can be altered to help us make better decisions for society and for our health. Her research has also influenced the development of a new model of “caring economics” that hopes to work towards sustainability and global cooperation.

Watch Tania Singer’s presentation in the video above, or read key quotes below.

On the plasticity of the brain
“The concept of plasticity is really the concept of changeability and trainability, not only of our brain but also of our immune system and stress system. So, I’m not just talking about the brain but the whole body. I’m presenting very fresh data about a one year longitudinal study."

“You’ve probably heard of the concept of mindfulness, about training the attention of your mind, stabilising your mind, becoming present in the moment. This is what we spend the first three months training in the module we call Presence. So it’s really just getting your mind stable and developing introspective body awareness. Then there’s a module called Affect, and this is about emotions and it’s about training compassion, loving kindness, empathy and how to regulate emotion in the context of anger or stress. This is juxtaposed with Perspective - a cognitive module that allows you to get a perspective on yourself and on others.”

"People have to do these core exercises for 20 to 30 minutes each day, and integrate it into their daily routine, like brushing your teeth. We give them a cell phone and we can monitor their progress. We have exercises you do on your own and dyadic exercises where you have to call up a partner."

On Compassion
“Compassion is really important. Psychopaths are very good at manipulating and understanding what the other person needs, but they have no compassion and empathy, so they don’t care. Participants go into the scanner five times in the year and they see screens. One screen shows videos of people explaining real suffering stories of their lives. And you measure the brain, the empathic response to these stories and also what they say they feel. The stories need a lot of belief and understanding, so you can compute a social intelligence score based on how well someone can do cognitive perspective taking.”

“What we have shown in our study is that just being tested on these exercises doesn’t do anything; it doesn’t improve your theory of mind. Doing three months of mindfulness training does nice other things, but doesn’t do anything for theory of mind. It’s really the perspective-taking module which brings a huge increase in theory of mind. Just breathing doesn’t make you more compassionate, but compassion training makes you more compassionate.”

On the brain as a muscle
“We are brain scientists, so we wanted to know if you can change the hardware of your brain. We always thought our brains just decline after the age of 25. So this is about showing whether you can increase cortical thickness, the grey matter volume of your brain, through training. We have data that shows that we can increase these abilities through training. You can train different networks in the brain, just as you train different muscles in the gym. This is what we do with the mind, so different mental practices cultivate different aspects.”

“I’m interested in how we can activate care and affiliation as this leads to prosocial behaviour and global cooperation. There are ways to shift our motivation system, like institutional design and changing laws, and there’s also internal mental training and education.”
 

Extreme Robotics

Published on Mar 9, 2015

Robots are transforming our world and doing the jobs that humans can't do safely and efficiently, says Dr. William Whittaker in this video for the World Economic Forum. “Robots,” he says “have particular advantage where humans are limited from deep ocean pressure, the vacuum of space, radiation hazards.” Whittaker, who is principal scientist with the Robotics Institute at Carnegie Mellon University, charts the rise of robots from the factory floor to lunar exploration.

Click on the video above or read key quotes below.

On early robots
“When the work began some few decades ago, robotics was mostly science fiction and fantasy. That all changed very quickly on the occasion of a nuclear accident. We rose to create the robots that did the exploration, then the work and the clean up activities."

“These were a leap of technology at the time. They developed the rudiments of manipulation, combined that with driving machines, achieved the reliability and performed substantial work. Then, really crossing a threshold, machines went into all kinds of extreme environments and added the beginnings of what you would call intelligence today. This now matters every time we have a leaking gas well in the deep oceans, or it matters every time we lose a downed aeroplane. They’re working day to day, while we’re sitting here.”

On innovation and impact
“Perhaps the single greatest impact is to cropping and farming. I am a farmer. I farm 900 acres, 300 cattle, and I couldn’t be here unless my machines were there. One of the things that’s interesting about people is that you can only be in one place at one time. At some point in life you figure out that you can do anything, but maybe not do everything. It’s so interesting to multiply a presence in these ways.”

“Underground is another world and so different because there’s no GPS and no radio. No communication that works. The machines determine where they are by using the sensors and then make their decisions about where to turn and where they’re going …. this is invaluable in rescue, as well as in safety and in driving the machines that increasingly dig underground.”

“Consider, for example, the search and recovery of an airplane like Air France 447. Robots searched for two years and found it, identified all the parts and brought up the black box at depths far beyond where humans could go.”

On future use of robots
“There is no more easy mining. We come from Pittsburgh. In Pittsburgh we had easy steel, easy coal, easy iron. There’s no more. As the world looks to resources in the future, it’s smaller seams, deeper mining and tougher conditions.”

“You may know that there is an international space station and that is always supplied. And that used to be supplied by government. And now it is just a commercial endeavour by the kilogram. In many ways for 40 years, governments have been in charge of not going back to the moon. Humans will always use tools, and those tools will always evolve. If it's the Iron Age and you are still using stones, then you probably won’t make it. If you're making cars and you don't embrace the tools, it’s not just a disadvantage ... you can't be in business."

“We used to have astronauts and they used to go places. Now all the destinations are robotic and the missions are robotic, and for many, many good reasons. Eventually people will once again have those experiences, but for now it’s robotic. It’s not a plot, it’s not like there was some intention to take things over, it’s just the better way of doing it.”
 

Future Computing: Brain-Based Chips

Published on Mar 9, 2015

"Every ten to twenty minutes today we produce the same amount of data we produced over the past one hundred years. In the next ten years we’ll produce that in five seconds,” says Henry Markram in this video for the World Economic Forum. Markram, who is Professor of Neuroscience at Swiss Federal Institute of Technology (EPFL), describes the new era of brain inspired computer science that’s evolving to meet the big data challenge.

Watch the full video above or read key quotes below

On the data challenge
“One of the biggest challenges is the volume of data we’re producing, and the next challenge is the speed at which we process the data. Every ten to twenty minutes today we produce the same amount of data we produced over the past one hundred years. In the next ten years we’ll produce that in five seconds.”

“What is absolutely clear to almost every technologist out there, is that we as humans can no longer read and digest this information. We need serious help. The essential help is in the form of algorithms. There are basically three kinds of algorithms that can go beyond the kind of algorithms that we used to use in the past. We need very sophisticated algorithms, and we need machines to help us build those sophisticated algorithms.”

“They exist today and are being evolved at an incredibly high speed in order for us to make decisions on exabytes of big data as fast as possible. There’s clearly hope that we are going to be able to deal with the speed of making decisions on such massive volumes of data.”

On deep learning and cognitive computing
“The one that is very popular today is deep learning. It’s what Google is going into, and Microsoft and Facebook are using. Deep learning is a series of neurons or nodes with successive layers. You can train one of these nodes to recognise all the different features and conditions of a face, so it becomes a face-detecting node. And if you show it enough images of faces, you can train and develop the algorithm.”

“This will be a powerful tool that lives in the cloud, and when you want to recognise something you won’t realise that it ran through this deep learning algorithm to decide what you were looking at. This is going to become more and more important, because the trend is everything is becoming digital. Our self is becoming digital. Our health is becoming digital. Being able to recognise patterns is going to become increasingly important.”
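
A toy way to picture the "successive layers of nodes" Markram describes: the sketch below is a minimal two-layer network in NumPy, trained on made-up data. It only illustrates the layered idea; it is not Markram's system or any production deep-learning stack.

```python
# Minimal sketch of a layered network, assuming synthetic data (not any real system).
import numpy as np

rng = np.random.default_rng(0)

# Made-up training data: 200 examples, 64 features each, toy binary labels.
X = rng.normal(size=(200, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Two layers of weights: input -> hidden "feature" nodes -> single output node.
W1 = rng.normal(scale=0.1, size=(64, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    # Forward pass through the successive layers.
    hidden = np.tanh(X @ W1)
    out = sigmoid(hidden @ W2)

    # Cross-entropy gradient, backpropagated layer by layer.
    d_logits = (out - y) / len(X)
    d_W2 = hidden.T @ d_logits
    d_hidden = (d_logits @ W2.T) * (1.0 - hidden ** 2)
    d_W1 = X.T @ d_hidden

    W1 -= 0.5 * d_W1
    W2 -= 0.5 * d_W2

accuracy = ((out > 0.5) == y).mean()
print(f"training accuracy on the toy data: {accuracy:.2f}")
```

The same stacking idea scales up: more layers, more nodes per layer, and far more data turn a toy detector like this into the face-recognition networks described above.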

“The second approach is brain-inspired design. Brain-inspired design is more of a massive set of interconnections. It’s a concept of what the brain could be doing, and we try to mimic that concept. IBM’s “Watson”, for example, is a good example of a kind of cognitive computing. We look at the brain and we see that it’s got sensory areas, reasoning areas, decision-making areas and reward areas. And we mimic those mathematically and try to get the machinery to make these decisions. So Watson can take all of the millions of pages of Wikipedia, for example, and run them through this conceptual model of the brain, and it can make decisions on them. And it’s incredibly powerful and very useful.”

On mapping the brain
“The third direction is the emerging direction, and this depends now on much more concrete information about the brain. You can think of it as brain-derived design: mimic the brain as accurately as possible; after all, it is the product of four billion years of evolution. To get to brain design, you need to understand a lot more about the brain: how it’s put together, how the neurons are structured.”

“The essence is really that you have neurons and you have a lot of cables. You have enough cables in your brain to wrap around the moon a couple of times. There are a lot of cables connecting and forming this intricate network, and what it’s really doing is carrying out an algorithm through these different networks. You also have synapses that have to connect these neurons. And in a piece of the brain the size of a pinhead, you have 40 million synapses that have to connect to about 30,000 neurons. They are the messengers between cells, and by controlling these messengers, you can control the algorithm.”
 

Future Computing: DNA Hard Drives | Nick Goldman

Published on Mar 7, 2015

Molecular biologist Nick Goldman and his team at the European Bioinformatics Institute have created a way to use DNA to store data. “All the information in the world could be encoded and stored in DNA, and it would fit in the back of an SUV,” says Nick Goldman in this video for the World Economic Forum. He explains why DNA is a stable, long-term way to store digital information that might otherwise be lost.

Watch the video for the full talk or read key quotes below

On DNA as nature’s hard drive
“DNA is the hard drive, the memory in every cell of every living organism, that has the instructions for how to make that cell. It’s a chemical molecule made of four different kinds of molecules that can be stuck together in a chain, and you can put those four in any order; if you read that back, you have a sequence of characters: a digital code, if you want to think of it that way.”
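
To make the "digital code" analogy concrete, here is a simplified sketch that maps every two bits of a file onto one of the four bases and back. This is an illustration only; Goldman's actual encoding scheme was more elaborate (it avoids repeated bases and adds addressing and error tolerance).

```python
# Toy 2-bits-per-base mapping; not the scheme used by Goldman's team.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA-like string, two bits per base."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # most-significant bit pair first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def decode(sequence: str) -> bytes:
    """Reverse the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(sequence), 4):
        byte = 0
        for base in sequence[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

message = b"To be, or not to be"
dna = encode(message)
assert decode(dna) == message
print(dna[:40], "...")
```

At two bits per base, every byte of a file becomes four bases, which is why the density of DNA storage is so striking compared with conventional media.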

“We have a big data revolution in genomics. Ten years ago the cost of sequencing a genome of one person or one living organism was about the same as the most expensive house in London. And ten years later, the cost of sequencing one genome was the price of a season ticket to Arsenal football club. The price is plummeting and scientists are doing more and more genome sequencing.”

On hiding a message in DNA
“After scientists have sequenced a genome, they want to keep their data safe - and that’s where I come in - so they send their data via the internet and ask us to store that information. We buy more and more computer servers, and more and more hard disc drives to store this information. And we started to realise all the information we’re storing is about DNA, but DNA itself could be a digital storage medium. We thought maybe we could manipulate some DNA to put a message in there ourselves. Life on earth has used DNA as its hard disc drive for hundreds of millions of years, so maybe we could use it too.”

“We devised an experiment to see if DNA was a good way to store information. We had to decide what would be high-value information that you might want to store for a long time in a DNA format. We thought about a .txt file of all of Shakespeare's sonnets, and an .mp3 of Martin Luther King’s speech, “I Have a Dream”; and because we’re molecular biologists at heart, a .pdf of Watson and Crick’s paper from 1953 describing the helix structure of DNA in living cells. We encoded those files and had them made into DNA by the Agilent company in California. And we got back a tiny bit of dust at the bottom of a test tube, and that was the DNA.”

On reading DNA
“Can we get the information back out? Yes, we can read DNA easily and cheaply, and we can copy it. But writing it in the first place is very difficult. It takes too long and is very expensive, and this is the rate limiting step. So, you could encode all the information in the world into DNA, but there isn’t enough money on earth to be able to do that.”

“But it’s a good solution for the challenge of creating a long term digital archive. Within a few years all forms of digital media become obsolete. No one on the earth is currently archiving digital information, yet most information is now being created, stored and observed digitally. But how long will memory sticks last, compared with DNA?”

“We’ve looked at mammoth DNA that is 20,000 years old and ancient horses with 700,000 year old DNA sequences that have been successfully read. All you need is somewhere very cold and dry to store it, and as long as we have humans that are technologically advanced, we will be able to read DNA. So, what are we going to store in the long term? Maybe the American presidential records, or where nuclear waste has been dumped, or even our family photographs.”
 

How to Build an Intelligent Machine | Björn Schuller

Published on Oct 21, 2015

The secret to raising smart machines will be teaching them social and emotional intelligence, says Björn Schuller. The World Economic Forum Young Scientist says future machines will learn, like a child from its mother, how to read emotions, sense moods, spot health conditions and be creative. “We’ll then need to ensure that we remain in control,” says Schuller, “so we know what these machines are thinking and doing.”

Click on the link to watch the whole video, or read key quotes below.

On cultivating machine intelligence
“An intelligent machine needs great perceptive abilities and great communicative skills. What does it actually take for a machine to be intelligent? Let me highlight just three aspects: it needs to be able to learn from data, to achieve goals and ultimately reach brain-like, or maybe even beyond brain-like, intelligence.”

“An example from the movies: you may know Johnny Five from the movie Short Circuit. Johnny Five was going around and exploring the world by talking to a lot of humans, and constantly asking for more input from them. So he was interacting a lot with them, and cooperatively learning with humans about the world around him.”

“So the computer gets help from us, and it learns from us what this new data means. It can add it to its experience and enhance its model of the world. An intelligent system would then not only decide when to ask, but also who to ask, just like a child would decide to ask mommy something.”
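
The "decide when to ask" idea corresponds to what machine-learning researchers call active learning. The sketch below is a generic uncertainty-sampling illustration, not Schuller's specific system; the pool of examples and the toy scoring model are invented for the example.

```python
# Uncertainty sampling sketch: ask a human about the example the model is
# least certain of. Data and model are toy assumptions, not a real system.
import numpy as np

rng = np.random.default_rng(1)

# Toy pool of unlabelled examples and a toy logistic scoring model.
pool = rng.normal(size=(100, 5))
weights = rng.normal(size=5)

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ weights)))   # probability of class 1

probs = predict_proba(pool)

# Certainty is distance from 0.5; the least certain example is the one to ask about.
uncertainty = -np.abs(probs - 0.5)
ask_index = int(np.argmax(uncertainty))
print(f"ask a human to label example {ask_index} (p = {probs[ask_index]:.2f})")
```

The human's answer is then added to the training set, the model is updated, and the loop repeats, which is the cooperative learning Schuller describes.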

On Learning together
“The movie The Matrix had a sinister view of what machines will do in the future - that they will use us as bio-batteries. Let’s share a more positive view of how they can benefit from us. Rather than using us as these battery tanks, they could source knowledge from the crowd, even directly scanning our brains to learn from us.”

“Building an intelligent machine will need human help, in particular to tell the machine about the world. Let me give you one example from my favorite application domain - speech processing. We've been collecting a lot of information sources about what is inside speech. You can sense a whole lot of things from the voice: the intoxication of somebody, the sleepiness of somebody, and the cognitive load level at that very moment.”

“You can also spot health-related states such as autism or Parkinson's disease. You can also sense the emotion, interest, height, and personality. But what you really need is loads of data, and you need human help to get information on what is inside of this data.”

On reading emotions
“You might have seen the Hollywood movie “Big Hero 6”, about a medical robot that teaches a boy about moral values. It teaches the boy that revenge is not the right way to go. How does this look in reality? In a European project I coordinated, we worked with an intelligent computer system, to teach autistic children (in a playful way) how to express emotion in a way that other people understand.”

“This machine uses its perceptive abilities to sense the voice, body gesturing and facial expression. From that the machine then tells the child how to better express emotions so that others understand. It seems that intelligent machines in the future can not only learn from us, but also teach us - leading to a loop of exchange between machines and humans, increasing intelligence of the machines.”

“But we’re lacking one component for a machine to be intelligent, and that is emotional intelligence. Maybe you've also seen “Ex Machina” – a machine that is not only capable of having a human fall in love with her, but to exploit that human for her own goals. You can imagine how much social and emotional intelligence that takes. Mankind has had emotion as a survival factor in its history. It's my belief that in the future intelligent machines will also have emotion as a key survival factor.”
 

How to Build an Intelligent Machine | Michael Bronstein

Published on Oct 21, 2015

“Our laptops, tablets, and smartphones will become precision instruments that will be able to measure three-dimensional objects in our environment”, says Michael Bronstein in this video for the World Economic Forum. The associate professor from the University of Lugano, Switzerland, says 3D sensors are key to future intelligent machines and will transform the way we interact with our computers.

Click on the link for the full presentation, or read selected quotes below.

On Making machines see
“I would say that an intelligent machine should be able to sense the environment and be able to understand the environment it is found in. If we look at the sensorial information that our body is exposed to – that we perceive through our eyes, through our nose or our tongue or our skin – the majority of this information, over 90 percent actually, comes from vision. It would make sense to equip an intelligent machine with the ability to see, and to understand the world around it.”

“We humans, our visual system is very well developed. But it starts evolving from age zero as we are born. And we acquire the capability to analyze visual information way before we start walking or learn how to talk. It is so natural to us to analyze objects, that we are not fully aware of how complex this task is and what it takes for the brain to do this every second that we open our eyes.”

“I work in the field of computer vision where we try to replicate or imitate some of these wonderful capabilities of our visual system on a computer. Vision is really hard, not only for a machine; it is hard for humans and we can easily be fooled and get it wrong.”

On visioning of the future
“I’m sure that many of you have seen the movie – Minority Report – that appeared in 2002. This is a dystopian vision of how our world (hopefully not) might look in the 2020s. And one of the most famous scenes in this movie is when Tom Cruise is using his hands to manipulate virtual objects on a giant holographic screen. This is also a vision of the producers of this movie about what our interaction with our future intelligent machines might look like.”

“If you want to design such an interface based on computer vision, you need to solve what we call the hand-tracking problem. You need to detect and recognize different parts of our fingers that constitute our hands, and there are many degrees of freedom, many ambiguities. For example, the fingers can be hidden from view. And this is why this problem is quite challenging. It’s a notoriously hard problem in computer vision.”

“Fast forward several years after the movie Minority Report. Microsoft came up with a very successful product called Kinect. It was an add-on to the Xbox gaming machine that allowed the users to control their games using their bare hands. You can move your hands and animate or activate your virtual self in your game, or interact in this natural way with your computer. Basically this was the same capability that Tom Cruise had in the very futuristic science fiction movie, but without any light gloves that made the task in the movie much easier.”

On 3D sensors
“In this case no special equipment was required to interact with the machine. This capability came from a novel 3D sensor that projected invisible laser light on the objects in front of it and, using triangulation techniques, extracted the geometric structure of these objects to the accuracy of several millimeters. It appeared that this three dimensional information actually solved many degrees of freedom, many ambiguities that exist in standard two dimensional images. So suddenly the hand-tracking problem becomes much easier in three dimensions, because many of these ambiguities are gone.”
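
The triangulation Bronstein describes boils down to one relationship: depth = focal length × baseline / disparity. The sketch below illustrates it with invented numbers; they are not the actual parameters of Kinect or any particular sensor.

```python
# Back-of-the-envelope triangulation sketch (pinhole-camera model).
# A projector and a camera separated by a known baseline observe the same
# laser dot; the dot's shift (disparity) in the image reveals its depth.

def depth_from_disparity(focal_length_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Classic structured-light / stereo triangulation."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed numbers: a 580-pixel focal length and a 7.5 cm baseline.
# A dot shifted by 20 pixels sits roughly 2.2 m from the sensor.
print(f"{depth_from_disparity(580.0, 0.075, 20.0):.2f} m")
```

Because depth falls out of this simple geometry, the extra dimension removes many of the ambiguities that make hand tracking so hard in ordinary 2D images.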

“Kinect was a revolutionary product in the sense that technology that existed only in the lab and cost a fortune had suddenly become a commodity. Of course it was designed for gaming, and manufacturers of laptops, tablets and smartphones are fighting for every gram and millimeter in the design of their gadgets; no one would want to use a smartphone that weighs a kilogram and requires an external power source. So I was involved with my colleagues in Israel in a startup company that tried to take this dream of a 3D-seeing machine one step further. We designed a technology that would allow us to shrink the size of the 3D sensor to dimensions that would fit into the display of a laptop or a tablet.”

“These technological capabilities all exist today. We are not talking about the future. We are talking about the present. I believe that 3D sensing technology is a key ingredient that might be needed for a paradigm shift in the way we interact with our intelligent machines. It will bring us closer to a more natural way of interacting with computers, replacing traditional input devices such as the keyboard, touch screen or mouse.”
 

How to Build an Intelligent Machine | Louis-Philippe Morency

Published on Oct 21, 2015

Can computers help diagnose depression? Louis-Philippe Morency, from Carnegie Mellon University, thinks they can. He’s been working on enabling machines to understand and analyze the subtle human behavior that can betray sadness and happiness. In this video for the World Economic Forum, Morency introduces SimSensei - a virtual human platform specifically designed for healthcare support - and explains why he hopes future intelligent machines will work alongside doctors as colleagues.

Click on the video link for the full presentation, or read some quotes below.

On detecting subtle communication
“Let's look at one of the most intelligent machines - the human. Us. And if you look at one of the most important factors of the development of humans, it is communication. Really early on we communicate our happiness, our surprise, even our sadness. We do it through our gesture. Communication is a core aspect between mother and child, and we develop these communication skills through all our life.”

“Communication is done through three main modalities, the three “Vs”: the verbal, the vocal and the visual. The first one is the verbal. When we communicate we decide on specific words; their meaning is important, but every word can have really subtle changes to it, even a word like “okay”. This subtlety of human communication is so powerful, and makes communication efficient and meetings possible. The next “V” is the visual - so a lot of what we do is through my gestures, my facial expression, my posture.

All of these “V's” are essential to human communication, so how are we communicating with computers? We have keyboards and mice, and these days we also have newer technology like touch screens. We’re getting closer, but it’s still far from human communication. When we think about machine-human communication, being able to understand and communicate with humans in a natural way becomes important. This is true for embodied interaction machines like robots, but it is also true for non-embodied interaction machines, like your cell phones or computers. This capability of intelligent machines to understand the subtlety of human communication is what is exciting to the scientific community.”

On developing new technology
“We are able to go from a really shallow interpretation of human communication to a much deeper understanding; going from understanding only the facial landmark motion of eyebrows, to a level where we also understand subtlety in the appearance of a face. We have got to the point where we can study all 48 muscles of the face and from this infer the expression of emotion. We're getting closer to deeper interpretation.”
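
The facial-muscle approach Morency describes is usually formalised as facial action units (FACS). The sketch below is a generic illustration of mapping detected action units to commonly cited prototypical expressions; the rule set and the detected-unit input are illustrative assumptions, not Morency's actual pipeline.

```python
# Illustrative mapping from facial action units (FACS) to prototypical emotions.
# The combinations follow commonly cited conventions (e.g. cheek raiser plus
# lip corner puller -> happiness); this is a teaching sketch, not a real system.
PROTOTYPES = {
    "happiness": {6, 12},        # AU6 cheek raiser, AU12 lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers, upper lid raiser, jaw drop
    "anger":     {4, 5, 7, 23},  # brow lowerer, lid raiser/tightener, lip tightener
}

def infer_expression(detected_units: set) -> str:
    """Pick the prototype whose action units best overlap the detected ones."""
    def score(units: set) -> float:
        return len(units & detected_units) / len(units)
    best = max(PROTOTYPES, key=lambda name: score(PROTOTYPES[name]))
    return best if score(PROTOTYPES[best]) > 0 else "neutral"

# Example: a hypothetical detector reports AU6 and AU12 firing.
print(infer_expression({6, 12}))   # -> happiness
```

Real systems replace the hand-written table with classifiers trained on large amounts of annotated video, which is exactly the small-data-to-big-data shift described in the next quote.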

“One of the key aspects is that we’re looking at multiple levels, and there is now a lot more information freely available. And so we can go from small data to Big Data, with a lot of examples of people communicating, not just in the laboratory but in different scenarios, different cultures and different individuals.”

“The last piece is that the algorithms are getting a lot smarter, and we can handle a lot more of this data. So multimodal artificial intelligence is one of the key components in an intelligent machine, and that's also what allows us to think about what we want these intelligent machines to be or to do.”

On taking the interaction test
“Right now I can command my cell phone to do something, but I believe that computers are a lot more about being colleagues and co-workers. For example, they can help during the diagnosis of a mental health disorder by giving feedback to the doctor. Let me give you an example of a first step toward these intelligent machines: Ellie is there to help doctors during the assessment of mental health. She is not a virtual doctor; she is an assistant - an interviewer gathering information. And when we are looking at depression, or distress in general, it is a challenging thing to successfully diagnose. A doctor has an intuition about a disease, and they will do a blood test and then look at the details. For mental health we now have the possibility of doing an interaction sample. For instance, asking how did that person change over time? So maybe we can reduce the treatment or change the medication.”

“Let me introduce an interaction and show how she can interact in real time and perceive visual and vocal cues. We had more than 250 people interacting with our virtual human platform SimSensei over a period of just a few weeks. The interaction was supposed to be only 15 minutes, but people enjoyed talking with her, so they talked for 30 or 40 minutes. In fact, if you look at how much sadness or real behavior they show, they share more when they’re interacting with the intelligent machine than when they think there's a human behind it pressing the buttons.”
 
Article "Global economic turmoil to dominate Davos discussions"
Business leaders and policymakers at the World Economic Forum will focus on the Chinese downturn, a commodities rout and stock market turmoil

by Katie Allen
January 17, 2016

Article "Women to lose out in technology revolution as robotics threatens jobs, warns WEF"
Survey on future of working life predicts white collar and administrative roles to see the greatest job losses

by Jill Treanor
January 18, 2016

Article "Robots, new working ways to cost five million jobs by 2020, Davos study says"

by Ben Hirschler
January 18, 2016
 

Davos 2016 - Issue Briefing: Infusing Emotional Intelligence into AI

Published on Jan 22, 2016

Learn first-hand about how to endow artificial intelligence with emotional intelligence using social-interaction skills that are too often ignored in emerging technologies.

Justine Cassell, Associate Dean, Technology, Strategy and Impact, School of Computer Science, Carnegie Mellon University, USA
Vanessa Evers, Professor of Human Media Interaction, University of Twente, Netherlands
Maja Pantic, Professor of Affective and Behavioral Computing, Imperial College London, United Kingdom
Moderated by
Michael Hanley, Head of Digital Communications, Member of the
Executive Committee, World Economic Forum
 

Davos 2017 - Artificial Intelligence

Published on Jan 17, 2017

As business opportunities for artificial intelligence multiply, how can industry leaders design the principles and technical standards into their products that benefit society as a whole?

- Ron Gutman, Founder and Chief Executive Officer, HealthTap, USA
- Joichi Ito, Director, Media Lab, Massachusetts Institute of Technology, USA
- Satya Nadella, Chief Executive Officer, Microsoft Corporation, USA
- Ginni Rometty, Chairman, President and Chief Executive Officer, IBM Corporation, USA

Moderated by
- Robert F. Smith, Chairman and Chief Executive Officer, Vista Equity Partners, USA
 