Article "The Five Capability Levels of Deep Learning Intelligence"
by Carlos Perez
December 5, 2016
Smart senses for robots
Published on Nov 3, 2016
From touch to sight, robots are getting a sensory upgrade. Artificial intelligence isn't just mental smarts. By giving robots physical intelligence, researchers hope to build machines that can work alongside humans.
Article "For robots, artificial intelligence gets physical"
To work with humans, machines need to sense the world around them
by Meghan Rosen
November 2, 2016
Brian Cox presents Science Matters - Machine Learning and Artificial intelligence
Streamed live on Jan 10, 2017
We're beginning to see more and more jobs being performed by machines; even creative tasks like writing music or painting can now be carried out by a computer.
But how and when will machines be able to explain themselves? Should we be worrying about an artificial intelligence taking over our world or are there bigger and more imminent challenges that advances in machine learning are presenting here and now?
Join Professor Brian Cox, the Royal Society Professor of Public Engagement, as he brings together experts on AI and machine learning to discuss key issues that will shape our future.
Panelists will include:
Professor Jon Crowcroft FRS, Marconi Professor of Networked Systems at the University of Cambridge
Professor Joanna Bryson, Reader in AI Ethics, University of Bath
Professor Sabine Hauert, Lecturer in Robotics at the University of Bristol
by Robohub Editors
January 11, 2017
Intrinsically motivated multi-task reinforcement learning
Published on Jan 25, 2017
Poppy Project, open-source humanoid platform, Talence Cedex, France
Intrinsically Motivated Multi-Task Reinforcement Learning with the open-source Explauto library and the Poppy humanoid robot
Sebastien Forestier, Yoan Mollard, Damien Caselli, Pierre-Yves Oudeyer, Flowers Team, Inria Bordeaux.
Second place at the Demonstration Awards, NIPS 2016, Barcelona, Spain, December 6, 2016.
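At the core of intrinsically motivated multi-task learning is the idea of choosing which task to practice based on learning progress rather than raw reward. The sketch below is a toy illustration of that idea under assumed names (`LearningProgressBandit` is hypothetical and not the Explauto API): tasks whose prediction error is dropping fastest are sampled most often.

```python
import random

class LearningProgressBandit:
    """Pick among tasks proportionally to recent learning progress:
    tasks where error is dropping fastest get chosen most often.
    A toy sketch of the intrinsic-motivation idea, not the Explauto API."""

    def __init__(self, n_tasks, window=10):
        self.errors = [[] for _ in range(n_tasks)]
        self.window = window

    def progress(self, task):
        e = self.errors[task][-self.window:]
        if len(e) < 2:
            return 1.0  # optimistic: unexplored tasks look promising
        half = len(e) // 2
        # Progress = drop in average error between the two halves of the window
        return max(sum(e[:half]) / half - sum(e[half:]) / (len(e) - half), 0.0)

    def choose(self):
        weights = [self.progress(t) + 1e-6 for t in range(len(self.errors))]
        return random.choices(range(len(self.errors)), weights=weights)[0]

    def update(self, task, error):
        self.errors[task].append(error)
```

A task whose error curve has plateaued contributes near-zero progress, so the agent naturally shifts its practice toward tasks it is still improving on.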
Stanford Seminar: Deep Learning in the Age of Zen, Vega, and Beyond
Published on Jan 26, 2017
EE380: Computer Systems Colloquium Seminar
Computer Architecture : Deep Learning in the Age of Zen, Vega, and Beyond
Speaker: Allen Rush
Deep learning and machine intelligence are maturing to the point where they are being deployed in many applications, particularly large-scale data, image classification and detection. This talk addresses the challenges of deep learning from a computational perspective and discusses how the new Zen (x86) and Vega (GPU) compute platforms provide high-performance solutions for different training and inference applications. The ROCm software stack completes the picture with libraries and framework support for a variety of environments.
About the Speaker:
Allen Rush is a fellow at AMD, focusing on imaging and machine learning architecture development. He has been active in imaging and computer vision projects for over 25 years, including several startups. He is the domain architect for ISP and current machine learning development activities in HW, SW and application support.
Support for the Stanford Colloquium on Computer Systems Seminar Series provided by the Stanford Computer Forum.
Colloquium on Computer Systems Seminar Series (EE380) presents the current research in design, implementation, analysis, and use of computer systems. Topics range from integrated circuits to operating systems and programming languages. It is free and open to the public, with new lectures each week.
Learning language through interaction
Published on Jan 27, 2017
Machine learning-based natural language processing systems are amazingly effective when plentiful labeled training data exists for the task/domain of interest. Unfortunately, for broad-coverage (both in task and domain) language understanding, we're unlikely to ever have sufficient labeled data, and systems must find some other way to learn. I'll describe a novel algorithm for learning from interactions, and several problems of interest, most notably machine simultaneous interpretation (translation while someone is still speaking). This is all joint work with some amazing (former) students He He, Alvin Grissom II, John Morgan, Mohit Iyyer, Sudha Rao and Leonardo Claudino, as well as colleagues Jordan Boyd-Graber, Kai-Wei Chang, John Langford, Akshay Krishnamurthy, Alekh Agarwal, Stéphane Ross, Alina Beygelzimer and Paul Mineiro.
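Learning from interaction is often formalized as contextual-bandit learning: the system only observes feedback for the action it actually took, not for every alternative. The sketch below shows that setting with a generic epsilon-greedy linear learner; it is an assumed, illustrative formulation, not the specific algorithm from the talk.

```python
import random

def epsilon_greedy_bandit(contexts, n_actions, reward_fn,
                          epsilon=0.1, lr=0.1, n_features=4):
    """Learn a linear value estimate per action from bandit feedback:
    only the chosen action's reward is observed, as in interactive
    language learning. A generic sketch, not the talk's algorithm."""
    w = [[0.0] * n_features for _ in range(n_actions)]
    for x in contexts:
        scores = [sum(wi * xi for wi, xi in zip(w[a], x))
                  for a in range(n_actions)]
        if random.random() < epsilon:
            a = random.randrange(n_actions)                    # explore
        else:
            a = max(range(n_actions), key=scores.__getitem__)  # exploit
        r = reward_fn(x, a)   # partial feedback: one action's reward only
        for i in range(n_features):
            w[a][i] += lr * (r - scores[a]) * x[i]
    return w
```

The exploration term matters here: without it, the learner never observes rewards for actions its current estimates rank poorly, so errors can never be corrected.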
Towards practical machine learning with differential privacy and beyond
Published on Jan 27, 2017
Machine learning (ML) has become one of the most powerful classes of tools for artificial intelligence, personalized web services and data science problems across fields. However, the use of ML on sensitive data sets involving medical, financial and behavioral data is greatly limited due to privacy concerns. In this talk, we consider the problem of statistical learning with privacy constraints. Under Vapnik's general learning setting and the formalism of differential privacy (DP), we establish simple conditions that characterize private learnability, revealing a mixture of positive and negative insights. We then identify generic methods that reuse existing randomness to effectively solve private learning in practice, and discuss a weaker notion of privacy, on-avg KL-privacy, that allows for orders-of-magnitude more favorable privacy-utility tradeoffs while preserving key properties of differential privacy. Moreover, we show that on-avg KL-privacy is equivalent to generalization for a large class of commonly used tools in statistics and machine learning that sample from Gibbs distributions, a class of distributions that arises naturally from the maximum entropy principle. Finally, I will describe a few exciting future directions that use statistics and machine learning tools to advance the state of the art for privacy, and use privacy (and privacy-inspired techniques) to formally address the problem of p-hacking in scientific discovery.
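For readers new to differential privacy, the canonical starting point is the Laplace mechanism: add noise calibrated to how much any one record can change the query's answer. The sketch below privately releases a bounded mean; it illustrates basic DP only, not the on-avg KL-privacy mechanism discussed in the talk.

```python
import numpy as np

def private_mean(data, epsilon, lower=0.0, upper=1.0):
    """Release the mean of `data` with epsilon-differential privacy
    via the Laplace mechanism. Illustrative only, not the talk's method."""
    n = len(data)
    clipped = np.clip(data, lower, upper)
    # Sensitivity of the mean when one record is replaced: (upper - lower) / n
    sensitivity = (upper - lower) / n
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

data = np.random.default_rng(0).uniform(0, 1, size=10_000)
print(private_mean(data, epsilon=1.0))  # close to the true mean for large n
```

Because the noise scale shrinks as 1/n, the privacy-utility tradeoff improves with more data, which is exactly the kind of tradeoff the weaker on-avg KL-privacy notion aims to make more favorable.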
Live Q&A with the Deep Learning Foundations Team
Streamed live on Feb 3, 2017
Udacity, massive open online courses (MOOCs), Mountain View, California, USA
On Friday, February 3rd from 5:00pm to 6:30pm PST, join us for a live deep dive on deep learning with Mat, Siraj and the Deep Learning Foundations Team. We'll be answering all of your questions on lessons, projects, and everything you wanted to know about this unique program.
Deep reinforcement learning for driving policy
Published on Jan 31, 2017
Mobileye N.V., Jerusalem, Israel
Autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, taking left and right turns, and pushing ahead in unstructured urban roadways.
Since there are many possible scenarios, manually tackling all possible cases will likely yield an overly simplistic policy.
Moreover, one must balance caution against the unexpected behavior of other drivers and pedestrians with the need not to be too defensive, so that normal traffic flow is maintained.
Symposium on "Information, Control, and Learning" at The Hebrew University of Jerusalem.
By Prof. Shai Shalev Shwartz, VP Technologies of Mobileye
and professor of computer science at The Hebrew University of Jerusalem.
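The tension the abstract describes, between avoiding unsafe situations and not being so defensive that traffic flow breaks down, is often encoded as a composite reward in reinforcement learning. The function below is a hypothetical toy sketch of such a reward (all names and weights are assumptions for illustration, not Mobileye's actual objective).

```python
def driving_reward(progress_m, min_gap_m, caused_hard_brake,
                   safe_gap_m=5.0, w_progress=1.0, w_safety=10.0, w_flow=2.0):
    """Toy reward balancing progress, safety and traffic flow.
    Illustrative weights only, not Mobileye's actual objective."""
    r = w_progress * progress_m            # reward forward progress
    if min_gap_m < safe_gap_m:             # penalize unsafe gaps to other agents
        r -= w_safety * (safe_gap_m - min_gap_m)
    if caused_hard_brake:                  # penalize disrupting traffic flow
        r -= w_flow
    return r
```

With a large safety weight and a smaller flow penalty, a policy trained against this objective is pushed away from both extremes: it cannot cut gaps recklessly, but an overly defensive policy that forces others to brake or makes no progress also scores poorly.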