Miscellaneous


ODSC East 2016 | Rahul Dave - "Machine Learning for Suits"

Published on Jul 14, 2016

Abstract: You will learn the basic concepts of machine learning – such as modeling, model selection, loss or profit, overfitting, and validation – in a non-mathematical way, so that you can ask for data analysis and interpret the results of a model in the context of making business decisions. The concepts behind machine learning are actually quite simple, so expect to take away not just words and acronyms, but rather a deep understanding. We will work in the context of concrete examples from different domains, including finance and medicine.

1. What is probability? What is a model? Supervised vs unsupervised learning. Regression and Classification. Minimizing Cost and Maximizing likelihood.

2. Models and Data: Bias, Variance, Noise, Overfitting, and how to solve Overfitting with Regularization and Validation

3. Different kinds of models, including ensembles and deep learning.

4. How good is a model? Profit Curves, ROC curves, and the expected value formalism.
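The remedy for overfitting in part 2 can be sketched numerically. Below is a hedged illustration, not material from the talk itself: the toy data, polynomial degree, and penalty grid are all invented here. It fits an over-flexible polynomial with ridge regularization and compares training error against validation error, the comparison that validation is for:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine curve, split into training and validation halves.
x = np.sort(rng.uniform(0, 1, 40))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)
x_tr, y_tr = x[::2], y[::2]
x_va, y_va = x[1::2], y[1::2]

def design(x, degree=10):
    # Polynomial feature matrix [1, x, x^2, ...] -- flexible enough to overfit.
    return np.vander(x, degree + 1, increasing=True)

def ridge_fit(x, y, lam):
    # Ridge regression: the penalty lam on large weights is the
    # "regularization" discussed in part 2 of the talk.
    X = design(x)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(w, x, y):
    return np.mean((design(x) @ w - y) ** 2)

for lam in [1e-9, 1e-4, 1e-1, 10.0]:
    w = ridge_fit(x_tr, y_tr, lam)
    print(f"lambda={lam:g}  train MSE={mse(w, x_tr, y_tr):.3f}  "
          f"validation MSE={mse(w, x_va, y_va):.3f}")
```

With a tiny penalty the model chases the noise (low training error, higher validation error); increasing the penalty trades a little training accuracy for better validation performance.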

Bio: Rahul Dave is a lecturer at Harvard University and partner at LxPrior, a small Data Science consultancy. LxPrior offers its clients data analysis services as well as data science training. Rahul trained as an astrophysicist, doing research on dark energy, and worked at the University of Pennsylvania, NASA’s Astrophysics Data System, as well as at Harvard University. As a computational scientist, he has developed time series databases, semantic search engines, and techniques for classifying astronomical objects. He was one of the people behind Harvard’s Data Science course CS109, and Harvard Library’s Data Science Training For Librarians course. This year he is teaching courses in computer science and stochastic methods to scientists and engineers.
 

Smart senses for robots

Published on Nov 3, 2016

From touch to sight, robots are getting a sensory upgrade. Artificial intelligence isn't just mental smarts. By giving robots physical intelligence, researchers hope to build machines that can work alongside humans.

Article "For robots, artificial intelligence gets physical"
To work with humans, machines need to sense the world around them

by Meghan Rosen
November 2, 2016
 

Brian Cox presents Science Matters - Machine Learning and Artificial intelligence

Streamed live on Jan 10, 2017

We're beginning to see more and more jobs being performed by machines; even creative tasks like writing music or painting can now be carried out by a computer.

But how and when will machines be able to explain themselves? Should we be worrying about an artificial intelligence taking over our world or are there bigger and more imminent challenges that advances in machine learning are presenting here and now?

Join Professor Brian Cox, the Royal Society Professor of Public Engagement, as he brings together experts on AI and machine learning to discuss key issues that will shape our future.

Panelists will include:

Professor Jon Crowcroft FRS, Marconi Professor of Networked Systems at the University of Cambridge

Professor Joanna Bryson, Reader in AI Ethics, University of Bath

Professor Sabine Hauert, Lecturer in Robotics at the University of Bristol

"Brian Cox presents Science Matters: Machine learning and artificial intelligence"

by Robohub Editors
January 11, 2017
 

Intrinsically motivated multi-task reinforcement learning

Published on Jan 25, 2017

Intrinsically Motivated Multi-Task Reinforcement Learning with open-source Explauto library and Poppy Humanoid Robot
Sebastien Forestier, Yoan Mollard, Damien Caselli, Pierre-Yves Oudeyer, Flowers Team, Inria Bordeaux.
2nd rank at Demonstration Awards, NIPS 2016, Barcelona, Spain, December 6th, 2016.

Poppy Project, open-source humanoid platform, Talence Cedex, France
 

Stanford Seminar: Deep Learning in the Age of Zen, Vega, and Beyond

Published on Jan 26, 2017

EE380: Computer Systems Colloquium Seminar
Computer Architecture: Deep Learning in the Age of Zen, Vega, and Beyond
Speaker: Allen Rush

Deep Learning and Machine Intelligence is maturing to the point where it is being deployed in many applications, particularly large-scale data, image classification, and detection. This talk addresses the challenges of deep learning from a computational perspective and discusses the ways in which the new compute platforms Zen (x86) and Vega (GPU) provide high-performance solutions for different training and inference applications. The ROCm software stack completes the picture with libraries and framework support for a variety of environments.

About the Speaker:
Allen Rush is a fellow at AMD, focusing on imaging and machine learning architecture development. He has been active in imaging and computer vision projects for over 25 years, including several startups. He is the domain architect for ISP and current machine learning development activities in HW, SW and application support.

Support for the Stanford Colloquium on Computer Systems Seminar Series provided by the Stanford Computer Forum.

Colloquium on Computer Systems Seminar Series (EE380) presents the current research in design, implementation, analysis, and use of computer systems. Topics range from integrated circuits to operating systems and programming languages. It is free and open to the public, with new lectures each week.
 

Learning language through interaction

Published on Jan 27, 2017

Machine learning-based natural language processing systems are amazingly effective when plentiful labeled training data exists for the task/domain of interest. Unfortunately, for broad-coverage (in both task and domain) language understanding, we're unlikely to ever have sufficient labeled data, and systems must find some other way to learn. I'll describe a novel algorithm for learning from interactions, and several problems of interest, most notably machine simultaneous interpretation (translation while someone is still speaking). This is all joint work with some amazing (former) students He He, Alvin Grissom II, John Morgan, Mohit Iyyer, Sudha Rao and Leonardo Claudino, as well as colleagues Jordan Boyd-Graber, Kai-Wei Chang, John Langford, Akshay Krishnamurthy, Alekh Agarwal, Stéphane Ross, Alina Beygelzimer and Paul Mineiro.
 

Towards practical machine learning with differential privacy and beyond

Published on Jan 27, 2017

Machine learning (ML) has become one of the most powerful classes of tools for artificial intelligence, personalized web services and data science problems across fields. However, the use of ML on sensitive data sets involving medical, financial and behavioral data is greatly limited due to privacy concerns. In this talk, we consider the problem of statistical learning with privacy constraints. Under Vapnik's general learning setting and the formalism of differential privacy (DP), we establish simple conditions that characterize private learnability, revealing a mixture of positive and negative results. We then identify generic methods that reuse existing randomness to effectively solve private learning in practice, and discuss a weaker notion of privacy, on-average KL-privacy, that allows for an orders-of-magnitude more favorable privacy-utility tradeoff while preserving key properties of differential privacy. Moreover, we show that on-average KL-privacy is equivalent to generalization for a large class of commonly used tools in statistics and machine learning that sample from Gibbs distributions, a class of distributions that arises naturally from the maximum entropy principle. Finally, I will describe a few exciting future directions that use statistics and machine learning tools to advance the state of the art in privacy, and use privacy (and privacy-inspired techniques) to formally address the problem of p-hacking in scientific discovery.
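The talk's own mechanisms are not reproducible from this abstract, but the differential-privacy formalism it builds on can be illustrated with the classical Laplace mechanism. In the sketch below, the dataset, bounds, and epsilon values are all invented for illustration:

```python
import numpy as np

def laplace_mean(data, lo, hi, epsilon, rng):
    """epsilon-DP release of a bounded mean via the Laplace mechanism.

    This is the textbook mechanism, not the talk's contribution: noise is
    calibrated to the sensitivity of the mean, i.e. the most any single
    record can change it, divided by the privacy budget epsilon.
    """
    data = np.clip(data, lo, hi)             # enforce the assumed bounds
    sensitivity = (hi - lo) / len(data)      # max change from one record
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return data.mean() + noise

rng = np.random.default_rng(0)
ages = rng.integers(18, 90, size=10_000).astype(float)
for eps in [0.01, 0.1, 1.0]:
    est = laplace_mean(ages, 18, 90, eps, rng)
    print(f"epsilon={eps}  private mean={est:.2f}  true mean={ages.mean():.2f}")
```

Smaller epsilon means stronger privacy and noisier answers; the privacy-utility tradeoff the abstract mentions is exactly this tension, which on-average KL-privacy is claimed to relax by orders of magnitude.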
 

Live Q&A with the Deep Learning Foundations Team

Streamed live on Feb 3, 2017

Friday, February 3rd from 5:00pm to 6:30pm PST, join us for a live deep dive on Deep Learning with Mat, Siraj and the Deep Learning Foundations Team. We'll be answering all of your questions on lessons, projects, and everything you wanted to know about this very unique program.

Udacity, massive open online courses (MOOCs), Mountain View, California, USA
 

Deep reinforcement learning for driving policy

Published on Jan 31, 2017

Autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, taking left and right turns and while pushing ahead in unstructured urban roadways.

Since there are many possible scenarios, manually tackling all possible cases will likely yield an overly simplistic policy.

Moreover, the policy must be robust to unexpected behavior by other drivers and pedestrians, while at the same time not being so defensive that it disrupts normal traffic flow.
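The talk's actual method is not described in this abstract, so purely as a hedged illustration of learning such a negotiation policy from experience rather than hand-written rules, here is a toy tabular Q-learning sketch of a single merge-or-wait decision; the gap discretization and reward numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy merging MDP: the state is a discretized gap to the nearest car
# (0 = no gap, 3 = large gap). Merging into a small gap is penalized
# heavily; waiting costs a little time and a new gap appears.
N_GAPS, WAIT, MERGE = 4, 0, 1
Q = np.zeros((N_GAPS, 2))
alpha, gamma, explore = 0.1, 0.9, 0.2

def step(gap, action):
    if action == MERGE:
        reward = 1.0 if gap >= 2 else -10.0
        return reward, None                       # episode ends after a merge
    return -0.1, int(rng.integers(0, N_GAPS))     # wait: small cost, new gap

for _ in range(20_000):
    gap = int(rng.integers(0, N_GAPS))
    while gap is not None:
        # epsilon-greedy action selection
        a = int(rng.integers(0, 2)) if rng.random() < explore else int(Q[gap].argmax())
        reward, nxt = step(gap, a)
        target = reward if nxt is None else reward + gamma * Q[nxt].max()
        Q[gap, a] += alpha * (target - Q[gap, a])
        gap = nxt

print("learned policy per gap size (0=wait, 1=merge):", Q.argmax(axis=1))
```

The learned policy merges only when the gap is large enough, a behavior that emerges from the reward signal rather than from enumerating cases by hand; real driving policies face a vastly larger, multi-agent state space, which is what motivates deep reinforcement learning here.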

Symposium on "Information, Control, and Learning" at The Hebrew University of Jerusalem.

By Prof. Shai Shalev Shwartz, VP Technologies of Mobileye
and professor of computer science at The Hebrew University of Jerusalem.

Mobileye N.V., Jerusalem, Israel
 