
View Full Version : Miscellaneous



Airicist
29th January 2013, 20:37
https://youtu.be/Qe0256zAsNU

Machine Learning and Intelligence in Our Midst

Published on Mar 28, 2012


The creation of intelligent computing systems that perceive, learn, and reason has been a long-standing and visionary goal in computer science. Over the last 20 years, technical and infrastructural developments have come together to create a nurturing environment for developing and fielding applications of machine learning and reasoning--and for harnessing machine intelligence to provide value to businesses and to people in the course of their daily lives. Key advances include jumps in the availability of rich streams of data, precipitous drops in the cost of storing and retrieving large amounts of data, increases in computing power and memory, and jumps in the prowess of methods for performing machine learning and reasoning. The combination of these advances has created an inflection point in our ability to harness data to generate insights and to guide decision-making. This talk will present recent efforts on learning and inference, highlighting key ideas in the context of applications, including advances in transportation and health care, and the development of new types of applications and services. Opportunities for creating systems with new kinds of competencies by weaving together multiple data sources and models will also be discussed.

Airicist
11th April 2013, 22:46
https://youtu.be/f3RAFmkqx_4

DiGORO - Robot with the Ability to Learn

Uploaded on Jan 13, 2010

Airicist
17th April 2013, 21:00
https://youtu.be/XCE7_OfSkFI

Georgia Tech LAGR Robot Learning

Published on Apr 16, 2013


Tucker Balch, Richard Roberts

Airicist
5th May 2013, 18:00
https://youtu.be/IWYlOzsmedU

GPU-based Brain Research Helps Japanese Robot Hit it Out of the Park

Published on Apr 26, 2013


The human cerebellum is a mysterious thing. Responsible for motor control, it's the reason why we can walk, run, or learn to hit a baseball without having to consciously think through the mechanics of what we're doing. These are some of the tasks that robots -- with their 'electronic' brains -- struggle with most.
Now a pair of researchers in Japan has used GPUs and the CUDA parallel programming model to create a 100,000 neuron simulation of the human cerebellum, one of the largest simulations of its kind in the world. And they've put their model to the test by applying this knowledge to teach a robot to learn to hit a ball.
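The article does not describe the researchers' model itself; as a generic, hypothetical illustration of the kind of per-neuron update that such simulations parallelize across GPU cores, here is a leaky integrate-and-fire step in plain Python (all parameters invented):

```python
def lif_step(v, inputs, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Advance each neuron's membrane potential by one time step.
    Neurons that cross threshold emit a spike and reset to rest."""
    new_v, spikes = [], []
    for vi, drive in zip(v, inputs):
        vi = vi + dt * (-(vi - v_rest) + drive) / tau  # leak toward rest + input
        if vi >= v_thresh:
            new_v.append(v_rest)
            spikes.append(True)
        else:
            new_v.append(vi)
            spikes.append(False)
    return new_v, spikes

# Drive three neurons with different constant currents for 100 steps:
# a stronger drive should produce a higher spike count.
v = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
for _ in range(100):
    v, spikes = lif_step(v, inputs=[0.0, 1.5, 3.0])
    counts = [c + int(s) for c, s in zip(counts, spikes)]
print(counts)
```

On a GPU, this loop over neurons becomes one vectorized kernel, which is what makes 100,000-neuron simulations feasible in real time.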

Airicist
30th July 2014, 08:55
Article "Robots helped inspire deep learning and might become its killer app (https://gigaom.com/2014/07/29/robots-helped-inspire-deep-learning-and-might-become-its-killer-app/)"

by Derrick Harris
July 30, 2014

Airicist
30th March 2015, 16:24
https://youtu.be/MWlRXLpUXkU

AI is Learning to See the Forest in Spite of the Trees, with Stefan Weitz

Published on Mar 30, 2015


Stefan Weitz, Microsoft's Director of Search, explains that the future of machine learning consists of teaching artificial intelligence to identify patterns. This will allow, for instance, a search engine to critically analyze your search queries rather than simply scouring the web's index of results.

Transcript: So machine learning. What is machine learning? Machine learning really is teaching machines how to find patterns in large amounts of data. The way it works is you’ve got a black box. Think of this as just this set of algorithms in the center that can turn a mass of unstructured data or a mass of confusing data into something which is less confusing and more structured. So what happens is you basically tell the machine I’m going to give you all this input on this side and I’m going to tell you what the input should look like post processing on this side. So you kind of give it the hint. And what it does is the machine says okay, well how do I get from point A to point B. And it builds, in essence, a pattern to say oh, okay, when I see all this data to get to this structured set of data I have to do all these computations in the middle to move it from unstructured or messy to structured and beautiful. And that can apply not just to data. It can apply to anything. It can apply to faces. It can apply to types of cats. Whatever it might be you’re basically saying hey machine, this is a cat.

And it says okay, when I see two eyes and a little pink nose and some whiskers – it doesn’t actually say this but that’s what it’s thinking – then that is a cat. So you teach machines in essence to recognize patterns in data, in pictures and whatever it might be. So that’s machine learning basically. You’re in essence helping machines find patterns in massive amounts of data. How does it apply to things like natural language? Well the beauty of machine learning, the beauty of things called deep neural networks allow in essence machines to not think like humans, that’s too much of a stretch. But certainly operate in the same way that we operate. The same way that, for example, when you’re a child you might see a ball on the floor. You don’t know what it’s called. You don’t know how it’s constructed or anything else but over time people as you’re walking around the house your mom or your dad will say look at that ball or go get the ball. And so what’s happening is that over time you’re getting reinforced that when you see an object on the floor that is stationary and has a certain circumference and looks a certain way you begin to understand ah, that’s a ball because you’ve heard it over and over again. And machine learning and natural language processing operates much the same way except instead of having your mom or dad point at the thing and say that’s a ball three or four times, machines now have trillions of observations about the real world so they can learn these things much, much faster. So for NLP it’s critical because our ability to interact with search really is predicated on the system’s understanding of what it is we are asking.

Traditionally again machines will return back results or web pages based on the keywords that we put into the box. But if I were to ask a search engine why is there no jaguar in this room today we would get back five and a half million results for that question, none of which make any sense of course. With natural language suddenly because the search systems understand what a jaguar is...[TRANSCRIPT TRUNCATED]
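The input-to-output training Weitz describes is supervised learning. A minimal, self-contained sketch of the idea -- with invented "cat" features, not anything from the talk -- might look like:

```python
# Supervised learning in miniature: show the machine labeled examples
# (input -> desired output), let it fit a rule, then apply the rule to
# new input. Here the "rule" is a nearest-centroid classifier over
# invented feature vectors.

def train_centroids(examples):
    """examples: list of (feature_vector, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for x, y in examples:
        if y not in sums:
            sums[y] = [0.0] * len(x)
            counts[y] = 0
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Assign x to the label whose centroid is closest (squared distance)."""
    def dist2(label):
        return sum((c - xi) ** 2 for c, xi in zip(centroids[label], x))
    return min(centroids, key=dist2)

# Toy labeled data: (whisker_score, nose_pinkness) -> "cat" / "not cat"
training = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
            ([0.1, 0.2], "not cat"), ([0.2, 0.1], "not cat")]
model = train_centroids(training)
print(predict(model, [0.85, 0.75]))  # a cat-like input
```

Real systems replace the hand-picked features and centroids with learned representations, but the train-then-predict loop is the same.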

Airicist
20th May 2015, 21:30
https://youtu.be/Ik6FjIXn8RI

Teaching Welding Robots by Demonstration -- Kinetiq Teaching from Robotiq

Published on Oct 22, 2013


Kinetiq Teaching makes it easy to implement robotic welding through simplified teaching. It reduces set-up times by allowing operators to guide the robot by hand to desired weld positions. An icon-based menu on the teach pendant's color touch screen lets the operator define the task. Programming time is greatly reduced by the more intuitive manual positioning, and the graphical user interface allows robot programming to be performed with minimal training.

Airicist
20th May 2015, 21:31
https://youtu.be/jzR5NZrZSu0

Kinetiq teaching vs Teach Pendant Programming

Published on Nov 19, 2013


Kinetiq Teaching is a new technology for quickly and easily tasking welding robots without in-depth programming knowledge. Welders no longer need programming expertise to move the welding robot and teach it a welding task. They simply put their hands on the robot welder, move it to the desired position, and add welding tasks by selecting options on a smartphone-style touch-screen interface. With Kinetiq Teaching, robotic welding is moving from complex lines of programming to intuitive, user-friendly teaching.

Airicist
25th May 2015, 01:42
https://youtu.be/JeVppkoloXs

BRETT the Robot learns to put things together on his own

Published on May 21, 2015


UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence. In their experiments, the PR2 robot, nicknamed BRETT for Berkeley Robot for the Elimination of Tedious Tasks, used “deep learning” techniques to complete various tasks without pre-programmed details about its surroundings.
Video footage courtesy of UC Berkeley Robot Learning Lab, edited by Phil Ebiner
Full Story: "New ‘deep learning’ technique enables robot mastery of skills via trial and error (https://news.berkeley.edu/2015/05/21/deep-learning-robot-masters-skills-via-trial-and-error)"

by Sarah Yang
May 21, 2015
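BRETT's actual controllers are deep neural-network policies; as a far simpler illustration of the same trial-and-error principle, here is tabular Q-learning on a toy task (everything below is invented for illustration, not Berkeley's code):

```python
import random

# Trial-and-error learning in miniature: tabular Q-learning on a
# five-cell corridor where only reaching the rightmost cell pays off.
# This is not Berkeley's system, just the learn-from-outcomes principle
# reduced to its simplest form.

N_STATES = 5
ACTIONS = (-1, +1)                      # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration
rng = random.Random(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # explore occasionally, otherwise exploit the current estimates
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should be "always step right".
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Deep reinforcement learning replaces the Q table with a neural network so the same update can cope with camera images and joint angles instead of five discrete states.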


https://youtu.be/oQasCj1X0e8

BRETT the Robot assembles toy airplane part

Published on May 20, 2015


https://youtu.be/WK8J4gZFdH0

The robot which learns like a child - BBC Click

Published on Oct 21, 2015


A robot which learns like a child - by trial and error - has been developed by researchers at the University of California, Berkeley.
Brett (Berkeley Robot for the Elimination of Tedious Tasks) used its deep learning algorithm to perform various tasks from putting hangers on a rack to screwing a cap on a bottle of water.
The researchers believe that if a robot can learn autonomously, it will be more successful at completing tasks in the real world.
BBC Click's Talia Franco spoke to Sergey Levine to find out more.

Airicist
9th October 2015, 21:04
https://youtu.be/pMb_CIK14lU

RI Seminar: Louis-Philippe Morency (https://www.linkedin.com/in/morency) : multimodal machine learning

Streamed live on Oct 9, 2015


Multimodal Machine Learning: Modeling Human Communication Dynamics

Louis-Philippe Morency
Assistant Professor, LTI

October 9, 2015

Abstract
Human face-to-face communication is a little like a dance, in that participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today's computers and interactive devices are still lacking many of these human-like abilities to hold fluid and natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing and computational linguistics, my research focuses on creating computational technologies able to analyze, recognize and predict human subtle communicative behaviors in social context. I formalize this new research endeavor with a Human Communication Dynamics framework, addressing four key computational challenges: behavioral dynamic, multimodal dynamic, interpersonal dynamic and societal dynamic. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities and interlocutors. In this talk, I will present some of our recent achievements modeling multiple aspects of human communication dynamics, motivated by applications in healthcare (depression, PTSD, suicide, autism), education (learning analytics), business (negotiation, interpersonal skills) and social multimedia (opinion mining, social influence).

Additional Information
Host: Kris Kitani
Appointments: Stephanie Matvey
Speaker Biography
Louis-Philippe Morency is Assistant Professor in the Language Technology Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He received his Ph.D. and Master's degrees from the MIT Computer Science and Artificial Intelligence Laboratory. In 2008, Dr. Morency was selected as one of "AI's 10 to Watch" by IEEE Intelligent Systems. He has received 7 best paper awards in multiple ACM- and IEEE-sponsored conferences for his work on context-based gesture recognition, multimodal probabilistic fusion and computational models of human communication dynamics. For the past three years, Dr. Morency has been leading a DARPA-funded multi-institution effort called SimSensei which was recently named one of the year’s top ten most promising digital initiatives by the NetExplo Forum, in partnership with UNESCO.

Airicist
22nd November 2015, 07:25
https://youtu.be/JSNZA8jVcm4

Deep Learning - Jurgen Schmidhuber

Published on Apr 14, 2014


We're very excited to have one of the world experts in this field at our first meetup. The recent resurrection of multi-layer neural networks is generating a lot of interest, with deep learning appearing on the New York Times front page, and big companies like Google and Facebook hunting for the experts in this field. Jürgen's talk will shed more light on how deep learning methods work, and why they work.

Airicist
23rd November 2015, 10:07
https://youtu.be/EX1CIVVkWdE

David Silver (Google DeepMind) - Deep Reinforcement Learning

Published on May 18, 2015


ICLR 2015 Invited Talk: David Silver (Google DeepMind) "Deep Reinforcement Learning"

Airicist
6th February 2016, 08:32
Article "Energy-friendly chip can perform powerful artificial-intelligence tasks (https://news.mit.edu/2016/neural-chip-artificial-intelligence-mobile-devices-0203)"
Advance could enable mobile devices to implement “neural networks” modeled on the human brain.

by Larry Hardesty
February 3, 2016

Airicist
17th February 2016, 09:53
https://youtu.be/jkkmBpJ-Eeo

Distinguished Lecturer : Eric Xing - Strategies & Principles for Distributed Machine Learning

Published on Feb 16, 2016


Eric Xing - Distinguished Lecturer

Strategies & Principles for Distributed Machine Learning

The rise of Big Data has led to new demands for Machine Learning (ML) systems to learn complex models with millions to billions of parameters that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with 10s to 1000s of machines, it is often the case that significant engineering efforts are required --- and one might fairly ask if such engineering truly falls within the domain of ML research or not. Taking the view that Big ML systems can indeed benefit greatly from ML-rooted statistical and algorithmic insights --- and that ML researchers should therefore not shy away from such systems design --- we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions that involve a continuum from application, to engineering, and to theoretical research and development of Big ML system and architecture, on how to make them efficient, general, and with convergence and scaling guarantees. These principles concern four key questions which traditionally receive little attention in ML research: How to distribute an ML program over a cluster? How to bridge ML computation with inter-machine communication? How to perform such communication? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typical in traditional computer programs, and by dissecting successful cases of how we harness these principles to design both high-performance distributed ML software and general-purpose ML framework, we present opportunities for ML researchers and practitioners to further shape and grow the area that lies between ML and systems.
This is joint work with the CMU Petuum Team.
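Of Xing's four questions, the first (how to distribute an ML program over a cluster) has a classic answer worth sketching: data parallelism with periodic parameter averaging. The code below simulates the "workers" sequentially in one process; it illustrates the idea, not the Petuum system itself, and all numbers are invented:

```python
# Data parallelism in miniature: shard the data, run SGD per shard, and
# periodically average parameters (the communication step).

def sgd_step(w, batch, lr=0.05):
    """One least-squares SGD step for the scalar model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return w - lr * grad

# Data from the true model y = 3x, split across four simulated workers.
data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0)]
shards = [data[i::4] for i in range(4)]

w_global = 0.0
for sync_round in range(50):
    # each worker starts from the latest global parameter...
    local_ws = [sgd_step(w_global, shard) for shard in shards]
    # ...and the "server" averages the workers' results
    w_global = sum(local_ws) / len(local_ws)
print(round(w_global, 2))  # converges to 3.0
```

How often to synchronize, and what to send, are exactly the bridging and communication questions the abstract raises.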

Airicist
20th February 2016, 01:14
Article "A Short History of Machine Learning -- Every Manager Should Read (https://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read)"

by Bernard Marr
February 19, 2016

Airicist
4th March 2016, 16:22
Article "Artificial Intelligence & Machine Learning: Top 100 Influencers and Brands (https://www.onalytica.com/blog/posts/artificial-intelligence-machine-learning-top-100-influencers-and-brands)"

by Joe Fields
March 3, 2016

Airicist
16th March 2016, 05:58
https://youtu.be/vNiLVQ5GOn8

Machine Learning inside Virtual Worlds

Published on Mar 15, 2016


The realistic 3-D graphics in video games can help deep-learning algorithms make sense of the real world.

Airicist
9th May 2016, 06:28
Article "Microsoft and Google Want to Let Artificial Intelligence Loose on Our Most Private Data (https://www.technologyreview.com/s/601294/microsoft-and-google-want-to-let-artificial-intelligence-loose-on-our-most-private-data)"
New ways to use machine learning without risking sensitive data could unlock new ideas in industries like health care and finance.

by Tom Simonite
April 19, 2016

Airicist
2nd June 2016, 21:23
Article "The barbell effect of machine learning (https://techcrunch.com/2016/06/02/the-barbell-effect-of-machine-learning)"

by Nick Beim
June 2, 2016

Airicist
17th June 2016, 12:02
https://youtu.be/BW6QaPOlpCk

Robot learns to push object and identifies patch friction model

Published on Feb 25, 2016


ICRA 2016 paper:
A Convex Polynomial Force-Motion Model for Planar Sliding:
Identification and Application
Jiaji Zhou, Robert Paolini, J. Andrew Bagnell and Matthew T. Mason

"A Convex Polynomial Force-Motion Model for Planar Sliding:
Identification and Application (https://arxiv.org/pdf/1602.06056v1.pdf)"

by Jiaji Zhou, Robert Paolini, J. Andrew Bagnell and Matthew T. Mason

"Teaching robots the physics of sliding and pushing objects (https://robohub.org/teaching-robots-the-physics-of-planar-sliding-and-pushing-objects-effectively)"

by Jiaji Zhou
June 16, 2016

Airicist
23rd June 2016, 06:24
Article "Teaching machines to predict the future (https://news.mit.edu/2016/teaching-machines-to-predict-the-future-0621)"
Deep-learning vision system from the Computer Science and Artificial Intelligence Lab anticipates human interactions using videos of TV shows.

by Adam Conner-Simons, Rachel Gordon
June 21, 2016

Airicist
23rd June 2016, 06:26
Article "How Google is remaking itself as a machine learning first company (https://backchannel.com/how-google-is-remaking-itself-as-a-machine-learning-first-company-ada63defcb70)"

by Steven Levy
June 22, 2016

Airicist
24th June 2016, 13:28
Article "Overview: Are the sceptics right? Limits and potentials of deep learning in robotics (https://robohub.org/overview-are-the-sceptics-right-limits-and-potentials-of-deep-learning-in-robotics)"

by John McCormac
June 23, 2016

Airicist
5th July 2016, 06:31
Article "Power to the People: How One Unknown Group of Researchers Holds the Key to Using AI to Solve Real Human Problems (https://medium.com/@atduskgreg/power-to-the-people-how-one-unknown-group-of-researchers-holds-the-key-to-using-ai-to-solve-real-cc9e75b1f334)"

by Greg Borenstein
July 1, 2016

Airicist
13th July 2016, 01:04
Article "To supervise or not to supervise in AI? (https://www.oreilly.com/ideas/to-supervise-or-not-to-supervise-in-ai)"
If you look carefully at how humans learn, you see surprisingly little unsupervised learning.

by Mike Loukides
July 11, 2016

Airicist
14th July 2016, 09:48
Article "Right Now, Artificial Intelligence Is The Only Thing That Matters: Look Around You (https://www.forbes.com/sites/enriquedans/2016/07/13/right-now-artificial-intelligence-is-the-only-thing-that-matters-look-around-you)"

by Enrique Dans
July 13, 2016

Airicist
14th July 2016, 13:48
https://youtu.be/AJzvuvVrEVM

ODSC East 2016 | Rahul Dave - "Machine Learning for Suits"

Published on Jul 14, 2016


Abstract: You will learn the basic concepts of machine learning – such as Modeling, Model Selection, Loss or Profit, overfitting, and validation – in a non-mathematical way, so that you can ask for data analysis and interpret the results of a model in the context of making business decisions. The concepts behind machine learning are actually quite simple, so expect to take away not just words and acronyms, but rather, a deep understanding. We will work in the context of concrete examples from different domains, including finance and medicine.

1. What is probability? What is a model? Supervised vs unsupervised learning. Regression and Classification. Minimizing Cost and Maximizing likelihood.

2. Models and Data: Bias, Variance, Noise, Overfitting, and how to solve Overfitting with Regularization and Validation

3. Different kinds of models, including ensembles and deep learning.

4. How good is a model? Profit Curves, ROC curves, and the expected value formalism.
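The core loop of item 2 -- fit on training data, judge on held-out data -- takes only a few lines to demonstrate. The data and models below are invented for illustration:

```python
# Overfitting and validation in miniature: a model that memorizes the
# training set gets zero training error but fails on held-out data,
# and a validation split exposes this.

train = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]
valid = [(5, 10.1), (6, 11.8)]   # held out, never used for fitting

def memorizer(train):
    """Overfit: store every training point exactly; guess 0 elsewhere."""
    table = dict(train)
    return lambda x: table.get(x, 0.0)

def line_fit(train):
    """Simpler by comparison: least-squares line y = a*x + b."""
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

for name, fit in [("memorizer", memorizer), ("line", line_fit)]:
    m = fit(train)
    print(name, round(mse(m, train), 3), round(mse(m, valid), 3))
```

The memorizer wins on training error and loses badly on validation error, which is the whole argument for holding data out.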

Bio: Rahul Dave is a lecturer at Harvard University and partner at LxPrior, a small Data Science consultancy. LxPrior offers its clients data analysis services as well as data science training. Rahul trained as an astrophysicist, doing research on dark energy, and worked at the University of Pennsylvania, NASA’s Astrophysics Data System, as well as at Harvard University. As a computational scientist, he has developed time series databases, semantic search engines, and techniques for classifying astronomical objects. He was one of the people behind Harvard’s Data Science course CS109, and Harvard Library’s Data Science Training For Librarians course. This year he is teaching courses in computer science and stochastic methods to scientists and engineers.

Airicist
1st October 2016, 08:43
Article "How to Steal an AI (https://www.wired.com/2016/09/how-to-steal-an-ai)"

by Andy Greenberg
September 30, 2016

Airicist
5th October 2016, 01:42
Article "Fujitsu Memory Tech Speeds Up Deep-Learning AI (https://spectrum.ieee.org/tech-talk/computing/software/fujitsu-memory-tech-speeds-up-deep-learning-ai)"

by Jeremy Hsu
October 4, 2016

Airicist
2nd November 2016, 23:23
"What Deep Learning Means for Artificial Intelligence (https://www.slideshare.net/jmugan/what-deep-learning-means-for-artificial-intelligence)"

by Jonathan Mugan
November 2, 2016

Airicist
4th November 2016, 08:23
https://youtu.be/kZJiL0L-8Zw

Smart senses for robots

Published on Nov 3, 2016


From touch to sight, robots are getting a sensory upgrade. Artificial intelligence isn't just mental smarts. By giving robots physical intelligence, researchers hope to build machines that can work alongside humans.

Article "For robots, artificial intelligence gets physical (https://www.sciencenews.org/article/robots-artificial-intelligence-gets-physical)"
To work with humans, machines need to sense the world around them

by Meghan Rosen
November 2, 2016

Airicist
24th December 2016, 02:15
Article "The Five Capability Levels of Deep Learning Intelligence (https://www.kdnuggets.com/2016/12/5-capability-levels-deep-learning-intelligence.html)"

by Carlos Perez
December 5, 2016

Airicist
28th December 2016, 03:50
"Deep Learning in Clojure With Cortex (http://gigasquidsoftware.com/blog/2016/12/27/deep-learning-in-clojure-with-cortex)"

December 27, 2016

Airicist
13th January 2017, 23:10
https://youtu.be/IunNpGGt3H8

Brian Cox presents Science Matters - Machine Learning and Artificial intelligence

Streamed live on Jan 10, 2017


We're beginning to see more and more jobs being performed by machines; even creative tasks like writing music or painting can now be carried out by a computer.

But how and when will machines be able to explain themselves? Should we be worrying about an artificial intelligence taking over our world or are there bigger and more imminent challenges that advances in machine learning are presenting here and now?

Join Professor Brian Cox, the Royal Society Professor of Public Engagement, as he brings together experts on AI and machine learning to discuss key issues that will shape our future.

Panelists will include:

Professor Jon Crowcroft FRS, Marconi Professor of Networked Systems at the University of Cambridge

Professor Joanna Bryson, Reader in AI Ethics, University of Bath

Professor Sabine Hauert, Lecturer in Robotics at the University of Bristol

"Brian Cox presents Science Matters: Machine learning and artificial intelligence (https://robohub.org/brian-cox-presents-science-matters-machine-learning-and-artificial-intelligence)"

by Robohub Editors
January 11, 2017

Airicist
26th January 2017, 10:21
https://youtu.be/NOLAwD4ZTW0

Intrinsically motivated multi-task reinforcement learning

Published on Jan 25, 2017


Intrinsically Motivated Multi-Task Reinforcement Learning with open-source Explauto library and Poppy Humanoid Robot
Sebastien Forestier, Yoan Mollard, Damien Caselli, Pierre-Yves Oudeyer, Flowers Team, Inria Bordeaux.
2nd rank at Demonstration Awards, NIPS 2016, Barcelona, Spain, December 6th, 2016.

Poppy Project (https://pr.ai/showthread.php?3517), open-source humanoid platform, Talence Cedex, France

Airicist
27th January 2017, 08:19
https://youtu.be/2LksDHe43rU

Stanford Seminar: Deep Learning in the Age of Zen, Vega, and Beyond

Published on Jan 26, 2017


EE380: Computer Systems Colloquium Seminar
Computer Architecture : Deep Learning in the Age of Zen, Vega, and Beyond
Speaker: Allen Rush

Deep Learning and Machine Intelligence is maturing to the point where it is being deployed in many applications, particularly big data, image classification and detection. This talk addresses the challenges of deep learning from a computational perspective and discusses the ways in which the new compute platforms of Zen (x86) and Vega (GPU) provide high-performance solutions for different training and inference applications. The ROCm software stack completes the support with libraries and framework support for a variety of environments.

About the Speaker:
Allen Rush is a fellow at AMD, focusing on imaging and machine learning architecture development. He has been active in imaging and computer vision projects for over 25 years, including several startups. He is the domain architect for ISP and current machine learning development activities in HW, SW and application support.

Support for the Stanford Colloquium on Computer Systems Seminar Series provided by the Stanford Computer Forum.

Colloquium on Computer Systems Seminar Series (EE380) presents the current research in design, implementation, analysis, and use of computer systems. Topics range from integrated circuits to operating systems and programming languages. It is free and open to the public, with new lectures each week.

Airicist
28th January 2017, 02:49
https://youtu.be/Vl_r0IdEak4

Learning language through interaction

Published on Jan 27, 2017


Machine learning-based natural language processing systems are amazingly effective, when plentiful labeled training data exists for the task/domain of interest. Unfortunately, for broad coverage (both in task and domain) language understanding, we're unlikely to ever have sufficient labeled data, and systems must find some other way to learn. I'll describe a novel algorithm for learning from interactions, and several problems of interest, most notably machine simultaneous interpretation (translation while someone is still speaking). This is all joint work with some amazing (former) students He He, Alvin Grissom II, John Morgan, Mohit Iyyer, Sudha Rao and Leonardo Claudino, as well as colleagues Jordan Boyd-Graber, Kai-Wei Chang, John Langford, Akshay Krishnamurthy, Alekh Agarwal, Stéphane Ross, Alina Beygelzimer and Paul Mineiro.

Airicist
28th January 2017, 02:50
https://youtu.be/eKH7lqj7C8w

Towards practical machine learning with differential privacy and beyond

Published on Jan 27, 2017


Machine learning (ML) has become one of the most powerful classes of tools for artificial intelligence, personalized web services and data science problems across fields. However, the use of ML on sensitive data sets involving medical, financial and behavioral data is greatly limited due to privacy concerns. In this talk, we consider the problem of statistical learning with privacy constraints. Under Vapnik's general learning setting and the formalism of differential privacy (DP), we establish simple conditions that characterize private learnability, which reveals a mixture of positive and negative insights. We then identify generic methods that reuse existing randomness to effectively solve private learning in practice; and discuss a weaker notion of privacy — on-avg KL-privacy — that allows for orders-of-magnitude more favorable privacy-utility tradeoff, while preserving key properties of differential privacy. Moreover, we show that On-Average KL-Privacy is **equivalent** to generalization for a large class of commonly-used tools in statistics and machine learning that sample from Gibbs distributions---a class of distributions that arises naturally from the maximum entropy principle. Finally, I will describe a few exciting future directions that use statistics/machine learning tools to advance the state-of-the-art for privacy, and use privacy (and privacy-inspired techniques) to formally address the problem of p-hacking in scientific discovery.
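The talk's on-avg KL-privacy relaxation is its own contribution, but the textbook building block behind most differentially private ML is the Laplace mechanism, which is simple enough to sketch (example data invented):

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise by inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy. A counting
    query has sensitivity 1 (one record changes it by at most 1), so
    adding Laplace(1/epsilon) noise suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [34, 51, 29, 62, 45, 38, 70, 55]           # 4 records are >= 50
# Averaging many releases just shows the noise is zero-mean; a real
# deployment would release once, spending the epsilon budget once.
releases = [private_count(ages, lambda a: a >= 50, epsilon=0.5, rng=rng)
            for _ in range(2000)]
print(round(sum(releases) / len(releases), 1))
```

The privacy-utility tradeoff the abstract mentions is visible here: smaller epsilon means stronger privacy but a noisier released count.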

Airicist
4th February 2017, 07:41
https://youtu.be/HgLplBRpRcs

Live Q&A with the Deep Learning Foundations Team

Streamed live 6 hours ago


Friday, February 3rd from 5:00pm to 6:30pm PST, join us for a live deep dive on Deep Learning with Mat, Siraj and the Deep Learning Foundations Team. We'll be answering all of your questions on lessons, projects, and everything you wanted to know about this very unique program.

Udacity (https://pr.ai/showthread.php?16288), massive open online courses (MOOCs), Mountain View, California, USA

Airicist
6th February 2017, 18:02
https://youtu.be/cYTVXfIH0MU

Deep reinforcement learning for driving policy

Published on Jan 31, 2017


Autonomous driving is a multi-agent setting where the host vehicle must apply sophisticated negotiation skills with other road users when overtaking, giving way, merging, taking left and right turns and while pushing ahead in unstructured urban roadways.

Since there are many possible scenarios, manually tackling every case would likely yield an overly simplistic policy.

Moreover, the policy must guard against unexpected behavior by other drivers and pedestrians while not being so defensive that normal traffic flow is disrupted.

Symposium on "Information, Control, and Learning" at The Hebrew University of Jerusalem.

By Prof. Shai Shalev Shwartz, VP Technologies of Mobileye
and professor of computer science at The Hebrew University of Jerusalem.

Mobileye N.V. (https://pr.ai/showthread.php?2163), Jerusalem, Israel

Airicist
28th March 2017, 13:43
"Deep Learning A-Z™: Online Course in Artificial Intelligence (https://www.kickstarter.com/projects/kirilleremenko/deep-learning-a-ztm-online-course)" on Kickstarter

Airicist
5th April 2017, 09:52
https://youtu.be/zIi4yHYJdJY

Reinforcement learning to quadrotor control

Published on Mar 3, 2017

Airicist
9th April 2017, 22:54
https://youtu.be/8UQzJaa0HPU

On Deep Learning with Ian Goodfellow, Andrew Trask, Kelvin Lwin, Siraj Raval and the Udacity Team

Streamed live on Mar 17, 2017


Join us on March 17 at 6pm PST for a panel on the state of deep learning. Brought to you by Udacity's Deep Learning Nanodegree Foundation program.

Airicist
12th April 2017, 02:02
Article "The Dark Secret at the Heart of AI (https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai)"
No one really knows how the most advanced algorithms do what they do. That could be a problem.

by Will Knight
April 11, 2017

Airicist
10th May 2017, 22:32
https://youtu.be/QQplTBx6rV0

Teaching robots to teach robots

Published on May 10, 2017

Airicist
16th May 2017, 22:42
https://youtu.be/QGFlZfflYYg

Forget catastrophic forgetting: AI that learns after deployment

Published on May 16, 2017


Neurala CTO Anatoly Gorshechnikov on Lifelong Deep Learning Neural Networks. One of the major hassles of Deep Learning is the need to fully retrain the network on server every time new data becomes available in order to preserve the previous knowledge. This is called 'catastrophic forgetting' and severely impairs the ability to develop a truly autonomous AI. We present the patent pending technology that allows us to solve this problem by simply training on the fly the new object without retraining of the old. Our results not only show state of the art accuracy, but real time performance suitable for deployment of AI directly on the edge, thus moving AI out of the server room and into the hands of consumers. Imagine a toy that can learn to recognize and react to its owner or a drone that can learn and detect objects of interest identified while in flight. (Recorded at the NVIDIA GTC Conference in 2017 at San Jose.)
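Neurala's method is patent-pending and not public; one simple scheme with the same property -- adding a new class without touching the old ones -- is a nearest-class-mean classifier, where each class is summarized by a running mean of its feature vectors, so learning class N+1 never revisits or degrades classes 1..N. The sketch below is purely illustrative, not Neurala's algorithm:

```python
class NearestClassMean:
    """Incremental classifier immune to catastrophic forgetting in the
    narrow sense that learning a new class never updates old classes."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def learn(self, label, features):
        """Fold one example into the running mean for its class only."""
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        self.sums[label] = [s + f for s, f in zip(self.sums[label], features)]
        self.counts[label] += 1

    def predict(self, features):
        def dist2(label):
            mean = [s / self.counts[label] for s in self.sums[label]]
            return sum((m - f) ** 2 for m, f in zip(mean, features))
        return min(self.sums, key=dist2)

clf = NearestClassMean()
clf.learn("mug", [1.0, 0.1]); clf.learn("mug", [0.9, 0.2])
clf.learn("phone", [0.1, 1.0])
print(clf.predict([0.95, 0.1]))   # a mug-like input
clf.learn("drone", [0.5, 0.5])    # new class added on the fly, no retraining
print(clf.predict([0.52, 0.48]))  # recognized without touching old classes
```

In a deep-learning deployment, the features fed to such a head would come from a frozen pretrained network, so only the lightweight class statistics change on the device.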

Airicist
30th May 2017, 00:06
https://youtu.be/dz_jeuWx3j0

The Deep End of Deep Learning | Hugo Larochelle | TEDxBoston

Published on Oct 12, 2016


Artificial Neural Networks are inspired by some of the "computations" that occur in human brains, the real neural networks. In the past 10 years, much progress has been made with Artificial Neural Networks and Deep Learning, due to increased computing power (GPUs), open-source coding libraries that are being leveraged, and in-the-moment debates and corroborations via social media. Hugo Larochelle shares his observations of what's been made possible with the underpinnings of Deep Learning.

Hugo Larochelle is a Research Scientist at Twitter and an Assistant Professor at the Université de Sherbrooke (UdeS). Before 2011, he spent two years in the machine learning group at the University of Toronto, as a postdoctoral fellow under the supervision of Geoffrey Hinton. He obtained his Ph.D. at Université de Montréal, under the supervision of Yoshua Bengio. He is the recipient of two Google Faculty Awards. His professional involvement includes associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), member of the editorial board of the Journal of Artificial Intelligence Research (JAIR) and program chair for the International Conference on Learning Representations (ICLR) of 2015, 2016 and 2017.

Airicist
6th June 2017, 20:37
https://youtu.be/Luza51nFG0E

Stanford Seminar - Crowdsourcing for machine learning

Published on Jun 6, 2017


CS547: Human-Computer Interaction Seminar
Crowdsourcing for Machine Learning
Speaker: Dan Weld, University of Washington

Airicist
7th June 2017, 22:09
Article "'Black box' technique may lead to more powerful AI (https://www.engadget.com/2017/03/26/black-box-strategy-helps-neural-networks)"
The strategy is easier, faster and more flexible.

by Jon Fingas
March 26, 2017

Airicist
31st October 2017, 18:06
https://youtu.be/s2zDa5nc8bw

The progress we've made in machine learning - Tom Dietterich

Published on Oct 31, 2017


The National Academies of Sciences, Engineering, and Medicine organized a two-day workshop on the capabilities and applications of artificial intelligence and machine learning for the intelligence community on August 9-10, 2017.

Airicist
5th November 2017, 21:31
https://youtu.be/-A1tVNTHUFw

Developing bug-free machine learning systems using formal mathematics

Published on Nov 5, 2017


Noisy data, non-convex objectives, model misspecification, and numerical instability can all cause undesired behaviors in machine learning systems. As a result, detecting actual implementation errors can be extremely difficult. We demonstrate a methodology in which developers use an interactive proof assistant to both implement their system and to state a formal theorem defining what it means for their system to be correct. The process of proving this theorem interactively in the proof assistant exposes all implementation errors since any error in the program would cause the proof to fail. As a case study, we implement a new system, Certigrad, for optimizing over stochastic computation graphs, and we generate a formal (i.e. machine-checkable) proof that the gradients sampled by the system are unbiased estimates of the true mathematical gradients. We train a variational autoencoder using Certigrad and find the performance comparable to training the same model in TensorFlow.
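Certigrad's correctness theorem is machine-checked in a proof assistant, but the property it certifies, unbiasedness of sampled gradients, can be illustrated empirically. A hedged sketch (not Certigrad itself, just the statistical analogue): the score-function (REINFORCE) estimator for the gradient of E[x²] with x ~ N(μ, 1), whose true value is 2μ, checked by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.5
n = 200_000

# Score-function (REINFORCE) estimator of d/dmu E_{x ~ N(mu,1)}[x^2].
# For a unit-variance Gaussian, d/dmu log p(x; mu) = x - mu.
x = rng.normal(mu, 1.0, size=n)
grad_samples = x**2 * (x - mu)
grad_estimate = grad_samples.mean()

# Closed form: E[x^2] = mu^2 + 1, so the true gradient is 2*mu.
true_grad = 2 * mu
```

Certigrad replaces this kind of statistical spot-check with a formal proof: the unbiasedness theorem holds for every input, not just approximately on one sampled batch.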

Airicist
12th November 2017, 21:48
https://youtu.be/095Ee0rKC14

Probabilistic Machine Learning - Prof. Zoubin Ghahramani

Published on Nov 12, 2017


Zoubin Ghahramani is Professor of Information Engineering at the University of Cambridge, Co-Director of Uber AI Labs, and the Cambridge Director of the Alan Turing Institute, the UK's national institute for Data Science.

He is also the Deputy Academic Director of the Leverhulme Centre for the Future of Intelligence. He has worked and studied at the University of Pennsylvania, MIT, the University of Toronto, the Gatsby Unit at UCL, and CMU.

His research spans Neuroscience, AI, Machine Learning and Statistics. In 2015 he was elected a Fellow of the Royal Society.

Recorded, 7th March 2017

Airicist
4th June 2018, 21:06
https://youtu.be/KRvjGYIdJrg

Machine learning - a new programming paradigm

Published on Jun 4, 2018


In this video from RedHat Summit 2018, Cassie Kozyrkov (https://www.linkedin.com/in/cassie-kozyrkov-9531919) demystifies machine learning and AI. She describes how they're simply a different way to program computers, letting you explain your wishes with examples instead of instructions. See why this concept is powerful and how to think about applying it to solve your problems.
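The "examples instead of instructions" idea can be made concrete in a few lines. A minimal, hypothetical sketch (a perceptron learning the AND function; not code from the talk): instead of writing the rule, we hand the machine four labeled examples and let it find the rule itself.

```python
def predict(w, b, x):
    """Linear threshold unit: the 'rule' the machine is learning."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Examples instead of instructions: labeled input/output pairs for AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(10):                    # perceptron learning rule
    for x, y in examples:
        err = y - predict(w, b, x)     # +1, 0, or -1
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err
```

The program never states "output 1 only when both inputs are 1"; it recovers that behavior from the examples, which is the paradigm shift the talk describes.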

Airicist
7th November 2018, 15:51
https://youtu.be/ixmE5nt2o88

Learning to dress: synthesizing human dressing motion via deep reinforcement learning

Published on Sep 10, 2018


Video results for the paper "Learning To Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning" to be presented at Siggraph Asia 2018.

Airicist
13th May 2019, 15:10
Article "How to tell whether machine-learning systems are robust enough for the real world (https://news.mit.edu/2019/how-tell-whether-machine-learning-systems-are-robust-enough-real-worl-0510)"
New method quickly detects instances when neural networks make mistakes they shouldn’t.

by Rob Matheson
May 10, 2019

Airicist
21st June 2019, 04:49
https://youtu.be/ZJixNvx9BAc

Machine learning: living in the age of AI | A WIRED film

Published on Jun 20, 2019


The film “Machine Learning: Living in the Age of AI” examines the extraordinary ways in which people are interacting with AI today. Hobbyists and teenagers are now developing tech powered by machine learning, and WIRED shows the impacts of AI on schoolchildren, farmers, and senior citizens, as well as looking at the implications of this rapidly accelerating technology. The film was directed by filmmaker Chris Cannucciari, produced by WIRED, and supported by McCann Worldgroup.

Airicist
11th January 2020, 04:37
https://youtu.be/0VH1Lim8gL8

Deep Learning State of the Art (2020) | MIT Deep Learning Series

Jan 10, 2020


Lecture on most recent research and developments in deep learning, and hopes for 2020. This is not intended to be a list of SOTA benchmark results, but rather a set of highlights of machine learning and AI innovations and progress in academia, industry, and society in general. This lecture is part of the MIT Deep Learning Lecture Series.

Website: https://deeplearning.mit.edu
Slides: http://bit.ly/2QEfbAm
Playlist: http://bit.ly/deep-learning-playlist

OUTLINE:
0:00 - Introduction
0:33 - AI in the context of human history
5:47 - Deep learning celebrations, growth, and limitations
6:35 - Deep learning early key figures
9:29 - Limitations of deep learning
11:01 - Hopes for 2020: deep learning community and research
12:50 - Deep learning frameworks: TensorFlow and PyTorch
15:11 - Deep RL frameworks
16:13 - Hopes for 2020: deep learning and deep RL frameworks
17:53 - Natural language processing
19:42 - Megatron, XLNet, ALBERT
21:21 - Write with transformer examples
24:28 - GPT-2 release strategies report
26:25 - Multi-domain dialogue
27:13 - Commonsense reasoning
28:26 - Alexa prize and open-domain conversation
33:44 - Hopes for 2020: natural language processing
35:11 - Deep RL and self-play
35:30 - OpenAI Five and Dota 2
37:04 - DeepMind Quake III Arena
39:07 - DeepMind AlphaStar
41:09 - Pluribus: six-player no-limit Texas hold'em poker
43:13 - OpenAI Rubik's Cube
44:49 - Hopes for 2020: Deep RL and self-play
45:52 - Science of deep learning
46:01 - Lottery ticket hypothesis
47:29 - Disentangled representations
48:34 - Deep double descent
49:30 - Hopes for 2020: science of deep learning
50:56 - Autonomous vehicles and AI-assisted driving
51:50 - Waymo
52:42 - Tesla Autopilot
57:03 - Open question for Level 2 and Level 4 approaches
59:55 - Hopes for 2020: autonomous vehicles and AI-assisted driving
1:01:43 - Government, politics, policy
1:03:03 - Recommendation systems and policy
1:05:36 - Hopes for 2020: Politics, policy and recommendation systems
1:06:50 - Courses, Tutorials, Books
1:10:05 - General hopes for 2020
1:11:19 - Recipe for progress in AI
1:14:15 - Q&A: what made you interested in AI
1:15:21 - Q&A: Will machines ever be able to think and feel?
1:18:20 - Q&A: Is RL a good candidate for achieving AGI?
1:21:31 - Q&A: Are autonomous vehicles responsive to sound?
1:22:43 - Q&A: What does the future with AGI look like?
1:25:50 - Q&A: Will AGI systems become our masters?

Airicist
23rd September 2020, 16:24
Article "What’s the best way to prepare for machine learning math? (https://bdtechtalks.com/2020/09/23/machine-learning-mathematics)"

by Ben Dickson
September 23, 2020