
Thread: Miscellaneous

  1. #1

    Miscellaneous



    Machine Learning and Intelligence in Our Midst

    Published on Mar 28, 2012

    The creation of intelligent computing systems that perceive, learn, and reason has been a long-standing and visionary goal in computer science. Over the last 20 years, technical and infrastructural developments have come together to create a nurturing environment for developing and fielding applications of machine learning and reasoning--and for harnessing machine intelligence to provide value to businesses and to people in the course of their daily lives. Key advances include jumps in the availability of rich streams of data, precipitous drops in the cost of storing and retrieving large amounts of data, increases in computing power and memory, and leaps in the prowess of methods for performing machine learning and reasoning. The combination of these advances has created an inflection point in our ability to harness data to generate insights and to guide decision-making. This talk will present recent efforts on learning and inference, highlighting key ideas in the context of applications, including advances in transportation and health care, and the development of new types of applications and services. Opportunities for creating systems with new kinds of competencies by weaving together multiple data sources and models will also be discussed.

  2. #2


    DiGORO - Robot with the Ability to Learn

    Uploaded on Jan 13, 2010

  3. #3


    Georgia Tech LAGR Robot Learning

    Published on Apr 16, 2013

    Tucker Balch, Richard Roberts

  4. #4


    GPU-based Brain Research Helps Japanese Robot Hit it Out of the Park

    Published on Apr 26, 2013

    The human cerebellum is a mysterious thing. Responsible for motor control, it's the reason why we can walk, run, or learn to hit a baseball without having to consciously think through the mechanics of what we're doing. These are some of the tasks that robots -- with their 'electronic' brains -- struggle with most.
    Now a pair of researchers in Japan has used GPUs and the CUDA parallel programming model to create a 100,000 neuron simulation of the human cerebellum, one of the largest simulations of its kind in the world. And they've put their model to the test by applying this knowledge to teach a robot to learn to hit a ball.
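    The researchers' cerebellum model is far more detailed than anything shown here, but the core idea of simulating a large neuron population in parallel can be sketched with a simple leaky integrate-and-fire update. Everything below (the neuron model, the constants, the random input) is an illustrative assumption, not the actual simulation:

    ```python
    import numpy as np

    def lif_step(v, input_current, dt=1e-3, tau=0.02, v_rest=-65.0,
                 v_reset=-70.0, v_thresh=-50.0, r_m=10.0):
        """One Euler step for a population of leaky integrate-and-fire neurons.

        v: membrane potentials in mV, one entry per neuron.
        Returns the updated potentials and a boolean spike mask.
        """
        dv = (-(v - v_rest) + r_m * input_current) * (dt / tau)
        v = v + dv
        spikes = v >= v_thresh      # which neurons crossed threshold
        v[spikes] = v_reset         # reset the neurons that fired
        return v, spikes

    rng = np.random.default_rng(0)
    n = 100_000                     # population size from the article
    v = np.full(n, -65.0)
    for _ in range(10):
        v, spikes = lif_step(v, input_current=rng.uniform(0.0, 3.0, n))
    ```

    The same vectorized update maps naturally onto a GPU, which is what makes CUDA attractive for simulations at this scale: every neuron's state advances independently in the same step.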

  5. #5

  6. #6


    AI is Learning to See the Forest in Spite of the Trees, with Stefan Weitz

    Published on Mar 30, 2015

    Stefan Weitz, Microsoft's Director of Search, explains that the future of machine learning consists of teaching artificial intelligence to identify patterns. This will allow, for instance, a search engine to critically analyze your search queries rather than simply scouring the web's index of results.

    Transcript: So machine learning. What is machine learning? Machine learning really is teaching machines how to find patterns in large amounts of data. The way it works is you’ve got a black box. Think of this as just this set of algorithms in the center that can turn a mass of unstructured data or a mass of confusing data into something which is less confusing and more structured. So what happens is you basically tell the machine I’m going to give you all this input on this side and I’m going to tell you what the input should look like post processing on this side. So you kind of give it the hint. And what it does is the machine says okay, well how do I get from point A to point B. And it builds, in essence, a pattern to say oh, okay, when I see all this data to get to this structured set of data I have to do all these computations in the middle to move it from unstructured or messy to structured and beautiful. And that can apply not just to data. It can apply to anything. It can apply to faces. It can apply to types of cats. Whatever it might be you’re basically saying hey machine, this is a cat.

    And it says okay, when I see two eyes and a little pink nose and some whiskers – it doesn’t actually say this but that’s what it’s thinking – then that is a cat. So you teach machines in essence to recognize patterns in data, in pictures and whatever it might be. So that’s machine learning basically. You’re in essence helping machines find patterns in massive amounts of data. How does it apply to things like natural language? Well the beauty of machine learning, the beauty of things called deep neural networks allow in essence machines to not think like humans, that’s too much of a stretch. But certainly operate in the same way that we operate. The same way that, for example, when you’re a child you might see a ball on the floor. You don’t know what it’s called. You don’t know how it’s constructed or anything else but over time people as you’re walking around the house your mom or your dad will say look at that ball or go get the ball. And so what’s happening is that over time you’re getting reinforced that when you see an object on the floor that is stationary and has a certain circumference and looks a certain way you begin to understand ah, that’s a ball because you’ve heard it over and over again. And machine learning and natural language processing operates much the same way except instead of having your mom or dad point at the thing and say that’s a ball three or four times, machines now have trillions of observations about the real world so they can learn these things much, much faster. So for NLP it’s critical because our ability to interact with search really is predicated on the system’s understanding of what it is we are asking.
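    The input-to-expected-output loop Weitz describes (show the machine labeled examples, let it adjust until its outputs match) can be sketched as a toy perceptron. The "cat" features and data below are invented for illustration:

    ```python
    import numpy as np

    # Made-up binary features: [two_eyes, pink_nose, whiskers]
    X = np.array([
        [1, 1, 1],   # a cat
        [1, 0, 0],   # not a cat
        [0, 0, 1],   # not a cat
        [1, 1, 0],   # not a cat
    ])
    y = [1, 0, 0, 0]  # the labels we tell the machine to match

    w = np.zeros(3)
    b = 0.0
    for _ in range(20):                     # repeated reinforcement, like the ball example
        for x, target in zip(X, y):
            pred = 1 if x @ w + b > 0 else 0
            err = target - pred
            w += err * x                    # nudge the weights toward the labels
            b += err

    preds = [1 if x @ w + b > 0 else 0 for x in X]
    ```

    After training, `preds` matches the labels: the machine has found a pattern ("all three features present means cat") without ever being given that rule explicitly.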

    Traditionally again machines will return back results or web pages based on the keywords that we put into the box. But if I were to ask a search engine why is there no jaguar in this room today we would get back five and a half million results for that question, none of which make any sense of course. With natural language suddenly because the search systems understand what a jaguar is...[TRANSCRIPT TRUNCATED]

  7. #7


    Teaching Welding Robots by Demonstration -- Kinetiq Teaching from Robotiq

    Published on Oct 22, 2013

    Kinetiq Teaching makes it easy to implement robotic welding through simplified teaching. It reduces set-up times by allowing operators to guide the robot by hand to the desired weld positions. An icon-based menu on the teach pendant's color touch screen lets the operator define the task. Programming time is greatly reduced by the more intuitive manual positioning, and the graphical user interface allows robot programming with minimal training.
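    The record-and-replay idea behind hand-guided teaching is straightforward: each time the operator positions the arm and confirms on the touch screen, a waypoint is stored; the program is the ordered list of waypoints. The class and method names below are invented for illustration (Kinetiq's actual software is not a public API):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Waypoint:
        x: float
        y: float
        z: float
        weld: bool = False       # weld while moving toward this point?

    @dataclass
    class WeldProgram:
        points: list = field(default_factory=list)

        def teach(self, x, y, z, weld=False):
            """Called when the operator hand-guides the arm to a pose
            and confirms it on the touch screen."""
            self.points.append(Waypoint(x, y, z, weld))

        def replay(self):
            """Yield (pose, weld_on) commands in the taught order."""
            for p in self.points:
                yield (p.x, p.y, p.z), p.weld

    prog = WeldProgram()
    prog.teach(0.0, 0.0, 0.1)               # approach point, torch off
    prog.teach(0.0, 0.0, 0.0, weld=True)    # weld along the seam
    prog.teach(0.2, 0.0, 0.0, weld=True)
    commands = list(prog.replay())
    ```

    The operator never writes a line of code: the "program" is built entirely from demonstrated poses plus touch-screen annotations.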

  8. #8


    Kinetiq teaching vs Teach Pendant Programming

    Published on Nov 19, 2013

    Kinetiq Teaching is a new technology for quickly and easily tasking welding robots without in-depth programming knowledge. Welders no longer need programming expertise to move the welding robot and teach it a welding task. They simply put their hands on the robot welder, move it to the desired position, and then add welding tasks by selecting options on a smartphone-style touch-screen interface. With Kinetiq Teaching, robotic welding is moving from complex lines of programming to intuitive, user-friendly teaching.

  9. #9


    BRETT the Robot learns to put things together on his own

    Published on May 21, 2015

    UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence. In their experiments, the PR2 robot, nicknamed BRETT for Berkeley Robot for the Elimination of Tedious Tasks, used “deep learning” techniques to complete various tasks without pre-programmed details about its surroundings.
    Video footage courtesy of UC Berkeley Robot Learning Lab, edited by Phil Ebiner
    Full Story: "New ‘deep learning’ technique enables robot mastery of skills via trial and error"

    by Sarah Yang
    May 21, 2015


    BRETT the Robot assembles toy airplane part

    Published on May 20, 2015



    The robot which learns like a child - BBC Click

    Published on Oct 21, 2015

    A robot which learns like a child - by trial and error - has been developed by researchers at the University of California, Berkeley.
    Brett (Berkeley Robot for the Elimination of Tedious Tasks) used its deep learning algorithm to perform various tasks from putting hangers on a rack to screwing a cap on a bottle of water.
    The researchers believe that if a robot can learn autonomously, it will be more successful at completing tasks in the real world.
    BBC Click's Talia Franco spoke to Sergey Levine to find out more.
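    BRETT's actual method is deep reinforcement learning over raw sensor data, which is far beyond a few lines, but the trial-and-error principle it rests on (try actions, score the outcomes, increasingly prefer what worked) can be sketched with a minimal epsilon-greedy learner. The task and the reward values below are invented for illustration:

    ```python
    import random

    def trial_and_error(reward_fn, actions, trials=500, epsilon=0.2, seed=0):
        """Estimate each action's average reward by repeated trials and
        return the action that scored best (epsilon-greedy exploration)."""
        rng = random.Random(seed)
        totals = {a: 0.0 for a in actions}
        counts = {a: 0 for a in actions}
        for _ in range(trials):
            if rng.random() < epsilon or not any(counts.values()):
                a = rng.choice(actions)   # explore: try something at random
            else:                         # exploit: pick the current best
                a = max(actions, key=lambda a: totals[a] / max(counts[a], 1))
            totals[a] += reward_fn(a)
            counts[a] += 1
        return max(actions, key=lambda a: totals[a] / max(counts[a], 1))

    # Hypothetical task: choose a grip force; 0.6 succeeds most often.
    rewards = {0.2: 0.1, 0.6: 0.9, 1.0: 0.4}
    best = trial_and_error(rewards.get, [0.2, 0.6, 1.0])
    ```

    No description of the environment is given to the learner in advance, which is the point the researchers make about BRETT: competence emerges from experience rather than from pre-programmed detail.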

  10. #10


    RI Seminar: Louis-Philippe Morency : multimodal machine learning

    Streamed live on Oct 9, 2015

    Multimodal Machine Learning: Modeling Human Communication Dynamics

    Louis-Philippe Morency
    Assistant Professor, LTI

    October 9, 2015

    Abstract
    Human face-to-face communication is a little like a dance, in that participants continuously adjust their behaviors based on verbal and nonverbal cues from the social context. Today's computers and interactive devices still lack many of these human-like abilities to hold fluid and natural interactions. Leveraging recent advances in machine learning, audio-visual signal processing and computational linguistics, my research focuses on creating computational technologies able to analyze, recognize and predict subtle human communicative behaviors in social context. I formalize this new research endeavor with a Human Communication Dynamics framework, addressing four key computational challenges: behavioral dynamics, multimodal dynamics, interpersonal dynamics and societal dynamics. Central to this research effort is the introduction of new probabilistic models able to learn the temporal and fine-grained latent dependencies across behaviors, modalities and interlocutors. In this talk, I will present some of our recent achievements modeling multiple aspects of human communication dynamics, motivated by applications in healthcare (depression, PTSD, suicide, autism), education (learning analytics), business (negotiation, interpersonal skills) and social multimedia (opinion mining, social influence).

    Additional Information
    Host: Kris Kitani
    Appointments: Stephanie Matvey
    Speaker Biography
    Louis-Philippe Morency is an Assistant Professor in the Language Technology Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He received his Ph.D. and Master's degrees from the MIT Computer Science and Artificial Intelligence Laboratory. In 2008, Dr. Morency was selected as one of "AI's 10 to Watch" by IEEE Intelligent Systems. He has received 7 best paper awards at ACM- and IEEE-sponsored conferences for his work on context-based gesture recognition, multimodal probabilistic fusion and computational models of human communication dynamics. For the past three years, Dr. Morency has been leading a DARPA-funded multi-institution effort called SimSensei, which was recently named one of the year's top ten most promising digital initiatives by the NetExplo Forum, in partnership with UNESCO.
