Miscellaneous



Airicist
29th January 2013, 14:42
https://youtu.be/gcK_5x2KsLA

Neural networks, a simple explanation

Published on Jan 14, 2013


Oolution Technologies (a software company) presents a simple explanation of one type of Artificial Intelligence: Neural Networks. In particular, Neural Networks are about computers simulating biological neurons and the way they process information.

To keep it as simple as possible, this short animated video does not show how Neural Networks learn, nor does it give an in-depth explanation of the math behind Neural Networks. Instead, it is meant to give most people an easy-to-understand picture of the inner workings of Neural Networks and how they turn inputs/information into outputs/results.
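
The core idea the video animates fits in a few lines of code: a single artificial neuron weights each input, sums the results, and squashes the total through an activation function. Here is a minimal C++ sketch, assuming a sigmoid activation and toy values chosen purely for illustration (none of this is the video's own code):

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// One artificial neuron: output = sigmoid(sum(w[i] * x[i]) + bias).
double neuron(const std::vector<double>& inputs,
              const std::vector<double>& weights, double bias) {
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += weights[i] * inputs[i];
    return 1.0 / (1.0 + std::exp(-sum)); // sigmoid squashes the sum to (0, 1)
}

int main() {
    std::vector<double> x = {0.5, 0.8};     // hypothetical inputs
    std::vector<double> w = {0.4, -0.6};    // hypothetical learned weights
    std::cout << neuron(x, w, 0.1) << "\n"; // one input-to-output step
}

A full network is just layers of such neurons, each layer's outputs feeding the next layer's inputs.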

To see Artificial Intelligence in action, please visit http://oolutiontech.com/Products.aspx?ref=youtube001 to try for free the ANNI program (ANNI is an acronym for Advanced Neural Network Investing) or the Dynamic Debt Annihilator program (which provides an optimal debt-payoff strategy to eliminate debt the fastest way possible). Both of these programs use various Artificial Intelligence technologies to provide a higher level of features and benefits to their users than other similar programs provide.

Airicist
20th March 2015, 22:36
http://vimeo.com/19569529

Neural Net in C++ Tutorial
February 4, 2011


Update: For a newer neural net simulator optimized for image processing, see neural2d.net.

Update: For a beginner's introduction to the concepts and abstractions needed to understand how neural nets learn and work, and for tips for preparing training data for your neural net, see the new companion video "The Care and Training of Your Backpropagation Neural Net" at vimeo.com/technotes/neural-net-care-and-training.

Neural nets are fun to play with. Join me as we design and code a classic back-propagation neural net in C++, with adjustable gradient descent learning and adjustable momentum. Then train your net to do amazing and wonderful things. More at the blog: millermattson.com/dave
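
As a taste of what "adjustable gradient descent learning and adjustable momentum" means in practice, here is a rough C++ sketch of a per-connection weight update with momentum. The names eta (learning rate) and alpha (momentum) and the sign convention are assumptions for illustration, not the video's actual code:

#include <iostream>

// One connection between two neurons.
struct Connection {
    double weight = 0.0;
    double deltaWeight = 0.0; // previous step, reused for momentum
};

// Assumes 'gradient' already points in the error-reducing direction.
void updateWeight(Connection& c, double gradient,
                  double eta = 0.15, double alpha = 0.5) {
    // New step = learning-rate-scaled gradient plus a fraction of the
    // previous step; the momentum term damps oscillation and keeps the
    // weight moving through flat regions of the error surface.
    double newDelta = eta * gradient + alpha * c.deltaWeight;
    c.weight += newDelta;
    c.deltaWeight = newDelta;
}

int main() {
    Connection c;
    double gradients[] = {0.2, 0.18, 0.15}; // pretend backprop produced these
    for (double g : gradients)
        updateWeight(c, g);
    std::cout << c.weight << "\n"; // the weight has moved, helped by momentum
}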

Airicist
15th July 2015, 20:38
https://youtu.be/DG5-UyRBQD4

Intro to Neural Networks

Uploaded on Dec 14, 2009


My final project for my Intro to Artificial Intelligence class was to describe, as simply as I could, one concept from Artificial Intelligence. I chose Neural Networks because they are one of the better-known AI concepts, but are still very poorly understood by most people.

Airicist
24th October 2015, 18:48
Article "A Robot Finds Its Way Using Artificial “GPS” Brain Cells (https://www.technologyreview.com/2015/10/19/10343/a-robot-finds-its-way-using-artificial-gps-brain-cells)"
One robot has been given a simulated version of the brain cells that let animals build a mental map of their surroundings.

by Will Knight
October 19, 2015

Airicist
27th January 2016, 03:49
Article "The Neural Network That Remembers (https://spectrum.ieee.org/computing/software/the-neural-network-that-remembers)"
With short-term memory, recurrent neural networks gain some amazing abilities

by Zachary C. Lipton and Charles Elkan
January 26, 2016
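
The "short-term memory" in the subtitle comes from recurrence: at each time step the hidden state is computed from the current input and the previous hidden state, so earlier inputs keep echoing through the sequence. A minimal sketch, with scalar toy weights that are assumptions rather than anything from the article:

#include <cmath>
#include <iostream>
#include <vector>

// Minimal recurrent update: h = tanh(wx * x + wh * h_prev).
int main() {
    double wx = 0.8, wh = 0.5, h = 0.0; // toy weights, fixed (no training)
    std::vector<double> sequence = {1.0, 0.0, 0.0, 0.0};
    for (double x : sequence) {
        h = std::tanh(wx * x + wh * h); // state carries a trace of the past
        std::cout << h << "\n";         // the first input decays but lingers
    }
}

Each later output still depends on the first input through the recycled state; that lingering trace is the network's short-term memory.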

Airicist
22nd April 2017, 23:37
Article "Explained: Neural networks (http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414)"
Ballyhooed artificial-intelligence technique known as “deep learning” revives 70-year-old idea.

by Larry Hardesty
April 14, 2017

Airicist
12th June 2017, 17:14
https://youtu.be/1FvYJhpNvHY

The science of learning: How to turn information into intelligence

Published on Jun 12, 2017


Cramming for a test and having a hard time understanding something? It might be best to go away and come back after a while. Your brain is constantly fluctuating between a "learning" mode and an "understanding" mode. When you're sitting there reading (and re-reading!) a textbook, unable to make sense of it, your brain is actually learning. It just takes the decompressing part of your brain for it all to be unpacked. It's called the neural chunk theory, and you can learn to use it to your advantage by studying differently; small bursts of inactivity and breaks can really make a big difference in how you memorize seemingly difficult information, by combining bigger and bigger "chunks" of information until you understand the big picture. It's fascinating stuff.

A very important idea that people are often unaware of is the fact that we have two completely different ways of seeing the world, two different neural networks we access when we’re perceiving things.

So what this means is when we first sit down to learn something—for example, we're going to study math. You sit down and you focus on it. So you focus and you're activating task-positive networks. And then what happens is you're working away and then you start to get frustrated. You can't figure out what's going on. What's happening is you're focusing and you're using one small area of your brain to analyze the material. But it isn't the right circuit to actually understand and comprehend the material. So you get frustrated. You finally give up, and then when you give up and get your attention off it, it turns out that you activate a completely different set of neural circuits. That's the default mode network and the related neural circuits. So what happens is you stop thinking about it, you relax, you go off for a walk, you take a shower. You're doing something different. And in the background this default mode network is doing some sort of neural processing on the side. And then what happens is you come back and voilà, suddenly the information makes sense. And, in fact, it can suddenly seem so easy that you can't figure out why you didn't understand it before. So learning often involves going back and forth between these two different neural modes – focus mode and what I often call diffuse mode, which involves neural resting states. You can only be in one mode at a time.

So you might wonder: is there a certain task that is more appropriate for focus mode or diffuse mode? The reality is that learning involves going back and forth between these two modes. You often have to focus at first in order to sort of load that information into your brain, and then you do something different, get your attention off it, and that's when that background processing occurs. And this happens no matter what you're learning, whether you're learning something in math and science, a new language, music, a dance. Even learning to back up a car. And think about it this way. Here's a very important related idea, and that is that when you're learning something new you want to create a well-practiced neural pattern that you can easily draw to mind when you need it. This is called a neural chunk, and chunking theory is incredibly important in learning. So, for example, if you are trying to learn to back up a car, when you first begin it's crazy, right? You're looking all around. Do you look in this mirror or this mirror, or do you look behind you? What do you do? It's this crazy set of information. But after you've practiced a while you develop this very nice sort of pattern that's well practiced. So all you have to do is think, "I'm going to back up a car," and instantly that pattern comes to mind and you're able to back up a car. Not only are you doing that, but you're maybe talking to friends, listening to the radio. It's that well-practiced neural chunk that makes it seem easy. So it's important in any kind of learning to create these well-practiced patterns. And the bigger the library of these patterns – the more well practiced, deeper, and broader they are as neural patterns in your mind – the more expertise you have in that topic.

And chunking was first sort of thought of or explored by Nobel Prize winner [Herbert] Simon, who found that if you're a chess master, the higher your ranking in chess, the more patterns of chess you have memorized. So you could access more and more patterns of chess.

Airicist
8th July 2017, 00:16
Article "Peering into neural networks (http://news.mit.edu/2017/inner-workings-neural-networks-visual-data-0630)"
New technique helps elucidate the inner workings of neural networks trained on visual data.

by Larry Hardesty
June 29, 2017

Airicist
21st July 2017, 09:19
Article "Bringing neural networks to cellphones (http://news.mit.edu/2017/bringing-neural-networks-cellphones-0718)"
Method for modeling neural networks’ power consumption could help make the systems portable.

by MIT News Office
July 18, 2017

Airicist
30th August 2017, 16:45
https://youtu.be/RmUGVtZEgd4

An intro to artificial neural networks by Tawn Kramer

Published on Aug 30, 2017

Airicist
15th December 2017, 07:17
Article "Reading a neural network’s mind (http://news.mit.edu/2017/reading-neural-network-mind-1211)"
Technique illuminates the inner workings of artificial-intelligence systems that process language.

by Larry Hardesty
December 10, 2017

Airicist
17th December 2019, 18:21
https://youtu.be/7OxvXv5Bvlg

Dec 16, 2019


This video is about how Artificial Neural Networks learn. Artificial Neural Networks, or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules.
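
To make "learning from examples, without task-specific rules" concrete, here is an illustrative C++ sketch (not from the video): a single perceptron is shown only labeled examples of logical AND and nudges its weights whenever it answers wrongly; nothing AND-specific is ever coded in.

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    double w0 = 0, w1 = 0, bias = 0;
    const double lr = 0.1; // learning rate
    std::vector<std::vector<double>> X = {{0,0},{0,1},{1,0},{1,1}};
    std::vector<double> y = {0, 0, 0, 1}; // labels: logical AND

    // Repeatedly show the examples; each error nudges the weights.
    for (int epoch = 0; epoch < 20; ++epoch) {
        for (std::size_t i = 0; i < X.size(); ++i) {
            double out = (w0 * X[i][0] + w1 * X[i][1] + bias) > 0 ? 1.0 : 0.0;
            double err = y[i] - out; // the only teaching signal
            w0   += lr * err * X[i][0];
            w1   += lr * err * X[i][1];
            bias += lr * err;
        }
    }
    for (std::size_t i = 0; i < X.size(); ++i)
        std::cout << X[i][0] << " AND " << X[i][1] << " -> "
                  << ((w0 * X[i][0] + w1 * X[i][1] + bias) > 0 ? 1 : 0) << "\n";
}

After a few epochs the weights settle on a rule that reproduces AND, even though only examples, never the rule itself, were provided.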

Airicist
27th May 2020, 14:43
Article "Everything you need to know about artificial neural networks (https://thenextweb.com/neural-basics/2020/05/27/everything-you-need-to-know-about-artificial-neural-networks)"

by Ben Dickson
May 27, 2020