Results 1 to 9 of 9

Thread: Miscellaneous

  1. #1


    Neural networks, a simple explanation

    Published on Jan 14, 2013

    Oolution Technologies (a software company) presents a simple explanation of one type of Artificial Intelligence: Neural Networks. In particular, Neural Networks are about computers simulating biological neurons and the way they process information.

    To keep it as simple as possible, this short animated video does not show how Neural Networks learn, nor does it give an in-depth explanation of the math behind them. Instead, it is meant to give most people an easy-to-understand overview of the inner workings of Neural Networks and how they process inputs/information into outputs/results.
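    That input-to-output processing can be sketched as a single artificial neuron: each input is scaled by a weight, the results are summed with a bias, and the sum is squashed by an activation function. This is only a minimal illustration; the weights, inputs, and choice of sigmoid activation are my own, not from the video:

    ```cpp
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Logistic sigmoid: squashes any real number into the range (0, 1).
    double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // One neuron: weighted sum of inputs plus a bias, passed through the activation.
    double neuron_output(const std::vector<double>& inputs,
                         const std::vector<double>& weights, double bias) {
        double sum = bias;
        for (std::size_t i = 0; i < inputs.size(); ++i)
            sum += inputs[i] * weights[i];
        return sigmoid(sum);
    }

    int main() {
        // Illustrative numbers only: two inputs, two weights, one bias.
        // Weighted sum = 1.0*0.4 + 0.5*(-0.2) + 0.1 = 0.4
        double out = neuron_output({1.0, 0.5}, {0.4, -0.2}, 0.1);
        std::printf("neuron output = %.4f\n", out);  // sigmoid(0.4), about 0.5987
    }
    ```

    A whole network is just many of these neurons wired together, each layer's outputs becoming the next layer's inputs.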

    To see Artificial Intelligence in action, try for free the ANNI program (ANNI is an acronym for Advanced Neural Network Investing) or the Dynamic Debt Annihilator program (which provides an optimal payoff strategy to eliminate debt as quickly as possible). Both programs use various Artificial Intelligence technologies to provide more features and benefits to their users than other similar programs do.

  2. #2

    Neural Net in C++ Tutorial

    Update: For a newer neural net simulator optimized for image processing, see

    Update: For a beginner's introduction to the concepts and abstractions needed to understand how neural nets learn and work, and for tips on preparing training data for your neural net, see the new companion video "The Care and Training of Your Backpropagation Neural Net" at .

    Neural nets are fun to play with. Join me as we design and code a classic back-propagation neural net in C++, with adjustable gradient descent learning and adjustable momentum. Then train your net to do amazing and wonderful things. More at the blog:
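    As a rough sketch of the kind of net the tutorial builds, here is a tiny 2-2-1 back-propagation network with an adjustable gradient descent learning rate (`eta`) and momentum (`alpha`), trained on XOR. This is my own minimal illustration, not the tutorial's actual code; the initial weights, hyperparameters, and 20,000-epoch budget are arbitrary choices for the sketch:

    ```cpp
    #include <cmath>
    #include <cstdio>

    double sig(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // Network parameters: 2 inputs -> 2 hidden neurons -> 1 output.
    double w1[2][2] = {{0.5, -0.3}, {0.8, 0.2}}, b1[2] = {0.1, -0.1};
    double w2[2] = {0.4, -0.6}, b2 = 0.05;

    const double X[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double T[4]    = {0, 1, 1, 0};  // XOR truth table

    // Forward pass for one sample; fills h[] with hidden activations.
    double forward(int s, double h[2]) {
        for (int j = 0; j < 2; ++j)
            h[j] = sig(w1[j][0]*X[s][0] + w1[j][1]*X[s][1] + b1[j]);
        return sig(w2[0]*h[0] + w2[1]*h[1] + b2);
    }

    // Sum of squared errors over the four training samples.
    double total_error() {
        double e = 0, h[2];
        for (int s = 0; s < 4; ++s) { double o = forward(s, h); e += (o - T[s]) * (o - T[s]); }
        return e;
    }

    double initial_error, final_error;  // recorded for the sanity check below

    int main() {
        const double eta = 0.5;    // gradient descent learning rate
        const double alpha = 0.9;  // momentum: fraction of the previous delta kept
        double dw1[2][2] = {}, db1[2] = {}, dw2[2] = {}, db2 = 0;  // previous deltas

        initial_error = total_error();
        for (int epoch = 0; epoch < 20000; ++epoch) {
            for (int s = 0; s < 4; ++s) {
                double h[2], o = forward(s, h);
                double go = (o - T[s]) * o * (1 - o);  // output-layer delta
                for (int j = 0; j < 2; ++j) {
                    double gh = go * w2[j] * h[j] * (1 - h[j]);  // hidden-layer delta
                    // Each update blends the fresh gradient step with the previous delta.
                    dw2[j] = -eta * go * h[j] + alpha * dw2[j];  w2[j] += dw2[j];
                    for (int i = 0; i < 2; ++i) {
                        dw1[j][i] = -eta * gh * X[s][i] + alpha * dw1[j][i];
                        w1[j][i] += dw1[j][i];
                    }
                    db1[j] = -eta * gh + alpha * db1[j];  b1[j] += db1[j];
                }
                db2 = -eta * go + alpha * db2;  b2 += db2;
            }
        }
        final_error = total_error();

        double h[2];
        for (int s = 0; s < 4; ++s)
            std::printf("%g XOR %g -> %.3f (target %g)\n",
                        X[s][0], X[s][1], forward(s, h), T[s]);
    }
    ```

    The momentum term carries a fraction of the previous weight change into the current one, which smooths the descent and helps the net roll through small plateaus; that is the "adjustable momentum" the tutorial refers to.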

  3. #3

    Intro to Neural Networks

    Uploaded on Dec 14, 2009

    My final project for my Intro to Artificial Intelligence class was to describe one concept from Artificial Intelligence as simply as I could. I chose Neural Networks because they are one of the better-known AI concepts, but are still very poorly understood by most people.

  4. #4
    Article "A Robot Finds Its Way Using Artificial “GPS” Brain Cells"
    One robot has been given a simulated version of the brain cells that let animals build a mental map of their surroundings.

    by Will Knight
    October 19, 2015

  5. #5
    Article "The Neural Network That Remembers"
    With short-term memory, recurrent neural networks gain some amazing abilities

    by Zachary C. Lipton and Charles Elkan
    January 26, 2016

  6. #6
    Article "Explained: Neural networks"
    Ballyhooed artificial-intelligence technique known as “deep learning” revives 70-year-old idea.

    by Larry Hardesty
    April 14, 2017

  7. #7

    The science of learning: How to turn information into intelligence

    Published on Jun 12, 2017

    Cramming for a test and having a hard time understanding something? It might be best to go away and come back after a while. Your brain is constantly fluctuating between a "learning" mode and an "understanding" mode. When you're sitting there reading (and re-reading!) a textbook, unable to make sense of it, your brain is actually learning. It just takes the "decompressing" part of your brain to unpack it all. This is called neural chunk theory, and you can use it to your advantage by learning to study differently: small bursts of inactivity and breaks can make a big difference in how you memorize seemingly difficult information, combining bigger and bigger "chunks" of information until you understand the big picture. It's fascinating stuff.

    A very important idea that people are often unaware of is the fact that we have two completely different ways of seeing the world, two different neural networks we access when we’re perceiving things.

    So what this means is when we first sit down to learn something—for example, we’re going to study math. You sit down and you focus on it. So you focus and you’re activating task-positive networks. And then what happens is you’re working away and then you start to get frustrated. You can’t figure out what’s going on. What’s happening is you’re focusing and you’re using one small area of your brain to analyze the material. But it isn’t the right circuit to actually understand and comprehend the material. So you get frustrated. You finally give up, and when you give up and get your attention off it, it turns out that you activate a completely different set of neural circuits. That’s the default mode network and the related neural circuits.

    So what happens is you stop thinking about it, you relax, you go off for a walk, you take a shower. You’re doing something different. And in the background this default mode network is doing some sort of neural processing on the side. And then what happens is you come back and voila, suddenly the information makes sense. And, in fact, it can suddenly seem so easy that you can’t figure out why you didn’t understand it before. So learning often involves going back and forth between these two different neural modes – focus mode and what I often call diffuse mode, which involves resting states. You can only be in one mode at a time.

    So you might wonder, is there a certain task that is more appropriate for focus mode or diffuse mode? The reality is that learning involves going back and forth between these two modes. You often have to focus at first in order to sort of load that information into your brain, and then you do something different, get your attention off it, and that’s when that background processing occurs. And this happens no matter what you’re learning—whether it’s math and science, a new language, music, a dance, or even learning to back up a car.

    And think about it this way. Here’s a very important related idea, and that is that when you’re learning something new you want to create a well-practiced neural pattern that you can easily draw to mind when you need it. This is called a neural chunk, and chunking theory is incredibly important in learning. So, for example, if you are trying to learn to back up a car, when you first begin it’s crazy, right? You’re looking all around. Do you look in this mirror or this mirror, or do you look behind you? What do you do? It’s this crazy set of information. But after you’ve practiced a while you develop this very nice sort of pattern that’s well practiced. So all you have to do is think, "I’m going to back up a car." Instantly that pattern comes to mind and you’re able to back up a car. Not only are you doing that, but you’re maybe talking to friends, listening to the radio. It’s that well-practiced neural chunk that makes it seem easy. So it’s important in any kind of learning to create these well-practiced patterns. And the bigger the library of these patterns—the deeper and broader they are as neural patterns in your mind—the more expertise you have in that topic.

    And chunking was first sort of thought of, or explored, by Nobel Prize winner Herbert Simon, who found that if you’re a chess master, the higher your ranking in chess, the more patterns of chess you have memorized. So you could access more and more patterns of chess.

  8. #8
    Article "Peering into neural networks"
    New technique helps elucidate the inner workings of neural networks trained on visual data.

    by Larry Hardesty
    June 29, 2017

  9. #9
    Article "Bringing neural networks to cellphones"
    Method for modeling neural networks’ power consumption could help make the systems portable.

    by MIT News Office
    July 18, 2017
