ANN Representation
ANNs are taught on AI courses because of their inspiration from brain studies and the fact that they are applied to an AI task, namely machine learning. However, I would argue that their real home is in statistics, because, as a representation scheme, they are just fancy mathematical functions.
Suppose you were asked to come up with a function which takes the following inputs and produces their associated outputs:
Input   Output
1       1
2       4
3       9
4       16
Presumably, the function you would learn would be f(x) = x². Suppose now that you had a set of values, rather than a single value, as input to your function:
Input      Output
[1,2,3]    1
[2,3,4]    5
[3,4,5]    11
[4,5,6]    19
Here, it is still possible to learn a function: for example, multiply the first and last elements together and subtract the middle one from the result. Note that the functions we are learning are becoming more complicated, but they are still just mathematical functions. ANNs simply take this further: the functions they learn are often so complicated that it is not easy to understand them at a global level. But they are still only functions which manipulate numbers, as the sketch below illustrates.
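To make this concrete, here is a minimal Python sketch which checks the two hand-written functions against the tables above (the names square and combine are just illustrative):

    def square(x):
        # First table: f(x) = x^2
        return x * x

    def combine(xs):
        # Second table: multiply the first and last elements together,
        # then subtract the middle one.
        return xs[0] * xs[-1] - xs[1]

    # Check both functions against the example tables.
    assert [square(x) for x in (1, 2, 3, 4)] == [1, 4, 9, 16]
    assert [combine(xs) for xs in ([1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6])] == [1, 5, 11, 19]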
Suppose now, for example, that the inputs to our function were arrays of pixel values taken from photographs of vehicles, and that the output of the function is either 1, 2 or 3, where 1 stands for a car, 2 stands for a bus and 3 stands for a tank.
In this case, the function which takes an array of integers representing pixel data and outputs 1, 2 or 3 will be fairly complicated, but it is doing the same kind of thing as the two simpler functions.
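As a sketch of the idea only, the following hypothetical Python classifier uses arbitrary (random, untrained) weights, so it is not the learned function itself; it merely shows that a vehicle classifier is still just a numeric function from an array of pixel values to the label 1, 2 or 3:

    import random

    # One (illustrative) weight vector per class: car, bus, tank.
    NUM_PIXELS = 100
    WEIGHTS = [[random.uniform(-1, 1) for _ in range(NUM_PIXELS)] for _ in range(3)]

    def classify(pixels):
        # Score each class with a weighted sum of the pixel values,
        # then return the label (1, 2 or 3) with the highest score.
        scores = [sum(w * p for w, p in zip(ws, pixels)) for ws in WEIGHTS]
        return scores.index(max(scores)) + 1

    print(classify([random.random() for _ in range(NUM_PIXELS)]))  # prints 1, 2 or 3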
Because the functions learned in order to, for instance, categorise photographs of vehicles as a car, bus or tank are so complicated, we say the ANN approach is a black box approach: while the function performs well at its job, we cannot look inside it to gain an understanding of how it works. This is a little unfair, as there have been some projects which have addressed the problem of translating learned neural networks into human-readable forms. In general, however, ANNs are used in cases where predictive accuracy is of greater importance than understanding the learned concept.
Artificial Neural Networks consist of a number of units, which are mini calculation devices. Each unit takes real-valued input from multiple other units and produces a single real-valued output. By real-valued input and output we mean real numbers, which can take any decimal value. The architecture of an ANN is as follows (a minimal sketch of the propagation appears after the list):
1. A set of input units which take in information about the example to be propagated through the network. By propagation, we mean that the information from the input will be passed through the network and an output produced. The set of input units forms what is known as the input layer.
2. A set of hidden units which take input from the input layer. The hidden units collectively form the hidden layer. For simplicity, we assume that each unit in the input layer is connected to each unit in the hidden layer; a weighted sum of the outputs from the input units forms the input to each hidden unit. Note that the number of hidden units is usually smaller than the number of input units.
3. A set of output units which, in categorisation tasks, dictate the category assigned to an example propagated through the network. The output units form the output layer. For simplicity, we again assume that each unit in the hidden layer is connected to each unit in the output layer; a weighted sum of the outputs from the hidden units forms the input to each output unit.
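The following minimal Python sketch shows one example being propagated through such a network, using made-up layer sizes and random, untrained weights; real networks also pass each weighted sum through an activation function, which is omitted here for simplicity:

    import random

    # Hypothetical layer sizes: 4 input units, 3 hidden units, 2 output units.
    NUM_INPUT, NUM_HIDDEN, NUM_OUTPUT = 4, 3, 2

    # One weight per (input unit, hidden unit) connection and per
    # (hidden unit, output unit) connection, chosen at random for illustration.
    input_to_hidden = [[random.uniform(-1, 1) for _ in range(NUM_INPUT)] for _ in range(NUM_HIDDEN)]
    hidden_to_output = [[random.uniform(-1, 1) for _ in range(NUM_HIDDEN)] for _ in range(NUM_OUTPUT)]

    def weighted_sum(weights, values):
        return sum(w * v for w, v in zip(weights, values))

    def propagate(example):
        # Input layer: the example's values are fed in directly.
        # Hidden layer: each hidden unit receives a weighted sum of the input units' outputs.
        hidden = [weighted_sum(ws, example) for ws in input_to_hidden]
        # Output layer: each output unit receives a weighted sum of the hidden units' outputs.
        outputs = [weighted_sum(ws, hidden) for ws in hidden_to_output]
        # In a categorisation task, the category can be read off the output units,
        # e.g. as the index of the unit with the largest value.
        return outputs.index(max(outputs)) + 1

    print(propagate([0.1, 0.9, 0.4, 0.7]))  # prints the assigned category, 1 or 2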