A multilayer perceptron is a feedforward neural network with one or more hidden layers. Typically, the network consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons. The input signals are propagated in a forward direction on a layer-by-layer basis.
Each layer in a multilayer neural network has its own specific function.
With one hidden layer, we can represent any continuous function of the input signals, and with two hidden layers even discontinuous functions can be represented.
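This layer-by-layer forward propagation can be sketched in a few lines. The layer sizes (3 inputs, 4 hidden neurons, 2 outputs), the random weight initialisation, and the use of the sigmoid activation are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

def sigmoid(x):
    # Standard sigmoid activation, squashing values into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
# Illustrative layer sizes: 3 input, 4 hidden, 2 output neurons.
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 2))   # hidden -> output weights
b2 = np.zeros(2)

def forward(x):
    # Signals propagate in the forward direction, layer by layer.
    h = sigmoid(x @ W1 + b1)   # hidden-layer activations
    y = sigmoid(h @ W2 + b2)   # output-layer activations
    return y

y = forward(np.array([0.5, -1.0, 2.0]))
```

Each layer transforms the signals it receives from the previous layer and passes the result forward; no signal flows backwards during this computation.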
 
A hidden layer ‘hides’ its desired output: neurons in the hidden layer cannot be observed through the input/output behaviour of the network.
Commercial ANNs incorporate three and sometimes four layers, including one or two hidden layers. Each layer can contain from 10 to 1000 neurons.
          
Experimental neural networks may have five or even six layers, including three or four hidden layers, and utilise millions of neurons, but most practical applications use only three layers, because each additional layer increases the computational burden exponentially.
        
The back-propagation algorithm is a supervised learning method for multilayer feedforward networks from the field of artificial neural networks.
          
Typically, a back-propagation network is a multilayer network that has three or four layers. The layers are fully connected; that is, every neuron in each layer is connected to every neuron in the adjacent forward layer.
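A minimal back-propagation training loop on the XOR problem can illustrate the idea: the output error is propagated backwards through the fully connected layers, and the weights are adjusted by gradient descent. The network size (2-4-1), learning rate, epoch count, and choice of XOR as the task are illustrative assumptions, not taken from the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR training set: a classic demonstration task for back-propagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

loss0 = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2)

lr = 0.5
for epoch in range(10000):
    # Forward pass: signals flow input -> hidden -> output.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    # Each delta is the error term times the sigmoid derivative.
    delta_out = (Y - T) * Y * (1 - Y)
    delta_hid = (delta_out @ W2.T) * H * (1 - H)

    # Gradient-descent weight updates.
    W2 -= lr * H.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid; b1 -= lr * delta_hid.sum(axis=0)

loss = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2)
```

Because the layers are fully connected, the hidden-layer error terms are computed by multiplying the output deltas through the transposed forward-layer weight matrix, which distributes each output error back across all hidden neurons.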
        