Information Back-propagation for Blind Separation
of Sources from Non-linear Mixture
Howard H. Yang, Shun-ichi Amari and Andrzej Cichocki
Brain Information Processing Group, FRP, RIKEN
Hirosawa 2-1, Wako-shi, Saitama 351-01, JAPAN
Speaker: Howard H. Yang
Abstract
We assume that the mixture model is non-linear and that the non-linear mixing function
can be accurately approximated by an invertible two-layer perceptron.
An invertible two-layer perceptron is used as a de-mixing system to extract the
independent sources from the non-linear mixture. The learning algorithms
for the de-mixing system are derived by two approaches: maximum entropy and
minimum mutual information. The algorithms derived from the two approaches
share a common structure. The learning equations for the first layer of the
de-mixing system differ from our previous learning equations for
the linear mixture model. The natural gradient descent
method is applied to maximize the entropy and to minimize the mutual information.
An information (entropy or mutual information) back-propagation
method is introduced to derive the learning equations for the first layer.
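To make the natural gradient idea concrete, the following is a minimal sketch of the well-known natural-gradient learning rule W ← W + η (I − φ(y)yᵀ) W for a single linear de-mixing matrix, applied to a toy linear mixture. This is only an illustration of the gradient method mentioned above: the choice of score function φ, the Laplacian sources, the learning rate, and all variable names are assumptions, and the paper's actual two-layer (non-linear) learning equations with information back-propagation are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(y):
    # Assumed score function, suitable for super-Gaussian sources.
    return np.tanh(y)

# Toy setup (an assumption): two independent Laplacian sources,
# linearly mixed by a random matrix A.
n, T = 2, 5000
s = rng.laplace(size=(n, T))
A = rng.normal(size=(n, n))
x = A @ s

# Natural-gradient descent on the de-mixing matrix W.
W = np.eye(n)
eta = 0.02
for _ in range(2000):
    y = W @ x
    # Batch-averaged natural gradient step: W += eta * (I - <phi(y) y^T>) W.
    W += eta * (np.eye(n) - (phi(y) @ y.T) / T) @ W

# After adaptation, P = W @ A should approach a scaled permutation matrix,
# i.e. each output recovers one source up to scale and ordering.
P = W @ A
```

The natural gradient (the ordinary gradient post-multiplied by WᵀW, yielding the (I − φ(y)yᵀ)W form) keeps the update equivariant with respect to the mixing matrix, which is why it is preferred over plain gradient descent in this setting.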