MACHINE LEARNING

Research Projects and Publications

Multi-Loss Function Sequential Training


Author(s): Umberto Michelucci (TOELT GmbH)


NOT YET PUBLISHED

Abstract

In this paper, a new method for training neural networks is proposed that deals with overfitting without the need for regularisation terms. A classical way of dealing with overfitting is to add terms such as the $\ell_1$ or $\ell_2$ regularisation to the loss function; these drive the weights of the network towards zero and therefore effectively reduce its complexity. Another approach is dropout, which removes neurons from the network at random during training. The method proposed here, called Multi-Loss Function Sequential Training (MLST), uses three different loss functions applied sequentially in each epoch, each having a specific effect on the training of the network. It is shown that this method deals with overfitting automatically and, in addition, converges much faster than classical methods.
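The abstract does not specify which three loss functions MLST uses, so the sketch below illustrates only the sequential structure it describes: within each epoch, the network is updated once under each of three loss functions in turn. The MSE, L1, and Huber losses here are arbitrary placeholders, not the losses from the paper, and the model and data are toy examples.

```python
import torch
import torch.nn as nn

# Placeholder losses: the paper's actual three loss functions are not
# given in the abstract, so these are chosen for illustration only.
loss_fns = [nn.MSELoss(), nn.L1Loss(), nn.HuberLoss()]

# Toy regression model and data, just to make the loop runnable.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(64, 10)
y = torch.randn(64, 1)

for epoch in range(100):
    # One update per loss function, applied sequentially each epoch.
    for loss_fn in loss_fns:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

The key design point is that the three losses are not summed into a single objective (as with regularisation terms) but applied one after another, so each can exert its own distinct effect on the weights during every epoch.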