CrossSplit: Mitigating Label Noise Memorization through Data Splitting

1Samsung Advanced Institute of Technology (SAIT), Suwon, South Korea 2SAIT AI Lab, Montreal, Canada

Abstract

We approach the problem of improving the robustness of deep learning algorithms in the presence of label noise. Building upon existing label correction and co-teaching methods, we propose a novel training procedure, called CrossSplit, that mitigates the memorization of noisy labels by using a pair of neural networks trained on two disjoint parts of the labeled dataset. CrossSplit combines two main ingredients: (i) Cross-split label correction. The idea is that, since the model trained on one part of the data cannot memorize example-label pairs from the other part, the training labels presented to each network can be smoothly adjusted using the predictions of its peer network; (ii) Cross-split semi-supervised training. A network trained on one part of the data also uses the unlabeled inputs of the other part. Extensive experiments on the CIFAR-10, CIFAR-100, Tiny-ImageNet and mini-WebVision datasets demonstrate that our method outperforms the current state-of-the-art across a wide range of noise ratios.
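To illustrate the cross-split label correction idea, the sketch below blends a (possibly noisy) one-hot label with the peer network's predicted class distribution. This is a minimal, hypothetical sketch of the general idea only, not the paper's exact weighting scheme; the function name and the fixed blending weight `alpha` are illustrative assumptions.

```python
def soft_corrected_labels(one_hot, peer_probs, alpha=0.8):
    """Convex combination of the given (possibly noisy) one-hot label
    and the peer network's predicted distribution.

    Since the peer network was trained on the *other* data split, it
    cannot have memorized this example's (possibly wrong) label, so its
    prediction is a useful corrective signal.

    Note: alpha is a fixed blending weight here for illustration; the
    paper's actual per-sample weighting differs.
    """
    return [(1.0 - alpha) * y + alpha * p for y, p in zip(one_hot, peer_probs)]

# Example: an example labeled as class 0 whose peer prediction
# puts most of its mass on class 2.
one_hot = [1.0, 0.0, 0.0]
peer = [0.1, 0.1, 0.8]
corrected = soft_corrected_labels(one_hot, peer, alpha=0.8)
```

With a high `alpha`, the corrected target distribution shifts toward the peer's prediction while still retaining some mass on the original label, which softens (rather than hard-flips) potentially noisy labels during training.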

Results

Citation

@InProceedings{Jihye_2023_ICML,
  title={CrossSplit: Mitigating Label Noise Memorization through Data Splitting},
  author={Jihye Kim and Aristide Baratin and Yan Zhang and Simon Lacoste-Julien},
  booktitle={Proceedings of the 40th International Conference on Machine Learning (ICML)},
  year={2023},
  pages={16377--16392}
}