Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry

Abstract

The Lottery Ticket Hypothesis (LTH) suggests that there exists a sparse LTH mask and corresponding weights that achieve the same generalization performance as the dense model while using significantly fewer parameters. LTH achieves this by iteratively sparsifying and re-training within the pruned solution basin. However, finding an LTH solution is computationally expensive, and an LTH sparsity mask does not generalize to other random weight initializations. Recent work suggests that neural networks trained from random initializations find solutions within the same basin modulo permutations, and proposes a method to align trained models within the same loss basin. We hypothesize that this misalignment of basins is the reason LTH masks do not generalize to new random initializations, and we propose permuting the LTH mask to align with the new optimization basin when performing sparse training from a different random initialization. We empirically show a significant increase in generalization when sparse training from random initialization with the permuted mask compared to using the non-permuted LTH mask, on multiple datasets (CIFAR-10/100 and ImageNet) and models (VGG11 and ResNet20/50).
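As an illustrative sketch of the core idea (not code from the thesis), permuting a lottery ticket mask amounts to re-indexing its rows and columns with the same neuron permutations that a weight-matching alignment procedure would apply to the layer's weights. The function name, shapes, and permutations below are hypothetical and chosen only for illustration.

```python
import numpy as np

def permute_layer_mask(mask, perm_out, perm_in):
    """Apply neuron permutations to a binary pruning mask for one layer.

    mask:     (n_out, n_in) binary LTH mask for this layer's weight matrix
    perm_out: permutation over this layer's output units
    perm_in:  permutation over the previous layer's output units
    """
    return mask[perm_out][:, perm_in]

# Hypothetical example: a 4x3 mask and permutations found by weight matching.
rng = np.random.default_rng(0)
mask = (rng.random((4, 3)) > 0.5).astype(np.float32)
perm_out = np.array([2, 0, 3, 1])  # reordering of the 4 output units
perm_in = np.array([1, 2, 0])      # reordering of the 3 input units

aligned_mask = permute_layer_mask(mask, perm_out, perm_in)
# The permuted mask keeps the same sparsity level but is re-indexed to
# match the neuron ordering of the new random initialization's basin.
assert aligned_mask.sum() == mask.sum()
```

In practice the permutations would come from aligning the original dense (or LTH) solution with a model trained from the new random initialization, and the re-indexed mask is then used for sparse training from that initialization.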

Citation

Jain, R. (2025). Sparse training from random initialization: aligning lottery ticket masks using weight symmetry (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.