Transfer learning is the practice of reusing pre-trained weights from one model to improve the predictions of another. In the chemical domain, labeled data is scarce, so self-supervised methods on unlabeled data are commonly used to obtain initial weights. When transferring these weights to a fine-tuning model, several questions need to be investigated: Should all layers be frozen, or only some of them? Does the answer depend on the number of layers or on the network architecture? In this project you will investigate the factors that allow for the optimal transfer of weights from a pre-trained model to a fine-tuning model.
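
To make the layer-freezing question concrete, here is a minimal PyTorch sketch of one strategy a student might compare: freeze the first k layers of a pre-trained encoder and fine-tune the remaining layers together with a fresh prediction head. The encoder architecture, the checkpoint name, and the parameter k are all hypothetical placeholders, not part of the project specification.

```python
import torch
import torch.nn as nn

# Hypothetical pre-trained encoder; in practice its weights would come from
# a self-supervised pre-training run on unlabeled molecules.
encoder = nn.Sequential(
    nn.Linear(2048, 512), nn.ReLU(),  # layer 0: e.g. fingerprint input
    nn.Linear(512, 256), nn.ReLU(),   # layer 1
    nn.Linear(256, 128), nn.ReLU(),   # layer 2
)
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical checkpoint

# Fine-tuning model: the pre-trained encoder plus a new task-specific head.
model = nn.Sequential(encoder, nn.Linear(128, 1))

# Freezing strategy under investigation: freeze the first k Linear layers
# of the encoder and train everything else.
k = 2  # how many leading layers to freeze; one of the factors to study
frozen = 0
for module in encoder:
    if isinstance(module, nn.Linear) and frozen < k:
        for p in module.parameters():
            p.requires_grad = False  # exclude from gradient updates
        frozen += 1

# Pass only the trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

Varying k from zero (full fine-tuning) to the full depth of the encoder (feature extraction only), across networks of different depths, is one way to probe the questions raised above.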
Contact: Anatol Ehrlich