Investigating the Loss Landscape of Graph Neural Networks

Graph neural networks (GNNs) have emerged as powerful tools for learning from graph-structured data, with applications spanning multiple domains. Despite their widespread use, understanding their internal mechanics remains challenging, in particular how optimization, expressivity, and generalization interrelate. This project aims to investigate the loss landscape of GNNs to gain insights into the optimization process and into how typical GNN design choices affect it. In particular, we will study how GNN weights evolve during training and how this evolution affects the loss and the overall performance of the trained models. The project will include experiments that use a learnable dimensionality reduction method to visualize and compare the loss landscapes and optimization trajectories of different GNN architectures and training setups. Relevant design choices include sparsification, quantization, preconditioning, readout functions, and expressivity-preserving modifications.
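
As a rough sketch of the kind of experiment involved (not the project's actual method), the following plain-PyTorch snippet trains a tiny two-layer GCN on synthetic data, records the flattened weight vector after every optimizer step, projects the trajectory onto its top two PCA directions (a fixed linear stand-in for the learnable dimensionality reduction mentioned above), and evaluates the loss on a 2-D grid in that subspace. All names and hyperparameters here (TinyGCN, flat_params, the synthetic graph) are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Synthetic node-classification data: random undirected graph with
    # self-loops and symmetric degree normalization (as in a standard GCN).
    torch.manual_seed(0)
    n, d, c = 50, 8, 3                      # nodes, input features, classes
    X = torch.randn(n, d)
    A = torch.rand(n, n) < 0.1
    A = (A | A.T).float()
    A.fill_diagonal_(1.0)                   # add self-loops
    deg = A.sum(dim=1)
    A_hat = A / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)
    y = torch.randint(0, c, (n,))

    class TinyGCN(nn.Module):
        """Two-layer GCN on a dense normalized adjacency (illustrative)."""
        def __init__(self, d, h, c):
            super().__init__()
            self.lin1 = nn.Linear(d, h)
            self.lin2 = nn.Linear(h, c)
        def forward(self, A_hat, X):
            H = torch.relu(A_hat @ self.lin1(X))
            return A_hat @ self.lin2(H)

    model = TinyGCN(d, 16, c)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    def flat_params(m):
        # Snapshot of all weights as one flat vector.
        return torch.cat([p.detach().flatten() for p in m.parameters()])

    def set_params(m, vec):
        # Write a flat vector back into the model's parameters.
        i = 0
        for p in m.parameters():
            p.data = vec[i:i + p.numel()].view_as(p).clone()
            i += p.numel()

    # Train while recording the weight trajectory.
    trajectory = []
    for step in range(200):
        opt.zero_grad()
        loss_fn(model(A_hat, X), y).backward()
        opt.step()
        trajectory.append(flat_params(model))

    # Project the trajectory onto its top-2 PCA directions, centered on the
    # final weights (a fixed linear reduction standing in for a learnable one).
    T = torch.stack(trajectory)
    center = T[-1]
    _, _, Vh = torch.linalg.svd(T - center, full_matrices=False)
    dirs = Vh[:2]                           # two principal directions
    coords = (T - center) @ dirs.T          # 2-D trajectory coordinates

    # Evaluate the loss on a 2-D grid spanned by the two directions.
    lim = float(coords.abs().max()) * 1.2
    grid = torch.linspace(-lim, lim, 25)
    landscape = torch.zeros(len(grid), len(grid))
    with torch.no_grad():
        for i, a in enumerate(grid):
            for j, b in enumerate(grid):
                set_params(model, center + a * dirs[0] + b * dirs[1])
                landscape[i, j] = loss_fn(model(A_hat, X), y)

    print("final 2-D position:", coords[-1])
    print("loss range on grid:", landscape.min().item(), landscape.max().item())

Plotting landscape as a contour map with the projected coords overlaid yields the familiar trajectory-on-landscape picture; repeating this for different architectures or design choices (e.g. with and without quantization) is the kind of comparison the project targets, with the fixed PCA projection replaced by a learnable reduction.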

Contact: Samir Moustafa, Nils Kriege