Derek Lim*, Joshua Robinson*, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, Stefanie Jegelka
ICLR Spotlight
Publication year: 2023

Many machine learning tasks involve processing eigenvectors derived from
data. Especially valuable are Laplacian eigenvectors, which capture useful
structural information about graphs and other geometric objects. However,
ambiguities arise when computing eigenvectors: for each eigenvector v, the
sign-flipped vector -v is also an eigenvector. More generally, higher-dimensional
eigenspaces contain infinitely many choices of basis eigenvectors. These
ambiguities make it challenging to process eigenvectors and eigenspaces in a
consistent way. In this work we introduce SignNet and BasisNet — new neural
architectures that are invariant to all requisite symmetries and hence process
collections of eigenspaces in a principled manner. Our networks are universal,
i.e., they can approximate any continuous function of eigenvectors with the
proper invariances. They are also theoretically strong for graph representation
learning — they can approximate any spectral graph convolution, can compute
spectral invariants that go beyond message passing neural networks, and can
provably simulate previously proposed graph positional encodings. Experiments
show the strength of our networks for learning spectral graph filters and
learning graph positional encodings.
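
The sign ambiguity described above, and the idea of building invariance to it into the architecture, can be illustrated concretely. The sketch below is a minimal NumPy illustration, not the paper's implementation: it computes Laplacian eigenvectors of a small path graph, verifies that -v is just as valid an eigenvector as v, and wraps a toy feature map in a SignNet-style form phi(v) + phi(-v) so the output is unchanged under sign flips. The names phi and sign_invariant and the fixed random weights are illustrative assumptions; in the paper phi and the downstream network are learned.

```python
import numpy as np

# Toy graph: a path on 4 nodes, given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # unnormalized graph Laplacian

# Eigendecomposition: columns of V are eigenvectors, eigenvalues ascending.
eigvals, V = np.linalg.eigh(L)
v = V[:, 1]                              # second eigenvector

# Sign ambiguity: -v satisfies the same eigenvalue equation as v, and a
# numerical solver may return either one.
assert np.allclose(L @ v, eigvals[1] * v)
assert np.allclose(L @ (-v), eigvals[1] * (-v))

def phi(x, W, b):
    """Toy feature map standing in for a learned network phi (illustrative)."""
    return np.tanh(W @ x + b)

def sign_invariant(x, W, b):
    """Sign-invariant wrapper: phi(x) + phi(-x) is unchanged when x -> -x."""
    return phi(x, W, b) + phi(-x, W, b)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)

# The raw feature map changes under a sign flip; the wrapped one does not.
print(np.allclose(phi(v, W, b), phi(-v, W, b)))                        # False
print(np.allclose(sign_invariant(v, W, b), sign_invariant(-v, W, b)))  # True
```

The same wrapper applied to each eigenvector independently, followed by a network that aggregates the per-eigenvector features, gives a function that is invariant to all sign flips; handling full basis symmetries in higher-dimensional eigenspaces requires the stronger BasisNet construction described in the paper.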