Other attributes
Directional graph networks (DGNs) have been used to overcome expressive limitations of graph neural networks (GNNs). They accomplish this by defining a vector field on the graph and applying directional derivative and smoothing operators that project node-specific messages onto the field. This allows graph convolutions to be defined along topologically derived directional flows. Overall, a DGN enables graph networks to embed directions in an unsupervised way and allows for a better representation of anisotropic features in different physical or biological problems.
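To make the aggregation scheme concrete, the sketch below (illustrative only; the function name directional_aggregation, the dense NumPy matrices, and the choice of the Fiedler vector are assumptions rather than the authors' reference implementation) builds a vector field from a Laplacian eigenvector and applies a directional smoothing aggregator and a directional derivative aggregator to node features.

```python
import numpy as np

def directional_aggregation(adj, features, eps=1e-8):
    """Directional smoothing and derivative aggregation over a graph.

    adj      : (n, n) symmetric binary adjacency matrix
    features : (n, d) node feature matrix
    Returns the directionally smoothed features and the directional
    derivative of the features along a Laplacian-eigenvector field.
    """
    # Combinatorial graph Laplacian L = D - A.
    deg = adj.sum(axis=1)
    laplacian = np.diag(deg) - adj

    # Eigenvectors sorted by eigenvalue; column 1 is the first non-trivial
    # (Fiedler) eigenvector, used here to define the vector field.
    _, eigvecs = np.linalg.eigh(laplacian)
    phi = eigvecs[:, 1]

    # Vector field on edges: F[i, j] = phi[j] - phi[i] where (i, j) is an edge.
    F = adj * (phi[None, :] - phi[:, None])

    # L1-normalise each node's outgoing field.
    F_hat = F / (np.abs(F).sum(axis=1, keepdims=True) + eps)

    # Directional smoothing: weighted average of neighbours along the field.
    smoothed = np.abs(F_hat) @ features

    # Directional derivative: centred finite difference along the field.
    B_dx = F_hat - np.diag(F_hat.sum(axis=1))
    derivative = B_dx @ features

    return smoothed, derivative
```

The two aggregated outputs can then be concatenated with standard (isotropic) message-passing outputs inside each layer, which is what lets the convolution distinguish neighbours lying in different directions of the field.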
The theory of directional graph networks was proposed by Dominique Beaini, Saro Passaro, Vincent Letourneau, William L. Hamilton, Gabriele Corso, and Pietro Lio in October 2020. The paper proposed using Laplacian eigenvectors as vector fields, and the authors showed that the method generalizes convolutional neural networks (CNNs) on an n-dimensional grid and is more discriminative than standard GNNs with respect to the 1-Weisfeiler-Lehman (1-WL) test. Compared with benchmark models, the researchers' method achieved a relative error reduction of 8% on the CIFAR10 graph dataset, an 11% to 32% relative error reduction on the molecular ZINC dataset, and a relative increase in precision of 1.6% on the MolPCBA dataset.
In addition, the researchers were able to bring the power of CNN-style data augmentation to graphs by providing a means of applying reflections, rotations, and distortions to the underlying directional field.
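As an illustration of what such field-level augmentations could look like, the hedged sketch below (the helper names, the sign-flip reflection, and the Gaussian-noise distortion are assumptions, not the authors' exact procedure) reuses the adjacency matrix adj, eigenvector phi, and field F from the previous sketch.

```python
import numpy as np

def reflect_field(F):
    # Reflection: flipping the sign of the field reverses every
    # directional derivative while leaving the smoothing unchanged.
    return -F

def distort_field(adj, phi, scale=0.05, rng=None):
    # Distortion: perturb the eigenvector with small noise and rebuild
    # the edge-wise field from the noisy node potentials.
    rng = np.random.default_rng() if rng is None else rng
    phi_noisy = phi + scale * rng.standard_normal(phi.shape)
    return adj * (phi_noisy[None, :] - phi_noisy[:, None])
```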