Introduction to Graph Neural Networks

Pragati Baheti · Published in Heartbeat · 5 min read · Jun 27, 2023


Photo by Resource Database on Unsplash

Introduction

Neural networks have been operating on graph data for over a decade now. Graph Neural Networks (GNNs) are a class of artificial neural networks designed to operate directly on graphs, leveraging the structure and properties of the graph to learn representations.

Different graph neural network tasks [Source]

Convolutional Neural Networks in the context of computer vision can be seen as GNNs applied to a grid (a regular graph) of pixels. Similarly, RNNs can be seen as operating on a chain graph in which each node represents a word.

There are three different types of learning tasks that are associated with GNN. They are as follows:

  • Node-level tasks refer to tasks that concentrate on nodes, such as node classification, node regression, and node clustering.
  • Edge-level tasks, on the other hand, entail edge classification and link prediction. These tasks require the model to categorize edge types or predict the existence of an edge between two given nodes.
  • Graph-level tasks involve graph classification, graph regression, and graph matching. In these tasks, the model must learn comprehensive graph representations.

Graphs as a representation of data

Structure of graph [Source]

Graphs are a set of objects that are interlinked by connections between them. A graph represents the edges between a collection of nodes; in terms of data, this means the relations between entities or data points. Each component of the graph (the edges, the nodes, or the graph as a whole) can store information. An additional property that comes with this data structure is the directionality of the edges between the nodes.

Graphs are an abstract data structure with a powerful capacity for representing data. Even images and text can be modeled as graphs: an image is a grid graph of pixels, and a sentence is a chain of word nodes, which makes symmetries in the data easy and intuitive to capture.
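As a concrete illustration of graphs as a data structure, here is a minimal sketch in plain Python (the node names, features, and helper function are my own, not from any particular library): nodes carry feature vectors and edges record the relations between them.

```python
# A tiny directed graph: one feature vector per node, edges as (src, dst) pairs.
nodes = {
    "A": [1.0, 0.0],
    "B": [0.0, 1.0],
    "C": [1.0, 1.0],
}
edges = [("A", "B"), ("B", "C"), ("A", "C")]

def neighbors(node, edges):
    """Return all nodes reachable from `node` along a directed edge."""
    return [dst for src, dst in edges if src == node]

print(neighbors("A", edges))  # ['B', 'C']
```

An undirected graph would simply store each edge in both directions, and edge-level information (weights, relation types) could be attached to the pairs in the same way.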

How a Graph Convolutional Neural Network works

I will show an example of how a GNN can be used to extract features by fusing the graph structure with the architecture of a Convolutional Neural Network.

Graph Convolutional Neural Network [Source]

Each node of the graph holds a feature vector, which it updates by applying an aggregation function over the information it receives from neighboring nodes. The resulting vector is passed through a dense neural layer to extract hidden features, and a non-linear activation function is applied on top. This complete process is repeated for multiple rounds.

Two tunable transformations control the update: how much we modify the node's own feature vector, and how much of the transformation is applied to the aggregated feature vectors of its neighbors.
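The update described above can be sketched in pure Python. This is a toy, assumption-laden version: I use mean aggregation, two separate weight matrices (`W_self` for the node's own features, `W_neigh` for the neighbor aggregate, matching the two tunable transformations mentioned), and ReLU as the non-linearity; real implementations use learned weights and matrix libraries.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def gcn_layer(features, adj, W_self, W_neigh):
    """One message-passing round: transform a node's own features and the
    mean of its neighbors' features, sum them, and apply a non-linearity."""
    new_features = {}
    for node, feat in features.items():
        neigh = adj[node]
        agg = [0.0] * len(feat)          # mean of neighbor features
        for n in neigh:
            for i, x in enumerate(features[n]):
                agg[i] += x / len(neigh)
        own = matvec(W_self, feat)       # node's own contribution
        msg = matvec(W_neigh, agg)       # neighbors' contribution
        new_features[node] = relu([a + b for a, b in zip(own, msg)])
    return new_features

features = {"A": [1.0, 0.0], "B": [0.0, 1.0], "C": [1.0, 1.0]}
adj = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
I = [[1.0, 0.0], [0.0, 1.0]]             # identity weights, for illustration
out = gcn_layer(features, adj, I, I)
print(out["A"])  # [1.5, 1.0]
```

Looping this layer several times, as the text describes, lets information from nodes several hops away reach each node.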

Other types of Graph Neural Networks

Graph Auto-Encoder Networks utilize an encoder and decoder to acquire graph representations and reconstruct the input graph, respectively, with a bottleneck layer that connects the two. These networks are often employed in link prediction tasks, since auto-encoders cope well with the heavy class imbalance inherent in link prediction (real edges are far outnumbered by absent ones).
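A common decoder choice in graph auto-encoders (this specific inner-product form is my assumption, not something stated above) scores a candidate edge from the two node embeddings the encoder produced:

```python
import math

def link_score(z_u, z_v):
    """Inner-product decoder: sigmoid of the dot product of two node
    embeddings, read as the probability that an edge exists between them."""
    dot = sum(a * b for a, b in zip(z_u, z_v))
    return 1.0 / (1.0 + math.exp(-dot))

print(round(link_score([1.0, 2.0], [1.0, 1.0]), 3))  # 0.953
```

Embeddings that point in similar directions yield scores near 1 (edge likely), while dissimilar embeddings yield scores near 0.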

Recurrent Graph Neural Networks have the ability to learn the most effective diffusion pattern and can manage multi-relational graphs in which a node has multiple connections while using less computation. To enhance smoothness and reduce over-parameterization, this category of graph neural network employs regularizers. They are commonly utilized in a variety of applications such as text generation, machine translation, speech recognition, image captioning, video tagging, and text summarization.

Gated Graph Neural Networks are more adept than RGNNs at handling tasks that involve long-term dependencies. GGNNs enhance RGNNs by introducing node, edge, and time gates to deal with long-term dependencies. The gates function similarly to Gated Recurrent Units (GRUs) by enabling the retention or suppression of information in different states using attention mechanisms.
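To make the gating idea concrete, here is a deliberately simplified scalar sketch (real GGNNs use full GRU cells with weight matrices; the function and weights below are hypothetical): a gate between 0 and 1 decides how much of the incoming aggregated message overwrites the node's previous state.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_update(state, message, w_gate, w_cand):
    """GRU-style node update: the gate z blends the old state with a
    candidate state computed from the aggregated neighbor message."""
    z = sigmoid(w_gate * (state + message))  # update gate in (0, 1)
    candidate = math.tanh(w_cand * message)  # proposed new state
    return (1.0 - z) * state + z * candidate
```

When the gate saturates near 0 the node keeps its old state, which is what lets information survive across many propagation steps instead of being washed out.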

Naive neural networks vs GNNs

How are GNNs different from standard artificial neural networks (ANNs)?

Graph Neural Networks (GNNs) differ from standard neural networks in the structure of the data they consume. A standard neural network operates on fixed-size, regularly structured inputs, passing them through layers of neurons with weights learned from the input dataset. A GNN instead operates on a graph whose nodes and edges carry features of their own. This is particularly useful when data points overlap or have missing values, since information can be propagated along edges from connected nodes to fill in the gaps.

GNNs also differ in how computation proceeds. A standard deep learning model applies the same feed-forward pass independently to each data point, whereas a GNN performs rounds of message passing in which every node exchanges information with its neighbors. The learnable parameters are shared across all nodes and edges, but each node still receives a distinct update that depends on its local neighborhood. This makes the transfer function of a GNN more intricate than that of a regular deep learning model, since many graph nodes and graph edges contribute to a single data point.

Limitations of GNN

  1. GNNs are not robust to noise. The feature extraction of a node is highly dependent on its neighboring environment. Hence, addition or deletion of any edge can have a huge impact on the classifications that GNN produces.
  2. GNNs can fail to distinguish structurally different graphs when the input node features are uniform, since message passing then produces identical node embeddings (a limitation related to the Weisfeiler-Lehman graph isomorphism test). This is a blocker in downstream tasks such as graph classification.
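The second limitation can be demonstrated directly. In the toy below (mean aggregation, a single scalar feature, a mean readout; all choices are mine for illustration), a hexagon and two disjoint triangles are structurally different graphs, yet with a uniform starting feature they produce exactly the same graph embedding:

```python
def message_passing(adj, feature, rounds=3):
    """Mean-aggregate a uniform scalar feature for several rounds, then
    read out a graph embedding as the mean over all node features."""
    feats = {n: feature for n in adj}
    for _ in range(rounds):
        feats = {n: sum(feats[m] for m in adj[n]) / len(adj[n]) for n in adj}
    return sum(feats.values()) / len(feats)

# Two 2-regular graphs on 6 nodes: a single 6-cycle vs. two triangles.
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(message_passing(hexagon, 1.0) == message_passing(triangles, 1.0))  # True
```

Because every node sees the same number of neighbors with the same feature, no amount of message passing separates the two graphs; distinguishing them requires richer node features or more expressive aggregation.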

Summary

GNNs conceptually build on graph theory and deep learning, i.e., they combine node feature information with the graph structure to learn better representations. GNNs are widely used for graph classification, social network analysis, and graph analytics tasks such as risk prediction and friend recommendation on social platforms.

Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.

Editorially independent, Heartbeat is sponsored and published by Comet, an MLOps platform that enables data scientists & ML teams to track, compare, explain, & optimize their experiments. We pay our contributors, and we don’t sell ads.

If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletter (Deep Learning Weekly), check out the Comet blog, join us on Slack, and follow Comet on Twitter and LinkedIn for resources, events, and much more that will help you build better ML models, faster.
