This project takes a simplified yet thorough approach to implementing and explaining various Graph Representation Learning techniques developed in recent years. We cover major papers in the field as part of this review series, and we aim to add posts on many more significant papers.

## 1. Understanding DeepWalk

DeepWalk is an unsupervised, online learning approach inspired by word2vec in NLP; here, however, the goal is to generate node embeddings rather than word embeddings.
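The core idea can be sketched in a few lines: generate truncated random walks over the graph and treat them as "sentences" of node IDs to be fed to a word2vec-style skip-gram model. This is an illustrative sketch only; the toy graph and walk parameters below are made up, and the skip-gram training step is omitted.

```python
import random

# Toy undirected graph as an adjacency list (illustrative only).
graph = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}

def random_walk(graph, start, length, rng):
    """Generate one truncated random walk starting from `start`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

rng = random.Random(0)
# DeepWalk generates several walks per node; the resulting node-ID
# sequences would then be passed to a skip-gram model (e.g. word2vec)
# to learn one embedding vector per node.
walks = [random_walk(graph, node, length=5, rng=rng)
         for node in graph for _ in range(3)]
```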

## 2. A Review : Graph Convolutional Networks (GCN)

GCNs draw on the idea of Convolutional Neural Networks, re-defining them for the non-Euclidean graph domain. They are convolutional because the filter parameters are typically shared over all locations in the graph.

- GCN Blog
- Jupyter Notebook
- Code
- Paper -> Semi-Supervised Classification with Graph Convolutional Networks
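The GCN layer-wise propagation rule from the Kipf & Welling paper can be sketched with plain NumPy. The toy adjacency matrix, feature matrix, and weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0)          # ReLU activation

# Tiny 3-node path graph: edges 0-1 and 1-2 (illustrative only).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)                                     # one-hot input features
W = np.random.default_rng(0).normal(size=(3, 2))  # random layer weights
H_out = gcn_layer(A, H, W)                        # shape (3, 2)
```

The symmetric normalization keeps the spectrum of the propagation matrix bounded, which is what lets multiple such layers be stacked without activations blowing up.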

## 3. GraphSAGE (SAmple and aggreGatE)

Previous approaches are transductive and don’t naturally generalize to unseen nodes. GraphSAGE is an inductive framework leveraging node feature information to efficiently generate node embeddings.
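The sample-and-aggregate step can be sketched with a mean aggregator: for each node, sample a fixed number of neighbors, average their features, and combine with the node's own features. The toy graph, weight shapes, and sample count below are illustrative assumptions.

```python
import numpy as np

def graphsage_mean_layer(graph, feats, W_self, W_neigh, num_samples, rng):
    """One GraphSAGE layer with a mean aggregator:
    h_v' = normalize(ReLU(W_self h_v + W_neigh mean(h_u for sampled u)))."""
    out = []
    for v in range(len(feats)):
        # Sample a fixed-size neighborhood (with replacement, for simplicity).
        sampled = rng.choice(graph[v], size=num_samples, replace=True)
        h_neigh = feats[sampled].mean(axis=0)          # mean aggregation
        h = np.maximum(W_self @ feats[v] + W_neigh @ h_neigh, 0)
        norm = np.linalg.norm(h)
        out.append(h / norm if norm > 0 else h)        # l2-normalize
    return np.stack(out)

# Illustrative 3-node star graph and random weights.
graph = {0: [1, 2], 1: [0], 2: [0]}
feats = np.eye(3)
rng = np.random.default_rng(0)
W_self = rng.normal(size=(4, 3))
W_neigh = rng.normal(size=(4, 3))
out = graphsage_mean_layer(graph, feats, W_self, W_neigh,
                           num_samples=2, rng=rng)     # shape (3, 4)
```

Because the layer is a function of sampled neighbor features rather than a fixed node table, it can be applied to nodes never seen during training, which is what makes the method inductive.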

## 4. ChebNet: CNN on Graphs with Fast Localized Spectral Filtering

ChebNet is a formulation of CNNs in the context of spectral graph theory.

- ChebNet Blog
- Jupyter Notebook
- Code
- Paper -> Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
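The key trick is that a K-th order spectral filter can be evaluated with the Chebyshev recurrence T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x), avoiding an explicit eigendecomposition. Below is a minimal sketch; the tiny 2-node graph and filter coefficients are illustrative assumptions.

```python
import numpy as np

def cheb_filter(L_tilde, x, thetas):
    """Apply sum_k theta_k * T_k(L_tilde) @ x using the Chebyshev
    recurrence T_k = 2 L_tilde T_{k-1} - T_{k-2}.
    L_tilde must already be rescaled so its spectrum lies in [-1, 1]."""
    T_prev, T_curr = x, L_tilde @ x
    out = thetas[0] * T_prev + thetas[1] * T_curr
    for theta in thetas[2:]:
        T_prev, T_curr = T_curr, 2 * (L_tilde @ T_curr) - T_prev
        out = out + theta * T_curr
    return out

# Illustrative 2-node graph: build the rescaled Laplacian
# L_tilde = 2 L / lambda_max - I, so its eigenvalues fall in [-1, 1].
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A
lmax = np.linalg.eigvalsh(L).max()
L_tilde = 2 * L / lmax - np.eye(2)

x = np.array([1.0, -1.0])
y = cheb_filter(L_tilde, x, thetas=[0.5, 0.3, 0.2])
```

Because T_k(L_tilde) is a polynomial of degree k in the Laplacian, the resulting filter is exactly k-hop localized on the graph.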

## 5. Understanding Graph Attention Networks

GATs are able to attend over their neighborhoods' features, implicitly assigning different weights to different nodes in a neighborhood, without requiring any costly matrix operation or prior knowledge of the full graph structure.
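A single attention head can be sketched as follows: compute logits e_ij = LeakyReLU(a^T [W h_i || W h_j]) for each edge, softmax them over each node's neighborhood (including a self-loop), and aggregate. The dense double loop below is for clarity, not efficiency, and the toy graph and weights are illustrative assumptions.

```python
import numpy as np

def gat_layer(A, H, W, a, slope=0.2):
    """Single-head GAT layer:
    e_ij = LeakyReLU(a^T [W h_i || W h_j]), softmax over neighbors,
    then h_i' = sum_j alpha_ij W h_j."""
    Wh = H @ W                                  # projected features (N, F')
    N = A.shape[0]
    A_hat = A + np.eye(N)                       # each node attends to itself
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            z = np.concatenate([Wh[i], Wh[j]]) @ a
            e[i, j] = z if z > 0 else slope * z  # LeakyReLU
    e = np.where(A_hat > 0, e, -np.inf)          # mask non-neighbors
    e = e - e.max(axis=1, keepdims=True)         # stabilize the softmax
    alpha = np.exp(e)
    alpha /= alpha.sum(axis=1, keepdims=True)    # rows sum to 1
    return alpha @ Wh

# Illustrative 3-node path graph with random features and weights.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
a = rng.normal(size=4)                           # attention vector, len 2*F'
out = gat_layer(A, H, W, a)                      # shape (3, 2)
```

Note that only the adjacency mask is needed per edge; no graph-wide spectral operation is involved, which is why GAT applies directly to graphs unseen during training.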