Comparison Between Operators in Vector Fields and Graph Theory

This article summarizes the relationship between the operator $\mathrm{div} \cdot \mathrm{grad} = \Delta$ in vector fields and the Laplacian matrix $\mathbf{L}$ in graph theory. It serves as a memo on their theoretical background. This is a fascinating topic.

GitHub

  • The Jupyter Notebook file is available here.

Google Colaboratory

  • To run it on Google Colaboratory, click here.

Execution Environment

The operating system used here is macOS. Please note that options for Linux or Unix commands may differ.

!sw_vers
ProductName:		macOS
ProductVersion:		13.5.1
BuildVersion:		22G90
!python -V
Python 3.9.17

We import basic libraries and use the watermark module to check their versions. Additionally, we set a random seed.

%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import random

import scipy
import numpy as np

import matplotlib
import matplotlib.pyplot as plt

seed = 123
random_state = 123

random.seed(seed)
np.random.seed(seed)

from watermark import watermark

print(watermark(python=True, watermark=True, iversions=True, globals_=globals()))
Python implementation: CPython
Python version       : 3.9.17
IPython version      : 8.17.2

scipy     : 1.11.2
matplotlib: 3.8.1
numpy     : 1.25.2

Watermark: 2.4.3

Operators in Vector Fields (div, grad)

Overview of Vector Fields

A vector field assigns a vector to each point in space. In physics, it is used to describe phenomena like fluid flow or electromagnetic fields. In machine learning, gradient vector fields are often used in optimization problems.

Consider a vector field $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$; for each point $\mathbf{x} \in \mathbb{R}^n$, $\mathbf{F}(\mathbf{x})$ returns an $n$-dimensional vector.

Gradient Operator (grad)

For a scalar field $f(\mathbf{x})$, its gradient is written as $$ \mathrm{grad}\, f = \nabla f. $$ In components, $$ \nabla f = \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right). $$ In Euclidean space, it is simply the vector of partial derivatives. Physically, it points in the direction of steepest ascent. Gradient descent in machine learning uses this gradient to update parameters.
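
As a quick numerical sketch (the grid and the field $f(x, y) = x^2 + y^2$ are illustrative choices, not part of the original notebook), the gradient of a sampled scalar field can be approximated with np.gradient.

# Approximate grad f for f(x, y) = x**2 + y**2 on a regular grid.
x = np.linspace(-2.0, 2.0, 101)
y = np.linspace(-2.0, 2.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")  # axis 0 = x, axis 1 = y
f = X**2 + Y**2

dx = x[1] - x[0]
dy = y[1] - y[0]
df_dx, df_dy = np.gradient(f, dx, dy)  # finite-difference partial derivatives

# Analytically, grad f = (2x, 2y); compare at an interior grid point.
print(df_dx[75, 50], 2 * X[75, 50])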

Divergence Operator (div)

For a vector field $\mathbf{F}(\mathbf{x}) = (F_1, F_2, \ldots, F_n)$, its divergence is expressed as $$ \mathrm{div}\, \mathbf{F} = \nabla \cdot \mathbf{F}. $$ In components, $$ \nabla \cdot \mathbf{F} = \sum_{i=1}^n \frac{\partial F_i}{\partial x_i}. $$ In fluid dynamics, a positive divergence indicates a “source,” while a negative divergence represents a “sink.” In signal processing, divergence-based methods are used for tasks like noise removal. The Laplacian operator is defined by combining the divergence with the gradient.
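
A similar sketch for the divergence, reusing the grid above and the illustrative field $\mathbf{F}(x, y) = (x, y)$, whose divergence is 2 everywhere.

# Approximate div F for F(x, y) = (x, y); analytically div F = 2.
F1 = X  # x-component of the field
F2 = Y  # y-component of the field

dF1_dx = np.gradient(F1, dx, axis=0)
dF2_dy = np.gradient(F2, dy, axis=1)
div_F = dF1_dx + dF2_dy

print(div_F[50, 50])  # close to 2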

Laplacian Operator (div grad)

For a scalar field $f(\mathbf{x})$, the Laplacian is given by $$ \Delta f = \mathrm{div}(\mathrm{grad}\, f), $$ or equivalently, $$ \Delta f = \nabla \cdot (\nabla f) = \sum_{i=1}^n \frac{\partial^2 f}{\partial x_i^2}. $$ This operator plays a critical role in partial differential equations, modeling phenomena like heat transfer and wave propagation. Its discrete counterpart is linked to the Laplacian matrix in graph theory.
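
Chaining the two sketches above, applying div to grad of the illustrative field $f = x^2 + y^2$ should reproduce the constant Laplacian $\Delta f = 4$.

# Laplacian as div(grad f): for f = x**2 + y**2 we expect Delta f = 4.
gx, gy = np.gradient(f, dx, dy)                                     # grad f
lap_f = np.gradient(gx, dx, axis=0) + np.gradient(gy, dy, axis=1)   # div(grad f)

print(lap_f[50, 50])  # close to 4 in the interior of the grid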


Laplacian Matrix in Graph Theory

Basics of Graph Structure

In graph theory, we deal with structures consisting of vertex and edge sets. Recommender systems often treat items and users as vertices. Edge weights frequently represent similarity or relevance.

Consider an undirected graph $G = (V, E)$ with $n$ vertices and $m$ edges, where $V = \{1, 2, \ldots, n\}$ and an edge $(i, j) \in E$ indicates a connection between vertices $i$ and $j$.

Adjacency and Degree Matrices

The adjacency matrix of graph $G$, denoted $\mathbf{A}$, is an $n \times n$ matrix where $\mathbf{A}_{ij} = 1$ if vertices $i$ and $j$ are adjacent and $\mathbf{A}_{ij} = 0$ otherwise. For weighted graphs, the entry contains the weight of the edge.

The degree matrix $\mathbf{D}$ is a diagonal matrix where $\mathbf{D}_{ii}$ represents the degree (number of connections or total weight) of vertex $i$. All off-diagonal elements are zero.
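
As a minimal illustration (the 4-vertex path graph below is an arbitrary example, not from the original notebook), $\mathbf{A}$ and $\mathbf{D}$ can be built directly with NumPy.

# Adjacency and degree matrices for a small path graph 0 - 1 - 2 - 3.
edges = [(0, 1), (1, 2), (2, 3)]
n_vertices = 4

A = np.zeros((n_vertices, n_vertices))
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected graph: A is symmetric

D = np.diag(A.sum(axis=1))  # degree of each vertex on the diagonal

print(A)
print(D)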

Graph Laplacian Matrix

The Laplacian matrix $\mathbf{L}$ is defined as $$ \mathbf{L} = \mathbf{D} - \mathbf{A}. $$ Its diagonal entries are the vertex degrees, and its off-diagonal entries record adjacency with a negative sign ($-\mathbf{A}_{ij}$).

The normalized Laplacian matrix, often used in spectral clustering and graph signal processing, is given by $$ \mathbf{L}_{\text{norm}} = \mathbf{D}^{-\frac{1}{2}} \mathbf{L} \mathbf{D}^{-\frac{1}{2}} = \mathbf{I} - \mathbf{D}^{-\frac{1}{2}} \mathbf{A} \mathbf{D}^{-\frac{1}{2}}. $$
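
Continuing the path-graph example, both matrices can be formed by hand and cross-checked against scipy.sparse.csgraph.laplacian.

from scipy.sparse.csgraph import laplacian

# Laplacian matrix of the path graph defined above.
L = D - A

# Symmetric normalized Laplacian D^{-1/2} L D^{-1/2}.
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_norm = D_inv_sqrt @ L @ D_inv_sqrt

# Cross-check with SciPy's implementation.
print(np.allclose(L, laplacian(A)))
print(np.allclose(L_norm, laplacian(A, normed=True)))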

Eigenvalues and Eigenvectors

The matrix $\mathbf{L}$ is positive semi-definite, so its eigenvalues are non-negative. The smallest eigenvalue is 0, corresponding to the constant eigenvector, and its multiplicity equals the number of connected components. Information about graph connectivity and clusters is encoded in these eigenvalues.
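
For the same example graph, the eigenvalues can be inspected with np.linalg.eigh; the smallest one is numerically zero and its eigenvector is constant.

# Eigenvalues of L for the path graph: all non-negative, smallest is 0.
eigvals, eigvecs = np.linalg.eigh(L)  # L is symmetric, so eigh is appropriate

print(eigvals)        # first value is (numerically) 0
print(eigvecs[:, 0])  # corresponding eigenvector is constant up to scale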

The Laplacian matrix can be viewed as a discrete counterpart of the continuous Laplacian: it represents $\mathrm{div} \cdot \mathrm{grad}$ in matrix form, acting on values attached to the vertices through their discrete connections.


Explanation of Mathematical Relationships

Continuous and Discrete Correspondence

In continuous space, the differential operator $\Delta = \mathrm{div} \cdot \mathrm{grad}$ represents the sum of second derivatives at each point. On discrete structures, the graph Laplacian $\mathbf{L}$ plays the same role through sums of differences between neighboring vertices.

Replacing the scalar field $f(\mathbf{x})$ with scalar values $f(i)$ at the graph's vertices, the gradient becomes differences computed along edges, and the divergence aggregates those edge differences back at the vertices. As a result, $\mathbf{L}$ serves as the discrete equivalent of $\Delta$.
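
One way to make this concrete is the oriented incidence matrix $\mathbf{B}$: $\mathbf{B}^\top \mathbf{f}$ computes the edge differences (a discrete gradient), $\mathbf{B}$ aggregates edge values back at the vertices (a discrete divergence), and the composition satisfies $\mathbf{L} = \mathbf{B}\mathbf{B}^\top$. A sketch on the path graph from above, with arbitrary vertex values:

# Discrete gradient and divergence via the oriented incidence matrix B.
# For edge e = (i, j), column e of B has +1 in row i and -1 in row j.
B = np.zeros((n_vertices, len(edges)))
for e, (i, j) in enumerate(edges):
    B[i, e] = 1
    B[j, e] = -1

f_vertex = np.array([1.0, 3.0, 2.0, 5.0])  # arbitrary scalar values on vertices

grad_f = B.T @ f_vertex  # "gradient": differences f(i) - f(j) along each edge
div_grad_f = B @ grad_f  # "divergence": signed sums of edge differences at each vertex

# The composition div . grad reproduces the graph Laplacian.
print(np.allclose(B @ B.T, L))
print(np.allclose(div_grad_f, L @ f_vertex))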

Definition of Discrete Laplacian

For a vertex $i$, the differences $f(i) - f(j)$ with its neighbors $j$ are summed. Using the degree matrix $\mathbf{D}$ and adjacency matrix $\mathbf{A}$, this operation is represented as $$ (\mathbf{L} \mathbf{f})(i) = \sum_{j \in N(i)} [f(i) - f(j)], $$ where $N(i)$ is the set of neighbors of vertex $i$.
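
This identity is easy to verify on the path graph from above, comparing the neighbor-difference sums with $\mathbf{L}\mathbf{f}$ computed directly.

# Verify (L f)(i) = sum over neighbors j of [f(i) - f(j)] for the path graph.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

Lf_direct = np.array([
    sum(f_vertex[i] - f_vertex[j] for j in neighbors[i])
    for i in range(n_vertices)
])

print(np.allclose(Lf_direct, L @ f_vertex))  # True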

This corresponds to the continuous form $$ \Delta f (\mathbf{x}) = \sum_{k=1}^n \frac{\partial^2 f}{\partial x_k^2} $$ up to a sign convention: aggregating differences from neighbors mirrors the discretization of second derivatives, with $(\mathbf{L}\mathbf{f})(i)$ matching the negative of the usual second-difference stencil.

Spectral Perspective

The eigenvalue decomposition of $\mathbf{L}$ is analogous to the eigenfunction decomposition of the continuous Laplacian. Small eigenvalues represent “smooth” variations, while larger eigenvalues indicate “sharp” changes. Graph clustering utilizes eigenvectors for partitioning.
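
For instance, continuing the path-graph example, the sign pattern of the eigenvector for the second-smallest eigenvalue (the Fiedler vector) already suggests a two-way partition.

# The Fiedler vector of the path graph (eigenvector for the second-smallest eigenvalue).
fiedler = eigvecs[:, 1]

# Partition vertices by the sign of the Fiedler vector:
# for the path 0 - 1 - 2 - 3 the two sign groups are {0, 1} and {2, 3}
# (up to an overall sign flip of the eigenvector).
cluster = (fiedler > 0).astype(int)
print(fiedler)
print(cluster)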

Physical equations like the heat or diffusion equations can also be interpreted discretely through spectral decomposition. In graph signal processing, Laplacian eigenvectors serve as Fourier bases, bridging the gap between continuous and discrete perspectives.
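
As a small sketch of this viewpoint (reusing the path graph, its eigendecomposition, and the arbitrary vertex values from above), the heat equation $\mathrm{d}\mathbf{f}/\mathrm{d}t = -\mathbf{L}\mathbf{f}$ can be solved by decaying each Laplacian eigenmode, which matches the matrix-exponential solution.

from scipy.linalg import expm

# Heat diffusion on the graph: d f / dt = -L f, solved in the Laplacian eigenbasis.
t = 0.5
f0 = f_vertex  # initial "temperature" at each vertex

# Graph Fourier transform: expand f0 in the eigenbasis, decay each mode, transform back.
f_hat = eigvecs.T @ f0                           # forward transform
f_t = eigvecs @ (np.exp(-eigvals * t) * f_hat)   # decay mode k by exp(-lambda_k * t)

# Cross-check against the closed-form solution f(t) = expm(-t L) f0.
print(np.allclose(f_t, expm(-t * L) @ f0))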


Summary

There is a profound correspondence between the operator $\mathrm{div} \cdot \mathrm{grad} = \Delta$ in vector fields and the Laplacian matrix $\mathbf{L}$ in graph theory. Continuous differential operators are replaced by discrete difference operators, enabling the representation of phenomena such as heat diffusion and wave propagation in a graph-based context.

Conclusion

This article provided an overview of the relationship between the operator $\mathrm{div} \cdot \mathrm{grad} = \Delta$ in vector fields and the Laplacian matrix $\mathbf{L}$ in graph theory.
