This post describes how a set of images can be represented as points on a multi-dimensional manifold. Such a representation is extremely useful when we want to cluster visually similar images together, or to build a dynamic indexing structure for large volumes of visual data.

### The idea

The idea behind representing images as points is to relate the visual similarities between images to Euclidean distances between points on a multi-dimensional manifold onto which the images are projected. So, assuming that we have at our disposal a dataset of \(N\) images, we start by estimating a visual similarity metric between every pair of images, and then find a way to relate this metric to the Euclidean distances between points that lie on a manifold.

### Estimation of visual similarity metric

A common way to check how visually similar two images are is to estimate the number of their corresponding points. For this reason, the estimation of the visual similarity metric is based on local visual descriptors. In this case I chose the ORB descriptor [1], but SURF [2] and SIFT [3] descriptors can be used as well. ORB describes the visual content of an image by detecting \(K\) keypoints and characterizing each one of them with a binary feature. Let's denote by \(k_{i,(A)}\) the \(i\)-th keypoint of image \(A\) extracted via the ORB algorithm. Then, its corresponding point \(k_{j_i,(B)}\) in image \(B\) can be estimated by a nearest-neighbor keypoint matching algorithm.

Having detected all corresponding points between two images \(A\) and \(B\), a set \(M_{(A \rightarrow B)}\), which contains the pairs of keypoints from the first image along with their corresponding points from the second image, can be formed as:

\[ \begin{equation} M_{(A \rightarrow B)} = \left \{(k_{i,(A)}, k_{j_i,(B)}) | i=1,\dots ,K \right \} \end{equation} \]

A similar approach can be used to form the set \(M_{(B \rightarrow A)}\). The set of final matches \(M_{(A,B)}\) between images \(A\) and \(B\) is defined as the intersection of the sets \(M_{(A \rightarrow B)}\) and \(M_{(B \rightarrow A)}\). Since the number of extracted keypoints for each image equals \(K\), a visual similarity metric between images \(i=A\) and \(j=B\) can be defined as:

\[ \begin{equation} s_{(i,j)} = \frac{| M_{(A,B)} | }{K} \end{equation} \]

\(| M_{(A,B)} |\) stands for the cardinality of the set \(M_{(A,B)}\). The output of the aforementioned process for \(N\) images is an \(N \times N\) symmetric matrix \(S\) with elements \(s_{ij}\), \(i, j = 1, 2, \dots, N\). The variable \(s_{ij}\) takes the value zero when the visual content of image \(i\) is unrelated to the content of image \(j\), while for two highly similar images it takes a value equal to one. In the following, \(D\) denotes the element-wise negative log of matrix \(S\), so that similar images receive a distance close to zero, while quite dissimilar ones receive a very high value.
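The matching-and-scoring step above can be sketched in NumPy. The snippet below is only an illustration: it uses randomly generated packed binary descriptors as stand-ins for real ORB output (in practice these would come from an ORB extractor, e.g. OpenCV's), and implements the mutual nearest-neighbor intersection \(M_{(A,B)} = M_{(A \rightarrow B)} \cap M_{(B \rightarrow A)}\) under Hamming distance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for ORB output: K binary descriptors per image,
# 256 bits packed as 32 bytes (real code would use an ORB extractor).
K = 50
desc_A = rng.integers(0, 256, size=(K, 32), dtype=np.uint8)
desc_B = desc_A.copy()
desc_B[:10] = rng.integers(0, 256, size=(10, 32), dtype=np.uint8)  # perturb 10 of them

def hamming_matrix(a, b):
    """Pairwise Hamming distances between two sets of packed binary descriptors."""
    xor = a[:, None, :] ^ b[None, :, :]
    return np.unpackbits(xor, axis=-1).sum(axis=-1)

def similarity(desc_a, desc_b):
    """s = |M_(A,B)| / K, with M_(A,B) the set of mutual nearest-neighbor matches."""
    dist = hamming_matrix(desc_a, desc_b)
    a_to_b = dist.argmin(axis=1)   # M_(A -> B): best match in B for each keypoint of A
    b_to_a = dist.argmin(axis=0)   # M_(B -> A): best match in A for each keypoint of B
    mutual = np.sum(b_to_a[a_to_b] == np.arange(len(desc_a)))  # intersection
    return mutual / len(desc_a)

s = similarity(desc_A, desc_B)
```

Since 40 of the 50 descriptors are left identical, most of the mutual matches survive and \(s\) lands close to one; fully identical descriptor sets give \(s = 1\).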

\[ \begin{equation} D = \left [ d_{ij} \right ] = - \log{(S)} \end{equation} \]

\(D\) is a square \(N \times N\) symmetric matrix with non-negative elements and zeros on the main diagonal. In Eq. (3), \(d_{ij}\) stands for an element of matrix \(D\).
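In code the conversion from \(S\) to \(D\) is a one-liner. The only subtlety is that \(s_{ij} = 0\) would map to an infinite distance, so the sketch below floors the similarities at a small \(\epsilon\) (an assumption of mine, not part of the original formulation):

```python
import numpy as np

# A toy similarity matrix S (symmetric, ones on the diagonal).
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.2],
              [0.1, 0.2, 1.0]])

eps = 1e-6                              # assumed floor: s_ij = 0 would give d_ij = inf
D = -np.log(np.clip(S, eps, 1.0))       # element-wise negative log, Eq. (3)
```

The resulting \(D\) is symmetric, non-negative, and has zeros on its main diagonal, as required by the theorem below.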

### Images as points on manifolds

Having estimated a dissimilarity metric, we next have to find a way to relate it to the Euclidean distances between points that lie on a manifold. Let's define \(x_{(i)} \in \mathbb{R}^\mu\) as the coordinates of the \(i\)-th image in the \(\mu\)-dimensional space. This space is defined such that the norm between two points with coordinates \(x_{(i)}\) and \(x_{(j)}\) equals their respective image distance \( d_{ij} = -\log{(s_{ij})}\) defined by Eq. (3). The coordinates of all \(N\) images in the dataset can be compactly represented by a matrix \(X\) of the following form:

\[ \begin{equation} X = \left [ x_{(1)} \:\: \dots \:\: x_{(N)} \right ]^T \in \mathbb{R}^{N \times \mu} \end{equation} \]

Defining the Gram matrix \(B = X \cdot X^T\) of the images' coordinates, classical Multi-Dimensional Scaling (cMDS) [4] can be used to establish a connection between the space of distances and the space of the Gram matrix \(B\), based on the following theorem [5].

**Theorem 1**. A non-negative symmetric matrix \(D \in \mathbb{R}^{N \times N}\) with zeros on the diagonal is a Euclidean distance matrix if and only if \(B := - \frac{1}{2}H\cdot D\cdot H \), where \(H := I – \frac{1}{N}1 \cdot 1^T \), is positive semidefinite. Furthermore, this \(B\) will be the Gram matrix for a mean-centered configuration with interpoint distances given by \(D\).

In cases where the dissimilarity matrix \(D\) is not Euclidean, the matrix \(B\) described by the above theorem will not be positive semi-definite, and thus will not be a Gram matrix. To handle such cases, cMDS projects \(B\) onto the cone of positive semi-definite matrices by setting its negative eigenvalues to zero. To obtain matrix \(X\), the Gram matrix \(B\) is spectrally decomposed into \(U\cdot V\cdot U^T\) and then \(X=U\cdot V^{1/2}\). If we denote by \(q_i\) and \(\lambda_i\), for \(i=1,2,\dots,N\), the eigenvectors and eigenvalues of \(B\), then \(U\) is the square \(N \times N\) matrix whose \(i\)-th column is the eigenvector \(q_i\) of \(B\), and \(V=[v_{ii}]\) is the diagonal matrix whose diagonal elements \(v_{ii}\) are the corresponding eigenvalues, i.e. \(v_{ii}=\lambda_i\). Finally, the dimension \(\mu\) of the multidimensional space equals the multiplicity of the non-zero eigenvalues of \(B\).
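The whole cMDS step can be sketched in a few lines of NumPy. One note on conventions: in the standard cMDS formulation the double centering is applied to the element-wise *squared* distances, and the sketch below follows that convention; the sanity check at the end verifies that the embedding reproduces the input distances:

```python
import numpy as np

def classical_mds(D, mu=None):
    """Embed points from a distance matrix D via classical MDS.

    Follows the standard formulation: double-centers the squared
    distances, clips negative eigenvalues (projection onto the PSD
    cone), and returns X = U * V^{1/2}.
    """
    N = D.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N            # centering matrix H
    B = -0.5 * H @ (D ** 2) @ H                    # candidate Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)           # ascending order
    eigvals = np.clip(eigvals, 0.0, None)          # zero out negative eigenvalues
    order = np.argsort(eigvals)[::-1]              # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    if mu is None:
        mu = int(np.sum(eigvals > 1e-9))           # multiplicity of non-zero eigenvalues
    return eigvecs[:, :mu] * np.sqrt(eigvals[:mu]) # X = U * V^{1/2}

# Sanity check on a toy planar configuration: the recovered
# interpoint distances should match the input distance matrix.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D)
D_rec = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
```

Because the toy points lie in a plane, the estimated dimension \(\mu\) comes out as 2, illustrating how the method infers the manifold dimension directly from the data.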

The figure below presents a result of the aforementioned process. First, a similarity metric was computed for two similar and two dissimilar images using the matches of the ORB algorithm, and then the images were projected onto a manifold. For visualization purposes the figure presents a 2-dimensional manifold; in fact, the method estimates the number of manifold dimensions directly from the data. A more detailed description of this method can be found in [6].

[1] Rublee, Ethan, et al. “ORB: an efficient alternative to SIFT or SURF.” *Computer Vision (ICCV), 2011 IEEE International Conference on*. IEEE, 2011.

[2] Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool. “Surf: Speeded up robust features.” *Computer Vision–ECCV 2006*. Springer Berlin Heidelberg, 2006. 404-417.

[3] Lowe, David G. “Object recognition from local scale-invariant features.” *Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on*. Vol. 2. IEEE, 1999.

[4] T. Cox and M. Cox, *Multidimensional Scaling*, Second Edition. Chapman & Hall/CRC, 2000.

[5] L. Cayton, “Algorithms for manifold learning,” University of California, San Diego, Tech. Rep. CS2008-0923, 2006.

[6] Makantasis, K., Doulamis, A., Doulamis, N., & Ioannides, M. (2014). In the wild image retrieval and clustering for 3D cultural heritage landmarks reconstruction. *Multimedia Tools and Applications*, 1-37.