Graph Neural Network for Cell Tracking in Microscopy Videos

[ECCV 2022]
School of Electrical and Computer Engineering, Ben-Gurion University, Israel

Abstract

We present a novel graph neural network (GNN) approach for cell tracking in high-throughput microscopy videos. By modeling the entire time-lapse sequence as a directed graph, in which cell instances are represented by nodes and their associations by edges, we extract the entire set of cell trajectories by looking for the maximal paths in the graph. This is accomplished by several key contributions incorporated into an end-to-end deep learning framework. We exploit a deep metric learning algorithm to extract cell feature vectors that distinguish between instances of different biological cells and group together instances of the same cell. We introduce a new GNN block type that enables a mutual update of node and edge feature vectors, thus facilitating the underlying message passing process. The message passing concept, whose extent is determined by the number of GNN blocks, is of fundamental importance as it enables the 'flow' of information between nodes and edges far beyond their neighbors in consecutive frames. Finally, we solve an edge classification problem and use the identified active edges to construct the cells' tracks and lineage trees. We demonstrate the strengths of the proposed cell tracking approach by applying it to 2D and 3D datasets of different cell types, imaging setups, and experimental conditions, and show that our framework outperforms current state-of-the-art methods on most of the evaluated datasets.


How does it work?

(a) The input is composed of a live cell microscopy sequence of length T and the corresponding sequence of label maps.

(b) Each cell instance in the sequence is represented by a feature vector that combines deep metric learning (DML) embeddings with spatio-temporal features.
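For illustration, here is a minimal sketch (with hypothetical names, not the authors' code) of how such a per-instance feature vector could be assembled from a DML embedding and simple spatio-temporal descriptors:

```python
# Illustrative only: concatenate a learned DML embedding with basic
# spatio-temporal features (frame index, centroid, bounding-box size).
import numpy as np

def cell_feature_vector(dml_embedding: np.ndarray,
                        frame_idx: int,
                        centroid_yx: tuple,
                        bbox_hw: tuple) -> np.ndarray:
    """Concatenate the DML embedding with spatio-temporal descriptors."""
    spatio_temporal = np.array([frame_idx, *centroid_yx, *bbox_hw], dtype=np.float32)
    return np.concatenate([dml_embedding.astype(np.float32), spatio_temporal])

# Example: a 32-d DML embedding for a cell in frame 5 centered at (120, 64)
feat = cell_feature_vector(np.random.randn(32), 5, (120.0, 64.0), (18.0, 22.0))
```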

(c) The entire microscopy sequence is encoded as a directed graph in which cell instances are represented by the nodes and their associations by the edges. Each node and edge in the graph has its own embedded feature vector.
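A minimal sketch of how candidate edges between consecutive frames might be constructed; the spatial-radius gating and all names here are illustrative assumptions, not the paper's exact construction:

```python
# Illustrative only: one node per cell instance, and directed candidate edges
# from each instance in frame t to spatially close instances in frame t+1.
import numpy as np

def build_edges(centroids_per_frame, max_dist=50.0):
    """centroids_per_frame: list over frames of (N_t, 2) centroid arrays.
    Node k corresponds to the k-th instance in frame order.
    Returns an (E, 2) array of directed (src, dst) node-index pairs."""
    offsets, total = [], 0
    for c in centroids_per_frame:
        offsets.append(total)
        total += len(c)
    edges = []
    for t in range(len(centroids_per_frame) - 1):
        src_c, dst_c = centroids_per_frame[t], centroids_per_frame[t + 1]
        for i in range(len(src_c)):
            for j in range(len(dst_c)):
                # connect only spatially close instances in consecutive frames
                if np.linalg.norm(src_c[i] - dst_c[j]) <= max_dist:
                    edges.append((offsets[t] + i, offsets[t + 1] + j))
    return np.asarray(edges, dtype=np.int64)
```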

(d) These feature vectors are encoded and updated using a graph neural network (GNN). The GNN is composed of L message passing blocks, which enable the update of edge and node features by their L-th order neighbors (i.e., cell instances that are up to L frames apart).
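The following PyTorch sketch shows one possible message passing block with a mutual node/edge update in the spirit described above; the MLP layout and residual updates are assumptions, not the authors' exact architecture. Stacking L such blocks lets information propagate to L-th order neighbors:

```python
import torch
import torch.nn as nn

class MessagePassingBlock(nn.Module):
    """Illustrative block: edges are updated from their endpoint nodes,
    then nodes are updated from aggregated incoming/outgoing edge messages."""
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU(),
            nn.Linear(edge_dim, edge_dim))
        self.node_mlp = nn.Sequential(
            nn.Linear(node_dim + 2 * edge_dim, node_dim), nn.ReLU(),
            nn.Linear(node_dim, node_dim))

    def forward(self, x, e, edge_index):
        # x: (N, node_dim) node features, e: (E, edge_dim) edge features
        # edge_index: (E, 2) long tensor of (src, dst) node indices
        src, dst = edge_index[:, 0], edge_index[:, 1]
        # 1) edge update from its two endpoint nodes and its current feature
        e = e + self.edge_mlp(torch.cat([x[src], x[dst], e], dim=-1))
        # 2) aggregate incoming and outgoing edge messages per node
        agg_in = e.new_zeros(x.size(0), e.size(1)).index_add_(0, dst, e)
        agg_out = e.new_zeros(x.size(0), e.size(1)).index_add_(0, src, e)
        # 3) node update from its own feature and the aggregated messages
        x = x + self.node_mlp(torch.cat([x, agg_in, agg_out], dim=-1))
        return x, e
```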

(e) The GNN's edge feature output is the input to an edge classifier network, which classifies the edges as active (solid lines) or non-active (dashed lines). During training, the predicted classification is compared to the ground-truth (GT) classification for the loss computation. Since all the framework components are connected in an end-to-end manner, the loss backpropagates throughout the entire network.
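A minimal sketch of this step, assuming a small MLP over the GNN's edge features trained with binary cross-entropy against the ground-truth associations (the exact classifier architecture is an assumption):

```python
import torch
import torch.nn as nn

edge_dim = 64  # assumed edge feature size
edge_classifier = nn.Sequential(
    nn.Linear(edge_dim, edge_dim), nn.ReLU(),
    nn.Linear(edge_dim, 1))  # one "active" logit per edge

def edge_loss(edge_feats, gt_active):
    """edge_feats: (E, edge_dim) GNN edge output, gt_active: (E,) in {0, 1}."""
    logits = edge_classifier(edge_feats).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(logits, gt_active.float())
```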

(f) At inference time, cell tracks are constructed by concatenating sequences of active edges that connect cells in consecutive frames.
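A simplified sketch of this track construction; it assumes one-to-one links and a fixed probability threshold, whereas mitosis events (one parent linked to two daughters) would start new tracks:

```python
def build_tracks(edge_index, edge_prob, threshold=0.5):
    """edge_index: list of (src, dst) node pairs, edge_prob: list of floats.
    Returns a list of tracks, each a list of node indices over time."""
    next_of = {}
    for (src, dst), p in zip(edge_index, edge_prob):
        # keep at most one active outgoing edge per node (greedy, illustrative)
        if p > threshold and src not in next_of:
            next_of[src] = dst
    # track starts are nodes that no active edge points to
    starts = set(next_of) - set(next_of.values())
    tracks = []
    for node in starts:
        track = [node]
        while node in next_of:
            node = next_of[node]
            track.append(node)
        tracks.append(track)
    return tracks
```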

BibTeX

@inproceedings{ben2022graph,
title={Graph Neural Network for Cell Tracking in Microscopy Videos},
author={Ben-Haim, Tal and Riklin-Raviv, Tammy},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
year={2022},
}