Summary of architectures, prediction methods, data sets, and validation results used for cell detection/segmentation, object detection, cell phenotyping, and community analysis
| Domain | Network architecture | Prediction method | Data set | Validation details | Metrics |
|---|---|---|---|---|---|
| Cell detection | Supervised Mask R-CNN with ResNet-18 backbone | 512 × 512 pixel crops, 0.5 mpp | 16 500 cells from different patients, manually segmented by histopathologists | The train/test split was stratified by patient, i.e., the train and test sets contain no crops from the same patient; the train/test proportions were 75% and 25%, respectively | F1-score, 0.74 |
| Cell segmentation | As above | As above | As above | As above | IoU, 0.76 |
| Object detection (fat, trabeculae, and endothelium) | Supervised DeepLabV3+ with EfficientNet-b0 encoder | 1024 × 1024 pixel crops, 0.5 mpp | 120 crops (40 for each object) from different patients, manually annotated by histopathologists | As above | Fat IoU, 0.91; trabeculae IoU, 0.94; endothelium IoU, 0.90 |
| Cell typing | Supervised ResNet-18 | 128 × 128 pixel window centered on the cell of interest | 12 500 cells from different patients, manually classified by histopathologists | As above | F1-score, 0.923; accuracy, 0.977 |
| Community analysis | Unsupervised ARGVA [59] | The slide graph of cell-to-cell interactions is used to compute an embedding for each cell | 29 slide cell neighborhood graphs used for self-supervised training, 1 510 295 nodes overall | Not applicable | Not applicable |
F1, F-score; IoU, intersection over union; ARGVA, adversarially regularized variational graph autoencoder; mpp, microns per pixel.
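
The patient-level stratification described in the validation details (no patient contributes crops to both splits, roughly 75%/25%) can be reproduced with a grouped split. A minimal sketch, assuming scikit-learn and a hypothetical `patient_ids` list per crop; the function name and toy data are illustrative, not from the paper:

```python
# Patient-stratified train/test split: a grouped split guarantees that no
# patient contributes crops to both the train and the test set.
from sklearn.model_selection import GroupShuffleSplit

def stratify_by_patient(crop_paths, patient_ids, test_size=0.25, seed=0):
    """Split crops so that train and test sets share no patients (~75/25 by patient)."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(crop_paths, groups=patient_ids))
    train = [crop_paths[i] for i in train_idx]
    test = [crop_paths[i] for i in test_idx]
    return train, test

# Toy usage (placeholder data, not from the study):
crops = [f"crop_{i}.png" for i in range(8)]
patients = ["p1", "p1", "p2", "p2", "p3", "p3", "p4", "p4"]
train_crops, test_crops = stratify_by_patient(crops, patients)
```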
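The two supervised architectures for object detection and cell typing map onto off-the-shelf implementations. A minimal instantiation sketch, assuming `segmentation_models_pytorch` for DeepLabV3+ with an EfficientNet-b0 encoder and torchvision for the ResNet-18 cell-typing classifier; the class counts and weight initialization are assumptions, as the table does not specify them:

```python
import torch
import segmentation_models_pytorch as smp
from torchvision import models

# DeepLabV3+ with an EfficientNet-b0 encoder for the three tissue classes
# (fat, trabeculae, endothelium); ImageNet encoder weights could be used instead.
seg_model = smp.DeepLabV3Plus(
    encoder_name="efficientnet-b0",
    encoder_weights=None,
    in_channels=3,
    classes=3,
)

# ResNet-18 classifier over 128 × 128 windows centered on a detected cell;
# `num_cell_types` is a placeholder, the paper's class list is not shown here.
num_cell_types = 5
cls_model = models.resnet18()
cls_model.fc = torch.nn.Linear(cls_model.fc.in_features, num_cell_types)

# Forward passes on dummy tensors just to check output shapes.
with torch.no_grad():
    seg_out = seg_model(torch.randn(1, 3, 1024, 1024))  # (1, 3, 1024, 1024)
    cls_out = cls_model(torch.randn(1, 3, 128, 128))     # (1, num_cell_types)
```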
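For the community analysis row, per-cell embeddings are learned self-supervised from the slide cell neighborhood graph with an ARGVA. A minimal sketch, assuming PyTorch Geometric's `ARGVA` model; the encoder/discriminator sizes and the toy graph are placeholders, not the paper's configuration:

```python
import torch
from torch_geometric.nn import ARGVA, GCNConv

class Encoder(torch.nn.Module):
    """GCN encoder returning the mean and log-std of the latent distribution."""
    def __init__(self, in_channels, hidden, latent):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden)
        self.conv_mu = GCNConv(hidden, latent)
        self.conv_logstd = GCNConv(hidden, latent)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv_mu(x, edge_index), self.conv_logstd(x, edge_index)

class Discriminator(torch.nn.Module):
    """MLP that adversarially regularizes the latent space."""
    def __init__(self, latent, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, z):
        return self.net(z)

model = ARGVA(Encoder(in_channels=16, hidden=32, latent=16),
              Discriminator(latent=16))

# Toy graph: 10 cells with 16 features each and a few edges.
x = torch.randn(10, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])

# Per-cell embeddings and the self-supervised edge reconstruction loss;
# a full training loop would also alternate the discriminator, KL, and
# regularization losses provided by the ARGVA model.
z = model.encode(x, edge_index)           # shape (10, 16)
recon = model.recon_loss(z, edge_index)
```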