GrandQCModel

class GrandQCModel(num_output_channels=2)[source]

GrandQC Tissue Detection Model.

This model implements a UNet++ architecture with an EfficientNet encoder for tissue detection in whole slide images (WSIs). It is designed to identify tissue regions and background areas for quality control in digital pathology workflows.

The model uses JPEG compression and ImageNet normalization during preprocessing and applies argmin-based postprocessing to generate tissue masks.

Example

>>> from pathlib import Path
>>> from tiatoolbox.models.engine.semantic_segmentor import SemanticSegmentor
>>> segmentor = SemanticSegmentor(model="grandqc_tissue_detection")
>>> results = segmentor.run(
...     ["/example_wsi.svs"],
...     masks=None,
...     auto_get_mask=False,
...     patch_mode=False,
...     save_dir=Path("/tissue_mask/"),
...     output_type="annotationstore",
... )

References

[1] Weng, Zhilong et al. “GrandQC: A comprehensive solution to quality control problem in digital pathology.” Nature Communications, 2024. DOI: 10.1038/s41467-024-54769-y

Initialize GrandQCModel.

Sets up the UNet++ decoder, EfficientNet encoder, and segmentation head for tissue detection.

Parameters:

num_output_channels (int) – Number of output classes. Defaults to 2 (Tissue and Background).

Methods

forward

Forward pass through the GrandQC model.

infer_batch

Run inference on a batch of images.

postproc

Postprocess model output to generate tissue mask.

preproc

Preprocess input image for inference.

Attributes

training

forward(x, *args, **kwargs)[source]

Forward pass through the GrandQC model.

Sequentially processes the input tensor through the encoder, decoder, and segmentation head to produce tissue segmentation predictions.

Parameters:
  • x (torch.Tensor) – Input tensor of shape (N, C, H, W).

  • *args (tuple) – Additional positional arguments (unused).

  • **kwargs (dict) – Additional keyword arguments (unused).

Returns:

Segmentation output tensor of shape (N, num_classes, H, W).

Return type:

torch.Tensor
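
The encoder → decoder → segmentation-head sequencing can be sketched with stand-in layers. Note this is an illustrative sketch of the data flow and shapes only; the real model uses a UNet++ decoder with an EfficientNet encoder, which these tiny convolutions do not reproduce.

```python
import torch
from torch import nn

# Stand-in layers mimicking the forward pass described above:
# encoder -> decoder -> segmentation head. Illustrative only.
encoder = nn.Conv2d(3, 16, kernel_size=3, padding=1)
decoder = nn.Conv2d(16, 16, kernel_size=3, padding=1)
head = nn.Conv2d(16, 2, kernel_size=1)  # 2 classes: tissue / background

x = torch.randn(4, 3, 256, 256)     # (N, C, H, W) input
logits = head(decoder(encoder(x)))  # (N, num_classes, H, W)
print(tuple(logits.shape))          # (4, 2, 256, 256)
```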

static infer_batch(model, batch_data, *, device)[source]

Run inference on a batch of images.

Transfers the model and input batch to the specified device, performs a forward pass, and returns softmax probabilities.

Parameters:
  • model (torch.nn.Module) – PyTorch model instance.

  • batch_data (torch.Tensor) – Batch of input images in NHWC format.

  • device (str) – Device for inference (e.g., “cpu” or “cuda”).

Returns:

Inference results as a NumPy array of shape (N, H, W, C).

Return type:

np.ndarray

Example

>>> batch = torch.randn(4, 256, 256, 3)
>>> probs = GrandQCModel.infer_batch(model, batch, device="cpu")
>>> probs.shape
(4, 256, 256, 2)
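
The layout conversions implied by the NHWC input and (N, H, W, C) output can be sketched in NumPy. This is a sketch of the described behaviour (permute to NCHW for the model, softmax over channels, permute back), not the library code itself; the logits array stands in for the model's forward output.

```python
import numpy as np

rng = np.random.default_rng(0)
batch_nhwc = rng.standard_normal((4, 256, 256, 3))

# NHWC -> NCHW, as a PyTorch model expects
batch_nchw = batch_nhwc.transpose(0, 3, 1, 2)

# stand-in 2-channel logits; softmax over the channel axis
logits = rng.standard_normal((4, 2, 256, 256))
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)

# back to NHWC for the returned array of shape (N, H, W, C)
probs_nhwc = probs.transpose(0, 2, 3, 1)
print(probs_nhwc.shape)  # (4, 256, 256, 2)
```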

static postproc(image)[source]

Postprocess model output to generate tissue mask.

Applies argmin across channels to classify pixels as tissue or background.

Parameters:

image (np.ndarray) – Input probability map as a NumPy array of shape (H, W, C).

Returns:

Binary tissue mask where 0 = Tissue and 1 = Background.

Return type:

np.ndarray

Example

>>> probs = np.random.rand(256, 256, 2)
>>> mask = GrandQCModel.postproc(probs)
>>> mask.shape
(256, 256)
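
The argmin labelling convention can be checked directly with NumPy. This illustrates the rule described above (the channel with the lowest value is selected per pixel, giving 0 for Tissue and 1 for Background); it is not the tiatoolbox implementation itself.

```python
import numpy as np

probs = np.array([[[0.1, 0.9],    # channel 0 lowest -> label 0 (Tissue)
                   [0.8, 0.2]]])  # channel 1 lowest -> label 1 (Background)
mask = np.argmin(probs, axis=-1)
print(mask)  # [[0 1]]
```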

static preproc(image)[source]

Preprocess input image for inference.

Applies JPEG compression and ImageNet normalization to the input image.

Parameters:

image (np.ndarray) – Input image as a NumPy array of shape (H, W, C) in uint8 format.

Returns:

Preprocessed image normalized to ImageNet statistics.

Return type:

np.ndarray

Example

>>> img = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
>>> processed = GrandQCModel.preproc(img)
>>> processed.shape
(256, 256, 3)
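
The normalization step can be sketched in NumPy, assuming the standard ImageNet statistics (mean 0.485/0.456/0.406, std 0.229/0.224/0.225); the JPEG compression step is omitted. This sketches the described preprocessing, not the exact tiatoolbox implementation.

```python
import numpy as np

# Standard ImageNet channel statistics (assumed, not taken from the source)
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
# scale uint8 to [0, 1], then normalize per channel (broadcast over last axis)
normalized = (img / 255.0 - mean) / std
print(normalized.shape)  # (256, 256, 3)
```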