KongNetDecoder¶

class KongNetDecoder(encoder_channels, decoder_channels, n_blocks=5, attention_type='scse', *, center=True)[source]¶

Decoder module for KongNet architecture.

This decoder implements a U-Net style decoder with multiple decoder blocks, attention mechanisms, and optional center block at the bottleneck.

Parameters:
  • encoder_channels (list[int]) – Number of channels at each encoder level.

  • decoder_channels (Tuple[int, ...]) – Number of channels at each decoder level.

  • n_blocks (int) – Number of decoder blocks. Default: 5.

  • attention_type (str) – Type of attention mechanism to use. Default: ‘scse’.

  • center (bool) – Whether to include a center block at the bottleneck. Default: True.

Raises:

ValueError – If n_blocks does not match the length of decoder_channels.
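The validation behind this error can be sketched in plain Python. The helper name below is illustrative, not part of the KongNet API; it only mirrors the documented check that n_blocks must equal the number of decoder channel specifications:

```python
def check_decoder_config(decoder_channels, n_blocks):
    """Illustrative sketch of the documented n_blocks validation.

    Raises ValueError when n_blocks does not match the length of
    decoder_channels, as KongNetDecoder is documented to do.
    """
    if n_blocks != len(decoder_channels):
        raise ValueError(
            f"n_blocks ({n_blocks}) does not match length of "
            f"decoder_channels ({len(decoder_channels)})"
        )


check_decoder_config((256, 128, 64, 32, 16), n_blocks=5)  # passes silently
```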


Methods

forward

Forward pass through the decoder.

Attributes

training

forward(*features)[source]¶

Forward pass through the decoder.

Parameters:

*features (Tensor) – Feature tensors from the encoder at different scales.

Returns:

Decoded output tensor

Return type:

torch.Tensor
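The shape flow through the decoder can be traced without instantiating the module. The sketch below is an assumption-laden illustration, not KongNet's implementation: it assumes each of the n_blocks decoder blocks upsamples its input by a factor of 2 while mapping to the corresponding entry of decoder_channels, as in a typical U-Net style decoder:

```python
def trace_decoder_shapes(bottleneck_hw, decoder_channels):
    """Trace (channels, height, width) after each hypothetical decoder block.

    Assumes every block doubles the spatial resolution (typical U-Net
    behavior); this is a sketch, not the actual KongNetDecoder logic.
    """
    h, w = bottleneck_hw
    shapes = []
    for ch in decoder_channels:
        h, w = h * 2, w * 2  # each block upsamples by 2
        shapes.append((ch, h, w))
    return shapes


# From an 8x8 bottleneck with five decoder blocks:
trace_decoder_shapes((8, 8), (256, 128, 64, 32, 16))
# → final shape (16, 256, 256)
```

Under these assumptions, five blocks take an 8×8 bottleneck back to 256×256, with the channel count shrinking to the last entry of decoder_channels.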