DecoderBlock¶
tiatoolbox.models.architecture.grandqc.DecoderBlock
- class DecoderBlock(in_channels, skip_channels, out_channels)[source]¶
Decoder block for UNet++ architecture.
This block upsamples its input and fuses it with skip-connection features from the encoder. It consists of two convolutional layers with ReLU activation and optional attention mechanisms (currently identity placeholders).
- conv1¶
First convolutional block applied after concatenating input and skip features.
- Type:
nn.Module
- conv2¶
Second convolutional block for further refinement.
- Type:
nn.Module
- attention1¶
Attention mechanism applied before the first convolution (currently Identity).
- Type:
nn.Module
- attention2¶
Attention mechanism applied after the second convolution (currently Identity).
- Type:
nn.Module
Example
>>> block = DecoderBlock(in_channels=128, skip_channels=64, out_channels=64)
>>> input_tensor = torch.randn(1, 128, 64, 64)
>>> skip = torch.randn(1, 64, 128, 128)
>>> output = block(input_tensor, skip)
>>> output.shape
torch.Size([1, 64, 128, 128])
Initialize DecoderBlock.
Creates two convolutional layers and optional attention modules for feature refinement during decoding.
- Parameters:
in_channels (int) – Number of input channels from the previous decoder layer.
skip_channels (int) – Number of channels in the skip connection from the encoder.
out_channels (int) – Number of output channels produced by the block.
Methods

forward(input_tensor[, skip])
Forward pass through the decoder block.

Attributes

training

- forward(input_tensor, skip=None)[source]¶
Forward pass through the decoder block.
Upsamples the input tensor, concatenates it with the skip connection (if provided), and applies two convolutional layers with attention.
- Parameters:
input_tensor (torch.Tensor) – (B, C_in, H, W). Input tensor from the previous decoder layer.
skip (torch.Tensor | None) – (B, C_skip, H*2, W*2). Skip connection tensor from the encoder. Defaults to None.
self (DecoderBlock)
- Returns:
(B, C_out, H*2, W*2). Output tensor after decoding and feature refinement.
- Return type:
torch.Tensor