Conv2dReLU¶

class Conv2dReLU(in_channels, out_channels, kernel_size, padding=0, stride=1, *, bias=False)[source]¶

Conv2d + BatchNorm + ReLU block.

This class implements a common convolutional block used in encoder-decoder architectures. It consists of a 2D convolution followed by batch normalization and a ReLU activation function.
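The block described above can be sketched as a small `nn.Module` built from the three documented attributes (`conv`, `norm`, `activation`). This is a minimal sketch based on the signature and attribute list on this page, not the library's exact implementation:

```python
import torch
import torch.nn as nn


class Conv2dReLU(nn.Module):
    """Conv2d + BatchNorm2d + ReLU block (sketch of the documented interface)."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 padding=0, stride=1, *, bias=False):
        super().__init__()
        # bias defaults to False because BatchNorm's learnable shift
        # makes a convolution bias redundant
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=bias)
        self.norm = nn.BatchNorm2d(out_channels)
        self.activation = nn.ReLU(inplace=True)

    def forward(self, x):
        # conv -> batch norm -> ReLU, in that order
        return self.activation(self.norm(self.conv(x)))
```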

conv¶

Convolutional layer for feature extraction.

Type:

nn.Conv2d

norm¶

Batch normalization layer for stabilizing training.

Type:

nn.BatchNorm2d

activation¶

ReLU activation function applied after normalization.

Type:

nn.ReLU

Example

>>> import torch
>>> block = Conv2dReLU(
...     in_channels=32, out_channels=64, kernel_size=3, padding=1
... )
>>> x = torch.randn(1, 32, 128, 128)
>>> output = block(x)
>>> output.shape
torch.Size([1, 64, 128, 128])

Initialize the Conv2dReLU block.

Creates a convolutional layer followed by batch normalization and a ReLU activation function. This block is commonly used in UNet++ and similar encoder-decoder architectures for feature extraction.

Parameters:
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • kernel_size (int) – Size of the convolution kernel.

  • padding (int) – Padding applied to the input. Defaults to 0.

  • stride (int) – Stride of the convolution. Defaults to 1.

  • bias (bool) – If True, adds a learnable bias to the output. Defaults to False.
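How kernel_size, padding, and stride interact to determine the output spatial size follows the standard Conv2d formula, floor((size + 2·padding − kernel_size) / stride) + 1. A small helper (hypothetical, not part of this class) makes the common settings concrete:

```python
def conv_out_size(size, kernel_size, padding=0, stride=1):
    # standard Conv2d output-size formula:
    # floor((size + 2*padding - kernel_size) / stride) + 1
    return (size + 2 * padding - kernel_size) // stride + 1


# kernel_size=3 with padding=1 and stride=1 preserves spatial size,
# which is why the Example above maps 128x128 -> 128x128
assert conv_out_size(128, 3, padding=1, stride=1) == 128

# the same kernel with stride=2 halves the spatial size
assert conv_out_size(128, 3, padding=1, stride=2) == 64
```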
