sconce.models package¶
Subpackages¶
Submodules¶
sconce.models.basic_autoencoder module¶
class sconce.models.basic_autoencoder.BasicAutoencoder(image_height, image_width, hidden_size, latent_size)[source]¶
Bases: torch.nn.modules.module.Module
A basic 2D image autoencoder built up of fully connected layers, three each in the encoder and the decoder.
- Loss:
- This model uses binary cross-entropy for the loss.
- Metrics:
- None
Parameters: - image_height (int) – image height in pixels.
- image_width (int) – image width in pixels.
- hidden_size (int) – the size of the hidden fully connected layers.
- latent_size (int) – the size of the latent representation.
sconce.models.basic_classifier module¶
class sconce.models.basic_classifier.BasicClassifier(image_height, image_width, image_channels, convolutional_layer_kwargs, fully_connected_layer_kwargs, num_categories=10)[source]¶
Bases: torch.nn.modules.module.Module
A basic 2D image classifier built up of some number of convolutional layers followed by some number of densely connected layers.
- Loss:
- This model uses cross-entropy for the loss.
- Metrics:
- classification_accuracy: [0.0, 1.0] the fraction of correctly predicted labels.
Parameters: - image_height (int) – image height in pixels.
- image_width (int) – image width in pixels.
- image_channels (int) – number of channels in the input images.
- convolutional_layer_kwargs (list[dict]) – a list of dictionaries describing the convolutional layers. See Convolution2dLayer for details.
- fully_connected_layer_kwargs (list[dict]) – a list of dictionaries describing the fully connected layers. See FullyConnectedLayer for details.
- num_categories (int) – [2, inf) the number of different image classes.
layers¶
classmethod new_from_yaml_file(yaml_file)[source]¶
Construct a new BasicClassifier from a yaml file.
Parameters: yaml_file (file) – a file-like object that yaml contents can be read from. Example yaml file contents:
---
# Values for MNIST and FashionMNIST
image_height: 28
image_width: 28
image_channels: 1
num_categories: 10

# Remaining values are not related to the dataset
convolutional_layer_attributes: ["out_channels", "stride", "padding", "kernel_size"]
convolutional_layer_values: [
    # ============== ======== ========= =============
    [16,             1,       4,        9],
    [8,              2,       1,        3],
    [8,              2,       1,        3],
    [8,              2,       1,        3],
    [8,              2,       1,        3],
]
fully_connected_layer_attributes: ['out_size', 'dropout']
fully_connected_layer_values: [
    # ====== =========
    [100,    0.4],
    [100,    0.8],
]
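The yaml layout above pairs each row of the `*_values` lists with the names in the matching `*_attributes` list to form one kwargs dictionary per layer. A minimal sketch of that pairing in plain Python (the pairing rule is inferred from the yaml layout, not taken from sconce's source):

```python
# Attribute names and per-layer value rows, as in the yaml example above.
convolutional_layer_attributes = ["out_channels", "stride", "padding", "kernel_size"]
convolutional_layer_values = [
    [16, 1, 4, 9],
    [8, 2, 1, 3],
    [8, 2, 1, 3],
    [8, 2, 1, 3],
    [8, 2, 1, 3],
]

# Zip each row of values with the attribute names to get one kwargs
# dict per convolutional layer.
convolutional_layer_kwargs = [
    dict(zip(convolutional_layer_attributes, row))
    for row in convolutional_layer_values
]

print(convolutional_layer_kwargs[0])
# {'out_channels': 16, 'stride': 1, 'padding': 4, 'kernel_size': 9}
```

The same pairing applies to the fully connected lists (`fully_connected_layer_attributes` with `fully_connected_layer_values`).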
classmethod new_from_yaml_filename(yaml_filename)[source]¶
Construct a new BasicClassifier from a yaml file.
Parameters: yaml_filename (path) – the filename of a yaml file. See new_from_yaml_file() for more details.
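Presumably new_from_yaml_filename() is a thin wrapper that opens the named file and delegates to new_from_yaml_file(); a hypothetical sketch of that relationship (DummyModel here is illustrative and not part of sconce):

```python
import os
import tempfile


class DummyModel:
    """Illustrative stand-in for BasicClassifier (not part of sconce)."""

    def __init__(self, text):
        self.text = text

    @classmethod
    def new_from_yaml_file(cls, yaml_file):
        # The real method would parse yaml here; we just read the text.
        return cls(yaml_file.read())

    @classmethod
    def new_from_yaml_filename(cls, yaml_filename):
        # Assumed behavior: open the file, delegate to the file-object variant.
        with open(yaml_filename) as yaml_file:
            return cls.new_from_yaml_file(yaml_file)


# Write a tiny yaml file and construct a model from its filename.
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    f.write("image_height: 28\n")
    name = f.name

model = DummyModel.new_from_yaml_filename(name)
print(model.text)  # prints: image_height: 28
os.unlink(name)
```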
sconce.models.multilayer_perceptron module¶
class sconce.models.multilayer_perceptron.MultilayerPerceptron(image_height, image_width, image_channels, layer_kwargs, num_categories=10)[source]¶
Bases: torch.nn.modules.module.Module
A basic 2D image multi-layer perceptron built up of a number of densely connected layers.
- Loss:
- This model uses cross-entropy for the loss.
- Metrics:
- classification_accuracy: [0.0, 1.0] the fraction of correctly predicted labels.
Parameters: - image_height (int) – image height in pixels.
- image_width (int) – image width in pixels.
- image_channels (int) – number of channels in the input images.
- layer_kwargs (list[dict]) – a list of dictionaries describing the layers. See FullyConnectedLayer for details.
- num_categories (int) – [2, inf) the number of different image classes.
classmethod new_from_yaml_file(yaml_file)[source]¶
Construct a new MultilayerPerceptron from a yaml file.
Parameters: yaml_file (file) – a file-like object that yaml contents can be read from. Example yaml file contents:
---
# Values for MNIST and FashionMNIST
image_height: 28
image_width: 28
image_channels: 1
num_categories: 10

layer_attributes: ['out_size', 'dropout', 'with_batchnorm']
layer_values: [
    # ====== ========= ================
    [100,    0.4,      true],
    [100,    0.8,      true],
]
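For the example yaml above (28×28×1 inputs, two hidden layers of 100 units, 10 categories), the dense-layer shapes and parameter count can be sketched as follows; the assumption that the input is flattened and that a final linear layer maps to num_categories is inferred from the class description, not taken from sconce's source:

```python
# Values from the yaml example above.
image_height, image_width, image_channels = 28, 28, 1
num_categories = 10
hidden_sizes = [100, 100]  # the out_size column of layer_values

# Flattened input dimension, then hidden layers, then the output layer
# (assumed architecture: input -> hidden layers -> num_categories).
in_size = image_height * image_width * image_channels
sizes = [in_size] + hidden_sizes + [num_categories]

# Weights (i * o) plus biases (o) for each fully connected layer.
param_count = sum(i * o + o for i, o in zip(sizes, sizes[1:]))

print(sizes)        # [784, 100, 100, 10]
print(param_count)  # 89610
```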
classmethod new_from_yaml_filename(yaml_filename)[source]¶
Construct a new MultilayerPerceptron from a yaml file.
Parameters: yaml_filename (path) – the filename of a yaml file. See new_from_yaml_file() for more details.
sconce.models.wide_resnet_image_classifier module¶
class sconce.models.wide_resnet_image_classifier.AdaptiveAveragePooling2dLayer(in_channels, output_size, inplace_activation=False, preactivate=False, with_batchnorm=True)[source]¶
Bases: torch.nn.modules.module.Module
class sconce.models.wide_resnet_image_classifier.WideResnetBlock_3x3(in_channels, out_channels, stride)[source]¶
Bases: torch.nn.modules.module.Module
class sconce.models.wide_resnet_image_classifier.WideResnetGroup_3x3(in_channels, out_channels, stride, num_blocks)[source]¶
Bases: torch.nn.modules.module.Module
class sconce.models.wide_resnet_image_classifier.WideResnetImageClassifier(image_channels=1, depth=28, widening_factor=10, num_categories=10)[source]¶
Bases: torch.nn.modules.module.Module
A wide resnet image classifier, based on the Wide Residual Networks paper (Zagoruyko & Komodakis, 2016).
- Loss:
- This model uses cross-entropy for the loss.
- Metrics:
- classification_accuracy: [0.0, 1.0] the fraction of correctly predicted labels.
Parameters: - image_channels (int) – number of channels in the input images.
- depth (int) – total number of convolutional layers in the network. This should equal 6n + 4, where n is a positive integer.
- widening_factor (int) – [1, inf) determines how many convolutional channels are in the network (see paper above for details).
- num_categories (int) – [2, inf) the number of different image classes.
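The depth constraint above (depth = 6n + 4 for some positive integer n, satisfied by the default depth of 28 with n = 4) can be checked with a small helper; this sketch is illustrative and not part of the sconce API:

```python
def blocks_per_group(depth):
    """Return n such that depth == 6 * n + 4, or raise if depth is invalid.

    In the wide resnet layout (assumed from the paper), each of three
    groups then contains n blocks of two 3x3 convolutions each.
    """
    n, remainder = divmod(depth - 4, 6)
    if remainder != 0 or n < 1:
        raise ValueError(f"depth must equal 6n + 4 for some n >= 1, got {depth}")
    return n


print(blocks_per_group(28))                # 4  (the default depth)
print([6 * n + 4 for n in range(1, 5)])    # valid depths: [10, 16, 22, 28]
```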