niteshade.models.CifarClassifier
- class niteshade.models.CifarClassifier(optimizer='adam', loss_func='cross_entropy', lr=0.0001, optim_kwargs={'weight_decay': 1e-06}, loss_kwargs={}, seed=None)
Bases: niteshade.models.BaseModel
ResNet-18 classifier inheriting from BaseModel for the torchvision CIFAR10 dataset. Achieves 80% accuracy on the held-out test set in 20 epochs using mini-batches of size 32. More details on ResNet-18 can be found in the paper “Deep Residual Learning for Image Recognition” by He et al. (https://arxiv.org/abs/1512.03385).
- Parameters
optimizer (str) –
String specifying the optimizer to use in training the neural network (Default = “adam”). Options:
“adam”: torch.optim.Adam(), “adagrad”: torch.optim.Adagrad(), “sgd”: torch.optim.SGD().
loss_func (str) –
String specifying the loss function to use in training the neural network (Default = “cross_entropy”). Options:
“mse”: nn.MSELoss(), “nll”: nn.NLLLoss(), “bce”: nn.BCELoss(), “cross_entropy”: nn.CrossEntropyLoss().
lr (float) – Learning rate to use in training the neural network (Default = 0.0001).
optim_kwargs (dict) – Dictionary containing additional optimizer keyword arguments (Default = {“weight_decay”: 1e-6}).
loss_kwargs (dict) – Dictionary containing additional keyword arguments for the loss function (Default = {}).
seed (int) – Random seed used for reproducibility (Default = None).
- __init__(optimizer='adam', loss_func='cross_entropy', lr=0.0001, optim_kwargs={'weight_decay': 1e-06}, loss_kwargs={}, seed=None)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
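A minimal usage sketch, assuming CIFAR10 images are available as float arrays of shape (N, 3, 32, 32) with integer class labels, and that step() accepts NumPy arrays like evaluate() and forward() do; the random arrays below are placeholders for a real data pipeline, and the SGD momentum value is illustrative:

    import numpy as np
    from niteshade.models import CifarClassifier

    # Instantiate with SGD instead of the default Adam; momentum and
    # weight decay are forwarded to torch.optim.SGD via optim_kwargs.
    model = CifarClassifier(optimizer="sgd",
                            loss_func="cross_entropy",
                            lr=0.001,
                            optim_kwargs={"momentum": 0.9, "weight_decay": 1e-6},
                            seed=42)

    # Placeholder training data: 128 CIFAR10-shaped images, labels in 0-9.
    X_train = np.random.rand(128, 3, 32, 32).astype(np.float32)
    y_train = np.random.randint(0, 10, size=128)

    # One pass of mini-batch gradient descent using step().
    batch_size = 32
    for i in range(0, len(X_train), batch_size):
        model.step(X_train[i:i + batch_size], y_train[i:i + batch_size])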
Methods
__init__([optimizer, loss_func, lr, ...]) – Initializes internal Module state, shared by both nn.Module and ScriptModule.
add_module(name, module) – Adds a child module to the current module.
apply(fn) – Applies fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Returns an iterator over module buffers.
children() – Returns an iterator over immediate children modules.
cpu() – Moves all model parameters and buffers to the CPU.
cuda([device]) – Moves all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Sets the module in evaluation mode.
evaluate(X_test, y_test[, batch_size]) – Test the accuracy of the CIFAR10 classifier on a test set.
extra_repr() – Set the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(x) – Perform a forward pass through the model.
get_buffer(target) – Returns the buffer given by target if it exists, otherwise throws an error.
get_extra_state() – Returns any extra state to include in the module's state_dict.
get_parameter(target) – Returns the parameter given by target if it exists, otherwise throws an error.
get_submodule(target) – Returns the submodule given by target if it exists, otherwise throws an error.
half() – Casts all floating point parameters and buffers to half datatype.
load_state_dict(state_dict[, strict]) – Copies parameters and buffers from state_dict into this module and its descendants.
modules() – Returns an iterator over all modules in the network.
named_buffers([prefix, recurse]) – Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse]) – Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]) – Returns an iterator over module parameters.
predict(x) – Predict on a data sample.
register_backward_hook(hook) – Registers a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Adds a buffer to the module.
register_forward_hook(hook) – Registers a forward hook on the module.
register_forward_pre_hook(hook) – Registers a forward pre-hook on the module.
register_full_backward_hook(hook) – Registers a backward hook on the module.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Adds a parameter to the module.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
set_extra_state(state) – This function is called from load_state_dict() to handle any extra state found within the state_dict.
share_memory() – See torch.Tensor.share_memory_().
state_dict([destination, prefix, keep_vars]) – Returns a dictionary containing a whole state of the module.
step(X_batch, y_batch) – Perform a step of gradient descent on the passed inputs (X_batch) and labels (y_batch).
to(*args, **kwargs) – Moves and/or casts the parameters and buffers.
to_empty(*, device) – Moves the parameters and buffers to the specified device without copying storage.
train([mode]) – Sets the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Moves all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Sets gradients of all model parameters to zero.
Attributes
T_destination – alias of TypeVar('T_destination', bound=Mapping[str, torch.Tensor]).
dump_patches – This allows better BC support for load_state_dict().
- evaluate(X_test, y_test, batch_size=32)
Test the accuracy of the CIFAR10 classifier on a test set.
- Parameters
X_test (np.ndarray or torch.Tensor) – Test input data.
y_test (np.ndarray or torch.Tensor) – Test target data.
batch_size (int) – Size of the mini-batches to test the model on (Default = 32).
- Returns
Accuracy on test set.
- Return type
accuracy (float)
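A short sketch of calling evaluate(), assuming model is the CifarClassifier instance from the constructor sketch above and that the test set uses the same (N, 3, 32, 32) image / integer-label format; the random arrays are placeholders for a real held-out set:

    import numpy as np

    # Placeholder held-out test set: 64 CIFAR10-shaped images, labels in 0-9.
    X_test = np.random.rand(64, 3, 32, 32).astype(np.float32)
    y_test = np.random.randint(0, 10, size=64)

    accuracy = model.evaluate(X_test, y_test, batch_size=32)
    print(f"Test accuracy: {accuracy:.3f}")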
- forward(x)
Perform a forward pass through the model.
- Parameters
x (np.ndarray or torch.Tensor) – Input array of shape (batch_size, input_shape).
- Returns
Predictions from the current state of the model.
- Return type
(np.ndarray or torch.Tensor)
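A sketch of forward(), assuming model is an initialized CifarClassifier and that the expected input shape is the CIFAR10 image shape (batch_size, 3, 32, 32); the 10-class output shape noted in the comment is what one would expect from a CIFAR10 classifier rather than something stated above:

    import torch

    # A mini-batch of 8 CIFAR10-sized images.
    x = torch.randn(8, 3, 32, 32)

    outputs = model.forward(x)   # equivalent to calling model(x) on an nn.Module
    print(outputs.shape)         # expected (8, 10): one score per CIFAR10 class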
- predict(x)
Predict on a data sample.