backwardcompatibilityml.loss package

Submodules

backwardcompatibilityml.loss.new_error module

class backwardcompatibilityml.loss.new_error.BCBinaryCrossEntropyLoss(h1, h2, lambda_c, discriminant_pivot=0.5, **kwargs)

Bases: torch.nn.modules.module.Module

Backward Compatibility Binary Cross-entropy Loss

This class implements the backward compatibility loss function with the underlying loss function being the binary cross-entropy loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   bcloss = BCBinaryCrossEntropyLoss(h1, h2, lambda_c)

   for x, y in training_data:
       loss = bcloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the bcloss function instance. It calculates the outputs of h1 and h2 internally.
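
In a complete training loop, the returned loss is used like any other PyTorch loss tensor. The sketch below is illustrative only: the optimizer choice, learning rate, and the MyModel / MyNewModel / training_data placeholders are assumptions carried over from the example above, not part of this API.

   import torch

   h1 = MyModel()
   # ... train h1 on the original data ...
   h1.eval()  # freeze the reference model's behaviour

   h2 = MyNewModel()
   lambda_c = 0.5
   bcloss = BCBinaryCrossEntropyLoss(h1, h2, lambda_c)
   optimizer = torch.optim.SGD(h2.parameters(), lr=0.01)  # illustrative choice

   for x, y in training_data:
       optimizer.zero_grad()
       loss = bcloss(x, y)  # h1(x) and h2(x) are computed internally
       loss.backward()
       optimizer.step()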

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values panalize less and higher values penalize more.
dissonance(h2_support_output_sigmoid, target_labels)
forward(x, y, reduction='mean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
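
Conceptually, the new-error losses in this module add a dissonance penalty to the underlying loss: the penalty is computed on the examples that the reference model h1 already classifies correctly, so that h2 is discouraged from introducing new errors on them. The sketch below illustrates that idea in plain PyTorch; it is a simplification under stated assumptions (binary labels in {0, 1}, a simple additive weighting by lambda_c) and may differ from the library's exact implementation.

   import torch
   import torch.nn.functional as F

   def bc_binary_cross_entropy_sketch(h1_prob, h2_prob, y, lambda_c, pivot=0.5):
       # Base loss on all examples.
       base_loss = F.binary_cross_entropy(h2_prob, y)

       # "Support" of h1: examples the reference model classifies correctly.
       h1_correct = (h1_prob >= pivot).float() == y

       # Dissonance: extra penalty when h2 is wrong on those examples.
       if h1_correct.any():
           dissonance = F.binary_cross_entropy(h2_prob[h1_correct], y[h1_correct])
       else:
           dissonance = torch.zeros(())

       return base_loss + lambda_c * dissonance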

class backwardcompatibilityml.loss.new_error.BCCrossEntropyLoss(h1, h2, lambda_c, **kwargs)

Bases: torch.nn.modules.module.Module

Backward Compatibility Cross-entropy Loss

This class implements the backward compatibility loss function with the underlying loss function being the cross-entropy loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   bcloss = BCCrossEntropyLoss(h1, h2, lambda_c)

   for x, y in training_data:
       loss = bcloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the bcloss function instance. It calculates the outputs of h1 and h2 internally.

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h2_output_logit, target_labels)
forward(x, y, reduction='mean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class backwardcompatibilityml.loss.new_error.BCKLDivergenceLoss(h1, h2, lambda_c, num_classes=None, **kwargs)

Bases: torch.nn.modules.module.Module

Backward Compatibility Kullback–Leibler Divergence Loss

This class implements the backward compatibility loss function with the underlying loss function being the Kullback–Leibler Divergence loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   bcloss = BCKLDivergenceLoss(h1, h2, lambda_c, num_classes=num_classes)

   for x, y in training_data:
       loss = bcloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the bcloss function instance. It calculates the outputs of h1 and h2 internally.

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values penalize less and higher values penalize more.
  • num_classes – An integer denoting the number of classes that we are attempting to classify the input into.
dissonance(h2_output_log_softmax, target_labels)
forward(x, y, reduction='batchmean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
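
The num_classes argument is presumably needed because KL divergence compares two probability distributions, so the integer target labels have to be expanded into distributions over the classes. A hedged sketch of that encoding step in plain PyTorch (the values and tensor shapes are illustrative, and the library's internal handling may differ):

   import torch
   import torch.nn.functional as F

   num_classes = 10              # illustrative value
   y = torch.tensor([3, 7, 1])   # integer class labels

   # One-hot target distributions for the KL term.
   target_dist = F.one_hot(y, num_classes=num_classes).float()

   # F.kl_div expects log-probabilities as input and probabilities as target.
   h2_log_softmax = torch.log_softmax(torch.randn(3, num_classes), dim=1)
   kl = F.kl_div(h2_log_softmax, target_dist, reduction='batchmean')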

class backwardcompatibilityml.loss.new_error.BCNLLLoss(h1, h2, lambda_c, **kwargs)

Bases: torch.nn.modules.module.Module

Backward Compatibility Negative Log Likelihood Loss

This class implements the backward compatibility loss function with the underlying loss function being the Negative Log Likelihood loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   bcloss = BCNLLLoss(h1, h2, lambda_c)

   for x, y in training_data:
       loss = bcloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the bcloss function instance. It calculates the outputs of h1 and h2 internally.

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values penalize less and higher values penalize more.
forward(x, y, reduction='mean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

backwardcompatibilityml.loss.strict_imitation module

class backwardcompatibilityml.loss.strict_imitation.StrictImitationBinaryCrossEntropyLoss(h1, h2, lambda_c, discriminant_pivot=0.5, **kwargs)

Bases: torch.nn.modules.module.Module

Strict Imitation Binary Cross-entropy Loss

This class implements the strict imitation loss function with the underlying loss function being the binary cross-entropy loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   siloss = StrictImitationBinaryCrossEntropyLoss(h1, h2, lambda_c)

   for x, y in training_data:
       loss = siloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the siloss function instance. It calculates the outputs of h1 and h2 internally.

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h1_output_sigmoid, h2_output_sigmoid)
forward(x, y, reduction='mean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
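
The strict imitation losses differ from the new-error losses above in what the dissonance term compares: instead of penalizing h2 for being wrong where h1 is right, they penalize h2 for deviating from h1's own outputs on those examples (note the dissonance(h1_output_sigmoid, h2_output_sigmoid) signature). The sketch below illustrates the idea only; the simple additive weighting by lambda_c is an assumption, and the library's exact computation may differ.

   import torch
   import torch.nn.functional as F

   def strict_imitation_bce_sketch(h1_prob, h2_prob, y, lambda_c, pivot=0.5):
       # Base loss against the ground-truth labels.
       base_loss = F.binary_cross_entropy(h2_prob, y)

       # Examples the reference model h1 classifies correctly.
       h1_correct = (h1_prob >= pivot).float() == y

       # Dissonance: pull h2's probabilities toward h1's on those examples.
       if h1_correct.any():
           dissonance = F.binary_cross_entropy(h2_prob[h1_correct], h1_prob[h1_correct])
       else:
           dissonance = torch.zeros(())

       return base_loss + lambda_c * dissonance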

class backwardcompatibilityml.loss.strict_imitation.StrictImitationCrossEntropyLoss(h1, h2, lambda_c, **kwargs)

Bases: torch.nn.modules.module.Module

Strict Imitation Cross-entropy Loss

This class implements the strict imitation loss function with the underlying loss function being the cross-entropy loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   siloss = StrictImitationCrossEntropyLoss(h1, h2, lambda_c)

   for x, y in training_data:
       loss = siloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the siloss function instance. It calculates the outputs of h1 and h2 internally.

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h1_output_labels, h2_output_logit)
forward(x, y, reduction='mean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class backwardcompatibilityml.loss.strict_imitation.StrictImitationKLDivergenceLoss(h1, h2, lambda_c, num_classes=None, **kwargs)

Bases: torch.nn.modules.module.Module

Strict Imitation Kullback–Leibler Divergence Loss

This class implements the strict imitation loss function with the underlying loss function being the Kullback–Leibler Divergence loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   siloss = StrictImitationKLDivergenceLoss(h1, h2, lambda_c, num_classes=num_classes)

   for x, y in training_data:
       loss = siloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the siloss function instance. It calculates the outputs of h1 and h2 internally.

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values penalize less and higher values penalize more.
  • num_classes – An integer denoting the number of classes that we are attempting to classify the input into.
dissonance(h1_output_logsoftmax, h2_output_logsoftmax)
forward(x, y, reduction='batchmean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
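
Here the dissonance term compares h1's and h2's predicted distributions directly (note the dissonance(h1_output_logsoftmax, h2_output_logsoftmax) signature). A minimal sketch of that comparison in plain PyTorch, purely for illustration (the tensor shapes are made up, and the library's reduction and weighting may differ):

   import torch
   import torch.nn.functional as F

   h1_log_softmax = torch.log_softmax(torch.randn(3, 10), dim=1)
   h2_log_softmax = torch.log_softmax(torch.randn(3, 10), dim=1)

   # F.kl_div expects log-probabilities as input and probabilities as target,
   # so h1's log-softmax output is exponentiated back to probabilities.
   kl = F.kl_div(h2_log_softmax, h1_log_softmax.exp(), reduction='batchmean')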

class backwardcompatibilityml.loss.strict_imitation.StrictImitationNLLLoss(h1, h2, lambda_c, **kwargs)

Bases: torch.nn.modules.module.Module

Strict Imitation Negative Log Likelihood Loss

This class implements the strict imitation loss function with the underlying loss function being the Negative Log Likelihood loss.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.eval()  # it is important that h1 be put in evaluation mode

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # this may be the same model type as MyModel
   siloss = StrictImitationNLLLoss(h1, h2, lambda_c)

   for x, y in training_data:
       loss = siloss(x, y)
       loss.backward()

Note that we pass in the input and the target directly to the siloss function instance. It calculates the outputs of h1 and h2 internally.

Parameters:
  • h1 – Our reference model which we would like to be compatible with.
  • h2 – Our new model which will be the updated model.
  • lambda_c – A float between 0.0 and 1.0, which is a regularization parameter that determines how much we want to penalize model h2 for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h1_output_prob, h2_output_prob)
forward(x, y, reduction='mean')

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

Module contents