backwardcompatibilityml.tensorflow.loss package

Submodules

backwardcompatibilityml.tensorflow.loss.new_error module

class backwardcompatibilityml.tensorflow.loss.new_error.BCBinaryCrossEntropyLoss(h1, h2, lambda_c)

Bases: object

Backward Compatibility New Error Binary Cross Entropy Loss

This class implements the backward compatibility loss function with the underlying loss function being the Binary Cross Entropy loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCBinaryCrossEntropyLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more (see the loss sketch after this class entry).
dissonance(h2_output, target_labels)
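The objective these new-error classes optimize can be summarized as follows. This is a sketch reconstructed from the descriptions on this page, assuming the standard new-error formulation rather than quoting the implementation. Here \ell denotes the underlying loss (binary cross entropy for this class), and the dissonance term is accumulated only over the examples that h1 classifies correctly:

   \mathcal{L}_{BC}(h_2) = \ell(h_2(x), y) + \lambda_c \cdot \mathcal{D}(h_2(x), y),
   \qquad \mathcal{D} = \ell(h_2(x), y) \text{ on } \{x : h_1(x) = y\}

With \lambda_c = 0 this reduces to ordinary training of h2; as \lambda_c grows, new errors on examples that h1 already handles correctly cost more.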
class backwardcompatibilityml.tensorflow.loss.new_error.BCCrossEntropyLoss(h1, h2, lambda_c)

Bases: object

Backward Compatibility New Error Cross Entropy Loss

This class implements the backward compatibility loss function with the underlying loss function being the Cross Entropy loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCCrossEntropyLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h2_output, target_labels)
class backwardcompatibilityml.tensorflow.loss.new_error.BCKLDivLoss(h1, h2, lambda_c)

Bases: object

Backward Compatibility New Error Kullback-Leibler Divergence Loss

This class implements the backward compatibility loss function with the underlying loss function being the Kullback-Leibler divergence loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCKLDivLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h2_output, target_labels)
class backwardcompatibilityml.tensorflow.loss.new_error.BCNLLLoss(h1, h2, lambda_c, clip_value_min=1e-10, clip_value_max=4.0)

Bases: object

Backward Compatibility New Error Negative Log Likelihood Loss

This class implements the backward compatibility loss function with the underlying loss function being the Negative Log Likelihood loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCNLLLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more.
  • clip_value_min / clip_value_max – Bounds applied to the model output before its logarithm is taken (presumably via clipping inside nll_loss) to keep the loss numerically stable; they default to 1e-10 and 4.0.
dissonance(h2_output, target_labels)
nll_loss(target_labels, model_output)
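For illustration, the new-error dissonance used by the classes in this module can be sketched in a few lines of TensorFlow. This is a hypothetical reimplementation, not the library's code: new_error_bc_loss is an invented name, and sparse categorical cross entropy stands in for whichever underlying loss the chosen class wraps.

   import tensorflow as tf

   def new_error_bc_loss(h1, h2, x, y, lambda_c):
       """Hypothetical sketch of a new-error backward compatibility loss."""
       h1_output = h1(x)
       h2_output = h2(x)
       # Mask of examples the frozen reference model h1 classifies correctly.
       h1_correct = tf.equal(tf.argmax(h1_output, axis=1), tf.cast(y, tf.int64))
       mask = tf.cast(h1_correct, tf.float32)
       # Underlying loss on every example (softmax outputs, so from_logits=False).
       base = tf.keras.losses.sparse_categorical_crossentropy(y, h2_output)
       # Dissonance: the same loss restricted to examples h1 got right, so h2
       # pays extra for introducing *new* errors on them.
       dissonance = mask * base
       return tf.reduce_mean(base) + lambda_c * tf.reduce_mean(dissonance)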

backwardcompatibilityml.tensorflow.loss.strict_imitation module

class backwardcompatibilityml.tensorflow.loss.strict_imitation.BCStrictImitationBinaryCrossEntropyLoss(h1, h2, lambda_c)

Bases: object

Strict Imitation Binary Cross Entropy Loss

This class implements the strict imitation loss function with the underlying loss function being the Binary Cross Entropy loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCStrictImitationBinaryCrossEntropyLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more (see the strict imitation sketch after this class entry).
dissonance(h2_output, target_labels)
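Strict imitation differs from the new-error losses above in what the dissonance compares: instead of re-weighting h2's loss on the ground-truth labels, it penalizes disagreement between h2's outputs and h1's outputs on the examples h1 classifies correctly, pushing h2 to imitate h1 wherever h1 is right. As a sketch under the same assumptions as before (\ell again denotes the underlying loss):

   \mathcal{L}_{SI}(h_2) = \ell(h_2(x), y) + \lambda_c \cdot \ell(h_2(x), h_1(x)) \text{ on } \{x : h_1(x) = y\}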
class backwardcompatibilityml.tensorflow.loss.strict_imitation.BCStrictImitationCrossEntropyLoss(h1, h2, lambda_c)

Bases: object

Strict Imitation Cross Entropy Loss

This class implements the strict imitation loss function with the underlying loss function being the Cross Entropy loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCStrictImitationCrossEntropyLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h2_output, target_labels)
class backwardcompatibilityml.tensorflow.loss.strict_imitation.BCStrictImitationKLDivLoss(h1, h2, lambda_c)

Bases: object

Strict Imitation Kullback-Leibler Divergence Loss

This class implements the strict imitation loss function with the underlying loss function being the Kullback-Leibler divergence loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCStrictImitationKLDivLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more.
dissonance(h2_output, target_labels)
class backwardcompatibilityml.tensorflow.loss.strict_imitation.BCStrictImitationNLLLoss(h1, h2, lambda_c, clip_value_min=1e-10, clip_value_max=4.0)

Bases: object

Strict Imitation Negative Log Likelihood Loss

This class implements the strict imitation loss function with the underlying loss function being the Negative Log Likelihood loss.

Note that the final layer of each model is assumed to have a softmax output.

Example usage:

   h1 = MyModel()
   # ... train h1 ...
   h1.trainable = False

   lambda_c = 0.5  # regularization parameter
   h2 = MyNewModel()  # may be the same model type as MyModel
   bc_loss = BCStrictImitationNLLLoss(h1, h2, lambda_c)
   optimizer = tf.keras.optimizers.SGD(0.01)

   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)
Parameters:
  • h1 – The reference model with which we want the updated model to remain compatible.
  • h2 – The new model that will replace h1.
  • lambda_c – A float between 0.0 and 1.0; a regularization parameter that determines how strongly model h2 is penalized for being incompatible with h1. Lower values penalize less and higher values penalize more.
  • clip_value_min / clip_value_max – Bounds applied to the model output before its logarithm is taken (presumably via clipping inside nll_loss) to keep the loss numerically stable; they default to 1e-10 and 4.0.
dissonance(h2_output, target_labels)
nll_loss(target_labels, model_output)
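End to end, both loss families are used the same way as in the per-class snippets above. The following runnable sketch fills in the pieces those snippets elide; the import path for tf_helpers, the two toy Sequential models, and the synthetic dataset are all assumptions made for illustration, not prescribed by the library.

   import numpy as np
   import tensorflow as tf

   from backwardcompatibilityml.tensorflow.loss.strict_imitation import (
       BCStrictImitationCrossEntropyLoss)
   # Import path assumed; adjust to wherever tf_helpers lives in your install.
   from backwardcompatibilityml.tensorflow import tf_helpers

   def make_model(num_classes=3):
       # Toy classifier whose final layer is a softmax, matching the
       # assumption stated for every loss class on this page.
       return tf.keras.Sequential([
           tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
           tf.keras.layers.Dense(num_classes, activation="softmax"),
       ])

   # Synthetic data, invented purely for illustration.
   x = np.random.rand(256, 4).astype("float32")
   y = np.random.randint(0, 3, size=(256,))
   ds_train = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)
   ds_test = tf.data.Dataset.from_tensor_slices((x[:64], y[:64])).batch(32)

   # Train the reference model h1, then freeze it.
   h1 = make_model()
   h1.compile(optimizer="sgd", loss="sparse_categorical_crossentropy")
   h1.fit(ds_train, epochs=1, verbose=0)
   h1.trainable = False

   # Train the update h2 against the strict imitation loss.
   h2 = make_model()
   bc_loss = BCStrictImitationCrossEntropyLoss(h1, h2, lambda_c=0.5)
   optimizer = tf.keras.optimizers.SGD(0.01)
   tf_helpers.bc_fit(
       h2, training_set=ds_train, testing_set=ds_test,
       epochs=6, bc_loss=bc_loss, optimizer=optimizer)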

Module contents