hgq.utils.sugar package
Submodules
hgq.utils.sugar.beta_pid module
- class hgq.utils.sugar.beta_pid.BaseBetaPID(target_ebops: float, p: float, i: float, d: float = 0.0)
Bases: Callback
- get_ebops()
- on_epoch_begin(epoch, logs: dict | None = None)
Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- set_beta(beta: float)
- class hgq.utils.sugar.beta_pid.BetaPID(target_ebops: float, init_beta: float | None = None, p: float = 1.0, i: float = 0.002, d: float = 0.0, warmup: int = 10, log: bool = True, max_beta: float = inf, min_beta: float = 0.0, damp_beta_on_target: float = 0.0)
Bases: BaseBetaPID
Control the beta value of the Q Layers using a PID controller to reach a specified target EBOPs.
- Parameters:
target_ebops (float) – The target EBOPs to reach.
init_beta (float, optional) – The initial beta value to set before training starts. If None, the average beta of the model is used; if set, the value is applied to the model at the beginning of training.
p (float, default 1.0) – The proportional gain of the PID controller.
i (float, default 2e-3) – The integral gain of the PID controller.
d (float, default 0.0) – The derivative gain of the PID controller. As EBOPs is noisy, it is recommended to set this to 0.0 or a very small value.
warmup (int, default 10) – The number of epochs to warm up the beta value. During this period, the beta value will not be updated.
log (bool, default True) – If True, the beta value and the error in EBOPs are processed on a logarithmic scale.
max_beta (float, default float('inf')) – The maximum beta value to set. If the computed beta exceeds this value, it will be clamped to this maximum.
min_beta (float, default 0.0) – The minimum beta value to set. If the computed beta is below this value, it will be clamped to this minimum.
damp_beta_on_target (float, default 0.0) – A damping factor applied to the beta value when the target EBOPs is reached: beta *= (1 - damp_beta_on_target). This can help mitigate beta overshooting.
- get_avg_beta() → float
- on_epoch_begin(epoch: int, logs: dict | None = None) → None
Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- on_epoch_end(epoch: int, logs: dict | None = None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- on_train_begin(logs=None)
Called at the beginning of training.
Subclasses should override for any actions to run.
- Parameters:
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
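A minimal usage sketch follows. The model itself is a stand-in: hgq.layers.QDense is assumed from elsewhere in this package, and only the BetaPID arguments documented above are taken from this page.

```python
import numpy as np
import keras

from hgq.layers import QDense  # assumed HGQ quantized layer
from hgq.utils.sugar import BetaPID

# Toy regression model built from Q layers (stand-in architecture).
model = keras.Sequential([
    keras.Input(shape=(16,)),
    QDense(32, activation='relu'),
    QDense(1),
])
model.compile(optimizer='adam', loss='mse')

x = np.random.rand(256, 16).astype('float32')
y = np.random.rand(256, 1).astype('float32')

# Drive beta with the PID controller so the model settles near 1000 EBOPs.
pid = BetaPID(target_ebops=1000, warmup=5)
model.fit(x, y, epochs=50, callbacks=[pid])
```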
hgq.utils.sugar.beta_scheduler module
- class hgq.utils.sugar.beta_scheduler.BetaScheduler(beta_fn: Callable[[int], float])
Bases: Callback
Schedule the beta value of the Q Layers.
- Parameters:
beta_fn (Callable[[int], float]) – A function that takes the current epoch and returns the beta value.
- on_epoch_begin(epoch, logs=None)
Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
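For example, an exponential ramp (reusing the model, x, and y from the BetaPID sketch above):

```python
from hgq.utils.sugar import BetaScheduler

# beta starts at 1e-6 and doubles every 10 epochs.
scheduler = BetaScheduler(beta_fn=lambda epoch: 1e-6 * 2.0 ** (epoch / 10))
model.fit(x, y, epochs=50, callbacks=[scheduler])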
- class hgq.utils.sugar.beta_scheduler.PieceWiseSchedule(intervals: Sequence[tuple[int, float, str]])
Bases: object
Get interpolated schedule from key points.
- Parameters:
intervals (sequence of tuple[epoch:int, beta:float, interp:str]) –
The key points of the schedule. Each tuple contains the starting epoch, beta, and interpolation for the interval.
epoch: the epoch number.
beta: the beta value at that epoch.
interp: the interpolation type for the interval after that epoch, one of 'linear', 'log', or 'constant'. After the last epoch defined in the intervals, the beta value stays constant regardless of the interpolation type.
Example: [(0, 0, 'linear'), (10, 1e-5, 'log'), (20, 1e-3, 'constant')] starts with beta=0, increases it linearly to 1e-5 over the first 10 epochs, then logarithmically to 1e-3 over the next 10 epochs; beta stays at 1e-3 after epoch 20.
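Below, the docstring example is wired into a BetaScheduler. That a PieceWiseSchedule instance is callable with the epoch number (matching BetaScheduler's beta_fn signature) is an assumption here:

```python
from hgq.utils.sugar import BetaScheduler, PieceWiseSchedule

schedule = PieceWiseSchedule([
    (0, 0.0, 'linear'),      # ramp linearly from 0 ...
    (10, 1e-5, 'log'),       # ... then logarithmically from 1e-5 ...
    (20, 1e-3, 'constant'),  # ... hold at 1e-3 from epoch 20 on.
])
model.fit(x, y, epochs=50, callbacks=[BetaScheduler(schedule)])
```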
hgq.utils.sugar.early_stopping_ebops module
- class hgq.utils.sugar.early_stopping_ebops.EarlyStoppingWithEbopsThres(ebops_threshold: float, monitor='val_loss', min_delta: float = 0, patience: int = 0, verbose: int = 0, mode='auto', baseline: float | None = None, restore_best_weights: bool = False, start_from_epoch: int = 0)
Bases: EarlyStopping
Vanilla Keras EarlyStopping but only after a given EBOPs threshold is met.
This callback stops training when:
- EBOPs is at or below the given threshold, and
- the monitored metric has stopped improving.
Assuming the goal of training is to minimize the loss, the metric to be monitored would be 'loss', and the mode would be 'min'. A model.fit() training loop will check at the end of every epoch whether the loss is no longer decreasing, taking min_delta and patience into account where applicable. Once the loss is found to be no longer decreasing and the EBOPs threshold has been met, model.stop_training is set to True and training terminates.
The quantity to be monitored needs to be available in the logs dict. To make it so, pass the loss or metrics to model.compile().
- Parameters:
ebops_threshold (float) – The target EBOPs value. This callback will not stop the training until the model's EBOPs is at or below this value.
monitor (str, default "val_loss") – Quantity to be monitored.
min_delta (float, default 0) – Minimum change in the monitored quantity to qualify as an improvement; an absolute change of less than min_delta counts as no improvement.
patience (int, default 0) – Number of epochs with no improvement after which training will be stopped.
verbose (int, default 0) – Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action.
mode ({"auto", "min", "max"}, default "auto") – In "min" mode, training will stop when the quantity monitored has stopped decreasing; in "max" mode it will stop when the quantity monitored has stopped increasing; in "auto" mode, the direction is automatically inferred from the name of the monitored quantity.
baseline (float, optional) – Baseline value for the monitored quantity. If not None, training will stop if the model doesn’t show improvement over the baseline.
restore_best_weights (bool, default False) – Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of the performance relative to the baseline. If no epoch improves on baseline, training will run for patience epochs and restore weights from the best epoch in that set.
start_from_epoch (int, default 0) – Number of epochs to wait before starting to monitor improvement. This allows for a warm-up period in which no improvement is expected and thus training will not be stopped.
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
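A sketch (training plumbing as in the earlier sketches): stop only once EBOPs has dropped to 2000 or below and val_loss has then stalled for 10 epochs.

```python
from hgq.utils.sugar import EarlyStoppingWithEbopsThres

stopper = EarlyStoppingWithEbopsThres(
    ebops_threshold=2000,
    monitor='val_loss',
    patience=10,
    restore_best_weights=True,
)
model.fit(x, y, validation_split=0.2, epochs=200, callbacks=[stopper])
```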
hgq.utils.sugar.ebops module
- class hgq.utils.sugar.ebops.FreeEBOPs
Bases: Callback
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
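The class carries no docstring here; judging from its name and the on_epoch_end hook, it appears to record the model's current EBOPs into the epoch logs at no extra training cost. A hedged sketch under that reading:

```python
from hgq.utils.sugar import FreeEBOPs

# EBOPs should then appear alongside the other metrics in the logs/history.
model.fit(x, y, epochs=50, callbacks=[FreeEBOPs()])
```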
hgq.utils.sugar.pareto module
- class hgq.utils.sugar.pareto.ParetoFront(path: str | Path, metrics: list[str], sides: list[int], fname_format: str | None = None, enable_if: Callable[[dict[str, Any]], bool] | None = None)
Bases: Callback
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- on_train_begin(logs=None)
Called at the beginning of training.
Subclasses should override for any actions to run.
- Parameters:
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
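No docstring is given; from the signature, the callback appears to checkpoint models on the Pareto front of the listed metrics. In the sketch below, the reading of sides as a per-metric direction (-1 meaning smaller is better) and the presence of an 'ebops' entry in the logs (here supplied by FreeEBOPs, ordered first) are both assumptions:

```python
from hgq.utils.sugar import FreeEBOPs, ParetoFront

pareto = ParetoFront(
    path='checkpoints/pareto',      # directory for saved models
    metrics=['val_loss', 'ebops'],  # logs keys to trade off
    sides=[-1, -1],                 # assumed: -1 means minimize
)
# FreeEBOPs runs first so 'ebops' is in logs when ParetoFront reads them.
model.fit(x, y, validation_split=0.2, epochs=100,
          callbacks=[FreeEBOPs(), pareto])
```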
hgq.utils.sugar.pbar module
- class hgq.utils.sugar.pbar.PBar(metric='loss: {loss:.2f}/{val_loss:.2f}', disable_ebops=False)
Bases: Callback
- on_epoch_begin(epoch, logs=None)
Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- on_train_end(logs=None)
Called at the end of training.
Subclasses should override for any actions to run.
- Parameters:
logs – Dict. Currently the output of the last call to on_epoch_end() is passed to this argument for this method but that may change in the future.
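A sketch using the default metric format string; pairing it with verbose=0 to silence Keras's built-in progress bar is an assumption about the intended use:

```python
from hgq.utils.sugar import PBar

pbar = PBar(metric='loss: {loss:.2f}/{val_loss:.2f}')
model.fit(x, y, validation_split=0.2, epochs=50, verbose=0, callbacks=[pbar])
```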
Module contents
- class hgq.utils.sugar.BetaPID(target_ebops: float, init_beta: float | None = None, p: float = 1.0, i: float = 0.002, d: float = 0.0, warmup: int = 10, log: bool = True, max_beta: float = inf, min_beta: float = 0.0, damp_beta_on_target: float = 0.0)
Bases: BaseBetaPID
Control the beta value of the Q Layers using a PID controller to reach a specified target EBOPs.
- Parameters:
target_ebops (float) – The target EBOPs to reach.
init_beta (float, optional) – The initial beta value to set before training starts. If None, the average beta of the model is used; if set, the value is applied to the model at the beginning of training.
p (float, default 1.0) – The proportional gain of the PID controller.
i (float, default 2e-3) – The integral gain of the PID controller.
d (float, default 0.0) – The derivative gain of the PID controller. As EBOPs is noisy, it is recommended to set this to 0.0 or a very small value.
warmup (int, default 10) – The number of epochs to warm up the beta value. During this period, the beta value will not be updated.
log (bool, default True) – If True, the beta value and the error in EBOPs are processed on a logarithmic scale.
max_beta (float, default float('inf')) – The maximum beta value to set. If the computed beta exceeds this value, it will be clamped to this maximum.
min_beta (float, default 0.0) – The minimum beta value to set. If the computed beta is below this value, it will be clamped to this minimum.
damp_beta_on_target (float, default 0.0) – A damping factor applied to the beta value when the target EBOPs is reached: beta *= (1 - damp_beta_on_target). This can help mitigate beta overshooting.
- get_avg_beta() → float
- on_epoch_begin(epoch: int, logs: dict | None = None) → None
Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- on_epoch_end(epoch: int, logs: dict | None = None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- on_train_begin(logs=None)
Called at the beginning of training.
Subclasses should override for any actions to run.
- Parameters:
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- class hgq.utils.sugar.BetaScheduler(beta_fn: Callable[[int], float])
Bases: Callback
Schedule the beta value of the Q Layers.
- Parameters:
beta_fn (Callable[[int], float]) – A function that takes the current epoch and returns the beta value.
- on_epoch_begin(epoch, logs=None)
Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- class hgq.utils.sugar.Dataset(x_set, y_set=None, batch_size=None, device: str = 'cpu:0', drop_last=False, **kwargs)
Bases: PyDataset
- batch(batch_size)
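A sketch wrapping in-memory arrays as a device-pinned PyDataset; the model is the stand-in from the BetaPID sketch above, and the device string follows the signature's default:

```python
import numpy as np
from hgq.utils.sugar import Dataset

x = np.random.rand(1024, 16).astype('float32')
y = np.random.rand(1024, 1).astype('float32')

# Batches of 64 on CPU; the trailing partial batch is dropped.
ds = Dataset(x, y, batch_size=64, device='cpu:0', drop_last=True)
model.fit(ds, epochs=10)
```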
- class hgq.utils.sugar.EarlyStoppingWithEbopsThres(ebops_threshold: float, monitor='val_loss', min_delta: float = 0, patience: int = 0, verbose: int = 0, mode='auto', baseline: float | None = None, restore_best_weights: bool = False, start_from_epoch: int = 0)
Bases: EarlyStopping
Vanilla Keras EarlyStopping but only after a given EBOPs threshold is met.
This callback stops training when:
- EBOPs is at or below the given threshold, and
- the monitored metric has stopped improving.
Assuming the goal of training is to minimize the loss, the metric to be monitored would be 'loss', and the mode would be 'min'. A model.fit() training loop will check at the end of every epoch whether the loss is no longer decreasing, taking min_delta and patience into account where applicable. Once the loss is found to be no longer decreasing and the EBOPs threshold has been met, model.stop_training is set to True and training terminates.
The quantity to be monitored needs to be available in the logs dict. To make it so, pass the loss or metrics to model.compile().
- Parameters:
ebops_threshold (float) – The target EBOPs value. This callback will not stop the training until the model's EBOPs is at or below this value.
monitor (str, default "val_loss") – Quantity to be monitored.
min_delta (float, default 0) – Minimum change in the monitored quantity to qualify as an improvement; an absolute change of less than min_delta counts as no improvement.
patience (int, default 0) – Number of epochs with no improvement after which training will be stopped.
verbose (int, default 0) – Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action.
mode ({"auto", "min", "max"}, default "auto") – In "min" mode, training will stop when the quantity monitored has stopped decreasing; in "max" mode it will stop when the quantity monitored has stopped increasing; in "auto" mode, the direction is automatically inferred from the name of the monitored quantity.
baseline (float, optional) – Baseline value for the monitored quantity. If not None, training will stop if the model doesn’t show improvement over the baseline.
restore_best_weights (bool, default False) – Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of the performance relative to the baseline. If no epoch improves on baseline, training will run for patience epochs and restore weights from the best epoch in that set.
start_from_epoch (int, default 0) – Number of epochs to wait before starting to monitor improvement. This allows for a warm-up period in which no improvement is expected and thus training will not be stopped.
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- class hgq.utils.sugar.FreeEBOPs
Bases: Callback
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- class hgq.utils.sugar.PBar(metric='loss: {loss:.2f}/{val_loss:.2f}', disable_ebops=False)
Bases: Callback
- on_epoch_begin(epoch, logs=None)
Called at the start of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- on_train_end(logs=None)
Called at the end of training.
Subclasses should override for any actions to run.
- Parameters:
logs – Dict. Currently the output of the last call to on_epoch_end() is passed to this argument for this method but that may change in the future.
- class hgq.utils.sugar.ParetoFront(path: str | Path, metrics: list[str], sides: list[int], fname_format: str | None = None, enable_if: Callable[[dict[str, Any]], bool] | None = None)
Bases: Callback
- on_epoch_end(epoch, logs=None)
Called at the end of an epoch.
Subclasses should override for any actions to run. This function should only be called during TRAIN mode.
- Parameters:
epoch – Integer, index of epoch.
logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
- on_train_begin(logs=None)
Called at the beginning of training.
Subclasses should override for any actions to run.
- Parameters:
logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
- class hgq.utils.sugar.PieceWiseSchedule(intervals: Sequence[tuple[int, float, str]])
Bases: object
Get interpolated schedule from key points.
- Parameters:
intervals (sequence of tuple[epoch:int, beta:float, interp:str]) –
The key points of the schedule. Each tuple contains the starting epoch, beta, and interpolation for the interval.
epoch: the epoch number.
beta: the beta value at that epoch.
interp: the interpolation type for the interval after that epoch, one of 'linear', 'log', or 'constant'. After the last epoch defined in the intervals, the beta value stays constant regardless of the interpolation type.
Example: [(0, 0, 'linear'), (10, 1e-5, 'log'), (20, 1e-3, 'constant')] starts with beta=0, increases it linearly to 1e-5 over the first 10 epochs, then logarithmically to 1e-3 over the next 10 epochs; beta stays at 1e-3 after epoch 20.