hgq.utils.sugar package

Submodules

hgq.utils.sugar.beta_pid module

class hgq.utils.sugar.beta_pid.BaseBetaPID(target_ebops: float, p: float, i: float, d: float = 0.0)

Bases: Callback

get_ebops()
on_epoch_begin(epoch, logs: dict | None = None)

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

set_beta(beta: float)
class hgq.utils.sugar.beta_pid.BetaPID(target_ebops: float, init_beta: float | None = None, p: float = 1.0, i: float = 0.002, d: float = 0.0, warmup: int = 10, log: bool = True, max_beta: float = inf, min_beta: float = 0.0, damp_beta_on_target: float = 0.0)

Bases: BaseBetaPID

Control the beta value of the Q Layers using a PID controller to reach a specified target EBOPs.

Parameters:
  • target_ebops (float) – The target EBOPs to reach.

  • init_beta (float, optional) – The initial beta value to set before training starts. If None, the average beta of the model is used; otherwise, the given value is applied to the model at the beginning of training.

  • p (float, default 1.0) – The proportional gain of the PID controller.

  • i (float, default 2e-3) – The integral gain of the PID controller.

  • d (float, default 0.0) – The derivative gain of the PID controller. As EBOPs is noisy, it is recommended to set this to 0.0 or a very small value.

  • warmup (int, default 10) – The number of epochs to warm up the beta value. During this period, the beta value will not be updated.

  • log (bool, default True) – If True, the beta value and the error in EBOPs are processed on a logarithmic scale.

  • max_beta (float, default float('inf')) – The maximum beta value to set. If the computed beta exceeds this value, it will be clamped to this maximum.

  • min_beta (float, default 0.0) – The minimum beta value to set. If the computed beta is below this value, it will be clamped to this minimum.

  • damp_beta_on_target (float, default 0.0) – A damping factor applied to the beta value when the target EBOPs is reached: beta *= (1 - damp_beta_on_target). This can help mitigate beta overshoot.

get_avg_beta() float
on_epoch_begin(epoch: int, logs: dict | None = None) None

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch: int, logs: dict | None = None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_begin(logs=None)

Called at the beginning of training.

Subclasses should override for any actions to run.

Parameters:

logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

class hgq.utils.sugar.beta_pid.PID(p: float, i: float, d: float, neg: bool)

Bases: object

update_gains(p: float, i: float, d: float)
update_gains_KcTiTd(Kp: float, Ti: float, Td: float)
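
The control law above can be sketched as a textbook discrete PID update. SimplePID below is an illustrative re-implementation, not the hgq PID class; the gains mirror the documented BetaPID defaults, and the log-scale note describes an assumed error definition.

```python
class SimplePID:
    """Textbook discrete PID update (illustrative, not the hgq PID class)."""

    def __init__(self, p: float, i: float, d: float):
        self.p, self.i, self.d = p, i, d
        self.integral = 0.0
        self.prev_err = None

    def step(self, err: float) -> float:
        # Accumulate the integral term and difference consecutive errors
        # for the derivative term; return the combined control output.
        self.integral += err
        deriv = 0.0 if self.prev_err is None else err - self.prev_err
        self.prev_err = err
        return self.p * err + self.i * self.integral + self.d * deriv


# With log=True, BetaPID is described as working on a logarithmic scale,
# i.e. the error would be something like log(ebops / target_ebops)
# rather than the raw difference (assumed here for illustration).
pid = SimplePID(p=1.0, i=2e-3, d=0.0)
pid.step(1.0)  # first update: p*1.0 + i*1.0 = 1.002
```

Keeping d at 0.0, as the parameter docs recommend, avoids amplifying the epoch-to-epoch noise in the EBOPs estimate.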

hgq.utils.sugar.beta_scheduler module

class hgq.utils.sugar.beta_scheduler.BetaScheduler(beta_fn: Callable[[int], float])

Bases: Callback

Schedule the beta value of the Q Layers.

Parameters:

beta_fn (Callable[[int], float]) – A function that takes the current epoch and returns the beta value.

on_epoch_begin(epoch, logs=None)

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.
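
A beta_fn can be any plain function of the epoch index. The warmup ramp below and the model wiring are illustrative, not part of the library.

```python
def beta_fn(epoch: int) -> float:
    # Ramp beta linearly from 0 to 1e-4 over the first 20 epochs,
    # then hold it constant.
    return min(1e-4, epoch * 5e-6)

# Wiring it up (sketch; assumes a compiled Keras model built from Q layers):
# from hgq.utils.sugar import BetaScheduler
# model.fit(x, y, epochs=50, callbacks=[BetaScheduler(beta_fn)])
```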

class hgq.utils.sugar.beta_scheduler.PieceWiseSchedule(intervals: Sequence[tuple[int, float, str]])

Bases: object

Get interpolated schedule from key points.

Parameters:

intervals (sequence of tuple[epoch:int, beta:float, interp:str]) –

The key points of the schedule. Each tuple contains the starting epoch, beta, and interpolation for the interval.

epoch: the starting epoch of the interval
beta: the beta value at that epoch
interp: the interpolation type used in the interval after that epoch; one of 'linear', 'log', or 'constant'. After the last epoch defined in the intervals, the beta value stays constant, regardless of the interpolation type.

Example: [(0, 0, 'linear'), (10, 1e-5, 'log'), (20, 1e-3, 'constant')] starts with beta=0, increases it linearly to 1e-5 over the first 10 epochs, then logarithmically to 1e-3 over the next 10 epochs. beta stays at 1e-3 after epoch 20.
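
The interpolation semantics described above can be sketched as follows. piecewise_beta is an illustrative re-implementation of the documented behavior, not the PieceWiseSchedule internals.

```python
def piecewise_beta(intervals, epoch):
    # Walk consecutive key points; interpolate inside the matching interval.
    for (e0, b0, interp), (e1, b1, _) in zip(intervals, intervals[1:]):
        if e0 <= epoch < e1:
            t = (epoch - e0) / (e1 - e0)
            if interp == 'linear':
                return b0 + t * (b1 - b0)
            if interp == 'log':
                # Logarithmic interpolation needs positive endpoints.
                return b0 * (b1 / b0) ** t
            return b0  # 'constant': hold the left key point's value
    # Past the last key point the value stays constant.
    return intervals[-1][1]

sched = [(0, 0.0, 'linear'), (10, 1e-5, 'log'), (20, 1e-3, 'constant')]
piecewise_beta(sched, 5)   # halfway up the linear ramp: 5e-6
piecewise_beta(sched, 30)  # held at 1e-3 after the last key point
```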

hgq.utils.sugar.ebops module

class hgq.utils.sugar.ebops.FreeEBOPs

Bases: Callback

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

hgq.utils.sugar.pareto module

class hgq.utils.sugar.pareto.ParetoFront(path: str | Path, metrics: list[str], sides: list[int], fname_format: str | None = None, enable_if: Callable[[dict], bool] | None = None)

Bases: Callback

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_begin(logs=None)

Called at the beginning of training.

Subclasses should override for any actions to run.

Parameters:

logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.
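
Keeping a Pareto front of checkpoints rests on a dominance test between metric tuples. The sketch below assumes sides holds +1 for metrics to maximize and -1 for metrics to minimize; that convention is a guess for illustration, not taken from the signature above.

```python
def dominates(a, b, sides):
    # `a` dominates `b` when it is no worse in every metric and strictly
    # better in at least one, with each metric's direction set by `sides`.
    no_worse = all(s * x >= s * y for x, y, s in zip(a, b, sides))
    strictly_better = any(s * x > s * y for x, y, s in zip(a, b, sides))
    return no_worse and strictly_better

# (accuracy, ebops) with accuracy maximized and ebops minimized:
dominates((0.90, 100.0), (0.88, 120.0), (1, -1))  # True: better on both
dominates((0.90, 120.0), (0.88, 100.0), (1, -1))  # False: a trade-off
```

A checkpoint is kept on the front exactly when no previously saved checkpoint dominates it.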

hgq.utils.sugar.pbar module

class hgq.utils.sugar.pbar.PBar(metric='loss: {loss:.2f}/{val_loss:.2f}', disable_ebops=False)

Bases: Callback

on_epoch_begin(epoch, logs=None)

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_end(logs=None)

Called at the end of training.

Subclasses should override for any actions to run.

Parameters:

logs – Dict. Currently the output of the last call to on_epoch_end() is passed to this argument for this method but that may change in the future.
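
Judging by its default value, the metric argument behaves like a standard Python format string filled from each epoch's logs dict; the logs values below are made up for illustration.

```python
metric = 'loss: {loss:.2f}/{val_loss:.2f}'
logs = {'loss': 0.2134, 'val_loss': 0.2518}
# Rendered once per epoch into the progress-bar text:
metric.format(**logs)  # 'loss: 0.21/0.25'
```

Any key present in the logs dict (including val_-prefixed validation metrics) can appear as a placeholder.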

Module contents

class hgq.utils.sugar.BetaPID(target_ebops: float, init_beta: float | None = None, p: float = 1.0, i: float = 0.002, d: float = 0.0, warmup: int = 10, log: bool = True, max_beta: float = inf, min_beta: float = 0.0, damp_beta_on_target: float = 0.0)

Bases: BaseBetaPID

Control the beta value of the Q Layers using a PID controller to reach a specified target EBOPs.

Parameters:
  • target_ebops (float) – The target EBOPs to reach.

  • init_beta (float, optional) – The initial beta value to set before training starts. If None, the average beta of the model is used; otherwise, the given value is applied to the model at the beginning of training.

  • p (float, default 1.0) – The proportional gain of the PID controller.

  • i (float, default 2e-3) – The integral gain of the PID controller.

  • d (float, default 0.0) – The derivative gain of the PID controller. As EBOPs is noisy, it is recommended to set this to 0.0 or a very small value.

  • warmup (int, default 10) – The number of epochs to warm up the beta value. During this period, the beta value will not be updated.

  • log (bool, default True) – If True, the beta value and the error in EBOPs are processed on a logarithmic scale.

  • max_beta (float, default float('inf')) – The maximum beta value to set. If the computed beta exceeds this value, it will be clamped to this maximum.

  • min_beta (float, default 0.0) – The minimum beta value to set. If the computed beta is below this value, it will be clamped to this minimum.

  • damp_beta_on_target (float, default 0.0) – A damping factor applied to the beta value when the target EBOPs is reached: beta *= (1 - damp_beta_on_target). This can help mitigate beta overshoot.

get_avg_beta() float
on_epoch_begin(epoch: int, logs: dict | None = None) None

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch: int, logs: dict | None = None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_begin(logs=None)

Called at the beginning of training.

Subclasses should override for any actions to run.

Parameters:

logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

class hgq.utils.sugar.BetaScheduler(beta_fn: Callable[[int], float])

Bases: Callback

Schedule the beta value of the Q Layers.

Parameters:

beta_fn (Callable[[int], float]) – A function that takes the current epoch and returns the beta value.

on_epoch_begin(epoch, logs=None)

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

class hgq.utils.sugar.Dataset(x_set, y_set=None, batch_size=None, device: str = 'cpu:0', drop_last=False, **kwargs)

Bases: PyDataset

batch(batch_size)
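
The batching behavior of a PyDataset-style wrapper can be illustrated with the usual batch-count arithmetic. num_batches is a sketch of the assumed drop_last semantics, not the hgq Dataset implementation.

```python
def num_batches(n_samples: int, batch_size: int, drop_last: bool = False) -> int:
    # drop_last discards a trailing partial batch; otherwise it is kept.
    if drop_last:
        return n_samples // batch_size
    return -(-n_samples // batch_size)  # ceiling division

num_batches(10, 3)                  # 4 batches, the last one partial
num_batches(10, 3, drop_last=True)  # 3 full batches
```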
class hgq.utils.sugar.FreeEBOPs

Bases: Callback

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

class hgq.utils.sugar.PBar(metric='loss: {loss:.2f}/{val_loss:.2f}', disable_ebops=False)

Bases: Callback

on_epoch_begin(epoch, logs=None)

Called at the start of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_end(logs=None)

Called at the end of training.

Subclasses should override for any actions to run.

Parameters:

logs – Dict. Currently the output of the last call to on_epoch_end() is passed to this argument for this method but that may change in the future.

class hgq.utils.sugar.ParetoFront(path: str | Path, metrics: list[str], sides: list[int], fname_format: str | None = None, enable_if: Callable[[dict], bool] | None = None)

Bases: Callback

on_epoch_end(epoch, logs=None)

Called at the end of an epoch.

Subclasses should override for any actions to run. This function should only be called during TRAIN mode.

Parameters:
  • epoch – Integer, index of epoch.

  • logs – Dict, metric results for this training epoch, and for the validation epoch if validation is performed. Validation result keys are prefixed with val_. For the training epoch, the values of the Model's metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

on_train_begin(logs=None)

Called at the beginning of training.

Subclasses should override for any actions to run.

Parameters:

logs – Dict. Currently no data is passed to this argument for this method but that may change in the future.

class hgq.utils.sugar.PieceWiseSchedule(intervals: Sequence[tuple[int, float, str]])

Bases: object

Get interpolated schedule from key points.

Parameters:

intervals (sequence of tuple[epoch:int, beta:float, interp:str]) –

The key points of the schedule. Each tuple contains the starting epoch, beta, and interpolation for the interval.

epoch: the starting epoch of the interval
beta: the beta value at that epoch
interp: the interpolation type used in the interval after that epoch; one of 'linear', 'log', or 'constant'. After the last epoch defined in the intervals, the beta value stays constant, regardless of the interpolation type.

Example: [(0, 0, 'linear'), (10, 1e-5, 'log'), (20, 1e-3, 'constant')] starts with beta=0, increases it linearly to 1e-5 over the first 10 epochs, then logarithmically to 1e-3 over the next 10 epochs. beta stays at 1e-3 after epoch 20.