HGQ.quantizer package

Submodules

HGQ.quantizer.quantizer module

class HGQ.quantizer.quantizer.HGQ(init_bw: float, skip_dims, rnd_strategy: str | int = 'floor', exact_q_value=True, dtype=None, bw_clip=(-23, 23), trainable=True, regularizer: Callable | None = None, minmax_record=False)

Bases: object

Heterogeneous quantizer.
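A minimal usage sketch, assuming a TensorFlow tensor as input; it uses only the constructor, build and forward signatures documented below, and the argument values and names (act_quant, x) are illustrative rather than taken from the package.

    import tensorflow as tf
    from HGQ.quantizer.quantizer import HGQ

    # Illustrative configuration: 4-bit initial bitwidth, fully heterogeneous
    # (skip_dims=None), floor rounding, with min/max recording enabled.
    quantizer = HGQ(init_bw=4.0, skip_dims=None, rnd_strategy='floor', minmax_record=True)

    x = tf.random.uniform((8, 16))                     # dummy activation tensor
    quantizer.build(tuple(x.shape), name='act_quant')  # allocate per-element bitwidths
    xq = quantizer.forward(x, training=True)           # gradients flow through quantization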

adapt_bw_bits(ref: Tensor)

Adapt the bitwidth of the quantizer to the input tensor, such that each input is represented with approximately the same number of bits (e.g., 1.5 will be represented as ap_fixed<2,1> and 0.375 as ap_fixed<2,-2>).
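As a small illustration of "the same number of bits" (the helper below is not part of the package): 1.5 is 1.1 in binary and 0.375 is 0.011, so both span two significant bits once the binary point is placed per element.

    import math

    def significant_bits(x):
        # Bits spanned from the highest to the lowest set bit of |x|
        # (illustration only).
        frac, _ = math.frexp(abs(x))   # |x| = frac * 2**exp with 0.5 <= frac < 1
        bits = 0
        while frac != 0.0:
            frac *= 2.0
            bits += 1
            frac -= int(frac)
        return bits

    significant_bits(1.5)    # 2  (binary 1.1)
    significant_bits(0.375)  # 2  (binary 0.011)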

bias_forward(x, training=None, channel_loc=-1)

Forward pass for the bias term; syntactic sugar.

build(x, name=None)
build(input_shape: tuple, name: str | None = None)
bw_clip

(min, max) clipping range of the bitwidth. Defaults to (-23, 23), matching the float32 mantissa width.

Type:

tuple

degeneracy

Degeneracy of the quantizer. Records how many values are mapped to the same quantizer.

exact_q_value

Whether to use exact quantized value during training.

Type:

bool

forward(x, training=None, record_minmax=None)

Forward pass of HGQ.

Parameters:

training: if set to True, gradients will be propagated through the quantization process.

record_minmax: if set to True, the min and max of the quantized values will be recorded for deriving the necessary integer bits. Only needed for activation/pre-activation values.

classmethod from_config(config: dict)
get_bits(ref=None, quantized=None, pos_only=False)

Get approximated int/frac/keep_negative bits of the equivalent fixed-point quantizer.

Parameters:

ref: input tensor used to compute the bits. If None, use the min/max record.

quantized: set to True if the input is already quantized; the quantization pass is then skipped.

pos_only: if True, only compute the bits for positive values. Useful if a ReLU layer follows.

get_bits_exact(ref=None, pos_only=False)

Get exact int/frac/keep_negative bits of the equivalent fixed-point quantizer.

Parameters:

ref: input tensor used to compute the bits. If None, use the min/max record.

pos_only: if True, only compute the bits for positive values. Useful if a ReLU layer follows.
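A hedged sketch of combining forward and get_bits_exact, continuing the construction example above; the return layout of get_bits_exact is not documented here, so the comments only restate the docstring.

    # Calibration pass: record the min/max of the quantized activations.
    xq = quantizer.forward(x, training=False, record_minmax=True)

    # Exact int/frac/keep_negative bits derived from the recorded min/max.
    bits = quantizer.get_bits_exact()

    # Or compute them from a specific reference tensor; pos_only=True is useful
    # when a ReLU layer follows.
    bits_relu = quantizer.get_bits_exact(ref=x, pos_only=True)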

init_minmax()
minmax_record

Whether to record min and max of quantized values.

Type:

bool

minmax_reg_reset()

Reset min and max to inf and -inf, respectively.

regularizer

Regularizer for bw.

rnd_strategy

How to round the quantized value. 0: standard round (default, round to nearest, round-up 0.5), 1: stochastic round, 2: fast uniform noise injection (uniform noise in [-0.5, 0.5]), 3: floor.

Type:

str | int

skip_dims

Dimensions along which a uniform quantizer is used. If None, use a fully heterogeneous quantizer.

Type:

tuple

HGQ.quantizer.quantizer.get_arr_bits(arr: ndarray)

Internal helper function to compute the positions of the highest and lowest bits of an array of fixed-point integers.
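For intuition, a small numpy sketch of highest/lowest set-bit positions on non-negative integers; this is an illustration only, not the package's get_arr_bits, whose exact input handling is not documented here.

    import numpy as np

    def bit_positions(arr):
        # Highest and lowest set-bit positions of non-negative integers
        # (illustration only; zeros are mapped to position 0 here).
        a = np.asarray(arr, dtype=np.int64)
        high = np.floor(np.log2(np.maximum(a, 1))).astype(np.int64)
        low = np.floor(np.log2(np.maximum(a & -a, 1))).astype(np.int64)
        return high, low

    bit_positions([6, 8, 5])   # high = [2, 3, 2], low = [1, 3, 0]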

HGQ.quantizer.quantizer.q_round(x: Tensor, strategy: int = 0)

Round the tensor.

strategy:

0: standard round (default; round to nearest, 0.5 rounds to even)

1: stochastic round

2: fast uniform noise injection (uniform noise in [-0.5, 0.5])

3: floor
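A numpy sketch of the four strategies; this is an illustration, not the package's TensorFlow implementation, and reading strategy 2 as noise added in place of rounding is an assumption.

    import numpy as np

    def q_round_sketch(x, strategy=0, rng=np.random.default_rng(0)):
        x = np.asarray(x, dtype=np.float64)
        if strategy == 0:   # standard round: round to nearest, 0.5 to even
            return np.round(x)
        if strategy == 1:   # stochastic round: round up with probability frac(x)
            f = np.floor(x)
            return f + (rng.random(x.shape) < (x - f))
        if strategy == 2:   # fast uniform noise injection: add noise in [-0.5, 0.5]
            return x + rng.uniform(-0.5, 0.5, size=x.shape)
        if strategy == 3:   # floor
            return np.floor(x)
        raise ValueError(f"unknown strategy: {strategy}")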

Module contents