hgq.layers.rnn package

Submodules

hgq.layers.rnn.gru module

class hgq.layers.rnn.gru.QGRU(*args, **kwargs)

Bases: QRNN, GRU

Gated Recurrent Unit - Cho et al. 2014.

QGRU supports only the backend-native implementation (no cuDNN kernel). When the JAX backend is used and any WRAP quantizers are present, unroll is forced to True to avoid side-effect issues inside the jax.lax.scan loop.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: linear; effectively hard_tanh once the pre-activation quantizer is applied.

  • recurrent_activation (str, optional) – Activation function to use for the recurrent step. Default: linear; effectively hard_sigmoid (slope=0.5) once the pre-activation quantizer is applied.

  • use_bias (bool, optional) – Whether the layer should use a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • activity_regularizer (optional) – Regularizer function applied to the output of the layer (its “activation”). Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • seed (int, optional) – Random seed for dropout.

  • return_sequences (bool, optional) – Whether to return the last output in the output sequence, or the full sequence. Default: False.

  • return_state (bool, optional) – Whether to return the last state in addition to the output. Default: False.

  • go_backwards (bool, optional) – If True, process the input sequence backwards and return the reversed sequence. Default: False.

  • stateful (bool, optional) – If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. Default: False.

  • unroll (bool or None, optional) – None is equivalent to False; however, with the JAX backend, if any WRAP quantizers are used, unroll is forced to True to avoid side-effect issues inside the jax.lax.scan loop. If True, the network will be unrolled; otherwise a symbolic loop is used. Unrolling can speed up an RNN but tends to be more memory-intensive, so it is only suitable for short sequences. Default: None.

  • reset_after (bool, optional) – GRU convention (whether to apply reset gate after or before matrix multiplication). False is “before”, True is “after” (default and cuDNN compatible). Default: True.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None (global default)

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for post-activation quantizer. Default: None (hard tanh like, w/ global default)

  • praq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-recurrent activation quantizer. Default: None (hard sigmoid like, w/ global default)

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None (global default)

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None (global default)

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None (global default)

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None (global default)

  • oq_conf (QuantizerConfig or None, optional) – Quantizer configuration for output quantizer. Default: None (global default)

  • rhq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent hidden state quantizer. Default: None (global default)

  • parallelization_factor (int, optional) – Number of cells to be computed in parallel. Default: 1.

  • enable_oq (bool or None, optional) – Whether to enable output quantizer. Default: None (global default)

  • enable_iq (bool or None, optional) – Whether to enable input quantizer. Default: None (global default)

  • enable_ebops (bool or None, optional) – Whether to enable EBOPs resource consumption estimation. Default: None (global default).

  • beta0 (float or None, optional) – Beta0 parameter for quantizer. Default: None (global default)

Notes

inputs : array_like

A 3D tensor, with shape (batch, timesteps, feature).

mask : array_like, optional

Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked (optional). An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. Defaults to None.

training : bool, optional

Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional). Defaults to None.

initial_state : list, optional

List of initial state tensors to be passed to the first call of the cell (optional, None causes creation of zero-filled initial state tensors). Defaults to None.

get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class hgq.layers.rnn.gru.QGRUCell(*args, **kwargs)

Bases: QLayerBase, GRUCell

Cell class for the GRU layer.

This class processes one step within the whole time sequence input, whereas keras.layers.GRU processes the whole sequence.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: tanh (hyperbolic tangent). If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).

  • recurrent_activation (str, optional) – Activation function to use for the recurrent step. Default: sigmoid. If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).

  • use_bias (bool, optional) – Whether the layer should use a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • reset_after (bool, optional) – GRU convention (whether to apply reset gate after or before matrix multiplication). False = “before”, True = “after” (default and cuDNN compatible).

  • seed (int, optional) – Random seed for dropout.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None.

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for post-activation quantizer. Default: None.

  • praq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-recurrent activation quantizer. Default: None.

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None.

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None.

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None.

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None.

  • oq_conf (QuantizerConfig or None, optional) – Quantizer configuration for output quantizer. Default: None.

  • rhq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent hidden state quantizer. Default: None.

  • standalone (bool, optional) – Whether this cell is used standalone or as part of a larger RNN layer. EBOPS computation will be skipped when used as a sublayer. Default: True.

  • enable_ebops (bool or None, optional) – Whether to enable energy-efficient bit operations. Default: None.

  • enable_iq (bool or None, optional) – Whether to enable input quantizer. Default: None.

  • enable_oq (bool or None, optional) – Whether to enable output quantizer. Default: None.

Notes

inputs : array_like

A 2D tensor, with shape (batch, features).

states : array_like

A 2D tensor with shape (batch, units), which is the state from the previous time step.

training : bool, optional

Python boolean indicating whether the layer should behave in training mode or in inference mode. Only relevant when dropout or recurrent_dropout is used.
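The per-step computation performed by a GRU cell can be sketched in NumPy. This is the standard Cho et al. 2014 formulation with the reset_after=True convention used by Keras; in QGRUCell the inputs, state, kernels, bias, and pre-activations would additionally pass through the configured quantizers, which this sketch deliberately omits.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, kernel, recurrent_kernel, bias):
    """One GRU step, reset_after=True convention.

    kernel: (features, 3*units), recurrent_kernel: (units, 3*units),
    bias: (2, 3*units) -- separate input-side and recurrent-side biases.
    """
    b_in, b_rec = bias
    xw = x @ kernel + b_in                 # input-side projections
    hu = h @ recurrent_kernel + b_rec      # recurrent-side projections
    xz, xr, xh = np.split(xw, 3, axis=-1)
    hz, hr, hh_rec = np.split(hu, 3, axis=-1)
    z = sigmoid(xz + hz)                   # update gate
    r = sigmoid(xr + hr)                   # reset gate
    hh = np.tanh(xh + r * hh_rec)          # candidate state (reset applied after matmul)
    return z * h + (1.0 - z) * hh          # new hidden state

# Demo: one step from a zero state, batch of 2, 4 features, 3 units.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4))
h0 = np.zeros((2, 3))
kernel = rng.normal(size=(4, 9))
recurrent_kernel = rng.normal(size=(3, 9))
bias = np.zeros((2, 9))
h1 = gru_step(x, h0, kernel, recurrent_kernel, bias)
```

In the quantized cell, qkernel, qrecurrent_kernel, and qbias play the roles of the raw weights here, and qactivation / qrecurrent_activation replace tanh and sigmoid.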

property bq
build(input_shape)
call(inputs, states, training=False)
property enable_ebops
get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

get_initial_state(batch_size=None)
property iq
property kq
property paq
property praq
qactivation(x)
property qbias
property qkernel
qrecurrent_activation(x)
property qrecurrent_kernel
property rhq
property rkq
property sq

hgq.layers.rnn.simple_rnn module

class hgq.layers.rnn.simple_rnn.QRNN(*args, **kwargs)

Bases: RNN

property beta
build(sequences_shape, initial_state_shape=None)
property ebops
property enable_ebops
property enable_iq
property enable_oq
classmethod from_config(config, custom_objects=None)

Creates an operation from its config.

This method is the reverse of get_config, capable of instantiating the same operation from the config dictionary.

Note: If you override this method, you might receive a serialized dtype config, which is a dict. You can deserialize it as follows:

```python
if "dtype" in config and isinstance(config["dtype"], dict):
    policy = dtype_policies.deserialize(config["dtype"])
```

Parameters:

config – A Python dictionary, typically the output of get_config.

Returns:

An operation instance.

get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class hgq.layers.rnn.simple_rnn.QSimpleRNN(*args, **kwargs)

Bases: QRNN, SimpleRNN

Quantized Fully-connected RNN where the output is to be fed back as the new input.

When the JAX backend is used and any WRAP quantizers are present, unroll is forced to True to avoid side-effect issues inside the jax.lax.scan loop.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: linear; effectively hard_tanh once the pre-activation quantizer is applied.

  • use_bias (bool, optional) – Whether the layer uses a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • activity_regularizer (optional) – Regularizer function applied to the output of the layer (its “activation”). Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • return_sequences (bool, optional) – Whether to return the last output in the output sequence, or the full sequence. Default: False.

  • return_state (bool, optional) – Whether to return the last state in addition to the output. Default: False.

  • go_backwards (bool, optional) – If True, process the input sequence backwards and return the reversed sequence. Default: False.

  • stateful (bool, optional) – If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. Default: False.

  • unroll (bool or None, optional) – None is equivalent to False; however, with the JAX backend, if any WRAP quantizers are used, unroll is forced to True to avoid side-effect issues inside the jax.lax.scan loop. If True, the network will be unrolled; otherwise a symbolic loop is used. Unrolling can speed up an RNN but tends to be more memory-intensive, so it is only suitable for short sequences. Default: None.

  • seed (int, optional) – Random seed for dropout.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None.

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None.

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None.

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None.

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None.

  • oq_conf (QuantizerConfig or None, optional) – Quantizer configuration for output quantizer. Default: None.

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-activation quantizer. Default: None.

  • parallelization_factor (int, optional) – Factor for parallelization. Default: -1 (automatic).

  • enable_sq (bool or None, optional) – Whether to enable the state quantizer. When the output is already quantized and the state is zero-initialized, the state quantizer should be disabled to avoid double quantization. Default: False.

  • enable_oq (bool or None, optional) – Whether to enable output quantizer. Default: None.

  • enable_iq (bool or None, optional) – Whether to enable input quantizer. Default: None.

  • enable_ebops (bool or None, optional) – Whether to enable energy-efficient bit operations. Default: None.

  • beta0 (float or None, optional) – Beta0 parameter for quantizer. Default: None.

Notes

sequence : array_like

A 3D tensor, with shape [batch, timesteps, feature].

mask : array_like, optional

Binary tensor of shape [batch, timesteps] indicating whether a given timestep should be masked. An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored.

training : bool, optional

Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used.

initial_state : list, optional

List of initial state tensors to be passed to the first call of the cell.
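The recurrence this layer runs over the sequence can be sketched in NumPy: h_t = activation(x_t W + h_{t-1} U + b), starting from a zero state. In QSimpleRNN each operand would additionally pass through its configured quantizer, and with the default linear activation the pre-activation quantizer effectively supplies the hard_tanh nonlinearity; this sketch omits the quantizers.

```python
import numpy as np

def simple_rnn(seq, kernel, recurrent_kernel, bias, activation=np.tanh):
    """Run h_t = activation(x_t @ W + h_{t-1} @ U + b) over a (batch, timesteps, features) sequence."""
    batch, timesteps, _ = seq.shape
    h = np.zeros((batch, recurrent_kernel.shape[0]))  # zero-initialized state
    outputs = []
    for t in range(timesteps):
        h = activation(seq[:, t] @ kernel + h @ recurrent_kernel + bias)
        outputs.append(h)
    # Full sequence (as with return_sequences=True) and the final state.
    return np.stack(outputs, axis=1), h

# Demo: batch of 2, 5 timesteps, 4 features -> 3 units.
rng = np.random.default_rng(1)
seq = rng.normal(size=(2, 5, 4))
W = rng.normal(size=(4, 3))   # kernel
U = rng.normal(size=(3, 3))   # recurrent kernel
b = np.zeros(3)               # bias
ys, h_last = simple_rnn(seq, W, U, b)
```

The last slice of the returned sequence equals the final state, which is what the layer returns when return_sequences is False.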

get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class hgq.layers.rnn.simple_rnn.QSimpleRNNCell(*args, **kwargs)

Bases: QLayerBaseSingleInput, SimpleRNNCell

Cell class for the QSimpleRNN layer.

This class processes one step within the whole time sequence input, whereas QSimpleRNN processes the whole sequence.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: “linear”.

  • use_bias (bool, optional) – Whether the layer should use a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • seed (int, optional) – Random seed for dropout.

  • enable_sq (bool or None, optional) – Whether to enable the state quantizer. When the output is already quantized and the state is zero-initialized, the state quantizer should be disabled to avoid double quantization. Default: None, which means it will be set to the value of standalone.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None.

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None.

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None.

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None.

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None.

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-activation quantizer. Default: None.

  • standalone (bool, optional) – Whether this cell is used standalone or as part of a larger RNN layer. Default: True.

property bq

Bias Quantizer

build(input_shape)
call(sequence, states, training=False)
property enable_ebops
property enable_sq
get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

property iq

Input Quantizer

property kq

Kernel Quantizer

property paq

Pre-Activation Quantizer

property qbias
property qkernel
property qrecurrent_kernel
property rkq

Recurrent Kernel Quantizer

property sq

State Quantizer

Module contents

class hgq.layers.rnn.QGRU(*args, **kwargs)

Bases: QRNN, GRU

Gated Recurrent Unit - Cho et al. 2014.

QGRU supports only the backend-native implementation (no cuDNN kernel). When the JAX backend is used and any WRAP quantizers are present, unroll is forced to True to avoid side-effect issues inside the jax.lax.scan loop.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: linear; effectively hard_tanh once the pre-activation quantizer is applied.

  • recurrent_activation (str, optional) – Activation function to use for the recurrent step. Default: linear; effectively hard_sigmoid (slope=0.5) once the pre-activation quantizer is applied.

  • use_bias (bool, optional) – Whether the layer should use a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • activity_regularizer (optional) – Regularizer function applied to the output of the layer (its “activation”). Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • seed (int, optional) – Random seed for dropout.

  • return_sequences (bool, optional) – Whether to return the last output in the output sequence, or the full sequence. Default: False.

  • return_state (bool, optional) – Whether to return the last state in addition to the output. Default: False.

  • go_backwards (bool, optional) – If True, process the input sequence backwards and return the reversed sequence. Default: False.

  • stateful (bool, optional) – If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. Default: False.

  • unroll (bool or None, optional) – None is equivalent to False; however, with the JAX backend, if any WRAP quantizers are used, unroll is forced to True to avoid side-effect issues inside the jax.lax.scan loop. If True, the network will be unrolled; otherwise a symbolic loop is used. Unrolling can speed up an RNN but tends to be more memory-intensive, so it is only suitable for short sequences. Default: None.

  • reset_after (bool, optional) – GRU convention (whether to apply reset gate after or before matrix multiplication). False is “before”, True is “after” (default and cuDNN compatible). Default: True.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None (global default)

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for post-activation quantizer. Default: None (hard tanh like, w/ global default)

  • praq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-recurrent activation quantizer. Default: None (hard sigmoid like, w/ global default)

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None (global default)

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None (global default)

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None (global default)

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None (global default)

  • oq_conf (QuantizerConfig or None, optional) – Quantizer configuration for output quantizer. Default: None (global default)

  • rhq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent hidden state quantizer. Default: None (global default)

  • parallelization_factor (int, optional) – Number of cells to be computed in parallel. Default: 1.

  • enable_oq (bool or None, optional) – Whether to enable output quantizer. Default: None (global default)

  • enable_iq (bool or None, optional) – Whether to enable input quantizer. Default: None (global default)

  • enable_ebops (bool or None, optional) – Whether to enable EBOPs resource consumption estimation. Default: None (global default).

  • beta0 (float or None, optional) – Beta0 parameter for quantizer. Default: None (global default)

Notes

inputs : array_like

A 3D tensor, with shape (batch, timesteps, feature).

mask : array_like, optional

Binary tensor of shape (samples, timesteps) indicating whether a given timestep should be masked (optional). An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored. Defaults to None.

training : bool, optional

Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used (optional). Defaults to None.

initial_state : list, optional

List of initial state tensors to be passed to the first call of the cell (optional, None causes creation of zero-filled initial state tensors). Defaults to None.

get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class hgq.layers.rnn.QGRUCell(*args, **kwargs)

Bases: QLayerBase, GRUCell

Cell class for the GRU layer.

This class processes one step within the whole time sequence input, whereas keras.layers.GRU processes the whole sequence.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: tanh (hyperbolic tangent). If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).

  • recurrent_activation (str, optional) – Activation function to use for the recurrent step. Default: sigmoid. If you pass None, no activation is applied (i.e. “linear” activation: a(x) = x).

  • use_bias (bool, optional) – Whether the layer should use a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • reset_after (bool, optional) – GRU convention (whether to apply reset gate after or before matrix multiplication). False = “before”, True = “after” (default and cuDNN compatible).

  • seed (int, optional) – Random seed for dropout.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None.

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for post-activation quantizer. Default: None.

  • praq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-recurrent activation quantizer. Default: None.

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None.

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None.

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None.

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None.

  • oq_conf (QuantizerConfig or None, optional) – Quantizer configuration for output quantizer. Default: None.

  • rhq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent hidden state quantizer. Default: None.

  • standalone (bool, optional) – Whether this cell is used standalone or as part of a larger RNN layer. EBOPS computation will be skipped when used as a sublayer. Default: True.

  • enable_ebops (bool or None, optional) – Whether to enable energy-efficient bit operations. Default: None.

  • enable_iq (bool or None, optional) – Whether to enable input quantizer. Default: None.

  • enable_oq (bool or None, optional) – Whether to enable output quantizer. Default: None.

Notes

inputs : array_like

A 2D tensor, with shape (batch, features).

states : array_like

A 2D tensor with shape (batch, units), which is the state from the previous time step.

training : bool, optional

Python boolean indicating whether the layer should behave in training mode or in inference mode. Only relevant when dropout or recurrent_dropout is used.
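The single-step recurrence behind these call arguments follows the standard GRU equations of Cho et al. 2014, here with the cuDNN-compatible reset_after=True convention. Below is a minimal NumPy sketch of one unquantized step; `gru_step` and its fused weight layout are illustrative assumptions, not hgq's actual API:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step with the reset_after=True (cuDNN-compatible) convention.

    x: (batch, features), h: (batch, units),
    W: (features, 3*units), U: (units, 3*units), b: (2, 3*units),
    gates ordered [update | reset | candidate] along the last axis.
    """
    units = h.shape[-1]
    xw = x @ W + b[0]  # input projection plus input bias
    hu = h @ U + b[1]  # recurrent projection plus recurrent bias
    z = sigmoid(xw[:, :units] + hu[:, :units])                    # update gate
    r = sigmoid(xw[:, units:2 * units] + hu[:, units:2 * units])  # reset gate
    # reset_after=True: the reset gate scales the already-projected recurrent term
    hh = np.tanh(xw[:, 2 * units:] + r * hu[:, 2 * units:])       # candidate state
    return z * h + (1.0 - z) * hh                                 # new state, (batch, units)
```

In QGRU, the inputs, state, kernels, bias, and pre/post-activation values would additionally pass through the quantizers configured via the *_conf parameters above.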

property bq
build(input_shape)
call(inputs, states, training=False)
property enable_ebops
get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

get_initial_state(batch_size=None)
property iq
property kq
property paq
property praq
qactivation(x)
property qbias
property qkernel
qrecurrent_activation(x)
property qrecurrent_kernel
property rhq
property rkq
property sq
class hgq.layers.rnn.QSimpleRNN(*args, **kwargs)

Bases: QRNN, SimpleRNN

Quantized Fully-connected RNN where the output is to be fed back as the new input.

When the jax backend is used, if any WRAP quantizers are used, unroll will be set to True to avoid the side effect issue in the jax.lax.scan loop.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: linear, effectively hard_tanh by the pre-activation quantizer.

  • use_bias (bool, optional) – Whether the layer uses a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • activity_regularizer (optional) – Regularizer function applied to the output of the layer (its “activation”). Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • return_sequences (bool, optional) – Whether to return the last output in the output sequence, or the full sequence. Default: False.

  • return_state (bool, optional) – Whether to return the last state in addition to the output. Default: False.

  • go_backwards (bool, optional) – If True, process the input sequence backwards and return the reversed sequence. Default: False.

  • stateful (bool, optional) – If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch. Default: False.

  • unroll (bool or None, optional) – None is equivalent to False, except that for the JAX backend, if any WRAP quantizers are used, unroll will be set to True to avoid the side-effect issue in the jax.lax.scan loop. If True, the network will be unrolled; otherwise a symbolic loop will be used. Unrolling can speed up an RNN, but it tends to be more memory-intensive and is only suitable for short sequences. Default: None.

  • seed (int, optional) – Random seed for dropout.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None.

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None.

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None.

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None.

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None.

  • oq_conf (QuantizerConfig or None, optional) – Quantizer configuration for output quantizer. Default: None.

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-activation quantizer. Default: None.

  • parallelization_factor (int, optional) – Factor for parallelization. Default: -1 (automatic).

  • enable_sq (bool or None, optional) – Whether to enable the state quantizer. When the output is already quantized and the state is zero-initialized, the state quantizer should be disabled to avoid double quantization. Default: False.

  • enable_oq (bool or None, optional) – Whether to enable output quantizer. Default: None.

  • enable_iq (bool or None, optional) – Whether to enable input quantizer. Default: None.

  • enable_ebops (bool or None, optional) – Whether to enable energy-efficient bit operations. Default: None.

  • beta0 (float or None, optional) – Beta0 parameter for quantizer. Default: None.

Notes

sequence : array_like

A 3D tensor, with shape [batch, timesteps, feature].

mask : array_like, optional

Binary tensor of shape [batch, timesteps] indicating whether a given timestep should be masked. An individual True entry indicates that the corresponding timestep should be utilized, while a False entry indicates that the corresponding timestep should be ignored.

training : bool, optional

Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the cell when calling it. This is only relevant if dropout or recurrent_dropout is used.

initial_state : list, optional

List of initial state tensors to be passed to the first call of the cell.
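The sequence-level call above can be illustrated with a plain NumPy loop over timesteps. This is a minimal sketch of the unquantized recurrence that ignores masking, dropout, and statefulness; `simple_rnn` and its argument layout are assumptions for illustration, not hgq's API:

```python
import numpy as np

def simple_rnn(seq, W, U, b, return_sequences=False, go_backwards=False):
    """Run a tanh simple RNN over seq of shape (batch, timesteps, features).

    W: (features, units), U: (units, units), b: (units,).
    Returns (batch, units), or (batch, timesteps, units) if return_sequences.
    """
    batch, timesteps, _ = seq.shape
    h = np.zeros((batch, U.shape[0]))  # zero-initialized state
    order = range(timesteps - 1, -1, -1) if go_backwards else range(timesteps)
    outputs = []
    for t in order:
        # each step feeds the previous output back in as the new state
        h = np.tanh(seq[:, t] @ W + h @ U + b)
        outputs.append(h)
    return np.stack(outputs, axis=1) if return_sequences else h
```

With return_state=True the layer would additionally return the final state; QSimpleRNN wraps this recurrence with the input, state, kernel, bias, and pre-activation quantizers listed above.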

get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

class hgq.layers.rnn.QSimpleRNNCell(*args, **kwargs)

Bases: QLayerBaseSingleInput, SimpleRNNCell

Cell class for the QSimpleRNN layer.

This class processes one step within the whole time sequence input, whereas QSimpleRNN processes the whole sequence.

Parameters:
  • units (int) – Positive integer, dimensionality of the output space.

  • activation (str, optional) – Activation function to use. Default: “linear”.

  • use_bias (bool, optional) – Whether the layer should use a bias vector. Default: True.

  • kernel_initializer (str, optional) – Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: “glorot_uniform”.

  • recurrent_initializer (str, optional) – Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: “orthogonal”.

  • bias_initializer (str, optional) – Initializer for the bias vector. Default: “zeros”.

  • kernel_regularizer (optional) – Regularizer function applied to the kernel weights matrix. Default: None.

  • recurrent_regularizer (optional) – Regularizer function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_regularizer (optional) – Regularizer function applied to the bias vector. Default: None.

  • kernel_constraint (optional) – Constraint function applied to the kernel weights matrix. Default: None.

  • recurrent_constraint (optional) – Constraint function applied to the recurrent_kernel weights matrix. Default: None.

  • bias_constraint (optional) – Constraint function applied to the bias vector. Default: None.

  • dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.

  • recurrent_dropout (float, optional) – Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.

  • seed (int, optional) – Random seed for dropout.

  • enable_sq (bool or None, optional) – Whether to enable the state quantizer. When the output is already quantized and the state is zero-initialized, the state quantizer should be disabled to avoid double quantization. Default: None, which means it will be set to the value of standalone.

  • iq_conf (QuantizerConfig or None, optional) – Quantizer configuration for input quantizer. Default: None.

  • sq_conf (QuantizerConfig or None, optional) – Quantizer configuration for state quantizer. Default: None.

  • kq_conf (QuantizerConfig or None, optional) – Quantizer configuration for kernel quantizer. Default: None.

  • rkq_conf (QuantizerConfig or None, optional) – Quantizer configuration for recurrent kernel quantizer. Default: None.

  • bq_conf (QuantizerConfig or None, optional) – Quantizer configuration for bias quantizer. Default: None.

  • paq_conf (QuantizerConfig or None, optional) – Quantizer configuration for pre-activation quantizer. Default: None.

  • standalone (bool, optional) – Whether this cell is used standalone or as part of a larger RNN layer. Default: True.

property bq

Bias Quantizer

build(input_shape)
call(sequence, states, training=False)
property enable_ebops
property enable_sq
get_config()

Returns the config of the object.

An object config is a Python dictionary (serializable) containing the information needed to re-instantiate it.

property iq

Input Quantizer

property kq

Kernel Quantizer

property paq

Pre-Activation Quantizer

property qbias
property qkernel
property qrecurrent_kernel
property rkq

Recurrent Kernel Quantizer

property sq

State Quantizer
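The roles of the cell's quantizers (kq for the kernel, rkq for the recurrent kernel, sq for the state, bq for the bias, paq for the pre-activation) can be sketched with a conceptual fixed-point fake-quantizer. This only illustrates where quantization is applied in one cell step; `fixed_point` and its bit-width arguments are hypothetical and do not reflect hgq's actual quantizer implementation:

```python
import numpy as np

def fixed_point(x, integer_bits=2, fractional_bits=6):
    """Conceptual signed fixed-point fake-quantizer: round, then saturate."""
    scale = 2.0 ** fractional_bits
    hi = 2.0 ** integer_bits - 1.0 / scale
    lo = -(2.0 ** integer_bits)
    return np.clip(np.round(x * scale) / scale, lo, hi)

def q_simple_rnn_cell_step(x, h, W, U, b):
    """One quantized simple-RNN step: quantize operands around the matmuls."""
    qh = fixed_point(h)  # sq: state quantizer
    qW = fixed_point(W)  # kq: kernel quantizer
    qU = fixed_point(U)  # rkq: recurrent kernel quantizer
    pre = x @ qW + qh @ qU + fixed_point(b)  # bq: bias quantizer
    return np.tanh(fixed_point(pre))  # paq: pre-activation quantizer, then activation
```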