hgq package
Subpackages
- hgq.config package
- hgq.constraints package
- hgq.layers package
  - Subpackages
  - Submodules
    - hgq.layers.activation module
    - hgq.layers.batch_normalization module
    - hgq.layers.conv module
    - hgq.layers.einsum_dense_batchnorm module
    - hgq.layers.linformer_attention module
    - hgq.layers.multi_head_attention module
    - hgq.layers.pooling module
    - hgq.layers.softmax module
  - Module contents: QAdd, QAveragePooling1D, QAveragePooling2D, QAveragePooling3D, QAveragePow2, QAvgPool1D, QAvgPool2D, QAvgPool3D, QBatchNormDense, QBatchNormalization, QConv1D, QConv2D, QConv3D, QDense, QDenseT, QDot, QEinsum, QEinsumDense, QEinsumDenseBatchnorm, QGRU, QGlobalAveragePooling1D, QGlobalAveragePooling2D, QGlobalAveragePooling3D, QGlobalAvgPool1D, QGlobalAvgPool2D, QGlobalAvgPool3D, QGlobalMaxPool1D, QGlobalMaxPool2D, QGlobalMaxPool3D, QGlobalMaxPooling1D, QGlobalMaxPooling2D, QGlobalMaxPooling3D, QLinformerAttention, QMaxPool1D, QMaxPool2D, QMaxPool3D, QMaxPooling1D, QMaxPooling2D, QMaxPooling3D, QMaximum, QMeanPow2, QMinimum, QMultiHeadAttention, QMultiply, QSimpleRNN, QSoftmax, QSubtract, QSum, QUnaryFunctionLUT, Quantizer
- hgq.quantizer package
- hgq.regularizers package
- hgq.utils package
Module contents
High Granularity Quantization 2
The HGQ2 library provides a set of tools for quantizing neural networks with Keras, targeting deployment on edge devices, mainly FPGAs. The library is designed to be scalable, allowing fully-quantized models suitable for deployment to be constructed with minimal effort.
Provides
- Scalable quantization-aware training
- Drop-in replacement quantized Keras layers (see the sketch after this list)
- Support for various quantization schemes
- Trainable weights and quantization bitwidths
- TensorFlow/JAX/PyTorch backend support with Keras v3
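As a quick illustration of the drop-in layers, here is a minimal sketch of building a quantized model with Keras v3. The layer names QConv2D and QDense come from the hgq.layers listing above; their constructor arguments are assumed here to mirror the corresponding built-in Keras layers.

```python
import keras
from hgq.layers import QConv2D, QDense  # quantized drop-in layers from hgq.layers

# Assumption: QConv2D/QDense accept the same core arguments as keras.layers.Conv2D/Dense.
model = keras.Sequential([
    keras.layers.Input((28, 28, 1)),
    QConv2D(8, (3, 3), activation='relu'),   # quantized convolution
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    QDense(10),                              # quantized classifier head
])

# Trained like any other Keras model; quantization bitwidths are learned during training.
model.compile(optimizer='adam',
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```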
Library Structure
The library is organized in a Keras-like structure, with the following modules:
- config: Configuration settings for the layers and quantizers
- layers: Quantized Keras layers
- quantizer: Quantizer wrappers and internal quantizers
- utils: Utility functions, classes, and some useful syntactic sugar
- constraints: Custom constraints for quantization-aware training
- regularizers: Custom regularizers for quantization-aware training
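The sketch below shows how this structure maps onto imports. The subpackage names are taken from the structure above and the layer classes from the hgq.layers listing; which symbols each subpackage exposes beyond that is an assumption and may differ between releases.

```python
# Subpackage layout as described above; the exact public symbols of each
# subpackage are not shown on this page and may vary per release.
from hgq import config, constraints, layers, quantizer, regularizers, utils

from hgq.layers import QDense, QBatchNormalization  # quantized Keras layers
# hgq.quantizer, hgq.constraints, and hgq.regularizers provide the quantizer
# wrappers, constraints, and regularizers used by these layers during
# quantization-aware training.
```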