hgq package
Subpackages
- hgq.config package
- hgq.constraints package
- hgq.layers package
  - Subpackages
  - Submodules
    - hgq.layers.activation module
    - hgq.layers.batch_normalization module
    - hgq.layers.conv module
    - hgq.layers.einsum_dense_batchnorm module
    - hgq.layers.linformer_attention module
    - hgq.layers.multi_head_attention module
    - hgq.layers.pooling module
    - hgq.layers.softmax module
  - Module contents
    - QAdd
    - QAveragePooling1D
    - QAveragePooling2D
    - QAveragePooling3D
    - QAveragePow2
    - QAvgPool1D
    - QAvgPool2D
    - QAvgPool3D
    - QBatchNormDense
    - QBatchNormalization
    - QConv1D
    - QConv2D
    - QConv3D
    - QDense
    - QDot
    - QEinsum
    - QEinsumDense
    - QEinsumDenseBatchnorm
    - QGlobalAveragePooling1D
    - QGlobalAveragePooling2D
    - QGlobalAveragePooling3D
    - QGlobalAvgPool1D
    - QGlobalAvgPool2D
    - QGlobalAvgPool3D
    - QGlobalMaxPool1D
    - QGlobalMaxPool2D
    - QGlobalMaxPool3D
    - QGlobalMaxPooling1D
    - QGlobalMaxPooling2D
    - QGlobalMaxPooling3D
    - QLinformerAttention
    - QMaxPool1D
    - QMaxPool2D
    - QMaxPool3D
    - QMaxPooling1D
    - QMaxPooling2D
    - QMaxPooling3D
    - QMaximum
    - QMeanPow2
    - QMinimum
    - QMultiHeadAttention
    - QMultiply
    - QSoftmax
    - QSubtract
    - QSum
    - QUnaryFunctionLUT
    - Quantizer
- hgq.quantizer package
- hgq.regularizers package
- hgq.utils package
Module contents
S-QUARK
Scalable Quantization-Aware Realtime Keras
The S-QUARK library provides a set of tools for quantizing Keras neural networks for deployment on edge devices, mainly FPGAs. The library is designed to be scalable, allowing fully-quantized models suitable for deployment to be constructed with minimal effort.
Provides
- Scalable quantization-aware training
- Drop-in replacement quantized Keras layers
- Support for various quantization schemes
- Trainable weights and quantization bitwidths
- TensorFlow/JAX/PyTorch backend support via Keras v3
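Because the Q-layers are drop-in replacements for their Keras counterparts, a quantized model is assembled exactly like a plain Keras one. Below is a minimal sketch using QConv2D, QMaxPooling2D, and QDense from the index above; it assumes these layers accept the same constructor arguments as the corresponding keras.layers classes and leaves every quantizer at its default configuration.

```python
import keras

# Quantized drop-in layers; names taken from the hgq.layers index above.
from hgq.layers import QConv2D, QDense, QMaxPooling2D

# A small quantization-aware classifier. The constructor arguments are
# assumed to mirror keras.layers.Conv2D / MaxPooling2D / Dense.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    QConv2D(16, kernel_size=3, activation='relu'),  # quantized Conv2D
    QMaxPooling2D(pool_size=2),                     # quantized MaxPooling2D
    keras.layers.Flatten(),
    QDense(10),                                     # quantized Dense, logits out
])

model.compile(
    optimizer='adam',
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
```

Training then proceeds with the usual model.fit call; under quantization-aware training, both the weights and the quantization bitwidths are updated during backpropagation, as noted above.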
Library Structure
The library is organized in a Keras-like structure, with the following modules:
- config: Configuration settings for the layers and quantizers
- layers: Quantized Keras layers
- quantizer: Quantizer wrappers and internal quantizers
- utils: Utility functions, classes, and some useful syntactic sugar
- constraints: Custom constraints for quantization-aware training
- regularizers: Custom regularizers for quantization-aware training
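To make the layout concrete, the import map below mirrors that structure. Only the names pulled from hgq.layers are confirmed by the index on this page; the remaining lines are bare module imports reflecting the packages listed and assume nothing about their members.

```python
# Quantized layers and the Quantizer wrapper, as listed in the
# hgq.layers module contents above.
from hgq.layers import QConv2D, QDense, QSoftmax, Quantizer

# The other packages from the layout above, imported as modules only
# (no specific members are assumed here).
import hgq.config        # configuration for layers and quantizers
import hgq.constraints   # constraints for quantization-aware training
import hgq.quantizer     # quantizer wrappers and internal quantizers
import hgq.regularizers  # regularizers for quantization-aware training
import hgq.utils         # utility functions, classes, and sugar
```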
Compatibility
- hls4ml: work in progress (WIP)
- QKeras: never, as QKeras is built on Keras v2. However, this library ships with a QKeras-like compatibility API; refer to the top-level qkeras module (not the one under this package) for more information.