BitLogic: A Training Framework for Gradient-Based FPGA-Native Neural Networks
TMLR (under review)
End-to-end framework for training FPGA-native neural networks using differentiable lookup tables instead of MAC operations. Provides modular PyTorch APIs, hardware-aware components, and automated RTL export with bit-accurate equivalence. Achieves 72.3% CIFAR-10 accuracy at <20 ns inference latency using only LUT resources (<0.3M LUTs).
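As a rough illustration of the "differentiable lookup table" idea, the sketch below shows one common relaxation (not necessarily BitLogic's exact formulation): each k-input LUT holds a learnable truth table of 2^k logits, relaxed inputs in [0, 1] are treated as independent Bernoulli probabilities, and the output is the expected table value under that distribution. The `DiffLUT` class name and all details are illustrative assumptions, not the framework's API.

```python
import torch
import torch.nn as nn


class DiffLUT(nn.Module):
    """Differentiable k-input lookup table (illustrative sketch).

    Relaxed inputs in [0, 1] are read as Bernoulli probabilities over
    the 2^k binary input patterns; the output is the expected table
    value. After training, thresholding the table logits at 0 recovers
    a hard truth table suitable for a hardware LUT.
    """

    def __init__(self, k: int):
        super().__init__()
        self.k = k
        # One learnable logit per truth-table entry.
        self.table = nn.Parameter(torch.randn(2 ** k))
        # Enumerate all 2^k binary input patterns once, as floats.
        patterns = torch.tensor(
            [[(i >> b) & 1 for b in range(k)] for i in range(2 ** k)],
            dtype=torch.float32,
        )
        self.register_buffer("patterns", patterns)  # shape (2^k, k)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., k) with entries in [0, 1].
        # P(pattern) = prod over bits of (x if bit == 1 else 1 - x).
        x = x.unsqueeze(-2)                                # (..., 1, k)
        p = self.patterns                                  # (2^k, k)
        probs = (x * p + (1 - x) * (1 - p)).prod(dim=-1)   # (..., 2^k)
        return probs @ torch.sigmoid(self.table)           # output in (0, 1)
```

With a hard binary input equal to one of the patterns, the probability vector is one-hot and the LUT returns exactly the corresponding table entry, so the relaxed model agrees with the post-training hardware LUT on binary inputs.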