Proposer
Rob Stewart
Title
Verification aware quantisation of neural networks for FPGAs
Goal
Develop a new quantisation algorithm guided by robustness against adversarial attack
Description
Neural networks have until recently been trained and executed predominantly on GPUs in data centres. The recent trend is to push trained neural networks out to Edge Computing devices, such as smart phones, autonomous vehicles and smart cameras. One hardware architecture in this space is the FPGA, a programmable hardware chip that can be deployed alongside a sensor for low-powered operation while achieving extremely high throughput. This makes FPGAs ideally suited to data-driven AI problems that involve high volumes of input data and require a high throughput of inferences.

One downside of FPGAs and other Edge Computing AI accelerators is that they have limited memory for storing trained neural network parameters. The solution is to compress neural networks, e.g. by quantising the precision of their weights. Doing so inevitably reduces inference accuracy, though usually by a surprisingly small amount. A more serious effect of quantisation is the loss of any robustness the neural network might have against adversarial attack.

Current quantisation algorithms focus on the trade-off between accuracy and memory requirements. This project will instead design a new quantisation algorithm, one guided by formal verification approaches, such that reducing the precision of neural network parameters does not affect the robustness of the overall network to adversarial attack. This will be done in the context of the open source Brevitas component in the FINN compiler framework developed by Xilinx Research. The project can be carried out in close collaboration with Xilinx Research if the student wishes.
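To illustrate the kind of weight compression the project targets, the sketch below shows uniform symmetric quantisation of a weight tensor in plain NumPy. This is an illustrative simplification, not the Brevitas API (Brevitas learns its quantisers during training); the function name and the per-tensor scaling scheme are assumptions made for the example. It demonstrates the core trade-off: fewer bits means less memory per weight but larger reconstruction error.

```python
import numpy as np

def quantise_weights(w, bits=8):
    """Uniform symmetric quantisation of a float weight tensor to `bits` bits.

    Illustrative only: maps each weight onto a small signed integer grid,
    then dequantises back to floats so the error is easy to measure.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(w)) / qmax                # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax)   # integer codes
    return q * scale                                # dequantised weights

# Lower precision gives a larger worst-case reconstruction error:
rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
err_8bit = np.abs(quantise_weights(w, bits=8) - w).max()
err_2bit = np.abs(quantise_weights(w, bits=2) - w).max()
assert err_8bit <= err_2bit
```

The project's question is what this growing quantisation error does not to plain accuracy but to adversarial robustness, and how a verification-guided quantiser could bound that loss.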
Resources
Background
Url
External Link
Difficulty Level
High
Ethical Approval
None
Number Of Students
1
Supervisor
Rob Stewart
Keywords
Degrees