-
Proposer
-
Rob Stewart
-
Title
-
Compressing neural networks for FPGA-based hardware accelerators
-
Goal
-
Explore the performance tradeoffs of compressing neural networks for programmable hardware
-
Description
- For executing deep learning algorithms based on neural networks, there is a shift from expensive (in both cost and energy) centralised GPUs to Edge Computing devices such as embedded CPUs and programmable hardware (Field Programmable Gate Arrays, FPGAs).
Trained neural networks require hundreds of megabytes of memory and substantial computational resources. Emerging compression techniques can reduce these resource costs significantly; however, compression also affects the accuracy of neural networks.
This project will explore industry-driven frameworks, e.g. Xilinx's Python-based FINN framework and Intel's Distiller framework, to assess speed, accuracy and resource-use performance metrics across a set of neural network models.
The outcomes of the project will inform users of neural networks and of embedded processors on how best to construct and refine deep learning models for application domains such as smart cameras for remote image processing, autonomous robots, and driverless vehicles.
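To illustrate the trade-off the project will study, the sketch below shows one common compression technique: uniform quantization of float32 weights to int8, which cuts memory use fourfold at the cost of a bounded approximation error. This is a minimal, framework-free illustration, not code from FINN or Distiller; all function names here are illustrative.

```python
# Minimal sketch of uniform affine weight quantization (illustrative only):
# float32 weights are mapped to int8, reducing storage 4x while introducing
# a reconstruction error bounded by the quantization scale.
import numpy as np

def quantize_int8(weights):
    """Map float32 weights onto the symmetric int8 range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32 ...
print(w.nbytes // q.nbytes)  # 4
# ... at the cost of a per-weight error no larger than half the scale.
print(bool(np.abs(w - w_hat).max() <= 0.5 * scale))  # True
```

Frameworks such as FINN and Distiller apply far more sophisticated schemes (per-channel scales, pruning, quantization-aware training), but this memory-versus-accuracy tension is the performance trade-off the project will measure.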
-
Resources
-
-
Background
-
-
Url
-
-
Difficulty Level
-
Challenging
-
Ethical Approval
-
None
-
Number Of Students
-
1
-
Supervisor
-
Rob Stewart
-
Keywords
-
-
Degrees
-