Proposer
-
Ekaterina Komendantskaya
-
Title
-
Explainable AI for Verification
-
Goal
-
-
Description
- As machine learning algorithms find their way into safety-critical systems, such as autonomous cars, robot nurses, and conversational agents, ensuring their safety and security becomes important. It is often difficult to explain and interpret machine learning models, yet this seems to be key to their safe and secure implementation and application.
Explainability of AI is a growing subject area with diverse applications, and several tools, such as LIME, are already available on the market. These tools are gradually finding their place in AI verification. You will study applications of explainable-AI methods in the area of verification, and will create your own prototype tools that support these methods.
You will have a chance to collaborate with researchers in the Lab for AI and Verification: LAIV.uk.
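To give a flavour of the methods involved: LIME's core idea is to explain one prediction of a black-box model by fitting a weighted linear surrogate on perturbed samples around that input. The sketch below implements this idea in plain NumPy; the black-box function, the instance, and the kernel width are illustrative assumptions for this proposal, not part of any specific tool.

```python
import numpy as np

# Hypothetical black-box model: class-1 probability from two features.
# In a real project this would be a trained network or ensemble.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 0.5 * X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([0.2, 0.4])  # the instance whose prediction we explain

# 1. Sample perturbations around the instance.
Z = x0 + rng.normal(scale=0.5, size=(500, 2))
y = black_box(Z)

# 2. Weight samples by proximity to x0 (exponential kernel).
d = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-(d ** 2) / (2 * 0.5 ** 2))

# 3. Fit a weighted linear surrogate (weighted least squares with intercept).
A = np.hstack([Z, np.ones((len(Z), 1))])
coef = np.linalg.solve((A * w[:, None]).T @ A, (A * w[:, None]).T @ y)

# The surrogate's coefficients act as local feature importances:
# here feature 0 pushes the prediction up, feature 1 pushes it down.
print(coef[:2])
```

The off-the-shelf LIME library wraps exactly this loop behind `LimeTabularExplainer`; a verification-oriented project might instead ask when such surrogate explanations are themselves trustworthy.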
- Resources
-
-
Background
-
-
Url
-
-
Difficulty Level
-
High
-
Ethical Approval
-
None
-
Number Of Students
-
3
-
Supervisor
-
Ekaterina Komendantskaya
-
Keywords
-
-
Degrees
-