View Proposal
-
Proposer
-
Andrew Ireland
-
Title
-
A Neuro-symbolic Assistant for Explaining Accidents
-
Goal
-
Build a tool that assists in explaining accidents involving autonomous systems
-
Description
-
Hazard analysis is the process of trying to anticipate and then prevent or mitigate accidents. The goal of this project is to build a tool that exploits the results of hazard analysis in order to explain how an accident occurred. To achieve this, the tool will first need to capture the logical reasoning (the symbolic part) that underpinned the hazard analysis. Second, the tool will need to systematically disrupt that logical reasoning in order to model a given accident. Such disruptions alone cannot fully explain an accident; moreover, many of the disruptions will be infeasible. This is where we propose the use of Large Language Models (the neuro part): each logical disruption will be used to automatically generate a prompt for an LLM, e.g., Google Gemini. The output from the tool will take the form of an accident report. Ultimately, however, a human accident investigator would need to decide whether or not to accept the findings provided within the auto-generated accident report.
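As an illustration of the 'disrupt then prompt' idea described above, the following Python sketch represents the hazard-analysis reasoning as a set of named assumptions, generates disruptions by negating one assumption at a time, and turns each disruption into a prompt for an LLM. The assumption names, the accident description and the prompt wording are hypothetical placeholders, and the actual LLM call (e.g., to Gemini) is omitted; this is a sketch of the intended architecture, not a definitive design.

    # Minimal sketch of the "disrupt then prompt" loop. The hazard-analysis
    # reasoning is represented as named assumptions; disruptions are generated
    # by negating one assumption at a time; each disruption becomes a prompt.
    # All rule names and the accident description are hypothetical.

    from itertools import combinations

    # Hypothetical hazard-analysis argument: assumptions that together were
    # believed to rule out the hazard "pedestrian not detected".
    assumptions = {
        "A1": "the lidar sensor is operational",
        "A2": "the perception software classifies pedestrians correctly",
        "A3": "the braking subsystem responds within 0.5 seconds",
    }

    accident = "the vehicle failed to stop for a pedestrian at a crossing"

    def disruptions(assumptions, max_size=1):
        """Yield candidate disruptions: subsets of assumptions to negate."""
        names = sorted(assumptions)
        for size in range(1, max_size + 1):
            for subset in combinations(names, size):
                yield subset

    def build_prompt(subset):
        """Turn one logical disruption into a prompt for an LLM."""
        negated = "; ".join(f"NOT ({assumptions[name]})" for name in subset)
        return (
            f"Accident: {accident}.\n"
            f"The hazard analysis assumed: {', '.join(assumptions.values())}.\n"
            f"Suppose the following assumption(s) failed: {negated}.\n"
            "Is this failure physically and operationally plausible? "
            "If so, explain how it could have led to the accident; "
            "if not, explain why it is infeasible."
        )

    for subset in disruptions(assumptions):
        prompt = build_prompt(subset)
        # The LLM API call is omitted here; its answers would be collated
        # into a draft accident report for a human investigator to review.
        print(prompt, end="\n\n")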
Problem frames is a technique that uses common patterns occurring within problems to capture requirements within the context of software engineering. A feature of the problem frames approach is that it draws a distinction between system requirements (the world view) and software requirements (the software view). The consistency between the world and software views can then be verified mechanically. Note that inconsistencies at this level have historically led to catastrophic system failures. Consistency verification will typically rely upon assumptions about the 'world' in which the intended system will operate. Problem frames does not help with validating such assumptions. This is where LLMs might help, i.e., we propose to use LLMs to explain the validity (or otherwise) of a given set of assumptions. Ultimately, it will be the responsibility of a human engineer to decide whether they believe the LLM's explanations. But a tool that combines symbolic 'consistency verification' with neuro (i.e., LLM) explanations of 'assumption validation' could increase the productivity of an engineer. It would be a cool tool too!
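A minimal sketch of the symbolic 'consistency verification' step is given below, using Python with sympy's propositional logic support as one possible choice of 'library support for propositional reasoning'. It checks the usual problem-frames adequacy condition, namely that the world assumptions W together with the software specification S entail the requirement R (equivalently, that W & S & NOT R is unsatisfiable). The door-control propositions are hypothetical; the LLM-based explanation of whether W is a valid assumption would sit after this check.

    # Minimal sketch, assuming sympy is available. Checks that world
    # assumptions W and the specification S entail the requirement R by
    # testing W & S & ~R for unsatisfiability. Propositions are hypothetical.

    from sympy import symbols
    from sympy.logic.boolalg import Implies, And, Not
    from sympy.logic.inference import satisfiable

    button_pressed, signal_sent, door_opens = symbols(
        "button_pressed signal_sent door_opens"
    )

    # World view: assumption about the environment (the actuator is reliable).
    W = Implies(signal_sent, door_opens)

    # Software view: specification of the machine.
    S = Implies(button_pressed, signal_sent)

    # System requirement, stated in world terms.
    R = Implies(button_pressed, door_opens)

    counterexample = satisfiable(And(W, S, Not(R)))
    if counterexample is False:
        print("Consistent: W and S entail R.")
    else:
        print("Inconsistent; counterexample:", counterexample)

    # An LLM would then be prompted to explain whether the world assumption W
    # actually holds in the target environment; the engineer decides whether
    # to accept that explanation.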
-
Resources
-
An LLM and an appropriate programming language, i.e., one with library support for propositional reasoning.
-
Background
-
Logic and Proof, AI, Software Engineering.
-
Url
-
-
Difficulty Level
-
High
-
Ethical Approval
-
None
-
Number Of Students
-
1
-
Supervisor
-
Andrew Ireland
-
Keywords
-
-
Degrees
-
Bachelor of Science in Computer Science
Master of Engineering in Software Engineering
Master of Science in Software Engineering