IROHMS accelerator grant
The IROHMS accelerator grant is a financial initiative offered by the Cardiff Centre for Artificial Intelligence, Robotics and Human-Machine Systems (IROHMS).
The fund offers our working groups opportunities to bid for financial support for collaborative research activity in artificial intelligence, robotics and human-machine systems.
Supported activity includes, but is not limited to:
- running events
- inviting guest speakers
- interacting with industrial collaborators
- developing prototypes, test beds and ideas
- producing proofs of concept.
Projects
The following projects are funded by the IROHMS accelerator grant:
Cyber security risk in changing work environments as a result of COVID-19 lockdown
This project addresses the growing cyber security risk presented by changes in working environments over the short term, and likely the longer term, because of COVID-19 lockdowns.
The project aims to understand changes in employee cyber security behaviour so that it can recommend best practices for supporting remote workers, taking into account the additional pressures of working from home.
The project involves online surveys and individual interviews to explore:
- practices under the different stages of lockdown (past and current)
- changes that have taken place since moving to domestic work environments
- barriers and challenges facing remote workers
- security-related workarounds and risks that workers are unintentionally encouraged to take.
The study will inform the field about these risks and provide the opportunity to develop actionable, evidence-based guidelines for supporting widespread remote working.
Rapid Internal Simulation of Knowledge (RISK)
RISK aims to make machine systems better at communicating their intentions and reasoning.
There is a growing need for explainable AI, as many machine learning techniques operate as black boxes. The human mind is difficult to understand in much the same way. If humans can be trusted with safety-critical processes even though the human mind is also a black box, a machine should be able to be trusted with the same processes.
Tools that allow a machine to explain its decisions are currently lacking. The project proposes using simulations to visualise the decisions and other actions of machine systems: decision processes can be shown, and language generation added, to explain these simulations and decisions.
This study develops the first parts of the system, using simple simulations to determine a robot's response. It will examine different types of participant feedback to determine how a robot can best explain its actions.
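As an illustration only of the general idea behind deciding via internal simulation and explaining from the simulated outcomes, consider the minimal Python sketch below. It is not the project's actual system: the grid world, the scoring rule and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class State:
    robot: tuple   # (x, y) position of the robot
    goal: tuple    # (x, y) position of the goal
    hazard: tuple  # (x, y) position of a hazard to avoid

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def simulate(state: State, action: str) -> tuple:
    """Internally simulate one action and return the predicted position."""
    dx, dy = ACTIONS[action]
    return (state.robot[0] + dx, state.robot[1] + dy)

def score(state: State, pos: tuple) -> float:
    """Score a simulated outcome: closer to the goal is better; hazards are ruled out."""
    if pos == state.hazard:
        return float("-inf")
    return -(abs(pos[0] - state.goal[0]) + abs(pos[1] - state.goal[1]))

def choose_and_explain(state: State) -> tuple:
    """Run an internal simulation of every action, pick the best,
    and generate a template-based explanation of the choice."""
    outcomes = {a: simulate(state, a) for a in ACTIONS}
    scores = {a: score(state, pos) for a, pos in outcomes.items()}
    best = max(scores, key=scores.get)
    rejected = [a for a, s in scores.items() if s == float("-inf")]
    explanation = (
        f"I simulated {len(ACTIONS)} possible moves. Moving {best} "
        f"brings me closest to the goal at {state.goal}."
    )
    if rejected:
        explanation += (
            f" I rejected {', '.join(rejected)} because it would lead "
            f"into the hazard at {state.hazard}."
        )
    return best, explanation

state = State(robot=(1, 1), goal=(3, 1), hazard=(1, 2))
action, why = choose_and_explain(state)
print(action)  # right
print(why)
```

Because the explanation is generated from the same simulated outcomes that drove the decision, it is faithful to the machine's actual reasoning rather than a post-hoc rationalisation.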
XAI and I
This project develops deep neural networks (DNNs) that humans can understand, by building concepts identified as understandable to humans into the network's learning process.
This allows DNNs to be applied to real-world problems that require human understanding of DNN performance, rather than just good performance. DNNs and human brains are treated as black boxes that need to understand each other in order to enhance the performance of both. We train DNNs and humans on the same learning task and use their performance to cross-inform the learning and the understandability of the domain.
Such human understanding is necessary for the real-world implementation of DNN-based decision support systems. This applies to fields where trust in decisions is critical (e.g. security, medical imaging, financial prediction and intelligent mobility).
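One established way to build networks that learn through human-understandable concepts is a concept bottleneck architecture, in which the network must route its decision through a layer of human-nameable concepts. The PyTorch sketch below is purely illustrative, not the project's codebase; the layer sizes, loss weighting and all names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckNet(nn.Module):
    """Predicts human-nameable concepts first, then makes its final
    decision only from those concepts, so each prediction can be read
    back in the same terms a human annotator would use."""

    def __init__(self, n_inputs: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Raw inputs -> scores for each human-understandable concept
        self.input_to_concepts = nn.Sequential(
            nn.Linear(n_inputs, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
        )
        # The classifier sees nothing but the concept layer
        self.concepts_to_label = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concepts = torch.sigmoid(self.input_to_concepts(x))
        return self.concepts_to_label(concepts), concepts

def joint_loss(label_logits, concept_probs, labels, concept_targets, alpha=0.5):
    """Reward the network both for the right answer and for activating
    the same concepts a human would; alpha trades the two off."""
    task_term = F.cross_entropy(label_logits, labels)
    concept_term = F.binary_cross_entropy(concept_probs, concept_targets)
    return task_term + alpha * concept_term

# Tiny smoke test with random data: 8 examples, 10 input features,
# 4 named concepts, 3 output classes.
model = ConceptBottleneckNet(n_inputs=10, n_concepts=4, n_classes=3)
x = torch.randn(8, 10)
labels = torch.randint(0, 3, (8,))
concept_targets = torch.randint(0, 2, (8, 4)).float()
logits, concepts = model(x)
loss = joint_loss(logits, concepts, labels, concept_targets)
loss.backward()  # gradients flow to both the task and concept objectives
```

Because the decision layer can only see the concept activations, inspecting those activations tells a human exactly which concepts drove each prediction, which is the kind of mutual understandability the project targets.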