
Recent years have seen significant progress in robotics and AI (Artificial Intelligence), enabling robots and machines to accomplish challenging missions autonomously. Research in robotics and AI can have an enormous impact on industry and society through the collaborative development of solutions to challenges driven by real-world needs.

The Robotics and Autonomous Intelligent Machines (RAIM) group undertakes fundamental research in autonomous navigation, manipulation, machine vision/smart sensing, and more, to create autonomous robots and intelligent systems that can tackle challenging problems in unstructured environments, across sectors including healthcare, energy, agriculture, manufacturing, and environmental science.

We have four main members and more than 13 associate members from across all disciplines within the School of Engineering. Our broad spectrum of expertise enables us to apply our work in several high-impact research areas, including:

  • robot perception and robot learning
  • autonomous navigation and manipulation
  • human-robot collaboration
  • robot and machine vision
  • smart sensing and signal processing

Research

We aim to give unmanned systems a higher level of autonomy, providing capabilities in advanced situational awareness, localisation and mapping, multi-modal sensing, robot learning, autonomous manipulation, path planning, and human-robot collaboration. This includes robot perception, where we deploy advanced perceptual capabilities, such as vision, to understand environments and interpret human gestures.

Our current research areas include:

Robot perception and learning

We aim to enhance robot cognitive capability through advanced multi-modal perception and continuous self-learning, enabling robots to understand their environment in 3D, predict situations via machine learning, and support humans in real-world problems. This includes:

  • intelligent sensor processing and sensor fusion for multi-modal sensors, such as cameras and Lidar, for environmental mapping and robot localisation, including in GPS-denied environments (a toy fusion sketch follows this list)
  • robot learning and planning for safe and reliable navigation solutions for autonomous systems and mobile robots to operate in real-world environments that are safety-critical, dynamic, and unstructured
  • robot learning and planning for autonomous manipulation tasks, such as robotic assembly and motion planning.
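To make the sensor-fusion idea above concrete, here is a minimal sketch: a 1D constant-velocity Kalman filter that fuses infrequent, noisy GPS position fixes with wheel-odometry velocity, so the estimate coasts on odometry whenever GPS drops out. All models, noise values, and rates are illustrative assumptions, not parameters of the group's actual systems.

```python
"""Minimal sketch of multi-sensor fusion for robot localisation.

A 1D constant-velocity Kalman filter fusing noisy GPS position fixes
with wheel-odometry velocity. All numbers are illustrative assumptions.
"""
import numpy as np

dt = 0.1                                  # time step (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition: [position, velocity]
Q = np.diag([0.01, 0.05])                 # process noise covariance

H_gps = np.array([[1.0, 0.0]])            # GPS observes position only
R_gps = np.array([[4.0]])                 # GPS noise variance (~2 m std dev)

H_odo = np.array([[0.0, 1.0]])            # odometry observes velocity only
R_odo = np.array([[0.04]])                # odometry noise variance

x = np.array([0.0, 0.0])                  # initial state estimate
P = np.eye(2)                             # initial covariance

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for step in range(50):
    true_pos = 1.0 * step * dt             # robot moving at 1 m/s
    x, P = predict(x, P)
    x, P = update(x, P, np.array([1.0]) + rng.normal(0, 0.2), H_odo, R_odo)
    if step % 10 == 0:                     # GPS fixes arrive less often than odometry
        x, P = update(x, P, np.array([true_pos]) + rng.normal(0, 2.0), H_gps, R_gps)

print(f"fused estimate: pos={x[0]:.2f} m, vel={x[1]:.2f} m/s")
```

Between GPS fixes the filter relies purely on the odometry updates, which is the same mechanism that lets a fused localiser keep operating in GPS-denied stretches.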

Human-robot collaboration

Our work in this area introduces humans into the loop of robotic control, which presents additional technical challenges, such as:

  • environmental understanding through visual object recognition and pose identification for grasping and manipulations in unstructured workspaces
  • human behaviour monitoring and understanding
  • shared task planning for robots and humans or multiple robots
  • robot programming by demonstration, where robots learn new skills by observing demonstrations by human operators (a toy sketch follows this list).
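As a toy illustration of programming by demonstration, the sketch below time-normalises several synthetic demonstrated trajectories and averages them into a reference motion the robot could replay. This is an illustrative assumption only; real systems typically use richer models such as dynamic movement primitives or learned policies.

```python
"""Minimal sketch of robot programming by demonstration.

Several demonstrated end-effector trajectories (synthetic 1-DoF reaching
motions here) are resampled to a common length and averaged into a
reference motion. A toy stand-in for learning from demonstration.
"""
import numpy as np

def resample(traj, n=100):
    """Linearly resample a trajectory to n evenly spaced points."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.interp(t_new, t_old, traj)

rng = np.random.default_rng(1)
demos = []
for _ in range(5):                          # five noisy "human" demonstrations
    length = rng.integers(80, 120)          # demonstrations differ in duration
    t = np.linspace(0.0, 1.0, length)
    demos.append(0.3 * np.sin(np.pi * t) + rng.normal(0, 0.01, length))

aligned = np.stack([resample(d) for d in demos])
mean_motion = aligned.mean(axis=0)          # learned reference trajectory
std_motion = aligned.std(axis=0)            # variability across demonstrations

print(f"peak of learned motion: {mean_motion.max():.3f} m "
      f"(demo spread at peak: {std_motion[mean_motion.argmax()]:.3f} m)")
```

The spread across demonstrations is itself useful: regions where demonstrators agree closely can be tracked stiffly, while high-variance regions can be left compliant.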

Autonomous structural health monitoring

We have extensive experience in structural health monitoring (SHM), specifically acoustic emission and acousto-ultrasonics. We aim to develop solutions for autonomous SHM by integrating our low-power, wireless acoustic emission system into autonomous systems. This would reduce the number of sensors needed to monitor large structures such as bridges or turbines, and would integrate artificial intelligence to optimise data collection, management, and interrogation.
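As a rough illustration of the signal-processing front end such a system needs, the sketch below runs a fixed-threshold hit detector with a lockout ("dead time") over a synthetic acoustic-emission record. All thresholds, rates, and burst shapes are illustrative assumptions, not parameters of the group's hardware.

```python
"""Minimal sketch of acoustic-emission (AE) hit detection for SHM.

Bursts exceeding a fixed threshold are counted as "hits", the basic
trigger AE systems apply before any AI-based interpretation.
All signal parameters here are illustrative assumptions.
"""
import numpy as np

fs = 100_000                                # sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)               # 100 ms record
rng = np.random.default_rng(2)

signal = rng.normal(0, 0.01, t.size)        # background noise
for burst_time in (0.02, 0.055, 0.08):      # three simulated AE bursts
    idx = int(burst_time * fs)
    envelope = 0.5 * np.exp(-np.arange(400) / 80.0)   # decaying transient
    signal[idx:idx + 400] += envelope * np.sin(2 * np.pi * 20_000 * t[:400])

threshold = 0.1                             # trigger level (illustrative)
dead_time = int(0.002 * fs)                 # 2 ms lockout groups one burst into one hit
hits, last = 0, -dead_time
for i in np.flatnonzero(np.abs(signal) > threshold):
    if i - last >= dead_time:               # new hit only after the lockout expires
        hits += 1
    last = i

print(f"detected {hits} AE hits above threshold {threshold}")
```

In an autonomous SHM setting, hit counts and features extracted from each burst would be logged on the low-power node and only escalated for detailed interrogation when activity rises.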

Deep learning models

The popularisation of video surveillance and the vast increase of video content on the web have made video one of the fastest-growing sources of data. Deep learning methods have demonstrated success in many areas of computer vision, including human action and activity recognition. However, for us to be confident in their predictions, their decisions need to be transparent and explainable. Our research in this area aims to develop algorithms capable of explaining the decisions made by deep learning methods, specifically when applied to human activity recognition. This research is a collaboration with the School of Computer Science and Informatics.
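One common model-agnostic route to such explanations is occlusion analysis: mask parts of the input in turn and measure how much the classifier's confidence drops; large drops mark the regions the decision relied on. The sketch below applies this idea to a single frame with a stand-in scoring function; it is an illustrative assumption, not the group's actual method or model.

```python
"""Minimal sketch of occlusion-based explanation for a classifier.

Mask patches of the input in turn and record the drop in the model's
score. The classifier below is a toy stand-in, not a real activity
recognition network.
"""
import numpy as np

def toy_classifier(frame):
    """Stand-in 'activity' score: responds to brightness in the centre."""
    h, w = frame.shape
    return float(frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

frame = np.zeros((64, 64))
frame[24:40, 24:40] = 1.0                   # bright blob the model keys on

base_score = toy_classifier(frame)
patch = 16
heatmap = np.zeros((64 // patch, 64 // patch))
for i in range(heatmap.shape[0]):
    for j in range(heatmap.shape[1]):
        occluded = frame.copy()
        occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0   # mask one patch
        heatmap[i, j] = base_score - toy_classifier(occluded)      # confidence drop

print("importance map (larger = more influential):")
print(np.round(heatmap, 3))
```

For video activity recognition the same procedure extends naturally to masking spatio-temporal cubes, revealing which frames and regions drove a prediction.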

Projects

Project: PHYDL: Physics-informed Differentiable Learning for Robotic Manipulation of Viscous and Granular Media

  • Ze Ji, Yukun Lai
  • Funder: EPSRC New Horizons
  • October 2022 – October 2024
  • Value: £250k

Project: Reinforcement Learning for autonomous navigation with GNSS-based localisation

  • Ze Ji
  • Funder: Spirent Communications
  • 2021 – 2024
  • Value: £32,000

Project: BIM and Digital Twins in support of Smart Bridge Structural Surveying

  • Haijiang Li, Ze Ji, Abhishek Kundu
  • Funder: Innovate UK/KTP (with Industry partner: Centregreat Rail)
  • September 2021 – September 2024
  • Value: £280,000

Project: DIGIBRIDGE: Physics-informed digital twin supported smart bridge maintenance

  • Haijiang Li, Abhishek Kundu, Ze Ji
  • Industry partners: Centregreat Rail, Crouch Waterfall, Centregreat Engineering
  • Funder: SmartExpertise/WEFO
  • November 2021 – December 2022
  • Value: £111,884

Project: Active Robot Learning for Subtractive, Formative, and Additive Manipulations of Granular and Viscous Materials

  • Ze Ji
  • Funder: Royal Society
  • March 2020 – March 2022
  • Value: £17,812

Project: Additive Manufacturing and Robotics to Support Improved Response to Increased Flexibility

  • Rossi Setchi, Ze Ji
  • Funder: WEFO (through ASTUTE), collaborating with Continental Teves
  • February 2018 – February 2019
  • Value: £232,012

Project: 3D reconstruction and characterisation of spattering behaviours in SLM processing by fusing images from multiple cameras

  • Ze Ji, Samuel Bigot, Rossi Setchi
  • April 2019 – April 2020
  • Value: £40,000

Project: Pushing the boundary of vision-based 3D surface imaging

  • Ze Ji, Jing Wu, Rossi Setchi
  • Funder: Renishaw and Cardiff Strategic Partner Fund
  • April 2018 – April 2019
  • Value: £40,000

Project: SRS – Multi-role shadow robotics system for independent living

  • Funder: Commission of the European Communities (EU FP7)
  • February 2010 – February 2013
  • Value: €5,136,039

Project: IWARD - Intelligent robot swarm for attendance, recognition, cleaning and delivery

  • Funder: Commission of the European Communities
  • January 2007 – January 2010
  • Total cost: €3,880,067

Meet the team

Group leader


Dr Ze Ji

Senior Lecturer (Teaching and Research)

Telephone
+44 29208 70017
Email
JiZ1@cardiff.ac.uk

Group members


Dr Yulia Hicks

Senior Lecturer - Teaching and Research

Telephone
+44 29208 75945
Email
HicksYA@cardiff.ac.uk

Dr Seyed Amir Tafrishi

Lecturer in Robotics and Autonomous Systems

Telephone
+44 29208 76176
Email
TafrishiSA@cardiff.ac.uk

Associated members


Dr Samuel Bigot

Reader - Head of International for Mechanical and Medical Engineering

Telephone
+44 29208 75946
Email
BigotS@cardiff.ac.uk

Dr Daniel Gallichan

Lecturer in Medical Imaging

Telephone
+44 29208 70045
Email
GallichanD@cardiff.ac.uk

Dr Abhishek Kundu

Senior Lecturer - Teaching and Research

Telephone
+44 29208 75953
Email
KunduA2@cardiff.ac.uk

Dr Jonny Lees

Head of Department, Electrical & Electronic Engineering
Reader

Telephone
+44 29208 74318
Email
LeesJ2@cardiff.ac.uk

Professor Agustin Valera Medina

Co-Director of the Net Zero Innovation Institute
Professor - Teaching and Research

Telephone
+44 29208 75948
Email
ValeraMedinaA1@cardiff.ac.uk

Professor Jianzhong Wu

Head of School, Engineering

Telephone
+44 29208 70668
Email
WuJ5@cardiff.ac.uk

Dr Yue Zhou

Lecturer in Cyber Physical Systems

Email
ZhouY68@cardiff.ac.uk

Facilities

Robotics and Autonomous Systems Laboratory

The Robotics and Autonomous Systems Laboratory was established in 2016 and is managed by Dr Ze Ji. It provides cutting-edge robotic facilities, including two KUKA LBR iiwa collaborative robots, one Robotnik VOGUE+ mobile manipulator, three KUKA youBot mobile robots, a number of TurtleBots and quadcopters, and many advanced sensors, such as high-definition 3D cameras (structured light and stereo vision), Lidar, RTK GPS, and UWB, to support a broad range of research activities.

Equipment in the lab

  • Robotnik VOGUE+ mobile manipulator
  • KUKA youBot (x3)
  • KUKA LBR iiwa collaborative robots (x2)
  • High-definition cameras:
    • Zivid 3D Camera
    • Roboception Stereo 3D camera
    • RealSense cameras
    • Industrial cameras (GigE)
  • Robot-based large-scale high-definition 3D surface imaging (Multi-view Photometric Stereo)
  • Autonomous collaborative drones and USVs (Unmanned Surface Vehicles)

Human Factors Technology Laboratory

This is an interdisciplinary lab established between the Schools of Engineering, Computer Science and Informatics, and Psychology, under the direction of Dr Yulia Hicks, Professor David Marshall, and Professor Simon Rushton respectively.

Key equipment includes:

  • Motion capture systems, including a PhaseSpace 16-camera, 480 Hz infrared system with 80 markers (tracking up to three people), and several electromagnetic trackers
  • 3dMD 4D colour video camera with a 100 Hz frame rate, outputting colour and 3D point data
  • Powerful PCs with multiple GPUs

Find out more about the Human Factors Technology Laboratory.

Next steps


Research that matters

Our research makes a difference to people’s lives as we work across disciplines to tackle major challenges facing society, the economy and our environment.


Postgraduate research

Our research degrees offer the opportunity to investigate a specific topic in depth alongside field-leading researchers.


Our research impact

Our research case studies highlight some of the areas where we deliver positive research impact.