Summer Undergraduate Research Experience (SURE)


UMTRI Project #1: Adaptive Safety Designs for Injury Prevention: Human Modeling and Impact Simulations

Faculty Mentor: Jingwen Hu,


Prerequisites:

  • Proficiency in MATLAB or other programming tools
  • Interest in machine learning, statistical modeling, and/or injury biomechanics research
  • Demonstrated ability in 3D human geometry modeling and/or FE model development and application is a plus

Project Description: Unintentional injuries, such as those occurring in motor vehicle crashes, falls, and sports, are a major public health problem worldwide. Finite element (FE) human models have the potential to estimate tissue-level injury responses better than any other existing biomechanical tools.

However, current FE human models have been developed and validated primarily for midsize men, yet significant morphological and biomechanical variations exist in human anatomy. The goals of this study are to develop parametric human geometry and FE models accounting for the geometric variations in the population, and to conduct a feasibility study using population-based simulations to evaluate the influence of human morphological variation on human impact responses in motor-vehicle crashes and sport-related head impacts. Specifically, students will use medical image analysis and statistical/machine-learning methods to quantify the geometric variance of the skeleton among the population; use mesh-morphing methods to rapidly morph a baseline human FE model into a large number of human models with a wide range of sizes and shapes for both males and females; conduct impact simulations with those models; and use machine-learning models to build surrogate models for injury assessment toward adaptive safety designs.
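
As an illustrative sketch of the statistical step described above (quantifying geometric variance), principal component analysis can extract the dominant modes of shape variation from landmark data. This is a minimal example on synthetic landmarks, not the project's actual pipeline; all data and dimensions here are made up:

```python
import numpy as np

# Hypothetical illustration: each subject's skeleton geometry is reduced to
# a fixed set of 3D landmarks; PCA then captures the main modes of variation.
rng = np.random.default_rng(0)
n_subjects, n_landmarks = 50, 20
# Synthetic data: a mean shape plus random size and noise perturbations.
mean_shape = rng.normal(size=(n_landmarks, 3))
scales = 1.0 + 0.1 * rng.normal(size=(n_subjects, 1, 1))
shapes = scales * mean_shape + 0.01 * rng.normal(size=(n_subjects, n_landmarks, 3))

X = shapes.reshape(n_subjects, -1)          # flatten to (subjects, features)
Xc = X - X.mean(axis=0)                     # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S**2) / (S**2).sum()           # variance explained per component

# A new shape can be approximated from a few principal components:
k = 3
scores = Xc @ Vt[:k].T                      # per-subject shape parameters
recon = scores @ Vt[:k] + X.mean(axis=0)    # low-dimensional reconstruction
print(f"first {k} PCs explain {explained[:k].sum():.1%} of variance")
```

In a statistical shape model like this, the per-subject scores become the "shape parameters" that a mesh-morphing step can then sweep across the population.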

Research Mode: In Lab, Online, or Hybrid

UMTRI Project #2: Driver State Monitoring for Automated Vehicles

Faculty Mentor: Monica L.H. Jones,

Prerequisites: Motivated students keen to work both independently and within a group. Some experience with scientific programming languages is required (e.g., Mathematica, MATLAB, Python). Familiarity with computer vision programming is desired.

Project Description: With increasing automation (SAE Levels 2 and 3), the role of the driver will transition from Driver Driving (DD) to Driver Not Driving (DND). Freed from completing the operational tasks of driving, drivers will have a much larger behavioral repertoire. Driver state monitoring (DSM) systems attempt to predict the driver’s readiness to respond to a takeover request or other emerging need, using information obtained from cameras and other sensors. These systems face several challenges in comprehensively tracking the continuum of possible driver postures and behaviors. Many research questions persist with respect to the efficacy and effectiveness of DSM systems. The results of this project may identify disallowed states and provide further design guidance for DSMs.

This project explores the characteristics and behaviors associated with non-nominal postures, driver engagement, monitoring, and state levels under day and night conditions. It also seeks to quantify driver responses to unscheduled automated-to-manual (non-critical) transitions in L3 automated driving conditions. Data were gathered at the American Center for Mobility closed test facility. Continuous measures during in-vehicle exposures include 2D image and 3D depth data, physiological response, driver performance and behavior data, vehicle data, and available DSM outputs.

Student researchers will also assist with data analysis and develop image-processing and/or computational models that predict driver engagement.

Research Mode: In Lab, Hybrid 

UMTRI Project #3: Augmented Virtual Reality (AVR) based Driving Scenario Simulation and Analysis

Faculty Mentor: Shan Bao,

Prerequisites: Motivated students who are comfortable working in a large group. Website development skills are a great plus!

Project Description: When evaluating and testing automated vehicle technologies, it is challenging and expensive to test prototype systems using real cars on real roads. Ideally, sensor parameters and vehicle control systems can first be tested and evaluated under a variety of simulated scenarios. This work is supported by a mix of sponsors with several focus areas. The work is designed to simulate 2D and/or 3D real-world driving scenarios in a virtual environment through software (e.g., CARLA or CarSim) or virtual reality techniques. Student interns will work with exciting concepts, interact directly with our industry sponsors, and implement their simulation results through hands-on experience. We are looking for multiple motivated student helpers. Training on relevant software (e.g., CARLA or CarSim) and hardware (AVR headsets) is available.

The research team will be working directly with industry experts on this project. Students will have hands-on experience instrumenting and testing AVs at Mcity.

Research Mode: Online or Hybrid

UMTRI Project #4: Safety and Independence of Passengers in Wheelchairs Using Automated Vehicles and Aircraft

Faculty Mentor: Kathleen D. Klinich,

Prerequisites: Strong technical writing skills, experience with spreadsheet/data analysis, mechanical design/controls experience, and an interest in improving user travel experience and working with people who have disabilities.

Project Description: We have multiple projects to ensure that people who travel while seated in their wheelchairs can do so safely and independently in automated vehicles, where there may not be a driver to assist in securing the wheelchair, and in aircraft, where personal wheelchair use is not currently allowed. Student researchers could help with measuring the posture and shape of volunteers using wheelchairs, help with dynamic test fixture design and laboratory testing, assist with data analysis, or help create computational models of wheelchair geometry.

Research Mode: In Lab, Hybrid

UMTRI Project #5: Motion Sickness to Inform Automated Vehicle Design

Faculty Mentor: Monica L.H. Jones,

Prerequisites: Motivated students keen to work both independently and within a group. Some experience with scientific programming languages is required (e.g., Mathematica, MATLAB, Python). Familiarity with computer vision programming is desired.

Project Description: Motion sickness in road vehicles may become an increasingly important problem as automation transforms drivers into passengers. However, the lack of a definitive etiology of motion sickness challenges the design of automated vehicles (AVs) to address and mitigate motion sickness susceptibility effectively. Quantifying motion sickness severity and identifying objective parameters are fundamental to informing future countermeasures. Data were gathered on-road and at the Mcity and Michigan Proving Ground test facilities. Continuous measures include 2D image and 3D depth data, thermal imaging, physiological response, vehicle data, and self-reported motion sickness response. Modeling efforts will elucidate relationships among the factors contributing to motion sickness, for the purpose of generating hypotheses and informing future countermeasures for AVs.
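
One objective parameter often used in the motion sickness literature is the motion sickness dose value (MSDV) from ISO 2631-1: the square root of the time integral of squared, frequency-weighted acceleration. The sketch below omits the standard's frequency weighting and uses synthetic acceleration data; it is an illustration, not the project's analysis code:

```python
import numpy as np

def msdv(accel, dt):
    """Simplified motion sickness dose value: the square root of the time
    integral of squared acceleration (units m/s^1.5). The full ISO 2631-1
    definition applies a frequency weighting to the signal first; that
    step is omitted here for brevity."""
    accel = np.asarray(accel, dtype=float)
    return float(np.sqrt(np.sum(accel**2) * dt))

# Synthetic example: 60 s of a 0.2 Hz, 0.5 m/s^2 vertical oscillation
# sampled at 10 Hz (low-frequency motion like this is a known driver of
# motion sickness).
dt = 0.1
t = np.arange(0.0, 60.0, dt)
a = 0.5 * np.sin(2 * np.pi * 0.2 * t)
print(f"MSDV = {msdv(a, dt):.2f} m/s^1.5")
```

In practice the vehicle's measured acceleration traces would replace the synthetic signal, and MSDV could then be correlated against the self-reported sickness responses.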

Students will have hands-on experience instrumenting and testing AVs at Mcity. Student researchers will also assist with data analysis or develop computational models that detect and predict passenger motion sickness.

Research Mode: In Lab, Hybrid 

UMTRI Project #6: Development of an Automated Vehicle Intelligent Lane-Weaving Function

Faculty Mentor: Brian T. W. Lin,

Prerequisites:

  • Knowledge of the ROS framework
  • Some experience with Python and Linux; experience with projects using ROS is a huge plus
  • Strong communication skills and teamwork experience

Project Description: 

For an autonomous vehicle (AV) to execute a weaving movement, the system needs to decide when and how the lane change should be safely executed, based on the vehicle telematics, the ramp geometry, and the maneuvers of the other weaving/non-weaving vehicles. The research team has previously implemented and evaluated the decision-making models in an augmented reality environment. In this project, we aim to deploy the complete ROS-based weaving decision models, which have been validated in computer simulations, to Mcity's Lincoln MKZ autonomous vehicle. We are keen to implement the models in the AV with signal inputs from the other vehicles on the test track through RTK, and to evaluate the performance of the models, the communications among different entities, and safety issues.

The students involved will help program in Python to subscribe to and broadcast ROS topics to control the AV, subscribe to GPS data as input for the decision model, conduct the test-track experiment at Mcity, and analyze the data.
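
As a toy illustration of the kind of decision logic involved (hypothetical thresholds and inputs; not the team's validated weaving model), a gap-acceptance check might compare the time gap to the lead vehicle and the time-to-collision with the lag vehicle against thresholds:

```python
import math

def lane_change_safe(ego_speed, lead_gap, lag_gap, lag_speed,
                     min_time_gap=1.5, min_ttc=4.0):
    """Toy gap-acceptance check (hypothetical; not the project's validated
    model). Accept a lane change only if the time gap to the lead vehicle
    in the target lane and the time-to-collision with the approaching lag
    vehicle both exceed thresholds. Speeds in m/s, gaps in m."""
    if ego_speed <= 0:
        return False
    lead_time_gap = lead_gap / ego_speed
    closing_speed = lag_speed - ego_speed      # lag vehicle approaching if > 0
    lag_ttc = lag_gap / closing_speed if closing_speed > 0 else math.inf
    return lead_time_gap >= min_time_gap and lag_ttc >= min_ttc

# 25 m/s ego, 50 m lead gap, 40 m lag gap, lag closing at 5 m/s: accept.
print(lane_change_safe(25.0, 50.0, 40.0, 30.0))   # True
# A 30 m lead gap (1.2 s) falls below the 1.5 s threshold: reject.
print(lane_change_safe(25.0, 30.0, 40.0, 30.0))   # False
```

In the actual system, the inputs would arrive as ROS topics (e.g., RTK GPS positions of the other vehicles), and the decision output would be published back to the vehicle controller.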

Research Mode: In Lab, Hybrid

UMTRI Project #7: Online Parametric 3D Wheelchair Model Development 

Faculty Mentor: B-K. Daniel Park,

Prerequisites:

  • Proficiency in computer programming languages (JavaScript preferred)
  • Familiarity with computer-aided design (CAD) is desired

Project Description:

The proposed study hypothesizes that digital tools that can represent the diversity of 3D wheelchair geometries will significantly improve vehicle designs for better accommodation and safety of wheelchair-seated occupants. Three-dimensional (3D) wheelchair shape data were collected from commercial wheelchair products and will be categorized into a few groups based on functional shape. In this project, a series of online parametric wheelchair models will be developed using an open-source modeling tool (OpenJS). The 3D shapes of the wheelchairs in each category will first be simplified according to the functionality of the wheelchairs, and the key dimensions representing the simplified shapes and the main functions will be derived from statistical analysis. These dimensions will be used as the shape parameters of the online model, and an intuitive, easy-to-use graphical user interface (GUI) will be implemented to control the model parameters.
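
To illustrate the parametric-model idea (the actual models will be built in JavaScript with statistically derived parameters; the class and dimension values below are hypothetical), a category's key dimensions can drive derived quantities such as a plan-view footprint:

```python
from dataclasses import dataclass

@dataclass
class ManualWheelchair:
    """Hypothetical parameter set for one wheelchair category; the real
    parameters would come from statistical analysis of scanned products."""
    seat_width: float = 0.45           # m
    seat_depth: float = 0.42           # m
    seat_height: float = 0.50          # m
    rear_wheel_diameter: float = 0.61  # m
    overall_length: float = 1.05       # m

    def footprint(self):
        """Plan-view bounding box (length, width) for clearance checks;
        the 0.05 m per-side wheel allowance is a made-up placeholder."""
        width = self.seat_width + 2 * 0.05
        return (self.overall_length, width)

chair = ManualWheelchair(seat_width=0.50)
print(chair.footprint())  # (1.05, 0.6)
```

A GUI slider bound to each parameter would regenerate the 3D geometry the same way this sketch regenerates the footprint.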

Research Mode: In Lab, Remote, Hybrid

UMTRI Project #8: Automated Vehicle Malfunction and Coping Strategies Development

Faculty Mentor: Shan Bao,

Prerequisites: Team players who are motivated to work with other group members. Human factors knowledge and/or text mining experience is a plus!

Project Description: 

Automated systems that control a vehicle or assist a driver may fail or malfunction at any time while driving in traffic and lead to crashes. This Mcity-sponsored project is designed to identify the typical and important failure types and taxonomies for automated vehicle systems currently on the road, and to develop coping strategies for mitigating the hazards of such vehicle failures and supporting safe and efficient responses by drivers of both the subject and surrounding vehicles. A hybrid approach is proposed to address the research questions both qualitatively and quantitatively.

The research team will be working directly with industry experts on this project. Students will have hands-on experience instrumenting and testing AVs at Mcity.

Research Mode: In Lab (Mcity testing), Online, or Hybrid

UMTRI Project #9: A Tool for Augmented Reality (AR) Assisted Surgery: 3D Human Modeling and Visualization

Faculty Mentor: Jingwen Hu,

Prerequisites: Proficiency in computer programming languages and tools (C#, C++, Unity, Python, etc.). Previous experience with Microsoft HoloLens is a plus.

Project Description: An AR-assisted surgery tool will provide a composite view of computer-generated patient anatomy and the surgeon’s view of the operative field, which may lead to a more precise understanding of the detailed anatomy and significantly increase accuracy in tumor localization and resection. In this study, we will focus on a software tool that addresses the rapid development of computer anatomy models and accurate registration between the anatomy model and the real patient geometry, the two key aspects of AR-assisted surgery tools. We plan to use an AR device, the Microsoft HoloLens, as the main hardware to demonstrate the software capability, although our software should not be limited to the HoloLens. We will use liver surgery as an example, so the medical images and anatomy models will focus only on the liver and the surrounding tissues. Because the liver is the largest solid organ in the abdomen, is pliable, and can have its anatomy altered by operative interventions, it poses significant challenges for model registration, making it a good test for the AR-assisted surgery tool. For surgeons who must deal with complex anatomical structures that are not always visible, the proposed AR-assisted surgery tool will provide a much-needed understanding of anatomic relations beneath the surface, and will likely lead to better accuracy, safer resection, fewer complications, and superior surgical outcomes.

Research Mode: In Lab, Online, Hybrid

UMTRI Project #10: Support for Driver Interface Research

Faculty Mentor: Paul Green,

Prerequisites: none, but being a licensed driver is helpful

Project Description: 

We are conducting a variety of projects for which help is needed.  In support of a number of Army projects, we are writing a standard that defines measures of driving performance and provides representative data based on the literature and possibly based on original research.  We have developed an industry standard for this purpose in the past for cars and trucks driven on-road, but for this research, we need to include off-road vehicles and armored vehicles.  This research is quite fundamental in that it is defining the science of driving, but quite applied in that we need real-world data to support what we do.  In addition, anyone working in the group invariably becomes involved in other projects as well, if for no other reason than to provide a broader research experience.

Research Mode: In Lab (possibly), Online, Remote, Hybrid

UMTRI Project #11: Driving Simulator Development – Unreal Engine

Faculty Mentor: Paul Green,

Prerequisites: none, but being a licensed driver is helpful; knowledge of Unreal is also helpful

Project Description: 

We have a number of projects with the U.S. Army related to driving combat vehicles.  In support of them, we need to develop a simulation in Unreal of driving in a specific virtual world, adding sound, vehicle dynamics, minimaps and a HUD to represent a particular vehicle.  We also need to record driving performance in real time.  We know this is feasible because a student completed elements of this in the past, but the documentation is incomplete and we need to add more features.  We have requested hardware for this task from the Army.

Research Mode: In Lab (possibly), Online, Remote, Hybrid

UMTRI Project #12: Continuing Development of a Manned Driving Simulator

Faculty Mentor: Paul Green,

Prerequisites: none, but being a licensed driver is helpful; knowledge of Python is also helpful

Project Description: 

For almost two years, various MDP teams have been working on the development of a driving simulator that includes a moving-base cab for studies of human interaction with partially automated and automated vehicles. Our focus is on three elements: (1) a GUI to allow for the rapid creation of experiments (especially scenarios and vehicle placement), (2) the ability to import virtual worlds, and (3) real-time control of a 2-DOF motion platform (pitch and roll). The underlying code runs under Linux and uses CARLA and RoadRunner.

Research Mode: In Lab (possibly), Online, Remote, Hybrid

UMTRI Project #13: Development and Implementation of Software Tools for Human Centered Design

Faculty Mentor: Matt Reed,

Prerequisites: Prior experience with R and/or Python

Project Description:

The Biosciences Group has developed a wide range of statistical models of human posture and body shape for use in human-centered design. However, the complexity of these models is such that relatively few people are able to use them. The goal of this project is to make more of these models available online for people around the world to use for human-centered design. The tools include interactive analysis of standard anthropometry (body dimensions), three-dimensional anthropometry, head and face geometry, and vehicle occupant postures.

The student(s) will work with the faculty to develop and deploy design tools using R and/or Python.  Applications may also be developed for implementation in open-source tools such as FreeCAD and Blender3D. 

Research Mode: In-person, Remote, or Hybrid

UMTRI Project #14:  Vehicle Position-in-Lane: Ground Truth System

Faculty Mentor:  Dave LeBlanc

Prerequisites:  Programming experience.  Experience with Matlab and/or image processing is encouraged but not required.

Project Description: UMTRI’s Engineering Systems Group uses experiments, simulations, and analytics to help industry and government sponsors (1) quantify the requirements of automated and semi-automated vehicles, and (2) design and demonstrate test methods to ensure that vehicles meet those requirements. This project’s goal is to develop an automated processing pipeline for accurately determining the position of a vehicle within its lane using downward-looking cameras. The pipeline will process the camera images and push the results to an SQL database for analyses, such as comparing these “ground truth” results to those generated by a prototype or production vehicle. The student will interact with and be supported by the faculty mentor and experienced research engineers.
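
As a simplified illustration of the geometry involved (the function name and calibration values are hypothetical, and the real pipeline will be considerably more involved), a downward-looking camera's pixel measurements can be converted into a lateral offset in meters:

```python
def lateral_offset_m(lane_left_px, lane_right_px, image_width_px, m_per_px):
    """Toy ground-truth computation (illustrative only): with a downward-
    looking camera centered over the vehicle, the offset from lane center
    is the distance from the image center to the midpoint of the detected
    lane lines, scaled by the camera's ground resolution (m per pixel).
    Negative values mean the vehicle sits left of lane center under this
    sign convention."""
    lane_center_px = (lane_left_px + lane_right_px) / 2.0
    image_center_px = image_width_px / 2.0
    return (lane_center_px - image_center_px) * m_per_px

# Lane lines detected at 300 px and 1100 px in a 1600 px image, with an
# assumed ground resolution of 2.5 mm per pixel:
offset = lateral_offset_m(300, 1100, 1600, 0.0025)
print(f"{offset:.3f} m")  # -0.250 m
```

In the pipeline, each frame's detected lane-line positions and the resulting offset would then be pushed to the SQL database for comparison against the vehicle's own estimates.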

Research Mode: Hybrid (may include help with occasional hands-on testing)

UMTRI Project #15: Machine Learning for 3D Point Cloud Data via Deep Generative Models

Faculty Mentor: Wenbo Sun, 


Prerequisites:

  • Proficiency in Python
  • Experience with deep neural networks, especially generative adversarial networks, and preferably diffusion models
  • Experience with 3D point cloud data analysis is encouraged

Project Description: Deep generative models have been widely used for image reconstruction in computer vision. Going beyond conventional methodologies for 2D images, we aim to extend deep generative modeling techniques to 3D point clouds and 2.5D depth images, which have more complicated data structures and pose potential research challenges. The goals of this study are to reconstruct high-resolution human models represented by 3D point clouds or 2.5D depth images from low-resolution samples, to validate the reconstruction accuracy through specific evaluation metrics, and to provide an empirical estimate of the joint distribution of the 3D point cloud data across the whole population. In particular, students will use specifically designed neural network architectures to build a generative model of the high-resolution 3D point cloud data, and will formulate and solve an optimization problem to estimate the corresponding parameters under specific regularizations. The reconstruction results are expected to be imported into existing software for visualization, along with a journal or conference paper on the proposed methodology.
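
One common evaluation metric for scoring point-cloud reconstructions is the Chamfer distance. The brute-force sketch below (illustrative only, with synthetic data) shows the idea; the study's actual evaluation metrics may differ:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two point clouds of shape (N, 3)
    and (M, 3): for each point, find the squared distance to its nearest
    neighbor in the other cloud, and average both directions. Brute force
    is fine for small clouds; use a KD-tree for large ones."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Identical clouds score exactly 0; a shifted copy does not.
cloud = np.random.default_rng(1).normal(size=(100, 3))
print(chamfer_distance(cloud, cloud))          # 0.0
print(chamfer_distance(cloud, cloud + 0.1))
```

A metric like this can serve both as a validation score for reconstructed high-resolution clouds and as a reconstruction loss term during training.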

Research Mode: Online or Hybrid