Human-Robot Teaming
Mentor: Dr. Julie A. Adams

This project seeks to develop new methods for adapting interaction between a human and a robot in a one-to-one partnership based on the human's cognitive workload. The human dons wearable sensors that provide a data stream, which is analyzed to detect high- and low-workload conditions. Once such a condition is detected, the robot can adapt how it interacts with the human. REU students will develop new algorithms for detecting changes in workload and for adapting the robot's interactive capabilities based on that detection. The project will require algorithm design, implementation, and software testing. Further, REU students will evaluate the algorithms with actual human-robot teams.
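
A minimal sketch of the detect-then-adapt loop described above, assuming (purely for illustration) that the wearable stream is reduced to windowed heart-rate samples and that fixed thresholds separate the conditions; the thresholds, feature choice, and adaptation modes below are made-up stand-ins, not the project's actual values:

```python
# Hypothetical sketch: classify workload from a window of heart-rate samples,
# then pick an interaction mode. Thresholds and modes are illustrative only.

def detect_workload(hr_window, low_thresh=60.0, high_thresh=100.0):
    """Return 'low', 'normal', or 'high' for a window of heart-rate samples."""
    avg = sum(hr_window) / len(hr_window)
    if avg >= high_thresh:
        return "high"
    if avg <= low_thresh:
        return "low"
    return "normal"

def adapt_interaction(workload):
    """Map a detected workload condition to an (assumed) interaction mode."""
    return {
        "high": "reduce prompts, take over subtasks",
        "low": "offer more information, request input",
        "normal": "maintain current interaction",
    }[workload]
```

In practice the threshold rule would be replaced by a classifier learned over many sensor features, but the detect-then-adapt structure stays the same.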


Design of Implants for Attaching Muscle and Tendons to Improve Human Hand Function
Mentor: Dr. Ravi Balasubramanian

Current reconstructive orthopedic surgeries use sutures to attach muscles to tendons. However, this leads to poor surgical outcomes because of the suture's limited ability to transmit the muscle's forces and movement to the tendons. We expect that using passive implants, such as pulleys and rods, to surgically construct mechanisms in situ from the existing biological tendons will significantly improve post-surgery function compared with sutures and lead to the development of new surgical procedures.


Robotic Deburring System
Mentors: Dr. Ravi Balasubramanian, Dr. Cindy Grimm, and Dr. Burak Sencer

Burrs are undesirable projections of material at the edges of a finished part's surface. They pose a fundamental problem for manufacturing because they interfere with part handling and assembly operations and can lead to part failure. Deburring, the process of removing burrs, is currently performed by human operators, but it is labor-intensive and causes operator injuries due to the forces and movements involved. This project seeks to develop a computer-vision-based robotic system for deburring.

Example projects:

  • Develop computer vision algorithms and a system for detecting poor-quality edges on machined parts
  • Develop a robotic system that removes burrs based on vision-based detection
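
One illustrative way to score edge quality for the first bullet (not the project's actual vision pipeline): given the deviations of detected edge pixels from a fitted straight reference line, flag the edge as poor when the RMS deviation exceeds a tolerance. The tolerance value is an assumption:

```python
# Illustrative sketch: score edge quality by the RMS deviation (in pixels)
# of detected edge points from a straight reference line.
import math

def edge_roughness(deviations):
    """RMS of per-pixel deviations from a fitted edge line."""
    return math.sqrt(sum(d * d for d in deviations) / len(deviations))

def is_poor_edge(deviations, tol_px=1.5):
    """Flag an edge as poor quality if roughness exceeds a chosen tolerance."""
    return edge_roughness(deviations) > tol_px
```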


Multifunctional Materials for Soft Robots
Mentor: Dr. Joe Davidson

There are numerous open questions surrounding the design, fabrication, and control of soft robots. In this project, we will explore the integration of ‘smart’ materials in soft bodies for actuation and control. Smart materials are materials whose properties can be altered with an external stimulus such as heat or electric fields.

Example projects:

  • Designing and fabricating magnetorheological elastomers with embedded magnetic particles
  • Developing soft valve and pump prototypes
  • Designing soft sensor networks for shape detection


Socially Assistive Robots for the Workplace
Mentor: Dr. Naomi Fitter

As computer use becomes central to work in many fields, office workers face increased risks of health problems, including heart disease, diabetes, and eyestrain, due to prolonged periods of sitting and looking at a screen without a break. We view robots as one impactful tool to help frequent computer users take breaks, stand up, and move more during the workday. REU students working on this project will help us understand which behaviors of a social, physically embodied robotic system are most effective for encouraging healthier workplace practices in short- and long-term use.

Example projects:

  • Using reinforcement learning to improve the robot’s prompting strategy
  • Designing and evaluating new activity-prompting strategies
  • Gathering and analyzing human user data
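
As a concrete, hypothetical instance of the first bullet, strategy selection can be framed as a multi-armed bandit; the strategy names and reward signal below are invented for illustration and are not the project's design:

```python
# Sketch: epsilon-greedy bandit over hypothetical prompting strategies.
# Reward might be 1.0 if the user stood up after the prompt, else 0.0.
import random

class PromptBandit:
    def __init__(self, strategies, epsilon=0.1, seed=0):
        self.strategies = list(strategies)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {s: 0 for s in self.strategies}
        self.values = {s: 0.0 for s in self.strategies}

    def choose(self):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.strategies)
        return max(self.strategies, key=lambda s: self.values[s])

    def update(self, strategy, reward):
        # Incremental mean of observed rewards for this strategy.
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n
```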


Robot Grasping
Mentors: Dr. Cindy Grimm and Dr. Ravi Balasubramanian

Humans have no trouble picking up and manipulating objects, yet we've struggled to impart that ability to robotic manipulators. This is in part because robotic manipulators lack much of the sensory feedback human hands have, but it's also because we, as humans, are not very good at reasoning about what we do instinctively. The goal of this project is to develop tools and user studies that will let us capture that knowledge and apply it to robotic manipulation tasks. We apply these studies to several areas: fruit picking, object manipulation, and light industrial tasks.

Example projects:

  • Analyze human data and apply machine learning to develop a robotic controller for a specific manipulation task
  • Use existing studies to design a hand actuation mechanism that is suitable for a specific manipulation task
  • Integrating in-hand, novel sensors to improve manipulation
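
A toy stand-in for the first bullet's human-data-driven controller (an illustration only, not the project's method): a nearest-neighbor lookup from recorded human demonstrations mapping sensed features to a grip force:

```python
# Hedged sketch: nearest-neighbor lookup from human grasp demonstrations.
# Each demo pairs a (hypothetical) feature vector with the grip force used.
import math

def nearest_grip_force(query, demos):
    """demos: list of (feature_vector, grip_force) pairs from human trials."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, force = min(demos, key=lambda d: dist(d[0], query))
    return force
```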


Geometry of Locomotion
Mentor: Dr. Ross Hatton

Many animals make full-body contact with the ground as they crawl, slither, or burrow. We're studying the geometry of these body motions to better understand how such locomotion works, and using that knowledge to build robots that can exploit the underlying principles.

Example projects:

  • Construction of snake-like robots
  • Mathematical analysis of kinematics and dynamics
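
As a small taste of the mathematical side, snake robots are commonly driven with a serpenoid traveling wave along the body; the amplitude and frequency parameters below are illustrative, not values from this project:

```python
# Illustrative kinematics sketch: joint angles for a snake robot following
# a serpenoid-style traveling wave. All parameters are made-up defaults.
import math

def serpenoid_joint_angles(n_joints, t, amplitude=0.5,
                           spatial_freq=2 * math.pi, temporal_freq=1.0):
    """Angle (rad) of each joint i at time t for a traveling-wave gait."""
    return [
        amplitude * math.sin(temporal_freq * t + spatial_freq * i / n_joints)
        for i in range(n_joints)
    ]
```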


Autonomy for Underwater and Surface Vehicles Exploring Ocean Environments
Mentor: Dr. Geoff Hollinger

There are many ocean environments that are unsuitable for crewed research vessels, either because they are too dangerous (e.g., near a calving glacier or in the deep ocean) or because they require too many resources to study effectively. We seek to design a new generation of Autonomous Research Vehicles (ARVs) that can be programmed to measure ocean dynamics in extreme environments. This research project involves the hardware design, construction, and programming of robust surface and underwater systems that will be used to explore ocean dynamics in Greenland, Alaska, and remote ocean basins. The REU student will join an interdisciplinary team of researchers from mechanical engineering, computer science, and oceanography to help build and program ARVs currently being designed at Oregon State University.

Example projects:

  • Programming autonomous vehicles to operate with minimal operator control
  • Learning from human operators to coordinate autonomous marine vehicles
  • Designing algorithms for robust underwater manipulation and grasping
  • Optimization and testing of a multi-vehicle underwater docking infrastructure that stays in place for substantial periods of time
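
A minimal sketch of the "minimal operator control" idea in the first bullet, assuming a planar surface vehicle steered toward a waypoint by a proportional heading controller; the gain and angle conventions are illustrative assumptions:

```python
# Hedged sketch: proportional heading controller toward a waypoint.
# Angles in radians; turn-rate command in rad/s. Gain is illustrative.
import math

def wrap_to_pi(angle):
    """Wrap an angle into (-pi, pi]."""
    return (angle + math.pi) % (2 * math.pi) - math.pi

def heading_command(x, y, heading, wx, wy, k_p=1.0):
    """Turn-rate command steering from pose (x, y, heading) toward (wx, wy)."""
    desired = math.atan2(wy - y, wx - x)
    error = wrap_to_pi(desired - heading)
    return k_p * error
```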


Stretchable Electronics
Mentor: Dr. Matt Johnston

Stretchable electronics will be critical for future wearable devices and smart textiles, where existing fabrication approaches severely limit conformal deformation. This is especially true for wearable sensors and actuators, conformable electronic skins and textiles, soft robots, and emerging physical human-machine interfaces. We have ongoing projects developing stretchable sensors and electronic systems using printable conductors and substrates.

Example projects:

  • Development of stretchable printed circuit boards
  • Material and electrical characterization of liquid metal conductors
  • Design of hardware and software for measurement and control of stretchable electronics
  • Design of stretchable sensors for strain, force, pressure, etc.
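
For a flavor of the sensor-characterization work, a resistive stretchable strain sensor is often modeled through its gauge factor; the gauge-factor value below is a generic textbook-style assumption, not a measured property of these materials:

```python
# Illustrative calculation: estimate strain from the resistance change of a
# resistive strain sensor via strain = (dR / R0) / GF. GF = 2.0 is assumed.
def strain_from_resistance(r, r0, gauge_factor=2.0):
    """Strain (dimensionless) from measured resistance r and unstrained r0."""
    return (r - r0) / r0 / gauge_factor
```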


Autonomously Rearranging Furniture
Mentor: Dr. Heather Knight

The focus of this project is to improve intelligent path planning for a group of robotic furniture in a typical dining room during daily use, parties, and cleaning. This work requires programming experience, an interest in exploratory design research, and a willingness to assist with human user studies.
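
As one hypothetical building block for the path-planning component, a breadth-first search over an occupancy grid of the room finds shortest collision-free paths; the grid encoding (0 = free, 1 = occupied) and 4-connectivity are assumptions for the sketch:

```python
# Sketch: BFS shortest path on an occupancy grid (0 = free, 1 = occupied).
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking parents back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```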


Experiments with a Multi-Robot Comedy System
Mentor: Dr. Heather Knight

What makes an effective robot comedian? What helps to make a show feel live? The REU student on this project will help to run a human subjects study to evaluate some of these research questions and analyze the resulting data using statistical analyses on the live audience responses as well as participant surveys.


Long-Term Autonomy for Mobile Robots
Mentor: Dr. Bill Smart

The goal of this project is to develop algorithms and techniques that allow robots operating for extended periods of time both to improve their performance and to specialize it to their environment. Part of this project looks at building rich semantic representations of the world. These representations include meaningful labels for places and things (walls, corridors, mugs, tables, etc.) that the robot learns from raw sensor data. Current algorithms for localization, navigation, and other tasks are generic; they do not take advantage of this semantic information. For example, the best localization algorithm for a corridor might be different from the one for a crowded room. How the robot should navigate in a corridor is likely different from how it should navigate in an open space. This project looks at adapting the current, often generic, algorithms used in mobile robotics, specializing them to particular semantic contexts (such as the type of space that the robot is in), with a view to improving the robot's behavior over time.

Example projects:

  • Design, implement, and evaluate robust, context-aware localization, navigation, and planning algorithms for various types of places in a building (such as corridors, rooms, doorways, etc.)
  • Design human-in-the-loop learning and optimization approaches to learn good context-aware localization, navigation, and planning strategies for the robots
  • Design, implement, and evaluate systems capable of building rich semantic maps of real-world environments, including the objects within them
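
A minimal sketch of the context-specialization idea, with made-up semantic labels and parameter values standing in for learned, context-aware configurations:

```python
# Hedged sketch: pick navigation parameters from the semantic label of the
# robot's current location. Labels and values are illustrative, not learned.
NAV_PROFILES = {
    "corridor": {"max_speed": 1.0, "wall_clearance": 0.3},
    "crowded_room": {"max_speed": 0.4, "wall_clearance": 0.6},
    "doorway": {"max_speed": 0.5, "wall_clearance": 0.2},
}

DEFAULT_PROFILE = {"max_speed": 0.6, "wall_clearance": 0.4}

def nav_params(semantic_label):
    """Context-specific parameters, falling back to a generic profile."""
    return NAV_PROFILES.get(semantic_label, DEFAULT_PROFILE)
```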


Improving the ROS Software Ecosystem
Mentor: Dr. Bill Smart

ROS, the Robot Operating System, relies on a set of on-line tools such as a wiki, a question-and-answer site, and source code repositories. As ROS has grown and become more successful, these tools have not kept pace, and they are no longer sufficient for the growing diversity of ROS users. The focus of this project is to survey the ROS community to find out where the weak points of the current ecosystem are, come up with potential solutions, and then develop and test prototypes. This will involve designing, deploying, and analyzing surveys of ROS users to identify the problems. We will then design and implement on-line solutions that address the most pressing of these problems, and test them with a subset of the community. Our overall goal is to identify the pain points, along with possible solutions (backed by experimental evidence that they work), and to use this as the starting point for a follow-on project that will implement production versions of these solutions for deployment to the whole ROS community. Students interested in this project should, ideally, have experience with ROS and web programming technologies.

Example projects:

  • Design, deploy, and analyze surveys of the ROS community to identify high-priority problems with the existing ecosystem tools
  • Design, implement, and evaluate solutions to one or more of these problems. For example, to address the problem of knowing the quality of a package, we might implement a star-based rating scheme on the wiki pages, deploy it on a subset of the pages, and design a study to see whether it helps
  • Look at techniques that will incentivize people to contribute to the larger ROS ecosystem, in addition to contributing to the code base. Implement and evaluate one or more strategies for this, such as a leaderboard for questions answered on ROS Answers as a way of incentivizing participation through gamification
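
A toy sketch of the survey-analysis step, assuming each response is reduced to a list of flagged tool names (an illustrative encoding, not the actual survey format):

```python
# Illustrative sketch: tally which ecosystem tools respondents flag as pain
# points and rank them by frequency. Field names are assumptions.
from collections import Counter

def rank_pain_points(responses):
    """responses: list of lists of tool names each respondent flagged."""
    counts = Counter(tool for r in responses for tool in r)
    return [tool for tool, _ in counts.most_common()]
```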


Haptic Wearables as Navigation Aids
Mentors: Dr. Bill Smart and Dr. Naomi Fitter

For individuals with visual impairments, navigating the world and understanding nearby geometries can be a big challenge. We have been working with individuals with visual impairments to design and evaluate a worn haptic system that can sense nearby obstacles and convey information about the environment to the human user via vibrotactile feedback. REU students working on this project will help us to improve and evaluate the system with the guidance and advice of our system users.

Example projects:

  • Improving mapping of sensed obstacles to touch sensations
  • Integrating and evaluating new system components
  • Gathering and analyzing human user data
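
One simple (assumed, not the system's actual) mapping from sensed obstacle distance to vibrotactile intensity, with a linear ramp between illustrative range limits:

```python
# Illustrative sketch: map obstacle distance (meters) to vibration intensity
# in [0, 1]. Range limits and the linear ramp are assumptions.
def vibration_intensity(distance_m, min_range=0.2, max_range=2.0):
    """1.0 at/inside min_range, 0.0 at/beyond max_range, linear in between."""
    if distance_m <= min_range:
        return 1.0
    if distance_m >= max_range:
        return 0.0
    return (max_range - distance_m) / (max_range - min_range)
```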


Privacy and Telepresence
Mentors: Dr. Bill Smart and Dr. Cindy Grimm

As robots become more ubiquitous, there are open questions about how we should think of robots in our living spaces. In this project we look specifically at telepresence - "skype on a stick" - where a remote user controls a semi-autonomous robot in your home or office. Unlike a person in your space, we can control what the robot can see, do, and hear - for example, blurring or blacking out part of the video to hide what's there, or preventing the robot from going into some rooms.

This project takes a broad look at privacy in the context of remote presence systems, drawing on related work in areas from legal studies to psychology. We are trying to figure out how people think about privacy in the context of these systems, how we can build interfaces that allow people to specify their privacy concerns, and how we can build the underlying technology that supports privacy protection.

Example projects:

  • Interfaces for specifying privacy concerns for remote presence systems
  • Real-time modification of sensor streams (camera images) to protect privacy (by redacting, blurring, replacing, etc.)
  • Setting and enforcing physical restrictions on remote presence systems
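
A minimal sketch of one redaction filter, treating a frame as a nested list of pixel values and blacking out a rectangle; a real system would operate on camera image buffers, so this is illustrative only:

```python
# Illustrative sketch: black out a rectangular region of a frame to hide it
# from the remote viewer. Frame is a nested list of pixel values.
def redact_region(frame, top, left, height, width, fill=0):
    """Return a copy of the frame with the given rectangle set to `fill`."""
    out = [row[:] for row in frame]
    for r in range(top, min(top + height, len(out))):
        for c in range(left, min(left + width, len(out[r]))):
            out[r][c] = fill
    return out
```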


Multi-Robot Coordination
Mentor: Dr. Kagan Tumer

Many interesting real-world problems require multiple robots to work together. For example, search and rescue missions require coordinating dozens of autonomous robots, as well as ensuring that the robots and humans work together. But providing hard-coded coordination instructions is too limiting. This project explores the science of coordination, and focuses on how to provide incentives to individual robots so that they work collectively.

Example projects:

  • Programming intelligent decision making for robots
  • Implementing incentives for robots
  • Testing coordination algorithms in hardware (wheeled and flying robots)
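
One common way to give individual robots incentives aligned with the team is the difference reward: credit each robot with the global reward minus a counterfactual computed without that robot. The sketch below uses a toy coverage objective (unique points observed) standing in for a real mission reward:

```python
# Hedged sketch of difference rewards with a toy team objective.
def global_reward(observations_per_robot):
    """Toy team objective: number of distinct points observed by any robot."""
    seen = set()
    for obs in observations_per_robot:
        seen.update(obs)
    return len(seen)

def difference_reward(observations_per_robot, i):
    """G(all robots) - G(all robots except robot i): robot i's contribution."""
    without_i = observations_per_robot[:i] + observations_per_robot[i + 1:]
    return global_reward(observations_per_robot) - global_reward(without_i)
```

A robot that only duplicates its teammates' observations earns a difference reward of zero, which is exactly the incentive structure that encourages robots to work collectively rather than redundantly.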