The Magic Carpet project is an innovative aerial robotics experiment conducted in Spring 2025 at the MARHES Lab. This project explores soft, underactuated platforms for aerial locomotion, using a novel flexible surface suspended by tethers and actuated by aerial thrust.
The system, dubbed the “Magic Carpet,” consists of a pliable fabric sheet suspended in midair and stabilized by a set of coordinated propellers. Each corner is attached to an independent tethered thrust unit, enabling lift and basic attitude control. Unlike traditional rigid-body drones, this platform showcases compliance and passive damping, making it ideal for interaction in cluttered or sensitive environments.
The videos demonstrate the Magic Carpet hovering in place and dynamically responding to inputs, revealing promising results in stability and control despite its unconventional structure. The system was designed and tested by researcher Kevin Aubert, who continues to explore ways to improve its robustness and expand its capabilities.
This project reflects the MARHES Lab’s ongoing commitment to developing new paradigms for mobility in aerial robotics—especially those that depart from traditional rigid and over-actuated designs.
Stay tuned for future updates as the Magic Carpet continues its journey toward controlled soft aerial locomotion.
The team implemented SKETCH, a real-time boundary-tracing algorithm using two UAVs to detect and trace volcanic CO₂ plume edges. One UAV flies inside the plume boundary and another outside, adjusting paths based on real-time gas readings to maintain a “sandwich invariant.” Compared to a baseline single-drone strategy (ZIGZAG), SKETCH offered significant efficiency gains—shorter flight paths, less turning, and faster mapping times—without sacrificing boundary-tracking accuracy.
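The published SKETCH controller is defined in the associated paper; purely as an illustration of the “sandwich invariant” idea described above, the hypothetical snippet below advances each UAV along the boundary and nudges it back toward its assigned side (inside or outside the plume) whenever its CO₂ reading says it has drifted across. The function names, sign conventions, and threshold logic are assumptions for this sketch, not the lab’s implementation.

```python
import numpy as np

def steer(position, heading, co2_reading, co2_threshold, assigned_inside,
          forward_speed=1.0, correction_gain=0.5):
    """Illustrative sandwich-invariant steering (not the published SKETCH law).

    Each UAV advances along the plume boundary; if its CO2 reading shows it is
    on the wrong side of the boundary, it turns back toward its assigned side.
    """
    inside_now = co2_reading >= co2_threshold
    # Positive turn steers toward the plume, negative away from it; this sign
    # convention is an assumption made only for the sketch.
    if assigned_inside and not inside_now:
        turn = +correction_gain      # drifted outside: steer back into the plume
    elif (not assigned_inside) and inside_now:
        turn = -correction_gain      # drifted inside: steer back out of the plume
    else:
        turn = 0.0                   # on the correct side: keep tracing forward
    new_heading = heading + turn
    velocity = forward_speed * np.array([np.cos(new_heading), np.sin(new_heading)])
    return position + velocity, new_heading
```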
Field tests were conducted with physical Dragonfly drones in simulated and natural environments, including Balloon Fiesta Park and the Valles Caldera supervolcano in New Mexico. In simulation and real-world scenarios, SKETCH outperformed traditional mapping strategies, demonstrating robust performance even in complex multi-plume structures and narrow “dumbbell” configurations.
Gradient Descent and Rasterization Mapping
Earlier phases of VolCAN demonstrated the Dragonfly swarm’s ability to autonomously follow CO₂ gradients using a modified flocking algorithm, allowing UAVs to collaboratively locate emission sources. Rasterization techniques, including lawnmower and spiral search algorithms, were used for large-area mapping, while Kriging interpolation was applied to create CO₂ concentration heatmaps.
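The project’s own interpolation pipeline is not reproduced here; as a minimal sketch of turning scattered CO₂ samples into a concentration heatmap with ordinary Kriging, the example below uses the third-party pykrige package, which is assumed here purely for illustration (the sample values are placeholders, not field data).

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # third-party package, assumed available

# Scattered (x, y) sample positions in meters and CO2 concentrations in ppm.
# The values below are synthetic placeholders, not VolCAN field data.
x = np.random.uniform(0, 100, 50)
y = np.random.uniform(0, 100, 50)
co2 = 400 + 200 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 500)

# Fit an ordinary-Kriging model and interpolate onto a regular grid.
ok = OrdinaryKriging(x, y, co2, variogram_model="spherical")
grid_x = np.linspace(0, 100, 200)
grid_y = np.linspace(0, 100, 200)
heatmap, variance = ok.execute("grid", grid_x, grid_y)  # 2-D concentration map
```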
Impact and Future Directions
The VolCAN project reduces the risk to human researchers, improves temporal resolution in gas monitoring, and provides tools for better volcanic eruption prediction. The SKETCH system bridges the theory-implementation gap in autonomous boundary tracing and lays the foundation for future applications in environmental robotics, including hazardous leak detection and industrial monitoring.
This project developed a decentralized adaptive controller for aerospace robots—including hexarotors and space tugs—to cooperatively transport unknown payloads in both Earth and zero-gravity environments. The system dynamically adapts to changes in object mass, inertia, and grasping points without prior knowledge. Using simulations in MuJoCo, researchers validated robust 6-DoF payload manipulation, even under robot loss scenarios. This work supports scalable and fault-tolerant aerial and space logistics and satellite servicing.
In the realm of Advanced Air Mobility (AAM), the ability for multiple autonomous UAVs to arrive at predefined destinations simultaneously is critical—whether to transport heavy payloads cooperatively or to synchronize actions in mission-critical scenarios. However, this task is far from simple. UAVs must navigate complex spatial environments, not only avoiding collisions with each other, but also steering clear of non-cooperative flying objects (NCFOs) that aren’t part of the system.
To tackle this challenge, the MARHES Lab has contributed to the development of MORRIS—the safe terminal tiMe-cOordinated contRolleR for multI-uav Systems. MORRIS is a novel safe linear quadratic optimal control algorithm designed to achieve time-synchronized arrivals while also ensuring collision-free trajectories.
MORRIS is made up of two integrated components:
1. Terminal Time-Coordinated Planner: Calculates optimal acceleration inputs to minimize the timing error between actual and desired arrival times for all UAVs.
2. Safety Layer using Control Barrier Functions (CBFs): Adjusts those acceleration commands to guarantee that the UAVs maintain safe distances from each other and from NCFOs, ensuring real-time collision avoidance.
This layered approach means that MORRIS not only keeps UAVs safely spaced in dynamic environments but also ensures that they complete their missions in tight temporal coordination.
The MARHES Lab at the University of New Mexico continues to break new ground in aerial robotics with two actuation projects: the Fully-Actuated Hexarotor and the Omni-Directional Multirotor (Omnicopter). These vehicles, developed as part of Riley McCarthy’s Master’s thesis and supported by both Sandia National Laboratories and the Air Force Research Laboratory, are engineered to transform multirotors from passive inspection tools into platforms capable of active interaction with their environment.
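MORRIS’s full formulation appears in the corresponding publication; the fragment below is only a minimal sketch of the safety-layer idea, filtering a nominal command through a control-barrier-function constraint with a small quadratic program. It uses a single-integrator abstraction and the cvxpy package purely for illustration; MORRIS itself operates on accelerations for the full UAV dynamics.

```python
import numpy as np
import cvxpy as cp

def safety_filter(u_nominal, p_self, p_obstacle, d_safe=2.0, alpha=1.0):
    """Minimal CBF-style safety filter on a single-integrator model (illustrative).

    Finds the command closest to the planner's nominal command that still keeps
    the barrier h(p) = ||p - p_obs||^2 - d_safe^2 from decaying too fast.
    """
    u = cp.Variable(2)
    h = float(np.sum((p_self - p_obstacle) ** 2) - d_safe ** 2)
    grad_h = 2.0 * (p_self - p_obstacle)               # dh/dp for this barrier
    constraint = [grad_h @ u >= -alpha * h]            # enforce hdot + alpha*h >= 0
    objective = cp.Minimize(cp.sum_squares(u - u_nominal))
    cp.Problem(objective, constraint).solve()
    return u.value

# Example: the nominal command points straight at the obstacle; the filter scales it back.
print(safety_filter(np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([3.0, 0.0])))
```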
Fully-Actuated Hexarotor
Designed to overcome the limitations of traditional co-planar multirotor drones, the fully-actuated hexarotor uses six strategically oriented propellers to decouple translational and rotational motion—unlocking six degrees of freedom (6 DOF). This allows it to hover, pitch, roll, and translate independently, making it ideal for complex manipulation tasks.
Hardware & Build:
Based on a modified Tarot TL65B01 carbon fiber frame
Equipped with tri-blade 9-inch propellers and 2806.5-1460KV motors
Integrated with an Odroid-XU4 companion computer and a Rokubi Mini force/torque sensor
Controlled via PX4 flight stack and ROS2 Foxy middleware
Testing & Control:
Controlled using a custom PID controller for position and PX4’s built-in attitude controller
Executed trajectory tracking tasks: hovering with pitch/roll offsets, figure-eight paths, and level translation
Showcased force-feedback manipulation via a hybrid position/force controller, enabling wall contact, sustained pushing, and motion during contact using real-time feedback from the Rokubi sensor
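The actual hybrid controller is documented in the thesis; the snippet below is only a schematic of the hybrid position/force idea mentioned in the last item, regulating contact force along the wall normal while an ordinary position loop runs in the tangential directions. The gains, axis choices, and the interface to the force/torque sensor are assumptions for illustration.

```python
import numpy as np

def hybrid_position_force(pos, pos_des, measured_force_normal, force_des,
                          normal=np.array([1.0, 0.0, 0.0]),
                          kp_pos=2.0, kp_force=0.05):
    """Schematic hybrid position/force law (illustrative, not the thesis controller).

    Along the wall normal the command tracks a desired contact force from the
    force/torque sensor; in the tangential plane a simple position loop runs.
    """
    # Project the position error onto the plane tangential to the wall.
    tangential_proj = np.eye(3) - np.outer(normal, normal)
    accel_pos = kp_pos * tangential_proj @ (pos_des - pos)
    # Regulate contact force along the wall normal (positive means pushing).
    accel_force = kp_force * (force_des - measured_force_normal) * normal
    return accel_pos + accel_force   # commanded acceleration for the inner attitude/thrust loop
```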
Omni-Directional Multirotor (Omnicopter)
Expanding upon the hexarotor’s capabilities, the omnicopter introduces true omni-directionality—it can thrust in any 3D direction, regardless of its orientation. Built for extreme aerial flexibility, this design is well-suited for emulating spacecraft dynamics and next-generation maneuvering.
Hardware & Control:
Features a symmetric design and fixed, non-planar propellers
Uses PX4’s updated control allocation for managing complex thrust vectors
Inputs calculated using the Moore-Penrose pseudo-inverse method for dynamic allocation (see the allocation sketch after this list)
Tested extensively in Gazebo simulation and real-world flight using the Vicon system
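As an illustration of the pseudo-inverse allocation step noted above, the snippet below maps a desired body wrench to individual rotor thrusts through the Moore-Penrose pseudo-inverse of an allocation matrix. The matrix values and actuator limits here are placeholders, not the omnicopter’s actual geometry.

```python
import numpy as np

# Placeholder allocation matrix B (6 x n_rotors): each column holds the force and
# torque produced by unit thrust on one rotor, expressed in the body frame. The
# numbers below are illustrative only; the real omnicopter geometry defines them.
B = np.random.randn(6, 8)

def allocate(wrench_desired, thrust_min=0.0, thrust_max=10.0):
    """Map a desired 6-D wrench [fx, fy, fz, tx, ty, tz] to rotor thrusts."""
    thrusts = np.linalg.pinv(B) @ wrench_desired       # minimum-norm solution
    return np.clip(thrusts, thrust_min, thrust_max)    # respect actuator limits

# Example: request 10 N of thrust along body z with zero torque.
print(allocate(np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0])))
```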
Experimental Achievements:
Hovering while rotating 360° about multiple axes
Translational motion while spinning in arbitrary orientations
Demonstrated thrust envelope coverage across 3D space
Both of these projects represent major advancements in the control and design of aerial robotic systems. By equipping drones with full six-degree-of-freedom actuation and hybrid force control, the MARHES Lab is opening the door to a future where aerial systems can manipulate objects, interact with surfaces, and perform complex real-world tasks previously reserved for grounded robots.
This project introduced a novel cooperative aerial manipulation strategy using two “catenary robots”—each formed by two quadrotors linked by a cable. Rather than attaching cables to a payload, the robots wrap cables around a box and pull it using friction. An adaptive controller compensates for unknown object parameters (mass, inertia, and contact points), enabling autonomous transport of cuboid payloads without human setup. This method provides a flexible, low-intervention solution for package delivery and urban drone logistics.
The Mobile Adaptive/Reactive Counter-Uncrewed System (MARCUS) is an international collaboration between the University of New Mexico, Sandia National Laboratories, ETH Zurich, the University of Zagreb, and Switzerland’s armasuisse. Supported by the NATO Science for Peace and Security Programme, the project addresses the rising threat posed by unauthorized drones (UAS) entering protected airspaces.
The MARCUS project develops a heterogeneous autonomous multi-robot system that can detect, track, and intercept rogue drones with minimal collateral damage. It combines ground-based mobile robots, aerial pursuer UAVs, and interceptor UAVs with capture mechanisms—each fitted with a diverse array of sensors including LiDAR, RGB-D cameras, radar, and stereo vision.
Key Features:
Runtime Assurance Control: Developed and implemented by Isaac Seslar, this ensures safe UAV operation by dynamically switching between high-performance and safety controllers when needed.
Deep Learning for Drone Detection: Variants of YOLO combined with Kalman filters provide robust visual tracking in real-world and simulated environments (a minimal tracking sketch follows this list).
Deep Reinforcement Learning (DRL): Algorithms like MAGNET and PPO-A2C were developed to train pursuers to collaborate while avoiding collisions in dynamic airspaces.
Multimodal Sensing: Integration of multiple sensor types improves robustness in unstructured or cluttered environments.
Field-Tested Systems: The final MARCUS architecture has been physically tested in outdoor environments, successfully demonstrating target detection, interception, and cooperative behavior between robots.
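The project’s detection-and-tracking pipeline is described in its publications; as a rough illustration of coupling a detector with a Kalman filter, the sketch below propagates a constant-velocity state and corrects it with the pixel center of each YOLO bounding box. The state layout, noise values, and time step are illustrative assumptions.

```python
import numpy as np

class ConstantVelocityTracker:
    """Toy constant-velocity Kalman filter over a detector's bounding-box center."""

    def __init__(self, dt=0.05, process_noise=1e-2, measurement_noise=1.0):
        self.x = np.zeros(4)                       # state: [px, py, vx, vy] in pixels
        self.P = np.eye(4)
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = process_noise * np.eye(4)
        self.R = measurement_noise * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                          # predicted pixel position

    def update(self, detection_center):
        y = detection_center - self.H @ self.x     # innovation from the detector
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```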
Applications:
MARCUS is aimed at protecting sensitive airspaces around:
Airports
Military installations
Large public events
Critical infrastructure
Its modular and scalable design makes it adaptable for future defense and civilian applications.
The aim of this project is to combine Deep Q-Learning with the trajectory generation algorithm developed at the MARHES Lab for vision-aided quadrotor navigation. A set of motion primitives in three directions is computed before flight and executed online. A simulated 2-D laser scan provides the raw features, which are then processed further.
An epsilon-greedy policy maintains the balance between exploration and exploitation. The Q-values are updated recursively from the Bellman equation; the resulting error is back-propagated to train the neural network. The Keras library in Python is used to train the network and to predict the desired actions. A Python node also subscribes to and processes the laser-scan features, while a front-end C++ node detects collisions and executes the trajectories when they are collision-free. The Keras-based Python node and the pre-designed C++ collision-detection node communicate through a service-client architecture (thanks to ROS!). This approach lets the robot learn while it explores without collisions. Preliminary results are shown.
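The project’s training code is not reproduced here; the fragment below is a minimal sketch of the epsilon-greedy selection and Bellman-target update described above, written with the Keras API. The network size, the three discrete actions, the feature length, and the hyperparameters are assumptions made only for this illustration.

```python
import numpy as np
import tensorflow as tf

NUM_ACTIONS = 3        # e.g. left / straight / right motion primitives (assumed)
FEATURE_DIM = 32       # processed laser-scan feature length (assumed)

q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(FEATURE_DIM,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_ACTIONS),            # one Q-value per primitive
])
q_net.compile(optimizer="adam", loss="mse")

def select_action(features, epsilon=0.1):
    """Epsilon-greedy choice between exploration and exploitation."""
    if np.random.rand() < epsilon:
        return np.random.randint(NUM_ACTIONS)
    q_values = q_net.predict(features[None, :], verbose=0)[0]
    return int(np.argmax(q_values))

def train_step(features, action, reward, next_features, done, gamma=0.95):
    """One Bellman update: fit the chosen action's Q-value to r + gamma * max Q'."""
    target = q_net.predict(features[None, :], verbose=0)[0]
    next_q = q_net.predict(next_features[None, :], verbose=0)[0]
    target[action] = reward if done else reward + gamma * np.max(next_q)
    q_net.fit(features[None, :], target[None, :], verbose=0)
```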
This research is inspired by the popular computer game “Race the Sun,” in which a solar-powered craft has to travel as far as possible while the sun slowly sets. While maneuvering forward, it must therefore conserve energy and shape its trajectories around obstacles at the same time. A similar idea is used here, combining real-time trajectory generation techniques with stereo vision. Multi-threading features in the Robot Operating System (ROS) and C++ are used to achieve parallel trajectory generation and execution, and OpenCV is used extensively for image processing. Hardware was also developed to run the proposed algorithm: an NVIDIA Jetson TX2 performs all computation onboard, and a forward-facing ZED-mini stereo camera provides visual odometry and the depth-image stream used by the planning algorithm. The system navigates fully independently, without GPS or a motion capture system.
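The planning algorithm itself is described in the related work; as a rough illustration of how a depth stream can gate candidate trajectories, the snippet below flags a candidate corridor as blocked when enough depth pixels in its image window fall under a clearance threshold. The window layout, thresholds, and the treatment of missing depth data are assumptions, not the project’s actual logic.

```python
import numpy as np

def corridor_blocked(depth_image, window, clearance_m=2.0, blocked_fraction=0.02):
    """Return True if the image window for a candidate trajectory looks occupied.

    depth_image: 2-D array of metric depths from the stereo camera (NaN/0 = invalid).
    window:      (row_min, row_max, col_min, col_max) region the trajectory sweeps.
    """
    r0, r1, c0, c1 = window
    patch = depth_image[r0:r1, c0:c1]
    valid = patch[np.isfinite(patch) & (patch > 0.0)]
    if valid.size == 0:
        return False                        # no depth data: treat as free (a design choice)
    too_close = np.count_nonzero(valid < clearance_m)
    return too_close / valid.size > blocked_fraction
```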
The main idea behind the project is to develop a fully autonomous system that does not rely on external sensing such as motion capture or GPS, while still being able to sense its environment for various tasks. An NVIDIA Jetson TK1 serves as the main onboard processor, and a forward-facing ZED stereo camera provides visual odometry and detects objects in the environment. The test prototype demonstrates autonomous navigation through a set of square targets: the stereo camera detects the squares and their center points, and a path is then planned through those centers. The algorithm is implemented in C++ using the Robot Operating System (ROS) framework.
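The project’s detection code is implemented in C++; as a compact illustration of finding square-like contours and their centers, the sketch below uses OpenCV from Python with assumed thresholds and the OpenCV 4 findContours return signature. It is a sketch of the general technique, not the project’s actual detector.

```python
import cv2
import numpy as np

def find_square_centers(bgr_image, min_area=1000.0):
    """Detect roughly square contours and return their center points in pixels."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # OpenCV 4 returns (contours, hierarchy); OpenCV 3 adds a leading image element.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > min_area:
            m = cv2.moments(approx)
            if m["m00"] > 0:
                centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```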