• Reinforcement Learning for Obstacle Avoidance


    Students: Shakeeb Ahmad

    The aim of this project is to combine Deep Q-Learning with the trajectory generation algorithm developed at the MARHES Lab for vision-aided quadrotor navigation. Motion primitives in three directions are computed prior to flight and executed online. A simulated 2-D laser scan provides the raw features, which are then processed further.

    An epsilon-greedy policy maintains a balance between exploration and exploitation. The Q-values are recursively updated via the Bellman equation to compute the error of the neural network, which is then back-propagated to train it. The Keras library in Python is used to train the network and predict desired actions; the same Python node also subscribes to and processes the laser-scan features. A front-end C++ node detects collisions and executes trajectories only if they are collision-free. The Python node, exploiting Keras, thus works alongside the pre-designed C++ collision-detection node through a service-client architecture (thanks to ROS!). This approach ensures learning while the robot undergoes collision-free exploration. Preliminary results are shown.
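The two learning ingredients named above — epsilon-greedy action selection and the Bellman target for the Q-update — can be sketched in a few lines. This is a minimal NumPy illustration, not the project's actual Keras node; the function names and the interface (Q-values passed in as an array) are assumptions for the example.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon take a random action (exploration),
    otherwise take the greedy action (exploitation)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def bellman_target(reward, q_next, gamma=0.99, done=False):
    """Bellman target r + gamma * max_a' Q(s', a').
    No bootstrapping past a terminal state (e.g. a detected collision)."""
    if done:
        return reward
    return reward + gamma * float(np.max(q_next))
```

In a DQN training step, the squared difference between `bellman_target(...)` and the network's current Q-estimate for the chosen action is the error that is back-propagated through the network.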

  • Stereo Vision-Based Obstacle Avoidance


    Students: Shakeeb Ahmad

    This research is inspired by a popular computer game, “Race the Sun”, in which a UAV must fly as far as possible on solar energy while the sun slowly sets. While maneuvering forward, it therefore has to conserve energy and shape trajectories around obstacles at the same time. A similar idea is used here, combining real-time trajectory generation techniques with stereo vision. Multi-threading features of the Robot Operating System (ROS) and C++ are utilized to achieve parallel trajectory generation and execution, and OpenCV is used extensively for image processing. The hardware to run the proposed algorithm is also developed: a Jetson TX2 performs all computations onboard, and a forward-facing ZED Mini stereo camera provides visual odometry and the depth-image stream consumed by the planning algorithm. The system is fully self-contained and needs no GPS or motion capture system to navigate.
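The core of depth-space planning is deciding, from the stereo depth image, whether the corridor ahead is clear. The following is a hedged NumPy sketch of such a check — the window size, safety distance, and function name are illustrative assumptions, not values from the paper, and the real system operates on the ZED Mini's depth stream rather than a raw array.

```python
import numpy as np

def path_blocked(depth, window=40, safe_dist=1.5):
    """Return True if any valid depth reading in a central window of the
    depth image is closer than safe_dist (meters).

    depth: HxW array of metric depths; NaN or 0 marks invalid stereo matches.
    """
    h, w = depth.shape
    roi = depth[h // 2 - window // 2 : h // 2 + window // 2,
                w // 2 - window // 2 : w // 2 + window // 2]
    valid = roi[np.isfinite(roi) & (roi > 0)]
    return bool(valid.size) and float(valid.min()) < safe_dist
```

A planner running in one thread can evaluate candidate trajectories against checks like this while another thread executes the current collision-free trajectory, mirroring the parallel generation/execution structure described above.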

    Papers:
    [1] S. Ahmad, “High-Performance Testbed for Vision-Aided Autonomous Navigation for Quadrotor UAVs in Cluttered Environments”, The University of New Mexico (Digital Repository), 2018

    [2] S. Ahmad, R. Fierro, “Real-time Quadrotor Navigation Through Planning in Depth Space in Unstructured Environments.”

  • Autonomous Maneuver through Square Targets


    Students: Shakeeb Ahmad, Greg Brunson

    The main idea behind the project is to develop a fully autonomous system, free of external sensing such as motion capture and GPS, that is nevertheless capable of perceiving its environment for various tasks. An NVIDIA Jetson TK1 serves as the main onboard processor, while a forward-facing ZED stereo camera provides visual odometry and detects objects in the environment. The test prototype is then used to implement autonomous navigation through a set of square targets: the stereo camera detects the squares and their center points, and a path is planned through those centers. The algorithm is implemented in C++ using the Robot Operating System (ROS) framework.
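Once the four corners of each square target have been detected (in the actual system, via stereo image processing), planning reduces to computing each square's center and ordering the centers into waypoints. A minimal sketch of that geometric step, assuming corner pixels and per-target depths are already available — the function names and the depth-sorted ordering are illustrative assumptions, and the example is in Python for brevity although the project is implemented in C++:

```python
import numpy as np

def square_center(corners):
    """Center of a detected square target from its four corner pixels
    (a 4x2 array-like of (u, v) image coordinates)."""
    return np.asarray(corners, dtype=float).mean(axis=0)

def plan_waypoints(squares, depths):
    """Order the square centers by depth so the vehicle flies through the
    nearest target first; returns a list of (u, v) waypoints."""
    centers = [square_center(c) for c in squares]
    order = np.argsort(depths)
    return [tuple(centers[i]) for i in order]
```

In the full pipeline these image-plane centers would be back-projected through the stereo camera model into 3-D positions before a trajectory is fit through them.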

    Papers:
    [1] S. Ahmad, “High-Performance Testbed for Vision-Aided Autonomous Navigation for Quadrotor UAVs in Cluttered Environments”, The University of New Mexico (Digital Repository), 2018