Toy Project
Autonomous Driving

Aim to develop an autonomous driving system that can map its environment and navigate a predefined track while avoiding collisions, staying on the road, and obeying traffic signals.
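The behaviors above (stopping for red lights and obstacles, staying centered on the road) can be illustrated with a minimal rule-based control step. This is a hypothetical sketch, not the project's actual controller; the `drive_command` function, its gains, and its interface are all assumptions for illustration.

```python
from enum import Enum

class Signal(Enum):
    RED = 0
    GREEN = 1

def drive_command(signal, obstacle_ahead, lane_offset, k_steer=0.5):
    """Return (throttle, steering) for one control step.

    Stops for red lights or obstacles; otherwise steers back toward
    the lane center proportionally to the lateral offset (a simple
    proportional lane-keeping rule, assumed for this sketch).
    """
    if signal is Signal.RED or obstacle_ahead:
        return 0.0, 0.0                 # brake and hold
    steering = -k_steer * lane_offset   # steer against the offset
    return 0.5, steering                # fixed cruise throttle
```

A real system would replace the boolean inputs with perception outputs (detected signal state, obstacle range, estimated lane offset), but the priority ordering shown here (safety stops before lane keeping) is the core idea.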
Integrate computer vision and geometric modeling to develop an autonomous system that generates point clouds from disparity images and produces a 3D surface mesh.
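The disparity-to-point-cloud step follows standard stereo geometry: depth is Z = f·B/d, and pixels are back-projected through the pinhole model. A minimal pure-Python sketch (the function name and list-based interface are assumptions; a real pipeline would use NumPy arrays or OpenCV's `reprojectImageTo3D`):

```python
def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a disparity map into a 3D point cloud.

    disparity: 2D list of pixel disparities (<= 0 means invalid).
    f: focal length in pixels; baseline: stereo baseline in meters;
    (cx, cy): principal point. Returns a list of (X, Y, Z) points
    in the camera frame.
    """
    points = []
    for v, row in enumerate(disparity):
        for u, d in enumerate(row):
            if d <= 0:                    # skip invalid/occluded pixels
                continue
            Z = f * baseline / d          # depth from stereo geometry
            X = (u - cx) * Z / f          # pinhole back-projection
            Y = (v - cy) * Z / f
            points.append((X, Y, Z))
    return points
```

The resulting cloud can then be turned into a surface mesh with a standard reconstruction method (e.g., Poisson or ball-pivoting reconstruction, as offered by libraries such as Open3D).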
Aim to design a fault-tolerant quadrotor controller using deep reinforcement learning techniques, highlighting both the possibilities and limitations of this novel approach.
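A common way to train and test fault tolerance in simulation is to wrap the quadrotor environment so that one rotor's command is zeroed, emulating a complete motor failure. The sketch below assumes a hypothetical environment exposing a `step(action)` method with four rotor thrust commands; it is an illustration of the fault-injection setup, not the project's actual training code.

```python
class RotorFaultWrapper:
    """Wrap a quadrotor environment to simulate a motor failure.

    `env` is any object with a step(action) method taking four rotor
    thrust commands (hypothetical interface); `failed_rotor` indexes
    the rotor whose output is forced to zero.
    """
    def __init__(self, env, failed_rotor):
        self.env = env
        self.failed_rotor = failed_rotor

    def step(self, action):
        faulty = list(action)
        faulty[self.failed_rotor] = 0.0   # dead motor produces no thrust
        return self.env.step(faulty)
```

Training a reinforcement-learning policy inside such a wrapper forces it to recover using only the remaining rotors, which is where both the promise and the limitations of the learned controller become visible.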
Introduce a pioneering method for designing a learning-based UAV path planner that combines supervised learning with reinforcement learning in an image-to-image, single-shot strategy.