What:  Put together a low-cost research platform capable of vision-based autonomous navigation, which can later be coupled with higher-level cognitive frameworks for experimentation.

I’d like to stay in the open-source domain as much as possible.  The robot in this instance is an octocopter drone from 3D Robotics, but the work ought to translate well to other platforms such as ground vehicles (e.g., autonomous rovers), humanoids, etc.

Why:  Currently drones, and robots more generally, lack the sense of spatial awareness needed to perform useful work such as construction or deliveries. GPS as a primary means of autonomous navigation lacks the accuracy needed to avoid collisions with objects, and a GPS lock cannot be achieved at all in certain areas for all sorts of reasons, including signal occlusion by buildings.
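To make the accuracy point concrete, here is a small sketch (pure Python; the coordinates and the few-metre error figure are illustrative assumptions, not measurements) that converts two GPS fixes of a stationary receiver into local east/north metres and shows the apparent drift, which is on the order of the clearance a drone needs around obstacles:

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def enu_offset(lat_ref, lon_ref, lat, lon):
    """Approximate east/north offset in metres of (lat, lon) from a
    reference fix, using a flat-earth (equirectangular) approximation
    that is adequate over the few-metre scales discussed here."""
    d_lat = math.radians(lat - lat_ref)
    d_lon = math.radians(lon - lon_ref)
    east = EARTH_RADIUS_M * d_lon * math.cos(math.radians(lat_ref))
    north = EARTH_RADIUS_M * d_lat
    return east, north

# Two consecutive "fixes" of a stationary receiver, differing by a
# plausible consumer-grade GPS error (made-up coordinates).
e, n = enu_offset(37.0, -122.0, 37.000027, -122.000034)
drift = math.hypot(e, n)
print(f"apparent drift: {drift:.1f} m")
```

A drift of several metres is larger than the margin needed to pass a doorway or avoid a power line, which is why a vision-based local sense of position is needed on top of (or instead of) GPS.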

Application domains: agriculture, construction, art (choreography & lighting).

Research Landscape
System Architecture
Visual SLAM Algorithms

Scene Segmentation
Object Recognition
Context Recognition
Speech Engine
Real-Time Kinematic (RTK) and LIDAR extensions
Comparative study of cognitive architectures
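As a pointer to what the Visual SLAM item in the outline involves, here is a minimal sketch of the alignment step at the heart of feature-based visual odometry: given matched feature points from two frames (already in metric coordinates), solve for the rigid rotation and translation between them. This is a 2-D Kabsch/Procrustes solve in pure Python; a real pipeline would feed it noisy feature matches and wrap it in outlier rejection, so treat this as an illustration rather than the project's implementation:

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst
    (2-D Kabsch/Procrustes). src/dst would be matched image features."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centred point sets.
    sxx = sxy = syx = syy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        sxx += xs * xd; sxy += xs * yd
        syx += ys * xd; syy += ys * yd
    # In 2-D the optimal rotation angle has a closed form.
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

# Synthetic check: rotate points by 10 degrees and shift them;
# the solver should recover both the angle and the translation.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
ang = math.radians(10.0)
dst = [(math.cos(ang) * x - math.sin(ang) * y + 0.5,
        math.sin(ang) * x + math.cos(ang) * y - 0.2) for x, y in src]
theta, (tx, ty) = estimate_rigid_2d(src, dst)
print(round(math.degrees(theta), 2), round(tx, 2), round(ty, 2))
```

Chaining such frame-to-frame estimates (plus loop closure and mapping) is what distinguishes full visual SLAM from bare odometry, and is where the open-source packages under evaluation come in.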
