That’s pretty cool – they throw a camera around at the end.
That’s you. You’re the camera. Seeing things from a ball’s-eye POV, via the network.
I’d like to know how the learning algorithm works. I’ve been tinkering with this today… teaching an Arduino to control a motor that points a sensor at a light (little baby steps). It’s easy enough to control the thing mathematically, but it would be a lot more interesting to control it with a genetic algorithm.
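If it helps, the simplest version has a pretty small shape: treat the controller’s parameter(s) as genes, score each candidate by how well it tracks the light, keep the winners, and mutate them into the next generation. Here’s a toy sketch of that idea in plain C++ — the “servo” is a one-line simulation rather than real Arduino hardware, and all the numbers (population size, mutation step, etc.) are made up for illustration:

```cpp
// Toy genetic algorithm: evolve a proportional gain k for a
// simulated light-tracking servo. Lower total error = fitter.
#include <cstdio>
#include <cmath>
#include <vector>
#include <random>
#include <algorithm>

// Simulate pointing at a light with gain k for 100 steps;
// return the accumulated absolute pointing error.
double fitness(double k) {
    double angle = 0.0, target = 90.0, err_sum = 0.0;
    for (int t = 0; t < 100; ++t) {
        double err = target - angle;
        angle += k * err;           // the "servo" step the controller commands
        err_sum += std::fabs(err);
    }
    return err_sum;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> init(0.0, 2.0);
    std::normal_distribution<double> mutate(0.0, 0.05);

    // Population of candidate gains, randomly initialized.
    std::vector<double> pop(20);
    for (double &k : pop) k = init(rng);

    for (int gen = 0; gen < 50; ++gen) {
        // Rank candidates: best (lowest error) first.
        std::sort(pop.begin(), pop.end(),
                  [](double a, double b) { return fitness(a) < fitness(b); });
        // Keep the top half; refill the rest with mutated copies of survivors.
        for (size_t i = pop.size() / 2; i < pop.size(); ++i)
            pop[i] = pop[i - pop.size() / 2] + mutate(rng);
        std::printf("gen %2d  best k = %.3f  error = %.2f\n",
                    gen, pop[0], fitness(pop[0]));
    }
}
```

On real hardware the fitness function would be the slow part: each candidate gain has to actually drive the motor and read the light sensor for a while before it gets a score, which is why people often evolve against a simulation first.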
Here’s a microcopter learning to do a triple flip.
There are links to the algorithms (or at least the theory) for doing this here, but unfortunately they might as well be in Klingon.
Robotic Sensory Loops and Whatnot
Servo Bender