This is pretty cool – it’s using an Oculus Rift for the visuals, and a horizontal delta-arm setup, which has got to be an improvement on the normal cartesian-gantry system that CNC machines generally use. What it lacks in build-chamber size, it probably makes up for with resolution, which is around 100μm.
You could probably hook it up with something like this:
Which is an EEG sensor connected to a smartphone that detects when you’re interested in something… and if you are, it videos it. I quite like this because the EEG interface is kindof minimal – compared to the usual sort of thing.
which is a massive hassle, believe you me.
Anyway, you could hook the delta machine up to an EEG machine to control it, leaving both hands free to… erm… do a Rubik’s cube, or knit or something. Some sort of hobby.
If New Zealand wasn’t lumbered with New Zealand’s housing-costs/low-income (which causes anyone with any brains (or at least a student debt) to leave) and New Zealand TV (which is for morons)(and heaps glory on the deeply inadequate winners of “talent quests” and sports-people (people who run about, jumping and throwing stuff), but nonetheless appears to be THE forum for the national discourse) then we’d probably be as advanced as a proper Scandinavian country.
Although it would be funnier if it took the photo, then drew a picture of a squirrel or something.
I like the fact that it’s kindof a CNC arm-bot though… because I think this configuration has more potential than the cartesian design used by repraps etc today… because it can make things bigger than itself.
DIY Biotech (ish) using robots made out of Lego… the video itself made by someone so proficient that it’s almost a distraction.
I find this incredibly interesting – partly because our fablab to-be already has an alchemist who is working on artificial bone-structures… but mainly because of the relentlessness with which “intelligence” seems to be creeping in. Like water finding cracks etc. I have this vague vision of an “intelligence” – probably some sort of AI or genetic algorithm, that can move like a virus into any machine that has a chip, take it over and make it move about… or whatever. So something driving a car for example, could infect the phone of the driver, then get into the hair-dryer etc etc. Not necessarily to “do” anything, but just to be there. To own everything.
And maybe it already is… and it’s using humans as the vector… to adapt the machines, and do the hand-holding in the early stages. I mean at the moment, it’s fairly rudimentary… memory-based systems, but I can imagine something like Siri (or what Siri pretends to be) being “given” a machine… and it just moves in, and 10 seconds later you can verbally tell it what to do.
I’ve gone on about these before – but this is kindof a streamlined, albeit expensoid version. Once the basic concept/vitamin-parts are sussed though, the prices of these will nosedive and they’ll turn up everywhere… and not just for plants. There was a thing that made a load of money on Kickstarter recently – and all it was was a sensor-pod… for smartphones, if memory serves.
Because “you could do that too…” – non-vitamin parts made out of the cheapest stuff available – there’s a law of nature in there somewhere, but I’m not sure what it is. Special prize for the first person to make a quadcopter where (ironically) all the non-vitamin-parts are edible.
I can see this one getting really out of hand at some point.
And so on, and so on.
One of the things the Kickstarter project that made all the money did was to reduce the programming complexity to that of setting up “rules” in an email client. Choose stimuli from a drop-down; choose response from a dropdown. I think that’s kindof vital – because as successful as Arduinos are, they’re still like… the DOS of hardware interface.
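The drop-down idea is basically a lookup-table pattern, and it’s small enough to sketch. Everything below – the sensor names, thresholds and actions – is made up for illustration; it’s just the shape of the thing, not anyone’s actual product.

```python
# A sketch of the "email rules" model for hardware: the user picks a
# stimulus and a response from two fixed lists, no programming required.
# All names and thresholds here are hypothetical.

SENSORS = {
    "soil_dry": lambda r: r["moisture"] < 30,   # moisture in %
    "too_hot":  lambda r: r["temp_c"] > 35,     # temperature in °C
}

ACTIONS = {
    "water_plant": "pump on",
    "send_alert":  "alert sent",
}

# What the user would build with two drop-downs:
rules = [("soil_dry", "water_plant"), ("too_hot", "send_alert")]

def tick(readings):
    """Return the actions triggered by the current sensor readings."""
    return [ACTIONS[response] for stimulus, response in rules
            if SENSORS[stimulus](readings)]

fired = tick({"moisture": 20, "temp_c": 22})  # dry soil, mild day
```

Adding a new gadget then just means adding an entry to one of the dictionaries – which is the whole point: the complexity lives in the lists, not in the user’s head.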
This looks cool – though I’m not entirely convinced that tape-measures are the future of building materials.
I quite like these triangular robots… basically because the pancake-picker variants are insanely fast, and because I think they’ll eventually supplant cartesian XYZ axis CNC machines. The tape-measure-bot is interesting because it can change shape and move about… but ideally if it’s going to be making things it needs to be on a ceiling, working downwards.
In other random robot videos…
The ultimate in Nerdware… glasses that control a quadcopter. This is a step along the way… and although it’s not the most massively impressive video in the world, it’s quite an important step, and it’s really cool that it’s a hacker-kid doing it.
I think we’re going to see a Cambrian Explosion of moving-machines… and it’s going to happen when we get a sensor-motor feedback-loop sorted out – so a robot isn’t blindly moving a limb based on memory (of where the limb was at the start of the move) but on actually being able to see the limb in relation to reference points in its environment.
You know… like you do.
Ever try to play with one of those glow-in-the-dark frisbees? It’s actually quite tricky – because you can’t see your hands.
So there’s your project for the weekend. Make a sensor-motor feedback loop.
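In the spirit of the weekend project: the simplest version of a sensor-motor feedback loop is a one-dimensional proportional controller – instead of moving a limb “from memory”, you re-measure where it actually is every step and correct the remaining error. The numbers and the `sense` stand-in below are invented; a real robot would be reading a camera or an encoder.

```python
# A toy sensor-motor feedback loop: re-sense the limb's position every
# step and move a fraction of the remaining error, rather than executing
# one blind pre-computed move. Purely illustrative.

def sense(position):
    """Stand-in for a sensor reading of where the limb actually is.
    In reality this would come from vision/encoders, with noise."""
    return position

def move_to(target, position=0.0, gain=0.5, tolerance=0.01, max_steps=100):
    for _ in range(max_steps):
        error = target - sense(position)   # where we are vs where we want to be
        if abs(error) < tolerance:
            break
        position += gain * error           # correct a fraction of the error
    return position

final = move_to(10.0)
```

The point is that the loop never needs to “know” the mechanics in advance – if the limb slips or the load changes, the next sensor reading picks it up and the error term corrects for it. That’s the bit the blind, memory-driven robots are missing.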
Which kindof raises the possibility of selling “products” in which the only thing that gets shipped are the vitamin parts. The rest of it you print yourself, or get someone local to do it… with a diversity of skins/themes to choose from. Maybe. Similar to the way there are about a million different smartphone cases available at the moment… but instead of this being a “case”, it’s a whole outer shell… and the bit that you buy is just a load of circuits and chips.
Just a thought like.
I’m not sure if this is competition for Arduino or not – it’s not a hell of a lot bigger or more expensive… and is a fully fledged machine that runs languages that the likes of me already understand. It could well do with having a bunch of sensor plugins as well.
Coming from the other direction though are smartphones themselves – that already have the sensors and cameras built in, already with drivers etc… and in the next 10 years, there’s going to be A LOT of these hanging about, because Android devices are currently being activated at the rate of 850,000 a day (it’s gone up 100,000 since Christmas) and there are currently 300 million of them in the wild. These are going to become obsolete… and then what? Bin them? “Recycle” them? Give them to charities?
Soon there’s going to be a hell of a lot of spare computing capacity, in packages that to all intents and purposes don’t take up any space (unlike desktops or even laptops). Suddenly having a smartphone to turn virtually any electrical device into a “smart” device isn’t such a stupid idea.
Don’t know why, but this reminds me of the moon-landing. Something to do with frustration I think.
Fairly cool… “Avatar”, they call it – which is (heaven forfend) what it is. It’s a robot being controlled remotely via Kinect.
All the elements are in place, they’re just not terribly streamlined right now… but think on… imagine the possibilities of this process being very very streamlined… you wouldn’t have to be human for example. You wouldn’t have to be “there”. True, really, really big might be useful (in a slash and burn sort of way) but really, really small is probably where it’s headed. Personally I’m looking forward to back-yard safaris, where you get to fight spiders bigger than you are. Not that I’ve got anything against spiders mind. I’m just terrified of them.
Anyway, the cat seems compliant enough – probably quite heavily sedated etc… but the way its tail goes “flip, flip” generally means trouble. My cat wouldn’t put up with this. He’d go mental.
Looks like a thing that you can plug into pretty much any device that is controlled with IR (or a serial port), which allows you to control it with a computer or smartphone… which means it can be programmed. It also has movement and light sensors built in.
Unfortunately, it uses Java as a programming language, so it’s not going to catch on… well, not until someone writes an API layer onto it which allows it to understand languages that people actually use – probably JS, Python or PHP.
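To give a feel for what that friendlier layer might look like from the Python side: drive the gadget over its serial port with named commands instead of raw Java. The framing below (start byte, opcode, checksum) is completely made up – a real device would define its own protocol – and the commented-out pySerial lines are just the obvious way you’d wire it to actual hardware.

```python
# A sketch of driving an IR-blaster-type gadget from Python. The protocol
# here (start byte + opcode + inverted-byte checksum) is entirely
# hypothetical; a real device would document its own framing.

IR_CODES = {           # hypothetical opcodes
    "tv_power":  0x10,
    "volume_up": 0x11,
}

def build_command(name):
    """Frame a named IR code for the (made-up) serial protocol."""
    code = IR_CODES[name]
    checksum = (0xFF - code) & 0xFF        # simple inverted-byte checksum
    return bytes([0xA5, code, checksum])   # 0xA5 = invented start byte

# With pySerial you'd then do something like:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 9600)
#   port.write(build_command("tv_power"))

frame = build_command("tv_power")
```

Which is the whole pitch really: once the framing is hidden behind `build_command`, “turn the telly off when it gets dark” becomes a two-line script.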
The first one is the learning round, the 2nd is the “learned” round. When it knows what to do. It goes like the clappers. I especially like the way it does a fancy little 1/2 spin when it’s finished.
I think these mouse-races are vaguely interesting – because they demonstrate something… that as usual, I can’t quite put my finger on… something to do with the difference between bits and atoms.
I’m drifting (or at least attempting to drift) from programming to micro-industry. From bits to atoms… and one of the things that is really striking is the shift in headspace required. With bits, once you learn something it’s done. “To Name Something Is To Have Done With It”. Once you learn something, you can abstract it away… “It Is Known” – which means you never have to think about it again.
Doesn’t work so well with atoms. You can nearly do it… with the CNC end of things – which is (in case you were wondering) to do with the purity and uniformity and “unnaturalness” of the materials that are used… but I’ve been making those golden mean calipers for 2.5 years now, and I’m still learning techniques etc. One of the revolutionary tectonic undercurrents of the current age is the drive to turn hardware problems into software problems. That is what CNC (in its myriad forms) does. Software is easier than hardware – in part because of the abstraction thing. Once something is learned, it is known, and can be given a name and “invoked” but never thought of again. Software has a bias towards single-iteration learning.
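The calipers themselves are a neat example of the bits side of that divide: the geometry is one line of maths – split a length so the two parts are in the golden ratio φ ≈ 1.618 – learned once and done with, while the making of the physical thing goes on for years. The sketch below just checks the proportion (the 100mm length is an arbitrary example).

```python
# The golden mean calipers divide a span so that the two parts are in
# the golden ratio, phi = (1 + sqrt(5)) / 2 ≈ 1.618. A quick check that
# the split really does come out in that proportion.

phi = (1 + 5 ** 0.5) / 2

def golden_split(length):
    """Split a length into (long, short) parts in the golden ratio."""
    long_part = length / phi
    return long_part, length - long_part

long_part, short_part = golden_split(100.0)  # e.g. a 100mm span
ratio = long_part / short_part               # comes out as phi itself
```

The nice property (and the reason the calipers only need one pivot geometry) is that long/short equals whole/long – the ratio is self-similar, so the same proportion holds at every scale you open them to.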
I think there might also be some relevance to AI – as in machine intelligence that learns as it goes – from feedback that it gets from somewhere. My new Niece is a type of AI… a couple of months old… and it’ll take years of full-time training to get her to work properly. There’s a slightly older model – a couple of years… which is a vast improvement on the new one, but which still has soooo many years before she becomes fully-functional, that it’s bewildering. It’s a process of massive multi-iteration learning. A huge programming job.
So anyway, that new mouse is learning how to do something that requires a single iteration. Normal reality is not so accommodating – and is so complicated and unreliable that it will probably take an AI to negotiate it… or a mix of hard-coding and AI (which is what we meat-bots tend to have). The alternative is controlling the environment (the art of politics) – which is what Lego is, and laser-cut perspex is and what this maze is and so on. Existing industrial robots.
Maybe that’s another huge tectonic undercurrent of the robotics revolution – creating systems with reduced needs for controlled environments. Until that becomes the norm, the creation of controlled environments/inputs is probably going to be at least as big as the creation of the machines that will operate within them.