In a recent mock battle between two armored brigades (“Red” and “Blue”) in the Chinese Army, the Red Army fell victim to a virus attack that erased all of its resupply orders.
“During the exercise, the Red Army basic command post, command and control station, received information from the main attack force that 3/4 of their ammunition had been depleted. A resupply order was immediately sent to the rear command post. However, after transmission, the order form appeared blank.”
Follow-up requests for ammunition were answered with the response that the request had already been processed. The Red Army eventually lost the exercise once its ammunition ran out. It makes you wonder whether all the money we’re pouring into the latest military gadgets could be undone by a single programmer writing a virus for a few thousand dollars.
It’s crazy to think that an army could be waylaid by a computer virus, but with our increasing reliance on technology for better and more efficient armies, it was only a matter of time. You may have heard that when Russia invaded northern Georgia, it preceded the attack by hacking Georgian systems and flooding Georgian government sites, shutting them down. There’s no doubt that cyber attacks are now part of a nation’s war chest. This is the future of war.
Although Google finally got approval for its voice recognition upgrade for the iPhone, released earlier this week, it has run into some snags overseas. Not download problems, but more of a language barrier.
Although the voice recognition feature has gotten some amazing feedback here in the US, people in the United Kingdom have some serious issues with the update. Mainly, the fact that it can’t understand their thick accents. According to the Daily Telegraph, “The free application, which allows iPhone owners to use the Google search engine with their voice, mistook the word ‘iPhone’ variously for ‘sex,’ ‘Einstein’ and ‘kitchen sink.’” It seems British accents are exposing the limits of current voice recognition technology. It makes you wonder whether people will have to put on a North American accent until voice recognition can handle the UK’s varied regional accents.
Will there be a Universal Voice Recognition Voice?
The video you see here is of a robot made by MobileRobots.com using the MobileRanger Stereo Vision System. “MobileRanger stereovision systems are top-of-the-line instruments for measuring depth for demanding applications such as mobile robot navigation, people tracking, gesture recognition, targeting, 3D surface visualization and advanced human computer interaction.” You can see how objects at different ranges are represented by different colors (see my hand?). Very cool.
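For the curious, the core math behind any stereo vision rig is simple triangulation: the closer an object is, the more its position shifts (its “disparity”) between the left and right camera images. Here’s a minimal Python sketch of that relationship; the focal length and baseline values are made-up illustrative numbers, not MobileRanger’s actual specs.

```python
# Stereo triangulation sketch: depth Z = f * B / d, where f is the focal
# length in pixels, B is the distance between the two cameras (baseline),
# and d is the disparity (pixel shift of the object between the images).
# Parameters below are illustrative, not real MobileRanger specs.

def depth_from_disparity(disparity_px, focal_length_px=500.0, baseline_m=0.12):
    """Return depth in meters for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A nearby hand shifts a lot between the two images; a far wall barely moves:
print(depth_from_disparity(100))  # ~0.6 m (near: large disparity)
print(depth_from_disparity(10))   # ~6.0 m (far: small disparity)
```

The color-coded range image in the video is essentially this computation run at every pixel, with depth mapped to hue.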
Above you see a photo from Boston Engineering’s display. What you see is a robotic fish they hope to build in the near future (sorry, no prototypes yet). I’m going to stay in contact with these guys on the project, since it’s a pretty cool concept that could be built fairly quickly with the latest technology (the fact that they’re basing it on a tuna is proof enough that this thing will be fast and powerful).
Dr. Sebastian Thrun, Professor of Computer Science and Electrical Engineering at Stanford University, where he directs the Stanford Artificial Intelligence Laboratory, went over the progress his team has made in developing a self-driving vehicle at RoboDev in Santa Clara today. He showed some incredible video of cars smashing into obstacles (sometimes even seeking out other cars to smash into) but ended with videos of their latest vehicle successfully navigating slowly around other moving cars.
The great thing about his presentation was his appeal not to the side that wants self-driving cars, but to a side we can all agree with — saving energy, lives, and time.
In terms of saving energy, Dr. Thrun explained that 22% of the nation’s energy consumption goes to cars. The average car is also in use only about 10% of the day, sitting idle the other 90%. If self-driving cars could be developed, one car could serve multiple people. “You could be dropped off at work and then send the car back home to pick up your wife.” The added safety of autonomous driving would also improve gas mileage: removing the weight of now-redundant safety features (airbags, reinforced steel) would increase fuel efficiency by 30%. (It should also be noted that convoys reduce energy consumption by 11%-17%.)
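For fun, here’s the back-of-envelope arithmetic behind those figures (the percentages come from the talk; the way I compound them is my own rough illustration, not Dr. Thrun’s analysis):

```python
# Back-of-envelope arithmetic using the talk's percentages. The compounding
# below is my own rough illustration, not Dr. Thrun's analysis.

car_share_of_us_energy = 0.22   # cars' share of national energy use
fraction_of_day_in_use = 0.10   # a car works only ~10% of the day
idle_fraction = 1 - fraction_of_day_in_use   # idle the other 90%

efficiency_gain = 0.30          # from shedding airbags, reinforced steel
lighter_share = car_share_of_us_energy * (1 - efficiency_gain)

convoy_savings = 0.17           # upper end of the 11%-17% convoy figure
best_case = lighter_share * (1 - convoy_savings)

print(f"idle: {idle_fraction:.0%}")          # idle: 90%
print(f"lighter cars: {lighter_share:.1%}")  # lighter cars: 15.4%
print(f"plus convoys: {best_case:.1%}")      # plus convoys: 12.8%
```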
Robo Dev 2008 has kicked off with a free breakfast (yay!) and not free Wi-Fi (son of a…). Yep, in the heart of Silicon Valley and I had to shell out $13 to get internet for the day. But I’m not bitter (I’ll slash some tires on the way home today).
The first speaker of the day is Tandy Trower, General Manager of the Microsoft Robotics Group. He started off by saying, “It’s hard for me to answer what exactly a robot is.” His presentation is about collaborating with partners to develop software to run robots.
He showcased the Microsoft Robotics Studio which is a downloadable program that lets people design and program virtual robots. “Virtual robots are programmed the same way physical robots are.” His hope is that designers and programmers will build robots on their computers which can then be built in the real world.
By the way, although Tandy said he doesn’t expect robots like the one above to be commonplace for another ten years, this one should hit the market next year for around $3,000. Anyone have spare pocket change?
Interesting Robot Facts
1. Its wheels are ringed with smaller rollers, letting the tires move in any direction regardless of the robot’s orientation.
2. It runs on Windows XP and dances to Britney Spears (ha).
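That first fact describes what roboticists call an omnidirectional (mecanum) drive. Here’s a hypothetical sketch of the standard wheel-speed mixing formula, showing how a desired motion in any direction maps to four wheel speeds (sign conventions vary by robot; this is illustrative, not this robot’s actual firmware):

```python
# Mecanum-drive mixing sketch (illustrative; sign conventions differ
# between robots). Each wheel is ringed with angled rollers, so combining
# the four wheel speeds below moves the chassis in any direction
# without it having to turn first.

def mecanum_wheel_speeds(vx, vy, omega):
    """Map desired body motion (forward vx, sideways vy, rotation omega)
    to (front-left, front-right, rear-left, rear-right) wheel speeds."""
    return (
        vx - vy - omega,  # front-left
        vx + vy + omega,  # front-right
        vx + vy - omega,  # rear-left
        vx - vy + omega,  # rear-right
    )

print(mecanum_wheel_speeds(1, 0, 0))  # (1, 1, 1, 1): all forward, drives straight
print(mecanum_wheel_speeds(0, 1, 0))  # (-1, 1, 1, -1): robot slides sideways
```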
For those interested, Mielle Sullivan and I will be live-blogging the Robo Development Conference and Expo tomorrow (Tuesday) in Santa Clara. Check the site for latest breaking robotic news as well as photos and vids from the conference. Also check out our Twitter updates under usernames jheylin and mielle_s.
Look! A great new App is here for the iPhone! Google has incorporated voice recognition capability into their search, allowing users to speak what they want and get results through their phone. Check out the vid below.
Seems pretty cool, eh? Oh wait, there’s a problem.
Turns out that although Apple was expected to approve the app for a Friday release (Nov. 14th), it has yet to appear. You’d think that this application, coming out first on the iPhone and created by the mega-giant Google, wouldn’t run into any problems. You’d be wrong.
Apple really screwed the pooch on this one. Although Google decided to release it first on the iPhone, this snub could have ramifications down the line between the two companies. The sad thing is, this technology is amazing. The fact that the application was able to pick out “Fahrenheit” shocks me (I have a hard enough time spelling it myself). Combined with all the other issues iPhone app developers have been facing, Apple is starting to look like the “bad guy.”
If there’s one thing movies have shown us, it’s that identifying people through biometrics can be flawed. Blood can be faked (GATTACA), eyes can be removed for retinal scans (Demolition Man), voices can be recorded (Sneakers) and fingerprints can be used from the guard you just used the Vulcan neck-pinch on (Spaceballs).
But have you ever thought of using your veins as an identification device?
The Hitachi Vein ID bounces infrared light off the finger from multiple angles; the light is “partially absorbed by hemoglobin in the veins and the pattern is captured by a camera as a unique 3D finger vein profile.” Vein patterns are believed to be even more distinctive than fingerprints: even identical twins have different vein patterns.
Are veins the answer to biometric data theft concerns?
The great thing about veins is that, since they are located within the body and are invisible to the naked eye, they are incredibly hard to forge. One would need a scan of your vein structure and a working replica of it, something even a crazed evil scientist might have trouble with. On top of this, if someone were to chop off your finger to access your data, the blood would drain out, making vein identification useless (no blood, skinny veins).
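Hitachi hasn’t published its matching algorithm, but biometric systems in general often binarize the captured pattern and compare it against an enrolled template using a normalized Hamming distance. A toy sketch (the bit strings and the threshold are made up for illustration):

```python
# Toy biometric matcher (NOT Hitachi's actual algorithm): compare a
# binarized vein pattern against an enrolled template using the
# normalized Hamming distance (fraction of differing bits).

def hamming_distance(a, b):
    """Fraction of positions where two equal-length bit strings differ."""
    if len(a) != len(b):
        raise ValueError("templates must be the same length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

def is_match(captured, enrolled, threshold=0.2):
    # The threshold trades false accepts against false rejects;
    # 0.2 here is an arbitrary illustrative value.
    return hamming_distance(captured, enrolled) < threshold

enrolled = "1011001110101100"   # stored at enrollment
captured = "1011001010101100"   # live scan with one noisy bit
print(is_match(captured, enrolled))  # True
```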
We here in the US have fallen so far behind the rest of the world in ground-breaking technology (cough! Large Hadron Collider, stem cells, cloning cough!) that even Russia is kicking our butts. Evidence? Here’s a nifty video from our Siberian rivals... er, friends.
What you saw there was a video of HTC’s MAX 4G, a smartphone capable of download speeds of up to 10 Mbps on Russia’s Yota Mobile WiMAX network. “The MAX 4G will support the Yota Video network and the device is capable of displaying up to nine TV channels simultaneously.” On top of the usual bells and whistles (Bluetooth, WiFi, GPS), it also sports a five-megapixel camera and even an FM radio.
What you are seeing is movie magic come to life. Dubbed “g-speak” by its developers, it offers a “combination of gestural i/o, recombinant networking, and real-world pixels” that they say “brings the first major step in computer interface since 1984.” They believe this method of interaction will be far better suited to the “data-intensive” work people increasingly do on their computers (the fact that more than one user can operate a single machine at once speaks volumes here).
The tie-in with Minority Report is no coincidence. One of Oblong’s founders worked as a science advisor on that movie and incorporated much of his earlier work at MIT into its sets. You can see the similarities in the design: a dedicated room, special gloves, even specialized hand gestures that give the whole thing an almost Tai Chi-like feel. You could Zen out while doubling your productivity.