1. Korean Researchers Develop Powerful Head-Mounted Display
Researchers at the Korea Advanced Institute of Science and Technology (KAIST) are launching their answer to Google Glass – a head-mounted display (HMD) named K-Glass.
The hands-free wearable features a 65nm augmented reality chip that KAIST claims is 76 per cent more energy efficient than “other devices”. It delivers 1.22 TOPS (tera-operations per second) peak performance at 250 MHz while consuming 778 mW on a 1.2V power supply. Parallel data processing helps reduce overall power consumption, and this economy allows the device to be worn nearly all day. The device’s processor is based on the Visual Attention Model (VAM), which mimics the human brain’s ability to process visual data. Much as human vision does, VAM automatically singles out the most salient and relevant information about the environment and discards data that does not need to be processed. As a result, the processor can dramatically speed up the computation of complex AR algorithms.
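The saliency-first idea can be illustrated with a minimal sketch. This is not KAIST's actual pipeline; it assumes a crude intensity-contrast saliency measure purely to show how discarding non-salient pixels shrinks the workload handed to later AR stages:

```python
import numpy as np

def saliency_mask(frame, threshold=0.25):
    """Crude saliency: pixels whose intensity deviates strongly
    from the frame mean are treated as 'attention-worthy'."""
    contrast = np.abs(frame - frame.mean())
    contrast /= contrast.max() + 1e-9          # normalize to [0, 1]
    return contrast > threshold

def filter_frame(frame, threshold=0.25):
    """Zero out non-salient pixels so downstream AR steps
    (feature extraction, matching) touch only the salient subset."""
    mask = saliency_mask(frame, threshold)
    return frame * mask, mask.mean()           # filtered frame, fraction kept

rng = np.random.default_rng(0)
frame = rng.random((240, 320))                 # stand-in for a camera frame
filtered, kept = filter_frame(frame)
print(f"processing only {kept:.0%} of pixels")
```

The payoff is the same as VAM's: every pixel the mask drops is a pixel the expensive AR algorithms never have to see.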
KAIST suggests that the technology could be used in restaurants, where the menu could be explained and previewed through 3D sample graphics, or when reading magazines, pulling out featured content and enriching it with online material.
2. NameTag Brings Facial Recognition to Google Glass
Facial recognition for Google Glass has been debated back and forth in the U.S. Congress, and Google has responded by saying it will ban facial recognition apps from Glassware, its marketplace of apps for Glass. However, the developers at FacialNetwork see a great opportunity for Google Glass users – they recently launched a facial recognition app for Glass, called NameTag, hoping that Google will change its policy. The app will also run on Android and iOS smartphones.
The technology allows the user to take a snapshot of a person and send the picture to NameTag’s servers, where it is compared against pictures available on social media sites. If a match is found, the picture is sent back with the person’s name, as well as other personal details, including hobbies, interests, and even their current relationship status. If a criminal record also turns up in public records, it flashes up in big red letters.
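The round trip described above can be sketched as a toy, offline simulation. NameTag's actual service, matching method, and data formats are not public, so the exact-hash "matching" and every field name below are stand-ins for illustration only:

```python
import hashlib

# Toy stand-in for a server-side index: in reality the photo would be
# matched against social-media images by face, not by an exact hash.
PROFILE_DB = {}

def register(photo_bytes, profile):
    """Index a known photo against its public profile details."""
    PROFILE_DB[hashlib.sha256(photo_bytes).hexdigest()] = profile

def lookup(photo_bytes):
    """Simulate the snapshot -> server -> profile round trip."""
    key = hashlib.sha256(photo_bytes).hexdigest()
    profile = PROFILE_DB.get(key)
    if profile is None:
        return {"match": False}
    result = {"match": True, **profile}
    if profile.get("criminal_record"):   # flagged prominently by the app
        result["warning"] = "CRIMINAL RECORD ON FILE"
    return result

register(b"alice.jpg", {"name": "Alice", "interests": ["running"],
                        "relationship_status": "single",
                        "criminal_record": False})
print(lookup(b"alice.jpg"))
print(lookup(b"bob.jpg"))
```

The design point is that a miss returns nothing but `{"match": False}` – the privacy question the article raises is precisely about who gets to be a hit.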
“No longer will social media be limited to the screens of desktops, tablets and smartphones. With the NameTag app running on Google Glass a user can simply glance at someone nearby and instantly see that person’s name, occupation and even visit their Facebook, Instagram or Twitter profiles in real-time,” says NameTag’s creator Kevin Alan Tussy.
Developers also plan to allow users to have one profile that is seen during business hours and another that is seen in social situations. People will also be able to choose whether or not they want their name and information displayed to others. Would you want it?
3. PUMA Showcases Footwear with VISAR Augmented Reality App
PUMA has launched PUMA VISAR, an application for iPhone and Android that uses augmented reality to showcase the latest Mobium v2 running shoe and evoPOWER football boot. To see them, users hold their phone’s camera over a magazine advertisement, in-store display or schoolyard poster, which brings each shoe to life in 3D and highlights its various technical features. The app was created by South African digital agency Gloo.
Gloo CEO Pete Case comments: “We aimed to showcase the technology behind the brand’s innovative performance shoes using an equally innovative digital communication. This took the form of an interactive app that pushes the limits of augmented reality technology on mobile devices. The app creates a digital layer that brings the footwear innovations to life in 3D. Using the capabilities of smart mobile devices, this allows consumers to experience the performance benefits of these features in a fun and interactive environment.”
Puma is giving consumers a chance to win one pair of Mobium Elites and one pair of evoPOWER football boots simply for downloading the free application from the iTunes App Store or Google Play. The competition is open to South Africans only and closes on Friday 28 February 2014.
4. Google Glass Finds Professional Uses
– The police department in Byron, Georgia, became the first police department in the U.S. to use Glass when it partnered with Georgia Tech and the surveillance-technology company CopTrax to test the device for a day in September. But it won’t be the last. The latest to do so is the New York City Police Department, which began testing two pairs of Glass in December. Google Glass could be a useful assisting tool, able to improve the efficiency of police work as well as monitor the conduct of police officers. “The devices have not been deployed in any actual field or patrol operations, but rather are being assessed as to how they may be appropriately utilized or incorporated into any existing technology-based functions,” says NYPD deputy commissioner Stephen Davis.
– Another professional implementation of Google Glass was shown by Virgin Atlantic, which equipped some of its concierge staff with the device at London’s Heathrow airport in an effort to make its airline service more glamorous and personalized. Employees wearing Google Glass assist business-class passengers through the check-in process, provide updates about their flight and answer any queries about their destination, such as the local weather and events, or translate information supplied in another language.
5. Qualcomm Releases New Update for Its Vuforia AR SDK
Readers who are developing augmented reality solutions may be interested to learn that Vuforia 2.8 has been released. According to Qualcomm, the updated Vuforia includes extended tracking, which allows game play to continue at a greater distance from the target than before. Newly launched Java APIs make it easier than ever to add Vuforia-enabled experiences to Android apps in Eclipse. Another new feature is the ability to store up to 1 MB of content (e.g. images and 3D models) with each cloud-based target, which makes it easier to build and manage cloud recognition apps. Vuforia lets you create a more responsive end-user experience by using a single request for both recognition and content delivery.
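To make the "single request for recognition and content delivery" idea concrete, here is a toy, in-memory sketch. It is not the Vuforia API – the class, the method names, and the string "fingerprint" standing in for actual image recognition are all assumptions made for illustration:

```python
MAX_METADATA_BYTES = 1_000_000   # Vuforia 2.8 allows up to 1 MB per cloud target

class CloudTargetStore:
    """Toy stand-in for a cloud recognition database: each target image
    is stored together with its content payload (e.g. a 3D model
    reference), so one query returns both the recognition result and
    the content needed to render the AR experience."""
    def __init__(self):
        self._targets = {}

    def add_target(self, target_id, fingerprint, metadata):
        if len(metadata) > MAX_METADATA_BYTES:
            raise ValueError("metadata exceeds the 1 MB per-target limit")
        self._targets[fingerprint] = (target_id, metadata)

    def recognize(self, fingerprint):
        """Single round trip: recognition and content delivery together."""
        hit = self._targets.get(fingerprint)
        if hit is None:
            return None
        target_id, metadata = hit
        return {"target_id": target_id, "metadata": metadata}

store = CloudTargetStore()
store.add_target("magazine_ad", fingerprint="f3a9",
                 metadata=b'{"model": "shoe_v2.obj"}')
print(store.recognize("f3a9"))
print(store.recognize("0000"))
```

Bundling the content with the recognition response is what saves the second network round trip, which is where the responsiveness gain comes from.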