Andrea Gaggioli blogged about the Pocket Supercomputer by Accenture. The original article was published by NewScientistTech:

Live video footage is fed from the handset to a central server, which rapidly matches on-screen objects to images previously entered into a database. The server then finds relevant information and sends it back to the user (…) The central server uses an algorithm called the Scale-Invariant Feature Transform to match objects. The algorithm uses hundreds or thousands of reference points, corresponding to physical features such as edges, corners or lettering, to find a match. The process works no matter how the object is oriented, but objects must first be carefully imaged and entered into the central database.

[youtube:http://www.youtube.com/watch?v=PkqUjQj8H3M]
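The matching step described above boils down to comparing feature descriptors from the live frame against descriptors stored in the database. A minimal sketch of that idea in Python, using synthetic stand-in descriptors rather than real SIFT features extracted from images (the ratio test shown is Lowe's standard criterion for accepting a match; the descriptor data and dimensions here are illustrative, not Accenture's actual system):

```python
import math
import random

def euclidean(a, b):
    # Plain Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(query_descs, db_descs, ratio=0.75):
    """For each query descriptor, find its two nearest database
    descriptors and accept the match only if the closest one is
    clearly better than the runner-up (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query_descs):
        dists = sorted((euclidean(q, d), di) for di, d in enumerate(db_descs))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))
    return matches

# Synthetic 8-D "descriptors": a database of random vectors, with
# noisy copies of entries 3, 17 and 42 playing the role of the
# descriptors extracted from the live camera frame.
random.seed(0)
db = [[random.random() for _ in range(8)] for _ in range(50)]
queries = [[v + random.gauss(0, 0.01) for v in db[i]] for i in (3, 17, 42)]

print(ratio_test_matches(queries, db))
```

In a real pipeline the descriptors would come from a SIFT keypoint detector run on the camera frames, and the database side would use an approximate nearest-neighbour index rather than this brute-force scan, but the accept/reject logic is the same.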

This is certainly a step forward compared to RFID and 2D barcodes such as Semacodes or QR codes. It reminded me of Atom tags, which could recognize existing logos and also used server-side shape analysis and pattern recognition.

[youtube:http://www.youtube.com/watch?v=B_7Yy-zQiRo]

Unlike these two techniques, existing 2D barcodes are not human-readable.