This is based on traditional computer vision techniques developed by the Ishikawa-Komuro Lab: a camera attachment tracks the motion of a user's finger, which is then translated into on-screen commands.
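The track-then-translate idea could be sketched roughly as follows. This is a minimal, hypothetical illustration — the function name, thresholds, and gesture set are assumptions for clarity, not the lab's actual implementation:

```python
def classify_motion(positions, threshold=20):
    """Map a sequence of tracked (x, y) fingertip positions to a command.

    A small net displacement is treated as a tap; otherwise the dominant
    axis of movement determines the swipe direction.
    """
    if len(positions) < 2:
        return "none"
    dx = positions[-1][0] - positions[0][0]
    dy = positions[-1][1] - positions[0][1]
    if abs(dx) < threshold and abs(dy) < threshold:
        return "tap"
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# A fingertip drifting steadily to the right is read as a rightward swipe.
print(classify_motion([(10, 50), (40, 52), (90, 55)]))  # swipe_right
```

In a real system the position sequence would come from per-frame fingertip detection on the camera feed; the interesting engineering is in that tracking step, which the lab's high-speed vision work addresses.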
While this is still just a research project, it will be interesting to see whether the hype around Project Natal carries over into the mobile space as well. Evidence such as this certainly suggests it might.