What is Meta’s amazing Surface Touch Typing Technology highlighted by Mark Zuckerberg?
Meta claims it can turn “any flat surface” into a virtual keyboard capable of achieving typing speeds up to 120 WPM.
Have you ever imagined typing on any smooth surface just by touching it? Did you know you can now type even without a keyboard? Research into surface touch typing technology is ongoing, and Meta appears to have recently crossed a milestone.
According to a report in The Verge, Meta CEO Mark Zuckerberg revealed an impressive typing speed of 100 words per minute (WPM) while wearing a virtual reality (VR) headset. Even more remarkable is Meta’s claim that the technology can turn “any flat surface” into a virtual keyboard capable of speeds of up to 120 WPM. This marks a significant leap over Meta’s earlier efforts: its 2020 “punch type” method averaged just 12 WPM, while its “Surface Touch Typing” work from the same year reached an average of 73 WPM.
Surface Touch Typing Technology
Meta’s latest development demonstrates its dedication to advancing text entry for VR and augmented reality (AR) environments. According to a Meta blog post, it is a groundbreaking text decoding technique that enables touch typing on any flat surface, eliminating the need for a physical keyboard or capacitive touch interface. Touch typing relies on hand movement captured by hand tracking technology; this motion data is decoded directly into text characters, resulting in a smooth and efficient typing experience.
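Meta has not published the details of this pipeline, but the idea of turning a stream of hand-pose features into per-keystroke character scores can be sketched with a toy causal (temporal) convolution. Everything below, including the feature dimensions and weights, is a hypothetical illustration, not Meta’s actual model:

```python
def causal_conv1d(x, w):
    """Toy causal 1-D convolution over a pose-feature sequence.
    x: T frames, each a list of D pose features.
    w: K taps, each a D x C weight matrix (K past frames -> C char scores).
    The output at frame t depends only on frames t-K+1 .. t.
    """
    K, D, C = len(w), len(w[0]), len(w[0][0])
    padded = [[0.0] * D for _ in range(K - 1)] + list(x)  # left-pad: causal
    out = []
    for t in range(len(x)):
        scores = [0.0] * C
        for k in range(K):  # look only at current and past frames
            for d in range(D):
                for c in range(C):
                    scores[c] += padded[t + k][d] * w[k][d][c]
        out.append(scores)
    return out

# Toy run: 4 frames of 2-D pose features, a 2-tap kernel, 3 character classes
x = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [1.0, 1.0]]
w = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],   # tap applied to frame t-1
     [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2]]]   # tap applied to frame t
scores = causal_conv1d(x, w)
print(len(scores), len(scores[0]))  # 4 3
```

The causal padding is the point of the sketch: each output frame can be computed as motion data streams in, which is what makes this style of model usable for live decoding.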
Meta uses a temporal convolutional network as a motion model that translates hand motion, represented as a sequence of hand pose features, into text input. A major challenge for Meta’s researchers was accounting for the imprecise, noisy finger movements that occur when typing without the haptic feedback of physical keys. To overcome this, the company integrated a language model and used a beam search algorithm to intelligently combine the motion and language models. This fusion enables accurate decoding of text even from ambiguous or erratic hand movements.
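The fusion step can be illustrated with a minimal beam search that weighs per-keystroke character probabilities from a motion model against a bigram language model. The probabilities, bigram table, and weighting below are made up for the example; Meta’s actual models and scoring are not public:

```python
import math

def beam_search(motion_probs, lm, alphabet, beam_width=3, lm_weight=0.5):
    """Fuse a motion model with a bigram language model via beam search.
    motion_probs: one dict per keystroke, mapping char -> P(char | motion).
    lm: dict mapping (prev_char, char) -> P(char | prev_char); "^" = start.
    Returns the highest-scoring decoded string.
    """
    beams = [("", 0.0)]  # (decoded text, log score)
    for step in motion_probs:
        candidates = []
        for text, score in beams:
            prev = text[-1] if text else "^"
            for ch in alphabet:
                p_motion = step.get(ch, 1e-9)          # floor avoids log(0)
                p_lm = lm.get((prev, ch), 1e-9)
                s = score + math.log(p_motion) + lm_weight * math.log(p_lm)
                candidates.append((text + ch, s))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# The motion model slightly prefers "r" for the first keystroke, but the
# language model knows words rarely start with "rh", so "the" wins.
motion = [{"t": 0.45, "r": 0.55},
          {"h": 0.90, "r": 0.10},
          {"e": 0.90, "t": 0.10}]
lm = {("^", "t"): 0.8, ("^", "r"): 0.1,
      ("t", "h"): 0.9, ("r", "h"): 0.05,
      ("h", "e"): 0.9}
print(beam_search(motion, lm, "ther"))  # the
```

This is the essence of the described fusion: an ambiguous finger movement that the motion model alone would misread gets corrected by linguistic context.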
To validate the approach, Meta collected a dataset from 20 touch typists and subjected the model to various benchmarks, including contact-based text decoding and traditional physical keyboard typing. The results speak volumes: the proposed method leverages continuous hand pose data to outperform contact-based techniques in text decoding accuracy. An offline study demonstrated parity with typing on a physical keyboard, achieving a speed of 73 WPM with an impressive 2.38% uncorrected error rate.