Last night, I hacked together a quick proof-of-concept video to demonstrate the multitouch capabilities of the G1 to those who aren't inclined to read the debug logs.
Rather than go the proper route, where multiple-finger events would be pushed up from the touchscreen driver into the Java layer where Android applications live and play, another #android member (ionstorm, who also prompted me to look into multitouch in the first place) suggested simply reading the touchscreen event data from a file in the Java layer: a quick way to skip the hard work and demonstrate the possibility visually.
I will clean up and post all the source code and instructions needed to do this on your own phone, so that more people can start playing with this.

For now, the brief rundown: I modified the Synaptics touchscreen driver to create a character device at /dev/tsout and dump the touchscreen events to it. I made /dev/tsout readable by the Java layer, then modified a fingerpaint example program that Google has posted to draw the circles. A thread in the app constantly polls that file; when it sees data, it fires off an update event to the UI thread, which scales the x and y positions from the coordinate space of the touchscreen driver into the coordinate space of the Android canvas and draws a small circle there. Each finger gets its own color to make them easier to tell apart.
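Until I post the real source, here is a rough sketch of what that polling-and-scaling step looks like. Everything specific in it is an assumption: the `TouchDemo` class, the raw coordinate maxima, and the event layout read from /dev/tsout are all placeholders, since those details live in the modified driver.

```java
import java.io.FileInputStream;
import java.io.IOException;

public class TouchDemo {
    // Assumed raw coordinate range reported by the touchscreen driver;
    // the real maxima come from the Synaptics driver source.
    static final int RAW_MAX_X = 1024;
    static final int RAW_MAX_Y = 1024;

    /** Scale a raw touchscreen coordinate into canvas pixels. */
    static float toCanvas(int raw, int rawMax, int canvasSize) {
        return raw * (float) canvasSize / rawMax;
    }

    /** Stand-in for the UI-thread callback that draws a circle. */
    interface EventSink { void onPoint(int finger, float x, float y); }

    /**
     * Background polling loop as described above: block on reads from
     * /dev/tsout and hand each scaled point to the UI. In the real app
     * this would post through a Handler to trigger a redraw.
     */
    static void pollLoop(String devPath, int canvasW, int canvasH, EventSink ui)
            throws IOException {
        byte[] buf = new byte[8]; // event record size is driver-defined (assumption)
        try (FileInputStream ts = new FileInputStream(devPath)) {
            while (ts.read(buf) == buf.length) {
                // Hypothetical layout: finger index, then 16-bit raw x and y.
                int finger = buf[0];
                int rawX = ((buf[1] & 0xff) << 8) | (buf[2] & 0xff);
                int rawY = ((buf[3] & 0xff) << 8) | (buf[4] & 0xff);
                ui.onPoint(finger,
                        toCanvas(rawX, RAW_MAX_X, canvasW),
                        toCanvas(rawY, RAW_MAX_Y, canvasH));
            }
        }
    }
}
```

The scaling is just a linear map from the driver's range onto the canvas size; the interesting part is that the finger index lets the UI pick a different paint color per finger.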
The performance of this kind of polling is not that great, and there are a few quirks to the multitouch aspect of the G1. But the most important thing I see is this: even though it might not be a perfect multitouch screen capable of detecting millions of discrete touches, just being able to track two fingers gives me the only thing I really want... the ability to one day pinch to zoom in the browser and get rid of those silly magnifying glasses.