Friday, June 27, 2008

Touch User Interface Resources - References



Papers

A direct texture placement and editing interface
The creation of most models used in computer animation and computer games requires the assignment of texture coordinates, texture painting, and texture editing. We present a novel approach for texture placement and editing based on direct manipulation of textures on the surface. Compared to conventional tools for surface texturing, our system combines UV-coordinate specification and texture editing into one seamless process, reducing the need for careful initial design of parameterization and providing a natural interface for working with textures directly on 3D surfaces. A combination of efficient techniques for interactive constrained parameterization and advanced input devices makes it possible to realize a set of natural interaction paradigms. The texture is regarded as a piece of stretchable material, which the user can position and deform on the surface, selecting arbitrary sets of constraints and mapping texture points to the surface; in addition, the multi-touch input makes it possible to specify natural handles for texture manipulation using point constraints associated with different fingers. Pressure can be used as a direct interface for texture combination operations. The 3D position of the object and its texture can be manipulated simultaneously using two-hand input.
A flexible full-body tactile sensor of low cost and minimal connections
In this paper, we introduce a new tactile sensor made of textiles. It has the basic structure of a matrix tactile sensor but uses only four connections to measure the magnitude and position of applied pressure, similar to the analog tactile sensors widely used in touch screens and touch pads. The structure and model of the sensor are introduced. Although we could not map the exact contour of the contact area due to the limited connections, the maximum number of touch points in each direction could be detected. The sensor could serve as a full-body tactile sensor for a robot, a smart carpet for a security room, and so on.
A malleable surface touch interface
A modular expandable tactile sensor using flexible polymer
In this paper, we have proposed and demonstrated a modular expandable tactile sensor using PDMS elastomer. A sensor module consists of 16 × 16 tactile cells with 1 mm spatial resolution, similar to that of human skin, and interconnection lines for expandability. Tactile response of a cell has been measured with a force gauge. Initial capacitance of each cell is about 180 fF. The fabricated cell shows a sensitivity of 3%/mN within the full scale range of 40 mN (250 kPa). Four tactile modules have been successfully attached by using ACP to demonstrate expandability. Various tactile images have been successfully captured by one sensor module as well as the expanded 32 × 32 modular array sensors.
A multi-touch three dimensional touch-sensitive tablet
A prototype touch-sensitive tablet is presented. The tablet's main innovation is that it is capable of sensing more than one point of contact at a time. In addition to being able to provide position coordinates, the tablet also gives a measure of degree of contact, independently for each point of contact. In order to enable multi-touch sensing, the tablet surface is divided into a grid of discrete points. The points are scanned using a recursive area subdivision algorithm. In order to minimize the resolution lost due to the discrete nature of the grid, a novel interpolation scheme has been developed. Finally, the paper briefly discusses how multi-touch sensing, interpolation, and degree of contact sensing can be combined to expand our vocabulary in human-computer interaction.
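The scanning strategy described above translates naturally into code. Below is a minimal sketch of recursive area subdivision plus centroid interpolation, assuming hardware that can answer "is any point in this rectangular region in contact?" in a single query (simulated here with a NumPy pressure grid); the grid size, threshold, and 3x3 interpolation window are illustrative choices, not taken from the paper.

```python
import numpy as np

THRESHOLD = 0.1  # minimum pressure that counts as contact (arbitrary units)

def any_contact(grid, x0, y0, x1, y1):
    # Simulated group query; in hardware this is a single scan operation.
    return (grid[y0:y1, x0:x1] > THRESHOLD).any()

def find_contacts(grid, x0, y0, x1, y1, out):
    """Recursively subdivide, descending only into regions with contact."""
    if not any_contact(grid, x0, y0, x1, y1):
        return                    # empty region: prune this whole branch
    if x1 - x0 == 1 and y1 - y0 == 1:
        out.append((x0, y0))      # single grid point in contact
        return
    mx, my = (x0 + x1 + 1) // 2, (y0 + y1 + 1) // 2
    for qx0, qy0, qx1, qy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                               (x0, my, mx, y1), (mx, my, x1, y1)):
        if qx1 > qx0 and qy1 > qy0:
            find_contacts(grid, qx0, qy0, qx1, qy1, out)

def interpolate(grid, x, y):
    # Pressure-weighted centroid over a 3x3 neighbourhood, recovering
    # sub-grid resolution in the spirit of the paper's interpolation scheme.
    y0, y1 = max(y - 1, 0), min(y + 2, grid.shape[0])
    x0, x1 = max(x - 1, 0), min(x + 2, grid.shape[1])
    ys, xs = np.mgrid[y0:y1, x0:x1]
    w = grid[y0:y1, x0:x1]
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Two synthetic contacts on a 16x16 grid (adjacent active points would be
# merged into blobs in a fuller implementation).
grid = np.zeros((16, 16))
grid[3:5, 2:4] = [[0.4, 0.8], [0.3, 0.6]]
grid[10, 12] = 0.9
points = []
find_contacts(grid, 0, 0, 16, 16, points)
print([interpolate(grid, x, y) for x, y in points])
```
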
A remote control interface for large displays
We describe a new widget and interaction technique, known as a "Frisbee," for interacting with areas of a large display that are difficult or impossible to access directly. A frisbee is simply a portal to another part of the display. It consists of a local "telescope" and a remote "target". The remote data surrounded by the target is drawn in the telescope and interactions performed within it are applied on the remote data. In this paper we define the behavior of frisbees, show unique affordances of the widget, and discuss design characteristics. We have implemented a test application and report on an experiment that shows the benefit of using the frisbee on a large display. Our results suggest that the frisbee is preferred over walking back and forth to the local and remote spaces at a distance of 4.5 feet.
A textile based capacitive pressure sensor
This paper introduces an approach for decoding the pressure information exerted over a broad piece of fabric by means of capacitive sensing. The proposed sensor includes a distributed passive array of capacitors (i.e. an array where no active elements are involved), whose capacitance depends on the pressure exerted on the textile surface, and an electronic system that acquires and processes the subsequent capacitance variations. Capacitors can be made in different ways; in our demonstrator they have been implemented between rows and columns of conductive fibers patterned on the two opposite sides of an elastic synthetic foam. Measurements performed on a prototype have demonstrated the reliability of the approach by detecting pressure images at 3 frames/s and by measuring capacitances as low as hundreds of fF located meters away from the acquisition electronics.
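As a sanity check on the hundreds-of-fF figure, a parallel-plate model shows how compressing the foam raises the capacitance of each row-column crossing. The geometry and permittivity numbers below are illustrative guesses, not from the paper.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(eps_r, area_m2, gap_m):
    """Parallel-plate model: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

# A 5 mm x 5 mm crossing with foam of eps_r ~ 1.5:
# relaxed 2 mm gap vs. pressed 1 mm gap.
for gap in (2e-3, 1e-3):
    c = plate_capacitance(1.5, 25e-6, gap)
    print(f"gap {gap * 1e3:.0f} mm -> {c * 1e15:.0f} fF")
# gap 2 mm -> 166 fF; gap 1 mm -> 332 fF: pressing roughly doubles C here.
```
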
Affordances for manipulation of physical versus digital media on interactive surfaces
This work presents the results of a comparative study in which we investigate the ways manipulation of physical versus digital media are fundamentally different from one another. Participants carried out both a puzzle task and a photo sorting task in two different modes: in a physical 3-dimensional space and on a multi-touch, interactive tabletop in which the digital items resembled their physical counterparts in terms of appearance and behavior. By observing the interaction behaviors of 12 participants, we explore the main differences and discuss what this means for designing interactive surfaces which use aspects of the physical world as a design resource.
AppLens and launchTile: two designs for one-handed thumb use on small devices
We present two interfaces to support one-handed thumb use for PDAs and cell phones. Both use Scalable User Interface (ScUI) techniques to support multiple devices with different resolutions and aspect ratios. The designs use variations of zooming interface techniques to provide multiple views of application data: AppLens uses tabular fisheye to access nine applications, while LaunchTile uses pure zoom to access thirty-six applications. We introduce two sets of thumb gestures, each representing different philosophies for one-handed interaction. We conducted two studies to evaluate our designs. In the first study, we explored whether users could learn and execute the AppLens gesture set with minimal training. Participants performed more accurately and efficiently using gestures for directional navigation than using gestures for object interaction. In the second study, we gathered user reactions to each interface, as well as comparative preferences. With minimal exposure to each design, most users favored AppLens's tabular fisheye interface.
Cooperative gestures: multi-user gestural interactions for co-located groupware
Multi-user, touch-sensing input devices create opportunities for the use of cooperative gestures -- multi-user gestural interactions for single display groupware. Cooperative gestures are interactions where the system interprets the gestures of more than one user as contributing to a single, combined command. Cooperative gestures can be used to enhance users' sense of teamwork, increase awareness of important system events, facilitate reachability and access control on large, shared displays, or add a unique touch to an entertainment-oriented activity. This paper discusses motivating scenarios for the use of cooperative gesturing and describes some initial experiences with CollabDraw, a system for collaborative art and photo manipulation. We identify design issues relevant to cooperative gesturing interfaces, and present a preliminary design framework. We conclude by identifying directions for future research on cooperative gesturing interaction techniques.
CoR2Ds
We present a new popup widget, called CoR2Ds (Context-Rooted Rotatable Draggables), designed for multi-user direct-touch tabletop environments. CoR2Ds are interactive callout popup objects that are visually connected (rooted) at the originating displayed object by a semi-transparent colored swath. CoR2Ds can be used to bring out menus, display drilled-down or off-screen ancillary data such as metadata and attributes, as well as instantiate tools. CoR2Ds can be freely moved, rotated, and re-oriented on a tabletop display surface by fingers, hands, pointing devices (mice) or marking devices (such as a stylus or light pen). CoR2Ds address five issues for interaction techniques on interactive tabletop display surfaces: occlusion, reach, context on a cluttered display, readability, and concurrent/coordinated multi-user interaction. In this paper, we present the design, interaction and implementation of CoR2Ds. We also discuss a set of current usage scenarios.
Direct-touch vs. mouse input for tabletop displays
We investigate the differences -- in terms of both quantitative performance and subjective preference -- between direct-touch and mouse input for unimanual and bimanual tasks on tabletop displays. The results of two experiments show that for bimanual tasks performed on tabletops, users benefit from direct-touch input. However, our results also indicate that mouse input may be more appropriate for a single user working on tabletop tasks requiring only single-point interaction.
Distant freehand pointing and clicking on very large, high resolution displays
We explore the design space of freehand pointing and clicking interaction with very large high resolution displays from a distance. Three techniques for gestural pointing and two for clicking are developed and evaluated. In addition, we present subtle auditory and visual feedback techniques to compensate for the lack of kinesthetic feedback in freehand interaction, and to promote learning and use of appropriate postures.
DTLens: multi-user tabletop spatial data exploration
Supporting groups of individuals exploring large maps and design diagrams on interactive tabletops is still an open research problem. Today's geospatial, mechanical engineering and CAD design applications are mostly single-user, keyboard and mouse-based desktop applications. In this paper, we present the design of and experience with DTLens, a new zoom-in-context, multi-user, two-handed, multi-lens interaction technique that enables group exploration of spatial data with multiple individual lenses on the same direct-touch interactive tabletop. DTLens provides a set of consistent interactions on lens operations, thus minimizes tool switching by users during spatial data exploration.
Dual touch: a two-handed interface for pen-based PDAs
Earpod: eyes-free menu selection using touch input and reactive audio feedback
We present the design and evaluation of earPod: an eyes-free menu technique using touch input and reactive auditory feedback. Studies comparing earPod with an iPod-like visual menu technique on reasonably-sized static menus indicate that they are comparable in accuracy. In terms of efficiency (speed), earPod is initially slower, but outperforms the visual technique within 30 minutes of practice. Our results indicate that earPod is potentially a reasonable eyes-free menu technique for general use, and is a particularly exciting technique for use in mobile device interfaces.
Enabling interaction with single user applications through speech and gestures on a multi-user tabletop
Co-located collaborators often work over physical tabletops with rich geospatial information. Previous research shows that people use gestures and speech as they interact with artefacts on the table and communicate with one another. With the advent of large multi-touch surfaces, developers are now applying this knowledge to create appropriate technical innovations in digital table design. Yet they are limited by the difficulty of building a truly useful collaborative application from the ground up. In this paper, we circumvent this difficulty by: (a) building a multimodal speech and gesture engine around the DiamondTouch multi-user surface, and (b) wrapping existing, widely-used off-the-shelf single-user interactive spatial applications with a multimodal interface created from this engine. Through case studies of two quite different geospatial systems -- Google Earth and Warcraft III -- we show the new functionalities, feasibility and limitations of leveraging such single-user applications within a multi-user, multimodal tabletop. This research informs the design of future multimodal tabletop applications that can exploit single-user software conveniently available in the market. We also contribute (1) a set of technical and behavioural affordances of multimodal interaction on a tabletop, and (2) lessons learnt from the limitations of single-user applications.
FlowMenu: combining command, text, and data entry
Fluid DTMouse: better mouse support for touch-based interactions
Although computer mice have evolved physically (i.e., new form factors, multiple buttons, scroll-wheels), their basic metaphor remains the same: a single-point of interaction, with modifiers used to control the interaction. Many of today's novel input devices, however, do not directly (or easily) map to mouse interactions. For example, when using one's finger(s) or hand directly on a touchable display surface, a simple touch movement could be interpreted as either a mouse-over or a drag, depending on whether the left mouse button is intended to be depressed at the time. But how does one convey the state of the left mouse button with a single touch? And how does one fluidly switch between states? The problem is confounded by the lack of precision input when using a single finger as the mouse cursor, since a finger has a much larger "footprint" than a single pixel cursor hotspot. In this paper we introduce our solution, Fluid DTMouse, which has been used to improve the usability of touch tables with legacy (mouse-based) applications. Our technique is applicable to any direct-touch input device that can detect multiple points of contact. Our solution solves problems of smoothly specifying and switching between modes, addressing issues with the stability of the cursor, and facilitating precision input.
GIA: design of a gesture-based interaction photo album
This paper describes a gesture-based interaction photo album (GIA) device which stores and manages digital images. It provides a natural interface—as if you are turning the pages of a photo album using gestural input on a touch screen. We feel that emotional satisfaction is more important than efficiency in this kind of task and requires a more natural user interface that is easily learned by novices. GIA demonstrates an innovative convergence between the digital and the analogue in this respect.
Glimpse: a novel input model for multi-level devices
Gummi: a bendable computer
Gummi is an interaction technique and device concept based on physical deformation of a handheld device. The device consists of several layers of flexible electronic components, including sensors measuring deformation of the device. Users interact with this device by a combination of bending and 2D position control. Gummi explores physical interaction techniques and screen interfaces for such a device. Its graphical user interface facilitates a wide range of interaction tasks, focused on browsing of visual information. We implemented both hardware and software prototypes to explore and evaluate the proposed interaction techniques. Our evaluations have shown that users can grasp Gummi's key interaction principles within minutes. Gummi demonstrates promising possibilities for new interaction techniques and devices based on flexible electronic components.
Gummi: user interface for deformable computers
We show interaction possibilities and a graphical user interface for deformable, mobile devices. WIMP (windows, icons, mouse, pointer) interfaces are not practical on mobile devices. Gummi explores an alternative interaction technique based on bending of a handheld device.
High-Speed Pressure Sensor Grid for Humanoid Robot Foot
This paper describes a 32 × 32 matrix-scan type high-speed pressure sensor for the feet of humanoid robots that has a 1 kHz sampling rate. The matrix scan method has a problem of interference caused by bypass current. To resolve this problem, we suggest a novel method using a very thin conductive rubber. We adopted a very thin (0.6 mm) force-sensing conductive rubber sheet for high-speed sensing. Each sensing area is 4.2 × 7.0 mm and can measure vertical force of approximately 0.25-20 N. The walking cycle of a humanoid robot, like that of a human, is about 0.4-0.8 s, and the dual-leg phase is about 0.1-0.15 s. The target application of the sensor is biped walk stabilization, so high-speed input is important. A matrix scan circuit is connected to the sensor, and the system runs at 1 kHz with 14-bit resolution on a 4.2 × 7.0 mm grid of 32 × 32 points; the sensor is the same size as the humanoid robot foot (135 × 228 mm). The system achieves high speed because of the very thin conductive rubber and simultaneous measurement. The sensor system, the novel scan method, and evaluation results are described.
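A row-by-row scan with grounded idle rows (one common way of suppressing the bypass currents the paper describes) might look like the sketch below. The hw object and its methods are hypothetical stand-ins for the real drive and ADC electronics, not an API from the paper.

```python
ROWS, COLS = 32, 32

def scan_frame(hw):
    """Capture one full 32x32 pressure frame.

    `hw` is a hypothetical driver object; its methods stand in for the
    row-select and column-ADC hardware.
    """
    frame = []
    for r in range(ROWS):
        hw.select_row(r)        # energize row r...
        hw.ground_idle_rows(r)  # ...and ground the rest, limiting the
                                # bypass currents through unselected cells
        frame.append(hw.read_all_columns())  # simultaneous 14-bit reads
    return frame                # ROWS x COLS raw pressure samples
```

At a 1 kHz frame rate, a 32-row scan leaves only about 31 µs per row, which is why reading all columns simultaneously, rather than multiplexing them one at a time, matters here.
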
HybridPointing: fluid switching between absolute and relative pointing with a direct input device
We present HybridPointing, a technique that lets users easily switch between absolute and relative pointing with a direct input device such as a pen. Our design includes a new graphical element, the Trailing Widget, which remains "close at hand" but does not interfere with normal cursor operation. The use of visual feedback to aid the user's understanding of input state is discussed, and several novel visual aids are presented. An experiment conducted on a large, wall-sized display validates the benefits of HybridPointing under certain conditions. We also discuss other situations in which HybridPointing may be useful. Finally, we present an extension to our technique that allows for switching between absolute and relative input in the middle of a single drag-operation.
HybridTouch: an intuitive manipulation technique for PDAs using their front and rear surfaces
This paper describes a new manipulation technique for small-screen mobile devices. The proposed technique, called HybridTouch, uses a touchpad attached to the rear surface of a PDA. A user can manipulate the PDA by simultaneously touching the front surface with a stylus pen held by the dominant hand and the rear surface with a finger of the nondominant hand. User studies were conducted via applications augmented by HybridTouch, and proved that users could perform manipulation tasks intuitively.
I Sense A Disturbance in the Force: Mobile Device Interaction with Force Sensing
We propose a new type of input for mobile devices by sensing forces such as twisting and bending applied by users. Deformation of the devices is not necessary for such "force gestures" to be detectable. Our prototype implementation augments an ultra-mobile PC (UMPC) to detect twisting and bending forces. We detail example interactions using these forces, employing twisting to perform application switching (alt-tab) and interpreting bending as page-down/up. By providing visual feedback related to the force type applied, e.g. of an application window twisting in 3D to reveal another and of pages bending across, these force-based interactions are made easy to learn and use. We also present a user study exploring users' abilities to apply forces to various degrees, and draw implications from this study for future force-based interfaces.
Integrated Microelectronics for Smart Textiles
This article gives a short overview of smart textiles, a growing technology. After an introduction, the key technologies of smart textiles are described: conductive fibres and interconnect technologies. The possibilities of smart textiles are then illustrated by means of two examples. The whole paper is based on the article "Integrated Microelectronics for Smart Textiles" by S. Jung and C. Lauterbach.
Issues and techniques in touch-sensitive tablet input
Touch-sensitive tablets and their use in human-computer interaction are discussed. It is shown that such devices have some important properties that differentiate them from other input devices (such as mice and joysticks). The analysis serves two purposes: (1) it sheds light on touch tablets, and (2) it demonstrates how other devices might be approached. Three specific distinctions between touch tablets and one button mice are drawn. These concern the signaling of events, multiple point sensing and the use of templates. These distinctions are reinforced, and possible uses of touch tablets are illustrated, in an example application. Potential enhancements to touch tablets and other input devices are discussed, as are some inherent problems. The paper concludes with recommendations for future work.
Keepin' it real: pushing the desktop metaphor with physics, piles and the pen
We explore making virtual desktops behave in a more physically realistic manner by adding physics simulation and using piling instead of filing as the fundamental organizational structure. Objects can be casually dragged and tossed around, influenced by physical characteristics such as friction and mass, much like we would manipulate lightweight objects in the real world. We present a prototype, called BumpTop, that coherently integrates a variety of interaction and visualization techniques optimized for pen input we have developed to support this new style of desktop organization.
Lucid touch: a see-through mobile device
Touch is a compelling input modality for interactive devices; however, touch input on the small screen of a mobile device is problematic because a user's fingers occlude the graphical elements he wishes to work with. In this paper, we present LucidTouch, a mobile device that addresses this limitation by allowing the user to control the application by touching the back of the device. The key to making this usable is what we call pseudo-transparency: by overlaying an image of the user's hands onto the screen, we create the illusion of the mobile device itself being semi-transparent. This pseudo-transparency allows users to accurately acquire targets while not occluding the screen with their fingers and hand. Lucid Touch also supports multi-touch input, allowing users to operate the device simultaneously with all 10 fingers. We present initial study results that indicate that many users found touching on the back to be preferable to touching on the front, due to reduced occlusion, higher precision, and the ability to make multi-finger input.
Making an impression: force-controlled pen input for handheld devices
The properties of force-based input on a handheld device were examined. Twenty-one participants used force input to set 10 different target levels representing consecutive force ranges (0 to 4N) with visual feedback (digits or bar graphs) or no feedback. Both accuracy and speed were greater with analog feedback (bar graph). Statistical comparisons of adjacent targets/digits indicated that subjects differentiated roughly seven input levels within the set of ten force ranges actually used. Time taken to input the target force increased significantly with the size of the target force, suggesting that smaller force ranges should be considered in future implementations of force input. The results are discussed in terms of the design of appropriate feedback for force input.
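For concreteness, the level mapping in such a study might look like the sketch below; the uniform band widths over 0-4 N are an assumption, and the paper's finding that users only distinguish about seven levels argues for fewer, wider bands in practice.

```python
N_LEVELS = 10   # target levels in the study
F_MAX = 4.0     # maximum force, newtons

def force_to_level(force_n):
    """Map a raw force sample in [0, F_MAX] newtons to a 0-based level."""
    force_n = min(max(force_n, 0.0), F_MAX)
    # Level i covers the band [i * F_MAX / N_LEVELS, (i + 1) * F_MAX / N_LEVELS).
    return min(int(force_n * N_LEVELS / F_MAX), N_LEVELS - 1)

print(force_to_level(1.3))  # -> 3 (the fourth band)
```
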
Mobile interaction using paperweight metaphor
Conventional scrolling methods for the small displays of PDAs or mobile phones are difficult to use when frequent switching between scrolling and editing operations is required, for example when browsing and operating large WWW pages. In this paper, we propose a new user-interface method that provides seamless switching between scrolling/zooming mode and editing mode, based on a "paperweight metaphor". A sheet of paper placed on a slippery table is difficult to draw on; in order to write or draw something on it, a person must secure the paper with his or her palm to keep it from moving. This makes a good metaphor for designing the switching between scrolling and editing modes. We built a prototype system by placing a touch sensor under the part of a PDA screen where the user's palm rests, developed an application program that switches between scrolling and editing modes based on the sensor output, and assessed our method.
Modal spaces: spatial multiplexing to mediate direct-touch input on large displays
We present a new interaction technique for large direct-touch displays called Modal Spaces. Modal interfaces require the user to keep track of the state of the system. The Modal Spaces technique adds screen location as an additional parameter of the interaction. Each modal region on the display supports a particular set of input actions and the visual background indicates the space's use. This "workbench approach" exploits the larger form factor of the display. Our spatial multiplexing of the display supports a document-centric paradigm (as opposed to application-centric), enabling input gesture reuse, while complementing and enhancing the current existing practices of modal interfaces. We present a proof-of-concept system and discuss potential applications, design issues, and future research directions.
Multi-finger gestural interaction with 3d volumetric displays
Volumetric displays provide interesting opportunities and challenges for 3D interaction and visualization, particularly when used in a highly interactive manner. We explore this area through the design and implementation of techniques for interactive direct manipulation of objects with a 3D volumetric display. Motion tracking of the user's fingers provides for direct gestural interaction with the virtual objects, through manipulations on and around the display's hemispheric enclosure. Our techniques leverage the unique features of volumetric displays, including a 360° viewing volume that enables manipulation from any viewpoint around the display, as well as natural and accurate perception of true depth information in the displayed 3D scene. We demonstrate our techniques within a prototype 3D geometric model building application.
Pointing lenses: facilitating stylus input through visual-and motor-space magnification
Using a stylus on a tablet computer to acquire small targets can be challenging. In this paper we present pointing lenses -- interaction techniques that help users acquire and select targets by presenting them with an enlarged visual and interaction area. We present and study three pointing lenses for pen-based systems and find that our proposed Pressure-Activated Lens is the top overall performer in terms of speed, accuracy and user preference. In addition, our experimental results not only show that participants find all pointing lenses beneficial for targets smaller than 5 pixels, but they also suggest that this benefit may extend to larger targets as well.
Precise selection techniques for multi-touch screens
The size of human fingers and the lack of sensing precision can make precise touch screen interactions difficult. We present a set of five techniques, called Dual Finger Selections, which leverage the recent development of multi-touch sensitive displays to help users select very small targets. These techniques facilitate pixel-accurate targeting by adjusting the control-display ratio with a secondary finger while the primary finger controls the movement of the cursor. We also contribute a "clicking" technique, called SimPress, which reduces motion errors during clicking and allows us to simulate a hover state on devices unable to sense proximity. We implemented our techniques on a multi-touch tabletop prototype that offers computer vision-based tracking. In our formal user study, we tested the performance of our three most promising techniques (Stretch, X-Menu, and Slider) against our baseline (Offset), on four target sizes and three input noise levels. All three chosen techniques outperformed the control technique in terms of error rate reduction and were preferred by our participants, with Stretch being the overall performance and preference winner.
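The core mechanism shared by several of the Dual Finger techniques is easy to sketch: the primary finger drives the cursor while the secondary finger sets a control-display (CD) gain that divides its motion. The mapping below is an illustrative assumption, not the paper's exact design.

```python
def update_cursor(cursor, primary_delta, cd_gain):
    """Move the cursor by the primary finger's motion divided by cd_gain."""
    dx, dy = primary_delta
    return (cursor[0] + dx / cd_gain, cursor[1] + dy / cd_gain)

# cd_gain = 1 -> ordinary direct touch; cd_gain = 10 -> ten times slower,
# letting a 5-pixel finger motion produce a 0.5-pixel cursor motion.
cursor = (100.0, 100.0)
cursor = update_cursor(cursor, (5.0, -3.0), cd_gain=10.0)
print(cursor)  # (100.5, 99.7)
```
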
PreSenseII: bi-directional touch and pressure sensing interactions with tactile feedback
This paper introduces a new input device called "PreSenseII" that recognizes position, touch and pressure of a user's finger. This input device acts as a normal touchpad, but also senses pressure for additional control. Tactile feedback is provided to indicate the state of the user interface to the user. By sensing the finger contact area, pressure can be treated in two ways. This combination enables various user interactions, including multiple hardware button emulation, map scrolling with continuous scale change, and list scrolling with pressure-based speed control.
Pressure marks
Selections and actions in GUI's are often separated -- i.e. an action or command typically follows a selection. This sequence imposes a lower bound on the interaction time that is equal to or greater than the sum of its parts. In this paper, we introduce pressure marks -- pen strokes where the variations in pressure make it possible to indicate both a selection and an action simultaneously. We propose a series of design guidelines from which we develop a set of four basic types of pressure marks. We first assess the viability of this set through an exploratory study that looks at the way users draw straight and lasso pressure marks of different sizes and orientations. We then present the results of a quantitative experiment that shows that users perform faster selection-action interactions with pressure marks than with a combination of lassos and pigtails. Based on these results, we present and discuss a number of interaction designs that incorporate pressure marks.
Recognition of Grip-Patterns by Using Capacitive Touch Sensors
A novel and intuitive way of accessing applications of mobile devices is presented. The key idea is to use the grip-pattern, which is naturally produced when a user tries to use the mobile device, as a clue to determine the application to be launched. To this end, a capacitive touch sensor system is carefully designed and installed underneath the housing of the mobile device to capture the information of the user's grip-pattern. The captured data is then recognized by a minimum distance classifier and a naive Bayes classifier. A recognition test is performed to validate the feasibility of the proposed user interface system.
Release, relocate, reorient, resize: fluid techniques for document sharing on multi-user interactive tables
Group work frequently involves transitions between periods of active collaboration and periods of individual activity. We aim to support this typical work practice by introducing four tabletop direct-manipulation interaction techniques that can be used to transition the status of an electronic document from private to group-accessible. After presenting our four techniques - release, relocate, reorient, and resize - we discuss the results of an empirical study that compares and evaluates these mechanisms for sharing documents in a co-located tabletop environment.
Sensing techniques for mobile interaction
Shift: a technique for operating pen-based interfaces using touch
Retrieving the stylus of a pen-based device takes time and requires a second hand. Especially for short intermittent interactions many users therefore choose to use their bare fingers. Although convenient, this increases targeting times and error rates. We argue that the main reasons are the occlusion of the target by the user's finger and ambiguity about which part of the finger defines the selection point. We propose a pointing technique we call Shift that is designed to address these issues. When the user touches the screen, Shift creates a callout showing a copy of the occluded screen area and places it in a non-occluded location. The callout also shows a pointer representing the selection point of the finger. Using this visual feedback, users guide the pointer into the target by moving their finger on the screen surface and commit the target acquisition by lifting the finger. Unlike existing techniques, Shift is only invoked when necessary--over large targets no callout is created and users enjoy the full performance of an unaltered touch screen. We report the results of a user study showing that with Shift participants can select small targets with much lower error rates than an unaided touch screen and that Shift is faster than Offset Cursor for larger targets.
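A hedged sketch of Shift's escalation rule follows: invoke the callout only when some target near the touch is small enough to be occluded by the finger. The footprint radius and the proximity test are illustrative assumptions, not the paper's exact design.

```python
OCCLUSION_RADIUS = 9.0  # approx. half-width of a finger contact, in pixels

def needs_callout(touch, targets):
    """Return True if a small (occludable) target lies near the touch point.

    `targets` is a list of (x, y, width, height) rectangles in pixels.
    """
    tx, ty = touch
    for x, y, w, h in targets:
        near_x = abs(x + w / 2 - tx) <= OCCLUSION_RADIUS + w / 2
        near_y = abs(y + h / 2 - ty) <= OCCLUSION_RADIUS + h / 2
        if near_x and near_y and min(w, h) < 2 * OCCLUSION_RADIUS:
            return True   # small target under the finger: show the callout
    return False          # only large targets nearby: plain touch suffices

# A 6x6-pixel button right under the finger triggers the callout; a large
# 60x40 button does not.
print(needs_callout((100, 100), [(97, 97, 6, 6)]))    # True
print(needs_callout((100, 100), [(80, 90, 60, 40)]))  # False
```
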
Single-Handed Interaction Techniques for Multiple Pressure-Sensitive Strips
We present a set of interaction techniques that make novel use of a small pressure-sensitive pad to allow one-handed direct control of a large number of parameters. The surface of the pressure-sensitive pad is logically divided into four linear strips which simulate traditional interaction metaphors and the functions of which may be modified dynamically under software control. No homing of the hand or fingers is needed once the fingers are placed above their corresponding strips. We show how the number of strips on the pad can be virtually extended from four to fourteen by detecting contact pressure differences and dual-finger motions. Due to the compact size of the device and the method of interaction, which does not rely on on-screen widgets or the 2D navigation of a cursor, the versatile input system may be used in applications where it is advantageous to minimize the amount of visual feedback required for interaction.
SmartSkin: an infrastructure for freehand manipulation on interactive surfaces
This paper introduces a new sensor architecture for making interactive surfaces that are sensitive to human hand and finger gestures. This sensor recognizes multiple hand positions and shapes and calculates the distance between the hand and the surface by using capacitive sensing and a mesh-shaped antenna. In contrast to camera-based gesture recognition systems, all sensing elements can be integrated within the surface, and this method does not suffer from lighting and occlusion problems. This paper describes the sensor architecture, as well as two working prototype systems: a table-size system and a tablet-size system. It also describes several interaction techniques that would be difficult to perform without using this architecture.
Superflick: a natural and efficient technique for long-distance object placement on digital tables
Moving objects past arms' reach is a common action in both real-world and digital tabletops. In the real world, the most common way to accomplish this task is by throwing or sliding the object across the table. Sliding is natural, easy to do, and fast; however, in digital tabletops, few existing techniques for long-distance movement bear any resemblance to these real-world motions. We have designed and evaluated two tabletop interaction techniques that closely mimic the action of sliding an object across the table. Flick is an open-loop technique that is extremely fast. Superflick is based on Flick, but adds a correction step to improve accuracy for small targets. We carried out two user studies to compare these techniques to a fast and accurate proxy-based technique, the radar view. In the first study, we found that Flick is significantly faster than the radar for large targets, but is inaccurate for small targets. In the second study, we found no differences between Superflick and radar for either time or accuracy. Given the simplicity and learnability of flicking, our results suggest that throwing-based techniques have promise for improving the usability of digital tables.
Supporting multi-point interaction in visual workspaces
Multi-point interaction tasks involve the manipulation of several mutually-dependent control points in a visual workspace -- for example, adjusting a selection rectangle in a drawing application. Multi-point interactions place conflicting requirements on the interface: the system must display objects at sufficient scale for detailed manipulation, but it must also provide an efficient means of navigating from one control point to another. Current interfaces lack any explicit support for tasks that combine these two requirements, forcing users to carry out sequences of zoom and pan actions. In this paper, we describe three novel mechanisms for view control that explicitly support multi-point interactions with a single mouse, and preserve both visibility and scale for multiple regions of interest. We carried out a study to compare two of the designs against standard zoom and pan techniques, and found that task completion time was significantly reduced with the new approaches. The study shows the potential of interfaces that combine support for both scale and navigation.
System design of Smart Table
The paper describes the system design of Smart Table, a table that can track and identify multiple objects simultaneously when placed on top of its surface. The table has been designed to support a smart problem-solving environment for early childhood education in a project called "Smart Kindergarten". We introduce our technology and present the incorporation of location information and identification provided by Smart Table into context-aware computing applications. In addition, the paper discusses the prototype design, localization algorithm, and the results from final implementation.
Tactile interfaces for small touch screens
We present the design, implementation, and informal evaluation of tactile interfaces for small touch screens used in mobile devices. We embedded a tactile apparatus in a Sony PDA touch screen and enhanced its basic GUI elements with tactile feedback. Instead of observing the response of interface controls, users can feel it with their fingers as they press the screen. In informal evaluations, tactile feedback was greeted with enthusiasm. We believe that tactile feedback will become the next step in touch screen interface design and a standard feature of future mobile devices.
ThinSight: integrated optical multi-touch sensing through thin form-factor displays
ThinSight is a novel optical sensing system, fully integrated into a thin form factor display, capable of detecting multiple objects such as fingertips placed on or near the display surface. We describe this new hardware, and demonstrate how it can be embedded behind a regular LCD, allowing sensing without compromising display quality. Our aim is to capture rich sensor data through the display, which can be processed using computer vision techniques to enable interaction via multi-touch and physical objects. A major advantage of ThinSight over existing camera and projector based optical systems is its compact, low profile form factor making such interaction techniques more practical and deployable in real-world settings.
ThinSight: versatile multi-touch sensing for thin form-factor displays
Two-finger input with a standard touch screen
Most current implementations of multi-touch screens are still too expensive or too bulky for widespread adoption. To improve this situation, this work describes the electronics and software needed to collect more data than one pair of coordinates from a standard 4-wire touch screen. With this system, one can measure the pressure of a single touch and approximately sense the coordinates of two touches occurring simultaneously. Naturally, the system cannot offer the accuracy and versatility of full multi-touch screens. Nonetheless, several example applications ranging from painting to zooming demonstrate a broad spectrum of use.
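One way to see how a second touch can be approximated: with two contacts, a 4-wire panel reports roughly a pressure-weighted point between them, so if the first finger's position was latched while it was still the only contact, the second can be estimated by reflection. The equal-weighting (exact midpoint) assumption below is a simplification of what the paper derives from the measured electrical quantities.

```python
def estimate_second_touch(first, reported):
    """Estimate finger 2 from finger 1's latched position and the combined
    coordinate currently reported by the panel."""
    fx, fy = first
    rx, ry = reported
    return (2 * rx - fx, 2 * ry - fy)  # reflect finger 1 through the midpoint

# Finger 1 latched at (100, 100); panel now reports (160, 130).
print(estimate_second_touch((100, 100), (160, 130)))  # -> (220, 160)
```
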
Two-handed interaction on a tablet display
A touchscreen can be overlaid on a tablet computer to support asymmetric two-handed interaction in which the preferred hand uses a stylus and the non-preferred hand operates the touchscreen. The result is a portable device that allows both hands to interact directly with the display, easily constructed from commonly available hardware. The method for tracking the independent motions of both hands is described. A wide variety of existing two-handed interaction techniques can be used on this platform, as well as some new ones that exploit the reconfigurability of touchscreen interfaces. Informal tests show that, when the non-preferred hand performs simple actions, users find direct manipulation on the display with both hands to be comfortable, natural, and efficient.
Using classification to determine the number of finger strokes on a multi-touch tactile device
On certain types of multi-touch touchpads, determining the number of finger strokes is a non-trivial problem. We investigate the application of several classification algorithms to this problem. Our experiments are based on a flat prototype of the spherical Touchglobe touchpad. We demonstrate that, with a very short delay after the stroke, the number of touches can be determined by a Support Vector Machine with an RBF kernel with an accuracy of about 90% (on a 5-class problem). [1] C. van Wrede, P. Laskov, and G. Ratsch, "Using classification to determine the number of finger strokes on a multi-touch tactile device," European Symposium on Artificial Neural Networks, pp. 549–554, 2004.
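The classification setup is straightforward to reproduce with a modern library. The sketch below uses scikit-learn's SVC with an RBF kernel; the random feature vectors are placeholders for the real per-stroke features extracted from the sensor signal.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))    # 500 strokes, 16 placeholder features each
y = rng.integers(1, 6, size=500)  # labels: 1..5 fingers (a 5-class problem)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF kernel, as in the paper
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))  # ~chance on random data; ~90% reported
```
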
Visual touchpad: a two-handed gestural input device
This paper presents the Visual Touchpad, a low-cost vision-based input device that allows for fluid two-handed interactions with desktop PCs, laptops, public kiosks, or large wall displays. Two downward-pointing cameras are attached above a planar surface, and a stereo hand tracking system provides the 3D positions of a user's fingertips on and above the plane. Thus the planar surface can be used as a multi-point touch-sensitive device, but with the added ability to also detect hand gestures hovering above the surface. Additionally, the hand tracker not only provides positional information for the fingertips but also finger orientations. A variety of one and two-handed multi-finger gestural interaction techniques are then presented that exploit the affordances of the hand tracker. Further, by segmenting the hand regions from the video images and then augmenting them transparently into a graphical interface, our system provides a compelling direct manipulation experience without the need for more expensive tabletop displays or touch-screens, and with significantly less self-occlusion.
Visual tracking of bare fingers for interactive surfaces
Visual tracking of bare fingers allows more direct manipulation of digital objects, supports multiple simultaneous users interacting with their two hands, and permits interaction on large surfaces, using only commodity hardware. After presenting related work, we detail our implementation. Its design is based on our modeling of two classes of algorithms that are key to the tracker: Image Differencing Segmentation (IDS) and Fast Rejection Filters (FRF). We introduce a new chromatic distance for IDS and an FRF that is independent of finger rotation. The system runs at full frame rate (25 Hz) with an average total system latency of 80 ms, independently of the number of tracked fingers. When used in a controlled environment such as a meeting room, its robustness is satisfactory for everyday use.
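The Image Differencing Segmentation step reduces to comparing each live frame against a stored background frame. The sketch below uses a plain Euclidean RGB distance and a guessed threshold; the paper's chromatic distance and its Fast Rejection Filters are more elaborate.

```python
import numpy as np

def segment(frame, background, thresh=30.0):
    """Return a boolean finger mask: True where the live frame differs from
    the stored background by more than `thresh` in RGB distance."""
    dist = np.linalg.norm(frame.astype(float) - background.astype(float),
                          axis=-1)
    return dist > thresh

bg = np.full((4, 4, 3), 120, dtype=np.uint8)  # synthetic background frame
fr = bg.copy(); fr[1, 1] = (200, 160, 140)    # one "finger" pixel
print(segment(fr, bg))                        # True only at (1, 1)
```
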
W. Westerman, "Hand Tracking, Finger Identification, and Chordic Manipulation on a Multi-Touch Surface," Ph.D. dissertation, University of Delaware, 1999.
This research introduces methods for tracking and identifying multiple finger and palm contacts as hands approach, touch, and slide across a proximity-sensing multi-touch surface (MTS). Though MTS proximity images exhibit special topological characteristics such as absence of background clutter, techniques such as bootstrapping from hand-position estimates are necessary to overcome the invisibility of structures linking fingertips to palms. Context-dependent segmentation of each proximity image constructs and parameterizes pixel groups corresponding to each distinguishable surface contact. Path-tracking links across successive images those groups which correspond to the same hand part, reliably detecting touchdown and liftoff of individual fingers. Combinatorial optimization algorithms use biomechanical constraints and anatomical features to associate each contact's path with a particular fingertip, thumb, or palm of either hand. Assignment of contacts to a ring of hand part attractor points using a squared-distance cost metric effectively sorts the contact identities with respect to the ring structure. Despite the ascension of the mouse into everyday computing, more advanced devices for bimanual and high degree-of-freedom (DOF) manipulation have failed to enter the mainstream due to awkward integration with text entry devices. This work introduces a novel input integration technique which reserves synchronized motions of multiple fingers on the MTS for multi-DOF gestures and hand resting, leaving asynchronous single finger taps on the MTS to be recognized as typing on a QWERTY key layout. The operator can then switch instantaneously between typing and several 4-DOF graphical manipulation channels with a simple change in hand configuration. This integration technique depends upon reliable detection of synchronized finger touches, extraction of independent hand translation, scaling, and rotational velocities, and the aforementioned finger and hand identifications. The MTS optimizes ergonomics by eliminating redundant pointing and homing motions, minimizing device activation force without removing support for resting hands, and distributing tasks evenly over muscles in both hands. Based upon my daily use of a prototype to prepare this document, I have found that the MTS system as a whole is nearly as reliable, much more efficient, and much less fatiguing than the typical mouse-keyboard combination.
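The attractor-ring assignment is the most self-contained piece of the thesis to sketch: assign contacts to per-finger attractor points by minimizing total squared distance. SciPy's Hungarian solver stands in for the thesis's own combinatorial optimization, and the attractor layout below is invented for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Attractor ring: rough resting positions for thumb + four fingertips
# (invented coordinates, in centimetres).
attractors = np.array([[0, 0], [3, 4], [6, 5], [9, 4], [12, 1]], dtype=float)
contacts = np.array([[8.8, 4.2], [3.1, 3.7]])  # two detected surface contacts

# Squared-distance cost matrix: one row per contact, one column per finger.
cost = ((contacts[:, None, :] - attractors[None, :, :]) ** 2).sum(axis=2)
rows, cols = linear_sum_assignment(cost)  # minimum-cost one-to-one assignment
for r, c in zip(rows, cols):
    print(f"contact {r} -> finger {c}")   # contact 0 -> finger 3, 1 -> finger 1
```
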




Patents

Adiel Abileah et al., "Integrated optical light sensitive active matrix liquid crystal display," US7009663, Dec. 17, 2003.
A liquid crystal device including a front electrode layer, a rear electrode layer, and a liquid crystal material located between the front electrode layer and the rear electrode layer. A polarizer is located between the liquid crystal material and the front electrode layer, and changing an electrical potential between the rear electrode layer and the front electrode layer modifies portions of the liquid crystal material to change the polarization of the light incident thereon. A plurality of light sensitive elements are located together with the rear electrode layer, and a processor determines the position of at least one of the plurality of light sensitive elements that has been inhibited from sensing ambient light.
B. Ording (Apple), "Operation of a computer with touch screen interface," US20060053387, Mar. 9, 2006.
A touch screen computer executes an application. A method of operating the touch screen computer in response to a user is provided. A virtual input device is provided on the touch screen. The virtual input device comprises a plurality of virtual keys. It is detected that a user has touched the touch screen to nominally activate at least one virtual key, and a behavior of the user with respect to touch is determined. The determined behavior is processed and a predetermined characteristic is associated with the nominally-activated at least one virtual key. A reaction to the nominal activation is determined based at least in part on a result of processing the determined behavior.
D. R. Kerr et al., "Hand held electronic device with multiple touch sensing devices," US20060197750, Sept. 7, 2006.
Hand held devices with multiple touch sensing devices are disclosed. The touch sensing devices may for example be selected from touch panels, touch screens or touch sensitive housings.
H. P. Dietz and D. L. Leigh (Mitsubishi Electric Research Laboratories, Inc.), "Multi-user touch surface," US20020185981, Dec. 12, 2002.
A multi-user touch system includes a surface on which a pattern of antennas is mounted. A transmitter transmits uniquely identifiable signals to each antenna. Receivers are capacitively coupled to different users; the receivers are configured to receive the uniquely identifiable signals. A processor then associates a specific antenna with a particular user when multiple users simultaneously touch any of the antennas.
M. Uy (Apple), "Integrated sensing display," US20060007222, Jan. 12, 2006.
An integrated sensing display is disclosed. The sensing display includes display elements integrated with image sensing elements. As a result, the integrated sensing device can not only output images (e.g., as a display) but also input images (e.g., as a camera).
S. Hotelling et al. (Apple), "Mode-based graphical user interfaces for touch sensitive input devices," US20060026535, Feb. 2, 2006
A user interface method is disclosed. The method includes detecting a touch and then determining a user interface mode when a touch is detected. The method further includes activating one or more GUI elements based on the user interface mode and in response to the detected touch.
S. Hotelling et al. (Apple), "Multipoint touchscreen," US20060097991, May 11, 2006.
A touch panel having a transparent capacitive sensing medium configured to detect multiple touches or near touches that occur at the same time and at distinct locations in the plane of the touch panel and to produce distinct signals representative of the location of the touches on the plane of the touch panel for each of the multiple touches is disclosed.
S. P. Hotelling et al., "Touch screen liquid crystal display," US20080062140, Mar. 13, 2008.
Disclosed herein are liquid-crystal display (LCD) touch screens that integrate the touch sensing elements with the display circuitry. The integration may take a variety of forms. Touch sensing elements can be completely implemented within the LCD stackup but not between the color filter plate and the array plate. Alternatively, some touch sensing elements can be between the color filter and array plates with other touch sensing elements not between the plates. In another alternative, all touch sensing elements can be between the color filter and array plates. The latter alternative can include both conventional and in-plane-switching (IPS) LCDs. In some forms, one or more display structures can also have a touch sensing function. Techniques for manufacturing and operating such displays, as well as various devices embodying such displays, are also disclosed.
S. P. Hotelling, "MULTIPOINT TOUCH SURFACE CONTROLLER," US20070257890, Nov. 8, 2007.
A multipoint touch surface controller is disclosed herein. The controller includes an integrated circuit including output circuitry for driving a capacitive multi-touch sensor and input circuitry for reading the sensor. Also disclosed herein are various noise rejection and dynamic range enhancement techniques that permit the controller to be used with various sensors in various conditions without reconfiguring hardware.




Misc. Articles

4-Wire Panels - How They Work
A resistive touch panel is made by sandwiching together ITO (indium tin oxide) coated glass and PET (polyethylene terephthalate). The glass provides mechanical stability and the PET provides the flexible medium through which the two parts connect.
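Reading a 4-wire panel alternates which layer is driven and which is sensed. Below is a sketch of the measurement sequence, with a hypothetical io object standing in for the microcontroller's GPIO and ADC peripherals; the method names are invented for illustration.

```python
X_PLUS, X_MINUS, Y_PLUS, Y_MINUS = "X+", "X-", "Y+", "Y-"
HIGH, LOW = 1, 0

def read_4wire(io):
    """One position measurement. `io` is a hypothetical object wrapping the
    microcontroller's GPIO and ADC peripherals."""
    # X coordinate: voltage gradient across the X layer, Y layer as probe.
    io.drive(X_PLUS, HIGH)
    io.drive(X_MINUS, LOW)
    io.tristate(Y_PLUS)
    x_raw = io.adc_read(Y_MINUS)
    # Y coordinate: swap the roles of the two layers.
    io.drive(Y_PLUS, HIGH)
    io.drive(Y_MINUS, LOW)
    io.tristate(X_PLUS)
    y_raw = io.adc_read(X_MINUS)
    return x_raw, y_raw  # raw ADC counts; a calibration maps them to pixels
```
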
7 Things you should know about ... Multi-Touch Interfaces
Capacitive Sensor Tutorial
Capacitive sensors can directly sense a variety of things—motion, chemical composition, electric field—and, indirectly, sense many other variables which can be converted into motion or dielectric constant, such as pressure, acceleration, fluid level, and fluid composition.
Capacitor Interface
Digesting the Apple iPhone
Apple will be launching its first mobile phone effort this June in the US with the iPhone. Steve Jobs, when announcing the phone, claimed that it was five years ahead of any other phone. But what makes the iPhone so special? Apple are being characteristically secretive about the hardware details; however, there are some hints out there. By analysing these, predictions can be made about the technology and components that might be used.
How a Five-Wire Touchscreen Works
The five-wire resistive touch screen uses a glass panel with a uniform resistive coating. A thick polyester coversheet is tightly suspended over the top of a glass substrate, separated by small, transparent insulating dots. The coversheet has a hard, durable coating on the outer side and a conductive coating on the inner side.
How SAW Touchscreen Works
Surface-wave touch technologies have a glass overlay with transmitting and receiving piezoelectric transducers for both the X and Y axes. The touchscreen controller sends an electrical signal to the transmitting transducer, which converts the signal into ultrasonic waves within the glass. These waves are directed across the front surface of the touchscreen by an array of reflectors. Reflectors on the opposite side gather and direct the waves to the receiving transducer, which reconverts them into an electrical signal—a digital map of the touchscreen surface. When you touch the screen, you absorb a portion of the wave traveling across it. The received signal is then compared to the stored digital map, the change recognized, and a coordinate calculated. This process happens independently for both the X and Y axes. By measuring the amount of the signal that is absorbed, a Z-axis is also determined. The digitized coordinates are transmitted to the computer for processing.
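In code, the coordinate computation amounts to finding where the received burst lost energy relative to the stored reference and converting that time offset to a distance. The sample rate and wave speed below are illustrative numbers, not from the article.

```python
import numpy as np

WAVE_SPEED_M_S = 3_100   # rough Rayleigh-wave speed in glass
SAMPLE_RATE_HZ = 20e6

def touch_coordinate(reference, received):
    """Return the touch position (metres along one axis), or None."""
    absorption = reference - received  # dip where the finger absorbed energy
    i = int(np.argmax(absorption))
    if absorption[i] < 0.1 * reference.max():  # no significant dip: no touch
        return None
    t = i / SAMPLE_RATE_HZ          # time of flight of the absorbed path
    return WAVE_SPEED_M_S * t       # distance = speed x time

ref = np.ones(2000)
rx = ref.copy()
rx[700:760] -= 0.5                  # synthetic touch absorbs 50% here
print(touch_coordinate(ref, rx))    # ~0.1085 m along the axis
```
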
How To Calibrate Touch Screens
Touch screens are finding their way into a variety of embedded products. Most touch-enabled devices will require a calibration routine. Here's a good one.
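The routine such articles describe is usually the classic three-point affine calibration: three non-collinear touches give an exact solution for a 2x3 transform from raw ADC space to screen pixels. A minimal sketch, with made-up calibration points:

```python
import numpy as np

def solve_calibration(raw_pts, screen_pts):
    """Solve A (2x3) such that [sx, sy]^T = A @ [rx, ry, 1]^T, given three
    non-collinear raw/screen point pairs."""
    M = np.array([[rx, ry, 1.0] for rx, ry in raw_pts])
    S = np.array(screen_pts, dtype=float)
    return np.linalg.solve(M, S).T  # exact solution for three points

def apply_calibration(A, raw):
    rx, ry = raw
    return tuple(A @ np.array([rx, ry, 1.0]))

A = solve_calibration([(200, 300), (3800, 350), (2000, 3700)],  # raw ADC
                      [(32, 24), (608, 24), (320, 456)])         # screen px
print(apply_calibration(A, (2000, 2000)))  # a raw reading mapped to pixels
```
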
Howstuffworks "How do touch-screen monitors know where you're touching?"
Touch-screen monitors have become more and more commonplace as their price has steadily dropped over the past decade. There are three basic systems used to recognize a person's touch: resistive, capacitive, and surface acoustic wave.
Howstuffworks "How the iPhone Works"
The Apple iPhone is a cell phone that works like a computer. Get the inside scoop on iPhone technology and the hype around the iPhone release.
In-depth Analysis: Touch Screen Panel Industry Trends and Business Strategies
Apple and LG Electronics launched the new iPhone and Prada Phone with embedded touch screen panels in 2007, spurring increasing interest in the touch screen panel. The touch screen panel, which has been incorporated in ATMs, industrial monitors and kiosks, has started to broaden its application field to portable devices and even home electronics appliances such as monitors, notebooks and refrigerators.
Natural Input On Mobile PC Systems
N-trig's Dual-Mode Pen and Touch
N-trig Innovative Technologies, an Israeli startup, has developed a dual-mode pen and touch digitizer aimed at the Tablet PC market. N-trig first showed a public demo of the technology embedded in a Motion Computing Tablet PC at Microsoft's WinHEC 2006 conference. The first product in which the technology will be used is Motion's LE1700 slate; it is expected to be utilized in several other new Tablet PCs later in 2007.
Tactile Sensors
Touch and tactile sensors are devices that measure the parameters of a contact between the sensor and an object, where the interaction is confined to a small, defined region. This contrasts with a force or torque sensor, which measures the total forces applied to an object. In tactile and touch sensing, the following definitions are commonly used. Touch sensing: the detection and measurement of a contact force at a defined point; a touch sensor can also be restricted to binary information, namely touch and no touch. Tactile sensing: the detection and measurement of the spatial distribution of forces perpendicular to a predetermined sensory area, and the subsequent interpretation of the spatial information; a tactile-sensing array can be considered a coordinated group of touch sensors. Slip: the detection and measurement of the movement of an object relative to the sensor; this can be achieved either by a specially designed slip sensor or by interpreting the data from a touch sensor or a tactile array.
The Apple iPhone's Impact on the Touch-Panel Industry
The Apple iPhone is arguably the most talked about consumer-electronics device that has yet to hit the market, even though according to most media reports it will not be available to the public until June 11. While the world will be watching to see if Apple's latest gizmo attains the same status as its iPod, display-industry professionals will be watching for a different reason – simply put, if the iPhone succeeds, it could have a significant impact on the touch-panel industry.
The art of capacitive touch sensing
Capacitive sensing is an attractive switch option, but it needs appropriate physical dimensions and electrical interfaces.
Using resistive touch screens for human/machine interface
Touch-screen interfaces are effective in many information appliances, in personal digital assistants (PDAs), and as generic pointing devices for instrumentation and control applications. Getting the information from a touch screen into a microprocessor can be challenging. This article introduces the basics of how resistive touch screens work and how to best convert these analog inputs into usable digital data. Issues such as settling time, noise filtering, and speed trade-offs are addressed.
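A sketch of the settling-and-filtering advice: after re-driving the panel, wait for the plate voltages to settle, then take several samples and keep the median to reject impulse noise. The delay and sample count are typical values, not taken from the article, and adc_read is a hypothetical callable.

```python
import statistics
import time

SETTLE_S = 500e-6  # settling delay after re-driving the panel
N_SAMPLES = 5

def filtered_read(adc_read):
    """Take one debounced reading. `adc_read` is a hypothetical zero-argument
    callable returning a single raw ADC sample."""
    time.sleep(SETTLE_S)  # let the plate voltages settle before sampling
    samples = [adc_read() for _ in range(N_SAMPLES)]
    return statistics.median(samples)  # median rejects single-sample spikes
```
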




Magazines

Pen Computing and Rugged PC Review - your source for mobile and rugged computing reviews and specs
"Pen Computing and Rugged PC Review cover all aspects of mobile and rugged computing, including reviews of rugged and semi-rugged notebookds, Tablet PCs, slates, smartphone, handheld computers, Pocket PCs, pen computers, industrial handhelds, PDAs and other ruggedized computing equipment. Rugged PC Review also explains rugged computing standards and definitions.
Rugged PC Review - your source for rugged computing reviews and specs
Rugged PC Review covers all aspects of rugged computing, including reviews of rugged and semi-rugged notebooks, Tablet PCs, slates, pen computers, industrial handhelds, rugged PDAs and other ruggedized computing equipment. Rugged PC Review also explains rugged computing standards and definitions.
SID - Society For Information Display
Source for display industry news and technology information
Touch Panel
The Veritas et Visus family of newsletters...
Veritas et Visus
Veritas et Visus provides readers with pertinent, timely, and affordable information about the fascinating and rapidly expanding flat panel display industry.
Walker Mobile, LLC
Walker Mobile, LLC is a technical marketing consulting firm specializing in mobile computing. Walker Mobile, LLC offers product and strategic marketing consulting to OEMs, ODMs, IHVs and ISVs engaged in developing or marketing mobile computing hardware or software products.
