Smartphone: A ubiquitous input device for positioning tasks in a variety of ways

Dec 21, 2012

Dipanwita Ghosh Mitra

The position could refer to screen position, or physical position in the real world.

The smartphone has been used for positioning tasks in a variety of ways, such as:

·         Continuous indirect translation with a trackpad. Remote Commander enables individuals to use the touch screen on a PDA as a trackpad to control the relative position of a cursor on a remote display.
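
As a concrete illustration, here is a minimal sketch of this kind of relative trackpad mapping: the handheld reports absolute touch points, and the remote cursor moves by the scaled delta between successive samples. The class name, gain, and screen dimensions are illustrative assumptions, not details of Remote Commander.

```python
# Sketch of relative (indirect) cursor control: the remote cursor moves
# by the delta between successive touch samples, scaled by a gain.

class RelativeTrackpad:
    def __init__(self, gain=2.0, screen_w=1920, screen_h=1080):
        self.gain = gain                        # control-display gain
        self.cursor = [screen_w // 2, screen_h // 2]
        self.bounds = (screen_w, screen_h)
        self.last_touch = None                  # None while the finger is lifted

    def touch(self, x, y):
        """Handle one absolute touch sample from the handheld."""
        if self.last_touch is not None:
            dx = x - self.last_touch[0]
            dy = y - self.last_touch[1]
            self.cursor[0] = min(max(self.cursor[0] + self.gain * dx, 0), self.bounds[0])
            self.cursor[1] = min(max(self.cursor[1] + self.gain * dy, 0), self.bounds[1])
        self.last_touch = (x, y)

    def lift(self):
        """Lifting the finger repositions it without moving the cursor."""
        self.last_touch = None

pad = RelativeTrackpad()
for point in [(10, 10), (14, 12), (20, 15)]:
    pad.touch(*point)
pad.lift()
print(pad.cursor)   # moved by the accumulated scaled deltas
```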

·         Continuous indirect translation with a velocity-controlled joystick. A return-to-zero joystick controls the velocity of a continuously repositioned object; zero displacement of the joystick corresponds to zero velocity. Silfverberg et al. have done an in-depth study of isometric joysticks on handheld devices to control the cursor on a public display. Many of today’s mobile phones ship with binary-resolution, 4- to 8-direction, return-to-zero joysticks.
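
A minimal sketch of the rate-control idea, assuming an analog joystick that reports deflection in [-1, 1] per axis; the dead zone and gain values are illustrative:

```python
# Rate control with a return-to-zero joystick: deflection sets cursor
# velocity, so zero displacement means zero velocity.

DEAD_ZONE = 0.05      # ignore tiny displacements near the rest position
GAIN = 600.0          # pixels per second at full deflection

def step(cursor, displacement, dt):
    """Advance the cursor by one frame of rate control.

    cursor:       (x, y) in pixels
    displacement: (dx, dy), each in [-1, 1], joystick deflection
    dt:           frame time in seconds
    """
    x, y = cursor
    dx, dy = displacement
    if abs(dx) > DEAD_ZONE:
        x += GAIN * dx * dt
    if abs(dy) > DEAD_ZONE:
        y += GAIN * dy * dt
    return (x, y)

cursor = (400.0, 300.0)
cursor = step(cursor, (0.5, 0.0), dt=1 / 60)   # half deflection right
cursor = step(cursor, (0.0, 0.0), dt=1 / 60)   # released: no motion
print(cursor)
```

A binary 4- or 8-direction joystick is the degenerate case of the same loop: deflection snaps to -1, 0, or 1 per axis, giving a fixed cursor velocity per direction.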

·         Continuous indirect translation with accelerometers. Accelerometers are beginning to emerge in handheld devices such as Samsung’s SCH-S310 mobile phone with an integrated 3-D accelerometer. Rock ’n’ Scroll allows users to scroll (e.g. through an electronic photo album) by tilting the handheld device. Although this technique was used to interact with an application directly on the device, it could clearly be extended to positioning tasks in ubiquitous computing environments.
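
The tilt-to-scroll mapping might look roughly like the following sketch, assuming an accelerometer that reports gravity in device coordinates; the thresholds, gain, and axis conventions are illustrative, not taken from Rock ’n’ Scroll:

```python
# Tilting the device shifts gravity into the x/y axes; the tilt angles
# are mapped to a 2-D scroll velocity, with a dead band to suppress
# small, unintentional tilt.

import math

TILT_THRESHOLD = 0.15   # radians; ignore small tilt
SCROLL_GAIN = 40.0      # list rows (or pixels) per second per radian

def scroll_velocity(ax, ay, az):
    """Map an accelerometer reading (in g) to a 2-D scroll velocity."""
    pitch = math.atan2(ay, az)   # tilt toward/away from the user
    roll = math.atan2(ax, az)    # tilt left/right
    vx = SCROLL_GAIN * roll if abs(roll) > TILT_THRESHOLD else 0.0
    vy = SCROLL_GAIN * pitch if abs(pitch) > TILT_THRESHOLD else 0.0
    return vx, vy

print(scroll_velocity(0.0, 0.0, 1.0))   # held flat: no scrolling
print(scroll_velocity(0.3, 0.0, 0.95))  # tilted right: scrolls right
```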

·         Continuous indirect translation with directional step keys. The location of an object is controlled by using up, down, left, right step keys for 2-D applications, plus in and out for 3-D. In the Blinkenlights project, users played the arcade classic Pong using the side of a building as a large public display. Each window equalled one pixel on the 18x8 pixel display. Players connected to the display by making a standard call to a phone number; pressing the number 5 on the phone keypad moved the paddle up, and the number 8 moved it down.
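
A minimal sketch of this discrete step-key control, using the key bindings from the text (5 up, 8 down) and an assumed paddle height; everything else is illustrative:

```python
# Discrete step-key positioning on an 18x8 display: each keypress moves
# the paddle one pixel, clamped to the display edges.

DISPLAY_HEIGHT = 8      # the facade was 18 pixels wide, 8 tall
PADDLE_HEIGHT = 3       # assumed paddle size in pixels

def handle_keypress(paddle_top, key):
    """Return the new paddle top row after a phone keypad press."""
    if key == "5":                                   # move up
        return max(paddle_top - 1, 0)
    if key == "8":                                   # move down
        return min(paddle_top + 1, DISPLAY_HEIGHT - PADDLE_HEIGHT)
    return paddle_top                                # other keys ignored

paddle = 3
for key in "5558":
    paddle = handle_keypress(paddle, key)
print(paddle)   # 1: moved up three times, then down once
```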

·         Continuous direct translation with camera tracking. Madhavapeddy et al. present camera-based interactions involving tagged interactive GUI elements such as sliders and dials. By manipulating the position and orientation of the phone camera, the user can position a graphical slider or orient a graphical dial. An analogy can be drawn to the classic light pen with a tracking cross: as the light pen moves to a new position, the cross follows the motions of the pen. Tracking may be lost if the pen is moved too fast, but can easily be resumed by repositioning the pen back to the tracking cross. Madhavapeddy et al.’s interactions rely on the tagged GUI widget instead of a cross for tracking (a sketch of this tag-relative tracking follows the next paragraph).

In Madhavapeddy et al.’s positioning technique, the phone is responsible for tracking, and the environment is responsible for displaying and refreshing the tracked images. Other interactions have been created with these roles swapped. For example, smartphones have been augmented with laser pointers, making them suitable for the positioning tasks described by Olsen et al., which use a camera in the environment to track the laser.
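
Returning to the tag-tracked slider above, here is a minimal sketch of the idea: each frame, the phone locates the tagged widget in its camera image, the tag’s apparent motion drives the slider value, and an overly large jump between frames is treated as lost tracking, echoing the light-pen analogy. The tag detector is assumed to exist elsewhere; names, image size, and thresholds are illustrative:

```python
# Tag-relative tracking: the tag's frame-to-frame motion in the camera
# image drives a slider value; a jump larger than MAX_JUMP loses tracking.

MAX_JUMP = 80        # pixels per frame before tracking is declared lost
IMAGE_WIDTH = 320    # assumed camera image width

def update_slider(slider_value, tag_x, last_tag_x):
    """Advance the slider from one camera frame's tag detection.

    Returns (new_slider_value, new_last_tag_x); last_tag_x is None
    while tracking is lost, and is reacquired on the next detection.
    """
    if last_tag_x is not None and abs(tag_x - last_tag_x) > MAX_JUMP:
        return slider_value, None          # moved too fast: tracking lost
    if last_tag_x is not None:
        # Moving the phone right shifts the tag left in the image, so
        # the slider advances opposite to the tag's apparent motion.
        delta = -(tag_x - last_tag_x) / IMAGE_WIDTH
        slider_value = min(max(slider_value + delta, 0.0), 1.0)
    return slider_value, tag_x

value, last_x = 0.5, None
for x in [150, 140, 130]:                  # tag drifting left each frame
    value, last_x = update_slider(value, x, last_x)
print(round(value, 3))                     # slider nudged upward
```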

·         Continuous indirect translation with camera tracking. The C-Blink system allows users to position a cursor on a large display using a mobile phone with a color screen. The user runs a program on the phone that rapidly changes the hue of the phone screen and waves the phone in front of a camera mounted above the large display. The displayed hue sequence encodes an ID to support multiple users. The camera tracks the absolute position of this signal in the camera image to control the cursor on the display.
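
A minimal sketch of the ID-signalling half of this scheme: a small user ID is encoded as a sequence of well-separated hues, and the camera side snaps each observed hue to the nearest codebook entry to recover the ID. The base-3 code and all parameters are illustrative assumptions, not C-Blink’s actual encoding:

```python
# Encode a user ID as a hue sequence on the phone screen; decode it on
# the camera side from noisy hue observations.

HUES = [0, 120, 240]          # three well-separated hues (degrees)

def encode_id(user_id, length=6):
    """Encode a small integer ID as a base-3 hue sequence."""
    seq = []
    for _ in range(length):
        seq.append(HUES[user_id % 3])
        user_id //= 3
    return seq

def decode_id(observed_hues):
    """Recover the ID from hues sampled by the overhead camera."""
    user_id = 0
    for hue in reversed(observed_hues):
        # Snap the noisy hue to the nearest codebook hue
        # (hue wraparound is ignored for simplicity).
        digit = min(range(3), key=lambda i: abs(HUES[i] - hue))
        user_id = user_id * 3 + digit
    return user_id

seq = encode_id(17)
noisy = [h + 8 for h in seq]              # simulated camera hue noise
print(decode_id(noisy))                   # 17
```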

·         Discrete direct translation using a camera image. The locus of attention is on the phone screen. You aim using a crosshair displayed over a live camera image on the mobile phone. To reposition the cursor, you press and release the joystick button while aiming at the desired position with the crosshair. This triggers a grid of tags to be briefly superimposed over the large display’s contents for the camera. The grid is used to derive a perspective-independent coordinate system to determine the selected point with pixel-level accuracy, which is enabled by the special properties of the Visual Code tags. An analogy can be drawn to the classic light pen, with position discretely determined by displaying a raster scan when the user clicks a button on the light pen.

When the raster scan is sensed by the pen, the position of the pen is known because of a tight coupling between the pen clock and display clock. In Point & Shoot, a visual tag grid replaces the functionality of the raster scan. This technique is robust to different display technologies and loose coupling between camera and display. Point & Shoot needs only one visual tag entirely in the camera view to establish a coordinate system, but a grid is used to increase the probability of satisfying this requirement.
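
To make the coordinate-system step concrete, here is a minimal sketch of the underlying geometry: the four corners of one fully visible tag give four correspondences between camera pixels and display pixels, enough to fit a homography through which the crosshair (the camera image center) can be mapped to a display point. The coordinates and the direct linear solve are illustrative, not the Visual Code implementation:

```python
# Fit a homography from four point correspondences, then map the
# crosshair (camera image center) into display coordinates.

import numpy as np

def fit_homography(src, dst):
    """Fit H so that dst ~ H @ src from four (x, y) correspondences."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(a, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def map_point(h, x, y):
    """Apply the homography to a camera-space point."""
    u, v, w = h @ np.array([x, y, 1.0])
    return u / w, v / w

# Four tag corners as seen by the camera (perspective-distorted) and
# their known positions on the large display.
camera_corners = [(100, 110), (220, 100), (230, 210), (95, 205)]
display_corners = [(800, 400), (900, 400), (900, 500), (800, 500)]

h = fit_homography(camera_corners, display_corners)
print(map_point(h, 160, 120))   # crosshair center mapped to the display
```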

·         Continuous indirect translation with camera optical flow. The Sweep interaction technique uses optical-flow image processing, which involves rapidly sampling successive images from a camera phone and sequentially comparing them to determine relative motion in the (x, y) dimensions. No visual tags are required, and the camera doesn’t even need to be pointed at the display. To invoke the Sweep function, you rotate the joystick button downward, which acts as a clutch to indicate to the system that you are actively controlling the cursor.

Pressing the joystick button inward indicates selection or dragging. The phone is waved in the air to control the cursor. You can release the clutch button to reposition your arm, much as a mouse can be lifted and repositioned on the desktop surface. In Sweep mode, you can ignore the screen on the phone and focus your attention on the large display to observe cursor movement.
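
A minimal sketch of the Sweep loop, with a brute-force integer shift search standing in for real optical-flow processing; the frame size, gain, and search range are illustrative:

```python
# Estimate the global (x, y) shift between successive camera frames and
# move the cursor by it, but only while the clutch is engaged.

def estimate_shift(prev, curr, max_shift=2):
    """Find the integer (dx, dy) that best aligns curr to prev."""
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = 0
            for y in range(max_shift, h - max_shift):
                for x in range(max_shift, w - max_shift):
                    diff = prev[y][x] - curr[y + dy][x + dx]
                    err += diff * diff
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

def sweep_step(cursor, prev, curr, clutch_down, gain=10):
    """Move the cursor by the estimated camera motion while clutched."""
    if not clutch_down:
        return cursor                     # arm repositioning: no motion
    dx, dy = estimate_shift(prev, curr)
    # The scene appears to move opposite to the phone, so negate.
    return (cursor[0] - gain * dx, cursor[1] - gain * dy)

# A tiny synthetic frame pair: a bright spot that moves one pixel right.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
prev[4][3] = curr[4][4] = 255

print(sweep_step((400, 300), prev, curr, clutch_down=True))
print(sweep_step((400, 300), prev, curr, clutch_down=False))
```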