Windows Phone 8 : Developing for the Phone - The Phone Experience (part 2) - Designing for Touch

2. Designing for Touch

The primary input on the phone is touch. It is the main reason that designing for the phone is so different from designing for other applications. 

If you are coming to Windows Phone development from other Microsoft platforms (for example, Silverlight or .NET), it might seem obvious to handle touch by using the built-in mouse events. Touch input is mirrored in the mouse events, as you might expect. For example, to handle a tap on the surface of an application, you can simply handle the mouse event:

public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    ContentPanel.MouseLeftButtonUp +=
      new MouseButtonEventHandler(ContentPanel_MouseLeftButtonUp);

  }

  void ContentPanel_MouseLeftButtonUp(object sender,
                                      MouseButtonEventArgs e)
  {
    theText.Text = "Content Panel Tapped";
  }

  // ...
}

Although the built-in mouse events do work, they are often not fine-grained enough for a rich user experience, so you should not use them unless you have a very specific reason. Touch is different because we tend to drag, swipe, and pinch the screen in ways we never did (or could) with a mouse. The phone supports at least four simultaneous touch points, and the mouse events fall down on any interaction that involves more than one finger. Instead of relying on the mouse events, you should use one of the other methods for working with touch, described next.


To help with touch, the phone has several layers of APIs for getting at the touch surface. At the lowest level, the Touch class can report every touch interaction the user makes; it is most useful when you want to get as close to the metal as possible. The Touch class has a static event called FrameReported, which is raised as touch interaction happens. With it you can find out how many touch points are in use (that is, how many fingers are on the touch surface) as well as where those touch points are. The event argument gives you access to the point information. Here is an example that uses the FrameReported event to show where a touch is being dragged on the surface of the phone:

public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    Touch.FrameReported += Touch_FrameReported;
  }

  void Touch_FrameReported(object sender, TouchFrameEventArgs e)
  {
    var mainTouchPoint = e.GetPrimaryTouchPoint(this);
    if (mainTouchPoint.Action == TouchAction.Move)
    {
      theText.Text = string.Concat("Moving: ",
                                   mainTouchPoint.Position);
    }
  }

  // ...
}

The TouchFrameEventArgs class exposes several pieces of functionality. The two most important are the GetPrimaryTouchPoint and GetTouchPoints methods. The GetPrimaryTouchPoint method retrieves a TouchPoint object that is relative to a particular UIElement in the design. The primary touch point is determined by the first finger that touches the screen. Being relative means that all touch positions are reported relative to that UIElement. The TouchPoint class can tell you the position and the action that are occurring, as shown in the following code sample:

void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
  // Get the main touch point (relative to a UIElement)
  TouchPoint mainTouchPoint = e.GetPrimaryTouchPoint(ContentPanel);

  // Get the position
  Point position = mainTouchPoint.Position;

  // Get the Action of the Touch
  switch (mainTouchPoint.Action)
  {
    case TouchAction.Move:
      theText.Text = "Moving";
      break;
    case TouchAction.Up:
      theText.Text = "Touch Ended";
      break;
    case TouchAction.Down:
      theText.Text = "Touch Started";
      break;
  }
}

The GetTouchPoints method also retrieves touch information that is relative to a UIElement, but in this case all the current touch points are returned as a collection of TouchPoint objects:

void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{

  // Get all the touch points
  TouchPointCollection points = e.GetTouchPoints(ContentPanel);

  theText.Text = string.Concat("#/Touch Points: ", points.Count);

}

Each TouchPoint object in the collection represents a single touch point on the phone. All phones support at least four points of touch. The way you work with the individual touch points from the collection is identical to the TouchPoint you retrieved from the GetPrimaryTouchPoint method.
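
For instance, here is a minimal sketch of looping over that collection to inspect each finger individually; it assumes the same ContentPanel and theText elements used in the earlier listings, and the handler name is hypothetical:

void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
  // Get every active touch point, relative to the content panel
  TouchPointCollection points = e.GetTouchPoints(ContentPanel);

  var builder = new System.Text.StringBuilder();
  foreach (TouchPoint point in points)
  {
    // Each point reports its own action and position,
    // just like the primary touch point does
    builder.AppendLine(string.Concat(point.Action, ": ", point.Position));
  }

  theText.Text = builder.ToString();
}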

Although the Touch.FrameReported event gives you a lot of control, you might want something higher-level for handling simple movement or sizing behavior. To fill that need, Windows Phone also supports manipulations. The idea behind manipulations is to make it easy to handle the common ways users manipulate objects on the screen, such as moving and sizing them. The UIElement class supports this directly through three events, described in Table 1; a brief wiring sketch follows the table.

TABLE 1 Manipulation Events

Event                     Description
ManipulationStarted       Raised when the user begins a manipulation of the element (a touch starts to drag or pinch it).
ManipulationDelta         Raised repeatedly while the manipulation is in progress, reporting how far the element has been moved or resized so far.
ManipulationCompleted     Raised when the manipulation ends, including information about any inertia as the fingers leave the screen.
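
Here is a minimal sketch of wiring all three events on the page itself (any UIElement works the same way); it assumes the theText TextBlock from the earlier listings, and the handler names are hypothetical. The listings that follow fill in realistic ManipulationDelta and ManipulationCompleted logic:

public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    // The page, like any UIElement, exposes all three manipulation events
    ManipulationStarted += MainPage_ManipulationStarted;
    ManipulationDelta += MainPage_ManipulationDelta;
    ManipulationCompleted += MainPage_ManipulationCompleted;
  }

  void MainPage_ManipulationStarted(object sender,
                                    ManipulationStartedEventArgs e)
  {
    theText.Text = "Manipulation Started";
  }

  void MainPage_ManipulationDelta(object sender,
                                  ManipulationDeltaEventArgs e)
  {
    theText.Text = "Manipulation Changing";
  }

  void MainPage_ManipulationCompleted(object sender,
                                      ManipulationCompletedEventArgs e)
  {
    theText.Text = "Manipulation Completed";
  }

  // ...
}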

These events support manipulation of objects on the screen, which is more than raw touch reporting. The basic idea of a manipulation is to be notified about attempts to change objects on the phone by dragging or resizing them. For example, the ManipulationDelta event reports information about the manipulation while it is happening in the form of a ManipulationDeltaEventArgs argument. This argument includes both the cumulative amount of manipulation and the difference between the last delta and the current one. The manipulation amount is expressed by a class called ManipulationDelta. The ManipulationDelta class contains two pieces of information: translation and scale. These correspond directly to how transforms work in XAML (TranslateTransform and ScaleTransform, specifically). The translation indicates how far an item has been dragged (or moved). The scale indicates how much the item has been resized using the pinch gesture. For example, to move an object using a manipulation, you might add a TranslateTransform to your design:

...
<Ellipse Fill="Red"
         Width="200"
         Height="200"
         x:Name="theCircle">
  <Ellipse.RenderTransform>
    <TranslateTransform x:Name="theTransform" />
  </Ellipse.RenderTransform>
</Ellipse>
...

With the transform in place, you can handle the ManipulationDelta event and use the TranslateTransform to move the element in response to dragging:

public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    ManipulationDelta += MainPage_ManipulationDelta;
  }

  void MainPage_ManipulationDelta(object sender,
                                  ManipulationDeltaEventArgs e)
  {

    // Move (for example Translate) the ellipse based on delta
    ManipulationDelta m = e.CumulativeManipulation;
    theTransform.X = m.Translation.X;
    theTransform.Y = m.Translation.Y;

  }

  // ...

}

In this particular example, CumulativeManipulation is used to get the entire manipulation so far. Its Translation property contains the amount of the translation, but we cannot be sure the manipulation applies to a particular element on our page, because we registered for the page's ManipulationDelta event. We could register for just the ellipse's manipulation events (a sketch of that approach follows the next listing), or we could check which container is being manipulated by testing the ManipulationContainer property, like so:

void MainPage_ManipulationDelta(object sender,
                                ManipulationDeltaEventArgs e)
{
  if (e.ManipulationContainer == theCircle)
  {
    // Move (for example Translate) the ellipse based on delta
    ManipulationDelta m = e.CumulativeManipulation;
    theTransform.X = m.Translation.X;
    theTransform.Y = m.Translation.Y;
  }

}
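
Alternatively, here is a minimal sketch of the element-level approach mentioned above; it assumes the same theCircle ellipse and theTransform transform from the XAML snippet, and the handler name is hypothetical. Because the event is wired on the ellipse itself, no container test is needed:

public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    // Subscribe on the ellipse instead of the page so the event
    // fires only for manipulations that start on the ellipse
    theCircle.ManipulationDelta += theCircle_ManipulationDelta;
  }

  void theCircle_ManipulationDelta(object sender,
                                   ManipulationDeltaEventArgs e)
  {
    // No ManipulationContainer check needed here
    ManipulationDelta m = e.CumulativeManipulation;
    theTransform.X = m.Translation.X;
    theTransform.Y = m.Translation.Y;
  }

  // ...
}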

Pinch and zoom gestures work in the same way; you can use a ScaleTransform instead of the TranslateTransform to change the size:

<Ellipse Fill="Red"
         Width="200"
         Height="200"
         x:Name="theCircle">
  <Ellipse.RenderTransform>
    <ScaleTransform x:Name="theTransform"
                    CenterX="100"
                    CenterY="100"/>
  </Ellipse.RenderTransform>
</Ellipse>

Then, in the manipulation events, you can simply use the Scale property of the ManipulationDelta instead of Translation, like so:

void MainPage_ManipulationDelta(object sender,
                                ManipulationDeltaEventArgs e)
{
  if (e.ManipulationContainer == theCircle)
  {
    // Size (for example Scale) the ellipse based on delta
    ManipulationDelta m = e.CumulativeManipulation;
    theTransform.ScaleX = m.Scale.X;
    theTransform.ScaleY = m.Scale.Y;
  }
}

Manipulations also include information about the inertia of the touch gestures. The idea behind inertia is to be able to tell whether the user's fingers were still moving when the manipulation ended. Inertia information is important for creating more organic interactions with users. If you have seen how flicking a list on the phone keeps it scrolling even after the finger has left the screen, that effect is accomplished using inertia.

For example, in the ManipulationCompleted event you can test IsInertial to see whether the manipulation contains inertial velocity information:

void MainPage_ManipulationCompleted(object sender,
                                    ManipulationCompletedEventArgs e)
{
  if (e.IsInertial)
  {
    var m = e.TotalManipulation;
    var velocity = e.FinalVelocities;
    theTransform.X =
      m.Translation.X + (velocity.LinearVelocity.X / 100);
    theTransform.Y =
      m.Translation.Y + (velocity.LinearVelocity.Y / 100);
  }
}

After the code determines that the manipulation is inertial, it can use the FinalVelocities to change the outcome of the translation (as in this example). You could also check whether the LinearVelocity exceeds some threshold to determine whether the gesture is a "flick":

void MainPage_ManipulationCompleted(object sender,
                                    ManipulationCompletedEventArgs e)
{
  if (e.IsInertial)
  {
    var velocity = e.FinalVelocities;

    // Is it a Right Flick?
    if (velocity.LinearVelocity.X > 100)
    {
      // ...
    }
  }
}

The manipulation events handle dragging and scaling specifically, but as discussed earlier, there are a number of other touch gestures. Although having access to manipulations and the lower-level Touch class helps, for most of your touch interface you will want to handle events for those gestures directly.

The UIElement class exposes events for the most common touch gestures; you can see them in Table 2.

TABLE 2 UIElement Touch Events

Event         Description
Tap           Raised when the user touches an element and releases quickly in one spot.
DoubleTap     Raised when the user taps the same element twice in quick succession.
Hold          Raised when the user touches an element and holds the finger in place briefly.

Wiring up to these events is as simple as handling any other event:

public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    theCircle.Hold += theCircle_Hold;
  }

  void theCircle_Hold(object sender, GestureEventArgs e)
  {
    // Do a hold
  }
}

The UIElement class represents any visual element on a page, so you can wire up these events on any object (for example, a Button, ListBox, Grid, or Ellipse). As you work with touch, you will use a variety of these touch-based APIs in your applications. When you are working with common gestures, the element-level events are the easiest approach, but when you want more control over the nature of the touch surface, you will have to delve further down the stack of APIs.
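
As an example, here is a minimal sketch of wiring different gesture events on different kinds of elements; it reuses the ContentPanel grid, theCircle ellipse, and theText text block from the earlier listings, and the handler names are hypothetical:

public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    // The same gesture events are available on any UIElement
    ContentPanel.Tap += ContentPanel_Tap;        // a Grid
    theCircle.DoubleTap += theCircle_DoubleTap;  // an Ellipse
    theCircle.Hold += theCircle_Hold;
  }

  void ContentPanel_Tap(object sender, GestureEventArgs e)
  {
    theText.Text = "Content Panel Tapped";
  }

  void theCircle_DoubleTap(object sender, GestureEventArgs e)
  {
    theText.Text = "Circle Double-Tapped";
  }

  void theCircle_Hold(object sender, GestureEventArgs e)
  {
    theText.Text = "Circle Held";
  }
}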
