2. Designing for Touch
The primary input on the phone is touch. It is
the main reason that designing for the phone is so different from
designing for other applications.
If you are coming to Windows Phone development from other Microsoft platforms (for example, Silverlight or .NET), it might seem obvious to handle touch by using the built-in mouse events. As you might expect, touch input is mapped onto the mouse events. For example, to handle a tap on the surface of an application, you can simply handle a mouse event:
public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    ContentPanel.MouseLeftButtonUp +=
      new MouseButtonEventHandler(ContentPanel_MouseLeftButtonUp);
  }

  void ContentPanel_MouseLeftButtonUp(object sender,
                                      MouseButtonEventArgs e)
  {
    theText.Text = "Content Panel Tapped";
  }

  // ...
}
Although the built-in mouse events do work, they are often not fine-grained enough for a rich user experience, so it is recommended that you not use them unless you have a very specific reason. Touch is different because we tend to drag, swipe, and pinch the screen very differently than we did (or could) with a mouse. Because the phone supports four simultaneous points of touch, the mouse events fall down on any input that involves more than one finger. Instead of relying on the mouse events, you should use one of the other methods for working with touch (as shown in the following examples).
To help with touch, the phone offers several layers of APIs that get you to the touch surface. At the lowest level, the Touch class can report every touch interaction the user makes. Windows Phone's Touch class is most useful when you want to get as close to the metal as possible. The Touch class has a static event called FrameReported, which is raised as touch interactions happen. With it you can get information about how many touch points are in use (for example, how many fingers are on the touch surface) as well as where the touch points are. The event argument enables you to get at the point information. Here is an example that uses the FrameReported event to show where a touch is being dragged on the surface of the phone:
public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    Touch.FrameReported += Touch_FrameReported;
  }

  void Touch_FrameReported(object sender, TouchFrameEventArgs e)
  {
    var mainTouchPoint = e.GetPrimaryTouchPoint(this);
    if (mainTouchPoint.Action == TouchAction.Move)
    {
      theText.Text = string.Concat("Moving: ",
                                   mainTouchPoint.Position);
    }
  }

  // ...
}
The TouchFrameEventArgs class has several pieces of functionality. The two most important are the GetPrimaryTouchPoint and GetTouchPoints methods. The GetPrimaryTouchPoint method retrieves a TouchPoint object that is relative to a particular UIElement of the design. The primary touch point is determined by the first finger that touches the screen. Being relative means that all touch positions are reported relative to that UIElement. The TouchPoint class can tell you the position and action that are occurring, as shown in the following code sample:
void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
  // Get the main touch point (relative to a UIElement)
  TouchPoint mainTouchPoint = e.GetPrimaryTouchPoint(ContentPanel);

  // Get the position
  Point position = mainTouchPoint.Position;

  // Get the action of the touch
  switch (mainTouchPoint.Action)
  {
    case TouchAction.Move:
      theText.Text = "Moving";
      break;
    case TouchAction.Up:
      theText.Text = "Touch Ended";
      break;
    case TouchAction.Down:
      theText.Text = "Touch Started";
      break;
  }
}
The GetTouchPoints method also retrieves touch information that is relative to a UIElement, but in this case all the current touch points are returned as a collection of TouchPoint objects:
void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
  // Get all the touch points
  TouchPointCollection points = e.GetTouchPoints(ContentPanel);
  theText.Text = string.Concat("#/Touch Points: ", points.Count);
}
Each TouchPoint object in the collection represents a single touch point on the phone. All phones support at least four points of touch. The way you work with the individual touch points from the collection is identical to the TouchPoint you retrieved from the GetPrimaryTouchPoint method.
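As a minimal sketch (reusing the ContentPanel and theText names from the earlier examples), you might enumerate the collection like this:
void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
  // Get every active touch point, relative to ContentPanel
  TouchPointCollection points = e.GetTouchPoints(ContentPanel);

  foreach (TouchPoint point in points)
  {
    // Each point exposes the same Position and Action members
    // as the primary touch point
    if (point.Action == TouchAction.Move)
    {
      theText.Text = string.Concat("Moving: ", point.Position);
    }
  }
}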
Although the Touch.FrameReported event gives you a lot of control, you might want something higher-level so that you can handle simple movement or sizing behavior. To fill that need, Windows Phone also supports manipulations. The idea behind manipulations is to make it easy to handle common manipulations of objects on the screen, such as sizing and moving them. The UIElement class supports these directly through three events, as described in Table 1.
TABLE 1 Manipulation Events

Event                   Description
ManipulationStarted     Raised when a touch manipulation of an element begins.
ManipulationDelta       Raised repeatedly as the manipulation changes (for example, while dragging or pinching).
ManipulationCompleted   Raised when the manipulation ends (including any inertia).
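Of these, only ManipulationDelta and ManipulationCompleted are demonstrated in this section; ManipulationStarted is wired the same way. A minimal sketch (the handler body is illustrative only, and theText is the TextBlock assumed in the earlier examples):
public MainPage()
{
  InitializeComponent();

  ManipulationStarted += MainPage_ManipulationStarted;
}

void MainPage_ManipulationStarted(object sender,
                                  ManipulationStartedEventArgs e)
{
  // Raised once, when the first finger makes contact
  theText.Text = "Manipulation Started";
}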
These events support manipulation of objects on the screen and convey more than raw touch. The basic idea of a manipulation is to be notified about attempts to change objects on the phone by dragging or resizing them. For example, the ManipulationDelta event sends information about the manipulation while it is happening, in the form of a ManipulationDeltaEventArgs object. This argument includes both the cumulative amount of manipulation and the difference between the last delta and the current one. The manipulation amount is defined in a class called ManipulationDelta. The ManipulationDelta class contains two pieces of information: translation and scale. These correlate directly to how transforms work in XAML (specifically, TranslateTransform and ScaleTransform). The amount of translation indicates how far an item has been dragged (or moved). The scale indicates how much the item has been resized using the pinch gesture. For example, to move an object using a manipulation, you might add a TranslateTransform to your design:
...
<Ellipse Fill="Red"
         Width="200"
         Height="200"
         x:Name="theCircle">
  <Ellipse.RenderTransform>
    <TranslateTransform x:Name="theTransform" />
  </Ellipse.RenderTransform>
</Ellipse>
...
With the transform in place, you can handle the ManipulationDelta
event and use the TranslateTransform
to move the element in response to dragging:
public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    ManipulationDelta += MainPage_ManipulationDelta;
  }

  void MainPage_ManipulationDelta(object sender,
                                  ManipulationDeltaEventArgs e)
  {
    // Move (for example, translate) the ellipse based on the delta
    ManipulationDelta m = e.CumulativeManipulation;
    theTransform.X = m.Translation.X;
    theTransform.Y = m.Translation.Y;
  }

  // ...
}
In this particular example, CumulativeManipulation is used to get the entire touch manipulation. The ManipulationDelta's Translation property contains the amount of the translation, but we cannot be sure this manipulation is about a particular element on our page (because we registered for the page's ManipulationDelta event). We could register for just our ellipse's manipulation, but alternatively we could test which container is being manipulated by checking the ManipulationContainer, like so:
void MainPage_ManipulationDelta(object sender,
                                ManipulationDeltaEventArgs e)
{
  if (e.ManipulationContainer == theCircle)
  {
    // Move (for example, translate) the ellipse based on the delta
    ManipulationDelta m = e.CumulativeManipulation;
    theTransform.X = m.Translation.X;
    theTransform.Y = m.Translation.Y;
  }
}
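These examples use CumulativeManipulation, but the event argument also exposes the incremental change since the previous event through its DeltaManipulation property. A minimal sketch that accumulates the increments instead (same theCircle and theTransform as above):
void MainPage_ManipulationDelta(object sender,
                                ManipulationDeltaEventArgs e)
{
  if (e.ManipulationContainer == theCircle)
  {
    // DeltaManipulation holds only the change since the last
    // ManipulationDelta event, so add it to the current offset
    ManipulationDelta m = e.DeltaManipulation;
    theTransform.X += m.Translation.X;
    theTransform.Y += m.Translation.Y;
  }
}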
Pinch and zoom touch gestures work in the same way; you can use a ScaleTransform instead of the TranslateTransform to change the size:
<Ellipse Fill="Red"
         Width="200"
         Height="200"
         x:Name="theCircle">
  <Ellipse.RenderTransform>
    <ScaleTransform x:Name="theTransform"
                    CenterX="100"
                    CenterY="100" />
  </Ellipse.RenderTransform>
</Ellipse>
Then, in the manipulation events, you can simply use the Scale property of the ManipulationDelta instead of Translation, like so:
void MainPage_ManipulationDelta(object sender,
                                ManipulationDeltaEventArgs e)
{
  if (e.ManipulationContainer == theCircle)
  {
    // Size (for example, scale) the ellipse based on the delta
    ManipulationDelta m = e.CumulativeManipulation;
    theTransform.ScaleX = m.Scale.X;
    theTransform.ScaleY = m.Scale.Y;
  }
}
Manipulations also include information about the inertia of the touch gestures. The idea behind inertia is to be able to tell whether the user was still moving when the manipulation ended. Inertia information is important for building more organic interactions. If you have seen how flicking a list on the phone makes it continue to scroll even after the user is no longer touching the screen, that effect is accomplished using inertia.
For example, on the ManipulationCompleted
event you can test for IsInertial
to see whether the manipulation contains inertial velocity information:
void MainPage_ManipulationCompleted(object sender,
                                    ManipulationCompletedEventArgs e)
{
  if (e.IsInertial)
  {
    var m = e.TotalManipulation;
    var velocity = e.FinalVelocities;

    theTransform.X =
      m.Translation.X + (velocity.LinearVelocity.X / 100);
    theTransform.Y =
      m.Translation.Y + (velocity.LinearVelocity.Y / 100);
  }
}
After the code determines that the manipulation is inertial, it can use the FinalVelocities to change the outcome of the translation (in this example). You could also check whether the LinearVelocity is greater than some threshold to determine whether the gesture is a "flick":
void MainPage_ManipulationCompleted(object sender,
                                    ManipulationCompletedEventArgs e)
{
  if (e.IsInertial)
  {
    var velocity = e.FinalVelocities;

    // Is it a right flick?
    if (velocity.LinearVelocity.X > 100)
    {
      // ...
    }
  }
}
The manipulation events specifically handle dragging and scaling, but as we discussed before, there are a number of other types of touch gestures. Although access to manipulations and to the lower-level Touch class helps, for most of your touch interface you would like events for those gestures directly. The UIElement class exposes the most common touch gestures as events. You can see the touch events the UIElement class exposes in Table 2.
TABLE 2 UIElement Touch Events

Event      Description
Tap        Raised when the user briefly touches and releases an element.
DoubleTap  Raised when the user taps an element twice in quick succession.
Hold       Raised when the user touches an element and holds the finger there (for about one second).
Subscribing to these gestures is as simple as wiring up the event:
public partial class MainPage : PhoneApplicationPage
{
  // Constructor
  public MainPage()
  {
    InitializeComponent();

    theCircle.Hold += theCircle_Hold;
  }

  void theCircle_Hold(object sender, GestureEventArgs e)
  {
    // Do a hold
  }
}
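The GestureEventArgs passed to these handlers can also tell you where the gesture occurred through its GetPosition method. A minimal sketch (assuming the theCircle and theText elements used earlier):
void theCircle_Hold(object sender, GestureEventArgs e)
{
  // GetPosition reports where the gesture occurred,
  // relative to the supplied element
  Point where = e.GetPosition(theCircle);
  theText.Text = string.Concat("Held at: ", where);
}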
The UIElement class represents any visual element on a page, so you can wire up these events on any object (for example, Button, ListBox, Grid, Ellipse, and so on). As you work with touch, you will use a variety of these touch-based APIs in your application. When working with common gestures, the element approach is easiest, but when you want more control over the nature of the touch surface, you will have to delve further down into the stack of APIs.
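For instance, because the gesture events are defined on UIElement, one handler can serve elements of different types. A hypothetical sketch (theButton is an assumed Button in the layout):
public MainPage()
{
  InitializeComponent();

  // The same handler works for any UIElement-derived object
  theButton.Tap += Element_Tap;   // a Button
  theCircle.Tap += Element_Tap;   // an Ellipse
}

void Element_Tap(object sender, GestureEventArgs e)
{
  theText.Text = string.Concat("Tapped: ", sender.GetType().Name);
}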