Many of the 2D concepts I have covered so far
still apply in 3D game development, such as applying velocity,
collision detection, and so on. This section focuses on drawing 3D
models and some of the additional considerations related to 3D
rendering.
1. 3D Game Concepts
Programming in
2D is similar in process to hand-drawing a cartoon on paper. You can
draw a scene in 2D that makes objects appear near or in the background,
providing perspective that models the 3D real world. 3D game development
results in the same kind of 2D projection on a flat screen; however, the
path to rendering is very different.
In 3D game development, you create a 3D world using
wireframe models covered with a skin made from 2D textures.
The view into the world is similar to viewing the real world behind a
camera, where the front aspect of an object is visible and the back of
an object is not; however, the back of an object still exists. As an
example, Figure 1 shows a sample 3D model of Pac Man displayed in Caligari trueSpace, with views from different angles.
You can spin, rotate, and zoom in on the model to get
a different view of the environment. Just as in the real world, any
view you pick is translated into a 2D rendering. However, with 3D game
development, as hardware grows more powerful, the 3D models and rendering
come closer and closer to the real world. With the power of matrix
mathematics, 3D vectors, and algorithms, a game developer can fly around
a 3D model of an imaginary world, resulting in an immersive experience.
1.1. Modeling the Real World
For 3D game development, you mathematically create
the camera view on the loaded model. As a user manipulates a game
controller or tilts a phone in a mobile game, the controller or accelerometer readings are measured and the movement is translated into changes to the camera view in the model.
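As a rough sketch of this idea (my own illustration, not the chapter's sample code), the following XNA snippet rebuilds the camera's view matrix each frame from left-thumbstick input; the class name, starting position, and movement speed are arbitrary assumptions, and on a phone an accelerometer reading could be substituted for the thumbstick.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

public class SimpleCamera
{
    // Arbitrary starting position and target, for illustration only.
    Vector3 cameraPosition = new Vector3(0f, 5f, 20f);
    Vector3 cameraTarget = Vector3.Zero;

    public Matrix View { get; private set; }

    public void Update(GameTime gameTime)
    {
        // Read the left thumbstick; an accelerometer reading could be
        // used here instead on a phone.
        Vector2 stick = GamePad.GetState(PlayerIndex.One).ThumbSticks.Left;
        float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

        // Translate the input into camera movement.
        cameraPosition += new Vector3(stick.X, stick.Y, 0f) * 10f * elapsed;

        // Rebuild the view matrix from the new camera position.
        View = Matrix.CreateLookAt(cameraPosition, cameraTarget, Vector3.Up);
    }
}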
A 3D world can be quite expansive, so much so that
trying to render an entire world model will bog down even the most
powerful graphics cards. Just like viewing the real world through a
camera, developers mathematically create a view that consists of a boxed-in
3D area to render, called the Frustum.
The Frustum is bounded by a near plane, a far plane, and left and right planes (along with top and bottom planes) that box in the area of view, as shown in Figure 2.
Notice that some objects are shown outside of the
Frustum; this means that if the scene is rendered, those objects will
not be drawn. If the user moves the camera closer in, the people would
come into view. Likewise, moving the camera toward the cars could begin
to bring the building in the background into view. Notice in Figure 2
that the developer could decide to make the Frustum longer, with the
near plane in front of the crowd of people and the far plane behind the buildings. As
mentioned before, this could overload the graphics card, resulting in
frame rate loss, especially in a high-speed racing game. This is the
essence of 3D game development, building out a model and rendering it in
such a way that user movement feels "natural" within the game.
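To illustrate the culling side of this (a minimal sketch under my own assumptions about the camera position, near/far distances, and bounding sphere, not the chapter's code), XNA's BoundingFrustum can be built from the combined view and projection matrices and used to skip objects that fall outside the view.

using Microsoft.Xna.Framework;

static class FrustumCullingSketch
{
    public static bool IsCarVisible()
    {
        Matrix view = Matrix.CreateLookAt(
            new Vector3(0f, 10f, 50f), Vector3.Zero, Vector3.Up);

        // The projection matrix defines the near and far planes; pushing the
        // far plane out (say 200 to 2000) makes the Frustum longer and forces
        // more of the world to be considered for drawing.
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4,  // field of view
            16f / 9f,            // aspect ratio (assumed widescreen)
            1f,                  // near plane distance
            200f);               // far plane distance

        BoundingFrustum frustum = new BoundingFrustum(view * projection);

        // Draw the object only if its bounding sphere touches the Frustum.
        BoundingSphere carBounds = new BoundingSphere(new Vector3(0f, 0f, -100f), 3f);
        return frustum.Intersects(carBounds);
    }
}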
1.2. 3D Coordinate Systems
The XNA Framework is engineered to work with the
modeling techniques described in the previous section, using 3D coordinate
systems and mathematics to project a view of a 3D world to the game
player. At the heart of 3D is the coordinate
system.
Remember in 2D game development that the 2D origin
(0,0) is in the upper left-hand corner of the screen, with positive X
going left to right and positive Y going top to bottom. In 3D game
development, the origin (0,0,0) can be at the center of a car model, or
it could be at an arbitrary place in the overall 3D world. In reality,
there are multiple 3D origins positioned relative to each
other. A model has its own 3D origin, which might be at the center of the model
or at the bottom of the model, depending on what
makes the most sense for the game at that time. You can rotate around the
model origin, and the model will spin like a top.
Likewise, the overall world will have an origin as
well. You can also rotate a model about the 3D world origin, like a planet
orbiting the sun in our Solar System.
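The difference between the two rotations comes down to the order in which the world matrix is composed. The sketch below (an illustration with an assumed Model named carModel and an arbitrary angle and offset, not the chapter's code) shows both: rotate first and then translate to spin in place, or translate first and then rotate to orbit the world origin.

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class OriginSketch
{
    public static void Draw(Model carModel, Matrix view, Matrix projection, float angle)
    {
        // Spin like a top: rotate about the model's own origin,
        // then translate the model out into the world.
        Matrix spinInPlace =
            Matrix.CreateRotationY(angle) * Matrix.CreateTranslation(20f, 0f, 0f);

        // Orbit like a planet: translate away from the world origin first,
        // then rotate, so the model circles (0, 0, 0).
        Matrix orbitWorldOrigin =
            Matrix.CreateTranslation(20f, 0f, 0f) * Matrix.CreateRotationY(angle);

        carModel.Draw(spinInPlace, view, projection);
        carModel.Draw(orbitWorldOrigin, view, projection);
    }
}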
One aspect of coordinate systems that needs to be
consistent is how the three axes (X, Y, Z) are oriented. In 3D game
development, this is called either a right-handed or left-handed
coordinate system. Figure 3 shows the two coordinate systems taken from the DirectX documentation on MSDN.
So the idea is that you can use your hand to remember the
direction of the positive Z axis. Point the fingers of the corresponding
hand in the direction of the positive X axis and curl them toward the
positive Y axis; your thumb then points in the direction of the positive Z axis.
The XNA Framework uses a right-handed
coordinate system, meaning that the Z axis is positive coming out of the
screen. There are many programming models, including DirectX, that use
left-handed systems, so be cognizant of the coordinate convention when
reviewing sample code and documentation.
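As a quick sanity check (an illustrative snippet of my own, not from the framework documentation), crossing the positive X and Y axes in XNA yields positive Z pointing out of the screen, which is exactly the right-hand rule described above.

using Microsoft.Xna.Framework;

static class HandednessCheck
{
    public static bool XnaIsRightHanded()
    {
        // (1, 0, 0) x (0, 1, 0) = (0, 0, 1) in a right-handed system.
        Vector3 z = Vector3.Cross(Vector3.Right, Vector3.Up);

        // XNA names (0, 0, 1) Vector3.Backward: positive Z points out of
        // the screen toward the viewer, matching the right-hand rule.
        return z == Vector3.Backward;
    }
}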