Spaces and transformations

by Midfall 2022. 10. 18.

Computer graphics and animation involve transforming data from its defining space into a world space in order to build a synthetic environment. Object data are transformed through a series of spaces until, finally, they are transformed to view the object on a screen. The workhorse transformation representation of graphics is the 4x4 transformation matrix, which can be used to represent combinations of three-dimensional rotations, translations, and scales, as well as perspective projection.

 

A coordinate space can be defined using either a left- or right-handed coordinate system. In a left-handed coordinate system, the x-, y-, and z-axes are arranged according to the left hand; the right-handed coordinate system is organized similarly with respect to the right hand. No series of pure rotations can transform one configuration into the other. Which configuration to use is a matter of convention, and it makes no practical difference. Another arbitrary convention is which axis to use as the up vector; some application areas assume that the y-axis is "up".

 

2.1.1 The display pipeline

The display pipeline refers to the transformation of object data from its original defining space through a series of intermediate spaces until its final mapping onto the screen. The object data are transformed to compute illumination, clip the data to the view volume, and perform the perspective transformation. While an important process that eliminates lines and parts of lines that are not within the viewable space, clipping is not relevant to motion control.

 

The space in which an object is originally defined is referred to as object space. The data in object space are usually centered at the origin and created to lie within some limited standard range, such as -1 to 1. The object, as defined by its data points (vertices), is transformed by a series of rotations, translations, and scales into world space.

 

World space is the space in which light sources and the observer are placed. Observer position is used synonymously with camera position and eye position. The observer parameters include its position and orientation. The orientation is fully specified by the view direction and the up vector, and there are various ways to specify them. Sometimes the view direction is specified by giving a center of interest (COI), in which case the view direction is the vector from the observer, or eye, position to the center of interest. The eye position is also known as the look-from point, and the COI is also known as the look-to point. The default orientation of "straight up" is defined as the observer's up vector being perpendicular to the view direction. A rotation away from this up direction will effect a tilt of the observer's head.
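As a small sketch of the COI convention, the view direction is just the normalized vector from the eye to the center of interest. The helper name below is my own, not from any particular library:

```python
import math

def view_direction(eye, coi):
    """Return the unit view direction from the eye toward the COI."""
    d = [c - e for e, c in zip(eye, coi)]
    length = math.sqrt(sum(x * x for x in d))
    return [x / length for x in d]

# Looking from the origin toward a point on the positive z-axis:
print(view_direction((0, 0, 0), (0, 0, 5)))  # [0.0, 0.0, 1.0]
```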

 

To project the data onto a view plane, the data must be defined relative to the camera in a camera-centric coordinate system (u, v, w): the v-axis is the observer's y-axis, or up vector, and the w-axis is the observer's z-axis, or view vector. The u-axis completes the local coordinate system of the observer. For this discussion, a left-handed coordinate system for the camera is assumed. These vectors can be computed in the right-handed world space by taking the cross-product of the view direction vector and the y-axis to form the u-vector, and then taking the cross-product of the u-vector and the view direction vector to form v.

 

After computing these vectors, they should be normalized in order to form a unit coordinate system at the eye position. A world space data point can then be defined in this coordinate system by taking the dot product of the vector from the eye to the data point with each of the three coordinate system vectors.
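The two cross-products and the dot products described above can be sketched as follows. This is a minimal illustration, assuming a right-handed world space with the y-axis up; the function and variable names are my own:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def camera_basis(view_dir, up_hint=(0.0, 1.0, 0.0)):
    """u = view_dir x up_hint, then v = u x view_dir.

    Passing a user-supplied UP vector as up_hint replaces the y-axis
    in the same computation.
    """
    w = normalize(view_dir)
    u = normalize(cross(w, up_hint))
    v = cross(u, w)              # already unit length since u is perpendicular to w
    return u, v, w

def to_eye_space(point, eye, basis):
    """Dot the eye-to-point vector with each of the three basis vectors."""
    d = tuple(p - e for p, e in zip(point, eye))
    return tuple(dot(d, axis) for axis in basis)
```

For an observer at the origin looking down +z, a point five units ahead lands at (0, 0, 5) in the camera's coordinates, as expected.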

 

Head-tilt information can be given by specifying an angle deviation from the straight-up direction. In this case, a head-tilt rotation matrix can be formed and incorporated in the world-to-eye-space transformation or can be applied to the observer's default u-vector and v-vector.
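Applying the tilt to the default u- and v-vectors amounts to a 2D rotation in the plane they span, since u, v, and w are mutually perpendicular. A hedged sketch, with names of my own choosing:

```python
import math

def tilt(u, v, angle):
    """Rotate u and v by `angle` radians within the plane they span."""
    c, s = math.cos(angle), math.sin(angle)
    u2 = tuple(c * a + s * b for a, b in zip(u, v))
    v2 = tuple(-s * a + c * b for a, b in zip(u, v))
    return u2, v2
```

A 90-degree tilt sends u to the old v direction and v to the old -u direction, which matches the expected head roll about the view vector.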

 

Alternatively, head-tilt information can be given by specifying an up-direction vector. The user-supplied up-direction vector is typically not required to be perpendicular to the view direction, as that would require too much work on the part of the user. Instead, the vector supplied by the user defines the plane in which the up vector lies; the difference between the user-supplied up-direction vector and the up vector is that the up vector, by definition, is perpendicular to the view direction vector. The computation of the perpendicular up vector, v, is the same as before, with the user-supplied up-direction vector, UP, replacing the y-axis.

 

Care must be taken when using a default up vector. Defined as perpendicular to the view vector and in the plane of the view vector and the global y-axis, it is undefined for straight-up and straight-down views. Some observer motions can also produce unanticipated effects. For example, with the default head-up orientation, if the observer has a fixed center of interest and the observer's position arcs directly over the center of interest, then just before and after being directly overhead, the observer's up vector will instantaneously rotate by up to 180 degrees.

 

The field of view (FOV) has to be specified to fully define a viewable volume of world space. This includes an angle of view (or the half angle of view), a near clipping distance, and a far clipping distance. The FOV information is used to set up the perspective projection.

 

The visible area of world space is formed by the observer position and orientation, the angle of view, and the near and far clipping distances. The angle of view defines the angle between the upper and lower clipping planes. If this angle is different from the angle between the left and right clipping planes, then the two are distinguished as the vertical angle of view and the horizontal angle of view. The far clipping distance sets the distance beyond which data are not viewed; this is used to avoid processing data that are too far away. The near clipping distance sets the distance before which data are not viewed. Together these define the view frustum, which contains the data that need to be considered for display.

 

Other view specifications use an additional vector to indicate the orientation of the projection plane, allow an arbitrary viewport to be specified on the plane of projection that is not symmetrical about the view direction (to allow for off-center projections), and allow for parallel projection.

 

The data points defining the objects are usually transformed from world space to eye space. In eye space, the observer is positioned along the z-axis, which allows the perspective scaling to depend only on a point's z-coordinate. The observer is positioned at the origin, looking down the positive z-axis in left-handed space. In eye space, as in world space, lines of sight emanate from the observer position and diverge as they expand into the visible view frustum, whose shape is referred to as a truncated pyramid.

 

The perspective transformation transforms the object's data points from eye space to image space. The perspective transformation takes the observer back to negative infinity in z and makes the lines of sight parallel to each other and to the z-axis. The pyramid-shaped view frustum becomes a rectangular solid, or cuboid, whose opposite sides are parallel. Thus, points that are farther away from the observer in eye space have their x- and y-coordinates scaled down more than points that are closer to the observer. This is sometimes referred to as perspective foreshortening. Visible extents in image space are usually standardized into the -1 to 1 range in x and y and the 0 to 1 range in z. Image space points are then scaled and translated into screen space by mapping the visible ranges in x and y (-1 to 1) into ranges that coincide with the viewing area.
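The foreshortening and the image-to-screen mapping can be illustrated with a simplified calculation (not the full 4x4 matrix): x and y are divided by z and scaled by the half angle of view, and the resulting [-1, 1] image coordinates are mapped to pixels. The function names and the choice of a y-down screen are assumptions of this sketch:

```python
import math

def perspective(point, half_angle):
    """Scale x and y down in proportion to z (left-handed eye space, z forward)."""
    x, y, z = point
    d = 1.0 / math.tan(half_angle)    # distance to a unit-extent view plane
    return (d * x / z, d * y / z)

def to_screen(xy, width, height):
    """Map image-space x, y in [-1, 1] to pixel coordinates (y down)."""
    x, y = xy
    return ((x + 1.0) * 0.5 * width, (1.0 - y) * 0.5 * height)
```

With a 45-degree half angle, a point at (1, 1, 2) foreshortens to (0.5, 0.5): twice as far away in z means half the projected size.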

 

Ray casting differs from the above sequence of transformations in that the act of tracing rays from the observer's position out into world space itself accomplishes the perspective transformation. If the rays are constructed in world space based on pixel coordinates of a virtual frame buffer positioned in front of the observer, then the progression through spaces for ray casting reduces to the object-space-to-world-space transformations. Alternatively, data can be transformed to eye space and, through a virtual frame buffer, the rays can be formed in eye space.
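A possible sketch of forming such rays in eye space: a virtual frame buffer is placed one unit in front of the observer, sized from the half angle of view, and each pixel center yields a unit ray direction. The names and the frame-buffer placement are assumptions of this sketch:

```python
import math

def pixel_ray(px, py, width, height, half_angle):
    """Unit ray direction from the eye through pixel (px, py)."""
    h = math.tan(half_angle)          # half-height of the image plane
    x = (2.0 * (px + 0.5) / width - 1.0) * h * (width / height)
    y = (1.0 - 2.0 * (py + 0.5) / height) * h
    d = (x, y, 1.0)                   # image plane at z = 1, z forward
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)
```

The ray through the center of the frame buffer coincides with the view direction, and rays through corner pixels diverge toward the frustum's edges.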

 

 
