Tutorial
Normals, Visibility, and Shadows
Solutions are also available.
-
Suppose that we have a normal which we wish to transform from object model
coordinates to device coordinates. How can this be done?
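A normal cannot simply be multiplied by the same matrix as the points: under non-uniform scaling that would leave it no longer perpendicular to the surface. The standard fix is to transform normals by the inverse-transpose of the upper-left 3x3 of the transformation matrix. A minimal sketch, assuming NumPy and a column-vector convention (the matrix M here is an illustrative example, not from the question):

```python
import numpy as np

# Illustrative model transform: a non-uniform scale (x stretched by 2).
# Points transform by M, but normals need the inverse-transpose of
# M's upper-left 3x3 block to stay perpendicular to the surface.
M = np.diag([2.0, 1.0, 1.0, 1.0])

normal = np.array([1.0, 1.0, 0.0])
normal /= np.linalg.norm(normal)

N = np.linalg.inv(M[:3, :3]).T       # inverse-transpose of the 3x3 part
n_prime = N @ normal
n_prime /= np.linalg.norm(n_prime)   # renormalise after transforming

# Sanity check: a surface tangent (1,-1,0) maps under M to (2,-1,0),
# and the transformed normal remains perpendicular to it.
```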
-
Back-face culling can be carried out in either VCS or NDCS. Explain why
the criteria for back-face culling are different in VCS and NDCS.
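As a hint for thinking about the two tests, here is a hedged sketch of both, assuming the eye sits at the VCS origin and a right-handed NDCS with +z toward the viewer (conventions vary between systems, so the sign tests below are assumptions):

```python
import numpy as np

def backface_vcs(face_point, face_normal, eye=np.zeros(3)):
    # VCS: the eye is a point, so the test uses the per-face view vector.
    # A face is back-facing when the vector from the eye to the face has
    # a positive dot product with the outward normal.
    return np.dot(face_point - eye, face_normal) > 0.0

def backface_ndcs(p0, p1, p2):
    # NDCS: after the perspective divide, all view rays are parallel to
    # the z axis, so the test reduces to the sign of the z component of
    # the face normal (equivalently, the screen-space winding order).
    n = np.cross(p1 - p0, p2 - p0)
    return n[2] < 0.0   # assumed convention: +z toward the viewer, CCW front faces
```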
-
Will a z-buffer algorithm always produce the same image, regardless of
the order in which the primitives are processed? Will the final values
in the z-buffer always be the same?
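A tiny experiment makes the question concrete. The sketch below (a toy framebuffer, not any particular API) runs the same fragments through a z-buffer in every possible order; with strict depth comparisons and no depth ties, every order produces the same image and depth values here:

```python
import itertools

W, H = 2, 2

def render(fragments):
    # Minimal z-buffer: keep the nearest fragment per pixel.
    depth = [[float('inf')] * W for _ in range(H)]
    colour = [[None] * W for _ in range(H)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:          # strict comparison
            depth[y][x] = z
            colour[y][x] = c
    return colour, depth

frags = [(0, 0, 0.5, 'red'), (0, 0, 0.2, 'blue'), (1, 1, 0.9, 'green')]
images = {str(render(p)) for p in itertools.permutations(frags)}
# For these fragments, all processing orders agree.
```

Thinking about what happens when two fragments land on the same pixel at exactly the same depth is the interesting part of the question.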
-
Suppose we are rendering a rectangle of dimensions a x b, where
a >> b. If we draw two versions of this rectangle, one lying
on its side, and one standing on its end, will they take equal amounts
of time to scan-convert?
-
Shadows are an important feature in modelling any scene.
-
For the following scene, give the projection matrix which produces the
shadow image (on the ground) of points on the tree, given their world-space
coordinates.
-
Determine the projection matrix that maps points to their shadow image
points, cast by a point-light source located at (0, h, 0).
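One way to check an answer is to derive the matrix from similar triangles, assuming the ground is the plane y = 0: a point (x, y, z) lit from (0, h, 0) casts its shadow at (hx/(h-y), 0, hz/(h-y)), which is a single homogeneous matrix. A sketch (the value of h and the test point are illustrative):

```python
import numpy as np

# Shadow projection for a point light at (0, h, 0) onto the plane y = 0.
# Similar triangles give x' = h*x/(h-y) and z' = h*z/(h-y); in
# homogeneous coordinates that is the matrix S below, since
# S @ (x, y, z, 1) = (h*x, 0, h*z, h-y).
h = 10.0
S = np.array([[  h, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0,   h, 0.0],
              [0.0, -1.0, 0.0,  h]])

p = np.array([2.0, 5.0, 3.0, 1.0])   # an example world-space point
q = S @ p
shadow = q / q[3]                    # homogeneous divide
# For this point: (10*2/5, 0, 10*3/5, 1) = (4, 0, 6, 1)
```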
-
Texture mapping is a process where an image is effectively pasted onto
the surface of a polygon. This is a very commonly used technique for adding
interesting surface detail to an otherwise bland polygon. The colour of each
scan-converted pixel in the polygon is determined by looking up a point
in the image. The figure below shows how texture mapping works.
Each vertex is assigned a set of texture coordinates, denoted (u,v),
which serve to define how the image maps onto the polygon. In order to
generate the texture coordinates for other points within the polygon, bilinear
interpolation is commonly used.
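As a sketch of the bilinear step, the helper below interpolates the corner (u, v) coordinates of a quad at a parametric position (s, t) in [0,1]^2; the corner assignments are illustrative (in a scanline rasteriser the same idea is applied incrementally along edges and spans):

```python
def bilerp(c00, c10, c01, c11, s, t):
    # Bilinearly interpolate corner texture coordinates at (s, t):
    # first lerp along the two horizontal edges, then lerp vertically
    # between the two results.
    top = tuple(a + s * (b - a) for a, b in zip(c00, c10))
    bot = tuple(a + s * (b - a) for a, b in zip(c01, c11))
    return tuple(a + t * (b - a) for a, b in zip(top, bot))

# Corners of a quad mapped to the full image:
uv = bilerp((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), 0.25, 0.5)
# → (0.25, 0.5)
```

Note that for perspective projections, interpolating (u, v) linearly in screen space is only an approximation; this sketch ignores perspective correction.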
-
Describe how the texture coordinates can be obtained for points in the
interior of the polygon.
-
Suppose that for each polygon pixel we simply use the image pixel (called
a 'texel') determined by (round(u), round(v)). This is often referred to
as 'point sampling' in graphics. Would this work well? What type of problems
might we expect?
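To make the question concrete, here is a minimal point-sampling sketch (the clamping behaviour and the 2x2 checker texture are illustrative assumptions): when the polygon covers fewer pixels than the texture has texels, rounding simply skips texels, which is where the trouble starts.

```python
def point_sample(image, u, v):
    # Nearest-texel ('point sampling') lookup: round (u, v) to the
    # closest integer texel indices and clamp to the image bounds.
    h, w = len(image), len(image[0])
    x = min(max(int(round(u)), 0), w - 1)
    y = min(max(int(round(v)), 0), h - 1)
    return image[y][x]

# A 2x2 checker texture: each lookup snaps to one texel, so nearby
# (u, v) values can return abruptly different colours.
tex = [[0, 255], [255, 0]]
point_sample(tex, 0.4, 0.0)  # → 0   (rounds to texel (0, 0))
point_sample(tex, 0.6, 0.0)  # → 255 (rounds to texel (1, 0))
```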