In the lecture video, it was stated there is a mistake on the slide, but the video doesn't capture the people pointing to the slides, so it's not entirely clear what's wrong.

Is it that the two middle images should be the same?

The slide is correct. It's illustrating OpenGL's clip space, which is the cube from [-1,-1,-1] to [1,1,1]. Given a point P in 3D homogeneous coordinates (3D-H), apply the perspective transform, then move from homogeneous space to 3D clip space (divide by w).
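A minimal numeric sketch of that pipeline. The matrix below is a hypothetical simplified perspective matrix (it just copies z into w), not OpenGL's exact projection matrix; the point is only to show the transform followed by the divide-by-w step.

```python
import numpy as np

# Hypothetical simplified perspective matrix: copies z into w.
# Not OpenGL's exact projection matrix.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],  # w <- z
])

point = np.array([0.5, -0.25, 2.0, 1.0])  # point P in 3D-H (w = 1)
clip = P @ point                          # homogeneous coordinates
ndc = clip[:3] / clip[3]                  # perspective divide by w
print(ndc)  # a visible point lands inside the [-1, 1]^3 cube
```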

Good online discussions:

https://community.khronos.org/t/normalized-device-coordinates/60709/5

For the boolean union, why is f(x) = min(d1(x)) + min(d2(x)) rather than min(d1(x), d2(x))?

Is the cube in clip coordinates a unit cube? The length of each side is 2.

This slide gives a sketch of an explanation for why attribute/w is linear in 2D screen coordinates. This is important because the rasterizer can simply set up a linear equation per attribute f, which allows it to compute f/w at any screen sample point (x, y).

Note that for simplicity, the perspective projection matrix on this slide is a simple one; it's not the exact matrix that you would use to convert a point into OpenGL's normalized device coordinates, which you can see here. But notice that using a real OpenGL perspective projection matrix would only change the coefficients in front of `x_2d` and `y_2d` in the final equation. The main point, that f/w is an affine function of `x_2d` and `y_2d`, still holds.
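A small numeric check of this property, using a hypothetical simple projection of the kind described here (x_2d = x/z, with w = z): linearly interpolating f/w and 1/w in screen space, then dividing, recovers the perspective-correct attribute value.

```python
import numpy as np

# Two endpoints of a segment in camera space, each with an attribute f.
p0 = np.array([0.0, 0.0, 1.0]); f0 = 10.0
p1 = np.array([4.0, 0.0, 4.0]); f1 = 50.0

def project(p):
    # Hypothetical simple projection: x_2d = x/z, and w = z.
    return p[0] / p[2], p[2]

x0, w0 = project(p0)
x1, w1 = project(p1)

# True midpoint of the 3D segment, and its true attribute value:
pm = 0.5 * (p0 + p1); fm = 0.5 * (f0 + f1)
xm, _ = project(pm)

# Screen-space interpolation parameter for the projected midpoint:
t = (xm - x0) / (x1 - x0)

# Interpolate f/w and 1/w *linearly in screen space*, then divide:
f_over_w = (1 - t) * (f0 / w0) + t * (f1 / w1)
one_over_w = (1 - t) * (1 / w0) + t * (1 / w1)
print(f_over_w / one_over_w, fm)  # both are 30.0: the recovery is exact
```

Note that interpolating f directly in screen space (without the w trick) would give the wrong answer here, which is exactly why f/w being affine in screen coordinates matters.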

Yes, it is now fixed. Thanks!

Is the *squaring* of abs(x1 - x2) a typo?

@fenmax. Fixed. Thanks.

@cobaltblue, this is a visualization of the footprint of a pixel in texture space. As we zoom in to the surface a single screen pixel corresponds to a tiny part of the texture. Consider putting your eye right up next to your desk. You can only see a very small fraction of the entire desk (represented by the small purple box above).

Now imagine that you zoom way out. The footprint of a single pixel gets bigger in texture space. Consider a scene where an object is so small that it falls within a single pixel on screen. In this case, a single screen pixel would span all of texture space.
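To make this concrete, here's a hedged sketch of how the footprint size drives mip-level selection; the function name and exact formula are illustrative, not any particular renderer's code. The footprint is measured from the screen-space derivatives of the texture coordinates: a tiny footprint (zoomed in) stays at level 0, a large footprint (zoomed out) maps to a coarser level.

```python
import math

def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_size):
    # Length of the pixel footprint along each screen axis, in texels:
    lx = math.hypot(du_dx * tex_size, dv_dx * tex_size)
    ly = math.hypot(du_dy * tex_size, dv_dy * tex_size)
    footprint = max(lx, ly)
    # One mip level per doubling of the footprint; clamp at level 0.
    return max(0.0, math.log2(footprint))

# Zoomed in: footprint is under one texel -> finest level (magnification).
print(mip_level(0.0005, 0.0, 0.0, 0.0005, 1024))  # 0.0
# Zoomed out: footprint spans ~10 texels -> a coarser level.
print(mip_level(0.01, 0.0, 0.0, 0.01, 1024))
```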

I'm slightly confused about this slide. When we zoom in, aren't we supposed to sample more from the texture because we render more of the image, and vice versa for far-away objects? If so, why do the shapes get bigger as we head from upsampling to downsampling?

reminded me of this really cool project: http://www.lofibucket.com/articles/oscilloscope_quake.html

There's a typo on the slide:

L_i(x, y+1) = L_i(x, y) - dX_i

i.e. the sign should be flipped.
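A quick check of that incremental update, assuming the common edge-function convention L_i(x, y) = (x - X_i)·dY_i - (y - Y_i)·dX_i (the slide's exact convention may differ):

```python
# Edge function under the convention L(x, y) = (x - X)*dY - (y - Y)*dX.
def edge(x, y, X, Y, dX, dY):
    return (x - X) * dY - (y - Y) * dX

X, Y, dX, dY = 1.0, 2.0, 3.0, 5.0
x, y = 7.0, 4.0

# Stepping one pixel down in y changes L by -dX, not +dX:
assert edge(x, y + 1, X, Y, dX, dY) == edge(x, y, X, Y, dX, dY) - dX
print("sign check passed")
```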

Note to students: we came back to this question and answered it in Lecture 2:

See slide 23 of Lecture 2.

In short: that it is pointing downward on screen (assuming that your triangles follow a consistent counter-clockwise winding on screen).

Here's a good reference. Look for the section called "fill rules": https://fgiesen.wordpress.com/2013/02/08/triangle-rasterization-in-practice/
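Here's a hedged sketch of the "top-left" classification that article describes, assuming counter-clockwise winding with y increasing downward (check the article for the exact sign conventions, which depend on your coordinate system):

```python
# Illustrative top-left edge test; signs assume CCW winding with y down.
def is_top_left(start, end):
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    is_top = (dy == 0) and (dx < 0)   # exactly horizontal, pointing left
    is_left = dy > 0                  # edge goes down the screen
    return is_top or is_left

print(is_top_left((4, 0), (0, 0)))  # horizontal, points left -> top edge
print(is_top_left((0, 0), (0, 4)))  # goes down -> left edge
print(is_top_left((0, 4), (4, 0)))  # goes up and right -> neither
```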

What does it mean for an edge to be "on the left side of the triangle"? I thought of a couple of different ways an edge might be considered "on the left":

1. The edge's midpoint is left of the triangle's centroid.
2. If there is only one left-most point, then the two edges that share the left-most point are left edges.
3. If there is only one right-most point, then the edge that doesn't have the right-most point is the left edge.

Whichever way we choose to interpret it, triangle 1 definitely does not cover the pixel and triangle 4 definitely does. If we take the interpretation that any pixel touched by a triangle is covered, then both 2 and 3 are covered. If we take the center of the pixel as its "location" and compute whether that center is inside the triangle, only triangle 3 covers the pixel.
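The pixel-center interpretation can be sketched as follows (the triangle coordinates and helper names here are hypothetical, just to show the test):

```python
# Signed-area (edge function) of point p relative to directed edge a->b.
def edge(a, b, p):
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers(tri, px, py):
    center = (px + 0.5, py + 0.5)  # sample at the pixel center
    a, b, c = tri
    e0, e1, e2 = edge(a, b, center), edge(b, c, center), edge(c, a, center)
    # Inside iff all three edge functions agree in sign
    # (ties on an edge would need a fill rule; ignored here for brevity).
    return (e0 > 0 and e1 > 0 and e2 > 0) or (e0 < 0 and e1 < 0 and e2 < 0)

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(covers(tri, 0, 0))  # center (0.5, 0.5) is inside  -> True
print(covers(tri, 3, 3))  # center (3.5, 3.5) is outside -> False
```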

I'm confused about the Bezier curve's interpolation property. I thought that the only control points a Bezier curve interpolates are its endpoints?