CS 184: Computer Graphics and Imaging, Spring 2023

Project 1: Rasterizer

Rahul Shah and Jeffrey Tan, CS184-pandasterizer



Overview


Section I: Rasterization

Part 1: Rasterizing single-color triangles





Part 2: Antialiasing triangles

Extra Credit: We tried supersampling with a different method: simply averaging the colors of the sample points that lie within the triangle. This is much simpler, but it does not antialias as well as supersampling at higher sample rates. Other approaches we tried ended up producing identical results; in particular, we replaced the uniform grid of sample points within each pixel with a low-discrepancy Halton sequence. Since the resulting images were pixel-for-pixel identical, we kept uniform-grid supersampling in our final submission.
Sample rate 1.
Sample rate 9.
Sample rate 16.
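The Halton-sequence experiment mentioned above can be sketched as follows. This is a standalone illustration, not our actual project code; the function names here are our own:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Radical inverse of index i in the given base: the building block of
// the Halton sequence. Digits of i are mirrored across the radix point.
double radical_inverse(int i, int base) {
    double result = 0.0, f = 1.0 / base;
    while (i > 0) {
        result += f * (i % base);
        i /= base;
        f /= base;
    }
    return result;
}

// Generate n low-discrepancy sample offsets inside a unit pixel,
// using base 2 for x and base 3 for y (the usual 2D Halton pairing).
std::vector<std::pair<double, double>> halton_samples(int n) {
    std::vector<std::pair<double, double>> pts;
    for (int i = 1; i <= n; ++i)
        pts.push_back({radical_inverse(i, 2), radical_inverse(i, 3)});
    return pts;
}
```

Each offset is added to the pixel's corner to produce a sample location, replacing the uniform-grid offsets in the supersampling loop.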




Part 3: Transforms





Section II: Sampling

Part 4: Barycentric coordinates





Part 5: "Pixel sampling" for texture mapping

Pixel sampling is the process of determining which texel in the texture image to read for each screen pixel. We first compute the pixel's (u, v) texture coordinates by interpolating the triangle vertices' texture coordinates with barycentric coordinates, then scale (u, v) by the texture's width and height to get continuous texel coordinates. With nearest sampling, we round to the nearest texel and return its color. With bilinear sampling, we take the four texels of the unit square containing the point and lerp three times: twice along the u direction, then once along the v direction, so the four texel colors are weighted by their proximity to the sample point.
Nearest at rate 1.
Bilinear at rate 1.
Nearest at rate 16.
Bilinear at rate 16.
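The three-lerp bilinear scheme described above can be sketched as follows. This is a self-contained illustration using a simple row-major color buffer rather than the project's Texture class:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Color { double r, g, b; };

// Linear interpolation between two colors.
static Color lerp(double t, Color a, Color b) {
    return { a.r + t * (b.r - a.r),
             a.g + t * (b.g - a.g),
             a.b + t * (b.b - a.b) };
}

// Bilinear sample of a w x h texture (row-major) at (u, v) in [0,1]^2:
// two lerps along u, then one along v -- three lerps total.
Color sample_bilinear(const std::vector<Color>& tex, int w, int h,
                      double u, double v) {
    // Map uv to continuous texel space and find the surrounding unit square.
    double x = u * (w - 1), y = v * (h - 1);
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
    double s = x - x0, t = y - y0;
    Color top = lerp(s, tex[y0 * w + x0], tex[y0 * w + x1]);
    Color bot = lerp(s, tex[y1 * w + x0], tex[y1 * w + x1]);
    return lerp(t, top, bot);
}
```

Nearest sampling is the degenerate case of this: round x and y and return that single texel's color directly.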




Part 6: "Level sampling" with mipmaps for texture mapping

Level sampling uses mipmaps of varying "levels," or resolutions. Each additional level downsamples the previous one by a factor of 4 (2 per dimension). Mipmapping essentially acts as a precomputed look-up table for applying the texture at multiple resolutions: sampling from an appropriately sized level is faster than filtering the full-resolution texture at render time, at the cost of extra memory (about 33% more for RGB images, since the level sizes sum to 1/4 + 1/16 + ... ≈ 1/3 of the original) to store the mipmap pyramid. Our implementation first determines which mipmap level to use from the (u, v) coordinates, which tell us where the sample points land on the texture. By taking the longer norm of the two uv difference vectors (formed from the three sample points) scaled to texel units, we found L and took log_2(L) to get the mipmap level as a float. For L_ZERO, the level is hardcoded to 0. For L_NEAREST, we round the float level to the nearest integer. For L_LINEAR, we sample from both the floor and the ceiling of the level and linearly interpolate the resulting colors by the fractional part of the float level.
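The level computation can be sketched as follows. This assumes the uv difference vectors have already been scaled by the texture width and height (texel units); the function name and clamping details are illustrative, not taken from the project skeleton:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Continuous mipmap level from the per-pixel uv differences
// (du/dx, dv/dx) and (du/dy, dv/dy), already in texel units:
// L is the longer of the two difference vectors, level = log2(L),
// clamped so we never go below level 0.
double mipmap_level(double du_dx, double dv_dx,
                    double du_dy, double dv_dy) {
    double Lx = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);
    double Ly = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);
    double L  = std::max(Lx, Ly);
    // Guard against log2(0) for degenerate footprints.
    return std::max(0.0, std::log2(std::max(L, 1e-9)));
}
```

L_NEAREST then rounds this value, while L_LINEAR blends the floor and ceiling levels by its fractional part.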




Level 0, Nearest Pixel
Level 0, Bilinear Pixel
Nearest Level, Nearest Pixel
Nearest Level, Bilinear Pixel
Bilinear Level, Nearest Pixel
Bilinear Level, Bilinear Pixel
Extra Credit: See images above!



Section III: Art Competition

We are participating in the optional art competition, with the following image!

Part 7: Draw something interesting!

Extra Credit: We made this by supersampling in an intentionally incorrect way: we computed the proportion of each pixel covered by the triangle and scaled the pixel's color by that proportion, directly in the rasterize function without barycentric coordinates. We then swapped the pairing of alpha, beta, and gamma for a cool effect.

Link to this website!