What's Going On
xiaoruiL commented on slide_036 of Accelerating Geometric Queries (on 03/24/19)

What does the horizontal arrow line represent in this diagram?

yilong_new commented on slide_083 of Perspective Projection and Texture Mapping (on 03/22/19)

Same question: Are there any quantitative evaluation methods for anti-aliasing algorithms? I saw that most papers only gave examples without quantitative evaluation results.

hteo commented on slide_024 of Course Summary + Graphics at Stanford (on 03/22/19)

Disney also does a bunch of research on how to design these physically fabricated mechanisms: https://la.disneyresearch.com/main-research-area/materials-fabrication/

hteo commented on slide_027 of Efficient Rasterization on Mobile GPUs (on 03/22/19)

Lots of modern mobile UIs have slick animations and such, which actually probably require compositing on the part of the OS. Windows 7 used to do something like this, disabling compositing and falling back to flat graphics in power-saving mode. Why don't phones do this?

hteo commented on slide_033 of Modern Rasterization Techniques (on 03/22/19)

And now people are also looking into ambient occlusion by neural networks: theorangeduck.com/media/uploads/other_stuff/nnao.pdf Pretty cool.

yilong_new commented on slide_053 of Coordinate Spaces and Transforms (on 03/22/19)

If we increase the number of groups, we will need to compute more transformation matrix multiplications as well. But since the matrices are so small and matrix multiplication is heavily optimized in most cases, I think the performance loss is negligible.
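A quick back-of-the-envelope: composing two 4x4 homogeneous transforms is only 64 multiplies and 48 adds, which is tiny next to transforming thousands of vertices. A toy pure-Python version (real engines use SIMD-optimized routines):

```python
# Composing two 4x4 homogeneous transforms: 64 multiplies + 48 adds.
def mat4_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
translate = [[1, 0, 0, 2],
             [0, 1, 0, 3],
             [0, 0, 1, 4],
             [0, 0, 0, 1]]

# Identity composed with a translation is just the translation.
composed = mat4_mul(identity, translate)
```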

hteo commented on slide_083 of Color (on 03/22/19)

There are also other interesting color spaces like CIELAB and xvYCC that have their own strengths and weaknesses. Give them a google, it's actually pretty cool that you can design your own color space to suit your application.

yilong_new commented on slide_038 of Drawing a Triangle (+ Sampling Basics) (on 03/22/19)

@gracie I think it is correct if we set A to be negative and B to be positive
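Assuming the slide's implicit edge equation E(x, y) = Ax + By + C, swapping the two edge endpoints negates A, B, and C together, which flips which half-plane counts as positive -- a tiny check of that sign convention:

```python
# Implicit edge equation through P0 = (x0, y0) and P1 = (x1, y1):
#   E(x, y) = A*x + B*y + C
# with A = y0 - y1, B = x1 - x0, C = x0*y1 - x1*y0.
# Reversing the edge negates A, B, and C, flipping the positive side.
def edge(p0, p1, p):
    a = p0[1] - p1[1]
    b = p1[0] - p0[0]
    c = p0[0] * p1[1] - p1[0] * p0[1]
    return a * p[0] + b * p[1] + c

val = edge((0, 0), (2, 1), (1, 1))      # a point above the edge
flipped = edge((2, 1), (0, 0), (1, 1))  # same point, reversed edge
```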

hteo commented on slide_032 of Dynamics and Time Integration (on 03/22/19)

I know Blender saves some representation space on the CPU by forming bunches of hair, so each explicitly represented hair strand essentially stands in for hundreds of strands.

yilong_new commented on slide_045 of Materials and Lighting (on 03/22/19)

In my opinion both are diffuse models, but the latter one has a texture map, so the BRDF is a hemisphere plus an offset based on its position and the map information -- right?

hteo commented on slide_063 of Kinematics and Motion Capture (on 03/22/19)

There's also conjugate gradient descent and various momentum-based methods that can perform better depending on the smoothness of the objective.
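A toy sketch of the momentum idea on f(x) = x², just to illustrate (the objective and parameters are arbitrary, not from the slides):

```python
# Gradient descent with momentum (heavy-ball style) on f(x) = x^2,
# whose gradient is 2x. The velocity term accumulates past gradients,
# which damps zig-zagging on poorly conditioned objectives.
def momentum_descent(grad, x0, lr=0.1, beta=0.8, steps=100):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)  # accumulate a velocity term
        x = x + v                    # step along the velocity
    return x

x_min = momentum_descent(lambda x: 2 * x, x0=5.0)  # converges toward 0
```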

yilong_new commented on slide_012 of Efficient Rasterization on Mobile GPUs (on 03/22/19)

@ecohen2 The regularity of grid supersampling can cause aliasing artifacts; using randomized samples can help remove these grid-related patterns.
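One common compromise is jittered (stratified) sampling: keep one sample per grid cell, but place it randomly within the cell, so samples stay well spread while the regular pattern is broken up. A toy sketch:

```python
import random

# Jittered supersampling over an n x n grid within a unit pixel:
# one sample per cell, randomly offset inside the cell.
def jittered_samples(n, seed=0):
    rng = random.Random(seed)
    return [((i + rng.random()) / n, (j + rng.random()) / n)
            for i in range(n) for j in range(n)]

samples = jittered_samples(4)  # 4x4 = 16 samples per pixel
```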

xiaoruiL commented on slide_024 of Introduction + Drawing Basics (on 03/21/19)

FYI, for 3D building modeling, Revit (provided by Autodesk) is a commonly used software.

seanxu commented on slide_061 of Image Compression and Image Processing (on 03/21/19)

@CynthiaJia I believe this can be useful when we want to detect horizontal edges and vertical edges.
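For instance, the classic Sobel kernels (a standard example, not necessarily the exact filter on the slide) pick out the two edge orientations:

```python
# Sobel kernels: GX responds to vertical edges (horizontal gradients),
# GY to horizontal edges (vertical gradients).
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

# Minimal 3x3 correlation (interior pixels only, no padding).
def apply3x3(img, k):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A vertical step edge: GX fires strongly, GY stays zero.
img = [[0, 0, 1, 1]] * 4
gx = apply3x3(img, GX)
gy = apply3x3(img, GY)
```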

vnambiar commented on slide_022 of Kinematics and Motion Capture (on 03/21/19)

Here's a really cool presentation on motion matching: https://www.youtube.com/watch?v=KSTn3ePDt50.

This requires minimal cleanup by animators of the animation captured from mocap suits.

vnambiar commented on slide_063 of Accelerating Geometric Queries (on 03/21/19)

Is this an active area of research? Finding new types of spatial geometric interpretations for the objects on the screen?

vnambiar commented on slide_071 of Materials and Lighting (on 03/21/19)

Compute has reached a point where modern games are able to achieve this effect in real time.

vnambiar commented on slide_054 of Image Compression and Image Processing (on 03/21/19)

Are there any hardware optimizations for these filters in graphics cards?

vnambiar commented on slide_015 of Introduction to Animation (on 03/21/19)

To answer @jballouz it's incredibly difficult to animate complex structures such as hair in a realistic way. But I thought Pixar did a great job in Brave. https://www.khanacademy.org/partner-content/pixar/simulation/hair-simulation-101/v/hair-simulation-intro

vnambiar commented on slide_074 of Geometry Processing (on 03/21/19)

Wow for some reason the cow reminds me of line drawings done by Picasso. http://www.pablopicasso.net/drawings/

vnambiar commented on slide_040 of Coordinate Spaces and Transforms (on 03/21/19)

If people are interested in learning about the math group that these transformation matrices belong to, check out https://en.wikipedia.org/wiki/Lie_group.

vnambiar commented on slide_077 of The Rasterization Pipeline (on 03/21/19)

Graphics cards are optimized using SIMD. Ray tracing requires branching, and SIMD units aren't designed to handle branching well at all. If I had to guess, I'm pretty sure RTX cards handle this in dedicated hardware to get their current performance.

vnambiar commented on slide_047 of Coordinate Spaces and Transforms (on 03/21/19)

Does this convention change depending on which API we use?

ehsan commented on slide_019 of Rendering Challenges of VR (on 03/21/19)

Adding the bokeh effect to a non-bokeh image is relatively simple: https://developer.apple.com/documentation/accelerate/vimage/adding_a_bokeh_effect

ehsan commented on slide_035 of Modern Rasterization Techniques (on 03/21/19)

@nphirning definitely agree, it's hard to pinpoint the exact reason, but part of me thinks it's because everything in the scene seems too shiny to be real?

ehsan commented on slide_049 of Introduction to Animation (on 03/21/19)

Is there a way to prove that this matrix that represents a polynomial is always invertible?

ehsan commented on slide_046 of Color (on 03/21/19)

Why are the green screen pixels (or at least why do they seem) much smaller than the blue or red ones?

ehsan commented on slide_029 of Image Compression and Image Processing (on 03/21/19)

I've read that the Wavelet transform is used in JPEG2000, how does that relate to the DCT method here?

ehsan commented on slide_026 of Dynamics and Time Integration (on 03/21/19)

What types of approximations do molecular dynamics simulations like these make? It seems infeasible to solve the associated n-body problem exactly.

yilong_new commented on slide_053 of Dynamics and Time Integration (on 03/21/19)

Link to Yarn-level cloth simulation papers: https://www.cs.cornell.edu/projects/YarnCloth/

yilong_new commented on slide_007 of Kinematics and Motion Capture (on 03/21/19)

In forward kinematics, we just change the "local" transform matrix based on the current configuration and propagate the position change to all of its descendant elements.
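A minimal sketch of that propagation for a planar two-joint chain (toy code, not from the slides): each joint's local rotation composes with its parent's transform, so changing one angle moves every descendant link.

```python
import math

# Forward kinematics for a planar chain: accumulate each joint's local
# rotation into the running transform, then propagate the position
# change down to every descendant joint.
def fk_positions(angles, lengths):
    x = y = theta = 0.0
    positions = []
    for a, l in zip(angles, lengths):
        theta += a                # compose the local rotation
        x += l * math.cos(theta)  # propagate to the child joint
        y += l * math.sin(theta)
        positions.append((x, y))
    return positions

# Two unit links, both joints bent 90 degrees: end effector at (-1, 1).
pos = fk_positions([math.pi / 2, math.pi / 2], [1.0, 1.0])
```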

yilong_new commented on slide_022 of Materials and Lighting (on 03/21/19)

@anon01 Here is an example of the wavelength-intensity distribution of an LCD monitor:


mihirg commented on slide_013 of Rendering Challenges of VR (on 03/21/19)

The new Oculus Rift S has a 1280 x 1440 display per eye — interested to see whether this will make the experience more indistinguishable from reality and when we'll reach a point of diminishing returns on display resolution in VR. Although seems like the main barrier right now is rendering realism rather than display resolution...

mihirg commented on slide_029 of Dynamics and Time Integration (on 03/21/19)

What are some common, non-obvious uses of springs in graphics/games? Isn't the possibility of oscillation/instability in numerical integration environments always a risk?

mihirg commented on slide_066 of Image Compression and Image Processing (on 03/21/19)

Another use case for bilateral filters: refining really low-resolution, noisy depth data to follow edges in higher-resolution color images — https://drive.google.com/file/d/1zFzCaFwkGK1EGmJ_KEqb-ZsRJhfUKN2S/view

mihirg commented on slide_044 of Color (on 03/21/19)

Are there any stats about the "density" of metamers in "color spectrum space"? How much info do our eyes lose? How does this change for people who are colorblind?

mihirg commented on slide_032 of Image Compression and Image Processing (on 03/21/19)

Still a bit confused about why the blocks are so prominent here — sure, the blocks are passed through DCT and even quantized separately, but if no information loss occurs in DCT and they're all hit with the same quantization matrix, why do adjacent blocks of similar color/texture (e.g., on Kayvon's lower forehead) look so different?

mihirg commented on slide_009 of Modern Rasterization Techniques (on 03/21/19)

PatchMatch is super cool and another place it is often used is in stereo (getting depth based on images from multiple cameras, or multiple images taken from the same camera): https://www.microsoft.com/en-us/research/publication/patchmatch-stereo-stereo-matching-with-slanted-support-windows/

mihirg commented on slide_039 of Kinematics and Motion Capture (on 03/21/19)

Example code if you want to start playing around with TrueDepth: https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera

mihirg commented on slide_038 of Efficient Rasterization on Mobile GPUs (on 03/21/19)

I'm honestly pretty shocked that deferred rendering is this beneficial — seems like a LOT of memory overhead (writing everything out to a geometry buffer), wouldn't this be especially problematic given the high cost of memory storage/lookup Kayvon mentioned for mobile devices? Is there a rule of thumb for how rich a scene has to be/how many overlaps on average each pixel needs to have for deferred rendering to make sense?

mihirg commented on slide_006 of Efficient Rasterization on Mobile GPUs (on 03/21/19)

In this vein of real-world optimizations for mobile GPUs, Google Seurat is another one I recently came across: https://www.theverge.com/2017/5/18/15660218/google-seurat-daydream-mobile-vr-rendering-star-wars-io-2017

mihirg commented on slide_033 of Materials and Lighting (on 03/21/19)

I read online that omnidirectional lights don't typically cast shadows (this is true in, e.g., ARKit — https://developer.apple.com/documentation/scenekit/scnlight/1523816-castsshadow) — why is this? Is it for computational reasons (and if so, what are they, because none really come to mind compared to, e.g., a directional light) or physical reasons?

mihirg commented on slide_025 of Course Summary + Graphics at Stanford (on 03/21/19)

And here's how Facebook does their 3D photo feature: http://visual.cs.ucl.ac.uk/pubs/instant3d/instant3d_siggraph_2018.pdf and https://research.fb.com/wp-content/uploads/2017/11/casual3d_siggasia_2017.pdf

mihirg commented on slide_025 of Course Summary + Graphics at Stanford (on 03/21/19)

Another really interesting link on how the iPhone XR does Portrait Mode using DPAF + a neural network given that it only has a single camera: https://blog.halide.cam/iphone-xr-a-deep-dive-into-depth-47d36ae69a81

mihirg commented on slide_049 of Geometric Queries (on 03/21/19)

Do we use mipmapping to pre-filter the shadow map at different resolutions as well? Seems like that could help with this, right?

mihirg commented on slide_079 of Geometry Processing (on 03/21/19)

I was wondering where I had seen the name Delaunay before — realized he created Delaunay triangulation (https://en.wikipedia.org/wiki/Delaunay_triangulation), the standard method for turning point clouds into triangular meshes. Is this sort of Delaunay remeshing equivalent (or effectively similar) to just considering the vertices of the mesh as a point cloud and running Delaunay triangulation, in terms of the overall characteristics and quality of the mesh? Seems like we have a bit more info (local connectivity) here that makes this superior to just starting with a point cloud, but I'm having trouble understanding to what extent we preserve local connectivity patterns/treat them as accurate during the remeshing process.

mihirg commented on slide_052 of Introduction to Geometry (on 03/21/19)

One comment here — seems like a lot of the research in computer vision/3D perception/robotics is moving in the direction of processing point clouds directly since they're the raw data representation returned by a real-world 3D sensor like a LiDAR sensor or RGB-D camera (e.g., many self-driving car companies, like Waymo, are doing deep learning directly on point clouds to identify people, other cars, etc.). The results here are actually super impressive given that point clouds are sparse "sampled" approximations of continuous surfaces. @prathikn and I actually wrote an article on deep learning in 3D (and particularly some state-of-the-art results on point cloud learning) if you're interested in learning more: https://thegradient.pub/beyond-the-pixel-plane-sensing-and-learning-in-3d

mihirg commented on slide_054 of The Rasterization Pipeline (on 03/21/19)

Seems like the big limitation here is that translucent triangles need to be parallel to the camera plane, right? Could we solve this by breaking the triangles into smaller fragments that could have different positions in the back-to-front order? This seems like a pretty major limitation for realism in gaming and AR... what do modern games and GPUs do?