

pslui88
How are these weight masks obtained? An initial guess is that a heuristic could be used to make mid-tone pixels white and pixels that are too dark or too bright black.

chii
Maybe the masks are just obtained from the pixel values.

mithrandir
From a different class (we actually had to compute the weights and construct our own HDR images): w_{k,i,j} = exp(-4 * (I_{k,i,j} - 0.5)^2 / 0.5^2) for exposure index k and pixel indices i, j. The idea is to enforce a weighting centered at the middle pixel value (0.5, or 127.5, depending on the representation). The image I should be on a linear scale (i.e., not gamma-corrected for human viewing pleasure).
Insert plug for EE367/CS3448I
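For concreteness, that weighting function can be sketched in NumPy. The function name `hdr_weight` is made up for illustration, and it assumes linear-intensity images normalized to [0, 1]:

```python
import numpy as np

def hdr_weight(I):
    # Gaussian-shaped weights peaking at 1 for mid-gray pixels (0.5)
    # and falling to exp(-4) ~ 0.018 at the extremes (0 and 1).
    # I is assumed linear (not gamma-corrected), normalized to [0, 1].
    return np.exp(-4.0 * (I - 0.5) ** 2 / 0.5 ** 2)

# Near-black and saturated pixels get tiny weights; mid-tones dominate.
print(hdr_weight(np.array([0.0, 0.5, 1.0])))
```

The weights would then typically be used in a weighted merge across exposures, something like radiance ≈ Σ_k w_k (I_k / t_k) / Σ_k w_k with t_k the exposure times, though the exact merge formula depends on the pipeline.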

sagoyal
@mithrandir oh that makes sense!
Copyright 2021 Stanford University