
Reconstruction I: Single Viewpoint

Radiometry and Reflectance

Radiometric concepts

Here we introduce several basic concepts of radiometry.

rad1

Angle (2D): \(d\theta = \frac{dl}{r}\), where \(dl\) is the arc length and \(r\) is the radius.

Solid angle (3D): \(d\omega = \frac{dA^\prime}{r^2} = \frac{dA\cos\theta}{r^2}\), where \(dA^\prime = dA\cos\theta\) is the foreshortened area.

Light flux: Power emitted within a solid angle, \(d\Phi\)

Radiant intensity: Light flux emitted per unit solid angle, \(J = \frac{d\Phi}{d\omega}\)

Surface irradiance: Light flux incident per unit surface area, \(E=\frac{d\Phi}{dA}\). We can observe that

\[ \begin{aligned} E &=\frac{d\Phi}{dA}=J\frac{d\omega}{dA}=J\frac{\frac{dA\cos\theta}{r^2}}{dA} \\ &=J\frac{\cos\theta}{r^2} \end{aligned} \]

Surface radiance: Light flux emitted per unit foreshortened area per unit solid angle:

\[ L= \frac{d^2\Phi}{(dA\cos\theta_r)d\omega} \]

Scene radiance & surface irradiance

We want to understand the relationship between image irradiance E and scene radiance L:

rad1 rad2

From the figure above, the solid angle subtended by the image patch equals the solid angle subtended by the surface patch, i.e., \(d\omega_i=d\omega_s\), so:

\[ \frac{dA_i\cos\alpha}{(f/\cos\alpha)^2} = \frac{dA_s\cos\theta}{(z/\cos\alpha)^2} \Rightarrow \frac{dA_s}{dA_i} = \frac{\cos\alpha}{\cos\theta}(\frac{z}{f})^2 \]

We also have the solid angle subtended by the lens of diameter \(d\):

\[ d\omega_L = \frac{\pi d^2}{4}\frac{\cos\alpha}{(z/\cos\alpha)^2} \]

By energy conservation, the flux received by the lens from \(dA_s\) equals the flux projected onto \(dA_i\). The flux received by the lens from \(dA_s\) is:

\[ d\Phi = L(dA_s\cos\theta)d\omega_L \]

The flux projected onto \(dA_i\) is:

\[ d\Phi = EdA_i \]

Combining these, we arrive at the equation for image irradiance:

\[ \begin{aligned} E &= \frac{d\Phi}{dA_i}\\ &= \frac{L(dA_s\cos\theta)d\omega_L}{dA_i}\\ &= L \cdot \frac{\cos\alpha}{\cos\theta}(\frac{z}{f})^2 \cdot \cos\theta \cdot \frac{\pi d^2}{4}\frac{\cos\alpha}{(z/\cos\alpha)^2} \end{aligned} \]

The final equation would be:

\[ E = L\frac{\pi}{4}(\frac{d}{f})^2\cos^4\alpha \]

We thus know:

  1. Image irradiance is proportional to scene radiance.
  2. Image brightness falls off from the image center as \(\cos^4\alpha\).
  3. For small fields of view, the \(\cos^4\alpha\) falloff is small.
  4. Image brightness does not vary with scene depth, e.g., if you move the camera away, the brightness of the same pixel will not change.
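As a quick numerical check of the \(\cos^4\alpha\) falloff, here is a minimal sketch of the final equation (the function name and lens parameters below are illustrative, not from the notes):

```python
import numpy as np

def image_irradiance(L, d, f, alpha):
    """E = L * (pi/4) * (d/f)^2 * cos^4(alpha).

    L: scene radiance, d: lens diameter, f: focal length,
    alpha: angle between the ray and the optical axis (radians)."""
    return L * (np.pi / 4.0) * (d / f) ** 2 * np.cos(alpha) ** 4

# Relative falloff away from the image center (alpha = 0):
E_center = image_irradiance(1.0, 0.01, 0.05, 0.0)
E_edge = image_irradiance(1.0, 0.01, 0.05, np.deg2rad(20.0))
```

Note that the ratio `E_edge / E_center` depends only on \(\alpha\), consistent with point 2 above.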

Bidirectional reflectance distribution function (BRDF)

Surface reflection depends on both the viewing and illumination directions.

brdf

Given the irradiance \(E(\theta_i, \phi_i)\) due to source in direction \((\theta_i, \phi_i)\) and radiance \(L(\theta_r, \phi_r)\) of surface in direction \((\theta_r, \phi_r)\), the BRDF is defined as:

\[ f(\theta_i, \phi_i, \theta_r, \phi_r) = \frac{L(\theta_r, \phi_r)}{E(\theta_i, \phi_i)} \]

Some properties of BRDF:

  1. Non-negative: \(f(\theta_i, \phi_i, \theta_r, \phi_r) \ge 0\)
  2. Helmholtz reciprocity: \(f(\theta_i, \phi_i, \theta_r, \phi_r) = f(\theta_r, \phi_r, \theta_i, \phi_i)\)
  3. For rotationally symmetric reflectance (isotropic surfaces), the BRDF can be represented as a 3D function \(f(\theta_i, \theta_r, \phi_i-\phi_r)\): rotating the surface about its normal does not change the BRDF.

Reflectance models

The reflections can be categorized into:

  1. Surface reflection: Specular reflection that gives glossy appearance on smooth surface (e.g., mirror, glass)
  2. Body reflection: Diffuse reflection that gives matte appearance on non-homogeneous medium (e.g. clay, paper)

reflectance

Lambertian model (Body)

The Lambertian model states that a surface appears equally bright from ALL viewing directions. The Lambertian BRDF is:

\[ f(\theta_i, \phi_i, \theta_r, \phi_r) = \frac{\rho_d}{\pi} \]

Here \(0\le\rho_d\le 1\) is the albedo. Combining the BRDF with the relationship between surface radiance \(L\) and irradiance \(E\), we have:

\[ L = \frac{\rho_d}{\pi}E,\quad\text{where}\hspace{2pt} E = \frac{J\cos\theta_i}{r^2}=\frac{J}{r^2}(\mathbf{n}\cdot \mathbf{s}) \]

Thus, the surface radiance is:

\[ L = \frac{\rho_d}{\pi}\frac{J}{r^2}(\mathbf{n}\cdot \mathbf{s}) \]

An important property: the surface radiance \(L\) is independent of the viewing direction.
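A minimal numerical sketch of Lambertian shading (the function and its parameters are illustrative, not from the notes):

```python
import numpy as np

def lambertian_radiance(albedo, J, r, n, s):
    """Surface radiance L = (rho_d / pi) * (J / r^2) * (n . s).

    n, s: surface normal and source direction (normalized below).
    max(0, .) clamps points that face away from the source."""
    n = n / np.linalg.norm(n)
    s = s / np.linalg.norm(s)
    return (albedo / np.pi) * (J / r**2) * max(0.0, float(np.dot(n, s)))

# Radiance depends on the source direction s but not on the viewer:
L_front = lambertian_radiance(0.5, 100.0, 2.0,
                              np.array([0.0, 0.0, 1.0]),
                              np.array([0.0, 0.0, 1.0]))
L_oblique = lambertian_radiance(0.5, 100.0, 2.0,
                                np.array([0.0, 0.0, 1.0]),
                                np.array([1.0, 0.0, 1.0]))
```

Tilting the source by 45 degrees scales the radiance by \(\cos 45^\circ = 1/\sqrt{2}\); no viewing direction appears anywhere in the computation.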

lambertian

Ideal specular model (Surface)

We can think of the ideal specular model as a perfect mirror, where all incident energy is reflected in a single direction. All of the reflection happens at the interface itself, so it is surface reflection rather than body reflection. The mirror BRDF is:

\[ f(\theta_i, \phi_i, \theta_r, \phi_r) = \frac{\delta(\theta_i-\theta_r)\delta(\phi_i+\pi-\phi_r)}{\cos\theta_i\sin\theta_i} \]

In this case, the BRDF is the product of two delta functions, which means the viewer receives light only when \(\mathbf{v}=\mathbf{r}\).
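The mirror direction itself follows the standard reflection formula \(\mathbf{r} = 2(\mathbf{n}\cdot\mathbf{s})\mathbf{n} - \mathbf{s}\); a small sketch (the helper name is ours):

```python
import numpy as np

def mirror_direction(n, s):
    """Reflect the source direction s about the surface normal n:
    r = 2 (n . s) n - s. Both vectors are normalized first."""
    n = n / np.linalg.norm(n)
    s = s / np.linalg.norm(s)
    return 2.0 * float(np.dot(n, s)) * n - s
```

For a viewer at any other direction \(\mathbf{v} \ne \mathbf{r}\), the delta functions evaluate to zero and no light is received.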

Reflection from Rough surfaces

We can model surface roughness using the Gaussian micro-facet model, where the facet tilt \(\alpha\) follows a zero-mean Gaussian with standard deviation \(\sigma\):

\[ p(\alpha,\sigma) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{\alpha^2}{2\sigma^2}} \]

rough surface

Specular reflection from rough surfaces

Torrance-Sparrow BRDF model: Each facet is a perfect mirror (but each facet can have its own orientation):

\[ f(\rvs,\rvv) = \frac{\rho_s}{(\rvn \cdot\rvs)(\rvn\cdot\rvv)}\,p(\alpha,\sigma)\,G(\rvs, \rvn, \rvv) \]

where \(p(\alpha,\sigma)\) is the surface roughness distribution and \(G(\rvs, \rvn, \rvv)\) is a geometric factor (masking, shadowing). Setting \(\sigma=0\) recovers the perfect mirror model.

Body reflection from rough surfaces

Oren-Nayar BRDF model: Each facet is Lambertian.

\[ f(\theta_i, \phi_i, \theta_r, \phi_r) = \frac{\rho_d}{\pi} (A + B \cdot \max(0, \cos(\phi_r-\phi_i))\cdot\sin\alpha\cdot \tan\beta) \]

where

\[ \begin{aligned} A &= 1 - \frac{\sigma^2}{2(\sigma^2 + 0.33)} & \alpha = \max(\theta_i, \theta_r)\\ B &= \frac{0.45\sigma^2}{\sigma^2 + 0.09} & \beta = \min(\theta_i, \theta_r) \end{aligned} \]

Again, \(\sigma\) is the surface roughness. When \(\sigma=0\), the model reduces to the perfect Lambertian model.
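The formula above translates directly into code; a sketch (the function name is ours; angles are in radians, and \(\beta\) should stay away from \(\pi/2\), where \(\tan\beta\) blows up):

```python
import numpy as np

def oren_nayar_brdf(theta_i, phi_i, theta_r, phi_r, sigma, rho_d=1.0):
    """Oren-Nayar BRDF as written in the notes."""
    A = 1.0 - sigma**2 / (2.0 * (sigma**2 + 0.33))
    B = 0.45 * sigma**2 / (sigma**2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (rho_d / np.pi) * (
        A + B * max(0.0, np.cos(phi_r - phi_i)) * np.sin(alpha) * np.tan(beta)
    )
```

With \(\sigma=0\) we get \(A=1\), \(B=0\), and the BRDF collapses to the Lambertian \(\rho_d/\pi\), as stated above.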

rough surface (body)

As the surface roughness increases, body reflection makes the appearance flatter.

Dichromatic model

  1. Color of body (diffuse) reflection = color of object \(\times\) color of illumination.
  2. Color of surface (specular) reflection = color of illumination.
  3. Pixel color is a linear combination of the color of body reflection and the color of surface reflection.

Photometric Stereo

Photometric stereo is a method for recovering 3D shape information from image intensities measured under multiple light sources:

\[ \text{Image intensity (Known)} = \gF(\text{Light source (Known)},\ \text{Surface normal (Unknown)},\ \text{Surface reflectance (Known)}) \]

Gradient space & reflectance map

Surface gradient and normal

Let \(z=f(x, y)\) represent a 3D surface, we can obtain surface gradient:

\[ (-\frac{\partial z}{\partial x}, -\frac{\partial z}{\partial y}) = (p, q) \]

The surface normal can thus be represented as:

\[ \rmN = (-\frac{\partial z}{\partial x}, -\frac{\partial z}{\partial y}, 1) = (p, q, 1) \]

The unit surface normal is:

\[ \rvn = \frac{\rmN}{\|\rmN\|} = \frac{(p, q, 1)}{\sqrt{p^2 + q^2 + 1}} \]
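As a small helper (illustrative, not from the notes), the gradient-to-normal conversion is:

```python
import numpy as np

def unit_normal(p, q):
    """Unit normal n = (p, q, 1) / sqrt(p^2 + q^2 + 1) from the
    surface gradients p = -dz/dx, q = -dz/dy."""
    N = np.array([p, q, 1.0])
    return N / np.linalg.norm(N)
```

The flat surface \((p, q) = (0, 0)\) maps to the normal \((0, 0, 1)\) pointing at the viewer.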

Reflectance map \(R(p, q)\)

For a given source direction \(\rvs\) and surface reflectance, the image intensity at a point \((x, y)\) is \(I=R(p, q)\).

Lambertian surface

\[ I = c\frac{\rho}{\pi}\frac{J}{r^2}\cos\theta_i = c\frac{\rho}{\pi} k(\rvn\cdot\rvs) \]

where \(k\) is the source brightness, \(\rho\) is the surface albedo (reflectance), and \(c\) is a constant (camera gain). Let \(c\frac{\rho}{\pi} k=1\); then

\[ I = \cos\theta_i = \rvn\cdot\rvs = \frac{pp_s + qq_s + 1}{\sqrt{p^2+q^2+1}\sqrt{p^2_s+q^2_s+1}} = R(p, q) \]

reflectance map

**Can we estimate shape from a single image?** Given the image, source direction, and surface reflectance, can we estimate the surface gradient \((p, q)\) at each pixel? The answer is NO: the intensity at each pixel maps to infinitely many \((p, q)\) values along the corresponding iso-brightness contour, i.e., multiple solutions share the same \(\theta_i\), as in the figure above (left).

Photometric Stereo

Idea: Use multiple images under different lighting to resolve the ambiguity in surface orientation.

Photometric stereo

Basic idea

  1. Step 1: Acquire K images with K known light sources.
  2. Step 2: Using known source direction and BRDF, construct reflectance map for each source direction.
  3. Step 3: For each pixel location \((x, y)\), find \((p, q)\) as the intersection of \(K\) curves. This \((p, q)\) gives the surface normal at pixel \((x, y)\).

Photometric Stereo - Lambertian case

Image intensities measured at point \((x, y)\) under each of the three light sources:

\[ I_1 = \frac{\rho}{\pi}\rvn\cdot\rvs_1 \quad I_2 = \frac{\rho}{\pi}\rvn\cdot\rvs_2 \quad I_3 = \frac{\rho}{\pi}\rvn\cdot\rvs_3 \]

where \(\rvn = \begin{bmatrix}n_x\\n_y\\n_z\end{bmatrix}\) and \(\rvs_i = \begin{bmatrix}s_{x_{i}}\\ s_{y_{i}}\\ s_{z_{i}}\end{bmatrix}\). We can write this in matrix form:

\[ \begin{bmatrix}I_1\\I_2\\I_3\end{bmatrix} = \frac{\rho}{\pi} \begin{bmatrix}s_{x_1} &s_{y_1} & s_{z_1} \\s_{x_2} &s_{y_2} & s_{z_2} \\s_{x_3} &s_{y_3} & s_{z_3}\end{bmatrix}\rvn \Rightarrow I = \rmS\rmN \quad\text{where:}\quad \rmN = \frac{\rho}{\pi}\rvn \]

Solution: \(\rmN = \rmS^{-1} I\), surface normal \(\rvn = \frac{\rmN}{\|\rmN\|}\), albedo: \(\frac{\rho}{\pi} = \|\rmN\|\). For this to work, \(\rmS\) must be invertible; that is, no source direction may be a linear combination of the other two (the three sources must not be coplanar). Better results are obtained by using more (\(K>3\)) light sources and solving the normal equations (least squares).
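The per-pixel solve can be sketched in a few lines (the function name is ours; `np.linalg.lstsq` handles both \(K=3\) and \(K>3\) sources):

```python
import numpy as np

def photometric_stereo(I, S):
    """Recover the scaled normal N = (rho/pi) n at one pixel.

    I: (K,) measured intensities, S: (K, 3) unit source directions.
    For K = 3 this inverts S exactly; for K > 3 it solves the
    normal equations (least squares)."""
    N, *_ = np.linalg.lstsq(S, I, rcond=None)
    albedo_over_pi = np.linalg.norm(N)
    n = N / albedo_over_pi
    return n, albedo_over_pi
```

Because \(\rvn\) is a unit vector, the magnitude of the recovered \(\rmN\) directly gives the albedo term \(\rho/\pi\).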

Calibration based photometric stereo

Use a calibration object (e.g., a sphere) of known size and shape with the same reflectance as the scene objects. Points with the same surface normal produce the same set of intensities under the different light sources.

Calibration procedure:

  1. Capture \(K\geq 3\) images of the calibration sphere under \(K\) different light sources.
  2. Using the known size of the sphere, calculate the surface normal \((p, q, 1)\) for each point on the sphere.
  3. Create a lookup table for the K-tuple: \((I_1, I_2,\dots, I_K)\rightarrow (p, q)\).
  4. Capture \(K\) images of the scene object under the same \(K\) light sources.
  5. For each pixel in the scene, use the lookup table to map \((I_1, I_2,\dots, I_K)\rightarrow (p, q)\).
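A minimal sketch of the lookup step, using nearest neighbour in intensity space (a real implementation would quantize the K-tuples or use a KD-tree; all names here are ours):

```python
import numpy as np

def lookup_pq(table_I, table_pq, I_pixel):
    """table_I: (M, K) intensity K-tuples measured on the sphere,
    table_pq: (M, 2) the corresponding gradients (p, q).
    Returns the (p, q) whose stored K-tuple is closest to I_pixel."""
    idx = np.argmin(np.sum((table_I - I_pixel) ** 2, axis=1))
    return table_pq[idx]
```

Unlike the matrix solve above, this approach makes no Lambertian assumption: whatever BRDF the sphere has is baked into the table.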

Shape from normals

Shape from surface normals

Estimate surface by integrating surface gradient:

\[ z(x, y) = z(x_0, y_0) + \int_{(x_0, y_0)}^{(x, y)} -(pdx + qdy) \]

where \((x_0, y_0)\) is a reference point with \(z(x_0, y_0) = 0\). \(z(x, y)\) is obtained by integrating along any path from \((x_0, y_0)\).

Naive algorithm

  1. Initialize the reference depth \(z(0, 0) = 0\)
  2. Compute depth for first column:

    \[ \begin{aligned} &\text{For $y=1$ to $(H-1)$:}\\ &\qquad z(0, y) = z(0, y-1) - q(0, y) \end{aligned} \]
  3. Compute depth for each row.

    \[ \begin{aligned} &\text{For $y=0$ to $(H-1)$:}\\ &\qquad \text{For $x=1$ to $(W-1)$}\\ &\qquad\qquad z(x, y) = z(x-1, y) - p(x, y) \end{aligned} \]
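The two loops above, written directly in NumPy (arrays are indexed `[y, x]`; the function name is ours):

```python
import numpy as np

def integrate_normals_naive(p, q):
    """Naive depth from gradients p = -dz/dx, q = -dz/dy (both (H, W)).

    Integrates q down the first column, then p along each row,
    with reference depth z(0, 0) = 0."""
    H, W = p.shape
    z = np.zeros((H, W))
    for y in range(1, H):                  # first column
        z[y, 0] = z[y - 1, 0] - q[y, 0]
    for y in range(H):                     # then each row
        for x in range(1, W):
            z[y, x] = z[y, x - 1] - p[y, x]
    return z
```

On noise-free gradients of a smooth surface this reproduces the depth exactly (up to the reference constant); its problems only show up once the normals are noisy, as discussed next.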

Denoise

Noise in the normals leads to inaccurate depth estimates, and the estimated depth depends on the integration path.

Normal integration

A simple fix would be to compute depth maps along many different paths and average them to reduce the error, but this is computationally expensive. Instead, we can minimize the error between the measured surface gradients \((p, q)\) and the surface gradients of the estimated surface \(z(x, y)\). The error measure \(D\) is:

\[ D =\iint_{\mathrm{Image}} (\frac{\partial z}{\partial x} + p)^2 + (\frac{\partial z}{\partial y} + q)^2 dxdy \]

where \(\frac{\partial z}{\partial x}\) and \(\frac{\partial z}{\partial y}\) are the gradients of the estimated surface. To solve this, we can use the Frankot-Chellappa algorithm, which works in the Fourier domain. Let \(Z(u, v)\), \(P(u, v)\) and \(Q(u, v)\) be the Fourier transforms of \(z(x, y)\), \(p(x, y)\) and \(q(x, y)\); then:

\[ \begin{aligned} z(x, y) &= \iint_{-\infty}^{\infty}Z(u, v) e^{i2\pi(ux+vy)} dudv\\ p(x, y) &= \iint_{-\infty}^{\infty}P(u, v) e^{i2\pi(ux+vy)} dudv\\ q(x, y) &= \iint_{-\infty}^{\infty}Q(u, v) e^{i2\pi(ux+vy)} dudv \end{aligned} \]

We find the \(Z(u, v)\) that minimizes \(D\) by setting \(\frac{\partial D}{\partial Z}=0\). The solution is:

\[ \tilde{Z}(u, v) = \frac{iuP(u, v) + ivQ(u, v)}{u^2 + v^2} \]

This is the Fourier transform of the best-fit surface; computing the inverse Fourier transform gives \(\tilde{z}(x, y)\).
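A sketch with NumPy FFTs (the \(2\pi\) factors, dropped as constants in the notes, are restored here to match the `np.fft` frequency convention; the function name is ours):

```python
import numpy as np

def frankot_chellappa(p, q):
    """Least-squares surface from gradients p = -dz/dx, q = -dz/dy.

    Works entirely in the Fourier domain; the mean depth, which the
    gradients cannot determine, is set to zero. Arrays are (H, W),
    indexed [y, x]."""
    H, W = p.shape
    u = np.fft.fftfreq(W)[None, :]      # frequencies along x
    v = np.fft.fftfreq(H)[:, None]      # frequencies along y
    P = np.fft.fft2(-p)                 # transform of z_x = -p
    Q = np.fft.fft2(-q)                 # transform of z_y = -q
    denom = 2.0 * np.pi * (u**2 + v**2)
    denom[0, 0] = 1.0                   # avoid 0/0 at the DC term
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                       # mean depth is unrecoverable
    return np.real(np.fft.ifft2(Z))
```

Because the whole computation is two forward FFTs and one inverse FFT, this is far cheaper than averaging many integration paths.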

Interreflection

The brightness of a scene point is due not only to the light source but also to light reflected from other scene points. As a result, photometric stereo overestimates albedo and underestimates surface tilt (the surface ends up shallower than it is).

Interreflection

We can iteratively estimate the correct shape taking into account the interreflections.

Shape from shading

Stereographic projection

The surface gradient \((p, q)\) becomes infinite on the occluding boundary, so it is mapped to stereographic coordinates \((f, g)\), which remain finite there. The algorithm below therefore works with the reflectance map \(R_s(f, g)\) expressed in these coordinates.

Shape from shading algorithm

Basic assumptions

Surface gradients on the occluding boundary are known and can be used as boundary conditions. We can use edge detection to find the boundary and estimate the surface normals there:

normal boundary

Assumption 1: Image irradiance (intensity) should equal the reflectance map. That is, \(I(x, y) = R_s (f, g)\). We can thus minimize:

\[ e_r = \iint (I(x, y) - R_s(f, g))^2 dxdy \]

i.e., penalize errors between image irradiance and reflectance map.

Assumption 2: Smoothness constraint: Object surface is smooth. That is, the gradient values \((f, g)\) vary slowly. Thus, we can minimize:

\[ e_s = \iint (f_x^2 + f_y^2) + (g_x^2 + g_y^2) dxdy \]

i.e., penalize rapid changes in \(f\) and \(g\) during surface estimation.

Numerical shape from shading

We want to find the surface gradients \((f, g)\) at all image points that minimize the function:

\[ e = e_s + \lambda e_r \]

The smoothness error at point \((i, j)\) is:

\[ e_{s_{i, j}} = \frac{1}{4}\left((f_{i+1, j} - f_{i, j})^2 + (f_{i, j+1} - f_{i, j})^2 + (g_{i+1, j} - g_{i, j})^2+(g_{i, j+1} - g_{i, j})^2\right) \]

The image irradiance error at point \((i, j)\):

\[ e_{r_{i, j}} = \left(I_{i, j} - R_s(f_{i, j}, g_{i, j})\right)^2 \]

Find \((f_{i, j}, g_{i, j})\) for all \((i, j)\) that minimizes:

\[ e = \sum_i\sum_j (e_{s_{i, j}} + \lambda e_{r_{i,j}}) \]

If \((f_{k, l}, g_{k, l})\) minimizes \(e\), then \(\frac{\partial e}{\partial f_{k, l}}=0\) and \(\frac{\partial e}{\partial g_{k,l}}=0\). Given an image of size \(N\times N\), there are \(2N^2\) unknowns. However, each \(f_{i, j}\) and \(g_{i, j}\) appears in only four smoothness terms of the objective, so the derivatives take a simple local form:

\[ \frac{\partial e}{\partial f_{k, l}} = 2(f_{k, l} - \bar{f}_{k, l}) - 2\lambda (I_{k, l} - R_s (f_{k,l}, g_{k, l}))\frac{\partial R_s}{\partial f}|_{f_{k, l}, g_{k, l}} = 0\\ \frac{\partial e}{\partial g_{k, l}} = 2(g_{k, l} - \bar{g}_{k, l}) - 2\lambda (I_{k, l} - R_s (f_{k,l}, g_{k, l}))\frac{\partial R_s}{\partial g}|_{f_{k, l}, g_{k, l}} = 0 \]

where \(\bar{f}_{k, l}\) and \(\bar{g}_{k, l}\) are local averages:

\[ \bar{f}_{k, l} = \frac{1}{4}(f_{k+1, l} + f_{k-1, l} + f_{k, l+1} + f_{k, l-1} )\\ \bar{g}_{k, l} = \frac{1}{4}(g_{k+1, l} + g_{k-1, l} + g_{k, l+1} + g_{k, l-1} ) \]

Solving each equation for \(f_{k, l}\) and \(g_{k, l}\) gives an iterative update rule:

\[ \begin{aligned} f_{k, l}^{(n+1)} = \bar{f}^n_{k, l} + \lambda (I_{k, l} - R_s(f_{k, l}^n, g_{k, l}^n))\frac{\partial R_s}{\partial f} |_{f^n_{k, l}, g^n_{k, l}}\\ g_{k, l}^{(n+1)} = \bar{g}^n_{k, l} + \lambda (I_{k, l} - R_s(f_{k, l}^n, g_{k, l}^n))\frac{\partial R_s}{\partial g} |_{f^n_{k, l}, g^n_{k, l}}\\ \end{aligned} \]

The overall iterative process would be:

  1. Use known normals to fix \((f, g)\) values on occluding boundary. Initialize the rest to \((0, 0)\).
  2. Iteratively compute \((f, g)\) until the solution has converged.
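The full loop might look like this (all names are ours; `R`, `dR_df`, `dR_dg` are callables for the reflectance map and its partial derivatives, and `boundary` marks the pixels whose \((f, g)\) stay fixed):

```python
import numpy as np

def local_average(a):
    """4-neighbour average (f_bar, g_bar) with replicated borders."""
    pad = np.pad(a, 1, mode="edge")
    return 0.25 * (pad[2:, 1:-1] + pad[:-2, 1:-1]
                   + pad[1:-1, 2:] + pad[1:-1, :-2])

def shape_from_shading(I, R, dR_df, dR_dg, boundary, f0, g0,
                       lam=0.1, n_iters=500):
    """Iterate f <- f_bar + lam * (I - R(f, g)) * dR/df (and likewise
    for g), keeping (f, g) fixed wherever `boundary` is True."""
    f, g = f0.copy(), g0.copy()
    for _ in range(n_iters):
        f_bar, g_bar = local_average(f), local_average(g)
        r = I - R(f, g)                      # irradiance residual
        f_new = f_bar + lam * r * dR_df(f, g)
        g_new = g_bar + lam * r * dR_dg(f, g)
        f = np.where(boundary, f0, f_new)    # re-impose boundary values
        g = np.where(boundary, g0, g_new)
    return f, g
```

Each update pulls \((f, g)\) toward the local average (smoothness) while the residual term pulls the predicted brightness \(R_s(f, g)\) toward the measured intensity \(I\), with \(\lambda\) trading off the two.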