Hair is one of the crucial elements in representing believable digital humans. It is also one of the most challenging, due to the large number of hairs on a human head, their length, and their complex interactions. Hair appearance, in rendering and simulation, is dominated by collective properties, yet most current approaches model individual hairs. In this paper we build on existing approaches to illumination and simulation by introducing a volumetric representation of hair that allows us to efficiently model its collective properties. We use this volumetric representation to describe hair's response to illumination, to handle hair-to-hair collisions, and to subtly direct simulations of hair. Our method produces realistic results for different hair colors and styles and has been used in a production environment.

Our approach is based on the observation that hair behaves, and is perceived, as a bulk material in its interactions with the environment. For example, we often perceive the appearance of hair as a surface with illumination effects corresponding to surface "normals," even though that surface does not properly exist. In classical Phong illumination, the complex scattering of light off surface microfacets is simplified to the observable effect of light reflecting about a surface normal. In this paper we use the level set surfaces of Osher and Fedkiw [2002] and Sethian [1993] to automatically construct hair surfaces, and from these surfaces we derive normals to be used in illumination.
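The idea of deriving shading normals from a volumetric hair representation can be sketched simply: splat the hair geometry into a density grid, smooth it, and take the normalized gradient of the density as the "surface" normal. The Python sketch below illustrates this under stated assumptions; the grid resolution, Gaussian smoothing, and nearest-voxel splatting are hypothetical choices for illustration, not the level-set construction of Osher and Fedkiw or Sethian.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hair_density_normals(points, grid_res=64, sigma=1.5):
    """Splat hair sample points into a voxel density grid and return
    per-voxel pseudo surface normals (the normalized density gradient).
    Illustrative sketch; grid_res and sigma are arbitrary choices."""
    # Map points into voxel indices spanning their bounding box.
    lo, hi = points.min(axis=0), points.max(axis=0)
    ijk = ((points - lo) / (hi - lo + 1e-9) * (grid_res - 1)).astype(int)
    density = np.zeros((grid_res, grid_res, grid_res))
    np.add.at(density, tuple(ijk.T), 1.0)      # accumulate hair density
    density = gaussian_filter(density, sigma)  # smooth into a soft volume
    # A level set of the density serves as the implicit hair surface;
    # its outward normal is the negated, normalized density gradient.
    grad = np.stack(np.gradient(density), axis=-1)
    length = np.linalg.norm(grad, axis=-1, keepdims=True)
    normals = -grad / np.maximum(length, 1e-9)
    return density, normals
```

During shading, such normals would be looked up at each hair sample point and used wherever a surface normal is expected.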

  1. Illumination

We render hair geometry by drawing each individual hair as a B-spline. To shade the hairs, we base our illumination algorithm on Kajiya and Kay's [1989] model, treating each hair as an infinitely thin cylinder. Kajiya illumination considers each hair individually, computing a response from the hair's tangent vector independently of the surrounding hairs. With the addition of deep shadows [Lokovic and Veach 2000] we achieve the effect of hair self-shadowing. In reality, light scattering from neighboring hairs accounts for much of the illumination on an individual hair. We build upon these models by adding an approximation of light scattered off neighboring hairs: we construct a surface normal about which the light ray reflects, similar in spirit to Phong illumination.
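For reference, the Kajiya-Kay terms are compact enough to state directly. In the sketch below, tangent, light, and view are assumed to be unit vectors, and kd, ks, and p are hypothetical material parameters: the diffuse term varies with the sine of the angle between hair tangent and light, and the specular term peaks when the view ray lies on the cone of reflected light around the tangent.

```python
import numpy as np

def kajiya_kay(tangent, light, view, kd=0.8, ks=0.3, p=32.0):
    """Kajiya-Kay response for an infinitely thin cylinder.
    tangent, light, view must be unit-length 3-vectors."""
    t_l = float(np.dot(tangent, light))  # cos(angle tangent-light)
    t_v = float(np.dot(tangent, view))   # cos(angle tangent-view)
    sin_tl = np.sqrt(max(0.0, 1.0 - t_l * t_l))
    sin_tv = np.sqrt(max(0.0, 1.0 - t_v * t_v))
    diffuse = kd * sin_tl                # brightest when light is
                                         # perpendicular to the hair
    # Specular: cosine between the view ray and the reflection cone
    # around the tangent, raised to a shininess exponent.
    specular = ks * max(0.0, t_l * t_v + sin_tl * sin_tv) ** p
    return diffuse + specular
```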

While the Earth's oceans are known as five separate entities, there is really only one ocean. So, how big is the ocean? It covers 71% of the Earth's surface, houses 99% of the biosphere, and contains some of Earth's grandest geological features.

https://ed.ted.com/lessons/how-big-is-the-ocean-scott-gass

About 97 percent of the Earth's water can be found in our oceans. Of the remainder, roughly two percent of the Earth's total water is frozen in glaciers and ice caps, less than one percent is fresh liquid water, and a tiny fraction exists as water vapor in our atmosphere.

According to the U.S. Geological Survey, there are over 332,519,000 cubic miles of water on the planet. A cubic mile is the volume of a cube measuring one mile on each side. Of this vast volume of water, NOAA's National Geophysical Data Center estimates that 321,003,271 cubic miles is in the ocean.
That's 352,670,000,000,000,000,000 gallons.
Distribution of Earth's Water

In the first bar of the USGS chart, notice how only 2.5% of Earth's water is freshwater, the water life depends on.
The middle bar shows the breakdown of that freshwater. Almost all of it is locked up in ice and in the ground; only a little more than 1.2% of all freshwater is surface water, which serves most of life's needs.

The right bar shows the breakdown of surface freshwater. Most of it is locked up in ice, another 20.9% is found in lakes, and rivers make up just 0.49%. Although rivers hold only a small share of freshwater, they are where humans get a large portion of the water they use.
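Taking the rounded percentages above at face value, the implied volumes follow directly from the USGS total. This is only arithmetic on the quoted figures, not independent data:

```python
# Volumes implied by the rounded percentages, in cubic miles.
TOTAL_MI3 = 332_519_000                  # USGS total water on Earth

freshwater = TOTAL_MI3 * 0.025           # 2.5% of all water
surface    = freshwater * 0.012          # ~1.2% of freshwater
lakes      = surface * 0.209             # 20.9% of surface freshwater
rivers     = surface * 0.0049            # 0.49% of surface freshwater

print(f"freshwater: {freshwater:,.0f} mi^3")  # ~8,300,000
print(f"surface:    {surface:,.0f} mi^3")     # ~100,000
print(f"lakes:      {lakes:,.0f} mi^3")       # ~21,000
print(f"rivers:     {rivers:,.0f} mi^3")      # ~500
```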

The Utah teapot, or the Newell teapot, is a 3D test model that has become a standard reference object within the computer graphics community. It is a mathematical model of an ordinary teapot that appears solid, cylindrical, and partially convex. The teapot model was created in 1975 by early computer graphics researcher Martin Newell, a member of the pioneering graphics program at the University of Utah.

For his work, Newell needed a simple mathematical model of a familiar object. His wife, Sandra Newell, suggested modelling their household teapot, an ordinary Melitta teapot.

The teapot shape contained a number of elements that made it ideal for the graphics experiments of the time: it was round, contained saddle points, had a genus greater than zero because of the hole in the handle, could cast a shadow on itself, and could be displayed accurately without a surface texture.

Newell made the mathematical data that described the teapot's geometry (a set of three-dimensional coordinates) publicly available, and soon other researchers began to use the same data for their computer graphics experiments. Although technical progress means that rendering the teapot is no longer the challenge it was in 1975, the Utah teapot continues to be used as a reference object for increasingly advanced graphics techniques.

The real teapot is about 33% taller (a ratio of 4:3) than the familiar computer model. The original, physical teapot was purchased from ZCMI (a department store in Salt Lake City) in 1974. It was donated to the Boston Computer Museum in 1984, where it was on display until 1990. It now resides in the ephemera collection at the Computer History Museum in Mountain View, California, where it is catalogued as "Teapot used for Computer Graphics rendering" and bears the catalogue number X00398.1984.

Versions of the teapot model, or sample scenes containing it, are distributed with or freely available for nearly every current rendering and modelling program, as well as many graphics APIs, including AutoCAD, Houdini, Lightwave 3D, MODO, POV-Ray, 3ds Max, and the OpenGL and Direct3D helper libraries.
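As a concrete example of that availability, GLUT (and its successor freeglut) ships built-in teapot primitives such as glutWireTeapot and glutSolidTeapot. The minimal sketch below draws one via PyOpenGL, assuming PyOpenGL and a GLUT implementation such as freeglut are installed; the window size and title are arbitrary choices:

```python
# Minimal PyOpenGL/GLUT sketch using GLUT's built-in teapot primitive.
from OpenGL.GL import *
from OpenGL.GLUT import *

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glutWireTeapot(0.5)          # Newell's teapot, drawn as a wireframe
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutInitWindowSize(512, 512)
glutCreateWindow(b"Utah teapot")
glutDisplayFunc(display)
glutMainLoop()
```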

Teapot scenes are commonly used for renderer self-tests and benchmarks.