Self-shadowing

Self-shadowing is a computer graphics lighting effect, used in 3D rendering applications such as computer animation and video games. Self-shadowing allows non-static objects in the environment, such as game characters and interactive objects (buckets, chairs, etc.), to cast shadows on themselves and on each other. For example, without self-shadowing, a character who places the right arm over the left will cast no shadow from the right arm onto the left arm. With self-shadowing, if that same character places a hand over a ball, the hand will cast a shadow onto the ball.

One distinction that needs to be made is whether the shadow being cast is static or dynamic. A shadow cast by a wall is static: the wall is not moving, so its geometric shape does not move or change within the scene. A dynamic shadow is one cast by geometry that changes within the scene.

Self-shadowing methods trade quality against speed depending on the desired result. To keep speed up, some techniques rely on fast, low-resolution approximations, which can produce incorrect-looking shadows that appear out of place in a scene. Others have the CPU and GPU compute the exact location and shape of a shadow with a high level of accuracy. This requires considerable computational overhead, which older machines could not handle.

Techniques

Height Field Self-Shadowing

A technique was developed in which the shadow on a rough surface can be computed quickly by tracking the highest points along the direction from the light source and ignoring any geometry that lies beneath those peaks. Imagine a sunrise in the mountains: the light hits a peak behind you while you are still in the dark. The computer would not need to evaluate your shadowing in detail, since you are below the peak behind you. "Height field self-shadowing" renders self-shadows on dynamic height fields under dynamic light environments in real time.[1]
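The peak-tracking idea above can be sketched in one dimension. This is an illustrative sketch only, not the algorithm of [1]: for a directional light arriving from the left at a fixed elevation angle, a running "horizon" height is carried along the terrain, and any sample below it is in shadow.

```python
# Sketch: 1D height-field self-shadowing for a directional light coming
# from the left at a fixed elevation angle. A sample is shadowed when it
# lies below the running horizon line traced from earlier peaks.
# All names here are illustrative, not taken from the cited paper.
import math

def shadowed(heights, sun_elevation_deg):
    """Return a list of booleans: True where the terrain shadows itself."""
    slope = math.tan(math.radians(sun_elevation_deg))  # ray drop per step
    horizon = -math.inf   # current height of the shadow boundary
    result = []
    for h in heights:
        result.append(h < horizon)          # below the horizon -> in shadow
        horizon = max(horizon, h) - slope   # horizon follows the sun ray down
    return result

terrain = [0, 5, 3, 2, 1, 4, 0]
print(shadowed(terrain, 45))
# -> [False, False, True, True, True, False, True]
```

The single pass over the height field is what makes this family of methods fast: each sample is compared once against the highest obstruction seen so far, rather than against every other point.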

3D Hair

Self-shadowing can be used for interactive hair animation, which is normally very difficult for computers to render because of the sheer number of individual geometric shapes that hair can take. Self-shadowing is a major contributor to the impression of volume in a 3D application.[2]

Shadow volume

Shadow volume is one way that self-shadowing can be implemented in a 3D image or scene. The method extrudes each occluding object into an enclosed volume covering the region of the scene where its shadow is cast. This allows the renderer, or shader, to test whether a given point or pixel lies inside a shadowed region, and thus to determine how the object should be lit.
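The inside-a-volume test above can be sketched with a simple crossing count, mirroring how stencil-buffer implementations count front- and back-face crossings along a ray. This is a simplified one-dimensional sketch, not a full stencil algorithm; the names are illustrative.

```python
# Sketch: each occluder, extruded away from the light, becomes an interval
# (a "shadow volume") along a ray toward the point being shaded. The point
# is shadowed when the ray has crossed more volume entries than exits,
# i.e. the running count is nonzero where the ray stops.

def in_shadow(point, volumes):
    """volumes: list of (enter, leave) positions along the ray."""
    count = 0
    for enter, leave in volumes:
        if enter <= point:   # ray has passed this volume's front face
            count += 1
        if leave <= point:   # ray has also passed its back face
            count -= 1
    return count > 0         # still inside at least one shadow volume

volumes = [(2.0, 5.0), (4.0, 9.0)]
print(in_shadow(3.0, volumes))   # True: inside the first volume
print(in_shadow(10.0, volumes))  # False: past both volumes
```

Because the test only counts boundary crossings, overlapping volumes from multiple occluders combine correctly without any special handling, which is one reason the technique scales to whole scenes.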

Shadow Maps

3D shadow mapping is another method. It creates approximate shadows by recording scene depth from the light's position, producing rather diffuse shadows that may not be entirely accurate.
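The core of shadow mapping is a depth comparison: first store the distance of the nearest occluder per texel as seen from the light, then shade a point as lit only if it is no farther from the light than that stored depth (plus a small bias to avoid self-shadowing artifacts known as "shadow acne"). The sketch below uses a plain list in place of a depth texture; all names are illustrative.

```python
# Sketch of the two passes of shadow mapping, with a list standing in
# for the light-space depth texture.

def build_shadow_map(occluder_depths, size):
    """Depth pass: keep the nearest occluder depth per texel."""
    shadow_map = [float("inf")] * size
    for texel, depth in occluder_depths:
        shadow_map[texel] = min(shadow_map[texel], depth)
    return shadow_map

def is_lit(shadow_map, texel, depth_from_light, bias=0.01):
    """Shading pass: lit if nothing nearer to the light was recorded."""
    return depth_from_light <= shadow_map[texel] + bias

shadow_map = build_shadow_map([(0, 2.0), (1, 3.5), (0, 1.5)], size=4)
print(is_lit(shadow_map, 0, 1.5))  # True: this point is the nearest occluder
print(is_lit(shadow_map, 0, 4.0))  # False: something nearer blocks the light
print(is_lit(shadow_map, 2, 7.0))  # True: nothing occupies this texel
```

The finite resolution of the map is what makes the resulting shadows approximate: every scene point falling into the same texel shares one stored depth, which softens and can misplace shadow edges.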

Radiosity Normal Mapping

Chris Green of Valve, a video game maker, notes that when bump map data is derived from geometric descriptions of the object's surface, significant lighting cues caused by lighting occlusion from surface detail are not calculated.[3] A common fix is to use an additional texture channel holding an ambient occlusion field. This, however, only provides a darkening effect that is unconnected to the direction of the light acting on the surface.[3]
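The limitation described above can be made concrete with a minimal shading sketch: the ambient occlusion value is a per-texel scalar baked into an extra channel, and shading simply multiplies the lit color by it, so the darkening is identical for every light direction. This is an illustrative sketch, not Valve's technique from [3].

```python
# Sketch: Lambert shading modulated by a baked, direction-independent
# ambient occlusion factor. Note that `ambient_occlusion` scales the
# result the same way no matter where the light is.

def shade(base_color, n_dot_l, ambient_occlusion):
    """base_color: RGB tuple; n_dot_l: cosine of the light angle."""
    lit = tuple(c * max(n_dot_l, 0.0) for c in base_color)
    return tuple(c * ambient_occlusion for c in lit)

# A crevice texel (AO = 0.5) is darkened equally under any lighting:
print(shade((1.0, 0.5, 0.25), n_dot_l=1.0, ambient_occlusion=0.5))
# -> (0.5, 0.25, 0.125)
```

Green's radiosity normal mapping addresses exactly this shortcoming by making the baked occlusion directional rather than a single scalar.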

History

Shadow volume was proposed by Frank Crow in 1977.[4] The advantage of a shadow volume was that it could be used to shadow everything, including the casting object itself.

References

  1. Timonen, V., & Westerholm, J. (2010). "Scalable Height Field Self-Shadowing". Computer Graphics Forum, 29(2), 723–731. doi:10.1111/j.1467-8659.2009.01642.x
  2. Bertails, F., Ménier, C., & Cani, M.-P. (2005). "A Practical Self-Shadowing Algorithm for Interactive Hair Animation". (PDF)
  3. Green, Chris. "Efficient Self-Shadowed Radiosity Normal Mapping" (PDF). valvesoftware.com. Archived from the original (PDF) on March 16, 2015.
  4. Crow, Franklin C. (1977). "Shadow Algorithms for Computer Graphics". Computer Graphics (SIGGRAPH '77 Proceedings), 11(2), 242–248.