Volume Rendering 101

Pictures above from: http://www.cs.utah.edu/~jmk/simian/
There is quite a bit of documentation and there are many papers on volume rendering, but there aren’t many good tutorials on the subject (that I have seen). So this tutorial will try to teach the basics of volume rendering, more specifically volume ray-casting (or volume ray marching).
What is volume ray-casting, you ask? You didn’t? Oh, well I’ll tell you anyway. Volume rendering is a method for directly displaying a 3D scalar field without first fitting an intermediate representation, such as triangles, to the data. How do we render a volume without geometry? There are two traditional approaches: slice-based rendering and volume ray-casting. This tutorial focuses on volume ray-casting, which has several advantages over slice-based rendering: empty-space skipping, projection independence, a simple implementation, and a single rendering pass.
Volume ray-casting (also called ray marching) is exactly how it sounds. [edit: volume ray-casting is not the same as ray-casting ala Doom or Nick’s tutorials] Rays are cast through the volume and it is sampled at equally spaced intervals along each ray. As a ray marches through the volume, the scalar values it samples are mapped to optical properties through a transfer function, which yields an RGBA color containing the emission and absorption coefficients for the current sample point. These colors are then composited using front-to-back or back-to-front alpha blending.
This tutorial will focus specifically on how to intersect a ray with the volume and march it through the volume. In another tutorial I will focus on transfer functions and shading.
First we need to know how to read in the data. The data is simply scalar values (usually integers or floats) stored as slices [x, y, z], where x = width, y = height, and z = depth: each slice is x units wide and y units high, and the total number of slices equals z. The data is commonly stored in 8-bit or 16-bit RAW format. Once we have the data, we need to load it into a volume texture. Here’s how we do the whole process:

//create the scalar volume texture
mVolume = new Texture3D(Game.GraphicsDevice, mWidth, mHeight, mDepth, 0,
                        TextureUsage.Linear, SurfaceFormat.Single);

private void loadRAWFile8(FileStream file)
{
    BinaryReader reader = new BinaryReader(file);

    byte[] buffer = new byte[mWidth * mHeight * mDepth];
    int size = sizeof(byte);

    reader.Read(buffer, 0, size * buffer.Length);
    reader.Close();

    //scale the scalar values to [0, 1]
    mScalars = new float[buffer.Length];
    for (int i = 0; i < buffer.Length; i++)
        mScalars[i] = (float)buffer[i] / byte.MaxValue;
}


In order to render this texture we fit a bounding cube, spanning [0,0,0] to [1,1,1], to the volume; we then render the cube and sample the volume texture inside it. But we also need a way to find the ray that starts at the eye/camera and intersects the cube.
We could always calculate the intersection of the ray from the eye through the current pixel by performing a ray-cube intersection in the shader. But a better and faster way is to render the positions of the front- and back-facing triangles of the cube to textures. This easily gives us the start and end positions of the ray, and in the shader we simply sample those textures to construct the sampling ray.
Here’s what the textures look like (Front, Back, Ray Direction):
And here’s the code to render the front and back positions:

//draw front faces
//draw the pixel positions to the texture
Game.GraphicsDevice.SetRenderTarget(0, mFront);

//render the cube with the position-output shader here

Game.GraphicsDevice.SetRenderTarget(0, null);

//draw back faces
//draw the pixel positions to the texture
Game.GraphicsDevice.SetRenderTarget(0, mBack);
Game.GraphicsDevice.RenderState.CullMode = CullMode.CullCounterClockwiseFace;

//render the cube with the position-output shader here

Game.GraphicsDevice.SetRenderTarget(0, null);
Game.GraphicsDevice.RenderState.CullMode = CullMode.CullClockwiseFace;

Now, to perform the actual ray-casting of the volume, we render the front faces of the cube. In the shader we sample the front and back position textures to find the direction (back - front) and starting position (front) of the ray that will sample the volume. The volume is then sampled iteratively by advancing the current sampling position along the ray in equidistant steps, and we use front-to-back compositing to accumulate the pixel color.

float4 RayCastSimplePS(VertexShaderOutput input) : COLOR0
{
    //calculate projective texture coordinates
    //used to project the front and back position textures onto the cube
    float2 texC = input.pos.xy / input.pos.w;
    texC.x =  0.5f*texC.x + 0.5f;
    texC.y = -0.5f*texC.y + 0.5f;

    float3 front = tex2D(FrontS, texC).xyz;
    float3 back = tex2D(BackS, texC).xyz;

    float3 dir = normalize(back - front);
    float4 pos = float4(front, 0);

    float4 dst = float4(0, 0, 0, 0);
    float4 src = 0;

    float value = 0;

    float3 Step = dir * StepSize;

    for(int i = 0; i < Iterations; i++)
    {
        pos.w = 0;
        value = tex3Dlod(VolumeS, pos).r;

        src = (float4)value;
        src.a *= .5f; //reduce the alpha to have a more transparent result

        //Front to back blending
        // dst.rgb = dst.rgb + (1 - dst.a) * src.a * src.rgb
        // dst.a   = dst.a   + (1 - dst.a) * src.a
        src.rgb *= src.a;
        dst = (1.0f - dst.a)*src + dst;

        //break from the loop when alpha gets high enough
        if(dst.a >= .95f)
            break;

        //advance the current position
        pos.xyz += Step;

        //break if the position is greater than <1, 1, 1>
        if(pos.x > 1.0f || pos.y > 1.0f || pos.z > 1.0f)
            break;
    }

    return dst;
}

And here’s the result when sampling a foot, teapot with a lobster inside, engine, bonsai tree, ct scan of an aneurysm, skull, and a teddy bear:
So, not very colorful, but pretty cool. When we get into transfer functions we will start shading the volumes. The volumes used here can be found at volvis.org.
Refer to the scene setup region in VolumeRayCasting.cs, Volume.cs, and RayCasting.fx for relevant implementation details.
Also, a Shader Model 3.0 card (Nvidia 6600GT or higher) is needed to run the sample.

