Visualization and Labeling of the Visible Human™ Dataset: Challenges & Resolves

R. Mullick and H. T. Nguyen

Abstract:

The Visible Human™ is a vast resource, which few methods can fully explore and analyze. We aim to completely label it and display it volumetrically; here we discuss the main obstacles and our means of overcoming them. Our segmentation into over 350 anatomical regions spans the various body systems. Volume rendering by a memory-independent, partitioned, color-and-translucency scheme gives realistic human images, with many possibilities for medical education and "edutainment".
Keywords: Human Anatomy, Volume Rendering, Visualization, Visible Human, Segmentation

Introduction

The Visible Human™ dataset (VHD) from the National Library of Medicine [1, 2] is the much-awaited standard for medical image visualization, analysis, and registration. The dataset consists of mutually registered MRI, CT, and high-resolution cryogenic macrotome sections of a male and a female cadaver. Numerous research and commercial sites are developing tools to browse, visualize, and label this gigantic 3D dataset. Although details of the segmentation methods used at the various centers are not available in the public domain, it appears that most groups have based their labeling of the VHD on one of the RGB channels of the macrotome sections. A majority of the groups report using manual methods to achieve this monumental task. The sheer magnitude of the data poses numerous challenges in management, visualization, and analysis. We have embarked on the task of completely labeling this data into its various anatomical regions, and have developed visualization techniques to volume render it in its entirety. Our approach allows the user to independently specify the various rendering parameters, and the relative translucency, of each segmented anatomical structure (Figure 3). We present an overview of our methods for segmentation and visualization of this data. The challenges faced at each step of the process are discussed in the relevant sections, followed by our proposed ways to resolve them.

Fig. 1 Overview of the information flow

Information Flow

We have used the full-resolution RGB data for segmentation of the VHD sections. The labeling was done manually by drawing Bézier curves for each anatomical region in the image. Corresponding slices of the segmented (α) volume and the photo RGB sections were then registered to generate the RGBα volume, which was used for all volume rendering. An overview of the information flow is presented in Figure 1.

Segmentation & Labeling

Anatomical regions in each RGB section of the data were outlined manually by students from the Anatomy, Medicine, and other science departments at the National University of Singapore, under the supervision of CIeMed staff. For each structure, the student sketched out one or more Bézier curves, such that the interior of the region corresponded exactly to that of the anatomical feature. This representation of the labeling has infinite resolution and offers easy scalability of any structure, with an option to generate its geometric model. Figure 2(a) shows the liver region outlined for slice 1500 of the dataset. Thereafter, a labeled image is generated for each slice using the control points and label of each contour. This is done in two steps: (i) A connected raster version of each curve is drawn into a blank (or partially labeled) image; (ii) then, an efficient region filling algorithm is used to fill/erase the interiors of each curve with a region-specific label represented by an intensity value. The region filling algorithm robustly handles special cases such as regions with single or multiple hollow interiors.
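Step (i) above can be illustrated with a short sketch. The code below is our own minimal reconstruction, not the authors' implementation; the segment tuple format and the fixed sampling density are assumptions:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    x = s**3 * p0[0] + 3 * s**2 * t * p1[0] + 3 * s * t**2 * p2[0] + t**3 * p3[0]
    y = s**3 * p0[1] + 3 * s**2 * t * p1[1] + 3 * s * t**2 * p2[1] + t**3 * p3[1]
    return x, y

def rasterize_contour(segments, width, height, label, image):
    """Draw a raster version of a closed Bezier contour into `image`
    (step i of the labeling) by dense parameter sampling.  Each element
    of `segments` is a (p0, p1, p2, p3) tuple of control points."""
    for p0, p1, p2, p3 in segments:
        for i in range(101):  # fixed sampling; assumed for illustration
            x, y = bezier_point(p0, p1, p2, p3, i / 100.0)
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < width and 0 <= yi < height:
                image[yi][xi] = label
```

In practice the sampling step would be chosen adaptively so that consecutive samples land on adjacent pixels, keeping the rasterized contour connected as the region filling stage requires.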

The stability of the region filling algorithm comes from a two-mode filling technique. Once the closed contours corresponding to each label are drawn into an image, the region filling algorithm begins by employing the traditional parity check method [3] in a left-to-right raster scan. Then, at points of possible ambiguity, defined as boundary locations and pixels adjacent to the boundary, the algorithm employs a "point in/outside polygon" test in order to correctly label the underlying region. The result of labeling the slice in Figure 2(a) is presented in Figure 2(b).
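The point-in-polygon fallback can be sketched with the standard even-odd crossing test (the same parity idea applied per point). This is an illustrative reconstruction under an assumed vertex-list input, not the paper's code:

```python
def point_in_polygon(x, y, poly):
    """Even-odd (parity) test: is the point (x, y) inside the closed
    polygon given as a list of (x, y) vertices?  Counts crossings of a
    horizontal ray cast to the right of the query point."""
    inside = False
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        if (y0 > y) != (y1 > y):  # edge straddles the ray's height
            x_cross = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x_cross > x:       # crossing lies to the right
                inside = not inside
    return inside
```

Because this test is evaluated per point rather than per scanline, it resolves exactly the boundary-adjacent cases where a raster parity scan can miscount.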

Fig. 2 (a) Liver region outlined on slice 1500; (b) resulting labeled image; (c) opacity ramp within the blur range

Although our segmentation of this dataset is an ongoing process, we have labeled all the visible anatomical structures comprising the primary human systems. They are: (i) Central Nervous; (ii) Respiratory; (iii) Digestive; (iv) Excretory; (v) Skeletal; (vi) Circulatory; (vii) Endocrine; and (viii) Sensory organs. A Java Applet to view the labeled VH data is available on the WWW at http://ciemed.iss.nus.sg/research/brain/IIBA/JavaAtlas/VHD.html.

A majority of the difficulties in this phase were linked to one main issue: the sheer size of the dataset. Flawlessly storing, modifying, and managing this data requires significant amounts of time, human resources, and hardware. To ease some of this burden, an in-house interactive volume browser was developed. The browser facilitates efficient resource management, with easy search options for volume data and associated anatomical structures. The magnitude of the dataset also pushes hardware memory requirements far beyond typical limits, leaving few practical ways to segment or visualize a given chunk of data at one time.

Volume Rendering

We have developed a new volume rendering technique called Partitioned RGBα Volume Rendering of Segmented Data (ParVo) to visualize the segmented photo images of the VHD. The unique features of this approach are described below.

The volume rendering method is based on object-space projection rendering. This basic approach has been modified to resample all voxels onto planes transverse to the viewing direction, and then to project them onto the viewing plane in back-to-front or front-to-back order using a color blending equation [4]. A detailed description of this method is given in [5, 8]. The following paragraphs highlight the qualitative enhancements made to it.
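The blending step can be sketched with the standard over operator used in [4]. The single-channel scalar form below is a simplification for illustration (the actual renderer blends R, G, and B individually):

```python
def composite_back_to_front(samples):
    """Back-to-front compositing of (color, alpha) samples along a ray
    with the over operator: C_out = C * a + C_in * (1 - a).
    `samples` must be ordered from the farthest sample to the nearest."""
    color = 0.0  # black background
    for c, a in samples:
        color = c * a + color * (1.0 - a)
    return color
```

A fully opaque sample (alpha = 1) completely replaces whatever has been accumulated behind it, which is what makes the traversal order matter.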

  1. A special 3D blurring operation is applied to the α-channel to remove aliasing effects caused by the round-off used in interpolation during resampling. Each anatomical structure, represented by a specific intensity value v, is allowed a unique range R for the blur operation. The blur at a voxel x is carried out only if the 3×3×3 neighborhood surrounding x contains only one label (excluding the background). The operation averages the transformed values T(l(y)) of the labels l(y) in the 3×3×3 neighborhood N(x): α(x) = (1/27) Σ_{y ∈ N(x)} T(l(y)), where the transformation T limits the output of the blur operation to fall within the specified range R. This step is applied to the α-volume only once, prior to rendering.
  2. An opacity value between 0 and 1 is assigned to each label. To smooth out the aliasing effect at the surface, the opacity within the blurring range of a label is linearly ramped from 0 up to the object's opacity value. (See Figure 2(c))
  3. Up to four light sources can be specified, with independent control of their location, intensity, color, attenuation, and specular parameters based on the Phong lighting model.
  4. The surface normal for shading and lighting is approximated based on the gray-scale value computed from the R, G, and B values using the equation g=0.299R+0.587G+0.114B. Four methods of gradient calculation are provided: central difference, adaptive gray-level, Zucker-Hummel[7], and 3D Sobel edge detector which is a 3D extension of the 2D Sobel operator [6].
  5. Each channel (R, G, and B) is blended, shaded, and lighted individually.
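Item 4's gray-scale conversion and the simplest of the four gradient options, the central difference, can be sketched as follows; the nested-list volume layout is an assumption for illustration:

```python
def luma(r, g, b):
    """Gray value used for shading: g = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def central_difference_gradient(vol, x, y, z):
    """Approximate the surface-normal direction at voxel (x, y, z) by
    central differences on the gray-scale volume `vol`, stored here as
    a nested list indexed [z][y][x]."""
    gx = (vol[z][y][x + 1] - vol[z][y][x - 1]) / 2.0
    gy = (vol[z][y + 1][x] - vol[z][y - 1][x]) / 2.0
    gz = (vol[z + 1][y][x] - vol[z - 1][y][x]) / 2.0
    return gx, gy, gz
```

The resulting gradient vector would then be normalized and used as the surface normal in the Phong lighting model of item 3.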

A voxel-based volume of the VHD at 1-mm resolution was created from the photo images (as rendered in Figure 3), totalling 586 × 340 × 1878 × [3+2+1] bytes ≈ 2.24 Gbytes, where each voxel consists of three bytes for RGB, a two-byte label, and one byte for the grey channel. Hard disk requirements (~17 Gbytes) prevented us from creating, and thus rendering, the entire dataset at full resolution. With a volume dataset of this magnitude, it is not practical to require that the entire volume be resident in memory; hence a divide-and-conquer approach called Partitioned Volume Rendering (PaVe4-16) [5] was adopted. PaVe4-16 was originally designed for multi-processor systems, but has been applied quite naturally and efficiently to a single-processor architecture (ParVo [8]) to handle large datasets. ParVo renders n slices of data at a time (n depends on the available memory and the memory required to hold each slice), merging (blending) each partition in order to form the final image. The slice, and hence partition, ordering depends on the viewing direction. ParVo has allowed us to render the 1-mm resolution, multi-Gbyte VHD volume on an SGI Indigo-2 with 32 MB of main memory in under 3 hours (a time estimate for unoptimized code).
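The partitioning idea can be illustrated along a single ray: because the accumulated color can be carried from one partition to the next, blending n back-to-front samples at a time gives the same result as blending the whole ray at once. This is our own sketch, and the (color, alpha) sample format is an assumption:

```python
def composite(samples, start=0.0):
    """Back-to-front over-compositing of (color, alpha) samples,
    starting from an already-accumulated color `start`."""
    color = start
    for c, a in samples:
        color = c * a + color * (1.0 - a)
    return color

def partitioned_render(samples, n):
    """ParVo-style sketch: process the ray's samples n at a time (one
    memory-resident partition per pass), carrying the accumulated
    color into the next partition in back-to-front order."""
    color = 0.0
    for i in range(0, len(samples), n):
        color = composite(samples[i:i + n], start=color)
    return color
```

The same carry-over applies per pixel when whole slabs of slices are rendered and merged, which is what lets the volume be streamed from disk n slices at a time instead of held in memory.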

Results & Future Directions

A rendition highlighting many of the over 350 segmented anatomical structures is presented in Figure 3. This translucent view (15° roll) of the human form clearly depicts the relative and absolute positions of bodily organs with respect to the skin surface and the skeletal system. Most of the images presented here have been rendered with a single light source, the Zucker-Hummel gradient estimate, and back-to-front compositing.

Figure 5 is a detailed rendering of the whole body from a 586×340×1878 RGB volume. An isolated illustration of the pelvis region is depicted in Figure 4; this was rendered from a 1-mm-cubic-voxel RGB volume of size 320×192×300. An unshaded, unlighted visualization of the neuroanatomy is illustrated in Figure 7. Pseudo-color volume renderings of the blurred α-volume for the digestive and cardio-pulmonary systems are shown in Figures 6 and 8, respectively. (Additional images and animations can be obtained via the WWW site at http://ciemed.iss.nus.sg/research/human_av/human_av.html)

Naming of each skeletal tissue and labeling of the muscular system are also being completed. Other, more efficient methods and fidelity-enhancement techniques for rendering this type of data are under investigation.

Transformation of the raw VHD into structured geometric objects is vital for all uses beyond passive display. For a simulated catheter to explore its arteries, the arteries' geometry must be known before it can be translated into force walls that limit the catheter's motion. For the VHD head to smile realistically, the muscles must be mapped and transformed into biomechanical models that contract and bunch up according to suitable virtual force laws and nerve impulses. Transforming the VH into an electronic 'crash dummy' depends on passive biomechanics that follow the segmentation into bone, sinus, soft tissue, and so on. All such things require, before any progress can be made, an effective segmentation that responds to discontinuities in any data channel (red, green, blue, or scan density), and efficient manipulation of the mass of data involved. (Recall also that succeeding datasets will be far larger.) The toolkit described here is a key element in CIeMed's movement beyond visualization, to actively modeled 3D medicine.

References

1
Spitzer, V., et al.: The Visible Human Male: A Technical Report. Journal of the American Medical Informatics Association (1995)
2
http://www.nlm.nih.gov/extramural_research.dir/visible_human.html
3
Pavlidis, T.: Algorithms for Graphics and Image Processing. Computer Science Press, Rockville, MD, (1982).
4
Levoy, M.: Display of surfaces from Volume Data. IEEE Computer Graphics and Applications. 8.5 (1988) 29-37
5
Nguyen, H. T., Srinivasan R.: PaVe4-16: A Distributed Volume Visualization Technique. TR94-135-0 Technical Report, Inst. of Sys. Science, National Univ. of Singapore, Singapore (1993)
6
Jain, A. K.: Fundamentals of Digital Image Processing. Prentice Hall, Englewood Cliffs, NJ (1989)
7
Monga, O., Deriche, R., Malandain, G., Cocquerez: Recursive filtering and edge closing: 2 primary tools for 3D edge detection. Image and Vision Comp., 9.4 (1991)
8
Nguyen, H. T., Mullick R., Srinivasan, R.: Partitioned Volume Rendering (ParVo): An efficient approach to visualizing large datasets. Manuscript in preparation.


Full figure
Fig. 3 Translucent visualization of the labeled VH dataset

Figure of pelvis
Fig. 4 Pelvis structure

Figure of Skin
Fig. 5 Whole body/skin visualization

Figure of Digestive System
Fig. 6 Pseudo-color labeled visualization of the blurred α-volume of the digestive system

Figure of brain
Fig. 7 CNS visualization

Figure of thorax
Fig. 8 Thoracic region

Acknowledgments

The authors thank Tim Poston and Raghu Raghavan for their valuable technical and editorial comments. We are grateful to Ms. Jin Xiaoyang and all the NUS students who enthusiastically segmented the data with great patience and precision. We also express our gratitude to Chun Pong Yu and Pingli Pang who developed the keyframe animation, and built and integrated the GUI for the rendering system. Thanks to Seema Mullick for her support in segmenting data and in matters related to design aesthetics. This work would not have been possible without the support of the National Institutes of Health for the Visible Human Project.