Author Archives: numb3r23

Spectral Rendering on the GPU – now with bumps

My GLSL spectral rendering shader (here & here) is now able to do some (environmental) bump mapping. It does physical wavelength-based thinfilm interference with sampled SPDs on the GPU in real time. All parameters are adjustable at runtime. I’ve implemented five distinct “film setups” that differ in how they interpret/change the film thickness and the normals of the film and object surface. The setups are outlined in this sketch:


Here’s a textual description:

  1. A simple film with constant thickness covers the object.
  2. The surface of the object is “bumped” and the film has a constant thickness.
  3. Both object and film use regular normals, but the thickness of the film varies. This is fake but looks nice :D.
  4. The object remains unchanged while the film (thickness + normal) is bumped. Basically setup 3, but correct.
  5. The object is bumped and the thickness is modulated such that the film’s surface is “flat” (like setup 1). The inverse scenario of setup 4.

Finally, some screenshots from the same angle with the same film settings: refraction index = 1.09, film thickness = 550–1170 nm, CIE D65 light, silver material. The matte Utah teapot is shown on the left, the shiny teapot on the right:

Teapot_Matte_Setup1 / Teapot_Shiny_Setup1
Teapot_Matte_Setup2 / Teapot_Shiny_Setup2
Teapot_Matte_Setup3 / Teapot_Shiny_Setup3
Teapot_Matte_Setup4 / Teapot_Shiny_Setup4
Teapot_Matte_Setup5 / Teapot_Shiny_Setup5


I hope to be able to record a demo video soon (~ this week). While going through the source code I also noticed that there’s no dispersion, as the refraction index is constant. This should be easy to add, I just need some data.
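A first step towards dispersion could be a Cauchy-type fit, n(λ) = A + B/λ². The coefficients below are illustrative only (roughly in the range of BK7-like glass, with B in nm²) and would have to be replaced with measured data:

```c
/* Cauchy dispersion sketch: n(lambda) = A + B / lambda^2, lambda in nm.
 * A and B are material coefficients; the values used in the tests are
 * illustrative, not measured data. */
double cauchy_n(double lam_nm, double A, double B)
{
    return A + B / (lam_nm * lam_nm);
}
```

Plugged into the interference evaluation, this makes the refraction index a per-sample quantity instead of one uniform, with blue wavelengths refracting more strongly than red.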

Also, while the Uffizi gallery is nice, I’d like to try out some other cubemaps (from Humus) and some other models. Maybe a wobbling soap bubble?

wavelength-based thinfilm interference

I managed to successfully port some Cg shaders over to GLSL, gave them a spring-cleaning and integrated them into an OpenGL 3.2 core renderer (yeah, OS X 10.8 if you have to ask – bummer). It is kinda “the real deal”: thinfilm interference evaluated per lambda, with interactive refraction indices, light + material spectral power distributions and film thickness. I might have more results soon; for now the images below have to do:

Alien, smooth surface, film: 400 nm
Alien, semi-rough surface, film: 340 nm
Alien, rough surface, film: 430 nm

Besides, I still have other work to do: writing down my Ph.D. thesis…


I just uploaded my SpectralLibrarian to

It’s written in C# and allows you to create, edit and manage spectral data.

  • Spectra can be entered manually, imported from CSV, captured from an image, ….
  • Libraries can be saved, loaded and exchanged
  • Spectra can be grouped into collections and stored in various file types
  • Spectral points are interpreted either as peaks or as linear interpolation guides
  • A few light & material spectra are provided in the data subfolder

I needed it for a spectral-rendering project to manage and export textures containing the spectral data.
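The interpolation-guide mode amounts to evaluating a sampled spectrum at an arbitrary wavelength by linearly interpolating between its stored points, clamping outside the sampled range. A self-contained sketch (hypothetical names, not the tool’s actual C# types):

```c
/* One sampled point of a spectral power distribution. */
typedef struct {
    double lam;   /* wavelength in nm */
    double val;   /* sampled value   */
} SpdPoint;

/* Evaluate an SPD given as sorted (wavelength, value) points by linear
 * interpolation; values outside the sampled range are clamped. */
double spd_eval(const SpdPoint *pts, int n, double lam)
{
    if (lam <= pts[0].lam)     return pts[0].val;
    if (lam >= pts[n - 1].lam) return pts[n - 1].val;
    for (int i = 1; i < n; ++i) {
        if (lam <= pts[i].lam) {
            double t = (lam - pts[i - 1].lam) / (pts[i].lam - pts[i - 1].lam);
            return pts[i - 1].val + t * (pts[i].val - pts[i - 1].val);
        }
    }
    return pts[n - 1].val;    /* unreachable for sorted input */
}
```

Baking such evaluations at fixed wavelength steps into a 1D texture row per spectrum is one straightforward way to hand the data to a shader.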


As I was working with libSOIL (for a Mac makefile) an exception occurred and I couldn’t load anything. Turns out libSOIL is not OpenGL 3.2 core compatible. The main reason is the beautiful safety net I encountered in the function

int query_NPOT_capability( void ){
  if((NULL == strstr( (char const*)glGetString( GL_EXTENSIONS ),
      "GL_ARB_texture_non_power_of_two" ) ))
  ...

This mechanism can be found elsewhere in the library too. In a core profile context glGetString( GL_EXTENSIONS ) is no longer valid and returns NULL, so the strstr call crashes. As non-power-of-two textures are definitely part of OpenGL 3.2 core, the problem was easy to resolve (remove the check) and it works now as expected. Yay!
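If one wanted to keep the check instead of removing it, the core-profile way is to iterate the individual extension names via glGetStringi(GL_EXTENSIONS, i) for i up to GL_NUM_EXTENSIONS. The lookup itself, factored out of the GL calls so it is self-contained here, would look roughly like this:

```c
#include <string.h>

/* Core-profile-friendly lookup sketch: compare against individual extension
 * names (as delivered one by one by glGetStringi) instead of running strstr
 * over a single big extension string. */
int has_extension(const char **names, int count, const char *wanted)
{
    for (int i = 0; i < count; ++i) {
        if (strcmp(names[i], wanted) == 0)
            return 1;
    }
    return 0;
}
```

In real GL code, `names`/`count` would be filled by querying GL_NUM_EXTENSIONS with glGetIntegerv and calling glGetStringi per index.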

A GPU-based approach for scientific illustrations [3D-NordOst ’12]

A GPU-based approach for scientific illustrations

Scientific illustrations have been used for centuries in several scientific domains to communicate abstract theories, and they are still created manually. In this article we present a GPU-based illustration pipeline with which such illustrations can be created and rendered in an interactive process. This is achieved by combining current non-photorealistic rendering algorithms with a manual abstraction mechanism and a layer system to combine multiple techniques. The pipeline can be executed completely on the GPU.

@ 15. Anwendungsbezogener Workshop zur Erfassung, Modellierung, Verarbeitung und Auswertung von 3D-Daten, Berlin (2012)

Interactive generation of (paleontological) scientific illustrations from 3D-models [CEIG ’12]

Interactive generation of (paleontological) scientific illustrations from 3D-models

Scientific illustrations play an important role in the paleontological domain and are complex illustrations that are drawn manually. The artist creates illustrations that feature common expressive painting techniques: outlines, both irregular and periodic stippling, as well as an abstract shaded surface in the background. We present a semi-automatic tool to generate these illustrations from 3D models in real time. It is based on an extensible GPU-based pipeline that interactively renders characteristics into image layers, which are combined in an image-editing fashion. The user can choose the techniques used to render each layer and manipulate its key aspects. Using 3D and 2D painting the artist can still interact with the result and adjust it to his or her liking.

@ XXII Congreso Español de Informática Gráfica, Jaén

Interactive generation of (paleontological) scientific illustrations from 3D-models [SIGGRAPH ’12]

Interactive generation of (paleontological) scientific illustrations from 3D-models

Scientific illustrations play an important role in the research of the natural sciences and are complex drawings that are created manually. The desired images usually combine several drawing techniques, such as outlines, distinctive types of stippling or an abstract shaded surface, in a single image, as exemplified in figure 2. Each of these elements aims to focus the viewer’s attention on certain details, e.g. shape, curvature or surface structure. Since, to our knowledge, no tool exists with which similar images can be produced, and the creation of such an illustration is a tedious and time-consuming task, we examined paleontological scientific illustrations together with researchers and artists from the Senckenberg Research Institute and Natural History Museum, with the intention of creating an application to render scientific illustrations from a 3D model.

SIGGRAPH2012_Poster_2048 (large)

@ SIGGRAPH 2012, Los Angeles

selected as semifinalist of the ACM SIGGRAPH Student Research Competition (SRC)

Surface reconstruction and artistic rendering of small paleontological specimens [NPAR ’11]

Surface reconstruction and artistic rendering of small paleontological specimens

An important domain of paleontological research is the creation of hand-drawn artistic images of small fossil specimens. In this paper we discuss the process of writing an adequate tool to semi-automatically create the desired images and export a reconstructed 3D-model. First we reconstruct the three-dimensional surface entirely on the GPU from a series of images. Then we render the virtual specimen with Non-Photorealistic-Rendering algorithms that are adapted to recreate the impression of manually drawn images. These algorithms are parameterized with respect to the requirements of a user from a paleontological background.

NPAR2011_Poster (small)

@ Non-Photorealistic Animation and Rendering 2011, Vancouver

received Honourable Mention, also on display at SIGGRAPH 2011 as “Best posters at NPAR”

3D-Oberflächen-Rekonstruktion und plastisches Rendern aus Bilderserien [FWS ’10]

3D-Oberflächen-Rekonstruktion und plastisches Rendern aus Bilderserien

This paper deals with the 3D reconstruction of the surface of microscopically small objects. The reconstruction follows the method presented by Nayar, which is used here in an improved form. A further goal is to visualize the reconstructed surface plastically with different rendering techniques, including photorealistic, schematic (wireframe, line or point models) and technical (non-photorealistic rendering) representations. As a special requirement, the whole processing pipeline should be offloaded to the GPU as far as possible.
The input is a series of images of an object taken under a microscope. Each image is captured with identical camera parameters but a different lens-to-object distance, so in each image of the series a different region of the object is in focus. By assigning height information to the in-focus pixels, a convex surface model can be reconstructed, which serves as the basis for further methods that emphasize surface features.

@ 16. Workshop Farbbildverarbeitung 2010, Ilmenau