Category Archives: Research

Modified Lambert in WebGL

I wrote a little demo with a modified Lambert function in WebGL:

The illumination is clamped into the “brighter” colors and expanded into the (usually unlit) back areas, which lets you light the whole object. The light can be toggled between a (fixed) light source and the camera position. This is useful if you want to (cheaply) avoid “dark parts” on 3D models while still showing their structure. It could be further tweaked (not implemented yet!) by adding gamma correction or a two-tone mapping.
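A minimal sketch of such a wrapped Lambert term (function and parameter names are mine; the demo’s exact formula may differ):

```glsl
// Hypothetical wrapped-Lambert sketch; not the demo's actual code.
// wrap = 0.0 gives the standard clamped Lambert term; wrap = 1.0
// expands the light all the way into the (normally unlit) back areas.
float wrappedLambert(vec3 n, vec3 l, float wrap)
{
    float ndotl = dot(normalize(n), normalize(l));
    return clamp((ndotl + wrap) / (1.0 + wrap), 0.0, 1.0);
}
```

With the light toggled to the camera, `l` is simply the normalized vector from the fragment to the camera position.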

Video: Spectral rendering on the GPU – soap bubble!

I came up with an idea for a soap-bubble shader!

Two-sided soap bubble thin-film interference

More generally speaking, it does dynamic thin-film interference on hollow, convex, two-sided objects in a deferred rendering configuration. The front and back faces are rendered in two passes and the ray is traced through the object. To give you a better idea, I screen-captured my demo program and uploaded it:

As before, all physical parameters of the shader can be changed at runtime (as seen in the above video). Bear in mind that the program uses OpenGL 3.2 core and runs on my late-2011 iMac – it uses a Radeon 6960M, a mobile GPU.
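A rough sketch of what the resolve step of such a two-pass configuration could look like (all buffer and uniform names are hypothetical, and the actual demo traces the refracted ray to the real back-face hit point instead of reusing the same pixel):

```glsl
// Hypothetical resolve pass: pass 1 wrote front-face normals to a
// G-buffer texture, pass 2 wrote back-face normals to another.
uniform sampler2D uFrontNormal;
uniform sampler2D uBackNormal;
uniform float uLambdaNm; // current wavelength sample in nm
uniform float uFilmNm;   // film thickness in nm
uniform float uIor;      // refraction index of the film

// Equal-amplitude two-beam interference weight in [0,1].
float film(float cosThetaI)
{
    float sinT = sqrt(max(1.0 - cosThetaI * cosThetaI, 0.0)) / uIor;
    float opd  = 2.0 * uIor * uFilmNm * sqrt(max(1.0 - sinT * sinT, 0.0));
    return 0.5 + 0.5 * cos(6.2831853 * opd / uLambdaNm + 3.1415927);
}

float shadeBothSides(vec2 uv, vec3 viewDir)
{
    vec3 nFront = normalize(texture(uFrontNormal, uv).xyz * 2.0 - 1.0);
    vec3 nBack  = normalize(texture(uBackNormal,  uv).xyz * 2.0 - 1.0);

    float front = film(abs(dot(viewDir, nFront)));
    float back  = film(abs(dot(viewDir, nBack)));

    // The back film only sees the light the front film lets through.
    return front + (1.0 - front) * back;
}
```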

The cube maps are from Emil Persson and can be found at Humus’s textures. The head model is © I-R Entertainment Ltd. and taken from Morgan McGuire’s Graphics Data.

Spectral Rendering on the GPU – now with bumps

My GLSL spectral rendering shader (here & here) is now able to do some (environmental) bump mapping. It does physical wavelength-based thin-film interference with sampled SPDs on the GPU in real time. All parameters are adjustable at runtime. I’ve implemented five distinct “film setups” that differ in how they interpret/change the film thickness and the normals of the film and object surface. The setups are outlined in this sketch:

[Sketch: the five film setups]

Here’s a textual description (a code sketch of the setups follows the list):

  1. A simple film with constant thickness covers the object.
  2. The surface of the object is “bumped” and the film has a constant thickness.
  3. Both object and film use regular normals, but the thickness of the film varies. This is fake but looks nice. :D
  4. The object remains unchanged while the film (thickness + normal) is bumped. Basically setup 3, but physically correct.
  5. The object is bumped and the thickness is modulated such that the film’s surface is “flat” (like setup 1). The inverse scenario of setup 4.
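To make this concrete, a rough sketch of how each setup could pick its per-fragment inputs (names and the exact thickness modulation are my guesses, not the shader’s actual code):

```glsl
// Hypothetical per-fragment selection of film thickness and normals.
// nGeom = interpolated surface normal, nBump = bump-mapped normal,
// d0 = base film thickness, h = bump-height sample in [0,1].
void filmSetup(int setup, vec3 nGeom, vec3 nBump, float d0, float h,
               out vec3 nObject, out vec3 nFilm, out float thickness)
{
    nObject   = nGeom;  // setup 1: plain object, plain film,
    nFilm     = nGeom;  //          constant thickness
    thickness = d0;

    if (setup == 2) {           // bumped object under a constant film
        nObject = nBump;
        nFilm   = nBump;
    } else if (setup == 3) {    // plain normals, varying thickness (fake)
        thickness = d0 * h;
    } else if (setup == 4) {    // the film itself is bumped: thickness + normal
        nFilm     = nBump;
        thickness = d0 * h;
    } else if (setup == 5) {    // bumped object, film fills the dips
        nObject   = nBump;      // so its outer surface stays "flat"
        thickness = d0 * (2.0 - h);
    }
}
```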

Finally, some screenshots from the same angle with the same film settings: refraction index = 1.09, film thickness = 550–1170 nm, CIE D65 light, silver material. The matte Utah teapot is shown on the left, the shiny one on the right:

[Screenshots: matte and shiny Utah teapots for setups 1–5]

I hope to record a demo video soon (~ this week). While going through the source code I also noticed that there’s no dispersion, as the refraction index is constant. This should be easy to change; I just need some data.
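Once the data is there, a simple way to model a wavelength-dependent index would be Cauchy’s two-term equation; a sketch with illustrative coefficients (roughly BK7-like glass, not measured film data):

```glsl
// Cauchy's two-term equation: n(lambda) = A + B / lambda^2.
// A and B are material constants; these values are illustrative
// (roughly BK7-like glass), not measured thin-film data.
float refractionIndex(float lambdaNm)
{
    const float A = 1.5046;
    const float B = 4200.0; // nm^2
    return A + B / (lambdaNm * lambdaNm);
}
```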

Also, while the Uffizi gallery is nice, I’d like to try out some other cube maps (maybe from Humus) and some other models. Maybe a wobbling soap bubble?

Wavelength-based thin-film interference

I managed to successfully port some Cg shaders over to GLSL, gave them a spring-cleaning and integrated them into an OpenGL 3.2 core renderer (yeah, OS X 10.8 if you have to ask – bummer). It is kinda “the real deal”: thin-film interference evaluated per wavelength, with interactive refraction indices, film thickness, and light + material spectral power distributions. I might have more results soon; until then, the images below will have to do:

[Images: Alien – smooth surface, film 400 nm · semi-rough surface, film 340 nm · rough surface, film 430 nm]
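The per-wavelength core of such a shader is small; a simplified sketch of the idea (equal-amplitude two-beam interference, leaving out the Fresnel weighting and the SPD sampling the full shader does):

```glsl
// Simplified per-wavelength thin-film term; a sketch of the idea,
// not the ported shader itself.
float thinFilm(float lambdaNm, float thicknessNm, float n, float cosThetaI)
{
    // Snell's law gives the angle inside the film.
    float sinThetaT = sqrt(max(1.0 - cosThetaI * cosThetaI, 0.0)) / n;
    float cosThetaT = sqrt(max(1.0 - sinThetaT * sinThetaT, 0.0));

    // Optical path difference between the two reflected beams, plus
    // the half-wave shift of the reflection at the denser medium.
    float opd   = 2.0 * n * thicknessNm * cosThetaT;
    float phase = 6.2831853 * opd / lambdaNm + 3.1415927;

    // Two-beam interference, normalized to [0,1].
    return 0.5 + 0.5 * cos(phase);
}
```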

Besides, I still have other work to do: writing down my Ph.D. thesis…

SpectralLibrarian

I just uploaded my SpectralLibrarian to github.com:
https://github.com/numb3r23/SpectralLibrarian

It’s written in C# and allows you to create, edit and manage spectral data.

  • Spectra can be entered manually, imported from CSV, captured from an image, ….
  • Libraries can be saved/loaded/exchanged
  • Spectra can be grouped into collections and stored in various file types
  • Spectral points are interpreted as peaks or as linear interpolation guides
  • A few light & material spectra are provided in the data subfolder

I needed it for a spectral-rendering project to manage and export textures containing the spectral data.
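On the GPU side those textures are then sampled per wavelength; assuming a 1D texture spanning the visible range (the actual export layout may differ), the lookup is roughly:

```glsl
// Hypothetical lookup of an exported SPD texture (assumed layout:
// a 1D texture covering 380-780 nm; the real export format of
// SpectralLibrarian may differ).
uniform sampler1D uSpd;

float sampleSpd(float lambdaNm)
{
    float u = (lambdaNm - 380.0) / (780.0 - 380.0);
    return texture(uSpd, clamp(u, 0.0, 1.0)).r;
}
```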

A GPU-based approach for scientific illustrations [3D-NordOst ’12]

Title
A GPU-based approach for scientific illustrations

Abstract
Scientific illustrations have been used for centuries in several scientific domains to communicate abstract theories, and they are still created manually. In this article we present a GPU-based illustration pipeline with which such illustrations can be created and rendered in an interactive process. This is achieved by combining current non-photorealistic rendering algorithms with a manual abstraction mechanism and a layer system to combine multiple techniques. The pipeline can be executed completely on the GPU.

@ 15. Anwendungsbezogener Workshop zur Erfassung, Modellierung, Verarbeitung und Auswertung von 3D-Daten, Berlin (2012)
3DNordOst2012.bib

Interactive generation of (paleontological) scientific illustrations from 3D-models [CEIG ’12]

Title
Interactive generation of (paleontological) scientific illustrations from 3D-models

Abstract
Scientific illustrations play an important role in the paleontological domain and are complex illustrations that are drawn manually. The artist creates illustrations that feature common expressive painting techniques: outlines, both irregular and periodic stippling, as well as an abstractly shaded surface in the background. We present a semi-automatic tool to generate these illustrations from 3D models in real time. It is based on an extensible GPU-based pipeline that interactively renders characteristics into image layers, which are combined in an image-editing fashion. The user can choose the techniques used to render each layer and manipulate its key aspects. Using 3D and 2D painting, the artist can still interact with the result and adjust it to his or her liking.

@ XXII Congreso Español de Informática Gráfica, Jaén
CEIG2012.bib

Interactive generation of (paleontological) scientific illustrations from 3D-models [SIGGRAPH ’12]

Title
Interactive generation of (paleontological) scientific illustrations from 3D-models

Abstract
Scientific illustrations play an important role in the research of natural sciences and are complex drawings that are created manually. The desired images usually combine several drawing techniques such as outlines, distinctive types of stippling or an abstractly shaded surface in a single image, as is exemplified in figure 2. Each of these elements aims to focus the viewer’s attention on certain details such as shape, curvature or surface structure. Since, to our knowledge, no tool exists with which similar images can be produced, and the creation of such an illustration is a tedious and time-consuming task, we examined paleontological scientific illustrations together with researchers and artists from the Senckenberg Research Institute and Natural History Museum, with the intention of creating an application to render scientific illustrations from a 3D model.

[Poster: SIGGRAPH 2012]

@ SIGGRAPH 2012, Los Angeles
SIGGRAPH2012.bib

selected as a semifinalist of the ACM SIGGRAPH Student Research Competition (SRC)

Surface reconstruction and artistic rendering of small paleontological specimens [NPAR ’11]

Title
Surface reconstruction and artistic rendering of small paleontological specimens

Abstract
An important domain of paleontological research is the creation of hand-drawn artistic images of small fossil specimens. In this paper we discuss the process of writing an adequate tool to semi-automatically create the desired images and export a reconstructed 3D-model. First we reconstruct the three-dimensional surface entirely on the GPU from a series of images. Then we render the virtual specimen with Non-Photorealistic-Rendering algorithms that are adapted to recreate the impression of manually drawn images. These algorithms are parameterized with respect to the requirements of a user from a paleontological background.

[Poster: NPAR 2011]

@ Non-Photorealistic Animation and Rendering 2011, Vancouver
NPAR2011.bib

received an Honourable Mention; also on display at SIGGRAPH 2011 among the “Best posters at NPAR”

3D-Oberflächen-Rekonstruktion und plastisches Rendern aus Bilderserien [FWS ’10]

Title
3D-Oberflächen-Rekonstruktion und plastisches Rendern aus Bilderserien

Abstract
This paper deals with the 3D reconstruction of the surface of microscopically small objects. The reconstruction follows the method introduced by Nayar, which is used here in an improved form. A further goal is to visualize the reconstructed surface plastically with different rendering techniques, including photorealistic, schematic (wireframe, line or point models) and technical (non-photorealistic rendering) representations. As a special requirement, the entire processing pipeline is to be offloaded to the GPU as far as possible.
The input is a series of images of an object captured under a microscope. Each image is taken with identical camera parameters but a different lens-to-object distance, so in each image of the series a different region of the object is in focus. By assigning height information to the in-focus pixels, a convex surface model can be reconstructed, on whose basis further techniques for emphasizing features can be applied.

@ 16. Workshop Farbbildverarbeitung 2010, Ilmenau
FWS2010.bib