Rendering rounded edges.
Manufactured objects (furniture, machines, buildings)
have a lot of sharp edges.
However, these edges are seldom totally sharp,
and usually show a slight rounding.
This rounding easily 'catches' the light,
since it corresponds to a large set of normals.
This is well known by illustrators, who emphasize object silhouettes
to make them more aesthetic or more understandable
(curiously, these techniques are classified as
Non-Photorealistic Rendering (NPR)).
Not rounding edges in very geometrical synthetic scenes
(e.g. interiors) often yields unclear (and unrealistic!) images,
for lack of boundaries
(between two walls lit at close angles, between stair steps...).
When one really wants to take these rounded edges into account,
it is classically done at the geometric level, by inserting thousands
of polygons, even though they will only cover a few pixels in the final image.
In this work, I propose a rounded-edge shader,
which draws a simple line at the location of each edge,
whose thickness corresponds to the apparent size of the rounding,
and whose color accounts for the analytic integration of the reflectance
as the normal varies between the two limit angles.
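As a rough illustration of this idea (my own sketch, assuming a simple Lambertian surface and a 2D slice across the edge; the original shader is not specified here), the integral of the diffuse term over the sweep of normals has a closed form, which can be checked against numerical integration:

```python
import math

def lambert_edge_color(theta1, theta2, theta_l, albedo=0.8):
    """Average Lambertian reflectance as the surface normal sweeps
    from angle theta1 to theta2 across a rounded edge (2D slice).
    theta_l is the light direction; all angles in radians.
    Assumes the light stays on the lit side (cosine >= 0) over the sweep."""
    # closed form: (1 / (t2 - t1)) * integral of cos(t - tl) dt
    return albedo * (math.sin(theta2 - theta_l)
                     - math.sin(theta1 - theta_l)) / (theta2 - theta1)

def lambert_edge_color_numeric(theta1, theta2, theta_l, albedo=0.8, n=10000):
    """Midpoint-rule reference for the integral above."""
    dt = (theta2 - theta1) / n
    s = sum(max(0.0, math.cos(theta1 + (i + 0.5) * dt - theta_l))
            for i in range(n))
    return albedo * s / n
```

The closed form is what makes a shader-level solution attractive: the edge pixel gets the exact average color of the rounding at no sampling cost.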
NB: this work has not been published, since I discovered at the end that
the idea had already been proposed in 1989:
Saito, Shinya and Takahashi, Highlighting Rounded Edges, CGI'89.
Tanaka, Tokiichiro and Takahashi, Precise Rendering Method for Edge Highlighting, CGI'91.
The overall result of the interaction between light and a locally complex
material (details, volumetric diffusion)
can sometimes be obtained analytically,
especially if strong a priori knowledge of the matter distribution is available.
The resulting analytical formula then corresponds to a shader.
On the left, I present a pseudo-volumetric model resembling a cloud.
On the right is an attempt to distinguish the "spatial shape"
(the one that can be seen) from the "reflective shape"
(the one that counts for reflections). The idea is to build for the latter
a convex surface having the same normal distribution as the former.
In the case of pine-tree needles, this allows us to cheaply compute
the total light reflected by a bough of needles.
This particular model was not pursued further.
However, we have worked on the complete analytical integration
of complex shapes, and in particular boughs of needles:
Multiscale Shaders for the Efficient Realistic Rendering of Pine-Trees,
Graphics Interface'00, 2000.
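To illustrate the principle (a toy sketch of mine, not the paper's actual shader, and ignoring occlusion between needles): once the normal distribution of the needle facets has been binned, the aggregate shading can be evaluated from the histogram at a cost independent of the number of needles:

```python
import math
import random

def random_unit():
    """Uniform random direction on the unit sphere."""
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in v))
        if n > 1e-9:
            return [x / n for x in v]

random.seed(1)
facets = [random_unit() for _ in range(5000)]  # needle micro-facet normals

# Exact: sum the Lambert term over every facet (light along +z).
exact = sum(max(0.0, n[2]) for n in facets) / len(facets)

# Shader-style: bin normals by polar angle ONCE, then shade from the bins.
# Evaluation cost now depends on the bin count, not the needle count.
NBINS = 32
bins = [0] * NBINS
for n in facets:
    t = math.acos(max(-1.0, min(1.0, n[2])))  # angle to the light axis
    bins[min(NBINS - 1, int(t / math.pi * NBINS))] += 1
approx = sum(c * max(0.0, math.cos((i + 0.5) * math.pi / NBINS))
             for i, c in enumerate(bins)) / len(facets)
```

For uniformly distributed normals, both values approach 1/4; the point is that `approx` is computed from the precomputed distribution alone, which is the multiscale-shader idea.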
The use of textures (content, mapping) still raises
several open problems (see my
Among them is the definition of the boundaries of textured areas:
since textures are controlled by values at the mesh nodes,
a texture can a priori stop only along edges.
To date, the two current methods thus consist in:
- either trimming the mesh along the desired texture boundary,
which uselessly creates geometry in response to a surface problem,
- or using a control texture whose content indicates
which texture should be applied at each surface location,
which uses a lot of memory, and obliges one to define the mapping
even outside the necessary areas (which can sometimes be difficult).
In this work, I propose a 2D geometrical structure, linked to faces,
which describes the texture application area in a way
that is totally independent of the 3D mesh.
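As a minimal illustration of the idea (my own sketch, not the report's actual data structure), one can store the trim boundary as a 2D polygon in a face's UV space and test each shaded point against it, so the boundary cuts across the face without adding any geometry:

```python
def inside(poly, p):
    """Even-odd point-in-polygon test in 2D (here, UV space)."""
    x, y = p
    hit = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        # count boundary edges crossed by a ray going right from p
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

# A trim curve crossing the unit UV square of one face
# (hypothetical example boundary and texture names).
trim = [(0.0, 0.4), (1.0, 0.6), (1.0, 1.0), (0.0, 1.0)]

def shade(u, v):
    """Pick the texture for a surface point from its UV coordinates."""
    return "grass" if inside(trim, (u, v)) else "rock"
```

Unlike a control texture, such a boundary is resolution-independent and only needs storage proportional to its own complexity.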
Note that more recent work, on
mapping without distortion or repetition using triangle tiles,
also provides a partial solution to this problem,
allowing for transition patterns between two areas of different textures
(e.g. forest border, ocean border).
Trimmed Textures RR INRIA 2857, 1996.
( video 1 )
( video 2 , 3 )
During my PhD,
I had the opportunity to lead and take part in the work of
the SYNTIM project concerning
Reality Enriched by Synthesis.
RES inserts a synthetic animation into a video in a 3D-coherent way,
by combining image analysis and image synthesis:
the real objects to be recognized (in order to interact with them) are modeled,
then located and registered in the video sequence.
The registered object is then inserted into the synthetic scene,
either in black or in white, in order to obtain the synthetic part masked
and shadowed by the real object (1), and the shadowing and masking of the real object by the synthetic part (2).
The images of the video sequence then simply have to be multiplied
by the shadowing values (2), and composed with the synthetic part (1).
Reflection properties can also be added to the real surfaces.
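The compositing step can be sketched per pixel as follows (a minimal sketch; the names and the simple alpha model are my assumptions, not the project's exact pipeline):

```python
def compose(real, shadow, synth, synth_alpha):
    """Combine a real video frame with a synthetic layer.
    All inputs are same-sized 2D grids of floats in [0, 1]:
    - shadow: shadowing factors from pass (2), 1.0 = unshadowed
    - synth, synth_alpha: synthetic color and coverage from pass (1),
      with alpha already zeroed where the real object occludes it."""
    h, w = len(real), len(real[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            shadowed = real[y][x] * shadow[y][x]          # apply pass (2)
            a = synth_alpha[y][x]
            out[y][x] = shadowed * (1 - a) + synth[y][x] * a  # compose pass (1)
    return out
```

A left pixel shadowed by the synthetic object keeps the darkened video color, while a right pixel covered by the synthetic part takes the synthetic color.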