Download:
Painterly Animation (Win32, 540 kB)
Videos:
All videos are AVIs encoded with WMV9, and range from 50 to 150 kB.
                                      | Sphere | Beach Ball | Master Rai
Plain                                 | Video  | Video      | Video
Strokes Follow Edges                  | Video  | Video      | Video
Strokes Follow Edges and XY Jittered  | Video  | Video      | Video
Overview:
The application is an implementation of the technique described in Barbara J. Meier's paper "Painterly Rendering for Animation" (SIGGRAPH '96). It renders scenes and objects in a painterly style. Particles are placed across the surfaces in a scene, each particle representing a brush stroke; by combining 2D reference images with this 3D point set, painterly animations can be produced with minimal temporal artifacts. Noise can also be added to the strokes intentionally, to reinforce the illusion that each frame was drawn individually by an artist.
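As a rough illustration of this pipeline, each stroke particle can be thought of as carrying a fixed 3D position plus the per-frame attributes discussed below. This is a minimal sketch only; the struct and function names are made up for the example and are not the project's actual classes.

    #include <vector>

    // Hypothetical per-stroke particle record (illustrative names).
    struct StrokeParticle {
        float worldPos[3];   // fixed 3D position on a surface (gives temporal coherence)
        float size;          // updated per frame from an intensity reference image
        float color[3];      // updated per frame from a blurred colour reference image
        float orientation;   // updated per frame from image-space gradients (radians)
    };

    // Assumed shape of the per-frame work; the concrete steps are sketched in
    // the Implementation Details section below.
    void renderPainterlyFrame(std::vector<StrokeParticle>& particles) {
        // 1. Render 2D reference images (intensity, blurred colour) of the 3D scene.
        // 2. Project each particle into image space and sample the references
        //    to update its size, colour, and orientation.
        // 3. Composite the brush strokes in 2D.
        (void)particles; // placeholder body
    }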
Features:
Instructions:
Implementation Details:
The project was implemented in C++. Libraries used include OpenGL (graphics), DevIL (image processing), and GLUI (user interface widgets). It uses a modified Vector class originally by Allen Sherrod, and a wrapper class (author unknown) to interface with the Video for Windows library.
The components of this project that I implemented myself are described below.
To decide where to place each of the n stroke particles, I compute for each triangle t how much it needs another stroke:
AmountTriangleNeedsStroke(t) = SurfaceArea(t) / (CurrentParticles(t) + 1.0)
Therefore, triangles with a large surface area and those that do not already have many particles are favoured for particle placement. As I add each of the n particles to the most needy triangle t, I immediately update the need value for t for subsequent iterations. Particles are placed at random positions on their triangles, to avoid artificial-looking patterns.
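The sketch below illustrates this greedy placement pass. The Triangle struct and helper functions are stand-ins for the example, not the project's actual code.

    #include <array>
    #include <cmath>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    struct Triangle { std::array<float,3> v0, v1, v2; };

    // Half the magnitude of the cross product of two edges.
    static float surfaceArea(const Triangle& t) {
        float e1[3] = { t.v1[0]-t.v0[0], t.v1[1]-t.v0[1], t.v1[2]-t.v0[2] };
        float e2[3] = { t.v2[0]-t.v0[0], t.v2[1]-t.v0[1], t.v2[2]-t.v0[2] };
        float cx = e1[1]*e2[2] - e1[2]*e2[1];
        float cy = e1[2]*e2[0] - e1[0]*e2[2];
        float cz = e1[0]*e2[1] - e1[1]*e2[0];
        return 0.5f * std::sqrt(cx*cx + cy*cy + cz*cz);
    }

    // Uniform random point on a triangle via barycentric coordinates.
    static std::array<float,3> randomPointOn(const Triangle& t) {
        float u = std::rand() / (float)RAND_MAX;
        float v = std::rand() / (float)RAND_MAX;
        if (u + v > 1.0f) { u = 1.0f - u; v = 1.0f - v; } // fold back into the triangle
        std::array<float,3> p;
        for (int i = 0; i < 3; ++i)
            p[i] = t.v0[i] + u * (t.v1[i] - t.v0[i]) + v * (t.v2[i] - t.v0[i]);
        return p;
    }

    // Place n stroke particles, always giving the next one to the most needy triangle.
    std::vector<std::array<float,3>> placeParticles(const std::vector<Triangle>& tris, int n) {
        std::vector<int> count(tris.size(), 0);
        std::vector<std::array<float,3>> particles;
        for (int i = 0; i < n; ++i) {
            std::size_t best = 0;
            float bestNeed = -1.0f;
            for (std::size_t t = 0; t < tris.size(); ++t) {
                float need = surfaceArea(tris[t]) / (count[t] + 1.0f); // AmountTriangleNeedsStroke
                if (need > bestNeed) { bestNeed = need; best = t; }
            }
            particles.push_back(randomPointOn(tris[best])); // random placement avoids patterns
            ++count[best];                                  // need value updated immediately
        }
        return particles;
    }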
To update size attributes quickly, I place an OpenGL light at the camera position and render the geometry untextured in a uniform colour (white). The intensities of the rendered image determine stroke size: higher intensity corresponds to a surface normal that points more directly at the camera, and strokes whose normals face the camera should be drawn larger than those whose normals do not. I then use gluProject, which maps the 3D world coordinate of each particle to 2D image space, to index into these intensities and look up the size.
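A sketch of this lookup, assuming the white-lit rendering has already been read back into a per-pixel intensity buffer (row-major, bottom-up, as glReadPixels returns it). The linear intensity-to-size mapping and the parameter names are assumptions for the example.

    #include <GL/glu.h>
    #include <vector>

    float sampleStrokeSize(const double world[3],
                           const std::vector<float>& intensity,
                           int width, int height,
                           const GLdouble model[16], const GLdouble proj[16],
                           const GLint viewport[4],
                           float minSize, float maxSize)
    {
        GLdouble sx, sy, sz;
        // Map the particle's 3D world position to 2D window coordinates.
        if (gluProject(world[0], world[1], world[2], model, proj, viewport,
                       &sx, &sy, &sz) != GL_TRUE)
            return minSize;

        int x = static_cast<int>(sx), y = static_cast<int>(sy);
        if (x < 0 || x >= width || y < 0 || y >= height)
            return minSize;                       // particle projects outside the view

        float i = intensity[y * width + x];       // brighter = normal faces the camera
        return minSize + i * (maxSize - minSize); // brighter strokes are drawn larger
    }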
To update the colour attribute, I render the geometry with its texture and slightly convolve the result with a blurring filter. Again, gluProject gives the 2D coordinate used to index into the image and obtain the right colour. Occasionally some brush strokes flicker, because aliasing causes the colour at a particular pixel to alternate between frames in this rendering; the purpose of the convolution is to help reduce this effect.
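For illustration, a simple box filter could serve as the blurring convolution; the actual kernel is not specified, so treat this as a sketch.

    #include <vector>

    // Blur an RGB image (3 bytes per pixel) with a box filter of the given radius.
    void boxBlurRGB(const std::vector<unsigned char>& src, std::vector<unsigned char>& dst,
                    int width, int height, int radius)
    {
        dst.resize(src.size());
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                int sum[3] = {0, 0, 0}, n = 0;
                for (int dy = -radius; dy <= radius; ++dy) {
                    for (int dx = -radius; dx <= radius; ++dx) {
                        int xx = x + dx, yy = y + dy;
                        if (xx < 0 || xx >= width || yy < 0 || yy >= height) continue;
                        const unsigned char* p = &src[(yy * width + xx) * 3];
                        sum[0] += p[0]; sum[1] += p[1]; sum[2] += p[2];
                        ++n;
                    }
                }
                unsigned char* q = &dst[(y * width + x) * 3];
                q[0] = static_cast<unsigned char>(sum[0] / n);
                q[1] = static_cast<unsigned char>(sum[1] / n);
                q[2] = static_cast<unsigned char>(sum[2] / n);
            }
        }
    }
    // The blurred image is then indexed at each particle's gluProject-ed (x, y)
    // to pick up the stroke colour, which suppresses per-pixel aliasing flicker.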
To update the orientation attribute, I use the convolved textured rendering again and compute its spatial derivatives. gluProject gives the 2D image index for each particle, and the orientation at that point is computed as a normalized weighted sum of the arctan of the gradients within a local neighbourhood in image space (I reused the code I wrote for my Impressionist Painter project, which also featured stroke orientation; that page has more details on weight assignment, etc.). The blurring from the convolution step also matters here: it makes the gradients less noisy, which reduces the effect of large orientation changes between frames.
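A sketch of this computation, using central differences for the gradients and gradient magnitude as the weight. The neighbourhood size, the weighting, and the 90-degree offset (so that strokes run along rather than across edges) are assumptions; the exact details are on the Impressionist Painter page.

    #include <cmath>
    #include <vector>

    // Estimate a stroke orientation (radians) at pixel (cx, cy) of a blurred
    // luminance image, by averaging gradient angles over a small neighbourhood
    // with the gradient magnitude as the weight.
    float strokeOrientation(const std::vector<float>& lum, int width, int height,
                            int cx, int cy, int radius)
    {
        float weightedAngle = 0.0f, totalWeight = 0.0f;
        for (int y = cy - radius; y <= cy + radius; ++y) {
            for (int x = cx - radius; x <= cx + radius; ++x) {
                if (x < 1 || x >= width - 1 || y < 1 || y >= height - 1) continue;
                // Central differences on the blurred luminance image.
                float gx = lum[y * width + (x + 1)] - lum[y * width + (x - 1)];
                float gy = lum[(y + 1) * width + x] - lum[(y - 1) * width + x];
                float mag = std::sqrt(gx * gx + gy * gy);
                if (mag <= 0.0f) continue;
                // Strokes follow edges, i.e. run perpendicular to the gradient,
                // hence the pi/2 offset on the gradient angle (an assumption here).
                weightedAngle += mag * (std::atan2(gy, gx) + 3.14159265f * 0.5f);
                totalWeight   += mag;
            }
        }
        return totalWeight > 0.0f ? weightedAngle / totalWeight : 0.0f;
    }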