This software implements texture synthesis, using a small "exemplar" image to fill a larger region. The only requirement on the exemplar image is that it can be modeled as the result of a random process that is local and stable. In other words, each pixel is related to only a small set of neighbors, and all regions of the image appear similar to one another.

This software uses algorithms described in Wei and Levoy's paper, Fast Texture Synthesis using Tree-structured Vector Quantization; in Ashikhmin's Synthesizing Natural Textures; and in Synthesis of Bidirectional Texture Functions on Arbitrary Surfaces by Tong et al.

I handled boundary conditions by only considering pixels in the exemplar whose neighborhoods fit entirely within the exemplar, and by wrapping neighborhoods toroidally on the output image.
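The toroidal wrapping on the output image can be sketched as follows. This is a minimal illustration, not the actual Objective-C implementation; the function name `gather_neighborhood` and the NumPy representation are my own assumptions.

```python
import numpy as np

def gather_neighborhood(image, y, x, size):
    """Collect the size x size neighborhood centered at (y, x),
    wrapping coordinates toroidally at the image borders.
    (Hypothetical helper, not from the original program.)"""
    h, w = image.shape[:2]
    half = size // 2
    # Python's % operator maps negative and out-of-range indices
    # back into [0, h) and [0, w), giving the toroidal wrap.
    rows = [(y + dy) % h for dy in range(-half, half + 1)]
    cols = [(x + dx) % w for dx in range(-half, half + 1)]
    return image[np.ix_(rows, cols)]
```

On the exemplar side, no such wrapping is needed: only center pixels whose full neighborhood fits inside the exemplar are ever considered.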

The interface is divided into a synthesis mode and a bias-painting mode, pictured below. In the painting interface, the user paints with a brush of any color. When the paint is used as a bias by the synthesis algorithm, the exemplar pixel closest in color value (by L2 norm) to the paint is used as the bias pixel for that paint color.
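The paint-to-exemplar lookup described above amounts to a nearest-neighbor search in RGB space. A minimal sketch, assuming a NumPy RGB exemplar; the function name `bias_pixel` is hypothetical:

```python
import numpy as np

def bias_pixel(exemplar, paint_color):
    """Return the (row, col) of the exemplar pixel whose RGB value
    is closest, in L2 norm, to the painted color.
    (Illustrative sketch, not the original code.)"""
    diffs = exemplar.astype(float) - np.asarray(paint_color, dtype=float)
    dist2 = (diffs ** 2).sum(axis=-1)  # squared L2 distance per pixel
    return np.unravel_index(np.argmin(dist2), dist2.shape)
```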
Synthesis Methods
The user can select between three synthesis methods: "Brute Force (Wei & Levoy)," "Ashikhmin," and "Ashikhmin + k-coherent." The brute-force method is included mostly for comparison. Not only is it much slower than the other methods, as the name implies, but its results on patterns with discernible structure tend to be worse than Ashikhmin's. Below is an example of synthesizing a berry texture using brute force and using Ashikhmin's method. Ashikhmin retains more of the berry structure, and runs in about a second versus several minutes for the brute-force computation.
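The brute-force step can be sketched as an exhaustive scan: for one output pixel, compare its (toroidally wrapped) neighborhood against every exemplar neighborhood that fits entirely inside the exemplar, and copy the center of the best match. This is a simplified illustration with my own function name; among other things, it compares full square neighborhoods rather than the causal neighborhoods Wei and Levoy use during the first pass.

```python
import numpy as np

def brute_force_pixel(exemplar, output, y, x, size):
    """Exhaustive-search sketch: return the exemplar pixel whose
    neighborhood best matches (smallest SSD) the neighborhood of
    output pixel (y, x), wrapped toroidally at output borders."""
    h, w = output.shape[:2]
    half = size // 2
    rows = [(y + dy) % h for dy in range(-half, half + 1)]
    cols = [(x + dx) % w for dx in range(-half, half + 1)]
    target = output[np.ix_(rows, cols)].astype(float)

    best, best_err = None, np.inf
    eh, ew = exemplar.shape[:2]
    # Only exemplar centers whose neighborhood fits entirely inside
    # the exemplar are candidates (the boundary rule described above).
    for ey in range(half, eh - half):
        for ex in range(half, ew - half):
            cand = exemplar[ey - half:ey + half + 1,
                            ex - half:ex + half + 1].astype(float)
            err = ((cand - target) ** 2).sum()
            if err < best_err:
                best, best_err = (ey, ex), err
    return exemplar[best]
```

The quadratic cost of this scan per output pixel is exactly why the method takes minutes where Ashikhmin's takes a second.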
Bias & Passes
Using the painting interface, the user can guide the texture synthesis intuitively to produce impressive results. Here is an example of synthesizing an image that says "HI" in berries. Below, I demonstrate the effect of varying the number of synthesis passes: as the pass count increases, the realism of the texture improves, but some detail of the painting is lost. All runs use Ashikhmin's method with an 11x11 neighborhood; results with 1, 3, and 5 passes are shown.
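The multi-pass structure can be sketched as a simple outer loop: each pass revisits every output pixel and replaces it with the best match found by the chosen search method. The names below (`synthesize`, `choose_pixel`) are my own; the real program's control flow may differ.

```python
def synthesize(exemplar, output, passes, size, choose_pixel):
    """Multi-pass loop sketch: each pass re-synthesizes every output
    pixel in scanline order using choose_pixel (e.g. Ashikhmin's
    candidate search). Later passes refine texture realism, but the
    output drifts further from the painted bias."""
    h, w = len(output), len(output[0])
    for _ in range(passes):
        for y in range(h):
            for x in range(w):
                output[y][x] = choose_pixel(exemplar, output, y, x, size)
    return output
```

Because each pass conditions on the previous pass's output rather than the bias painting, this loop makes the realism/fidelity trade-off described above concrete.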
k-Coherent Neighbors
On some images, it is beneficial to extend Ashikhmin's search over shifted candidates with an additional k pixels per candidate whose neighborhoods are similar to that candidate. A good example is the reptile-like texture from Wei and Levoy's paper. Below, 1-, 5-, and 10-coherent neighbors are used, with a 7x7 neighborhood in all examples. Using only 1-coherent neighbor produces noticeably lower quality than using 5- or 10-coherent neighbors.
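The per-candidate similarity sets used by k-coherence are typically precomputed over the exemplar. Here is a quadratic-time sketch of that precomputation under my own naming; a practical implementation would use an approximate nearest-neighbor structure instead of the full pairwise scan.

```python
import numpy as np

def knn_similar(exemplar, size, k):
    """For each exemplar pixel with a fully-contained neighborhood,
    find the k other exemplar pixels whose neighborhoods are most
    similar (smallest SSD). Illustrative sketch only."""
    h, w = exemplar.shape[:2]
    half = size // 2
    coords = [(y, x) for y in range(half, h - half)
                     for x in range(half, w - half)]
    # Flatten each valid neighborhood into a feature vector.
    patches = np.stack([exemplar[y - half:y + half + 1,
                                 x - half:x + half + 1].ravel().astype(float)
                        for (y, x) in coords])
    similar = {}
    for i, (y, x) in enumerate(coords):
        d = ((patches - patches[i]) ** 2).sum(axis=1)
        d[i] = np.inf                      # exclude the pixel itself
        similar[(y, x)] = [coords[j] for j in np.argsort(d)[:k]]
    return similar
```

At synthesis time, each of Ashikhmin's shifted candidates is then expanded with its k precomputed look-alikes before the best match is chosen.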
On images with very regular structure, or where stability/locality is violated, these algorithms fail: the structures are too large to be captured accurately even by large neighborhoods. Here are two such examples:
Pretty Results
Here are some images that worked particularly well using Ashikhmin's method.

This program is written in Objective-C using Mac OS X's Cocoa framework. Please be aware that it is by no means polished software, as it was written for a weekly assignment. A few crash-inducing user errors are not prevented, and the code could use some cleaning.
Design and code ©2012 Julian Panetta