
Splines extraction on Kopf-Lischinski algorithm, part 2


This is a sequel in a series of posts. The last one was “part 0” and the ordering is a bit insane, but nothing to worry about. Check the first one to understand what all this is about. I hope this will be the final post in the series.

Once upon a time… a polygon with holes appeared

The polygon-union described in the previous posts works correctly, and there is no better answer than the correct answer, but these polygons were meant to become B-Splines, and when I described them I already knew that the mentioned algorithm would be insufficient.

Consider the following image.

Normal polygon (left) and bordered polygon (right)


The left polygon is the polygon as you see it before and/or after the polygon-union; the two are visually indistinguishable. The right polygon has an outline to let you understand its internal representation. Looking at the right polygon, you can see how the polygon-union generates a polygon with a “hole”.

If the polygon-union doesn’t generate visually identifiable pixels, then there shouldn’t be any problem, but when we convert the polygon to a B-Spline, some areas of the image won’t be filled. The polygons won’t fit correctly, as shown in the image below.

After B-Spline conversion


The solution is to create some holes. With the representation of polygons used in libdepixelize, this task is very simple and the key operation to accomplish the goal is to find the “edges that the polygon shares with itself”. I’ll explain this task further in the following paragraphs.

A holed polygon with labelled points represented with a single path

The above image has 2 pairs of hidden points: pair <5, 6>, which is equal to pair <11, 10>, and pair <12, 13>, which is equal to pair <18, 17>. To extract the internal polygons that represent the holes, you just iterate over the points and, for each point, try to find another point equal to it, then take the internal sequence and use it to construct the internal polygon.

  • Remark #1: libdepixelize performs these steps in backward order, to avoid too many moves of elements in the internal vector.
  • Remark #2: The polygon is represented in clockwise order, while the holes end up represented in counter-clockwise order, but there is no reason to normalize. You can safely ignore this (de)normalization detail.
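As a rough illustration of the scan described above, here is a minimal Python sketch (not the libdepixelize code): points are plain labels standing in for 2-D coordinates, equal labels mean equal points, and every shared edge is assumed to be exactly two points long.

```python
def split_holes(poly):
    """Split a single-path polygon into an outer boundary plus holes.

    Scans for a point that appears twice: the subsequence between the
    two occurrences is a hole (recursively processed), and the rest
    stays in the outer boundary. Simplified sketch: it assumes shared
    edges of exactly two points; libdepixelize handles longer common
    edges with the "largest common edge" logic.
    """
    n = len(poly)
    for i in range(n):
        for j in range(i + 1, n):
            if poly[i] == poly[j]:
                inner = poly[i + 1:j]
                outer = poly[:i + 1] + poly[j + 1:]
                # the inner sequence closes on a repeated point; drop it
                if inner and inner[0] == inner[-1]:
                    inner = inner[:-1]
                inner, inner_holes = split_holes(inner)
                outer, outer_holes = split_holes(outer)
                return outer, [inner] + inner_holes + outer_holes
    return poly, []
```

Running it on a path shaped like the labelled image (with the pairs <5, 6> == <11, 10> and <12, 13> == <18, 17> encoded as repeated labels) yields the outer boundary plus the two counter-clockwise holes.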


I forgot to mention that edges are not always made of only two points, and you should reuse the same “largest common edge” idea from the algorithm of the previous posts.


Things became more complicated than I predicted, now involving some recursive functions, and I needed to extend (extend != change) the algorithm. To see the whole extension, check out the Splines::_fill_holes function in the libdepixelize source code and the places where this function is used.


Things became even more complicated than the complicated things I didn’t anticipate. The code that fixes the position of T-junction nodes created a pattern where the polygon shares a single point with itself, and this pattern propagated wrongly through the “holes-detection” algorithm and afterwards.

The algorithm “3rd ed.” checks if the common edge has length 1 to solve the problem.

And then, the polygon was meant to be a B-Spline

It’s a series about splines extraction and it’s fair to end with this step.

The representation of the data in three different moments

It’s evolution, baby

The algorithm is very simple. The points you get from the polygon-union algorithm of the previous posts become the control points of quadratic Bézier curves, and the interpolating/ending points of each Bézier curve are the midpoints between two adjacent control points. This is the special case where all points are smooth. The image below can help you get the idea.

Locations of the B-Spline points

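The all-smooth special case can be sketched like this (an illustrative Python sketch, not the libdepixelize code): every polygon point becomes the control point of one quadratic Bézier segment, and the segment endpoints are the midpoints between adjacent control points.

```python
def polygon_to_quadratic_spline(points):
    """Turn a closed polygon into quadratic Bezier segments.

    Each input point is the control point of one segment; the segment
    starts and ends at the midpoints between adjacent control points.
    This is the special case where every point is smooth.
    """
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    n = len(points)
    segments = []
    for i in range(n):
        start = midpoint(points[i - 1], points[i])   # i - 1 wraps around
        end = midpoint(points[i], points[(i + 1) % n])
        segments.append((start, points[i], end))     # (start, control, end)
    return segments
```

Note that adjacent segments share an endpoint by construction, which is what makes the pieces join into one continuous closed curve.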

If you have non-smooth points, then all you need to do is interpret them as interpolating points instead of control points. You’ll have three interpolating points that are part of two straight lines. There is a bit more theory to understand why the generated B-Spline is smooth, but we’re ALMOST done! In the next post (a small one, I hope), I’ll detail how to get correctly positioned points for T-junctions.

Splines extraction on Kopf-Lischinski algorithm, part 0


This post is a continuation of “Splines extraction on Kopf-Lischinski algorithm, part 1”. The title says “part 0” because the algorithm step described here should be executed before the step described in “part 1”.

This blog post is kind of unstructured, but don’t worry, because I’m aware of that and it is intentional.

Generalized Voronoi diagram generation

The algorithm described in this section was kind of documented already, but the documentation wasn’t good enough to be part of a post, so it was kept as a private home-made PDF.

Well, a Voronoi diagram is a black box where you put some points (the seeds) and you get some polygons (the cells). Each polygon contains all points that are closer to its seed than to any other seed. There is a good article on Wikipedia and I won’t explain any further.

The Kopf-Lischinski algorithm executes a bunch of operations on a graph and it uses a Voronoi diagram to extract a visual representation from this graph. The simplest form of Voronoi diagram works with 2D point-seeds, but there are higher-dimensional Voronoi diagrams, Voronoi diagrams using different distance functions and even Voronoi diagrams using complex non-point seeds. We are interested in the Voronoi diagrams using complex non-point seeds.

The image below shows a representation of the output graph of the Kopf-Lischinski algorithm and its Voronoi diagram representation. The graph nodes are represented by circles, where nodes with the same color are similar and usually connected. The connections between the nodes are represented by green lines.

An accurate generalized Voronoi diagram


The generalized Voronoi diagram is purely visual, so there is no special meaning to extract from the previous image. The seeds of this diagram aren’t points; they are line segments. You just break each green line in two and add each half to its containing cell. Note that some of the resulting polygons aren’t convex.

The graph used as input has certain constraints that enable us to use some simple and fast operations instead of a full-featured and complex algorithm.

Since a Voronoi cell is the polygon containing all points closer to its seed than to any other seed, we can determine the edge of a Voronoi cell from the midpoint between two adjacent seeds. If we generate a vertex for each of a cell’s 8 directions, we get an accurate Voronoi diagram.

A simplified generalized Voronoi diagram


We can get a simplified Voronoi diagram by forgetting about the top, bottom, left and right vertices (i.e., by generating only the diagonal vertices). The simplified version doesn’t contain concave polygons.

Generating the diagonal vertices is more complex than generating the other vertices. We need to check if there is a connection with the diagonal cell and, if this connection exists, generate two vertices. If the connection doesn’t exist, we generate a single vertex, but its position depends on the existence of a connection between its neighbours. Look at the following figure.

A simplified generalized Voronoi diagram (detailed)


All the information we need to generate the Voronoi diagram is located within a cell’s neighbours, and the only extra tool we need to generate the points is the midpoint procedure. This is old news and it was already implemented in libdepixelize.
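To make the rule concrete, here is a sketch of the diagonal-vertex decision in Python. The quarter-pixel coordinates are an assumption of mine (pixel centers on integer coordinates, cells nominally half a unit wide in each direction); the actual libdepixelize code may use a different coordinate scheme.

```python
def bottom_right_corner(cx, cy, this_diag_connected, cross_diag_connected):
    """Vertices for the bottom-right corner of the cell centered at
    (cx, cy), assuming pixel centers on integer coordinates.

    - If this cell connects diagonally to (cx+1, cy+1), the corner is
      cut into two vertices bulging across the neighbouring cells.
    - Otherwise, if (cx+1, cy) connects to (cx, cy+1), the crossing
      connection pushes the corner back toward the cell center.
    - Otherwise the plain square corner is kept.
    """
    if this_diag_connected:
        return [(cx + 0.75, cy + 0.25), (cx + 0.25, cy + 0.75)]
    if cross_diag_connected:
        return [(cx + 0.25, cy + 0.25)]
    return [(cx + 0.5, cy + 0.5)]
```

The same decision, mirrored, applies to the other three corners, which is the “repetitive” structure mentioned later in this post.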

Metadata extraction

When we generate B-Splines using the Kopf-Lischinski algorithm, we need a way to separate points that create sharp corners from smooth points. The Kopf-Lischinski algorithm has techniques just to handle this issue. In libdepixelize, I decided to merge the point smoothness computation and the Voronoi generation into one single phase. This is the phase where we can gather lots of info about the graph and propagate the smoothness data to later phases.

One note about the algorithm of this “metadata extraction” section: some points will disappear during the polygon-union and we don’t care whether the metadata about these points is accurate, so we can simplify the algorithm by exploring this property. The image below will be referenced all the time while explaining this algorithm:

Vertex types


There are two types of vertices generated in this special Voronoi diagram:

  • White type: The type of the node labelled “#1”. Node “#1” is contained by three cells: one purple and two yellow ones. The two yellow cells will be merged during the polygon-union. There are two possibilities for the remaining cell in this kind of node:
    • It has the same color. Then the node will disappear during the polygon-union and we don’t care about its “smoothness” attribute.
    • It has a different color. Then we say the node is a valence-2 node.
  • Cyan type: The type of the node labelled “#2”. This type of node can appear in the border of the image, where it isn’t smooth, or in the center of 4 cells, where its smoothness isn’t predictable in advance. If it appears in the center of four cells, then:
    • It can be in the middle of 4-connected cells, and we don’t care about its “smoothness” attribute, because the node will disappear.
    • It can be in the middle of a valence-2 node and will be smooth.
    • It can be a valence-3 node, and things start to get complex. After the polygon-union, this node will be part of three different polygons and only two of these three copies will share the same value for the “smoothness” attribute.

Problem explained, it’s time for the algorithm! The algorithm is kind of repetitive (if we iterate the bottom-right node, we compare with the bottom and right nodes, then do it all again with nodes in different directions…), but the principle is the same for all the “repetitive” code, so I’m only going to show the important parts, and you’ll be able to see the full algorithm implemented in libdepixelize within a few hours. Lastly, this isn’t the first algorithm I came up with and I have made a lot of iterations already (lots of crumpled paper on my room’s floor), but I think the result can perform pretty fast and I’m content. This post is kind of a documentation and it will help me not to forget what I wanted to do in the code.


The above image represents the data analysed in the following code example, except that we don’t know what the colors of the cells are. We are iterating on the middle point of the image and the currently iterated cell is cell A. The algorithm also uses the concepts of “shading edge” and “contour edge” described in the Kopf-Lischinski algorithm paper.
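Since the original code listing did not survive in this archive, here is a hedged sketch of the kind of check involved: classifying an edge as “shading” or “contour” by comparing cell colors in YUV, using the dissimilarity thresholds (48, 7 and 6 out of 255) given in the Kopf-Lischinski paper. The RGB-to-YUV coefficients below are the usual BT.601 ones and are my assumption, not necessarily the exact conversion libdepixelize uses.

```python
def rgb_to_yuv(rgb):
    """BT.601-style RGB -> YUV, all channels in the 0..255 range."""
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.419 * g - 0.081 * b + 128.0
    return y, u, v

def edge_kind(color_a, color_b):
    """'contour' if the two cell colors are dissimilar per the paper's
    thresholds, 'shading' otherwise."""
    y1, u1, v1 = rgb_to_yuv(color_a)
    y2, u2, v2 = rgb_to_yuv(color_b)
    dissimilar = (abs(y1 - y2) > 48.0 or abs(u1 - u2) > 7.0
                  or abs(v1 - v2) > 6.0)
    return "contour" if dissimilar else "shading"
```

The per-direction comparisons mentioned above would then use this classification to decide the “smoothness” attribute of the node being iterated.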

All images in this post were created using Inkscape and its awesome alignment tools.

Splines extraction on Kopf-Lischinski algorithm, part 1


If you don’t follow this blog and don’t know what the Kopf-Lischinski algorithm is, this small section is for you. The Kopf-Lischinski algorithm is a vectorization algorithm specialized in pixel art images that gives excellent results (see their supplementary material page). This “summer”, I’m working on bringing this algorithm to Inkscape in the form of a library. This post is like a progress status update.

Splines extraction

Well, one of the phases of the algorithm is splines extraction. This phase forms a pipeline (the output of one step is the input of the next) like the following:

  1. It takes the connectivity graph
  2. It generates a generalized Voronoi diagram
  3. It groups the Voronoi cells to identify visible edges
  4. It generates the splines

In a future post, I’ll explain a fast method (already implemented in libdepixelize) to compute the Voronoi diagram exploring special properties of the input graph. In this post, I’ll explain a fast method to add polygons together exploring special properties of the generated Voronoi diagram and the data layout used in libdepixelize.

Polygon union

Polygons can be represented by their vertex points. libdepixelize stores them in clockwise order, with the first point being part of the polygon’s northwest/top-left, and all Voronoi cells generated in step 2 are convex polygons. Another important feature of the generated Voronoi diagram is that all “connected” cells share a common edge.

The algorithm can be described in 4 major steps:

  1. Find the largest common edge
  2. Remove the in-between points of the common edge
  3. Choose one polygon and refer to it as P (choose the smallest polygon for better performance)
    1. Shift P such that P’s head and tail are points of the common edge
  4. Choose the other and refer to it as Q
    1. Remove one of the common edge’s points in Q
    2. Replace the remaining point that is part of the common edge in Q by P’s points

The Voronoi cells are iterated one by one, line by line, from the left-most cell to the right-most cell. We check if (1) we already have a group of the current color and (2) the existing group has a common edge with the current Voronoi cell; then (3) we add the current Voronoi cell to the existing polygon. Beware that the Voronoi cells from the input are always convex polygons, but the existing polygon (which groups the Voronoi cells) might be concave.

Let’s see an example of the algorithm. In the example, we are iterating on the second line and we have found an existing grouping polygon with a matching color (the two entities are connected), so we must add them together. The image below represents the situation we are interested in:


Polygon A is represented by the list of points [1, 2, 3, 4, 5] (common edge’s points underlined). Polygon B is represented by the list of points [6, 7, 8, 9, 10] (common edge’s points underlined). Points 4 and 8 are equal and points 5 and 7 are equal. Polygon A is the grouping polygon while polygon B is the current Voronoi cell. We shift B’s list to [8, 9, 10, 6, 7] and use it to get the final polygon [1, 2, 3, 8, 9, 10, 6, 7].

Let’s do it one more time:


Polygon A is the grouping polygon, represented by [1, 2, 3, 8, 9, 10, 6, 7]. Polygon C is the current Voronoi cell and is represented by [11, 12, 13, 14]. This time the largest common edge has 3 points and point 8/11 is the in-between point, so we remove it. The resulting A polygon is [1, 2, 3, 9, 10, 6, 7] and the resulting C polygon is [12, 13, 14]. When we replace the common edge interval in A by C, we get the final polygon, [1, 2, 12, 13, 14, 10, 6, 7]. The final polygon is shown in the image below:
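The simple case of the union, after the in-between points of the common edge have already been removed, can be condensed into a short Python sketch (labels stand in for 2-D coordinates, equal labels meaning equal points; this is illustrative, not the libdepixelize implementation):

```python
def polygon_union(a, b):
    """Merge clockwise polygon b into clockwise polygon a.

    Assumes the largest common edge has already been reduced to
    exactly two shared points (any in-between points removed first).
    """
    shared = set(a) & set(b)
    assert len(shared) == 2
    n = len(a)
    # find the common edge as two circularly adjacent points of a
    for i in range(n):
        if a[i] in shared and a[(i + 1) % n] in shared:
            break
    first, second = a[i], a[(i + 1) % n]
    # shift b so its head is `first` and its tail is `second`
    # (b traverses the common edge in the opposite direction)
    j = b.index(first)
    b = b[j:] + b[:j]
    assert b[-1] == second
    # rotate a so the common edge sits at its end, then substitute it
    start = (i + 2) % n
    a = a[start:] + a[:start]
    return a[:-2] + b
```

On the first worked example, with points 4/8 and 5/7 encoded as single shared labels, the function reproduces the post’s result [1, 2, 3, 8, 9, 10, 6, 7] up to rotation of the starting point.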


One last note that might be useful in later steps is the access pattern used by the algorithm. When we add a Voronoi cell to the grouping polygon, there is a hot area and a cold area. The cold area is the area where there will never be a common edge. These areas are always concentrated in the same places, as exemplified by the following image:


Splines generation

Does the previous step look simple? I want to keep things simple, but there may be additional info we want to store for each node. This (polygon-union) is the last step where we can still gather locality info without executing a deep lookup.

Valence-3 node (in red)


Let’s refer to nodes where at most two different grouping polygons share a point as valence-2 nodes. There are valence-2 nodes, valence-3 nodes and valence-4 nodes. Valence-2 nodes are always smooth and valence-4 nodes never are, but valence-3 nodes vary.


Non-smooth node

Most points are shared by nodes of different polygons and, when we have three valence-3 nodes, exactly one of them will be smooth. We apply the Kopf-Lischinski algorithm heuristics to determine which one it will be and store this info for later usage. I want to play more with the code to determine the best phase to compute this data, but I’m thinking about merging the Voronoi generation and the node type computation into one.


Smooth node

The complicated part about the splines extraction in the Kopf-Lischinski algorithm is the overlapping between these last steps. I wasn’t able to get the code right before putting all the cards on the table. Now I’ll code using this guide as a reference and I’ll provide the remaining algorithm in the next post.

A bit of performance

So, the Kopf-Lischinski algorithm resembles a compiler. You have several data representation types and each type offers different operations. You explore the operations each type offers and convert it to another representation until you reach the final result. In several of the representations used by the Kopf-Lischinski algorithm, you have a matrix and you access each element and its neighbours.

A table of 3 rows and 3 columns: the first row contains 1, 2 and 3; the second row contains 4, 5 and 6; the last row contains 7, 8 and 9.

Access pattern representation

The particular implementation in libdepixelize stores and accesses the elements linearly, like [1, 2, 3, 4, 5, 6, 7, 8, 9]. It could make good use of the processor cache, but it ain’t that easy. Suppose we are iterating on element 5; then we need to access all its neighbours, but only neighbours 4 and 6 may be in cache, especially in large images. This is the first problem in the cache usage of the implementation, but we cannot remove this access pattern, because it’s part of the algorithm and there is a data dependency among the elements.
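For reference, the access pattern being discussed looks like this (a generic sketch, not the libdepixelize code): cells live in one flat row-major list, so only the horizontal neighbours of an element are adjacent in memory.

```python
def neighbour_indices(idx, width, height):
    """Indices of the up-to-8 neighbours of a cell in a row-major
    flat array. Only idx - 1 and idx + 1 are adjacent in memory;
    the rows above and below are `width` elements away and tend to
    miss the cache on large images.
    """
    x, y = idx % width, idx // width
    out = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height:
                out.append(ny * width + nx)
    return out
```

In the 3×3 example above, element 5 (index 4) touches all eight other indices, but only indices 3 and 5 sit next to it in memory.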

What we can do is reduce it, because this same pattern is used again and again over most of the passes. The other day I was reading about Xiph.org’s Daala codec and an interesting idea hit me: overlapping transforms. We can make each element store information about the neighbours that may not be in cache (1, 2, 3, 7, 8 and 9). This task would require more CPU time, but the cost would mostly be paid during the initialization of the first data representation only, and it would be alleviated in the next passes. This change may also increase memory requirements, but the CPU would thank us for a major increase in cache hits.

Another problem with the current data access pattern is that each element stores a complex object that may point to other regions of memory, adding a level of indirection that can contribute to cache misses. One idea that can increase the cache hit rate is the one behind the Small String Optimization. This change would highly increase data locality and fits well in the Kopf-Lischinski algorithm, because the complex objects stored by each element in every phase tend to have a small maximum number of subelements.

I need to combine the previous ideas and measure the impact. I have two computers and I can request access to the cluster at my university, but it’d be great to have a large database of pixel-art images ready for testing. Another interesting project to use during the measurements is Stabilizer.

All that said, these are only optimizations exploring knowledge of computer architecture, and I want to improve performance exploring better algorithms as well. There are already a few of these, but Nathan Hurst pointed me a hint to research about eigenvalues and I need to reserve some time to do it.

Oh, and I LOVE to work on projects where I can toy with my knowledge and learn more, and it’s so much fun to work on this project that I cannot share the joy I have right now with you guys, but I can tell this: this is the last section of this blog post, but it is the first section I wrote, and I only started to write the other sections after I finished this one.
