Recently I started working on PowerPC-related technologies and I needed to prepare an environment for my studies/research. It's not that difficult (though I recommend picking a book, because it's going to take a lot of time), but I wanted to write a HOWTO for ArchLinux anyway.
qemu is an open source machine emulator that we will use to prepare our environment. To install qemu under ArchLinux, all you need to do is run the following command:
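On a stock ArchLinux install this is just a pacman call (the package name is the one in use when this was written; newer systems split it into qemu-full and friends):

```shell
# Install qemu from the official ArchLinux repositories
# (run as root or via sudo)
pacman -S qemu
```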
After you have qemu installed, you need to install an operating system that supports PowerPC (just as you would on a real machine). I wanted to install ArchLinux, but it looks like PowerPC support on Arch is dead, so I'll go with Debian. The steps to follow are: (1) create a virtual machine, (2) install a guest operating system on it and (3) configure the guest system to compile PowerPC software.
To accomplish the first step, you need to create a disk image, which can be done with the following command (where 20G is the size):
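A qcow2 image works well here; the file name debian-ppc.img is just my choice:

```shell
# Create a 20G qcow2 disk image for the virtual machine
qemu-img create -f qcow2 debian-ppc.img 20G
```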
Put everything in the same folder and use the following script to start qemu:
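Something along these lines works (the disk image and installer ISO names are my own choices; adjust the memory size to taste):

```shell
#!/bin/sh
# Boot the PowerPC VM from the Debian install CD ("-boot d").
# debian-ppc.img and debian-powerpc-netinst.iso are assumed names.
qemu-system-ppc \
    -hda debian-ppc.img \
    -cdrom debian-powerpc-netinst.iso \
    -boot d \
    -m 512
```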
Follow the usual Debian installation steps, then change your script to boot from the HD (replace “-boot d” with “-boot c”). When I installed the system, the installer told me it couldn't complete the installation of the bootloader, so I skipped that step and, to my surprise, the system booted up without my intervention.
Debian comes with GNOME by default, which is slow on an emulated system, so I recommend installing the packages xfce4 and xfce4-goodies and replacing GNOME.
Also, qemu is really slow at emulating the VGA card, so I suggest installing the package openssh-server and doing most of your tasks over an ssh connection. You can redirect ports from the host system to the guest system (handy to access the guest via ssh) by passing the -redir argument to the qemu-system-ppc command:
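For example, assuming a disk image named debian-ppc.img and host port 5555 (both my choices), and the old-style -redir syntax that was current when this was written:

```shell
# Boot from the HD and redirect host port 5555 to the guest's
# ssh port (22)
qemu-system-ppc -hda debian-ppc.img -boot c -m 512 \
    -redir tcp:5555::22

# Then, from the host:
# ssh -p 5555 user@localhost
```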
If you want to dig deeper, I suggest reading this lengthy developerWorks article.
The next tutorial should be a cross-compilation guide to avoid all these intermediate steps. Part of it is already written, but I'm struggling to make the complete system work.
GSoC 2013 is over and I thought that I should write an “ending post” for this category.
libdepixelize is integrated into the Inkscape source code, and Inkscape's next version will feature a new “Trace Pixel Art” dialog. Check the videos below:
Optimization is disabled by default in the Inkscape GUI, because it is currently in an experimental state (the harmonic maps triangulation is not implemented and some required preprocessing steps are buggy). You can still use libdepixelize's command line application if you want to play with the experimental features.
I'm finishing the final bits of a polished result regarding extra color information (this feature goes beyond the original Kopf-Lischinski algorithm) and then I'll polish the Inkscape integration a bit more (processing in a background thread).
The “new beginning” in the post title comes from the fact that I'll continue to work with my mentor to improve the Kopf-Lischinski algorithm.
Stay tuned (via RSS)!
The “polished result regarding extra color information” is finished. There is one very minor issue and one minor improvement left (there always will be), but they can wait a little longer. Enjoy it!
All posts related to the Kopf-Lischinski algorithm on this blog were collected, updated and extended in the “libdepixelize algorithmic documentation”. If you have any doubts about this algorithm, use it as a guide: http://bazaar.launchpad.net/~vinipsmaker/libdepixelize/trunk/view/head:/the%20algorithmic%20documentation%20report/index.pdf
Posts that were grouped in this pdf:
Before you vote and choose any option, know that this ain't a democracy and I don't promise to follow the winner, but I'm interested in the popular option because I care. Also, before voting, read the rest of this post to understand the consequences of each choice.
What does the full Kopf-Lischinski algorithm look like? Just like the image below:
Like it? Me too. How close is libdepixelize to getting this result? That depends on what libdepixelize will aim for, but let's leave this discussion for later. First, the image below has an ultra-summarized answer to this question:
Like it? Now look at some images generated by libdepixelize for the same input used to generate the image above:
The output on the left is buggy by both the Kopf-Lischinski standard and my own. If this result were correct, it would group these two gray blocks into a single block, and I'd have to code a bit more to make sure this happens.
The output on the right is “innovative”: I've been coding since the beginning to preserve as much color information as possible, so I don't need to fix anything, but I do need to extend the Kopf-Lischinski algorithm to handle the “unfilled” space between two non-grouped blocks. The result wouldn't be Kopf-Lischinski, but it would preserve more color information, and if the Kopf-Lischinski algorithm is ever patented, we'd have a different algorithm (partially patent-free).
Now let me explain the technical details in simple and summarized language. The Kopf-Lischinski algorithm uses a similarity graph to group blocks of similar color, aiming at two results:
- Define a new shape to the image (and I very much like the result)
- Remove colors (and I very much dislike this choice)
The algorithm removes color information to simplify the vectorization process, but I came all this way without being limited by its “not that much increased complexity”, and I think I can extend the algorithm to describe how to handle more color information in these final steps.
If you think the “non-enhanced” version would be easy to implement, it's because you haven't thought about an image where a gradient is present. Close blocks would have similar colors, but distant blocks would have different colors anyway, so there is no easy way to choose which blocks should be grouped. Most of the ideas I've had (since the beginning of the project) would produce different results depending on the “iteration order”, so I consider such an algorithm too “fragile”/unstable. The only “correct” solutions I can think of at the moment are too expensive, and based on the benchmark in the paper, I doubt they were the ones chosen.
Keep in mind that any path I choose will leave me with code to produce.
This is a sequel in a series of posts. The last one was “part 0”, so the ordering is a bit insane, but nothing to worry about. Check the first one to understand what all this is about. I hope this will be the final post in the series.
Once upon a time… a polygon with holes appeared
The polygon-union described in the previous posts works correctly, and there is no better answer than the correct answer, but these polygons were meant to become B-Splines, and when I described them, I already knew that the mentioned algorithm would be insufficient.
Consider the following image.
The left polygon is the polygon you see before and/or after the polygon-union; they are visually indistinguishable. The right polygon has an outline to let you understand its internal representation. Looking at the right polygon, you can see how the polygon-union generates a polygon with a “hole”.
If the polygon-union doesn't generate visually identifiable pixels, then there shouldn't be any problem, but when we convert the polygon to a B-Spline, some areas of the image won't be filled. The polygons won't fit correctly, as shown in the image below.
The solution is to create some holes. With the representation of polygons used in libdepixelize, this task is very simple, and the key operation to accomplish the goal is to find the “edges that the polygon shares with itself”. I'll explain this task further in the following paragraphs.
The above image has 2 pairs of hidden points: pair <5, 6>, which is equal to pair <11, 10>, and pair <12, 13>, which is equal to pair <18, 17>. To extract the internal polygons that represent the holes, you just iterate over the points and, for each point, try to find another point equal to it, then take the internal sequence and use it to construct the internal polygon.
- Remark #1: libdepixelize performs these steps in backward order, to avoid too many moves of elements in the internal vector.
- Remark #2: The polygon is represented in clockwise order, while the holes end up in counter-clockwise order, but there is no reason to normalize. You can safely ignore this (de)normalization detail.
I forgot to mention that edges are not always only two points, so you should use the same “largest common edge” idea from the algorithm of the previous posts.
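The per-point search can be sketched like this. This is a simplified, single-hole version that only handles the two-point-edge case, not the longer common edges just mentioned; the names and the exact representation are my own, not code taken from libdepixelize:

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct Point {
    double x, y;
    bool operator==(const Point &o) const { return x == o.x && y == o.y; }
};

using Polygon = std::vector<Point>;

// Scan the polygon for a point that appears twice; the run strictly
// between the two occurrences is an internal polygon (a hole), and
// the remaining points stay in the outer polygon.
std::pair<Polygon, Polygon> extract_hole(const Polygon &poly)
{
    for (std::size_t i = 0; i < poly.size(); ++i) {
        for (std::size_t j = i + 1; j < poly.size(); ++j) {
            if (poly[i] == poly[j]) {
                // Points strictly between i and j form the hole...
                Polygon hole(poly.begin() + i + 1, poly.begin() + j);
                // ...and the rest stays in the outer polygon.
                Polygon outer(poly.begin(), poly.begin() + i + 1);
                outer.insert(outer.end(), poly.begin() + j + 1, poly.end());
                return {outer, hole};
            }
        }
    }
    return {poly, {}}; // no shared point: no hole
}
```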
Things became more complicated than I predicted and now involve some recursive functions, so I needed to extend (extend != change) the algorithm. To see the whole extension, check out the Splines::_fill_holes function in the libdepixelize source code and where this function is used.
Things then became even more complicated than the complications I didn't anticipate. The complicated code that fixes the position of T-junction nodes created a pattern where the polygon shares a single point with itself, and this pattern propagated wrongly into the “holes-detection” algorithm and beyond.
The algorithm “3rd ed.” checks whether the common edge has “length” 1 to solve this problem.
And then, the polygon was meant to be a B-Spline
This is a series about splines extraction, so it's fair to end with this step.
The algorithm is very simple. The points you got from the polygon-union algorithm in the previous posts become the control points of quadratic Bézier curves, and the interpolating/ending points of each Bézier curve are the midpoints between two adjacent control points. This is the special case where all points are smooth. The image below can help you get the idea.
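The smooth case above can be sketched in a few lines (names are mine, not libdepixelize's): every polygon vertex becomes the control point of one quadratic Bézier whose end points are midpoints with the neighboring vertices, so adjacent curves automatically share their end points.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// One quadratic Bézier per polygon vertex.
struct QuadBezier { Point start, control, end; };

Point midpoint(Point a, Point b)
{
    return {(a.x + b.x) / 2, (a.y + b.y) / 2};
}

// poly[i] is the control point; the curve runs from the midpoint
// with the previous vertex to the midpoint with the next one.
std::vector<QuadBezier> to_bspline(const std::vector<Point> &poly)
{
    std::vector<QuadBezier> curves;
    const std::size_t n = poly.size();
    for (std::size_t i = 0; i < n; ++i) {
        Point prev = poly[(i + n - 1) % n];
        Point next = poly[(i + 1) % n];
        curves.push_back({midpoint(prev, poly[i]), poly[i],
                          midpoint(poly[i], next)});
    }
    return curves;
}
```

Because consecutive curves meet at a shared midpoint and the control points lie on the same line through it, the joins come out smooth, which is the property the next paragraph relies on.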
If you have non-smooth points, then all you need to do is interpret them as interpolating points instead of control points. You'll have three interpolating points that are part of two straight lines. There is a bit more theory to understand why the generated B-Spline is smooth, but we're ALMOST done! In the next post (a small one, I hope), I'll detail how to get correctly positioned points for T-junctions.
Update: my remark on the exceptional lifetime of temporaries in array initialization was incorrect. This part is now fixed. I also included some essential information, as suggested by Herb Sutter.
C++, if you want to learn all of it, is big, difficult and tricky. If you look at what some people do with it, you might get scared. New features are being added.
This post is a continuation of “Splines extraction on Kopf-Lischinski algorithm, part 1”. The title contains “part 0”, because the algorithm step described here should be executed before the algorithm step described in “part 1”.
This blog post is kind of unstructured, but don't worry: I'm aware of that and it is intentional.
Generalized Voronoi diagram generation
The algorithm described in this section was already kind of documented, but the documentation wasn't good enough to be part of a post, so it was kept as a private home-made PDF.
Well, a Voronoi diagram is a black box where you put in some points (the seeds) and get back some polygons (the cells). Each polygon contains all points that are closer to its seed than to any other seed. There is a good article on Wikipedia, so I won't explain any further.
The Kopf-Lischinski algorithm executes a bunch of operations on a graph and uses a Voronoi diagram to extract a visual representation from this graph. The simplest form of Voronoi diagram works with 2D point seeds, but there are higher-dimensional Voronoi diagrams, Voronoi diagrams using different distance functions and even Voronoi diagrams using complex non-point seeds. We are interested in the Voronoi diagrams using complex non-point seeds.
The image below shows a representation of the output graph of the Kopf-Lischinski algorithm and its Voronoi diagram representation. The graph nodes are represented by circles, where nodes of the same color are similar and usually connected. The connections between nodes are represented by green lines.
The generalized Voronoi diagram is merely visual, so there is no special meaning to explain in the previous image. The seeds of this diagram aren't points; they are line segments. You just need to break each green line in two and add each half to its containing cell. Note that some polygons aren't convex.
The graph used as input has certain constraints that enable us to use some simple and fast operations instead of a full-featured and complex algorithm.
Since a Voronoi cell is a polygon containing all points that are closer to its seed than to any other seed, we can determine the edge of a Voronoi cell from the midpoint of two adjacent seeds. If we generate a vertex for each of a cell's 8 directions, we get an accurate Voronoi diagram.
We can get a simplified Voronoi diagram by forgetting about the top, bottom, left and right vertices (i.e. by generating only the diagonal vertices). The simplified version doesn't contain concave polygons.
Generating the diagonal vertices is more complex than generating the other vertices. We need to check if there is a connection with the other cell and, if this connection exists, generate two vertices. If the connection doesn't exist, we generate a single vertex, but its position depends on the existence of a connection between its neighbors. Look at the following figure.
All the information we need to generate the Voronoi diagram is located within a cell's neighbors, and the only extra tool we need to generate the points is the midpoint procedure. This is old news and was already implemented in libdepixelize.
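The diagonal-vertex rule can be sketched as a small case analysis. I use the bottom-right corner of a unit cell centered at (0, 0), so the corner sits at (0.5, 0.5); the quarter-pixel offsets are my reading of the paper's figures, not code taken from libdepixelize:

```cpp
#include <cassert>
#include <vector>

struct Point { double x, y; };

// connected_diagonal: this cell is connected to the diagonal neighbor.
// neighbors_connected: the two adjacent neighbors are connected to
// each other (which cuts this cell's corner off).
std::vector<Point> corner_vertices(bool connected_diagonal,
                                   bool neighbors_connected)
{
    if (connected_diagonal)
        // The connection crosses the corner: the cell keeps a thin
        // wedge through it, described by two vertices.
        return {{0.75, 0.25}, {0.25, 0.75}};
    if (neighbors_connected)
        // The neighbors' connection cuts this cell's corner back.
        return {{0.25, 0.25}};
    // No connection at all: the corner stays a plain square corner.
    return {{0.5, 0.5}};
}
```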
When we generate B-Splines using the Kopf-Lischinski algorithm, we need a way to separate points that create sharp corners from smooth points. The Kopf-Lischinski algorithm has techniques to handle precisely this issue. In libdepixelize, I decided to merge the point smoothness computation and the Voronoi generation into one single phase. This is the phase where we can gather lots of info about the graph, and we can propagate the smoothness data to later phases.
One note about the algorithm of this “metadata extraction” section: some points will disappear during the polygon union, and we don't care whether the metadata about these points is accurate, so we can simplify the algorithm by exploiting this property. The image below will be cited all the time while explaining this algorithm:
There are two types of vertices generated in this special Voronoi diagram:
- White type: The type of the node labelled “#1”. Node “#1” is contained by three cells: one purple and two yellow. The two yellow cells will be merged during the polygon-union. There are two possibilities for the remaining cell in this kind of node:
- When it has the same color: the node will disappear during the polygon-union and we don't care about its “smoothness” attribute.
- When it has a different color: we say it is a valence-2 node.
- Cyan type: The type of the node labelled “#2”. This type of node can appear on the border of the image (where it isn't smooth) or in the center of 4 cells (where its smoothness isn't predictable in advance). If it appears in the center of four cells, then:
- It can be in the middle of 4-connected cells; we don't care about its “smoothness” attribute, because this node will disappear.
- It can be in the middle of a valence-2 node and will be smooth.
- It can be a valence-3 node, and things start to get complex. After the polygon union, this node will be part of three different polygons, and only two of these three nodes will share the same value for the “smoothness” attribute.
Problem explained; time for the algorithm! The algorithm is kind of repetitive (if we iterate on the bottom-right node, we compare with the bottom and right nodes, then do it all again with nodes from different directions…), but the principle is the same for all the “repetitive” code, so I'm only going to show the important parts; you'll be able to see the full algorithm implemented in libdepixelize within a few hours. Lastly, this isn't the first algorithm I came up with, and I've gone through a lot of iterations already (lots of crumpled paper on my room's floor), but I think the result can perform pretty fast and I'm content. This post is kind of documentation, and it will help me not to forget what I wanted to do in the code.
The above image represents the data analysed in the following code example, except for the fact that we don't know the colors of the cells. We are iterating on the middle point of the image, and the currently iterated cell is cell A. The algorithm also uses the concepts of “shading edge” and “contour edge” described in the Kopf-Lischinski paper.
All images in this post were created using Inkscape and its awesome alignment tools.