
Suggestion: 8+1 languages worth learning


Every now and then I come into contact with a few programming languages, and this is the subset that I believe would give me a very close insight into the sum of all the languages I've had contact with. Also, this subset is based not only on the ideas each language aggregates, but also on their usefulness and importance in the general programmer's toolbox.


Regex

Just about the most awesome way to describe and manipulate words from regular languages, whether it's used for communication purposes within some specification or to crawl for certain patterns within a large collection of texts. It's useful even within the programming environment itself. And to contribute to its awesomeness, it's one of the easiest and fastest things to learn. It's useful even for non-programmers (think about that time when you wanted to rename all the files in a folder for better consistency).

You can visualize regex patterns using Regexper or any of its competitors.
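The file-renaming use case fits in one line of shell (GNU sed assumed here, since the case-conversion escape \L is a GNU extension):

```shell
# Turn a name like "IMG 0042.JPG" into "img_0042.jpg": replace the
# first space with an underscore, then lowercase the whole name.
echo "IMG 0042.JPG" | sed -E 's/ /_/; s/.*/\L&/'
# prints: img_0042.jpg
```

Wrap that in a `for f in *.JPG` loop with `mv` and you've renamed the whole folder.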


MarkDown

Started as a simple tool to prettify the common syntax used in text-based email. But now just about every major site visited by programmers (e.g. StackOverflow, Reddit, GitHub, Doxygen-generated ones) has some support for MarkDown, or for its recent attempt at a smart standardization to spread good common practices and inspire better interoperability among supporting tools.

You can think of MarkDown as a simple way to describe which parts of the text will be bold, which will be the title of a subsection, and so on. MarkDown is simple! MarkDown is simple enough to be accepted in products targeted at non-programmers, like blogging platforms (even WordPress) or discussion platforms.
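For instance, the whole syntax needed for a small document fits in a few lines:

```markdown
## A subsection title

Plain text with **bold parts**, *emphasis* and a [link](https://example.com).

- a list item
- another list item
```

That's all there is to it for most day-to-day writing.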


C

A language that appeared in 1972 and is still interesting and still important. Being the “portable Assembly”, operating system kernels are still written in C. Pieces of software dealing with low-level details are still written in C. Embedded projects are still written in C.

C is not a popular language without merit. C is just the right abstraction to let you forget about Assembly while still having no overhead between your software and the machine. Compilers will do a fantastic job for you in no time.

C is an easy language to learn, adding just a handful of abstractions like subroutines and structures. Of course, C is very low-level and you're expected to face manual memory management (and memory leaks), bit-by-bit serialization, pointers to functions (no closures here), architecture and operating system differences, and maybe things like varargs, setjmp and mmap. You should be able to understand the performance implications of a given decision. This insight is something C is great at providing and will hardly be acquired by learning another language.


Haskell

Haskell is one of the languages I learnt this year. It's a typed, purely functional language. It's a great language. It has features that decrease the total number of lines of code you need to write (like list comprehensions and pattern matching), a clever syntax and some great concepts worth learning (higher-order functions, currying, lazy evaluation…).

Not everything about Haskell was new to me, as I had already learnt functional programming through Scheme some years ago, but Haskell does a much better job. I hate the Lisp naming conventions (car for the head of the list, seriously?) and the excessive number of parentheses. You shouldn't have to follow my path. You should be introduced to functional programming with Haskell.

Also, look at how elegant this QuickSort is:
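It's the classic list-comprehension definition (reproduced here for reference):

```haskell
-- The famous (not in-place) Haskell quicksort: sort the elements
-- smaller than the pivot, then the pivot, then the rest.
quickSort :: Ord a => [a] -> [a]
quickSort []     = []
quickSort (x:xs) =
    quickSort [a | a <- xs, a < x] ++ [x] ++ quickSort [a | a <- xs, a >= x]
```

Two equations, no index bookkeeping, and the structure of the algorithm is all that's left on the page.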


Ruby

Ruby is another of the languages I learnt this year. It's a purely object-oriented language. Some cleverness was invested in its syntax and I very much appreciate this. It's a very dynamic language where every class is open and even things like attr_reader are methods.
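A small sketch of both points: even a core class can be reopened, and attr_reader is just a method called on the class being defined:

```ruby
# Open classes: this adds a method to every String in the program.
class String
  def shout
    upcase + "!"
  end
end

# attr_reader is not special syntax; it's a method that defines readers.
class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end
end

puts "hello".shout      # HELLO!
puts Point.new(1, 2).x  # 1
```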

Object-oriented programming is one of these must-have skills for a programmer and I think Ruby, being purely object-oriented, is a great language to learn this paradigm. Hide and encapsulate!

I chose to learn Ruby while looking for a scripting language to empower a game engine that I might code. Ruby really impressed me. Ruby is so dynamic that even if I design a wrong class hierarchy or something, Ruby probably has a way to fix it. I don't intend to design bad hierarchies, but I don't know who will use my possible future game engine, and this concern then becomes undeniably important.


JavaScript

One of the worst languages I've ever seen. But also one of the best languages I've ever seen (yep, out there you can find programming languages that would surprise you in a bad way). This language is not the one I'd like to be the most popular, but it's just good enough not to be hated. Also, it runs in about every web browser, which is like… everywhere. Importance and interoperability. It's like you really need to know JavaScript.

JavaScript is like the Assembly of the web. You'll find many tools that translate source code from some language into JavaScript just to enable execution within the browser. Originally developed for the browser, JavaScript has grown since and is now popular even on the server side. JavaScript also conquered the smart-side.

Not knowing anything about JavaScript is almost like not knowing how to read in the programming industry. It's a terrible language full of bad decisions, but it's the common denominator of web development.

Learning JavaScript may also help solidify concepts you should know, like asynchronous APIs, JSON and some others.
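Both concepts fit in a few lines; this sketch uses only standard APIs (setTimeout and the JSON object):

```javascript
// An asynchronous API: the function returns immediately and the
// callback runs later, so execution never blocks on the "request".
function fetchUser(id, callback) {
  setTimeout(function () {
    // JSON: the de facto data interchange format of the web.
    var payload = JSON.stringify({ id: id, name: "alice" });
    callback(null, JSON.parse(payload));
  }, 10);
}

fetchUser(42, function (err, user) {
  if (err) throw err;
  console.log(user.name); // prints "alice"
});
```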


XML

Responsible for most of the web traffic, this is a pretty important and simple language for understanding how web documents are structured. If you think I'm overestimating the web, it's because it's one of the greatest things we have. But XML is not only about the web: it's about interoperable documents and protocols, and it is used as such. You can find XML in use within vector formats, formats for office applications and even chat protocols. I think learning the basics of XML is a big deal.
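As an example of the vector-format use mentioned above, a complete SVG file is just a small XML document:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <!-- One element per shape; structure and attributes are explicit -->
  <circle cx="50" cy="50" r="40" fill="navy"/>
</svg>
```

Nested elements, attributes and namespaces: once you've read one XML vocabulary, you can read them all.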


LaTeX

I personally think that the LaTeX tools aren't among the most polished tools. Just look at the Makefile generated by Doxygen to see the run-until-no-more-differences-are-found loop that works around inconveniences in the LaTeX tools. Or just look at the terrible error messages. Also, the syntax isn't exactly pleasant.

But when you want to focus on the content, forget about the accumulated little formatting details and produce beautiful scientific papers, a book with consistent internal cross-references or even just a few math formulas, LaTeX is probably what you should, at least, consider.
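A complete minimal document showing the math and cross-reference side:

```latex
\documentclass{article}
\begin{document}
An inline formula, $e^{i\pi} + 1 = 0$, and a numbered one:
\begin{equation}
  \label{eq:sum}
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}
\end{equation}
Equation~\ref{eq:sum} is referenced by label; LaTeX keeps the
numbering consistent for you.
\end{document}
```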

Bonus: bash

Capable of automating the most surprising tasks in a computer, if you are using a Unix variant system. You can automate builds, customize software startup sequences and manage your system. But if you're using a Unix variant system, you may already be aware of that.
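A sketch of the build-automation case (the file names are hypothetical):

```shell
#!/bin/bash
# Compile every .c file in the current directory into a matching
# executable; a tiny example of the automation bash makes easy.
for src in *.c; do
  [ -e "$src" ] || continue   # no .c files: skip the loop body
  out="${src%.c}"             # parameter expansion strips the suffix
  echo "building $out from $src"
  cc -O2 -o "$out" "$src"
done
```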


No Java, C++ or Python in this list. Maybe I'll do a part 2 of this article with languages that have a lesser chance of being used, like SQL, MongoDB, OpenGL, R, GStreamer or some Assembly. Actually, I think Java, C++ and Python have a better chance of being used than Haskell, but if you learn every language in this list, C++, Java and Python will be easy to catch up on, and the lines of code you write will be more elegant.

I don't feel the will to trust logic all the time

If the last two airplanes from company A crashed, it doesn't mean that their next airplane is going to crash. It's not a logical consequence. It might be a statistics problem, but then I won't trust the company for a while anyway.

If the situation “who are you going to save?” happens, then you don't have a logical decision. I don't use logic's help in all my decisions.

Anyway, since I started computer science, I feel like I've seen fewer events where people use illogical arguments to make decisions.

“one ring to rule them all”

PHP case: one vision, two approaches

To this day, I haven't read a post defending PHP. There are all these texts attacking the language, and I dislike most of the texts I've read. I don't like the attacked PHP language either. But what I dislike above all is the excessive use of fallacies. How could we have a logical discussion if we keep using them?

I don't mind if you share a personal experience that cannot be used to prove a statement. If we're lucky, your experience might be fun to read or will teach us to avoid specific behaviour in specific circumstances that may apply at specific times.

I don't mind if you carefully expose facts that the creators want to hide from us to affect our level of trust in such creators, as long as you use evidence to sustain such facts. You aren't trying to logically prove something, but your text is still useful.

I don't even mind if you create a text that relies completely on fallacies, but I mind a lot if someone uses such a text to justify a decision. These texts, in my experience, tend to be fun anyway.

So, there are the two following linked texts about PHP, and in one of the two, the author demonstrates more PHP knowledge than the other. Which one deserves more of your trust/attention?

Showtime: library to be proposed as Boost.Http!

It's been two months already since my last post on this blog. All this time (and more), I've been (among other tasks) working on my 2014 GSoC project, an HTTP server proposal for Boost. I've finally reached a point where I feel it's ready for major review/feedback.

If you’re a C++ programmer, a native speaker or an HTTP researcher (or just a little of everything) and you want to help, I’d like to ask you to review the project (interface-wise) and give me feedback.

You can find all the documentation and code on GitHub.


This isn’t my first time on GSoC, but the experience was very different. The communities, development model, targeted audience, knowledge domain, to name a few, were completely different. Also, this year’s project wasn’t as challenging as the last one (although this is very subjective).

I improved as a programmer, but this kind of improvement is limited. Once you become able to write algorithms for Turing-machine-compatible devices, there isn't much room for improvement and you need to hunt for other skills to continue improving (e.g. security, parallelism…). Coding for the sake of coding solves no problems. I decided to take a break (not exactly now, but in a few weeks) to make a shift and start to increase the effort I dedicate to my academic skills.

Next step

I aim to integrate this library into Boost, so I still want to invest effort in this project.

GSoC 2014/Boost

I was accepted for GSoC 2014 and I’ll be working on the Boost project.

I created a new category on this blog to track the progress, so you'll be able to have a separate RSS feed for these posts. The new category URL is

Yet another libdepixelize update

Looks like I’m super creative lately. I’ve had another idea to improve libdepixelize.

The idea is to use the knowledge I got from the compilers course on the Coursera MOOC platform to make libdepixelize implement an abstract machine that can be programmed through image metadata gathered from PNG files to execute arbitrary code, turning libdepixelize into a generic CG framework.

Such an extension will allow libdepixelize to transform images like the following one …

Mario pixel art (magnified 16x)

… into the following other one:

In fact, I'm so excited about this idea that I'll start working right now. I'll try to deliver this feature at the end of this very week, and you can wait for some extensions in the next week.

In the list of planned extensions, there is support for examining your computer usage, so we can gather general info about our citizens and help the police chase criminals. But don't worry, we'll only use this info for the good.

Another libdepixelize update

Evil patterns

I've invested some effort in improving libdepixelize's response to evil patterns. Because libdepixelize won't discard color information, the connections of the similarity graph aren't a transitive relation, and extra topological patterns can happen. I refer to these extra patterns as evil patterns. The name comes from the fact that I enjoy playing Zelda and defeating the evil of the Hyrule land. The same happened in the latest libdepixelize commits, where I overcame some patterns to improve the output quality. Quality can be subjective sometimes, so I was a little conservative and limited my changes to things that I could reason about. The effort ended up in new samples for the libdepixelize documentation, new rules for the algorithm and its related documentation, and lines of code in libdepixelize itself.

The new rules are added as an extra step in the process. They are not treated like the Kopf-Lischinski heuristics for resolving crossing connections. I think it may be possible to take some of the new rules and describe versions that are more general and can be added to the “container” that holds the old heuristics. To make this happen, I'd need to define the concept of “similar color” as a set and operations on top of that set, the notion of interval and other things, plus logical reasoning on top of all that. A bit of mathematical work to improve the quality a little more, but first I want to investigate the use of La*b* colors (an old suggestion by Nathan) to replace the current concept of “similar colors”. I haven't replaced the current color space until now because YUV and La*b* behave differently, and I couldn't just convert the YUV constants that define the boundary between dissimilar colors to La*b* and achieve better results. The current YUV constants were taken from the HQx filter and I need to define a methodology to find new constants.

The advantage of a rule that is more general and unifies behaviour is just the beauty of a simpler rule that handles the real problem, as opposed to several branches that are consequences of the real problem. It'd abstract the nature of the problem better. It'd make the code simpler. It'd handle more branches. Being a single rule that affects more branches, it'd be easier to test and better at convincing me that the improvement is real and that there will be no loss of quality in other images.

It’d be interesting to investigate the range of voting in all heuristics and try to come up with “fair” multipliers/strength.

New idea to represent splines

Previously I carefully inserted new nodes to obey the “adjust splines” technique from the Kopf-Lischinski paper while being limited by the old spline representation. This approach has caused trouble in several later steps.

The first problem was removing the extra invisible nodes that are present in the output SVG. These extra nodes not only make the output larger and rendering more CPU-intensive, but can also be troublesome for artists wanting to manually edit the generated image. I've made several unsuccessful attempts to remove these nodes and I'm sure it's possible, but I'll try to move away from this problem by trying a completely different approach.

The second problem is similar, in essence, to the first one, described in the paragraph above. When I originally disabled the optimization step in the Inkscape GUI and marked it as experimental, one of the reasons was that it required “extra preprocessing steps” (a pretty vague explanation, but I should try to improve my communication skills). With no extra invisible points, the optimization step will be way simpler. The part of the optimization step that avoids overlapping shapes and holes will partly go away, and the approach I mentioned previously (“A new idea to keep the shape of optimized splines correct”) will be affected.

The idea is to split splines. I was previously using a list of points. The new idea is to use a list of splines, where a spline itself is a list of points. I hope that the new representation will allow something closer to an “arbitrary” representation, just enough to apply the “adjust splines” operation. The new representation should do without extra points and ease the last processing steps (in terms of processing power and my productivity). Of course this change requires a lot of refactoring and will take a bit of time to finish. Also, the “hope” word used previously means that I haven't thought about all the implications and I'm sharing this idea very early. I'm trying to improve my communication skills and this means that you'll see some “flood” on this blog. The “flood” might include ideas that eventually prove to be bad at a later point and never hit the libdepixelize code.
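In types, the change looks roughly like this (the names are hypothetical, not libdepixelize's actual ones):

```cpp
#include <vector>

// Hypothetical names; libdepixelize's real types differ.
struct Point { double x, y; };

// Old representation: a shape is one flat list of points, which
// forced inserting extra invisible nodes to "adjust splines".
using OldShape = std::vector<Point>;

// New representation: a shape is a list of splines and a spline is
// itself a list of points, so each spline can be adjusted on its
// own, without polluting its neighbours with invisible nodes.
using Spline   = std::vector<Point>;
using NewShape = std::vector<Spline>;
```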

Beta testers

After I shared some images on Google+ related to libdepixelize improvements, some people demonstrated interest in helping me with beta testing.

First, the software is free software, so anyone can use it (even the development versions). So, if you find a crash or something that obviously is a bug, file a bug report and I'll fix it.

Second, I still want to improve the quality of the algorithm, so a good pixel art database can help me a lot. Currently the algorithm behaves badly on images with too many gradients, but this doesn't mean that a lot of images with gradients will help me improve the quality of the algorithm. I need to publish a page explaining what kind of image can help me to improve the quality of the algorithm.

Third, you can help me with profiling info to improve the performance of the algorithm. I'll probably send a message to the Inkscape mailing list with the title “request for beta testers/profiling data on libdepixelize”. I'll probably (re-)share the message through Google+, this blog and maybe more. If you follow any of these communication channels, you'll know when I need help.

Thanks for the support.

New logo

The project doesn't have a logo and uses an ugly icon in the Inkscape GUI dialog. I was thinking about using the character introduced by Jabier on the Inkscape mailing list to represent the libdepixelize project:


This image is already highlighted in the algorithmic documentation report anyway and is unique enough.


I'll have a test at the end of the week, and later I'll devote more of my time to playing with POWER and LLVM. So libdepixelize will have to wait a little until I can make more commits.


I’m aiming to deliver all the improvements related to the original Kopf-Lischinski algorithm before Inkscape 0.91. Later I’ll try to improve performance. “Beyond Kopf-Lischinski” extensions have no timeline and depend on creativity.
