Following the “7 bash tricks”, I’m now using zsh as my main shell. These are the differences from bash that I’ve noticed so far:
- Way smarter completion system.
- A transient rprompt showing information that isn’t important for scrollback and otherwise would only waste space.
- Zsh is more responsive/faster for me. I think the bottleneck in my bash configuration was the git completion/info code; zsh provides a fast git completion/info alternative, so it gets my vote here. That said, some users do install more expensive git modules in their zsh shells.
- A prompt character whose background turns red if the last command returned a non-zero status. The non-zero exit status is also printed on the output.
- More Emacs key bindings are supported out of the box. And they are easily customizable, so I can mimic some bash and Emacs behaviours.
Regarding day-to-day usage, zsh and bash aren’t too different and it’d be difficult to tell which shell is which. I type the same commands and I see (mostly) the same output. The svn to git change was way more significant to me than the bash to zsh change. Another recent change of habit was adopting the Solarized color scheme (not shown in this page’s outdated gifs).
Lots of users adopt a configuration framework like oh-my-zsh, zshuery or prezto. From the usual “look at my oh-my-zsh-based zsh” posts that I’ve read, I think these users like to configure tons of plugins, but I don’t, and I don’t see much benefit in using one of these configuration frameworks. In fact, I like a minimalist and clean starting configuration, and this is how I maintain my ArchLinux setup. But the main benefit of the framework-free setup is its self-contained nature: the only dependency is zsh itself.
My 239-line zshrc config file is available online in my git repo and there isn’t much magic in it. In this post I’ll explain only that small amount of magic.
Bash’s Ctrl+C behaviour
When you press Ctrl+C in bash, the shell aborts the editing of the current line and lets you try again from the beginning. The difference in behaviour between bash and zsh is that bash also prints a nice “^C” string, so you can tell whether the previous lines were actually executed or not. I wanted this behaviour in zsh.
I found out that zsh handles stty in a different way and you need to catch SIGINT to print the “^C” string. But zsh is customizable and allows you to easily trap the INT signal.
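A minimal sketch of what that trap looks like (the exact string printed is up to you):

```zsh
# Print "^C" when Ctrl+C aborts the current line, mimicking bash.
TRAPINT() {
    print -n '^C'
    return $(( 128 + $1 ))   # keep the conventional 128+SIGINT exit status
}
```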
This is trivial and far from the “magic” concept I wrote at the beginning of the post, but it’ll get complicated soon. Moving on…
The reddish background of the prompt char
After seeing so many esoteric zsh prompts, I started to ponder how I’d want my own prompt. I decided I wanted something minimalist, showing only important information and only when it is interesting. This is why I use relative paths in the prompt and the hostname is omitted if I’m not accessing the terminal through an ssh connection. But all those prompt demos from other zsh users made me wonder how useful a reddish color alert would be. And it is possible to put the red color on the background of the prompt character itself, so I don’t waste any extra space.
A simple way to set the prompt the way I want would be like this:
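The exact strings below are an approximation, not my real zshrc:

```zsh
# %(?.then.else) is a ternary on the last exit status; %K{red}/%k set and
# clear the background color; %# is the prompt char; %~ the relative path.
PROMPT='%~ %(?.%#.%K{red}%#%k) '
# One way to also print a non-zero exit status:
RPROMPT='%(?..%F{red}%?%f)'
```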
And it works, but there is one small problem. If I hit Ctrl+C, a non-zero status will be returned and the red background will be triggered, tricking me into thinking something bad happened. To “fix” the behaviour, I decided to add a “_trapped” variable that works like a mask, allowing or banning the red background. Then I needed to change the previously created TRAPINT function to ban the red background and add a hook before the execution of new commands to allow the red background again. The configuration ended up like this:
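A sketch of that arrangement (the prompt strings are still assumptions; the function names follow the ones mentioned in this post):

```zsh
_trapped=0

updatemyprompt() {
    if (( _trapped )); then
        # Masked: Ctrl+C aborted the line, so no red alert.
        PROMPT='%~ %# '
    else
        PROMPT='%~ %(?.%#.%K{red}%#%k) '
    fi
}

TRAPINT() {
    _trapped=1
    updatemyprompt   # precmd isn't always called before the prompt redraw
    print -n '^C'
    return $(( 128 + $1 ))
}

preexec() {
    # A new command is about to run: allow the red background again.
    _trapped=0
}

precmd() {
    updatemyprompt
}
```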
This change also has the nice side effect of letting me turn the red background off easily by pressing Ctrl+C. The “updatemyprompt” function needs to be called from “TRAPINT”, because the “precmd” function isn’t always called before the prompt is redrawn (see the manpages). I’d prefer to control the output of the prompt var through something like the “prompt_subst” option, but I haven’t figured out a pretty way to conditionally show a string using zsh’s command and parameter expansion based on the “_trapped” value (yet!). And even when I learn a prettier way to control the “_trapped” mask, I’ll still need the “precmd” function, because it is required by the “vcs_info” module.
The above code/config isn’t the final one (vcs_info’s config is missing), but let’s stop here, because there is no more magic to show. The code ended up ugly, with poorly defined responsibilities and an error-prone communication system (a single global variable caused a lot of damage here). The upside is that this is my shell, it does what I want and therefore I won’t change it. Also, my real zsh config actually uses an anonymous function to leak fewer variables and avoid unnecessary naming clashes.
Custom key detection
It turns out that terminals are hard. There are escape sequences that were created to interact with the terminal. This interaction allows applications to move the cursor around and to detect certain key combinations pressed by the user. Unfortunately, there are several protocols and there is no universal way to access these features. To register some fancy key bindings, the zsh wiki and several online pages suggest using terminfo or zkbd.
The zkbd method requires an annoying manual intervention on first start-up, so it is desirable as a fallback mode only.
The terminfo method only works under application mode, and the zsh wiki suggests activating application mode at the beginning of line editing and deactivating it at the end, before other applications execute. The problem appears when the terminal doesn’t support application mode. Fortunately, there is a way to check whether the underlying terminal supports application mode, so you can even add a nicely integrated zkbd fallback mode. Below is an initial snippet to configure zsh accordingly.
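This is close to the widely circulated zsh wiki snippet (the two bindings at the end are illustrative; a zkbd fallback would hang off the failed check):

```zsh
zmodload zsh/terminfo

# Enter application mode when the line editor starts and leave it before
# other applications run, so the $terminfo sequences are valid while zle
# is reading keys.
if (( ${+terminfo[smkx]} )) && (( ${+terminfo[rmkx]} )); then
    zle-line-init() { echoti smkx }
    zle-line-finish() { echoti rmkx }
    zle -N zle-line-init
    zle -N zle-line-finish
fi

# With application mode guaranteed, terminfo entries can drive the bindings:
[[ -n ${terminfo[khome]} ]] && bindkey ${terminfo[khome]} beginning-of-line
[[ -n ${terminfo[kend]}  ]] && bindkey ${terminfo[kend]}  end-of-line
```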
Note that I initially avoided terminfo altogether because I found the lack of information around “go to application mode” too scary, resembling some error-prone thing that would break. I needed to understand that zsh leaves application mode before executing other applications and only enters application mode if supported. I learned all these bits thanks to comments on this prezto pull request.
Somebody on IRC convinced me to try zsh, so maybe this is the last chance to document my bash tricks. Here they go:
Sometimes you want to reuse the last argument of the last command, and there is a shell variable that holds exactly this. Be careful to quote it, to avoid issuing multiple arguments instead of one. You could argue that it’s easier to navigate through the history of commands, but if you use something like HISTCONTROL='ignorespace', the history is not always available. I use the history of commands and some handy Emacs hotkeys most of the time too.
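That variable is $_ in bash; a quick illustration:

```bash
mkdir --parents projects/shell/tricks
cd "$_"    # $_ holds the last argument of the previous command; quote it
```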
It’s not unusual to issue arguments that share common parts, and brace expansion is here to help us.
If you’re unsure about the effects of the expression, just put “echo ” in front of everything and you’ll have a “preview” of what the command would do.
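A couple of illustrative examples, with the echo “preview” last:

```bash
cp httpserver.cpp{,.orig}       # expands to: cp httpserver.cpp httpserver.cpp.orig
mkdir -p src/{core,tests,doc}   # three directories sharing a common prefix
echo cp httpserver.cpp{,.orig}  # prepend echo to preview the expansion
```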
This trick saved me from sooo much typing.
It combines history navigation and data filtering. Simple and fast.
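Bash’s reverse incremental search (Ctrl+R) is one embodiment of this, and piping the history through grep is its non-interactive cousin:

```bash
# Ctrl+R at the prompt starts reverse incremental search: type a fragment
# of an old command and press Ctrl+R again to cycle through older matches.
# Non-interactively, in the same spirit (the pattern is illustrative):
history | grep qemu
```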
My system’s locale is not English, and usually this doesn’t mean much, but when the time comes to communicate with other people, I need a fast way to reproduce the problem in the developers’ standard language. Want to report a bug? Use the original error messages. Want to know why you are getting a warning? Search engines will help less with localized messages.
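Overriding the locale for a single command does the job:

```bash
# Re-run a failing command under the C locale to get untranslated messages:
LC_ALL=C ls /nonexistent
# ls: cannot access '/nonexistent': No such file or directory
```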
$(), “$()” and ` `
Use the output of a command as an argument (or arguments) for another command. I don’t use this trick in my day-to-day use of bash, but it’ll probably get used when the time to write scripts comes.
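A few hedged examples (the file names are illustrative):

```bash
nano $(grep -rl 'TODO' src/)                  # open every file mentioning TODO
echo "$(date): build finished" >> build.log   # quoted: one single argument
echo "kernel: `uname -r`"                     # backticks: the older spelling
```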
pipes, pipes, pipes everywhere
It’s the Unix way: each command does one thing and does it well.
It is useful in so many ways (a few sketches follow the list):
- Process the input set (eg. how many tests does project X implement).
- Filter a large data set (eg. which mount points are read-only?).
- Interactively filter a data set (eg. which files have missing copyright notices?).
- Add a nice pager to the output of a process.
- Follow the output of a command and log it at the same time (eg. tee).
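One hedged sketch per item (paths and patterns are assumptions):

```bash
grep -r 'TEST(' tests/ | wc -l     # process a data set: count the tests
mount | grep '(ro'                 # filter: which mount points are read-only?
grep -rL 'Copyright' src/ | less   # interactively inspect files missing notices
dmesg | less                       # add a nice pager to any output
make 2>&1 | tee build.log          # follow the output and log it at once
```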
sleep 8h && while true; do mplayer Música/Disturbing_music.ogg; sleep 5; done
This is my alarm clock. The behaviour of my smartphone’s alarm clock is pure shit.
If you hit Ctrl+C, sleep will return false, breaking the control flow and aborting the alarm.
The music won’t stop until somebody unlocks the screenlock, accesses the terminal and hits Ctrl+C. The audio hardware is loud enough and eventually I’ll wake up.
And my top 7 commands
- sudo: Run commands as root (it does more tricks than su).
- yaourt: Wrapper around pacman that can search for packages on AUR.
- git: I type git a lot.
- nano: A simple text editor. It’ll open fast and close in no time. Useful for simple editing tasks.
- cd: I type cd a lot.
- ls: I also type ls a lot.
- ssh/scp: This is the tool you’ll use when you have more than one machine/system under your command.
The above list was created from the history stored on my netbook, but I’m sure grep would be in this list if I had used the history stored on my desktop PC.
Since Tufão 0.4, I’ve been using CMake as the Tufão build system, but occasionally I see some users reimplementing the qmake-based project files, and I thought it’d be a good idea to explain/document why such a rewrite is a bad idea. This is that post.
Simple and clear (1 reason)
The build system that Tufão itself uses means *nothing* to your qmake-based project.
What your qmake-based project needs is a *.pri file for you to include in your *.pro file. And such a *.pri file *is* generated (and properly included in your Qt installation) by Tufão. You’ll just write the usual “CONFIG += TUFAO” without *any* pain.
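In other words, a consumer project file stays as trivial as this (a hypothetical app.pro):

```
# app.pro — hypothetical consumer project
TEMPLATE = app
SOURCES += main.cpp
CONFIG += TUFAO    # pulls in everything the Tufão-generated file provides
```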
Why I won’t use qmake in Tufão ever again (long answer)
Two reasons why it is a bad idea:
- You can define only one target per file, so you need subdirs. It’s hard.
  - The Tufão unit testing, based on Qt Test, requires a separate executable per test, and the “src/tests/CMakeLists.txt” CMake file beautifully defines 13 tests. With the CMake-based system, all you need to do to add a new test is add a single line to the previously mentioned file (see the sketch after this list). QMake is so hard that I’d rather define a dumb test system that only works after you install the lib, just to free myself from the qmake pain.
- There is no easy way to preprocess files.
  - And if you use external commands that you don’t provide yourself, like grep, sed or whatever, then your build system will be less portable than autotools. Not every Windows developer likes autotools, and your approach (external commands) won’t be any better.
All in all, it becomes hard to write a build system that installs the files required by projects that use QMake, CMake or PKG-CONFIG (Tufão supports all three).
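A hypothetical sketch (not Tufão’s real file) of why one-line tests are easy in CMake — a helper turns each Qt Test into its own executable:

```cmake
# Each Qt Test becomes a separate executable plus a registered test.
macro(add_tufao_test name)
    add_executable(${name} ${name}.cpp)
    target_link_libraries(${name} ${QT_LIBRARIES} tufao)
    add_test(NAME ${name} COMMAND ${name})
endmacro()

add_tufao_test(headers)      # adding a new test is one line like this
add_tufao_test(httpserver)
```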
The reasons above are the important ones, but there are others, like the fact that the documentation is almost as bad as the documentation for creating QtCreator plugins.
The ever growing distance from QMake
As Tufão grows, the build system sometimes becomes more complicated/demanding, and when that happens, I bet the QMake-based approach will become even more difficult to maintain. The most recent case of libtufao.pro that I’m aware of had to include some underdocumented black magic like the following just to meet the Qt4/5 demands of Tufão 0.x:
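Something of this flavor (an illustrative reconstruction, not the actual libtufao.pro):

```
# qmake scopes switching on the Qt major version — the kind of
# underdocumented conditional that Qt4/5 dual support forces on you:
greaterThan(QT_MAJOR_VERSION, 4) {
    QT += core network
} else {
    QT = core network
    CONFIG += qt
}
```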
You like a CMake-based Tufão
The current CMake-based build system of Tufão provides features that you’ll certainly enjoy. At the very least, the greater stability of the new unit testing framework requires CMake, and you certainly want a stable library.
In the beginning, QMake met Tufão’s requirements for a build system, but I wouldn’t use it again for demanding projects.
But I don’t hate QMake and I’d use it again in a Qt-based project if, and only if, I *know* it won’t have demanding needs.
Of course I’ll hate QMake if people start to overuse it (and create trouble for me).
And if you still want to maintain a QMake-based Tufão project file, at least you’ve been warned about the *pain* and the inferior solution you’ll end up with.
Today I’ve spent some minutes of my time fixing the metadata of the Tufão git repo. The issue made it difficult to tell which author owns which lines of code, which can be a problem if you want to give merit to the real coders (maybe this is related to ethics) and if you want to contact the authors later for actions that can only be done by the copyright owner (eg. changing the license). This incorrect info was introduced by misuse of the git tool.
First, I must admit that I was wrong and the issue was entirely caused by my ignorance of git at the time. The issue was not created on purpose. You can see an example here, where I mentioned the original author in the commit message to give the appropriate credit (my intent). But there are also cases where I failed to even make such a mention.
Every commit you make using the git tool has an associated author, and if you don’t specify the author explicitly, git will use the global config. This metadata is used by commands such as git shortlog -s and git blame. The solution is simple: just set the author explicitly using the --author argument.
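For instance (the name, e-mail and file below are illustrative):

```bash
# Commit someone else's patch while recording the real author:
git commit --author="Jane Doe <jane@example.com>" -m "Fix connection leak"

# The corrected metadata then shows up in:
git shortlog -s
git blame src/httpserver.cpp
```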
Now the git repo history mentions authors such as Paul Maseberg and Marco Molteni.
Last, but not least, I want to let you know that I believe the problem is solved. If you find something that I missed, just file a bug on GitHub and I’ll fix it.
Do you know the diamond problem? Well, everybody knows it and it is boring and old. This post is about not-so-well-known patterns (at least that’s what I think). The patterns shown in this post affect language design. Maybe in a future post I’ll try to aggregate patterns that affect library/application design only.
Be warned that not all “solutions” shown here are implemented in stable versions of the mentioned languages.
Open Type Switch
You can write a switch statement based on a value in several programming languages. Some are more restrictive and only allow you to use integer values. But what about a type-based switch? Not everyone is happy with the visitor pattern.
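One modern realization is Python’s match statement with class patterns (Python 3.10+; the class names are illustrative):

```python
class Circle:
    pass

class Square:
    pass

def describe(shape):
    match shape:                 # branch on the *type* of the value
        case Circle():
            return "a circle"
        case Square():
            return "a square"
        case _:
            return "something else"

print(describe(Circle()))   # a circle
```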
You were taught how you could override a method/member function in an inherited class to provide specialized behaviour. But the dispatch is based solely on the this/self argument. What about multiple dispatch?
Imagine a class inheriting from Matrix to provide faster operations for that kind of matrix. If you invoke the following code, it’ll be faster:
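A Python sketch (SparseMatrix is a hypothetical specialization):

```python
class Matrix:
    def multiply(self, other):
        return "generic algorithm"

class SparseMatrix(Matrix):
    def multiply(self, other):
        return "fast sparse algorithm"

a = SparseMatrix()
b = Matrix()
print(a.multiply(b))   # dispatches on `a` alone, so the fast path runs
```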
But what about the following one?
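Continuing the sketch above:

```python
print(b.multiply(a))   # dispatches on `b` alone: the generic path runs,
                       # even though `a` is a SparseMatrix that could go faster
```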
This is exactly the problem that multiple dispatch tries to solve.
Generators (and the yield keyword)
This is an interesting technique that reminds me of the producer/consumer pattern.
Think about the fibonacci numbers as a producer. And about the sum-of-all-elements as a consumer. Without the concept of generators, you could create a function that returns the list of n elements of the fibonacci sequence. Then you could create a consumer to sum all elements.
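A generator-free sketch in Python:

```python
# Producer that materializes the whole list up front:
def fibonacci_list(n):
    result, a, b = [], 0, 1
    for _ in range(n):
        result.append(a)
        a, b = b, a + b
    return result

print(sum(fibonacci_list(10)))   # the consumer: prints 88
```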
This approach is too problematic. The most notable problem for the client might be the performance. The programmer may also notice that this approach cannot handle infinite sequences (it’s not flexible). We could use functional programming to improve the solution.
We could use a special number (like -1) to handle the infinite case, but this solution suffers from other problems. For instance, there is an inversion of control (you cannot stop the work of the producer function inside the body of the consumer function). Also, you must know the argument n before calling the consumer. So here we have generators (and a yield keyword to make everything easier) to solve this problem:
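A Python sketch with yield:

```python
def fibonacci():
    a, b = 0, 1
    while True:          # an infinite producer is fine: values come on demand
        yield a
        a, b = b, a + b

def sum_first(n, numbers):
    total = 0
    for i, value in enumerate(numbers):
        if i == n:
            break        # the consumer stops the producer whenever it wants
        total += value
    return total

print(sum_first(10, fibonacci()))   # 88
```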
A generator is an example of the coroutine concept. This concept is about a function that has multiple entry points (it can resume later) and it is used in a few patterns (asynchronous programming, for example).
Asynchronous programming with async and await
Are you tired of coroutines already? Because there is more! The async/await keywords from C#. I didn’t find a before-after text on the C# site, so I’ll write a simple example with C++ code.
Here is before async/await:
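Roughly this shape (a runnable C++ sketch; async_add is a made-up callback-style API):

```cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

// Hypothetical asynchronous API: computes a+b elsewhere, then calls `done`.
void async_add(int a, int b, std::function<void(int)> done) {
    std::thread([=] { done(a + b); }).detach();
}

int main() {
    // Each step of the algorithm lives inside the previous callback:
    async_add(1, 2, [](int r1) {
        async_add(r1, 10, [](int r2) {
            std::cout << r2 << '\n';   // prints 13
        });
    });
    std::this_thread::sleep_for(std::chrono::seconds(1));   // crude wait
}
```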
And here is after async/await:
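The same flow with await-style syntax. C++ eventually got this as co_await in C++20, which this sketch uses; the Task/awaiter boilerplate would normally be hidden inside a library:

```cpp
#include <chrono>
#include <coroutine>
#include <functional>
#include <iostream>
#include <thread>

void async_add(int a, int b, std::function<void(int)> done) {
    std::thread([=] { done(a + b); }).detach();
}

// Adapter that turns the callback API into something co_await-able.
struct AddAwaiter {
    int a, b, result = 0;
    bool await_ready() { return false; }
    void await_suspend(std::coroutine_handle<> h) {
        async_add(a, b, [this, h](int r) { result = r; h.resume(); });
    }
    int await_resume() { return result; }
};

struct Task {
    struct promise_type {
        Task get_return_object() { return {}; }
        std::suspend_never initial_suspend() { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

Task demo() {
    int r1 = co_await AddAwaiter{1, 2};    // suspends without blocking
    int r2 = co_await AddAwaiter{r1, 10};  // reads top to bottom, no nesting
    std::cout << r2 << '\n';               // prints 13
}

int main() {
    demo();
    std::this_thread::sleep_for(std::chrono::seconds(1));   // crude wait
}
```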
This await keyword could make the lives of node.js developers quite a bit easier.
Here is the C# site’s page about this feature (and no, I do not like to program in non-portable languages):
Single-dispatch generic functions
In some statically-typed languages, you have the power to create functions that have the same name but receive different argument types. This is called function overloading and you may be aware of it. But for several dynamically typed languages, it doesn’t make sense to declare two functions with the same name, because you can call any function with values of whatever type you want.
If you want to create a function that can handle different types, you have to invent your own dispatching code or implement all the logic in the same function. Too problematic.
You should see what Python developers proposed:
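That proposal is PEP 443, which landed as functools.singledispatch in Python 3.4:

```python
from functools import singledispatch

@singledispatch
def process(arg):
    return "something else: %r" % (arg,)

@process.register(int)
def _(arg):
    return "an integer: %d" % arg

@process.register(list)
def _(arg):
    return "a list with %d items" % len(arg)

print(process(42))        # an integer: 42
print(process([1, 2]))    # a list with 2 items
print(process("hi"))      # something else: 'hi'
```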
Did you like this post? Why not spend 5 minutes writing about the patterns that you find interesting? Post a link back here, so we can be aware of your text (and my programming knowledge will increase).
After I saw several fragile designs adopted by several libraries written in C (parsers relying on global variables, broken exception handling implementations, …), I realized that parallel programming can teach newbie programmers a lot about API design, such as:
- Avoiding global variables
- Pondering the best way to make two initially unrelated programming parts communicate with each other
- The implications that a decision has on flexibility, safety and performance (usually all they care about is ease-of-use)
- Migrating easily to other programming models/idioms/paradigms (such as functional, event-driven, …)
Should programmers delay the parallel programming topics or learn them while they are still newbies?
The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.
Here’s an excerpt:
The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 8,500 times in 2013. If it were a concert at Sydney Opera House, it would take about 3 sold-out performances for that many people to see it.
Recently I started working with PowerPC-related technologies and needed to prepare an environment for my studies/research. It’s not that difficult (though I recommend picking a book, because it’s going to take a lot of time), but I wanted to write a HOWTO for ArchLinux anyway.
qemu is an open source machine emulator that we will use to prepare our environment. All you need to do to install qemu under ArchLinux is run the following command:
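```bash
sudo pacman -S qemu
```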
After you have qemu installed, you need to install an operating system supporting PowerPC (just as you would on a real machine). I wanted to install ArchLinux, but it looks like PowerPC support on Arch is dead, so I’ll go with Debian. The steps to follow are: (1) create a virtual machine, (2) install a guest operating system on it and (3) configure the guest system to compile PowerPC software.
To accomplish the first step, you need to create a disk image, and this can be done with the following command (where 20G is the size):
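The image name and the qcow2 format below are my choices here:

```bash
qemu-img create -f qcow2 powerpc.img 20G
```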
Put everything in the same folder and use the following script to start qemu:
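A minimal sketch (the iso name and memory size are assumptions):

```bash
#!/bin/sh
# Boot the installer from the CD image ("-boot d").
exec qemu-system-ppc \
    -hda powerpc.img \
    -cdrom debian-powerpc-netinst.iso \
    -boot d \
    -m 512
```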
Follow the usual Debian installation steps and change your script to boot from the HD (replace “-boot d” with “-boot c”). When I installed the system, the installer told me it couldn’t complete the installation of the bootloader, so I skipped this step and, to my surprise, the system booted up without my intervention.
Debian comes with GNOME by default, which is slow on an emulated system, so I recommend installing the xfce4 and xfce4-goodies packages and replacing GNOME.
Also, qemu is really slow at emulating the VGA card, so I suggest installing the openssh-server package and doing most of your tasks over an ssh connection. You can redirect ports (good for accessing the guest via ssh) from the host system to the guest system by passing the -redir argument to the qemu-system-ppc command:
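For example (2222 is an arbitrary host port; -redir is the legacy syntax of that qemu era):

```bash
# Forward host port 2222 to the guest's ssh port 22:
qemu-system-ppc -hda powerpc.img -boot c -m 512 -redir tcp:2222::22

# Then, from the host:
ssh -p 2222 user@localhost
```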
If you want to dig deeper, I suggest reading this lengthy developerWorks article.
The next tutorial should be a cross-compilation guide to avoid all these intermediate steps; part of it is already written, but I’m struggling to make the complete system work.