
Dovecot + Maildir + Ubuntu 14.04 LTS Upgrade

I recently upgraded the server behind this site to Ubuntu 14.04 LTS from 12.04 LTS (only about a year late!)

A few things went awry (the PHP install couldn’t talk to MySQL, for example), but a reboot cleared that right up. However, one piece remained broken: mail wasn’t being delivered properly.

I don’t use this server for mail much myself, but some people (like my brother) do. It was broken, and I had no idea why. It didn’t help that the last time I touched the config was over a year ago. It also doesn’t help that mail server setup is basically a dark art. Regardless, after a few days of poking at it and “hoping the problem would go away”, I decided to go at it again today. Hopefully by writing this down I’ll remember a bit more about my setup, but if not, at least I’ll have this handy reference when I forget it again.

The way mail is set up on this server is that Postfix listens for incoming SMTP traffic, which it then forwards to Dovecot for delivery. Dovecot is set up to use the Maildir format, but instead of storing the maildirs in users’ home directories, it stores them in /var/mail/ since not all the users even have home directories. It made sense at the time, and I think it still makes sense now!
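
For reference, the Postfix side of the handoff is just local delivery through Dovecot’s LDA. From memory it’s something along these lines in /etc/postfix/main.cf (the exact path and flags on my box may differ):

# hand local mail to Dovecot's deliver agent instead of Postfix's own mailbox delivery
mailbox_command = /usr/lib/dovecot/deliver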

At any rate, with Dovecot’s upgrade came a problem, and after digging around, I saw in the logs that Dovecot was unable to deliver mail to /home//Maildir. Of course, those directories didn’t exist, so of course it was failing! However, I had set up the mail_location to be /var/mail/, so what was going on?

It turns out that Ubuntu’s Dovecot packaging added a new configuration file, /etc/dovecot/conf.d/99-mail-stack-delivery.conf, which set the mail_location back to the default maildir under the user’s home directory, and apparently the last file to set it wins (I had previously set it in /etc/dovecot/conf.d/10-mail.conf). I commented out the offending line, and by tailing /var/log/mail.log, I could see that mail was once again being delivered.
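
For the record, the relevant settings look roughly like this (the exact maildir template is from memory, so treat it as illustrative):

# /etc/dovecot/conf.d/10-mail.conf -- where I want the setting to live
mail_location = maildir:/var/mail/%n

# /etc/dovecot/conf.d/99-mail-stack-delivery.conf -- the line I commented out
#mail_location = maildir:~/Maildir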

Now, I don’t know if this means it’s fixed; my brother will have to let me know. Regardless, it’s at least less broken now.

ZoomToScroll Chrome Extension

I made my first Chrome Extension today, after being annoyed by Chrome’s ctrl-scroll zoom anti-feature for far too long. I’m an avid user of Ctrl-Click to open links in new tabs. Often I press Ctrl while I’m still scrolling, resulting in Chrome zooming the window I’m scrolling through, which is never what I want — I’ll use Ctrl-+ and Ctrl-- for that, thank you very much.

I’ve searched several times before, but only today did I come across an extension, No MouseWheel Zoom, that is effective at stopping this annoying behavior.

However, it wasn’t exactly what I was hoping for. While it functioned as advertised, I actually would prefer to have the state of my Ctrl key ignored — ie, to be able to continue to scroll. I figured I’d take a look at the code of the extension and see if it looked doable.

Sure enough, it looked pretty straightforward, so I started hacking the extension, only to have Chrome disable it for security since it no longer had the same digital signature. Thus, it was time for me to learn how to create a new extension.

It’s actually pretty straightforward, and the documentation is quite useful as well. It wasn’t long before I had my own extension, which I called ZoomToScroll, written and tested. The next step was to package it and upload it, which was also straightforward. A one-time fee of $5 let me place it on the Chrome App Store, where you and anyone else can download it for free!
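
The heart of it is just a content script that swallows the wheel event when Ctrl is held and does the scrolling itself. A simplified sketch of the idea (not the literal published code):

// content script: stop ctrl+wheel from zooming, but keep the page scrolling
window.addEventListener('wheel', function (e) {
  if (e.ctrlKey) {
    e.preventDefault();               // suppress Chrome's ctrl-scroll zoom
    window.scrollBy(0, -e.deltaY);    // scroll "touch-screen style", as noted below
  }
}, { passive: false });               // preventDefault needs a non-passive listener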

There are a few caveats to be aware of: I’ve only tested it on a single computer (my laptop), and it’s set up to scroll like a touch screen (that is, opposite of the old mouse wheel way). Also, different web sites seem to have different effects when Ctrl-scroll happens; some scroll speeds are faster while others are slower, and I don’t really care to investigate right now.

If this would be useful to you, feel free to download it and let me know how it works in the comments (or on the Chrome App Store).

Make mercurial use vimdiff the same way git does

At work, one of my favorite pastimes is getting mercurial to behave more like git.

Today, I decided to tackle merge conflict resolution using vimdiff.

Git has great support for this. In my .gitconfig, I have the following lines:

[merge]
tool = vimdiff

Git is smart enough to run vimdiff with four panes — base, local, remote, and output. The first three windows are at the top, and the output window is at the bottom. Since it’s vimdiff, everything is colorized so you know what is happening, and the output has all the merge markers so you know what you need to fix.
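
To be concrete, that four-pane view comes from running git’s mergetool on a conflicted merge (the branch name here is just an example):

git merge some-feature-branch    # hits conflicts
git mergetool                    # opens each conflicted file in the 4-pane vimdiff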

Mercurial’s default seems to be a three-pane vimdiff view that has local, other, and base with no fourth pane to do the editing. This is, in my opinion, strictly worse.

I thought it would be straightforward, but I was wrong: the official documentation is itself wrong in a few ways (since it’s a wiki, I may edit it after I finish this post, if it’s easy).

What I ended up with that actually works (using hg version 2.9.1 and vim 7.4) is adding the following to my .hgrc:

[ui]
merge=vimdiff
 
[merge-tools]
vimdiff.executable = vim
vimdiff.args = -d -c "wincmd J" $output $local $other $base
vimdiff.premerge = keep
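
With that in place, a conflicted merge drops straight into the four-pane view (the revision here is made up):

hg merge 1234    # conflicted files open in vimdiff: local, other, and base
                 # across the top, with the output file along the bottom --
                 # that's what the "wincmd J" in the args accomplishes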

Screen and Vim with 256 colors

There are a few guides floating around about how to get your terminal up and running with 256 colors, which gives you a lot more options than the traditional 16 terminal colors that are enabled by default. I used several of these guides in coming up with my approach, but I ran into a few problems following their instructions. Eventually, through some trial and error, I found something that works, so I thought I’d share it here.

To get going quickly, you can just use my dotfiles, which I share on github and periodically update.

The key stuff I added for 256 colors is in a couple of commits to that repository.

The key thing that was hard for me to get right was the termcapinfo line in screenrc. The trial and error involved putting my old termcapinfo and the new suggested termcapinfo together in different ways. Apparently, the correct answer was “append the new one to the old one”, and then I was done. So, my screenrc’s termcapinfo line now looks like this:

termcapinfo xterm 'is=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;4;6l:Co#256:AF=\E[38;5;%dm:AB=\E[48;5;%dm:tc=screen:'

If anyone out there understands what in the world is going on there, please enlighten us in the comments!
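
In the meantime, here’s my rough (and possibly wrong) reading of the pieces, as screenrc-style comments:

# termcapinfo xterm '...'  -- add these capabilities when the outer terminal is xterm
#   is=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;4;6l
#                    init string: reset the scroll region and attributes,
#                    clear the screen, home the cursor, set/reset a few modes
#   Co#256           advertise 256 colors
#   AF=\E[38;5;%dm   escape sequence for setting foreground color %d (256-color SGR)
#   AB=\E[48;5;%dm   escape sequence for setting background color %d
#   tc=screen        pull any remaining capabilities from the "screen" entry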

More Blog Optimization

About four years ago, I did some optimization on the blog, drastically speeding up load times. Since then, I’ve made some more changes that helped even more. I’ll outline them here.

1/ I installed nginx and started using it as my primary webserver instead of apache httpd. Apache is great and easy to set up, but nginx is basically as easy these days and performs much better, from what I’ve read and experienced.

2/ I installed php-fpm. One downside of nginx compared to apache is that it doesn’t have a super-easy-to-install mod_php-like plugin, so I needed a standalone PHP interpreter. I hooked up nginx to php-fpm following steps similar to these (I don’t remember which guide I actually consulted).
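
The nginx side of that hookup boils down to a fastcgi block along these lines (the socket path is Ubuntu’s php5-fpm default, so yours may differ):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # hand PHP requests to the php-fpm pool listening on its unix socket
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}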

3/ I stopped using Memcache. Blasphemous, since I’m on the Memcache Team at Facebook, right? Not really. My blog runs on a single server, not across thousands. Memcached is high performance in a distributed system, but keeping everything within a process (eg, the php process) is even better. Instead, I’m now using an exclusively APC-based cache that is working better than the memcache-based cache.

These days, pages seem to generate in about a quarter of a second, which is pretty hard to beat. The next step I’ll take will probably be installing hhvm, which recently got fastcgi support added. Maybe tenth-of-a-second load times are possible?

Upgrading from Ubuntu 9.04

Among other things, I’ve let minimus go far too long without upgrades. It’s still running Ubuntu 9.04, which, while working, is old enough to be unsupported, especially since it’s not a long-term support (LTS) release.

Upgrading from an unsupported release is, unfortunately, not officially supported, but some intrepid souls have figured out how to make it work anyway. Using answers from this thread on AskUbuntu, I have been able to get the process started. Of course, it remains to be determined how this will finish.
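
The gist of the workaround, roughly as I understand it from that thread, is to point apt at the archive for end-of-life releases and then run the usual upgrade tooling, one release at a time:

# /etc/apt/sources.list -- swap the dead mirrors for the EOL archive, e.g.:
#   deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
sudo apt-get update && sudo apt-get dist-upgrade
sudo do-release-upgrade    # then rinse and repeat for each successive release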

Sharks or glory lie ahead.

(xkcd: Success)

A Glorious Hack

Today, a buddy mentioned that his blog wasn’t loading. His blog was hosted on one of the servers behind silverfir.net, minimus. It’s the one with the really interesting setup: Windows Server running Ubuntu Linux in a VM.

Luckily, I remembered how to log into the Windows server via remote desktop, and by poking around I discovered that something had gone wonky with the VM image. A reboot plus a disk scan didn’t fix the issue, so I created a new VM, mounted the same disk images, and things seemed to start working reasonably well again.

Except that the port forwarding wasn’t working. Minimus is NAT’ed behind a very high-speed residential line, and the dd-wrt router it’s behind forwards a few relevant ports to the right place: 22 and 80 come to mind. Somehow, perhaps because of a new (virtual) MAC address, the forwarding wasn’t working any more.

I wanted to get the sites back online, but I didn’t have access to the router’s admin interface, so I came up with a glorious hack. Here’s what I did:

(1) ssh with port forwarding from minimus to nexus to forward port 5050 on nexus to port 80 on minimus: ssh -R 5050:localhost:80 nexus.silverfir.net
(2) on nexus, create an nginx config that forwards all the relevant sites to port 5050 (sketched below)
(3) point the dns entries for all the relevant sites to nexus instead of minimus
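
The nginx piece in step (2) is just a catch-all reverse proxy into the tunnel — roughly this, with the server_name being illustrative:

server {
    listen 80;
    server_name blog.silverfir.net;    # one of these per relevant site
    location / {
        # send everything into the ssh tunnel from step (1)
        proxy_pass http://127.0.0.1:5050;
        proxy_set_header Host $host;
    }
}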

It is working surprisingly well, but of course this is a super fragile state where one connection dropping will mean all the sites become unavailable again. If you care much about content hosted on minimus, you should probably take this chance to back it up.