Upgrade beyond Docker 1.12 on CentOS 7

If you run a CentOS 7 server, you probably first picked Docker from a specialized CBS repository from Project Atomic, “virt7”, or from EPEL.

Docker 1.13 was released a while ago. Yet, these repositories don’t offer any upgrade beyond 1.12, only fixes backported to the 1.12 branch.

You need to get rid of the old packages, then install a specific repository (the community docker-ce one, or docker-ee if you have a subscription) and a new docker-ce/docker-ee package. The procedure is described at https://docs.docker.com/engine/installation/linux/centos/. A Salt state for that is provided below.

During this upgrade, your containers will keep running: the Docker engine is able to pick up resources already launched and containerized, and manage them.

Upgrade from an old Docker version prior to 1.10

The image data format has been changed.

Whenever it’s convenient, destroy and recreate every container. For the few containers you can’t, use the migration tool to minimize downtime. If you don’t, the migration will be done automatically at container start time, and it will take a while.

Runtime error

If you come from a Project Atomic installation, you currently have docker-runc as the engine name. But for recent versions of Docker, the engine is simply called runc.

You therefore need to replace it in the hostconfig.json files.

From a Docker host with GNU sed available, you can do an inline replace:
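A sketch of such a replace, shown here on a scratch copy so you can try it safely; on a real host, point sed at /var/lib/docker/containers/*/hostconfig.json (default data root assumed) with the daemon stopped:

```shell
# Demo on a scratch copy; on a real host, target
# /var/lib/docker/containers/*/hostconfig.json with the daemon stopped.
mkdir -p /tmp/docker-demo/containers/abc123
printf '{"Runtime":"docker-runc"}' > /tmp/docker-demo/containers/abc123/hostconfig.json

# The actual inline replace:
sed -i 's/docker-runc/runc/g' /tmp/docker-demo/containers/*/hostconfig.json

cat /tmp/docker-demo/containers/abc123/hostconfig.json
```

The same sed invocation against the real hostconfig.json files does the job.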

This fixes the following error:

docker-runc not installed on system

Salt state

If you use SaltStack, you can use the following state.

This state directly downloads the docker-ce.repo file from the repository, to avoid translating it into Salt pkgrepo states.
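A sketch of such a state, under the stated assumptions: state IDs and the exact list of old packages are illustrative; the repo URL is the one given in the Docker documentation:

```yaml
old_docker:
  pkg.removed:
    - pkgs:
      - docker
      - docker-common
      - docker-selinux

docker_ce_repo:
  cmd.run:
    - name: curl -sSo /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
    - creates: /etc/yum.repos.d/docker-ce.repo

docker_ce:
  pkg.installed:
    - name: docker-ce
    - require:
      - cmd: docker_ce_repo
```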

FOSDEM PGP Key signing party FAQ

Each year, FOSDEM organizes one of the largest key signing events for PGP keys. So when we come back from a key signing party, what should we do?

Here is a FAQ with some useful notes about how I sign keys.

Sign other keys

Bad practice: don’t upload keys you’ve just signed to a PGP key server

At the event, you checked the key fingerprints and an ID document. But you also want to verify the email address, to be sure the mail belongs to the user and not to a namesake.

So don’t upload keys to a PGP server; instead, send the signature for ONE mail to THAT mail address. Luckily, some software automates the process and does exactly that.


Caff will automate the signing and sending process.

You can follow instructions published on the Debian wiki.

Basically, it works in four steps:

  1. create a ~/.caffrc file with at least <code>$CONFIG{'owner'}</code> and <code>$CONFIG{'email'}</code>
  2. run caff <fingerprints of the keys you verified>
  3. check your mail for issues like messages rejected as spam, non-existent mailboxes, etc.
  4. take a highlighter and mark each key once its signature has been sent
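A minimal ~/.caffrc could look like this (the file is Perl; names and addresses are examples, and you’ll often also want to set your own key id):

```perl
$CONFIG{'owner'} = 'Jane Doe';
$CONFIG{'email'} = 'jane@example.org';
$CONFIG{'keyid'} = [ qw{0123456789ABCDEF} ];    # your signing key, example value
```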

What if some keys aren’t fetchable on the public servers?

You can ask caff to fetch the keys from the local GnuPG keyring. For that, download the FOSDEM event keyring, then import the keys you want:
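A sketch, assuming the event keyring was saved as ksp-fosdem.txt (file name illustrative):

```
gpg --import ksp-fosdem.txt
caff --keys-from-gnupg <fingerprints of the keys to sign>
```

The --keys-from-gnupg option tells caff to use the local keyring instead of querying the public key servers.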

You can then ask caff to fetch them locally. For me, it was the following keys:

Other software

Some key signing participants use another piece of software: PIUS.

The software claims to be able to detect signed keys in a mailbox, which is useful for the next step.

Don’t expect your nice message to be read

As you encrypt the message with the recipient’s PGP key, they will have to make an effort to decrypt it. Contributors using PGP to sign releases, VCS tags or commits don’t always use PGP to read and write mail. So, guess what they could do with your message if their mail client doesn’t have access to the key? gpg -d | gpg --import. Your message will thus never be read in clear text.

Publish your signed keys

Now that you’ve signed the keys from the other participants, you want to publish the signed copies of your key that you’ve received.

When your mail client supports GPG

If your mail client handles this, or if you use PIUS, it will allow you to import the keys into GPG.

Manually import the signed keys

This script will ask you Next key?
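A reconstruction sketch of such a helper (the function name and the loop structure are mine; publishing with gpg --send-keys is kept as a separate manual step):

```shell
# Ask for a key, decrypt the pasted PGP MESSAGE block (finish with
# Ctrl + D), import the signatures it contains, and loop until gpg
# receives an empty input.
import_keys() {
    while printf 'Next key? '; do
        gpg --decrypt | gpg --import || break
    done
}
```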

You copy/paste the PGP block (between -----BEGIN PGP MESSAGE----- and -----END PGP MESSAGE-----), then you save with Ctrl + D.

It doesn’t matter if you’ve added a line after the END line; gpg stops parsing there.

GPG will import the key and publish it. Publish to a responsive server, not pgp.mit.edu; that will ease checks.

You have two ways to know each signature has been successfully sent.

First, check the output of gpg --import:
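The output should mention a new signature, along these lines (key id and uid are examples):

```
gpg: key 0123456789ABCDEF: "Jane Doe <jane@example.org>" 1 new signature
gpg: Total number processed: 1
gpg:         new signatures: 1
```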

If instead you read this, you’ve already published this key:
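In that case, gpg reports the key as unchanged, something like (key id and uid are examples):

```
gpg: key 0123456789ABCDEF: "Jane Doe <jane@example.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
```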

The second way to check is on the web view of the server.

For example, if you use the server noted above, search for your fingerprint there.

Stay on the page with your signatures, and when you have a doubt, you can refresh it.

Tag the mails as done

There are a lot of mails, as there are a lot of participants. So tagging mails as processed is useful to know what has been handled and what hasn’t.

A dedicated IMAP folder is nice, or any label/color your client allows.

Alternatively, take a highlighter and annotate your paper list.

Deploy a darkbot or a simple generic service with SaltStack (part 2)

This is part 2 of our service deployment walkthrough.
See also: Part 1 — Service accounts, sudo capabilities

In the first part, we’ve seen how to create a user account for the service, a group to put users with access to the service into, and sudo capabilities to allow full control of the service account, plus some control as root to interact with systemd or another service manager.

Deploy an application through a package

If your application is packaged, or you package it yourself (something we heartily recommend), you can simply use the pkg.installed state.

For example, you can deploy with this state configuration:
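A sketch, installing a hypothetical darkbot package (state ID and package name are illustrative):

```yaml
odderon:
  pkg.installed:
    - pkgs:
      - darkbot
```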

If you only have one application to install, you can omit pkgs; the state name will then be used:
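The shorthand then looks like this (package name illustrative):

```yaml
darkbot:
  pkg.installed
```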

If you want the latest version to be installed each time you run the state again, you can instead use pkg.latest:
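The same sketch with pkg.latest:

```yaml
darkbot:
  pkg.latest
```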

Salt can then take care of your upgrade process.

Deploy an application by fetching and building it

In this sample, we’ll fetch the source code from the latest version of the production branch of a Git repository cloned at /usr/local/src/darkbot, and we’ll install the software to /opt/odderon. Using /opt allows us to perform the installation process as a non-privileged user.

Fetch the source code

Sometimes you want a fetch, build, install workflow, to have better control over the compilation. Docker image creators tend to like automating build processes this way.

For that, you need two things:

  1. A directory where to clone the software
  2. To actually clone the repository

The same Salt state can call several functions, here one from file to create a directory (file.directory) and one from git to clone the repository (git.latest):
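A sketch of such a state; the repository URL is a placeholder, and the user and group are those of part 1:

```yaml
odderon_source:
  file.directory:
    - name: /usr/local/src/darkbot
    - user: odderon
    - group: nasqueron-irc
    - dir_mode: 775
  git.latest:
    - name: https://example.org/repos/darkbot.git
    - rev: production
    - target: /usr/local/src/darkbot
    - user: odderon
```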

We’ve reused the user and group created in the previous part.

To clone the repository, we recommend using a source you can trust (e.g. PHP provides GPG-signed packages) or, better, a source you control.

Note it’s currently not possible to call two functions from the same module in one state; e.g. two git or two file functions wouldn’t work.

If you automate the upgrade process (something best done only if your CI/CD infrastructure tests the Salt deployment), provide your deployers a quick way to stop the update process.

For example, you can provide a lock file in the application directory (here /opt/odderon):
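A sketch using the unless argument (state ID and build command are illustrative):

```yaml
odderon_build:
  cmd.run:
    - name: ./build.sh
    - cwd: /usr/local/src/darkbot
    - unless: test -f /opt/odderon/LOCKED
```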

If the file exists, the state will be skipped. So it’s as simple as touch /opt/odderon/LOCKED to pause deployment, and rm /opt/odderon/LOCKED to resume it. SaltStack has good documentation about requisites, which aren’t always intuitive.


Let’s start with a simple case where you only have one command to run to configure, compile and install, for example here: ./build.sh --with-sleep=0 --with-add=0 --with-del=0 --with-random=0 --destdir=/opt/odderon.

For that, you only need cmd.run:
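A sketch, with an illustrative state ID:

```yaml
odderon_build:
  cmd.run:
    - name: ./build.sh --with-sleep=0 --with-add=0 --with-del=0 --with-random=0 --destdir=/opt/odderon
    - cwd: /usr/local/src/darkbot
    - runas: odderon
```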

The cwd parameter allows changing the working directory (cd /usr… ; ./build.sh…) and runas runs the command as another user than root.

If you don’t have such script available, just create it.

Salt allows deploying a custom script and running it in one step with the cmd.script function:
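A sketch with cmd.script (state ID and arguments illustrative):

```yaml
odderon_build:
  cmd.script:
    - source: salt://roles/shellserver/odderon/files/build.sh
    - args: --destdir=/opt/odderon
    - cwd: /usr/local/src/darkbot
    - runas: odderon
```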

The roles/shellserver/odderon/files/build.sh file will be copied to the server and run; and yes, you can also pass arguments to the script, like with cmd.run.

This process isn’t useful if the Git clone failed, so we can require that another state succeeded:
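For example, assuming the git.latest clone state is named odderon_source (name illustrative):

```yaml
odderon_build:
  cmd.script:
    - source: salt://roles/shellserver/odderon/files/build.sh
    - runas: odderon
    - require:
      - git: odderon_source
```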

The same process can be used to provide the service configuration. For example, to copy a hierarchy of files from your Salt root directory to the server:
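A sketch with file.recurse (state ID and paths illustrative):

```yaml
odderon_config:
  file.recurse:
    - name: /opt/odderon/etc
    - source: salt://roles/shellserver/odderon/files/etc
    - user: odderon
    - group: nasqueron-irc
    - dir_mode: 770
    - file_mode: 640
```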

The 770 directory mode allows the service and the deployers to access it; files are read-only for deployers by default, to encourage using Salt to edit them.

By the way, you probably also want a warning like this:
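For example, a comment at the top of each managed file (the wording is a suggestion):

```
#
# This file is managed by our Salt configuration.
# If you edit it directly on the server, your changes will be lost:
# edit it in the operations repository instead.
#
```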

It’s then clear, when you open a file on the production server, that it is managed.

Upgrade the code

So, what can you do with that?

First, it can provision your service on a new server, but you can also use it for updates. For example, if you keep your states in roles/shellserver/odderon/init.sls, you can upgrade with:

salt eglide state.apply roles/shellserver/odderon

It will then upgrade your package or fetch the latest Git commit and recompile your code.

If you compile manually, note the build script doesn’t especially need a clean step. For example, if you use a Makefile, make will detect your file has been modified (looking at the timestamp) and rebuild it.

Create a service

Thanks to Sandlayth for having prepared the Salt configuration for this part.

If your application doesn’t provide a service, it’s valuable to create one.

Two steps are needed: deploy the service file, then ensure the service runs. For systemd, a third step is needed to force a configuration reload.

The first part is service-manager dependent, the second part is service-manager agnostic.
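For systemd, deploying the unit could be sketched like this (unit path, state ID and source are assumptions):

```yaml
odderon_unit:
  file.managed:
    - name: /etc/systemd/system/odderon.service
    - source: salt://roles/shellserver/odderon/files/odderon.service
```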

We told you there is an extra step for systemd. This one is tricky: as this is a command to refresh the manager’s knowledge of an application, it’s not something we can declare and check, so it’s not available as a state. But SaltStack also allows running commands, and provides modules for that. For example, you can do salt eglide service.force_reload odderon (see the systemd module doc; the right-hand menu offers the same for other systems).

Parameters for the module you run are put directly after the name. But here service.force_reload also has a name parameter. To disambiguate, you prepend m_ and get m_name.

There is currently work in progress in Salt to abstract services further.

Finally, whatever the service manager, you want to ensure the service runs:
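A sketch:

```yaml
odderon:
  service.running:
    - enable: True
```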

The enable parameter is useful if you use a systemd-like service manager, as the service must be explicitly enabled to be launched automatically at server start time.

Trending colors for 2017

Trending colors in 2017 are nostalgic, cold, bold and audacious at the same time. They colonize the dark and valorize the night.

Browse the trending colors for 2017 content

The following sources offer interesting hues and intent stories:

Create a palette for Nasqueron

For Nasqueron, I prepared a palette inspired by this story, with some input from the other sources linked above. This is especially welcome, as we need a small site to explain the community and launch it.

Colors identified as trending in 2017


At the paint brand Sherwin-Williams, one of the forecasts is called “Noir” and is described as:

It’s among our most precious commodities: night. We’re craving a refuge from urban streetlights and glowing screens, space to turn our gaze inward and recharge the spirit. Mindful melancholy is fueling a new romanticism marked by medieval patterns, revived customs and bittersweet beauty. The Dutch masters knew the secret: dark hues set a dramatic stage for sensuous luster. This palette is rich with vine-ripe fruits, Nordic blues, moody neutrals and golden yellows.

Colors used

This palette offers 5 colors:

  • #44484D (cyberspace gray)
  • #2B3441 (Anchors Aweigh blue)
  • #5587A2 (Niagara blue — Pantone 17-4123)
  • #F6D258 (Primrose yellow — Pantone 13-0755)
  • #67947D (unidentified classy green from this photo)

Current prototypes

This palette is currently under evaluation for the following prototypes:

Where to find the palette?

Browse the palette on:

Deploy a darkbot or a simple generic service with SaltStack (part 1)

SaltStack is one of the main configuration-as-code tools. Written in Python, it uses Jinja templates. It can easily be used to deploy a service, like here a C application.

In this post, we’ll walk through the different steps to deploy the service and allow it to be managed by relevant trusted users, without root access.


A darkbot is a talking IRC bot developed in 1996 by Jason Hamilton. It’s basically a knowledge base, with entries allowing for wildcards, and replies.

At Nasqueron, we use a darkbot to store factoids about server configurations or development projects. It’s hosted on Eglide, a free culture shell hosting service.

How does a simple generic service work?

  • The service is a non-forking application
  • The application runs under a dedicated user
  • A service manager like systemd can take care of the start/stop operations
  • Some users should have capabilities to maintain this application, without root access


In this first post of the series, we’ll take care of the user/group/sudo logic.

Step 1 — Create a group account

First, you need to create a group and include the users allowed to control the service.

As you’ll have several of these groups (one per team, or one per family of services), something generic to create groups is needed.

As SaltStack uses Jinja2 templates, we can provide arbitrary data structures and iterate with a Python for loop:
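A sketch of such a pillar file (GIDs, descriptions and member names are illustrative):

```yaml
shellgroups:
  nasqueron-irc:
    gid: 10000
    description: Operators of the Nasqueron IRC services
    members:
      - alice
      - bob
  another-group:
    gid: 10001
    description: Another team
    members:
      - carol
```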

This defines a pillar value, shellgroups, as a dictionary containing two groups, nasqueron-irc and another-group. Each group is itself a data structure with key/value pairs (gid, description, members).

The property keys are totally arbitrary. As UNIX groups don’t have a GECOS-like field or any other way to describe them, the description here won’t be exploited. If we were deploying on a Windows machine, we’d be able to use it.

We now need to read this pillar value, create the group, and fill in the members:
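A sketch of such a state, iterating over the shellgroups pillar described above:

```yaml
{% for group, properties in pillar.get('shellgroups', {}).iteritems() %}
group_{{ group }}:
  group.present:
    - name: {{ group }}
    - system: True
    - gid: {{ properties['gid'] }}
    - members: {{ properties['members'] }}
{% endfor %}
```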

group.present will create the group if needed, and check it has the properties we want to control (here name, system, gid and members).

iteritems is a method of Python 2 dictionaries to get an iterator.

We don’t need to iterate over members; group.present is happy with the full list.

Notice we name our state group_{{group}}. We could have used {{group}} directly, and simplified like this:
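The simplified version would look like this:

```yaml
{% for group, properties in pillar.get('shellgroups', {}).iteritems() %}
{{ group }}:
  group.present:
    - system: True
    - gid: {{ properties['gid'] }}
    - members: {{ properties['members'] }}
{% endfor %}
```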

This sample skips name, as the state name (here {{group}}) is used when that field is omitted. But each state name must be unique, so to avoid any conflict, we prefer to prepend it with group_. It eases future maintenance, avoiding having to track down a strange bug born of state naming conflicts.

The advantage of this technique is that you can centralize the groups in one reference file and track who has access to what. Configuration as code is documentation per se, always up to date, as the repo is documentation first.

Step 2 — Create a user account

To create regular user accounts, we use a solution similar to the one for groups, also using a generic data pillar file.

For a service, it’s more convenient to store the account directly with the other states of the service:
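A sketch of the user state (UID, GID, shell and home are illustrative choices):

```yaml
odderon:
  user.present:
    - fullname: Odderon IRC bot
    - uid: 431
    - gid: 431        # matching group assumed to exist
    - home: /opt/odderon
    - shell: /sbin/nologin
```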

Choose where you wish to deploy applications: /opt, /srv or /usr/local. If the application doesn’t have a hierarchy of folders to store its configuration, etc., but uses the UNIX hierarchy, you can use /nonexistent as the home directory.

Step 3 — sudo capabilities

sudo doesn’t only allow running commands as root; it also allows running commands under another arbitrary user account.

Here we’ll allow members of the nasqueron-irc group to control the odderon user account.

Coupled with SSH keys, it means you don’t need to set a password for this account.

That’s as simple as deploying a file in the right place, i.e. the /etc/sudoers.d folder. Software increasingly uses flexible configuration directories; it’s convenient here to compose services instead of managing one centralized file.

If you also deploy on other OSes like FreeBSD, think to adjust the paths:
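A sketch of the state, picking the path by OS grain (state ID illustrative):

```yaml
odderon_sudoers:
  file.managed:
{% if grains['os'] == 'FreeBSD' %}
    - name: /usr/local/etc/sudoers.d/odderon
{% else %}
    - name: /etc/sudoers.d/odderon
{% endif %}
    - source: salt://roles/shellserver/odderon/files/odderon.sudoers
    - template: jinja
    - mode: 440
```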

What are we deploying? We instruct Salt through the source parameter to copy roles/shellserver/odderon/files/odderon.sudoers into …etc/sudoers.d/odderon (name). The template parameter allows using a template format; Salt will take care of passing the file through the template engine before copying it.

Capabilities could be as simple as “any user member of the nasqueron-irc group can run any command under the odderon account”:

%nasqueron-irc ALL=(odderon) NOPASSWD: ALL

That allows running commands with this syntax: sudo -u odderon whoami.

But here we want to run a service, and services are generally handled by a service manager, like systemd, running as root:
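For example, limited to the systemctl subcommands for this unit (paths as on CentOS 7; adapt to your system):

```
%nasqueron-irc ALL=(root) NOPASSWD: /usr/bin/systemctl start odderon, /usr/bin/systemctl stop odderon, /usr/bin/systemctl restart odderon, /usr/bin/systemctl status odderon
```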

It’s important to provide the exact and full command to run, to avoid unauthorized privilege escalation. If an account belonging to this group is compromised, all it could do as root is restart the bot, not every service on the system.

Document UIDs and GIDs

There is a drawback to not centralizing the groups and users in one file: we need to avoid UID and GID conflicts and track what we assign. We suggest creating UIDs and GIDs files in your repository; the FreeBSD ports collection does that with success.

Next steps

Now we have a user account ready to run the service, and sudo capabilities allowing a team to control it. In a next post, we’ll see how to ship the software, how to provide a systemd unit and how to provide the configuration, including the secrets. We’ll also see the different deployment strategies, through packages or by fetching and building the application locally.

Go to the part 2 of the walkthrough

A Laravel command to run a SQL console à la Phabricator

Phabricator offers a bin/storage shell command. It allows running the mysql client with the options from the application configuration.

That’s useful in a modern distributed environment, where it’s not always straightforward to know which server contains which databases, and which credentials to use. In a Docker setup, for example, that could be a linked container or a dedicated bare-metal instance.

As Laravel offers a standardised configuration, we can provide such a command not only for MySQL but also for PostgreSQL, SQLite and SQL Server.

The only requirement is to get the relevant client installed on the machine.

To implement it in your application, you can drop DatabaseShell.php into your app/Console/Commands/ folder (last commit at post writing time).

To run it, use php artisan db:shell.

This post was last updated on 2017-02-09.

MediaWiki now accepts out of the box RDFa and Microdata semantic markup

Semantic web

Since MediaWiki 1.16, the software has supported — as an option — RDFa and Microdata HTML semantic attributes.

This commit, integrated into the next MediaWiki release, 1.27, will embrace the semantic Web more by making these attributes always available.

If you wish to use this today, it is already available in our Git repository.

This also slightly reduces the cyclomatic complexity of our parser sanitizer code.

Microdata support will thus be available on Wikipedia on Thursday, 24 March 2016 and on other projects on Wednesday, 23 March 2016.

If you already use RDFa today on MediaWiki

First, we would be happy to get feedback, as we’re currently considering an update to RDFa 1.1 and we would like to know who is still in favour of keeping RDFa 1.0.

Secondly, there is a small configuration effort to make: open the source code of your wiki and look at the <html> tag.

Copy the content of the version attribute: you should see something like <html version="HTML+RDFa 1.0">.

Now, edit InitialiseSettings.php (or your wiki farm configuration) and set the $wgHtml5Version setting. For example, here this would be:
$wgHtml5Version = "HTML+RDFa 1.0";

For the microdata, there is nothing special to do.


Let’s Encrypt lifts per-domain quota for renewals

There is currently a limitation on how many certificates you can register: a quota of 5 per domain per week.

The same limitation applied to renewals, which would have forced administrators to maintain a schedule.

This is not the case anymore: if a certificate has already been generated for a specific FQDN, you can renew it regardless of your quota use. Thanks to Roland Bracewell Shoemaker for this change, which solves this issue.