FOSDEM PGP Key signing party FAQ

Each year, FOSDEM organizes one of the largest key signing events for PGP keys. So what should you do when you come back from a key signing party?

Here is a FAQ with some useful notes about how I sign keys.

Sign other keys

Bad practice: don’t upload keys you’ve just signed to a key server

At the event, you checked the key fingerprints and an ID document. But you also want to verify the email address, to be sure it belongs to the key owner and not to a namesake.

So don’t upload keys to a key server; instead, send each signature for ONE mail address to THAT mail address. Luckily, some software automates this process.

Caff

Caff automates the signing and sending process.

You can follow instructions published on the Debian wiki.

Basically, it works in four steps:

  1. create a ~/.caffrc file with at least <code>$CONFIG{'owner'}</code> and <code>$CONFIG{'email'}</code>
  2. run caff <fingerprint of each key you verified>
  3. check your mail for issues like messages rejected as spam, non-existent mailboxes, etc.
  4. take a highlighter and mark each key whose signature has been sent
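A minimal ~/.caffrc sketch (the name and address are placeholders, replace them with yours):

```perl
# Minimal ~/.caffrc; caff reads this Perl fragment at startup
$CONFIG{'owner'} = 'Jane Doe';
$CONFIG{'email'} = 'jane@example.org';
```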

What if some keys aren’t fetchable on the public servers?

You can ask caff to fetch the keys from the local GnuPG keyring. For that, download the FOSDEM event keyring, then import the keys you want:
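A sketch of the import step (the file name is a placeholder; use the keyring actually published for the event):

```shell
# Import the event keyring into your local GnuPG keyring
gpg --import fosdem-keysigning-keyring.asc
```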

You can then ask caff to fetch them locally. For me, it was the following keys:
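The fingerprints below are placeholders; caff’s --keys-from-gpg option tells it to use the local keyring instead of downloading:

```shell
caff --keys-from-gpg 0123456789ABCDEF0123456789ABCDEF01234567
```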

Other software

Some key signing participants use another piece of software: PIUS.

The software claims to be able to detect signed keys in a mailbox, useful for the next step.

Don’t expect your nice message will be read

As you encrypt the message with the recipient’s PGP key, they will have to make an effort to decrypt it. Contributors who use PGP to sign releases or VCS tags and commits don’t always use PGP to read and write mail. So, guess what they could do with your message if their mail client doesn’t have access to the key? gpg -d | gpg --import. Your message will thus never be read as clear text.

Publish your signed keys

Now you’ve signed the other participants’ keys, you’ll want to publish your own key with the new signatures you’ve received.

When your mail client supports GPG

If your mail client handles this, or if you use PIUS, it will let you import the keys into GPG.

Manually import the signed keys

This script will ask you Next key?
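The original script isn’t preserved in this archive; a minimal sketch of such a helper could look like this (the keyserver and key ID are placeholders):

```shell
#!/bin/sh
# Repeatedly import pasted signed-key blocks, then publish the key.
# Interrupt with CTRL + C when done.
while true; do
    echo "Next key?"
    gpg --decrypt | gpg --import   # paste the PGP MESSAGE block, end input with CTRL + D
    gpg --keyserver hkps://keys.example.org --send-keys 0123456789ABCDEF
done
```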

You copy/paste the PGP block (between -----BEGIN PGP MESSAGE----- and -----END PGP MESSAGE-----). Then you end the input with CTRL + D.

It doesn’t matter if you’ve added a line after the END line: gpg stops parsing there.

GPG will import the key and publish it. Publish to a responsive server rather than pgp.mit.edu; that will ease checks.

You have two ways to verify each signature has been successfully published.

First, check the output of gpg --import:

If instead you read this, you’ve already published this key:

The second way to check is on the web view of the server.

For example if you use the server noted above, search your fingerprint here.

Stay on the page with your signatures, and when you have a doubt, refresh it.

Tag the mails as done

There are as many mails as there are participants, so tagging mails as processed is useful to keep track of what has been handled and what hasn’t.

An IMAP dedicated folder is nice, or any label/color your client allows.

Alternatively, take a highlighter and annotate your paper list.

Deploy a darkbot or a simple generic service with SaltStack (part 2)

This is the part 2 of our service deployment walkthrough.
See also: Part 1 — Service accounts, sudo capabilities

In the first part, we saw how to create a user account for the service, a group for the users with access to the service, and sudo capabilities allowing full control of the service account, plus limited control as root to interact with systemd or another service manager.

Deploy an application through package

If your application is packaged or you package it yourself, something we heartily recommend, you can simply use the pkg.installed state.

For example, you can deploy it with this state configuration:
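A sketch of such a state (the state ID and package name are assumptions; use the package name from your distribution):

```yaml
irc_software:
  pkg.installed:
    - pkgs:
      - darkbot
```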

If you only install one application, you can omit pkgs; the package name will then be taken from the state name:
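The same state, simplified (package name is an assumption):

```yaml
darkbot:
  pkg.installed
```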

If you want to force the latest version to be installed when you run the state again, you can instead use pkg.latest:
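A sketch with pkg.latest (package name is an assumption):

```yaml
darkbot:
  pkg.latest
```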

Salt can then take care of your upgrade process.

Deploy an application fetching and building it

In this sample, we’ll fetch the source code from the latest version of the production branch of a Git repository cloned at /usr/local/src/darkbot, and we’ll install the software to /opt/odderon. Using /opt allows us to perform the installation process as a non-privileged user.

Fetch the source code

Sometimes, you want a fetch, build, install workflow to have better control over the compilation. Docker image creators tend to like automating build processes.

For that, you need two things:

  1. A directory where to clone the software
  2. To actually clone the repository

A single Salt state can call several functions, here one from the file module to create a directory (file.directory) and one from the git module to clone (git.latest):
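A sketch of such a state (the repository URL is hypothetical; the user and group come from this walkthrough):

```yaml
darkbot_source:
  file.directory:
    - name: /usr/local/src/darkbot
    - user: odderon
    - group: nasqueron-irc
    - makedirs: True
  git.latest:
    - name: https://example.org/nasqueron/darkbot.git
    - target: /usr/local/src/darkbot
    - rev: production
    - user: odderon
```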

We reuse the user and group created in the previous part.

To clone the repository, we recommend using a source you can trust (e.g. PHP provides GPG-signed packages) or, better, a source you control.

Note it’s currently not possible to call two functions from the same module in one state, e.g. two git or two file functions wouldn’t work.

If you automate the upgrade process (best done only if your CI/CD infrastructure tests the Salt deployment), provide your deployers a quick way to stop the update process.

For example, you can provide a lock file in the application directory (here /opt/odderon):
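A sketch of such a guard, using the unless requisite (state ID and build command are assumptions):

```yaml
build_darkbot:
  cmd.run:
    - name: ./build.sh
    - cwd: /usr/local/src/darkbot
    - unless: test -f /opt/odderon/LOCKED
```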

If the file exists, the state will be skipped: it’s as simple as touch /opt/odderon/LOCKED to pause deployment and rm /opt/odderon/LOCKED to resume it. SaltStack has good documentation about requisites, which aren’t always intuitive.

Compile

Let’s start with a simple case where you only have one command to write to configure, compile, install, for example here: ./build.sh --with-sleep=0 --with-add=0 --with-del=0 --with-random=0 --destdir=/opt/odderon.

For that, you only need cmd.run:
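A sketch of such a state (the state ID is an assumption; the command and paths come from this walkthrough):

```yaml
build_darkbot:
  cmd.run:
    - name: ./build.sh --with-sleep=0 --with-add=0 --with-del=0 --with-random=0 --destdir=/opt/odderon
    - cwd: /usr/local/src/darkbot
    - runas: odderon
```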

The cwd parameter allows you to change the working directory (like cd /usr… ; ./build.sh…), and runas runs the command as a user other than root.

If you don’t have such script available, just create it.

Salt can deploy a custom script and then run it, in one step, with the cmd.script function:
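A sketch of such a state (the state ID is an assumption; the script path comes from this walkthrough):

```yaml
build_darkbot:
  cmd.script:
    - source: salt://roles/shellserver/odderon/files/build.sh
    - cwd: /usr/local/src/darkbot
    - runas: odderon
```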

The roles/shellserver/odderon/files/build.sh file will be copied to the server and run; and yes, you can also pass arguments to the script, as with cmd.run.

This process isn’t useful if the Git clone failed, so we can require that another state succeeded:
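A sketch of such a requisite (both state IDs are assumptions):

```yaml
build_darkbot:
  cmd.script:
    - source: salt://roles/shellserver/odderon/files/build.sh
    - cwd: /usr/local/src/darkbot
    - runas: odderon
    - require:
      - git: darkbot_source
```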

The same process can be used to provide the service configuration. For example, to copy a hierarchy of files from your Salt root directory to the server:
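A sketch of such a state with file.recurse (state ID and paths are assumptions):

```yaml
odderon_configuration:
  file.recurse:
    - name: /opt/odderon/etc
    - source: salt://roles/shellserver/odderon/files/etc
    - user: odderon
    - group: nasqueron-irc
    - dir_mode: 770
    - file_mode: 644
```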

The 770 directory mode allows the service and the deployers to access it; files are by default read-only for deployers, to encourage editing them through Salt.

By the way, you probably also want a warning like this:
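For example, a comment banner at the top of each managed file (the exact wording is up to you):

```
#
# This file is managed by our Salt rolling deployment.
# If you edit it directly on the server, your changes
# will be overwritten at the next deployment.
#
```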

It’s then clear, when you open a file on the production server, that it’s managed.

Upgrade the code

So, what can you do with that?

First, it can provision your service to a new server, but you can also use it for updates. For example, if your states are in roles/shellserver/odderon/init.sls, you can upgrade with:

salt eglide state.apply roles/shellserver/odderon

It will then upgrade your package or fetch the latest Git commit and recompile your code.

If you compile manually, note the build script doesn’t especially need a clean step. For example, if you use a Makefile, make will detect your file has been modified (by looking at the timestamp) and rebuild accordingly.

Create a service

Thanks to Sandlayth for preparing the Salt configuration for this part.

If your application doesn’t provide a service, it’s valuable to create one.

Two steps are needed: deploy the service file, then ensure the service runs. For systemd, a third step is needed to force a configuration reload.

The first part is service-manager dependent, the second part is service-manager agnostic.
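For systemd, deploying the unit could look like this (the state ID is an assumption; the unit file itself lives in your Salt tree):

```yaml
odderon_unit:
  file.managed:
    - name: /etc/systemd/system/odderon.service
    - source: salt://roles/shellserver/odderon/files/odderon.service
```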

We told you there is an extra step for systemd. This one is tricky: as it is a command to refresh the application’s knowledge of its configuration, it’s not something we can declare and check, so it isn’t available as a state. But SaltStack also allows running commands directly, and provides modules for that. For example, you can run salt eglide service.force_reload odderon (see the systemd module documentation; the right-hand menu offers the same for other service managers).

Parameters for the module you run are put directly after the name. But service.force_reload also has a name parameter of its own. To disambiguate, you prepend m_ and get m_name.
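If you trigger the reload from a state file instead of the command line, the legacy module.run state shows this m_name disambiguation (the state ID is an assumption):

```yaml
refresh_odderon_unit:
  module.run:
    - name: service.force_reload
    - m_name: odderon
```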

Work is currently in progress in Salt to further abstract services.

Finally, whatever the service manager, you want to ensure the service runs:
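A sketch of such a state:

```yaml
odderon:
  service.running:
    - enable: True
```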

The enable parameter is useful with systemd-like service managers, as the service must be explicitly enabled to be launched automatically at boot.

Deploy a darkbot or a simple generic service with SaltStack (part 1)

SaltStack is one of the main configuration-as-code tools. Written in Python, it uses Jinja templates. It can easily be used to deploy a service, like here a C application.

In this post, we’ll walk through the different steps to deploy the service and allow it to be managed by relevant trusted users, without root access.

Darkbot

A darkbot is an IRC talking bot developed in 1996 by Jason Hamilton. It’s basically a knowledge base with entries allowing for wildcards, and replies.

At Nasqueron, we use a darkbot to store factoids about server configurations or development projects. It’s hosted on Eglide, a free culture shell hosting service.

How does a simple generic service work?

  • The service is a non forking application
  • The application runs under a dedicated user account
  • A service manager like systemd can take care of the start/stop operations
  • Some users should have capabilities to maintain this application, without root access

Walkthrough

In this first post of the series, we’ll take care of the user/group/sudo logic.

Step 1 — Create a group account

First, you need to create a group and include the users allowed to control the service.

As you’ll have several of these groups (one per team, or one per family of services), something generic to create groups is needed.

As SaltStack uses Jinja2 templates, we can provide arbitrary data structures and iterate with a Python for loop:
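A sketch of such a pillar file (the GIDs and member names are hypothetical):

```yaml
shellgroups:
  nasqueron-irc:
    gid: 9001
    description: Nasqueron IRC services
    members:
      - alice
      - bob
  another-group:
    gid: 9002
    description: Another team of service maintainers
    members:
      - carol
```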

This defines a pillar value shellgroups, as a dictionary containing two groups, nasqueron-irc and another-group. Each group is itself a data structure with key/value properties (gid, description, members).

The property keys are totally arbitrary. As UNIX groups don’t have a GECOS-like field or any other way to describe them, the description here won’t be exploited. If we deployed on a Windows machine, we’d be able to use it.

We now need to read this pillar value, create the group, fill the members:
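A sketch of such a state file, reading the pillar and creating each group:

```yaml
{% for group, properties in pillar['shellgroups'].iteritems() %}
group_{{ group }}:
  group.present:
    - name: {{ group }}
    - system: True
    - gid: {{ properties['gid'] }}
    - members: {{ properties['members'] }}
{% endfor %}
```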

group.present will create the group if needed, and check it has the properties we want to control (name, system, gid and members here).

iteritems is a method of Python 2 dictionaries to get an iterator (on Python 3, the equivalent is items).

We don’t need to iterate over members; group.present is happy with the full list.

Notice we name our state group_{{group}}. We could have used {{group}} directly, and simplified like this:
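The simplified version would look like this:

```yaml
{% for group, properties in pillar['shellgroups'].iteritems() %}
{{ group }}:
  group.present:
    - system: True
    - gid: {{ properties['gid'] }}
    - members: {{ properties['members'] }}
{% endfor %}
```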

This sample skips name, as the state name (here {{group}}) is used when that field is omitted. But each state name must be unique, so to avoid any conflict, we prefer to prepend group_. This eases future maintenance, avoiding having to track down a queer bug born of state naming conflicts.

The advantage of this technique is that you can centralize the groups in one reference file and track who has access to what. Configuration as code is documentation per se, always up to date, as the repository is documentation first.

Step 2 — Create a user account

To create regular user accounts, we use a solution similar to the one for groups, using a generic data pillar file too.

For a service, it’s more convenient to store it directly with the other states of the service:
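A sketch of the user state (the UID is hypothetical; pick one that is free in your UIDs file):

```yaml
odderon:
  user.present:
    - uid: 9001
    - home: /opt/odderon
    - shell: /bin/sh
    - system: True
```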

Choose where you wish to deploy applications: /opt/, /srv/, /usr/local/. If the application doesn’t have its own hierarchy of folders to store configuration and the like, but uses the unix hierarchy, you can use /nonexistent as the home directory.

Step 3 — sudo capabilities

sudo doesn’t only allow running commands as root; it also allows running commands under another arbitrary user account.

Here we’ll allow members of the nasqueron-irc group to control the odderon user account.

Coupled with SSH keys, it means you don’t need to save a password for it.

That’s as simple as deploying a file in the right place, i.e. the /etc/sudoers.d folder. Software increasingly uses flexible configuration directories; it’s convenient here to compose services instead of managing one centralized file.

If you also deploy on other OSes like FreeBSD, remember to adjust paths:
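A sketch of such a state, switching the sudoers directory on the os grain (the state layout is an assumption):

```yaml
{% if grains['os'] == 'FreeBSD' %}
{% set sudoers_dir = '/usr/local/etc/sudoers.d' %}
{% else %}
{% set sudoers_dir = '/etc/sudoers.d' %}
{% endif %}

{{ sudoers_dir }}/odderon:
  file.managed:
    - source: salt://roles/shellserver/odderon/files/odderon.sudoers
    - template: jinja
    - mode: 440
```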

What are we deploying? We instruct Salt through the source parameter to copy roles/shellserver/odderon/files/odderon.sudoers into …etc/sudoers.d/odderon (name). The template parameter allows using a template format; Salt will take care of passing the file through the template engine before copying it.

Capabilities could be as simple as “any user member of the nasqueron-irc group can run any command under odderon account”:

%nasqueron-irc ALL=(odderon) NOPASSWD: ALL

That allows running commands with this syntax: sudo -u odderon whoami.

But here we want to run a service, and services are generally handled by a service manager like systemd, running as root:
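A sketch of such a sudoers rule (adjust the systemctl path to your system):

```
%nasqueron-irc ALL=(root) NOPASSWD: /bin/systemctl start odderon, /bin/systemctl stop odderon, /bin/systemctl restart odderon
```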

It’s important to provide the exact and full command to run, to avoid unauthorized privilege escalation. If an account belonging to this group is compromised, all it could do as root is restart the bot, not every service on the system.

Document UIDs and GIDs

There is a drawback to not centralizing the groups and users in one file: we need to avoid UID and GID conflicts and track what we assign. We suggest creating UIDs and GIDs files in your repository; the FreeBSD ports collection does that with success.

Next steps

Now that we have a user account ready to run the service and sudo capabilities allowing a team to control it, we’ll see in the next posts how to ship the software, how to provide a systemd unit and how to provide the configuration, including secrets. We’ll also see different deployment strategies: through packages, or by fetching and building the application locally.

Go to the part 2 of the walkthrough

Follow-up: a BASH script to split a MySQL dump by database

In this post, we’ve seen how to split a large MySQL dump by database.

I’ve been asked for a script to automate the process. Here it is.

Note: On FreeBSD, replace AWK=awk by AWK=gawk and install lang/gawk port, so we can use GNU awk.

Split a large SQL dump by database

You created a MySQL backup of a large server installation with dozens of databases and wish to get the schema and data for one of them. You now have to deal with a file of hundreds of MB in a text editor. How convenient.

Split a dump into several files

You can quickly split this dump into several files (one per database) with awk or csplit. With GNU awk (gawk on FreeBSD), this is a one-liner:
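The original one-liner isn’t preserved here; a working sketch, relying on the `-- Current Database:` marker lines mysqldump writes before each database, looks like this:

```shell
# Demo input: a tiny two-database dump (real dumps come from mysqldump --all-databases)
cat > dump.sql <<'EOF'
-- Current Database: `blog`
CREATE TABLE posts (id INT);
-- Current Database: `shop`
CREATE TABLE items (id INT);
EOF

# Start a new numbered output file at each "-- Current Database:" marker
awk '/^-- Current Database: /{n++} n{print > ("db_" n ".sql")}' dump.sql
```

Everything before the first marker (the dump header) is skipped, and each db_N.sql file keeps its own marker line.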

Get database.sql files

To rename these files with actual database names, the following bash script could be useful. It assumes you don’t have the main dump in the same directory.
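The original script isn’t preserved here; a sketch extracting each database name from the marker line could look like this (file names follow the split step above):

```shell
# Demo files as produced by the split step (contents are placeholders)
printf -- '-- Current Database: `blog`\nCREATE TABLE posts (id INT);\n' > db_1.sql
printf -- '-- Current Database: `shop`\nCREATE TABLE items (id INT);\n' > db_2.sql

# Rename each numbered file after the database it contains:
# the name is the text between backticks on the marker line.
for file in db_*.sql; do
    name=$(awk -F'`' '/^-- Current Database: /{print $2; exit}' "$file")
    [ -n "$name" ] && mv "$file" "$name.sql"
done
```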

Chromebook: run a SSH server on Chrome OS

In this post, we’ll cover how to run a SSH server directly on Chrome OS (i.e. not in a Crouton chroot).

One of the first things I do on any machine (FreeBSD, Linux, Mac OS X or Windows) is to install, run and configure the SSH server. It’s always convenient to be able to scp from and to a computer, or to log in remotely. Even for workstations.

Chrome OS is a reasonable, if minimal, standard Linux installation offering access to iptables and sshd (and openvpn, by the way), so it’s easy to run sshd and allow incoming traffic on port 22.

Setup

1. If it’s not already done, switch your chromebook to developer mode, so you can execute commands as root.

Back up your data, as you’ll wipe your current Chrome OS partitions.

On most recent machines, restart in recovery mode (ESC + REFRESH + POWER), then when it boots, CTRL + D to enter the developer mode.

Hit enter to turn off OS verification. It will then restart. From now on, you’ll need to press CTRL + D at every boot.

It will then wipe your chromebook and reinstall a fresh Chrome OS version. The process takes 6 to 7 minutes.

Older machines require a hardware switch, generally located below the battery. Be gentle with this switch, it breaks easily.

2. Launch a console with the shortcut ctrl + alt + t, then write shell to open a full bash shell (if the shell command isn’t available, you aren’t in developer mode).

Become root with sudo su.

3. Set up SSH host keys:
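The original commands aren’t preserved here; a sketch based on contemporary guides (the stateful-partition path is an assumption and may vary between Chrome OS versions):

```
mkdir -p /mnt/stateful_partition/etc/ssh
ssh-keygen -t rsa -N '' -f /mnt/stateful_partition/etc/ssh/ssh_host_rsa_key
```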

4. Run SSH:
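A sketch, pointing sshd at the host key generated in the previous step (the path is an assumption):

```
/usr/sbin/sshd -h /mnt/stateful_partition/etc/ssh/ssh_host_rsa_key
```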

5. Allow world to connect to port 22:
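A sketch of the iptables rule; note it isn’t persistent across reboots:

```
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```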

6. Add your public keys to the ~chronos/.ssh/authorized_keys file. Authentication by password isn’t available.

7. You’re now able to log in from the world to your chromebook.

Sources

Andrew Sutherland, cr-48 chromium os ssh server, 14 January 2011.

CentOS wiki contributors, IPTables, CentOS wiki.

How to change Phabricator logo

Before summer 2016, Phabricator didn’t provide a quick way to change the logo. In the software’s past, this possibility was offered, but between July 2012 and summer 2016, the logo was a resource embedded into sprites, along with other header graphics.

You can track the issue on Phabricator’s Phabricator.

Meanwhile, until a solution is provided by the product, here is the procedure to update the UI and change the logo.

Continue reading “How to change Phabricator logo”

Install Final Term on Debian

Final Term is a new terminal application, currently under development, written in Vala and built on top of GTK+ 3, Clutter and Mx.

Screenshot of Final Term on Debian Jessie. The bottom bar is from tmux.

While you can install it easily under Ubuntu through a PPA, that’s not the case for every OS. It’s packaged for Fedora and downstream distributions. It’s also available in Linux distributions with alternative package repositories for second-class-citizen packages: in Arch’s AUR, and for Gentoo as a Project SunRise ebuild. These links were valid at writing time, and they don’t really look like permanent URLs.

Let’s see how to install it on Debian.

Continue reading “Install Final Term on Debian”

MediaWiki nginx configuration file

Scenario

  • You have a nginx webserver
  • You have several MediaWiki installations on this server
  • You would like to have a simple and clear configuration

Solution

You want a configuration file you can include in every server {} block where MediaWiki is available.

Implementation

  1. Create an includes subdirectory in your nginx configuration directory (by default, /usr/local/etc/nginx or /etc/nginx).
    This directory can host any configuration block you don’t want to repeat in each server block.
  2. Put mediawiki-root.conf, mediawiki-wiki.conf or your own configuration block in this directory.
  3. In each server block, you can now add the following line:
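For example, for scenario II (the path is relative to the nginx configuration directory):

```nginx
include includes/mediawiki-wiki.conf;
```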

Configuration I – MediaWiki in the root web directory, /article path

This is mediawiki-root.conf on my server:
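The original file isn’t preserved in this archive; a generic sketch for this scenario, which assumes a separate location handles .php files, could look like this:

```nginx
# MediaWiki at the web root, articles served as /Article_name
location / {
    try_files $uri $uri/ @mediawiki;
}

location @mediawiki {
    rewrite ^/(.*)$ /index.php?title=$1&$args last;
}
```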

Configuration II – MediaWiki in the /w directory, /wiki/article path

This is mediawiki-wiki.conf on my server:
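The original file isn’t preserved in this archive; a generic sketch for this scenario, again delegating .php handling to another location, could look like this:

```nginx
# MediaWiki installed in /w, articles served as /wiki/Article_name
location /wiki/ {
    rewrite ^/wiki/(.*)$ /w/index.php?title=$1&$args last;
}
```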

Example of use

www.wolfplex.org serves other applications in subdirectories and MediaWiki for /wiki URLs.

This server block:

  1. is a regular one
  2. includes our includes/mediawiki-wiki.conf configuration file (scenario II)
  3. contains a regular php-fpm block
  4. contains other instructions
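The original server block isn’t preserved here; a hypothetical sketch matching the description above (root path and php-fpm socket are assumptions):

```nginx
server {
    listen 80;
    server_name www.wolfplex.org;
    root /var/www/wolfplex;

    # Scenario II: MediaWiki in /w, articles under /wiki/
    include includes/mediawiki-wiki.conf;

    # Regular php-fpm block
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```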

Some notes

  • Configuration is based on Daniel Friesen’s work, who collected various working nginx configurations. There are some differences in the rewrites; our goal here is to have a generic configuration totally agnostic of the way .php files are handled.
  • Our configuration (not the one generated by the builder) uses an if for the thumbnails handler. The nginx culture is one where you should try something other than an if. See this nginx wiki page and this post about how location and if work for more information.