FOSDEM PGP Key signing party FAQ

Each year, FOSDEM organizes one of the largest keysigning events for PGP keys. So, what should you do when you come back from the key signing party?

Here is a FAQ with some useful notes about how I sign the keys.

Sign other keys

Bad practice: don’t upload keys you’ve just signed to a PGP keyserver

At the event, you checked the key fingerprints and an ID document. But you also want to verify the e-mail address, to be sure the mailbox belongs to the key owner and not to a namesake.

So don’t upload the keys to a PGP keyserver: send the signature for ONE mail address to THAT mail address. Luckily, some software automates this process.


Caff automates the signing and sending process.

You can follow the instructions published on the Debian wiki.

Basically, it works in four steps:

  1. create a ~/.caffrc file with at least <code>$CONFIG{'owner'}</code> and <code>$CONFIG{'email'}</code>
  2. run caff <fingerprints of the keys you verified>
  3. check your mail for issues like messages rejected as spam, non-existent mailboxes, etc.
  4. take a highlighter and mark each key on your paper list once its signature has been sent
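Step 1 can be sketched as a minimal configuration file; the name and address below are placeholders, and caff accepts many more options than these two:

```shell
# Minimal ~/.caffrc sketch, written to a sample file so we don't clobber
# a real configuration. Replace the placeholders with your own identity.
cat > caffrc.sample <<'EOF'
$CONFIG{'owner'} = 'Jane Doe';
$CONFIG{'email'} = 'jane.doe@example.org';
EOF
cat caffrc.sample
```

Once adjusted, copy it to ~/.caffrc.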

What if some keys aren’t fetchable on the public servers?

You can ask caff to fetch the keys from the local GnuPG keyring. For that, download the FOSDEM event keyring, then import the keys you want:

You can then ask caff to fetch them locally. For me, it was the following keys:
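In practice, this looks like the following sketch; the keyring filename and the fingerprint are placeholders, so use the file FOSDEM publishes for your year and the fingerprints you actually verified:

```shell
# Sketch: import the downloaded event keyring into your local GnuPG keyring.
keyring="ksp-fosdem.asc"   # placeholder filename
if [ -f "$keyring" ]; then
    gpg --import "$keyring"
else
    echo "download the event keyring as $keyring first"
fi
# caff's -R / --no-download option then signs from the local keyring
# instead of fetching each key from the public keyservers:
# caff -R 0123456789ABCDEF0123456789ABCDEF01234567
```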

Other software

Some key signing participants use another piece of software: PIUS.

The software claims to be able to detect signed keys in a mailbox, which is useful for the next step.

Don’t expect your nice message to be read

As you encrypt the message with the recipient’s PGP key, they will have to make an effort to decrypt it. Contributors who use PGP to sign releases, VCS tags or commits don’t always use PGP to read and write mail. So, guess what they can do with your message if their mail client doesn’t have access to the key? gpg -d | gpg --import. Your message will thus never be read as clear text.

Publish your signed keys

Now that you’ve signed the other participants’ keys, you’ll want to publish the signed copies of your own key that you’ve received.

When your mail client supports GPG

If your mail client handles this, or if you use PIUS, it will let you import the keys into GPG.

Manually import the signed keys

This script will ask you Next key?

You copy/paste the PGP block (between -----BEGIN PGP MESSAGE----- and -----END PGP MESSAGE-----). Then you save with CTRL + D.

It doesn’t matter if you’ve added a line after the END line: gpg stops parsing there.
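Such a wrapper can be as small as a loop around gpg. Here is a sketch of the assumed shape (the author's actual script isn't shown in this post), saved to a file so you can inspect and adapt it:

```shell
# Hypothetical sketch of the import wrapper: decrypt the pasted PGP MESSAGE
# block, then import the signature it contains.
cat > import-next-key.sh <<'EOF'
#!/bin/sh
while true; do
    echo "Next key?"
    # Paste the armored block, finish with Ctrl+D; an empty paste ends the loop.
    gpg --decrypt | gpg --import || break
done
EOF
chmod +x import-next-key.sh
```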

GPG will import the key and publish it. Publish on a responsive keyserver; that will ease the checks.

You have two ways to check that each signature has been successfully sent.

First, check the output of gpg --import:
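For a freshly imported signature, the output has this shape (a sketch: the key ID and uid are placeholders, and the exact wording varies across GnuPG versions):

```
gpg: key 0123ABCD: "Jane Doe <jane.doe@example.org>" 1 new signature
gpg: Total number processed: 1
gpg:         new signatures: 1
```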

If instead you read this, you’ve already published this key:
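The output for an already-imported signature looks like this (a sketch: the key ID and uid are placeholders, and the exact wording varies across GnuPG versions):

```
gpg: key 0123ABCD: "Jane Doe <jane.doe@example.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
```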

The second way to check is on the web view of the server.

For example if you use the server noted above, search your fingerprint here.

Stay on the page with your signatures, and refresh it whenever you have a doubt.

Tag the mails as done

There are as many mails as there are participants. So tagging mails as processed is useful to know what has been handled and what hasn’t.

A dedicated IMAP folder is nice, or any label/color your client allows.

Alternatively, take a highlighter and annotate your paper list.

A Laravel command to run a SQL console à la Phabricator

Phabricator offers a bin/storage shell command. It runs the mysql client with the options taken from the application configuration.

That’s useful in a modern distributed environment, where it’s not always straightforward to know which server contains which databases and which credentials to use. In a Docker container, for example, that could be a linked container or a dedicated bare-metal instance.

As Laravel offers standardisation of the configuration, we can provide such a command not only for MySQL but also for PostgreSQL, SQLite and SQL Server.

The only requirement is to get the relevant client installed on the machine.
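As an illustrative sketch of the idea (in shell, not the actual PHP implementation; the driver names mysql, pgsql, sqlite and sqlsrv are Laravel's, while the function name is hypothetical), the command essentially maps the configured driver to the matching client binary:

```shell
# Map a Laravel database driver name to the CLI client to launch.
db_client_for() {
    case "$1" in
        mysql)  echo "mysql"   ;;
        pgsql)  echo "psql"    ;;
        sqlite) echo "sqlite3" ;;
        sqlsrv) echo "sqlcmd"  ;;
        *)      echo "unsupported driver: $1" >&2; return 1 ;;
    esac
}
db_client_for pgsql   # → psql
```

The real command also passes the host, port, username, password and database name from the configuration to the chosen client.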

To implement it in your application, you can drop DatabaseShell.php into your app/Console/Commands/ folder. (last commit at post writing)

To run it, use php artisan db:shell.

This post was last updated on 2017-02-09.

MediaWiki now accepts RDFa and Microdata semantic markup out of the box

Semantic web

Since MediaWiki 1.16, the software has supported — as an option — RDFa and Microdata HTML semantic attributes.

This commit, integrated into the next MediaWiki release, 1.27, embraces the semantic web further by making these attributes always available.

If you wish to use it today, this is already available in our Git repository.

This also slightly reduces the cyclomatic complexity of our parser sanitizer code.

Microdata support will thus be available on Wikipedia on Thursday, 24 March 2016, and on the other projects on Wednesday, 23 March 2016.

If you already use RDFa today on MediaWiki

First, we would be happy to get feedback: we’re currently considering an update to RDFa 1.1, and we would like to know who is still in favour of keeping RDFa 1.0.

Secondly, there is a small configuration effort to make: view the HTML source of one of your wiki’s pages and look at the <html> tag.

Copy the content of the version attribute: you should see something like <html version="HTML+RDFa 1.0">.

Now, edit InitialiseSettings.php (or your wiki farm configuration) and set the $wgHtml5Version setting. For example here, this would be:
$wgHtml5Version = "HTML+RDFa 1.0";

For the microdata, there is nothing special to do.


Split a large SQL dump by database

You created a MySQL backup of a large server installation with dozens of databases and wish to get the schema and data for one of them. You now have to deal with a file of hundreds of MB in a text editor. How convenient.

Split a dump into several files

You can quickly split this dump into several files (one per database) with awk or csplit. With GNU awk (gawk on FreeBSD), this is a oneliner:
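A sketch, with a tiny fake dump standing in for your real file; the "-- Current Database:" comment is the marker mysqldump --all-databases writes before each database:

```shell
# Fake two-database dump, standing in for your real mysqldump output.
printf -- '-- Current Database: `shop`\nCREATE TABLE a (id INT);\n-- Current Database: `blog`\nCREATE TABLE b (id INT);\n' > full-dump.sql

# The oneliner: start a new output file at each "-- Current Database:" marker.
awk '/^-- Current Database: /{ if (out) close(out); out = "dump" ++i ".sql" } out { print > out }' full-dump.sql

ls dump*.sql   # lists dump1.sql and dump2.sql
```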

Get database.sql files

To rename these files with the actual database names, the following bash script could be useful. It assumes you don’t keep the main dump in the same directory.
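A sketch of such a script, assuming each chunk starts with its "-- Current Database:" marker; two fake chunks stand in here for the files produced by the split:

```shell
# Fake chunks, standing in for the dumpN.sql files produced by the split.
printf -- '-- Current Database: `shop`\nCREATE TABLE a (id INT);\n' > dump1.sql
printf -- '-- Current Database: `blog`\nCREATE TABLE b (id INT);\n' > dump2.sql

# Extract the database name from the first marker line, rename accordingly.
for f in dump*.sql; do
    name=$(awk -F'`' '/^-- Current Database: /{ print $2; exit }' "$f")
    [ -n "$name" ] && mv "$f" "$name.sql"
done

ls ./*.sql   # shop.sql and blog.sql now exist
```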

December 2014 links

Some links to stuff I appreciated this month. Links to French content are in a separate post. You can also take the time machine to November 2014.


What if, instead of trying to understand how the brain works, we copy the neural connections as they are? This is what the OpenWorm project tries to do with C. elegans. And, big surprise, it works, and allows a robot to move.


An infographic of the locality of Wikipedia participants shows, without any surprise, that they are mainly from Europe and North America.

If you’re into dumps, the Wikipedia / MediaWiki XML dump grepper will help you find a particular piece of data, like the text of a single article.


Dev / search. The silver searcher, ag, offers a faster approach than ack for searching your code.

Fun / autogenerator. Some years ago, cgMusic offered an implementation of how a computer program could create music. Add some image generation techniques and a word generator, and you get a fake music generator offering full albums. Ælfgar stumbled upon Liquified Death by Income Yield.

GIS. Turf is a new open-source JavaScript GIS library. This post explains its capabilities and features, including its great offline support.


What if an Arduino embedded a web server and allowed programming from the web browser? This is exactly what the Photon by Spark does.


An infographic showing satellites orbiting Earth, and a point of view on the Uber economy.


The GoT series offers some graphic torture scenes. Did you ever ask yourself whether they are interesting or needed for the plot? Marie Brennan offers a great opinion in « Welcome to the Desert of the Real ».

TCL and the SSL security issues: sslv3 alert handshake failure

Update 2016-01-15: With tcl-tls 1.6.7, it works out of the box without any need to configure ciphers.

If you have reconfigured your OpenSSL to take care of the current security issues, you’ve disabled SSLv3 since the POODLE disclosure.

Then, you could find unexpected behavior in TCL code. The http package isn’t the best at intercepting and reporting errors, so the message could be as non-descriptive as software caused connection abort. If you’re lucky, you’ll get the actual cause of the error: sslv3 alert handshake failure.

So, without any surprise: we disabled SSLv3, the code still wants to use SSLv3, and… that fails:

[sourcecode language="plain" highlight="9-10"]
/home/dereckson ] tclsh8.6
% package require http
% package require tls
% http::register https 443 ::tls::socket
443 ::tls::socket
% http::geturl
SSL channel "sock801eacd10": error: sslv3 alert handshake failure
error reading "sock801eacd10": software caused connection abort
[/sourcecode]

The solution is to explicitly request to use TLS.

[sourcecode language="plain" highlight="6"]
/home/dereckson ] tclsh8.6
% package require http
% package require tls
% tls::init -tls1 true -ssl2 false -ssl3 false
-tls1 true -ssl2 false -ssl3 false
% http::register https 443 ::tls::socket
443 ::tls::socket
% http::geturl
% http::cleanup ::http::1
[/sourcecode]

In your TCL application, registering https once and for all as a preconfigured TLS socket sounds like a good idea:

[sourcecode language="plain"]
# HTTP support

package require http
package require tls
::tls::init -ssl2 false -ssl3 false -tls1 true
::http::register https 443 ::tls::socket
[/sourcecode]

Thank you to rkeene from Freenode #tcl for his help tracking down this issue.

How to determine the SQLite filename from an open connection?

While the PHP SQLite3 class doesn’t provide a property or a method to return this information, the SQLite engine has PRAGMA statements to get or set internal data or to modify the library behavior. Here, PRAGMA database_list is the one we need.

It returns a row per database in use, with seq, name and file fields, respectively containing a sequence id, the internal name of the database and the path to the file:
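For example, with the sqlite3 command-line shell (the database path is illustrative):

```shell
# Create a small database, then ask SQLite which file(s) the connection uses.
db="/tmp/pragma-demo.db"
rm -f "$db"
sqlite3 "$db" "CREATE TABLE t (id INTEGER);"
sqlite3 "$db" "PRAGMA database_list;"
# e.g. 0|main|/tmp/pragma-demo.db  (seq|name|file; the path is canonicalized)
```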

Some details are interesting to note.

  • An SQLite connection can use several files.
  • The path will be canonical, and so follows symbolic links.

Unit testing case sample:

To test if the current $client connection file matches $config->databaseFilename:

To test whether a query returns the expected result, an efficient method is to compare two arrays: one with the expected result, one with the row returned by fetchArray.

By default, fetchArray stores each field value twice, once with a numeric index and once with an associative key. Here we focus on the fields containing the right information, so we use the SQLITE3_ASSOC parameter to get associative content only. If you wish to test the order, use fetchArray(SQLITE3_NUM):

The realpath function is used to get a canonical path.


This post was initially published on Stack Overflow.

Nasqueron landing page

Nasqueron is a collection of projects, so the homepage should allow visitors to find a resource in any of these projects. The current prototype features a responsive background and a search widget.

During development, it will be softly hidden by a landing page.






The painting used as background is Aurora Borealis by Frederic Edwin Church, a 19th-century oil on canvas executed during an Arctic expedition. The widget and the different sizes served take care of the ship’s position.

The search will rely on several data sources, and so will allow you to query the list of tools or the titles of wiki pages instantaneously, with autocomplete.

It also allows you to easily jump to another Nasqueron site (the list is provisioned by API) with shortcuts (available by pressing the ? key):


If you wish to test it, go to and try to type something to enter. It could be something relevant to Nasqueron, inspired by Ultima spells, or drawn from general knowledge about how to enter a place. If you’re not in the mood to guess, UTSL for some solutions.