Static IPv6 subnetting at home with dynamic prefix delegation

The problem

How to do IPv6 subnetting on a non-flat home network when you receive a dynamic IPv6 prefix via DHCPv6-PD (prefix delegation) from your ISP.

This is about a setup where you have (e.g.) a DSL line and a router that receives a prefix (e.g. a /56) via DHCPv6-PD. Since the prefix is dynamic you cannot assign static IPv6 addresses, which means that you cannot subnet it, which in turn means that you cannot have any kind of non-flat home network.

With IPv4 the problem doesn’t exist: you can have a non-flat home network with some routing involved and rely on NAT/masquerading. But masquerading doesn’t exist for IPv6, and most probably your home IPv6 router doesn’t do IPv6 NAT.

The initial setup

Suppose you have something like this:

Internet <–> DSL Router (R1) <–> Another router (R2) <–> Some IPv6 subnets

The new setup

I solved this with a Raspberry Pi and a USB Ethernet adapter plugged into it. You can use whatever you like, as long as it has two Ethernet interfaces. I then changed the home setup to something like this:

Internet <–> DSL Router (R1) <–> Raspberry Pi (Pi) <–> Another Router (R2) <–> Home IPv6 Subnets

For simplicity I’ll use the above setup, even though my actual setup is a triangle between R1, the Pi and R2, so that IPv4 traffic goes directly from R1 to R2 while IPv6 traffic goes through the Pi.

What needs to be done

Your whole home network will exit through a single /64 IPv6 network, with its addresses mapped to that network via NAT.

In order to achieve this you need to do the following:

  1. Have your DSL router provide addresses via SLAAC to the interface that connects to the Pi.
  2. Set up your IPv6 home network behind the Pi with static IPs. Use a static prefix that is allocated to you somehow (e.g. from SixXS), or a ULA, or something else sane. This is not the dynamic prefix you get from your ISP.
  3. Have the Pi get a dynamic IPv6 address on its external interface.
  4. Have the Pi NAT your internal IPv6 addresses to external IPv6 addresses from the subnet its external address belongs to.
  5. Have the Pi respond to ND requests for the NATed addresses.
  6. Have a script that reconfigures the NAT whenever the dynamic IPv6 prefix you are assigned changes.

How to do it

First set up your DSL router to do SLAAC (stateless) IPv6 address assignments, i.e. not to assign addresses via DHCPv6.

Then set up your internal network with static IPv6 addresses/subnets. Assuming you chose to use ULA (fd00::/8) for your static home network:

  • Use fd00:1::/64 for the Pi <–> R2 link
  • The Pi gets fd00:1::1/64
  • R2 gets fd00:1::2/64 and a default IPv6 route via fd00:1::1
  • Your internal network behind R2 uses fd00::/48

It’s assumed below that eth0 is the external interface and eth1 is the internal interface of the Pi.

Have the Pi get a dynamic IPv6 address

Assuming you’re using Debian:

Set the defaults for forwarding and autoconf by adding these to a file under /etc/sysctl.d:

net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1

net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0

(make sure you reload these with /etc/init.d/procps restart)

Add this to the external interface’s iface config:

iface eth0 inet static
  ... ipv4 setup ...
  up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/autoconf || true
  up echo 2 > /proc/sys/net/ipv6/conf/$IFACE/accept_ra || true
  up echo 0 > /sys/devices/virtual/net/$IFACE/bridge/multicast_snooping || true

You most probably also want to set up IPv4 there. Note that these were set on “inet” and not on “inet6”, as inet6 will be auto-configured. Feel free to adapt it.

You also need to set up static routes for the internal network on the Pi:

iface eth1 inet static
  ... ipv4 setup ...
  up echo 0 > /sys/devices/virtual/net/$IFACE/bridge/multicast_snooping || true

iface eth1 inet6 static
  address fd00:1::1
  netmask 64
  up ip -6 route add fd00::/48 via fd00:1::2 || true
  down ip -6 route del fd00::/48 via fd00:1::2 || true

Have the Pi do SNAT

Since the Pi receives an external address from a /64 IPv6 network and it’s the sole user of it, it’s OK to assume that you can use some more IPv6 addresses from that subnet for NAT 🙂

You can then do this:

PREFIX=$(ip -6 addr show $IFEX | grep inet6 | grep -v 'inet6 f[de]' | awk '{print $2}' | cut -f 1-4 -d : | tail -1)

Here $IFEX is the external interface (eth0 in this setup), and PREFIX will hold the /64 prefix of the external (dynamically allocated) network.

And then, with from/to holding the first and last address of a range you picked from the external /64 (the ::1:0 offset below is an arbitrary example covering 4096 addresses):

from=${PREFIX}::1:0
to=${PREFIX}::1:fff

ip6tables -t nat -A POSTROUTING -s fd00::/48 -o $IFEX -j SNAT --to-source ${from}-${to} --persistent

This will map your fd00::/48 addresses to 4096 IPv6 addresses from the dynamic prefix. You can obviously extend the range considerably, but don’t go nuts or you may end up with too many NAT and ND entries.
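To sanity-check the range arithmetic, here’s a small Python sketch (the prefix and the ::1:0 offset are made-up examples, not values from my setup):

```python
import ipaddress

def nat_range(prefix64, size=4096):
    """Return the first and last address of a `size`-address NAT pool
    carved out of a delegated /64 (the ::1:0 offset is arbitrary)."""
    first = ipaddress.IPv6Address(prefix64 + '::1:0')
    return first, first + (size - 1)

frm, to = nat_range('2001:db8:aa:bb')
# frm = 2001:db8:aa:bb::1:0, to = 2001:db8:aa:bb::1:fff
```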

Have the Pi respond to ND requests

So far the Pi will happily do the NAT and send the packets to your DSL router, but the DSL router won’t be able to send anything back, as no one will be responding to ND requests for the NAT range.

To solve the problem we need to use ndppd and a dynamic configuration file.

Grab the ndppd sources, compile them and place the binary somewhere.

First ensure that proxy_ndp is enabled:

echo 1 > /proc/sys/net/ipv6/conf/$IFEX/proxy_ndp

Then create the appropriate ndppd.conf file:

cat << _KOKO > ndppd.conf
proxy $IFEX {
  rule ${PREFIX}::/64 {
    static
  }
}
_KOKO

Then fire up ndppd:

ndppd -d -v -c ndppd.conf

Test it

That’s it. If I didn’t forget anything then your home network should have IPv6 access to the rest of the world using the fd00::/48 prefix.

Script it

The final step is to script all of this and have it run via cron so that it adapts to IPv6 prefix changes. This is a slimmed down version of what I’m using, adjusted to the fd00::/48 prefix:




#!/bin/bash
# Slimmed-down version. The variable defaults and function boundaries were
# lost in formatting and are reconstructed here; adjust them to your setup.

IFEX="eth0"                     # External interface
DEBUG=false
D0="/srv/ipv6"                  # A directory to work under
NDPPD="$D0/ndppd/ndppd"         # Path to the ndppd executable
NCFG="$D0/ndppd.conf"           # Active ndppd config
NCFGNEW="$D0/ndppd.conf.new"    # Freshly generated ndppd config
NCFGOLD="$D0/ndppd.conf.old"    # Previous ndppd config
TBL="NAT6"                      # ip6tables chain that holds the SNAT rule
PREFIX2="fd00::/48"             # The internal static prefix
CHANGED=false                   # Set to true when the prefix changes

PREFIX=$(ip -6 addr show $IFEX | grep inet6 | grep -v 'inet6 f[de]' | awk '{print $2}' | cut -f 1-4 -d : | tail -1)

debug() {
  $DEBUG && echo "$@"
}

# Run ip6tables against the nat table
I() {
  debug ip6tables -t nat "$@"
  ip6tables -t nat "$@"
}

# Test and set: append a rule only if it's not already there
ITS() {
  if ! I -C "$@" 2> /dev/null ; then
    I -A "$@"
  fi
}

# Remove the NAT rules
do_nat_stop() {
  while I -D POSTROUTING -j $TBL > /dev/null ; do : ; done
  I -F $TBL
  I -X $TBL
}

# Setup NAT
do_nat() {
  local from to

  # An arbitrary 4096-address range from the dynamic /64
  from="${PREFIX}::1:0"
  to="${PREFIX}::1:fff"

  if $CHANGED ; then
    echo "Resetting NAT rules"
    do_nat_stop 2> /dev/null
  fi

  I -N $TBL 2> /dev/null
  ITS POSTROUTING -j $TBL
  ITS $TBL -s $PREFIX2 -o $IFEX -j SNAT --to-source ${from}-${to} --persistent
}

# Do basic configuration
do_basic() {
  echo 1 > /proc/sys/net/ipv6/conf/$IFEX/proxy_ndp
}

# Generate the ndppd config; detect whether the prefix changed
do_conf() {
  cat << _KOKO > $NCFGNEW
proxy $IFEX {
  rule ${PREFIX}::/64 {
    static
  }
}
_KOKO

  if ! test -e $NCFG || ! diff -q $NCFG $NCFGNEW > /dev/null ; then
    debug "Things changed"
    test -e "$NCFG" && mv -f $NCFG $NCFGOLD
    mv -f $NCFGNEW $NCFG
    CHANGED=true
  else
    debug "Nothing changed"
    rm -f $NCFGNEW
  fi
}

# Start ndppd
do_ndppd() {
  if $CHANGED ; then
    echo "New config. Reloading ndppd. Prefix: $PREFIX"
    killall ndppd
    sleep 1
  fi

  if ! pgrep ndppd > /dev/null ; then
    if $DEBUG ; then
      $NDPPD -vvv -c $NCFG
    else
      $NDPPD -d -v -c $NCFG
    fi
  fi
}

doit() {
  if test -z "$PREFIX" ; then
    echo "No prefix"
    exit 1
  fi
  debug "Prefix: $PREFIX"

  do_conf
  do_basic
  do_nat
  do_ndppd
}

if $DEBUG ; then
  doit
else
  doit | logger -t 6nat
fi

Have fun!


Using TCP-LP with pycurl in Python


TCP-LP (low priority) is a TCP congestion control algorithm that is meant to be used by TCP connections that don’t want to compete with other connections for bandwidth. Its goal is to use the idle bandwidth for file transfers. The details of TCP-LP are here.

With Linux’s pluggable congestion control algorithms, it is possible to change both the default algorithm for the whole system and the one used per connection. For the latter, one needs to be root (unless the algorithm is listed in net.ipv4.tcp_allowed_congestion_control).

Note: Changing the CC algorithm only affects transmissions; you cannot alter the remote end’s behavior. This means that the below only makes sense when you are going to upload data.


Changing the CC algorithm is a matter of using setsockopt on a socket. Doing this with pycurl can be a bit tricky. Even though pycurl supports SOCKOPTFUNCTION, this is only available in newer pycurl versions. For older ones, one can exploit pycurl’s OPENSOCKETFUNCTION instead.

The trick is done with this piece of code:

import pycurl
import socket

def _getsock(family, socktype, protocol, addr):
    s=socket.socket(family, socktype, protocol)
    s.setsockopt(socket.IPPROTO_TCP, 13, 'lp' )
    return s

c = pycurl.Curl()
c.setopt(c.OPENSOCKETFUNCTION, _getsock)
c.setopt(c.URL, '')

In the above, pycurl will call _getsock and expect it to return a socket. The function creates a new socket, then calls setsockopt with IPPROTO_TCP and 13 (which is TCP_CONGESTION – see /usr/include/linux/tcp.h, /usr/include/netinet/tcp.h). It then attempts to set the algorithm to “lp”, which is the TCP-LP congestion control algorithm.

You most probably want to wrap the setsockopt in a try/except clause, as it may fail if “lp” is not available (it needs the tcp_lp module loaded) or if the program isn’t running as root.

The _getsock function also depends on the pycurl version, as its arguments have changed over time. Consult the docs for the fine details.


Example: uploading two 500MB files in parallel on an already busy production network, one with TCP-LP and the other with the default (TCP Cubic):

TCP-Cubic: 9.38 seconds
TCP-LP: 23.08 seconds

Same test, for 100MB files, again in parallel, on the same network:

TCP-Cubic: 3.14 seconds
TCP-LP: 5.38 seconds

Note: The above are single random runs, presented to give an idea of the impact. For actual experimental results we would need multiple runs, and we would also have to monitor the background traffic.

Running an NTP server in a VM using KVM

The setup

Physical server pA runs VMs using KVM. One of the VMs (vA) acts as an NTP server. pA gets the time from vA, and vA gets it from the Internet.

It’s not a great idea to run an NTP server in a VM, but in this case there was need for it.

The problem

The NTP server frequently gets out of sync.

If you use nagios, you may get errors like this:


This happens both for the physical server and for other servers that fetch the time from vA.

The reason

There’s some guessing involved here, but this should be pretty accurate:

VM vA needs to correct its clock every now and then, by slowing down or speeding things up via ntpd/adjtimex. As expected, this creates a small discrepancy between vA and pA, as now the physical server is out of sync and needs to correct its own time using vA’s reference time.

Once pA attempts to correct its time, again by slowing down or speeding up its clock, this has a direct effect on vA, as vA’s clock is now affected by pA’s ongoing adjustment. This happens because KVM by default uses kvmclock as the VM’s clock source (the source that ticks, not the source that returns the time of day).

This sometimes causes vA’s ntpd to get even more out of sync, and may even make it consider its peers inaccurate and become fully out of sync.

The problem gets even worse if you have two NTP servers (vA and vB) running on two different physical servers (pA and pB), because the amount of desync between the two is mostly random. Assuming that all your servers, including pA and pB, fetch the time from vA and vB, the discrepancy between vA and vB will make clients mark at least one of them as wrong, as the stratum of vA and vB does not permit such a difference between their clocks.

You can see the above by looking at the falsetick result in ntpq’s associations:

ind assid status  conf reach auth condition  last_event cnt
  1 33082  961a   yes   yes  none  sys.peer    sys_peer  1
  2 33083  911a   yes   yes  none falsetick    sys_peer  1

Overall, the problem is that the physical servers will try to fix their clocks, thus affecting the clocks of the NTP servers running in VMs under them.

The solution

The problem is with the VMs using the kvmclock source. You can see that using dmesg:

$ dmesg | grep clocksource
Switching to clocksource kvm-clock

The way to disable this is to pass the “no-kvmclock” parameter to the kernel of your VMs. This will not always work though. The reason is that the kernel (at least the CentOS kernels) will panic very early in the boot process as it will still try to initialize the kvmclock even if it’s not going to use it, and will fail.

The solution is to pass two parameters to your VM kernels: “no-kvmclock no-kvmclock-vsyscall”. The second one is a bit undocumented, but will do the trick.

After that you can verify it through dmesg:

$ dmesg | grep Switching
Switching to clocksource refined-jiffies
Switching to clocksource acpi_pm
Switching to clocksource tsc
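Besides dmesg, the active clocksource can also be read from sysfs at runtime. A small helper sketch (the sysfs path is the standard Linux location; the function name and the None fallback are mine):

```python
def current_clocksource(path='/sys/devices/system/clocksource/'
                             'clocksource0/current_clocksource'):
    """Return the kernel's active clocksource (e.g. 'tsc' or 'kvm-clock'),
    or None if the sysfs file isn't available."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None
```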


Below is the output of a server running in such an environment. In this case the first NTP server (vA) runs with the extra kernel parameters and the other (vB) runs without them. The clocks of the physical servers (pA and pB) were slowed down by hand using adjtimex, in order to test the effect of the physical server’s clock on the VM clocks. As you can see, this server is still in sync with vA and has a very large offset from vB. Note that this server is not a VM under pA or pB.

$ ntpq -nc peers
     remote           refid      st t when poll reach delay   offset  jitter
*10.93.XXX.XXX  2 u   81  256  377 0.433  -87.076  20.341
 10.93.XXX.XXX  2 u  290  512  377 0.673  11487.6 9868.84

I.e., what happened is that the first one, using the extra parameters, kept its clock accurate while the second did not.


Debian packaging for python2 and python3 at the same time

The problem

The scenario was like this:

  • Python code that provides a library and a binary
  • The code is compatible with both Python v2 and v3

The requirements were:

  • Generate a package with the library part for python v2
  • Generate a package with the library part for python v3
  • Generate a binary package with the executable for python v3

I.e., from one source package (vdns) I wanted to create python-vdns (the python2 lib), python3-vdns (the python3 lib) and vdns (the executable).

The approach

After trying other methods, I ended up using debhelper 9 (DH9) and pybuild. Before that I tried using CDBS, but had no luck there.

With DH9 it’s easy to package a python library for multiple python versions, as it handles everything itself. The only catch was the package that contains the binaries, and how to make it use the v3 version instead of the v2.

The solution

The solution was this rules file:

#!/usr/bin/make -f

# see EXAMPLES in dpkg-buildflags(1) and read /usr/share/dpkg/*
include /usr/share/dpkg/

# Don't set this or else the .install files will fail
#export PYBUILD_NAME = vdns
#export PYBUILD_SYSTEM = custom

# Otherwise the usr/bin/ file is from python2
export PYBUILD_INSTALL_ARGS_python2 = --install-scripts=/dev/null

# main packaging script based on dh7 syntax
%:
	dh $@ --with python3,python2 --buildsystem=pybuild

The control file is like this:

Source: vdns
Section: unknown
Priority: optional
Maintainer: Stefanos Harhalakis <>
Build-Depends: debhelper (>= 9), dh-python,
 python-all (>=2.6.6-3~), python-setuptools,
 python3-all (>=3.2), python3-setuptools
Standards-Version: 3.9.5
X-Python-Version: >= 2.7
X-Python3-Version: >= 3.2
XS-Python-Version: >= 2.7
XS-Python3-Version: >= 3.2

Package: python-vdns
Architecture: all
Depends: ${python:Depends}, ${misc:Depends}, python-psycopg2
Description: vdns python2 libraries
 These libraries allow the reading and the creation of bind zone files

Package: python3-vdns
Architecture: all
Depends: ${python3:Depends}, ${misc:Depends}, python3-psycopg2
Description: vdns python3 libraries
 These libraries allow the reading and the creation of bind zone files

Package: vdns
Architecture: all
Depends: ${python3:Depends}, ${misc:Depends}, python3-vdns, python3 (>= 3.2)
Description: Database-based DNS management
 vdns is a database-based DNS management tool. It gets data from its database
 and generates bind zone files. It supports A, AAAA, MX, NS, DS, PTR, CNAME,
 vdns uses a PostgreSQL database to store the data.
 vdns is not a 1-1 mapping of DB<->zone files. Instead the database
 is meant to describe the data that are later generated. E.g:
 * DKIM data are used to generate TXT records
 * A, AAAA and PTR entries are generated from the same data
 * NS glue records for sub-zones are auto-created

And the .install files are like this:




What the above does is use DH9 with pybuild. Pybuild takes care of multiple versions by “compiling” the binaries twice, under the build/scripts-2.x and build/scripts-3.x directories. After that it copies them to debian/tmp, and finally splits the contents of debian/tmp, based on the .install files, into debian/python-vdns, debian/python3-vdns and debian/vdns.

The biggest problem was the files that were meant for the vdns package, as those were present in both build/scripts-2.x and build/scripts-3.x, each one prepared for the appropriate python version:

-rwxr-xr-x 1 v13 v13 1197 Jul 19 21:12 build/scripts-2.7/
-rwxr-xr-x 1 v13 v13 1198 Jul 19 21:12 build/scripts-3.4/

The following line in rules takes care of the conflict by skipping a version:

export PYBUILD_INSTALL_ARGS_python2 = --install-scripts=/dev/null

This way the python2 version never gets installed, and thus only the python3 version is available to be copied to debian/tmp. Otherwise the behavior was random (it picked whichever one it found first).

Other attempts

I also tried using autoconf with CDBS but that proved to be even more difficult.


The above was accomplished only with the help of folks in #debian-python @ OFTC; namely: p1otr, mapreri and jcristau.

Multiple relay configuration based on sender address with sendmail

One of the needs that came up was to be able to use separate relay configurations based on the sender email address, using sendmail. The problem is that sendmail is missing support for most parts of that sentence.

In the end, the solution involved a combination of sendmail, smarttable, procmail and msmtp.

The idea is the following:

  • Use smarttable to implement sender based rules
  • Use the procmail mailer support to use procmail to deliver the emails
  • Use procmailrc to pipe messages to msmtp
  • Use msmtp to relay via external hosts

Sender based rules

In order to be able to have sender-based rules I used smarttable.m4 from here.

Download smarttable.m4 and (assuming sendmail config is under /etc/mail) place it under /etc/mail/m4/. Normally it should be placed alongside the rest of the sendmail features (/usr/share/sendmail/cf/features), but I don’t like polluting system dirs. Then use the following config in your .mc file:

dnl Change the _CF_DIR for a bit to load the feature from /etc/mail/m4
dnl then change it back
define(`_CF_DIR_OLD', _CF_DIR_)dnl
define(`_CF_DIR_', `/etc/mail/m4/')dnl
dnl This has to be a hash. I.e. not text.
FEATURE(`smarttable',`hash -o /etc/mail/smarttable')dnl
define(`_CF_DIR_', _CF_DIR_OLD)dnl

Then configure smarttable (/etc/mail/smarttable) with one line per sender, like this:

    procmail:/etc/mail/persource/

You can add as many lines as you like, one for each sender. See smarttable’s web page for more information on the supported sender formats. Don’t forget to generate the hashed version (smarttable.db).

Procmail config

Configure sendmail for procmail mailer like this:

define(`PROCMAIL_MAILER_ARGS', `procmail -Y -t -m $h $f $u')dnl

You have to override the default procmail parameters in order to add the -t switch. This way delivery errors will be interpreted as softfails, otherwise mails will be rejected on the first failure.

Create /etc/mail/persource and put the procmail configs in there (nice and tidy). In this example create a procmailrc under /etc/mail/persource/ as follows:

:0 w
|/usr/bin/msmtp -C /etc/mail/persource/ -a -t

The ‘w’ flag is essential in order to feed failures back to sendmail.

Msmtp config

Create the msmtp config file (/etc/mail/persource/ as follows:

syslog          on
# logfile /tmp/

password        xxx
auth            on
tls             on
tls_trust_file  /etc/ssl/certs/ca-certificates.crt

Your mileage may vary. The above is good for Gmail accounts on a Debian system.


And that’s it. Sending an email with a matching sender address will cause sendmail to use smarttable. This will match the sender and use procmail with our config to deliver the email. Procmail will pipe the email to msmtp, which will send it via Google’s mail servers.

OpenVPN and remote-cert-tls server

This required a bit of digging into OpenVPN’s and OpenSSL’s code to figure out.

The problem

This error:

Thu Sep 11 00:12:05 2014 Validating certificate key usage
Thu Sep 11 00:12:05 2014 ++ Certificate has key usage  00f8, expects 00a0
Thu Sep 11 00:12:05 2014 ++ Certificate has key usage  00f8, expects 0088

The condition

Using openvpn with the following option:

remote-cert-tls server

The solution

The solution (for me) was to add this to openvpn’s config file:

remote-cert-ku f8

The explanation


remote-cert-tls attempts to solve one problem. Let’s say you run a CA and you distribute certificates to a couple of people, including me. Then you set up a VPN server for us to use, and you generate another certificate for the VPN server.

As always, the problem that certificates attempt to solve is “how do you know you’re connecting to the remote end you think you are?”. In normal SSL pages you trust a CA to verify that the CN of the certificate matches the owner of the domain. If you want to achieve the same thing with openvpn, then you need to verify the CN of the remote end against either the hostname or a predefined string. If not, then you need to use the “remote-cert-tls server” option.

If you don’t use any of the above methods, then I can fire up an openvpn server using the certificate you provided me with, and since both my certificate and the actual VPN server’s certificate are signed by the same CA, you would verify both and be equally willing to connect to both, thus allowing me to spy on you.

To solve this kind of problem, X509 has some properties for certificates that designate them for certain purposes. E.g. one of them designates a certificate for acting as a server (TLS Web Server Authentication).

To be precise there are 2+1 such designations in X509:

X509v3 Key Usage: 
    Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment, Key Agreement
X509v3 Extended Key Usage: 
    TLS Web Server Authentication, TLS Web Client Authentication, IPSec End System, IPSec Tunnel, Time Stamping
Netscape Cert Type: 
    SSL Client, SSL Server, S/MIME, Object Signing

“Netscape Cert Type” is kind of old. “Key Usage” is the main one and “Extended Key Usage” is the final addition. Ignoring NS Cert Type, “Key Usage” is a bitmap and thus has limited space for expansion. “Extended Key Usage” on the other hand is a list of object identifiers which allows for unlimited expansion.

The certificate

The certificate I was using for the server-side of the OpenVPN had the above attributes. Ignoring NS Cert Type once more, the other two correspond to the following data:

  494:d=5  hl=2 l=   3 prim: OBJECT            :X509v3 Key Usage
  499:d=5  hl=2 l=   4 prim: OCTET STRING      [HEX DUMP]:030203F8
  433:d=5  hl=2 l=   3 prim: OBJECT            :X509v3 Extended Key Usage
  438:d=5  hl=2 l=  52 prim: OCTET STRING      [HEX DUMP]:303206082B0601050507030106082B0601050507030206082B0601050507030506082B0601050507030606082B06010505070308

Starting with “Key Usage”, the actual value is “F8”. The meaning of each bit can be found in OpenSSL’s code:

#define KU_DIGITAL_SIGNATURE    0x0080
#define KU_NON_REPUDIATION      0x0040
#define KU_KEY_ENCIPHERMENT     0x0020
#define KU_DATA_ENCIPHERMENT    0x0010
#define KU_KEY_AGREEMENT        0x0008
#define KU_KEY_CERT_SIGN        0x0004
#define KU_CRL_SIGN             0x0002
#define KU_ENCIPHER_ONLY        0x0001
#define KU_DECIPHER_ONLY        0x8000

On the other hand, the “Extended Key Usage” part contains the following Object IDs:

06082B 06010505070301 -> serverAuth (TLS Web Server Authentication)
06082B 06010505070302 -> clientAuth (TLS Web Client Authentication)
06082B 06010505070305 -> ipsecEndSystem (IPSec End System)
06082B 06010505070306 -> ipsecTunnel (IPSec Tunnel)
06082B 06010505070308 -> timeStamping (Time Stamping)
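As a side note, these hex strings can be decoded mechanically: each entry is the DER OID tag (0x06), a length byte, and a base-128-encoded payload. A small decoder sketch (written for illustration, not taken from any library):

```python
def decode_oid(hexstr):
    """Decode a DER-encoded OID (tag + length + payload) to dotted form."""
    data = bytes.fromhex(hexstr)
    assert data[0] == 0x06                # 0x06 is the OID tag
    body = data[2:2 + data[1]]            # skip the tag and length bytes
    arcs = [body[0] // 40, body[0] % 40]  # first byte packs the first two arcs
    val = 0
    for b in body[1:]:
        val = (val << 7) | (b & 0x7f)     # 7 payload bits per byte
        if not b & 0x80:                  # high bit clear ends this arc
            arcs.append(val)
            val = 0
    return '.'.join(map(str, arcs))

decode_oid('06082B06010505070301')  # '1.3.6.1.5.5.7.3.1' (serverAuth)
```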

The “bug”

The first thing to notice is that the failure is for “Key Usage” and not for “Extended Key Usage” (it took me some time to figure this out).

After that, a bit of digging into the code confirms that OpenVPN attempts to verify a bitmap with an equality check. I.e. it gets the certificate’s value and compares it against a predefined list of allowed values, which according to OpenVPN’s documentation defaults to “a0 88” (meaning: either of them). However the actual certificate’s bitmap value is 0xf8, as mentioned above, and thus the comparison fails with the error:

Thu Sep 11 00:12:05 2014 Validating certificate key usage
Thu Sep 11 00:12:05 2014 ++ Certificate has key usage  00f8, expects 00a0
Thu Sep 11 00:12:05 2014 ++ Certificate has key usage  00f8, expects 0088

The reason I’m calling this a bug is that it’s not sensible to compare against a bitmap with equality. Instead one can use AND, in which case we would have:

( <certificate's value> & <desired value> ) == <desired value>
( 0xf8 & 0xa0) == 0xa0  -> True

In order for the validation to succeed with the defaults the certificate should have one of the following designations:

0xa0: Digital Signature, Key Encipherment
0x88: Digital Signature, Key Agreement
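The difference between the two comparison styles can be shown in a few lines of Python (using the values from the messages above):

```python
cert_ku = 0xf8           # the certificate's Key Usage bitmap
allowed = (0xa0, 0x88)   # OpenVPN's default expected values

# OpenVPN's actual check: plain equality against the allowed list.
# 0xf8 matches neither 0xa0 nor 0x88, so validation fails.
assert not any(cert_ku == ku for ku in allowed)

# A bitmap-aware check: all desired bits must be set in the
# certificate's value. 0xf8 covers both 0xa0 and 0x88.
assert all((cert_ku & ku) == ku for ku in allowed)
```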

The solution

So there: since the comparison is done with equality, you can do one of the following:

  • Use the above Key Usage on the certificate (inconvenient)
  • Don’t use “remote-cert-tls server” (bad)
  • Use “remote-cert-ku XX” where XX is the value of your certificate which can be seen in OpenVPN’s messages (the last octet). In my case it’s f8.


Linux, multicast, bridging and IPv6 troubles (i.e. why my IPv6 connectivity goes missing)

For a long time now I had a very annoying problem with IPv6 under Linux.

My setup is as follows: Linux box <-> Switch <-> Router

The Linux box uses a bridge interface (br0) and usually only has one physical interface attached to it (eth0). That’s a very convenient setup.

The problem is that after a couple of minutes the IPv6 connectivity of the host goes away. Now, the host has a static IPv6 address assigned to it, and it’s not that it loses the address or any route; it just stops communicating with everything.

Troubleshooting this showed that the box loses the MAC address of the router and the ND protocol does not work, so it never recovers.

When the problem occurs, the neighbor information becomes stale:

# ip neigh
2a01:XXX:YYY:1::1 dev br0 lladdr 00:11:12:13:14:c4 router STALE
fe80::20c:XXff:feXX:YYYY dev br0 lladdr 00:11:12:13:14:c4 router STALE

I.e. the entry remains in a ‘STALE’ state and never recovers.

My workarounds so far have been:

  • Enable promiscuous mode on the interface (ifconfig br0 promisc)
  • Clear neighbors (ip neigh flush)

Everything pointed to multicast issues (which is what IPv6 ND uses).

Long story short, this was an eye opener:

What needs to be done is to disable IGMP/MLD snooping on the bridge interface, because it causes these issues. This is done with:

# echo 0 > /sys/devices/virtual/net/br0/bridge/multicast_snooping

So do yourself a favor and add this to /etc/network/interfaces, in the relevant interface:

    up    echo 0 > /sys/devices/virtual/net/$IFACE/bridge/multicast_snooping


Installing package build dependencies from a .dsc file (Debian)

There are cases where one needs to install build-dependencies of a .dsc file in Debian.

Apparently this is not as trivial as:

# apt-get build-dep package

The easiest way I’ve found so far is to use mk-build-deps (from the devscripts package):

# mk-build-deps -i vadm_1.0.4ci+r16.dsc -t apt-get --no-install-recommends -y
dpkg-deb: building package `vadm-build-deps' in `../vadm-build-deps_1.0.4ci+r16_all.deb'.

The package has been created.
Attention, the package has been created in the current directory,
not in ".." as indicated by the message above!
(Reading database ... 41052 files and directories currently installed.)
Preparing to replace vadm-build-deps 1.0.4ci+r16 (using vadm-build-deps_1.0.4ci+r16_all.deb) ...
Unpacking replacement vadm-build-deps ...

Reading package lists...
Building dependency tree...
Reading state information...

0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up vadm-build-deps (1.0.4ci+r16) ...

This godly script:

  • Creates a pseudo-package that depends on the build-depends of the .dsc file
  • Does a dpkg -i on the generated deb file (which may fail because of missing build depends)
  • Does apt-get -f install

Extra points for using -r which will remove the generated package after it’s done.

pyOpenSSL and invalid certificates

I was trying to import some X509v3 certificates that were created with pyOpenSSL to a MikroTik router (RouterOS 6.1) but they were always being imported with an invalid validity period (not before 1970 and not after 1970).

Eventually I found out that this is because pyOpenSSL stores the validity field in an invalid format. Here’s the story:

cert1.pem is the pyOpenSSL certificate and cert2.pem is a certificate created with openssl. Both have mostly the same information. Decoding the certificates with openssl shows that cert1.pem actually has an older notAfter date so it’s not an issue of overflow.

$ openssl x509 -noout -startdate -enddate < cert1.pem
notBefore=Jul 20 18:35:58 2013 GMT
notAfter=Jan  1 00:00:00 2032 GMT

$ openssl x509 -noout -startdate -enddate < cert2.pem
notBefore=Aug 31 21:44:54 2012 GMT
notAfter=Aug 26 21:44:54 2032 GMT

I examined the certificates in python by decoding their DER structures and looking for the validity field (copy-paste the following in a python shell).

import Crypto.Util.asn1 as asn1
import OpenSSL.crypto as c

fn1='cert1.pem'
fn2='cert2.pem'
st1=open(fn1, 'r').read()
st2=open(fn2, 'r').read()

cert1=c.load_certificate(c.FILETYPE_PEM, st1)
cert2=c.load_certificate(c.FILETYPE_PEM, st2)

dump1=c.dump_certificate(c.FILETYPE_ASN1, cert1)
dump2=c.dump_certificate(c.FILETYPE_ASN1, cert2)

der1=asn1.DerSequence()
der1.decode(dump1)
der2=asn1.DerSequence()
der2.decode(dump2)

# tbsCertificate is the first element of the certificate sequence; in a
# v3 certificate the validity is its fifth element
# (version, serial, signature, issuer, validity, ...)
tbs1=asn1.DerSequence()
tbs1.decode(der1[0])
tbs2=asn1.DerSequence()
tbs2.decode(der2[0])

tt1=asn1.DerSequence()
tt1.decode(tbs1[4])
tt2=asn1.DerSequence()
tt2.decode(tbs2[4])
at this point tt1 and tt2 are sequences of the validity field (notBefore, notafter) for the two certificates . Here’s what they contain:

>>> tt1[0]
'\x18\x0f20130720183558Z'
>>> tt1[1]
'\x18\x0f20320101000000Z'

>>> tt2[0]
'\x17\r120831214454Z'
>>> tt2[1]
'\x17\r320826214454Z'

SoaB! They differ!

Reading the X509 spec [1], the relevant section indicates that there are two possible formats for the validity period: both notBefore and notAfter may be encoded either as UTCTime or as GeneralizedTime.

  • UTCTime is defined as YYMMDDHHMMSSZ
  • GeneralizedTime is defined as YYYYMMDDHHMMSSZ

So pyOpenSSL uses GeneralizedTime while openssl uses UTCTime; both are valid encodings.

However the RFC also says:

CAs conforming to this profile MUST always encode certificate
validity dates through the year 2049 as UTCTime; certificate validity
dates in 2050 or later MUST be encoded as GeneralizedTime.


So it seems that pyOpenSSL uses GeneralizedTime unconditionally, which is not RFC-compliant and is thus rejected by RouterOS.
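The RFC rule is easy to express in code. A sketch of a compliant encoder (the function name is mine):

```python
from datetime import datetime

def encode_x509_time(dt):
    """Encode a validity date per the RFC: UTCTime (YYMMDDHHMMSSZ)
    through 2049, GeneralizedTime (YYYYMMDDHHMMSSZ) from 2050 on."""
    if dt.year < 2050:
        return dt.strftime('%y%m%d%H%M%SZ')
    return dt.strftime('%Y%m%d%H%M%SZ')

encode_x509_time(datetime(2032, 1, 1))  # '320101000000Z'
encode_x509_time(datetime(2050, 1, 1))  # '20500101000000Z'
```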

A quick look at pyOpenSSL’s code unfortunately proves that…


Verify that a private key matches a certificate with PyOpenSSL

Verify that a private key matches a certificate using PyOpenSSL and PyCrypto:

import OpenSSL.crypto as c
from Crypto.Util import asn1

# cert: the certificate - an X509 object
# priv: the private key - a PKey object
pub=cert.get_pubkey()

# Only works for RSA (I think)
if pub.type()!=c.TYPE_RSA or priv.type()!=c.TYPE_RSA:
    raise Exception('Can only handle RSA keys')

# This seems to work with public keys as well
pub_asn1=c.dump_privatekey(c.FILETYPE_ASN1, pub)
priv_asn1=c.dump_privatekey(c.FILETYPE_ASN1, priv)

# Decode DER
pub_der=asn1.DerSequence()
pub_der.decode(pub_asn1)
priv_der=asn1.DerSequence()
priv_der.decode(priv_asn1)

# Get the modulus (the second element of the key's DER sequence)
pub_modulus=pub_der[1]
priv_modulus=priv_der[1]

if pub_modulus==priv_modulus:
    print('The private key matches the certificate')

The idea is to get the modulus from the two DER structures and compare them. They should be the same.

Note: You can use the above under the MIT license. If it doesn’t fit your needs let me know. My intention is to make this usable by anyone for any kind of use with no obligation.