Petter Reinholdtsen

Entries tagged "english".

New oggz release 1.1.2 after 15 years
9th February 2025

A little over a week ago, I noticed the liboggz package on my Debian dashboard had not had a new upstream release for a while. A closer look showed that its last release, version 1.1.1, happened in 2010. A few patches had accumulated in the Debian package, and I even noticed that I had passed these patches on to upstream five years ago. A handful of crash bugs had been reported against the Debian package, and looking at the upstream repository I found a few crash bugs reported there too. To add insult to injury, I discovered that upstream had accumulated several fixes in the years between 2010 and now, and many of them had not made their way into the Debian package. I decided enough was enough, and that a new upstream release was needed to fix these nasty crash bugs. Luckily I am also a member of the Xiph team, aka upstream, and could go to work immediately to fix it.

I started by adding automatic build testing on the Xiph gitlab oggz instance, to get a better idea of the state of the code base. This exposed a few build problems, which I had to fix. In parallel, I sent an email announcing my wish for a new release to every person who had committed to the upstream code base since 2010, and asked for help doing a new release both by email and on the #xiph IRC channel. Sadly only a fraction of their email providers accepted my email. But Ralph Giles in the Xiph team came to the rescue and provided invaluable help to guide me through the Xiph release process. While this was going on, I spent a few days tracking down the crash bugs with good help from valgrind, and came up with patch proposals to get rid of at least these specific crash bugs. The open issues also had to be checked. Several of them proved to be fixed already, but a few I had to create patches for. I also checked the Debian, Arch, Fedora, Suse and Gentoo packages to see if there were patches applied in these Linux distributions that should be passed upstream. The end result was ready yesterday. A new liboggz release, version 1.1.2, was tagged, wrapped up and published on the project page. And today, the new release was uploaded into Debian.
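
For those wanting to hunt similar crashes, a minimal approach is to run one of the oggz command line tools under valgrind on a file known to trigger a problem (the file name here is only a placeholder):

valgrind --leak-check=full --track-origins=yes oggz-validate crashing-sample.ogg

The valgrind report lists invalid reads and writes with stack traces, which is usually enough to locate the offending code.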

You are probably by now curious about what actually changed in the library. I guess the most interesting new feature is the support for Opus and VP8. Almost all other changes were stability or documentation fixes. The rest were related to the gitlab continuous integration testing. All in all, this was really a minor update, hence the version bump only from 1.1.1 to 1.1.2, but it was long overdue and I am very happy that it is out the door.

One change proposed upstream was not included this time, as it extended the API and changed some of the existing library methods, and thus requires a major SONAME bump and possibly code changes in every program using the library. As I am not that familiar with the code base, I am unsure if I am the right person to evaluate the change. Perhaps later.

Since the release was tagged, a few minor fixes have already been committed upstream: automatic testing of cross-building to Windows, and documentation updates linking to the correct project page. If an important issue is discovered with this release, I guess a new release including these minor fixes might happen soon. If not, perhaps they can wait fifteen years. :)

I would like to send a big thank you to everyone who helped make this release happen, from the people adding fixes upstream over the course of fifteen years, to those reporting crash bugs and other bugs, and those maintaining the package in various Linux distributions. Thank you very much for your time and interest.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, multimedia, standard, video.
121 packages in Debian mapped to hardware for automatic recommendation
19th January 2025

For some years now, I have been working on an automatic hardware-based package recommendation system for Debian and other Linux distributions. The isenkram system I started on back in 2013 now consists of two subsystems, one locating firmware files using the information provided by apt-file, and one matching hardware to packages using information provided by AppStream. The former is very similar to the mechanism implemented in debian-installer to pick the right firmware packages to install. This post is about the latter system. Thanks to steady progress and good help from other Debian and upstream developers, I am happy to report that the isenkram system is now able to recommend 121 packages using information provided via AppStream.

The mapping is done using modalias information provided by the kernel, the same information used by udev when creating device files, and by the kernel when deciding which kernel modules to load. To get all the modalias identifiers relevant for your machine, you can run the following command:

find /sys/devices -name modalias -print0 | xargs -0 sort -u

The modalias identifiers can look something like this:

acpi:PNP0000
cpu:type:x86,ven0000fam0006mod003F:feature:,0000,0001,0002,0003,0004,0005,0006,0007,0008,0009,000B,000C,000D,000E,000F,0010,0011,0013,0015,0016,0017,0018,0019,001A,001B,001C,001D,001F,002B,0034,003A,003B,003D,0068,006B,006C,006D,006F,0070,0072,0074,0075,0076,0078,0079,007C,0080,0081,0082,0083,0084,0085,0086,0087,0088,0089,008B,008C,008D,008E,008F,0091,0092,0093,0094,0095,0096,0097,0098,0099,009A,009B,009C,009D,009E,00C0,00C5,00E1,00E3,00EB,00ED,00F0,00F1,00F3,00F5,00F6,00F9,00FA,00FB,00FD,00FF,0100,0101,0102,0103,0111,0120,0121,0123,0125,0127,0128,0129,012A,012C,012D,0140,0160,0161,0165,016C,017B,01C0,01C1,01C2,01C4,01C5,01C6,01F9,024A,025A,025B,025C,025F,0282
dmi:bvnDellInc.:bvr2.18.1:bd08/14/2023:br2.18:svnDellInc.:pnPowerEdgeR730:pvr:rvnDellInc.:rn0H21J3:rvrA09:cvnDellInc.:ct23:cvr:skuSKU=NotProvided
pci:v00008086d00008D3Bsv00001028sd00000600bc07sc80i00
platform:serial8250
scsi:t-0x05
usb:v413CpA001d0000dc09dsc00dp00ic09isc00ip00in00

The entries above are a selection of the complete set available on a Dell PowerEdge R730 machine I have access to, and give an idea of the various styles of hardware identifiers presented in the modalias format. When looking up relevant packages in a Debian Testing installation on the same R730, I get this list of packages proposed:

% sudo isenkram-lookup
firmware-bnx2x
firmware-nvidia-graphics
firmware-qlogic
megactl
wsl
%

The list consists of firmware packages requested by kernel modules, as well as packages with programs to get the status from the RAID controller and to maintain the LAN console. When the edac-utils package, which provides tools to check the ECC RAM status, enters testing in a few days, it will also show up as a proposal from isenkram. In addition, once the mfiutil package we uploaded in October gets past the NEW processing, a tool to configure the RAID controller will be proposed as well.
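
The underlying AppStream mapping can also be queried directly using the appstreamcli tool from the appstream package, passing one of the modalias values shown earlier:

% appstreamcli what-provides modalias \
    "pci:v00008086d00008D3Bsv00001028sd00000600bc07sc80i00"

This lists the components, and thus packages, claiming support for the given hardware identifier.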

Another example is the trusty old Lenovo Thinkpad X230, which has hardware handled by several packages in the archive. This machine is running Debian Stable:

% isenkram-lookup 
beignet-opencl-icd
bluez
cheese
ethtool
firmware-iwlwifi
firmware-misc-nonfree
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tpb
%

Here the proposal consists of software to handle the camera, bluetooth, network card, wifi card, GPU, fan, fingerprint reader and acceleration sensor on the machine.

Here is the complete set of packages currently providing hardware mapping via AppStream in Debian Unstable: air-quality-sensor, alsa-firmware-loaders, antpm, array-info, avarice, avrdude, bmusb-v4l2proxy, brltty, calibre, colorhug-client, concordance-common, consolekit, dahdi-firmware-nonfree, dahdi-linux, edac-utils, eegdev-plugins-free, ekeyd, elogind, firmware-amd-graphics, firmware-ath9k-htc, firmware-atheros, firmware-b43-installer, firmware-b43legacy-installer, firmware-bnx2, firmware-bnx2x, firmware-brcm80211, firmware-carl9170, firmware-cavium, firmware-intel-graphics, firmware-intel-misc, firmware-ipw2x00, firmware-ivtv, firmware-iwlwifi, firmware-libertas, firmware-linux-free, firmware-mediatek, firmware-misc-nonfree, firmware-myricom, firmware-netronome, firmware-netxen, firmware-nvidia-graphics, firmware-qcom-soc, firmware-qlogic, firmware-realtek, firmware-ti-connectivity, fpga-icestorm, g810-led, galileo, garmin-forerunner-tools, gkrellm-thinkbat, goldencheetah, gpsman, gpstrans, gqrx-sdr, i8kutils, imsprog, ledger-wallets-udev, libairspy0, libam7xxx0.1, libbladerf2, libgphoto2-6t64, libhamlib-utils, libm2k0.9.0, libmirisdr4, libnxt, libopenxr1-monado, libosmosdr0, librem5-flash-image, librtlsdr0, libticables2-8, libx52pro0, libykpers-1-1, libyubikey-udev, limesuite, linuxcnc-uspace, lomoco, madwimax, media-player-info, megactl, mixxx, mkgmap, msi-keyboard, mu-editor, mustang-plug, nbc, nitrokey-app, nqc, ola, openfpgaloader, openocd, openrazer-driver-dkms, pcmciautils, pcscd, pidgin-blinklight, ponyprog, printer-driver-splix, python-yubico-tools, python3-btchip, qlcplus, rosegarden, scdaemon, sispmctl, solaar, spectools, sunxi-tools, t2n, thinkfan, tlp, tp-smapi-dkms, trezor, tucnak, ubertooth, usbrelay, uuu, viking, w1retap, wsl, xawtv, xinput-calibrator, xserver-xorg-input-wacom and xtrx-dkms.

In addition to these, there are several packages with patches pending in the Debian bug tracking system, and even more where no one has written patches yet. Good candidates for the latter are packages with udev rules but no AppStream hardware information.
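
One rough way to locate such candidates is to list every package shipping udev rules (a sketch only, as it does not filter out packages already providing AppStream hardware information):

apt-file search -x '^/(usr/)?lib/udev/rules.d/' | cut -d: -f1 | sort -u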

The isenkram system consists of two packages, isenkram-cli with the command line tools, and isenkram with a GUI background process. The latter will listen for dbus events from udev emitted when new hardware becomes available (like when inserting a USB dongle or discovering a new bluetooth device), look up the modalias entry for this piece of hardware in AppStream (and a hard coded list of mappings from isenkram - I am currently working hard to move this list to AppStream), and pop up a dialog proposing to install any packages supporting this hardware that are not already installed. It works very well today when inserting the LEGO Mindstorms RCX, NXT and EV3 controllers. :) If you want to make sure more hardware related packages get recommended, please help out fixing the remaining packages in Debian to provide AppStream metadata with hardware mappings.
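
If you wonder what such a mapping looks like, here is a minimal sketch of the relevant part of an AppStream metainfo file, following the AppStream specification (the component id, names and modalias pattern are made up for illustration):

<?xml version="1.0" encoding="UTF-8"?>
<component>
  <id>org.example.mytool</id>
  <name>mytool</name>
  <summary>Tool for managing the example USB widget</summary>
  <provides>
    <modalias>usb:v413CpA001*</modalias>
  </provides>
</component>

The modalias value can use glob style wildcards, making it possible to match a range of devices from one vendor.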

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, isenkram.
What is the most supported MIME type in Debian in 2025?
18th January 2025

Seven and twelve years ago, I measured what the most supported MIME type in Debian was, first by analysing the desktop files in all packages in the archive, then by analysing the DEP-11 AppStream data set. I guess it is time to repeat the measurement, looking only at unstable, as last time:

Debian Unstable:

  count MIME type
  ----- -----------------------
     63 image/png
     63 image/jpeg
     57 image/tiff
     54 image/gif
     51 image/bmp
     50 audio/mpeg
     48 text/plain
     42 audio/x-mp3
     40 application/ogg
     39 audio/x-wav
     39 audio/x-flac
     36 audio/x-vorbis+ogg
     35 audio/x-mpeg
     34 audio/x-mpegurl
     34 audio/ogg
     33 application/x-ogg
     32 audio/mp4
     31 audio/x-scpls
     31 application/pdf
     29 audio/x-ms-wma

The list was created like this using a sid chroot:

cat /var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz | \
  zcat | awk '/^  - \S+\/\S+$/ {print $2 }' | sort | \
  uniq -c | sort -nr | head -20

It is nice to see that the same number of packages now support PNG and JPEG. Last time JPEG had more support than PNG. Most of the MIME types are known to me, but I have no idea what the 'audio/x-scpls' one represents, except that it is an audio format. To find the packages claiming support for this format, the appstreamcli command from the appstream package can be used:

% appstreamcli what-provides mediatype audio/x-scpls | grep Package: | sort -u
Package: alsaplayer-common
Package: amarok
Package: audacious
Package: brasero
Package: celluloid
Package: clapper
Package: clementine
Package: cynthiune.app
Package: elisa
Package: gtranscribe
Package: kaffeine
Package: kmplayer
Package: kylin-burner
Package: lollypop
Package: mediaconch-gui
Package: mediainfo-gui
Package: mplayer-gui
Package: mpv
Package: mystiq
Package: parlatype
Package: parole
Package: pragha
Package: qmmp
Package: rhythmbox
Package: sayonara
Package: shotcut
Package: smplayer
Package: soundconverter
Package: strawberry
Package: syncplay
Package: vlc
%

Looks like several video and audio tools understand the format. Similarly, one can check out the number of packages supporting the STL format commonly used for 3D printing:

% appstreamcli what-provides mediatype model/stl | grep Package: | sort -u
Package: cura
Package: freecad
Package: open3d-viewer
%

How strange that the slic3r and prusa-slicer packages do not claim support for STL. Perhaps they are just missing package metadata? Luckily the package metadata in Debian is getting better, and hopefully this way of locating relevant packages for any file format will soon be the preferred one.
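
If the metadata is indeed missing, the fix is typically a small addition to the AppStream metainfo file of those packages, along these lines (a sketch, using the provides element from the AppStream specification):

<provides>
  <mediatype>model/stl</mediatype>
</provides>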

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, isenkram.
The 2025 LinuxCNC Norwegian developer gathering
11th January 2025

The LinuxCNC project is trotting along. And I believe this great software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods would do even better with more in-person developer gatherings, so we plan to organise such a gathering this summer too.

This year we would like to invite you to a small LinuxCNC and free software fabrication workshop/gathering in Norway, the weekend starting July 4th 2025. New this year is the slightly larger scope, and we invite people outside the LinuxCNC community to join as well. As earlier, we suggest organizing it as an unconference, where the participants create the program upon arrival.

The location is a metal workshop a 15 minute drive from Gardermoen airport (OSL), where there is a lot of space, and a hotel only 5 minutes away by car. We plan to fire up the barbeque in the evenings.

We track the list of participants on a simple pad, so please add yourself there if you are interested in joining.

The NUUG Foundation has, on our request, offered to handle any money involved with this gathering, in other words holding any sponsor funds and paying any bills. The NUUG Foundation is a spin-off from the NUUG member organisation here in Norway, with long ties to the free software and open standards communities.

As usual we hope to find sponsors to pay for food, lodging and travel.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, linuxcnc.
New lsdvd release 0.18 after ten years
21st December 2024

The rumors of the death of the lsdvd project are slightly exaggerated. Over the last few months, we have been working on fixing and improving it, culminating in a new release last night. The list of changes in the new 0.18 release was announced on the project mailing list.

The most exciting news to me is easy access to the DVDDiscID, which makes it a lot easier to identify DVD duplicates across a large collection of DVDs. During testing it proved very effective at identifying when DVDs in a DVD box (say all Star Wars movies) are identical to DVDs sold individually (like the same Star Wars movies packaged individually).

Because none of the current developers have access to do tarball releases on Sourceforge any more, the release is only available as a git tag in the repository. Let's hope it does not take ten years until the next release. The project is discussing moving away from Sourceforge, but has not yet concluded.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
More than 200 orphaned Debian packages moved to git, 216 to go
11th July 2024

In April, I started migrating orphaned Debian packages without any version control system listed in debian/control to git. This morning, my Debian QA page finally reached 200 QA packages migrated. In reality there are a few more, as the packages uploaded by someone else after my initial upload have disappeared from my QA uploads list. As I am running out of steam and will most likely focus on other parts of Debian moving forward, I hope someone else will find time to continue the migration and bring the number of orphaned packages without any version control system down to zero. Here is the updated recipe if someone wants to help out.

To locate packages to work on, the following one-liner can be used:

PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
  --username=udd-mirror udd -c "select source from sources \
   where release = 'sid' and (vcs_url ilike '%anonscm.debian.org%' \
   OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL \
   OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%' \
   order by random() limit 10;"

Pick a random package from the list and run the latest edition of the script debian-snap-to-salsa with the package name as the argument, to prepare a git repository with the existing packaging. This will download old Debian packages from snapshot.debian.org. Note that very recent uploads will not be included, so check out the package on tracker.debian.org. Next, run gbp buildpackage --git-ignore-new to verify that the package builds as it should, and then visit https://salsa.debian.org/debian/ and make sure there is not already a git repository for the package there. I also run git log -p debian/control and look for Vcs entries, to check if the package used to have a git repository on Alioth that could be a useful starting point moving forward. If all this checks out, I create a new gitlab project below the Debian group on salsa, push the package source there and upload a new version. I also tend to ensure build hardening is enabled, if it proves to be easy, and check if I can easily fix any lintian issues or bug reports. If the process takes more than 20 minutes, I drop it and move on to another package.
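
Condensed into commands, the first steps look something like this (using a placeholder package name):

debian-snap-to-salsa mypackage
cd mypackage-salsa
gbp buildpackage --git-ignore-new
git log -p debian/control    # look for old Vcs-* entries

The rest of the checks are manual, but quick once the repository builds.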

If I found patches in debian/patches/ that were not yet passed upstream, I would send an email to make sure upstream knows about them. This has proved to be a valuable step, and has caused several new releases of software that initially appeared abandoned. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
Some notes from the 2024 LinuxCNC Norwegian developer gathering
10th July 2024

The Norwegian LinuxCNC developer gathering 2024 is over. It was a great and productive weekend, and I am sad that it is over.

Regular readers probably still remember what LinuxCNC is, but here is a quick summary for those who forgot. LinuxCNC is a free software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It eats G-code and produces motor movements and other changes to the physical world, while reading sensor input.

I am not quite sure about the total head count, as not all people were present the entire weekend, but I believe close to 10 people showed their faces at the gathering. The "hard core" of the group, who stayed the entire weekend, were two from Norway, two from Germany and one from England. I am happy with the outcome of the gathering. We managed to wrap up a new stable LinuxCNC release, 2.9.3, and even tested it on real hardware within minutes of the release. The release notes for 2.9.3 are still being written, but should show up on the project site in the next few days. We managed to go through around twenty pull requests and merge them into either the stable release (2.9) or the development branch (master). There are still around thirty pull requests left to process, so we are not out of work yet. We even managed to fix/improve a slightly worn lathe, and experiment with running a mechanical clock using G-code.

The evening barbeque worked well both on Saturday and Sunday. It is quite fun to light up a charcoal grill using compressed air. Sadly the weather was not the best, so we stayed indoors most of the time.

This gathering was made possible partly with sponsoring from Redpill Linpro, Debian and the NUUG Foundation, and we are most grateful for the support. I would also like to thank the local school for lending us some furniture, and of course the rest of the organizer team, Asle and Bosse, for their countless contributions. The gathering was such a success that we want to do it again next year.

We plan to organize the next Norwegian LinuxCNC developer gathering at the end of June next year, the weekend from Friday 27th to Sunday 29th of June 2025. I recommend you reserve the dates in your calendar today. Other related communities are also welcome to join in, for example those working on systems like FreeCAD and opencamlib, as I am sure we have much in common and sharing experiences would be very useful to all involved. We are of course looking for sponsors for this gathering already. The total budget for this gathering was around NOK 25.000 (around EUR 2.300), so our needs are quite modest. Perhaps a machine or tool company would like to help out the free software manufacturing community by sponsoring food, lodging and transport for such a gathering?

Tags: debian, english, linuxcnc.
The 2024 LinuxCNC Norwegian developer gathering
31st May 2024

The LinuxCNC project is still going strong. And I believe this great software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods would do even better with more in-person developer gatherings, so we plan to organise such a gathering this summer too.

The Norwegian LinuxCNC developer gathering takes place the weekend of Friday July 5th to 7th this year, and is open to everyone interested in contributing to LinuxCNC and free software manufacturing. Up to date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, as well as leftover money from last year's gathering from Redpill-Linpro and the NUUG Foundation, we have enough sponsor funds to pay for food, and probably also shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch and add your details on the pad.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, linuxcnc.
45 orphaned Debian packages moved to git, 391 to go
25th April 2024

Nine days ago, I started migrating orphaned Debian packages with no version control system listed in debian/control of the source to git. At the time there were 438 such packages. Now there are 391, according to the UDD. In reality it is slightly fewer, as there is a delay between uploads and UDD updates. In the nine days since, I have thus been able to work my way through ten percent of the packages. I am starting to run out of steam, and hope someone else will also help brush some dust off these packages. Here is a recipe for how to do it. I start by picking a random package, querying the UDD for a list of 10 random packages from the set of remaining packages:

PGPASSWORD="udd-mirror" psql --port=5432 --host=udd-mirror.debian.net \
  --username=udd-mirror udd -c "select source from sources \
   where release = 'sid' and (vcs_url ilike '%anonscm.debian.org%' \
   OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL \
   OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%' \
   order by random() limit 10;"

Next, I visit http://salsa.debian.org/debian and search for the package name, to ensure no git repository already exists. If it does, I clone it and try to get it to an uploadable state, and add the Vcs-* entries in d/control to make the repository more widely known. These packages are a minority, so I will not cover that use case here.

For packages without an existing git repository, I run the following script, debian-snap-to-salsa, to prepare a git repository with the existing packaging.

#!/bin/sh
#
# See also https://bugs.debian.org/804722#31

set -e

# Move to this Standards-Version.
SV_LATEST=4.7.0

PKG="$1"

if [ -z "$PKG" ]; then
    echo "usage: $0 "
    exit 1
fi

if [ -e "${PKG}-salsa" ]; then
    echo "error: ${PKG}-salsa already exist, aborting."
    exit 1
fi

if [ -z "ALLOWFAILURE" ] ; then
    ALLOWFAILURE=false
fi

# Fetch every snapshotted source package.  Manually loop until all
# transfers succeed, as 'gbp import-dscs --debsnap' does not fail on
# download failures.
until debsnap --force -v $PKG || $ALLOWFAILURE ; do sleep 1; done
mkdir ${PKG}-salsa; cd ${PKG}-salsa
git init

# Specify branches to override any debian/gbp.conf file present in the
# source package.
gbp import-dscs  --debian-branch=master --upstream-branch=upstream \
    --pristine-tar ../source-$PKG/*.dsc

# Add Vcs pointing to Salsa Debian project (must be manually created
# and pushed to).
if ! grep -q ^Vcs- debian/control ; then
    awk "BEGIN { s=1 } /^\$/ { if (s==1) { print \"Vcs-Browser: https://salsa.debian.org/debian/$PKG\"; print \"Vcs-Git: https://salsa.debian.org/debian/$PKG.git\" }; s=0 } { print }" < debian/control > debian/control.new && mv debian/control.new debian/control
    git commit -m "Updated vcs in d/control to Salsa." debian/control
fi

# Tell gbp to enforce the use of pristine-tar.
inifile +inifile debian/gbp.conf +create +section DEFAULT +key pristine-tar +value True
git add debian/gbp.conf
git commit -m "Added d/gbp.conf to enforce the use of pristine-tar." debian/gbp.conf

# Update to latest Standards-Version.
SV="$(grep ^Standards-Version: debian/control|awk '{print $2}')"
if [ "$SV_LATEST" != "$SV" ]; then
    sed -i "s/\(Standards-Version: \)\(.*\)/\1$SV_LATEST/" debian/control
    git commit -m "Updated Standards-Version from $SV to $SV_LATEST." debian/control
fi

if grep -q pkg-config debian/control; then
    sed -i s/pkg-config/pkgconf/ debian/control
    git commit -m "Replaced obsolete pkg-config build dependency with pkgconf." debian/control
fi

if grep -q libncurses5-dev debian/control; then
    sed -i s/libncurses5-dev/libncurses-dev/ debian/control
    git commit -m "Replaced obsolete libncurses5-dev build dependency with libncurses-dev." debian/control
fi

Sometimes the debsnap script fails to download some of the versions. In those cases I investigate, and if I decide the failing versions will not be missed, I call the script with ALLOWFAILURE=true to ignore the problem and create the git repository anyway.
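
For example:

ALLOWFAILURE=true ./debian-snap-to-salsa mypackage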

With the git repository in place, I do a test build (gbp buildpackage) to ensure the build is actually working. If it is not, I pick a different package, or if the build failure is trivial to fix, I fix it before continuing. At this stage I revisit http://salsa.debian.org/debian and create the project under this group for the package. I then follow the instructions to publish the local git repository. Here is an example from a recent package:

git remote add origin git@salsa.debian.org:debian/perl-byacc.git
git push --set-upstream origin master upstream pristine-tar
git push --tags

With a working build, I have a look at the build rules to see if I want to remove some more dust. I normally try to move to debhelper compat level 13, which involves removing debian/compat and modifying debian/control to build depend on debhelper-compat (= 13). I also test with 'Rules-Requires-Root: no' in debian/control and verify in debian/rules that hardening is enabled, and include all of these if the package still builds. If it fails to build with level 13, I try with 12, 11, 10 and so on until I find a level where it builds, as I do not want to spend a lot of time fixing build issues.
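
In debian/control, that build dependency ends up looking like this:

Build-Depends: debhelper-compat (= 13)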

Sometimes, when I feel inspired, I make sure debian/copyright is converted to the machine readable format, often by starting with 'debmake -cc' and then cleaning up the autogenerated content until it matches reality. If I feel like it, I might also clean up non-dh-based debian/rules files to use the short-style dh build rules.

Once I have removed all the dust I care to process for the package, I run 'gbp dch' to generate a debian/changelog entry based on the commits done so far, run 'dch -r' to switch from 'UNRELEASED' to 'unstable' and get an editor to make sure the 'QA upload' marker is in place and that all long commit descriptions are wrapped to sensible lengths, run 'debcommit --release -a' to commit and tag the new debian/changelog entry, run 'debuild -S' to build a source-only package, and 'dput ../perl-byacc_2.0-10_source.changes' to do the upload. During the entire process, and many times per step, I run 'debuild' to verify that the changes still work. I also sometimes verify the set of built files using 'find debian', to see if I can spot any problems (like no files in usr/bin any more, or an empty package). I also try to fix all lintian issues reported at the end of each 'debuild' run.
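
For easier reference, the release steps boil down to this sequence (the .changes file name will of course vary):

gbp dch
dch -r
debcommit --release -a
debuild -S
dput ../perl-byacc_2.0-10_source.changes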

If I find Debian specific patches, I try to ensure their metadata is fairly up to date, and sometimes I even try to reach out to upstream, to make the upstream project aware of the patches. Most of my emails bounce, so the success rate is low. For projects with no Homepage entry in debian/control I try to track one down, and for packages with no debian/watch file I try to create one. But at least for some of the packages I have been unable to find a functioning upstream, and must skip both of these.

If I could handle ten percent in nine days, twenty people could complete the rest in less than five days. I use approximately twenty minutes per package, when I have twenty minutes of spare time to spend. Perhaps you have twenty minutes to spare too?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2024-05-04: There is an updated edition of my migration script, last updated 2024-05-04.

Tags: debian, english.
RAID status from LSI Megaraid controllers in Debian
17th April 2024

I am happy to report that the megactl package, useful for fetching RAID status when using the LSI Megaraid controller, is now available in Debian. It passed NEW a few days ago, and is now available in unstable, probably showing up in testing in a week's time. The new version should provide AppStream hardware mapping and should integrate nicely with isenkram.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, isenkram, raid.
Time to move orphaned Debian packages to git
14th April 2024

There are several packages in Debian without an associated git repository containing the packaging history. This is unfortunate, and it would be nice if more of them had one. Quite a lot of these are without a maintainer, i.e. listed as maintained by the 'Debian QA Group' placeholder. In fact, 438 packages have this property according to UDD (SELECT source FROM sources WHERE release = 'sid' AND (vcs_url ilike '%anonscm.debian.org%' OR vcs_browser ilike '%anonscm.debian.org%' or vcs_url IS NULL OR vcs_browser IS NULL) AND maintainer ilike '%packages@qa.debian.org%';). Such packages can be updated without much coordination by any Debian developer, as they are considered orphaned.

To try to improve the situation and reduce the number of packages without an associated git repository, I started a few days ago to search out candidates and provide them with a git repository under the 'debian' collaborative Salsa project. I started with the packages pointing to obsolete Alioth git repositories, and am now working my way through the ones completely without git references. In addition to updating the Vcs-* debian/control fields, I try to update Standards-Version, the debhelper compat level, simplify d/rules, switch to Rules-Requires-Root: no and fix the lintian issues reported. I only implement the fixes that are trivial, to avoid spending too much time on each orphaned package. So far my experience is that it takes approximately 20 minutes to convert a package without any git references, and a lot more for packages with existing git repositories incompatible with git-buildpackage.

So far I have converted 10 packages, and I will keep going until I run out of steam. As should be clear from the numbers, there are enough packages remaining for more people to do the same without stepping on each other's toes. I find it useful to start by searching for a git repo already on salsa, as sometimes a git repo has already been created but no new version has been uploaded to Debian yet. In those cases I start with the existing git repository. I convert to the git-buildpackage+pristine-tar workflow, and ensure a debian/gbp.conf file with "pristine-tar=True" is added early, to avoid uploading an orig.tar.gz with the wrong checksum by mistake. I did that three times in the beginning before I noticed my mistake.

So, if you are a Debian Developer with some spare time, perhaps consider migrating some orphaned packages to git?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
Plain text accounting file from your bitcoin transactions
7th March 2024

A while back I wrote a small script to extract the Bitcoin transactions in a wallet in the ledger plain text accounting format. The last few days I have spent some time getting it to work better with more special cases. In case it can be useful for others, here is a copy:

#!/usr/bin/python3
#  -*- coding: utf-8 -*-
#  Copyright (c) 2023-2024 Petter Reinholdtsen

from decimal import Decimal
import json
import subprocess
import time

import numpy

def format_float(num):
    return numpy.format_float_positional(num, trim='-')

accounts = {
    u'amount' : 'Assets:BTC:main',
}

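# Map bitcoin addresses to ledger accounts.  Fill in your own
# addresses below; the empty keys are placeholders.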
addresses = {
    '' : 'Assets:bankkonto',
    '' : 'Assets:bankkonto',
}

def exec_json(cmd):
    proc = subprocess.Popen(cmd,stdout=subprocess.PIPE)
    j = json.loads(proc.communicate()[0], parse_float=Decimal)
    return j

def list_txs():
    # get all transactions for all accounts / addresses
    c = 0
    txs = []
    txidfee = {}
    limit=100000
    cmd = ['bitcoin-cli', 'listtransactions', '*', str(limit)]
    if True:
        txs.extend(exec_json(cmd))
    else:
        # Useful for debugging
        with open('transactions.json') as f:
            txs.extend(json.load(f, parse_float=Decimal))
    #print txs
    for tx in sorted(txs, key=lambda a: a['time']):
#        print tx['category']
        if 'abandoned' in tx and tx['abandoned']:
            continue
        if 'confirmations' in tx and 0 >= tx['confirmations']:
            continue
        when = time.strftime('%Y-%m-%d %H:%M', time.localtime(tx['time']))
        if 'message' in tx:
            desc = tx['message']
        elif 'comment' in tx:
            desc = tx['comment']
        elif 'label' in tx:
            desc = tx['label']
        else:
            desc = 'n/a'
        print("%s %s" % (when, desc))
        if 'address' in tx:
            print("  ; to bitcoin address %s" % tx['address'])
        else:
            print("  ; missing address in transaction, txid=%s" % tx['txid'])
        print(f"  ; amount={tx['amount']}")
        if 'fee' in tx:
            print(f"  ; fee={tx['fee']}")
        for f in accounts.keys():
            if f in tx and Decimal(0) != tx[f]:
                amount = tx[f]
                print("  %-20s   %s BTC" % (accounts[f], format_float(amount)))
        if 'fee' in tx and Decimal(0) != tx['fee']:
            # Make sure to list fee used in several transactions only once.
            if 'fee' in tx and tx['txid'] in txidfee \
               and tx['fee'] == txidfee[tx['txid']]:
                pass
            else:
                fee = tx['fee']
                print("  %-20s   %s BTC" % (accounts['amount'], format_float(fee)))
                print("  %-20s   %s BTC" % ('Expences:BTC-fee', format_float(-fee)))
                txidfee[tx['txid']] = tx['fee']

        if 'address' in tx and tx['address'] in addresses:
            print("  %s" % addresses[tx['address']])
        else:
            if 'generate' == tx['category']:
                print("  Income:BTC-mining")
            else:
                if amount < Decimal(0):
                    print(f"  Assets:unknown:sent:update-script-addr-{tx['address']}")
                else:
                    print(f"  Assets:unknown:received:update-script-addr-{tx['address']}")

        print()
        c = c + 1
    print("# Found %d transactions" % c)
    if limit == c:
        print(f"# Warning: Limit {limit} reached, consider increasing limit.")

def main():
    list_txs()

main()

It is more of a proof of concept, and I do not expect it to handle all edge cases, but it worked for me, and perhaps you can find it useful too.

To get a more interesting result, it is useful to map the addresses sent to or received from to accounting accounts, using the addresses hash. As these will be very context dependent, I leave out my list to allow each user to fill out their own list of accounts. Out of the box, 'ledger reg BTC:main' should be able to show the amount of BTC present in the wallet at any given time in the past. For other and more valuable analysis, an account plan needs to be set up in the addresses hash. Here is an example transaction:

2024-03-07 17:00 Donated to good cause
    Assets:BTC:main                           -0.1 BTC
    Assets:BTC:main                       -0.00001 BTC
    Expences:BTC-fee                       0.00001 BTC
    Expences:donations                         0.1 BTC

It needs a running Bitcoin Core daemon, as it connects to it using bitcoin-cli listtransactions * 100000 to extract the transactions listed in the wallet.
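
Assuming the script is saved as btc2ledger (the name is my own invention, pick whatever you like), usage could look like this:

./btc2ledger > bitcoin.ledger
ledger -f bitcoin.ledger reg BTC:main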

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
RAID status from LSI Megaraid controllers using free software
3rd March 2024

The last few days I have revisited RAID setup using the LSI Megaraid controller. These are a family of controllers called PERC by Dell, present in several old PowerEdge servers, and I recently got my hands on one of these. I had forgotten how to handle this RAID controller in Debian, so I had to take a peek in the Debian wiki page "Linux and Hardware RAID: an administrator's summary" to remember what kind of software is available to configure and monitor the disks and controller. I prefer free software alternatives to proprietary tools, as the latter tend to fall into disarray once the manufacturer loses interest, and often do not work with newer Linux distributions. Sadly there is no free software tool to configure the RAID setup, only to monitor it. RAID can provide improved reliability and resilience in a storage solution, but only if it is being regularly checked and any broken disks are replaced in time. I thus want to ensure some automatic monitoring is available.

In the discovery process, I came across an old free software tool to monitor PERC2, PERC3, PERC4 and PERC5 controllers, which to my surprise is not present in Debian. To help change that, I created a request for packaging of the megactl package, and tried to track down a usable version. The original project site is on Sourceforge, but as far as I can tell that project has been dead for more than 15 years. I managed to find a more recent fork on github from user hmage, but it is unclear to me if this is still being maintained, and it has not seen many improvements since 2016. A more up to date edition is a git fork of the original github fork by user namiltd, and this newer fork seems a lot more promising. The owner of this github repository has replied to change proposals within hours, and had already added some improvements and support for more hardware. Sadly he is reluctant to commit to maintaining the tool, and stated in my first pull request that he thinks a new release should be made based on the git repository owned by hmage. I perfectly understand this reluctance, as I feel the same about maintaining yet another package in Debian when I barely have time to take care of the ones I already maintain, but I do not really have high hopes that hmage will have time to spend on it, and hope namiltd will change his mind.

In any case, I created a draft package based on the namiltd edition and put it under the debian group on salsa.debian.org. If you own a Dell PowerEdge server with one of the PERC controllers, or any other RAID controller using the megaraid or megaraid_sas Linux kernel modules, you might want to check it out. If enough people are interested, perhaps the package will make it into the Debian archive.

There are two tools provided, megactl for the megaraid Linux kernel module, and megasasctl for the megaraid_sas Linux kernel module. The simple output from the command on one of my machines looks like this (yes, I know some of the disks have problems :).

# megasasctl 
a0       PERC H730 Mini           encl:1 ldrv:2  batt:good
a0d0       558GiB RAID 1   1x2  optimal
a0d1      3067GiB RAID 0   1x11 optimal
a0e32s0     558GiB  a0d0  online   errs: media:0  other:19
a0e32s1     279GiB  a0d1  online  
a0e32s2     279GiB  a0d1  online  
a0e32s3     279GiB  a0d1  online  
a0e32s4     279GiB  a0d1  online  
a0e32s5     279GiB  a0d1  online  
a0e32s6     279GiB  a0d1  online  
a0e32s8     558GiB  a0d0  online   errs: media:0  other:17
a0e32s9     279GiB  a0d1  online  
a0e32s10    279GiB  a0d1  online  
a0e32s11    279GiB  a0d1  online  
a0e32s12    279GiB  a0d1  online  
a0e32s13    279GiB  a0d1  online  

#

In addition to displaying a simple status report, it can also test individual drives and print the various event logs. Perhaps you too will find it useful?

In the packaging process I provided some patches upstream to improve installation and to ensure an AppStream metainfo file is provided listing all supported hardware, to allow isenkram to propose the package on all servers with a relevant PCI card.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, isenkram, raid.
Welcome out of prison, Mickey, hope you find some freedom!
1st January 2024

Today, the animation figure Mickey Mouse was finally released from the corporate copyright prison, as the 1928 movie Steamboat Willie entered the public domain in the USA. This movie was the first public appearance of Mickey Mouse. Sadly the figure is still on probation, thanks to trademark laws and the Disney corporation's powerful pack of lawyers, as described in the 2017 Priceonomics article "How Mickey Mouse Evades the Public Domain". On the positive side, the primary driver for repeated extensions of the duration of copyright has been Disney, thanks to Mickey Mouse and the 1928 movie, and as it is now in the public domain I hope it will cause less urge to extend the already unreasonably long copyright duration.

The first book I published, the 2004 book "Free Culture" by Lawrence Lessig, which I published in 2015 in English, French and Norwegian Bokmål, touches on the story of how Disney pushed for extending the copyright duration in the USA. It is a great book explaining the problems with the current copyright regime and why we need the Creative Commons movement, and I strongly recommend everyone to read it.

This movie (with IMDB ID tt0019422) is now available from the Internet Archive. Two copies have been uploaded so far, one uploaded 2015-11-04 (torrent) and the other 2023-01-01 (torrent) - see VLC bittorrent plugin for streaming the video using the torrent link. I am very happy to see the number of public domain movies increasing. I look forward to when those are the majority. Perhaps it will reduce the urge of the copyright industry to control its customers.

A more comprehensive list of works entering the public domain in 2024 is available from the Public Domain Review.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, opphavsrett, verkidetfri.
VLC bittorrent plugin still going strong, new upload 2.14-4
31st December 2023

The other day I uploaded a new version of the VLC bittorrent plugin to Debian, version 2.14-4, to fix a few packaging issues. This plugin extends VLC, allowing it to stream videos directly from a bittorrent source using both torrent files and magnet links, as easily as using a HTTP or local file source. I believe such protocol support is a vital feature in VLC, allowing efficient streaming from sources such as the 11 million movies in the Internet Archive. Bittorrent is one of the most efficient content distribution protocols on the Internet, without centralised control, and should be used more.

The new version is now in both Debian Unstable and Testing, as well as Ubuntu. While looking after the package, I decided to ask the VLC upstream community if there was any hope of getting Bittorrent support into the official VLC program, and was very happy to learn that someone is already working on it. I hope we can see some fruits of that labour next year, but I am not holding my breath. In the meantime we can use the plugin, which is already installed by 0.23 percent of the Debian population according to popularity-contest. It could use a new upstream release, and I hope the upstream developer soon finds time to polish it even more.

It is worth noting that the plugin stores the downloaded files in ~/Downloads/vlc-bittorrent/, which can quickly fill up the user's home directory during use. Users of the plugin should keep an eye on disk usage when streaming a bittorrent source.
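
A quick way to check how much space the plugin is currently using:

du -sh ~/Downloads/vlc-bittorrent/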

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, verkidetfri, video.
New and improved sqlcipher in Debian for accessing Signal database
12th November 2023

For a while now I have wanted direct access to the Signal database of messages and channels in my desktop edition of Signal. I prefer the enforced end-to-end encryption of Signal these days for my communication with friends and family, to increase the level of safety and privacy, as well as to raise the cost of the mass surveillance that government and non-government entities practice these days. In August I came across a nice recipe explaining how to use sqlcipher to extract statistics from the Signal database. Unfortunately this did not work with the version of sqlcipher in Debian. The sqlcipher package is a "fork" of the sqlite package with added support for encrypted databases. Sadly the current Debian maintainer announced more than three years ago that he did not have time to maintain sqlcipher, so it seemed unlikely to be upgraded by the maintainer. I was reluctant to take on the job myself, as I have very limited experience maintaining shared libraries in Debian. After waiting and hoping for a few months, I gave up last week, and set out to update the package. In the process I orphaned it, to make it more obvious to the next person looking at it that the package needs proper maintenance.

The version in Debian was around five years old, and quite a lot of upstream changes needed to be imported into the Debian maintenance git repository. After spending a few days importing the new upstream versions, and realising that upstream did not care much for SONAME versioning, as I saw library symbols being both added and removed between minor version numbers of the project, I concluded that I had to do a SONAME bump of the library package to avoid surprising the reverse dependencies. I even added a simple autopkgtest script to ensure the package works as intended. Having dug deep into the hole of learning shared library maintenance, I set out a few days ago to upload the new version to Debian experimental, to see what the quality assurance framework in Debian had to say about the result. The feedback told me the package was not too shabby, and yesterday I uploaded the latest version to Debian unstable. It should enter testing today or tomorrow, perhaps delayed by a small library transition.
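
For those unfamiliar with autopkgtest, here is a minimal sketch of what such a test setup can look like, with a debian/tests/control file declaring the test and a small shell script creating and reading back an encrypted database. This is only an illustration, not necessarily identical to what ended up in the package:

# debian/tests/control
Tests: cli-roundtrip
Depends: sqlcipher

# debian/tests/cli-roundtrip
#!/bin/sh
set -e
cd "$AUTOPKGTEST_TMP"
sqlcipher test.db "PRAGMA key = 'secret'; CREATE TABLE t(v); INSERT INTO t VALUES(42);"
sqlcipher test.db "PRAGMA key = 'secret'; SELECT v FROM t;" | grep -qx 42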

Armed with a new version of sqlcipher, I can now have a look at the SQL database in ~/.config/Signal/sql/db.sqlite. First, one needs to fetch the encryption key from the Signal configuration, using this simple JSON extraction command:

/usr/bin/jq -r '."key"' ~/.config/Signal/config.json

Assume the result from that command is 'secretkey', a hexadecimal string representing the key used to encrypt the database. One can then connect to the database and inject the encryption key to get SQL access to the information in it. Here is an example dumping the database structure:

% sqlcipher ~/.config/Signal/sql/db.sqlite
sqlite> PRAGMA key = "x'secretkey'";
sqlite> .schema
CREATE TABLE sqlite_stat1(tbl,idx,stat);
CREATE TABLE conversations(
      id STRING PRIMARY KEY ASC,
      json TEXT,

      active_at INTEGER,
      type STRING,
      members TEXT,
      name TEXT,
      profileName TEXT
    , profileFamilyName TEXT, profileFullName TEXT, e164 TEXT, serviceId TEXT, groupId TEXT, profileLastFetchedAt INTEGER);
CREATE TABLE identityKeys(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE items(
      id STRING PRIMARY KEY ASC,
      json TEXT
    );
CREATE TABLE sessions(
      id TEXT PRIMARY KEY,
      conversationId TEXT,
      json TEXT
    , ourServiceId STRING, serviceId STRING);
CREATE TABLE attachment_downloads(
    id STRING primary key,
    timestamp INTEGER,
    pending INTEGER,
    json TEXT
  );
CREATE TABLE sticker_packs(
    id TEXT PRIMARY KEY,
    key TEXT NOT NULL,

    author STRING,
    coverStickerId INTEGER,
    createdAt INTEGER,
    downloadAttempts INTEGER,
    installedAt INTEGER,
    lastUsed INTEGER,
    status STRING,
    stickerCount INTEGER,
    title STRING
  , attemptedStatus STRING, position INTEGER DEFAULT 0 NOT NULL, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync
      INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE stickers(
    id INTEGER NOT NULL,
    packId TEXT NOT NULL,

    emoji STRING,
    height INTEGER,
    isCoverOnly INTEGER,
    lastUsed INTEGER,
    path STRING,
    width INTEGER,

    PRIMARY KEY (id, packId),
    CONSTRAINT stickers_fk
      FOREIGN KEY (packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE sticker_references(
    messageId STRING,
    packId TEXT,
    CONSTRAINT sticker_references_fk
      FOREIGN KEY(packId)
      REFERENCES sticker_packs(id)
      ON DELETE CASCADE
  );
CREATE TABLE emojis(
    shortName TEXT PRIMARY KEY,
    lastUsage INTEGER
  );
CREATE TABLE messages(
        rowid INTEGER PRIMARY KEY ASC,
        id STRING UNIQUE,
        json TEXT,
        readStatus INTEGER,
        expires_at INTEGER,
        sent_at INTEGER,
        schemaVersion INTEGER,
        conversationId STRING,
        received_at INTEGER,
        source STRING,
        hasAttachments INTEGER,
        hasFileAttachments INTEGER,
        hasVisualMediaAttachments INTEGER,
        expireTimer INTEGER,
        expirationStartTimestamp INTEGER,
        type STRING,
        body TEXT,
        messageTimer INTEGER,
        messageTimerStart INTEGER,
        messageTimerExpiresAt INTEGER,
        isErased INTEGER,
        isViewOnce INTEGER,
        sourceServiceId TEXT, serverGuid STRING NULL, sourceDevice INTEGER, storyId STRING, isStory INTEGER
        GENERATED ALWAYS AS (type IS 'story'), isChangeCreatedByUs INTEGER NOT NULL DEFAULT 0, isTimerChangeFromSync INTEGER
        GENERATED ALWAYS AS (
          json_extract(json, '$.expirationTimerUpdate.fromSync') IS 1
        ), seenStatus NUMBER default 0, storyDistributionListId STRING, expiresAt INT
        GENERATED ALWAYS
        AS (ifnull(
          expirationStartTimestamp + (expireTimer * 1000),
          9007199254740991
        )), shouldAffectActivity INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), shouldAffectPreview INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), isUserInitiatedMessage INTEGER
        GENERATED ALWAYS AS (
          type IS NULL
          OR
          type NOT IN (
            'change-number-notification',
            'contact-removed-notification',
            'conversation-merge',
            'group-v1-migration',
            'group-v2-change',
            'keychange',
            'message-history-unsynced',
            'profile-change',
            'story',
            'universal-timer-notification',
            'verified-change'
          )
        ), mentionsMe INTEGER NOT NULL DEFAULT 0, isGroupLeaveEvent INTEGER
        GENERATED ALWAYS AS (
          type IS 'group-v2-change' AND
          json_array_length(json_extract(json, '$.groupV2Change.details')) IS 1 AND
          json_extract(json, '$.groupV2Change.details[0].type') IS 'member-remove' AND
          json_extract(json, '$.groupV2Change.from') IS NOT NULL AND
          json_extract(json, '$.groupV2Change.from') IS json_extract(json, '$.groupV2Change.details[0].aci')
        ), isGroupLeaveEventFromOther INTEGER
        GENERATED ALWAYS AS (
          isGroupLeaveEvent IS 1
          AND
          isChangeCreatedByUs IS 0
        ), callId TEXT
        GENERATED ALWAYS AS (
          json_extract(json, '$.callId')
        ));
CREATE TABLE sqlite_stat4(tbl,idx,neq,nlt,ndlt,sample);
CREATE TABLE jobs(
        id TEXT PRIMARY KEY,
        queueType TEXT STRING NOT NULL,
        timestamp INTEGER NOT NULL,
        data STRING TEXT
      );
CREATE TABLE reactions(
        conversationId STRING,
        emoji STRING,
        fromId STRING,
        messageReceivedAt INTEGER,
        targetAuthorAci STRING,
        targetTimestamp INTEGER,
        unread INTEGER
      , messageId STRING);
CREATE TABLE senderKeys(
        id TEXT PRIMARY KEY NOT NULL,
        senderId TEXT NOT NULL,
        distributionId TEXT NOT NULL,
        data BLOB NOT NULL,
        lastUpdatedDate NUMBER NOT NULL
      );
CREATE TABLE unprocessed(
        id STRING PRIMARY KEY ASC,
        timestamp INTEGER,
        version INTEGER,
        attempts INTEGER,
        envelope TEXT,
        decrypted TEXT,
        source TEXT,
        serverTimestamp INTEGER,
        sourceServiceId STRING
      , serverGuid STRING NULL, sourceDevice INTEGER, receivedAtCounter INTEGER, urgent INTEGER, story INTEGER);
CREATE TABLE sendLogPayloads(
        id INTEGER PRIMARY KEY ASC,

        timestamp INTEGER NOT NULL,
        contentHint INTEGER NOT NULL,
        proto BLOB NOT NULL
      , urgent INTEGER, hasPniSignatureMessage INTEGER DEFAULT 0 NOT NULL);
CREATE TABLE sendLogRecipients(
        payloadId INTEGER NOT NULL,

        recipientServiceId STRING NOT NULL,
        deviceId INTEGER NOT NULL,

        PRIMARY KEY (payloadId, recipientServiceId, deviceId),

        CONSTRAINT sendLogRecipientsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE sendLogMessageIds(
        payloadId INTEGER NOT NULL,

        messageId STRING NOT NULL,

        PRIMARY KEY (payloadId, messageId),

        CONSTRAINT sendLogMessageIdsForeignKey
          FOREIGN KEY (payloadId)
          REFERENCES sendLogPayloads(id)
          ON DELETE CASCADE
      );
CREATE TABLE preKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE signedPreKeys(
        id STRING PRIMARY KEY ASC,
        json TEXT
      , ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE badges(
        id TEXT PRIMARY KEY,
        category TEXT NOT NULL,
        name TEXT NOT NULL,
        descriptionTemplate TEXT NOT NULL
      );
CREATE TABLE badgeImageFiles(
        badgeId TEXT REFERENCES badges(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        'order' INTEGER NOT NULL,
        url TEXT NOT NULL,
        localPath TEXT,
        theme TEXT NOT NULL
      );
CREATE TABLE storyReads (
        authorId STRING NOT NULL,
        conversationId STRING NOT NULL,
        storyId STRING NOT NULL,
        storyReadDate NUMBER NOT NULL,

        PRIMARY KEY (authorId, storyId)
      );
CREATE TABLE storyDistributions(
        id STRING PRIMARY KEY NOT NULL,
        name TEXT,

        senderKeyInfoJson STRING
      , deletedAtTimestamp INTEGER, allowsReplies INTEGER, isBlockList INTEGER, storageID STRING, storageVersion INTEGER, storageUnknownFields BLOB, storageNeedsSync INTEGER);
CREATE TABLE storyDistributionMembers(
        listId STRING NOT NULL REFERENCES storyDistributions(id)
          ON DELETE CASCADE
          ON UPDATE CASCADE,
        serviceId STRING NOT NULL,

        PRIMARY KEY (listId, serviceId)
      );
CREATE TABLE uninstalled_sticker_packs (
        id STRING NOT NULL PRIMARY KEY,
        uninstalledAt NUMBER NOT NULL,
        storageID STRING,
        storageVersion NUMBER,
        storageUnknownFields BLOB,
        storageNeedsSync INTEGER NOT NULL
      );
CREATE TABLE groupCallRingCancellations(
        ringId INTEGER PRIMARY KEY,
        createdAt INTEGER NOT NULL
      );
CREATE TABLE IF NOT EXISTS 'messages_fts_data'(id INTEGER PRIMARY KEY, block BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_idx'(segid, term, pgno, PRIMARY KEY(segid, term)) WITHOUT ROWID;
CREATE TABLE IF NOT EXISTS 'messages_fts_content'(id INTEGER PRIMARY KEY, c0);
CREATE TABLE IF NOT EXISTS 'messages_fts_docsize'(id INTEGER PRIMARY KEY, sz BLOB);
CREATE TABLE IF NOT EXISTS 'messages_fts_config'(k PRIMARY KEY, v) WITHOUT ROWID;
CREATE TABLE edited_messages(
        messageId STRING REFERENCES messages(id)
          ON DELETE CASCADE,
        sentAt INTEGER,
        readStatus INTEGER
      , conversationId STRING);
CREATE TABLE mentions (
        messageId REFERENCES messages(id) ON DELETE CASCADE,
        mentionAci STRING,
        start INTEGER,
        length INTEGER
      );
CREATE TABLE kyberPreKeys(
        id STRING PRIMARY KEY NOT NULL,
        json TEXT NOT NULL, ourServiceId NUMBER
        GENERATED ALWAYS AS (json_extract(json, '$.ourServiceId')));
CREATE TABLE callsHistory (
        callId TEXT PRIMARY KEY,
        peerId TEXT NOT NULL, -- conversation id (legacy) | uuid | groupId | roomId
        ringerId TEXT DEFAULT NULL, -- ringer uuid
        mode TEXT NOT NULL, -- enum "Direct" | "Group"
        type TEXT NOT NULL, -- enum "Audio" | "Video" | "Group"
        direction TEXT NOT NULL, -- enum "Incoming" | "Outgoing
        -- Direct: enum "Pending" | "Missed" | "Accepted" | "Deleted"
        -- Group: enum "GenericGroupCall" | "OutgoingRing" | "Ringing" | "Joined" | "Missed" | "Declined" | "Accepted" | "Deleted"
        status TEXT NOT NULL,
        timestamp INTEGER NOT NULL,
        UNIQUE (callId, peerId) ON CONFLICT FAIL
      );
[ dropped all indexes to save space in this blog post ]
CREATE TRIGGER messages_on_view_once_update AFTER UPDATE ON messages
      WHEN
        new.body IS NOT NULL AND new.isViewOnce = 1
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
      END;
CREATE TRIGGER messages_on_insert AFTER INSERT ON messages
      WHEN new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_delete AFTER DELETE ON messages BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        DELETE FROM sendLogPayloads WHERE id IN (
          SELECT payloadId FROM sendLogMessageIds
          WHERE messageId = old.id
        );
        DELETE FROM reactions WHERE rowid IN (
          SELECT rowid FROM reactions
          WHERE messageId = old.id
        );
        DELETE FROM storyReads WHERE storyId = old.storyId;
      END;
CREATE VIRTUAL TABLE messages_fts USING fts5(
        body,
        tokenize = 'signal_tokenizer'
      );
CREATE TRIGGER messages_on_update AFTER UPDATE ON messages
      WHEN
        (new.body IS NULL OR old.body IS NOT new.body) AND
         new.isViewOnce IS NOT 1 AND new.storyId IS NULL
      BEGIN
        DELETE FROM messages_fts WHERE rowid = old.rowid;
        INSERT INTO messages_fts
          (rowid, body)
        VALUES
          (new.rowid, new.body);
      END;
CREATE TRIGGER messages_on_insert_insert_mentions AFTER INSERT ON messages
      BEGIN
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
CREATE TRIGGER messages_on_update_update_mentions AFTER UPDATE ON messages
      BEGIN
        DELETE FROM mentions WHERE messageId = new.id;
        INSERT INTO mentions (messageId, mentionAci, start, length)
        
    SELECT messages.id, bodyRanges.value ->> 'mentionAci' as mentionAci,
      bodyRanges.value ->> 'start' as start,
      bodyRanges.value ->> 'length' as length
    FROM messages, json_each(messages.json ->> 'bodyRanges') as bodyRanges
    WHERE bodyRanges.value ->> 'mentionAci' IS NOT NULL
  
        AND messages.id = new.id;
      END;
sqlite>

Finally I have the tool I need to inspect and process Signal messages, without using the vendor provided client. Now on to transforming them into a more useful format.
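
To give an idea of what such processing could look like, here is a minimal sqlite3 invocation listing a few message bodies. The database file name is an assumption on my part (it assumes the decrypted database has been copied to signal.sqlite); the id, body and json columns can be seen in the schema above.

sqlite3 signal.sqlite \
  "SELECT id, body FROM messages WHERE body IS NOT NULL LIMIT 10;"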

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, sikkerhet, surveillance.
New chrpath release 0.17
10th November 2023

The chrpath package provides a simple command line tool to remove or modify the rpath or runpath of compiled ELF programs. It is almost 10 years since I last updated the code base, but I stumbled over the tool today and decided it was time to move the code base from Subversion to git and find a new home for it, as the previous one (Debian Alioth) has been shut down. I decided to go with Codeberg this time, as it is my git service of choice these days, did a quick and dirty migration to git and updated the code with a few patches I found in the Debian bug tracker. These are the release notes:
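
In case you have never used the tool, this is roughly how it is invoked. The binary path is just an example:

# Show the current rpath/runpath of a binary
chrpath -l /usr/local/bin/someprogram
# Replace the rpath with a new value (the new path can not be longer
# than the old one, as the string is rewritten in place)
chrpath -r /usr/local/lib /usr/local/bin/someprogram
# Remove the rpath entirely
chrpath -d /usr/local/bin/someprogram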

New in 0.17 released 2023-11-10:

The latest edition is tagged and available from https://codeberg.org/pere/chrpath.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: chrpath, debian, english.
Test framework for DocBook processors / formatters
5th November 2023

All the books I have published so far have been using DocBook somewhere in the process. For the first book, the source format was DocBook, while for every later book it was an intermediate format, used as the stepping stone to present the same manuscript in several formats: on paper, as an ebook in ePub format, as an HTML page and as a PDF file either for paper production or for Internet consumption. This is made possible with a wide variety of free software tools with DocBook support in Debian. The source formats of later books have been docx via rst, Markdown, Filemaker and Asciidoc, and for all of these I was able to generate a suitable DocBook file for further processing using pandoc, a2x and asciidoctor, as well as rendering using xmlto, dbtoepub, dblatex, docbook-xsl and fop.
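
As an illustration, converting a manuscript from Markdown or Asciidoc to DocBook can be done along these lines. The file names are made up for the example:

pandoc -s -f markdown -t docbook manuscript.md -o manuscript.xml
asciidoctor -b docbook5 -o manuscript.xml manuscript.adoc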

Most of the books I have published are translated books, with English as the source language. The use of po4a to handle translations using the gettext PO format has been a blessing, but publishing translated books has triggered the need to ensure the DocBook tools handle the relevant languages correctly. For every new language I have published, I had to submit patches to dblatex, dbtoepub and docbook-xsl fixing incorrect language and country specific issues in the frameworks themselves. Typically this has been missing keywords like 'figure' or the sort ordering of index entries. After a while it became tiresome to only discover issues like this by accident, and I decided to write a DocBook "test framework" exercising various features of DocBook and allowing me to see all features exercised for a given language. It consists of a set of DocBook files, a version 4 book, a version 5 book, a v4 book set, a v4 selection of problematic tables, one v4 testing sidefloat and finally one v4 testing a book of articles. The DocBook files are accompanied by a set of build rules for building PDF using dblatex and docbook-xsl/fop, HTML using xmlto or docbook-xsl and epub using dbtoepub. The result is a set of files visualizing footnotes, indexes, table of contents, figures, formulas and other DocBook features, allowing for a quick review of the completeness of the given locale settings. To build with a different language setting, all one needs to do is edit the lang= value in the .xml file to pick a different ISO 639 code value and run 'make'.
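
In other words, testing a different language is supposed to be as simple as something like this, here switching a hypothetical book.xml to Norwegian Bokmål:

sed -i 's/lang="en"/lang="nb"/' book.xml
make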

The test framework source code is available from Codeberg, and a generated set of presentations of the various examples is available as Codeberg static web pages at https://pere.codeberg.page/docbook-example/. Using this test framework I have been able to discover and report several bugs and missing features in various tools, and got a lot of them fixed. For example I got Northern Sami keywords added to both docbook-xsl and dblatex, fixed several typos in Norwegian Bokmål and Norwegian Nynorsk, got support for non-ascii title IDs added to pandoc, Norwegian index sorting support fixed in xindy and initial Norwegian Bokmål support added to dblatex. Some issues still remain, though. Default index sorting rules are still broken in several tools, so the Norwegian letters æ, ø and å are more often than not sorted incorrectly in the book index.

The test framework recently received some more polish, as part of publishing my latest book. This book contained a lot of fairly complex tables, which exposed bugs in some of the tools. This made me add a new test file with various tables, as well as spend some time brushing up the build rules. My goal is for the test framework to exercise all DocBook features, to make it easier to see which features work with different processors, and hopefully get them all to support the full set of DocBook features. Feel free to send patches to extend the test set, and test it with your favorite DocBook processor. Please visit these two URLs to learn more:

If you want to learn more about DocBook and translations, I recommend having a look at the DocBook web site, the DoCookBook site, my earlier blog post on how the Skolelinux project processes and translates documentation, and a talk I gave earlier this year on how to translate and publish books using free software (Norwegian only).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, docbook, english.
Invidious add-on for Kodi 20
10th August 2023

I still enjoy Kodi and LibreELEC as my multimedia center at home. Sadly two of the services I really would like to use from within Kodi are not easily available. The most wanted add-on would be one making The Internet Archive available, and it has not been working for many years. The second most wanted add-on is one using the Invidious privacy enhanced Youtube frontend. A plugin for this has been partly working, but has not been kept up to date in the Kodi add-on repository, and its upstream seems to have given it up in April this year, when the git repository was closed. A few days ago I got tired of this sad state of affairs and decided to have a go at improving the Invidious add-on. As Google has already attacked the Invidious concept, it needs all the support it can get. My small contribution here is to improve the state of this service on Kodi.

I added support to the Invidious add-on for automatically picking a working Invidious instance, instead of requiring the user to specify the URL to a specific instance after installation. I also had a look at the set of patches floating around in the various forks on github, and decided to clean up at least some of the features I liked and integrate them into my new release branch. Now the plugin can handle channel and short video items in search results. Earlier it could only handle single video instances in the search response. I also brushed up the set of metadata displayed a bit, but hope I can figure out how to get more relevant metadata displayed.

Because I only use Kodi 20 myself, I only test on version 20 and am only motivated to ensure version 20 is working. Because of API changes between version 19 and 20, I suspect it will fail with earlier Kodi versions.

I have already asked to have the add-on added to the official Kodi 20 repository, and am waiting to hear back from the repo maintainers.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, kodi, multimedia, video.
What did I learn from OpenSnitch this summer?
11th June 2023

With yesterday's release of Debian 12 Bookworm, I am happy to know that the interactive application firewall OpenSnitch is available for a wider audience. I have been running it for a few weeks now, and have been surprised by some of the programs connecting to the Internet. Some programs obviously call out from my machine, like the NTP network based clock adjusting system and Tor reaching other Tor clients, but others were more dubious. For example, the KDE window manager tries to look up the host name in DNS, for no apparent reason, but if this lookup is blocked the KDE desktop gets periodically stuck when I use it. Another surprise was how much Firefox calls home directly to mozilla.com, mozilla.net and googleapis.com, to mention a few, when I visit other web pages. These direct connections happen even though I told Firefox to always use a proxy; the proxy setting is ignored for this traffic. Other surprising connections come from audacity and dirmngr (I do not use Gnome). It took some trial and error to get a good default set of permissions. Without it, I would get popups asking for permissions at any time, also at the most inconvenient moments, like in the middle of a time sensitive gaming session.

I suspect some application developers should rethink when they need to use network connections or DNS lookups, and recommend testing OpenSnitch (only an apt install opensnitch away in Debian Bookworm) to locate and report any surprising Internet connections on your desktop machine.

At the moment the upstream developer and Debian package maintainer is working on making the system more reliable in Debian, by enabling the eBPF kernel module to track processes and connections instead of depending on content in /proc/. This should enter unstable fairly soon.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2023-06-12: I got a tip about a list of privacy issues in Free Software and the #debian-privacy IRC channel discussing these topics.

Tags: debian, english, opensnitch.
wmbusmeters, parse data from your utility meter - nice free software
19th May 2023

There is a European standard for reading utility meters like water, gas, electricity or heat distribution meters. The Meter-Bus standard (EN 13757-2, EN 13757-3 and EN 13757-4) provides a cross vendor way to talk to and collect meter data. I ran into this standard when I wanted to monitor some heat distribution meters, and managed to find free software that could do the job. The meters in question broadcast encrypted messages with meter information via radio, and the hardest part was tracking down the encryption keys from the vendor. With this in place I could set up an MQTT gateway to submit the meter data for graphing.

The free software systems in question, rtl-wmbus to read the messages from a software defined radio and wmbusmeters to decrypt and decode the content of the messages, are working very well and allow me to get frequent updates from my meters. I got in touch with upstream last year to see if there was any interest in publishing the packages via Debian. I was very happy to learn that Fredrik Öhrström volunteered to maintain the packages, and I have since assisted him in getting Debian package build rules in place as well as sponsoring the packages into the Debian archive. Sadly we completed it too late for them to become part of the next stable Debian release (Bookworm). The wmbusmeters package just cleared the NEW queue. It will need some work to fix a build problem, but I expect Fredrik will find a solution soon.
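
If you want to try the same at home, the pipeline connecting the two tools looks roughly like this. The meter name, driver, id and encryption key are placeholders of mine, so check the wmbusmeters documentation for the details matching your own meter:

rtl_sdr -f 868.95M -s 1600000 - 2>/dev/null | rtl_wmbus | \
  wmbusmeters --format=json stdin:rtlwmbus \
  MyHeatMeter multical302 12345678 00112233445566778899AABBCCDDEEFF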

If you have an infrastructure meter supporting the Meter-Bus standard, I strongly recommend having a look at these nice packages.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, nice free software.
The 2023 LinuxCNC Norwegian developer gathering
14th May 2023

The LinuxCNC project is making headway these days. A lot of patches and issues have seen activity on the project github pages recently. A few weeks ago there was a developer gathering over at the Tormach headquarters in Wisconsin, and now we are planning a new gathering in Norway. If you wonder what LinuxCNC is, let's quote Wikipedia:

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

The Norwegian developer gathering takes place the weekend of June 16th to 18th this year, and is open to everyone interested in contributing to LinuxCNC. Up to date information about the gathering can be found in the developer mailing list thread where the gathering was announced. Thanks to the good people at Debian, Redpill-Linpro and the NUUG Foundation, we have enough sponsor funds to pay for food and shelter for the people traveling from afar to join us. If you would like to join the gathering, get in touch.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, linuxcnc.
OpenSnitch in Debian ready for prime time
13th May 2023

A bit delayed, the interactive application firewall OpenSnitch package in Debian now has the latest fixes ready for Debian Bookworm. Because it depends on a package missing on some architectures, the autopkgtest check of the testing migration script did not understand that the tests were actually working, so the migration was delayed. A bug in the package dependencies is also fixed, so those installing the firewall package (opensnitch) now also get the GUI admin tool (python3-opensnitch-ui) installed by default. I am very grateful to Gustavo Iñiguez Goya for his work on getting the package ready for Debian Bookworm.

Armed with this package I have discovered some surprising connections from programs I believed were able to work completely offline, and it has already proven its worth, at least to me. If you too want to get more familiar with the kind of programs using Internet connections on your machine, I recommend testing apt install opensnitch in Bookworm and seeing what you think.

The package is still not able to build its eBPF module within Debian. I am not sure how much work it would be to get it working, but suspect some kernel related packages need to be extended with more header files first.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, opensnitch.
Speech to text, she APTly whispered, how hard can it be?
23rd April 2023

While visiting a convention during Easter, it occurred to me that it would be great if I could have a digital Dictaphone with transcribing capabilities, providing me with texts to cut-n-paste into stuff I need to write. The background is that long drives often bring up the urge to work on texts I am writing, which of course is out of the question while driving. With the release of OpenAI Whisper, this seems to be within reach with Free Software, so I decided to give it a go. OpenAI Whisper is a Linux based neural network system that reads in audio files and provides a text representation of the speech in that audio recording. It handles multiple languages, and according to its creators it can even translate into a different language than the spoken one. I have not tested the latter feature. It can use either the CPU or a GPU with CUDA support. As far as I can tell, CUDA in practice limits that feature to NVidia graphics cards. I have few of those, as they do not work great with free software drivers, and have not tested the GPU option. While looking into the matter, I did discover some work to provide CUDA support on non-NVidia GPUs, and some work with the library used by Whisper to port it to other GPUs, but have not spent much time looking into GPU support yet. I have so far used an old X220 laptop as my test machine, and only transcribed using its CPU.

As it is unthinkable from a privacy standpoint to use computers under the control of someone else (aka a "cloud" service) to transcribe one's thoughts and personal notes, I want to run the transcribing system locally on my own computers. The only sensible approach to me is to make the effort I put into this available for any Linux user, and to upload the needed packages into Debian. Looking at Debian Bookworm, I discovered that only three packages were missing: tiktoken, triton and openai-whisper. For a while I also believed ffmpeg-python was needed, but as its upstream seems to have vanished, I found it safer to rewrite whisper to stop depending on it than to introduce ffmpeg-python into Debian. I decided to place these packages under the umbrella of the Debian Deep Learning Team, which seems like the best team to look after such packages. Discussing the topic within the group also made me aware that the triton package was already a planned future dependency of newer versions of the torch package, and would be needed after Bookworm is released.

All the required code packages have now been waiting in the Debian NEW queue since Wednesday, heading for Debian Experimental until Bookworm is released. An unsolved issue is how to handle the neural network models used by Whisper. The default behaviour of Whisper is to require Internet connectivity and download the requested model to ~/.cache/whisper/ on first invocation. This would obviously fail the deserted island test of free software, as the Debian packages would be unusable for someone stranded with only the Debian archive and a solar powered computer on a deserted island.

Because of this, I would love to include the models in the Debian mirror system. This is problematic, as the models are very large files, which would put a heavy strain on the Debian mirror infrastructure around the globe. The strain would be even higher if the models changed often, which luckily, as far as I can tell, they do not. The small model, which according to its creator is most useful for English, and in my experience is not doing a great job there either, is 462 MiB (deb is 414 MiB). The medium model, which to me seems to handle English speech fairly well, is 1.5 GiB (deb is 1.3 GiB), and the large model is 2.9 GiB (deb is 2.6 GiB). I would assume everyone with enough resources would prefer to use the large model for the highest quality. I believe the models themselves would have to go into the non-free part of the Debian archive, as they do not really include any useful source code for updating the models. The "source", aka the model training set, according to the creators consists of "680,000 hours of multilingual and multitask supervised data collected from the web", which to me reads as material with unknown copyright terms, unavailable to the general public. In other words, the source is not available according to the Debian Free Software Guidelines, and the models should be considered non-free.

I asked the Debian FTP masters on their IRC channel for advice regarding uploading a model package, and based on the feedback there it is still unclear to me if such a package would be accepted into the archive. In any case I wrote build rules for an OpenAI Whisper model package and modified the Whisper code base to prefer shared files under /usr/ and /var/ over user specific files in ~/.cache/whisper/, to be able to use these model packages and prepare for such a possibility. One solution might be to include only one of the models (small or medium, I guess) in the Debian archive, and ask people to download the others from the Internet. I am not quite sure what to do here, and advice is most welcome (use the debian-ai mailing list).

To make it easier to test the new packages while I wait for them to clear the NEW queue, I created an APT source targeting Bookworm. The reason I selected Bookworm instead of Bullseye, even though I know the latter would reach more users, is that some of the required dependencies are missing from Bullseye, and during this phase of testing I did not want to backport a lot of packages just to get up and running.

Here is a recipe to run as user root if you want to test OpenAI Whisper using Debian packages on your Debian Bookworm installation, first adding the APT repository GPG key to the list of trusted keys, then setting up the APT repository and finally installing the packages and one of the models:

curl https://geekbay.nuug.no/~pere/openai-whisper/D78F5C4796F353D211B119E28200D9B589641240.asc \
  -o /etc/apt/trusted.gpg.d/pere-whisper.asc
mkdir -p /etc/apt/sources.list.d
cat > /etc/apt/sources.list.d/pere-whisper.list <<EOF
deb https://geekbay.nuug.no/~pere/openai-whisper/ bookworm main
deb-src https://geekbay.nuug.no/~pere/openai-whisper/ bookworm main
EOF
apt update
apt install openai-whisper
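
Once the packages are installed, transcribing a recording is a single command. The file name is just an example, and the resulting text ends up in files named after the input:

whisper --model medium --language Norwegian recording.ogg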

The packages work for me, but have not yet been tested on any other computer than my own. With them, I have been able to (badly) transcribe a 2 minute 40 second Norwegian audio clip to test using the small model. This took 11 minutes and around 2.2 GiB of RAM. Transcribing the same file with the medium model gave an accurate text in 77 minutes using around 5.2 GiB of RAM. My test machine had too little memory to test the large model, which I believe requires 11 GiB of RAM. In short, this now works for me using Debian packages, and I hope it will for you and everyone else once the packages enter Debian.

Now I can start on the audio recording part of this project.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, multimedia, video.
rtlsdr-scanner, software defined radio frequency scanner for Linux - nice free software
7th April 2023

Today I finally found time to track down a useful radio frequency scanner for my software defined radio. Just for fun I tried to locate the radios used in the area, and a good start would be to scan all the frequencies to see what is in use. I have tried to find a usable program earlier, but ran out of time before I managed to track one down. This time I was more successful, and after a few false leads I found a description of rtlsdr-scanner over at the Kali site, and was able to track down the Kali package git repository to build a deb package for the scanner. Sadly the package is missing from the Debian project itself, at least in Debian Bullseye. Two runtime dependencies, python-visvis and python-rtlsdr, had to be built and installed separately. Luckily 'gbp buildpackage' handled them just fine and no further packages had to be manually built. The end result worked out of the box after installation.

My initial scans for FM channels worked just fine, so I knew the scanner was functioning. But when I tried to scan every frequency from 100 to 1000 MHz, the program stopped unexpectedly near completion. After some debugging I discovered that the USB software radio I used rejected frequencies above 948 MHz, triggering an unreported exception that broke the scan. Changing the scan to end at 957 worked better. I similarly found the lower limit to be around 15, and ended up with the following full scan:

Saving the scan did not work, but exporting it as a CSV file worked just fine. I ended up with around 477k CSV lines with the dB level for each frequency.
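
Such a CSV file is easy to mine with standard shell tools. Assuming two columns with frequency and dB level, something like this should list the strongest signals found:

sort -t, -k2 -rn scan.csv | head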

The save failure seems to be a missing encoding issue in the Python code, where a text string is written to a file handle expecting bytes. I will see if I can find time to send a patch upstream later to fix this exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/rtlsdr_scanner/main_window.py", line 485, in __on_save
    save_plot(fullName, self.scanInfo, self.spectrum, self.locations)
  File "/usr/lib/python3/dist-packages/rtlsdr_scanner/file.py", line 408, in save_plot
    handle.write(json.dumps(data, indent=4))
TypeError: a bytes-like object is required, not 'str'

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, nice free software.
OpenSnitch available in Debian Sid and Bookworm
25th February 2023

Thanks to the efforts of OpenSnitch lead developer Gustavo Iñiguez Goya, who allowed me to sponsor the upload, the interactive application firewall OpenSnitch is now available in Debian Testing, soon to become the next stable release of Debian.

This is a package which sets up a network firewall on one or more machines, controlled by a graphical user interface that will ask the user if a program should be allowed to connect to the local network or the Internet. If some background daemon is trying to dial home, it can be blocked from doing so with a simple mouse click, or by default simply by not doing anything when the GUI question dialog pops up. A list of all programs discovered using the network is provided in the GUI, giving the user an overview of how the machine's programs use the network.

OpenSnitch was uploaded for NEW processing about a month ago, and I had little hope of it getting accepted and shaped up in time for the package freeze, but the Debian ftpmasters proved to be amazingly quick at checking out the package and it was accepted into the archive about a week after the first upload. It is now team maintained under the Go language team umbrella. A few fixes to the default setup are only in Sid, and should migrate to Testing/Bookworm in a week.

During testing I ran into an issue with Minecraft server broadcasts disappearing, which was quickly resolved by the developer with a patch and a proposed configuration change. I have been told this was caused by the Debian package's default use of /proc/ information to track down kernel status, instead of the newer eBPF module that can be used. The reason is simply that upstream and I have failed to find a way to build the eBPF modules for OpenSnitch without a completely configured Linux kernel source tree, which as far as we can tell is unavailable as a build dependency in Debian. We have so far tried unsuccessfully to use the kernel-headers package. It would be great if someone could provide some clues on how to build eBPF modules on the build daemons in Debian, possibly without the full kernel source.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, opensnitch.
Is the desktop recommending your program for opening its files?
29th January 2023

Linux desktop systems have standardized how programs present themselves to the desktop system. If a package includes a .desktop file in /usr/share/applications/, Gnome, KDE, LXDE, Xfce and the other desktop environments will pick up the file and use its content to generate the menu of available programs in the system. A lesser known fact is that a package can also explain to the desktop system how to recognize the files created by the program in question, and have it used to open these files on request, for example via a GUI file browser.
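
As a rough sketch, a .desktop file announcing both a menu entry and a supported MIME type could look like this. All the names here are hypothetical:

cat > /usr/share/applications/myapp.desktop <<EOF
[Desktop Entry]
Type=Application
Name=My Application
Exec=myapp %f
MimeType=application/x-myformat;
EOF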

A while back I ran into a package that did not tell the desktop system how to recognize its files, and thus was not used to open them in the file browser, so I fixed it. In the process I wrote a simple debian/tests/ script to ensure the setup keeps working. It might be useful for other packages too, to ensure any future version of the package keeps handling its own files.

For this to work, the file format needs a useful MIME type that can be used to identify the format. If the file format does not yet have a MIME type, it should define one and preferably also register it with IANA to ensure the MIME type string is reserved.

The script uses the xdg-mime program from xdg-utils to query the database of standardized package information and ensure it returns sensible values. It also needs the location of an example file for xdg-mime to guess the format of.

#!/bin/sh
#
# Author: Petter Reinholdtsen
# License: GPL v2 or later at your choice.
#
# Validate the MIME setup, making sure motor types have
# application/vnd.openmotor+yaml associated with them and is connected
# to the openmotor desktop file.

retval=0

mimetype="application/vnd.openmotor+yaml"
testfile="test/data/real/o3100/motor.ric"
mydesktopfile="openmotor.desktop"

filemime="$(xdg-mime query filetype "$testfile")"

if [ "$mimetype" != "$filemime" ] ; then
    retval=1
    echo "error: xdg-mime claim motor file MIME type is $filemine, not $mimetype"
else
    echo "success: xdg-mime report correct mime type $mimetype for motor file"
fi

desktop=$(xdg-mime query default "$mimetype")

if [ "$mydesktopfile" != "$desktop" ]; then
    retval=1
    echo "error: xdg-mime claim motor file should be handled by $desktop, not $mydesktopfile"
else
    echo "success: xdg-mime agree motor file should be handled by $mydesktopfile"
fi

exit $retval

It is a simple way to ensure your users are not unpleasantly surprised when they try to open one of your file formats in their file browser.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
Opensnitch, the application level interactive firewall, heading into the Debian archive
22nd January 2023

While reading a blog post claiming MacOS X recently started scanning local files and reporting information about them to Apple, even on a machine where all such callback features had been disabled, I came across a description of the Little Snitch application for MacOS X. It seemed like a very nice tool to have in the tool box, and I decided to see if something similar was available for Linux.

It did not take long to find the OpenSnitch package, which has been in development since 2017 and is now at version 1.5.0. It has had a request for Debian packaging since 2018, but no one had completed the job so far. Just for fun, I decided to see if I could help, and I was very happy to discover that upstream wants a Debian package too.

After struggling a bit with getting the program to run, figuring out how to build Go programs (and a little failed detour to look at eBPF builds too - help needed), I am very happy to report that I am sponsoring upstream to maintain the package in Debian, and that it has been waiting in NEW since this morning for the ftpmasters to have a look. Perhaps it can get into the archive in time for the Bookworm release?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, opensnitch.
LinuxCNC MQTT publisher component
8th January 2023

I watched a 2015 video from Andreas Schiffler the other day, where he set up LinuxCNC to send status information to the MQTT broker IBM Bluemix. As I also use MQTT for graphing, it occurred to me that a generic MQTT LinuxCNC component would be useful, and I set out to implement it. Today I got the first draft limping along and submitted it as a patch to the LinuxCNC project.

The simple part was setting up the MQTT publishing code in Python. I have already set up other parts submitting data to my Mosquitto MQTT broker, so I could reuse that code. Writing a LinuxCNC component in Python was new to me, but using existing examples in the code repository and the extensive documentation, this was fairly straightforward. The hardest part was creating an automated test for the component to ensure it was working. Testing it in a simulated LinuxCNC machine proved very useful, as I discovered features I needed that I had not thought of yet, and adjusted the code quite a bit to make it easier to test without an operational MQTT broker available.
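
For those curious what using such a component might look like, I imagine a HAL setup loading it as a userspace component along these lines. The component and argument names here are illustrative only, as the interface is still being discussed:

# Load the MQTT publisher and list the HAL pins to publish
loadusr -W mqtt-publisher keys=halui.machine.is-on,halui.program.is-running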

The draft is ready and working, but I am unsure which LinuxCNC HAL pins I should collect and publish by default (in other words, the default set of information pieces published), and how to get the machine name from the LinuxCNC INI file. The latter is a minor detail, but I expect it would be useful in a setup with several machines available. I am hoping for feedback from the experienced LinuxCNC developers and users, to make the component even better before it can go into the mainline LinuxCNC code base.

Since I started on the MQTT component, I came across another video, from Kent VanderVelden, where he combines LinuxCNC with a set of screen glasses controlled by a Raspberry Pi, and it occurred to me that it would be useful for such use cases if LinuxCNC also provided a REST API for querying its status. I hope to start on such a component once the MQTT component is working well.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, linuxcnc, robot.
ONVIF IP camera management tool finally in Debian
24th December 2022

Merry Christmas to you all. Here is a small gift to all those with IP cameras following the ONVIF specification. There is finally a nice command line and GUI tool in Debian to manage ONVIF IP cameras. After working with upstream for a few months and sponsoring the upload, I am very happy to report that the libonvif package entered Debian Sid last night.

The package provides a C library to communicate with such cameras, a command line tool to locate the cameras and update their settings (like passwords), and a GUI tool to configure and control the units as well as preview the video from the camera. Libonvif is available on both Linux and Windows, and the GUI tool uses the Qt library. The main competitors are non-free software, while libonvif is GNU GPL licensed. I am very glad Debian users in the future can control their cameras using a free software system provided by Debian. But the ONVIF world is full of slightly broken firmware, where the cameras pretend to follow the ONVIF specification but fail to set some configuration values or refuse to provide video to more than one recipient at a time, and the libonvif project is quite young, so it might take a while before it works completely with your camera. Upstream seems eager to improve the library, so handling any broken camera might be just a bug report away.

The package just cleared NEW, and needs a new source-only upload before it can enter testing. This will happen in the next few days.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, multimedia, standard, surveillance.
Managing and using ONVIF IP cameras with Linux
19th October 2022

Recently I have been looking at how to control and collect data from a handful IP cameras using Linux. I both wanted to change their settings and to make their imagery available via a free software service under my control. Here is a summary of the tools I found.

First I had to identify the cameras and their protocols. As far as I could tell, they were using some SOAP looking protocol and their internal web server seem to only work with Microsoft Internet Explorer with some proprietary binary plugin, which in these days of course is a security disaster and also made it impossible for me to use the camera web interface. Luckily I discovered that the SOAP looking protocol is actually following the ONVIF specification, which seem to be supported by a lot of IP cameras these days.

Once the protocol was identified, I was able to find what appear to be the most popular way to configure ONVIF cameras, the free software Windows tool named ONVIF Device Manager. Lacking any other options at the time, I tried unsuccessfully to get it running using Wine, but was missing a dotnet 40 library and I found no way around it to run it on Linux.

The next tool I found to configure the cameras was a non-free Linux Qt client, ONVIF Device Tool. I did not like its terms of use, so I did not spend much time on it.

To collect the video and make it available in a web interface, I found the Zoneminder tool in Debian. A recent version was able to automatically detect and configure ONVIF devices, so I could use it to set up motion detection and collection of the camera output. I had initial problems getting the ONVIF autodetection to work, as both Firefox and Chromium refused the inter-tab communication being used by the Zoneminder web pages, but managed to get Konqueror to work. Apparently the "Enhanced Tracking Protection" in Firefox causes the problem. I ended up upgrading to the Bookworm edition of Zoneminder in the process to try to fix the issue, and believe the problem might be solved now.

In the process I came across the nice Linux GUI tool ONVIF Viewer, allowing me to preview the camera output and validate the login passwords required. Sadly its author has grown tired of maintaining the software, so it might not see any future updates. Which is sad, as the viewer is slightly unstable and the picture tends to lock up. Note, this lockup might be due to limitations in the cameras and not the viewer implementation. I suspect the camera is only able to provide pictures to one client at a time, and the Zoneminder feed might interfere with the GUI viewer. I have asked for the tool to be included in Debian.

Finally, I found what appears to be a very nice free software Linux replacement for the Windows tool, named libonvif. It provides a C library to talk to ONVIF devices as well as a command line and GUI tool using the library. Using the GUI tool I was able to change the admin passwords and update other settings of the cameras. I have asked for the package to be included in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2022-10-20: Since my initial publication of this text, I have received several suggestions for more free software Linux tools. There is an ONVIF Python library (already requested into Debian) and a Python 3 fork using a different SOAP dependency. There is also support for ONVIF in Home Assistant, and there is an alternative to Zoneminder called Shinobi. The latter two are not included in Debian either. I have not tested any of these so far.

Tags: debian, english, multimedia, standard, surveillance.
Time to translate the Bullseye edition of the Debian Administrator's Handbook
12th September 2022

(The picture is of the previous edition.)

Almost two years after the previous Norwegian Bokmål translation of "The Debian Administrator's Handbook" was published, a new edition is finally being prepared. The English text is updated, and it is time to start working on the translations. Around 37 percent of the strings have been updated, one way or another, and the translations, starting from a complete Debian Buster edition, now need to bring their coverage up from 63% to 100%. The complete book is licensed using a Creative Commons license, and has been published in several languages over the years. The translations are done by volunteers to bring Linux to their native tongue. The last time I checked, the complete text was available in English, Norwegian Bokmål, German, Indonesian, Brazilian Portuguese and Spanish. In addition, work has been started for Arabic (Morocco), Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, French, Greek, Italian, Japanese, Korean, Persian, Polish, Romanian, Russian, Swedish, Turkish and Vietnamese.

The translation is conducted on the hosted Weblate project page. Prospective translators are recommended to subscribe to the translators mailing list and should also check out the instructions for contributors.

I am one of the Norwegian Bokmål translators of this book, and we have just started. Your contribution is most welcome.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, debian-handbook, english.
Automatic LinuxCNC servo PID tuning?
16th July 2022

While working on a CNC with servo motors controlled by the LinuxCNC PID controller, I recently had to learn how to tune the collection of values that control the mathematical machinery of a PID controller. It proved to be a lot harder than I hoped, and I still have not succeeded in getting the Z PID controller to successfully defy gravity, nor X and Y to move accurately and reliably. But while climbing up this rather steep learning curve, I discovered that some motor control systems are able to automatically tune their PID controllers. I got the impression from the documentation that LinuxCNC was not among them. This proved not to be true.

The LinuxCNC pid component is the recommended PID controller to use. It uses eight constants, Pgain, Igain, Dgain, bias, FF0, FF1, FF2 and FF3, to calculate the output value based on the current and wanted state, and all of these need to have a sensible value for the controller to behave properly. Note, there are even more values involved; these are just the most important ones. In my case I need the X, Y and Z axes to follow the requested path with little error. This has proved quite a challenge for someone who has never tuned a PID controller before, but there is at least some help to be found.

I discovered that included in LinuxCNC was an old PID component, at_pid, claiming to have auto tuning capabilities. Sadly it had been neglected since 2011, and could not be used as a plug-in replacement for the default pid component. One would have to rewrite the LinuxCNC HAL setup to test at_pid. This was rather sad, when I wanted to quickly test auto tuning to see if it did a better job than me at figuring out good P, I and D values to use.

I decided to have a look at whether the situation could be improved. This involved trying to understand the code and history of the pid and at_pid components. Apparently they had a common ancestor, as the code structure, comments and variable names were quite close to each other. Sadly this was not reflected in the git history, making it hard to figure out what really happened. My guess is that the author of at_pid.c took a version of pid.c, rewrote it to follow the structure he wished pid.c to have, then added support for auto tuning and finally got it included into the LinuxCNC repository. The restructuring and lack of early history made it harder to figure out which parts of the code were relevant to the auto tuning, and which parts needed to be updated to work the same way as the current pid.c implementation. I started by trying to isolate relevant changes in pid.c and applying them to at_pid.c. My aim was to make sure the at_pid component could replace the pid component with a simple change in the HAL setup loadrt line, without having to "rewire" the rest of the HAL configuration. After a few hours following this approach, I had learned quite a lot about the code structure of both components, while concluding I was heading down the wrong rabbit hole, and should get back to the surface and find a different path.

For the second attempt, I decided to throw away all the PID control related parts of the original at_pid.c, and instead isolate and lift the auto tuning part of the code and inject it into a copy of pid.c. This ensured compatibility with the current pid component, while adding auto tuning as a run time option. To make it easier to identify the relevant parts in the future, I wrapped all the auto tuning code with '#ifdef AUTO_TUNER'. The result behaves just like the current pid component by default, as that part of the code is identical. The end result entered the LinuxCNC master branch a few days ago.

To enable auto tuning, one needs to set a few HAL pins in the PID component. The most important ones are tune-effort, tune-mode and tune-start. But let's take a step back, and see what the auto tuning code will do. I do not know the mathematical foundation of the at_pid algorithm, but from observation I can tell that the algorithm will, when enabled, produce a square wave pattern centered around the bias value on the output pin of the PID controller. This can be seen using the HAL Scope provided by LinuxCNC. In my case, this is translated into a voltage (+-10V) sent to the motor controller, which in turn is translated into motor speed. So at_pid will ask the motor to move the axis back and forth. The number of cycles in the pattern is controlled by the tune-cycles pin, and the extremes of the wave pattern are controlled by the tune-effort pin. Of course, trying to change the direction of a physical object instantly (as in going directly from a positive voltage to the equivalent negative voltage) does not change its velocity instantly, and it takes some time for the object to slow down and move in the opposite direction. This results in a smoother movement wave form, as the axis in question vibrates back and forth. When the axis reaches the target speed in the opposing direction, the auto tuner changes direction again. After several of these changes, the average time delay between the 'peaks' and 'valleys' of this movement graph is used to calculate proposed values for Pgain, Igain and Dgain, and insert them into the HAL model for the pid controller to use. The auto tuned settings are not great, but they work a lot better than the values I had been able to cook up on my own, at least for the horizontal X and Y axes. But I had to use very small tune-effort values, as my motor controllers error out if the voltage changes too quickly. I have been less lucky with the Z axis, which is moving a heavy object up and down and seems to confuse the algorithm. The Z axis movement became a lot better when I introduced a bias value to counter the gravitational drag, but I will have to work a lot more on the Z axis PID values.

Armed with this knowledge, it is time to look at how to do the tuning. Let's say the HAL configuration in question loads the PID component for X, Y and Z like this:

loadrt pid names=pid.x,pid.y,pid.z

Armed with the new and improved at_pid component, the new line will look like this:

loadrt at_pid names=pid.x,pid.y,pid.z

The rest of the HAL setup can stay the same. This works because the components are referenced by name. If the component had used count=3 instead, all uses of pid.# would have had to be changed to at_pid.#.
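
Before starting the tuning, it is worth verifying that the extra tuning pins actually showed up, for example with something like this (assuming the pid.x name from the loadrt line above):

halcmd show pin pid.x.tune-effort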

To start tuning the X axis, move the axis to the middle of its range, to make sure it does not hit anything when it starts moving back and forth. Next, set tune-effort to a low number in the output range. I used 0.1 as my initial value. Next, assign 1 to the tune-mode value. Note, this will disable the pid controlling part and feed 0 to the output pin, which in my case initially caused a lot of drift. For X and Y it proved to be a good idea to tune the motor driver to make sure 0 voltage stopped the motor rotation. On the other hand, for the Z axis this proved to be a bad idea, so it will depend on your setup. It might help to set the bias value to an output value that reduces or eliminates the axis drift. Finally, after setting tune-mode, set tune-start to 1 to activate the auto tuning. If all goes well, your axis will vibrate for a few seconds, and when it is done, new values for Pgain, Igain and Dgain will be active. To test them, change tune-mode back to 0. Note that this might cause the machine to suddenly jerk as it brings the axis back to its commanded position, which it might have drifted away from during tuning. To summarize with some halcmd lines:

setp pid.x.tune-effort 0.1
setp pid.x.tune-mode 1
setp pid.x.tune-start 1
# wait for the tuning to complete
setp pid.x.tune-mode 0

After doing this task quite a few times while trying to figure out how to properly tune the PID controllers on the machine in question, I decided to figure out if this process could be automated, and wrote a script to do the entire tuning process from power on. The end result will ensure the machine is powered on and ready to run, home all axes if this is not already done, check that the extra tuning pins are available, move the axis to its mid point, run the auto tuning and re-enable the pid controller when it is done. It can be run several times. Check out the run-auto-pid-tuner script on github if you want to learn how it is done.

My hope is that this little adventure can inspire someone who knows more about motor PID controller tuning to implement even better algorithms for automatic PID tuning in LinuxCNC, making life easier for both me and all the others that want to use LinuxCNC but lack the in depth knowledge needed to tune PID controllers well.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th June 2022

I guess it is time to shed some light on the various free software and open culture activities and projects I have worked on or been involved in during the last year and a half.

First, let's mention the book releases I managed to publish. The Cory Doctorow book "Hvordan knuse overvåkningskapitalismen" (the Norwegian translation of "How to Destroy Surveillance Capitalism") argues that it is not the magic machine learning of the big technology companies that causes surveillance capitalism to thrive, but the lack of trust busting to enforce existing anti-monopoly laws. I also published a family of dictionaries for machinists, one sorted on the English words, one sorted on the Norwegian and the last sorted on the North Sámi words. A bit on the back burner but not forgotten is the Debian Administrator's Handbook, where a new edition is being worked on. I have not spent as much time as I want to help bring it to completion, but hope I will get more spare time to look at it before the end of the year.

With my Debian hat on I have spent time on several projects, both updating existing packages, helping to bring in new packages and working with upstream projects to try to get them ready to go into Debian. The list is rather long, and I will only mention my own isenkram, openmotor, vlc bittorrent plugin, xprintidle, norwegian letter style for latex, bs1770gain, and recordmydesktop. In addition to these I have sponsored several packages into Debian, like audmes.

The last year I have looked at several infrastructure projects for collecting meter data and video surveillance recordings. This includes several ONVIF related tools like onvifviewer and zoneminder, as well as rtl-433, wmbusmeters and rtl-wmbus.

In parallel with this I have looked at fabrication related free software solutions like pycam and LinuxCNC. The latter recently gained improved translation support using po4a and weblate, which was a harder nut to crack than I had anticipated when I started.

Several hours have been spent translating free software to Norwegian Bokmål on the hosted Weblate service. I do not have a complete list, but you will find my contributions in at least gnucash, minetest and po4a.

I also spent quite some time on the Norwegian archiving specification Noark 5, and its companion project Nikita implementing the API specification for Noark 5.

Recently I have been looking into free software tools to do company accounting here in Norway, which presents an interesting mix of law, rules, regulations, format specifications and API interfaces.

I guess I should also mention the Norwegian community driven government interfacing projects Mimes Brønn and Fiksgatami, which have ended up in a kind of limbo while the future of the projects is being worked out.

These are just a few of the projects I have been involved in and would like to give more visibility. I'll stop here to avoid delaying this post.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
3rd June 2022

Back in October last year, when I started looking at the LinuxCNC system, I proposed to change the documentation build system to make life easier for translators. The original system consisted of independently written documentation files for each language, with no automated way to track changes done in other translations and no help for the translators to know how much was left to translate. By using the po4a system to generate POT and PO files from the English documentation, this can be improved. A small team of LinuxCNC contributors got together and today our labour finally paid off. Since a few hours ago, it is now possible to translate the LinuxCNC documentation on Weblate, alongside the program itself.

The effort to migrate the documentation to use po4a has been both slow and frustrating. I am very happy we finally made it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th April 2022

Recently I wanted to upgrade the firmware of my thinkpad, and located the firmware download page from Lenovo (which annoyingly does not allow access via Tor, forcing me to hand them more personal information than I would like). The download from Lenovo is a bootable ISO image, which is a bit of a problem when all I had available was a USB memory stick. I tried booting the ISO from a USB stick, but this did not work. But genisoimage came to the rescue.

The geteltorito program in the genisoimage binary package is able to convert the bootable ISO image to a bootable USB image using a simple command line recipe, which I then can write to the most recently inserted USB stick:

geteltorito -o usbstick.img lenovo-firmware.iso
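# Careful: the dd command below writes to the most recently detected disk;
# double check that this really is the USB stick before running it.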
sudo dd bs=10M if=usbstick.img of=$(ls -tr /dev/sd?|tail -1)

This USB stick booted the firmware upgrader just fine, and in a few minutes my machine had the latest and greatest BIOS firmware in place.

Tags: debian, english.
16th April 2022

Inspired by the recent news of AV1 hardware encoding support from Intel, I decided to look into the state of AV1 on Linux today. AV1 is a free and open standard as defined by Digistan without any royalty payment requirement, unlike its much used competitor encoding H.264. While looking, I came across a 5 year old question on askubuntu.com which in turn inspired me to check out how things are in Debian Stable regarding AV1. The test file listed in the question (askubuntu_test_aom.mp4) did not exist any more, so I tracked down a different set of test files on av1.webmfiles.org to test them with the various video tools I had installed on my machine. I was happy to discover that AV1 decoding and playback worked with almost every tool I tested:

mediainfo ok
dragonplayer ok
ffmpeg / ffplay ok
gnome-mplayer fail
mplayer ok
mpv ok
parole ok
vlc ok
firefox ok
chromium ok

AV1 encoding is available in Debian Stable from the aom-tools package, version 1.0.0.errata1-3, using the aomenc tool. The encoding using the package in Debian Stable is quite slow, with the frame rate for my 10 second test video at around 0.25 fps. Encoding my 10 second test video took 16 minutes and 11 seconds on my test machine.

I tested by first running ffmpeg and then aomenc using the recipe provided in the askubuntu question above. I had to remove the '--row-mt=1' option, as it was not supported in my 1.0.0 version. The encoding only used a single thread, according to top.

ffmpeg -i some-old-video.ogv -t 10 -pix_fmt yuv420p video.y4m
aomenc --fps=24/1 -u 0 --codec=av1 --target-bitrate=1000 \
  --lag-in-frames=25 --auto-alt-ref=1 -t 24 --cpu-used=8 \
  --tile-columns=2 --tile-rows=2 -o output.webm video.y4m

As version 1.0.0 currently has several unsolved security issues in Debian Stable, and to see if the recent backport provided in Debian is any quicker, I ran apt -t bullseye-backports install aom-tools to fetch the backported version and re-encoded the video using the latest version. This time the '--row-mt=1' option worked, and the encoding was done in 46 seconds with a frame rate of around 5.22 fps. This time it seemed to be using all four of my cores to encode. Encoding speed is still too low for streaming and real time use, which would require frame rates above 25 fps, but might be good enough for offline encoding.

I am very happy to see AV1 playback working so well with the default tools in Debian Stable. I hope the encoding situation improves too, allowing even a slow old computer like my 10 year old laptop to be used for encoding.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12th March 2022

Recently I had a look at a Hargassner wood chip boiler, and what kind of free software can be used to monitor and control it. The boiler can be connected to some cloud service via what the producer calls an Internet Gateway, which seems to be a computer connecting to the boiler and passing the information gathered to the cloud. I discovered the boiler controller got an IP address on the local network and listens on TCP port 23, providing status information as a text line of numbers. It also provides an HTTP server listening on port 80, but I have not yet figured out what it can do besides returning an error code.

If I am to believe the various free software implementations talking to such boilers, the interpretation of the line of numbers differs between boiler types and the software version on the boiler. By comparing the list of numbers on the front panel of the boiler with the numbers returned via TCP, I have been able to figure out several of the numbers, but there are a lot left to understand. I've located several temperature measurements and hours running values, as well as oxygen measurements and counters.

I decided to write a simple parser in Python for the values I have figured out so far, and a simple MQTT injector publishing both the interpreted and the unknown values on an MQTT bus to make collecting and graphing simpler. The end result is available from the hargassner2mqtt project page on gitlab. I very much welcome patches extending the parser to understand more values, boiler types and software versions. I do not really expect many free software developers to have gotten their hands on such a unit to experiment with, but it would be fun if others too find this project useful.
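
To give an idea of the approach, here is a minimal sketch of the idea, not the actual hargassner2mqtt code. The boiler address, the broker location and the topic names are made up for the example, and it assumes the python3-paho-mqtt package is installed:

import socket

import paho.mqtt.client as mqtt  # from the python3-paho-mqtt package

BOILER = "192.0.2.10"  # made up address of the boiler controller

def read_status_values():
    # The controller writes status lines of space separated numbers
    # on TCP port 23; grab one line and split it into fields.
    with socket.create_connection((BOILER, 23), timeout=30) as conn:
        return conn.makefile().readline().split()

values = read_status_values()
client = mqtt.Client()
client.connect("localhost")  # MQTT broker, here assumed to run locally
# Publish every value on its own topic, indexed by its position in the
# line, so even the values not yet understood can be collected and graphed.
for index, value in enumerate(values):
    client.publish("hargassner/raw/%d" % index, value)
client.disconnect()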

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
2nd March 2022

After many months of hard work by the good people involved in LinuxCNC, the system was accepted into Debian on Sunday. Once it was available from Debian, I was surprised to discover from its popularity-contest numbers that people have been reporting its use since 2012. Its project site might be a good place to check out, but sadly it does not work when visited via Tor.

But what is LinuxCNC, you are probably wondering? Perhaps a Wikipedia quote is in order?

"LinuxCNC is a software system for numerical control of machines such as milling machines, lathes, plasma cutters, routers, cutting machines, robots and hexapods. It can control up to 9 axes or joints of a CNC machine using G-code (RS-274NGC) as input. It has several GUIs suited to specific kinds of usage (touch screen, interactive development)."

It can even control 3D printers. And even though the Wikipedia page indicates that it only works with hard real time kernel features, it can also work with the user space soft real time features provided by the Debian kernel. The source code is available from GitHub. The last few months I've been involved in the translation setup for the program and documentation. Translators are most welcome to join the effort using Weblate.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th February 2022

I am very happy to report that a new version of the VLC bittorrent plugin was just uploaded into Debian. The changes since last time are mostly cleanup of the download code. The package is currently in Debian unstable, but should be available in Debian testing soon. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

It can also use magnet links and local .torrent files like the ones provided by the Internet Archive. Another example is the Love Nest Buster Keaton movie, where one can click on the 'Torrent' link to get going.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

3rd December 2021

A few days ago, a productive translator started working on a new translation of the Made with Creative Commons book, for Brazilian Portuguese. The translation takes place on the Weblate web based translation system. Once the translation is complete and proofread, we can publish it on paper as well as in PDF, ePub and HTML format. The translation is already 16% complete, and if more people get involved I am convinced it can very quickly reach 100%. If you are interested in helping out with this or other translations of the Made with Creative Commons book, start translating on Weblate. There are partial translations available in Azerbaijani, Bengali, Brazilian Portuguese, Dutch, French, German, Greek, Polish, Simplified Chinese, Swedish, Thai and Ukrainian.

The git repository for the book contains all the source files needed to build the book yourself. HTML editions to help with proofreading are also available.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

24th October 2021

The Debian Lego team has seen a lot of activity the last few weeks. All the packages under the team umbrella have been updated to fix packaging issues, lintian issues and BTS reports. In addition, a new and inspiring team member appeared on both the debian-lego-team mailing list and the IRC channel #debian-lego. If you are interested in Lego CAD design and LEGO Mindstorms programming, check out the team wiki page to see what Debian can offer the Lego enthusiast.

Patches have been sent upstream, resulting in new upstream releases, one of them even the first in more than ten years, and packages stuck on old upstream releases were updated to new ones. There is still a lot of work left, and the team welcomes more members to help us make sure Debian is the Linux distribution of choice for Lego builders. If you want to contribute, join us in the IRC channel and become part of the team on Salsa.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, lego, robot.
4th August 2021

Almost thirty years ago, some forward looking teachers at Samisk videregående skole og reindriftsskole, teaching metal work and Northern Sámi, decided to create a list of words used in Northern Sámi metal work. After almost ten years this resulted in a dictionary database, published as the book "Mekanihkkársánit : Mekanikerord = Mekaanisen alan sanasto = Mechanic's words" in 1999. The story of this work is available from the pen of Svein Lund, one of the leading actors behind the effort. They even got the dictionary approved by the Sámi Language Council as the recommended metal work words to use.

Fast forward twenty years. I came across this work when I recently became interested in metal work, and started watching educational and funny videos on the topic, like the ones from mrpete222 and This Old Tony. They all speak English, but I wanted to know what the tools and techniques they used were called in Norwegian. Trying to track down a good dictionary from English to Norwegian, after much searching, I came across the database of words created almost thirty years ago, with translations into English, Norwegian, Northern Sámi, Swedish and Finnish. This gave me a lot of the Norwegian phrases I had been looking for. To make it easier for the next person trying to track down a good Norwegian dictionary for the metal worker, and because I knew the person behind the database from my Skolelinux / Debian Edu days, I decided to ask if the database could be released to the public without any usage limitations, in other words as a Creative Commons licensed data set. And happily, after consulting with the Sámi Parliament of Norway, the database is now available with the Creative Commons Attribution 4.0 International license from my gitlab repository.

The dictionary entries look slightly different, depending on the language in focus. This is the same entry in the different editions.

English

lathe

dreiebenk (nb) várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka (se) svarv (sv) sorvi (fi)

Norwegian

dreiebenk

lathe (en) várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka (se) svarv (sv) sorvi (fi)

(nb): sponskjærande bearbeidingsmaskin der ein med skjæreverktøy lausgjør spon frå eit roterande arbetsstykke

Northern Sámi

várve, várvenbeaŋka, jorahanbeaŋka, vátnanbeaŋka

dreiebenk (nb) lathe (en) svarv (sv) sorvi (fi)

(se): mašiidna mainna čuohppá vuolahasaid jorri bargoávdnasis

(nb): sponskjærande bearbeidingsmaskin der ein med skjæreverktøy lausgjør spon frå eit roterande arbetsstykke

The database includes term descriptions in both Norwegian and Northern Sámi, but not in English. Because of this, the Northern Sámi edition includes both descriptions, the Norwegian edition includes the Norwegian description, and the English edition lacks a description.

Once the database was available without any usage restrictions, and armed with my experience in publishing books, I decided to publish a Norwegian/English dictionary as a book using the database, to make the data set available also on paper and as an ebook. Further into the project, it occurred to me that I could just as easily make an English dictionary, and talking to Svein and concluding that it was within reach, I decided to make a Northern Sámi dictionary too.

Thus I suddenly find myself publishing a Northern Sámi dictionary, even though I do not understand the language myself. I hope it will be well received, and can help revive the impressive work done almost thirty years ago to document the vocabulary of metal workers. If I get some help, I might even extend it with some of the words I find missing, like collet, rotary broach, carbide, knurler, arbor press and others. But the first edition is built from a lightly edited version of the original database, with no new entries added. If you would like to check it out, visit my list of published books and consider buying a paper or ebook copy from lulu.com. The paper edition is only available in hardcover to increase its durability in the workshop.

I am very happy to report that in the process, and thanks to help from both Svein Lund and Børre Gaup who understand the language, the docbook tools I use to create books, dblatex and docbook-xsl, now include support for Northern Sámi. Before I started, these lacked the needed locale settings for this language, but now the patches are included upstream.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: docbook, english.
5th July 2021

I am happy to observe that The Debian Administrator's Handbook is now available in six languages. I am not sure which of these are completely proofread, but the complete book is available in these languages:

  • English
  • Norwegian Bokmål
  • German
  • Indonesian
  • Brazilian Portuguese
  • Spanish

This is the list of languages that are more than 70% complete, in other words with not too much left to do:

  • Chinese (Simplified) - 90%
  • French - 79%
  • Italian - 79%
  • Japanese - 77%
  • Arabic (Morocco) - 75%
  • Persian - 71%

I wonder how long it will take to bring these to 100%.

Then there is the list of languages about halfway done:

  • Russian - 63%
  • Swedish - 53%
  • Chinese (Traditional) - 46%
  • Catalan - 45%

Several are off to a good start:

  • Dutch - 26%
  • Vietnamese - 25%
  • Polish - 23%
  • Czech - 22%
  • Turkish - 18%

Finally, there are the ones just getting started:

  • Korean - 4%
  • Croatian - 2%
  • Greek - 2%
  • Danish - 1%
  • Romanian - 1%

If you want to help provide a Debian instruction book in your own language, visit Weblate to contribute to the translations.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10th June 2021

I am very pleased to be able to share with you the announcement of a new version of the archiving system Nikita published by its lead developer Thomas Sødring:

It is with great pleasure that we can announce a new release of nikita, version 0.6 (https://gitlab.com/OsloMet-ABI/nikita-noark5-core). This release makes new record keeping functionality available. This really is a maturity release, both in terms of functionality and code. Considerable effort has gone into refactoring the codebase and simplifying the code. Notable changes for this release include:

  • Significantly improved OData parsing
  • Support for business specific metadata and national identifiers
  • Continued implementation of domain model and endpoints
  • Improved testing
  • Ability to export and import from arkivstruktur.xml

We are currently in the process of reaching an agreement with an archive institution to publish their picture archive using nikita with business specific metadata, and we hope that we can share this with you soon. This is an interesting project, as it allows the organisation to bring an older picture archive back to life while using the original metadata values stored as business specific metadata. Combined with OData, this means the scope and use of the archive is significantly increased, and will showcase both the flexibility and power of Noark.

I really think we are approaching a version 1.0 of nikita, even though there is still a lot of work to be done. The notable work at the moment is to implement access-control and full text indexing of documents.

My sincere thanks to everyone who has contributed to this release!

- Thomas

Release 0.6 2021-06-10 (d1ba5fc7e8bad0cfdce45ac20354b19d10ebbc7b)

  • Refactor metadata entity search
  • Remove redundant security configuration
  • Make OpenAPI documentation work
  • Change database structure / inheritance model to a more sensible approach
  • Make it possible to move entities around the fonds structure
  • Implemented a number of missing endpoints
  • Make sure yml files are in sync
  • Implemented/finalised storing and use of
    • Business Specific Metadata
    • Norwegian National Identifiers
    • Cross Reference
    • Keyword
    • StorageLocation
    • Author
    • Screening for relevant objects
    • ChangeLog
    • EventLog
  • Make generation of updated docker image part of successful CI pipeline
  • Implement pagination for all list requests
    • Refactor code to support lists
    • Refactor code for readability
    • Standardise the controller/service code
  • Finalise File->CaseFile expansion and Record->registryEntry/recordNote expansion
  • Improved Continuous Integration (CI) approach via gitlab
  • Changed conversion approach to generate tagged PDF documents
  • Updated dependencies
    • For security reasons
    • Brought codebase to spring-boot version 2.5.0
    • Remove import of unnecessary dependencies
    • Remove non-used metrics classes
  • Added new analysis to CI including
  • Implemented storing of Keyword
  • Implemented storing of Screening and ScreeningMetadata
  • Improved OData support
    • Better support for inheritance in queries where applicable
    • Brought in more OData tests
    • Improved OData/hibernate understanding of queries
    • Implement $count, $orderby
    • Finalise $top and $skip
    • Make sure & is used between query parameters
  • Improved testing in codebase
    • A new approach for integration tests to make tests more readable
    • Introduce tests in parallel with code development for TDD approach
    • Remove test that required particular access to storage
  • Implement case-handling process from received email to case-handler
    • Develop required GUI elements (digital postroom from email)
    • Introduced leader, quality control and postroom roles
  • Make PUT requests return 200 OK not 201 CREATED
  • Make DELETE requests return 204 NO CONTENT not 200 OK
  • Replaced 'oppdatert*' with 'endret*' everywhere to match latest spec
  • Upgrade Gitlab CI to use python > 3 for CI scripts
  • Bug fixes
    • Fix missing ALLOW
    • Fix reading of objects from jar file during start-up
    • Reduce the number of warnings in the codebase
    • Fix delete problems
    • Make better use of cascade for "leaf" objects
    • Add missing annotations where relevant
    • Remove the use of ETAG for delete
    • Fix missing/wrong/broken rels discovered by runtest
    • Drop unofficial convertFil (konverterFil) end point
    • Fix regex problem for dateTime
    • Fix multiple static analysis issues discovered by coverity
    • Fix proxy problem when looking for object class names
    • Add many missing translated Norwegian to English (internal) attribute/entity names
    • Change UUID generation approach to allow code to also set a value
    • Fix problem with Part/PartParson
    • Fix problem with empty OData search results
    • Fix metadata entity domain problem
  • General Improvements
    • Makes future refactoring easier as coupling is reduced
    • Allow some constant variables to be set from property file
    • Refactor code to make reflection work better across codebase
    • Reduce the number of @Service layer classes used in @Controller classes
    • Be more consistent on naming of similar variable types
    • Start printing rels/href if they are applicable
    • Cleaner / standardised approach to deleting objects
    • Avoid concatenation when using StringBuilder
    • Consolidate code to avoid duplication
    • Tidy formatting for a more consistent reading style across similar class files
    • Make throw a log.error message not a log.info message
    • Make throw print the log value rather than printing in multiple places
    • Add some missing pronom codes
    • Fix time formatting issue in Gitlab CI
    • Remove stale / unused code
    • Use only UUID datatype rather than combination String/UUID for systemID
    • Mark variables final and @NotNull where relevant to indicate intention
  • Change Date values to DateTime to maintain compliance with Noark 5 standard
  • Domain model improvements using Hypersistence Optimizer
    • Move @Transactional from class to methods to avoid borrowing the JDBC Connection unnecessarily
    • Fix OneToOne performance issues
    • Fix ManyToMany performance issues
    • Add missing bidirectional synchronization support
    • Fix ManyToMany performance issue
  • Make List<> and Set<> use final-keyword to avoid potential problems during update operations
  • Changed internal URLs, replaced "hateoas-api" with "api".
  • Implemented storing of Precedence.
  • Corrected handling of screening.
  • Corrected _links collection returned for list of mixed entity types to match the specific entity.
  • Improved several internal structures.

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.oftc.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st May 2021

Yesterday morning I got a warning call from the Debian quality control system that the VLC bittorrent plugin was due to be removed because of a release critical bug in one of its dependencies. As you might remember, this plugin makes VLC able to stream videos directly from a bittorrent source using both torrent files and magnet links, similar to using an HTTP source. I believe such protocol support is a vital feature in VLC, allowing efficient streaming from sources such as the almost 7 million movies in the Internet Archive.

The dependency in question was the unmaintained libtorrent-rasterbar package, and the bug blocked its Python library from working properly. As I did not want Bullseye to be released without bittorrent support in VLC, I set out to check the status and track down a fix for the problem. Luckily the issue had already been identified and fixed upstream, providing everything needed. All I needed to do was to fetch the Debian git repository, extract and trim the patch from upstream, and apply it to the Debian package for upload.

The fixed library was uploaded yesterday evening. But that is not enough to get it into Bullseye, as Debian is currently in package freeze to prepare for the next stable release. Only non-critical packages with an autopkgtest setup included, in other words able to validate automatically that the package is working, are allowed to migrate automatically into the next release at this stage, and the unmaintained libtorrent-rasterbar lacks such testing, thus needing a manual override. I am happy to report that such a manual override was approved a few minutes ago, significantly increasing the chance of VLC bittorrent streaming being available out of the box also for Debian Bullseye users. A bit too close a shave for my liking, as the Bullseye release is most likely just a few days away, and this did feel like the package was saved by the bell. I am so glad the warning email showed up in time for me to handle the issue, and a big thanks go to the Debian Release team for the quick feedback on #debian-release and their swift unblocking.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

27th February 2021

I have neglected the Valutakrambod library for a while, but decided this weekend to give it a face lift. I fixed a few minor glitches in several of the service drivers, where the API had changed since I last looked at the code. I also added support for fetching the order book from the newcomer Norwegian Bitcoin Exchange.

I also decided to migrate the project from github to gitlab in the process. If you want a Python library for talking to various currency exchanges, check out the code for valutakrambod.

This is what the output from 'bin/btc-rates-curses -c' looked like a few minutes ago:

           Name Pair           Bid         Ask Spread Ftcd    Age   Freq
       Bitfinex BTCEUR  39229.0000  39246.0000   0.0%   44     44    nan
        Bitmynt BTCEUR  39071.0000  41048.9000   4.8%   43     74    nan
         Bitpay BTCEUR  39326.7000         nan   nan%   39    nan    nan
       Bitstamp BTCEUR  39398.7900  39417.3200   0.0%    0      0      1
           Bl3p BTCEUR  39158.7800  39581.9000   1.1%    0    nan      3
       Coinbase BTCEUR  39197.3100  39621.9300   1.1%   38    nan    nan
         Kraken+BTCEUR  39432.9000  39433.0000   0.0%    0      0      0
        Paymium BTCEUR  39437.2100  39499.9300   0.2%    0   2264    nan
        Bitmynt BTCNOK 409750.9600 420516.8500   2.6%   43     74    nan
         Bitpay BTCNOK 410332.4000         nan   nan%   39    nan    nan
       Coinbase BTCNOK 408675.7300 412813.7900   1.0%   38    nan    nan
        MiraiEx BTCNOK 412174.1800 418396.1500   1.5%   34    nan    nan
            NBX BTCNOK 405835.9000 408921.4300   0.8%   33    nan    nan
       Bitfinex BTCUSD  47341.0000  47355.0000   0.0%   44     53    nan
         Bitpay BTCUSD  47388.5100         nan   nan%   39    nan    nan
       Coinbase BTCUSD  47153.6500  47651.3700   1.0%   37    nan    nan
         Gemini BTCUSD  47416.0900  47439.0500   0.0%   36    336    nan
         Hitbtc BTCUSD  47429.9900  47386.7400  -0.1%    0      0      0
         Kraken+BTCUSD  47401.7000  47401.8000   0.0%    0      0      0
  Exchangerates EURNOK     10.4012     10.4012   0.0%   38  76236    nan
     Norgesbank EURNOK     10.4012     10.4012   0.0%   31  76236    nan
       Bitstamp EURUSD      1.2030      1.2045   0.1%    2      2      1
  Exchangerates EURUSD      1.2121      1.2121   0.0%   38  76236    nan
     Norgesbank USDNOK      8.5811      8.5811   0.0%   31  76236    nan

Yes, I noticed the negative spread on Hitbtc. Either I fail to understand their Websocket API, or they are sending bogus data. I've seen the same with Kraken, and suspect there is something wrong with the data they send.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
12th January 2021

After a lot of hard work by its maintainer Alexandre Viau and others, the decentralized communication platform Jami (earlier known as Ring) managed to get its latest version into Debian Testing. Several of its dependencies have caused build and propagation problems, which all seem to be solved now.

In addition to the fact that Jami is decentralized, similar to how bittorrent is decentralized, I first of all like how it is not connected to external IDs like phone numbers. This allows me to set up computers to send me notifications using Jami without having to get a phone number for each computer. Automatic notification via Jami is also made trivial thanks to the provided client side API (as a DBus service). Here is my Bourne shell script demonstrating how to let any system send a message to any Jami address. It will create a new identity before sending the message, if no Jami identity exists already:

#!/bin/sh
#
# Usage: $0 <jami address> <message>
#
# Send <message> to <jami address>, create local jami account if
# missing.
#
# License: GPL v2 or later at your choice
# Author: Petter Reinholdtsen


if [ -z "$HOME" ] ; then
    echo "error: missing \$HOME, required for dbus to work"
    exit 1
fi

# First, get dbus running if not already running
DBUSLAUNCH=/usr/bin/dbus-launch
PIDFILE=/run/asterisk/dbus-session.pid
if [ -e $PIDFILE ] ; then
    . $PIDFILE
    if ! kill -0 $DBUS_SESSION_BUS_PID 2>/dev/null ; then
        unset DBUS_SESSION_BUS_ADDRESS
    fi
fi
if [ -z "$DBUS_SESSION_BUS_ADDRESS" ] && [ -x "$DBUSLAUNCH" ]; then
    DBUS_SESSION_BUS_ADDRESS="unix:path=$HOME/.dbus"
    dbus-daemon --session --address="$DBUS_SESSION_BUS_ADDRESS" --nofork --nopidfile --syslog-only < /dev/null > /dev/null 2>&1 3>&1 &
    DBUS_SESSION_BUS_PID=$!
    (
        echo DBUS_SESSION_BUS_PID=$DBUS_SESSION_BUS_PID
        echo DBUS_SESSION_BUS_ADDRESS=\""$DBUS_SESSION_BUS_ADDRESS"\"
        echo export DBUS_SESSION_BUS_ADDRESS
    ) > $PIDFILE
    . $PIDFILE
fi

dringop() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

dringopreply() {
    part="$1"; shift
    op="$1"; shift
    dbus-send --session --print-reply \
        --dest="cx.ring.Ring" /cx/ring/Ring/$part cx.ring.Ring.$part.$op $*
}

firstaccount() {
    dringopreply ConfigurationManager getAccountList | \
      grep string | awk -F'"' '{print $2}' | head -n 1
}

account=$(firstaccount)

if [ -z "$account" ] ; then
    echo "Missing local account, trying to create it"
    dringop ConfigurationManager addAccount \
      dict:string:string:"Account.type","RING","Account.videoEnabled","false"
    account=$(firstaccount)
    if [ -z "$account" ] ; then
        echo "unable to create local account"
        exit 1
    fi
fi

# Not using dringopreply to ensure $2 can contain spaces
dbus-send --print-reply --session \
  --dest=cx.ring.Ring \
  /cx/ring/Ring/ConfigurationManager \
  cx.ring.Ring.ConfigurationManager.sendTextMessage \
  string:"$account" string:"$1" \
  dict:string:string:"text/plain","$2" 

If you want to check it out yourself, visit the Jami project page to learn more, and install the latest Jami client from Debian Unstable or Testing.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th October 2020

I am happy to report that we finally made it! Norwegian Bokmål became the first translation published on paper of the new Buster based edition of "The Debian Administrator's Handbook". The print proofreading copy arrived some days ago, and it looked good, so now the book is approved for general distribution. This updated paperback edition is available from lulu.com. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as well.

I am very happy to wrap up this Creative Commons licensed project, which concludes several months of work by several volunteers. The number of Linux related books published in Norwegian is small, and I really hope this one will gain many readers, as it is packed with deep knowledge of Linux and the Debian ecosystem. The book will be available from various Internet book stores like Amazon and Barnes & Noble soon, but I recommend buying "Håndbok for Debian-administratoren" directly from the source at Lulu.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11th September 2020

Thanks to the good work of several volunteers, the updated edition of the Norwegian translation of "The Debian Administrator's Handbook" is now almost complete. After many months of proofreading, I consider the translation complete enough for us to move to the next step, and have asked for the print version to be prepared and sent off to the print on demand service lulu.com. It is still not too late to fix any incorrect translations you find on the hosted Weblate service, but it will be soon. :) You can check out the Buster edition on the web until the print edition is ready.

The book will be for sale on lulu.com and in various web book stores, with links available from the book web site linked to above. I hope a lot of readers find it useful.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th July 2020

Three years ago, the first Norwegian Bokmål edition of "The Debian Administrator's Handbook" was published, based on Debian Jessie. Now a new and updated version based on Buster is getting ready. Work on the updated Norwegian Bokmål edition has been going on for a few months now, and yesterday we reached the first milestone, with 100% of the text translated. A lot of proofreading remains, of course, but a major step towards a new edition has been taken.

The book is translated by volunteers, and we would love to get some help with the proofreading. The translation uses the hosted Weblate service, and we welcome everyone to have a look and submit improvements and suggestions. There is also a proofreaders' PDF available on request; get in touch if you want to help out that way.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

6th June 2020

As a member of the Norwegian Unix User Group, I have the pleasure of receiving the USENIX magazine ;login: several times a year. I rarely have time to read all the articles, but try to at least skim through them all as there is a lot of nice knowledge passed on there. I even carry the latest issue with me most of the time to try to get through all the articles when I have a few spare minutes.

The other day I came across a nice article titled "The Secure Socket API: TLS as an Operating System Service" with a marvellous idea I hope can make it all the way into the POSIX standard. The idea is as simple as it is powerful. By introducing a new socket() option IPPROTO_TLS to use TLS, and a system wide service to handle setting up TLS connections, one both makes it trivial to add TLS support to any program currently using the POSIX socket API, and gains system wide control over certificates, TLS versions and encryption systems used. Instead of doing this:

int socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);

the program code would be doing this:

int socket = socket(PF_INET, SOCK_STREAM, IPPROTO_TLS);

According to the ;login: article, converting a C program to use TLS would normally modify only 5-10 lines in the code, which is amazing when compared to using for example the OpenSSL API.

The project has set up the https://securesocketapi.org/ web site to spread the idea, and the code for a kernel module and the associated system daemon is available from two github repositories: ssa and ssa-daemon. Unfortunately there is no explicit license information with the code, so its copyright status is unclear. A request to clarify this has been unanswered since 2018-08-17.

I love the idea of extending socket() to gain TLS support, and understand why it is an advantage to implement this as a kernel module and system wide service daemon, but I cannot help thinking that it would be a lot easier to get projects to move to this way of setting up TLS if it was done with a user space approach, where programs wanting to use this API could simply link with a wrapper library.
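
To illustrate the user space wrapper idea, here is a minimal sketch in Python, using only the standard library. The tls_connect() helper is hypothetical; the point is that the application changes a single call, while certificate validation, protocol versions and cipher policy are decided centrally inside the wrapper:

import socket
import ssl

def tls_connect(host, port):
    # System wide policy lives here: create_default_context() enables
    # certificate validation, hostname checking and modern TLS versions.
    context = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# The application side only swaps its plain connect call for tls_connect().
conn = tls_connect("www.example.org", 443)
conn.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.org\r\nConnection: close\r\n\r\n")
print(conn.recv(200))
conn.close()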

I recommend you check out this simple and powerful approach to more secure network connections. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

24th May 2020

I am very happy to report that a more reliable VLC bittorrent plugin was just uploaded into Debian. This fixes a couple of crash bugs in the plugin, hopefully making the VLC experience even better when streaming directly from a bittorrent source. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

It also supports magnet links and local .torrent files.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

12th May 2020

It has been way too long since my last interview, but as the Debian Edu / Skolelinux community is still active, and new people keep showing up on the IRC channel #debian-edu and the debian-edu mailing list, I decided to give it another go. I was hoping someone else might pick up the idea and run with it, but this has not happened as far as I can tell, so here we are… This time the announcement of a new free software tool to create a school year book triggered my interest, and I decided to learn more about its author.

Who are you, and how do you spend your days?

My name is Yvan MASSON, I live in France. I have my own one person business in computer services. The work consists of visiting my customers (people's homes, local authorities, small businesses) to give advice, install computers and software, fix issues, and provide computing usage training. I spend the rest of my time enjoying my family and promoting free software.

What is your approach for promoting free software?

When I think that free software could be suitable for someone, I explain what it is, with simple words, give a few known examples, and explain that while there is no fee, it is a viable alternative in many situations. Most people are receptive when you explain how it is better (I simplify the arguments here, I know it is not so simple): Linux works on older hardware, there are no viruses, and the software can be audited to ensure the user is not spied upon. I think the most important thing is to keep a clear but moderate tone: when you try to convince too much, people feel attacked and stop listening.

How did you get in contact with the Skolelinux / Debian Edu project?

I can not remember how I first heard of Skolelinux / Debian Edu, but probably on planet.debian.org. As I have been working for a school, I have an interest in this type of project.

The school I am involved in is a school for "children" between 14 and 18 years old. The French government has recommended free software since 2012, but they do not always use free software themselves. The school computers are still using the Windows operating system, but all of them have the classic set of free software: Firefox ESR, LibreOffice (with the excellent extension Grammalecte that indicates French grammatical errors), SumatraPDF, Audacity, 7zip, KeePass2, VLC, GIMP, Inkscape…

What do you see as the advantages of Skolelinux / Debian Edu?

It is free software! Built on Debian, I am sure that users are not spied upon, and that it can run on low end hardware. This last point is very important, because we really need to improve "green IT". I do not know enough about Skolelinux / Debian Edu to tell how it is better than another free software solution, but what I like is the "all in one" solution: everything has been thought of and prepared to ease installation and usage.

I like Free Software because I hate using something that I can not understand. I do not say that I can understand everything nor that I want to understand everything, but knowing that someone / some company intentionally prevents me from understanding how things work is really unacceptable to me.

Secondly, and more importantly, free software is a requirement to prevent abuses regarding human rights and environmental care. Humanity can not rely on tools that are in the hands of a small group of people.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Again, I don't know this project well enough. Maybe a dedicated website? The Debian wiki works well for documentation, but is not very appealing to someone discovering the project. Also, as Skolelinux / Debian Edu uses OpenLDAP, it probably means that Windows workstations cannot use centralized authentication. Maybe the project could use Samba as an Active Directory domain controller instead, allowing Windows desktop usage when necessary.

(Editor's note: in fact Windows workstations can use centralized authentication in a Debian Edu setup, at least for some versions of Windows, but the fact that this is not well known can be seen as an indication of the need for better documentation and marketing. :)

Which free software do you use daily?

Nothing original: Debian testing/sid with Gnome desktop, Firefox, Thunderbird, LibreOffice…

Which strategy do you believe is the right one to use to get schools to use free software?

Every effort to spread free software into schools is important, whatever it is. But I think, at least where I live, that IT professionals maintaining schools networks are still very "Microsoft centric". Schools will use any working solution, but they need people to install and maintain it. How to make these professionals sensitive about free software and train them with solutions like Debian Edu / Skolelinux is a really good question :-)

8th May 2020

Half a year ago, I wrote about the Jami communication client, capable of peer-to-peer encrypted communication. It handles messages, audio and video. It uses distributed hash tables instead of central infrastructure to connect its users to each other, which in my book is a plus. I mentioned briefly that it could also work as a SIP client, which came in handy when the higher educational sector in Norway started to promote Zoom as its video conferencing solution. I am reluctant to use the official Zoom client software, due to their copyright license clauses prohibiting users from reverse engineering it (for example to check its security) and from benchmarking it, and thus prefer to connect to Zoom meetings with free software clients.

Jami worked OK as a SIP client to Zoom as long as there was no password set on the room. The Jami daemon leaks memory like crazy (approximately 1 GiB a minute) when I am connected to the video conference, so I had to restart the client every 7-10 minutes, which is not great. I tried to get other SIP Linux clients to work without success, so I decided I would have to live with this wart until someone managed to fix the leak in the dring code base. But another problem showed up once the rooms were password protected. I could not get my dial tone signaling through from Jami to Zoom, and dial tone signaling is used to enter the password when connecting to Zoom. I tried a lot of different permutations of my Jami and Asterisk setup to try to figure out why the signaling did not get through, only to finally discover that the fundamental problem seems to be that Zoom is simply not able to receive dial tone signaling when connecting via SIP. There seems to be nothing wrong with the Jami and Asterisk end; it is simply broken on the Zoom end. I got help from a very skilled VoIP engineer figuring out this last part. And being a very skilled engineer, he was also able to locate a solution for me. Or to be exact, a workaround that solves my initial problem of connecting to password protected Zoom rooms using Jami.

So, how do you do this, I am sure you are wondering by now. The trick is already documented by Zoom, and it is to modify the SIP address to include the room password. What is most surprising about this is that the automatically generated email from Zoom with instructions on how to connect via SIP does not mention it. The SIP address to use normally consists of the room ID (a number), an @ character and the IP address of the Zoom SIP gateway. But Zoom understands a lot more than just the room ID in front of the at sign. The format is "[Meeting ID].[Password].[Layout].[Host Key]", and here you can see how you can both enter the password, control the layout (full screen, active presence and gallery) and specify the host key to start the meeting. The full SIP address entered into Jami to provide the password will then look like this (all using made up numbers):

sip:657837644.522827@192.168.169.170

Now if only Jami would reduce its memory usage, I could even recommend this setup to others. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

29th April 2020

Curiosity got the better of me when Slashdot reported that New Jersey was desperately looking for COBOL programmers, and a few days later it was reported that IBM was also trying to locate COBOL programmers.

I thus decided to have a look at free software alternatives to learn COBOL, and had the pleasure of finding GnuCOBOL already in Debian. It used to be called Open Cobol, and is a "compiler" transforming COBOL code to C or C++ before giving it to GCC or Visual Studio to build binaries.

I managed to get in touch with upstream, and was impressed with the quick response, and also was happy to see a new Debian maintainer taking over when the original one recently asked to be replaced. A new Debian upload was done as recently as yesterday.

Using the Debian package, I was able to follow a simple COBOL introduction and write and run simple COBOL programs. It was fun to learn a new programming language. If you want to test it for yourself, the GnuCOBOL Wikipedia page has a few simple examples to get you started.

As I do not have much experience with COBOL, I do not know how standards compliant it is, but it claims to pass most tests from the COBOL test suite, which sounds good to me. It is nice to know it is possible to learn COBOL using software without any usage restrictions, and I am very happy that such a nice free software project is available. If you, like me, are curious about COBOL, check it out.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd March 2020

Today, after many months of development, a new release of the Nikita Noark 5 core project was finally announced on the project mailing list. The Nikita free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.5 since version 0.4 (see the email linked above for links to a demo site):

  • Updated to the Noark 5 version 5.0 API specification.
    • Changed formatting of _links from [] to {} to match the IETF draft on JSON HAL.
    • Merged Registrering and Basisregistrering from version 4 into a combined Registrering.
    • DokumentObjekt is now subtype of ArkivEnhet.
    • Introducing new entity Arkivnotat.
    • Changed all relation keys to use /v5/ instead of /v4/.
    • Corrected to use new official relation keys when possible.
    • Renamed Sakspart to Part and connect it to Mappe, Registrering and Dokumentbeskrivelse instead of only Saksmappe.
    • Moved Korrespondansepart connection from Journalpost to Registrering.
    • Moved Part and Korrespondansepart from package sakarkiv to arkivstruktur.
    • Renamed presedensstatus to presedensStatus.
    • Use new JSON content-type "application/vnd.noark5+json".
    • Updated prepopulated format list to use PRONOM codes.
    • Implemented endpoint for system information.
    • Implemented national identifiers for both file and record.
    • Implemented comments.
    • Implemented sign off.
    • Implemented conversion.
  • Improved/implemented OData search and paging support for more entities.
  • No longer exposes attribute Dokumentobjekt.referanseDokumentfil, one should use the relation in _links instead.
  • Corrected relation keys under https://rel.arkivverket.no/noark5/v5/api/administrasjon/, replacing 'administrasjon' with 'admin'.
  • Fixed several security and stability issues discovered by Coverity.
  • Corrected handling ETag errors, now return code 409.
  • Improved handling of Kryssreferanse.
  • Changed internal database model to use UUID/SystemID as primary keys in tables.
  • Changed internal database table names to use package prefix.
  • Changed time zone handling for date and datetime attributes, to be more according to the new definition in the API specification.
  • Change revoke-token to only drop token on POST requests, not GET.
  • Updated to newer Spring version.
  • Changed primary key and URL component for metadata code lists to use the 'kode' value instead of a SystemID.
  • Corrected implementation of Part and Sakspart.
  • Changed instance lists with subtypes (like .../registrering/ and .../mappe/) to include the attributes and _links entries for the subtype in the supertype lists.
  • Adjusted _links relations to make it possible to figure out the entity of an instance using the self->href->relation key lookup method.
  • Fixed several end points to make sure GET, PUT, POST and DELETE match each other.
  • Updated DELETE endpoints to work with UUID based entity identifiers.
  • Restructured code to use more common URL related constants in entry point values and replace @RequestMapping with method specific annotations.
  • Added first unit test code.
  • Updated web GUI to work with the updated API.
  • Changed integer fields, enforce them as numeric.
  • Rewrote and simplify metadata handling to use common service and controller code instead of duplicating for each type.
  • Implemented the remaining metadata types.
  • Changed Country list source from Wikipedia to Debian iso-codes and updated the list of Countries.
  • Many many corrections and improvements.

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

27th February 2020

On Tuesday, two scientific articles we have been working on for a while were finally accepted for publication in Records Management Journal. We are still waiting for the assigned DOI URLs to start working, but you can have a look at the LaTeX originals here.

The first article is "A record-keeping approach to managing IoT-data for government agencies" (DOI 10.1108/RMJ-09-2019-0050) by Thomas Sødring, Petter Reinholdtsen and David Massey, and sketches some approaches for storing measurement data (aka Internet of Things sensor data) in an archive, thus providing a well defined mechanism for screening and deletion of the information.

The second article is "Publishing and using record-keeping structural information in a blockchain" (DOI 10.1108/RMJ-09-2019-0056) by Thomas Sødring, Petter Reinholdtsen and Svein Ølnes, where we describe a way for third parties to validate authenticity and thus improve trust in the records kept in an archive.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2020-04-26: Initially managed to swap the DOI numbers. Fixed it.

Tags: english, noark5.
7th December 2019

When asked to accept terms of use and privacy policies that would remove rights I otherwise had, or that contain unreasonable terms undermining my privacy, I turn the service down. I simply do not have the conscience to accept terms I have no intention of upholding. But how are the system and service providers to know how many people they scared away? Normally I just quietly walk away. But today, I tried a new approach. I sent the following email (removing the specifics, as I am not out to target the specific service in question) to the service provider I decided not to use, to at least give them one data point on how many users are unhappy with their terms:

From: Petter Reinholdtsen
Subject: When terms of use turn users away
To: [contact@some.site]
Date: Sat, 07 Dec 2019 16:30:56 +0100

Dear [Site Owner],

I was eager to test the system, as it seemed like a fun and interesting application of [some] technology, but after reading the terms of use and privacy policy on <URL: https://www.[some.site]/terms-of-use > and <URL: https://www.[some.site]/privacy-policy > I want you to know that I decided to turn away. There were several provisions in the terms and policy turning me off, but the final term that convinced me was being asked to sign away my right to reverse engineer.

--
Happy hacking
Petter Reinholdtsen

I do not expect much to come out of it, but I am sharing it here in case others want to give something similar a try too. If companies discover their terms scare away enough people, perhaps they will improve them...

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25th November 2019

Four years ago, I did a back-of-the-envelope calculation of how much it would cost to store audio recordings of all the phone calls in Norway, and came up with NOK 2.1 million / EUR 250 000 for the year 2013. It is time to repeat the calculation using updated numbers. The calculation is based on how much data storage is needed for each minute of audio and how many minutes all the calls in Norway sum up to, multiplied by the cost of data storage.

The number of phone call minutes for 2018 was fetched from the NKOM statistics site. For 2018, land line calls are listed with 434 238 000 minutes, while mobile phone calls are listed with 7 542 006 000 minutes. The total is thus 7 976 244 000 minutes. For simplicity, I decided to ignore any advances in audio compression over the last four years, and continue to assume 60 Kbytes/min as last time.

Storage prices still vary a lot, but as last time, I decided to take a reasonably big and cheap hard drive and double its price to take the surrounding costs into account. A 10 TB disk costs less than 4500 NOK / 450 EUR these days, and doubling it gives 9000 NOK per 10 TB.

So, with the parameters in place, let's update the old table estimating the cost for calls in a given year:

Year  Call minutes    Size     Price in NOK / EUR
2005  24 000 000 000  1.3 PiB  1 170 000 / 117 000
2012  18 000 000 000  1.0 PiB    900 000 /  90 000
2013  17 000 000 000  950 TiB    855 000 /  85 500
2018   7 976 244 000  445 TiB    401 100 /  40 110
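
For those who want to check the numbers, here is a minimal Python sketch of the 2018 calculation (my own, not from NKOM; it treats the 10 TB disk as 10 TiB for simplicity, matching the table):

minutes = 434_238_000 + 7_542_006_000  # land line + mobile calls, 2018
bytes_total = minutes * 60 * 1024      # assuming 60 KiB per minute of audio
tib = bytes_total / 2**40
nok = tib / 10 * 9000                  # 9000 NOK per 10 TiB of storage
print(f"{tib:.0f} TiB, about {nok:.0f} NOK")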

Both the cost of storage and the number of phone call minutes have dropped since last time, bringing the cost down to a level where I guess even small organizations can afford to store the audio recordings from every phone call made in Norway in a year. Of course, this is just the cost of buying the storage equipment. Maintenance needs to be included as well, but the volume of a single year is about a single rack of hard drives, which is not much more than I could fit in my own home. I wonder how much my electricity bill would rise if I had that kind of storage? I doubt it would be more than a few tens of thousands of NOK per year.

1st September 2019

While working on identifying and counting movies that can be legally shared on the Internet, I also looked at the Norwegian movies listed in IMDb. So far I have identified 54 candidates published before 1940 that might no longer be protected by Norwegian copyright law. Of these, only 29 are available, at least in part, from the Norwegian National Library. It can be assumed that the remaining 25 movies are lost. It seems most useful to identify the copyright status of the movies that are not lost. To verify that a movie is really no longer protected, one needs to verify the list of copyright holders and figure out if and when they died. I've been able to identify some of them, but for some it is hard to figure out when they died.

This is the list of 29 movies both available from the library and possibly no longer protected by copyright law. The year range (1909-1979 on the first line) is the year of publication and the last year of copyright protection.

1909-1979 ( 70 year) NSB Bergensbanen 1909 - http://www.imdb.com/title/tt0347601/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons likfærd - http://www.imdb.com/title/tt9299304/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons begravelse - http://www.imdb.com/title/tt9299300/
1912-1998 ( 86 year) Roald Amundsens Sydpolsferd (1910-1912) - http://www.imdb.com/title/tt9237500/
1913-2006 ( 93 year) Roald Amundsen på sydpolen - http://www.imdb.com/title/tt0347886/
1917-1987 ( 70 year) Fanden i nøtten - http://www.imdb.com/title/tt0346964/
1919-2018 ( 99 year) Historien om en gut - http://www.imdb.com/title/tt0010259/
1920-1990 ( 70 year) Kaksen på Øverland - http://www.imdb.com/title/tt0011361/
1923-1993 ( 70 year) Norge - en skildring i 6 akter - http://www.imdb.com/title/tt0014319/
1925-1997 ( 72 year) Roald Amundsen - Ellsworths flyveekspedition 1925 - http://www.imdb.com/title/tt0016295/
1925-1995 ( 70 year) En verdensreise, eller Da knold og tott vaskede negrene hvite med 13 sæpen - http://www.imdb.com/title/tt1018948/
1926-1996 ( 70 year) Luftskibet 'Norge's flugt over polhavet - http://www.imdb.com/title/tt0017090/
1926-1996 ( 70 year) Med 'Maud' over Polhavet - http://www.imdb.com/title/tt0017129/
1927-1997 ( 70 year) Den store sultan - http://www.imdb.com/title/tt1017997/
1928-1998 ( 70 year) Noahs ark - http://www.imdb.com/title/tt1018917/
1928-1998 ( 70 year) Skjæbnen - http://www.imdb.com/title/tt1002652/
1928-1998 ( 70 year) Chefens cigarett - http://www.imdb.com/title/tt1019896/
1929-1999 ( 70 year) Se Norge - http://www.imdb.com/title/tt0020378/
1929-1999 ( 70 year) Fra Chr. Michelsen til Kronprins Olav og Prinsesse Martha - http://www.imdb.com/title/tt0019899/
1930-2000 ( 70 year) Mot ukjent land - http://www.imdb.com/title/tt0021158/
1930-2000 ( 70 year) Det er natt - http://www.imdb.com/title/tt1017904/
1930-2000 ( 70 year) Over Besseggen på motorcykel - http://www.imdb.com/title/tt0347721/
1931-2001 ( 70 year) Glimt fra New York og den Norske koloni - http://www.imdb.com/title/tt0021913/
1932-2007 ( 75 year) En glad gutt - http://www.imdb.com/title/tt0022946/
1934-2004 ( 70 year) Den lystige radio-trio - http://www.imdb.com/title/tt1002628/
1935-2005 ( 70 year) Kronprinsparets reise i Nord Norge - http://www.imdb.com/title/tt0268411/
1935-2005 ( 70 year) Stormangrep - http://www.imdb.com/title/tt1017998/
1936-2006 ( 70 year) En fargesymfoni i blått - http://www.imdb.com/title/tt1002762/
1939-2009 ( 70 year) Til Vesterheimen - http://www.imdb.com/title/tt0032036/
To be sure which of these can be legally shared on the Internet, in addition to verifying that the right holders list is complete, one needs to verify the death year of these persons:
Bjørnstjerne Bjørnson (dead 1910) - http://www.imdb.com/name/nm0085085/
Gustav Adolf Olsen (missing death year) - http://www.imdb.com/name/nm0647652/
Gustav Lund (missing death year) - http://www.imdb.com/name/nm0526168/
John W. Brunius (dead 1937) - http://www.imdb.com/name/nm0116307/
Ola Cornelius (missing death year) - http://www.imdb.com/name/nm1227236/
Oskar Omdal (dead 1927) - http://www.imdb.com/name/nm3116241/
Paul Berge (missing death year) - http://www.imdb.com/name/nm0074006/
Peter Lykke-Seest (dead 1948) - http://www.imdb.com/name/nm0528064/
Roald Amundsen (dead 1928) - https://www.imdb.com/name/nm0025468/
Sverre Halvorsen (dead 1936) - http://www.imdb.com/name/nm1299757/
Thomas W. Schwartz (missing death year) - http://www.imdb.com/name/nm2616250/

Perhaps you can help me figure out the death year of those missing it, or identify right holders missing from IMDb? It would be nice to have a definite list of Norwegian movies that are legal to share on the Internet.
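
The year ranges above follow from the Norwegian copyright rule of protection lasting 70 years after the end of the death year of the last surviving right holder. Here is a small Python sketch of the calculation, using a hypothetical helper; when no death years are known, it falls back to 70 years after publication:

def protection_ends(publication_year, death_years):
    # 70 years after the death of the last surviving right holder,
    # falling back to 70 years after publication when none are known.
    if death_years:
        return max(death_years) + 70
    return publication_year + 70

# Roald Amundsens Sydpolsferd (published 1912, Roald Amundsen died 1928)
print(protection_ends(1912, [1910, 1928]))  # 1998, matching the list above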

This is the list of 25 movies not available from the library and possibly no longer protected by copyright law:

1907-2009 (102 year) Fiskerlivets farer - http://www.imdb.com/title/tt0121288/
1912-2018 (106 year) Historien om en moder - http://www.imdb.com/title/tt0382852/
1912-2002 ( 90 year) Anny - en gatepiges roman - http://www.imdb.com/title/tt0002026/
1916-1986 ( 70 year) The Mother Who Paid - http://www.imdb.com/title/tt3619226/
1917-2018 (101 year) En vinternat - http://www.imdb.com/title/tt0008740/
1917-2018 (101 year) Unge hjerter - http://www.imdb.com/title/tt0008719/
1917-2018 (101 year) De forældreløse - http://www.imdb.com/title/tt0007972/
1918-2018 (100 year) Vor tids helte - http://www.imdb.com/title/tt0009769/
1918-2018 (100 year) Lodsens datter - http://www.imdb.com/title/tt0009314/
1919-2018 ( 99 year) Æresgjesten - http://www.imdb.com/title/tt0010939/
1921-2006 ( 85 year) Det nye år? - http://www.imdb.com/title/tt0347686/
1921-1991 ( 70 year) Under Polarkredsens himmel - http://www.imdb.com/title/tt0012789/
1923-1993 ( 70 year) Nordenfor polarcirkelen - http://www.imdb.com/title/tt0014318/
1925-1995 ( 70 year) Med 'Stavangerfjord' til Nordkap - http://www.imdb.com/title/tt0016098/
1926-1996 ( 70 year) Over Atlanterhavet og gjennem Amerika - http://www.imdb.com/title/tt0017241/
1926-1996 ( 70 year) Hallo! Amerika! - http://www.imdb.com/title/tt0016945/
1926-1996 ( 70 year) Tigeren Teodors triumf - http://www.imdb.com/title/tt1008052/
1927-1997 ( 70 year) Rød sultan - http://www.imdb.com/title/tt1017979/
1927-1997 ( 70 year) Søndagsfiskeren Flag - http://www.imdb.com/title/tt1018002/
1930-2000 ( 70 year) Ro-ro til fiskeskjær - http://www.imdb.com/title/tt1017973/
1933-2003 ( 70 year) I kongens klær - http://www.imdb.com/title/tt0024164/
1934-2004 ( 70 year) Eventyret om de tre bukkene bruse - http://www.imdb.com/title/tt1007963/
1934-2004 ( 70 year) Pål sine høner - http://www.imdb.com/title/tt1017966/
1937-2007 ( 70 year) Et mesterverk - http://www.imdb.com/title/tt1019937/
1938-2008 ( 70 year) En Harmony - http://www.imdb.com/title/tt1007975/

Several of these movies completely lack right holder information in IMDb and elsewhere. Without access to a copy of the movie, it is often impossible to get the list of people involved in making it, making it impossible to figure out the correct copyright status.

Not listed here are the movies still protected by copyright law. Their copyright terms vary from 79 to 144 years, according to the information I have available so far. One of the non-lost movies might change status next year: Mustads Mono from 1920. The next one might be Hvor isbjørnen ferdes from 1935, in 2024.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

10th August 2019

The recent announcement from the New York Public Library on its results in identifying books published in the USA that are now in the public domain inspired me to update the scripts I use to track down movies in the public domain. This involved updating the script used to extract lists of movies believed to be in the public domain to work with the latest version of the source web sites. In particular, the new edition of the Retro Film Vault web site now seems to list all the films available from that distributor, bringing the films identified there to more than 12 000 movies, and I was able to connect 46% of these to IMDb titles.

The new total is 16307 IMDB IDs (aka films) in the public domain or creative commons licensed, and unknown status for 31460 movies (possibly duplicates of the 16307).

The complete data set is available from a public git repository, including the scripts used to create it.

Anyway, this is the summary of the 28 collected data sources so far:

 2361 entries (   50 unique) with and 22472 without IMDB title ID in free-movies-archive-org-search.json
 2363 entries (  146 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  299 entries (   32 unique) with and    93 without IMDB title ID in free-movies-cinemovies.json
   88 entries (   52 unique) with and    36 without IMDB title ID in free-movies-creative-commons.json
 3190 entries ( 1532 unique) with and    13 without IMDB title ID in free-movies-fesfilm-xls.json
  620 entries (   24 unique) with and   283 without IMDB title ID in free-movies-fesfilm.json
 1080 entries (  165 unique) with and   651 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   13 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 7410 entries ( 7101 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
 1205 entries (   41 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  163 entries (   22 unique) with and    88 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  103 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (   71 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  248 entries (   85 unique) with and     0 without IMDB title ID in free-movies-manual.json
  158 entries (    4 unique) with and    64 without IMDB title ID in free-movies-mubi.json
   85 entries (    1 unique) with and    23 without IMDB title ID in free-movies-openflix.json
  520 entries (   22 unique) with and   244 without IMDB title ID in free-movies-profilms-pd.json
  343 entries (   14 unique) with and    10 without IMDB title ID in free-movies-publicdomainmovies-info.json
  701 entries (   16 unique) with and   560 without IMDB title ID in free-movies-publicdomainmovies-net.json
   74 entries (   13 unique) with and    60 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   16 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 5506 entries ( 2941 unique) with and  6585 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
  110 entries (    2 unique) with and    29 without IMDB title ID in free-movies-two-movies-net.json
   73 entries (   20 unique) with and   131 without IMDB title ID in free-movies-vodo.json
16307 unique IMDB title IDs in total, 12509 only in one list, 31460 without IMDB title ID

New this time is a list of all the identified IMDb titles, with title, year and running time, provided in free-complete.json. This file also indicates which source was used to conclude the video is free to distribute.
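
Out of curiosity, here is a minimal Python sketch for inspecting the new file; the assumption that free-complete.json holds a single JSON list with one entry per identified title is mine, so check the file for the actual structure:

import json

with open('free-complete.json') as f:
    movies = json.load(f)  # assumed: a JSON list of title entries

print(len(movies), 'identified titles in the data set')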

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th July 2019

Children need to learn to guard their privacy too. To help them, European Digital Rights (EDRi) created a colorful booklet providing information on several privacy related topics, and tips on how to protect one's privacy in the digital age.

The 24 page booklet titled Digital Defenders is available in several languages. Thanks to the valuable contributions from members of Electronic Frontier Norway (EFN) and others, it is also available in Norwegian Bokmål. If you would like to have it available in your language too, contribute via Weblate and get in touch.

But a funny, well written and good looking PDF does not have much impact unless it is read by the right audience. To increase the chance of kids reading it, I am currently assisting EFN in getting copies printed on paper to distribute on the street and in classrooms. Printing the booklet was made possible thanks to a small set of great sponsors. Thank you very much to each and every one of them! I hope to have the printed booklet ready to hand out on Tuesday, when the Norwegian Unix Users Group is organizing its yearly barbecue for geeks and free software zealots in the Oslo area. If you are nearby, feel free to come by and check out the party and the booklet.

If the booklet proves to be a success, it would be great to get more sponsoring and distribute it to every kid in the country. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

19th June 2019

Some years ago, in 2016, I wrote for the first time about the Ring peer-to-peer messaging system. It promised messaging without any central server coordinating the system and without requiring all users to register a phone number or own a mobile phone. Back then, I could not get it to work, and put it aside until it had seen more development. A few days ago I decided to give it another try, and am happy to report that this time I am able to not only send and receive messages, but also place audio and video calls. But only if UDP is not blocked on your network.

The Ring system changed its name to Jami earlier this year. I tried doing a web search for 'ring' when I discovered it for the first time, and can only applaud this change, as it is impossible to find something called Ring among the noise of other uses of that word. Now you can search for 'jami', and this client and the Jami system are the first hits, at least on DuckDuckGo.

Jami will by default encrypt messages as well as audio and video calls, and will try to send them directly between the communicating parties if possible. If this proves impossible (for example if both ends are behind NAT), it will use a central SIP TURN server maintained by the Jami project. Jami can also act as a normal SIP client. If the SIP server is unencrypted, the audio and video calls will also be unencrypted. This is, as far as I know, the only case where Jami will do anything without encryption.

Jami is available for several platforms: Linux, Windows, MacOSX, Android, iOS, and Android TV. It is included in Debian already. Jami also works for those using F-Droid without any Google connections, while Signal does not. The protocol is described in the Ring project wiki. The system uses a distributed hash table (DHT) system (similar to BitTorrent) running over UDP. On one of the networks I use, I discovered Jami failed to work. I tracked this down to the fact that incoming UDP packets going to ports 1-49999 were blocked, and the DHT would pick a random port and end up in the low range most of the time. After talking to the developers, I solved this by enabling the dhtproxy in the settings, thus using TCP to talk to a central DHT proxy instead of peering directly with others. I've been told the developers are working on allowing the DHT to use TCP to avoid this problem. I also ran into a problem when trying to talk to the version of Ring included in Debian Stable (Stretch). Apparently the protocol changed between beta2 and the current version, making these clients incompatible. Hopefully the protocol will not be made incompatible again in the future.

It is worth noting that while looking at Jami and its features, I came across another communication platform I have not tested yet: the Tox protocol and its family of Tox clients. It might become the topic of a future blog post.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11th June 2019

The first book I published, Free Culture by Lawrence Lessig, is still selling a few copies. Not a lot, but enough to have contributed slightly over $500 to the Creative Commons Corporation so far. All the profit is sent there. Most books are still sold via Amazon (83 copies), with Ingram second (49) and Lulu (12) and Hachette (7) as minor channels. Buying directly from Lulu brings the largest cut to Creative Commons. The English edition sold 80 copies so far, the French 59 copies, and the Norwegian only 8 copies. Nothing impressive, but nice to see the work we put in is still being appreciated. The ebook edition is available for free from GitHub.

Title / language        Quantity per period
                        2016     2016     2017     2017     2018     2018     2019
                        jan-jun  jul-dec  jan-jun  jul-dec  jan-jun  jul-dec  jan-may
Culture Libre / French        3        6       19       11        7        6        7
Fri kultur / Norwegian        7        1        0        0        0        0        0
Free Culture / English       14       27       16        9        3        7        3
Total                        24       34       35       20       10       13       10

It is fun to see the French edition being more popular than the English one.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

4th June 2019

Just 15 days ago, I mentioned my submission to IANA to register an official MIME type for the SOSI vector map format. This morning, just an hour ago, I was notified that the MIME type "text/vnd.sosi" is registered for this format. In addition to this registration, my file(1) patch with a pattern matching rule for SOSI files has been accepted into the official source of that program (pending a new release), and I've been told by the team behind PRONOM that the SOSI format will be included in the next release of PRONOM, which they plan to publish this summer, around July.

I am very happy to see all of this fall into place, for use by the Noark 5 Tjenestegrensesnitt implementations.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd June 2019

A while back, a colleague and friend from Debian and the Skolelinux / Debian Edu project approached me, asking if I knew someone who might be interested in helping out with a technology project he was running as a teacher at L'école franco-danoise - the Danish-French school and kindergarten. The kids were building robots, rovers. The story behind it is to build a rover for use on the dark side of the moon, and remote control it. As the travel cost to the final destination was a bit high, and they wanted to test the concept first, he was looking for volunteers to host a rover for the kids to control in a foreign country. I ended up volunteering as a host, and last week the rover arrived. It took a while to arrive after it was built and shipped, because of customs confusion. Luckily we were able to fix it quickly with help from my colleagues at work.

This is what it looked like when the rover arrived. Note the cute eyes looking up at me from the wrapping.

Once the robot arrived, we needed to track down batteries and figure out how to build custom firmware for it with the appropriate wifi settings. I asked a friend if I could get two 18650 batteries from his pile of Tesla batteries (he had them from the wreck of a crashed Tesla), so now the rover is running on Tesla batteries.

Building the rover firmware proved a bit harder, as the code did not work out of the box with the Arduino IDE package in Debian Buster. I suspect this is due to an unsolved license problem with Arduino blocking Debian from upgrading to the latest version. In the end we gave up debugging why the IDE failed to find the required libraries, and ended up using the Arduino Makefile from the arduino-mk Debian package instead. Unfortunately the camera library is missing from the Arduino environment in Debian, so we disabled the camera support in the first firmware build to get something up and running. With this reduced firmware, the robot could be controlled via the controller server, driving around and measuring distance using its internal acoustic sensor.

Next, with some help from my friend in Denmark, who checked the camera library into the gitlab repository for me to use, we were able to build a new and more complete version of the firmware, and the robot is now up and running. This is what the "commander" web page looks like after taking a measurement and a snapshot:

If you want to learn more about this project, you can check out The Dark Side Challenge Hackaday web pages.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, robot.
22nd May 2019

This morning, a new release of the Nikita Noark 5 core project was announced on the project mailing list. The Nikita free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.4 since version 0.3; see the email linked above for links to a demo site:

  • Roll out OData handling to all endpoints where applicable
  • Changed the relation key for "ny-journalpost" to the official one.
  • Better link generation on outgoing links.
  • Tidy up code and make code and approaches more consistent throughout the codebase
  • Update rels to be in compliance with updated version in the interface standard
  • Avoid printing links on empty objects as they can't have links
  • Small bug fixes and improvements
  • Start moving generation of outgoing links to @Service layer so access control can be used when generating links
  • Log exception that was being swallowed so it's traceable
  • Fix name mapping problem
  • Update templated printing so templated should only be printed if it is set true. Requires more work to roll out across entire application.
  • Remove Record->DocumentObject as per domain model of n5v4
  • Add ability to delete lists filtered with OData
  • Return NO_CONTENT (204) on delete as per interface standard
  • Introduce support for ConstraintViolationException exception
  • Make Service classes extend NoarkService
  • Make code base respect X-Forwarded-Host, X-Forwarded-Proto and X-Forwarded-Port
  • Update CorrespondencePart* code to be more in line with Single Responsibility Principle
  • Make package name follow directory structure
  • Make sure Document number starts at 1, not 0
  • Fix issues discovered by FindBugs
  • Update from Date to ZonedDateTime
  • Fix wrong tablename
  • Introduce Service layer tests
  • Improvements to CorrespondencePart
  • Continued work on Class / Classificationsystem
  • Fix feature where authors were stored as storageLocations
  • Update HQL builder for OData
  • Update OData search capability from webpage

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th May 2019

As part of my involvement in the work to standardise a REST based API for Noark 5, the Norwegian archiving standard, I have spent some time over the last few months trying to register a MIME type and PRONOM code for the SOSI file format. The background is that there is a set of formats approved for long term storage and archiving in Norway, and among these, SOSI is the only format missing a MIME type and PRONOM code.

What is SOSI, you might ask? To quote Wikipedia: SOSI is short for Samordnet Opplegg for Stedfestet Informasjon (literally "Coordinated Approach for Spatial Information", but more commonly expanded in English to Systematic Organization of Spatial Information). It is a text based file format for geo-spatial vector information used in Norway. Information about the SOSI format can be found in English on Wikipedia. The specification is available in Norwegian from the Norwegian mapping authority. The SOSI standard, which originated in the beginning of the nineteen eighties, was the inspiration for and formed the basis of the XML based Geography Markup Language.

I have so far written a pattern matching rule for the file(1) unix tool to recognize SOSI files, submitted a request to the PRONOM project to have a PRONOM ID assigned to the format (reference TNA1555078202S60), and today sent a request to IANA to register the "text/vnd.sosi" MIME type for this format (reference IANA #1143144). If all goes well, in a few months anyone implementing the Noark 5 Tjenestegrensesnitt API specification should be able to use an official MIME type and PRONOM code for SOSI files. In addition, anyone using SOSI files on Linux should be able to automatically recognise the format, and web sites handing out SOSI files can begin providing a more specific MIME type. So far, SOSI files have been handed out from web sites using the "application/octet-stream" MIME type, which is just a nice way of stating "I do not know". Soon, we will know. :)
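
As a small illustration, here is a Python sketch of how an application could start using the new registration right away; the .sos file extension is my assumption of the conventional SOSI file suffix:

import mimetypes

# Teach the mimetypes module about the newly registered MIME type,
# instead of falling back to application/octet-stream.
mimetypes.add_type('text/vnd.sosi', '.sos')
print(mimetypes.guess_type('kartdata.sos'))  # ('text/vnd.sosi', None)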

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25th March 2019

As part of my involvement with the Nikita Noark 5 core project, I have been proposing improvements to the API specification created by the National Archives of Norway and helped migrate the text from a version control unfriendly binary format (docx) to Markdown in git. Combined with the migration to a public git repository (on github), this has made it possible for anyone to suggest improvements to the text.

The specification is filled with UML diagrams. I believe the original diagrams were modelled using Sparx Systems Enterprise Architect, and exported as EMF files for import into docx. This approach makes it very hard to track changes using a version control system. To improve the situation I have been looking for a good text based UML format with associated command line free software tools on Linux and Windows, to allow anyone to send in corrections to the UML diagrams in the specification. The tool must be text based to work with git, and work from the command line to be able to generate the diagram images automatically. Finally, it must be free software, to allow anyone, even those who can not accept a non-free software license, to contribute.

I did not know much about free software UML modelling tools when I started. I have used dia and inkscape for simple modelling in the past, but neither is available on Windows, as far as I could tell. I came across a nice list of text mode uml tools, and tested out a few of the tools listed there. The PlantUML tool seemed most promising. After verifying that the package is available in Debian and finding its Java source under a GPL license on github, I set out to test if it could represent the diagrams we needed, i.e. the ones currently in the Noark 5 Tjenestegrensesnitt specification. I am happy to report that it could represent them, even though it has a few warts here and there.

After a few days of modelling I completed the task this weekend. A temporary link to the complete set of diagrams (original and from PlantUML) is available in the github issue discussing the need for a text based UML format, but please note that I lack a sensible tool to convert EMF files to PNGs, so the "original" rendering is not as good as the original in the published PDF.

Here is an example UML diagram, showing the core classes for keeping metadata about archived documents:

@startuml
skinparam classAttributeIconSize 0

!include media/uml-class-arkivskaper.iuml
!include media/uml-class-arkiv.iuml
!include media/uml-class-klassifikasjonssystem.iuml
!include media/uml-class-klasse.iuml
!include media/uml-class-arkivdel.iuml
!include media/uml-class-mappe.iuml
!include media/uml-class-merknad.iuml
!include media/uml-class-registrering.iuml
!include media/uml-class-basisregistrering.iuml
!include media/uml-class-dokumentbeskrivelse.iuml
!include media/uml-class-dokumentobjekt.iuml
!include media/uml-class-konvertering.iuml
!include media/uml-datatype-elektronisksignatur.iuml

Arkivstruktur.Arkivskaper "+arkivskaper 1..*" <-o "+arkiv 0..*" Arkivstruktur.Arkiv
Arkivstruktur.Arkiv o--> "+underarkiv 0..*" Arkivstruktur.Arkiv
Arkivstruktur.Arkiv "+arkiv 1" o--> "+arkivdel 0..*" Arkivstruktur.Arkivdel
Arkivstruktur.Klassifikasjonssystem "+klassifikasjonssystem [0..1]" <--o "+arkivdel 1..*" Arkivstruktur.Arkivdel
Arkivstruktur.Klassifikasjonssystem "+klassifikasjonssystem [0..1]" o--> "+klasse 0..*" Arkivstruktur.Klasse
Arkivstruktur.Arkivdel "+arkivdel 0..1" o--> "+mappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Arkivdel "+arkivdel 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Klasse "+klasse 0..1" o--> "+mappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Klasse "+klasse 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Mappe --> "+undermappe 0..*" Arkivstruktur.Mappe
Arkivstruktur.Mappe "+mappe 0..1" o--> "+registrering 0..*" Arkivstruktur.Registrering
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Mappe
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Dokumentbeskrivelse
Arkivstruktur.Basisregistrering -|> Arkivstruktur.Registrering
Arkivstruktur.Merknad "+merknad 0..*" <--* Arkivstruktur.Basisregistrering
Arkivstruktur.Registrering "+registrering 1..*" o--> "+dokumentbeskrivelse 0..*" Arkivstruktur.Dokumentbeskrivelse
Arkivstruktur.Dokumentbeskrivelse "+dokumentbeskrivelse 1" o-> "+dokumentobjekt 0..*" Arkivstruktur.Dokumentobjekt
Arkivstruktur.Dokumentobjekt *-> "+konvertering 0..*" Arkivstruktur.Konvertering
Arkivstruktur.ElektroniskSignatur -[hidden]-> Arkivstruktur.Dokumentobjekt
@enduml

The format is quite compact, with little redundant information. The text expresses entities and relations, and there is little layout related fluff. One can reuse content by using include files, allowing for consistent naming across several diagrams. The include files can be standalone PlantUML too. Here is the content of media/uml-class-arkivskaper.iuml:

@startuml
class Arkivstruktur.Arkivskaper  {
  +arkivskaperID : string
  +arkivskaperNavn : string
  +beskrivelse : string [0..1]
}
@enduml

This is what the complete diagram for the PlantUML notation above looks like:

A cool feature of PlantUML is that the generated PNG files include the entire original source diagram as text. The source (with include statements expanded) can be extracted using for example exiftool. Another cool feature is that parts of the entities can be hidden after inclusion. This allows using include files with all attributes listed, even for UML diagrams that should not list any attributes.

The diagram also shows some of the warts. Sometimes the layout engine places text labels on top of each other, and sometimes it places the class boxes too close to each other, not leaving room for the labels on the relationship arrows. The former can be worked around by placing extra newlines in the labels (i.e. "\n"). I did not do that here, to be able to demonstrate the issue. I have not found a good way around the latter, so I normally try to reduce the problem by changing from vertical to horizontal links to improve the layout.

All in all, I am quite happy with PlantUML, and very impressed with how quickly its lead developer responds to questions. So far I have received answers to my questions within a few hours of sending an email. I definitely recommend looking at PlantUML if you need to make UML diagrams. Note, PlantUML can draw a lot more than class relations. Check out the documentation for a complete list. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

24th March 2019

Yesterday, a new release of the Nikita Noark 5 core project was announced on the project mailing list. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.3 since version 0.2.1 (from NEWS.md):

  • Improved ClassificationSystem and Class behaviour.
  • Tidied up known inconsistencies between domain model and hateoas links.
  • Added experimental code for blockchain integration.
  • Make token expiry time configurable at upstart from properties file.
  • Continued work on OData search syntax.
  • Started work on pagination for entities, partly implemented for Saksmappe.
  • Finalise ClassifiedCode Metadata entity.
  • Implement mechanism to check if authentication token is still valid. This allows the GUI to return a more sensible message to the user if the token is expired.
  • Reintroduce browse.html page to allow user to browse JSON API using hateoas links.
  • Fix bug in handling file/mappe sequence number. Year change was not properly handled.
  • Update application yml files to be in sync with current development.
  • Stop 'converting' everything to PDF using libreoffice. Only convert the file formats doc, ppt, xls, docx, pptx, xlsx, odt, odp and ods.
  • Continued code style fixing, making code more readable.
  • Minor bug fixes.

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st February 2019

Yesterday, the Kraken virtual currency exchange announced their Websocket service, providing a stream of exchange updates to its clients. Getting updated rates quickly is a good idea, so I used their API documentation and added Websocket support to the Kraken service in Valutakrambod today. The python library can now get updates from Kraken several times per second, instead of only every time the information is polled from the REST API.
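
To illustrate the idea, here is a minimal Python sketch (not using valutakrambod itself) subscribing to ticker updates; the endpoint URL and subscription message follow my reading of the Kraken Websocket documentation, so treat them as assumptions:

import asyncio
import json

import websockets  # third party library, python3-websockets in Debian

async def kraken_ticker():
    # Assumed public endpoint and 'ticker' subscription name.
    async with websockets.connect('wss://ws.kraken.com') as ws:
        await ws.send(json.dumps({
            'event': 'subscribe',
            'pair': ['XBT/EUR'],
            'subscription': {'name': 'ticker'},
        }))
        while True:
            print(json.loads(await ws.recv()))  # one update per message

asyncio.get_event_loop().run_until_complete(kraken_ticker())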

If this sounds interesting to you, the code for valutakrambod is available from github. Here is example output from the example client displaying rates in a curses view:

           Name Pair   Bid         Ask         Spr    Ftcd    Age
 BitcoinsNorway BTCEUR   2959.2800   3021.0500   2.0%   36    nan    nan
       Bitfinex BTCEUR   3087.9000   3088.0000   0.0%   36     37    nan
        Bitmynt BTCEUR   3001.8700   3135.4600   4.3%   36     52    nan
         Bitpay BTCEUR   3003.8659         nan   nan%   35    nan    nan
       Bitstamp BTCEUR   3008.0000   3010.2300   0.1%    0      1      1
           Bl3p BTCEUR   3000.6700   3010.9300   0.3%    1    nan    nan
       Coinbase BTCEUR   2992.1800   3023.2500   1.0%   34    nan    nan
         Kraken+BTCEUR   3005.7000   3006.6000   0.0%    0      1      0
        Paymium BTCEUR   2940.0100   2993.4400   1.8%    0   2688    nan
 BitcoinsNorway BTCNOK  29000.0000  29360.7400   1.2%   36    nan    nan
        Bitmynt BTCNOK  29115.6400  29720.7500   2.0%   36     52    nan
         Bitpay BTCNOK  29029.2512         nan   nan%   36    nan    nan
       Coinbase BTCNOK  28927.6000  29218.5900   1.0%   35    nan    nan
        MiraiEx BTCNOK  29097.7000  29741.4200   2.2%   36    nan    nan
 BitcoinsNorway BTCUSD   3385.4200   3456.0900   2.0%   36    nan    nan
       Bitfinex BTCUSD   3538.5000   3538.6000   0.0%   36     45    nan
         Bitpay BTCUSD   3443.4600         nan   nan%   34    nan    nan
       Bitstamp BTCUSD   3443.0100   3445.0500   0.1%    0      2      1
       Coinbase BTCUSD   3428.1600   3462.6300   1.0%   33    nan    nan
         Gemini BTCUSD   3445.8800   3445.8900   0.0%   36    326    nan
         Hitbtc BTCUSD   3473.4700   3473.0700  -0.0%    0      0      0
         Kraken+BTCUSD   3444.4000   3445.6000   0.0%    0      1      0
  Exchangerates EURNOK      9.6685      9.6685   0.0%   36  22226    nan
     Norgesbank EURNOK      9.6685      9.6685   0.0%   36  22226    nan
       Bitstamp EURUSD      1.1440      1.1462   0.2%    0      1      2
  Exchangerates EURUSD      1.1471      1.1471   0.0%   36  22226    nan
 BitcoinsNorway LTCEUR      1.0009     22.6538  95.6%   35    nan    nan
 BitcoinsNorway LTCNOK    259.0900    264.9300   2.2%   35    nan    nan
 BitcoinsNorway LTCUSD      0.0000     29.0000 100.0%   35    nan    nan
     Norgesbank USDNOK      8.4286      8.4286   0.0%   36  22226    nan

Yes, I noticed the strange negative spread on Hitbtc. I've seen the same on Kraken. Another strange observation is that Kraken sometimes announces trade orders a fraction of a second in the future. I really wonder what is going on there.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
22nd January 2019

I am amazed and very pleased to discover that since a few days ago, everything you need to program the BBC micro:bit is available from the Debian archive. All this is thanks to the hard work of Nick Morrott and the Debian python packaging team. The micro:bit project recommends the mu-editor to program the microcomputer, as this editor will take care of all the machinery required to inject/flash MicroPython alongside the program into the micro:bit, as long as the pieces are available.

There are three main pieces involved. The first to enter Debian was python-uflash, which was accepted into the archive 2019-01-12. The next one was mu-editor, which showed up 2019-01-13. The final and hardest part to get into the archive was firmware-microbit-micropython, which needed to get its build system and dependencies into Debian before it was accepted 2019-01-20. The last one is already in Debian Unstable and should enter Debian Testing / Buster in three days. All this allows any user of the micro:bit to get going by simply running 'apt install mu-editor' when using Testing or Unstable, and once Buster is released as stable, all the users of Debian stable will be catered for.

As a minor final touch, I added rules to the isenkram package for recognizing the micro:bit and recommending the mu-editor package. This makes sure any user of the isenkram desktop daemon will get a popup suggesting to install mu-editor when the USB cable from the micro:bit is inserted for the first time.

This should make it easier to have fun.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, robot.
15th January 2019

The layered video playout server created by Sveriges Television, CasparCG Server, entered Debian today. This completes many months of work to get the source ready to go into Debian. The first upload to the Debian NEW queue happened a month ago, but the work upstream to prepare it for Debian started more than two and a half months ago. So far the casparcg-server package is only available for amd64, but I hope this can be improved. The package is in contrib because it depends on the non-free fdk-aac library. The Debian package lacks support for streaming web pages because Debian is missing CEF, the Chromium Embedded Framework. CEF is wanted by several packages in Debian, but because the Chromium source is not available as a build dependency, it is not yet possible to upload CEF to Debian. I hope this will change in the future.

The reason I got involved is that the Norwegian open channel Frikanalen is starting to use CasparCG for our HD playout, and I would like to have all the free software tools we use to run the TV channel available as packages from the Debian project. The last remaining piece in the puzzle is Open Broadcast Encoder, but it depends on quite a lot of patched libraries which would have to be included in Debian first.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

15th December 2018

A fun way to learn how to program in Python is to follow the instructions in the book "Learn to program with Minecraft", which introduces programming in Python to people who like to play with Minecraft. The book uses a Python library to talk to a TCP/IP socket with an API accepting build instructions and providing information about the current players in a Minecraft world. The TCP/IP API was first created for the Minecraft implementation for Raspberry Pi, and has since been ported to some server versions of Minecraft. The book contains recipes for those using Windows, MacOSX and Raspbian. But a little known fact is that you can follow the same recipes using the free software construction game Minetest.
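
To give an idea of what the recipes look like, here is a minimal sketch using the mcpi Python library from the book; it assumes a server (Minecraft with the API enabled, or Minetest with the compatible module) listening on the default port 4711 on localhost:

from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create('localhost', 4711)  # default API host and port
mc.postToChat('Hello from Python!')
pos = mc.player.getTilePos()              # player position, whole blocks
mc.setBlock(pos.x + 1, pos.y, pos.z, block.STONE.id)  # place a stone block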

There is a Minetest module implementing the same API, making it possible to use the Python programs written to talk to Minecraft with Minetest too. I uploaded this module to Debian two weeks ago, and as soon as it clears the FTP masters' NEW queue, learning to program Python with Minetest on Debian will be a simple 'apt install' away. The Debian package is maintained as part of the Debian Games team, and the packaging rules are currently located under 'unfinished' on Salsa.

You will most likely need to install several of the Minetest modules in Debian for the examples included with the library to work well, as several blocks used by the example scripts are provided via modules in Minetest. Without the required blocks, a simple stone block is used instead. In my initial testing, an analog clock did not get gold arms as instructed in the Python library, but instead used stone arms.

I tried to find a way to add the API to the desktop version of Minecraft, but was unable to find any working recipes. The recipes I found only work with a standalone Minecraft server setup. Are there any options for the normal desktop version?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
12th December 2018

A few hours ago, a new and improved version (2.4) of the VLC bittorrent plugin was uploaded to Debian. This new version includes a complete rewrite of the bittorrent related code, which seems to make the plugin non-blocking. This means you can actually exit VLC even when the plugin seems unable to get the bittorrent streaming started. The new version also includes support for filtering the playlist by file extension using command line options, if you want to avoid processing audio, video or images. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

9th December 2018

Yesterday, I had the pleasure of watching on Frikanalen the OWASP talk by Scott Helme titled "What We’ve Learned From Billions of Security Reports". I had not heard of the Content Security Policy standard nor its ability to "call home" when a browser detects a policy breach (I do not follow web page design development much these days), and found the talk very illuminating.

The mechanism allows a web site owner to use HTTP headers to tell a visitor's web browser which sources (internal and external) are allowed to be used on the web site. It thus becomes possible to enforce an "only local content" policy, despite web designers' urge to fetch programs from random sites on the Internet, like the one enabling the attack reported by Scott Helme earlier this year.

Using CSP seems like an obvious thing for a site admin to implement to take some control over the information leak that occurs when external sources are used to render web pages, so it is a mystery why more sites are not using it. It is being standardized under W3C these days, and is supported by most web browsers.

I managed to find a Django middleware for implementing CSP and was happy to discover it was already in Debian. I plan to use it to add CSP support to the Frikanalen web site soon.
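
Assuming the middleware in question is django-csp, a settings.py fragment along these lines should get a reporting "only local content" policy going (the report endpoint below is hypothetical):

MIDDLEWARE = [
    # ... the usual Django middleware ...
    'csp.middleware.CSPMiddleware',
]

CSP_DEFAULT_SRC = ("'self'",)    # only allow local content
CSP_REPORT_URI = '/csp-report/'  # hypothetical endpoint collecting reports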

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, standard, web.
8th November 2018

If you read my blog regularly, you probably know I am involved in running and developing the Norwegian TV channel Frikanalen. It is an open channel, allowing everyone in Norway to publish videos on a TV channel with national coverage. You can think of it as YouTube for national television. In addition to distribution on RiksTV and Uninett, Frikanalen is also available as a Kodi addon. The last few days I have updated the code to add more features. A new and improved version 0.0.3 of the Frikanalen addon was just made available via the Kodi repositories. This new version includes an option to browse videos by category, as well as free text search in the video archive. It will now also show the video duration in the video lists, which was missing earlier. A new and experimental link to the HD video stream currently being worked on is provided, for those who want to see what the CasparCG output looks like. The alternative is the SD video stream, generated using MLT. CasparCG is controlled by our mltplayout server, which instead of talking to MLT gives PLAY instructions to the CasparCG server when it is time to start a new program.

By now, you are probably wondering what kind of content is being played on the channel. These days, it is filled with technical presentations like those from NUUG, Debconf, Makercon, and TED, but there are also some periods with EMPT TV and P7.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st November 2018

As part of my involvement in the Nikita archive API project, I've been importing a fairly large lump of emails into a test instance of the archive to see how well this would go. I picked a subset of my notmuch email database, all public emails sent to me via @lists.debian.org, giving me a set of around 216 000 emails to import. In the process, I had a look at the various attachments included in these emails to figure out what to do with attachments, and noticed that one of the most common attachment formats does not have an official MIME type registered with IANA/IETF. The output from diff, i.e. the input for patch, is on the top 10 list of formats included in these emails. At the moment people seem to use either text/x-patch or text/x-diff, but neither is officially registered. It would be better if one official MIME type were registered and used everywhere.

To try to get one official MIME type for these files, I've brought up the topic on the media-types mailing list. If you are interested in discussing which MIME type to use as the official one for patch files, or are involved in making software using a MIME type for patches, perhaps you would like to join the discussion?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

22nd October 2018

My current home stereo is a patchwork of various pieces I got at flea markets over the years. It is amazing what kind of equipment shows up there. I've been wondering for a while if it was possible to measure how well this equipment works together, and decided to see how far I could get using free software. After trawling the web I came across an article from DIY Audio and Video on Speaker Testing and Analysis describing how to test speakers, which lists several software options, among them AUDio MEasurement System (AUDMES). It is the only free software system I could find focusing on measuring speakers and audio frequency response. In the process I also found an interesting article from NOVO on Understanding Speaker Specifications and Frequency Response and an article from ecoustics on Understanding Speaker Frequency Response, with a lot of information on what to look for and how to interpret the graphs. Armed with this knowledge, I set out to measure the state of my speakers.

The first hurdle was that AUDMES hadn't seen a commit for 10 years and did not build with current compilers and libraries. I got in touch with its author, who no longer spends time on the program but gave me write access to the subversion repository on Sourceforge. The end result is that now the code builds on Linux and is capable of saving and loading the collected frequency response data in CSV format. The application is quite nice and flexible, and I was able to select the input and output audio interfaces independently. This made it possible to use a USB mixer as the input source, while sending output via my laptop headphone connection. I lacked the hardware and cabling to figure out a different way to get independent cabling to speakers and microphone.

Using this setup I could see how a large range of high frequencies apparently were not making it out of my speakers. The picture shows the frequency response measurement of one of the speakers. Note that the frequency lines seem to be slightly misaligned compared to the CSV output from the program. I can not hear several of these high frequencies myself, according to measurements from Free Hearing Test Software, a freeware system to measure your hearing (still looking for a free software alternative), so I do not know if they are coming out of the speakers. I thus do not quite know how to figure out if the missing frequencies are a problem with the microphone, the amplifier or the speakers, but I managed to rule out the audio card in my PC by measuring my Bose noise canceling headset using its own microphone. This setup was able to see the high frequency tones, so the problem with my stereo had to be in the amplifier or speakers.

Anyway, to try to rule out one factor I ended up picking up a new set of speakers at a flea market, and these work a lot better than the old speakers, so I guess the microphone and amplifier are OK. If you need to measure your own speakers, check out AUDMES. If more people get involved, perhaps the project could become good enough to include in Debian? And if you know of other free software to measure speaker and amplifier performance, please let me know. I am aware of the freeware option REW, but I want something that can be developed even when the vendor loses interest.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

21st October 2018

Bittorrent is, as far as I know, currently the most efficient way to distribute content on the Internet. It is used by all sorts of content providers, from national TV stations like NRK and Linux distributors like Debian and Ubuntu, to the Internet Archive.

Almost a month ago a new package adding Bittorrent support to VLC became available in Debian testing and unstable. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

Since the plugin was made available for the first time in Debian, several improvements have been made to it. In version 2.2-4, now available in both testing and unstable, a desktop file is provided to teach browsers to start VLC when the user clicks on torrent files or magnet links. The last part is thanks to me finally understanding what the strange x-scheme-handler style MIME types in desktop files are used for. By adding x-scheme-handler/magnet to the MimeType entry in the desktop file, at least the browsers Firefox and Chromium will suggest starting VLC when selecting a magnet URI on a web page. The end result is that now, with the plugin installed in Buster and Sid, one can visit any Internet Archive page with movies using a web browser and click on the torrent link to start streaming the movie.
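
To sketch the idea, such a desktop file entry can look something like this; this is an illustration of the mechanism, not a copy of the file shipped in the package:

[Desktop Entry]
Type=Application
Name=VLC media player
Exec=vlc %U
MimeType=application/x-bittorrent;x-scheme-handler/magnet;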

Note, there are still some misfeatures in the plugin. One is the fact that it will hang and block VLC from exiting until the torrent streaming starts. Another is the fact that it will pick and play a random file in a multi file torrent, which is not always the video file you want. Combined with the first issue, it can be a bit hard to get the video streaming going. But when it works, it seems to do a good job.

For the Debian packaging, I would love to find a good way to test if the plugin works with VLC using autopkgtest. I tried, but do not know enough about the inner workings of VLC to get it working. For now the autopkgtest script only checks if the .so file was successfully loaded by VLC. If you have any suggestions, please submit a patch to the Debian bug tracking system.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18th October 2018

This morning, the new release of the Nikita Noark 5 core project was announced on the project mailing list. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.2 since version 0.1.1 (from NEWS.md):

  • Fix typos in REL names
  • Tidy up error message reporting
  • Fix issue where we used Integer.valueOf(), not Integer.getInteger()
  • Change some String handling to StringBuffer
  • Fix error reporting
  • Code tidy-up
  • Fix issue using static non-synchronized SimpleDateFormat to avoid race conditions
  • Fix problem where deserialisers were treating integers as strings
  • Update methods to make them null-safe
  • Fix many issues reported by coverity
  • Improve equals(), compareTo() and hash() in domain model
  • Improvements to the domain model for metadata classes
  • Fix CORS issues when downloading document
  • Implementation of case-handling with registryEntry and document upload
  • Better support in Javascript for OPTIONS
  • Adding concept description of mail integration
  • Improve setting of default values for GET on ny-journalpost
  • Better handling of required values during deserialisation
  • Changed tilknyttetDato (M620) from date to dateTime
  • Corrected some opprettetDato (M600) (de)serialisation errors.
  • Improve parse error reporting.
  • Started on OData search and filtering.
  • Added Contributor Covenant Code of Conduct to project.
  • Moved repository and project from Github to Gitlab.
  • Restructured repository, moved code into src/ and web/.
  • Updated code to use Spring Boot version 2.
  • Added support for OAuth2 authentication.
  • Fixed several bugs discovered by Coverity.
  • Corrected handling of date/datetime fields.
  • Improved error reporting when rejecting during deserialization.
  • Adjusted default values provided for ny-arkivdel, ny-mappe, ny-saksmappe, ny-journalpost and ny-dokumentbeskrivelse.
  • Several fixes for korrespondansepart*.
  • Updated web GUI:
    • Now handle both file upload and download.
    • Uses new OAuth2 authentication for login.
    • Forms now fetches default values from API using GET.
    • Added RFC 822 (email), TIFF and JPEG to list of possible file formats.

The changes and improvements are extensive. Running diffstat on the changes between git tag 0.1.1 and 0.2 shows 1098 files changed, 108666 insertions(+), 54066 deletions(-).

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (the nikita-noark mailing list).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

8th October 2018

I have earlier covered the basics of trusted timestamping using the 'openssl ts' client. See the blog posts from 2014, 2016 and 2017 for those stories. But sometimes I want to integrate the timestamping into other code, and recently I needed to integrate it into Python. After searching a bit, I found the rfc3161 library which seemed like a good fit, but I soon discovered it only worked with python version 2, and I needed something that works with python version 3. Luckily I next came across the rfc3161ng library, a fork of the original rfc3161 library. Not only is it working with python 3, it has fixed a few of the bugs in the original library, and it has an active maintainer. I decided to wrap it up and make it available in Debian, and a few days ago it entered Debian unstable and testing.

Using the library is fairly straightforward. The only slightly problematic step is to fetch the required certificates to verify the timestamp. For some services it is straightforward, while for others I have not yet figured out how to do it. Here is a small standalone code example based on one of the integration tests in the library code:

#!/usr/bin/python3

"""

Python 3 script demonstrating how to use the rfc3161ng module to
get trusted timestamps.

The license of this code is the same as the license of the rfc3161ng
library, ie MIT/BSD.

"""

import pyasn1.codec.der.encoder  # import the encoder submodule explicitly
import rfc3161ng
import subprocess
import tempfile
import urllib.request

def store(f, data):
    f.write(data)
    f.flush()
    f.seek(0)

def fetch(url, f=None):
    response = urllib.request.urlopen(url)
    data = response.read()
    if f:
        store(f, data)
    return data

def main():
    with tempfile.NamedTemporaryFile() as cert_f,\
         tempfile.NamedTemporaryFile() as ca_f,\
         tempfile.NamedTemporaryFile() as msg_f,\
         tempfile.NamedTemporaryFile() as tsr_f:

        # First fetch certificates used by service
        certificate_data = fetch('https://freetsa.org/files/tsa.crt', cert_f)
        fetch('https://freetsa.org/files/cacert.pem', ca_f)

        # Then timestamp the message
        timestamper = \
            rfc3161ng.RemoteTimestamper('http://freetsa.org/tsr',
                                        certificate=certificate_data)
        data = b"Python forever!\n"
        tsr = timestamper(data=data, return_tsr=True)

        # Finally, convert message and response to something 'openssl ts' can verify
        store(msg_f, data)
        store(tsr_f, pyasn1.codec.der.encoder.encode(tsr))
        args = ["openssl", "ts", "-verify",
                "-data", msg_f.name,
	        "-in", tsr_f.name,
		"-CAfile", ca_f.name,
                "-untrusted", cert_f.name]
        subprocess.check_call(args)

if __name__ == '__main__':
    main()

The code fetches the required certificates, stores them as temporary files, timestamps a simple message, stores the message and timestamp to disk, and asks 'openssl ts' to verify the timestamp. A timestamp is around 1.5 kiB in size, and should be fairly easy to store for future use.
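
For the record, the final verification step can also be done by hand once the files are stored on disk, using the same 'openssl ts' options as in the script above (the file names here are placeholders):

openssl ts -verify -data message.txt -in message.tsr \
  -CAfile cacert.pem -untrusted tsa.crt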

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th October 2018

A few days ago, I rescued a Windows victim over to Debian. To try to rescue the remains, I helped set up automatic sync with Google Drive. I did not find any sensible Debian package handling this automatically, so I rebuilt the grive2 source from the Ubuntu UPD8 PPA to do the task, and added an autostart desktop entry and a small shell script running in the background to do the sync while the user is logged in. Here is a sketch of the setup for future reference.

I first created ~/googledrive, entered the directory and ran 'grive -a' to authenticate the machine/user. Next, I created an autostart hook in ~/.config/autostart/grive.desktop to start the sync when the user logs in:

[Desktop Entry]
Name=Google drive autosync
Type=Application
Exec=/home/user/bin/grive-sync

Finally, I wrote the ~/bin/grive-sync script to sync ~/googledrive/ with the files in Google Drive.

#!/bin/sh
set -e
cd ~/
cleanup() {
    if [ "$syncpid" ] ; then
        kill $syncpid
    fi
}
trap cleanup EXIT INT QUIT
/usr/lib/grive/grive-sync.sh listen googledrive 2>&1 | sed "s%^%$0:%" &
syncpid=$!
while true; do
    if ! xhost >/dev/null 2>&1 ; then
        echo "no DISPLAY, exiting as the user probably logged out"
        exit 1
    fi
    if [ ! -e /run/user/1000/grive-sync.sh_googledrive ] ; then
        /usr/lib/grive/grive-sync.sh sync googledrive
    fi
    sleep 300
done 2>&1 | sed "s%^%$0:%"

Feel free to use the setup if you want. It can be assumed to be GNU GPL v2 licensed (or any later version, at your leisure), but I doubt this code is possible to claim copyright on.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
29th September 2018

It would come as no surprise to anyone that I am interested in bitcoins and virtual currencies. I've been keeping an eye on virtual currencies for many years, and it is part of the reason why, a few months ago, I started writing a python library for collecting currency exchange rates and trading on virtual currency exchanges. I decided to name the end result valutakrambod, which perhaps can be translated to small currency shop.

The library uses the tornado python library to handle HTTP and websocket connections, and provides an asynchronous system for connecting to and tracking several services. The code is available from github.

There are two example clients of the library. One is very simple and lists every updated buy/sell price received from the various services. This code is started by running bin/btc-rates and calls the client code in valutakrambod/client.py. The simple client looks like this:

import functools
import tornado.ioloop
import valutakrambod
class SimpleClient(object):
    def __init__(self):
        self.services = []
        self.streams = []
    def newdata(self, service, pair, changed):
        print("%-15s %s-%s: %8.3f %8.3f" % (
            service.servicename(),
            pair[0],
            pair[1],
            service.rates[pair]['ask'],
            service.rates[pair]['bid'])
        )
    async def refresh(self, service):
        await service.fetchRates(service.wantedpairs)
    def run(self):
        self.ioloop = tornado.ioloop.IOLoop.current()
        self.services = valutakrambod.service.knownServices()
        for e in self.services:
            service = e()
            service.subscribe(self.newdata)
            stream = service.websocket()
            if stream:
                self.streams.append(stream)
            else:
                # Fetch information from non-streaming services immediately
                self.ioloop.call_later(len(self.services),
                                       functools.partial(self.refresh, service))
                # as well as regularly
                service.periodicUpdate(60)
        for stream in self.streams:
            stream.connect()
        try:
            self.ioloop.start()
        except KeyboardInterrupt:
            print("Interrupted by keyboard, closing all connections.")
        for stream in self.streams:
            stream.close()

The library client loops over all known "public" services, initialises each of them, subscribes to any updates from the service, checks for and activates websocket streaming if the service provides it, and if no streaming is supported, fetches information from the service and sets up a periodic update every 60 seconds. The output from this client can look like this:

Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.560 6593.690
Hitbtc          BTC-USD: 6594.560 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Hitbtc          BTC-USD: 6594.570 6593.690
Bitstamp        EUR-USD:    1.159    1.154
Hitbtc          BTC-USD: 6594.570 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Hitbtc          BTC-USD: 6594.580 6593.690
Bl3p            BTC-EUR: 5687.110 5653.690
Paymium         BTC-EUR: 5680.000 5620.240

The exchange order book is tracked in addition to the best buy/sell price, for those that need to know the details.

The other example client focuses on providing a curses view with updated buy/sell prices as soon as they are received from the services. This code is located in bin/btc-rates-curses and activated by using the '-c' argument. Without the argument the "curses" output is printed without using curses, which is useful for debugging. The curses view looks like this:

           Name Pair   Bid         Ask         Spr    Ftcd    Age
 BitcoinsNorway BTCEUR   5591.8400   5711.0800   2.1%   16    nan     60
       Bitfinex BTCEUR   5671.0000   5671.2000   0.0%   16     22     59
        Bitmynt BTCEUR   5580.8000   5807.5200   3.9%   16     41     60
         Bitpay BTCEUR   5663.2700         nan   nan%   15    nan     60
       Bitstamp BTCEUR   5664.8400   5676.5300   0.2%    0      1      1
           Bl3p BTCEUR   5653.6900   5684.9400   0.5%    0    nan     19
       Coinbase BTCEUR   5600.8200   5714.9000   2.0%   15    nan    nan
         Kraken BTCEUR   5670.1000   5670.2000   0.0%   14     17     60
        Paymium BTCEUR   5620.0600   5680.0000   1.1%    1   7515    nan
 BitcoinsNorway BTCNOK  52898.9700  54034.6100   2.1%   16    nan     60
        Bitmynt BTCNOK  52960.3200  54031.1900   2.0%   16     41     60
         Bitpay BTCNOK  53477.7833         nan   nan%   16    nan     60
       Coinbase BTCNOK  52990.3500  54063.0600   2.0%   15    nan    nan
        MiraiEx BTCNOK  52856.5300  54100.6000   2.3%   16    nan    nan
 BitcoinsNorway BTCUSD   6495.5300   6631.5400   2.1%   16    nan     60
       Bitfinex BTCUSD   6590.6000   6590.7000   0.0%   16     23     57
         Bitpay BTCUSD   6564.1300         nan   nan%   15    nan     60
       Bitstamp BTCUSD   6561.1400   6565.6200   0.1%    0      2      1
       Coinbase BTCUSD   6504.0600   6635.9700   2.0%   14    nan    117
         Gemini BTCUSD   6567.1300   6573.0700   0.1%   16     89    nan
         Hitbtc+BTCUSD   6592.6200   6594.2100   0.0%    0      0      0
         Kraken BTCUSD   6565.2000   6570.9000   0.1%   15     17     58
  Exchangerates EURNOK      9.4665      9.4665   0.0%   16 107789    nan
     Norgesbank EURNOK      9.4665      9.4665   0.0%   16 107789    nan
       Bitstamp EURUSD      1.1537      1.1593   0.5%    4      5      1
  Exchangerates EURUSD      1.1576      1.1576   0.0%   16 107789    nan
 BitcoinsNorway LTCEUR      1.0000     49.0000  98.0%   16    nan    nan
 BitcoinsNorway LTCNOK    492.4800    503.7500   2.2%   16    nan     60
 BitcoinsNorway LTCUSD      1.0221     49.0000  97.9%   15    nan    nan
     Norgesbank USDNOK      8.1777      8.1777   0.0%   16 107789    nan

The code for this client is too complex for a simple blog post, so you will have to check out the git repository to figure out how it works. What I can tell you is how the last three numbers on each line should be interpreted. The first is how many seconds ago information was received from the service. The second is how long ago, according to the service, the provided information was updated. The last is an estimate of how often the buy/sell values change.

If you find this library useful, or would like to improve it, I would love to hear from you. Note that for some of the services I've implemented a trading API. It might be the topic of a future blog post.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: bitcoin, english.
24th September 2018

Back in February, I got curious to see if VLC now supported Bittorrent streaming. It did not, despite the fact that the idea and code to handle such streaming had been floating around for years. I did however find a standalone plugin for VLC to do it, and half a year later I decided to wrap up the plugin and get it into Debian. I uploaded it to NEW a few days ago, and am very happy to report that it entered Debian a few hours ago, and should be available in Debian/Unstable tomorrow, and Debian/Testing in a few days.

With the vlc-plugin-bittorrent package installed you should be able to stream videos using a simple call to

vlc https://archive.org/download/TheGoat/TheGoat_archive.torrent

It can handle magnet links too. Now if only native VLC had bittorrent support. Then a lot more people would be helping each other share public domain and creative commons movies. The plugin needs some stability work with seeking and picking the right file in a torrent with many files, but is already usable. Please note that the plugin does not remove downloaded files when VLC is stopped, so it can fill up your disk if you are not careful. Have fun. :)

I would love to get help maintaining this package. Get in touch if you are interested.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd September 2018

I continue to explore my Kodi installation, and today I wanted to tell it to play a youtube URL I received in a chat, without having to insert search terms using the on-screen keyboard. After searching the web for API access to the Youtube plugin and testing a bit, I managed to find a recipe that worked. If you have a Kodi instance with its API available from http://kodihost/jsonrpc, you can try the following to check out a nice cover band.

curl --silent --header 'Content-Type: application/json' \
  --data-binary '{ "id": 1, "jsonrpc": "2.0", "method": "Player.Open",
  "params": {"item": { "file":
  "plugin://plugin.video.youtube/play/?video_id=LuRGVM9O0qg" } } }' \
  http://projector.local/jsonrpc

I've extended the kodi-stream program to take a video source as its first argument. It can now handle direct video links, youtube links and 'desktop' to stream my desktop to Kodi. It is almost like a Chromecast. :)
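
With that change, usage is something like the examples below. I am assuming here that the Kodi host name is given as the second argument; check the script itself for the exact calling convention:

kodi-stream 'https://www.youtube.com/watch?v=LuRGVM9O0qg' projector.local
kodi-stream desktop projector.local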

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kodi, video.
30th August 2018

It might seem obvious that software created using tax money should be available for everyone to use and improve. Free Software Foundation Europe recently started a campaign to help get more people to understand this, and I just signed the petition on Public Money, Public Code to help them. I hope you too will do the same.

13th August 2018

A few days ago, I wondered if there are any privacy respecting health monitors and/or fitness trackers available for sale these days. I would like to buy one, but do not want to share my personal data with strangers, nor be forced to have a mobile phone to get data out of the unit. I've received some ideas, and would like to share them with you. One interesting data point was a pointer to a Free Software app for Android named Gadgetbridge. It provides cloudless collection and storage of data from a variety of trackers. Its list of supported devices is a good indicator for units where the protocol is fairly open, as it is obviously being handled by Free Software. Other units are reportedly encrypting the collected information with their own public key, making sure only the vendor's cloud service is able to extract data from the unit. The people contacting me about Gadgetbridge said they were using the Amazfit Bip and the Xiaomi Band 3.

I also got a suggestion to look at some of the units from Garmin. I was told their GPS watches can be connected via USB and show up as a USB storage device with Garmin FIT files containing the collected measurements. While proprietary, FIT files apparently can be read at least by GPSBabel and the GpxPod Nextcloud app. It is unclear to me if they can read step count and heart rate data. The person I talked to was using a Garmin Forerunner 935, which is a fairly expensive unit. I doubt it is worth it for a unit where the vendor clearly is trying its best to move from open to closed systems. I still remember when Garmin dropped NMEA support in its GPSes.

A final idea was to build one's own unit, perhaps by basing it on a wearable hardware platform like the Flora Geo Watch. Sounds like fun, but I had more money than time to spend on the topic, so I suspect it will have to wait for another time.

While I was working on tracking down links, I came across an inspiring TED talk by Dave deBronkart about being an e-patient, and discovered the web site Participatory Medicine. If you too want to track your own health and fitness without having information about your private life floating around on computers owned by others, I recommend checking it out.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
7th August 2018

Dear lazyweb,

I wonder, is there a fitness tracker / health monitor available for sale today that respects the user's privacy? With this I mean a watch/bracelet capable of measuring pulse rate and other fitness/health related values (and by all means, also the correct time and location if possible), where the measurements are only provided for me to extract/read from the unit with a computer, without a radio beacon and Internet connection. In other words, it should not depend on a cell phone app, and should not make the measurements available via other people's computers (aka "the cloud"). The collected data should be available using only free software. I'm not interested in depending on some non-free software that will leave me high and dry some time in the future. I've been unable to find any such unit. I would like to buy it. The ones I have seen for sale here in Norway are proud to report that they share my health data with strangers (aka "cloud enabled"). Is there an alternative? I'm not interested in giving money to people requiring me to accept "privacy terms" to allow myself to measure my own health.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
31st July 2018

For a while now, I have looked for a sensible way to share images with my family using a self hosted solution, as it is unacceptable to place images from my personal life under the control of strangers working for data hoarders like Google or Dropbox. The last few days I have drafted an approach that might work out, and I would like to share it with you. I would like to publish images on a server under my control, and point some Internet connected display units to the images I publish, using some free and open standard. As my primary language is not limited to ASCII, I need to store metadata using UTF-8. Many years ago, I hoped to find a digital photo frame capable of reading an RSS feed with image references (aka using the <enclosure> RSS tag), but was unable to find a current supplier of such frames. In the end I gave up that approach.

Some months ago, I discovered that XScreensaver is able to read images from an RSS feed, and used it to set up a screen saver on my home info screen, showing images from the Daily images feed from NASA. This proved to work well. More recently I discovered that Kodi (both using OpenELEC and LibreELEC) provides the Feedreader screen saver capable of reading an RSS feed with images and news. For fun, I used it this summer to test Kodi on my parents' TV by hooking up a Raspberry Pi unit with LibreELEC, and wanted to provide them with a screen saver showing selected pictures from my collection.

Armed with motivation and a test photo frame, I set out to generate an RSS feed for the Kodi instance. I adjusted my Freedombox instance, created /var/www/html/privatepictures/, and wrote a small Perl script to extract title and description metadata from the photo files and generate the RSS file. I ended up using Perl instead of python, as the libimage-exiftool-perl Debian package seemed to handle the EXIF/XMP tags I ended up using, while python3-exif did not. The relevant EXIF tags only support ASCII, so I had to find better alternatives. XMP seems to have the support I need.

I am a bit unsure which EXIF/XMP tags to use, as I would like to use tags that can be easily added/updated using normal free software photo managing software. I ended up using the tags set using this exiftool command, as these tags can also be set using digiKam:

exiftool -headline='The RSS image title' \
  -description='The RSS image description.' \
  -subject+=for-family photo.jpeg

I initially tried the "-title" and "keyword" tags, but they were invisible in digiKam, so I changed to "-headline" and "-subject". I use the keyword/subject 'for-family' to flag that the photo should be shared with my family. Images with this keyword set are located and copied into my Freedombox for the RSS generating script to find.
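
To illustrate the locating step, exiftool can filter on the subject tag directly. A small sketch, where the picture directory is just an example:

exiftool -r -if '$Subject =~ /for-family/' \
  -p '$Directory/$FileName' ~/Pictures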

Are there better ways to do this? Get in touch if you have better suggestions.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
12th July 2018

Last night, I wrote a recipe to stream a Linux desktop using VLC to an instance of Kodi. During the day I received valuable feedback, and thanks to the suggestions I have been able to rewrite the recipe into a much simpler approach requiring no setup at all. It is a single script that takes care of it all.

This new script uses GStreamer instead of VLC to capture the desktop and stream it to Kodi. This fixed the video quality issue I saw initially. It further removes the need to add an m3u file on the Kodi machine, as it instead connects to the JSON-RPC API in Kodi and simply asks Kodi to play from the stream created using GStreamer. Streaming the desktop to Kodi has now become trivial. Copy the script below, run it with the DNS name or IP address of the Kodi server to stream to as the only argument, and watch your screen show up on the Kodi screen. Note, it depends on multicast on the local network, so if you need to stream outside the local network, the script must be modified. Also note, I have no idea if audio works, as I only care about the picture part.

#!/bin/sh
#
# Stream the Linux desktop view to Kodi.  See
# http://www.hungry.com/~pere/blog/Streaming_the_Linux_desktop_to_Kodi_using_VLC_and_RTSP.html
# for background information.

# Make sure the stream is stopped in Kodi and the gstreamer process is
# killed if something go wrong (for example if curl is unable to find the
# kodi server).  Do the same when interrupting this script.
kodicmd() {
    host="$1"
    cmd="$2"
    params="$3"
    curl --silent --header 'Content-Type: application/json' \
	 --data-binary "{ \"id\": 1, \"jsonrpc\": \"2.0\", \"method\": \"$cmd\", \"params\": $params }" \
	 "http://$host/jsonrpc"
}
cleanup() {
    if [ -n "$kodihost" ] ; then
	# Stop the playing when we end
	playerid=$(kodicmd "$kodihost" Player.GetActivePlayers "{}" |
			    jq .result[].playerid)
	kodicmd "$kodihost" Player.Stop "{ \"playerid\" : $playerid }" > /dev/null
    fi
    if [ "$gstpid" ] && kill -0 "$gstpid" >/dev/null 2>&1; then
	kill "$gstpid"
    fi
}
trap cleanup EXIT INT

if [ -n "$1" ]; then
    kodihost=$1
    shift
else
    kodihost=kodi.local
fi

mcast=239.255.0.1
mcastport=1234
mcastttl=1

pasrc=$(pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | \
  cut -d" " -f2|head -1)
gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
  videoconvert ! queue2 ! \
  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
  udpsink host=$mcast port=$mcastport ttl-mc=$mcastttl auto-multicast=1 sync=0 \
  pulsesrc device=$pasrc ! audioconvert ! queue2 ! avenc_aac ! queue2 ! mux. \
  > /dev/null 2>&1 &
gstpid=$!

# Give stream a second to get going
sleep 1

# Ask kodi to start streaming using its JSON-RPC API
kodicmd "$kodihost" Player.Open \
	"{\"item\": { \"file\": \"udp://@$mcast:$mcastport\" } }" > /dev/null

# wait for gst to end
wait "$gstpid"

I hope you find the approach useful. I know I do.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kodi, video.
12th July 2018

PS: See the followup post for an even better approach.

A while back, I was asked by a friend how to stream the desktop to my projector connected to Kodi. I sadly had to admit that I had no idea, as it was a task I had never tried. Since then, I have been looking for a way to do so, preferably without much extra software to install on either side. Today I found a way that seems to kind of work. Not great, but it is a start.

I had a look at several approaches, for example using uPnP DLNA as described in 2011, but it required a uPnP server, fuse and enough local storage to store the stream locally. This is not going to work well for me, lacking enough free space, and it would be impossible for my friend to get working.

Next, it occurred to me that perhaps I could use VLC to create a video stream that Kodi could play. Preferably using broadcast/multicast, to avoid having to change any setup on the Kodi side when starting such a stream. Unfortunately, the only recipe I could find using multicast used the rtp protocol, and this protocol seems to not be supported by Kodi.

On the other hand, the rtsp protocol is working! Unfortunately I have to specify the IP address of the streaming machine in both the sending command and the file on the Kodi server. But it is showing my desktop, and thus allows us to have a shared look on the big screen at the programs I work on.

I did not spend much time investigating codecs. I combined the rtp and rtsp recipes from the VLC Streaming HowTo/Command Line Examples, and was able to get this working on the desktop/streaming end.

vlc screen:// --sout \
  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{dst=projector.local,port=1234,sdp=rtsp://192.168.11.4:8080/test.sdp}'

I ssh-ed into my Kodi box and created a file like this with the same IP address:

echo rtsp://192.168.11.4:8080/test.sdp \
  > /storage/videos/screenstream.m3u

Note the 192.168.11.4 IP address is my desktop's IP address. As far as I can tell the IP must be hardcoded for this to work. In other words, if someone else's machine is going to do the streaming, you have to update screenstream.m3u on the Kodi machine and adjust the vlc recipe. To get started, locate the file in Kodi and select the m3u file while the VLC stream is running. The desktop then shows up on my big screen. :)

When using the same technique to stream a video file with audio, the audio quality is really bad. No idea if the problem is packet loss or bad parameters for the transcode. I do not know VLC nor Kodi well enough to tell.

Update 2018-07-12: Johannes Schauer sent me a few suggestions and reminded me about an important step. The "screen:" input source is only available once the vlc-plugin-access-extra package is installed on Debian. Without it, you will see this error message: "VLC is unable to open the MRL 'screen://'. Check the log for details." He further found that it is possible to drop some parts of the VLC command line to reduce the amount of hardcoded information. It is also useful to consider using cvlc to avoid having the VLC window in the desktop view. In sum, this gives us this command line on the source end

cvlc screen:// --sout \
  '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{sdp=rtsp://:8080/}'

and this on the Kodi end

echo rtsp://192.168.11.4:8080/ \
  > /storage/videos/screenstream.m3u

Still bad image quality, though. But I did discover that streaming a DVD using dvdsimple:///dev/dvd as the source had excellent video and audio quality, so I guess the issue is in the input or transcoding parts, not the rtsp part. I've tried to change the vb and ab parameters to use more bandwidth, but it did not make a difference.

I further received a suggestion from Einar Haraldseid to try using gstreamer instead of VLC, and this proved to work great! He also provided me with the trick to get Kodi to use a multicast stream as its source. By using this monstrous oneliner, I can stream my desktop with good video quality and a reasonable framerate to the 239.255.0.1 multicast address on port 1234:

gst-launch-1.0 ximagesrc use-damage=0 ! video/x-raw,framerate=30/1 ! \
  videoconvert ! queue2 ! \
  x264enc bitrate=8000 speed-preset=superfast tune=zerolatency qp-min=30 \
  key-int-max=15 bframes=2 ! video/x-h264,profile=high ! queue2 ! \
  mpegtsmux alignment=7 name=mux ! rndbuffersize max=1316 min=1316 ! \
  udpsink host=239.255.0.1 port=1234 ttl-mc=1 auto-multicast=1 sync=0 \
  pulsesrc device=$(pactl list | grep -A2 'Source #' | \
    grep 'Name: .*\.monitor$' |  cut -d" " -f2|head -1) ! \
  audioconvert ! queue2 ! avenc_aac ! queue2 ! mux.

and this on the Kodi end

echo udp://@239.255.0.1:1234 \
  > /storage/videos/screenstream.m3u

Note the trick to pick a valid pulseaudio source. It might not pick the one you need. This approach will of course lead to trouble if more than one source uses the same multicast port and address. Note the ttl-mc=1 setting, which limits the multicast packets to the local network. If the value is increased, your screen will be broadcast further, one network "hop" for each increase (read up on multicast to learn more. :)!

Having cracked how to get Kodi to receive multicast streams, I could use this VLC command to stream to the same multicast address. The image quality is way better than with the rtsp approach, but gstreamer still seems to be doing a better job.

cvlc screen:// --sout '#transcode{vcodec=mp4v,acodec=mpga,vb=800,ab=128}:rtp{mux=ts,dst=239.255.0.1,port=1234,sdp=sap}'

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english, kodi, video.
9th July 2018

Five years ago, I measured what the most supported MIME type in Debian was, by analysing the desktop files in all packages in the archive. Since then, the DEP-11 AppStream system has been put into production, making the task a lot easier. This made me want to repeat the measurement, to see how much things have changed. Here are the new numbers, for unstable only this time:

Debian Unstable:

  count MIME type
  ----- -----------------------
     56 image/jpeg
     55 image/png
     49 image/tiff
     48 image/gif
     39 image/bmp
     38 text/plain
     37 audio/mpeg
     34 application/ogg
     33 audio/x-flac
     32 audio/x-mp3
     30 audio/x-wav
     30 audio/x-vorbis+ogg
     29 image/x-portable-pixmap
     27 inode/directory
     27 image/x-portable-bitmap
     27 audio/x-mpeg
     26 application/x-ogg
     25 audio/x-mpegurl
     25 audio/ogg
     24 text/html

The list was created like this using a sid chroot:

cat /var/lib/apt/lists/*sid*_dep11_Components-amd64.yml.gz | zcat | \
  awk '/^ - \S+\/\S+$/ {print $2 }' | sort | uniq -c | sort -nr | head -20

It is interesting to see how image formats have passed text/plain as the most announced supported MIME type. These days, thanks to the AppStream system, if you run into a file format you do not know, and want to figure out which packages support the format, you can find the MIME type of the file using "file --mime <filename>", and then look up all packages announcing support for this format in their AppStream metadata (XML or .desktop file) using "appstreamcli what-provides mimetype <mime-type>". For example if you, like me, want to know which packages support inode/directory, you can get a list like this:

% appstreamcli what-provides mimetype inode/directory | grep Package: | sort
Package: anjuta
Package: audacious
Package: baobab
Package: cervisia
Package: chirp
Package: dolphin
Package: doublecmd-common
Package: easytag
Package: enlightenment
Package: ephoto
Package: filelight
Package: gwenview
Package: k4dirstat
Package: kaffeine
Package: kdesvn
Package: kid3
Package: kid3-qt
Package: nautilus
Package: nemo
Package: pcmanfm
Package: pcmanfm-qt
Package: qweborf
Package: ranger
Package: sirikali
Package: spacefm
Package: spacefm
Package: vifm
%

Using the same method, I can quickly discover that the Sketchup file format is not yet supported by any package in Debian:

% appstreamcli what-provides mimetype  application/vnd.sketchup.skp
Could not find component providing 'mimetype::application/vnd.sketchup.skp'.
%

Yesterday I used it to figure out which packages support the STL 3D format:

% appstreamcli what-provides mimetype  application/sla|grep Package
Package: cura
Package: meshlab
Package: printrun
%

PS: A new version of Cura was uploaded to Debian yesterday.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

8th July 2018

Quite regularly, I let my Debian Sid/Unstable chroot stay untouched for a while, and when I need to update it there is not enough free space on the disk for apt to do a normal 'apt upgrade'. I normally resolve the issue by doing 'apt install <somepackages>' to upgrade only some of the packages in one batch, until the amount of packages to download falls below the amount of free space available. Today, I had about 500 packages to upgrade, and after a while I got tired of trying to install chunks of packages manually. I concluded that I did not have the spare hours required to complete the task, and decided to see if I could automate it. I came up with this small script which I call 'apt-in-chunks':

#!/bin/sh
#
# Upgrade packages when the disk is too full to upgrade every
# upgradable package in one lump.  Fetching packages to upgrade using
# apt, and then installing using dpkg, to avoid changing the package
# flag for manual/automatic.

set -e

ignore() {
    if [ "$1" ]; then
	grep -v "$1"
    else
	cat
    fi
}

for p in $(apt list --upgradable | ignore "$@" |cut -d/ -f1 | grep -v '^Listing...'); do
    echo "Upgrading $p"
    apt clean
    apt install --download-only -y $p
    for f in /var/cache/apt/archives/*.deb; do
	if [ -e "$f" ]; then
	    dpkg -i /var/cache/apt/archives/*.deb
	    break
	fi
    done
done

The script will extract the list of packages to upgrade, try to download the packages needed to upgrade one package at a time, and install the downloaded packages using dpkg. The idea is to upgrade packages without changing the APT mark for the package (ie the one recording whether the package was manually requested or pulled in as a dependency). To use it, simply run it as root from the command line. If it fails, try 'apt install -f' to clean up the mess and run the script again. This might happen if the new packages conflict with one of the old packages, which dpkg is unable to remove while apt can.

It takes one optional argument, a pattern for packages to ignore in the list of packages to upgrade. The option to ignore a package is there to be able to skip the packages that are simply too large to unpack. Today this was 'ghc', but I have run into other large packages causing similar problems earlier (like TeX).
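
In other words, typical invocations look like this, run as root:

apt-in-chunks        # upgrade everything, one package at a time
apt-in-chunks ghc    # the same, but skip anything matching "ghc"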

Update 2018-07-08: Thanks to Paul Wise, I am aware of two alternative ways to handle this. The "unattended-upgrades --minimal-upgrade-steps" option will try to calculate upgrade sets for each package to upgrade, and then upgrade them in order, smallest set first. It might be a better option than my above mentioned script. Also, "aptitude upgrade" can upgrade single packages, thus avoiding the need for using "dpkg -i" in the script above.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
30th June 2018

So far, at least hydro-electric power, coal power, wind power, solar power, and wood power are well known. Until a few days ago, I had never heard of stone power. Then I learned about a quarry in a mountain in Bremanger in Norway, where the Bremanger Quarry company is extracting stone and dumping it into a shaft leading to its shipping harbour. The downward movement in this shaft is used to produce electricity. In short, it is using falling rocks instead of falling water to produce electricity, and according to its own statements it is producing more power than it is using, and selling the surplus electricity to the Norwegian power grid. I find the concept truly amazing. Is this the world's only stone power plant?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
26th June 2018

My movie playing setup involves Kodi, OpenELEC (probably soon to be replaced with LibreELEC) and an Infocus IN76 video projector. My projector can be controlled via both an infrared remote control and an RS-232 serial line. The vendor of my projector, InFocus, has been sensible enough to document the serial protocol in its user manual, so it is easily available, and I used it some years ago to write a small script to control the projector. For a while now, I have longed for a setup where the projector was controlled by Kodi, for example in such a way that when the screen saver went on, the projector was turned off, and when the screen saver exited, the projector was turned on again.

A few days ago, with very good help from parts of my family, I managed to find a Kodi Add-on for controlling an Epson projector, and got in touch with its author to see if we could join forces and make an Add-on with support for several projectors. To my pleasure, he was positive about the idea, and we set out to add InFocus support to his add-on, and make the add-on suitable for the official Kodi add-on repository.

The Add-on is now working (for me, at least), with a few minor adjustments. The most important change I made relative to the master branch in the github repository is embedding the pyserial module in the add-on. The long term solution is to make a "script" type pyserial module for Kodi that can be pulled in as a dependency. But until that is in place, I embed it.

The add-on can be configured to turn the projector on when Kodi starts and off when Kodi stops, as well as turn the projector off when the screensaver starts and back on when the screensaver stops. It can also be told to set the projector source when turning on the projector.

If this sounds interesting to you, check out the project github repository. Perhaps you can send patches to support your projector too? As soon as we find time to wrap up the latest changes, it should be available for easy installation using any Kodi instance.

For future improvements, I would like to add projector model detection and the ability to adjust the brightness level of the projector from within Kodi. We also need to figure out how to handle the cooling period of the projector. My projector refuses to turn on for 60 seconds after it was turned off. This is not handled well by the add-on at the moment.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

22nd March 2018

The leaders of the world have started to congratulate the re-elected Russian head of state, and this causes some criticism. I am though a little fascinated by a comment from USA senator John McCain, cited by The Hill and others:

"An American president does not lead the Free World by congratulating dictators on winning sham elections."

While I totally agree with the senator here, the way the quote is phrased makes me suspect that he is unaware of the simple fact that the USA has not led the Free World since at least before its government kidnapped a completely innocent Canadian citizen in transit on his way home to Canada via John F. Kennedy International Airport in September 2002 and sent him to be tortured in Syria for a year.

The USA might be running ahead, but the path it is taking is not the one taken by any Free World.

Tags: english.
21st March 2018

So, Cambridge Analytica is getting some well deserved criticism for (mis)using information it got from Facebook about 50 million people, mostly in the USA. What I find a bit surprising is how little criticism Facebook is getting for handing the information over to Cambridge Analytica and others in the first place. And what about the people handing their private and personal information to Facebook? And last, but not least, what about the government offices which are handing information about the visitors of their web pages to Facebook? No-one who looked at the terms of use of Facebook should be surprised that information about people's interests, political views, personal lives and whereabouts would be sold by Facebook.

What I find to be the real scandal is the fact that Facebook is selling your personal information, not that one of the buyers used it in a way Facebook did not approve of when exposed. It is well known that Facebook is selling out their users' privacy, but it is a scandal nevertheless. Of course the information provided to them by Facebook would be misused by one of the parties given access to personal information about the millions of Facebook users. Collected information will be misused sooner or later. The only way to avoid such misuse is to not collect the information in the first place. If you do not want Facebook to hand out information about yourself for the use and misuse of its customers, do not give Facebook the information.

Personally, I would recommend completely removing your Facebook account, and taking back some control of your personal information. According to The Guardian, it is a bit hard to find out how to request account removal (and not just 'disabling'). You need to visit a specific Facebook page and click on 'let us know' on that page to get to the real account deletion screen. Perhaps something to consider? I would not trust the information to really be deleted (who knows, perhaps NSA, GCHQ and FRA already have a copy), but it might reduce the exposure a bit.

If you want to learn more about the capabilities of Cambridge Analytica, I recommend watching the video recording of the one hour talk Paul-Olivier Dehaye gave to NUUG last April about data collection, psychometric profiling and their impact on politics.

And if you want to communicate with your friends and loved ones, use some end-to-end encrypted method like Signal or Ring, and stop sharing your private messages with strangers like Facebook and Google.

13th March 2018

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and histories from those around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books into Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proof reading is almost done, and only the second and third proof readings remain. We will also need to translate the 14 figures and create a book cover. Once it is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which finally is given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.
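
Expressed as shell commands, the pipeline looks roughly like the sketch below. The fix-up scripts are specific to this project (the names here are made up), and po4a is just one tool able to handle the DocBook/PO round trip, so treat the tool choices as assumptions:

# Google Docs manuscript, downloaded as ODT, converted to Markdown
pandoc -f odt -t markdown book.odt -o book.md
./fixup-markdown book.md        # project-specific script, hypothetical name
# Markdown to DocBook
pandoc -f markdown -t docbook book.md -o book.xml
./fixup-docbook book.xml        # project-specific script, hypothetical name
# Extract translatable strings, then merge the translated PO file back in
po4a-gettextize -f docbook -m book.xml -p book.pot
po4a-translate -f docbook -m book.xml -p nb.po -l book.nb.xml
# Translated DocBook to PDF
dblatex book.nb.xml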

The translation is conducted using the Weblate web based translation system. Please have a look there and get in touch if you would like to help out with proof reading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

2nd March 2018

Today I was pleasantly surprised to discover my operating system of choice, Debian, was used on the info screens at the subway stations. While passing Nydalen subway station in Oslo, Norway, I discovered the info screen booting with some text scrolling. I was not quick enough with my camera to be able to record a video of the scrolling boot screen, but I did get a photo from when the boot got stuck with a corrupt file system:

[photo of subway info screen]

While I am happy to see Debian used more places, some details of the content on the screen worries me.

The image shows the version booting is 'Debian GNU/Linux lenny/sid', indicating that this is based on code taken from Debian Unstable/Sid after Debian Etch (version 4) was released 2007-04-08 and before Debian Lenny (version 5) was released 2009-02-14. Since Lenny, Debian has released version 6 (Squeeze) 2011-02-06, 7 (Wheezy) 2013-05-04, 8 (Jessie) 2015-04-25 and 9 (Stretch) 2017-06-15, according to the Debian version history on Wikipedia. This means the system is running around 10 year old code, with no security fixes from the vendor for many years.

This is not the first time I have discovered the Oslo subway company, Ruter, running outdated software. In 2012, I discovered the ticket vending machines were running Windows 2000, and this was still the case in 2016. Given the response from the responsible people in 2016, I would assume the machines are still running unpatched Windows 2000. Thus, an unpatched Debian setup comes as no surprise.

The photo is made available under the license terms Creative Commons 4.0 Attribution International (CC BY 4.0).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, ruter.
18th February 2018

Surprising as it might sound, there are still computers using the traditional Sys V init system, and there probably will be until systemd starts working on Hurd and FreeBSD. The upstream project still exists, though, and up until today, the upstream source was available from Savannah via subversion. I am happy to report that this just changed.

The upstream source is now in Git, and consists of three repositories:

I do not really spend much time on the project these days, and have mostly retired from it, but found it best to migrate the source to a good version control system to help those willing to move it forward.
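
If you want a copy of the code, something along these lines should work, assuming the usual Savannah git hosting layout (unverified, so check the project page for the exact repository URLs):

git clone https://git.savannah.nongnu.org/git/sysvinit.git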

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th February 2018

A few days ago, a new major version of VLC was announced, and I decided to check out if it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them can not talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. Looking at the release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news from Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were some bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin to add bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:

vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent

The plugin is supposed to handle magnet links too, but since The Internet Archive does not have magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13th February 2018

A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded over the last few days, and the last update will enter testing tomorrow. See the release notes for the list of bug fixes and new features. Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

11th February 2018

We write 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect the use of our alphabet to just work with any computer system. But it is apparently beyond the reach of the computers printing receipts at restaurants. Recently I visited a Peppes pizza restaurant, and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.

I would say that this state of affairs has passed sad and moved into embarrassing.

I removed personal and private information to be nice.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
7th January 2018

I've continued to track down the list of movies that are legal to distribute on the Internet, and have identified more than 11,000 title IDs in The Internet Movie Database (IMDB) so far. Most of them (57%) are feature films from the USA published before 1923. I've also tracked down more than 24,000 movies I have not yet been able to map to an IMDB title ID, so the real number could be a lot higher. According to the front web page of Retro Film Vault, there are 44,000 public domain films, so I guess there are still some left to identify.

The complete data set is available from a public git repository, including the scripts used to create it. Most of the data is collected using web scraping, for example from the "product catalog" of companies selling copies of public domain movies, but any source I find believable is used. I've so far had to throw out three sources because I did not trust the public domain status of the movies listed.

Anyway, this is the summary of the 28 collected data sources so far:

 2352 entries (   66 unique) with and 15983 without IMDB title ID in free-movies-archive-org-search.json
 2302 entries (  120 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  195 entries (   63 unique) with and   200 without IMDB title ID in free-movies-cinemovies.json
   89 entries (   52 unique) with and    38 without IMDB title ID in free-movies-creative-commons.json
  344 entries (   28 unique) with and   655 without IMDB title ID in free-movies-fesfilm.json
  668 entries (  209 unique) with and  1064 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   21 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 6822 entries ( 6669 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
  137 entries (    0 unique) with and     0 without IMDB title ID in free-movies-imdb-externlist.json
 1205 entries (   57 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
   84 entries (   20 unique) with and   167 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  135 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (  100 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  229 entries (   87 unique) with and     1 without IMDB title ID in free-movies-manual.json
   44 entries (    2 unique) with and    64 without IMDB title ID in free-movies-openflix.json
  291 entries (   33 unique) with and   474 without IMDB title ID in free-movies-profilms-pd.json
  211 entries (    7 unique) with and     0 without IMDB title ID in free-movies-publicdomainmovies-info.json
 1232 entries (   57 unique) with and  1875 without IMDB title ID in free-movies-publicdomainmovies-net.json
   46 entries (   13 unique) with and    81 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   64 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 1758 entries (  882 unique) with and  3786 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
   63 entries (   16 unique) with and   141 without IMDB title ID in free-movies-vodo.json
11583 unique IMDB title IDs in total, 8724 only in one list, 24647 without IMDB title ID

I keep finding more data sources. I found the cinemovies source just a few days ago, and as you can see from the summary, it extended my list with 63 movies. Check out the mklist-* scripts in the git repository if you are curious about how the lists are created. Many of the titles are extracted using searches on IMDB, where I look for the title and year, and accept search results with only one movie listed if the year matches. This allows me to automatically use many lists of movies without IMDB title ID references, at the cost of an increased risk of wrongly identifying an IMDB title ID as public domain. So far my random manual checks have indicated that the method is solid, but I really wish all lists of public domain movies would include a unique movie identifier like the IMDB title ID. It would make the job of counting movies in the public domain a lot easier.
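To illustrate the heuristic, here is a minimal sketch of that kind of lookup, not the actual mklist-* code. It searches IMDB for the title and year and accepts the result only if exactly one title ID shows up; the scraping regex is an assumption on my part and will break whenever IMDB changes its markup.

import re
import urllib.parse
import urllib.request

def find_imdb_id(title, year):
    # Search for "<title> <year>" using the public find page.
    query = urllib.parse.quote_plus('%s %s' % (title, year))
    url = 'http://www.imdb.com/find?q=%s&s=all' % query
    request = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    with urllib.request.urlopen(request) as response:
        html = response.read().decode('utf-8', 'replace')
    # Title IDs show up as /title/tt1234567/ links in the result page.
    ids = sorted(set(re.findall(r'/title/(tt\d+)/', html)))
    # Accept the match only when the result is unambiguous.
    return ids[0] if len(ids) == 1 else None

print(find_imdb_id('Adam and Evil', 1927))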

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

17th December 2017

After several months of working and waiting, I am happy to report that the nice and user friendly 3D printer slicer software Cura just entered Debian Unstable. It consists of six packages: cura, cura-engine, libarcus, fdm-materials, libsavitar and uranium. The last two, uranium and cura, entered Unstable yesterday. This should make it easier for Debian users to print on at least the Ultimaker class of 3D printers. My nearest 3D printer is an Ultimaker 2+, so it will make life easier for at least me. :)

The work to make this happen was done by Gregor Riepl, and I was happy to assist him in sponsoring the packages. With the introduction of Cura, Debian is up to three 3D printer slicers at your service: Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D printer, give it a go. :)

The 3D printer software is maintained by the 3D printer Debian team, flocking together on the 3dprinter-general mailing list and the #debian-3dprinting IRC channel.

The next step for Cura in Debian is to update the cura package to version 3.0.3 and then update the entire set of packages to version 3.1.0, which showed up in the last few days.

13th December 2017

While looking at the scanned copies of the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals per year is so small that it should be fairly quick to transcribe them all and add references to the corresponding IMDB title IDs. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain for the transcribed year. By fetching the list of USA movies published 28 years earlier and subtracting the movies with renewals, we should be left with movies registered in IMDB that are now in the public domain. For the year 1955 (which is the one I have looked at the most), the total number of pages to transcribe is 21. For the years from 1950 to 1978, it should be in the range of 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.

A typical copyright renewal entry looks like this (the first one listed for 1955):

ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); 10Jun55; R151558.

The movie title as well as the registration and renewal dates are easy enough to locate with a program (split on the first comma and look for DDmmmYY). The rest of the text is not required to find the movie in IMDB, but is useful to confirm that the correct movie is found. I am not quite sure what the L and R numbers mean, but suspect they are reference numbers into the archive of the US Copyright Office.
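As a sketch of such a program, the following Python fragment pulls the title, the two dates and the L and R reference numbers out of the example entry above. The regular expressions encode only the DDmmmYY date format and the L/R number shapes visible in that one entry, so treat them as assumptions rather than a complete grammar for the catalog.

import re

ENTRY = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")

def parse_renewal(entry):
    # The movie title is everything before the first comma.
    title, rest = entry.split(',', 1)
    # Dates look like 17Aug27: two digits, a capitalised three letter
    # month, and a two digit year.
    dates = re.findall(r'\b\d{2}[A-Z][a-z]{2}\d{2}\b', rest)
    l_number = re.search(r'\bL\d+\b', rest)
    r_number = re.search(r'\bR\d+\b', rest)
    return {
        'title': title.strip(),
        'registered': dates[0] if dates else None,
        'renewed': dates[-1] if len(dates) > 1 else None,
        'l_number': l_number.group(0) if l_number else None,
        'r_number': r_number.group(0) if r_number else None,
    }

print(parse_renewal(ENTRY))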

Tracking down the equivalent IMDB title ID is probably going to be a manual task, but given the year it is fairly easy to search for the movie title using for example http://www.imdb.com/find?q=adam+and+evil+1927&s=all. Using this search, I find that the equivalent IMDB title ID for the first renewal entry from 1955 is http://www.imdb.com/title/tt0017588/.

I suspect the best way to do this would be to make a specialised web service to make it easy for contributors to transcribe and track down IMDB title IDs. In the web service, once an entry is transcribed, the title and year could be extracted from the text and a search conducted in IMDB, letting the user pick the equivalent IMDB title ID right away. By spreading the work among volunteers, it would also be possible to have at least two people transcribe the same entries, to be able to discover any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowd sourcing the task?

Note, Project Gutenberg already has some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect they only cover renewals of written works. I have not been able to find any transcribed versions of movie renewals so far. Perhaps they exist somewhere?

I would love to figure out methods for finding all the public domain works in other countries too, but it is a lot harder. At least for Norway and Great Britain, such work involves tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, but I do not know how to automate such a procedure without a registry of every person involved in making movies and their year of death.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

5th December 2017

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian Copyright Law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: The film was released in November or December 1927 (sources disagree), and presumably registered its copyright that year. At that time, rights holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such a renewal did not happen for this movie, and thus it entered the public domain in 1956. Yet someone claims the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the Wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

28th November 2017

It would be easier to locate the movie you want to watch in the Internet Archive, if the metadata about each movie was more complete and accurate. In the archiving community, a well known saying states that good metadata is a love letter to the future. The metadata in the Internet Archive could use a face lift for the future to love us back. Here is a proposal for a small improvement that would make the metadata more useful today. I've been unable to find any document describing the various standard fields available when uploading videos to the archive, so this proposal is based on my best guess and on searching through several of the existing movies.

I have a few use cases in mind. First of all, I would like to be able to count the number of distinct movies in the Internet Archive, without duplicates. I would further like to identify the IMDB title ID of the movies in the Internet Archive, to be able to look up an IMDB title ID and know if I can fetch the video from there and share it with my friends.

Second, I would like the Butter data provider for The Internet Archive (available from github) to list as many of the good movies as possible. The plugin currently does a search in the archive with the following parameters:

collection:moviesandfilms
AND NOT collection:movie_trailers
AND -mediatype:collection
AND format:"Archive BitTorrent"
AND year

Most of the cool movies that fail to show up in Butter do so because the 'year' field is missing. The 'year' field is populated from the year part of the 'date' field, which should be when the movie was released (a date or a year). Two such examples are Ben Hur from 1905 and Caminandes 2: Gran Dillama from 2013, where the year metadata field is missing.
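For the curious, here is a small sketch of how to run the same search yourself against the public advancedsearch.php JSON interface of the archive. The query string is copied from the list above; the choice of returned fields is my own and not necessarily what Butter requests.

import json
import urllib.parse
import urllib.request

query = ('collection:moviesandfilms AND NOT collection:movie_trailers '
         'AND -mediatype:collection AND format:"Archive BitTorrent" AND year')
url = ('https://archive.org/advancedsearch.php?q=%s'
       '&fl[]=identifier&fl[]=title&fl[]=year&rows=10&output=json'
       % urllib.parse.quote(query))

with urllib.request.urlopen(url) as response:
    docs = json.load(response)['response']['docs']
# Every movie returned here carries a 'year' and would show up in Butter.
for doc in docs:
    print(doc.get('year'), doc.get('identifier'), doc.get('title'))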

So, my proposal is simply, for every movie in The Internet Archive where an IMDB title ID exists, please fill in these metadata fields (note, as far as I can tell they can be updated long after the video was uploaded, but only by the uploader):

mediatype
  Should be 'movie' for movies.
collection
  Should contain 'moviesandfilms'.
title
  The title of the movie, without the publication year.
date
  The date or year the movie was released. This makes the movie show up in Butter, makes it possible to know the age of the movie, and is useful for figuring out its copyright status.
director
  The director of the movie. This makes it easier to check that the correct movie is found in movie databases.
publisher
  The production company making the movie. Also useful for identifying the correct movie.
links
  A link to the IMDB title page, for example like this: <a href="http://www.imdb.com/title/tt0028496/">Movie in IMDB</a>. This makes it easier to find duplicates and allows counting the number of unique movies in the Archive. Other external references, like to TMDB, could be added like this too.

I did consider proposing a Custom field for the IMDB title ID (for example 'imdb_title_url', 'imdb_code' or simply 'imdb'), but suspect it will be easier to simply place it in the links free text field.
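For uploaders who prefer scripting over the web interface, here is a hedged sketch using the internetarchive Python package (install it and run 'ia configure' first). The item identifier and all the metadata values below are hypothetical placeholders, not a real item; whether a given field is accepted depends on the item and the uploader's permissions.

from internetarchive import modify_metadata

# All values below are hypothetical examples following the proposal above.
response = modify_metadata('example-movie-item', metadata={
    'mediatype': 'movie',
    'collection': 'moviesandfilms',
    'title': 'Example Movie',
    'date': '1927',
    'director': 'Jane Doe',
    'publisher': 'Example Film Corp.',
    'links': '<a href="http://www.imdb.com/title/tt0000000/">Movie in IMDB</a>',
})
print(response.status_code)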

I created a list of IMDB title IDs for several thousand movies in the Internet Archive, but I also got a list of several thousand movies without such an IMDB title ID (and quite a few duplicates). It would be great if this data set could be integrated into the Internet Archive metadata to be available for everyone in the future, but with the current policy of leaving metadata editing to the uploaders, it will take a while before this happens. If you have uploaded movies into the Internet Archive, you can help. Please consider following my proposal above for your movies, to ensure they are properly counted. :)

The list is mostly generated using Wikidata, which, based on Wikipedia articles, makes it possible to link between IMDB and movies in the Internet Archive. But there are lots of movies without a Wikipedia article, and some movies where only a collection page exists (like the Caminandes example above, where there are three movies but only one Wikidata entry).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18th November 2017

A month ago, I blogged about my work to automatically check the copyright status of IMDB entries, and to try to count the number of movies listed in IMDB that are legal to distribute on the Internet. I have continued to look for good data sources, and identified a few more. The code used to extract information from various data sources is available in a git repository, currently available from github.

So far I have identified 3186 unique IMDB title IDs. To gain a better understanding of the structure of the data set, I created a histogram of the year associated with each movie (typically the release year). It is interesting to notice where the peaks and dips in the graph are located. I wonder why they are placed there. I suspect World War II caused the dip around 1940, but what caused the peak around 2010?
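A histogram like that is easy to recreate from the JSON files in the repository. The sketch below assumes each file holds a list of entries that may carry a 'year' field; check the repository for the actual structure before relying on it.

import json
import sys
from collections import Counter

counts = Counter()
for name in sys.argv[1:]:    # pass one or more free-movies-*.json files
    with open(name) as f:
        for entry in json.load(f):
            year = entry.get('year')
            if year:
                counts[int(year)] += 1

# Print a crude text histogram, one row per year.
for year in sorted(counts):
    print('%d %s' % (year, '*' * counts[year]))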

I've so far identified ten sources of IMDB title IDs for movies in the public domain or with a free license. These are the statistics reported when running 'make stats' in the git repository:

  249 entries (    6 unique) with and   288 without IMDB title ID in free-movies-archive-org-butter.json
 2301 entries (  540 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  830 entries (   29 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
 2109 entries (  377 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  291 entries (  122 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  144 entries (  135 unique) with and     0 without IMDB title ID in free-movies-manual.json
  350 entries (    1 unique) with and   801 without IMDB title ID in free-movies-publicdomainmovies.json
    4 entries (    0 unique) with and   124 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (  119 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
    8 entries (    8 unique) with and   196 without IMDB title ID in free-movies-vodo.json
 3186 unique IMDB title IDs in total

The entries without an IMDB title ID are candidates for increasing the data set, but might equally well be duplicates of entries already listed with an IMDB title ID in one of the other sources, or represent movies that lack an IMDB title ID. I've seen examples of all these situations when peeking at the entries without an IMDB title ID. Based on these data sources, the lower bound for the number of movies listed in IMDB that are legal to distribute on the Internet is somewhere between 3186 and 4713.

It would greatly improve the accuracy of this measurement if the various sources added IMDB title IDs to their metadata. I have tried to reach the people behind the various sources to ask if they are interested in doing this, without any replies so far. Perhaps you can help me get in touch with the people behind VODO, Public Domain Torrents, Public Domain Movies and Public Domain Review to try to convince them to add more metadata to their movie entries?

Another way you could help is by adding pages to Wikipedia about movies that are legal to distribute on the Internet. If such a page exists and includes a link to both IMDB and The Internet Archive, the script used to generate free-movies-archive-org-wikidata.json should pick up the mapping as soon as Wikidata is updated.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st November 2017

If you care about how fault tolerant your storage is, you might find these articles and papers interesting. They have shaped how I think when designing a storage system.

Several of these research papers are based on data collected from hundreds of thousands or millions of disks, and their findings are eye opening. The short story is: simply do not implicitly trust RAID or redundant storage systems. Details matter. And unfortunately there are few options on Linux addressing all the identified issues. Both ZFS and Btrfs are doing a fairly good job, but have legal and practical issues of their own. I wonder how cluster file systems like Ceph do in this regard. After all, there is an old saying: you know you have a distributed system when the crash of a computer you have never heard of stops you from getting any work done. The same holds true if fault tolerance does not work.

Just remember, in the end, it does not matter how redundant or how fault tolerant your storage is, if you do not continuously monitor its status to detect and replace failed disks.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, raid, sysadmin.
31st October 2017

I was surprised today to learn that a friend in academia did not know there are easily available web services for writing LaTeX documents as a team. I thought it was common knowledge, but to make sure at least my readers are aware of it, I would like to mention these useful services for writing LaTeX documents. Some of them even provide a WYSIWYG editor to ease writing even further.

There are two commercial services available, ShareLaTeX and Overleaf. They are very easy to use. Just start a new document, select which publisher to write for (ie which LaTeX style to use), and start writing. Note, these two have announced their intention to join forces, so soon there will only be one joint service. I've used both for different documents, and they work just fine. ShareLaTeX is free software, while Overleaf is not. According to an announcement from Overleaf, they plan to keep the ShareLaTeX code base maintained as free software.

But these two are not the only alternatives. Fidus Writer is another free software solution with the source available on github. I have not used it myself. Several others can be found on the nice AlternativeTo web service.

If you like Google Docs or Etherpad, but would like to write documents in LaTeX, you should check out these services. You can even host your own, if you want to. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
25th October 2017

Recently, I needed to automatically check the copyright status of a set of Internet Movie Database (IMDB) entries, to figure out which of the movies they refer to can be freely distributed on the Internet. This proved to be harder than it sounds. IMDB certainly lists movies without any copyright protection, where the copyright protection has expired or where the movie is licensed using a permissive license like one from Creative Commons. These are mixed with copyright protected movies, and there seems to be no way to separate these classes of movies using the information in IMDB.

First I tried to look up entries manually in IMDB, Wikipedia and The Internet Archive, to get a feel for how to do this. It is hard to know for sure using these sources, but it should be possible to be reasonably confident a movie is "out of copyright" with a few hours of work per movie. As I needed to check almost 20,000 entries, this approach was not sustainable. I simply cannot work around the clock for about 6 years to check this data set.

I asked the people behind The Internet Archive if they could introduce a new metadata field in their metadata XML for the IMDB ID, but was told that they leave it completely to the uploaders to update the metadata. Some of the metadata entries had IMDB links in the description, but I found no way to download all the metadata files in bulk to locate those, so I put that approach aside.

In the process I noticed several Wikipedia articles about movies had links to both IMDB and The Internet Archive, and it occurred to me that I could use the Wikipedia RDF data set to locate entries with both, to at least get a lower bound on the number of movies on The Internet Archive with an IMDB ID. This is useful based on the assumption that movies distributed by The Internet Archive can be legally distributed on the Internet. With some help from the RDF community (thank you DanC), I was able to come up with this query to pass to the SPARQL interface on Wikidata:

SELECT ?work ?imdb ?ia ?when ?label
WHERE
{
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
  OPTIONAL {
        ?work wdt:P577 ?when.
        ?work rdfs:label ?label.
        FILTER(LANG(?label) = "en").
  }
}

If I understand the query right, for every film entry anywhere in Wikipedia, it will return the IMDB ID and The Internet Archive ID, and when the movie was released and its English title, if either or both of the latter two are available. At the moment the result set contains 2338 entries. Of course, it depends on volunteers including correct IMDB and Internet Archive IDs in the Wikipedia articles for the movies. It should be noted that the result will include duplicates if the movie has entries in several languages. There are some bogus entries, either because the Internet Archive ID contains a typo or because the movie is not available from The Internet Archive. I did not verify the IMDB IDs, as I am unsure how to do that automatically.

I wrote a small Python script to extract the data set from Wikidata and check if the XML metadata for each movie is available from The Internet Archive, and after around 1.5 hours it produced a list of 2097 free movies and their IMDB IDs. In total, 171 entries in Wikidata lack the referred Internet Archive entry. I assume the 70 "disappearing" entries (ie 2338-2097-171) are duplicate entries.
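The script itself lives in the git repository, but its core can be sketched in a few lines: run the SPARQL query above against the Wikidata endpoint and probe the metadata service of The Internet Archive for each identifier. The endpoints are public; the terse error handling below is my own, not the actual script.

import requests

SPARQL = '''
SELECT ?work ?imdb ?ia ?when ?label
WHERE
{
  ?work wdt:P31/wdt:P279* wd:Q11424.
  ?work wdt:P345 ?imdb.
  ?work wdt:P724 ?ia.
  OPTIONAL {
        ?work wdt:P577 ?when.
        ?work rdfs:label ?label.
        FILTER(LANG(?label) = "en").
  }
}
'''
res = requests.get('https://query.wikidata.org/sparql',
                   params={'query': SPARQL, 'format': 'json'})
for row in res.json()['results']['bindings']:
    ia_id = row['ia']['value']
    # The metadata service returns an empty JSON object for missing
    # items, which makes broken references easy to spot.
    meta = requests.get('https://archive.org/metadata/%s' % ia_id).json()
    print('ok' if meta else 'missing', row['imdb']['value'], ia_id)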

This is not too bad, given that The Internet Archive reports containing 5331 feature films at the moment, but it also means more than 3000 movies either are missing from Wikipedia or are missing the pair of references on Wikipedia.

I was curious about the distribution by release year, and made a little graph showing how the number of free movies is spread over the years:

I expect the relative distribution of the remaining 3000 movies to be similar.

If you want to help, and want to ensure Wikipedia can be used to cross reference The Internet Archive and The Internet Movie Database, please make sure entries like this are listed under the "External links" heading on the Wikipedia article for the movie:

* {{Internet Archive film|id=FightingLady}}
* {{IMDb title|id=0036823|title=The Fighting Lady}}

Please verify the links on the final page, to make sure you did not introduce a typo.

Here is the complete list, if you want to correct the 171 identified Wikipedia entries with broken links to The Internet Archive: Q1140317, Q458656, Q458656, Q470560, Q743340, Q822580, Q480696, Q128761, Q1307059, Q1335091, Q1537166, Q1438334, Q1479751, Q1497200, Q1498122, Q865973, Q834269, Q841781, Q841781, Q1548193, Q499031, Q1564769, Q1585239, Q1585569, Q1624236, Q4796595, Q4853469, Q4873046, Q915016, Q4660396, Q4677708, Q4738449, Q4756096, Q4766785, Q880357, Q882066, Q882066, Q204191, Q204191, Q1194170, Q940014, Q946863, Q172837, Q573077, Q1219005, Q1219599, Q1643798, Q1656352, Q1659549, Q1660007, Q1698154, Q1737980, Q1877284, Q1199354, Q1199354, Q1199451, Q1211871, Q1212179, Q1238382, Q4906454, Q320219, Q1148649, Q645094, Q5050350, Q5166548, Q2677926, Q2698139, Q2707305, Q2740725, Q2024780, Q2117418, Q2138984, Q1127992, Q1058087, Q1070484, Q1080080, Q1090813, Q1251918, Q1254110, Q1257070, Q1257079, Q1197410, Q1198423, Q706951, Q723239, Q2079261, Q1171364, Q617858, Q5166611, Q5166611, Q324513, Q374172, Q7533269, Q970386, Q976849, Q7458614, Q5347416, Q5460005, Q5463392, Q3038555, Q5288458, Q2346516, Q5183645, Q5185497, Q5216127, Q5223127, Q5261159, Q1300759, Q5521241, Q7733434, Q7736264, Q7737032, Q7882671, Q7719427, Q7719444, Q7722575, Q2629763, Q2640346, Q2649671, Q7703851, Q7747041, Q6544949, Q6672759, Q2445896, Q12124891, Q3127044, Q2511262, Q2517672, Q2543165, Q426628, Q426628, Q12126890, Q13359969, Q13359969, Q2294295, Q2294295, Q2559509, Q2559912, Q7760469, Q6703974, Q4744, Q7766962, Q7768516, Q7769205, Q7769988, Q2946945, Q3212086, Q3212086, Q18218448, Q18218448, Q18218448, Q6909175, Q7405709, Q7416149, Q7239952, Q7317332, Q7783674, Q7783704, Q7857590, Q3372526, Q3372642, Q3372816, Q3372909, Q7959649, Q7977485, Q7992684, Q3817966, Q3821852, Q3420907, Q3429733, Q774474

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th October 2017

I find it fascinating how many of the people being locked inside the proposed border wall between the USA and Mexico support the idea. The proposal to keep Mexicans out reminds me of the propaganda twist from the East German government calling the wall the “Antifascist Bulwark” after erecting the Berlin Wall, claiming that the wall was erected to keep enemies from creeping into East Germany, while it was obvious to the people locked inside it that it was erected to keep the people from escaping.

Do the people in the USA supporting this wall really believe it is a one way wall, only keeping people on the outside from getting in, while not keeping people on the inside from getting out?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english.
9th October 2017

At my nearby maker space, Sonen, I heard the story that it was easier to generate gcode files for their 3D printers (Ultimaker 2+) on Windows and MacOS X than on Linux, because the software involved had to be manually compiled and set up on Linux, while premade packages worked out of the box on Windows and MacOS X. I found this annoying, as the software involved, Cura, is free software and should be trivial to get up and running on Linux if someone took the time to package it for the relevant distributions. I even found a request for adding it to Debian from 2013, which had seen some activity over the years but never resulted in the software showing up in Debian. So a few days ago I offered my help to try to improve the situation.

Now I am very happy to see that all the packages required for a working Cura are uploaded into Debian and waiting in the NEW queue for the ftpmasters to have a look. You can track the progress on the status page for the 3D printer team.

The uploaded packages are a bit behind upstream, and were uploaded now to get slots in the NEW queue while we work on updating the packages to the latest upstream version.

On a related note, two competitors to Cura, which I found harder to use and was unable to configure correctly for the Ultimaker 2+ in the short time I spent on them, are already in Debian. If you are looking for 3D printer "slicers" and want something already available in Debian, check out slic3r and slic3r-prusa. The latter is a fork of the former.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

29th September 2017

Every mobile phone announces its existence over radio to the nearby mobile cell towers. And this radio chatter is available for anyone with a radio receiver capable of receiving it. Details about the mobile phones are of course collected with very good accuracy by the phone companies, but this is not the topic of this blog post. The mobile phone radio chatter makes it possible to figure out when a cell phone is nearby, as it includes the SIM card ID (IMSI). By paying attention over time, one can see when a phone arrives in and when it leaves an area. I believe it would be nice to make this information more available to the general public, to make more people aware of how their phones are announcing their whereabouts to anyone who cares to listen.

I am very happy to report that we managed to get something visualizing this information up and running for Oslo Skaperfestival 2017 (Oslo Makers Festival), taking place today and tomorrow at Deichmanske library. The solution is based on the simple recipe for listening to GSM chatter I posted a few days ago, and will show up at the stand of Åpen Sone from the Computer Science department of the University of Oslo. The presentation will show the nearby mobile phones (aka IMSIs) as dots in a web browser graph, with lines to the dots representing the mobile base stations they are talking to. It was working in the lab yesterday, and was moved into place this morning.

We set up a fairly powerful desktop machine using Debian Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers connected, and visualize the visible cell phone towers using an English version of Hopglass. A fairly powerful machine is needed, as the grgsm_livemon_headless processes from gr-gsm converting the radio signals to data packets are quite CPU intensive.

The frequencies to listen to are identified using a slightly patched scan-and-livemon (to set the --args values for each receiver), and the Hopglass data is generated using the patches in my meshviewer-output branch. For some reason we could not get more than four SDRs working. There is also a geographical map trying to show the location of the base stations, but I believe their coordinates are hardcoded to some random location in Germany. The code should be replaced with code to look up locations in a text file, an SQLite database or one of the online databases mentioned in the github issue for the topic.

If this sounds interesting, visit the stand at the festival!

24th September 2017

A little more than a month ago I wrote about how to observe the SIM card ID (aka IMSI number) of mobile phones talking to nearby mobile phone base stations using Debian GNU/Linux and a cheap USB software defined radio, and thus being able to pinpoint the location of people and equipment (like cars and trains) with an accuracy of a few kilometers. Since then we have worked to make the procedure even simpler, and it is now possible to do this without any manual frequency tuning and without building your own packages.

The gr-gsm package is now included in Debian testing and unstable, and the IMSI-catcher code no longer requires root access to fetch and decode the GSM data collected using gr-gsm.

Here is an updated recipe, using packages built by Debian and a git clone of two Python scripts:

  1. Start with a Debian machine running the Buster version (aka testing).
  2. Run 'apt install gr-gsm python-numpy python-scipy python-scapy' as root to install the required packages.
  3. Fetch the code decoding GSM packets using 'git clone https://github.com/Oros42/IMSI-catcher.git'.
  4. Insert a USB software defined radio supported by GNU Radio.
  5. Enter the IMSI-catcher directory and run 'python scan-and-livemon' to locate the frequency of nearby base stations and start listening for GSM packets on one of them.
  6. Enter the IMSI-catcher directory and run 'python simple_IMSI-catcher.py' to display the collected information.

Note, due to a bug somewhere, the scan-and-livemon program (actually its underlying program grgsm_scanner) does not work with the HackRF radio. It does work with RTL2832 based and other similar USB radio receivers you can get very cheaply (for example from ebay), so for now the solution is to scan using the RTL radio and only use the HackRF for fetching GSM data.

As far as I can tell, a cell phone only shows up on one of the frequencies at a time, so if you are going to track and count every cell phone around you, you need to listen to all the frequencies used. To listen to several frequencies, use the --numrecv argument to scan-and-livemon to use several receivers. Further, I am not sure if phones using 3G or 4G will show up as talking GSM to base stations, so this approach might not see all phones around you. I typically see 0-400 IMSI numbers an hour when looking around where I live.

I've tried to run the scanner on a Raspberry Pi 2 and 3 running Debian Buster, but the grgsm_livemon_headless process seems to be too CPU intensive to keep up. When GNU Radio prints 'O' to stdout, I am told it is caused by a buffer overflow between the radio and GNU Radio, caused by the program being unable to read the GSM data fast enough. If you see a stream of 'O's in the terminal where you started scan-and-livemon, you need to give the process more CPU power. Perhaps someone is able to optimize the code to a point where it becomes possible to set up RPi3 based GSM sniffers? I tried using Raspbian instead of Debian, but there seems to be something wrong with GNU Radio on Raspbian, causing glibc to abort().

9th August 2017

On Friday, I came across an interesting article in the Norwegian web based ICT news magazine digi.no on how to collect the IMSI numbers of nearby cell phones using cheap DVB-T software defined radios. The article referred to instructions and a recipe by Keld Norman on Youtube on how to make a simple $7 IMSI Catcher, and I decided to test them out.

The instructions said to use Ubuntu, install pip using apt (to bypass apt), use pip to install pybombs (to bypass both apt and pip), and then ask pybombs to fetch and build everything you need from scratch. I wanted to see if I could do the same with the most recent Debian packages, but this did not work because pybombs tried to build stuff that no longer builds with the most recent openssl library, or some other version skew problem. While trying to get this recipe working, I learned that the apt->pip->pybombs route was a long detour, and the only missing software dependency in Debian was the gr-gsm package. I also found out that the lead upstream developer of the gr-gsm project (the name stands for GNU Radio GSM) already had a set of Debian packages provided in an Ubuntu PPA repository. All I needed to do was to dget the Debian source package and build it.

The IMSI collector is a Python script listening for packets on the loopback network device and printing to the terminal some specific GSM packets with IMSI numbers in them. The code is fairly short and easy to understand. The reason this works is that gr-gsm includes a tool to read GSM data from a software defined radio like a DVB-T USB stick and other software defined radios, decode it and inject it into a network device on your Linux machine (using the loopback device by default). This proved to work just fine, and I've been testing the collector for a few days now.

The updated and simpler recipe is thus to

  1. start with a Debian machine running Stretch or newer,
  2. build and install the gr-gsm package available from http://ppa.launchpad.net/ptrkrysik/gr-gsm/ubuntu/pool/main/g/gr-gsm/,
  3. clone the git repository from https://github.com/Oros42/IMSI-catcher,
  4. run grgsm_livemon and adjust the frequency until the terminal where it was started is filled with a stream of text (meaning you found a GSM station), and
  5. go into the IMSI-catcher directory and run 'sudo python simple_IMSI-catcher.py' to extract the IMSI numbers.

To make it even easier in the future to get this sniffer up and running, I decided to package the gr-gsm project for Debian (WNPP #871055), and the package was uploaded into the NEW queue today. Luckily the gnuradio maintainer has promised to help me, as I do not know much about gnuradio stuff yet.

I doubt this "IMSI catcher" is anywhere near as powerful as commercial tools like The Spy Phone Portable IMSI / IMEI Catcher or the Harris Stingray, but I hope the existence of cheap alternatives can make more people realise how easily their whereabouts can be tracked when they carry a cell phone. Seeing the data flow on the screen, realizing that I live close to a police station and knowing that the police also carry cell phones, I wonder how hard it would be for criminals to track the position of the police officers to discover when there are police nearby, or for foreign military forces to track the location of the Norwegian military forces, or for anyone to track the location of government officials...

It is worth noting that the data reported by the IMSI-catcher script mentioned above is only a fraction of the data broadcasted on the GSM network. It will only collect one frequency at a time, while a typical phone will be using several frequencies, and not all phones will be using the frequencies tracked by the grgsm_livemon program. Also, there is a lot of radio chatter being ignored by the simple_IMSI-catcher script, which could be collected by extending the parser code. I wonder if gr-gsm can be set up to listen to more than one frequency?

25th July 2017

I finally received a copy of the Norwegian Bokmål edition of "The Debian Administrator's Handbook". This test copy arrived in the mail a few days ago, and I am very happy to hold the result in my hand. We spent around one and a half years translating it. This paperback edition is available from lulu.com. If you buy it quickly, you save 25% on the list price. The book is also available for download in electronic form as PDF, EPUB and Mobipocket, and can be read online as a web page.

This is the second book I publish (the first was the book "Free Culture" by Lawrence Lessig in English, French and Norwegian Bokmål), and I am very excited to finally wrap up this project. I hope "Håndbok for Debian-administratoren" will be well received.

12th June 2017

It is pleasing to see that the work we put into publishing new editions of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig, is still being appreciated. I had a look at the latest sales numbers for the paper edition today. Not too impressive, but I am happy to see some buyers still exist. All the revenue from the books is sent to the Creative Commons Corporation, and they receive the largest cut if you buy directly from Lulu. Most books are sold via Amazon, with Ingram second and only a small fraction directly from Lulu. The ebook edition is available for free from Github.

Title / language         2016 jan-jun   2016 jul-dec   2017 jan-may
Culture Libre / French              3              6             15
Fri kultur / Norwegian              7              1              0
Free Culture / English             14             27             16
Total                              24             34             31

A bit sad to see the low sales numbers for the Norwegian edition, and a bit surprising that the English edition is still selling so well.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

10th June 2017

I am very happy to report that the Nikita Noark 5 core project tagged its second release today. The free software solution is an implementation of the Norwegian archive standard Noark 5 used by government offices in Norway. These were the changes in version 0.1.1 since version 0.1.0 (from NEWS.md):

  • Continued work on the angularjs GUI, including document upload.
  • Implemented correspondencepartPerson, correspondencepartUnit and correspondencepartInternal.
  • Applied for coverity coverage and started submitting code on a regular basis.
  • Started fixing bugs reported by coverity.
  • Corrected and completed HATEOAS links to make sure entire API is available via URLs in _links.
  • Corrected all relation URLs to use trailing slash.
  • Add initial support for storing data in ElasticSearch.
  • Now able to receive and store uploaded files in the archive.
  • Changed JSON output for object lists to have relations in _links.
  • Improve JSON output for empty object lists.
  • Now uses correct MIME type application/vnd.noark5-v4+json.
  • Added support for docker container images.
  • Added simple API browser implemented in JavaScript/Angular.
  • Started on archive client implemented in JavaScript/Angular.
  • Started on prototype to show the public mail journal.
  • Improved performance by disabling Spring FileWatcher.
  • Added support for 'arkivskaper', 'saksmappe' and 'journalpost'.
  • Added support for some metadata codelists.
  • Added support for Cross-origin resource sharing (CORS).
  • Changed login method from Basic Auth to JSON Web Token (RFC 7519) style.
  • Added support for GET-ing ny-* URLs.
  • Added support for modifying entities using PUT and eTag.
  • Added support for returning XML output on request.
  • Removed support for English field and class names, limiting ourself to the official names.
  • ...

If this sounds interesting to you, please contact us on IRC (#nikita on irc.freenode.net) or email (nikita-noark mailing list).

7th June 2017

This is a copy of an email I posted to the nikita-noark mailing list. Please follow up there if you would like to discuss this topic. The background is that we are making a free software archive system based on the Norwegian Noark 5 standard for government archives.

I've been wondering a bit lately how trusted timestamps could be stored in Noark 5. Trusted timestamps can be used to verify that some information (document/file/checksum/metadata) has not been changed since a specific time in the past. This is useful to verify the integrity of the documents in the archive.

Then it occurred to me, perhaps the trusted timestamps could be stored as dokument variants (ie dokumentobjekt referred to from dokumentbeskrivelse) with the filename set to the hash it is stamping?

Given a "dokumentbeskrivelse" with an associated "dokumentobjekt", a new dokumentobjekt is associated with "dokumentbeskrivelse" with the same attributes as the stamped dokumentobjekt except these attributes:

  • format -> "RFC3161"
  • mimeType -> "application/timestamp-reply"
  • formatDetaljer -> "<source URL for timestamp service>"
  • filenavn -> "<sjekksum>.tsr"

This assumes a service following IETF RFC 3161 is used, which specifies the given MIME type for replies and the .tsr file ending for the content of such a trusted timestamp. As far as I can tell from the Noark 5 specifications, it is OK to have several variants/renderings of a dokument attached to a given dokumentbeskrivelse objekt. It might be stretching it a bit to make some of these variants represent crypto-signatures useful for verifying the document integrity instead of representing the dokument itself.

Using the source of the service in formatDetaljer allows several timestamping services to be used. This is useful to spread the risk of key compromise over several organisations. The timestamps would only become untrustworthy if all of the organisations were compromised.

The following oneliner on Linux can be used to generate the tsr file. $inputfile is the path to the file to checksum, and $sha256 is the SHA-256 checksum of the file (ie the "<sjekksum>" part of the filename mentioned above).

openssl ts -query -data "$inputfile" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
      --data-binary "@-" http://zeitstempel.dfn.de > $sha256.tsr

To verify the timestamp, you first need to download the public key of the trusted timestamp service, for example using this command:

wget -O ca-cert.txt \
  https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt

Note, the public key should be stored alongside the timestamps in the archive to make sure it is also available 100 years from now. It is probably a good idea to standardise how and where to store such public keys, to make them easier to find for those trying to verify documents 100 or 1000 years from now. :)

The verification itself is a simple openssl command:

openssl ts -verify -data $inputfile -in $sha256.tsr \
  -CAfile ca-cert.txt -text

Is there any reason this approach would not work? Is it somehow against the Noark 5 specification?

19th March 2017

The Nikita Noark 5 core project is implementing the Norwegian standard for keeping an electronic archive of government documents. The Noark 5 standard documents the requirements for data systems used by the archives in the Norwegian government, and the Noark 5 web interface specification documents a REST web service for storing, searching and retrieving documents and metadata in such an archive. I've been involved in the project since a few weeks before Christmas, when the Norwegian Unix User Group announced it supported the project. I believe this is an important project, and hope it can make it possible for the government archives in the future to use free software to keep the archives we citizens depend on. But as I do not hold such an archive myself, my first use case is personally to store and analyse public mail journal metadata published by the government. I find it useful to have a clear use case in mind when developing, to make sure the system scratches one of my itches.

If you would like to help make sure there are free software alternatives for the archives, please join our IRC channel (#nikita on irc.freenode.net) and the project mailing list.

When I got involved, the web service could store metadata about documents. But a few weeks ago, a new milestone was reached when it became possible to store full text documents too. Yesterday, I completed an implementation of a command line tool archive-pdf to upload a PDF file to the archive using this API. The tool is very simple at the moment, and finds existing fonds, series and files, asking the user to select which one to use if more than one exists. Once a file is identified, the PDF is associated with the file and uploaded, using the title extracted from the PDF itself. The process is fairly similar to visiting the archive, opening a cabinet, locating a file and storing a piece of paper in the archive. Here is a test run directly after populating the database with test data using our API tester:

~/src//noark5-tester$ ./archive-pdf mangelmelding/mangler.pdf
using arkiv: Title of the test fonds created 2017-03-18T23:49:32.103446
using arkivdel: Title of the test series created 2017-03-18T23:49:32.103446

 0 - Title of the test case file created 2017-03-18T23:49:32.103446
 1 - Title of the test file created 2017-03-18T23:49:32.103446
Select which mappe you want (or search term): 0
Uploading mangelmelding/mangler.pdf
  PDF title: Mangler i spesifikasjonsdokumentet for NOARK 5 Tjenestegrensesnitt
  File 2017/1: Title of the test case file created 2017-03-18T23:49:32.103446
~/src//noark5-tester$

You can see here how the fonds (arkiv) and series (arkivdel) only had one option, while the user needs to choose which file (mappe) to use among the two created by the API tester. The archive-pdf tool can be found in the git repository for the API tester.

In the project, I have been mostly working on the API tester so far, while getting to know the code base. The API tester currently uses the HATEOAS links to traverse the entire exposed service API and verify that the exposed operations and objects match the specification, as well as trying to create objects holding metadata and uploading a simple XML file to store. The tester has proved very useful for finding flaws in our implementation, as well as flaws in the reference site and the specification.

The test document I uploaded is a summary of all the specification defects we have collected so far while implementing the web service. There are several unclear and conflicting parts of the specification, and we have started writing down the questions we get from implementing it. We use a format inspired by how The Austin Group collects defect reports for the POSIX standard with their instructions for the MANTIS defect tracker system, for lack of an official way to structure defect reports for Noark 5 (our first submitted defect report was a request for a procedure for submitting defect reports :).

The Nikita project is implemented using Java and Spring, and is fairly easy to get up and running using Docker containers for those that want to test the current code base. The API tester is implemented in Python.

9th March 2017

Over the years, administrating thousands of NFS mounting Linux computers at a time, I often needed a way to detect if a machine was experiencing an NFS hang. If you try to use df or look at a file or directory affected by the hang, the process (and possibly the shell) will hang too. So you want to be able to detect this without risking that the detection process gets stuck too. It has not been obvious how to do this. When the hang has lasted a while, it is possible to find messages like these in dmesg:

nfs: server nfsserver not responding, still trying
nfs: server nfsserver OK

It is hard to know if the hang is still going on, and it is hard to be sure looking in dmesg is going to work. If there are lots of other messages in dmesg, the lines might have rotated out of sight before they are noticed.

While reading through the NFS client implementation in the Linux kernel code, I came across some statistics that seem to give a way to detect it. The om_timeouts sunrpc value in the kernel will increase every time the above log entry is inserted into dmesg. And after digging a bit further, I discovered that this value shows up in /proc/self/mountstats on Linux.

The mountstats content seems to be shared between files using the same file system context, so it is enough to check one of the mountstats files to get the state of the mount points for the machine. I assume this will not show lazily umounted NFS points, nor NFS mount points in a different process context (ie with a different filesystem view), but that does not worry me.

The content for an NFS mount point looks similar to this:

[...]
device /dev/mapper/Debian-var mounted on /var with fstype ext3
device nfsserver:/mnt/nfsserver/home0 mounted on /mnt/nfsserver/home0 with fstype nfs statvers=1.1
        opts:   rw,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,soft,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=129.240.3.145,mountvers=3,mountport=4048,mountproto=udp,local_lock=all
        age:    7863311
        caps:   caps=0x3fe7,wtmult=4096,dtsize=8192,bsize=0,namlen=255
        sec:    flavor=1,pseudoflavor=1
        events: 61063112 732346265 1028140 35486205 16220064 8162542 761447191 71714012 37189 3891185 45561809 110486139 4850138 420353 15449177 296502 52736725 13523379 0 52182 9016896 1231 0 0 0 0 0 
        bytes:  166253035039 219519120027 0 0 40783504807 185466229638 11677877 45561809 
        RPC iostats version: 1.0  p/v: 100003/3 (nfs)
        xprt:   tcp 925 1 6810 0 0 111505412 111480497 109 2672418560317 0 248 53869103 22481820
        per-op statistics
                NULL: 0 0 0 0 0 0 0 0
             GETATTR: 61063106 61063108 0 9621383060 6839064400 453650 77291321 78926132
             SETATTR: 463469 463470 0 92005440 66739536 63787 603235 687943
              LOOKUP: 17021657 17021657 0 3354097764 4013442928 57216 35125459 35566511
              ACCESS: 14281703 14290009 5 2318400592 1713803640 1709282 4865144 7130140
            READLINK: 125 125 0 20472 18620 0 1112 1118
                READ: 4214236 4214237 0 715608524 41328653212 89884 22622768 22806693
               WRITE: 8479010 8494376 22 187695798568 1356087148 178264904 51506907 231671771
              CREATE: 171708 171708 0 38084748 46702272 873 1041833 1050398
               MKDIR: 3680 3680 0 773980 993920 26 23990 24245
             SYMLINK: 903 903 0 233428 245488 6 5865 5917
               MKNOD: 80 80 0 20148 21760 0 299 304
              REMOVE: 429921 429921 0 79796004 61908192 3313 2710416 2741636
               RMDIR: 3367 3367 0 645112 484848 22 5782 6002
              RENAME: 466201 466201 0 130026184 121212260 7075 5935207 5961288
                LINK: 289155 289155 0 72775556 67083960 2199 2565060 2585579
             READDIR: 2933237 2933237 0 516506204 13973833412 10385 3190199 3297917
         READDIRPLUS: 1652839 1652839 0 298640972 6895997744 84735 14307895 14448937
              FSSTAT: 6144 6144 0 1010516 1032192 51 9654 10022
              FSINFO: 2 2 0 232 328 0 1 1
            PATHCONF: 1 1 0 116 140 0 0 0
              COMMIT: 0 0 0 0 0 0 0 0

device binfmt_misc mounted on /proc/sys/fs/binfmt_misc with fstype binfmt_misc
[...]

The key number to look at is the third number in the per-op list. It is the number of NFS timeouts experienced per file system operation. Here there are 22 write timeouts and 5 access timeouts. If these numbers are increasing, I believe the machine is experiencing an NFS hang. Unfortunately the timeout value does not start to increase right away. The NFS operations need to time out first, and this can take a while. The exact timeout value depends on the setup. For example the defaults for TCP and UDP mount points are quite different, and the timeout value is affected by the soft, hard, timeo and retrans NFS mount options.
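Here is a rough sketch of a detector following that recipe: parse /proc/self/mountstats and report the timeout counter for every NFS mount point. Run it twice a little apart and compare; growing counters suggest a hang. This is my own sketch, not an existing tool, and it assumes the file layout shown above.

def nfs_timeouts(path='/proc/self/mountstats'):
    timeouts = {}
    mountpoint = None
    in_per_op = False
    with open(path) as f:
        for line in f:
            words = line.split()
            if line.startswith('device '):
                # Only NFS mounts carry per-op statistics.
                mountpoint = words[4] if 'fstype nfs' in line else None
                in_per_op = False
            elif mountpoint and line.strip() == 'per-op statistics':
                in_per_op = True
            elif mountpoint and in_per_op and len(words) >= 4:
                op = words[0].rstrip(':')
                count = int(words[3])  # the third number after the op name
                if count:
                    timeouts.setdefault(mountpoint, {})[op] = count
    return timeouts

if __name__ == '__main__':
    for mountpoint, ops in sorted(nfs_timeouts().items()):
        for op, count in sorted(ops.items()):
            print('%s: %d %s timeouts' % (mountpoint, count, op))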

The only way I have been able to get working on Debian and Red Hat Enterprise Linux for getting the timeout count is to peek in /proc/. But according to the Solaris 10 System Administration Guide: Network Services, the 'nfsstat -c' command can be used to get these timeout values. But this does not work on Linux, as far as I can tell. I asked Debian about this, but have not seen any replies yet.

Is there a better way to figure out if a Linux NFS client is experiencing NFS hangs? Is there a way to detect which processes are affected? Is there a way to get the NFS mount going quickly once the network problem causing the NFS hang has been cleared? I would very much welcome some clues, as we regularly run into NFS hangs.

8th March 2017

So the new president in the United States of America claims to be surprised to discover that he was wiretapped during the election, before he was elected president. He even claims this must be illegal. Well, doh, if there is one thing the documents from Snowden confirmed, it is that the entire population of the USA is wiretapped, one way or another. Of course the presidential candidates were wiretapped, alongside the senators, judges and the rest of the people in the USA.

Next, the Federal Bureau of Investigation asks the Department of Justice to go public rejecting the claims that Donald Trump was wiretapped illegally. I fail to see the relevance, given that I am sure the surveillance industry in the USA believes it has all the legal backing it needs to conduct mass surveillance on the entire world.

There is even the director of the FBI stating that he never saw an order requesting wiretapping of Donald Trump. That is not very surprising, given how the FISA court works, with all its activity being secret. Perhaps he only heard about it?

What I find most sad in this story is how Norwegian journalists present it. In a radio news report the other day from the Norwegian National Broadcasting Company (NRK), I heard the journalist claim that 'the FBI denies any wiretapping', while the reality is that 'the FBI denies any illegal wiretapping'. There is a fundamental and important difference, and it makes me sad that the journalists are unable to grasp it.

Update 2017-03-13: Looks like The Intercept reports that US Senator Rand Paul confirms what I state above.

3rd March 2017

For almost a year now, we have been working on making a Norwegian Bokmål edition of The Debian Administrator's Handbook. Now, thanks to the tireless effort of Ole-Erik, Ingrid and Andreas, the initial translation is complete, and we are working on the proof reading to ensure consistent language and use of correct computer science terms. The plan is to make the book available on paper, as well as in electronic form. For that to happen, the proof reading must be completed and all the figures need to be translated. If you want to help out, get in touch.

A fresh PDF edition in A4 format (the final book will have smaller pages), created every morning, is available for proofreading. If you find any errors, please visit Weblate and correct the error. The state of the translation, including figures, is a useful source for those providing Norwegian Bokmål screenshots and figures.

1st March 2017

A few days ago I ordered a small batch of the ChaosKey, a small USB dongle for generating entropy created by Bdale Garbee and Keith Packard. Yesterday it arrived, and I am very happy to report that it works great! According to its designers, you need Linux kernel version 4.1 or later to get it to work out of the box. I tested on a Debian Stretch machine (kernel version 4.9), and there it worked just fine, increasing the available entropy very quickly. I wrote a small one-liner to test it. It first prints the current entropy level, drains /dev/random, and then prints the entropy level once per second for five seconds. Here is the situation without the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
300
0+1 records in
0+1 records out
28 bytes copied, 0.000264565 s, 106 kB/s
4
8
12
17
21
%

The entropy level increases by 3-4 every second. In such a case any application requiring random bits (like a HTTPS enabled web server) will halt and wait for more entropy. And here is the situation with the ChaosKey inserted:

% cat /proc/sys/kernel/random/entropy_avail; \
  dd bs=1M if=/dev/random of=/dev/null count=1; \
  for n in $(seq 1 5); do \
     cat /proc/sys/kernel/random/entropy_avail; \
     sleep 1; \
  done
1079
0+1 records in
0+1 records out
104 bytes copied, 0.000487647 s, 213 kB/s
433
1028
1031
1035
1038
%

Quite the difference. :) I bought a few more than I need, in case someone wants to buy one here in Norway. :)

Update: The dongle was presented at Debconf last year. You might find the talk recording illuminating. It explains exactly what the source of randomness is, if you are unable to spot it from the schematic drawing available from the ChaosKey web site linked at the start of this blog post.

Tags: debian, english.
21st February 2017

I just noticed that the new Norwegian proposal for government archiving rules lists ECMA-376 / ISO/IEC 29500 (aka OOXML) as a valid format to put in long term storage. Luckily such files will only be accepted based on pre-approval from the National Archive. Allowing OOXML files to be used for long term storage might seem like a good idea, as long as we forget that there are plenty of ways for a "valid" OOXML document to have content with no defined interpretation in the standard, which leads to a question and an idea.

Is there any tool to detect if an OOXML document depends on such undefined behaviour? It would be useful for the National Archive (and anyone else interested in verifying that a document is well defined) to have such a tool available when considering whether to approve the use of OOXML. I'm aware of the officeotron OOXML validator, but do not know how complete it is, nor whether it will report use of undefined behaviour. Are there other similar tools available? Please send me an email if you know of any.

Tags: english, nuug, standard.
13th February 2017

A few days ago, we received the ruling from my day in court. The case in question is a challenge of the seizure of the DNS domain popcorn-time.no. The ruling simply did not mention most of our arguments, and seemed to take everything ØKOKRIM said at face value, ignoring our demonstration and explanations. But it is hard to tell for sure, as we still have not seen most of the documents in the case and thus were unprepared and unable to contradict several of the claims made in court by the opposition. We are considering an appeal, but it is partly a question of funding, as it is costing us quite a bit to pay for our lawyer. If you want to help, please donate to the NUUG defense fund.

The details of the case, as far as we know them, are available in Norwegian from the NUUG blog. This also includes the ruling itself.

3rd February 2017

On Wednesday, I spent the entire day in court in Follo Tingrett representing the member association NUUG, alongside the member association EFN and the DNS registrar IMC, challenging the seizure of the DNS name popcorn-time.no. It was interesting to sit in a court of law for the first time in my life. Our team can be seen in the picture above: attorney Ola Tellesbø, EFN board member Tom Fredrik Blenning, IMC CEO Morten Emil Eriksen and NUUG board member Petter Reinholdtsen.

The case at hand is that the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime (aka Økokrim) decided on its own to seize a DNS domain early last year, without following the official policy of the Norwegian DNS authority, which requires a court decision. The web site in question was a site covering Popcorn Time. And Popcorn Time is the name of a technology with both legal and illegal applications. Popcorn Time is a client combining search of a Bittorrent directory available on the Internet with downloading/distributing content via Bittorrent and playing the downloaded content on screen. It can be used illegally if it is used to distribute content against the will of the rights holder, but it can also be used legally to play a lot of content, for example the millions of movies available from the Internet Archive or the collection available from Vodo. We created a video demonstrating legal use of Popcorn Time and played it in court. It can of course be downloaded using Bittorrent.

I did not quite know what to expect from a day in court. The government held on to their version of the story and we held on to ours, and I hope the judge is able to make sense of it all. We will know in two weeks time. Unfortunately I do not have high hopes, as the government has the upper hand here, with more knowledge about the case, better training in handling criminal law and in general higher standing in the courts than a fairly unknown DNS registrar and member associations. It is expensive to be right, also in Norway. So far the case has cost more than NOK 70 000,-. To help fund the case, NUUG and EFN have asked for donations, and managed to collect around NOK 25 000,- so far. Given the presentation from the government, I expect the government to appeal if the case goes our way. And if the case does not go our way, I hope we have enough funding to appeal.

From the other side came two people from Økokrim. On the benches, appearing to be part of the group from the government, were two people from the Simonsen Vogt Wiig law office, and three others whose role I am not quite sure of. Økokrim had proposed to present two witnesses from The Motion Picture Association, but this was rejected because they did not speak Norwegian and it was a bit late to bring in a translator, but perhaps the two from MPA were present anyway. All seven appeared to know each other. Good to see the case is taken seriously.

If you, like me, believe the courts should be involved before a DNS domain is hijacked by the government, or you believe the Popcorn Time technology has a lot of useful and legal applications, I suggest you too donate to the NUUG defense fund. Both Bitcoin and bank transfer are available. If NUUG gets more than we need for the legal action (very unlikely), the rest will be spent promoting free software, open standards and unix-like operating systems in Norway, so no matter what happens the money will be put to good use.

If you want to learn more about the case, I recommend you check out the blog posts from NUUG covering the case. They cover the legal arguments on both sides.

9th January 2017

Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this by using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to www.stortinget.no (85.88.67.10), 30 hops max, 60 byte packets
 1  uio-gw10.uio.no (129.240.202.1)  0.447 ms  0.486 ms  0.621 ms
 2  uio-gw8.uio.no (129.240.24.229)  0.467 ms  0.578 ms  0.675 ms
 3  oslo-gw1.uninett.no (128.39.65.17)  0.385 ms  0.373 ms  0.358 ms
 4  te3-1-2.br1.fn3.as2116.net (193.156.90.3)  1.174 ms  1.172 ms  1.153 ms
 5  he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.627 ms he16-1-1.cr2.oslosda310.as2116.net (195.0.244.48)  3.172 ms he16-1-1.cr1.san110.as2116.net (195.0.244.234)  2.857 ms
 6  ae1.ar8.oslosda310.as2116.net (195.0.242.39)  0.662 ms  0.637 ms ae0.ar8.oslosda310.as2116.net (195.0.242.23)  0.622 ms
 7  89.191.10.146 (89.191.10.146)  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *
[...]

This shows the DNS names and IP addresses of (at least some of the) network equipment involved in getting the data traffic from me to the www.stortinget.no server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes the packets do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (can do ICMP, UDP and TCP) and scapy (a Python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.
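For reference, here is how the first two can be asked to use the alternative probe types. The option names below are from the Debian versions of the tools, so check the man pages if your versions differ:

traceroute -I www.stortinget.no        # ICMP echo probes instead of UDP
mtr --report --tcp www.stortinget.no   # TCP SYN probes, one-shot report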

This time around, I wanted to know the geographic location of the different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today often will ask the browser to fetch from many servers the parts (for example HTML, JSON, fonts, JavaScript, CSS, video) required to display the content. This will leak information about the visit to those controlling these servers and anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example, the Norwegian parliament web site www.stortinget.no. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the www.stortinget.no web site will ask your browser to contact 8 other servers: ajax.googleapis.com, insights.hotjar.com, script.hotjar.com, static.hotjar.com, stats.g.doubleclick.net, www.google-analytics.com, www.googletagmanager.com and www.netigate.se. I extracted this by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.
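The extraction can be done along these lines. This is a hedged sketch: the path to netsniff.js depends on where the PhantomJS examples are installed on your system, and jq is just one convenient way to pull the URLs out of the HAR JSON:

# Record all requests made while rendering the page, in HAR format.
phantomjs examples/netsniff.js https://www.stortinget.no/ > stortinget.har
# List the unique URLs the page pulled in.
jq -r '.log.entries[].request.url' stortinget.har | sort -u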

map of combined traces for URLs used by www.stortinget.no using GeoIP

When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure would typically be placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.

scapy traceroute graph for URLs used by www.stortinget.no

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. The graph makes it possible to see that the information is made available to at least UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

example geotraceroute view for www.stortinget.no

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names, various location databases and finally latency times to rule out candidate locations seemed to do a very good job of guessing the correct geolocation. But it could only do one trace at a time, did not have a sensor in Norway and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he refused until he had time to clean it up), but he was interested in providing the geolocations in a machine readable format, and willing to set up a sensor in Norway. So since yesterday, it is possible to run traces from Norway in this service thanks to a sensor node set up by the NUUG association, and get the trace in KML format for further processing.

map of combined traces for URLs used by www.stortinget.no using geotraceroute

Here we can see that a lot of traffic passes Sweden on its way to Denmark, Germany, Holland and Ireland. Plenty of places where the Snowden confirmations verified the traffic is read by various actors without your best interest as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by www.stortinget.no, ask for the KML file from GeoTraceroute, and create a combined KML file with all the traces, as sketched below. (Unfortunately only one of the IP addresses behind each DNS name is traced this time. To get them all, one would have to request traces using IP numbers instead of DNS names from GeoTraceroute.) That might be the next step in this project.
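The loop itself is plain shell. In this sketch, fetch-kml is a hypothetical stand-in for whatever command or API call retrieves the KML trace for a host from GeoTraceroute:

for host in ajax.googleapis.com insights.hotjar.com script.hotjar.com \
            static.hotjar.com stats.g.doubleclick.net \
            www.google-analytics.com www.googletagmanager.com \
            www.netigate.se www.stortinget.no; do
    fetch-kml "$host" > "trace-$host.kml"   # hypothetical fetch command
done
# Merge the Placemark entries from the individual files into one KML
# document using a text editor or a small script.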

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses for example the Swedish border, we can be sure the Swedish signal intelligence agency (FRA) is listening, as GCHQ does in Britain and the NSA in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

4th January 2017

Do you have a large iCalendar file with lots of old entries, and would like to archive them to save space and resources? At least those of us using KOrganizer know that turning an event set on and off becomes slower and slower the more entries are in the set. While working on migrating our calendars to a Radicale CalDAV server on our Freedombox server, my loved one wondered if I could find a way to split up the calendar file she had in KOrganizer, and I set out to write a tool. I spent a few days writing and polishing the system, and it is now ready for general consumption. The code for ical-archiver is publicly available from a git repository on github. The system is written in Python and depends on the vobject Python module.

To use it, locate the iCalendar file you want to operate on and give it as an argument to the ical-archiver script. This will generate a set of new files, one file per component type per year for all components expiring more than two years in the past. The vevent, vtodo and vjournal entries are handled by the script. The remaining entries are stored in a 'remaining' file.

This is what a test run can look like:

% ical-archiver t/2004-2016.ics 
Found 3612 vevents
Found 6 vtodos
Found 2 vjournals
Writing t/2004-2016.ics-subset-vevent-2004.ics
Writing t/2004-2016.ics-subset-vevent-2005.ics
Writing t/2004-2016.ics-subset-vevent-2006.ics
Writing t/2004-2016.ics-subset-vevent-2007.ics
Writing t/2004-2016.ics-subset-vevent-2008.ics
Writing t/2004-2016.ics-subset-vevent-2009.ics
Writing t/2004-2016.ics-subset-vevent-2010.ics
Writing t/2004-2016.ics-subset-vevent-2011.ics
Writing t/2004-2016.ics-subset-vevent-2012.ics
Writing t/2004-2016.ics-subset-vevent-2013.ics
Writing t/2004-2016.ics-subset-vevent-2014.ics
Writing t/2004-2016.ics-subset-vjournal-2007.ics
Writing t/2004-2016.ics-subset-vjournal-2011.ics
Writing t/2004-2016.ics-subset-vtodo-2012.ics
Writing t/2004-2016.ics-remaining.ics
%

As you can see, the original file is untouched and new files are written with names derived from the original file. If you are happy with their content, the *-remaining.ics file can replace the original, and the others can be archived or imported as historical calendar collections.

The script should probably be improved a bit. The error handling when discovering broken entries is not good, and I am not sure yet if it makes sense to split different entry types into separate files or not. The program is thus likely to change. If you find it interesting, please get in touch. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, standard.
23rd December 2016

I received a very nice Christmas present today. As my regular readers probably know, I have been working on the Isenkram system for many years. The goal of the Isenkram system is to make it easier for users to figure out what to install to get a given piece of hardware to work in Debian, and a key part of this system is a way to map hardware to packages. Isenkram has its own mapping database, and also uses data provided by each package using the AppStream metadata format. And today, AppStream in Debian learned to look up hardware the same way Isenkram is doing it, i.e. using fnmatch():

% appstreamcli what-provides modalias \
  usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
Identifier: pymissile [generic]
Name: pymissile
Summary: Control original Striker USB Missile Launcher
Package: pymissile
% appstreamcli what-provides modalias usb:v0694p0002d0000
Identifier: libnxt [generic]
Name: libnxt
Summary: utility library for talking to the LEGO Mindstorms NXT brick
Package: libnxt
---
Identifier: t2n [generic]
Name: t2n
Summary: Simple command-line tool for Lego NXT
Package: t2n
---
Identifier: python-nxt [generic]
Name: python-nxt
Summary: Python driver/interface/wrapper for the Lego Mindstorms NXT robot
Package: python-nxt
---
Identifier: nbc [generic]
Name: nbc
Summary: C compiler for LEGO Mindstorms NXT bricks
Package: nbc
%

A similar query can be done using the combined AppStream and Isenkram databases using the isenkram-lookup tool:

% isenkram-lookup usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00
pymissile
% isenkram-lookup usb:v0694p0002d0000
libnxt
nbc
python-nxt
t2n
%

You can find modalias values relevant for your machine using cat $(find /sys/devices/ -name modalias).

If you want to make this system a success and help Debian users make the most of the hardware they have, please help add AppStream metadata for your package following the guidelines documented in the wiki. So far only 11 packages provide such information, among the several hundred hardware specific packages in Debian. The Isenkram database on the other hand contains 101 packages, mostly related to USB dongles. Most of the packages with hardware mapping in AppStream are LEGO Mindstorms related, because I have, as part of my involvement in the Debian LEGO team, given priority to making sure LEGO users get proposed the complete set of packages in Debian for that particular hardware. The team also got a nice Christmas present today. The nxt-firmware package made it into Debian. With this package in place, it is now possible to use the LEGO Mindstorms NXT unit with only free software, as the nxt-firmware package contains the source and firmware binaries for the NXT brick.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

20th December 2016

The Isenkram system I wrote two years ago to make it easier in Debian to find and install packages to get your hardware dongles to work, is still going strong. It is a system to look up the hardware present on or connected to the current system, and map the hardware to Debian packages. It can either be done using the tools in isenkram-cli or using the user space daemon in the isenkram package. The latter will notify you, when inserting new hardware, about what packages to install to get the dongle working. It will even provide a button to click on to ask packagekit to install the packages.

Here is a command line example from my Thinkpad laptop:

% isenkram-lookup  
bluez
cheese
ethtool
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tlp
tp-smapi-dkms
tp-smapi-source
tpb
%

It can also list the firmware packages providing the firmware requested by the loaded kernel modules, which in my case is an empty list, because I have all the firmware my machine needs:

% /usr/sbin/isenkram-autoinstall-firmware -l
info: did not find any firmware files requested by loaded kernel modules.  exiting
%

The last few days I had a look at several of the around 250 packages in Debian with udev rules. These seem like good candidates to install when a given hardware dongle is inserted, and I found several that should be proposed by isenkram. I have not had time to check all of them, but am happy to report that there are now 97 packages mapped to hardware by Isenkram. 11 of these packages provide hardware mapping using AppStream, while the rest are listed in the modaliases file provided in isenkram.

These are the packages with hardware mappings at the moment. The marked packages are also announcing their hardware support using AppStream, for everyone to use:

air-quality-sensor, alsa-firmware-loaders, argyll, array-info, avarice, avrdude, b43-fwcutter, bit-babbler, bluez, bluez-firmware, brltty, broadcom-sta-dkms, calibre, cgminer, cheese, colord, colorhug-client, dahdi-firmware-nonfree, dahdi-linux, dfu-util, dolphin-emu, ekeyd, ethtool, firmware-ipw2x00, fprintd, fprintd-demo, galileo, gkrellm-thinkbat, gphoto2, gpsbabel, gpsbabel-gui, gpsman, gpstrans, gqrx-sdr, gr-fcdproplus, gr-osmosdr, gtkpod, hackrf, hdapsd, hdmi2usb-udev, hpijs-ppds, hplip, ipw3945-source, ipw3945d, kde-config-tablet, kinect-audio-setup, libnxt, libpam-fprintd, lomoco, madwimax, minidisc-utils, mkgmap, msi-keyboard, mtkbabel, nbc, nqc, nut-hal-drivers, ola, open-vm-toolbox, open-vm-tools, openambit, pcgminer, pcmciautils, pcscd, pidgin-blinklight, printer-driver-splix, pymissile, python-nxt, qlandkartegt, qlandkartegt-garmin, rosegarden, rt2x00-source, sispmctl, soapysdr-module-hackrf, solaar, squeak-plugins-scratch, sunxi-tools, t2n, thinkfan, thinkfinger-tools, tlp, tp-smapi-dkms, tp-smapi-source, tpb, tucnak, uhd-host, usbmuxd, viking, virtualbox-ose-guest-x11, w1retap, xawtv, xserver-xorg-input-vmmouse, xserver-xorg-input-wacom, xserver-xorg-video-qxl, xserver-xorg-video-vmware, yubikey-personalization and zd1211-firmware

If you know of other packages, please let me know with a wishlist bug report against the isenkram-cli package, and ask the package maintainer to add AppStream metadata according to the guidelines to provide the information for everyone. In time, I hope to get rid of the isenkram specific hardware mapping and depend exclusively on AppStream.

Note, the AppStream metadata for broadcom-sta-dkms is matching too much hardware, and suggests that the package works with any ethernet card. See bug #838735 for the details. I hope the maintainer finds time to address it soon. In the meantime I provide an override in isenkram.

11th December 2016

In my early years, I played the epic game Elite on my PC. I spent many months trading and fighting in space, and reached the 'elite' fighting status before I moved on. The original Elite game was available on Commodore 64 and the IBM PC edition I played had a 64 KB executable. I am still impressed today that the authors managed to squeeze both a 3D engine and details about more than 2000 planet systems across 7 galaxies into a binary so small.

I have known about the free software game Oolite, inspired by Elite, for a while, but did not really have time to test it properly until a few days ago. It was great to discover that my old knowledge about trading routes was still valid. But my fighting and flying abilities were gone, so I had to retrain to be able to dock on a space station. And I am still not able to put up much resistance when I am attacked by pirates, so I bought and mounted the most powerful laser in the rear to be able to put up at least some resistance while fleeing for my life. :)

When playing Elite in the late eighties, I had to discover everything on my own, and I kept long lists of prices seen on different planets to be able to decide where to trade what. This time I had the advantage of the Elite wiki, where information about each planet is easily available with common price ranges and suggested trading routes. This improved my ability to earn money, and I have been able to earn enough to buy a lot of useful equipment in a few days. I believe I originally played for months before I could get a docking computer, while now I could get it after less than a week.

If you like science fiction and dreamed of a life as a vagabond in space, you should try out Oolite. It is available for Linux, MacOSX and Windows, and has been included in Debian and derivatives since 2011.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

25th November 2016

Two years ago, I did some experiments with eatmydata and the Debian installation system, observing how using eatmydata could speed up the installation quite a bit. My testing measured a speedup of around 20-40 percent for Debian Edu, where we install around 1000 packages from within the installer. The eatmydata package provides a way to disable/delay file system flushing. This is a bit risky in the general case, as files that should be stored on disk will stay only in memory a bit longer than expected, causing problems if a machine crashes at an inconvenient time. But for an installation, if the machine crashes during installation the process is normally restarted, so avoiding disk operations as much as possible to speed up the process makes perfect sense.

I added code in the Debian Edu specific installation code to enable eatmydata, but did not have time to push it any further. But a few months ago I picked it up again and worked with the libeatmydata package maintainer Mattia Rizzolo to make it easier for everyone to get this installation speedup in Debian. Thanks to our cooperation there is now an eatmydata-udeb package in Debian testing and unstable, and simply enabling/installing it in debian-installer (d-i) is enough to get the quicker installations. It can be enabled using preseeding. The following untested kernel argument should do the trick:

preseed/early_command="anna-install eatmydata-udeb"

This should ask d-i to install the package inside the d-i environment early in the installation sequence. Having it installed in d-i in turn will make sure the relevant scripts are called just after debootstrap has filled /target/ with the freshly installed Debian system, to configure apt to run dpkg with eatmydata (sketched below). This is enough to speed up the installation process. There is a proposal to extend the idea a bit further by using /etc/ld.so.preload instead of apt.conf, but I have not tested its impact.
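To illustrate the apt.conf part of the technique, here is a hedged sketch of how apt in /target can be pointed at a dpkg wrapper running under eatmydata. This is not the actual d-i code, just the general idea, and the file names are examples:

# Wrapper running dpkg with file system syncs disabled via eatmydata.
cat > /target/usr/local/sbin/dpkg-eatmydata <<'EOF'
#!/bin/sh
exec eatmydata dpkg "$@"
EOF
chmod a+rx /target/usr/local/sbin/dpkg-eatmydata
# Tell apt in the target to use the wrapper instead of plain dpkg.
echo 'Dir::Bin::dpkg "/usr/local/sbin/dpkg-eatmydata";' \
  > /target/etc/apt/apt.conf.d/00eatmydata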

13th November 2016

The Coz profiler, a nice profiler able to run benchmarking experiments on the instrumented multi-threaded program, finally made it into Debian unstable yesterday. Lluís Vilanova and I have spent many months since I blogged about the coz tool in August working with upstream to make it suitable for Debian. There are still issues with clang compatibility, inline assembly only working on x86, and minimized JavaScript libraries.

To test it, install 'coz-profiler' using apt and run it like this:

coz run --- /path/to/binary-with-debug-info

This will produce a profile.coz file in the current working directory with the profiling information. This is then given to a JavaScript application provided in the package and available from a project web page. To start the local copy, invoke it in a browser like this:

sensible-browser /usr/share/coz-profiler/viewer/index.htm

See the project home page and the USENIX ;login: article on Coz for more information on how it is working.

Tags: debian, english.
7th November 2016

A few days ago I ran a very biased and informal survey to get an idea about what options are being used to communicate with end to end encryption with friends and family. I explicitly asked people not to list options only used in a work setting. The background is the uneasy feeling I get when using Signal, a feeling shared by others, as seen in a blog post from Sander Venima about why he does not recommend Signal anymore (with feedback from the Signal author available from ycombinator). I wanted an overview of the options being used, and hope to include those options in a less biased survey later on. So far I have not taken the time to look into the individual proposed systems. They range from text sharing web pages, via file sharing and email, to instant messaging, VOIP and video conferencing. For those considering which system to use, it is also useful to have a look at the EFF Secure messaging scorecard, which is slightly out of date but still provides valuable information.

So, on to the list. There were some options used by many, some used by a few, some rarely used ones and a few mentioned but without anyone claiming to use them. Notice the grouping is in reality quite random given the biased, self-selected set of participants. First the ones used by many:

Then the ones used by a few.

Then the ones used by even fewer people.

And finally the ones mentioned but not marked as used by anyone. This might be a mistake, perhaps the person adding the entry forgot to flag it as used?

Given the network effect it seems obvious to me that we as a society have been divided and conquered by those interested in keeping encrypted and secure communication away from the masses. The finishing remarks from Aral Balkan in his talk "Free is a lie", about the usability of free software, really come into effect when you want to communicate in private with your friends and family. We cannot expect them to let the usability of a communication tool block their ability to talk to their loved ones.

Note for example the option IRC w/OTR. Most IRC clients do not have OTR support, so in most cases OTR would not be an option, even if you wanted it. In my personal experience, about 1 in 20 of those I talk to have an IRC client with OTR. For private communication to really be available, most of the people one talks to must have the option in their currently used client. I cannot simply ask my family to install an IRC client. I need to guide them through a technical multi-step process of adding extensions to the client to get them going. This is a non-starter for most.

I would like to be able to do video phone calls, audio phone calls, exchange instant messages and share files with my loved ones, without being forced to share with people I do not know. I do not want to share the content of the conversations, and I do not want to share who I communicate with or the fact that I communicate with someone. Without all these factors in place, my private life is being more or less invaded.

Update 2019-10-08: Børge Dvergsdal, who told me he is Customer Relationship Manager @ Whereby (formerly appear.in), asked if I could mention that appear.in is now renamed and found at https://whereby.com/. And sure, why not. Apparently they changed the name because they were unable to trademark appear.in somewhere... While I am at it, I can mention that Ring changed name to Jami, now available from https://jami.net/. Luckily they were able to have a direct redirect from ring.cx to jami.net, so the user experience is almost the same.

4th November 2016

A while back I received a gyro sensor for the NXT Mindstorms controller as a birthday present. It had been on my wishlist for a while, because I wanted to build a Segway like balancing lego robot. I had already built a simple balancing robot with the kids, using the light/color sensor included in the NXT kit as the balance sensor, but it was not working very well. It could balance for a while, but was very sensitive to the light conditions in the room and the reflective properties of the surface, and would fall over after a short while. I wanted something more robust, and the gyro sensor from HiTechnic I believed would solve it had been on my wishlist for some years before it suddenly showed up as a gift from my loved ones. :)

Unfortunately I have not had time to sit down and play with it since then. But that changed some days ago, when I was searching for lego segway information and came across a recipe from HiTechnic for building the HTWay, a segway like balancing robot. Build instructions and source code were included, so it was just a question of putting it all together. And thanks to the great work of many Debian developers, the compiler needed to build the source for the NXT is already included in Debian, so I was ready to go in less than an hour. The resulting robot does not look very impressive in its simplicity:

Because I lack the infrared sensor used to control the robot in the design from HiTechnic, I had to comment out the last task (taskControl). I simply placed /* and */ around it to get the program working without that sensor present. Now it balances just fine until the battery status runs low:

Now we would like to teach it how to follow a line and take remote control instructions using the included Bluetooth receiver in the NXT.

If you, like me, love LEGO and want to make sure we have the tools needed to work with LEGO in Debian and all our derivative distributions like Ubuntu, check out the LEGO designers project page and join the Debian LEGO team. Personally I own a RCX and NXT controller (no EV3), and would like to make sure the Debian tools needed to program the systems I own work as they should.

Tags: debian, english, lego, robot.
10th October 2016

In July I wrote about how to get the Signal Chrome/Chromium app working without the ability to receive SMS messages (aka without a cell phone). It is time to share some experiences and provide an updated setup.

The Signal app has worked fine for several months now, and I use it regularly to chat with my loved ones. I had a major snag at the end of my summer vacation, when the app completely forgot my setup, identity and keys. The reason behind this major mess was running out of disk space. To avoid that ever happening again I have started storing everything in userdata/ in git, to be able to roll back to an earlier version if the files are wiped by mistake. I had to use it once after introducing the git backup. When rolling back to an earlier version, one needs to use the 'reset session' option in Signal to get going, and notify the people you talk with about the problem. I assume there is some sequence number tracking in the protocol to detect rollback attacks. The git repository is rather big (674 MiB so far), but I have not tried to figure out if some of the content can be added to a .gitignore file, due to lack of spare time.
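A rollback with this setup can be done with standard git commands. A minimal sketch, where the commit id is a placeholder for whatever known-good commit you pick from the log:

cd userdata
git log --oneline           # find a known-good commit id
git checkout <commit> -- .  # restore the files from that commit

Remember the 'reset session' step in Signal afterwards, as described above.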

I've also hit the 90 day timeout blocking, and noticed that this makes it impossible to send messages using Signal. I could still receive them, but had to patch the code with a new timestamp to send. I believe the timeout is added by the developers to force people to upgrade to the latest version of the app, even when there are no protocol changes, to reduce the version skew among the user base and thus try to keep the number of support requests down.

Since my original recipe, the Signal source code has changed slightly, making the old patch fail to apply cleanly. Below is an updated patch, including the shell wrapper I use to start Signal. The original version required a new user to locate the JavaScript console and call a function from there. I got help from a friend with more JavaScript knowledge than me to modify the code to provide a GUI button instead. This means that to get started you just need to run the wrapper and click the 'Register without mobile phone' button. I've also modified the timeout code to always set it to 90 days in the future, to avoid having to patch the code regularly.

So, the updated recipe for Debian Jessie:

  1. First, install the required packages to get the source code and the browser you need. Signal only works with Chrome/Chromium, as far as I know, so you need to install it.
    apt install git tor chromium
    git clone https://github.com/WhisperSystems/Signal-Desktop.git
    
  2. Modify the source code using command listed in the the patch block below.
  3. Start Signal using the run-signal-app wrapper (for example using `pwd`/run-signal-app).
  4. Click on 'Register without mobile phone', fill in a phone number you can receive calls on within the next minute, receive the verification code and enter it into the form field and press 'Register'. Note, the phone number you use will be your Signal username, i.e. the way others can find you on Signal.
  5. You can now use Signal to contact others. Note, new contacts do not show up in the contact list until you restart Signal, and there is no way to assign names to contacts. There is also no way to create or update chat groups. I suspect this is because the web app does not have an associated contact database.

I am still a bit uneasy about using Signal, because of the way its main author moxie0 rejects federation and accepts dependencies on major corporations like Google (part of the code is fetched from Google) and Amazon (the central coordination point is owned by Amazon). See for example the LibreSignal issue tracker for a thread documenting the author's view on these issues. But the network effect is strong in this case, and several of the people I want to communicate with already use Signal. Perhaps we can all move to Ring once it works on my laptop? It already works on Windows and Android, and is included in Debian and Ubuntu, but not working on Debian Stable.

Anyway, this is the patch I apply to the Signal code to get it working. It switches to the production servers, disables the timeout, makes registration easier and adds the shell wrapper:

cd Signal-Desktop; cat <<EOF | patch -p1
diff --git a/js/background.js b/js/background.js
index 24b4c1d..579345f 100644
--- a/js/background.js
+++ b/js/background.js
@@ -33,9 +33,9 @@
         });
     });
 
-    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org';
     var SERVER_PORTS = [80, 4433, 8443];
-    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
     var messageReceiver;
     window.getSocketStatus = function() {
         if (messageReceiver) {
diff --git a/js/expire.js b/js/expire.js
index 639aeae..beb91c3 100644
--- a/js/expire.js
+++ b/js/expire.js
@@ -1,6 +1,6 @@
 ;(function() {
     'use strict';
-    var BUILD_EXPIRATION = 0;
+    var BUILD_EXPIRATION = Date.now() + (90 * 24 * 60 * 60 * 1000);
 
     window.extension = window.extension || {};
 
diff --git a/js/views/install_view.js b/js/views/install_view.js
index 7816f4f..1d6233b 100644
--- a/js/views/install_view.js
+++ b/js/views/install_view.js
@@ -38,7 +38,8 @@
             return {
                 'click .step1': this.selectStep.bind(this, 1),
                 'click .step2': this.selectStep.bind(this, 2),
-                'click .step3': this.selectStep.bind(this, 3)
+                'click .step3': this.selectStep.bind(this, 3),
+                'click .callreg': function() { extension.install('standalone') },
             };
         },
         clearQR: function() {
diff --git a/options.html b/options.html
index dc0f28e..8d709f6 100644
--- a/options.html
+++ b/options.html
@@ -14,7 +14,10 @@
         <div class='nav'>
           <h1>{{ installWelcome }}</h1>
           <p>{{ installTagline }}</p>
-          <div> <a class='button step2'>{{ installGetStartedButton }}</a> </div>
+          <div> <a class='button step2'>{{ installGetStartedButton }}</a>
+	    <br> <a class="button callreg">Register without mobile phone</a>
+
+	  </div>
           <span class='dot step1 selected'></span>
           <span class='dot step2'></span>
           <span class='dot step3'></span>
--- /dev/null   2016-10-07 09:55:13.730181472 +0200
+++ b/run-signal-app   2016-10-10 08:54:09.434172391 +0200
@@ -0,0 +1,12 @@
+#!/bin/sh
+set -e
+cd $(dirname $0)
+mkdir -p userdata
+userdata="`pwd`/userdata"
+if [ -d "$userdata" ] && [ ! -d "$userdata/.git" ] ; then
+    (cd $userdata && git init)
+fi
+(cd $userdata && git add . && git commit -m "Current status." || true)
+exec chromium \
+  --proxy-server="socks://localhost:9050" \
+  --user-data-dir=$userdata --load-and-launch-app=`pwd`
EOF
chmod a+rx run-signal-app

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

7th October 2016

The Isenkram system provides a practical and easy way to figure out which packages support the hardware in a given machine. The command line tool isenkram-lookup and the tasksel options provide a convenient way to list and install packages relevant for the current hardware during system installation, both user space packages and firmware packages. The GUI background daemon, on the other hand, provides a pop-up proposing to install packages when a new dongle is inserted while using the computer. For example, if you plug in a smart card reader, the system will ask if you want to install pcscd if that package isn't already installed, and if you plug in a USB video camera the system will ask if you want to install cheese if cheese is currently missing. This already works just fine.

But Isenkram depends on a database mapping hardware IDs to package names. When I started, no such database existed in Debian, so I made my own data set, included it with the isenkram package and made isenkram fetch the latest version of this database from git using http. This way the isenkram users would get updated package proposals as soon as I learned more about hardware related packages.

The hardware is identified using modalias strings. The modalias design is from the Linux kernel, where most hardware descriptors are made available as strings that can be matched using filename style globbing. It handles USB, PCI, DMI and a lot of other hardware related identifiers.
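To see what this looks like in practice, list the modalias strings the kernel exports for the hardware in your machine, and try a glob match yourself. A minimal sketch, using a LEGO Mindstorms style USB id as a made-up example:

# Show the modalias strings for present hardware.
cat $(find /sys/devices/ -name modalias) | sort -u | head
# Shell case patterns use the same fnmatch() style globbing.
case "usb:v0694p0002d0000dc00dsc00dp00ic03isc00ip00" in
    usb:v0694p0002*) echo "looks like a LEGO NXT brick" ;;
esac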

The downside of the Isenkram specific database is that there is no information about the relevant distribution / Debian version, making isenkram propose obsolete packages too. But along came AppStream, a cross distribution mechanism to store and collect metadata about software packages. When I heard about the proposal, I contacted the people involved and suggested adding a hardware matching rule using modalias strings in the specification, to be able to use AppStream for mapping hardware to packages. This idea was accepted and AppStream is now a great way for a package to announce the hardware it supports in a distribution neutral way. I wrote a recipe on how to add such meta-information in a blog post last December. If you have a hardware related package in Debian, please announce the relevant hardware IDs using AppStream.
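For packagers wanting to do this, the metainfo fragment involved is small. Here is a hedged sketch of the kind of entry meant, written as a shell heredoc; the component id, name and modalias glob are example values only:

# Example AppStream metainfo announcing hardware support via modalias.
cat > libnxt.metainfo.xml <<'EOF'
<component type="generic">
  <id>libnxt</id>
  <name>libnxt</name>
  <provides>
    <modalias>usb:v0694p0002d*</modalias>
  </provides>
</component>
EOF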

In Debian, almost all packages that can talk to a LEGO Mindstorms RCX or NXT unit announce this support using AppStream. The effect is that when you insert such a LEGO robot controller into your Debian machine, Isenkram will propose to install the packages needed to get it working. The intention is that this should allow the local user to start programming the robot controller right away, without having to guess what packages to use or which permissions to fix.

But when I sat down with my son the other day to program our NXT unit using his Debian Stretch computer, I discovered something annoying. The local console user (i.e. my son) did not get access to the USB device for programming the unit. This used to work, but no longer in Jessie and Stretch. After some investigation and asking around on #debian-devel, I discovered that this was because udev had changed the mechanism used to grant access to local devices. The ConsoleKit mechanism from /lib/udev/rules.d/70-udev-acl.rules no longer applied, because LDAP users were no longer added to the plugdev group during login. Michael Biebl told me that this method was obsolete and that the new method uses ACLs instead. This was good news, as the plugdev mechanism is a mess when using a remote user directory like LDAP. Using ACLs would make sure a user lost device access when she logged out, even if the user left behind a background process which would retain the plugdev membership with the ConsoleKit setup. Armed with this knowledge I moved on to fix the access problem for the LEGO Mindstorms related packages.

The new system uses a udev tag, 'uaccess'. It can either be applied directly to a device, or be applied in /lib/udev/rules.d/70-uaccess.rules for classes of devices. As the LEGO Mindstorms udev rules did not have a class, I decided to add the tag directly in the udev rules files included in the packages. Here is one example. For the nqc C compiler for the RCX, the /lib/udev/rules.d/60-nqc.rules file now looks like this:

SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="0694", ATTR{idProduct}=="0001", \
    SYMLINK+="rcx-%k", TAG+="uaccess"

The key part is the 'TAG+="uaccess"' at the end. I suspect all packages using plugdev in their /lib/udev/rules.d/ files should be changed to use this tag (either directly or indirectly via 70-uaccess.rules). Perhaps a lintian check should be created to detect this?
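One way to verify that the tag has the intended effect is to look at the ACL on the device node after plugging the device in while logged in at the console. A hedged sketch, where the rcx-* name comes from the SYMLINK rule above:

getfacl /dev/rcx-*   # should show an ACL entry for the console user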

I've been unable to find good documentation on the uaccess feature. It is unclear to me if the uaccess tag is an internal implementation detail, like the udev-acl tag used by /lib/udev/rules.d/70-udev-acl.rules. If it is, I guess the indirect method is the preferred way. Michael asked for more documentation from the systemd project, and I hope it will make this clearer. For now I use the generic classes when they exist and are already handled by 70-uaccess.rules, and add the tag directly if no such class exists.

To learn more about the isenkram system, please check out my blog posts tagged isenkram.

To help make life easier for LEGO constructors in Debian, please join us on our IRC channel #debian-lego and join the Debian LEGO team in the Alioth project we created yesterday. A mailing list is not yet created, but we are working on it. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

30th August 2016

In April we started to work on a Norwegian Bokmål edition of the "open access" book on how to set up and administrate a Debian system. Today I am happy to report that the first draft is now publicly available. You can find it on the Get the Debian Administrator's Handbook page (under Other languages). The first eight chapters have a first draft translation, and we are working on proofreading the content. If you want to help out, please start contributing using the hosted weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors. A good way to contribute is to proofread the text and update weblate if you find errors.

Our goal is still to make the Norwegian book available on paper as well as in electronic form.

11th August 2016

This summer, I read a great article "coz: This Is the Profiler You're Looking For" in USENIX ;login: about how to profile multi-threaded programs. It presented a system for profiling software by running experiments in the running program, testing how run time performance is affected by "speeding up" parts of the code to various degrees compared to a normal run. It does this by slowing down parallel threads while the "sped up" code is running and measuring how this affects processing time. The processing time is measured using probes inserted into the code, either as progress counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It can also measure unmodified code by measuring the complete program runtime and running the program several times instead.

The project and presentation were so inspiring that I would like to get the system into Debian. I created a WNPP request for it and contacted upstream to try to make the system ready for Debian by sending patches. The build process needs to be changed a bit to avoid running 'git clone' to get dependencies, and to include the JavaScript web page used to visualize the collected profiling information in the source package. But I expect that should work out fairly soon.

The way the system works is fairly simple. To run a coz experiment on a binary with debug symbols available, start the program like this:

coz run --- program-to-run

This will create a text file profile.coz with the instrumentation information. To show what part of the code affects the performance most, use a web browser and either point it to http://plasma-umass.github.io/coz/ or use the copy from git (in the gh-pages branch). Check out this web site to have a look at several example profiling runs and get an idea what the end result from the profile runs looks like. To make the profiling more useful, you include <coz.h> and insert COZ_PROGRESS or COZ_BEGIN and COZ_END at appropriate places in the code, rebuild and run the profiler. This allows coz to run more targeted experiments.

A video published by ACM presenting the Coz profiler is available from Youtube. There is also a paper from the 25th Symposium on Operating Systems Principles available titled Coz: finding code that counts with causal profiling.

The source code for Coz is available from github. It will only build with clang because it uses a C++ feature missing in GCC, but I've submitted a patch to solve it and hope it will be included in the upstream source soon.

Please get in touch if you, like me, would like to see this piece of software in Debian. I would very much like some help with the packaging effort, as I lack in-depth knowledge of how to package C++ libraries.

5th August 2016

As my regular readers probably remember, last year I published a French and Norwegian translation of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig. A bit less known is the fact that, due to the way I created the translations, using docbook and po4a, I also recreated the English original. And because I had already created a new PDF edition, I published it too. The revenue from the books is sent to the Creative Commons Corporation. In other words, I do not earn any money from this project, I just earn the warm fuzzy feeling that the text is available for a wider audience and more people can learn why the Creative Commons is needed.

Today, just for fun, I had a look at the sales numbers over at Lulu.com, which takes care of payment, printing and shipping. Much to my surprise, the English edition is selling better than both the French and Norwegian editions, despite the fact that it has been available in English since it was first published. In total, 24 paper books were sold for USD $19.99 between 2016-01-01 and 2016-07-31:

Title / language         Quantity
Culture Libre / French          3
Fri kultur / Norwegian          7
Free Culture / English         14

The books are available both from Lulu.com and from large book stores like Amazon and Barnes&Noble. Most revenue, around $10 per book, is sent to the Creative Commons project when the book is sold directly by Lulu.com. The other channels give less revenue. The summary from Lulu tells me 10 books were sold via the Amazon channel, 10 via Ingram (what is this?) and 4 directly by Lulu. And Lulu.com tells me that the revenue sent so far this year is USD $101.42. I have no idea what kind of sales numbers to expect, so I do not know if that is a good amount of sales for a 10 year old book or not. But it makes me happy that the buyers find the book, and I hope they enjoy reading it as much as I did.

The ebook edition is available for free from Github.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

1st August 2016

Did you know there is a TV channel broadcasting talks from DebConf 16 across an entire country? Or that there is a TV channel broadcasting talks by or about Linus Torvalds, Tor, OpenID, Common Lisp, Civic Tech, EFF founder John Barlow, how to make 3D printer electronics and many more fascinating topics? It works using only free software (all of it available from Github), and is administrated using a web browser and a web API.

The TV channel is the Norwegian open channel Frikanalen, and I am involved via the NUUG member association in running and developing the software for the channel. The channel is organised as a member organisation where its members can upload and broadcast what they want (think of it as Youtube for national broadcast television). Individuals can broadcast too. The time slots are handled on a first come, first served basis. Because the channel has almost no viewers and very few active members, we can experiment with TV technology without too much flak when we make mistakes. And thanks to the few active members, most of the slots on the schedule are free. I see this as an opportunity to spread knowledge about technology and free software, and have a script I run regularly to fill up all the open slots for the next few days with technology related video. The end result is a channel I like to describe as Techno TV - filled with interesting talks and presentations.

It is available on channel 50 on the Norwegian national digital TV network (RiksTV). It is also available as a multicast stream on Uninett. And finally, it is available as a WebM unicast stream from Frikanalen and NUUG. Check it out. :)

7th July 2016

Yesterday, I tried to unlock a HTC Desire HD phone, and it proved to be a slight challenge. Here is the recipe if I ever need to do it again. It all started by me wanting to try the recipe to set up a hardened Android installation from the Tor project blog on a device I had access to. It is an old mobile phone with a broken microphone. The initial idea had been to just install CyanogenMod on it, but I did not quite find time to start on it until a few days ago.

The unlock process is supposed to be simple: (1) Boot into the boot loader (press volume down and power at the same time), (2) select 'fastboot' before (3) connecting the device via USB to a Linux machine, (4) request the device identifier token by running 'fastboot oem get_identifier_token', (5) request the device unlocking key using the HTC developer web site and unlock the phone using the key file emailed to you.

Unfortunately, this only works if you have hboot version 2.00.0029 or newer, and the device I was working on had 2.00.0027. This apparently can be easily fixed by downloading a Windows program and running it on your Windows machine, if you accept the terms Microsoft require you to accept to use Windows - which I do not. So I had to come up with a different approach. I got a lot of help from AndyCap on #nuug, and would not have been able to get this working without him.

First I needed to extract the hboot firmware from the Windows binary for HTC Desire HD, downloaded as 'the RUU' from HTC. For this there is a github project named unruu, using libunshield. The unshield tool did not recognise the file format, but unruu worked and extracted rom.zip, containing the new hboot firmware and a text file describing which devices it would work for.

Next, I needed to get the new firmware into the device. For this I followed some instructions available from HTC1Guru.com, and ran these commands as root on a Linux machine with Debian testing:

adb reboot-bootloader        # get into the boot loader menu
fastboot oem rebootRUU       # reboot into ROM update mode
fastboot flash zip rom.zip   # first run only prepares the flashing
fastboot flash zip rom.zip   # second run does the actual flashing
fastboot reboot

The flash command apparently needs to be run twice to take effect, as the first run is just preparation and the second one does the flashing. The adb command is just to get to the boot loader menu, so turning the device on while holding volume down and the power button should work too.

With the new hboot version in place I could start following the instructions on the HTC developer web site. I got the device token like this:

fastboot oem get_identifier_token 2>&1 | sed 's/(bootloader) //'

And once I got the unlock code via email, I could use it like this:

fastboot flash unlocktoken Unlock_code.bin

And with that final step in place, the phone was unlocked and I could start stuffing the software of my own choosing into the device. So far I have only inserted a replacement recovery image to wipe the phone before I start. We will see what happens next. Perhaps I should install Debian on it. :)

3rd July 2016

For a while now, I have wanted to test the Signal app, as it is said to provide end-to-end encrypted communication and several of my friends and family are already using it. As I by choice do not own a mobile phone, this proved to be harder than expected. And I wanted to have the source of the client and know that it was the code used on my machine. But yesterday I managed to get it working. I used the Github source, compared it to the source in the Signal Chrome app available from the Chrome web store, applied patches to use the production Signal servers, started the app and asked for the hidden "register without a smart phone" form. Here is the recipe for how I did it.

First, I fetched the Signal desktop source from Github, using

git clone https://github.com/WhisperSystems/Signal-Desktop.git

Next, I patched the source to use the production servers, to be able to talk to other Signal users:

cat <<EOF | patch -p0
diff -ur ./js/background.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js
--- ./js/background.js  2016-06-29 13:43:15.630344628 +0200
+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/background.js    2016-06-29 14:06:29.530300934 +0200
@@ -47,8 +47,8 @@
         });
     });
 
-    var SERVER_URL = 'https://textsecure-service-staging.whispersystems.org';
-    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments-staging.s3.amazonaws.com';
+    var SERVER_URL = 'https://textsecure-service-ca.whispersystems.org:4433';
+    var ATTACHMENT_SERVER_URL = 'https://whispersystems-textsecure-attachments.s3.amazonaws.com';
     var messageReceiver;
     window.getSocketStatus = function() {
         if (messageReceiver) {
diff -ur ./js/expire.js userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js
--- ./js/expire.js      2016-06-29 13:43:15.630344628 +0200
+++ userdata/Default/Extensions/bikioccmkafdpakkkcpdbppfkghcmihk/0.15.0_0/js/expire.js    2016-06-29 14:06:29.530300934 +0200
@@ -1,6 +1,6 @@
 ;(function() {
     'use strict';
-    var BUILD_EXPIRATION = 0;
+    var BUILD_EXPIRATION = 1474492690000;
 
     window.extension = window.extension || {};
 
EOF

The first part is changing the servers, and the second is updating an expiration timestamp. This timestamp needs to be updated regularly. It is set 90 days in the future by the build process (Gruntfile.js). The value is seconds since 1970 times 1000, i.e. milliseconds since the Unix epoch, as far as I can tell.
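
A fresh value 90 days ahead can be computed from the shell like this (a small sketch assuming GNU date; appending 000 supplies the millisecond part):

echo "$(date --date='90 days' +%s)000"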

Based on a tip and good help from the #nuug IRC channel, I wrote a script to launch Signal in Chromium.

#!/bin/sh
# Start the Signal Chrome app in Chromium, using its own profile directory.
cd "$(dirname "$0")"
mkdir -p userdata
exec chromium \
  --proxy-server="socks://localhost:9050" \
  --user-data-dir="$(pwd)/userdata" --load-and-launch-app="$(pwd)"

The script starts the app and configures Chromium to use the Tor SOCKS5 proxy, to make sure those controlling the Signal servers (today Amazon and Whisper Systems), as well as those listening on the lines, will have a harder time locating my laptop based on the Signal connections, should they try to use the source IP address.

When the script starts, one needs to follow the instructions under "Standalone Registration" in the CONTRIBUTING.md file in the git repository. I right-clicked on the Signal window to bring up the Chromium debugging tool, visited the 'Console' tab and wrote 'extension.install("standalone")' on the console prompt to get the registration form. Then I entered my land line phone number and pressed 'Call'. Five seconds later the phone rang and a robot voice repeated the verification code three times. After entering the number into the verification code field in the form, I could start using Signal from my laptop.

As far as I can tell, the Signal app will leak who is talking to whom, and thus who knows whom, to those controlling the central server, but such leakage is hard to avoid with a centrally controlled server setup. It is something to keep in mind when using Signal - the content of your chats is harder to intercept, but the metadata exposing your contact network is available to people you do not know. So better than many options, but not great. And sadly the usage is connected to my land line, thus allowing those controlling the server to associate it with my home and person. I would prefer it if only those I knew could tell who I was on Signal. There are options avoiding such information leakage, but most of my friends are not using them, so I am stuck with Signal for now.

Update 2017-01-10: There is an updated blog post on this topic in Experience and updated recipe for using the Signal app without a mobile phone.

6th June 2016

When I set out a few weeks ago to figure out which multimedia player in Debian claimed to support most file formats / MIME types, I was a bit surprised by how varied the sets of MIME types the various players claimed to support were. The range was from 55 to 130 MIME types. I suspect most media formats are supported by all players, but this is not really reflected in the MimeTypes values in their desktop files. There are probably also some bogus MIME types listed, but it is hard to identify which ones they are.

Anyway, in the mean time I got in touch with upstream for some of the players suggesting to add more MIME types to their desktop files, and decided to spend some time myself improving the situation for my favorite media player VLC. The fixes for VLC entered Debian unstable yesterday. The complete list of MIME types can be seen on the Multimedia player MIME type support status Debian wiki page.

The new "best" multimedia player in Debian? It is VLC, followed by totem, parole, kplayer, gnome-mpv, mpv, smplayer, mplayer-gui and kmplayer. I am sure some of the other players desktop files support several of the formats currently listed as working only with vlc, toten and parole.

A sad observation is that only 14 MIME types are listed as supported by all the tested multimedia players in Debian in their desktop files: audio/mpeg, audio/vnd.rn-realaudio, audio/x-mpegurl, audio/x-ms-wma, audio/x-scpls, audio/x-wav, video/mp4, video/mpeg, video/quicktime, video/vnd.rn-realvideo, video/x-matroska, video/x-ms-asf, video/x-ms-wmv and video/x-msvideo. Personally I find it sad that video/ogg and video/webm are not supported by all the media players in Debian. As far as I can tell, all of them can handle both formats.

5th June 2016

Many years ago, when koffice was fresh and had few users, I decided to test its presentation tool when making the slides for a talk I was giving for NUUG on Japhar, a free Java virtual machine. I wrote the first draft of the slides, saved the result and went to bed the day before I would give the talk. The next day I took a plane to the location where the meeting was to take place, and on the plane I started up koffice again to polish the talk a bit, only to discover that kpresenter refused to load its own data file. I cursed a bit and started making the slides again from memory, to have something to present when I arrived. I tested that the saved files could be loaded, and the day seemed to be rescued. I continued to polish the slides until I suddenly discovered that the saved file could no longer be loaded into kpresenter. In the end I had to rewrite the slides three times, condensing the content until the talk became shorter and shorter. After the talk I was able to pinpoint the problem – kpresenter wrote inline images in a way it could not itself understand. Eventually that bug was fixed and kpresenter ended up being a great program to make slides. The point I'm trying to make is that we expect a program to be able to load its own data files, and it is embarrassing to its developers if it can't.

Did you ever experience a program failing to load its own data files from the desktop file browser? It is not an uncommon problem. A while back I discovered that the screencast recorder gtk-recordmydesktop would save an Ogg Theora video file the KDE file browser would refuse to open. No video player claimed to understand such a file. I tracked down the cause to file --mime-type returning the application/ogg MIME type, which no video player I had installed listed as a MIME type they would understand. I asked for file to change its behaviour and use the MIME type video/ogg instead. I also asked several video players to add video/ogg to their desktop files, to give the file browser an idea what to do about Ogg Theora files. After a while, the desktop file browsers in Debian started to handle the output from gtk-recordmydesktop properly.

But history repeats itself. A few days ago I tested the music system Rosegarden again, and I discovered that the KDE and xfce file browsers did not know what to do with the Rosegarden project files (*.rg). I've reported the rosegarden problem to the BTS and a fix is committed to git and will be included in the next upload. To increase the chance of me remembering how to fix the problem next time some program fails to load its files from the file browser, here are some notes on how to fix it.

The file browsers in Debian generally operate on MIME types. There are two sources for the MIME type of a given file: the output from file --mime-type mentioned above, and the content of the shared MIME type registry (under /usr/share/mime/). The file MIME type is mapped to programs supporting the MIME type, and this information is collected from the desktop files available in /usr/share/applications/. If there is one desktop file claiming support for the MIME type of the file, it is activated when asking to open a given file. If there are more, one can normally select which one to use by right-clicking on the file and selecting the wanted one using 'Open with' or similar. In general this works well. But it depends on each program picking a good MIME type (preferably a MIME type registered with IANA), on file and/or the shared MIME registry recognising the file, and on the desktop file listing the MIME type in its list of supported MIME types.

The /usr/share/mime/packages/rosegarden.xml entry for the Shared MIME database looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="audio/x-rosegarden">
    <sub-class-of type="application/x-gzip"/>
    <comment>Rosegarden project file</comment>
    <glob pattern="*.rg"/>
  </mime-type>
</mime-info>

This states that audio/x-rosegarden is a kind of application/x-gzip (it is a gzipped XML file). Note, it is much better to use an official MIME type registered with IANA than to make up one's own unofficial ones like the x-rosegarden type used by rosegarden.

The desktop file of the rosegarden program failed to list audio/x-rosegarden in its list of supported MIME types, causing the file browsers to have no idea what to do with *.rg files:

% grep Mime /usr/share/applications/rosegarden.desktop
MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;
X-KDE-NativeMimeType=audio/x-rosegarden-composition
%

The fix was to add "audio/x-rosegarden;" at the end of the MimeType= line.
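
With the fix in place, the line looks like this:

MimeType=audio/x-rosegarden-composition;audio/x-rosegarden-device;audio/x-rosegarden-project;audio/x-rosegarden-template;audio/midi;audio/x-rosegarden;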

If you run into a file which fails to open in the correct program when selected from the file browser, please check the output from file --mime-type for the file, ensure the file ending and MIME type are registered somewhere under /usr/share/mime/, and check that some desktop file under /usr/share/applications/ claims support for this MIME type, as shown below. If not, please report a bug to have it fixed. :)
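
In practice the checks boil down to a few commands. Here is a sketch, using a hypothetical file example.rg:

# What MIME type do the two sources report for the file?
file --mime-type example.rg
xdg-mime query filetype example.rg

# Which desktop files claim support for the reported MIME type?
grep -l audio/x-rosegarden /usr/share/applications/*.desktop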

Tags: debian, english.
28th May 2016

A little more than 11 years ago, one of the creators of Tor, and the current President of the Tor project, Roger Dingledine, gave a talk for the members of the Norwegian Unix User group (NUUG). A video of the talk was recorded, and today, thanks to the great help from David Noble, I finally was able to publish the video of the talk on Frikanalen, the Norwegian open channel TV station where NUUG currently publishes its talks. You can watch the live stream using a web browser with WebM support, or check out the recording on the video on demand page for the talk "Tor: Anonymous communication for the US Department of Defence...and you.".

Here is the video included for those of you using browsers with HTML video and Ogg Theora support:

I guess the gist of the talk can be summarised quite simply: If you want to help the military in USA (and everyone else), use Tor. :)

25th May 2016

The isenkram system is a user-focused solution in Debian for handling hardware related packages. The idea is to have a database of mappings between hardware and packages, and pop up a dialog suggesting that the user install the packages needed to use a given hardware dongle. Some use cases are: when you insert a Yubikey, it proposes to install the software needed to control it; when you insert a braille reader, it proposes to install the packages needed to send text to the reader; and when you insert a ColorHug screen calibrator, it suggests installing the driver for it. The system works well, and even has a few command line tools to install firmware packages and packages for the hardware already in the machine (as opposed to hotpluggable hardware).

The system was initially written using aptdaemon, because I found good documentation and example code on how to use it. But aptdaemon is going away and is generally being replaced by PackageKit, so Isenkram needed a rewrite. And today, thanks to the great patch from my colleague Sunil Mohan Adapa in the FreedomBox project, the rewrite finally took place. I've just uploaded a new version of Isenkram into Debian Unstable with the patch included, and the default for the background daemon is now to use PackageKit. To check it out, install the isenkram package, insert some hardware dongle and see if it is recognised.

If you want to know what kind of packages isenkram would propose for the machine it is running on, you can check out the isenkram-lookup program. This is what it looks like on a Thinkpad X230:

% isenkram-lookup 
bluez
cheese
fprintd
fprintd-demo
gkrellm-thinkbat
hdapsd
libpam-fprintd
pidgin-blinklight
thinkfan
tleds
tp-smapi-dkms
tp-smapi-source
tpb
%

The hardware mappings come from several places. The preferred way is for packages to announce their hardware support using the cross distribution appstream system. See previous blog posts about isenkram to learn how to do that.

23rd May 2016

Yesterday I updated the battery-stats package in Debian with a few patches sent to me by skilled and enterprising users. There were some nice, user-visible changes. First of all, both desktop menu entries now work. A design flaw in one of the scripts made the history graph fail to show up (its PNG was dumped in ~/.xsession-errors) if no controlling TTY was available. The script worked when called from the command line, but not when called from the desktop menu. I changed this to look for a DISPLAY variable or a TTY before deciding where to draw the graph, and now the graph window pops up as expected.

The next new feature is a discharge rate estimator in one of the graphs (the one showing the last few hours). New is also the use of colours, showing charging in blue and discharging in red. The percentages in this graph are relative to the last full charge, not the battery design capacity.

The other graph shows the entire history of the collected battery statistics, comparing it to the design capacity of the battery to visualise how the battery lifetime gets shorter over time. The red line in this graph is what the previous graph considers 100 percent:

In this graph you can see that I only charge the battery to 80 percent of last full capacity, and how the capacity of the battery is shrinking. :(

The last new feature is in the collector, which now will handle more hardware models. On some hardware, Linux power supply information is stored in /sys/class/power_supply/ACAD/, while the collector previously only looked in /sys/class/power_supply/AC/. Now both are checked to figure out whether there is power connected to the machine.
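
The check itself is simple. Here is a minimal sketch of the idea, not the exact collector code:

for ac in /sys/class/power_supply/AC /sys/class/power_supply/ACAD; do
    # the "online" file contains 1 when mains power is connected
    [ -e "$ac/online" ] && cat "$ac/online"
done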

If you are interested in how your laptop battery is doing, please check out the battery-stats package in Debian unstable, or rebuild it on Jessie to get it working on Debian stable. :) The upstream source is available from github. Patches are very welcome.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: debian, english.
21st May 2016

A few weeks ago, Culture Libre, the French paperback edition of Lawrence Lessig's 2004 book Free Culture, was published. Today I noticed that the book is now available from book stores. You can now buy it from Amazon ($19.99), Barnes & Noble ($?) and as always from Lulu.com ($19.99). The revenue is donated to the Creative Commons project. If you buy from Lulu.com, they currently get $10.59, while if you buy from one of the book stores, most of the revenue goes to the book store and the Creative Commons project gets much less (I am not sure how much less).

I was a bit surprised to discover that there is a Kindle edition sold by Amazon Digital Services LLC on Amazon. Not quite sure how that edition was created, but if you want to download an electronic edition (PDF, EPUB, Mobi) generated from the same files used to create the paperback edition, they are available from github.

19th May 2016

I just donated to the NUUG defence fund to finance the effort in Norway to get the seizure of the news site popcorn-time.no tested in court. I hope everyone that agrees with me will do the same.

Would you be worried if you knew the police in your country could hijack the DNS domains of news sites covering a free software system without talking to a judge first? I am. What if the free software system combined search engine lookups, bittorrent downloads and video playout and was called Popcorn Time? Would that affect your view? It still makes me worried.

In March 2016, the Norwegian police seized the DNS domain popcorn-time.no (as in, forced NORID to change the IP address it points to, to one controlled by the police), without any supervision from the courts. I did not know about the web site back then, and assumed the courts had been involved, and was very surprised when I discovered that the police had hijacked the DNS domain without asking a judge for permission first. I was even more surprised when I had a look at the web site content on the Internet Archive, and only found news coverage about Popcorn Time, not any material published without the rights holders' permission.

The seizure was widely covered in the Norwegian press (see for example Hegnar Online and ITavisen and NRK), at first due to the press release sent out by Økokrim, but then based on protests from the law professor Olav Torvund and lawyer Jon Wessel-Aas. It even got some coverage on TorrentFreak.

I wrote about the case a month ago, when the Norwegian Unix User Group (NUUG), where I am an active member, decided to ask the courts to test this seizure. The request was denied, but NUUG and its co-requestor EFN have not given up, and now they are rallying for support to get the seizure legally challenged. They accept both bank and Bitcoin transfers from those that want to support the request.

If you, like me, believe news sites about free software should not be censored, even if the free software has both legal and illegal applications, and that DNS hijacking should be tested by the courts, I suggest you show your support by donating to NUUG.

12th May 2016

Today, after many years of hard work from many people, ZFS for Linux finally entered Debian. The package status can be seen on the package tracker for zfs-linux and the team status page. If you want to help out, please join us. The source code is available via git on Alioth. It would also be great if you could help out with the dkms package, as it is an important piece of the puzzle to get ZFS working.

Tags: debian, english.
8th May 2016

Where I set out to figure out which multimedia player in Debian claim support for most file formats.

A few years ago, I had a look at the media support for browser plugins in Debian, to get an idea which plugins to include in Debian Edu. I created a script to extract the set of supported MIME types for each plugin, and used this to find out which multimedia browser plugin supported most file formats / media types. The result can still be seen on the Debian wiki, even though it has not been updated for a while. But browser plugins are less relevant these days, so I thought it was time to look at standalone players.

A few days ago I got tired of VLC not being listed as a viable player when I wanted to play videos from the Norwegian National Broadcasting Company, and decided to investigate why. The cause is a missing MIME type in the VLC desktop file. In the process I wrote a script to compare the set of MIME types announced in the desktop file and the browser plugin, only to discover that there is quite a large difference between the two for VLC. This discovery made me dig up the script I used to compare browser plugins, and adjust it to compare desktop files instead, to try to figure out which multimedia player in Debian supports most file formats.

The result can be seen on the Debian Wiki, as a table listing all MIME types supported by at least one of the packages included in the table, with the package supporting most MIME types listed first.

The best multimedia player in Debian? It is totem, followed by parole, kplayer, mpv, vlc, smplayer, mplayer-gui, gnome-mpv and kmplayer. Time for the other players to update their announced MIME support?

4th May 2016
A friend of mine made me aware of The Pyra, a handheld computer which will be delivered with Debian preinstalled. I would love to get one of those for my birthday. :)

The machine is a complete ARM-based PC with micro HDMI, SATA, USB plugs and many other connectors, and includes a full keyboard and a 5" LCD touch screen. The 6000mAh battery is claimed to provide a whole day of battery life, but I have not seen any independent tests confirming this. The vendor is still collecting preorders, and the last I heard last night was that 22 more orders were needed before production started.

As far as I know, this is the first handheld preinstalled with Debian. Please let me know if you know of any others. Is it the first computer being sold with Debian preinstalled?

Tags: debian, english.
18th April 2016

It is days like today I am really happy to be a member of the Norwegian Unix User group, a member association for those of us believing in free software, open standards and unix-like operating systems. NUUG announced today it will try to have the seizure of the DNS domain popcorn-time.no declared unlawful, to stand up for the principle that writing about a controversial topic is not infringing copyright, and that censoring web pages by hijacking DNS domains should be decided by the courts, not the police. The DNS domain was seized by the Norwegian National Authority for Investigation and Prosecution of Economic and Environmental Crime a month ago. I hope this brings more paying members to NUUG, giving the association the financial muscle needed to bring this case as far as it must go to stop this kind of DNS hijacking.

13th April 2016

I first got to know I.F. Stone when I came across an article by Jon Schwarz on The Intercept about his extraordinary contribution to investigative journalism in the USA. The article is about a new documentary in two parts (part one is 12 minutes and part two is 30 minutes), and I found both truly fascinating. It is amazing what he was able to find by digging through public sources and government papers. He documented lots of government abuse and cover-ups, and I find his weekly newsletters inspiring to read even today.

All governments are run by liars and nothing they say should be believed.
- I. F. Stone

His starting point was that reporters should not assume governments and corporations are telling the truth, but verify all their claims as much as possible. I wonder how many Norwegian reporters can be said to follow the principles of I. F. Stone. They are definitely in short supply. If you, like me half a year ago, have never heard of him, check him out.

12th April 2016

I'm happy to report that the French paperback edition of my project to translate the Free Culture book by Lawrence Lessig is now available for sale on Lulu.com. Once I have formally verified my proof reading copy, which should be in the mail, the paperback edition should be available in book stores like Amazon and Barnes & Noble too.

This French edition, Culture Libre, is the work of the dblatex developer Benoît Guillon, who created the PO file from the initial translation available from the Wikilivres wiki pages, completed and corrected the translation to match the original docbook edition my project is using, and coordinated the proof reading of the final result. I believe the end result looks great, but I am biased and do not read French. In addition to the paperback edition, the book is available in PDF, EPUB and Mobi format from the github project page linked to above.

When enabling book store distribution on Lulu.com, I had to nearly triple the price to allow the book stores some profit. I also had to accept that I will get some revenue when a book is sold via Lulu.com. But because of the non-commercial clause in the book license (CC-BY-NC), this might be a problem. To bypass the problem I discussed how to handle the revenue with the author, and we agreed that the revenue for these editions goes to the Creative Commons non-profit corporation which handles donations to the Creative Commons project. So far they have earned around USD 70 on sales of the English and Norwegian Bokmål editions, according to Lulu.com. They will get the revenue for the French edition too. Their revenue is higher if you buy the book directly from Lulu.com instead of via a book store, so I recommend you buy directly from Lulu.com.

Perhaps you would like to get the book published in your language? The translation is done using a web based translation service, so the technical bar to entry is fairly low. Get in touch if you would like to make this happen.

10th April 2016

During this weekend's bug squashing party and developer gathering, we decided to do our part to make sure there are good books about Debian available in Norwegian Bokmål, and got in touch with the people behind the Debian Administrator's Handbook project to get started. If you want to help out, please start contributing using the hosted weblate project page, and get in touch using the translators mailing list. Please also check out the instructions for contributors.

The book is already available on paper in English, French and Japanese, and our goal is to get it available on paper in Norwegian Bokmål too. In addition to the paper edition, there are also EPUB and Mobi versions available. And there are incomplete translations available for many more languages.

7th April 2016

Just for fun I had a look at the popcon numbers for ZFS related packages in Debian, and was quite surprised by what I found. I use ZFS myself at home, but did not really expect many others to do so. But I might be wrong.

According to the popcon results for spl-linux, there are 1019 Debian installations, or 0.53% of the population, with the package installed. As far as I know the only use of the spl-linux package is as a support library for ZFS on Linux, so I use it here as a proxy for measuring the number of ZFS installations on Linux in Debian. In the kFreeBSD variant of Debian the ZFS feature is already available, and there the popcon results for zfsutils show 1625 Debian installations, or 0.84% of the population. So I guess I am not alone in using ZFS on Debian.

But even though the Debian project leader Lucas Nussbaum announced in April 2015 that the legal obstacles blocking ZFS on Debian were cleared, the package is still not in Debian. The package is again in the NEW queue. Several uploads have been rejected so far because the debian/copyright file was incomplete or wrong, but there is no reason to give up. The current status can be seen on the team status page, and the source code is available on Alioth.

As I want ZFS to be included in the next version of Debian to make sure my home server can function in the future using only official Debian packages, and the current blocker is to get the debian/copyright file accepted by the FTP masters in Debian, I decided a while back to try to help out the team. This was the background for my blog post about creating, updating and checking debian/copyright semi-automatically, and I used the techniques I explored there to try to find any errors in the copyright file. It is not very easy to check every one of the around 2000 files in the source package, but I hope we got it right this time. If you want to help out, check out the git source and try to find missing entries in the debian/copyright file.

Tags: debian, english.
2nd April 2016

Two years ago, I had a look at trusted timestamping options available, and among other things noted a still open bug in the tsget script included in openssl that made it harder than necessary to use openssl as a trusted timestamping client. A few days ago I was told the Norwegian government office DIFI is close to releasing their own trusted timestamp service, and in the process I was happy to learn about a replacement for the tsget script using only curl:

openssl ts -query -data "/etc/shells" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
         --data-binary "@-" http://zeitstempel.dfn.de > etc-shells.tsr
openssl ts -reply -text -in etc-shells.tsr

This produces a binary timestamp file (etc-shells.tsr) which can be used to verify that the content of the file /etc/shells with the calculated sha256 hash existed at the point in time when the request was made. The last command extracts the content of etc-shells.tsr in human readable form. The idea behind such a timestamp is to be able to prove, using cryptography, that the content of a file has not changed since the file was stamped.

To verify that the file on disk matches the public key signature in the timestamp file, run the following commands. They make sure you have the required certificate for the trusted timestamp service available and use it to compare the file content with the timestamp. In production, one should of course use a better method to verify the service certificate.

wget -O ca-cert.txt https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
openssl ts -verify -data /etc/shells -in etc-shells.tsr -CAfile ca-cert.txt -text

Wikipedia has a lot more information about trusted timestamping and linked timestamping, and there are several trusted timestamping services around, both commercial services and free public services. Among the latter are the zeitstempel.dfn.de service mentioned above and the freetsa.org service linked to from the Wikipedia web site. I believe the DIFI service should show up on https://tsa.difi.no, but it is not available to the public at the moment. I hope this will change when it goes into production. The RFC 3161 trusted timestamping protocol standard is even implemented in LibreOffice, Microsoft Office and Adobe Acrobat, making it possible to verify when a document was created.

I would find it useful to be able to use such a trusted timestamp service to make it possible to verify that my stored syslog files have not been tampered with. This is not a new idea. I found one example implemented on the Endian network appliances, where the configuration of such a feature was described in 2012.

But I could not find any free implementation of such a feature when I searched, so I decided to try to build a prototype named syslog-trusted-timestamp. My idea is to generate a timestamp of the old log files after they are rotated, and store the timestamp in the new log file just after rotation. This will form a chain making it possible to see if any old log files have been tampered with. But syslog is bad at handling kilobytes of binary data, so I decided to base64 encode the timestamp and add an ID and line sequence numbers to the base64 data, to make it possible to reassemble the timestamp file again. To use it, simply run it like this:

syslog-trusted-timestamp /path/to/list-of-log-files

This will fetch a timestamp from one or more timestamp services (which ones is not yet decided nor implemented) for each listed file, and send it to the syslog using logger(1). To verify the timestamp, the same program is used with the --verify option:

syslog-trusted-timestamp --verify /path/to/log-file /path/to/log-with-timestamp

The verification step is not yet well designed. The current implementation depends on the file path being unique and unchanging, and this is not a solid assumption. It also uses the process number as timestamp ID, which is bound to create ID collisions. I hope to have time to come up with a better way to handle timestamp IDs and verification later.
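
To illustrate the encoding idea, here is a rough, hypothetical sketch of how a timestamp reply can be chunked into syslog friendly lines. It is not the prototype code itself:

logfile=/var/log/syslog.1   # example rotated log file
id=$$                       # the prototype uses the process number as ID
openssl ts -query -data "$logfile" -cert -sha256 -no_nonce \
  | curl -s -H "Content-Type: application/timestamp-query" \
         --data-binary "@-" http://zeitstempel.dfn.de \
  | base64 \
  | nl -ba \
  | while read seq line; do
      logger -t trusted-timestamp "$id $seq $line"
    done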

Please check out the prototype for syslog-trusted-timestamp on github and send suggestions and improvements, or let me know if a similar system for timestamping logs already exists, to allow me to join forces with others with the same interest.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Tags: english, sikkerhet.
23rd March 2016

Since this morning, the battery-stats package in Debian includes an extended collector that will collect the complete battery history for later processing and graphing. The original collector stores the battery level as a percentage of the last full level, while the new collector also records battery vendor, model, serial number, design full level, last full level and current battery level. This makes it possible to predict the lifetime of the battery as well as visualise the energy flow when the battery is charging or discharging.
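
The extra values come from the kernel power supply interface in /sys. Here is a rough illustration of where to find them, assuming a battery exposed as BAT0 (some machines use charge_* instead of energy_* files):

B=/sys/class/power_supply/BAT0
cat "$B/manufacturer" "$B/model_name" "$B/serial_number"
cat "$B/energy_full_design" "$B/energy_full" "$B/energy_now"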

The new tools are available in /usr/share/battery-stats/ in the version 0.5.1 package in unstable. Get the new battery level graph and lifetime prediction by running:

/usr/share/battery-stats/battery-stats-graph /var/log/battery-stats.csv

Or select the 'Battery Level Graph' from your application menu.

The flow in/out of the battery can be seen by running (no menu entry yet):

/usr/share/battery-stats/battery-stats-graph-flow

I'm not quite happy with the way the data is visualised, at least when there are few data points. The graphs look a bit better with a few years of data.

A while back, one important feature I use in the battery stats collector broke in Debian. The scripts in /usr/lib/pm-utils/power.d/ were no longer executed. I suspect it happened when Jessie started using systemd, but I do not know. The issue is reported as bug #818649 against pm-utils. I managed to work around it by adding a udev rule to call the collector script every time the power connector is connected or disconnected. With this fix in place it was finally time to make a new release of the package, and get it into Debian.

If you are interested in how your laptop battery is doing, please check out the battery-stats package in Debian unstable, or rebuild it on Jessie to get it working on Debian stable. :) The upstream source is available from github. As always, patches are very welcome.

Tags: debian, english.
19th March 2016

Back in 2013 I proposed a way to make paper and PDF invoices easier to process electronically by adding a QR code with the key information about the invoice. I suggested using the vCard field definitions, to get some standard format for name and address, but any format would work. I did not do anything about the proposal, but hoped someone would one day make something like it. It would make it possible to efficiently send machine readable invoices directly between seller and buyer.

This was the background when I came across a proposal and specification called UsingQR from the web based accounting and invoicing supplier Visma in Sweden. Their PDF invoices contain a QR code with the key information of the invoice in JSON format. This is the typical content of a QR code following the UsingQR specification (based on a real world example, with some numbers replaced to get a more bogus entry). I've reformatted the JSON to make it easier to read. Normally this is all on one long line:

{
 "vh":500.00,
 "vm":0,
 "vl":0,
 "uqr":1,
 "tp":1,
 "nme":"Din Leverandør",
 "cc":"NO",
 "cid":"997912345 MVA",
 "iref":"12300001",
 "idt":"20151022",
 "ddt":"20151105",
 "due":2500.0000,
 "cur":"NOK",
 "pt":"BBAN",
 "acc":"17202612345",
 "bc":"BIENNOK1",
 "adr":"0313 OSLO"
}

The interpretation of the fields can be found in the format specification (revision 2 from June 2014). The format seems to have most of the information needed to handle accounting and payment of invoices, at least the fields I have needed so far here in Norway.
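
For the curious, the QR payload can be extracted from an invoice using standard tools. A sketch, assuming the zbar-tools and poppler-utils packages and a hypothetical invoice.pdf:

pdftoppm -png -r 300 invoice.pdf page   # render the PDF pages as page-*.png
zbarimg --quiet --raw page-1.png        # decode the QR code and print the JSON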

Unfortunately, the site and document do not mention anything about the patent, trademark and copyright status of the format and the specification. Because of this, I asked the people behind it back in November to clarify. Ann-Christine Savlid (ann-christine.savlid (at) visma.com) replied that Visma had not applied for patent or trademark protection for this format, and that there were no copyright based usage limitations for the format. I urged her to make sure this was explicitly written on the web pages and in the specification, but unfortunately this has not happened yet. So I guess if there are submarine patents, hidden trademarks or a will to sue for copyright infringement, those starting to use the UsingQR format might be at risk, but if this happens there is some legal defense in the fact that the people behind the format claimed it was safe to use. At least with patents, there is always a chance of getting sued...

I also asked if they planned to maintain the format in an independent standards organisation, to give others more confidence that they could participate in the standardisation process on equal terms with Visma, but they had no immediate plans for this. Their plan was to work with banks to try to get more users of the format, and evaluate the way forward if the format proved to be popular. I hope they conclude that an open standards organisation like the IETF is the correct place to maintain such a specification.

Update 2016-03-20: Via Twitter I became aware of some comments about this blog post with several useful links and references to similar systems. In the Czech Republic, the Czech Banking Association standard #26, with the short name SPAYD, uses QR codes with payment information. More information is available from the Wikipedia page on Short Payment Descriptor. And in Germany, there is a system named BezahlCode (specification v1.8 2013-12-05 available as PDF), which uses QR codes with URL-like formatting using "bank:" as the URI scheme/protocol to provide the payment information. There is also the ZUGFeRD file format that perhaps could be transferred using QR codes, but I am not sure if it is done already. Last, in Bolivia there are reports that tax information since November 2014 needs to be printed in QR format on invoices. I have not been able to track down a specification for this format, because of my limited language skills.

Tags: english, standard.
15th March 2016

Back in September, I blogged about the system I wrote to collect statistics about my laptop battery, and how it showed the decay and death of this battery (now replaced). I created a simple deb package to handle the collection and graphing, but did not want to upload it to Debian, as there was already a battery-stats package in Debian that should do the same thing, and I did not see a point in uploading a competing package when battery-stats could be fixed instead. I reported a few bugs about its non-functioning, and hoped someone would step in and fix it. But no-one did.

I got tired of waiting a few days ago, and took matters into my own hands. The end result is that I am now the new upstream developer of battery stats (available from github) and part of the team maintaining battery-stats in Debian, and the package in Debian unstable is finally able to collect battery status using the /sys/class/power_supply/ information provided by the Linux kernel. If you install the battery-stats package from unstable now, you will be able to get a graph of the current battery fill level, to get some idea about the status of the battery. The source package builds and works just fine in Debian testing and stable (and probably oldstable too, but I have not tested). The default graph you get for that system looks like this:

My plan for the future is to merge my old scripts into the battery-stats package, as they collected a lot more details about the battery. The scripts are merged into the upstream battery-stats git repository already, but I am not convinced they work yet, as I changed a lot of paths along the way. I will have to test a bit more before I make a new release.

I will also consider changing the file format slightly, as I suspect the way I combine several values into one field might make it impossible to know the type of the value when using it for processing and graphing.

If you would like to keep a close eye on your laptop battery, check out the battery-stats package in Debian and on github. I would love some help to improve the system further.

Tags: debian, english.
19th February 2016

Making packages for Debian requires quite a lot of attention to detail. And one of the details is the content of the debian/copyright file, which should list all relevant licenses used by the code in the package in question, preferably in the machine readable DEP5 format.

For large packages with lots of contributors it is hard to write and update this file manually, and if you get some detail wrong, the package is normally rejected by the ftpmasters. So getting it right the first time around gets the package into Debian faster, and saves both you and the ftpmasters some work. Today, while trying to figure out what was wrong with the zfsonlinux copyright file, I decided to spend some time on figuring out the options for doing this job automatically, or at least semi-automatically.

Luckily, there are at least two tools available for generating the file based on the code in the source package, debmake and cme. I'm not sure which one of them came first, but both seem to be able to create a sensible draft file. As far as I can tell, neither of them can be trusted to get the result just right, so the content needs to be polished a bit before the file is OK to upload. I found the debmake option in a blog post from 2014.

To generate using debmake, use the -cc option:

debmake -cc > debian/copyright

Note there are some problems with python and non-ASCII names, so this might not be the best option.

The cme option is based on a config parsing library, and I found this approach in a blog post from 2015. To generate using cme, use the 'update dpkg-copyright' option:

cme update dpkg-copyright

This will create or update debian/copyright. The cme tool seems to handle UTF-8 names better than debmake.

When the copyright file is created, I would also like some help to check if the file is correct. For this I found two good options, debmake -k and license-reconcile. The former seems to focus on license types and file matching, and is able to detect ineffective blocks in the copyright file. The latter reports missing copyright holders and years, but was confused by inconsistent license names (like CDDL vs. CDDL-1.0). I suspect it is good to use both and fix all issues reported by them before uploading. But I do not know if the tools and the ftpmasters agree on what is important to fix in a copyright file, so the package might still be rejected.
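
Both checkers are run from the top of the unpacked source tree, something like this:

debmake -k
license-reconcile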

The devscripts tool licensecheck deserves mentioning. It will read through the source and try to find all copyright statements. It does not compare the result to the content of debian/copyright, but can be useful when verifying the content of the copyright file.

Are you aware of better tools in Debian to create and update the debian/copyright file? Please let me know, or blog about it on planet.debian.org.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2016-02-20: I got a tip from Mike Gabriel on how to use licensecheck and cdbs to create a draft copyright file

licensecheck --copyright -r `find * -type f` | \
  /usr/lib/cdbs/licensecheck2dep5 > debian/copyright.auto

He mentioned that he normally check the generated file into the version control system to make it easier to discover license and copyright changes in the upstream source. I will try to do the same with my packages in the future.

Update 2016-02-21: The cme author recommended against using -quiet for new users, so I removed it from the proposed command line.

Tags: debian, english.
4th February 2016

The appstream system is taking shape in Debian, and one provided feature is a very convenient way to tell you which package to install to make a given firmware file available when the kernel is looking for it. This can be done using apt-file too, but that is for someone else to blog about. :)

Here is a small recipe to find the package with a given firmware file. In this example I am looking for ctfw-3.2.3.0.bin, randomly picked from the set of firmware announced using appstream in Debian unstable. In general you would be looking for the firmware requested by the kernel during kernel module loading. To find the package providing the example file, do it like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides firmware:runtime ctfw-3.2.3.0.bin | \
  awk '/Package:/ {print $2}'
firmware-qlogic
%

See the appstream wiki page to learn how to embed the package metadata in a way appstream can use.

This same approach can be used to find any package supporting a given MIME type. This is very useful when you get a file you do not know how to handle. First find the MIME type using file --mime-type, and next look up the package providing support for it. Let's say you got an SVG file. Its MIME type is image/svg+xml, and you can find all packages handling this type like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides mimetype image/svg+xml | \
  awk '/Package:/ {print $2}'
bkchem
phototonic
inkscape
shutter
tetzle
geeqie
xia
pinta
gthumb
karbon
comix
mirage
viewnior
postr
ristretto
kolourpaint4
eog
eom
gimagereader
midori
%
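
The two steps can also be combined into a single pipeline when you have a file of unknown type at hand (mystery-file is just a stand-in name):

appstreamcli what-provides mimetype "$(file --brief --mime-type mystery-file)" | \
  awk '/Package:/ {print $2}'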

I believe the MIME types are fetched from the desktop file for packages providing appstream metadata.

Tags: debian, english.
24th January 2016

Most people seem not to realise that every time they walk around with the computerised radio beacon known as a mobile phone, their position is tracked by the phone company and often stored for a long time (like every time an SMS is received or sent). And if their computerised radio beacon is capable of running programs (often called mobile apps) downloaded from the Internet, these programs are often also capable of tracking their location (if the app requested access during installation). And when these programs send out information to central collection points, the location is often included, unless extra care is taken not to send it. The provided information is used by several entities, for good and bad (what is good and bad depends on your point of view). What is certain is that the private sphere and the right to free movement are challenged and perhaps even eradicated for those announcing their location this way, when they share their whereabouts with private and public entities.

The phone company logs provide a register of locations to check out when one wants to figure out what the tracked person was doing. It is unavailable for most of us, but provided to selected government officials, company staff, those illegally buying information from unfaithful servants and crackers stealing the information. But public information can be collected and analysed, and a free software tool to do so is called Creepy or Cree.py. I discovered it when I read an article about Creepy in the Norwegian newspaper Aftenposten in November 2014, and decided to check if it was available in Debian. The python program was in Debian, but the version there was completely broken and practically unmaintained. I uploaded a new version which did not work quite right, but did not have time to fix it then. This Christmas I decided to finally try to get Creepy operational in Debian. Now a fixed version is available in Debian unstable and testing, and almost all Debian specific patches are now included upstream.

The Creepy program visualises geolocation information fetched from Twitter, Instagram, Flickr and Google+, and allows one to get a complete picture of every social media message posted recently in a given area, or track the movement of a given individual across all these services. Earlier it was possible to use the search APIs of at least some of these services without identifying oneself, but these days it is impossible. This means that to use Creepy, you need to configure it to log in as yourself on these services, and provide information to them about your search interests. This should be taken into account when using Creepy, as it will also share information about yourself with the services.

The picture above shows the twitter messages sent from (or at least geotagged with a position from) the city centre of Oslo, the capital of Norway. One useful way to use Creepy is to first look at information tagged with an area of interest, and next look at all the information provided by one or more individuals who were in the area. I tested it by checking which celebrities provide their location in twitter messages: I looked at who sent twitter messages near a Norwegian TV station, and could then track their position over time, making it possible to locate their home and work place, among other things. A similar technique has been used to locate Russian soldiers in Ukraine, and it is both a powerful tool to discover lying governments, and a useful tool to help people understand the value of the private information they provide to the public.

The package is not trivial to backport to Debian Stable/Jessie, as it depends on several python modules currently missing in Jessie (at least python-instagram, python-flickrapi and python-requests-toolbelt).

(I have uploaded the image to screenshots.debian.net and licensed it under the same terms as the Creepy program in Debian.)

15th January 2016

During his DebConf15 keynote, Jacob Appelbaum observed that those listening on the Internet lines would have good reason to believe a computer has a given security hole if they see it download a security fix from a Debian mirror. This is a good reason to always use encrypted connections to the Debian mirror, to make sure those listening do not know which IP address to attack. In August, Richard Hartmann observed that encryption was not enough, as it was still possible to infer which security patches were downloaded from the download size, or from the fact that a download took place shortly after a security fix was released, and proposed to always use Tor to download packages from the Debian mirror. He was not the first to propose this, as the apt-transport-tor package by Tim Retout already existed to make it easy to convince apt to use Tor, but I was not aware of that package when I read the blog post from Richard.

Richard discussed the idea with Peter Palfrader, one of the Debian sysadmins, and he set up a Tor hidden service on one of the central Debian mirrors using the address vwakviie2ienjx6t.onion, thus making it possible to download packages directly between two Tor nodes, making sure the network traffic was always encrypted.

Here is a short recipe for enabling this on your machine: install apt-transport-tor, replace http and https URLs with tor+http and tor+https, and use the hidden service instead of the official Debian mirror site. I recommend installing etckeeper before you start, to have a history of the changes done in /etc/.

apt install apt-transport-tor
sed -i 's% http://ftp.debian.org/% tor+http://vwakviie2ienjx6t.onion/%' /etc/apt/sources.list
sed -i 's% http% tor+http%' /etc/apt/sources.list

If you have more sources listed in /etc/apt/sources.list.d/, run the sed commands for these too. The sed commands assume you are using the ftp.debian.org Debian mirror. Adjust the commands (or just edit the file manually) to match your mirror.
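
After the change, a typical line in /etc/apt/sources.list should look something like this (assuming the hidden service address above and the Jessie release):

deb tor+http://vwakviie2ienjx6t.onion/debian jessie main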

This works in Debian Jessie and later. Note that tools like apt-file only recently started using the apt transport system, and do not work with these tor+http URLs. For apt-file you need the version currently in experimental, which needs a recent apt version currently only in unstable. So if you need a working apt-file, this is not for you.

Another advantage of this change is that your machine will start using Tor regularly and at fairly random intervals (every time you update the package lists or upgrade or install a new package), thus masking other Tor traffic done from the same machine. Using Tor will become normal for the machine in question.

On Freedombox, APT is set up by default to use apt-transport-tor when Tor is enabled. It would be great if it was the default on any Debian system.

23rd December 2015

When I was a kid, we used to collect "car numbers", as we called the car license plate numbers in those days. I would write the numbers down in my little book and compare notes with the other kids to see how many region codes we had seen, and if we had seen some exotic or special region codes and numbers. It was a fun game to pass the time, as we kids had plenty of it.

A few days ago I came across the OpenALPR project, a free software project to automatically discover and report license plates in images and video streams, and provide the "car numbers" in a machine readable format. I've been looking for such a system for a while now, because I believe it is a bad idea that automatic number plate recognition tools are only available in the hands of the powerful, and want them to be available also to the powerless, to even the score when it comes to surveillance and sousveillance. I discovered the developer wanted to get the tool into Debian, and as I too wanted it to be in Debian, I volunteered to help him get it into shape to get the package uploaded into the Debian archive.

Today we finally managed to get the package into shape and uploaded it into Debian, where it currently waits in the NEW queue for review by the Debian ftpmasters.

I guess you are wondering why on earth such a tool would be useful for the common folks, i.e. those not running a large government surveillance system? Well, I plan to put it in a computer on my bike and in my car, tracking the cars nearby and allowing me to be notified when number plates on my watch list are discovered. Another use case was suggested by a friend of mine, who wanted to set it up at his home to open the car port automatically when it discovered the plate on his car. When I mentioned it was perhaps a bit foolhardy to allow anyone capable of placing his license plate number on a piece of cardboard to open his car port, he replied that it was always unlocked anyway. I guess for such a use case it makes sense. I am sure there are other use cases too, for those with imagination and a vision.

If you want to build your own version of the Debian package, check out the upstream git source and symlink ./distros/debian to ./debian/ before running "debuild" to build the source. Or wait a bit until the package shows up in unstable.
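
The steps boil down to something like this (a sketch; I assume the upstream repository is the openalpr project on github, adjust the URL if needed):

# fetch the upstream source and build an unsigned Debian package from it
git clone https://github.com/openalpr/openalpr.git
cd openalpr
ln -s distros/debian debian
debuild -us -uc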

20th December 2015

Around three years ago, I created the isenkram system to get a more practical solution in Debian for handling hardware related packages. A GUI system in the isenkram package will present a pop-up dialog when some hardware dongle supported by relevant packages in Debian is inserted into the machine. The same lookup mechanism to detect packages is available as command line tools in the isenkram-cli package. In addition to mapping hardware, it will also map kernel firmware files to packages and make it easy to install needed firmware packages automatically. The key for this system to work is a good way to map hardware to packages, in other words, a way for packages to announce what hardware they will work with.

I started by providing data files in the isenkram source, and adding code to download the latest version of these data files at run time, to ensure every user had the most up to date mapping available. I also added support for storing the mapping in the Packages file in the apt repositories, but did not push this approach, because while I was trying to figure out the best way to store hardware/package mappings, the appstream system was announced. I got in touch and suggested adding the hardware mapping to that data set, to be able to use appstream as a data source, and this was accepted, at least for the Debian version of appstream.

A few days ago, using appstream in Debian for this became possible, and today I uploaded a new version 0.20 of isenkram adding support for appstream as a data source for mapping hardware to packages. The only package so far using appstream to announce its hardware support is my pymissile package. I got help from Matthias Klumpp with figuring out how to add the required metadata in pymissile. I added a file debian/pymissile.metainfo.xml with this content:

<?xml version="1.0" encoding="UTF-8"?>
<component>
  <id>pymissile</id>
  <metadata_license>MIT</metadata_license>
  <name>pymissile</name>
  <summary>Control original Striker USB Missile Launcher</summary>
  <description>
    <p>
      Pymissile provides a curses interface to control an original
      Marks and Spencer / Striker USB Missile Launcher, as well as a
      motion control script to allow a webcamera to control the
      launcher.
    </p>
  </description>
  <provides>
    <modalias>usb:v1130p0202d*</modalias>
  </provides>
</component>

The key for isenkram is the component/provides/modalias value, which is a glob style match rule for hardware specific strings (modalias strings) provided by the Linux kernel. In this case, it will map to all USB devices with vendor code 1130 and product code 0202.

Note, it is important that the licenses of all the metadata files are compatible, to have permission to aggregate them into archive wide appstream files. Matthias suggested using the MIT or BSD licenses for these files. A challenge is figuring out a good id for the data, as it is supposed to be globally unique and shared across distributions (in other words, it is best to coordinate with upstream what to use). But it can be changed later, so we went with the package name, as upstream for this project is dormant.

To get the metadata file installed in the correct location for the mirror update scripts to pick it up and include its content in the appstream data source, the file must be installed in the binary package under /usr/share/appdata/. I did this by adding the following line to debian/pymissile.install:

debian/pymissile.metainfo.xml usr/share/appdata

With that in place, the command line tool isenkram-lookup will automatically list all packages useful on the current computer, and the GUI pop-up handler will propose to install any of them not already installed when a hardware dongle is inserted into the machine in question.
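
With the launcher plugged in, the lookup should produce something like this (illustrative output; the list will differ from machine to machine):

% isenkram-lookup
pymissile
%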

Details of the modalias field in appstream are available from the DEP-11 proposal.

To locate the modalias values of all hardware present in a machine, try running this command on the command line:

cat $(find /sys/devices/|grep modalias)
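
The output is one modalias string per device. A USB entry matching the glob above could look something like this (an illustrative string, not copied from a real machine):

usb:v1130p0202d0100dc00dsc00dp00ic03isc00ip00in00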

To learn more about the isenkram system, please check out my blog posts tagged isenkram.

30th November 2015

A blog post from my fellow Debian developer Paul Wise titled "The GPL is not magic pixie dust" explains the importance of making sure the GPL is enforced. I quote the blog post from Paul in full here with his permission:

Become a Software Freedom Conservancy Supporter!

The GPL is not magic pixie dust. It does not work by itself.
The first step is to choose a copyleft license for your code.
The next step is, when someone fails to follow that copyleft license, it must be enforced
and its a simple fact of our modern society that such type of work
is incredibly expensive to do and incredibly difficult to do.

-- Bradley Kuhn, in FaiF episode 0x57

As the Debian Website used to imply, public domain and permissively licensed software can lead to the production of more proprietary software as people discover useful software, extend it and or incorporate it into their hardware or software products. Copyleft licenses such as the GNU GPL were created to close off this avenue to the production of proprietary software but such licenses are not enough. With the ongoing adoption of Free Software by individuals and groups, inevitably the community's expectations of license compliance are violated, usually out of ignorance of the way Free Software works, but not always. As Karen and Bradley explained in FaiF episode 0x57, copyleft is nothing if no-one is willing and able to stand up in court to protect it. The reality of today's world is that legal representation is expensive, difficult and time consuming. With gpl-violations.org in hiatus until some time in 2016, the Software Freedom Conservancy (a tax-exempt charity) is the major defender of the Linux project, Debian and other groups against GPL violations. In March the SFC supported a lawsuit by Christoph Hellwig against VMware for refusing to comply with the GPL in relation to their use of parts of the Linux kernel. Since then two of their sponsors pulled corporate funding and conferences blocked or cancelled their talks. As a result they have decided to rely less on corporate funding and more on the broad community of individuals who support Free Software and copyleft. So the SFC has launched a campaign to create a community of folks who stand up for copyleft and the GPL by supporting their work on promoting and supporting copyleft and Free Software.

If you support Free Software, like what the SFC do, agree with their compliance principles, are happy about their successes in 2015, work on a project that is an SFC member and or just want to stand up for copyleft, please join Christopher Allan Webber, Carol Smith, Jono Bacon, myself and others in becoming a supporter. For the next week your donation will be matched by an anonymous donor. Please also consider asking your employer to match your donation or become a sponsor of SFC. Don't forget to spread the word about your support for SFC via email, your blog and or social media accounts.

I agree with Paul on this topic and just signed up as a Supporter of Software Freedom Conservancy myself. Perhaps you should be a supporter too?

17th November 2015

I've needed a new OpenPGP key for a while, but have not had time to set it up properly. I wanted to generate it offline and have it available on an OpenPGP smart card for daily use, and learning how to do it and finding time to sit down with an offline machine almost took forever. But finally I've been able to complete the process, and have now moved from my old GPG key to a new GPG key. See the full transition statement, signed with both my old and new key, for the details. This is my new key:

pub   3936R/111D6B29EE4E02F9 2015-11-03 [expires: 2019-11-14]
      Key fingerprint = 3AC7 B2E3 ACA5 DF87 78F1  D827 111D 6B29 EE4E 02F9
uid                  Petter Reinholdtsen <pere@hungry.com>
uid                  Petter Reinholdtsen <pere@debian.org>
sub   4096R/87BAFB0E 2015-11-03 [expires: 2019-11-02]
sub   4096R/F91E6DE9 2015-11-03 [expires: 2019-11-02]
sub   4096R/A0439BAB 2015-11-03 [expires: 2019-11-02]

The key can be downloaded from the OpenPGP key servers, signed by my old key.

If you signed my old key (DB4CCC4B2A30D729), I'd very much appreciate a signature on my new key; details and instructions are in the transition statement. I'm happy to reciprocate if you have a similarly signed transition statement to present.

3rd November 2015

In Norway, all government offices are required by law to keep a list of every document or letter arriving and leaving their offices. Internal notes should also be documented. The document list (called a mail journal - "postjournal" in Norwegian) is public information, and thanks to the Norwegian Freedom of Information Act (Offentleglova) the mail journal is available for everyone. Most offices even publish the mail journal on their web pages, as PDFs or tables in web pages. The state-level offices even have a shared web based search service (called Offentlig Elektronisk Postjournal - OEP) to make it possible to search the entries in the list. Not all journal entries show up on OEP, and the search service is hard to use, but OEP does make it easier to find at least some interesting journal entries.

In 2012 I came across a document in the mail journal for the Norwegian Ministry of Transport and Communications on OEP that piqued my interest. The title of the document was "Internet Governance and how it affects national security" (Norwegian: "Internet Governance og påvirkning på nasjonal sikkerhet"). The document date was 2012-05-22, and it was said to be sent from the "Permanent Mission of Norway to the United Nations". I asked for a copy, but my request was rejected with a reference to a legal clause said to authorize them to reject it (offentleglova § 20, letter c) and an explanation that the document was exempt because of foreign policy interests, as it contained information related to the Norwegian negotiating position, negotiating strategies or similar. I was told the information in the document related to the ongoing negotiation in the International Telecommunications Union (ITU). The explanation made sense to me in early January 2013, as an ITU conference in Dubai discussing Internet Governance (World Conference on International Telecommunications - WCIT-12) had just ended, reportedly in chaos when the USA walked out of the negotiations and 25 countries including Norway refused to sign the new treaty. It seemed reasonable to believe talks were still going on a few weeks later. Norway was represented at the ITU meeting by two authorities, the Norwegian Communications Authority and the Ministry of Transport and Communications. This might be the reason the letter was sent to the ministry. As I was unable to find the document in the mail journal of any Norwegian UN mission, I asked the ministry who had sent the document to the ministry, and was told that it was the Deputy Permanent Representative with the Permanent Mission of Norway in Geneva.

Three years later, I was still curious about the content of that document, and again asked for a copy, believing the negotiation was over now. This time I asked both the Ministry of Transport and Communications as the receiver and the Permanent Mission of Norway in Geneva as the sender for a copy, to see if they both agreed that it should be withheld from the public. The ministry upheld its rejection quoting the same law reference as before, while the permanent mission rejected it quoting a different clause (offentleglova § 20 letter b), claiming that they were required to keep the content of the document from the public because it contained information given to Norway with the expressed or implied expectation that the information should not be made public. I asked the permanent mission for an explanation, and was told that the document contained an account from a meeting held in the Pentagon for a limited group of NATO nations where the organiser of the meeting did not intend the content of the meeting to be publicly known. They explained that giving me a copy might cause Norway to not get access to similar information in the future, and thus hurt the future foreign interests of Norway. They also explained that the Permanent Mission of Norway in Geneva was not the author of the document, they only got a copy of it, and because of this had not listed it in their mail journal.

Armed with this knowledge I asked the Ministry to reconsider, and asked who the author of the document was, now realising that it was not the same as the "sender" according to the Ministry of Transport and Communications. The ministry upheld its rejection, but told me the name of the author of the document. According to a government report, the author was with the Permanent Mission of Norway in New York a bit more than a year later (2014-09-22), so I guessed that might be the office responsible for writing and sending the report initially, and asked them for a copy, but I was obviously wrong, as I was told that the document was unknown to them and that the author did not work there when the document was written. Next, I asked the Permanent Mission of Norway in Geneva and the Foreign Ministry to reconsider, and to at least tell me who sent the document to the Deputy Permanent Representative with the Permanent Mission of Norway in Geneva. The Foreign Ministry also upheld its rejection, but told me that the person sending the document to the Permanent Mission of Norway in Geneva was the defence attaché with the Norwegian Embassy in Washington. I do not know if this is the same person as the author of the document.

If I understand the situation correctly, someone capable of inviting selected NATO nations to a meeting in the Pentagon organised a meeting that someone representing the Norwegian defence attaché in Washington attended, and the account from this meeting is interpreted by the Ministry of Transport and Communications to expose Norway's negotiating position, negotiating strategies or similar regarding the ITU negotiations on Internet Governance. It is truly amazing what can be derived from mere metadata.

I wonder which NATO countries besides Norway attended this meeting? And what exactly was said and done at the meeting? Anyone know?

31st October 2015

People keep asking me where to get the various forms of the book I published last week, the Norwegian Bokmål edition of Lawrence Lessig's book Free Culture. It was published on paper via lulu.com, and is also available in PDF, ePub and MOBI format. I currently sell the paper edition at cost from lulu.com, but might extend the distribution to book stores like Amazon and Barnes & Noble later. This would double the price and force me to make a profit from selling the book. Anyway, here are links to get the book in different formats:

Note that the MOBI version has problems with the table of contents, at least with the viewers I have been able to test. And the ePub file has several problems according to epubcheck, but seems to display fine in the viewers I have tested. All the files needed to create the book in various forms are available from the github project page.

The project got press coverage from the Norwegian IT news site digi.no. Check out the article "Vil åpne politikernes øyne for Creative Commons".

I've blogged about the project as it moved along. The blogs document the translation progress and insights I had along the way.

23rd October 2015

Click here to buy the book.

In 2004, as the Creative Commons movement gained momentum, its creator Lawrence Lessig wrote the book Free Culture to explain the problems with increasing copyright regulation and suggest some solutions. I read the book back then and was very moved by it. Reading the book inspired me and changed the way I looked at copyright law, and I would love it if more people would read it too.

Because of this, I decided in the summer of 2012 to translate it to Norwegian Bokmål and publish it for those of my friends and family that prefer to read books in Norwegian. I translated the book using docbook and a gettext PO file, and a byproduct of this process is a new edition of the English original. I've been in touch with the author during my work, and he said it was fine with him if I also published an English version. So I decided to do so. Today, I made this edition available for sale on Lulu.com, for those interested in a paper book. This is the cover:

The Norwegian Bokmål version will be available for purchase in a few days. I also plan to publish a French version in a few weeks or months, depending on the number of people with knowledge of French who join the translation project. So far there is only one active person, but the French book is almost completely translated; it just needs some proof reading.

The book is also available in PDF, ePub and MOBI formats from my github project page. Note the ePub and MOBI versions have some formatting problems I believe are due to bugs in the docbook tool dbtoepub (Debian BTS issues #795842 and #796871), but I have not taken the time to investigate. I recommend the PDF and ePub versions for now, as they seem to show up fine in the viewers I have available.

After the translation to Norwegian Bokmål was complete, I was able to secure some sponsoring from the NUUG Foundation to print the book. This is the reason their logo is located on the back cover. I am very grateful for their contribution, and will use it to give a copy of the Norwegian edition to members of the Norwegian Parliament and other decision makers here in Norway.

19th October 2015

Last year, US presidential candidate in the Democratic Party Lawrence Lessig interviewed Edward Snowden. The one hour interview was published by Harvard Law School 2014-10-23 on Youtube, and the meeting took place 2014-10-20.

The questions are very good, there is lots of useful information to be learned, and very interesting issues to think about are raised. Please check it out.

I find it especially interesting to hear again that Snowden did try to bring up his reservations through the official channels without any luck. It is in sharp contrast to the answers given 2013-11-06 by the Norwegian prime minister Erna Solberg to the Norwegian Parliament, claiming Snowden is no whistle-blower because he should have taken up his concerns internally, using official channels. It makes me sad that this is the political leadership we have here in Norway.

8th October 2015

The movie "The Internet's Own Boy: The Story of Aaron Swartz" is both inspiring and depressing at the same time. The work of Aaron Swartz has inspired me in my work, and I am grateful of all the improvements he was able to initiate or complete. I wish I am able to do as much good in my life as he did in his. Every minute of this 1:45 long movie is inspiring in documenting how much impact a single person can have on improving the society and this world. And it is depressing in documenting how the law enforcement of USA (and other countries) is corrupted to a point where they can push a bright kid to his death for downloading too many scientific articles. Aaron is dead. Let us all weep.

The movie is also available on Youtube. I wish there were Norwegian subtitles available, so I could show it to my parents.

1st October 2015

As I wrap up the Norwegian version of the Free Culture book by Lawrence Lessig (still waiting for my final proof reading copy to arrive in the mail), my great dblatex helper and developer of the dblatex docbook processor, Benoît Guillon, decided to try to create a French version of the book. He started with the French translation available from the Wikilivres wiki pages, and wrote a program to convert it into a PO file, allowing the translation to be integrated into the po4a based framework I use to create the Norwegian translation from the English edition. We meet on the #dblatex IRC channel to discuss the work. If you want to help create a French edition, check out his git repository and join us on IRC. If the French edition looks good, we might publish it as a paper book on lulu.com. A French version of the drawings and the cover needs to be provided for this to happen.

24th September 2015

When I get a new laptop, the battery life time at the start is OK. But this does not last. The last few laptops gave me the feeling that within a year, the life time is just a fraction of what it used to be, and it slowly becomes painful to use the laptop without power connected all the time. Because of this, when I got a new Thinkpad X230 laptop about two years ago, I decided to monitor its battery state to have more hard facts when the battery started to fail.

First I tried to find a sensible Debian package to record the battery status, assuming that this must be a problem already handled by someone else. I found battery-stats, which collects statistics from the battery, but it was completely broken. I sent a few suggestions to the maintainer, but decided to write my own collector as a shell script while I waited for feedback from him. Via a blog post about the battery development on a MacBook Air I also discovered batlog, not available in Debian.

I started my collector 2013-07-15, and it has been collecting battery stats ever since. Now my /var/log/hjemmenett-battery-status.log file contains around 115,000 measurements, from the time the battery was working great until now, when it is unable to charge above 7% of its original capacity. My collector shell script is quite simple and looks like this:

#!/bin/sh
# Inspired by
# http://www.ifweassume.com/2013/08/the-de-evolution-of-my-laptop-battery.html
# See also
# http://blog.sleeplessbeastie.eu/2013/01/02/debian-how-to-monitor-battery-capacity/
logfile=/var/log/hjemmenett-battery-status.log

files="manufacturer model_name technology serial_number \
    energy_full energy_full_design energy_now cycle_count status"

if [ ! -e "$logfile" ] ; then
    (
	printf "timestamp,"
	for f in $files; do
	    printf "%s," $f
	done
	echo
    ) > "$logfile"
fi

log_battery() {
    # Print complete message in one echo call, to avoid race condition
    # when several log processes run in parallel.
    msg=$(printf "%s," $(date +%s); \
	for f in $files; do \
	    printf "%s," $(cat $f); \
	done)
    echo "$msg"
}

cd /sys/class/power_supply

for bat in BAT*; do
    (cd $bat && log_battery >> "$logfile")
done

The script is called when the power management system detects a change in the power status (power plug in or out), and when going into and out of hibernation and suspend. In addition, it collects a value every 10 minutes. This makes it possible for me to know when the battery is discharging or charging, and how the maximum charge changes over time. The code for the Debian package is now available on github.

The collected log file looks like this:

timestamp,manufacturer,model_name,technology,serial_number,energy_full,energy_full_design,energy_now,cycle_count,status,
1376591133,LGC,45N1025,Li-ion,974,62800000,62160000,39050000,0,Discharging,
[...]
1443090528,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,
1443090601,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,

I wrote a small script to create a graph of the charge development over time. The graph depicted above shows the slow death of my laptop battery.
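
For those wanting to do the same, a minimal gnuplot sketch along these lines should do the job (this is not the script I used; the column numbers refer to the CSV header shown above):

gnuplot <<'EOF'
set datafile separator ","
set xdata time
set timefmt "%s"
set format x "%Y-%m"
set terminal png size 800,480
set output "battery.png"
# plot energy_full (column 6) as a percentage of energy_full_design (column 7)
plot "/var/log/hjemmenett-battery-status.log" every ::1 \
  using 1:($6/$7*100) with lines title "max charge, % of design"
EOF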

But why is this happening? Why are my laptop batteries always dying in a year or two, while the batteries of space probes and satellites keep working year after year? If we are to believe Battery University, the cause is me charging the battery whenever I have a chance, and the fix is to not charge the Lithium-ion batteries to 100% all the time, but to stay below 90% of full charge most of the time. I've been told that the Tesla electric cars limit the charge of their batteries to 80%, with the option to charge to 100% when preparing for a longer trip (not that I would want a car like a Tesla, where the right to privacy is abandoned, but that is another story), which I guess is the option we should have for laptops on Linux too.

Is there a good and generic way with Linux to tell the battery to stop charging at 80%, unless requested to charge to 100% once in preparation for a longer trip? I found one recipe on askubuntu for Ubuntu to limit charging on a Thinkpad to 80%, but could not get it to work (the kernel module refused to load).
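
For reference, that recipe amounts to something like this (assuming the tp_smapi module from tp-smapi-dkms actually loads, which it did not for me):

# limit charging to the 40-80% band for the first battery
modprobe tp_smapi
echo 40 > /sys/devices/platform/smapi/BAT0/start_charge_thresh
echo 80 > /sys/devices/platform/smapi/BAT0/stop_charge_thresh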

I wonder why the battery capacity was reported to be more than 100% at the start. I also wonder why the "full capacity" increases some times, and if it is possible to repeat the process to get the battery back to design capacity. And I wonder if the discharge and charge speeds change over time, or if they stay the same. I have not yet tried to write a tool to calculate the derivative values of the battery level, but suspect some interesting insights might be learned from those.

Update 2015-09-24: I got a tip to install the acpi-call-dkms and tlp packages (unfortunately missing in Debian stable) instead of the tp-smapi-dkms package I had tried to use initially, and use 'tlp setcharge 40 80' to change when charging starts and stops. I've done so now, but expect my existing battery is toast and needs to be replaced. The proposal is unfortunately Thinkpad specific.

Tags: debian, english.
3rd September 2015

Creating a good looking book cover proved harder than I expected. I wanted to create a cover looking similar to the original cover of the Free Culture book we are translating to Norwegian, and I wanted it in vector format for high resolution printing. But my inkscape knowledge was not nearly good enough to pull that off.

But thanks to the great inkscape community, I was able to wrap up the cover yesterday evening. I asked on the #inkscape IRC channel on Freenode for help and clues, and Marc Jeanmougin (Mc-) volunteered to try to recreate it based on the PDF of the cover from the HTML version. Not only did he create an SVG document with the original and his vector version side by side, he even provided an instruction video explaining how he did it. But the instruction video is not easy to follow for an untrained inkscape user. The video is a recording of how he did it, and he is obviously very experienced, as the menu selections are very quick, and he mentioned on IRC that he used some keyboard shortcuts that can't be seen on the video. But it gives a good idea about the inkscape operations to use to create the stripes with the embossed copyright sign in the center.

I took his SVG file, copied the vector image and re-sized it to fit on the cover I was drawing. I am happy with the end result, and the current English version looks like this:

I am not quite sure about the text on the back, but guess it will do. I picked three quotes from the official site for the book, and hope they will work to trigger the interest of potential readers. The Norwegian cover will look the same, but with the texts and bar code replaced with the Norwegian versions.

The book is very close to being ready for publication, and I expect to upload the final draft to Lulu in the next few days and order a final proof reading copy to verify that everything looks like it should, before allowing everyone to order their own copy of Free Culture, in English or Norwegian Bokmål. I'm waiting to give the productive proof readers a chance to complete their work.

19th August 2015

Today, finally, my first printed draft edition of the Norwegian translation of Free Culture I have been working on for the last few years arrived in the mail. I had to fake a cover to get the interior printed, and the exterior of the book looks awful, but that is irrelevant at this point. I asked for a printed pocket book version to get an idea about the font sizes and paper format, as well as how good the figures and images look in print, but also to test what the pocket book version would look like. After receiving the 500 page pocket book, it became obvious to me that the pocket book size is too small for this book. I believe the book is too thick, and several tables and figures do not look good with such a small page size. I believe I will go with the 5.5x8.5 inch size instead. A surprise discovery from the paper version was how bad the URLs look in print. They are very hard to read on the colophon page. The URLs are red in the PDF, but light gray on paper. I need to change the color of the links somehow to look better. But there is a printed book in my hand, and it feels great. :)

Now I only need to fix the cover, wrap up the postscript with the story behind the book, and collect the last corrections from the proof readers before the book is ready for proper printing. Cover artists willing to work for free and create a Creative Commons licensed vector file looking similar to the original are most welcome, as my skills as a graphics designer are mostly missing.

9th August 2015

Typesetting a book is harder than I hoped. As the translation is mostly done, and a volunteer proof reader was going to check the text on paper, it was time this summer to focus on formatting my translated docbook based version of the Free Culture book by Lawrence Lessig. I've been trying to get both docbook-xsl+fop and dblatex to give me a good looking PDF, but in the end I went with dblatex, because its Debian maintainer and upstream developer were responsive and very helpful in solving my formatting challenges.

Last night, I finally managed to create a PDF that no longer made Lulu.com complain after uploading, and I ordered a text version of the book on paper. It is lacking a proper book cover and is not tagged with the correct ISBN number, but should give me an idea what the finished book will look like.

Instead of using Lulu, I did consider printing the book using CreateSpace, but ended up using Lulu because it had smaller book size options (CreateSpace seems to lack pocket book with extended distribution). I looked for a similar service in Norway, but have not seen anything so far. Please let me know if I am missing out on something here.

But I still struggle to decide the book size. Should I go for pocket book (4.25x6.875 inches / 10.8x17.5 cm) with 556 pages, Digest (5.5x8.5 inches / 14x21.6 cm) with 323 pages or US Trade (6x9 inches / 15.3x22.9 cm) with 280 pages? Fewer pages give a cheaper book, and a smaller book is easier to carry around. The test book I ordered was pocket book sized, to give me an idea how well that fits in my hand, but I suspect I will end up using a digest sized book in the end to bring the price down further.

My biggest challenge at the moment is making nice cover art. My inkscape skills are not yet up to the task of replicating the original cover in SVG format. I also need to figure out what to write about the book on the back (I will most likely use the same text as the description on web based book stores). I would love help with this, if you are willing to license the art source and final version using the same CC license as the book. My artistic skills are not really up to the task.

I plan to publish the book in both English and Norwegian, on paper as well as in PDF, ePub and MOBI format. The current status can as usual be found on github in the archive/ directory. So far I have spent all my time on making the PDF version look good. Someone should probably do the same with the dbtoepub generated e-book. Help is definitely needed here, as I expect to run out of steam before I find time to improve the ePub formatting.

Please let me know via github if you find typos in the book or discover translations that should be improved. The final proof reading is being done right now, and I expect to publish the finished result in a few months.

16th July 2015

I'm still working on the Norwegian version of the Free Culture book by Lawrence Lessig, and am now working on the final typesetting and layout. One of the features I want, to keep the structure similar to the original book, is to typeset the footnotes as endnotes in the notes chapter. Based on the feedback from the Debian maintainer and the dblatex developer, I came up with this recipe I would like to share with you. The proposal was to create a new LaTeX class file and add the LaTeX code there, but this is not always practical, as I want to be able to replace the class using a makefile variable. So my proposal misuses the latex.begindocument XSL parameter value to get a small fragment into the correct location in the generated LaTeX file.

First, decide where in the DocBook document to place the endnotes, and add this text there:

<?latex \theendnotes ?>

Next, create an XSL stylesheet file dblatex-endnotes.xsl to add the code needed to insert the endnote instructions in the preamble of the generated LaTeX document, with content like this:

<?xml version='1.0'?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version='1.0'>
  <xsl:param name="latex.begindocument">
    <xsl:text>
\usepackage{endnotes}
\let\footnote=\endnote
\def\enoteheading{\mbox{}\par\vskip-\baselineskip }
\begin{document}
    </xsl:text>
  </xsl:param>
</xsl:stylesheet>

Finally, load this xsl file when running dblatex, for example like this:

dblatex --xsl-user=dblatex-endnotes.xsl freeculture.nb.xml

The end result can be seen on github, where my book project is located.

7th July 2015

After asking the Norwegian Broadcasting Company (NRK) why they can broadcast and stream H.264 video without an agreement with the MPEG LA, I was wiser, but still confused. So I asked MPEG LA if their understanding matched that of NRK. As far as I can tell, it does not.

I started by asking for more information about the various licensing classes and what exactly is covered by the "Internet Broadcast AVC Video" class that NRK pointed me at to explain why NRK did not need a license for streaming H.264 video:

According to a MPEG LA press release dated 2010-02-02, there is no charge when using MPEG AVC/H.264 according to the terms of "Internet Broadcast AVC Video". I am trying to understand exactly what the terms of "Internet Broadcast AVC Video" is, and wondered if you could help me. What exactly is covered by these terms, and what is not?

The only source of more information I have been able to find is a PDF named AVC Patent Portfolio License Briefing, which states this about the fees:

  • Where End User pays for AVC Video
    • Subscription (not limited by title) – 100,000 or fewer subscribers/yr = no royalty; > 100,000 to 250,000 subscribers/yr = $25,000; >250,000 to 500,000 subscribers/yr = $50,000; >500,000 to 1M subscribers/yr = $75,000; >1M subscribers/yr = $100,000
    • Title-by-Title - 12 minutes or less = no royalty; >12 minutes in length = lower of (a) 2% or (b) $0.02 per title
  • Where remuneration is from other sources
    • Free Television - (a) one-time $2,500 per transmission encoder or (b) annual fee starting at $2,500 for > 100,000 HH rising to maximum $10,000 for >1,000,000 HH
    • Internet Broadcast AVC Video (not title-by-title, not subscription) – no royalty for life of the AVC Patent Portfolio License

Am I correct in assuming that the four categories listed are the categories used when selecting licensing terms, and that "Internet Broadcast AVC Video" is the category for things that do not fall into one of the other three categories? Can you point me to a good source explaining what is meant by "title-by-title" and "Free Television" in the license terms for AVC/H.264?

Will a web service providing H.264 encoded video content in a "video on demand" fashion similar to Youtube and Vimeo, where no subscription is required and no payment is required from end users to get access to the videos, fall under the terms of the "Internet Broadcast AVC Video", i.e. no royalty for life of the AVC Patent Portfolio license? Does it matter if some users are subscribed to get access to personalized services?

Note, this request and all answers will be published on the Internet.

The answer came quickly from Benjamin J. Myers, Licensing Associate with the MPEG LA:

Thank you for your message and for your interest in MPEG LA. We appreciate hearing from you and I will be happy to assist you.

As you are aware, MPEG LA offers our AVC Patent Portfolio License which provides coverage under patents that are essential for use of the AVC/H.264 Standard (MPEG-4 Part 10). Specifically, coverage is provided for end products and video content that make use of AVC/H.264 technology. Accordingly, the party offering such end products and video to End Users concludes the AVC License and is responsible for paying the applicable royalties.

Regarding Internet Broadcast AVC Video, the AVC License generally defines such content to be video that is distributed to End Users over the Internet free-of-charge. Therefore, if a party offers a service which allows users to upload AVC/H.264 video to its website, and such AVC Video is delivered to End Users for free, then such video would receive coverage under the sublicense for Internet Broadcast AVC Video, which is not subject to any royalties for the life of the AVC License. This would also apply in the scenario where a user creates a free online account in order to receive a customized offering of free AVC Video content. In other words, as long as the End User is given access to or views AVC Video content at no cost to the End User, then no royalties would be payable under our AVC License.

On the other hand, if End Users pay for access to AVC Video for a specific period of time (e.g., one month, one year, etc.), then such video would constitute Subscription AVC Video. In cases where AVC Video is delivered to End Users on a pay-per-view basis, then such content would constitute Title-by-Title AVC Video. If a party offers Subscription or Title-by-Title AVC Video to End Users, then they would be responsible for paying the applicable royalties you noted below.

Finally, in the case where AVC Video is distributed for free through an "over-the-air, satellite and/or cable transmission", then such content would constitute Free Television AVC Video and would be subject to the applicable royalties.

For your reference, I have attached a .pdf copy of the AVC License. You will find the relevant sublicense information regarding AVC Video in Sections 2.2 through 2.5, and the corresponding royalties in Section 3.1.2 through 3.1.4. You will also find the definitions of Title-by-Title AVC Video, Subscription AVC Video, Free Television AVC Video, and Internet Broadcast AVC Video in Section 1 of the License. Please note that the electronic copy is provided for informational purposes only and cannot be used for execution.

I hope the above information is helpful. If you have additional questions or need further assistance with the AVC License, please feel free to contact me directly.

Having a fresh copy of the license text was useful, and knowing that the definition of Title-by-Title required payment per title made me aware that my earlier understanding of that phrase had been wrong. But I still had a few questions:

I have a small followup question. Would it be possible for me to get a license with MPEG LA even if there are no royalties to be paid? The reason I ask, is that some video related products have a copyright clause limiting their use without a license with MPEG LA. The clauses typically look similar to this:

This product is licensed under the AVC patent portfolio license for the personal and non-commercial use of a consumer to (a) encode video in compliance with the AVC standard ("AVC video") and/or (b) decode AVC video that was encoded by a consumer engaged in a personal and non-commercial activity and/or AVC video that was obtained from a video provider licensed to provide AVC video. No license is granted or shall be implied for any other use. additional information may be obtained from MPEG LA L.L.C.

It is unclear to me if this clause means that I need to enter into an agreement with MPEG LA to use the product in question, even if there are no royalties to be paid to MPEG LA. I suspect it will differ depending on the jurisdiction, and mine is Norway. What is MPEG LA's view on this?

According to the answer, MPEG LA believe those using such tools for non-personal or commercial use need a license with them:

With regard to the Notice to Customers, I would like to begin by clarifying that the Notice from Section 7.1 of the AVC License reads:

THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL USE OF A CONSUMER OR OTHER USES IN WHICH IT DOES NOT RECEIVE REMUNERATION TO (i) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (ii) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM

The Notice to Customers is intended to inform End Users of the personal usage rights (for example, to watch video content) included with the product they purchased, and to encourage any party using the product for commercial purposes to contact MPEG LA in order to become licensed for such use (for example, when they use an AVC Product to deliver Title-by-Title, Subscription, Free Television or Internet Broadcast AVC Video to End Users, or to re-Sell a third party's AVC Product as their own branded AVC Product).

Therefore, if a party is to be licensed for its use of an AVC Product to Sell AVC Video on a Title-by-Title, Subscription, Free Television or Internet Broadcast basis, that party would need to conclude the AVC License, even in the case where no royalties were payable under the License. On the other hand, if that party (either a Consumer or business customer) simply uses an AVC Product for their own internal purposes and not for the commercial purposes referenced above, then such use would be included in the royalty paid for the AVC Products by the licensed supplier.

Finally, I note that our AVC License provides worldwide coverage in countries that have AVC Patent Portfolio Patents, including Norway.

I hope this clarification is helpful. If I may be of any further assistance, just let me know.

The mention of Norwegian patents made me a bit confused, so I asked for more information:

But one minor question at the end. If I understand you correctly, you state in the quote above that there are patents in the AVC Patent Portfolio that are valid in Norway. This makes me believe I read the list available from <URL: http://www.mpegla.com/main/programs/AVC/Pages/PatentList.aspx > incorrectly, as I believed the "NO" prefix in front of patents marked Norwegian patents, and the only one I could find under Mitsubishi Electric Corporation expired in 2012. Which patents are you referring to that are relevant for Norway?

Again, the quick answer explained how to read the patent list:

Your understanding is correct that the last AVC Patent Portfolio Patent in Norway expired on 21 October 2012. Therefore, where AVC Video is both made and Sold in Norway after that date, then no royalties would be payable for such AVC Video under the AVC License. With that said, our AVC License provides historic coverage for AVC Products and AVC Video that may have been manufactured or Sold before the last Norwegian AVC patent expired. I would also like to clarify that coverage is provided for the country of manufacture and the country of Sale that has active AVC Patent Portfolio Patents.

Therefore, if a party offers AVC Products or AVC Video for Sale in a country with active AVC Patent Portfolio Patents (for example, Sweden, Denmark, Finland, etc.), then that party would still need coverage under the AVC License even if such products or video are initially made in a country without active AVC Patent Portfolio Patents (for example, Norway). Similarly, a party would need to conclude the AVC License if they make AVC Products or AVC Video in a country with active AVC Patent Portfolio Patents, but eventually Sell such AVC Products or AVC Video in a country without active AVC Patent Portfolio Patents.

As far as I understand it, MPEG LA believe anyone using Adobe Premiere and other video related software with a H.264 distribution license needs a license agreement with MPEG LA to use such tools for anything non-personal or commercial, while it is OK to set up a Youtube-like service as long as no-one pays to get access to the content. I still have no clear idea how this applies to Norway, where none of the patents MPEG LA is licensing are valid. Will the copyright terms take precedence, or can those terms be ignored because the patents are not valid in Norway?

5th July 2015

Several people contacted me after my previous blog post about my need for a new laptop, and provided very useful feedback. I wish to thank every one of them. Several pointed me to the possibility of fixing my X230, and I am already in the process of getting Lenovo to do so, thanks to the on site, next day support contract covering the machine. But the battery is almost useless (I expect to replace it with a non-official battery) and I do not expect the machine to live for many more years, so it is time to plan its replacement. If I did not have a support contract, I could have tried to find replacement parts using FrancEcrans, as was suggested, but it might present a language barrier as I do not understand French.

One tip I got was to use the Skinflint web service to compare laptop models. It seems to have more models available than prisjakt.no. Another tip, from someone I know has similar keyboard preferences, was that the HP EliteBook 840 keyboard is not very good, and this matches my experience with earlier EliteBook keyboards I tested. Because of this, I will not consider it any further.

When I wrote my blog post, I was not aware of the Thinkpad X250, the newest Thinkpad X model. The keyboard reintroduces mouse buttons (which are missing from the X240), and it is working fairly well with Debian Sid/Unstable according to Corsac.net. The reports I got on the keyboard quality are not consistent. Some say the keyboard is good, others say it is OK, while others say it is not very good. Those with experience from the X41 and X60 agree that the X250 keyboard is not as good as those trusty old laptops, and suggest I keep and fix my X230 instead of upgrading, or get a used X230 to replace it. I'm also told that the X250 lacks LEDs for caps lock, disk activity and battery status, which are very convenient on my X230. I'm also told that the CPU fan runs very often, making it a bit noisy. In any case, the X250 does not work out of the box with Debian Stable/Jessie, one of my requirements.

I have also gotten a few vendor proposals, one was Pro-Star, another was Libreboot. The latter looks very attractive to me.

Again, thank you all for the very useful feedback. It helps a lot as I keep looking for a replacement.

Update 2015-07-06: I was recommended to check out the lapstore.de web shop for used laptops. They have several different old Thinkpad X models, and provide a one year warranty.

Tags: debian, english.
3rd July 2015

My primary work horse laptop is failing, and will need a replacement soon. The left 5 cm of the screen on my Thinkpad X230 started flickering yesterday, and I suspect the cause is a broken cable, as changing the angle of the screen sometimes gets rid of the flickering.

My requirements have not really changed since I bought it, and are still as I described them in 2013. The last time I bought a laptop, I had good help from prisjakt.no, where I could select at least a few of the requirements (mouse pin, wifi, weight) and go through the rest manually. A three button mouse and a good keyboard are not available as search options, and all three laptop models proposed today (Thinkpad X240, HP EliteBook 820 G1 and G2) lack three mouse buttons. It is also unclear to me how good the keyboards on the HP EliteBooks are. I hope Lenovo has not messed up the keyboard, even if the quality and robustness of the X series have deteriorated since the X41.

I wonder how I can find a sensible laptop when none of the options seem sensible to me? Are there better services around to search the set of available laptops for features? Please send me an email if you have suggestions.

Update 2015-07-23: I got a suggestion to check out the FSF list of endorsed hardware, which is useful background information.

Tags: debian, english.
2nd July 2015

Last October I was involved, on behalf of NUUG, with recording the talks at MakerCon Nordic, a conference for the Maker movement. Since then it has been the plan to publish the recordings on Frikanalen, which finally happened the last few days. A few talks are missing because the speakers asked the organizers not to publish them, but most of the talks are available. The talks are being broadcast on RiksTV channel 50 and using multicast on Uninett, as well as being available from the Frikanalen web site. The unedited recordings are available on Youtube too.

This is the list of talks available at the moment. Visit the Frikanalen video pages to view them.

  • Evolutionary algorithms as a design tool - from art to robotics (Kyrre Glette)
  • Make and break (Hans Gerhard Meier)
  • Making a one year school course for young makers (Olav Helland)
  • Innovation Inspiration - IPR Databases as a Source of Inspiration (Hege Langlo)
  • Making a toy for makers (Erik Torstensson)
  • How to make 3D printer electronics (Elias Bakken)
  • Hovering Clouds: Looking at online tool offerings for Product Design and 3D Printing (William Kempton)
  • Travelling maker stories (Øyvind Nydal Dahl)
  • Making the first Maker Faire in Sweden (Nils Olander)
  • Breaking the mold: Printing 1000’s of parts (Espen Sivertsen)
  • Ultimaker — and open source 3D printing (Erik de Bruijn)
  • Autodesk’s 3D Printing Platform: Sparking innovation (Hilde Sevens)
  • How Making is Changing the World – and How You Can Too! (Jennifer Turliuk)
  • Open-Source Adventuring: OpenROV, OpenExplorer and the Future of Connected Exploration (David Lang)
  • Making in Norway (Haakon Karlsen Jr., Graham Hayward and Jens Dyvik)
  • The Impact of the Maker Movement (Mike Senese)

Part of the reason this took so long was that the scripts NUUG had for preparing a recording for publication were five years old and no longer worked with the current video processing tools (command line argument changes). In addition, we needed better audio normalization, which sent me on a detour to package bs1770gain for Debian. Now this is in place, and it has become a lot easier to publish NUUG videos on Frikanalen.

15th June 2015

It is a bit of work to figure out the ownership structure of companies in Norway. The information is publicly available, but one needs to recursively look up ownership for all owners to figure out the complete ownership graph of a given set of companies. To save me the work in the future, I wrote a script to do this automatically, outputting the ownership structure using the Graphviz/dotty format. The data source is web scraping from Proff, because I failed to find a useful source directly from the official keepers of the ownership data, Brønnøysundsregistrene.

To get an ownership graph for a set of companies, fetch the code from git and run it using the organisation number. I'm using the Norwegian newspaper Dagbladet as an example here, as its ownership structure is very simple:

% time ./bin/eierskap-dotty 958033540 > dagbladet.dot

real    0m2.841s
user    0m0.184s
sys     0m0.036s
%

The script accepts several organisation numbers on the command line, allowing a cluster of companies to be graphed in the same image. The resulting dot file for the example above looks like this. The edges are labeled with the ownership percentage, and the nodes use the organisation number as their name and the company name as the label:

digraph ownership {
rankdir = LR;
"Aller Holding A/s" -> "910119877" [label="100%"]
"910119877" -> "998689015" [label="100%"]
"998689015" -> "958033540" [label="99%"]
"974530600" -> "958033540" [label="1%"]
"958033540" [label="AS DAGBLADET"]
"998689015" [label="Berner Media Holding AS"]
"974530600" [label="Dagbladets Stiftelse"]
"910119877" [label="Aller Media AS"]
}

To view the ownership graph, run "dotty dagbladet.dot" or convert it to a PNG using "dot -T png dagbladet.dot > dagbladet.png". The result can be seen below:

Note that I suspect the "Aller Holding A/S" entry to be incorrect data in the official ownership register, as that name is not registered in the official company register for Norway. The ownership register is sensitive to typos, and there seems to be no strict checking of the ownership links.

Let me know if you improve the script or find better data sources. The code is licensed according to GPL 2 or newer.

Update 2015-06-15: Since the initial post I've been told that "Aller Holding A/S" is a Danish company, which explains why it does not have a Norwegian organisation number. I've also been told that there is a web services API available from Brønnøysundsregistrene, for those willing to accept the terms or pay the price.

11th June 2015

Television loudness is a source of frustration for viewers everywhere. Some channels are very loud, others are less loud, and ads tend to be turned up very loud to get the attention of the viewers, and the viewers do not like this. This fact is well known to the TV channels. See for example the BBC white paper "Terminology for loudness and level dBTP, LU, and all that" from 2011 for a summary of the problem domain. To better address the need for even loudness, the TV channels got together several years ago to agree on a new way to measure loudness in digital files, as one step in standardizing loudness. From this came the ITU-R standard BS.1770, "Algorithms to measure audio programme loudness and true-peak audio level".

The ITU-R BS.1770 specification describes an algorithm to measure loudness in LUFS (Loudness Units, referenced to Full Scale). But having a way to measure is not enough. To get the same loudness across TV channels, one also needs to decide which value to standardize on. For European TV channels, this was done in the EBU Recommendation R128, "Loudness normalisation and permitted maximum level of audio signals", which specifies a recommended level of -23 LUFS. In Norway, I have been told that NRK, TV2, MTG and SBS have decided among themselves to follow the R128 recommendation for playout from 2016-03-01.

There is free software available to measure and adjust the loudness level using LUFS. In Debian, I am aware of a library named libebur128 able to measure the loudness, and since yesterday morning also a new binary named bs1770gain capable of both measuring and adjusting, which was uploaded and is waiting for NEW processing. I plan to maintain the latter in Debian under the Debian multimedia umbrella.

The free software based TV channel I am involved in, Frikanalen, plans to follow the R128 recommendation as soon as we can adjust the software to do so, and the bs1770gain tool seems like a good fit for the part of the puzzle that measures the loudness of new video uploaded to Frikanalen. Personally, I plan to use bs1770gain to adjust the loudness of videos I upload to Frikanalen on behalf of the NUUG member organisation. The program seems to be able to measure the LUFS value of any media file handled by ffmpeg, but I've only successfully adjusted the LUFS value of WAV files. I suspect it should be able to adjust it for all the formats handled by ffmpeg.
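
As a way to cross check the measurements, ffmpeg itself can report the R128 loudness of a media file using its ebur128 filter, like this:

ffmpeg -nostats -i input.wav -filter_complex ebur128 -f null -

The integrated loudness in LUFS is printed in the summary at the end of the run.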

10th May 2015

5 days ago, the Norwegian Parliament decided, unanimously, that all citizens of Norway, no matter if they are suspected of something criminal or not, are required to give fingerprints to the police (vote details from Holder de ord). The law makes it sound like it will be optional, but in a few years there will be no option any more. The ID will be required to vote, to get a bank account, a bank card, to change address at the post office, to receive an electronic ID or to get a driver's license, and for many other tasks required to function in Norway. The banks plan to stop providing their own ID on the bank cards when this new national ID is introduced, and the national road authorities plan to change the driver's license to no longer be usable as an identity card. In effect, to function as a citizen in Norway a national ID card will be required, and to get it one needs to provide one's fingerprints to the police.

In addition to handing the fingerprint to the police (who promised not to make a copy of the fingerprint image at that point in time, but say nothing about doing it later), a picture of the fingerprint will be stored on the RFID chip, along with a picture of the face and other information about the person. Some of the information will be encrypted, but the encryption will be the same system as currently used in the passports. The codes to decrypt will be available to a lot of government offices and their suppliers around the globe, but for those that do not know anyone in those circles it is good to know that the encryption is already broken. And the chips can be read from 70 meters away. This can be mitigated a bit by keeping the card in a Faraday cage (a metal box or metal wire container), but one will be required to take it out of there often enough to expose one's private and personal information to a lot of people that have no business getting access to that information.

The new Norwegian national IDs are a vehicle for identity theft, and I feel sorry for us all, having politicians accepting such an invasion of privacy without any objections. So are the Norwegian passports, but it has been possible to function in Norway without those so far. That option is going away with the passing of the new law. In this, I envy the Germans, because for them it is optional how much biometric information is stored in their national ID.

And if forced collection of fingerprints was not bad enough, the information collected in the national ID card register can be handed over to foreign intelligence services and police authorities, "when extradition is not considered disproportionate".

Update 2015-05-12: For those unable to believe that the Parliament really could make such a decision, I wrote a summary of the sources I have for concluding the way I do (Norwegian only, as the sources are all in Norwegian).

1st May 2015

Many years ago, a friend of mine calculated how much it would cost to store the sound of all phone calls in Norway, and came up with a cost of around 20 million NOK (2.4 mill EUR) for all the calls in a year. I got curious and wondered what the same calculation would look like today. To do so one needs an idea of how much data storage is needed for each minute of sound, how many minutes all the calls in Norway sum up to, and the cost of data storage.

The 2005 numbers are from digi.no, the 2012 numbers are from an NKOM report, and I got the 2013 numbers after asking NKOM via email. I was told the numbers for 2014 will be presented May 20th, and decided not to wait for those, as I doubt they will be very different from the numbers for 2013.

The amount of data storage per minute of sound depends on the wanted quality, and for phone calls it is generally believed that 8 Kbit/s is enough. See for example a summary on voice quality from Cisco for some alternatives. 8 Kbit/s is 60 Kbytes/min, and this can be multiplied with the number of call minutes to get the storage requirement.

Storage prices vary a lot, depending on speed, backup strategies, availability requirements etc. But a simple way to calculate is to use the price of a TiB disk (around 1000 NOK / 120 EUR) and double it to take space, power and redundancy into account. It could be much higher with high speed and strong redundancy requirements.

But back to the question: what would it cost to store all phone calls in Norway? Not much. Here is a small table showing the estimated cost, which is within the budget constraints of most medium and large organisations:

Year  Call minutes    Size     Price in NOK / EUR
2005  24 000 000 000  1.3 PiB  3 mill / 358 000
2012  18 000 000 000  1.0 PiB  2.2 mill / 262 000
2013  17 000 000 000  950 TiB  2.1 mill / 250 000
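
The table numbers can be sanity checked with a bit of shell arithmetic. Here is a minimal sketch using the 2013 row; rounding explains the small differences from the table:

#!/bin/bash
# Rough storage and cost estimate for one year of call minutes.
minutes=17000000000                  # call minutes in Norway, 2013
bytes=$((minutes * 60000))           # 8 Kbit/s is 60 000 bytes per minute
tib=$((bytes / 1024 / 1024 / 1024 / 1024))
nok=$((tib * 2000))                  # ~1000 NOK per TiB disk, doubled
echo "$tib TiB, roughly $nok NOK"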

This is the cost of buying the storage. Maintenance needs to be taken into account too, but calculating that is left as an exercise for the reader. It is obvious to me from those numbers that recording the sound of all phone calls in Norway is not going to be stopped because it is too expensive. I wonder if someone is already collecting the data?

26th April 2015

I am happy to report that the Debian Edu team sent out this announcement today:

the Debian Edu / Skolelinux project is pleased to announce the first
*beta* release of Debian Edu "Jessie" 8.0+edu0~b1, which for the first
time is composed entirely of packages from the current Debian stable
release, Debian 8 "Jessie".

As most reading this will know, Debian "Jessie" hasn't actually been
released yet. The release is still in progress but should finish
later today. ;)

We expect to make a final release of Debian Edu "Jessie" in the coming
weeks, timed with the first point release of Debian Jessie. Upgrades
from this beta release of Debian Edu Jessie to the final release will
be possible and encouraged!

Please report feedback to debian-edu@lists.debian.org and/or submit
bugs: http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

Debian Edu - sometimes also known as "Skolelinux" - is a complete
operating system for schools, universities and other
organisations. Through its pre-prepared installation profiles
administrators can install servers, workstations and laptops which
will work in harmony on the school network.  With Debian Edu, the
teachers themselves or their technical support staff can roll out a
complete multi-user, multi-machine study environment within hours or
days.

Debian Edu is already in use at several hundred schools all over the
world, particularly in Germany, Spain and Norway. Installations come
with hundreds of applications pre-installed, plus the whole Debian
archive of thousands of compatible packages within easy reach.

For those who want to give Debian Edu Jessie a try, download and
installation instructions are available, including detailed
instructions in the manual explaining the first steps, such as setting
up a network or adding users.  Please note that the password for the
user you are prompted for during installation must have a length of at
least 5 characters!

== Where to download ==

A multi-architecture CD / usbstick image (649 MiB) for network booting
can be downloaded at the following locations:

    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-CD.iso . 

The SHA1SUM of this image is: 54a524d16246cddd8d2cfd6ea52f2dd78c47ee0a

Alternatively an extended DVD / usbstick image (4.9 GiB) is also
available, with more software included (saving additional download
time):

    http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso
    rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~b1-USB.iso .

The SHA1SUM of this image is: fb1f1504a490c077a48653898f9d6a461cb3c636

Sources are available from the Debian archive, see
http://ftp.debian.org/debian-cd/8.0.0/source/ for some download
options.

== Debian Edu Jessie manual in seven languages ==

Please see https://wiki.debian.org/DebianEdu/Documentation/Jessie/ for
the English version of the Debian Edu Jessie manual.

This manual has been fully translated to German, French, Italian,
Danish, Dutch and Norwegian Bokmål. A partly translated version exists
for Spanish.  See http://maintainer.skolelinux.org/debian-edu-doc/ for
the online version of the translated manuals.

More information about Debian 8 "Jessie" itself is provided in the
release notes and the installation manual:
- http://www.debian.org/releases/jessie/releasenotes
- http://www.debian.org/releases/jessie/installmanual


== Errata / known problems ==

    It takes up to 15 minutes for a changed hostname to be updated via
    DHCP (#780461).

    The hostname script fails to update LTSP server hostname (#783087). 

Workaround: run update-hostname-from-ip on the client to update the
hostname immediately.

Check https://wiki.debian.org/DebianEdu/Status/Jessie for a possibly
more current and complete list.

== Some more details about Debian Edu 8.0+edu0~b1 Codename Jessie released 2015-04-25 ==

=== Software updates ===

Everything which is new in Debian 8 Jessie, e.g.:

 * Linux kernel 3.16.7-ckt9; for the i386 architecture, support for
   i486 processors has been dropped; oldest supported ones: i586 (like
   Intel Pentium and AMD K5).

 * Desktop environments KDE Plasma Workspaces 4.11.13, GNOME 3.14,
   Xfce 4.12, LXDE 0.5.6
   * new optional desktop environment: MATE 1.8
   * KDE Plasma Workspaces is installed by default; to choose one of
     the others see the manual.
 * the browsers Iceweasel 31 ESR and Chromium 41
 * LibreOffice 4.3.3
 * GOsa 2.7.4
 * LTSP 5.5.4
 * CUPS print system 1.7.5
 * new boot framework: systemd
 * Educational toolbox GCompris 14.12
 * Music creator Rosegarden 14.02
 * Image editor Gimp 2.8.14
 * Virtual stargazer Stellarium 0.13.1
 * golearn 0.9
 * tuxpaint 0.9.22
 * New version of debian-installer from Debian Jessie.
 * Debian Jessie includes about 43000 packages available for installation.
 * More information about Debian 8 Jessie is provided in its release
   notes and the installation manual, see the link above.

=== Installation changes ===

    Installations done via PXE now also install firmware automatically
    for the hardware present.

=== Fixed bugs ===

A number of bugs have been fixed in this release; the most noticeable
from a user perspective:

 * Inserting incorrect DNS information in Gosa will no longer break
   DNS completely, but instead stop DNS updates until the incorrect
   information is corrected (710362)

 * shutdown-at-night now shuts the system down if gdm3 is used (775608). 

=== Sugar desktop removed ===

As the Sugar desktop was removed from Debian Jessie, it is also not
available in Debian Edu jessie.


== About Debian Edu / Skolelinux ==

Debian Edu, also known as Skolelinux, is a Linux distribution based on
Debian providing an out-of-the box environment of a completely
configured school network. Directly after installation a school server
running all services needed for a school network is set up just
waiting for users and machines being added via GOsa², a comfortable
Web-UI. A netbooting environment is prepared using PXE, so after
initial installation of the main server from CD or USB stick all other
machines can be installed via the network. The provided school server
provides LDAP database and Kerberos authentication service,
centralized home directories, DHCP server, web proxy and many other
services.  The desktop contains more than 60 educational software
packages and more are available from the Debian archive, and schools
can choose between KDE, GNOME, LXDE, Xfce and MATE desktop
environment.

== About Debian ==

The Debian Project was founded in 1993 by Ian Murdock to be a truly
free community project. Since then the project has grown to be one of
the largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and
maintain Debian software. Available in 70 languages, and supporting a
huge range of computer types, Debian calls itself the universal
operating system.

== Thanks ==

Thanks to everyone making Debian and Debian Edu / Skolelinux happen!
You rock.
15th April 2015

It was a surprise to me to learn that the project to create a complete computer system for schools I'm involved in, Debian Edu / Skolelinux, was being used in India. But apparently it is, and I managed to get an interview with one of the friends of the project there, Shirish Agarwal.

Who are you, and how do you spend your days?

My name is Shirish Agarwal. I am based out of the educational and historical city of Pune, in the western state of Maharashtra, India. My bread comes from giving training, giving policy tips and doing free software installations for mom and pop shops in different fields from desktop publishing to retail, as well as work with a few software start-ups.

How did you get in contact with the Skolelinux / Debian Edu project?

It started innocently enough. I have been using Debian for a few years, and at one local minidebconf / debutsav I was asked if there was anything for schools or education. I had worked / played with free educational software such as Gcompris and Stellarium for my many nieces and nephews, so I researched and found Debian Edu, or Skolelinux as it was known then. Since then I have started using the various education meta-packages provided by the project.

What do you see as the advantages of Skolelinux / Debian Edu?

It's the closest I have seen to a package full of educational software, free and open (both literally and figuratively). Even if I take the simplest software, which is gcompris, the number of activities therein is amazing. Another one of the programs that I have liked for a long time is stellarium. Even pysycache is cool, except for a couple of issues I encountered, #781841 and #781842.

I prefer software installed on the system over web based solutions, as a web site can disappear at any time, but the software on disk has the possibility of a larger life span. Of course with both it's more a question of whether it has enough users who make it fun or sustainable or both for the developer per se.

What do you see as the disadvantages of Skolelinux / Debian Edu?

I do see that the Debian Edu team seems to be short-handed, and I think more effort should be made to make it popular and to ask for and accept help from people and the larger community wherever possible.

I don't see any disadvantage to using Skolelinux, apart from the fact that most apps are generic, which is good or bad depending on how you see it. However, I do acknowledge that the canvas is pretty big and there are lots of interesting ideas that could be done, but for reasons unknown are not done, or if done I don't know about them. Let me share some of the ideas (these are more upstream based, but still) I have had for a long time:

1. The classical maths question of two trains in opposing directions, each running at x kmph/mph at y distance apart, asking when they will meet and how far each would travel, and similar questions like these.

The computer is a fantastic system where questions like these can be drawn, animated and the methodology and answers teased out in an interactive manner. While sites such as the Ask Dr. Math FAQ on the Two Trains problem (as an example or point of inspiration) can be used, there is a lot more that can be done. I dunno if there is free software which does something like this. The idea being a blend of objects + animation + interaction which does this. The whole interaction could be gamified with points or sounds or colourful celebration whenever the user gets even part of the question or/and methodology right. That would help reinforce good behaviour. This understanding could be used to share/showcase everything from how the first wheel came to be, to evolution, to how astronomy started, physics and everything in-between.

One specific idea for the train example was having the Linux mascot on one train and the BSD or GNU mascot on the other train, meeting somewhere in-between. Characters from the Blender movies could also be used.

2. Loads of crossword puzzles with reference to subjects: We have enormous data sets in Wikipedia and Wiktionary. I don't think it should be a big job to design crossword puzzles. Using categories and sub-categories it should be doable to have Q&A single word answers from the existing data sets. What would make it easy or hard could be the length of the word plus the existence of many or few vowels, depending on the user's input.

3. Jigsaw puzzles: We already have a great software package called palapeli, with a number of slicers making it pretty interesting. What needs to be done is to download a large number of public domain and copyleft images, tease out and use IPTC tags to categorise them into nature, history etc., and let it loose. This could turn out to be a really huge collection of images. One source could be commons.wikimedia.org, others could be the huge collections of royalty-free stock photos. The potential is immense.

Apart from this, free software suffers in two directions: we lag a lot both in development (of new features per se) and in maintenance. This is more so in educational software, as these applications need to be timely and the opportunity cost of missing deadlines is immense. If we are able to solve the issues of funding for development and maintenance of such software, I don't see any big difficulties. I know of a few start-ups in and around India who would love to develop and maintain such software if the funding issues could be solved.

Which free software do you use daily?

That would be a huge list. Some of the software is obviously apt, aptitude, debdelta, leafpad, the shell of course (zsh nowadays), and quassel for IRC. In games I use shisen-sho, while card games are split evenly between kpat and Aisleriot. In desktops it's a tie between gnome-flashback and MATE.

Which strategy do you believe is the right one to use to get schools to use free software?

I think it should first start with using specific FOSS apps in whatever environment they are. If it's MS-Windows or Mac, so be it. Once they are used to the apps and there is buy-in from the school management, then it could be installed anywhere. Most people now understand the concept of a repository because of the various online stores, so it isn't hard to convince them on that front.

What is harder is having enough people with the technical skills and passion to service them. If you get buy-in from one or two teachers, then ideas like the ones above could also be assigned as projects.

I think where we fall short more than anything is in marketing. For instance, Debian has this whole range of fonts in its archive, but there isn't even a page where all those different fonts could be tried out in the Lorem Ipsum format by newcomers.

One of the issues faced constantly in installations is with updates and upgrades. People have this myth that each update and upgrade means the user interface will / has to change. I have seen this innumerable times. That perhaps is one of the reasons why browsers like Iceweasel / Firefox change user interfaces so much, not because it might be needed or functional, but because people believe that changed user interfaces are better. This can easily be seen in the user interface changes in almost every MS-Windows and Mac OS release.

The problems with Debian Edu for deployment are many. The biggest is the huge gap between what is taught in schools and what Debian Edu is aimed at.

My friends and I did teach on week-ends in a government school for around 2 years, and gathered some experience there. Some of the things we learnt/discovered there were:

  1. Most of the teachers are very territorial about their subjects and do not want you to teach anything outside the portion/syllabus given.
  2. They want any activity on the system to be in accordance with whatever is in the syllabus.
  3. There are huge barriers, both with the English language and at times with the objects used. For example, in gcompris you have objects falling down that you have to name, and if the falling object is a hat, or a fedora hat, it would not be as recognizable as, say, a Puneri Pagdi, so there is a need to inject local objects and words wherever possible. Especially for word games there are so many Hindi words which have become part of the English vocabulary (for instance in parley); those could be made into a Hinglish collection or something, but that is something for upstream to do.
7th April 2015

I am happy to let you all know that I'm going to the Open Source Developers' Conference Nordic 2015!

It takes place Friday 8th to Sunday 10th of May in Oslo, next to where I work, and I finally got around to submitting a talk proposal for it (dead link for most people until the talk is accepted). As part of my involvement with the Norwegian Unix User Group member association, I have been slightly involved in the planning of this conference for a while now, with a focus on organising a Civic Hacking Hackathon with our friends over at mySociety and Holder de ord. This part is named the 'My Society' track in the program. There is still space for more talks and participants. I hope to see you there.

Check out the talks submitted and accepted so far.

4th April 2015

During Easter I had some time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig. At the moment I am proofreading the finished text, looking for typos, inconsistent wordings and sentences that do not flow as they should. I'm more than two thirds done with the text, and welcome others to check the text up to chapter 13. The current status is available on the github project pages. You can also check out the PDF, EPUB and HTML versions available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

9th March 2015

The Norwegian Unix User Group, where I am a member, and where people interested in free software, open standards and UNIX-like operating systems like Linux and the BSDs come together, records our monthly technical presentations on video. The purpose is to document the talks and spread them to a wider audience. For this, the Norwegian nationwide open channel Frikanalen is a useful venue. Since a few days ago, when I figured out the REST API to program the channel time schedule, the channel has been filled with NUUG talks, related recordings and some Creative Commons licensed TED talks (from archive.org). I fill all "leftover bits" on the channel with content from NUUG, which at the moment is almost 17 of 24 hours every day.

The list of NUUG videos uploaded so far includes things like a one hour talk by John Perry Barlow when he visited Oslo, a presentation of Haiku, the BeOS re-implementation, the history of FiksGataMi, the Norwegian version of FixMyStreet, the good old Warriors of the Net video and many others.

We have a large backlog of NUUG talks not yet uploaded to Frikanalen, and plan to upload every useful bit to the channel to spread the word. I also hope to find useful recordings from the Chaos Computer Club and Debian conferences and spread them on the channel as well. But this requires locating the videos and their meta information (title, description, license, etc), and preparing the recordings for broadcast, and I have not yet had the spare time to focus on this. Perhaps you want to help? Please join us on IRC, #nuug on irc.freenode.net, if you want to help make this happen.

But as I said, the channel is already almost exclusively filled with technical topics, and if you want to learn something new today, check out the Ogg Theora web stream or use one of the other ways to get access to the channel. Unfortunately the Ogg Theora recoding for distribution still does not properly sync the video and sound. It is generated by recoding an internal MPEG transport stream with MPEG4 coded video (ie H.264) to Ogg Theora / Vorbis, and we have not been able to find a way that produces acceptable quality. Help needed, please get in touch if you know how to fix it using free software.

28th February 2015

Today I was happy to learn that the documentary Citizenfour by Laura Poitras finally will show up in Norway. According to the magazine Montages, a deal has finally been made for cinema distribution in Norway and the movie will have its premiere soon. This is great news. As part of my involvement with the Norwegian Unix User Group, a friend and I tried to get the movie to Norway ourselves, but obviously we were too late and Tor Fosse beat us to it. I am happy he did, as the movie will make its way to the public and we do not have to make it happen ourselves. The trailer can be seen on youtube, if you are curious what kind of film this is.

The whistle blower Edward Snowden really deserves political asylum here in Norway, but I am afraid he would not be safe.

25th February 2015

The Norwegian nationwide open channel Frikanalen is still going strong. It allows everyone to send the video they want on national television. It is a TV station administered completely using a web browser, running only Free Software, providing a REST API for administrators and members, and with distribution on the national DVB-T distribution network RiksTV. But only between 12:00 and 17:30 Norwegian time. This has finally changed, after many years with limited distribution. A few weeks ago, we set up an Ogg Theora stream via icecast to allow everyone with Internet access to check out the channel the rest of the day. This is presented on the Frikanalen web site now. And since a few days ago, the channel is also available via multicast on UNINETT, available for those using IPTV TVs and set-top boxes in the Norwegian National Research and Education network.

If you want to see what is on the channel, point your media player to one of these sources. The first should work with most players and browsers, while, as far as I know, the multicast UDP stream only works with VLC.
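
For example, using VLC and the icecast stream (the URL is inferred from the oggfwd command below):

vlc http://video.nuug.no:8000/frikanalen.ogv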

The Ogg Theora / icecast stream is not working well, as the video and audio are slightly out of sync. We have not been able to figure out how to fix it. It is generated by recoding an internal MPEG transport stream with MPEG4 coded video (ie H.264) to Ogg Theora / Vorbis, and the result is less than stellar. If you have ideas how to fix it, please let us know on frikanalen (at) nuug.no. We currently use this with ffmpeg2theora 0.29:

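# Deinterlace and scale the 25 fps input to 720x405, encode mono
# 48 kHz audio and ~700 kbit/s Theora video, and pipe the Ogg stream
# to oggfwd, which forwards it to the icecast server: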
./ffmpeg2theora.linux <OBE_gemini_URL.ts> -F 25 -x 720 -y 405 \
 --deinterlace --inputfps 25 -c 1 -H 48000 --keyint 8 --buf-delay 100 \
 --nosync -V 700 -o - | oggfwd video.nuug.no 8000 <pw> /frikanalen.ogv

If you get the multicast UDP stream working, please let me know, as I am curious how far the multicast stream reaches. It does not make it to my home network, nor any other commercially available network in Norway that I am aware of.

10th February 2015

Aftenposten, one of the largest newspapers in Norway, today reports that three of the nude body scanners are now being put to use at Gardermoen, the main airport in Norway. This way the travelers can have their body photographed without clothes when visiting Norway. Of course this horrible news is presented with a positive spin, stating that "now travelers can move past the security check point faster and more efficiently", but fails to mention that the machines in question take pictures of their nude bodies and store them internally in the computer, while only presenting a sketch figure of the body to the public. The article is written in a way that leaves the impression that the new machines do not take these nude pictures and only create the sketch figures. In reality the same nude pictures are still taken, but not presented to everyone. They are still available to the owners of the system and the people doing maintenance of the scanners, as long as they are taken and stored.

Wikipedia has more on full body scanners, including example images and a summary of the controversy about these scanners.

Personally I will decline to use these machines, as I believe strip searches of my body are a very intrusive attack on my privacy, and not something everyone should have to accept in order to travel.

8th February 2015

When running a TV station with both broadcast and web stream distribution, it is useful to know that the stream is working. As I am involved in the Norwegian open channel Frikanalen as part of my activity in the NUUG member organisation, I wrote a script to use mplayer to connect to a video stream, pick two images 35 seconds apart and compare them. If the images are missing or identical, something is probably wrong with the stream and an alarm should be triggered. The script is written as a Nagios plugin, allowing us to use Nagios to run the check regularly and sound the alarm when something is wrong. It is able to detect both a hanging and a broken video stream.
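
The idea can be sketched in a few lines of shell. This is a simplified illustration with a hypothetical stream URL, not the actual plugin; see the git repository mentioned below for the real code:

#!/bin/sh
# Simplified stream check: grab one frame now and one 35 seconds
# later, and compare them.  Exit codes follow the Nagios convention.
URL=http://example.com/stream.ogv   # hypothetical stream URL
DIR=$(mktemp -d)
grabframe() {
    # Let mplayer decode a single frame from the stream into a PNG.
    mplayer -really-quiet -ao null -vo png:outdir=$DIR -frames 1 "$URL"
    mv "$DIR/00000001.png" "$DIR/$1" 2>/dev/null
}
grabframe first.png
sleep 35
grabframe second.png
if [ ! -s "$DIR/first.png" ] || [ ! -s "$DIR/second.png" ]; then
    echo "CRITICAL: no image from stream"; exit 2
elif cmp -s "$DIR/first.png" "$DIR/second.png"; then
    echo "CRITICAL: stream appears frozen"; exit 2
else
    echo "OK: stream is alive"; exit 0
fi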

I just uploaded the code for the script into the Frikanalen git repository on github. If you run a TV station with web streaming, perhaps you can find it useful too.

Last year, the Frikanalen public TV station transitioned to using only Linux based free software to administer, schedule and distribute the TV content. The source code for the entire TV station is available from the Github project page. Everyone can use it to send their content on national TV, and we provide both a web GUI and a web API to add and schedule content. And thanks to last week's developer gathering and the following activity, we now have the schedule available as XMLTV too. There is still a lot of work left to do, especially with the process to add videos and with the scheduling, so your contribution is most welcome. Perhaps you want to set up your own TV station?

Update 2015-02-25: Got a tip from Uninett about their qstream monitoring system, which gathers connection time, jitter, packet loss and burst bandwidth usage. It looks useful for checking if UDP streams are working as they should.

12th January 2015

A few days ago, the Free Software Foundation announced a new video explaining Free Software in simple terms. The video, named User Liberation, is 3 minutes long, and I recommend showing it to everyone you know as a way to explain what Free Software is all about. Unfortunately several of the people I know do not understand English or Spanish, so it did not make sense to show it to them.

But today I was told that English subtitles were available and set out to provide Norwegian Bokmål subtitles based on these. The result has been sent to FSF and made available in a git repository provided by Github. Please let me know if you find errors or have improvements to the subtitles.

Update 2015-02-03: Since I published this post, FSF created a Libreplanet project to track subtitles for the video.

Tags: english, video.
30th December 2014

I am very happy that we in the Norwegian Unix User group (NUUG), spearheaded by Marius Halden from NUUG and Matthew Somerville from mySociety, finally managed to upgrade the code base for the Norwegian version of FixMyStreet. This was the first major update since 2011. The refurbished FiksGataMi is already live, and seems to hold up under the pressure. The press release and announcement went out this morning.

FixMyStreet is a web platform allowing citizens to easily report problems with public infrastructure to the responsible authorities. Think of it as a shared mail client with map support, allowing everyone to see what was already reported and comment on the reports in public.

19th December 2014

So, Sony caved in (according to Rob Lowe) and demonstrated that America lost its first cyberwar (according to Newt Gingrich). It should not surprise anyone, after the whistle blower Edward Snowden documented that the government of the USA and their allies for many years have done their best to make sure the technology used by its citizens is filled with security holes allowing the secret services to spy on its own population. No one in their right mind could believe that the ability to snoop on people all over the globe could only be used by the personnel authorized to do so by the president of the United States of America. If the capabilities are there, they will be used by friend and foe alike, and now they are being used to bring Sony to its knees.

I doubt it will be a lesson learned, and expect the USA to lose its next cyber war too, given how eager the western intelligence communities (and probably the non-western too, but that is less in the news) seem to be to continue their current dragnet surveillance practice.

There is a reason why China and others are trying to move away from Windows to Linux and other alternatives, and it is not to avoid sending their hard earned dollars to the Cayman Islands (or whatever tax haven Microsoft is using these days to collect the majority of its income). :)

22nd November 2014

By now, it is well known that Debian Jessie will not be using sysvinit as its boot system by default. But how can one keep using sysvinit in Jessie? It is fairly easy, and here are a few recipes, courtesy of Erich Schubert and Simon McVittie.

If you are already using Wheezy and want to upgrade to Jessie and keep sysvinit as your boot system, create a file /etc/apt/preferences.d/use-sysvinit with this content before you upgrade:

Package: systemd-sysv
Pin: release o=Debian
Pin-Priority: -1

This file will tell apt and aptitude not to consider installing systemd-sysv as part of any installation or upgrade solution when resolving dependencies, and thus tell them to avoid systemd as the default boot system. The end result should be that the upgraded system keeps using sysvinit.
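
One can verify that the pin took effect by running apt-cache policy, which should report a negative pin priority for the package:

% apt-cache policy systemd-sysv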

If you are installing Jessie for the first time, there is no way to get sysvinit installed by default (debootstrap, used by debian-installer, has no option for this), but one can tell the installer to switch to sysvinit before the first boot, either by using a kernel argument to the installer, or by adding a line to the preseed file used. First, the kernel command line argument:

preseed/late_command="in-target apt-get install --purge -y sysvinit-core"

Next, the line to use in a preseed file:

d-i preseed/late_command string in-target apt-get install --purge -y sysvinit-core

One can of course also do this after the first boot by installing the sysvinit-core package.

I recommend only using sysvinit if you really need it, as the sysvinit boot sequence in Debian has several hardware specific bugs on Linux, caused by the fact that it is unpredictable when hardware devices show up during boot. But on the other hand, the new default boot system still has a few rough edges I hope will be fixed before Jessie is released.

Update 2014-11-26: Inspired by a blog post by Torsten Glaser, added --purge to the preseed line.

10th November 2014

The right to communicate with your friends and family in private, without anyone snooping, is a right every citizen has in a liberal democracy. But this right is under serious attack these days.

A while back it occurred to me that one way to make the dragnet surveillance conducted by NSA, GCHQ, FRA and others (and confirmed by the whistleblower Snowden) more expensive for Internet email is to deliver all email using SMTP via Tor. Such an SMTP option would be a nice addition to the FreedomBox project, if we could send email between FreedomBox machines without leaking metadata about the emails to the people peeking on the wire. I proposed this on the FreedomBox project mailing list in October and got a lot of useful feedback and suggestions. It also became obvious to me that this was not a novel idea, as the same idea was tested and documented by Johannes Berg as early as 2006, and both the Mailpile and the Cables systems propose a similar method / protocol to pass emails between users.

To implement such a system one needs to set up a Tor hidden service providing the SMTP protocol on port 25, and use email addresses looking like username@hidden-service-name.onion. With such addresses, connections to port 25 on hidden-service-name.onion using Tor will go to the correct SMTP server. To do this, one needs to configure the Tor daemon to provide the hidden service, and the mail server to accept emails for this .onion domain. To learn more about Exim configuration in Debian and test the design provided by Johannes Berg in his FAQ, I set out yesterday to create a Debian package making it trivial to set up such an SMTP over Tor service based on Debian. Getting it to work was fairly easy, and the source code for the Debian package is available from github. I plan to move it into Debian if further testing proves this to be a useful approach.
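
The Tor side of such a setup boils down to a few lines in /etc/tor/torrc, roughly like this (the service directory name is just an example):

HiddenServiceDir /var/lib/tor/smtp-hidden-service/
HiddenServicePort 25 127.0.0.1:25

After restarting tor, the hostname file in the service directory contains the .onion name to use in the email addresses.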

If you want to test this, set up a blank Debian machine without any mail system installed (or run apt-get purge exim4-config to get rid of exim4). Install tor, clone the git repository mentioned above, build the deb and install it on the machine. Next, run /usr/lib/exim4-smtorp/setup-exim-hidden-service and follow the instructions to get the service up and running. Restart tor and exim when it is done, and test mail delivery using swaks like this:

torsocks swaks --server dutlqrrmjhtfa3vp.onion \
  --to fbx@dutlqrrmjhtfa3vp.onion

This will test the SMTP delivery using tor. Replace the email address with your own address to test your server. :)

The setup procedure is still too complex, and I hope it can be made easier and more automatic. Especially the tor setup needs more work. Also, the package includes a tor-smtp tool written in C, but its task should probably be rewritten in some script language to make the deb architecture independent. It would probably also make the code easier to review. The tor-smtp tool currently needs to listen on a socket for exim to talk to it, and is started using xinetd. It would be better if no daemon and no socket were needed. I suspect it is possible to get exim to run a command line tool for delivery instead of talking to a socket, and hope to figure out how in a future version of this system.

Until I wipe my test machine, I can be reached using the fbx@dutlqrrmjhtfa3vp.onion mail address, deliverable over SMTorP. :)

27th October 2014

I am happy to report that I on behalf of the Debian Edu team just sent out this announcement:

The Debian Edu Team is pleased to announce the release of Debian Edu
Jessie 8.0+edu0~alpha0

Debian Edu is a complete operating system for schools. Through its
various installation profiles you can install servers, workstations
and laptops which will work together on the school network. With
Debian Edu, the teachers themselves or their technical support can
roll out a complete multi-user multi-machine study environment within
hours or a few days. Debian Edu comes with hundreds of applications
pre-installed, but you can always add more packages from Debian.

For those who want to give Debian Edu Jessie a try, download and
installation instructions are available, including detailed
instructions in the manual[1] explaining the first steps, such as
setting up a network or adding users. Please note that the password
for the user you are prompted for during installation must have a length
of at least 5 characters!

 [1] <URL: https://wiki.debian.org/DebianEdu/Documentation/Jessie >

Would you like to give your school's computer a longer life? Are you
tired of sneaker administration, running from computer to computer
reinstalling the operating system? Would you like to administrate all
the computers in your school using only a couple of hours every week?
Check out Debian Edu Jessie!

Skolelinux is used by at least two hundred schools all over the world,
mostly in Germany and Norway.

About Debian Edu and Skolelinux
===============================

Debian Edu, also known as Skolelinux[2], is a Linux distribution based
on Debian providing an out-of-the box environment of a completely
configured school network. Immediately after installation a school
server running all services needed for a school network is set up just
waiting for users and machines being added via GOsa², a comfortable
Web-UI. A netbooting environment is prepared using PXE, so after
initial installation of the main server from CD or USB stick all other
machines can be installed via the network.  The provided school server
provides LDAP database and Kerberos authentication service,
centralized home directories, DHCP server, web proxy and many other
services.  The desktop contains more than 60 educational software
packages[3] and more are available from the Debian archive, and
schools can choose between KDE, Gnome, LXDE, Xfce and MATE desktop
environment.

 [2] <URL: http://www.skolelinux.org/ >
 [3] <URL: http://www.hungry.com/~pere/blog/Educational_applications_included_in_Debian_Edu___Skolelinux__the_screenshot_collection____.html >

Full release notes and manual
=============================

Below the download URLs there is a list of some of the new features
and bugfixes of Debian Edu 8.0+edu0~alpha0 Codename Jessie. The full
list is part of the manual. (See the feature list in the manual[4] for
the English version.) For some languages manual translations are
available, see the manual translation overview[5].

 [4] <URL: https://wiki.debian.org/DebianEdu/Documentation/Jessie/Features >
 [5] <URL: http://maintainer.skolelinux.org/debian-edu-doc/ >

Where to get it
---------------

To download the multiarch netinstall CD release (624 MiB) you can use

 * ftp://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso
 * http://ftp.skolelinux.org/skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso
 * rsync -avzP ftp.skolelinux.org::skolelinux-cd/debian-edu-8.0+edu0~alpha0-CD.iso .

The SHA1SUM of this image is: 361188818e036ce67280a572f757de82ebfeb095

New features for Debian Edu 8.0+edu0~alpha0 Codename Jessie released 2014-10-27
===============================================================================


Installation changes
--------------------

 * PXE installation now installs firmware automatically for the hardware present.

Software updates
----------------

Everything which is new in Debian Jessie 8.0, eg:

 * Linux kernel 3.16.x
 * Desktop environments KDE "Plasma" 4.11.12, GNOME 3.14, Xfce 4.10,
   LXDE 0.5.6 and MATE 1.8 (KDE "Plasma" is installed by default; to
   choose one of the others see manual.)
 * the browsers Iceweasel 31 ESR and Chromium 38 
 * !LibreOffice 4.3.3
 * GOsa 2.7.4
 * LTSP 5.5.4
 * CUPS print system 1.7.5
 * new boot framework: systemd
 * Educational toolbox GCompris 14.07 
 * Music creator Rosegarden 14.02
 * Image editor Gimp 2.8.14
 * Virtual stargazer Stellarium 0.13.0
 * golearn 0.9
 * tuxpaint 0.9.22
 * New version of debian-installer from Debian Jessie.
 * Debian Jessie includes about 42000 packages available for
   installation.
 * More information about Debian Jessie 8.0 is provided in the release
   notes[6] and the installation manual[7].

 [6] <URL: http://www.debian.org/releases/jessie/releasenotes >
 [7] <URL: http://www.debian.org/releases/jessie/installmanual >

Fixed bugs
----------

 * Inserting incorrect DNS information in Gosa will no longer break
   DNS completely, but instead stop DNS updates until the incorrect
   information is corrected (Debian bug #710362)
 * and many others.

Documentation and translation updates
------------------------------------- 

 * The Debian Edu Jessie Manual is fully translated to German, French,
   Italian, Danish and Dutch. Partly translated versions exist for
   Norwegian Bokmal and Spanish.

Other changes
-------------

 * Due to new Squid settings, powering off or rebooting the main
   server takes more time.
 * To manage printers localhost:631 has to be used, currently www:631
   doesn't work.

Regressions / known problems
----------------------------

 * Installing LTSP chroot fails with a bug related to eatmydata about
   exim4-config failing to run its postinst (see Debian bug #765694
   and Debian bug #762103).
 * Munin collection is not properly configured on clients (Debian bug
   #764594).  The fix is available in a newer version of munin-node.
 * PXE setup for Main Server and Thin Client Server setup does not
   work when installing on a machine without direct Internet access.
   Will be fixed when Debian bug #766960 is fixed in Jessie.

See the status page[8] for the complete list.

 [8] <URL: https://wiki.debian.org/DebianEdu/Status/Jessie >

How to report bugs
------------------

<URL: http://wiki.debian.org/DebianEdu/HowTo/ReportBugs >

About Debian
============

The Debian Project was founded in 1993 by Ian Murdock to be a truly
free community project. Since then the project has grown to be one of
the largest and most influential open source projects. Thousands of
volunteers from all over the world work together to create and
maintain Debian software. Available in 70 languages, and supporting a
huge range of computer types, Debian calls itself the universal
operating system.

Contact Information
For further information, please visit the Debian web pages[9] or send
mail to press@debian.org.

 [9] <URL: http://www.debian.org/ >
23rd October 2014

I spent last weekend at Makercon Nordic, a great conference and workshop for makers in Norway and the surrounding countries. I had volunteered on behalf of the Norwegian Unix Users Group (NUUG) to video record the talks, and we had a great and exhausting time recording the entire day, two days in a row. There were only two of us, Hans-Petter and me, and we used the regular video equipment for NUUG, with a dvswitch, a camera and a VGA to DV converter box, and mixed video and slides live.

Hans-Petter did the post-processing, consisting of uploading the around 180 GiB of raw video to Youtube, and the result is now becoming public on the MakerConNordic account. The videos carry the license NUUG always uses on our recordings, which is Creative Commons Navngivelse-Del på samme vilkår 3.0 Norge. Many great talks are available. Check it out! :)

Tags: english, nuug, video.
22nd October 2014

If you ever had to moderate a mailman list, like the ones on alioth.debian.org, you know the web interface is fairly slow to operate. First you visit one web page, enter the moderation password and get a new page shown with a list of all the messages to moderate and various options for each email address. This takes a while for every list you moderate, and you need to do it regularly to do a good job as a list moderator. But there is a quick alternative, the listadmin program. It allows you to check lists for new messages to moderate in a fraction of a second. Here is a test run on two lists I recently took over:

% time listadmin xiph
fetching data for pkg-xiph-commits@lists.alioth.debian.org ... nothing in queue
fetching data for pkg-xiph-maint@lists.alioth.debian.org ... nothing in queue

real    0m1.709s
user    0m0.232s
sys     0m0.012s
%

In 1.7 seconds I had checked two mailing lists and confirmed that there are no messages in the moderation queue. Every morning I currently moderate 68 mailman lists, and it normally takes around two minutes. When I took over the two pkg-xiph lists above a few days ago, there were 400 emails waiting in the moderator queue. It took me less than 15 minutes to process them all using the listadmin program.

If you install the listadmin package from Debian and create a file ~/.listadmin.ini with content like this, the moderation task is a breeze:

username username@example.org
spamlevel 23
default discard
discard_if_reason "Posting restricted to members only. Remove us from your mail list."

password secret
adminurl https://{domain}/mailman/admindb/{list}
mailman-list@lists.example.com

password hidden
other-list@otherserver.example.org

There are other options to set as well. Check the manual page to learn the details.

If you are forced to moderate lists on a mailman installation where the SSL certificate is self signed or not properly signed by a generally accepted signing authority, you can set an environment variable when calling listadmin to disable SSL verification:

PERL_LWP_SSL_VERIFY_HOSTNAME=0 listadmin

If you want to moderate a subset of the lists you take care of, you can provide an argument to the listadmin script like I do in the initial screen dump (the xiph argument). Using an argument, only lists matching the argument string will be processed. This makes it quick to accept messages if you notice the moderation request in your email.

Without the listadmin program, I would never be the moderator of 68 mailing lists, as I simply would not have the time to spend on that if the process were any slower. The listadmin program has saved me hours of time I could spend elsewhere over the years. It truly is nice free software.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Update 2014-10-27: Added the missing 'username' statement in the configuration example. Also, I've been told that the PERL_LWP_SSL_VERIFY_HOSTNAME=0 setting does not work for everyone. Not sure why.

17th October 2014

When PXE installing laptops with Debian, I often run into the problem that the WiFi card requires some firmware to work properly. And it has been a pain to fix this using preseeding in Debian. Normally something more is needed. But thanks to my isenkram package and its recent tasksel extension, it has now become easy to do this using simple preseeding.

The isenkram-cli package provides tasksel tasks which will install firmware for the hardware found in the machine (actually, requested by the kernel modules for the hardware). (It can also install user space programs supporting the hardware detected, but that is not the focus of this story.)

To get this working in the default installation, two preseeding values are needed. First, the isenkram-cli package must be installed into the target chroot (aka the hard drive) before tasksel is executed in the pkgsel step of the debian-installer system. This is done by preseeding the base-installer/includes debconf value to include the isenkram-cli package. The package name is then passed to debootstrap for installation. With the isenkram-cli package in place, tasksel will automatically use the isenkram tasks to detect hardware specific packages for the machine being installed and install them, because isenkram-cli contains tasksel tasks.

Second, one needs to enable the non-free APT repository, because most firmware unfortunately is non-free. This is done by preseeding the apt-mirror-setup step. This is unfortunate, but for a lot of hardware it is the only option in Debian.

The end result is two lines needed in your preseeding file to get firmware installed automatically by the installer:

base-installer base-installer/includes string isenkram-cli
apt-mirror-setup apt-setup/non-free boolean true

The current version of isenkram-cli in testing/jessie will install both firmware and user space packages when using this method. It also does not work well, so use version 0.15 or later. Installing both firmware and user space packages might give you a bit more than you want, so I decided to split the tasksel task in two, one for firmware and one for user space programs. The firmware task is enabled by default, while the one for user space programs is not. This split is implemented in the package currently in unstable.

If you decide to give this a go, please let me know (via email) how this recipe works for you. :)

So, I bet you are wondering how this can work. First and foremost, it works because tasksel is modular, and driven by whatever files it finds in /usr/lib/tasksel/ and /usr/share/tasksel/. So the isenkram-cli package places two files for tasksel to find. First there is the task description file (/usr/share/tasksel/descs/isenkram.desc):

Task: isenkram-packages
Section: hardware
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: show show
Relevance: 8
Packages: for-current-hardware

Task: isenkram-firmware
Section: hardware
Description: Hardware specific firmware packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific firmware
 packages are proposed.
Test-new-install: mark show
Relevance: 8
Packages: for-current-hardware-firmware

The key parts are the Test-new-install line, which indicates how the task should be handled, and the Packages line, referencing a script in /usr/lib/tasksel/packages/. The scripts use other scripts to get a list of packages to install. The for-current-hardware-firmware script looks like this, listing relevant firmware for the machine:

#!/bin/sh
#
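# List the firmware packages relevant for the hardware in this
# machine, as detected by isenkram.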
PATH=/usr/sbin:$PATH
export PATH
isenkram-autoinstall-firmware -l

With those two pieces in place, the firmware is installed by tasksel during the normal d-i run. :)

If you want to test what tasksel will install when isenkram-cli is installed, run DEBIAN_PRIORITY=critical tasksel --test --new-install to get the list of packages that tasksel would install.

Debian Edu will be the pilot for testing this feature, as isenkram is used there now to install firmware, replacing the earlier scripts.

4th October 2014

Today I came across an unexpected Ubuntu boot screen. Above the bread shelf in the ICA shop at Storo in Oslo, the grub menu of Ubuntu with Linux kernel 3.2.0-23 (ie probably version 12.04 LTS) was stuck on a screen normally showing the bread types and prices:

If it had booted as it was supposed to, I would never have known about this hidden Linux installation. It is interesting what errors can reveal.

Tags: debian, english.
4th October 2014

The lsdvd project got a new set of developers a few weeks ago, after the original developer decided to step down and pass the project to fresh blood. This project is now maintained by Petter Reinholdtsen and Steve Dibb.

I just wrapped up a new lsdvd release, available in git or from the download page. This is the changelog dated 2014-10-03 for version 0.17.

  • Ignore 'phantom' audio, subtitle tracks
  • Check for garbage in the program chains, which indicates that a track is non-existent, to work around additional copy protection
  • Fix displaying content type for audio tracks, subtitles
  • Fix palette display of first entry
  • Fix include orders
  • Ignore read errors in titles that would not be displayed anyway
  • Fix the chapter count
  • Make sure the array size and the array limit used when initialising the palette size are the same.
  • Fix array printing.
  • Correct subsecond calculations.
  • Add sector information to the output format.
  • Clean up code to be closer to ANSI C and compile without warnings with more GCC compiler warnings.

This change brings together patches for lsdvd in use in various Linux and Unix distributions, as well as patches submitted to the project over the last nine years. Please check it out. :)

26th September 2014

The Debian Edu / Skolelinux project provides a Linux solution for schools, including a powerful desktop with education software, a central server providing web pages, user database, user home directories, central login and PXE boot of both clients without disk and the installer to install Debian Edu on machines with disk (and a few other services perhaps too small to mention here). We in the Debian Edu team are currently working on the Jessie based version, trying to get everything in shape before the freeze, to avoid having to maintain our own package repository in the future. The current status can be seen on the Debian wiki, and there is still heaps of work left. Some fatal problems block testing, breaking the installer, but it is possible to work around these to get installed anyway. Here is a recipe on how to get the installation limping along.

First, download the test ISO via ftp, http or rsync (use ftp.skolelinux.org::cd-edu-testing-nolocal-netinst/debian-edu-amd64-i386-NETINST-1.iso). The ISO build was broken on Tuesday, so we do not get a new ISO every 12 hours or so, but thankfully the ISO we already got can be installed with some tweaking.

When you get to the Debian Edu profile question, go to tty2 (use Alt-Ctrl-F2), run

nano /usr/bin/edu-eatmydata-install

and add 'exit 0' as the second line, disabling the eatmydata optimization. Return to the installation, select the profile you want and continue. Without this change, exim4-config will fail to install due to a known bug in eatmydata.

When you get the grub question at the end, answer /dev/sda (or if this does not work, figure out what your correct value would be; all my test machines need /dev/sda, so I have no advice if that does not fit your setup).

If you installed a profile including a graphical desktop, log in as root after the initial boot from the hard drive, and install the education-desktop-XXX metapackage. XXX can be kde, gnome, lxde, xfce or mate. If you want several desktop options, install more than one metapackage. Once this is done, reboot and you should have a working graphical login screen. This workaround should no longer be needed once the education-tasks package version 1.801 enters testing in two days.
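
For example, to get the MATE desktop:

apt-get update
apt-get install education-desktop-mate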

I believe the ISO build will start working in two days, when the new tasksel package enters testing and Steve McIntyre gets a chance to update the debian-cd git repository. The eatmydata, grub and desktop issues are already fixed in unstable and testing, and should show up on the ISO as soon as the ISO build starts working again. Well, the eatmydata optimization is really just disabled. The proper fix requires an upload by the eatmydata maintainer applying the patch provided in bug #702711. The rest have proper fixes in unstable.

I hope this gets you going with the installation testing, as we are quickly running out of time trying to get our Jessie based installation ready before the distribution freeze in a month.

25th September 2014

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc, in XML, Perl or human readable format. But lsdvd had not seen any new development since 2006, and had a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.

I've been in touch with a Gentoo developer and the Debian maintainer interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it in too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)

16th September 2014

The Debian installer could be a lot quicker. When we install more than 2000 packages in Skolelinux / Debian Edu using tasksel in the installer, unpacking the binary packages takes forever. Part of the slow I/O issue was discussed in bug #613428, about too much file system sync-ing done by dpkg, which is the package responsible for unpacking the binary packages. Other parts (like code executed by postinst scripts) might also sync to disk during installation. All this sync-ing to disk does not really make sense to me. If the machine crashes half-way through, I start over; I do not try to salvage the half installed system. So the failure that sync-ing is supposed to protect against, a hardware or system crash, is not really relevant while the installer is running.

A few days ago, I thought of a way to get rid of all the file system sync()-ing in a fairly non-intrusive way, without the need to change the code in several packages. The idea is not new, but I have not heard anyone propose the approach using dpkg-divert before. It depends on the small and clever package eatmydata, which uses LD_PRELOAD to replace the system functions for syncing data to disk with functions doing nothing, thus allowing programs to live dangerously while speeding up disk I/O significantly. Instead of modifying the implementation of dpkg, apt and tasksel (which are the packages responsible for selecting, fetching and installing packages), it occurred to me that we could just divert the programs away, replacing them with simple shell wrappers calling "eatmydata $program $@", to get the same effect. Two days ago I decided to test the idea, and wrapped up a simple implementation for the Debian Edu udeb.

The effect was stunning. In my first test it reduced the running time of the pkgsel step (installing tasks) from 64 to less than 44 minutes (20 minutes shaved off the installation) on an old Dell Latitude D505 machine. I am not quite sure what the optimised time would have been, as I messed up the testing a bit, causing the debconf priority to get low enough for two questions to pop up during installation. As soon as I saw the questions I moved the installation along, but I do not know how long the questions were holding up the installation. I did some more measurements using Debian Edu Jessie, and got the results below. The time measured is the time stamp in /var/log/syslog between the "pkgsel: starting tasksel" and the "pkgsel: finishing up" lines, if you want to do the same measurement yourself. In Debian Edu, the tasksel dialog does not show up, and the timing thus does not depend on how quickly the user handles the tasksel dialog.

Machine/setup                 Original tasksel       Optimised tasksel      Reduction
Latitude D505 Main+LTSP LXDE  64 min (07:46-08:50)   <44 min (11:27-12:11)  >20 min (18%)
Latitude D505 Roaming LXDE    57 min (08:48-09:45)   34 min (07:43-08:17)   23 min (40%)
Latitude D505 Minimal         22 min (10:37-10:59)   11 min (11:16-11:27)   11 min (50%)
Thinkpad X200 Minimal          6 min (08:19-08:25)    4 min (08:04-08:08)    2 min (33%)
Thinkpad X200 Roaming KDE     19 min (09:21-09:40)   15 min (10:25-10:40)    4 min (21%)

The test was done using a netinst ISO on a USB stick, so some of the time was spent downloading packages. The connection to the Internet was 100Mbit/s during testing, so downloading should not be a significant factor in the measurements. Downloading typically took a few seconds to a few minutes, depending on the number of packages being installed.

The speedup is implemented by using two hooks in Debian Installer: the pre-pkgsel.d hook to set up the diverts, and the finish-install.d hook to remove the diverts at the end of the installation. I picked the pre-pkgsel.d hook instead of the post-base-installer.d hook because I test using an ISO without the eatmydata package included, and the post-base-installer.d hook in Debian Edu can only operate on packages included in the ISO. The negative effect of this is that I am unable to activate this optimization for the kernel installation step in d-i. If the code were moved to the post-base-installer.d hook, the speedup would be larger for the entire installation.

I've implemented this in the debian-edu-install git repository, and plan to provide the optimization as part of the Debian Edu installation. If you want to test this yourself, you can create two files in the installer (or in an udeb). One shell script needs to go into /usr/lib/pre-pkgsel.d/, with content like this:

#!/bin/sh
set -e
. /usr/share/debconf/confmodule
info() {
    logger -t my-pkgsel "info: $*"
}
error() {
    logger -t my-pkgsel "error: $*"
}
override_install() {
    apt-install eatmydata || true
    if [ -x /target/usr/bin/eatmydata ] ; then
        for bin in dpkg apt-get aptitude tasksel ; do
            file=/usr/bin/$bin
            # Test that the file exists and has not been diverted already.
            if [ -f /target$file ] ; then
                info "diverting $file using eatmydata"
                printf "#!/bin/sh\neatmydata $bin.distrib \"\$@\"\n" \
                    > /target$file.edu
                chmod 755 /target$file.edu
                in-target dpkg-divert --package debian-edu-config \
                    --rename --quiet --add $file
                ln -sf ./$bin.edu /target$file
            else
                error "unable to divert $file, as it is missing."
            fi
        done
    else
        error "unable to find /usr/bin/eatmydata after installing the eatmydata pacage"
    fi
}

override_install

To clean up, another shell script should go into /usr/lib/finish-install.d/ with code like this:

#! /bin/sh -e
. /usr/share/debconf/confmodule
error() {
    logger -t my-finish-install "error: $@"
}
remove_install_override() {
    for bin in dpkg apt-get aptitude tasksel ; do
        file=/usr/bin/$bin
        if [ -x /target$file.edu ] ; then
            rm /target$file
            in-target dpkg-divert --package debian-edu-config \
                --rename --quiet --remove $file
            rm /target$file.edu
        else
            error "Missing divert for $file."
        fi
    done
    sync # Flush file buffers before continuing
}

remove_install_override

In Debian Edu, I placed both code fragments in a separate script edu-eatmydata-install and call it from the pre-pkgsel.d and finish-install.d scripts.

By now you might ask if this change should get into the normal Debian installer too. I suspect it should, but am not sure the current debian-installer coordinators would find it useful enough. It also depends on the side effects of the change. I'm not aware of any, but I guess we will see if the change is safe after some more testing. Perhaps there is some package in Debian depending on sync() and fsync() having effect? Perhaps it should go into its own udeb, to allow those of us wanting to enable it to do so without affecting everyone.

Update 2014-09-24: Since a few days ago, enabling this optimization breaks the installation of all programs using gnutls, because of bug #702711. An updated eatmydata package in Debian will solve it.

Update 2014-10-17: The bug mentioned above is fixed in testing and the optimization works again. And I have discovered that the dpkg-divert trick is not really needed, and implemented a slightly simpler approach as part of the debian-edu-install package. See tools/edu-eatmydata-install in the source package.

Update 2014-11-11: Unfortunately, a new bug #765738 in eatmydata, only triggering on i386, made it into testing and broke this installation optimization again. If unblock request 768893 is accepted, it should be working again.

10th September 2014

Yesterday, I had the pleasure of attending a talk with the Norwegian Unix User Group about the OpenPGP keyserver pool sks-keyservers.net, and was very happy to learn that there is a large set of publicly available key servers to use when looking for people's public keys. So far I have used subkeys.pgp.net, and sometimes wwwkeys.nl.pgp.net when the former was misbehaving, but those days are over. The servers I have used up until yesterday have been slow and sometimes unavailable. I hope those problems are gone now.

Behind the round robin DNS entry of the sks-keyservers.net service there is a pool of more than 100 keyservers which are checked every day to ensure they are well connected and up to date. It must be better than what I have used so far. :)

Yesterday's speaker told me that the service is the default keyserver in the default configuration of GnuPG, but this does not seem to be the case in Debian. Perhaps it should be?

Anyway, I've updated my ~/.gnupg/options file to now include this line:

keyserver pool.sks-keyservers.net
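To fetch a key via the pool, a lookup could then look like this (the key ID is a made-up example):

% gpg --keyserver pool.sks-keyservers.net --recv-keys 0xDEADBEEF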

With GnuPG version 2 one can also locate the keyserver using SRV entries in DNS. Just for fun, I did just that at work, so now every user of GnuPG at the University of Oslo should find an OpenPGP keyserver automatically, should they need it:

% host -t srv _pgpkey-http._tcp.uio.no
_pgpkey-http._tcp.uio.no has SRV record 0 100 11371 pool.sks-keyservers.net.
%

Now if only the HKP lookup protocol supported finding signature paths, I would be very happy. It can look up a given key or search for a user ID, but I normally do not want that; I want to find a trust path from my key to another key. Given a user ID or key ID, I would like to find (and download) the keys representing a signature path from my key to the key in question, to be able to get a trust path between the two keys. As far as I can tell, this is not possible today. Perhaps something for a future version of the protocol?
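For comparison, a plain HKP key lookup today is just an HTTP GET against port 11371, something like this (the key ID is a made-up example):

% curl 'http://pool.sks-keyservers.net:11371/pks/lookup?op=get&search=0xDEADBEEF'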

25th August 2014

Two years later, I am still not sure if it is legal here in Norway to use or publish a video in H.264 or MPEG4 format edited by the commercially licensed video editors, without limiting the use to create "personal" or "non-commercial" videos or getting a license agreement with MPEG LA. If one wants to publish and broadcast video in a non-personal or commercial setting, it might be that those tools can not be used, or that the video format can not be used, without breaking their copyright license. I am not sure. Back then, I found that the copyright license terms for Adobe Premiere and Apple Final Cut Pro both specified that one could not use the program to produce anything else without a patent license from MPEG LA. The issue is not limited to those two products, though. Other much used products like those from Avid and Sorenson Media have terms of use similar to those from Adobe and Apple. The complicating factor making me unsure whether those terms have effect in Norway or not is that the patents in question are not valid in Norway, but copyright licenses are.

These are the terms for Avid Artist Suite, according to their published end user license text (converted to lower case text for easier reading):

18.2. MPEG-4. MPEG-4 technology may be included with the software. MPEG LA, L.L.C. requires this notice:

This product is licensed under the MPEG-4 visual patent portfolio license for the personal and non-commercial use of a consumer for (i) encoding video in compliance with the MPEG-4 visual standard (“MPEG-4 video”) and/or (ii) decoding MPEG-4 video that was encoded by a consumer engaged in a personal and non-commercial activity and/or was obtained from a video provider licensed by MPEG LA to provide MPEG-4 video. No license is granted or shall be implied for any other use. Additional information including that relating to promotional, internal and commercial uses and licensing may be obtained from MPEG LA, LLC. See http://www.mpegla.com. This product is licensed under the MPEG-4 systems patent portfolio license for encoding in compliance with the MPEG-4 systems standard, except that an additional license and payment of royalties are necessary for encoding in connection with (i) data stored or replicated in physical media which is paid for on a title by title basis and/or (ii) data which is paid for on a title by title basis and is transmitted to an end user for permanent storage and/or use, such additional license may be obtained from MPEG LA, LLC. See http://www.mpegla.com for additional details.

18.3. H.264/AVC. H.264/AVC technology may be included with the software. MPEG LA, L.L.C. requires this notice:

This product is licensed under the AVC patent portfolio license for the personal use of a consumer or other uses in which it does not receive remuneration to (i) encode video in compliance with the AVC standard (“AVC video”) and/or (ii) decode AVC video that was encoded by a consumer engaged in a personal activity and/or was obtained from a video provider licensed to provide AVC video. No license is granted or shall be implied for any other use. Additional information may be obtained from MPEG LA, L.L.C. See http://www.mpegla.com.

Note the requirement that the videos created can only be used for personal or non-commercial purposes.

The Sorenson Media software has similar terms:

With respect to a license from Sorenson pertaining to MPEG-4 Video Decoders and/or Encoders: Any such product is licensed under the MPEG-4 visual patent portfolio license for the personal and non-commercial use of a consumer for (i) encoding video in compliance with the MPEG-4 visual standard (“MPEG-4 video”) and/or (ii) decoding MPEG-4 video that was encoded by a consumer engaged in a personal and non-commercial activity and/or was obtained from a video provider licensed by MPEG LA to provide MPEG-4 video. No license is granted or shall be implied for any other use. Additional information including that relating to promotional, internal and commercial uses and licensing may be obtained from MPEG LA, LLC. See http://www.mpegla.com.

With respect to a license from Sorenson pertaining to MPEG-4 Consumer Recorded Data Encoder, MPEG-4 Systems Internet Data Encoder, MPEG-4 Mobile Data Encoder, and/or MPEG-4 Unique Use Encoder: Any such product is licensed under the MPEG-4 systems patent portfolio license for encoding in compliance with the MPEG-4 systems standard, except that an additional license and payment of royalties are necessary for encoding in connection with (i) data stored or replicated in physical media which is paid for on a title by title basis and/or (ii) data which is paid for on a title by title basis and is transmitted to an end user for permanent storage and/or use. Such additional license may be obtained from MPEG LA, LLC. See http://www.mpegla.com for additional details.

Some free software like HandBrake and FFmpeg is licensed under the GPL/LGPL and does not have any such terms included, so for those there is no requirement to limit the use to personal and non-commercial purposes.

31st July 2014

The complete and free "out of the box" software solution for schools, Debian Edu / Skolelinux, is used quite a lot in Germany, and one of the people involved is Bernd Zeitzen, who shows up on the project mailing lists from time to time with interesting questions and tips on how to adjust the setup. I managed to interview him this summer.

Who are you, and how do you spend your days?

My name is Bernd Zeitzen and I'm married to Hedda, a self employed physiotherapist. My former profession is tool maker, but I haven't worked in this job for 30 years. 30 years ago I started to support my wife, becoming her office worker and a few years later the administrator of a small computer network, today based on Ubuntu Server (Samba, OpenVPN). For her daily work she has to use Windows desktops, because the software she needs to organize her business only works with Windows. :-(

In 1988 we started with one PC and DOS, then I learned to use Windows 98, 2000, XP, …, 8, Ubuntu and MacOSX. Today we are running a Linux server with 6 Windows clients, and 10 people (teachers of children with special needs, a speech therapist, an occupational therapist, a psychologist and office workers) use our Samba shares via OpenVPN to work with the documentation of our patients.

How did you get in contact with the Skolelinux / Debian Edu project?

Two years ago a friend of mine asked me if I wanted a job in his school (Gymnasium Harsewinkel). They had started with Skolelinux / Debian Edu, and they were looking for people to support the teachers using the software and the network, and to teach the pupils in optional lessons to increase their computer skills. I'm spending 4-6 hours a week on this job.

What do you see as the advantages of Skolelinux / Debian Edu?

The independence.

First: Every person is allowed to use, share and develop the software. Even if you are poor, you are allowed to use the software included in Skolelinux/Debian Edu and all the other Free Software.

Second: The software runs on old machines, and this gives us the possibility to recycle computers weeded out of offices. The servers and desktops have been running for more than two years and they are working reliably.

We have two servers (one tjener and one terminal server), 45 workstations in three classrooms and seven laptops as a mobile solution for all classrooms. These machines all boot from the terminal server. At the moment we are installing 30 laptops as mobile workstations. Then the pupils will have the possibility to work with these machines in their classrooms. Internet access is provided by a WLAN router connected to the school's network. This is all done without a dedicated system administrator or a computer science teacher.

What do you see as the disadvantages of Skolelinux / Debian Edu?

Teachers and pupils are Windows users. <Irony on> And Linux isn't cool. It's software for freaks using the command line. <Irony off> They don't realize the stability of the system.

Which free software do you use daily?

Firefox, Thunderbird, LibreOffice, Ubuntu Server 12.04 (Samba, Apache, MySQL, Joomla!, … and Skolelinux / Debian Edu)

Which strategy do you believe is the right one to use to get schools to use free software?

In Germany we have the situation that every school is free to decide which software they want to use. This decision is influenced by teachers who learned to use Windows and MS Office. They buy a PC with Windows preinstalled and an additional trial version of MS Office. They don't know about the possibility to use Free Software instead. Another problem is the publishers of school books. They develop their software, bundled with the school books, for Windows.

23rd July 2014

This summer I finally had time to continue working on the Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with today's copyright law. Yesterday, I finally completed translating the book text. There are still some foot/end notes left to translate, the colophon page needs to be rewritten, and a few words and phrases still need to be translated, but the Norwegian text is ready for the first proof reading. :) More spell checking is needed, and several illustrations need to be cleaned up. The work stalled because I had to give priority to other projects during the last year, and the progress graph of the translation shows this very well:

If you want to read the result, check out the github project pages and the PDF, EPUB and HTML version available in the archive directory.

Please report typos, bugs and improvements to the github project if you find any.

17th June 2014

The Debian Edu / Skolelinux project provides an instruction manual for teachers, system administrators and other users that contains useful tips for setting up and maintaining a Debian Edu installation. This text is about how the text processing of this manual is handled in the project.

One goal of the project is to provide information in the native language of its users, and for this we need to handle translations. But we also want to make sure each language contains the same information, so we need a good way to keep the translations in sync. And we want it to be easy for our users to improve the documentation, avoiding the need to learn special formats or tools to contribute, and the obvious way to do this is to make it possible to edit the documentation using a web browser. We also want it to be easy for translators to keep the translation up to date, and to help them figure out what needs to be translated. Here is the list of tools and the process we have found trying to reach all these goals.

We maintain the authoritative source of our manual in the Debian wiki, as several wiki pages written in English. It consists of one front page with references to the different chapters, several pages for each chapter, and finally one "collection page" gluing all the chapters together into one large web page (aka the AllInOne page). The AllInOne page is the one used for further processing and translations. Thanks to the fact that the MoinMoin installation on wiki.debian.org supports exporting pages in the Docbook format, we can fetch the list of pages to export using the raw version of the AllInOne page, and loop over each of them to generate a Docbook XML version of the manual. This process also downloads images and transforms image references to use the locally downloaded images. The generated Docbook XML files are slightly broken, so some post-processing is done using the documentation/scripts/get_manual program, and the result is a nice Docbook XML file (debian-edu-wheezy-manual.xml) and a handful of images. The XML file can now be used to generate PDF, HTML and epub versions of the English manual. This is the basic step of our process, making the PDF (using dblatex), HTML (using xsltproc) and epub (using dbtoepub) versions from Docbook XML, and the resulting files are placed in the debian-edu-doc-en binary package.
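As a rough sketch, this last step boils down to commands like these (the stylesheet path is the one used by the Debian docbook-xsl package; the exact paths and options in our build scripts may differ):

dblatex debian-edu-wheezy-manual.xml
xsltproc -o debian-edu-wheezy-manual.html \
  /usr/share/xml/docbook/stylesheet/docbook-xsl/xhtml/docbook.xsl \
  debian-edu-wheezy-manual.xml
dbtoepub debian-edu-wheezy-manual.xml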

But English documentation is not enough for us. We want translated documentation too, and we want to make it easy for translators to track the English original. For this we use the poxml package, which allows us to transform the English Docbook XML file into a translation template (a .pot file), usable with the normal gettext based translation tools used by those translating free software. The pot file is used to create and maintain translation files (several .po files), which the translators update with the native language translations of all titles, paragraphs and blocks of text in the original. The next step is combining the original English Docbook XML and the translation file (say debian-edu-wheezy-manual.nb.po), to create a translated Docbook XML file (in this case debian-edu-wheezy-manual.nb.xml). This translated (or partly translated, if the translation is not complete) Docbook XML file can then be used like the original to create PDF, HTML and epub versions of the documentation.
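The poxml round trip described above boils down to something like this (a sketch; the exact invocations in our build scripts may differ):

xml2pot debian-edu-wheezy-manual.xml > debian-edu-wheezy-manual.pot
msgmerge -U debian-edu-wheezy-manual.nb.po debian-edu-wheezy-manual.pot
po2xml debian-edu-wheezy-manual.xml debian-edu-wheezy-manual.nb.po \
  > debian-edu-wheezy-manual.nb.xml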

The translators use different tools to edit the .po files. We recommend using lokalize, while some use emacs and vi, and others use web based editors like Pootle or Transifex. All we care about is where the .po file ends up: in our git repository. Updated translations can either be committed directly to git, or submitted as bug reports against the debian-edu-doc package.

One challenge is images, which both might need to be translated (if they show translated user applications), and are needed in different formats when creating the PDF and HTML versions (epub is an HTML version in this regard). For this we transform the original PNG images to the needed density and format during build, and have a way to provide translated images by storing translated versions in images/$LANGUAGECODE/. I am a bit unsure about the details here. The package maintainers know more.

If you wonder what the result looks like, we provide the content of the documentation packages on the web. See for example the Italian PDF version or the German HTML version. We do not yet build the epub version by default, but perhaps it will be done in the future.

To learn more, check out the debian-edu-doc package, the manual on the wiki and the translation instructions in the manual.

29th May 2014

Dear lazyweb. I'm planning to set up a small Raspberry Pi computer in my car, connected to a small screen next to the rear-view mirror. I plan to hook it up with a GPS and a USB wifi card too. The idea is to get my own "carputer". But I wonder if someone has already created a good free software solution for such a car computer.

This is my current wish list for such a system:

  • Work on Raspberry Pi.
  • Show the current speed limit based on location, and warn if going too fast (for example using the color codes yellow and red on the screen, or making a sound). This could be done using either data from OpenStreetMap or OCR info gathered from a dashboard camera.
  • Track automatic toll road passes and their cost, show the total spent and make it possible to calculate toll costs for a planned route.
  • Collect GPX tracks for use with OpenStreetMap.
  • Automatically detect and use any wireless connection to connect to my home server. Try IP over DNS (iodine) or ICMP (Hans) if a direct connection does not work.
  • Set up a mesh network to talk to other cars with the same system, or use some standard car mesh protocol.
  • Warn when approaching speed cameras and speed camera ranges (where the speed is calculated between two cameras).
  • Support a dashboard/front facing camera to discover speed limits and run OCR to track the registration numbers of passing cars.

If you know of any free software car computer system supporting some or all of these features, please let me know.

29th April 2014

I've been following the Gnash project for quite a while now. It is a free software implementation of Adobe Flash, both a standalone player and a browser plugin. Gnash implements support for the AVM1 format (and not the newer AVM2 format - see Lightspark for that one), allowing several flash based sites to work. Thanks to the friendly developers at Youtube, it also works with Youtube videos, because the Javascript code at Youtube detects Gnash and serves an AVM1 player to those users. :) It would be great if someone found time to implement AVM2 support, but it has not happened yet. If you install both Lightspark and Gnash, Lightspark will invoke Gnash if it finds an AVM1 flash file, so you can get both handled as free software. Unfortunately, Lightspark so far only implements a small subset of AVM2, and many sites do not work yet.

A few months ago, I started looking at Coverity, the static source checker used to find heaps and heaps of bugs in free software (thanks to the donation of a scanning service to free software projects by the company developing this non-free code checker), and Gnash was one of the projects I decided to check out. Coverity is able to find lock errors, memory errors, dead code and more. A few days ago they even extended it to be able to find the heartbleed bug in OpenSSL. There are heaps of checks being done on the instrumented code, and the number of bogus warnings is quite low compared to the other static code checkers I have tested over the years.

Since a few weeks ago, I've been working with the other Gnash developers squashing bugs discovered by Coverity. I was quite happy today when I checked the current status and saw that of the 777 issues detected so far, 374 are marked as fixed. This makes me confident that the next Gnash release will be more stable and more dependable than the previous one. Most of the reported issues were and are in the test suite, but it also found a few in the rest of the code.

If you want to help out, you can find us on the gnash-dev mailing list and in the #gnash channel on the irc.freenode.net IRC server.

23rd April 2014

It would be nice if it were easier in Debian to get all the hardware related packages relevant for the computer installed automatically. So I implemented it, using my Isenkram package. To use it, install the tasksel and isenkram packages and run tasksel as the root user. You should be presented with a new option, "Hardware specific packages (autodetected by isenkram)". When you select it, tasksel will install the packages isenkram claims fit the current hardware, hot pluggable or not.

The implementation is in two files: one is the tasksel menu entry description, and the other is the script used to extract the list of packages to install. The first part is in /usr/share/tasksel/descs/isenkram.desc and looks like this:

Task: isenkram
Section: hardware
Description: Hardware specific packages (autodetected by isenkram)
 Based on the detected hardware various hardware specific packages are
 proposed.
Test-new-install: mark show
Relevance: 8
Packages: for-current-hardware

The second part is in /usr/lib/tasksel/packages/for-current-hardware and looks like this:

#!/bin/sh
#
# Combine the list of packages matching the current hardware with the
# list of relevant firmware packages, and drop any duplicates.
(
    isenkram-lookup
    isenkram-autoinstall-firmware -l
) | sort -u

All in all, a very short and simple implementation making it trivial to install the hardware dependent packages we all may want to have installed on our machines. I've not been able to find a way to get tasksel to tell you exactly which packages it plans to install before doing the installation. So if you are curious or careful, check the output from the isenkram-* command line tools first.

The information about which packages handle which hardware is fetched either from the isenkram package itself in /usr/share/isenkram/, from git.debian.org or from the APT package database (using the Modaliases header). The APT package database parsing has caused a nasty resource leak in the isenkram daemon (bugs #719837 and #730704). The cause is in the python-apt code (bug #745487), but using a workaround I was able to get rid of the file descriptor leak and reduce the memory leak from ~30 MiB per hardware detection down to around 2 MiB per hardware detection. This should make the desktop daemon a lot more useful. The fix is in version 0.7, uploaded to unstable today.

I believe the current way of mapping hardware to packages in Isenkram is a good draft, but in the future I expect isenkram to use the AppStream data source for this. A proposal for getting proper AppStream support into Debian is floating around as DEP-11, and a GSoC project will take place this summer to improve the situation. I look forward to seeing the result, and welcome patches for isenkram to start using the information when it is ready.

If you want your package to map to some specific hardware, either add an "Xb-Modaliases" header to your control file like I did in the pymissile package, or submit a bug report with the details to the isenkram package. See also all my blog posts tagged isenkram for details on the notation. I expect the information will be migrated to AppStream eventually, but for the moment I have no better place to store it.
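As an illustration, a binary package stanza in debian/control could carry a header like this (the package name and modalias pattern are made-up examples; use the modalias values matching your hardware, as found under /sys):

Package: mydevice-support
Architecture: all
Depends: ${misc:Depends}
Xb-Modaliases: usb:v1234p5678d*
Description: support tools for the hypothetical MyDevice USB gadget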

15th April 2014

The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to be able to communicate with their friends and family encrypted and away from prying eyes. It is still going strong, and today a major milestone was reached.

Today, the last of the packages currently used by the project to create the system images was accepted into Debian Unstable. It was the freedombox-setup package, which is used to configure the images during build and on the first boot. Now all one needs to get going is the build code from the freedom-maker git repository and packages from Debian. And once the freedombox-setup package enters testing, we can build everything directly from Debian. :)

Some key packages used by Freedombox are freedombox-setup, plinth, pagekite, tor, privoxy, owncloud and dnsmasq. There are plans to integrate more packages into the setup. User documentation is maintained on the Debian wiki. Please check out the manual and help us improve it.

To test for yourself and create boot images with the FreedomBox setup, run this on a Debian machine using a user with sudo rights to become root:

sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image

Root access is needed to run debootstrap and mount loopback devices. See the README in the freedom-maker git repo for more details on the build. If you do not want all three images, trim the make line (see the example below). Note that the virtualbox-image target is not really virtualbox specific. It creates an x86 image usable in kvm, qemu, vmware and any other x86 virtual machine environment. You might need the version of vmdebootstrap in Jessie to get the build working, as it includes fixes for a race condition with kpartx.
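For example, to build only the VirtualBox image:

make -C freedom-maker virtualbox-image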

If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat

I have not tested it myself in the last few weeks, so I do not know if it still works.

If you wonder how to help, one task you could look at is using systemd as the boot system. It will become the default for Linux in Jessie, so we need to make sure it is usable on the Freedombox. I did a simple test a few weeks ago, and noticed that dnsmasq failed to start during boot when using systemd. I suspect there are other problems too. :) To detect problems, there is a test suite included, which can be run from the plinth web interface.

Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

9th April 2014

For a while now, I have been looking for a sensible offsite backup solution for use at home. My requirements are simple: it must be cheap and locally encrypted (in other words, I keep the encryption keys, and the storage provider does not have access to my private files). One idea my friends and I had many years ago, before the cloud storage providers showed up, was to use Google mail as storage, writing a Linux block device storing blocks as emails in the mail service provided by Google, and thus get heaps of free space. On top of this one could add encryption, RAID and volume management to have lots of (fairly slow, I admit that) cheap and encrypted storage. But I never found time to implement such a system. The last few weeks I have instead been looking at a system called S3QL, a locally mounted network backed file system with the features I need.

S3QL is a fuse file system with a local cache and cloud storage, handling several different storage providers, any with an Amazon S3, Google Drive or OpenStack compatible API. There are heaps of such storage providers. S3QL can also use a local directory as storage, which combined with sshfs allows for file storage on any ssh server. S3QL includes support for encryption, compression, de-duplication, snapshots and immutable file systems, allowing me to mount the remote storage as a local mount point, and look at and use the files as if they were local, while the content is stored in the cloud as well. This allows me to have a backup that should survive a fire. The file system can not be shared between several machines at the same time, as only one can mount it at a time, but any machine with the encryption key and access to the storage service can mount it if it is unmounted.

It is simple to use. I'm using it on Debian Wheezy, where the package is already included. So to get started, run apt-get install s3ql. Next, pick a storage provider. I ended up picking Greenqloud, after reading their nice recipe on how to use S3QL with their Amazon S3 service, because I trust the laws in Iceland more than those in the USA when it comes to keeping my personal data safe and private, and thus would rather spend money on a company in Iceland. Another nice recipe is available in the article S3QL Filesystem for HPC Storage by Jeff Layton in the HPC section of Admin magazine. When the provider is picked, figure out how to get the API key needed to connect to the storage API. With Greenqloud, the key did not show up until I had added payment details to my account.

Armed with the API access details, it is time to create the file system. First, create a new bucket in the cloud. This bucket is the file system storage area. I picked a bucket name reflecting the machine that was going to store data there, but any name will do. I'll refer to it as bucket-name below. In addition, one needs the API login and password, and a locally created password. Store it all in ~root/.s3ql/authinfo2 like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password

I create my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do. Armed with these details, it is now time to run mkfs.s3ql, entering the API details and password to create the file system:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login: 
Enter backend password: 
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password: 
Confirm encryption password: 
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
# 

The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#

The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as this will not flush the cache to the cloud storage, but instead run the umount.s3ql command like this:

# umount.s3ql /s3ql
# 
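As mentioned, I use rsync for the actual backup runs while the file system is mounted. A typical invocation could look something like this (just a sketch; the source and target paths are examples):

# rsync -aH --delete /home/ /s3ql/home-backup/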

There is a fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
# 

Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is for me limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd. So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#

The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either umount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
# 

If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#

I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the price models are quite different, so you will have to figure out what suits you best.

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which tells me that the file system is getting critical scrutiny from the science community, and this increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's SwiftObject Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provided POSIX semantics when it comes to locking, umask handling etc). Running my test code to check file system semantics, I was happy to discover that no error was found. So the file system can be used for home directories, if one chooses to do so.

If you do not want a locally mounted file system, and want something that works without the Linux fuse file system, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

1st April 2014

Microsoft has announced that Windows XP reaches its end of life 2014-04-08, in 7 days. But there are heaps of machines still running Windows XP, and depending on Windows XP to run their applications, and upgrading will be expensive, both when it comes to money and when it comes to the amount of effort needed to migrate from Windows XP to a new operating system. Some obvious options (buy a new Windows machine, buy a MacOSX machine, install Linux on the existing machine) are already well known and covered elsewhere. Most of them involve leaving the user applications installed on Windows XP behind and trying out replacements or updated versions. In this blog post I want to mention one strange bird that allows people to keep the hardware and the existing Windows XP applications, and run them on a free software operating system that is Windows XP compatible.

ReactOS is a free software operating system (GNU GPL licensed) working on providing an operating system that is binary compatible with Windows, able to run Windows programs directly and to use Windows drivers for hardware directly. The project goal is for Windows users to keep their existing machines, drivers and software, and gain the advantages of using an operating system without usage limitations caused by non-free licensing. It is a Windows clone running directly on the hardware, so quite different from the approach taken by the Wine project, which makes it possible to run Windows binaries on Linux.

The ReactOS project shares code with the Wine project, so most shared libraries available on Windows are already implemented. There is also a software manager like the one we are used to on Linux, allowing the user to install free software applications with a simple click directly from the Internet. Check out the screen shots on the project web site for an idea what it looks like (it looks just like Windows before Metro).

I do not use ReactOS myself, preferring Linux and Unix like operating systems. I've tested it, and it works fine in a virt-manager virtual machine. The browser, minesweeper, notepad etc are working fine as far as I can tell. Unfortunately, my main test application is the software included on a CD with the Lego Mindstorms NXT, which seems to install just fine from CD but fails to leave any binaries on the disk after the installation. So no luck with that test software. No idea why, but I hope someone else will figure out and fix the problem. I've tried the ReactOS Live ISO on a physical machine, and it seemed to work just fine. If you like Windows and want to keep running your old Windows binaries, check it out by downloading the installation CD, the live CD or the preinstalled virtual machine image.

30th March 2014

Debian Edu / Skolelinux keeps gaining new users. Some weeks ago, a person showed up on IRC, #debian-edu, with a wish to contribute, and I managed to get an interview with this great contributor, Roger Marsal, to learn more about his background.

Who are you, and how do you spend your days?

My name is Roger Marsal, I'm 27 years old (1986 generation) and I live in Barcelona, Spain. I've got a strong business background and I work as a patrimony manager and as a real estate agent. Additionally, I've co-founded a British based tech company that is nowadays on the last development phase of a new social networking concept.

I'm a Linux enthusiast who started his journey with Ubuntu four years ago and recently switched to Debian, seeking rock solid stability and as a necessary step to gain expertise.

In a nutshell, I spend my days working and learning as much as I can to handle both my job and my entrepreneurial project, and to feed my Linux hunger.

How did you get in contact with the Skolelinux / Debian Edu project?

I discovered the LTSP advantages with "Ubuntu 12.04 alternate install" and after a year of use I started looking for an alternative. Even though I highly value and respect the Ubuntu project, I thought it was necessary for me to change to a more robust and stable alternative. Since I was already using Debian on my personal laptop, I thought it would be fine to install Debian and configure an LTSP server myself. To my surprise, I discovered that the Debian project also supported a kind of Edubuntu equivalent, and after some pain I got a Debian Edu network up and running. I just loved it.

What do you see as the advantages of Skolelinux / Debian Edu?

I found a main advantage in that, once you know "the tips and tricks", a new installation just works out of the box. It's the most complete alternative I've found to create an LTSP network. All the other distributions seem to be made of plastic; Debian Edu seems to be made of steel.

What do you see as the disadvantages of Skolelinux / Debian Edu?

I found two main disadvantages.

I'm not an expert, but I've got notions, and I had to spend a considerable amount of time trying to bring up a standard network topology. I'm quite stubborn and I just worked until I did, but I'm sure many people with few resources (not big schools, but academies for example) would have switched or given up.

It's amazing how such a complex system like Debian Edu has achieved this out-of-the-box state. But tweaking without breaking gets more difficult, as more factors have to be considered. This can discourage many people too.

Which free software do you use daily?

I use Debian, Firefox, Okular, Inkscape, LibreOffice and Virtualbox.

Which strategy do you believe is the right one to use to get schools to use free software?

I don't think there is a need for a particular strategy. The free attribute, in both the "freedom" and "no price" meanings, is what will really bring free software to schools. In my experience I can think of the "R" statistical language; a few years ago it was an extremely nerdy tool for university people. Today it's being increasingly used to teach statistics at many different levels of study. I believe free and open software will increasingly gain popularity, and I'm sure schools will be one of the first scenarios where this will happen.

25th March 2014

Did you ever need to store logs or other files in a way that would allow them to be used as evidence in court, and need a way to demonstrate beyond reasonable doubt that they had not been changed since they were created? Or did you ever need to document that a given document was received at some point in time, like some archived document or the answer to an exam, and not changed after it was received? The problem in these settings is to remove the need to trust yourself and your computers, while still being able to prove that a file is the same as it was at some given time in the past.

A solution to these problems is to have a trusted third party "stamp" the document and verify that at some given time the document looked a given way. Such notary services have been around for thousands of years, and the digital equivalent is called a trusted timestamping service. The Internet Engineering Task Force standardised how such a service could work a few years ago as RFC 3161. The mechanism is simple. Create a hash of the file in question and send it to a trusted third party, which adds a time stamp to the hash, signs the result with its private key, and sends back the signed hash + timestamp. Email, FTP and HTTP can all be used to request such a signature, depending on what is provided by the service used. Anyone with the document and the signature can then verify that the document matches the signature by creating their own hash and checking the signature using the trusted third party's public key. There are several commercial services around providing such timestamping. A quick search for "rfc 3161 service" pointed me to at least DigiStamp, Quo Vadis, Global Sign and Global Trust Finder. The system works as long as the private key of the trusted third party is not compromised.

But as far as I can tell, there are very few public trusted timestamp services available for everyone. I've been looking for one for a while now. But yesterday I found one over at Deutsches Forschungsnetz, mentioned in a blog post by David Müller. I then found a good recipe on how to use the service over at the University of Greifswald.

The OpenSSL library contains both a server and tools to use and set up your own signing service. See the ts(1SSL) and tsget(1SSL) manual pages for more details. The following shell script demonstrates how to extract a signed timestamp for any file on the disk in a Debian environment:

#!/bin/sh
set -e
url="http://zeitstempel.dfn.de"
caurl="https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt"
reqfile=$(mktemp -t tmp.XXXXXXXXXX.tsq)
resfile=$(mktemp -t tmp.XXXXXXXXXX.tsr)
cafile=chain.txt
# Fetch the CA certificate chain needed to verify the reply.
if [ ! -f $cafile ] ; then
    wget -O $cafile "$caurl"
fi
# Create the timestamp query and send it to the timestamping service.
openssl ts -query -data "$1" -cert | tee "$reqfile" \
    | /usr/lib/ssl/misc/tsget -h "$url" -o "$resfile"
# Show the content of the reply, and verify it against the original file.
openssl ts -reply -in "$resfile" -text 1>&2
openssl ts -verify -data "$1" -in "$resfile" -CAfile "$cafile" 1>&2
base64 < "$resfile"
rm "$reqfile" "$resfile"

The argument to the script is the file to timestamp, and the output is a base64 encoded version of the signature on STDOUT and details about the signature on STDERR. Note that due to a bug in the tsget script, you might need to modify the included script and remove the last line. Or just write your own HTTP uploader using curl. :) Now you too can prove and verify that files have not been changed.
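Such a curl based uploader could look something like this (a sketch following the RFC 3161 HTTP convention; the file names are examples):

openssl ts -query -data file.txt -cert > file.tsq
curl -s -H "Content-Type: application/timestamp-query" \
    --data-binary @file.tsq http://zeitstempel.dfn.de > file.tsr
openssl ts -verify -data file.txt -in file.tsr -CAfile chain.txt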

But the Internet needs more public trusted timestamp services. Perhaps something for Uninett, or my work place the University of Oslo, to set up?

21st March 2014

Keeping your DVD collection safe from scratches and curious children's fingers while still having it available when you want to see a movie is not straightforward. My preferred method at the moment is to store a full copy of the ISO on a hard drive, and use VLC, Popcorn Hour or other useful players to view the resulting file. This way the subtitles and bonus material are still available, and using the ISO is just like inserting the original DVD disc in the DVD player.

Earlier I used dd for taking security copies, but it does not handle DVDs with read errors (and there are quite a few of those). I've also tried using dvdbackup and genisoimage, but these days I use the marvellous Python library and program python-dvdvideo written by Bastian Blank. It is in Debian already, and the binary package name is python3-dvdvideo. Instead of trying to read every block from the DVD, it parses the file structure, figures out which blocks on the DVD are actually in use, and only reads those blocks from the DVD. This works surprisingly well, and I have been able to back up almost my entire DVD collection using this method.

So far, python-dvdvideo has failed on between 10 and 20 DVDs, which is a small fraction of my collection. The most common problem is DVDs using UTF-16 instead of UTF-8 characters, which according to Bastian is against the DVD specification (and seems to cause some players to fail too). A rarer problem is what seems to be inconsistent DVD structures, where the Python library claims there is an overlap between objects. An equally rare problem is a claim that some value is out of range. No idea what is going on there. I wish I knew enough about the DVD format to fix these, to ensure my movie collection will stay with me in the future.

So, if you need to keep your DVDs safe, back them up using python-dvdvideo. :)

14th March 2014

The Freedombox project is working on providing the software and hardware to make it easy for non-technical people to host their data and communication at home, and to be able to communicate with their friends and family encrypted and away from prying eyes. It has been going on for a while, and is slowly progressing towards a new test release (0.2).

And what day could be better than Pi day to announce that the new version will provide "hard drive" / SD card / USB stick images for Dreamplug, Raspberry Pi and VirtualBox (or any other virtualization system), and can also be installed using a Debian installer preseed file. The Freedombox is now based on Debian Jessie, where most of the needed packages are already present. Only one, the freedombox-setup package, is missing. To build your own boot image to test the current status, fetch the freedom-maker scripts and build using vmdebootstrap with a user with sudo access to become root:

git clone http://anonscm.debian.org/git/freedombox/freedom-maker.git \
  freedom-maker
sudo apt-get install git vmdebootstrap mercurial python-docutils \
  mktorrent extlinux virtualbox qemu-user-static binfmt-support \
  u-boot-tools
make -C freedom-maker dreamplug-image raspberry-image virtualbox-image

Root access is needed to run debootstrap and mount loopback devices. See the README for more details on the build. If you do not want all three images, trim the make line. But note that due to a race condition in vmdebootstrap, the build might fail unless the patch to the kpartx call is applied.

If you instead want to install using a Debian CD and the preseed method, boot a Debian Wheezy ISO and use this boot argument to load the preseed values:

url=http://www.reinholdtsen.name/freedombox/preseed-jessie.dat

But note that due to a recently introduced bug in apt in Jessie, the installer will currently hang while setting up APT sources. Killing the 'apt-cdrom ident' process when it hangs a few times during the installation will get the installation going. This affects all installations in Jessie, and I expect it will be fixed soon.
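
To avoid hunting for the process ID by hand, something like this from a shell on another virtual console should do the trick (a hedged sketch; the pattern matches the process name mentioned above):

pkill -f 'apt-cdrom ident'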

Give it a go and let us know how it goes on the mailing list, and help us get the new release published. :) Please join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

12th March 2014

On larger sites, it is useful to use a dedicated storage server for storing user home directories and data. The design for handling this in Debian Edu / Skolelinux is to update the automount rules in LDAP and let the automount daemon on the clients take care of the rest. I was reminded about the need to document this better when one of the customers of Skolelinux Drift AS, where I am on the board of directors, asked about how to do this. The steps to get this working are the following:

  1. Add the new storage server in DNS. I use nas-server.intern as the example host here.
  2. Add automount information about this server in LDAP, to allow all clients to automatically mount it on request.
  3. Add the relevant entries in tjener.intern:/etc/fstab, because tjener.intern does not use automount, to avoid mounting loops.

DNS entries are added in GOsa², and not described here. Follow the instructions in the manual (Machine Management with GOsa² in section Getting started).

Ensure that the NFS export points on the server are exported to the relevant subnets or machines:

root@tjener:~# showmount -e nas-server
Export list for nas-server:
/storage         10.0.0.0/8
root@tjener:~#

Here everything on the backbone network is granted access to the /storage export. With NFSv3 it is slightly better to limit it to netgroup membership or single IP addresses to have some limits on the NFS access.
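
For reference, the export above could be produced by a line like this in /etc/exports on the nas-server, assuming the standard Linux NFS server is used (a sketch; tighten the network range as just discussed):

/storage 10.0.0.0/8(rw,sync,no_subtree_check)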

The next step is to update LDAP. This can not be done using GOsa², because it lacks a module for automount. Instead, use ldapvi and add the required LDAP objects using an editor.

ldapvi --ldap-conf -ZD '(cn=admin)' -b ou=automount,dc=skole,dc=skolelinux,dc=no

When the editor shows up, add the following LDAP objects at the bottom of the document. The "/&" part in the last LDAP object is a wildcard matching everything the nas-server exports, removing the need to list individual mount points in LDAP.

add cn=nas-server,ou=auto.skole,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: automount
cn: nas-server
automountInformation: -fstype=autofs --timeout=60 ldap:ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no

add ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: top
objectClass: automountMap
ou: auto.nas-server

add cn=/,ou=auto.nas-server,ou=automount,dc=skole,dc=skolelinux,dc=no
objectClass: automount
cn: /
automountInformation: -fstype=nfs,tcp,rsize=32768,wsize=32768,rw,intr,hard,nodev,nosuid,noatime nas-server.intern:/&

The last step to remember is to mount the relevant mount points in tjener.intern by adding them to /etc/fstab, creating the mount directories using mkdir and running "mount -a" to mount them.
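
As a hedged sketch, assuming the /storage export and the mount point used in this example (adjust paths to match your setup), the result on tjener.intern could look like this:

# line to add to /etc/fstab on tjener.intern
nas-server.intern:/storage /tjener/nas-server/storage nfs tcp,rsize=32768,wsize=32768,rw,intr,hard,nodev,nosuid,noatime 0 0
# then create the mount point and mount everything listed in fstab
mkdir -p /tjener/nas-server/storage
mount -a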

When this is done, your users should be able to access the files on the storage server directly by just visiting the /tjener/nas-server/storage/ directory using any application on any workstation, LTSP client or LTSP server.

22nd February 2014

Many years ago, I wrote a GPL licensed version of the netgroup and innetgr tools, because I needed them in Skolelinux. I called the project ng-utils, and it has served me well. I placed the project under the Hungry Programmer umbrella, and it was maintained in our CVS repository. But many years ago the CVS repository was dropped (lost or not migrated to new hardware, I am not sure), and the project has lacked a proper home since then.

Last summer, I had a look at the package and made a new release fixing an irritating crash bug, but was unable to store the changes in a proper source control system. I applied for a project on Alioth, but did not have time to follow up on it. Until today. :)

After many hours of cleaning and migration, the ng-utils project now has a new home, and a git repository with the highlights of the history of the project. I published all release tarballs and imported them into the git repository. As the project is really stable and not expected to gain new features any time soon, I decided to make a new release and call it 1.0. Visit the new project home on https://alioth.debian.org/projects/ng-utils/ if you want to check it out. The new version is also uploaded into Debian Unstable.

Tags: debian, english.
3rd February 2014

A few days ago I decided to try to help the Hurd people to get their changes into sysvinit, to allow them to use the normal sysvinit boot system instead of their old one. This follows up on the great Google Summer of Code work done last summer by Justus Winter to get Debian on Hurd working more like Debian on Linux. To get started, I downloaded a prebuilt hard disk image from http://ftp.debian-ports.org/debian-cd/hurd-i386/current/debian-hurd.img.tar.gz, and started it using virt-manager.

The first thing I had to do after logging in (root without any password) was to get the network operational. I followed the instructions on the Debian GNU/Hurd ports page and ran these commands as root to get the machine to accept an IP address from the kvm internal DHCP server:

settrans -fgap /dev/netdde /hurd/netdde
kill $(ps -ef|awk '/[p]finet/ { print $2}')
kill $(ps -ef|awk '/[d]evnode/ { print $2}')
dhclient /dev/eth0

After this, the machine had internet connectivity, and I could upgrade it and install the sysvinit packages from experimental and enable it as the default boot system in Hurd.

But before I did that, I set a password on the root user, as ssh is running on the machine and a password needs to be set for ssh login to work. Also, note that a bug somewhere in openssh on Hurd blocks compression from working. Remember to turn that off on the client side.
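
A hedged one-liner for turning off compression on the client side (the host name is made up):

ssh -o Compression=no root@hurd.example.org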

Run these commands as root to upgrade and test the new sysvinit stuff:

cat > /etc/apt/sources.list.d/experimental.list <<EOF
deb http://http.debian.net/debian/ experimental main
EOF
apt-get update
apt-get dist-upgrade
apt-get install -t experimental initscripts sysv-rc sysvinit \
    sysvinit-core sysvinit-utils
update-alternatives --config runsystem

To reboot after switching boot system, you have to use reboot-hurd instead of just reboot, as there is not yet a sysvinit process able to receive the signals from the normal 'reboot' command. After switching to sysvinit as the boot system, upgrading every package and rebooting, the network comes up with DHCP after boot as it should, and the settrans/kill hack mentioned at the start is no longer needed. But for some strange reason, there is no longer any login prompt in the virtual console, so I logged in using ssh instead.

Note that there are some race conditions in Hurd making the boot fail sometimes. No idea what the cause is, but I hope the Hurd porters figure it out. At least Justus said on IRC (#debian-hurd on irc.debian.org) that they are aware of the problem. A way to reduce the impact is to upgrade to the Hurd packages built by Justus by adding this repository to the machine:

cat > /etc/apt/sources.list.d/hurd-ci.list <<EOF
deb http://darnassus.sceen.net/~teythoon/hurd-ci/ sid main
EOF

At the moment the prebuilt virtual machine gets some packages from http://ftp.debian-ports.org/debian, because some of the packages in unstable do not yet include the required patches that are lingering in the BTS. This is the complete list of "unofficial" packages installed:

# aptitude search '?narrow(?version(CURRENT),?origin(Debian Ports))'
i   emacs                   - GNU Emacs editor (metapackage)
i   gdb                     - GNU Debugger
i   hurd-recommended        - Miscellaneous translators
i   isc-dhcp-client         - ISC DHCP client
i   isc-dhcp-common         - common files used by all the isc-dhcp* packages
i   libc-bin                - Embedded GNU C Library: Binaries
i   libc-dev-bin            - Embedded GNU C Library: Development binaries
i   libc0.3                 - Embedded GNU C Library: Shared libraries
i A libc0.3-dbg             - Embedded GNU C Library: detached debugging symbols
i   libc0.3-dev             - Embedded GNU C Library: Development Libraries and Hea
i   multiarch-support       - Transitional package to ensure multiarch compatibilit
i A x11-common              - X Window System (X.Org) infrastructure
i   xorg                    - X.Org X Window System
i A xserver-xorg            - X.Org X server
i A xserver-xorg-input-all  - X.Org X server -- input driver metapackage
#

All in all, testing Hurd has been an interesting experience. :) X.org did not work out of the box and I never took the time to follow the porters' instructions to fix it. This time I was interested in the command line stuff.

29th January 2014

Bitcoin is an incredible use of peer to peer communication and encryption, allowing direct and immediate money transfer without any central control. It is sometimes claimed to be ideal for illegal activity, which I believe is quite a long way from the truth. At least I would not conduct illegal money transfers using a system where the details of every transaction are kept forever. This point is investigated in USENIX ;login: from December 2013, in the article "A Fistful of Bitcoins - Characterizing Payments Among Men with No Names" by Sarah Meiklejohn, Marjori Pomarole, Grant Jordan, Kirill Levchenko, Damon McCoy, Geoffrey M. Voelker, and Stefan Savage. They analyse the transaction log in the Bitcoin system, using it to find addresses belonging to individuals and organisations and follow the flow of money from both Bitcoin theft and trades on Silk Road to where the money ends up. This is how they wrap up their article:

"To demonstrate the usefulness of this type of analysis, we turned our attention to criminal activity. In the Bitcoin economy, criminal activity can appear in a number of forms, such as dealing drugs on Silk Road or simply stealing someone else’s bitcoins. We followed the flow of bitcoins out of Silk Road (in particular, from one notorious address) and from a number of highly publicized thefts to see whether we could track the bitcoins to known services. Although some of the thieves attempted to use sophisticated mixing techniques (or possibly mix services) to obscure the flow of bitcoins, for the most part tracking the bitcoins was quite straightforward, and we ultimately saw large quantities of bitcoins flow to a variety of exchanges directly from the point of theft (or the withdrawal from Silk Road).

As acknowledged above, following stolen bitcoins to the point at which they are deposited into an exchange does not in itself identify the thief; however, it does enable further de-anonymization in the case in which certain agencies can determine (through, for example, subpoena power) the real-world owner of the account into which the stolen bitcoins were deposited. Because such exchanges seem to serve as chokepoints into and out of the Bitcoin economy (i.e., there are few alternative ways to cash out), we conclude that using Bitcoin for money laundering or other illicit purposes does not (at least at present) seem to be particularly attractive."

These researchers are not the first to analyse the Bitcoin transaction log. The 2011 paper "An Analysis of Anonymity in the Bitcoin System" by Fergal Reid and Martin Harrigan is summarized like this:

"Anonymity in Bitcoin, a peer-to-peer electronic currency system, is a complicated issue. Within the system, users are identified by public-keys only. An attacker wishing to de-anonymize its users will attempt to construct the one-to-many mapping between users and public-keys and associate information external to the system with the users. Bitcoin tries to prevent this attack by storing the mapping of a user to his or her public-keys on that user's node only and by allowing each user to generate as many public-keys as required. In this chapter we consider the topological structure of two networks derived from Bitcoin's public transaction history. We show that the two networks have a non-trivial topological structure, provide complementary views of the Bitcoin system and have implications for anonymity. We combine these structures with external information and techniques such as context discovery and flow analysis to investigate an alleged theft of Bitcoins, which, at the time of the theft, had a market value of approximately half a million U.S. dollars."

I hope these references can help kill the urban myth that Bitcoin is anonymous. It isn't really a good fit for illegal activities. Use cash if you need to stay anonymous, at least until regular DNA sampling of notes and coins becomes the norm. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14th January 2014

Coverity is a nice tool to find problems in C, C++ and Java code using static source code analysis. It can detect a lot of different problems, and is very useful for finding memory and locking bugs in the error handling parts of the source. The company behind it provides checking of free software projects as a community service, and many hundreds of free software projects are already checked. A few days ago I decided to have a closer look at the Coverity system, and discovered that the gnash and ipmitool projects I am involved with were already registered. But these are fairly big, and I would also like to have a small and easy project to check, so I decided to request checking of the chrpath project. It was added to the checker, which discovered seven potential defects. Six of these were real, mostly resource "leaks" when the program detected an error. Nothing serious, as the resources would be released a fraction of a second later when the program exited because of the error, but it is nice to do it right in case the source of the program some time in the future ends up in a library. Having fixed all defects and added a mailing list for the chrpath developers, I decided it was time to publish a new release. These are the release notes:

New in 0.16 released 2014-01-14:

  • Fixed all minor bugs discovered by Coverity.
  • Updated config.sub and config.guess from the GNU project.
  • Mention new project mailing list in the documentation.

You can download the new version 0.16 from alioth. Please let us know via the Alioth project if something is wrong with the new release. The test suite did not discover any old errors, so if you find a new one, please also include a test suite check.

25th December 2013

The Debian Edu / Skolelinux project consists of both newcomers and old timers, and this time I was able to get an interview with a newcomer in the project who showed up on the IRC channel a few weeks ago to let us know about his successful installation of Debian Edu Wheezy at his school. Say hello to Dominik George.

Who are you, and how do you spend your days?

I am a 23-year-old student from Germany who has spent half of his life with open source. In "real life", I am, as already mentioned, a student in the fields of Computer Science, Electrical Engineering, Information Technologies and Anglistics. Due to my (only partially voluntary) huge engagement in the open source world, these things are a bit vacant right now however.

I have also been working as a project teacher at a Gymnasium (public school) for several years now. I took up that work some time around 2005 when still attending that school myself and have continued it until today. I had also been running the (kind of very advanced) network of that school together with a team of very interested and talented students aged 11 to 15 years, who took the chance to learn a lot about open source and networking, before I left the school to help build another school's informational education concept from scratch.

That said, one might see me as a kind of "glue" between school kids and the older generation of teachers, as well as between the open source ecosystem and the (even more complex) educational ecosystem.

When I am not busy with open source or education, I like Geocaching and cycling.

How did you get in contact with the Skolelinux / Debian Edu project?

I think that happened some time around 2009 when I first attended FrOSCon and visited the project booth. I think I wasn't too interested back then because I used to have an attitude of disliking software that does too much stuff on its own. Maybe I was too inexperienced to realise the upsides of an "out-of-the-box" solution ;).

The first time I actively talked to Skolelinux people was at OpenRheinRuhr 2011, when the BiscuIT project, home-grown software used by my school for various really cool things from timetables and class contact lists to lunch ordering, student ID card printing and project elections, first got to a stage where it could have been published. I asked the Skolelinux guys running the booth if the project was interested in it and gave a small demonstration, but there wasn't any real feedback and the guys seemed rather uninterested.

After I left the school where I developed the software, it got mostly lost, but I am now reimplementing it for my new school. I have reusability and compatibility in mind, and I hope there will be a new basis for contributing it to the Skolelinux project ;)!

What do you see as the advantages of Skolelinux / Debian Edu?

The most important advantage seems to be that it "just works". After overcoming some minor (but still very annoying) glitches in the installer, I got a fully functional, working school network, without the month-long hassle I experienced when setting all that up from scratch in earlier years. And above that, it rocked - I didn't have any real hardware at hand, because the school was just founded and has no money whatsoever, so I installed a combined server (main server, terminal services and workstation) in a VM on my personal notebook, bridging the LTSP network interface to the ethernet port, and then PXE-booted the Windows notebooks that were lying around from it. I could use 8 clients without any performance issues, by using a tiny little VM on a tiny little notebook. I think that's enough to say that it rocks!

Secondly, there are marketing reasons. Life's bad, and so no politician will ever permit a setup described as "Debian, a universal operating system, with some really cool educational tools" while they will be just fine with "Skolelinux, a single-purpose solution for your school network", even if both turn out to be the very same thing (yes, this is unfair towards the Skolelinux project, and must not be taken too seriously - you get the idea, anyway).

What do you see as the disadvantages of Skolelinux / Debian Edu?

I have not been involved with Skolelinux long enough to really answer this question in a fair way. Thus, please allow me to put it in other words: "What do you expect from Skolelinux to keep liking it?" I can list a few points about that:

  • always strive to get all things integrated into Debian upstream
  • be open to discussion about changes and the like, even with newcomers
  • be helpful at being helpful ;)

I'm really sorry I cannot say much more about that :(!

Which free software do you use daily?

First of all, all software I use is free and open. I have abandoned all non-free software (except for firmware on my darned phone) this year.

I run Debian GNU/Linux on all PC systems I use. On that, I mostly run text tools. I use mksh as my shell, jupp as a very advanced text editor (I even got the developer to help me write a script/macro based full-featured student management software with the two), mcabber for XMPP and irssi for IRC. For that overly coloured world called the WWW, I use Iceweasel (Firefox). Oh, and mutt for e-mail.

However, while I am personally aware of the fact that text tools are more efficient and powerful than anything else, I also use (or at least operate) some tools that are suitable to bring open source to kids. One of these things is Jappix, which I already introduced to some kids even before they got aware of Facebook, making them see for themselves that they do not need Facebook now ;).

Which strategy do you believe is the right one to use to get schools to use free software?

Well, that's a two-sided thing. One side is what I believe, and one side is what I have experienced.

I believe that the right strategy is showing them the benefits. But that won't work out until the acceptance of free alternatives grows globally. What I mean is that if all the kids are almost forced to use Windows, Facebook, Skype, you name it, at home, they will not see why they would want to use alternatives at school. I have seen students take a seat in front of a fully-functional, modern Debian desktop that could do anything their Windows at home could do, and they just refused to use it because "Linux sucks". It is something that makes the council of our city spend around 600000 € to buy software - not including hardware, mind you - for operating school networks, and for installing a system that, as has been proved, does not work. For those of you readers who are good at maths, have you already found out how many lives could have been saved with that money if we had instead used it to bring education to parts of the world that need it? I have, and found it to be nothing less dramatic than plain criminal.

That said, the only feasible way appears to be the bottom up method. We have to bring free software to kids and parents. I have founded an association named Teckids here in Germany that does just that. We organise several events for kids and adolescents in the area of free and open source software, for example the FrogLabs, which share staff with Teckids and are the youth programme of the Free and Open Source Software Conference (FrOSCon). We do a lot more than most other conferences - this year, we first offered the FrogLabs as a holiday camp for kids aged 10 to 16. It was a huge success, with approx. 30 kids taking part and learning with and about free software through a whole weekend. All of us had a lot of fun, and the results were really exciting.

Apart from that, we are preparing a campaign that is supposed to bring the message of free alternatives to stuff kids use every day to them and their parents, e.g. the use of Jabber / Jappix instead of Facebook and Skype. To make that possible, we are planning to get together a team of clever kids who understand very well what their peers need and can bring it across to them. So we will have a peer-driven network of adolescents who teach each other and collect feedback from the community of minors. We then take that feedback and our own experience to work closely with open source projects, such as Skolelinux or Jappix, at improving their software in a way that makes it more and more attractive for the target group. At least I hope that we will have good cooperation with Skolelinux in the future ;)!

So in conclusion, what I believe is that, if it weren't for the world being so bad, it should be very clear to the political decision makers that the only way to go nowadays is free software for various reasons, but I have learnt that the only way that seems to work is bottom up.

6th December 2013

It has been a while since I managed to publish the last interview, but the Debian Edu / Skolelinux community is still going strong, and yesterday we even had a new school administrator show up on #debian-edu to share his success story with installing Debian Edu at their school. This time I have been able to get some helpful comments from the creator of Knoppix, Klaus Knopper, who was involved in a Skolelinux project in Germany a few years ago.

Who are you, and how do you spend your days?

I am Klaus Knopper. I have a master's degree in electrical engineering, and am currently a professor in information management at the University of Applied Sciences Kaiserslautern in Germany, as well as a freelance Open Source software developer and consultant.

All of this is pretty much the work I spend my days with. Apart from teaching, I'm also conducting some more or less experimental projects like the Knoppix GNU/Linux live system (Debian-based like Skolelinux), ADRIANE (a blind-friendly talking desktop system) and LINBO (Linux-based network boot console, a fast remote install and repair system supporting various operating systems).

How did you get in contact with the Skolelinux / Debian Edu project?

The credit for this has to go to Kurt Gramlich, who is the German coordinator for Skolelinux. We were looking for an all-in-one open source community-supported distribution for schools, and Kurt introduced us to Skolelinux for this purpose.

What do you see as the advantages of Skolelinux / Debian Edu?

  • Quick installation,
  • works (almost) out of the box,
  • contains many useful software packages for teaching and learning,
  • is a purely community-based distro and not controlled by a single company,
  • has a large number of supporters and teachers who share their experience and problem solutions.

What do you see as the disadvantages of Skolelinux / Debian Edu?

  • Skolelinux is - as we had to learn - not easily upgradable to the next version. As opposed to its genuine Debian base, upgrading to a new version means a full new installation from scratch to get it working again reliably.
  • Skolelinux is based on Debian/stable, and is therefore always a little outdated in terms of program versions compared to Edubuntu or similar educational Linux distros, which rather use Debian/testing as their base.
  • Skolelinux has some very self-opinionated and stubborn default configuration which in my opinion adds unnecessary complexity and is not always suitable for a school's needs. The preset network configuration is actually a core defining feature of Skolelinux and not easy to change, so schools sometimes have to change their network configuration to make it "Skolelinux-compatible".
  • Some proposed extensions, which were made available as contributions, like a secure examination mode and lecture material distribution and collection, were not accepted into the mainline Skolelinux development and are now not easy to maintain in the future because of Skolelinux's somewhat nondeterministic update schemes.
  • Skolelinux has only a very tiny number of base developers compared to Debian.

For these reasons, and based on experience from our project, I would now rather consider using plain Debian for schools next time, until Skolelinux is more closely integrated into Debian and becomes upgradeable without reinstallation.

Which free software do you use daily?

GNU/Linux with LXDE desktop, bash for interactive dialog and programming, texlive for documentation and correspondence, occasionally LibreOffice for document format conversion. Various programming languages for teaching.

Which strategy do you believe is the right one to use to get schools to use free software?

Strong arguments are

  • Knowledge is free, and so should be methods and tools for teaching and learning.
  • Students can learn with and use the same software at school, at home, and at their working place without running into license or conversion problems.
  • Closed source or proprietary software hides knowledge rather than exposing it, and proprietary software vendors try to bind customers to certain products. But teachers need to teach science, not products.
  • If you have everything you need for daily work as open source, what would you need proprietary software for?

30th November 2013

If you want the ability to electronically communicate directly with your neighbors and friends using a network controlled by your peers instead of centrally controlled by a few corporations, or would like to experiment with interesting network technology, the Dugnadsnett for alle i Oslo might be the project for you. 39 mesh nodes are currently being planned, in the freshly started initiative from NUUG and Hackeriet to create a wireless community network. The work is inspired by Freifunk, Athens Wireless Metropolitan Network, Roofnet and other successful mesh networks around the globe. Two days ago we held a workshop to try to get people started on setting up their own mesh node, and there we decided to create a new mailing list dugnadsnett (at) nuug.no and IRC channel #dugnadsnett.no to coordinate the work. See also the NUUG blog post announcing the mailing list and IRC channel.

24th November 2013

After many years' break from the package and a vain hope that development would be continued by someone else, I finally pulled my act together this morning and wrapped up a new release of chrpath, the command line tool to modify the rpath and runpath of already compiled ELF programs. The update was triggered by the persistence of Isha Vishnoi at IBM, who needed a new config.guess file to get support for the ppc64le architecture (powerpc 64-bit Little Endian) he is working on. I checked the Debian, Ubuntu and Fedora packages for interesting patches (but failed to find the source of the OpenSUSE and Mandriva packages), and found quite a few nice fixes. These are the release notes:

New in 0.15 released 2013-11-24:

  • Updated config.sub and config.guess from the GNU project to work with newer architectures. Thanks to Isha Vishnoi for the heads up.
  • Updated README with current URLs.
  • Added byteswap fix found in Ubuntu, credited Jeremy Kerr and Matthias Klose.
  • Added missing help for -k|--keepgoing option, using patch by Petr Machata found in Fedora.
  • Rewrite removal of RPATH/RUNPATH to make sure the entry in .dynamic is a NULL terminated string. Based on patch found in Fedora credited Axel Thimm and Christian Krause.

You can download the new version 0.15 from alioth. Please let us know via the Alioth project if something is wrong with the new release. The test suite did not discover any old errors, so if you find a new one, please also include a testsuite check.
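
If you have never used chrpath, here is a hedged example of typical use; the binary name is made up and the output shown is approximate:

% chrpath -l ./hello
./hello: RPATH=/usr/local/lib
% chrpath -r /opt/lib ./hello
./hello: RPATH=/usr/local/lib
./hello: new RPATH: /opt/lib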

21st November 2013

Drones, flying robots, are getting more and more popular. The best known ones are the killer drones used by some governments to murder people they do not like without giving them the chance of a fair trial, but the technology has many good uses too, from mapping and forest maintenance to photography and search and rescue. I am sure it is just a question of time before "bad drones" are in the hands of private enterprises and petty criminals, not only state criminals. The drone technology is very useful and very dangerous. To have some control over the use of drones, I agree with Daniel Suarez in his TED talk "The kill decision shouldn't belong to a robot", where he suggested this little gem to keep the good while limiting the bad use of drones:

Each robot and drone should have a cryptographically signed I.D. burned in at the factory that can be used to track its movement through public spaces. We have license plates on cars, tail numbers on aircraft. This is no different. And every citizen should be able to download an app that shows the population of drones and autonomous vehicles moving through public spaces around them, both right now and historically. And civic leaders should deploy sensors and civic drones to detect rogue drones, and instead of sending killer drones of their own up to shoot them down, they should notify humans to their presence. And in certain very high-security areas, perhaps civic drones would snare them and drag them off to a bomb disposal facility.

But notice, this is more an immune system than a weapons system. It would allow us to avail ourselves of the use of autonomous vehicles and drones while still preserving our open, civil society.

The key is that every citizen should be able to read the radio beacons sent from the drones in the area, to be able to check both the government's and others' use of drones. For such control to be effective, everyone must be able to do it. What should such a beacon contain? At least the formal owner, purpose, contact information and GPS location. Probably also the origin and target position of the current flight. And perhaps some registration number to be able to look up the drone in a central database tracking their movements. Robots should not have privacy. It is people who need privacy.

13th November 2013

Today NUUG and Hackeriet announced our plans to join forces and create a wireless community network in Oslo. The workshop to help people get started will take place Thursday 2013-11-28, but we are already collecting the geolocation of people joining forces to make this happen. We have 9 locations plotted on the map, but we will need more before we have a connected mesh spread across Oslo. If this sounds interesting to you, please join us at the workshop. If you are too impatient to wait 15 days, please join us on the IRC channel #nuug on irc.freenode.net right away. :)

10th November 2013

Continuing my research into mesh networking, I was recommended to use TP-Link 3040 and 3600 access points as mesh nodes, and the pair I bought arrived on Friday. Here are my notes on how to set up the MR3040 as a mesh node using OpenWrt.

I started by following the instructions on the OpenWRT wiki for TL-MR3040, and downloaded the recommended firmware image (openwrt-ar71xx-generic-tl-mr3040-v2-squashfs-factory.bin) and uploaded it into the original web interface. The flashing went fine, and the machine was available via telnet on the ethernet port. After logging in and setting the root password, ssh was available and I could start to set it up as a batman-adv mesh node.

I started off by reading the instructions from Wireless Africa, which had quite a lot of useful information, but eventually I followed the recipe from the Open Mesh wiki for using batman-adv on OpenWrt. A small snag was the fact that the opkg install kmod-batman-adv command did not work as it should. The batman-adv kernel module would fail to load because its dependency crc16 was not already loaded. I reported the bug to the OpenWrt project and hope it will be fixed soon. But the problem only seems to affect initial testing of batman-adv, as the configuration seems to work when booting from scratch.

The setup is done using files in /etc/config/. I did not bridge the Ethernet and mesh interfaces this time, to be able to hook up the box on my local network and log into it for configuration updates. The following files were changed and look like this after modifying them:

/etc/config/network


config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config globals 'globals'
        option ula_prefix 'fdbf:4c12:3fed::/48'

config interface 'lan'
        option ifname 'eth0'
        option type 'bridge'
        option proto 'dhcp'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'
        option hostname 'tl-mr3040'
        option ip6assign '60'

config interface 'mesh'
        option ifname 'adhoc0'
        option mtu '1528'
        option proto 'batadv'
        option mesh 'bat0'

/etc/config/wireless


config wifi-device 'radio0'
        option type 'mac80211'
        option channel '11'
        option hwmode '11ng'
        option path 'platform/ar933x_wmac'
        option htmode 'HT20'
        list ht_capab 'SHORT-GI-20'
        list ht_capab 'SHORT-GI-40'
        list ht_capab 'RX-STBC1'
        list ht_capab 'DSSS_CCK-40'
        option disabled '0'

config wifi-iface 'wmesh'
        option device 'radio0'
        option ifname 'adhoc0'
        option network 'mesh'
        option encryption 'none'
        option mode 'adhoc'
        option bssid '02:BA:00:00:00:01'
        option ssid 'meshfx@hackeriet'

/etc/config/batman-adv


config 'mesh' 'bat0'
        option interfaces 'adhoc0'
        option 'aggregated_ogms'
        option 'ap_isolation'
        option 'bonding'
        option 'fragmentation'
        option 'gw_bandwidth'
        option 'gw_mode'
        option 'gw_sel_class'
        option 'log_level'
        option 'orig_interval'
        option 'vis_mode'
        option 'bridge_loop_avoidance'
        option 'distributed_arp_table'
        option 'network_coding'
        option 'hop_penalty'

# yet another batX instance
# config 'mesh' 'bat5'
#       option 'interfaces' 'second_mesh'

The mesh node is now operational. I have yet to test its range, but I hope it is good. I have not yet tested the TP-Link 3600 box, which is still wrapped up in plastic.
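
To verify that the node actually joined the mesh, batctl on the node can show the interfaces it uses and the originators (mesh neighbours) it sees; a hedged sketch:

batctl if
batctl o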

2nd November 2013

If one of the points of switching to a new init system in Debian is to get rid of huge init.d scripts, I doubt we need to switch away from sysvinit and init.d scripts at all. Here is an example init.d script, i.e. a rewrite of /etc/init.d/rsyslog:

#!/lib/init/init-d-script
### BEGIN INIT INFO
# Provides:          rsyslog
# Required-Start:    $remote_fs $time
# Required-Stop:     umountnfs $time
# X-Stop-After:      sendsigs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: enhanced syslogd
# Description:       Rsyslog is an enhanced multi-threaded syslogd.
#                    It is quite compatible to stock sysklogd and can be 
#                    used as a drop-in replacement.
### END INIT INFO
DESC="enhanced syslogd"
DAEMON=/usr/sbin/rsyslogd

Pretty minimalistic, if you ask me... For the record, the original sysv-rc script was 137 lines, and the above is just 15 lines, most of it meta info/comments.

How to do this, you ask? Well, one creates a new script /lib/init/init-d-script looking something like this:

#!/bin/sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.2-14) to ensure that this file is present
# and status_of_proc is working.
. /lib/lsb/init-functions

#
# Function that starts the daemon/service
#
do_start()
{
	# Return
	#   0 if daemon has been started
	#   1 if daemon was already running
	#   2 if daemon could not be started
	start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \
		|| return 1
	start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \
		$DAEMON_ARGS \
		|| return 2
	# Add code here, if necessary, that waits for the process to be ready
	# to handle requests from services started subsequently which depend
	# on this one.  As a last resort, sleep for some time.
}

#
# Function that stops the daemon/service
#
do_stop()
{
	# Return
	#   0 if daemon has been stopped
	#   1 if daemon was already stopped
	#   2 if daemon could not be stopped
	#   other if a failure occurred
	start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --name $NAME
	RETVAL="$?"
	[ "$RETVAL" = 2 ] && return 2
	# Wait for children to finish too if this is a daemon that forks
	# and if the daemon is only ever run from this initscript.
	# If the above conditions are not satisfied then add some other code
	# that waits for the process to drop all resources that could be
	# needed by services started subsequently.  A last resort is to
	# sleep for some time.
	start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON
	[ "$?" = 2 ] && return 2
	# Many daemons don't delete their pidfiles when they exit.
	rm -f $PIDFILE
	return "$RETVAL"
}

#
# Function that sends a SIGHUP to the daemon/service
#
do_reload() {
	#
	# If the daemon can reload its configuration without
	# restarting (for example, when it is sent a SIGHUP),
	# then implement that here.
	#
	start-stop-daemon --stop --signal 1 --quiet --pidfile $PIDFILE --name $NAME
	return 0
}

SCRIPTNAME=$1
scriptbasename="$(basename $1)"
echo "SN: $scriptbasename"
if [ "$scriptbasename" != "init-d-library" ] ; then
    script="$1"
    shift
    . $script
else
    exit 0
fi

NAME=$(basename $DAEMON)
PIDFILE=/var/run/$NAME.pid

# Exit if the package is not installed
#[ -x "$DAEMON" ] || exit 0

# Read configuration variable file if it is present
[ -r /etc/default/$NAME ] && . /etc/default/$NAME

# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

case "$1" in
  start)
	[ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME"
	do_start
	case "$?" in
		0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
		2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
	esac
	;;
  stop)
	[ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME"
	do_stop
	case "$?" in
		0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;;
		2) [ "$VERBOSE" != no ] && log_end_msg 1 ;;
	esac
	;;
  status)
	status_of_proc "$DAEMON" "$NAME" && exit 0 || exit $?
	;;
  #reload|force-reload)
	#
	# If do_reload() is not implemented then leave this commented out
	# and leave 'force-reload' as an alias for 'restart'.
	#
	#log_daemon_msg "Reloading $DESC" "$NAME"
	#do_reload
	#log_end_msg $?
	#;;
  restart|force-reload)
	#
	# If the "reload" option is implemented then remove the
	# 'force-reload' alias
	#
	log_daemon_msg "Restarting $DESC" "$NAME"
	do_stop
	case "$?" in
	  0|1)
		do_start
		case "$?" in
			0) log_end_msg 0 ;;
			1) log_end_msg 1 ;; # Old process is still running
			*) log_end_msg 1 ;; # Failed to start
		esac
		;;
	  *)
		# Failed to stop
		log_end_msg 1
		;;
	esac
	;;
  *)
	echo "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" >&2
	exit 3
	;;
esac

:

It is based on /etc/init.d/skeleton, and could be improved quite a lot. I did not really polish or optimize the approach, so it might not always work out of the box, but you get the idea.

A better argument for switching init system in Debian than reducing the size of init scripts (which is a good thing to do anyway) is to get a boot system that is able to handle kernel events sensibly and robustly, and does not depend on the boot running sequentially. The boot and the kernel have not behaved sequentially in years.

1st November 2013

The SPICE protocol for remote display access is the preferred solution with oVirt and RedHat Enterprise Virtualization, and I was sad to discover the other day that the browser plugin needed to use these systems seamlessly was missing in Debian. The request for a package was from 2012-04-10 with no progress since 2013-04-01, so I decided to wrap up a package based on the great work from Cajus Pollmeier and put it in a collab-maint maintained git repository to get a package I could use. I would very much like others to help me maintain the package (or just take over, I do not mind), but as no-one had volunteered so far, I just uploaded it to NEW. I hope it will be available in Debian in a few days.

The source is now available from http://anonscm.debian.org/gitweb/?p=collab-maint/spice-xpi.git;a=summary.

Tags: debian, english.
27th October 2013

The vmdebootstrap program is a very nice system to create virtual machine images. It creates an image file, adds a partition table, mounts it and runs debootstrap in the mounted directory to create a Debian system on a stick. Yesterday, I decided to try to teach it how to make images for the Raspberry Pi, as part of a plan to simplify the build system for the FreedomBox project. The FreedomBox project already uses vmdebootstrap for the VirtualBox images, but its current build system uses a multistrap based system for the Dreamplug images, and lacks support for the Raspberry Pi.

Armed with the knowledge on how to build "foreign" (aka non-native architecture) chroots for the Raspberry Pi, I dived into the vmdebootstrap code and adjusted it to be able to build armel images on my amd64 Debian laptop. I ended up giving vmdebootstrap five new options, allowing me to replicate the image creation process I use to make Debian Jessie based mesh node images for the Raspberry Pi. First, the --foreign /path/to/binfmt_handler option tells vmdebootstrap to call debootstrap with --foreign and to copy the handler into the generated chroot before running the second stage. This allows vmdebootstrap to create armel images on an amd64 host. Next I added two new options, --bootsize size and --boottype fstype, to teach it to create a separate /boot/ partition with the given file system type, allowing me to create an image with a vfat partition for the /boot/ stuff. I also added a --variant variant option to allow me to create smaller images without the Debian base system packages installed. Finally, I added an option --no-extlinux to tell vmdebootstrap not to install extlinux as a boot loader. It is not needed on the Raspberry Pi and probably most other non-x86 architectures. The changes were accepted by the upstream author of vmdebootstrap yesterday and today, and are now available from the upstream project page.

To use it to build a Raspberry Pi image using Debian Jessie, first create a small script (the customize script) to add the non-free binary blob needed to boot the Raspberry Pi and the APT source list:

#!/bin/sh
set -e # Exit on first error
rootdir="$1"
cd "$rootdir"
cat <<EOF > etc/apt/sources.list
deb http://http.debian.net/debian/ jessie main contrib non-free
EOF
# Install non-free binary blob needed to boot the Raspberry Pi.  This
# installs a kernel somewhere too.
wget https://raw.github.com/Hexxeh/rpi-update/master/rpi-update \
    -O $rootdir/usr/bin/rpi-update
chmod a+x $rootdir/usr/bin/rpi-update
mkdir -p $rootdir/lib/modules
touch $rootdir/boot/start.elf
chroot $rootdir rpi-update

Next, fetch the latest vmdebootstrap script and call it like this to build the image:

sudo ./vmdebootstrap \
    --variant minbase \
    --arch armel \
    --distribution jessie \
    --mirror http://http.debian.net/debian \
    --image test.img \
    --size 600M \
    --bootsize 64M \
    --boottype vfat \
    --log-level debug \
    --verbose \
    --no-kernel \
    --no-extlinux \
    --root-password raspberry \
    --hostname raspberrypi \
    --foreign /usr/bin/qemu-arm-static \
    --customize `pwd`/customize \
    --package netbase \
    --package git-core \
    --package binutils \
    --package ca-certificates \
    --package wget \
    --package kmod

The packages being installed are the ones needed by rpi-update to make the image bootable on the Raspberry Pi, with the exception of netbase, which is needed by debootstrap to find /etc/hosts with the minbase variant. I really wish there was a way to set up a Raspberry Pi using only packages in the Debian archive, but that is not possible as far as I know, because it boots from the GPU using a non-free binary blob.

The build host needs debootstrap, kpartx and qemu-user-static and probably a few other packages installed. I have not checked the complete build dependency list.
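
Something like this should cover the most obvious ones (a hedged guess, given the incomplete list):

sudo apt-get install debootstrap kpartx qemu-user-static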

The resulting image will not use the hardware floating point unit on the Raspberry Pi, because the armel architecture in Debian is not optimized for that use. So the images created will be a bit slower than Raspbian based images.

21st October 2013

The last few days I have been experimenting with the batman-adv mesh technology. I want to gain some experience to see if it will fit the Freedombox project, and together with my neighbors try to build a mesh network around the park where I live. Batman-adv is a layer 2 mesh system ("ethernet" in other words), where the mesh network appears as if all the mesh clients are connected to the same switch.

My hardware of choice was the Linksys WRT54GL routers I had lying around, but I've been unable to get them working with batman-adv. So instead, I started playing with a Raspberry Pi, and tried to get it working as a mesh node. My idea is to use it to create a mesh node which functions as a switch port, where everything connected to the Raspberry Pi ethernet plug is connected (bridged) to the mesh network. This allows me to hook a wifi base station like the Linksys WRT54GL to the mesh by plugging it into a Raspberry Pi, allowing non-mesh clients to hook up to the mesh. This in turn is useful for Android phones using the Serval Project voip client, allowing everyone around the playground to phone and message each other for free. The reason is that Android phones do not see ad-hoc wifi networks (they are filtered away from the GUI view), and can not join the mesh without being rooted. But if they are connected using a normal wifi base station, they can talk to every client on the local network.
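
The switch-port bridging itself is straightforward. Here is a hedged sketch using bridge-utils, assuming the interfaces on the node are named eth0 and bat0:

brctl addbr br0
brctl addif br0 eth0
brctl addif br0 bat0
ip link set br0 up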

To get this working, I've created a Debian package meshfx-node and a script build-rpi-mesh-node to create the Raspberry Pi boot image. I'm using Debian Jessie (and not Raspbian), to get more control over the packages available. Unfortunately a huge binary blob needs to be inserted into the boot image to get it booting, but I'll ignore that for now. Also, as Debian lacks support for the CPU features available in the Raspberry Pi, the system does not use the hardware floating point unit. I hope the routing performance isn't affected by the lack of hardware FPU support.

To create an image, run the following with a sudo enabled user after inserting the target SD card into the build machine:

% wget -O build-rpi-mesh-node \
    https://raw.github.com/petterreinholdtsen/meshfx-node/master/build-rpi-mesh-node
% sudo bash -x ./build-rpi-mesh-node > build.log 2>&1
% dd if=/root/rpi/rpi_basic_jessie_$(date +%Y%m%d).img of=/dev/mmcblk0 bs=1M
%

Booting with the resulting SD card on a Raspberry Pi with a USB wifi card inserted should give you a mesh node. At least it does for me with the wifi card I am using. The default mesh settings are the ones used by the Oslo mesh project at Hackeriet, as I mentioned in an earlier blog post about this mesh testing.

The mesh node was not horribly expensive either. I bought everything over the counter in shops nearby. If I had ordered online from the lowest bidder, the price would have been significantly lower:

Supplier          Model                     NOK
Teknikkmagasinet  Raspberry Pi model B      349.90
Teknikkmagasinet  Raspberry Pi type B case   99.90
Lefdal            Jensen Air:Link 25150     295.-
Clas Ohlson       Kingston 16 GB SD card    199.-
Total cost                                  943.80

Now my mesh network at home consists of one laptop in the basement connected to my production network, one Raspberry Pi node on the first floor that can be seen by my neighbor across the park, and one play-node I use to develop the image building script. And sometimes I hook up my workhorse laptop to the mesh to test it. I look forward to figuring out what kind of latency the batman-adv setup will give, and how much packet loss we will experience around the park. :)

19th October 2013

Back in 2010, I created a Perl library to talk to the Spykee robot (with two belts, wifi, USB and Linux) and made it available from my web page. Today I concluded that it should move to a site that makes it easier to cooperate with others, and moved it to github. If you own a Spykee robot, you might want to check out the libspykee-perl github repository.

Tags: english, nuug, robot.
15th October 2013

The last few days I came across a few good causes that should get wider attention. I recommend signing and donating to each one of these. :)

Via Debian Project News for 2013-10-14 I came across the Outreach Program for Women, which is a Google Summer of Code like initiative to get more women involved in free software. One Debian sponsor has offered to match any donation to Debian earmarked for this initiative. I donated a few minutes ago, and hope you will too. :)

And the Electronic Frontier Foundation just announced plans to create video documentaries about the excessive spying on every Internet user that takes place these days, and their need to fund the work. I've already donated. Are you next?

For my Norwegian audience, the organisation Studentenes og Akademikernes Internasjonale Hjelpefond is collecting signatures for a statement under the heading Bloggers United for Open Access for those of us asking for more focus on open access in the Norwegian government. So far 499 signatures. I hope you will sign it too.

11th October 2013

Wireless mesh networks are self organising and self healing networks that can be used to connect computers across small and large areas, depending on the radio technology used. Normal wifi equipment can be used to create home made radio networks, and there are several successful examples like Freifunk and Athens Wireless Metropolitan Network (see wikipedia for a large list) around the globe. To give you an idea how they work, check out the nice overview of the Kiel Freifunk community, which can be seen from their dynamically updated node graph and map, where one can see how the mesh nodes automatically handle routing and recover from nodes disappearing. There is also a small community mesh network group in Oslo, Norway, and that is the main topic of this blog post.

I've wanted to check out mesh networks for a while now, and hoped to do it as part of my involvement with the NUUG member organisation community, and my recent involvement in the Freedombox project finally led me to give mesh networks some priority, as I suspect a Freedombox should use mesh networks to connect neighbours and family when possible, given that most communication between people is between those nearby (as shown for example by research on Facebook communication patterns). It also allows people to communicate without any central hub that those who want to listen in on the private communication of citizens can tap into, which has become more and more important over the years.

So far I have only been able to find one group of people in Oslo working on community mesh networks, over at the hack space Hackeriet at Husmania. They seem to have started with a Freifunk based effort using OLSR, called the Oslo Freifunk project, but that effort is now dead and the people behind it have moved on to a batman-adv based system called meshfx. Unfortunately the wiki site for the Oslo Freifunk project can no longer be updated to reflect this fact, so the old project page can't be changed to point to the new project. A while back, the people at Hackeriet invited people from the Freifunk community to Oslo to talk about mesh networks. I came across this video where Hans Jørgen Lysglimt interviews the speakers about this talk (from youtube):

I mentioned OLSR and batman-adv, which are mesh routing protocols. There are heaps of different protocols, and I am still struggling to figure out which one would be "best" for some definitions of best, but given that the community mesh group in Oslo is so small, I believe it is best to hook up with the existing one instead of trying to create a completely different setup, and thus I have decided to focus on batman-adv for now. It sure helps me to know that the very cool Serval project in Australia is using batman-adv as their meshing technology when creating a self organizing and self healing telephony system for disaster areas and less industrialized communities. Check out this cool video presenting that project (from youtube):

According to the wikipedia page on Wireless mesh network there are around 70 competing schemes for routing packets across mesh networks, and OLSR, B.A.T.M.A.N. and B.A.T.M.A.N. advanced are protocols used by several free software based community mesh networks.

The batman-adv protocol is a bit special, as it provides layer 2 (as in ethernet) routing, allowing IPv4 and IPv6 to work on the same network. One way to think about it is that it provides a mesh based VLAN you can bridge to or handle like any other VLAN connected to your computer. The required drivers are already in the Linux kernel, at least since Debian Wheezy, and it is fairly easy to set up. A good introduction is available from the Open Mesh project. These are the key settings needed to join the Oslo meshfx network:

Setting                    Value
Protocol / kernel module   batman-adv
ESSID                      meshfx@hackeriet
Channel / Frequency        11 / 2462
Cell ID                    02:BA:00:00:00:01

The reason for setting the ad-hoc wifi Cell ID is to work around bugs in the firmware used in wifi cards and wifi drivers. (See a nice post from VillageTelco, "Information about cell-id splitting, stuck beacons, and failed IBSS merges!", for details.) When these settings are activated and you have some other mesh node nearby, your computer will be connected to the mesh network and can communicate with any mesh node that is connected to any of the nodes in your network of nodes. :)
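
As a hedged sketch of activating these settings by hand on a Debian laptop (the interface name wlan0 is an assumption, and batctl comes from the batctl package):

ip link set wlan0 down
iw dev wlan0 set type ibss
ip link set wlan0 mtu 1528
ip link set wlan0 up
iw dev wlan0 ibss join meshfx@hackeriet 2462 02:BA:00:00:00:01
batctl if add wlan0
ip link set bat0 up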

My initial plan was to reuse my old Linksys WRT54GL as a mesh node, but that seems to be very hard, as I have not been able to locate a firmware supporting batman-adv. If anyone knows how to use that old wifi access point with batman-adv these days, please let me know.

If you find this project interesting and want to join, please join us on IRC, either channel #oslohackerspace or #nuug on irc.freenode.net.

While investigating mesh networks in Oslo, I came across an old research paper from the University of Stavanger and Telenor Research and Innovation called The reliability of wireless backhaul mesh networks, and elsewhere learned that Telenor has been experimenting with mesh networks at Grünerløkka in Oslo. So mesh networks are also interesting for commercial companies, even though Telenor found it hard to come up with a good business plan for mesh networking and, as far as I know, has closed down the experiment. Perhaps Telenor or others would be interested in a cooperation?

Update 2013-10-12: I was just told by the Serval project developers that they no longer use batman-adv (but are compatible with it), but their own crypto based mesh system.

8th October 2013

The other day I was pleased and surprised to discover that Marcelo Salvador had published a video on Youtube showing how to install the standalone Debian Edu / Skolelinux profile. This is the profile intended for use at home or on laptops that should not be integrated into the provided network services (no central home directory, no Kerberos / LDAP directory etc, in other words a single user machine). The result is 11 minutes long, and shows some user applications (seemingly rather randomly picked). It missed a few of my favorites like celestia, planets and chromium showing the Zygote Body 3D model of the human body, but I guess he did not know about those or found other programs more interesting. :) And the video does not show what I believe is one of the most valuable features in Debian Edu: its central school server, making it possible to run hundreds of computers without hard drives by installing one central LTSP server.

Anyway, check out the video, embedded below and linked to above:

Are there other nice videos demonstrating Skolelinux? Please let me know. :)

29th September 2013

A few hours ago, the announcement for the first stable release of Debian Edu Wheezy went out from the Debian publicity team. The complete announcement text can be found at the Debian News section, translated to several languages. Please check it out.

There is one minor known problem that we will fix very soon. One can not install an amd64 Thin Client Server using PXE, as the /var/ partition is too small. A workaround is to extend the partition (use lvresize + resize2fs in tty 2 while installing), as sketched below.
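
A rough sketch of that workaround; the volume group and logical volume names below are assumptions for illustration, so check the real names with lvs first:

    # Switch to a shell on tty 2 with Alt+F2 during installation, then:
    lvs                               # find the volume group and LV holding /var
    lvresize -L +2G /dev/vg00/var     # grow the logical volume (names are assumptions)
    resize2fs /dev/vg00/var           # grow the ext file system to fill the new space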

27th September 2013

The Freedombox project has been going on for a while, and its vision, ideas and solution have been presented in several places. Here is a little collection of videos of talks and presentations of the project.

A larger list is available from the Freedombox Wiki.

In other news, I am happy to report that the Freedombox based on Debian Jessie is coming along quite well, and soon both Owncloud and Tor support should be available for testers of the Freedombox solution. :) In a few weeks I hope everything needed to test it is included in Debian. The withsqlite package is already in Debian, and the plinth package is pending in NEW. The third and vital part of that puzzle is the metapackage/setup framework, which is still pending an upload. Join us on IRC (#freedombox on irc.debian.org) and the mailing list if you want to help make this vision come true.

16th September 2013

The third wheezy based beta release of Debian Edu was wrapped up today. This is the release announcement from Holger Levsen:

Hi,

it is my pleasure to announce the third beta release (beta 2 for short) of Debian Edu / Skolelinux based on Debian Wheezy!

Please test these images extensively. If no new problems are found, we plan to do the final Debian Edu Wheezy release this coming weekend. We are not aware of any major problems or blockers in beta2; if you find something, please notify us immediately!

(More about the remaining steps for the Edu Wheezy release in another mail to the edu list tonight or tomorrow...)

Noteworthy changes and software updates for Debian Edu 7.1+edu0~b2 compared to beta1:

  • The KDE proxy setup has been adjusted to use the provided wpad.dat. This also gets Chromium to use this proxy.
  • Install kdepim-groupware with KDE desktops to make sure korganizer understands ical/dav sources.
  • Increased default maximum size of /var/spool/squid and /skole/backup on the main server.
  • A source DVD image containing all source packages is now available as well.
  • Updates for chromium (29.0.1547.57-1~deb7u1), imagemagick (6.7.7.10-5+deb7u2), php5 (5.4.4-14+deb7u4), libmodplug (0.8.8.4-3+deb7u1+git20130828), tiff (4.0.2-6+deb7u2), linux-image (3.2.0-4-486_3.2.46-1+deb7u1).

Where to get it:

To download the multiarch netinstall CD release you can use

The SHA1SUM of this image is: 3a1c89f4666df80eebcd46c5bf5fedb866f9472f

To download the multiarch USB stick ISO release you can use

The SHA1SUM of this image is: 702d1718548f401c74bfa6df9f032cc3ee16597e

The Source DVD image has the filename debian-edu-7.1+edu0~b2-source-DVD.iso and the SHA1SUM 089eed8b3f962db47aae1f6a9685e9bb2fa30ca5 and is available the same way as the other isos.

How to report bugs

For information how to report bugs please see
http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD or USB stick all other machines can be installed via the network. The school server provides an LDAP database and Kerberos authentication service, centralized home directories, a DHCP server, a web proxy and many other services. The desktop contains more than 60 educational software packages and more are available from the Debian archive, and schools can choose between the KDE, Gnome, LXDE and Xfce desktop environments.

This is the seventh test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

Notes for upgrades from Alpha Prereleases

Alpha based installations should reinstall or downgrade the versions of gosa and libpam-mklocaluser to the ones used in this beta release. Both alpha and beta0 based installations should reinstall or deal with gosa.conf manually; there are two options: (1) Keep gosa.conf and edit this file as outlined on the mailing list. (2) Accept the new version of gosa.conf and replace both contained admin password placeholders with the password hashes found in the old one (backup copy!). In both cases all users need to change their password to make sure a password is set for CIFS access to their home directory.

cheers,
Holger

10th September 2013

I was introduced to the Freedombox project in 2010, when Eben Moglen presented his vision about serving the need of non-technical people to keep their personal information private and within the legal protection of their own homes. The idea is to give people back the power over their network and machines, and return Internet back to its intended peer-to-peer architecture. Instead of depending on a central service, the Freedombox will give everyone control over their own basic infrastructure.

I've intended to join the effort since then, but other tasks have taken priority. But this summer's nasty news about the misuse of trust and privilege exercised by the "western" intelligence gathering communities increased my eagerness to contribute, to the point where I actually started working on the project a while back.

The initial Debian initiative based on the vision from Eben Moglen is to create a simple and cheap Debian based appliance that anyone can hook up in their home to get access to secure and private services and communication. The initial deployment platform has been the Dreamplug, which is a piece of hardware I do not own. So to be able to test what the current Freedombox setup looks like, I had to come up with a way to install it on some hardware I do have access to. I have rewritten the freedom-maker image build framework to use .deb packages instead of only copying setup into the boot images, and thanks to this rewrite I am able to set up any machine supported by Debian Wheezy as a Freedombox, using the previously mentioned deb (and a few support debs for packages missing in Debian).

The current Freedombox setup consists of a set of bootstrapping scripts (freedombox-setup), an administrative web interface (plinth + exmachina + withsqlite), and a privacy enhancing proxy based on privoxy (freedombox-privoxy). There is also a web/javascript based XMPP client (jwchat) trying (unsuccessfully so far) to talk to the XMPP server (ejabberd). The web interface is pluggable, and the goal is to use it to enable OpenID services, mesh network connectivity, use of Tor, etc. Not much of this is really working yet; see the project TODO for links to GIT repositories. Most of the code is on github at the moment. The HTTP proxy is operational out of the box, and the admin web interface can be used to add/remove plinth users. I've not been able to do anything else with it so far, but I know there are several branches spread around github and other places with lots of half baked features.

Anyway, if you want to have a look at the current state, the following recipes should work to give you a test machine to poke at.

Debian Wheezy amd64

  1. Fetch normal Debian Wheezy installation ISO.
  2. Boot from it, either as CD or USB stick.
  3. Press [tab] on the boot prompt and add this as a boot argument to the Debian installer:

    url=http://www.reinholdtsen.name/freedombox/preseed-wheezy.dat
  4. Answer the few language/region/password questions and pick disk to install on.
  5. When the installation is finished and the machine has rebooted a few times, your Freedombox is ready for testing.

Raspberry Pi Raspbian

  1. Fetch a Raspbian SD card image, create SD card.
  2. Boot from SD card, extend file system to fill the card completely.
  3. Log in and add this to /etc/apt/sources.list:

    deb http://www.reinholdtsen.name/freedombox wheezy main
    
  4. Run this as root:

    wget -O - http://www.reinholdtsen.name/freedombox/BE1A583D.asc | \
       apt-key add -
    apt-get update
    apt-get install freedombox-setup
    /usr/lib/freedombox/setup
    
  5. Reboot into your freshly created Freedombox.

You can test it on other architectures too, but because the freedombox-privoxy package is binary, it will only work as intended on the architectures where I have had time to build the binary and put it in my APT repository. But do not let this stop you. It is only a short "apt-get source -b freedombox-privoxy" away. :)
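
Roughly, such a rebuild could look like the sketch below. Note that apt-get source needs a deb-src line, and that the repository carrying source packages is an assumption here:

    # Add a source entry matching the binary one used in the recipe above.
    echo "deb-src http://www.reinholdtsen.name/freedombox wheezy main" \
      >> /etc/apt/sources.list
    apt-get update
    # Fetch the source and build the binary package locally, then install it.
    apt-get source -b freedombox-privoxy
    dpkg -i freedombox-privoxy_*.deb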

Note that by default Freedombox is a DHCP server on the 192.168.1.0/24 subnet, so if this is your subnet be careful and turn off the DHCP server by running "update-rc.d isc-dhcp-server disable" as root.

Please let me know if this works for you, or if you have any problems. We gather on the IRC channel #freedombox on irc.debian.org and the project mailing list.

Once you get your freedombox operational, you can visit http://your-host-name:8001/ to see the state of the plinth welcome screen (a dead end - do not be surprised if you are unable to get past it), and next visit http://your-host-name:8001/help/ to look at the rest of plinth. The default user is 'admin' and the default password is 'secret'.

22nd August 2013

The second wheezy based beta release of Debian Edu was wrapped up today, slightly delayed because of some bugs in the initial Windows integration fixes. This is the release announcement:

New features for Debian Edu 7.1+edu0~b1 released 2013-08-22

These are the release notes for Debian Edu / Skolelinux 7.1+edu0~b1, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD or USB stick all other machines can be installed via the network. The school server provides an LDAP database and Kerberos authentication service, centralized home directories, a DHCP server, a web proxy and many other services. The desktop contains more than 60 educational software packages and more are available from the Debian archive, and schools can choose between the KDE, Gnome, LXDE and Xfce desktop environments.

This is the sixth test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

ALERT: Alpha based installations should reinstall or downgrade the versions of gosa and libpam-mklocaluser to the ones used in this beta release. Both alpha and beta0 based installations should reinstall or deal with gosa.conf manually; there are two options: (1) Keep gosa.conf and edit this file as outlined on the mailing list. (2) Accept the new version of gosa.conf and replace both contained admin password placeholders with the password hashes found in the old one (backup copy!). In both cases every user needs to change their password to make sure a password is set for CIFS access to their home directory.

Software updates

  • Added ssh askpass packages to the default installation, to ensure ssh works also without an attached tty.
  • Added the command-not-found package to the default installation to make it easier to figure out where to find missing command line tools. Please note that the command 'update-command-not-found' has to be run as root to actually make it useful (internet access required).

Other changes

  • Adjusted the USB stick ISO image build to include every tool needed for desktop=xfce installations.
  • Adjust thin-client-server task to work when installing from USB stick ISO image.
  • Made new grub artwork (changed png from indexed to RGB format).
  • Minor cleanup in the CUPS setup.
  • Make sure that bootstrapping of the Samba domain really happens during installation of the main server and adjust SID handling to cope with this.
  • Make Samba passwords changeable (again) via GOsa².
  • Fix generation of LM and NT password hashes via GOsa² to avoid empty password hashes.
  • Adapted Samba machine domain joining to latest change in the smbldap-tools Perl package, fixing bugs blocking Windows machines from joining the Samba domain.

Known issues

  • KDE fails to understand the wpad.dat file provided, causing it to not use the http proxy as it should.
  • Chromium also fails to use the proxy when using the KDE desktop (using the KDE configuration).

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: 1e357f80b55e703523f2254adde6d78b
The SHA1SUM of this image is: 7157f9be5fd27c7694d713c6ecfed61c3edda3b2

To download the multiarch USB stick ISO release you can use

The MD5SUM of this image is: 7a8408ead59cf7e3cef25afb6e91590b
The SHA1SUM of this image is: f1817c031f02790d5edb3bfa0dcf8451088ad119

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

18th August 2013

Earlier, I reported about my problems using an Intel SSD 520 Series 180 GB disk. Friday I was told by IBM that the original disk should be thrown away. And as there was no longer a problem if I bricked the firmware, I decided today to try to install Intel firmware to replace the Lenovo firmware currently on the disk.

I searched the Intel site for firmware, and found issdfut_2.0.4.iso (aka Intel SATA Solid-State Drive Firmware Update Tool), which according to the site should contain the latest firmware for SSD disks. I inserted the broken disk in one of my spare laptops and booted the ISO from a USB stick. The disk was recognized, but the program claimed the newest firmware was already installed and refused to install any Intel firmware. So no change, and the disk is still unable to handle write load. :( I guess the only way to get them working would be if Lenovo releases new firmware. No idea how likely that is. Anyway, just blogging about this test for completeness. I got a working Samsung disk, and see no point in spending more time on the broken disks.

Tags: debian, english.
2nd August 2013

It has been a while since my last update. Since last summer, I have worked on a Norwegian docbook version of the 2004 book Free Culture by Lawrence Lessig, to get a Norwegian text explaining the problems with the copyright law. Yesterday, I finally broke the 90% mark, when counting the number of strings to translate. Due to real life constraints, I had not had time to work on it since March, but when summer arrived, I found time to work on it again. There is still lots of work left, but the first draft is nearing completion. I created a graph to show the progress of the translation:

When the first draft is done, the translated text needs to be proofread, and the remaining formatting problems with images and SVG drawings need to be fixed. There are probably also some missing index entries that need to be added. This can be done by comparing the index entries listed in the SiSU version of the book, or by comparing the English docbook version with the paper version. Last, the colophon page with ISBN numbers etc needs to be wrapped up before the release is done. I should also figure out how to get correct Norwegian sorting of the index pages. All the docbook tools I have tried so far (xmlto, docbook-xsl, dblatex) get the order of symbols and the special Norwegian letters ÆØÅ wrong.

There is still a need for translators and people with docbook knowledge, both to get a good looking book (I still struggle with dblatex, xmlto and docbook-xsl) and to do the draft translation and proof reading. And I would like the figures to be redrawn as SVGs to make them easy to translate. Any SVG master around? There are also some legal terms that are unfamiliar to me. If you want to help, please get in touch with me, and check out the project files currently available from github.

If you are curious what the translated book currently looks like, the updated PDF and EPUB are published on github. The HTML version is published as well, but github hands it out with MIME type text/plain, confusing browsers, so I saw no point in linking to that version.

27th July 2013

The first wheezy based beta release of Debian Edu was wrapped up today. This is the release announcement:

New features for Debian Edu 7.1+edu0~b0 released 2013-07-27

These are the release notes for Debian Edu / Skolelinux 7.1+edu0~b0, based on Debian with codename "Wheezy".

About Debian Edu and Skolelinux

Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network. The school server provides an LDAP database and Kerberos authentication service, centralized home directories, a DHCP server, a web proxy and many other services. The desktop contains more than 60 educational software packages and more are available from the Debian archive, and schools can choose between the KDE, Gnome, LXDE and Xfce desktop environments.

This is the fifth test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release.

ALERT: Alpha based installations should reinstall or downgrade the versions of gosa and libpam-mklocaluser to the ones used in this beta release.

Software updates

  • Switched roaming workstation profiles from wicd to network-manager for network configuration, as wicd didn't work any more.
  • Changed version numbers of the patched gosa and libpam-mklocaluser packages to make sure our locally patched versions will be replaced by the official packages when they are released from Debian. Those installing the alpha version need to reinstall or manually downgrade gosa and libpam-mklocaluser.
  • Added bluetooth tools to the default desktop (bluedevil, blueman).
  • Added tools for sharing the desktop on KDE (krdc, krfb).
  • Added valgrind to the default installation for easier debugging of crash bugs.

Other changes

  • Fixed the artwork package to work with gnome, so it no longer breaks desktop=gnome installations.
  • Adjusted the installer to work when forced to use a proxy with the netinst CD.
  • Fixed the code detecting and setting/loading hardware specific setup/firmware to work more robustly out of the box.
  • Adjusted the Kerberos setup to detect realm and server settings at install time instead of dynamically at run time. This avoids a crash with krb5-auth-dialog on diskless workstations without a DNS name.
  • Worked around a misfeature in network-manager not calling the dhclient exit hooks, making automatic proxy configuration and automatic host name setting at run time work again.
  • Fixed feature setting the default Iceweasel start page from URL fetched from LDAP, to allow schools to set the global default by updating the dc=skole,dc=skolelinux,dc=no LDAP object.
  • Changed default host name on all networked machines to be unique (generated from MAC or reverse DNS) after boot.
  • Adjusted partition sizes to make sure they are big enough.

Known issues

  • Grub is missing the new artwork.
  • KDE fails to understand the wpad.dat file provided, causing it to not use the http proxy as it should.
  • Chromium also fails to use the proxy.

Where to get it

To download the multiarch netinstall CD release you can use

The MD5SUM of this image is: 55d5de9765b6dccd5d9ec33cf1a07109
The SHA1SUM of this image is: 996a1d9517740e4d627d100de2d12b23dd545a3f

To download the multiarch USB stick ISO release you can use

The MD5SUM of this image is: d8f0818c51a78d357de794066f289f69
The SHA1SUM of this image is: 49185ca354e8d0543240423746924f76a6cee733

How to report bugs

http://wiki.debian.org/DebianEdu/HowTo/ReportBugs

17th July 2013

Today I switched to my new laptop. I've previously written about the problems I had with my new Thinkpad X230, which was delivered with a 180 GB Intel SSD disk with Lenovo firmware that did not handle sustained writes. My hardware supplier has been very forthcoming in trying to find a solution, and after first trying another identical 180 GB disk they decided to send me a 256 GB Samsung SSD disk instead to fix it once and for all. The Samsung disk survived the installation of Debian with encrypted disks (filling the disk with random data during installation killed the first two), and I thus decided to trust it with my data. I have installed it as a Debian Edu Wheezy roaming workstation hooked up to my Debian Edu Squeeze main server at home using Kerberos and LDAP, and will use it as my work station from now on.

As this is a solid state disk with no moving parts, I believe the Debian Wheezy default installation needs to be tuned a bit to increase performance and extend the life time of the disk. The Linux kernel and user space applications do not yet adjust automatically to such an environment. To make it easier for myself, I created a draft Debian package ssd-setup to handle this tuning. The source for the ssd-setup package is available from collab-maint, and it is set up to adjust the setup of the machine by just installing the package. If there is any non-SSD disk in the machine, the package will refuse to install, as I did not try to write any logic to sort file systems into SSD and non-SSD file systems.

I consider the package a draft, as I am a bit unsure how to best set up Debian Wheezy with an SSD. It is adjusted to my use case, where I set up the machine with one large encrypted partition (in addition to /boot), put LVM on top of this and set up partitions on top of this again. See the README file in the package source for the references I used to pick the settings. At the moment these parameters are tuned (see the configuration sketch after the list):

  • Set up cryptsetup to pass TRIM commands to the physical disk (adding discard to /etc/crypttab)
  • Set up LVM to pass on TRIM commands to the underlying device (in this case a cryptsetup partition) by changing issue_discards from 0 to 1 in /etc/lvm/lvm.conf.
  • Set relatime as a file system option for ext3 and ext4 file systems.
  • Tell swap to use TRIM commands by adding 'discard' to /etc/fstab.
  • Change I/O scheduler from cfq to deadline using a udev rule.
  • Run fstrim on every ext3 and ext4 file system every night (from cron.daily).
  • Adjust the sysctl values vm.swappiness to 1 and vm.vfs_cache_pressure to 50 to reduce the kernel's eagerness to swap out processes.
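
To make the list more concrete, here is a rough sketch of what those changes look like as configuration snippets. The device, volume group and file names are assumptions for illustration, not necessarily what the ssd-setup package writes:

    ## /etc/crypttab: add "discard" so TRIM reaches the physical disk
    cryptroot /dev/sda2 none luks,discard

    ## /etc/lvm/lvm.conf (devices section): pass TRIM on to the layer below
    issue_discards = 1

    ## /etc/fstab: relatime on ext file systems, discard on swap
    /dev/mapper/vg-root /    ext4 relatime,errors=remount-ro 0 1
    /dev/mapper/vg-swap none swap sw,discard                 0 0

    ## /etc/udev/rules.d/60-ssd-scheduler.rules: deadline scheduler on SSDs only
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

    ## /etc/sysctl.d/30-ssd.conf: make the kernel less eager to swap
    vm.swappiness = 1
    vm.vfs_cache_pressure = 50

    ## /etc/cron.daily/fstrim: trim ext3/ext4 file systems every night
    #!/bin/sh
    fstrim /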

During installation, I cancelled the part where the installer fills the disk with random data, as this would kill the SSD performance for little gain. My goal with the encrypted file system is to ensure that those stealing my laptop end up with a brick and not a working computer. I have no hope of keeping the really resourceful people from getting the data on the disk (see XKCD #538 for an explanation why). Thus I concluded that adding the discard option to crypttab is the right thing to do.

I considered using the noop I/O scheduler, as several sources recommended it for SSDs, but others recommended deadline, and a benchmark I found indicated that deadline might be better for interactive use.

I also considered using the 'discard' file system option for ext3 and ext4, but read that it would give a performance hit every time a file is removed, and thought it best to take that slowdown once a day instead of during my work.

My package does not set up tmpfs on /var/run, /var/lock and /tmp, as this is already done by Debian Edu.

I have not yet started on the user space tuning. I expect iceweasel needs some tuning, and perhaps other applications too, but have not yet had time to investigate those parts.

The package should work on Ubuntu too, but I have not yet tested it there.

As for the answer to the question in the title of this blog post, the only solution I know about is to replace the disk. It might be possible to flash it with Intel firmware instead of the Lenovo firmware, but I have not tried, and did not want to do so without approval from Lenovo, as I wanted to keep the warranty on the disk until a solution was found and they wanted the broken disks back.

Tags: debian, english.
10th July 2013

A few days ago, I wrote about the problems I experienced with my new X230 and its SSD disk, which was dying during installation because it is unable to cope with sustained writes. My supplier is in contact with Lenovo, and they wanted to send a replacement disk to try to fix the problem. They decided to send an identical model, so my hopes for a permanent fix were slim.

Anyway, today I got the replacement disk and tried to install Debian Edu Wheezy with encrypted disk on it. The new disk has the same firmware version as the original. This time my hopes rose slightly as the installation progressed, as the original disk used to die after 4-7% of the disk had been written, while this time it kept going past 10%, 20%, 40% and even past 50%. But around 60%, the disk died again and I was back at square one. I still do not have a new laptop with a disk I can trust. I can not live with a disk that might lock up when I download a new Debian Edu / Skolelinux ISO or other large files. I look forward to hearing from my supplier with the next proposal from Lenovo.

The original disk is marked Intel SSD 520 Series 180 GB, 11S0C38722Z1ZNME35X1TR, ISN: CVCV321407HB180EGN, SA: G57560302, FW: LF1i, 29MAY2013, PBA: G39779-300, LBA 351,651,888, LI P/N: 0C38722, Pb-free 2LI, LC P/N: 16-200366, WWN: 55CD2E40002756C4, Model: SSDSC2BW180A3L 2.5" 6Gb/s SATA SSD 180G 5V 1A, ASM P/N 0C38732, FRU P/N 45N8295, P0C38732.

The replacement disk is marked Intel SSD 520 Series 180 GB, 11S0C38722Z1ZNDE34N0L0, ISN: CVCV315306RK180EGN, SA: G57560-302, FW: LF1i, 22APR2013, PBA: G39779-300, LBA 351,651,888, LI P/N: 0C38722, Pb-free 2LI, LC P/N: 16-200366, WWN: 55CD2E40000AB69E, Model: SSDSC2BW180A3L 2.5" 6Gb/s SATA SSD 180G 5V 1A, ASM P/N 0C38732, FRU P/N 45N8295, P0C38732.

The only difference is in the first number (serial number?), ISN, SA, date and WWN values. I mention all the details here in case someone is able to use the information to find a way to identify the failing disks among working ones (if any such working disk actually exists).

Tags: debian, english.
9th July 2013

This coming Saturday, 2013-07-13, we are organising a combined Debian Edu developer gathering and Debian and Ubuntu bug squashing party in Oslo. It is organised by the member association NUUG and the Debian Edu / Skolelinux project together with the hack space Bitraf.

It starts at 10:00 and continues until late evening. Everyone is welcome, and there is no fee to participate. There is, on the other hand, limited space, with room for only 30 people. Please put your name on the event wiki page if you plan to join us.

5th July 2013

Half a year ago, I reported that I had to find a replacement for my trusty old Thinkpad X41. Unfortunately I did not have much time to spend on it, and it took a while to find a model I believe will do the job, but two days ago the replacement finally arrived. I ended up picking a Thinkpad X230 with an SSD disk (NZDAJMN). I first test installed Debian Edu Wheezy as a roaming workstation, and it seemed to work flawlessly. But my second installation, with an encrypted disk, was not as successful. More on that below.

I had a hard time trying to track down a good laptop, as my most important requirements (robust and with a good keyboard) are never listed in the feature list. But I did get good help from the search feature at Prisjakt, which allowed me to limit the list of interesting laptops based on my other requirements. It was a bit surprising that SSD disks are not disks according to that search interface, so I had to drop the number of disks from my search parameters. I also asked around among friends to get their impressions of keyboards and robustness.

So the new laptop arrived, and it is quite a lot wider than the X41. I am not quite convinced about the keyboard, as it is significantly wider than my old one, and I have to stretch my hand a lot more to reach the edges. But the key response is fairly good and the individual key shape is fairly easy to handle, so I hope I will get used to it. My old X41 was starting to fail, and I really needed a new laptop now. :)

Turning off the touch pad was simple. All it took was a quick visit to the BIOS during boot to disable it.

But there is a fatal problem with the laptop. The 180 GB SSD disk locks up under load, and this happens when installing Debian Wheezy with encrypted disk, while the disk is being filled with random data. I also tested installing Ubuntu Raring, and it happens there too if I reenable the code to fill the disk with random data (it is disabled by default in Ubuntu). And the bug is already known. It was reported to Debian as BTS report #691427 2012-10-25 (journal commit I/O error on brand-new Thinkpad T430s ext4 on lvm on SSD). It is also reported to the Linux kernel developers as Kernel bugzilla report #51861 2012-12-20 (Intel SSD 520 stops working under load (SSDSC2BW180A3L in Lenovo ThinkPad T430s)). It is also reported on the Lenovo forums, both for T430 2012-11-10 and for X230 03-20-2013. The problem does not only affect installation. The reports state that the disk locks up during use if many writes are done to the disk, so there is not much use in working around the installation problem only to end up with a computer that can lock up at any moment. There is even a small C program available that will lock up the hard drive after running a few minutes by writing to a file.
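
I have not checked that exact program, but the general recipe is simply sustained writing; a crude shell equivalent could look like the sketch below (an assumption about the approach, and obviously risky to run on an affected disk):

    # WARNING: may lock up a disk with the firmware bug described above.
    # Repeatedly write 4 GB of zeroes, syncing so the data hits the disk.
    while true; do
        dd if=/dev/zero of=/tmp/writetest bs=1M count=4096 conv=fsync
        rm /tmp/writetest
    done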

I've contacted my supplier and asked how to handle this, and after contacting PCHELP Norway (request 01D1FDP), which handles support requests for Lenovo, his first suggestion was to upgrade the disk firmware. Unfortunately there is no newer firmware available from Lenovo, as my disk already has the most recent one (version LF1i). I hope to hear more from him today and hope the problem can be fixed. :)

Tags: debian, english.
4th July 2013

Half a year ago, I reported that I had to find a replacement for my trusty old Thinkpad X41. Unfortunately I did not have much time to spend on it, but today the replacement finally arrived. I ended up picking a Thinkpad X230 with an SSD disk (NZDAJMN). I first test installed Debian Edu Wheezy as a roaming workstation, and it worked flawlessly. As I write this, it is installing what I hope will be a more final installation, with an encrypted hard drive to ensure any dope head stealing it ends up with an expensive door stop.

I had a hard time trying to track down a good laptop, as my most important requirements (robust and with a good keyboard) are never listed in the feature list. But I did get good help from the search feature at Prisjakt, which allowed me