Welcome to my blog.

Have a look at the most recent posts below, or browse the tag cloud on the right. An archive of all posts is also available.

My first job was providing technical support for users of an ISP that offered e-mail and news over UUCP. Since then I've maintained a small personal UUCP network, occasionally connecting it to such UUCP providers as pop up from time to time.

I recently became aware of an attempt to set up a new UUCP network. Since I have an existing personal UUCP network, and since AFAICT all sites connected as Tier 1 sites (which are connected all-to-all) would need to register every connected UUCP system centrally to prevent name collisions, I elected to try to join as a Tier 2/leaf site.

The instructions for Tier 2 sites don't appear to have been written yet, so I downloaded the setup script for Tier 1 sites and ran it under an unprivileged user, with the intent of extracting from there the config for connecting to the Tier 1 site I needed.

The generated config tries to create a suitable authorized_keys file, but all keys get the same forced command: one which invokes uucico -l to prompt for a username and password. Unfortunately the same script will generate the username and password used with a Tier 1 site for whichever site you give it.

Also, by default uucico doesn't care overmuch about the login you provided. To fix this you need a called-login entry in the sys file for each system, restricting that system to a particular login. None of the sys file entries have one, which means that whatever login and password you supply, UUCP will accept any system name you care to give in response to Shere=<system name>.

To maximise security it would be best if each calling system were logged in under a different unix uid (traditionally U<system name>) which is a member of the uucp group or is otherwise able to run uucico. Alternatively each key in ~uucp/.ssh/authorized_keys could be associated with a particular login via uucico's -u option. This login could then be checked against the called-login entry for the system the caller claims to be.
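
For illustration, a sys entry restricted to a particular login, together with an authorized_keys line that fixes that login via -u, might look something like this (the system and login names here are hypothetical):

# sys: only connections logged in as Uexample may claim to be example
system example
called-login Uexample

# ~uucp/.ssh/authorized_keys: force uucico and fix the login name
command="/usr/lib/uucp/uucico -u Uexample" ssh-ed25519 AAAA... example-key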

The whole idea of having uucico prompt for passwords seems pointless today, even if there were some secure mechanism for negotiating them, since they are stored in plain text.

Posted Sat 30 Mar 2019 12:27:28 GMT

Some e-book stores will send e-books to your Kindle via e-mail but don't provide the facility to send them to non-Kindle e-mail addresses. I was investigating whether it would be easier to set this up and then automate download from Amazon to my computer for importation into Calibre, when I discovered I can no longer download my copy of Information Doesn't Want to Be Free. On selecting "Download and transfer via USB" I am told "No device is eligible for Downloading the selected content". A drop-down menu is provided that lists devices associated with my Amazon account, but they are all grayed out.

It looks like Amazon will no longer allow me to download even DRM-free content unless I can prove to them that I have an ancient Kindle that can't connect directly.

Posted Fri 09 Nov 2018 18:42:15 GMT

The nmh Message Handling System is a set of command line tools for reading your mail from within the shell on a POSIX-compatible system such as Linux. It has support for an unseen sequence to enable automatic tracking of which messages have been read. Unfortunately the rcvstore command does not lock the .mh_sequences file, which means it is possible to corrupt it. Most other MDAs won't update the unseen sequence at all, even if they otherwise support MH folders.

An alternative to the unseen sequence would be to use a read sequence. Unlike an unseen sequence, this does not need to be updated when delivering mail, only when viewing it, which happens synchronously. To implement this I wrote a short shell script to use as a wrapper for the showproc and showmimeproc:

#!/bin/sh
# Usage (from .mh_profile): readseq <sequence> <real showproc> [args...]
[ $# -ge 3 ] || exit 1
# Add the current message to the named sequence (e.g. "read")
mark cur -sequence "$1" -add -nozero
CMD="$2"
shift 2
# show passes "-" to stand for its default display command, mhl
[ "${CMD}" = "-" ] && CMD=/usr/lib/mh/mhl
exec "${CMD}" "$@"

This can then be invoked via your .mh_profile with lines like the following:

sequence-negation: un
showproc: /home/wish/bin/readseq read -
showmimeproc: /home/wish/bin/readseq read /usr/bin/mh/mhshow

The hyphen forces invocation of the mhl command while bypassing the special treatment the show program gives to a showproc named mhl. The sequence-negation entry allows the read sequence to be used as a mechanism to list or display unread messages.

One downside of using a read sequence rather than an unseen sequence is that the new and related commands don't work with negated sequences.
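
With the profile above, the negated sequence gives the usual commands their expected meaning; a sketch of typical usage:

scan unread    # list messages not yet in the read sequence
show next      # view a message; the wrapper adds it to the read sequence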

Posted Sat 29 Sep 2018 12:51:46 BST

I fairly regularly buy ebooks from Baen Books and Weightless Books, who both send me the books as attachments to an e-mail. I've automated the processing of these e-mails so that books sent this way are automatically incorporated into my Calibre library. I also buy bundles of ebooks from Storybundle. Unfortunately Storybundle will only send books to @kindle.com addresses.

While Storybundle won't send me the e-books directly, they do send an e-mail with a link from which the ebooks can be downloaded. While this normally involves clicking on buttons and such, it is possible to tweak the URL so that the books in question can be downloaded directly. The first thing I did was set up a script on my Bitfolk VM (where this blog is hosted) which takes the URL Storybundle provided and uses it to download the bundle and e-mail it to me:

#!/bin/bash
PATH=/usr/bin:/bin
export PATH
URL="$1"
MAILTO="$2"
# Name the zip after the last component of the bundle URL
ZIP="$(echo "${URL}"|sed -e s':^.*/\([^/]*$\):\1.zip:')"
MYDIR=$(mktemp -d)
# Retry until the download succeeds
until curl -s -L --data-urlencode "download=DOWNLOAD ALL" "${URL}/download_all" >"${MYDIR}/${ZIP}"
do
   sleep 60
done
# Mail the zip to me and clean up
mpack -a -s "Your Storybundle ${ZIP}" -c "application/zip" "${MYDIR}/${ZIP}" "${MAILTO}"
rm "${MYDIR}/${ZIP}"
rmdir "${MYDIR}"
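
Invoked by hand on the VM it would look something like this (the URL and e-mail address here are hypothetical):

storybundle 'https://storybundle.com/deals/example-bundle' wish@example.org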

For this to be useful I need to extract the URL from the Storybundle e-mail. This is fairly easy to do, as the message Storybundle sends has a standard format and I store my mail in MH folders, one message per file. I added the code for this to the script I use for processing e-books I've received by mail:

#!/bin/bash
mkdir -p ~/books/messages
shopt -s nullglob
# Find messages (files with no other hard links) in the delivery folder
for MSG in $(find ~/Mail/entz/books/delivered -maxdepth 1 -links 1 -regex '.*/[1-9][0-9]*$')
do
   # Hard-link each message under its MD5 so it is only processed once
   MD5=$(md5sum ${MSG}|awk '{print $1}')
   if ln ${MSG} ~/books/messages/${MD5} >/dev/null 2>&1
   then
      mkdir -p ~/books/import/${MD5}
      cd ~/books/import/${MD5}
      # Unpack the attachments, extract any zips and import into Calibre
      munpack -q <~/books/messages/${MD5} >/dev/null 2>&1
      echo *.zip |xargs -n 1 7z e
      calibredb add --ignore ".*" --ignore "*.zip" . >/dev/null
   fi
done
# MAILTO is assumed to be set in the environment
for MSG in $(find ~/Mail/entz/books/storybundle -maxdepth 1 -links 1 -regex '.*/[1-9][0-9]*$')
do
   MD5=$(md5sum ${MSG}|awk '{print $1}')
   if ln ${MSG} ~/books/messages/${MD5} >/dev/null 2>&1
   then
      # Extract the download link and queue the download script
      # (installed as "storybundle") for execution on the VM
      uux 'vicar!storybundle' "$( <${MSG} awk "/Here's your unique download link:/{print \$6}"|sed -e 's:\.$::')" "${MAILTO}"
      sleep 60
   fi
done

I don't read my e-mail on this VM, which is why I queue the command for later execution on the VM via UUCP.

Keeping my OpenPGP key safe

In order to keep it reasonably secure I do not keep the secret part of my PGP key on internet-connected computers. I keep my signing, encryption and authentication keys on my FSFE membership card, which doubles as an OpenPGP card. The certifying key is kept, encrypted, on a small non-networked computer (and backed up elsewhere).

Signing OpenPGP keys

To simplify signing other people's keys I use caff from the Debian signing-party package. The way caff works is to sign each uid on a key individually and send that signed key, encrypted to the key being signed, to the e-mail address embedded in the uid. If the recipient can decrypt the message they demonstrate control of the key, and thereby verify, to some degree, that the e-mail address and key are controlled by the same person.
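
Running it is then just a matter of passing the fingerprint of a key you have verified (the fingerprint here is a hypothetical placeholder):

caff 0123456789ABCDEF0123456789ABCDEF01234567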

Combining the two

Modern MTAs are designed to work in an Internet environment and by default send messages via SMTP. On my disconnected key signing computer there is no Internet and therefore no SMTP. On the face of it this presents a problem for caff.

The retro-computing solution

My first proper job was with a small, now defunct, ISP that, in addition to the then-usual dial-up SLIP/PPP Internet connections, offered a BBS and UUCP: a system for copying files and executing commands remotely that works in batched mode. My operating system of choice still supports it. UUCP normally works over serial ports, phone lines, TCP or even ssh. However, as I want an air gap between my internet-connected computer and my key-signing machine, none of these are suitable. My solution, then, is to run UUCP over sneakernet.

Implementation

My solution uses the usbmount, uucp and openssh packages and a thumb drive. On the thumbdrive I create spool, log and pub directories to serve as the data directories for a virtual UUCP system, and chown them appropriately. Fortunately UUCP is an old enough part of unix that it has a fixed uid and gid assigned to it, meaning that it is the same across all my systems.
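
Setting the drive up looks something like this (the mount point here is hypothetical):

cd /media/usb
mkdir spool log pub
chown uucp:uucp spool log pub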

I created three directories on each node to support the sneakernet: /etc/opt/uusbcp, /var/opt/uusbcp and /opt/uusbcp/bin. In /etc/opt/uusbcp are most of the usual uucp files, plus a file uuid that holds the UUID of the thumbdrive. The config file contains a few unusual entries that ensure the virtual system can be distinguished from the host machine and stores its data on the thumbdrive when it is plugged in:

nodename        epistle
spool           /var/opt/uusbcp/spool/
pubdir          /var/opt/uusbcp/pub/
logfile         /var/opt/uusbcp/log/Log
statfile        /var/opt/uusbcp/log/Stats
debugfile       /var/opt/uusbcp/log/Debug

Rather than a sys file I created a sys.head that contains only defaults and a pointer to a special port for contacting the system into which the drive is plugged:

chat ""
port TCP
command-path /bin /usr/bin /usr/sbin
commands true
callback true
forward ANY
remote-send ~
remote-receive ~
local-send ~
local-receive ~

system dumain
port UUSBCP
time any

The port file defines that port:

port TCP
type tcp

port UUSBCP
type pipe
command /usr/bin/ssh -C -x -o batchmode=yes uucp@localhost

I use the uucp user's ~/.ssh configuration to force running uucico with the appropriate userid for the thumbdrive. Password logins are disabled.
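
For illustration, the forced command in ~uucp/.ssh/authorized_keys might look something like this (the key and login name are hypothetical):

command="/usr/lib/uucp/uucico -u Uepistle" ssh-ed25519 AAAA... sneakernet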

I created a script /opt/uusbcp/bin/docall to be run when the thumb drive is plugged in to initiate the UUCP connection to the local machine:

#!/bin/bash
# $1 = local system's uuname, $2 = thumbdrive mount point
set -e
# Overlay the thumbdrive's UUCP config and data directories
mount --make-private --bind /etc/opt/uusbcp /etc/uucp
mount --make-private --bind "$2" /var/opt/uusbcp
# Forward any jobs already queued on the drive (uuxqt runs uucp for
# multi-hop copies), then wait for the spool to go quiet
uuxqt
while fuser -m /var/opt/uusbcp; do sleep 1 ;done
# Call the local system as epistle to exchange queued work
su - uucp -c "/usr/lib/uucp/uucico -z -x 2 -D -q -S $1"
# Forward anything picked up during the call
uuxqt
while fuser -m /var/opt/uusbcp; do sleep 1 ;done

On my signing machine this is a little simpler:

#!/bin/bash
# $1 = local system's uuname, $2 = thumbdrive mount point
set -e
mount --make-private --bind /etc/opt/uusbcp /etc/uucp
mount --make-private --bind "$2" /var/opt/uusbcp
# No uuxqt runs here: the signing machine only exchanges queued files
su - uucp -c "/usr/lib/uucp/uucico -z -x 2 -D -q -S $1"

Finally to ensure my script is called when needed I add a script under /etc/usbmount/mount.d/99_uusbcp:

#!/bin/bash
UUSBETC=/etc/opt/uusbcp/
UUSBSYS="${UUSBETC}/sys"
UUSBCP=/var/opt/uusbcp/
UUNAME="$(uuname -l)"
UUID=$(cat ${UUSBETC}/uuid)
# Proceed only if the device just mounted is the sneakernet thumbdrive
SNEAKERNET=$(findmnt -n -o TARGET UUID="${UUID}")
if [ "${SNEAKERNET}" = "${UM_MOUNTPOINT}" -a -n "${SNEAKERNET}" ]
then
   set -e
   # Build the sys file from sys.head plus one entry per system
   # spooled on the drive (excluding this host)
   ls -1 "${SNEAKERNET}/spool" |grep '^[a-z][a-z0-9-]*[a-z0-9]$'|grep -v -- "^${UUNAME}"'$'|sed -e 's/^/system /' -e 's/$/\nforward ANY/'|cat "${UUSBSYS}.head" - >"${UUSBSYS}.new"
   mv "${UUSBSYS}.new" "${UUSBSYS}"
   # Run docall in a private mount namespace, then detach and check the drive
   mount -o remount --make-rprivate /
   unshare -m /opt/uusbcp/bin/docall "${UUNAME}" "${SNEAKERNET}"
   umount "${SNEAKERNET}"
   e2fsck /dev/disk/by-uuid/${UUID}
fi

The long line beginning with ls -1 generates the sys file used for the thumbdrive from the sys.head file and the contents of the spool directory on the thumbdrive.
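
For example, with systems alice and bob spooled on the drive (hypothetical names), the generated lines appended after sys.head would be:

system alice
forward ANY
system bob
forward ANY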

Those of you familiar with UUCP may be wondering why I used unshare and private mounts to mount /etc/opt/uusbcp over /etc/uucp rather than just using the -I option to uuxqt and uucico to specify an alternate config file. The reason is that uuxqt does not pass this option on to uucp when executing multi-hop copies.

On each host in the sneakernet there is an entry in the sys file for the thumbdrive UUCP host (epistle) with appropriate permissions (essentially only copying into uucppublic for the key-signing machine).
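
On the key-signing machine that entry might look something like this sketch (the login name is hypothetical):

system epistle
called-login Uepistle
remote-send /var/spool/uucppublic
remote-receive /var/spool/uucppublic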

Email

My signing machine's MTA sends mail by piping it into a double hop uux command that will execute rsmtp on my desktop machine the next time I move the thumb drive from the signing machine to the desktop.

uux - epistle!dumain!rsmtp

Apart from the double hop this is fairly standard for sending mail via UUCP to a smarthost.

Getting keys onto the signing host

The signing machine is a pure satellite system from an e-mail perspective, so I can't use e-mail to get keys there. However I can use a double hop uucp command to copy a keyring from my desktop to the /var/spool/uucppublic directory on the signing machine, whence I can pick the keys up and sign them if appropriate.
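
Such a copy looks something like this (the signing machine's UUCP name, signer, is hypothetical):

uucp keys-to-sign.asc 'epistle!signer!/var/spool/uucppublic/'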

Future Improvements

I may look into making one uucico call the other directly rather than using ssh, and possibly mounting the regular filesystems read-only in the thumbdrive environment to improve the protection against hostile thumb drives. At the moment protection relies on checking the UUID and on restricted command execution via UUCP.

Posted Sat 25 Feb 2017 19:02:17 GMT

I'm going to upgrade this site to use HTTPS, HSTS and forward secrecy this year in order to help Reset the Net. They might get a bit further if they didn't insist on the URL of a tweet before I can submit this blog post. I don't use Twitter, as I prefer not to put everything in the hands of a giant American corporation.

Posted Thu 05 Jun 2014 18:32:36 BST

So I've been working on getting my PGP key better connected into the web of trust. I've been to a couple of key signing parties and got my key signed by CAcert and the PGP Global Directory, all of which has made my key fairly well connected.

However this only underscores the fundamental problem with OpenPGP: relatively few people use it, and only a fraction of them are connected into the strong set. This is in part a bootstrapping problem: with the web of trust connecting so few people it is hard to find someone to sign your key, and key signing parties are a fair amount of work to organize.

So my idea to help OpenPGP users connect: a mobile phone app that tells you when you are close to a fellow user with whom you have not exchanged signatures.

Features

  1. Authentication either with the key or (for those who don't want to keep their key on their phone) by a signed token.
  2. User determines required proximity before detection occurs.
  3. Variable levels of visibility: Invisible, Headcount only, Contact details, Location.
  4. Ability to ignore certain users.
  5. Encrypted IM if you have your key.
Posted Mon 21 Apr 2014 20:25:47 BST

For the past couple of days I haven't been able to access Goodreads. I get a response page that reads:

403 Forbidden

Request forbidden by administrative rules.

Using Google I could find no evidence that Goodreads was down, and Is it down right now claims it is up and has been so for the last week. A little poking around showed that I couldn't access Goodreads directly over my normal internet connection or via Tor, but could access it just fine using my phone as a mobile hotspot.

As the IP I normally browse from also functions as a restricted Tor exit node, I conclude that Goodreads has started blocking Tor exit nodes. This is rather tricky to Google due to frequent references to Tor books and Goodreads together on the internet. Oddly enough, Goodreads' owner Amazon doesn't block me, so I guess they only object to Tor when there isn't any money in the offing.

As everyone knows by now, Google Reader will be shutting down on July 1st. This has caused me to actually start working on my long-planned switch to a self-hosted solution. Looking at what I actually use Google Reader for, it looks like I really need multiple readers. I've already switched my audio podcast consumption to a dedicated podcatcher program on my mobile phone. Unfortunately getting enough content for my walks home would exceed the "fair use" limits on my "unlimited" plan, so I'll have to download it in advance via wi-fi.

For webcomics, news and people I follow regularly, a River of News style aggregator like Planet looks to be what I need.

However there are still some feeds for which I would prefer the mailbox style of news provided by Reader. Unfortunately most of the options here seem to be either designed for massive hosting sites or written in PHP. While I'm sure it is possible to write secure PHP, it doesn't seem to be the norm.

I'm also looking for something that can split link posts into multiple entries and ideally merge multiple links to the same article.

Posted Sun 14 Apr 2013 15:47:12 BST

The internet derives its strength and flexibility from its design as a decentralised system with the bulk of the intelligence on the edges rather than in the network "core". Unfortunately it is still too centralised in many respects. Much of this centralisation stems from early technological constraints that either no longer apply or will shortly cease to apply. The early internet required central management because it relied on a protocol with a relatively small (32 bit) address space and routers that operated under severe memory constraints.

We can reasonably assume that the number of independent networks will be of roughly the same order of magnitude as the number of people on the planet, i.e. a few billion. Since modern computers come with several gigabytes of RAM, we can work on the assumption that storing the routing table is now trivial. Likewise network link speeds are increasing, so transmitting the table should not be prohibitive.

What might be expensive is the need to look routes up quickly. This might require very fast RAM on core routers, in which storing the entire routing table would be prohibitive. That could be avoided by making use of a source routing protocol like MPLS to move the workload to the network edge.

Given the above, we no longer need to keep routing tables small: we should easily be able to afford one table entry per network. This means we no longer need central management to ensure compact allocations. I could be wrong, but if so I suspect I'm only wrong by a few years.

Although allocation compactness is no longer a concern, we still can't allocate at random: with IPv6 there might be accidental collisions. However we don't need an authority to prevent this, just an agreed standard. One mechanism would be to assign each router a network address based on its physical location on the surface of the earth. One could use any map projection that produces a roughly square map without distorting shape or area too badly, and simply take the router's cartesian co-ordinates to make up the network address. At a resolution of a square metre this would take up about 50 bits, comfortably within the 64 bits reserved for the network. By interleaving the bits from the X and Y co-ordinates one might even be able to shrink the routing table back down again.
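
As a sketch of the interleaving idea (how the co-ordinates are scaled to integers is assumed rather than specified):

#!/bin/bash
# Interleave the low 25 bits of the X and Y co-ordinates into a
# ~50-bit Morton (Z-order) code; nearby locations tend to share prefixes,
# which is what would let the routing table aggregate again.
morton() {
   local x=$1 y=$2 code=0 i
   for ((i=0; i<25; i++)); do
      code=$(( code | ((x >> i & 1) << (2*i)) | ((y >> i & 1) << (2*i+1)) ))
   done
   echo "${code}"
}
morton 12345678 23456789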

Of course that doesn't prevent hijacking an IP address, as there is no central registry of who legitimately controls which address. If one is prepared to throw out IPv6 compatibility and increase the address space, then one could just use a hash of the router's public key to identify the network.
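
A sketch of that idea, hashing a (hypothetical) router key with SHA-256 and keeping the first 128 bits as the network identifier:

#!/bin/bash
# Self-certifying network identifier: anyone can verify that the
# identifier matches the router's public key by recomputing the hash.
openssl pkey -in router_key.pem -pubout -outform DER | sha256sum | cut -c1-32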

Unfortunately Zooko's Triangle causes some problems when trying to decentralise human-meaningful names, so I'll leave those to a later post.

Posted Mon 01 Apr 2013 16:16:17 BST

This blog is powered by ikiwiki.