Planet

February 12, 2019

Ana Custura

LibreELEC Streaming Camera

This is a blogpost about a Raspberry Pi camera setup on LibreELEC. It includes a step by step tutorial.

If you have the very specific use case of needing to run LibreELEC on a Raspberry Pi to play media while also streaming a Pi camera, look no further. This setup has been running for the past year or so in my house. I’ve rebuilt the Docker container for it recently, which is the perfect excuse for a blogpost.

LibreELEC is a Linux distribution for running the Kodi media center software. It will stream films, music and TV shows from your NAS to your TV and has several awesome add-ons. What LibreELEC does not have is an add-on for streaming from a camera, perhaps because the drivers needed for USB cameras are not even compiled into the image for the RPi2. I’ve tried this setup using a USB camera and had to build a custom LibreELEC image.

However, there is support for streaming from RPi cameras, and there is a Docker add-on which can be used to run a simple camera streaming app in a container. The app I’ve chosen, after some trial and error the gruelling details of which I will spare you, is mjpeg-streamer.

The container uses the current Raspbian stable image, stretch, on top of which it builds mjpeg-streamer. It then uses an entrypoint script to capture a still image per second from the Pi camera, which the software then turns into a www stream on port 8080.
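
For orientation, here is a condensed sketch of what such a Dockerfile can look like. The base image and the mjpeg-streamer fork used here are illustrative guesses on my part; the real Dockerfile is linked below:

FROM resin/rpi-raspbian:stretch
# build tools and the JPEG library mjpeg-streamer needs
RUN apt-get update && apt-get install -y cmake gcc git libjpeg-dev make
RUN git clone https://github.com/jacksonliam/mjpg-streamer.git /opt/mjpg-streamer \
 && make -C /opt/mjpg-streamer/mjpg-streamer-experimental install
COPY stream_from_pi.sh /
EXPOSE 8080
ENTRYPOINT /bin/bash /stream_from_pi.sh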

You can get the container here. For a step by step tutorial on how to deploy this, read on.


Step by step tutorial

Ingredients:
  • Raspberry Pi v2 or v3
  • Micro SD card (>2GB) and SD card adapter
  • Compatible Raspberry Pi camera module
  • Monitor and keyboard
  • Wired connection or wireless dongle if using a RPi 2

Step 1: Download

Download the official LibreELEC image for the Raspberry Pi from the official website. Scroll to ‘Direct Downloads’, select ‘Raspberry Pi v2 and Raspberry Pi v3’ and click the filename link on the page. The filename looks like LibreELEC-RPi2.arm-8.2.5.img.gz at the time of writing; yours will most likely be newer.

Step 2: Put image on SD card

I use an SD card adapter and the built-in card reader in my laptop. On Linux, after plugging the SD card in, running sudo dmesg should display the name of the newly inserted device, similar to the following:

ana@sinopia:~|⇒  sudo dmesg
[...]
[70540.504869] mmc0: new high speed SDHC card at address aaaa
[70540.585060] mmcblk0: mmc0:aaaa SB16G 14.8 GiB (ro)
[70540.590225] mmcblk0: p1

The identifier for SD cards on Linux usually looks like mmcblk. Be careful to copy your image to the right device, and not onto your local hard drive:

ana@cyan:~|⇒  gunzip LibreELEC-RPi2.arm-8.2.5.img.gz
ana@cyan:~|⇒  sudo dd if=LibreELEC-RPi2.arm-8.2.5.img of=/dev/mmcblk0
549+0 records in
549+0 records out
575668224 bytes (576 MB, 549 MiB) copied, 50.8254 s, 11.3 MB/s

This may take a minute or two. Once dd has finished, load up your MicroSD card in your Raspberry Pi.

Step 3: Configure LibreELEC

Plug a monitor and a keyboard in and turn on your RPi. LibreELEC starts an autoconfiguration wizard when it first boots. This will guide you through setting up location, timezone and most importantly, a network connection needed for the next step. Hit ‘Next’ and follow the wizard.

  • Once you connect to a network, the interface will display your IP address. Make sure to take a note of it.
  • Under the initial ‘Sharing and Remote Access’ screen make sure you enable the SSH service.

For security purposes, we will disable SSH after setting up Docker, as the root password is hardcoded in the image.

Step 4: Download the Docker Add-on

From the left hand-side menu, navigate to Add-ons -> Install from repository -> LibreELEC Add-ons -> Services -> Docker and then select Install.

The interface will notify you once this has installed.

Step 5: Build the Docker container

In order to build and run the docker container, you need to ssh as root into your Pi, using the IP address from Step 3. In this example, my IP is 192.168.0.156.

ana@cyan:~|⇒  ssh root@192.168.0.156
root@192.168.0.156's password:

At the password prompt, type ‘libreelec’, the LibreELEC default and hardcoded password.

Next, make a directory and download the Dockerfile and starting script.

LibreELEC:~ # mkdir mpeg
LibreELEC:~ # cd mpeg
LibreELEC:~/mpeg # wget https://raw.githubusercontent.com/ana-cc/dockerstuffs/master/mjpeg-streamer-rpi/Dockerfile
Connecting to raw.githubusercontent.com (151.101.60.133:443)
Dockerfile           100% |************************************************************************************************************************|   917   0:00:00 ETA
LibreELEC:~/mpeg # wget https://raw.githubusercontent.com/ana-cc/dockerstuffs/master/mjpeg-streamer-rpi/stream_from_pi.sh
Connecting to github.com (140.82.118.3:443)
stream_from_pi.sh    100% |************************************************************************************************************************| 55534   0:00:00 ETA

Inspect and (optionally) edit the stream script. The script uses the raspistill command to capture still images from the Pi camera, which mjpeg-streamer then turns into a www stream on port 8080. You can vary the resolution and quality of the images with the -w, -h and -q values, and how far apart they are taken with the -tl value (in milliseconds). Here is the script by default:

until raspistill --nopreview -vf -hf -w 640 -h 480 -q 5 -o /tmp/stream/pic.jpg -tl 100 -t 9999999 -th 0:0:0 & LD_LIBRARY_PATH=/usr/local/lib mjpg_streamer -i "input_file.so -f /tmp/stream -n pic.jpg" -o "output_http.so -w /usr/local/www"; do
	sleep 1
done

Finally, build the container and give it an easy to remember tag:

docker build -t mpeg_container .

This should take a while; at the end your output should be similar to:

[...]
Step 12/13 : COPY stream_from_pi.sh /
 ---> 2299b11e7696
Removing intermediate container de4317561efe
Step 13/13 : ENTRYPOINT /bin/bash /stream_from_pi.sh
 ---> Running in deff3a4ebe15
 ---> b5f669ccd45e
Removing intermediate container deff3a4ebe15
Successfully built b5f669ccd45e

Step 6: Run the docker container

Now for the fun part: running and testing the newly-built container.

We want the Docker process to map the default streaming port to port 8081 of our RPi host (Kodi’s own web interface occupies port 8080 by default):

 docker run -d --privileged -p 8081:8080 mpeg_container

And now the LibreELEC box will present the stream on port 8081. In a browser of choice, navigate to your IP address on port 8081, for example http://192.168.0.156:8081.

You should see the mjpeg-streamer welcome page. Your stream can be viewed under the stream tab.

Step 7: Clean up

  • It’s worth adding an @reboot crontab entry to run the docker command if your box is subject to, uh, a lot of reboots.

Before ending your ssh session:

LibreELEC:~ # crontab -e

Make sure the file contains the following line:

@reboot docker run -d --privileged -p 8081:8080 mpeg_container

This tells the box to run the docker container at every reboot so that you don’t have to start it manually.

  • Finally, disable SSH by going to Settings -> Services in your Kodi navigation.

This concludes the tutorial. Happy streaming!

February 12, 2019 12:00 AM

February 07, 2019

Iain R. Learmonth

Face

Privacy-preserving monitoring of an anonymity network (FOSDEM 2019)

This is a transcript of a talk I gave at FOSDEM 2019 in the Monitoring and Observability devroom about the work of Tor Metrics.

Direct links:

Producing this transcript was more work than I had anticipated it would be, and I’ve done this in my free time, so if you find it useful then please do let me know otherwise I probably won’t be doing this again.


I’ll start off by letting you know who I am. Generally this is a different audience for me but I’m hoping that there are some things that we can share here. I work for Tor Project. I work in a team that is currently formed of two people on monitoring the health of the Tor network and performing privacy-preserving measurement of it. Before Tor, I worked on active Internet measurement in an academic environment but I’ve been volunteering with Tor Project since 2015. If you want to contact me afterwards, there is my email address, or if you want to follow me on the Fediverse there is my WebFinger ID.

So what is Tor? I guess most people have heard of Tor but maybe they don’t know so much about it. Tor is quite a few things: it’s a community of people. We have a paid staff of approximately 47, the number keeps going up but 47 last time I looked. We also have hundreds of volunteer developers that contribute code and we also have relay operators that help to run the Tor network, academics and a lot of people involved organising the community locally. We are registered in the US as a non-profit organisation and the two main things that really come out of Tor Project are the open source software that runs the Tor network and the network itself which is open for anyone to use.

Currently there are an estimated average of 2,000,000 users per day. This is estimated and I’ll get to why we don’t have exact numbers.

Most people when they are using Tor will use Tor Browser. This is a bundle of Firefox and a Tor client set up in such a way that it is easy for users to use safely.

When you are using Tor Browser your traffic is proxied through three relays. With a VPN there is only one server in the middle and that server can see either side. It knows who you are and where you are going, so it can spy on you just as your ISP could before. The first step in setting up a Tor connection is that the client needs to know where all of those relays are, so it downloads a list of all the relays from a directory server. We’re going to call that directory server Dave. Our user Alice talks to Dave to get a list of the relays that are available.

In the second step, the Tor client forms a circuit through the relays and connects finally to the website that Alice would like to talk to, in this case Bob.

If Alice later decides they want to talk to Jane, they will form a different path through the relays.

We know a lot about these relays. Because the relays need to be public knowledge for people to be able to use them, we can count them quite well. Over time we can see how many relays there are that are announcing themselves and we also have the bridges which are a separate topic but these are special purpose relays.

Because we have to connect to the relays we know their IP addresses and we know if they have IPv4 or IPv6 addresses so as we want to get better IPv6 support in the Tor network we can track this and see how the network is evolving.

Because we have the IP addresses we can combine those IP addresses with GeoIP databases and that can tell us what country those relays are in with some degree of accuracy. Recently we have written up a blog post about monitoring the diversity of the Tor network. The Tor network is not very useful if all of the relays are in the same datacenter.

We also perform active measurement of these relays so we really analyse these relays because this is where we put a lot of the trust in the Tor network. It is distributed between multiple relays but if all of the relays are malicious then the network is not very useful. We make sure that we’re monitoring its diversity and the relays come in different sizes so we want to know are the big relays spread out or are there just a lot of small relays inflating the absolute counts of relays in a location.

When we look at these two graphs, we can see that the number of relays in Russia is about 250 at the moment, but when we look at the top 5 by the actual bandwidth they contribute to the network, Russia drops off and Sweden takes Russia’s place in the top 5, contributing around 4% of the capacity.

The Tor Metrics team, as I mentioned we are two people, and we care about measuring and analysing things in the Tor network. There are 3 or 4 regular contributors and then occasionally people will come along with patches or perform a one-off analysis of our data.

We use this data for lots of different use cases. One of which is detecting censorship so if websites are blocked in a country, people may turn to Tor in order to access those websites. In other cases, Tor itself might be censored and then we see a drop in Tor users and then we also see a rise in the use of the special purpose bridge relays that can be used to circumvent censorship. We can interpret the data in that way.

We can detect attacks against the network. If we suddenly see a huge rise in the number of relays then we can suspect that OK, maybe there is something malicious going on here, and we can deal with that. We can evaluate how performance changes when we make changes to the software. We have recently made changes to an internal scheduler, the idea there being to reduce congestion at relays, and from our metrics we can say we have a good idea that this is working.

Probably one of the more important aspects is being able to take this data and make the case for a more private and secure Internet, not just from a position of I think we should do this, I think it’s the right thing, but here is data and here are facts that cannot be so easily disputed.

We only handle public non-sensitive data. Each analysis goes through a rigorous review and discussion process before publication.

As you might imagine, the goals of a privacy and anonymity network don’t lend themselves to easy data gathering and extensive monitoring of the network. If you are interested in doing research on Tor or attempting to collect data through Tor, the Research Safety Board can offer advice on how to do that safely. Often this is used by academics that want to study Tor, but the Metrics Team has also used it on occasion where we want to get second opinions on deploying new measurements.

What we try and do is follow three key principles: data minimisation, source aggregation, and transparency.

The first one of these is quite simple and, with GDPR, is probably something people need to think about more even if you don’t run an anonymity network. Having large amounts of data that you don’t have an active use for is a liability and something to be avoided. Given a dataset and an infinite amount of time, that dataset is going to get leaked; the probability only increases as you go along. We want to make sure that we are collecting as little detail as possible to answer the questions that we have.

When we collect data we want to aggregate it as soon as we can to make sure that sensitive data exists for as little time as possible. Usually this happens in the Tor relays themselves, before they even report information back to Tor Metrics: they will aggregate data and then we will aggregate the aggregates. This can also include adding noise or binning values. All of these things can help to protect the individual.

And then being as transparent as possible about our processes so that our users are not surprised when they find out we are doing something, relay operators are not surprised, and academics have a chance to say whoa that’s not good maybe you should think about this.

The example that I’m going to talk about is counting unique users. Users of the Tor network would not expect that we are storing their IP address or anything like this. They come to Tor because they want the anonymity properties. The easy way, traditional web analytics, would be to keep a list of the IP addresses and count up the uniques, and then you have an idea of the unique users. You could combine this with a GeoIP database and get unique users per country and these things. We can’t do this.

So we measure indirectly and in 2010 we produced a technical report on a number of different ways we could do this.

It comes back to Alice talking to Dave. Because every client needs to have a complete view of the entire Tor network, we know that each client will fetch the directory approximately 10 times a day. By measuring how many directory fetches there are we can get an idea of the number of concurrent users there are of the Tor network.

Relays don’t store IP addresses at all, they count the number of directory requests and then those directory requests are reported to a central location. We don’t know how long an average session is so we can’t say we had so many unique users but we can say concurrently we had so many users on average. We get to see trends but we don’t get the exact number.
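
As a back-of-the-envelope illustration (the request count here is made up, and the real pipeline is more careful than this):

requests_per_day = 20_000_000       # hypothetical aggregated directory requests
fetches_per_client_per_day = 10     # each client fetches the directory ~10 times a day
print(requests_per_day / fetches_per_client_per_day)   # ~2,000,000 average users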

So here is what our graph looks like. At the moment we have an average of 2,000,000 concurrent Tor users. The first peak may have been an attempted attack, as with the second peak. Often things happen and we don’t have full context for them but we can see when things are going wrong and we can also see when things are back to normal afterwards.

This is in a class of problems called the count-distinct problem and these are our methods from 2010 but since then there has been other work in this space.

One example is HyperLogLog. I’m not going to explain this in detail but I’m going to give a high-level overview. Imagine you have a bitfield and you initialise all of these bits to zero. You take an IP address, you take a hash of the IP address, and you look for the position of the leftmost one. How many zeros were there at the start of that string? Say it was 3, you set the third bit in your bitfield. At the end you have a series of ones and zeros and you can get from this to an estimate of the total number that there are.

Every time you set a bit there is 50% chance that that bit would be set given the number of distinct items. (And then 50% chance of that 50% for the second bit, and so on…) There is a very complicated proof in the paper that I don’t have time to go through here but this is one example that actually turns out to be quite accurate for counting unique things.
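
To make that concrete, here is a toy version of the bitfield idea described above. It is closer to the original Flajolet–Martin sketch than to full HyperLogLog, which keeps many such registers and averages them; the IP addresses are of course made up:

import hashlib

bitfield = 0
for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.1"]:       # duplicates don't matter
    h = int.from_bytes(hashlib.sha256(ip.encode()).digest()[:4], "big")
    zeros = 32 - h.bit_length()                       # leading zeros in a 32-bit hash
    bitfield |= 1 << zeros                            # set the corresponding bit

R = 0                                                 # position of the first unset bit
while bitfield & (1 << R):
    R += 1
print(2 ** R / 0.77351)                               # rough cardinality estimate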

This was designed for very large datasets where you don’t have enough RAM to keep everything in memory. We have a variant on this problem where even keeping 2 IP addresses in memory would, for us, be a very large dataset. We can use this to avoid storing even small datasets.

Private Set-Union Cardinality is another example. In this one you can look at distributed databases and find unique counts within those. Unfortunately this currently requires far too much RAM to actually do the computation for us to use this but over time these methods are evolving and should become feasible, hopefully soon.

And then moving on from just count-distinct, the aggregation of counters. We have counters such as how much bandwidth has been used in the Tor network. We want to aggregate these but we don’t want to release the individual relay counts. We are looking at using a method called PrivCount that allows us to get the aggregate total bandwidth used while keeping the individual relay bandwidth counters secret.
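
PrivCount involves much more than this (noise, blinding, multiple rounds), but the core trick can be sketched with additive secret sharing: each relay splits its counter into random shares that only sum to the real value once everything is combined, so no single tally server learns any relay’s count. A toy sketch under those assumptions:

import random

P = 2**61 - 1                                 # arithmetic modulo a large prime

def split(counter, n):
    # n random-looking shares that sum to the counter mod P
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((counter - sum(shares)) % P)
    return shares

relay_bandwidths = [100, 250, 175]            # secret per-relay counters
tally = [0, 0, 0]                             # three tally servers

for bw in relay_bandwidths:
    for i, share in enumerate(split(bw, 3)):
        tally[i] = (tally[i] + share) % P     # each server only ever sees shares

print(sum(tally) % P)                         # 525: the total, without the parts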

And then there are similar schemes to this: RAPPOR and PROCHLO from Google, and Prio, which Mozilla have written a blog post about, are similar technologies. All of the links are here in the slides and on the FOSDEM schedule page so don’t worry about writing these down.

Finally, I’m looking at putting together some guidelines for performing safe measurement on the Internet. This is targeted primarily at academics but also if people wanted to apply this to analytics platforms or monitoring of anything that has users and you want to respect those users’ privacy then there could be some techniques in here that are applicable.

Ok. So that’s all I have if there are any questions?


Q: I have a question about how many users have to be honest so that the network stays secure and private, or relays?

A: At the moment when we are collecting statistics we can see – so as I showed earlier the active measurement – we know how much bandwidth a relay can cope with and then we do some load balancing so we have an idea of what fraction of traffic should go to each relay and if one relay is expecting a certain level of traffic and it has wildly different statistics to another relay then we can say that relay is cheating. There isn’t really any incentive to do this and it’s something we can detect quite easily but we are also working on more robust metrics going forward to avoid this being a point where it could be attacked.

Q: A few days ago I heard that with Tor Metrics you are between 2 and 8 million users but you don’t really know in detail what the real numbers are? Can you talk about the variance and which number is more accurate?

A: The 8 million number comes from the PrivCount paper and they did a small study where they looked at unique IP addresses over a day, where we look at concurrent users. These are two different measurements. What we can say is that we know for certain that there are between 2 million and 25 million unique users per day but we’re not sure where in there we fall and 8 million is a reasonable-ish number, but also they measured IP addresses and some countries use a lot of NAT so it could be more than 8 million. It’s tricky but we see the trends.

Q: Your presentation actually implies that you are able to collect more private data than you are doing. It says that the only thing preventing you from collecting private user data is the team’s good will and good intentions. Have I got it wrong? Are there any possibilities for the Tor Project team to collect some private user data?

A: Tor Project does not run the relays. We write the code but there are individual relay operators that run the relays, and if we were to release code that suddenly collected lots of private data people would realise and they wouldn’t run that code. There is a step between the development and it being deployed. I think it’s possible other people could write that code and run those relays, but if they started to run enough relays that it looked suspicious then people would ask questions there too, so there is a distributed trust model with the relays.

Q: You talk about privacy-preserving monitoring, but a couple of years ago we learned that the NSA was able to monitor relays and learn which user was connecting to relays, so is there also research to make it so that Tor users cannot be targeted for using a relay and cannot be monitored?

A: Yes, there is! There is a lot of research in this area and one of them is through obfuscation techniques where you can make your traffic look like something else. They are called pluggable transports and they can make your traffic look like all sorts of things.

February 07, 2019 02:00 PM

February 05, 2019

Iain R. Learmonth

Face

Cryptonoise: January 2019

On Thursday 17th January, we held the first Cryptonoise event of 2019. We had a good turn out and kicked off the discussion with a quick browse through Wikipedia’s list of data breaches.

Our first topic of discussion related to how we all use passwords and how password reuse can very quickly become problematic if it happens that your password is leaked.

Over time, the probability that any entity holding a large store of sensitive private data will remain both competent enough to protect it adequately and honest enough to want to goes to zero. –@mattblaze

In good news, everyone was already aware of the risks of password reuse. The vast majority of the attendees were using pass as a password manager. One attendee used a method to derive unique passwords for services from a hash function, a salt, a master password, and a memorable name for the service. Between us we were not able to come up with any shortcomings in the strategy except for those that had already been considered, but a password manager still looked to be the easier option.
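
The attendee’s exact scheme wasn’t recorded, but the general shape can be sketched with PBKDF2 from the Python standard library (all parameters here are illustrative, not the scheme actually used):

import base64, hashlib

def derive(master, salt, service, length=20):
    # stretch master password + salt + service name into a unique password
    raw = hashlib.pbkdf2_hmac("sha256", master.encode(),
                              (salt + service).encode(), 100_000)
    return base64.b85encode(raw).decode()[:length]

print(derive("correct horse", "some salt", "example.com"))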

We talked a little about how some passwords are more important than others and that some passwords, such as your email password, might be proxies for other passwords, e.g. by password reset emails. I learned of a feature of pass that I did not know about before that allows you to encrypt different subfolders of your password store to different keys allowing for passwords to be selectively available in different locations. This would allow you to protect the most important passwords while allowing others to be used more freely.
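
The feature in question is per-subfolder .gpg-id files; with a hypothetical subfolder name and key ID, encrypting one part of the store to a different key looks like:

pass init -p important 0xDEADBEEFCAFED00D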

As so many of us are using pass, we also talked about the possibility of future collaboration on pass extensions and swapping hints and tips. Two areas we identified were in obfuscating the accounts list and being able to see how old a password is and so have an idea when it should be rotated.

No one had any set schedule for rotating their passwords, so this is hopefully something that we will be thinking about some more and we might discuss more at future events.

Following from the password manager discussion we talked about the use of two-factor authentication. Some were using email or SMS second factor authentication but not really using cryptographic second factors such as a time-based one time password app on a phone or a hardware token like a YubiKey.

Many of us are using a YubiKey for GPG, which we then use for pass. As only the password would be required to access a service and the YubiKey only unlocks the password, this isn’t really two factor authentication. The oathtool command can be used to generate one time passwords and the key for this can be stored in pass. As the key is just a password that allows you to generate a two factor authentication code this may just be a two password arrangement if your pass database is compromised. If just the password is compromised though this may still provide some security as an attacker cannot use the password alone to log in.
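
For the curious, a time-based one time password is just an HMAC over the current 30-second time step; a minimal sketch of what oathtool --totp -b computes (RFC 6238 with the common SHA-1/6-digit defaults) looks like:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, step=30):
    key = base64.b32decode(secret_b32.upper())
    counter = struct.pack(">Q", int(time.time()) // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                   # example base32 secret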

Our last discussion topic was disk encryption. We discussed the trade offs between full disk encryption and encrypting only your home directory. If a machine only has an encrypted home directory then it would still be able to boot and perhaps you would have a chance to locate it if you can access it remotely. An attacker would have the ability to modify the contents of the disk however, including the operating system. Given that an attacker could insert backdoors into a laptop in a number of locations (firmware, hardware implant, rootkit, etc.) it may be that you wouldn’t want the laptop back after it has been suspiciously missing even if you did track it down.

Signed binaries may in the future remove the threat of modification of the disk contents as an attacker’s modifications would just never get run. Of course this is only in the case that the user has some control over the system.

We had a quick session at the end to discuss topics for future sessions this year. So far we have:

  • Securing your communications in hostile networks
  • Securing your communications at home
  • The differences between BSD and Linux
  • Best practices for working with Windows
  • GnuPG, web-of-trust, and keysigning

The next Cryptonoise will be at 19:00 on the 28th February 2019 in the 57North Hacklab Social Room. More details will be sent closer to the time on the org-aberdeen and 57north-discuss mailing lists.

February 05, 2019 04:30 PM

February 01, 2019

Ana Custura

Tor Relay Map

Here is a map that displays Tor relays!

The map also supports searching for the full or partial nickname, IP address or fingerprint of a Tor relay. You can also search for relays in specific countries or ASes (country:us, as:3), or with specific flags (flag:exit or flag:authority).

To display all running relays, the search string is running:true. An empty search string will plot all the relays.

The Tor relay information is queried from Onionoo.
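
Onionoo is a plain HTTPS/JSON API, so the same data is easy to poke at directly; for example, fetching just the fields a map needs for running exit relays (a sketch using the documented details endpoint):

import json, urllib.request

url = ("https://onionoo.torproject.org/details"
       "?flag=exit&running=true"
       "&fields=nickname,latitude,longitude")
with urllib.request.urlopen(url) as response:
    relays = json.load(response)["relays"]

for relay in relays[:5]:
    print(relay.get("nickname"), relay.get("latitude"), relay.get("longitude"))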

This was written a while ago for the sole purpose of trying out JS. In the future, it will have other open source nodes added to it, including things like BSD mirrors, Bitcoin nodes etc. For now, the source can be found here.

There are several icons used to display the relays; their meanings can be found here. A legend is still on the TODO list.

Full-page version.




February 01, 2019 12:00 AM

January 23, 2019

Ana Custura

Zenburn Theme for the terminal

So today I glanced down at my i3 status bar and thought: man, those colours are fugly. By default, these are white, fully saturated neon green and red on black:

‘5 minutes is enough to configure the colors for this, right?’

Some research and 2 hours down the rabbit-hole later, and everything from my text editor to the status bar is Zenburn-themed.

Zenburn is a low-contrast theme meant to ensure your eyes do not fall off when you’re having that Mexican standoff with whatever project you happen to be Vimming.

But most of us spend a lot of time in text editors and on the command line, and Zenburn is a theme for Vim only. This useful and beautiful theme does not have a .Xresources file listed on the official page of Zenburn things.

As I’m a fan of cohesion, this now exists, and this blog post will explain how.

First, I extracted the colors from the original zenburn.vim using simple regex:

import re
matches = re.findall(r'#(?:[0-9a-fA-F]{3}){1,2}', open('zenburn.vim').read())

I then used yattag, a Python library for creating HTML, and displayed all the unique colors from the file. It looks like this:

#c0bed1
#bc6c4c
#80d4aa
#dfdfbf
#4f4f4f
#2b2b2b
#bc8cbc
#d0d0a0
#e89393
#82a282
#304a3d
#444444
#8f8f8f
#2e3330
#242424
#f8f893
#41363c
#88b090
#f0f0c0
#dccdcc
#121212
#313c36
#efef8f
#bc6c9c
#2f2f2f
#464646
#353535
#ecbcbc
#71d3b4
#dcdccc
#7cac7c
#222222
#93b3a3
#5b605e
#dfdfdf
#000d18
#0f0f0f
#353a37
#ffd7a7
#dc8c6c
#dca3a3
#383838
#1f1f1f
#b2b2a0
#b6bf98
#e3ceab
#688060
#709080
#cc9393
#434343
#dfaf8f
#3a3a39
#acd0b3
#2c2e2e
#bca3a3
#efefaf
#545a4f
#343434
#2c302d
#000000
#a0afa0
#313633
#6c6c9c
#8c8cbc
#9fafaf
#efefef
#d0d0b8
#9ccc9c
#8faf9f
#3f3f3f
#ffffe0
#ffffff
#284f28
#664040
#ccd990
#c3bf9f
#dfcfaf
#e37170
#1c1c1b
#efdcbc
#ccccbc
#233323
#333333
#f18c96
#404040
#161616
#262626
#f0f0b0
#cfcfaf
#f0dfaf
#385f38
#040404
#9ece9e
#cbecd0
#434443
#dfe4cf
#3f4040
#ccdc90
#3d3535
#101010
#f0efd0
#ffcfaf
#7f9f7f
#2e2e2e
#181818
#9f9f9f
#8cd0d3
#cfbfaf

To get the colors to make a bit of sense, some sorting was in order. I spent some time reading this excellent article about why sorting colours is a pain. I then did some quick RGB to HSV conversion and sorting, and voilà:

#000000
#040404
#0f0f0f
#101010
#121212
#161616
#181818
#1f1f1f
#222222
#242424
#262626
#2b2b2b
#2e2e2e
#2f2f2f
#333333
#343434
#353535
#383838
#3f3f3f
#404040
#434343
#444444
#464646
#4f4f4f
#8f8f8f
#9f9f9f
#dfdfdf
#efefef
#ffffff
#3d3535
#bca3a3
#ecbcbc
#dca3a3
#cc9393
#e89393
#664040
#e37170
#dccdcc
#bc6c4c
#dc8c6c
#ffcfaf
#dfaf8f
#cfbfaf
#ffd7a7
#e3ceab
#efdcbc
#dfcfaf
#f0dfaf
#c3bf9f
#f0efd0
#3a3a39
#1c1c1b
#dcdccc
#ccccbc
#b2b2a0
#d0d0b8
#ffffe0
#dfdfbf
#cfcfaf
#f0f0c0
#d0d0a0
#f0f0b0
#efefaf
#efef8f
#f8f893
#ccd990
#ccdc90
#b6bf98
#dfe4cf
#545a4f
#688060
#434443
#a0afa0
#82a282
#7f9f7f
#9ece9e
#9ccc9c
#7cac7c
#233323
#385f38
#284f28
#cbecd0
#acd0b3
#88b090
#2c302d
#2e3330
#353a37
#313633
#313c36
#93b3a3
#8faf9f
#709080
#304a3d
#80d4aa
#5b605e
#71d3b4
#3f4040
#2c2e2e
#9fafaf
#8cd0d3
#000d18
#8c8cbc
#6c6c9c
#c0bed1
#bc8cbc
#bc6c9c
#41363c
#f18c96

This looks better and one can easily see what colors there are.
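
The conversion itself is a few lines with the standard library’s colorsys module; a sketch, assuming the unique colors sit one per line in a file (colors.txt here is hypothetical):

import colorsys

with open("colors.txt") as f:
    colors = [c.strip() for c in f if c.strip()]

def hsv(hexcolor):
    # "#c0bed1" -> (hue, saturation, value)
    r, g, b = (int(hexcolor[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return colorsys.rgb_to_hsv(r, g, b)

colors.sort(key=hsv)                 # greys first, then a sweep through the hues
print("\n".join(colors))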

The next stop was terminal.sexy, an amazing terminal theme tool which allows you to visualize and export your work easily. Here’s our theme:

And here’s how it looks in the terminal:

The i3 window colors, borders and status bar have now also been changed to match:

And for completeness, here’s a Firefox theme to match, because we can:

Starting the new year ready to sysadmin for hours on end in an ill-lit room, happy hacking!

January 23, 2019 12:00 AM

January 16, 2019

Iain R. Learmonth

Face

A Solution for Authoritative DNS

I’ve been thinking about improving my DNS setup. So many things will use e-mail verification as a backup authentication measure that it is starting to show as a real weak point. An Ars Technica article earlier this year talked about how “[f]ederal authorities and private researchers are alerting companies to a wave of domain hijacking attacks that use relatively novel techniques to compromise targets at an almost unprecedented scale.”

The two attacks mentioned in that article, changing the nameserver and changing records, are things that DNSSEC could protect against. Records wouldn’t even have to be changed on my chosen nameservers: a BGP hijack could simply direct queries for records on my domain to another server, which could then reply with whatever it chooses.

After thinking for a while, my requirements come down to:

  • Offline DNSSEC signing
  • Support for storing signing keys on a HSM (YubiKey)
  • Version control
  • No requirement to run any Internet-facing infrastructure myself

After some searching I discovered GooDNS, a “good” DNS hosting provider. They have an interesting setup that looks to fit all of my requirements. If you’re coming from a more traditional arrangement with either a self-hosted name server or a web panel then this might seem weird, but if you’ve done a little “infrastructure as code” then maybe it is not so weird.

The initial setup must be completed via the web interface. You’ll need to have a hardware security module (HSM) for providing a time based one time password (TOTP), an SSH key and optionally a GPG key as part of the registration. You will need the TOTP to make any changes via the web interface, the SSH key will be used to interact with the git service, and the GPG key will be used for any email correspondence including recovery in the case that you lose your TOTP HSM or password.

You must validate your domain before it will be served from the GooDNS servers. There are two options for this, one for new domains and one “zero-downtime” option that is more complex but may be desirable if your domain is already live. For new domains you can simply update your nameservers at the registrar to validate your domain; for existing domains you can add a TXT record to the current DNS setup that will be validated by GooDNS, allowing the domain to be configured fully before switching the nameservers. Once the domain is validated, you will not need to use the web interface again unless updating contact, security or billing details.

All the DNS configuration is managed in a single git repository. There are three branches in the repository: “master”, “staging” and “production”. These are just the default branches, you can create other branches if you like. The only two that GooDNS will use are the “staging” and “production” branches.

GooDNS provides a script that you can install at /usr/local/bin/git-dns (or elsewhere in your path) which provides some simple helper commands for working with the git repository. The script is extremely readable and so it’s easy enough to understand and write your own scripts if you find yourself needing something a little different.

When you clone your git repository you’ll find one text file on the master branch for each of your configured zones:

irl@computer$ git clone git@goodns.net:irl.git
Cloning into 'irl1'...
remote: Enumerating objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 3
Receiving objects: 100% (3/3), 22.55 KiB | 11.28 MiB/s, done.
Resolving deltas: 100% (1/1), done.
irl@computer$ ls
irl1.net   learmonth.me
irl@computer$ cat irl1.net
@ IN SOA ns1.irl1.net. hostmaster.irl1.net. (
            _SERIAL_
            28800
            7200
            864000
            86400
            )

@           IN      NS        ns1.goodns.net.
@           IN      NS        ns2.goodns.net.
@           IN      NS        ns3.goodns.net.

In the backend GooDNS is using OpenBSD 6.4 servers with nsd(8). This means that the zone files use the same syntax. If you don’t know what this means then that is fine as the documentation has loads of examples in it that should help you to configure all the record types you might need. If a record type is not yet supported by nsd(8), you can always specify the record manually and it will work just fine.

One thing you might note here is that the string _SERIAL_ appears instead of a serial number. The git-dns script will replace this with a serial number when you are ready to publish the zone file.

I’ll assume that you already have your GPG key and SSH key set up; now let’s set up the DNSSEC signing key. For this, we will use one of the four slots of the YubiKey. You could use either 9a or 9e, but here I’ll use 9e as 9a is already used for my SSH key.

To set up the token, we will need the yubico-piv-tool. Be extremely careful when following these steps especially if you are using a production device. Try to understand the commands before pasting them into the terminal.

First, make sure the slot is empty. You should get an output similar to the following one:

irl@computer$ yubico-piv-tool -s 9e -a status 
CHUID:  ...
CCC:    No data available
PIN tries left: 10

Now we will use git-dns to create our key signing key (KSK):

irl@computer$ git dns kskinit --yubikey-neo
Successfully generated a new private key.
Successfully generated a new self signed certificate.
Found YubiKey NEO.
Slots available:
 (1) 9a - Not empty
 (2) 9e - Empty
Which slot to use for DNSSEC signing key? 2
Successfully imported a new certificate.
CHUID:  ...
CCC:    No data available
Slot 9e:    
    Algorithm:  ECCP256
    Subject DN: CN=irl1.net
    Issuer DN:  CN=irl1.net
    Fingerprint:    97dda8a441a401102328ab6ed4483f08bc3b4e4c91abee8a6e144a6bb07a674c
    Not Before: Feb 01 13:10:10 2019 GMT
    Not After:  Feb 01 13:10:10 2021 GMT
PIN tries left: 10

We can see the public key for this new KSK:

irl@computer$ git dns pubkeys
irl1.net. DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==

Next we will create a zone signing key (ZSK). These are stored in the keys/ folder of your git repository but are not version controlled. You can optionally encrypt these with GnuPG (and so requiring the YubiKey to sign zones) but I’ve not done that here. Operations using slot 9e do not require the PIN so leaving the YubiKey connected to the computer is pretty much the same as leaving the KSK on the disk. Maybe a future YubiKey will not have this restriction or will add more slots.

irl@computer$ git dns zskinit
Created ./keys/
Successfully generated a new private key.
irl@computer$ git dns pubkeys
irl1.net. DNSKEY 256 3 13 UgGYfiNse1qT4GIojG0VGcHByLWqByiafQ8Yt7/Eit2hCPYYcyiE+TX8HP8al/SzCnaA8nOpAkqFgPCI26ydqw==
irl1.net. DNSKEY 257 3 13 kS7DoH7fxDsuH8o1vkvNkRcMRfTbhLqAZdaT2SRdxjRwZSCThxxpZ3S750anoPHV048FFpDrS8Jof08D2Gqj9w==

Now we can go to our domain registrar and add DS records to the registry for our domain using the public keys. First though, we should actually sign the zone. To create a signed zone:

irl@computer$ git dns signall
Signing irl1.net...
Signing learmonth.me...
[production 51da0f0] Signed all zone files at 2019-02-01 13:28:02
 2 files changed, 6 insertions(+), 0 deletions(-)

You’ll notice that all the zones were signed although we only created one set of keys. Setups where you have one shared KSK and an individual ZSK per zone are possible but they provide questionable additional security. Reducing the number of keys required for DNSSEC helps to keep them all under control.

To make these changes live, all that is needed is to push the production branch. To keep things tidy, and to keep a backup of your sources, you can push the master branch too. git-dns provides a helper function for this:

irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now edit a zone file on the master branch and want to try out the zone before making it live, all I need to do is:

irl@computer$ git dns signall --staging
Signing irl1.net...
Signing learmonth.me...
[staging 72ea1fc] Signed all zone files at 2019-02-01 13:30:12
 2 files changed, 8 insertions(+), 0 deletions(-)
irl@computer$ git dns push
Pushing master...done
Pushing production...done
Pushing staging...done

If I now use the staging resolver or lookup records at irl1.net.staging.goodns.net then I’ll see the zone live. The staging resolver is a really cool idea for development and testing. They give you a couple of unique IPv6 addresses just for you that will serve your staging zone files and act as a resolver for everything else. You just have to plug these into your staging environment and everything is ready to go. In the future they are planning to allow you to have more than one staging environment too.

All that is left to do is ensure that your zone signatures stay fresh. This is easy to achieve with a cron job:

0 3 * * * /usr/local/bin/git-dns cron --repository=/srv/dns/irl1.net --quiet

I monitor the records independently and disable the mail output from this command but you might want to drop the --quiet if you’d like to get mails from cron on errors/warnings.

On the GooDNS blog they talk about adding an Onion service for the git server in the future so that they do not have logs that could show the location of your DNSSEC signing keys, which allows you to have even greater protection. They already support performing the git push via Tor but the addition of the Onion service would make it faster and more reliable.


Unfortunately, GooDNS is entirely fictional and you can’t actually manage your DNS in this way, but wouldn’t it be nice? This post has drawn inspiration from the following:

January 16, 2019 04:30 PM

November 14, 2018

Ana Custura

A tiny Nagios plugin to check DNSSEC RRSIG expiry

I have used both Icinga and Nagios to monitor a variety of infrastructures, at work, at home and anywhere in between. These share a variety of very useful plugins for monitoring lots and lots of standard stuff, from basic ping checks to filesystem mounts. Occasionally I had to write the odd script to monitor something ridiculously niche, like split-brain HSRP status between pairs of Cisco routers.

This one is a plugin for checking RRSIG expiry dates on DNSSEC records. There are other plugins out there that do exactly this, but I could not find anything that would not depend on a ton of other packages to do it (libnet-dns-sec-perl anyone?). So I decided to write something with a minimal footprint, and ended up with a Python wrapper around dig. Of course, this depends on dig (which should come with the plugin installation), as well as Python. The Python part is optional if you rewrite this in bash.

With regards to RRSIGs and DNSSEC:

RRSIGs are the signature records for a zone which contain a cryptographic signature used for validating a DNSSEC response. When you sign a zone, the signature records expire after 30 days, and there are recommendations for re-signing a zone once a day. This is why the plugin default is to warn if the signature expires in less than 29 days (of course warning and critical thresholds can be changed with command line arguments -w and -c; the server to check against can also be specified with -s):

./check_dns_rrsig.py cloudflare.com
Signature expires in 1 days

Here is the plugin!
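
The core of the idea fits in a few lines. A sketch (not the actual plugin) that assumes dig’s +short output format, where the RRSIG rdata carries the expiration timestamp in its fifth field, and uses illustrative thresholds:

import subprocess, sys
from datetime import datetime

zone = sys.argv[1]
out = subprocess.check_output(["dig", "+dnssec", "+short", "SOA", zone], text=True)
for line in out.splitlines():
    fields = line.split()
    # RRSIG rdata: type alg labels ttl expiration inception keytag signer sig
    if fields[:1] == ["SOA"] and len(fields) >= 9:
        expiry = datetime.strptime(fields[4], "%Y%m%d%H%M%S")
        days = (expiry - datetime.utcnow()).days
        print("Signature expires in %d days" % days)
        sys.exit(2 if days < 7 else 1 if days < 29 else 0)  # Nagios exit codes
print("No RRSIG found")
sys.exit(2)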

This is very, very basic and will work if all you want is something to tell you when an RRSIG expires, without any other overhead.

Happy monitoring!

November 14, 2018 12:00 AM

November 12, 2018

Ana Custura

Unbound on a fresh FreeBSD install

I’ve recently encountered a weird problem where unbound would not work on a fresh FreeBSD install on an APU3 board. Online research led to a bunch of posts complaining about this, but no leads. As it turns out, my problem was the clock.

But first things first. Installing FreeBSD on an APU board is very easy; at the boot shell, enter the following commands:

>set boot_serial=YES
>set comconsole_speed=115200
>set console=comconsole
#this delays the boot by 10s to allow the USB controller to come up
>set kern.cam.boot_delay="10000"

Of course, remember to drop into a shell at the end of the installation and edit /boot/loader.conf so that we can use the serial console afterwards:

boot_serial="YES"
comconsole_speed="115200"
console="comconsole"
kern.cam.boot_delay="10000"
amdtemp_load="YES" #this loads the driver for the temperature sensor embedded in the CPU. It is equivalent to the km driver in OpenBSD. Yay sensors!

Now, the APU board does not have an on-board clock. I did not set the time/date during installation (lazy), as NTP was going to be set up anyway. So, at this stage in /etc/ntp.conf, I replaced pool 0.freebsd.pool.ntp.org iburst with the name of my local NTP server.

Reboot after successful installation; cue unbound not working. General DNS failure. As unbound is DNSSEC enabled, run:

unbound-anchor

This sets up the root trust anchor for DNSSEC validation, which was part of the solution to this issue.

But unbound was still not working. Do not underestimate actually checking the results of any commands run:

cat /var/unbound/root.key

The file contained no valid key, thus solving the mystery:

  • DNS lookups do not work as there is no valid key here, due to our system clock being, well, off by a few years.
  • And of course NTP does not sync the date… as we gave it a hostname, which needs to be looked up in DNS.

A quick ntpdate <ip-address> followed by running unbound-anchor again solves the issue.

So frustrating.

November 12, 2018 12:00 AM

November 06, 2018

hibby

Face

Disk Destroyer

I finally fell into the dd trap. I dd’d a FreeBSD Beaglebone Black image to /dev/sdb, where my home directory lives. It should have been /dev/mmcblk0, but I trusted tab completion and didn’t check dmesg. After a quick check of gparted, it was obvious what had happened: I’d nuked the partition table on the drive. Well done. The data in /home/hibby was intact, however. My home directory was still responding, and I could load new files with no corruption.

November 06, 2018 07:00 PM

October 28, 2018

Iain R. Learmonth

Face

OpenBSD with GPS synchronised NTP

I wrote on Monday about how I’ve swapped my home router for an OpenBSD box. One of the fun things I’ve done with this box is configure it as a network time server using ntpd(8).

Synchronising time with servers on the Internet isn’t that exciting, but I’ve had a USB GPS dongle sitting doing nothing for a while. I originally purchased it to use with amateur radio projects but I haven’t done much radio recently. It looks like you can pick these up on eBay for around £11.00 and the newer ones also support GLONASS (mine only does GPS as far as I can tell).

When you attach this to OpenBSD, it is recognised as:

umodem0 at uhub0 port 4 configuration 1 interface 0 "u-blox AG - www.u-blox.com u-blox 6  -  GPS Receiver" rev 1.10/7.03 addr 2
umodem0: data interface 1, has CM over data, has no break
umodem0: status change notification available
ucom0 at umodem0

In Linux, I would use gpsd which is userland software and overkill for what is needed to get time out of the device. Instead, OpenBSD provides the nmea(4) line discipline that can be attached to the serial port with only a single line of configuration in /etc/ttys:

cuaU0   "/sbin/ldattach nmea"   unknown on softcar

Now when I reboot and run sysctl hw.sensors:

hw.sensors.nmea0.indicator0=On (Signal), OK
hw.sensors.nmea0.timedelta0=-0.001002 secs (GPS autonomous), OK, Sun Oct 28 17:08:04.998
hw.sensors.nmea0.angle0=57.3748 degrees (Latitude), OK
hw.sensors.nmea0.angle1=-2.3849 degrees (Longitude), OK

This was so incredibly simple I couldn’t believe that was it. If you have ntpd enabled, it will automatically pick up this new sensor too and start using it. I wanted to give a greater weight to my GPS clock than to random servers on the Internet though so I did add a line to /etc/ntpd.conf:

sensor nmea0 weight 5 refid GPS

By default all sources have a weight of 1. Giving the GPS a weight of 5 gives it 5× the influence over the clock compared to servers from pool.ntp.org. The reference seems to be passed to clients but I’m not entirely sure yet what its significance is.

In RFC2132 there is a DHCP option defined for advertising NTP servers, so I advertised the server via this mechanism. Thinking more about it, I’m hoping that nothing is automatically configuring itself using this DHCP option as that seems like something that shouldn’t get reconfigured just because you joined a network. There’s a lot of different options defined in this document and I wonder just how many of them are actually used/useful.
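
In ISC-style dhcpd.conf syntax this is a single option line; a sketch with example addresses:

subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.199;
        option routers 192.168.1.1;
        option ntp-servers 192.168.1.1;
}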

I did spot that I can advertise my RFC868 time server too, so I set up both that and a daytime server using inetd(8) by adding to my /etc/inetd.conf:

daytime stream tcp nowait nobody internal
time stream tcp nowait nobody internal

October 28, 2018 05:00 PM

October 22, 2018

Iain R. Learmonth

Face

OpenBSD with PlusNet VDSL

For a long time we’ve had a Cisco 887VA acting as our VDSL modem and gateway. We got this for the old flat when we moved in there in 2016 and took it with us to where we live now. It’s been quite reliable but the Ethernet interfaces are only 10/100Mbps and there are some limitations to the software where either features are hidden behind additional licenses or they are missing altogether. The software was last updated in 2016 and there’s no easy way to get hold of later firmware.

The replacement for this box is a PC Engines apu3c4. This board has an AMD Embedded G series CPU, 4GB RAM and 3 Gigabit Ethernet interfaces. For storage I have fitted a 16GB mSATA SSD and I’m planning to also fit a 4G modem to allow failover in the event of a broadband outage (as happened once before). This box is running OpenBSD 6.4 which was incredibly easy to install via the serial console, booted from a USB stick.

When the prompt appears after booting from the USB stick:

boot> stty com0 115200
boot> set tty com0
boot> boot

But wait, something is missing here. The Cisco box has a VDSL modem but the APU board does not. The original plan here was to use a Huawei HG612 as the modem and then bridge the connection to the APU board. Unfortunately, I had installed the one I had got for this purpose in the hackerspace, as they now have their sponsored VDSL line installed from Converged. Another hackerspace member had said they would provide me with a replacement, but when I went to install it, it was in fact an ECI Telecom B-FOCus V-2FUb/r Rev.B and not an HG612. These two devices look almost identical if you’re not paying attention. So the temporary solution is to use the Cisco box as a bridge.

The relevant configuration:

no ip routing
interface Ethernet0
 no ip address
end
interface Ethernet0.101
 encapsulation dot1Q 101
 bridge-group 1
end
interface Vlan200
 no ip address
 bridge-group 1
end
interface FastEthernet3
 switchport access vlan 200
 no ip address
end
bridge 1 protocol ieee

Essentially what this is doing is bridging the Ethernet interface that used to be used by the Cisco box for the PPPoE tunnel to FastEthernet3 instead. By connecting a cable between FastEthernet3 and em0 on the APU board (first Gigabit Ethernet port) it is then possible to terminate the PPPoE tunnel on the APU board instead.

In /etc/hostname.em0:

up

In /etc/hostname.pppoe0:

inet 0.0.0.0 255.255.255.255 NONE \
        mtu 1492 \
        pppoedev em0 \
        authproto chap \
        authname 'username@plus.net' \
        authkey 'password' \
        peerproto chap \
        peerflag callin \
        up
dest 0.0.0.1
!/sbin/route add default -ifp pppoe0 0.0.0.1

The man pages for hostname.if(5), em(4) and pppoe(4) can give you more details on what is going on here. If you’re not doing this on an APU board you might find that your Ethernet interface is named differently in OpenBSD. Interface names are based on the driver they are using.

It took me a while to debug this because the PPPoE driver was attempting to authenticate the access concentrator. Adding peerflag callin fixed this:

The `callin’ flag will require the remote peer to authenticate only when he’s calling in, but not when the peer is called by the local client.

Annoyingly this is only described in ifconfig(8) and so I didn’t see it the first time reading through the documentation. Some combination of DuckDuckGo and the online man page browser did get me there in the end.

So now I have an OpenBSD box as my home network’s gateway. I’ve configured dhcpd(8), ntpd(8) (using an nmea(4) GPS device as a time source) and a bunch of other things that may or may not become other blog posts. I’m expecting the replacement HG612 modem will arrive soon and then the Cisco box will probably find its way to eBay.

October 22, 2018 08:30 PM

October 03, 2018

Tom Jones

Face

EuroBSDCon Bucharest Romania

The Wikitravel page for Bucharest has some scary warnings about taxis. I didn’t hear any horror stories from conference goers, but there was a large variation in prices for the same journey.

We held a two day DevSummit before the conference proper. A DevSummit is a chance to talk through technical issues and hash things out face to face. We did some planning for FreeBSD 13 with the idea of setting goals for the release.

We tried to match a bit of a hackathon with the DevSummit, but the tutorial schedules meant we couldn't focus the time very well and it was broken up.

EuroBSDCon

Day One:

  • Keynote1: Lightweight virtualization with LightVM and Unikraft
  • Hacking together a FreeBSD presentation streaming box – For as little as possible
    • That was me, I thought it was quite good :D
  • The Evolution of FreeBSD Governance
  • Using Boot Environments at Scale
  • The End of DNS as we know it
  • Keynote2: Some computing and networking historical perspectives
    • Ron’s keynote was unreal and it is a massive shame that this session wasn’t recorded. Ron has a ton of experience working with network systems since 1976, and he shared some stories and anecdotes. The one closest to my heart was pirating away an IMP front panel and saving it from the scrappers. If you get a chance to see Ron speak you should jump up and down at it.

Day Two:

  • Taking NetBSD kernel bug roast to the next level : Kernel Sanitizers
  • Livepatching FreeBSD kernel
    • This was an interesting study into how many different platforms do live patching. The FreeBSD way to do live patching could be simplified to ‘use dtrace fbt probes’, which is super reductive of all of the work involved, but it shows the power of the system we have with dtrace.
  • Profiling Packet Processing: Performance & Peculiarities
  • Debugging lessons learned as a newbie fixing NetBSD
    • Maya is a terrifying person. Somehow she manages to hack productively across the entire range of the stack and across many different architectures. There were many debugging gems in here; I hope she continues to present on this, the information was great.
  • FreeBSD/VPC: a new kernel subsystem for cloud workloads
  • FreeBSD on IBM PowerNV
    • An account of the porting work Semihalf did on POWER8. Interesting; I hope it is also submitted to AsiaBSDCon. There need to be more written accounts of bring-up on different architectures.

Day Two concluded with announcing the location of the next EuroBSDCon, Lillehammer Norway.

October 03, 2018 12:00 AM

Presentations with mdp

It feels like work is just a constant stream of preparing, travelling for and giving presentations. Brief words and pictures are an excellent way of conveying information between small groups of humans. All of these presentations I write in Keynote; it manages to be lightweight, powerful and not horrific to use. As a bonus, my boss feels at home in Keynote and is happy to make edits there.

The Keynote workflow does not match well with how I think. When dreaming up a presentation I want to spit out a stream of consciousness and have it magically become slides in the right shape.

I might write a series of headings like:

# intro
# who
# meat 
# details
# questions?

I will iterate on these to add bodies, details and more slides.

For quite a while I have wanted a system where I could write plain text and have it become slides. I [wrote][3] about the [sent][4] tool from suckless, but in the end I found it wanting. I have also considered just showing screens of text, but a nightmare DEFCON wireless village talk by Hak5 scared me away. They attempted to present using just a plain text file and less, but the window size got out of whack and it all fell apart.

Enter mdp

mdp is a terminal presentation program: it takes slides in approximately markdown and takes over the entire terminal as its presentation surface.

Intrigued I used an opportunity to speak at a [local tech event][5] to try out mdp. [The slides][6] from that presentation can be found on [my talks page][7] and overall I thought mdp worked quite well.

I was able to draft in the stream of consciousness style I want, getting the bulk of the slides written very quickly. Adding diagrams required resorting to ASCII art, which isn’t so bad, I like [ascii][10] [art][11]. mdp worked great in practice; I had to find readable dimensions for the text by trial and error, but overall it went well.

Plain text as a format does have some major downsides. mdp has a way to encode builds for text (see below), but I couldn't use it with my tools. ASCII art diagrams also meant that the builds I did use were egregious to maintain; any modification required manual propagation through the build chain.

mdp does not support a portable output format. You may say the source markdown is an excellent format for portability, but I find it lacks the crispness of having a single slide in view at once.

I wanted to be able to point at a viewable copy of my slides, so I hacked together some tools to export the mdp presentation to html, but for this I had to sacrifice the built-in build mechanism of mdp.

Finally, there was no way to include images in the mdp presentation, let alone the sacred gif format required to correctly convey nyan cat. I played with some terminal graphics viewers, but none of them worked well and after a while I started to think 'what is the point of reinventing everything'.

Drafting the presentation in markdown fit very well with my workflow, but the difficulties in getting a complete presentation out of mdp meant that I didn't want to use it for future presentations.

Exporting to html

Getting html of the mdp presentation hinged on a complete hack. There is a tool I had seen in the past that can output an html dump of a tmux session, unsurprisingly called [tmux2html][12]. With some playing around I was able to automate a tmux session to work through the slides and use tmux2html to grab each slide as a frame.

Finding the number of slides in the deck required splitting on the slide separator from the markdown; this ruled out using the built-in build mechanism as I would end up with the wrong number of slides.

The output script runs through the markdown to find the number of slides then uses tmux send-keys to control moving through the deck.

#!/bin/sh
# Export an mdp presentation to a single html file by driving mdp
# inside a detached tmux session and capturing each slide with tmux2html.

set -e

# Bail out early if any required tool is missing.
command -v tmux >/dev/null 2>&1 || { echo >&2 "I require tmux but it's not installed.  Aborting."; exit 1; }
command -v tmux2html >/dev/null 2>&1 || { echo >&2 "I require tmux2html but it's not installed.  Aborting."; exit 1; }
command -v mdp >/dev/null 2>&1 || { echo >&2 "I require mdp but it's not installed.  Aborting."; exit 1; }

if [ -z "$1" ]
  then
    echo "tohtml presentation.md [outfile.html]"
    exit 1
fi

file=$1
outfile=outfile.html

if [ ! -z "$2" ]
  then
  outfile=$2
fi

# Minified navigation controls, spliced into the generated html below.
javascript="<script>function page(){var e=!1,n=document.getElementsByClassName('tmux-html'),l=0; document.onkeydown=function(t){if(t=t||window.event,key=t.keyCode,e)if(13==key){e=!1,l=0;for(var i=0;i<n.length;i++)n[i].style.display='inline'}else{37==key&&--l<0&&(l=0),39==key&&++l>=n.length&&(l=n.length-1);for(i=0;i<n.length;i++)n[i].style.display='none';n[l].style.display='inline'}else if(13==key){e=!0,n[0].style.display='inline',l=0;for(i=1;i<n.length;i++)n[i].style.display='none'}}}window.onload=function(){page()};</script>"

tmpfile=tmpfilenamefilething
tmux='mdptohtmlconversionsession'

# Count the slide separators to learn how many times to advance the deck.
slides=`grep -e "^---" $file | wc -l`

# Run mdp detached, sized to readable slide dimensions.
tmux new-session -s $tmux -d -x 96 -y 25

tmux send-keys -t $tmux "mdp $file"
tmux send-keys -t $tmux "Enter"
sleep 1     # give mdp a moment to draw before the first capture

# Jump to the first slide and grab it as the first frame.
tmux send-keys -t $tmux 'g'
tmux2html -o $tmpfile $tmux 1>/dev/null

# Insert the javascript after the closing style tag.
lines=`cat $tmpfile | wc -l`
styleend=`cat -n $tmpfile | grep -e "</style>" | awk '{print \$1}'`
head -n $styleend $tmpfile > $outfile
echo $javascript >> $outfile
tail -n $((lines-styleend)) $tmpfile >> $outfile
mv $outfile $tmpfile

# Remove the closing tag so more frames can be appended; restore it at the end.
lines=`cat $tmpfile | wc -l`
end=`tail -n 1 $tmpfile`
head -n $((lines-1)) $tmpfile > $outfile

echo turning $file into $((slides+1)) slides

# Step through the deck, grabbing each remaining slide as a frame.
i=1
while [ $i -lt $((slides+1)) ]
do
    printf "\rSlide $i"
    tmux send-keys -t $tmux 'j'

    tmux2html -o $tmpfile $tmux 1>/dev/null
    grep -e "^<div" $tmpfile >> $outfile
    i=$((i+1))
done

echo $end >> $outfile
tmux kill-session -t $tmux
rm $tmpfile
printf "\rwritten to $outfile \n"
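
Saved as tohtml (the name the usage message suggests), the script takes the deck and an optional output file. A run looks something like the following; the slide count here is made up for illustration:

$ sh tohtml presentation.md slides.html
turning presentation.md into 12 slides
written to slides.html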

[If you view the presentation page][6] you will see the entire slide deck; this was the first output I got from this script, all the slides in a nice order. After a little pondering I wrote up some javascript to add controls: if you hit enter it will go from all slides to a single slide, and arrow keys in single slide mode will allow you to move through the slide deck. The unminified javascript for this is below.

function page()
{
    var presenting = false;
    var elements = document.getElementsByClassName('tmux-html');
    var current = 0;

    document.onkeydown = function(evt) {
        evt = evt || window.event;
        var key = evt.keyCode;

        if (presenting) {
            if (key == 13) {        // enter: back to showing every slide
                presenting = false;
                current = 0;
                for (var i = 0; i < elements.length; i++)
                    elements[i].style.display = 'inline';
            } else {
                if (key == 37) {    // left
                    current--;
                    if (current < 0)
                        current = 0;
                }
                if (key == 39) {    // right
                    current++;
                    if (current >= elements.length)
                        current = elements.length - 1;
                }
                for (var i = 0; i < elements.length; i++)
                    elements[i].style.display = 'none';
                elements[current].style.display = 'inline';
            }
        } else {
            if (key == 13) {        // enter: switch to single slide mode
                presenting = true;
                elements[0].style.display = 'inline';

                current = 0;
                for (var i = 1; i < elements.length; i++)
                    elements[i].style.display = 'none';
            }
        }
    };
}

window.onload = function () {
    page();
};

[3]: sent blog post
[4]: sent link
[5]: techmeetup aberdeen
[6]: mdp slides
[7]: talks page
[10]: ietf ascii art guidelines
[11]: draft cco
[12]: tmux to html

October 03, 2018 12:00 AM

October 02, 2018

Tom Jones

Face

EMF Camp 2018

Skeleton Dome

EMF Camp is a giant hacker camp that occurs in the deep south of England. It managed to attract nearly 2500 people into a field for four days at the end of August.

EMF Camp 2018 was the first time I have volunteered to help with the organisation. I volunteered to help out the content team earlier in the year; it wasn't until the week before that we realised lightning talks needed more organisation. Foolishly I stepped up and got a weird split experience between attending the camp and running a tiny slice of it.

It wasn't sooooo awful, I'll probably do it again.

EMF Camp Map

I attended EMF Camp 2014; since then they have really managed to integrate well with the village system used at other camps. The map shows all the spontaneous events that people put together during the camp. The adage 'it is what you make it' really comes out at these events, with many participants helping to make it whole.

Border control point ShoutyTel

In our own way the Scottish Consulate contributed too, with our bureaucratic role playing game going beyond the pale and expanding into the operation of a phone network (cups and string) and a hard border with the rest of the camp.

Polybius Biotech Tentacle

October 02, 2018 12:00 AM

October 01, 2018

Tom Jones

Face

America's Hackercamp

Milliways Dome

Toorcamp is America's Hackercamp; it happens on the stunning Orcas Island an hour or so north west of Seattle. Hacker events always manage to create their own neon lit world. Toorcamp took this to another level, sequestering 500 hackers away in an idyllic resort in the Pacific Northwest, and even then it poured on the neon lighting effects to keep us in a dream world.

LED Gateway

Doe Bay resort spreads over three regions: a bay area (where I camped with Milliways), an island outcrop and a field at the top of a hill. This division (especially the hill) makes it less enticing to move around the site. It also meant that the noisy area in the bay, by being far away from most of the camping, was able to go all night long without disturbing too many people.

Call to anywhere but Shady Tel

Toorcamp is serviced by a group of telephone enthusiasts called Shady Tel. They operate a highly reputable phone company in the American fashion, offering service anywhere on the camp site, whether near an exchange or on a boat out in the bay.

I hate talking to people on the phone, but I found this limited network to be a ton of fun. I must have spent hours wardialling around trying to find people to call. Once I discovered the maintenance line that echoed back your phone number I started going around and collecting the numbers of interesting phones.

Payphone

Because we are hackers, highly amusing things pop up on top of this phone network. Milliways ran a pager network, and from their payphone I spent many hours paging people to call numbers. Knowing how to find the number of a phone, I started paging people to call me at random places.

Stage Dome

The Doe Bay resort that hosted Toorcamp would be a wonderful place to go even without an amazing hacker camp in tow. Rather than attempt to describe the event, it is easier to link to the 10 interviews The Amp Hour podcast did on site.

The final night nature decided to turn on a smoke machine and join the party.

Neon Fog

October 01, 2018 12:00 AM

September 30, 2018

Tom Jones

Face

FreeBSD on the Intel Compute Stick STK1AW32SC

Compute stick pieces

A FreeBSD developer has somehow been tricked into working on EFI boot. A large missing piece has been support for 32-bit EFI: many devices with Intel mobile SoCs have shipped with a BIOS that only supports 32-bit EFI boot, even on 64-bit processors. I've tried this setup using a USB camera and had to build a custom LibreELEC image.

Rumour had it the Intel Compute Stick STK1AW32SC was one of the platforms with only 32-bit EFI. This compute stick has a SoC from the Cherryview family, the same as the GPD Pocket; I want FreeBSD to support this SoC well, and 32-bit EFI boot is a part of that.

This compute stick is end-of-life, and looking around I saw a few on ebay. I managed to win an auction for a new-in-box compute stick, getting it for about £50. For that I got:

  • x5-Z8330 4 Cores 1.44 GHz
  • 2GB RAM
  • 32GB Internal Flash
  • 1 USB 2 Port
  • 1 USB 3 Port
  • MicroSD Card slot
  • Intel Wireless-AC 7265 + Bluetooth 4.2
  • Intel Integrated Graphics

I asked Allan Jude to take his compute stick to the DevSummit at EuroBSDCon; while he was grabbing it, someone else piped up and claimed to have run FreeBSD on the compute stick before. It turns out there is a BIOS option to switch between 32-bit boot and 64-bit boot.

Yes, our deliberate FreeBSD brick actually works. Here is how to install FreeBSD on a Compute Stick:

BIOS default settings

Break into the BIOS by hitting F2 at boot.

BIOS Boot 64

In 'Configuration' change 'Operating System' from 'Windows 32-bit' to 'Windows 64-bit'.

Reboot and break into the boot menu and choose your FreeBSD USB stick.

As with the x5 box there is an issue where the uart causes the compute stick to hang.

Break into the loader menu and set:

OK unset hint.uart.1.at
OK boot

Install as normal

Before rebooting at the end of the installer you need to edit device.hints to disable the uart again.

# chmod +w /boot/device.hints
# vi /boot/device.hints
....
hint.sc.0.flags="0x100"
hint.uart.0.at="isa"    # comment this line out
hint.uart.0.port="0x3F8"
....

WiFi

Bluetooth is present in the dmesg, but for WiFi we need to load the iwm kernel module; then we can configure WiFi as normal.

# kldload if_iwm
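
To make this stick across reboots, a minimal sketch following the usual FreeBSD handbook pattern is below. The wlan numbering is an assumption and the SSID and passphrase are placeholders; adjust for your network.

# /boot/loader.conf -- load the driver and firmware at boot
if_iwm_load="YES"
iwm7265fw_load="YES"    # the D revision of the 7265 may want iwm7265Dfw instead

# /etc/rc.conf -- create wlan0 on top of the iwm device, use wpa_supplicant + DHCP
wlans_iwm0="wlan0"
ifconfig_wlan0="WPA SYNCDHCP"

# /etc/wpa_supplicant.conf -- placeholder network details
network={
    ssid="myssid"
    psk="mypassphrase"
}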

Graphics

Since setting up the x5 box in January, FreeBSD has gained support for integrated graphics on CherryView SoCs. Now graphics support is available by installing and loading drm-next-kmod.

# pkg install drm-next-kmod
# kldload /boot/modules/i915kms
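
To load the driver automatically at boot, the usual recommendation for the drm kmods is to use kld_list in rc.conf rather than loader.conf; a one-line sketch:

# /etc/rc.conf
kld_list="/boot/modules/i915kms.ko"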

Meat

I was unable to find any teardown pictures of the compute stick so I had to make some. The case is easy to take apart: there is a single screw under a rubber foot, and once that is removed the rest of the top case is held on with snap fits. Inside, the fan is connected with a tiny cable, the 2.4GHz and 5GHz antennas are glued to the side of the case, and everything else is held down with 2 screws. 3 screws hold the heat sink assembly to the PCB.

Compute stick top

Inside there is very little to see.

On the top the SoC sits in a puddle of goop, alongside an AXP288 PMIC, 64Mb of Winbond flash and two Kingston 4Gb DDR3 RAM modules.

On the bottom there are two more DDR3 modules (taking us up to 2GB), a SanDisk SDINADF4A 32GB eMMC and an Intel 7265D2W WiFi + Bluetooth module.

Compute stick bottom

---<<BOOT>>---
Copyright (c) 1992-2018 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
    The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 12.0-ALPHA7  r338849 amd64
FreeBSD clang version 6.0.1 (tags/RELEASE_601/final 335540) (based on LLVM 6.0.1)
WARNING: WITNESS option enabled, expect reduced performance.
VT(efifb): resolution 1920x1080
CPU: Intel(R) Atom(TM) x5-Z8330  CPU @ 1.44GHz (1440.00-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x406c4  Family=0x6  Model=0x4c  Stepping=4
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x43d8e3bf<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,SSE4.2,MOVBE,POPCNT,TSCDLT,AESNI,RDRAND>
  AMD Features=0x28100800<SYSCALL,NX,RDTSCP,LM>
  AMD Features2=0x101<LAHF,Prefetch>
  Structured Extended Features=0x2282<TSCADJ,SMEP,ERMS,NFPUSG>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
  TSC: P-state invariant, performance statistics
real memory  = 2147483648 (2048 MB)
avail memory = 1955004416 (1864 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: <Intel  COMSTKFC>
WARNING: L1 data cache covers fewer APIC IDs than a core (0 < 1)
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
FreeBSD/SMP: 1 package(s) x 4 core(s)
random: unblocking device.
ioapic0 <Version 2.0> irqs 0-114 on motherboard
Launching APs: 2 1 3
Timecounter "TSC" frequency 1439997858 Hz quality 1000
random: entropy device external interface
netmap: loaded module
[ath_hal] loaded
module_register_init: MOD_LOAD (vesa, 0xffffffff810e1920, 0) error 19
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
kbd1 at kbdmux0
nexus0
efirtc0: <EFI Realtime Clock> on motherboard
efirtc0: registered as a time-of-day clock, resolution 1.000000s
cryptosoft0: <software crypto> on motherboard
acpi0: <Intel COMSTKFC> on motherboard
acpi0: Power Button (fixed)
unknown: I/O range not supported
cpu0: <ACPI CPU> on acpi0
attimer0: <AT timer> port 0x40-0x43,0x50-0x53 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
atrtc0: <AT realtime clock> port 0x70-0x77 on acpi0
atrtc0: Warning: Couldn't map I/O.
atrtc0: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff irq 8 on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 450
Event timer "HPET1" frequency 14318180 Hz quality 440
Event timer "HPET2" frequency 14318180 Hz quality 440
Timecounter "ACPI-safe" frequency 3579545 Hz quality 850
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x408-0x40b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
vgapci0: <VGA-compatible display> port 0xf000-0xf03f mem 0x90000000-0x90ffffff,0x80000000-0x8fffffff at device 2.0 on pci0
vgapci0: Boot video device
xhci0: <Intel Braswell USB 3.0 controller> mem 0x91500000-0x9150ffff at device 20.0 on pci0
xhci0: 32 bytes context size, 64-bit DMA
usbus0 on xhci0
usbus0: 5.0Gbps Super Speed USB v3.0
pci0: <encrypt/decrypt> at device 26.0 (no driver attached)
pcib1: <ACPI PCI-PCI bridge> at device 28.0 on pci0
pci1: <ACPI PCI bus> on pcib1
pci1: <network> at device 0.0 (no driver attached)
isab0: <PCI-ISA bridge> at device 31.0 on pci0
isa0: <ISA bus> on isab0
acpi_button0: <Power Button> on acpi0
acpi_tz0: <Thermal Zone> on acpi0
sdhci_acpi0: <Intel Bay Trail/Braswell eMMC 4.5/4.5.1 Controller> iomem 0x9152c000-0x9152cfff irq 45 on acpi0
mmc0: <MMC/SD bus> on sdhci_acpi0
sdhci_acpi1: <Intel Bay Trail/Braswell SDXC Controller> iomem 0x9152a000-0x9152afff irq 47 on acpi0
mmc1: <MMC/SD bus> on sdhci_acpi1
atkbdc0: <Keyboard controller (i8042)> at port 0x60,0x64 on isa0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
atkbdc0: non-PNP ISA device will be removed from GENERIC in FreeBSD 12.
est0: <Enhanced SpeedStep Frequency Control> on cpu0
Timecounters tick every 1.000 msec
ugen0.1: <0x8086 XHCI root HUB> at usbus0
uhub0: <0x8086 XHCI root HUB, class 9/0, rev 3.00/1.00, addr 1> on usbus0
mmcsd0: 31GB <MMCHC DF4032 0.1 SN 9557679A MFG 05/2016 by 69 0x0000> at mmc0 200.0MHz/8bit/8192-block
mmcsd0boot0: 4MB partion 1 at mmcsd0
mmcsd0boot1: 4MB partion 2 at mmcsd0
mmcsd0rpmb: 4MB partion 3 at mmcsd0
mmc1: No compatible cards found on bus
WARNING: WITNESS option enabled, expect reduced performance.
Trying to mount root from ufs:/dev/mmcsd0p2 [rw]...
uhub0: 13 ports with 13 removable, self powered
lo0: link state changed to UP

September 30, 2018 12:00 AM

campgnd 2018

                                                              __,--'\
                                                        __,--'    :. \.
                                                   _,--'              \`.
                                                  /|\       `          \ `.
                            ____ _   _ ____      / | \        `:        \  `/
  ___ __ _ _ __ ___  _ __  / ___| \ | |  _ \    / '|  \        `:.       \
 / __/ _` | '_ ` _ \| '_ \| |  _|  \| | | | |  / , |   \                  \
| (_| (_| | | | | | | |_) | |_| | |\  | |_| | /    |:   \              `:. \
 \___\__,_|_| |_| |_| .__/ \____|_| \_|____/ /| '  |     \ :.           _,-'`.
                    |_|                    \' |,  / \   ` \ `:.     _,-'_|    `/
                                              '._;   \ .   \   `_,-'_,-'
                                            \'    `- .\_   |\,-'_,-'
Scotland's first Hacker camp.                            `--|_,`'

For one of the campgnds I used the tag line 'it happened again!'. It keeps happening, and people are still upset about that year we missed. At this point it is easier to keep doing it.

hammocking

campgnd is the annual camping trip for the hackerspace. We take 10-20 people off into a remote field, build up an unreasonable shanty town of tents, feed it with power and data, and let our minds go.

pokedex hacking

I love campgnd; it is a chance to escape and an opportunity to test out our village before visiting larger camps around Europe. Getting away and going somewhere is a great way to increase focus; if camping isn't your thing then taking your hackerspace to a maker faire is a great way to focus on getting projects ready to show.

smelt pour

It seems we are already planning campgnd 2019; if you want to join the madness drop into #scottishconsulate on freenode and ask.

radio hacking

September 30, 2018 12:00 AM

September 29, 2018

Tom Jones

Face

Far Too Much Summer

There was frost on the car this morning, so we can declare summer concluded. These last 3 months have been very intense, an absolute ton of fun, but the intensity meant very little down time.

I plan to write a series of blog posts to capture some of the excellent adventures I had. As always the best things required participation; to spare you from inside jokes I will stay close to easily shared realities.

Somehow there was space in this calendar to start running a monthly pancake breakfast at the hackerspace. The next Hacker Breakfast will be Sunday the 14th of October.


It is hard to admit. This summer was too much.

It is hard to admit because throughout, despite the travel exhaustion, hangovers and mild illness, it was a ton of fun. The fun came at a cost: post IETF my brain was a puddle and I still had to build a streaming system and write a slide deck based on it.

EuroBSDCon was a major stress inducer for me, I submitted to the CFP with a Proof of Concept, which did work. Getting from the PoC to a presentable system was a lot of work. I allocated time to do this and then filled that time with travel and conferences and my job.

Six large blocks of travel in a row were too many. I need to figure out how to control the commitments I make so I don't become overwhelmed by saying yes.

September 29, 2018 12:00 AM

September 08, 2018

hibby

Face

vm-bhyve with NAT on FreeBSD

I’ve been running FreeBSD on my primary server for a while. There’s a number of things I like, and I’m enjoying the challenge of getting to grips with how the system is put together. It’s been a good challenge so far, with many highlights. The idea of the server was to be a VM host - bhyve is a lovely hypervisor to interact with, and I’ve tried a few management tools for it.

September 08, 2018 03:45 PM

August 25, 2018

Iain R. Learmonth

Face

MB7VX Shutdown

The APRS digipeater MB7VX has been offline for quite a while now. I’m not going to bring it back any time soon and I have applied for a second NoV to release the frequency and callsign.

The original MB7VX setup using a donated Tait 2 meter radio and a 13.8 volt power supply from the local club junk sale

Over time, a number of people have contributed to the running of this station. It has been a really fun project and I am sad that it is coming to an end. Thanks go to all the people that have contributed to the construction, installation, configuration, testing and software development that was required to get this station going.

The primary reason for this shutdown is that I do not have the time to deal with all the arguments. The Radio Society of Great Britain (RSGB) has not made it easy to run this station. The Amateur Radio Observer Service (AROS) have also caused unnecessary problems.

For those that may be unaware, RSGB assist licensed radio amateurs in applying for NoVs. This means getting the relevant paperwork from Ofcom to run the station 24/7 without me being there. It is also required to allow other amateurs to use the repeater.

The first complaints came from an AROS member who also happened to operate an APRS “IGate” in Aberdeen. To enhance the fun you might have with APRS, it is possible to copy the packets you see to an Internet service. You can then see position reports and messaging at the aprs.fi website, to give one example.

The path that a packet took is interesting. Maybe you were repeated before you got to an Internet-connected station, or maybe you had direct communication with one. As far as I can tell, the AROS member was upset because MB7VX was showing as being the first station to report some mobile stations’ packets and not his.

Here followed a ridiculous debate in which RSGB/AROS were arguing with Ofcom, the regulator, that my license was actually more restrictive than it was. Ofcom (who were the nice people in all this, and I am thankful to those who were dealing with this case) cleared this up and while I didn’t feel that RSGB/AROS had accepted Ofcom’s answer they did somewhat let the matter go.

In a sensible world, this should have been the other way round. RSGB/AROS are meant to be making it easier for radio amateurs to experiment and learn, not harder! At the time, I wrote this blog post which goes in to more detail on the issue.

Later, this AROS member would also jump into conversations we were having and start “educating” us on what we can and can’t do with packet radio. There was talk of moving to another frequency and just having our APRS activity there instead. In the end the usage dropped off and the digipeater mostly sat silent.

At the last renewal of the NoV I was told that the output power would need to be reduced below the point that it would be a useful station. It would still receive packets but the repeated packets wouldn’t be heard. It was RSGB’s view that the goal of APRS was to get your packets to the Internet whereas our group were really more interested in RF to RF communication.

RSGB told me that they were looking to prevent interference that could occur with nearby voice gateways. The fact is that in the whole time the station was operating, I never once received a complaint about interference. I’m also not aware of any voice gateways around the Aberdeen area that the interference would even affect. If there were some, I would use them.

So anyway, for now at least, MB7VX is shut down and I will soon no longer hold the NoV for it.

August 25, 2018 01:20 PM
