Planet

June 02, 2020

Tom Jones


Help me blog more in June

This is a post on my blog.

I both have a blog and enjoy blogging. I think a blog is the perfect way to keep notes for myself with the nice possibility that they might help someone else. I frequently look up my own post on how to take screenshots with imagemagick on this blog.

I wrote a blog post every day for 6 months, from the 26th of September 2016 to the 27th of March 2017; in the end I added 196 new entries to this blog in 182 days. I can't write an epic detailed post every day, though I might manage a week or so like that if I have a backlog. Most posts were little more than an image and some text, not much longer than tweets.

But I still wrote; on a few days I even wrote more than one blog post. Under each blog post I included where in the world I was (borrowed from a character's quirk in Cryptonomicon), the weather and the books I was reading at the time.

These posts cover a range: pictures I was proud of, planning for events, travel and cool projects I worked on. Reviewing these has been a fun experience.

All that said, I haven't published a blog post for 276 days and I think I would like some help getting back into the public writing groove.

I tend to be better at delivering projects when I am held accountable by other people. Think of it as peer help rather than peer pressure.

I think it would be really cool if you too wrote some blog posts. Not for me, not for anyone else, but for you. You today and in the future. Write something now to clear your head, write something today that will help you tomorrow. Writing things down helps you remember them, but when you forget you will get to try searching your own log rather than having to scour the depths of the Internet.

I asked this weekend at campgndd for some peer help so that I would actually write blog posts in June. I suggested we each try to write four this month.

These kind people said they would try and write too to keep me going:

It would be cool if you were to join us. I will look for your response on your blog.

June 02, 2020 12:00 AM

May 29, 2020

Edward Watson


Blog back online!

So I just fixed the blog engine - only took 2 years! This might mean that I'll be posting entries again, maybe.


May 29, 2020 04:08 PM

May 21, 2020

hibby


1/4 Wave Ground Plane

I’ve had a plan to build a 1/4 Wave Ground Plane antenna based on this calculator and the data available on that site for some time. I’m doing the 2m band first (144MHz) - I’ve got a piece of 4mm diameter brass pipe for my driven element, and 6mm aluminium pipe for my radials. The radials are to be angled at 45° from each other, at 45° below normal. Click the link for what that looks like!
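
For a rough sanity check on the numbers: a quarter wavelength at 144MHz is about 300/144/4 ≈ 0.52m in free space, and calculators like the one linked typically shorten that by around 5%, giving a driven element somewhere around 49-50cm with the radials a little longer. The calculator’s exact figures are the ones to trust when cutting.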

May 21, 2020 10:30 PM

May 08, 2020

Iain R. Learmonth


HamBSD Development Log 2020-05-08

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on documentation, tests and tidying up. It’s good to keep technical debt in check, and while no big exciting new features landed today, a lot of solid groundwork was done.

I started the day on documentation, adding details on the new -b flag for aprsisd and details on which packets will be sent to APRS-IS. I also added a reference to the APRS-IS specifications to both the aprsisd(8) and the aprsis-filter(7) man pages.

I’ve added a few more unit tests to test the generation of the TNC2 format packets from AX.25 packets to upload to APRS-IS. One of these covers an interesting case, where a carriage return or line feed is present in the information field of an APRS packet.

This could lead to an “injection attack”: if the payload of the packet is copied verbatim to the server, each of these characters is interpreted as a packet delimiter, making it possible to send packets that bypass checks or to send in-band commands (such as filter) via RF.

I tested both Dire Wolf and Xastir; both are immune to this attack. I posted to the tapr-aprssig mailing list. It turns out that YAAC may be affected by this issue.

John Langner WB2OSZ commented on the thread to say that the approach that Dire Wolf takes is to truncate the packet at the first carriage return or line feed character, so I’ve implemented this in aprsisd before any checks are carried out on the information part.

I’ve learned from looking at YAAC that it also implements TLS for APRS-IS and so that brings the number of other clients I know about to two: APRSdroid and YAAC. I also managed to find in its source code the root CA certificate for the APRS tier 2 servers and have included this in the HamPKI certificate bundle.

The bad news is that it uses SHA-1 as a signature scheme, and while this means I can now enable certificate validation, neither server has ssl.aprs2.net in its certificate so the name can’t be validated, and one of the servers still has an expired certificate. Progress is still progress.
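
For anyone wanting to poke at this themselves, the certificate details are easy to pull out with openssl. This is only a sketch, and the port is an assumption on my part (use whichever TLS port your chosen server actually listens on):

openssl s_client -connect ssl.aprs2.net:24580 -servername ssl.aprs2.net </dev/null 2>/dev/null | openssl x509 -noout -subject -dates

The subject shows which names are in the certificate and the dates show whether it has expired.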

Another test case looks at third-party traffic with TCPIP in the third-party header, and this will now be correctly dropped.

I’m not going to list any next steps in this post, because I think now I need to do a review of what has been done and where effort needs to go next. The next steps are going to be working out what the next steps are.

May 08, 2020 04:44 PM

May 07, 2020

Iain R. Learmonth


HamBSD Development Log 2020-05-07

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on writing unit tests for aprsisd.

I’ve added a few unit tests to test the generation of the TNC2 format packets from AX.25 packets to upload to APRS-IS. There’s still some todo entries there as I’ve not made up packets for all the cases I wanted to check yet.

These are the first unit tests I’ve written for HamBSD and it’s definitely a different experience compared to writing Python unit tests for example. The framework for the tests uses bsd.regress.mk(5). The tests are C programs that include functions from aprsisd.

In order to do this I’ve had to split the function that converts AX.25 packets to TNC2 packets out into a separate file. This is the sort of thing that I’d be more comfortable doing if I had more unit test coverage. It seemed to go OK and hopefully the coverage will improve as I get more used to writing tests in this way.

I also corrected a bug from yesterday where AX.25 3rd-party packets would have their length over-reported, leaking stack memory to RF.

I’ve been reading up on the station capabilities packet and it seems a lot of fields have been added by various software over time. Successful APRS IGate Operation (WB2OSZ, 2017) has a list of some of the fields and where they came from under “IGate Status Beacon” which seems to be what this packet is used for, not necessarily advertising the capabilities of the whole station.

If this packet were to become quite long, there is the possibility of an amplification attack: someone with a low power transmitter can send an IGate query and get the IGate to respond with a much longer packet at higher power. It’s not even clear from the reply why the packet is being sent, as the requestor is not mentioned.

I think this will be the first place where I implement some rate limiting and see how that works. Collecting some simple statistics like the number of packets uploaded/downloaded would also be useful for monitoring.

Next steps:

  • Keep track of number of packets uploaded and downloaded
  • Add those statistics to station capabilities packet

May 07, 2020 08:58 PM

May 06, 2020

Iain R. Learmonth


HamBSD Development Log 2020-05-06

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on gating from the Internet to RF.

In the morning I cleaned up the mess I made yesterday with escaping the non-printable characters in packets before uploading them. I ran some test packets through and both Xastir and aprs.fi could decode them so that must be the correct way to do it.

I also added filtering of generic station queries (?APRS?) and IGate queries (?IGATE?). When an IGate query is seen, aprsisd will now respond with a station capabilities packet. The packet is not very exciting as it only contains the “IGATE” capability right now, but at least it does that.

Third-party packets are also identified, and have their RF headers stripped, and then are unconditionally thrown away. I need to add a check there to see if TCPIP is in the path and gate it if it’s not but I didn’t get to that today.

I added a new flag to aprsisd, -b, to allow the operator to indicate whether or not this station is a bi-directional IGate station. This currently only affects the q construct added to the path for uploaded packets (there wasn’t one before today) to indicate to APRS-IS whether or not this station can be used for two-way messaging with RF stations in range. Later I’ll make this also drop incoming messages if it’s set to receive-only instead of attempting to gate them anyway.

I noticed that when I connect to aprsc servers with TLS, they have actually been appending a qAS (message generated by server) instead of qAR (message is from RF) which I think is a bug, so I filed a GitHub issue.

The big thing today was generating third-party headers. Until now, aprsisd has tried to stuff the TNC2 header back into an AX.25 packet with a lot of truncated path entries. It’s now building the headers correctly(ish) and it’s possible to have bi-directional messaging through a HamBSD IGate. The path in the encapsulated header is currently entirely missing but it still works.

As this is a completely different way of handling these packets, it meant a rewrite of a good chunk of code. The skeleton is there now, just need to fill it in.

APRS message from MM0ROR: Hello from APRS-IS!

Next steps:

  • Generate proper paths for 3rd-party packets
  • Include a path for RF headers for station capabilities and 3rd-party packets
  • Add the new -b flag to the man page

May 06, 2020 08:30 PM

May 05, 2020

Iain R. Learmonth


HamBSD Development Log 2020-05-05

I worked on HamBSD today, still looking at improvements to aprsisd(8). My focus today was on converting AX.25 packets to the TNC2 format used by APRS-IS.

I fixed the path formatting to include the asterisks for used path entries. Before, packets would always appear to APRS-IS to have been heard directly, which gave some impressive range statistics for packets that had in fact been through one or two digipeaters.

A little more filtering is now implemented for packets. The control field and PID are verified to ensure the packets are APRS packets.

The entire path, from the AX.25 packet read from the axtap(4) interface to the TNC2 formatted string going out the TCP/TLS connection, now has bounds checks, with almost all string functions replaced with their mem* equivalents.

It wasn’t clear if it’s necessary to escape the non-printable characters in packets before sending them to APRS-IS, and it turns out that actually you’re not meant to do that. I’d implemented this with the following (based roughly on how the KISS output escaping works in kiss(4)):

icp = AX25_INFO_PTR(pkt_ax25, pi);
iep = pkt_ax25 + ax25_len;
while (icp < iep) {
        ibp = icp;
        /* find the end of the run of printable characters starting at ibp */
        while (icp < iep) {
                if (!isprint(*icp++)) {
                        icp--;
                        break;
                }
        }
        if (icp > ibp) {
                if (tp + (icp - ibp) > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                memcpy(&pkt_tnc2[tp], ibp, icp - ibp);
                tp += icp - ibp;
        }
        if (icp < iep) {
                /* escape the non-printable byte as <0xNN> */
                if (tp + 6 > TNC2_MAXLINE)
                        /* Too big for a TNC2 format line */
                        return 0;
                pkt_tnc2[tp++] = '<';
                pkt_tnc2[tp++] = '0';
                pkt_tnc2[tp++] = 'x';
                pkt_tnc2[tp++] = hex_chars[(*icp >> 4) & 0xf];
                pkt_tnc2[tp++] = hex_chars[*icp & 0xf];
                pkt_tnc2[tp++] = '>';
                icp++;
        }
}

I can now probably replace this with just a single bounds check and memcpy, but then I need to worry about logging. There is a debug log line for every packet, and for that I’ll probably just call strvis(3).

This did throw up something interesting though, so maybe this wasn’t a complete waste of time. I noticed that a “<0x0d>” was getting appended to packets coming out of my Yaesu VX-8DE. It turns out that this wasn’t a bug in my code or in aprsc (the APRS-IS server software I was connected to) but an actual byte that is tacked on the end of every APRS packet generated by the radio’s firmware. I never saw it before because aprsc would interpret this byte (ASCII carriage return) as the end of a packet, so it would just be lost.

Next steps:

  • Removing the non-printable character escaping again
  • Filtering generic APRS queries (to avoid packet floods)
  • Filtering 3rd-party packets

May 05, 2020 07:55 PM

May 03, 2020

Iain R. Learmonth


Hetzner Dedicated Server Reverse DNS + Ansible

Continuing on the path towards all my stuff being managed by Ansible, I’ve figured out a method of managing the reverse DNS entries for subnets on the Hetzner Dedicated Server.

There’s a bunch of Ansible modules for handling Hetzner Cloud, but these servers are managed in Robot which the Cloud API doesn’t cover. Instead, you need to use the Robot Webservice.
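
As a quick sanity check before wiring it into Ansible, the same endpoint can be hit with curl and HTTP basic auth. A sketch, with a documentation IP and placeholder credential variables standing in for real values:

curl -u "$HETZNER_WS_USER:$HETZNER_WS_PASSWORD" "https://robot-ws.your-server.de/rdns/192.0.2.10"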

Ansible does have a module for doing pretty arbitrary things with web APIs though, so using that I’ve got the following playbook figured out to keep the reverse DNS entries in sync:

---
- hosts:
  - vmhost_vm1
  gather_facts: False
  tasks:
  - name: import hetzner webservice credentials
    include_vars:
      file: "hetzner_ws.yml"
  - name: set rdns for hetzner hosts
    block:
    - name: get current rdns entry
      uri:
        user: "{{ hetzner_ws_user }}"
        password: "{{ hetzner_ws_password }}"
        url: "https://robot-ws.your-server.de/rdns/{{ vmip4 }}"
        status_code: [200, 404]
      register: rdns_result
    - name: update incorrect rdns entry
      uri:
        user: "{{ hetzner_ws_user }}"
        password: "{{ hetzner_ws_password }}"
        url: "https://robot-ws.your-server.de/rdns/{{ vmip4 }}"
        method: "POST"
        body_format: form-urlencoded
        body:
          ptr: "{{ inventory_hostname }}"
        status_code: [200, 201]
      when: '"rdns" not in rdns_result.json or inventory_hostname != rdns_result.json.rdns.ptr'
      changed_when: True
    delegate_to: localhost

The host groups this runs on are currently hardcoded as the VMs that live in the Hetzner Dedicated Server. A future iteration of this might use some inventory plugin to look up the subnets that are managed on Hetzner and create a group for all of those. Right now it won’t be setting the reverse DNS for the “router” interface on that subnet, and won’t automatically include new servers’ subnets.

Gathering facts is disabled because all of these run locally. There is at least one VM running on this server that I can’t log in to because I host it for someone else, so running locally is a necessity.

The webservice credentials are stored in an Ansible Vault encrypted YAML file and loaded explicitly. An important note: the webservice username and password are not the same as your regular Robot username and password. You need to create a webservice user in Robot under “Settings; Webservice and app settings”.

If you attempt to authenticate with an incorrect username and password 3 times in a row, your IP address will be blocked for 10 minutes. There are 6 hosts in this group, so I did this a few times before I realised there was a different user account I needed to create. I’d suggest limiting the play to a single host while you’re testing, to get the authentication figured out.

The actual calls to the webservice take place in a block just to avoid having to specify the delegate_to: localhost twice. The first step looks up the current PTR record, and accepts success if it gives either a 200 or 404 status code (it will be 404 if there is no existing pointer record). The result of this is used to conditionally create/update the PTR record in the next task.

If nothing needs to be done, nothing will be changed and the second task will be skipped. If a change is needed, the second step is successful if either the PTR record is updated (status 200) or created (status 201).
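
Once the play has run, a record is easy to spot-check with an ordinary reverse lookup (documentation IP again), which should print a PTR matching the inventory hostname:

host 192.0.2.10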

This was actually a lot easier than I thought it would be, and the uri module looks to be really flexible, so I bet there are other things that I could easily integrate into my Ansible playbooks.

May 03, 2020 09:55 AM

April 20, 2020

hibby


Yubikey GnuPG + SSH Agent with Fish

So I’ve had a lot of GPG issues over the last few months, partly because I haven’t particularly bothered to ensure I’ve got a consistent setup across my machines. Part of the issue is that I’m using a yubikey, and I keep having to look at external references to make sure it works consistently. I’m going to keep this as a reference for what I’ve used recently to make everything work.

April 20, 2020 01:02 PM

April 10, 2020

Ana Custura

Ansible x OpenBSD Web Deployment

I’ve recently switched most of my home and away infrastructure from Linux to various flavours of BSD. This blog post documents an attempt at deploying multiple static websites in the fastest way possible, on OpenBSD, with Ansible.

It covers automating multiple LetsEncrypt-enabled static websites, configured with onion services for access through the Tor network, using OpenBSD’s brilliant httpd. As a bonus, the content of these websites is automatically generated from cat pictures.

The requirements for this are a public IP address, ability to make DNS records, and of course… some static websites to deploy.

Basics

At the heart of the Ansible configuration for this project is the group of websites to deploy, which are defined as variables in the hosts file:

[www_sites]
site1.example.net
site1.example.com
site2.example.net
site2.example.com

Throughout the deployment, this group is iterated over.

Generate the websites

First, to deploy multiple websites, one must have multiple websites available - I made a few, out of:

  • pictures of several cat friends
  • a short bio for each cat friend
  • a Jinja2 template for the HTML index

My website vars group looks like this:

[www_sites]
lion.kitty.institute
leppy.kitty.institute
fox.kitty.institute
jinxy.kitty.institute
kitty.kitty.institute

Directories for each site are iteratively created with an Ansible task:

- name: create vhost directories
  file:
    path: "/var/www/vhosts/{{ item }}"
    state: directory
    owner: www
  with_items: "{{ groups['www_sites'] }}"

The website content, in our case a generated index.html file, is also created by an Ansible task. With each iteration, a variable named ‘vhost’, which corresponds to the name of each site, is passed to the index.html.j2 template:

- name: create mock site content
  template:
    src: "index.html.j2"
    dest: "/var/www/vhosts/{{ item }}/index.html"
    owner: www
  with_items: "{{ groups['www_sites'] }}"
  vars: 
    vhost: "{{ item }}" 

The template is simple HTML, and uses the ‘vhost’ variable to load the correct bio and picture files. The bio and picture files are stored in the main Ansible directory, to be used by the template:

<h1>{{ vhost }}</h1>
<hr>
<p>"{{ lookup('file','kitty_bios/' + vhost) }}"</p>
<img src= "{{  vhost }}.jpg" width="500 px"/>

The resulting index files are installed in their own directory, ready to be served.

Deploy the websites

To serve the new websites, a configuration file for httpd must be created. You’ve guessed it - this is another template: httpd.conf.j2.

The example configuration file shipped with OpenBSD is a wonderful reference here, and we want to keep the part that redirects port 80 to port 443 and allows LetsEncrypt verification for ACME challenges, for all the sites (and, additionally, for your host):

server "{{ inventory_hostname }}" {
        listen on * port 80
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location * {
                block return 302 "https://$HTTP_HOST$REQUEST_URI"
        }
}

Then, a loop is added to make configuration blocks for all the servers in the www_sites group. All the websites will listen on port 443 and are expected to have TLS certificates - these will be generated in the next step.

{% for vhost in groups['www_sites'] %}
server "{{ vhost }}" {
        listen on * tls port 443
        tls {
                certificate "/etc/ssl/{{ vhost }}.fullchain.pem"
                key "/etc/ssl/private/{{ vhost }}.key"
        }
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location * {
                root "/vhosts/{{ vhost }}"
        }
}
{% endfor %}

Letsencrypt the websites

Ok, time to fill /etc/ssl with certificates for these websites. As a cautionary note, httpd must be started before you attempt to request the certificates. This was…the hardest issue to debug in the entire setup.

So, throw in there:

- name: enable and start httpd
  service:
    name: httpd
    enabled: yes
    state: started

For the ACME setup, a configuration file must first be created and installed. Yes, there’s a template for that.

- name: install acme-client.conf
  template:
    src: "acme-client.conf"
    dest: "/etc/acme-client.conf"

The template is based on the default example configuration included with OpenBSD’s acme-client, wrapped in a for loop:

authority letsencrypt {
        api url "https://acme-v02.api.letsencrypt.org/directory"
        account key "/etc/acme/letsencrypt-privkey.pem"
}

{% for d in groups['www_sites'] %}
domain "{{ d }}" {
        domain key "/etc/ssl/private/{{ d }}.key"
        domain full chain certificate "/etc/ssl/{{ d }}.fullchain.pem"
        sign with letsencrypt
}
{% endfor %}

For the ACME run, ensure the names match up with those used earlier in the httpd configuration file:

- name: initial acme-client run
  command: "/usr/sbin/acme-client {{ item }}"
  args:
    creates: "/etc/ssl/{{ item }}.fullchain.pem"
  with_items: "{{ groups['www_sites'] }}"
  notify:
  - reload_httpd

A task to renew certs indefinitely via cron jobs finishes the job nicely:

- name: renew certificates via root crontab
  cron:
    name: "acme-client renew {{ item }}"
    minute: "0"
    job: "sleep $((RANDOM \\% 2048)) && acme-client {{ item }} && rcctl reload httpd"
    user: root
  with_items: "{{ groups['www_sites'] }}"

To test, go to your website, which should both redirect to https, and should have a valid cert. In this case, https://fox.kitty.institute.
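
The same check can be scripted with curl, using the hostname above; the first request should answer with the 302 redirect and the second should complete cleanly against the LetsEncrypt certificate:

curl -sI http://fox.kitty.institute | head -n 1
curl -sI https://fox.kitty.institute | head -n 1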

Onion the websites

The cats must now be made available to those under oppressive regimes or avoiding censorship. An onion address does just that.

Tor must be installed, and the appropriate configuration to enable onion services must be added. The torrc file is another template, which basically instructs Tor to create an onion service for each site and store the onion service files in a separate per-site folder:

Log notice syslog
RunAsDaemon 1
DataDirectory /var/tor
User _tor

{% for domain in groups['www_sites'] %}
HiddenServiceDir /var/tor/{{ domain }}/
HiddenServicePort 80 127.0.0.1:80
HiddenServicePort 443 127.0.0.1:443
{% endfor %}

To install tor, and the configuration file:

- name: install tor
  openbsd_pkg:
    name: ['tor']
    state: present

- name: install torrc
  template:
    src: "torrc"
    dest: "/etc/tor/torrc"
    owner: root
    group: wheel
    mode: 0644
  register: torrc

Note how copying the torrc template registers a variable. This is used later to reload tor if any changes were made to the configuration.

I should point out that this will generate random onion addresses. If you want a custom onion address you can mine it and replace the files in each directory manually.

Remember: the machine running the service must be able to read the private key, so to reduce attack surface the safest way to generate onion addresses is on the very same machine. Software to mine vanity onion addresses can be found here. When copying these, the owners and permissions of the original files need to be kept for tor to run.

My final onion addresses look like this:

meow2ecfmjschzktpwnaufh5m5fop3emrmm2mb62gvawovpiyk7jxdqd.onion
meow3pdf65knffidkkypdgtvohispjpzw6omcyoellwythv64vxo5dqd.onion
meow4u5lkpndb562ble3ityac2l3gm47wegtqp72mq2mfqmved7mnhad.onion
meow5w6vxagd7ipzvmb2h54quuzgwkanu2i3zjnb7qloakmgy3nmrgid.onion
meow6onx3grwas4k2i7lb6iecgyk6fkrypcnsatr2tri6kunv36zksid.onion

Try these in Tor browser!

The step above is entirely optional. However, after copying the torrc configuration file, tor must be running and set to run at startup:

- name: ensure tor is enabled and started
  service:
    name: tor
    enabled: yes
    state: started

- name: reload tor
  service:
    name: tor
    state: reloaded
  when: torrc.changed

The final step is integrating these tasks with the rest of the configuration, and telling httpd what to do when it receives requests on the onion addresses. The onion addresses must be known before installing the httpd template - if they are generated by tor then fetching them manually would be a pain. Tor stores each onion address in a file called hostname under its directory in /var/tor/. A task can be set up to retrieve these with Ansible:

- name: retrieve onion hostnames
  fetch:
    src: "/var/tor/{{ item }}/hostname"
    dest: "files/onion_hostnames/{{ item }}"
    flat: yes
  with_items: "{{ groups['www_sites'] }}"

Now the files are in place ready to be retrieved by other tasks in Ansible. The httpd template created earlier can now be modified to use the onion addresses:

{% for vhost in groups['www_sites'] %}
server {{ lookup('file', 'onion_hostnames/' + vhost) }} {
        listen on * port 80
        location * {
                root "/vhosts/{{ vhost }}"
        }
}

This will set up listeners on port 80 for each onion, and serve the website content created earlier. You can enable https for these addresses, but as the name in the certificate won’t match the onion name, the user will be presented with a warning. Note the new alias line included in the template:

server "{{ vhost }}" {
        listen on * tls port 443
        alias {{ lookup('file', 'onion_hostnames/' + vhost) }}
        tls {
                certificate "/etc/ssl/{{ vhost }}.fullchain.pem"
                key "/etc/ssl/private/{{ vhost }}.key"
        }
        location "/.well-known/acme-challenge/*" {
                root "/acme"
                request strip 2
        }
        location * {
                root "/vhosts/{{ vhost }}"
        }

Lots of things were omitted here for brevity, but the full template is on github.

To finish this off, a picture of a cat friend:

Kitty

April 10, 2020 12:00 AM

April 09, 2020

Iain R. Learmonth


Consolidation and simplification

I’ve been doing a lot of consolidation and simplification in an effort to reduce the amount of brain I have to expend on various tasks and responsibilities. I think it’s working.

Last week I wrote about Ansible for Tor Metrics. I’ve also been working on Ansible for my own stuff. Some of the things I’ve been working on include:

  • consistent user account setup and synchronised SSH public keys
  • consistent privilege escalation (doas on OpenBSD, sudo on Debian)
  • all hosts are backed up via BackupPC
  • all hosts are monitored in Nagios

If you want to check out this stuff as it evolves, I’ve pushed it to git.sr.ht (this link might break, I make no promises).

I’m wondering how I want to manage these Ansible roles going forward. Perhaps as git submodules, or I could learn about Ansible Galaxy. Maybe I can do both. I’m a bit put off Ansible Galaxy by the fact that the only option to sign up was to use GitHub, and I’m trying to stop using these silos.

Speaking of stopping using things, I’ve killed my Netlify account. For a long time I used Netlify to host my blog but I’ve now moved it back onto a server I manage using my Ansible playbook.

When I signed up to Netlify, they offered a free account for open source people that included most, if not all, of the pro features. It seemed like they wanted to be a company that was giving back to the community that produces the stuff they are using. One day, without any notice or announcement, my account was converted from an open source account to a simple “free” account. It felt sneaky, and from then on I wasn’t so happy with Netlify. I’m happy to have moved everything away now.


I learned a thing today: on OpenBSD, you can use arp(8) to send wake on LAN packets. Maybe that will come in handy for you. It used to be another command on its own but it looks like it was consolidated.
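
If I remember the syntax right it’s the -W flag, something like the line below (MAC address and interface made up); check arp(8) before relying on it:

# arp -W 00:11:22:33:44:55 em0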

April 09, 2020 05:23 PM

April 01, 2020

Iain R. Learmonth


Continue as normal

I’m thinking to experiment with a loosely organised but perhaps more regular posting style. I’m going to post these under the tag “journal” because I think that will fit these quite well. We’ll see if I keep these up, or if this one ends up standing alone.

For many people right now is a time of uncertainty, anxiety, fear and loss. For me though today was pretty much a normal day. I work from home anyway so I’m doing what I do every week day. Today felt a little more normal than other days as I was attempting to work exclusively on a new laptop.

Before the whole coronavirus thing I was getting a little fed up with being tied to my desk and so I purchased a laptop for work that would allow me to sometimes work from the hackerspace or a coffee shop or whatever.

The laptop is a Mac and I’m usually working on Debian on my desktop, so there are changes to my workflows and I need to understand how the different tools work. I want to be sure that when we’re allowed to go outside again, I won’t immediately end up stuck and find that I have to go home again.

For a lot of this morning I was working on documentation. For this I’ve used org-mode and compared to writing Markdown in vim (as I’m doing now for this blog post) it’s a vastly superior way to produce lots of detailed technical text.

Some of the documentation is talking about how we use CloudFormation in Tor Metrics. Just after lunch I went to try some of these things out and found I didn’t have the AWS CLI installed. Installing it was fairly simple though and now it’s installed and I’ve got the access keys.

Shortly after this, I found that some of the files didn’t quite look right. I’d been working on them on my desktop and hadn’t pushed the new commits, so I was looking at an old view. This is one I would have had to go home for: I had to unplug the monitor, keyboard and mouse and stick them back in the desktop to push those changes, then go straight back to the laptop.

Looking at our Ansible scripts, I was consolidating the “system” setup from a number of services into a single “metrics-common” role. We typically deploy our services on hosts that are managed by the Tor sysadmin team. They manage user accounts, system packages, backups and hardware monitoring so we can focus on the services. For development though we use AWS and we need to recreate the environment that they will give us, and that’s what “metrics-common” is for.

I learned about a new thing today, blocks. Blocks allow you to apply directives to a whole bunch of things at a time. Our “metrics-common” role needs to run as root, but then the rest of our stuff never runs as root because we assume we won’t have root access. Using a block we can do:

tasks:
- name: do the setup
  block:
  - name: step one
    ...
  - name: step two
    ...
  become: yes

The become: then applies to all of the tasks in the block without having to specify it every time.

Going back to the title, continue as normal: being on the laptop for the vast majority of the day meant I was disconnected from distractions like e-mail and IRC, and so I completely missed all of the conversations going on about the state of the world.

It’s good to be informed and to understand the latest advice you need to follow, but also sometimes it’s good to just ignore everything and dig into some YAML-based domain-specific languages to orchestrate some systems.

April 01, 2020 04:48 PM

February 08, 2020

Iain R. Learmonth


From socket(2) to .onion with pf(4)

I’ve been rebuilding my IRC bouncer setup and as part of this process I’ve decided to connect to IRC via onion services where possible. This setup isn’t intended to provide anonymity as once I’m connected I’m going to identify to NickServ anyway. I guess it provides a little protection in that my IP address shouldn’t be visible in that gap between connection and a cloak activating, but there’s so many other ways that my identity could leak.

You might wonder why I even bothered if not for anonymity. There are two reasons:

  1. to learn more about tor(1) and pf(4), and
  2. to figure out how to get non-proxy aware software to talk to onion services.

I often would find examples of socat, torsocks, etc. but none of them seemed to fit with my goal of wanting to use an onion service as if it were just another host on the Internet. By this I mean, with a socket(AF_INET, SOCK_STREAM) that didn’t also affect my ability to connect to other Internet hosts.

Onion services don’t have IP addresses. They have names that look like DNS names but that are not actually in DNS. So the first problem here is going to be that we’re not going to be able to give an onion address to the kernel, it wants an IP address. In my setup I chose 10.10.10.0/24 as a subnet that will have IP addresses that when connected to, will actually connect to onion services.

In the torrc file you can use MapAddress to encode these mappings, for example:

MapAddress 10.10.10.10 ajnvpgl6prmkb7yktvue6im5wiedlz2w32uhcwaamdiecdrfpwwgnlqd.onion # Freenode
MapAddress 10.10.10.11 dtlbunzs5b7s5sl775quwezleyeplxzicdoh3cnhm7feolxmkfd42nqd.onion # Hackint
MapAddress 10.10.10.12 awwqg2ishrohngue.onion # 2600net - broken(?)
MapAddress 10.10.10.13 darksci3bfoka7tw.onion # darkscience
MapAddress 10.10.10.14 akeyxc6hie26nlfylwiuyuf3a4tdwt4os7wiz3fsafijpvbgrkrzx2qd.onion # Indymedia

Now when tor(1) is asked to connect to 10.10.10.10 it will map this to the address of Freenode’s onion service, and connect to that instead. The next part of the problem is allowing tor(1) to receive these requests from a non-proxy aware application, in my case ZNC. This setup will also need a network interface to act as the interface to tor(1). A loopback interface will suffice and it’s not necessary to add an IP address to it:

# ifconfig lo1 up

pf is a firewall for OpenBSD, that can also perform some other related functions. One such function is called divert-to. Unfortunately, there is also divert-packet which is completely unrelated. tor(1) supports receiving packets that have been processed by a divert-to rule and this is often used for routing all traffic from a network through the Tor network. This arrangement is known as a “transparent proxy” because the application is unaware that anything is going on.

In my setup, I’m only routing traffic for specific onion services via the Tor network, but the same concepts are used.

In the torrc:

TransPort 127.0.0.1:1338
TransProxyType pf-divert

In pf.conf(5):

pass in quick on lo1 inet proto tcp all divert-to 127.0.0.1 port 1338
pass out inet proto tcp to 10.10.10.0/24 route-to lo1

and that’s it! I’m now able to connect to 10.10.10.10 from ZNC and pf will divert the traffic to tor.
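
A quick way to test the plumbing without ZNC is a plain nc from the same host to one of the mapped addresses; the port here is just an example (use whatever the IRC service listens on):

$ nc -vz 10.10.10.10 6697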

On names and TLS certificates: I’m using TLS to connect to the onion services, but I’m not validating the certificates. I’ve already verified the server identities because they hold the key for the onion service; the reason I’m using TLS is that I’m then presenting a client certificate to the servers (CertFP) to log in to NickServ. The TLS is there for the server’s benefit while the onion service authentication is for my benefit. You could add entries to your /etc/hosts file mapping irc.freenode.org to 10.10.10.10, but that seemed like a fragile arrangement. If pf or tor stop functioning correctly then no connection is made, but if the /etc/hosts file were to be rewritten you’d connect over the Internet with TLS verification effectively disabled, because you were relying on the onion service to provide that and you’re no longer using it.

On types of transparent proxy: There are a few different types of transparent proxy supported by tor. pf-divert seemed like the most appropriate one to use in my case. It’s possible that the natd(8) “protocol” referred to by the NATDPort torrc option is actually talking about divert(4) sockets, which are supported in OpenBSD, and that’s another option, but it’s not clear which would be the preferred way to do it. If I had more time I’d dig into which methods are useful and which are redundant, as removing code is often a good thing to do.

February 08, 2020 07:35 PM

January 02, 2020

hibby


RIP 2019

Wow, 2019 came and went in a flash! It’s been an incredibly tough, rewarding and interesting year for me. There was a lot of change and turmoil in my personal and professional life, which made actually finding time to stop and do the things I love a lot more difficult. I’ve barely cycled, I feel like I’ve barely touched personal projects, and the memories of what I did in January & February feel a million miles away.

January 02, 2020 03:30 PM

Ana Custura

Packet Radio Fun

Packet radio has been around since the 70s, and is used extensively by radio amateurs. APRS (Automatic Packet Reporting System) allows amateurs to send real-time beacons which may include location or other types of information.

Alongside stations, the APRS infrastructure includes digipeaters and IGates. APRS packets may be picked up by local digipeaters (repeaters of digital packets), and then repeated to reach further stations. An IGate relays received packets to the Internet where a unified map of all APRS objects can be displayed.

I’ve tried to do APRS packet radio over the past several days with mixed results. Here is what I found.

Required equipment

The standard APRS frequency for sending messages is 144.800 MHz in Europe, in the 2 metre band. Any 2M radio that can receive and decode APRS messages will work. It’s most fun to use a handheld device and receive messages as you walk through a new area. Unfortunately, handheld radios that have a built-in TNC are very expensive - but there is an alternative. If all you have is a cheap 2M radio, it can be used with a compact bluetooth TNC device and an APRS application on your phone for the same capabilities.

If operating from a fixed location there are more alternatives, as the TNC bit can be done in software with applications such as Direwolf, and all you require is a sound card and any radio.

I’ve used a Yaesu VX-8DE, which is a handheld device with a built-in TNC.

Sending packets

It’s harder than you think.

Problem #1: Trains

When travelling by train, I was only able to send packets for maybe 15% of the total journey time, due to not having GPS coordinates to build packets with. Turns out that the inside of a train is NOT the best environment for acquiring a GPS lock.

Even with the GPS fired up and sending a beacon every minute, only about 10% of all the packets sent were picked up by any digipeater. Turns out the inside of a train is NOT the best environment for RF propagation either. I was digipeated once approaching Leipzig, as the train was stopping:

Trains are still amazing despite this slight glitch. And using a bluetooth TNC should eliminate the GPS problem, as phones are generally better at acquiring GPS data (for one, they can use a broader range of satellites than the proprietary Yaesu GPS antenna).

Problem #2: Other users

There can be too many APRS packets.

For example, Cambridge is very active APRS-wise. There were many stations around, some even within direct messaging distance. However, sending a single APRS message took several attempts due to collisions. Timing the message so that it sends between the regular beacons of other users helps, but rarely on the first try, and spamming messages to get something through only makes the problem worse in a crowded spectrum.

As good practice, for those out there thinking of running stationary or unattended beacons, I recommend sending updates at least three minutes apart.

Problem #3: Cloudy skies & rubber ducks

Several satellites have amateur radio capabilities, and there are apps to easily track satellite passes in your location.

Cambridge is also in a great spot for ISS passes, some with a really nice elevation above the horizon. The ISS has a digipeater on board, which extends the limited local 2M perspective to space and back, anywhere on the continent. I spent too much time in muddy fields trying to get digipeated by the ISS, but with no success. However, I did receive stations from Austria (OE6PWE), Sweden (SM3XLY), Italy (IK1COA), Netherlands (PE1NTN), France (F6FUJ), Germany (DB1DT), Russia (UA1WBM) and Space itself (RS0ISS):

SPACE

Sending failed, possibly due to a combination of the poor rubber duck antenna on my handheld, sky conditions, and competition from other stations.

On my to-do list is to build a Yagi, and try again, as per this nice start-up guide.

January 02, 2020 12:00 AM

September 09, 2019

Iain R. Learmonth


Spoofing commits to repositories on GitHub

The following has already been reported to GitHub via HackerOne. Someone from GitHub has closed the report as “informative” but told me that it’s a known low-risk issue. As such, while they haven’t explicitly said so, I figure they don’t mind me blogging about it.

Check out this commit in torvalds’ linux.git on GitHub. In case this is fixed, here’s a screenshot of what I see when I look at this link:

GitHub page showing a commit in torvalds/linux with the commit message add super evil code

How did this get past review? It didn’t. You can spoof commits in any repo on GitHub due to the way they handle forks of repositories internally. Instead of copying repositories when forks occur, the objects in the git repository are shared and only the refs are stored per-repository. (GitHub tell me that private repositories are handled differently to avoid private objects leaking out this way. I didn’t verify this but I have no reason to suspect it is not true.)

To reproduce this:

  1. Fork a repository
  2. Push a commit to your fork
  3. Put your commit ref on the end of:
https://github.com/[parent]/[repo]/commit/

That’s all there is to it. You can also add .diff or .patch to the end of the URL and those URLs work too, in the namespace of the parent.

The situation that worries me relates to distribution packaging. Debian has a policy that deltas to packages in the stable repository should be as small as possible, targeting fixes by backporting patches from newer releases.

If you get a bug report on your Debian package with a link to a commit on GitHub, you had better double check that this commit really did come from the upstream author and hasn’t been spoofed in this way. Even if it shows it was authored by the upstream’s GitHub account or email address, this still isn’t proof because this is easily spoofed in git too.

The best defence against being caught out by this is probably signed commits, but if the upstream is not doing that, you can clone the repository from GitHub and check to see that the commit is on a branch that exists in the upstream repository. If the commit is in another fork, the upstream repo won’t have a ref for a branch that contains that commit.
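
For example, something along these lines, with the names as placeholders matching the URL pattern above; if the commit was spoofed from a fork, git will either report no remote branches containing it or fail to find the object at all:

$ git clone https://github.com/[parent]/[repo].git
$ cd [repo]
$ git branch -r --contains <commit>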

September 09, 2019 08:17 PM

August 31, 2019

Tom Jones


FreeBSD on the NanoPi NEOLTS

The NanoPi NEOLTS is an SBC from FriendlyElec that uses the Allwinner H3 SOC. The NanoPi NEOLTS has a nice selection of hardware including 100Mbit Ethernet, 3 USB ports and a bunch of exposed GPIO.

FreeBSD on the NanoPi uses the GENERICSD image. This image requires a bootloader to be added before it will work. We can prepare a single image to be copied to many SD cards by using a memory disk as an intermediate step.

NanoPi NEOLTS

We need to:

  • Get the latest GENERICSD card image snapshot
  • Install the correct boot loader pkg
  • Create a memory disk
  • Copy the GENERICSD image to memory disk
  • Copy the bootloader to the memory disk
  • Mount the root partition of the sd card image
  • Copy the programs and files we need for the tutorial to the sd card

The latest image as I write is 13-CURRENT from 20190829:

$ fetch ftp://ftp.freebsd.org/pub/FreeBSD/snapshots/arm/armv7/ISO-IMAGES/13.0/FreeBSD-13.0-CURRENT-arm-armv7-GENERICSD-20190829-r351591.img.xz

We have to decompress the image before we can use it

$ xz -d FreeBSD-13.0-CURRENT-arm-armv7-GENERICSD-20190829-r351591.img.xz

Each u-boot bootloader platform has its own package; currently there are 46 different bootloaders in the FreeBSD ports system. We want the u-boot for the nanopi_neo (our target).

$ pkg search nanopi     
u-boot-nanopi-neo2-2019.07     Cross-build das u-boot for model nanopi-neo2
u-boot-nanopi_a64-2019.07      Cross-build das u-boot for model nanopi_a64
u-boot-nanopi_m1plus-2019.07   Cross-build das u-boot for model nanopi_m1plus
u-boot-nanopi_neo-2019.07      Cross-build das u-boot for model nanopi_neo
u-boot-nanopi_neo_air-2019.07  Cross-build das u-boot for model nanopi_neo_air

# pkg install u-boot-nanopi_neo-2019.07

The u-boot-nanopi_neo package contains the binary bootloader we need in u-boot-sunxi-with-spl.bin

$ pkg info -l u-boot-nanopi_neo-2019.07

u-boot-nanopi_neo-2019.07:
    /usr/local/share/licenses/u-boot-nanopi_neo-2019.07/GPLv2
    /usr/local/share/licenses/u-boot-nanopi_neo-2019.07/LICENSE
    /usr/local/share/licenses/u-boot-nanopi_neo-2019.07/catalog.mk
    /usr/local/share/u-boot/u-boot-nanopi_neo/README
    /usr/local/share/u-boot/u-boot-nanopi_neo/boot.scr
    /usr/local/share/u-boot/u-boot-nanopi_neo/metadata
    /usr/local/share/u-boot/u-boot-nanopi_neo/u-boot-sunxi-with-spl.bin

With the GENERICSD image and the bootloader in hand, we can create the memory disk image we will use for staging. First we need to create a large enough backing file:

$ truncate -s 8G nanopi.img
# mdconfig -f nanopi.img
md0

Now we can dd the GENERICSD image to the memory disk

# dd if=FreeBSD-13.0-CURRENT-arm-armv7-GENERICSD-20190829-r351591.img of=/dev/md0 bs=1m

We need to dd the bootloader near the start of the SD card image, i.e. to the entire device and not a partition; at this stage that device is the memory disk:

# dd if=/usr/local/share/u-boot/u-boot-nanopi_neo/u-boot-sunxi-with-spl.bin of=/dev/md0 bs=1k seek=8 conv=sync

With the memory disk attached we can interact with the image file as if it were a real USB drive or SD card.

$ gpart show md0
=>      63  16777153  md0  MBR  (8.0G)
        63      2016       - free -  (1.0M)
      2079    102312    1  fat32lba  [active]  (50M)
    104391   6187041    2  freebsd  (3.0G)
   6291432  10485784       - free -  (5.0G)

We can mount the root partition of the SD card and modify or add any files we wish:

# mount /dev/md0s2a mnt
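
For example, to drop the tutorial files into root's home directory on the card (the local directory name here is made up):

# cp -R tutorial-files/ mnt/root/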

When we are done changing things we have to disconnect the memory disk:

# sudo mdconfig -d -u md0

Finally we can copy the memory disk to a real sd card using dd:

# sudo dd if=nanopi.img of=/dev/da0 bs=1m

August 31, 2019 12:00 AM

August 14, 2019

Tom Jones


My FreeBSD Development Setup

I do my FreeBSD development using git, tmux, vim and cscope.

I keep a FreeBSD fork on my github: I have forked https://github.com/freebsd/freebsd to https://github.com/adventureloop/freebsd

On my fork I have the freebsd/freebsd repo set as an upstream

$ git remote -v
origin  git@github.com:adventureloop/freebsd.git (fetch)
origin  git@github.com:adventureloop/freebsd.git (push)
upstream        https://github.com/freebsd/freebsd.git (fetch)
upstream        https://github.com/freebsd/freebsd.git (push)

See this article for information on setting this up: https://help.github.com/en/articles/configuring-a-remote-for-a-fork

I do all work on branches using worktrees, keeping the master branch clean.

Periodically I sync the master branch with the FreeBSD upstream:

$ cd ~/code/freebsd-dev/freebsd-git
$ git checkout master
$ git fetch upstream
$ git merge upstream/master
$ git push

I have a development setup based on Ian Lepore's arm setup documented on the FreeBSD wiki: https://wiki.freebsd.org/FreeBSD/arm/crossbuild

I have a freebsd-dev directory in my code directory. In there I keep a copy of FreeBSD in freebsd-git, an obj directory for build output and a projects directory for in-progress code.

$ tree -L2
.
├── diffs
│   ├── D15222.diff
│   └── old
├── dstdir
│   ├── boot
│   ├── METALOG
│   └── usr
├── freebsd-git
│   ├── bin
│   ├── sbin
...
│   └── usr.sbin
├── obj
│   └── usr
├── projects
│   ├── atomicstats
│   ├── axp288
│   ├── bugsquash
│   ├── byebyejumbograms
...

I use git worktrees for ongoing projects. git worktrees allow you to have an additional working copy of the repository, checked out on its own branch in a separate directory.

When starting a new project I do something like:

$ cd ~/code/freebsd-dev/freebsd-git
$ git worktree add thj/newdevelopment ../projects/newdevelopment master
$ cd ../projects/newdevelopment

Once the worktree is set up I launch a tmux session in the projects directory. Each random idea or itch I have, if there is enough there, ends up with a project worktree and a tmux session.

tmux allows me to have many windows in a session; I have a serious tmux problem. Right now I have 11 sessions with 42 windows across them. This is a good indicator of my focus level.

I do FreeBSD development with cscope and vim. With tmux splits I normally have an open file, and I use cscope instances in other tmux windows to search for things I need in the tree.

I do testing in a bhyve vm and leave the serial port in a tmux window somewhere. I follow the setup in the FreeBSD handbook and back each vm with a zfs dataset.

I do FreeBSD kernel builds using a command like:

env MAKEOBJDIRPREFIX=/home/tom/code/freebsd-dev/obj make -j 44 buildkernel \
        -DKERNFAST installkernel \
        -DNO_ROOT DESTDIR=/home/tom/code/freebsd-dev/dstdir

I then ship kernels to the test vm with scp. jhb@ has a nicer method using the bhyve-loader, but I have yet to try it.
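
With -DNO_ROOT and DESTDIR set, the new kernel lands under dstdir/boot/kernel, so shipping it and booting it once looks roughly like this (the VM hostname is made up):

$ scp -r /home/tom/code/freebsd-dev/dstdir/boot/kernel root@testvm:/boot/kernel.test
$ ssh root@testvm nextboot -k kernel.test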

When changes are maturing I create reviews for them using arcanist; manu@ has a good article on doing this.

August 14, 2019 12:00 AM

May 08, 2019

Iain R. Learmonth


An OpenBSD Mail Server on Digital Ocean

I never finished this blog post, but I'm hitting publish anyway, maybe something in here is useful. If you were looking for a complete guide then this isn't going to be what you wanted.

I am one person, but I have many roles. I’m starting to find that I’m getting too much information from too many directions. I’m also finding that I don’t have access to certain information when I need it due to operational security issues. The biggest problem that I’m having with email right now is that 99.9% of my email is going into a single inbox.

This presents a number of problems:

  • Crack the email, crack it all. Access to a single inbox allows password resets on many of my accounts. There’s no separation between notifications/alerts and password reset access either.
  • There’s a lot of history. Could you describe every email in your inbox and folders? Is there anything dangerous in there? People are not very good at information hygiene when it comes to email unfortunately. When I’m travelling, or out and about, I don’t need to have the full history of every email I’ve ever received with me. That’s not the sort of thing I want falling into the wrong hands (stolen by thieves, seized by border staff, etc.).
  • There’s just too much going on. Maybe I used to be better at this, or maybe there really is more going on now, but context switches are painful. If I have a whole stack in my head while I’m working on some code, and then I get notifications going off about something unrelated, then some of that stack gets smashed and I have to rebuild it. I can’t just close my email client because I’m referring to relevant messages from a mailing list or team member.

There are other problems too but these are the main ones. Until now I’ve relied on my FastMail family plan for my day-to-day email, but I only get one inbox. The pricing is per-inbox, not per-user. The pricing model is set up to force me into centralising my entire history, password reset capabilities and notifications through a single login.

In fact it’s even got worse than that, as when I signed up for FastMail they gave you a “master user” account with the family plan that you could use for “admin” powers relating to the plan, including modifying DNS records and billing information, that had a separate password. In the current plans that account is gone and the “admin” powers are overloaded on your inbox account.

I’ve tried shopping around for the commercial provider that offers the service that I want, but just like with my DNS provider, it doesn’t exist. So I’ll do it myself.

This is not my perfect mail server setup, this is a work-in-progress. I’m mostly documenting this so I can remember what I did.

This guide will use OpenBSD 6.5. If later versions of OpenBSD have been released since this was written you should refer to the changelogs and errata to see what may have changed.

To get started, you’re going to need to log in to Digital Ocean and create a $5/mo droplet with “FreeBSD 11.2 x64 ufs”. FreeBSD VMs require an SSH key for login so make sure you’ve got that ready.

Get logged into the instance via SSH. Also load up the “console” by going to Access → Launch Console in the droplet page. To install OpenBSD we’re going to write a filesystem with the ramdisk installer over the virtual disk and then boot from it. We use the miniroot image because by the time we get to installing the sets, the disk will have a new disklabel anyway.

First we will write down the IPv4 and IPv6 addresses that have been assigned by Digital Ocean to the instance. Make sure to get the addresses, the netmasks/prefix lengths, and the default routes. Also get the local DNS resolvers in case you want to use them as forwarders.

# get the ipv4 and ipv6 adresses and netmasks
ifconfig
# get the default routes
netstat -rn
# get the local resolvers
cat /etc/resolv.conf

With this information written down safely:

# download the openbsd 6.5 miniroot filesystem
fetch https://cdn.openbsd.org/pub/OpenBSD/6.5/amd64/miniroot65.fs
# the computer is upset when you write over its memory with openbsd
# installers
swapoff /dev/gpt/swapfs
# the computer gets upset when you try to write over the partition table of
# the running system
sysctl kern.geom.debugflags=0x10
# write the miniroot filesystem to the virtual disk
dd if=miniroot65.fs of=/dev/vtbd0 bs=512k
# reboot into the openbsd installer
shutdown -r now

Now look to the console, which may have turned to a blank screen. Refresh the browser window if it has, and you’ll see that it’s booting into the installer. The OpenBSD installer runs from memory so it won’t be upset when we write over the disk it was loaded from later.

The installer will ask you to answer some questions. They are all fairly self-explanatory. Note that your network interface will appear as vio0, and your virtual disk will appear as sd0. Use the network addresses you wrote down earlier.

I used a temporary root password during the installation and then loaded in an SSH key after the first boot before disabling SSH root password login.
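
That last step is a one-line change to sshd_config followed by a restart:

# set "PermitRootLogin prohibit-password" in /etc/ssh/sshd_config, then:
rcctl restart sshd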

selector1._domainkey IN TXT "k=rsa; t=s; p=MHwwDQYJK ... OprwIDAQAB"

May 08, 2019 12:00 PM

February 12, 2019

Ana Custura

LibreELEC Streaming Camera

This is a blogpost about a Raspberry Pi camera setup on LibreELEC. It includes a step by step tutorial.

If you have the very specific use case of needing to run LibreELEC on a Raspberry Pi to play media while also streaming a Pi camera, look no further. This setup has been running for the past year or so in my house. I’ve rebuilt the Docker container for it recently, which is the perfect excuse for a blogpost.

LibreELEC is a Linux distribution for running the media center software Kodi. It will stream films, music and TV shows from your NAS to your TV and has several awesome add-ons. LibreELEC has no add-on for streaming from a camera, perhaps because the drivers needed for USB cameras are not even compiled into the image for the RPi2. I’ve tried this setup using a USB camera and had to build a custom LibreELEC image.

However, there is support for streaming from RPi cameras, and there is a Docker add-on which can be used to run a simple camera streaming app in a container. The app I’ve chosen, after some trial and error whose gruelling details I will spare you, is mjpeg-streamer.

The container uses the current Raspbian stable image, stretch, on top of which it builds mjpeg-streamer. It then uses an entrypoint script to capture a still image per second from the Pi camera, which the software then turns into a www stream on port 8080.

You can get the container here. For a step by step tutorial on how to deploy this, read on.


Step by step tutorial

Ingredients:
  • Raspberry Pi v2 or v3
  • Micro SD card (>2GB) and SD card adapter
  • Compatible Raspberry Pi camera module
  • Monitor and keyboard
  • Wired connection or wireless dongle if using a RPi 2

Step 1: Download

Download the official LibreELEC image for the Raspberry Pi from the official website. Scroll to ‘Direct Downloads’, select ‘Raspberry Pi v2 and Raspberry Pi v3’ and click the filename link on the page. The filename looks like LibreELEC-RPi2.arm-8.2.5.img.gz at the moment of writing this blog post, yours will most likely be newer.

Step 2: Put image on SD card

I use an SD card adapter and the built-in card reader in my laptop. On Linux, after plugging the SD card in, the command sudo dmesg should display the name of the newly inserted device, similar to the following:

ana@sinopia:~|⇒  sudo dmesg
[...]
[70540.504869] mmc0: new high speed SDHC card at address aaaa
[70540.585060] mmcblk0: mmc0:aaaa SB16G 14.8 GiB (ro)
[70540.590225] mmcblk0: p1

The identifier for SD cards on Linux usually looks like mmcblk. Be careful to copy your image to the right device, and not onto your local hard drive:

ana@cyan:~|⇒  gunzip LibreELEC-RPi2.arm-8.2.5.img.gz
ana@cyan:~|⇒  sudo dd if=LibreELEC-RPi2.arm-8.2.5.img of=/dev/mmcblk0
549+0 records in
549+0 records out
575668224 bytes (576 MB, 549 MiB) copied, 50.8254 s, 11.3 MB/s

This may take a minute or two. Once dd has finished, load up your MicroSD card in your Raspberry Pi.

Step 3: Configure LibreELEC

Plug a monitor and a keyboard in and turn on your RPi. LibreELEC starts an autoconfiguration wizard when it first boots. This will guide you through setting up location, timezone and most importantly, a network connection needed for the next step. Hit ‘Next’ and follow the wizard.

  • Once you connect to a network, the interface will display your IP address. Make sure to take a note of it.
  • Under the initial ‘Sharing and Remote Access’ screen make sure you enable the SSH service.

For security purposes, we will disable SSH after setting up docker, as the root password is hardcoded in the image.

Step 4: Download the Docker Add-on

From the left hand-side menu, navigate to Add-ons -> Install from repository -> LibreELEC Add-ons -> Services -> Docker and then select Install.

The interface will notify you once this has installed.

Step 5: Build the Docker container

In order to build and run the docker container, you need to ssh as root into your Pi, using the IP address from Step 3. In this example, my IP is 192.168.0.156.

ana@cyan:~|⇒  ssh root@192.168.0.156
root@192.168.0.156's password

At the password prompt, type ‘libreelec’, the LibreELEC default and hardcoded password.

Next, make a directory and download the Dockerfile and starting script.

LibreELEC:~ # mkdir mpeg
LibreELEC:~ # cd mpeg
LibreELEC:~/mpeg # wget https://raw.githubusercontent.com/ana-cc/dockerstuffs/master/mjpeg-streamer-rpi/Dockerfile
Connecting to raw.githubusercontent.com (151.101.60.133:443)
Dockerfile           100% |************************************************************************************************************************|   917   0:00:00 ETA
LibreELEC:~/mpeg # wget https://raw.githubusercontent.com/ana-cc/dockerstuffs/master/mjpeg-streamer-rpi/stream_from_pi.sh
Connecting to github.com (140.82.118.3:443)
stream_from_pi.sh    100% |************************************************************************************************************************| 55534   0:00:00 ETA

Inspect and (optionally) edit the stream script. The script uses the command raspistill to capture an image per second from the Pi camera, which mjpeg-streamer then turns into a www stream on port 8080. You can vary the resolution, quality and how far apart the images are taken with the -w, -h and sleep values. Here is the script by default:

until raspistill --nopreview -vf -hf -w 640 -h 480 -q 5 -o /tmp/stream/pic.jpg -tl 100 -t 9999999 -th 0:0:0 & LD_LIBRARY_PATH=/usr/local/lib mjpg_streamer -i "input_file.so -f /tmp/stream -n pic.jpg" -o "output_http.so -w /usr/local/www"; do
	sleep 1
done

Finally, build the container and give it an easy to remember tag:

docker build -t mjpeg_container .

This should take a while, at the end your output should be similar to:

[...]
Step 12/13 : COPY stream_from_pi.sh /
 ---> 2299b11e7696
Removing intermediate container de4317561efe
Step 13/13 : ENTRYPOINT /bin/bash /stream_from_pi.sh
 ---> Running in deff3a4ebe15
 ---> b5f669ccd45e
Removing intermediate container deff3a4ebe15
Successfully built b5f669ccd45e

Step 6: Run the docker container

Now for the fun part: running and testing the newly-built container.

We want the Docker process to map the default streaming port to port 8081 of our RPi host (Kodi’s own web interface occupies port 8080 by default):

 docker run -d --privileged -p 8081:8080 mjpeg_container

And now the LibreELEC box will present the stream on port 8081. In a browser of choice, navigate to your IP address on port 8081, for example http://192.168.0.156:8081.

You should see a welcome page like the following:

Your stream can be viewed under the stream tab.
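
You can also grab a single frame from a terminal to check it is alive; if I remember mjpeg-streamer's URLs correctly, the snapshot action looks like this (IP as in the earlier example):

curl -s "http://192.168.0.156:8081/?action=snapshot" -o test.jpg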

Step 7: Clean up

  • Worth adding an @reboot crontab to run the docker command if your box is subject to, uh, a lot of reboots.

Before ending your ssh session:

LibreELEC:~ # crontab -e

Make sure the file contains the following line:

@reboot docker run -d --privileged -p 8081:8080 mjpeg_container

This tells the box to run the docker container at every reboot so that you don’t have to start it manually.

  • Finally, disable SSH by going to Settings -> Services in your Kodi navigation.

This concludes the tutorial. Happy streaming!

February 12, 2019 12:00 AM


Last updated:
June 04, 2020 12:06 AM
All times are UTC.

Powered by:
Planet
