Tuesday, July 28, 2020

Install Python 3 on the Raspberry Pi and make it the system default

As of this blog post, the Raspberry Pi still ships with Python 2.7 as the system default. There's also no really easy way to install Python 3.8 (the latest release as of this post) as the system default.

I used a system-wide pyenv install so that the global Python version can be upgraded or changed easily later.

The script below will automatically download, compile, and set Python 3.8.5 as the new system default for your Raspberry Pi. No prompts necessary.

You can run the gist to automatically do this:

curl 'https://gist.githubusercontent.com/stephen-mw/341c8194aefb694939b366204156037c/raw/fe2fc6060792dbe3e98bc3fc7830229e0d657bdf/install_python38_on_py.sh' | sudo bash

Or just copy the script below into a file and execute it:

#!/usr/bin/env bash
set -euo pipefail

# This script downloads, compiles, and installs python3.8 as the system default

export VERSION=3.8.5

apt update
apt install -y       \
    build-essential  \
    libbz2-dev       \
    libffi-dev       \
    liblzma-dev      \
    libncurses5-dev  \
    libncursesw5-dev \
    libreadline-dev  \
    libsqlite3-dev   \
    libssl-dev       \
    llvm             \
    python-openssl   \
    python-pip       \
    tk-dev           \
    xz-utils         \
    zlib1g-dev

export OLD_PIP=$(which pip)
export NEW_PIP=${OLD_PIP}2.7

mv ${OLD_PIP} ${NEW_PIP}
ln -s ${NEW_PIP} ${OLD_PIP}

export PYENV_ROOT=/etc/pyenv
export PATH="$PYENV_ROOT/bin:$PATH"

if [[ -d "${PYENV_ROOT}" ]]; then
    rm -rfv -- "${PYENV_ROOT}"
fi

git clone https://github.com/pyenv/pyenv.git ${PYENV_ROOT}

# The build can't fit in /tmp because it's a RAM-backed tmpfs
export TMPDIR=${PYENV_ROOT}/tmp
mkdir -p ${TMPDIR}

# The latest version as of now
echo "Setting system python to ${VERSION}. This may take several minutes..."
CFLAGS="-O2" TMPDIR=${TMPDIR} pyenv install ${VERSION}

rm -rf -- "${TMPDIR}"

# Set the global version
pyenv global ${VERSION}

# Latest
update-alternatives --install $(which python) python /etc/pyenv/versions/${VERSION}/bin/python 1
update-alternatives --install $(which pip) pip /etc/pyenv/versions/${VERSION}/bin/pip 1

# 2.7
update-alternatives --install $(which python) python /usr/bin/python2.7 2
update-alternatives --install $(which pip) pip /usr/bin/pip2.7 2

# Lastly, set the system python to our new version
update-alternatives --set python /etc/pyenv/versions/${VERSION}/bin/python
update-alternatives --set pip /etc/pyenv/versions/${VERSION}/bin/pip

pip install -U pip

cat <<"HELP"

Python${VERSION} is now your system default.

If you want to roll your system back to 2.7, simply run:

    sudo update-alternatives --set python /usr/bin/python2.7
    sudo update-alternatives --set pip /usr/bin/pip2.7

HELP
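
Because everything lives in a system-wide pyenv, moving to a newer interpreter later is just another install-and-switch. Here's a rough sketch of what that would look like (run as root, like the script above; the newer version number here is hypothetical):

export PYENV_ROOT=/etc/pyenv
export PATH="$PYENV_ROOT/bin:$PATH"
NEW_VERSION=3.8.6  # hypothetical newer release

# /tmp is too small for the build, so reuse a directory on disk
mkdir -p ${PYENV_ROOT}/tmp

TMPDIR=${PYENV_ROOT}/tmp pyenv install ${NEW_VERSION}
pyenv global ${NEW_VERSION}

# Point the update-alternatives entry at the new interpreter
update-alternatives --install /usr/bin/python python ${PYENV_ROOT}/versions/${NEW_VERSION}/bin/python 1
update-alternatives --set python ${PYENV_ROOT}/versions/${NEW_VERSION}/bin/python

python --version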

Friday, August 17, 2018

Shinobi on the Raspberry Pi 3 B+


The Raspberry Pi is a $35 single-board computer that runs on a 64-bit ARM processor. The Pi 3 B+ uses the Broadcom BCM2837B0 SoC, whose VideoCore IV GPU provides hardware-accelerated h264 encoding/decoding, making it a great choice for running a small Shinobi setup with network h264 cameras.
In fact, there's even a wireless $10 version called the Raspberry Pi Zero W!
Shinobi is powerful CCTV software that forms the "brain" of your camera setup, letting you configure how, what, and when your cameras record.
If you've ever futzed around with the CCTV software that comes preinstalled on NVRs, you'll quickly realize how awful it is. I fully replaced my CCTV NVR with a PoE switch attached to a Raspberry Pi.
Together with the Raspberry Pi 3, you have an affordable, reliable, and power-efficient way to manage your CCTV setup. This "guide" will walk you through my setup and provide some tips if you want to use the RPI for this purpose.

The setup

My setup includes 4 cameras: 2x 1080p@30, and 2x 720p@30. I have them monitoring for motion and recording. The RPI is able to handle this workload with surprisingly little power usage.
In order for this setup to work, there are a few conditions that must be met:
  1. No transcoding. It's simply too CPU intensive, even with the hardware-accelerated h264_omx encoder. For any recording/streaming you'll need to set the video codec to "copy" (or possibly the jpeg API).
  2. Depending on the camera quality settings, you'll need to bump the GPU memory share up to 256MB (see the snippet after this list). Even to me this seemed too high, but without it I was getting mmal decoding errors with more than two 1080p@30 NVR cameras.
  3. For any decoding you'll need to use the hardware-accelerated h264_mmal codec. Without specifying this codec there will be too much CPU usage. Using MMAL ensures that the heavy lifting of decoding the h264 stream is done on the GPU.
  4. A real 2.5A power supply. Your RPI needs all of it.
  5. (Optional) Active cooling. My RPI case has a small fan hooked up to the 3.3V pin, and I have small heatsinks attached to the SoC.
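
For reference, here's one way to bump the GPU memory split (a small sketch; you can also set this through raspi-config):

# Give the GPU 256MB of RAM; a reboot is required for it to take effect
echo 'gpu_mem=256' | sudo tee -a /boot/config.txt
sudo reboot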

Configuration

The default ffmpeg binary installed from apt includes all of the necessary codecs to use the RPI's GPU for hardware acceleration. There's no need to do any compiling of ffmpeg; just get it from apt and you're done.
Despite what you may find online, there is no need to recompile ffmpeg on the Raspberry Pi to do hardware-accelerated h264 encoding/decoding! It's amazing how out-of-date a lot of these guides are.
$ ffmpeg -version
ffmpeg version 3.2.10-1~deb9u1+rpt2 Copyright (c) 2000-2018 the FFmpeg developers

# The encoder to use (if any -- see comment about "copy")
$ ffmpeg -encoders | grep omx
 V..... h264_omx             OpenMAX IL H.264 video encoder (codec h264)

# The decoder to use
$ ffmpeg -decoders | grep mmal
 V..... h264_mmal            h264 (mmal) (codec h264)
 V..... mpeg2_mmal           mpeg2 (mmal) (codec mpeg2video)
 V..... mpeg4_mmal           mpeg4 (mmal) (codec mpeg4)
 V..... vc1_mmal             vc1 (mmal) (codec vc1)

Configuring Shinobi

I made a few small tweaks to Shinobi to expose the Raspberry Pi's native decoding methods. The changes are in the dev branch now but should be merged into master soon. 
You can check out the repo here: https://gitlab.com/Shinobi-Systems/Shinobi. Depending on when you read this blog you may be able to checkout master.
There are several guides on the site for getting the software installed.
To expose the hardware acceleration method, select yes in the hardware acceleration dropdown. Leave the HWAccel option as auto and select H.264 (Raspberry Pi) as the decoder.
This will use the hardware-accelerated h264_mmal codec.
For streaming/output I highly recommend you set it to copy to save yourself the CPU cycles of transcoding. If you need to encode in h264, make sure to use the h264_omx codec so that it's hardware accelerated.
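To make the difference concrete, here's roughly what those two choices boil down to as plain ffmpeg commands (an illustrative sketch only; the RTSP URL is made up, and Shinobi builds its own command lines):

# "copy": the h264 stream is written out bit-for-bit, no transcoding
ffmpeg -i rtsp://camera.local/stream -c:v copy -c:a copy -t 60 clip.mp4

# If you must re-encode, keep both decode and encode on the GPU
ffmpeg -c:v h264_mmal -i rtsp://camera.local/stream -c:v h264_omx -b:v 2M -t 60 reencoded.mp4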
Another option is simply to set up a cron job that does the transcoding in the background at a low CPU priority.
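Something like this in a crontab would do it (a sketch; the paths are hypothetical):

# Re-encode footage at 3am at low CPU and I/O priority
0 3 * * * nice -n 19 ionice -c3 ffmpeg -y -i /cctv/raw.mp4 -c:v h264_omx /cctv/compressed.mp4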
That's about it as far as configurations go.

Storage

You basically have three options: root storage, attached storage, and network-attached storage.
The Raspberry Pi's main storage is its micro SD card. You can use this root storage as your primary storage if you're careful about space and set the appropriate video expirations.
You can also attach storage via the USB interface. Be careful about the additional power draw if the drive is unpowered.
The last option is network-attached storage, such as a NAS server. This method is the most flexible if the hardware is available to you.
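Whichever external option you pick, it's just a matter of mounting the storage and pointing Shinobi's video directory at the mount. A rough sketch (the device name and paths here are hypothetical; check lsblk for yours):

# Attached USB drive
sudo mkdir -p /mnt/cctv
sudo mount /dev/sda1 /mnt/cctv

# Or an NFS share exported by a NAS
sudo mount -t nfs nas.local:/export/cctv /mnt/cctv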

Troubleshooting

mmal encoding errors

The most common problem I encountered was mmal encoding errors. If your cameras are restarting because of these errors, bump up the memory available to the GPU (see the gpu_mem setting above). You may also need to downscale the quality/bitrate of your cameras.

Unstable Pi / reboots

Make sure you're using a 2.5A (or above) power supply. Most power supplies do not supply 2.5A. Usually ones that are marketed for iPads or tablets will supply the amperage, but you must check the back of your adapter to see what its rating is.
Depending on your setup (and how hard you're pushing your Pi), you may need active cooling. You can also try adding a heatsink to the CPU/GPU SoC.

Slow / sluggish performance

If you're using the h264_mmal codec with a 1080p@30fps camera, it takes about 100% of a single CPU core. If you're seeing higher CPU usage (such as 200-250%), you're probably not using the codec, or something else is misconfigured.
You can check by running ps aux | grep ffmpeg and taking note of the option just before the -i rtsp://... argument. It should say -c:v h264_mmal.
$ sudo ps aux | grep ffmpeg
ffmpeg ... -c:v h264_mmal -i rtsp://....

Monday, June 11, 2018

Working with nulls in SQL databases in Golang

Go is statically typed and (as of this writing) has no generics, so it's a little tricky working with databases, where null values abound.

The database/sql package includes structs such as sql.NullString, but it's confusing for users to get a JSON response like the following, which is how the value is represented internally:

{
 "foo": {
  "String": "bar",
  "Valid": true
 }
}

Below is a working Go HTTP server that handles null strings. It reads and writes to an in-memory sqlite3 database. When rows contain null values, they'll be properly rendered as null in the JSON output.

The trick to making this work is the null.v3 library, which implements the JSON marshal and unmarshal methods necessary to return a literal null when querying the database.


package main

import (
 "database/sql"
 "encoding/json"
 "log"
 "net/http"
 "strconv"

 _ "github.com/mattn/go-sqlite3"
 "gopkg.in/guregu/null.v3"
)

// DB is the database connector
var DB *sql.DB

// Person represents a single row in the database. NickName uses the
// null.String type because the column is optional.
type Person struct {
 Name     string      `json:"name"`
 Age      int         `json:"age"`
 NickName null.String `json:"nickname"` // Optional
}

// InsertPerson adds a person to the database
func InsertPerson(p Person) {
 cnx, _ := DB.Prepare(`
          INSERT INTO people (name, age, nickname) VALUES (?, ?, ?)`)
 defer cnx.Close()
  
 log.Printf("Adding person: %v\n", p)
 cnx.Exec(p.Name, p.Age, p.NickName)
}

// GetPeople will return up to n people from the database
func GetPeople(n int) []Person {
 people := make([]Person, 0)
 rows, _ := DB.Query(`SELECT name, age, nickname FROM people LIMIT ?`, n)
 defer rows.Close()
 for rows.Next() {
  p := new(Person)
  rows.Scan(&p.Name, &p.Age, &p.NickName)
  people = append(people, *p)
 }
 return people
}

func addPersonRouter(w http.ResponseWriter, r *http.Request) {
 r.ParseForm()

 age, _ := strconv.Atoi(r.FormValue("age"))

 // Get nickname from the form and create a new null.String. If the string
 // is empty, it will be stored as null in the database rather than as an
 // empty string.
 nick := r.FormValue("nickname")
 nickName := null.NewString(
  nick, nick != "")

 p := Person{
  Name:     r.FormValue("name"),
  Age:      age,
  NickName: nickName,
 }

 InsertPerson(p)

 w.WriteHeader(http.StatusCreated)
}

func getPeopleRouter(w http.ResponseWriter, r *http.Request) {
 r.ParseForm()
 limit, _ := strconv.Atoi(r.FormValue("limit"))
 people := GetPeople(limit)

 peopleJSON, _ := json.Marshal(people)
 w.Header().Set("Content-Type", "application/json")
 w.Write(peopleJSON)
}

// CreateTable is a helper function to create the table for the first run
func CreateTable() error {

 createSQL := `
    CREATE TABLE people (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        age INTEGER NOT NULL,
        nickname TEXT
    );`
  
 statement, err := DB.Prepare(createSQL)
 if err != nil {
  return err
 }

 statement.Exec()
 statement.Close()

 return nil
}

func main() {
 var err error
 DB, err = sql.Open("sqlite3", ":memory:")
 if err != nil {
  log.Fatal(err)
 }

 err = CreateTable()
 if err != nil {
  log.Fatal(err)
 }

 http.HandleFunc("/add", addPersonRouter)
 http.HandleFunc("/list", getPeopleRouter)
 log.Fatal(http.ListenAndServe(":8080", nil))
}

Here are some examples of interacting with the server while it's running:

$ curl -XPOST "localhost:8080/add?name=Joseph&age=25&nickname=Joe"
$ curl -XPOST "localhost:8080/add?name=Stephen&age=33"
$ curl -s 'localhost:8080/list?limit=2' | jq
[
  {
    "id": "Joseph",
    "age": 25,
    "nickname": "Joe"
  },
  {
    "id": "Stephen",
    "age": 33,
    "nickname": null
  }
]

There you go. Now you can read and write valid or invalid values to your database.

Friday, December 18, 2015

Git cheat sheet

Creating a feature branch

Creating a feature branch is a good way to keep your testing out of the main branch. You can hack away at your code without disturbing the general workflow.
The following commands will create a feature branch that tracks master. Pushing it with the -u origin flag will automatically make your local version of the branch track the remote version:
     $ git checkout master && git pull
     $ git checkout -t -b "some-feature-branch-name"
     $ git push -u origin "some-feature-branch-name"
Also, since this is a feature branch, you're free to do force pushes and clean up your git history.
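For example, when you do need to force push after rewriting history, --force-with-lease is the safer flag because it refuses to overwrite commits someone else pushed in the meantime:
     $ git push --force-with-lease origin some-feature-branch-name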
You can see what your branches are tracking and what their origin is with the following command:
git branch -vv
  1432-rb-force-safety-switch 62d5af7 [origin/1432-rb-force-safety-switch] Refuse rb if path contains keys.
* develop                     5f56434 [origin/develop: ahead 233, behind 2] Merge branch 'release-1.8.12'
And to see where "origin" points to:
$ git remote -v
origin  git@github.com:stephen-mw/aws-cli.git (fetch)
origin  git@github.com:stephen-mw/aws-cli.git (push)
upstream    git@github.com:aws/aws-cli.git (fetch)
upstream    git@github.com:aws/aws-cli.git (push)
In the above example, I can push to either "upstream" (which in this case is the official aws repo), or "origin", which is my forked repo.

Creating a Git Tag

A tag is a point-in-time reference to a specific commit. Tags are immutable snapshots of the repo that can't be altered once they are created (though they can be deleted and recreated). They are useful for tasks such as deployments or ensuring your work won't be altered by others.
    # This will create a tag called "mysql-1380235429"
    $ git tag -a -m "Deploying mysql update" mysql-`date +%s`
    $ git push --tags
Now the tag can be checked out in the usual way.
    $ git checkout mysql-1380235429
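And since tags can be deleted and recreated, cleaning one up looks like this (reusing the tag name from the example above):
    $ git tag -d mysql-1380235429
    $ git push origin :refs/tags/mysql-1380235429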

Preparing Your Branch for a Pull Request

A pull request should be done before anything is merged into the master branch.
Before asking someone to review your work, you should take a few minutes and make sure it's really finished. You'd be surprised what things can slip through.

Diff Against Master

The first thing you want to do is diff your branch against the master branch. This allows you to catch simple bugs before pushing anything upstream.
git diff master
This will tell you exactly what's different between your branch and the master branch. It will also highlight things such as trailing whitespace.

Rebasing your branch against master (or a different branch)

You can think of rebasing as doing the following actions:
  1. Take all of your recent changes and stash them away.
  2. Pull down the most recent changes from a different branch.
  3. Attempt to apply your stashed changes on top of the new changes, one at a time.
Here's how to rebase your branch against master:
git checkout master
git pull
git checkout my_feature_branch
git rebase master
git push origin +my_feature_branch
Many times there are conflicts in this process if there were multiple changes to the same file. The way to fix a conflict is to resolve it within the file during the rebase and add the file again:
(fix the conflicts within the file)
git add some_conflicted_file
git rebase --continue
If you think there's a problem then you can abort the rebase with --abort.

Squashing Commits

It's important that your branch is tidy because it makes rolling back bad changes a lot easier. You can squash all of your commits into one or two commits using the rebase command.
First, run git log and find the commit that you want to squash into. Usually this is the first commit on your feature branch.
Next, interactively rebase against that commit:
git rebase --interactive shdf8032hfohsdofhsdohf80h^
This will rebase every commit after the sha.
Follow the instructions for rebasing. Usually you just want to change the "pick" to a "squash" or "s". In the below example, all commits will be squashed into the top commit (which is the oldest):
pick e775ebe Refactor regex to avoid duplication
s 3bb23fd Add support for spaces within unquoted values
s e36d71a Support '-' char in a key name
s 16eda81 Remove unused import in test module
s 403c7ec Add bugfixes to changelog
You will have a chance to rewrite the commit message to better include all of the changes.
After you're finished, do a diff against master again just to make sure nothing went wrong. Then force push your branch to origin.
# Make sure it still looks good
git diff master
git push origin +my_feature_branch

Send Out the PR

Go to github.com and find your branch. Compare it with master and send out the PR. You should ask someone specifically to review your changes and include the following information:
  • Why did you change this?
  • What have you changed?
  • How have you tested these changes?
  • What are the risks involved with these changes?

Oh no! I rebased my branch and lost a lot of history! I'm doomed!

Fear not. With git nothing is ever truly lost. If you made some mistake and you're currently in the rebase process then you can abort it with git rebase --abort. If you've already committed, pushed, etc., then you can use the fantastic git reflog tool to go back to a different commit.

Using git reflog

In the following example, I'm going to force my branch back to the moment before I rebased and broke everything:
First, find the commit that came right before your rebase:
git reflog
5f56434 HEAD@{0}: pull upstream master: Fast-forward
ed610bc HEAD@{1}: checkout: moving from 1432-rb-force-safety-switch to develop
62d5af7 HEAD@{2}: rebase -i (finish): returning to refs/heads/1432-rb-force-safety-switch
62d5af7 HEAD@{3}: rebase -i (squash): Refuse rb if path contains keys.
781b4da HEAD@{4}: rebase -i (start): checkout 781b4da0dcc65736297464dd73da442daad4cf2c^
4350e25 HEAD@{5}: commit: Use two vars for readability. <---- ding ding ding
781b4da HEAD@{6}: checkout: moving from ed610bc9d38244feeaf0b640781da8ab01808f4e to 1432-rb-force-safety-switch
Next, we'll check out that sha and then force push it:
git checkout 4350e25
git log # make sure it's what you want
git push origin +HEAD:some_branch

Saturday, December 12, 2015

Using ssh-import-id to manage authorized keys

ssh-import-id

While poking around in my ~/.ssh directory (in order to inspect and harden some of my SSH configurations -- more on that later), I noticed a file that I have never seen before:
ssh_import_id
I was surprised to see this file, especially in a directory related to openssh. Opening the file I saw this:
{
 "_comment_": "This file is JSON syntax and will be loaded by ssh-import-id to obtain the URL string, which defaults to launchpad.net.  The following URL *must* be an https address with a valid, signed certificate!!!  %s is the variable that will be filled by the ssh-import-id utility.",
 "URL": "https://launchpad.net/~%s/+sshkeys"
}
ssh-import-id is a utility included with Ubuntu 14.04+ that, according to the man page "will securely contact a public key server and retrieve one or more user's public keys". In other words it's a way to manage your authorized_keys file via an external API.
You have two options: launchpad.net's user directory or github. Running the utility will fetch and update the authorized_keys file based on the remote API.
For example, the following command will pull down my authorized keys from github and update the file /home/stephen/.ssh/authorized_keys (since that's the user running the command):
stephen@cato:/etc/ssh$ ssh-import-id gh:stephen-mw
2015-11-30 21:44:04,813 INFO Already authorized ['4096', 'SHA256:3bLv3IXbSzhQpCnchqQprIRHXWPoI+PPW4xwguR6ktE', 'stephen-mw@github/10248951', '(RSA)']
2015-11-30 21:44:04,817 INFO Already authorized ['4096', 'SHA256:5ZtG8hD7l9+yU7I1S17FunmrPR5u6tEcRi0xa6wQGD4', 'stephen-mw@github/12837805', '(RSA)']
2015-11-30 21:44:04,817 INFO [2] SSH keys [Authorized]
The way it works is pretty simple. Github exposes an API for authorized keys. The utility simply makes a request to this endpoint and loads the output into the file. The utility is smart enough to know when keys change (that is, if you added all of your keys with ssh-import-id) and will keep things up-to-date.
By the way, did you know that github has an API for retrieving any user's public keys? If that weirds you out, remember that they're called public keys for a reason! Linus Torvalds' public key, for example, is a 2048-bit RSA key that anyone can fetch.
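You can try it yourself; github serves each user's keys as plain text at a predictable URL:

$ curl https://github.com/torvalds.keys
ssh-rsa AAAA...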
You can add something like this to your crontab to update your keys once a day at 4 AM, and then once again whenever the machine restarts. The second entry ensures that servers/hosts that have been turned off for a long time can be accessed immediately after they boot.
# Pull down my github keys and add them to my user
0 4 * * * ssh-import-id gh:stephen-mw
@reboot ssh-import-id gh:stephen-mw
I find this to be especially useful on small embedded computers, such as a raspberry pi. When the raspberry pi is started after a long period it will automatically pick up my newest keys.

Security

My first problem was a file appearing magically in my ~/.ssh/ directory. I consider this directory a sacred place and don't like uninvited files here. Apart from that, the application bills itself as "secure", so I took a look at the source. Mostly it looks fine, but there are some things I would like to see done differently:
  • Github usernames can change and that string is the only thing used to pull down the key. If you change your name you'll need to hunt down any instance of this program and update it. That's annoying with embedded systems, which is exactly the problem I'm hoping to solve with this application.
  • For SSL, the application uses Python's urllib and attempts to fall back to shelling out to wget. However, there's no guarantee that wget will honor https requests only. In fact this can be disabled via ~/.wgetrc. They're relying on wget's default behavior without being explicit.
  • It checks only if the SSL cert is valid, but doesn't try very hard to see how valid it is. I would have preferred to see it reject any TLS versions lower than 1.2 and only accept EV certificates, since both domains use EV and TLS 1.2.
The last issue worries me the most. Superfish, CNNIC, and eDellRoot all show that rogue certificate authorities are a real, not merely theoretical, problem.
But like most things in the world, it's a trade-off. If you find the convenience outweighs the security risk -- and I do -- then give ssh-import-id a try.