PowerShell for OS X and Linux

Microsoft released PowerShell for OS X and Linux. I tried a simple hello world on Ubuntu 16.04, and it ran without a problem. What interesting times we are living in. I hear that “Microsoft ♥ Linux.” 🙂

#!/usr/bin/env powershell

Write-Host "`nHello, World! `n"
$ ./hello-world.ps1 

Hello, World! 


get-powershell
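
For reference, installing the alpha build on Ubuntu 16.04 went roughly like the sketch below. The dependency package names and the .deb filename are from memory, so double-check them against the PowerShell GitHub releases page:

#!/usr/bin/env bash

# Sketch only: install a PowerShell alpha .deb downloaded from the GitHub releases page.
sudo apt-get install -y libunwind8 libicu55

# Substitute whatever the downloaded release file is actually called.
sudo dpkg -i powershell_6.0.0-alpha_amd64.deb

# Quick smoke test.
powershell -Command 'Write-Host "Hello, World!"'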

Pi Motion

I’ve been wanting to return to the motion detection setups I’ve created in the past [1, 2] on the Raspberry Pi and update them using some more recent development approaches. For example, before I used Wiring Pi’s gpio utility in a bash script and displayed the results in statically generated pages served by PHP’s built-in web server.

In this setup I am using the very capable Cylon.js framework for the server-sent events API and for controlling the PIR sensor with the hardware button. SQLite handles data storage, while Node.js and Express run the server side of the web application. Polymer, Web Components, and some custom design are used for the front end.
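
Because Cylon.js exposes its own HTTP API (the same one behind the Robeaux dashboard mentioned later), the wiring can also be poked at from the command line. A rough sketch, assuming the framework’s defaults of HTTPS on port 3000 with a self-signed certificate, and with the robot/device/event names left as placeholders to be read from the pi-motion config:

#!/usr/bin/env bash

# Sketch only: query the Cylon.js HTTP API started by pi-motion.
# -k allows the self-signed certificate; adjust host/port if the defaults were changed.

# List the configured robots, devices, and connections as JSON.
curl -k https://localhost:3000/api/robots

# Device events are published as server-sent events; -N keeps the stream open.
# ROBOT, DEVICE, and EVENT are placeholders -- take them from the pi-motion config.
curl -k -N "https://localhost:3000/api/robots/ROBOT/devices/DEVICE/events/EVENT"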

Here are the hardware parts used:

I used moldable wiring on the breadboard, attempting to keep it clean and portable. The wiring connected to the GPIO pins on the Raspberry Pi is encased in clear heat shrink and made into a sort of plug.

After everything is set up on the hardware end, the software side can be set up.

Install Dependencies:

#!/usr/bin/env bash

cd /opt
sudo curl -O https://iojs.org/dist/v2.0.1/iojs-v2.0.1-linux-armv7l.tar.gz
sudo tar -xzpvf iojs-v2.0.1-linux-armv7l.tar.gz
sudo ln -s /opt/iojs-v2.0.1-linux-armv7l/bin/node /usr/local/bin/node
sudo ln -s /opt/iojs-v2.0.1-linux-armv7l/bin/npm /usr/local/bin/npm

Install Pi-motion:

#!/usr/bin/env bash

cd ~
git clone https://github.com/kherrick/pi-motion
cd pi-motion
bin/init.sh

Run unit tests, and see a coverage report:

#!/usr/bin/env bash

bin/gulp test
bin/gulp test-coverage

Check out the config file and change the options if you want to access the web application from another browser (TV, phone, etc.). Next, serve the web app and the Cylon.js wiring app:

#!/usr/bin/env bash

bin/gulp serve

With just the defaults set, open a browser on the Pi and go to https://localhost:3000 to view the Robeaux dashboard built into the Cylon.js framework (be sure to allow the browser to view the content behind the self-signed certificate). Then browse to http://localhost to view the web application.

The main task runner installed is gulp, and it exposes several pre-configured tasks such as the following (example invocations come after the list):

  • serve (to start the Cylon.js and Express code)
  • init-database (to initialize a brand new database)
  • compile-js (to compile the front end code using webpack)
  • test (to run the mocha unit tests)
  • test-coverage (to get a unit test coverage report using istanbul)
  • lint (to lint the JavaScript using jshint)
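
Each of these is run through the bundled gulp binary, just like the serve and test tasks shown above. For example:

bin/gulp init-database
bin/gulp compile-js
bin/gulp lint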

Once everything is set up and running, the user can activate the PIR sensor. To do this in hardware, press the button on the breadboard. Using software, touch the hamburger menu, touch the “PIR Sensor” menu item, and then touch the “Toggle” button under the sensor indicator (large red circle).

After a forty-second warm-up time, the PIR sensor will begin sensing movement. When it activates, the sensor indicator will turn green and the movement data will be logged to the SQLite database.

To view the saved data touch the hamburger menu, touch the “Charts” menu item, select an appropriate time range (by hour), and then touch the “Update Chart” button.

HipChat bot on AWS

Over the last year or so I’ve spent some time testing out Amazon Web Services. While at first the myriad of options seemed a bit overwhelming, once I figured out what I wanted, it was fairly easy to navigate and manage the setup. In particular I used Elastic Compute Cloud with a Debian Wheezy image and stuck to the AWS Free Tier. This setup allowed me to run a t2.micro instance of Linux continuously (only rebooting it for necessary software updates) with the following basic specs:

  • CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  • RAM: A little over 512 MB
  • Architecture: x86_64

I was a little concerned while I used it that I would push the system out of free tier territory, but was pleasantly surprised not to see any charges to my account. The first few months were spent getting familiar with what AWS was all about, and the next few on hosting a few Node.js ideas.

The best test application I set up for it (and what I finished the free period with) was one based on good ol’ trusty PHP and the HipChat v2 API. As it turns out, I haven’t been using an RSS reader in a while (thanks, Google), but I still enjoy reading news. Instead of hitting a bunch of different pages to check and see the latest, I figured I would write a simple bot to collect the data for me and message a room in HipChat with the contents.

The project is called NewsToChat and installation was straightforward:

  • On the HipChat profile page:
    • Create an OAuth API token
  • On the AWS instance:

The basic idea is that there are scripts set up to run in cron, which map to the commands available in NewsToChat (a sample crontab sketch follows this list). These are:

  • pullnews
    • uses a few classes to pull from and format the identified news sources
    • makes a basic attempt to de-duplicate what was found
    • uses a database service to store the data
  • pushnews
    • pushes one article to the identified chat target
    • marks the article as expired
  • maintenance
    • performs maintenance on the pool of news articles in the database
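
To give an idea of how the pieces fit together, here is a minimal crontab sketch for those three commands. The entry point (newstochat.php), paths, and timings are assumptions for illustration, not what actually ran on the instance:

#!/usr/bin/env bash

# Hypothetical schedule only -- adjust the entry point and paths to the real install.
# Note: `crontab -` replaces the current user's crontab entirely.
crontab - <<'CRON'
*/15 * * * * php /home/admin/NewsToChat/newstochat.php pullnews    >> /var/log/newstochat.log 2>&1
*/5  * * * * php /home/admin/NewsToChat/newstochat.php pushnews    >> /var/log/newstochat.log 2>&1
0 3 * * *    php /home/admin/NewsToChat/newstochat.php maintenance >> /var/log/newstochat.log 2>&1
CRON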

For the little experiment I was running, I pushed the first article I gathered into the HipChat room on September 6th, 2014 and the last on December 16th, 2014. In all, a little over sixty thousand news items were gathered, providing more than enough content to publish about every five minutes.

It was extremely simplistic, but could easily have been expanded to behave more like a proper chat room bot. For instance, had I spent the time on it, it would have been nice for the bot to support user-specific preferences and listen to commands from the chat room.

Shortly after the free period expired, I let it run for a while to see how much it would cost me. It turns out it was about 0.47 USD per day. Not bad. Less than the cost of a newspaper. 🙂

Portable Development Environments

I have been enjoying setting up and utilizing my development environments these days. Though that may sound odd to some, it’s because I’ve been able to isolate them from my main system, make them cross-platform, and easily reproduce them on other hosts. The most efficient way I’ve found to do this so far is with Vagrant. While most of my work has been based on a mixture of Puppet and Bash shell script provisioning, I’m looking forward to utilizing Docker at some point in the future.

Here are some of the ones I have developed:

By no means should they be considered complete development environments, or ready for every project based on what is in the title. They should be a starting point for future work and something to build upon. Additionally, they’ve been great for learning a little more about automation. Any comments, ideas, or applicable pull requests are much appreciated.

For the most part, the workflow for initially launching the development environment goes like this (a sketch of what the bin/vm helper might wrap follows the list):

  • Clone the repository and start the virtual machine. The first time will take a while to boot; subsequent boots will be much quicker. In this example I’m showing the nodejs branch:
$ git clone https://github.com/kherrick/vagrant-environments development
$ cd development
$ git checkout nodejs
$ bin/vm start
  • To turn off the virtual machine:
$ bin/vm stop
  • To login to the virtual machine:
$ bin/vm ssh
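
Under the hood, the bin/vm helper is presumably a thin convenience wrapper around the standard Vagrant commands. A minimal sketch of such a wrapper (not the actual script from the repository) might look like this:

#!/usr/bin/env bash

# Minimal sketch of a bin/vm-style wrapper around Vagrant.
case "$1" in
  start) vagrant up ;;
  stop)  vagrant halt ;;
  ssh)   vagrant ssh ;;
  *)     echo "usage: $0 {start|stop|ssh}" >&2; exit 1 ;;
esac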

2048 on a Touchscreen Raspberry Pi

As I was checking out the latest accessories for the Raspberry Pi, I stumbled across this nice little shield at Adafruit: PiTFT Mini Kit – 320×240 2.8″ TFT+Touchscreen. Several days went by before I was certain I had to have it, and when it eventually arrived, I was not disappointed.

2048pi

To start, I followed the excellent setup guide, which went from the soldering (hint: use thin solder and a pointed tip) and software setup all the way to instructions on the display calibration. Before, during, and after verifying that everything was working, I kept wondering about all the things I could make with this thing. What I ended up landing on was building a portable 2048 gaming system. 🙂

So, I began to do some research on how exactly I was going to control 2048 itself. At first I wasn’t sure whether I wanted to use a keyboard-like peripheral to simulate the arrow keys, wire up buttons for use with RPi.GPIO / WiringPi, or look into using swipe events.

Also, I reasoned that since I didn’t plan on writing the game from scratch, running the original seemed ideal, as it works in a plain ol’ web browser. And if that didn’t pan out, I would resort to booting it in command line mode, as there are a variety of C, Python, and Bash ports. Obviously there are other ways to approach this, especially if I were developing the game for market, but I was looking at it as a minimum viable prototype project (MVPP™). j/k

For the command line forks, the simplest method seemed to be launching the application and then sending keystrokes such as ‘w’, ‘a’, ‘s’, or ‘d’ to /dev/tty1 using a small C utility:

s_key.c

#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
  /* open the console and push a single keystroke into its input queue
     (TIOCSTI needs sufficient privileges, e.g. run as root) */
  int hTTY = open("/dev/tty1", O_WRONLY | O_NONBLOCK);
  ioctl(hTTY, TIOCSTI, "w");
  close(hTTY);
  return 0;
}

Another idea that came up was to start a tmux session and send keystrokes through it:

tmux send-keys -t SESSIONNAME Down
tmux send-keys -t SESSIONNAME Up

However, as much as I like the command line, for this project I wanted to avoid it for the UI. I also wondered how I would control it within X. The original 2048 didn’t seem to support a mouse or the Adafruit touch screen, so I looked at utilizing XAUT or xdotool so that I could script a solution triggered from a button press.
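
For illustration, the xdotool route would have looked something like the following: a loop watching a GPIO-wired button (in the same spirit as the poweroff listener later in this post) and injecting an arrow key into the X session. The pin, display number, and key are assumptions for the sketch.

#!/usr/bin/env bash

# Sketch only: when the button on WiringPi pin 4 (GPIO 23) is pressed, send an
# arrow key to the browser running 2048 under X. No debouncing is attempted.
/usr/local/bin/gpio mode 4 up

while true; do
  sleep .2
  if [ "$(/usr/local/bin/gpio read 4)" -lt 1 ]; then
    DISPLAY=:0 xdotool key Up
  fi
done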

I didn’t want to do any of those either, though. I did arrive at this solution:

  • auto login to Linux without a password by modifying /etc/inittab (a scripted version of this change follows the list):
    • look for a line like this: 1:2345:respawn:/sbin/getty --noclear 38400 tty1
    • change it to something like this: 1:2345:respawn:/sbin/getty --autologin {USERNAME} --noclear 38400 tty1
  • auto start LXDE
  • auto start Midori in fullscreen
  • and then make sure the default home page is set to the modified 2048.
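
The inittab change from the first step can be scripted as well; a quick sketch (it keeps a backup of the file, and “pi” stands in for the actual username):

#!/usr/bin/env bash

# Rewrite the tty1 getty line in /etc/inittab to enable auto-login.
# A backup is kept at /etc/inittab.bak; replace "pi" with the real username.
sudo sed -i.bak \
  's|^1:2345:respawn:/sbin/getty --noclear 38400 tty1$|1:2345:respawn:/sbin/getty --autologin pi --noclear 38400 tty1|' \
  /etc/inittab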

The main difference from the original is that I have the app zoomed to 90%, and I swapped out the original touch event setup for one based on Hammer.js. This was done to work with the touch screen and the events provided by Midori when swiping in various directions on the game board.

The only other functionality provided in the project is a listener script running in the background that is started from /etc/rc.local on bootup:

poweroff-on-gpio-23.sh

#!/usr/bin/env bash

# WiringPi pin 4 maps to GPIO 23 (BCM); enable the internal pull-up
/usr/local/bin/gpio mode 4 up

# poll the button every half second and power off when it reads low (pressed)
while true; do
  sleep .5
  if [ `/usr/local/bin/gpio read 4` -lt 1 ]; then
    sudo poweroff
  fi
done

This is to allow for a poweroff event to be sent to the Raspberry Pi, so that when I am done playing the game, I can get the unit to turn off without using a keyboard. Then, I power off the battery pack using a stylus on the underside of the unit.

Parts used:

Watch a video of it booting, a little game play, and the shutdown sequence here:

Checking out Bolt

Recently I was able to spend some time with Bolt, “a tool for Content Management, which strives to be as simple and straightforward as possible.”

Actually, it was very straightforward, as I only had to:

  • Clone the project from GitHub
  • Get Composer and install the dependencies
  • Tweak my existing Nginx configuration
  • Export the entries from this installation of WordPress into XML
  • Enable the ImportWXR extension, and import the entries into the new Bolt database
  • And finally, tweak a few configs and Twig templates

Below you can see the entries and menu items from this blog (screenshots are from Firefox Mobile):

Tracking with Tasker

After finding Tasker for Android and realizing all of the potential it has, I had to try it out. To begin, I decided to build a prototype backend service for receiving the location signals sent from the app itself. The possibilities seemed endless. As the website states:

Tasker is an application for Android which performs tasks (sets of actions) based on contexts (application, time, date, location, event, gesture) in user-defined profiles or in clickable or timer home screen widgets.

What I came up with isn’t nearly as robust as something like Android Device Manager or services like Google+ location sharing, but it did allow me to think through the ins and outs of implementing products like these. The project requires:

  • Tasker
  • A web server that can run PHP 5.4 code
  • A terminal that can run PHP CLI 5.4 code

The sending portion requires a thorough understanding of Tasker’s location intricacies and a basic idea of how it works. I configured it as shown in the following images:

The receiving portion is set up by checking out and building the git repository, as well as configuring hosting for public/index.php.

 git clone https://github.com/kherrick/tracking
 cd tracking/
 bin/build.sh

If everything is in place properly, when Tasker posts to the service the data can be stored in an SQLite database or a simple log file.

DT:4-15-2011_11.12@BATT:13,SMSRF:+15558675309,LOC:32.2000000,-64.4500000,LOCACC:49,LOCALT:165.3000030517578,LOCSPD:0.0,LOCTMS:1458423011,LOCN:32.2000000,-64.4500000,LOCNACC:101,LOCNTMS:1458423011,CELLID:GSM:10081.13345030,CELLSIG:4,CELLSRV:service
DT:4-15-2011_11.14@BATT:12,SMSRF:+15558675309,LOC:18.5000000,-66.9000000,LOCACC:49,LOCALT:165.3000030517578,LOCSPD:0.0,LOCTMS:1458423021,LOCN:18.5000000,-66.9000000,LOCNACC:101,LOCNTMS:1458423021,CELLID:GSM:11172.24255141,CELLSIG:4,CELLSRV:service

The final piece is to generate the output from the data stored. When tracker.php is executed on the command line, the help page is displayed showing the available arguments.

$ ./tracker.php 
Tracker version .001

Usage:
  [options] command [arguments]

Options:
  --help           -h Display this help message.
  --quiet          -q Do not output any message.
  --verbose        -v|vv|vvv Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
  --version        -V Display this application version.
  --ansi              Force ANSI output.
  --no-ansi           Disable ANSI output.
  --no-interaction -n Do not ask any interactive question.

Available commands:
  help   Displays help for a command
  list   Lists commands
  map    Generate a Google static map URL.

In this version, the main command “map” parses the included demo log file and returns a URL for a static map from Google, with the location points marked.

$ ./tracker.php map -l logs/YYYY-MM-DD_post_capture_DOT_log
Processing...
http://maps.googleapis.com/maps/api/staticmap?size=640x640&zoom=4&sensor=false&markers=32.2000000,-64.4500000|18.5000000,-66.9000000|25.4800000,-80.1800000|32.2000000,-64.4500000
The Bermuda Triangle

Controlling a Lego motor with the Raspberry Pi

Raspberry Pi and Gertboard controlling a 4.5v Lego motor #6216m

When the Gertboard originally came out, it was a do-it-yourself kit that required soldering, and it had enough pieces that I wanted to wait until one was offered pre-assembled (you can still get one like the original from Tandy, called the Multiface Kit). After purchasing the latest version, I tried putting one of the Lego motors we have around the house through its paces. Using one of the older models (4.5v Lego motor #6216m) and learning from the examples in the Python Gertboard Suite made crafting my own setup surprisingly easy.

The main steps were to code a motor-controlling script in Python (see lego-motor-control.py) and demo it in Bash (see lego-motor.sh). I find tinkering around with these components immensely rewarding while exploring topics like the Internet of Things. This type of exploration will only get easier as time goes on; for example, I just noticed today that Gert van Loo created something called the Gertduino. It is similar to the original product, but much smaller and easier to use. What kind of fun projects could be implemented with even smaller kits like the TinyDuino or Intel Edison? I do like sticking with the Pi for now, so I dug up a list of compatible shields for your perusal (or browse an even larger list of expansion boards @ elinux.org):

Also, check out the video to get an idea on how the Lego motor and scripts ended up working out:


lego-motor.sh

#!/bin/bash
echo -e "Turning the lego motor right for five seconds...\n"
sudo python ~/lego-motor-control.py right &
pid=$!
sleep 5
sudo kill -INT $pid

echo -e "\nSleeping for three seconds to let the motor wind down\n"
sleep 3

echo -e "Turning the lego motor left for five seconds...\n"
sudo python ~/lego-motor-control.py left &
pid=$!
sleep 5
sudo kill -INT $pid

lego-motor-control.py

#!/usr/bin/env python
import RPi.GPIO as GPIO
import collections, signal, sys
from time import sleep

# get command line arguments
arg_names = ['command', 'direction']
args      = dict(zip(arg_names, sys.argv))
arg_list  = collections.namedtuple('arg_list', arg_names)
args      = arg_list(*(args.get(arg, None) for arg in arg_names))

# set initial values
step      = 1
mota      = 18
motb      = 17
left      = motb
right     = mota
reps      = 400
hertz     = 2000
freq      = (1 / float(hertz)) - 0.0003
ports     = [mota,motb]
percent   = 100

# function to run the motor
def run_motor(reps, pulse_width, port_num, period):
    for i in range(0, reps):
        GPIO.output(port_num, True)
        sleep(pulse_width)
        GPIO.output(port_num, False)
        sleep(period)

# trap SIGINT and provide a clean exit path
def signal_handler(signal, frame):
    GPIO.output(direction, False)
    GPIO.cleanup()
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

GPIO.setmode(GPIO.BCM)

# initialize the ports being used
for port_num in ports:
    GPIO.setup(port_num, GPIO.OUT)
    print "setting up GPIO port:", port_num
    GPIO.output(port_num, False)

# determine direction or set a default
if args[1] == "left":
    direction = left
elif args[1] == "right":
    direction = right
else:
    direction = right

# the main loop
while True:
    pulse_width = percent / float(100) * freq
    period      = freq - (freq * percent / float(100))
    run_motor(reps, pulse_width, direction, period)

Lego Car & Raspberry Pi

Lego Car with Raspberry Pi

For a while I’ve wanted to connect an RC car to the Raspberry Pi by wiring up to a Tyco Fast Traxx remote control. There are even products like Pi-Cars now, and entire blogs devoted to the construction of autonomous cars. I decided to build out a similar concept with the help of my kids (read about the last time we posted about Lego cars).

Recently we acquired the Lego 4×4 Crawler, which includes:

After they completed the Lego build instructions, they added:

This provided an opportunity for me to control the 4×4 Crawler’s infrared remote with a Parallax servo and Lego gear attached to the Raspberry Pi.

The Parallax Standard Servo was attached with Lego bricks built around the outside of it and a larger Lego gear wired on top of it. I then built a platform in the bed of the 4×4 for space to hold this custom part, as well as the connecting gear, and Lego Infrared Remote Control.

To distribute the weight a little bit, the USB battery was enclosed on the underside of the vehicle. As you can see in the video at the end of the post, it still runs lopsided at times. When turning it on, the only thing it directly powers is the USB hub on top. The USB hub then powers the Raspberry Pi, servo, and other peripherals.

Parts used:

Although the servo in the project was wired to the USB hub, the illustration below shows how the USB battery can power the servo, as well as how the control wire can be connected to BCM_GPIO pin 18 for testing (a quick shell test follows the illustration).

Servo driven from battery and Raspberry Pi

example wiring showing how to run the servo from external battery
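
Before involving any Python, the servo wiring can be sanity-checked from the shell with WiringPi’s gpio utility, which can drive the hardware PWM on BCM_GPIO pin 18. The divisor/range below give the usual 50 Hz servo frame; the pulse values are rough, uncalibrated guesses:

#!/usr/bin/env bash

# Sketch: exercise a servo on BCM_GPIO 18 using the Pi's hardware PWM.
# 19.2 MHz base clock / 192 divisor / 2000 range = 50 Hz frames.
gpio -g mode 18 pwm
gpio pwm-ms
gpio pwmc 192
gpio pwmr 2000

gpio -g pwm 18 150   # roughly centered (~1.5 ms pulse)
sleep 1
gpio -g pwm 18 110   # toward one end (~1.1 ms)
sleep 1
gpio -g pwm 18 190   # toward the other end (~1.9 ms)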

For issuing the commands remotely, I had in mind to hook up a Bluetooth USB adapter and pair the Bluetooth game controller directly to the Pi, but in the interest of simplicity, I paired it with my Nexus 4 instead and ssh’ed into the Pi using Terminal IDE.

Using a wireless keyboard I launched the listener script for the servo and was able to control it using the arrow keys. The Bluetooth game controller worked flawlessly as the buttons mapped to standard keyboard codes I specifically looked out for in the script.

listener.sh

#!/bin/bash
ENTER=$(printf "%b" "\n")
ESCAPE=$(printf "%b" "\e")
UP_SUFFIX="A"
RIGHT_SUFFIX="C"
LEFT_SUFFIX="D"

while true; do
  read -sn1 a
  case "$a" in
    $ENTER) echo "ENTER";;
    $ESCAPE)
      read -sn1 b
      test "$b" == "[" || continue
      read -sn1 b
      case "$b" in
        $RIGHT_SUFFIX)
          echo "Turning Right"
          ~/right.sh
        ;;
        $UP_SUFFIX)
          echo "Straigtening"
          ~/straight.sh
        ;;
        $LEFT_SUFFIX)
          echo "Turning Left"
          ~/left.sh
        ;;
        *) continue;;
      esac
    ;;
    "d")
      echo "Turning Right"
      ~/right.sh
    ;;
    "w")
      echo "Straigtening"
      ~/straight.sh
    ;;
    "a")
      echo "Turning Left"
      ~/left.sh
    ;;

    "3")
      echo "Taking a picture"
      ~/picture.sh &
    ;;
    "4")
      echo "Taking a video"
      ~/video.sh &
    ;;
    *) echo "$a";;
  esac
done

When the various buttons or keys are pressed, the listener script calls out to the other scripts, which provide for turning, picture, and video taking.

picture.sh

#!/bin/bash
dateTime=`date +%Y-%m-%d-%H-%M-%S`
pictureDirectory=~/pictures

raspistill -w 640 -h 480 -e jpg -t 0 -o $pictureDirectory/$dateTime.jpg

video.sh

#!/bin/bash
dateTime=`date +%Y-%m-%d-%H-%M-%S`
videoDirectory=~/videos

echo "Taking a video for 10 seconds"
raspivid -t 10000  -o $videoDirectory/$dateTime.h264

right.sh

#!/bin/bash
sudo ~/control.py 1100 .25

straight.sh

#!/bin/bash
sudo ~/control.py 1300 .25

left.sh

#!/bin/bash
sudo ~/control.py 1550 .25

The turning scripts all make use of RPIO.PWM which I was able to use after installing RPIO and building off of the provided examples.
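
Installing RPIO itself is a quick pip install (the PyPI package name is RPIO, and it ships the RPIO.PWM module used in control.py below):

#!/usr/bin/env bash

# Install pip, then RPIO (which provides RPIO.PWM).
sudo apt-get install -y python-pip
sudo pip install RPIO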

control.py

#!/usr/bin/python
import time
import sys
from RPIO import PWM

servo = PWM.Servo()

# appropriate range of values for BCM_GPIO pin 18 and the servo: 370 - 2330

# first argument: pulse width in microseconds; second argument: how long to hold it
servo.set_servo(18, float(sys.argv[1]))

time.sleep(float(sys.argv[2]))

servo.stop_servo(18)


Though the Raspberry Pi camera module is rated at 5 megapixels with a native resolution of 2592×1944, I have it statically set to 640×480 in the picture-taking script above. With it, I was able to capture a picture of my cat ignoring everything but the treat we were bribing her with.

First cat photo from Raspberry Pi camera module

And another one, when she had realized something was going on.

Second cat photo from Raspberry Pi camera module

Lastly, I put together a short video of the Lego 4×4 Crawler being driven from the outside, as well as a first-person perspective clip from the Raspberry Pi camera.