Category Archives: Web Development

Pi Motion

I’ve been wanting to return to the motion detection setups I’ve created in the past [1, 2] on the Raspberry Pi, and update them a bit using some more recent development approaches. For example, before I used Wiring Pi’s gpio utility in a bash script and displayed the results in statically generated pages served by PHP’s built-in web server.

In this setup I am using the very capable Cylon.js framework for the server-sent events API and for controlling the PIR sensor along with the hardware button. SQLite takes up storage duties, while Node.js and Express run the server side portion of the web application. Polymer, Web Components, and some custom design are utilized for the front end side of things.

Here are the hardware parts used:

I used moldable wiring on the breadboard attempting to keep it clean and portable. The wiring that is connected to the GPIO pins on the Raspberry Pi is encased in clear heatshrink and is made into a sort of plug.

After everything is set up on the hardware end, it’s time to set up the software side.

Install Dependencies:

#!/usr/bin/env bash

cd /opt
sudo curl -O https://iojs.org/dist/v2.0.1/iojs-v2.0.1-linux-armv7l.tar.gz
sudo tar -xzpvf iojs-v2.0.1-linux-armv7l.tar.gz
sudo ln -s /opt/iojs-v2.0.1-linux-armv7l/bin/node /usr/local/bin/node
sudo ln -s /opt/iojs-v2.0.1-linux-armv7l/bin/npm /usr/local/bin/npm
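One note on the tarball above: io.js published separate ARM builds, and the armv7l one fits the Pi 2, while the original Pi and Pi Zero need the armv6l build. A small helper like the following (an illustrative sketch, not part of the project) can pick the right filename from `uname -m`:

```shell
#!/usr/bin/env bash
# Map the machine architecture to the matching io.js v2.0.1 tarball name.
pick_build() {
  case "$1" in
    armv6l) echo "iojs-v2.0.1-linux-armv6l.tar.gz" ;;  # Pi 1 / Zero
    armv7l) echo "iojs-v2.0.1-linux-armv7l.tar.gz" ;;  # Pi 2
    *)      echo "no ARM build for: $1" ;;
  esac
}
pick_build "$(uname -m)"
pick_build armv7l
```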

Install Pi-motion:

#!/usr/bin/env bash

cd ~
git clone
cd pi-motion

Run unit tests, and see a coverage report:

#!/usr/bin/env bash

bin/gulp test
bin/gulp test-coverage

Check out the config file and change the options to access the web application from another browser (TV, phone, etc.). Next, serve the web app and cylon wiring app:

#!/usr/bin/env bash

bin/gulp serve

With just the defaults set, open a browser on the Pi, and go to https://localhost:3000 to view the Robeaux dashboard that is built into the Cylon.js framework (be sure to allow the browser to view the content behind the self-signed certificate). Then browse to http://localhost to view the web application.

The main task runner installed is gulp and it exposes several pre-configured tasks such as:

  • serve (to start the Cylon.js and Express code)
  • init-database (to initialize a brand new database)
  • compile-js (to compile the front end code using webpack)
  • test (to run the mocha unit tests)
  • test-coverage (to get a unit test coverage report using istanbul)
  • lint (to lint the JavaScript using jshint)

Once everything is set up and running, the user can activate the PIR sensor. To do this in hardware, press the button on the breadboard. Using software, touch the hamburger menu, touch the “PIR Sensor” menu item, and then touch the “Toggle” button under the sensor indicator (large red circle).

After a forty-second warm-up time the PIR sensor will begin sensing movement. When it activates, the sensor indicator will turn green and the movement data will be logged to the SQLite database.

To view the saved data touch the hamburger menu, touch the “Charts” menu item, select an appropriate time range (by hour), and then touch the “Update Chart” button.

HipChat bot on AWS

Over the last year or so I’ve spent some time testing out Amazon Web Services. While at first the myriad of options seemed a bit overwhelming, once I figured out what I wanted, it was fairly easy to navigate and manage the setup. In particular I used Elastic Compute Cloud with a Debian Wheezy image and stuck to the AWS Free Tier. This setup allowed me to run a t2.micro instance of Linux continuously (only rebooting it for necessary software updates) with the following basic specs:

  • CPU:  Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  • RAM: A little over 512 MB
  • Architecture: x86_64

I was a little concerned while I used it that I would push the system out of the free territory, but was pleasantly surprised to not see any charges to my account. The first few months were spent getting familiar with what AWS was all about, and the next few on hosting a few NodeJS ideas.

The best test application I set up for it (and what I finished the free period with) was one based on good ol’ trusty PHP and the HipChat v2 API. As it turns out I haven’t been using an RSS reader in a while (thanks Google) but still enjoy reading news. Instead of hitting a bunch of different pages to check and see the latest, I figured I would write a simple bot to collect the data for me and message a room in HipChat with the contents.

The project is called NewsToChat and installation was straightforward:

  • On the HipChat profile page:
    • Create an OAuth API token
  • On the AWS instance:

The basic idea is that there are scripts setup to run in cron, and are mapped to the commands available in NewsToChat. These are:

  • pullnews
    • uses a few classes to pull from and format the identified news sources
    • makes a basic attempt to de-duplicate what was found
    • uses a database service to store the data
  • pushnews
    • push one article to the identified chat target
    • marks the article as expired
  • maintenance
    • perform maintenance on the pool of news articles in the database
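The commands above can be wired into cron with a handful of entries. This is a hypothetical sketch: the five-minute pushnews cadence matches the experiment described below, but the paths and other schedules are assumptions, not taken from the project:

```shell
#!/usr/bin/env bash
# Hypothetical crontab for the three NewsToChat commands; only the
# five-minute pushnews cadence comes from the text, the rest is illustrative.
crontab_sketch='*/15 * * * * php /opt/NewsToChat/newstochat pullnews
*/5 * * * * php /opt/NewsToChat/newstochat pushnews
0 3 * * * php /opt/NewsToChat/newstochat maintenance'
echo "$crontab_sketch"
```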

For the little experiment I was running, I pushed the first article I gathered into the HipChat room on September 6th, 2014 and the last on December 16th, 2014. In all, a little over sixty thousand news items were gathered, providing more than enough content to publish about every five minutes.
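A quick back-of-the-envelope check of that claim: the window between those two dates holds far fewer five-minute publishing slots than the sixty thousand items gathered, so the pool could never run dry:

```shell
#!/usr/bin/env bash
# Count the five-minute publishing slots between the first and last push.
start=$(date -u -d 2014-09-06 +%s)
end=$(date -u -d 2014-12-16 +%s)
days=$(( (end - start) / 86400 ))
slots=$(( days * 24 * 12 ))   # 12 five-minute slots per hour
echo "$days days, $slots five-minute slots"
```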

It was extremely simplistic, but could have easily been expanded to behave more like an appropriate chat room bot. For instance, had I spent the time on it, it would have been nice for the bot to support user specific preferences and listen to commands from the chat room.

Shortly after the free period expired, I let it run for a while to see how much it would cost me. Turns out, it was about 0.47 USD per day. Not bad. Less than the cost of a newspaper. :-)

Portable Development Environments

I have been enjoying setting up and utilizing my development environments these days. Though that may sound odd to some, it’s because I’ve been able to isolate them from my main system, make them cross-platform, and easily reproduce them on other hosts. The most efficient way I’ve found to do this up until now is by utilizing Vagrant. While most of my work has been based on a mixture of Puppet and Bash shell script provisioning, I’m looking forward to utilizing Docker at some point in the future.

Here are some of the ones I have developed:

By no means should they be considered complete development environments, or ready for all projects that are based on what is in the title. They should be a starting point for future work and something to build upon. Additionally they’ve been great for learning a little more about automation. Any comments, ideas, or applicable pull requests are much appreciated.

For the most part, the workflow goes like this for initially launching the development environment:

  • Clone the repository and start the virtual machine. The first time will take a while to boot and subsequent ones will be much quicker. In this example I’m showing the nodejs branch:
$ git clone development
$ cd development
$ git checkout nodejs
$ bin/vm start
  • To turn off the virtual machine:
$ bin/vm stop
  • To login to the virtual machine:
$ bin/vm ssh
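As a guess at its shape, bin/vm is likely a thin wrapper over the matching Vagrant commands. The sketch below (hypothetical, the real script in the repository may differ) echoes each command as a dry run so the mapping is visible:

```shell
#!/usr/bin/env bash
# Hypothetical bin/vm wrapper; commands are echoed rather than executed.
vm() {
  case "$1" in
    start) echo "vagrant up" ;;
    stop)  echo "vagrant halt" ;;
    ssh)   echo "vagrant ssh" ;;
    *)     echo "usage: vm {start|stop|ssh}" >&2; return 1 ;;
  esac
}
vm start
```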

2048 on a Touchscreen Raspberry Pi

As I was checking out the latest accessories for the Raspberry Pi, I stumbled across this nice little shield at Adafruit: PiTFT Mini Kit – 320×240 2.8″ TFT+Touchscreen. Several days went by before I was certain I had to have it, and when it eventually arrived, I was not disappointed.


To start, I followed the excellent setup guide, which went from the soldering (hint: use thin solder and a pointed tip) and software setup all the way to instructions on the display calibration. Before, during, and after verifying that everything was working, I kept wondering about all the things I could make with this thing. What I ended up landing on was building a portable 2048 gaming system. :-)

So, I began to do some research on how exactly I was going to control 2048 itself. At first I wasn’t sure whether I wanted to use a keyboard-like peripheral to simulate the arrow keys, wire up buttons for use with RPi.GPIO / WiringPi, or to look into using swipe events.

Also, I reasoned that since I didn’t plan on writing the game from scratch, running the original seemed ideal as it works in a plain ol’ web browser. And if that didn’t pan out, I would resort to booting it in command line mode, as there are a variety of C, Python, and Bash ports. Obviously there are other ways to approach this, especially if I were developing the game for market, but I was looking at it as a minimum viable prototype project (MVPP™). j/k

For the command line forks, the simplest way seemed to be a method involving launching the application, and then sending keystrokes such as ‘w’, ‘a’, ‘s’, or ‘d’ to /dev/tty1 using a small C utility:


#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
  /* open the console and inject a single keystroke into its input queue */
  int hTTY = open("/dev/tty1", O_WRONLY|O_NONBLOCK);
  ioctl(hTTY, TIOCSTI, "b");
  close(hTTY);
  return 0;
}
Another idea that came up was to start a tmux session and send keystrokes through it:

tmux send-keys -t SESSIONNAME Down
tmux send-keys -t SESSIONNAME Up

However, as much as I like the command line, for this project I wanted to avoid it for the UI. I also wondered how I would control it within X. The original 2048 didn’t seem to support using a mouse, or the touch screen from Adafruit, so I looked at utilizing XAUT or xdotool so that I could script out a solution that would be triggered from a button press.

Didn’t want to do any of those either. I did arrive at this solution though:

  • auto login to Linux without a password by modifying /etc/inittab:
    • look for a line like this: 1:2345:respawn:/sbin/getty --noclear 38400 tty1
    • change it to something like this: 1:2345:respawn:/sbin/getty --autologin {USERNAME} --noclear 38400 tty1
  • auto start LXDE
  • auto start Midori in fullscreen
  • and then make sure the default home page is set to the modified 2048.
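The inittab edit in the first step can be applied with sed. Here it is shown against a sample line rather than the real /etc/inittab, with “pi” standing in for the username:

```shell
#!/usr/bin/env bash
# Demonstrate the getty --autologin edit against a sample inittab line.
line='1:2345:respawn:/sbin/getty --noclear 38400 tty1'
patched=$(echo "$line" | sed 's|/sbin/getty |/sbin/getty --autologin pi |')
echo "$patched"
```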

The main difference from the original is that I have the app zoomed to 90%, and I swapped out the original touch event setup with one based on Hammer.js. This was done only to work with the touch screen and the events provided by Midori when swiping in various directions on the game board.

The only other functionality provided in the project is a listener script running in the background that is started from /etc/rc.local on bootup:

#!/usr/bin/env bash

#wiring pi pin 4 maps to gpio-23
/usr/local/bin/gpio mode 4 up

while true; do
  sleep .5
  if [ `/usr/local/bin/gpio read 4` -lt 1 ]; then
    sudo poweroff
  fi
done

This is to allow for a poweroff event to be sent to the Raspberry Pi, so that when I am done playing the game, I can get the unit to turn off without using a keyboard. Then, I power off the battery pack using a stylus on the underside of the unit.

Parts used:

Watch a video of it booting, a little game play, and the shutdown sequence here:

Checking out Bolt

Recently I was able to spend some time with Bolt, “a tool for Content Management, which strives to be as simple and straightforward as possible.”

Actually, it was very straightforward, as I only had to:

  • Clone the project from GitHub
  • Get Composer and install
  • Tweak my existing Nginx configuration
  • Export the entries from this installation of WordPress into XML
  • Enable the ImportWXR extension, and import the entries into the new Bolt database
  • And finally, tweak a few configs and Twig templates

Below you can see the entries and menu items from this blog (screenshots are from Firefox Mobile):

Tracking with Tasker

After finding Tasker for Android and realizing all of the potential it has, I had to try it out. To begin, I decided to build a prototype backend service for receiving the location signals sent from the app itself. The possibilities seemed endless. As the website states:

Tasker is an application for Android which performs tasks (sets of actions) based on contexts (application, time, date, location, event, gesture) in user-defined profiles or in clickable or timer home screen widgets.

What I came up with isn’t nearly as robust as something like Android Device Manager or services like Google+ location sharing, but it did allow me to think through the ins and outs of implementing products like these. The project requires:

  • Tasker
  • A web server that can run PHP 5.4 code
  • A terminal that can run PHP CLI 5.4 code

The sending portion involves thoroughly understanding the location intricacies of Tasker, and a basic idea of how it works. I configured it as shown in the following images:

The receiving portion is set up by checking out and building the git repository, as well as configuring hosting for public/index.php.

 git clone
 cd tracking/

If everything is in place properly when Tasker posts to the service, the data can be stored in an SQLite database or a simple log file.


The final piece is to generate the output from the data stored. When tracker.php is executed on the command line, the help page is displayed showing the available arguments.

$ ./tracker.php 
Tracker version .001

  [options] command [arguments]

  --help           -h Display this help message.
  --quiet          -q Do not output any message.
  --verbose        -v|vv|vvv Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
  --version        -V Display this application version.
  --ansi              Force ANSI output.
  --no-ansi           Disable ANSI output.
  --no-interaction -n Do not ask any interactive question.

Available commands:
  help   Displays help for a command
  list   Lists commands
  map    Generate a Google static map URL.

In this version, the main command “map” parses through the included demo log file and returns a URL for a static map from Google, with location points marked.

$ ./tracker.php map -l logs/YYYY-MM-DD_post_capture_DOT_log
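For a sense of what the map command assembles, a Google Static Maps URL with marked points can be built by hand. The coordinates and size below are illustrative, not taken from the demo log:

```shell
#!/usr/bin/env bash
# Build a Google Static Maps URL with a couple of marked points.
base="https://maps.googleapis.com/maps/api/staticmap"
size="640x400"
markers="markers=25.0,-71.0&markers=26.5,-78.7"
url="${base}?size=${size}&${markers}"
echo "$url"
```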

The Bermuda Triangle

Observations on HTML

In December of 2012 HTML5 became feature complete according to the W3C, “meaning businesses and developers have a stable target for implementation and planning.” They continued to describe HTML5 as, “the cornerstone of the Open Web Platform, a full programming environment for cross-platform applications with access to device capabilities; video and animations; graphics; style, typography, and other tools for digital publishing; extensive network capabilities; and more.”

What a long road. Think back to the turn of the century. Shortly after HTML 4.01 was published as a W3C recommendation and XHTML 1.0 had its turn, it was said to be “developed to make HTML more extensible and increase interoperability with other data formats.” I remember the routine. Best practices at the time were separating form from content, moving to CSS, utilizing unobtrusive JavaScript, and practicing graceful degradation. Don’t forget to close your tags. Make it XML. Oh, and the machines are coming! :-)

As XHTML 2 approached it became clear that it would be an entirely new way of doing things, and not just an incremental approach that preserved compatibility. In 2004, Mozilla and Opera published the “Position Paper for the W3C Workshop on Web Applications and Compound Documents.” Some key sections included headings like, “Backwards compatibility, clear migration path”, “Users should not be exposed to authoring errors”, and “Scripting is here to stay.” Ultimately the initiatives were voted down and in response the WHATWG was formed. Between the years of 2007 and 2009, not only did the W3C accept WHATWG’s “Proposal to Adopt HTML5”, they allowed the XHTML 2 Working Group’s charter to expire, even going on to acknowledge that HTML5 would be the standard rather than the XML variant XHTML5. Regarding the two formats they wrote, “The first such concrete syntax is the HTML syntax. This is the format suggested for most authors.”

Since then, the whole web has been marching toward HTML5 domination, steadily learning best practice and implementation. In the earlier days I recall adoption not being as rapid as it has been more recently, with people discussing the semantics, along with calls to prepare, but there has been a ton of solid information on the topic for a while now and the momentum has shifted. Not only has the WHATWG decided that HTML is a living standard while the W3C publishes regular snapshots, the working draft of HTML 5.1 has been issued.

Lastly, I find it interesting to see the various web development strategies work themselves out as the craft changes. Graceful degradation (and desktop centric) has steadily given way to a solid progressive enhancement (and mobile first) approach as the web continues to gain in mobile traffic. In addition there are quite a few ideas going around on how to best accommodate all of the client browsers, especially in the comments. Should one start with an adaptive web design, and how is that related to responsive web design? Is one really a part of the other and should we have a strategy utilizing both? Maybe that’s the future… I guess it depends.

Time flies when you are having fun. Certainly I can be sure about one thing, I still like to close my tags. :-)

HTML5 Logo by W3C.

Visualizing Motion

The programmability of the Raspberry Pi comes in handy when you want to change the behavior of a circuit without moving a single wire. In this case, I decided the data I was logging with my former motion detection script was largely useless because it only ever recorded when a motion event occurred, but didn’t hint at how long it lasted.

So, while I admit the new method probably isn’t the -best- way there is, I believe it to be incrementally better. :-) The main difference is that “sleep” is called for half a second in the mix to allow the line to start at the bottom, progress to the top, and then come back down after the event occurs. I suppose it is inaccurate in that motion didn’t actually happen exactly in this manner, but it does allow for a nicer graph. Google Chart Tools is used along with PHP’s built-in web server, effectively piping the “data.js” log file to Google for display. I know, the Pi has to be online…

Finally, every evening at 23:59, cron runs a maintenance script and moves the data around for archiving (I love this little Linux box). My thoughts on further improvements have been pointing me toward PHPlot instead of the Annotated Time Line from Google or maybe even utilizing kst. Also I would like to avoid that half second delay in future revisions… oh, and I haven’t tested what happens if I dance around in front of the thing right at midnight. :-)

source for the motion detection script

#!/usr/bin/env bash

function setup {
  gpio export 17 out
  gpio -g write 17 1
  gpio export 18 in
  echo "The PIR sensor is initializing and calibration has begun."
  i=0; while [ $i -lt 40 ]; do i=$(($i+1)); echo $i; sleep 1; done
}

function loop {
  start=0 #tracks whether a motion event is in progress
  while true; do
    if [ `gpio -g read 18` -eq 1 ]; then #PIR sensor activated
      if [ $start -eq 0 ]; then
        start=1
        echo '[new Date('`date +"%Y, "`$((`date +%-m`-1))`date \
          +", %d, %H, %M, %S"`'), 0],' | tee -a data.js
        sleep .5
        echo '[new Date('`date +"%Y, "`$((`date +%-m`-1))`date \
          +", %d, %H, %M, %S"`'), 1],' | tee -a data.js
      fi
    else #PIR sensor de-activated
      if [ $start -eq 1 ]; then
        start=0
        echo '[new Date('`date +"%Y, "`$((`date +%-m`-1))`date \
          +", %d, %H, %M, %S"`'), 1],' | tee -a data.js
        sleep .5
        echo '[new Date('`date +"%Y, "`$((`date +%-m`-1))`date \
          +", %d, %H, %M, %S"`'), 0],' | tee -a data.js
      fi
    fi
  done
}

setup; loop

excerpt of data.js

[new Date(2012, 10, 28, 08, 06, 30), 0],
[new Date(2012, 10, 28, 08, 06, 31), 1],
[new Date(2012, 10, 28, 08, 07, 10), 1],
[new Date(2012, 10, 28, 08, 07, 11), 0],
[new Date(2012, 10, 28, 08, 07, 16), 0],
[new Date(2012, 10, 28, 08, 07, 17), 1],
[new Date(2012, 10, 28, 08, 07, 22), 1],
[new Date(2012, 10, 28, 08, 07, 23), 0],
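One detail worth calling out in the log format: JavaScript Date months are zero-based, which is why the script subtracts one from date’s month field. Reproducing the first excerpt line from its wall-clock timestamp:

```shell
#!/usr/bin/env bash
# Rebuild a data.js line from a timestamp; the month (01-12) must be
# decremented because JavaScript Date months run 0-11.
ts="2012-11-28 08:06:30"
year=$(date -d "$ts" +%Y)
month=$(( $(date -d "$ts" +%-m) - 1 ))   # 11 -> 10 for JavaScript
rest=$(date -d "$ts" +"%d, %H, %M, %S")
line="[new Date($year, $month, $rest), 0],"
echo "$line"
```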

source for index.php


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="content-type"
    content="text/html; charset=utf-8" />
  <title>Karl's Passive Infrared Sensor Graph</title>
  <script type="text/javascript"
    src="https://www.google.com/jsapi"></script>
  <script type="text/javascript">
    google.load('visualization', '1', {packages:
      ['annotatedtimeline']});
    google.setOnLoadCallback(drawVisualization);
    function drawVisualization() {
      var data =
        new google.visualization.DataTable();

      data.addColumn('datetime', 'Date');
      data.addColumn('number', 'Status');

      data.addRows([
        <?php include ('data.js'); ?>
      ]);

      var annotatedtimeline = new
        google.visualization.AnnotatedTimeLine(
          document.getElementById('visualization'));
      annotatedtimeline.draw(data,
        {'displayAnnotations': true});
    }
  </script>
</head>
<body style="font-family: Arial;border: 0 none;">
  <?php $hostname = $_SERVER['HTTP_HOST']; ?>
  <h1>Karl's Passive Infrared Sensor Graph</h1>
  <div id="visualization" style="width: 900px; height: 300px;"></div>
  <br />
  <a href="http://<?php echo $hostname ?>/archive">Archive</a>
</body>
</html>

source for the maintenance script

#!/usr/bin/env bash

directory=$(date +"%Y-%m-%d")
sleep 60 #wait until midnight; cron is set for 23:59 as this keeps the directory names and dates aligned
mkdir /opt/GoogleVisualization/app/archive/$directory
mv /opt/GoogleVisualization/app/data.js /opt/GoogleVisualization/app/archive/$directory/
cp -pr /opt/GoogleVisualization/app/index.php /opt/GoogleVisualization/app/archive/$directory/
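And the matching crontab entry for it; the script path here is an assumption for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical crontab line; 23:59 plus the script's "sleep 60" rolls the
# archive over right at midnight.
cron_entry='59 23 * * * /opt/GoogleVisualization/maintenance.sh'
echo "$cron_entry"
```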

Responsive Design View in Firefox

With the faster release cycle and updating of Firefox, it’s been interesting to see features show up in the application like they do in web sites. They’re there the next time you load it, and are discovered almost by accident if you do not go searching for them on purpose.

The other day, while testing various display sizes on this very site, I noticed a new developer tool that was released called Responsive Design View. This special viewing mode was released in Firefox 15, and allows for various device sizes to be represented using the Gecko layout engine without too much hassle.

How nice. :-) The last feature I found out by accident that made me think, “wow cool,” was the 3D view that was released in Firefox 11. It would have been wonderful to have while developing some animated tabs I had running on here back in the day. Particularly because they would pop up from behind the main container of content on the site… so I had to visualize them behind there waiting for an event to happen to send them shooting up via some jQuery effects.