Tracking with Tasker

After finding Tasker for Android and realizing all of the potential it has, I had to try it out. To begin, I decided to build a prototype backend service for receiving the location signals sent from the app itself. The possibilities seemed endless. As the website states:

Tasker is an application for Android which performs tasks (sets of actions) based on contexts (application, time, date, location, event, gesture) in user-defined profiles or in clickable or timer home screen widgets.

What I came up with isn’t nearly as robust as something like Android Device Manager or services like Google+ location sharing, but it did allow me to think through the ins and outs of implementing products like these. The project requires:

  • Tasker
  • A web server that can run PHP 5.4 code
  • A terminal that can run PHP CLI 5.4 code

The sending portion involves a thorough understanding of Tasker's location intricacies, along with a basic idea of how the app works. I configured it as shown in the following images:

The receiving portion is set up by checking out and building the git repository, as well as configuring hosting for public/index.php.

 git clone https://github.com/kherrick/tracking
 cd tracking/
 bin/build.sh

If everything is in place when Tasker posts to the service, the data can be stored in an SQLite database or a simple log file.

DT:4-15-2011_11.12@BATT:13,SMSRF:+15558675309,LOC:32.2000000,-64.4500000,LOCACC:49,LOCALT:165.3000030517578,LOCSPD:0.0,LOCTMS:1458423011,LOCN:32.2000000,-64.4500000,LOCNACC:101,LOCNTMS:1458423011,CELLID:GSM:10081.13345030,CELLSIG:4,CELLSRV:service
DT:4-15-2011_11.14@BATT:12,SMSRF:+15558675309,LOC:18.5000000,-66.9000000,LOCACC:49,LOCALT:165.3000030517578,LOCSPD:0.0,LOCTMS:1458423021,LOCN:18.5000000,-66.9000000,LOCNACC:101,LOCNTMS:1458423021,CELLID:GSM:11172.24255141,CELLSIG:4,CELLSRV:service
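The capture format above is easy to pull apart programmatically. Here is a minimal Python sketch (an illustration only; the project itself is PHP) that splits a capture line into a dictionary, with the field names taken from the sample lines above:

```python
import re

def parse_capture(line):
    """Parse one Tasker capture line into a dict of fields.

    The date/time stamp precedes '@'; the remainder is KEY:VALUE
    pairs, where LOC-style values themselves contain commas, so we
    only split on commas that are followed by an uppercase key."""
    stamp, _, rest = line.partition('@')
    fields = {'DT': stamp.split(':', 1)[1]}
    for pair in re.split(r',(?=[A-Z]+:)', rest):
        key, _, value = pair.partition(':')
        fields[key] = value
    return fields

line = ('DT:4-15-2011_11.12@BATT:13,SMSRF:+15558675309,'
        'LOC:32.2000000,-64.4500000,LOCACC:49')
record = parse_capture(line)
print(record['LOC'])  # the latitude,longitude pair
```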

The final piece is to generate the output from the data stored. When tracker.php is executed on the command line, the help page is displayed showing the available arguments.

$ ./tracker.php 
Tracker version .001

Usage:
  [options] command [arguments]

Options:
  --help           -h Display this help message.
  --quiet          -q Do not output any message.
  --verbose        -v|vv|vvv Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
  --version        -V Display this application version.
  --ansi              Force ANSI output.
  --no-ansi           Disable ANSI output.
  --no-interaction -n Do not ask any interactive question.

Available commands:
  help   Displays help for a command
  list   Lists commands
  map    Generate a Google static map URL.

In this version, the main command “map” parses the included demo log file and returns a URL for a static map from Google, with the location points marked.

$ ./tracker.php map -l logs/YYYY-MM-DD_post_capture_DOT_log
Processing...
http://maps.googleapis.com/maps/api/staticmap?size=640x640&zoom=4&sensor=false&markers=32.2000000,-64.4500000|18.5000000,-66.9000000|25.4800000,-80.1800000|32.2000000,-64.4500000
The Bermuda Triangle
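Building such a URL from a list of points is straightforward. The following Python sketch (for illustration only; the real work happens in tracker.php) assembles a static map URL in the same shape as the output above:

```python
def static_map_url(points, size='640x640', zoom=4):
    """Build a Google Static Maps URL marking each (lat, lon) point.

    A simplified sketch of what the 'map' command produces; the
    actual tracker reads its points from the capture log."""
    markers = '|'.join('%s,%s' % (lat, lon) for lat, lon in points)
    return ('http://maps.googleapis.com/maps/api/staticmap'
            '?size=%s&zoom=%d&sensor=false&markers=%s'
            % (size, zoom, markers))

url = static_map_url([('32.2000000', '-64.4500000'),
                      ('18.5000000', '-66.9000000')])
print(url)
```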

Controlling a Lego motor with the Raspberry Pi

Raspberry Pi and Gertboard controlling a 4.5v Lego motor #6216m

When the Gertboard originally came out, it was a do-it-yourself kit that required soldering, and it had enough pieces that I decided to wait until one was offered pre-assembled (you can still get one like the original from Tandy, called the Multiface Kit). After purchasing the latest version, I tried putting one of the Lego motors we have around the house through its paces. Using one of the older models (4.5v Lego motor #6216m) and learning from the examples in the Python Gertboard Suite made crafting my own setup surprisingly easy.

The main steps were to code a motor-controlling script in Python (see lego-motor-control.py) and demo it in Bash (see lego-motor.sh). I find tinkering with these components immensely rewarding while exploring topics like the Internet of Things. This type of exploration will only get easier as time goes on; for example, I just noticed today that Gert van Loo created something called the Gertduino. It is similar to the original product, but much smaller and easier to use. What kind of fun projects could be implemented with even smaller kits like the TinyDuino or Intel Edison? I do like sticking with the Pi for now, so I dug up a list of compatible shields for your perusal (or browse an even larger list of expansion boards @ elinux.org):

Also, check out the video to get an idea of how the Lego motor and scripts ended up working:


lego-motor.sh

#!/bin/bash
echo -e "Turning the lego motor right for five seconds...\n"
sudo python ~/lego-motor-control.py right &
pid=$!
sleep 5
sudo kill -INT $pid

echo -e "\nSleeping for three seconds to let the motor wind down\n"
sleep 3

echo -e "Turning the lego motor left for five seconds...\n"
sudo python ~/lego-motor-control.py left &
pid=$!
sleep 5
sudo kill -INT $pid

lego-motor-control.py

#!/usr/bin/env python
import RPi.GPIO as GPIO
import collections, signal, sys
from time import sleep

# get command line arguments
arg_names = ['command', 'direction']
args      = dict(zip(arg_names, sys.argv))
arg_list  = collections.namedtuple('arg_list', arg_names)
args      = arg_list(*(args.get(arg, None) for arg in arg_names))

# set initial values
step      = 1
mota      = 18
motb      = 17
left      = motb
right     = mota
reps      = 400
hertz     = 2000
freq      = (1 / float(hertz)) - 0.0003
ports     = [mota,motb]
percent   = 100

# function to run the motor
def run_motor(reps, pulse_width, port_num, period):
    for i in range(0, reps):
        GPIO.output(port_num, True)
        sleep(pulse_width)
        GPIO.output(port_num, False)
        sleep(period)

# trap SIGINT and provide a clean exit path
def signal_handler(signal, frame):
    GPIO.output(direction, False)
    GPIO.cleanup()
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)

GPIO.setmode(GPIO.BCM)

# initialize the ports being used
for port_num in ports:
    GPIO.setup(port_num, GPIO.OUT)
    print "setting up GPIO port:", port_num
    GPIO.output(port_num, False)

# determine direction or set a default
if args[1] == "left":
    direction = left
elif args[1] == "right":
    direction = right
else:
    direction = right

# the main loop
while True:
    pulse_width = percent / float(100) * freq
    period      = freq - (freq * percent / float(100))
    run_motor(reps, pulse_width, direction, period)
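The timing math in that main loop is worth a closer look: with percent set to 100, the computed off-time (period) is zero, so the pin is driven essentially continuously, while lower percentages would narrow the pulse. The short sketch below just reproduces the script's arithmetic for a few values:

```python
# Reproduces the duty-cycle arithmetic from lego-motor-control.py.
# With hertz = 2000, freq works out to 0.0002 seconds; at 100 percent
# the off-time (period) is zero, so the pin stays high almost constantly.
hertz = 2000
freq = (1 / float(hertz)) - 0.0003

for percent in (100, 75, 50):
    pulse_width = percent / float(100) * freq
    period = freq - (freq * percent / float(100))
    print(percent, pulse_width, period)
```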

Lego Car & Raspberry Pi

Lego Car with Raspberry Pi

For a while I’ve wanted to connect an RC car to the Raspberry Pi by wiring it up to a Tyco Fast Traxx remote control. There are even products like Pi-Cars now, and entire blogs devoted to the construction of autonomous cars. I decided to build out a similar concept with the help of my kids (read about the last time we posted on Lego cars).

Recently we acquired the Lego 4×4 Crawler, which includes:

After they completed the Lego build instructions, they added:

This provided an opportunity for me to control the 4×4 Crawler’s infrared remote with a Parallax servo and Lego gear attached to the Raspberry Pi.

The Parallax Standard Servo was attached with Lego bricks built around the outside of it and a larger Lego gear wired on top of it. I then built a platform in the bed of the 4×4 for space to hold this custom part, as well as the connecting gear, and Lego Infrared Remote Control.

To distribute the weight a little bit, the USB battery was enclosed on the underside of the vehicle. As you can see in the video at the end of the post, it still runs lopsided at times. When turning it on, the only thing it directly powers is the USB hub on top. The USB hub then powers the Raspberry Pi, servo, and other peripherals.

Parts used:

Although the servo in the project was wired to the USB hub, the illustration below shows how the USB battery can power the servo, as well as how the control wire can be connected to BCM_GPIO pin 18 for testing.

Servo driven from battery and Raspberry Pi

example wiring showing how to run the servo from external battery

For issuing the commands remotely, I had planned to hook up a Bluetooth USB adapter and pair the Bluetooth game controller directly with the Pi, but in the interest of simplicity, I paired it with my Nexus 4 instead and SSHed into the Pi using Terminal IDE.

Using a wireless keyboard, I launched the listener script for the servo and was able to control it using the arrow keys. The Bluetooth game controller worked flawlessly, as its buttons mapped to standard keyboard codes the script specifically watches for.

listener.sh

#!/bin/bash
ENTER=$(printf "%b" "\n")
ESCAPE=$(printf "%b" "\e")
UP_SUFFIX="A"
RIGHT_SUFFIX="C"
LEFT_SUFFIX="D"

while true; do
  read -sn1 a
  case "$a" in
    $ENTER) echo "ENTER";;
    $ESCAPE)
      read -sn1 b
      test "$b" == "[" || continue
      read -sn1 b
      case "$b" in
        $RIGHT_SUFFIX)
          echo "Turning Right"
          ~/right.sh
        ;;
        $UP_SUFFIX)
          echo "Straightening"
          ~/straight.sh
        ;;
        $LEFT_SUFFIX)
          echo "Turning Left"
          ~/left.sh
        ;;
        *) continue;;
      esac
    ;;
    "d")
      echo "Turning Right"
      ~/right.sh
    ;;
    "w")
      echo "Straightening"
      ~/straight.sh
    ;;
    "a")
      echo "Turning Left"
      ~/left.sh
    ;;

    "3")
      echo "Taking a picture"
      ~/picture.sh &
    ;;
    "4")
      echo "Taking a video"
      ~/video.sh &
    ;;
    *) echo "$a";;
  esac
done

When the various buttons or keys are pressed, the listener script calls out to the other scripts, which handle turning, picture taking, and video recording.

picture.sh

#!/bin/bash
dateTime=`date +%Y-%m-%d-%H-%M-%S`
pictureDirectory=~/pictures

raspistill -w 640 -h 480 -e jpg -t 0 -o $pictureDirectory/$dateTime.jpg

video.sh

#!/bin/bash
dateTime=`date +%Y-%m-%d-%H-%M-%S`
videoDirectory=~/videos

echo "Taking a video for 10 seconds"
raspivid -t 10000  -o $videoDirectory/$dateTime.h264

right.sh

#!/bin/bash
sudo ~/control.py 1100 .25

straight.sh

#!/bin/bash
sudo ~/control.py 1300 .25

left.sh

#!/bin/bash
sudo ~/control.py 1550 .25

The turning scripts all make use of RPIO.PWM, which I was able to use after installing RPIO and building off of the provided examples.

control.py

#!/usr/bin/python
import time
import sys
from RPIO import PWM

servo = PWM.Servo()

#appropriate range of values for BCM_GPIO pin 18 and the servo: 370 - 2330

print servo.set_servo(18, float(sys.argv[1]))

time.sleep(float(sys.argv[2]))

servo.stop_servo(18)
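Given the 370-2330 range noted in that comment, a small helper can map a nominal angle onto a pulse width instead of hard-coding values like 1100 and 1550. This is a sketch only; the 0-180 degree travel is an assumption, so check the servo's datasheet:

```python
def angle_to_pulse(angle, pulse_min=370, pulse_max=2330):
    """Map an angle in degrees (0-180, assumed travel) onto the
    pulse-width range from the comment in control.py."""
    if not 0 <= angle <= 180:
        raise ValueError('angle out of range')
    return int(pulse_min + (pulse_max - pulse_min) * angle / 180.0)

print(angle_to_pulse(90))  # midpoint of the pulse range: 1350
```

A value like angle_to_pulse(90) could then be handed to servo.set_servo(18, ...) in place of the hard-coded numbers.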


Though the Raspberry Pi camera module is rated at 5 megapixels with a native resolution of 2592×1944, I have it statically set to 640×480 in the picture-taking script above. With it, I was able to capture a picture of my cat ignoring everything but the treat we were bribing her with.

First cat photo from Raspberry Pi camera module

And another one, when she had realized something was going on.

Second cat photo from Raspberry Pi camera module

Lastly, I put together a short video of the Lego 4×4 Crawler being driven from the outside as well as a first person perspective clip from the Raspberry Pi camera.

PHP one-liners on the CLI

PHP Code

After checking out a page called Perl one-liners and reading how you could execute Perl inline on the CLI, I decided to try to replicate some grep functionality using PHP one-liners for fun.

Interestingly, PHP seems quite slow compared to other tools for this particular task, so I probably won’t be using the approaches outlined below in daily practice. 🙂

Here are some stats for a search through all files in the current directory for words with ‘foo’ in them:

time to search using PHP:

username@host:~$ time php -r '$s="foo";$fs=scandir(".");foreach($fs as $f){$ls=file($f);foreach($ls as $l){if(strpos($l,$s)!==false)echo "$f:$l";}}'
filename.txt:football season is approaching

real	0m0.013s
user	0m0.012s
sys	0m0.000s

time to search using Perl:

username@host:~$ time perl -ne 'print "$ARGV:$_" if /foo/' *
filename.txt:football season is approaching

real	0m0.003s
user	0m0.000s
sys	0m0.000s

time to search using grep:

username@host:~$ time grep -H foo *
filename.txt:football season is approaching

real	0m0.002s
user	0m0.000s
sys	0m0.000s

As you can see, using the one-liner technique appears to be as simple as “compressing” your script down to a single line, making sure it is shell friendly, and executing it with the “-r” option. Some other one-liners that I worked out with PHP are below:

to search for ‘foo’ and print matching lines in a single file:

php -r '$s="foo";$ls=file("filename");foreach($ls as $l){if(strpos($l,$s)!==false)echo$l;}'

to search for ‘foo’ and print matching lines in all files in the current directory:

php -r '$s="foo";$fs=scandir(".");foreach($fs as $f){$ls=file($f);foreach($ls as $l){if(strpos($l,$s)!==false)echo$l;}}'

to search for ‘foo’ and print matching lines recursively from the current directory:

php -r '$s="foo";$t=new RecursiveDirectoryIterator(".");foreach(new RecursiveIteratorIterator($t) as $f){$ls=file($f);foreach($ls as $l){if(strpos($l,$s)!==false)echo$l;}}'
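For comparison (not timed above), the recursive variant translates naturally into Python as well; this sketch walks the tree with os.walk and yields matching lines:

```python
import os

def grep_tree(needle, root='.'):
    """Yield (path, line) for every line under root containing needle.
    A Python analogue of the recursive PHP one-liner above."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path) as handle:
                    for line in handle:
                        if needle in line:
                            yield path, line
            except (IOError, UnicodeDecodeError):
                continue  # skip unreadable or binary files

# usage: print every matching line under the current directory
# for path, line in grep_tree('foo'):
#     print('%s:%s' % (path, line.rstrip('\n')))
```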

Observations on HTML

In December of 2012 HTML5 became feature complete according to the W3C, “meaning businesses and developers have a stable target for implementation and planning.” They continued to describe HTML5 as, “the cornerstone of the Open Web Platform, a full programming environment for cross-platform applications with access to device capabilities; video and animations; graphics; style, typography, and other tools for digital publishing; extensive network capabilities; and more.”

What a long road. Think back to the turn of the century. Shortly after HTML 4.01 was published as a W3C recommendation and XHTML 1.0 had its turn, the latter was said to be “developed to make HTML more extensible and increase interoperability with other data formats.” I remember the routine. Best practices at the time were separating form from content, moving to CSS, utilizing unobtrusive JavaScript, and practicing graceful degradation. Don’t forget to close your tags. Make it XML. Oh, and the machines are coming! 🙂

As XHTML 2 approached it became clear that it would be an entirely new way of doing things, and not just an incremental approach that preserved compatibility. In 2004, Mozilla and Opera published the “Position Paper for the W3C Workshop on Web Applications and Compound Documents.” Some key sections included headings like, “Backwards compatibility, clear migration path”, “Users should not be exposed to authoring errors”, and “Scripting is here to stay.” Ultimately the initiatives were voted down and in response the WHATWG was formed. Between the years of 2007 and 2009, not only did the W3C accept WHATWG’s “Proposal to Adopt HTML5”, they allowed the XHTML 2 Working Group’s charter to expire, even going on to acknowledge that HTML5 would be the standard rather than the XML variant XHTML5. Regarding the two formats they wrote, “The first such concrete syntax is the HTML syntax. This is the format suggested for most authors.”

Since then, the whole web has been marching toward HTML5 domination, steadily learning best practices and implementation. In the earlier days I recall adoption not being as rapid as it has been recently, with people discussing the semantics along with calls to prepare, but there has been a ton of solid information on the topic for a while now and the momentum has shifted. Not only has the WHATWG decided that HTML is a living standard while the W3C publishes regular snapshots, but the working draft of HTML 5.1 has also been issued.

Lastly, I find it interesting to see the various web development strategies work themselves out as the craft changes. Graceful degradation (and desktop-centric design) has steadily given way to a solid progressive enhancement (and mobile-first) approach as the web continues to gain mobile traffic. In addition, there are quite a few ideas going around on how best to accommodate all of the client browsers, especially in the comments. Should one start with an adaptive web design, and how is that related to responsive web design? Is one really a part of the other, and should we have a strategy utilizing both? Maybe that’s the future… I guess it depends.

Time flies when you are having fun. I can be sure about one thing: I still like to close my tags. 🙂

HTML5 Logo by W3C.

Compiling TrueCrypt on Raspberry Pi

These steps were gleaned from the work completed by Reinhard Seiler.

Other than the manual download and placement of the TrueCrypt source, the rest should be fairly hands-off. First, get the “TrueCrypt 7.1a Source.tar.gz” package from http://www.truecrypt.org/downloads2 and copy it to /usr/local/src/ on the Raspberry Pi.

Second, run the commands below, at your own risk of course… 🙂

#!/bin/bash
#get source files other than the TrueCrypt source
sudo wget -P /usr/local/src http://prdownloads.sourceforge.net/wxwindows/wxWidgets-2.8.11.tar.gz
sudo wget -P /usr/local/src/pkcs11 ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v211/pkcs11.h
sudo wget -P /usr/local/src/pkcs11 ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v211/pkcs11f.h
sudo wget -P /usr/local/src/pkcs11 ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v211/pkcs11t.h

#get and install dependent packages
sudo apt-get -y install libgtk2.0-dev libfuse-dev nasm libwxgtk2.8-dev

#extract, configure, and make wxWidgets
sudo tar -xzvf /usr/local/src/wxWidgets-2.8.11.tar.gz -C /usr/local/src
cd /usr/local/src/wxWidgets-2.8.11/
./configure
make

#setup, extract
export PKCS11_INC=/usr/local/src/pkcs11
sudo tar -xzvf "/usr/local/src/TrueCrypt 7.1a Source.tar.gz" -C /usr/local/src
cd /usr/local/src/truecrypt-7.1a-source

#comment out some lines that prevented building
sed -i 's#TC_TOKEN_ERR (CKR_NEW_PIN_MODE)#/*TC_TOKEN_ERR (CKR_NEW_PIN_MODE)*/#g' Common/SecurityToken.cpp 
sed -i 's#TC_TOKEN_ERR (CKR_NEXT_OTP)#/*TC_TOKEN_ERR (CKR_NEXT_OTP)*/#g' Common/SecurityToken.cpp
sed -i 's#TC_TOKEN_ERR (CKR_FUNCTION_REJECTED)#/*TC_TOKEN_ERR (CKR_FUNCTION_REJECTED)*/#g' Common/SecurityToken.cpp

#compile, build, make!
sudo make WX_ROOT=/usr/local/src/wxWidgets-2.8.11/ wxbuild
sudo -E make WXSTATIC=1

echo
echo TrueCrypt should be found in /usr/local/src/truecrypt-7.1a-source/Main/

Finally, copy the compiled ‘truecrypt’ binary from /usr/local/src/truecrypt-7.1a-source/Main/ to /usr/local/bin/.

Afterwards, I was able to run it from the command line and view the help. There are two items in particular that may be of interest for those going through these steps. The first is that when attempting to mount an encrypted volume with the Raspberry Pi that I have, I received the following:

Error: device-mapper: reload ioctl on truecrypt1_1 failed: No such 
file or directory
Command failed

It appears to be due to a particular kernel module not being compiled in, so I added the “-m=nokernelcrypto” command line option and was successful.

TrueCrypt on the Raspberry Pi

For more command line usage, see their website @ http://www.truecrypt.org/docs/?s=command-line-usage. I haven’t tested it yet, but it should work graphically as well.

Also, though some may not want to edit the source directly, I ended up commenting out lines 660, 661, and 662 of /usr/local/src/truecrypt-7.1a-source/Common/SecurityToken.cpp in order for the compile to work (see the sed lines in the script I provided above). Here’s the difference between the original and the changed source:

diff ./SecurityToken.cpp 
/usr/local/src/truecrypt-7.1a-source/Common/SecurityToken.cpp 
660,662c660,662
< 			TC_TOKEN_ERR (CKR_NEW_PIN_MODE)
< 			TC_TOKEN_ERR (CKR_NEXT_OTP)
< 			TC_TOKEN_ERR (CKR_FUNCTION_REJECTED)
---
> 			/*TC_TOKEN_ERR (CKR_NEW_PIN_MODE)*/
> 			/*TC_TOKEN_ERR (CKR_NEXT_OTP)*/
> 			/*TC_TOKEN_ERR (CKR_FUNCTION_REJECTED)*/

While there are some compromises in the process described above, it was the only way I could get it compiled in the time I allotted to the task.

Jigging with Debian

I found a package manager for Debian that I’ve never heard of: Wajig.

  • History: Motivations for Wajig

If you’ve tried to remember all the different commands to get different information about different aspects of Debian package management and then used other commands to install and remove packages then you’ll know that it can become a little too much.

Swapping between dselect, deity, deity-gtk, aptitude, apt-get, dpkg, gnome-apt, apt-cache, and so on is interesting but cumbersome. Plus personally I find dselect, deity, and aptitude confusing and even though I’ve spent hours understanding each of them, I don’t think the time was particularly well spent.

This Python script simply collects together what I have learnt over the years about various commands! Clearly I have yet to learn all there is.

As Andrew Tanenbaum once said: “The nice thing about standards is that you have so many to choose from.” Even Debian documentation seems to advocate one tool, aptitude, or the older, apt-get, depending on where you look (at least at this time)…

  • Source one

aptitude – CLI and ncurses front end for Apt (recommended)

  • Source Two

The recommended way to upgrade from previous Debian GNU/Linux releases is to use the package management tool apt-get. In previous releases, aptitude was recommended for this purpose, but recent versions of apt-get provide equivalent functionality and also have shown to more consistently give the desired upgrade results.

As for myself, I seem to default to apt-get. In addition, the auto-removal tools offered in any package management tool make me want to proceed with caution. Though, I have been told I worry too much. 🙂

Visualizing Motion

The programmability of the Raspberry Pi comes in handy when you want to change the behavior of a circuit without moving a single wire. In this case, I decided the data I was logging with my former motion detection script was largely useless, because it only recorded when a motion event occurred and didn’t indicate how long it lasted.

So, while I admit the new method probably isn’t the -best- way there is, I believe it to be incrementally better. 🙂 The main difference is that “sleep” is called for half a second in the mix to allow the line to start at the bottom, progress to the top, and then come back down after the event occurs. I suppose it is inaccurate in that the motion didn’t actually happen in exactly this manner, but it does allow for a nicer graph. Google Chart Tools is used along with the PHP built-in web server, effectively piping the “data.js” log file to Google for display. I know, the Pi has to be online…

Finally, every evening at 23:59, cron runs a maintenance script and moves the data around for archiving (I love this little Linux box). My thoughts on further improvements have been pointing me toward PHPlot instead of the Annotated Time Line from Google, or maybe even utilizing kst. I would also like to avoid that half-second delay in future revisions… oh, and I haven’t tested what happens if I dance around in front of the thing right at midnight. 🙂

source for motion.sh

#!/bin/bash
function setup {
gpio export 17 out
gpio -g write 17 1
gpio export 18 in
start=0
echo The PIR sensor is initializing and calibration has begun.
i=0; while [ $i -lt 40 ]; do i=$(($i+1)); echo $i; sleep 1; done
}

function loop {
  while true
  do
    if [ `gpio -g read 18` -eq 1 ]; then #PIR sensor activated
      if [ $start -eq 0 ]; then
        echo '[new Date('`date +"%Y, "`$((`date +%m`-1))`date +", %d, %H, %M, %S"`'), 0],' | tee -a /opt/GoogleVisualization/app/data.js
        sleep .5
        echo '[new Date('`date +"%Y, "`$((`date +%m`-1))`date +", %d, %H, %M, %S"`'), 1],' | tee -a /opt/GoogleVisualization/app/data.js
        start=1;
      fi
    else #PIR sensor de-activated
      if [ $start -eq 1 ]; then
        echo '[new Date('`date +"%Y, "`$((`date +%m`-1))`date +", %d, %H, %M, %S"`'), 1],' | tee -a /opt/GoogleVisualization/app/data.js
        sleep .5
        echo '[new Date('`date +"%Y, "`$((`date +%m`-1))`date +", %d, %H, %M, %S"`'), 0],' | tee -a /opt/GoogleVisualization/app/data.js
        start=0;
      fi
    fi
  done
}

setup; loop

excerpt of data.js

[new Date(2012, 10, 28, 08, 06, 30), 0],
[new Date(2012, 10, 28, 08, 06, 31), 1],
[new Date(2012, 10, 28, 08, 07, 10), 1],
[new Date(2012, 10, 28, 08, 07, 11), 0],
[new Date(2012, 10, 28, 08, 07, 16), 0],
[new Date(2012, 10, 28, 08, 07, 17), 1],
[new Date(2012, 10, 28, 08, 07, 22), 1],
[new Date(2012, 10, 28, 08, 07, 23), 0],

source for index.php

<?php
$hostname='bb.local';
$directory='/opt/GoogleVisualization/app/';
?>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" 
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="content-type"
    content="text/html; charset=utf-8" />
  <title>Karl's Passive Infrared Sensor Graph</title>
  <script type="text/javascript" 
    src="http://www.google.com/jsapi"></script>
  <script type="text/javascript">
    google.load('visualization', '1', {packages: 
      ['annotatedtimeline']});
    function drawVisualization() {
      var data = 
        new google.visualization.DataTable();

      data.addColumn('datetime', 'Date');
      data.addColumn('number', 'Status');

      data.addRows([
        <?php include ('data.js'); ?>
      ]);

      var annotatedtimeline = new
        google.visualization.AnnotatedTimeLine(
        document.getElementById('visualization'));
        annotatedtimeline.draw(data, 
        {'displayAnnotations': true});
    }

    google.setOnLoadCallback(drawVisualization);

  </script>
</head>
<body style="font-family: Arial;border: 0 none;">
  <h1>Karl's Passive Infrared Sensor Graph</h1>
  <div id="visualization" style="width: 900px; height: 300px;">
  </div>
  <br />
  <a href="http://<?php echo $hostname ?>/archive">Archive</a>
</body>
</html>

 
source for the maintenance script

#!/bin/bash
directory=$(date +"%Y-%m-%d")
sleep 60 #wait until midnight, cron is set for 23:59 as this keeps the directory names and dates aligned
mkdir /opt/GoogleVisualization/app/archive/$directory
mv /opt/GoogleVisualization/app/data.js /opt/GoogleVisualization/app/archive/$directory/
cp -pr /opt/GoogleVisualization/app/index.php /opt/GoogleVisualization/app/archive/$directory/
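For reference, the 23:59 schedule mentioned earlier corresponds to a crontab entry along these lines (the script path here is an assumption for illustration):

```
59 23 * * * /opt/GoogleVisualization/maintenance.sh
```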

Responsive Design View in Firefox

With the faster release cycle and updating of Firefox, it’s been interesting to see features show up in the application the way they do on web sites. They’re there the next time you load it, and are discovered almost by accident if you do not go searching for them on purpose.

The other day, while testing various display sizes on this very site, I noticed a newly released developer tool called Responsive Design View. This special viewing mode arrived in Firefox 15, and allows various device sizes to be represented using the Gecko layout engine without too much hassle.

How nice. 🙂 The last feature I found by accident that made me think, “wow, cool,” was the 3D view released in Firefox 11. It would have been wonderful to have while developing some animated tabs I had running on here back in the day, particularly because they would pop up from behind the main container of content on the site… so I had to visualize them back there, waiting for an event to send them shooting up via some jQuery effects.