Thursday 18 December 2014

Raspberry Pi - Python - MySQL - Cron Jobs and Physical Jerks #1


In two previous posts I blogged about how I've linked geekiness and exercise by playing with the Strava API.  This is good for the exercise types that Strava covers, but it doesn't cover the other stuff I like to do, such as strength and conditioning work (very important now I am entering more mature years).

What I find is that I go through peaks and troughs with this sort of exercise.  I have periods where I have good intentions and do lots, and other periods where I just seem to forget about it.  Hence what I needed was something like Strava that keeps a record of what I've done and gives me prompts to do more.

I'm sure there are apps out there that do this sort of thing, but that's not the Geek Dad way; I wanted to build my own....

So here was my idea:
  1. When I've done some exercise I send a Tweet from my 'phone with some form of shorthand notation in it to define what I've done.
  2. My Raspberry Pi periodically checks my twitter feed to pick up these Tweets. 
  3. My Raspberry Pi logs the exercise data in a database.
  4. My Raspberry Pi sends me responses via Twitter to congratulate me for doing the exercise.
  5. At a later stage I can analyse the data on a web page.
The TUI (Twitter User Interface) is a little lazy; maybe one day when I get some more time I'll write an app with a decent GUI (Geek User Interface).

I decided to call it "Jerks", as in physical jerks.  The word has no other meaning that I know of... ;-)

To start with I needed a database.  I decided to use MySQL as it is a core part of a LAMP web server.  This page provides an excellent tutorial on setting up and using (via Python and PHP) a MySQL database on a Raspberry Pi.  The only problem I ran into was that my installation process did not prompt me to create a root password; I followed the steps on this site and managed to resolve that.

To set up my Jerks database I simply used this MySQL syntax:

mysql> CREATE DATABASE jerks;
mysql> USE jerks;
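
The script also needs a MySQL user to connect as; something like this would do (the user name and password here are placeholders, matching the redacted ones in the full listing at the end):

mysql> CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON jerks.* TO 'user'@'localhost';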

I wanted the method for Tweeting in details of the exercises I'd done to be super simple.  Hence I decided to use the format "XX N", where XX is a two letter code defining the exercise I'd done and N is the number of repetitions (there's a quick sketch of the format check after the examples below).  So for example:
  • Pu 12 - Means I've done 12 press ups.
  • Yo 1 - Means I've done 1 Yoga session.
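
Checking for this pattern in Python is straightforward; here's a minimal sketch (the function name is my own, and the real check appears inline in the full listing at the end of this post):

def parse_jerks_tweet(text):
  #A Jerks Tweet has exactly two space delimited parts: a two letter code and a number
  parts = text.split(' ')
  if len(parts) == 2 and len(parts[0]) == 2 and parts[1].isdigit():
    return (parts[0].upper(), int(parts[1]))
  return None

So parse_jerks_tweet('Pu 12') gives ('PU', 12) and anything else gives None.
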
So I needed a look-up database table that could map from these two letter codes to the full exercise name.  This was easy to create:

CREATE TABLE lookup (twoletters TEXT, longform TEXT);

...and add to:

INSERT INTO lookup (twoletters,longform) values("PU","Press Ups");
INSERT INTO lookup (twoletters,longform) values("YO","Yoga");

...meaning I had a table like this after entering lots of insert statements:

mysql> select * from lookup;
+------------+--------------------+
| twoletters | longform           |
+------------+--------------------+
| PU         | Press Ups          |
| CP         | Clap Press Ups     |
| FR         | Foam Rolling       |
| CR         | Calf Raises        |
| PI         | Pilates            |
| YO         | Yoga               |
| SQ         | Squatting          |
| BC         | Bicep Curls        |
| TC         | Tricep Curls       |
| FR         | Front Raises       |
| SR         | Side Raises        |
| GS         | General Stretching |
| LW         | Leg Weights        |
+------------+--------------------+
13 rows in set (0.01 sec)

In terms of playing with Twitter in Python, I used some example code from this awesome tutorial.  This tells you how to sign up for a Twitter developer account, read Tweets and send Tweets.  A quick pydoc twitter.Api showed me more ways to use the Python Twitter wrapper.

So (after initiating a Twitter object) you can iterate through each of the Tweets using code like this:

# Display all my tweets
for tweet in twitter.statuses.home_timeline():
  print(tweet['text'])

A quick check of the Twitter developer page showed that this call returns only the last 20 Tweets, so it's not a massive overhead.

So (after checking that a Tweet is a Jerks Tweet by looking for the pattern of two space delimited parts: first part two letters, second part a number) I needed a way to check whether a Tweet was a new one or an old one.  I decided to use another database table and to log the unique Tweet ID that is returned from the Twitter API.  Overall I needed a table to show:
  • The date I did the exercise.
  • The time I did the exercise.
  • The associated Tweet ID.
  • The exercise.
  • The count of the exercise.
So I used this syntax to create the table:

CREATE TABLE exercise (tdate DATE, ttime TIME, tweet_id TEXT, exercise TEXT, count NUMERIC);

So for every Tweet that came in I could run an SQL query to see if the Tweet ID was in the table and, if not, add a new row to the table with an INSERT statement.
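
That check is simple enough; here's a minimal sketch using a parameterised query (the full listing below builds its SQL by string concatenation instead, which works but is less safe):

def tweet_is_new(curs, tweet_id):
  #True if this Tweet ID hasn't already been logged in the exercise table
  curs.execute('SELECT 1 FROM exercise WHERE tweet_id = %s', (str(tweet_id),))
  return curs.rowcount == 0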

Then to finish the job I could send a Twitter direct message back to myself to give me a warm and glowing feeling that I'd done some good to my creaking body.  So my Twitter feed and direct messages look something like this:


Finally I needed to automate the Python script so that it periodically checks the Twitter feed, adds to the database and direct messages me.  Previously I've used endless while loops, but this time I fancied using a cron job (as I've never used one before).  After struggling with the crontab command, this excellent page gave me an alternative that worked using a file in /etc/cron.d; an example entry is sketched below.
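
The entry looks something like this (the schedule and script path are illustrative; note that, unlike a personal crontab, files in /etc/cron.d include a user field):

# /etc/cron.d/jerks: check the Twitter feed every 15 minutes
*/15 * * * * pi /usr/bin/python /home/pi/jerks.py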

So the script works very nicely and after a few days the database is filling up:

mysql> select * from exercise;
+------------+----------+--------------------+--------------------+-------+
| tdate      | ttime    | tweet_id           | exercise           | count |
+------------+----------+--------------------+--------------------+-------+
| 2014-12-10 | 21:33:40 | 542794300512077440 | Press Ups          |     5 |
| 2014-12-10 | 19:51:30 | 542768590521259395 | Clap Press Ups     |    12 |
| 2014-12-09 | 21:46:47 | 542435212346121218 | Foam Rolling       |     1 |
| 2014-12-09 | 21:30:53 | 542431214136039808 | Press Ups          |    10 |
| 2014-12-08 | 19:21:32 | 542036123435369729 | Foam Rolling       |     1 |
| 2014-12-06 | 18:42:07 | 541300987675000064 | Clap Press Ups     |    12 |
| 2014-12-06 | 18:39:29 | 541300987683332864 | Calf Raises        |    12 |
| 2014-12-06 | 18:37:52 | 541302456700659328 | Press Ups          |    12 |
| 2014-12-10 | 21:55:04 | 542790864539329920 | Yoga               |     1 |
| 2014-12-11 | 08:12:05 | 542954961987842176 | Calf Raises        |    40 |
| 2014-12-11 | 21:59:04 | 543161234896574465 | Press Ups          |     5 |
| 2014-12-11 | 22:00:07 | 543163346444444632 | Clap Press Ups     |    12 |
| 2014-12-11 | 22:06:37 | 543164980787878298 | Press Ups          |     4 |
| 2014-12-12 | 07:44:40 | 543310451141212378 | Press Ups          |     3 |
| 2014-12-12 | 09:03:04 | 543330181834567898 | Press Ups          |     1 |
| 2014-12-12 | 09:02:50 | 543330124222222211 | Press Ups          |     1 |
| 2014-12-12 | 07:50:11 | 543311856755551132 | Press Ups          |    10 |
| 2014-12-12 | 18:40:31 | 543475502015523554 | Squatting          |     1 |
| 2014-12-12 | 18:34:33 | 543412342452578571 | Calf Raises        |    12 |
| 2014-12-12 | 18:58:36 | 543480055554545456 | Squatting          |     1 |
+------------+----------+--------------------+--------------------+-------+
20 rows in set (0.00 sec)

Here's all the code (minus sensitive bits of course):

import os
from twitter import *
import MySQLdb
from datetime import datetime
# go to https://dev.twitter.com/apps/new to create your own
# CONSUMER_KEY and CONSUMER_SECRET
# Note that you need to set the access level to read and write
# for this script to work (Found in the settings tab once you
# have created a new application)
# pydoc twitter.Api to get all the twitter documentation
CONSUMER_KEY = "Your_key_here"
CONSUMER_SECRET = "Your_secret_here"

#Database related constants
dbMainTable = "exercise"
dbLookupTable = "lookup"

#Used to take a Twitter format date and turn it into a Python date
def ParseTwitterDate(TwitterDt):
  #The Twitter date is like this -> Sat Dec 06 18:42:07 +0000 2014
  #strptime doesn't deal with the +0000 at all well. So I'll just strip it out
  DateToChange = TwitterDt[0:19] + TwitterDt[25:30]
  return datetime.strptime(DateToChange,'%a %b %d %H:%M:%S %Y')

#######################
#This is the main part of the code

#Database stuff
db = MySQLdb.connect("localhost", "user", "password", "jerks")   #host,user,password,database name
curs=db.cursor()

# get full pathname of .twitterdemo_oauth file in the
# home directory of the current user
oauth_filename = os.path.join(os.path.expanduser('~'),'.twitterdemo_oauth')

# get twitter account login info
if not os.path.exists(oauth_filename):
  oauth_dance('Raspberry Pi Twitter Demo', CONSUMER_KEY, CONSUMER_SECRET, oauth_filename)
(oauth_token, oauth_token_secret) = read_token_file(oauth_filename)


# log in to Twitter
auth = OAuth(oauth_token, oauth_token_secret, CONSUMER_KEY, CONSUMER_SECRET)
twitter = Twitter(auth=auth)
# Tweet a new status update
# twitter.statuses.update(status="Hello The World!")
# Display all my tweets
for tweet in twitter.statuses.home_timeline():
  #Jerks Tweets have two space delimited parts (two letters, a space, then a number) and come from my account.  Check all of this
  #First check that it was from my account
  if tweet['user']['screen_name'] == 'PDW':
    #Get all the parts of the tweet
    TweetParts = tweet['text'].split(' ')
    #Check for all the right components
    if (len(TweetParts) == 2) and (len(TweetParts[0]) == 2) and (TweetParts[1].isdigit()):
      print tweet['text']
      #This means that this is a Jerks tweet.  Need to check whether it's new or old by looking at the database.  We do this by checking the id.  First form the SQL query, then execute it
      JerksQuery = 'SELECT * FROM ' + dbMainTable + ' where tweet_id="' + str(tweet['id']) + '";'
      curs.execute (JerksQuery)
      #Check whether we got anything in response, if not we add to the database
      if (curs.rowcount == 0):
        #Execute a query to add to the database
        print "Adding to the database"
        try:
          #Form the new query then execute it.  First get the date
          MyDateTime = ParseTwitterDate(tweet['created_at'])

          #Now we need to look up the two letter code from the Tweet to get its long form.  We do this via the lookup table
          JerksQuery = 'SELECT * FROM ' + dbLookupTable + ' where twoletters = "' + TweetParts[0].upper() + '";'
          print JerksQuery
          curs.execute (JerksQuery)
          #Check whether the lookup found the code; if not, tell the user it's unknown
          if (curs.rowcount > 0):
            #We've found the long form value of the exercise in the query response.  Add to the insert query
            for LookUpResponse in curs.fetchall():
              LongFormExercise = LookUpResponse[1]

            #Form the query
            JerksQuery = 'INSERT INTO ' + dbMainTable + ' (tdate,ttime,tweet_id,exercise,count) values("' + str(MyDateTime.date()) + '","' + str(MyDateTime.time()) + '","' + str(tweet['id']) + '","' + LongFormExercise + '",' + TweetParts[1] + ');'
            print JerksQuery
            curs.execute (JerksQuery)
            db.commit()
            print "Data committed"
            MessageToTweet = 'Nice one jerk!  You did ' + str(TweetParts[1]) + ' ' + LongFormExercise
            print MessageToTweet
            twitter.direct_messages.new(user="PDW",text=MessageToTweet)
          else:
            print "Exercise not in lookup table"
            twitter.direct_messages.new(user="PDW",text='Hey jerk! ' + TweetParts[0] + ' is not a code Jerks understands')
        except:
          print "Error: the database is being rolled back"
          db.rollback()
      else:
        print "No new entries to add"

Monday 1 December 2014

Raspberry Pi and Strava API #2

In a previous post I blogged about how I've managed to cross the streams of two of my hobbies, geekery and exercise, using my Raspberry Pi and the Strava API.  I've extended this by creating a capability to use Strava API data to see if my exercise activities are actually making me fitter.

One key part of this is having an exercise regime that is repeatable in a controlled fashion to allow the likes of me, who lacks an exercise lab, to do some analysis.  With half a mind on the geek potential it provided, back in September I started to do HIIT sessions using the same bike setup and the same exercise protocol.

Now generally I am skeptical of exercise fads but HIIT:
  • Seemed to have a number of reputable scientific studies backing it.
  • Enabled me to fit in sessions in very short time windows.
  • Is very basic and needs no HIIT-specific equipment, so no one is trying to sell you kit or DVDs or exercise programmes.
So my bike is set up on a turbo trainer and I always have it in the same gear and on the same turbo resistance setting.  I know that things like tyre pressure can vary (thus changing rolling resistance) but by and large the conditions are similar.

I have a heart rate monitor and speed/cadence sensor on my bike so I can log these things as I exercise.  I don't have a fancy-dan power meter.

My exercise regime is as follows:
  • 4 minute warm up
  • 6 lots of flat out for 20 seconds followed by 10 seconds rest.
  • 3 minute warm down.
...so all done in 10 minutes flat.

So in Strava I get these sorts of results:


So you can see my speed and cadence peak for every interval, while my heart rate increases gently during the warm-up, goes up quickly (in steps) during the flat-out periods and then drops during the warm-down.

So all good stuff but as a geek I need to know whether the results above are any better than this other session below:


Looks like a better first couple of reps, but it tailed off at the end.  The Strava UI didn't seem to give me a good way to compare and contrast my ~20 HIIT sessions, so I decided to find a geeky way!  A quick look at the Strava API documentation showed that there is a "laps" resource in the API, so I decided to use it, as my Garmin logs each section of the HIIT activity as a lap.

First you identify the Strava activity in question by listing all activities:

https://www.strava.com/api/v3/activities?access_token=<your token here>&per_page=200

You then list the laps for a specific activity using this URL:

https://www.strava.com/api/v3/activities/331779633/laps?access_token=<your token here>&per_page=200

Which gives you easily parsable JSON output like this (just 2 laps shown for clarity):

[{"id":744754123,"resource_state":2,"name":"Lap 1","activity":{"id":331779633},"athlete":{"id":4309532},"elapsed_time":240,"moving_time":241,"start_date":"2014-11-18T19:49:59Z","start_date_local":"2014-11-18T19:49:59Z","distance":1527.27,"start_index":0,"end_index":36,"total_elevation_gain":0.0,"average_speed":6.4,"max_speed":6.7,"average_cadence":69.1,"average_watts":70.7,"average_heartrate":85.2,"max_heartrate":116.0,"lap_index":1},{"id":744744535,"resource_state":2,"name":"Lap 2","activity":{"id":220668522},"athlete":{"id":4309532},"elapsed_time":20,"moving_time":19,"start_date":"2014-11-18T19:54:04Z","start_date_local":"2014-11-18T19:54:04Z","distance":245.38,"start_index":37,"end_index":45,"total_elevation_gain":0.0,"average_speed":12.3,"max_speed":12.4,"average_cadence":124.5,"average_watts":320.2,"average_heartrate":134.8,"max_heartrate":148.0,"lap_index":2}

So for each lap you get a range of interesting information (a quick sketch of pulling these fields out follows the list), such as:
  • average_cadence
  • average_watts (which I assume to be estimated as I don't have a power meter)
  • average_heartrate
  • max_heartrate
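
The code for this post is PHP (full listing at the bottom), but here's a rough sketch of the two API calls and the field extraction in Python (the token is a placeholder and I've just picked the first activity in the list):

import requests

TOKEN = '<your token here>'
BASE = 'https://www.strava.com/api/v3'

#List activities (Strava returns at most 200 per page)
activities = requests.get(BASE + '/activities', params={'access_token': TOKEN, 'per_page': 200}).json()

#Fetch the laps for the first activity and pick out the interesting fields
laps = requests.get(BASE + '/activities/%d/laps' % activities[0]['id'], params={'access_token': TOKEN}).json()
for lap in laps:
  print('%s: cadence=%s watts=%s avg_hr=%s max_hr=%s' % (lap['name'], lap['average_cadence'], lap['average_watts'], lap['average_heartrate'], lap['max_heartrate']))
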
I've given every HIIT session a name in Strava using the format "HIIT YYYYMMDD", so it was easy to write some PHP to:

  • List all activities and put the resulting json into an array.
  • Loop through the array and pick out each of the HIIT activities.
  • For each HIIT session, call the API to get lap resource information.
  • Parse the json to pick out measurements for each lap.
  • Print the results into a web page.

Full code listing is at the bottom of this blog post.  The output I get within a browser window is shown below:


So it's a very raw CSV print-out of 3 key metrics per lap, which I could easily pull into Excel for data analysis.  For example, I could put together this graph showing average cadence per lap:


So laps 2, 4, 6, 8, 10 and 12 are the laps where I put in a lot of effort for 20 seconds.  (Lap 1 is the warm-up, lap 13 is the final 10 seconds' rest after hard effort 6 and lap 14 is the warm-down.)

I first put this graph together a couple of weeks ago and one initial observation was that in previous HIIT sessions there was a lot of variance, both within a session and from session to session.  This was because I had no real targets for the hard efforts; I just did each one as hard as I could and held on for the last couple.  Hence in the last two weeks I've focussed on doing "tighter" sessions where the target is 130rpm across all the efforts.  You can see this on the graph, where there's much less of a spread from lap 2 to lap 12 and they're clustered in the 120 to 135 range.

Next: More HIIT sessions, more analysis and attempting to draw graphs on web page rather than nasty old Excel.

Full code (remember I'm new to PHP):

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Weeks Family Strava Fun</title>
    <link rel="stylesheet" type="text/css" href="/stylesheet/style1.css">
  </head>
  <body>
   <!-- Get the value from Strava-->
   <?php
     //Use cURL to get the values from Strava.  Max 200 activities per page.  After that will need to play with pages...
     $curl = curl_init();
     curl_setopt ($curl, CURLOPT_URL, "https://www.strava.com/api/v3/activities?access_token=<your token here>&per_page=200");
     curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
     $result = curl_exec ($curl);
     curl_close ($curl);
     $json_a=json_decode($result,true);
     //Good debug code
     //foreach($json_a as $num)
     //{
     //print $num['start_date']."-".$num['name']."-".$num['type']."-".$num['distance']."-".$num['total_elevation_gain']."-".$num['average_speed']."<br>";
     //}

     //Now iterate through the main results, picking out the HIITs and summarising them
     foreach($json_a as $num)
     {
       if (substr($num['name'],0,4) == "HIIT") //Check for HIIT sessions
       {
       //This is an entry with a HIIT
       $hiitNameDate = $num['name'].",".$num['start_date_local'].",";
       //Download the associated lap
       $activityNumber = $num['id'];
       //Form the URL
       $lapURL = "https://www.strava.com/api/v3/activities/".$activityNumber."/laps?access_token=<your token here>";
       //Do cURL with this URL
       $curl = curl_init();
       curl_setopt ($curl, CURLOPT_URL, $lapURL);
       curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
       $lapResult = curl_exec ($curl);
       curl_close ($curl);
       //Turn into an array we can parse
       $json_b=json_decode($lapResult,true);
       foreach($json_b as $lapNum)
         {
         echo $hiitNameDate.$lapNum['name'].",Ave Cadence,".$lapNum['average_cadence']."<br>";
         echo $hiitNameDate.$lapNum['name'].",Ave Heart,".$lapNum['average_heartrate']."<br>";
         echo $hiitNameDate.$lapNum['name'].",Max Heart,".$lapNum['max_heartrate']."<br>";
         }
       }
     }
   ?>
      <h1>Summary of HIIT Activity</h1>
    <p>Warning: This only shows a max of 200 Stravas!  Will have to play with pages after that...</p>
  </body>
</html>