VPN bridge from home network to AWS VPC with Raspberry Pi

Introduction and disclaimer

I wanted to extend my home network to a Virtual Private Cloud (VPC) within Amazon Web Services (AWS), primarily for use as a Jenkins build farm. I have achieved this using a Raspberry Pi as my Customer Gateway device. This post covers configuring both the Raspberry Pi and AWS from scratch. I'm posting as a reminder to myself; hopefully others will find this useful.

NOTE: I found this post by Pahud Hsieh on hackmd.io very helpful while developing this. I have also relied heavily on the excellent documentation at http://aws.amazon.com/documentation/

I have a fairly standard home network on 192.168.0.0/24 with a router provided by my ISP. This post uses a Raspberry Pi on a static IP address within my home network as a VPN gateway, allowing any device on my home network to communicate with EC2 instances (virtual machines) running within my VPC.

Network Diagram


Prerequisite

  • Raspberry Pi on a static IP address (in this example 192.168.0.30)
  • The public IP address of your home gateway is static (my ISP doesn't offer static IP addresses, but the address hasn't changed for years).
  • AWS account (I suggest also running some tutorials, the free tier is sufficient).

I’m starting with a clean install of Raspbian Jessie Lite, although other distributions of Linux should work.

Configuring VPC

Log in to AWS and select Services->VPC; this takes you to the VPC dashboard. Start the VPC Wizard.


Choose VPC with a Private Subnet Only and Hardware VPN access and click Select.


Wait for the VPN to be created


Now, on the left towards the bottom, find the VPN Connections page and click the Download Configuration button at the top of the page.


In the downloaded configuration file find the tunnel groups under the IKE section

!
! The tunnel group sets the Pre Shared Key used to authenticate the 
! tunnel endpoints.
!
tunnel-group <TUNNEL1_IP> type ipsec-l2l
tunnel-group <TUNNEL1_IP> ipsec-attributes
pre-shared-key <PSKEY_STRING>

You will need the <TUNNEL1_IP> and <PSKEY_STRING> values later; note them down.

Ensure that the static route to your home network exists in the VPN. Open VPN Connections and select the Static Routes tab; it should show the CIDR for your home network. If not (my setup didn't), click Edit and type in the CIDR.


Configure the Raspberry Pi

Enable the Random Number Generator

Edit /boot/config.txt and append

# Enable random number generator
dtparam=random=on

Reboot and then install the random number generator tools

sudo apt-get install rng-tools
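
Before moving on it is worth checking that the kernel actually exposes the hardware RNG; it should appear as /dev/hwrng (an assumption based on the standard Raspbian device name):

ls -l /dev/hwrng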

Install Openswan

sudo apt-get install -y openswan lsof

During package installation you are prompted about using X.509 certificates. I'm sure AWS supports these; for now I'm skipping them for simplicity.


IPSec configuration

Edit /etc/ipsec.conf and set the content as shown below. NOTE this includes configuration files from /etc/ipsec.d/*.conf, which allows different files for different connections.

# /etc/ipsec.conf - Openswan IPsec configuration file
#
# Manual: ipsec.conf.5
#
# Please place your own config files in /etc/ipsec.d/ ending in .conf
version 2.0 # conforms to second version of ipsec.conf specification

# basic configuration
config setup
    # Debug-logging controls: "none" for (almost) none, "all" for lots.
    # klipsdebug=none
    # plutodebug="control parsing"
    # For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
    protostack=netkey
    nat_traversal=yes
    virtual_private=
    oe=off
    # Enable this if you see "failed to find any available worker"
    # nhelpers=0

# You may put your configuration (.conf) files in /etc/ipsec.d/ and uncomment this.
include /etc/ipsec.d/*.conf

Create a configuration file for this connection, edit /etc/ipsec.d/home_to_aws.conf

conn home-to-aws
   type=tunnel
   authby=secret
   #left=%defaultroute
   left=192.168.0.30
   leftid=123.123.123.123
   leftnexthop=%defaultroute
   leftsubnet=192.168.0.0/24
   right=<TUNNEL1_IP>
   rightsubnet=10.0.0.0/16
   pfs=yes
   auto=start

Where

left – The IP address of your Raspberry Pi on your home network

leftid – The IP address of your home gateway

leftsubnet – the CIDR of your home network

right – the IP address of Tunnel1 in your AWS gateway.

rightsubnet – the CIDR of your VPC

Input the pre-shared key

Edit /var/lib/openswan/ipsec.secrets.inc and set the content as below

123.123.123.123 <TUNNEL1_IP> : PSK "<PSKEY_STRING>"
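
On Raspbian the openswan package's /etc/ipsec.secrets should already pull this file in; if yours doesn't, add the include yourself (a sketch of the one line required):

include /var/lib/openswan/ipsec.secrets.inc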

Edit /etc/sysctl.conf

Append the following lines

net.ipv4.ip_forward=1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0

Then run sudo sysctl -p to reload it.

Check IPSec status

$ sudo ipsec verify
Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path [OK]
Linux Openswan U2.6.38/K4.4.11-v7+ (netkey)
Checking for IPsec support in kernel [OK]
SAref kernel support [N/A]
NETKEY: Testing XFRM related proc values [OK]
 [OK]
 [OK]
Hardware RNG detected, testing if used properly [OK]
Checking that pluto is running [OK]
Pluto listening for IKE on udp 500 [OK]
Pluto listening for NAT-T on udp 4500 [OK]
Checking for 'ip' command [OK]
Checking /bin/sh is not /bin/dash [WARNING]
Checking for 'iptables' command [OK]
Opportunistic Encryption Support [DISABLED]

 

Restart IPSec

sudo service ipsec restart

Check IPsec status

Make sure an active connection is running

$ sudo service ipsec status
● ipsec.service - LSB: Start Openswan IPsec at boot time
Loaded: loaded (/etc/init.d/ipsec)
Active: active (running) since Mon 2016-07-11 17:56:40 UTC; 8min ago
Process: 1660 ExecStop=/etc/init.d/ipsec stop (code=exited, status=0/SUCCESS)
Process: 1746 ExecStart=/etc/init.d/ipsec start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/ipsec.service
├─1840 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --...
├─1841 logger -s -p daemon.error -t ipsec__plutorun
├─1842 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --...
├─1845 /bin/sh /usr/lib/ipsec/_plutoload --wait no --post
├─1846 /usr/lib/ipsec/pluto --nofork --secretsfile /etc/ipsec.secr...
├─1853 pluto helper # 0 
├─1854 pluto helper # 1 
├─1855 pluto helper # 2 
└─1974 _pluto_adns
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: STATE_MAIN_I2: sent MI2,...R2
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: NAT-Traversal: Result us...ed
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: transition from state ST...I3
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: STATE_MAIN_I3: sent MI3,...R3
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: Main mode peer ID is ID_...4'
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: transition from state ST...I4
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: STATE_MAIN_I4: ISAKMP SA...8}
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #2: initiating Quick Mode PS...8}
Jul 11 17:56:42 vpc pluto[1846]: "home-to-aws" #2: transition from state ST...I2
Jul 11 17:56:42 vpc pluto[1846]: "home-to-aws" #2: STATE_QUICK_I2: sent QI2...e}
Hint: Some lines were ellipsized, use -l to show in full.

Start on reboot

sudo update-rc.d ipsec defaults

Reboot the Raspberry Pi and recheck the ipsec service status.

Check VPC connection in the VPC console

Make sure Tunnel1 is UP


Redundancy

For increased reliability, add the second tunnel configuration to the Raspberry Pi configuration.
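
The downloaded configuration file contains a second tunnel-group with its own IP and pre-shared key. As a sketch (where <TUNNEL2_IP> and <PSKEY2_STRING> stand for the values from your own configuration file), a second connection can be defined in /etc/ipsec.d/home_to_aws2.conf:

conn home-to-aws-2
   type=tunnel
   authby=secret
   left=192.168.0.30
   leftid=123.123.123.123
   leftnexthop=%defaultroute
   leftsubnet=192.168.0.0/24
   right=<TUNNEL2_IP>
   rightsubnet=10.0.0.0/16
   pfs=yes
   auto=add

with a matching line in /var/lib/openswan/ipsec.secrets.inc:

123.123.123.123 <TUNNEL2_IP> : PSK "<PSKEY2_STRING>"

I have used auto=add here so the connection is loaded but not started; if Tunnel1 goes down, the second tunnel can be brought up with ipsec auto --up home-to-aws-2.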

 

Create an EC2 Instance

To test the configuration, launch an EC2 instance into your VPC. As an example I'm launching the free-tier Ubuntu server.

From the EC2 Dashboard select Launch Instance, select Ubuntu Server 14.04 LTS (HVM), SSD Volume Type, select t2.micro (Free Tier Eligible) then click Next: Configure Instance Details. Select your VPC as the Network.


Click Next: Add Storage, Next: Tag Instance, Next: Configure Security Group. Create a new security group and add the rules you require; mine allows SSH and ICMP (ping) from my home subnet.


Click Review and Launch, then Launch. Create a new key pair (or use an existing one if you prefer), download the key pair and keep it safe. Launch and then View Instance, and wait for the status checks to complete.


Note the private IP address – in this example 10.0.1.19

From your Raspberry Pi you should now be able to ping the instance

$ ping 10.0.1.19
PING 10.0.1.19 (10.0.1.19) 56(84) bytes of data.
64 bytes from 10.0.1.19: icmp_seq=1 ttl=64 time=200 ms
64 bytes from 10.0.1.19: icmp_seq=2 ttl=64 time=204 ms

Adding local Routes

Devices on your home network that are to access the VPC need to have a static route added that identifies the Raspberry Pi as the gateway to use for 10.0.0.0/16.

On BSD-based devices (including Macs):

sudo route -n add 10.0.0.0/16 192.168.0.30
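
The command above is the BSD route syntax; on Linux devices the iproute2 equivalent is:

sudo ip route add 10.0.0.0/16 via 192.168.0.30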

ssh access

Copy the key file that you previously downloaded to the machine you want to open an ssh session from. Ensure the pem file has read-only permissions.

chmod 400 jenkins_aws.pem

Open an ssh session using the key

ssh -i jenkins_aws.pem ubuntu@10.0.1.19

All being well you are now logged in to your EC2 instance.

 

Install Raspberry Pi img using OSX

Up to now I have used Win32DiskImager.exe in a Windows VM to write images onto SD cards for the Raspberry Pi. For some reason, having not done this for a while, it has stopped working for me, so I decided to write the image using OSX directly.

Downloading

Firstly I downloaded and unzipped the Raspbian image in my Downloads folder; the archive I downloaded contained 2016-02-09-raspbian-jessie.img.
In the terminal on the mac

cd ~/Downloads
ls -l *.img

this showed
-rwx------ 1 davidcozens staff 4127195136 20 Feb 13:11 2016-02-09-raspbian-jessie.img

Programming the SDCard

I inserted the SDCard into a USB card reader (The internal reader on my MBP didn’t recognise the card), then to identify which device OSX had mounted this as

diskutil list

The output looked like

/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:          Apple_CoreStorage Macintosh HD            999.7 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.1 MB   disk0s3
/dev/disk1 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS Macintosh HD           +999.3 GB   disk1
                                 Logical Volume on disk0s2
                                 A8176DD4-96DB-4F64-B013-EAA7D075A63B
                                 Unencrypted
/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *8.0 GB     disk2
   1:             Windows_FAT_32 boot                    62.9 MB    disk2s1
   2:                      Linux                         4.1 GB     disk2s2
/dev/disk3 (disk image):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +1.6 TB     disk3
   1:                        EFI EFI                     209.7 MB   disk3s1
   2:                  Apple_HFS Time Machine Backups    1.6 TB     disk3s2
/dev/disk6 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   *613.9 KB   disk6
Now BE VERY CAREFUL to identify the device corresponding to the SD card. Using the wrong device name in the commands below could erase all of the data on your Mac. Look for the size that matches your SD card. In my case /dev/disk2 is the SD card; disk0 and disk1 are the internal HD on my Mac and disk6 is my backup device.

In the commands below I use /dev/diskx to represent the SD card device name.
Unmount the SDCard

diskutil unmountDisk /dev/diskx

zero out the partition table

sudo dd if=/dev/zero of=/dev/rdiskx bs=1024 count=1

Program the SDcard

sudo dd if=2016-02-09-raspbian-jessie.img of=/dev/rdiskx bs=4m
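
dd prints nothing while it runs; on OSX you can press Ctrl+T in the dd terminal, or send SIGINFO from a second terminal, to get a progress line:

sudo kill -INFO $(pgrep '^dd$')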

ensure all data is synced

sync

You should now be safe to remove and use the SDCard.

Starting with CODESYS on the Raspberry Pi – Update

Last May I blogged about Starting with CODESYS on the Raspberry Pi. Since then I have found it a great teaching platform to help others learn CODESYS. My getting started tutorial is now a little out of date, so I thought I would reproduce it using current versions of the software. This blog walks through the steps of installing CODESYS under Windows 10, installing the CODESYS for Raspberry Pi plugin, installing the CODESYS runtime on Raspbian, and establishing communications.
Thanks to 3S the process is now much easier.

Installing CODESYS on Windows 10

Download CODESYS from https://www.codesys.com/download/  (You need to register, but then the download is free). Run the installer accepting all defaults

Installing CODESYS for Raspberry Pi

Download CODESYS for Raspberry Pi from the CODESYS Store http://store.codesys.com/codesys-control-for-raspberry-pi-sl.html. Again you have to register first; everything is contained within a single .package file.
Installation is a two stage process, firstly installing the package within CODESYS, and then subsequently installing CODESYS on the Raspberry Pi.

Installing the CODESYS Package

Launch the CODESYS IDE, Tools->Package Manager… then Install…, browse to the Package previously downloaded. Accept the license and then accept the default on every menu in the installer. The package should install as shown, and CODESYS then needs to be restarted.

Preparing the Raspberry Pi

Ensure that you have the latest version of Raspbian installed on your Raspberry Pi.

New Install

3S give links on their download page http://store.codesys.com/codesys-control-for-raspberry-pi-sl.html and the First Steps PDF tells you how to set up the RPi for various different peripherals.

Upgrade Raspbian

If you choose to upgrade, be careful of anything important, perform backups before you start. On the Raspberry Pi issue these commands to upgrade, then continue to install CODESYS as instructed below.

sudo apt-get update
sudo apt-get dist-upgrade
sudo rpi-update

Installing CODESYS on the Raspberry Pi

Launch CODESYS and then select Tools->Update RaspberryPi, enter the password for the Raspberry Pi, click scan.
If your RPi is found then select the IP Address from the dialog and click OK.
If your RPi is not found, it is possible to enter the IP Address manually, then click OK on the Update Raspberry Pi dialog – this will install the CODESYS runtime on your Raspberry Pi. Progress is reported in the bottom left corner of the CODESYS window.
The update only takes a few seconds, on completion check for messages

There should be a single message saying the Update has finished

Log in to the Raspberry Pi and reboot it – now you are ready to test your installation.

Testing The Installation

Launch CODESYS and select New Project…
Select Standard Project and give the project a name, click OK.
Select the Device: as CODESYS Control for Raspberry Pi and the programming language to use, then click OK. A project is then created.
Double click on PLC_PRG (PRG) and create a simple program
Now try to login (Online->Login),
Click Yes to start a network scan. (For me this didn't work – probably because I am running Windows in a virtual machine. If that happens, manually type the IP address of your RPi into the box representing the device and hit Enter; your device should then be found.)
Select Online->Login again, this time login and download should succeed, the program will be in the STOP state. Switch to the PLC_PRG tab and you should see your program and Variables at their starting values.
Select Debug->Start and the application will start running; you will then see the variables changing.
Have fun…

Cross development using eclipse and GCC for the RPi

Introduction

When I decided to use eclipse to do some cross development for the Raspberry Pi hosted on Windows, I found lots of useful posts elsewhere that each gave part of the solution, or worked well only on a Linux host. I found both of the sites below particularly helpful. http://www.gurucoding.com/en/raspberry_pi_eclipse/index.php
http://gnutoolchains.com/raspberry/tutorial/
My blog post Using google test with CDT in eclipse gives details on how to install eclipse CDT, which is an assumed starting point for this post.

Install GCC Cross compiler

It is possible to build your own GCC cross compiler, however this is a complex process and prebuilt versions are available. I chose to download from http://gnutoolchains.com/raspberry/, and selected raspberry-gcc4.6.3.exe to match wheezy Raspbian. Installation was as simple as running the executable and accepting the terms of the license.

Synchronise header file and library versions

This step is required to ensure that the versions of headers and libraries used on the Raspberry Pi match those used by the cross compiler.
Run C:\SysGCC\Raspberry\TOOLS\UpdateSysroot.exe
Press Select… and set up a new SSH connection to your Raspberry Pi (mine is running a newly installed image and has the default username and password).
Click Connect and then save the key when prompted, then click Synchronize and wait while the headers and libraries are copied (this takes a few minutes).
NOTE: Whenever you update your RPi it is a good idea to resynchronize to ensure that your development environment remains in step with your target.

Configure eclipse for cross compilation

Open eclipse and in the Project Explorer right click and select New…->C++ Project. Select the Hello World C++ Project and Cross GCC, give the project a name.
Click Next> and customise as you wish
Click Next>
Click Next> and specify the cross-compiler prefix as arm-linux-gnueabihf- and the cross-compiler path as C:\SysGCC\Raspberry\bin.
Click Finish and then build the project; there should be no errors.
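
As a quick sanity check (assuming the default install location used above), the cross compiler should also run from a command prompt:

C:\SysGCC\Raspberry\bin\arm-linux-gnueabihf-g++.exe --version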

Configure RPi for gdb

Run SmarTTY.exe from C:\SysGCC\Raspberry\TOOLS\PortableSmartty, and open the connection to your RPi.
Enter this command to create a directory to debug in
mkdir remote-debugging
Verify that gdbserver is on the path and which version of gdbserver is installed
gdbserver --version

To automate the use of gdb on the target RPi it needs to have the DataStore server running. 
 
Download the server from http://download.eclipse.org/tm/downloads/index.php; I used the 3.6M4 release and downloaded rseserver-linux-3.6-M4.tar to the /tmp directory on the RPi. Following the install advice from the eclipse help, I executed these commands on the RPi

sudo mkdir /opt/rseserver
cd /opt/rseserver
sudo tar xf /tmp/rseserver-linux-3.6-M4.tar

The daemon needs to be run as root, so it can be manually started by

sudo su
cd /opt/rseserver
perl ./daemon.pl &

To have it start automatically each time the RPi reboots, edit /etc/rc.local and, just before the exit 0 at the end of the default file, launch the server

cd /opt/rseserver
perl ./daemon.pl &
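
The end of /etc/rc.local would then look something like this:

# Start the RSE DataStore server (rc.local runs as root)
cd /opt/rseserver
perl ./daemon.pl &

exit 0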

When the RPi reboots you should see a message that says

Daemon running on: raspberry, port: 4075 

Configure eclipse for Cross debugging

Right click on the HelloWorld project in the project explorer and New->File, set the name to .gdbinit and click Finish.
In the file set sysroot to point to the cross tools
set sysroot C:\SysGCC\Raspberry\arm-linux-gnueabihf\sysroot
 

In the project explorer right click on the HelloWorld project and select Debug As->Debug Configurations…, then select C/C++ Remote Application and click the New button (circled).

In the Debug Configurations Dialog the first step is to configure a connection to the Raspberry Pi, to do this, next to Connection: click New…
 
Select a Remote System type of Linux, and click Next>
Enter the IP address of your RPi in Host name:, name the connection and set a description, then Next>.
Click Next>

 

Click Finish, ensure that the Connection: is set to your new connection
Then click Browse… under Remote Absolute File Path for C/C++ Application:

 

Then try to expand My Home, you will be prompted for user credentials and certificates

 

 

 

Select the remote-debugging directory and click OK.

 

When the binary is transferred to the Raspberry Pi we need to ensure that it has execute permissions set, so in the Commands to execute before application box enter chmod 777 /home/pi/remote-debugging/HelloWorld
Now in the Debug Configurations dialog select the Debugger tab and on the Main sub-tab browse to the arm-linux-gnueabihf-gdb.exe debugger.
Now select the Gdbserver Settings sub-tab, the default settings worked for me and looked like this.
Apply these settings, and Close.
Click Window->Perspective->Open Perspective->Other... select Remote System Explorer, and click OK.
In the Remote Systems tab right click on your connection and select Connect. (I found if I didn’t connect to the RPi before launching the debugger the launch would fail, retrying usually worked – but this is cleaner).
Now select Run->Debug Configurations, ensure your configuration is selected (it should be) and click Debug. You will be prompted to confirm the perspective switch, tick to Remember your decision and click Yes.

You should then see your application stopped on the first line and ready to be debugged

I have tested this setup with setting breakpoints, viewing disassembly, stepping into and out of functions and inspecting variables. It all seems to work, however I have not yet diagnosed the error Cannot access memory at address 0x0. I will update this post if/when I get to the bottom of it.

Clean error

If you don’t have rm.exe on your path you will see an error like this when you clean the project.
This can be resolved by downloading rm.exe as part of MinGW. Find the path to rm.exe (with a default install it is C:\MinGW\msys\1.0\bin), add this to the end of your Windows PATH and restart eclipse.

Further Reading

If you find this tutorial helpful you may find my previous postings on eclipse helpful too
Unit testing with googletest
Code coverage with Gcov
Static analysis with CppCheck
I develop for RPi by having a unit test project that compiles and runs on my PC for speed, and then a cross compiled project that shares the same source code and runs on the RPi.
 
 




Static analysis with Cppcheck in eclipse CDT and Jenkins

Static analysis tools look for a wide range of potential errors in code that compilers do not look for. Cppcheck is an open source static analysis tool; it is extensible and being actively developed. These are the sorts of errors that can be found

  • Out of bounds checking
  • Memory leaks checking
  • Detect possible null pointer dereferences
  • Check for uninitialized variables
  • Check for invalid usage of STL
  • Checking exception safety
  • Warn if obsolete or unsafe functions are used
  • Warn about unused or redundant code
  • Detect various suspicious code indicating bugs

This post walks through the process of installing Cppcheck and integrating it with eclipse CDT as well as on Jenkins.

Installing Cppcheck

Download and run the msi installer from http://cppcheck.sourceforge.net. I clicked on through accepting the defaults.
NOTE: I installed version 1.71, originally I tried the x64 version but had problems, the x86 version worked fine.
To understand how to run cppcheck refer to the manual.

Installing the eclipse plugin

In eclipse click Help->Eclipse Marketplace and search for Cppcheclipse.
Click Install then Confirm >
Accept the terms of the license and Finish. (I was prompted about installing unsigned software – I chose to continue). When prompted restart eclipse.
The next step is to configure the plugin. In Eclipse go to Window->Preferences->C/C++->cppcheclipse and set the path for the binary
Now review the Problems and Settings preferences; I have all Problems enabled.
Now In the C/C++ perspective select the project that you want to check, right click and select cppcheck->Run cppcheck.
Any problems found are shown in the Problems tab
Double clicking on an issue takes you to the offending code (could this be a deliberate error?)

 

 

NOTE: I had repeated problems with errors: URI is not absolute
I worked around this by changing all include paths for the project to absolute paths. That is not a real solution for me, so for now I change the paths to analyse and then change them back afterwards, or ignore the eclipse plugin and run on the command line.

Installing the Jenkins Plugin

Log in to Jenkins and go to Jenkins->Manage Jenkins->Manage Plugins, select the Available Plugins tab and filter for cppcheck.
Check the Install check box and select Install without restarting.
At this point you need to configure Jenkins to run the analysis and report on it. Static analysis typically takes much longer than compilation for the same code, so in a real world application I would create a new Jenkins job that checks out the code and runs the analysis. For my home project I'll just extend the existing job.
Edit the node configuration for the build slave (Jenkins->Manage Jenkins->Manage Nodes) to add a label for cppcheck and also set an environment variable to say where cppcheck is installed.
Now make your Jenkins job configuration depend on the cppcheck label.
Add a build step to run cppcheck; note the 2> in the example below, which redirects stderr into a file – this is needed to capture the xml output.
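
As a sketch – assuming the environment variable set on the node is called CPPCHECK and the code to analyse lives in Source and Include – the batch command might look like:

"%CPPCHECK%\cppcheck.exe" --enable=all --xml Source Include 2> cppcheck.xml
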
Add a Post Build step to publish the cppcheck results (Once it is working play with the advanced options).
Now run a couple of builds and you should see graphing of the analysis results, be able to drill into the results down to the specific lines in files.

gcov code coverage in eclipse and Jenkins

Introduction

This post gives step by step instructions for adding code coverage with gcov to a google test eclipse project that is built as part of a CI process on Jenkins. This post starts from the point of already having a project compiled by a gnu compiler, in an eclipse CDT project that is built on Jenkins. These posts give the information required to get to this stage.

Using google test with CDT in eclipse
Integrating SVN, Trac and Jenkins with eclipse
Automating eclipse CDT build on Jenkins

To enable code coverage with gcov the project needs to be instrumented (built with flags that cause the raw coverage information to be saved when an executable runs). Having built the code with these flags enabled, and run the executable, both eclipse and Jenkins can be configured to view the coverage.

Enabling code coverage in the build

To determine what code coverage the unit tests have, in the unit test project in eclipse right click on the project and select Properties->C/C++ Build->Settings->Tool Settings->GCC C++ Compiler->Debugging and check Generate gcov information (-ftest-coverage -fprofile-arcs).

Still in the Properties->C/C++ Build->Settings->Tool Settings select MinGW C++ Linker->Miscellaneous and then in the Linker flags box add -ftest-coverage -fprofile-arcs.
Click OK to close the properties dialog, then clean the project and build. Look in the Debug folder and there should be a .gcno file for every source file.

Coverage Reports in eclipse

CDT includes support for gcov, however this support relies on external tools that must be on the path. I tried to add them to the path in the eclipse project but couldn't get it to work, so I had to extend the Windows PATH environment variable. The path needs to include addr2line, c++filt and nm; with my install of MinGW I added C:\MinGW\bin. Shut down and restart eclipse so that it has access to the new path.
Now run your unit test application in eclipse; this will generate a .gcda file for each source file. If you open any .gcda or .gcno file you should see a dialog asking which binary the coverage data belongs to.
Click OK and the gcov tab should be opened with coverage shown
NOTE: If you get an error here the most likely reason is that the gcov data is not all from a single binary version of the test application. The quick way to resolve this is to delete the Debug directory, rebuild and rerun the tests.
NOTE: When you double click on a file in the gcov tab it opens the associated source file and should show line by line coverage, except in my scenario it doesn't for linked folders. To see line by line coverage for a file of interest from the linked folders, open the file from under the unit test project, and the coverage will show. This is being tracked as bug 447554 in the eclipse bugzilla. (Update 9 January 2016 – this is now fixed in updates-nightly-mars, thank you Jeff)

Coverage in Jenkins

To get a coverage report added to the Jenkins job we have to convert the gcov output into a format that can be understood by Jenkins (using gcovr) and then configure Jenkins with the cobertura plugin.

gcovr

gcovr is a python application, so before it can be used you need python installed on the Jenkins slave. Download and install ActivePython from https://www.activestate.com/activepython/downloads. In my case I downloaded ActivePython 2.7.10.12 (x64). I accepted all defaults on the installer.
Open a command prompt and install gcovr
pypm -g install gcovr

I found it very hard to get the correct options on gcovr for the files I wanted in the output. I installed ActivePython and gcovr in the same way on my desktop machine, and experimented in the eclipse workspace. For me, with an eclipse workspace at C:\Users\David\workspace, running the command below from C:\Users\David\workspace\hex2num\Source gave the desired output, i.e. coverage for each source and include file being tested in hex2num.

python C:\Python27\Scripts\gcovr -r . --object-directory=..\..\hex2numUnitTest\Debug -f .*\hex2num\Include\ -f .*\hex2num\Source\

cobertura

Cobertura is a plugin for Jenkins that displays code coverage. To install the plugin log in to Jenkins and go to Manage Jenkins->Manage Plugins->Available, select Cobertura Plugin, and click Install without Restart.

For the Jenkins job you are configuring (in this example hex2num) go to the job configuration page and select Add Build Step->Execute Windows batch command. Add the following to change to the correct directory; I have added flags to generate xml output to a named file.

cd hex2num\Source

python C:\Python27\Scripts\gcovr -r . --object-directory=..\..\hex2numUnitTest\Debug -f .*\hex2num\Include\ -f .*\hex2num\Source\ -x -o ..\..\gcovr.xml

Then select Add post-build action->Publish Cobertura Coverage Report, enter the path to the generated results file. Selecting Advanced allows a variety of options to be set.
Run a build and you should start to see code coverage reports, once a second build has been run you should start to see graphs. You can also drill in for more details.

Getting painted files to show in Cobertura

Although cobertura is displaying coverage there are two problems
  1. The package names don’t read well
  2. Painted source files don’t show as cobertura cannot find the source files

To resolve this I wrote a small python script to patch the paths in the gcovr.xml files. Save this source as C:\Python27\Scripts\CorrectGcovrPaths.py.

try:
    import xml.etree.cElementTree as ET
except ImportError:
    import xml.etree.ElementTree as ET
import sys

gcovrXmlFilename = sys.argv[1]
paths = sys.argv[2:]

tree = ET.ElementTree(file=gcovrXmlFilename)
for packageElement, path in zip(tree.iter(tag='package'), paths):
    packageElement.set('name', path)
    for classElement in packageElement.iterfind('classes/class'):
        filename = classElement.get('filename')
        # keep only the basename, then re-root it under the given path
        filename = filename.split('\\')[-1:][0]
        filename = path + "/" + filename
        classElement.set('filename', filename)
tree.write(gcovrXmlFilename)

In the Jenkins job, change the Windows batch command that runs gcovr so that it also runs this script on the generated gcovr.xml.
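
Something like this, reusing the gcovr command from above (the package names passed to the script are illustrative):

cd hex2num\Source
python C:\Python27\Scripts\gcovr -r . --object-directory=..\..\hex2numUnitTest\Debug -f .*\hex2num\Include\ -f .*\hex2num\Source\ -x -o ..\..\gcovr.xml
python C:\Python27\Scripts\CorrectGcovrPaths.py ..\..\gcovr.xml hex2num/Include hex2num/Source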

The script reads the file given in the first argument (gcovr.xml), renames the first package (directory) to the second argument and sets the path for all classes (files) in that package; each following argument is applied to the next package in the same way, so more arguments can be added for additional packages.
In Jenkins you can then see nicely painted sources.

Automating eclipse CDT build on Jenkins

In my previous posts I have shown how to set up Jenkins to work with SVN and trac running on the Raspberry Pi. I have also shown how to configure eclipse to work with google test and how to configure eclipse to work with SVN and trac. This post covers how to automate the build of the eclipse projects under Jenkins and how to automate running the unit tests.

Jenkins Configuration

Jenkins Slave Setup

First of all install all of the tools that you need for your build on a Jenkins slave; in my case eclipse CDT with a selection of plugins, and MinGW. Eclipse is not on the path by default, so I created a system environment variable ECLIPSE_DIR on the slave, with the path to where eclipse is installed. I also restarted the Jenkins slave service so that it had access to the new environment variable.
Decide on labels for these tools; I will use two: eclipse and mingw. Log in to Jenkins to add the labels to the slave with the tools installed. Go to Manage Jenkins->Manage Nodes, select the node and add the labels

 

Jenkins Job Configuration

Add the job to Jenkins: from the Jenkins home page select New Item, name the job, select Freestyle project and click OK. My example project is called Hex2Num, a simple utility I've written; it consists of two eclipse projects (a unit test project and the application). The unit tests are written using googletest, documentation is handled with doxygen, and gcov is used for code coverage.
There is quite a bit of configuration on the job page, so section by section:

Name and Description

Fill in a meaningful description

Trac

If you are using trac and have integrated trac and Jenkins (see Integrating Trac and Jenkins) then fill in the URL to access trac.

Restricting where the build runs

Allow the build to run only on slaves that have the labels mingw and eclipse.

Source Code Management

The source is stored in a single module in subversion, fill in the URL and any required credentials.

Build

Click Add build step->Execute windows batch command; this command builds all of the projects in the workspace as Debug.
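
This is the headless build described in the Command Line Build section of my google test post; a sketch using the ECLIPSE_DIR variable configured on the slave:

"%ECLIPSE_DIR%\eclipsec.exe" --launcher.suppressErrors -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -data . -importAll . -build ".*/Debug"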

Warnings

Click Add Post-Build action->Scan for compiler warnings and select gnu 4

 

Google Tests

Go back and add another build step: click Add build step->Execute windows batch command; this command executes the unit test application and generates an xml results file that can be interpreted as JUnit results.
Add another post-build action: click Add Post-Build action->Publish JUnit test result report and enter the name of the result file.
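
Googletest can produce JUnit-compatible xml itself via its --gtest_output flag; a sketch, assuming the unit test executable is hex2numUnitTest.exe built as Debug (adjust the names to your project):

hex2numUnitTest\Debug\hex2numUnitTest.exe --gtest_output=xml:gtest_results.xml

The result file name entered in the post-build action would then be gtest_results.xml.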

Results

After a few builds have been run two graphs will appear on the Jenkins page for the job.

Integrating SVN, trac and Jenkins with eclipse CDT

I have previously posted on how to install trac, SVN and Jenkins on the Raspberry Pi, and also on how to install eclipse CDT (see the bottom of this post for related posts). When using Eclipse there are helpful plugins that tightly integrate all of these tools. This post covers the installation and configuration of these plugins.
NOTE: I am installing in the Mars release of Eclipse CDT.

Subversion – Install Subclipse

There are two competing SVN integrations for Eclipse, subclipse and subversive. Both appear to be good, subclipse is supported by tigris (the organisation behind subversion), subversive is supported as an official Eclipse project. I have personally used subclipse for a long time and see no compelling reason to change.
If you use any other SVN client to work on the same checkout as Eclipse then they must use the same working copy format. See the subclipse download and install page for details. Because I also use TortoiseSVN and am currently using version 1.8.x, I need to use subclipse 1.10.x. The update site for this version is http://subclipse.tigris.org/update_1.10.x
In Eclipse go to Help->Install New Software…, in the Install dialog press the Add… button paste in the url for the update site that you require and enter a name.
Click OK and then select Subclipse and SVNKit.
Click through accepting the license agreement and wait for the install to complete. I got a warning about unsigned content during the install and chose to accept the installation.
The built in help for subclipse is good, in Eclipse click Help->Help Contents and then find the topic Subclipse – Subversion Eclipse Plugin, the getting started section walks you through how to add a new project or how to access an existing one.

Trac and Jenkins install Mylyn Connectors

In Eclipse go to Help->Install New Software…, in the Install dialog in the Work with: drop down pick –All Available Sites– and then under Collaboration check Mylyn Builds Connector: Hudson/Jenkins and Mylyn Tasks Connector : Trac.
Click through accepting the license agreement and wait for the install to complete.

Configuring the trac connector

Switch to the SVN Repository Exploring perspective in Eclipse; in the bottom left of the display you should see the Task Repositories view. In this view right click and Add Task Repository, select the Trac repository type and click Next.
NOTE: you may be prompted to setup a Secure Storage Password.
In the Add Task Repository… dialog, for the Server enter the URL that you use to browse to the trac repository, in the Label field put a short description, and complete the User ID and Password that you use to log in to trac. Validate Settings and then Finish.
When you click Next you will be prompted to add a new query; queries give you the ability to filter lists of trac tickets to those appropriate to you, a component, a release or whatever. My example query shows all of my open tickets.
To then see the results of the queries click Window->Show View->Other, then under Mylyn select Task List and click OK.
In the Task List it is possible to expand the queries to see lists of tickets, and the tickets can be opened and worked on. See below for an example.

Configuring the Jenkins Connector

Open the Mylyn Builds View (Click Window->Show View->Other…, and find Builds under Mylyn)
In the Builds view click on Create a build server, and select Hudson (supports Jenkins).
Fill in the Server:, Label:, User: and Password: fields, assuming you are using password authentication. For me the server URL is http://xxx.xxx.xxx.xxx:8080 where xxx.xxx.xxx.xxx is the IP address of my server. Check any build plans you are interested in, validate your settings and click Finish.
From this builds view all build plans selected are visible, builds can be manually started, the build history is available and details for individual builds can be accessed (Such as warnings, test results and changes made).

Related Posts

Using Google Test with CDT in eclipse (Covers the installation of Eclipse CDT and MinGW)

Using Google Test with CDT in eclipse

Introduction

I like to use test driven development, currently my preferred framework is googletest. When I came to use eclipse CDT with the MinGW toolchain I found I had lots of little issues to get over to achieve what I wanted, namely a rapid TDD environment that I could also build from the command line (so that it can be automated under Jenkins). This post collects together the steps to get a simple application and associated unit test application building.

Installation

MinGW

Click the Download Installer button on the MinGW home page http://mingw.org/. This downloads an installer for MinGW; I then selected the components needed for C++ development (at minimum the g++ compiler and the MSYS base tools).

Java

Eclipse requires a Java runtime; ensure a recent JRE or JDK is installed before installing eclipse.

eclipse CDT

I downloaded the Eclipse Installer from http://www.eclipse.org/downloads and ran the install selecting the C/C++ development environment.

Create Projects

My personal preference is to have two projects for a component, the actual component (executable, library etc), and an executable that is a unit test for the code in that component. I find this works well for me as a pattern because it works the same if I am working on a library or cross compiling.
Launch eclipse, close the welcome screen and you should end up in the C/C++ perspective. In the Project Explorer right click and select New->C/C++ Project. Select Executable->Empty Project and the MinGW GCC toolchain, and give your project a name; for this example I have chosen the name demo.
Click Finish. Then right click on the project and add a new folder called Main, and in that folder create a new file called Main.cpp. In Main.cpp create a minimal main function as shown and save the file. (If you forget to save you may well see an error about undefined reference to WinMain@16.)
int main(int argc, char **argv)
{
    return -1;
}
When I got to this point I found that I could build the program and debug it without error, but if I tried to run it Windows popped up an error saying that demo.exe has stopped working. This can be resolved by changing the linker flags: on the demo project, select Properties then C/C++ Build->Settings. Set the Configuration to [All Configurations], then under Tool Settings select MinGW C++ Linker->Miscellaneous and in the Linker flags box type -static-libgcc -static-libstdc++.
You should now be able to build, run and debug demo.exe.
Repeat the above process to create a second project called demotest, and ensure that you can build, run and debug demotest too.

Add Googletest to the test project

I downloaded googletest-release-1.7.0 from https://github.com/google/googletest/releases and unzipped the content locally. Google now recommend compiling google test with each project; the easiest way of achieving this, I find, is merging google test into a single header and source file. To do this you need python installed; I downloaded and installed python 2.7.10 from http://www.python.org/downloads/
Then open a command prompt in the root of the unzipped googletest-release-1.7.0 directory and run
.\scripts\fuse_gtest_files.py . <demotest project dir>\contrib
where <demotest project dir> is the path to the demotest project in your workspace. Now open eclipse and you should see gtest in the demotest project.
Next we need to ensure that the include path finds the test headers. To do this, open the properties dialog for the demotest project. Under C/C++ General->Paths and Symbols, select [All Configurations] and GNU C++, then add the contrib include directory as a workspace include.
Prove the project still builds – For me it builds with a single compiler warning – I will ignore that for now.
The next step is to add code to run google test. Edit demotest\Main\Main.cpp so that it has the content below
#include "gtest/gtest.h"

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
At this point if you build and run you should get this output.
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (0 ms total)
[  PASSED  ] 0 tests.

Starting TDD

The first test I want is to prove that my demo.exe returns 0 when called with no arguments.
Create two folders in the demo project called Source and Include. In the demotest project add a folder called Tests, and in that folder a new file DemoTest.cpp with this content
#include "gtest/gtest.h"
#include "demo.h"

namespace
{
class DemoMainTest : public ::testing::Test
{
};

TEST_F(DemoMainTest, Returns0)
{
    ASSERT_EQ(0, ::Demo(0, NULL));
}
}
A TDD purist would say I am doing too much at once, this adds a test to show that a function called Demo, called with two NULL arguments returns 0. If you try to compile now the build will fail because the file demo.h doesn’t exist.
To get this to compile and link we need to add the file Demo.h and Demo.cpp, but these files should actually be part of the demo project and not demotest.
In the demo project add a new folder Include and in that add Demo.h with this content
int Demo(int argc, char **argv);
Still in the demo project add a new folder Source and in that add Demo.cpp with this content
int Demo(int argc, char **argv)
{
    return -1;
}
In the demo project properties, under C/C++ General->Paths and Symbols, select [All Configurations] and GNU C++, then add the Include directory as a workspace Include.
Also edit the existing Main.cpp in the demo project to call the demo function.
#include "demo.h"

int main(int argc, char **argv)
{
    return Demo(argc, argv);
}

At this stage the demo project should compile and link.

We now need to get the demotest project to reference this source and include path.
In the demotest project properties under C/C++ General->Paths and Symbols, in the Source Location tab click the Link Folder… button, click Advanced>>, check Link to folder in the file system, click Variables…, select WORKSPACE_LOC and click Extend…, find demo/Source, and click OK all the way out.
In the demotest project properties under C/C++ General->Paths and Symbols, select [All Configurations] and GNU C++, then add the Include directory as a workspace include.
All being well, you can now build and run; the test should fail
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from DemoMainTest
[ RUN      ] DemoMainTest.Returns0
..\Tests\DemoTest.cpp:12: Failure
Value of: ::Demo(0, __null)
  Actual: -1
Expected: 0
[  FAILED  ] DemoMainTest.Returns0 (0 ms)
[----------] 1 test from DemoMainTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] DemoMainTest.Returns0
 1 FAILED TEST
Change the return value in Demo.cpp to 0, save, rebuild and re-run and the test should pass.
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from DemoMainTest
[ RUN      ] DemoMainTest.Returns0
[       OK ] DemoMainTest.Returns0 (0 ms)
[----------] 1 test from DemoMainTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[  PASSED  ] 1 test.
Now we are in a good position to start TDDing; as long as you add test files to the Tests folder, source to the Source folder and include files to the Include folder there should be no further need to mess with the build.

Test Runner

At this stage we are up and running, but it is still a bit clunky running the tests. Fortunately eclipse CDT has a test runner that supports google test. Unfortunately it is not installed by default. To install go to Help->Install New Software…, choose to Work with: –All Available Sites– and then under Programming Languages select C/C++ Unit Testing Support, click Next>, Next>, Finish and wait for the install to complete.
Restart eclipse when prompted.
Select Window->Show View->Other… and select C/C++ Unit
In the Project Explorer right click on demotest and then Run As->Run Configurations…, double click C/C++ Unit and then under the C/C++ Testing tab select Google Tests Runner.
Click Run and all being well you should see your test pass in the runner.
Now you can really start moving with TDD 🙂

Version Control

At this stage it is a good idea to put the project under version control. Under eclipse the workspace is specific to a single folder on a specific machine, so it should not normally be placed under version control. Also any files generated by the build do not need to be version controlled, so before looking at what files to version control clean both projects from inside eclipse. If you are not using eclipse to manage your version control it is then a good idea to exit eclipse.
Using explorer the workspace looks like this
Both the RemoteSystemsTempFiles and .metadata folders are created by eclipse and do not need to be version controlled. The Debug folder under each project is created when a Debug build is performed and also does not need to be version controlled, if a Release build has been performed then there will be a folder called Release, again this doesn’t need to be version controlled. Everything else should be placed under version control.
To prove you have everything required under version control, it is a good idea at this point to verify that a clean checkout of your code builds. To do this create a new empty workspace directory (e.g. Workspace2) and checkout the projects from your version control system into that workspace. Then launch eclipse and select Workspace2 as your workspace
When eclipse starts there will be no projects in the Project Explorer, even though the projects exist in the Workspace2 folder on the file system. Right click in the Project Explorer and select Import…, then General->Existing Projects into Workspace
click Next>, then Browse…, this will open a Browse For Folder dialog opened on the workspace, click OK.
Then click Finish on the Import dialog, both projects should then be imported and can be build as before.
NOTE: Depending on your version control system and any plugins that may be installed you may be able to import projects directly from the VCS into a new workspace.

Command Line Build

The managed build in eclipse CDT is very easy to manage when working in the GUI, however there are situations when a command line build is required, such as when projects need to be built by a continuous integration system. Typically under CI the command line will need to cope with the situation where projects have been checked out from the VCS into a clean workspace folder, this example when executed from the root of the workspace folder creates the eclipse workspace artefacts, imports all of the projects in the workspace and builds the Debug configuration for each project.

<path to eclipse>\eclipsec.exe --launcher.suppressErrors -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -data . -importAll . -no-indexer -build ".*/Debug"

The first few arguments on this command line are standard eclipse arguments, see running eclipse and eclipse runtime arguments for details.
There are many other options – at the time of writing the options are
Usage:
   -import     {[uri:/]/path/to/project}
   -importAll  {[uri:/]/path/to/projectTreeURI} Import all projects under URI
   -build      {project_name_reg_ex{/config_reg_ex} | all}
   -cleanBuild {project_name_reg_ex{/config_reg_ex} | all}
   -no-indexer Disable indexer
   -I          {include_path} additional include_path to add to tools
   -include    {include_file} additional include_file to pass to tools
   -D          {prepoc_define} addition preprocessor defines to pass to the tools
   -E          {var=value} replace/add value to environment variable when running all tools
   -Ea         {var=value} append value to environment variable when running all tools
   -Ep         {var=value} prepend value to environment variable when running all tools
   -Er         {var} remove/unset the given environment variable
   -T          {toolid} {optionid=value} replace a tool option value in each configuration built
   -Ta         {toolid} {optionid=value} append to a tool option value in each configuration built
   -Tp         {toolid} {optionid=value} prepend to a tool option value in each configuration built
   -Tr         {toolid} {optionid=value} remove a tool option value in each configuration built
               Tool option values are parsed as a string, comma separated list of strings or a boolean based on the option's type

Backup and Restore Raspberry Pi to Synology DiskStation

Introduction

There are many articles on how to back up Raspberry Pi systems; up to now I have relied on taking card images and storing these on a machine that is backed up. However it is all too easy to have a few hours of fun with the Raspberry Pi and then not get round to making a backup. I want something that is automated so that I don't have to think about it.
The key goals for this backup strategy are:
1) Unattended backup
2) Warnings sent for backup failure [ToDo]
3) Rotating backups
4) Backup to my Synology DiskStation
5) Simple to add additional Raspberry Pis to the backup

References

None of this is particularly new; I have pulled together ideas that I've found in a number of other sources, all of which were helpful.

Solution Overview

I have a Synology DiskStation with RAID disks that I back up all of my machines to. This post backs up to this server; the same basic approach should work for any Linux server with small modifications.
cron is used on the server to invoke a script to perform the backup, the script is parameterised with the ip address of the Raspberry Pi to backup.
rsync is used to perform the actual backup.
ssh is used to securely connect to the Raspberry Pi.

Backing Up

Step 1 – root SSH access

We need the Synology DiskStation to have root access via SSH to the Raspberry Pi. Open up two SSH sessions, one to the DiskStation (as root) and one to the Pi (as your normal login e.g. pi).
1) In the Raspberry Pi Session first enable root login by setting a root password

sudo passwd root

Enter a password as prompted
Now we need to ensure that there is a directory to hold ssh keys.

su root

mkdir ~/.ssh

exit

2) In DiskStation session
First check to see if you already have a pair of SSH keys by running

ls ~/.ssh

 
If you don’t see a file called id_rsa.pub there you need to generate a pair of keys using this command

ssh-keygen -t rsa -C root@<Your Synology server's name>

– When prompted for the file in which to save the key, accept the default (hit <Enter>)
– When prompted for a passphrase hit <Enter> (no passphrase)

Now push the public key to the Raspberry Pi (this is why we need the Pi to briefly allow password logins on the root account, because until the public key exists on the destination server, you will be prompted for a password when you run this command):

cat ~/.ssh/id_rsa.pub | ssh root@<your Pi's IP address> 'cat >> .ssh/authorized_keys'

Enter the password you created in Step 1 when prompted

Confirm that SSH key logins are now accepted by the Raspberry Pi by running this command (still from the Synology session; don't toggle back to the Pi session yet):

ssh root@<Your Pi's IP Address>

Now you can close the Synology session’s terminal window
 
3) In the Raspberry Pi Session remove password based root access. 

passwd -d root

Be very careful to do this on the Raspberry Pi – you could cause all sorts of problems doing this from another one of your hosts.
Now close down this ssh session as well
 

Step 2 Creating the backup script files

Using a web browser log in to the DiskStation as admin. Create a top level folder for holding all files related to Raspberry Pi backups (rpi_backup), using Control Panel->Shared Folder.
Then using the File Station create a directory under rpi_backup called scripts, and another called logs.
NOTE: I think the easiest way to create these scripts is to use the Text Editor from the Main Menu on the DiskStation, alternately use ssh to connect to the DiskStation and then use vi to create the files.
Create a file called rpi_backup/scripts/backup_pi.sh with the following content.

#!/bin/ash
# This script uses rsync to backup a named Raspberry Pi at a given IP address.
# Three rotating backups are kept. If rsync fails then the preceding backups are retained.
if [ "$#" -ne 2 ] ; then
    echo "Usage: 2 arguments required: $0 SERVERNAME IP" >&2
    exit 1
fi

# Set up string variables
SERVER=$1
ADDRESS=$2
NOW=$(date +"%Y-%m-%d")
RPI_BACKUP="/volume1/rpi_backup"
LOGFILE="$RPI_BACKUP/logs/$SERVER-$NOW.log"
SERVERDIR="$RPI_BACKUP/$SERVER"
BASENAME="$SERVERDIR/$SERVER"

# Paths to common commands used
MV=/bin/mv
RM=/bin/rm
MKDIR=/bin/mkdir
PING=/bin/ping
RSYNC=/usr/syno/bin/rsync

# Function to check for command failure and exit if there has been one. This is done
# so that when invoked from cron the error is reported.
check_exit_code() {
    exit_code=$?
    if [ "$exit_code" -ne "0" ] ; then
        echo "$1"
        echo "exit with exitcode $exit_code"
        exit 1
    fi
}

# Ping the RPi a few times to ensure the interface is up (I've not seen this fail)
$PING $ADDRESS -c 3 >> $LOGFILE

# Ensure we have a top level backup directory for this server
if ! [ -d $SERVERDIR ] ; then
    $MKDIR $SERVERDIR
fi

# Backups are made to BASENAME.0, which is then rotated if the backup was successful,
# so we start by clearing out anything from a failed backup.
if [ -d $BASENAME.0 ] ; then
    $RM -rf $BASENAME.0
fi

# rsync via SSH from the RPi as an incremental against the previous backup
$RSYNC -av \
    --delete \
    --exclude-from=$RPI_BACKUP/scripts/rsync-exclude.txt \
    --link-dest $BASENAME.1 \
    -e "ssh -p 22" root@$ADDRESS:/ \
    $BASENAME.0 >> $LOGFILE 2>&1

# If rsync failed in any way, don't trust the backup, exit the script
check_exit_code "RSYNC Failed"

# Rotate the existing backups
if [ -d $BASENAME.3 ] ; then
    $RM -rf $BASENAME.3
fi
if [ -d $BASENAME.2 ] ; then
    $MV $BASENAME.2 $BASENAME.3
fi
if [ -d $BASENAME.1 ] ; then
    $MV $BASENAME.1 $BASENAME.2
fi
if [ -d $BASENAME.0 ] ; then
    $MV $BASENAME.0 $BASENAME.1
fi

Create a second file called rpi_backup/scripts/rsync-exclude.txt with this content

/proc/*
/sys/*
/dev/*
/boot/*
/tmp/*
/run/*
/mnt/*

At this stage I like to manually run the script to ensure that it is working as I intend. To do this connect to the DiskStation as root via ssh and cd to /volume1/rpi_backup/scripts. The command below backs up a Raspberry Pi that I want to identify as myRPI at address 192.168.0.34. I repeat this until I am happy with the behaviour, including checking the reporting when the RPi is offline.
/volume1/rpi_backup/scripts/backup_pi.sh myRPI 192.168.0.34

Step 3 Automating the backup

On the DiskStation desktop open Control Panel->Task Scheduler
Create a User Defined Script
Give the task a name and enter the command to run your script, with the full path to the script; this example is for my jenkins server at address 192.168.0.34
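
The command is just the manual invocation from Step 2, for example:

/volume1/rpi_backup/scripts/backup_pi.sh jenkins 192.168.0.34
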
Set the schedule, in my case I want this task to run at 3am every morning
Save the job by clicking OK
You can now test run the job by selecting it and clicking Run.

Step 4 Restoring from backup

The backup directories are shared from the DiskStation; mounting the volume gives access to the files. For now I have only done this from my Mac and used it to restore individual files.
At this stage I have not investigated a full system restore; I rely on having a base img for each RPi taken after any major reconfiguration. I can then restore files from these backups.

Step 5 Backing up a second RPI

We need the Synology DiskStation to have root access via SSH to the Raspberry Pi. Open up two SSH sessions, one to the DiskStation (as root) and one to the Pi (as your normal login e.g. pi).
1) In the Raspberry Pi Session first enable root login by setting a root password

sudo passwd root

Enter a password as prompted
Now we need to ensure that there is a directory to hold ssh keys.

su root

mkdir ~/.ssh

exit

2) In DiskStation session

Push the public key to the Raspberry Pi (this is why we need the Pi to briefly allow password logins on the root account, because until the public key exists on the destination server, you will be prompted for a password when you run this command):

cat ~/.ssh/id_rsa.pub | ssh root@<your Pi's IP address> 'cat >> .ssh/authorized_keys'

Enter the password you created in Step 1 when prompted

Confirm that SSH key logins are now accepted by the Raspberry Pi by running this command (still from the Synology session; don't toggle back to the Pi session yet):

ssh root@<Your Pi's IP Address>

Now you can close the Synology session’s terminal window
 
3) In the Raspberry Pi Session remove password based root access. 

passwd -d root

Be very careful to do this on the Raspberry Pi – you could cause all sorts of problems doing this from another one of your hosts.
Now close down this ssh session as well.

4) Finally repeat step 3 Automating the Backup for the new host.

ToDo

This process has given me a fully automated backup of my Raspberry Pi systems. There are a number of improvements that I would like to make as and when time allows.
  1. There is currently no notification sent if the backup fails. For now I inspect the logs, and the backup does at least preserve the last three good backups. It is possible to use the Synology notification system, see http://www.beatificabytes.be/wordpress/send-custom-email-notifications-from-scripts-running-on-a-synology/ I may get around to this; I'm hoping that Synology may improve the system though…
  2. I want to investigate if I can perform a full system restore on to a blank card. For now I rely on taking occasional card images, and then this automated system gives me backup of all files.
  3. The timestamp on the individual backup top level directory is incorrect; it gets updated each time a backup occurs (mv is not preserving the date). I did have a brief look at this, and there are a number of obstacles to overcome: mv doesn't preserve the timestamp on move, and I cannot use touch with a specified timestamp because the version available on the DiskStation doesn't support the feature. I could solve the problem with python, but this isn't installed as standard. I think I could solve the issue using php, but I haven't put the time in yet – more important to have a working backup.