UniPi Neuron for CODESYS – available now


Cozens Software Solutions are pleased to announce that UniPi Neuron for CODESYS is available now for download from the CODESYS store.

UniPi Neuron for CODESYS provides CODESYS driver support for the full range of UniPi Neuron PLCs and expansion modules.

  • UniPi Neuron L20x, L30x, L40x, L50x, L51x, M10x, M20x, M30x, M40x, M50x and S10x
  • UniPi Neuron Expansion modules xS10, xS30, xS40 and xS50

The UniPi Neuron is a product line of PLC (Programmable Logic Controller) units built to be universal, for use in both smart home and business applications and automation systems.

CODESYS is the leading hardware-independent IEC 61131-3 development system under Windows for developing and engineering controller applications.

TDD Zombies


When I first used TDD I read James Grenning’s book Test Driven Development for Embedded C. In it, James proposed a pattern for developing tests: test for zero, then one, then many (ZOM). More recently he has developed this idea further into ZOMBIE testing.

Z – Zero
O – One
M – Many (or More complex)
B – Boundary Behaviors
I – Interface definition
E – Exercise Exceptional behavior
S – Simple Scenarios, Simple Solutions

I’ve found this to be a really helpful pattern to follow when developing tests. To read more about it, see James’ recent post TDD Guided by ZOMBIES.

The Pragmatic Programmer

The Pragmatic Programmer: From Journeyman to Master

I think I originally read The Pragmatic Programmer by Andrew Hunt and David Thomas a good ten or fifteen years ago. I’ve just taken a couple of days while between contracts to re-read the book.

I was very pleased to find that the book is just as fresh as I remember: 70 great pragmatic tips to help you develop from a journeyman to a master. Given that the book is 16 years old, some of the technology references seem dated, e.g. version control before SVN or Git. However, the technology referenced is not the point of the book; it is completely geared around taking pragmatic steps to produce better software.

If you want to grow as a software engineer, this book is still a must-read.

CODESYS for UniPi 1.0.2.0

CODESYS for UniPi 1.0.2.0 is available for download from the CODESYS store.

Changes:

  • Addition of support for digital inputs I13 and I14 using a custom cable. 
  • Enhanced support for 1-Wire expansion modules, giving better behaviour on comms or power failure to the module.

Learning Python


Python has never been a language I have had to know well. I’ve adapted existing scripts and created a few simple scripts from scratch, but I haven’t learnt the language properly, just the parts I’ve needed.

I decided it was about time I learnt the language properly. A friend recommended that I take a look at Python Koans. A koan is a riddle or puzzle used in Zen meditation to help gain enlightenment.

Python Koans is an interactive tutorial for learning the Python programming language by making tests pass. The tests are run by executing contemplate_koans.py:

python contemplate_koans.py

A single test will fail, telling you what has failed and what you need to think about to make it pass.

Most tests are fixed by filling in the missing parts of assert functions, e.g.:

self.assertEqual(__, 1+2)

which can be fixed by replacing the __ part with the appropriate code:

self.assertEqual(3, 1+2)

Very quickly I got into a rhythm, much like TDD: red, fix, green, repeat. I would definitely recommend this as a way of learning the language.


Refactoring C to Remove Feature Flags

You’ve read the books on Refactoring, on working with legacy code, on Unit Testing and on TDD. Then you look at the codebase you’ve inherited: it’s written in C, and it’s riddled with conditional compilation. Where do you start?


In years gone by feature flags were widely used in embedded systems as a means of having a common codebase shared across multiple devices. The devices varied in what hardware was present, what capacity there was in terms of RAM, ROM and performance. The devices also varied according to market demands, e.g. some features were only required on ‘premium’ products.

Now imagine how the codebase could have deteriorated over the years. Some of the code is forty years old, the codebase has been targeted at fifty different hardware platforms, and at a marketing level there have been over one hundred different features. There is a terrifying number of potential combinations in which the software could be built.

How bad is your code? This command will show you how many different conditional compilation directives there are in your code. Admittedly some will only be different because of whitespace, or because of the order of the flags.

grep -r -h --include='*.c' --include='*.h' '#if' . | sort -u | wc -l

I’ve been faced with a codebase containing 16000 different conditional include lines; codebases exist with many more than that. Where do you start? Should you start?

With this amount of conditional compilation, introducing unit testing may appear impossible: each test fixture can only be compiled with one combination of feature flags. You may be able to use it for new modules, but how about for maintenance? This article offers a step-by-step approach that I have used to remove feature flags and conditional compilation from a large codebase (a few million lines).

As with all refactoring, there is a level of risk; the aim of these changes is to minimise that risk by taking baby steps and using a safety net.

Step 1 – Preparation – Repeatable builds

To remove a feature flag we need a test to know that we haven’t impacted the code. The method I like to use is to check that the build produces binary-identical output before and after the change: perform two complete builds and compare the build output. We need to get to the state where they are identical. There are multiple reasons why the output may vary; these need to be addressed before we attempt any refactoring:

  • Problem – Time/Date of the build is included in the binary.
  • Solution – Make the build use a fixed time for your test purposes. How you do this depends on how the time and date are injected into the build. Consider link time substitution of a fixed file (see the sketch after this list), disabling that part of the makefile, or conditional compilation.
  • Problem – The version of a file or a checkout from the version control system is embedded in the build.
  • Solution – Be careful to check out both copies from the same revision. If the revision information is in a single source file, consider link time substitution to replace it with static values. If the information is in a single header file, consider using the include path to prioritise a file with static values.
  • Problem – The file format of your binary includes the time that the build was performed.
  • Solution – Use another form of output to decide whether the builds are identical, e.g. transform the output into a plain format such as .bin or SREC, or use a map file for comparison (if using GNU tools, look at objcopy and strip).
  • Problem – The file format includes the paths of source files.
  • Solution – Use tools to strip the debug information from the binary (again, objcopy and strip), or perform both builds in the same directory.
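As a sketch of the link time substitution idea: for repeatable test builds, link a file of fixed values in place of the generated version/date module (the symbol names here are illustrative, not from any real build system).

/* fixed_version.c - linked in place of the generated version/date
 * module so that test builds are repeatable. */
const char * const g_pszBuildDate = "1970-01-01 00:00:00";
const char * const g_pszBuildVersion = "0.0.0.0";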

This process needs to be repeated for every build that is to be supported from your codebase. There may have been hundreds of products delivered; it is likely that only a small subset still requires support. To be confident in your changes you must be sure that you are not impacting any of the current builds.

Step 2 – Identify redundant feature flags

We can identify a feature flag as redundant in any of these circumstances:

  1. It is defined to the same value on all supported platforms.
  2. It is undefined on all platforms.
  3. There are no longer any uses of the flag in the code.
  4. All uses of the feature flag are in sections of code removed by other feature flags.

For cases 1 and 2, use the pre-processor to prove that your assumptions are correct by forcing a build that will fail only if your assumption is correct (choose the failing option because it is faster to test). For example, if you believe that FEATURE_A always has the value 1 on all platforms, then add the following to a source file included early on in all builds:
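A minimal sketch of such a check (the exact message is illustrative):

#if FEATURE_A == 1
#error "FEATURE_A is 1 in this build - the assumption holds here"
#endif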


Then verify that all of your builds fail. If they do then you know that this flag is safe to remove.


Step 3 – Remove Feature Flags

Following on from the example above, assume that we have discovered that FEATURE_A always has the value 1 in all of the builds we need to support. How can we remove FEATURE_A when it may be mentioned in many of the thousands of files in our build? Removing it by hand is going to be time-consuming and, worse, error-prone.

To automate the process, use unifdef. The command below invokes unifdef on every .c and every .h file below the current directory, and removes the conditional compilation related to FEATURE_A:

find . -name '*.[ch]' | xargs unifdef -DFEATURE_A=1 -m

Let's see what this did to our example function. Not only has the #if FEATURE_A conditional been removed, so too has #if FEATURE_A || FEATURE_B: unifdef was smart enough to determine that if FEATURE_A was defined, the compound condition was always true.
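As an illustration (the helper functions are placeholders, not the original example), code like this:

#if FEATURE_A
    ret = DoFeatureAWork(ret);
#endif
#if FEATURE_A || FEATURE_B
    ret = DoCommonWork(ret);
#endif

is reduced by unifdef -DFEATURE_A=1 -m to:

    ret = DoFeatureAWork(ret);
    ret = DoCommonWork(ret);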


At this stage, rebuild all of the applications, verify that none of the binaries have changed, and commit the change to version control. Then repeat for the next feature flag. Let's see one more example: suppose FEATURE_B is always undefined; unifdef can remove the feature with this command:

find . -name '*.[ch]' | xargs unifdef -UFEATURE_B -m

Here we can see that the code guarded by #ifdef FEATURE_B has been removed, as well as the feature flag itself.
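Continuing the illustration, a guarded block like this one disappears completely:

#ifdef FEATURE_B
    ret = DoFeatureBWork(ret);  /* deleted along with its guard */
#endif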


Verify that the binary output is identical for all builds, commit the changes to version control, and repeat.

Should you be worried about making these changes? What about the code that is being deleted, isn't it valuable? No, it has no value: it isn't included in any current builds, so it carries no current value. It adds confusion and slows development, so it has cost, not value. If you ever have to look at what had previously been included in a feature you have removed, your VCS provides a means of accessing that code. And if you have followed this process, you have a single commit for the removal of each feature.

I would repeat the above process for every feature flag that I suspect is identically defined in all live builds.

With the safety net of knowing all builds are binary identical, there is no risk of introducing bugs.

Step 4 – Removal of a feature flag that is in different states in different builds

Now consider the final conditional in our function: FEATURE_C. It is defined as 1 in some of our builds and as 0 in others. How can we safely remove the conditional? Should we even attempt to remove it?

Personally I would attempt to remove this conditional only when I start working on code that is impacted by the conditional compilation, and not before.

It is unlikely that we are going to be able to make the changes to remove this feature and leave all builds binary identical, so we need another safety net to tell us that what we are doing has not had any nasty side effects.

To change the code away from using the pre-processor we must choose one of three other ways of varying the behaviour between builds:

  1. Compile Time Substitution
  2. Link Time Substitution
  3. Runtime Substitution

Let's assume we need to do some maintenance work in VeryLongFunction(). Before we try to make a functional change we want to get rid of this conditional compilation, and before we get rid of the conditional compilation we want tests to tell us that it is safe to do so.

So our first step is to create a test harness for this source file. Rather than re-state the process, look at James Grenning's article TDD How-to: Get your Legacy C into a Test Harness. In this test harness have FEATURE_C defined as 1, so that our conditionally included code is included in the test harness.

Now write some tests that prove the functionality of VeryLongFunction(), including a test that checks calls to wibble only occur if the previous functions have succeeded.

Great, we have a test harness; now we can start refactoring. In this scenario, Extract Method looks like a good refactoring to try. Let's pull all of the code inside FEATURE_C out into a well-named method (FeatureCWibbleIfOK isn't a great name, but it will do for our example; do pay attention to the name you choose). We end up with something like:
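(A sketch of the resulting shape; DoStep1, DoStep2, wibble and OK are placeholders for the real code.)

#if FEATURE_C
static int FeatureCWibbleIfOK(int ret)
{
    if (ret == OK)
    {
        ret = wibble(ret);
    }
    return ret;
}
#endif

int VeryLongFunction(void)
{
    int ret = DoStep1();
    ret = DoStep2(ret);
#if FEATURE_C
    ret = FeatureCWibbleIfOK(ret);
#endif
    return ret;
}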


All of our tests still pass; we are good to continue. The next step in our refactoring is to open up a seam that allows us to substitute different behaviour. We move the function out into a new source file and create a new header: feature_c.c and feature_c.h. These files are included in our test harness build, and our tests all still pass.

The next step is to produce a test fixture to prove feature_c. Once this is done we can simplify the tests in our original test harness to prove that FeatureCWibbleIfOK is being called correctly, and remove feature_c from that test harness.

We are now at a point where we can substitute different behaviour, and we need to decide which of our three possibilities to use. In the first two cases we should develop a new test fixture, initially a copy of the feature_c test harness, using a copy of feature_c.c. Modify the tests to expect the behaviour with FEATURE_C undefined, run the tests and observe them fail. Undefine FEATURE_C in the test harness and observe the tests pass. You can then remove the FEATURE_C feature flag and code.

Compile Time Substitution

In compile time substitution we can use the include path to insert one of two different copies of feature_c.h. For example, one could have a plain prototype:

int FeatureCWibbleIfOK(int ret);

and the other could have a null inline implementation:

static inline int FeatureCWibbleIfOK(int ret) { return ret; }

Link Time Substitution

For link time substitution, a second copy of feature_c.c might look a bit like:

#include "feature_c.h"
int FeatureCWibbleIfOK(int ret)
{
    return ret;
}

Runtime Substitution

Here we presume that there is some runtime check that allows us to determine whether FEATURE_C is enabled. Use normal TDD methods to test-drive this into your application.
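A sketch of the shape this might take, inside VeryLongFunction() (FeatureIsEnabled and FEATURE_ID_C are hypothetical names for whatever runtime check your system provides):

/* The conditional compilation is replaced by a runtime check.
 * FeatureIsEnabled() and FEATURE_ID_C are hypothetical. */
if (FeatureIsEnabled(FEATURE_ID_C))
{
    ret = FeatureCWibbleIfOK(ret);
}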

Summary

Refactoring a large legacy codebase that is riddled with conditional compilation is hard. However, it can be achieved safely with care, allowing the code to be brought under the control of test harnesses. You may never achieve full test coverage, but with care you should be able to bring the areas that you work on under control, get tests in place, and gradually improve the quality and maintainability of the code.

It may be hard, but what other choice do you have?


Developing the CODESYS runtime with TDD

Introduction


I was recently working in the CODESYS runtime again, developing some components for a client, and I thought the experience would make the basis of a good post on bringing legacy code into a test environment to enable Test Driven Development (TDD).

The CODESYS runtime is a component-based system, and for most device manufacturers it is delivered as a binary for their target system plus a collection of header files and interface definitions. Much of the interface is generic; however, there are platform-specific headers that abstract the underlying RTOS. Device manufacturers often develop bespoke runtime components, for example to access proprietary IO. To help with this, the delivered software package includes template components as a starting point for development. This means that, according to Michael Feathers' definition of legacy code (code without tests), the starting point when developing a CODESYS component is legacy code. In this example the starting point was a partially developed component: legacy code.


The Plan

I tend to follow a fairly standard process when bringing legacy code under test. The basic process is well described in TDD How-to: Get your legacy C into a test harness on James Grenning's blog. I follow roughly the same process with minor changes; it can be summarised as follows:

  • Select appropriate tools.
  • Create a test harness with no reference to the code to be tested and a dummy failing test. Observe it fail. Fix the test and observe it pass.
  • Decide the boundaries of the code I want to test, and include this source in the test harness build.
  • Make the test harness compile (but not link).
  • Make the code link using exploding fakes.
  • Ensure the dummy test still passes.
  • Add the first test of the code under test (expect it to crash or fail).
  • Make the test pass by adding initialisation and using better fakes.
  • Add more tests, always observing them fail first (force a failure if need be, to check that the error output is meaningful); factor out common code into helper functions. Keep the tests small, each testing one thing.
  • Add profiling. I like to be able to observe which parts of the code are under test before I make any changes. Particularly if the code under test has large, complex functions, it is the only way I can trust that I have sufficient coverage before making code changes.

Tools

The development build of the component uses a gcc cross compiler on Linux. The build is controlled by a makefile, and there is already an Eclipse project.

I will use the native gcc compiler to build and run the tests.

For the testing framework I'm using googletest 1.8, my preferred test framework for C and C++.

To help with creating fakes and mocks I will use Mike Long's Fake Function Framework (fff).

I will add plugins to Eclipse so that the whole process can happen in a single environment.


The first test

There are two ways of using googletest: one is to build it as a library and link it with the tests, the other is to fuse the source into a single file and include the fused source in the tests. On Linux I tend to just build the library with default settings.

I've created a new folder called UnitTests, to which I've added a makefile and a single source file with this content:

#include "gtest/gtest.h"
namespace 
{
TEST(FirstTest, ShouldPass)
{
    ASSERT_EQ(1,0);
}
} // namespace

The makefile references just this source file, and the include path contains the path to googletest/include. The link line is shown below (I've omitted the paths for simplicity):

g++ FirstTest.o gtest_main.a -lpthread -o UnitTest

This builds, and when run, the test fails as expected.

Change the ASSERT_EQ so that the test passes, rebuild and re-run the tests.

Compiling with the UUT

The CODESYS component that I'm working on consists of a single source file (the unit under test, UUT), and it links into a target-specific library.

To get the test application to compile I had to add three directories to the include path:

-I$(CODESYS)/Components
-I$(CODESYS)/Platforms/Linux
-I$(TARGET_LIB_SRC)/include

NOTE: If the CODESYS runtime delivery is for a different operating system from the development system, it may be necessary to create fake versions of the headers in the Platforms directory. It may also be necessary to fake some of the RTOS header files.

Linking - Exploding Fakes

Having resolved the includes, there are lots of unresolved symbols. A good starting point is to generate a file of exploding fakes; the idea here is to ensure that you know when you are faking code. Have a look at James' exploding fake generator, which can easily be adapted to any linker and any test framework. Save the output of your failed link into a file, then execute gen-exploding-fakes-from-linker-output.sh to generate a file of exploding fakes which you include in your build.

make >& make.out
gen-exploding-fakes-from-linker-output.sh make.out explodingfakes.c

The only other change required is to copy explodingfakes.h somewhere on the include path for the tests and adapt it to work with gtest, as shown:

#ifndef EXPLODING_FAKE_INCLUDED
#define EXPLODING_FAKE_INCLUDED
#include "gtest/gtest.h"
#define EXPLODING_FAKE_FOR(f) void f() { FAIL() << "go write a proper stub for " #f; }
#endif

Now the test application should run and pass again; none of the UUT is yet being executed.


Testing - Part 1

CODESYS components have well defined interfaces, and I find it pays to test from those interfaces rather than exposing the internals of the component wherever possible. Taking this approach tends to lead to less fragile tests that test the functionality rather than the implementation.

All components implement CmpItf, an interface that allows the component to be registered and initialised. CmpItf requires a single extern function, ComponentEntry, to be declared; all other functions in the interface are accessed through function pointers returned by this call. So my starting point is to write tests against this interface.

The first tests are straightforward, and soon the ComponentEntry call itself is factored out into the test constructor.

#include "gtest/gtest.h"
extern "C"
{
#include "CmpMyComponentDep.h"
DLL_DECL RTS_INT CDECL ComponentEntry(INIT_STRUCT *pInitStruct);
}
namespace
{
class CmpItfTest: public ::testing::Test
{
public:
    CmpItfTest():m_rResult(ERR_OK),m_InitStruct()
    {
        m_rResult = ComponentEntry(&m_InitStruct);
    }
    RTS_RESULT m_rResult;
    INIT_STRUCT m_InitStruct;
};
TEST_F(CmpItfTest, ComponentEntryShouldSucceed)
{
    ASSERT_EQ(ERR_OK, m_rResult);
}
TEST_F(CmpItfTest, ComponentEntryShouldSetComponentID)
{
    ASSERT_EQ(0x166B2002, m_InitStruct.CmpId);
}
TEST_F(CmpItfTest, CmpGetVersionShouldReturnCorrectVersion)
{
    ASSERT_EQ(0x03050800, m_InitStruct.pfGetVersion());
}
} // namespace

Fairly soon I am testing code that calls into other CODESYS components, and as soon as I do, the exploding fakes show up in the tests.


Using The Fake Function Framework

Now I need a more powerful fake; this is where the Fake Function Framework comes into its own. Creating a fake for EventOpen can be as simple as adding the following to the test source file and making sure fff.h is on the include path:

#include "fff.h"
#include "CmpEventMgrItf.h"
DEFINE_FFF_GLOBALS;
FAKE_VALUE_FUNC( RTS_HANDLE, EventOpen , EVENTID , CMPID , RTS_RESULT *);

Having added this, the link will fail with a message like:

CmpEventMgrItf.fff.c:7: multiple definition of `EventOpen'

Remove the line for EventOpen from explodingfakes.c, and the tests should now run again.

It is then possible to write a simple test to prove that the EventOpen function has been called.

TEST_F(CmpItfTest, HookCH_INIT3ShouldOpenEvent)
{
    m_InitStruct.pfHookFunction(CH_INIT3,0,0);
    ASSERT_EQ(1, EventOpen_fake.call_count);
}

The Fake Function Framework includes facilities for recording a history of argument calls, setting return values, and providing custom fakes. It makes a very powerful tool for testing C code; I'm not going to cover all of the features here, as there are plenty of other examples on the web. Do note, though, that fakes need to be reset for each new test. The constructor for my test fixture looks like this:

CmpItfTest():m_rResult(ERR_OK),m_InitStruct()
{
    m_rResult = ComponentEntry(&m_InitStruct);
    RESET_FAKE(EventOpen);
    FFF_RESET_HISTORY();
}
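As a quick sketch of those facilities (the return value and event ID here are illustrative, not from the real component):

EventOpen_fake.return_val = (RTS_HANDLE)1;        // choose what the fake returns
m_InitStruct.pfHookFunction(CH_INIT3, 0, 0);
ASSERT_EQ(1, EventOpen_fake.call_count);          // how many times it was called
ASSERT_EQ(MY_EVENT_ID, EventOpen_fake.arg0_val);  // first argument of the last call (MY_EVENT_ID is a placeholder)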

As the tests grow and multiple test files use the same fakes, it makes sense to pull the fakes out into separate files. I follow a pattern: if I am faking functions defined in a file called XXX.h, I create XXX.fff.h and XXX.fff.c and define my fakes in these files. Most of the time I take the approach of generating each fake manually, one by one as required.

CODESYS specifies the interfaces to all components in .m4 files; in the delivery I have there are 164 interface files. I know that over time these interfaces will be extended and more interfaces added, so I have built a tool to process the interface definitions and automatically generate fff fakes for each API function in each of the interfaces. I then build these fakes into a static library that can be linked with any component I develop.

There is a danger in automating fake generation: it becomes very easy not to realise when you are using a fake. Most API functions in CODESYS return an RTS_RESULT, where ERR_OK means success. ERR_OK has the value zero, which is also the default value returned by fff fakes. If you are developing new code this isn't a problem, but when bringing a legacy component under test it can lead to code appearing to be tested when it isn't. This can be avoided by still using exploding fakes within fff.

To achieve all of this using the fakes library, all I need in the tests is an include of the appropriate fake header file:

#include "CmpEventMgrItf.fff.h"

and a test constructor that resets all of the CmpEventMgrItf fakes, sets all of the fakes to explode, and then, for the two functions that I want to fake, disables the exploding behaviour:

CmpItfTest():m_rResult(ERR_OK),m_InitStruct()
{
    m_rResult = ComponentEntry(&m_InitStruct);
    FFF_CmpEventMgrItf_FAKES_LIST(RESET_FAKE);
    FFF_RESET_HISTORY();
    FFF_CmpEventMgrItf_FAKES_LIST(FFF_EXPLODE);
    // Allow normal fake operation for these functions, all others in the interface will explode if called.
    EventOpen_fake.custom_fake = NULL; 
    EventRegisterCallbackFunction_fake.custom_fake = NULL;
}


What does the fakes library look like?

To show what is included in the library of fakes, below is the content of the CmpEventMgrItf fakes, cut down to just the two functions that have been used.

CmpEventMgrItf.fff.h

#ifndef __CmpEventMgrItf__FFF_H__
#define __CmpEventMgrItf__FFF_H__
#include "fff.h"
#include <string.h>
#include "fff_explode.h"
#include "CmpEventMgrItf.h"
DECLARE_FAKE_VALUE_FUNC3( RTS_HANDLE, EventOpen , EVENTID , CMPID , RTS_RESULT * );
DECLARE_FAKE_VALUE_FUNC2( RTS_RESULT, EventRegisterCallback , RTS_HANDLE , ICmpEventCallback * );
RTS_HANDLE EventOpen_explode( EVENTID , CMPID , RTS_RESULT * );
RTS_RESULT EventRegisterCallback_explode( RTS_HANDLE , ICmpEventCallback * );
#define FFF_CmpEventMgrItf_FAKES_LIST(FAKE) \
    FAKE(EventOpen) \
    FAKE(EventRegisterCallback)

#endif /* __CmpEventMgrItf__FFF_H__ */

Other than including headers, three things happen in this file: firstly the fff fakes are declared, secondly prototypes for the exploding functions are declared, and finally a list of all the faked functions is created, allowing operations to be performed on every fake in one statement.

CmpEventMgrItf.fff.cpp

#include "CmpEventMgrItf.fff.h"
DEFINE_FAKE_VALUE_FUNC3( RTS_HANDLE, EventOpen , EVENTID , CMPID , RTS_RESULT * );
DEFINE_FAKE_VALUE_FUNC2( RTS_RESULT, EventRegisterCallback , RTS_HANDLE , ICmpEventCallback * );
RTS_HANDLE EventOpen_explode( EVENTID  a, CMPID  b, RTS_RESULT * z ){ fff_explode("EventOpen"); return (RTS_HANDLE)0; }
RTS_RESULT EventRegisterCallback_explode( RTS_HANDLE  a, ICmpEventCallback * z ){ fff_explode("EventRegisterCallback"); return (RTS_RESULT)0; }

The fff fakes are defined, along with the definitions of the exploding fakes. Each exploding fake calls fff_explode, which is declared in a separate module, allowing the way it explodes to be changed for a different testing tool.

fff_explode.h

#ifndef __FFF_EXPLODE_H__
#define __FFF_EXPLODE_H__
#define FFF_EXPLODE(a) a##_fake.custom_fake = a##_explode;
#ifdef __cplusplus
extern "C"
{
#endif
void fff_explode(const char * func);
#ifdef __cplusplus
}
#endif
#endif /* __FFF_EXPLODE_H__ */

The macro FFF_EXPLODE(a) sets the custom_fake member of an fff fake to point to the corresponding exploding fake.
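So, for example, FFF_EXPLODE(EventOpen) expands to:

EventOpen_fake.custom_fake = EventOpen_explode;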

fff_explode.cpp

#include "fff_explode.h"
#include "gtest/gtest.h"
#ifdef __cplusplus
extern "C"
{
#endif
void fff_explode(const char * func)
{
    FAIL()<<"Time to use fake for "<<func;
}
#ifdef __cplusplus
}
#endif


Keeping it fast

As I mentioned in the Tools section, the production code is built in Eclipse. I want to build the test code in Eclipse as well, and I want everything to work seamlessly.

I added a second Build Configuration to the production code project, and made this build the unit tests. Having done this I want to run the tests every time I build (or rather, I want to run the tests after every code change, and have the code rebuilt if required). This requires an optional component to be installed in Eclipse. Go to Help->Install New Software…, choose to Work with: –All Available Sites–, and then under Programming Languages select C/C++ Unit Testing Support; click Next>, Next>, Finish and wait for the install to complete. Restart Eclipse when prompted.

Now right click on your project in Eclipse and select Run As->Run Configurations.... Create a new C/C++ Unit Test configuration. Use Search Project to find your unit test application, then on the C/C++ Testing tab select Google Tests Runner.


When you run this configuration, it should force your tests to be built and then display the results graphically. Clicking on any failures will take you to the failing tests.


Profiling

Particularly when bringing legacy code under test, I like to be able to visualise what is being tested and what isn't. If you are using gcc then this becomes very easy.

Add these compiler flags to the compilation of the unit under test, and to the link line:

-fprofile-arcs -ftest-coverage

Building and then running with profiling generates .gcda and .gcno files. These are specific to a particular build, so to ensure there are no mismatches between versions, add an action to the link rule in the makefile to remove all .gcda and .gcno files from the object directory.

Now, having run your tests, look in the object directory in Eclipse and you will see the .gcda and .gcno files; double click one of them. In the dialog that pops up, ensure that your unit test executable is selected, and choose "Show coverage for the whole selected binary".

For me the key is not the amount of code covered, but what has been covered by my tests. Each file can be inspected, and it is very clear what was run by the tests and what wasn't. This helps me decide whether I have sufficient coverage before making changes. For example, the coverage bars showed me that my tests didn't cover all of the initialisation functions.


ExportFunctions is a standard function that is part of all components; the implementation shouldn't change. Coverage showed that the test suite invokes it, but also that there must be a return statement inside the EXPORT_STMT, because not all of the function was executed. Without code coverage I may never have known that some of the code wasn't being exercised. Inspecting the code will then tell me whether I need to add tests or not. This may be a trivial example, but I hope it shows why inspecting test coverage helps you understand what is being tested. You can then make informed decisions about increasing the coverage, or accepting that you have gone far enough.


Once I'm happy with the coverage in an area I want to change, I can start more traditional TDD development. Having started TDD, I tend not to use code coverage checks very often; being rigorous about TDD tends to lead to 100% coverage. The main time I reuse the coverage checks is if I have refactored the UUT: it helps to show not just that the existing functionality still passes, but that I haven't inadvertently added some untested functionality.

Summary and next steps

Investing the time to get the component under test has given me a reusable test harness that allows me to extend and refactor the code with confidence. Future development can happen much faster than it otherwise would, as much of the functionality can be proven before taking the software anywhere near the embedded target.

For some components it is worth investing the time to create pre-canned functionality through custom fakes. Consider these components:

SysMem

With no further work fff can be used to simulate failures, check the sizes being allocated, and return fixed data structures on allocations. However, in some tests we just want the memory allocation to work, so having a simple set of custom fakes that delegate these calls to functional equivalents is worthwhile. Another useful extension is to track allocation and freeing of memory; tracking can then be enabled in a test fixture's setup and checked in its teardown.
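A sketch of the delegation idea (the SysMemAllocData signature here is illustrative; check the actual prototype in your delivery's headers):

#include <stdlib.h>
/* Delegate the faked allocator to the real heap for tests that just
 * need allocation to work. The signature is illustrative. */
static void * SysMemAllocData_delegate(char * pszComponent, RTS_SIZE ulSize, RTS_RESULT * pResult)
{
    (void)pszComponent;
    if (pResult)
        *pResult = ERR_OK;
    return malloc(ulSize);
}

/* In the fixture constructor: */
SysMemAllocData_fake.custom_fake = SysMemAllocData_delegate;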

CMUtils

This component provides string manipulation and other utility functions; in most cases it is preferable to have a working double rather than the standard fff fake. If you have a source code distribution of the runtime code, I would attempt to link it with the tests.

SysTime and SysTimeRTC

One of the great advantages of unit testing in embedded systems is being able to run tests faster than real time. Develop custom fakes that allow you to take control of the progress of time.
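For example, something along these lines (SysTimeGetMs stands in for whichever time function your component uses):

/* A controllable clock: the tests decide when time moves. */
static RTS_UI32 s_ulNowMs = 0;

static RTS_UI32 SysTimeGetMs_delegate(void)
{
    return s_ulNowMs;
}

/* In a test: install the delegate, then jump forward an hour instantly. */
SysTimeGetMs_fake.custom_fake = SysTimeGetMs_delegate;
s_ulNowMs += 60u * 60u * 1000u;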

Continuous Integration

Tests are only useful when they are run. Setting up a continuous integration system to build and test each component every time there is a change to the source code is the way to go.

Continuous Delivery

How far can you go towards continuous delivery? Using a combination of free tools and the CODESYS Test Manager, I have set up delivery pipelines that build the embedded code, run unit tests, perform static analysis, generate documentation, package up instrument firmware, build and test CODESYS libraries, automate version number management, create CODESYS packages, deploy the code onto test systems and invoke automated testing (integration and system). If the tests all pass, the packages can be promoted to potential release candidates ready for final human validation as required.


Getting Started with Yocto on the Raspberry Pi

Introduction

I've been wanting to have a play with Yocto, so I decided to have a go at getting an image running on a Raspberry Pi. I found plenty of references but no step-by-step guide that just worked. This post just covers my notes on how to get going.

Development Machine

The Yocto Project Quick Start states "In general, if you have the current release minus one of the following distributions, you should have no problems", then lists several distros including Ubuntu. I originally tried using Ubuntu 16.04 and had problems, I think because of its later version of gcc.

I used a clean install of Ubuntu 14.04 desktop running in a virtual machine; the rest of the process was actually pretty straightforward.

Firstly I made sure that Ubuntu was fully patched:

sudo apt-get update

sudo apt-get upgrade

Then, following the Yocto Project Quick Start, I installed the required packages:

sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat libsdl1.2-dev xterm

I then disabled the dash shell in favour of the bash shell (I did this because I saw it advised; I'm not sure if it is required):

sudo dpkg-reconfigure dash

Getting the code and Building

At the time of writing krogoth is the latest version of Yocto, hence I am working with that branch. The raspberrypi metadata is not currently branched, so I am working with its master branch. The first step is to clone poky and meta-raspberrypi:

mkdir yocto
cd yocto
git clone -b krogoth git://git.yoctoproject.org/poky.git poky
cd poky
git clone -b master git://git.yoctoproject.org/meta-raspberrypi

Now generate the default configuration files into the default build directory (named build):

. oe-init-build-env build

Now we need to edit the build configuration. Firstly, edit yocto/poky/build/conf/local.conf and add these lines:

MACHINE ?= "raspberrypi2"
GPU_MEM = "16"

MACHINE could also be set to raspberrypi, or to raspberrypi3, depending on your target hardware (the raspberrypi2 image should also run on an RPi3). The GPU_MEM setting allocates the minimum amount of memory to the GPU, leaving the rest for the ARM processor. See the README in meta-raspberrypi for details of these and other options.

Secondly, edit yocto/poky/build/conf/bblayers.conf and add meta-raspberrypi; mine looks like this:

# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= " \
 /home/david/yocto/poky/meta \
 /home/david/yocto/poky/meta-poky \
 /home/david/yocto/poky/meta-yocto-bsp \
 /home/david/yocto/poky/meta-raspberrypi \
 "

Now it's time to build. It's worth noting that the command oe-init-build-env doesn't just create the configuration files and build directory; it also sets up the environment, including the path. So if you build in a new shell, or after logging in, you need to re-run oe-init-build-env; it won't overwrite the changes you've made to the configuration. To build, I cd to the poky directory and then:

. oe-init-build-env build
bitbake rpi-basic-image

The build takes a long time the first time, potentially hours, as numerous packages are fetched. When I built for the first time a package failed to download because its git repository was unavailable, and the build failed; just re-run the bitbake command. Assuming everything succeeds you should see output that looks like this:

Parsing recipes: 100% |#########################################| Time: 00:00:22
Parsing of 891 .bb files complete (0 cached, 891 parsed). 1321 targets, 67 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies


Build Configuration:
BB_VERSION = "1.30.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "raspberrypi2"
DISTRO = "poky"
DISTRO_VERSION = "2.1.1"
TUNE_FEATURES = "arm armv7ve vfp thumb neon vfpv4 callconvention-hard cortexa7"
TARGET_FPU = "hard"
meta 
meta-poky 
meta-yocto-bsp = "krogoth:f5da2a5913319ad6ac2141438ba1aa17576326ab"
meta-raspberrypi = "master:2745399f75d7564fcc586d0365ff73be47849d0e"

NOTE: Preparing RunQueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
NOTE: Tasks Summary: Attempted 2167 tasks of which 911 didn't need to be rerun and all succeeded.

The resultant SD card image is found beneath the build directory; flash it to an SD card as you would any other image:

tmp/deploy/images/raspberrypi2/rpi-basic-image-raspberrypi2.rpi-sdimg

Testing

Insert the media into your Raspberry Pi, attach keyboard and monitor, power on...


ssh is also available in this image; access is available as root with no password. Before doing anything else, tighten up security.

Have fun...

VPN bridge from home network to AWS VPC with Raspberry Pi

Introduction and disclaimer

I wanted to extend my home network to a Virtual Private Cloud (VPC) within Amazon Web Services (AWS), primarily for use as a Jenkins build farm. I have achieved this using a Raspberry Pi as my Customer Gateway device. This post covers the process of configuring the Raspberry Pi and AWS from scratch. I'm posting it as a reminder to myself; hopefully others will find it useful.

NOTE: I found this post by Pahud Hsieh on hackmd.io very helpful while developing this. I have also relied heavily on the excellent documentation at http://aws.amazon.com/documentation/

I have a fairly standard home network on 192.168.0.0/24 with a router provided by my ISP. This post uses a Raspberry Pi on a static IP address within my home network as a VPN gateway, allowing any devices on my home network to communicate with EC2 instances (virtual machines) running within my VPC.

Network Diagram


Prerequisite

  • Raspberry Pi on a static IP address (in this example 192.168.0.30)
  • The public IP address of your home gateway is static (my ISP doesn't offer static IP addresses; however, the address has not changed for years)
  • An AWS account (I suggest also running some tutorials; the free tier is sufficient)

I'm starting with a clean install of Raspbian Jessie Lite, although other Linux distributions should work.

Configuring VPC

Log in to AWS and select Services->VPC; this takes you to the VPC dashboard. Start the VPC Wizard.


Choose VPC with a Private Subnet Only and Hardware VPN access and click Select.


Wait for the VPN to be created


Now, on the left towards the bottom, find the VPN Connections page and click the Download Configuration button at the top of the page.


In the downloaded configuration file, find the tunnel groups under the IKE section:

!
! The tunnel group sets the Pre Shared Key used to authenticate the 
! tunnel endpoints.
!
tunnel-group <TUNNEL1_IP> type ipsec-l2l
tunnel-group <TUNNEL1_IP> ipsec-attributes
 pre-shared-key <PSKEY_STRING>

You will need the <TUNNEL1_IP> and <PSKEY_STRING> values later; note them down.

Ensure that the static route to your home network exists in the VPN: open VPN Connections and select the Static Routes tab; it should show the CIDR for your home network. If not (my setup didn't), click Edit and type in the CIDR.


Configure the Raspberry Pi

Enable the Random Number Generator

Edit /boot/config.txt and append

# Enable random number generator
dtparam=random=on

Reboot and then install the random number generator tools

sudo apt-get install rng-tools

Install Openswan

sudo apt-get install -y openswan lsof

During package installation you are prompted about using X.509 certificates. I'm sure AWS supports these, but for now I'm skipping them for simplicity.


IPSec configuration

Edit /etc/ipsec.conf and set the content as shown below. NOTE: this includes configuration files from /etc/ipsec.d/*.conf, which allows different files to be used for different connections.

# /etc/ipsec.conf - Openswan IPsec configuration file
#
# Manual: ipsec.conf.5
#
# Please place your own config files in /etc/ipsec.d/ ending in .conf

version 2.0 # conforms to second version of ipsec.conf specification

# basic configuration
config setup
 # Debug-logging controls: "none" for (almost) none, "all" for lots.
 # klipsdebug=none
 # plutodebug="control parsing"
 # For Red Hat Enterprise Linux and Fedora, leave protostack=netkey
 protostack=netkey
 nat_traversal=yes
 virtual_private=
 oe=off
 # Enable this if you see "failed to find any available worker"
 # nhelpers=0

#You may put your configuration (.conf) file in the "/etc/ipsec.d/" and uncomment this.
include /etc/ipsec.d/*.conf

Create a configuration file for this connection by editing /etc/ipsec.d/home_to_aws.conf:

conn home-to-aws
   type=tunnel
   authby=secret
   #left=%defaultroute
   left=192.168.0.30
   leftid=123.123.123.123
   leftnexthop=%defaultroute
   leftsubnet=192.168.0.0/24
   right=<TUNNEL1_IP>
   rightsubnet=10.0.0.0/16
   pfs=yes
   auto=start

Where:

left – the IP address of your Raspberry Pi on your home network

leftid – the public IP address of your home gateway

leftsubnet – the CIDR of your home network

right – the IP address of Tunnel1 in your AWS gateway

rightsubnet – the CIDR of your VPC

Input the pre-shared key

Edit /var/lib/openswan/ipsec.secrets.inc and set the content as below:

123.123.123.123 <TUNNEL1_IP> : PSK "<PSKEY_STRING>"

Edit /etc/sysctl.conf

Append the following lines

net.ipv4.ip_forward=1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.default.accept_redirects = 0

Then run sysctl -p to reload it.

Check IPSec status

$ sudo ipsec verify
Checking your system to see if IPsec got installed and started correctly:
Version check and ipsec on-path [OK]
Linux Openswan U2.6.38/K4.4.11-v7+ (netkey)
Checking for IPsec support in kernel [OK]
 SAref kernel support [N/A]
 NETKEY: Testing XFRM related proc values [OK]
 [OK]
 [OK]
Hardware RNG detected, testing if used properly [OK]
Checking that pluto is running [OK]
 Pluto listening for IKE on udp 500 [OK]
 Pluto listening for NAT-T on udp 4500 [OK]
Checking for 'ip' command [OK]
Checking /bin/sh is not /bin/dash [WARNING]
Checking for 'iptables' command [OK]
Opportunistic Encryption Support [DISABLED]


Restart IPSec

sudo service ipsec restart

Check IPsec status

Make sure an active connection is running:

$ sudo service ipsec status
● ipsec.service - LSB: Start Openswan IPsec at boot time
 Loaded: loaded (/etc/init.d/ipsec)
 Active: active (running) since Mon 2016-07-11 17:56:40 UTC; 8min ago
 Process: 1660 ExecStop=/etc/init.d/ipsec stop (code=exited, status=0/SUCCESS)
 Process: 1746 ExecStart=/etc/init.d/ipsec start (code=exited, status=0/SUCCESS)
 CGroup: /system.slice/ipsec.service
 ├─1840 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --...
 ├─1841 logger -s -p daemon.error -t ipsec__plutorun
 ├─1842 /bin/sh /usr/lib/ipsec/_plutorun --debug --uniqueids yes --...
 ├─1845 /bin/sh /usr/lib/ipsec/_plutoload --wait no --post
 ├─1846 /usr/lib/ipsec/pluto --nofork --secretsfile /etc/ipsec.secr...
 ├─1853 pluto helper # 0 
 ├─1854 pluto helper # 1 
 ├─1855 pluto helper # 2 
 └─1974 _pluto_adns

Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: STATE_MAIN_I2: sent MI2,...R2
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: NAT-Traversal: Result us...ed
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: transition from state ST...I3
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: STATE_MAIN_I3: sent MI3,...R3
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: Main mode peer ID is ID_...4'
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: transition from state ST...I4
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #1: STATE_MAIN_I4: ISAKMP SA...8}
Jul 11 17:56:41 vpc pluto[1846]: "home-to-aws" #2: initiating Quick Mode PS...8}
Jul 11 17:56:42 vpc pluto[1846]: "home-to-aws" #2: transition from state ST...I2
Jul 11 17:56:42 vpc pluto[1846]: "home-to-aws" #2: STATE_QUICK_I2: sent QI2...e}
Hint: Some lines were ellipsized, use -l to show in full.

Start on reboot

sudo update-rc.d ipsec defaults

Reboot the Raspberry Pi and recheck the ipsec service status.

Check VPC connection in the VPC console

Make sure Tunnel1 is UP


Redundancy

For increased reliability, add the second tunnel configuration to the Raspberry Pi configuration.


Create an EC2 Instance

To test the configuration, launch an EC2 instance into your VPC. As an example I'm launching the free tier Ubuntu server.

From the EC2 Dashboard select Launch Instance, select Ubuntu Server 14.04 LTS (HVM), SSD Volume Type, select t2.micro (Free Tier Eligible) then click Next: Configure Instance Details. Select your VPC as the Network.


Click Next: Add Storage, Next: Tag Instance, Next: Configure Security Group. Create a new security group and add the rules you require; the example below adds SSH and ICMP (ping) from my home subnet.


Click Review and Launch, then Launch. Create a new key pair (or use an existing one if you prefer), download the key pair and keep it safe. Launch, and then View Instance. Wait for the status checks to complete.


Note the private IP address - in this example 10.0.1.19

From your Raspberry Pi you should now be able to ping the instance

$ ping 10.0.1.19
PING 10.0.1.19 (10.0.1.19) 56(84) bytes of data.
64 bytes from 10.0.1.19: icmp_seq=1 ttl=64 time=200 ms
64 bytes from 10.0.1.19: icmp_seq=2 ttl=64 time=204 ms

Adding local Routes

Devices on your home network that are to access the VPC need to have a static route added that identifies the Raspberry Pi as the gateway to use for 10.0.0.0/16.

On Macs and other BSD-like systems:

sudo route -n add 10.0.0.0/16 192.168.0.30

On Linux devices:

sudo ip route add 10.0.0.0/16 via 192.168.0.30

ssh access

Copy the key file that you previously downloaded to the machine you want to open an ssh session from. Ensure the .pem file has read-only permissions:

chmod 400 jenkins_aws.pem

Open an ssh session using the key

ssh -i jenkins_aws.pem ubuntu@10.0.1.19

All being well you are now logged in to your EC2 instance.