Reading List

The list below comprises the books I have found beneficial and that I now recommend to others. I’m always on the lookout for new books that will help me develop as an engineer, so suggestions are welcome.

Software Engineering

Clean Code – A Handbook of Agile Software Craftsmanship
A really readable and thought-provoking book, with so many lessons that I wish I had been able to read about rather than learning the hard way. How to move from being an expert beginner to a craftsman. Should be compulsory reading for all software engineers.
Clean Coder – A Code of Conduct for Professional Programmers
A focus on your responsibilities as a professional software engineer. Again, should be read by all software engineers. It covers such useful topics as how to behave as a professional, when and how to say no, how to avoid burnout and how to deal with conflict.
Clean Architecture – A Craftsman’s Guide to Software Structure and Design
How to design and deliver robust, flexible and reusable architectures.
The Pragmatic Programmer – Your Journey to Mastery (20th Anniversary Edition)
A recent update of the classic title, still full of great tips and advice.
Working Effectively With Legacy Code
A practical guide on how to get a legacy codebase under control.
“To me, legacy code is simply code without tests.”
― Michael C. Feathers, Working Effectively with Legacy Code
Refactoring – Improving the Design of Existing Code
A collection of different refactoring patterns to help improve existing code.
Design Patterns
The seminal work from the Gang of Four. There are newer books on design patterns, but this is still well worth reading. Watch out for the Singleton pattern though – I would now consider it an antipattern, a global in disguise.
Test Driven Development for Embedded C
An excellent book that shows in detail how TDD can benefit embedded systems. I find the examples just as relevant for those working in C++; a must read for those in the embedded industry.
Accelerate – Building and Scaling High Performance Technology Organisations
A rigorous look at what high performing organisations are doing technically and in their development processes. The book provides scientific, statistical analysis of what works; I have found this to be extremely useful when trying to justify changes to the development processes at organisations.

Agile

The Scrum Guide
Actually a PDF, rather than a book. Scrum is remarkably simple – read the guide! (free)
Lean from the Trenches – Managing Large Scale Projects with Kanban
My current favourite lean agile book. The book is structured in two parts: the first a real world example, and the second a closer look at the techniques used.
Scrum and XP from the Trenches – How We Do Scrum (2nd Edition)
Probably the best introduction to how Scrum works in practice. Extremely readable and full of examples of the practical application of Scrum and XP principles. (Free PDF available)
Kanban and Scrum – Making the Most of Both
A very practical book showing real examples of both methods; really useful when moving beyond Scrum. (Free PDF available)
Lean Software Development – An Agile Toolkit
Lean thinking has been key in manufacturing organisations for decades now. This book explores how the same principles work for software development, and shows principles for scaling agile beyond the development team. One of my favourites. Mary and Tom’s other books on lean are also well worth reading.
Clean Agile – Back To Basics
Back to the original principles behind Agile. Agile, and Scrum, as originally intended is a development practice for small software teams. This book reminds us what agile is and explains its principles, tools and techniques.
Agile Estimating and Planning
Agile is not an excuse for not planning or estimating projects. This book provides lots of examples of how agile planning and estimating works, both at the team level and when scaling into large projects.
Agile Retrospectives – Making Good Teams Great
Retrospectives are arguably the most important of the Scrum ceremonies. This book is full of ideas on how to ensure that your retrospectives are effective, fresh and generate ways of improving.

C++

Effective C++ – 55 Specific Ways to Improve Your Programs and Designs
Good practical advice on how to write good C++. Still relevant after the introduction of Modern C++.
More Effective C++ – 35 New Ways to Improve Your Programs and Designs
More excellent tips. Still relevant after the introduction of Modern C++.
Effective Modern C++
Covers up to C++14. Modern C++ is a different language, and the tips in here help you get the most from it. I found this book harder going than the earlier ones in the series, probably due to how complex the language has become rather than anything else. Still, it is essential to understand how to use the more modern features well.

Tools

Docker Deep Dive – Zero to Docker in a Single Book
Docker is one of the key tools of software development. This book clearly covers everything you need to understand containerisation, and Docker in particular.

UniPi Neuron for CODESYS – available now

Available Now

Cozens Software Solutions are pleased to announce that UniPi Neuron for CODESYS is available now for download from the CODESYS store.

UniPi Neuron for CODESYS provides CODESYS driver support for the full range of UniPi Neuron PLCs and expansion modules.

  • UniPi Neuron L20x, L30x, L40x, L50x, L51x, M10x, M20x, M30x, M40x, M50x and S10x
  • UniPi Neuron Expansion modules xS10, xS30, xS40 and xS50

The UniPi Neuron is a product line of PLC (Programmable Logic Controller) units built to be universal and used in both Smart Home and Business applications and automation systems. 

CODESYS is the leading hardware independent IEC 61131-3 development system under Windows for developing and engineering controller applications.

TDD Zombies

Not these zombies

When I first used TDD I read James Grenning’s book Test Driven Development for Embedded C. In it James proposed following a pattern for developing tests: test for zero, then one, then many (ZOM). Recently he has developed this idea further into ZOMBIE testing.

Z – Zero
O – One
M – Many (or More complex)
B – Boundary Behaviors
I – Interface definition
E – Exercise Exceptional behavior
S – Simple Scenarios, Simple Solutions

I’ve found this to be a really helpful pattern to follow when developing tests. To read more about it see James’ recent post TDD Guided by ZOMBIES.
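As a sketch of how the pattern guides a test list, here is how the first few tests for a hypothetical circular buffer might progress through Z, O, M and B (the buffer API here is made up for illustration):

#include "gtest/gtest.h"
#include "circular_buffer.h" // hypothetical unit under test

// Z – Zero: behaviour of the empty buffer
TEST(CircularBuffer, IsEmptyWhenCreated)
{
    CircularBuffer buffer;
    ASSERT_TRUE(buffer.IsEmpty());
}

// O – One: the simplest non-trivial case
TEST(CircularBuffer, ReturnsTheOneItemPut)
{
    CircularBuffer buffer;
    buffer.Put(42);
    ASSERT_EQ(42, buffer.Get());
}

// M – Many, and B – Boundary: fill right up to capacity
TEST(CircularBuffer, ReportsFullAtCapacity)
{
    CircularBuffer buffer;
    for (int i = 0; i < CircularBuffer::kCapacity; ++i)
    {
        buffer.Put(i);
    }
    ASSERT_TRUE(buffer.IsFull());
}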

The Pragmatic Programmer

The Pragmatic Programmer – from journeyman to master

I think I originally read The Pragmatic Programmer by Andrew Hunt and David Thomas a good ten or fifteen years ago. I’ve just taken a couple of days while between contracts to re-read the book.

I was very pleased to find that the book is just as fresh as I remember: 70 great pragmatic tips to help you develop from a journeyman to a master. Given that the book is 16 years old, some of the technology referenced seems dated, e.g. the discussion of version control predates SVN and Git. However the technology referenced is not the point of the book; it is completely geared around taking pragmatic steps to produce better software.

If you want to grow as a software engineer, this book is still a must read.

Learning Python


Python has never been a language I have had to know well. I’ve adapted existing scripts, I’ve created a few simple scripts from scratch. But I haven’t learnt it properly, just the parts I’ve needed.

I decided it was about time I learnt the language properly. A friend recommended that I take a look at Python Koans. A koan is a riddle or puzzle used in Zen meditation to help gain enlightenment.

Python Koans is an interactive tutorial for learning the Python programming language by making tests pass. Tests are run by executing contemplate_koans.py:

python contemplate_koans.py

A single test will fail, telling you what has failed and what you need to think about to make it pass.

Most tests are fixed by filling in the missing parts of assert functions, e.g.:

self.assertEqual(__, 1+2)

which can be fixed by replacing the __ part with the appropriate code:

self.assertEqual(3, 1+2)

Very quickly I got into a rhythm, much like TDD: red, fix, green, repeat. I would definitely recommend this as a way of learning the language.

 

Refactoring C to Remove Feature Flags

You’ve read the books on refactoring, on working with legacy code, on unit testing and on TDD. Then you look at the codebase you’ve inherited: it’s written in C, and it’s riddled with conditional compilation. Where do you start?


In years gone by feature flags were widely used in embedded systems as a means of having a common codebase shared across multiple devices. The devices varied in what hardware was present, what capacity there was in terms of RAM, ROM and performance. The devices also varied according to market demands, e.g. some features were only required on ‘premium’ products.

Now imagine how the codebase could have deteriorated over the years. Some of the code is forty years old, the codebase has been targeted at fifty different hardware platforms, and at a marketing level there have been over one hundred different features. There is a terrifying number of potential combinations of ways the software could be built.

How bad is your code? This command will show you how many different conditional compilation lines there are in your code. Admittedly some will only be different because of whitespace, or because of the order of the flags.

grep --include=*.c --include=*.h -r -h '#if' . | sort -u | wc -l

I’ve been faced with a codebase containing 16,000 different conditional compilation lines; codebases exist with many more than that. Where do you start? Should you start?

With this amount of conditional compilation, introducing unit testing may appear impossible: each test fixture can only be compiled with one combination of feature flags. You may be able to use it for new modules, but how about for maintenance? This article offers a step-by-step approach that I have used to remove feature flags, and remove conditional compilation, from a large codebase (a few million lines).

As with all refactoring there is a level of risk; the aim of these changes is to minimise that risk by taking baby steps and using a safety net.

Step 1 – Preparation – Repeatable builds

To remove a feature flag we need a test to know that we haven’t impacted the code. The method I like to use is to confirm that the build produces binary-identical output before and after the change: perform two complete builds and compare the build output. We need to get to the state where they are identical. There are multiple reasons why the output may vary, and these need to be addressed before we attempt any refactoring:

  • Problem – The time/date of the build is included in the binary.
  • Solution – Make the build use a fixed time for your test purposes. How you do this depends on how the time and date are injected into the build. Consider link time substitution of a fixed file, disabling that part of the makefile, or conditional compilation.
  • Problem – The version of a file or a checkout from the version control system is embedded in the build.
  • Solution – Be careful to check out both copies from the same revision. If the revision information is in a single source file, consider link time substitution to replace it with static values. If the information is in a single header file, consider using the include path to prioritise a file with static values.
  • Problem – The file format of your binary includes the time that the build was performed.
  • Solution – Use another form of output to decide that the builds are identical, e.g. transform the output into a plain format such as .bin or SREC, or use a map file for comparison (e.g. if using GNU tools, look at objcopy and strip).
  • Problem – The file format includes the paths of source files.
  • Solution – Use tools to strip debug information from the binary (e.g. if using GNU tools, look at objcopy and strip), or perform both builds in the same directory.

This process needs to be repeated for every build that is to be supported from your codebase. There may have been hundreds of products delivered, but it is likely that only a small subset still require support. To be confident in your changes you must be sure that you are not impacting any of the current builds.

Step 2 – Identify redundant feature flags

We can identify a feature flag as redundant in any of these circumstances:

  1. It is defined to the same value on all supported platforms
  2. It is undefined on all platforms
  3. There are no longer any uses of the flag in the code
  4. All uses of the feature flag are in sections of code removed by other feature flags

For cases 1 and 2, use the pre-processor to prove that your assumption is correct by forcing a build that will fail only if it is correct (choose the failing option because it is faster to test). For example, if you believe that FEATURE_A always has the value 1 on all platforms, add the following to a source file included early on in all builds.

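Something along these lines (a reconstruction; the exact message is unimportant):

#if FEATURE_A == 1
/* Every build in which the assumption holds will now fail.
   Any build that succeeds has FEATURE_A at some other value,
   disproving the assumption. */
#error "FEATURE_A is 1 in this build"
#endif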

 

Then verify that all of your builds fail. If they do then you know that this flag is safe to remove.


Step 3 – Remove Feature Flags

Following on from the example above, assume that we have discovered that FEATURE_A always has the value 1 in all of the builds we need to support. How can we remove FEATURE_A when it may be mentioned in many of the thousands of files in our build? Removing it by hand is going to be time-consuming and, worse, error-prone.

To automate the process use unifdef. The command below invokes unifdef on every .c and every .h file below the current directory, and removes the conditional compilation related to FEATURE_A.

find . -name '*.[ch]' | xargs unifdef -DFEATURE_A=1 -m

Let’s see what this did to our example function below. Not only has the #if FEATURE_A statement been removed, so too has #if FEATURE_A || FEATURE_B; unifdef was smart enough to determine that with FEATURE_A set to 1 the compound condition was always true.

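As a sketch (everything here apart from unifdef’s behaviour is illustrative):

/* Before: */
int VeryLongFunction(void)
{
    int ret = DoSomething();
#if FEATURE_A
    ret = DoFeatureAWork(ret);
#endif
#if FEATURE_A || FEATURE_B
    ret = DoMoreWork(ret);
#endif
    return ret;
}

/* After unifdef -DFEATURE_A=1 -m: the directives are gone, the code remains. */
int VeryLongFunction(void)
{
    int ret = DoSomething();
    ret = DoFeatureAWork(ret);
    ret = DoMoreWork(ret);
    return ret;
}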

At this stage rebuild all of the applications, verify that none of the binaries have changed, and commit the change to version control. Then repeat for the next feature flag. Let’s see one more example: suppose FEATURE_B is always undefined. unifdef can be used to remove the feature with this command:

find . -name '*.[ch]' | xargs unifdef -UFEATURE_B -m

Here we can see that the code guarded by #ifdef FEATURE_B has been removed, as well as the feature flag.

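Again as a sketch (illustrative code):

/* Before: */
#ifdef FEATURE_B
    ret = DoFeatureBWork(ret);
#endif

/* After unifdef -UFEATURE_B -m: nothing remains – both the directive
   and the guarded code have been deleted. */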

Verify that the binary output is identical for all builds. Commit the changes to version control and repeat.

Should you be worried about making these changes? What about the code that is being deleted, isn’t it valuable? No, it has no value. It isn’t included in any current builds, so it carries no current value; it adds confusion and slows development, so it has cost and not value. If you ever have to look at what had previously been included in a feature you have removed, your VCS provides a means of accessing that code. And if you have followed this process then you have a single commit for the removal of each feature.

I would repeat the above process for every feature flag that I suspect is identically defined in all live builds.

With the safety net of knowing all builds are binary identical, there is no risk of introducing bugs.

Step 4 – Removal of a feature flag that is in different states in different builds

Now consider the final conditional in our function: FEATURE_C. FEATURE_C is defined as 1 in some of our builds, and as 0 in others. How can we safely remove the conditional? Should we attempt to remove it at all?

Personally I would attempt to remove this conditional only when I start working on code that is impacted by the conditional compilation, and not before.

It is unlikely that we are going to be able to make the changes to remove this feature and leave all builds binary identical, so we need another safety net to tell us that what we are doing has not had any nasty side effects.

To change the code away from using the pre-processor we must choose one of three other ways of varying the behaviour between builds.

  1. Compile Time Substitution
  2. Link Time Substitution
  3. Runtime Substitution

Let’s assume we need to do some maintenance work in VeryLongFunction(). Before we try to make a functional change we want to get rid of this conditional compilation, and before we get rid of the conditional compilation we want tests to tell us that it is safe to do so.

So our first step is to create a test harness for this source file. Rather than re-state the process, look at James Grenning’s article TDD How-to: Get your Legacy C into a Test Harness. In this test harness, have FEATURE_C defined as 1, so that our conditionally included code is included in the test harness.

Now write some tests that prove the functionality of VeryLongFunction(), including a test that checks calls to wibble only occur if the previous functions have succeeded.

Great, we have a test harness; now we can start refactoring. In this scenario Extract Method looks like a good refactoring to try. Let’s pull all of the code inside FEATURE_C out into a well named function (FeatureCWibbleIfOK isn’t a great name, but it will do for our example – do pay attention to the name you choose). We end up with something like:

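(A sketch reconstructed from the description – everything apart from FeatureCWibbleIfOK and wibble is illustrative.)

int FeatureCWibbleIfOK(int ret)
{
#if FEATURE_C
    if (ret == 0) /* previous steps succeeded */
    {
        ret = wibble();
    }
#endif
    return ret;
}

/* ...and in VeryLongFunction() the conditionally compiled block
   becomes a single unconditional call: */
ret = FeatureCWibbleIfOK(ret);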

All of our tests still pass; we are good to continue. The next step in our refactoring is to open up a seam that allows us to substitute different behaviour. We move the function out into a new source file and create a new header – say feature_c.c and feature_c.h. These files are included into our test harness, and our tests all still pass.

The next step is to produce a test fixture to prove feature_c. Once this is done we can simplify the tests in our original test harness to prove that FeatureCWibbleIfOK is being called correctly, and remove feature_c from that test harness.

We are now at a point where we can substitute different behaviour, and we need to decide which of our three possibilities to use. In the first two cases we should develop a new test fixture, initially a copy of the feature_c test harness, using a copy of feature_c.c. Modify the tests to expect the behaviour with FEATURE_C undefined, run the tests and observe them fail. Undefine FEATURE_C in the test harness and observe the tests pass. You can then remove the FEATURE_C feature flag and code.

Compile Time Substitution

In compile time substitution we use the include path to insert one of two different copies of feature_c.h. For example, one could have a plain prototype:

int FeatureCWibbleIfOK(int ret);

and the other could have a null inline implementation:

static inline int FeatureCWibbleIfOK(int ret) { return ret; }

Link Time Substitution

For link time substitution, a second copy of feature_c.c may look a bit like:

#include "feature_c.h"
int FeatureCWibbleIfOK(int ret)
{
return ret;
}

Runtime Substitution

Here we presume that there is some runtime check that allows us to determine whether FEATURE_C is enabled. Use normal TDD methods to test-drive this into your application.
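As a sketch, feature_c.c might end up like this (FeatureIsEnabled and FEATURE_C_ID are hypothetical – the real check depends on how your product configures features at runtime):

#include "feature_c.h"

int FeatureCWibbleIfOK(int ret)
{
    /* The conditional compilation has become an ordinary runtime decision. */
    if (FeatureIsEnabled(FEATURE_C_ID) && (ret == 0))
    {
        ret = wibble();
    }
    return ret;
}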

Summary

Refactoring a large legacy codebase that is riddled with conditional compilation is hard. However, it can be achieved safely with care, allowing the code to be brought under the control of test harnesses. You may never achieve full coverage, but you should be able to bring the areas that you work on under control, get tests in place and gradually improve the quality and maintainability of the code.

It may be hard, but what other choice do you have?


 

Developing the CODESYS runtime with TDD

Introduction

 

I was recently working in the CODESYS runtime again, developing some components for a client, and I thought the experience would make the basis of a good post on bringing legacy code into a test environment to enable Test Driven Development (TDD).

The CODESYS runtime is a component based system, and for most device manufacturers it is delivered as a binary for their target system plus a collection of header files and interface definitions. Much of the interface is generic, but there are platform specific headers that abstract the underlying RTOS. Device manufacturers often develop bespoke runtime components, for example to access proprietary IO. To help with this, the delivered software package includes template components as a starting point for development. This means that, according to Michael Feathers’ definition of legacy code (code without tests), the starting point when developing a CODESYS component is legacy code. In this example the starting point was a partially developed component – legacy code.

 

The Plan

I tend to follow a fairly standard process when bringing legacy code under test. The basic process is well described in TDD How-to: Get your legacy C into a test harness on James Grenning’s blog. I follow roughly the same process with minor changes; it can be summarised as follows:

  • Select appropriate tools
  • Create a test harness with no reference to the code to be tested and a dummy failing test. Observe it fail. Fix the test and observe it pass.
  • Decide the boundaries of the code I want to test, and include this source in the test harness build.
  • Make the test harness compile (not link)
  • Make the code link using exploding fakes.
  • Ensure the dummy test still passes
  • Add the first test of the code under test (expect it to crash or fail)
  • Make the test pass by adding initialisation, and using better fakes.
  • Add more tests, always observing them fail (force a failure if need be, to check that the error output is meaningful), and factor out common code into helper functions. Keep the tests small, each testing one thing.
  • Add profiling. I like to be able to observe which parts of the code are under test before I make any changes; particularly if the code under test has large complex functions, it is the only way I can trust that I have sufficient coverage before making code changes.

Tools

The development build of the component uses a gcc cross compiler on Linux. The build is controlled by a makefile and there is already an Eclipse project.

I will use the native gcc compiler to run the tests.

For the testing framework I’m using googletest 1.8, my preferred test framework for C and C++.

To help with creating fakes and mocks I will use Mike Long’s Fake Function Framework (fff).

I will add plugins to Eclipse so that the whole process can happen in a single environment.

 

The first test

There are two ways of using googletest: one is to build it as a library and link it to the tests, the other is to fuse the source into a single file and include the fused source in the tests. On Linux I tend to just build the library with default settings.

I’ve created a new folder called UnitTests, to which I’ve added a makefile and a single source file with this content:

#include "gtest/gtest.h"
namespace 
{
TEST(FirstTest, ShouldPass)
{
ASSERT_EQ(1,0);
}
} // namespace

The makefile references just this source file, and the include path contains the path to googletest/include. The link line is shown below (I’ve omitted the paths for simplicity):

g++ FirstTest.o gtest_main.a -lpthread -o UnitTest

This builds, and when run the single test fails.

Change the ASSERT_EQ so that the test passes, rebuild and re-run the tests.

Compiling with the UUT

The CODESYS component that I’m working on consists of a single source file (the Unit Under Test, or UUT), which links into a target specific library.

To get the test application to compile I had to add three directories to the include path:

-I$(CODESYS)/Components
-I$(CODESYS)/Platforms/Linux
-I$(TARGET_LIB_SRC)/include

NOTE: If the CODESYS runtime delivery is for a different operating system to the development system then it may be necessary to create fake versions of the headers in the Platforms directory. It may also be necessary to fake some of the RTOS header files.

Linking – Exploding Fakes

Having resolved the includes there are lots of unresolved symbols. A good starting point is to generate a file of exploding fakes; the idea here is to ensure that you know when you are faking code. Have a look at James’ exploding fake generator, which can easily be adapted to any linker and any test framework. Save the output of your failed link into a file, then execute gen-exploding-fakes-from-linker-output.sh to generate a file of exploding fakes which you include into your build.

make >& make.out
gen-exploding-fakes-from-linker-output.sh make.out explodingfake.c

The only other change required is to copy explodingfakes.h somewhere on the include path for the tests and adapt it to work with gtest, as shown:

#ifndef EXPLODING_FAKE_INCLUDED
#define EXPLODING_FAKE_INCLUDED
#include "gtest/gtest.h"
#define EXPLODING_FAKE_FOR(f) void f() { FAIL() << "go write a proper stub for " #f; }
#endif

Now the test application should run and pass again; none of the UUT is yet being executed.

 

Testing – Part 1

CODESYS components have well defined interfaces, and I find it pays to test from those interfaces rather than exposing the internals of the component wherever possible. Taking this approach tends to lead to less fragile tests that test the functionality rather than the implementation.

All components implement CmpItf, an interface that allows the component to be registered and initialised. CmpItf requires a single extern function, ComponentEntry, to be declared; all other functions in the interface are accessed through function pointers returned by this function call. So my starting point is to write tests that exercise this interface.

The first tests are straightforward, and soon the ComponentEntry call itself is factored out into the test constructor.

#include "gtest/gtest.h"
extern "C"
{
#include "CmpMyComponentDep.h"
DLL_DECL RTS_INT CDECL ComponentEntry(INIT_STRUCT *pInitStruct);
}
namespace
{
class CmpItfTest: public ::testing::Test
{
public:
CmpItfTest():m_rResult(ERR_OK),m_InitStruct()
{
m_rResult = ComponentEntry(&m_InitStruct);
}
RTS_RESULT m_rResult;
INIT_STRUCT m_InitStruct;
};
TEST_F(CmpItfTest, ComponentEntryShouldSucceed)
{
ASSERT_EQ(ERR_OK, m_rResult);
}
TEST_F(CmpItfTest, ComponentEntryShouldSetComponentID)
{
ASSERT_EQ(0x166B2002, m_InitStruct.CmpId);
}
TEST_F(CmpItfTest, CmpGetVersionShouldReturnCorrectVersion)
{
ASSERT_EQ(0x03050800, m_InitStruct.pfGetVersion());
}

Fairly soon I am testing code that calls into other CODESYS components; as soon as I do, the exploding fakes show up in the tests.


 

Using The Fake Function Framework

Now I need a more powerful fake; this is where the fake function framework comes into its own. Creating a fake for EventOpen can be as simple as adding the following to the test source file, and making sure fff.h is on the include path:

#include "fff.h"
#include "CmpEventMgrItf.h"
DEFINE_FFF_GLOBALS;
FAKE_VALUE_FUNC( RTS_HANDLE, EventOpen , EVENTID , CMPID , RTS_RESULT *);

Having added this, the link will fail with a message like:

CmpEventMgrItf.fff.c:7: multiple definition of `EventOpen'

Remove the line from explodingfakes.c for EventOpen, and the tests should now run again.

It is then possible to write a simple test to prove that the EventOpen function has been called.

TEST_F(CmpItfTest, HookCH_INIT3ShouldOpenEvent)
{
    m_InitStruct.pfHookFunction(CH_INIT3, 0, 0);
    ASSERT_EQ(1, EventOpen_fake.call_count);
}

The Fake Function Framework includes facilities for recording a history of argument calls, setting return values and providing custom fakes. It makes a very powerful tool for testing C code; I’m not going to cover all of the features here as there are plenty of other examples on the web. Do note though that fakes need to be reset for each new test. The constructor for my test fixture looks like this:

CmpItfTest() : m_rResult(ERR_OK), m_InitStruct()
{
    m_rResult = ComponentEntry(&m_InitStruct);
    RESET_FAKE(EventOpen);
    FFF_RESET_HISTORY();
}
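To give a flavour of those facilities, a test can set the value a fake returns and then inspect how the fake was called (EVT_EXAMPLE is a made-up event id – the real one depends on the component):

TEST_F(CmpItfTest, HookCH_INIT3ShouldOpenTheExpectedEvent)
{
    EventOpen_fake.return_val = (RTS_HANDLE)1; // handle handed back to the UUT
    m_InitStruct.pfHookFunction(CH_INIT3, 0, 0);
    ASSERT_EQ(1, EventOpen_fake.call_count);
    ASSERT_EQ(EVT_EXAMPLE, EventOpen_fake.arg0_val); // first argument of the last call
}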

As tests grow and multiple test files use the same fakes, it makes sense to pull the fakes out into separate files. I follow a pattern: if I am faking functions defined in a file called XXX.h, I create XXX.fff.h and XXX.fff.c and define my fakes in these files. Most of the time I take the approach of generating each fake manually, one by one, as required.

CODESYS specifies the interface to all components in .m4 files; in the delivery I have there are 164 interface files specified. I know that over time these interfaces will be extended, and more interfaces added. I have written a tool to process the interface definitions and automatically generate fff fakes for each API function in each of the interfaces. I then build these fakes into a static library that can be linked with any component I develop.

There is a danger in automating fake generation: it becomes very easy not to realise when you are using a fake. Most API functions in CODESYS return an RTS_RESULT, where ERR_OK means success. ERR_OK has the value of zero, which is also the default value returned by fff fakes. If developing new code this isn’t a problem, but when bringing a legacy component under test it can lead to code appearing to be tested when it isn’t. This can be avoided by still using exploding fakes within fff.

To achieve all of this using the test library, all I need in the tests is an include of the appropriate fake header file:

#include "CmpEventMgrItf.fff.h"

and the test constructor is changed to reset all of the CmpEventMgrItf fakes, set all of the fakes to explode, and then, for the two functions that I want to fake, disable the exploding behaviour:

CmpItfTest() : m_rResult(ERR_OK), m_InitStruct()
{
    m_rResult = ComponentEntry(&m_InitStruct);
    FFF_CmpEventMgrItf_FAKES_LIST(RESET_FAKE);
    FFF_RESET_HISTORY();
    FFF_CmpEventMgrItf_FAKES_LIST(FFF_EXPLODE);
    // Allow normal fake operation for these functions; all others in the
    // interface will explode if called.
    EventOpen_fake.custom_fake = NULL;
    EventRegisterCallback_fake.custom_fake = NULL;
}

What does the fakes library look like?

To show what is included in the library of fakes, below is the content of the CmpEventMgrItf fakes, cut down to just the two functions that have been used.

CmpEventMgrItf.fff.h

#ifndef __CmpEventMgrItf__FFF_H__
#define __CmpEventMgrItf__FFF_H__
#include "fff.h"
#include <string.h>
#include "fff_explode.h"
#include "CmpEventMgrItf.h"
DECLARE_FAKE_VALUE_FUNC3( RTS_HANDLE, EventOpen , EVENTID , CMPID , RTS_RESULT * );
DECLARE_FAKE_VALUE_FUNC2( RTS_RESULT, EventRegisterCallback , RTS_HANDLE , ICmpEventCallback * );
RTS_HANDLE EventOpen_explode( EVENTID , CMPID , RTS_RESULT * );
RTS_RESULT EventRegisterCallback_explode( RTS_HANDLE , ICmpEventCallback * );
#define FFF_CmpEventMgrItf_FAKES_LIST(FAKE) \
    FAKE(EventOpen)                         \
    FAKE(EventRegisterCallback)
#endif /* __CmpEventMgrItf__FFF_H__ */

Other than including headers, three things are happening in this file: firstly the fff fakes are declared, secondly prototypes for the exploding functions are declared, and finally a list of all faked functions is created, allowing operations to be performed on all fakes in one statement.

CmpEventMgrItf.fff.cpp

#include "CmpEventMgrItf.fff.h"
DEFINE_FAKE_VALUE_FUNC3( RTS_HANDLE, EventOpen , EVENTID , CMPID , RTS_RESULT * );
DEFINE_FAKE_VALUE_FUNC2( RTS_RESULT, EventRegisterCallback , RTS_HANDLE , ICmpEventCallback * );
RTS_HANDLE EventOpen_explode( EVENTID  a, CMPID  b, RTS_RESULT * z ){ fff_explode("EventOpen"); return (RTS_HANDLE)0; }
RTS_RESULT EventRegisterCallback_explode( RTS_HANDLE  a, ICmpEventCallback * z ){ fff_explode("EventRegisterCallback"); return (RTS_RESULT)0; }

The fff fakes are defined, along with definitions of the exploding fakes. Each exploding fake calls fff_explode, which is declared in a separate module, allowing the way it explodes to be changed for a different testing tool.

fff_explode.h

#ifndef __FFF_EXPLODE_H__
#define __FFF_EXPLODE_H__
#define FFF_EXPLODE(a) a##_fake.custom_fake = a##_explode;
#ifdef __cplusplus
extern "C"
{
#endif
void fff_explode(const char * func);
#ifdef __cplusplus
}
#endif
#endif /* __FFF_EXPLODE_H__ */

The macro FFF_EXPLODE(a) sets the custom_fake member of an fff fake to point to the exploding fake.

fff_explode.cpp

#include "fff_explode.h"
#include "gtest/gtest.h"
#ifdef __cplusplus
extern "C"
{
#endif
void fff_explode(const char * func)
{
    FAIL()<<"Time to use fake for "<<func;
}
#ifdef __cplusplus
}
#endif

Keeping it fast

As I mentioned in the tools section, the production code is built in Eclipse. I want to build the test code in Eclipse as well, and I want everything to work seamlessly.

I added a second Build Configuration to the production code project and made it build the unit tests. Having done this I want to run the tests every time I build (or rather, I want to run the tests after every code change, and have the code rebuilt if required). This requires an optional component to be installed in Eclipse. Go to Help->Install New Software…, choose to Work with: –All Available Sites– and then under Programming Languages select C/C++ Unit Testing Support, click Next>, Next>, Finish and wait for the install to complete. Restart Eclipse when prompted.

Now right click on your project in Eclipse and select Run As->Run Configurations... Create a new C/C++ Unit Test configuration. Use Search Project to find your unit test application, then on the C/C++ Testing tab select Google Tests Runner.


When you run this configuration, it should force your tests to be built and then display the results graphically. Clicking on any failures will take you to the failing tests.


Profiling

Particularly when bringing legacy code under test, I like to be able to visualise what is being tested and what isn’t. If you are using gcc then this becomes very easy.

Add these compiler flags to the compilation of the unit under test, and to the link line:

-fprofile-arcs -ftest-coverage

Building and then running with profiling generates .gcda and .gcno files. These are specific to a particular build, so to ensure there are no mismatches between versions, add an action to the link rule in the makefile to remove all .gcda and .gcno files from the object directory.

Now, having run your tests, look in the object directory in Eclipse and you will see the .gcda and .gcno files; double click one of them. In the dialog that pops up, ensure that your unit test executable is selected, and choose “Show coverage for the whole selected binary”.

For me the key is not the amount of code covered, but rather what has been covered by my tests. Each file can be inspected, and it is very clear what was run by the tests and what wasn’t. This helps me decide whether I have sufficient coverage before making changes. For example, the bars below show that my tests don’t cover all of the initialisation functions.

(coverage screenshot: the initialisation functions are only partially covered)

ExportFunctions is a standard function that is part of all components; the implementation shouldn’t change. The image below shows that the test suite invokes it, but that there must be a return statement inside the EXPORT_STMT. Without code coverage I may never have known that some of the code wasn’t being exercised. Inspecting the code will then tell me whether I need to add tests. This may be a trivial example, but I hope it shows why inspecting test coverage helps you understand what is being tested. You can then make informed decisions about increasing the coverage, or accept that you have gone far enough.

(coverage screenshot: ExportFunctions is invoked, but the lines after the return inside EXPORT_STMT are never executed)

Once I’m happy with the coverage in an area I want to change, I can start more traditional TDD development. Having started TDD I tend not to use the code coverage checks very often; being rigorous about TDD tends to lead to 100% coverage. The main time I re-use the coverage checks is when I have refactored the UUT: it helps to show not just that the existing functionality still passes, but that I haven’t inadvertently added some untested functionality.

Summary and next steps

Investing the time to get the component under test has given me a reusable test harness that allows me to extend and refactor the code with confidence. Future development can happen much faster than it otherwise would, as much of the functionality can be proven before taking the software anywhere near the embedded target.

For some components it is worth investing the time to create pre-canned functionality through custom_fakes. Consider these components:

SysMem

With no further work fff can be used to simulate failures, check the sizes being allocated and return fixed data structures on allocations. However, in some tests we just want the memory allocation to work, so having a simple set of custom fakes that delegate these calls to functional equivalents is worthwhile. Another useful extension is to track allocation and freeing of memory; tracking can then be enabled in a test fixture’s setup and checked in its teardown.
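As a sketch of the idea (the SysMem signatures here are assumptions – check the real ones in your SysMemItf headers):

#include <stdlib.h>

static int g_allocBalance; // +1 on every alloc, -1 on every free

static void *SysMemAllocData_delegate(char *pszComponent, RTS_SIZE size, RTS_RESULT *pResult)
{
    (void)pszComponent;
    g_allocBalance++;
    if (pResult) *pResult = ERR_OK;
    return malloc(size);
}

static RTS_RESULT SysMemFreeData_delegate(char *pszComponent, void *pData)
{
    (void)pszComponent;
    g_allocBalance--;
    free(pData);
    return ERR_OK;
}

/* In the fixture: point the fff custom_fakes at these delegates, zero
   g_allocBalance in the setup, and assert that it is zero again in the
   teardown to catch leaks. */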

CMUtils

This component provides string manipulation and other utility functions; in most cases it is preferable to have a working double rather than the standard fff fake. If you have a source code distribution of the runtime code, I would attempt to link this with the tests.

SysTime and SysTimeRTC

One of the great advantages of unit testing in embedded systems is being able to run tests faster than realtime. Develop custom_fakes that allow you to take control of the progress of time.
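As a sketch (again, the exact SysTime API shape is an assumption – adjust to the signatures in your SysTimeItf headers):

static RTS_UI32 g_fakeNowMs;

static RTS_UI32 SysTimeGetMs_delegate(void)
{
    return g_fakeNowMs;
}

/* Install as a custom_fake, then a test can make five seconds pass
   instantly instead of sleeping: */
g_fakeNowMs += 5000;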

Continuous Integration

Tests are only useful when they are run. Setting up a continuous integration system to build and test each component every time there is a change to the source code is the way to go.

Continuous Delivery

How far can you go towards continuous delivery? Using a combination of free tools and the CODESYS Test Manager, I have set up delivery pipelines that build the embedded code, run unit tests, perform static analysis, generate documentation, package up instrument firmware, build and test CODESYS libraries, automate version number management, create CODESYS packages, deploy the code into test systems and invoke automated testing (integration and system). If the tests all pass then the packages can be promoted to potential release candidates, ready for final human validation as required.

 

Getting Started with Yocto on the Raspberry Pi

Introduction

I’ve been wanting to have a play with Yocto, so I decided to have a go at getting an image running on a Raspberry Pi. I found plenty of references but no step by step that just worked. This post covers my notes on how to get going.

Development Machine

The Yocto Project Quick Start states “In general, if you have the current release minus one of the following distributions, you should have no problems”, then lists several distros including Ubuntu. I originally tried using Ubuntu 16.04 and had problems, I think because of its later version of gcc.

I used a clean install of Ubuntu 14.04 desktop running on a virtual machine; the rest of the process was actually pretty straightforward.

Firstly I made sure that Ubuntu was fully patched:

sudo apt-get update
sudo apt-get upgrade

Then, following the Yocto Project Quick Start, I installed the required packages:

sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat libsdl1.2-dev xterm

I then disabled the dash shell in favour of the bash shell (I did this because I saw it advised; I’m not sure whether it is required):

sudo dpkg-reconfigure dash

Getting the code and Building

At the time of writing krogoth is the latest version of Yocto, hence I am working with that branch. The raspberrypi meta data is not currently branched, so I am working with the master branch. The first step is to clone yocto and meta-raspberrypi:

mkdir yocto
cd yocto
git clone -b krogoth git://git.yoctoproject.org/poky.git poky
cd poky
git clone -b master git://git.yoctoproject.org/meta-raspberrypi

Now generate the default configuration files into the default directory, build:

. oe-init-build-env build

Now we need to edit the build configuration. Firstly edit yocto/poky/build/conf/local.conf and add these lines:

MACHINE ?= "raspberrypi2"
GPU_MEM = "16"

MACHINE could also be set to raspberrypi, or to raspberrypi3, depending on your target hardware (the raspberrypi2 image should also run on an RPi3). The GPU_MEM setting allocates the minimum amount of memory to the GPU, leaving the rest for the ARM processor. See the README in meta-raspberrypi for details of these and other options.
Secondly edit yocto/poky/build/conf/bblayers.conf and add meta-raspberrypi; mine looks like this:

# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
  /home/david/yocto/poky/meta \
  /home/david/yocto/poky/meta-poky \
  /home/david/yocto/poky/meta-yocto-bsp \
  /home/david/yocto/poky/meta-raspberrypi \
  "

Now it’s time to build. It’s worth noting that the command oe-init-build-env doesn’t just create the configuration files and build directory – it also sets up the environment, including the path, so if you build in a new shell, or having logged in again, you need to re-run oe-init-build-env. It won’t overwrite the changes you’ve made to the configuration. So to build I cd to the poky directory and then:

. oe-init-build-env build
bitbake rpi-basic-image

The build takes a long time the first time, potentially hours, as numerous packages are fetched. When I built the first time I had a package fail to download because its git repository was unavailable, and the build failed; just re-run the bitbake command. Assuming everything succeeds you should see output that looks like this:

Parsing recipes: 100% |#########################################| Time: 00:00:22
Parsing of 891 .bb files complete (0 cached, 891 parsed). 1321 targets, 67 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.30.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "raspberrypi2"
DISTRO = "poky"
DISTRO_VERSION = "2.1.1"
TUNE_FEATURES = "arm armv7ve vfp thumb neon vfpv4 callconvention-hard cortexa7"
TARGET_FPU = "hard"
meta 
meta-poky 
meta-yocto-bsp = "krogoth:f5da2a5913319ad6ac2141438ba1aa17576326ab"
meta-raspberrypi = "master:2745399f75d7564fcc586d0365ff73be47849d0e"
NOTE: Preparing RunQueue
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
NOTE: Tasks Summary: Attempted 2167 tasks of which 911 didn't need to be rerun and all succeeded.

The resultant sdcard image is found beneath the build directory; flash it to an SD card as you would any other image:

tmp/deploy/images/raspberrypi2/rpi-basic-image-raspberrypi2.rpi-sdimg

Testing

Insert the media into your Raspberry Pi, attach keyboard and monitor, power on…

(screenshot: the console after the Raspberry Pi boots the image)

ssh is also available in this image; access is available as root with no password. Before doing anything else, tighten up security.

Have fun…