UniPi Neuron for CODESYS – available now


Cozens Software Solutions are pleased to announce that UniPi Neuron for CODESYS is available now for download from the CODESYS store.

UniPi Neuron for CODESYS provides CODESYS driver support for the full range of UniPi Neuron PLCs and expansion modules.

  • UniPi Neuron L20x, L30x, L40x, L50x, L51x, M10x, M20x, M30x, M40x, M50x and S10x
  • UniPi Neuron Expansion modules xS10, xS30, xS40 and xS50

The UniPi Neuron is a product line of PLC (Programmable Logic Controller) units designed to be universal, for use in both smart-home and business automation systems.

CODESYS is the leading hardware-independent IEC 61131-3 development system under Windows for developing and engineering controller applications.

TDD Zombies

Not these zombies

When I first used TDD I read James Grenning's book Test Driven Development for Embedded C. In this book James proposed following a pattern for developing tests: test for zero, then one, and then many (ZOM). Recently he has developed this idea further into ZOMBIE testing.

Z – Zero
O – One
M – Many (or More complex)
B – Boundary Behaviors
I – Interface definition
E – Exercise Exceptional behavior
S – Simple Scenarios, Simple Solutions

I've found this to be a really helpful pattern to follow when developing tests. To read more about it, see James' recent post TDD Guided by ZOMBIES.

Using Google Test with CDT in eclipse


I like to use test-driven development; currently my preferred framework is googletest. When I came to use Eclipse CDT with the MinGW toolchain I found I had lots of little issues to overcome to achieve what I wanted: a rapid TDD environment that I could also build from the command line (so that it can be automated under Jenkins). This post collects together the steps to get a simple application and an associated unit-test application building.



MinGW

I clicked the Download Installer button on the MinGW home page http://mingw.org/, which downloads an installer for MinGW, and then selected the components required for C++ development.


eclipse CDT

I downloaded the Eclipse Installer from http://www.eclipse.org/downloads and ran the install selecting the C/C++ development environment.

Create Projects

My personal preference is to have two projects for a component, the actual component (executable, library etc), and an executable that is a unit test for the code in that component. I find this works well for me as a pattern because it works the same if I am working on a library or cross compiling.
Launch Eclipse and close the welcome screen; you should end up in the C/C++ perspective. In the Project Explorer right click and select New C/C++ Project. Select Executable->Empty Project and the MinGW GCC toolchain, and give your project a name; for this example I have chosen the name demo.
Click Finish. Then right click on the project, add a new folder called Main, and in that folder create a new file called Main.cpp. In Main.cpp create a minimal main function as shown and save the file. (If you forget to save you may well see an error about undefined reference to WinMain@16.)
int main(int argc, char **argv)
{
    return -1;
}
When I got to this point I found that I could build the program and debug it without error, but if I tried to run it Windows popped up an error saying that demo.exe has stopped working. This can be resolved by changing the linker flags: on the demo project, select Properties then C/C++ Build->Settings, set the Configuration to [All Configurations], then under Tool Settings select MinGW C++ Linker->Miscellaneous and in the Linker flags box type -static-libgcc -static-libstdc++ as shown.
You should now be able to build, run and debug demo.exe.
Repeat the above process to create a second project called demotest, and ensure that you can build, run and debug demotest too.

Add Googletest to the test project

I downloaded googletest-release-1.7.0 from https://github.com/google/googletest/releases and unzipped the content locally. Google now recommend compiling Google Test with each project; the easiest way of achieving this, I find, is merging Google Test into a single header and source file. To do this you need Python installed; I downloaded and installed Python 2.7.10 from http://www.python.org/downloads/.
Then open a command prompt in the root of the unzipped googletest-release-1.7.0 directory and run
.\scripts\fuse_gtest_files.py . <demotest project dir>\contrib
where <demotest project dir> is the path to the demotest project in your workspace. Now open eclipse and you should see gtest in the demotest project.
Next we need to ensure that the include path finds the test headers. To do this, open the Properties dialog for the demotest project. Under C/C++ General->Paths and Symbols, select [All Configurations] and GNU C++, then add the contrib include directory as a workspace include.
Prove the project still builds. For me it builds with a single compiler warning, which I will ignore for now.
The next step is to add code to run Google Test. Edit demotest\Main\Main.cpp so that it has the content below.
#include "gtest/gtest.h"

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
At this point if you build and run you should get this output.
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (0 ms total)
[  PASSED  ] 0 tests.

Starting TDD

The first test I want is to prove that my demo.exe returns 0 when called with no arguments.
First add a folder called Tests to the demotest project, and in that create a new file DemoTest.cpp with this content
#include "gtest/gtest.h"
#include "Demo.h"

class DemoMainTest : public ::testing::Test {
};

TEST_F(DemoMainTest, Returns0) {
    ASSERT_EQ(0, ::Demo(0, NULL));
}
A TDD purist would say I am doing too much at once: this adds a test to show that a function called Demo, called with two NULL arguments, returns 0. If you try to compile now the build will fail because the file Demo.h doesn't exist.
To get this to compile and link we need to add the file Demo.h and Demo.cpp, but these files should actually be part of the demo project and not demotest.
In the demo project add a new folder Include and in that add Demo.h with this content
#ifndef DEMO_H
#define DEMO_H

int Demo(int argc, char **argv);

#endif
Still in the demo project add a new folder Source and in that add Demo.cpp with this content
#include "Demo.h"

int Demo(int argc, char **argv)
{
    return -1;
}
In the demo project properties, under C/C++ General->Paths and Symbols, select [All Configurations] and GNU C++, then add the Include directory as a workspace Include.
Also edit the existing Main.cpp in the demo project to call the demo function.
#include "Demo.h"

int main(int argc, char **argv)
{
    return Demo(argc, argv);
}

At this stage the demo project should compile and link.

We now need to get the demotest project to reference this source and include path.
In the demotest project properties under C/C++ General->Paths and Symbols, in the Source Location tab click the Link Folder… button, click Advanced>>, check Link to folder in the file system, click Variables…, select WORKSPACE_LOC and click Extend…, find demo/Source, and click OK all the way out.
In the demotest project properties under C/C++ General->Paths and Symbols, select [All Configurations] and GNU C++, then add the Include directory as a workspace include.
All being well, the workspace should now look like this
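The two projects should now be laid out roughly like this (a sketch; the contrib files are the ones produced by fuse_gtest_files.py):

```
workspace/
├── demo/
│   ├── Include/Demo.h
│   ├── Main/Main.cpp
│   └── Source/Demo.cpp
└── demotest/
    ├── contrib/gtest/gtest.h
    ├── contrib/gtest/gtest-all.cc
    ├── Main/Main.cpp
    ├── Tests/DemoTest.cpp
    └── Source/          (linked to demo/Source)
```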
You can now build and run; the test should fail.
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from DemoMainTest
[ RUN      ] DemoMainTest.Returns0
..\Tests\DemoTest.cpp:12: Failure
Value of: ::Demo(0, __null)
  Actual: -1
Expected: 0
[  FAILED  ] DemoMainTest.Returns0 (0 ms)
[----------] 1 test from DemoMainTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] DemoMainTest.Returns0
Change the return value in Demo.cpp to 0, save, rebuild and re-run and the test should pass.
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from DemoMainTest
[ RUN      ] DemoMainTest.Returns0
[       OK ] DemoMainTest.Returns0 (0 ms)
[----------] 1 test from DemoMainTest (0 ms total)
[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (0 ms total)
[  PASSED  ] 1 test.
Now we are in a good position to start TDDing: as long as you add test files to the Tests folder, source to the Source folder and header files to the Include folder, there should be no further need to mess with the build.

Test Runner

At this stage we are up and running, but it is still a bit clunky running the tests. Fortunately Eclipse CDT has a test runner that supports Google Test; unfortunately it is not installed by default. To install it go to Help->Install New Software…, choose to Work with: --All Available Sites--, then under Programming Languages select C/C++ Unit Testing Support, click Next>, Next>, Finish and wait for the install to complete.
Restart eclipse when prompted.
Select Window->Show View->Other… and select C/C++ Unit
In the Project Explorer right click on demotest and then Run As->Run Configurations…, double click C/C++ Unit and then under the C/C++ Testing tab select Google Tests Runner.
Click Run and all being well you should see
Now you can really start moving with TDD 🙂

Version Control

At this stage it is a good idea to put the project under version control. Under Eclipse the workspace is specific to a single folder on a specific machine, so it should not normally be placed under version control. Files generated by the build also do not need to be version controlled, so before looking at what files to version control, clean both projects from inside Eclipse. If you are not using Eclipse to manage your version control, it is then a good idea to exit Eclipse.
Using explorer the workspace looks like this
Both the RemoteSystemsTempFiles and .metadata folders are created by Eclipse and do not need to be version controlled. The Debug folder under each project is created when a Debug build is performed and also does not need to be version controlled; if a Release build has been performed there will also be a folder called Release, which again does not need to be version controlled. Everything else should be placed under version control.
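If you happen to use Git, the exclusions above can be captured in a .gitignore at the workspace root (a sketch; adapt the equivalent ignore mechanism for your own VCS):

```
# Eclipse workspace metadata - machine/workspace specific
.metadata/
RemoteSystemsTempFiles/

# CDT managed-build output
Debug/
Release/
```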
To prove you have everything required under version control, it is a good idea at this point to verify that a clean checkout of your code builds. To do this create a new empty workspace directory (e.g. Workspace2) and checkout the projects from your version control system into that workspace. Then launch eclipse and select Workspace2 as your workspace
When Eclipse starts there will be no projects in the Project Explorer, even though the projects exist in the Workspace2 folder on the file system. Right click in the Project Explorer and select Import…, then General->Existing Projects into Workspace.
Click Next>, then Browse…; this opens a Browse For Folder dialog on the workspace folder; click OK.
Then click Finish on the Import dialog; both projects should be imported and can be built as before.
NOTE: Depending on your version control system and any plugins that may be installed you may be able to import projects directly from the VCS into a new workspace.

Command Line Build

The managed build in Eclipse CDT is very easy to manage when working in the GUI; however, there are situations when a command-line build is required, such as when projects need to be built by a continuous integration system. Typically under CI the command line will need to cope with projects that have been checked out from the VCS into a clean workspace folder. The following example, when executed from the root of the workspace folder, creates the Eclipse workspace artefacts, imports all of the projects in the workspace and builds the Debug configuration for each project.

<path to eclipse>\eclipsec.exe --launcher.suppressErrors -nosplash -application org.eclipse.cdt.managedbuilder.core.headlessbuild -data . -importAll . -no-indexer -build ".*/Debug"

The first few arguments on this command line are standard eclipse arguments, see running eclipse and eclipse runtime arguments for details.
There are many other options – at the time of writing the options are
   -import     {[uri:/]/path/to/project}
   -importAll  {[uri:/]/path/to/projectTreeURI} Import all projects under URI
   -build      {project_name_reg_ex{/config_reg_ex} | all}
   -cleanBuild {project_name_reg_ex{/config_reg_ex} | all}
   -no-indexer Disable indexer
   -I          {include_path} additional include_path to add to tools
   -include    {include_file} additional include_file to pass to tools
   -D          {prepoc_define} addition preprocessor defines to pass to the tools
   -E          {var=value} replace/add value to environment variable when running all tools
   -Ea         {var=value} append value to environment variable when running all tools
   -Ep         {var=value} prepend value to environment variable when running all tools
   -Er         {var} remove/unset the given environment variable
   -T          {toolid} {optionid=value} replace a tool option value in each configuration built
   -Ta         {toolid} {optionid=value} append to a tool option value in each configuration built
   -Tp         {toolid} {optionid=value} prepend to a tool option value in each configuration built
   -Tr         {toolid} {optionid=value} remove a tool option value in each configuration built
               Tool option values are parsed as a string, comma separated list of strings or a boolean based on the option’s type
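For example, to wipe and rebuild only the Release configuration of every project already imported into the workspace, the same headless application can be invoked with -cleanBuild instead of -build (the install path is illustrative):

```shell
"C:\eclipse\eclipsec.exe" --launcher.suppressErrors -nosplash \
  -application org.eclipse.cdt.managedbuilder.core.headlessbuild \
  -data . -no-indexer -cleanBuild ".*/Release"
```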

A solution to Jenkins jobs conflicting over shared resources

I have been looking for a way to manage the execution of Jenkins jobs that require exclusive access to a resource other than the Jenkins slave they are built on. Let's consider two scenarios that can cause problems.

Shared Physical Resource

Working with embedded systems I often want to run tests on an embedded device as part of a continuous integration process. I may have multiple Jenkins slaves that are capable of running the jobs, with a single specific embedded device required. If I have multiple jobs how do I ensure that only one runs at a time?

Software License and Tool Versions

For years I have used VxWorks and the Wind River Workbench IDE. Let's suppose that I have three Jenkins slave nodes, each with a different version of the tools installed. The tools are licensed by FlexLM on a license server, and only one of these slaves can run the tools at any one time. With no restrictions, if two jobs that require different versions of the tool are run concurrently, one will fail to build with a license violation. How do I ensure that the jobs are sequenced rather than run concurrently?

The solution

Basic Usage

The Jenkins Lockable Resource Plugin elegantly solves these problems.
The plugin is installed in the normal way: from Manage Jenkins->Manage Plugins->Available, tick the box next to the plugin and install (I restarted as well). Once installed, go to Manage Jenkins->Configure System, scroll down to the Lockable Resources Manager and Add Lockable Resource.
In this example I have created a resource to represent a Raspberry Pi running Codesys. I have chosen a unique name (I may have more than one in the future) and a label for the type of node.
Next configure the jobs that require the resource: in the Configuration page for each job, check the This build requires lockable resources check box.
In this particular build I have selected a specific resource. When the job runs you can see the resource being locked around the job.
If the resource is already locked when the job becomes ready to run then it is held pending in the Build Queue, having NOT blocked the Slave node.

Manually Reserving a Resource

One other nice feature of the plugin is that it is possible to manually reserve a resource. If, in my example, I wanted to do some maintenance on the Raspberry Pi, I can go to Manage Jenkins->Configure System and put some text in the Reserved by field to take the lock, thus holding off builds until I have finished.


Locking by Label

Suppose that I have two embedded devices that I could run my tests on, CODESYS_RPI_1 and CODESYS_RPI_2, both configured identically. However, my build job needs to know which one has been locked so that it can communicate appropriately. To achieve this I have changed the configuration for my job to depend on the resource Label rather than the resource name, and introduced a variable to hold the name of the resource that was locked, like this

Now looking at the Parameters for the build we can see which resource was locked.
This parameter can be used like any other in the build.
Some tests may require multiple resources. Let's say that we need two identical devices: setting the Number of resources to request to 2 and building shows that the variable now holds the names of both resources.


The Lockable Resources Plugin provides a simple, elegant solution to locking/reserving resources for a job. As shown above it is easy to have a lock represent a physical resource; the plugin can equally be used in the licensing scenario: define a single resource to represent the license, and have each job require that resource while being tied to the appropriate slave. Definitely one to add to my list of favourite Jenkins plugins.