This is the mbeddr user guide. It targets users of mbeddr, not people who extend the mbeddr languages. We strive to cover all relevant mbeddr functionalities, but as things change, some parts may not always be up to date. Please report problems, omissions or bugs to our mailing list at mailto:mbeddr-discuss@googlegroups.com. When you report a problem, please make sure you include a Node Info for the problem (see Section Utilities for details on what a Node Info is).
This user guide is written with the MPS documentation language. On the plus side, this affords us very close integration with source code and models -- these things are always up to date. On the downside, we still have some issues in the LaTeX generator: the generated LaTeX file, and especially the embedded figures, do not yet look very beautiful. Please bear with us. It will improve.
You can use the full-text search utility provided by mbeddr: see Section Searching Models.
Finally, we want to emphasize that, in addition to this user guide, there is a lot of additional information. This includes various videos (at http://mbeddr.com/ and at http://mbeddr.com/screencasts.html) as well as a number of articles and papers at http://mbeddr.com/learn.html.
mbeddr relies on the JetBrains MPS language workbench. A language workbench is a tool that supports language engineering, the activity of building, composing, extending and using languages. MPS can be seen as a framework on which mbeddr relies.
MPS is open source software licensed under Apache 2.0. It can be downloaded from http://jetbrains.com/MPS. mbeddr currently uses MPS 3.1.4, although 3.1.5 should work as well. Later in 2014 we will move to the upcoming version 3.2.
Various tutorials exist, and the MPS documentation page (at https://www.jetbrains.com/mps/documentation/) contains links to most of these. We want to highlight in particular the book by Fabien Campagne (http://www.amazon.com/The-MPS-Language-Workbench-Vol/dp/1497378656/). Note that in order to use mbeddr, you don't have to understand the MPS language engineering facilities (this is only necessary to extend mbeddr with new languages).
MPS relies on a projectional editor. Instead of using a parser that transforms a character sequence into a tree for further processing, in MPS every editing gesture directly changes the tree. This has two important advantages: flexible notations and language modularity.
Language Modularity Because a projectional editor does not use a grammar, the combination of independently developed languages cannot become ambiguous; potential ambiguities are delegated to the user to resolve. This way, arbitrary languages and language extensions can be combined in one single program. mbeddr exploits this by providing a set of modular language extensions to C (and a couple of other languages). If you are interested in the details, please refer to this paper, which discusses MPS as well as mbeddr in general: http://voelter.de/data/pub/Voelter-GTTSE-MPS.pdf
Flexible Notations Since no parser is used, MPS can support essentially any notation. This includes classical text, but also prose (as in this user guide), tables, math symbols and diagrams. We exploit this extensively in mbeddr. For an overview of the notations supported by MPS, take a look at this paper: http://mbeddr.com/files/gemoc2014-MPSNotations.pdf
Traditionally, the above-mentioned benefits of projectional editing have come at a price: the user experience of traditional projectional editors was not very good, in particular when editing code that looked like "normal text". Consequently, projectional editors were not widely adopted. However, MPS does a much better job here. We have conducted a study with 20 MPS users (most of them mbeddr users as well), and the conclusion is that the editor works well enough. Take a look at this paper for details: http://mbeddr.com/files/projectionalEditing-sle2014.pdf.
mbeddr is essentially a set of plugins to MPS. If you consider MPS as a framework, then mbeddr can be considered an application. The plugins contain mostly language definitions: mbeddr C as well as its extensions are languages in terms of MPS. In addition, the plugins also contain additional utilities, libraries, views, editors, etc. However, these details are irrelevant to the mbeddr user.
In this user guide we do not explain the philosophical underpinnings, ideas and approaches of mbeddr, since these have been discussed extensively in the various papers we wrote. You can find these papers on the mbeddr website (in particular on the Learn page at http://mbeddr.com/learn.html). To get an overview, we suggest taking a look at the following two:
mbeddr is structured into five layers and addresses three different concerns (see Fig. 1.2.1-A). The platform layer consists of the JetBrains MPS language workbench, which enables the overall approach. The core layer includes the base languages used in mbeddr; these are different for each of the three concerns. The default extensions layer consists of a set of predefined extensions of the core languages that can be used out of the box to develop software. The user extension layer is empty; it is intended to be filled by users. The backend layer consists of existing compilation and analysis tools which mbeddr uses.
Implementation Concern The implementation concern addresses the development of applications based on C. On the core layer, mbeddr comes with an implementation of C99 in MPS. There are a few minor differences to C99, and the preprocessor is not exposed to the user; first-class concepts are provided for the legitimate uses of the preprocessor (including a module system). On the default extensions layer, the implementation concern comes with C extensions for decision tables, interfaces and components, state machines, physical units, testing as well as logging and tracing. The user extensions layer is by definition empty; users can easily extend the C core as well as any of the default extensions. State-of-the-art IDE support is available for all languages, including syntax highlighting, code completion, real-time type checks and refactorings. The implementation concern also ships with an extensible debugger that is able to debug on the level of the extensions, so the abstractions do not break down when debugging becomes necessary. At the backend layer, the implementation concern relies on a C compiler, a C debugger and tools for importing existing C code into mbeddr. By default, mbeddr uses the gcc compiler and the gdb debugger.
Analysis Concern The analysis concern provides static analyses (formal verification) for some of the default extensions provided by the implementation concern. The analyses themselves are performed by existing external tools. However, mbeddr integrates the tools tightly by (a) providing language abstractions to conveniently describe behavior that can be analyzed, (b) translating this description to the input of the analysis tool, (c) running the tool, and (d) lifting the output of the tool back to the original abstraction level, to make it easier to understand for the user. The integrated analyses are based on symbolic model checking, SMT solving and C-level model-checking. Specifically, the following analyses are available: State machines can be checked with a symbolic model checker. It verifies a set of default properties and optional user-defined properties. Decision tables can be checked for completeness and consistency. Feature model configurations are checked for consistency. Finally, interface contracts can be checked statically: interfaces can specify pre- and post-conditions as well as protocol state machines that specify the valid invocation order of interface operations. These contracts can be checked for each implementing component via a C-level model checker.
Process Concern The process concern includes facilities for integrating mbeddr into development processes. These facilities can be used with arbitrary mbeddr languages, such as all the default and user-defined C extensions. The requirements support provides a language for describing requirements. Traces can be attached to any program element expressed in any language, and arbitrary additional data, expressed in any language, can be added to a requirement. The product line support allows defining feature models and configurations. Feature models can be connected to other artifacts by means of presence conditions. While presence conditions are static and work for any language, there is also C-specific support for evaluating variability at runtime. The documentation language supports writing prose documents as part of an mbeddr project, exportable as HTML or LaTeX. It supports close integration with program elements: they can be referenced (with real references that are renamed if the element itself is renamed), and program code can be embedded as text or as an image. The embedded code is updated whenever the document is regenerated. Visualization provides a facility to render diagrams. Reports and assessments are customizable queries over the code.
mbeddr does not come with its own compiler. While this could be considered "cheating", it is actually a good idea, because gcc and other compilers contain person-decades of optimizations. Replicating those is hopeless. Hence, mbeddr relies on existing backends.
For compiling, building and debugging mbeddr code, mbeddr relies on the established GNU C tool chain. In particular, gcc is used to compile the generated C code, make is used to orchestrate the compile and link process, and gdb is used by the mbeddr debugger. Note that none of these tools is visible to the mbeddr user, but they have to be available on your system (see Section Gcc, Make and Gdb for details).
Other compilers, make tools and debuggers can be plugged into mbeddr (we have just integrated the Microchip PIC 16 for a customer). How this is done is beyond the scope of this user guide.
CBMC is used for verification. It is discussed in detail in Section Formal Verification with CBMC. To summarize, CBMC is a bounded model checker for C programs. This means that it can essentially simulate the execution of a C program regarding all its possible execution paths. CBMC is open source software and can be downloaded from http://www.cprover.org/cbmc/. It has to be installed separately, as described in Section Verification Tools. For details on how mbeddr uses CBMC, please take a look at the following papers, as well as at [ToDo: section ]:
PlantUML (http://www.plantuml.com/) is an open source tool that generates UML diagrams from relatively simple textual specifications. We use it in mbeddr to render read-only diagrams of program structures in a separate view (these are different from graphical editors). Internally, PlantUML relies on Graphviz (http://www.graphviz.org/) for laying out the diagrams. As a user, you will not interact with Graphviz directly, but you have to install it manually (see Section Graphviz). PlantUML itself is packaged with mbeddr and does not have to be installed manually.
MPS is a Java application. This means that it relies on a Java Virtual Machine (specifically, a JRE) to be able to execute. So, in order to be able to use MPS and mbeddr, you have to install Java on your computer. The specific versions are discussed in the Installation section.
Note that the Java dependency of course only applies to the tool itself - the systems developed with mbeddr are pure C.
This chapter describes the manual installation process of mbeddr in detail. This includes MPS, Java and additional tools needed for compilation, debugging, visualization and verification.
mbeddr relies on gcc and make for compilation and on gdb for debugging. You must install these tools unless you have a different, target-specific build process. Below we describe the installation of these tools on the various platforms.
On Windows, install MinGW (you will later have to add its bin directory to your system path). During package selection, make sure mingw32-gcc, mingw32-gdb and mingw32-make are included (all of them are in the All Packages subtree in the selection dialog).
On the Mac, XCode provides gcc, make and the associated tools (see Section Required Tools and Versions). XCode comes with lldb, which is not yet supported by mbeddr. Therefore you have to install gdb via brew (see Section Required Tools and Versions). Use the following command to install gdb: brew install https://raw.github.com/Homebrew/homebrew-dupes/master/gdb.rb. Additionally, you have to sign a certificate for gdb (see http://wiki.freepascal.org/GDB_on_OS_X_Mavericks_and_Xcode_5).
After the installation, please make sure the tools can be invoked from the command-line. If this is not possible, please add them to your system path, as described in Section Adding Tools to the System Path.
mbeddr supports visualization via PlantUML (http://plantuml.sourceforge.net), directly embedded in MPS. To use it, you have to install Graphviz. Section Required Tools and Versions describes which version is required and where to get it. After the installation, you have to put the bin directory of Graphviz into your system path (see Section Adding Tools to the System Path). On Windows, you additionally need an environment variable GRAPHVIZ_DOT that points to the dot.exe file supplied with Graphviz.
To be able to use the verifications in mbeddr, you have to install CBMC, a C-level model checker. Section Required Tools and Versions describes which version is required and where to get it; you can find installers for the various platforms at the bottom of the website. After the installation, please make sure CBMC can be invoked from the command line. If this is not possible, please add it to your system path, as described in Section Adding Tools to the System Path. CBMC uses gcc for preprocessing the C files, so please make sure that you have gcc installed on your machine.
You can check that a proper version of CBMC is correctly installed on your machine by clicking the 'Test CBMC Installation' button under File -> Settings -> CProver Preferences.
Depending on what you want to do with mbeddr, we provide three different installation methods. The following list describes the general differences between these methods; the next sections describe each of them in detail:
With the mbeddr IDE you can use all languages provided by mbeddr and write applications with them. Installing the IDE is simple, as you just have to perform two tasks:
In case you do not want to change mbeddr but still want to be able to extend it (so you'll need MPS' language engineering facilities), we recommend installing the distribution. This just requires downloading a ZIP file and copying a bunch of plugins into your existing MPS installation. If you do not have an existing MPS installation on your machine, please follow the steps described in Section JetBrains MPS and Java.
To install the distribution, please download the distribution ZIP file from the mbeddr GitHub release page: https://github.com/mbeddr/mbeddr.core/releases. This ZIP file contains the mbeddr plugins for MPS. After unzipping, please take all folders inside the plugins directory and copy them into the plugins directory of your MPS installation 1. So, for example, after copying, there should be a $MPS_DIR$/plugins/mbeddr.core directory.
1: Note that there is also a plugin (without the s) directory under MPS!
Installing mbeddr from sources gives you total control over what you want to do with mbeddr. We recommend working with the distribution or the mbeddr IDE instead, unless you have good reasons not to. The following list provides some of these good reasons:
- you want to work with the very latest state of the mbeddr master branch
There are two ways to get the mbeddr source installation. One is the completely manual installation described below in Section Manual Source Installation; the other is an equivalent automated, interactive process: an installer written in Python, described in Section Source Distribution Python Installer.
The installer is hosted at http://mbeddr.fortiss.org/. To be able to run it, you have to install some of the prerequisites required by MPS and mbeddr, as well as the installer itself.
For hints on how to install the prerequisites, especially on the Windows platform, please refer to this website: http://mbeddr.fortiss.org/download/prereq/. After the prerequisites are installed, you are ready to run the installer. The installer will check that you have installed everything correctly before setting up mbeddr.
To use the installer, please download it from http://mbeddr.fortiss.org/download/, run it, and then simply follow the instructions it gives.
After the installer has finished, you should have the following installed and configured:
You might still want to install other optional tools used by mbeddr, as described in this chapter. [ToDo: Make a reference to gcc&make and graphviz. ]
The website with the installer provides additional information on how to run the installer and what to do after it has finished.
Basically, the installer automates the steps described below in Section Manual Source Installation. Normally you should therefore never need to perform these steps manually. However, we document them for completeness and for troubleshooting purposes, in the rare case that the installer fails to work correctly.
The following guide explains how to install mbeddr from sources manually. Please carry out the following instructions step by step, in the order given:
Install ant and make sure the ant binary is in your path (see Section Adding Tools to the System Path).
Next, run the buildLanguages script, which is located inside the mbeddr.core/code/languages directory. Depending on your operating system, you either use the one with the .bat (Windows) or the .sh (Linux/Mac) file extension. Before running the script, you first have to copy build.properties.example into the same folder as the script and rename it to build.properties. Then open this file and change the paths of the following properties (note that you have to use forward slashes as directory separators, even on Windows!):

- mps.home: points to the Contents folder of your MPS installation
- mps.platform.caches: points to a folder which is used by the build process to store temporary files
- mbeddr.github.core.home: points to the mbeddr.core folder of your local mbeddr repository

Then run the buildLanguages script from within the mbeddr.core/code/languages folder. At the end of the build you should get a message stating BUILD SUCCESSFUL. Depending on the performance of your machine, running the script takes between 20 and 45 minutes.
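For illustration, a finished build.properties might look like the following sketch (all paths are placeholders you have to adapt to your machine; note the forward slashes):

mps.home=C:/Program Files/MPS 3.1/Contents
mps.platform.caches=C:/temp/mps-caches
mbeddr.github.core.home=C:/repos/mbeddr.core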
Next, make MPS aware of the mbeddr.core languages in your local repository. Start MPS and go to the settings dialog (on the Mac it is under Preferences) and select Path Variables in the IDE Settings. Create a path variable named mbeddr.github.core.home and point it to the root directory of your mbeddr git clone (e.g. /Users/peter/Documents/mbeddr.core).
Next, open the Global Libraries section in the IDE settings (see Fig. 2.4.3.3-A). Create a library named mbeddr.core that points to the code directory of the git repository (/Users/peter/Documents/mbeddr.core/code). This library must point to the code directory of the checkout so that all languages are below it, including core and mpsutil. Because we defined the path variable earlier, MPS will use it when defining the library. Notice that this is a global setting, which means it has to be performed only once, before your first application project.
Congrats: you are now ready to build your first project with mbeddr.
The mbeddr system is based on JetBrains MPS, an open source language workbench available from http://www.jetbrains.com/mps/. MPS is available for different platforms, such as Windows, Mac and Linux. Please make sure you pick the right installer for your platform and the proper version (see Section Required Tools and Versions). When running the installer, please make sure you install MPS in a path that does not contain blanks in any of its directory or file names (not even in the MPS 3.1 folder name). This will simplify some of the command line work you may want to do. After installing MPS, please follow the platform-specific steps mentioned below.
On Windows, go into MPS' bin folder and edit, depending on whether you are running mbeddr on a 32- or 64-bit machine, either mps.exe.vmoptions or mps64.exe.vmoptions. To make MPS run smoothly, increase the MaxPermSize setting to at least 512m. This JVM setting controls how much space is available for loaded Java classes, methods etc.
On Linux, go into MPS' bin folder and edit mps.vmoptions. To make MPS run smoothly, increase the MaxPermSize setting to at least 512m.
On the Mac, go into the app folder (right-click on the application and select Show Package Contents) and open the Contents folder. Open the Info.plist file with a text editor and navigate to the entry VMOptions at the very bottom of the file. To make MPS run smoothly, increase the -XX:MaxPermSize setting to at least 512m.
If MPS does not start and shows the error message Could not create the VM, then your system has less main memory available than required. In this case, you should open the previously edited file (*.vmoptions or Info.plist) and decrease the value of the -Xmx parameter in small steps until you are able to start MPS. This parameter controls the maximum amount of heap memory that the JVM provides to MPS.
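For example, the relevant lines in a *.vmoptions file might look like this (the -Xmx value is illustrative; adapt it to your machine):

-Xmx1024m
-XX:MaxPermSize=512m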
For mbeddr/MPS to be able to access command-line tools, their binaries have to be added to your system's PATH variable. You can do this either globally, or locally in the script that starts up MPS. How the PATH variable is changed depends on your platform; the table below shows these platform-specific differences.
For changing the variable locally, open the start-up script that is located inside your MPS installation directory. Then add the PATH redefinition at the beginning of that file, replacing <your path> with the file system path of the tool you want to add to the path. For details see the second column of the table below.
To change the path globally, see the right column of the table below. It explains how to add tools to the global PATH on the different platforms. Here, too, replace <your path> with the file system path that points to your tool.
 | locally | globally |
---|---|---|
Windows | start-up script: mps.bat; PATH redefinition: set PATH=<your path>;%PATH% | In your Windows System Settings, change the value of the PATH variable by adding your path to the existing paths: <existing paths>;<your path> |
Linux/Mac | start-up script: mps.sh; PATH redefinition: export PATH=<your path>:$PATH | In general you have to add your tool path to the PATH variable. How this is done depends on your operating system and the shell you are using. Since nowadays almost every Unix system comes with a bash shell, we describe the process just for this scenario; in case you are using a different shell, please refer to the vendor's documentation. First, create or open your existing .bash_profile, which is located inside your home folder. Then add the following line to it, save the file and restart your shell: export PATH=<your path>:$PATH |
The following table lists all tools mentioned in this guide, on which platform they are needed, in which version, which components are required and where to get the tool from.
Tool | Platform | Version | Required components | URL |
---|---|---|---|---|
MinGW | Windows | newest | mingw32-gcc (v 4.8.1), mingw32-make (v 3.82.90-2), mingw32-gdb (v 7.6.1), msys-base (v 2013072300) and msys-coreutils (v 5.97-3) | http://www.mingw.org/ |
XCode | Mac | newest | gcc and make | - |
Brew | Mac | newest | gdb | http://brew.sh |
make | Linux | newest | - | - |
gcc | Linux | newest | - | - |
gdb | Linux | newest | - | - |
Graphviz | all | 2.30 | - | http://graphviz.org |
CBMC | all | 4.9 | - | http://www.cprover.org/cbmc |
ant | all | newest | - | http://ant.apache.org |
git | all | newest | - | - |
MPS | all | 3.1.4 or 3.1.5 (MPS 3.2 not yet supported) | - | http://www.jetbrains.com/mps |
JDK | Windows | 1.6 or higher | - | - |
JDK | Linux | 1.6 or higher | - | - |
JDK | Mac | 1.6 | - | - |
Fundamentally, MPS looks and works like any modern IDE. Fig. 3.1.1-A shows an overview. We describe the various marked up parts below.
A Menu Bar Like all IDEs and applications, MPS comes with a menu bar. Depending on your OS, it is located at the top of the MPS window (on Windows) or at the top of the screen (on the Mac).
B Tool Bar The tool bar contains quick access buttons for various functionalities and the run configuration selection combo box for running and debugging programs.
C Project Explorer / Logical View The Project Explorer is the central view on the left side of the screen. It can show several different things: the Logical View of the project (solutions, folders, models), a File System View (directories, files) as well as various favourites lists. In the context of the user guide we mostly talk about the Logical View, which is why we use the terms Project Explorer and Logical View synonymously.
The Logical View contains the project structure. It is the primary means for navigating around mbeddr (and MPS in general). We describe the project structure separately below (Section Project Structure). The Project Explorer is an example of a Tool (aka View in Eclipse).
Editor The main part of the MPS window is made up of the editor pane. The editor pane contains any number of editors, each with its own tab. Each editor edits a so-called root (see Section Nodes, Roots and Concepts). It is also possible to split the editor pane into several editors (arranged vertically or horizontally) through the menu.

Inspector The Inspector is associated with the editor. For each selected node in the currently active editor, the Inspector provides additional (read/write) information. The particular contents of the inspector are determined by the language definition (of the concept that defines the selected node). Sometimes it shows nothing useful, sometimes it just shows additional information (such as the statically evaluated value of an expression), and sometimes it contains additional content that can be edited (and is hence essential for the editing experience). We suggest always keeping it open.
G Status Bar The status bar shows status information about MPS, including the version control status, the current branch, memory usage and various other details.
Note that the Status Bar, the Tool Bar buttons as well as the little tool placeholder tabs can all be hidden through the menu.

Tools are the windows that are arranged around the edges of the MPS window. In this section we describe the most important ones (some language-specific tools are explained in their respective context). Generally, they can be activated through the menu.

Tools can be arranged around the edges of the MPS window. Every Tool has a default position on one particular edge of the MPS window. However, by right-clicking on the title bar, other edges (and a floating mode) can be selected for each Tool. Tools can also be hidden. If not shown, the border shows a little placeholder (like the Context Sidebar tool near D on the left and H on the right). If a Tool has a number associated with it (such as the 1 in the case of the Logical View), Ctrl+<Number> can be used to select this window.
Project Explorer As mentioned above, the Project Explorer (in its Logical View mode) contains the overall structure of the project. It is the primary means of navigating around your code. We'll describe it in more detail in Section Project Structure.
Changes The Changes view shows the files that have been changed since the last commit. This is part of MPS' version control support and is discussed in more detail in Section Version Control.
Model Checker The Model Checker shows the result of running the checker on a project, solution, language or model. It shows warnings and errors. Double-clicking on an element opens this element in the editor. There are various options for grouping and filtering the contents using the buttons on the left. You should play around with these options. Fig. 3.1.2-A shows an example. Note that errors that result from a transformation (code generation) are not shown here; they are shown in the Messages view.

Messages This view shows (mostly) messages, errors and warnings resulting from the build process. These include problems in the transformation as well as problems with the subsequent compilation and link process. If an error shows up here during a build, it is likely that there has already been a problem with the input models. In this case, please run the Model Checker on the respective parts of the project and fix all errors reported there before attempting to generate.
Visualizations A visualization is a read-only diagram of some program structure. It is laid out automatically. Users can double-click on nodes and labels to jump back into the code. Fig. 3.1.2-B shows an example. To open a visualization, users can press Ctrl+Alt+V or use the context menu, provided the selected node defines any visualizations. Note that the view also supports switching between multiple visualizations for the same node (if the node defines several of them) through the combobox. The buttons support navigating through the recent diagrams, zooming, and exporting the diagram as a PNG file.
Tree Views Tree views are similar to visualizations: they are contributed by particular nodes, they are shown in a special view, double-clicking on a tree node selects the node in the editor, and a node can define zero, one or more tree view structures. They are activated through Ctrl+Alt+O or the context menu.
Event Log The event log shows internal MPS errors. You can generally ignore this Tool except for debugging internal MPS or mbeddr errors.
Version Control This tool shows the version control console, essentially a stream of all version control operations. Typically not needed.
Explorer The explorer shows the low-level tree structure of a node. It is more of a debugging tool and is not used regularly. It can be opened for a node by selecting the node and choosing the respective entry from the context menu. Fig. 3.1.2-D shows an example.
MPS programs are trees of nodes (this is the structure you can see in the Node Explorer). Each node can have several children, resulting in the tree structure. However, there can also be cross-references between nodes in a program tree. This makes MPS programs graphs, but graphs with a primary containment hierarchy. In fact, a complete MPS program consists of several such trees; we call each of these trees a Root. Importantly, each Root gets its own editor tab in MPS. References can cross Root boundaries. Fig. 3.2.1-A shows the structure. In Fig. 3.2.2-A, all the implementation modules (blue I icon) such as Components or StateMachines are roots.
Each node is an instance of a language concept. The language concept defines all properties of its instance nodes, including its visual representation (notation, concrete syntax). Concepts are created as part of a language. Fig. 3.2.1-B shows this relationship.
MPS has an elaborate project structure. We discuss the most important ingredients in this section. Each of the ingredients is used for configuring different aspects of the overall project. We describe this in Section Using Languages and Devkits. The basics on the various kinds of dependencies between these ingredients are discussed below. The (vast load of) details are discussed in https://github.com/mbeddr/mbeddr.core/wiki/Dependencies-and-Classpath-in-MPS.
Models MPS is fundamentally a file-based system. The roots discussed above are stored in XML files (these can then be managed with existing version control systems, see Section Version Control). These files are called models in MPS. Each model contains a number of roots (which then have arbitrary substructure). In Fig. 3.2.2-A, the purple diamonds with the little M inside are models. Models also determine which languages are used for the programs/roots/nodes inside the model. Models can import each other in order to be able to establish cross-references. These dependencies are established in the Model Properties dialog (Dependencies tab).
Models are important because they represent the only real "physical structure" in MPS. Hence they are the granularity relevant to version control (since they are files, it is models that are seen by file-based tools such as git). Also, the MPS transformation and generation engine always processes complete models. So in order to keep (partial) generation times low, you should split your overall system into a reasonable number of (relatively small-sized) models. See Chapter Structuring Complex Projects for details.
Solutions From a user's perspective, a solution is where application code lives (this is in contrast to languages, which is where language definitions reside). Technically, a solution is a way to group models. A solution can contain any number of models, and each model resides in exactly one solution. In Fig. 3.2.2-A, the orange S is a solution. Solutions can also have dependencies on other solutions.
Project A project contains several solutions. Any particular solution can be used in several projects, so projects do not own the solutions (in contrast to models, which are owned by a solution). A project is really more like a workspace in Eclipse: it's a collection of solutions which you want to work on for a particular task. The project defines various settings, preferences and view options.
To be able to write code in a model, that model must specify which languages should be used to write the code. As shown in Fig. 3.2.3-A, these languages are specified in the Used Languages tab of the model's properties.
In the Used Languages spec, users can also add dependencies on devkits. A devkit is a group of languages. For example, there are several languages that deal with requirements in mbeddr. Instead of adding all of these manually to any given model, users can simply add the com.mbeddr.reqtrace devkit to the model. This simplifies configuration and avoids clutter.
A library is essentially a directory that contains additional languages or solutions (it is like a search path, or the Java classpath). Instead of adding a needed solution or language to the project, it can remain "outside" the project and be made visible by pointing a library to the respective file system folder. Libraries can be defined specifically for a project (Project Libraries) or globally (Global Libraries). Both are created via the Preferences.
In order not to hard-code absolute paths in the definition of libraries, you can define a path variable. A path variable is a name (such as MyProjectDir) plus a directory (such as /Users/markus/mbeddr/project1). If you now define a reference to a library in, say, /Users/markus/mbeddr/project1/my/folder/sol.mps, then this is stored in the project as ${MyProjectDir}/my/folder/sol.mps, even though this is not visible in the UI. If the same library is accessed on another computer, that computer's definition of the MyProjectDir variable is used to resolve it.
MPS is a projectional editor. This means that the editing actions as you know them from a text editor may not work exactly the same way as in a text editor. This leads to a lot of benefits in terms of language modularity and notational flexibility. It also has the potential drawback that the editor may not work exactly as you'd expect. To better understand how the editor works and what MPS does to achieve quite good editor usability, we suggest you watch this video: http://www.youtube.com/watch?v=PvcXD4cFM4s
In addition, Section Keyboard Shortcuts contains a lot of important keyboard shortcuts. Please take the time to read it! The cheat sheet in Section Cheat Sheet summarizes them as well.
Since MPS is a projectional editor, the same program can be shown with different notations. Users can switch the notation at any time to match the task they are working on. The specific available notations are discussed throughout this user guide when we discuss the respective languages; examples include a tabular notation for state machines, a textual vs. graphical notation for wiring components, as well as the regular and the presentation mode in the kinds of documents you may be reading right now.
All notations can be switched from one common menu location; see also Fig. 3.3.2-A. The menu shows only those options that are valid for the languages used in the current model, and the selection is specific to any given editor/root. The specific entries are discussed in the context of their respective languages.

Here is a summary of the most important keyboard shortcuts. Please take the time to learn them. It pays off! At the end of this section you can also find a cheat sheet that you can print and put next to your computer.
Entering Code In MPS you can only enter code that is available from the code completion menu. Using aliases and other "tricks", MPS manages to make this feel almost like text editing. Here are some hints though:
Entering Expressions Expressions can be entered linearly. So if you have a number literal 42 and you want to change that expression to 42 + 2, you can simply move to the right side of the 42 and type + and then 2. However, if you want to add something on the left side of an expression (e.g. changing 42 to 10 + 42), then you have to move to the left side of the 42 and enter + there as well; you can then enter the 10.
Parentheses Imagine you have already entered 4 + 2 * 3. The * binds more tightly, so the tree root is the +, with the 4 and the * as children. If you now want to change this to (4 + 2) * 3, you have to set parentheses. This is supported in a natural way: just move to the left of the 4 and enter an opening parenthesis. It will remain red, because the balancing parenthesis is still missing. You can now go to any other place in the expression, in this case to the right of the 2, and enter the closing parenthesis. The tree is restructured accordingly.
Navigation Navigation in the source works as usual using the cursor keys or the mouse. References can be followed ("go to definition") either by Ctrl+Click or by pressing Ctrl+B. The reverse is also supported: the context menu on a program element supports Find Usages, which can also be activated via Alt+F7. Within an mbeddr program, you can also press Ctrl+F12 to get an outline view that lists all top-level or important elements in that particular program, so you can navigate to them easily. Ctrl+E shows you the recently closed editors, in case you've closed too many. More generally, Ctrl+Tab can be used to switch between windows in MPS.
Selection Selection works differently from normal text editors. Ctrl+Up/Down can be used to select along the tree. For example, consider a local variable declaration int x = 2 + 3 * 4; with the cursor at the 3. If you now press Ctrl+Up, the 3 * 4 will be selected, because the * is the parent of the 3. Pressing Ctrl+Up again selects 2 + 3 * 4, and the next Ctrl+Up selects the whole local variable declaration. You can also select with Shift+Up/Down. This selects siblings in a list. For example, consider a statement list as in a function body:
void aFunction() {
int x;
int y;
int z;
}
Imagine the cursor in the x. You can press Ctrl+Up once to select the whole int x; and then you can use Shift+Down to select the y and z siblings. Note that the screencast mentioned above illustrates these things much more clearly. We recommend again to watch it, if you haven't done so yet.
Deleting Things The safest way to delete something is to mark it (using the strategies discussed in the previous paragraph) and then press Backspace or Delete. In many places you can also simply press Backspace behind, or Delete before, the thing you want to delete.
Adding List Elements If you are in a list context (e.g. in function arguments or statement lists), you can press Enter to add an element after the current one, and Shift+Enter to add one before the current element. You can also type the list separator (if one is defined) to add a new element. For example, in an argument list, you can type a comma to add another argument.
Intentions Some editing functionalities are not available via "regular typing", but have to be performed via what is traditionally known as a quick fix. In MPS, those are called intentions. The intentions menu can be invoked by pressing Alt+Enter while the cursor is on the program element for which the intentions menu should be shown (each language concept has its own set of intentions). For example, module contents in mbeddr can only be set to be exported by selecting the respective intention from this menu. Explore the contents of the intentions menu from time to time to see what's possible. Note that you can just type the name of an intention once the menu is open; you don't have to use the cursor keys to select from the list. So, for example, to export a module content (function, struct), you type Alt+Enter, e, x, Enter.
Surround-With Intentions Surround-With intentions are used to surround a selection with another construct. For example, if you select a couple of lines (i.e. a list of statements) in a C program, you can then surround these statements with an if or with a while. Press Ctrl+Alt+T to show the possible surround options at any time. To reemphasize: in contrast to regular intentions, which are opened by Alt+Enter, surround-with intentions can work on a selection that contains several nodes!
Refactorings For many language constructs, refactorings are provided. Refactorings are more important in MPS than in "normal" text editors, because quite a few editing operations are hard to do manually. Please explore the refactorings context menu, and take note when we explain refactorings in this user guide. Unlike intentions, which cannot have a specific keyboard shortcut assigned, refactorings can, and we make heavy use of this feature. For example, Ctrl+Alt+V extracts a variable from an expression.
Select in Project To select the currently edited root node in the Project Explorer (the big tree on the left), press Alt+F1 and then Enter.
Open Inspector Sometimes, additional properties of an element can be entered in the inspector ("properties view"). To open the inspector window, press Ctrl+2.
Make and Rebuild By default, Ctrl+F9 (Cmd+F9 on the Mac) makes the current module, i.e. it regenerates and recompiles if the current module has changed. We recommend using the Keymap preferences to assign Ctrl+Alt+F9 (and Cmd+Alt+F9 on the Mac) to a complete Rebuild of the current solution; sometimes changes aren't detected correctly and a full rebuild is necessary. Ctrl+Alt+F8 and Ctrl+Alt+F10 should be mapped to a rebuild of the current model and the current project, respectively (see screenshot below).
In this chapter we discuss version control with MPS. Note that this is not an introduction to version control in general: we assume that you understand version control concepts before you read this chapter. We use git as the example version control system because this is what the mbeddr team has used successfully for years now. We clearly recommend using git.
Also, mbeddr/MPS uses the version control support of the IDEA platform. So to get an overview, you can also check out the following links:
In principle, MPS can work with all the version control systems supported by IntelliJ IDEA. These include git, Subversion, CVS, Mercurial, Perforce, TFS, VSS and Rational ClearCase. This is the case because from the version control system's perspective, MPS is a regular, file based tool.
However, this is not quite true, because during merging, some MPS-specific things have to be done. For this purpose, version control systems support merge drivers. Currently, MPS merge drivers exist only for git, Subversion and CVS. Since nobody wants to use CVS anymore, git and Subversion are the most reasonable alternatives to use with MPS. However, it is feasible to develop additional merge drivers if this should become necessary. Please contact the MPS team in this case.
Our experience over the last years is with git, which is why we use git in this chapter. We've also used Subversion successfully in one project. We clearly recommend using git with mbeddr.
To be able to use version control from within MPS, MPS must be made aware of the version control system and its metadata. MPS uses so-called VCS roots. A VCS root is simply a root directory in which git (or another version control system) stores its metadata. You can configure the roots via the Preferences (see Fig. 3.5.3-A). Note that MPS also detects directories that could/should be roots and brings up a popup.
Change Markup in Editor As you edit code in the regular MPS editor, MPS keeps track of changes. In particular, it shows changes in the left gutter. Fig. 3.5.4-A shows how new nodes are highlighted. MPS also shows changes to existing nodes as shown in Fig. 3.5.4-B. Clicking on the respective color bar in the gutter lets you revert that change, among other things. It also lets you show a detailed diff; we discuss the diff in the next paragraph.
Changes View The Changes view keeps track of which files (i.e., models) have changed on your system compared to the latest state in the version control system. Fig. 3.5.4-C shows an example. The defaultExtensions model is the one that contains the root with the two changes shown above. You can see a diff by selecting the respective entry from the context menu of the model in the Changes view. The view which then opens (Fig. 3.5.4-D) lets you choose from the changed roots (if several have changed) and then shows a diff in the usual side-by-side style. Note that this works for arbitrary notations! You can also undo the changes with the usual operations.
In principle, all version control operations except the diff/merge can be performed from the command line. You can also use VCS-specific tools such as SourceTree (http://www.sourcetreeapp.com/). However, it is recommended to use the MPS-internal ways to do things to make sure that MPS' caches do not get out of sync. In practice we do the basic things (explained here) with MPS, and the advanced things with the command line or SourceTree.
We recommend exploring the version control menu in MPS, and in particular its git-specific submenu; it has a lot of git operations such as branching or stashing.

Updating, or pulling in git-speak, refers to getting the newest contents from the version control system. From within MPS you can do this via the corresponding menu entry. After the update, the Version Control view shows the set of changed files (Fig. 3.5.5.1-A). You can use the context menu to show a diff of the changes that came in.

You can use Ctrl+K (or the corresponding menu entry) to bring up the commit dialog. This dialog, shown in Fig. 3.5.5.2-A, lets you select the models you want to commit and specify a commit message. By using the context menu on models, you can also revert some of the changes. By pressing the little down-arrow on the commit button, you can also directly push from within MPS. You can also use the context menu to show the diff, i.e. the changes that are about to be committed.

Conflicts happen if the same node(s) are changed independently by different parties. This can happen during an update/pull (if somebody else has changed the same nodes independently) or when merging a branch (in the case where the two branches have changes to the same model). In both cases the Version Control view, when showing the incoming changes, marks conflicting files red, as shown in Fig. 3.5.5.3-A.

You can then open the Merge overview (Fig. 3.5.5.3-B). It shows the list of conflicting files, and for each of them you can select whether you want to accept your local changes (overwriting what comes in), accept the incoming changes (discarding your own changes) or merge manually. In case you choose to merge manually, you will get a three-way merge dialog, as shown in Fig. 3.5.5.3-C. There you pick and choose the changes you want to use. You can also edit the merge result, as well as copy/paste changes into it.
, you will get a three-way merge dialog as shown inThere are various ways of interoperating with textual worlds. One of them is the import of textual, legacy C source. This is covered in Section Accessing Libraries. In this section we discuss a few additional ways.
Node Info and Node URL It is often useful to be able to point somebody to a specific node. For example, one may want to send an email to a coworker pointing out a specific node, or an issue in an issue tracker may want to refer to a program node in MPS. For this purpose, it is possible to copy the Node Info and the Node URL to the clipboard (and then paste them as text somewhere else).
You can select any node in MPS and use the respective entry from the context menu. [ToDo: Where is the menu entry in MPS to do the same based on URL in the clipboard? ] The following code snippet is a node URL. If you open this URL in a browser, MPS will try to open the respective node. This requires MPS to be running, and the currently opened project must contain the node. MPS does not open the project if it is currently closed.

http://localhost:8080/select/tutorial/r:4ac377c2-0a54-4908-ae24-f86f1bad7e73/1195021413141453989/
The node URL is not readable by humans, only by the tool. You can use the respective menu entry to get more detailed information about the node, as shown in the next snippet. This contains enough data to navigate to the node manually; of course the URL is also included to automate the navigation.

project: tutorial (/Users/markusvoelter/Documents/mbeddr/mbeddr.core/code/applications/tutorial)
module: com.mbeddr.tutorial.documentation
model: com.mbeddr.tutorial.documentation.ug.fundamentals
root: G_TextInterOp [Document]
node: Word [Word]
url: http://localhost:8080/select/tutorial/r:4ac377c2-0a54-4908-ae24-f86f1bad7e73/1195021413141455557/
By the way: this functionality is also very useful if you want to report problems about the documentation to us. Please include a Node URL or even better, a Node Info of the document or paragraph where you spot a problem.
Copy to Text Every MPS node can be copied to a text editor. However, the result is only useful if the original node has an essentially textual syntax (like regular program code or the paragraphs in this document). In the text editor, it will look essentially the same. However, it is not possible to paste the text back into MPS because, by default, an MPS language definition does not come with the parser necessary to process the text and build the MPS tree (it is of course possible to build and integrate such parsers into MPS, but you don't get them for free). So this approach is only useful for communication purposes, but not for actually exchanging program code. The next paragraph describes a more useful alternative.
Copy to XML It is possible to copy a node (and its complete subtree, of course) to an XML text. The menu item Edit->Copy Special->Copy Node as XML creates the XML. This can of course be copied into any text context. The following listing shows an example XML (of this paragraph):

<_root _nodeID="1195021413141455614" _conceptFQN="com.mbeddr.doc.structure.TextParagraph" _linktype="child">
<header _nodeID="1195021413141455617" _conceptFQN="com.mbeddr.doc.structure.TextParHeader" _linktype="child" text="Copy to XML" />
<text _nodeID="1195021413141455615" _conceptFQN="de.slisson.mps.richtext.structure.Text" _linktype="child">
<words _nodeID="1195021413141455616" _conceptFQN="de.slisson.mps.richtext.structure.Word" _linktype="child" escapedValue="It is possible to copy a node (and its complete subtree, of course) to an XML text. The menu item " />
<words _nodeID="1195021413141473799" _conceptFQN="com.mbeddr.doc.structure.KeyPressFormattedText" _linktype="child">
<text _nodeID="1195021413141473800" _conceptFQN="de.slisson.mps.richtext.structure.Text" _linktype="child">
<words _nodeID="1195021413141473801" _conceptFQN="de.slisson.mps.richtext.structure.Word" _linktype="child" escapedValue="Edit->Copy Special->Copy Node as XML" />
</text>
</words>
<words _nodeID="1195021413141473802" _conceptFQN="de.slisson.mps.richtext.structure.Word" _linktype="child" escapedValue=" creates the XML. This can of course be copied into any text context. The following listing shows an example XML (of this paragraph):" />
</text>
</_root>
This XML can then be pasted into another MPS instance using the corresponding paste command. The MPS node structure is reconstructed and the (internal and external) references are reestablished. While it is generally better to use the version control system (and branching, if necessary) to exchange code, this XML-based facility is sometimes useful as well.

You can do a full-text search on solutions and models. The functionality is available via the context menu (shown in Fig. 3.6-B). The resulting search window (Fig. 3.6.2-A) reports all the model elements that contain your search string. Note that this can also be used to search this documentation itself.
It is often useful to open a terminal (console, command line) in solutions or models. The context menu for solutions or models has this option. For solutions, the terminal opens in the solution directory. For models, it opens in the model's output directory (where the model's code will be generated to). Fig. 3.6-B shows the context menu for solutions.
[ToDo: ]
Print it, cut it out and put it next to your monitor when you start with mbeddr. Note that on the Mac, most (but not all!) of the Ctrl+Something shortcuts are actually Cmd+Something. You will have to try it out.
Action | Shortcut |
---|---|
Code Completion Menu | Ctrl+Space |
Code Completion Menu Reset | Ctrl+Space again |
Intentions Menu | Alt+Enter |
Select Intentions | Alt+Enter, then type name |
Wrap With... | Ctrl+Alt+T |
Deleting | Backspace, or Select-and-then-Backspace |
Add after current | Enter |
Add before current | Shift+Enter |
Move current node up | Ctrl+Shift+Up on Windows and Linux, Cmd+Shift+Up on Mac |
Move current node down | Ctrl+Shift+Down on Windows and Linux, Cmd+Shift+Down on Mac |
Select Along Tree | Ctrl+Up on Windows and Linux, Cmd+Up on Mac |
Select Siblings (List) | Shift+Up/Down |
Select in Project | Alt+F1, then Enter |
Go-To-Definition | Ctrl+Click or Ctrl+B on Windows and Linux, Cmd+Click or Cmd+B on Mac |
Find Usages | Alt+F7 on Windows and Linux, Alt+F7 on Mac |
Find Usages with Options | Ctrl+Alt+Shift+F12 on Windows and Linux, Cmd+Alt+Shift+F12 on Mac |
Outline | Ctrl+F12 on Windows and Linux, Cmd+F12 on Mac |
Open/Focus Project Tree | Ctrl+1 on Windows and Linux, Cmd+1 on Mac |
Open/Focus Inspector | Ctrl+2 on Windows and Linux, Cmd+2 on Mac |
Close current Editor | Ctrl+F4 on Windows and Linux, Cmd+F4 on Mac |
Show List of Recent Editors | Ctrl+E on Windows and Linux, Cmd+E on Mac |
Toggle through current Windows | Ctrl+Tab |
Add Documentation | Ctrl+A |
Comment Out | Ctrl+Shift+C |
Make Solution | Ctrl+F9 |
Rebuild Solution | Ctrl+Alt+F9 (custom mapped) |
Rebuild Model | Ctrl+Alt+F8 (custom mapped) |
Rebuild Project | Ctrl+Alt+F10 (custom mapped) |
Here are a few settings we recommend you change in MPS more generally (in the settings menu):

While mbeddr tries to stick as closely as possible to the C99 standard, there are some differences between the two. In this section we discuss these differences in detail.
C has header files to manage visibility. If a file A.c wants to access symbols defined in a file B.c, then the to-be-accessed entities (or prototypes of them) are put into B's header file B.h. A.c uses #include to import the B.h header, and this way sees the declarations of the symbols. Thus, header files can be used in C to provide some kind of modularization and information hiding; they can even be seen as a poor man's way of specifying interfaces (through prototypes). mbeddr instead provides a native, first-class module system. Headers are not exposed; they are only used under the hood (i.e., generated).
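To recap the classic C pattern described above, a minimal sketch using the file names from the paragraph:

/* B.h -- declares the symbols that B.c wants to expose */
int add(int a, int b);

/* B.c -- defines them */
#include "B.h"
int add(int a, int b) { return a + b; }

/* A.c -- includes B.h and can therefore call add() */
#include "B.h"
int main(void) { return add(1, 2); }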
Chunks and Modules MPS' main entity for structuring code are models (see Section Project Structure). A model is essentially a file that contains a set of nodes. The nodes directly beneath the model are called root nodes in MPS. All root nodes provided by mbeddr are called chunks. A chunk has a name as well as a set of dependencies, i.e., references of chunks it depends on. Chunks come in many different kinds. BuildConfiguration
s are chunks, UnitContainer
s are chunks, and the modules in which you can write mbeddr C code are also chunks. In general, any chunk can reference (or import) any other one, but there may be chunk-type-specific restrictions. If a chunk A
imports chunk B
, then it is possible to reference B
's contents from A
(in some cases only exported
content are referencable; see below).
Implementation and External Modules Modules are the chunks that contain C code (as opposed to, for example, build configurations or requirements). mbeddr has two different kinds of modules. The most common one is the ImplementationModule, in which you write your normal mbeddr code; it is generated to C99 code and then compiled. The second kind is the ExternalModule. External modules are not generated into C code. They are used to proxy external, textual C code such as header files on which your code may rely 1. Since they just proxy existing C or header files, they don't contain any implementation, only declarations. ExternalModules always have to declare a header file that is included in the generated C code (the file for which they are a proxy).
Visibility and Exporting A chunk A that imports (depends on) chunk B only sees those contents of B that are exported. Some kinds of chunks, for example the requirements module, implicitly export everything. Other chunks, for example implementation modules, default to private contents, and the user has to specifically mark contents as exported. An importing chunk, A in the example, can also mark a chunk import as reexport. This way, if a chunk C imports A, and A has reexported imports, C can access the contents of B as well.
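The following sketch illustrates this (module and function names are made up for illustration; the exact projection of the import and reexport flags may differ):

module B imports nothing {
  exported int32 answer() {
    return 42;
  }
  int32 helper() { // not exported: invisible to importers
    return 0;
  }
}

module A imports B { // the import of B is marked as reexport
}

module C imports A { // sees B's exported contents through A's reexport
  int32 useIt() {
    return answer();
  }
}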
1: This is necessary because, in order to reference a node (e.g., call a function), that node has to be in MPS. If you want to call into textual C code that is not in MPS, you have to create a proxy of that textual code in MPS. External modules are used for this purpose.
mbeddr supports all of C99's data types in principle, but it changes a few defaults. We discuss these details here.
While in C99 the boolean is essentially an integer, in mbeddr booleans are a first-class type. This means that code like the following sketch is not valid in mbeddr:
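boolean b = 1; // error in mbeddr: an integer literal is not assignable to a boolean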
Booleans and integers are also not assignable to each other by default; you have to convert them explicitly. This is typically needed when you interface with legacy code that was not written in mbeddr. In addition to the int2bool conversion operator, there is also a bool2int operator.
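A sketch of the explicit conversions (the variable names are made up, and the operators are shown in call-like form):

int8 flag = 1;
boolean b = int2bool(flag); // explicit integer-to-boolean conversion
int8 raw = bool2int(b);     // and back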
C allows you to define user-defined types (struct and union). To make these types usable, they are usually wrapped in a typedef. This is not necessary in mbeddr: mbeddr takes care of the typedef during generation to textual C. In addition, the order of declarations does not matter in mbeddr. It is safe to write code which references types that are defined later in the ImplementationModule; the contents of a module are automatically sorted topologically during generation.
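For example, a struct can be used as a type directly, without a separate typedef (Point is a made-up name for illustration):

struct Point {
  int16 x;
  int16 y;
};

Point origin; // legal in mbeddr; the typedef is generated under the hood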
Integers in mbeddr always come with a qualified size. There is no int, short, long long and so on in mbeddr, because the size of these types can differ depending on the compilation target. Instead, mbeddr has its own data types for signed and unsigned integers from 8 bit to 64 bit. For example, uint8 represents an 8-bit unsigned integer, int8 represents an 8-bit signed integer, uint16 represents a 16-bit unsigned integer, and so on.
If you need to interact with existing C code that uses int, you can still write int inside ExternalModules, but you have to provide a mapping to the mbeddr types. This is done inside the TypesizeConfiguration.
As described above, mbeddr uses size-qualified alternatives to the C99 native types. To interact with existing C code, mbeddr offers ExternalModules. In those modules it is possible to use unqualified integers (such as int). Since mbeddr by default doesn't know the size of these types, they have to be configured on a per-project basis. This is done in the so-called TypesizeConfiguration, which can be accessed as shown in Fig. 4.2.4-A. It contains a mapping from each C99 type to an mbeddr type. The mapping for a given target may differ from the defaults that ship with mbeddr; it can be changed for each project.
One of the fundamental differences between mbeddr and C is the way arrays are declared. In C, array declarations are hard to read when they incorporate pointers and/or the const and volatile modifiers. In mbeddr, array types are declared and read from right to left. This may be different from other languages you are used to, but we introduced it to make reading types consistent across mbeddr. Declaring a multidimensional (3 sets of 2 elements) array of 16-bit integers in C99 looks like this:
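int16_t array[3][2]; // C99: an array of 3 arrays of 2 int16_t elements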
In mbeddr it looks like this:
This might look irritating at first, but once we add pointers it becomes much easier to read. Declaring a pointer version of the array we declared above looks like this in C:
int16_t *(array[3][2])
In mbeddr, the same declaration is completely written from right to left:
Declaration order In contrast to C99, the order of declared functions, variables and types doesn't matter, and forward declarations aren't necessary. mbeddr takes care of all this during generation to textual C code.
No Preprocessor mbeddr does not expose the C preprocessor to the end user. Because the preprocessor simply replaces text, it can produce code that does not compile (and create all kinds of other mayhem). In addition, the preprocessor does not take the type system into account, so preprocessed code may not typecheck. The most common uses of the preprocessor are covered by safer mbeddr alternatives:
#define'd constants are supported via #constant
function-like macros are supported via #macro
#ifdefs are supported via native support for product-line variability (see Section Static Variability)
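For example, a #constant can be referenced like any other expression and is fully type-checked (the score variable is made up; TAKEOFF also appears in the state machine tutorial later in this guide):

#constant TAKEOFF = 100;
int32 score = TAKEOFF + 1;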
Finally, more sophisticated/awkward/brittle uses of the preprocessor should be replaced by native mbeddr language extensions. There's no need for "preprocessor meta programming" if you have a full-blown language workbench at your disposal!
Unit testing is important for software development in general, for developing code with mbeddr, and for the remainder of this user guide; hence we discuss it now. The examples discussed in this section can be found in SimpleTestCase.
Test Subjects The example module contains two functions which we will use as test subjects, add and divide. They look as follows:
SimpleTestCase
model mbeddr.tutorial.main.defaultExtensions
package examples constraints
int32 add(int32 a, int32 b) {
return a + b;
} add (function)
int32 divide(int32 a, int32 b) {
return a / b;
} divide (function)
imports nothing
Declaring Test Cases Test cases are a little bit like void functions: they have no parameters and they do not return anything. Nonetheless, mbeddr provides test cases as first-class abstractions because they track assertion failures. Here is the signature of a test case:
exported testcase testAdding {
testHelperFunction();
assert-equals(0) 10 == add(5, 5);
} testAdding(test case)
Asserting Inside test cases, a number of assert statements are available, as Fig. 4.5-A shows. Their meaning should be self-explanatory. Using these assert statements, test cases can now be made meaningful.
Invoking test cases A collection of test cases can be invoked via the test expression. It invokes the test cases mentioned as arguments and evaluates to the total number of assertion failures. Typically the test expression is used from a main function, and the result is returned to the OS; this way, the whole process fails if there are assertion failures. The test output is also printed to the Console in MPS, with links pointing to the individual test cases in the C code. You can read more about this in Section Running Applications from MPS/mbeddr.
exported int32 main(int32 argc, string[] argv) {
  return test [ testAdding, testDivision ];
} main (function)
The test executable can then be run like any other executable, as, for example, the HelloWorld example in Section Build and Run. Note that you can set the isTest flag to true for the executable (inside the BuildConfiguration). If you do this, your Makefile gets an additional target test, so you can invoke it directly via make test.
Helper Functions Assert statements only work in test cases. If, for some reason, you want to use them from regular functions, then these functions have to carry the @test helper annotation. It can be attached with the corresponding intention.
SimpleTestCase
model mbeddr.tutorial.main.defaultExtensions
package examples constraints
void testHelperFunction() {
assert(0) add(1, 1) == 2;
} testHelperFunction (function)
exported testcase testAdding {
testHelperFunction();
assert-equals(0) 10 == add(5, 5);
} testAdding(test case)
imports nothing
Parameterized unit tests mbeddr integrates tightly with formal verification tools, which can be used to specify a verification harness for the function under test. This achieves functionality similar to ordinary parameterized tests. In fact, testing a function (a unit of the system) with a harness and formal verification is more powerful: unit tests usually cover only selected cases, while verification can prove a property (in this case, the assertions) for a whole set of values. You can read more about these features in Section Formal Verification with CBMC.
Domain-Specific Testing Support Some of mbeddr's extensions (such as components or state machines) come with their own extensions of the testing language to make it more efficient to unit-test these abstractions. These extensions are discussed together with the respective language extensions.
Function pointers are an important ingredient for writing flexible, configurable software in C. However, their syntax is awkward and their type safety is limited -- hence, mbeddr provides a different, more typesafe approach.
As the first example, we will add a configurable event handler using function pointers. We create a new module FunctionPointers using the context menu on the current model.
Inside it, we will add a struct called Trackpoint that contains a number of members. You create the struct by just typing struct inside the module. You add the members by simply starting to type the int8 types. Ignore the physical units (the things behind the slashes after the types) for now.
exported struct Trackpoint {
int8 id;
int8/s/ time;
int8/m/ x;
int8/m/ y;
int16/m/ alt;
int16/mps/ speed;
};
We then create two functions that are able to process Trackpoints. Here is one function that (intentionally) does nothing. You enter this function by starting out with the Trackpoint type, then typing the name, and then using ( to actually create the function (the element has been a global variable up to this point!):
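A sketch of such a do-nothing function (the exact body in the tutorial may differ):

Trackpoint process_doNothing(Trackpoint tp) {
  return tp;
} process_doNothing (function)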
Other functions with the same signature may process the data in some specific way. We can generalize those into a function type using a typedef. Note that entering the function type ()=>() is in fact a little bit cumbersome; the alias for entering it is funtype:
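A sketch of such a typedef (the name TrackpointProcessor is made up for illustration):

typedef (Trackpoint)=>(Trackpoint) as TrackpointProcessor;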
We can now create a global variable that holds an instance of this type and acts as a global event dispatcher:
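TrackpointProcessor processor; // global dispatcher; uses the hypothetical type name from the sketch above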
We also create a new, empty test case that we will use to make sure the program actually works. In the test we assign a reference to process_doNothing to that event handler. Note the : notation for function references -- we do not use the ugly C function pointer syntax. With this in place, we can write the first test assertion:
{
processor = :process_doNothing;
Trackpoint i2 = processor(i1);
assert(0) i2.id == 1 && i2.alt == 100 m;
}
Let us complete this into a runnable system. In the Main module we change our main function to run our new test. Note how we import the FunctionPointers module; we can call the test case because it is exported. In the code below we call all the test cases built in the tutorial:
exported int32 main(int32 argc, string[] argv) {
  return test [ testProcessing, testLambdaProcessing, testNullerOK,
                testInterpolator, testJudging, testInterpolatorWithMock,
                testPrintf, testFlightAnalyzer, testRuntimeVar,
                testPresenceConditions, testConditionalAlternative,
                testConditionalReplacement, testFlightRecorder ];
} main (function)
To make the program build correctly, we have to make sure all modules used in the program are included. In particular, the FunctionPointers module must be included. Missing modules can be included with a quick fix. In the end, this results in the following binary:
executable MbeddrTutorialDefaultExt is test {
  modules:
    Main
    FunctionPointers (examples)
    Components (examples)
    DataStructures (examples)
    StateMachines (examples)
    LibraryAccess (examples)
    stdio_stub (external)
    stdlib_stub (external)
    RuntimeVariability (examples)
    StaticVariability (examples)
    ComponentsSRI (examples)
    UnitDeclarations (config)
}
We can run the tests from within MPS or from the console.
In contrast to regular C, mbeddr also provides lambdas, i.e., anonymous functions. They can be passed to functions that take function types as arguments, and they can also be assigned to variables that have a function type, such as the processor variable above. Here is an example:
exported testcase testLambdaProcessing {
Trackpoint i1 = {
id = 1,
time = 0 s,
x = 0 m,
y = 0 m,
alt = 50 m
};
processor = [tp|
tp.alt = 100 m;
tp;];
assert(0) processor(i1).alt == 100 m;
} testLambdaProcessing(test case)
A lambda is expressed as [arg1, arg2, ...|statements]. The types of the arguments are inferred from the context; they don't have to be specified. If several statements are required (as in the example above), they are laid out vertically. If only an expression is required, it is shown inline.
mbeddr supports several kinds of comments. We discuss all of them in this section. All of them have in common that they are so-called richtext blocks. This means that you can edit the multi-line comments in a way that resembles a regular text editor: pressing Enter to get a new line, using Alt+Backspace to delete words, Ctrl+Right to jump to the end of the line, etc. In addition, you can press Ctrl+Space inside the text blocks to insert actual MPS nodes: this lets you mix unstructured text with "real program code". We use this, for example, to support references to program elements inside the comments. Since these are real references, they are renamed automatically if the referenced element is renamed. Press Ctrl+Space to see which references are available; here are some:
@mc references another top-level element in an implementation module (these are called module contents, hence the mc)
@child references any named child under the current node
@module references another module
@arg references an argument (only works in functions or similar contexts)
@local references a local variable (only works in functions or similar contexts)
There are several context-specific references available; for example, a comment in a state machine can use @state to reference states.
Statement comments are comments that are used in statement context (i.e., places where you write "procedural" C code). Examples can be seen in the following two code fragments:
void store_store(Trackpoint* tp) <= op store.store {
//here is a regular statement comment.
@arg(tp) is a reference to an argument.
storedTP = tp;
return;
} runnable store_store
Trackpoint* store_take() <= op store.take {
Trackpoint* temp = storedTP;
//this comment refers to a local variable: @local(temp)
storedTP = null;
return temp;
} runnable store_take
Statement comments are entered just like you would expect, by typing //.
Statement comments are simply another kind of statement. They live inside a statement list along with all the other statements. Their only "connection" to the code is their physical location. Element Documentations are different: they are actually attached to program nodes. This means that if you move/cut/copy/paste the program node, the documentation moves along with it. In addition, while statement comments can only be used in statement context, element documentations can be attached to all (coarse-grained) program nodes (functions, components, operations, states, transitions, unit declarations, etc.).
To attach an element documentation, you can use the corresponding intention on the respective element.
Review comments are not inlined in the code. Instead, they reside in the right margin, just like comments in Word. An example can be found in the DataStructures module. Margin comments can be attached to any node using the intention. Once there, they can be replied to or deleted using context menu items. The color of the comment outline depends on the commenter (as determined by the currently logged-in user name).
If margin comments are used in a root, the root itself shows a summary at the top: how many comments are in the root, plus who commented last, and when. You can use the corresponding intention on that root annotation to delete all comments.
The comments described so far were all text comments: their contents is unstructured text. Commenting out code is totally different: even though the code has been commented out, the structure of the code (concepts, nodes, tree, references) must be retained so it can be commented back in. This is why commenting out code is treated differently from the comments discussed above, which contain text (as opposed to code).
Essentially all program nodes can be commented out, for example via Ctrl+Shift+C (see the shortcut table above). Examples can be found in the FlightAnalyzer state machine (commented-out states and transitions), the commented struct in DataStructures, and the two first statements in calcVerticalSpeed.
To comment something back in, you can use Backspace on the leading /* or the trailing */.
If mbeddr is used as a plain C IDE, the efficiency of the generated code will not be any better or worse than if the code had been written in any other IDE. mbeddr does not do any magic to code that is written on C level.
Extensions to C come in two flavours: those that extend only the type system and those that affect the generated C code.
Type System Only An example of such an extension is the units support in mbeddr. While it can help you catch flaws in your code in the IDE, it has no implications for the generated code. This means you will not pay any price at runtime for using this extension.
Extensions that affect the generated Code The far more common case is that mbeddr's extensions translate into C code. Here we try to be as efficient as possible with respect to memory and runtime overhead. Some of these extensions have no overhead at all because they are very straightforward and just improve readability and reduce possible errors. However, more complex extensions such as mbeddr's components or state machines have to store state and potentially add indirections in certain cases. The price you pay for that is typically no higher than the price you would pay for implementing the same functionality manually in a clean, well-structured way.
In many cases mbeddr offers transformation options in the build configuration which make different trade-offs between flexibility and efficiency. For example, the static wiring strategy in mbeddr's components does not support interface polymorphism (less flexibility) but translates to C code that does not involve a function pointer indirection (more efficient).
Within the mbeddr IDE you have many options for writing readable code, mostly due to the high-level extensions. Not all of them can be reflected in the generated code, due to limitations of the C language itself. When translating mbeddr code into C code we follow a few idiomatic rules.
Naming First and foremost, mbeddr has a concept of namespaces, which is not available in C. Every module in mbeddr forms its own namespace. Additional structural elements such as components or state machines add further namespaces. This means that mbeddr allows two functions to have the same name as long as they live in different namespaces. In order to translate these namespaces into C and retain globally unique names, we typically prefix, e.g., the function name with the module name; similar things happen for all module contents. This can make the generated function names quite long and thus hard to read. There are two ways around that. First, you can add the preventNameMangling annotation to module contents. This prevents the prefixes from being added to the annotated element; it should only be used where you want to provide non-mbeddr users with an API to work with. The second option is to select the prevent name mangling configuration item in the build configuration. This tries to get rid of the prefixes where possible, i.e., if and only if a non-prefixed name is globally unique within an executable. This optimization is performed each time you rebuild your code. The downside is that the generated code may change once you introduce, for example, a new function with a name that already exists elsewhere in your code.
Sections The second option to structure your code in mbeddr are sections. They do not affect the generated code (nor the scoping in mbeddr); they are just there to visually structure your module contents.
Ordering It is also worth noting that in mbeddr you can order your module content according to aspects important for maintainability rather than the declaration dependencies you know from C.
mbeddr has its own abstraction for building applications, called the BuildConfiguration. The BuildConfiguration is responsible for the generation of the Makefile and holds parameters for the various parts of mbeddr's generators. The following is an example.
BuildConfiguration (c.m.t.d.code)
Platform
GNU paths are not checked
make: make
gdb: gdb
compiler
path to executable: gcc
compiler options: -std=c99
debug options: -g
Configuration Items
reporting printf
Binaries
executable HelloWorld is test {
modules:
HelloWorld
}
It is split into three sections:
Platform In this section a platform can be referenced. A platform specifies the compiler that is used and the options passed to it. By default mbeddr ships with two predefined platforms, Desktop and Nothing. The Desktop platform uses Make for building, GCC for compiling and GDB for debugging. By default, mbeddr expects these tools to be on your global PATH (see the paths in the build config above). However, you can specify their locations by providing an absolute path or using a path macro (see Section Paths and Libraries), e.g., specifying the location of make via ${make.home}/make. The latter makes sense if you are working in a team where people have these tools installed at different locations. The Nothing platform, as the name implies, does nothing: it does not compile anything and does not generate a Makefile. It is possible to define additional platforms for other compiler toolchains in the settings. [ToDo: image]
Configuration Items The Configuration Items section configures the generators of mbeddr. For example, the reporting item configures what the report statement should be generated to: if it is set to printf, the generator creates printf statements. There are various configuration items for generators in mbeddr. These configuration items are discussed later in the user guide in the context of the languages that contribute (and whose generation is affected by) these config items.
Binaries The last section is Binaries. It configures which mbeddr modules are compiled into a single compilation unit. A binary can be either an Executable or a Library. The first one tells the Makefile to create an ELF binary; the latter just creates a linkable library that can be used in further compilation steps. An Executable can also be marked as a test. This way it will be included in a target called test inside the resulting Makefile. If the tests should be run on the command line [ToDo: ref], this flag has to be true. mbeddr will also check whether all the modules that are used in the code of a compilation unit are included in the Binary (based on the transitive import dependencies between modules). If not, mbeddr prevents you from generating a broken Makefile. There is also an intention on the binary that adds missing modules.
Currently only one binary can be specified for each build configuration (this may change in the future).
The model checker verifies that there are no errors left in the mbeddr code. You can run it from the context menu of models, solutions or the whole project. If there is something wrong, code generation will likely not work. You may want to make it a habit to check your model (and fix errors) before you attempt a rebuild.
To build a binary in the IDE, make sure that the model from which you want to build contains a BuildConfiguration. You can then invoke the respective action from the context menus of models, solutions and the project -- each rebuilds its own scope. You can also press Ctrl+F9 to rebuild.
Rebuilding in MPS reduces all extensions back to mbeddr C, generates textual C99 from mbeddr C and then invokes the generated Makefiles to build the final binary.
If you don't have any errors in your mbeddr code (see the previous subsection), then code generation and compilation should go without problems and you can run the binary (see Section Running Applications from MPS/mbeddr). If something goes wrong nonetheless (during generation or compilation), the errors are shown in the messages view.
This section provides an overview of how build scripts for mbeddr-based applications are written and how you can integrate them into your continuous integration environment. For trying out the examples shown, you will need either the mbeddr IDE (see Section mbeddr IDE) or the mbeddr Distribution (see Section mbeddr Plugin Distribution). When building your mbeddr-based applications, we recommend using either of these two installation methods.
In the folder userguide.samples, you will find a solution named com.mbeddr.tutorial.sample.CProject; its main function is here: Main. It contains an mbeddr-based example application, which we will use throughout this chapter for describing the build process in MPS. The application consists of a single module, which contains a main function that invokes a testcase. As you can see from the assert statements, the latter will fail due to a false comparison.
You can build the application from within MPS by marking the solution, opening the context menu and selecting the rebuild action. To run it, select the main function (located inside Main), open the context menu and select the run action. You will then see a console appearing at the bottom of MPS with some messages in it (Section Running Applications from MPS/mbeddr describes their meaning). The next sections describe how to write a build script for this application and how to invoke it from the command line.
In software projects, you often want to build and test your software automatically. To achieve this for our example application, this section shows what a corresponding script looks like. MPS already comes with a language for writing these scripts. Scripts written with this language are translated to regular ant scripts, which you can then execute from the command line or your CI environment.
The tutorial project contains a build script for our example application. You can find it here: build-example. This script is organized into different sections, which have different responsibilities. The following list describes the meaning of each:
build-example generates: build-example is the name of the MPS script, whereas build.xml is the name of the generated ant file.
base directory: the file system location to which the ant file is generated.
use plugins: a list of MPS plugins that are required while executing the script. When building mbeddr-based applications, you just need those two.
macros: macros are used in the lower sections to avoid replication of file system paths and string values.
dependencies: build scripts, i.e., artifacts on which our build script depends at generation time. Since we use mbeddr (and depend on other languages as well), its plugins should be available while generating code for our application.
project structure: contains plugin descriptors and the solutions/languages you want to build. In our case, we have a dummy plugin descriptor that causes classes from MPS' debugger API to be loaded during generation, and the solution that contains our example application.
default layout: in case your build has artifacts (e.g., plugins or zip files), you can declare their structure here. Since we just want to run code generation and execute make, we do not need to list anything here.
Invoking rebuild on the build solution causes generation of an ant file. This section describes how to invoke this file for building and testing our application from the command line. For just building the application, we could directly invoke the generated script from the command line. Instead, we will write a wrapper script in ant that calls the generated file. The reason is that this wrapper script also invokes make test, which executes the tests written with mbeddr; MPS itself doesn't provide off-the-shelf support for that, therefore we have to do it this way.
The listing below shows the wrapper script with two targets: build for building the application and test for invoking the tests. In order to get it running on your machine, please adjust the following paths:
source.location: points to the folder where the generated files of your application solution are located (inside the folder source_gen).
mps.home: on Windows and Linux, this property points to the directory where your MPS is installed. On Mac, it points to the Contents folder, located inside MPS' installation directory.
In order to execute the script, please install ant on your machine (see Section Required Tools and Versions). Next, store this script to your local file system as wrapper.xml. To build the example application, navigate on the console to the location where you stored the script and enter ant -f wrapper.xml build. To execute the tests, enter ant -f wrapper.xml test at the same location.
<project name="build and test helloworld" default="build" >
<property name="base.path" location="/Users/domenik/repositories/mbeddr.core/code/applications/tutorial/solutions" />
<property name="source.location" location="${base.path}/com.mbeddr.tutorial.sample.CProject/source_gen/com/mbeddr/tutorial/sample/CProject/main" />
<property name="mps.home" location="/Users/domenik/MPS/MPS-135.1441.app/Contents" />
<target name="build">
<ant antfile="build.xml" target="clean" />
<ant antfile="build.xml" target="generate" />
<ant antfile="build.xml" target="build" />
</target>
<target name="test">
<ant antfile="build.xml" target="clean" />
<ant antfile="build.xml" target="generate"/>
<ant antfile="build.xml" target="build" />
<exec executable="make" dir="${source.location}">
<arg value="test"/>
</exec>
</target>
</project>
You can copy the same script to your CI environment and execute it there. Since you might have different file system paths there, you will need to change the locations in the ant properties. You can either change the values in the script or invoke the script with the following command (replace the brackets):
ant -Dsource.location=<path to solution> -Dmps.home=<path to mps> -f wrapper.xml build
Once you have built a binary, you can run it on the PC (assuming it has been built for the PC using the Desktop platform). There are two ways to run it: directly from the Build Configuration, or by creating a Launch Configuration and running or debugging it.
The main goal of the Launch Configuration is to let you specify certain properties of the mbeddr binary you'd like to run. The Launch Configuration is related to the Build Configuration because the latter specifies which modules constitute the executable, and exactly one module listed in the Build Configuration must contain a function which can act as a main function (i.e., an entry point).
As an example, let's take a look at the PlainCDemo, which is located in mbeddr.tutorial.main.plainC. In the same model, you find the corresponding BuildConfiguration (m.t.m.plainC). This configuration is used to specify which modules should be compiled into the binary. As a next step, the plainC model needs to be (re-)built. There are then two options to run/debug the binary from within MPS/mbeddr: from the context menu of the Build Configuration (this approach can be seen in Fig. 6.3.1-A), or by opening the Build Configuration of your application and running/debugging the executable from the context menu of the executable definition (this approach can be seen in Fig. 6.3.1-B).
After you have run the executable, a new launch configuration is created with the name of the executable that you specified. If a configuration with the same name already exists, it is overwritten. From this point on, it is also possible to run/debug the executable from the menu bar (see Fig. 6.3.1-A or Fig. 6.3.1-B) for the corresponding PlainCDemo element.
An existing launch configuration can also be used to launch your application. Select the PlainCDemo element from the drop-down list which lists the available Launch Configurations, then simply click the green run button to launch the executable, as shown in Fig. 6.3.1-B. In order to debug the application, click the green bug icon next to the selected Launch Configuration. You can read more about how to debug your application in Section Debugging.
Launch Configurations can be opened for editing either from the menu bar or by clicking on the corresponding menu item. Fig. 6.3.2-A shows the contents of the Launch Configuration for the PlainCDemo.
The following properties can be edited: the name of the Launch Configuration (if you specify a custom configuration, it may be important to rename it and save it like that, so that it will not be overwritten during the next run), and the referenced Build Configuration.
After you have launched the application, all output is redirected to the Console view of MPS. You will be able to see all the messages that would appear if you ran the executable from the command line.
Executable tests also show their test results in the Console. Fig. 6.3.3-A shows the output of the test run for the mbeddr.tutorial.main.defaultExtensions.SimpleTestCase executable. The console output contains links for executed tests and failed assertions; you can click on them to select the corresponding program element (test case, assertion) in the editor.
mbeddr comes with a debugger for core C. This chapter describes how to use this debugger to debug C programs written with mbeddr.
The mbeddr debugger behaves like a regular C debugger, but on the abstraction level of mbeddr and its extensions: it shows the call stack and watchables, and allows stepping and setting breakpoints. This way, you don't need to know any details about the generated C code. In addition, the debugger can be extended to enable debugging of code written with user-defined language extensions. Currently, the debugger uses gdb as its backend and only supports local debugging; in future releases of mbeddr, we plan to support debugging on the target device. In the tutorial, the solution mbeddr.tutorial.main contains a model named plainC. We use the contained mbeddr code to illustrate debugging mbeddr programs in MPS.
Before you can debug your program, you have to make sure the C code for the program has been generated. To do so, click on the model plainC, open the context menu and select the rebuild action, or press Ctrl+F9. Next, set a breakpoint in the first line of the main function. This is done by clicking into the gutter of the editor. The result should look like this:
Next, create a run configuration as explained in Section Running Applications from MPS/mbeddr and run it in debug mode by pressing Shift+F9 or by clicking on the debug button in MPS' toolbar (see Fig. 6.4.2-B).
The debugger UI should now appear at the bottom of the MPS window. In the lower left corner of this UI, you can see two activated buttons: a green one (arrow) and a red one (square). This indicates that the debugger is now connected to the underlying C debugger. You can now start program execution by clicking on the green (arrow) button. This will suspend the debugger on the first line of the main function, on your previously created breakpoint:
Next, press F7 to step into the current line; this suspends the debugger in the called function add. To return to the calling function main, press Shift+F8. Once the debugger is suspended there, you can see the stack frame main in the call stack and the local variables p, argc and argv in the watchables view (see Fig. 6.4.2-D).
This section assumes that you have installed MPS and mbeddr correctly. It walks you through creating a new project.
We first create a new project via the respective menu entry; Fig. 6.5.1-A shows the wizard that opens subsequently. We select a Solution Project 1, which is a project that contains one solution. We use the usual Java package naming conventions for the solution. If you have a project already, you can simply create a new solution inside the existing project: select the project node itself and choose the respective entry from the context menu.
1: If you work with the mbeddr IDE, the language project shown in Fig. 6.5.1-A is not available.
Inside the solution you have to create one (or potentially several) new model (see Fig. 6.5.2-A); use the respective entry in the context menu of the solution. The model should have a (qualified) name which typically starts with the solution name. We call our model ...main.
After pressing OK, the new model is created. Also, the model's property dialog pops up automatically (it can also be brought up manually). Add the com.mbeddr.core devkit; note that you can select devkits and languages by just typing the first letters (c.m.c). Fig. 6.5.2-B shows the result.
Note: the steps explained in this subsection can be performed automatically using the corresponding wizard. We do them manually here to explain things in detail.
Inside the model we create a new Implementation Module. To do so, select the respective entry from the model's context menu. Specify the name HelloWorld for the module. The result looks as shown in Fig. 6.5.3-A.
Inside the module we now write a test case that (wrongfully) asserts that 1 + 1 == 3. Here is the code:
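A sketch of this test case (the exact assert variant in the tutorial may differ):

exported testcase testAdding {
  assert(0) 1 + 1 == 3;
} testAdding (test case)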
Next is a main function. You can create one by simply typing main: this expands to a fully parameterized main function which already contains a return 0; statement. Next, we want to replace the 0 with a call to our test case. Delete the 0 and instead enter a test expression. Inside its brackets, add a reference to the testAdding test case. The resulting main function looks like this:
exported int32 main(int32 argc, string[] argv) {
  return test [ testAdding ];
} main (function)
To be able to run the code, we also have to create a build configuration. Use the respective entry in the context menu of the main model to create one. Initially it looks as shown in Fig. 6.5.3-B.
To make the system run, you have to do three things:
reference the desktop platform; no need to change any of its parameters.
add the reporting item that determines how console output is handled; the default printf is ok.
create an executable named HelloWorld, and add the HelloWorld implementation module to its modules slot.
You're done. It should now look as shown in Fig. 6.5.3-C.
You can now build the system, for example by rebuilding the main model or the com.mbeddr.tutorial.sample.HelloWorld solution. In the Messages view you should see the following output:
rm -rf ./bin
mkdir -p ./bin
gcc -std=c99 -c -o bin/HelloWorld.o HelloWorld.c
gcc -std=c99 -o HelloWorld bin/HelloWorld.o
make finished successfully for com.mbeddr.tutorial.sample.HelloWorld/com.mbeddr.tutorial.sample.HelloWorld.main
No input. Skipping optional target.
"textGen" target execution time: 546 ms
"callMake" target execution time: 446 ms
"copyTraceInfo" target execution time: 138 ms
"generate" target execution time: 131 ms
"reconcile" target execution time: 37 ms
Other targets execution time: 35 ms; compile: 24 ms, configure: 6 ms, runProcessors: 5 ms, preloadModels: 0 ms, make: 0 ms, addLanguages: 0 ms, resetCache: 0 ms, checkParameters: 0 ms, auxCompile: 0 ms, collectPathes: 0 ms
To run the code, you can use the run action on the main function or on the HelloWorld executable. The result should be shown in the Run tool window, as in Fig. 6.5.4-A. Note how it contains clickable lines for the test(s) that have run as well as for the failed assertions.
This concludes the Hello, World.
So far we have not accessed any functions external to the mbeddr program -- everything was self-contained. Let us now look at how to access external code. We start with the simplest possible example. We would like to be able to write the following code:
module LibraryAccess imports nothing {
exported test case testPrintf {
printf("Hello, World\n");
int8 i = 10;
printf("i = %i\n", i); }
}
To make this feasible, we have to integrate C's standard printf function. We could import all of stdio automatically (we'll do that below). Alternatively, if you only need a few API functions from some library, it is simpler to just write the necessary proxies manually. Let's use the manual approach first.
External functions are represented in a so-called external module. After you create one and give it a name, it looks like this:
stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing
resources 6b853dfb-19be-33ee-b67c-33b658dd0753
An external module is always associated with one or more "real" header files. The trick is that when an implementation module in mbeddr imports an external module, the referenced real header is included into the C file upon generation of the textual C code. This also means that, even if stdio.h requires all kinds of additional definitions to work, these do not have to be imported into mbeddr. Only the function prototype must be imported so it can be called from mbeddr C.
So the first thing we need to do is to express that this stdlib_stub external module represents the stdlib.h file:
stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing
resources header: <stdlib.h>
In general, we also have to specify the .o or .a files that have to be linked to the binary during the make process (in the resources section of the external module). In the case of stdlib.h, we don't have to specify this because gcc links the standard library automatically. What remains to do is to write the actual printf prototype. This is a regular function signature; the ellipsis can be added via an intention, and the same is true for the const modifier. This leads us to this:
stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing
void* malloc(size_t size);
void free(void* ptr);
resources header: <stdlib.h>
To be able to write the test case, we have to import the stdio_stub into our LibraryAccess implementation module. In the build configuration we have to add the LibraryAccess and the stdio_stub to the binary. We should also call the testPrintf test case from Main.
mbeddr has support for importing existing C code. However, this is a commercial extension that has to be installed separately. For further information please contact mailto:mbeddr@itemis.de.
mbeddr can calculate the static value of an expression if all children of that expression are statically evaluable. This is the case if the leaf nodes of the expression tree consist of literals, references to #constants or references to #macros (function macro calls). Whether an expression is statically evaluable, as well as its calculated static value, is shown in the inspector.
The interpreter which calculates the static value works with a precision of 100 decimal places. The 101st place is rounded half up.
By default the interpreter has no effect on the code generated by mbeddr. However, by using the staticValueOf expression you can ask mbeddr's code generator to first use the interpreter to calculate the static value and then use this value in the generated code. This functionality was introduced so that code readability does not need to be sacrificed for performance reasons: in traditional C code you would typically see "magic numbers" where staticValueOf is used in mbeddr. The expression can be inserted into your code using the staticValueOf alias or by using a surround-with intention on a statically evaluable expression (Ctrl+Alt+T).
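A sketch of the idea (the variable and constant names are made up for illustration):

#constant SAMPLES_PER_SECOND = 100;
int32 samplesPerDay = staticValueOf(SAMPLES_PER_SECOND * 60 * 60 * 24);
// the generated C code contains the computed value 8640000 instead of the expression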
In addition to the pure static value calculation, mbeddr can perform various rounding operations before the static value is inserted into your code (an example is shown in Fig. 7.1-A). These rounding operations can be added via an intention on the staticValueOf expression, or they can be selected in the inspector of the staticValueOf expression. The following rounding operations are available:
The purpose of physical units is to annotate types and literals with additional information -- units -- which is used to improve type checking. Many embedded systems work with real-world quantities, and many of those have a physical unit associated with them.
Defining and Attaching Units to Types Let us go back to the definition of Trackpoint introduced in Section Function Pointers and make it more useful. Our application is supposed to work with tracking data (captured from a bike computer or a flight logger). For the examples so far we were not interested in the physical units.
Let us now explore how we can work with physical units, adding more semantics to this data structure. In order to use the physical units language, we need to import the com.mbeddr.physicalunits devkit. After the import we can add units to the types by simply pressing / at the right side of a type.
The 7 standard SI units are available from an accessory module called SIUnits, which can be imported into any implementation module. There is an extra nounit unit defined in this module; its usage is limited to test case definitions and conversion rules, which we will talk about in later sections. When you import this module, the simple units like s and m are immediately available.
For the speed member of the Trackpoint struct we need to add m/s. Since this is not an SI base unit, we first have to define it, which can be done either in the ImplementationModule or in a UnitContainer (such as UnitDeclarations). To create a new unit, just use its alias unit. The specification of the derived unit can be typed in after the name of the unit. A simple unit specification consists of one unit with possibly an exponent. If you would like to type in a composite unit specification, press ENTER to expand the components in the specification. Exponents can be added to the units by pressing the caret (^) symbol followed by the value of the exponent. This works for all units in the specification except for the last component; there the caret is disabled because it conflicts with the binary XOR operator. At the last position, use the appropriate intention to add the exponent to the unit (the intention is available for all components regardless of their position in the specification). An exponent can either be a simple integer number or it can be turned into a fraction by typing / in the exponent expression.
The following code example shows how we can use units for the members of the Trackpoint struct.
exported struct Trackpoint {
int8 id;
int8/s/ time;
int8/m/ x;
int8/m/ y;
int16/m/ alt;
int16/mps/ speed;
};
Units on Literals Adding these units may result in errors in the existing code (depending on whether you had added them in previous tutorial steps), because you cannot simply assign a plain number to a variable or member whose type includes a physical unit (int8/m/ length = 3; is illegal). Instead you have to add units to the literals as well. You can simply type the unit name after the literal to get to the following:
You also have to add them to the comparisons in the assertions, for example in this one:
{
processor = :process_doNothing;
Trackpoint i2 = processor(i1);
assert(0) i2.id == 1 && i2.alt == 100 m;
}
Operators If we were to write the following code, we would get an error:
int8 someInt = i1.x + i1.speed; // error, adding m and mps
This is because all the mathematical operators are overloaded for physical units, and these operations are type-checked accordingly. Clearly, the problem with this code is that you cannot add a length (i1.x) and a speed (i1.speed). The result is certainly not a plain int8, so you cannot assign the result to someInt. Adding i1.x and i1.y will work, though. Here is another example where the units match properly:
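int8/m/ distance = i1.x + i1.y; // hypothetical variable: m + m yields m, so this typechecks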
The calcVerticalSpeed function provides a few more examples of working with units in expressions:
void calcVerticalSpeed(TrackpointWithVertical* prev, TrackpointWithVertical* cur) {
/* cur.alt = 0 m; */
/* cur.speed = 0 mps; */
if (prev == null) {
cur.vSpeed = 0 mps;
} else {
int16/m/ dAlt = cur.alt - prev.alt;
int8/s/ dTime = cur.time - prev.time;
cur.vSpeed = dAlt / dTime;
}
} calcVerticalSpeed (function)
Editing Composite Units When you edit an expression with composite units, you may encounter the situation that multiple side transformations are available at a given place, so you need to make the choice by hand. Consider the following code example:
Here, when you are editing the unit definition of the number literal 2, the * operator may be used at the end either to introduce a multiplication into the expression or to extend the unit definition. Similarly, the ^ symbol may either introduce an exponent for the units or serve as the binary XOR operator. This problem only arises at the end of the unit definition.
Build Configuration If you try to rebuild the model, you will get an error message saying that you need to add the units configuration item to the build configuration. Once you have added this item, the build should be successful and we should be able to run the test again.
So far, we have used units only for type checking. However, sometimes there are several units for the same physical quantity; for example, speed can be measured in m/s or km/h. For this purpose you can define conversion rules between units.
Defining Conversions A conversion rule must define a source and a target unit (the conversion will happen between these units) and it must contain one or more conversion specifiers. A conversion specifier defines the conversion expression for a given type. Inside the conversion expression one can use the val expression as a placeholder for the to-be-converted value; it will be substituted with the value that is passed to the conversion rule. The type that you define for the specifier is the type that the val expression will have. Additionally, it is also possible to omit the type: in this case the specifier works as a generic one, where the expression may be applied to any type (the type of the val expression will be double in this case, but this is just a trick that is needed for the typesystem to work properly). The conversion specifiers are checked, and redundant specifiers are marked as erroneous (i.e., specifiers that are covered by some other specifier because its type is a supertype of the original type).
section conversions {
  exported conversion mps -> kmh {
    val as double -> val * 3.6
  }
  exported conversion kmh -> mps {
    val as <no type> -> val / 3.6
  }
  exported conversion s -> h {
    val as <no type> -> val / 3600
  }
  exported conversion m -> km {
    val as <no type> -> val / 1000
  }
} section conversions
Conversion rules can be either lazy or eager. The default behavior is lazy evaluation; you can switch to eager via the corresponding intention on the conversion rule itself.
Using Conversions You can now invoke this conversion within a convert expression:
void somethingWithConversion() {
Trackpoint highSpeed;
highSpeed.speed = ((int16/mps/) convert[300 kmh -> mps]);
} somethingWithConversion (function)
The convert expression does not explicitly refer to any specific conversion rule; you only need to define the target unit of the conversion, while the source unit is known from the type of the original expression. The system will try to find a matching conversion specifier (where both the units and the types match). This is where a conversion specifier with no specific type comes in handy, because it can be applied to any expression as long as the units match.
The conversion specifier can also be set manually in the Inspector of the convert expression.
Generic units can be used to enhance type safety for function calls by making it possible to specify a function generically with respect to the unit annotations it uses. The following code snippets show some use cases of generic units:
Trackpoint process_generic(Trackpoint e) {
e.time = sum(10 s, 10 s);
e.id = sum(1, 2);
e.x = sum(10 m, 10 m);
e.speed = mul(10 m, 10 s^-1);
return e;
} process_generic (function)
First you need to create generic units by invoking the Add Generic Unit Declaration intention on the function. Once the first one has been created, you can specify multiple generic units by simply pressing Enter in the unit list. These newly created generic units can then be used for type annotations just like any other unit. The substitutions are computed based on the input parameters of the function call. One can also combine a generic unit with additional, arbitrary non-generic units in the type annotations of the parameters and the return type. It is also possible to invoke the function with bare types, but be aware that once at least one substitution is present, the function call will be type-checked for matching units as well. Generic units can also have exponents (even fractions), and the same type checks apply to them as described for non-generic units. The following example shows how this could be done for a square root function:
double/U1^1/2/ sqrt(double/U1/ a) {
  double/U1^1/2/ res = 0 U1^1/2;
  //here goes your sophisticated square root approximation
  return res;
} sqrt (function)
Some additional notes on the usage of the generic units:
Let us assume we have an existing (legacy or external) function that does not know about physical units, and you cannot or do not want to use generic units. An example is anExistingFunction:
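A sketch of such a prototype, consistent with the call shown below (the parameter name is made up):

int16 anExistingFunction(int16 altitude);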
To be able to call this function with arguments that have units, we have to strip away the units before we call the function. This can be achieved by selecting the corresponding expression and invoking the Strip Unit intention. The type of the stripped expression is simply the type of the original expression, but without units.
int16/m/ someFunction(Trackpoint* p1, Trackpoint* p2) {
int16 newValueWithoutUnit = anExistingFunction(stripunit[p1.alt]);
return newValueWithoutUnit m;
} someFunction (function)
The opposite direction (i.e., adding a unit to a value that has no unit) is also supported: the introduceunit operator is available for this. It takes an expression plus the to-be-introduced unit as arguments.
Next to components and units, state machines are one of the main C extensions available in mbeddr. They can be used directly in C programs, or alternatively, embedded into components. To keep the overall complexity of the example manageable, we will show how state machines can be used directly in C.
State machines can be viewed and edited either with the textual notation or with the table-based projection; you can switch between the two modes with the corresponding option. This section gives a brief overview of state machines: how they can be defined, how to interact with C code when using state machines, and details about hierarchical state machines and their visualization with the help of the PlantUML tool.
Import the com.mbeddr.statemachines devkit into your model and create a new ImplementationModule (the example module is also called StateMachines). Then you can create the state machine in this module and add the ImplementationModule to the build configuration. This section guides you through an example that creates a state machine for a flight judgement system. The system has 5 states which describe the behavior of the plane and also give the semantics for awarding points for a given flight. We will create these states and see how they can be connected with transitions. We will also explore how to specify guard conditions for the transitions. The informal rules for judging flights are as follows:
In order to create the state machine, simply type statemachine at the top level of the implementation module. This creates a new state machine with an initial state and one event already defined. You can leave the event there; we will come back to it later. We know that the airplane will be in various states: beforeFlight, airborne, landing (and still rolling), landed and crashed. You can just rename the already existing initial state to beforeFlight and add the other states to the state machine. In the end you should have the following states:
exported statemachine FlightAnalyzer initial = beforeFlight {
state beforeFlight {
} state beforeFlight
state airborne {
} state airborne
state landing {
} state landing
state landed {
} state landed
state crashed {
} state crashed
}
The state machine will accept two kinds of events. The first one is the next event, which takes the next trackpoint submitted for evaluation. Note how an event can have arguments of arbitrary C types, a pointer to a Trackpoint in this example; the Trackpoint struct is already defined in the DataStructures module. The other event, reset, resets the state machine back to its initial state.
exported statemachine FlightAnalyzer initial = beforeFlight {
in event next(Trackpoint* tp) <no binding>
in event reset() <no binding>
}
We also need a variable in the state machine to keep track of the points we have accumulated during the flight. To create a new variable, simply type var in the state machine. This creates a new variable; you need to specify its name and type. The newly created variable is invisible from outside by default; you can change this to readable or writable with the corresponding intentions. Readable variables may be read from outside of the state machine, while writable variables can also be modified from the interacting C code (you will have to create a state machine instance first; we explain this below). In the end you should have a points variable in the state machine which is readable only:
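exported statemachine FlightAnalyzer initial = beforeFlight {
readable var int16 points = 0
...
}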
We can now implement the rules outlined above using transitions and actions. Let us start with some simple ones. Whenever we enter beforeFlight we reset the points to 0. We can achieve this with an entry action in beforeFlight:
exported statemachine FlightAnalyzer initial = beforeFlight {
state beforeFlight {
entry { points = 0; }
} state beforeFlight
}
There are additional rules for taking off and landing, as well as conditions for crashing. The points awarded are defined as constants (for example, #constant TAKEOFF = 100;). We also make use of the physical units extension (see Section Units) and annotate the speed and altitude with the appropriate units.
The complete set of transitions is as follows:
exported statemachine FlightAnalyzer initial = beforeFlight {
state beforeFlight {
entry { points = 0; }
on next [tp.alt > 0 m] -> airborne
exit { points += TAKEOFF; }
} state beforeFlight
state airborne {
on next [tp.alt == 0 m && tp.speed == 0 mps] -> crashed
on next [tp.alt == 0 m && tp.speed > 0 mps] -> landing
on next [tp.speed > 200 mps && tp.alt > 0 m] -> airborne { points += VERY_HIGH_SPEED; }
on next [tp.speed > 100 mps && tp.speed <= 200 mps && tp.alt > 0 m] -> airborne { points += HIGH_SPEED; }
on reset [ ] -> beforeFlight
} state airborne
state landing {
on next [tp.speed == 0 mps] -> landed
on next [tp.speed > 0 mps] -> landing { points--; }
on reset [ ] -> beforeFlight
} state landing
state landed {
entry { points += LANDING; }
on reset [ ] -> beforeFlight
} state landed
state crashed {
entry { send crashNotification(); }
} state crashed
}
Note that the transitions are checked in the order of their appearance in the state machine; if several of them are ready to fire (based on the received event and the evaluation of the guard conditions), the first one is picked. This kind of nondeterminism is usually unwanted, and mbeddr provides support for the verification of state machines; you can read more about this in Section Checking State Machines.
Junction states (influenced by Simulink) can also be created in mbeddr state machines. A junction state can only contain epsilon-transitions, meaning these transitions are immediately ready to fire when the state is entered; they don't need a triggering event. Having multiple epsilon-transitions clearly introduces nondeterminism, so one typically specifies guards for these transitions. The following example uses the points variable to select the transition that should be taken: the junction state makes a decision based on the points and immediately fires.
exported statemachine FlightAnalyzer initial = beforeFlight {
junction NextRound {
[points > 100] -> airborne
[points <= 100] -> beforeFlight
} junction NextRound
}
Junctions are essentially branching points in a state machine and help modularize complex guards (especially if several guards in one state have common subexpressions).
So how do we deal with the crashed state? Assume this flight analyzer runs on a server, analyzing flights that are submitted via the web. If we detect a crash, we want to send notifications or perform other kinds of error handling. In any case, this would involve the invocation of some external code. This can be performed in two ways:
The first one is to simply invoke a C function from an entry or exit action. The alternative, which is more suitable for formal analysis (as we will see below and in Section Checking State Machines), involves out events: from the entry action we send an out event, which we have defined in the state machine. The following code example shows what the latter looks like.
exported statemachine FlightAnalyzer initial = beforeFlight {
out event crashNotification() => raiseAlarm
state crashed {
entry { send crashNotification(); }
} state crashed
}
void raiseAlarm() {
//invoke some kind of real-world thingy
//that reacts suitably to a detected crash
} raiseAlarm (function)
We create an out event called crashNotification (which will be sent when we enter the crashed state). We then specify a binding to the out event; the binding is part of the out event definition: simply add the name of the function as the target of the arrow (in the example this is the raiseAlarm function).
The benefit of this approach compared to the previous one is that formal verification can check whether the notification was sent at all during the execution of the state machine. The effect is the best of both worlds: in the generated code we do call the raiseAlarm function, but on the state machine level we have abstracted the implementation from the intent. See Section Checking State Machines for a discussion of state machine verification.
Let us write some test code that interacts with a state machine. A meaningful test requires a whole lot of trackpoints, so we create helper functions for building them. These in turn need malloc and free, so we first create an additional external module that represents stdlib.h:
stdlib_stub
// contents are exported by default
model com.mbeddr.tutorial.documentation.code imports nothing
void* malloc(size_t size);
void free(void* ptr);
resources header: <stdlib.h>
We can now create a helper function that creates a new Trackpoint based on an altitude and speed passed in as arguments:
exported Trackpoint* makeTP(int16 alt, int16 speed) {
static int8 trackpointCounter = 0;
trackpointCounter++;
Trackpoint* tp = ((Trackpoint*) malloc(sizeof[Trackpoint]));
tp.id = trackpointCounter;
tp.time = trackpointCounter s;
tp.alt = alt m;
tp.speed = speed mps;
return tp;
} makeTP (function)
We can now start writing (and running!) the test. We first create an instance of the state machine (state machines act as types and must be instantiated). We then initialize the state machine by using the init
operation:
exported testcase testFlightAnalyzer {
FlightAnalyzer f;
f.init;
...
} testFlightAnalyzer(test case)
Initially we should be in the beforeFlight state. We can check this with an assertion:
assert(0) f.isInState(beforeFlight);
We also want to make sure that the value of points is zero initially. Since we have declared the points variable to be readable above, we can write:
assert(1) f.points == 0;
Let us now create the first trackpoint and pass it in. This one has speed, but no altitude, so we are still in the take-off run. We expect to remain in the beforeFlight state and to still have 0 points. Notice how we use the trigger operation on the state machine instance; it takes the event as well as its arguments (if any):
f.trigger(next|makeTP(0, 20));
assert(2) f.isInState(beforeFlight) && f.points == 0;
Now we lift off by setting the alt to 100 meters:
f.trigger(next|makeTP(100, 100));
assert(3) f.isInState(airborne) && f.points == 100;
So as you can see, it is easy to interact with a state machine from regular C code. For testing, there is special support that checks whether the state machine transitions to the desired state when a specific event is triggered. Here is the complete test case (note that you can use the test statemachine construct only within test cases):
exported testcase testFlightAnalyzer {
FlightAnalyzer f;
f.init;
assert(0) f.isInState(beforeFlight);
assert(1) f.points == 0;
f.trigger(next|makeTP(0, 20));
assert(2) f.isInState(beforeFlight) && f.points == 0;
f.trigger(next|makeTP(100, 100));
assert(3) f.isInState(airborne) && f.points == 100;
test statemachine f {
next(makeTP(200, 100)) ➔ airborne
next(makeTP(300, 150)) ➔ airborne
next(makeTP(0, 90)) ➔ landing
next(makeTP(0, 0)) ➔ landed
}
assert-equals(4) f.points == 200;
} testFlightAnalyzer(test case)
You may have noticed that the helper function allocates the new Trackpoints on the heap without ever releasing the memory. To solve this, you could simply call free on these newly created structures after the test statement has been executed, or allocate the trackpoints on the stack.
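For example, a trigger could be rewritten as follows (a sketch; free comes from the stdlib_stub module shown above):
Trackpoint* tp = makeTP(0, 20);
f.trigger(next|tp);
free(tp);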
State machines can also be hierarchical. This means that a state may contain essentially a sub-state machine. The following piece of code shows an example. It was derived from the previous flight judgement example, but the three flying, landing and landed states are now nested inside the airborne composite state. Composite states can be created in the state machine by typing composite state
.
exported statemachine HierarchicalFlightAnalyzer initial = beforeFlight {
macro stopped(next) = tp.speed == 0 mps
macro onTheGround(next) = tp.alt == 0 m
in event next(Trackpoint* tp) <no binding>
in event reset() <no binding>
out event crashNotification() => raiseAlarm
readable var int16 points = 0
state beforeFlight {
entry { points = 0; }
on next [tp.alt > 0 m] -> airborne
exit { points += TAKEOFF; }
} state beforeFlight
composite state airborne initial = flying {
on reset [ ] -> beforeFlight { points = 0; }
on next [onTheGround && stopped] -> crashed
state flying (airborne.flying) {
on next [onTheGround && tp.speed > 0 mps] -> landing
on next [tp.speed > 200 mps] -> flying { points += VERY_HIGH_SPEED; }
on next [tp.speed > 100 mps] -> flying { points += HIGH_SPEED; }
} state flying
state landing (airborne.landing) {
on next [stopped] -> landed
on next [ ] -> landing { points--; }
} state landing
state landed (airborne.landed) {
entry { points += LANDING; }
} state landed
} state airborne
state crashed {
entry { send crashNotification(); }
} state crashed
}
Here are the semantics: if a transition from a state A crosses the boundary of a composite state B, the actions happen in the following order: exit actions of A, exit actions of B, the transition action, and the entry action of the transition's target (which is outside of the composite state).
State machines can also be rendered and edited as a table: the events become the column headers, and the states become the row headers. Fig. 7.3.5-A shows an example. You can switch projection modes in the usual way (see Section Projection Modes).
State machines can also be visualized with PlantUML (a graphical editor will be available soon). The visualization can be generated from the context menu of the state machine. You can use either the statechart or the statechart (short) visualization: the former displays all information for the transitions, while the latter only prints the name of the event that triggers the given transition.
Fig. 7.3.6-A shows an example visualization for the hierarchical state machine that was described in the previous section.
Let us now introduce components to further structure the system. We start by factoring the Trackpoint
data structure into a separate module and export it to make it accessible from importing modules.
exported struct Trackpoint {
int8 id;
int8/s/ time;
int8/m/ x;
int8/m/ y;
int16/m/ alt;
int16/mps/ speed;
};
We now define an interface that processes Trackpoint
s. To be able to do that we have to add the com.mbeddr.components
devkit to the current model. We can then enter a client-server interface in a new module Components. We use pointers for the trackpoints here to optimize performance. Note that you can just press * on the right side of Trackpoint
to make it a Trackpoint*
.
The interface has one operation, process. To enhance the semantic richness of the interface, we add preconditions and a postcondition; to do so, use an intention. The result expression is only available in postconditions and represents the result of the executed operation.
exported cs interface TrackpointProcessor {
Trackpoint* process(Trackpoint* p)
pre(0) p != null
pre(1) p.id != 0
pre(2) p.time != 0 s
post(3) result.id != 0
}
After you have added these contracts, you will get an error message on the interface. The problem is this: if a contract (pre- or postcondition) fails, the system will report a message (this reporting can be deactivated if you don't want any). However, for the program to work, you have to specify a message on the interface. We create a new message list and a message.
exported messagelist ContractMessages {
message contractFailed(int8 opID, int8 constraintID) ERROR: contract failed (active)
}
You can now open the inspector for the interface and reference this message from there:
There are still errors. The first one complains that the message list must be exported if the interface is exported. We fix this by exporting the message list (via an intention). The next error complains that the message needs to have two integer arguments to represent the operation and the pre- or postcondition. We change it accordingly (note that there are quick fixes available to adapt the signature in the required way).
We create a new component by typing component
. We call it Nuller
. It has one provided port called processor
that provides the TrackpointProcessor interface.
exported component Nuller extends nothing {
provides TrackpointProcessor processor
} component Nuller
After you add only the provided port, you get an error that complains that the component needs to implement the operations defined by the port's interface; we can get those automatically generated by using a quick fix from the intentions menu on the provided port. This gets us the following:
exported component Nuller extends nothing {
provides TrackpointProcessor processor
Trackpoint* processor_process(Trackpoint* p) <= op processor.process {
p.alt = 0 m;
return p;
} runnable processor_process
} component Nuller
The processor_process runnable is triggered by an incoming invocation of the process operation defined in the TrackpointProcessor interface. The Nuller simply sets the altitude to zero.
Let us now write a simple test case to check this component. To do that, we first have to create an instance of Nuller. We create an instance configuration that defines exactly one instance of this component. We also add an adapter; an adapter makes a provided port of a component instance (nuller.processor) available to a regular C program under the specified name n:
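instances nullerInstancesFailing {
instance Nuller nuller
adapt n -> nuller.processor
}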
Now we can write a test case that accesses the n
adapter -- and through it, the processor
port of the Nuller
component instance nuller
. We create a new Trackpoint
, using 0 as the id
-- intended to trigger a contract violation (remember pre(1) p.id != 0
). To enter the &tp
just enter a &
, followed by tp
.
section testNullerFailing {
instances nullerInstancesFailing {
instance Nuller nuller
adapt n -> nuller.processor
}
exported testcase testNullerFailing {
initialize nullerInstancesFailing;
Trackpoint tp = {
id = 0,
time = 0 s,
alt = 1000 m
};
n.process(&tp);
assert(0) tp.alt == 0 m;
} testNullerFailing(test case)
} section testNullerFailing
Before we can run this, we have to make sure that the instances are initialized (cf. the warning you get on them). We do this right at the beginning of the test case, as shown above. We then create a trackpoint and assert that it is correctly nulled by the Nuller.
To make the system work, you have to import the Components
module into the Main module so you can call the testNullerFailing test case from the test
expression in Main
. In the build configuration, you have to add the missing modules to the executable (using the quick fix). Finally, also in the build configuration, you have to add the components
configuration item:
Configuration Items:
reporting: printf (add labels false)
physical units (config = Units Declarations (mbeddr.tutorial.main.m1))
components: no middleware
wire statically: false
You can now rebuild and run. As a result, you'll get contract failures:
./MbeddrTutorial
$$runningTest: running test () @FunctionPointers:test_testProcessing:0#767515563077315487
$$runningTest: running test () @Components:test_testNuller:0#767515563077315487
$$contractFailed: contract failed (op=0, pc=1) @Components:null:-1#1731059994647588232
$$contractFailed: contract failed (op=0, pc=2) @Components:null:-1#1731059994647588253
We can fix these problems by changing the test data to conform to the contract:
section testNullerOK {
instances nullerInstancesOK {
instance Nuller nuller
adapt n -> nuller.processor
}
exported testcase testNullerOK {
initialize nullerInstancesOK;
Trackpoint tp = {
id = 10,
time = 10 s,
alt = 100 m
};
n.process(&tp);
assert(0) tp.alt == 0 m;
} testNullerOK(test case)
} section testNullerOK
Let us provoke another contract violation by returning, from the implementation in the Nuller component, a Trackpoint whose id is 0.
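The modified runnable might look like this (a sketch):
Trackpoint* processor_process(Trackpoint* p) <= op processor.process {
p.alt = 0 m;
p.id = 0; // violates post(3) result.id != 0
return p;
} runnable processor_process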
Running it again provokes another contract failure. Notice how the contracts are specified on the interface, but they are checked for each component implementing the interface. There is no way for an implementation to violate the interface contract without the respective error being reported!
[ToDo: Need to fill in the static verification part here. ]
Let us look at interactions between components. We create a new interface, TrackpointStore1. It can store and return trackpoints 1. Here is the basic interface:
exported cs interface TrackpointStore1 {
void store(Trackpoint* tp)
Trackpoint* get()
Trackpoint* take()
query boolean isEmpty()
}
Let us again think about the semantics: you shouldn't be able to get or take stuff from the store if it is empty; you should not put stuff into it when it is full; etc. These things can be expressed as pre- and postconditions. The following should be pretty self-explanatory. The only new thing is the query operation. Queries can be used from inside pre- and postconditions, but cannot modify state 2.
exported cs interface TrackpointStore1 {
void store(Trackpoint* tp)
pre(0) isEmpty()
pre(1) tp != null
post(2) !isEmpty()
Trackpoint* get()
pre(0) !isEmpty()
Trackpoint* take()
pre(0) !isEmpty()
post(1) result != null
post(2) isEmpty()
query boolean isEmpty()
}
These pre- and postconditions mostly express a valid sequence of the operation calls: you have to call store
before you can call get
, etc. This can be expressed directly with protocols, as implemented in TrackpointStore2:
exported cs interface TrackpointStore2 {
void store(Trackpoint* tp)
protocol init(0) -> new full(1)
Trackpoint* get()
protocol full -> full
Trackpoint* take()
post(0) result != null
protocol full -> init(0)
query boolean isEmpty()
}
You can add a new protocol using the respective intention. The protocol is essentially a state machine: on each operation you can specify the transition from the old to the new state. There is one special state, called initial. On a transition you can either jump into an already existing state, or create a new state and directly move into it. For example, on the store operation we transition from the initial state into the newly created full state. The get operation can then make use of the previously created full state and does not create a new one. It is also worth mentioning that you can reset the protocol by transitioning into the initial state again (as done in take).
The two interfaces are essentially equivalent, and both are checked at runtime and lead to errors if the contract is violated.
We can now implement a component that provides this interface: InMemoryStorage. Most of the following code should be easy to understand based on what we have discussed so far. There are two new things. First, there is a field Trackpoint* storedTP; that represents component state. Second, there is an on-init runnable: this is essentially a constructor that is executed when an instance is created.
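Its code is elided in the module overview below; a sketch of what the implementation might look like (the runnable bodies are our assumptions, derived from the contract of TrackpointStore1):
exported component InMemoryStorage extends nothing {
provides TrackpointStore1 store
Trackpoint* storedTP;
void init() <= on init {
storedTP = null;
} runnable init
void store_store(Trackpoint* tp) <= op store.store {
storedTP = tp;
} runnable store_store
Trackpoint* store_get() <= op store.get {
return storedTP;
} runnable store_get
Trackpoint* store_take() <= op store.take {
Trackpoint* tmp = storedTP;
storedTP = null;
return tmp;
} runnable store_take
boolean store_isEmpty() <= op store.isEmpty {
return storedTP == null;
} runnable store_isEmpty
} component InMemoryStorage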
To keep our implementation module Components well structured we can use sections. A section
is a named part of the implementation module that has no semantic effect beyond that. Sections can be collapsed.
module Components imports DataStructures {
exported messagelist ContractMessages {...}
section processor {...}
section store {
exported cs interface TrackpointStore1 {
...
}
exported cs interface TrackpointStore2 {
...
}
exported component InMemoryStorage extends nothing {
...
}
}
instances nullerInstances {...}
test case testNuller {...}
instances interpolatorInstances {...}
exported test case testInterpolator { ... }
}
We can now implement a second processor, the Interpolator. For subsequent calls of process, it computes the average of the last two speeds of the passed trackpoints. Let us start with the test case. Note how p2 has its speed changed based on the original speeds of p1 and p2.
section testInterpolator {
instances interpolatorInstances {
instance InMemoryStorage store
instance Interpolator interpolator(divident = 10)
connect interpolator.store to store.store
adapt ip -> interpolator.processor
}
exported testcase testInterpolator {
initialize interpolatorInstances;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 1,
time = 1 s,
speed = 20 mps
};
ip.process(&p1);
assert(0) p1.speed == 10 mps;
ip.process(&p2);
assert-equals(1) p2.speed == 3 mps;
} testInterpolator(test case)
} section testInterpolator
Let us look at the implementation of the Interpolator:
exported component Interpolator extends nothing {
provides TrackpointProcessor processor
requires TrackpointStore1 store
init int8 divident;
Trackpoint* processor_process(Trackpoint* p) <= op processor.process {
if (store.isEmpty()) {
store.store(p);
return p;
} else {
Trackpoint* old = store.take();
p.speed = (p.speed + old.speed) / divident;
store.store(p);
return p;
}
} runnable processor_process
} component Interpolator
A few things are worth mentioning. First, the component requires
another interface, TrackpointStore1
. Any component that implements this interface can be used to fulfil this requirement (we'll discuss how, below). Second, we use an init
field. This is a regular field from the perspective of the component (i.e. it can be accessed from within the implementation), but it is special in that a value for it has to be supplied when the component is instantiated. Third, this example shows how to call operations on required ports (store.store(p);
). The only remaining step before running the test is to define the instances:
instances interpolatorInstances {
instance InMemoryStorage store
instance Interpolator interpolator(divident = 10)
connect interpolator.store to store.store
adapt ip -> interpolator.processor
}
A few interesting things are worth noting. First, notice how we pass in a value for the init field divident as we define an instance of Interpolator. Second, we use connect to connect the required port store of the interpolator instance to the provided port store of the store instance. If you don't do this, you will get an error on the interpolator instance, since it requires this port to be connected (there are also optional required ports, which may remain unconnected, and multiple required ports, which can be connected to more than one provided port). Finally, the provided port processor is made available to other code under the name ip. You can run the test case now; on our machine it works successfully :-)
To better understand the connections between component instances, there is also a graphical editor available. To switch to the graphical wiring, you can select the respective option from the menu.
1: Sure, it is completely overdone to separate this out into a separate interface/component, but for the sake of the tutorial it makes sense.
2: Currently this is not yet checked. But it will be.
Let us assume we want to test whether the Interpolator works correctly against the TrackpointStore interface. Of course, since we have already described the interface contract semantically, we would find out quickly if the Interpolator behaved badly. However, we can make such a test more explicit. Let us revisit the test from above:
exported testcase testInterpolator {
initialize interpolatorInstances;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 1,
time = 1 s,
speed = 20 mps
};
ip.process(&p1);
assert(0) p1.speed == 10 mps;
ip.process(&p2);
assert-equals(1) p2.speed == 3 mps;
} testInterpolator(test case)
In this test, we expect the following: when we call process
first, the store is still empty, so the interpolator stores a new trackpoint. When we call process
again, we expect the interpolator to call take
and then store
. In both cases we expect isEmpty
to be called first.
We can test for this behavior explicitly via a mock. A mock is a component that specifies the behavior it expects to see on a provided port during a specific test case. The crucial point about mocks is that a mock implements each operation invocation separately (the steps below), whereas a regular component or even a stub describes each operation with one implementation. This makes a mock implementation much simpler: it doesn't have to replicate the algorithmic implementation of the real component. Let us look at the implementation:
mock component StorageMock report messages: true {
provides TrackpointStore1 store
Trackpoint* lastTP;
total no. of calls is 5
sequence {
step 0: store.isEmpty return true;
step 1: store.store {
assert 0: parameter tp: tp != null
}
do { lastTP = tp; }
step 2: store.isEmpty return false;
step 3: store.take return lastTP;
step 4: store.store
}
}
This mock component expresses that we expect 5 calls in total. Then we describe the sequence of calls we expect. The first one must be a call to isEmpty, and we return true. Then we expect a store, and, for the sake of the example, we check that tp is not null. We also store the tp parameter in a field lastTP so we can return it later (you can add the parameter assertions and the do body with intentions). We then expect another isEmpty query, which we now answer with false. At this point we expect a call to take, and finally another call to store.
Note that pre- and postconditions are not checked in mock components, because their checking may interfere with the expectations. Also, we have slightly changed the test case: we don't assert anything about the result of the process calls:
exported testcase testInterpolatorWithMock {
initialize interpolatorInstancesWithMock;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 2,
time = 2 s,
speed = 20 mps
};
ipMock.process(&p1);
ipMock.process(&p2);
} testInterpolatorWithMock(test case)
Two more steps are required for this test to work. The first one is the instances and the wiring. Notice how we now connect the interpolator with the mock:
instances interpolatorInstancesWithMock {
instance StorageMock storeMock
instance Interpolator ip(divident = 2)
connect ip.store to storeMock.store
adapt ipMock -> ip.processor
}
The second thing is the test case itself. Obviously, we want the test case to fail if the mock saw something other than what it expects on its port. We can achieve this by using the validatemock statement in the test:
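validatemock(0) interpolatorInstancesWithMock:storeMock;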
Check out the complete testInterpolator test case.
So far we have always used client/server interfaces to communicate between components. These essentially define a set of operations, plus contracts, that can be invoked in a client/server style. However, mbeddr comes with a second kind of interface, the sender/receiver interface. In this case, the providing and requiring components exchange data items. To demonstrate how they work, let us explore another aspect of the application around Trackpoint
s (the example is in the ComponentsSRI implementation module). The data has to be collected in the airplane. Let us assume we have the following components:
A GPS that determines the current position
A SpeedIndicator that measures the current speed
A FlightRecorder that records Trackpoints that capture the progress of the flight
All these components are time-triggered, i.e. it is assumed that they execute in regular intervals, by some kind of scheduler. They all provide an interface Timed
that provides an operation tick
that is called by the scheduler. So far, these components don't exchange any data yet: sender/receiver interfaces will be used for that later1. Here is the code so far:
module ComponentsSRI imports DataStructures {
exported cs interface Timed {
void tick()
}
exported component GPS extends nothing {
provides Timed timed
void timed_tick() <= op timed.tick {}
}
exported component SpeedIndicator extends nothing {
provides Timed timed
void timed_tick() <= op timed.tick {}
}
exported component FlightRecorder extends nothing {
provides Timed timed
void timed_tick() <= op timed.tick {}
}
}
Let's now look at the data exchange, focusing on the position first. Here is a sender/receiver interface, the position provider. The interface declares a set of data elements, in this case with physical units:
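The listing is not reproduced here; a sketch of what it might look like (the interface keyword and the exact data element types are assumptions, derived from the Trackpoint struct and the GPS code below):
exported sr interface PositionProvider {
int8/m/ x
int8/m/ y
int16/m/ alt
}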
The GPS is supposed to provide this data, so we give it a provided port with this interface:
exported component GPS extends nothing {
provides PositionProvider pos
provides Timed timed
void init() <= on init {
pos.x = 0 m;
pos.y = 0 m;
pos.alt = 0 m;
} runnable init
void timed_tick() <= op timed.tick {
pos.x++;
pos.y++;
} runnable timed_tick
} component GPS
Note how from within component runnables we can use expressions to assign to the data values defined in the interface as if they were normal fields. Let us now look at the flight recorder. It is supposed to read the data written by the GPS (and the same with the speed indicator):
exported component FlightRecorder extends nothing {
provides Timed timed
requires PositionProvider pp
requires SpeedProvider sp
Trackpoint[1000] recordedFlight;
uint16 count = 0;
void timed_tick() <= op timed.tick {
with (recordedFlight[count]) {
id = ((int8) count);
x = pp.x;
y = pp.y;
alt = pp.alt;
speed = sp.speed;
}
count++;
} runnable timed_tick
Trackpoint getFlightNo(uint16 no) <= no trigger {
return recordedFlight[no];
} runnable getFlightNo
} component FlightRecorder
Inside the with
-statement, we can access the data acquired via the pp
and sp
required ports. What distinguishes this from global variables, of course, is that the component instances still have to be wired: required ports have to be connected to provided ports, in this case, defining access to the data items:
instances instances {
instance GPS gps_comp
instance SpeedIndicator indicator
instance FlightRecorder recorder
connect recorder.sp to indicator.speed
connect recorder.pp to gps_comp.pos
adapt gps -> gps_comp.timed
adapt si -> indicator.timed
adapt rec -> recorder.timed
}
This part of the tutorial only provided a few examples of interfaces and components. For a full discussion, see Section Interfaces and Components.
1: Note that this time-triggered architecture is very widespread in embedded software. In future releases of mbeddr we will provide direct support for time-triggered runnables, so you don't have to use an explicit interface such as Timed.
Components are great for structuring your code into reusable pieces. But typically, when your systems grow, you want to reuse whole subsystems. To the outside they look exactly like any other component, but on the inside they are again wired up from many components. This can be achieved with mbeddr's composite components.
A composite component, like any other component, can require and provide interfaces from/to its environment. In addition, it has an extra code block called internal instances for instantiating other components and wiring them up.
Let's build an imaginary GPS parser that can be reused. The first interface we create is a client-server interface which accepts single characters. Those characters are consumed by a line parser which tokenizes the stream of characters into whole lines and at the same time figures out which command was sent by the client. Each line is then fed into another component which itself delegates the command to a command handler. This command handler parses the command and transforms the data into a struct that can be easily consumed by a client. The client of the GPS parser does not need to know how the parsing works internally, or that it is split up into separate components. The only relevant parts are that the client provides the system with the stream of characters and consumes the events produced by the GPS parser. To achieve this, we create a composite component which provides the CharParser and requires the NmeaRmcEventProcessor. In the wiring we instantiate the respective components and wire them up. Finally, we delegate the outside required and provided ports. The whole code for the composite component is shown below. Don't forget to add the composite components config item to the build configuration.
exported composite component NmeaStack {
provides CharParser charParser
requires NmeaRmcEventProcessor rmcEventProcessor
internal instances {
instance NmeaLineParserImpl lineParser
instance NmeaSentenceHandlerImpl sentenceHandler
instance NmeaRmcHandlerImpl rmcHandler
connect lineParser.sentenceHandler to sentenceHandler.sentenceHandler
connect multi sentenceHandler.commandHandlers to rmcHandler.nmeaCommandHandler
delegate charParser to lineParser.charParser
delegate rmcEventProcessor to rmcHandler.nmeaRmcEventProcessor
}
}
In a few rare cases you want to store a required port in a local variable as well (e.g., when iterating over a bunch of multiports and filtering out some of them). In even rarer cases you may want to pass a required port as an argument to a function or runnable you are calling, or store it in a global variable. To address those use cases, mbeddr introduces a type called interface. In order to use an interface in such an interface type, the interface has to be specially marked with the can be used as type annotation.
exported cs interface NmeaCommandHandler can be used as type {
boolean canHandle(string command)
void handle(int8[MAX_NMEA_WORD_COUNT]* words, uint8 wordCount)
}
void sentenceHandler_parseSentence(int8[MAX_NMEA_WORD_COUNT]* words, uint8 wordCount) <= op sentenceHandler.parseSentence {
interface<NmeaCommandHandler> handlerToCall = null;
for (i ++ in [0..MAX_NMEA_COMMAND_HANDLER_COUNT[) {
if (commandHandlers[i] != null && commandHandlers[i].canHandle(words[0])) {
handlerToCall = commandHandlers[i];
} if
} for
handlerToCall.handle(words, wordCount);
} runnable sentenceHandler_parseSentence
The example above would actually not need the interface type, since the handle runnable could be called right in place.
mbeddr's diagramming capabilities are put to use in two ways in the context of components: component/interface dependencies and instance diagrams.
* Component/Interface Dependencies Select a component or an interface and execute the Visualize action from the context menu (or press Ctrl+Alt+V). Fig. 7.4.9-A shows the result.
* Instance/Wiring Diagrams You can also select an instance configuration and visualize it. You'll get a diagram that shows component instances and their connections (Fig. 7.4.9-B).
mbeddr comes with support for statically verifying the contracts of components. This verification is based on C-level verification with CBMC (for a more detailed discussion of formal verification with mbeddr, see Section Functional Verification). Let's set up our tutorial to use verification and verify the InMemoryStorage component. To do so, first add the com.mbeddr.analyses.componentcontracts devkit to the model that contains the components code. Then use an intention to add the verifiable flag to the component. To make the verification work, you will have to provide some more information in the inspector:
entry point: verification
loops unwinding: 2
unwinding assertions: false
Let us look at the three parameters you have to set here: The first one determines from where the program is "executed". The entry point should be selected to be "close" to the to-be-verified component (if you verify the whole system, then, at least for big systems, this will take long). In our case we use a special test case verification
, which looks as follows:
instances verificationInstances {
instance Interpolator interpol(divident = 2)
connect interpol.store to store.store
instance InMemoryStorage store
adapt verificationInterpolator -> interpol.processor
}
exported testcase verification {
initialize verificationInstances;
Trackpoint p1 = {
id = 1,
time = 1 s,
speed = 10 mps
};
Trackpoint p2 = {
id = 1,
time = 1 s,
speed = 20 mps
};
verificationInterpolator.process(&p1);
verificationInterpolator.process(&p2);
} verification(test case)
The second line in the configuration determines how many times loops are unwound during verification. You should start with low numbers to keep verification times low. Finally, the third parameter determines whether the verification should fail in case it cannot be proven that the loop unwinding bound is sufficient. You can now run the verification by selecting the component and executing the respective action. After a few seconds, you'll get a result table that reports everything as ok (see Fig. 7.4.10-A): every precondition of every operation in every provided port has been proven to be correct.
Let us introduce an error. The following version of the trackpointStore_store
runnable does not actually store the trackpoint. This violates the postcondition, which claims that storedTP != null
. Note that for the analysis to work, all paths through the body of a function (or a runnable) must end with a return (you'll get an in-IDE error if you don't do this).
void trackpointStore_store(Trackpoint* tp) <= op store.store {
return;
}
Let us rerun the verification. Now we get an error, as shown in Fig. 7.4.10-B. Note how the lower part of the table now shows the execution trace that led to the contract violation. You should check the Call/Return
checkbox to filter the trace so it shows only call/return granularity, not every statement. You can also double-click on the trace elements to select the particular program element in the code.
Let us implement another interface, one that lets us judge flights (we do this in a new section in the Components
module). The idea is that clients add trackpoints, and the FlightJudger
computes some kind of score from it (consider some kind of biking/flying competition as a context):
exported cs interface FlightJudger {
void reset()
void addTrackpoint(Trackpoint* tp)
int16 getResult()
}
Here is the basic implementation of a component that provides this interface.
exported component Judge extends nothing {
provides FlightJudger judger
int16 points = 0;
void judger_reset() <= op judger.reset {
points = 0;
}
void judger_addTrackpoint(Trackpoint* tp) <= op judger.addTrackpoint {
points += 0; // to be changed
}
int16 judger_getResult() <= op judger.getResult {
return points;
}
}
Of course, the implementation of addTrackpoint that just adds 0 to the points doesn't make much sense yet. The number of points added should depend on how fast and how high the plane (or whatever) was going. The following screenshot shows an embedded decision table that computes the points (notice how we mix the components language, decision tables and units in one integrated program):
You can create the decision table yourself by typing the keyword dectab, which instantiates the concept. To add a column, hit Enter in one of the cells. To add a row, move your cursor to the left side of the table (between the default return value and the table) and hit Enter. Now, let us write a test. Of course, we first need an instance of Judge; a minimal instance configuration might look like this (the instance name judge is an assumption; the adapter j is the name used by the test case):
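instances instancesJudging {
instance Judge judge
adapt j -> judge.judger
}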
Below is the test case. It contains two things you may not have seen before. First, there is a second form of the for statement that iterates over a range of values; the range bounds can be inclusive or exclusive (changed via an intention). In the example we iterate from 0 to 4, since 5 is excluded. Second, the introduceunit construct can be used to "sneak" a unit into a regular value. This is useful for interacting with non-unit-aware (library) code. Note how the expression for speed expresses the same thing without introduceunit in this case. Any expression can be surrounded by introduceunit via an intention.
exported testcase testJudging {
initialize instancesJudging;
j.reset();
Trackpoint[5] points;
for (i ++ in [0..5[) {
points[i].id = ((int8) i);
points[i].alt = 1810 + 100 * i m;
points[i].speed = 130 mps + 10 mps * i;
j.addTrackpoint(&points[i]);
} for
assert-equals(0) j.getResult() == 0 + 0 + 20 + 20 + 20;
} testJudging(test case)
So far so good. The test case is fine. However, as with many tests, this one only tests part of the overall functionality. And in fact, you may have noticed that an error lurks inside our decision table: for 2000 m of altitude, the table is non-deterministic: both conditions in the column header work! We could find this problem with more tests, or we could use formal verification as described in Section Checking Decision Tables.
mbeddr ships with a language for capturing, documenting and maintaining requirements. To use it, add the com.mbeddr.reqtrace devkit to your model. In this documentation we refer to a number of requirements modules, in particular FlightJudgementRules, ArchitecturalComponents and UseCases.
Requirements are contained in requirements modules. An example is FlightJudgementRules. Like C implementation modules, requirements modules can import other modules. The (exported) contents of the imported modules then become visible in the current module.
Each requirement is described with a number of characteristics; an example is shown in Fig. 8.1.2-A. Each requirement has a unique ID (InitialNoPoints
), a summary (Initially you have no points.
), a requirements kind (/functional
), a number of tags (none here), a detailed textual description (only one line given here), plus optional additional data (here we show a workpackage). The number (1 in the example) is automatically determined by the tree structure of the requirements. In order to reference a requirement, the unique ID is used, not the number. We discuss some of these things below.
Hierarchy Requirements are stored in a hierarchy. Each requirement can have children, as can be seen from InFlightPoints. The semantics of hierarchical nesting are not rigorously defined, but usually assumed to represent refinement (i.e., additional details).
Kind Each requirement has a kind. The kind represents the nature of the requirement, and existing kinds include functional
, non-functional
and technical
. Language extension can be used to define arbitrary additional kinds. In addition to being a label (to sort or query requirements), the kind can also be used to enforce the presence of certain additional requirements data nodes (discussed next).
Additional Data In addition to the description, a requirement can have data objects. As usual, these are extensible and can represent any additional structure. Examples include the (demo) business rules in PointForATrackpoint, tables as in priceDep or architectural components as shown in Driver. The requirements kind described above can include constraints that enforce certain kinds of data (e.g., a timing
requirement may require a timing spec
data object).
Cross References An important aspect of requirements are their relationships: a requirement can establish relations to other requirements. The requirement shown in Fig. 8.1.2-B shows both kinds of cross references. First, cross references can be added to the prose description using the @req
word (press Ctrl+Space anywhere in the prose block to insert such special words). The other alternative is to use special data objects (such as requires also and conflicts with). The former approach is a generic relationship; the latter ones are qualified. Of course, additional relationships can be created using language extension.
Tags Tags are similar to data objects, but they are "syntactically smaller". Existing tags include the requirements status (you can set it to draft, accepted, etc.), the estimated effort for implementing the requirement, as well as a general string tag represented by the @ sign.
The requirements language is intended to be used by (among others) non-programmers. They may not be familiar with pressing Ctrl+Space to enter new program elements. Hence, there is a special projection mode Editor Helper Buttons that projects buttons into a requirements module for adding child requirements, adding data, or changing the nesting hierarchy (see Section Projection Modes on how to switch projection modes).
The requirements visualization shows the upstream and downstream dependencies of a particular requirement. An example is shown in Fig. 8.1.3-A. In addition, several tree views are available that show the requirements hierarchy, the dependencies, and the up- and downstream dependencies of the requirements module (shown in Fig. 8.1.3-B).
A trace is a pointer from some implementation artifact to one or more requirements 1. To use traces, make sure you use the com.mbeddr.reqtrace
devkit in your models.
For an example of a trace, see the StateMachines module. It has traces attached to a number of C constants and to various parts of the state machine; an example is also shown in Fig. 8.2-A. Make sure you select a projection mode that lets you actually see the traces (see below).
Trace Kind Each trace has a trace kind associated with it. Trace kinds characterize the nature of the trace relationship. While the set of trace kinds is extensible, mbeddr ships with three default trace kinds: implements
, exemplifies
and tests
. Note that a trace kind can restrict the kinds of requirements it can trace to, and it can also restrict to which nodes the trace can be attached. While the default trace kinds have no such constraints, it is useful in general to enforce specific semantics through those constraints.
Projection Modes There are three different modes how traces can be shown (and they can be switched like all other projection modes, see Section Projection Modes):
Detailed Traces
: The trace kind and the trace target(s) are shown on the right side of the traced node (see Fig. 8.2-A).
Compact Traces
: Only a small [T]
is shown on the right side of the traced node (see Fig. 8.2-B). A tooltip shows the first of the trace's targets and the trace kind. The kind and targets can be changed in the inspector of the [T]
node.
No Traces
: The traces are not shown (even though, of course, they are still in the code).
To get a better feel for the projection modes we suggest you play around with them in the StateMachines module which contains traces to requirements.
To attach a trace to a given requirement, the requirements module that contains the target requirement must be imported into the client module (using the regular module imports). You can then use the Attach Trace annotation to attach a trace to any node. By default, the implements trace kind is used, and you can use regular code completion to select a target requirement. Note that if you use the Compact Traces projection mode, you will have to use the inspector to change the kind or the target requirement(s).
To remove a trace, you can simply use the Backspace key when the trace itself is selected.
Traces always point towards a requirement. However, you often want to see which artifacts trace to a given requirement, i.e., you want to reverse the direction. You can do this by using the customized Traces option in the find usages dialog (see Fig. 8.2.2-A). In the result, you will get a separate section with only the traces to the respective requirement. Fig. 8.2.2-B shows an example result. Section Find Usages explains the find usages facility in general.
The find usages facility discussed in the previous section provides an interactive means to find out which code locations trace to a requirement. But sometimes you want to get an overview. For this purpose, you can use the traces
assessment. Assessments are reports that query the model and list the results; they are explained in more detail in Section Assessments.
An example traces
assessment can be found in Traces. In the query, you have to select the requirements module for whose requirements you want to run the assessment. The results then show each requirement and the code locations from which it is traced.
1: Strictly speaking, it can also point to other traceable elements; mbeddr is extensible in this way. But by default, only requirements are supported as trace targets.
An assessment is essentially a report over a model: it performs a query and then reports the results. The queries are of course customizable; there are dozens of different assessments, and new ones can be developed using language engineering. The structure of the result data items also differs for each assessment. Check out the following assessments to get an impression:
Since assessments and their state are persisted, they can be used to assess, or audit, models over time. As we discuss below, it is easy to track changes to the result set over time.
Setting up an Assessment Assessments live in an Assessments
root. Once such a root is created, any number of assessments can be added. Each assessment has a name and a query. The query is a separate node that performs the query itself; it also may have parameters that can be set by the users. Queries are the primary extension point, new queries can be developed via language extension.
Updating and the Color Code Once a query is defined, you can execute it, either via an intention or via the context menu. Updating the assessment reruns the query. However, the old results are not replaced by the new ones. Instead, the system performs a kind of diff. By using this color coding scheme, users can always see which result items have been added during the latest update. This is very useful for continuing audits of models, because it is immediately obvious which result items may require attention (see next paragraph).
Checkboxing and Errors A particularly interesting use case for assessments is the detection and tracking of smells, style guide violations or other things that may have to be fixed or changed. The assessment can be used to find and report them. The user then goes through all of the result items and decides for each of them what to do:
To support the latter case, the must be ok
option can be set on an assessment. Then every result item gets its own checkbox. Checking this checkbox means that "this result item is ok", i.e., it should not be marked as an error (the colored vertical bar becomes blue in this case). It is also possible to hide all those items where the checkbox is checked in the result set by selecting the hide ok ones
option. Finally, an assessment that is marked as must be ok
results in a regular type system error if it has one or more result items that are not marked as ok.
Writing documentation for code is annoying, since you always have to copy code snippets into the document (as text or as a screenshot), make sure they are formatted nicely, and - most importantly - keep them up to date as the code changes. The mbeddr documentation language avoids this problem: it treats documentation just like code in MPS; the documentation language is just another language written in MPS. Consequently, it is trivial to refer to code, to make sure that the reference is refactored along with the code itself (in particular in case of renames), and, by using a few tricks, you can even "virtually embed" code into the documents.
The user guide you are reading right now is written with this language. Consequently, a good way to learn mbeddr's documentation language is to switch off Presentation Mode (via the menu) and take a look at how the user guide is built. We'll explain some basics in this chapter, but otherwise recommend that you just look at the user guide. In addition, you can also check out this video: http://www.youtube.com/watch?v=-kMrvreg6n0
We support several target platforms for the documentation language, among them xelatex. More detailed documentation is still [ToDo: ].
In this section we describe the most important abstractions used in the documentation language. Together with the user guide as an extensive example, you should be able to find your way around the language.
Structure Here are the most important structural concepts. Documents are divided into sections, which are created with the @sect word.
Paragraphs The paragraph is an abstract concept of which many subconcepts exist. They hold the meat of the document:
Words Inside text paragraphs, you can use various special words. Those that start with an @ are references; those that start with a \ are formatting options. They can be entered by pressing Ctrl+Space inside text blocks. The default text for references can be overridden in the inspector view.
The formatting words include \bold, \emph, \math, \key, \menu and \code. These can also be entered by selecting existing text and then pressing Ctrl+Alt+T to bring up the wrapper intentions menu.
References can point to sections (via @sect), and to figures, images or code embedded as images (via @fig).
Further reference words include @attachment and @node, as well as @language, @solution, @generator and @devkit for referring to the respective kinds of MPS modules.
Other special words include \url, \footnote and \todo.
Configuration and Resources Every document must refer to a Configuration node. It is mostly used in the context of LaTeX or HTML rendering. However, it also defines the default path where the visualizations store their temporary images. Most importantly, it defines the path prefixes where images (and other attachments) are searched. The path is always relative to the current solution's root directory. When defining image paragraphs or attachments, you use one of the paths defined in the Config as the path prefix and select an image "below" this path prefix. This is supported with code completion.
Building For consuming documents inside MPS using the presentation mode, no build is necessary. However, for generating LaTeX or HTML, a DocumentExport node is needed to configure the generation process. [ToDo: ]
Code reviews are an important ingredient of many development processes. The idea is that in addition to writing the code, the code is also reviewed by another person as a means of quality assurance and joint learning. To this end, it must be tracked which parts of the code are ready for review, which parts have been reviewed (and by whom), and which parts have changed since the review (and hence need to be re-reviewed). In mbeddr, we support this process with the following approach:
The code review data is shown with the review state annotation and a custom background color. Sometimes, however, you don't want to see this information in the code. Hence there are two projection modes: the review state and the background colors are only shown if you check the Review State option in the menu. Make sure you enable the display of the review state so you can follow the next subsections.
mbeddr supports several states as part of the review process:
new:
The code has just been written. No review-related information is stored in the code. For example, the FlightJudger component is in this state.
ready:
When the developer thinks the code is finished, it can be marked as ready
. The code gets a yellow background, and an annotation that states that the code is ready
. See Judge2 for an example.
reviewed:
Once a reviewer has reviewed the code, it is marked as reviewed
. The code gets a green background. Check out the node instancesJudging for an example.
raw:
After the code has been marked as ready
or reviewed
, it may be modified again. At this point, of course, the review is invalid. The code now becomes raw
(red background). This test case is an example: testJudging
When a piece of code is marked as ready
or reviewed
, we create a hash of the code structure. This hash is stored in the code review annotation, inside the code. By recalculating the hash and comparing it to the stored one, we can detect whether the code has been changed. However, because of the performance implications, we don't calculate this hash automatically. Instead, you have to use the corresponding intention on the reviewed node. Alternatively, you can reevaluate all review states from one single location; this is discussed below in Section Assessments.
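To illustrate the idea, here is a conceptual sketch of such a structural hash; this is not mbeddr's actual implementation, and the Node type is a made-up stand-in for the MPS AST:
#include <stdint.h>
/* hypothetical node type; mbeddr operates on the MPS AST, not on C structs */
typedef struct Node {
  const char* concept;       /* e.g. "IfStatement" */
  struct Node** children;
  int childCount;
} Node;
uint64_t hashNode(const Node* n) {
  uint64_t h = 14695981039346656037ULL;                    /* FNV-1a offset basis */
  for (const char* c = n->concept; *c; c++)
    h = (h ^ (uint64_t)(unsigned char)*c) * 1099511628211ULL;  /* hash the concept name */
  for (int i = 0; i < n->childCount; i++)
    h = (h ^ hashNode(n->children[i])) * 1099511628211ULL;     /* structural, not textual */
  return h;
}
/* the code becomes 'raw' again if hashNode(reviewedNode) != storedHash */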
There is an assessment (see Section Assessments for details about assessments) to show the global state of the code review. Fig. 8.5.4-A shows an example.
The query that shows the code review state is called code review summary
. In the query you can set the scope to a single chunk, a single model, or a model including its imports. The results show the state of the review for the various reviewable nodes (incl. the color code); the second column is clickable, so you can directly jump to the mentioned section of code.
Updating the assessment also re-evaluates all the hashes of the reviewed sections of code. So the assessment, in addition to showing an overview, is also the central place from which to reevaluate all review states.
mbeddr supports two kinds of variability: runtime and static. Runtime variability makes the decision about which variant to execute as the program runs. The binary contains the code for all options. Since the code must be aware of the variability, the underlying language must know about variability. Static variability makes the decision before program execution. In particular, the MPS generators remove all the program code that is not part of a particular variant. The binary is tailored, and the mechanism is generic -- the target language does not have to be aware of the variability. We discuss both approaches in this chapter.
A discussion of the trade-offs between static and runtime variability is beyond the scope of this user guide: it is not a tutorial on product line engineering in general, but only a tutorial on how to do it with mbeddr.
Both approaches have in common that in mbeddr, you first describe the available variability on an abstract level that is unrelated to the implementation artifacts that realize the variability. We use feature models for this aspect. A feature model describes the available variability (aka the configuration space) in a software system. Let us start by creating a feature model that describes variability for processing flights. To do so, add the com.mbeddr.cc.variability
devkit to your model and add a VariabilitySupport
node into your program. VariabilitySupport
nodes contain feature models as well as configurations. An example can be found in FlightVariability.
Feature Models Feature models define a configuration space. An example feature model is shown in Fig. 8.6.1-A. A feature is simply a named entity. Features can have children (subfeatures). A feature specifies a constraint over its subfeatures that determines how they can be selected. The following four tree constraints exist:
! (mandatory)
: this means that all child features are mandatory. In a valid configuration, all of them must be selected.
? (optional)
: this means that in any valid configuration of this feature model, any combination of the child features can be selected. In other words, all children are optional.
xor (exclusive)
: this means that exactly one subfeature must be selected in a valid configuration.
or (n-of-m)
: this means that one or more (but not zero) of the subfeatures must be selected in a valid configuration.
In addition, a feature model can also define derived features. These are essentially just macros, i.e., expressions over the existing feature tree. The features everything and empty are examples.
Features can also have attributes. For an example, see maxSpeed below. These are (more or less primitively) typed attribute declarations. In a valid configuration, all attributes of all selected features must have a value.
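To make the tree constraints and attributes concrete, here is a rough textual sketch of a feature model similar to the flight-processing one used below. Note that mbeddr edits feature models as trees (see Fig. 8.6.1-A), and the grouping feature speedLimit is our invention for illustration:
FlightProcessor ?                  // '?': all children are optional
├── nullify
└── speedLimit xor                 // 'xor': exactly one child must be selected
    ├── max100
    └── maxCustom [maxSpeed: int]  // feature with a typed attribute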
Configurations Once a feature model is defined, you can specify configurations or variants. A configuration has a name and selects any valid (in terms of the constraints discussed above) subset of the features defined in the referenced feature model. The example below has three configurations: cfgDoNothing, cfgNullifyOnly and cfgNullifyMaxAt200. Note how cfgNullifyMaxAt200 specifies a value of 200 for the attribute associated with the maxCustom feature. An example configuration is also shown in Fig. 8.6.1-B.
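Continuing the sketch above, the configuration cfgNullifyMaxAt200 might be rendered textually roughly like this (again, the concrete projected syntax differs; see Fig. 8.6.1-B for the real notation):
configuration cfgNullifyMaxAt200 configures FlightProcessor {
  selected: nullify, speedLimit, maxCustom   // a valid subset w.r.t. the tree constraints
  maxCustom.maxSpeed = 200                   // selected features' attributes need values
}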
Note that, in order for the mechanisms described below (static and runtime variability) to work, you need to make sure that the feature models on which the variability relies are actually imported into the respective implementation modules.
As mentioned above, runtime variability means that we will evaluate a configuration at runtime and based on the evaluation, make decisions about program execution. Take a look at the following piece of code:
Trackpoint processTrackpoint(fmconfig<FlightProcessor> cfg, Trackpoint tp) {
Trackpoint result;
variant<cfg> {
case (nullify && maxCustom) {
result = process_nullifyAlt(tp);
if (tp.speed > maxCustom.maxSpeed) {
result.speed = maxCustom.maxSpeed;
} if
}
case (nullify && max100) {
result = process_nullifyAlt(tp);
if (tp.speed > 100 mps) {
result.speed = 100 mps;
} if
}
case (nullify) { result = process_nullifyAlt(tp); }
default { result = process_doNothing(tp); }
}
return result;
} processTrackpoint (function)
Variant-dependent behavior This function takes two arguments, one of them is an fmconfig
. This data type holds a configuration for the specified feature model (FlightVariability in this case). Inside the function we use the variant
statement to branch the code based on the selected feature. The variant
statement works essentially like a switch
statement, but it "switches over" feature configurations. Note that the conditions in the case
parts can only access the features defined in the feature model referenced by the surrounding variant
statement.
Since variant
is a statement, it can only be used in statement context -- so it cannot be used to vary arbitrary behavior as expressed, for example, with state machines. Of course, additional similar language constructs could be built for other DSLs.
Note also that we can access the values of attributes associated with features. In the code above we access the maxSpeed attribute and use it as a normal expression. Note that only attributes of selected features can be accessed: for features that are not selected, no value is specified.
Storing Configurations in Programs In order to be able to change program behavior based on feature configurations at runtime, the configuration has to be stored in the program itself. The test case testRuntimeVar exemplifies this. Note how we declare a fmconfig
variable for a given feature model and then use the store config
statement to store a specific configuration into the variable. We then call the above-mentioned function, which uses the variant
statement to exploit the data.
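As a sketch of what such a test might contain (the exact projected syntax is best studied in testRuntimeVar; the rendering of store config and the initializer for tp are assumptions here):
void testRuntimeVariability() {
  fmconfig<FlightProcessor> cfg;            // holds a feature configuration at runtime
  store config cfgNullifyMaxAt200 in cfg;   // hypothetical rendering of 'store config'
  Trackpoint tp;
  tp.speed = 250 mps;
  Trackpoint result = processTrackpoint(cfg, tp);
  // with cfgNullifyMaxAt200 selected, result.speed is capped at maxSpeed (200)
}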
Static variability refers to modifying the code statically (before it is executed) based on configuration models. In mbeddr this means that various transformations handle the variability. In the following paragraphs we describe the various ingredients to static variability in mbeddr.
Presence Conditions A presence condition is a Boolean expression over features attached to any program element. During transformation, the program element is deleted if, based on the selected configuration, the Boolean expression evaluates to false. Presence conditions are essentially a kind of "structured #ifdef". In the two functions process_trackpoint and testPresenceConditions, presence conditions are attached to several statements.
The program elements to which presence conditions are attached are color-coded. The color depends on the expression. This means that all program elements that are annotated with the same presence condition get the same color, making it easy to spot disparate parts of programs that rely on the same variant configuration.
Presence conditions are attached to program nodes with an intention.
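As an illustration, two statements inside process_trackpoint with attached presence conditions might be rendered roughly like this (a sketch; in mbeddr the condition appears as an annotation above the annotated node, and the {pc: ...} notation is invented here):
{pc: nullify}                        // presence condition over features
result = process_nullifyAlt(tp);     // removed from the variant if 'nullify' is not selected
{pc: nullify && maxCustom}
result.speed = maxCustom.maxSpeed;   // only kept if both features are selected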
Projection Modes Variability-aware code can be projected in various ways, switchable via the menu. The Detailed Product Line mode shows the presence conditions in-line in the code (above the annotated node). The Compact Product Line mode just shows small markers in the code; hovering over a marker shows the presence condition in a tooltip. The third mode shows the program as it would look for a given variant (as if the transformation were executed directly in the editor). For this to work, the system has to know which variant should be rendered. To this end, a so-called Variant Selection annotation must be attached to the respective root node (it must be on the root node!) using an intention. This annotation allows the selection of a feature model and a particular configuration. Once one is specified, the Selected Variant projection mode can be selected.
To better understand these modes, please go to StaticVariability and experiment for yourself.
Conditional Replacements Presence conditions are useful for blocks, statements, or other structural parts of programs. However, sometimes you only want to change the value of an expression. Exchanging the complete statement (e.g., in the case of a variable declaration) has the problem of changing the identity of the variable, which leads to all kinds of downstream problems. It is hence much better to change the value of a variable by exchanging an expression. Conditional Replacements and Conditional Switches can be used for this. You can also use Conditional Alternatives, as discussed below.
Take a look at testConditionalReplacement. In the case of v1, we exchange the value 10 with 0 if the nullify feature is selected, using a conditional replacement. A conditional replacement replaces an expression with one other expression, based on a feature condition; it's a bit like an if statement. However, sometimes an expression must be replaced with various different expressions based on a set of feature conditions. A conditional switch can be used for this; it is more like a switch statement. Take a look at v2 for an example. Both conditional replacements and conditional switches are attached with intentions.
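A sketch of the idea behind v1 and v2 (the comment-style annotation shown here is invented for illustration; in mbeddr the replacement is an annotation on the expression):
int8 v1 = 10;   // conditional replacement: 10 is replaced by 0 when 'nullify' is selected
int8 v2 = 10;   // conditional switch: several (feature condition -> expression) pairs;
                // the first condition that holds determines the replacement expression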
Conditional Alternatives Presence conditions are good to exchange code blocks or statements. Conditional replacements and switches are good to replace expressions. What's missing is a way to replace things that have a name and that can be referenced. The problem is that all references to something break if it is removed via a presence condition. To solve this problem, mbeddr has a third option, the so-called conditional alternative.
Take a look at alternatives. We declare a function add which we call from the test case testConditionalAlternative. Now let's imagine we wanted to replace this function with another one based on a presence condition. If we did that, we'd also have to use presence conditions on all call sites of the function. This is annoying. Conditional alternatives solve this problem in the following way: the original element is marked as the conditional original, and each replacement is marked as a conditional alternative: it points to the original, it has a presence condition that determines when it should be used, and its name must start with the name of the original plus an underscore.
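A sketch of the resulting structure (the feature name verified is hypothetical; note the naming rule, the original's name plus an underscore):
// marked as conditional original
int8 add(int8 a, int8 b) { return a + b; }
// marked as conditional alternative for 'add', with presence condition 'verified'
int8 add_verified(int8 a, int8 b) { return a + b; /* alternative implementation */ }
// call sites simply reference add(...); the transformation redirects them
// to add_verified(...) whenever the presence condition holds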
Note that this does not just work for functions but for anything that has a name and can be referenced.
Building Variable Systems In order to build variant-aware software, one has to specify the configurations for all involved feature models. To do this, you have to add a variability mappings configuration item to the build configuration (please see [ToDo: ] for a general discussion of mbeddr's build system). In this item you can list all relevant feature models plus their configuration. Check out this build configuration as an example: BuildConfiguration (m.t.m.defaultExtensions). Note that you will get errors in the build configuration, and sometimes during generation itself, if no configuration is specified for a feature model used in the system.
As explained in [ToDo: ], each model can only have one build configuration. For building variability-aware software, this results in two alternative setups: First, you keep everything in one model; by changing the variability mappings in the build configuration, you can build different variants of the software. However, since they are all in the same model, you can only build one variant at a time! Second, you create one model that contains the software itself (model S), and then an additional model for each variant you want to build (models V_1 through V_n). The additional models V_i import the model S and each contain a build configuration with a suitable variability mapping. This way, each of these models builds a separate variant. They exist in parallel, so you can automate the build for all the variants at the same time.
Besides increasing programmer productivity, domain-specific languages allow the definition of advanced, user-friendly, domain-specific analyses. One special kind of analysis is formal verification, which uses mathematically precise methods to check properties of mbeddr code. In mbeddr we have integrated different formal verification techniques, intended to be used on a continuous basis in the everyday work of practicing engineers. As of today, we have implemented two different kinds of analyses:
mbeddr allows the definition of product lines with the help of feature models, configuration models and presence conditions attached to the produced artifacts (e.g. program code, requirements). Defining and using product lines in mbeddr is described in Section Product Line Variability.
Based on the above content, mbeddr provides the following analyses:
Variability analyses can be run both at a fine granularity, on individual feature models and configuration models, and more coarsely, on whole models and solutions that use variability support.
In the figure below we present an example of a feature model defined in mbeddr.
The FM-Analyzer can be started by right-clicking on the feature model name and selecting the corresponding menu entry.
FM-Analyzer opens a window with the result of the analysis. If the analysis fails, FM-Analyzer provides an explanation of the cause of the failure.
In the figure below we present an example of a configuration model defined in mbeddr. Each configuration model contains a subset of features from the feature model it configures.
The CM-Analyzer can be started by right-clicking on the configuration model name and selecting the corresponding menu entry. CM-Analyzer opens a window with the result of the analysis. If the analysis fails, CM-Analyzer provides an explanation of the cause of the failure.
In the figure below we present an example of an implementation module in mbeddr which has attached presence conditions to several entities.
The Modules-Analyzer can be started by right-clicking on the implementation module name and selecting the corresponding menu entry. It checks whether the implementation module is consistent with respect to the feature model. An example of an inconsistency is a function (or variable) declaration that has a presence condition while the calls (or references) to this function (or variable) do not have any presence condition; in this case, code generation can result in a compile error. Modules-Analyzer opens a window with the results of the analysis. If the analysis fails, Modules-Analyzer provides an explanation of the cause of the failure by giving a possible configuration (a selection of concrete features) that would cause it.
Whenever a model contains a root node of type VariabilitySupport, we can check the consistency of the variability definition by right-clicking on this model. Whenever a model contains an ImplementationModule that references a feature model, we can check the consistency of the variability use in the same way.
When a solution contains models that define feature models, then we can check the variability consistency (both definition and use) for the entire solution.
To perform formal verification, we have integrated the CBMC http://www.cprover.org/cbmc/ C-level model-checker which is free under a BSD-4-clause license. Below we reproduce the 'advertising requirements' of the CBMC license:
"This product includes software developed by Daniel Kroening, ETH Zurich and Edmund Clarke, Computer Science Department, Carnegie Mellon University."
The focus in mbeddr is on hunting bugs at the unit level in a continuous and user-friendly manner. In general, we do not aim to prove the correctness of a system, but rather to help users find bugs. Each run of an analysis provides one of the following results: SUCCESS (colored GREEN), meaning that no bug could be found; FAIL (colored RED), meaning that a bug was found; or DON'T KNOW (colored ORANGE), meaning that no bug could be found but the verification is incomplete. When a bug is found, most of the time a trace through the system (a counterexample) that leads to the failure is provided.
We distinguish between the following kinds of analyses:
Before you start The verification tool that we use runs on the generated C code (not on the mbeddr models themselves). It therefore assumes that your models generate valid C code 1 (otherwise, CBMC will complain and no verification will be performed).
Before starting a verification, we should ask ourselves the following questions: What is the entry point of the verification? By default, this is main. Code not reachable from the entry point will not be checked at all! And which values can the inputs take? Usually only a sub-range of a type is actually relevant (e.g., for a variable currentSpeed with int32 as its type, only a small sub-interval of int32 is relevant, since cars cannot travel very fast or have high negative speeds).
mbeddr allows the definition of higher-level, domain-specific properties. Our approach to checking them is to generate C-level labels (or asserts) for each of them and to check the reachability of these labels (or the violation of the asserts). In some cases, a property is violated if a label is reachable (e.g. for decision tables); in other cases, labels that are not reachable represent property violations (e.g. for state machine state reachability: if the label corresponding to a state is not reachable, then that state is not reachable). The encoding of high-level properties as labels (or assertions) and the lifting of the verification results is done automatically and transparently for the user.
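To give a flavor of this encoding, the completeness check of a decision table could be generated roughly like this (a sketch; the actual generated code and label names differ):
#include <stdint.h>
int32_t brakingDistance(int32_t speed, int32_t icy) {
  if (icy && speed > 100) { return 80; }
  if (!icy && speed > 100) { return 60; }
  /* ... one branch per decision table cell ... */
  DT_INCOMPLETE: ;   /* CBMC checks reachability of this label:
                        if it is reachable, the table misses a case */
  return 0;
}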
In order not to clutter the production code with unnecessary lines, the labels used for verification are generated only during a model rebuild that is started by an analysis action. Please make sure that the code you analyze gets rebuilt before the analysis starts.
1: If an advanced environment specification is used, the generated C code may also contain CBMC-specific macros, so it need not compile with a regular compiler.
2: Loop unwinding is also known as loop unrolling.
Before starting mbeddr verification, one needs to perform the following steps: Global settings for CBMC-based analyses can be changed via the settings menu, by selecting the corresponding tab as shown below. The meaning of these settings is briefly described below:
For checking the robustness of C programs, we need to add the com.mbeddr.analyses.core devkit as shown in the following figure. C programs are called 'robust' if no run-time errors (e.g. division by zero, null pointer dereferencing) can occur. To search for these errors, CBMC checks each place in the generated C code where such an error can occur (e.g. each division operation, each pointer dereference). The properties to be checked are in this case generated automatically by CBMC and state, for example, that all divisions have non-zero divisors.
To start the robustness checks for a given function 1, you can either open a pop-up menu on the function or define a robustness analysis configuration 2. When the analysis is started from the pop-up menu, all robustness properties are checked. Below we summarize the robustness properties (details about these properties can be found on the CBMC web page, http://www.cprover.org/cprover-manual/properties.shtml, and in its user guide):
1: All analyses are performed on the code of the function set as the entry point, and transitively on the functions called from it.
2: Most analyses can either be run with one click (in that case the global settings (see Section Basic Configuration) are used and the entry point of the analysis is 'main'), or via 'analysis configurations'. An analysis configuration enables the specification of entry points (e.g. verification environments) and the fine-grained definition of CBMC parameters.
Higher-level language constructs carry with them higher-level properties that can be checked. For example, when programming with decision tables, we might ask ourselves whether the decision table is complete (have we covered all cases?) or whether it is consistent (are there cases where multiple cells could be active at the same time?). Another example is state machines, for which we might ask whether all states are reachable (i.e. there are no superfluous states) and whether all transitions can be fired (i.e. no transition is completely shadowed by previous transitions).
Let's consider a decision table which implements a look-up table to compute a braking distance, given the current speed and the information whether the road is icy.
A decision table can be verified only if the 'checked' annotation is enabled as shown below. This flag will instruct the C-code generator to generate labels for each of the properties to be checked.
It is possible to start the verification with one click from the pop-up menu of the decision table node, or through an analysis configuration as described for the robustness checks. The one-click start uses the main function as the entry point; when analysis configurations are used, we can specify another entry point (usually a harness; see Section Defining the Environment for Verification).
After the verification is finished, the results (i.e., have we missed cases? are there cases where two cells of the table are active at the same time?) are automatically shown. When a check fails, a trace through the system is given that shows example values that cause the failure.
The first step to check state-machines is to add the com.mbeddr.analyses.statemachines
devkit to the model containing the state machine. This devkit enables actions in context menus and the lifting of counterexamples such that they are aware of state-machines.
Let's consider a state-machine that implements a simple counter. After the state-machine is started (with the start event), it counts up or down.
statemachine Counter initial = Init {
in event countUp(int16 step) <no binding>
in event countDown(int16 step) <no binding>
in event start() <no binding>
var int16 counterState = 0
state Init {
on start [ ] -> Counting
} state Init
state Counting {
on countUp [ ] -> Counting { counterState += step; }
on countUp [step < 0] -> Init
on countDown [ ] -> Counting { counterState -= step; }
on countDown [step < 0] -> Init
} state Counting
}
A state machine can be verified only if the 'checked' annotation is enabled as shown below. The checked flag is enabled via an intention on the state machine. This flag instructs the C-code generator to generate labels for each of the properties to be checked (see Section Behind the Curtain).
The verification can be started either with one click from the pop-up menu of the state machine node, or through an analysis configuration. The one-click start uses the main function as the entry point and the global settings for CBMC; when analysis configurations are used, we can specify another entry point and fine-tune the settings.
After the verification is finished, the results are automatically shown: if a state cannot be reached or a transition cannot be fired, the corresponding result is marked as FAIL. Since in this case the generated labels cannot be reached, there is no trace through the system.
Often we want to check properties of the code that are relevant to the business domain of our application and originate from requirements. In order to do so, we need to be able to write such properties in the code. In mbeddr, one can use plain C-level assertions, extended assertions, or attach properties to higher-level constructs, for example pre-/post-conditions on components.
Writing assertions in C code is a 'classical' method to check properties. The code fragment below shows a simple assertion.
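For example (a minimal stand-in for the fragment; the divide function is made up here):
#include <assert.h>
#include <stdint.h>
int32_t divide(int32_t a, int32_t b) {
  assert(b != 0);   /* property: the divisor must never be zero */
  return a / b;
}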
Extended assertions Using simple assertions to encode more complex properties that describe the evolution of the system over time (e.g. 'after Q then P', 'before Q must P') is often cumbersome. As usual, mbeddr provides a set of first-class language constructs that allow users to directly express such properties. By clicking on these constructs, you can find information in the Inspector about how the code will be generated: basically, C-level assertions wrapped in small monitors.
The assertion verification can be started either with one click from the pop-up menu of the function, or by defining an analysis configuration item. Both ways are illustrated below.
In order to enable component verification, we first add the com.mbeddr.analyses.components devkit to the current model.
mbeddr allows its users to define interfaces and components that implement them. To each of the runnables of an interface, we can attach pre-/post-conditions. Furthermore, the expected ordering of calls to the functions of an interface can be defined using a protocol specification. Together, the pre-/post-conditions and the protocol are called the contract of the interface.
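As a rough sketch of such a contract (the interface, its operation and the exact projected syntax are assumptions here; the real notation is best studied in the example models):
interface Counter {
  int32 step(int32 by)
    pre(0) by > 0          // obligation of the caller
    post(0) result >= 0    // obligation of the implementation
  // a protocol would additionally constrain the order of calls,
  // e.g. that init must be called before the first step
}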
Interfaces can be implemented by components, as in the code fragment shown below.
Having specified the contract of an interface, we would like to verify that the clients of the components implementing this interface comply with the specified preconditions and call the interface functions in an appropriate order. Furthermore, we would like to check that the implementations of the functions of the interface fulfill the specified post-conditions.
To check the component, we first need to add the 'checked' attribute via an intention (with this attribute enabled, labels are generated for the cases where pre-/post-conditions are violated).
The verification is started either with one click from the component's pop-up menu, or via an analysis configuration item, which allows specifying an arbitrary entry point. Both of these ways are illustrated below.
Once the verification finishes, a window with results will be automatically opened as illustrated below.
Most of the software modules we develop (e.g. functions, state machines, components) are embedded within bigger systems. The systems neighboring the module we verify are called the environment of that module. Most of the time, the data types of the inputs of the to-be-verified system are too permissive: in practice, the sub-system we are verifying will work under much tighter constraints.
The constraints over variables and ranges can be modeled inside harness modules
. Below we show an example of such a module:
Furthermore, the sequence in which different operations are called can also be modeled in the harness. For this, we can use the nondeterministic_choice construct, which nondeterministically chooses one of the defined cases.
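A sketch of such a harness (the identifiers, the assume construct and the exact syntax of nondeterministic_choice are assumptions here):
void counterHarness() {
  int16 step;                          // deliberately uninitialized: CBMC treats it as any value
  assume(step > 0 && step <= 100);     // tighten the permissive int16 input range
  nondeterministic_choice {
    case { /* e.g. fire countUp(step) on the state machine */ }
    case { /* e.g. fire countDown(step) on the state machine */ }
  }
}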