

How to Compare Language Workbenches

December 09, 2013 17:49:24 +0200 (EET)

Several ways have been proposed and applied to compare Language Workbenches (see a review of the work done). Language Workbench Challenge 2014 continues the tradition of inviting tools to be presented, showing their solutions to the challenge: implementing support for a given domain-specific language. Past editions of LWC followed the same pattern in 2011, 2012 and 2013.

LWC is great because it allows people to see different tools at the same time and see how they support the given case. Unfortunately, its format is not completely suitable for comparison. One reason is that not all participating tools have implemented all the tasks. Secondly, the effort to implement the solution has not been reported (person-hours, number of persons involved, expertise needed, etc.), and I would expect that to be of interest to many. For example, in the 2011 challenge only one participant and tool actually showed live how the implementation is done. Third, LWC has focused on building a partial language solution from scratch, whereas in reality - similarly to software development - most work deals with redeveloping and maintaining the tooling while it is already in use. Fourth, there is hardly ever only one person both developing and using the language, and I was particularly happy to see that LWC 2014 extends the scope to scalability and collaboration (larger teams, larger models, several language engineers working together). These same issues are at least partly demonstrated in a 9-minute video showing how MetaEdit+ approaches collaboration.

A single event or study obviously can't address all aspects of tool comparison, but LWC brings perhaps the largest number of tools to the table. Hopefully there will be many submissions. The submission guidelines are available on the LWC website.


Choosing the Best Level of Abstraction for Your Domain-Specific Language

May 28, 2013 10:11:07 +0300 (EEST)

Aligning a language closely with the problem to be tackled - regardless of whether the language is internal or external, or represented as graphics, text, a map, a matrix, etc. - will almost automatically offer major improvements. Empirical studies (1, 2, 3, 4) have reported improvements for example in error detection and prevention, quality of code, productivity, maintainability, and in communication and understanding of the system developed. The biggest improvements, however, are achieved when the level of abstraction is raised as high as possible. For me this means that the appropriate language-level abstraction should be the same as the problem domain - or at least as close as possible to the problem domain, the area of interest that the language targets.

I have the pleasure of joining the workshop on language design and implementation to defend my claim that the best level of abstraction for a language is as high as possible. I have no arguments against DSL'ish ideas of extending current programming languages, embedding a DSL into a host language, using better naming for API calls, etc., but I don't see them raising the level of abstraction much above the implementation level. I'm afraid these efforts then don't provide significant benefits to development teams either.

At the workshop I plan to prove my claim by showing results from cases (partly public, like 4, 5, 6, 7, 8, 9, 10) that develop and test different types of software, such as consumer electronics, telecommunication, product lines, medical, and automation systems with domain-specific languages, models and generators. These industry cases report a 5-10 fold productivity increase. While many find such ratios hard to believe, they are quite easy to demonstrate by showing some of the languages in action with concrete case data. Obviously those cases target specific needs - as all DSLs do - and a good way to find out whether domain-specific modeling languages could help in a particular situation is to try it out, e.g. by running a short workshop as a proof-of-concept.

The industry cases also show that when raising the level of abstraction closer to the actual problem domain, the traditional textual approach that dominates programming language design is not necessarily the most suitable way. Instead, the domain and the most "natural" way to express and find solutions in that domain should guide the language designers. For example, the graphical block languages of Simulink and LabVIEW have been the de facto way of working for engineers in signal processing and control engineering, spreadsheets are heavily used in accounting, telecom standards are specified in the MSCs of SDL, etc. A linear text representation can still be generated to feed the compiler and integrate with existing libraries and code.

I can't avoid discussing tools, and I argue that the domain - the area of interest that the language addresses - should again drive the language design, not the capabilities of a particular implementation technology or tool. Still, DSLs can't survive without infrastructure such as tooling support. Too often companies have started to build this infrastructure alongside their DSLs, only to find out a few years later that tool development is not their core business - and that building the tooling support took more time and effort than expected. And tools need to be maintained too.

While metasystems, metaCASE tools, language workbenches, etc. can help in creating the tooling support for the languages, such as editors, checkers and generators, most of them still require a significant amount of resources (for a comparison of the creation functionality in tools, see [12]). Since maintaining the languages and supporting tooling is a much bigger effort than their initial creation, perhaps at the workshop we can also discuss issues after language design. For example, at MetaCase we have tried to implement the MetaEdit+ tool so that it also supports language maintenance: previously made models are updated to a newer version of the language automatically (so that work is not lost and editors always open and work), multiple concurrent language engineers are supported (each focusing on different parts of the language family or its generators), testing is possible while the language is still being defined, etc.
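The idea of automatic model migration can be sketched roughly as follows. This is a hypothetical Python illustration of the principle only, not MetaEdit+'s actual mechanism (which works inside the tool on its repository); the 'Pump' concept, property names and rename table are all made up for the example:

```python
# Illustrative sketch: when a language update renames a property,
# existing model instances are rewritten so earlier work is not lost.
old_models = [
    {"type": "Pump", "props": {"maxFlow": 120}},
    {"type": "Pump", "props": {"maxFlow": 80}},
]

# Hypothetical language change: 'maxFlow' was renamed per concept type.
RENAMES = {"Pump": {"maxFlow": "max_flow_l_per_min"}}

def migrate(model):
    """Apply the rename table for this model's concept type."""
    renames = RENAMES.get(model["type"], {})
    model["props"] = {renames.get(k, k): v
                      for k, v in model["props"].items()}
    return model

migrated = [migrate(m) for m in old_models]
print(migrated[0]["props"])  # {'max_flow_l_per_min': 120}
```

The point is that the migration is driven by the language definition change itself, so modelers never have to touch their old models by hand.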

This year EC* 2013 combines ECMFA, ECOOP and ECSA, and I expect that domain-specific languages and models will be well covered from the modeling, programming and architecture points of view!


Generating full code for PLC (IEC structured text) from Domain-Specific Models

May 03, 2013 14:32:49 +0300 (EEST)

Last year at the Language Workbench Challenge I implemented a domain-specific modeling language for heating systems, along with code generators producing the code - integrated with Beckhoff's TwinCAT environment for execution. After updating my old computer at home to Windows 8, I had a chance to record the whole example, since TwinCAT runs only on 32-bit machines. A video on the MetaCase YouTube channel shows the whole path: from high-level domain concepts to code, integrated with TwinCAT for building and executed for simulation.
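To give a feel for the "domain concepts to code" step, here is a minimal sketch of a generator emitting IEC 61131-3 structured text from a heating-system model. The model structure, concept names and control logic are invented for illustration; the real solution uses MetaEdit+'s MERL generators, not Python:

```python
# Illustrative sketch: a simplified heating-system model instance and a
# generator producing an IEC 61131-3 structured text program from it.
model = {
    "name": "FloorHeating",
    "sensors": [
        {"name": "WaterTemp", "type": "REAL"},
        {"name": "RoomTemp", "type": "REAL"},
    ],
    # Hypothetical rule: switch the pump on below a room temperature.
    "pump": {"name": "CirculationPump", "on_below": 21.0},
}

def generate_st(m):
    """Walk the model and emit structured text for the PLC."""
    lines = [f"PROGRAM {m['name']}", "VAR"]
    for s in m["sensors"]:
        lines.append(f"  {s['name']} : {s['type']};")
    lines.append(f"  {m['pump']['name']} : BOOL;")
    lines.append("END_VAR")
    lines.append(f"{m['pump']['name']} := RoomTemp < {m['pump']['on_below']};")
    lines.append("END_PROGRAM")
    return "\n".join(lines)

print(generate_st(model))
```

The modeler only edits the high-level model; the declarations and control logic are derived from it.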

If you want to play with this example and check the languages and generators, you can find it in the evaluation version of MetaEdit+, along with a guide.


Reusing language concepts, modular languages

April 05, 2013 10:06:12 +0300 (EEST)

The optional task of Language Workbench Challenge 2013 opens the space for more advanced language designs: it emphasizes modularity of languages and the possibility to reuse parts of the DSM solution, with a very typical scenario - one language for specifying logic and another for specifying layout.

Now that I have the MetaEdit+ implementation of the LWC 2013 submission in my hands, I played with the combined languages of QL and QLS. QL stands for Questionnaire Language (see an earlier blog entry), for defining questions and their sequential order, and QLS stands for Question Layout and Style, for defining the visual layout of the questions. In the metamodel, implemented by my colleague, these languages (and generators) are tightly integrated.

The combination of the languages allows creating different layout options for the same questions and their logic. Consider the examples below: the questions and question logic can be the same, but the layouts differ - not only in visualization but also, for example, in how the questionnaire is split into different pages/steps. Naturally the logic can also differ, as support for variability is built directly into the languages.

Two different layouts for the same questionnaire

This kind of integration usually works better than keeping the logic and layout disconnected at design time or using model-to-model transformations. With this language implementation, developers using MetaEdit+ can work in parallel - some focusing on question logic and others on layout - working seamlessly with the same questionnaire design information for both logic and layout. At any point in time either group can also generate and run the questionnaires to try them out. Integrated languages also enable better reasoning, checking and tracing among the design elements.
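The logic/layout split can be sketched in miniature: one set of question definitions, two layout specifications that produce different page splits. This is a made-up Python illustration of the idea, not the actual QL/QLS metamodel or its generators:

```python
# Illustrative sketch: the same questions paginated differently by two
# hypothetical layout specifications, in the spirit of the QL/QLS split.
questions = [
    {"id": "hasHouse", "label": "Do you own a house?"},
    {"id": "price", "label": "Purchase price?"},
    {"id": "debt", "label": "Remaining mortgage?"},
]

def paginate(qs, layout):
    """Split the shared question list into pages as the layout dictates."""
    size = layout["questions_per_page"]
    return [qs[i:i + size] for i in range(0, len(qs), size)]

single_page = {"questions_per_page": 3}  # everything on one page
wizard = {"questions_per_page": 1}       # one question per step

print(len(paginate(questions, single_page)))  # 1
print(len(paginate(questions, wizard)))       # 3
```

Because both layouts read the same question list, a change to a question is reflected everywhere without any copying or merging.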

Visit the LWC website to see the submissions to the third challenge. The website also shows earlier years' submissions, allowing you to compare how the tools perform and implement the given tasks. I personally have not been involved in organizing these events (just implemented one solution), but what would make me happy in the hopefully coming next challenges would be language design tasks dealing with:

  1. Language evolution (so far at LWC languages have been created from scratch)
  2. Model evolution when the DSL/DSM is refined/maintained (so far there has been no interest in maintaining models made with an earlier version of the languages, although this is what happens in practice)
  3. Multiple language engineers (there are often multiple language engineers defining the same language)
  4. Scalability: large models, multiple persons using the language, multiple persons modifying the language
  5. Different representations: not only graphical or text, but also matrices, tables, and their mixtures

While this year looked more like a framework and runtime development challenge than a language development challenge (my colleague estimated that only 20% of the effort went to the language development part in MetaEdit+), perhaps even bigger differences among the language workbenches would become visible when implementing larger languages - integrated and obviously modular ones. Join LWC 2013 next week to see how all the solutions work.


Heavy use of generators: Single source, multiple targets

March 19, 2013 09:23:43 +0200 (EET)

When our consultants are involved in language development - whether assisting in the beginning, providing training or participating in modeling language implementation - we typically sign an NDA. And after signing the NDA our mouths are sealed. That is natural, since users of MetaEdit+ own the languages they develop - and hopefully we were able to support them.

Occasionally we come across cases that permit publishing some more details of the DSM solution, like what the language looks like or what kind of artifacts are generated. The best cases are those where the language engineers and users are allowed to describe their experiences. For me, the use of Domain-Specific Modeling at Hofernet is particularly nice as they make heavy use of generators. Hofernet IT-Solutions has created a domain-specific language targeting automation systems for fish farms. The FishFarm DSL uses domain concepts like ponds, feeders, water levels, etc. directly as language constructs. It is thus truly domain-specific, having a narrow focus and raising the level of abstraction. This DSM solution is also a prime example of making models work: models provide a single source and generators produce the rest.

An example model created with the Fish Farm DSL

First of all, after modeling the fish farm they generate the code for the automation systems - running as PLC code on a specific platform. In addition, they also generate the UI application code so that the fish farm owner can use his touchscreen device to follow the status and control the operations of the fish farm.

While most companies would be very happy with the possibility to generate code for production use from high-level models, this is just the beginning with Hofernet's FishFarm DSL. Since the fish farm as a system also contains hardware, the generators also produce configurations for the devices in the network as well as hardware mappings. They also produce the configuration for the web portal.

A particularly notable part is the document generation, as documents are needed to install and maintain the system in operation. Thus they also generate wiring plans, a list of parts to be installed, and even the labels to be attached to the wiring closet.
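The single-source, multiple-targets pattern can be sketched as follows. The model structure, device names and generators here are invented for illustration and are not Hofernet's actual DSL:

```python
# Illustrative sketch: one fish-farm model feeding several generators,
# each producing a different target artifact from the same single source.
farm = {
    "ponds": [
        {"name": "Pond1", "feeder": "Feeder-A", "sensor": "Level-1"},
        {"name": "Pond2", "feeder": "Feeder-B", "sensor": "Level-2"},
    ]
}

def gen_parts_list(m):
    """Installation document: every device mentioned in the model."""
    parts = []
    for p in m["ponds"]:
        parts += [p["feeder"], p["sensor"]]
    return parts

def gen_labels(m):
    """Labels for the wiring closet, one per pond/device pair."""
    return [f"{p['name']}/{dev}" for p in m["ponds"]
            for dev in (p["feeder"], p["sensor"])]

def gen_ui_config(m):
    """UI configuration: one status widget per pond."""
    return {p["name"]: {"widget": "pond-status", "sensor": p["sensor"]}
            for p in m["ponds"]}

# All targets are produced from the same model instance:
print(gen_parts_list(farm))
print(gen_labels(farm))
print(gen_ui_config(farm))
```

Add a pond to the model and every artifact - parts list, labels, UI configuration - picks it up on the next generation run; nothing has to be kept consistent by hand.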

use of code generators with Hofernet's Fish Farm DSL

All in all, the FishFarm DSL by Hofernet is a prime example of doing model-driven development in the right way: a single source in models generating multiple target formats. There is no need to maintain the same information in different places, check consistency, or diff and merge various models and formats. A more detailed description will be published in the DSM special issue of the Journal of Software and Systems Modeling. An electronic version of the article is already available from the publisher.

People reading the case have said that it makes perfect sense in the domain of fish farm automation. Obviously, the prime reason why it works so well is that the creators of the language have narrowed down the domain well and raised the level of abstraction. This is how good languages should work. If you are interested in reading cases from other domains than fish farms, check an article reviewing 76 cases of DSM, a paper focusing on 20+ cases in product line companies, or the 4 industry cases described in the book on Domain-Specific Modeling.

The truth is that there exist thousands of similar narrow domains like fish farm automation systems - and I suspect that the one you are working with is one of them. If you would like to chat about a possible case, feel free to contact me (jpt _


Small languages, large frameworks (at LWC2013)

March 08, 2013 14:37:08 +0200 (EET)

The 3rd Language Workbench Challenge takes place next month in Cambridge, UK. One of my colleagues, Risto Pohjonen, will take part in the challenge, having implemented the modeling language and code generators with MetaEdit+. The domain for the 2013 challenge is questionnaires, and the language to be implemented is called QL (Questionnaire Language). QL allows specifying form-based questionnaires with conditions.

I was playing with QL, and an example of the language in MetaEdit+ is shown below. The diagram shows the specification of one of the assignment tasks, related to house owning. With QL, questionnaire developers create such models using questionnaire concepts and then run the generator, producing the application code and running it in the browser. The level of abstraction is raised since programming (here JavaScript and HTML) and framework details are hidden, allowing the person to focus on the domain: questionnaires.

Model based on QL in MetaEdit+ along with generated questionnaire
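The generation step can be sketched in miniature: walk the questionnaire model and emit form markup, with conditional questions initially hidden. The model structure and field names below are hypothetical, and the sketch is Python rather than the MetaEdit+ generators the real LWC 2013 solution uses:

```python
# Illustrative sketch: a QL-style questionnaire model and a generator
# emitting an HTML form from it. Conditional questions start hidden;
# in a real solution, generated JavaScript would toggle their visibility.
questionnaire = [
    {"id": "hasSoldHouse", "label": "Did you sell a house in 2010?",
     "type": "boolean", "visible_if": None},
    {"id": "sellingPrice", "label": "Price the house was sold for:",
     "type": "money", "visible_if": "hasSoldHouse"},
]

def gen_html(qs):
    out = ["<form>"]
    for q in qs:
        style = ' style="display:none"' if q["visible_if"] else ""
        input_type = "checkbox" if q["type"] == "boolean" else "number"
        out.append(f'  <div id="{q["id"]}"{style}>')
        out.append(f'    <label>{q["label"]}</label>')
        out.append(f'    <input type="{input_type}" name="{q["id"]}">')
        out.append("  </div>")
    out.append("</form>")
    return "\n".join(out)

print(gen_html(questionnaire))
```

The questionnaire developer never touches the markup; the structure of the form follows entirely from the model.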

What strikes me a bit is that this year the language and generator parts are relatively simple. To allow testing and running QL without having to install any additional components or programs, most of the effort actually went into the framework. While we described in the book different ways to divide the work among language, generator and framework, the tasks of LWC 2013 clearly required more framework development than language or generator development.

Small language, large framework in LWC 2013 case

On the other hand, a particularly nice part of the assignment is that it calls for language integration: reusing and referencing among language concepts. Such integration is usually better than model-to-model transformation, since we don't want to create copies of the same information to be changed, checked and kept consistent in different places. That just adds unnecessary complexity. In the assignment such modular language integration can be used to define questionnaire logic with one language (as above) and integrate it with another language focusing on layout and styling. It will be interesting to see how other tools support language modularity and evolution.


SoSyM issue on Domain-Specific Modeling online

February 20, 2013 11:28:40 +0200 (EET)

The papers of the SoSyM theme issue on Domain-Specific Modeling are now available online. Getting to this point took a bit longer than we originally anticipated - in particular, the large number of submissions required a bit more work, and we were only able to accept about 10% of the submissions (6 papers).

The articles in the DSM theme issue mix nicely the work on both the theory and application sides: they cover language demonstrations, descriptions of cases of DSM use, and also empirical data on using DSM in practice. The guest editorial to the theme issue can be downloaded, and the other articles of the issue are available at SpringerLink.


2nd Workshop on Graphical Modeling Language Development

February 11, 2013 14:39:15 +0200 (EET)

It is a pleasure to be involved in organizing a workshop closely related to language development. GMLD'13 looks at the principles of modeling language development, particularly graphical modeling languages for domain-specific needs. We are looking for submissions that cover all the phases of language development, including definition, testing, evaluation, and maintenance of modeling languages. In particular, we seek contributions that are backed up by experiences of developing modeling languages. If you are interested, the call is available on the workshop website.

This year is also a bit special as the European modelling (ECMFA), programming (ECOOP) and architecture (ECSA) conferences take place at the same time in Montpellier.


20 cases of Domain-Specific Modeling

December 03, 2012 11:23:35 +0200 (EET)

Unfortunately, most Domain-Specific Modeling solutions are not made publicly available. The reasons are clear: from the technical point of view, a particular domain-specific language addresses a narrow domain and is a perfect match for a small audience only. Business reasons are perhaps even more obvious: if you have created a technology that makes your development 5-10x faster than your competitors', you most likely don't want to publish it.

Luckily, some companies allow showing or even sharing their languages. To demonstrate the variety of domain-specific modeling languages, I've picked 20 different languages developed by MetaEdit+ users. They were selected to show the various domains targeted, as well as the wide variety of code (or other output) that can be produced from domain-specific models. For all the examples we can show the languages as they are used in MetaEdit+.

The session is available on the MetaCase YouTube channel: 20 DSM examples.


More or less languages: panel at MODELS 2012

November 26, 2012 14:18:46 +0200 (EET)

This year the MODELS conference had a panel discussion on “unified vs. domain-specific” - a topic that has been touched on at MODELS before. This year the audience was asked at the end of the panel to share their opinions, and the result was clearer than in the 2005 panel: a vast majority (over 80%) voted for more languages.

While everybody can vote and have an opinion, obviously the best approach would be to listen to those who have truly applied both approaches. While I’ve also tried to twist UML with stereotypes and tagged values - and always found profiles more complex and less powerful than plain metamodeling - it was particularly nice to read the thesis of Kirsten Mewes, as she had applied both language design approaches with MetaEdit+.

Since I have never gone so far as to implement the same language in two ways, it was good to find someone who had done that: defined the concepts, rules (semantics) and notation, along with tool support, for a railway domain. This effort nicely demonstrates the difference between UML+profiling and metamodeling:

Using RCSD in MetaEdit+ by Kirsten Mewes

1) Some of the rules expressed with the metamodeling tools of MetaEdit+ take two pages, whereas with UML profiles and OCL they would take tens of pages. If we count all the rules of the language as found in the appendix, profiles require at least 2 times more space than plain metamodeling. The exact comparison is a bit hard, since the checks implemented with MetaEdit+ also describe how the model should be corrected, whereas the profiles with OCL constraints perform checking only.

2) The usability difference between the resulting languages is huge: the metamodel-based language closely mimics the notation of the domain, whereas UML models with profiles stay as … classes.

3) Changes to the metamodel are automatically reflected in the rules and can also be traced to rules defined with MERL (the generator language), whereas OCL lives in a different space than the metamodel. This causes the usual problem in UML tools supporting profiles: if the metamodel is changed, the rules do not update and cannot be traced. (This happens in UML itself too, as its definition has OCL rules for elements that were actually removed half a decade ago… Well, it is called a standard :-(

The conclusion of the thesis says it well: “In comparison, the usage of language frameworks [MetaEdit+ metamodeling] has been proven superior to profiles”. It keeps me wondering why some people still consider the use of UML profiles. If you are considering different ways to define languages, even with UML profiles, drop me an email (jpt@meta...) and let’s look together at what the language would look like when defined with a pure metamodeling approach.


Testing web applications: Experiences on using Domain-Specific Modeling

June 18, 2012 16:42:29 +0300 (EEST)

During this spring I’ve had a nice opportunity to be involved in creating modeling languages for testing web applications (WebAppML). While Stephan Schulz from Conformiq did all the fun part of defining the different language versions, my role was mainly to discuss alternative language structures. Stephan also implemented generators for test case generation that integrate WebAppML and MetaEdit+ with Conformiq’s testing tools.

Using WebAppML in MetaEdit+: testing a shopping cart

A good part of this work is that the user feedback and results are also publicly available (a talk at the ETSI MBT Workshop in Bangalore last week). In particular, I would like to highlight (again) the obvious outcome:

“100% of the participants felt that WebAppML significantly speeds up their work”. This will always happen when we can raise the level of abstraction closer to the problem domain - and WebAppML obviously raises the level of abstraction.

I also like some other findings of the survey:

“100% of the participants prefer MetaEdit+ as a modeling tool”. I know that the new MetaEdit+ 5.0 makes the modeling experience even more enjoyable in many parts - in particular the reuse that is emphasized in this testing language too.

The slides describing the language and its user feedback are available from the ETSI test conference site.
