
DSM

History and Family Tree of Modeling Languages

April 16, 2014 21:58:20 +0300 (EEST)

I had the pleasure of giving a keynote at the Code Generation Conference 2014 last week and presenting my views on the business cases for modeling and generators. Since several people have asked for a higher-resolution version of the family tree of modeling languages (which also describes a bit of the history of the languages), I have made it available here. The attached figure opens into a larger PNG file. I created it just for this talk (Tolvanen, J-P., The business cases for modeling and generators, keynote, Code Generation Conference 2014, Cambridge, UK, 2014).

history of modeling languages, family tree of modeling languages

Obviously many relevant languages are still missing and there may also be errors. For those of you who would like to edit it, please find here a .mec file that you can import into your copy of MetaEdit+ to start editing the diagram.

(Those of you who would like to change the family tree modeling language used to create the above diagram will also find the metamodel definition of the language in the same .mec file. So you may also change the language used to create the diagram... ;-)

DSM

Scaling to large: > 100,000 model elements

March 25, 2014 15:18:34 +0200 (EET)

How about scalability? From time to time I hear this question when showing various examples of domain-specific modeling and code generation. It is also a bit of a hard question to answer, as scalability can mean different things: the size of a diagram, the number of model elements, the depth of model hierarchies, the number of languages used in parallel, the number of concurrent engineers, and obviously the speed of the tool when working at this "large scale". I recorded a short session of opening and working with something that can be called large: tens of different languages, thousands of diagrams, and hundreds of thousands of model elements.

The video shows how MetaEdit+ performs: opening and working in a large project looks pretty much the same as working in a small project.

The tool is obviously one important part of providing scalability, but perhaps the right question is: how do you define modeling languages that acknowledge scalability?

DSM

On extending modeling languages based on user feedback

March 03, 2014 16:17:57 +0200 (EET)

In my experience, the best ideas for a language and its generators often come when trying them out in typical development cases - in contrast to the initial language definition (i.e. creating the plain metamodel). By best I mean the ideas that language users appreciate the most. This lesson was recently confirmed again when working with a relatively large modeling language. By large I mean that the number of types (metamodel elements) is close to the number of types in languages like SysML or UML.

In this language case there are in fact many different sub-languages, and individual engineers are interested in working on certain views and sub-languages only (e.g. Hardware, Functions, Timing, Events, Features, Failure behavior). In fact, I can't see that a single engineer would really need all the parts of the language for his job. Still, in this automotive domain it is obviously necessary to support collaboration among team members and provide integration among the various views and sub-languages.

To give an example, we (two language engineers) had defined modeling support to specify, check, and view allocations of functions to hardware. Usually both of these are graphical models, but for allocations some people prefer diagrams and others a matrix. With MetaEdit+ we support both and have various checks in place (like ensuring that the same function is not allocated several times) along with generators (like for AUTOSAR style runnables). Later, when the language was used in a bigger case, we were told that it would be good to show directly in the HW architecture which functions have been allocated. We had never thought about that - nor had the original language spec - but after 30 minutes of work with the example we had to agree: it makes a lot of sense and nicely illustrates graphically the integration of different views! See the screenshot below showing HW components and how various logical functions have been allocated. This visualization support can be turned on and off as needed.

visualizing allocation

Things obviously didn't stop there, as we immediately also tested options to hide the ports when showing allocations, so that the graphical model would show only the information considered relevant: the allocations. Having now used this visualization support to meet language users' needs, we could do similar visualizations for all other parts of the language, like showing the features realized by a function, showing all functions that satisfy a given requirement, showing the safety cases covered per function, etc. Rather than rushing back to edit the language definition (metamodel) in MetaEdit+, we stopped and wanted to follow our experience: let's first see what kinds of visualization aids/views are needed in practical work and then provide support for those. In other words, let's build the language based on the needs of practice - not on everything that could be done. The particularly nice part of MetaEdit+ is that you can define the languages while they are being used, and already existing models update to the new language version and features... just like in the above case of visualizing function allocations in the HW architecture.
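As a side note, the "function is not allocated several times" check mentioned above is conceptually very simple. Here is a minimal, language-neutral sketch of such a rule in Python - it is not how MetaEdit+ defines its constraints, and all the element names are made up - just to show the kind of rule a modeling language can enforce:

# A language-neutral sketch of the "a function may be allocated to at most
# one HW component" rule; MetaEdit+ expresses such rules with its own
# constraint and generator facilities, so this is illustration only.

from collections import Counter

# Hypothetical allocation model: (function, hw_component) pairs.
allocations = [
    ("BrakeControl", "ECU1"),
    ("LightControl", "ECU2"),
    ("BrakeControl", "ECU3"),   # duplicate allocation -> should be reported
]

def check_unique_allocation(allocs):
    """Return the functions that are allocated to more than one HW component."""
    counts = Counter(func for func, _ in allocs)
    return [func for func, n in counts.items() if n > 1]

for func in check_unique_allocation(allocations):
    print(f"Error: function '{func}' is allocated more than once")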

DSM

How to Compare Language Workbenches

December 09, 2013 17:49:24 +0200 (EET)

Several ways have been proposed and applied to compare Language Workbenches (see a review of the work done). Language Workbench Challenge 2014 continues the tradition of inviting tools to be presented, showing their solution to the challenge: implementing support for a given domain-specific language. Past editions of LWC followed the same pattern in 2011, 2012 and 2013.

LWC is great because it allows people to see different tools at the same time and see how they support the given case. Unfortunately its format is not completely suitable for comparison. One reason is that not all participating tools have implemented all the tasks. Secondly, the effort to implement the solution has not been reported (person-hours, number of persons involved, expertise needed, etc.), and I would expect that to be of interest to many. For example, in the 2011 challenge only one participant and tool actually showed live how the implementation is done. Third, LWC has focused on building a partial language solution from scratch, whereas in reality - similarly to software development - most work deals with redeveloping and maintaining the tooling while it is already being used. Fourth, there is hardly ever only one person both developing and using the language, and I was particularly happy to see that LWC 2014 extends the scope to scalability and collaboration (larger teams, larger models, several language engineers working together). These same issues are at least partly demonstrated in a 9-minute video showing how MetaEdit+ approaches collaboration.

A single event or study obviously can't address all aspects of tool comparison, but LWC brings perhaps the largest number of tools to the table. Hopefully there will be many submissions. Visit http://www.languageworkbenches.net/ for the submission guidelines.

DSM

Working together with the multi-user version of MetaEdit+

October 28, 2013 13:56:52 +0200 (EET)

MetaEdit+ is available in single-user and multi-user versions. The differences between these versions are typically not visible to users: all modeling editors, browsers and even metamodeling tools look and behave the same. The main difference is simply that in the multi-user version several users can work on and edit the same models (or metamodels) at the same time.

To illustrate the multi-user capabilities, I recorded a session running three MetaEdit+ clients on a single machine (with that single screen recorded). Using these clients I then edited the same models and metamodels at the same time. The resulting 9-minute video is available on the MetaEdit+ YouTube channel.

The video not only demonstrates how the same diagram can be edited at the same time, but also how several language engineers can change the metamodel (concepts, rules, symbols, generators) at the same time and how the metamodel update becomes available to the other members of the team. As you can see from the video, there is actually very little to show, as all the work among the team members simply gets integrated.

So perhaps it is easier to explain what team members do not need to do:

  • No need to decide what to check out/check in, and perform the related actions
  • No need to diff/compare to see what others have been doing
  • No need to manually merge with the work others have done
  • No need to wait until one language engineer gets all the metamodel updates done
  • No need to find and load the latest metamodel to use
  • No need to manually update current models to the new metamodel

In the multi-user version, people can work together creating and editing the same models (or metamodels), and MetaEdit+ aims to make that teamwork seamless.

DSM

Program for Graphical Modeling Language Development 2013

June 03, 2013 14:34:17 +0300 (EEST)

The program and proceedings for GMLD'13 have been published. This year the accepted papers deal with language design, in particular concrete syntax, metamodel changes, metamodel metrics, and advances in tools. As in the past, this workshop also includes a "work" part, as there is space for group work on relevant topics.


Hope to see you in Montpellier! The city may be pretty full, not only because ECMFA, ECOOP, and ECSA take place at the same time, but also because the famous bicycle race (Le Tour de France) passes through the city…

DSM

Choosing the Best Level of Abstraction for Your Domain-Specific Language

May 28, 2013 10:11:07 +0300 (EEST)

Aligning a language more closely with the problem to be tackled - regardless of whether the language is internal or external, or represented as graphics, text, a map, a matrix, etc. - will almost automatically offer major improvements. Empirical studies (1, 2, 3, 4) have reported improvements, for example, in error detection and prevention, code quality, productivity, maintainability, and in communication and understanding of the system being developed. The biggest improvements, however, are achieved when the level of abstraction is raised as high as possible. For me this means that the appropriate level of abstraction for a language is the problem domain itself, or at least as close as possible to the problem domain - the area of interest that the language targets.

I have the pleasure of joining the workshop on language design and implementation to defend my claim that the best level of abstraction for a language is as high as possible. I have nothing against DSL-ish ideas like extending current programming languages, embedding a DSL in a host language, or using better naming for API calls, but I don't see them raising the level of abstraction much above the implementation level. I'm afraid these efforts then don't provide significant benefits to development teams either.

At the workshop I plan to prove my claim by showing results from cases (partly public, like 4, 5, 6, 7, 8, 9, 10) that develop and test different types of software - such as consumer electronics, telecommunication, product lines, medical, and automation systems - with domain-specific languages, models and generators. These industry cases report 5-10 fold productivity increases. While many find such ratios hard to believe, they are quite easy to demonstrate by showing some of the languages in action with concrete case data. Obviously those cases target specific needs - as all DSLs do - and a good way to find out if domain-specific modeling languages could help in a particular situation is to try them out, e.g. by running a short workshop as a proof-of-concept.

The industry cases also show that when raising the level of abstraction closer to the actual problem domain, the traditional textual approach that dominates programming language design is not necessarily the most suitable one. Instead, the domain and the most "natural" way to express and find solutions in that domain should guide the language designers. For example, the graphical block languages of Simulink and LabVIEW have become the de facto way of working for engineers in signal processing and control engineering, spreadsheets are heavily used in accounting, telecom standards are specified in the MSCs of SDL, etc. A linear text representation can still be generated to feed the compiler and to integrate with existing libraries and code.

I can't avoid discussing tools, and I argue that the domain - the area of interest the language addresses - should again drive the language design, not the capabilities of a particular implementation technology or tool. Still, DSLs can't survive without infrastructure such as tooling support. Too often companies have started to build this infrastructure alongside their DSLs, only to find out a few years later that tool development is not their core business, and that building the tooling support took more time and effort than expected - remembering also that tools need to be maintained.

While metasystems, metaCASE tools, language workbenches, etc. can help in creating tooling support for the languages, such as editors, checkers and generators, most of them still require a significant amount of resources (for a comparison of the creation functionality in tools, see [12]). Since maintaining the languages and the supporting tooling is a much bigger effort than their initial creation, perhaps at the workshop we can also discuss what comes after language design. For example, at MetaCase we have tried to implement the MetaEdit+ tool so that it also supports language maintenance: it automatically updates previously made models to a newer version of the language (so that work is not lost and editors always open and work), supports multiple concurrent language engineers (each focusing on different parts of the language family and generators), allows testing while the language is being defined, etc.
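To illustrate what I mean by models surviving language evolution, here is a deliberately simplified sketch in plain Python - it has nothing to do with how MetaEdit+ works internally, and the property names are invented - showing the idea that when the metamodel gains a new property, existing model instances simply pick up a default value instead of becoming unreadable:

# A generic sketch of models surviving metamodel evolution: when a new
# property is added to a type, existing instances get a default value
# rather than breaking. Illustration only; names are hypothetical.

metamodel_v2 = {"Function": ["name", "asil_level"]}   # "asil_level" added in v2

models = [{"type": "Function", "name": "BrakeControl"}]  # created with v1

def migrate(instances, new_metamodel, defaults):
    """Add any newly introduced properties to old instances with defaults."""
    for inst in instances:
        for prop in new_metamodel[inst["type"]]:
            inst.setdefault(prop, defaults.get(prop, ""))
    return instances

print(migrate(models, metamodel_v2, {"asil_level": "QM"}))
# -> [{'type': 'Function', 'name': 'BrakeControl', 'asil_level': 'QM'}]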

This year EC* 2013 combines ECMFA, ECOOP and ECSA, and I expect that domain-specific languages and models will be well covered from the modeling, programming and architecture points of view!

DSM

Generating full code for PLC (IEC structured text) from Domain-Specific Models

May 03, 2013 14:32:49 +0300 (EEST)

Last year at the Language Workbench Challenge I implemented a domain-specific modeling language for heating systems along with code generators producing the full code - integrated with Beckhoff's TwinCAT environment for execution. After updating my old computer at home to Windows 8, I had a chance to record the whole example, since TwinCAT runs only on 32-bit machines. A video on the MetaCase YouTube channel shows the whole path: from high-level domain concepts to code, integrated with TwinCAT for build and executed for simulation.
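To give a rough idea of what such a generator does - this is only an illustrative Python sketch with invented names, not the actual MetaEdit+ generator used in the example - domain concepts like heaters and sensors are turned into IEC 61131-3 structured text declarations and control logic:

# A minimal sketch of mapping heating-system domain concepts to IEC 61131-3
# structured text. Not the actual MetaEdit+ generator; all names and I/O
# addresses are hypothetical.

from dataclasses import dataclass

@dataclass
class Sensor:
    name: str        # e.g. "LivingRoomTemp"
    address: str     # hypothetical I/O address, e.g. "%IW0"

@dataclass
class Heater:
    name: str        # e.g. "FloorHeating"
    address: str     # hypothetical I/O address, e.g. "%QX0.0"
    sensor: Sensor   # sensor controlling this heater
    target_temp: float

def generate_st(heaters: list[Heater]) -> str:
    """Emit a structured text program: declarations plus a simple
    on/off control rule per heater."""
    decls, body = [], []
    for h in heaters:
        decls.append(f"    {h.sensor.name} AT {h.sensor.address} : INT;")
        decls.append(f"    {h.name} AT {h.address} : BOOL;")
        body.append(f"    {h.name} := {h.sensor.name} < {int(h.target_temp * 10)};  (* 0.1 C units *)")
    return ("PROGRAM Heating\nVAR\n" + "\n".join(decls) + "\nEND_VAR\n\n"
            + "\n".join(body) + "\nEND_PROGRAM\n")

if __name__ == "__main__":
    living = Sensor("LivingRoomTemp", "%IW0")
    print(generate_st([Heater("FloorHeating", "%QX0.0", living, 21.0)]))

The real generator naturally covers far more of the domain, but the principle is the same: the model carries the domain concepts and the generator owns the mapping to structured text.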

If you want to play with this example and check the languages and generators, you can find it in the evaluation version of MetaEdit+, along with a guide.

DSM

Reusing language concepts, modular languages

April 05, 2013 10:06:12 +0300 (EEST)

The optional task of Language Workbench Challenge 2013 opens up space for more advanced language designs: it emphasizes the modularity of languages and the possibility to reuse parts of the DSM solution, using a very typical scenario: one language for specifying logic and another for specifying layout.

Having now in my hands the LWC 2013 submission implemented in MetaEdit+, I played with the combined QL and QLS languages. QL stands for Questionnaire Language (see an earlier blog entry), used for defining questions and their sequential order, and QLS stands for Question Layout and Style, used for defining the visual layout of the questions. In the metamodel, implemented by my colleague, these languages (and their generators) are tightly integrated.

The combination of the languages allows creating different layout options for the same questions and their logic. Consider the examples below: the questions and question logic can be the same while the layouts differ - not only in visualization but also, for example, in how the questionnaire is split into different pages/steps. Naturally the logic can differ too, as support for the variability space is built directly into the languages.

Two different layouts for the same questionnaire

This kind of integration usually works better than keeping the logic and layout disconnected at design time or using model-to-model transformations. With this language implementation, developers using MetaEdit+ can work in parallel - some focusing on question logic and others on layout - while working seamlessly on the same questionnaire design information for both logic and layout. At any point in time either group can also generate and run the questionnaires to try them out. Integrated languages also enable better reasoning, checking and tracing among the design elements.
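To sketch the idea in code - this is not the LWC reference implementation nor the MetaEdit+ metamodel, just an illustrative Python outline with invented names and questions in the spirit of the questionnaire example - the logic (QL) and the layout (QLS) are separate descriptions that refer to the same question identifiers, so several layouts can be combined with one set of questions:

# Sketch of the QL/QLS split: questionnaire logic and layout/style are
# separate, linked only by question identifiers. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Question:               # QL side: logic
    qid: str
    label: str
    qtype: str                # "boolean", "money", ...
    visible_if: str = ""      # condition over other question ids

@dataclass
class Page:                   # QLS side: layout
    title: str
    question_ids: list[str] = field(default_factory=list)
    widget_overrides: dict[str, str] = field(default_factory=dict)

def render(questions: list[Question], pages: list[Page]) -> str:
    """Combine one set of questions (logic) with one layout into a plain
    text rendering; a real generator would emit e.g. HTML or GUI code."""
    by_id = {q.qid: q for q in questions}
    out = []
    for page in pages:
        out.append(f"== {page.title} ==")
        for qid in page.question_ids:
            q = by_id[qid]
            widget = page.widget_overrides.get(qid, q.qtype)
            cond = f"  [shown if {q.visible_if}]" if q.visible_if else ""
            out.append(f"  {q.label} ({widget}){cond}")
    return "\n".join(out)

questions = [
    Question("hasSoldHouse", "Did you sell a house?", "boolean"),
    Question("sellingPrice", "Price the house was sold for:", "money",
             visible_if="hasSoldHouse"),
]
one_page = [Page("House owning", ["hasSoldHouse", "sellingPrice"])]
two_pages = [Page("Step 1", ["hasSoldHouse"]),
             Page("Step 2", ["sellingPrice"],
                  widget_overrides={"sellingPrice": "slider"})]

print(render(questions, one_page))   # same logic, first layout
print(render(questions, two_pages))  # same logic, second layout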

Visit LanguageWorkbenches.net to see the submissions to the third challenge. The website also shows earlier years' submissions, allowing you to compare how the tools perform and implement the given tasks. I personally have not been involved in organizing these events (I just implemented one solution), but what would make me happy in the hopefully upcoming challenges would be language design tasks dealing with:

  1. Language evolution (so far at LWC, languages have been created from scratch)
  2. Model evolution when the DSL/DSM solution is refined/maintained (so far there has been no interest in maintaining models made with an earlier version of the languages, although this is what happens in practice)
  3. Multiple language engineers (there are often multiple language engineers defining the same language)
  4. Scalability: large models, multiple persons using the language, multiple persons modifying the language
  5. Different representations: not only graphical or textual, but also matrices, tables, and their mixtures

While this year looked more like a framework and runtime development challenge than a language development challenge (my colleague estimated that only 20% of the effort went to the language development part in MetaEdit+), perhaps even bigger differences among the language workbenches would be visible when implementing larger languages - integrated and obviously modular. Join LWC 2013 next week to see how all the solutions work.

DSM

Heavy use of generators: Single source, multiple targets

March 19, 2013 09:23:43 +0200 (EET)

When our consultants are involved in language development, whether assisting in the beginning, providing training or participating in modeling language implementation, we typically sign an NDA. And after signing the NDA our mouths are closed. That is natural since users of MetaEdit+ own the languages they develop - and hopefully we were able to support them.

Occasionally we come across cases that permit publishing some more details of the DSM solution, like what the language looks like or what kinds of artifacts are generated. The best cases are those where the language engineers and users are allowed to describe their experiences. For me, the use of Domain-Specific Modeling at Hofernet is particularly nice as they apply generators heavily. Hofernet IT-Solutions has created a domain-specific language targeting automation systems for fish farms. The FishFarm DSL uses domain concepts like ponds, feeders, water levels, etc. directly as language constructs. It is thus truly domain-specific, having a narrow focus and raising the level of abstraction. This DSM solution is also a prime example of making models work: the models provide a single source and the generators produce the rest.

An example model created with the Fish Farm DSL

First of all, after modeling the fish farm they generate the code for the automation system - running as PLC code on a specific platform. In addition, they also generate the UI application code so that the fish farm owner can use his touchscreen device to follow the status and control the operations of the fish farm.

While most companies would be very happy just with the possibility to generate production code from the high-level models, this is only the beginning with Hofernet's FishFarm DSL. Since the fish farm as a system also contains hardware, the generators also produce configurations for the devices in the network as well as hardware mappings. The generators produce the configuration for the web portal too.

A particularly notable part is the document generation, as documents are needed to install and maintain the system in operation. Thus they also generate wiring plans, lists of parts to be installed, and even the labels to be attached to the wiring closet.

use of code generators with Hofernet's Fish Farm DSL
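The structure behind "single source, multiple targets" can be sketched in a few lines of Python - purely for illustration, these are not Hofernet's actual generators or domain concepts - one model feeds several independent generators, each producing a different artifact:

# Purely illustrative sketch of "single source, multiple targets": one model
# (hypothetical feeders and ponds) feeds several independent generators.

from dataclasses import dataclass

@dataclass
class Feeder:
    name: str
    pond: str
    grams_per_feeding: int

model = [Feeder("Feeder1", "Pond A", 500),
         Feeder("Feeder2", "Pond B", 750)]

def gen_plc(feeders):
    """Emit a fragment of controller code (here just pseudo-ST text)."""
    return "\n".join(f"{f.name}_dose := {f.grams_per_feeding};" for f in feeders)

def gen_device_config(feeders):
    """Emit a network/device configuration as simple key=value lines."""
    return "\n".join(f"{f.name}.pond={f.pond}" for f in feeders)

def gen_wiring_plan(feeders):
    """Emit an installation document listing parts and labels."""
    rows = [f"- {f.name}: install at {f.pond}, label '{f.name}'" for f in feeders]
    return "Wiring plan\n" + "\n".join(rows)

# Every target artifact comes from the same model; nothing is maintained twice.
for generator in (gen_plc, gen_device_config, gen_wiring_plan):
    print(generator(model), end="\n\n")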

All in all, the FishFarm DSL by Hofernet is a prime example of doing model-driven development the right way: a single source in models generating multiple target formats. There is no need to maintain the same information in different places, check consistency, or diff and merge various models and formats. A more detailed description will be published in the special issue on DSM of the Journal of Software and Systems Modeling. An electronic version of the article is already available from the publisher.

People reading the case have said that it makes perfect sense in the domain of fish farm automation. Obviously the prime reason why it works so well is that the creators of the language have narrowed down the domain well and raised the level of abstraction. This is how good languages should work. If you are interested in reading cases from domains other than fish farms, check out an article reviewing 76 cases of DSM, a paper focusing on 20+ cases in product line companies, or try out the 4 industry cases described in the book on Domain-Specific Modeling.

The truth is that there exist thousands of narrow domains similar to fish farm automation systems - and I suspect that the one you are working in is one of them. If you would like to chat about a possible case, feel free to contact me (jpt _ metacase.com).