
DSM

Managing Co-evolution of Domain-Specific Languages and Models: Part 2 on adding

February 01, 2019 13:56:07 +0200 (EET)

Domain-Specific Languages and related generators are refined when the domain or development needs change. As this happens frequently, we describe in this article series practices for managing co-evolution of languages and models. Part 1 discussed updates with renaming. In this part 2 we focus on adding elements to the language, and demonstrate how the change is reflected in existing work.

Adding new elements to the modeling solution is usually safe as it does not affect existing models. Such a new element could be a new concept (abstract syntax added to the metamodel), notation as part of the concrete syntax, or semantics such as new generators or checks.

As soon as any of these elements are added in MetaEdit+, they can be immediately applied in all modeling editors, browsers, generators, etc. When using the multi-user version for collaborative modeling, such additions can be automatically provided to all language users. This is important as it ensures that everyone is using the up-to-date language version.

However, if the addition is related to semantics and constraints, its influence on the existing models needs to be checked, as there may be models that do not satisfy the new constraint. Language engineers don't need to make language updates blindly without considering the influence on existing models, as the Type Browser gives access to all the instances of a given type. Language engineers can also use the Info Tool for traceability among elements in the language definition.

If the change is contextual (i.e. models cannot be updated automatically but require input from the model creator to make the right change based on the model data), the language engineer can create a model checker with MERL that notifies language users of the changes needed. This check can be run separately when moving to the new metamodel, or it can highlight the elements requiring an update via a symbol definition as illustrated below: a conditional symbol element is added that shows red text when an element has illegal values. The same could be applied to other kinds of changes too, such as models with illegal objects, relationships, roles, ports, submodels, non-uniqueness, etc.

Defining model update guidance

The figure below illustrates how this works in practice in the Diagram Editor. Here a temperature sensor previously accepted too high temperatures, and a new constraint has now been added restricting the value. The model checker in this example is run in two places to illustrate alternative possibilities: first, temperature sensors with too high a value are annotated directly in the symbol (giving the modeler the right context), and second, they are reported in the LiveCheck pane at the bottom of the window. Clicking a reported element selects it in the diagram, in its design context, so the developer can decide how to update the temperature in the given context.

Diagram Editor opened showing update needs within context
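The logic of such a contextual check can be sketched in plain Python. This is only an analogy to the MERL checker described above; the names (`MAX_TEMP`, the sensor dictionary) are illustrative assumptions, not MetaEdit+ API calls.

```python
# A minimal sketch of a contextual model check: flag values that violate a
# newly added constraint so the modeler can decide the right fix.
# All names here are illustrative, not part of MetaEdit+ or MERL.

MAX_TEMP = 85  # the newly added constraint on temperature values

def check_sensors(sensors):
    """Return a warning per sensor whose stored value violates the
    new constraint; the fix itself is left to the model creator."""
    warnings = []
    for name, temp in sensors.items():
        if temp > MAX_TEMP:
            warnings.append(f"{name}: temperature {temp} exceeds limit {MAX_TEMP}")
    return warnings

issues = check_sensors({"EngineSensor": 120, "CabinSensor": 40})
```

Run against existing models, such a check reports only the elements needing attention, mirroring the LiveCheck pane behavior described above.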

Finally, if the type of model update is not contextual and thus can be automated, you may use the API to update models programmatically. Our experience, backed also by research, indicates that this option is less common than one might expect at the outset.
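A non-contextual update of the kind mentioned above can be sketched as a batch operation. The `clamp_temperatures` function and its data shapes below are hypothetical stand-ins; MetaEdit+ exposes its own API whose actual calls differ.

```python
# Hedged sketch of an automatable, non-contextual model update: every
# out-of-range value is clamped to the new limit. Safe to automate because
# no modeler judgment is required. Data shapes are illustrative.

def clamp_temperatures(elements, limit):
    """Batch-update all elements whose temperature exceeds the limit;
    return the names of the elements that were changed."""
    updated = []
    for elem in elements:
        if elem["temperature"] > limit:
            elem["temperature"] = limit
            updated.append(elem["name"])
    return updated

models = [{"name": "S1", "temperature": 120}, {"name": "S2", "temperature": 60}]
changed = clamp_temperatures(models, 85)
```

Such a script would be run once when migrating models to the new metamodel version.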

DSM

Managing Co-evolution of Domain-Specific Languages and Models: Part 1 on renaming

January 08, 2019 15:11:44 +0200 (EET)

Domain-Specific Languages and related generators are refined when the domain or development needs change. As this happens frequently, we describe in this article series practices for managing the evolution of modeling languages along with the models that have already been created.

The evolution is addressed from two angles. The first angle is the nature of the change: are we adding, renaming/updating, or deleting things from the modeling solution? The second angle is the part of the modeling language being changed: its abstract syntax, concrete syntax, or semantics. In this first blog entry, we focus on renaming and updating the metamodel. In part 2 we address language extensions.

Renaming or updating language elements in the metamodel has an effect on the concrete syntax and often on the semantics too. Moreover, it always has an effect on existing models. In MetaEdit+, updating the language definition (aka updating the metamodel) is automatically reflected in all parts of the language definition, such as the binding rules on legal connections, constraints, submodel structures, notational elements, dialogs, and most importantly existing models. This enables fast and easy language updates without worrying that something breaks or that existing models no longer open.

The figure below illustrates this using the Digital watch from the evaluation version as an example. When a ‘State’ object is renamed to ‘DisplayState’, its bindings with relationships and roles, its constraints (like submodels and unique names), and its notation are updated automatically.

Update in metamodel reflects to notation, checks, dialogs, toolbar etc.

Even more importantly in terms of evolution, all models where State existed are automatically updated to follow the new metamodel (see the figure below, where ‘DisplayState’ is now shown in the sidebar, property sheet, and status bar).

Models updated automatically when metamodel changes
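Why the rename can propagate automatically is easy to illustrate: if model elements reference the type object itself rather than its name as text, one rename in the metamodel is visible everywhere. The toy classes below are an illustration of this principle only, not MetaEdit+ internals.

```python
# Minimal illustration of rename propagation: instances reference the type
# object by identity, not by its name string, so renaming the type once
# updates every place the name is shown. Not MetaEdit+'s actual design.

class ObjectType:
    def __init__(self, name):
        self.name = name

state = ObjectType("State")
# Elements in existing models point at the type object, not the string.
instances = [{"type": state, "id": i} for i in range(3)]

state.name = "DisplayState"  # one rename in the "metamodel"
labels = [inst["type"].name for inst in instances]  # all reflect the rename
```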

The only exception to the automated update is related to generators, which are potentially used to produce identifiers, conditions for notational symbols, checks, or the generated code. This is because the same term could also be used in other parts of the language definition; e.g. there could be a role or property called State too. So, when renaming or updating the metamodel, you should use Advanced Find… provided by the Generator Editor to show all possible places to be updated. The Generator Editor below shows all generators accessing the renamed element.

Updating generators after renaming language element
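The reason generators are the exception is that they refer to types by name as text, so a search like the one above is needed. A minimal sketch of that kind of reference scan, with made-up generator snippets:

```python
# Generator scripts mention types by name in their text, so a rename is not
# propagated automatically; the occurrences must be located by search.
# Generator names and contents below are invented for illustration.

def find_references(generators, type_name):
    """Return the names of generators whose text mentions the type."""
    return [gname for gname, text in generators.items() if type_name in text]

generators = {
    "Autobuild": "foreach .State { ... }",
    "Docs": "report on Display elements",
}
hits = find_references(generators, "State")  # generators needing an update
```

A real tool would of course match whole identifiers rather than substrings; this sketch only shows the idea.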

Renaming a property can be a special case if the language reuses the same property definition in many places in the metamodel. If you want to rename the property in one case only but leave the other uses unchanged, you may just rename the given local name. This is in fact one way to establish language integration: reusing the same type and its values, yet naming the property differently based on the type/language in which it is applied.

The language definition features of MetaEdit+ are developed so that languages can evolve along with the domain, and language engineers can easily make changes knowing that existing models will remain available and open for all language users.

DSM

Industrial Experiences on Domain-Specific Modeling: Panel summary

November 17, 2016 16:21:37 +0200 (EET)

DSM'16

The 16th workshop on Domain-Specific Modeling took place at SplashCon, Amsterdam. Papers and presentations are available online. This year we also had a panel with industry participants sharing their experiences on Domain-Specific Modeling. The full panel summary is available as a PDF.

I was particularly happy to hear about the 5x+ productivity gains reported, as they are in line with what we have seen from others: e.g. those reported in the Modelsward 2016 paper on industry experiences, and what the company I work for can share publicly from customer cases.

One of the panelists put it perhaps most nicely: I have not seen a comparable technique providing similar productivity improvements. Therefore the real question is not whether to apply MDE/DSM/DSL, but how to introduce it. A large portion of the panel was spent discussing how to handle obstacles and follow the same success path as the panelists. If you are interested in speeding up development activities, I can recommend reading the panel summary.

DSM

Modeling for Internet of Things devices

September 18, 2015 15:32:58 +0300 (EEST)

I've been working a bit in the area of the Internet of Things, lately with the Thingsee IoT device. This device provides a wide variety of sensors (location, temperature, humidity, pressure, light, speed, etc.) along with wireless connectivity. For my purposes the size and complexity of the applications grew quickly, so it felt natural to raise the level of abstraction from the code. I also needed full access to the device API, such as displays, handling multiple apps (called purposes), and linking between purposes, otherwise available only by coding manually.

During a weekend I created a language covering most of the sensors, along with a code generator producing JSON code. The benefits quickly became visible: from the model shown below (harbor monitoring), 370 lines of code are generated to be run on the device. For the larger apps created so far, over a thousand lines of code are generated.

Winter docking Internet of Things application
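The generator idea can be sketched as a small walk over model elements that emits configuration code. The model structure and output format below are invented for illustration; they are not the actual Thingsee profile format.

```python
# Toy model-to-code generator in the spirit described above: walk the app
# model and emit device configuration as JSON. The model layout and the
# output schema are assumptions for illustration, not the real format.

import json

def generate(app):
    """Produce configuration text for a small sensor-app model."""
    profile = {
        "name": app["name"],
        "senses": [{"sensor": s["sensor"], "threshold": s["threshold"]}
                   for s in app["states"]],
    }
    return json.dumps(profile, indent=2)

model = {"name": "HarborMonitor",
         "states": [{"sensor": "temperature", "threshold": 30}]}
code = generate(model)
```

Even this toy version shows the leverage: a few model elements expand into many lines of device-ready text.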

The video below illustrates the idea: in 2 minutes an application for speed tracking is created and about 100 lines of code are generated.

Video on Internet of Things example

I then extended the language to cover more details and to add features that are useful beyond plain code generation. For example:

  • Safety: the language warns me when I am creating an application that is dangerous or leads to loss of property (e.g. a temperature limit set so high that the device should not be used there)
  • The language also prevents generating apps that are illegal or incomplete (e.g. conflicting triggers; entering impossible values for pressure, luminance, etc.)
  • The language also warns if something is incomplete (e.g. unconnected states, no start state)
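The completeness checks listed above can be sketched as simple validations over the model's states and transitions. The data layout is an illustrative assumption, not the language's actual metamodel.

```python
# Sketch of two completeness checks from the list above: a missing start
# state and unconnected states. Model structure is invented for illustration.

def validate(app):
    errors = []
    # Check 1: the app must have a start state.
    if not any(s.get("start") for s in app["states"]):
        errors.append("no start state")
    # Check 2: every state must appear in at least one transition.
    connected = ({t["from"] for t in app["transitions"]} |
                 {t["to"] for t in app["transitions"]})
    for s in app["states"]:
        if s["name"] not in connected:
            errors.append(f"unconnected state: {s['name']}")
    return errors

app = {"states": [{"name": "Idle"}, {"name": "Track", "start": False}],
       "transitions": [{"from": "Idle", "to": "Idle"}]}
problems = validate(app)
```

Running such checks before generation is what prevents incomplete apps from reaching the device.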

If you want to play with this language, feel free to download it along with the example models (mec) and (mxs). To use it you need the MetaEdit+ modeling and code generation tool, available at: http://www.metacase.com/download/. After you start and log in to MetaEdit+, import the .mec and .mxs files and you should see the sample apps too. Now you can start creating IoT apps quickly and easily.

I also included my applications for monitoring a boat in the harbor, during winter docking, and while sailing. You may also freely extend the language and generator with other services and connections to other tools.

DSM

Modeling for Safety Engineering

January 12, 2015 12:27:41 +0200 (EET)

Related to the released EAST-ADL support in MetaEdit+, I spent last month working a bit with safety engineering following standards like ISO 13849-1 and ISO 26262, which focus on the development of software for electrical and/or electronic (E/E) systems. Rather than creating models for safety analysis from scratch, we applied the already existing architecture models. As a result, safety engineers can choose the nominal architecture, or part of it, and translate it to an equivalent safety model. In MetaEdit+ this model-to-model transformation takes an existing functional architecture model and transforms it into a dependability model and a number of error models, depending on the size of the architecture chosen. Safety engineers can then adapt the model for various safety cases and run safety analysis by calling the desired analysis tool.

Due to a customer request I applied the Sistema tool, but it would work similarly for other tools too. Tool integration was straightforward after creating a generator that takes the dependability model and related error models, produces Sistema's project file, and finally opens the tool on it. The analysis tool then already has the safety functions, subsystems, blocks, channels, etc., and can be used to run analysis with different options.
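The generator described above can be sketched as a model walk that writes a project file for the analysis tool. The XML element names below are invented for illustration and do not follow Sistema's real schema.

```python
# Sketch of the export generator idea: traverse the dependability model and
# emit an XML project file. Element and attribute names here are assumptions,
# not Sistema's actual project format.

import xml.etree.ElementTree as ET

def to_project_xml(model):
    """Build a project file from safety functions and their blocks."""
    root = ET.Element("project", name=model["name"])
    for sf in model["safety_functions"]:
        fn = ET.SubElement(root, "safetyFunction", name=sf["name"])
        for block in sf["blocks"]:
            ET.SubElement(fn, "block", name=block)
    return ET.tostring(root, encoding="unicode")

xml_text = to_project_xml({"name": "Press",
                           "safety_functions": [{"name": "EStop",
                                                 "blocks": ["Sensor", "Logic"]}]})
```

The last step of the real integration, launching the analysis tool on the written file, would follow the same pattern with a process call.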

This integration provides several benefits, including:

  • It ensures that safety analysis is done for the intended/designed architecture
  • It makes safety analysis faster, as the analysis is largely automated
  • It reduces error-prone routine work

What makes this even more interesting is the feedback back to the architecture models. First, models in MetaEdit+ could already include component-specific performance levels, permitting even more automated calculation of reliability values. In fact, my colleague even fed performance level annotations back to MetaEdit+ by calculating ASIL values (as used in automotive). This kind of extension naturally called for modifying the language, which in our case was EAST-ADL.

Another interesting direction is updating the model data and annotating it based on the analysis. In the sample screenshot below I've tried to illustrate this by highlighting the influenced blocks in blue. If the analysis tool has open interfaces, then MetaEdit+ and its integration mechanisms (command line, XML, API, generator-based parsing) can utilize them.

Error Model sample
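The feedback direction described above, reading analysis results and marking the affected model elements, can be sketched as follows. The data shapes are assumptions for illustration only.

```python
# Sketch of annotating model elements from analysis results: attach a
# highlight flag to every block the analysis tool flagged. Data shapes
# are invented; a real integration would parse the tool's XML output.

def annotate(model_blocks, flagged_names):
    """Mark flagged blocks and return the names that were highlighted."""
    for block in model_blocks:
        block["highlight"] = block["name"] in flagged_names
    return [b["name"] for b in model_blocks if b["highlight"]]

blocks = [{"name": "Valve"}, {"name": "Pump"}, {"name": "Logic"}]
highlighted = annotate(blocks, {"Pump", "Logic"})
```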

DSM

DSM'14: Call for Papers and Demonstrations

June 18, 2014 11:05:19 +0300 (EEST)

DSM'14

This year the workshop on Domain-Specific Modeling will be held at SPLASH conference, Portland, Oregon, 21st October.

The 14th workshop on DSM continues to emphasize the _work_ part of the workshop, letting participants leave with ideas about what to do next to improve the field. We are looking in particular for experience papers and demonstrations, and also early-stage research in the form of position papers. The call for contributions is available at: http://www.dsmforum.org/events/DSM14/.

DSM

Results of LinkedIn Poll: What is the most challenging part when starting to define your own modeling languages and generators?

May 07, 2014 16:10:21 +0300 (EEST)

Thank you for the comments and votes in the poll. See the results below (I took the screenshot from the original poll (http://lnkd.in/dhKy_TH) because LinkedIn stated that polls would no longer be available after mid-May). Clearly the two most common challenges deal with identifying the domain and defining the languages. I voted for the first, partly perhaps because my work is often related to the early phases of language definition, dealing with issues like what to model and what to leave out.

Poll results

I should reveal that the same poll in a group on Domain-Specific Modeling (DSMForum) led to different results, as the actual language and generator development was not seen as a big issue there. For 50% in DSMForum the most challenging part is identifying a suitable domain, and 27% saw tool integration as the biggest challenge. I think this could be because participants in DSMForum have perhaps more experience in creating languages and generators, whereas in the MDA group there are perhaps more people interested in using ready solutions than in creating their own. For the same reason the number of participants in the MDA group is bigger than in DSMForum, which includes mostly language engineers.

The poll also has a number of shortcomings if we evaluate the research method in terms of data collection and the small number of answers, but as others commented, it reveals something interesting that would perhaps deserve more detailed and better-planned research. Having some demographics or background data could also help analyze the differences, like those mentioned in the comments where Rafael finds it hard to identify a domain for a DSL, whereas Jens thinks that is perhaps the last thing to vote for.

DSM

Scaling to large: > 100.000 model elements

March 25, 2014 15:18:34 +0200 (EET)

How about scalability? From time to time I hear this question when showing various examples of domain-specific modeling and code generation. It is also a somewhat hard question to answer, as scalability can mean different things: the size of a diagram, the number of model elements, the depth of model hierarchies, the number of languages used in parallel, the number of concurrent engineers, and obviously the speed of the tool when working at "large scale". I recorded a short session opening and working with something that can be called large: tens of different languages, thousands of diagrams, and hundreds of thousands of model elements.

The video shows how MetaEdit+ performs: opening and working in a large project looks pretty much the same as working in a small project.

The tool is obviously one important part of providing scalability, but perhaps the right question is: how do we define modeling languages that acknowledge scalability?

DSM

On extending modeling languages based on user feedback

March 03, 2014 16:17:57 +0200 (EET)

Based on my experience, the best ideas for the language and generators often come when trying them out in typical development cases - in contrast to the initial language definition (aka creating a plain metamodel). By best I mean what language users appreciate the most. This lesson was recently confirmed again when working with a relatively large modeling language. By large I mean that the number of types (metamodel elements) is close to the number of types in languages like SysML or UML.

In this language case, there are in fact many different sub-languages, and different engineers are interested in working on certain views and sub-languages only (e.g. Hardware, Functions, Timing, Events, Features, Failure behavior, etc.). In fact, I can't see that a single engineer would really need all the parts of the language for his job. Still, for this domain in automotive, it is obviously necessary to support collaboration among team members and provide integration among the various views and sub-languages.

To give an example, we (2 language engineers) had defined modeling support to specify, check, and view allocations between functions and hardware. Usually both of these are graphical models, but for allocations some people prefer diagrams and others a matrix. With MetaEdit+ we support both and have various checks (like that the same function is not allocated several times) along with generators (like for AUTOSAR-style runnables, etc.). Later, when the language was used in a bigger case, we were told that a good approach would be to illustrate the allocated functions directly in the HW architecture. We had never thought about that - nor had the original language spec - but after 30 minutes' work with the example we had to agree: it makes a lot of sense and nicely illustrates graphically the integration of different views! See the screenshot below showing HW components and how various logical functions are allocated. This visualization support can be turned on/off as needed.

visualizing allocation
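The allocation check mentioned above (the same function must not be allocated several times) reduces to a simple multiplicity check over the allocation pairs. The data layout below is illustrative, not the language's metamodel.

```python
# Sketch of a multiplicity check: report functions allocated to more than
# one hardware component. Function and component names are invented.

from collections import Counter

def duplicate_allocations(allocations):
    """allocations: list of (function, hw_component) pairs.
    Return functions appearing in more than one allocation."""
    counts = Counter(fn for fn, _ in allocations)
    return sorted(fn for fn, n in counts.items() if n > 1)

dups = duplicate_allocations([("BrakeCtrl", "ECU1"),
                              ("BrakeCtrl", "ECU2"),
                              ("Lights", "ECU1")])
```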

Things obviously didn't stop here, as we immediately also tested options to hide the ports when showing allocations, so that the graphical model shows only the information considered relevant: allocations. Having now used this visualization support to meet language users' needs, we could do similar visualizations for all other parts of the language, like showing the features being realized in a function, all functions that satisfy a given requirement, the safety cases being covered per function, etc. Rather than rushing back to edit the language definition (metamodel) in MetaEdit+, we stopped and followed our experience: let's first see what kinds of visualization aids/views are needed in practical work, and then provide support for those. In other words, let's build the language based on the needs of practice - not on everything that could be done. The particularly nice part of MetaEdit+ is that you can refine the languages while they are being used, and already existing models update to the new language version and features ... just like in the above case of visualizing function allocations in the HW architecture.

DSM

How to Compare Language Workbenches

December 09, 2013 17:49:24 +0200 (EET)

Several ways have been proposed and applied to compare Language Workbenches (see a review of the work done). Language Workbench Challenge 2014 continues the tradition of inviting tools to be presented showing their solution to the challenge: implementing the support for the given domain-specific language. Past editions of LWC have followed the same pattern in 2011, 2012 and 2013.

LWC is great because it allows people to see different tools at the same time and how they support the given case. Unfortunately its format is not completely suitable for comparison. One reason is that not all participating tools have implemented all the tasks. Secondly, the effort to implement the solution has not been reported (man-hours, number of persons involved, expertise needed, etc.), and I would expect that to be of interest to many. For example, in the 2011 challenge only one participant and tool actually showed live how the implementation is done. Third, LWC has focused on building a partial language solution from scratch, whereas in reality - similarly to software development - most work deals with further developing and maintaining the tooling while it is already being used. Fourth, there is hardly ever only one person both developing and using the language, and I was particularly happy to see that LWC 2014 extends the scope to scalability and collaboration (larger teams, larger models, several language engineers working together). These issues are at least partly demonstrated in a 9 min video showing how MetaEdit+ approaches collaboration.

A single event or study obviously can't address all aspects of tool comparison, but LWC brings perhaps the largest number of tools to the table. Hopefully there will be many submissions. Visit http://www.languageworkbenches.net/ for the submission guidelines.