DSM-tech

MERL primer for openArchitectureWare Xpand users

August 31, 2009 12:00:08 +0300 (EEST)

MERL and openArchitectureWare's Xpand language are rather similar in approach and functionality. The main differences are in syntax and keywords, so if you know one, learning the other is easy. In the MetaEdit+ forum I've posted a quick primer on how to translate from Xpand to MERL. The primer is below, with comments after the respective row where necessary.

Xpand: SomeFixedText
MERL:  'SomeFixedText'

Xpand: «SOMECOMMAND»
MERL:  SOMECOMMAND
In Xpand commands are quoted, whereas in MERL fixed text is quoted.

Xpand: «DEFINE foo ... ENDDEFINE»
MERL:  Report 'foo' ... EndReport
In MERL, each report is defined on its own, not with several reports one after another in a single text. Each report is defined on a particular Graph type (or the supertype of all, Graph itself).

Xpand: «FILE Name + ".java" ... ENDFILE»
MERL:  filename :Name '.java' write ... close

Xpand: «EXPAND foo FOREACH Bar»
MERL:  foreach .Bar { subreport 'foo' run }
You don't need to define a subreport: you can just put the commands from foo in the {}. This often makes more sense, e.g. if you're not going to call foo from elsewhere.

foreach is for navigation from a graph to its elements. For navigation between elements use "do" or "dowhile" (which covers SEPARATOR):

Xpand: «EXPAND foo FOREACH Bar»
MERL:  do .Bar { subreport 'foo' run }

Xpand: «EXPAND foo FOREACH Bar SEPARATOR ","»
MERL:  dowhile .Bar { subreport 'foo' run ',' }

Xpand: «this.name»
MERL:  :name

Xpand: «LET ... AS var ..<var>.. ENDLET»
MERL:  variable 'var' write ... close ..$var..
For assignments with a single element on the right-hand side, you can use the shorter form: $var = 'foo', $var = :foo etc.

Xpand: «REM ... ENDREM»
MERL:  /* ... */

Xpand: «PROTECT ID ... ... ENDPROTECT»
MERL:  md5id ... md5block ... md5sum

Xpand: «CSTART ... CEND ...»
MERL:  filename 'bar.java' md5start ... md5stop ... merge .. .. .. close
The start and end sequences are specified in the filename command, since they will be the same for the whole file.

Xpand: «this.name.toUpper()»
MERL:  :Name%upper
For text manipulation, e.g. with Java in oAW, you can use MERL translators. Many are defined in _translators, such as %upper a-z A-Z, and you can define your own to convert any combination of characters, strings and regular expressions.
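
Putting several rows together: here's a minimal sketch of a complete report (the Bar object type, its Name property and the output content are my own assumptions) that writes one Java file per Bar object:

Report 'ExportBars'
  /* sketch: assumes $cr and %firstUpper are defined by the standard
     _translators subreport */
  subreport '_translators' run
  $ext = '.java'    /* the short assignment form from the LET row above */
  foreach .Bar
  {  filename :Name $ext write
        'public class ' :Name%firstUpper ' {' $cr
        '}' $cr
     close
  }
endreport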

DSM-tech

Re: Processing of MetaEdit Models with oAW

August 28, 2009 14:49:21 +0300 (EEST)

Heiko Kern has put together a great set of information on how to process MetaEdit+ models with oAW (the openArchitectureWare model transformation tools for Eclipse). The integration he's built is a great example of how easy it is to integrate MetaEdit+ with other tools:

  • You can export models or metamodels from MetaEdit+ as XML. The format is an extension of the open Graph eXchange Language standard, GXL, supported by over 40 tools.
  • You can quickly write a little generator to output your models in whatever XML or other text format you want.
  • MetaEdit+ can call other tools from its generators, e.g. for build integration.
  • Other tools can call MetaEdit+ with command-line parameters to specify a series of actions to run.
  • Other tools can call MetaEdit+ through its WebServices / SOAP API, to create/read/update/delete any data in models, and for control integration, e.g. to animate models for model-level debugging.
  • You can import models or metamodels as XML.
  • You can import text in any format and convert it to models via reverse engineering generators.

At last year's OOPSLA DSM workshop, Heiko had an article about his MetaEdit+ / Eclipse integration. We had a good discussion about it, in particular about his reasons for building it. His paper gave the impression that he wanted to use oAW rather than MetaEdit+'s own MERL generator language because he needed some specific features of oAW. It turned out, though, that he hadn't actually used MERL, and didn't realise that MERL and oAW's Xpand are actually very similar in approach and functionality.

MERL tends to be a little more succinct: here is the MERL generator to output simple Java classes for a UML Class Diagram, as in Heiko's example:

subreport '_translators' run

foreach .Class [UML]
{  filename id '.java' write
      'public class ' id ' {' $cr2

      do :Attributes
      {  '   ' :Visibility ' ' :Data type; ' ' :Name ';' $cr2

         '   public void set' :Name%firstUpper '(' :Data type; ' ' :Name ') {' $cr
         '      this.' :Name ' = ' :Name ';' $cr
         '   }' $cr2

         '   public ' :Data type; ' get' :Name%firstUpper '() {' $cr
         '      return this.' :Name ';' $cr
         '   }' $cr2
      }
      '}'
   close 
}
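
To make that concrete: for a UML class Person with a single private attribute name of type String (hypothetical model content), the generator above emits:

public class Person {

   private String name;

   public void setName(String name) {
      this.name = name;
   }

   public String getName() {
      return this.name;
   }

}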

Heiko's oAW Xpand code is 65% longer. Even ignoring the extra loop over all Class Diagrams that Heiko needs (MetaEdit+ offers that automatically in the UI or via the forAll:run: command-line parameter), the oAW version is still over 20% longer. The actual difference isn't that important: I'm sure both could be made shorter for this example, but the current code is typical of what is generally written. My point is that there's no real saving to be had by using Xpand instead of MERL. If your models are in MetaEdit+, use MERL; if they're in Eclipse, use oAW. Having integration is great, but if you can avoid needing it, that's even better.

Edit: Since MERL and Xpand are quite similar, I've made a MERL primer for Xpand users. I find it fascinating (and somewhat reassuring) that two languages created from scratch for the same task, and evolving for many years before encountering each other, should have chosen or converged on such similar approaches and solutions.

DSM-tech

Oslo Quadrant reviews

June 12, 2009 19:17:12 +0300 (EEST)

The May 26th CTP of Oslo includes the first public version of Quadrant, Microsoft's visual model editor. I've had my head down on other topics, so haven't had a chance to play with it yet, but here are some reviews from others.

Charles Young, Initial experiments

Microsoft is publicly committed to providing strong UML and XMI support in 'Oslo' and this is our first glimpse of what they intend. ... My initial experiments with LoadUML suggest that the tool is not yet fully functional. For example, it fell over the use of the xmi:type attribute on the uml:Model element. It failed to handle a type element of an ownedAttribute, and it didn't recognise the packageImport element. The error messages were not always very helpful and the tool is slow...
Initial experiments with LoadAssembly went a little more smoothly. Again, the tool is very slow, and can take several minutes to complete imports...
This early version of Quadrant has big problems with big models. It could, in some cases, take several minutes of 100% CPU usage to display the contents of a folder. Memory usage can also grow to monumental proportions...
All in all, don’t expect Quadrant or the new loaders to behave very well. This is very early preview code.

Charles did manage to get an XMI file and .NET assembly imported after some messing around, so it wasn't all bad. But those speed and memory problems aren't going to go away just by optimising code: scalability is something that must be architected in from the start.

Frank Lillehagen, Quadrant - First Impressions (I had the pleasure of meeting Frank in May 2001, when he was VP at Computas and responsible for the Metis modeling tool - first released in 1991!)

Quadrant's user interface is novel, uniform, and functional, but a bit cumbersome, and as an early preview it exposes a lot of the underlying wiring, nuts and bolts. Some functionality is well supported, such as customizing views and interacting with large models in multiple workpads. On the other hand, services for e.g. relationship modeling are poor. ... Visualization is the focus, more than modeling.
The layout of diagrams is partially automated, however when you close and reopen a diagram, it will revert to an automatic layout, not keeping the location changes you made manually the last time.
The support for key visual modeling concepts like relationships is not native, and limited. Quadrant does not recognize many-to-many relationships from entities, leading to diagrams ... where [half] the shapes are really relationships that ... should be shown as links.

From the pictures Frank posted, the existing models in Oslo break many principles of good modeling design. Having automatic layout that loses your manual layout changes pretty much rules out the chance of getting to know your way around your models, for any diagram more complex than a simple tree. And having no n-ary relationships is going to mean unwelcome hacking for both metamodelers and modelers: many relationships are binary, but certainly not all.

I'll continue to follow the progress of Quadrant with interest, but there seems little point getting my hands dirty with it yet. It's a shame that it seems to be back to square one for modeling at Microsoft - this is like the early versions of DSL Tools, and you'd think they'd have moved on in the 5 years since we first saw that. When we did a complete rewrite of MetaEdit (released 1993) to get the first version of MetaEdit+ (1995), there was rather a lot more that worked, and the scalability was already in place. The UI wasn't pretty, so we'll give Quadrant the thumbs up on that score, but the real worth of an application like this lies between the UI and the database. If Quadrant only works for binary relationships, autolayout, and small models, there's some major rework needed before it becomes a serious contender. Let's hope their bosses give them the chance to do it!

DSM-tech

The Model Repository (was: The CASE Repository)

March 16, 2009 17:19:43 +0200 (EET)

At last year's OOPSLA Workshop on Domain-Specific Modeling I had the pleasure and privilege of giving the keynote. One nice thing about keynotes is that you are given more freedom than for normal talks. I decided to take that to its limit by giving as my keynote a paper that was written 20 years earlier. As far as I could tell, nobody noticed :-). Actually, not wanting the audience to feel they had been fooled, I came clean near the start of the talk. All the same, the message in the talk was news to the lion's share of the audience.

In 1988 Dr. Richard J. Welke, with 26 years of computing experience and two CASE tool companies behind him, wrote a white paper on how model data should be structured, stored and manipulated -- irrespective of the modeling language. In a series of four tiny example model fragments he shows the problems we get into if we try to represent models using just binary or Entity-Relationship-Attribute concepts, or to store models using just files or relational databases.

With today's users of Microsoft or Eclipse modeling tools only just finding out these problems through their own painful experience, now seemed a good time to revisit that paper. Prof. Welke has kindly allowed me to make it available here: The CASE Repository: More than another database application.

The sad truth is that for my keynote, the starting position was worse than for this article 20 years ago. Back then, people knew that storing models in files didn't work, and most were trying to store them in relational databases. They knew that by default, things just existed on their own, and had association links to other things, either directed or undirected. Nowadays, people are trying to store model data in files again, and worse, in XML files -- with the in-built assumption that the world can be shoehorned into a tree structure, a hierarchy of strong containment aggregation.

Another difference between then and now is version control: back then it was obvious from databases that you couldn't talk about versions of individual pieces of data or tables, only of the whole set of inter-related data. The loss of fine granularity of versioning was a small price to pay for the gain in being able to support multiple simultaneous users working in the same set of data. Now, version control's "check out, edit, merge" has become the de facto poor man's multi-user capability -- so much so that few realise there could even be an alternative.

So, the only things I had to add to my keynote on top of the original paper were actually steps backward, hence the two titles: "The Model Repository: More than just XML under version control", or: "Domain-Specific Modeling: 20 years of progress?". Of course there has been progress, at least in the tools like MetaEdit+ or GME that have been around for a decade or more. For the others, all I can do is refer them to Welke's paper, and to the quote from the start of the tools chapter in our book on DSM :-)

"Those who cannot remember the past are condemned to repeat it."
- George Santayana, The Life of Reason (1905)

DSM-tech

MetaEdit and oAW recommended by Peter Bell

January 12, 2009 12:28:23 +0200 (EET)

It was a nice start to my New Year to see kind words about us in Peter Bell's blog. You might have had the pleasure of meeting Peter at OOPSLA or Code Generation, but what you may not know is what a "rock star" guru he is in the ColdFusion community. In our terms, ColdFusion is a DSL solution for dynamic web sites -- like Ruby on Rails, but since 1995. Peter is thus well acquainted with the issues of DSL evolution: in ColdFusion, and in his own DSLs around it. And, as anyone who has met him will attest, he's also a truly great guy to be with: rare in someone with that level of technical skill. (And I so wish I'd said this about him earlier, before having to blog about him saying nice things about us!)

Domain Specific Modeling: Key Vendors

"I'd strongly recommend anyone interested in DSM check out MetaEdit+ and openArchitectureWare.
The commercial (but affordable) MetaEdit+ is still for me the reference implementation for DSM tools and has solved problems such as being able to load old models after upgrading the metamodel that most other vendors haven't come to grips with yet. It doesn't work for every use case, but if it works for you it is a very mature offering and the team has a great deal of technical experience. If nothing else, check it out just to know the questions to ask any other vendor you may be considering!
openArchitectureWare is an open source set of tools for developing DSM solutions within Eclipse. It has really matured recently with better tooling for easily generating plug-ins for textual DSLs based on a meta-model and constraints and it also has good tooling for visual modeling. It isn't as polished or seamless as MetaEdit+ but with the team behind it, it is again one of the products you have to consider if you're going to be doing any DSM and if you use Eclipse it's probably a really strong contender."

openArchitectureWare (oAW to its friends) is a set of little languages for manipulating models: model checking (Check), transformation of models to text (Xpand), models to models (Xtend), and text to models (Xtext). It's most commonly used on Eclipse EMF models, but can work on other models too. The fascinating thing for me is the Xpand language. It's remarkably similar to our MERL, even though there is no shared history: neither party knew about the other until much later on.

MERL and Xpand are both true DSLs for the task of turning models into code or other text. That makes them IMHO much easier to use for creating and maintaining your generators, compared with more simplistic templating languages like JET or T4. It also means that once you have learned to use one, switching to the other is largely just changing the keywords and punctuation.

DSM-tech

DSL Tools Lab shows pain of constraints in C#

November 12, 2008 17:06:46 +0200 (EET)

Microsoft have a new DSL Tools Lab for beginners to learn how to use DSL Tools in one day. Full marks to Jean-Marc Prieur for starting off by explaining that modeling languages should be based on the problem domain, not the solution domain:

[Y]ou want to create a vertical language that is suitable for your business, and from the models that the language manipulates, generate the code for your business Framework. Nevertheless, because it is difficult to ensure that everyone who takes this training knows the professional tasks that are addressed by the targeted business Framework, we will settle for a horizontal (that is, technical) DSL.

The example language is thus a simple state machine (why is it always a state machine?!). The part that interested me is Part 3, Validation and coherence of the model. Let's look first at how to display the label for a transition. A transition specifies an Event that causes it, a guard Condition that must be true for it to be followed, and an Action that is taken when it is followed. All of these are written in the example as C# code snippets (tut tut :->). They want to display them as one single label, formatted as "Event [Condition] / Action".

Here's how to do it in DSL Tools: four pages of dense (8 pt font!) instructions and code, total 242 lines:

Formatting a transition label the DSL Tools way

And here's how to do it in MetaEdit+: three lines of MERL in the transition symbol's label:

:Event
if :Condition then ' [' :Condition ']' endif
if :Action then ' / ' :Action endif
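
For example (hypothetical property values): with Event click, an empty Condition, and Action reset(), that label renders as "click / reset()"; fill in Condition isArmed as well and it becomes "click [isArmed] / reset()".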

About 10-15% of the DSL Tools solution is taken up by what I'd say is a really bad idea: storing the calculated label as a property in the transition alongside the Event, Condition and Action -- effectively duplicating that information. This also means they need to be able to parse the syntax of the label if it is edited, to update the other three properties.

Next, let's look at how to ensure there is only one Initial State. First, the DSL Tools way: just into the 4th page of 8pt instructions and code, total 200 lines:

Ensuring only one Initial State the DSL Tools way

And here's how to do it in MetaEdit+: a simple Occurrence Constraint for Initial State in graphs of type State Diagram:

Initial State may occur at most 1 time

The lab goes on to mention several other possible constraints and checks, but doesn't show you how to implement them (presumably since the intention is to finish it in one day). Here they are, along with how to implement them in MetaEdit+ -- I just tried, and it took just over a minute to do all five:

  • Final States should not have an exit transition
    Normally this would just be in the Binding for Transition, which specifies for each role (end) of the relationship which objects it may connect to: Exit (State | Initial State), Enter (State | Final State).
    Another way would be a connectivity constraint: Final State may be in at most 0 roles of type Exit
  • Entry and Exit actions of the States must have non-blank code
    The Value Regex for the code property in the Actions would be .+
    Actually, I think this is an unnecessary requirement: many states have no entry or exit action.
  • Names of States must not be blank
    The same Value Regex as above.
  • Names of States must be valid C# identifiers
    Change the Value Regex to something like [a-zA-Z_][a-zA-Z0-9_]*
    Again, I think this is a poor requirement: much better is to allow modelers to type whatever seems sensible, and then turn it into a valid identifier when outputting. In MERL you can use translators to do this, e.g. :Name%var. Using %var translates the :Name property's non-identifier characters into underscores; you can create your own similar translators, e.g. "%upper a-z A-Z" translates lowercase characters to uppercase. (See the sketch after this list.)
  • Names of States must be unique in a State Diagram
    A Uniqueness Constraint for State Diagrams: Property "Name" in State must have unique values
    Note that this is per diagram: if you want States to have unique names in the whole repository, you can mark the Name property in State as unique.
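
Here's the translator sketch promised above: a minimal MERL fragment (the method naming and the reliance on the standard _translators subreport are my own assumptions) that lets modelers keep free-form State names while the generator outputs valid C# identifiers:

subreport '_translators' run

foreach .State
{  /* %var turns non-identifier characters in Name into underscores */
   'void Enter_' :Name%var '();' $cr
}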

Hopefully this gives some idea of how easy it is to add constraints and checks in MetaEdit+. With experience from making hundreds of modeling languages over the last 15 years, we have a pretty good idea of what kinds of constraints you actually need. That allows us to offer you simple ways to define them, without having to resort to hundreds of lines of hand coding for each. And there's no "customization cliff": if you have something we haven't thought of, you can just write it in MERL, or in whatever language you want via the API.

DSM-tech

Oslo roundup

October 29, 2008 20:10:06 +0200 (EET)

Overall, it looks like Oslo is primarily just a way to provide configuration information for Microsoft applications. If you want to model and generate Java, or something running on Linux, or a standalone Windows program, or embedded software, it's not for you. If you're building in-house IT applications in the Microsoft stack, it could be useful at some point in the future.

From Aali Alikoski, who ran the DSM workshop at XP with me, knows the DSL Tools well, and is at PDC:

"Oslo" initially seems like a disappointment for me. I continue wondering what is the relationship between it and DSL Tools. It seems that there is no relationship at all, which seems odd since Oslo has many of the same features as DSL Tools (a couple of years later), although in a more limited way in my opinion. It looks from outside like Don Box et al haven't synched their thoughts with Jack Greenfield, which they should have done.

Paraphrased from Martin Fowler, who got a pre-PDC showing of Oslo from Don Box et al., based on further input from Sam Gentile and what I can glean from playing around for an hour with the SDK:

Oslo has three main components:
  • a language (M) for textual DSLs, with three sub-languages:
    • MGrammar: defines grammars for Syntax Directed Translation [think BNF]
    • MSchema: defines schemas for a Semantic Model [think C structs]
    • MGraph: a textual data format for representing instances of a Semantic Model [think JSON]
  • a design surface (Quadrant) for graphical DSLs
  • the "Oslo" repository that stores semantic models in a relational database
You could define a semantic model with MSchema. You could then populate it (in an ugly way) with MGraph. A better way to populate it would be to build a decent DSL using MGrammar, and pass the result to the general parser mechanism that Oslo provides, along with some input in that DSL. The parser gives you a syntax tree, but that's often not the same as a semantic model. So usually you'll write code to walk the tree and populate a semantic model defined with MSchema. Once you've done this you can easily take that model and store it in the repository.

For me, that "write code" is the worst part. If Martin's right, that means there is no good way to just specify your DSL syntax with your schema, and have statements in that DSL saved to the repository. That seems a step back from what I'm used to in other tools like Xtext or MPS.

DSM-tech

Code generation performance comparison

April 17, 2008 19:55:21 +0300 (EEST)

I've just finished booking our trip to Code Generation 2008, whose program is now published. One talk I'm particularly looking forward to is Bran Selic's keynote on how DSM can meet the highest standards of performance needed for generated code in Quality of Service constrained applications. Our experience too is that while generated code cannot outperform the best handwritten code (how could it?), with DSM and domain-specific generators it does outperform the average handwritten code, so the overall speed of the whole system is better.

Thinking back to Code Generation 2007 reminded me that the performance of the code generator itself can also be an issue. One company I talked to had a modeling tool generating code as part of a nightly build. The problem was, they kept running out of "night". They were already up to a four-processor machine dedicated to running the generation, and that wasn't able to finish the job overnight. What really surprised me was that they only had a few hundred diagrams. I've seen organizations with many gigabytes of models - orders of magnitude more than this case - managing just fine.

An article in IEEE Software (Cuadrado & Molina, Sep-Oct 2007) examined the performance of the Eclipse MDD tools for code generation, comparing it with DSLs made with Ruby. They took the same UML model as a starting point, and followed the common Eclipse practice of a model-to-model transformation first to make a "Java model", then a model-to-text transformation to make the .java code files. For the former they used ATL, and for the latter MOFScript; in Ruby there were two corresponding DSLs.

The model was very simple: 40 classes and 50 inheritance relationships, with each class defining around 6 attributes. The code generated was what you'd expect: a one-to-one mapping to .java files, with accessor methods for each attribute, giving a total of 2550 LOC.

Since I'm inherently incapable of resisting a competitive challenge, I imported the UML model from the Eclipse format into MetaEdit+, and made a code generator in MERL to output the same code. Here then are the results: times for Eclipse and Ruby are from the article, my MetaEdit+ time is on comparable hardware.

Time to generate Java code for a UML model: Eclipse 5.423s, Ruby 3.557s, MetaEdit+ 0.176s

I guess the graph speaks for itself: MetaEdit+ is over 30 times faster than Eclipse, and over 20 times faster than Ruby. Even if we ignore all the reading and writing that Eclipse has to do, MetaEdit+ is still over 20 times faster. Since I imagine some people won't be too happy with those results, let's make some things clear:

  • Having two phases, M2M then M2T, roughly doubles the times for Eclipse and Ruby. All the M2M phase really does is add a pair of accessor operations for each attribute, which in my opinion belongs in the code generation phase anyway. The MetaEdit+ generator is both faster and simpler than the combination of the ATL and MOFScript generators, and IMHO this would continue to be true even for much more complicated generators.
  • A modeler in MetaEdit+ can just run the generator, which is executed on his model and corresponding metamodel in memory. In Eclipse, he must first save the model (I'm assuming above the time for that is zero), then the generator must parse that XML file, and the corresponding metamodel. For M2M stages (Eclipse and MDA proponents often envisage many), the generator must also read the metamodel for the output format, and perhaps serialize the result into XML to write a temporary model for input to the next stage. I believe the MetaEdit+ approach is better suited to what developers actually need.
  • For a nightly build, or other occasions when the model is not already open in MetaEdit+, it would have to be read first. This adds 0.276s, although that figure may be rather unfair to MetaEdit+. We are loading a full UML class with all its information, as opposed to just the class name and attribute names and types in the Eclipse XML file. If all the extra class and attribute information were filled in, MetaEdit+ would be hardly any slower, but the Eclipse and Ruby tools' time to parse the XML models would increase considerably.
  • I could include the time to import the Eclipse XML file into MetaEdit+, but that seems unfair: it's the native format for the Eclipse tools and the Ruby DSLs here, so MetaEdit+ too should start from its native format, as a modeler would. If the Eclipse guys build an importer that reads MetaEdit+ repositories, we can include and compare the times for "import from other tool". For the record: reading the XML file took 5ms, executing the translation to MXM, which MetaEdit+ can import, took 146ms, and importing the MXM file to the MetaEdit+ repository took 1.72s. Building the translator from XMI to MXM took a little under an hour, and used MERL's reverse engineering features, new in 4.5 SR1.

Of course, the Eclipse tools will get faster -- as will MetaEdit+. I think the main difference is one of architecture, though, and internal data structures and algorithms. Changing some of those should be possible, but some -- like EMF -- will be hard to rip out of Eclipse modeling without breaking absolutely everything else.

It would be interesting to see the results for other tools like oAW or Microsoft's DSL Tools' T4. Any competitive natures in those teams? :-). Finally, many thanks are due to Jesús Cuadrado, who provided me the models and generators used in his article, as well as the details of the environment from their tests, to make mine as comparable as possible.

DSM-tech

Comparing tools, plus spatial relations in MetaEdit

January 15, 2008 17:54:37 +0200 (EET)

Steffen Mazanek writes about an interesting metamodeling practical that he's conducting: students have to implement a cut-down UML Class Diagram editor using MetaEdit+, Eclipse GMF or Microsoft DSL Tools. It reminds me of the "Use Case cartoon" experiment where MetaCase and Microsoft both built very simple Use Case diagram support in their respective tools. Doing it with MetaEdit+ was 6 times faster back then; hopefully Microsoft have caught up somewhat since. Mind you, those figures were from when the tools were used by their own developers: when used by students, I'd expect MetaEdit+ to fare better.

Steffen also wrote a nice mini-review of MetaEdit+. He especially liked the ease of use, the Symbol Editor and the high level of integration (as opposed to the multiple mapping languages of GMF). He said MetaEdit+ would find it hard to support languages using spatial relations, e.g. VEX, which uses visual containment rather like Venn diagrams. I'm not sure I'd agree with that. Here's a picture of something like VEX: each circular object has its name in bold at the top, and at the bottom a list of the objects that it contains (recursively).

VEX-like diagram in MetaEdit+

To build the metamodel took a couple of minutes. MetaEdit+ already understands containment via the 'do contents' structure in its MERL generator language. However, that calculates contents based on the enclosing rectangle of the symbol, whereas for VEX it should be based on the circles. Otherwise object 4 above would be considered completely contained in object 1: true if you think of their enclosing rectangles, but not of the circles. The little bit of generator script that produces the text at the bottom of the circles therefore needed to be a bit longer than just "do contents { id }". Here's what it took:

Report '_contents'
  /* report all contents of the current object, flattening any nesting */
  do contents
  { subreport '_calcMargin' run
    if $margin >= '0' NUM then :Name endif
  }
endreport

We go through all the objects contained in this object, using the standard "do contents" rectangular definition of containment. For each little object we calculate the margin between it and this circle. If the margin is non-negative, the little object is contained and we print its name.

Calculating the margin is done in the _calcMargin sub-generator, which saves it in a variable called margin. The formula is simple enough, but might take a moment's thought if your geometry is as rusty as mine:

Report '_calcMargin'
  variable 'margin' write
    /* big object radius - center difference - little obj radius */
    math 
      width;1 '/2 - '
      '((' centerX '-' centerX;1 ')^2 + (' centerY '-' centerY;1 ')^2)^(1/2)'
      ' - ' width '/2'  
    evaluate
  close
endreport

Basically we want to check that the big object's radius is bigger than the distance from the centre of the big object to the outer edge of the little object. The distance to the outer edge is the distance between the centres, plus the radius of the little circle. The distance between the centres is calculated with Pythagoras' theorem, and the radii are just half of the width of the objects. CenterX and width here refer to the little object, whereas the ;1 suffix in width;1 makes it return the width of the big object, one level further out on the element stack -- i.e. from outside the "do contents" loop.
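
For example (hypothetical numbers): a big circle of width 100 centred at (0,0) and a little circle of width 40 centred at (30,0) give margin = 100/2 - 30 - 40/2 = 0, so the little circle just touches the inside of the big one and, since the generator tests $margin >= 0, still counts as contained.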

When drawing the symbols, MetaEdit+ thus calculates and displays this list of contained objects. It's even updated on the fly as you drag and scale objects. This lets you do cool things like show big red error signs if someone drags an object into the wrong kind of container.

Putting _calcMargin in its own sub-generator allows us to reuse it from other generators, e.g. to produce an indented "tree" listing showing the containment hierarchy of all the objects (like the default "Object Nesting" generator in MetaEdit+).
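
For instance, here's a minimal sketch of such reuse (the Circle object type and report name are my own assumptions): a graph-level report that lists each circle and the objects it truly contains, using _calcMargin for the circular containment test:

Report '_containmentList'
  /* sketch: assumes $cr comes from the standard _translators subreport */
  subreport '_translators' run
  foreach .Circle
  {  :Name ':' $cr
     /* rectangular containment first, then the stricter circular test */
     do contents
     {  subreport '_calcMargin' run
        if $margin >= '0' NUM then '   ' :Name $cr endif
     }
  }
endreport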

DSM-tech

Microsoft DSL Tools book

June 01, 2007 16:26:54 +0300 (EEST)

One of the nicest moments for me at Code Generation 2007 was sitting at the same table as Alan Cameron Wills during Jos Warmer's presentation. Alan had a white package that he was unwrapping with some glee, and when I saw the contents I could understand his happiness: it was a copy of the DSL Tools team's new book, Domain-Specific Development with Visual Studio DSL Tools.

I reached over and shook his hand to congratulate him, knowing what it feels like to finally finish a long project. He pushed the book over to me for a look, and I had a quick browse. Pretty cover, inside lots of pictures, big font, and loads of information about how to use their tools. I gave him the thumbs up and slid the book back to him. He smiled and pushed it back to me... He whispered that this was copy #1 in the world as far as he knew, and I could have it! After the talk he was kind enough to write a short message inside, as you can see from the picture: "Steven -- Hope this is good for a Blog! Alan". It certainly is! Many thanks to Alan for his generosity.

Unlike the Software Factories book, which was a broad general vision of DSM and existing techniques like AOP and patterns, this book focuses on how to accomplish individual tasks with Microsoft's DSL Tools. In effect, the book is the manual for the tools. There are short introductions to each section, saying why you might want to do that particular task, but the vast majority of the book is a description of the tools. It describes the user interface for the tasks, and for areas that still have to be hand-coded, covers the methods and classes of the framework they provide.

There are a good number of examples throughout, which helps the reader understand by making the ideas more concrete. At some points, particularly where the examples cannot be achieved using just the DSL Tools UI but require hand coding, the amount and length of the code listings starts to get in the way of that understanding. To the authors, the code and the reasoning behind it are clear, but I suspect not always to the readers. The individual lines all make sense, there are nice comments, and you can probably write your own tweaked version for your own case, but I wonder whether there is a danger that looking at things on this low level makes it hard to see the wood for the trees.

Of course, this worry is a common one when dealing with frameworks, and is actually one of the main reasons we need DSM. When coding gets to the stage that you can copy and paste (or retype) 15-20 lines of code from a book, and just tweak a couple of parameters to fit your case, it's often time to look at finding a higher level way that allows you to write a single line of code with just those parameters, or even better a language or tool that knows what parameters to expect. Since in this case that new framework, language or tool will hopefully be DSL Tools 2.0, there is of course little the authors can do at the moment. They're giving the best help they can to current users by writing the book, and now I'm sure they're happy to get back to writing the new versions of the tools.

In summary, this is a great book for users of the current version of DSL Tools. In fact I guess it's pretty much a "must have" book, and is a rare case of the developers of a tool being able to write a decent manual for that tool. Farming this out to some technical writers at this stage probably wouldn't have worked. However, if you're not already committed to making your own modeling language with the current version of DSL Tools, there's not really all that much in this book for you. It's interesting enough to read, but can only give brief snippets of information and guidance about the general task of creating DSM languages and generators, using them in projects and maintaining them. For instance, there are only two pages on "DSL evolution", a topic which provided one of the most interesting and enlightening sessions at Code Generation 2007. Whilst this is more of a manual than a general book about the DSM approach, I'm sure the users of Microsoft's tools will find they need some help in that area too. So, I'm sure DSL Tools users will be overjoyed to get this book, but also eagerly await the next book: "Advanced Domain-Specific Development" -- no pressure, Alan! :-)

Alan Cameron Wills giving Jos Warmer copy #2 of Domain-Specific Development

PS There's a great picture by Clemens Reijnen on Flickr of Alan handing copy #2 to Jos at the end of the talk. Unfortunately, the network is down at the moment so I can't link to it: follow the Flickr link on Clemens' home page. [Edit: added picture]

DSM-tech

Survey on future of Microsoft DSL Tools

May 09, 2007 14:16:14 +0300 (EEST)

This Don Smith post looks interesting:

I can't divulge too much detail yet, but some team in Microsoft might be in the middle of building some serious software factory infrastructure for a future version of Visual Studio - that's right, much better than GAX and DSL tools. Do you want it to suck? Do you think software factories are just a pipe dream and would rather they build something else?

Don wants to get feedback on the four "factories" they've released, and so they have a survey: 10 general questions, and 10 questions for each of the factories you've tried. The biggest question seems to be missing though, at least in my opinion and based on my experience of their tools so far:

  • Would you like your existing DSL Tools modeling languages and generators to keep on working?
  • Would you like to see major improvements in DSL Tools?
    (pick one of two :-) )

As long as Microsoft think of building modeling languages as a programming project, it's going to be impossible for them to make the changes they need to without breaking existing modeling languages built with their tools. If they have the possibility to make major improvements to DSL Tools, my suggestion would be to go right ahead: the important market is those who have not yet been reached. Those brave enough to use CTPs, betas and version 1.0 knew what they were letting themselves in for.

This is of course a problem with tools that require programming to build modeling languages, or a separate compilation step between defining the metamodel and modeling with it. In MetaEdit+, the metamodel is specified directly in the tool, with no programming, and models are based directly on it, with no intermediate compilation. In other words, the metamodel is expressed as pure data, which makes both metamodel updates and tool updates easy and painless for users.
