

Microsoft, MDA, and Precise and Imprecise Models

December 20, 2005 21:23:09 +0200 (EET)

Gareth Jones and Harry Pierson of Microsoft are having an interesting little transatlantic debate on model precision, completeness and perfection. In the latest installment, GarethJ tries to express how the development process moves towards its goal, saying it is:

"progressing gradually from models with less precision to models with more precision"

That can be read two ways because of all those plurals, so let's cut it down to one model at the start and see what it could mean. Either (1) there's one model, which is less precise to start with but gets worked on until it is more precise, or (2) there is initially one model, and then we build a second model, maybe in a different modeling language, partly by automatic transformation of the first and partly by hand.

I like the sound of (1) far more as a process for development: there's one clear versionable component, and each fact is represented once, all at the same level of abstraction. The second sounds more like OMG's MDA, whose transformations end up duplicating the same facts in more than one place (PIM and PSM, for instance). Keeping that lot in synch simply isn't going to happen, and even if it did, why would we want to have everything said twice (stated dually)? If we did that (were we to choose such a way) we would quickly find (in short order it would become apparent) that actually this was a bad idea (and I guess you got the picture!). :-)
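
To make the duplication concrete, here's a small invented sketch (the class names and persistence details are hypothetical, not from either blog): the single fact "a Customer has an email address" ends up written out in both a platform-independent artifact and a platform-specific one, and the two then have to be kept in step by hand.

```java
// Hypothetical sketch of the PIM/PSM duplication problem: the one fact
// "a Customer has an email address" is stated twice.

// PIM level: the platform-independent statement of the fact.
class Customer {
    String email;
}

// PSM level: the same fact restated with imagined platform detail
// (persistence metadata). Rename or retype "email" in the PIM above
// and this copy is silently out of date.
class CustomerJdbc {
    static final String EMAIL_COLUMN = "CUST_EMAIL"; // invented column name
    static final int    EMAIL_LENGTH = 255;          // invented constraint
    String email;                                     // the duplicate statement
}
```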

The idea of transformations that produce a new model whilst staying in the same modeling language strikes me as similarly poor: it's like saying "first write your code in high-level Java, then rewrite it in low-level Java (and your IDE provider should give you some functions that partially automate this)". Who would want to do that? Especially if you envisage trying to keep both versions around...
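
To see how odd that would be in practice, here's a made-up little Java example (the class and method names are mine, not GarethJ's): the same calculation written once in "high-level" Java and once hand-lowered to explicit iterators. Nobody would choose to version and co-evolve both.

```java
import java.util.Iterator;
import java.util.List;

class PriceTotals {
    // "High-level" Java: state the calculation once.
    static int total(List<Integer> prices) {
        int sum = 0;
        for (int p : prices) {
            sum += p;
        }
        return sum;
    }

    // The hand-made "low-level" rewrite of the same calculation. Every
    // change to total() above now has to be repeated down here, by hand
    // or by a tool that only partially automates it.
    static int totalLowLevel(List<Integer> prices) {
        int sum = 0;
        for (Iterator<Integer> it = prices.iterator(); it.hasNext(); ) {
            sum += it.next();
        }
        return sum;
    }
}
```

And that is the cheap case: here the two copies at least sit in the same file, whereas a PIM and PSM live in separate models that drift apart independently.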

I think some people in the OMG must have got confused about what raising the level of abstraction through automation means, and why it works. In compilers, it's been a great thing: there's an automatic transformation from source code to assembler (and on to executables). But the key point is that you never have to look at the results of the transformation. It just works (war stories aside). You stay at the high level of abstraction, and the automation takes care of the low level. If you have to work with both the input and the output of the transformation, and are even expected to stay interested in both going forward, something seems seriously wrong to me.
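
The contrast is easy to see in everyday Java terms (a minimal sketch of my own, not from the original posts): the source file below is the only artifact anyone reads, edits or versions; the bytecode javac produces from it is regenerated on every build, so there is nothing to keep in sync and no reason to ever open it.

```java
// Minimal sketch of the compiler analogy: this .java file is the only
// artifact anyone reads, edits or versions. The transformation output
// (Hello.class bytecode) is regenerated from scratch by every build,
// so there is nothing to keep in sync and no reason to look at it.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Stay at this level; javac owns the one below.");
    }
}
```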