Re: Fundamentally open approach to Domain Specific Tooling
Microsoft's Gareth Jones posts on the need for tools to be customizable and extensible:
We don't believe that at a given point in time ANY tool addresses 100% of the needs of 100% of its customers but we don't want our customers to be blocked from having their needs met by the scope of the feature set that we happen to have implemented at that particular time.
In an old discussion on a feature we don't currently support, Steven Kelly expressed the view that we seem to be targeting developers with our tools, whereas his esteemed tool aims more (or at least as much) to target domain experts by always doing 100% code generation, thereby relieving them of the need to write any code. A question, Steven: What happens when the feature an authoring user wants isn't supported by the current version of your tooling?
Good question, which should be answered on two levels: what if the metamodeler wants something that isn't in our tool, and what if the modeler wants something that isn't in the modeling language? The first is more important from my point of view, in that there's something I can do about it. Interestingly, the "old discussion" Gareth mentions was precisely such an issue: Microsoft doesn't provide support for n-ary relationships. What do you do if you want your modeling language to include such a feature, built with their tools? Basically, you're stuck, since their tools aren't open source. If you had the source code, you could throw anything from weeks to months at the problem, and in theory end up with something that works. But with a closed source tool, even if the metamodels are expressed in a combination of C# and XML, you simply can't add n-ary relationships if the tool doesn't support them.
That's the first clear point then: it's much better for customers if the tool already supports things. As developers, we love being able to dig down into the internals of a tool. You can't do that with the Microsoft tools, but you can with open source DSM frameworks like Eclipse EMF & GEF. The bad news with Eclipse is that, honestly, you wouldn't want to dig down into it. Even without touching the internals, building modeling language support with Eclipse is several hundred times slower than using MetaEdit+ (see last para here for links). I hate to think what those figures would be if you had to start rewriting the blessed thing. Good luck though to the team(s) working on the replacement(s) to GEF: having real DSM support in Eclipse would be great.
Gareth is clearly right that no tool supports 100% of the needs of 100% of its customers. The concept of need is, however, an interesting one. In fact, it's probably a poor concept, and often no more real than in the mouth of a five year old: "but Daddy, I need it!" What is really important is how much time the tool will save you overall. For DSM tools, you should also ask whether the modeling tool you end up with will save the modelers time. If the tool is missing 'necessary' features, you'll have to hand code something, use its API, find another way to store or generate that information, or simply tell the modelers that they can't do that part by modeling, but must code it by hand. Any of those routes equates to extra time spent, both on solving the problem and some extra for the cost of switching between two ways of doing something.
If you want to be able to do something with a pre-beta release of version 1.0 of a DSM tool, you'll have to accept that it's missing many things. Indeed, in that case the vendor is right to leave many things open, to be hand-coded by the users. That way, particularly if users can be encouraged to be open and share their resulting tools, the vendor can learn what is actually needed, and use that as a basis for version 2.0. Microsoft have consistently emphasized that their DSL Tools are at that stage, and there's no shame in that. They've got a good number of things right, far more than Eclipse, and they've had a good attitude in listening to users who point out things that are missing, and have often been able to suggest how those users could hand-code the missing parts for now.
The situation is however different if you have a tool that is more mature. Then a user can rightly expect that what the tool offers is good and sufficient. Oddly, it may even be the case that the tool deliberately doesn't offer something that you may initially feel to be a good idea, because the vendor has seen that in the long run it leads to trouble, or simply doesn't work. Of course, there will also still be things that could be there and aren't, or things which everyone would like but no-one has figured out how to do yet. For some of these things, the problem is that different users have sufficiently different needs that there is no one way to suit them all, or even enough of them. In those situations you expect to be able to achieve what you want via the tool's API (for direct manipulation of the models) or some sensible import/export format (for integration with tools up or downstream of the DSM tool).
MetaEdit+ has offered model export from the start, and import and an API for nearly two years. We'd have offered the latter earlier, but there was no API protocol that was both cross-platform and easy enough to use that we could see many people being interested. Interestingly, to my knowledge no customer has yet found a need for the API, which confirms my impression that you can get what you want from MetaEdit+ without having to resort to programming. Of course, it's always nice to know you can get in there and change the models programmatically if you need to, and we will continue to develop the API to give people that warm fuzzy feeling of security. If somebody uses it, that's brilliant; if nobody needs to, that's even better.
On our web pages are a couple of animated examples of things we see the API as being useful for. The first is model-level debugging, which I believe is important if we are to completely raise the level of abstraction away from source code. Since the correspondence between modeling language elements and code lines is completely domain-dependent, this kind of thing will always remain something a DSM tool cannot offer out of the box. The best you can do is keep the intrusion down to an extra line of generated code every now and then, which can be turned off for production code. That's what's happening here: each change of state, loosely corresponding to a case in the switch statements, calls back to MetaEdit+ through the API to highlight the corresponding model object in red.
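To make the mechanism concrete, here is a minimal sketch of that idea in Python. The API call names (highlight_element, clear_highlight) and the State/run shapes are my own assumptions for illustration, not the actual MetaEdit+ API or its generated code; the point is just that one extra traced line per state change is enough to drive the highlight.

```python
# Hypothetical sketch of model-level debugging: instrumented generated
# code calls back to the modeling tool so it can move a red highlight
# to the model element currently executing. The API method names here
# (highlight_element, clear_highlight) are assumptions, not the real
# MetaEdit+ API.

class ModelDebugger:
    """Tracks the currently highlighted model element and asks the
    tool's API client to move the highlight on each state change."""
    def __init__(self, api_client):
        self.api = api_client
        self.current = None

    def trace(self, element_id):
        # This call is the "extra line of generated code" emitted at
        # each state change; it is omitted from production builds.
        if self.current is not None:
            self.api.clear_highlight(self.current)
        self.api.highlight_element(element_id, color="red")
        self.current = element_id


class State:
    """A state of the generated machine, tagged with the id of the
    model element it was generated from."""
    def __init__(self, model_id, next_state=None):
        self.model_id = model_id
        self.next_state = next_state

    def step(self):
        return self.next_state


def run(initial_state, debugger=None):
    """Run the generated state machine; in debug builds, report each
    state change back to the modeling tool."""
    state = initial_state
    while state is not None:
        if debugger is not None:        # debug builds only
            debugger.trace(state.model_id)
        state = state.step()
```

In a debug build the debugger argument is supplied and each transition highlights its source model element; in a production build the traced line simply isn't generated, so the model runs with no tool in the loop.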
