Tuesday, October 9, 2007

Eclipse Summit Europe Day One: The Modeling Symposium

The Modeling Symposium was very well attended, with roughly 35 people; we traded rooms with one of the less well-attended symposiums to get a larger one. I got a snapshot of all the folks in the room.

Markus Voelter gave a quick introduction, and I talked very briefly about what's happening in EMF and the other Modeling projects. Then Bernd Kolb and Peter Friese talked about the Model Workflow Engine (MWE). It provides a language for writing scripts that drive bean-like classes representing the components within a workflow, and it includes debug framework integration so you can track the activity of a flow. Folks were interested to see a sample running under debug control, so Bernd was under the gun to produce that quickly. Someone asked: why not just use Ant? MWE has more specialized control mechanisms; it also provides an Ant task, and they are working on being able to call out to Ant.
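To make the bean-like component idea concrete, here is a minimal sketch of what such a class might look like; the class name and the invoke signature are my own illustrative assumptions, not MWE's actual API.

```java
import java.util.Map;

// Hypothetical bean-style workflow component; the name and the
// invoke signature are illustrative, not MWE's actual API.
public class ModelReader {

    private String uri;

    // The workflow engine configures the bean by calling setters that
    // match the attributes declared in the workflow script.
    public void setUri(String uri) {
        this.uri = uri;
    }

    // The engine then invokes the component as one step of the flow,
    // passing shared "slots" that components use to exchange models.
    public void invoke(Map<String, Object> slots) {
        slots.put("model", "model loaded from " + uri); // placeholder
    }
}
```

The appeal is that a component needs no framework-specific boilerplate: the script names the class, sets its properties, and the engine steps through the flow, which is also what makes the debugger integration feel so natural.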

There was a presentation about the Product Line Unified Modeler by Aitor Aldazabal of the European Software Institute. It involves managing the variability of a line of related products via a decision model. Eclipse was a good choice because they wanted OSGi support for deploying on devices. They use Ecore to define models and to generate a basic editor, as well as GMF for graphical support. They also use OCL for defining constraints on configuration validity and oAW for scripting the process. They need a distributed architecture, so they are looking into MDDi.

Then Bernd and Peter came back with a quick demo. It's very cool that you can step into the various parts of the flow processing!

Holger Papajewski of Pure Systems presented on integrating model-driven development and software product line engineering; it was quite closely related to Aitor's talk. They are very happy with the choice to use Eclipse, a choice made very early in Eclipse's public life. We noticed a pattern that the talks all used oAW; Markus denied selecting them based on oAW usage, but that just made it more fun to tease him about it. This project manages variants and variability using feature models, with the variants generated using a decision model. They use BIRT for report generation, they are moving over to using Ecore to describe their models, and they have a need for scripting the types of changes they commonly make.

During the break, I talked to Stephan Eberle, who is at TNI-software now. I also chatted with Miguel Garcia and helped him set up his OCL tools demo on my machine. This was the only meeting I've ever been to that was ahead of schedule; Markus runs a tight ship! I also talked to Reinhold Bihler about the C# component for translating EMF's runtime to .NET; Mikael from INRIA was quite interested in this.

Dimitrios Kolovos of the University of York, an Epsilon committer, talked about automating repetitive tasks. Their project provides infrastructure for implementing task-specific languages. The Epsilon Object Language is based on OCL and underpins support for comparison, validation, merging, and transformation. He showed how to extract a class into an interface automatically, transforming references to the existing class to refer to the new interface instead. They plan to look at converting a recorded change description into a generated wizard skeleton.
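To give a feel for what that refactoring involves, here is a rough sketch of extracting an interface, expressed directly against the Ecore API in Java rather than in Epsilon's own languages; the class and method names are mine.

```java
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EClassifier;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.EcoreFactory;

public class ExtractInterface {

    // Create a new interface from 'source' and retarget references in
    // the same package so they point at the interface instead.
    public static EClass extractInterface(EClass source, String name) {
        EClass extracted = EcoreFactory.eINSTANCE.createEClass();
        extracted.setName(name);
        extracted.setInterface(true);
        extracted.setAbstract(true);
        source.getEPackage().getEClassifiers().add(extracted);

        // The source class now implements the extracted interface.
        source.getESuperTypes().add(extracted);

        // Retarget references that pointed at the source class.
        for (EClassifier classifier : source.getEPackage().getEClassifiers()) {
            if (classifier instanceof EClass && classifier != source) {
                for (EReference reference : ((EClass)classifier).getEReferences()) {
                    if (reference.getEType() == source) {
                        reference.setEType(extracted);
                    }
                }
            }
        }
        return extracted;
    }
}
```

A real refactoring would also pull up operations and handle references from other packages and resources, but the mechanics are much the same.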

Gabriele Taentzer of Philipps-Universität Marburg talked about supporting complex editing operations using in-place transformations based on a Transformation Rule Model that lets you describe what needs to be done to transform a graph. When I spoke with her at lunch, she expressed an interest in starting a model refactoring component in EMFT. Cool!

Maarten Steen of the Telematica Institute talked about using business process designs to generate code. They use QVT to transform between models. There was quite a bit of discussion about which parts of QVT are currently supported or planned in the Modeling project.

Miguel Garcia gave his impromptu presentation about his OCL tools and the Emfatic work, referring to his home page: http://www.sts.tu-harburg.de/~mi.garcia/Promotionsprojekt.html

At the end of the morning, Markus collected topics for the afternoon breakout discussions.
  1. It would be nice to have one QVT implementation.
  2. Dashboard for model users; making the tools more usable.
  3. Organization of the Modeling project: how do we make the overall project more organized?
  4. IP issues? AUTOSAR has a private metamodel...
  5. Validation and OCL constraints.
  6. Scripting and orchestrating the tools.
  7. Scalable support for big models.
  8. What's happening with M2T?
  9. Textual modeling framework.
  10. Integration of textual and graphical aspects of DSLs.
  11. Weaving instances and determining correspondence between instances.
  12. Visualization of complex models.
We voted on our favorite topics and used the results to decide which discussions would happen and in what order, one after another. After that, we had lunch.

As part of the "Better organization" discussion lots of issue were raised. How to use all the different projects in a cohesive way to achieve a particular result? Many people invest a lot of time learning poorly documented tools to understand which ones will help contribute useful capability for their task at hand. We need better summaries of what projects really provide. It's hard to find the information you need. The ability to use all the frameworks in an end-to-end solution is an important need to address. I pointed out that what would be really good would be if users help to focus on building up the information base that documents what they've learned to help make life easier for the next set of users. I.e., the community can contribute to the solution via the wiki.

Markus argued that we need better leadership, i.e., folks who understand and promote the big picture with less focus on one particular project. I think that comment was directed mostly at me rather than at Rich, who is busy working on a book about how to use the various Modeling project technologies to build a domain-specific language. Hey, if the shoe fits, I put it on! There is also a problem of reinventing technology; researchers tend to like to do this. Overall, a dashboard for the technology in the Modeling project, like the one for GMF, would be appreciated. It was observed that it's not necessarily in a funding company's best interest to polish open source to the point where it competes with their own products or services, so the community really needs to step in to help polish the open source results. It was also observed that academic research funding is often driven by folks relabeling their work to fit the special interest of the day without actually changing what they are doing. Folks were also concerned about competition with Microsoft's DSL tooling; obviously Microsoft knows how to polish their results. In general, there are perceived to be too many projects with overlapping scope and vague interrelations. In the end, we don't need yet more solutions to the same problems; we need a smaller number of well-integrated, high-quality solutions. I'll need to think about all this a lot...

As a prelude to tomorrow's Big Models meeting, the issue of scalability was raised. Solutions that work well for smaller models often won't work so well for larger ones. It was noted that unloading a large model will actually increase memory consumption by converting the objects to proxies. The objects are detached from the resource, but they maintain all their connections, including their containment tree, so any reference to any object in the tree will keep the whole tree in memory. One could unset features to eliminate the containment connections between the objects and thereby allow the individual unreferenced ones to be collected. Fragment paths are seen to be quite large and fragile, though that space is transient, while the overhead of IDs stays around longer. Some folks need to track all instances of some specific type, so I pointed out that a specialized EContentAdapter could maintain that information on the fly, avoiding a visit to the whole model each time.
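As a minimal sketch of that last idea, an adapter along the following lines could keep such an index up to date as content comes and goes; EContentAdapter is real EMF API, but the class itself is just my illustration.

```java
import java.util.LinkedHashSet;
import java.util.Set;

import org.eclipse.emf.common.notify.Notifier;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.EContentAdapter;

// Maintains the set of all instances of a given EClass on the fly;
// EContentAdapter already propagates itself over the content tree.
public class InstanceIndexAdapter extends EContentAdapter {

    private final EClass type;
    private final Set<EObject> instances = new LinkedHashSet<EObject>();

    public InstanceIndexAdapter(EClass type) {
        this.type = type;
    }

    public Set<EObject> getInstances() {
        return instances;
    }

    // Called as the adapter attaches itself to each object in the tree.
    @Override
    protected void addAdapter(Notifier notifier) {
        super.addAdapter(notifier);
        if (type.isInstance(notifier)) {
            instances.add((EObject)notifier);
        }
    }

    // Called as objects are removed from the tree.
    @Override
    protected void removeAdapter(Notifier notifier) {
        super.removeAdapter(notifier);
        instances.remove(notifier);
    }
}
```

Attach it once, e.g., via resource.eAdapters().add(new InstanceIndexAdapter(myClass)), and it will walk the content tree and then track subsequent additions and removals without ever revisiting the whole model.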

The issue of large models that are too big to fit in memory all at once came up, but even without EMF, there is really no general solution to this problem. One needs to split them into smaller pieces to make working with them manageable and to reduce change conflicts; cross-resource containment can help with splitting large models into smaller pieces. Repositories can be a solution for the search and query problems by maintaining caches and indexes; they can understand the deep structure of what is saved. Also, the EStore API can be useful for separating the data from the EObjects that carry the data; CDO uses this approach. In the end, the best way to make things scale is to have a larger number of smaller resources rather than a small number of very large ones. One can always process the smaller resources one at a time, but a single large resource taxes pretty much any technology.
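To illustrate the one-at-a-time pattern, here is a minimal sketch; the file names are made up, but the load-visit-unload cycle is the point.

```java
import java.util.Iterator;

import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

public class OneResourceAtATime {

    public static void main(String[] args) {
        ResourceSet resourceSet = new ResourceSetImpl();
        // Needed when running standalone, outside of Eclipse.
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
            .put("xmi", new XMIResourceFactoryImpl());

        // Hypothetical file names; each one is a small piece of the model.
        for (String fileName : new String[] { "part1.xmi", "part2.xmi" }) {
            Resource resource =
                resourceSet.getResource(URI.createFileURI(fileName), true);
            for (Iterator<EObject> i = resource.getAllContents(); i.hasNext();) {
                EObject eObject = i.next();
                // Visit eObject here...
            }
            // Unload and drop the resource so that, with no remaining
            // references, the whole piece can be garbage collected.
            resource.unload();
            resourceSet.getResources().remove(resource);
        }
    }
}
```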

There was a slightly confusing discussion about end-user integration, because defining an end user is tricky in this meta-meta world. But we decided that by end users we mean the modelers, i.e., those folks who will use the Modeling project infrastructure to build models, tools driven by those models, and utilities to manipulate those models. It would be awfully nice to make the tools feel more like the JDT, with proper end-to-end integration to handle common issues easily. Scripting or workflows to perform more complex tasks as part of a build would be a time saver and an error preventer as well. Make the tools simpler by hiding incidental complexity...

It was a very interesting but tiring day. Rich arrived at the end, and I sat down with him to go over the demo we'll do tomorrow. I'm a little ill-prepared. I hope I don't screw it up too much!

Finally, we went out for the sponsors' dinner organized by Ralph.


Tomorrow's Big Models meeting should prove fun as well. Stay tuned...

2 comments:

Chris Aniszczyk (zx) said...

I see Dirk and Jeff!

Suresh Krishna said...

Oh man... you seem to be having a rocking time at the symposium. The pictures remind me of my old days at Bosch and in Ludwigsburg.
Anyway, enjoy :)