Why Model Driven Software Development isn't fast enough and how to fix it
Have you heard all the buzz about Model Driven Development (MDD) lately? Start using MDD because it improves your productivity; use business engineering / MDD for more business agility; if you want your SOA fast, you need Model-Driven SOA; and so on...
If you're new to Model Driven Software Development and its related acronyms like MDA, MDD, MDSD, and MDE, and it all sounds a bit abstract to you, please read this article explaining MDE with a simple metaphor.

All of these statements link MDD to the fast delivery of business results: MDD as the solution to slow software development cycles. Guess what? That's just not true.
Why Model Driven Software Development isn't fast enough
Yes, MDD can be much faster than traditional software development. However, the end result, working software for end-users, isn't delivered fast enough. If we look at a typical software development project using MDD we see something like this:
- Agile is key...
- Short iterations...
- Easy modeling...
- Early results...
- Showing prototypes to business / end-users...
- Easy to involve the business...

After a number of iterations the application is finished.
(see also 15 reasons why you should start using MDD)

And now... deployment!
The application has to move to production. That's where it all starts:
- We need to build / package everything...
- We need a server to deploy the application on...
- IT guys will start talking about corporate policies, security, reference architectures, etc...
- We need to configure / bind all kinds of things (e.g. addresses of integration points)...
- And what about acceptance testing?
- Lots of people need to be involved...

All of this is needed, don't get me wrong! However, where the first part was fast, the second part, deployment, is just slow. Most MDD tools / projects focus on the development part and they do it well. Believe me, I've seen some incredible results! But to really unleash the power of MDD for the business we need more...
And how to fix it...
Luckily there is a way to fix this problem. We can make the whole process faster. Let's look at three alternatives.
Cloud deployment
A possible way to make the deployment of applications easier and faster is to deploy them in the cloud. Don't think about hardware, platforms, architecture, etc. Send your model to a cloud and just use your application. Model-Execution-as-a-Service!
Advantages:
- No discussions about hardware, platforms, architecture, etc. The important thing is: make it work.
- If the cloud platform is selected beforehand, deployment can become very fast.
- Probably more cost-effective and scalable.
Challenges:
- Make it easy! Abstract away from all kind of deployment details.
- Corporate policies will soon cover cloud infrastructures.
- Take care of your security requirements.
- Integration with corporate systems within the firewall.
- Performance, watch your connection speed!
- You still need to arrange a sufficient test process.
Change at runtime - engine
Another way to speed up the deployment cycle is to define two levels of variability for applications in a certain domain. Let me explain that.
If we select a domain (e.g. insurance, healthcare, web applications) we can build a Model Driven Software Factory (MDSF) to build applications in that domain using high-level models (i.e. with the use of Domain-Specific Languages). An MDSF is built on the idea that if you compare applications in a certain domain with each other, there's a static part and a variable part. The static part is the same for each application in the domain; the variable part differs from application to application. The static part is implemented in libraries. The variable part is defined in DSLs and can be modeled by whoever builds the application. The code generated from the model defined with these DSLs (or the engine executing the model) uses the libraries containing the static part of the implementation.
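As a minimal sketch of this split (all names and the insurance domain are hypothetical, not from a real MDSF), the static part can live in a hand-written library, while a function standing in for DSL-generated code supplies only the variable, domain-specific logic:

```python
# Static part (level 0): a hand-written library routine, shared by every
# application in the domain. All names here are hypothetical.
def send_premium_quote(customer_name, premium):
    """Format and deliver an insurance quote (static library code)."""
    return f"Quote for {customer_name}: {premium:.2f} EUR/month"

# Variable part (level 1): this function stands in for code generated
# from a DSL model; only this calculation differs per application.
def calculate_premium(age, coverage):
    base = coverage * 0.01
    surcharge = 1.5 if age > 60 else 1.0
    return base * surcharge

# The generated code calls into the static library.
print(send_premium_quote("Alice", calculate_premium(35, 5000)))
```

Regenerating from a changed model only replaces the variable part; the library stays untouched.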
The idea of two levels of variability is to select the part of the variability which needs to change often during the lifetime of an application. This part, the second level of variability, should be adaptable at runtime (i.e. while the application is in use, without re-deploying it).

So, in principle we can define the following levels of variability for an application:
- Level 0: the static part of applications in a certain domain.
- Level 1: the variable part which is defined during the development and design of the application using DSLs.
- Level 2: the part of the application which can be configured or adapted at runtime.
Examples of level 2 elements are the targets/goals of KPIs, authorizations, GUI personalization, task-role relations (in workflows), business rules, etc.
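To make the distinction concrete, here is a hypothetical sketch (the KPI name and values are made up) of a level-2 element: a KPI target kept in a mutable store, so it can be changed at runtime while the level-1 logic that reads it stays fixed:

```python
# Level 2: a runtime-editable KPI target (hypothetical example values).
kpi_targets = {"max_claim_handling_days": 10}

def claim_handled_in_time(days_taken):
    # Level 1 logic, fixed at design time, reads the level-2 value at call time.
    return days_taken <= kpi_targets["max_claim_handling_days"]

print(claim_handled_in_time(12))              # False under the initial target
kpi_targets["max_claim_handling_days"] = 15   # runtime change, no redeploy
print(claim_handled_in_time(12))              # True after the change
```

The same shape works for authorizations, task-role relations, or business rules: the data describing them is level 2, the code interpreting that data is level 1.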
Business Rules engines often support changes at runtime without re-deployment or stopping the current running transactions. MDD can learn from them. Use an engine (virtual machine, model executor) and allow for part of the model to be changed at runtime.
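The versioning behaviour described above can be sketched in a few lines of Python (a toy engine with an invented API, not any real tool): running transactions stay pinned to the model version they started with, while new transactions pick up the latest deployed version.

```python
# Toy model-execution engine (hypothetical API) that versions its model.
class ModelEngine:
    def __init__(self, model):
        self.versions = [model]          # version history of the model

    def deploy(self, new_model):
        self.versions.append(new_model)  # runtime change, no restart

    def start_transaction(self):
        # Pin the transaction to the model version current at start time.
        return Transaction(self, len(self.versions) - 1)

class Transaction:
    def __init__(self, engine, version):
        self.engine, self.version = engine, version

    def execute(self, data):
        # Always runs against the pinned version, even after a deploy.
        return self.engine.versions[self.version](data)

engine = ModelEngine(lambda x: x + 1)    # v0 of the "model"
t1 = engine.start_transaction()
engine.deploy(lambda x: x * 10)          # v1 deployed at runtime
t2 = engine.start_transaction()
print(t1.execute(5))   # 6: still runs on the old model
print(t2.execute(5))   # 50: new triggers use the new model
```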
Advantages:
- Fast and easy to apply changes.
- Runtime changes, without the need for a maintenance time frame.
- Only controlled changes, i.e. only changes at level 2 are possible at runtime.
- If changes can be constrained / controlled, less testing effort.
Challenges:
- Changing a running system means that all current running processes / transactions should keep running on the old model. New triggers will start executing the new version of the model.
- It's difficult to define the appropriate variability for each level.
- What about testing, errors, risks, rollback of model versions, etc.?
Change at runtime - adaptive modeling
The third alternative is much the same as the previous one. It also builds on the idea of two levels of variability. Instead of creating a second level of variability within the tooling by allowing model changes at runtime, this alternative is based on adaptive modeling.
In 'normal' Model Driven Development the metalevel is part of the model editor (see this article on DSLs for a deep dive into meta-modelling). In the case of adaptive modeling a metalevel is introduced in the model itself: the instances of one group of objects affect the behaviour of instances of other objects. In Domain-Driven Design (as described by Eric Evans) this is known as the Knowledge Level pattern, which splits a model into two levels:
- Operations level: the place where we do our daily business. For example: Shipment, Account.
- Knowledge level: objects that describe / constrain the objects in the operations level. For example EmployeeType, RebatePaymentMethod, ShipmentMethod.
Adaptive modeling allows you to alter the application by creating knowledge level objects and wiring them together. The knowledge level represents what I called variability level 2 before.
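A minimal Python sketch of the Knowledge Level pattern (the classes and the weight constraint are invented for illustration): knowledge-level objects are plain data created at runtime, and they constrain the operations-level objects.

```python
# Knowledge level: describes / constrains the operations level.
class ShipmentMethod:
    def __init__(self, name, max_weight_kg):
        self.name, self.max_weight_kg = name, max_weight_kg

# Operations level: daily business objects, checked against the knowledge level.
class Shipment:
    def __init__(self, method, weight_kg):
        if weight_kg > method.max_weight_kg:
            raise ValueError(f"{method.name} only allows {method.max_weight_kg} kg")
        self.method, self.weight_kg = method, weight_kg

# "Modeling" at runtime: adding a shipment method is just creating an
# object and wiring it in; no code generation or redeployment.
parcel = ShipmentMethod("parcel", max_weight_kg=30)
Shipment(parcel, 12)                       # allowed by the parcel method
freight = ShipmentMethod("freight", max_weight_kg=1000)
Shipment(freight, 250)                     # enabled entirely at runtime
```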
Advantages:
- Fast and easy to apply changes.
- Runtime changes, without the need for a maintenance time frame.
- Only controlled changes, i.e. only changes at level 2 are possible at runtime.
- If changes can be constrained / controlled, less testing effort.
Challenges:
- The advantage of MDD is using a language that is as specific as possible. Adaptive modeling means that the model, and thus the tool support, is more abstract and less specific.
- Where to stop? I.e. what should be level 1 and what should be level 2 variability?
- Editing isn't the hard part of adaptive modeling. How to use test and debug tools? How to ensure quality? The previous alternative is stronger on this point.
Conclusion
Model Driven Development (or Model Driven Engineering) can't deliver software as fast as today's dynamic business environment needs it. The main slow-down is the phase between development and production. This phase can be made faster with cloud deployment, runtime engines, or adaptive modeling.
We should go beyond MDD! Not only Model Driven Development, but also Model Driven Deployment or runtime adaptation.
What's your preferred alternative? Or do you have a fourth one?