Five years ago, I wrote a paper on modeling. Since then, agile practices have pretty much become mainstream in our industry. So, perhaps it’s time to revisit the topic of architecture and modeling again…
Let’s start with architecture itself. As I wrote in an earlier post, my impression, and my fear, is that while agile practices certainly make sense, a decline in the importance placed upon architecture is occurring in parallel: organizations now doing agile practices seem to “care” less about the architectural workflows than they did before. The fundamental reason, I believe, is that agile practices are much more focused on the “here and now”, i.e. on delivering immediate value. The risk, IMO, is that unless care is taken, agility without due consideration for architecture could very well lead to problems downstream, e.g. by dramatically increasing technical debt, ultimately demanding massive rework.
Why do we need architecture in the first place, anyway? Well, as soon as we are about to build something of any complexity, we will need a “blueprint” or “map” for it. A doghouse probably doesn’t need a very elaborate architecture (unless your dog happens to be very fussy! 🙂), but a high-rise definitely will need one. Furthermore, we will need to capture and communicate the architecture as soon as the effort of building the thing demands a team, i.e. we will need to capture and communicate the overall plan (the architecture) so that each team member knows what the system is supposed to look like. The architecture is also what the architects use to reason about the pros and cons of alternative solutions to the problem.
So, depending on the complexity of the system we are going to build, and on the complexity of the organization and team that is going to build it, we must tune our architectural discipline and workflows to match, i.e. we need to adjust the level of stringency and formalism of our architectural work to the complexity of what we are up against.
One of the main questions, then, is how to capture and communicate an architecture? Well, I’d say it depends on the level of stringency needed: if the system you are building is trivial, and the process of building it is trivial, then your architectural discipline can be trivial, too. On the other hand, if the system you are building has a high level of technical complexity, or if the project complexity dimension (e.g. application and team size, organizational distribution, standards adherence, certification requirements, continuous evolutionary development, etc.) is high, then you will need more formalism and a higher level of stringency in your architectural workflows. So, in a less complex situation, capturing your architecture on a whiteboard and communicating it with digital images will do fine. In a more complex situation, however, that kind of “architecture by whiteboard” will not suffice: your architectural workflows must be more formal, and modeling is one way – and IMO a good way, if done correctly – to achieve that. Yes, even in an agile world!
In my 2007 paper on modeling I described a “taxonomy” of the then-current “best practices” of modeling:
- whiteboard sketching
- modeling by PowerPoint
- disposable UML
- fully constructive modeling
IMO, two of these practices are very applicable and consistent with agile practices: I wouldn’t hesitate to use “whiteboard sketching” if the complexity of my system and organization is modest. However, if the complexity level is high, architecture by whiteboard alone will not suffice. So what are the alternatives? IMO, there are two. One is to skip modeling (at least the formal UML variant of it) completely, and use “traditional” techniques to capture the architecture, such as textual specifications and interface descriptions, combined with whiteboard sketching and digital photos to communicate it. The other is to use fully constructive models (a.k.a. executable modeling), where you use a modeling tool capable of generating 100% of the application.
Thus, from the taxonomy above, in an agile context I’d skip “modeling by PowerPoint” and “disposable UML” – IMO, neither of these practices is worth the time, and neither conforms to the agile mindset.
So, it might sound like I’m all for fully constructive modeling then…? Yes and no. “Yes”, because in theory, your teams can work at a higher level of abstraction, with a unified language for architecture, design, implementation, and test, generating the nitty-gritty, detail-level code that would otherwise often be difficult, painful, and very slow to write. In other words, executable modeling would (in theory) give you the agility, speed, and quality necessary in today’s highly competitive marketplace. “No”, because unfortunately, most executable modeling tools available today are far from perfect, and as soon as you commit your organization to executable modeling, you are totally dependent on the quality of your modeling tool, in terms of its usability, scalability, performance, robustness, and the quality of the generated code. Thus, for executable modeling, the quality of your modeling tool, and of the code it generates, is as important as the quality of your compiler, or the quality of your target OS, or hardware for that matter! Unfortunately, this is not where most of today’s modeling tools are. When I wrote the paper on modeling five years ago, I had some expectations that modeling tools by 2012 would have functionality and quality levels high enough to make traditional code-based development obsolete, but unfortunately, that’s not where we stand. Let’s revisit this theme five years from now and see what has happened… 🙂
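To make the core idea of “fully constructive” modeling a bit more concrete, here is a minimal, purely illustrative sketch in plain Python (not any real modeling tool, and the names `DOOR_MODEL` and `ModelExecutor` are my own invention): the model itself is the single source of truth, and the executable behavior is derived from it mechanically. A real executable-modeling tool would typically generate production code from, say, a UML state machine; this toy version just interprets the declarative model directly.

```python
# The "model": a state machine for a door, declared purely as data.
# In an executable-modeling tool, this would be a UML state machine diagram.
DOOR_MODEL = {
    "initial": "closed",
    "transitions": {
        ("closed", "open"): "opened",
        ("opened", "close"): "closed",
        ("closed", "lock"): "locked",
        ("locked", "unlock"): "closed",
    },
}

class ModelExecutor:
    """Derives runtime behavior mechanically from the declarative model,
    so the model and the implementation can never drift apart."""

    def __init__(self, model):
        self.model = model
        self.state = model["initial"]

    def send(self, event):
        key = (self.state, event)
        if key not in self.model["transitions"]:
            raise ValueError(
                f"event '{event}' is not allowed in state '{self.state}'"
            )
        self.state = self.model["transitions"][key]
        return self.state

door = ModelExecutor(DOOR_MODEL)
print(door.send("open"))   # opened
print(door.send("close"))  # closed
print(door.send("lock"))   # locked
```

The point of the sketch is only this: everything the system does is traceable to one declarative model, which is what makes the “unified language for architecture, design, implementation and test” argument work – and also why the quality of the executor (or code generator) becomes as critical as the quality of a compiler.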