One of the challenges in software engineering, at least from my perspective, is that we don't generally have very good modeling capabilities that let us explore tradeoffs in our systems. Where the governing equations of physical systems are well understood and amenable to simulation or closed-form analytical solutions, the software domain has no such foundation. In part this is because software constantly evolves, introducing new approaches that our existing modeling techniques don't capture well, and in part it's because software is not yet a very mature field, so effective modeling techniques have not had much time to gain a foothold. Combined with the tendency toward fast iteration, an emphasis on coding rather than design, and the received wisdom that "documentation is useless because it's always out of date", these factors make it difficult to build models or to get others to value them.
This is a shame, because I think system models of our more complex applications could be invaluable. On my team analyzing the potential impact of changes has become, to a large degree, a matter of "push and pray" where we release a feature (with an off switch) and hope that no negative emergent behaviors form. I think we can do better than that, but it'll take some effort and a level of discipline that is somewhat anathema to the hyper-agile style.
If we wanted to improve our ability to predict the impact of changes, what kinds of models could we use? There are a multitude of types out there, each suited to different tasks. For example, we could create a Design Structure Matrix, as I discussed earlier, and use it to predict which subsystems may need to change to accommodate a new feature. Or we could create a formal analytic model, perhaps using the Pi calculus, and use it to prove the absence of unwanted behaviors such as deadlock. Another option would be to write a simulation of the system in a scripting language that abstracts the major system components and lets us gather metrics such as average response latency.
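To make the first option concrete: a DSM is essentially a dependency matrix over subsystems, and reachability over that matrix gives a rough first estimate of the change-impact set. Here's a minimal sketch in Python; the subsystem names and dependency edges are invented for illustration, and a real DSM would carry much more nuance than a boolean graph.

```python
# Minimal Design Structure Matrix sketch: a directed dependency graph
# over hypothetical subsystems, with reachability used to estimate
# which subsystems a change might ripple into.
from collections import deque

# deps[a] = the subsystems that a depends on; a change to any of them
# may force a change in a. (Names and edges are invented.)
deps = {
    "api":     {"auth", "billing"},
    "auth":    {"storage"},
    "billing": {"storage"},
    "storage": set(),
}

def impact_set(changed):
    """Return every subsystem that transitively depends on `changed`."""
    # Invert the edges: who depends on whom.
    dependents = {k: set() for k in deps}
    for mod, uses in deps.items():
        for u in uses:
            dependents[u].add(mod)
    # Breadth-first search over the inverted graph.
    seen, queue = set(), deque([changed])
    while queue:
        mod = queue.popleft()
        for d in dependents[mod] - seen:
            seen.add(d)
            queue.append(d)
    return seen

print(sorted(impact_set("storage")))  # ['api', 'auth', 'billing']
```

Even this toy version answers a useful question ("what might a storage change touch?") that teams usually answer by intuition or grep.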
All of these models yield value to the engineering process, and each comes with accompanying costs. Building a simulation or a Pi calculus representation of the system may be quite costly, whereas a DSM can often be generated from information in the code itself. Obviously the utility of a model needs to be weighed against these costs, but I would argue that if you are doing no modeling whatsoever, you're probably missing something important about your system.
Once you have a model, you need to keep it up to date. This is one case where the pure software domain has a distinct advantage over other disciplines, because there's the potential to automate enforcement and validation of the models we build. Source-level analysis combined with runtime data collection could be fed back as validation inputs to the model, and used to determine whether the model is trustworthy or needs to be updated. Teams that use continuous integration servers may also be able to enforce architectural requirements that mandate model updates when certain properties of the system change, such as when a new network call is created or a new package is added. Updating the model incrementally, while the context is fresh, may reduce the burden and make the model more generally useful, since it will always be ready for use.
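One low-tech way to get that kind of enforcement, sketched here under the assumption that the model is a checked-in list of allowed dependency edges: a CI step diffs the dependencies observed in the code against the model and fails the build when the code has drifted, so the model update has to ship with the change. The edge lists below are invented; a real version would parse imports or network-call sites out of the source tree.

```python
# Sketch of a CI gate: fail the build when the code contains a
# dependency edge the checked-in model does not declare.
# (Both edge sets here are hypothetical placeholders.)

# Edges the model declares: (dependent, dependency).
allowed = {("api", "auth"), ("api", "billing"), ("auth", "storage")}

def check_model(observed, allowed):
    """Return the edges present in the code but absent from the model."""
    return set(observed) - set(allowed)

# Edges actually found in the code; note the new billing -> storage call.
observed = {("api", "auth"), ("api", "billing"),
            ("auth", "storage"), ("billing", "storage")}

drift = check_model(observed, allowed)
if drift:
    # In CI this would be a non-zero exit code; here we just report it.
    print("model out of date; undeclared dependencies:", sorted(drift))
```

The interesting part isn't the set difference, of course; it's that the check runs on every commit, so the model can never silently rot.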
To achieve this we need better integration between modeling tools and the systems we hope to use them on. If CI systems can query the model and enforce system properties, quality will improve. If architects and engineers can test proposed changes during the design phase, they will build better systems. An on-call engineer woken in the middle of the night by a pager will be happy (or at least less peeved) to find a probable-cause assessment generated by comparing the system model to the actual system's performance. If we can get there, I think there's a lot to be gained from a more systematic approach to engineering our systems.
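That probable-cause idea can be sketched very simply: if the model predicts a latency budget per component, comparing those predictions against live measurements and ranking the deviations gives the on-call engineer a starting point. All component names and numbers below are invented, and relative deviation is just one plausible ranking metric.

```python
# Sketch: rank components by how far their observed latency deviates
# from what the model predicts, as a first-pass probable-cause report.
# (Component names and latency figures are invented for illustration.)

predicted_ms = {"api": 20.0, "auth": 5.0, "storage": 15.0}
observed_ms  = {"api": 22.0, "auth": 45.0, "storage": 16.0}

def probable_causes(predicted, observed):
    """Components sorted by relative deviation from the model, worst first."""
    deviation = {c: (observed[c] - predicted[c]) / predicted[c]
                 for c in predicted}
    return sorted(deviation, key=deviation.get, reverse=True)

print(probable_causes(predicted_ms, observed_ms))  # ['auth', 'api', 'storage']
```

A page that arrives annotated with "auth is running 9x over its modeled budget" is a very different 3 a.m. experience than a bare latency alarm.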