Another brand new day. Yesterday went very smoothly, and the format really works well. The papers discussed yesterday and today can be found here: http://dl.acm.org/citation.cfm?id=3129790 (or open it directly with the UU proxy!).
paper 4 “Modeling Embedded Systems Using a Tailored View Framework and Architecture Modeling Constraints” by Andreas Morgenstern, Pablo Antonino, Thomas Kuhn, Patrick Pschorn, and Benno Kallweit, presented by Sandra Schröder
Most architecture frameworks are multi-purpose, and thus very generic. For embedded systems, many properties need to be modelled, such as real-time properties or energy consumption. These cannot easily be captured in these generic frameworks. The authors propose an approach based on the SPES methodology that offers tailored elements for modeling architectures of embedded systems and an automated
mechanism to specify and perform quality checks in the models. It proposes four views: context, functional, logical, and technical, cross-cut by data models. Their main question is consistency: how can we show that the technical view adheres to the logical view? They added a software design view, with crosscutting traceability views that allow expressing properties over different views. A traceability view takes elements from different views and connects them with arcs labeled e.g. with “realizes” and “deploys”. Such views enable completeness checks: “Every X is realized by at least one Y”, and consistency checks: “Every X realizes at least one Y”. Initial evaluation has been done on two cases: the Cherenkov Telescope Array and a case in the automotive domain, but Sandra raises the question: “what is really new in the approach?”. As the author states, the approach started in 2014. The problem is that people in the domain have trouble following the architecture approach; this framework guides them in model-based architecting. The framework is the core that is extended for each domain. However, the source code level is not (yet) added to it. One of the reasons is that most people in the domain want to use code generation. So, a next interesting question would be whether the code complies with the models.
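The two check patterns over a traceability view are easy to picture as set operations. A minimal sketch (my illustration with invented element names, not the authors' tooling):

```python
# Hypothetical example: logical components realized by technical components,
# modeled as labeled arcs (source, target) in a traceability view.
logical = {"Sensor", "Controller", "Actuator"}
technical = {"SensorDriver", "ControlTask"}
realizes = {("SensorDriver", "Sensor"), ("ControlTask", "Controller")}

def completeness(targets, arcs):
    """'Every X is realized by at least one Y': targets with no incoming arc."""
    covered = {dst for (_, dst) in arcs}
    return targets - covered

def consistency(sources, arcs):
    """'Every Y realizes at least one X': sources with no outgoing arc."""
    linked = {src for (src, _) in arcs}
    return sources - linked

print(completeness(logical, realizes))   # {'Actuator'}: nothing realizes it
print(consistency(technical, realizes))  # set(): every technical element realizes something
```

The payoff of making the arcs explicit is exactly this: both checks become trivial queries over the view.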
paper 5 “Are Code Smell Detection Tools Suitable for Detecting Architecture Degradation?”, by Jörg Lenhard, Mohammad Mahdi Hassan, Martin Blom and Sebastian Herold, presented by Pablo Antonino
The problem is that architecture and implementation evolve independently. This raises the threat of architectural violations, caused e.g. by developers having a local scope, being under time pressure, etc. A code smell is a surface indication that usually corresponds to a deeper problem in the system. In this way, code smells could indicate architecture violations as well. This paper tries to use code smells to find architectural deterioration. They used JabRef as their object under study. With JITTAC they found 1459 inconsistencies, and 186 classes that participated in them. Next, they inspected all participating classes to identify the causes of at least one inconsistency. With tools such as FindBugs, PMD, SonarQube and Sonargraph, they collected all kinds of metrics, created a big data collection, added response and predictor variables, resulting in a set of 10 metrics, and did a combinatorial analysis. They created a Bayes model. Initial studies found a relation between code smells detected by humans and architectural inconsistencies. However, this study shows that the tooling is not that good: “it seems that it is not possible to use contemporary code smell detection tools out-of-the-box to aid in the task of architectural repair”. Conclusion: more sophisticated detection mechanisms for architectural inconsistencies are needed. As Pablo states: 10 years ago he got similar results, so he had hoped things would have improved. The problem with this type of tool is that humans are able to detect code smells that relate to architectural deterioration, but the tools are not. How can we improve those tools? What is needed to make these tools perform better…
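The core of the analysis is relating two per-class labels: does a tool report a smell, and does the class participate in an inconsistency? A toy cross-tabulation with invented data (not the paper's dataset) shows the shape of the question:

```python
# Invented example data: class name -> (smell reported?, inconsistency participant?)
classes = {
    "ClassA": (True,  True),
    "ClassB": (True,  False),
    "ClassC": (False, True),
    "ClassD": (False, False),
}

# Build the 2x2 contingency table over the two boolean labels.
table = {(s, i): 0 for s in (True, False) for i in (True, False)}
for smell, inconsistent in classes.values():
    table[(smell, inconsistent)] += 1

# A smell report is a useful predictor only if the agreeing cells
# (True, True) and (False, False) dominate; in this made-up data they
# do not, mirroring the paper's negative finding for out-of-the-box tools.
print(table)
```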
paper 6: “Architecture Conformance Checking with Description Logics”, by Sandra Schröder and Matthias Riebisch, presented by Najd Altoyan
Current architecture approaches lack conceptualization and formalization. In this paper, they propose a formal approach to flexibly define a “domain-specific” architectural language capturing the concepts and rules. SROIQ is a description logic that is sufficiently expressive, decidable and well supported. It also forms the basis of ontologies. One could for example define rules that are violated in event-based architectures, but allowed in component-based architectures. DL makes a distinction between the TBox and the ABox: the TBox (terminological) holds the concept definitions, the ABox (assertional) holds facts about instances of those concepts.
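To make the TBox/ABox split and a style rule concrete, here is a closed-world Python toy (nothing like full SROIQ, and the element names are invented): the ABox asserts facts about individuals, and a TBox-style rule constrains them.

```python
# ABox: asserted facts about individuals in a hypothetical event-based system.
is_a = {("OrderService", "Service"), ("OrderQueue", "EventChannel")}
calls = {("OrderService", "OrderQueue")}

# TBox-style rule for this toy architecture style: a Service may only call
# EventChannels, never another Service directly.
def violations(is_a, calls):
    services = {x for (x, c) in is_a if c == "Service"}
    channels = {x for (x, c) in is_a if c == "EventChannel"}
    return {(s, t) for (s, t) in calls if s in services and t not in channels}

print(violations(is_a, calls))  # set(): no direct service-to-service calls here
```

Adding a direct service-to-service call to `calls` would make it show up as a violation; the same rule, stated over concepts instead of individuals, would be allowed in a component-based style.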
The approach has been validated on a SOA example. The ontology is extracted from the set of rules. Next, FAMIX is used to create a source code meta model. They built an ABox representation of the code based on this model, and then created a mapping connecting the architecture ontology with the source code representation, based on SWRL rules. Based on these rules, the code can be checked for consistency against the initial architectural ontology. Limitations: only a small part was formalized, and further evaluation, e.g. on expressiveness, is still needed. Some inconsistencies were not yet detected, so that also requires additional research.
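The mapping step can be sketched as a lifting rule: code-level facts (a FAMIX-like class list) are raised into architecture-level assertions, after which the architectural checks can run over code. This uses an invented naming convention, not the paper's actual SWRL rules:

```python
# Hypothetical code-level facts: class names extracted from the source model.
classes = ["OrderServiceImpl", "OrderRepository", "StringUtils"]

def lift(classes):
    """Lift classes into architecture-level assertions via a naming rule."""
    abox = set()
    for c in classes:
        if c.endswith("ServiceImpl"):
            abox.add(("Service", c))
        elif c.endswith("Repository"):
            abox.add(("Repository", c))
        # classes matching no rule stay architecturally unmapped
    return abox

print(lift(classes))  # StringUtils gets no architectural concept
```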
Another limitation pointed out by the author is that the link between source code and architecture should be improved, e.g., by making the mapping explicit in another ontology.
Important to notice: the rules can only be used to infer more knowledge (e.g. the mapping). How about consistency rules? (Tricky, due to the open world assumption…) Still, it is great to link models and allow reasoning over them… Feels very much like what I did during my postdoc time, using Prolog and ontologies to define auditing rules.
paper 7: “Towards a Well-Formed Software Architecture Analysis”, by Najd Altoyan and Dewayne E. Perry, presented by Leo Pruijt
Correctness of the software architecture is crucial for the success of software products, and early evaluation and verification of the system help in identifying risks before the system is built. That is the idea… However, while many languages have been proposed, almost none have been adopted in industry. Many are simply not intuitive enough for the software developer. There is a trade-off: correctness and consistency require formal methods, but those hamper developers, who tend to use informal notations (like UML). An architecture is well-formed if it is complete, self-sufficient and self-contained. Their solution is an approach based on Alloy and an existing semi-formal model: AAM (Abstract ADL Model). An architectural element is defined by its services, dependencies and constraints.
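One aspect of well-formedness is easy to sketch from that element definition (my reading, with invented element names, not the authors' Alloy model): the architecture is self-contained if every declared dependency is satisfied by some element's provided services.

```python
# Hypothetical architectural elements with provided services and dependencies.
elements = {
    "UI":      {"provides": {"render"}, "depends": {"query"}},
    "Backend": {"provides": {"query"},  "depends": {"store"}},
}

def unsatisfied(elements):
    """Dependencies not provided by any element in the architecture."""
    provided = set().union(*(e["provides"] for e in elements.values()))
    needed = set().union(*(e["depends"] for e in elements.values()))
    return needed - provided

print(unsatisfied(elements))  # {'store'}: not self-contained without a storage element
```

In the actual approach this kind of check is phrased as an Alloy assertion and discharged by the Alloy analyzer rather than computed by hand.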
It looks quite similar to the FAM: features are modeled on top, but information flows are between modules only, and not between features, as we do…
In their approach the ADL is in XML, which is translated (still manually 🙁 ) into an Alloy specification, on which analysis tools can be run to validate it against the given analysis rules. Why not use Alloy directly? Alloy is a full-blown language, and not all of its details are needed for architects.
In the afternoon we discussed TEAMMATES again. Sebastian presented Structure101 and how it performs on the software. Great tool, maybe something for my students to work with…