All papers are available in the ACM Digital Library.
Dependency Bugs: The Dark Side of Variability, Reuse and Modularity
A dependency bug is a software fault that manifests itself when accessing an unavailable asset. Dependency bugs are pervasive and we all hate them. But why do they appear?
I will present a case study of dependency bugs in the Robot Operating System (ROS). From one point of view, ROS is a highly general distributed architecture for building robotics systems, supported by a communication middleware and a large number of reusable and configurable components. We will discuss the results of a qualitative (N=78) and a quantitative (N=1354) analysis of bug reports in ROS, and contrast them with 19,553 reports in the top 30 GitHub projects. A definition and a taxonomy of dependency bugs emerge from these data. We find that these bugs are surprisingly pervasive, and very annoying: as many as 15% (!) of all reported bugs are dependency bugs. They also contribute tremendously to the (possibly incorrect) perception of new developers that the system is unstable, unpredictable, or hard to use.
It seems that dependency problems are an inherent cost paid for software modularity and reuse. Yet, we rarely discuss them when we evaluate our research ideas and tools. They can be considered a technical debt introduced by the generality of software architectures, a debt that grows the more decoupled the software becomes, when components evolve at various speeds and are controlled by separate maintainers. Perhaps we should include this cost as an explicit criterion in the evaluation of reuse ideas in software. Perhaps it is a cost worth paying for the benefits.
On the other hand, dependency bugs do not seem impossible to combat. We have built simple lightweight linters to find some of them. Lightweight tools can find dependency bugs efficiently, although it is challenging to decide which tools to build, and difficult, though hopefully not impossible, to build general ones. Perhaps the VaMoS community can help, both by building tools that find and eliminate dependency problems, and by identifying the general architectures, or ecosystem organizations, that minimize the number of dependency problems without losing agility.
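To make the idea of a lightweight dependency linter concrete, here is a minimal sketch, assuming a ROS-like setting where a package declares its dependencies in a package.xml manifest. The function names, the heuristic, and the ignore list are my own illustration, not the actual tool from the talk:

```python
import re
import xml.etree.ElementTree as ET

def declared_deps(package_xml: str) -> set:
    """Collect dependency names declared in a ROS package manifest."""
    root = ET.fromstring(package_xml)
    deps = set()
    for tag in ("depend", "build_depend", "exec_depend", "run_depend"):
        deps.update(e.text.strip() for e in root.findall(tag) if e.text)
    return deps

def imported_packages(source: str) -> set:
    """Top-level package names imported by a Python source file."""
    pattern = re.compile(r"^\s*(?:from|import)\s+([A-Za-z_]\w*)", re.MULTILINE)
    return set(pattern.findall(source))

def missing_deps(package_xml: str, source: str,
                 ignore=frozenset({"os", "sys", "re"})) -> set:
    """Imports with no matching manifest entry: candidate dependency bugs."""
    return imported_packages(source) - declared_deps(package_xml) - ignore
```

For example, a package that imports numpy without declaring it would be flagged, while standard-library modules on the ignore list are skipped. Such a check is deliberately shallow, which is exactly what makes it cheap to run across a whole ecosystem.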
Joint work with: Anders Fischer-Nielsen, Zhoulai Fu (IT University of Copenhagen) and Ting Su (ETH Zurich). Partially sponsored by the European Commission through the H2020 ROSIN project (Grant No. 732287).
Andrzej Wąsowski works on the design and use of technologies that improve the quality of software, including aspects such as correctness and maintainability. He has worked extensively with software product line methods: ways to develop software for similar products at lower cost but with higher quality. He has collaborated with open-source projects (the Linux kernel and ROS, among others) and with industry. Currently, he is investigating quality assurance methods for robotics platforms, privacy and information flow in machine learning programs, and the generation of patches for locking bugs in the Linux kernel.
Andrzej Wąsowski is a professor of Software Engineering at the IT University of Copenhagen (ITU). He holds an MSc degree from the Warsaw University of Technology and a PhD degree from ITU. He has previously held visiting positions at Aalborg University (Denmark), INRIA Rennes (France) and the University of Waterloo (Canada).
Next Steps in Variability Management due to Autonomous Behaviour and Runtime Learning
Due to uncertainty, contemporary variability models face the challenge of representing runtime variability, allowing the modification of variation points during the system's execution and underpinning the automation of the system's reconfiguration. I argue that a runtime representation of feature models (i.e. a runtime model of features) is required to automate decision making.
Software automation and adaptation techniques have traditionally required a priori models of the dynamic behaviour of systems. Given the uncertainty present in the scenarios involved, such an a priori model is difficult to define. Even if it can be foreseen, its maintenance is labour-intensive and, due to architecture decay, it is also prone to becoming out of date. Different techniques, such as machine learning or mining software component interactions from system execution traces, can be used to build a runtime feature model, which is in turn used to analyze, plan, and execute adaptations, or to synthesize emergent software on the fly.
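As a hedged illustration of what a runtime feature model might look like (the class, feature names, and constraint encoding are my own simplification, not the speaker's formalism), a minimal version is a feature tree with cross-tree requires constraints whose activation state is updated while the system runs and can be checked for consistency before an adaptation is executed:

```python
class RuntimeFeatureModel:
    """Minimal runtime feature model: a feature tree with cross-tree
    'requires' constraints, whose activation state changes at runtime."""

    def __init__(self):
        self.parent = {}    # feature -> parent feature (or None for the root)
        self.requires = {}  # feature -> set of required features
        self.active = set()

    def add_feature(self, name, parent=None, requires=()):
        self.parent[name] = parent
        self.requires[name] = set(requires)

    def activate(self, name):
        """Activating a feature pulls in its ancestors and requirements."""
        if name in self.active:
            return
        self.active.add(name)
        if self.parent[name] is not None:
            self.activate(self.parent[name])
        for dep in self.requires[name]:
            self.activate(dep)

    def is_valid(self):
        """Every active feature's parent and requirements must be active."""
        return all(
            (self.parent[f] is None or self.parent[f] in self.active)
            and self.requires[f] <= self.active
            for f in self.active
        )
```

A planner could then treat each valid activation state as a candidate configuration, which is where learned or mined knowledge about feature interactions would plug in.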
Another well-known problem posed by the uncertainty that characterizes autonomous systems is that different stakeholders (e.g. end users, operators and even developers) may not understand them due to their emergent behaviour. In other words, the running system may surprise its customers and/or developers. The lack of support for explanation in these cases may compromise the trust of stakeholders, who may eventually stop using the system. I argue that variability models can offer great support for (i) explanation, using traceability to understand the diversity of the causes and triggers of decisions during execution and their corresponding effects, and (ii) better understanding the behaviour of the system and its environment.
An extension, and potentially a reframing, of the techniques associated with variability management may be needed to help tame uncertainty and to support explanation and understanding of such systems. The use of new techniques, such as machine learning, exacerbates the current situation. At the same time, however, machine learning techniques can also help, for example to explore the variability space, as members of the VaMoS research community have already recognised. What else can the community do to face these challenges?
I also argue that we need to meaningfully incorporate techniques from areas such as artificial intelligence, machine learning, optimization, planning, decision theory, and bio-inspired computing into our variability management techniques, to provide explanation and management of the diversity of decisions, their causes, and their associated effects. My own previous work has progressed to reflect what is discussed above. I would like to share thoughts about it and hear feedback from the VaMoS community.
My research is joint work over the years with different collaborators and co-authors, including my own research students. Hopefully they have been acknowledged in the References appearing below. Partially sponsored by The Leverhulme Trust Fellowship "QuantUn: quantification of uncertainty using Bayesian surprises" (Grant No. RF-2019-548/9) and the EPSRC Research Project Twenty20Insight (Grant No. EP/T017627/1).
Nelly Bencomo is a Senior Lecturer in the Computer Science Research Group at Aston University, Birmingham, UK. Before joining Aston University, Nelly was a Marie Curie Fellow at Inria, France (2011-2013). Her research spans Software Engineering and Self-adaptive and Autonomous Systems. In Software Engineering, she is best known for her work on decision-making under uncertainty, requirements-aware systems and runtime models. She is a recipient of a Leverhulme Fellowship in the UK (2019-2020). She has received Best Paper awards at the international conferences MODELS and RE (both in 2019), as well as earlier nominations. Nelly has chaired conferences such as SEAMS 2014 and ICAC 2018. She has also founded and run several workshops, such as AIRE (Workshop on the synergies of AI and RE) and Requirements@run.time at the RE conference, and the Models@run.time workshop at the MODELS conference. Currently, her research focuses on decision-making under uncertainty and explanation using machine learning techniques. As such, she currently leads the research projects QuantUn: quantification of uncertainty using Bayesian surprises (2019-2020, Leverhulme, UK and Inria, France) and Twenty20Insight (2020-2023, EPSRC, UK).
E037 - Faculty of Computer Science, Otto von Guericke University Magdeburg
09:00-17:30: MODEVAR Pre-Conference Event (free of charge, please indicate during registration whether you want to attend)