# Software Intelligence Vision
Ultimately, any software strives to satisfy its users to maximize profit. However, user needs evolve: they change and they expand. Moreover, not only does the market situation change, but the underlying technology and hardware evolve as well. Developing software in such an agile environment is a challenge: keeping up is not trivial, and it heavily depends on the quality of the software.
To lead a software project to long-term success, software engineers face numerous decisions. From choosing the right underlying hardware and technology to prioritizing which features to implement, decisions have to be made daily. While facilitating continuous software change, managers and developers also have to consider the following: How long will our code base remain maintainable? Is our software sufficiently tested? Does our architecture satisfy current and future requirements? These decisions are fundamental and have potentially long-term impact on the success (or failure) of the project.
For all these questions, intuition and gut feeling are poor decision criteria. Instead, decisions should be based on solid data from the specific project. Yet, even though this data is technically available, it often remains unused because it is implicit, hard to extract, hard to visualize, or not trivial to interpret. Sound decision making in software engineering requires Software Intelligence: Just as business intelligence provides a "set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis purposes" (Wikipedia), software intelligence targets the same purpose: it uses data mining to provide a solid foundation for decision making in software development.
Today, Teamscale is the only Software Intelligence Platform on the market. With Teamscale, you can steer your project's success based on solid data from your software: Integrating various sources such as code repositories, architecture documentation, test executions, issue trackers, and runtime execution in production, Teamscale provides transparent answers to the following questions:
- Code Quality: Do we have more spaghetti code than clean code?
- Test Quality: Did we actually test what we changed or did we miss something?
- Test Efficiency: How can we speed up our existing tests?
- Architecture Quality: Does our code still match our architecture?
- Feature Management: Is all of our code in fact used by the users?
Being able to control these aspects makes your project more likely to succeed - and also more fun to work in. Teamscale raises red flags early when tempting feature pressure starts to deteriorate your software quality. Small quality deviations can often be remedied quickly when developers discover them early, but at some point the consequences of failing to do so become unbearable. This is not something we just make up: we have all seen software that needed to be rewritten. Research has also shown that software quality - with respect to code, tests, and architecture - gradually decays over time if no effective countermeasures are taken [1 - 5]. So act before it is too late! And do not waste money on maintaining code that is not even executed by the users: even though such code seems to just sit there at no further cost, do not underestimate the maintenance effort it creates.
To make your project a success, Teamscale offers a broad range of perspectives. It addresses your needs and provides appropriate views of your software's quality regardless of whether you are a developer, a tester, an architect, a product owner, or a manager. It adapts flexibly to your chosen development process and also supports feature-branch-driven development. In contrast to many existing tools, it not only analyzes the main development trunk, but can be tailored to every single branch.
# Further Reading
- Did We Test Our Changes? Assessing Alignment between Tests and Development in Practice
S. Eder, B. Hauptmann, M. Junker, E. Juergens, R. Vaas, and K-H. Prommer.
- Does Code Decay? Assessing the Evidence from Change Management Data
S. G. Eick, T. L. Graves, A. F. Karr, J. S. Marron, and A. Mockus, In: IEEE Transactions on Software Engineering. 2001
- The loss of architectural knowledge during system evolution: An industrial case study
M. Feilkas, D. Ratiu, and E. Jürgens, In: Proceedings of the 17th IEEE International Conference on Program Comprehension (ICPC’09). 2009
- Software aging
D. L. Parnas, In: Proceedings of the 16th International Conference on Software Engineering (ICSE). 1994
- Continuous Software Quality Control in Practice
D. Steidl, F. Deissenboeck, M. Poehlmann, R. Heinke, and B. Uhink-Mergenthaler, In: Proceedings of the IEEE International Conference on Software Maintenance and Evolution (ICSME). 2014
- How much does unused code matter for maintenance?
S. Eder, M. Junker, E. Jürgens, B. Hauptmann, R. Vaas, and K.-H. Prommer, In: Proceedings of the 34th International Conference on Software Engineering (ICSE'12). 2012