I work on a large software project in the cable and telecommunications space. We have a team of about 50 developers that grows every month. What once started as a simple .NET web application has since grown into a large multi-tenant Azure-hosted enterprise solution with over 100 projects, 7 builds, and a host of different technologies and patterns. We’ve got a skilled team and everyone has been working hard to keep quality and consistency high, but as with any software project, things eventually slip past the radar and start to cause problems.
One tool that I have employed successfully in the past to help find and catalog these issues is NDepend. Using the new version 6, I was able to pull together reports to help our development teams identify potential problem areas and add tech debt items to our backlog to address them before they grew too big. NDepend is a big tool, and from the moment you first open it up and point it at a project it starts presenting all manner of data and visualizations. This can be a little daunting at first, but once you start to break it down the data quickly becomes useful.
The first step is to run VisualNDepend.exe. This launches the main studio and displays the start page. From there it is a simple journey of selecting “Analyze a set of .NET assemblies” and either browsing to a build folder or selecting a Visual Studio solution file. A dialog displaying the list of assemblies that will be analyzed contains a big play button and a check-box marked “Build Report.” Press the button and you’re off to the races.
Once analysis is complete, a web browser opens showing the main report as well as a dialog for NDepend beginners. This describes some common destinations to begin analysis, such as viewing the NDepend dashboard, showing the interactive graph, or browsing code rules. Descriptions are provided for each to help you understand what information is being presented and how it can be used.
I begin by reviewing the application metrics. These are some fun statistics that give a good idea of the size and scope of a project. Counts such as the number of source files, namespaces, and methods frame the maintenance discussion. Knowing the number of source files is useful later in the analysis cycle when reviewing rule violations, to get a mental picture of how many issues might exist per physical file, for example. Warnings won’t be evenly distributed across files, but it gives a good human-scale sense of the scope of potential issues.
The metrics above represent our largest solution. We have several other smaller solutions, but for this analysis I focused primarily on our core project. At 67,000 lines of code it represents almost two-thirds of the project team’s .NET assets. A thousand source files may not seem like a lot on the surface, but we definitely feel that count when it comes time to refactor. Below the application metrics is a section that summarizes the results of NDepend’s rules analysis. Rules are the primary mechanism for detecting and categorizing potential issues during the analysis cycle, and the summary pairs well with the metrics section to give an idea of the size of potential problems.
For a project our size, 92 violations isn’t that bad. The colored boxes in the top-left show a breakdown of the rules by severity: in our case 55 passed, 80 had warnings, and the remaining 12 represent rules for more serious issues. The total of 92 violated rules is simply the sum of the warnings and the critical violations: 80 + 12 = 92.
From here I will usually move into the visualizations before digging into rules further. I find at the start of the analysis cycle that I like to get my arms around the scale of the report. Fortunately NDepend provides several options for visualizing the solution from a variety of different perspectives. The browser report contains links to four such visualizations: the dependency graph, dependency matrix, tree-map metric view, and a chart showing abstractness vs. instability.
I start at the right. The abstractness vs. instability graph grades each assembly on these two factors. Assemblies that are abstract and stable or concrete and unstable define the edges of a theoretical green zone, along with assemblies that are a good combination of both. Assemblies that are both stable and concrete occupy the zone of pain, while assemblies that are abstract and unstable occupy the zone of uselessness. While the verbiage is a bit stark for some audiences, technically minded folks will often see this as a quick look for assemblies with confused responsibilities.
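For readers who want the numbers behind the picture: instability is I = Ce / (Ce + Ca) (the ratio of outgoing couplings to total couplings), abstractness is A = Na / Nc (the ratio of abstract types to total types), and the distance from the ideal “main sequence” is |A + I − 1|. NDepend exposes these directly in CQLinq, so a custom rule flagging assemblies that drift into the zone of pain might be sketched like this (the 0.7 and 0.3 thresholds are my own illustrative choices, not NDepend defaults):

```cs
// Sketch: flag stable, concrete assemblies far from the main sequence.
// NormDistFromMainSeq is NDepend's normalized |A + I - 1| metric.
warnif count > 0
from a in Application.Assemblies
where a.NormDistFromMainSeq > 0.7 &&   // far from the main sequence
      a.Instability < 0.3              // on the stable side => zone of pain
select new { a, a.Abstractness, a.Instability, a.NormDistFromMainSeq }
```

This only runs inside NDepend’s rule editor, but it shows how the chart and the rules engine draw on the same underlying metrics.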
For the most part our project’s assemblies occupy the edges of the yellow band. There are a few in the zone of pain that could use attention, but for the most part everything looks OK. It’s a nice gut-check for separation of concerns. The assemblies in the zone of pain are generated assemblies from SqlMetal. Generated ORM layers typically result in knotted masses of code, but since they are generated they don’t suffer the usual maintenance drawbacks.
Class and method names have been blurred (by me) deliberately.
The next diagram I review is the dependency matrix. This shows references between pairs of assemblies. Larger numbers don’t necessarily represent bad news but it does point to an area to examine. When I see a high count between a pair I will dig in a little further to understand if it is really necessary and see if there are opportunities to break responsibilities and dependence down a little. Core libraries, ORM assemblies, and interface layers are often highly coupled as they provide hook points that many different parts of the system interact with. In addition to looking for problems I’ll use this grid to validate boxes I expect to be empty, in cases where pairs of assemblies should explicitly not reference each other. This snapshot is from the preview included in the browser report. However, there is a full and interactive dependency matrix included in the main NDepend studio as well.
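Those “should be empty” boxes can also be enforced as a rule rather than eyeballed on the matrix. A minimal CQLinq sketch, with hypothetical assembly names standing in for real ones, might be:

```cs
// Sketch: fail the analysis if the UI assembly ever references
// the data layer directly. Assembly names here are placeholders.
warnif count > 0
from a in Application.Assemblies
where a.Name == "MyCompany.Web" && a.IsUsing("MyCompany.DataAccess")
select a
```

Encoding the expectation this way means an accidental reference shows up as a rule violation in the next analysis run instead of relying on someone spotting a newly filled cell in the grid.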
Once I’ve reviewed the overview information, I dive into the main NDepend studio and head to the queries and rules explorer. This contains the meat of the actionable information for my role. The rules engine searches for code that violates one or more high-level programming principles. The explorer allows you not only to review these violations but also to create and manage a custom set of rules that more closely matches the practices of your particular development team.
NDepend comes with a large set of built-in rules that can be used as examples for building out your own custom suite. I’ll usually start by reviewing the built-in set of rules to guide me towards any particular problem spots in the code. The dashboard panel shows an overview similar to that in the browser report but this time the code rules section includes more information as well as interactive links. The option for “Recent Violations Only” is an invaluable feature that allows you to review only the violations that have occurred in the most recent refactor. This is a good way to see if attempted fixes are in fact introducing new problems of their own.
From this view we can see that there are 361 critical violations of 12 rules. There are considerably more in the warning category, but consider that this analysis includes a lot of generated code. Code generators don’t expect anyone messing with the code they generate and as such often violate common principles that would otherwise make the code more difficult to manage. To focus our analysis I opened the Project Properties dialog and removed a few assemblies from the analysis.
Trimming out some test assemblies and generated code focuses the analysis on the core of our functionality. This lets me focus on the most important issues for us to address.
Clicking on any of the links in this panel applies a filter to the Queries and Rules Explorer panel. I start here to dig into critical violations first. These are typically, but not always, issues that should be addressed right away.
Selecting a rule opens the rule editor window. This contains the rule script as well as a list of methods that are found to be in violation of the selected rule. I selected the “Methods too complex – critical” rule. The script at the top is the NDepend definition for the rule. The dialog below shows the method that was found to be in violation.
The class and method names have been blurred (by me) deliberately.
While I can’t share the code, needless to say this was a very large method with far too many responsibilities. It became a prime candidate for refactoring and was squashed within just a couple of days. Interestingly, this method was built to support unit testing. Its purpose was to reset a large object, and the reset path wasn’t necessary for our production run-time, only for the tests. We moved the method to its appropriate location in a test support assembly, simplifying our core project.
The rule script is easy to modify if you disagree with the severity of a rule or would like to make it stricter or more relaxed. In the example above the rule is searching for methods with one of three traits:
- Cyclomatic complexity > 30
- IL Cyclomatic complexity > 60 (because IL is chatty)
- IL Nesting Depth > 6
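Pieced together from the three conditions above, the rule script is an ordinary CQLinq query. My reconstruction of the built-in rule looks roughly like this (treat it as a sketch, not a verbatim copy of the shipped rule):

```cs
// Methods too complex - critical (approximate reconstruction)
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 30 ||
      m.ILCyclomaticComplexity > 60 ||   // IL is chatty, so the bar is higher
      m.ILNestingDepth > 6
select new { m, m.CyclomaticComplexity,
             m.ILCyclomaticComplexity, m.ILNestingDepth }
```

The `warnif count > 0` prefix is what turns a plain query into a rule: any method matched by the `where` clause becomes a violation in the explorer.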
While some of this may seem a bit daunting it’s pretty easy to start changing values and seeing the effects. One exercise I found helped me in learning what these metrics mean was to take a method I considered to be a great example and move values around until it violated or failed to violate particular rules. In the example above the method was flagged because of its IL cyclomatic complexity. It has a value of 75 where the rule is looking for anything greater than 60. Changing the rule in the editor to look for values greater than 90 removes the violation from the list in real time. In this way it is easy to experiment with the effects of different rule settings.
The editor comes with a real-time parser and syntax highlighter, so if you mess things up it’s easy to see how to correct it. Here I’ve changed 60 to NotANumber. The editor highlights the issue and provides a description in the error list indicating that ‘NotANumber’ does not exist in the current context. This hints at another feature of the editor, which is the ability to use variables as well as literals within these rules. The editor comes with Intellisense and auto-complete so exploring the available methods is easy.
Here I’ve done something silly that I don’t recommend, but it shows off the power of the editor. The rule now looks for methods where the IL Cyclomatic Complexity is greater than the number of assemblies being analyzed.
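In CQLinq, that silly rule looks something like the sketch below. The point is that the right-hand side of a comparison can be any expression over the code model, not just a literal:

```cs
// Silly but illustrative: the threshold is itself a query result.
warnif count > 0
from m in JustMyCode.Methods
where m.ILCyclomaticComplexity > Application.Assemblies.Count()
select new { m, m.ILCyclomaticComplexity }
```

A more sensible use of the same capability would be deriving a threshold from a statistic such as an average across the codebase, rather than the assembly count.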
In this overview I’ve only scratched the surface and can’t hope to cover all of the features that are available. I’ve been an NDepend user for over a decade and have still only used a small amount of what’s there. Like every architect, I’ve customized my process and workflow to exploit the tools I need when I need them. NDepend provides me with guidance towards problems, focusing my review attention so that I can use the small time slices I have available to look at the largest issues first. The tool can be integrated much more deeply, replacing or augmenting other static analysis solutions. At a previous company we chained NDepend analysis into our continuous integration cycle, with the results of critical rule violations being routed to me. It certainly simplified the weekly code review meetings I was having with our offshore teams. How best to fit the tool into a workflow depends on the situation and the project.
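For the continuous integration scenario, the usual entry point is NDepend’s console runner, which performs the same analysis as the studio and emits the report for the build to publish or parse. A minimal invocation (the paths here are illustrative) looks like:

```
"C:\Program Files\NDepend\NDepend.Console.exe" "C:\Builds\MyProject.ndproj"
```

The `.ndproj` file is the NDepend project saved from the studio, so the same rule set that drives interactive analysis drives the build-time checks.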
The usual disclaimers apply. My views are my own and do not represent those of my employer. I’ve known the author of NDepend for over ten years and can attest that he delivers a very high quality product and provides great support. I am not compensated for my words, but he has always been kind enough to provide me with evaluation copies of the software, for which I am very thankful.
If you’re curious about what NDepend could do on your projects, I encourage you to get involved with a free trial and …