This is a repost of my article, originally published on CodeProject on 24 May 2016.
The Story of a Memory Dump
Memory dumps are a common way to diagnose various problems with our applications (such as memory leaks or application hangs). You may think of them as photos that allow you to look into the past and notice all the details you might have missed. There are different types of memory dumps, which we may compare to different types of photos we take:
- minimal – focus is on one element (such as an exception) and the whole background is blurry; they take very little space on the hard drive (e.g. 2MB)
- minidumps with thread and process data/heaps/stacks/exception data, etc. – depending on how many options we choose, they might be very detailed high-resolution pictures or very blurry ones; the space they take can vary from tens of MBs to several GBs
- full memory dumps – these can be compared to high-resolution pictures; they are as big as the process's entire committed virtual memory (a code sketch for collecting the various dump types follows this list)
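If you would like to take such a photo yourself, one way is to call the MiniDumpWriteDump function from dbghelp.dll. Below is a minimal C# sketch – the enumeration contains only a small subset of the real MINIDUMP_TYPE flags, and error handling is reduced to a single exception:

```csharp
using System;
using System.ComponentModel;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class DumpWriter
{
    // a small subset of the MINIDUMP_TYPE flags from the Windows SDK
    [Flags]
    enum MinidumpType : uint
    {
        MiniDumpNormal = 0x0,         // the "minimal" photo
        MiniDumpWithDataSegs = 0x1,
        MiniDumpWithFullMemory = 0x2, // the full-resolution photo
        MiniDumpWithHandleData = 0x4,
        MiniDumpWithThreadInfo = 0x1000
    }

    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId,
        SafeFileHandle hFile, MinidumpType dumpType, IntPtr exceptionParam,
        IntPtr userStreamParam, IntPtr callbackParam);

    // usage: DumpWriter <pid> <output-file>
    static void Main(string[] args)
    {
        var process = Process.GetProcessById(int.Parse(args[0]));
        using (var fs = new FileStream(args[1], FileMode.Create))
        {
            // switch MiniDumpNormal to MiniDumpWithFullMemory for a full dump
            if (!MiniDumpWriteDump(process.Handle, (uint)process.Id, fs.SafeFileHandle,
                    MinidumpType.MiniDumpNormal, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero))
                throw new Win32Exception(Marshal.GetLastWin32Error());
        }
    }
}
```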
Every developer knows that unit testing improves the quality of the code. We also profit from static code analysis and use tools such as SonarQube in our build pipelines. Yet, I still find that many developers are not aware of a much older way of checking the validity of the code: assertions. In this post, I will present the benefits of using assertions, as well as some configuration tips for .NET applications. We will also learn how .NET and Windows support them.
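To give you a quick taste before we dive in, here is a minimal sketch of an assertion in .NET (the method and its checks are just an illustrative example):

```csharp
using System.Diagnostics;

static class Orders
{
    static decimal ApplyDiscount(decimal price, decimal discount)
    {
        // Debug.Assert documents (and verifies) the method's assumptions;
        // the calls are compiled out when the DEBUG symbol is not defined
        Debug.Assert(price >= 0, "price must not be negative");
        Debug.Assert(discount >= 0 && discount <= 1, "discount must be a fraction");
        return price * (1 - discount);
    }
}
```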
I recently spent some time analyzing the OutputDebugString method. For another of my experiments I needed a version of OutputDebugString that depends only on the Native API. While implementing it, I discovered a few interesting facts about OutputDebugString that may interest you too. The title mentions System.Diagnostics.Trace because the default trace configuration in .NET sends trace messages to an instance of the DefaultTraceListener class, which uses OutputDebugString. And if you do not remove it explicitly from the trace listener collection, your logs will always go through it. You will see later why this is sometimes not a good idea.
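As a quick illustration of this default behavior, here is a sketch (assuming the stock trace configuration) that removes the listener registered under the name "Default" so traces no longer reach OutputDebugString:

```csharp
using System.Diagnostics;

class TraceSetup
{
    static void Main()
    {
        // the stock Trace.Listeners collection contains a DefaultTraceListener
        // (registered under the name "Default") that calls OutputDebugString
        Trace.Listeners.Remove("Default");
        Trace.Listeners.Add(new TextWriterTraceListener("app.log"));

        Trace.TraceInformation("goes to app.log, not to the debugger");
        Trace.Flush();
    }
}
```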
Each build and release definition in TFS has a set of custom variables assigned to it. Those variables are later used as parameters to PowerShell/batch scripts, configuration file transformations, or other tasks that are part of the build/release pipeline. Accessing them from a task resembles accessing process environment variables. Because of TFS's detailed logging, it is quite common that values saved in variables end up in the build log in plain text. That is one of the reasons why Microsoft implemented secret variables.
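For illustration, this is roughly how a task written in .NET could read such a variable – a sketch assuming the usual convention that variable names are uppercased and dots are replaced by underscores in the environment block (the variable name below is hypothetical):

```csharp
using System;

class BuildTask
{
    static void Main()
    {
        // a custom variable defined as "my.variable" in the build definition
        // (hypothetical name) should be visible to the task process as an
        // environment variable, uppercased and with dots replaced by underscores
        var value = Environment.GetEnvironmentVariable("MY_VARIABLE");
        Console.WriteLine(value ?? "(not set - secret variables are not exported this way)");
    }
}
```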
The screenshot below presents a TFS build configuration panel, with a sample secret variable amiprotected set (notice the highlighted padlock icon on the right side of the text box):
Once the secret variable is saved, it is no longer possible to read its value from the web panel (when you click on the padlock, the text box will be cleared).
And this is what the output log looks like if we pass the secret variable to a PowerShell script and print it:
Let’s now have a look at where and how the secret variables are stored.
Recently, the idea of protected variables in TFS caught my attention and pushed me to do some more research on how exactly those variables are stored. I hope to write a separate post on that subject, but today I would like to share with you a small trick I use whenever I need to work with managed application traces (and TFS is one of them).
On Windows, when I want to know how things work internally, I usually start with procmon. Seeing which paths and registry keys are accessed, combined with the TCP/IP connections being made, is often enough to get an idea where to put breakpoints in further analysis. My TFS investigation was no exception to this rule. I collected a trace while saving a protected build variable – this is what such a variable looks like (in case you are interested :)):
TestLib, Version=126.96.36.199, Culture=neutral, PublicKeyToken=769a8f10a7f072b4
If the above line means anything to you, you are probably a .NET developer. You also probably know that the hex string at the end represents a public key token, which is a sign that the assembly has a strong name signature. But do you know how this token is calculated? Or do you know the structure of the strong name signature? In this post, I will go into the details of how strong naming works and what its shortcomings are. We will also have a look at certificate-based signatures and, in the end, we will examine the assembly verification process.
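To give away part of the answer: the token is derived from a SHA-1 hash of the public key blob. A minimal sketch of the calculation (the helper name is mine):

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

static class StrongNames
{
    // the public key token is the last 8 bytes of the SHA-1 hash
    // of the full public key blob, written in reverse order
    public static string GetPublicKeyToken(byte[] publicKey)
    {
        using (var sha1 = SHA1.Create())
        {
            var hash = sha1.ComputeHash(publicKey);
            return string.Concat(hash.Skip(hash.Length - 8)
                                     .Reverse()
                                     .Select(b => b.ToString("x2")));
        }
    }
}

// usage: pass the blob returned by someAssembly.GetName().GetPublicKey()
```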
Application Insights is a performance monitoring service, created by Microsoft and available on Azure. It gives you space to store the performance metrics and logs of your application (1GB for free!), as well as functionality to search and manage them. In this post I am not going to present the whole platform – Microsoft already did that in the Azure documentation – but rather focus on one element of the log collection: dependency call tracking. I did some analysis of the Application Insights libraries and decided to publish my findings, in the hope that the results might interest some of you too.
Dependency calls are requests that your application makes to external services (such as databases or REST services). When this telemetry type is enabled, all the dependent actions form a timeline within the scope of the parent action. Using this timeline we may easily verify whether a delay in our application is caused by an external service or by the application itself. Let’s analyze in detail how this data is collected.
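When the automatic collector does not cover a given call, the same telemetry type can also be reported manually. A sketch (CallExternalService is a hypothetical stand-in for any remote call):

```csharp
using System;
using System.Diagnostics;
using Microsoft.ApplicationInsights;

class DependencyDemo
{
    static readonly TelemetryClient Telemetry = new TelemetryClient();

    static void CallDependency()
    {
        var start = DateTimeOffset.UtcNow;
        var timer = Stopwatch.StartNew();
        var success = false;
        try
        {
            CallExternalService(); // hypothetical remote call
            success = true;
        }
        finally
        {
            // shows up on the same timeline as automatically collected calls
            Telemetry.TrackDependency("HTTP", "GET /api/values", start, timer.Elapsed, success);
        }
    }

    static void CallExternalService() { /* ... */ }
}
```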
In today’s short post I would like to present a new tool in my diagnostics toolkit, wtrace, and an update to procgov (or Process Governor). Let’s start with wtrace.
On Linux, when I need to check what a given process is doing, I usually use strace. I have always missed such a tool on Windows. We have procmon (which is great), but it does not run in a console and thus can’t be used in command-line scripts or on a Nano Server. This might change soon, as in one of the latest episodes of the Defrag Tools show Mark Russinovich shared a plan to release a procmon version for Nano. Until then, though, we don’t have much choice when it comes to real-time tracing. You may think of xperf or wpr, but those tools only record ETW events for later analysis. However, we may consume the same ETW events in a real-time session and print the information they provide to the console output. This is how the idea for wtrace was born. A few weeks ago Sasha Goldshtein released another tool for ETW processing named etrace, which does something very similar and has many interesting options. I decided to publish wtrace nonetheless, as my goal was to create a tool with an extremely simple interface.

Wtrace collects only a small subset of events (FileIO, TcpIp, Process/Thread Start) from the kernel provider. It may either start a process or trace one that is already running. At the end of the trace it also shows some statistics (unless you use the --nosummary switch). The trace session ends either when you press Ctrl+C or when the traced process terminates. Events are printed in the console window. An example session might look as follows:
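Under the hood, this style of tracing boils down to a real-time kernel ETW session. Below is a rough sketch of the technique (not wtrace’s actual code), based on the Microsoft.Diagnostics.Tracing.TraceEvent library; administrative rights are required:

```csharp
using System;
using Microsoft.Diagnostics.Tracing.Parsers;
using Microsoft.Diagnostics.Tracing.Session;

class RealTimeTraceDemo
{
    static void Main()
    {
        // a real-time kernel session - events are pushed to our callbacks
        // as they arrive, instead of being recorded to an .etl file
        using (var session = new TraceEventSession("wtrace-like-demo"))
        {
            session.EnableKernelProvider(
                KernelTraceEventParser.Keywords.FileIOInit |
                KernelTraceEventParser.Keywords.NetworkTCPIP |
                KernelTraceEventParser.Keywords.Process);

            session.Source.Kernel.FileIOCreate += data => Console.WriteLine(
                $"{data.TimeStampRelativeMSec:0.000} ({data.ProcessID}) FileIO/Create '{data.FileName}'");
            session.Source.Kernel.TcpIpConnect += data => Console.WriteLine(
                $"{data.TimeStampRelativeMSec:0.000} ({data.ProcessID}) TcpIp/Connect {data.daddr}:{data.dport}");

            Console.CancelKeyPress += (o, e) => { e.Cancel = true; session.Stop(); };
            session.Source.Process(); // blocks until the session is stopped
        }
    }
}
```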
This is how the story begins. On our build server we use the JetBrains ReSharper Command Line Tools (CLT) to generate a code analysis report. In one of the project builds we started getting the following exception log:
Executing the powershell script: C:\install\TFS\1.0.691\resharp.ps1
JetBrains Inspect Code 2016.1.2
Running in 64-bit mode, .NET runtime 4.0.30319.42000 under Microsoft Windows NT 6.2.9200.0
'AutoMapper' already has a dependency defined for 'Microsoft.CSharp'.
--- EXCEPTION #1/2 [InvalidOperationException]
Message = "'AutoMapper' already has a dependency defined for 'Microsoft.CSharp'."
ExceptionPath = Root.InnerException
ClassName = System.InvalidOperationException
HResult = COR_E_INVALIDOPERATION=80131509
Source = NuGet.Core
StackTraceString = "
at NuGet.Manifest.ValidateDependencySets(IPackageMetadata metadata)
at NuGet.Manifest.ReadFrom(Stream stream, IPropertyProvider propertyProvider, Boolean validateSchema)
at NuGet.LocalPackage.ReadManifest(Stream manifestStream)
at NuGet.SharedPackageRepository.OpenPackage(String path)
at NuGet.LocalPackageRepository.GetPackage(Func`2 openPackage, String path)
at NuGet.LocalPackageRepository.<>c__DisplayClass13.<FindPackage>b__f(String path)
at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)
at NuGet.LocalPackageRepository.FindPackage(Func`2 openPackage, String packageId, SemanticVersion version)
at NuGet.SharedPackageRepository.FindPackage(String packageId, SemanticVersion version)
at JetBrains.ProjectModel.Packages.SharedPackageRepositoryInTemp.FindPackage(String packageId, SemanticVersion version)
at NuGet.PackageRepositoryExtensions.FindPackage(IPackageRepository repository, String packageId, SemanticVersion version, IPackageConstraintProvider constraintProvider, Boolean allowPrereleaseVersions, Boolean allowUnlisted)
at NuGet.PackageReferenceRepository.GetPackage(PackageReference reference)
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at JetBrains.Util.ILoggerEx.Catch[TValue](ILogger thіs, Func`1 F, ExceptionOrigin origin, LoggingLevel loggingLevel)
I am working on adding support for ASP.NET performance counters to Musketeer. Compared to other .NET performance counters, they have quite surprising instance names. ASP.NET developers decided that their performance counter instances would be identified by names derived from the AppDomain names (more information can be found here). This is probably due to the fact that one process may host multiple ASP.NET applications, so one counter instance per process would not be enough. Consequently, in order to match the collected metrics with process IDs, we need to know which AppDomain belongs to which process. How can we do that?
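One possible way to answer that question (not necessarily the approach used in Musketeer) is to attach to each candidate process with the ClrMD library and list the names of its AppDomains:

```csharp
using System;
using Microsoft.Diagnostics.Runtime;

class AppDomainLister
{
    // prints "pid: AppDomain name" pairs for a given .NET process
    static void PrintAppDomains(int pid)
    {
        // a passive attach does not suspend the target process
        using (var target = DataTarget.AttachToProcess(pid, 5000, AttachFlag.Passive))
        {
            foreach (var clrInfo in target.ClrVersions)
            {
                var runtime = clrInfo.CreateRuntime();
                foreach (var appDomain in runtime.AppDomains)
                {
                    Console.WriteLine($"{pid}: {appDomain.Name}");
                }
            }
        }
    }

    static void Main(string[] args) => PrintAppDomains(int.Parse(args[0]));
}
```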