Recently at work, I needed to trace several syscalls to understand what SQL Server was doing. My usual tool for this purpose on Windows was API Monitor, but, unfortunately, it hasn’t been updated for a few years and has become unstable for me. Thus, I decided to switch back to WinDbg. In the past, my biggest problem with tracing the system API in WinDbg was the missing symbols for the internal NT objects, and I came up with some messy ways to work around it. Fortunately, with synthetic types in WinDbg Preview, it’s no longer a problem. In this post, I will show you how to create a breakpoint that nicely prints the arguments to a sample
MiniDumper is a diagnostic tool for collecting memory dumps of .NET applications. Dumps created by MiniDumper are significantly smaller than full-memory dumps collected by, for example, procdump. However, they contain enough information to diagnose most issues in managed applications. MiniDumper was initially developed by Sasha Goldstein, and I made a few contributions to its code base. You can learn more about this tool from Sasha’s or my blog posts.
Recently, one of MiniDumper’s users reported a memory leak in the application. The numbers looked scary, as the tool leaked about 20 MB on each memory dump. The issue stayed open for a few weeks before I finally found a moment to look at it. As it was quite a compelling case, I decided to share the diagnostic steps with you, in the hope that they prove useful in your own investigations.
Developing system applications in C# requires a lot of P/Invoking. Although there are many great P/Invoke NuGet libraries, for smaller projects I still prefer to import only the definitions I use. The pinvoke.net site is an excellent source of stub definitions. However, the online definition sometimes lacks some of the needed constants or other details. In such cases, you have to look into the Windows headers (which, by the way, contain not only definitions but also a lot of interesting comments). I used to search through those files using the Total Commander “Find Files” dialog, but it was slow and inefficient. So I switched to Sublime Text and created a project for the Windows headers folder (C:\Program Files (x86)\Windows Kits\10\Include\10.0.x.x). Once the folder index is cached, Sublime becomes a great tool for analyzing source code (not only C++!). However, when you read a lot of code and switch between various projects, Sublime replaces the old cached projects with new ones to keep the cache at a reasonable size. That triggers a cache rebuild when you open the “old” project again, which takes time and makes your searches slow again.
I then started looking for a way to build a permanent index of the folders I regularly scan (such as the Windows headers directory). At first, I was thinking about running a local instance of an Elasticsearch or Apache Solr server, but that seemed like overkill. I was looking for something simpler, some kind of a wrapper over the Apache Lucene library, which is the core engine of the servers mentioned above. Then I stumbled upon Lee Holmes’s article about Scour, a PowerShell module that wraps the Lucene.Net library and provides cmdlets to create full-text indexes for your folders. After using it for some time, I am happy with the results, so I decided to share my simple setup with you.
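To illustrate the core idea — a persistent full-text index over a source folder that survives between sessions — here is a minimal sketch in Python. It is not Scour or Lucene; it uses SQLite’s FTS5 extension as a stand-in full-text engine (assumption: your Python’s sqlite3 module is compiled with FTS5, as most builds are), and the `build_index`/`search` names are mine:

```python
import os
import sqlite3

def build_index(folder, db_path="index.db"):
    """Index every file under `folder` into a persistent SQLite FTS5
    table -- a lightweight stand-in for a Lucene-style full-text index."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS files USING fts5(path, body)")
    con.execute("DELETE FROM files")  # rebuild from scratch for simplicity
    for root, _, names in os.walk(folder):
        for name in names:
            path = os.path.join(root, name)
            try:
                with open(path, errors="ignore") as f:
                    con.execute("INSERT INTO files VALUES (?, ?)", (path, f.read()))
            except OSError:
                pass  # skip unreadable files
    con.commit()
    return con

def search(con, query):
    """Return paths of files whose contents match the FTS5 query."""
    return [row[0] for row in
            con.execute("SELECT path FROM files WHERE files MATCH ?", (query,))]
```

Unlike an editor’s in-memory cache, the database file persists on disk, so reopening the “project” costs nothing; a real setup would also re-index only changed files instead of rebuilding.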
I was recently looking for a tool which would allow me to limit the total execution time of a process and its children. I haven’t found anything, so I decided to implement such a feature in Process Governor, my open-source process-monitoring application. You may download the v2.3 version from GitHub. In this post, I want to present the new functionality and describe its implementation details.
When we know the PIDs of our running processes, we could use a simple command to wait for them to finish (the Wait-Process cmdlet is an ideal example) and kill the remaining ones if they exceed the limit. However, what if we only know the PID of the initial process? Tracking the process hierarchy in a script could become problematic. A simple and clean solution is to assign a job object to the initial process, let it create new processes, wait for the specified period, and terminate the job if any of its processes is still running (terminating the job exits all the processes). There are, however, a few questions we need to answer:
- How do we know all processes associated with the job finished their execution?
- What types of process execution time should we measure?
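The overall “group, wait, kill” approach can be sketched in a few lines of Python. Job objects are Windows-specific, so this sketch (my own, not Process Governor’s code) uses a POSIX process group as a rough analogue, and it enforces only wall-clock time, not the user/kernel times a job object can also account for:

```python
import os
import signal
import subprocess

def run_with_timeout(args, timeout_s):
    """Start a process in its own process group (a rough POSIX analogue
    of a Windows job object) and kill the whole group on timeout.
    Returns the exit code, or None if the time limit was hit."""
    proc = subprocess.Popen(args, start_new_session=True)
    try:
        proc.wait(timeout=timeout_s)
        return proc.returncode
    except subprocess.TimeoutExpired:
        # Terminate every process in the group, including any children
        # the initial process spawned in the meantime.
        os.killpg(proc.pid, signal.SIGKILL)
        proc.wait()
        return None

if __name__ == "__main__":
    rc = run_with_timeout(["sh", "-c", "sleep 10"], timeout_s=1)
    print("timed out" if rc is None else f"exited with {rc}")
```

A process group is a much weaker guarantee than a job object (a child can detach from its group, while a job object can forbid breaking away), which is one reason the Windows implementation is the more robust design.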
By default, when you record a trace in Wireshark, you won’t find process IDs in it, and sometimes this information is necessary to investigate the problem you are facing. I ran into one such issue this week: I needed to locate a process on a virtual machine (local address 10.0.2.5) which was still using TLSv1 to connect to our load balancer. At first, I only recorded traces in Wireshark and filtered them (
ssl.record.version == "TLS 1.0"):
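Since the trace only gives you addresses and ports, the missing step is mapping the local port of the offending connection back to a PID. As a hedged, Linux-only sketch of that idea (the `port_to_pid` helper is mine; it joins `/proc/net/tcp` with each process’s open file descriptors — on Windows you would reach for `netstat -b` or `Get-NetTCPConnection` instead):

```python
import os

def port_to_pid(local_port):
    """Find the PID owning a local TCP port by matching the socket inode
    from /proc/net/tcp against every process's open file descriptors."""
    inode = None
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            # fields[1] is local address as HEXIP:HEXPORT, fields[9] is inode
            port = int(fields[1].split(":")[1], 16)
            if port == local_port and fields[9] != "0":
                inode = fields[9]
                break
    if inode is None:
        return None
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = f"/proc/{pid}/fd"
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(f"{fd_dir}/{fd}") == f"socket:[{inode}]":
                    return int(pid)
        except OSError:
            continue  # process exited or access denied
    return None
```

This only sees IPv4 sockets (`/proc/net/tcp6` would cover IPv6) and needs sufficient privileges to read other processes’ fd directories.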