Notes from the Microsoft Management Summit 2013

This was another great year at the Microsoft Management Summit (MMS) in Las Vegas. While there were no major product launches, much of the focus was on the enhancements in SP1 for System Center. That news isn’t new, since SP1 has officially been out since January, but while there has been a lot of discussion about the features, seeing how Microsoft demonstrates them in action, and how they align with the cloud mindset, was beneficial. In the ConfigMgr space, numerous enhancements came with SP1, but my favorites are the hierarchy changes and the expanded support for non-Windows and non-PC devices.

Down to one

One great feature of the SP1 enhancements for ConfigMgr is the set of changes made to the architecture permitting a much flatter hierarchy. A very compelling argument was made that a CAS is not needed and that a single Primary site is all you need (unless you have over 100K clients or a solid reason to have multiples). Again and again, both the Microsoft product team and MVPs managing huge deployments stated that you don’t need a CAS in the design and that a single Primary site server should be sufficient for all but the largest deployments. This is backed up by the fact that the design changes in SP1 let you add a CAS server at any time later (thank goodness) and that a single Primary site supports up to 100,000 clients. This is a huge shift for many of us who, based on the RTM specs, had installed CAS servers in solutions just in case a customer wanted to expand the hierarchy later.

Also discussed was the impact of having a CAS that doesn’t do anything, as in the solutions described above. This impact was dubbed the “replication tax”: since all Primary servers in a hierarchy are equal, any change made at one server has to replicate to all the other servers and then up the hierarchy. When all your clients report to a single Primary beneath a CAS, any change made at the Primary must replicate up before you can see it at the CAS, for no real benefit. Since Primaries can’t be used to separate rights or access, the argument for having multiple Primaries and a CAS becomes difficult to support.

To illustrate this effect, the product team performed some “bathtub” testing against a design managing 400,000 clients during a normal Patch Tuesday rollout. With the minimum of four Primary site servers, they found it took around 14 hours to process all the backlogs. You would think throwing more servers at the solution would speed things up; however, increasing the number of Primaries to ten increased the backlog to 26 hours! In both scenarios the CAS ran at 100% utilization trying to keep up with replication. This is huge, so make sure you understand it when you are designing your solution. If you have multiple Primary servers now and fewer than 100,000 clients, I would strongly suggest you review your design and adjust accordingly.

Intune and ConfigMgr – Better together

Another great feature in ConfigMgr SP1 is the expanded support for deploying applications across numerous platforms and devices. Native support for OS X 10.6+, Linux, and Android means that you can have an agent, manage devices, and deploy software all from the same console. The user experience across devices is similar and can even deep-link into the platform’s store to a specific public software install (App Store, Windows Store, Google Play). You can even use SCEP 2012 on your Apple systems.

While using ConfigMgr natively is great for managing on-prem devices, Microsoft expects you to manage cloud devices (mobile devices, disconnected PCs, Windows RT) from the cloud. That sounds obvious, and why not, since it is the easiest way to ensure an internet-connected device can be managed without making your management solution public facing. Microsoft has been working hard on its unified device management initiative, and the latest version of Intune creates a connection between your ConfigMgr SP1 solution and your Intune subscription service. You can now empower users to enroll their own devices while you inventory, manage, deploy applications to, and wipe those devices, all with a single toolset to manage and a consistent application-delivery experience for the end user. Let’s face it: keeping things simple and keeping users happy makes for productive users and a happy you. There is so much to tell here that I can’t write it all, but if you want more details, feel free to reach out to me and I can help you dig in deeper.

As always, the sessions were great, the food was plentiful, the vendor parties were fun, and the socializing with other IT folks who wrestle with the same things I do was priceless. If you didn’t get a chance to go, or went but missed some sessions in favor of others, Microsoft has the recorded sessions along with slide decks available for download.

Now the only question (besides the obvious one about upgrading to SP1) is whether I will see you at next year’s MMS. Whether Microsoft will hold another one, however, is still up in the air. We can leave that for another post though 🙂

Jason Condo
Principal Consultant

Using PerfView to Diagnose a .NET Memory Leak

I recently worked with a customer that was experiencing a memory leak with custom code running in Outlook. They were having trouble isolating the source of the leak, and they called us to help. There are several ways to dig into the process and profile the memory, but each has its own challenges and requires some amount of supposition and guesswork.

When looking into these types of memory leaks, I have used a variety of tools over the course of my career, including DebugDiag, VMMap, and WinDBG with SOS. However, in investigating this particular leak, I came across a relatively new tool created by the .NET Performance Testing team called PerfView. This tool proved to be much easier to use in this situation, and it did not require multiple, cryptic steps, such as capturing multiple memory dumps and comparing the .NET object counts from one dump to the next. Instead, PerfView was able to capture multiple snapshots of the heap, compare those snapshots, and provide a listing of what was different between them.
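Conceptually, what PerfView automates here is a diff of per-type memory usage between snapshots, which in the old dump-comparison workflow you had to do by hand. A minimal sketch of that idea (the type names and byte counts below are made up for illustration, and the sketch is in Java rather than the C# of the actual sample app):

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotDiff {
    // Given two snapshots mapping type name -> retained bytes, report how much
    // each type present in the later snapshot grew relative to the baseline.
    static Map<String, Long> diff(Map<String, Long> baseline, Map<String, Long> later) {
        Map<String, Long> growth = new HashMap<>();
        for (Map.Entry<String, Long> e : later.entrySet()) {
            long before = baseline.getOrDefault(e.getKey(), 0L);
            long delta = e.getValue() - before;
            if (delta != 0) growth.put(e.getKey(), delta);
        }
        return growth;
    }

    public static void main(String[] args) {
        Map<String, Long> first = Map.of(
            "System.String", 1_000_000L,
            "System.Collections.ArrayList", 500_000L);
        Map<String, Long> second = Map.of(
            "System.String", 1_050_000L,
            "System.Collections.ArrayList", 3_900_000L);
        // The type with the largest positive delta is the leak suspect.
        diff(first, second).forEach((type, delta) ->
            System.out.println(type + " grew by " + delta + " bytes"));
    }
}
```

The type whose retained size keeps growing across successive diffs is your leak suspect; this is exactly the view PerfView's Diff window gives you without the manual bookkeeping.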

To give you an idea of how simple this tool can make finding leaks, I created a simple application that contains a supposed “leak”. In reality, a “leak” in a garbage-collected runtime like .NET is typically just an object that is still being referenced and therefore cannot be removed from memory. It is not really a leak in the traditional sense, but it still causes memory use to grow inside the process.

Knowing that my sample app has a leak, we can use PerfView to attempt to locate the source. The application has a simple WPF user interface which reports the size of the process. Over time, the process grows, but it gives no indication why. Below is a screen shot of the application:


To dig into this process, I used PerfView to inspect the heap. Below are the steps I took:

From the PerfView UI, choose “Take Heap Snapshot,” located on the Memory menu.


And choose the process you want to capture:


Click the “Dump GC Heap” button or simply double click on the process name.

When complete, PerfView will display the largest objects, sorted by the largest contributors.


As you can see, my sample application has an ArrayList as its largest contributor to the memory. That does not, however, necessarily mean that this object is the source of any leak. The largest object in an application may be a business object or some other component that exists to support the application’s functionality. In order to find the source of a leak, multiple snapshots must be captured and compared over time.

To capture another snapshot, simply return to PerfView’s main window and choose “Take Heap Snapshot” again from the Memory menu. Leave the current snapshot open so that you will be able to use it as a baseline when comparing it to the next snapshot. After capturing the second snapshot, you should have a second “Stacks” window open which looks similar to the first. To compare this snapshot with the first, locate and open the “Diff” menu. The first item in the list (assuming you did not close it) should be your original snapshot. (If the original snapshot was closed, you can reopen it from the main PerfView window.) Select the baseline snapshot and allow PerfView to compare the two.


After the Diff is created, you will see a screen that looks similar to the Stacks screens that displayed each snapshot.


In a Diff view, the columns to the right of the object types indicate the percentage and raw-value differences between the two snapshots. In the case above, notice that the “Totals Metric” value in the header section of the window, near the upper left corner, shows that the total size difference between the two snapshots is about 3.4 MB. In the main section of that same window, we can see that ArrayList objects contributed about 99.8% of that difference, or about 3.4 MB. If we double-click on the ArrayList line, we can see which objects use that particular type and how much each of those referring objects contributes to the increase.


From this screen shot, we can see that an object called MyLittleLeaker.Leaker.a makes up the largest difference in memory between the two snapshots. This object is indeed the source of my leak.

As you can see from the contrived example above, PerfView can help provide insight into what is changing over time inside your application, and it can be much less cumbersome than capturing and interpreting memory dumps with commands that are hard to remember.

The Microsoft .NET Performance team has created a series of videos, posted on Channel 9, on how to use PerfView in several scenarios, including live profiling and investigating high-CPU scenarios. Take a look at the Channel 9 PerfView tutorial to learn more.


Rich Deken
Principal Consultant