As we all know, some of the most obvious paths into a system through the browser are out-of-date ActiveX controls, such as old versions of Java and Flash. Many enterprises may still need to run an old version of Java for a line-of-business app they just can't get upgraded, but this leaves their users and systems vulnerable to malware written to exploit those old, unpatched versions. Not long ago I had a customer that had to keep Java 1.6 for a timekeeping system. Every time I went in and reviewed their SCEP logs, I would see Java vulnerabilities at the top of the list, and many systems were infected to the point that they had to be reimaged.
Microsoft has recognized this and is releasing an update for Internet Explorer 8 and newer that identifies known out-of-date ActiveX controls (from a definition file hosted by Microsoft) and, if the site is not in the Local Intranet or Trusted Sites zone, displays a notification bar telling the user the ActiveX control has been blocked and should be upgraded to the latest version. IT pros will be able to manage this experience, as well as make sure their line-of-business applications are in the correct zones. To aid in this, new ADM templates are available so that GPOs can be created to assist in configuring it.
While Microsoft was planning to enable the blocking functionality this week, we have some reprieve: based on feedback from the community, Microsoft announced yesterday that it will initially only warn on old ActiveX controls for 30 days before blocking goes into effect. This gives IT pros like you about 30 days to address it. You can read more here (http://blogs.msdn.com/b/ie/archive/2014/08/06/internet-explorer-begins-blocking-out-of-date-ActiveX-controls.aspx), but I see a few options available to you:
Upgrade or replace your application to work with the latest ActiveX control
I am pretty sure this will not be the immediate option, since it most likely requires budget, time, and resources to implement before the deadline, and I have seen project approvals take longer than that. It is the best option, though, since it only takes one system infected through a vulnerability to bring an enterprise down.
Look to moving your applications into the proper security zones in IE
I have worked with many customers who did not know how to manage security zones in IE (or even why they were important) and opened their Internet zone up to let their line-of-business apps/websites run. I feel this is worse than any outdated ActiveX issue, since every bit of code on the web gets the same open access the LOB app did. If you aren't familiar with zones, make an effort to learn them and use them. Then look at moving your outdated application to a zone that allows it to run.
Temporarily block the IE update
If you already manage your IE settings and control updates to your systems, you may have the ability to prevent the update from installing. While this is definitely a short-term workaround, it would at least prevent the blocking aspect of the patch from taking effect until you have had time to implement a zones workaround or an application upgrade. This is technically feasible, but I have not tried it to verify.
Use a different browser
While I see this happening more and more because of other compatibility issues with IE, this is an option if you are dead set on keeping that old application and cannot move it to an appropriate security zone or manage it. Then again, it may not be an option either, since many of those older apps were written to work only with older versions of IE.
Whatever you choose, I wish you well in keeping your line-of-business apps working and hopefully this is a step from Microsoft towards a safer surfing experience for your users.
Join us Tuesday, February 11th @ 5:45pm for the .NET SIG. Jeff Mlakar from our Business Intelligence team (@BIatBA) will be presenting on the Microsoft Power BI stack, including Power Query, Power Pivot, Power View, and Power Map. Jeff will be showing how these free add-ins can be used within Excel, and he will be demonstrating how to leverage Power BI on Office 365 to share and collaborate with the data both online and via the new Power BI mobile app.
Register for the event here.
The Microsoft Online Services “Performance Test for Internet Connection to Microsoft Online Services” (formerly found at speedtest.microsoftonline.com) is back, after several months of absence.
As a Microsoft Cloud Partner, we've found the tool extremely useful when performing initial client environment discovery and general Office 365 readiness assessments, since it measures response times, bandwidth, and connection quality against Microsoft Online Services. The one notable difference with this tool is the requirement to enter an Office 365 tenant domain before beginning. Interestingly enough, entering "company.onmicrosoft.com", as listed in the example, does allow the test to begin.
We typically use the tool during the pre-sales phase with clients. During instances when we can't be on site to run the tool ourselves, it's very easy to direct clients to the tool so they can run it from a computer within their company network, and then return the results to us.
Once the tool has completed its run, click on the down arrow in the bottom-left (highlighted in the image below) to reveal more tabs:
Clicking on the Advanced tab reveals summary statistics from the test, including download and upload capacity:
The tool is hosted on two different sets of domains/URLs, each with a version available for three different regions. Currently, the cloudapp.net locations appear to be the most reliable and available.
Fast Track Network Analysis (North America)
Fast Track Network Analysis (Asia Pacific)
Fast Track Network Analysis (EMEA)
At the time of this writing, the APAC and EMEA sites at deployoffice365.com are not yet available:
Office 365 Network Analysis Test (North America)
Office 365 Network Analysis Test (Asia Pacific)
Office 365 Network Analysis Test (EMEA)
Principal Consultant – Cloud Solutions
With Microsoft releasing the Windows 8.1 (Blue) upgrade for download yesterday evening, and with us always wanting to jump into new technology, our first impressions of the upgrade on our test Windows RT tablet were pretty good. There were some good things and some difficulties. One of those difficulties was getting to our applications using the familiar ways we learned in RT. The following is from one of our consultants' experiences. Keep checking back often as we blog about our experiences with the Windows Server 2012 R2 and Windows 8.1 previews!
All my apps are gone!!!
For those of you who have installed the 8.1 Blue preview, you may have found it more difficult to find any of your applications that are not pinned to the start screen.
Previously in Windows RT (and on Surface Pro), you could just swipe up and then click on the icon in the corner to view all your applications.
However, in the update, this has been replaced by an icon for customizing the groups of apps on the Start screen (sorting and naming groups). Those functions are easier now than they were before, but it didn't get me to what I wanted, which was to access an application tile not on my Start screen.
All was not lost, however. I could still search for an app (swipe from the right and choose Search from the charms menu) and then open it. But to actually get to an app's tile and select it to pin to Start, I found the following two ways:
First, the swipe method:
Once on the Start screen, just swipe up from the middle of the screen to be presented with all of your applications. Swiping up or down then swaps between the all-apps view and the Start screen. It makes sense, but it wasn't as intuitive as I expected and took some trial and error to discover.
Second, the more apps icon:
The second isn’t obvious, but if you notice small things is pretty easy to catch. If you swipe your start screen all the way to the right you will notice an arrow in the lower left corner pointing down. clicking on that will take you to all of your applications, same as the swipe down does.
While not immediately intuitive, I think my kids could have found these quickly enough, and after using them a few times I find this a much faster way to get to my apps without having them on the Start screen.
I hope our consultant's experience helps some of you who are wondering where all of your applications went in the Windows 8.1 preview. We hope to share more of their experiences in coming posts to give you some exposure to Microsoft's newest version of Windows 8.
If you are eager to get your hands on the latest release of the System Center suite, Microsoft has released the System Center 2012 R2 preview today. The suite is more commonly referred to by its components: Configuration Manager (SCCM, ConfigMgr), Operations Manager (SCOM, OpsMgr), Virtual Machine Manager (SCVMM), Service Manager (SCSM), Data Protection Manager (SCDPM), and Orchestrator (SCORCH). With it, you can also get your hands on Windows Server 2012 R2. I will be blogging more on this later as I get the bits installed and start playing with the many new features, but I wanted to get you the download information now.
Here is an excerpt from the System Center team blog on the announcement (http://blogs.technet.com/b/systemcenter/archive/2013/06/25/microsoft-system-center-2012-r2-preview-is-now-available-for-download.aspx):
Windows Server 2012 R2 and System Center 2012 R2 provide a wealth of new advancements to help IT organizations build and deliver private and hybrid cloud infrastructure for their businesses. Some of the highlights include:
- Enabling hybrid cloud – Windows Server Hyper-V and System Center enable virtual machine portability across customer, service provider and Windows Azure clouds, while a new System Center Management Pack for Windows Azure enhances cross-cloud management of virtual machine and storage resources. Windows Azure Backup and Hyper-V Recovery Manager provide offsite backup and disaster recovery options.
- Windows Azure Pack provides Windows Azure technology that enterprises and services providers can run on their Windows Server infrastructure for multi-tenant web and virtual machine cloud services.
- Built-in software-defined networking – Site-to-Site VPN Gateway helps customers seamlessly bridge physical and virtual networks and extend them from their datacenter to service provider datacenters.
- High performance, cost effective storage – Features such as Storage Spaces Tiering, VHDX resizing and de-duplication for virtual desktop infrastructure provide high performance for critical on-premises workloads (like SQL and Hyper-V) using lower-cost, industry-standard hardware.
- Empowering employee productivity – Windows Server Work Folders, Web App Proxy, improvements to Active Directory Federation Services and other technologies will help companies give their employees consistent access to company resources on the device of their choice.
For those of you running ConfigMgr 2012 SP1 and still having some minor issues (or major ones, depending on the business criticality of the function), Microsoft has released a cumulative update (CU2) to help address them. I do not believe this requires you to have installed CU1 first.
This update simply bundles a number of fixes Microsoft discovered while supporting SP1. Some of the things addressed in this update are:
- Administrator Console – issues adding site servers, and a screen-reader software enhancement
- App-V – errors with 2007 migrations and certificate errors
- OSD – application installs in task sequences, custom port issues, limited functionality with WinPE 3.1 images, multicast functionality
- Asset Intelligence – fixed a report for more accurate data
- MDM – fixed a Windows Mobile 6.5 client issue
- Software distribution – fixed the endless "waiting for content" issue, content status issues during upgrades, and status routing for DPs
- Non-Windows support – added more supported operating systems
- Site systems – fixed some status messages and filtering, site server installs, and AD discovery with deltas
- ConfigMgr SDK – object error on 64-bit systems for the CPapplet.CPAppletMgr Automation object
- Client – fixed an automatic client update error
- CU setup wrapper – can now update everything in one pass instead of separately, with better logging
More information on the above items and the hotfix can be found here:
With the new 2012 import/export functionality, the file format is a .zip file. This compressed file contains not only the task sequence XML but can also include any of the task sequence's dependencies, such as a boot image. While this is awesome for migrating between test and production ConfigMgr 2012 environments, it does not help if you are trying to import task sequences from a disconnected 2007 environment.
In my consulting practice, we do a lot of OSD implementations using a base set of task sequences we already have pre-configured. Once at a customer, we customize our base templates for the specific project and then export the XML or ZIP to the project documentation. Well, today I was at a client we had previously done work for; they had already performed a 2012 upgrade and removed their old 2007 environment. However, they had not migrated any of the OSD content and were looking to us to re-implement OSD in the new environment. Instead of importing our canned OSD for 2012 and then customizing it for their needs, we wanted to use the customized 2007 task sequences we had implemented in their old environment. The first problem was that the only copy of those was the archived XML from the project files we had left them. The second was that you can't import that XML through the 2012 console. Not to worry, though: we can still make it work.
The 2012 exports are just compressed files containing the resources, some configuration files, and the task sequence XML. The 2012 task sequence XML is not the same as the old 2007 format, but we are able to insert the 2007 XML into the appropriate spot to make it usable. This saved us a lot of time recreating the old task sequence logic. The following is a quick example of how this works.
Start with a 2012 exported task sequence. This is in .ZIP format.
Once exported, open the .zip file, navigate to the task sequence folder, and copy out the object.xml file.
Open the object.xml file and you will see a lot of new XML; however, scrolling almost to the end of the file, you will find a section with embedded task sequence XML.
This XML is the same task sequence XML you get in a normal exported task sequence from 2007; however, be sure to grab only the appropriate XML nodes and not the whole task sequence. To do so, in the old 2007 XML, copy the nodes and data from the sequence XML node:
and paste it into object.xml in the CDATA section of the 2012 XML, replacing the existing embedded sequence node:
You don't have to worry about text/line formatting. Save the file and then copy it back into the .zip file. You can then import the .zip into your 2012 environment and adjust your referenced objects accordingly. This is great when you have a master task sequence of custom tasks and you just want the ability to copy/paste them into your new 2012 task sequences. One thing to remember is that your old task sequences were built on the package/program model for software installs; if you are leveraging the new application model (which you should be), you will have to recreate those specific tasks anyway.
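If you find yourself doing this splice often, it's easy to script. Below is a minimal Python sketch of the same cut-and-paste performed on the text of object.xml. The function name and the toy stand-in XML are my own for illustration; real exports are much larger, and you would read the text out of the .zip (e.g., with Python's zipfile module) before calling it:

```python
import re

def splice_sequence(object_xml: str, sequence_2007: str) -> str:
    """Swap the embedded <sequence> node in a 2012 object.xml for the
    <sequence> node copied out of a 2007 task sequence export.
    Only the first embedded <sequence>...</sequence> span is replaced."""
    pattern = re.compile(r"<sequence\b.*?</sequence>", re.DOTALL)
    if not pattern.search(object_xml):
        raise ValueError("no embedded <sequence> node found in object.xml")
    # A callable replacement leaves any backslashes in the XML untouched.
    return pattern.sub(lambda _: sequence_2007, object_xml, count=1)

# Toy demonstration with stand-in XML (real object.xml files are much larger):
object_xml = '<obj><![CDATA[<sequence version="3.10"><step name="new"/></sequence>]]></obj>'
seq_2007 = '<sequence version="3.00"><step name="old"/></sequence>'
print(splice_sequence(object_xml, seq_2007))
```

After the splice, write the modified object.xml back into the .zip under its original path so the console can import it as usual.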
It seems that some people are having problems importing. While I'm not sure what they are seeing specifically, the option that worked best for me was to create a blank default task sequence (not an MDT task sequence) to use as the export template from 2012. I grabbed the sequence node from the old export and inserted it into the new one, replacing the embedded sequence XML node. I don't see why you couldn't grab just below the sequence node as well (after <sequence version="3.00">). I think that may address some users' experiences of seeing 3.10 as a sequence version. Hope that helps, and keep sharing your experiences.
This was another great year at the Microsoft Management Summit (MMS) in Las Vegas. While there were no major product launches, much of the focus was on the enhancements in SP1 for System Center. This news isn't new, since SP1 has officially been out since January, but while there has been a lot of discussion about the features, seeing them in action and how Microsoft aligns them with the cloud mindset was beneficial. In the ConfigMgr space, numerous enhancements were made with SP1, but my favorites are the hierarchical changes and the expansion of non-Windows and non-PC device support.
Down to one
One great part of the SP1 enhancements for ConfigMgr was the set of architectural changes permitting a much flatter hierarchy. A very compelling argument was made that a CAS is not needed and that a single primary site is all you need (unless you have over 100,000 clients or a solid reason to have multiples). Again and again, both the Microsoft product team and MVPs managing huge deployments stated that you don't need a CAS in the design and that a single primary site server should be good for all but the largest deployments. This is backed up by the fact that the design changes in SP1 let you add a CAS at any later time (thank goodness) and that a single primary site supports up to 100,000 clients. This is a huge shift for many of us who, based on the RTM specs, had installed CAS servers in solutions just in case a customer wanted to expand the hierarchy later.
Also discussed was the impact of having a CAS that doesn't do anything, as in the solutions described above. This impact was dubbed the "replication tax": since all primary servers in a hierarchy are peers, any change made at one server has to replicate to all the other servers and then up the hierarchy. When all your clients report to a single primary beneath a CAS, changes made at the primary must replicate before you see them at the CAS, for no real benefit. Since primaries can't be used to separate rights or access, the argument for multiple primaries and a CAS becomes difficult to support.
To illustrate this effect, the product team performed some "bathtub" testing against a design managing 400,000 clients during a normal Patch Tuesday rollout. With the minimum of 4 primary site servers, they found it took around 14 hours to process all the backlogs. You would think throwing more servers at the solution would speed things up; however, increasing the number of primaries to 10 increased the backlog to 26 hours! In both scenarios the CAS was running at 100% utilization trying to keep up with the replication load. This is huge, so make sure you understand it when designing your solution. If you have multiple primary servers now and fewer than 100,000 clients, I would strongly suggest you review your design and adjust accordingly.
Intune and ConfigMgr – Better together
Another great feature in ConfigMgr SP1 is the expanded support for deploying applications across numerous platforms and devices. Native support for OS X 10.6+, Linux, and Android means that you can have an agent, manage devices, and deploy software all from the same console. The user experience across devices is similar, and deployments can even deep-link into the platform's store for a specific public app (App Store, Windows Store, Google Play). You can even use SCEP 2012 on your Apple systems.
While using ConfigMgr natively is great for managing on-premises devices, Microsoft expects you to manage cloud-connected devices (mobile devices, disconnected PCs, Windows RT) from the cloud. That sounds obvious, and why not, since it is the easiest way to manage an internet-connected device without making your management solution public-facing. Microsoft has been working hard on its unified device management initiative, and the latest version of Intune creates a connection between your ConfigMgr SP1 solution and your Intune subscription. Users can now enroll their own devices, and you can inventory them, manage them, deploy applications, and wipe them, all with a single toolset to manage and a consistent application-delivery experience for the end user. Let's face it: keeping things simple and having a happy user makes a productive user and a happy you. There is so much to tell about this that I can't write it all here, but if you want more details, feel free to reach out to me and I can help you dig in deeper.
As always, the sessions were great, the food was plentiful, the vendor parties were fun, and the socializing with other IT folks who wrestle with the same things I do was priceless. If you didn't get a chance to go, or went but missed some sessions in favor of others, Microsoft has the recorded sessions along with slide decks available for download at http://channel9.msdn.com/Events/MMS/2013.
Now the only question (besides the obvious one about upgrading to SP1) is whether I will see you at next year's MMS. Whether Microsoft will hold another, however, is still up in the air. We can leave that for another post, though :)
I recently worked with a customer that was experiencing a memory leak with custom code running in Outlook. They were having trouble isolating the source of the leak, and they called us to help. There are several ways to dig into the process and profile its memory, but each has its own challenges and requires some amount of supposition and guesswork.
When looking into these types of memory leaks, I have used a variety of tools over the course of my career, including DebugDiag, VMMap, and WinDBG with SOS. However, in investigating this particular leak, I came across a relatively new tool created by the .NET Performance Testing team called PerfView. This tool proved to be much easier to use in this situation, and it did not require multiple, cryptic steps, such as capturing multiple memory dumps and comparing the .NET object counts from one dump to the next. Instead, PerfView was able to capture multiple snapshots of the heap, compare those snapshots, and provide a listing of what was different between them.
To give you an idea of how simple this tool can make finding leaks, I created a simple application that contains a supposed "leak". In reality, a "leak" in a garbage-collected runtime like .NET is typically just an object that is still being referenced and therefore cannot be removed from memory. It is not really a leak in the traditional sense, but it still causes memory use to grow inside the process.
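The same pattern is easy to reproduce in any garbage-collected language. Here is a minimal Python sketch of the idea (the class and names are mine; the .NET sample app in this post does the equivalent with an ArrayList): a long-lived reference keeps every entry reachable, so the collector can never free them.

```python
class EventLog:
    """Long-lived object that quietly keeps every entry alive."""
    _history = []          # class-level list: one reference root for the whole process

    def record(self, entry):
        self._history.append(entry)   # entries are appended but never released

log = EventLog()
for i in range(10_000):
    log.record("event %d" % i)        # memory grows with every call

# Every string is still reachable through EventLog._history, so the
# garbage collector cannot reclaim any of them.
print(len(EventLog._history))
```

Nothing here is a bug in the runtime; the "leak" is simply a reference the program forgot to drop, which is exactly what a heap profiler like PerfView helps you find.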
Knowing that my sample app has a leak, we can use PerfView to attempt to locate the source. The application has a simple WPF user interface which reports the size of the process. Over time, the process grows, but it gives no indication why. Below is a screen shot of the application:
To dig into this process, I used PerfView to inspect the heap. Below are the steps I took:
From the PerfView UI, choose “Take Heap Snapshot,” located on the Memory menu.
And choose the process you want to capture:
Click the “Dump GC Heap” button or simply double click on the process name.
When complete, PerfView will display the largest objects, sorted by the largest contributors.
As you can see, my sample application has an ArrayList as its largest contributor to the memory. That does not, however, necessarily mean that this object is the source of any leak. The largest object in an application may be a business object or some other component that exists to support the application’s functionality. In order to find the source of a leak, multiple snapshots must be captured and compared over time.
To capture another snapshot, simply return to PerfView’s main window and choose “Take Heap Snapshot” again from the Memory menu. Leave the current snapshot open so that you will be able to use it as a baseline when comparing it to the next snapshot. After capturing the second snapshot, you should have a second “Stacks” window open which looks similar to the first. To compare this snapshot with the first, locate and open the “Diff” menu. The first item in the list (assuming you did not close it) should be your original snapshot. (If the original snapshot was closed, you can reopen it from the main PerfView window.) Select the baseline snapshot and allow PerfView to compare the two.
After the Diff is created, you will see a screen that looks similar to the Stacks screens that displayed each snapshot.
In a Diff view, the columns to the right of the object types indicate the percentage and raw-value differences between the two snapshots. In the case above, notice that the "Totals Metric" value in the header section of the window, near the upper-left corner, shows that the total size difference between the two snapshots is about 3.4 MB. In the main section of that same window, we can see an ArrayList object that contributed about 99.8% of that difference, or about 3.4 MB. If we double-click on the ArrayList line, we can see which objects use that particular type and how much each of those referring objects contributes to the increase.
From this screen shot, we can see that an object called MyLittleLeaker.Leaker.a makes up the largest difference in memory between the two snapshots. This object is indeed the source of my leak.
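Conceptually, what the Diff view computes is just a per-type subtraction between the two snapshots, sorted by growth. A toy sketch of that idea (the type names and byte counts below are invented for illustration, not taken from the real capture):

```python
def diff_snapshots(baseline, current):
    """Return (type, byte_delta) pairs sorted with the largest growth first,
    mimicking what a heap-diff view computes per object type."""
    types = set(baseline) | set(current)
    deltas = {t: current.get(t, 0) - baseline.get(t, 0) for t in types}
    return sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)

# Two pretend snapshots: total bytes held, keyed by object type.
snap1 = {"ArrayList": 1_200_000, "String": 400_000, "Byte[]": 300_000}
snap2 = {"ArrayList": 4_600_000, "String": 410_000, "Byte[]": 300_000}

growth = diff_snapshots(snap1, snap2)
# The largest contributor floats to the top, just as in the Diff window.
print(growth[0])
```

The value of diffing over simply looking at the largest object is exactly this: a big-but-stable type drops to zero delta, while the type that is actually growing rises to the top.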
As you can see from the contrived example above, PerfView can help provide insight into what is changing over time inside your application, and it can be much less cumbersome than capturing and interpreting memory dumps with commands that are hard to remember.
The Microsoft .NET Performance team has created a series of videos, which are posted on Channel 9, on how to use PerfView in several scenarios, including live profiling and investigating high CPU scenarios. Take a look at the Channel9 PerfView Tutorial to learn more.
- Download PerfView
- Publication of the PerfView performance analysis tool!
- Next Version of PerfView has been released!
- Channel9 PerfView Tutorial