The Microsoft Online Services “Performance Test for Internet Connection to Microsoft Online Services” (formerly found at speedtest.microsoftonline.com) is back, after several months of absence.
As a Microsoft Cloud Partner, we’ve found the tool extremely useful during initial client environment discovery and general Office 365 readiness assessments, measuring response times, bandwidth, and connection quality to Microsoft Online Services. The one notable difference with this tool is the requirement to enter an Office 365 tenant domain before beginning. Interestingly enough, entering “company.onmicrosoft.com”, as listed in the example, does allow the test to begin.
We typically use the tool during the pre-sales phase with clients. When we can’t be on site to run the tool ourselves, it’s very easy to direct clients to the tool so they can run it from a computer within their company network and then return the results to us.
Once the tool has completed its run, click on the down arrow in the bottom-left (highlighted in the image below) to reveal more tabs:
Clicking on the Advanced tab reveals summary statistics from the test, including download and upload capacity:
The tool is hosted on two different sets of domains/URLs, each with a version available for three different regions. Currently, the cloudapp.net locations appear to be the most reliable and available.
Fast Track Network Analysis (North America)
Fast Track Network Analysis (Asia Pacific)
Fast Track Network Analysis (EMEA)
At the time of this writing, the APAC and EMEA sites at deployoffice365.com are not yet available:
Office 365 Network Analysis Test (North America)
Office 365 Network Analysis Test (Asia Pacific)
Office 365 Network Analysis Test (EMEA)
Principal Consultant – Cloud Solutions
With Microsoft releasing the Windows 8.1 (Blue) upgrade for download yesterday evening, and with us always wanting to jump into new technology, our first impressions of the Windows 8.1 (Blue) upgrade on our test Windows RT tablet were pretty good. There were some good things, and some difficulties. One of those difficulties was around getting to our applications using the familiar ways we learned in RT. The following is from one of our consultant’s experiences. Keep checking back often as we blog about our experiences with the Windows Server 2012 R2 and Windows 8.1 previews!
All my apps are gone!!!
For those of you who have installed the 8.1 Blue preview, you may have found it more difficult to find any of your applications that are not pinned to the start screen.
Previously in Windows RT (and in Surface Pro), you could just swipe up and then click on the icon in the corner to view all your applications.
However, in the update, this has been replaced by an icon for customizing the groups of apps on the start screen (sorting and naming groups). Those functions are easier now than they were before; however, it didn’t get me what I wanted, which was access to an application tile not on my start screen.
All was not lost however. I could still search for an app (swipe from the right and choose search from the charms menu) and then open it. But to actually get to an app’s tile and then select it to pin to the Start, I found the following two ways:
First, the swipe method:
Once in the start tile screen, just swipe up from the middle of the screen to be presented with all of your applications. Swiping up or down then swaps between all apps and the start screen. It makes sense, but it wasn’t as intuitive as I expected and was discovered with some trial and error.
Second, the more apps icon:
The second isn’t obvious, but if you notice small things, it is pretty easy to catch. If you swipe your start screen all the way to the right, you will notice an arrow in the lower-left corner pointing down. Clicking on that will take you to all of your applications, the same as the swipe does.
While not immediately intuitive, I think my kids could have found these quickly enough, and after using it a few times I find it to be a much faster way to get to my apps without having them on the start screen.
I hope our consultant’s experience can help some of you who are wondering where all of your applications are in the Windows 8.1 preview. We hope to have more of their experiences in the coming posts to give you some exposure to Microsoft’s newest version of Windows 8.
If you are eager to get your hands on the latest release of the System Center suite, Microsoft has released System Center 2012 R2 for preview today. It is more commonly known by its components: Configuration Manager (SCCM, ConfigMgr), Operations Manager (SCOM, OpsMgr), Virtual Machine Manager (SCVMM), Service Manager (SCSM), Data Protection Manager (SCDPM), and Orchestrator (SCORCH). With it, you can choose to get your hands on Server 2012 R2 as well. I will be blogging more on this later as I get the bits installed and start playing with the many new features, but I wanted to get you the information for downloading the preview now.
Here is an excerpt from the System Center team blog on the announcement (http://blogs.technet.com/b/systemcenter/archive/2013/06/25/microsoft-system-center-2012-r2-preview-is-now-available-for-download.aspx):
Windows Server 2012 R2 and System Center 2012 R2 provide a wealth of new advancements to help IT organizations build and deliver private and hybrid cloud infrastructure for their businesses. Some of the highlights include:
- Enabling hybrid cloud – Windows Server Hyper-V and System Center enable virtual machine portability across customer, service provider and Windows Azure clouds, while a new System Center Management Pack for Windows Azure enhances cross-cloud management of virtual machine and storage resources. Windows Azure Backup and Hyper-V Recovery Manager provide offsite backup and disaster recovery options.
- Windows Azure Pack provides Windows Azure technology that enterprises and services providers can run on their Windows Server infrastructure for multi-tenant web and virtual machine cloud services.
- Built-in software-defined networking – Site-to-Site VPN Gateway helps customers seamlessly bridge physical and virtual networks and extend them from their datacenter to service provider datacenters.
- High performance, cost effective storage – Features such as Storage Spaces Tiering, VHDX resizing and de-duplication for virtual desktop infrastructure provide high performance for critical on-premises workloads (like SQL and Hyper-V) using lower-cost, industry-standard hardware.
- Empowering employee productivity – Windows Server Work Folders, Web App Proxy, improvements to Active Directory Federation Services and other technologies will help companies give their employees consistent access to company resources on the device of their choice.
For those of you running ConfigMgr 2012 SP1 and still having some minor issues (or major, depending on the business criticality of the function), Microsoft has released a hotfix (CU2) to help address them. I do not believe this requires you to have installed CU1 first.
This update just bundles a number of fixes discovered by MS in support of SP1. Some of the things addressed in this update are:
- Administrator Console – issues adding site servers and screen reader software enhancement
- APP-V – errors with 2007 migrations and cert errors
- OSD – app installs in task sequences, custom ports issues, limited functionality with WinPE 3.1 images, multicast functionality
- Asset Intelligence – fixed a report for more accurate data
- MDM – fixed mobile 6.5 client issue
- Software distribution – fixed the waiting for content forever issue, content status issues during upgrades, and status routing for DPs
- Non-Windows support – added support for more OSs
- Site Systems – fixed some status messages and filtering, site server installs, fixed AD discovery with deltas
- ConfigMgr SDK – object error on 64 bit systems for CPapplet.CPAppletMgr Automation object
- Client – fixed automatic client updates error
- CU Setup wrapper – now can update all in one instead of separately, better logging
More information on the above items and the hotfix can be found here:
With the new 2012 import/export functionality, the new file format is a “.zip” file. This compressed file contains not only the task sequence XML but can also include any dependencies of the task sequence, such as a boot image. While this is awesome for migrating between a test and production ConfigMgr 2012 environment, it does not help if you are trying to import task sequences from a disconnected 2007 environment.
In my consulting practice, we do a lot of OSD implementations using a base set of task sequences that we already have pre-configured. Once at a customer, we customize our base templates for the specific project and then export the XML or ZIP to the project documentation. Well, today I was at a client we had previously done work for; they had already performed a 2012 upgrade and removed their old 2007 environment. However, they had not migrated any of the OSD and were looking for us to re-implement OSD in their new environment. Instead of importing our canned OSD for 2012 and then customizing it for their needs, we wanted to use the customized 2007 task sequences we had implemented for their old environment. The first problem, however, was that the only copy of those was the archived XML from the project files we had left them. The second was that you can’t import that XML through the 2012 console. Not to worry though, we can still make it work.
The 2012 exports are just compressed files full of the resources, some configuration files, and the task sequence XML. This 2012 task sequence XML is not the same as the old 2007 format, but we are able to insert the 2007 XML into the appropriate spot to make it useful. This saved us a bunch of time recreating the old TS logic. The following is a quick example of how this works.
Start with a 2012 exported task sequence. This is in .ZIP format.
Once exported, open the zip file, navigate to the task sequence folder, and copy out the object.xml file.
Open the object.xml file and you will see a lot of new XML; however, scrolling almost to the end of the file, you will find a section with embedded task sequence XML.
This XML is the same task sequence XML as you have in a normal exported task sequence from 2007; however, you need to be sure to grab only the appropriate XML nodes and not the whole task sequence. To do so, in the old 2007 XML, copy the nodes and data from the sequence XML node:
and paste it into the object.xml in the CDATA section in the 2012 XML replacing the existing embedded sequence node:
You don’t have to worry about the text/line formatting. Save the file and then copy it back into the .ZIP file. You can then import the ZIP file into your 2012 environment and adjust your referenced objects accordingly. This is great when you have a master task sequence of custom tasks and you would just like the ability to copy/paste them into your new 2012 task sequences. One thing to remember is that your old task sequences were built on the package/program model for software installs. If you are leveraging the new application model (which you should be), you will have to recreate those specific tasks anyway.
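For anyone doing this repeatedly, the manual zip surgery above can be scripted. The sketch below is a rough Python illustration of the same steps, not a supported tool: it assumes the 2012 export keeps the embedded sequence inside an object.xml entry (as in the walkthrough) and that there is exactly one `<sequence>` node to swap out.

```python
import re
import zipfile

def swap_sequence(export_zip: str, object_xml_path: str,
                  new_sequence: str, out_zip: str) -> None:
    """Replace the embedded <sequence> node in a 2012 task sequence export
    with the <sequence> node copied from a 2007 task sequence XML."""
    with zipfile.ZipFile(export_zip) as src, \
         zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == object_xml_path:
                text = data.decode("utf-8")
                # Swap everything from <sequence ...> through </sequence>,
                # leaving the surrounding CDATA wrapper untouched.
                text = re.sub(r"<sequence\b.*</sequence>", new_sequence,
                              text, count=1, flags=re.DOTALL)
                data = text.encode("utf-8")
            dst.writestr(item, data)  # copy every entry into the new zip
```

You would then import the rewritten ZIP through the 2012 console as usual; the exact path to object.xml inside the export varies per task sequence, so inspect the archive first.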
It seems that some people are having problems importing. While I’m not sure what they are seeing specifically, I found that the best option that worked for me was to create a blank default task sequence (not an MDT task sequence) to use as the export template from 2012. I grabbed the sequence node from the old and inserted it into the new, replacing the embedded sequence XML node. I don’t see why you couldn’t grab below the sequence node as well (after <sequence version="3.00">). I think that may address some users’ experiences of having 3.10 as a sequence version. Hope that helps, and keep sharing your experiences.
This was another great year at the Microsoft Management Summit (MMS) in Las Vegas. While there were no major product launches, much focus was given to the enhancements in SP1 for System Center. This news isn’t new, since SP1 has officially been out since January, but while there has been a lot of discussion about the features, seeing them in action and how they align with the cloud mindset was beneficial. In the ConfigMgr space, numerous enhancements were made with SP1, but my favorites are the hierarchical changes and the expansion of non-Windows and non-PC device support.
Down to one
One great feature of the SP1 enhancements for ConfigMgr was the set of architectural changes permitting a much flatter hierarchy. A very compelling argument was made as to why a CAS is not needed and a single primary site is all you need (unless you have over 100K clients or a solid reason to have multiples). Again and again, it was stated by the MS product team, as well as by MVPs managing huge deployments, that you don’t need the CAS in the design and that a single primary site server should be good for all but the largest deployments. This is backed up by the fact that the design changes in SP1 enable you to add a CAS server at any later time (thank goodness) and that the total number of clients supported at a single primary is 100,000. This is a huge shift for many of us who, based on the RTM specs, had installed CAS servers in solutions just in case a customer would want to expand their hierarchy later.
What was also discussed was the impact of having a CAS that doesn’t do anything, as in the solutions we described above. This impact was labeled the “replication tax”: since all primary servers in a hierarchy are equal, any change made at one server has to replicate to all the other servers and then up the hierarchy. When all your clients report to a single primary with a CAS, seeing changes made at the primary means waiting for them to replicate up to the CAS, for no real benefit. Since primaries can’t be used to separate rights or access, the argument for multiple primaries and a CAS becomes really difficult to support.
To illustrate this effect, the product team performed some “bathtub” testing against a design managing 400,000 clients during a normal Patch Tuesday rollout. With the minimum of four primary site servers, they found it took around 14 hours to process all the backlogs. You would think throwing more servers at the solution would speed things up; however, increasing the number of primaries to 10 increased the backlog to 26 hours! In both scenarios, the CAS was running at 100% utilization trying to keep up with the replication needs. This is huge, so make sure you understand it when designing your solution. If you have multiple primary servers now and have under 100,000 clients, I would strongly suggest you review your design and adjust accordingly.
Intune and ConfigMgr – Better together
Another great feature in ConfigMgr SP1 is the expanded support for deploying applications across numerous platforms and devices. Native support for Mac OS X 10.6+, Linux, and Android means that you can have an agent, manage devices, and deploy software all from the same console. The user experience across all devices is similar and can even deep-link into the platform’s store to a specific public software install (App Store, Windows Store, Google Play). You can even use SCEP 2012 on your Apple systems.
While using ConfigMgr natively is great for managing on-prem devices, Microsoft expects you to manage cloud devices (mobile devices, disconnected PCs, Windows RT) from the cloud. Sounds obvious, and why not, since that is the easiest way to ensure an internet-connected device can be managed without the work of making your management solution public-facing. Microsoft has been working hard on its unified device management initiative, and the latest version of Intune creates a connection between your ConfigMgr SP1 solution and your Intune subscription service. Now there are ways to empower users to enroll their own devices while allowing you to inventory, manage, deploy applications to, and wipe those devices, all with a single toolset to manage and a consistent application-delivery experience for the end user. Let’s face it: keeping things simple and having a happy user makes a productive user and a happy you. There is so much to tell about this that I just can’t write it all, but if you want more details, feel free to reach out to me and I can help you dig in deeper.
As always, the sessions were great, the food was plentiful, the vendor parties were fun, and the socializing with other IT folks who wrestle with the same things I do was priceless. If you didn’t get a chance to go, or were able to but missed some sessions in favor of others, Microsoft has the recorded sessions along with slide decks available for download at http://channel9.msdn.com/Events/MMS/2013.
Now the only question (besides the obvious one about upgrading to SP1) is whether I will see you at next year’s MMS. However, the decision as to whether Microsoft will have another is still up in the air. We can leave that for another post, though.
I recently worked with a customer that was experiencing a memory leak with custom code running in Outlook. They were having trouble isolating the source of the leak, and they called us to help. There are several ways to dig into the process and profile the memory, but each has its own challenges and requires some amount of supposition and guesswork.
When looking into these types of memory leaks, I have used a variety of tools over the course of my career, including DebugDiag, VMMap, and WinDBG with SOS. However, in investigating this particular leak, I came across a relatively new tool created by the .NET Performance Testing team called PerfView. This tool proved to be much easier to use in this situation, and it did not require multiple, cryptic steps, such as capturing multiple memory dumps and comparing the .NET object counts from one dump to the next. Instead, PerfView was able to capture multiple snapshots of the heap, compare those snapshots, and provide a listing of what was different between them.
To provide you with an idea of how simple this tool can be to help you find leaks, I created a simple application that contains a supposed “leak”. In reality, a “leak” in a garbage-collected runtime like .NET is typically just an object that is still being referenced and therefore cannot be removed from memory. It is not really a leak in the traditional sense, but it still causes memory use to grow inside the process.
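This pattern is easy to reproduce in any garbage-collected runtime. The Python sketch below is a hypothetical stand-in for the sample app (the `Leaker` class and its method names are mine, not the actual application): a long-lived collection keeps a reference to every object passed to it, so the collector can never reclaim them.

```python
import gc

class Leaker:
    """Mimics the sample app's bug: a long-lived collection that only grows."""
    def __init__(self):
        self._history = []  # nothing ever removes entries, so they stay reachable

    def handle_event(self, payload):
        self._history.append(payload)  # the "leak": a lingering reference

leaker = Leaker()
for _ in range(1000):
    leaker.handle_event(bytearray(1024))  # ~1 KB per event, never released

gc.collect()  # collection frees nothing: every bytearray is still referenced
print(len(leaker._history))  # 1000 objects still alive
```

Fixing such a leak is not about forcing collection; it is about dropping the reference (here, trimming or clearing `_history`) so the objects become unreachable.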
Knowing that my sample app has a leak, we can use PerfView to attempt to locate the source. The application has a simple WPF user interface which reports the size of the process. Over time, the process grows, but it gives no indication why. Below is a screen shot of the application:
To dig into this process, I used PerfView to inspect the heap. Below are the steps I took:
From the PerfView UI, choose “Take Heap Snapshot,” located on the Memory menu.
And choose the process you want to capture:
Click the “Dump GC Heap” button or simply double click on the process name.
When complete, PerfView will display the largest objects, sorted by the largest contributors.
As you can see, my sample application has an ArrayList as its largest contributor to the memory. That does not, however, necessarily mean that this object is the source of any leak. The largest object in an application may be a business object or some other component that exists to support the application’s functionality. In order to find the source of a leak, multiple snapshots must be captured and compared over time.
To capture another snapshot, simply return to PerfView’s main window and choose “Take Heap Snapshot” again from the Memory menu. Leave the current snapshot open so that you will be able to use it as a baseline when comparing it to the next snapshot. After capturing the second snapshot, you should have a second “Stacks” window open which looks similar to the first. To compare this snapshot with the first, locate and open the “Diff” menu. The first item in the list (assuming you did not close it) should be your original snapshot. (If the original snapshot was closed, you can reopen it from the main PerfView window.) Select the baseline snapshot and allow PerfView to compare the two.
After the Diff is created, you will see a screen that looks similar to the Stacks screens that displayed each snapshot.
In a Diff view, the columns to the right of the object types indicate the percentage and raw-value differences between the two snapshots. In the case above, notice that the “Totals Metric” value in the header section of the window, near the upper-left corner, shows that the total size difference between the two snapshots is about 3.4 MB. In the main section of that same window, we can see that ArrayList objects have contributed about 99.8% of that difference, or about 3.4 MB. If we double-click on the ArrayList line, we can see what objects use that particular type and how much each of those referring objects contributes to the increase.
From this screen shot, we can see that an object called MyLittleLeaker.Leaker.a makes up the largest difference in memory between the two snapshots. This object is indeed the source of my leak.
As you can see from the contrived example above, PerfView can help provide insight into what is changing over time inside your application, and it can be much less cumbersome than capturing and interpreting memory dumps with commands that are hard to remember.
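The snapshot-and-diff idea PerfView uses has analogs in other runtimes, too. As a rough illustration of the technique itself (not of PerfView), Python’s standard-library tracemalloc module can capture two heap snapshots and report what grew between them, pointing at the allocation site much as PerfView’s Diff view points at the growing type:

```python
import tracemalloc

tracemalloc.start()

retained = []  # a long-lived reference, standing in for the "leak"

snap1 = tracemalloc.take_snapshot()       # baseline snapshot
for _ in range(10_000):
    retained.append(list(range(50)))      # allocations between snapshots
snap2 = tracemalloc.take_snapshot()       # second snapshot

# Diff the snapshots: the largest size_diff entries identify where the
# growth happened, by source line.
for stat in snap2.compare_to(snap1, "lineno")[:3]:
    print(stat)
```

The top entry of the diff points at the `retained.append(...)` line, which is exactly the kind of hint that turns a vague “memory keeps growing” report into a concrete code location.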
The Microsoft .NET Performance team has created a series of videos, which are posted on Channel 9, on how to use PerfView in several scenarios, including live profiling and investigating high CPU scenarios. Take a look at the Channel9 PerfView Tutorial to learn more.
- Download PerfView
- Publication of the PerfView performance analysis tool!
- Next Version of PerfView has been released!
- Channel9 PerfView Tutorial
As any designer not living under a rock for the last year can tell you, “responsive design” is the latest buzzword to take the industry by storm. While I believe that responsive design is a great and necessary thing, the problem pops up when a front-end developer (that’s me) needs to incorporate this new technology into his current workflow. How can we take advantage of various screen sizes without devoting too much time and resources to study and trial and error?
In an effort to become well versed in responsive design, the Digital Brand Experience Team experimented with the following three solutions:
1. A free software download that lets the user set template parameters using an online tool and then exports HTML/CSS that can be modified and tweaked in your favorite HTML editor.
2. Paid software from an industry giant that closely mimics the existing design software designers use day in and day out.
3. Writing the code from scratch, using online guides and templates to get a base level of knowledge and hopefully expedite the learning/implementation process.
So, with that being said, today we are going to discuss option two. Since the Digital Brand Experience Team currently uses Adobe software through the Creative Cloud, we were all curious and excited to give Adobe’s new preview software, Adobe Edge Reflow, a try.
Adobe Edge Reflow performs like 99% of the other Adobe products and is very easy to pick up if you have used Fireworks, Illustrator or InDesign for any length of time. I will give a brief overview on how the software works – if there is interest in going into more depth in a future blog post, let us know in the comments.
Edge Reflow is set up with your main canvas and a single toolbar to the left. All of our actions/settings can be controlled from this minimal view. There are four features that I would like to point out:
1. Four main selectors that allow you to select objects, create shapes, text, and graphics.
2. When one of these tools is selected the panel below changes to reflect different options and settings.
3. Canvas with a column grid and gutter width set to your liking.
4. The “plus” button is what allows you to set different break points for when your design will re-factor based on different screen sizes.
Since we all have a basic understanding of design software and how Adobe products behave, let’s jump to a mock-up already laid out in Edge Reflow.
As you can see, I laid out a very basic grid structure just for the sake of argument. Now for the fun part, and where I think Edge Reflow really shines: easily manipulating the content for a smaller screen.
So, first, we are going to click the “plus” button in the top right corner (to set our break point). Once the button is pressed the entire bar lights up and you can drag the arrow to the width that you need. We are going to set ours to 320px for the iPhone.
Now, from the screenshot above, you can see that this caused my layout to get a little squirrely. No need to freak out; adjusting this layout is as simple as setting up the initial layout. You just need to resize and reorganize.
So, to fix this layout we are going to do a few things:
1. Change our column structure from 6 down to 1.
2. Reduce the top/bottom margin around the logo since we have less area to work with on mobile.
3. Reorient our main navigation and make the button size larger to account for tapping.
4. Reduce the size of our main banner graphic.
5. Adjust the body copy and right hand rail.
I wish you could have seen that in real-time, as it only took me 10-15 minutes to re-organize that layout.
Now that we have this done, how do we get the HTML/CSS exported so we can upload it to its final destination? Ah, you have found the main weak spot of Edge Reflow. I cannot figure out a way to export the code – which to me is a major stumbling block. The best I can figure is that under “view > preview in chrome” you can see the page in the browser. At that point you can view source and cut & paste the code out of the browser and into the HTML editor of your choice.
Edge Reflow is a very powerful tool that is very easy to pick up for any designer with a working knowledge of other Adobe products. You do not need any HTML/CSS experience and can do the entire layout through the visual interface. It is a great product for front-end developers who are just getting their feet wet in the responsive design arena.
That being said I do have a few cons to point out:
1. Exporting HTML: Adobe really needs to come up with an “Export to Dreamweaver” feature. To me this is a no-brainer and should have been included even in the preview release.
2. Editing someone else’s code: For me, it is not time efficient to make edits to someone else’s code. I always have problems finding a specific style or the main site structure is not laid out the way I would do it.
Final Thoughts: I am going to reserve this tool for quick prototyping when I need to show a rough responsive design to a client. Using this tool I can get the work done in an afternoon and be able to show the client a visual of what their site will look like on both a desktop and mobile screen. Today, I do not feel comfortable writing final website code with Edge Reflow—I will reserve that for writing the code from scratch.
Senior Web Designer
For those of you on SharePoint 2013, the March 2013 Public Update is now available with 30+ fixes, as well as some performance and stability fixes around search. The good news is that the SharePoint Server 2013 update contains the SharePoint Foundation 2013 update as well, so you only need to apply the one update. The bad news is that there is a change in the package configuration that requires you to install this update in order to install any future SharePoint updates.
The following are the KB links for the respective updates:
- KB 2768000 – SharePoint Foundation 2013
- KB 2767999 – SharePoint Server 2013
- KB 2768001 – Project Server 2013
The Full Server Packages for March 2013 PU are available through the following links:
- Download SharePoint Foundation 2013 March 2013 PU
- Download SharePoint Server 2013 March 2013 PU
- Download Project Server 2013 March 2013 PU
After installing the fixes you need to run the SharePoint 2013 Products Configuration Wizard on each machine in the farm. Additionally, if you are running Search Service Application in the farm, you will need to perform the following:
For those of you wondering about the difference between a Public Update (PU) and a Cumulative Update (CU): a public update is a monthly release of general fixes and security updates applying to all customers, whereas a cumulative update is a bi-monthly release of specific hotfixes meant to address specific customer problems. CUs are often rolled into a later PU, as in this case, where the February CU is rolled into this March PU.
Jason Condo, MCITP
Principal Consultant, Systems Management and Operations