I’ve just returned from Integrate 2014, the annual gathering of BizTalk developers in Redmond. The big story this year was that Microsoft’s BizTalk team gave its first public briefings and demonstrations of the new BizTalk architecture it’s been planning for several years. The key features of this new architecture are:
- BizTalk Server will be refactored and re-implemented as small pluggable components. Each component can be used separately from the others, and new ones can be written by third-party developers. Components can each be developed and versioned separately, so there will no longer be single monolithic releases of “BizTalk Server”. I was reminded of how Microsoft has been breaking ASP.NET into components with OWIN and Katana.
- But unlike OWIN, the new BizTalk components will not connect directly to each other. Instead their inputs and outputs will all pass through a new type of runtime engine that acts as a message broker. The message flow will thus be pub/sub rather than a pipeline.
- There will be a web-based “gallery” where developers and business users can pick and choose components and arrange them into workflows. Developers will also have access to components in Visual Studio via NuGet.
- This architecture will be implemented first on Windows Azure, but it will also run on-premises in a future version of the Windows Azure Pack. The latter appeared to be how the Microsoft devs were running their demos.
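The contrast between a pipeline and the pub/sub model described above can be sketched generically. This is plain Python for illustration only; none of these names or APIs come from BizTalk. The point is that the "components" never reference each other, only the broker:

```python
# Generic pub/sub illustration (not BizTalk APIs): components publish
# and subscribe to topics on a broker instead of calling each other.
from collections import defaultdict

class Broker:
    """A minimal in-process message broker."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        # Any number of components can listen on the same topic.
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        # The publisher does not know (or care) who receives the message.
        for handler in self._subs[topic]:
            handler(message)

# Two independent "components" wired together only through the broker:
received = []
broker = Broker()
broker.subscribe("orders", lambda msg: received.append(f"validated:{msg}"))
broker.subscribe("orders", lambda msg: received.append(f"logged:{msg}"))
broker.publish("orders", "PO-123")
```

In a pipeline, the validator would have to call the logger directly; here either one can be swapped out, versioned, or added by a third party without touching the other, which is the property the new architecture is after.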
At the conference Microsoft referred to the new components as “microservices”. This term didn’t seem to appeal to everyone, and I won’t be surprised if Microsoft comes up with new terminology. (They no longer refer to it as “AppFabric” as they did in 2010.) And although the BizTalk team is moving the technology forward, we learned from Scott Guthrie (who gave the keynote) and Bill Staples (Director of Program Management for the Azure Application Platform) that Microsoft is planning to adopt this architecture for other Azure features and services.
Microsoft did not have a public preview of the microservice architecture to announce at the conference, but they promised it for 2015 Q1. That is also when they plan to release the first preview of BizTalk Server 2015, which should be a “major” release since it will come in an odd-numbered year.
Although GA for the new BizTalk architecture is probably more than a year off, the most exciting takeaway for me was the affirmation, both from Microsoft and the developers assembled from around the world, that BizTalk Server and Microsoft Azure BizTalk Services (MABS) are still strong, vital and more able than ever to handle demanding enterprise integration. Existing customers are sticking with BizTalk, and new ones are adopting it all the time. At Bennett Adelson we will continue to keep BizTalk at the center of our Connected Systems practice.
Join us Tuesday, February 11th @ 5:45pm for the .NET SIG. Jeff Mlakar from our Business Intelligence team (@BIatBA) will be presenting on the Microsoft Power BI stack, including Power Query, Power Pivot, Power View, and Power Map. Jeff will show how these free add-ins can be used within Excel, and he will demonstrate how to leverage Power BI on Office 365 to share and collaborate on the data, both online and via the new Power BI mobile app.
Register for the event here.
During a System Center Configuration Manager 2012 (ConfigMgr 2012) implementation, I needed to install the client to large groups of computers, organized by subnet, in a controlled manner. For a controlled client installation, one of the simplest approaches is to group the systems into a collection and then install the client to the whole collection from the console's actions menu. However, the customer required that the client push be performed on a subnet-by-subnet basis, and the newly discovered computers didn't yet have subnet information, so I couldn't easily create a query-based collection on subnet or IP.
Well, instead of writing a script to add computers from a list to a collection (or manually creating collections with direct memberships) and then running the client install per collection, I was able to leverage a tool in ConfigMgr 2012 that creates CCR records for manual deployment of clients. The Generate CCR Tool (ClientPushGenerator.exe) was exactly what I needed for a controlled deployment into the environment.
To start, I created a list of computer names for each subnet and saved each list to a separate text file. I then launched ClientPushGenerator.exe from <configmgr install folder>\AdminConsole\bin on a Configuration Manager site server. The tool let me choose a text file of computer names (one computer per line) and the site they would be assigned to, with the option to force the installation (“Always install the client”). Note that forcing the install is not required for upgrading SCCM 2007 clients to 2012.
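Preparing the per-subnet text files is easy to script if you have hostname/IP pairs from discovery. Here is a minimal sketch; the input format, the /24 prefix, and the output file naming are my assumptions for illustration, not part of ConfigMgr or the tool:

```python
# Hypothetical pre-processing step (not part of ConfigMgr): group
# hostname/IP pairs by subnet and write one text file per subnet,
# each with one computer name per line, ready for ClientPushGenerator.exe.
import ipaddress
from collections import defaultdict
from pathlib import Path

def split_by_subnet(rows, prefix=24):
    """Group (hostname, ip) pairs by their enclosing /prefix network."""
    groups = defaultdict(list)
    for hostname, ip in rows:
        # strict=False lets us pass a host address rather than a network address.
        net = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        groups[str(net)].append(hostname)
    return dict(groups)

def write_subnet_lists(groups, folder="."):
    """Write one hostname-per-line file per subnet, e.g. 10.1.5.0_24.txt."""
    for net, hosts in groups.items():
        name = net.replace("/", "_") + ".txt"
        Path(folder, name).write_text("\n".join(hosts))
```

Each resulting file can then be fed to the tool one at a time, which matches the subnet-by-subnet rollout the customer wanted.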
I used the tool to generate CCR files, which initiated a client push deployment to the computers in each text file. After monitoring the status, and with the customer satisfied with the results, we repeated the steps for each subnet over a controlled period, checking each deployment set for success and impact. We deployed to 100+ subnets quickly this way with great results. Once the majority of installs were complete, we enabled Client Push for the site so that the remaining systems not in our lists would be installed automatically.
While there are many ways to get your client deployed, this was a quick way to process some pre-defined lists of computers to stage the client rollout. If you want to use this tool, you don’t have to do this by subnet. This is just how the customer wanted their deployment groups to be processed in my case. No need to write a custom script now to generate those CCR files.
Principal Consultant, System Management and Operations