Bennett Adelson Technical Blog

Posts from the consultants at Bennett Adelson

Azure Service Fabric: a Platform for Mission Critical Apps – Part 1

Microsoft’s unveiling of HoloLens at Build 2015 caused a lot of excitement. But for me the biggest excitement was something Microsoft released that we can download now. It’s this:

[Screenshot: Service Fabric Explorer showing the local cluster on my laptop]

This is Preview 1 of Azure Service Fabric. If you aren’t sure what that is, look on the left. Onebox is my laptop, and it is running a local cluster composed of four applications and five nodes. The applications are named ClusterManagerService, FailoverManagerService, ImageStoreService, and NamingService. Sound familiar?

Those four applications are what we commonly refer to as “Azure.” That’s the Azure Microsoft runs in the cloud to host Azure SQL Database, Power BI, Cortana, DocumentDb, Event Hubs, and “many other core Azure services”. That Azure is now running on my laptop.

Which means I can build Azure apps and run them locally on real Azure, not an emulator. I’ll be able to deploy them to Azure on Microsoft’s public cloud, and they’ll run the same there as they do locally. And I’ll be able to deploy them to on-premises data centers or ISV data centers when they run Microsoft Azure Stack on Windows Server 2016.

I haven’t gotten a chance to try a HoloLens yet. But this is plenty of excitement for me. Let’s consider how Azure Service Fabric apps are different from other apps.

Azure Service Fabric applications run in clusters. A cluster is a group of virtual or physical machines, each hosting a collection of isolated processes called nodes. On my laptop a Windows Service called FabricHost.exe is managing the cluster. Each of the five nodes is implemented by a trio of Windows processes running Fabric.exe, FabricGateway.exe and FileStoreService.exe.

[Screenshot: the Windows processes behind the local cluster]

An Azure Service Fabric application consists of one or more microservices. Each microservice will be deployed in one or more containers on one or more nodes. Microservices run in isolation from each other, and can be either stateless or stateful.

Here is what my Service Fabric Explorer looks like after I have deployed four Service Fabric applications (one from each project template in Visual Studio). Each application contains one microservice, but most of them are deployed on multiple nodes.

[Screenshot: Service Fabric Explorer with the four applications deployed]

What can we gain by dividing applications into microservices and running them on clusters of nodes?

First we gain High Availability. When a microservice crashes, Service Fabric intervenes immediately to redirect traffic to a backup copy of the microservice on a different node. Thanks to the Naming Service, microservices hide their physical locations, so redirection to a new instance happens transparently. Service Fabric then instantiates a new microservice instance to replace the old one. If the failed microservice is stateful, then the backup instance will include its own copy of the state, and the replacement instance will get a copy of the current state as well. By the same means, Service Fabric can quickly recover from the loss of an entire node or the loss of an entire machine in the cluster.
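
To make that concrete, here’s a rough sketch of how a client can ask the Naming Service where a service currently lives, using the FabricClient API; the service name is made up, and this is a sketch of the pattern rather than code from the preview bits:

using System;
using System.Fabric;
using System.Threading.Tasks;

class NamingServiceSketch
{
    // Ask the Naming Service for the current location of a service instead of
    // hard-coding an address. After a failover, re-resolving returns the new home.
    static async Task PrintEndpointsAsync()
    {
        var fabricClient = new FabricClient();

        ResolvedServicePartition partition =
            await fabricClient.ServiceManager.ResolveServicePartitionAsync(
                new Uri("fabric:/MyApp/MyService"));   // illustrative service name

        foreach (ResolvedServiceEndpoint endpoint in partition.Endpoints)
        {
            Console.WriteLine(endpoint.Address);
        }
    }
}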

We gain High Scalability. We can have as many instances of our microservices running on as many nodes as we need. We can constantly right-size our deployment to optimize performance while minimizing costs.

The support for stateful microservices brings important advantages, and I think this is one of the big things that sets Service Fabric applications apart from other kinds of Azure applications. By putting state side-by-side with code – by not separating “service tiers” from “data tiers” – we can dramatically Reduce Latency. Think of Microsoft’s Cortana, an Azure service that finds restaurants and looks up movie times in split seconds. And by packing data and computing together, we can reduce our apps’ hardware footprint and thus further Reduce Costs. And the Programming Experience for developers can become much simpler.
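
To make the stateful idea concrete, here’s a minimal sketch of a stateful microservice in the Reliable Services model, keeping a counter in a reliable dictionary that lives alongside the code and gets replicated to other nodes. The class and collection names are mine, and the exact API surface may differ from what ships in Preview 1:

using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

// Illustrative stateful microservice: its state is a reliable dictionary that
// Service Fabric replicates across nodes, so a failover promotes a replica
// that already holds the data.
internal sealed class TicketCounterService : StatefulService
{
    public TicketCounterService(StatefulServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        var counts = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, long>>("ticketCounts");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (var tx = StateManager.CreateTransaction())
            {
                // Read-modify-write inside a transaction; the change is
                // replicated before the commit completes.
                await counts.AddOrUpdateAsync(tx, "sold", 1, (key, value) => value + 1);
                await tx.CommitAsync();
            }

            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}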

The last big gain I’ll mention now (I’ll talk about many more in coming posts) is support for Rolling Live Upgrades. To deploy a new version of a microservice, you don’t need to stop what’s already in production. Service Fabric will create instances of the new version and silently substitute them for old instances as they become idle, taking care that work underway is never handled by inconsistent versions. This is the same way Microsoft rolls out updates to Azure SQL Database and its other cloud services.

I think these make Azure Service Fabric an excellent platform for building all sorts of apps, but especially mission critical apps, such as

  • Apps that need to run all the time, never going down for planned or unplanned reasons
  • Apps that handle heavy workloads
  • Apps requiring split-second response times
  • Apps with heavy resource demands that need to be frugal with costs

In this series I will explore Azure Service Fabric in some detail. Watch for more posts exploring the architecture, the tooling, and the application lifecycle on this new but proven platform.

In the meantime you can start your own exploration by downloading the Service Fabric Preview 1 here.

Build Day 3 – In With the New: Microsoft Edge, More HoloLens, Azure Logic Apps, and Direct2D

[This post is by Jeff Mlakar, a member of the Business Intelligence Team at Bennett Adelson.  Follow us @BIatBA and @JeffMlakar]

This was my third and final day at the Microsoft Build Conference. My first was almost all HoloLens. My second was all data. I tried to find a theme in the sessions I saw today. Best I could come up with was that they were all dealing with something new. Of course that could also be said for any collection of sessions at this conference, but anyway… Session 1:

The Microsoft Edge Rendering Engine That Makes the Web Just Work

Presented by David Catuhe and David Rousset.

I gotta say, on Wednesday it was pretty exciting to be in the room for a name announcement. And yes, we have our new browser name: Microsoft Edge, with a logo that looks not far off from IE’s. Which has to make me wonder: IE has a strong presence within the enterprise, right? A lot of companies have old versions of IE as part of their approved software. It’s probably not going to be going away from the enterprise any time soon. A lot of employees have their old IE versions for business applications and then install a newer/other browser for going out to the web. So what will happen if these employees start to choose Edge as their other browser? Will they now have two different browsers with near-identical logos in their work computer taskbar? Gotta wonder. Anyway, on to the session.

The Davids talked about the history of IE, with its Trident engine and many document modes. They talked about how Edge’s new engine, a fork of Trident called EdgeHTML, has one document mode and currently benchmarks better than Chrome or Firefox. A lot of the session was on WebGL performance and gaming in the browser, with built-in game pad support and all. This led to an impressive spooky-scene game demo in Edge where the biggest cheer was for a gravestone reading “RIP IE 6”.

One interesting thing was the discussion of IE on Windows Phone currently having issues with mobile layouts, since some sites check for Android or iOS as a criterion for serving their mobile version. Edge corrects this by almost faking as Android or iOS:

[Screenshot: Edge presenting itself as Android/iOS]

The Davids stressed the use of CSS filters, and of feature detection, which hopefully developers are doing already. With support for @supports for feature detection in CSS, Edge is catching up to where the other browsers have been for a couple of years.

All in all, Edge looks very promising. I’m looking forward to learning more specifics about its features, seeing how it will work for business applications, and trying out its dev tools.

Case Studies of HoloLens App Development

This was a panel discussion where we got to text questions to people who have worked on HoloLens projects already. It was moderated by Rukari Austin, a Microsoft Community Manager for HoloLens. The panel included Dr. Jeff Norris of NASA, Aviad Almagor of Trimble Navigation, Professor Mark Griswold of Case Western Reserve University in Cleveland, and Microsoft Studios Manager Megan Saunders. You’ve probably already seen the videos of the projects these panelists have been working on, so I won’t elaborate on their work. If you haven’t seen the videos, I’d highly recommend you see them. I’ll mostly go through and outline some of the points that resonated with me as I listened to the panelists. These are somewhat paraphrased and could be out of order, but I’ll stick to the spirit of what the speaker was saying. This is not complete, so I’ll urge you to watch the video.

[Photo: the HoloLens panel]

(from left: Rukari Austin, Jeff Norris, Aviad Almagor, Mark Griswold, and Megan Saunders)

Jeff described how HoloLens is helping the two main initiatives of NASA: to explore and to share the experiences of that exploration. He noted that when first developing for such a new technology, all his instincts were wrong. Megan elaborated on the issues of defining a new kind of UI and UX. Aviad said that in this new world, a proper user experience is critical. Mark postulated that as we develop for the technology, we’re probably going to start out making it look like a traditional computer, but we’re not going to end there.

Mark’s background is in building MRIs. In addition to his physics, radiology, and engineering background, he had programming experience and had worked with Unity. He talked about how unsure he was of how long it would take to get set up and working with a new technology like HoloLens. Installing SDKs, configuring, troubleshooting, doing the “hard math” of 3D graphics transformations, etc. He decided to time how long it would take him to go from first sitting down, to making a hologram, to viewing it in HoloLens. It was less than 5 minutes. Which is amazing. Jeff described an experience of intuitively wanting to move his mouse off the screen to interact with the hologram. He suggested that to a member of the team, who implemented it in an hour or two, and he now has mouse interaction with the holograms. So ease of development in the Windows Holographic platform is a big talking point.

All speakers talked of the importance of collaborating via holograms. Jeff said we take for granted how important physical co-presence is. Mark described it as the only way he can educate students, and he can see applications for museums and other such institutions coming out of the technology. Mark also reminded us of the importance of the spatial sound feature of the HoloLens for further immersing you in the augmented reality.

Aviad discussed the inherent human difficulty in translating 2D images into imagined 3D. For a device to do that is huge for “real architects”, as he said (not “system architects” or such). I wonder: if Microsoft is working with someone from Trimble on HoloLens, and Trimble bought SketchUp from Google, will we see SketchUp integrating with HoloLens at some point? I’m just wondering.

No one could answer the big question of when we’ll be able to get our hands on a HoloLens with which to develop. Or when we might see an emulator or such. It’s interesting to me that as this new technology comes out, our gaze is not on how it works but on how we can develop and expand on it.

After the session, we got a chance for one-on-one questions with the panelists. I was surprised that the platform of choice for their applications was always Unity. So basically, if you want to get ready for HoloLens development, learn Unity. (http://unity3d.com/)

Logic Apps

Presented by Ilya Grebnov and Stephen Siciliano.

Logic Apps are a part of the relatively new Azure App Service. They enable a developer to create business process workflows in a designer. Business processes are thus automated as a SaaS service. In this session, Ilya and Stephen gave us an introduction to the concept and showed us how to lay out our workflows in a designer, and how to code them.

They talked about all key app services currently available. I learned a new acronym: IPaaS, where the I is for Integration. Ilya and Stephen also taught me about Azure Pack, basically a “mini Azure” that can run on prem. In this way, Logic Apps could run on premises.

Over the course of the demo, they defined Logic Apps in the designer and through code. It’s nice that the Logic Apps can be completely defined in JSON. This means that whatever you can do with Logic Apps, you can configure via the RESTful API of the Logic App.

There are a handful of components in Logic Apps. First are Triggers, which might be a bit of a misnomer: even a Twitter trigger is still polling on a schedule, say every minute. In the demo we define a Recurrence trigger. Next are Actions, a set of steps whose possible types include Http, ApiApp, or even another Workflow. In the demo, we wire up a Twitter connector and a Dropbox connector.

At this point, I can’t help but be struck by the number of sessions I’ve seen that are using Twitter in their demos. Does this mean it’s a treasure-trove of valuable data? Or is it just really good for pedagogical purposes? Either way, I’m glad that in at least one session, I was able to get a tweet saying “Lee-rooooy Jenkins!!” onto the presenter’s screen.

It’s nice that in Logic Apps everything is metadata driven, defined in Swagger, which lets everything wire up apparently pretty seamlessly. At least it did in the demo. More complex logic can be accomplished through conditions and JSON transformations. As for what you have to do to prepare your own API to be used by a Logic App: as long as you have a RESTful, JSON-returning API using OAuth or another common auth provider, you should be able to use it.

As far as a new possible location for your Logic App, just this week, Azure App Service Environments were released, which encapsulate a fully isolated and dedicated environment to run all your apps, be they Web, Mobile, API, or Logic.

It was a great session on a cool technology with good demos. Though I do have to wonder about Logic Apps’ similarity to Azure Data Factories. There are certainly similarities, as each is a visual workflow of data in the cloud. But Data Factories seem to be more about the analytic data pipeline and have better connections to SQL and other data stores. Logic Apps seem to be all about APIs and smaller-volume business process data flows. It could just be all about size of data. I’ll have to play with each more to give a better comparison.

What’s New in Direct2D and DirectWrite for Windows 10

Presented by Anthony Hodsdon.

This was my final session at Build and it was nice coverage of some fundamental graphics enhancements that can really make a difference in your Windows apps. Imagine you have document text in your app and you provide pinch-zoom-in to enlarge your text. Odds are the text is pixelated as you’re zooming and then snaps to a much cleaner view when you let go. (I just confirmed it on a Word document on my Windows 8.1 slate.) This is because in Windows 8.1, by default, text was constantly being re-rasterized, putting a lot of work on the CPU, with the results then uploaded to the GPU. So the app developer probably did what was common practice, which was to render the document to a bitmap during the scaling, hence the pixelation on zoom-in. The solution: hardware-accelerated vector-based text rendering in Windows 10. Anthony gave an impressive demonstration of zooming in very close to a document in a Windows 10 app and seeing the text remain clear with nice, round Béziers.

I think it’s little formatting things of this sort that really make for an enjoyable app experience for users. This is what the Direct2D and DirectWrite enhancements were bringing in this session. Direct2D and DirectWrite are hardware accelerated graphics APIs for 2D graphics and text display, respectively. Enhancements that come for them in Windows 10 are going to help displays not just on PCs, but phones as well.

For example, most phones don’t want to give up the ½ GB of space needed for all desktop fonts. If they don’t have the font your app is using, it could cause serious formatting issues. The solution: asynchronous font downloading and rendering built in not only to DirectX, but directly to XAML as well. You can configure your app to display a default font as you download the appropriate font, as in the following code:

// Listener that re-renders the document once the font download completes.
class MyFontDownloadListener : public IDWriteFontDownloadListener
{
    STDMETHOD_(void, DownloadCompleted)( /* … */ )
    {
        RerenderDocument();
    }
};

// Queue up the asynchronous font download and register the listener.
spDwriteFactory->GetFontDownloadQueue(/* … */, &spQueue);
spQueue->AddListener(/* … */);
spQueue->BeginDownload();

Non-text graphics enhancements include things like built-in shader linking for optimization, and a simple, efficient API for loading and rendering images, which reduces what was chunks of code down to a manageable amount.

Inking capabilities exist as a new 2D primitive that is hardware accelerated and provides low latency, high fidelity, and no polygonization. Anthony gave a great demo with his own handwriting showing how one can manipulate ink thickness and ink rotation. Makes me bummed for missing the DirectInk talk yesterday; I’ll have to go back and watch it.

In summary, as Anthony put it, these enhancements will mean your apps on DirectX will be faster, more maintainable with less code, more consistent across devices, and more beautiful.

Final thought:

So that’s it. Heck of a 3 days. Build is the kind of conference that makes you realize how much more you’ve got to learn. I’d say that’s a good thing.

My Data Day at Build – Azure Elastic Databases, Azure Search, IoT, Azure Stream Analytics, and more

[This post is by Jeff Mlakar, a member of the Business Intelligence Team at Bennett Adelson.  Follow us @BIatBA and @JeffMlakar]

Today was Day 2 for me at Microsoft Build. And it was all about DATA. From the new Azure SQL Database Elastic Databases, to Azure Search, to Big Data, to Azure Stream Analytics, and Azure Machine Learning. All data. And, more amazingly, all data in the cloud. I remember when Azure’s data offerings were limited to their blob and table storage and the beginnings of SQL Server Data Services. To see how much the data offerings in Azure have exploded is surprising and exciting. Even today’s keynote was heavy on Azure Machine Learning. Analyzing mapped human genomes in R and exposing that algorithm as an API in the cloud, so anyone with a (now surprisingly accessible) mapped human genome can create a heat map of their health risks, shows how what is happening in data analytics can really make personal differences in our lives. And, I gotta say, when thinking about IoT and ML, I didn’t see cow pedometer artificial insemination coming. “AI meets AI”…

My first session:

Modern Data in Azure

Presented by Lance Olson, et al. In many ways, it was perfect that this session kicked off my day of data. This demo was presented as a tour of Azure data offerings, primarily from a developer point of view. As in, it wasn’t just an explanation of different data storage types in Azure, as I was expecting. They built a web app and brought in each form of data technology as it was needed for the application. A nice approach. The app was called the WingTip Ticketing application, and would be expanded on in the next session to be a SaaS offering. The first data offering being added to the ticketing application was:

Azure SQL Database Elastic Database Pool

The ticket ordering was to be handled by a relational database, Azure SQL Database. The argument for using the brand new Elastic Database Pool (announced today and in preview) was that it would make sense to logically and physically partition the database by artist, as some artists might naturally have far more load than others, depending on demand. They demonstrated ticket sales loads against a standard database and against a newly sharded Elastic Database Pool, sharded by artist. The load was measured in DTUs, or Database Throughput Units. I’ll explain this more in a bit as it was heavily covered in the next session. I was impressed by how performance could be increased, but I was more impressed with how easy it all was to configure. This includes setting up the sharding strategy, and integrating the Elastic Database Client Library into the web application code. Elastic Databases were covered thoroughly in the next session, so I’ll come back to them then.

Azure DocumentDB

They used Azure’s NoSQL document database service, Azure DocumentDB, to handle ratings and reviews, with the thought that this data would be largely unstructured. For those who don’t work with document databases, you can basically think of a document as a record in a table whose schema is fluid. This way when new data is added, no schema changes are needed, just updates to the Model, View, and Controller in the web app. Documents could be created with the code:

Document doc = Client.CreateDocument(Collection.SelfLink, entity);

Even SQL-like queries could be created using

Client.CreateDocumentQuery
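
As a rough sketch of what that can look like with the DocumentDB .NET SDK (the Review class, its properties, and the values are my own illustration, not the demo’s code):

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

public class Review
{
    public string ArtistId { get; set; }
    public int Stars { get; set; }
    public string Comment { get; set; }
}

class DocumentDbSketch
{
    static async Task RunAsync(DocumentClient client, string collectionLink)
    {
        // Create a document; if Review gains new properties later, no schema change is needed.
        var entity = new Review { ArtistId = "artist-42", Stars = 5, Comment = "Great show" };
        await client.CreateDocumentAsync(collectionLink, entity);

        // SQL-like query expressed through the SDK's LINQ provider.
        var topReviews = client.CreateDocumentQuery<Review>(collectionLink)
            .Where(r => r.ArtistId == "artist-42" && r.Stars >= 4)
            .ToList();

        Console.WriteLine(topReviews.Count + " matching reviews");
    }
}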

Azure Search

Azure Search was utilized to provide users with an intuitive search box in the ticketing application. Azure Search is a fully managed Search-as-a-Service in the cloud. It reminds me of working with Elasticsearch in the way you set up indexes, analyzers, and suggestions, though since we are in the Azure cloud it is FAR easier to provision and set up. Once an index and indexer were set up and the data populated, wiring up the search was easy using the namespace:

Microsoft.Azure.Search

And using the SearchIndexClient for operations like:

SearchIndexClient.Documents.SearchAsync
and
SearchIndexClient.Documents.SuggestAsync

They showed the use of better scoring in Search, as well as suggestions. They didn’t demo any highlighting of hits capability, but I asked them afterwards and they said this was available.
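
As a rough sketch of the kind of code involved (the service name, index name, key, and Concert class are all made up for illustration, and the SDK surface has shifted between preview versions):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Search;

public class Concert
{
    public string Id { get; set; }
    public string Artist { get; set; }
    public string Venue { get; set; }
}

class SearchSketch
{
    static async Task RunAsync()
    {
        // Query-only client for one index (names and key are illustrative).
        var indexClient = new SearchIndexClient(
            "wingtip-search", "concerts", new SearchCredentials("<query-api-key>"));

        // Full-text search; scoring profiles defined on the index shape the ranking.
        var response = await indexClient.Documents.SearchAsync<Concert>("rolling stones");

        foreach (var result in response.Results)
        {
            Console.WriteLine(result.Document.Artist + " at " + result.Document.Venue);
        }
    }
}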

Apache Storm for HDInsight

Some Big Data work was then done for the interesting example of upping the search results score based on the number of recent tweets. They used Apache Storm on HDInsight with a spout to Twitter based on a hashtag of a fictional music star. They bolted this to our Azure Search index and then had us in the audience tweet to the hashtag. When the hard-coded threshold of 10 tweets for the hashtag was met, that artist’s score would increase in the Search results. A compelling example.

All-in-all a great tour of many newer Azure data offerings. It was like 4 sessions in one.

My next session:

Building Highly Scalable and Available SaaS Applications with Azure SQL Database

Presented by Bill Gibson and Torsten Grabs. This session was more of a deep-dive into the new elastic capabilities of Azure SQL Databases, like I mentioned before. For me, I kept trying to get my head around how this was different from Federations. I’m starting to get the idea now, though, that this is not just a logical separation, but a strong data sharding strategy that can handle predicted and unpredicted loads while saving you from having to write all the routing code that you had to with Federations.

We’re back in the WingTip Ticketing application (a tongue-twister name all presenters were having trouble with). This time we’re making the application into a SaaS offering, with different customers using the service for their own ticketing pages. We’re shown how to set up our elastic databases in PowerShell scripts. We create a database per customer, register these databases with the shard map, and then add our customers to traffic manager rules. What we end up with is a collection of customer databases and a common customer catalog database. Not that different from a Federation, but without the usual bottlenecks.

Establishing the connection is achieved as follows:

SqlConnection conn = SaasSharding.GetCustomerShardmap.OpenConnectionForKey(…);

passing in the customer’s key.
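
The SaasSharding helper was part of the demo, but under the covers this is roughly what the Elastic Database client library call looks like; the shard map name and connection strings here are illustrative:

using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

class ShardRoutingSketch
{
    // Data-dependent routing: open a connection directly to the shard that
    // holds this customer's data, as recorded in the shard map manager database.
    static SqlConnection OpenCustomerConnection(
        int customerId, string shardMapManagerConnString, string userConnString)
    {
        ShardMapManager manager = ShardMapManagerFactory.GetSqlShardMapManager(
            shardMapManagerConnString, ShardMapManagerLoadPolicy.Lazy);

        ListShardMap<int> customerShardMap =
            manager.GetListShardMap<int>("CustomerShardMap");   // illustrative map name

        return customerShardMap.OpenConnectionForKey(customerId, userConnString);
    }
}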

We’re shown how we can scale our databases’ min and max DTUs via the portal (see pic below), PowerShell, REST APIs, or T-SQL.

[Screenshot: setting min and max DTUs in the Azure portal]

I mentioned DTUs (Database Throughput Units) before, so let me elaborate on my current understanding of what the term means. Performance is measured along four dimensions: reads, writes, compute (CPU), and memory; a DTU is the maximum of these four after they have (somehow) been normalized. I’ll have to read their whitepaper sometime to see exactly how this is done.

In the end, it all looks good. Easy to manage and with the possibility of handling unpredictable usage.

Building Data Analytics Pipelines using Azure Data Factory, HDInsight, Azure ML, and More

Presented by Mike Flasko. Boy, if ever there was a session that made me feel like I didn’t know anything about data flows, this was it. Azure Data Factories, named so because they resemble a Henry Ford assembly line for data, are a shift for me in my SSIS-centric ideas of data flows. They are a new preview service for modeling and executing the data analytics pipeline. In a visual designer in the Azure portal, we create data sets (be they tables or files), activities (like Hadoop jobs, custom code, ML models, etc), and pipelines (a series of Activities) to complete a data analytics load process.

The Data Set source is defined in a JSON document. Activities to partition data are done via a Hive script on an HDInsight cluster. We combine and aggregate data in an activity defined by a JSON object. A final activity calls an Azure ML scoring model. We don’t need to know its inner workings, only the schema of the input and output and how to call the algorithm.

The end result is a process that takes cell phone log data, combines it with our existing customer data, aggregates it, and spits out a data set that says what the probability is of each customer cancelling their service. This end result is then easily (also using the Factory) sent to PowerBI for a lovely dashboard.

This is all still really new to me and I really need to study up on this. I have leftover questions, like how you would handle workflows for bad data, and what the best way is to promote a factory from staging to prod (Mike answered the latter for me after the session: leave the factory as is and swap the linked services definitions to make the factory run against production). One way or another, the data game is changing, and this session was an excellent introduction to the brave new world.

Gaining Real-Time IoT Insights using Azure Stream Analytics, Azure ML, and Power BI

Presented by Bryan Hollar et al. Azure Stream Analytics was released to GA just 2 weeks ago, and this is the first I’ve gotten to see it in action. After seeing case studies from Fujitsu and the Kinect team, ASA implementation was mapped out for us. The shift is from thinking about reporting on data at rest to reporting on data in motion. For example, we could analyze how many Twitter users switched sentiment on a topic within a minute in the last ten minutes. SAQL (Stream Analytics Query Language) makes this easy by being a flavor of SQL mixed with temporal extensions. You’re analyzing within a time window, and these windows could be tumbling, hopping, or sliding depending on how you’ve set up your queries. For example:

SELECT Topic, COUNT(*) AS TotalTweets
FROM TwitterStream TIMESTAMP BY CreatedAt
GROUP BY Topic, HoppingWindow(second,10,5)

With Azure Stream Analytics, your data flow pipeline is set to pull from existing event hubs, analyze, and persist (or display) its results. The processed data doesn’t even need to be persisted to be reported on. ASA basically exists in the same place in the data flow pipeline as ML. But where ML would take the “cold path” of analyzing large sets of data at rest, ASA is analyzing the data as it streams. Though now, for a brief preview, ML is integrated into ASA. I’m told you’ll be able to sign up for this preview at the ASA team blog:

http://blogs.msdn.com/b/streamanalytics/

As far as integrating the Internet of Things, it’s basically a matter of configuring your event hub in your data pipeline. So there’s not much difference pulling from Twitter or the Internet of Things. You can then configure your output to be Power BI (also in preview) for a real-time dashboard. The most impressive IoT and ASA example came from Fujitsu, who showed an app that geo-mapped energy consumption data and could zero in on spikes, right down to an area of a building, in real time. Though, now that I think about it, they may be tied with the pedometer analysis that tells when your cows are in heat. And now I finally know why they call it a “Heat Map”.

Building Big Data Applications Using Azure HDInsight Service

Presented by Asad Khan. The final session of my day was a tour de force of Big Data in Azure. It was four one-hour sessions compressed into one, and it was a doozy. It started with 30 seconds of the basics of big data: caring about the volume, velocity, variety, and variability of data. They mentioned how Apache Hadoop is an open-source platform for large amounts of unstructured data, but how the managed infrastructure of Azure makes HDInsight an enticing implementation. They covered HBase for NoSQL, and taught us how to use Storm for streams of data by showing us how to build spouts for Twitter and bolts for SignalR to display data on the web in real time. It was an ambitious session, but pulled off very well.

Final thought:

I’m just now realizing that of everything I’ve mentioned, Big Data with HDInsight is the old man at the ripe old age of a year and a half. Speaks volumes to how much Microsoft is invested in growing its cloud data offerings. Can’t wait to see what’s next!

My Day at Holographic Academy

[This entry was written by Jeff Mlakar, a member of the Business Intelligence Team at Bennett Adelson.]

Today was day 1 at the Microsoft Build Conference.  There were many exciting things that came out of the Keynote, like an Android subsystem on Windows, Objective-C apps, and, I’m happy to report, lots of pieces for a data geek like me, such as Azure SQL Data Warehouse and Data Lake.  But by far the most exciting was again Windows Holographic and HoloLens.

If you’re not familiar, take a look:

And I’m also proud to say Cleveland represented.  Both Case Western Reserve University and the location of my last consulting job, the Cleveland Clinic, were front and center for the main demo:

I was eagerly jonesing to get my hands on the device and had no idea if I’d get a chance to, let alone code for it.  So when they announced at the end of the Keynote that they were now taking registration for HoloLens events at the conference, I couldn’t register fast enough.  The site was slammed with requests, and by the time I finished I was certain I didn’t get in.  You can imagine my excitement when, sitting in my next session (on the Microsoft Band), I got an email saying my registration was approved and that I was to report to the InterContinental Hotel in half an hour.  After figuring out exactly where that was, I skipped lunch and made a beeline for it.

There were 3 possible sessions:  a demo, a one-on-one, and a 4-and-a-half hour “Holographic Academy”, where you’ll actually learn to code for the thing.  Naturally, the 3rd option was my 1st choice.

After I got to the hotel, there was actually a little bit of waiting, so I ordered a burger at the bar.  Only a minute later I was approached to take part in a user experience study on HoloLens prior to the Academy.  My desire to get the thing on my head greatly outweighed my desire for the burger, so I left before I even had a bite. 

I had to sign an NDA on the experience, so I can’t speak too much on it, except to say that it was mainly about how a first time user would react to working with the device.  Nothing too exciting; didn’t even get to see a hologram yet.

So it’s on now to the Holographic Academy (after a few fast bites of the burger the barman graciously saved for me).  We had to check all cell phones and devices before going in, so please forgive the lack of pictures or proper code samples.  The code samples I provide will be from memory and what’s jotted by hand in my notebook, so I can neither confirm nor deny if they’re correct at this point.  And the only picture I can give you is of my badge:

[Photo: my Holographic Academy badge]

 The 65 sticker on there isn’t my attendee number but my measured PD (Pupillary Distance), 6.5 centimeters.  I don’t know if that’s good or not…

They marched us in 2-by-2 into a large computer lab lit like a hip night club.  There were tables, each with 2 large desktop workstations, and behind us couches beside coffee tables, which we were told we would use to try out the HoloLens’s interaction with physical objects.  My partner, a Kinect MVP I met in line, and I met our guide for the session.  It was 1 guide per 2 attendees, with one speaker leading from the center.

With cheers from the attendees, they start the session.  The intro is quick, with emphasis on HoloLens being the first device of its kind and the ease with which one will be able to develop and release apps for it, since apps will be on the Universal Windows Platform (UWP) with an existing store.  The Windows Holographic team talks about how every major advancement in computing has been a change to Input and Output, a good way to look at it.  HoloLens’s Inputs:

  • Gaze, Gesture, Voice
  • Spatial Mapping
  • Holographic Camera

Its Outputs:

  • Scalable Augmented Reality
  • Light and Color locked to the Physical World
  • Spatial Sound

We start our demos with a “Holo World” app.  I’m not usually one for puns or wordplay, but given how excited I was to actually hold the device, I’ll allow it.  By plugging our HoloLenses into our lab machines via a micro-USB connection in the back right, we can open a browser and navigate to http://127.0.0.1:10080/AppXManager.htm to administer the device.  We find there is one application already loaded, and we start it from the website.

Putting on the device is harder than I expected.  But maybe that’s just me.  You start by tilting the inner headband portion that moves separately from the device, and then turning a wheel in the back to loosen it.  Your head almost feels like it’s climbing into the device as you slip the headband around the back of your head and hairline.  You turn the wheel to tighten the headband and then adjust the lens down and forward.  It doesn’t have to, and shouldn’t, rest on your nose.

The first thing we see is a blue windows logo in the middle of a blue rectangle.  We’re told to perform the most common clicking gesture, which is holding your hand out, pointing your index finger up and then pinching it down to your thumb.  It’s basically the “I’m crushing your heads” motion.  After performing this, the app starts and we see a three dimensional jeep floating in front of us.  We move our heads and find that as the virtual jeep approaches the physical coffee table in front of us, the surface of the table is permeated with small virtual triangles, indicating that the jeep is near a surface.  We click again and the jeep falls to the physical table.  Now that it is placed we can walk around it and observe our augmented reality.

First impressions:  I’ve seen 3D before, but I’m surprised how quickly my eyes and brain accept a virtual object in a physical world.  It’s very impressive.  There is one big limitation I see with the device right now and that is the clipping boundaries.  When you’re wearing the device, there’s only a relatively small rectangle of your field of vision that can see the virtual objects.  If you’re not looking in that rectangle, you don’t see the objects.  It’s hard to say the size of this rectangle, but I would say you can think of sitting on your couch watching a decent-sized TV.  The TV screen is roughly the amount of your vision within this clipping boundary.  So as I’m walking around looking at the virtual jeep, I notice it is sometimes clipped.  I’m sure the technology will adapt to expand this soon.

We find we can place pins on the table for the jeep to drive to.  I put a pin on the neighboring couch and watch the jeep jump from the coffee table to the couch.  I’m giddy.

After playing with this demo for a bit, it’s time to make an app of our own.  We take the device off and plug it back in to the USB.  From the webpage, we stop the app.

The app we’ll be building is called “Project Origami”, and we’ll be building it in Unity.  Developing graphics in HoloLens is as simple as using DirectX with some Holographic APIs.  So this means we have the usual options for developing graphics applications against it.  You could imagine making a graphics layer in C++/DirectX and then the majority of your application’s code in C#.  I ask how migrating XAML apps to HoloLens will work, as it seems from the keynote that 2D Windows Universal apps will just run 2D in the 3D HoloLens world.  I’m told this won’t be covered today, but should be a fairly seamless migration.  We’ll be using Unity today.  Unity is there, and they give a brief overview for anyone who hasn’t used it.  It’s very nice that the device is already getting support from a platform as respected as Unity before the public has barely even seen it.

We work in a Unity project, “Project Origami”, which is already started for us.  Over the course of the session, we don’t really do anything out of the ordinary in Unity.  We have meshes, game objects, and write scripts to control the behavior of the objects and accept user input.  The only big differences are Unity connecting a camera to the Holographic camera, our scripts containing a reference to the HoloToolkit namespace, and a Unity mesh that is dynamically built from the Spatial Data returned from the HoloLens.

We drag some pre-built meshes into a new Unity scene, set up our cameras, and preview the scene in Unity.  Sending the project to the HoloLens is a 3-step process.

1) Build the project from Unity

2) Load the solution in Visual Studio and configure the project properties to run on a remote device

3) Start the project from Visual Studio with the HoloLens connected

We do these and see the mesh in front of us in our HoloLens.  It is two origami balls floating above a few other origami objects on a white canvas with a drain in the middle.  We disconnect our device from the USB, walk around, and view the scene from all angles.

The next step is to give ourselves a cursor with which to interact with the world.  We create a small red torus and create a C# script for its behavior.  We add the following using directive:

using HoloToolkit;

And in its update method we put in the following code:

var head = StereoCamera.Instance.Head;

head in this case is of type UnityEngine.Transform.  This gives us the location and direction of our gaze from HoloLens.  We can then do a ray trace to find what object our gaze is on with the following code:

RaycastHit hitInfo;
if (Physics.Raycast(head.position, head.forward, out hitInfo))
{
    FocusedObject = hitInfo.collider.gameObject;
}

We put in some more code to position the cursor torus based on the normals of the mesh it’s hitting, but I won’t include that here, as there is nothing HoloLens specific about it.

Our next step is to add some select code.  We add a script called SphereCommands and attach it to our origami spheres.  We put in an OnSelect() method and invoke it when we detect the user input of a click from the click gesture.  If there is collision between our cursor and the sphere, we release the sphere to gravity.  We try it out and experience selecting the hovering spheres and watching them hit our surface, roll off it, and then fall through the physical floor.
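
Roughly, that script looked like the following (a reconstruction in the same from-memory spirit as the other snippets in this post, so details may be off):

using UnityEngine;

public class SphereCommands : MonoBehaviour
{
    // Invoked when the tap gesture lands on this object (via a message from
    // the gaze/gesture handler). Handing the sphere to Unity's physics engine
    // lets gravity take over.
    void OnSelect()
    {
        var body = GetComponent<Rigidbody>();
        if (body != null)
        {
            body.useGravity = true;
        }
    }
}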

For our next demo, we use a mesh in Unity that represents the spatial data brought in from HoloLens.  We set up collisions with our objects.  We now demo and watch our origami balls fall to our virtual canvas, and then fall and roll on the virtual floor.  I play with it trying to get the balls to collide with the couch and other objects.

We perform more demos using the spatial data.  First we simply set our Unity visualization of the spatial mapping data to the triangular mesh so we can see how HoloLens has interpreted the physical objects it sees around it.  Of course, it’s not perfect.  But still, I’m mesmerized by it.  In our next demo we practice moving our scene around the room using the spatial data.  For all this, we use the following:

SpatialMapping.Physics.RaycastMask

We do some sound and voice recognition.  For sound, we do some ambient and impact sound and even demonstrate how we can dim sound based on distance from the scene.  Here is a snippet of the impact sound code:

SpatialSound.Play("Impact.wav", this.gameObject, vol: 0.3f);

We add voice commands to drop our objects and reset our scene like so:

KeywordRecognizer.Instance.AddKeyword("Reset world", (sender, e) =>
{
    Resetting = true;
}, null);

We wrap up by adding a pre-built scene to demonstrate HoloLens creating a scene where you look “through” physical objects.  As in, where it adds reality that appears to be behind physical objects.  We watch our origami balls fall through the drain hole to a whole scene beneath the computer lab floor, complete with a green origami landscape and red flying origami birds.

All-in-all, it was amazing stuff.  We wrapped up by getting to meet the team that built it all.

And now I’m exhausted in my hotel room and ready to turn in for another day.  Tomorrow’s activities for me are mostly data-related: sessions on Azure SQL Database, HDInsight, AzureML, PowerBI and such.  Exciting, but probably not as dazzling as today’s HoloLens activities.

Simple Augmented Browsing for Website Development and Troubleshooting

Oftentimes, developers face the challenge of quickly making a few trivial changes to an existing website just to see how a change to an image or a CSS style would look. We can make these changes in a development environment, no problem there. But what if you have to do it to a live website, and the changes cannot impact any other user except yourself?

Augmented browsing techniques can come to our rescue. You might have used Greasemonkey, a popular add-on that lets you change the look and feel of any website. In short, it installs scripts that read the DOM of the loaded HTML and alter its HTML/CSS, etc. But creating and running the scripts might be overkill or cumbersome to work with, especially if you need to test with many different browsers.

Let’s take an alternative approach. How about intercepting the incoming resource file requested by the webpage and loading a different resource file that is stored on your local drive? Aha!

For this I use my favorite tool, Fiddler. It is a debugging proxy that sits between your browser and the server and intercepts calls between them. The tool has many features that make a developer’s life easier, and we are going to use the feature “AutoResponder”.

[Screenshot: Fiddler’s AutoResponder tab]

Here are the steps to follow to intercept an image file and point to your own image.

a. Download, install & run Fiddler

b. Select the AutoResponder tab, check ‘Enable automatic responses’, and check ‘Unmatched requests pass-through’. This says that if no rule matches the incoming request, Fiddler will not intercept it and the file served from the web server will be used.

c. Get the url of the image on the page you want to change. You can probably find it by viewing the page’s source code.

d. Have your replacement image in your local drive ready.

e. Click the Add Rule button (or you can import the rule, if you previously exported it).

f. At the bottom of the window, type the relative URL of the source image into the first dropdown.

[Screenshot: the AutoResponder rule editor]

g. For the second one, type in the local file path of the image file to be used in place of the original one.

h. Save

i. Refresh the webpage and voila! New image in place of the original!

[Screenshot: the page showing the replacement image]

j. You may turn the interception on/off by checking/unchecking the checkbox in front of each of the rules you specify.

[Screenshot: enabling and disabling rules]

To alter a .css or .js file, first download the file from the web server and store it on your local drive, add the interception rule, make modifications to that local file, and refresh the page to see the change.

Happy coding!

Adding All Services to an Existing Office 365 User License

When working with our clients, we often find that they have enabled only some of the services within an Office 365 license.  Some companies, for example, may enable E3 licenses for a subset of users, but they don’t enable Lync Online.  While it’s very easy to add a service from within the Office 365 Admin Center, this method is not very efficient when a company has to modify several hundred or thousands of accounts; instead, they want to leverage Windows PowerShell.

By combining the use of the New-MsolLicenseOptions and Set-MsolUserLicense cmdlets, it’s possible to remove and add services.  In the following example, the account has been assigned all E3 services except for Office 365 ProPlus (OFFICESUBSCRIPTION) and Lync Online ‎(Plan 2) (MCOSTANDARD):

[Screenshot: license details showing OFFICESUBSCRIPTION and MCOSTANDARD disabled]

The company wants to add the Office 365 ProPlus service, but keep the Lync Online service disabled.  Running the following cmdlet will set the disabled service to only “MCOSTANDARD”:

$LicenseOptions = New-MsolLicenseOptions -AccountSkuId "company:ENTERPRISEPACK" -DisabledPlans MCOSTANDARD

Running this next cmdlet will change the license settings:

Get-MsolUser -UserPrincipalName john.doe@company.com | Set-MsolUserLicense -LicenseOptions $LicenseOptions

Since the “OFFICESUBSCRIPTION” service was not explicitly excluded in the “DisabledPlans” parameter, by default, it will now be enabled:

[Screenshot: OFFICESUBSCRIPTION now showing PendingInput]

Note that the “ProvisioningStatus” for OFFICESUBSCRIPTION changed from “Disabled” to “PendingInput”.  When viewing the license settings in the Admin Center, the service will now be enabled under the E3 license details:

[Screenshot: the service enabled under the E3 license details in the Admin Center]

Now, again consider the scenario where a company has assigned E3 licenses, but left the Office 365 ProPlus and Lync Online (Plan 2) services disabled for all E3 licensed users.  The company now wants to enable all services, and not exclude any services.  In the past, Microsoft support has always advised that the only way to accomplish this is to remove the license, then reassign it without any “LicenseOptions”, effectively enabling all services.  While this method is perfectly safe, some companies are a bit apprehensive about making this change to a large number of accounts at once, for fear of disconnecting the users’ mailboxes and causing a service outage.

Instead of removing and re-adding the license, it’s possible to accomplish the same task by setting the “DisabledPlans” parameter to “$Null” within the “New-MsolLicenseOptions” cmdlet.  Example:

$LicenseOptions = New-MsolLicenseOptions -AccountSkuId "company:ENTERPRISEPACK" -DisabledPlans $Null
Get-MsolUser -UserPrincipalName john.doe@company.com | Set-MsolUserLicense -LicenseOptions $LicenseOptions

Note that both the OFFICESUBSCRIPTION and the MCOSTANDARD “ProvisioningStatus” have changed to “PendingInput”, and the services will show as enabled under the E3 license details in the Admin Center:

[Screenshot: both services showing PendingInput]

[Screenshot: the services enabled in the Admin Center]

I hope you find this tip useful when managing your Office 365 licenses with Windows PowerShell.

Barry Thompson
Principal Consultant

JavaScript & CSS – Lessons Learned from the Field

In the past year, I’ve been able to work primarily on SharePoint intranet projects – both from the perspective of re-branding an existing site, as well as creating new, branded sites from scratch. These efforts were made much easier through the power of JavaScript and CSS, and they continue to be essential tools for any modern web development project. Here are some of the lessons I learned (sometimes the hard way) while working on projects in the past year:

Use the Right Tool for the Job

My three primary tools were Visual Studio, SharePoint Designer, and Internet Explorer’s F12 Developer Tools. Each has some unique advantages over the others, especially as code editors for viewing CSS and HTML. But I found that for the most part the Internet Explorer Developer Tools were the most indispensable of the three, mainly for their ability to inspect and modify CSS and HTML on the fly. Here, for instance, we can see and modify all the properties in effect on the highlighted section of text:

[Screenshot: inspecting CSS properties in the IE F12 Developer Tools]

JavaScript & CSS Can Do Anything

While ‘anything’ might be an exaggeration, I did learn that more often than not, there was a solution for even the most complex problems when JavaScript & CSS were used effectively. Both have a large framework of methods and features that seem to meet any need, including dynamic run-time HTML changes. For instance,

· the function setTimeout() can delay the execution of your JavaScript (which can be useful if you’re waiting for something else to load), and

· the jQuery function addClass() can add a class to an element programmatically at runtime – this is useful if the element you’re referencing doesn’t get generated until runtime.

Internet Explorer 8 is a Pain

Many of my clients have some small subset of users who still need to use Internet Explorer 8, and from a JavaScript/CSS perspective, this continues to be a challenge for my intranet projects. Fortunately, there always seems to be an IE8-specific fix that can be applied, and to ease the pain of integrating these one-off fixes, we can use features like conditional CSSRegistration:

https://tommdaly.files.wordpress.com/2012/05/2012-05-01_2256.png

or in-line CSS tags that target certain browsers:

[Screenshot: inline CSS targeting specific IE versions]

The JavaScript & CSS Community is awesome

There are so many bloggers and developers who are actively sharing their expertise, tips, and tricks regarding JavaScript & CSS that, with a quick internet search, you always seem to be able to find the answer you need. And the people behind the frameworks are continuing to build on it, producing libraries like jQuery and others.

InfoPath is Still Alive

In January 2014, Microsoft announced the end of any future updates to InfoPath, with an implication that it might not be included in the next version of SharePoint. A year later, Microsoft has officially confirmed that InfoPath Forms Services will continue to be one of the services included in SharePoint Server 2016, and its inclusion in Office 365 will remain ‘until further notice’. (The InfoPath 2013 desktop application still remains the last version to be released.) They also announced the cancellation of FoSL (Forms on SharePoint Lists), the InfoPath alternative they were developing, which they had announced at SPC2014. This is very relevant news to the many organizations wondering how to develop forms in SharePoint if there were no more InfoPath.

Recommendations

Based on this latest announcement, we are continuing to advise our clients to consider InfoPath for any forms project for which it is a good fit. Short term projects or agile processes that need rapid forms development make good candidates. With its inclusion in the next version of SharePoint, and Microsoft’s standard 10-year support cycles, InfoPath still has quite a bit of life left in it.

Why use InfoPath?

Even though it was facing extinction last year, it’s important to realize that InfoPath still has a comprehensive and broad set of powerful features that give it an advantage over many of the alternatives. Here are just a few of the features that are sometimes overlooked:

  • Promoted columns
    • Promoted columns represent fields inside the form that have been published into columns in the SharePoint forms library. The classic example of the value of the promoted column is the Expense Report. A manager can view a forms library that lists each report, with a column representing the total expense amount that needs approval, as well as a sum of all totals. Without the promoted column, the manager would have to open each form individually.
  • XML backend
    • InfoPath uses an XML schema behind the scenes to power its forms. For the normal power user, this fact is irrelevant, and should be considered a black box that need not be opened. But for the SharePoint developer who may need to create code to programmatically examine the contents of InfoPath forms, this is a useful fact. The CreateNavigator method, for instance, can be used to grab an instance of the XMLForm object for the current form document as a data source:
      [Screenshot: CreateNavigator code sample]
  • Workflow integration
    • Part of InfoPath’s value in the creation of no-code solutions in SharePoint lies in its natural integration with SharePoint Workflow. Both InfoPath and SharePoint workflow natively interact with SharePoint columns, and can use them to coordinate with each other regarding the status of the process, relevant data fields, etc. The included Workflow Status column provides a convenient in-line way to see the progress of the associated workflow right from within the form library.
  • Code-behind
    • There are many times when the standard InfoPath features aren’t quite enough. Sometimes we need to apply code behind our forms to perform certain functions programmatically. The Developer tab of InfoPath Designer gives us the ability to attach C# or VB code in Visual Studio to our form for just such a purpose; a simple example is code that runs when the form loads, adding text to a field (a sketch of that load-event code follows this list). We should always, of course, be mindful of the implications of code-behind for our forms deployment process.
  • Outlook integration
    • Many times, you can interact with a forms process right from Outlook. It can present InfoPath forms embedded into an email message, with the ability to open, fill out, or submit. These forms could be submitted to you via an automated workflow, or could be opened on demand via the New button:
    • [Screenshot: opening an InfoPath form from the New button in Outlook]
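
As a sketch of what that load-event code-behind can look like (the field name and the value written are hypothetical, and this is only the hand-written half of the designer-generated FormCode class):

using System;
using System.Xml.XPath;
using Microsoft.Office.InfoPath;

public partial class FormCode
{
    public void InternalStartup()
    {
        // Wire the handler up to the form's Loading event.
        EventManager.FormEvents.Loading += FormEvents_Loading;
    }

    public void FormEvents_Loading(object sender, LoadingEventArgs e)
    {
        // Write a value into a field in the main data source when the form loads.
        XPathNavigator field = MainDataSource.CreateNavigator()
            .SelectSingleNode("/my:myFields/my:Status", NamespaceManager);   // hypothetical field

        field.SetValue("Loaded " + DateTime.Now.ToShortDateString());
    }
}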

It will be interesting to see if organizations will continue to use these powerful features included in InfoPath, given the ever-uncertain future of the product.

SharePoint 2013 Search Results Not Returned – Alternative Access Mappings (AAM)

Worked through a search issue last week.  Hope this post helps to give some guidance.

We had a Default Zone URL called http://foo

It was extended to FBA on the Internet Zone with URL called http://bar

We configured a content source that crawled the Internet Zone, i.e. we crawled http://bar.

Here are the results:

http://foo (Default Zone Url)

  • The search results web part worked correctly when viewed through http://foo
  • The configured Result Query also was honored to help filter results.
  • The search results links resolved as http://foo.

http://bar (Internet Zone Url)

  • The search results web part returned all results when viewed through http://bar
  • The search results links resolved as http://bar.
  • The configured Result Query was NOT honored to help filter results.

We focused first on permissions with no resolution.

Then we started looking at the AAMs role in configuration.

After some initial positive results, we discovered this article explaining the situation: http://blogs.msdn.com/b/sharepoint_strategery/archive/2014/07/08/problems-when-crawling-the-non-default-zone-explained.aspx

Summary: Always crawl the Default Zone’s URL!  DO NOT attempt to crawl any other alternative access mapping URLs.

Variations Not Working After SharePoint 2010 to SharePoint 2013 Upgrade

A customer had a SharePoint 2010 site collection that we upgraded to SharePoint 2013.

The variation pages propagation jobs were set to run every three minutes.

The publish of an existing page in the variation root caused a “Started…Finished” propagation log entry with no information about the child variations:

The publishing of a new page in the variation root showed the “Started…Finished” message, along with the information about the child variation pages:

It turns out that there is a very important hidden property called NotificationMode on the Variation Label page that seems to be set to null during upgrade.

This NotificationMode property needs to:

1.  Have a value for Variations to propagate;

2.  Be set to true on the item in the list that is the root label;

3.  Be set to false on child variations in the list.

Here is the KB article that contains a powershell script to run to fix NotificationMode:  http://support.microsoft.com/kb/2925599
