Bennett Adelson Technical Blog

Build Day 3 – In With the New: Microsoft Edge, More HoloLens, Azure Logic Apps, and Direct2D


[This post is by Jeff Mlakar, a member of the Business Intelligence Team at Bennett Adelson.  Follow us @BIatBA and @JeffMlakar]

This was my third and final day at the Microsoft Build Conference. My first was almost all HoloLens. My second was all data. I tried to find a theme in the sessions I saw today; the best I could come up with was that they were all dealing with something new. Of course, that could be said of any collection of sessions at this conference, but anyway… Session 1:

The Microsoft Edge Rendering Engine That Makes the Web Just Work

Presented by David Catuhe and David Rousset.

I gotta say, on Wednesday it was pretty exciting to be in the room for a name announcement. And yes, we have our new browser name: Microsoft Edge, with a logo that looks not far off from IE’s. Which makes me wonder: IE has a strong presence within the enterprise, right? A lot of companies have old versions of IE as part of their approved software, so it’s probably not going away from the enterprise any time soon. A lot of employees keep an old IE version for business applications and install a newer browser for going out to the web. So what happens if those employees start to choose Edge as their other browser? Will they end up with two browsers with near-identical logos in their work taskbars? Gotta wonder. Anyway, on to the session.

The Davids talked about the history of IE, with its Trident engine and many document modes. They talked about how Edge’s new engine, a fork of Trident called EdgeHTML, has one document mode and currently benchmarks better than Chrome or Firefox. A lot of the session was on WebGL performance and gaming in the browser, complete with built-in gamepad support. This led to an impressive spooky-scene game demo in Edge, where the biggest cheer was for a gravestone reading “RIP IE 6”.

One interesting point was that IE on Windows Phone currently has trouble getting mobile layouts, because some sites check for Android or iOS before serving their mobile view. Edge works around this by essentially impersonating Android and iOS in its user-agent string:

[Screenshot: Edge’s user-agent string]

The Davids stressed the use of CSS filters, and of feature detection, which hopefully developers are doing already. With @supports, Edge is catching up to where the other browsers have been for a couple of years: doing feature detection right in CSS.
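For anyone who hasn’t used it, here’s a minimal sketch of my own (not from the session) of @supports-based feature detection, applying a CSS filter only where the engine supports it:

/* Fallback look for every browser */
.banner {
  background-color: #ccc;
  opacity: 0.85;
}

/* Only engines that support CSS filters take this branch */
@supports (filter: blur(2px)) {
  .banner {
    filter: blur(2px);
    opacity: 1;
  }
}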

All in all, Edge looks very promising. I’m looking forward to learning more specifics about its features, seeing how it will work for business applications, and trying out its dev tools.

Case Studies of HoloLens App Development

This was a panel discussion where we got to text questions to people who have worked on HoloLens projects already. It was moderated by Rukari Austin, a Microsoft Community Manager for HoloLens. The panel included Dr. Jeff Norris of NASA, Aviad Almagor of Trimble Navigation, Professor Mark Griswold of Case Western Reserve University in Cleveland, and Microsoft Studios Manager Megan Saunders. You’ve probably already seen the videos of the projects these panelists have been working on, so I won’t elaborate on their work; if you haven’t seen the videos, I highly recommend them. I’ll mostly outline some of the points that resonated with me as I listened. These are somewhat paraphrased and could be out of order, but I’ll stick to the spirit of what the speakers were saying. This is not complete either, so I urge you to watch the video of the full session.

[Photo: the HoloLens panel]

(from left: Rukari Austin, Jeff Norris, Aviad Almagor, Mark Griswold, and Megan Saunders)

Jeff described how HoloLens is helping the two main initiatives of NASA: to explore and to share the experiences of that exploration. He noted that when first developing for such a new technology, all his instincts were wrong. Megan elaborated on the issues of defining a new kind of UI and UX. Aviad said that in this new world, a proper user experience is critical. Mark postulated that as we develop for the technology, we’re probably going to start out making it look like a traditional computer, but we’re not going to end there.

Mark’s background is in building MRIs. In addition to his physics, radiology, and engineering background, he had programming experience and had worked with Unity. He talked about how unsure he was of how long it would take to get set up and working with a new technology like HoloLens. Installing SDKs, configuring, troubleshooting, doing the “hard math” of 3D graphics transformations, etc. He decided to time how long it would take him to go from first sitting down, to making a hologram, to viewing it in HoloLens. It was less than 5 minutes. Which is amazing. Jeff described an experience of intuitively wanting to move his mouse off the screen to interact with the hologram. He suggested that to a member of the team, who implemented it in an hour or two, and he now has mouse interaction with the holograms. So ease of development in the Windows Holographic platform is a big talking point.

All speakers talked of the importance of collaborating via holograms. Jeff said we take for granted how important physical co-presence is. Mark described it as the only way he can educate students, and he can see applications for museums and other such institutions coming out of the technology. Mark also reminded us of the importance of the spatial sound feature of the HoloLens for further immersing you in the augmented reality.

Aviad discussed the inherent human difficulty in translating 2D images into imagined 3D. For a device to do that for you is huge for “real architects”, as he put it (as opposed to “system architects” and such). It makes me wonder: since Microsoft is working with someone from Trimble on HoloLens, and Trimble bought SketchUp from Google, will we see SketchUp integrating with HoloLens at some point?

No one could answer the big question of when we’ll be able to get our hands on a HoloLens with which to develop. Or when we might see an emulator or such. It’s interesting to me that as this new technology comes out, our gaze is not on how it works but on how we can develop and expand on it.

After the session, we got a chance for one-on-one questions with the panelists. I was struck by how the platform of choice was always Unity for their applications. So basically, if you want to get ready for HoloLens development, learn Unity (http://unity3d.com/).

Logic Apps

Presented by Ilya Grebnov and Stephen Siciliano.

Logic Apps are part of the relatively new Azure App Service. They enable a developer to create business process workflows in a visual designer, automating business processes as a service in the cloud. In this session, Ilya and Stephen gave us an introduction to the concept, showed us how to lay out workflows in the designer, and showed how to code them.

They talked about all the key app services currently available. I learned a new acronym: iPaaS, where the “i” is for Integration. Ilya and Stephen also taught me about Azure Pack, basically a “mini Azure” that can run on-premises. In this way, Logic Apps can run on premises too.

Over the course of the demo, they defined Logic Apps both in the designer and in code. It’s nice that a Logic App can be completely defined in JSON; this means that anything you can do in the designer, you can also configure through the Logic Apps RESTful API.

There are a handful of components in a Logic App. First are triggers, which might be a bit of a misnomer: even if you use a Twitter trigger, it is still polling on a schedule, say every minute, so in the demo we defined a Recurrence trigger. Next are actions, the steps of the workflow; possible types include Http, ApiApp, or even another Workflow. In the demo, we wired up a Twitter connector and a Dropbox connector. A sketch of what such a definition looks like in JSON follows.
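To give a feel for the format, here is a rough sketch of a workflow definition with a Recurrence trigger and an Http action. This is my own illustration, not the demo’s code; the schema URL is elided, the endpoint is made up, and the exact property names may differ from what was shown:

{
  "definition": {
    "$schema": "https://schema.management.azure.com/…/workflowdefinition.json#",
    "triggers": {
      "everyMinute": {
        "type": "Recurrence",
        "recurrence": { "frequency": "Minute", "interval": 1 }
      }
    },
    "actions": {
      "fetchMessages": {
        "type": "Http",
        "inputs": {
          "method": "GET",
          "uri": "https://example.com/api/messages"
        }
      }
    }
  }
}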

At this point, I can’t help but be struck by the number of sessions I’ve seen that are using Twitter in their demos. Does this mean it’s a treasure-trove of valuable data? Or is it just really good for pedagogical purposes? Either way, I’m glad that in at least one session, I was able to get a tweet saying “Lee-rooooy Jenkins!!” onto the presenter’s screen.

Everything in Logic Apps is metadata driven, defined in Swagger, which is why connectors can wire up pretty seamlessly. At least they did in the demo. More complex logic can be accomplished through conditions and JSON transformations. As for what you have to do to prepare your own API for use in a Logic App: as long as you have a RESTful, JSON-returning API using OAuth or another common auth provider, you should be able to use it.
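For illustration, a Swagger (2.0) description of an API doesn’t have to be much bigger than this; the API and its operation here are hypothetical, invented for the example:

{
  "swagger": "2.0",
  "info": { "title": "Messages API", "version": "1.0" },
  "host": "example.com",
  "basePath": "/api",
  "paths": {
    "/messages": {
      "get": {
        "operationId": "ListMessages",
        "produces": [ "application/json" ],
        "responses": {
          "200": { "description": "The list of messages" }
        }
      }
    }
  }
}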

As for a new possible home for your Logic App: just this week, Azure App Service Environments were released, encapsulating a fully isolated and dedicated environment to run all your apps, be they Web, Mobile, API, or Logic.

It was a great session on a cool technology, with good demos. Though I do have to wonder about Logic Apps’ similarity to Azure Data Factory. There are certainly similarities, as each is a visually designed workflow over data in the cloud. But Data Factory seems to be more about the analytic data pipeline and has better connections to SQL and other data stores, while Logic Apps seem to be all about APIs and smaller-volume business process data flows. It could just come down to the size of the data. I’ll have to play with each more to give a better comparison.

What’s New in Direct2D and DirectWrite for Windows 10

Presented by Anthony Hodsdon.

This was my final session at Build, and it was nice coverage of some fundamental graphics enhancements that can really make a difference in your Windows apps. Imagine you have document text in your app and you provide pinch-zoom to enlarge it. Odds are the text is pixelated as you zoom and then snaps to a much cleaner view when you let go (I just confirmed this with a Word document on my Windows 8.1 slate). This is because in Windows 8.1, by default, text was rasterized on the CPU and the result uploaded to the GPU, so continuously re-rasterizing during a zoom was a lot of work. The app developer therefore probably did what was common practice: render the document to a bitmap and scale that during the gesture, hence the pixelation on zoom-in. The solution: hardware-accelerated, vector-based text rendering in Windows 10. Anthony gave an impressive demonstration of zooming in very close to a document in a Windows 10 app and seeing the text remain clear, with nice, round Béziers.
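To make the contrast concrete, here is my own minimal sketch (not code from the talk) of the vector approach: rather than stretching a cached bitmap during the pinch gesture, you set a scale transform and redraw the DirectWrite text layout, letting Direct2D re-rasterize the glyph outlines at the current zoom. The function and parameter names are hypothetical:

#include <d2d1_1.h>
#include <dwrite.h>

// Hypothetical helper: redraw a text layout under the current zoom.
// Because the glyph outlines (Béziers) are rasterized on the GPU at
// this scale, the text stays sharp instead of pixelating.
void RenderZoomedText(ID2D1DeviceContext* d2dContext,
                      IDWriteTextLayout* textLayout,
                      ID2D1SolidColorBrush* brush,
                      float zoom)
{
    d2dContext->BeginDraw();
    d2dContext->Clear(D2D1::ColorF(D2D1::ColorF::White));

    // Scale about the top-left corner instead of scaling a bitmap.
    d2dContext->SetTransform(
        D2D1::Matrix3x2F::Scale(D2D1::SizeF(zoom, zoom),
                                D2D1::Point2F(0.0f, 0.0f)));

    d2dContext->DrawTextLayout(D2D1::Point2F(0.0f, 0.0f),
                               textLayout, brush);

    d2dContext->EndDraw();  // HRESULT ignored in this sketch
}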

I think it’s little formatting things of this sort that really make for an enjoyable app experience, and that’s what the Direct2D and DirectWrite enhancements in this session bring. Direct2D and DirectWrite are hardware-accelerated graphics APIs for 2D graphics and text display, respectively. The enhancements coming to them in Windows 10 will help displays not just on PCs, but on phones as well.

For example, most phones don’t want to give up the ½ GB of space needed for all desktop fonts. If they don’t have the font your app is using, it could cause serious formatting issues. The solution: asynchronous font downloading and rendering built in not only to DirectX, but directly to XAML as well. You can configure your app to display a default font as you download the appropriate font, as in the following code:

// Sketch from the session: a listener that re-renders the document
// once the missing font has finished downloading.
class MyFontDownloadListener : public IDWriteFontDownloadListener
{
    STDMETHOD_(void, DownloadCompleted)( /* … */ )
    {
        // The real font is now available; draw the text again with it.
        RerenderDocument();
    }
};

// Register the listener on the factory’s download queue and kick off
// the asynchronous download.
spDwriteFactory->GetFontDownloadQueue( /* … */, &spQueue);
spQueue->AddListener( /* … */ );
spQueue->BeginDownload();

Non-text graphics enhancements include things like built-in shader linking for optimization, and a simple, efficient API for loading and rendering images that reduces what used to be chunks of code down to a manageable amount.

Inking capabilities arrive as a new 2D primitive that is hardware accelerated, with low latency and high fidelity. Anthony gave a great demo with his own handwriting, showing how one can manipulate ink thickness and rotation. It makes me bummed that I missed the DirectInk talk yesterday; I’ll have to go back and watch it.

In summary, as Anthony put it, these enhancements will mean your apps on DirectX will be faster, more maintainable with less code, more consistent across devices, and more beautiful.

Final thought:

So that’s it. Heck of a 3 days. Build is the kind of conference that makes you realize how much more you’ve got to learn. I’d say that’s a good thing.
