My Day at Holographic Academy


[This entry was written by Jeff Mlakar, a member of the Business Intelligence Team at Bennett Adelson.]

Today was day 1 at the Microsoft Build Conference.  There were many exciting things that came out of the Keynote, like an Android subsystem on Windows, Objective-C apps, and, I'm happy to report, lots of pieces for a data geek like me such as Azure SQL Data Warehouse and Data Lake.  But by far the most exciting was again Windows Holographic and HoloLens.

If you’re not familiar, take a look:

And I’m also proud to say Cleveland represented.  Both Case Western Reserve University and the location of my last consulting job, the Cleveland Clinic, were front and center for the main demo:

I was jonesing to get my hands on the device and had no idea if I'd get a chance to, let alone code for it.  So when they announced at the end of the Keynote that they were now taking registration for HoloLens events at the conference, I couldn't register fast enough.  Literally, the site was slammed with requests, and by the time I finished I was certain I didn't get in.  You can imagine my excitement when, sitting in my next session on the Microsoft Band, I got an email that my registration was approved and that I was to report to the Intercontinental Hotel in half an hour.  After figuring out exactly where that was, I skipped lunch and made a beeline for it.

There were 3 possible sessions:  a demo, a one-on-one, and a 4-and-a-half hour "Holographic Academy" where you'd actually learn to code for the thing.  Naturally, the 3rd option was my 1st choice.

After I got to the hotel, there was actually a little bit of waiting, so I ordered a burger at the bar.  Only a minute later I was approached to take part in a user experience study on HoloLens prior to the Academy.  My desire to get the thing on my head greatly outweighed my desire for the burger, so I left before I even had a bite. 

I had to sign an NDA on the experience, so I can’t speak too much on it, except to say that it was mainly about how a first time user would react to working with the device.  Nothing too exciting; didn’t even get to see a hologram yet.

So it’s on now to the Holographic Academy (after a few fast bites of the burger the barman graciously saved for me).  We had to check all cell phones and devices before going in, so please forgive the lack of pictures or proper code samples.  The code samples I provide will be from memory and what’s jotted by hand in my notebook, so I can neither confirm nor deny if they’re correct at this point.  And the only picture I can give you is of my badge:

[Photo: my Holographic Academy badge]

 The 65 sticker on there isn’t my attendee number but my measured PD (Pupillary Distance), 6.5 centimeters.  I don’t know if that’s good or not…

They marched us in, 2-by-2, to a large computer lab lit like a hip night club.  There were tables, each with 2 large desktop workstations, and couches beside coffee tables behind us, which we were told we would use to try out the HoloLens' interaction with physical objects.  My partner, a Kinect MVP I met in line, and I met our guide for the session.  It was 1 guide per 2 attendees, with one speaker leading from the center.

With cheers from the attendees, they start the session.  The intro is quick, with emphasis on HoloLens being the first device of its kind and the ease with which one will be able to develop and release apps for it, since apps will be on the Universal Windows Platform (UWP) with an existing store.  The Windows Holographic team talks about how every major advancement in computing has been a change to Input and Output, which is a good way to look at it.  HoloLens' Inputs:

• Gaze, Gesture, Voice
• Spatial Mapping
• Holographic Camera

Its Outputs:

• Scalable Augmented Reality
• Light and Color locked to the Physical World
• Spatial Sound

We start our demos with a "Holo World" app.  I'm not usually one for puns or wordplay, but given how excited I was to actually hold the device, I'll allow it.  By plugging our HoloLens units into our lab machines via a micro-USB connection on the back right, we can open a browser and navigate to http://127.0.0.1:10080/AppXManager.htm to administer the device.  We find there is one application already loaded, and we start it from the website.

Putting on the device is harder than I expected.  But maybe that's just me.  You start by tilting the inner headband portion, which moves separately from the device, and then turning a wheel in the back to loosen it.  Your head almost feels like it's climbing into the device as you slip the headband around the back of your head and hairline.  You turn the wheel to tighten the headband and then adjust the lens down and forward.  It doesn't have to, and shouldn't, rest on your nose.

The first thing we see is a blue windows logo in the middle of a blue rectangle.  We’re told to perform the most common clicking gesture, which is holding your hand out, pointing your index finger up and then pinching it down to your thumb.  It’s basically the “I’m crushing your heads” motion.  After performing this, the app starts and we see a three dimensional jeep floating in front of us.  We move our heads and find that as the virtual jeep approaches the physical coffee table in front of us, the surface of the table is permeated with small virtual triangles, indicating that the jeep is near a surface.  We click again and the jeep falls to the physical table.  Now that it is placed we can walk around it and observe our augmented reality.

First impressions:  I've seen 3D before, but I'm surprised how quickly my eyes and brain accept a virtual object in a physical world.  It's very impressive.  There is one big limitation I see with the device right now, and that is the clipping boundaries.  When you're wearing the device, there's only a relatively small rectangle of your field of vision that can see the virtual objects.  If you're not looking in that rectangle, you don't see the objects.  It's hard to say exactly how big this rectangle is, but think of sitting on your couch watching a decent-sized TV: the TV screen is roughly the portion of your vision within this clipping boundary.  So as I'm walking around looking at the virtual jeep, I notice it is sometimes clipped.  I'm sure the technology will adapt to expand this soon.

We find we can place pins on the table for the jeep to drive to.  I put a pin on the neighboring couch and watch the jeep jump from the coffee table to the couch.  I’m giddy.

After playing with this demo for a bit, it's time to make an app of our own.  We take the device off and plug it back into the USB cable.  From the webpage, we stop the app.

The app we'll be building is called "Project Origami", and we'll be building it in Unity.  Developing graphics for HoloLens is as simple as using DirectX with some Holographic APIs, which means we have the usual options for developing graphics applications against it.  You could imagine making a graphics layer in C++/DirectX and then writing the majority of your application's code in C#.  I ask how migrating XAML apps to HoloLens will work, as it seems from the keynote that 2D Windows Universal apps will just run as 2D in the 3D HoloLens world.  I'm told this won't be covered today, but should be a fairly seamless migration.  We'll be using Unity today; it's already set up on the lab machines, and they give a brief overview for anyone who hasn't used it.  It's very nice that, before the device is even available to the public, it's already getting support from a platform as respected as Unity.

We work in a Unity project, “Project Origami”, which is already started for us.  Over the course of the session, we don’t really do anything out of the ordinary in Unity.  We have meshes, game objects, and write scripts to control the behavior of the objects and accept user input.  The only big differences are Unity connecting a camera to the Holographic camera, our scripts containing a reference to the HoloToolkit namespace, and a Unity mesh that is dynamically built from the Spatial Data returned from the HoloLens.

We drag some pre-built meshes into a new Unity scene, set up our cameras, and preview the scene in Unity.  Sending the scene to the HoloLens is a 3-step process:

1) Build the project from Unity
2) Load it in Visual Studio and configure the project properties to run on a remote device
3) Start the project from Visual Studio with the HoloLens connected

We do these and see the mesh in front of us in our HoloLens: two origami balls floating above a few other origami objects on a white canvas with a drain in the middle.  We disconnect our device from the USB cable, walk around, and view the scene from all angles.

The next step is to give ourselves a cursor with which to interact with the world.  We create a small red torus and write a C# script for its behavior.  We add the following using directive:

using HoloToolkit;

And in its Update() method we put in the following code:

var head = StereoCamera.Instance.Head;

head in this case is of type UnityEngine.Transform.  This gives us the location and direction of our gaze from the HoloLens.  We can then do a raycast to find what object our gaze is on with the following code:

RaycastHit hitInfo;
if (Physics.Raycast(head.position, head.forward, out hitInfo))
{
    FocusedObject = hitInfo.collider.gameObject;
}

We put in some more code to position the cursor torus based on the normals of the mesh it’s hitting, but I won’t include that here, as there is nothing HoloLens specific about it.
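That said, if you're curious, the positioning is plain Unity work.  Here's a rough sketch of what the whole cursor script might have looked like, reconstructed from memory and standard Unity APIs, so the class name and exact member names are just my best recollection:

using UnityEngine;
using HoloToolkit;

// Sketch of the cursor behavior: cast a ray along the user's gaze and, on a hit,
// move the torus to the hit point and tilt it to match the surface normal.
public class CursorBehavior : MonoBehaviour
{
    public static GameObject FocusedObject { get; private set; }

    void Update()
    {
        var head = StereoCamera.Instance.Head;   // gaze origin and direction

        RaycastHit hitInfo;
        if (Physics.Raycast(head.position, head.forward, out hitInfo))
        {
            FocusedObject = hitInfo.collider.gameObject;

            // Nothing HoloLens specific here: snap the torus to the hit point
            // and orient it along the surface normal.
            transform.position = hitInfo.point;
            transform.rotation = Quaternion.FromToRotation(Vector3.up, hitInfo.normal);
        }
        else
        {
            FocusedObject = null;
        }
    }
}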

Our next step is to add some selection code.  We add a script called SphereCommands and attach it to our origami spheres.  We put in an OnSelect() method and invoke it when we detect a click from the click gesture.  If there is a collision between our cursor and the sphere, we release the sphere to gravity.  We try it out and experience selecting the hovering spheres, watching them hit our surface, roll off it, and then fall through the physical floor.
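From memory, SphereCommands boiled down to something like this (again a sketch; the gesture plumbing that actually invokes OnSelect() isn't shown):

using UnityEngine;

// Sketch of the script attached to each origami sphere. OnSelect() is invoked
// when the user performs the click gesture while gazing at the sphere.
public class SphereCommands : MonoBehaviour
{
    void OnSelect()
    {
        var rigidbody = GetComponent<Rigidbody>();
        if (rigidbody != null)
        {
            rigidbody.useGravity = true;   // release the sphere to gravity
        }
    }
}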

For our next demo, we use a mesh in Unity that represents the spatial data brought in from HoloLens.  We set up collisions with our objects.  We now demo and watch our origami balls fall to our virtual canvas, and then fall and roll on the virtual floor.  I play with it trying to get the balls to collide with the couch and other objects.

We perform more demos using the spatial data.  First we simply set our Unity visualization of the spatial mapping data to the triangular mesh so we can see how HoloLens has interpreted the physical objects it sees around it.  Of course, it’s not perfect.  But still, I’m mesmerized by it.  In our next demo we practice moving our scene around the room using the spatial data.  For all this, we use the following:

SpatialMapping.Physics.RaycastMask
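As far as I can recall, that's just a layer mask covering the spatial mapping mesh, so it slots into the same kind of raycast we used for the cursor, roughly like so (the max distance here is my own placeholder):

RaycastHit hitInfo;
// Restrict the gaze raycast to the spatial mapping layer so we hit real-world
// surfaces (table, couch, floor) rather than our own holograms.
if (Physics.Raycast(head.position, head.forward, out hitInfo,
                    30.0f, SpatialMapping.Physics.RaycastMask))
{
    // hitInfo.point is a point on a physical surface; use it to place or move the scene.
}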

We do some sound and voice recognition.  For sound, we do some ambient and impact sound and even demonstrate how we can dim sound based on distance from the scene.  Here is a snippet of the impact sound code:

SpatialSound.Play("Impact.wav", this.gameObject, vol: 0.3f);

We add voice commands to drop our objects and reset our scene like so:

KeywordRecognizer.Instance.AddKeyword("Reset world", (sender, e) =>
{
    Resetting = true;
}, null);

We wrap up by adding a pre-built scene to demonstrate HoloLens creating a scene where you look “through” physical objects.  As in, where it adds reality that appears to be behind physical objects.  We watch our origami balls fall through the drain hole to a whole scene beneath the computer lab floor, complete with a green origami landscape and red flying origami birds.

All in all, it was amazing stuff.  We wrapped up by getting to meet the team that built it all.

And now I’m exhausted in my hotel room and ready to turn in for another day.  Tomorrow’s activities for me are mostly data-related: sessions on Azure SQL Database, HDInsight, AzureML, PowerBI and such.  Exciting, but probably not as dazzling as today’s HoloLens activities.

9 thoughts on "My Day at Holographic Academy"

1. While the work we did didn't involve seeing any Universal Windows 2D applications in virtual windows, the resolution was such that you should be able to read and write text, such as VS code. But bear in mind you would be limited to the clipping rectangle based on your gaze. Also, the holograms have, by design, a front clipping plane that clips the object meshes if you get too close. So, if you're like me and code with your face inches from the screen, you might just want to stick with the old-fashioned monitor.

1. The near clipping was a setting on the camera and can be customized. I changed it to be much closer in the Origami project, but there was still a point where my eyes lost focus, maybe at half an arm's length.
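In Unity terms that's just the camera's near clip plane, so the change is roughly the line below (the 0.3 is only an example value, not what the project used):

Camera.main.nearClipPlane = 0.3f;   // meters; example value only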
