Bennett Adelson Technical Blog

My Day at Holographic Academy

[This entry was written by Jeff Mlakar, a member of the Business Intelligence Team at Bennett Adelson.]

Today was day 1 at the Microsoft Build Conference.  There were many exciting things that came out of the Keynote, like an Android subsystem on Windows, Objective-C apps, and, I'm happy to report, lots of pieces for the data geek like me, such as Azure SQL Data Warehouse and Data Lake.  But by far the most exciting was again Windows Holographic and HoloLens.

If you’re not familiar, take a look:

And I’m also proud to say Cleveland represented.  Both Case Western Reserve University and the location of my last consulting job, the Cleveland Clinic, were front and center for the main demo:

I was eagerly jonesing to get my hands on the device and had no idea if I'd get a chance to, let alone code for it.  So when they announced at the end of the Keynote that they were now taking registration for HoloLens events at the conference, I couldn't register fast enough.  Literally: the site was slammed with requests, and by the time I finished I was certain I didn't get in.  You can imagine my excitement when, sitting in my next session on the Microsoft Band, I got an email that my registration was approved and that I was to report to the InterContinental Hotel in half an hour.  After figuring out exactly where that was, I skipped lunch and made a beeline for it.

There were 3 possible sessions: a demo, a one-on-one, and a 4-and-a-half-hour "Holographic Academy", where you actually learn to code for the thing.  Naturally, the 3rd option was my 1st choice.

After I got to the hotel, there was actually a little bit of waiting, so I ordered a burger at the bar.  Only a minute later I was approached to take part in a user experience study on HoloLens prior to the Academy.  My desire to get the thing on my head greatly outweighed my desire for the burger, so I left before I even had a bite. 

I had to sign an NDA on the experience, so I can’t speak too much on it, except to say that it was mainly about how a first time user would react to working with the device.  Nothing too exciting; didn’t even get to see a hologram yet.

So it’s on now to the Holographic Academy (after a few fast bites of the burger the barman graciously saved for me).  We had to check all cell phones and devices before going in, so please forgive the lack of pictures or proper code samples.  The code samples I provide will be from memory and what’s jotted by hand in my notebook, so I can neither confirm nor deny if they’re correct at this point.  And the only picture I can give you is of my badge:


The 65 sticker on there isn't my attendee number but my measured PD (Pupillary Distance) of 65 millimeters.  I don't know if that's good or not…

They marched us in 2-by-2 into a large computer lab lit like a hip night club.  There were tables, each with 2 large desktop workstations, and couches beside coffee tables behind us, which we were told we would use to try out the HoloLens' interaction with physical objects.  My partner, a Kinect MVP I met in line, and I met our guide for the session.  It was 1 guide per 2 attendees, with one speaker leading from the center.

With cheers from the attendees, they start the session.  The intro is quick, with emphasis on HoloLens being the first device of its kind and the ease with which one will be able to develop and release apps for it, since apps will be on the Universal Windows Platform (UWP) with an existing store.  The Windows Holographic team talks about how every major advancement in computing has been a change to Input and Output, which is a good way to look at it.  HoloLens' Inputs:

  • Gaze, Gesture, Voice
  • Spatial Mapping
  • Holographic Camera

Its Outputs:

  • Scalable Augmented Reality
  • Light and Color locked to the Physical World
  • Spatial Sound

We start our demos with a "Holo World" app.  I'm not usually one for puns or wordplay, but given how excited I was to actually hold the device, I'll allow it.  By plugging our HoloLens units into our lab machines via a micro-USB connection in the back right, we can open a browser and navigate to a local web portal to administer the device.  We find there is one application already loaded, and we start it from the website.

Putting on the device is harder than I expected.  But maybe that's just me.  You start by tilting the inner headband portion, which moves separately from the device, and then turning a wheel in the back to loosen it.  Your head almost feels like it's climbing into the device as you slip the headband around the back of your head and hairline.  You turn the wheel to tighten the headband and then adjust the lens down and forward.  It doesn't have to, and shouldn't, rest on your nose.

The first thing we see is a blue windows logo in the middle of a blue rectangle.  We’re told to perform the most common clicking gesture, which is holding your hand out, pointing your index finger up and then pinching it down to your thumb.  It’s basically the “I’m crushing your heads” motion.  After performing this, the app starts and we see a three dimensional jeep floating in front of us.  We move our heads and find that as the virtual jeep approaches the physical coffee table in front of us, the surface of the table is permeated with small virtual triangles, indicating that the jeep is near a surface.  We click again and the jeep falls to the physical table.  Now that it is placed we can walk around it and observe our augmented reality.

First impressions: I've seen 3D before, but I'm surprised how quickly my eyes and brain accept a virtual object in a physical world.  It's very impressive.  There is one big limitation I see with the device right now, and that is the clipping boundary.  When you're wearing the device, only a relatively small rectangle of your field of vision can show the virtual objects.  If you're not looking within that rectangle, you don't see the objects.  It's hard to say exactly how big the rectangle is, but imagine sitting on your couch watching a decent-sized TV: the TV screen is roughly the portion of your vision inside the clipping boundary.  So as I'm walking around looking at the virtual jeep, I notice it is sometimes clipped.  I'm sure the technology will expand this boundary soon.

We find we can place pins on the table for the jeep to drive to.  I put a pin on the neighboring couch and watch the jeep jump from the coffee table to the couch.  I’m giddy.

After playing with this demo for a bit, it's time to make an app of our own.  We take the device off and plug it back into the USB cable.  From the webpage, we stop the app.

The app we'll be building is called "Project Origami", and we'll be building it in Unity.  Developing graphics for HoloLens is as simple as using DirectX with some Holographic APIs, which means we have the usual options for developing graphics applications against it.  You could imagine writing a graphics layer in C++/DirectX and the majority of your application's code in C#.  I ask how migrating XAML apps to HoloLens will work, as it seems from the Keynote that 2D Windows Universal apps will simply run as 2D in the 3D HoloLens world.  I'm told this won't be covered today, but should be a fairly seamless migration.  We'll be using Unity today.  The Unity team is there and gives a brief overview for anyone who hasn't used it.  It's very nice that a device the public has barely seen is already getting support from a platform as respected as Unity.

We work in a Unity project, “Project Origami”, which is already started for us.  Over the course of the session, we don’t really do anything out of the ordinary in Unity.  We have meshes, game objects, and write scripts to control the behavior of the objects and accept user input.  The only big differences are Unity connecting a camera to the Holographic camera, our scripts containing a reference to the HoloToolkit namespace, and a Unity mesh that is dynamically built from the Spatial Data returned from the HoloLens.

We drag some pre-built meshes into a new Unity scene, set up our cameras, and preview the scene in Unity.  Deploying the app to HoloLens is a 3-step process.

1) Build the project from Unity

2) Open the build in Visual Studio and configure the project properties to run on a remote device

3) Start the project from Visual Studio with HoloLens connected

We do these steps and see the mesh in front of us in our HoloLens.  It is two origami balls floating above a few other origami objects on a white canvas with a drain in the middle.  We disconnect our device from the USB cable, walk around, and view the scene from all angles.

The next step is to give ourselves a cursor with which to interact with the world.  We create a small red torus and create a C# script for its behavior.  We add the following using directive:

using HoloToolkit;

And in its update method we put in the following code:

var head = StereoCamera.Instance.Head;

head in this case is of type UnityEngine.Transform.  This gives us the location and direction of our gaze from HoloLens.  We can then do a ray trace to find what object our gaze is on with the following code:

RaycastHit hitInfo;
if (Physics.Raycast(head.position, head.forward, out hitInfo))
{
    FocusedObject = hitInfo.collider.gameObject;
}

We put in some more code to position the cursor torus based on the normals of the mesh it’s hitting, but I won’t include that here, as there is nothing HoloLens specific about it.

Our next step is to add some select code.  We add a script called SphereCommands and attach it to our origami spheres.  We put in an OnSelect() Method and invoke it when we detect the user input of a click from the click gesture.  If there is collision between our cursor and the sphere, we release the sphere to gravity.  We try it out and experience selecting the hovering spheres and watching them hit our surface, rolling off it, and then falling through the physical floor.

For our next demo, we use a mesh in Unity that represents the spatial data brought in from HoloLens.  We set up collisions with our objects.  We now demo and watch our origami balls fall to our virtual canvas, and then fall and roll on the virtual floor.  I play with it trying to get the balls to collide with the couch and other objects.

We perform more demos using the spatial data.  First we simply set our Unity visualization of the spatial mapping data to the triangular mesh so we can see how HoloLens has interpreted the physical objects it sees around it.  Of course, it’s not perfect.  But still, I’m mesmerized by it.  In our next demo we practice moving our scene around the room using the spatial data.  For all this, we use the following:


We do some sound and voice recognition.  For sound, we do some ambient and impact sound and even demonstrate how we can dim sound based on distance from the scene.  Here is a snippet of the impact sound code:

SpatialSound.Play("Impact.wav", this.gameObject, vol: 0.3f);

We add voice commands to drop our objects and reset our scene like so:

KeywordRecognizer.Instance.AddKeyword("Reset world", (sender, e) =>
{
    Resetting = true;
}, null);

We wrap up by adding a pre-built scene to demonstrate HoloLens creating a scene where you look “through” physical objects.  As in, where it adds reality that appears to be behind physical objects.  We watch our origami balls fall through the drain hole to a whole scene beneath the computer lab floor, complete with a green origami landscape and red flying origami birds.

All in all, it was amazing stuff.  We wrapped up by getting to meet the team that built it.

And now I’m exhausted in my hotel room and ready to turn in for another day.  Tomorrow’s activities for me are mostly data-related: sessions on Azure SQL Database, HDInsight, AzureML, PowerBI and such.  Exciting, but probably not as dazzling as today’s HoloLens activities.

Simple Augmented Browsing for Website Development and Troubleshooting

Oftentimes developers face the challenge of quickly making a few trivial changes to an existing website just to see how a change to an image or a CSS style would look. We can make these changes in a development environment, no problem there. But what if you have to do it on a live website, and the changes cannot impact any user except yourself?

Augmented browsing techniques can come to our rescue. You might have used GreaseMonkey, a popular add-on that lets you change the look and feel of any website. In short, it installs scripts that read the DOM of the loaded html and alter its html/css etc. But creating and running the scripts might be overkill or cumbersome to work with, especially if you need to test with many different browsers.
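To make the comparison concrete, here is a minimal GreaseMonkey-style userscript sketch. The match URL, image names, and function name are all invented for illustration; the point is only the shape of the approach:

```javascript
// ==UserScript==
// @name   Swap banner image (hypothetical example)
// @match  https://www.example.com/*
// ==/UserScript==

// Rewrite the src of any <img> whose URL ends in "banner.png",
// pointing it at a replacement image. Returns the number rewritten.
function swapBanner(doc) {
  var imgs = doc.querySelectorAll('img[src$="banner.png"]');
  for (var i = 0; i < imgs.length; i++) {
    imgs[i].src = 'https://localhost/test-banner.png';
  }
  return imgs.length;
}

// Run immediately when loaded as a userscript in the browser.
if (typeof document !== 'undefined') {
  swapBanner(document);
}
```

Even a script this small has to be installed, maintained, and re-tested in every browser you care about, which is exactly the overhead that motivates the alternative below.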

Let's take an alternative approach. How about intercepting an incoming resource file requested by the webpage and loading a different resource file stored on your local drive instead? Aha!

For this I use my favorite tool, Fiddler. It is a debugging proxy that sits between your browser and the server and intercepts calls between them. The tool has many features that make a developer’s life easier, and we are going to use the feature “AutoResponder”.


Here are the steps to follow to intercept an image file and point to your own image.

a. Download, install & run Fiddler

b. Select the AutoResponder tab, check 'Enable automatic responses', and check 'Unmatched requests pass-through'. This means that if no rule matches an incoming request, Fiddler will not intercept it, and the file served from the web server will be used.

c. Get the url of the image on the page you want to change. You can probably find it by viewing the page’s source code.

d. Have your replacement image in your local drive ready.

e. Click Add Rule button (or you can import the rule, if you previously exported it).

f. At the bottom of the window, type the URL of the source image into the first dropdown. Fiddler matches on a substring by default; it also supports "EXACT:" and "regex:" prefixes for stricter or pattern-based matching.


g. For the second one, type in the local file path of the image file to be used in place of the original one.

h. Save

i. Refresh the webpage and voila! The new image appears in place of the original!


j. You can turn the interception on or off by checking or unchecking the checkbox in front of each rule you specify.


To alter a .css or .js file, first download the file from the web server and store it on your local drive, add the interception rule, make your modifications to that local file, and refresh the page to see the change.

Happy coding!

Adding All Services to an Existing Office 365 User License

When working with our clients, we often find that they have enabled only some of the services within an Office 365 license.  Some companies, for example, may enable E3 licenses for a subset of users but not enable Lync Online.  While it's very easy to add a service from within the Office 365 Admin Center, this method is not very efficient when a company has to modify several hundred or several thousand accounts and instead wants to leverage Windows PowerShell.

By combining the use of the New-MsolLicenseOptions and Set-MsolUserLicense cmdlets, it’s possible to remove and add services.  In the following example, the account has been assigned all E3 services except for Office 365 ProPlus (OFFICESUBSCRIPTION) and Lync Online ‎(Plan 2) (MCOSTANDARD):


The company wants to add the Office 365 ProPlus service, but keep the Lync Online service disabled.  Running the following cmdlet will set the disabled service to only “MCOSTANDARD”:

$LicenseOptions = New-MsolLicenseOptions -AccountSkuId "company:ENTERPRISEPACK" -DisabledPlans MCOSTANDARD

Running this next cmdlet will change the license settings:

Get-MsolUser -UserPrincipalName user@company.com | Set-MsolUserLicense -LicenseOptions $LicenseOptions

Since the “OFFICESUBSCRIPTION” service was not explicitly excluded in the “DisabledPlans” parameter, by default, it will now be enabled:


Note that the “ProvisioningStatus” for OFFICESUBSCRIPTION changed from “Disabled” to “PendingInput”.  When viewing the license settings in the Admin Center, the service will now be enabled under the E3 license details:


Now, again consider the scenario where a company has assigned E3 licenses, but left the Office 365 ProPlus and Lync Online (Plan 2) services disabled for all E3 licensed users.  The company now wants to enable all services, and not exclude any services.  In the past, Microsoft support has always stated that the only way to accomplish this is to remove the license and then reassign it without any "LicenseOptions", effectively enabling all services.  While this method is perfectly safe, some companies are a bit apprehensive about making this change to a large number of accounts at once, for fear of disconnecting the users' mailboxes and causing a service outage.

Instead of removing and re-adding the license, it’s possible to accomplish the same task by setting the “DisabledPlans” parameter to “$Null” within the “New-MsolLicenseOptions” cmdlet.  Example:

$LicenseOptions = New-MsolLicenseOptions -AccountSkuId "company:ENTERPRISEPACK" -DisabledPlans $Null
Get-MsolUser -UserPrincipalName user@company.com | Set-MsolUserLicense -LicenseOptions $LicenseOptions

Note that both the OFFICESUBSCRIPTION and the MCOSTANDARD “ProvisioningStatus” have changed to “PendingInput”, and the services will show as enabled under the E3 license details in the Admin Center:



I hope you find this tip useful when managing your Office 365 licenses with Windows PowerShell.

Barry Thompson
Principal Consultant

JavaScript & CSS – Lessons Learned from the Field

In the past year, I’ve been able to work primarily on SharePoint intranet projects – both from the perspective of re-branding an existing site, as well as creating new, branded sites from scratch. These efforts were made much easier through the power of JavaScript and CSS, and they continue to be essential tools for any modern web development project. Here are some of the lessons I learned (sometimes the hard way) while working on projects in the past year:

Use the Right Tool for the Job

My three primary tools were Visual Studio, SharePoint Designer, and Internet Explorer's F12 Developer Tools. Each has some unique advantages over the others, especially as code editors for viewing CSS and HTML. But I found that, for the most part, the Internet Explorer Developer Tools were the most indispensable of the three, mainly for their ability to inspect and modify CSS and HTML on the fly. Here, for instance, we can see and modify all the properties in effect on the highlighted section of text:


JavaScript & CSS Can Do Anything

While 'anything' might be an exaggeration, I did learn that, more often than not, there was a solution for even the most complex problems when JavaScript & CSS were used effectively. Both offer a large set of methods and features that seem to meet any need, including dynamic run-time HTML changes. For instance,

· the function setTimeout() can delay the execution of your JavaScript (which can be useful if you’re waiting for something else to load), and

· the jQuery function addClass() can dynamically add a class to an element programmatically at runtime – this is useful if the element you’re referencing doesn’t get generated until runtime.
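The two techniques above combine naturally: wait briefly for late-rendered markup, then tag it with a class that CSS can target. A contrived sketch (the selector, class name, and delay are invented for illustration, and classList.add() stands in for jQuery's addClass()):

```javascript
// After `delayMs` milliseconds, add `className` to the first element
// matching `selector`, if it has appeared by then.
function tagWhenReady(doc, selector, className, delayMs) {
  setTimeout(function () {
    var el = doc.querySelector(selector);
    if (el) {
      el.classList.add(className); // plain-DOM equivalent of jQuery addClass()
    }
  }, delayMs);
}

// In the browser you would call, for example:
// tagWhenReady(document, '#newsFeed', 'highlight', 500);
```

A fixed delay is a blunt instrument; polling or a MutationObserver is more robust, but the delayed-class-add pattern is often enough for a quick fix.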

Internet Explorer 8 is a Pain

Many of my clients have some small subset of users who still need to use Internet Explorer 8, and from a JavaScript/CSS perspective, this continues to be a challenge for my intranet projects. Fortunately, there always seems to be an IE8-specific fix that can be applied, and to ease the pain of integrating these one-off fixes, we can use features like conditional CSSRegistration:

or in-line CSS tags that target certain browsers:


The JavaScript & CSS Community is awesome

There are so many bloggers and developers actively sharing their expertise, tips, and tricks regarding JavaScript & CSS that, with a quick internet search, you always seem to be able to find the answer you need. And the community continues to build on these foundations, producing libraries like jQuery and others.

InfoPath is Still Alive

In January 2014, Microsoft announced the end of any future updates to InfoPath, with an implication that it might not be included in the next version of SharePoint. A year later, Microsoft officially confirmed that InfoPath Forms Services will continue to be one of the services included in SharePoint Server 2016, and its inclusion in Office 365 will remain 'until further notice'. (The InfoPath 2013 desktop application remains the last version to be released.) They also announced the cancellation of FoSL (Forms on SharePoint Lists), the InfoPath alternative they had been developing, which was announced at SPC2014. This is very relevant news to the many organizations wondering how they would develop forms in SharePoint if there were no more InfoPath.


Based on this latest announcement, we are continuing to advise our clients to consider InfoPath for any forms project for which it is a good fit. Short term projects or agile processes that need rapid forms development make good candidates. With its inclusion in the next version of SharePoint, and Microsoft’s standard 10-year support cycles, InfoPath still has quite a bit of life left in it.

Why use InfoPath?

Even when it was facing extinction last year, it was important to realize that InfoPath still has a comprehensive and broad set of powerful features that give it an advantage over many of the alternatives. Here are just a few of the features that are sometimes overlooked:

  • Promoted columns
    • Promoted columns represent fields inside the form that have been published into columns in the SharePoint forms library. The classic example of the value of the promoted column is the Expense Report. A manager can view a forms library that lists each report, with a column representing the total expense amount that needs approval, as well as a sum of all totals. Without the promoted column, the manager would have to open each form individually.
  • XML backend
    • InfoPath uses an XML schema behind the scenes to power its forms. For the normal power user, this fact is irrelevant, and should be considered a black box that need not be opened. But for the SharePoint developer who may need to create code to programmatically examine the contents of InfoPath forms, this is a useful fact. The CreateNavigator method, for instance, can be used to grab an instance of the XMLForm object for the current form document as a data source:
  • Workflow integration
    • Part of InfoPath’s value in the creation of no-code solutions in SharePoint lies in its natural integration with SharePoint Workflow. Both InfoPath and SharePoint workflow natively interact with SharePoint columns, and can use them to coordinate with each other regarding the status of the process, relevant data fields, etc. The included Workflow Status column provides a convenient in-line way to see the progress of the associated workflow right from within the form library.
  • Code-behind
    • There are many times when the standard InfoPath features aren't quite enough. Sometimes we need to apply code behind our forms to programmatically perform certain functions. The Developer tab of InfoPath Designer gives us the ability to attach C# or VB code in Visual Studio to our form for just such a purpose, as seen in the simple example below, which applies code that runs when the form loads, adding text to a field. We should always, of course, be mindful of the implications of code-behind for our forms deployment process.
  • Outlook integration
    • Many times, you can interact with a forms process right from Outlook. It can present InfoPath forms embedded in an email message, with the ability to open, fill out, or submit them. These forms could be submitted to you via an automated workflow, or could be opened on demand via the New button.

It will be interesting to see if organizations will continue to use these powerful features included in InfoPath, given the ever-uncertain future of the product.

SharePoint 2013 Search Results Not Returned – Alternative Access Mappings (AAM)

Worked through a search issue last week.  Hope this post helps to give some guidance.

We had a Default Zone URL called http://foo

It was extended with Forms-Based Authentication (FBA) on the Internet Zone, with a URL of http://bar

We configured a content source that crawled the Internet Zone, i.e. we crawled http://bar.

Here are the results:

http://foo (Default Zone Url)

  • The search results web part worked correctly when viewed through http://foo
  • The configured Result Query also was honored to help filter results.
  • The search results links resolved as http://foo.

http://bar (Internet Zone Url)

  • The search results web part returned all results when viewed through http://bar
  • The search results links resolved as http://bar.
  • The configured Result Query was NOT honored to help filter results.

We focused first on permissions with no resolution.

Then we started looking at the role of AAMs in the configuration.

After some initial positive results, we discovered this article explaining the situation:

Summary: Always crawl the Default Zone’s URL!  DO NOT attempt to crawl any other alternative access mapping URLs.

Variations Not Working After SharePoint 2010 to SharePoint 2013 Upgrade

A customer had a SharePoint 2010 site collection that we upgraded to SharePoint 2013.

The variation pages propagation jobs were set to run every three minutes.

Publishing an existing page in the variation root caused a "Started…Finished" propagation log entry with no information about the child variations:

Publishing a new page in the variation root showed the "Started…Finished" message, along with the information about the child variation pages:

It turns out that there is a very important hidden property called NotificationMode on the Variation Label page that seems to be set to null during upgrade.

This NotificationMode property needs to:

1.  Have a value for Variations to propagate;

2.  Be set to true on the item in the list that is the root label;

3.  Be set to false on child variations in the list.

Here is the KB article that contains a PowerShell script to fix NotificationMode:

A Lap Around the Azure API Management Service

At a recent conference, our team presented a talk called "A Lap Around the Azure API Management Service."  It was a great opportunity to meet others in the area who are actively developing on the Microsoft platform.  We appreciated meeting people with varying levels of familiarity with Web APIs, and it was a perfect opportunity to exchange ideas and experiences.

For people who are new to this space, the presentation covered the Web API ecosystem as well as its value in building modern applications.


From a Web API consumer's perspective, there is a wide range of functionality that these APIs expose, including security, caching, logging, tracing, storage, etc.  If you're building an app, chances are there is already an existing API that will fit your needs.


In addition to pre-built APIs, there is a large, vibrant developer community who are creating and consuming these APIs.  Your company may be able to connect with new customers and new revenue channels by creating your own APIs and working with this community to connect your services in these developers’ applications.

At a high level, the Windows Azure API Management Service (AMS) has four feature sets:

· Admin Portal – manage your APIs

· Proxy – hosts the public version of your APIs

· Developer Portal – helps developers discover your APIs and promotes adoption

· Analytics – provides insight into usage and the health of your APIs


Publisher/Admin Portal:

Also called the API Management Console, this is where API publishers configure and manage their public APIs.

In AMS, a product contains one or more APIs as well as a usage quota and the terms of use. Once a product is published, developers can subscribe to the product and begin to use the product’s APIs.

The screenshot below shows some of the various types of products that can be created with the management console.  Here, each product represents a tier of service. API publishers can use the AMS product configuration feature to provide different levels of service using call rates, subscriptions requiring approvals, etc.



The AMS Proxy is the middleware that glues the published APIs to an actual implementation. It uses the information provided when importing an API to invoke this "backend" API when someone calls the AMS-published API. The proxy is very useful because not only does it isolate the backend API, but it also allows pre- and post-processing of messages through policies.


Developer Portal:

The developer portal is where developers can learn about the publisher’s APIs, view and call operations, and subscribe to products. Prospective customers can visit the developer portal, view APIs and operations, and sign up. The URL for the developer portal is located on the dashboard in the Azure portal for the API Management service instance.  API publishers can customize the look and feel of their developer portal by adding custom content, customizing styles, etc.  Features like the developer portal, alongside the product and subscriber management, can help developers accelerate the adoption of their APIs.



The Analytics features provide insight into your API platform. Usage data like successful/blocked/failed calls are reported on a per-user, per product and per API level. There are several charts and tables that allow you to quickly understand how your APIs are operating.  The Analytics features can help providers track API usage and identify performance issues, should these arise.

In addition to these features, the portal also provides a mechanism for policy management.  Using this feature, administrators can easily create policies that can control several facets of the API, such as quotas, payload transformation, etc.  Below is an example of a policy that limits the rate of calls to the API to a maximum of three calls every 60 seconds:
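In AMS policy syntax, such a limit is expressed with the rate-limit policy element in the inbound pipeline. A sketch (the surrounding policy scaffolding is the standard shape, with everything else left at its defaults):

```xml
<policies>
  <inbound>
    <base />
    <!-- Allow at most 3 calls per subscription every 60 seconds -->
    <rate-limit calls="3" renewal-period="60" />
  </inbound>
  <outbound>
    <base />
  </outbound>
</policies>
```

Callers who exceed the limit receive a 429-style rejection until the renewal period elapses.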


If you would like to learn more about the Azure API Management Service and Web API development, please feel free to contact us at Bennett Adelson.

“One Size Doesn’t Fit All – User Experience 101”

Often we meet with clients who have already determined the type of technology for their application before they have determined what they want the application to accomplish.

With any project, it's crucial to start with the user experience first. In a sea of frameworks, platforms, and operating systems at our disposal, it's easy to get sidetracked by the technology. The user experience tends to take a back seat, when in reality it should drive the choice of technology.

By asking a few basic questions we begin to understand what type of technology is best suited to accomplish the business goals and what experience will resonate most with users.


1. What are the business goals of the application? Simply put, we want to know what you are hoping the application will accomplish. Is it to increase conversion? Is it to market new products? Is it to train or educate your employees? Without understanding the business goals, we cannot measure and determine success.

2. Who will be using the application? We want to clearly define user demographics and understand user limitations. As designers, we need to learn everything we can about the user: their age, gender, level of technical aptitude, and any physical limitations that could impact the success of the application. Designing a website for a 55-year-old female can be quite different from designing a website for an 18-year-old male.

3. Are there specific limitations or inefficiencies that could impact the overall design or layout? This is where we start to learn more about the user’s environment and what elements of their job could impact the application interface. For example, if a user is working in a warehouse and needs to scan parts, this might be difficult to do if he is required to wear gloves to perform his job. Environmental limitations can be just as important as physical limitations because they introduce unique design hurdles, which if not solved properly can negatively impact the experience.

4. What are the project requirements? Every project needs to start with a plan. This begins with talking to project owners and stakeholders to reach consensus on the capabilities, features, and attributes of the project's deliverables. Once this has taken place, the next step is to create a prioritized list, which will be used as the basis for the project deliverables and, ultimately, the project plan. This is the map that keeps the project on time and on budget.

5. What are your technology requirements and limitations? Understanding a client's current technology stack or environment will also impact the way designers approach their design and layout. We often have to rethink the way a user will complete a task, knowing that a specific feature might not be accessible in certain software or database versions. This is a common problem for mobile operating systems: the innate features of the iPhone 6 are different from those of the iPhone 4S.

By asking a few basic questions upfront, designers and developers begin to gather a clear picture of what they are designing and most importantly, who they are designing for. In the end, this creates a seamless experience for the user and a big win for the client.

Coercion Failed Error when Running a Workflow from a Document Retention Policy

Recently, I had a client that wanted to create a “document review” workflow that would run if a document had not been modified in the past year. The solution involved creating a simple SharePoint 2010 style workflow that would assign a task to review the document to the reviewer(s) defined in the workflow’s association settings. A document retention policy was created to run the workflow if the document had not been modified in the past year. The workflow worked fine when run manually. However, when the workflow was run from the retention policy it was failing with the error: “Coercion Failed: Input cannot be null for this coercion.”


As it turns out, there is a (minimally documented) web application property called PolicyUseAssocDataAsInitData that controls whether the workflow association properties are passed to the workflow when it is started from a retention policy.  This property was introduced with an October 2011 hotfix for SharePoint 2010.

After setting this property the workflow ran as expected from the retention policy.

You can enable this property on a web application using the following PowerShell commands:

$webApplication = Get-SPWebApplication http://yoursite.url
$webApplication.Properties["PolicyUseAssocDataAsInitData"] = "true"
$webApplication.Update()   # persist the property change on the web application

NOTE: After setting the property you need to restart the SharePoint Timer service in order for the change to take effect.

