Bennett Adelson Technical Blog

Posts from the consultants at Bennett Adelson

InfoPath is Still Alive

In January 2014, Microsoft announced the end of any future updates to InfoPath, implying that it might not be included in the next version of SharePoint. A year later, Microsoft has officially confirmed that InfoPath Forms Services will continue to be one of the services included in SharePoint Server 2016, and that its inclusion in Office 365 will remain ‘until further notice’. (The InfoPath 2013 desktop application remains the last version to be released.) Microsoft also announced the cancellation of FoSL (Forms on SharePoint Lists), the InfoPath alternative it had been developing, which was announced at SPC2014. This is very relevant news for the many organizations wondering how they would develop forms in SharePoint without InfoPath.

Recommendations

Based on this latest announcement, we are continuing to advise our clients to consider InfoPath for any forms project for which it is a good fit. Short term projects or agile processes that need rapid forms development make good candidates. With its inclusion in the next version of SharePoint, and Microsoft’s standard 10-year support cycles, InfoPath still has quite a bit of life left in it.

Why use InfoPath?

Even as it faced extinction last year, InfoPath retained a broad set of powerful features that give it an advantage over many of the alternatives. Here are just a few features that are sometimes overlooked:

  • Promoted columns
    • Promoted columns represent fields inside the form that have been published into columns in the SharePoint forms library. The classic example of the value of the promoted column is the Expense Report. A manager can view a forms library that lists each report, with a column representing the total expense amount that needs approval, as well as a sum of all totals. Without the promoted column, the manager would have to open each form individually.
  • XML backend
    • InfoPath uses an XML schema behind the scenes to power its forms. For the typical power user this fact is irrelevant; the schema is a black box that need not be opened. But for the SharePoint developer who needs to examine the contents of InfoPath forms programmatically, it is a useful fact. The CreateNavigator method, for instance, returns an XPathNavigator over the current form's main data source, which code can use to read or write field values.
  • Workflow integration
    • Part of InfoPath’s value in the creation of no-code solutions in SharePoint lies in its natural integration with SharePoint Workflow. Both InfoPath and SharePoint workflow natively interact with SharePoint columns, and can use them to coordinate with each other regarding the status of the process, relevant data fields, etc. The included Workflow Status column provides a convenient in-line way to see the progress of the associated workflow right from within the form library.
  • Code-behind
    • There are many times when the standard InfoPath features aren't quite enough, and we need code behind our forms to perform certain actions programmatically. The Developer tab of InfoPath Designer lets us attach C# or VB code in Visual Studio to our form for just such a purpose; a simple example is code that runs when the form loads and adds text to a field. We should always, of course, be mindful of the implications of code-behind for our forms deployment process.
  • Outlook integration
    • Many times, you can interact with a forms process right from Outlook. It can present InfoPath forms embedded in an email message, which you can open, fill out, and submit. These forms could be sent to you by an automated workflow, or opened on demand via the New button.
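To make the XML-backend point concrete, here is a rough sketch in Python (not the InfoPath object model) of reading field values straight out of a form's underlying XML. The namespace and field names are invented for illustration; the real ones come from your form template's schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical expense-report form XML; a real InfoPath form's namespace and
# field names are defined by its form template's schema.
FORM_XML = """<?xml version="1.0"?>
<my:expenseReport xmlns:my="http://schemas.example.com/expense">
  <my:employee>Jane Doe</my:employee>
  <my:total>245.50</my:total>
</my:expenseReport>"""

NS = {"my": "http://schemas.example.com/expense"}

def read_field(form_xml, path):
    """Return the text of a single field in the form's XML, or None."""
    node = ET.fromstring(form_xml).find(path, NS)
    return node.text if node is not None else None

employee = read_field(FORM_XML, "my:employee")
total = float(read_field(FORM_XML, "my:total"))
```

This is essentially what a promoted column automates: pulling a field such as the total out of each form so it can surface as a column in the forms library.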

It will be interesting to see if organizations will continue to use these powerful features included in InfoPath, given the ever-uncertain future of the product.

SharePoint 2013 Search Results Not Returned – Alternative Access Mappings (AAM)

I worked through a search issue last week. I hope this post helps give some guidance.

We had a Default Zone URL called http://foo

It was extended with forms-based authentication (FBA) to the Internet Zone, with the URL http://bar

We configured a content source that crawled the Internet Zone, i.e. we crawled http://bar.

Here are the results:

http://foo (Default Zone Url)

  • The search results web part worked correctly when viewed through http://foo
  • The configured Result Query also was honored to help filter results.
  • The search results links resolved as http://foo.

http://bar (Internet Zone Url)

  • The search results web part returned all results when viewed through http://bar
  • The search results links resolved as http://bar.
  • The configured Result Query was NOT honored to help filter results.

We focused first on permissions with no resolution.

Then we started looking at the AAMs role in configuration.

After some initial positive results, we discovered this article explaining the situation: http://blogs.msdn.com/b/sharepoint_strategery/archive/2014/07/08/problems-when-crawling-the-non-default-zone-explained.aspx

Summary: Always crawl the Default Zone’s URL!  DO NOT attempt to crawl any other alternative access mapping URLs.

Variations Not Working After SharePoint 2010 to SharePoint 2013 Upgrade

A customer had a SharePoint 2010 site collection that we upgraded to SharePoint 2013.

The variation pages propagation jobs were set to run every three minutes.

Publishing an existing page in the variation root caused a “Started…Finished” propagation log entry with no information about the child variations.

Publishing a new page in the variation root showed the “Started…Finished” message, along with the information about the child variation pages.

It turns out that there is a very important hidden property called NotificationMode on the Variation Label page that seems to be set to null during upgrade.

This NotificationMode property needs to:

1.  Have a value for Variations to propagate;

2.  Be set to true on the item in the list that is the root label;

3.  Be set to false on child variations in the list.

Here is the KB article that contains a PowerShell script to fix NotificationMode:  http://support.microsoft.com/kb/2925599

A Lap Around the Azure API Management Service

At a recent conference, our team presented a talk called “A Lap Around the Azure API Management Service.” It was a great opportunity to meet others in the area who are actively developing on the Microsoft platform.  We appreciated meeting people with varying levels of familiarity with Web APIs, and it was a perfect opportunity to exchange ideas and experiences.

For people who are new to this space, the presentation covered the Web API ecosystem and the value of Web APIs in building modern applications.


From a Web API consumer’s perspective, these services expose a wide range of functionality, including security, caching, logging, tracing, storage, etc.  If you’re building an app, chances are there is already an existing API that will fit your needs.


In addition to pre-built APIs, there is a large, vibrant developer community creating and consuming these APIs.  Your company may be able to connect with new customers and new revenue channels by creating your own APIs and working with this community to connect your services to these developers’ applications.

At a high level, the Windows Azure API Management Service (AMS) has four feature sets:

· Admin Portal – where publishers configure and manage their APIs

· Proxy – hosts the public version of your APIs

· Developer portal – helps developers discover your APIs and promotes adoption

· Analytics – provides insight into usage and the health of your APIs


Publisher/Admin Portal:

Also called the API Management Console, this is where API publishers configure and manage their public APIs.

In AMS, a product contains one or more APIs as well as a usage quota and the terms of use. Once a product is published, developers can subscribe to the product and begin to use the product’s APIs.

The screenshot below shows some of the various types of products that can be created with the management console.  Here, each product represents a tier of service. API publishers can use the AMS product configuration feature to provide different levels of service using call rates, subscriptions requiring approvals, etc.

[Screenshot: product tiers configured in the management console]

Proxy:

The AMS Proxy is the middleware that glues the published APIs to the actual implementation. It uses the information provided when an API is imported to invoke this “backend” API whenever someone calls the AMS-published API. The proxy is very useful because it not only isolates the backend API but also allows pre- and post-processing of messages through policies.

 

Developer Portal:

The developer portal is where developers can learn about the publisher’s APIs, view and call operations, and subscribe to products. Prospective customers can visit the developer portal, view APIs and operations, and sign up. The URL for the developer portal is located on the dashboard in the Azure portal for the API Management service instance.  API publishers can customize the look and feel of their developer portal by adding custom content, customizing styles, etc.  Features like the developer portal, alongside product and subscriber management, can help publishers accelerate the adoption of their APIs.
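Once a developer has subscribed to a product, calls to its APIs carry the subscription key, which API Management reads from the Ocp-Apim-Subscription-Key request header. Below is a minimal sketch in Python; the endpoint URL and key are made up for illustration.

```python
import urllib.request

# Assumptions for illustration: a hypothetical AMS-hosted endpoint and key.
ENDPOINT = "https://contoso.azure-api.net/echo/resource"
SUBSCRIPTION_KEY = "your-subscription-key"

def build_request(url, key):
    """Build a request carrying the subscription key the AMS proxy uses to
    identify the subscriber and enforce the product's quotas and rate limits."""
    return urllib.request.Request(
        url, headers={"Ocp-Apim-Subscription-Key": key})

req = build_request(ENDPOINT, SUBSCRIPTION_KEY)
# urllib.request.urlopen(req) would then perform the call against a live service.
```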

 

Analytics:

The Analytics features provide insight into your API platform. Usage data such as successful, blocked, and failed calls are reported at a per-user, per-product, and per-API level. There are several charts and tables that allow you to quickly understand how your APIs are operating.  The Analytics features can help providers track API usage and identify performance issues, should these arise.

In addition to these features, the portal also provides a mechanism for policy management.  Using this feature, administrators can easily create policies that can control several facets of the API, such as quotas, payload transformation, etc.  Below is an example of a policy that limits the rate of calls to the API to a maximum of three calls every 60 seconds:

[Screenshot: a rate-limit policy in the policy editor]
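For reference, a rate-limit policy of this kind is expressed as a short XML fragment in the policy editor. The sketch below shows the general shape (attribute names follow the API Management policy reference; treat it as illustrative rather than copy-paste ready):

```xml
<policies>
  <inbound>
    <!-- Allow at most 3 calls per subscription every 60 seconds -->
    <rate-limit calls="3" renewal-period="60" />
  </inbound>
  <outbound />
</policies>
```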

If you would like to learn more about the Azure API Management Service and Web API development, please feel free to contact us at Bennett Adelson.  Also, the links below can help provide more information:

http://azure.microsoft.com/en-us/documentation/articles/api-management-get-started/

http://channel9.msdn.com/Events/TechEd/NorthAmerica/2014/DEV-B382

“One Size Doesn’t Fit All – User Experience 101”

Often we meet with clients who have already determined the type of technology for their application before they have determined what they want the application to accomplish.

With any project it’s crucial to start with the user experience first. In a sea of frameworks, platforms and operating systems at our disposal, it’s easy to get sidetracked by the technology. The user experience tends to take a back seat, when in reality it should drive the choice of technology.

By asking a few basic questions we begin to understand what type of technology is best suited to accomplish the business goals and what experience will resonate most with users.

QUESTIONS:

1. What are the business goals of the application? Simply, we want to know what you are hoping the application will accomplish. Is it to increase conversion? Is it to market new products? Is it to train or educate your employees? Without understanding the business goals we cannot measure and determine success.

2. Who will be using the application? We want to clearly define user demographics and understand user limitations. As designers we need to learn everything we can about the users: their age, gender, level of technical aptitude and any physical limitations that could impact the success of the application. Designing a website for a 55-year-old female can be quite different from designing a website for an 18-year-old male.

3. Are there specific limitations or inefficiencies that could impact the overall design or layout? This is where we start to learn more about the user’s environment and what elements of their job could impact the application interface. For example, if a user is working in a warehouse and needs to scan parts, this might be difficult to do if he is required to wear gloves to perform his job. Environmental limitations can be just as important as physical limitations because they introduce unique design hurdles, which if not solved properly can negatively impact the experience.

4. What are the project requirements? Every project needs to start with a plan. This begins with talking to project owners and stakeholders to reach consensus on the capabilities, features and attributes of the project’s deliverables. Once this has taken place, the next step is to create a prioritized list which will be used as the basis for the project deliverables and ultimately, the project plan. This is the map to keep the project on time and on budget.

5. What are your technology requirements and limitations? Understanding a client’s current technology stack or environment will also impact the way designers approach their design and layout. We often have to rethink the way a user will complete a task, knowing that a specific feature might not be available in certain software or database versions. This is a common problem for mobile operating systems. The innate features of the iPhone 6 are different than those of the iPhone 4S.

By asking a few basic questions upfront, designers and developers begin to gather a clear picture of what they are designing and most importantly, who they are designing for. In the end, this creates a seamless experience for the user and a big win for the client.

Coercion Failed Error when Running a Workflow from a Document Retention Policy

Recently, I had a client that wanted to create a “document review” workflow that would run if a document had not been modified in the past year. The solution involved creating a simple SharePoint 2010 style workflow that would assign a task to review the document to the reviewer(s) defined in the workflow’s association settings. A document retention policy was created to run the workflow if the document had not been modified in the past year. The workflow worked fine when run manually. However, when the workflow was run from the retention policy it was failing with the error: “Coercion Failed: Input cannot be null for this coercion.”


As it turns out there is a (minimally documented) web application property called PolicyUseAssocDataAsInitData that controls whether the workflow association properties are passed to the workflow when it is started from a retention policy. This property was introduced with an October 2011 hotfix for SharePoint 2010 (see http://support.microsoft.com/kb/2596584).

After setting this property, the workflow ran as expected from the retention policy.

You can enable this property on a web application using the following PowerShell commands:

$webApplication = Get-SPWebApplication http://yoursite.url
$webApplication.Properties["PolicyUseAssocDataAsInitData"] = 'true'
$webApplication.Update()

NOTE: After setting the property you need to restart the SharePoint Timer service in order for the change to take effect.

 

CU4 for ConfigMgr 2012 R2 has been released

An update (CU4) was released yesterday, Feb 2, 2015, for System Center Configuration Manager 2012 R2 that replaces Cumulative Update 3 (CU3).

This update addresses many distribution related issues, some minor OSD issues, a few critical site issues, some minor client bugs, some MDM fixes, and some SUP fixes.

There have also been some additions, including fixes to existing PowerShell cmdlets (https://support.microsoft.com/kb/3031717) as well as 34 new cmdlets such as:

  • Add-CMDeploymentTypeDependency which adds a deployment type as a dependency to a dependency group.
  • Add-CMDeploymentTypeSupersedence which sets one deployment type to supersede another.
  • Get-CMDeploymentTypeDependency which gets existing dependent deployment types from a dependency group.
  • Get-CMQuery which gets a query.

Some optimizations have been made to reduce latency and improve data replication in large hierarchies.

Lastly, the Endpoint Protection client has been updated to match the currently distributed version.

You can find more information here:
https://support.microsoft.com/kb/3026739/en-us

Jason Condo
Principal Consultant

Reflections on Integration 2014 (aka BizTalk Summit)

I’ve just returned from Integrate 2014, the annual gathering of BizTalk developers in Redmond. The big story this year was that Microsoft’s BizTalk team gave its first public briefings and demonstrations of the new BizTalk architecture it’s been planning for several years. The key features of this new architecture are:

  • BizTalk Server will be refactored and re-implemented as small pluggable components. Each component can be used separately from the others, and new ones can be written by third parties and developers. They can each be developed and versioned separately, so there will no longer be single monolithic releases of “BizTalk Server”. I was reminded of how Microsoft has been breaking up ASP.NET into components with OWIN and Katana.
  • But unlike OWIN, the new BizTalk components will not connect directly to each other. Instead their inputs and outputs will all pass through a new type of runtime engine that acts as a message broker. The message flow will thus be pub/sub rather than a pipeline.
  • There will be a web-based “gallery” where developers and business users can pick and choose components and arrange them into workflows. Developers will also have access to components in Visual Studio via NuGet.
  • This architecture will be implemented first on Windows Azure, but will also run on-premises in a future version of the Windows Azure Pack. The latter appeared to be how the Microsoft devs were running their demos.

At the conference Microsoft referred to the new components as “microservices”. This term didn’t seem to appeal to everyone, and I won’t be surprised if Microsoft comes up with new terminology. (They no longer refer to it as “AppFabric” as they did in 2010.) And although the BizTalk team is moving the technology forward, we learned from Scott Guthrie (who gave the keynote) and Bill Staples (Director of Program Management for the Azure Application Platform) that Microsoft is planning to adopt this architecture for other Azure features and services.

Microsoft did not have a public preview of the microservice architecture to announce at the conference, but they promised it for 2015 Q1. That is also when they plan to release the first preview of BizTalk Server 2015, which should be a “major” release since it will come in an odd-numbered year.

Although GA for the new BizTalk architecture is probably more than a year off, the most exciting takeaway for me was the affirmation, both from Microsoft and the developers assembled from round the world, that BizTalk Server and Microsoft Azure BizTalk Services (MABS) are still strong, vital and more able than ever to handle demanding enterprise integration. Old customers are sticking with BizTalk, and new ones are adopting it all the time. At Bennett Adelson we will continue to keep BizTalk at the center of our Connected Systems practice.

Microsoft releases Out-of-Band update today

Microsoft has rereleased update MS14-068 (Kerberos Checksum Vulnerability) as an out-of-band update and urges customers to deploy it. Their Security Bulletin Summary page (https://technet.microsoft.com/en-us/library/security/ms14-nov.aspx) states that Microsoft is aware of targeted attacks exploiting this vulnerability. Microsoft recommends customers apply this update to their domain controllers as quickly as possible, as the vulnerability could allow a normal domain account to be elevated to that of a domain admin. An attacker with administrative privilege on a domain controller can make a nearly unbounded number of changes to the system that allow the attacker to persist their access long after the update has been installed. Therefore, it is critical to install the update immediately. The implications are huge here, so I wouldn’t sit on this too long if I were you.

MS14-068
Kerberos Checksum Vulnerability

This security update resolves a privately reported vulnerability in Microsoft Windows Kerberos KDC that could allow an attacker to elevate unprivileged domain user account privileges to those of the domain administrator account. An attacker could use these elevated privileges to compromise any computer in the domain, including domain controllers. An attacker must have valid domain credentials to exploit this vulnerability. The affected component is available remotely to users who have standard user accounts with domain credentials; this is not the case for users with local account credentials only. When this security bulletin was issued, Microsoft was aware of limited, targeted attacks that attempt to exploit this vulnerability.

This security update is rated Critical for all supported editions of Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. The update is also being provided on a defense-in-depth basis for all supported editions of Windows Vista, Windows 7, Windows 8, and Windows 8.1. For more information, see the Affected Software section.

The security update addresses the vulnerability by correcting signature verification behavior in Windows implementations of Kerberos. For more information about the vulnerability, see the Frequently Asked Questions (FAQ) subsection for the specific vulnerability.

For more information about this update, see Microsoft Knowledge Base Article 3011780.

Additional Notes: If you aren’t already aware, Azure Active Directory (AAD) does not expose Kerberos over any external interface and is therefore not affected by this vulnerability (although domain controllers running in Azure would be).

Jason Condo
Principal Consultant

Windows 10 IT Pro Training – November 20th

Newly announced: Microsoft is offering free live training for IT pros on Windows 10, November 20th, via Microsoft Virtual Academy (MVA). Simon May, Brad McCabe, Michael Niehaus, Chris Hallum, and Fred Pullen are your hosts, and I expect it to be a great session. If you have had the chance to see Simon or Michael speak, I am sure you will agree this is something you don’t want to miss. If you have the time, check it out.

http://www.microsoftvirtualacademy.com/liveevents/windows-10-technical-preview-fundamentals-for-it-pros

Windows 10 Technical Preview Fundamentals for IT Pros

Live Event Details
November 20, 2014
9am–1pm PST

In this Jump Start training with live Q&A, join us as the lead Windows 10 Enterprise Product Managers roll back the covers on the Windows 10 Technical Preview. Learn about new UI enhancements, find out how management and deployment are evolving, and hear how new security enhancements in Windows 10 can help your organization respond to the modern security threat landscape. Be sure to bring your questions!
