Tuesday, November 01, 2011

SharePoint Information Architecture from the Field

Over the years, I've written quite a few articles on how to technically structure your SharePoint solutions into web-apps, sites, subsites, lists and document libraries. All of them are based on having a defined Information Architecture (IA) as a basis for the solution design, or at least on being able to reason about your content management using my "chest of drawers" and "newspaper" analogies. The analogies stem from my experience in the field, as most companies don't have a well-defined IA in place.

Common questions from customers are "what is Information Architecture?" and "what is the value of having an IA for SharePoint, can't we just create sites and doc-libs on the fly as needed?". Not to forget "how do we go about creating an Information Architecture for SharePoint?". So here is some IA advice from Puzzlepart projects:

In short, Information Architecture defines how to classify and structure your content so that it is easy for content consumers to find and explore relevant content, while making it simple for workers to contribute and manage content in an efficient manner.

The business goal of having an Information Architecture for your SharePoint solution is to enable workers to contribute, store and manage content in a simple and efficient manner, leading to more content sharing; at the same time, it should be easy for workers to browse and find the content they need, and to discover and explore relevant content they didn't know of. The outcome is more knowledgeable workers that are better informed about what's going on in the company and about the range of intellectual property possessed by other employees, while also saving the time wasted on finding information and on acting on incorrect or outdated information.

An outcome is a metric that customers use to define the successful realization of the objectives and goals. Outcomes happen after the project has delivered on its objectives and goals, and the customers must themselves work towards securing the outcomes to achieve the desired business value.

The business value of a working IA is capturing company knowledge from employees with better quality of the shared content, which combined with good findability drives more knowledgeable workers who make better decisions and run better, faster processes. In addition, more and better content sharing helps users discover and explore not only content, but also people such as subject matter experts, allowing employees to build and expand their network throughout the company and helping the company retain talented employees through social ties and communities. Access to more and better content and people expertise is central to enabling innovation and process improvement, as new knowledge is a trigger for new ideas and for identifying new opportunities.

The process of defining your IA for your SharePoint solution should focus on these objectives and goals:
  • Analyze and define the content classification and structure for the solution 
    • goal: identify what content to manage and plan how to store it in SP, leading to sites, subsites and doc-lib structure organized into SP web-apps
  • Analyze and define how to browse and navigate the content
    • goal: make it simple and efficient for users to find and use known content that they need in their daily work to drive better faster processes
  • Analyze and define how to discover and explore the content
    • goal: make it easy for users to stumble upon novel shared knowledge based on "common focus" to trigger innovation and build social ties
  • Provide a simple and efficient content contributor experience with liberal application of default metadata values, storing content close to the authors 
    • goal: make workers contributors, not knowledge management grunts, and help them store content correctly with better metadata and tagging, driving findability and "common focus" content discovery; drive better sharing and collaboration
  • Analyze and define the starter content types with metadata and term set taxonomy based on the defined site and doc-lib architecture, with a strong focus on needed search experience capabilities
    • goal: enable content management and support both search-driven and "common focus" content; drive findability, sharing and innovation
  • Analyze and define the policies for social tagging and rating in the solution, also in relation to user profile interests, skills and responsibility tagging
    • goal: drive "common focus" content discovery, drive findability, drive social communities, drive innovation
  • Analyze and define the search experience, focusing on both search-driven content and on search center scopes and refiners
    • goal: drive findability and provide both search-driven and "common focus" content
  • Enable disposition of redundant and irrelevant content 
    • goal: provide users with better, correct and up-to-date information, drive findability, save storage cost, save process cost
Including the search experience helps you avoid an initial version of your IA with too narrow a scope, which is easy to end up with for a starter IA when e.g. analyzing only project collaboration needs or just the document types of an attachment-based intranet. It makes you focus on what you get out, not only on what you put in.
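To illustrate the starter content type objective above, such content types can be provisioned declaratively. Here is a minimal, hedged CAML sketch; the field name, group names and all IDs are invented placeholders:

```xml
<!-- Illustrative only: the names and GUIDs below are invented placeholders -->
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Field ID="{11111111-2222-3333-4444-555555555555}"
         Name="PzlDepartment"
         DisplayName="Department"
         Type="Text"
         Group="Puzzlepart Columns" />
  <!-- 0x0101 is the built-in Document content type; the suffix derives ours from it -->
  <ContentType ID="0x010100A1B2C3D4E5F6471234567890ABCDEF01"
               Name="PzlDocument"
               Group="Puzzlepart Content Types">
    <FieldRefs>
      <FieldRef ID="{11111111-2222-3333-4444-555555555555}" Name="PzlDepartment" />
    </FieldRefs>
  </ContentType>
</Elements>
```

The content type ID starts with 0x0101 so that it inherits from the built-in Document type, keeping it usable in any document library.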

Note that navigation is not IA, it's just one way to explore the content. Using navigation to structure your content is just reapplying the fileshare folder approach, which we all know doesn't work too well for findability and discovery. Navigation should not define the static IA structure for the content; do a LATCH analysis to model the possible IA structures, and choose one of them to define the static IA structure. The site map comes closer to defining the static IA structure than navigation does, but it is still only good for the logical IA structure and cannot be expected to be used directly as the physical IA structure in SharePoint.

In an upcoming article I will give some practical advice from the field on how to define and realize the Information Architecture for your SharePoint solution in an agile fashion.

Thursday, October 27, 2011

SharePoint is like a Chest of Drawers

I often get asked "how many document libraries and sites will we need?" in SharePoint, followed by "how will we know whether we should use more doc-libs in a site or just throw it all in there?" and "where should content be stored? we need to show it on the intranet home page, but it is really edited and owned by HR in region Gokk". Well, SharePoint is like a chest of drawers.

To answer such questions, you need to know how to classify and structure your content; ideally you should have an Information Architecture for all your different kinds of data. If you are like most others, you don't. This is where the "chest of drawers" analogy might help you reason about your content.

Whether you need many doc-libs or subsites or not depends on your IA policies for information management. Still don't have an IA? Think of a chest of drawers for your clothes: it makes it easier to manage different types of clothes in different ways at different schedules, by e.g. separating t-shirts from trousers. Maybe even handle different kinds of t-shirts differently, such as your precious Maiden t-shirts. It also allows for delegating a few drawers to be managed by your wife; maybe you even want to have some locked drawers with more privacy :)

Your content is the clothes and the drawers are doc-libs or even subsites; depending on the variety of clothes you have, you might need quite a sophisticated chest of drawers. Throw in all your other stuff, and you might need a bigger closet or a garage!

So now all your content is stored in nicely separated drawers, with delegated and secure handling where needed. But it is not so easy to see what is in the drawers without actually opening each drawer and browsing its content. Until we get one of those science fiction closets that knows what's in the drawers and lets us explore what trendy outfits we can wear today, it's time for another analogy: the good old newspaper, even in its modern online incarnation.

Think of a newspaper with a front page and then multiple sections, such as domestic, foreign, sports, economics, etc. The front page and section front pages are used to show the stored content to readers, helping them quickly browse the content at well-known locations in the paper. The shown stories are typically rollups of content stored elsewhere, typically where it is maintained, close to the content editors.

So a paper is built from dispersed storage of content that can be rolled up and targeted to readers in multiple places. The home page and section pages roll up content "teasers" and allow the user to browse the content stored elsewhere in the paper and decide whether to explore it further.

The front page is the home page of your SharePoint site, the sections are subsites and the section pages are the subsite welcome pages in SharePoint parlance. As for the drawers, there might be different management policies and different people handling the different sections, and this helps you decide when subsites are needed. The rollup, or cross publishing if you like, is achieved using the content by query web-part or search-driven content based on content types, tagging and metadata.
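The content by query web-part covers most rollup needs declaratively. For code-based rollups across subsites, a hedged sketch using SPSiteDataQuery could look like this; the view fields and row limit are arbitrary choices for illustration:

```csharp
// Sketch: roll up the five most recently created publishing pages
// from all subsites of the given web.
using System.Data;
using Microsoft.SharePoint;

public static class NewsRollup
{
    public static DataTable GetLatestPages(SPWeb web)
    {
        SPSiteDataQuery query = new SPSiteDataQuery();
        query.Lists = "<Lists ServerTemplate='850' />"; // 850 = publishing Pages libraries
        query.Webs = "<Webs Scope='Recursive' />";      // include all subsites
        query.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='Created' />";
        query.Query = "<OrderBy><FieldRef Name='Created' Ascending='FALSE' /></OrderBy>";
        query.RowLimit = 5;
        return web.GetSiteData(query); // one flattened DataTable across all webs
    }
}
```

Bind the resulting DataTable to e.g. an SPGridView or a repeater to render the "teasers" on the home page.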

Controlled and secure management of content according to different policies and schedules is much simpler when using subsites as compared to throwing it all into one site. Store the content close to the producers, show it everywhere the users expect to find it - and also where *you* want them to discover and explore knowledge new to them.

Thursday, September 08, 2011

Issue with SP2010 Personal Site UserInfo Synchronization

Today we discovered an issue with the SharePoint synchronization from the user profile database to the hidden UserInfoList in all site-collections. This sync is performed by two timer jobs (see profile sync details in this excellent article on the Bamboo Team Blog), which will update changes to your user profile in all the cached profile data in the hidden user info lists, except for the UserInfoList in your personal site under the profile site (My Site Host).

To verify the reported bug, I updated my mobile phone number in my user profile, and ran the two sync timer jobs. This is how my updated user information looks in a team site:

And this is how my non-updated user information looks in my personal site:

As you can see from the time stamps in the lower left corner, the profile data is still exactly as cached in the UserInfoList when I first created and visited my personal site. As of now I don't know any fix for this issue.

[UPDATE] A list of things to check, not all of which apply to SP2010 though: Troubleshooting User Profile Sync issues in Microsoft Office SharePoint Server 2007

As it turns out, all our personal sites get the "ProfileSynchronizationInternalException: ProfSynch: The site with ID <guid> cannot be synchronized due to an unprovisioned root web" error in the ULS. This seems to be a common problem in SharePoint 2010 according to this MSDN forum thread, which also provides an unsupported workaround that updates the Flags column of the Webs table in the My Site Host content database.

Wednesday, August 10, 2011

Some Gotchas when Customizing the "My Content" Personal Site

Customizing an existing SharePoint 2010 site definition, such as the personal site (SPSPERS) that provides the "My Content" section in the My Site Host web-application, is a bit different from customizing your own site definitions. As the supported way of customizing existing site definitions is to use feature stapling, you need to consider the provisioning order of elements in onet.xml and of referenced and stapled 'SPSite' and 'SPWeb' features. Failing to do so might give strange results when creating a new site.

The MCS Norway team has done a good job of documenting the SharePoint element provisioning order, as part of their SiteConfigurator available at CodePlex:
"There are several steps in the creation process and SharePoint provisions in the following order:
  1. Global onet.xml This file defines list templates for hidden lists, list base types, a default definition configuration, and modules that apply globally to the deployment.
  2. SPSite scoped features defined in site definitions onet.xml, in the order they are defined in the file. The onet.xml file defined in the site definition can define navigational areas, list templates, document templates, configurations, modules, components, and server e-mail footers used in the site definition to which it corresponds.
  3. SPSite scoped stapled features, in quasi random order
  4. SPWeb scoped features defined in onet.xml, in the order they are defined in the file.
  5. SPWeb scoped stapled features, in quasi random order
  6. List instances defined in onet.xml
  7. Modules defined in onet.xml
This is a fairly complex process and it can often be hard to know the method for customizing a site definition. A solution can be right in one scenario and completely wrong in another, making this somewhat confusing."
Here are some gotchas related to SPSPERS customization:
  • The doc-libs Shared Documents and Personal Documents do not exist yet during feature stapling; list instances in onet.xml are provisioned in step 6.
  • The my content home page default.aspx does not exist yet during feature stapling; files and pages in onet.xml are provisioned in step 7.
  • Do not create your own customized default.aspx file, it will get filled with the standard web-parts when the file's AllUsersWebParts are provisioned by the onet.xml module in step 7.
  • The quick launch heading node titles are not yet localized during site provisioning; look them up by their id rather than their title when adding links.
  • The standard BlogView web-part only works when in a site page in the site root, it won't work in pages stored in lists, doc-libs or custom folders.
  • The Wiki Page Home Page feature is not activated by default for personal sites; do not provision your own /SitePages/ custom list, doc-lib or folder, as this will prevent enabling wiki pages later on.
To customize the standard personal site home page, you must provision a new home page with a different name and change the site's home page setting (SPFolder rootFolder.WelcomePage). Remember to restore the standard setting when deactivating your customization feature.
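A minimal sketch of changing the home page setting from a feature receiver is shown below; the class name and the page path "SitePages/PzlHome.aspx" are invented examples:

```csharp
// Sketch: point the site's welcome page at a custom provisioned home page.
using Microsoft.SharePoint;

public class PzlHomePageFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPWeb web = (SPWeb)properties.Feature.Parent;
        SPFolder rootFolder = web.RootFolder;
        rootFolder.WelcomePage = "SitePages/PzlHome.aspx"; // site-relative URL
        rootFolder.Update();
    }
}
```

In FeatureDeactivating, set WelcomePage back to "default.aspx" to restore the standard setting.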

I strongly recommend using or learning from the SiteConfigurator, download the feature and the source code from CodePlex and join the community there.

Wednesday, July 06, 2011

Problem creating a FAST Content SSA in SharePoint 2010

While installing FAST Search Server 2010 for SharePoint (FS4SP) on a dev farm today, I ran into a problem with the provisioning of a new FAST Content SSA (Search Service Application): it would hang forever at "0:01 Configuring the Search Service..." waiting for the TopologyConfigFinish.aspx page to complete.

The problem turned out to be that the SharePoint 2010 Administration service wasn't started after the mandatory server reboot after installing FS4SP. The FAST "nctrl status" cmdlet does not check this. Make sure that both the SP2010 Administration and Timer services are running:

If you still can't create new or delete search service application instances, or make topology changes at all, then you might need to delete the old SSA the hard way. See Deleting the search service application and How to delete orphan configuration objects from SharePoint farm. Heed this warning: "Please be VERY careful when executing the deleteconfigurationobject command, if this command is not used in the correct way (if you end up deleting the wrong object) there is NO way to revert back the changes and it has the potential to render your Configuration Database useless, hence you may require to restore / rebuild your SharePoint farm".

Remember to configure SSL enabled communication again when recreating the FAST Content SSA, otherwise your next crawl will be stuck on starting while retrying every 60 seconds to connect to the document engine. Also remember to restart the FAST Search for SharePoint and the SharePoint Server Search 14 services before starting a new full crawl.

Friday, June 24, 2011

Delay Loading of Data in SharePoint 2010 Web Parts

Sometimes your web-parts may take a long time to load their data, e.g. when connecting to external data through BCS, doing an SPSiteDataQuery across a large number of sites, or when iterating over a user's site memberships to read items from lists in different site-collections. Put a few such web-parts on a dashboard page, and the combined load time of all those web-parts must complete before the page is shown. Not a nice user experience. These days users expect something as shown in this short screencast:

If you've used ASP.NET Ajax UpdatePanels, you might wish to get the asynchronous partial page update experience seen on postbacks also during page load. The simple approach seems to be calling __doPostBack for each UpdatePanel from the page load JavaScript event to trigger the Ajax async partial postback. That won't work, as ASP.NET Ajax allows only one concurrent postback, so only one of your web-parts will work as expected; the other __doPostBack calls will get canceled by the ScriptManager.

A simple solution to this problem is to put an asp:Timer control inside the UpdatePanel and let it trigger a postback to your web-part code. Then load the data and update the content of the UpdatePanel during this async Ajax postback.

Here is the code for two base classes that implement this delayed load approach:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Puzzlepart.SharePoint.WebParts
{
    public class AjaxPanelWebPart : System.Web.UI.WebControls.WebParts.WebPart
    {
        protected UpdatePanel AjaxPanel;

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            EnsureChildControls();
        }

        protected override void CreateChildControls()
        {
            AjaxPanel = new UpdatePanel()
            {
                ID = this.ID + "UpdatePanel1",
                UpdateMode = UpdatePanelUpdateMode.Conditional
            };
            Controls.Add(AjaxPanel);
        }

        // override in derived web-parts: fetch data and update the AjaxPanel content
        protected virtual void ApplyUserActions() { }

        // call when viewstate is off, to rebind data on full (non-Ajax) postbacks
        protected void RebindControlsWhenNoViewState()
        {
            if (Page.IsPostBack &&
                ScriptManager.GetCurrent(Page).IsInAsyncPostBack == false)
                ApplyUserActions();
        }
    }

    public class AjaxPanelDelayedLoadWebPart : AjaxPanelWebPart
    {
        protected Timer LoadTimer;

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            CreateLoadTimer();
        }

        private void CreateLoadTimer()
        {
            LoadTimer = new Timer()
            {
                ID = this.ID + "LoadTimer1",
                Interval = 1 // millisecond: tick right after the page has loaded
            };
            LoadTimer.Tick += new EventHandler<EventArgs>(LoadTimer_Tick);
            AjaxPanel.ContentTemplateContainer.Controls.Add(LoadTimer);
        }

        protected void LoadTimer_Tick(object sender, EventArgs e)
        {
            LoadTimer.Enabled = false; // one-shot timer
            try
            {
                ApplyUserActions();
            }
            catch (Exception ex)
            {
                AjaxPanel.ContentTemplateContainer.Controls.Add(new Label()
                {
                    Text = "An error occurred in delayed load: " + ex.Message,
                    ToolTip = ex.ToString()
                });
            }
        }
    }
}

The ApplyUserActions method is where you should fetch your data and update the content of the AjaxPanel member control. All the controls of your web-part should be created as usual, remember to call base.CreateChildControls in your derived web-parts to ensure that the Ajax controls get created.

Note that no slow data should be fetched and bound to your web-part controls during page load, e.g. in the CreateChildControls or OnPreRender methods, as this defeats the purpose of delay loading the data in the ApplyUserActions async postback method. A typical scenario is creating and configuring an SPGridView control in CreateChildControls, and then fetching the data, setting the grid's DataSource and calling the grid's DataBind method in the overridden ApplyUserActions method of your derived web-part.
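As a hedged sketch of that scenario, a derived web-part could look like this; the list name "Tasks" and the Title column are example placeholders:

```csharp
// Sketch: create and configure the grid up front, but defer the slow
// data fetch to ApplyUserActions so it runs in the delayed async postback.
using System.Data;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;

public class SlowTasksWebPart : AjaxPanelDelayedLoadWebPart
{
    private SPGridView _grid;

    protected override void CreateChildControls()
    {
        base.CreateChildControls(); // creates the AjaxPanel and LoadTimer
        _grid = new SPGridView { AutoGenerateColumns = false };
        _grid.Columns.Add(new SPBoundField { DataField = "Title", HeaderText = "Title" });
        AjaxPanel.ContentTemplateContainer.Controls.Add(_grid);
    }

    protected override void ApplyUserActions()
    {
        // the slow part runs here, during the async timer postback
        DataTable data = SPContext.Current.Web.Lists["Tasks"].Items.GetDataTable();
        _grid.DataSource = data;
        _grid.DataBind();
    }
}
```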

Note that all page event code gets executed on partial postbacks for all web-parts on the page. This can cause problems that are unrelated to the web-part that triggers the postback, manifested as ScriptResource.axd JavaScript errors. Some problems are related to viewstate handling, such as the "Error=Value cannot be null. Parameter name: container" SPGridView exception. The simple solution is to turn off viewstate and then call the RebindControlsWhenNoViewState method to load and bind the data when the postback is not an async Ajax postback. This must also be done for all controls that do not use viewstate, otherwise they will end up empty after e.g. modal dialogs that reload the page on close.

This ASP.NET Timer approach allows the page to load quickly, then each web-part will in turn get the timer tick postback and update itself using ASP.NET Ajax partial page updates. Note that this code won’t work as a sandboxed web-part. The UpdatePanel control requires the ScriptManager, which isn’t accessible from the sandboxed worker process.

The more professional way of getting real asynchronous loading of web-part content is to use PageAsyncTask, as shown in Chapter 9 of Wictor Wilén's excellent SharePoint 2010 Web Parts in Action book. It requires a bit more code, but allows parallel data fetching and thus faster page load times. It also works without any UpdatePanels, as everything is done server-side.

Thursday, May 19, 2011

New Sites and the SharePoint 2010 Content Type Hub

The SharePoint 2010 content type hub does quite a good job of managing and publishing a centrally controlled set of content types. There are a few quirks and limitations, some of them documented in Content Type Hub FAQ and Limitations by Chaks' SharePoint Corner; not to forget the content type publishing timer jobs that actually push the content types to the subscribers.

One of the less documented areas of using a content type hub (HUB) is what happens to newly provisioned site-collections. What if I have list definitions in my features, how can I be sure that their referenced content types have been provisioned at feature activation time? Can I deploy my enterprise content types feature both at the hub site-collection and at new site-collections that users create?

First, when a new site-collection is created, it will immediately have all the published content types from the parent SharePoint web-application's connected Managed Metadata Service (MMS) application's defined HUB, automatically provisioned into its local content type gallery. Note that this applies only to the published content types as configured in the source hub. Content types that are not published, will not exist in your new site-collection. Note that hub content types are not by default published; this must be configured for every single content type in the source hub.

So if your list definitions depend on global content types that have not been published in the HUB, your feature activation will fail. You can of course solve this by publishing the applicable global content types in the source hub and run the timer jobs first, as this will ensure that new site-collections will have the enterprise content types auto-provisioned from the MMS HUB.

However, you can also deploy your enterprise content types feature to both the content type hub and to any other site-collection that you create. This works fine as the site content type definitions are identical, including the content type ID structure - after all it is the same content type CAML feature. This won't affect subscribing to the content type hub, and publishing new, updated or derived content types from the hub works just as expected.

Activate your site content type feature before activating your list definition feature, or any other feature that depends on the site content types being provisioned, to ensure that they exist locally in the new site-collection even if not yet published in the HUB.
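One hedged way to make that ordering explicit is a feature activation dependency in the list definition feature's feature.xml; both GUIDs and titles below are invented placeholders:

```xml
<!-- Placeholder IDs: the list feature declares that it depends on
     the enterprise content types feature being active first -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE"
         Title="Puzzlepart List Definitions"
         Scope="Web">
  <ActivationDependencies>
    <ActivationDependency FeatureId="11111111-2222-3333-4444-555555555555" />
  </ActivationDependencies>
</Feature>
```

As far as I know, a visible dependency like this only blocks activation with an error when the content type feature is not active; it will not auto-activate it.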

As your taxonomy is subject to change, so are your enterprise content types. Thus, your deployment strategy for enterprise content types needs to handle change. I strongly recommend using the Open-Closed Principle for modifying and extending the enterprise content types. The Open-Closed Principle is based on using a set of immutable base content types that you derive from to make new specialized content types, inheriting fields from the base. The immutable base of the Open-Closed Principle coincides nicely with provisioning global content types through both a feature and the content type hub, as by policy any changes are made by extending the former through the latter.

Even trivial stuff such as providing standardized company templates for Word and other Office applications, is best done by publishing new derived content types. Use the content type hub to inherit your base PzlDocument into PzlDocumentMemo and attach a template, go to "Manage publishing for this content type" to publish the content type. Wait for, or run, the two HUB timer jobs, and then add the Word template to the applicable document libraries.

Now you're in for a surprise later on. The next time you try to create a new site-collection after publishing modifications in the HUB, you might get this "content type is read only" error:

The ULS log typically contains an exception like this:

Error code: -2146232832
The content type is read only or updateChildren is true and one of the child objects of the content type is read only.

The root cause for this is that published content types are by default read-only in the subscribers. What typically leads to this error is the need to use code when provisioning content types, e.g. when renaming and reordering fields, or when adding the enterprise keywords field to your content type. Another typical scenario where code is required is managed metadata fields; see How to provision SharePoint 2010 Managed Metadata columns by Wictor Wilén.

Making changes to the site content type definition in the FeatureActivated code and then calling SPContentType Update with updateChildren=true will work fine, until someone creates a new derived content type in the source hub and publishes it. Your carefully tested code will suddenly crash, as the published child content type is read-only! Alas, what better proof that the deployed and the published global content types are the same?

Luckily, the change is isolated to the new inherited content type, thus it can safely be ignored when deploying the base content types. Use this overload of the Update method when modifying the global content types, passing true for updateChildren and false for throwOnSealedOrReadOnly:

public void Update(
         bool updateChildren,
         bool throwOnSealedOrReadOnly
)

The HUB change did not affect your global content type due to using the Open-Closed governance policy for enterprise content types. See my SharePoint 2010 Open-Closed Taxonomy post to learn more about this recommended policy.

The content type hub and the Managed Metadata Service are perhaps the best new features in SharePoint 2010, but there are still some uncharted areas that make developers reluctant to use the MMS HUB. There are a lot of articles on Technet and MSDN about the architecture, but way too little about deployment scenarios and issues such as those in this post.

Saturday, May 07, 2011

Site Lifecycle Management using Retention Policies

The ootb governance tools for site lifecycle management (SLM) in SharePoint 2010 have not improved from the previous version. You're still stuck with the Site Use Confirmation and Deletion policies that will just periodically e-mail site owners and ask them to confirm that their site is still in use. There is no check for the site or its content actually being used, it is just a dumb timer job. If the site is not confirmed as still being active, the site will then be deleted - even if it is still in use. As deleting a site is not covered by any SharePoint recycle bin mechanism (coming in SP1), Microsoft also provides the site deletion capture tool on CodePlex.

Wouldn't it be nice if we could apply the information management policies for retention and disposition of content also for SharePoint 2010 sites? Yes we can :) By using a content type to identify and keep metadata for a site, the standard information management policies for content expiration can be configured to implement a recurring multistage retention policy for site disposition.

Create a site information content type and bind it to a list or library in your site definition, and ensure that this list contains one SiteInfo item with the metadata of the site. Typical metadata are site created date, site contact, site type, cost center, unit and department, is restricted site flag, last review date, next review date, and last update timestamp. Restrict edit permissions for this list to just site owners or admins.

Enable retention for the SiteInfo content type to configure your site lifecycle management policy as defined in your governance plan.

Add one or more retention stages for the SiteInfo content type as needed by your SLM policy. You will typically have a first stage that will start a workflow to notify the site owner of site expiration and ask for disposition confirmation. Make sure that the site owner knows about and enacts on your defined governance policies for manual information management, such as sending valuable documents to records management. Then there will be a second stage for performing the site disposition steps triggered by the confirmation.

You can also implement custom information management policy expiration formula or expiration action for use when configuring your retention policy. You typically do this when your policy requires retention events that are not based on date fields only. See Sahil Malik's Authoring custom expiration policies and actions in SharePoint 2007 which is still valid for SharePoint 2010.

Use a custom workflow or custom expiration action to implement the site disposition steps: user removal, automated content clean-up and archiving, and finally triggering deletion of the site. Whether the site is automatically deleted by a custom workflow, marked for deletion to be processed by a custom timer job, or a custom action just sends an e-mail to the site-admin, is up to your SLM policy.

If you need to keep the site in a passive state for e.g. 6 months before deleting it, you can use a delegate control in your site master pages to prevent access to passive sites, or you can move the site to an archive web-app that uses a "deny write" / "deny all" access policy to prevent access. Note that the former is not real security, just content targeting for the site. The latter is real security, as "deny" web-app policies override site specific access rights granted to SharePoint groups and users. This allows for keeping the site users and groups "as-is" in case the site can be reactivated again according to your SLM policies. If site owners should be able to do housekeeping on a site while it is passive, then grant them access by creating extra "steward" accounts that are not subject to being denied access.

I recommend removing all users from the default site members group before deleting the site, otherwise the site will not be deleted from the site memberships list in the user's my site.

The astute reader may wonder how the content type retention policy knows if the site is actually in use. The answer is quite simple; each SPWeb object provides a LastItemModifiedDate property. This timestamp is also stored in the SharePoint property bag. Use a delegate control in your site's master page to check and push the timestamp to a date-time field in the SiteInfo item, so that the retention policy can trigger on it. Remember to use SystemUpdate when updating the SiteInfo item, otherwise you will change the site's LastItemModifiedDate to now. You can also use a custom expiration formula that inspects the last modified timestamp for the site when the information management policy timer job runs.
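A hedged sketch of the timestamp push, e.g. called from the delegate control; the list name "SiteInfo" and the field name "PzlLastUpdate" are invented assumptions:

```csharp
// Sketch: copy SPWeb.LastItemModifiedDate into the SiteInfo item
// so the retention policy can trigger on it.
using System;
using Microsoft.SharePoint;

public static class SiteInfoUpdater
{
    public static void PushLastModified(SPWeb web)
    {
        DateTime lastModified = web.LastItemModifiedDate;
        SPListItem info = web.Lists["SiteInfo"].Items[0];
        if (!lastModified.Equals(info["PzlLastUpdate"]))
        {
            info["PzlLastUpdate"] = lastModified;
            // SystemUpdate avoids bumping the site's LastItemModifiedDate
            // and does not create a new item version
            info.SystemUpdate(false);
        }
    }
}
```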

We also use the site information content type in our Puzzlepart projects to provide a search-driven site directory. It is quite simple to make a nicely categorized and searchable site catalog by using one or more customized search results web-parts. This search-driven catalog can of course be sorted by the 'write' managed property, which must be mapped to the crawled property that contains the LastItemModifiedDate of a site.
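If you prefer code over the search results web-parts, the same catalog query can be issued from the search object model. This is a sketch; the content type name "SiteInfo" and the site URL are assumptions:

```csharp
// Hedged sketch: query all SiteInfo items and sort them on the 'write'
// managed property (mapped to the site's LastItemModifiedDate).
using Microsoft.SharePoint;
using Microsoft.Office.Server.Search.Query;

using (SPSite site = new SPSite("http://intranet"))
{
    KeywordQuery query = new KeywordQuery(site);
    query.QueryText = "ContentType:\"SiteInfo\"";
    query.ResultTypes = ResultType.RelevantResults;
    query.SortList.Add("write", SortDirection.Descending);
    ResultTableCollection results = query.Execute();
}
```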

Using a search-driven approach makes a classic site directory list unnecessary. The site metadata is simply stored in a list within each site, managed by the respective site owners. This makes it far more likely that the site metadata is kept up-to-date, rather than going stale in a central site directory list that no one maintains.

I hope this post has given you some new ideas on how to store, manage and use site metadata, both for site lifecycle management and for providing a relevant search-driven site directory.

Friday, April 29, 2011

Using SP2010 BCS Resource Files for BDC Model Settings

You've probably seen way too many Business Connectivity Services (BCS) demos using SharePoint Designer (SPD), showing how simple it is to connect to a SQL Server 2008 database and automagically create external content types and operations, with external lists that can be used both to display and update external data.

Have you ever wondered how to manage those data source connection settings across multiple SharePoint 2010 farms? How do you change the SQL Server name and other login information when deploying to your test and staging farm, and then to your production farm, without using SPD again? Even good BCS books such as Professional Business Connectivity Services in SharePoint 2010 have very little coverage of BCS external system settings in Central Admin and of how to actually use BDC resource files. There is a nice end-to-end overview in the Migrating Business Connectivity Services External Content Types in SharePoint 2010 article on MSDN, but it also lacks some of these details.

Data source connection settings are stored as External System properties that can be configured from Central Admin by managing the BDC service application. This can be a bit confusing, as you might run into the "there are no configurable properties" message when trying to manage Settings for an External System. The trick is to remember that these settings are not for the external system per se, but for a specific external system instance, aka connection. Choose the External Systems view in the ribbon, click the link of the applicable external system to see its instances, then select the instance and finally click Settings in the ribbon. Change the connection properties and click OK.

In the example data connection I've used the Secure Store Service (SSS) application for the login information because the target database requires SQL Server authentication instead of passthrough integrated Windows authentication.

Instead of manually changing these External System settings whenever deploying a new version across your development, testing, staging and production farms, you can use BDC resource files to apply the settings. This is done by exporting and importing resource files, either using Central Admin, code or Powershell.
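The export/import round-trip can be scripted with the BDC PowerShell cmdlets. This is a hedged sketch: the model name, service context URL and file paths are examples, and the exact switch combinations may need adjusting to your cmdlet version:

```powershell
# Hedged sketch: export BDC resources to a .bdcr file, then merge the edited
# file back into the stored model on the target farm.
$model = Get-SPBusinessDataCatalogMetadataObject -BdcObjectType Model `
    -Name "PuzzlepartModel" -ServiceContext "http://intranet"

# Export only the properties (e.g. connection settings) as resources
Export-SPBusinessDataCatalogModel -Identity $model `
    -Path "C:\Deploy\PuzzlepartModel.bdcr" -PropertiesIncluded

# After editing the file per target farm, apply it to the stored model
Import-SPBusinessDataCatalogModel -Path "C:\Deploy\PuzzlepartModel.bdcr" `
    -ServiceContext "http://intranet" -PropertiesIncluded
```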

Prototype and test your BCS solution on your development farm first, then package your solution into a feature as explained in How to: Deploy a Declarative BDC Model with a Feature on MSDN. Remember to change the model name, the external system name and the entity namespace from your SPD prototype into durable names, or at least bump the entity version, when packaging your BDC model.

The declarative BDC model is really just some XML stored in a BDCM file. In addition to the BDC model file, SharePoint 2010 also supports using BDC resource files for specific metadata elements that commonly change, such as SQL Server connection configuration. A resource file is really just some XML stored in a BDCR file, that can be merged with the stored model without deleting the existing model and its configuration from the BCS metadata store.

The simplest way to get a BDCR file to start from is to export the BDC resources using Central Admin. Choose the BDC Models view in the ribbon, then select the applicable model and finally click Export in the ribbon. Set the file type to Resource, select which resources to include in the exported BDCR file, typically properties, and click Export to save the selected set of resources.

Edit the resource XML to change e.g. the name of the SQL Server (RdbConnection Data Source) and save your changes, using one file per BDC model and target farm. Only the settings that should be updated when applying the resource file need to be present in the file.

The edited BDC resource file can now be applied to the applicable BDC model in one of your farms, typically when deploying a new version of a model to the staging or production farm. Choose the BDC Models view in the ribbon, then select the applicable model and finally click Import in the ribbon. Click Browse to select the applicable resource file and set the file type to Resource. Then select which resources to import from the BDCR file, typically properties, and click Import to load the selected set of resources.

Validate that the correct settings were imported by reviewing the import log warnings, and by reviewing e.g. the settings for your external system instance or the permissions for the BDC model and its external content types.

See How to: Use a Resource File to Specify Localized Names, Properties, and Permissions on MSDN to include an edited BDC resource file in a feature in your Visual Studio 2010 package. Note that BDC models created with SPD cannot be exported from CA as they are not complete. Such declarative BDC models must be exported from SPD.

Thursday, April 28, 2011

Using Dynamic Stored Procedures in BCS Finders

We use quite a lot of Business Connectivity Services (BCS) at my current SharePoint 2010 project both for traditional integration of data from external systems into web-parts and lists, and also for crawling external systems for integrating those system using search and search-driven web-parts.

One of our integration partners prefers to provide their integration points as SQL Server 2008 stored procedures, which is very well supported by BCS. BCS supports both stored procedures and table valued functions, called "routines" in SharePoint Designer (SPD). SharePoint Designer is dependent on being able to extract metadata about the returned table data set when adding External Content Type (ECT) operations or when using the Operations Design View.

Alas, the provided integration sprocs used dynamic SQL statements, and for technical reasons these could not be rewritten as inline SQL SELECT statements. As always, this is a problem for tooling such as SPD, since no result set metadata can be discovered. When connecting SPD to the external system, I got no fields in the Data Source Elements panel of the Read List operation's Return Parameter Configuration. Instead I got three errors and a warning.

The workaround is quite simple and requires the use of a SQL Server table variable, which defines the result set and allows SPD to discover the table metadata. Rewrite the stored procedure by declaring a table variable, insert the result of the dynamic SQL statement into the variable, and finally return the result set by reading the table variable. The key changes are the table variable declaration, the INSERT...EXEC and the final SELECT in this abbreviated example:

CREATE PROCEDURE [dbo].[GetFavorites]
AS
BEGIN
    DECLARE @DbName AS NVARCHAR(max) = 'ARISModellering1'
    DECLARE @ObjDef AS NVARCHAR(max) = dbo.GetArisTableName(@DbName, 'ObjDef')
    DECLARE @Model AS NVARCHAR(max) = dbo.GetArisTableName(@DbName, 'Model')

    -- @FavoriteSQL holds the dynamic SQL statement (excerpt only):
    --   , ModelType.ModelTypeName AS FavoriteType
    --   , 'http://puzzlepart/index.jsp?ExportName=ARISModellering&modelGUID='
    --   INNER JOIN ModelType ON m.TypeNum = ModelType.ModelTypeId

    -- Table variable that defines the result set, so SPD can discover the metadata
    DECLARE @userFavs TABLE(FavoriteId int not null, FavoriteName nvarchar(max),
        FavoriteType nvarchar(max), UserName nvarchar(max), FavoriteUrl nvarchar(max))

    INSERT @userFavs EXEC sp_executesql @FavoriteSQL
    SELECT * FROM @userFavs
END

Refreshing the external system data connection and then creating the ECT read list operation now works fine, and all the return type errors and warnings are gone.

Note that the classic #temp table workaround won't work with SPD; you have to use a table variable in your stored procedure. The sproc will now use more memory, so the BCS best practice of keeping finder result sets small applies.

The table variable declaration is also a good place to make sure that the identifier column is "not null" and uses a supported BCS data type such as "int32". External lists cannot be created from an ECT whose identifier field has an unsupported type, such as SQL Server "bigint", so I strongly recommend using a supported identifier data type right from the start. Getting the ECT identifier wrong in the BCS model will give you problems later on when using SharePoint Designer.

Wednesday, April 20, 2011

BCS External Lists causes SharePoint 0x80131600 exception for SPSiteDataQuery

Issue: Your SharePoint 2010 code uses the SPSiteDataQuery or CrossListQuery and gets an exception with code <nativehr> 0x80131600 and absolutely no other helpful details.

Cause: External list referencing an external content type that has been deleted from the BDC metadata store.

Solution: Delete the applicable external lists from your sites - or recover the deleted external content type.

A related error code is 0x8102003, which is caused by missing list definitions in activated features.

Friday, April 01, 2011

Installing SharePoint SPSF on Visual Studio 2010 SP1

In preparing for the 2011 Arctic SharePoint Challenge next week with my awesome Puzzlepart team, we're installing the SharePoint Software Factory (SPSF) tooling available at CodePlex. There is a nice prerequisite installer that helps you download and install the Guidance Automation Extensions and Toolkit, but it would not install the guidance packages properly on my Visual Studio 2010 SP1.

Installing GAT2010.vsix failed with missing "Visual Studio 2010 SDK", so I downloaded that and got the "you must have Visual Studio 2010 installed" error instead. As it turns out, you will of course need "Visual Studio 2010 SP1 SDK" (download).

Now I've got the SPSF project types in the Guidance Packages section of File > New Project and am ready for #ASC2011.

Tuesday, March 15, 2011

SharePoint News Feed Formatting of ActivityEvent

It is quite easy to get the news feed of activities from your colleagues, and for your interests and skills, in SharePoint 2010. It is not, however, that simple to format each event for display in your own web-part using the activity feed object model.

Sure, the data of the different activity types is all there in the ActivityEvent object, and you can get the ActivityTemplate based on the ActivityType of the event. But then you need to process the display template tags to merge in the event values, or the event XML from the TemplateVariable string property, using the SimpleTemplateFormat and ActivityTemplateVariable classes. See the Fun and Games with the ActivityEvent post by Toby Statham to get you started.

Luckily, the activity feed is based on the web syndication model, so you can simply create a SyndicationItem object based on the activity event, and it will find and process the activity template for you:

private Panel CreateFeedEventPanel(ActivityEvent activity)
{
    Panel panel = new Panel()
    {
        CssClass = "MyProfileActivityFeedEventPanel"
    };

    // Access the LinksList property in order to fully populate the ActivityEvent
    List<Link> temp = activity.LinksList;

    string picture = activity.Publisher.Picture;
    picture = string.IsNullOrEmpty(picture) ? "/_layouts/images/O14_person_placeHolder_32.png" : picture;

    Image publisherImage = new Image()
    {
        ImageUrl = picture,
        AlternateText = activity.Publisher.Name,
        CssClass = "MyProfileActivityFeedEventPublisher"
    };
    panel.Controls.Add(publisherImage);

    // The syndication item finds and applies the activity template for us
    SyndicationItem syndicationItem = activity.CreateSyndicationItem(_activityManager.ActivityTypes, ContentType.Html);
    panel.Controls.Add(new LiteralControl() { Text = syndicationItem.Summary.Text });

    return panel;
}

private void PopulateNewsFeedActivityList(bool useTodayOnly)
{
    string url = SPContext.Current.Site.Url;
    using (SPSite site = new SPSite(url))
    {
        SPServiceContext context = SPServiceContext.GetContext(site);
        UserProfileManager profileManager = new UserProfileManager(context);
        SPUser user = SPContext.Current.Web.CurrentUser;
        UserProfile userProfile = profileManager.GetUserProfile(user.LoginName);
        _activityManager = new ActivityManager(userProfile, context);

        if (useTodayOnly)
        {
            DateTime todayFilter = DateTime.Now.Date;
            _activityList = _activityManager.GetActivitiesForMe(todayFilter);
        }
        else
        {
            _activityList = _activityManager.GetActivitiesForMe(MaxItems);
        }
    }
}

The formatted HTML will be the same as rendered by the NewsFeedWebPartBase class, except for the profile picture size and some missing timestamps for some event types. Use Reflector on the news feed web-part base class to see the code for mitigating such details.

The code for getting activity events for a user and other SharePoint 2010 social computing "how-tos" can be found in the User Profiles and Social Data section at MSDN.

Wednesday, February 23, 2011

Help Content Cannot Be Displayed in SharePoint 2010

Today we had a weird error on our SharePoint 2010 production farm: clicking on help got the "help content cannot be displayed" error for all normal sites, even though it worked perfectly well in Central Admin. The same applied to Site Settings>Help Settings for the site-collection, it worked in Central Admin, but not in any other site. In addition, the 'SharePoint Foundation Search' service was running on one WFE server.

First I checked all the settings in KB939313, but that did not fix the problem. Then I checked the log files and found this access denied error for our site's app-pool account:

SqlError: 'The EXECUTE permission was denied on the object 'proc_EnumResourcesAtScope', database 'SharePoint_AdminContent_ABBAef34-7603-4da5-823a-43ee1327ABBA', schema 'dbo'.'

Before embarking on changing any database rights, we decided to test with an English site just in case, as all our custom site definitions are in Norwegian. Lo and behold - help worked for the new team-site; and what's more, suddenly help was working for all our existing Norwegian LCID 1044 sites also. Go figure...

[UPDATE] See the comments for tips on granting execute rights on the sprocs listed in the ULS to fix this problem once and for all - even beyond IISRESET.
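As an illustration of that fix, granting execute rights on one of the stored procedures listed in the ULS could look like this. The database name is taken from the error above; the app-pool account name is an example:

```sql
-- Hedged sketch: grant the site's app-pool account execute rights on the
-- sproc named in the ULS access denied error.
USE [SharePoint_AdminContent_ABBAef34-7603-4da5-823a-43ee1327ABBA]
GO
GRANT EXECUTE ON OBJECT::dbo.proc_EnumResourcesAtScope TO [DOMAIN\SiteAppPoolAccount]
GO
```

Note that modifying SharePoint databases directly is generally unsupported, so weigh this against the workaround described above.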

Monday, February 21, 2011

Starting Term Store Management in SharePoint 2010

If you can't get any edit or management popup menus, such as adding a term group, to show in the SharePoint 2010 Managed Metadata Service application Term Store Management Tool, check that:
  • Internet Explorer is started with "Run as administrator"
  • You have added your taxonomy managers to the "Term Store Administrators" for the MMS root node

This is required even if you are an administrator of the MMS application itself and you have full control MMS connection permissions.

So then you're ready to realize your ingenious taxonomy for classifying and organizing your knowledge with managed metadata and content types.

Friday, January 14, 2011

Arctic SharePoint Challenge 2011

The lynx is out of the bag: Get down with Arctic SharePoint Challenge at http://pzl.no/asc2011

Twitter: @spchallenge