SVG templates for data bound graphics

One of the most frequent questions I get about the HTML5 PivotViewer is how to create data bound HTML templates styled with CSS. Invariably the answer is that it cannot be done, due to the nature of the HTML5 canvas element. However, there is an alternative.

Early on when building the HTML5 PivotViewer I was faced with a technical decision: how was I going to display and animate up to several thousand images at once without affecting performance, especially on a mobile device? Ultimately there were three choices: plain ol’ HTML (like Isotope), SVG (like D3) or the canvas. In the end I chose the canvas, which provided the best performance and support for raster images (Deep Zoom for backwards compatibility and large image support).

While the canvas element was the best choice, it presents a problem: how to easily create and style a template that can then be converted into a raster image using only JavaScript. The solution I’m going to demonstrate uses SVG.

What is SVG?

If you’re not familiar with SVG, it’s a vector image format defined with XML. This means that it’s possible to create vector images with just a text editor! If you don’t find generating images from text very intuitive, there are also plenty of tools available for creating SVG images; I’d recommend Inkscape as a solid and free choice.

One of the huge benefits of using SVG is that it’s just XML, and therefore possible to generate an SVG image using code. With a little jQuery it’s as simple as:

$('#svgContainer').append(
  '<svg xmlns="http://www.w3.org/2000/svg">' + 
  '<circle cx="50" cy="50" r="25" stroke="none" />' +
  '</svg>'
);

SVG templates with ICanHaz.js

While this is fine for simple examples, creating elements using string concatenation is messy and difficult to maintain. Instead I like to use a little library called ICanHaz.js, which uses Mustache syntax for binding data to a template. I’ve been using ICanHaz.js for a while now to create templates for all of the PivotViewer UI. It provides a much cleaner separation of the UI from the code and has a simple syntax. To demonstrate I’ve got two examples.

For the first example I’m going to create a simple SVG chart to which I can bind my chart data. The SVG template with Mustache tags is below:

<script type="text/html" id="svgTemplate">
  <svg xmlns="http://www.w3.org/2000/svg">
    <g id="chart">
      <path d="m10 10 V10 210 H10 210" stroke="#000" stroke-width="1px" fill="none"/>
      <text x="220" y="20" fill="black">Legend</text>
      {{#series}}
      <text x="220" y="{{ypos}}" fill="black">{{name}}</text>
      <rect x="280" y="{{ypos}}" transform="translate(0,-13)" width="15" height="15" fill="{{colour}}" />
      {{/series}}
    </g>
    <g id="data">
      {{#series}}
      <g id="series-{{name}}" fill="{{colour}}">
        {{#data}}
        <circle cx="{{xAxis}}" cy="{{yAxis}}" r="2" stroke="none" />
        {{/data}}
      </g>
      {{/series}}
    </g>
  </svg>
</script>

Here I’ve got the SVG template wrapped in a SCRIPT tag with a type of text/html. There are a few SVG elements, but the real power is in the extra Mustache tags. There are two key tag types in the example: {{#data}} and {{/data}} define a repeating section, while tags like {{xAxis}} and {{yAxis}} correspond to properties in my data set.
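To make the repeating-section behaviour concrete, here’s a minimal, hand-rolled sketch of what a Mustache section does (renderSection is an illustrative helper, not part of Mustache or ICanHaz.js; the real libraries handle nesting, escaping and much more):

```javascript
// Sketch: {{#data}}...{{/data}} repeats its body once per item in the
// array, and each {{name}} tag is swapped for the matching property.
function renderSection(template, items) {
  return template.replace(/\{\{#data\}\}([\s\S]*?)\{\{\/data\}\}/g, function (m, body) {
    return items.map(function (item) {
      return body.replace(/\{\{(\w+)\}\}/g, function (t, key) {
        return item[key];
      });
    }).join('');
  });
}

var svg = renderSection(
  '{{#data}}<circle cx="{{xAxis}}" cy="{{yAxis}}" r="2" />{{/data}}',
  [{ xAxis: 125, yAxis: 60 }, { xAxis: 15, yAxis: 177 }]
);
// svg now contains one <circle> element per data point
```

This is exactly the shape of output that ICanHaz.js produces from the template above, just without any of the library’s conveniences.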

Now that I’ve defined the template I can use ICanHaz.js to grab it and apply the data in the chartData object.

$(document).ready(function() {
  var $container = $('#svgContainer');

  var chartData = {
    series: [
      {
        name: 'Series 1',
        colour: 'red',
        ypos: 50,
        data: [
          { xAxis: 125, yAxis: 60 },
          { xAxis: 15, yAxis: 177 },
          { xAxis: 33, yAxis: 105 }
        ]
      },
      {
        name: 'Series 2',
        colour: 'blue',
        ypos: 70,
        data: [
          { xAxis: 44, yAxis: 60 },
          { xAxis: 66, yAxis: 77 },
          { xAxis: 130, yAxis: 130 }
        ]
      }
    ]
  };

  var template = ich.svgTemplate(chartData);
  $container.append(template);
});

The chartData object contains two series, each containing an array called data (remember {{#data}} and {{/data}}) that holds each point’s xAxis and yAxis values. In the code above the key line is:

var template = ich.svgTemplate(chartData);

This is where the chartData is bound to the svgTemplate. If we run the example in a browser we should see the following:

SVG Chart

While this is a simple example, and the chart still needs a little work before I would be happy to put it into production, it does demonstrate the potential power and speed of templating with SVG.

SVG templates and the canvas

Once we’ve got a data bound template, the next step is to convert the XML in the SVG image into canvas draw methods. This time I’m going to rely on the excellent third-party library canvg to do the heavy lifting. Its constructor accepts the SVG and the id of a canvas element. It then parses the SVG and draws it onto the canvas – simple!

The following SVG template is a simplified version of the PASS Summit 2012 tiles I created for a previous collection, without the speaker images.

<svg xmlns="http://www.w3.org/2000/svg">
  <defs>
    <linearGradient id="grad1" x1="0%" y1="0%" x2="100%" y2="0%">
      <stop offset="0%" stop-color="#C3381B" />
      <stop offset="14%" stop-color="#F68D29" />
      <stop offset="28%" stop-color="#E1B524" />
      <stop offset="42%" stop-color="#87A63F" />
      <stop offset="56%" stop-color="#2A854F" />
      <stop offset="70%" stop-color="#266B8F" />
      <stop offset="100%" stop-color="#1C4161" />
    </linearGradient>
  </defs>
  <rect x="0" y="0" rx="10" ry="10" width="256" height="256" stroke="none" fill="{{Category}}" />
  <rect x="16" y="16" rx="10" ry="10" width="224" height="224" stroke="none" fill="#000" />
  <rect x="24" y="168" rx="10" ry="10" width="208" height="64" stroke="none" fill="#fff" />
  <circle cx="128" cy="100" r="40" stroke="url(#grad1)" stroke-width="30" fill="none"/>
  {{#Title}}
  <text x="30" y="{{ypos}}" style="font-family: arial; font-size: 11px;">{{Text}}</text>
  {{/Title}}
</svg>

I can then dynamically create a canvas element, use canvg to apply the data bound SVG to the canvas, cache the canvas element in an array and then remove the canvas from the DOM. In this example I’m using the JavaScript implementation of Mustache as I don’t need all the functionality that ICanHaz.js provides.

if (!this._items[item.Img]) {
  var borderColour = GetColour(item.Facets["Category"][0].Value);
  //attach a canvas to the DOM
  $('#pivotviewer')
    .after("<canvas id='" + item.Id + "' width='256' height='256'></canvas>");
  var data = {
    Category: borderColour,
    //TODO: implement word breaker.
    Title: [ { Text: item.Name, ypos: 190 } ]
  };
  //use Mustache to bind the data to the template
  var databound = Mustache.render(this.template, data);
  //use canvg to convert the SVG to canvas methods
  canvg(item.Id, databound);
  //cache the canvas
  var canvas = $('#' + item.Id);
  this._items[item.Img] = canvas[0];
  canvas.remove(); //once cached remove it from the DOM
}

BadReports

You can see it all in action here: http://pivot.lobsterpot.com.au/json.htm. This collection is based on a JSON data source, with the newly added Spatial type (see Locations for an interactive map).

PASS Summit 2012 – what’s new in Power View

This year’s PASS Summit is a bit different for me than previous years: not only have I been invited to sit at the bloggers’ table, but I’m also a speaker.

Yesterday there were some big announcements from Microsoft – there is a lot of really cool stuff coming out, especially Hekaton, which allows regular tables to be moved into an in-memory data structure for increased query performance – you can read more about that here.

What about Power View?

If you’ve read my recent 3 part series on Power View you would know that the conclusion I came to is that, under its current architecture, a HTML5 version of Power View is not yet possible – well, at least not easily.

Last year at the PASS Summit Amir Netz demonstrated Power View working on an iPad, and while no announcement has been made for a mobile version of Power View, for me the biggest news was that Power View now supports multidimensional data sources (as well as pie charts, which apparently was a highly requested feature). These new features are part of SQL Server 2012 SP1, and when I get around to installing it expect to see some more posts on that. Correction: Power View working with multidimensional sources is not part of SP1.

Being able to use multidimensional sources in a Power View report is going to make a huge difference to its acceptance. I know from my experience with clients that there are still a large number of traditional SSAS cubes out there, but aside from PerformancePoint there is no Microsoft solution for creating compelling ad-hoc reports. Coupled with Power View’s integration into Excel 2013, it’s now much easier to get your hands on Power View without having to go through the process of installing SharePoint.

So despite the current lack of a mobile solution, it’s clear to me that Power View as a product is slated to become the default reporting tool when working with SQL Server.

Power View – how it works: part 3

So far in this series we’ve explored some of the internals of Power View, how it communicates with Reporting Services, and how it’s possible to create our own service to mimic the SSRS web service. At the PASS Summit 2011 Microsoft demonstrated Power View working on various mobile devices, but over a year later all we currently have is a Silverlight version.

So in this final post I want to explore the possibility of creating a HTML5 version of Power View using the existing interfaces in order to simply replace the Silverlight version with a HTML5 one. To some this may seem like a strange thing to do, but I really believe the future of BI (especially mobile BI) is based on open web standards. In fact it was one of my main motivators in liberating PivotViewer from Silverlight.

However, it turns out there are quite a few challenges to overcome in order to render an .rdlx report using only JavaScript. This post covers just a few of the pieces of the puzzle that I’ve investigated; it is not the entire solution.

The first task is to extract the actual report definition from the rdlx zip file. To do this I found a really great library called zip.js that uses HTML5 web workers to enumerate and extract the contents of zip files. It’s actually a pretty impressive library, and allows for extraction from http, blob or string encoded zip files.

Reading the RDLX

// use a HttpReader to read the zip from a URL
zip.createReader(new zip.HttpReader('/Content/rdlx/Report1.rdlx.zip'), function (reader) {
  // get all entries from the zip
  reader.getEntries(function (entries) {
    if (entries.length) {
      for (var i = 0; i < entries.length; i++) {
        if (entries[i].filename.indexOf('.rdl') >= 0) {
          // get the rdl
          entries[i].getData(new zip.TextWriter(), function (text) {
            // text contains the entry data as a String
            console.log(text);
            // close the zip reader
            reader.close(function () {
              // onclose callback
            });
          }, function (current, total) {
            // onprogress callback
          });
        }
      }
    }
  });
}, function (error) {
  // onerror callback
  console.log(error);
});

In the example above I’m using a zip.js HttpReader object to grab my rdlx file from a URL. The created reader has a getEntries method that enumerates the zip file and returns its contents as an array of files. I’m then looking for files with ‘.rdl’ in their name and dumping the contents out to the console.

Next Steps

Once we’ve extracted the rdl file we can start parsing the XML to create the elements of our report in HTML. As you can imagine this is not a trivial task, and I’m not going to go into it in any detail. I did, however, discover that the updated schema for Power View reports hasn’t been published anywhere by Microsoft (http://schemas.microsoft.com/sqlserver/reporting/2011/01/reportdefinition) even though the SQL Server 2012 SSRS schema has been (http://schemas.microsoft.com/sqlserver/reporting/2010/01/reportdefinition/ReportDefinition.xsd). I’ve only spent a little time looking for it, so if anyone else finds it please let me know. Without the schema, creating a one-for-one copy of a Power View report is a little trickier, but still possible.

Regardless, let’s assume that parsing the rdl and building up the UI has been done; the next step is to start calling the report server web service with the RenderEdit command to grab the actual data. This could be done with a series of ajax calls, and the binary result could be parsed in JavaScript – but this would be horribly inefficient. JavaScript just isn’t built to handle data in that way.

Final Thoughts

While I believe a HTML5 version of Power View is possible, it would require a serious amount of effort to implement based on the current SSRS architecture. In its current incarnation the SSRS web service returns mostly binary data instead of web friendly XML or JSON formats, which could quite easily be consumed by client side code. In my opinion it’s a shame that PowerPivot and Power View are not more open and queryable.

The conclusion is that if Microsoft does release a HTML5 version of Power View some big changes are going to have to be made to the way the SSRS web service communicates.

Just been awarded as an Outstanding Volunteer!

Today I was surprised and honoured to find out that I have been awarded as an Outstanding Volunteer for helping out with PASS – the Professional Association for SQL Server.

I have to say that I really wasn’t expecting to be nominated; volunteering with PASS is something I do for fun and to help enrich the SQL Server community. Nominations for Outstanding Volunteer are done anonymously, but I wanted to say thanks to whomever it was. I also wanted to say thanks to Amy Lewis (twitter), who has been a huge help in getting the Australian/New Zealand BI VC events off the ground.

Power View – how it works: part 2

In the last post on Power View I had a bit of a look at how Power View communicates with Reporting Services. You can read more about that in Part 1.

Having a basic understanding of the inner workings of Power View allows us to appreciate all that is going on when a Power View report is requested. In addition, we are able to use that insight to enhance the Power View experience and even create solutions that make it better for our users.

Warm the Cache

For example, one thing that is lacking in a Power View .rdlx file is a caching mechanism like the one available to their .rdl cousins. By default the PowerPivot database in Analysis Services will get cleaned up after 48 hours of inactivity; the duration can be configured in Central Administration > General Application Settings > Configure service application settings > Disk Cache.

PowerPivot Disk Cache

But what if you’ve got a report that is accessed infrequently? Well, based on our knowledge of the Reporting Services web service, we could, for instance, write a PowerShell script to pre-load or warm the cache by sending an rs:GetReportAndModels request:

$web = Get-SPWeb http://server2008r2
$list = $web.Lists["PowerPivot Gallery"]

$spQuery = New-Object Microsoft.SharePoint.SPQuery
$spQuery.ViewAttributes = "Scope='Recursive'";
$spQuery.RowLimit = 2000
$caml = '<Where><Eq><FieldRef Name="File_x0020_Type" /><Value Type="Text">rdlx</Value></Eq></Where>' 
$spQuery.Query = $caml 

do
{
    $listItems = $list.GetItems($spQuery)
    $spQuery.ListItemCollectionPosition = $listItems.ListItemCollectionPosition
    foreach($item in $listItems)
    {
        $reportAddress = $web.Url + "/_vti_bin/reportserver/?" + [System.Uri]::EscapeDataString($web.Url + "/" + $item.Url) + "&rs:Command=GetReportAndModels"
        $request = [System.Net.WebRequest]::Create($reportAddress)
        $request.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials
        $request.ContentType = "application/progressive-report"
        $request.GetResponse()
    }
}
while ($spQuery.ListItemCollectionPosition -ne $null)

Powerless View

One of the early criticisms of Power View is that there are so many dependencies, and not everyone wants to have a full blown SharePoint Enterprise install just to make it work – in fact there is even a connect item about it spearheaded by Jen Stirrup (blog|twitter) (and yes, I do realise that in a lot of respects Excel 2013 makes this issue moot).

Well now that we know how Power View communicates with Reporting Services we can use that knowledge to simulate our own web service, slice out Power View from SharePoint, transplant it to a regular web application and then stitch it all up.

With ASP.NET MVC and custom routes this is remarkably easy. I’m using MVC 4, but custom routes are one of the core features of ASP.NET MVC, so this could be done in any version of MVC.

routes.MapRoute(
  name: "ReportServer",
  url: "_vti_bin/{controller}/{action}/{id}",
  defaults: new { controller = "ReportServer", action = "Index", id = UrlParameter.Optional }
);

We need to listen for calls to the _vti_bin/ReportServer address; in my version I’ve set the second part of the URL to be the controller, which defaults to ReportServer.

Once we’ve created the route, we then need to create a controller called ReportServer to handle the requests. I’ve put together a simple controller that looks for the rs:Command query parameter and then returns a custom ActionResult for each rs:Command type.

public class ReportServerController : Controller
{
  public ActionResult Index()
  {
    if(String.IsNullOrEmpty(Request.QueryString["rs:Command"]))
      return View();
    var command = Request.QueryString["rs:Command"];

    if (command == "GetReportAndModels")
      return new GetReportAndModelsActionResult(string.Format("http://{0}/Content/rdlx/Report.rdlx", Request.Url.Authority));
    else if (command == "RenderEdit")
    {
      if (!String.IsNullOrEmpty(Request.QueryString["rs:ProgressiveSessionId"]))
        return new RenderEditActionResult(string.Format("http://{0}/Content/rdlx/RenderEdit.bin", Request.Url.Authority));
    }

    //if unknown request
    return View();
  }
}

The project is still very much a work in progress and currently I’ve hard-coded parts to suit the demo report, but with a little more effort it could be made completely dynamic. If you’re interested in the code, I’m hosting it on CodePlex under the cheeky name of Powerless View. Feel free to download and fork it. The only part missing from the project is the Power View Silverlight application itself – you’ll have to supply that.

I’d also like to point out that while this solution allows Power View to be hosted without SharePoint, we cannot get away from SharePoint completely as it’s still required to generate the initial .rdlx.

Power View – how it works: part 1

I’ve been meaning to check out Power View for a while now but with getting ready for my PASS Summit 2012 talk and PivotViewer (more on that soon) I haven’t had much spare time to dig deep into it.

Since I first saw it I was curious to know why it had so many dependencies – SharePoint 2010 (Enterprise), SSRS and SSAS, and how all those different systems are used to coordinate the rendering of a Power View report.

I’ve broken this little investigation up into a series of three posts in which I will cover:

  1. What is a Power View report and how does it fit with SharePoint, SSAS, SSRS and Silverlight (this post).
  2. How to create a standalone version of Power View (without SharePoint and Reporting Services)
  3. If it’s possible to create a HTML5 equivalent of Power View that can be swapped-in and still use the existing web services.

What is an .rdlx?

Power View reports are based on a .rdlx file which is generated by SharePoint when you create a new Power View report. If you are interested in what makes up the internals of an .rdlx file then Dan English (blog|twitter) has already cracked one open - http://denglishbi.wordpress.com/2012/06/12/inside-the-power-view-rdlx.

As Dan discovered, Power View reports are based on an .rdl file like Reporting Services reports, but with a slightly different XML schema. The .rdl file is then wrapped in a .zip file with the extension .rdlx – just like all the other Office document files (as they are all based on the Open XML standard).

Contents of an .rdlx file

However, if you are curious enough to try and run the .rdl file that is inside the Power View report in BIDS/SSDT, you’ll be greeted with an error saying something like “The report definition is not valid or supported by this version of Reporting Services”.

If you are not aware, Power View reports are rendered by a Silverlight application called Microsoft.Reporting.AdHoc.Shell.Bootstrapper.xap which comes in two flavours – one for SharePoint (version 11.0.2100.60), and another for Excel 2013 (version 11.0.2809.6).
The Silverlight control that hosts Power View in SharePoint takes two key initialisation parameters. The first is ItemPath, which is the path to the Power View .rdlx file on SharePoint. The second, ReportServerUri, is the path to the Reporting Services web service – located at /_vti_bin/reportserver/. There are a few other parameters but those two are the most important.

Loading a Power View report

When a Power View report is first requested, it sends a request to the SSRS web service with the path to the .rdlx file (specified in the ItemPath parameter) and an rs:Command parameter of GetReportAndModels. A typical first request looks something like this (parameters are URL encoded):

http://intranet/_vti_bin/reportserver/?http%3A%2F%2Fintranet%2FPowerPivot%20Gallery%2FReport.rdlx&rs:Command=GetReportAndModels
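The encoding in that request is just standard URI component escaping. As a sketch, the URL could be assembled like this (getReportAndModelsUrl is an illustrative helper, not a real API):

```javascript
// Illustrative helper: build a GetReportAndModels request URL.
// The report path is URL encoded and passed as the first query parameter.
function getReportAndModelsUrl(siteUrl, reportPath) {
  return siteUrl + '/_vti_bin/reportserver/?' +
    encodeURIComponent(reportPath) +
    '&rs:Command=GetReportAndModels';
}

var url = getReportAndModelsUrl(
  'http://intranet',
  'http://intranet/PowerPivot Gallery/Report.rdlx'
);
// url matches the example request shown above
```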

Reporting Services then signals to SharePoint that the PowerPivot model has been requested and streams the model into the Analysis Services instance (in SharePoint integrated mode). SharePoint does this basically because it doesn’t understand what a tabular model is, and because Analysis Services is much better at processing and distributing the query load. What’s interesting about the relationship SharePoint has with SSAS is that it’s only a temporary model; once it’s no longer in use a SharePoint timer process will come along and remove it from SSAS (which is why it gets such a horrible name).

After Cache

The Reporting Services web service then returns a response to Power View; its contents contain two parts – the binary .rdlx file and an XML document. The XML document contains all the data source connection details from the Power View report, which (I assume) is so Power View can start requesting the datasets in one thread while it extracts the contents of the .rdlx and builds the UI in the other. This is what’s happening when you see this kind of behaviour:

Report Waiting

Power View has finished rendering the report UI and is waiting for the asynchronous request to come back with the actual dataset.

Dataset requests use a different command, rs:RenderEdit, along with the current session id. Once the request has been processed, the Reporting Services web service sends back a binary response containing the requested dataset. A typical request looks like the following:

http://intranet/_vti_bin/reportserver/?rs:Command=RenderEdit&rs:ProgressiveSessionId=fe51a94b919c421380a57093caecb725y0y1yb552ckd2dbjqmmhpy45

In the next post I’m going to show how I’ve been able to host a Power View report outside of SharePoint and Reporting Services.

Visualisation to convert times to other time zones

I volunteer with the PASS Business Intelligence Virtual Chapter who run free technical training sessions on BI topics based on Microsoft technologies.

Lately PASS as an organisation has been positioning itself as a truly global representative of the SQL Server community. As part of that push the BI Virtual Chapter has been scheduling live sessions that run at times all across the world and giving opportunities for local speakers to showcase their talents.

Traditionally the listed time for a session has been the local time of that speaker. For example sessions that I’ve organised are always run at 12pm Brisbane time. Now if you live in Brisbane or even Australia then working out when the session is running is relatively easy, but for everyone else it’s not so simple.

When I have to work out what time something is running in my local time here in Australia I usually have to go through a process of using the Time Zone converter at timeanddate.com, finding the time zone that it’s being run in, and then converting it to my local time.

For example if a session is running at 12pm EST in the US, I first have to find the right EST.

Time Zone picker at timeanddate.com

Not only are there multiple ESTs listed (and the full description is obscured), but when the North American EST is selected there is a second choice of location. In this case I always pick New York because, to be honest, it’s the only place on the list I’ve heard of (although I’m sure George Town is a nice place). Once selected, I then convert it to my local time only to discover that it’s at 1:30am…

Before going any further I have to say that timeanddate.com provides an excellent service and I use it all the time. But I feel this process is overly complicated when, at the very least, I’d just like to know whether I’ll be asleep or not.

My proposed solution is to produce a simple visualisation that displays the approximate time a session is being run at any point across the world. The goal is to quickly communicate a close enough time at a glance. If the time looks good then the user can investigate further and perform the actual conversion. In the example below the session has been listed as 12pm EST.

Time Zone Map US-EST 12pm

You may have noticed that the map has been significantly cropped – not to offend, but to show a section of the world that is both easily identifiable and space saving. The idea is to produce a visualisation that communicates an approximate time for an approximate location. In fact, contrary to most visualisations, this chart is inaccurate by design. If you’ve ever looked at a map with the time zones marked you’ll notice that they don’t fall into evenly distributed bands; in most cases the time depends more on the country you live in than on the position of the sun above it.

Based on the example chart I can very easily tell that this session is running when I’ll hopefully be asleep, saving myself around ten clicks and one browser tab. However, if I lived in the UK I could see that the session falls at the end of the working day, in which case working out the exact time would be worthwhile.

To further reduce the total number of clicks a user must perform the image will link to the Event Time Announcer at timeanddate.com (http://timeanddate.com/worldclock/fixedtime.html?iso=20120821T12&p1=179) which not only lists all capital cities, but reduces the total number of clicks to a maximum of one.

Time Zone Map AU EST 12pm

The map was sourced (with thanks) from vectorworldmap.com. The sequential colours were sourced from colorbrewer2.org and were chosen so they don’t lose meaning for those who are colour-blind. Finally, the visualisation was put together with Inkscape.

The plan is to start using these little visualisations for each scheduled session that the BI Virtual Chapter runs, and I’m keen to get any feedback – positive and negative. So what do you think?

LobsterPot HTML5 PivotViewer – now Open Source!

Two months ago I posted about a project that I’ve been working on during down time here at LobsterPot, a port of the Silverlight PivotViewer control that has been built exclusively on web technologies – HTML5 and JavaScript. If you’re not familiar with PivotViewer it is a visualisation tool that I’ve always felt never got the attention it deserved.

So I put an early version out there to see what people thought – not expecting much. Well, I can honestly say that the response has been overwhelmingly positive; I’ve been inundated with requests to finish it off as people were excited to build collections with their own data.

So I’m pleased to announce that the LobsterPot HTML5 PivotViewer is now an Open Source project hosted on CodePlex. You can find it here: http://lobsterpothtml5pv.codeplex.com.

The control is still very much a work in progress and there are still pieces of functionality that are missing. I’ll be updating the documentation over the next few days, and the plan is to continue work on the control so that it can render static CXML based collections as well as its Silverlight counterpart does.

If you’ve got an existing CXML based collection then please download the source and let me know how well it does or doesn’t work, as well as whether there are any bugs or missing functionality. The LobsterPot HTML5 PivotViewer has been built as a jQuery plugin with extensibility in mind. I’ll be posting more about ways the control can be enhanced, including how to get started extending it to work with other data sources.

Going forward the plan is to have two versions of the control: The open source version that will support static CXML based collections and a paid version that will be enhanced with dynamic collections, tile templates and additional views for mapping, data grids and charts. If you’re interested in having LobsterPot build a collection for you please contact us.


Picking MaxWidth for PivotViewer Semantic Zoom

The benefit of creating an implementation of PivotViewer and Deep Zoom from scratch is that you get a better understanding of how it all works under the covers. (If you haven’t had a look at the HTML5 PivotViewer you can check it out here - http://pivot.lobsterpot.com.au/html5.htm)

The Silverlight 5 version of the PivotViewer control brings a whole heap of long awaited features including dynamic collections, item templates and Semantic Zoom. I’ve been playing around with item templates a bit lately as I prepare for my SQL Saturday talks later this month in Brisbane, Wellington and Adelaide and I thought I would go into a bit more detail on picking the correct MaxWidth for a PivotViewerItemTemplate.

If you’re unfamiliar with the Silverlight 5 PivotViewer, item templates can be defined in XAML and data bound to the items in the PivotViewer control’s ItemsSource. The item templates can be further enhanced by specifying a value for the MaxWidth property – a value that tells the control when to render the item/tile based on the current tile size or zoom level. For example, creating three item templates – one with minimum detail for when the collection first loads, one with a little more detail for when only a few tiles are visible, and a final one for when only one tile is displayed – would look something like this:

<pivot:PivotViewerItemTemplate MaxWidth="150">
  <Border Width="30" Height="20">
    <TextBlock Text="150" />
  </Border>
</pivot:PivotViewerItemTemplate>
<pivot:PivotViewerItemTemplate MaxWidth="500">
  <Border Width="30" Height="20">
    <TextBlock Text="500" />
  </Border>
</pivot:PivotViewerItemTemplate>
<pivot:PivotViewerItemTemplate>
  <Border Width="30" Height="20">
    <TextBlock Text="> 500" />
  </Border>
</pivot:PivotViewerItemTemplate>

While this example is simplistic – it only displays some text at each level – it demonstrates how the MaxWidth property is used to specify when to display a tile. The first template is used until the tile becomes greater than 150 pixels wide, the second is visible until 500 pixels, and the last is visible for all widths after that.
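That naive selection rule can be sketched in a few lines of JavaScript (pickTemplate and the template objects here are purely illustrative, not PivotViewer’s actual API): use the first template whose MaxWidth covers the current tile width, and fall back to the template with no MaxWidth for anything larger.

```javascript
// Pick the first template whose MaxWidth is >= the current tile width;
// a template with no MaxWidth acts as the catch-all for larger tiles.
function pickTemplate(templates, tileWidth) {
  for (var i = 0; i < templates.length; i++) {
    var max = templates[i].maxWidth;
    if (max === undefined || tileWidth <= max) {
      return templates[i];
    }
  }
}

var templates = [
  { name: 'minimal', maxWidth: 150 },
  { name: 'medium', maxWidth: 500 },
  { name: 'full' } // no MaxWidth: shown at every width beyond 500
];

pickTemplate(templates, 100).name; // 'minimal'
pickTemplate(templates, 300).name; // 'medium'
pickTemplate(templates, 800).name; // 'full'
```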

While choosing a value for MaxWidth seems straightforward enough, having a closer look behind the scenes reveals that there is a little bit more to picking the right value. When we talk about MaxWidth what we are really referring to is the level at which the template applies, and when I say level I mean a level in the context of a Deep Zoom pyramid (http://msdn.microsoft.com/en-us/library/cc645077).

Even though the Silverlight 5 PivotViewer does away with the Deep Zoom dzc and dzi files, behind the scenes Deep Zoom tiles are still being rendered client side by Silverlight. Deep Zoom works by splitting an image up into smaller pieces whose widths are powers of 2.

Image pyramid used by Deep Zoom

In the context of PivotViewer the current level is determined by the current width of the tile rounded down to the nearest power of 2. For example, if the tile is 256 pixels wide then it is at level 8 (2^8 = 256), and when it’s 300 pixels wide it also gets rounded down to level 8.

Note: The formula to work out the level based on the width is: Floor(Log(width)/Log(2)).

To put this to the test I created 4 tile templates with the MaxWidth set to 300, 500, 700 and > 700. The result was that only the tiles with a MaxWidth of 300, 700 and > 700 were actually rendered by PivotViewer – it was as if the item template with the MaxWidth set to 500 had never been defined.

Unfortunately the official documentation mentions nothing about this (or anything really: http://msdn.microsoft.com/en-us/library/system.windows.controls.pivot.pivotvieweritemtemplate.maxwidth(v=vs.95).aspx) but as the item templates get converted to Deep Zoom behind the scenes, the MaxWidth of an item template can really only be set to powers of 2.
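The level formula from the note above also explains that test result: 300 and 500 both round down to the same Deep Zoom level, so only one of those two templates can ever win. A quick JavaScript sketch (deepZoomLevel is my name for it, not PivotViewer’s):

```javascript
// Floor(Log(width)/Log(2)): round the tile width down to the
// nearest power of 2 to find its Deep Zoom level.
function deepZoomLevel(width) {
  return Math.floor(Math.log(width) / Math.log(2));
}

deepZoomLevel(300); // 8 - between 2^8 (256) and 2^9 (512)
deepZoomLevel(500); // 8 - collides with the 300 template
deepZoomLevel(700); // 9
```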

PASS BI Virtual Chapter – now in multiple time zones

For those of you who are familiar with PASS and the Business Intelligence Virtual Chapter, you may be interested to know that I have recently started volunteering my time (if you’re not familiar with the virtual chapters that PASS runs and you work with SQL Server, then I would strongly recommend checking them out). If you’re a regular reader of Marco Russo’s (blog|twitter) or Chris Webb’s (blog|twitter) blogs you would be aware that they, with Jen Stirrup (blog|twitter) and Alberto Ferrari (blog|twitter), are helping to organise sessions in European friendly time zones. The exciting news is that starting this month (March 2012) there will also be regular sessions held at a time when us Australians and New Zealanders will (hopefully) be awake.

The first session organised will be presented by Bhavik Merchant (blog|twitter) on Blazing SSIS! High Performance Design Techniques. It will be held at 12pm Brisbane time, which I’ve chosen for a few reasons – the first is that Brisbane doesn’t observe daylight savings, so sessions will always be held at the same GMT offset. The other reason is that it’s hopefully a time that works well for people in both Perth and Wellington – if anyone has any feedback on that please let me know.

If you’re interested in speaking please send your details, speaking experience, session titles and abstracts to PASSDWBIVC@sqlpass.org. Submissions from all are welcome as without volunteers none of this would be possible.

Big thanks to Amy Lewis (twitter) for helping to make all this happen.
