Saturday, May 3, 2008

Lowering the Bar for Data Visualization

The luxury watchmaker Romain Jerome has created a $300,000 watch. For billionaires to tell time? Not exactly - the watch doesn't actually tell you the time. What's more, it sold out in 48 hours!

So what does the watch actually do?

“With no display for the hours, minutes or seconds, the Day&Night offers a new way of measuring time, splitting the universe of time into two fundamentally opposing sections: day versus night.”

Day versus night, huh? And it sells out for an unbelievable price?

I'm going to start working on a new dashboard project, directed at CEOs of multi-billion dollar firms.

It won't have KPIs, trends, links or navigation; all it will do is flash two words - either "MAKING MONEY" or "LOSING MONEY".

Next I'll develop a separate version to sell to sports teams - a new scoreboard that flashes only whether the home team is "WINNING," "LOSING" or "TIED".
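(And yes, the entire "product" would be only a few lines of code. A tongue-in-cheek sketch in Python - the profit lookup and refresh interval below are entirely made up:)

```python
import time

def get_current_profit():
    """Hypothetical hook into the firm's financials; returns net profit in dollars."""
    return 1_250_000.00  # placeholder figure

def ceo_dashboard(poll_seconds=3600):
    """Flash the only two 'KPIs' a busy CEO supposedly needs."""
    while True:
        print("MAKING MONEY" if get_current_profit() >= 0 else "LOSING MONEY")
        time.sleep(poll_seconds)  # refresh once an hour

if __name__ == "__main__":
    ceo_dashboard()
```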

Thursday, May 1, 2008

Knowledge Harvests!

Knowledge Harvests - what a great term! Authors Katrina Pugh and Nancy Dixon define a “knowledge harvest” as “a systematic, facilitated gathering and circulation of knowledge.” I stumbled upon their article on the topic in the May edition of HBR (Harvard Business Review). It was in the Forethought section of the magazine, which looks at ideas and trends on the business horizon. Let me recap my admittedly limited understanding of a knowledge harvest and then offer some thoughts on its challenge to us as we seek to leverage E2.0.

From their short article, I believe that a knowledge harvest is a simple but purposeful and interactive approach to a postmortem analysis or debriefing. The basic idea is that the intentional review of a business occurrence or process will yield helpful information or insights for the future; hence - a knowledge harvest!

However, there is a twist. The authors say that the first step in the process is to recruit a set of “knowledge seekers” who want to learn from the harvest. They go on to characterize these people.

“Because seekers are self-interested, they ask tough, exploratory questions of knowledge originators, extracting important nuances – not only about how a project was executed but also about how costs built up, how knowledge might be applied elsewhere, what worked and what didn’t, and so on.”

A knowledge facilitator leads these seekers through a process of interacting with the knowledge originators to derive key information and valued insights. The knowledge facilitator then works with the seekers to package the content and distribute it around the company.

My question is whether our E2.0 applications are focused enough on these knowledge seekers. Do we have people who are clearly articulating what they need to know in order to do their jobs better? Do our apps help to connect these knowledge seekers with the appropriate knowledge originators within the business? I have a feeling that a lot of our Web 2.0 content is produced by knowledge facilitators who are doing screen scrapes from knowledge originators with no idea whatsoever of the needs of knowledge seekers! What do you think?

I do believe that we have the tools and technologies but I’m not sure that we have them working together to support this interesting approach of a knowledge harvest.

Wednesday, April 30, 2008

Another example of how to visualize data...

Being a Boston Red Sox fan and always looking for new and interesting ways to visualize data, I found this tool on Boston.com very interesting. It tracks Manny Ramirez's 496 career home runs and provides different ways to visualize what could be some pretty boring data if presented in a typical grid (see the HR information grid at the bottom). As a baseball fan, it is interesting to see the distances and ballparks where he has hit his home runs. As an opposing manager, the Pitch Count graphic would certainly be a tool to use when facing Manny. Certainly this only scratches the surface of the different ways that baseball measures performance (see Bill James and sabermetrics).

Markets Rule, Even in Politics

This is a line from L. Gordon Crovitz’s opinion article in the Wall Street Journal called “Trading on the Wisdom of Crowds” from April 28th. Prediction markets have been a popular topic in posts here on talkDIG over the last couple of weeks, and I apologize if I am sounding like a broken record. But the topic seems to be appearing everywhere. I rarely read the opinion section in the WSJ, but the title caught my eye. Crovitz discusses prediction markets and the deadly accurate Iowa Electronic Market. Now, if you think prediction markets are a fairly recent phenomenon, think again. According to Crovitz, some $165 million in today’s dollars was wagered on the 1916 election, in which Woodrow Wilson defeated Charles Evans Hughes.

One interesting topic that Crovitz raises is the difference between the traditional form of predicting political results, statistical polling, and a prediction market that trades future results like stocks. There are plenty of examples showing that a properly formed market will provide more accurate results than a statistical polling sample.
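To make the comparison concrete, here's a minimal sketch of the basic arithmetic in Python; the contract price and poll number below are invented for illustration. A winner-take-all contract that pays $1 trades at roughly the market's consensus probability of the event, whereas a poll reports a snapshot of vote share, which is not the same thing as a probability of winning.

```python
def implied_probability(price_cents, payout_cents=100.0):
    """A winner-take-all contract paying 100 cents trades near the market's
    consensus probability of the event: probability ~= price / payout."""
    return price_cents / payout_cents

# Hypothetical numbers, for illustration only.
contract_price = 62.0   # candidate's contract trading at 62 cents
poll_share = 0.48       # same candidate's share in a head-to-head poll

print(f"Market-implied win probability: {implied_probability(contract_price):.0%}")
print(f"Polling share (a vote-share snapshot, not a win probability): {poll_share:.0%}")
```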

Are you convinced yet that prediction markets can be an effective tool for your organization? Have you identified any areas, either internally or externally where a prediction market can more accurately predict an outcome?

Tuesday, April 29, 2008

Taking the Heat Out of a Hot Kitchen

(Long-time fans of the Pittsburgh hockey team will understand the title of this post. Go Pens!)

We’ve all seen ‘heat maps’ used as visualization tools. A heat map is a graphical representation of data in which the values taken by the variables are represented as colors. Often, heat maps are used in conjunction with an actual map – like the weather map on the back page of USA Today, or the real-time traffic display at traffic.com. And while the information from these maps is useful - “It’s cold and rainy in Boston in April, and the traffic on the Mass Pike is really bad at 5:00pm” - it’s not particularly insightful.
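As a minimal sketch of the mechanics - assuming Python with NumPy and matplotlib, and using a random grid as stand-in data - this is all it takes to render values as colors:

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in data: a 10x10 grid of values (think emissions, traffic delay, temperature).
rng = np.random.default_rng(2008)
values = rng.random((10, 10))

fig, ax = plt.subplots()
im = ax.imshow(values, cmap="hot")         # each value is mapped to a color
fig.colorbar(im, ax=ax, label="value")     # legend tying colors back to numbers
ax.set_title("Heat map: values represented as colors")
plt.show()
```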

Here’s an interesting application of heat map visualization. It’s from Purdue University’s Project Vulcan, which is quantifying North American fossil fuel carbon dioxide (CO2) emissions at space and time scales much finer than have been achieved in the past. This 5-minute video provides an overview and shows several fascinating examples of the heat map visualizations used in representing the underlying data:



Again, some of the results are expected – "carbon dioxide emissions are high where there are lots of people spending lots of time in their cars" – but not overly insightful. More interesting, however, are the discoveries that researchers have made from analyzing the data in graphical form. There’s an excellent summary in the April 27, 2008 issue of the Boston Globe and two results stand out:

“When you rank America’s counties by their carbon emissions, San Juan County, NM – a mostly empty stretch of desert with just 100,000 people – comes in sixth, above heavily populated places like Boston and even New York City. It turns out that San Juan County hosts two generating plants fired by coal, the dirtiest form of electrical production in use today.”

And the heat map shows a small, bright-red area (high carbon emissions) in the northwest corner of New Mexico surrounded by wide expanses colored green.

“Purdue researchers discovered higher-than-expected emissions levels in the Southeast, likely due to the increasing population of the Sun Belt, long commutes, and the region’s heavy use of air conditioning. According to Kevin Gurney, assistant professor of atmospheric science at Purdue and the project leader, this part of the map also overturns the prevailing assumption that industry follows population centers: In the Southeast, smaller factories and plants are distributed more evenly across the landscape. Cities, meanwhile, prove less damaging than their large populations might suggest, partly thanks to shorter commutes and efficient mass transit.”

Work is underway to add Canadian and Mexican data to the Project Vulcan inventories. It will be interesting to see what other non-intuitive conclusions will be reached with these analytical and visualization techniques.

Monday, April 28, 2008

Master Data Management at DIG

Last week I mentioned that Dan Power from Hub Solution Design will be speaking at DIG 2008 on the topic of master data management. Dan has over 20 years of experience in enterprise technology and is a frequent contributor on the topic of MDM in industry magazines such as DM Review. Dan recently added a post to his blog on speaking at DIG and the importance of master data management in the context of data governance, business intelligence and performance management platforms. I couldn't agree more. Every reporting, dashboard and planning application is not only dependent on getting quality data, such as sales figures, but is also equally dependent on having common “hierarchies” of the business. Hierarchies may be a standard chart of accounts, a product structure or an organizational structure. Without a common way to consolidate along these hierarchies, those sales numbers may not be right! Master data management and data governance practices start to address these common issues. We are looking forward to hearing Dan’s perspective on master data management and its linkages to business intelligence and analytics.
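To make the hierarchy point concrete, here is a rough sketch in Python; the product hierarchy and sales figures are made up, but it shows why two applications using different parent-child mappings will consolidate the same leaf-level sales into different totals:

```python
from collections import defaultdict

# Hypothetical master hierarchy: child -> parent.
hierarchy = {
    "Widget A": "Widgets",
    "Widget B": "Widgets",
    "Gadget X": "Gadgets",
    "Widgets": "All Products",
    "Gadgets": "All Products",
}

# Hypothetical leaf-level sales.
sales = {"Widget A": 120_000, "Widget B": 80_000, "Gadget X": 50_000}

def roll_up(sales, hierarchy):
    """Consolidate leaf-level sales into every ancestor node of the hierarchy."""
    totals = defaultdict(float)
    for product, amount in sales.items():
        node = product
        totals[node] += amount
        while node in hierarchy:      # walk up to the root
            node = hierarchy[node]
            totals[node] += amount
    return dict(totals)

print(roll_up(sales, hierarchy))
# A reporting app with a different child -> parent mapping would roll the same
# leaf sales up to different "Widgets" and "All Products" totals.
```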

What Gets in the Way of Good Analytics?

Today at Bank Systems and Technology, there’s an article on the increasing importance of analytics to the banking industry. The story is fairly typical in the genre – “we used to manage by gut, but better information about our customers can help us in so many ways!”

What caught my attention was that quite a few of the contributed quotes came from places on the org chart that just don't exist at most organizations – the “Director of Statistics and Modeling” and the “Department of Insight and Innovation,” to name two. These references were threaded alongside a frequent comparison of “mature” analytics areas, such as credit card predictive modeling, and “growing” areas, such as customer attrition modeling. This might suggest that organizations that create a dedicated function for analytics and related disciplines are more successful at spreading the competency internally than those that leave it to chance. This is certainly the position put forth by Thomas Davenport in Competing on Analytics, and it is intuitive in some respects.

It’s easy to envision a success story for such a group – evangelizing the power of analytics, introducing new skills to functions without a historical strength in analysis, etc. But what are the likely barriers and points of failure? How can an organization considering such an investment get ahead of the curve and mitigate the risk?

I’d speculate there are a handful of key reasons for struggle or failure:

  1. Lack of a starting point / quick win “pilot” - Perhaps it is difficult for a Center of Excellence-type structure to get off the ground without at least one demonstrated benefit within the first year or so
  2. Insufficient data trail - For businesses or domains without a solid trail of transactional information, it might be tougher to get started (there goes my idea for a chain of cash-only restaurants with no POS system)
  3. Lack of data architecture / infrastructure investment - If a new analytics team’s first report includes a request for $5 million just to organize the data, rough roads may be ahead
  4. Active resistance to the scientific approach - If a CEO is commonly heard to say “you guys think too much,” is that an organization likely to be hospitable to analytics?

What do you think is the biggest barrier? One I didn’t identify? What are the keys to success in building an organization's overall competency in analytics?