
Tuesday, April 22, 2008

Example Prediction Market for IT Projects

A colleague of mine forwarded me a great research paper on an example internal prediction market for an IT project. The research is not fully complete, but there were a few interesting nuggets that support the use of internal markets for accurate predictions. This is the topic that Bo Cowgill from Google will be presenting at DIG next month.

The research paper highlights four key needs for an accurate prediction market:
  1. Ability to aggregate information and knowledge from individuals
  2. Incentives to encourage active participation
  3. Feedback to participants based on market prices
  4. Anonymous trading

The results from the case study were quite positive. Acxiom Corporation was the test case and used the Inkling Markets software to host a market predicting 26 milestone events of an internal IT project. Two results jumped out at me. The market was 92% accurate on the milestone events (24 of 26) and had an 87% participation rate (33 participants). There was also a higher perceived level of collaboration as a project team, which had a positive impact on the outcome.
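The paper doesn't spell out the pricing mechanics, and I don't know exactly what Inkling runs under the hood, but internal markets like this are commonly built on a logarithmic market scoring rule (LMSR). A minimal sketch of how such a market turns trades into a consensus probability (the liquidity parameter `b` and the trade sizes are illustrative):

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * ln(sum(exp(q_i / b))) over outstanding shares q."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous price of each outcome, interpretable as its probability."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(quantities, outcome, shares, b=100.0):
    """What a trader pays to buy `shares` of `outcome` in the current state."""
    after = list(quantities)
    after[outcome] += shares
    return lmsr_cost(after, b) - lmsr_cost(quantities, b)

# Binary "will the milestone hit its date?" market, no trades yet: 50/50.
q = [0.0, 0.0]
print(lmsr_prices(q))   # [0.5, 0.5]

# A trader who believes "yes" buys 50 shares; the "yes" price rises above 0.5.
cost = trade_cost(q, 0, 50)
q[0] += 50
print(lmsr_prices(q)[0], cost)
```

The appeal for project milestones is exactly the aggregation property listed above: each trade nudges the price, so the price itself becomes the team's collective forecast.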

The authors of the paper are Herbert Remidez, Jr. Ph.D. and Curtis Joslin from the University of Arkansas. Looking forward to seeing further output from the research.

Tuesday, April 15, 2008

Deregulation of Utility Computing and my Gmail account

I may be jumping the gun on this one a bit since there isn't yet a "Computing Utility" the way natural gas, telephone and electricity are currently piped into my home. Thus, there is no need to deregulate the industry the way natural gas was in the 1980s. But will there be?

Google last week announced their foray into utility computing with their Google App Engine. Google is opening up their computing horsepower to allow scalable, web-based application development for anyone. And it's free. They aren't the only ones providing utility computing. Amazon has been offering a similar platform: Elastic Compute Cloud (EC2) for "resizable" computing capacity and Simple Storage Service (S3) for inexpensive storage.

The idea of utility computing has been batted around for a while, but Nicholas Carr's book "The Big Switch" draws an interesting parallel to the switch manufacturers made 100 years ago from generating their own electricity to tapping into the expanding power grids. Carr makes a compelling case that this is the direction of computing for businesses and consumers.

If you aren't familiar with Carr, he is a bit of a lightning rod in the IT industry based on his controversial views on IT. He was just named #93 on the Ziff Davis list of the Most Influential People in IT. Not everyone agrees with Carr's view of IT, but he has forced the industry to take a look in the mirror and question the value being provided.

So you may be asking yourself: what does this post have to do with DIG, and why did I start on the topic of utility computing? Honestly, there is no direct relationship beyond the fact that I have been having some "constructive" budgetary discussions with a client around disk storage sizing. When I got home tonight I asked myself, "This has to be easier," hence my research on utility computing. Why is it that I can get 6.6 gigabytes of free storage from Google for my email but not enough storage for a data mart? (btw – this is a hypothetical question and doesn't need to be answered via a comment).

(sigh)

Monday, April 7, 2008

Shifting Mindsets on BI

Pete Graham recently wrote a post on Using Business Intelligence in E2.0 that challenged each of us to bring business intelligence (BI) into the business conversation (versus creating a business conversation around BI). It was a prickly role reversal for those of us who like to look at the information value chain in a linear fashion beginning with data: data -> information -> knowledge (picture below of a basic analytical information systems strategy). However, he provided a gentle but persuasive reminder that our mindsets and diagrams need to shift.

Let me explain. The idea of information and its use within business is an old one, but its mastery remains rather elusive. There are three core competencies that need to be achieved: Data IN, Information OUT, & Knowledge AROUND.

Data IN
Every time something happens within a business, there exists the opportunity for us to capture a piece of “data” that records its occurrence. For instance, when someone walks into a retail outlet, their visit can be recorded with a date stamp and time stamp. When the visitor buys a greeting card, the transaction is stored, inventory is marked down, and cash can be credited. If the person happens to pay by credit card, the purchase is tagged with the person’s card number. If the customer scanned their loyalty card, the transaction is immediately tagged with their profile information - and on and on. We could go on to name thousands of activities that are tracked within our organizations. These transactions let us know that something has happened!

This is not surprising. We live in a digital world where many of our actions are recorded. The challenge for businesses is to store this point-in-time data in a timely fashion and in such a way that it can be accessed quickly and easily in the future. I call this exercise the "Data IN" process. This is the opportunity for our organizations to capture all of the happenings within our business ecosystem. Unfortunately, this raw data is unwieldy to the average business person.
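As a toy illustration of the "Data IN" idea (the record names and fields here are mine, not from any particular system), each business event can be captured as a small timestamped record:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class SaleEvent:
    """One point-in-time fact: something happened, and when."""
    store_id: str
    sku: str
    amount: float
    paid_with: str                      # "cash", "credit", ...
    loyalty_id: Optional[str] = None    # present only if a loyalty card was scanned
    occurred_at: datetime = field(default_factory=datetime.now)

# The greeting-card purchase from the example above:
event = SaleEvent(store_id="S042", sku="CARD-BDAY-01", amount=3.99,
                  paid_with="credit", loyalty_id="L-8841")
print(event.store_id, event.amount, event.loyalty_id)
```

Nothing here is analysis yet; it is simply the raw, granular capture that everything downstream depends on.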

Information OUT
Therefore, an organization is tasked with putting this data into context so that users can see an evolving narrative about their business. This narrative helps us to understand the what, when, and how of our businesses and their performance within the marketplace. We get to see the single occurrence (or piece of data) within the context of the business story. This process of transforming data into "information" is invaluable and gives us the digestible analytics to manage, measure, and improve our businesses.

Getting “Information OUT” is achieved by answering both traditional and current business questions with information about the past or with forecasts about the future.
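Continuing the toy example from the previous section, "Information OUT" is the step that rolls raw events up into something a manager can actually read, e.g. sales by store per day (a sketch with invented data, not any particular BI tool):

```python
from collections import defaultdict
from datetime import date

# Raw "Data IN" events: (store_id, sale_date, amount)
events = [
    ("S042", date(2008, 4, 1), 3.99),
    ("S042", date(2008, 4, 1), 12.50),
    ("S042", date(2008, 4, 2), 7.25),
    ("S017", date(2008, 4, 1), 4.10),
]

# "Information OUT": daily sales per store, the beginning of a narrative.
summary = defaultdict(float)
for store, day, amount in events:
    summary[(store, day)] += amount

for (store, day), total in sorted(summary.items(), key=str):
    print(f"{store} {day}: ${total:.2f}")
```

The individual transactions answer "did something happen?"; the rollup answers "how is the business doing?", which is the context the average business person actually wants.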

Knowledge AROUND
The last piece of the information value chain is to seize the Aha! moments and business insights and push them out to the organization. For instance, a store manager who sees a declining trend in her customer base may realize that a profound shift is taking place in her market. With the combination of some analytical reporting and some field observation, she may notice that a local competitor has cut deeply into her customer base. This “knowledge” needs to be shared with her organization so that other store managers can prevent a similar decline and so functional groups within the organization can support or assist with planning a response (or change to the business). Our companies have a need to easily and quickly share insights throughout the organization, or broadcast “Knowledge AROUND”.

Today's E2.0 tools have brought renewed energy to the business conversation represented by the Knowledge AROUND piece of the value chain. Tools like blogging, microblogging, wikis, prediction markets, etc. are democratizing the voice of the market-facing parts of our organizations! This is exciting because it allows the conversation happening out in the field, between the people in the field and the market (customers, vendors, etc.), to more effectively influence the information value chain. To Pete's point at the beginning of this post, our organizations need to bring BI into the business conversation. If we do, we have the opportunity to consistently adapt to fulfill the needs of our changing markets.

Let's keep thinking about the paradigm shifts required to bring BI to E2.0. What do you think? What topics should we be discussing?

Wednesday, March 26, 2008

A New Type of CIO

I read an interesting article last week in the Wall Street Journal about Douglas Merrill, Google's Chief Information Officer. To say the least, Merrill is not your typical CIO. Granted, Google is not your typical organization. Here are a few snippets and comments from the article that I found interesting.

"Mr. Merrill's group lets Google employees download software on their own, choose between several types of computers and operating systems...."

This isn't totally off the wall, but certainly unique. They are a company focused on innovation and technology, so it isn't surprising that they wouldn't "constrain" anyone from getting the resources they need.


"We're a decentralized technology organization...Google's model is choice. We let employees choose from a bunch of different machines and different operating systems, and [my support group] supports all of them. It's a bit less cost-efficient - but on the other hand, I get slightly more productivity from my [Google's] employees."

The decentralization and the idea of choice certainly go hand in hand. The cost-efficiency comment is interesting and it doesn't sound like something that they necessarily quantify. I also assume that Mr. Merrill is being extremely sarcastic when he says he gets slightly more productivity out of Google employees. Based on the number of tools that Google produces for consumers, I would say that they are quite productive and innovative. I made the comment the other day to someone that I can't keep up with all the new things that they constantly release! On the topic of security, Mr. Merrill had the following response.


"The traditional security model is to try to tightly lock down end-points...we put security into the infrastructure. We have programs in our infrastructure to watch for strange behavior."

"When I talk to Fortune 100 CIOs, they want to understand, 'What is your security model? Is it really as reliable? What's the catch?' I already had to build security standards because search logs are really private. Very few [Google employees] have access to consumer data, [and those who do] have to go through background checks."

What I find most interesting about Mr. Merrill's comments is that Fortune 100 CIOs immediately have questions about security, primarily because of Google's approach to employee "choice". Google has shown that it can provide a best-of-breed infrastructure that is stable and secure. My guess is that most CIOs struggle to provide a decentralized environment without ending up on the front page of the newspaper because their systems have been compromised.

The last comment I will make about Mr. Merrill is that he has a fascinating academic background, especially for a CIO. He studied social and political organization at the University of Tulsa and earned master's and doctoral degrees in psychology from Princeton University. His IT knowledge came through practical experience at RAND, Price Waterhouse and Charles Schwab. Certainly politics and psychology play into the role of a successful CIO.

When I look at Google's approach to information technology and management, it is built around the principles of the organization: decentralization, autonomy and a focus on innovation. Those principles may not make sense for every organization, but they clearly work for Google. Do I think that every psychology major is going to be the next CIO at company XYZ? No. Do I think that there are lessons to be learned from Google and Douglas Merrill? Absolutely.

If you are interested in listening to Mr. Merrill speak to how Google eats their own dog food, take a look at the YouTube video below. He and CEO Eric Schmidt discuss how they use Google Apps internally in an effective and collaborative manner while meeting business and consumer needs, including security. Google Apps is a great example that relates back to the three themes of DIG: having organized and accessible data, providing robust analytic tools, and providing a massively collaborative environment.


Comments are welcome. I would be interested to hear related experiences and if you feel that IT is helping enable your business the way that Google is enabling theirs.

Wednesday, March 19, 2008

Real-time Data Usage

This past weekend was a major weekend for me, one that would determine my core happiness for the rest of the year!

You see I’m Rugby mad. I’ve played it, I’ve coached it, and now I watch it, incessantly.

This past weekend was the concluding weekend of the European 6 Nations Rugby Championship, an annual international tournament between the home nations of Europe (England, Ireland, Scotland, Wales, France and Italy). The tournament dates back to 1883, and every year since it has been an excuse for all to bring out their nationalistic pride and cheer for their ancestral team! For me it's Wales, the land of my forefathers, the land of daffodils and of course sheep. Wales had the opportunity to win the tournament in style: beat France at home and raise the championship trophy undefeated, Grand Slam winners. And did they do it? They sure did!

Rugby, just like most American sports, is now a professional sport, and with it many changes have come. Dragged out of the traditions of amateur pastimes where the local butcher was your star player, teams today are forced to continually explore all possible avenues in an attempt to better themselves and obtain a competitive advantage over their rivals. No longer is it good enough to just employ the best players and coaching staff; teams are looking elsewhere.

One interesting field of innovation, which we were given a brief insight into during one of the games, was the Welsh coaches' use of statistical information in real time, allowing them to make real-time adjustments to how their team and players were approaching the game. Using technology from Sportstec, the Welsh team was actively making adjustments that helped provide a competitive advantage over the opposing team.

Around the field, "spotters" were employed to feed information into a database application about specific events as they happened: the number of times a player passed to his left vs. his right, how many carries an individual had with the ball, the number of times the ball was kicked from a certain place vs. passed, the number of missed tackles by each player, the success rate of a particular move, etc. By providing such detailed information on actions performed by their team as well as the opposition, the coaches were able to react and make tactical real-time changes: for example, adjustments to the team's strategy on the field, instruction to focus on and improve certain aspects of the game, and instruction to target identified weaknesses in the opposing team.
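The spotters' feed amounts to a running tally of tagged events. A toy sketch of that aggregation (the player names and event labels are invented for illustration, not taken from the actual match data):

```python
from collections import Counter

# Each spotter entry: (player, event_type), tagged live from the stands.
feed = [
    ("Shane Williams", "carry"),
    ("Shane Williams", "pass_left"),
    ("Shane Williams", "carry"),
    ("Martyn Williams", "missed_tackle"),
    ("Shane Williams", "pass_right"),
    ("Martyn Williams", "carry"),
]

tallies = Counter(feed)

# Questions a coach can answer mid-game from the running tallies:
carries = sum(n for (player, ev), n in tallies.items() if ev == "carry")
print(f"Total carries so far: {carries}")
print(f"Shane Williams passes left vs right: "
      f"{tallies[('Shane Williams', 'pass_left')]} vs "
      f"{tallies[('Shane Williams', 'pass_right')]}")
```

The value is less in any single count than in having the counts continuously updated while the game is still winnable.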

Did having this level of information access have a direct result on the outcome of the game? Who knows, but one thing is for sure: Wales beat Italy in this game 47-8, when on average the other teams that beat Italy did so by only 6 points! The other thing (did I mention?): Wales won the Championship, undefeated!

So where next? If teams are able to get hold of and make use of such real-time data I just wonder how much further they could go.

What if players began to wear RFIDs in their shirts so we knew where they were at any one time? Imagine the ability to understand in real time how much time a player has spent in one location, how long it took him to reposition, the average distance he covers while running with the ball, or the efficiency of the path travelled by each player when covering a kick-off. What about collecting information from body skins that can sense applied pressure? Could we measure the level of impact endured in a tackle, giving us the ability to predict the point of fatigue at which performance begins to degrade, an opportune time for a tactical substitution perhaps?

For more detailed commentary on how the Welsh team and others are finding innovative ways to capture and use information check out the videos @ http://www.sportstec.com/videos.htm

Monday, March 17, 2008

Are you a Funky Business?

I just received a new book on the recommendation of a co-worker called "Funky Business Forever" by Jonas Ridderstrale and Kjell Nordstrom. It is actually a follow-up to their first book "Funky Business", which I have not read. In "Funky Business Forever", Ridderstrale and Nordstrom discuss capitalism, the economy, politics, technology, the environment and talent. One topic they discuss that I enjoyed was "Technology: The Endless Riff" and four reasons why information technology creates digital dreams (I have paraphrased and shortened the descriptions):
  1. IT Decreases Time and Space: there is no longer a workplace. It has become a workspace and a lifespace, which are being blended together over time.

  2. IT Enables Total Transparency: people with access to information are beginning to challenge any type of authority. The periphery is becoming the center.

  3. IT Perfects Markets: markets are becoming more efficient thanks to information technology. These efficiencies are breaking down the traditional hierarchies that exist inside of an organization that were originally established to create efficiencies.

  4. IT Affects Us All and Affects Everything: Competitors are never more than a click away. All organizations are now information based, no matter the type of business.

Ridderstrale and Nordstrom conclude the idea behind digital dreams with the term "infostructure", the electronic nervous system of the company, which will be more important than infrastructure. Companies with lousy "infostructures" will look like 65-year-olds competing in the Olympic marathon wearing high heels and evening gowns.

Beyond the entertainment of the final visual, the points are well taken. I agree that information technology is changing the game for most organizations. What I find most interesting is that the concepts of "digital dreams" are still not a reality inside most businesses. While information is at our fingertips when using tools like Google, Wikipedia and YouTube, the same cannot be said for understanding business performance. The internal systems, including data, information and knowledge, very much look like the marathoner that Ridderstrale and Nordstrom describe! So how can we change the game? Can we apply the concepts of Web 2.0 inside the enterprise?

Thursday, March 6, 2008

Building an enterprise semantic layer

I recently read a blog post on FastForward by Paula Thornton mentioning a Reuters technology infrastructure called Calais. The purpose of Calais, to put it in simple terms, is to provide a service that automatically puts context to unstructured data. The unstructured data could be in the form of news articles, blog postings, or any other text-based content. The Calais web service identifies the entities, facts and events based on the natural language descriptions in the text, then returns a descriptive model of the unstructured data.
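I'm not reproducing Calais's actual API here, but the shape of the output, entities pulled out of free text with their types and positions, can be illustrated with a toy dictionary-based tagger (a real service infers entities from natural language rather than from a hard-coded list):

```python
import re

# A toy entity gazetteer; a service like Calais would infer these
# from the language itself instead of looking them up.
KNOWN_ENTITIES = {
    "Reuters": "Company",
    "Calais": "Product",
    "Paula Thornton": "Person",
}

def tag_entities(text):
    """Return (surface form, entity type, character offset) for each match."""
    found = []
    for name, etype in KNOWN_ENTITIES.items():
        for m in re.finditer(re.escape(name), text):
            found.append((name, etype, m.start()))
    return sorted(found, key=lambda t: t[2])

post = "Paula Thornton mentioned a Reuters infrastructure called Calais."
for name, etype, pos in tag_entities(post):
    print(f"{etype}: {name} (offset {pos})")
```

The interesting part is not the lookup but the descriptive model: once every document carries a list of typed entities, the documents become linkable and queryable.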

The reason why this is important or at least is generating excitement is that many believe Web 3.0 will be based on this type of inference of content. Web 2.0 is highly dependent on many individuals applying context themselves through concepts like tagging, social sharing and general broader collaboration.

Now, the majority of the excitement and focus is on the user base that is currently driving the Web 2.0 movement. As someone who has done a majority of my professional work inside the four walls of an enterprise in areas such as data architecture, data integration, business intelligence and corporate performance management, I see incredible opportunity in something like Calais. A majority of the effort associated with building internal measurement systems like dashboards and management reporting applications goes into developing a single semantic layer of metadata. And aside from the effort required to develop it, the semantic layer is typically inconsistent because the organization lacks the ability to agree on a common business taxonomy that describes the enterprise.

When we think about bringing Web 2.0 technologies (social networks, wikis, prediction markets, blogs) to the enterprise, the critical first step is building out a metadata layer that puts descriptors on data to create information and starts to establish context. There is plenty of unstructured data floating around organizations in the form of documents, internal web sites, email communications and traditional knowledge centers. In addition, there is an inordinate amount of structured data, which, I would argue, can have very little value when it lacks context; it requires speaking to an info-worker who can explain the report, dashboard or data set. These sets of structured data are typically living (trapped) within silos of different functions such as finance, marketing, sales and operations. If you look at the collective structured and unstructured data and information that an organization captures across disparate groups and functions, being able to "infer" the entities, facts, and events will start to build the context needed to make better-informed decisions. Add the ability to link people through the social network of an organization to share and disseminate information, and you start to see the value of implementing a Web 2.0 platform across the enterprise.