Saturday, August 27, 2005

Weird combination: EE, CA, GT, SD?

In my first post to this blog I mentioned that I studied Electrical Engineering (EE) at Delft University of Technology with a specialization in Computer Architecture (CA), hold an MBA from London Business School, and am somewhat of a business junkie with a special interest in practical applications of Game Theory (GT) and System Dynamics (SD). At first sight this may seem a weird combination, but it is not as weird as you may think.

The brilliant Princeton mathematician John von Neumann is generally seen as the father of Game Theory. Beginning in 1928 with a famous article, von Neumann single-handedly invented the field, which culminated in the publication of his seminal 1944 book with Oskar Morgenstern, Theory of Games and Economic Behavior. But von Neumann is also considered the father of modern computer architecture. In 1944 he became involved in the ENIAC project as a consultant, which led to the publication of a paper on the concept of stored-program computers, offering brilliant solutions to the most important problems facing computer design at that time. In 1946, von Neumann and his colleagues at the Institute for Advanced Study in Princeton began the design of a new stored-program computer, referred to as the IAS computer or MANIAC. Although not completed until June 1952, this machine is the prototype of all subsequent general-purpose computers, and architectures based on the IAS/MANIAC concepts are often called von Neumann architectures. In its simplest form, the von Neumann architecture consists of three parts: a central processing unit (CPU), a memory, and a connecting device that can transmit data between the two (often called a 'bus' or 'databus'). Although von Neumann, who died in 1957, will be remembered first and foremost for his contributions to pure and applied mathematics, including Game Theory, it is interesting to see that the step from mathematics to computer architecture is apparently only a small one.
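The stored-program idea is easy to make concrete. The sketch below is a toy illustration only (the instruction set is made up, not any historical one): a single Python list serves as the memory holding both program and data, and a small loop plays the role of the CPU fetching from it over the 'bus'.

```python
# Toy von Neumann machine: one memory holds both program and data,
# and the CPU fetches instructions from that same memory.
# The instruction set is hypothetical, purely for illustration.

def run(memory):
    acc, pc = 0, 0  # accumulator and program counter
    while True:
        op, arg = memory[pc]      # fetch from the shared memory
        pc += 1
        if op == "LOAD":          # acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":         # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":       # memory[arg] <- acc
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6: compute 2 + 3 into cell 6.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
run(mem)
print(mem[6])  # prints 5
```

Because program and data share one memory, a program could in principle even modify its own instructions, which is the essence of the stored-program concept.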

Jay W. Forrester, the father of System Dynamics, also has a background in computer architecture. From 1935 to 1939, Forrester studied Electrical Engineering at the University of Nebraska. Upon graduation he joined MIT as a research assistant, where he worked with Gordon S. Brown on developing servomechanisms for controlling radar antennae and gun mounts. After receiving his MSc from MIT in 1945, Forrester became Director of the MIT Digital Computer Laboratory, where he was responsible for the design and construction of Whirlwind I, one of the first high-speed digital computers. When Forrester became Professor of Management at the Sloan School of Management in 1956, he started the System Dynamics Group and, with it, the field of System Dynamics. Apart from many papers and articles on System Dynamics, he published five books: Industrial Dynamics (1961), Principles of Systems (1968), Urban Dynamics (1969), World Dynamics (1971) and Collected Papers (1975).

But what is it that attracts me to the fields of Game Theory and System Dynamics? I believe it has to do with the way of thinking involved in both, which is very similar to the kind of skills one needs to be a good computer architect. I was first introduced to Game Theory as part of my Microeconomics class while reading for my MBA at London Business School. Not aware of von Neumann's role in the field, I was immediately attracted to the concepts. Far from a maths whiz, I am particularly interested in the practical application of Game Theory. Hence, I enjoy books like Joel Watson's Strategy: An Introduction to Game Theory, Ghemawat's Games Businesses Play and Brandenburger and Nalebuff's Co-opetition, which apply game-theoretic thinking to business-oriented case studies. In a future post I may actually review some of these books in more detail.

My first interest in the field of System Dynamics dates back to 1994, when I read 'The Fifth Discipline' by Peter Senge, one of Jay Forrester's PhD students at MIT. At London Business School I was lectured by another of Forrester's PhDs, John D.W. Morecroft, who in 1990 was awarded the Jay Wright Forrester Award of the System Dynamics Society. In 2002, Kim Warren, one of Morecroft's colleagues at LBS and my lecturer for 'Dynamics of Strategy', published the interesting book Competitive Strategy Dynamics. In hindsight, however, it was already in 1988/1989 that I first learned about System Dynamics. In a course on modern control systems, our lecturer, Professor Honderd, told us about his cooperation with the Business and Economics departments of the Erasmus University in Rotterdam. At that time I did not fully understand how such technical insights into feedback loops, overshoot and undershoot related to these non-technical fields of study. Had I known about Jay Forrester and his background in servomechanisms, this would of course not have been the case...
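The link between servomechanisms and business systems becomes tangible with a few lines of code. This is a minimal illustrative sketch (the parameters are made up, not taken from any published model): a stock is adjusted toward a goal, but the adjustment decision is based on delayed information, which is enough to produce the overshoot and undershoot behaviour familiar from control systems.

```python
# Goal-seeking feedback with a delay: the correction is based on the
# stock level as it was a few periods ago. The outdated information
# makes the stock overshoot its goal and oscillate before settling,
# the classic behaviour studied in both servomechanisms and
# System Dynamics. All parameters here are illustrative.

def simulate(goal=100.0, gain=0.2, delay=3, steps=60):
    stock = [0.0] * (delay + 1)  # seed the history at the initial level
    for _ in range(steps):
        perceived = stock[-1 - delay]  # outdated view of the stock
        stock.append(stock[-1] + gain * (goal - perceived))
    return stock

levels = simulate()
print(round(max(levels), 1))   # peaks well above the goal of 100
print(round(levels[-1], 1))    # then settles back toward the goal
```

With the delay set to zero the stock approaches the goal smoothly; the oscillation is caused entirely by acting on stale information, not by the goal-seeking itself.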

Hence, it is clear from the above that the combination of Electrical Engineering, Computer Architecture, Game Theory and System Dynamics is not so weird after all!

Google Talk: The Power of Open?

Last Wednesday, Google released a beta version of its instant messaging (IM) and VOIP client, Google Talk. Available for Windows only, unable to communicate directly with users of other IM services such as MSN Messenger and AOL IM, and less advanced than Skype, it would be easy to dismiss Google Talk as an 'also-ran'. Looking closer, however, Google Talk seems to have a lot of potential, partially due to its use of open standards.

Google Talk's IM functionality is based on the Jabber/XMPP protocol, an open messaging standard. This will allow Google Talk to build critical mass fast, as its users can exchange messages with users of other Jabber/XMPP-based IM services such as iChat (Apple) and GAIM (Linux). And where IM clients such as GAIM already allow for the exchange of messages with the MSN and AOL networks, it is not at all unlikely that future versions of Google Talk will offer such functionality as well. This would result in a level playing field where competition is no longer tied to the size of a service's user group (the value of the network). By the way, note that Skype's IM service does not allow for the exchange of messages with other IM networks at all.

The VOIP part of Google Talk is also based on an open standard, namely SIP (Session Initiation Protocol). Where Skype uses its own proprietary protocol, Google Talk users will in the future be able to 'call' users on other SIP-based VOIP networks. This would certainly give it a leg up on Skype, where users can only call other Skype users (PC-to-PC). SkypeOut and SkypeIn (still in beta), however, allow Skype users to call out to regular PSTN phones and to receive calls from them on their own Skype number, respectively. Google Talk is believed to be planning similar PC-to-PSTN calling as well, especially as this is where Skype is making most of its money.

By coincidence or in response to Google Talk, Skype published some APIs this week, allowing developers to integrate Skype's IM and VOIP services into their own applications. By doing so, it basically 'opens' up its proprietary platform a little, without having to give full insight into its workings or go through the effort of establishing a new (open) standard.

Hence, it will be interesting to see whether the use of open standards can help Google overcome Skype's first-mover advantage. Although very secretive about its strategic plans, Google, the highest new entrant in BusinessWeek's 2005 Global Brand Top 100 at no. 38, seems to be after domination of the internet. To be continued....

Saturday, August 13, 2005

In Search of A New Paradigm

Over the last year a new paradigm has come to the fore: desktop search will change the way we work and interact with our computers. In the past it was advisable to set up a logical folder structure and use a clear file naming convention, as this was often the only way to find stuff long after it had been created and saved. The search function could help you find folders and files, but searching within documents was not supported. Enter desktop search tools. These tools can search not only folders and files but also their contents, irrespective of whether they are Word documents, PDF files or PowerPoint presentations. Furthermore, they can search not only the files saved on your hard disk but also your e-mails, contacts, instant messages and bookmarks/favorites. This way it is easy to find not only that one particular file or document, but all other documents that refer to it as well. Or all the music of a certain artist, or all the pictures you took during your last vacation (provided that you tagged them as such)...
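The core of such a tool is an inverted index: a map from each word to the set of documents containing it, so that a content query does not have to rescan every file. A minimal sketch (the file names and contents below are made up; a real tool would also crawl the disk, parse the various file formats and watch for changes):

```python
# Minimal inverted index over in-memory "documents": each word maps
# to the set of documents containing it, so queries are lookups
# rather than full scans of every file.
from collections import defaultdict

docs = {
    "trip.txt":  "photos from our last vacation in rome",
    "todo.txt":  "book vacation flights and hotel",
    "notes.txt": "meeting notes on the search project",
}

index = defaultdict(set)
for name, text in docs.items():
    for word in text.lower().split():
        index[word].add(name)

def search(query):
    """Return the documents containing every word of the query."""
    hits = [index[w] for w in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

print(search("vacation"))  # ['todo.txt', 'trip.txt']
```

Adding more query words narrows the result by intersecting the per-word sets, which is why multi-word desktop searches feel instantaneous even over thousands of files.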

One of the best desktop search tools available for the Windows platform is Copernic Desktop Search, which can be downloaded for free from www.copernic.com. Google, the icon of web search, also offers a desktop search tool, namely Google Desktop Search.

Apple was (to my knowledge) the first to introduce desktop search (Spotlight) as an integral part of its operating system, Mac OS X Tiger. Microsoft will incorporate comparable functionality in Windows Vista, the successor to Windows XP, due out in 2006.

In the past, the Linux platform often lagged behind the Windows and Macintosh platforms when it came to new functionality. This time around, however, Linux seems to be leading far ahead of Microsoft. With Beagle, the Linux platform has its own very powerful desktop search tool. So, where Vista is expected to be a bit of a Mac OS X and Linux clone anyway, Spotlight and Beagle set the standard for Microsoft with respect to desktop search.

Playing Hardball: DIY markets in The Netherlands

In their article 'Hardball: Five Killer Strategies for Trouncing the Competition' in the April 2004 issue of Harvard Business Review, George Stalk, Jr. and Rob Lachenauer of the Boston Consulting Group (BCG) suggest that businesses drop the soft approach to competition and play rough instead.

The Hardball Manifesto prompts businesses to relearn the fundamental behaviors of winning:
- Focus relentlessly on competitive advantage
- Strive for "extreme" competitive advantage
- Avoid attacking directly
- Exploit people's will to win
- Know the caution zone

Stalk and Lachenauer suggest five strategies that should be deployed in bursts of ruthless intensity:
- Devastate rivals' profit sanctuaries
- Plagiarize with pride
- Deceive the competition
- Unleash massive and overwhelming force
- Raise competitors' costs

This article came to mind when I read a newspaper article about the ruthless competition between DIY markets in the Netherlands. The five largest players in this market, Praxis, Gamma, Formido, Karwei and Hubo, are ganging up on Hornbach, a German company that is expanding into the Dutch market. They file appeals whenever Hornbach applies for a building permit, hoping to stave off increased competition or at least delay Hornbach's entry into local markets. With every new location, which is on average four times as large as those of the other players, Hornbach gains 1 percent market share. But although they have found a common enemy in Hornbach, these five players also frustrate each other's relocation and expansion plans, while the whole sector is involved in heavy price competition.

Hornbach fights back, however. It has six white trucks that drive past and park in front of its competitors' outlets, carrying the slogan: "Nobody beats Hornbach!". Customers are welcome to redeem their discount coupons from Praxis, Gamma, Formido, Karwei and Hubo at Hornbach, in combination with a 'lowest price guarantee'.

So, it can be said that the fight is no longer just for the customer's favour: these DIY markets are playing hardball instead.

Sunday, August 07, 2005

Breakthrough!!

Recently, I read the MIT Sloan Management Review article 'Beyond Best Practice' by Lynda Gratton and the late Sumantra Ghoshal, both from London Business School. Writing a book on strategy implementation and business architectures myself (see earlier posts), I was very pleasantly surprised when I came across the following passage:

[begin quote]

The origin of Nokia's modular structure can be traced back to the software technology heritage the firm began to develop in the 1980s. At that time, Nokia's software technology was built from two core elements: the software mantra of reusability, and standardization through the creation of a shared common platform. Reusability is considered crucial to software development. When programmers at Nokia built new software programs, up to 75% of the program typically was built by reconfiguring modules of previously developed software. This sped up the development process, reduced the cost of making new programs and ensured that knowledge could be rapidly shared. The technological leverage Nokia achieved by reusability and reconfiguration depended on the programmers' skills in slicing and sequencing the modules of previous programming.

This competence and philosophy of reusing modules, which began in the 1980s as an element of its technology, became the design foundation of the modular architecture of the company structure. In the software programs, the modular units that were reconfigured were pieces of written software. In the company architecture, the modular units that were reconfigured were modular teams of people with similar competencies and skills. In the same way that modular reconfigurations ensured that valuable software was not lost, the modular architecture ensured that valuable skills, competencies and team relationships that were held within teams of people were not lost or dissipated. In effect, the signature process of structural modularity has its roots in the software production process of reusability through modularity and reconfiguration.

Nokia's signature process of structural modularity also has its roots in a technology philosophy of shared common platforms and standardization. Reconfiguring different modules of software requires that each module be developed in a similar way with a similar underlying architecture. That is, it requires a high degree of standardization. For more than 20 years, a mindset, discipline and philosophy of reusability and standardization had pervaded Nokia. It was well understood that only through common tools, platforms, technologies and languages could speed be achieved. This became the backdrop to Nokia's signature process: the capacity to build modular corporate structure.

The quality of this signature process was tested in January 2004, when Nokia announced and then implemented what would represent a fundamental organizational shake-up for most companies. In order to focus more closely on changing customer aspirations, Nokia's nine business units were restructured into four. At the same time, in order to ensure speed of innovation and production across the globe, all the customer and market operations, product development operations, and manufacturing, logistics and support activities were reorganized on a companywide basis into three horizontal business units. This organizational change was made fully effective within one week and involved over 100 people assuming new jobs. The rest of the employees had no such change because the modular teams to which they belonged were simply reconfigured. The discipline, philosophy and mindset of reconfiguration through standardization and shared platforms, which was initially developed from the company's technology history, ensured that Nokia could skillfully and rapidly reconfigure its human resources to meet changing customer needs.

[end quote]

As we have based our own approach to business architecture on modularity as well, this passage provides the best possible example I could think of. It shows our thinking is not just theory, nor is it unproven. One of the world's largest and at times most admired companies has adopted exactly the kind of approach to organizational design that we suggest!

What's Another Year?

While working under Linux (yes, I'm a dual booter, but I'm sure you had already figured that one out) I came across a piece of text I wrote over a year ago on Linux on the desktop. Although I could add a few examples and might change a few others, I decided to publish the article unchanged (also because I am lazy...).

Viewpoint on Linux@Desktop

Linux and applications based on it have already proven themselves in the server market. Unknowingly, thousands of Windows desktops may already print and store files through Linux servers, thanks to Samba. Even Microsoft's .NET will soon have its open source alternative in Mono. However, the desktop is still a Microsoft stronghold. The question is whether Linux will ever be able to bring down the walls of this fortress. Working from the ideas of Clayton Christensen (The Innovator's Dilemma) and Geoffrey Moore (Crossing the Chasm), one may conclude that it is not impossible, although the Linux camp will have to play its cards right and take away the remaining pain points.

Let's first look at the appeal of Linux for three different types of users: corporate users, hobbyists and non-hobbyists, aka Joe Sixpack. [corporate users] Computer hobbyists tend to like Linux for the fact that it offers them more control over their system; it's easier to tweak the system and to tune it to their personal needs and preferences. Furthermore, they get access to a broad range of free or cheaper alternatives to major applications and (programming) tools. Some of these tools and applications are actually better and offer more functionality than commercial and/or proprietary ones. Of course, personal sentiments and anti-Microsoft attitudes also contribute to their preference for Linux. A current drawback is the fact that upon release not all hardware is supported, as interface and driver specifications are not 'open'. Although the real hobbyists will try to figure it out themselves, write a driver and share it with the rest of the community, others may choose the safe option and go for the main platform that is always supported by any vendor of 'networked' consumer electronics.

But why would the non-hobbyist go with Linux? One reason may be that computers with pre-installed Linux are cheaper than their Windows brethren, and instead of having to buy expensive commercial software packages, their owners have access to free or cheaper alternatives for most major applications. Given the recent wave of viruses and worms, the fact that Linux is less virus-prone may also play a role. This may, however, change once Linux gains more critical mass and it becomes more attractive for developers to write viruses and worms for this platform (though not to the same extent, as the Linux kernel is believed to be less 'leaky' than Microsoft's). Joe Sixpack's major objection against Linux is still its applications' incompatibility with Windows when it comes to the exchange of documents between the two platforms. Furthermore, when surfing the web, Joe may still run into situations where content is not available to him because it is only offered in a closed, proprietary Windows format that is not yet supported under Linux (the Wimbledon scoreboard and internet radio, for example). But if the general trend towards the use of open data formats continues and basic support (98%) for proprietary formats through initiatives like CrossOver Office, Wine, VMware, etc. is perfected, Joe will have ever more incentives to go with Linux.

Crossing the Chasm – The current Linux community is primarily made up of techies. But as the needs and experiences of these innovators do not match those of early adopters, Linux first needs to cross the chasm between them. This will require that Linux offer an ease-of-use and no-frills maintenance experience similar to what users have come to know while working with Windows and Apple. Furthermore, the exchange of data and documents with users of other platforms will be key to the widespread adoption of Linux by the early adopters and early majority. Initiatives like CrossOver Office, Wine and VMware contribute to this, as do easy-to-install-and-maintain distributions like Linspire and Vindalux Gentoo. These initiatives all contribute to Linux crossing the chasm over the next few years.

Innovator's Dilemma – From a functional and ease-of-use perspective, a Linux desktop underperforms relative to Windows and Apple, but the developments and initiatives named above mean that Linux will soon reach the minimum requirement level at which it becomes a true alternative to Windows and Apple. Once it reaches this level, the cost argument will become more important, which will benefit Linux even more.

Hence, I believe that Linux really stands a good chance of breaking Microsoft's hegemony on the desktop. Once similar functionality and ease-of-use are offered at different price points, the average consumer will go for the lowest-cost option, which is Linux. However, before we get to this point, all efforts should be focused on raising the platform's ease-of-use (aimed at Joe Sixpack instead of techies) and addressing incompatibility issues.

Frans van Camp
July 2004

Postscript: Based on my recent posts you may come to think that I am more of an IT consultant than a strategy consultant. Nothing could be further from the truth, however. As an engineer by training I still have a strong personal interest in these matters, whereas in my daily work I don't deal with things like this at all.