Tuesday, February 26, 2008

The portal

2/25/2008

In 1980, a 3278 green-screen terminal cost (as I recall) about $6,000, not including the mainframe it attached to.

Along came the PC. It cost half that, and was self-sufficient.

IT tried to keep them out. They put too much power in the hands of ignorant users and you couldn't do serious computing on them anyway. Yes, the IT authorities made both arguments, simultaneously, and didn't even blush.

PCs leaked in despite IT's opinion. Distributed computing leaked in too. The economics made them unavoidable. Various pundits claim otherwise, but they are comparing the costs of PCs and distributed computing with the competition-deflated costs of mainframe computing, not the pre-PC high-margin early-1980s price tag.

Fast-forward to now. You can buy a 150 gigabyte drive for less than $100. For another hundred bucks you can buy a USB external disk to back it up.

For 150 backed-up gigabytes, IT would charge $1,500 per year.

From my backup drive I can restore any file in five minutes. IT would take more than a day. Self-service? Forget it.

In 1980, IT completely locked down the 3278 terminal, by definition. IT now locks down the PC, not by definition but by choice. Meanwhile, at home, people install whatever they please, and in spite of what the doomsayers tell you, few run into insurmountable problems. Those who do sheepishly ask their teen-aged children to help them. Their teenagers give them the same eyeball rolls they get from the Help Desk staff at work, but much better service.

Nothing in this comparison is fair. Fair has nothing to do with it. My PC at home beckons, saying "Yes, you can do that too." My PC at work says, "No, you can't." Your end-users experience that contrast every working day.

Preaching to them that "it's a business computer, to be used only for business purposes" isn't persuasive, because they know something we in IT often ignore: It isn't really a computer.

Oh, technically that's what it is, but technically doesn't matter. The PC is a portal to a universe of possibilities. While the word "cyberspace" has fallen out of use, the idea of cyberspace is alive, well, and built into the perception of everyone who looks at a screen while manipulating a keyboard and mouse.

It might be time ... past time ... for IT to look at its job in a new way.

Try this on for size: Imagine you ran IT as if it embraced this PC-as-portal perspective. As if IT's job were to manage one corner of that universe of possibilities.

What would you do differently?

Let's take it a step further. Let's look not just at the PC, but at work as a whole, from the employee's point of view. That shouldn't be too hard. We in IT are employees too, when we aren't busy being IT professionals.

From the employee's point of view, the job is, in addition to being a way to earn a living, a place for social interaction; for developing the self-esteem that comes from creating value and achieving important things; for structuring time and staying occupied; and for exercising the brain and keeping it from becoming stale.

Few employees draw a hard boundary around their work life, keeping it psychologically distinct and independent of the rest of their life. They are the same people in the office as out of the office.

We in IT are stuck in a 1950s industrial view of the workplace. Much of the workforce is post-industrial in perspective. They don't "achieve work/life balance." They just live their lives, wherever they happen to be at the moment -- sometimes in the office, sometimes out of it.

In the office they research reports, create presentations, check their investment portfolios, answer business e-mail, answer personal e-mail, make business phone calls, and answer personal phone calls.

Out of the office they think about the reports, edit the presentations, check their investment portfolios, answer business e-mail, answer personal e-mail, make business phone calls, and answer personal phone calls.

Employees live a significant part of their lives in the universe of possibilities they reach through their PCs, their Blackberries, their Treos, their iPhones.

The economic gap between self-sufficient computing and central IT that drove the PC revolution is back. The existential gulf separating IT's perception of work from employees' perception of work is new, and wider. We in IT had better figure it out, or business users will figure it out without us.

Because for us, a PC is an expensive, hard-to-support business resource. But for them it's a portal to an entire universe they can buy at Costco for a few hundred bucks.

-----------------


Tuesday, February 19, 2008

Capability Maturity Model revisited

2/18/2008

I wrote harsh words about the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) ("The connection between leadership and process in IT," Keep the Joint Running, 1/14/2008).

Nobody wrote to disagree. Not exactly. Hillel Glazer, an SEI insider, politely informed me that I was hopelessly out of date: SEI abandoned CMM in 2001 in favor of Capability Maturity Model Integration (CMMI). He agreed to an interview to explain the situation. Mike Konrad, one of CMMI's authors, reviewed Hillel's answers and wrote me independently to endorse them.

Bob: SEI describes CMMI as a general-purpose process framework that can be used to create just about any sort of business process -- not just software methodologies. Do we need yet another process methodology, given that we already have Lean, Six Sigma, Lean Six Sigma, Theory of Constraints, and Reengineering?

Hillel: To be more precise, "CMMI-DEV" is not for creating business processes or for creating (developing) products, but for improving the business processes of product and service development. CMMI assumes a given organization already has business processes and desires to improve them. What happens all too often is that organizations are compelled to use CMMI but start out not having or knowing their processes, and so CMMI becomes both how they define their processes and how they improve them. It's a fundamental -- and widely held -- misunderstanding of CMMI.

Bob: In his 2005 interview, Capers Jones was openly dismissive of Agile and similar adaptive, iterative methodologies. With CMMI, has SEI softened its stance?

Hillel: Capers Jones is not a reviewer or contributor to CMMI, and I wouldn't say SEI folks and Capers Jones see eye to eye on many things.

When CMMI was being written, reviewers were criticizing SEI for making it so "waterfall" centric. The authors were genuinely dumbfounded as to what in the model gave it that "anti-iteration" connotation. CMMI is 100% agnostic as to the product development life cycle an organization chooses to use, and always has been. Also, the SEI has created a team-oriented development method called "Team Software Process" (TSP), which has been demonstrated to be very agile-oriented.

Bob: Does any of this really matter to bread-and-butter IT shops? Most companies don't develop -- they integrate and configure purchased applications. The packaged methodologies, waterfall or iterative, weren't designed for this world. Does CMMI offer something useful for it, or is it also intended for new development?

Hillel: Right now, ITIL probably provides more value than CMMI to pure-play IT shops. However, stay tuned, there's another CMMI in the works for "services" organizations that goes beyond a static library of management and service workflows and offers a continuous improvement mechanism to companies providing services -- IT or otherwise.

Bob: I've recommended culture change as the first step in implementing a more process-driven approach in any organization -- to create a "culture of process" -- and to make sure process owners are fully educated in process management. Do you agree? If not, how do you recommend starting down the CMMI path?

Hillel: I completely agree. When working with companies that do need a culture shift, the first thing I get them to internalize is CMMI's "generic practices," which provide the process acculturation so many organizations lack.

In many cases, no matter how much power they've been given, you can't start with the CIO; you must start with the CEO. The CEO needs to see the business value of being process-oriented. Otherwise, the CIO will be placed in an untenable position at the business level.

Bob: Anything else?

Hillel: What's most often misunderstood about this is that "CMMI is a model, not a standard." CMMI contains no processes or procedures of its own, and is not a thing that can be complied with. That's a fundamental shift in expectation and experience for most CMMI users, who are used to complying with standards. Wrapping one's head around a model, and being able to apply it, takes a whole different set of aptitudes than complying with a standard.

We hear "just tell me what to do" all the time. Combine that with the impatience (read: short attention span) and lack of process culture among too many executives -- a topic you regularly rail against -- and you have a recipe for process disaster.

Bob: Last question: Do you agree Keep the Joint Running is the most insightful commentary in the IT industry? Or would you rather have me twist your responses in creative ways that make you appear to be the biggest schlemiel in the trade?

Hillel: LOL!

Bob: Don't say I didn't warn you ...

-----------------


Tuesday, February 12, 2008

Carr-toonish engineering

2/11/2008

If someone does something that's patently ridiculous, but manages to draw enough attention that it generates a lot of discussion, has that person performed a valuable service or just wasted our time?

But enough about Paris Hilton, Kiefer Sutherland and Lindsay Lohan. In our own industry we can ask a similar question about Nicholas Carr, who, as mentioned last week, has predicted that the "technical aspect of IT" (which in Carr's world is IT infrastructure management) will move to the Internet, which will become the CPU-cycle-provisioning equivalent of an electrical power plant.

With the technical part gone, handling the non-technical remainder (I bet you didn't know application design, development and integration are non-technical undertakings) won't require a separate in-house IT organization any more. Instead, applications will become a mixture of Software as a Service (SaaS) offerings and business-department-developed code that runs on the utility computing infrastructure.

Sometimes, fault-finding isn't the best way to evaluate a new idea. A superior alternative is to be helpful and positive -- to figure out how to make it work (for a historical example, see "Inhaling network computers," KJR, 1/13/1997).

What will be required for IT to go away?

First of all, let's assume Carr isn't simply "predicting" the success of IT infrastructure outsourcing, as I contended last week -- that he's serious about utility computing in the electrical generation sense.

In the deregulated electrical power industry, generation companies pump 60-cycle current onto the grid, metering how much they provide. End-customers draw electricity off the grid. Their local provider acts as a broker, buying current from the low-cost provider and metering consumption.

This is, by the way, how you can buy wind-generated electricity if you prefer. You don't really get the exact electrons pushed onto the grid by a wind farm. You simply instruct your broker to buy enough of them to satisfy your consumption. The rest is just balancing the books.

For utility computing to work, we would need a similar metering and billing infrastructure. We'd need a lot more, too. For example:
  • Web 3.0: We will need a grid computing architecture that runs applications wherever CPU cycles happen to be cheapest (with suitable metering and billing -- see above).
  • Virtualization: This will have to be perfect, so that the CPU cycles you buy can run your applications no matter what operating system they were written for.
  • Quality of Service: Different applications need different levels of performance. Buyers will need a way to specify how fast their cycles have to be, and without the help of those pesky engineers who would be housed in an IT department if it hadn't been disbanded.
  • AI-based data design: With professional, centralized IT evaporated into the business, which will be building whatever custom applications remain, there will no longer be an organizational home for data designers. The only alternative is technology smart enough to handle this little engineering chore.
  • Automated, self-tuning pre-fetch: Last week's column demonstrated the impact of latency in the communications channel on linked SaaS-supplied systems -- the speed of light slows table joins to a crawl.

    This is fixable, so long as systems are smart enough to automatically pre-fetch records prior to needing them. Every SaaS vendor will have to provide this facility automatically, since businesses will no longer employ engineers able to manually performance-tune system linkages. (A sketch of the manual version follows this list.)

  • New security paradigm: Sorry about the use of "paradigm." It fits here. You'll be running all of your applications on public infrastructure, on the wrong side of the firewall (which -- good news! -- you'll no longer need). Think it will be hard for someone with ingenuity to plant a Trojan that monitors the cycles and siphons off every business secret you no longer have?
  • AI-based data warehouse design: Let's assume for the sake of argument that the Carr-envisioned future happens as he predicts. You will still want to mine all of your data, in spite of it being schmeared out across the SaaS landscape.

    I see two choices. The first, almost unimaginable, is an efficient, distributed, virtual data warehouse, reminiscent of the sea shell collection Steven Wright keeps scattered on beaches all over the world.

    The alternative is the same data warehouse technology we've grown accustomed to. Except we don't have IT anymore, so we'll need an AI design technology to pull it together for us, performance optimized and ready for analysis.
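
About that pre-fetch sketch: manual pre-fetching is ordinary pipelining -- keep batch N+1 in flight while batch N is being processed, so the wire latency overlaps with useful work. Here's a minimal sketch in Python; fetch_batch(), BATCH_SIZE, and the simulated 40-millisecond round trip are hypothetical stand-ins of mine, not any vendor's actual API. The part that doesn't exist yet is making this automatic and self-tuning, so nobody has to write it.

```python
# A sketch of latency-hiding pre-fetch: while batch N is being processed,
# batch N+1 is already in flight. All names and numbers here are
# illustrative assumptions, not a real SaaS API.
import time
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 10_000  # assumption: the remote system supports bulk reads

def fetch_batch(offset: int) -> list[int]:
    """Stand-in for one high-latency bulk read from a remote SaaS system."""
    time.sleep(0.04)  # simulate a ~40 ms coast-to-coast round trip
    return list(range(offset, offset + BATCH_SIZE))

def process(records: list[int]) -> None:
    """Stand-in for the local work done on each batch."""
    pass

def run(total_rows: int) -> None:
    with ThreadPoolExecutor(max_workers=1) as pool:
        in_flight = pool.submit(fetch_batch, 0)                # prime the pipeline
        for next_offset in range(BATCH_SIZE, total_rows, BATCH_SIZE):
            records = in_flight.result()                       # wait for batch N
            in_flight = pool.submit(fetch_batch, next_offset)  # launch batch N+1
            process(records)                                   # overlaps the fetch
        process(in_flight.result())                            # drain the final batch

if __name__ == "__main__":
    run(100_000)  # ten batches; fetching and processing overlap
```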

Look far enough into the future and all of this is possible. Heck -- look far enough and broadcast power is possible.

Now, look ahead just far enough that you're at the end of any useful business planning horizon. You'll reach a very different conclusion.

-----------------


Wednesday, February 6, 2008

Electile dysfunction

None of the candidates raise my interest. - ap's whiteboard

Tuesday, February 5, 2008

Carr-ied away

2/4/2008

Nicholas Carr has a new theory -- that internal IT is reaching the end of the line, because information technology will follow the same commoditization curve that electrical utilities followed a century ago.

Okay, it isn't really a new theory, although from the attention Carr is getting for his new book, which should have been titled The Joys of Griddish but instead is called The Big Switch: Rewiring the World, from Edison to Google (W. W. Norton, 2008), you'd think he had come up with it all by himself.

Carr, you'll recall, previously theorized that IT doesn't matter ("We ain't there quite yet," Keep the Joint Running, 6/16/2003). His reasoning: Every business has access to the same information technology, so IT can't provide a sustainable strategic advantage.

That his old theory was fatally flawed is easily demonstrated: Every business has access to the same everything as every other business -- the same technology, ideas, people, processes, capital, real estate, and silly articles published in the Harvard Business Review because their authors were once on its editorial staff.

Were we to accept Carr's past logic and apply it equally to all subjects, we would despairingly conclude that nothing matters. Now, not content with turning us all into depressed nihilists, Carr has discovered (and we should be pleased for him) the Internet and the possibility of outsourcing all of the computing cycles of every business to it.

What Carr has visionarily discovered, while tossing in terms like grid and utility computing to prove he is Fully Buzzword Compliant, is IT infrastructure outsourcing, a mere three decades after it began. Meanwhile, many very large corporations that outsourced their IT infrastructure have found that economies of scale reach a point of diminishing returns -- enterprises reach a size where running their own data center costs less and provides more value than contracting with an outsourcer.

But never mind this little quibble. After all, many businesses aren't that big and data center outsourcing does make sense for them. It's nothing new and makes no difference. It's business as usual right now, and companies still need an IT organization, because ...

Applications and the information they process are where the IT rubber meets the business road. Computer programs are not indistinguishable from one another. The information in the data repositories they control is unique, valuable, and (assuming corporations are careful about information security) private.

Carr hasn't entirely ignored this reality in "his" theory of utility computing. He merely waves it off as trivial -- something easily solved through a combination of Software as a Service (SaaS, which if you've been asleep for a while means hosted solutions) and ... here's an exact quote ... "the ability to write your own code but use utility suppliers to run it and to store it. Companies will continue to have the ability to write proprietary code and to run it as a service to get the efficiencies and the flexibility it provides."

With unparalleled perspicuity, Carr has figured out that companies can write their own code and then run it in an outsourced data center. Hokey smokes!

Carr's New Insight is that responsibility for applications will move "into the business" which is why IT will eventually go away. He endorses the notion that businesses can easily integrate disparate SaaS-provided applications and databases across the Internet using a few easy-to-use interfaces.

What nonsense. Most internal IT organizations long ago changed their focus. They seldom develop. Mostly they configure and integrate purchased applications.

Nothing about this is easy. Integrating multiple applications and their databases takes complex engineering, not facile hand-waving. Moving responsibility "into the business" means nothing more than managing multiple, smaller, poorly cooperating IT departments instead of a single, larger, centralized one. Ho hum.

Nor can integrating multiple SaaS systems work in a high-volume production environment. That's because of a concept network engineers understand but self-appointed "experts" don't: latency.

Imagine a financial services company. Customer management is SaaS in California; loan operations is SaaS in Massachusetts. You have to update 10 million customer accounts every day with interest computations. The minimum latency imposed by the laws of physics on an ordinary two-table join adds more than 45 hours to this little batch run.
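
Don't take the number on faith; the arithmetic is short. A back-of-envelope sketch, assuming one request/response round trip per account and a roughly 4,200-kilometer path between the two data centers (my figures, chosen for illustration):

```python
# Wire latency alone for a cross-country two-system join, one round trip
# per row. The distance and speeds are illustrative assumptions.
ROWS = 10_000_000      # customer accounts updated daily
DISTANCE_KM = 4_200    # approximate California-to-Massachusetts path
SPEEDS_KM_S = {
    "vacuum (hard physical floor)": 300_000,  # speed of light in a vacuum
    "optical fiber (roughly 2/3 c)": 200_000,
}

for label, speed in SPEEDS_KM_S.items():
    round_trip_s = 2 * DISTANCE_KM / speed
    hours = ROWS * round_trip_s / 3600
    print(f"{label}: {hours:.0f} hours of pure latency")

# vacuum (hard physical floor): 78 hours of pure latency
# optical fiber (roughly 2/3 c): 117 hours of pure latency
```

Either way, the speed of light by itself blows past 45 hours before a single byte of payload moves, and before queuing, serialization and processing time are added.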

Well-integrated computing environments come from serious engineering. Phrases like utility computing and grid might obscure this fact behind a fog of vagueness. They don't eliminate it.

I have my own vision for the future of IT. In it, only people who have written code, designed databases, administered servers or engineered networks at some time in their careers will get to write about IT's past, present and future.

The rest can include themselves out.

-----------------


Monday, February 4, 2008

Beat Me Daddy, Eight to the Bar

Natives who beat drums to drive off evil spirits are objects of scorn to smart Americans who blow horns to break up traffic jams. - Mary Ellen Kelly, via TFTD-L@TAMU.EDU

a man after my own heart

http://www.bbc.co.uk/comedy/onefootinthegrave/index.shtml