What Smart Techies Are Stealing From Finance
Editor’s Note: Alexander Haislip is a marketing executive with cloud-based server automation startup ScaleXtreme and the author of Essentials of Venture Capital. Follow him on Twitter @ahaislip.
“Portfolio” is a word that Silicon Valley loves. Venture firms have portfolios of startups, web designers have portfolios of their work and even public relations agencies have a portfolio of clients. Now chief information officers and IT architects have portfolios of computing power made up of physical servers, virtual machines and public cloud instances at multiple providers.
For most people, a portfolio is little more than an accumulation of individual decisions over time. Look in a typical VC’s portfolio, and you’ll see a storage locker stuffed with buzzword bingo startups slouching toward an orderly shutdown. A web designer’s portfolio? A collection of unrelated commissions.
CIOs don’t have the luxury of being so haphazard. They need a little more foresight when it comes to putting together a compute portfolio. They’re juggling functionality, availability, security and cost and can’t afford to drop anything. And they’re stealing concepts from the world of high finance to make it work.
Finance became a true science when Harry Markowitz invented Modern Portfolio Theory. His simple insight was this: each asset carries a risk/reward tradeoff, and there is a combination of assets, an optimal portfolio, that delivers the most reward for the amount of risk an investor is willing to take. Then Markowitz did a lot of math to show how an investor could get to that optimal portfolio.
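For readers who want the formal version, the standard textbook statement of Markowitz’s mean-variance problem looks roughly like this (the notation below is the usual one, added here for illustration rather than drawn from the article):

```latex
% Standard mean-variance formulation (textbook form, added for illustration)
\min_{w}\; w^{\top}\Sigma w
\quad\text{s.t.}\quad
w^{\top}\mu \ge r_{\mathrm{target}},\qquad
\mathbf{1}^{\top}w = 1,\qquad w \ge 0
```

Here $w$ is the vector of portfolio weights, $\mu$ the expected returns, $\Sigma$ the covariance of those returns, and $r_{\mathrm{target}}$ the return the investor insists on; sweeping $r_{\mathrm{target}}$ traces out the familiar efficient frontier.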
Today the most cutting-edge IT professionals are starting to discover the same idea for themselves. For them, the tradeoff isn’t between risk and reward; it’s between functionality and cost. Functionality encompasses a range of variables, from availability to security to sheer processing power. Cost, on the other hand, was never all that clear until now.
NAKED COST
Cloud computing exposes the naked cost of processing cycles. It strips away the long amortization of under-utilized physical hardware and confusing vendor contracts. It eliminates the abstraction of virtualization efficiency. It clarifies the ambiguous costs of IT employees. Public cloud vendors focus on a single metric: cents per hour.
The naked cost of computing, once exposed, re-orients your thinking. You’re constantly appraising whether a given job would be cheaper to run on Rackspace or internally.
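To make that appraisal concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (the cloud rate, the server price, the utilization) is a hypothetical placeholder; the point is only the shape of the comparison, a per-hour cloud rate against the amortized, utilization-adjusted cost of a box you own.

```python
# Illustrative back-of-the-envelope comparison, not real pricing.
# All figures below are hypothetical placeholders.

CLOUD_CENTS_PER_HOUR = 8.0            # hypothetical public-cloud instance rate
SERVER_PURCHASE_PRICE = 4000.0        # hypothetical physical server cost ($)
SERVER_LIFETIME_YEARS = 3             # straight-line amortization period
POWER_COOLING_ADMIN_PER_YEAR = 900.0  # hypothetical ops overhead ($/yr)
UTILIZATION = 0.35                    # fraction of hours the box does useful work

HOURS_PER_YEAR = 24 * 365

def internal_cents_per_useful_hour() -> float:
    """Amortized internal cost per hour of *useful* work, in cents."""
    yearly_cost = SERVER_PURCHASE_PRICE / SERVER_LIFETIME_YEARS + POWER_COOLING_ADMIN_PER_YEAR
    useful_hours = HOURS_PER_YEAR * UTILIZATION
    return yearly_cost / useful_hours * 100

def cheaper_venue(job_hours: float) -> str:
    """Compare total cost of a job in the cloud vs. on owned hardware."""
    cloud = CLOUD_CENTS_PER_HOUR * job_hours
    internal = internal_cents_per_useful_hour() * job_hours
    return "public cloud" if cloud < internal else "internal hardware"

if __name__ == "__main__":
    print(f"Internal: {internal_cents_per_useful_hour():.1f} cents per useful hour")
    print(f"Cloud:    {CLOUD_CENTS_PER_HOUR:.1f} cents per hour")
    print("Cheaper venue for a 100-hour job:", cheaper_venue(100))
```

With these made-up figures, a lightly utilized owned server works out to dozens of cents per useful hour, which is exactly the kind of gap the naked per-hour price makes visible.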
CIOs are thinking harder than ever before about the tradeoffs they make. They’ve got options now that they didn’t have before. They’re no longer stuck in a monogamous relationship with hardware vendors, so they’re thinking about the features that matter most and what they’re willing to pay for them. It’s clear that credit card data should run only on hardened machines optimized for security, but what about player data in a virtual world? The public cloud may be perfect when you launch a new online video game, but it may make sense to build your own private cloud when demand levels off.
The compute infrastructure has to fit the workload and the cost has to fit the budget.
OPTIMAL COMBINATION
The good news is that there’s an optimal portfolio of computing capacity—just like Markowitz laid out for stock and bond investors. Given a company’s functional requirements and its budget, there’s a perfect combination of physical servers, virtual machines and public cloud instances that maximizes the benefits and minimizes the cost.
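What follows is a deliberately toy sketch of that idea in Python. The resource classes, capacities, and prices are invented, and a real model would fold in availability, security, and workload shape rather than a single capacity number; this version simply brute-forces the cheapest mix that meets a requirement within a budget.

```python
# A toy sketch of "optimal compute portfolio" selection.
# Resource classes, capacities, and costs are invented for illustration.
from itertools import product

# (name, capacity units per instance, monthly cost per instance) -- hypothetical
RESOURCE_CLASSES = [
    ("physical server", 10, 600.0),
    ("virtual machine",  4, 180.0),
    ("cloud instance",   2,  90.0),
]

def optimal_mix(required_capacity: int, monthly_budget: float, max_each: int = 20):
    """Brute-force the cheapest mix that meets capacity within budget."""
    best = None
    for counts in product(range(max_each + 1), repeat=len(RESOURCE_CLASSES)):
        capacity = sum(n * cap for n, (_, cap, _) in zip(counts, RESOURCE_CLASSES))
        cost = sum(n * c for n, (_, _, c) in zip(counts, RESOURCE_CLASSES))
        if capacity >= required_capacity and cost <= monthly_budget:
            if best is None or cost < best[0]:
                best = (cost, counts)
    return best

if __name__ == "__main__":
    result = optimal_mix(required_capacity=50, monthly_budget=4000.0)
    if result:
        cost, counts = result
        mix = {name: n for (name, _, _), n in zip(RESOURCE_CLASSES, counts)}
        print(f"Cheapest feasible mix: {mix} at ${cost:.0f}/month")
```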
Getting to that optimal combination of compute resources isn’t easy. IT professionals need visibility into their existing IT assets and the ability to reorient their resources. It also requires forethought, a truly rare commodity.
It’s often stunning to see how little systems architects know about their own systems. We’ve all heard about rogue IT and the BYOD movement, and some confusion about those devices might be understandable. But not knowing the basics of what servers you have, what they’re running, how close they are to capacity and how much they cost you is shocking. As Peter Drucker once opined: If you can’t measure something, you can’t manage it. Visibility is the first step to control.
Better information is necessary for creating an optimal compute portfolio, but it isn’t sufficient. You also need the ability to swap one type of compute infrastructure for another. In finance, this corresponds to the concept of liquidity, or your ability to buy and sell assets on the open market. Getting that kind of liquidity in compute infrastructure is more complicated.
It’s long been possible to swap one machine out for a newer one, or to re-provision a workload. Virtualization makes it easier to move processing around, but you’re still locked into your underlying hardware and the operating systems you’ve loaded. Moving between cloud providers can be extremely tricky if you haven’t architected your instances with templates.
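One way to build in that portability, sketched below in Python, is to describe instances in a provider-neutral template and translate them into provider-specific launch parameters at the last moment. The provider names and size catalog here are invented for illustration and do not correspond to any real vendor’s API.

```python
# A minimal sketch of keeping instance definitions provider-neutral.
# Provider names and size mappings are illustrative, not real API values.
from dataclasses import dataclass

@dataclass
class InstanceTemplate:
    """Describe what a workload needs, not where it runs."""
    name: str
    vcpus: int
    ram_gb: int
    base_image: str  # an OS family, not a provider-specific image ID

# Hypothetical mapping from abstract requirements to provider-specific sizes.
SIZE_CATALOG = {
    "provider_a": {(2, 4): "small-2x4", (4, 8): "medium-4x8"},
    "provider_b": {(2, 4): "std.2", (4, 8): "std.4"},
}

def render(template: InstanceTemplate, provider: str) -> dict:
    """Turn one abstract template into a provider-specific launch request."""
    size = SIZE_CATALOG[provider][(template.vcpus, template.ram_gb)]
    return {"provider": provider, "size": size,
            "image": template.base_image, "name": template.name}

web = InstanceTemplate(name="web-frontend", vcpus=2, ram_gb=4, base_image="ubuntu-lts")
print(render(web, "provider_a"))
print(render(web, "provider_b"))
```

The same template can then be rendered against whichever provider is cheapest at rebalancing time, which is the compute analogue of being able to sell one asset and buy another.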
Most importantly, getting to an optimal compute portfolio requires forethought. You have to think carefully about the mix of functionality you’ll need and your willingness to pay for it. Not just now, but also in the future. You have to plan for the time when you’ll need to re-balance your compute portfolio. That means architecting your systems for portability from the outset.
Finance, when it’s done right, is a disciplined way of balancing tradeoffs and planning—that’s a rigor IT departments need to adopt as they build modern computing infrastructure.
Let’s just hope they don’t take it too far and start issuing compute default swaps.