Making IT purchasing decisions isn’t easy – at least if you’re doing it right
By Alan Davidson, Chief Information Officer, Broadcom Inc.
The semiconductor industry is an intensely margin-focused business, so it’s important to be effective and efficient in engineering and then in manufacturing your chips and your wafers. As we say back home in Scotland, “Many a mickle makes a muckle!” That is, small amounts of savings add up.
For the last 19 years, my job has been keeping the IT contribution to those margins as tight as possible. Whether the issues relate to the cloud, hardware purchases, or approvals to buy more storage or computing power, I have thought a lot about how to keep costs down. A lot of my work can be boiled down to answering two questions, over and over:
- Do we need this service?
- How much of this service do we need?
Do we need this service?
For an example of what goes into making a purchasing decision, let me walk you through how we make our cloud choices.
Before you commit to a public cloud or a data center, it is important to have a fundamental understanding of your workload, your scale, and the reasons for, and the likely ramifications of, making the move.
If you are running a modern application that can scale and be managed automatically, it’s probably good to take a look at a public cloud. But if you run a three-tier model with legacy architecture that has not changed in 20 years, a public cloud may not be the best option.
On the other hand, if your workload doesn’t fluctuate much, moving to the cloud is more of a business decision than a technology decision. The cloud is a data center, with servers, storage systems, and network infrastructure such as switches and routers, all of which you already have. If you go to a public cloud, you give up flexibility, because you have to fit your foundational structure to AWS, Google Cloud, or Azure; they won’t change for you. You will also need to know your network structure in a lot more detail, so don’t expect to make do with fewer infrastructure people.
We use the public cloud mostly in our software development business. Broadcom has 16 data centers, but we are never going to have the geographic spread that AWS, Google Cloud, and Azure do. This is where public clouds are helpful to us, because data compliance rules are easier to meet when you have data centers in different places. We also have fairly modern software design and architecture on this side of the business, which can take advantage of the capabilities available in the public cloud.
In Engineering and Design, the decision comes down to cost. If you had unlimited scale, could you really design a chip in 9 months instead of 12? Is it worth the cost penalty for you to do that? Maybe it is, maybe it’s not, but either way, it really is more of a business than a technology decision.
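That business calculation is simple enough to sketch in numbers. Here is a minimal back-of-envelope model of the 9-months-versus-12 question; every figure in it is a hypothetical assumption for illustration, not an actual Broadcom cost:

```python
# Back-of-envelope: is it worth paying for burst cloud capacity to
# compress a 12-month chip design schedule to 9 months?
# All dollar figures below are illustrative assumptions, not real costs.

def schedule_tradeoff(on_prem_monthly, cloud_monthly,
                      baseline_months, accelerated_months,
                      value_per_month_earlier):
    """Net benefit of accelerating (positive means it pays off)."""
    baseline_cost = on_prem_monthly * baseline_months
    accelerated_cost = cloud_monthly * accelerated_months
    extra_spend = accelerated_cost - baseline_cost
    value_gained = value_per_month_earlier * (baseline_months - accelerated_months)
    return value_gained - extra_spend

# Hypothetical: $1M/month on-prem, $2.5M/month with cloud burst,
# and each month of earlier market entry worth $2M.
net = schedule_tradeoff(1_000_000, 2_500_000, 12, 9, 2_000_000)
print(f"Net benefit of accelerating: ${net:,}")
```

With these particular assumptions the acceleration loses money; raise the value of a month of earlier market entry and the answer flips. That sensitivity is exactly why this is a business decision rather than a technology one.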
But don’t underestimate the degree of concern your customer will have about your choice of cloud, especially banks, insurance companies, and aircraft manufacturers. The global multinationals focus very heavily on where their data resides, and how you’re handling it. Audit compliance, data privacy and the regulations around those issues are key concerns.
However, customer preferences are often not founded on fact. Some people love Microsoft and hate Google; others love Google and hate AWS, or love AWS and hate Microsoft. Why are you going to Google? That’s a consumer company. Why are you going to AWS? They’re a grocery store. Why are you going to Azure? Microsoft makes Windows. The fact is that all three of them operate in compliance with all the security rules. Yes, there are subtle differences between them, but ultimately it’s a fairly level playing field. On the back end, they all work, they’re all compliant, and they all protect your data. They really do. But that reality won’t keep your clients from having very strong preferences.
How much of this service do we need?
The other variable is demand from our internal customers.
Over the last 20 years, the rate of change has been immense – everything keeps getting smaller, faster, and crazier. The size of a technology node has gone from 48 nanometers to 24 to 16 to 7 to 5. We can now print billions upon billions of transistors on a chip.
The amount of computing power, storage, and network bandwidth required to design and make these chips has continued to expand. Going from 7 to 5 nanometers required 4X the computing power and memory to engineer. What was a 10-terabyte design is now a 40-terabyte design. Instead of billions of files, a design now involves trillions upon trillions of files, and that leads to immense pressure to buy more servers, more storage, and more network bandwidth. We now host 100 petabytes of storage to accommodate all our designs, spread across more than 15,000 computers.
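The growth curve described above compounds quickly. A small sketch of the arithmetic, using the roughly 4X-per-node figure from this article (the designs-per-year number is a made-up illustration):

```python
# Storage demand roughly quadruples with each process-node shrink:
# a 10 TB design at 7 nm became a 40 TB design at 5 nm.
# designs_per_year below is a hypothetical figure for illustration.

def projected_storage_tb(design_tb, designs_per_year, node_shrinks,
                         growth_per_shrink=4):
    """Total design storage (TB) needed after some number of node shrinks."""
    return design_tb * (growth_per_shrink ** node_shrinks) * designs_per_year

print(projected_storage_tb(10, 100, 0))  # today: 1000 TB
print(projected_storage_tb(10, 100, 1))  # one shrink later: 4000 TB
print(projected_storage_tb(10, 100, 3))  # three shrinks later: 64000 TB (64 PB)
```

Three node shrinks turn a petabyte of design data into tens of petabytes, which is how a fleet ends up hosting 100 petabytes.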
Fortunately, the amount of storage you can fit in a 2x2 rack in a data center is 10 to 20 times what it was five or six years ago. Solid-state storage allowed us to reduce our overall footprint by 10X, which was a staggering gain from a cost-optimization perspective.
We also ask our engineers to simplify their designs and achieve more with less compute and storage. And so far, they’ve been able to make it work. Setting boundaries forces a discipline that is otherwise not a priority, and while it’s not as fun as creating the latest 5-nanometer chip, it’s just as necessary.
- Cloud isn’t for everybody. If you are running an old system, you may be better off keeping it on premises.
- Businesses that must cope with volatile demand or diverse regulatory needs are good candidates for a global cloud service.
- Customers often have unfounded prejudices about one or more of the cloud services, but those prejudices must still be taken into account.
- Storage is becoming denser and more efficient, but in the semiconductor industry these gains are being outstripped by demand.
- If your internal computing and storage needs are growing rapidly, it’s important to encourage your developers to streamline their designs.