By Asaf Ezra, Co-Founder & CEO, Granulate

As businesses reevaluate their profits and losses to prioritize the expenses that matter the most in today’s unpredictable economy, there’s massive potential for tech growth and transformation projects to fall by the wayside this year.

Gartner forecast that global IT spending would fall 8% year over year in 2020, with sweeping drops across all IT sectors as a result of the COVID-19 pandemic and the full impact yet to be realized in 2021.

The demand for high-performance, consistent digital experiences continues to grow as the world rapidly shifts to a digital-native economy. When the novel coronavirus sent shock waves around the world, businesses were underprepared to support the sudden surge of living and working entirely online.

A shift to remote work – increasing costs and strain

The shift to remote work has strained infrastructure and increased cloud computing costs, and CIOs and CTOs have had to be vigilant in prioritizing and managing enterprise tech expenditures. Though they may be hesitant to invest at a time when reducing spending is at a premium, targeted investments might be the easiest, most cost-effective way to bring down total IT infrastructure spend.

While they try to offset some of the added costs of a remote workforce, they may be unaware that the best and fastest opportunity for reducing compute costs lies in the most unlikely of places – the operating system.

On the business front, advance preparation does not necessarily mean adopting the newest technologies as they become available on the market. In fact, the simplest place to start is optimization. That may seem like a no-brainer, but optimization’s benefits beyond cutting costs go largely overlooked. By taking the time to identify underutilized systems already in place, businesses can put existing capacity to work instead of spending more on data servers, maintenance and storage. A bonus: greater mindfulness about corporate responsibility and sustainability.

To reduce financial strain and minimize impact on the rest of the organization, the best low-cost digital IT initiative that enterprises should focus on right now is improving operational efficiency by optimizing production IT infrastructure.

The Optimization Advantage

Enterprises end up with inefficient infrastructure because of lift-and-shift cloud migration approaches and legacy practices of over-provisioning. Solving for inefficiencies like these can be daunting, slow-going and costly, no matter the size of the organization or the complexity of the infrastructure.

Optimizing resource usage solves another common issue for enterprises: wasted cloud spend. According to the Flexera 2020 State of the Cloud Report, which focuses on infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), enterprise IT professionals surveyed estimate their organizations waste 30% of cloud spend, while struggling to accurately forecast expenditures. However, Flexera, in working with customers to identify waste, has found that actual waste is on average 35%, and sometimes even higher. Not surprisingly, for the fourth year in a row, optimizing existing cloud use remains the number one priority among companies surveyed.

There are many actions that can be taken to alleviate this waste of money and vital resources. Firms with existing data centers should be looking at how their IT infrastructure can be optimized to reduce power consumption while improving server performance. DevOps teams can increase utilization through better resource matching between the infrastructure and the application, which directly leads to more efficient use of the infrastructure, fewer compute resources required and, therefore, less energy consumed. This can be achieved through better infrastructure provisioning and tooling as well as sizing and placement policies, as the sketch below illustrates.
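To make the resource-matching idea concrete, here is a minimal, hypothetical right-sizing sketch in Python. The 95th-percentile rule, the 30% headroom and the sample numbers are assumptions for illustration, not any vendor’s algorithm.

```python
# Illustrative right-sizing sketch: derive a resource request from observed
# utilization, with a safety margin. The p95 rule and 30% headroom are
# hypothetical assumptions for this example.

from statistics import quantiles

def recommend_request(samples_millicores, headroom=0.3):
    """Suggest a CPU request (millicores) from observed usage samples."""
    # Use the 95th percentile of observed usage rather than the peak,
    # then add headroom so bursts still fit.
    p95 = quantiles(samples_millicores, n=100)[94]
    return int(p95 * (1 + headroom))

# Example: a service provisioned with 4000m that typically uses far less.
observed = [220, 310, 280, 450, 390, 510, 470, 330, 290, 360]
print(recommend_request(observed))  # roughly 650m instead of the 4000m provisioned
```

The point is not the specific formula but the practice: measure what the application actually uses, then size the infrastructure to that reality rather than to worst-case guesses.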

Optimizing production infrastructure to drive better IT efficiency at reduced cost can hit all of these targets in real time. These methods are low-cost in terms of human capital, because the process is automated and maintenance overhead shrinks, and in terms of financial capital, because the ROI on the investment is clear and quick.

Demystifying Optimization

Optimization isn’t a single solution; rather, it’s a whole category of tactics a business can implement at the system level to improve efficiency across the board. Focusing on automated and autonomous solutions is a mindset that can result in a high ROI.

This category of solutions includes real-time continuous optimization, workload orchestration and automated configuration tuning.

Real-time continuous optimization solutions help enterprises improve production infrastructure performance by customizing the operating system and kernel to fit current application needs. Optimizing the operating system’s decision-making in real time for application-specific goals leads to significant performance improvement and cost reduction; the sketch below shows the basic idea.
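As a rough illustration only, and not a description of any particular vendor’s agent, the following Python sketch adjusts two real Linux tunables based on hypothetical workload signals; the thresholds and chosen values are assumptions.

```python
# Minimal sketch of the idea behind real-time OS tuning: observe a workload
# signal and nudge a kernel tunable toward it. Thresholds and values here
# are illustrative only, not a production policy.

import subprocess

def set_sysctl(key, value):
    # Applies a kernel tunable; requires root privileges.
    subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)

def tune_for_workload(pending_connections, memory_pressure):
    # A connection-heavy service benefits from a deeper accept queue.
    if pending_connections > 1024:
        set_sysctl("net.core.somaxconn", 4096)
    # A latency-sensitive, memory-hungry workload prefers less swapping.
    if memory_pressure > 0.8:
        set_sysctl("vm.swappiness", 10)
```

A real continuous-optimization agent works at a much finer grain than this, but the principle is the same: the operating system’s defaults are generic, and aligning them with the running workload pays off.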

The air-traffic controller

Consider an air traffic controller, who keeps airports running smoothly by directing aircraft and managing airspace and landing patterns. The controller must monitor air traffic data and make resource allocation decisions so that all the moving parts stay in sync, accidents are avoided and strict timing constraints are met. These dynamic prioritization and scheduling decisions happen continuously, in real time, against constantly changing goals.

Could you imagine an air traffic controller doing the job well without knowing the airplanes’ flight routes? Like air traffic controllers, operating systems perform complex resource management, requiring both a detailed understanding of each task and a big-picture view of the entire system at once.

Unlike an air traffic controller, operating systems today do not have the full picture – they are unaware of the application and workload running on the server. Operating systems make resource scheduling decisions based on a predefined algorithm without adapting to the running workload, regardless of the specific application goals.

Workload orchestration solutions are designed to provide automated resource management that matches the workload to the underlying resources. With hundreds, if not thousands, of configuration options across applications, servers and cloud providers, configuration tuning solutions help find the combination of settings that drives the best infrastructure performance.

These resource management solutions aim to let organizations increase utilization and achieve better cost-efficiency. Automated configuration tuning solutions focus on ensuring the right configurations are set for each application, adjusting settings until they find the right fit: in essence, a guided search over the configuration space, as the sketch below shows.
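A minimal sketch of that search, assuming a user-supplied benchmark function and hypothetical parameter names; real tuning products use far smarter search strategies than this exhaustive grid.

```python
# Hedged sketch of automated configuration tuning: try candidate settings,
# measure the application under each, and keep the best performer.

import itertools

def tune(benchmark, search_space):
    """benchmark(config) -> throughput in requests/sec (higher is better)."""
    best_config, best_score = None, float("-inf")
    keys = list(search_space)
    for values in itertools.product(*(search_space[k] for k in keys)):
        config = dict(zip(keys, values))
        score = benchmark(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Example search space for a hypothetical JVM-backed service.
space = {
    "heap_gb":        [2, 4, 8],
    "gc_threads":     [2, 4, 8],
    "worker_threads": [16, 32, 64],
}
# best, score = tune(run_load_test, space)  # run_load_test is user-supplied
```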

Good for Business, Good for the Environment

Underutilizing data servers is not only costly but also an irresponsible business practice.

We should have been better prepared, because this surge wasn’t so much a once-in-a-lifetime, historic anomaly. It’s more like an early dose of what the near future has in store. According to analyst firm IDC, the total amount of data created worldwide will reach 175 zettabytes (ZB) by 2025, up from 33 ZB just two years ago. To put that quantity in perspective, writing the number for a zettabyte requires 21 zeros, and it’s been said one zettabyte is as much information as there are grains of sand on all the world’s beaches.

It’s easy to think of data as having no physical form, and thus no environmental impact. Unfortunately, that is not the case. It takes an enormous amount of electricity to operate a data center. According to the International Energy Agency, last year data centers consumed approximately 200 terawatt hours (TWh) of electricity, or roughly 1% of global electricity demand (as much as some entire countries consume), while contributing about 0.3% of overall carbon emissions and, in turn, to climate change.

Migrating business-critical applications and data

Today, we see more and more enterprises migrating their business-critical applications and data to the cloud to achieve greater agility and better operational and energy efficiency. The cloud, whether public or private, relies on data centers. More data centers mean more electricity usage, more carbon emissions and greater risk to the planet.

Amazon Web Services (AWS), the world’s leading cloud service provider, saw its carbon footprint increase by 15% last year, despite its promise to become carbon neutral by 2040. Yes, AWS should be applauded for its ongoing investment in renewable energy across its data center portfolio, but the company also recognizes that it will take some time for its green investments to yield significant results.

Now the world stands on the cusp of the Fourth Industrial Revolution, in which data-heavy technologies such as IoT, autonomous vehicles, robotics and augmented/virtual reality will become prevalent in daily life. We can’t stop the generation of this staggering quantity of data, but we can, and must, reduce its environmental impact.

The good news is, this doesn’t require us to sacrifice the functionality of data centers. We just have to increase their efficiency and performance.

First Serve: Double Fault

That shouldn’t be that hard. Across all verticals, servers and processors are dramatically underutilized or mis-utilized. That is, there is far more capacity whirring away than is actually being used.

While underutilization estimates vary, according to Computer Economics nearly 80% of production UNIX servers and more than 90% of Windows servers run at less than 20% of capacity, while still drawing 30-60% of their maximum power. The rough arithmetic below shows what that means for energy spent per unit of useful work.
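Using the figures above with a hypothetical 500 W server, a quick back-of-the-envelope calculation compares an underutilized machine with a better-utilized one; the 60%-utilization, 80%-power comparison point is an assumption for illustration.

```python
# Back-of-the-envelope arithmetic on the figures above (illustrative only):
# a server at 20% utilization drawing 60% of its maximum power spends more
# than twice the energy per unit of useful work as one at 60% / 80%.

max_power_watts = 500  # hypothetical server

underused = {"utilization": 0.20, "power_fraction": 0.60}
well_used = {"utilization": 0.60, "power_fraction": 0.80}

def watts_per_unit_of_work(server):
    return (server["power_fraction"] * max_power_watts) / server["utilization"]

print(watts_per_unit_of_work(underused))  # 1500 W per "unit" of work
print(watts_per_unit_of_work(well_used))  # about 667 W per "unit" of work
```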

The issue of underuse doesn’t stem from a knowledge gap so much as from fear. Businesses intentionally underuse their data center and cloud resources in an attempt to ensure optimal performance under every conceivable scenario. They take precautions to avoid any decline in quality of service, regardless of demand at any given moment.

What’s Holding Companies Back from Optimization

CIOs and CTOs know that in any system, there is no one silver bullet; every system is different, and automation will need to be regularly monitored to be effective. There are also challenges to optimization, with diminishing returns depending on the state of the system in place.

Workload orchestration solutions help enterprises reduce infrastructure costs while also reducing infrastructure administration overhead. However, if you already have a reasonably well-utilized cloud environment, the cost reduction can be marginal. Similarly, infrastructure configuration tuning can improve efficiency where best practices were not properly applied, but its value diminishes after the initial tuning pass.

Real-time continuous optimization solutions are available only for compute workloads, but they let organizations handle those workloads with significant performance improvement: cutting response times in half and tripling throughput while reducing the number of servers required (a quick sanity check of that server math follows below). They are relatively easy to implement and require little R&D effort, which means they can also be tested quickly.
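With illustrative numbers (a hypothetical 90,000 requests per second of aggregate load and 1,000 requests per second per server), the fleet-size effect of tripling per-server throughput looks like this:

```python
# Quick sanity check on the claim above, using made-up numbers: if per-server
# throughput triples, the fleet needed for the same load shrinks to about a third.

import math

total_load_rps = 90_000          # hypothetical aggregate load
baseline_rps_per_server = 1_000

before = math.ceil(total_load_rps / baseline_rps_per_server)        # 90 servers
after  = math.ceil(total_load_rps / (3 * baseline_rps_per_server))  # 30 servers
print(before, after)
```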

Companies should allocate budget and resources to determine how much slack is in their systems and invest in in-house expertise that can dedicate time and attention to the areas that could be sped up. At a time when simply stopping to take stock of infrastructure optimization could make a sizable difference to your company’s bottom line, it’s worth looking into optimization as a way to discover hidden savings and efficiencies.