By Andreea Andrei, Marketing and Business Administration Executive at The Cloud Computing and SaaS Awards

This article is part of an A to Z series by Cloud and SaaS Awards, continuing with G for Green Cloud Computing

Cloud computing is the large-scale delivery of services over the Internet. These services may include storage, databases, networking, applications, and security services.

One of the main purposes of cloud computing is to deliver reliability, trustworthiness, and performance at low cost. However, the industry now faces new environmental challenges. The cloud is being redesigned with environment-friendly features, such as reduced e-waste, a smaller carbon footprint, and improved energy efficiency; together, these constitute green cloud computing.

The influence of sustainability on green cloud computing

In the last two decades, consumers and developers of hardware and software have become more concerned with sustainability as a result of explosive growth in energy use. Information and communication technologies (ICTs) have a significant environmental impact across their entire life cycle.

Environmental studies have been conducted in the area of cloud computing, an ICT topic, and there are arguments both for and against these technologies. In addition to the interest shown by companies that offer cloud-based goods and services, there is also considerable pressure from regulatory bodies to lessen environmental harm.

Since data centers are the foundation of cloud computing, the expansion of green data centers and the development of green cloud computing are closely intertwined. Koomey estimated that in 2010, data centers accounted for roughly 1.3% of worldwide electricity use. The share of total carbon dioxide (CO2) emissions attributable to ICTs rose from 1.3% of global emissions in 2002 to 2.3% in 2020, according to a GeSI report regarded as “one of the most comprehensive and well-recognized snapshots of the Internet’s energy demand at the global level.”

Green cloud computing trends

Because of the attention green computing is receiving, there is growing interest in researching how cloud computing affects the environment. Much of this interest was triggered by a Gartner report estimating that the worldwide ICT sector produced about 2% of global CO2 emissions. In 2009, Liu et al. introduced GreenCloud, an architecture that promises to reduce data center power consumption. The need to find ways to reduce data center energy use is even older, however, and has only grown since then.

These studies were crucial to the development of green cloud computing. Current and future green cloud computing is based on green data centers, which maximize energy efficiency while minimizing CO2 emissions and e-waste, not just for ICT equipment but for all environmental factors (building, lighting, cooling, etc.). Green computing encompasses more than just how much power computers use: in addition to other environmental concerns such as CO2 emissions, (e-)waste management, and resource consumption, it covers the energy used by networks and cooling equipment. Research interests in green computing accordingly diverge; early work examined how “sustainability” and “cloud computing” relate to one another.

How to move to green cloud computing?

Virtualization is the underlying technology behind cloud computing, and it has been widely adopted by both academia and industry. Depending on management decisions, the virtual machines that house virtualized services can be moved, copied, created, and removed. This offers significant opportunities to improve a data center’s energy efficiency. To accomplish that goal, we outline three complementary steps:

  1. Energy-aware dynamic resource allocation
    The encapsulating services receive good performance isolation from virtual machines (VM). An intelligent system will assign resources to a newly generated VM depending on predetermined mechanisms. In actuality, the service’s workload is always shifting. If the workload is much less than the resource allotted, or if the workload exceeds redline, a static setup will result in energy inefficiency or a service-level agreement (SLA) violation.
    Therefore, the first step is to dynamically reconfigure the virtual machine based on online performance monitoring and workload forecasting. This operates at two levels. The first, the machine level, scales overall resources via dynamic voltage scaling (DVS) according to the total workload. The second, the application level, is managed by the VM hypervisor and handles resource management for individual applications (or services), such as altering the distribution of resources among multiple VMs.
  2. Load balancing with consideration for thermal efficiency
    The first step is server-specific, making it insufficient on its own for a data center. Because a server’s dynamic power is a convex function of its frequency, load balancing and right-sized resource provisioning are sensible routes to global energy efficiency. To minimize energy consumption, most research uses perfectly even balancing in homogeneous environments (in a heterogeneous environment, balancing depends on each server’s resource capability). Unfortunately, this approach is suboptimal because it considers only server power and neglects cooling power. Research shows that identical servers operate at different temperatures even when loaded equally: for example, airflow in the middle of the aisles between machine racks is better than near the ends, keeping the middle of the aisles cooler. As a result, distributing the load evenly among all servers is not ideal. Thermal-aware, energy-efficient load balancing is the second step. Load balancing takes place when starting or stopping a VM or carrying out a VM migration. The former must compute the best placement and is driven by fluctuations in application (or service) workload. The latter is triggered when the workload surpasses a predetermined threshold that accounts for resource capacity and temperature, and it must determine how best to move each service to a different server.
  3. Efficient VM consolidation while taking into account overall energy
    Step two is effective when the load is heavy; consolidation has been shown to be more effective when the load is light. Maximizing resource utilization alone leads to inefficiency, because servers operate together with the network and the cooling system. By taking the network topology into account, the workload can be distributed across a smaller group of servers with minimal network support, so idle network links (and switches) must be turned off in addition to inactive servers.
    To further lower the required cooling power, the locations of the active servers and network switches can also be taken into account. From a different angle, VMs frequently communicate with one another, forming virtual network topologies. Because of VM migrations or inefficient consolidation, however, communicating VMs may end up on physically distant nodes that require expensive data transport. If the communicating VMs are assigned to hosts in different racks or enclosures, power-hungry network switches are drawn into the communication path; co-locating them instead eliminates this portion of the network traffic and allows the switch’s ports to be set to a lower speed. It is therefore important to monitor VM communication and place communicating VMs on the same or nearby nodes. Implementing VM consolidation in this integrated manner is the third step. The decision model must account for two distinct relationships: the first is the connection between the cooling system, the network fabric, and the servers; the second is the relationship between the VMs themselves (that is, the applications or services).
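The dynamic reconfiguration in step 1 can be sketched as a simple per-VM control rule: grow the allocation when utilization nears the redline (to protect the SLA) and shrink it when utilization falls well below a floor (to reclaim energy). The function below is a minimal illustration; the thresholds, growth factor, and function name are hypothetical, not taken from any specific system.

```python
def reallocate(allocated_cpu, observed_load, redline=0.9, floor=0.5,
               step=1.25, min_alloc=0.1):
    """One reconfiguration step for a single VM (simplified sketch).

    If observed load approaches the redline fraction of the current
    allocation, grow the allocation to avoid an SLA violation; if load
    falls well below it, shrink the allocation so the hypervisor (and
    machine-level DVS) can reclaim the slack and save energy."""
    utilization = observed_load / allocated_cpu
    if utilization > redline:          # risk of SLA violation: scale up
        return allocated_cpu * step
    if utilization < floor:            # wasted headroom: scale down
        return max(observed_load / redline, min_alloc)
    return allocated_cpu               # within the target band: keep as-is
```

In a real system this rule would run inside the monitoring loop, with the workload forecast feeding `observed_load` rather than a raw instantaneous measurement.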

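Steps 2 and 3 can be illustrated together with a greedy, thermal-aware placement heuristic: sort VMs by demand (first-fit decreasing), pack them onto the coolest already-active server that still has headroom below a redline utilization, and activate a new server only when nothing fits. This is a minimal sketch under simplified assumptions (a single normalized CPU dimension, inlet temperature as a proxy for cooling cost); all names and parameters are illustrative, not drawn from a specific system.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: float       # normalized CPU capacity
    inlet_temp: float     # degrees C; cooler spots are cheaper to cool
    load: float = 0.0
    vms: list = field(default_factory=list)

def place_vms(servers, vm_demands, redline=0.9):
    """Greedy thermal-aware consolidation (first-fit decreasing).

    VMs are assigned largest-first to the coolest already-active server
    with spare headroom under the redline; a new (coolest idle) server
    is activated only when no active server fits. This keeps the active
    set small (saving idle-server power) while steering load toward
    cooler spots in the aisles (saving cooling power)."""
    active = []
    idle = sorted(servers, key=lambda s: s.inlet_temp)
    placements = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        candidates = [s for s in active
                      if s.load + demand <= redline * s.capacity]
        if candidates:
            target = min(candidates, key=lambda s: s.inlet_temp)
        else:
            if not idle:
                raise RuntimeError(f"no capacity left for {vm}")
            target = idle.pop(0)      # activate the coolest idle server
            active.append(target)
        target.load += demand
        target.vms.append(vm)
        placements[vm] = target.name
    return placements, active
```

A fuller model would also weight rack locality so that heavily communicating VMs land on the same or adjacent hosts, keeping their traffic off power-hungry aggregation switches, as described in step 3.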
This article has presented an analysis of current energy-efficient practices for building green cloud computing systems. Cloud computing integrates pre-existing technologies to improve the efficiency of resource consumption, and utilizing these technologies has a variety of outcomes. Both positive and negative aspects of cloud computing’s influence on the environment have been highlighted, by the providers of such services as well as by researchers at organizations interested in environmental conservation.

Cloud computing is generally thought to favor a harmonious relationship with the environment, to the extent that ICT equipment manufacturers and the businesses providing services in the field align themselves with environmental policies and accept the suggestions of non-governmental organizations on how to lessen the negative effects of hardware and software. This article has addressed how cloud computing helps to protect the environment, in light of the research done so far in this area.