Smart Cooling Technologies for Geothermal Data Centers

Introduction
The average data center consumes a substantial amount of electric power. This is no surprise given the computing power of the IT equipment it houses, as well as the power the cooling system draws to maintain ideal operating conditions for the servers. By early 2020, data centers accounted for about 3% of the world's total electricity consumption, and with energy-intensive facilities continuing to scale up, power consumption is likely to keep rising as computing demands advance (Matthews, 2019). For this reason, most companies are abandoning on-premises data centers in favor of private data solutions, shifting the high cost of cooling infrastructure onto private data center providers.
This has prompted private data center operators to invest heavily in smart cooling technologies, so that they can harness the computing power of next-generation servers at lower power requirements. Until recently, solutions such as liquid server cooling were the most efficient cooling techniques available to data centers (Matthews, 2019). However, today's data centers generate large volumes of data about their own cooling and power demands, creating an opportunity to improve the efficiency and effectiveness of liquid cooling methods. Google, for example, recently began leveraging artificial intelligence to improve the efficiency of its data centers' cooling systems.
GSnake Cluster is an innovative cooling system that improves on liquid server cooling methods while making use of smart monitoring and IoT solutions built around thermostats and robotic arms. The main objective of the system is to better manage the power and cooling requirements of geothermal data centers while allowing real-time monitoring of the servers and the entire cooling network. As discussed later in this report, it can be considered one of the most efficient facilities of its kind.
GSnake Cluster Cooling System: Design Outline

Description
First, it is important to note that the GSnake GDC is a high-power-density data center, with a single server carrying an IT power load of between 10 and 20 kW. This is because the servers are designed to run power-analytics programs, support high data storage densities, and run the artificial-intelligence-driven machine learning that operates the robotic arms within the data center.
To provide more comprehensive and robust data solutions, the GSnake Cluster GDC primarily increases server rack density: a single server rack houses at most 5 servers, each with a 20 kW power load. This massive energy demand translates into the generation of large amounts of heat. The GSnake Cluster cooling system is therefore an innovative technology designed to meet the cooling needs of a geothermal data center with a 2,000 kW IT load.

Heat Pipes Design
The GSnake Cluster cooling system uses two heat pipes forming a closed-loop piping system, with an access point at the hot interface that acts as the entry and exit point for the server cabinets. At the cold interface, the heat pipes connect to a series of fins buried 7 m to 12 m underground. The heat pipes are lined with a dielectric thermal grease that acts as the coolant, creating a cooling track around the entire heat-pipe loop. In this system each of the two heat pipes is 0.25 m in diameter and 7 m long. The overall effective thermal conductivity of a heat pipe varies with its length and can approach 100 kW/m·K for longer pipes, compared with copper's thermal conductivity of 0.401 kW/m·K.
Fundamentally, a heat pipe is a device with very high thermal conductance that transfers large amounts of heat between its hot and cold ends across a small temperature gradient. Heat pipes have been popular in the oil and gas industry since the 1960s, where they are used to transmit heat from one section of a system to another.
Material Selection
The heat pipes used in the GSnake Cluster GDC cooling system are made of copper, because copper has a considerably high thermal conductivity of 401 W/m·K (Touloukion, 1970). Copper is also compatible with dielectric fluids (liquid or vapor) and requires only a few special plumbing provisions. Heat pipes are preferred for this cooling system mainly because of copper's extremely high thermal conductivity and low conduction resistance. In addition, every server is enclosed in a leak-proof copper casing, which protects the delicate server components from contamination and ensures efficient heat transfer from the hot server to the dielectric thermal grease.
Additionally, because the cooling system demands very effective heat transfer, a highly efficient thermal interface material was chosen. The GSnake Cluster cooling system uses a dielectric thermal grease as the lining, or coolant, within the copper heat pipes. This grease is based on graphene, one of the thinnest materials ever developed at a single atom thick, with very good electrical and thermal conductivity at room temperature (Touloukion, 1970). The graphene thermal grease acts as a thermal transfer bridge, effectively serving as the heat-sink interface of the cooling system. Its use makes the cooling system very effective because a single graphene layer has a thermal conductivity of over 2,000 W/m·K, considerably higher than the 401 W/m·K of the copper heat pipes.
The graphene-based thermal grease is blended with a gallium paste to prevent the grease from drying out at elevated temperatures, which would increase its thermal resistance. The gallium paste is, however, highly corrosive to aluminum, which is why the GSnake Cluster cooling system does not use aluminum heat pipes at all. The thermal conductivity of liquid gallium at 378 K (105 °C) is 30.5 W/m·K. Thus, the dielectric thermal grease used in this system is a compound of graphene (2,000 W/m·K) and liquid gallium (30.5 W/m·K) (Touloukion, 1970), a composition that yields an overall thermal conductivity of 40 W/m·K at 105 °C operating temperatures, making it a very high-grade thermal grease. Note that only a single layer of the grease lining is applied inside the copper heat pipe; if the layer is too thick, its thermal resistance increases and heat transfer suffers.
Working Principle
The GSnake Cluster cooling system applies a liquid-immersion cooling technique in which the data servers are moved through a heat pipe by a robotic arm. The servers travel on a rail attached to the interior wall of the heat pipe, driven by motors and gears. The heat pipe is lined with dielectric thermal grease, so that as the servers move around the interior of the pipe, the grease absorbs heat from the servers and conducts it away. The extracted heat is transmitted by the thermal grease and the copper heat pipe to the soil heat sink, 7 m to 12 m below the earth's surface.
Essentially, the closed-loop heat-pipe design arranges the moving server cabinets in alternating rows, creating a hot aisle and a cold aisle. The idea behind the alternating arrangement is that when the smart thermostat detects high temperatures on one side of the loop (when one pipe has heated to around 105 °C), it prompts the servers to be moved to the other pipe in the loop. For example, if temperatures of 105 °C are detected inside the left-hand heat pipe, the temperature sensors actuate the robotic arm to move the servers to the right-hand pipe. Once the left-hand pipe has cooled back to room temperature (25 °C), the thermostat detects this and actuates the robotic arm to move the servers back to the left side, and the cycle continues. The GSnake Cluster cooling system is designed to maintain the data center temperature between 25.4 °C and 27.6 °C.
This system applies fundamental thermodynamic and heat-transfer mechanisms. First, heat from the hot server is transmitted to the dielectric thermal grease lining by conduction. In turn, the grease conducts the extracted heat into the copper heat pipe and along it to the heat sink. Finally, the base of the copper heat pipe is in contact with the soil underground, where the heat is dissipated by conduction.
At the heat sink, the copper heat pipes are taken to be at 105 °C, while underground temperatures 7 m below the surface are nearly constant, ranging between 10 °C and 16 °C (Batu, 1998). This temperature gradient between the heat pipes and the soil allows the ground to draw the heat away and then cool down again after some time. Locating the base 7 m underground lets the copper heat pipes and the dielectric thermal grease cool quickly, enabling the two materials to continuously carry away the heat that the servers steadily generate as they circulate within the closed loop. This process cools the GSnake Cluster geothermal data center continuously.
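The switching behaviour described above can be captured in a few lines of control logic. The following Python sketch is purely illustrative and assumes hypothetical interfaces (read_pipe_temperature, move_servers) for the smart thermostat and the robotic-arm controller; it is not part of any actual GSnake implementation.

```python
import time

# Hypothetical sensor/actuator stubs, standing in for the smart-thermostat
# readings and the robotic-arm controller described above.
def read_pipe_temperature(pipe: str) -> float:
    """Return the current interior temperature of the named heat pipe in deg C."""
    raise NotImplementedError  # supplied by the IoT monitoring layer

def move_servers(to_pipe: str) -> None:
    """Command the robotic arm to relocate the server cabinets to the named pipe."""
    raise NotImplementedError  # supplied by the robotic-arm controller

HOT_LIMIT_C = 105.0    # switch away from a pipe once it reaches about 105 deg C
COOL_LIMIT_C = 25.0    # a pipe is considered recovered at roughly room temperature

def cooling_cycle(active: str = "left", idle: str = "right", poll_s: float = 5.0) -> None:
    """Alternate the server cabinets between the two heat pipes of the closed loop."""
    while True:
        if read_pipe_temperature(active) >= HOT_LIMIT_C and \
           read_pipe_temperature(idle) <= COOL_LIMIT_C:
            move_servers(to_pipe=idle)   # the hot and cold aisles swap roles
            active, idle = idle, active
        time.sleep(poll_s)               # poll the smart thermostat periodically
```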

GSnake Cluster Cooling System: Design Analysis
Heat Transfer from the Server Cabinet
Heat generated by a single server is transmitted through its copper casing by conduction. In thermodynamics, conduction is defined as the transfer of energy from more energetic particles to less energetic particles (Touloukion, 1970). Since the server generates heat steadily and heat transfer through the copper casing is one-dimensional, we apply the one-dimensional form of Fourier's law to calculate the rate of heat transfer between the server cabinet and the dielectric thermal grease. The equation is given as:
\[ q_x = -KA\,\frac{\Delta T}{\Delta x} = -KA\,\frac{\Delta T}{L} \qquad \text{(i)} \]
where
q_x = rate of heat transfer (W)
ΔX = L = thickness (m)
ΔT = temperature difference (K)
A = area of heat transfer (m²)
K = material thermal conductivity (W/m·K)
q″_x = heat flux (W/m²)
We can also calculate the heat flux, which represents the rate of heat transfer through a section of unit area:
\[ q''_x = -K\,\frac{\Delta T}{\Delta x} = -K\,\frac{\Delta T}{L} \qquad \text{(ii)} \]
Thus,
\[ q_x = q''_x \times A_x \qquad \text{(iii)} \]
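Equations (i) to (iii) are simple enough to encode directly. The short Python sketch below does so for illustration; the function names are ours and the example values are arbitrary.

```python
def heat_flux(k: float, t_hot: float, t_cold: float, thickness: float) -> float:
    """Equation (ii): conductive heat flux q'' = -K * (T2 - T1) / L, in W/m^2."""
    return -k * (t_cold - t_hot) / thickness

def heat_rate(flux: float, area: float) -> float:
    """Equation (iii): heat transfer rate q = q'' * A, in W."""
    return flux * area

# Example: a 1 m^2 copper plate (K = 401 W/m.K), 0.1 m thick, with a 75 K drop across it
q_flux = heat_flux(k=401.0, t_hot=378.0, t_cold=303.0, thickness=0.1)   # ~300,750 W/m^2
q_total = heat_rate(q_flux, area=1.0)                                   # ~300.75 kW
```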
The server rack/cabinet dimensions are as follows:
K_C = 392 W/m·K, the thermal conductivity of copper at elevated temperature;
w_s = width of the server rack, chosen slightly less than the unrolled interior width (circumference) of the heat pipe so the servers can move freely. Given a heat-pipe diameter of ϕ = 0.25 m, the pipe's interior width is w_hp = πD = 3.14 × 0.25 m = 0.785 m; hence w_s = 0.6 m;
A_s = area of the server cabinet; taking a square face, A_s = l × w = 0.6 m × 0.6 m = 0.36 m²;
ΔX_s = L_s = thickness of the server cabinet; for maximum heat transfer, ΔX_s = 0.2 m.
We use equation (ii) to determine the heat flux from the server cabinet:
\[ q''_s = -K_C\,\frac{\Delta T}{L_s} = -K_C\,\frac{T_2 - T_1}{L_s} \]
where
T_1 = 105 °C (105 + 273 = 378 K), the maximum allowable server temperature while steadily generating heat at full load;
T_2 = 30 °C (30 + 273 = 303 K), since the dielectric thermal grease is enhanced with gallium metal, which is liquid above about 29.6 °C.
Thus,
\[ q''_s = -392\ \text{W/m·K} \times \frac{303\ \text{K} - 378\ \text{K}}{0.2\ \text{m}} = 147\ \text{kW/m}^2 \]
The rate of heat transfer through the copper server cabinet is then calculated using equation (iii):
\[ q_s = q''_s \times A_s = 147{,}000\ \text{W/m}^2 \times 0.36\ \text{m}^2 = 52{,}920\ \text{W} \]
so q_s = 52.92 kW is the amount of heat dissipated by a single server. The total heat emitted by the five servers is simply
\[ q_{s(\text{total})} = 52.92\ \text{kW} \times 5 = 264.6\ \text{kW} \]
That is, the servers emit 264.6 kJ of heat per second.
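As a quick numerical cross-check of the server-cabinet figures above, the following Python snippet reproduces the calculation using the stated values (a sketch, not production code):

```python
k_copper = 392.0   # W/m.K, copper at elevated temperature
t_server = 378.0   # K (105 deg C), maximum allowable server temperature
t_grease = 303.0   # K (30 deg C), grease-side temperature
thickness = 0.2    # m, server cabinet wall thickness
area = 0.36        # m^2, cabinet face area (0.6 m x 0.6 m)
n_servers = 5

flux = -k_copper * (t_grease - t_server) / thickness   # 147,000 W/m^2
per_server = flux * area                               # 52,920 W per server
total = per_server * n_servers                         # 264,600 W = 264.6 kW
print(f"{flux/1e3:.1f} kW/m^2, {per_server/1e3:.2f} kW/server, {total/1e3:.1f} kW total")
```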

Heat Transfer by the Dielectric Thermal Grease
The dielectric thermal grease absorbs the heat emitted by the servers and transmits it by conduction into the copper heat pipe and onward to the heat sink. The process is conduction because there is no bulk movement of the working fluid (the dielectric thermal grease); the temperature gradient exists in a stationary medium. We again use equation (ii) to calculate the heat flux through the thermal grease:
\[ q''_g = -K_g\,\frac{\Delta T}{L_g} = -K_g\,\frac{T_2 - T_1}{L_g} \]
where
K_g = 40 W/m·K, the thermal conductivity of the dielectric thermal grease discussed earlier;
T_1 = 105 °C (378 K), the maximum allowable server temperature at full load;
T_2 = 30 °C (303 K), since the grease is enhanced with gallium metal, which is liquid above about 29.6 °C;
ΔX_g = L_g = thickness of a single layer of dielectric thermal grease, ΔX_g = 0.05 m.
\[ q''_g = -40\ \text{W/m·K} \times \frac{303\ \text{K} - 378\ \text{K}}{0.05\ \text{m}} = 60\ \text{kW/m}^2 \]
The rate of heat transfer from the dielectric thermal grease into the copper heat pipe, along the full pipe from the access point to the base (7 m), is calculated using equation (iii); in this case we use the interior surface of the heat pipe that is in contact with the thermal grease. The dimensions of the heat pipe are:
ϕ = 0.25 m, the diameter (D) of the heat pipe;
l = 7 m, the length of a single heat pipe;
w_p = πD, the interior width of the heat pipe, i.e. 3.14 × 0.25 m = 0.785 m;
A_p = l × w_p = πD × l = (3.14 × 0.25 m) × 7 m = 5.495 m².
Hence, the rate of heat transfer by the dielectric thermal grease is:
\[ q_g = q''_g \times A_p = 60{,}000\ \text{W/m}^2 \times 5.495\ \text{m}^2 = 329.7\ \text{kW} \]
Therefore, the dielectric thermal grease dissipates 329.7 kJ of heat per second into the interior wall of the copper heat pipe.
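The grease-stage numbers can be checked the same way; the snippet below simply re-evaluates equations (ii) and (iii) with the grease properties and pipe surface area stated above:

```python
k_grease = 40.0        # W/m.K, graphene/gallium thermal grease
dt = 378.0 - 303.0     # K, temperature drop across the grease layer
layer = 0.05           # m, single grease-layer thickness
pipe_area = 3.14 * 0.25 * 7.0   # m^2, interior surface in contact with the grease

flux_grease = k_grease * dt / layer     # 60,000 W/m^2
rate_grease = flux_grease * pipe_area   # ~329,700 W = 329.7 kW
print(f"{flux_grease/1e3:.0f} kW/m^2 -> {rate_grease/1e3:.1f} kW")
```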

Heat Dissipation by the Copper Heat Pipe to the Soil Underground
This heat transfer also takes place by conduction, so we again use equation (ii) to calculate the heat flux:
\[ q''_p = -K_p\,\frac{\Delta T}{\Delta X_p} = -K_p\,\frac{T_2 - T_1}{L_p} \]
where
K_p = 392 W/m·K, the thermal conductivity of copper at 105 °C, since the heat pipes are made of copper;
T_1 = 105 °C (378 K), the maximum allowable server temperature at full load;
T_2 = 16 °C (289 K), since heat dissipation by the heat pipes occurs 7 m underground, where the soil temperature is 16 °C at the upper limit;
ΔX_p = L_p = the heat-pipe wall thickness, taken as ΔX_p = 0.5 m.
Hence,
\[ q''_p = -392\ \text{W/m·K} \times \frac{289\ \text{K} - 378\ \text{K}}{0.5\ \text{m}} = 69.78\ \text{kW/m}^2 \]
The rate at which the heat pipe dissipates heat to the soil is given by equation (iii):
\[ q_p = q''_p \times A_p, \qquad A_p = \pi D \times l = (3.14 \times 0.25\ \text{m}) \times 7\ \text{m} = 5.495\ \text{m}^2 \]
\[ q_p = 69{,}776\ \text{W/m}^2 \times 5.495\ \text{m}^2 = 383.42\ \text{kW} \]
Thus, the copper heat pipe dissipates 383.42 kJ of heat per second to the soil at a depth of 7 m.
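The final conduction stage can be verified in the same fashion. The sketch below uses the stated copper conductivity, wall thickness, and pipe surface area, and confirms that each downstream stage can carry more heat than the 264.6 kW the servers generate:

```python
k_pipe = 392.0        # W/m.K, copper heat pipe at elevated temperature
dt = 378.0 - 289.0    # K, pipe wall (105 C) to soil at 7 m depth (16 C)
wall = 0.5            # m, stated heat-pipe wall thickness
area = 3.14 * 0.25 * 7.0   # m^2, pipe surface area (pi*D*l)

flux_pipe = k_pipe * dt / wall   # ~69,776 W/m^2
rate_pipe = flux_pipe * area     # ~383,420 W = 383.4 kW

# The ground-side stage (~383.4 kW) and the grease stage (~329.7 kW) both exceed
# the 264.6 kW generated by the servers, so neither stage becomes a bottleneck.
print(f"{flux_pipe/1e3:.2f} kW/m^2 -> {rate_pipe/1e3:.1f} kW")
```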
Soil Cooling Time
After absorbing the heat dissipated by the copper heat pipes, the soil needs to cool so that the cooling process within the data center can continue seamlessly. It is therefore important to determine how long the soil 7 m underground takes to recover, as this governs how quickly the system can bring server temperatures from 105 °C back to the 25.4 °C to 27.6 °C band. The following Newton's-law-of-cooling relation is used to estimate the soil recovery time:
\[ T_s(t) = T_{\text{ambient}} + (T_{\text{initial}} - T_{\text{ambient}})\,e^{-kt} \qquad \text{(iv)} \]
where
T_initial = 105 °C (378 K), the temperature of the soil immediately after being heated by the copper heat pipes;
T_ambient = 10 °C (283 K), the temperature of the surrounding soil that is not heated;
T_s = 16 °C (289 K), the recovered soil temperature we solve for, i.e. the soil temperature at time t (s);
k (1/s) = the cooling coefficient of the soil. Depending on soil characteristics, the ground thermal diffusivity at a depth of 7 m ranges from 1.72 × 10⁻⁶ m²/s to 3 × 10⁻⁶ m²/s. Taking the upper-limit diffusivity, a soil specific gravity between 2.65 and 2.85, and a heated layer thickness of 0.01 mm, the soil cooling coefficient is taken as 0.03 s⁻¹.
Substituting these values into equation (iv):
\[ 289\ \text{K} = 283\ \text{K} + (378\ \text{K} - 283\ \text{K})\,e^{-0.03t} \]
Solving for t with logarithms gives
\[ t = \frac{1}{0.03}\ln\!\left(\frac{378 - 283}{289 - 283}\right) \approx 92\ \text{s} \approx 1.5\ \text{minutes} \]
Thus, it takes roughly a minute and a half for the soil at a depth of 7 m to cool back down from 105 °C to 16 °C.
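Equation (iv) can also be solved for t programmatically. The snippet below rearranges it for the recovery time using the parameter values assumed above:

```python
import math

t_ambient = 283.0   # K, undisturbed soil at 7 m depth (10 deg C)
t_hot = 378.0       # K, soil immediately after heating by the pipes (105 deg C)
t_target = 289.0    # K, recovered soil temperature (16 deg C)
k = 0.03            # 1/s, assumed soil cooling coefficient

# Rearranging equation (iv): t = ln((T_initial - T_amb) / (T_target - T_amb)) / k
t = math.log((t_hot - t_ambient) / (t_target - t_ambient)) / k
print(f"soil recovery time ~ {t:.0f} s (~{t/60:.1f} min)")   # ~92 s, ~1.5 min
```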

Discussion of the Design Analysis
From the calculations above, the heat dissipated by the entire server system is 264.6 kW; that is, the servers generate 264.6 kJ/s of heat when operating at the maximum allowable temperature of 105 °C on a full IT load of 100 kW (each server draws 20 kW and there are 5 servers). The dissipated heat is effectively absorbed by the dielectric thermal grease and transferred into the copper pipe at a rate of 329.7 kW, i.e. 329.7 kJ/s. It therefore takes less than a second for the server-generated heat to be drawn off by the dielectric thermal grease and transmitted along the copper heat pipe.
Additionally, the copper heat pipe conducts the heat away from the servers to the cold interface and soil heat sink along its 7 m length. At 7 m below the earth's surface, the heat pipe quickly dissipates the collected heat to the cool soil at a rate of 383.42 kW. This means that as the servers steadily generate heat, it is continuously drawn away to the soil heat sink within seconds.
As shown earlier, it takes roughly a minute and a half for the soil to cool down after being heated by the copper heat pipes. Since the soil must return to 16 °C for the cooling process within the GSnake Cluster server system to remain highly effective and efficient, this means that even when the servers run at full load and their operating temperatures rise to a maximum of 105 °C, the GSnake Cluster cooling system can bring temperatures back down to the 25.4 °C to 27.6 °C band within a few minutes. This demonstrates how effective and efficient the GSnake Cluster cooling system is.
Justification of the GSnake Cluster Cooling System
Efficiency of GSnake Cluster
The geothermal cooling solution used by the GSnake Cluster consumes an estimated 45% less electrical power than a conventional cooling system. This is because the system only draws input power to operate the robotic arms and the smart monitoring and IoT components, such as the smart thermostats; heat transfer within the cooling system itself occurs by conduction, which takes place naturally thanks to the temperature gradients between the materials involved in the cooling process.
Additionally, there is no bulk movement of the working fluid (the dielectric thermal grease) that would require a pump. The GSnake Cluster cooling system also does not waste power pumping high-temperature indoor air into already hot outdoor air; instead it simply releases the heat collected from the data center into the cool ground beneath the earth's surface. Even in the hottest summers and the coldest winters, the GSnake Cluster remains efficient and effective at cooling the geothermal data center.
Generally, the Energy Efficiency Ratio (EER) of a geothermal cooling solution is rated between 15 and 25, while the most efficient conventional cooling systems have an EER between 9 and 15 (Geothermal Data Centers, 2020). The higher the EER, the more cooling output you get from your system per unit of input power. With an EER of 24.5, the GSnake Cluster system's efficiency is outstanding.
Power Usage Effectiveness(PUE)
Geothermal data centers consume large amounts of energy, much of it spent powering and cooling the IT equipment they house. A well-designed data center should keep these enormous energy costs in check: if a GDC facility consumes more power on cooling than it uses to power the IT infrastructure, it delivers poor value and wastes limited energy resources. To guard against this, PUE was developed as a measure for evaluating the efficiency of a geothermal data center while it is still at the design stage.
PUE is essentially the ratio of the total power delivered to the data center to the amount of energy used by the IT infrastructure within the GDC. A high PUE value means the data center is using more power than it should, making it less efficient (Felter, 2020). That is why the main focus of this report has been designing the best possible cooling system for the GSnake Cluster data center, translating into lower power consumption, a low PUE value, and a much more efficient facility. The following calculations compare the energy consumption of an average data center with that of the GSnake Cluster.
A GSnake Cluster cooling system for a large geothermal data center with an IT load of 2,000 kW is designed to operate on 120 kW. Hence, the PUE value is calculated as follows:
\[ \text{PUE} = \frac{\text{Total GDC power input}}{\text{Total IT load}} \qquad \text{(i)} \]
where the total power input for the GSnake Cluster GDC is the total IT load plus the cooling load:
Total GDC power input = 2,000 kW + 120 kW = 2,120 kW
Thus,
\[ \text{PUE} = \frac{2{,}120\ \text{kW}}{2{,}000\ \text{kW}} = 1.06 \]
A PUE value of 1.06 shows that the data center's total power usage is very close to the energy requirement of the IT infrastructure alone, making the GSnake Cluster one of the most efficient data centers developed so far and an almost perfect system: a perfect data center has a PUE of 1.0 (Felter, 2020), meaning every kW consumed by the facility goes directly to running the IT equipment.
We can compare the power consumption of the GSnake Cluster with that of an average data center with a PUE of 1.11, using the same IT load of 2,000 kW. The total power input is calculated from equation (i) as follows:
\[ \text{PUE} = \frac{\text{Total GDC power input}}{\text{Total IT load}} \qquad \text{(i)} \]
where PUE = 1.11 and Total IT load = 2,000 kW:
\[ 1.11 = \frac{\text{Total power input for an average GDC}}{2{,}000\ \text{kW}} \qquad \text{(ii)} \]
Solving equation (ii) gives a total power input for an average GDC of 2,220 kW.
Thus, the cooling system of an average data center with a 2,000 kW IT load uses 220 kW, compared with the 120 kW used by the GSnake Cluster cooling system. In other words, the GSnake Cluster data center saves 100 kW on a 2,000 kW IT load. The percentage power saving of the cooling system is calculated as follows:
\[ \%\ \text{Power Savings} = \frac{\text{Average DC cooling load} - \text{GSnake GDC cooling load}}{\text{Average DC cooling load}} \times 100\% = \frac{220\ \text{kW} - 120\ \text{kW}}{220\ \text{kW}} \times 100\% = 45.45\% \qquad \text{(iii)} \]
Therefore, a GSnake Cluster geothermal data center achieves a cooling power saving of 45.45%, as stated earlier in the report, which translates directly into reduced energy costs.
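The PUE and savings arithmetic for the large data center can be summarized in a few lines of Python (a sketch assuming, as above, that all non-IT power goes to cooling):

```python
def pue_breakdown(it_load_kw: float, pue: float):
    """Return (total facility power, cooling/overhead load) implied by a PUE value."""
    total = it_load_kw * pue
    return total, total - it_load_kw

it_load = 2000.0
total_avg, cooling_avg = pue_breakdown(it_load, pue=1.11)        # 2,220 kW, 220 kW
total_gsnake, cooling_gsnake = pue_breakdown(it_load, pue=1.06)  # 2,120 kW, 120 kW

savings_pct = (cooling_avg - cooling_gsnake) / cooling_avg * 100  # 45.45 %
print(f"cooling: {cooling_avg:.0f} kW vs {cooling_gsnake:.0f} kW -> {savings_pct:.2f}% saved")
```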
Similarly, using equations (i), (ii) and (iii), we can calculate the power savings of the GSnake Cluster geothermal cooling system for medium and small data centers relative to the conventional cooling system of an average data center. The calculated values are summarized in the tables below:
| | Average Data Center | GSnake Cluster Geothermal Data Center |
|---|---|---|
| PUE | 1.11 | 1.06 |
| IT load (medium-sized data center) | 1,500 kW | 1,500 kW |
| Total power consumed | 1,665 kW | 1,590 kW |
| Cooling load | 165 kW | 90 kW |
| GSnake cooling system % power savings | – | 45.45% |

Table 1.1 Power savings for the GSnake Cluster compared to an average data center (medium-sized, 1,500 kW IT load)

| | Average Data Center | GSnake Cluster Geothermal Data Center |
|---|---|---|
| PUE | 1.11 | 1.06 |
| IT load (small-sized data center) | 1,000 kW | 1,000 kW |
| Total power consumed | 1,110 kW | 1,060 kW |
| Cooling load | 110 kW | 60 kW |
| GSnake cooling system % power savings | – | 45.45% |

Table 1.2 Power savings for the GSnake Cluster compared to an average data center (small-sized, 1,000 kW IT load)

Cost Analysis
The GSnake Cluster cooling system costs comprise the two heat pipes, the dielectric thermal grease, the smart monitoring and IoT system, and the drilling and installation of the entire system 7 m underground. The estimated cost breakdown is as follows:
| Component Description | Unit Cost | Total Cost |
|---|---|---|
| Heat pipes (ϕ = 0.25 m) | $150.00/m | $2,100.00 for 7 m (2 heat pipes) |
| Gallium-based dielectric thermal grease | $73.50 to $78.00/kg | $3,900.00 for 50 kg |
| Drilling and installation of the heat pipes underground | $52.59/m | $368.13 for 7 m |
| Smart monitoring and IoT system | – | $6,500.00 |
| Total estimated cost | – | $12,868.13 |

Table 1.3 GSnake Cluster cooling system cost breakdown
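The total in Table 1.3 follows directly from the unit costs; the small Python sketch below reproduces it (quantities and prices are the estimates stated above):

```python
# Upfront cost estimate for the GSnake Cluster cooling system (Table 1.3 figures)
unit_costs = {
    "heat pipes (2 x 7 m @ $150/m)":                 150.00 * 7 * 2,  # $2,100.00
    "dielectric thermal grease (50 kg @ $78/kg)":    78.00 * 50,      # $3,900.00
    "drilling & installation (7 m @ $52.59/m)":      52.59 * 7,       # $368.13
    "smart monitoring & IoT system":                 6500.00,
}
total = sum(unit_costs.values())
print(f"total estimated upfront cost: ${total:,.2f}")   # $12,868.13
```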
The table below compares the upfront (purchase and installation) costs of conventional cooling systems, average geothermal cooling systems, and the GSnake Cluster smart cooling system.
| Conventional Cooling System (GOOD) | Cost | Average Geothermal Cooling System (BETTER) | Cost | GSnake Cluster Cooling System (BEST) | Cost |
|---|---|---|---|---|---|
| Installation costs | $25,000 | Well drilling, external and internal piping system | $3,748,000 | Heat pipes (ϕ = 0.25 m), 2 × 7 m | $2,100.00 |
| Air handler | $330,000 | Heat exchanger & circulation pump | $340,000 | Gallium-based dielectric thermal grease (50 kg) | $3,900.00 |
| Control unit | $5,000 | Smart IoT & monitoring control unit | $6,500 | Drilling and installation of the heat pipes | $368.13 |
| – | – | – | – | Smart monitoring and IoT system | $6,500.00 |
| Total upfront costs | $360,000 | Total upfront costs | $4,094,500 | Total estimated upfront costs | $12,868.13 |

Table 1.4 Comparison of GSnake Cluster upfront costs with other systems
As the table shows, the upfront costs of the GSnake Cluster cooling system are far lower than those of the other cooling systems currently on the market, making it the most cost-effective system developed so far.
GSnake Cluster GDC Operational Costs
To compare the operational costs of an average data center with those of a GSnake Cluster GDC, we use a large GDC with an IT load of 2,000 kW (excluding power consumed by lighting and security devices). From the PUE calculations above, the GSnake Cluster GDC has a total power draw of 2,120 kW, while the average data center draws 2,220 kW. With the average cost of electricity in the US currently at $0.12 (12 cents) per kilowatt-hour (kWh) (R. Gordon Bloomquist), we can calculate the annual operational costs of the two data centers, remembering that data centers are designed to operate continuously, 24 hours a day, all year round. The operational cost savings of the GSnake Cluster data center due to geothermal cooling are shown in the table below.
| | Average Data Center | GSnake Cluster Geothermal Data Center | GSnake Cost Savings |
|---|---|---|---|
| Power consumption | 2,220 kW | 2,120 kW | – |
| Daily operational costs (24 hrs) | $6,393.60 | $6,105.60 | $288.00 |
| Total annual operational costs | $2,333,664 | $2,228,544 | $105,120 |

Table 1.5 Annual operational costs of an average data center and the GSnake GDC
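The operational-cost figures in Table 1.5 follow from the total power draw, the 24/7 duty cycle, and the assumed $0.12/kWh tariff, as the short Python sketch below illustrates:

```python
TARIFF = 0.12        # $/kWh, assumed average US electricity price
HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365

def annual_cost(total_power_kw: float) -> float:
    """Annual electricity cost for a facility drawing total_power_kw continuously."""
    return total_power_kw * HOURS_PER_DAY * DAYS_PER_YEAR * TARIFF

avg_dc = annual_cost(2220.0)   # $2,333,664 per year
gsnake = annual_cost(2120.0)   # $2,228,544 per year
print(f"annual savings with GSnake: ${avg_dc - gsnake:,.0f}")   # $105,120
```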

Market Growth Projections for Smart Data Cooling Technologies
Current market analyses indicate strong ongoing interest in smart data center cooling technologies. Research suggests that by 2023 the smart data center cooling market will be worth $8 billion, representing roughly a 6% annual growth rate. Other studies point to even stronger growth, with the market likely to reach about $20 billion by 2024, which corresponds to approximately a 12% compound annual growth rate (R. Gordon Bloomquist).
The main reason for this projected increase in the value of data center cooling systems is that geothermal data centers are being built in developing countries and in other regions such as Latin America and Singapore. Financial analysts believe that as geothermal data centers begin operating in these places, there will be continued emphasis on running them more efficiently (Kester & Cho). This will spur data center managers and owners to seek innovative, highly efficient, and cost-effective cooling systems. The development and installation of the GSnake Cluster GDC is therefore justifiable, as there is a ready market for its smart cooling technology.

Conclusion
From the analysis and discussion above, we can confidently conclude that the GSnake Cluster, with its geothermal cooling, is far more efficient than an average data center using a conventional cooling system. We have seen that the GSnake Cluster has a PUE of 1.06, which translates into roughly 45% energy savings and demonstrates a highly energy-efficient design. The upfront costs of the GSnake Cluster cooling system are also far lower than those of other cooling systems currently on the market, in addition to lower operational costs, reduced maintenance requirements, and a long service life.
Lastly, with current technological advancements, energy and data demands will continue to escalate. Controlling and reducing power costs through energy-efficient data center systems is therefore the best practice for making a data center reliable, cost-effective, and sustainable. From this report we conclude that the GSnake Cluster Geothermal Data Center is a cost-effective, reliable, sustainable, and above all smart IoT facility.
