Inside Thinkmate

Prepare Your Datacenter to Support 5G-Enabled Edge Devices

How 5G Will Impact Edge Computing

With the shift towards fifth generation (5G) wireless networks, deploying 5G-enabled devices to the edge – and managing the movement of crucial data between locations – is becoming more practical.

In fact, by 2023, more than half of new IT infrastructure will be deployed at the edge, according to technology research firm IDC. Similarly, most enterprise data will be generated and processed outside of centralized data centers and the cloud by 2025, according to Gartner.

While 5G is a critical advancement that makes these edge deployments possible, it also means there must be substantial changes to datacenter infrastructure.

How 5G-Enabled Edge Devices Will Change the Datacenter

Even with an explosion in edge devices, the datacenter will still be the hub of the modern computing model for years to come. But, with 5G, the amount of information that the datacenter will be asked to manage will grow dramatically, likely in ways we can’t fully anticipate today.

Organizations seeking competitive advantage will want to capitalize on their new ability to access and analyze so much data quickly. The result will be new or expanded AI and HPC systems, including full or hybrid HPC/AI deployments in the cloud. However, the datacenters that support these deployments will need specific changes to operate at their full potential.

Edge devices will need advanced computing power, expanded storage, and improved connectivity equipment to handle demanding workloads, larger quantities of data, and faster transmission of data to and from the datacenter.

Challenges in Building a 5G-Ready Datacenter

Edge computing can carry some of the burden of data processing, but many workflows will require support from more powerful compute resources for things like remote human oversight, high-performance data analytics, and training of AI algorithms. To support this, the datacenter will require significant changes.

  • IT departments will require localized, low-latency computing resources to handle a large portion of edge data processing and storage. To improve performance and limit operational expenses, they may even need to limit the data transmitted to cloud or on-premises datacenters to that required for intensive compute tasks.
  • One major factor that organizations must take into account is the effect of this shift on the networking capabilities of a datacenter environment, both from the edge to the datacenter and between compute and storage within the system. This can have a ripple effect on things like power and cooling, which needs to be accounted for in system design.
  • Another consideration is the potential need for flexibility in workflows within the datacenter. With hundreds or thousands of edge devices producing and consuming data, supporting various applications and workloads from a single datacenter environment is key. It will need to support current workloads as well as future workloads that are potentially not even on anyone’s roadmap – without downtime to revamp or additional, unplanned costs.
  • Extending the reach of your data to a multitude of edge devices can also expose your data to security risks. Edge devices will often be less physically secure than a centralized datacenter, and increased transmission of data between devices means sensitive data will be in flight more often than in a centralized environment. This means new edge infrastructures will also need advanced security solutions to eliminate redundant copies of data or resource silos; facilitate secure data sharing and efficient collaboration between the edge and datacenter; and consolidate security auditing for greater accountability.
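The first point above – limiting what travels back to the cloud or datacenter – is often implemented as edge-side data reduction: the device summarizes routine readings locally and forwards only compact aggregates plus anomalous values. The sketch below illustrates the idea; the function name, payload fields, and threshold are hypothetical, not part of any particular product.

```python
# Illustrative edge-side data reduction: instead of streaming every raw
# sensor reading to the datacenter, the device uploads a small summary
# plus any out-of-range readings. All names and thresholds are hypothetical.

from statistics import mean

ANOMALY_THRESHOLD = 90.0  # hypothetical alert threshold (e.g., degrees C)

def summarize_batch(readings: list[float]) -> dict:
    """Reduce a batch of raw readings to a compact payload for upload."""
    anomalies = [r for r in readings if r > ANOMALY_THRESHOLD]
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "anomalies": anomalies,  # raw values kept only for anomalous data
    }

batch = [71.2, 70.8, 95.3, 72.0]
payload = summarize_batch(batch)
# A handful of numbers leave the device instead of the full raw stream.
```

In practice the summarization policy (window size, which statistics to keep, what counts as an anomaly) is workload-specific, but the bandwidth-versus-fidelity trade-off it expresses is the same.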

Key Technologies for a 5G-Ready Datacenter

Fortunately, several technologies can help address the new datacenter requirements that 5G devices bring. These include faster networking connections as well as the ability to connect more hardware (and more varied types of hardware) via PCIe 4.0 and PCIe 5.0. However, our most important recommendation for preparing for the various known and as-yet-unknown challenges is building out a GPU-accelerated datacenter.
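To put the PCIe generations in perspective: each lane of PCIe 4.0 signals at 16 GT/s and PCIe 5.0 doubles that to 32 GT/s, with both using 128b/130b line encoding. The short calculation below derives the approximate usable per-direction bandwidth of a full x16 slot from those published link rates; the helper function is just for illustration.

```python
# Theoretical per-direction bandwidth for a PCIe x16 link.
# PCIe 4.0 signals at 16 GT/s per lane; PCIe 5.0 at 32 GT/s per lane.
# Both use 128b/130b encoding, so usable bits = raw rate * 128/130.

def pcie_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Approximate usable bandwidth in GB/s (one direction)."""
    gt_per_lane = {4: 16, 5: 32}[gen]        # giga-transfers per second
    usable_gbits = gt_per_lane * 128 / 130   # 128b/130b line encoding
    return usable_gbits / 8 * lanes          # bits -> bytes, scaled by lanes

print(f"PCIe 4.0 x16: ~{pcie_bandwidth_gbps(4):.1f} GB/s")  # ~31.5 GB/s
print(f"PCIe 5.0 x16: ~{pcie_bandwidth_gbps(5):.1f} GB/s")  # ~63.0 GB/s
```

Real-world throughput is lower once protocol overhead is included, but the generational doubling is what matters for feeding GPUs, NVMe storage, and high-speed NICs from the same host.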

GPU-based infrastructure requires fewer servers, dramatically improves performance per watt, and delivers unrivaled throughput. GPUs accelerate AI and HPC workloads, but they can also improve the performance of data-heavy applications. Virtualization lets users take advantage of the fact that GPUs rarely operate anywhere near capacity: by abstracting the GPU hardware from the software, virtualization essentially right-sizes GPU acceleration for every task.
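That right-sizing idea can be sketched as fractional allocation: each task requests a share of a GPU, and a scheduler packs shares onto physical devices instead of dedicating a whole GPU per task. The toy first-fit allocator below is only a sketch of the concept; real vGPU and GPU-partitioning products use their own sizing profiles and placement policies.

```python
# Illustrative "right-sizing" of virtualized GPU capacity: tasks request
# a fraction of one GPU, and a simple first-fit allocator packs them onto
# physical devices. The function and sizing policy are hypothetical.

def first_fit(tasks: list[float], num_gpus: int):
    """Assign fractional-GPU tasks to GPUs; return None if capacity runs out."""
    free = [1.0] * num_gpus                      # remaining fraction per GPU
    placement = {i: [] for i in range(num_gpus)}
    for share in tasks:
        for gpu in range(num_gpus):
            if share <= free[gpu] + 1e-9:        # first GPU with enough room
                free[gpu] -= share
                placement[gpu].append(share)
                break
        else:
            return None                          # no GPU can host this task
    return placement

# Four quarter-GPU inference tasks plus one full-GPU training task fit on
# two physical GPUs, where whole-GPU allocation would have needed five.
print(first_fit([0.25, 0.25, 0.25, 0.25, 1.0], num_gpus=2))
```

The payoff is utilization: consolidating small tasks onto shared GPUs is what lets a GPU-accelerated datacenter absorb many lightweight edge-driven workloads without buying a device per workload.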

In addition, many exciting new technologies are being built on GPUs or explicitly need the acceleration GPUs provide, including new technologies to support edge computing.

Best of all, the cost of GPUs has dropped in recent years, while the hardware infrastructure and software stacks that can take advantage of them – both storage and compute – have expanded rapidly. As a result, you can predict future performance capacity, and thus the cost of potential workload expansion, with a high degree of accuracy.

Learn more about our GPU-accelerated computing options here.


Speak with an Expert Configurator at 1-800-371-1212