With the shift towards fifth generation (5G) wireless networks, deploying 5G-enabled devices to the edge – and managing the movement of crucial data between locations – is becoming more practical.
In fact, by 2023, more than half of new IT infrastructure will be deployed at the edge, according to technology research firm IDC. Similarly, most enterprise data will be generated and processed outside of centralized data centers and the cloud by 2025, according to Gartner.
While 5G is a critical advancement that makes these edge deployments possible, it also demands substantial changes to datacenter infrastructure.
Even with an explosion in edge devices, the datacenter will still be the hub of the modern computing model for years to come. But, with 5G, the amount of information that the datacenter will be asked to manage will grow dramatically, likely in ways we can’t fully anticipate today.
Organizations seeking competitive advantage will want to capitalize on their new ability to quickly access and analyze so much data. The result will be new and expanded AI and HPC systems, including full or hybrid HPC/AI deployments in the cloud. However, the datacenters that support these deployments will need specific changes to reach their full potential.
Edge devices will need advanced computing power, expanded storage, and improved connectivity equipment to handle demanding workloads, larger quantities of data, and faster transmission of data to and from the datacenter.
Edge computing can carry some of the burden of data processing, but many workflows will require support from more powerful compute resources for things like remote human oversight, high-performance data analytics, and training of AI algorithms. To support this, the datacenter will require significant changes.
Fortunately, there are several technologies that can help address the new datacenter requirements that 5G devices bring. These include faster networking connections as well as the ability to connect more hardware (and more varied types of hardware) via PCIe 4.0 and PCIe 5.0. However, the most important suggestion we have for preparing for the various known and as-yet-unknown challenges is building out a GPU-accelerated datacenter.
GPU-based infrastructure requires fewer servers, dramatically improves performance per watt, and offers unrivaled performance. GPUs can accelerate AI and HPC workloads, but they can also improve the performance of data-heavy applications. Virtualization allows users to take advantage of the fact that GPUs rarely operate anywhere near capacity: by abstracting the GPU hardware from the software, virtualization essentially right-sizes GPU acceleration for every task.
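To make the right-sizing idea concrete, here is a minimal, purely illustrative sketch (not any vendor's actual vGPU scheduler): tasks declare the fraction of a GPU they really need, and a first-fit packer shares physical GPUs among them instead of dedicating one device per task. The task names and demand fractions are invented for the example.

```python
def pack_tasks(tasks, gpu_capacity=1.0):
    """First-fit packing of fractional GPU demands onto physical GPUs.

    tasks: dict mapping task name -> fraction of one GPU the task needs.
    Returns a list of GPUs, each a dict with remaining capacity and
    the tasks placed on it.
    """
    gpus = []  # each entry: {"free": remaining capacity, "tasks": [names]}
    for name, demand in tasks.items():
        # Place the task on the first GPU with enough spare capacity.
        for gpu in gpus:
            if gpu["free"] >= demand:
                gpu["tasks"].append(name)
                gpu["free"] -= demand
                break
        else:
            # No existing GPU fits; provision another one.
            gpus.append({"free": gpu_capacity - demand, "tasks": [name]})
    return gpus

# Four hypothetical tasks that would each claim a whole GPU if not virtualized:
demands = {"inference-a": 0.25, "etl": 0.5, "inference-b": 0.25, "notebook": 0.4}
placement = pack_tasks(demands)
print(len(placement))  # two shared GPUs instead of four dedicated ones
```

Real GPU virtualization layers make the same trade dynamically and enforce isolation in hardware or drivers, but the economics are the same: under-utilizing workloads share devices, so fewer servers do the same work.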
In addition, many exciting new technologies are being built on GPUs or explicitly need the acceleration GPUs provide, including new technologies to support edge computing.
Best of all, the cost of GPUs has been dropping in recent years while the hardware infrastructure and software stacks that can take advantage of them – both storage and compute – have been rapidly expanding. This means you can predict future performance capacity, and thus the cost of potential workload expansion, with a high degree of accuracy.
Learn more about our GPU-accelerated computing options here.