At next month’s Optical Fiber Communication Conference and Exhibition (OFC), many of the technologies driving current and future Ethernet development will be on full display from the Ethernet Alliance, which is set to unveil its tenth-anniversary Ethernet roadmap.
Among the hot trends is, unsurprisingly, outfitting Ethernet for all things AI.
“Right now we are trying to get all of the people and groups who want to build Ethernet for AI to collaborate and work together. At this point, trying to do that isn’t like trying to turn a supertanker but rather guide the Dunkirk evacuation fleet. There are lots of different ideas and development – we are just trying to get them all going the same direction,” Peter Jones, chairman of the Ethernet Alliance, told Network World.
The Ethernet community is huge and diverse, and it includes individual vendors such as Cisco, Arista and Juniper as well as technology groups like the Ultra Ethernet Consortium, IEEE, and Universal Accelerator Link (UALink), which is focused on developing a standardized way for AI accelerators to communicate. All are interested in enabling networking technology to better support AI.
The Ethernet Alliance’s 10th anniversary roadmap references the consortium’s 2024 Technology Exploration Forum (TEF), which highlighted the critical need for collaboration across the Ethernet ecosystem: “Industry experts emphasized the importance of uniting different sectors to tackle the engineering challenges posed by the rapid advancement of AI. This collective effort is ensuring that Ethernet will continue to evolve to provide the network functionality required for next-generation AI networks.”
Some of those engineering challenges include congestion management, latency, power consumption, signaling, and the ever-increasing speed of the network.
“While the IEEE P802.3dj project is working toward defining 200G per lane for Ethernet by late 2026, the industry is (loudly) asking for 400G per lane yesterday, if not sooner,” Jones wrote in a recent Ethernet Alliance blog.
In a post about Ethernet’s AI evolution, John D’Ambrosia wrote about the development of 400 Gb/s signaling: “The IEEE P802.3dj project is defining the underlying 200Gb/s PAM4 signaling technologies in support of chip-to-chip, chip-to-module, backplane, copper cable, and single-mode fiber technologies to facilitate the numerous specifications for 200GbE, 400GbE, 800GbE, and 1.6TbE. These efforts are expected to be completed in the second half of 2026 so AI applications will have some near-term solutions to leverage. However, the staggering growth rates of computational power require the industry to start looking beyond 200 Gb/sec based signaling now for the networks of the future.”
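The arithmetic behind those lane and link rates is straightforward and worth making concrete. PAM4 carries log2(4) = 2 bits per symbol, so a 200 Gb/s lane runs at roughly 100 GBd (ignoring FEC and coding overhead), and the higher Ethernet rates are built by aggregating lanes. A minimal sketch (the lane counts are simple division, not figures quoted from the P802.3dj project):

```python
from math import log2

# PAM4 encodes log2(4) = 2 bits per symbol, so a 200 Gb/s lane
# runs at roughly 100 GBd (ignoring FEC and coding overhead).
BITS_PER_PAM4_SYMBOL = int(log2(4))  # = 2

def lanes_needed(ethernet_rate_gbps: int, lane_rate_gbps: int = 200) -> int:
    """Number of 200G electrical/optical lanes to reach a given Ethernet rate."""
    return ethernet_rate_gbps // lane_rate_gbps

for rate in (200, 400, 800, 1600):  # 200GbE, 400GbE, 800GbE, 1.6TbE
    baud = 200 // BITS_PER_PAM4_SYMBOL
    print(f"{rate}G Ethernet: {lanes_needed(rate)} x 200G lanes, ~{baud} GBd per lane")
```

The same division shows why the industry is pushing for 400G per lane: at 200G per lane, 1.6TbE needs eight lanes, and halving the lane count halves the interconnect complexity per port.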
“One of the outcomes of [the TEF] event was the realization [that] the development of 400Gb/sec signaling would be an industry-wide problem. It wasn’t solely an application, network, component, or interconnect problem,” stated D’Ambrosia, who is a distinguished engineer with the Datacom Standards Research team at Futurewei Technologies, a U.S. subsidiary of Huawei, and the chair of the IEEE P802.3dj 200Gb/sec, 400Gb/sec, 800Gb/sec and 1.6Tb/sec Task Force. “Overcoming the challenges to support 400 Gb/s signaling will likely require all the tools available for each of the various layers and components.”
The IEEE in January began an “802.3 Ethernet Interconnect for AI” assessment, a multivendor effort to evaluate key requirements for Ethernet in AI networks, including:
- What are the interconnect requirements for the different AI networks?
- What are the performance requirements of these interconnects?
- What are the priorities for the development of these interconnects?
- What tradeoffs can be made between latency and resilience/reach/power?
“We are actively trying to figure out and understand which set of problems to solve here,” Jones said.
Ethernet vs. InfiniBand
There’s also a broader trend of AI networks moving toward Ethernet and away from the current connectivity stalwart, InfiniBand.
Ethernet is experiencing significant momentum, propelled by supply and demand factors, according to a recent blog by Sameh Boujelbene, vice president with the Dell’Oro Group.
“More large-scale AI clusters are now adopting Ethernet as their primary networking fabric. One of the most striking examples is xAI’s Colossus [supercomputer], a massive Nvidia GPU-based cluster that has opted for Ethernet deployment,” Boujelbene stated. “We therefore revised our projections, moving up the anticipated crossover point where Ethernet surpasses InfiniBand to 2027.”
Wireless and campus networks
There are other Ethernet trends and directions beyond AI networking. For example, the Ethernet Roadmap notes that as Wi-Fi 7 (802.11be) rolls out, Ethernet remains the backbone ensuring high-speed, low-latency connectivity for next-gen wireless networks.
“With multi-link operation (MLO), 320 MHz channels, and 4096-QAM, Wi-Fi 7 delivers faster speeds and improved efficiency, but reliable wired backhaul is essential to unlock its full potential,” the Ethernet Alliance stated. “Ethernet’s role in powering dense enterprise, industrial, and home networks continues to expand, supporting higher-speed access points, lower latency, and seamless integration with 5G and fiber networks. The synergy between Wi-Fi and Ethernet is critical for enabling scalable, high-performance hybrid networks for the future.”
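The figures in that quote translate directly into throughput gains. A QAM constellation of M points carries log2(M) bits per symbol, so 4096-QAM yields 12 bits versus 10 bits for Wi-Fi 6’s 1024-QAM, and doubling the channel width from 160 MHz to 320 MHz doubles the subcarrier count. A rough back-of-the-envelope sketch (this is the relative scaling only, not the full 802.11be PHY-rate calculation, which also depends on spatial streams, guard intervals, and coding rate):

```python
from math import log2

def bits_per_symbol(qam_order: int) -> int:
    # An M-point QAM constellation carries log2(M) bits per symbol.
    return int(log2(qam_order))

wifi6_bits = bits_per_symbol(1024)  # Wi-Fi 6: 1024-QAM, up to 160 MHz channels
wifi7_bits = bits_per_symbol(4096)  # Wi-Fi 7: 4096-QAM, up to 320 MHz channels

modulation_gain = wifi7_bits / wifi6_bits   # 1.2x from denser modulation alone
total_gain = modulation_gain * (320 / 160)  # 2.4x once the channel width doubles

print(f"modulation gain: {modulation_gain:.1f}x, with 320 MHz channels: {total_gain:.1f}x")
```

That roughly 2.4x per-link scaling (before MLO aggregates links across bands) is why the wired backhaul behind each access point has to keep pace.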
The roadmap also spells out the continued importance of Ethernet development for bigger workloads and higher speeds.
“Enterprise and campus networks represent a massive market for Ethernet, with over a billion ports shipping annually. The majority of these ports are BASE-T at the access layer, while multi-mode (MMF) and single-mode fiber (SMF) support higher-speed connections deeper in the network,” according to the Ethernet Alliance’s roadmap. “Evolving Wi-Fi access points and Enterprise-class client devices are accelerating the transition to higher-speed Ethernet. BASE-T ports are shifting from 1000BASE-T to 2.5G, 5G, and 10G BASE-T, while optical ports are rapidly advancing from 10G/40G to 25G, 100G, and 200G, ensuring greater capacity, efficiency, and future scalability.”
Future developments: co-packaged optics
Optics are also part of the trend toward more efficient data center networking, according to the roadmap.
“New interconnect solutions [are emerging], such as Co-Packaged Optics (CPO), On-Board Optics (OBO), and Linear Pluggable Optics (LPO). As data centers deploy higher and higher link speeds, the power consumption of the optical module increases significantly. The need for reduced-power optical solutions is fueling innovation and creativity in this market,” according to the roadmap.
“The number of chips that can be connected on an electronic module has historically been limited by electrical pathways,” Boujelbene told Network World recently. “However, CPO holds the promise of significantly increasing the interconnection density between accelerators and chips.”
Although CPO technology is not yet ready for mass deployment in scale-out networks that connect accelerators across multiple racks or data centers, its initial market opportunity lies in chip-to-chip connectivity, which operates in a more confined environment with fewer interoperability requirements, Boujelbene said.