Big Data Drives Optical Networking Changes
HANK HOGAN, CONTRIBUTING EDITOR, hank.hogan@photonics.com

Long driven by telecom, optical networks are now being pushed by the large data centers operated by Facebook, Amazon, Google, Microsoft and others. Here, runs are shorter and the emphasis is not on utility-grade reliability, a change from the telecom world. Instead, lowering the cost per bit and boosting bandwidth are of paramount importance. Suppliers have come up with new solutions, and users are not waiting for standards to be finalized.
Consider Facebook. Host to 1.8 billion monthly active users, the Menlo Park, Calif.-based company sees a future of bandwidth demands far higher than what’s needed for text and still images. Analysts predict 75 percent of the world’s mobile data traffic will consist of video and virtual reality by 2020, noted Katharine Schmidtke, the company’s sourcing manager for optical technology strategy.
Massive data centers, like Facebook’s 290,000-sq-ft facility on the edge of the Arctic Circle in Luleå, Sweden, are being built in response to growing demand, which is leading to changes in optical networking. Courtesy of Facebook.
“With the onset of these new services, we need to make sure our global infrastructure is designed to handle richer content at faster speeds. To meet these current requirements and any future bandwidth demands, we’re deploying the 100G (100 gigabits per second, or Gbps) data center, which puts increasing pressure on the optical network,” she said.
Facebook is actively working to bring about solutions that combine packet and dense wavelength division multiplexing technologies. Dubbed “Open Packet DWDM,” the approach has the advantage of cleanly separating software and hardware, Schmidtke said.
Some optical networking components are fabricated in a clean room, such as this facility at NeoPhotonics’ headquarters in San Jose, Calif. Courtesy of NeoPhotonics.
That enables each to independently advance. Because it is based on open specifications, anyone can contribute systems, components or software. Facebook has done this and intends to continue this work, driven, in part, by self-interest.
“There needs to be a cost-effective solution that’s optimized for the specific requirements of a data center,” Schmidtke said.
As director of packet optical architectures at Cisco Systems Inc., Russ Esmacher has seen the impact of large data centers. Headquartered in San Jose, Calif., Cisco has optical networking products in both the telecom and data center markets.
Traffic from millions of users flows into and out of data centers, requiring large amounts of storage and network capacity. Courtesy of Facebook.
Telecom, also known as transport, has its roots in telephone voice traffic. Such networks have traditionally been built to utility-grade specifications, with reliability of 99.999 percent while moving information over thousands of kilometers. In contrast, data centers use runs of, at most, a few tens of kilometers. They also utilize packet-based networks, a technology developed for the internet. Because their architectures are designed for network rather than nodal redundancy, the reliability of individual components can fall below five 9s without affecting data delivery.
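To put those reliability figures in perspective, here is a rough back-of-the-envelope sketch; the path availabilities are illustrative assumptions, not numbers from the article. It shows what five 9s means in yearly downtime, and why redundant network paths let individual elements fall short of that mark.

```python
# Back-of-the-envelope availability arithmetic; figures are illustrative
# assumptions, not numbers from the article.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes(availability):
    """Expected downtime per year at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

# A single utility-grade ("five 9s") element:
print(f"99.999% element: {downtime_minutes(0.99999):6.2f} min/yr of downtime")

# Two independent, lower-grade paths in parallel (network redundancy):
path = 0.999                    # each path reaches only "three 9s"
combined = 1 - (1 - path) ** 2  # traffic is lost only if both paths fail at once
print(f"Two 99.9% paths: {downtime_minutes(combined):6.2f} min/yr of downtime "
      f"({combined:.4%} available)")
```

Under these assumptions, two modest 99.9 percent paths in parallel already exceed five 9s at the network level, which is the logic behind trading nodal reliability for network redundancy.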
Pressure to cut costs
Traditionally, the transport and packet worlds were built and operated independently. However, the influence of large data centers has started to change this, to the benefit of transport operators. They face flat rates and demands for more bandwidth, so they’re looking for ways to reduce expenses. “Can you deliver an optimized network that cuts the cost to deliver a bit to the subscriber by 50 percent?” Esmacher said.
The answer, he added, is yes. By using the spine-and-leaf network architecture and other optical technologies employed in the data center, along with packet-optical transport solutions, metro transport networks can see their costs cut by half or more. Such solutions, together with other approaches that achieve the same savings by making routers more intelligent and sophisticated, are now being deployed, Esmacher said.
Access, metro and long-haul fiber optic networks. Courtesy of NeoPhotonics.
Going forward, achieving optical cost reductions will be increasingly critical. Networks are a mix of electronic and optical components, so data is translated from electrons to photons and back again repeatedly.
At one time, most network expenditures went to the electronic side, which accounted for perhaps 85 percent of capital costs a decade ago, Esmacher said. Thanks to Moore’s Law, the cost of electronics has fallen more rapidly than the cost of optical elements. Today, about half the capital cost is optical transport and associated optics, which has changed the industry’s focus.
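One way to see how Moore’s Law alone can drive that shift is a toy model in which electronic costs halve every couple of years while optical costs decline far more slowly. The starting split and decay rates below are illustrative assumptions, not Cisco figures.

```python
# Toy model of the capex split shifting from electronics to optics.
# Starting split and cost-decline rates are assumptions, not Cisco data.
electronic, optical = 85.0, 15.0          # relative capex split a decade ago
for year in range(0, 11, 2):
    share = optical / (electronic + optical)
    print(f"year {year:2d}: optical share of capex = {share:.0%}")
    electronic *= 0.5 ** (2 / 2.5)        # electronics cost halves roughly every 2.5 years
    optical *= 0.5 ** (2 / 8)             # optical cost falls much more slowly
```

Under these assumptions, the optical share grows from roughly 15 percent to about half over ten years, which matches the trajectory Esmacher describes.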
“Now we’re seeing the market, probably over the last year and a half, starting to look at the optical industry and say what can be done here. Silicon photonics, using CMOS technology, certainly feels like a right move,” Esmacher said.
Mobile data traffic is expected to surge, leading to changes in data centers and optical networking. Courtesy of Cisco.
By taking advantage of the large silicon manufacturing base to build modulators and other components, silicon photonics promises cost reductions and performance improvements. The rapidly growing volume of such devices used in data centers could also help achieve cost goals.
However, Esmacher noted that silicon’s current inability to lase via electrical pumping means a pure silicon solution is not in the cards. Instead, lasing sources will be built from a material such as indium phosphide or gallium arsenide.
Shorter links on the horizon
Esmacher predicted that in the future optical technology will find itself in shorter and shorter links. Today it is being looked at for runs between routers and servers. Soon it will be used within the boxes themselves. That is one reason to go with silicon photonics and photonic integrated circuits. The number of connections is going up as this push to shorter links proceeds. Only with increasing integration can the necessary density be realized.
San Jose, Calif.-based NeoPhotonics Corp. designs and manufactures hybrid photonic integrated circuits for bandwidth-intensive, high-speed communications networks using silicon, indium phosphide and gallium arsenide wafers and components. Its products are found in telecom networks and data centers. Within data centers, speeds today max out at 100 Gbps and distances run up to 10 kilometers; between data centers, distances range from a few kilometers to several thousand, with rates of up to 600 Gbps on a single wavelength, said Winston Way, chief system architect.
High-speed fiber optic connections for a data center. Courtesy of Broadcom Ltd.
Way said the company’s strength lies in getting higher bandwidth out of modulators and receivers, as well as clean laser sources. The requirement for more bandwidth is driven, in part, by the increasing speed of electronics — performance the optical components need to match. Historically, electronics data rates double every six to seven years.
In 2000, electronic components were rated at 10 Gbps. By 2010, they were up to 25 Gbps, and 100-Gbps links were achieved by ganging four such channels together. Today, electronics are up to 50 Gbps, so the industry is looking at 400-Gbps connections. That points to future directions for optical networks, Way noted.
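The arithmetic behind those link speeds is simply the per-lane electrical rate multiplied by the number of lanes ganged together. The lane counts below are common pairings used here for illustration, not a roadmap from the article.

```python
# Aggregate link rate = per-lane electrical rate x number of ganged lanes.
def link_rate_gbps(lane_gbps, lanes):
    return lane_gbps * lanes

print(link_rate_gbps(25, 4))    # 100 Gbps from four 25-Gbps lanes (the 2010-era step)
print(link_rate_gbps(50, 8))    # 400 Gbps from eight 50-Gbps lanes
print(link_rate_gbps(100, 4))   # 400 Gbps from four 100-Gbps lanes
```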
If the cost can be reduced to the degree that seems possible, silicon photonics will enable more parallel connections and, therefore, greater bandwidth. “They can integrate a lot of channels in one silicon IC [integrated circuit]. But they have to face all the challenges of packaging and the limited optical power,” Way said.
The multiple channels on such chips can be used either by dedicating one wavelength per fiber and putting as many fibers in a bundle as needed, or by sending multiple wavelengths down a single fiber. The first approach is better suited for short runs because the expense per unit length of multifiber cable tends to be higher than for a single-fiber one. The second approach makes more demands on the photonic components, requiring, for example, multiple uncooled lasers, low-loss multiplexing/demultiplexing, and minimal channel crosstalk.
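A rough sketch of that trade-off: with one wavelength per fiber, cable cost grows with both channel count and distance, while with multiple wavelengths on one fiber (WDM), a single fiber carries every channel but the per-channel optics cost more. All prices below are made-up placeholders chosen only to show where a crossover can appear, not vendor figures.

```python
# Rough cost comparison of parallel fiber vs. WDM; all prices are placeholders.
def parallel_fiber_cost(channels, length_km,
                        cable_cost_per_km_per_fiber=50.0,
                        transceiver_cost=100.0):
    # One fiber (and one inexpensive transceiver) per channel; cable cost scales with length.
    return channels * (length_km * cable_cost_per_km_per_fiber + transceiver_cost)

def wdm_cost(channels, length_km,
             cable_cost_per_km=60.0,
             transceiver_cost=300.0,     # wavelength-specific lasers cost more
             mux_demux_cost=500.0):
    # A single fiber carries all channels; add mux/demux and pricier per-channel optics.
    return length_km * cable_cost_per_km + channels * transceiver_cost + mux_demux_cost

for km in (0.5, 2, 10, 40):
    p, w = parallel_fiber_cost(8, km), wdm_cost(8, km)
    print(f"{km:>5} km: parallel ${p:,.0f}  vs  WDM ${w:,.0f}")
```

With these placeholder numbers, parallel fiber wins inside a building but WDM wins once runs stretch to kilometers, which is consistent with Way’s point about short runs.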
The big data center operators are taking their own approaches to optical networking, Way said. Thus, it may be some time before a consensus and a standard emerge.
Way also noted that data centers get attention, in part, because they are a rapidly growing market. Another reason is that data center interconnects are having an impact on the transport side of optical networking. However, long haul and metro still make up the bulk of the transport market, accounting for perhaps five out of every six dollars spent outside the data center.
The long-reach optical transport and short-range data center domains use different sources, fibers, receivers and modulators. Somewhere around 40 to 120 km, however, there is a crossover between the two; Way noted both technologies need to demonstrate a lower cost of ownership to win over that range. That also happens to be about the distance of a typical data center interconnect, or runs that tie two separated sites together with a high-bandwidth, high-capacity link. These tend to be 100 kilometers or less and are a very hot application area, Way said.
The march up in optical networking bandwidth presents challenges, said Mitchell Fields, vice president and general manager of the fiber optic products division at Broadcom Ltd. Headquartered in Singapore and San Jose, Calif., the company is a supplier of sources, detectors and chips, including vertical-cavity surface-emitting lasers and lasers for use in silicon photonics.
One challenge is that data center optical networking can require speeds that are hard to achieve. Another is that when the switch is made to a new higher speed, the surge in demand for components can be difficult to satisfy. It may be possible, for instance, to build the necessary components but not at the desired price.
Working out those kinks can take time, and such problems have in the past delayed the widespread adoption of a new bandwidth threshold. Today, 100 Gbps is being deployed, but the next step up, to 400 Gbps, is some time off, Fields noted.
“There will always be these niche applications where people use 400 gig as the early demonstrations of it,” he said. “But what I always look for is when you start to see that knee in the ramp curve.”