How the Internet Is Physically Built: Cables, Servers, and Infrastructure

Learn about the physical infrastructure of the internet, from undersea fiber optic cables and data centers to internet exchange points and last-mile connections.

The InfoNexus Editorial Team · May 3, 2026 · 9 min read

The Physical Reality of the Internet

The internet is often described using metaphors like "the cloud" that suggest something ethereal and intangible. In reality, the internet is a vast physical infrastructure of fiber optic cables, copper wires, data centers, routers, switches, and transmission towers spanning every continent and crossing every ocean floor. Understanding how the internet is physically built reveals a remarkable global engineering achievement — a network that transmits data as pulses of light, at nearly the speed of light, through millions of kilometers of cable to connect over 5 billion users worldwide.

Every email sent, every video streamed, and every web page loaded depends on physical hardware occupying real space, consuming real energy, and requiring constant maintenance. The internet's physical infrastructure represents one of the largest construction projects in human history.

The Internet's Layered Physical Architecture

Tier 1: The Backbone

The internet backbone consists of high-capacity fiber optic cables that carry the vast majority of internet traffic between continents, countries, and major cities. These cables — typically bundles of hair-thin glass fibers — transmit data as pulses of light using a technology called wavelength-division multiplexing (WDM), which sends multiple signals simultaneously on different wavelengths of light through the same fiber. A single modern fiber pair can carry over 25 terabits per second.
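The arithmetic behind that figure is straightforward: WDM multiplies one fiber's throughput by the number of wavelength channels it carries. A minimal sketch, where the channel count and per-channel data rate are illustrative assumptions rather than the specs of any particular cable system:

```python
# Sketch: aggregate capacity of one fiber under dense WDM.
# 128 channels at 200 Gbps each are illustrative values, not
# the parameters of any specific deployed system.

def wdm_capacity_tbps(channels: int, gbps_per_channel: float) -> float:
    """Total fiber throughput: wavelength channels x per-channel rate."""
    return channels * gbps_per_channel / 1000  # Gbps -> Tbps

print(wdm_capacity_tbps(128, 200))  # 25.6 Tbps, in line with "over 25 Tbps"
```

Capacity upgrades often need no new cable at all: operators add or re-tune wavelengths at the shore ends, which is why older cables keep gaining headline capacity.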

Major backbone operators include companies like Lumen Technologies (formerly CenturyLink), Telia Carrier, NTT Communications, and GTT. These Tier 1 networks peer with each other without payment — a system called settlement-free peering — forming the top level of the internet hierarchy.

Submarine Cable Network

Approximately 99% of intercontinental data traffic travels through submarine (undersea) fiber optic cables, not satellites. As of 2024, over 550 submarine cable systems span more than 1.4 million kilometers across ocean floors worldwide.

| Cable System | Route | Length (km) | Capacity | Year Completed |
|---|---|---|---|---|
| 2Africa | Africa, Europe, Middle East | 45,000 | 180+ Tbps | 2023–2024 |
| PEACE Cable | Asia to Europe via Pakistan, Africa | 15,000 | 96 Tbps | 2022 |
| Dunant | U.S. to France | 6,600 | 250 Tbps | 2020 |
| MAREA | U.S. to Spain | 6,600 | 200 Tbps | 2018 |
| SEA-ME-WE 6 | Southeast Asia to Europe | 19,200 | 100+ Tbps | 2025 |

Submarine cables are typically 17 to 21 millimeters in diameter in deep water — roughly the size of a garden hose. They contain fiber optic strands surrounded by layers of steel wire, copper, polyethylene, and tar coating for protection. Cables are laid by specialized ships that carry thousands of kilometers of cable on massive spools. In shallow coastal waters, cables are buried 1 to 2 meters into the seabed to protect against anchors and fishing trawlers.

Optical amplifiers (repeaters) are spliced into the cable every 60 to 100 kilometers to boost the light signal. These repeaters are powered by a constant electrical current sent through a copper conductor in the cable from shore-based power feed equipment — a single cable can require up to 15,000 volts DC.
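Those two figures together determine how many powered devices sit on the seabed. A rough sketch, using the repeater spacing cited above; the 80 km spacing and the 6,600 km transatlantic length are illustrative, and real systems vary by design:

```python
# Sketch: estimating the repeater count for a submarine cable from
# the 60-100 km amplifier spacing mentioned above. Values are
# illustrative, not the design of any named cable.
import math

def repeater_count(cable_km: float, spacing_km: float) -> int:
    """Number of optical amplifiers spliced in at regular intervals."""
    return math.floor(cable_km / spacing_km)

# A 6,600 km transatlantic route at 80 km spacing:
print(repeater_count(6600, 80))  # 82 repeaters
```

Every one of those repeaters draws its power from the same series circuit fed from shore, which is why the feed voltage climbs into the thousands of volts on long routes.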

Data Centers

Data centers are the warehouses of the internet — facilities that house the servers storing and processing the data that users access. A large hyperscale data center can contain hundreds of thousands of servers consuming 100+ megawatts of electricity — enough to power a small city.

  • Hyperscale data centers: Operated by companies like Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and Meta. There are over 900 hyperscale data centers worldwide as of 2024.
  • Colocation data centers: Third-party facilities (Equinix, Digital Realty, CyrusOne) where multiple companies rent space for their servers.
  • Edge data centers: Smaller facilities positioned closer to end users to reduce latency for applications like video streaming, gaming, and IoT.

Data centers require massive cooling systems (accounting for roughly 40% of their energy consumption), redundant power supplies (diesel generators as backup), and physical security measures including biometric access, 24/7 surveillance, and man-traps.
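The standard way to express that cooling overhead is Power Usage Effectiveness (PUE): total facility power divided by the power that actually reaches IT equipment, with 1.0 as the ideal. A minimal sketch, where the power breakdown is an illustrative assumption consistent with the ~40% cooling share noted above:

```python
# Sketch: Power Usage Effectiveness (PUE), the common data-center
# efficiency metric. The 100 MW facility breakdown below is an
# illustrative assumption, not data from any specific operator.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

# 100 MW facility: 55 MW to servers, ~40 MW to cooling, the rest to
# lighting, power conversion losses, and other overhead.
print(round(pue(100_000, 55_000), 2))  # ~1.82
```

Modern hyperscale facilities report PUE values closer to 1.1, largely by replacing traditional chillers with outside-air and evaporative cooling.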

Internet Exchange Points (IXPs)

An Internet Exchange Point is a physical location where different networks (ISPs, content providers, cloud companies) interconnect to exchange traffic directly, rather than routing through third-party networks. IXPs reduce latency, lower costs, and improve performance by keeping local traffic local.

| IXP | Location | Peak Traffic | Connected Networks |
|---|---|---|---|
| DE-CIX Frankfurt | Frankfurt, Germany | ~14 Tbps | 1,100+ |
| AMS-IX | Amsterdam, Netherlands | ~12 Tbps | 900+ |
| LINX | London, UK | ~8 Tbps | 950+ |
| Equinix IX Ashburn | Ashburn, Virginia, USA | ~5 Tbps | 300+ |
| IX.br (PTT) | São Paulo, Brazil | ~30 Tbps | 2,700+ |
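"Keeping local traffic local" can be quantified as propagation delay: without a local exchange, traffic between two networks in the same city may "trombone" through a distant transit hub. A sketch under stated assumptions — the São Paulo–Miami distance, the short metro path, and the 2/3-c speed of light in fiber are all approximations:

```python
# Sketch: latency saved by peering at a local IXP instead of routing
# through a distant transit hub. Distances are rough approximations.

FIBER_KM_PER_MS = 299_792 * 2 / 3 / 1000  # ~200 km of fiber per ms

def one_way_ms(route_km: float) -> float:
    """Minimum one-way propagation delay over a fiber route."""
    return route_km / FIBER_KM_PER_MS

# Two São Paulo networks tromboning via a Miami hub (~6,600 km each way):
print(f"via distant hub: {one_way_ms(2 * 6_600):.0f} ms one way")
# The same two networks peering at IX.br in the same metro area:
print(f"via local IXP:   {one_way_ms(50):.2f} ms one way")
```

Propagation delay alone understates the gap, since every extra router hop and congested transit link adds queuing delay on top.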

The Last Mile

The "last mile" refers to the final leg of connectivity from local infrastructure to end users' homes and businesses. This segment is often the bottleneck of internet performance:

  • Fiber to the home (FTTH): Fiber optic cable runs directly to the building. Offers speeds of 1–10 Gbps. Increasingly common in new developments and urban areas.
  • Cable (DOCSIS): Uses existing coaxial cable TV infrastructure. Current DOCSIS 3.1 supports up to 10 Gbps downstream. Widely deployed in North America.
  • DSL: Transmits data over telephone copper wires. Speed drops with distance from the exchange; typically 10–100 Mbps. Being phased out in favor of fiber.
  • Fixed wireless: Uses radio signals from nearby towers. Serves rural areas where laying cable is uneconomical.
  • Satellite: Geostationary satellites (high latency, ~600 ms) and low Earth orbit constellations like Starlink (~20–40 ms latency) provide coverage to remote areas.
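The satellite latency figures above follow almost entirely from geometry: a round trip through a satellite makes four space legs (user to satellite to ground gateway, then the reverse for the reply). A minimal sketch; the orbital altitudes and the 2/3-c fiber speed are standard approximations, and real-world latency adds routing and processing overhead on top:

```python
# Sketch: minimum round-trip times implied by orbital altitude,
# versus a transatlantic fiber route. Real latency is higher due to
# routing, queuing, and processing (GEO in practice reaches ~600 ms).

C_VACUUM = 299_792          # km/s, radio waves in space
C_FIBER = C_VACUUM * 2 / 3  # light slows to ~2/3 c in glass fiber

def satellite_rtt_ms(altitude_km: float) -> float:
    """Minimum RTT: four legs between ground and satellite."""
    return 4 * altitude_km / C_VACUUM * 1000

def fiber_rtt_ms(distance_km: float) -> float:
    """Minimum RTT through fiber of the given one-way length."""
    return 2 * distance_km / C_FIBER * 1000

print(f"GEO (35,786 km):  {satellite_rtt_ms(35_786):.0f} ms")  # ~477 ms
print(f"LEO (550 km):     {satellite_rtt_ms(550):.1f} ms")     # ~7.3 ms
print(f"Fiber (6,600 km): {fiber_rtt_ms(6_600):.0f} ms")       # ~66 ms
```

This is why no protocol tweak can make geostationary internet feel snappy: the ~477 ms floor is set by the 35,786 km orbit, while low Earth orbit constellations shrink the space legs by a factor of roughly 65.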

The Scale and Vulnerability of Internet Infrastructure

The physical internet is both remarkably resilient and surprisingly vulnerable. The network's redundancy means that traffic can be rerouted around damage — if one submarine cable breaks (which happens roughly 100 times per year, usually from ship anchors or earthquakes), traffic flows through alternative paths. However, certain chokepoints concentrate risk: the Strait of Malacca, the Suez Canal area, and the waters around the UK carry disproportionate shares of global cable traffic. A coordinated attack on a few key submarine cable landing stations could severely disrupt international communications. The physical internet is a critical piece of global infrastructure that modern civilization depends upon, yet its physical reality remains largely invisible to the billions of people who use it every day.

Tags: engineering, internet, infrastructure