Six Weeks, Six Failures: The Concentration Risk Behind 2026's Telecom Outages

[Diagram: a hub connected to Site A, Site B, cloud, phones, POS, and carrier. One cut. Six paths to dark.]

In the six weeks ending May 13, 2026, Ohio businesses watched their phones, POS terminals, cloud applications, scheduling platforms, and 911 trunks go dark in a string of separate, unrelated failures. A backhoe in Western Pennsylvania severed a fiber line and took down Verizon customers from the Mahoning Valley to Atlanta. A cooling unit failed in a Virginia data center and stopped commerce at thousands of businesses that didn't know they depended on US-EAST-1. AT&T started decommissioning its first wave of copper wire centers. In the background, the fiber supply chain that BEAD-funded rural buildouts depend on quietly tilted toward hyperscaler data centers. And all of it landed on top of a January incident in which a hosting provider in Columbus went dark for 76 minutes and dragged down a long list of companies that had nothing to do with the outage.

The events were unrelated. The lesson was not. Every one of them traced back to the same underlying problem: concentration risk. The infrastructure Ohio businesses rely on — carriers, cloud regions, conduit paths, fiber suppliers, hosting platforms — has consolidated faster than the average business has updated its understanding of what it depends on. "Redundant" almost never means what the brochure says. And when one of those dependencies fails, the failure radius is bigger than anyone expects.

The backhoe theory of network reliability

On May 5, a contractor cut a fiber line somewhere in Western Pennsylvania. Within the hour, Verizon customers across the eastern third of the country were down. Mahoning Valley callers got dead air. So did people in New York, D.C., Atlanta, New Orleans, and parts of Florida. Field-service apps stopped syncing. Calls dropped outright. Full restoration took nearly seven hours.

Carriers love the word "redundancy." It shows up in every proposal, every SLA, every slide deck. What they don't tell you is that "redundant" routes increasingly share the same physical conduit and the same handful of long-haul corridors. A single shovel takes down what was sold to you as three separate paths because two of them were running through the same trench. A multi-location business with "diverse" carriers can still go dark if both carriers ultimately ride the same regional POP.

The logical topology in the portal looks fine. The contract says "diverse path." But the physical reality is that nobody on the carrier side is paid to verify true diversity end to end. The sales engineer who quoted your circuits doesn't talk to the network planner who owns the conduit map. The account manager rotates every nine months. And the day your business goes dark, the support line is "we're investigating" for six hours.

Quick win: Pull every internet and voice circuit contract for every site. For each one, ask the carrier (in writing) for the entrance facility, the regional aggregation point, and the long-haul corridor the circuit traverses. If you can't get an answer, that's your answer.
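
If you want to keep those answers somewhere more durable than an inbox, a flat CSV and a few lines of script are enough. The sketch below is illustrative, not a product: the file name and column names (circuits.csv, entrance_facility, and so on) are assumptions. The idea is simply to flag every circuit the carrier hasn't confirmed in writing, and every pair of "diverse" circuits at one site that share a physical detail.

import csv

# The three facts the carrier should confirm in writing for every circuit.
REQUIRED = ["entrance_facility", "regional_aggregation_point", "long_haul_corridor"]

# Assumed inventory format: site,carrier,circuit_id,entrance_facility,
# regional_aggregation_point,long_haul_corridor
with open("circuits.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    missing = [field for field in REQUIRED if not row.get(field, "").strip()]
    if missing:
        # No written answer yet -- per the quick win above, that IS the answer.
        print(f"{row['site']} / {row['carrier']} / {row['circuit_id']}: unverified {missing}")

# Two "diverse" circuits at one site sharing any physical detail are not diverse.
by_site = {}
for row in rows:
    by_site.setdefault(row["site"], []).append(row)
for site, circuits in by_site.items():
    for field in REQUIRED:
        values = [c[field].strip().lower() for c in circuits if c.get(field, "").strip()]
        if len(set(values)) < len(values):
            print(f"{site}: circuits share the same {field} -- not truly diverse")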

The "cloud" is a building, and buildings break

Three days later, on May 8, the air conditioning failed in an Amazon data center in Northern Virginia. Servers in that availability zone shut themselves down to keep from cooking. Coinbase went offline for more than five hours. FanDuel went dark mid-wager. And thousands of smaller Midwest businesses that don't even know they have a dependency on US-EAST-1 found their scheduling platforms, payroll providers, time-clocks, CRMs, and cloud point-of-sale all stumbling at once. The line on the phone was "AWS is investigating." That was the entire answer.

US-EAST-1 is AWS's oldest and largest region; by some estimates, roughly a third of global internet traffic touches it in some way. The vendor you bought "the cloud" from doesn't always tell you which region they run in, and they almost never tell you that their "high availability" inside that region is two servers six feet apart in the same building. A thermal event took out billions of dollars of commerce because nobody planned for the cooling to fail.

Ohio should be paying attention. The state is now the third-largest data center market in the country — 1.6 gigawatts running, 2.4 more planned, over 60% sitting in central Ohio. We are on track to become the next Virginia. The same concentration that took down US-EAST-1 is being built into the grid around Columbus right now. For a multi-location business, the question is no longer "is the cloud reliable" — it's "do I know which buildings my business runs out of, and what happens at my sites when those buildings fail?"
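
If a vendor won't say which region they run in, you can sometimes answer the question yourself. AWS publishes its IP ranges at a documented public URL; the sketch below resolves a hostname and checks which region its address falls in. The hostname is a made-up placeholder, and a CDN or proxy in front of the vendor can mask the true backend region, so treat the result as a starting point for the conversation, not proof.

import ipaddress
import json
import socket
import urllib.request

# Official, documented feed of AWS IP ranges.
RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(RANGES_URL) as resp:
    prefixes = json.load(resp)["prefixes"]

host = "app.example-vendor.com"  # hypothetical -- substitute your SaaS endpoint
ip = ipaddress.ip_address(socket.gethostbyname(host))

regions = {p["region"] for p in prefixes
           if ip in ipaddress.ip_network(p["ip_prefix"])}
print(f"{host} ({ip}) -> AWS region(s): {regions or 'not in AWS ranges'}")

If the answer keeps coming back us-east-1, you've found your Virginia dependency.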

The copper sunset is happening to you

While outages were stealing the headlines, a slower-moving but equally dangerous deadline kept advancing. AT&T is shutting down copper wire centers starting next month — June 2026 — with about 500 in the first wave. Ohio is on the list. By November 15, a lot of those POTS lines stop working.

If your building has a fire alarm panel, an elevator emergency phone, or a security panel that calls a monitoring station, there's a decent chance it's still riding analog copper. The six-figure fire suppression system you installed talks to the central station through a $40-a-month phone line that is about to disappear. When that line goes dark, the DACT in your fire panel can't reach the monitoring center, your next inspection fails, and "we'll just move it to VoIP" doesn't actually work — most fire panels won't pass UL 864 over a standard SIP trunk, and the local AHJ (authority having jurisdiction) won't sign off either.

Businesses that called their carrier in March were quoted June install dates. The ones who waited until April are looking at September — past the cutoff. POTS-replacement specialists are booked out. Cellular failover hardware lead times stretched to twelve weeks somewhere around Easter. The copper isn't dying. It's being killed on a published schedule that most building owners haven't read.

Rural Ohio is paying for Meta's AI buildout

Most business owners never see this part of the story, but it will reshape what gets built across Ohio for the next three years. Meta just signed a $6 billion fiber deal with Corning, and the supply chain shifted overnight. Two small ISPs that won BEAD awards in Ohio had fiber orders with CommScope canceled months later after Corning stopped supplying the glass. Loose-tube fiber, the kind rural FTTH actually needs, now carries a 52-week lead time; ribbon fiber is past 60. Smaller providers are seeing 70–80 percent cost increases, and prices across the board have climbed 40 percent in a matter of weeks.

If you operate in rural Ohio — manufacturers in Wayne County, healthcare networks in the southeast, distribution out of Findlay or Mansfield — that supply squeeze hits you directly. The fiber buildout you were promised slips a year. The ISP partner walks because the economics blew up. The project gets re-scoped to slower technology. And your bandwidth requirements keep going up regardless.

I'm not anti-AI. I'm not anti-Meta. I'm anti-pretending. A real partner in this environment doesn't promise build dates they can't hit. They tell the truth about lead times, and they use what's actually available — fixed wireless, hybrid topologies, microtrenching, leased dark fiber from cooperatives that planned ahead — to keep businesses connected while the glass works its way through the queue.

UCaaS cutovers: the project that was set up to fail

The second category of concentration risk doesn't look like a fiber cut. It looks like a project plan. Three sites. 240 phones. A "twelve-week" cloud-phone cutover that is now in month seven. I've heard the same story three times this spring from operations leaders across the Midwest. The names of the providers change but the failure mode doesn't.

Porting dates slip — because the losing carrier has every incentive to stall and nobody at the new provider is paid to fight that battle on your behalf. E911 records show up at the new platform with addresses that haven't been right since 2019, so when somebody dials 911 from a back warehouse, the dispatcher gets sent to corporate. The "complete feature parity" promised in the sales deck turns out to mean parity with a different customer's PBX. And nobody — nobody — tested the physical handsets on the actual network under actual traffic until the day of cutover.

The technology works. RingCentral works. 8x8 works. Webex and Teams Phone and Zoom Phone all work — when somebody who knows what they're doing runs the cutover. These projects fail because of concentration of a different kind: a single national provider running a shared queue of project managers across forty accounts, none of whom has ever seen your network. A $25,000 quote becomes a $90,000 invoice. If your cloud phone project is two months past go-live and the vendor is still "investigating," that's not a delay. That's a diagnosis.
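
The E911 problem in particular is cheap to catch before cutover. Most platforms will export their E911 records to CSV; the sketch below diffs that export against an authoritative site list. File names and column names are assumptions; adjust them to whatever your provider actually exports.

import csv

def load(path):
    # Maps extension -> dispatch address, normalized for comparison.
    with open(path, newline="") as f:
        return {r["extension"]: r["dispatch_address"].strip().lower()
                for r in csv.DictReader(f)}

expected = load("site_addresses.csv")  # where dispatch SHOULD send help
platform = load("e911_export.csv")     # what the new platform has on file

for ext, addr in expected.items():
    got = platform.get(ext)
    if got is None:
        print(f"ext {ext}: missing from the platform's E911 records")
    elif got != addr:
        print(f"ext {ext}: platform has '{got}', should be '{addr}'")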

The vendor chain you didn't sign up for

On January 16 of this year, a hosting provider with nodes right here in Columbus went dark for an hour and sixteen minutes. Multiple downstream partners — companies that had nothing to do with the outage — went down with it. Their customers couldn't reach their data. Their phones rang. And their IT teams took the heat for something they didn't break.

When you sign a contract with a big carrier or a national MSP, you are not really signing with one company. You are signing with a chain of them — an ISP on top of a hosting platform on top of hardware suppliers and cloud platforms, any one of which can take you down without warning. The week of April 20–26 alone saw 170 network outage events in the US, a 19 percent jump from the prior week. A squirrel took out phone and internet for Medina County government in February. That one's funny. The fact that nobody noticed the lines were that exposed isn't.

The question for a multi-location business in Ohio is no longer "is my provider big enough." It's "do they have one number I can call when something breaks, and does the person who answers that phone have the authority to do anything about it?" The big carriers don't, and the reason is structural — their customer service is a queue, their account team is a rotating door, and your ticket joins a stack of ten thousand others on a bad night.

What "redundancy" actually requires in 2026

Real redundancy isn't a checkbox in a portal. It's a set of physical, contractual, and operational choices that no single carrier has any incentive to make for you.

Physical diversity is verified, not promised. Map the actual conduit path of every circuit. Confirm carrier A and carrier B aren't sharing the last mile or the regional POP. Use fundamentally different transport for failover — fiber primary, fixed wireless secondary, LTE out-of-band for management. Test the cutover on a Tuesday afternoon before an outage forces you to.
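
What does "test the cutover" look like in practice? One low-tech version, assuming a Linux box with both uplinks attached: pin a ping to each interface and confirm both paths independently reach the outside world. Interface names below are assumptions (check yours with ip link), and depending on your routing setup you may also need policy routes for the pinned traffic to leave the right port.

import subprocess

TARGET = "8.8.8.8"  # any stable external address works
PATHS = {
    "primary (fiber)": "eth0",           # hypothetical interface names
    "backup (fixed wireless)": "eth1",
}

for label, iface in PATHS.items():
    # -I pins the ping to one interface so each path is tested on its own.
    result = subprocess.run(
        ["ping", "-I", iface, "-c", "3", "-W", "2", TARGET],
        capture_output=True, text=True,
    )
    print(f"{label} via {iface}: {'OK' if result.returncode == 0 else 'FAILED'}")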

Cloud dependencies are mapped, not assumed. Know which SaaS platforms run in which region. Where you can, push critical workloads into multi-region architectures. Where you can't, make sure the local infrastructure at every site can keep operating without the cloud for the hours that matter. Point-of-sale should not require Virginia. Time-clocks should not require Virginia. Door access should not require Virginia.

The vendor chain is short and known. Demand to know who is actually delivering each part of the stack — the ISP, the hosting platform, the SIP carrier, the hardware supplier — and who you call when one of them fails. If your provider can't draw you that picture in one sitting, they don't know it either.

The deprecation calendar is on the wall. POTS lines that touch fire panels, elevators, and security panels need a written replacement plan with hardware on order before November. Anything depending on 12-plus-week lead times needs to be ordered now.

Quick win: Build a one-page "what we depend on" map for your business. Internet at every site. Voice at every site. Cloud applications by region. Vendor chain for each. The exercise alone surfaces the concentration risks no one has been paid to find.
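
That one-page map doesn't need special tooling. Kept as plain data in version control, it can even flag the obvious gaps for you. Everything below (sites, vendors, regions) is a made-up example of the shape, not a template you must follow.

# "What we depend on," one entry per site. Illustrative data only.
DEPENDENCIES = {
    "Columbus HQ": {
        "internet": ["Carrier A fiber", "Carrier B fixed wireless"],
        "voice": ["UCaaS provider (SIP)"],
        "cloud": {"scheduling": "us-east-1", "payroll": "us-east-2"},
        "vendor_chain": ["ISP", "hosting platform", "SIP carrier"],
    },
    "Findlay warehouse": {
        "internet": ["Carrier A fiber"],          # single path
        "voice": ["POTS line (fire panel!)"],     # copper-sunset exposure
        "cloud": {"time clock": "us-east-1"},
        "vendor_chain": ["ISP"],
    },
}

for site, deps in DEPENDENCIES.items():
    if len(deps["internet"]) < 2:
        print(f"{site}: single internet path, no failover")
    if set(deps["cloud"].values()) == {"us-east-1"}:
        print(f"{site}: every cloud dependency sits in one region")
    if any("POTS" in v for v in deps["voice"]):
        print(f"{site}: POTS line needs a replacement plan before November")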

A 90-day plan for Ohio business owners

If you do nothing else with this article, do these four things in the next 90 days. Any one of them would have softened the impact of at least one of the six failures above.

One: audit every POTS line. Pull bills for every site. Highlight every line under $50 a month. For each one, identify what it actually powers — fire panel, elevator, security panel, fax, alarm — and put a replacement on the calendar with a code-compliant solution and a real install date. (A quick bill-filter sketch for this step follows the plan.)

Two: verify true path diversity. Ask your primary carrier, in writing, for the physical path of your primary and backup circuits. If the answer is "we'll get back to you," start shopping for a true secondary on different infrastructure.

Three: map your cloud dependencies. List every SaaS tool the business depends on. For each, write down the cloud region it runs in. For each "must keep working" application, decide whether multi-region failover or a local fallback is the right answer, and budget the change.

Four: get the vendor chain on one page. Who is the carrier? Who is the hosting provider? Who is the hardware supplier? Who do you call at 2 a.m.? If any of those answers is "I'm not sure," fix that this quarter.
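
The bill-filter sketch promised in step one, assuming you can export phone bills to CSV. The $50 threshold comes from the rule of thumb above; the file and column names are assumptions.

import csv

# Assumed export format: site,number,monthly_cost,notes
with open("phone_bill_lines.csv", newline="") as f:
    for row in csv.DictReader(f):
        if float(row["monthly_cost"]) < 50:
            print(f"{row['site']} {row['number']}: ${row['monthly_cost']}/mo -- "
                  f"what does this power? (fire panel, elevator, alarm, fax)")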

Buckeye Telecom is based at 1 Miranova Pl, Suite 1610, Columbus, OH 43215. We work with multi-location businesses across all 88 Ohio counties — manufacturers, healthcare networks, legal practices, financial services firms, and retailers — auditing telecom invoices, designing diverse networks, replacing POTS lines, and answering the phone when it rings.

Want a second set of eyes on your network?

A free telecom and IT cost audit will surface where you're concentrated, where you're overpaying, and where your contracts have stopped protecting you. No obligation. We do not resell a single carrier's product line — the advice you get is not shaped by anyone's quota.

Get a Free Cost & Risk Audit

Bottom line

2026 is not a year of bigger outages — it's a year of more connected outages. The same handful of carriers, cloud regions, conduit paths, and fiber suppliers underpin almost everything an Ohio business depends on, and when one of them stumbles, the failure radius is national. None of the fixes are technically difficult. They require the willingness to look at the actual picture, spend a little money on real diversity, and pick a partner who is incentivized to tell you the truth.

If the last six weeks were a stress test, the next six are an opportunity. Map what you depend on. Verify the things you were told. Replace what's about to disappear. And get the vendor chain short enough that when the phone rings, you know who answers.

Call +1 (614) 224-2003. We'll walk your sites and tell you, in plain English, what you actually have — and what you don't.
