How difficult is it to disrupt a service nowadays?
Radar

Today we often talk about SLAs and redundancy, and about the increasing role of clouds in the overall Internet infrastructure. Some say clouds will carry a crucial share of traffic in the near future. However, there is another kind of huge ISP - the Tier-1 operators, the biggest transit providers, which own transnational cables and indeed form part of the historical Internet backbone. They often act as the last line of defense in filtering out bad routes, because they have hundreds of customers, and almost all of those customers trust whatever they receive from their provider ISPs. That is the main reason why modern Internet drafts rely on Tier-1s as flag carriers and hope that they will deploy each new security mechanism ahead of all the others.

But is this always how things play out in reality?

On June 24, 2019, at 10:35 UTC, the BGP session between AS396531 (ATIMETALS) and a not-so-small ISP, AS701 aka Verizon, was reestablished after several minutes of downtime. A standard story - except that ATIMETALS had somehow lost all of its filters, and all the routes from its other provider, AS33154, were leaked. So they created a route leak - also a standard story. However, two factors made this story terrible. Firstly, among those routes there were more specifics of real prefixes, and Job Snijders was the first to figure this out. They came from a product from Noction, namely 'BGP Optimizer'; a good explanation of its role in this incident was already given in this article, so we will not dwell on it. Secondly, Verizon is a Tier-1 operator, so all of its customers learned the bad routes and, because of the more specifics, rerouted all their traffic for the affected networks via AS396531. Who was affected? Mainly Cloudflare, who started to solve the problem immediately, but also Facebook, Amazon, and others. Of course, AS396531 was not able to cope with all that traffic and began to drop the traversing packets, leading to a denial of service.

So: 10:35 - the start of the incident; 11:02 - Cloudflare announced the problem; 11:36 - it was identified as a route leak; 12:38 - the route leak was fixed. Two hours of DoS, and not only for Cloudflare but for others too. Even though our BGP collector does not have a BGP session with Verizon itself, we can still confirm that we saw more than 6.7k prefixes from 850+ different operators in this route leak. 500+ of these prefixes spread widely and were seen by many points of presence all around the globe. Before blaming anyone, let's look at what could have been done to prevent this unfortunate situation.

So, let us begin with Verizon itself. Of course, it is no happy task to create a new filter policy for each new customer. However, this customer was not new. We know that Verizon did not have any prefix filter on this session, even though AS396531 is a stub network for which such a filter is trivial to build. Moreover, Verizon had not set a prefix limit for that link. Even a simple check for the presence of other Tier-1 operators in the AS_PATH in the customer direction would have caught the leak (this type of check can be set by default and does not need to be changed afterwards).
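
To make these checks concrete, here is a minimal sketch in Python (the Tier-1 ASN set, the prefix-limit value, and the function interface are illustrative assumptions, not any vendor's actual configuration) of rejecting customer routes that carry another Tier-1 in the AS_PATH and of enforcing a per-session prefix limit:

```python
# Sketch of two sanity checks for routes received from a customer session.
# The Tier-1 ASN set, the prefix limit, and the function interface are
# illustrative assumptions, not a real router API.

TIER1_ASNS = {174, 701, 1299, 2914, 3257, 3320, 3356, 6453, 6762, 7018}

MAX_PREFIXES_FROM_CUSTOMER = 100  # example limit suitable for a stub customer

def accept_from_customer(as_path, prefixes_already_accepted):
    """Return True if a route learned from a customer should be accepted."""
    # A customer should never provide a path towards another Tier-1:
    # a Tier-1 ASN in a customer's AS_PATH is a strong sign of a route leak.
    if any(asn in TIER1_ASNS for asn in as_path):
        return False
    # A prefix limit protects against a customer suddenly announcing far
    # more routes than expected (for example, a leaked table).
    if prefixes_already_accepted + 1 > MAX_PREFIXES_FROM_CUSTOMER:
        return False
    return True

# A path shaped like the one in this incident, carrying Tier-1 AS3356, is rejected:
print(accept_from_customer([396531, 33154, 3356, 13335], prefixes_already_accepted=10))  # False
```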

There have been several discussions of this incident in the press. Besides possible actions for Verizon, they also described the benefits of RPKI and continued to advocate a strict maxLength in ROA records. But what about maxLength? With a strict maxLength, all of the more specifics become Invalid under ROA-based Prefix Origin Validation. We also know that there is a "Drop Invalid" policy. And there is a draft saying that maxLength should be equal to the prefix length.
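
A minimal sketch of how Prefix Origin Validation treats maxLength, assuming made-up ROAs and prefixes rather than the real ones from the incident:

```python
# Simplified RPKI Prefix Origin Validation logic, sketched.
# The ROAs and announcements below are made-up examples.
from ipaddress import ip_network

# Each ROA: (authorized prefix, maxLength, authorized origin AS)
ROAS = [
    (ip_network("203.0.113.0/24"), 24, 64500),  # strict: maxLength == prefix length
]

def validate(prefix, origin_as):
    prefix = ip_network(prefix)
    covering = [(p, maxlen, asn) for (p, maxlen, asn) in ROAS
                if prefix.subnet_of(p)]
    if not covering:
        return "NotFound"
    for p, maxlen, asn in covering:
        if origin_as == asn and prefix.prefixlen <= maxlen:
            return "Valid"
    return "Invalid"

print(validate("203.0.113.0/24", 64500))   # Valid: matches the ROA exactly
print(validate("203.0.113.0/25", 64500))   # Invalid: /25 exceeds maxLength 24
print(validate("203.0.113.0/25", 396531))  # Invalid: wrong origin AND too specific
```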

Cloudflare follows this best practice. However, there is a small problem: Verizon did not have a "Drop Invalid" policy - maybe it did not have RPKI-based validation at all. So all the more specifics were happily announced further, even though Prefix Origin Validation considered them Invalid, and they kept redirecting all of the traffic. The main problem was that those more specifics were still Invalid, so Cloudflare could not simply announce them itself, because its upstreams could easily drop such routes. So the maximum value of maxLength needs another round of discussion.
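
For completeness, a "Drop Invalid" policy is conceptually this simple (a sketch; the validation state is assumed to come from an RPKI validator such as the one sketched above):

```python
# Sketch of a "Drop Invalid" import policy. The validation state would come
# from an RPKI validator; here it is simply passed in.
def import_policy(prefix, origin_as, validation_state):
    """Accept a route unless Prefix Origin Validation marked it Invalid."""
    if validation_state == "Invalid":
        return False              # drop the route before it enters the table
    return True                   # "Valid" and "NotFound" routes are accepted

# A leaked more specific like the /25 above would have been dropped at the border:
print(import_policy("203.0.113.0/25", 396531, "Invalid"))  # False
```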

The route leak could also have been mitigated once it was discovered. A simple AS_PATH manipulation of the form ASx AS396531 ASx (where ASx is the origin AS number) during route creation would have helped these routes avoid the leaker on their path, relying on the BGP loop detection mechanism, while the affected ISPs were still waiting for an answer from the third party.
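
A sketch of why this poisoning trick works: a BGP speaker rejects any route whose AS_PATH already contains its own AS number, so inserting the leaker's ASN keeps the route out of the leaker's table (illustrative code, not a real BGP implementation; the origin ASN is hypothetical):

```python
# Illustrative sketch of BGP AS_PATH loop detection and AS_PATH poisoning.

def passes_loop_detection(local_asn, as_path):
    """A BGP speaker rejects a route whose AS_PATH contains its own ASN."""
    return local_asn not in as_path

ORIGIN_ASN = 64500      # hypothetical victim origin ("ASx" in the text)
LEAKER_ASN = 396531

# Normal announcement: the leaker accepts it and (wrongly) re-announces it.
normal_path = [ORIGIN_ASN]
# Poisoned announcement: the origin inserts the leaker's ASN into its own path.
poisoned_path = [ORIGIN_ASN, LEAKER_ASN, ORIGIN_ASN]

print(passes_loop_detection(LEAKER_ASN, normal_path))    # True  - leak continues
print(passes_loop_detection(LEAKER_ASN, poisoned_path))  # False - leaker drops it
```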

Again, in reality the most common approach was taken - writing a complaint. And that resulted in another hour of delay.

So, what do we have? We are sometimes asked how difficult it is to use BGP for organizing network attacks. Here is a possible example. You are a beginner attacker. You connect to a Tier-1 or to someone huge without any filters, choose any service you like as a target, take the prefixes of that target service, and start announcing more specifics of them, dropping all of the data packets redirected to you. You thereby create a blackhole for this service for all of the transit traffic of that ISP. The target service would lose real money, because it is a DoS for a considerable share of its customers: it would take an hour to find out the reason, and only after another hour would it be resolved - assuming the incident was unintentional and the third party is willing to solve it.
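
The reason a more specific announcement captures the traffic is plain longest-prefix matching in the forwarding decision, as the sketch below illustrates (prefixes and labels are made up):

```python
# Why a more specific announcement wins: longest-prefix match, sketched.
from ipaddress import ip_address, ip_network

# Forwarding table: (prefix, where traffic goes). Both entries cover the same
# destination, but the leaked /25 is more specific than the legitimate /24.
routes = [
    (ip_network("203.0.113.0/24"), "legitimate path"),
    (ip_network("203.0.113.0/25"), "leaked path via AS396531"),
]

def lookup(dst):
    candidates = [(p, nh) for p, nh in routes if ip_address(dst) in p]
    if not candidates:
        return "no route"
    # The longest (most specific) matching prefix always wins.
    return max(candidates, key=lambda r: r[0].prefixlen)[1]

print(lookup("203.0.113.10"))  # "leaked path via AS396531"
```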

Don't get us wrong - the guys at Cloudflare did an awesome job solving this particular problem so fast. However, continuous monitoring and automated mitigation can solve such a problem much faster, highlighting the issue and applying a fix within several minutes and thus drastically reducing the delay.

Our main hope is that ISPs finally understand their responsibility towards others and implement at least the most straightforward filters on their borders, because the current situation is depressing.