By Ivan Golushko, Product Development and Innovation Manager, DDoS-Guard.
Protection of personal data, Internet privacy, fraud prevention, website and server uptime – these aspects of information security are no longer the concern of a narrow circle of experts. High-profile scandals involving the biggest players on the market have made many people realize how vulnerable online business processes remain, and how that vulnerability affects their main participants – clients and enterprises.
Among the most critical threats to business, experts still rank DDoS attacks near the top. As digital competence grows worldwide, fewer people feel confident that their website will never suffer a DDoS attack. The more powerful and extensive these attacks become, the harder it gets for website owners to protect themselves. Supply follows demand, and the DDoS protection market is flourishing as a result. So how do we stay focused and find the solution that best fits our needs? Let’s set vendors’ pricing policies aside and look at the technical aspects of DDoS mitigation services.
When looking for a suitable solution, keep in mind that the options differ mainly in the degree of integration between the protection provider’s network and the customer’s infrastructure.
Changing DNS records: in for a penny, in for a pound?
First of all, you will probably come across so-called ‘remote protection’. It is based on pointing DNS A records (and AAAA records, with the introduction of IPv6) at a dedicated IP address of the service provider. Such a solution, built on reverse proxy technology, has become widespread thanks to several indisputable advantages: ease of activation, no changes required to the infrastructure or code on the customer’s side, and concealment of the target server’s IP (the latter can be either an advantage or a disadvantage, depending on the case).
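As a quick illustration, here is a minimal Python sketch of checking whether a hostname’s A/AAAA record now points into the provider’s scrubbing network rather than at your origin server. The prefixes below are documentation ranges standing in for a real provider’s published ranges, which vary by vendor.

```python
import ipaddress

# Hypothetical scrubbing prefixes -- a real provider publishes its own ranges.
PROVIDER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("2001:db8::/32"),
]

def is_behind_proxy(resolved_ip: str) -> bool:
    """Return True if the resolved A/AAAA record falls inside the provider's network."""
    addr = ipaddress.ip_address(resolved_ip)
    return any(addr in net for net in PROVIDER_PREFIXES)

print(is_behind_proxy("203.0.113.7"))   # record rewritten to the provider
print(is_behind_proxy("198.51.100.5"))  # origin IP still exposed
```

In practice you would feed this with the output of a live DNS lookup; the membership test itself is the same.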
However, solutions based on this technology also have a number of significant drawbacks. Here’s what should be kept in mind:
- If attackers discover your real IP address, your web server and infrastructure remain open to direct DDoS attacks;
- DNS records for all protected IP addresses must be changed. If there are many of them, or the pool changes constantly, activating protection can become rather laborious;
- Protection that depends entirely on a single vendor creates a global point of failure;
- There are noticeable delays when enabling or disabling protection, since updates cannot propagate instantly across all DNS servers;
- You cannot protect sites and applications that use TCP ports other than the standard 80 and 443.
Overall, the typical DNS-change scheme is perfect for websites and applications that have no specific functionality (e.g., non-standard ports) and are not high-load systems with strict uptime requirements. It is not suitable, however, for enterprises that run several major projects with large pools of hosted addresses and business processes that depend on constant availability.
Hands off DNS: In Search of the Ultimate Solution
All-in-one DDoS mitigation appliance
You can organize protection on your own. First, you need to rent or buy IP addresses and register your own AS. You must also spend money on traffic scrubbing appliances, licences for them, and the experts to maintain them. Moreover, you will have to set up broadband Internet access links and pay for the huge volumes of ‘trash’ traffic that arrive during DDoS attacks. Unless you are a large telecom operator with more than 500 Gbps of spare bandwidth, buying an all-in-one DDoS mitigation appliance is not for you.
- Pros: keeping your own IPs, highest customizability, protection at all OSI layers
- Cons: extremely high price, complexity of building and management
Protected VDS or Dedicated Server
These solutions are very similar; both require a ‘move’ from the existing platform to new hardware within the vendor’s secure perimeter. A VDS (virtual dedicated server) essentially emulates many individual servers on a shared computing cluster. You can easily buy the resources you need (you are simply allocated more storage, RAM, or CPU cores). On the one hand, a VDS provides greater fault tolerance (failure of some parts of the cluster will not make the resource completely unavailable); on the other hand, many operators oversell capacity, counting on the fact that most customers do not use all the resources they purchase. If many resources on a single cluster run at full power simultaneously, a website’s availability and performance may suffer significantly.
By renting a dedicated server, you are independent of your ‘neighbors’ and have greater customization capabilities, though such a solution is generally more expensive. Both options usually assume that the customer or their team is qualified to configure and administer the server. You will not be able to keep your IP addresses or combine protection services from multiple vendors. This option suits growing projects that are ready to relocate.
- Pros: customizability, protection at all OSI layers
- Cons: IP address change, relocation of your project, single provider
Protected Web Hosting
This solution offers far more limited customization than those discussed above and is, in fact, a stripped-down analogue of a protected physical or virtual dedicated server. Its advantages are the minimal qualification required of the customer and minimal responsibility for keeping the resource running. It suits small projects with standard functionality, such as blogs, thematic sites, and small online stores.
- Pros: low price, simplicity
- Cons: IP address change, virtually no customizability, single provider
Merging The Benefits
This technology does not reinvent the wheel; it successfully combines the techniques discussed above. The customer delegates their PI (provider-independent) addresses to the provider or announces their network through the provider. Traffic destined for the customer enters the protection provider’s network, where it undergoes primary analysis and cleansing of obvious ‘trash’ at the link, network, and transport layers of the OSI model. The traffic is then processed according to the application-layer protocols involved. For example, TCP packets carrying HTTP(S) headers (web traffic) sent to ports 80 and 443 can be intercepted and directed to processing appliances – servers that perform reverse proxying.
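The dispatch step just described can be sketched as a toy routing decision. The port numbers come from the text; everything else is illustrative.

```python
def route(dst_port: int, proto: str = "tcp") -> str:
    """Toy model of the scheme above: where does pre-scrubbed traffic go next?"""
    if proto == "tcp" and dst_port in (80, 443):
        return "reverse-proxy"   # web traffic gets full application-layer inspection
    return "direct"              # other traffic passes on after L2-L4 scrubbing

print(route(443))         # HTTPS goes to the reverse proxy
print(route(8080))        # non-standard TCP port passes directly
print(route(53, "udp"))   # non-TCP traffic passes directly
```

A real deployment makes this decision in the data plane, of course; the point is only that the split between proxied web traffic and everything else happens after the lower-layer cleansing.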
What are the benefits of such traffic scrubbing technology for the website owner?
- Quick enabling/disabling protection via API. There is no need to wait for DNS records to be updated.
- Convenient customization. The web resource owner chooses which addresses and subnets of their network can interact with the wider Internet, and under which rules and ports.
- Protection at all layers of the OSI model.
- Keeping IP addresses, equipment, infrastructure (no need to move anywhere).
- Establishing Internet access from multiple independent operators and balancing traffic via BGP.
Thus, this solution is suitable for protecting both individual websites and entire autonomous systems. The only drawback is that the service is currently available from a rather limited number of providers.
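To make the API point above concrete, here is a purely hypothetical sketch of building a request that toggles scrubbing for a prefix. The field names and the notion of a toggle endpoint are invented for illustration; every provider defines its own API schema.

```python
# Hypothetical API schema -- invented for illustration only.
def protection_toggle_payload(prefix: str, enabled: bool) -> dict:
    """Build the body of a request that would enable/disable scrubbing for a prefix."""
    return {"prefix": prefix, "action": "enable" if enabled else "disable"}

payload = protection_toggle_payload("203.0.113.0/24", True)
print(payload)  # ready to POST to the provider's (hypothetical) endpoint
```

The practical benefit is that such a call takes effect as soon as the provider’s routing reacts, with no DNS TTLs to wait out.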
No Way for Unfiltered Traffic
Whatever solution the website owner chooses, one pressing question remains: how will the traffic be processed? Let’s look under the hood and see how traffic is handled at the application layer, using three criteria that are especially important for businesses.
- Scrubbing accuracy – No one wants legitimate visitors, about to buy a product or service, to lose access to the website along with the attacker. Frightened off by a browser connection error, your visitors will immediately leave for a competitor.
- Reaction time to the attack – The system must start mitigating the attack from the first seconds, so it must be sensitive to malicious traffic from the moment it appears.
- Influence on legitimate traffic – Protection should not interfere with the interaction between respectable customers and your website, and should preserve data privacy.
Now let’s compare the three main traffic scrubbing technologies by scoring each category from 1 to 3 (3 being the best).
Reverse proxy technology is easy to connect and configure. A scrubbing proxy server re-establishes the connection between a visitor and the customer’s website, analyzing the ‘correctness’ of the data packets. To scrub encrypted sessions (SSL/TLS), the website owner has to hand the site’s TLS certificate and private key over to the service provider – not a secure way to handle confidential data.
Scrubbing accuracy: 3
Attack response time: 3
Influence on legitimate traffic: 3
DPI (Deep Packet Inspection) and its equivalents. This technology detects and blocks attacks by analyzing the content of data packets both directly and indirectly (e.g., by packet length or symbol frequency), using attack-signature detection and statistical analysis.
Scrubbing accuracy: 1
Attack response time: 2
Influence on legitimate traffic: 3
Scrubbing web traffic based on request logs, without decrypting the HTTPS connection. In general, scrubbing here is the process of receiving the IP packets of web traffic, processing them, and redirecting them to the target web server. The decision to block a packet or IP address is made on the basis of the request logs received from the web server, so packets are analyzed in the context of the traffic over a given connection.
Scrubbing accuracy: 2
Attack response time: 1
Influence on legitimate traffic: 3
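The log-driven blocking decision can be sketched like this. The log line format and the rate threshold are assumptions made for illustration; a real system would work on structured logs and adaptive limits.

```python
from collections import defaultdict

def ips_to_block(log_lines, limit=100):
    """Pick source IPs whose request count in the log window exceeds `limit`.

    Each line is assumed to look like: '<ip> <method> <path> <status>'.
    """
    counts = defaultdict(int)
    for line in log_lines:
        counts[line.split()[0]] += 1
    return {ip for ip, n in counts.items() if n > limit}

# 150 requests from one source vs. 3 from a regular visitor in the same window
window = ["198.51.100.9 GET / 200"] * 150 + ["203.0.113.5 GET /shop 200"] * 3
print(ips_to_block(window))  # only the flooding IP is selected
```

Because the decision feeds back from the web server’s own logs, it lags the first malicious packets slightly – which is consistent with the lower attack-response score above.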
Thus, the optimal protection for a site or application depends on what the web resource offers its visitors and what data it collects from them.
Caveats to Keep In Mind
As a bonus, there are several general recommendations on how to organize protection.
- Avoid virtual tunnels: heavy packet fragmentation makes service performance dependent on the policies of the various transit operators.
- Use services from a provider that owns a network with a high degree of connectivity — traffic must never circle the globe on its way from a customer to a web server located in a neighboring city.
- Make sure you back up your links and equipment.
- Good DDoS protection should allow any impact on traffic to be eliminated immediately.
DDoS-GUARD delivers a comprehensive DDoS protection service for networks and websites. The company has a geo-distributed DDoS mitigation network available with nodes in the United States, Hong Kong, Russia, Kazakhstan, and the Netherlands.
Want to share your thoughts and vision through an Expert Blog on HostingJournalist.com? Get in touch with us: [email protected].