Guide to web crawling without getting blocked

Web scraping is a task that must be carried out responsibly so that it doesn't harm the websites being scraped. Web crawlers can retrieve data far faster and in greater depth than people, so careless scraping can hurt a site's performance. While most websites have no anti-scraping measures in place, some employ defenses that restrict web scraping because they oppose free access to their data.

If a crawler makes several requests per second and downloads huge files, an underpowered server will struggle to keep up with the demands of multiple crawlers. Some site managers dislike spiders and try to limit their access, because web crawlers, scrapers, or spiders (the terms are used interchangeably) don't bring in human visitors and can appear to drag down the site's performance.

In this article, we'll go through the best techniques to avoid being blocked by anti-scraping or bot-detection software when crawling web pages.

What is a Web Crawler bot?

Crawling is the technical term for automatically accessing a website and collecting data via a software program, which is why these programs are called "web crawlers." A web crawler, also known as a spider or a search engine bot, downloads and indexes content from all across the web. The goal of such a bot is to learn what (nearly) every web page on the Internet is about, so that the information can be retrieved when it's needed.

Search engines almost always operate these bots. By applying a search algorithm to the data collected by web crawlers, search engines can offer relevant links in response to user queries, generating the list of web pages that appears after a user types a search into Google or Bing (or another search engine).

A web crawler bot is like someone who goes through all the books in a disorganized library and builds a card catalog so that anyone visiting the library can quickly find the information they need. To categorize and sort the library's books by topic, the organizer reads the title, summary, and some of the internal text of each book to figure out what it's about. Unlike a library, however, the Internet has no physical stacks of books, which makes it hard to tell whether all relevant content has been cataloged correctly or whether large amounts of it have been overlooked. To uncover all relevant material on the Internet, a web crawler bot starts with a set of known web pages, follows the hyperlinks from those pages to other pages, follows the hyperlinks from those pages to still more pages, and so on.

How much of the publicly accessible Internet is crawled by search engine bots is uncertain. According to some sources, just 40-70 percent of the Internet – or billions of web pages – is indexed for search.

How do Web Crawlers Work?

The Internet evolves and expands at a rapid pace. Because it's impossible to know how many web pages exist in total, web crawler bots start from a seed, a list of known URLs. They crawl the pages at those URLs first, and as they do so they find hyperlinks to other URLs, which they add to the list of pages to crawl next.
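To make the seed-and-frontier idea concrete, here is a minimal sketch of such a crawl loop in Python. It is only an illustration: it assumes the requests and BeautifulSoup libraries are installed, and the seed URL is a placeholder rather than a real target.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed_urls, max_pages=50):
    """Breadth-first crawl: start from seed URLs and follow the links they contain."""
    frontier = deque(seed_urls)   # pages waiting to be crawled
    seen = set(seed_urls)         # avoid visiting the same URL twice
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])  # resolve relative links
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen

# Example: crawl outward from a single (placeholder) seed page
pages = crawl(["https://example.com"])
print(f"Discovered {len(pages)} URLs")
```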

This process could carry on almost indefinitely, given the enormous number of web pages that could be indexed for search. Instead, a web crawler follows certain policies that make it more selective about which pages to crawl, in what order to crawl them, and how often to re-crawl them to check for content updates. Most web crawlers aren't designed to crawl the entire publicly available Internet; they prioritize pages based on how many other pages link to them, how many visitors they receive, and other factors that signal a page is likely to contain important information.

The idea is that a webpage that is cited by a lot of other webpages and receives a lot of traffic is likely to contain high-quality, authoritative information, so having it indexed by a search engine is especially important – just as a library would make sure to keep plenty of copies of a book that gets checked out by a lot of people.

Revisiting webpages: The content on the Internet is constantly being updated, removed, or relocated. Web crawlers will need to review pages regularly to ensure that the most recent information is indexed.

Robots.txt requirements: Web crawlers also decide which pages to crawl based on the robots.txt protocol (also known as the robots exclusion protocol). Before crawling a page, they check the robots.txt file hosted by that page's web server. A robots.txt file is a text file that sets out the rules for any bots accessing the hosted website or application; these rules specify which pages the bots may crawl and which links they may follow. Take a look at the robots.txt file on Cloudflare.com as an example.

All of these factors are weighted differently within the proprietary algorithms that each search engine builds into its spider bots. Web crawlers from different search engines behave slightly differently, but the end goal is the same: to download and index content from web pages.

What are some examples of web crawlers?

Every popular search engine has web crawlers, and the larger ones run several crawlers with different focuses.

Google, for example, has a major crawler called Googlebot that covers both mobile and desktop crawling. However, Google has several other bots, including Googlebot Images, Googlebot Videos, Googlebot News, and Ads Bot.

Here are a few more web crawlers you might encounter:

  • DuckDuckBot for DuckDuckGo
  • Yandex Bot for Yandex
  • Baiduspider for Baidu
  • Yahoo! Slurp for Yahoo!
  • Bingbot for Bing

Why Do Web Crawlers Matter for SEO?

Web crawlers must be able to reach and read your pages if you want your site to rank. Crawling is how search engines first discover your pages, and repeated crawling is how they see the updates you make and stay current on the freshness of your content. Because web crawler activity extends well beyond the start of your SEO campaign, you can think of it as a proactive technique that helps you appear in search results and improves the user experience.

Crawl budget management

Continuous web crawling ensures that your newly published content appears in search engine results pages (SERPs). However, Google and most other search engines won't crawl your site indefinitely.

Google has a crawl budget that tells its bots:

  • How often to crawl
  • Which pages to scan
  • How much server stress is acceptable

That crawl budget is a good thing; without one, crawlers and visitors together could overload your site.

If you're having problems with Googlebot, you can adjust its crawling in Google Search Console. Two levers shape web crawling here: the crawl rate limit and crawl demand. The crawl rate limit monitors fetching on your site so that page load speed doesn't suffer and errors don't spike. Crawl demand reflects how much interest Google and its users have in your website: if your site doesn't yet have a large following, Googlebot won't crawl it as often as popular sites.

Web crawler stumbling blocks

There are a couple of ways to deliberately keep web crawlers away from certain pages. These crawler blocks can stop sensitive, redundant, or irrelevant pages from appearing in the SERPs. The first is the noindex meta tag, which prevents search engines from indexing and ranking a page; it's usually a good idea to noindex admin pages, thank-you pages, and internal search results. The robots.txt file is another crawler block. It isn't as decisive, because crawlers can choose to ignore your robots.txt file, but it's useful for managing your crawl budget.

What is the Difference Between Web Crawling and Web Scraping?

A web crawler, often known as a "spider," is a standalone bot that crawls the Internet by following links on web pages in order to index and search for content. In general, the term "crawler" describes a program's ability to traverse websites on its own, possibly without a clear end goal in mind, continually exploring what a site or network has to offer. Search engines like Google and Bing employ web crawlers to fetch the content at a URL, check that page for further links, fetch those URLs, and so on.

Web scraping, on the other hand, is the practice of retrieving specific data from a website. As opposed to web crawling, a web scraper looks for specific information on certain websites or pages.

However, before you can do web scraping, you'll need to do some sort of web crawling to find the information you're looking for. Data crawling entails some scraping, such as saving all keywords, images, and URLs from a web page. Web crawling just copies what's already there, whereas web scraping extracts specific data for analysis or generates something new.

Web crawling is what Google, Yahoo, Bing, and other search engines perform while looking for information. Web scraping is a technique for extracting data from specific websites, such as stock market data, business leads, and supplier product scraping.

13 Tips on How to Crawl A Website Without Getting Blocked

Robots.txt

First and foremost, you must understand what the robots.txt file is and how it works. This file contains the site's standard crawling rules: in other words, it tells crawlers which pages or files they may and may not request. Its main purpose is to prevent the website from being overwhelmed with requests. Many websites, for instance, explicitly permit Google to crawl their content. The file can usually be viewed at http://example.com/robots.txt, and if it contains "User-agent: *" together with "Disallow: /", the site does not want its pages scraped at all.

Anti-scraping mechanisms work on a simple question: is this visitor a bot or a human? To answer it, they look at signals such as:

  • Scraping pages faster than any person could
  • Following the exact same pattern on every page, for example collecting the same photos or links from every page of the target domain
  • Scraping from the same IP address for a long stretch of time
  • Sending requests with no User-Agent header, for example from a bare HTTP client that sets no headers

If you keep these factors in mind while scraping, I am confident you will be able to scrape just about any website on the Internet.
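If you want to honour robots.txt rules programmatically, Python's standard library includes a parser for it. The sketch below is only an illustration; the site, bot name, and paths are placeholders.

```python
from urllib.robotparser import RobotFileParser

# Read the target site's robots.txt and honour its rules before scraping.
# example.com and the paths below are placeholders for your real target.
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

user_agent = "MyCrawlerBot"
for path in ["https://example.com/", "https://example.com/private/page"]:
    if parser.can_fetch(user_agent, path):
        print(f"Allowed to crawl: {path}")
    else:
        print(f"Disallowed by robots.txt, skipping: {path}")

# Some sites also publish a Crawl-delay; respect it if it is present.
delay = parser.crawl_delay(user_agent)
if delay:
    print(f"robots.txt asks for a {delay}s delay between requests")
```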

Rotation of IP addresses

Tracking IP addresses is the most straightforward way for anti-scraping systems to catch you. If you keep using the same IP address for every request, you will be banned, so each successful scrape request should go out from a different IP address. Before making HTTP requests, build a pool of at least 10 IP addresses, either your own or from a proxy rotation service such as Scrapingdog. Such a proxy API typically returns a JSON answer with the IP, port, and country of each proxy and can hand out addresses for a specific country code; a small Python sketch of rotating requests through a proxy pool is included below. For websites with advanced bot-detection mechanisms, you must use mobile or residential proxies, which services like Scrapingdog also provide. The total number of IP addresses in the world is finite, but with these services you gain access to millions of IPs that can be used to scrape millions of pages. This is the best investment you can make if you plan to scrape for a long time.
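As a rough illustration of the idea (not a specific provider's API), here is a minimal Python sketch that rotates requests through a proxy pool. The proxy addresses are placeholders; in practice they would come from your proxy provider.

```python
import random

import requests

# Placeholder proxy pool -- in practice these addresses would come from
# your proxy provider (residential, mobile, or datacenter IPs).
PROXY_POOL = [
    "http://203.0.113.10:8080",
    "http://203.0.113.11:8080",
    "http://203.0.113.12:8080",
]

def fetch_with_rotating_proxy(url):
    """Send each request through a randomly chosen proxy from the pool."""
    proxy = random.choice(PROXY_POOL)
    proxies = {"http": proxy, "https": proxy}
    return requests.get(url, proxies=proxies, timeout=10)

response = fetch_with_rotating_proxy("https://example.com")
print(response.status_code)
```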

User-Agent

The User-Agent request header is a character string that lets servers and network peers identify the requesting client's application, operating system, vendor, and/or version. Some websites block requests whose User-Agent does not belong to a major browser, and many will not serve their content at all if no User-Agent is set. You can find out what your user agent is by searching "what is my user agent" on Google, or by checking http://www.whatsmyuseragent.com/.

Anti-scraping systems treat User-Agents much the way they treat IP addresses: if you use the same user agent for every request, you will be banned in no time. The solution is straightforward: either maintain your own list of User-Agent strings or use a utility such as the fake-useragent library. I've tried both methods, and for efficiency's sake I recommend the library.
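Here is a minimal sketch of the list-based approach using requests. The User-Agent strings below are illustrative examples only; keep a larger, up-to-date list or use a library such as fake-useragent for fresher ones.

```python
import random

import requests

# A small, illustrative pool of desktop User-Agent strings.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

def fetch_with_random_user_agent(url):
    """Attach a different User-Agent header to each request."""
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    return requests.get(url, headers=headers, timeout=10)

response = fetch_with_random_user_agent("https://example.com")
print(response.request.headers["User-Agent"])
```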

Scrape slowly, with random intervals in between.

As you may be aware, humans and bots crawl web pages at very different speeds: bots can scrape websites at breakneck pace. Making rapid, unnecessary, or random requests benefits no one, and a website can go down under such a flood of traffic. To avoid this, make your bot sleep programmatically between scraping tasks. It will appear more human to the anti-scraping system, and it won't harm the website. Keep concurrent requests to a minimum, wait 10 to 20 seconds between requests, and use throttling techniques that automatically adjust the crawling speed based on the load on both the spider and the website you're crawling. After a few trial runs, settle on an optimal crawling pace, and revisit it regularly because the environment changes over time.
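A minimal sketch of the random-delay idea, assuming a placeholder list of URLs and the 10 to 20 second window suggested above:

```python
import random
import time

import requests

# Placeholder URLs standing in for the pages you actually want to scrape.
urls = [
    "https://example.com/page/1",
    "https://example.com/page/2",
    "https://example.com/page/3",
]

for url in urls:
    response = requests.get(url, timeout=10)
    print(url, response.status_code)
    # Pause for a random 10-20 seconds so the request pattern looks
    # less mechanical and keeps the load on the server low.
    time.sleep(random.uniform(10, 20))
```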

Vary your scraping pattern and detect website changes

Humans generally browse a website unpredictably and rarely repeat the exact same sequence of actions. Web scraping bots, on the other hand, are configured to crawl in the same fashion every time. As I mentioned earlier, some websites have excellent anti-scraping systems; they'll catch your bot and ban it indefinitely.

So, how can you keep your bot from being discovered? This can be accomplished by including some random page clicks, mouse motions, and random actions that make a spider appear human.

Another issue is that many websites change their layouts for all sorts of reasons, and when that happens your scraper will fail to deliver the data you expect. You'll need a monitoring system that detects changes in layout and alerts you, so you can update the scraper and keep it working properly.
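One simple way to approximate such a monitoring system is to fingerprint a page's tag structure and compare it between runs. The sketch below is only an illustration (placeholder URL, SHA-256 of the tag sequence), not a production-grade monitor.

```python
import hashlib

import requests
from bs4 import BeautifulSoup

def layout_fingerprint(url):
    """Hash the sequence of tag names so structural changes become detectable."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    structure = ",".join(tag.name for tag in soup.find_all(True))
    return hashlib.sha256(structure.encode()).hexdigest()

# On the first run, store the fingerprint; on later runs, compare against it.
url = "https://example.com/product-listing"   # placeholder target page
known_fingerprint = layout_fingerprint(url)   # persist this somewhere

current = layout_fingerprint(url)
if current != known_fingerprint:
    print("Layout changed -- review the scraper's selectors before running it.")
```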

Headers

When your browser sends a request to a website, it transmits a list of headers, and the website uses those headers to identify who is asking. You can use these same headers to make your scraper look more human: simply copy them from a real browser and paste them into your code's header object, and your requests will appear to come from a real browser. Combined with IP and User-Agent rotation, this makes your scraper very hard to detect. Any website, whether dynamic or static, can be scraped, and these strategies will get you past 99.99 percent of anti-scraping defenses.

Next, there's the "Referer" header. It's an HTTP request header that tells the website which site you're arriving from. It's often a good idea to make it look as though you're coming from Google, which you can do with the header "Referer": "https://www.google.com/".

If you're scraping websites in the UK or India, you can use https://www.google.co.uk or https://www.google.co.in instead. This makes your request appear more genuine and organic. Using a tool like https://www.similarweb.com, you can also look up the most common referrers to any site; often this will be a social media site such as YouTube or Facebook.
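Putting the header advice together, a browser-like header set with a Google Referer might look like the sketch below. The header values are illustrative; for the closest match, copy the real ones from your own browser's developer tools.

```python
import requests

# Illustrative browser-like headers, including a Google Referer so the
# visit looks like a click-through from a search results page.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Referer": "https://www.google.com/",
}

response = requests.get("https://example.com", headers=headers, timeout=10)
print(response.status_code)
```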

Headless browser

Websites display their content differently depending on the browser you use, and different browsers render certain elements differently. Take a Google search, for example: if the browser (identified by the user agent) has advanced capabilities, the website may serve "richer" content, that is, more dynamic and stylized material that relies heavily on JavaScript and CSS.

The issue for web scraping is that this content is produced by the JavaScript code rather than delivered in the raw HTML response from the server. To scrape these websites you may need to run your own headless browser (or have a service like Scrapingdog do it for you).

Selenium and Puppeteer, for example, provide APIs for controlling browsers and scraping dynamic websites. We must stress that making these browsers undetectable takes a lot of work, but this remains the most effective way to scrape a JavaScript-heavy page. You can also use browserless services that launch browser instances on their own servers instead of putting more strain on yours; some let you open up to 100 instances at once. Overall, it's a win-win for the scraping side.
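As a minimal illustration, here is a Selenium sketch that renders a JavaScript-heavy page in headless Chrome. It assumes Selenium 4 with Chrome and a matching driver installed; the target URL is a placeholder.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Run Chrome without a visible window so JavaScript-rendered content
# can be fetched on a server.
options = Options()
options.add_argument("--headless=new")
options.add_argument("--window-size=1920,1080")

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")          # placeholder target URL
    rendered_html = driver.page_source         # HTML after JS has executed
    print(len(rendered_html), "characters of rendered HTML")
finally:
    driver.quit()
```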

Captcha-solving services 

Many websites employ Google's reCAPTCHA, which requires you to pass a test; if you pass it within a specified amount of time, it assumes you're a real person rather than a bot. If you scrape a website frequently enough, it will eventually block you, and instead of the pages you want you'll start seeing CAPTCHA pages. Scrapingdog is one service that can help you get around these limits. That said, some CAPTCHA-solving services are slow and expensive, so you may want to weigh whether scraping sites that require constant CAPTCHA solving is still cost-effective.
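Short of paying for a solving service, a crude first step is simply to detect when a CAPTCHA page has been served and back off. The marker strings and wait time below are assumptions for illustration, not a reliable detector.

```python
import time

import requests

# Assumed markers that often appear on reCAPTCHA challenge pages.
CAPTCHA_MARKERS = ("g-recaptcha", "recaptcha/api.js")

def fetch_or_back_off(url, wait_seconds=600):
    """Fetch a page, but pause the crawl if a CAPTCHA challenge is returned."""
    response = requests.get(url, timeout=10)
    if any(marker in response.text for marker in CAPTCHA_MARKERS):
        print(f"CAPTCHA detected at {url}; backing off for {wait_seconds}s")
        time.sleep(wait_seconds)
        return None
    return response
```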

Honeypot Traps

Honeypots are hidden links set up to detect hacking or web scraping; in effect, they are decoys that imitate parts of a real system. Certain websites install honeypots that are invisible to normal users but discoverable by bots and web scrapers. If a link has the "display: none" or "visibility: hidden" CSS properties set, you should avoid following it; otherwise the site can identify you as a programmatic scraper, fingerprint the properties of your requests, and quickly ban you. Honeypots are one of the simplest ways for knowledgeable webmasters to spot crawlers, so make sure you check every page you scrape for them.
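A simple precaution is to skip links that are hidden via inline CSS when collecting URLs. The sketch below only catches inline styles; honeypots hidden through external stylesheets or CSS classes would need extra checks.

```python
import requests
from bs4 import BeautifulSoup

HIDDEN_MARKERS = ("display:none", "visibility:hidden")

def visible_links(url):
    """Collect links while skipping those hidden by inline CSS (likely honeypots)."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    links = []
    for anchor in soup.find_all("a", href=True):
        # Normalise the inline style and skip anchors that are hidden.
        style = anchor.get("style", "").replace(" ", "").lower()
        if any(marker in style for marker in HIDDEN_MARKERS):
            continue
        links.append(anchor["href"])
    return links

print(visible_links("https://example.com"))  # placeholder target URL
```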

Google Cache

Now and again, Google caches a copy of some web pages. Instead of sending a request to the website itself, you can send one to its cached copy: simply prepend "http://webcache.googleusercontent.com/search?q=cache:" to the URL. For example, you could scrape "http://webcache.googleusercontent.com/search?q=cache:https://www.scrapingdog.com/docs" to get the Scrapingdog documentation.

Keep in mind that this strategy only suits websites whose important information doesn't change frequently, and it isn't foolproof. Some sites, such as LinkedIn, explicitly tell Google not to cache their data, and Google only re-caches a page after a certain amount of time has passed, with the interval depending on the page's popularity, so data for less popular sites may be out of date. Still, for non-time-sensitive data on sites that are extremely difficult to scrape, the cache can be a more reliable target than a site that is actively trying to block your scrapers.
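Building the cache URL is just string concatenation. A minimal sketch, assuming Google actually holds a cached copy of the (placeholder) page:

```python
import requests

def fetch_google_cache(url):
    """Request Google's cached copy of a page instead of the live site."""
    cache_url = "http://webcache.googleusercontent.com/search?q=cache:" + url
    return requests.get(cache_url, timeout=10)

# Example: fetch the cached copy of a placeholder documentation page
response = fetch_google_cache("https://example.com/docs")
print(response.status_code)
```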

Stay away from JavaScript.

Data nested within JavaScript elements is difficult to obtain. Websites use many JavaScript features to display content in response to specific user actions; it's common practice, for example, to only load product photos in search results once the user has typed something into the search box.

JavaScript can also cause memory leaks, application instability and, in rare cases, outright crashes, and dynamic features are generally a pain to work with. If at all possible, avoid relying on JavaScript-rendered content.

Crawl during off-peak hours

Because they don't read the content, most crawlers go through pages much faster than the ordinary user. As a result, a single unrestricted web crawling program will have a greater impact on server demand than a typical internet user. Crawling during high-load times, on the other hand, may have a detrimental influence on the user experience due to service slowdowns.

The optimal time to crawl a website varies from case to case, but off-peak hours soon after midnight (specific to the service) are a decent place to start.

Do not scrape images.

Images are data-heavy objects that are often copyrighted, so scraping them not only consumes extra bandwidth and storage space but also raises the risk of infringing on someone else's rights.

Furthermore, because images are data-heavy, they are often hidden behind JavaScript elements (for example, lazy loading), which complicates the data collection process and slows down the web scraper. Retrieving images out of JS elements would require a more sophisticated scraping procedure, one that forces the website to load all of its content.

Conclusion

Hopefully, you've picked up a few tips on how to scrape popular websites without getting blocked or having your IP address blacklisted. While IP rotation and suitable HTTP request headers should suffice in most circumstances, you may sometimes need to resort to more advanced approaches, such as using a headless browser or scraping the Google cache, to get the information you require.

When scraping, it's always important to be considerate of webmasters and other site users; if you notice the site slowing down, reduce your request rate. This is especially critical when scraping smaller sites that may not have the web hosting capacity of larger companies.

Undoubtedly, a proxy service can help you work around the anti-scraping measures implemented by target sites. In the end, the decision is yours, and it will depend on your web scraping needs, budget, and technical requirements.

If you found this article useful, share it with your friends, partners, and employees. With the right precautions, you can crawl web pages effectively; follow these 13 tips to avoid being blocked while crawling a website.
