Js Web Scraping




Javascript has become one of the most popular and widely used languages due to the massive improvements it has seen and the introduction of the runtime known as NodeJS. Whether it's a web or mobile application, Javascript now has the right tools. This article will explain how the vibrant ecosystem of NodeJS allows you to efficiently scrape the web to meet most of your requirements.

Prerequisites

This post is primarily aimed at developers who have some level of experience with Javascript. However, if you have a firm understanding of Web Scraping but no experience with Javascript, this post could still prove useful. Below are the recommended prerequisites for this article:

  • ✅ Experience with Javascript
  • ✅ Experience using DevTools to extract selectors of elements
  • ✅ Some experience with ES6 Javascript (Optional)

⭐ Make sure to check out the resources at the end of this article to learn more!

Outcomes

After reading this post, you will be able to:

  • Have a functional understanding of NodeJS
  • Use multiple HTTP clients to assist in the web scraping process
  • Use multiple modern and battle-tested libraries to scrape the web

Understanding NodeJS: A brief introduction

Javascript is a simple and modern language that was initially created to add dynamic behavior to websites inside the browser. When a website is loaded, Javascript is run by the browser's Javascript Engine and converted into a bunch of code that the computer can understand.

For Javascript to interact with your browser, the browser provides a Runtime Environment (document, window, etc.).

This means that Javascript is not the kind of programming language that can interact with or manipulate the computer or its resources directly. Servers, on the other hand, are capable of directly interacting with the computer and its resources, which allows them to read files or store records in a database.

When introducing NodeJS, the crux of the idea was to make Javascript capable of running not only client-side but also server-side. To make this possible, Ryan Dahl, a skilled developer, took Google Chrome's v8 Javascript Engine and embedded it in a C++ program named Node.

So, NodeJS is a runtime environment that allows an application written in Javascript to be run on a server as well.

As opposed to how most languages, including C and C++, deal with concurrency, which is by employing multiple threads, NodeJS makes use of a single main thread and utilizes it to perform tasks in a non-blocking manner with the help of the Event Loop.

Putting up a simple web server is fairly simple as shown below:


If you have NodeJS installed, run the above code by typing node <YourFileNameHere>.js (without the < and >), open up your browser, and navigate to localhost:3000. You will see some text saying, “Hello World”. NodeJS is ideal for applications that are I/O intensive.

HTTP clients: querying the web

HTTP clients are tools capable of sending a request to a server and then receiving a response from it. Almost every tool that will be discussed in this article uses an HTTP client under the hood to query the server of the website that you will attempt to scrape.

Request

Request is one of the most widely used HTTP clients in the Javascript ecosystem. However, currently, the author of the Request library has officially declared that it is deprecated. This does not mean it is unusable. Quite a lot of libraries still use it, and it is every bit worth using.

It is fairly simple to make an HTTP request with Request:

You can find the Request library at GitHub, and installing it is as simple as running npm install request.

You can also find the deprecation notice and what this means here. If you don't feel safe about the fact that this library is deprecated, there are other options down below!

Axios

Axios is a promise-based HTTP client that runs both in the browser and NodeJS. If you use TypeScript, then Axios has you covered with built-in types.

Making an HTTP request with Axios is straight-forward. It ships with promise support by default as opposed to utilizing callbacks in Request:

If you fancy the async/await syntax sugar for the promise API, you can do that too. But since top level await is still at stage 3, we will have to make use of an async function instead:

All you have to do is call getForum! You can find the Axios library at GitHub, and installing Axios is as simple as npm install axios.

SuperAgent

Much like Axios, SuperAgent is another robust HTTP client that has support for promises and the async/await syntax sugar. It has a fairly straightforward API like Axios, but SuperAgent has more dependencies and is less popular.

Regardless, making an HTTP request with Superagent using promises, async/await, or callbacks looks like this:

You can find the SuperAgent library at GitHub and installing Superagent is as simple as npm install superagent.

For the upcoming few web scraping tools, Axios will be used as the HTTP client.

Note that there are other great HTTP clients for web scraping, like node-fetch!

Regular expressions: the hard way

The simplest way to get started with web scraping without any dependencies is to use a bunch of regular expressions on the HTML string that you fetch using an HTTP client. But there is a big tradeoff: regular expressions aren't very flexible, and both professionals and amateurs struggle to write them correctly.

For complex web scraping, the regular expression can also get out of hand. With that said, let's give it a go. Say there's a label with some username in it, and we want the username. This is similar to what you'd have to do if you relied on regular expressions:
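A sketch with a hypothetical label ("John Doe" is a made-up username for illustration):

```javascript
// A hypothetical chunk of fetched HTML containing the username we want
const html = '<label>Username: John Doe</label>';

// Capture everything between the <label> tags
const result = html.match(/<label>(.+)<\/label>/);
console.log(result[1]); // "Username: John Doe"

// Strip the unwanted "Username: " prefix to get just the username
const username = result[1].replace('Username: ', '');
console.log(username); // "John Doe"
```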

In Javascript, match() usually returns an array with everything that matches the regular expression. In the second element (at index 1), you will find the textContent or the innerHTML of the <label> tag, which is what we want. But this result contains some unwanted text (“Username: “), which has to be removed.

As you can see, for a very simple use case the number of steps and the amount of work to be done are unnecessarily high. This is why you should rely on something like an HTML parser, which we will talk about next.

Cheerio: Core jQuery for traversing the DOM

Cheerio is an efficient and light library that allows you to use the rich and powerful API of jQuery on the server-side. If you have used jQuery previously, you will feel right at home with Cheerio. It removes all of the DOM inconsistencies and browser-related features and exposes an efficient API to parse and manipulate the DOM.

As you can see, using Cheerio is similar to how you'd use jQuery.

However, it does not work the same way that a web browser works, which means it does not:

  • Render any of the parsed or manipulated DOM elements
  • Apply CSS or load any external resource
  • Execute Javascript


So, if the website or web application that you are trying to crawl is Javascript-heavy (for example a Single Page Application), Cheerio is not your best bet. You might have to rely on other options mentioned later in this article.

To demonstrate the power of Cheerio, we will attempt to crawl the r/programming forum on Reddit and get a list of post names.

First, install Cheerio and Axios by running the following command: npm install cheerio axios.

Then create a new file called crawler.js, and copy/paste the following code:

getPostTitles() is an asynchronous function that will crawl Reddit's old r/programming forum. First, the HTML of the website is obtained using a simple HTTP GET request with the axios HTTP client library. Then the HTML data is fed into Cheerio using the cheerio.load() function.

With the help of the browser Dev-Tools, you can obtain the selector that is capable of targeting all of the postcards. If you've used jQuery, the $('div > p.title > a') is probably familiar. This will get all the posts. Since you only want the title of each post individually, you have to loop through each post. This is done with the help of the each() function.

To extract the text out of each title, you must fetch the DOM element with the help of Cheerio (el refers to the current element). Then, calling text() on each element will give you the text.

Now, you can pop open a terminal and run node crawler.js. You'll then see an array of about 25 or 26 different post titles (it'll be quite long). While this is a simple use case, it demonstrates the simple nature of the API provided by Cheerio.

If your use case requires the execution of Javascript and loading of external sources, the following few options will be helpful.

JSDOM: the DOM for Node

JSDOM is a pure Javascript implementation of the Document Object Model to be used in NodeJS. As mentioned previously, the DOM is not available to Node, so JSDOM is the closest you can get. It more or less emulates the browser.

Once a DOM is created, it is possible to interact with the web application or website you want to crawl programmatically, so something like clicking on a button is possible. If you are familiar with manipulating the DOM, using JSDOM will be straightforward.

As you can see, JSDOM creates a DOM. Then you can manipulate this DOM with the same methods and properties you would use while manipulating the browser DOM.

To demonstrate how you could use JSDOM to interact with a website, we will get the first post of the Reddit r/programming forum and upvote it. Then, we will verify if the post has been upvoted.

Start by running the following command to install JSDOM and Axios: npm install jsdom axios

Then, make a file named crawler.js and copy/paste the following code:

upvoteFirstPost() is an asynchronous function that will obtain the first post in r/programming and upvote it. To do this, axios sends an HTTP GET request to fetch the HTML of the URL specified. Then a new DOM is created by feeding the HTML that was fetched earlier.

The JSDOM constructor accepts the HTML as the first argument and the options as the second. The two options that have been added perform the following functions:

  • runScripts: When set to “dangerously”, it allows the execution of event handlers and any Javascript code. If you do not have a clear idea of the credibility of the scripts that your application will run, it is best to set runScripts to “outside-only”, which attaches all of the Javascript specification provided globals to the window object, thus preventing any script from being executed on the inside.
  • resources: When set to “usable”, it allows the loading of any external script declared using the <script> tag (e.g., the jQuery library fetched from a CDN).

Once the DOM has been created, you can use the same DOM methods to get the first post's upvote button and then click on it. To verify if it has been clicked, you could check the classList for a class called upmod. If this class exists in classList, a message is returned.

Now, you can pop open a terminal and run node crawler.js. You'll then see a neat string that will tell you if the post has been upvoted. While this example use case is trivial, you could build on top of it to create something powerful (for example, a bot that goes around upvoting a particular user's posts).

If you dislike the lack of expressiveness in JSDOM and your crawling relies heavily on such manipulations or if there is a need to recreate many different DOMs, the following options will be a better match.

Puppeteer: the headless browser

Puppeteer, as the name implies, allows you to manipulate the browser programmatically, just like how a puppet would be manipulated by its puppeteer. It achieves this by providing a developer with a high-level API to control a headless version of Chrome by default and can be configured to run non-headless.


Puppeteer is particularly more useful than the aforementioned tools because it allows you to crawl the web as if a real person were interacting with a browser. This opens up a few possibilities that weren't there before:

  • You can get screenshots or generate PDFs of pages.
  • You can crawl a Single Page Application and generate pre-rendered content.
  • You can automate many different user interactions, like keyboard inputs, form submissions, navigation, etc.

It could also play a big role in many other tasks outside the scope of web crawling, like UI testing, performance optimization, etc.

Quite often, you will probably want to take screenshots of websites or get to know a competitor's product catalog. Puppeteer can be used to do this. To start, install Puppeteer by running the following command: npm install puppeteer

This will download a bundled version of Chromium which takes up about 180 to 300 MB, depending on your operating system. If you wish to disable this and point Puppeteer to an already downloaded version of Chromium, you must set a few environment variables.

This, however, is not recommended. If you truly wish to avoid downloading Chromium and Puppeteer for this tutorial, you can rely on the Puppeteer playground.

Let's attempt to get a screenshot and PDF of the r/programming forum in Reddit, create a new file called crawler.js, and copy/paste the following code:

getVisual() is an asynchronous function that will take a screenshot and PDF of the value assigned to the URL variable. To start, an instance of the browser is created by running puppeteer.launch(). Then, a new page is created. This page can be thought of like a tab in a regular browser. Then, by calling page.goto() with the URL as the parameter, the page that was created earlier is directed to the URL specified. Finally, the browser instance is destroyed along with the page.

Once that is done and the page has finished loading, a screenshot and PDF will be taken using page.screenshot() and page.pdf() respectively. You could also listen to the Javascript load event and then perform these actions, which is highly recommended at the production level.

When you run the code by typing node crawler.js into the terminal, after a few seconds you will notice that two files named screenshot.jpg and page.pdf have been created.

Also, we've written a complete guide on how to download a file with Puppeteer. You should check it out!

Nightmare: an alternative to Puppeteer

Nightmare is another high-level browser automation library like Puppeteer. It uses Electron, but is said to be roughly twice as fast as its predecessor PhantomJS, as well as more modern.

If you dislike Puppeteer or feel discouraged by the size of the Chromium bundle, Nightmare is an ideal choice. To start, install the Nightmare library by running the following command: npm install nightmare

Once Nightmare has been downloaded, we will use it to find ScrapingBee's website through a Google search. To do so, create a file called crawler.js and copy/paste the following code into it:

First, a Nightmare instance is created. Then, this instance is directed to the Google search engine by calling goto() once it has loaded. The search box is fetched using its selector. Then the value of the search box (an input tag) is changed to “ScrapingBee”.

After this is finished, the search form is submitted by clicking on the “Google Search” button. Then, Nightmare is told to wait until the first link has loaded. Once it has loaded, a DOM method will be used to fetch the value of the href attribute of the anchor tag that contains the link.

Finally, once everything is complete, the link is printed to the console. To run the code, type node crawler.js into your terminal.

Summary

That was a long read! But now you understand the different ways to use NodeJS and its rich ecosystem of libraries to crawl the web in any way you want. To wrap up, you learned:

  • NodeJS is a Javascript runtime that allows Javascript to be run server-side. It has a non-blocking nature thanks to the Event Loop.
  • HTTP clients such as Axios, SuperAgent, Node fetch and Request are used to send HTTP requests to a server and receive a response.
  • Cheerio abstracts the best out of jQuery for the sole purpose of running it server-side for web crawling but does not execute Javascript code.
  • JSDOM creates a DOM per the standard Javascript specification out of an HTML string and allows you to perform DOM manipulations on it.
  • Puppeteer and Nightmare are high-level browser automation libraries, that allow you to programmatically manipulate web applications as if a real person were interacting with them.

While this article tackles the main aspects of web scraping with NodeJS, it does not talk about web scraping without getting blocked.

If you want to learn how to avoid getting blocked, read our complete guide, and if you don't want to deal with this, you can always use our web scraping API.

Happy Scraping!

Resources

Would you like to read more? Check these links out:

  • NodeJS Website - Contains documentation and a lot of information on how to get started.
  • Puppeteer's Docs - Contains the API reference and guides for getting started.
  • Playwright - An alternative to Puppeteer, backed by Microsoft.
  • ScrapingBee's Blog - Contains a lot of information about Web Scraping goodies on multiple platforms.

If you’re wondering how to make a Chrome Extension, Chrome’s extension documentation is great for basic implementations. However, to use more advanced features requires a lot of Googling and Stack Overflow. Let’s make an intermediate Chrome extension that interacts with the page: it will find the first external link on the page and open it in a new tab.


manifest.json

The manifest.json file tells Chrome important information about your extension, like its name and which permissions it needs.

The most basic possible extension is a directory with a manifest.json file. Let’s create a directory and put the following JSON into manifest.json:
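A minimal sketch (the name is a placeholder):

```json
{
  "manifest_version": 2,
  "name": "My Cool Extension",
  "version": "0.1"
}
```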

That’s the most basic possible manifest.json, with all required fields filled in. The manifest_version should always be 2, because version 1 is unsupported as of January 2014. So far our extension does absolutely nothing, but let’s load it into Chrome anyway.

Load your extension into Chrome

To load your extension in Chrome, open up chrome://extensions/ in your browser and click “Developer mode” in the top right. Now click “Load unpacked extension…” and select the extension’s directory. You should now see your extension in the list.

When you change or add code in your extension, just come back to this page and reload the page. Chrome will reload your extension.

Content scripts

A content script is “a JavaScript file that runs in the context of web pages.” This means that a content script can interact with web pages that the browser visits. Not every JavaScript file in a Chrome extension can do this; we’ll see why later.

Let’s add a content script named content.js:
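As a first test, content.js can be a single line that pops up an alert, so we can see it running on every page:

```javascript
// content.js
alert('Hello from your Chrome extension!');
```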

To inject the script, we need to tell our manifest.json file about it.

Add this to your manifest.json file:
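A sketch of the content_scripts entry:

```json
"content_scripts": [
  {
    "matches": ["<all_urls>"],
    "js": ["content.js"]
  }
]
```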

This tells Chrome to inject content.js into every page we visit using the special <all_urls> URL pattern. If we want to inject the script on only some pages, we can use match patterns. Here are a few examples of values for 'matches':

  • ['https://mail.google.com/*', 'http://mail.google.com/*'] injects our script into HTTPS and HTTP Gmail. If we have / at the end instead of /*, it matches the URLs exactly, and so would only inject into https://mail.google.com/, not https://mail.google.com/mail/u/0/#inbox. Usually that isn’t what you want.
  • http://*/* will match any http URL, but no other scheme. For example, this won’t inject your script into https sites.

Reload your Chrome extension. Every single page you visit now pops up an alert. Let’s log the first URL on the page instead.

Logging the URL

jQuery isn’t necessary, but it makes everything easier. First, download a version of jQuery from the jQuery CDN and put it in your extension’s folder. I downloaded the latest minified version, jquery-2.1.3.min.js. To load it, add it to manifest.json before 'content.js'. Your whole manifest.json should look like this:
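A sketch (extension name still a placeholder; jQuery must come before content.js so that $ is defined when content.js runs):

```json
{
  "manifest_version": 2,
  "name": "My Cool Extension",
  "version": "0.1",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["jquery-2.1.3.min.js", "content.js"]
    }
  ]
}
```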

Now that we have jQuery, let’s use it to log the URL of the first external link on the page in content.js:
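A sketch; “external” is approximated here as any absolute http(s) href:

```javascript
// content.js
// Grab the first link on the page with an absolute http(s) URL and log it
var firstHref = $("a[href^='http']").eq(0).attr("href");
console.log(firstHref);
```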

Note that we don’t need to use jQuery to check if the document has loaded. By default, Chrome injects content scripts after the DOM is complete.

Try it out - you should see the output in your console on every page you visit.

Browser Actions

When an extension adds a little icon next to your address bar, that’s a browser action. Your extension can listen for clicks on that button and then do something.

Put the icon.png from Google’s extension tutorial in your extension folder and add this to manifest.json:
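A sketch of the browser_action entry:

```json
"browser_action": {
  "default_icon": "icon.png"
}
```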

In order to use the browser action, we need to add message passing.

Message passing


A content script has access to the current page, but is limited in the APIs it can access. For example, it cannot listen for clicks on the browser action. We need to add a different type of script to our extension, a background script, which has access to every Chrome API but cannot access the current page. As Google puts it:

Content scripts have some limitations. They cannot use chrome.* APIs, with the exception of extension, i18n, runtime, and storage.

So the content script will be able to pull a URL out of the current page, but will need to hand that URL over to the background script to do something useful with it. In order to communicate, we’ll use what Google calls message passing, which allows scripts to send and listen for messages. It is the only way for content scripts and background scripts to interact.

Add the following to manifest.json to tell it about the background script:
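A sketch of the background entry ("persistent": false makes it an event page, which Google recommends for Manifest V2):

```json
"background": {
  "scripts": ["background.js"],
  "persistent": false
}
```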

Now we’ll add background.js:
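A sketch of background.js that listens for the browser action click and messages the active tab:

```javascript
// background.js
chrome.browserAction.onClicked.addListener(function (tab) {
  // Find the active tab and send it an arbitrary JSON payload
  chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
    chrome.tabs.sendMessage(tabs[0].id, { message: 'clicked_browser_action' });
  });
});
```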

This sends an arbitrary JSON payload to the current tab. The keys of the JSON payload can be anything, but I chose 'message' for simplicity. Now we need to listen for that message in content.js:
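A sketch of the listener, with the earlier URL-logging code moved inside it:

```javascript
// content.js
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.message === 'clicked_browser_action') {
    var firstHref = $("a[href^='http']").eq(0).attr("href");
    console.log(firstHref);
  }
});
```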

Notice that all of our previous code has been moved into the listener, so that it is only run when the payload is received. Every time you click the browser action icon, you should see a URL get logged to the console. If it’s not working, try reloading the extension and then reloading the page.

Opening a new tab


We can use the chrome.tabs API to open a new tab:
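For example (the URL is a placeholder):

```javascript
chrome.tabs.create({ url: 'https://www.example.com' });
```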

But chrome.tabs can only be used by background.js, so we’ll have to add some more message passing, since background.js can open the tab but can’t grab the URL. Here’s the idea:

  1. Listen for a click on the browser action in background.js. When it’s clicked, send a clicked_browser_action event to content.js.
  2. When content.js receives the event, it grabs the URL of the first link on the page. Then it sends open_new_tab back to background.js with the URL to open.
  3. background.js listens for open_new_tab and opens a new tab with the given URL when it receives the message.

Clicking on the browser action will trigger background.js, which will send a message to content.js, which will send a URL back to background.js, which will open a new tab with the given URL.

First, we need to tell content.js to send the URL to background.js. Change content.js to use this code:
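A sketch of the updated content.js, which now hands the URL over instead of only logging it:

```javascript
// content.js
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.message === 'clicked_browser_action') {
    var firstHref = $("a[href^='http']").eq(0).attr("href");
    console.log(firstHref);
    // Hand the URL to background.js, which is allowed to open tabs
    chrome.runtime.sendMessage({ message: 'open_new_tab', url: firstHref });
  }
});
```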

Now we need to add some code to tell background.js to listen for that event:
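A sketch of the updated background.js, with the new open_new_tab listener added below the click handler:

```javascript
// background.js
chrome.browserAction.onClicked.addListener(function (tab) {
  chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
    chrome.tabs.sendMessage(tabs[0].id, { message: 'clicked_browser_action' });
  });
});

// Open a new tab when content.js hands us a URL
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.message === 'open_new_tab') {
    chrome.tabs.create({ url: request.url });
  }
});
```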

Now when you click on the browser action icon, it opens a new tab with the first external URL on the page.

Wrapping it up

The full content.js and background.js are above. Here’s the full manifest.json:
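A sketch of the complete manifest.json, combining the fragments from the earlier steps:

```json
{
  "manifest_version": 2,
  "name": "My Cool Extension",
  "version": "0.1",
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  },
  "browser_action": {
    "default_icon": "icon.png"
  },
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["jquery-2.1.3.min.js", "content.js"]
    }
  ]
}
```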

And here’s the full directory structure:
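Based on the files created in this tutorial, the layout should be roughly:

```
.
├── background.js
├── content.js
├── icon.png
├── jquery-2.1.3.min.js
└── manifest.json
```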

More on how to make a Chrome extension

For more information, try the official Chrome extension documentation.