Python is one of the most popular programming languages for data mining and web scraping. The community has produced tons of libraries and niche scrapers, but we'd like to share the 5 most popular of them.
You can get most of these libraries' advantages by using our API, and some of them can be used in a stack together with it.
The Top 5 Python Web Scraping Libraries in 2020
1. Requests
Requests is well known to most Python developers as the fundamental tool for getting raw HTML data from web resources.
To install the library, just execute the following pip command in your command prompt or Terminal:
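```shell
pip install requests
```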
After this, you can check the installation in the REPL:
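A quick interactive check might look like this (the exact response depends on your network and the site):

```python
>>> import requests
>>> requests.get('https://example.com')
<Response [200]>
```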
- Official docs URL: https://requests.readthedocs.io/en/latest/
- GitHub repository: https://github.com/psf/requests
2. LXML
When we're talking about speed of HTML parsing, we should keep in mind this great library called LXML. It is a real champion at HTML and XML parsing, so software based on LXML can be used for scraping frequently changing pages, such as gambling sites that provide odds for live events.
To install the library, just execute the following pip command in your command prompt or Terminal:
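```shell
pip install lxml
```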
The LXML Toolkit is a really powerful instrument and the whole functionality can’t be described in just a few words, so the following links might be very useful:
- Official docs URL: https://lxml.de/index.html#documentation
- GitHub repository: https://github.com/lxml/lxml/
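As a tiny taste of the API, here is a minimal sketch of XPath-based extraction with lxml; the HTML snippet and the odds values are made up for the example:

```python
from lxml import html

# A small HTML snippet standing in for a downloaded page with live odds
page = """
<html><body>
  <ul id="odds">
    <li class="event">Team A vs Team B <span>1.95</span></li>
    <li class="event">Team C vs Team D <span>2.40</span></li>
  </ul>
</body></html>
"""

tree = html.fromstring(page)
# XPath: grab the odds text from every event row
odds = tree.xpath('//li[@class="event"]/span/text()')
print(odds)  # ['1.95', '2.40']
```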
3. BeautifulSoup
Probably 80% of all the Python web scraping tutorials on the Internet use the BeautifulSoup4 library as a simple tool for dealing with retrieved HTML in the most human-friendly way: selectors, attributes, DOM-tree traversal, and much more. It is the perfect choice for porting code to or from JavaScript's Cheerio or jQuery.
To install this library, just execute the following pip command in your command prompt or Terminal:
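```shell
pip install beautifulsoup4
```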
As mentioned before, there are plenty of tutorials about BeautifulSoup4 around the Internet, so do not hesitate to Google for more!
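To show the selector style in action, here is a minimal sketch; the HTML snippet is made up for the example:

```python
from bs4 import BeautifulSoup

# A small HTML snippet standing in for a downloaded page
html_doc = """
<html><body>
  <h1 class="title">Example Domain</h1>
  <p>More information <a href="https://www.iana.org/domains/example">here</a>.</p>
</body></html>
"""

soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.h1.text)            # attribute-style access: Example Domain
print(soup.find('a')['href'])  # find() plus attribute lookup
print(soup.select_one('.title').text)  # CSS selectors work too
```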
- Official docs URL: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
- Launchpad repository: https://code.launchpad.net/~leonardr/beautifulsoup/bs4
4. Selenium
Selenium is the most popular browser automation tool (WebDriver), with wrappers for most programming languages. Quality assurance engineers, automation specialists, developers, data scientists - all of them have used this perfect tool at least once. For web scraping it's like a Swiss Army knife: no additional libraries are needed, because any action can be performed with the browser just like a real user would perform it: opening pages, clicking buttons, filling in forms, solving Captchas, and much more.
To install this library, just execute the following pip command in your command prompt or Terminal:
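```shell
pip install selenium
```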
The code below shows how easily web crawling can be started with Selenium:
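A minimal sketch, assuming Firefox with geckodriver available on your PATH and Selenium 4's `By`-based locator API; example.com stands in here for a real target site:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a real Firefox browser (requires geckodriver on PATH)
driver = webdriver.Firefox()
driver.get('https://example.com')

# Read the page exactly as a real user would see it
print(driver.title)
heading = driver.find_element(By.TAG_NAME, 'h1')
print(heading.text)

driver.quit()
```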
As this example illustrates only 1% of Selenium's power, we'd like to offer the following useful links:
- Official docs URL: https://selenium-python.readthedocs.io/
- GitHub repository: https://github.com/SeleniumHQ/selenium
5. Scrapy
Scrapy is the most complete Python web scraping framework, developed by a team with extensive enterprise scraping experience. Software built on top of it can be a crawler, a scraper, a data extractor, or all of these at once.
To install this library, just execute the following pip command in your command prompt or Terminal:
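```shell
pip install scrapy
```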
We definitely suggest you start with the official tutorial to learn more about this piece of gold: https://docs.scrapy.org/en/latest/intro/tutorial.html
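To give a feel for the framework, here is a minimal spider sketch against quotes.toscrape.com, the public practice site used by Scrapy's own tutorial; the selectors below match that site's markup:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    """Crawls the practice site and yields one item per quote."""
    name = 'quotes'
    start_urls = ['https://quotes.toscrape.com/']

    def parse(self, response):
        # Each quote lives in a <div class="quote"> block
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
            }
        # Follow pagination; Scrapy handles scheduling and deduplication
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Save it as `quotes_spider.py` and, with a recent Scrapy, run it via `scrapy runspider quotes_spider.py -O quotes.json`.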
As usual, the useful links are below:
- Official docs URL: https://docs.scrapy.org/en/latest/index.html
- GitHub repository: https://github.com/scrapy/scrapy
What web scraping library to use?
So, it's all up to you and the task you're trying to solve - but always remember to read the Privacy Policy and Terms of Service of the site you're scraping 😉.
Imagine what you could do if you automated all the repetitive, boring activities you perform on the Internet, like checking the first Google results for a given keyword every day, or downloading a bunch of files from different websites.
In this post you'll learn to use Selenium with Python, a web scraping tool that simulates a user surfing the Internet. For example, you can use it to automatically run Google queries and read the results, log in to your social accounts, simulate a user to test your web application, and automate anything in your daily life that is repetitive. The possibilities are infinite! 🙂
*All the code in this post has been tested with Python 2.7 and Python 3.4.
Install and use Selenium
Selenium is a Python package that can be installed via pip. I recommend that you install it in a virtual environment (for example, using virtualenv and virtualenvwrapper).
To install selenium, you just need to type:
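```shell
# Ideally run this inside a virtual environment
pip install selenium
```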
In this post we are going to initialize a Firefox driver — you can install it by visiting their website. However, if you want to work with Chrome or IE, you can find more information here.
Once you have Selenium and Firefox installed, create a Python file, selenium_script.py. We are going to initialize a browser using Selenium:
```python
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException


def init_driver():
    driver = webdriver.Firefox()
    # Attach a 5-second explicit wait to the driver for easy access later
    driver.wait = WebDriverWait(driver, 5)
    return driver


def lookup(driver, query):
    driver.get('http://www.google.com')
    try:
        box = driver.wait.until(EC.presence_of_element_located(
            (By.NAME, 'q')))
        button = driver.wait.until(EC.element_to_be_clickable(
            (By.NAME, 'btnK')))
        box.send_keys(query)
        button.click()
    except TimeoutException:
        print('Box or Button not found in google.com')


if __name__ == '__main__':
    driver = init_driver()
    lookup(driver, 'Selenium')
    time.sleep(5)
    driver.quit()
```
In the previous code:
- the function init_driver initializes a driver instance.
- creates the driver instance
- adds the WebDriverWait function as an attribute to the driver, so it can be accessed more easily. This function is used to make the driver wait a certain amount of time (here 5 seconds) for an event to occur.
- the function lookup takes two arguments: a driver instance and a query lookup (a string).
- it loads the Google search page
- it waits for the query box element to be located and for the button to be clickable. Note that we are using the WebDriverWait function to wait for these elements to appear.
- Both elements are located by name. Other options would be to locate them by ID, XPATH, TAG_NAME, CLASS_NAME, CSS_SELECTOR, etc. (see the table below). You can find more information here.
- Next, it sends the query into the box element and clicks the search button.
- If either the box or button are not located during the time established in the wait function (here, 5 seconds), the TimeoutException is raised.
- the next statement is a conditional that is true only when the script is run directly. This prevents the following statements from running when this file is imported.
- it initializes the driver and calls the lookup function to look for “Selenium”.
- it waits for 5 seconds to see the results and quits the driver
Finally, run your code with:
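```shell
python selenium_script.py
```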
Did it work? If you got an ElementNotVisibleException, keep reading!
How to catch an ElementNotVisibleException
Google search has recently changed: on the initial page the search button sits in the middle of the screen, and when you start writing your query, the search button moves into the upper part of the screen.
Well, actually it doesn't move. The old button becomes invisible and the new one visible (and thus the exception when you click the old one: it's not visible to click!).
We can update the lookup function in our code so that it catches this exception:
```python
from selenium.common.exceptions import ElementNotVisibleException


def lookup(driver, query):
    driver.get('http://www.google.com')
    try:
        box = driver.wait.until(EC.presence_of_element_located(
            (By.NAME, 'q')))
        button = driver.wait.until(EC.element_to_be_clickable(
            (By.NAME, 'btnK')))
        box.send_keys(query)
        try:
            button.click()
        except ElementNotVisibleException:
            # The old button is invisible; wait for the new, visible one
            button = driver.wait.until(EC.visibility_of_element_located(
                (By.NAME, 'btnG')))
            button.click()
    except TimeoutException:
        print('Box or Button not found in google.com')
```
- the call that raised the exception, button.click(), is now inside a try statement.
- if the exception is raised, we look for the second button, using visibility_of_element_located to make sure the element is visible, and then click this button.
- if at any time some element is not found within the 5-second period, the TimeoutException is raised and caught by the last two lines of code.
- Note that the initial button name is “btnK” and the new one is “btnG”.
Method list in Selenium
To sum up, I’ve created a table with the main methods used here.
Note: it’s not a python file — don’t try to run/import it 🙂
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException, ElementNotVisibleException

# INITIALIZE A DRIVER
driver = webdriver.Firefox()

# WAIT FOR ELEMENTS (here, up to 5 seconds)
driver.wait = WebDriverWait(driver, 5)
element = driver.wait.until(
    EC.presence_of_element_located(locator)
    EC.element_to_be_clickable(locator)
    EC.visibility_of_element_located(locator)
)

# LOCATORS (use any of these as the locator tuple)
(By.ID, 'id')
(By.NAME, 'name')
(By.XPATH, 'xpath')
(By.LINK_TEXT, 'link text')
(By.PARTIAL_LINK_TEXT, 'partial link text')
(By.TAG_NAME, 'tag name')
(By.CLASS_NAME, 'class name')
(By.CSS_SELECTOR, 'css selector')

# CATCH EXCEPTIONS
TimeoutException
ElementNotVisibleException
```
That’s all! Hope it was useful! 🙂
Don’t forget to share it with your friends!