Extract Data from HTML With BeautifulSoup

Hello everyone, we hope you are having a fun time going through our tutorials. In this article we will learn how to pull data out of HTML and XML, in other words web scraping (extracting information from websites).

For this I will be making use of the BeautifulSoup module.


BeautifulSoup is a Python library for pulling data out of HTML and XML files. The module provides a few methods and Python idioms for navigating, searching, and modifying a parse tree. It can save programmers hours of coding.


These are the modules we need to get started. Install them on your local machine using pip:

$pip install requests
$pip install beautifulsoup4

Let us learn this module's features with examples. We will be using the following HTML document throughout. Note that the examples in this tutorial should work the same way in Python 2.7 and Python 3.2.

html_doc = """
<head><title>Python Lovers</title></head>
<p class="hello">We are the co-founders of Python Lovers</p>
<p>1. Kamal </p>
<p>2. Ankur </p>
<p>3. Manish </p>
<p>4. Jaswinder </p>
<p>5. Mulasi </p>
<p>6. Aditya </p>
<a href="https://www.pythonlovers.net/">Python Lovers</a>
"""

When you run the commands below, you should see output like the following; the BeautifulSoup constructor parses the document and returns a BeautifulSoup object.

>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html_doc, 'html.parser')
>>> print(soup.prettify())
<head>
 <title>
  Python Lovers
 </title>
</head>
<p class="hello">
 We are the co-founders of Python Lovers
</p>
<p>
 1. Kamal
</p>
<p>
 2. Ankur
</p>
<p>
 3. Manish
</p>
<p>
 4. Jaswinder
</p>
<p>
 5. Mulasi
</p>
<p>
 6. Aditya
</p>
<a href="https://www.pythonlovers.net/">
 Python Lovers
</a>

As mentioned in the introduction, we can navigate through the parse tree. Let us explore it in a few simple steps.

>>> soup.title
<title>Python Lovers</title>
>>> soup.title.name
'title'
>>> soup.p
<p class="hello">We are the co-founders of Python Lovers</p>
>>> soup.p['class']
['hello']
>>> soup.a
<a href="https://www.pythonlovers.net/">Python Lovers</a>
>>> soup.find_all('p')
[<p class="hello">We are the co-founders of Python Lovers</p>, <p>1. Kamal </p>, <p>2. Ankur </p>, <p>3. Manish </p>, <p>4. Jaswinder </p>, <p>5. Mulasi </p>, <p>6. Aditya </p>]
>>> print(soup.get_text())
Python Lovers
We are the co-founders of Python Lovers
1. Kamal 
2. Ankur 
3. Manish 
4. Jaswinder 
5. Mulasi 
6. Aditya 
Python Lovers
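Beyond dotted tag access, find_all also accepts attribute filters, which the session above did not show. Here is a minimal sketch using a trimmed copy of the same html_doc (the variable names are my own):

```python
from bs4 import BeautifulSoup

html_doc = """
<head><title>Python Lovers</title></head>
<p class="hello">We are the co-founders of Python Lovers</p>
<p>1. Kamal </p>
<p>2. Ankur </p>
"""

soup = BeautifulSoup(html_doc, "html.parser")

# find_all can filter on attributes; class_ avoids clashing with the keyword
hello_paragraphs = soup.find_all("p", class_="hello")
print(hello_paragraphs[0].get_text())  # We are the co-founders of Python Lovers

# Collect the text of every <p> tag into a plain list
names = [p.get_text().strip() for p in soup.find_all("p")]
print(names)  # ['We are the co-founders of Python Lovers', '1. Kamal', '2. Ankur']
```

The class_ parameter (note the trailing underscore) is how BeautifulSoup lets you filter on the CSS class attribute without shadowing Python's class keyword.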

In the next section, let us consider how we can extract URLs from a given website.

Here is a small program that will help me achieve the task. Before running it, I expect you have installed the necessary modules; if not, please do so now, since we only require beautifulsoup4 and requests.

from bs4 import BeautifulSoup
import requests
UrlEntered = raw_input("Please enter a Website to fetch the various URL's ( begin with https://) : ")
requesting = requests.get(UrlEntered)
information = requesting.text
soupObject = BeautifulSoup(information)
for urls in soupObject.find_all('a'):
    print(urls.get('href'))

This is a small program that requests a website address from the user and displays the URLs that the website contains. In my demonstration I will request the site “https://www.google.com/”, and its URLs will be printed in response.

We get the following output:

Ankurs-MacBook-Pro:documents ankurgupta$ python beauti.py
Please enter a Website to fetch the various URL's ( begin with https://) : https://www.google.com/
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/bs4/__init__.py:166: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.
To get rid of this warning, change this:
 BeautifulSoup([your markup])
to this:
 BeautifulSoup([your markup], "html.parser")
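As the warning itself suggests, passing the parser name explicitly makes the program behave the same on every system. Here is a sketch of the same extraction logic with the parser specified, run against a static page string so it needs no network access (the sample URLs are placeholders from this tutorial):

```python
from bs4 import BeautifulSoup

# A static page stands in for requests.get(UrlEntered).text
page = """
<a href="https://www.pythonlovers.net/">Python Lovers</a>
<a href="https://www.google.com/">Google</a>
<a>an anchor with no href</a>
"""

# Naming "html.parser" explicitly silences the UserWarning shown above
soup = BeautifulSoup(page, "html.parser")

# tag.get('href') returns None instead of raising when the attribute is absent
links = [a.get("href") for a in soup.find_all("a") if a.get("href")]
print(links)  # ['https://www.pythonlovers.net/', 'https://www.google.com/']
```

Using a.get("href") rather than a["href"] keeps the loop from crashing on anchors that lack an href attribute, which real pages frequently contain.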

This is pretty much all we will cover in this article to begin the exploration. I hope you have understood this part of our tutorial. If you want to explore more of the interesting sides of BeautifulSoup4, you can visit the official BeautifulSoup documentation.
In case of any queries or questions, do reach out to us; we will help you in the best possible way to keep things going in your favour. Thank you.