Python provides several ways to download files from the internet. This can be done over HTTP using the urllib package or the requests library. This tutorial will discuss how to use these libraries to download files from URLs using Python.
The requests library is one of the most popular libraries in Python. Requests allows you to send HTTP/1.1 requests without the need to manually add query strings to your URLs or form-encode your POST data.
With the requests library, you can perform a lot of functions including:
- adding form data,
- adding multipart files,
- and accessing the response data.
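To get a feel for the form-data and multipart features, here is a minimal sketch that builds a POST request without actually sending it (the URL, field names, and file contents below are placeholders for illustration):

```python
import requests

# Build (but do not send) a POST request so we can inspect what
# requests would transmit over the wire.
req = requests.Request(
    "POST",
    "https://example.com/upload",                       # placeholder URL
    data={"name": "alice"},                             # form data
    files={"report": ("report.txt", b"file contents")}, # multipart file
)
prepared = req.prepare()

# When files are present, requests encodes the body as multipart/form-data
# and sets the Content-Type header (including the boundary) automatically.
print(prepared.headers["Content-Type"])
```

Because nothing is sent, this is a safe way to inspect headers and the encoded body before making a real request.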
The first thing you need to do is install the library, and it's as simple as:
pip install requests
To test whether the installation was successful, you can do a very easy check in your Python interpreter by simply typing:
import requests
If the installation has been successful, there will be no errors.
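You can also confirm which version was installed (the exact version string on your machine will differ):

```python
import requests

# If this import succeeds, the library is installed;
# __version__ reports the installed release.
print(requests.__version__)
```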
HTTP request methods include GET, POST, PUT, DELETE, HEAD, and OPTIONS.
Making a GET request
Making a request is very easy, as illustrated below.
import requests
req = requests.get("http://www.google.com")
The above command will fetch the Google web page and store the response in the req variable. We can then access other attributes of the response as well.
For instance, to check whether fetching the Google web page was successful, we can query the status_code attribute.
import requests
req = requests.get("http://www.google.com")
req.status_code
200 # 200 means a successful request
What if we want to find out the encoding of the Google web page? The encoding attribute holds it:
req.encoding
You might also want to see the contents of the response, which the text attribute exposes:
req.text
What gets printed is just a truncated portion of the full response body.
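Putting these pieces together, here is a self-contained sketch that serves a small page locally so it runs without internet access (the page contents and the use of a throwaway local server are illustrative assumptions, not part of the requests API) and then inspects the same response attributes:

```python
import http.server
import threading

import requests

# Serve one tiny HTML page on a free local port so the example
# does not depend on an external site being reachable.
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = requests.get(f"http://127.0.0.1:{server.server_port}/")
print(req.status_code)  # 200
print(req.encoding)     # "utf-8", parsed from the Content-Type header
print(req.text)         # the decoded HTML as a string
print(req.content[:20]) # the raw response body as bytes

server.shutdown()
```

Note the difference between text (a decoded str, using the detected encoding) and content (the raw bytes of the response body).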