Python library: requests

requests promises to be the HTTP library for human beings.

Methods etc.

for x in sorted(dir(requests), key = lambda w: w.upper()): print(x)
adapters
api
auth
certs
chardet_version
charset_normalizer_version
check_compatibility
codes
compat
ConnectionError
ConnectTimeout
cookies
delete
DependencyWarning
exceptions
FileModeWarning
get
head
hooks
HTTPError
JSONDecodeError
logging
models
NullHandler
options
packages
patch
post
PreparedRequest
put
ReadTimeout
Request
request
RequestException
RequestsDependencyWarning
Response
Session
session
sessions
ssl
status_codes
structures
Timeout
TooManyRedirects
urllib3
URLRequired
utils
warnings
_check_cryptography
_internal_utils
__author_email__
__author__
__build__
__builtins__
__cached__
__cake__
__copyright__
__description__
__doc__
__file__
__license__
__loader__
__name__
__package__
__path__
__spec__
__title__
__url__
__version__

Simple examples

get

import requests


gotten = requests.get('https://raw.githubusercontent.com/ReneNyffenegger/about-python/master/libraries/requests/get.py')

print('Status code: ', gotten.status_code)
print('Headers:')

for header in gotten.headers:
    print("  %-30s: %s" % (header, gotten.headers[header]))

print('Encoding: ', gotten.encoding)

print()

print(gotten.text)
Github repository about-python, path: /libraries/requests/get.py
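
A get request does not raise an exception for HTTP error statuses (4xx/5xx) on its own; the status code needs to be checked explicitly. A minimal sketch, assuming a timeout of ten seconds is acceptable:
import requests

url = 'https://raw.githubusercontent.com/ReneNyffenegger/about-python/master/libraries/requests/get.py'

try:
    r = requests.get(url, timeout = 10)   # Don't wait forever for a response
    r.raise_for_status()                  # Raise an HTTPError for 4xx/5xx status codes
except requests.exceptions.RequestException as e:
    print('Request failed:', e)
else:
    print(r.text)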

Specify a request header

The following example specifies the HTTP request header Accept to instruct the Wikidata endpoint to return its result as JSON:
import requests

query = """
select
  (lang(?label) as ?lang)
   ?label
{
   wd:Q22661317  rdfs:label ?label .
}
"""

response = requests.get(
  "https://query.wikidata.org/sparql"                     ,
   params  = {"query" :  query                           },
   headers = {"Accept": "application/sparql-results+json"}
)

print(response.json())
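
The returned JSON follows the SPARQL query results JSON format; assuming the usual results → bindings layout, the individual rows could be iterated over like this (a sketch):
for binding in response.json()['results']['bindings']:
    print(binding['lang']['value'], binding['label']['value'])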

Download and save file

import requests

f = open ('script.downloaded', 'wb')
r = requests.get('https://raw.githubusercontent.com/ReneNyffenegger/about-python/master/libraries/requests/download-and-save-file.py', stream = True)

for chunk in r.iter_content(chunk_size = 1024):
    if chunk: # filter out keep-alive new chunks
       f.write(chunk)
       f.flush()

f.close()
This script was inspired/copied from this stackoverflow question.
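
The same download can also be written with context managers so that both the response and the file are closed reliably, and with raise_for_status() so that an error page is not silently saved. A sketch:
import requests

url = 'https://raw.githubusercontent.com/ReneNyffenegger/about-python/master/libraries/requests/download-and-save-file.py'

with requests.get(url, stream = True) as r, open('script.downloaded', 'wb') as f:
    r.raise_for_status()                              # Don't save a 404 page etc.
    for chunk in r.iter_content(chunk_size = 1024):
        if chunk:                                     # filter out keep-alive chunks
            f.write(chunk)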

POSTing UTF-8 data causing wrong Content-Length

I believe I ran into an issue where the Content-Length was calculated too low when I tried to POST UTF-8 data with a request similar to:
res = requests.post(
   url,
   data = body
)
I was able to fix this by explicitly calling .encode('utf-8') on the request's body:
res = requests.post(
   url,
   data = body.encode('utf-8')
)
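
The difference presumably comes down to characters versus bytes: for a str that contains non-ASCII characters, the character count is smaller than the number of UTF-8 encoded bytes, so a Content-Length derived from the character count is too small. A small sketch of the arithmetic:
body = 'Zürich'                       # contains one non-ASCII character
print(len(body))                      # 6 characters
print(len(body.encode('utf-8')))      # 7 bytes: ü is encoded as two bytes in UTF-8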

Logging

The following code is taken from this github gist.
import requests
import logging
from http.client import HTTPConnection

urlliblogger = logging.getLogger('urllib3')
urlliblogger.setLevel(logging.DEBUG)

# logging from urllib3 to console
logstream = logging.StreamHandler()
logstream.setLevel(logging.DEBUG)
urlliblogger.addHandler(logstream)

HTTPConnection.debuglevel = 1
print(requests.get('https://api.openstreetmap.org/api/0.6/node/1894790125'))
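
A shorter variant that should produce similar output is to configure the root logger with basicConfig, which then also receives urllib3's debug messages. A sketch:
import logging
import requests
from http.client import HTTPConnection

logging.basicConfig(level = logging.DEBUG)   # Root logger handles urllib3's debug messages, too
HTTPConnection.debuglevel = 1                # http.client prints the raw request and response lines

requests.get('https://api.openstreetmap.org/api/0.6/node/1894790125')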

Installation on Windows

requests must apparently (at least on my machine) be installed as admin.
As non-admin:
>  pip install requests
…
… churn, churn, churn …
…
ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: 'C:\\Python311\\Scripts\\normalizer.exe' -> 'C:\\Python311\\Scripts\\normalizer.exe.deleteme'
When I tried to install this library on Windows under username René (note the accent), I received a UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 57: ordinal not in range(128).
I was able to fix this by changing the environment variable USERNAME to a value without accent.

See also

The standard library urllib (especially urllib.request)
