Error Handling with ProxyHub in Python

Robust scrapers need to gracefully handle HTTP errors, timeouts, and quota limits. In this guide, we’ll cover how to catch and react to common proxy-related errors when using ProxyHub.

📖 Table of Contents

Basic Exception Handling
Retries & Exponential Backoff
Handling Quota Exceeded (402)
Timeouts & Connection Failures
Best Practices

Basic Exception Handling

import requests

# proxied_url is the target URL, routed through your ProxyHub endpoint
try:
    resp = requests.get(proxied_url, timeout=10)
    resp.raise_for_status()  # raises HTTPError for 4xx/5xx status codes
    data = resp.json()
except requests.exceptions.HTTPError as e:
    print("HTTP error:", e)  # non-2xx response
except requests.exceptions.Timeout:
    print("Request timed out")  # connect or read timeout
except requests.exceptions.RequestException as e:
    print("Something went wrong:", e)  # any other requests failure

Retries & Exponential Backoff

Wrap your calls in a retry loop that waits exponentially longer after each failed attempt:

import time
import requests

def fetch_with_retries(url, retries=3, backoff_factor=1):
    """Fetch url, retrying with exponential backoff on any requests failure."""
    for i in range(retries):
        try:
            r = requests.get(url, timeout=10)
            r.raise_for_status()
            return r.text
        except requests.exceptions.RequestException as e:
            if i == retries - 1:  # out of attempts: surface the last error
                raise RuntimeError("All retries failed") from e
            wait = backoff_factor * (2 ** i)  # 1s, 2s, 4s, ...
            print(f"Attempt {i+1} failed: {e}. Retrying in {wait}s…")
            time.sleep(wait)
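
For example, a call might look like this (the URL is just a placeholder):

html = fetch_with_retries("https://example.com/page", retries=4, backoff_factor=2)
print(len(html), "characters fetched")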

Handling Quota Exceeded (402)

Detect and back off when your plan limits are hit:

r = requests.get(url, timeout=10)
if r.status_code == 402:  # ProxyHub reports an exhausted quota as 402
    print("Quota exceeded! Waiting 1 hour before retry.")
    time.sleep(3600)
    r = requests.get(url, timeout=10)

Note: You can also monitor your usage via the ProxyHub dashboard and set up alerts so you can react before the quota runs out.
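
If your plan exposes remaining-quota information in response headers, you can also check it on every call and throttle before ever hitting a 402. The header name below (X-RateLimit-Remaining) is an assumption, not a documented ProxyHub field; substitute whatever your plan actually returns.

def remaining_quota(resp):
    # Hypothetical header name; adjust to what your ProxyHub plan returns
    value = resp.headers.get("X-RateLimit-Remaining")
    return int(value) if value is not None else None

r = requests.get(url, timeout=10)
quota = remaining_quota(r)
if quota is not None and quota < 100:
    print(f"Only {quota} requests left this period; throttling.")
    time.sleep(60)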

Timeouts & Connection Failures

Set sensible timeouts and catch connection errors:

try:
    r = requests.get(url, timeout=5)
except requests.exceptions.ConnectTimeout:
    print("Connection timed out while connecting")
except requests.exceptions.ReadTimeout:
    print("Server took too long to send data")
except requests.exceptions.ConnectionError:
    print("Could not reach the proxy or target host")

Best Practices

With these patterns in place (sensible timeouts, retries with exponential backoff, and quota-aware handling of 402 responses), you'll be equipped to handle most proxy-related failures and keep your scraper resilient.
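
As a recap, here is one way the pieces above could be combined into a single helper. It mirrors the snippets in this guide (timeouts, exponential backoff, and a wait on 402); the retry counts and wait times are only defaults to tune for your own plan.

import time
import requests

def resilient_get(url, retries=3, backoff_factor=1, quota_wait=3600):
    """Combine timeouts, exponential backoff, and 402 quota handling."""
    for i in range(retries):
        try:
            r = requests.get(url, timeout=(3, 30))
            if r.status_code == 402:  # quota exhausted: wait, then retry
                print("Quota exceeded; waiting before the next attempt.")
                time.sleep(quota_wait)
                continue
            r.raise_for_status()
            return r.text
        except requests.exceptions.RequestException as e:
            if i == retries - 1:
                raise RuntimeError("All retries failed") from e
            wait = backoff_factor * (2 ** i)
            print(f"Attempt {i+1} failed: {e}. Retrying in {wait}s…")
            time.sleep(wait)
    raise RuntimeError("All retries failed")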