
Max Retries Exceeded With Url Selenium

So I'm looking to traverse a URL array and open different URLs for web scraping with Selenium. The problem is, as soon as I hit the second browser.get(url), I get a "Max retries exceeded with url" error.
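The error in question is urllib3's "Max retries exceeded with url ...", which means the Selenium client can no longer reach the local chromedriver session. A minimal sketch of how the failure typically arises, assuming the driver (or a with block around it) quits the session before the next get() call; the URLs are placeholders:

from selenium import webdriver

urlArr = ['https://link1', 'https://link2']

browser = webdriver.Chrome()
browser.get(urlArr[0])   # first page loads fine
browser.quit()           # the chromedriver session is now gone
browser.get(urlArr[1])   # urllib3 raises "Max retries exceeded with url ..."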

Solution 1:

Solved the problem: you have to recreate the webdriver for each URL, so that every get() runs against a fresh driver session.

from bs4 import BeautifulSoup
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options


urlArr = ['https://link1', 'https://link2', '...']

for url in urlArr:
    chrome_options = Options()
    # Recreate the driver on every iteration; the with block quits it (and its
    # session) at the end, so reusing it for the next URL would fail.
    chromedriver = webdriver.Chrome(executable_path='C:/Users/andre/Downloads/chromedriver_win32/chromedriver.exe', options=chrome_options)
    with chromedriver as browser:
        browser.get(url)
        time.sleep(5)
        # Click a button
        browser.find_elements_by_tag_name('a')[7].click()

        # Scroll to the bottom repeatedly so lazily loaded content renders
        browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)
        for i in range(0, 2):
            browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(5)

        # Parse the fully rendered page with BeautifulSoup
        html = browser.page_source
        page_soup = BeautifulSoup(html, 'html.parser')
        boxes = page_soup.find("div", {"class": "rpBJOHq2PR60pnwJlUyP0"})
        videos = page_soup.find_all("video", {"class": "_1EQJpXY7ExS04odI1YBBlj"})
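For context on why recreating the driver works: the with block calls quit() at the end of each iteration, which tears down the chromedriver session, so the next get() needs a brand-new driver. If all the pages can share one browser session, a leaner variant is to create the driver once and quit only after the loop. A minimal sketch under that assumption (it relies on chromedriver being on PATH, or on Selenium 4's driver manager; otherwise pass the executable path as above):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import time

urlArr = ['https://link1', 'https://link2']  # placeholders, as above

chrome_options = Options()
browser = webdriver.Chrome(options=chrome_options)  # one driver for every URL
try:
    for url in urlArr:
        browser.get(url)   # the session stays alive, so no "Max retries" error
        time.sleep(5)
        # ... click, scroll, and parse the page here, as in the loop above ...
finally:
    browser.quit()  # quit exactly once, after the last URL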
