
My Scraper Fails To Get All The Items From A Webpage

I've written some code in Python, in combination with Selenium, to parse different product names from a webpage. A few "LOAD MORE" buttons become visible if the browser is made to scroll down.

Solution 1:

Try the code below to get the required data:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.purplle.com/search?q=hair%20fall%20shamboo")
wait = WebDriverWait(driver, 10)

# Hide the sticky header so it can't intercept clicks while scrolling
header = driver.find_element(By.TAG_NAME, "header")
driver.execute_script("arguments[0].style.display='none';", header)

while True:
    try:
        # Scroll a listing item into view and jump to the end of the page
        page = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".listing_item")))
        driver.execute_script("arguments[0].scrollIntoView();", page)
        page.send_keys(Keys.END)
        # Click "LOAD MORE" and wait for the old button to go stale,
        # which signals that the next batch of items has been loaded
        load = wait.until(EC.element_to_be_clickable((By.PARTIAL_LINK_TEXT, "LOAD MORE")))
        driver.execute_script("arguments[0].scrollIntoView();", load)
        load.click()
        wait.until(EC.staleness_of(load))
    except Exception:
        # Typically a TimeoutException: no "LOAD MORE" button left, all items loaded
        break

for item in wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "[id^=item_]"))):
    name = item.find_element(By.CSS_SELECTOR, ".pro-name.el2").text
    print(name)
driver.quit()

Solution 2:

You should only use Selenium as a last resort.

A quick look at the page's network traffic reveals the API it calls to fetch the data.

It returns JSON output with all the details:

Link

You can then simply loop over the response and store it in a dataframe.

This is much faster and less error-prone than Selenium.
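To illustrate the loop-over-JSON step, here is a minimal sketch. The actual endpoint URL and the response's field names (`products`, `name`) are assumptions for illustration; inspect the real API response in your browser's network tab and adjust accordingly:

```python
import json

# Hypothetical response shape -- the real API's field names may differ.
# In practice you would fetch this with: payload = requests.get(api_url).json()
sample_response = json.loads("""
{
  "products": [
    {"name": "Anti Hairfall Shampoo", "price": 299},
    {"name": "Hairfall Control Shampoo", "price": 349}
  ]
}
""")

def extract_names(payload):
    """Collect product names from the parsed JSON payload."""
    return [product["name"] for product in payload["products"]]

names = extract_names(sample_response)
print(names)
```

Once the list of dicts is extracted, `pandas.DataFrame(payload["products"])` would give you the dataframe directly.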
