一尘不染

How can I make my Selenium script faster?

selenium

I am crawling websites with Python, Selenium, and Scrapy.

But my script is far too slow:

Crawled 1 pages (at 1 pages/min)

I use CSS selectors instead of XPath to optimize for time, and I changed the middleware to:

'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,

Is Selenium just slow, or should I change something in the settings?

My code:

import time

from scrapy import Request
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from pyvirtualdisplay import Display


def start_requests(self):
    for url in self.start_urls:
        yield Request(url, callback=self.parse)

def parse(self, response):
    # start a virtual display and a fresh browser for every response
    display = Display(visible=0, size=(800, 600))
    display.start()
    driver = webdriver.Firefox()
    driver.get("http://www.example.com")
    # fill in and submit the postcode search form
    inputElement = driver.find_element_by_name("OneLineCustomerAddress")
    inputElement.send_keys("75018")
    inputElement.submit()
    catNums = driver.find_elements_by_css_selector("html body div#page div#main.content div#sContener div#menuV div#mvNav nav div.mvNav.bcU div.mvNavLk form.jsExpSCCategories ul.mvSrcLk li")
    # INIT: open the first category
    driver.find_element_by_css_selector(".mvSrcLk>li:nth-child(1)>label.mvNavSel.mvNavLvl1").click()
    for catNumber in range(1, len(catNums) + 1):
        print("\n IN catnumber \n")
        driver.find_element_by_css_selector("ul#catMenu.mvSrcLk> li:nth-child(%s)> label.mvNavLvl1" % catNumber).click()
        time.sleep(5)
        self.parse_articles(driver)
        pages = driver.find_elements_by_xpath('//*[@class="pg"]/ul/li[last()]/a')

        if pages:
            page = pages[0]
            checkText = page.text.strip()
            if len(checkText) > 0:
                # the last pager link holds the page count; walk the remaining pages
                pageNums = int(checkText) - 1
                for pageNumbers in range(pageNums):
                    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "waitingOverlay")))
                    driver.find_element_by_css_selector('.jsNxtPage.pgNext').click()
                    self.parse_articles(driver)
                    time.sleep(5)

def parse_articles(self, driver):
    test = driver.find_elements_by_css_selector('html body div#page div#main.content div#sContener div#sContent div#lpContent.jsTab ul#lpBloc li div.prdtBloc p.prdtBDesc strong.prdtBCat')

def between(self, value, a, b):
    # return the substring of value between the first a and the last b
    pos_a = value.find(a)
    if pos_a == -1:
        return ""
    pos_b = value.rfind(b)
    if pos_b == -1:
        return ""
    adjusted_pos_a = pos_a + len(a)
    if adjusted_pos_a >= pos_b:
        return ""
    return value[adjusted_pos_a:pos_b]

2020-06-26

1 Answer

一尘不染

So, your code has a couple of flaws here:

  1. You use Selenium to parse the page content, when Scrapy's own Selectors would be faster and more efficient (see the sketch after this list).
  2. You start a webdriver for every response.
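
As an illustration of the first point, the same data can be pulled with Scrapy's selectors straight from the downloaded response, with no browser in the loop. A minimal sketch, reusing the CSS classes from the question's code (that they still match the live page is an assumption):

# minimal sketch: parse with Scrapy selectors instead of Selenium;
# the CSS classes come from the question's code
def parse_articles(self, response):
    for product in response.css('ul#lpBloc li div.prdtBloc'):
        # ::text extracts the text node, like element.text in Selenium
        yield {'category': product.css('strong.prdtBCat::text').get()}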

This can be solved nicely by using Scrapy's Downloader middlewares. You want to create a custom downloader middleware that downloads requests with Selenium rather than with the Scrapy downloader.

For example, I use this:

# middlewares.py
from scrapy.http import HtmlResponse
from selenium import webdriver


class SeleniumDownloader(object):
    def create_driver(self):
        """only start the driver if middleware is ever called"""
        if not getattr(self, 'driver', None):
            self.driver = webdriver.Chrome()

    def process_request(self, request, spider):
        # this is called for every request, but we don't want to render
        # every request in selenium, so use meta key for those we do want.
        if not request.meta.get('selenium', False):
            # returning None hands the request back to the regular downloader
            return None
        self.create_driver()
        self.driver.get(request.url)
        return HtmlResponse(request.url, body=self.driver.page_source, encoding='utf-8')
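
One thing this snippet leaves out is shutting the browser down when the crawl ends. A minimal sketch of how that could be added, using Scrapy's standard from_crawler / spider_closed signal pattern (this cleanup hook is an addition for illustration, not part of the original answer):

# hypothetical cleanup additions to the middleware above: quit the
# driver when the spider closes, via Scrapy's signal machinery
from scrapy import signals

class SeleniumDownloader(object):
    @classmethod
    def from_crawler(cls, crawler):
        middleware = cls()
        crawler.signals.connect(middleware.spider_closed, signal=signals.spider_closed)
        return middleware

    def spider_closed(self, spider):
        # only quit if the driver was ever started
        if getattr(self, 'driver', None):
            self.driver.quit()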

Activate your middleware:

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.SeleniumDownloader': 13,
}

Then, in your spider, you can mark which URLs should be downloaded through the Selenium driver by adding the meta argument:

# you can start with selenium
def start_requests(self):
    for url in self.start_urls:
        yield scrapy.Request(url, meta={'selenium': True})

def parse(self, response):
    # this response is rendered by selenium!
    # requests that omit the meta key skip selenium entirely
    url = response.xpath("//a/@href").get()
    yield scrapy.Request(response.urljoin(url))

The advantage of this approach is that your driver is started only once and is used only to download the page source; the rest is left to the proper asynchronous scraping tooling.
The downside is that you cannot click buttons on the page, since you are never exposed to the driver. Most of the time, though, you can reverse engineer what a button does via the browser's network inspector, so you should not need to do any clicking with the driver at all, as the sketch below shows.
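
For instance, a "next page" button often just points at the next page's URL, which a plain Scrapy request can follow without any clicking. A minimal sketch reusing the question's .jsNxtPage.pgNext selector (that the element exposes its target as an href is an assumption; if it fires an XHR instead, the network inspector will show the real request to replicate):

# hypothetical sketch: follow pagination without Selenium clicks,
# assuming the "next" element carries its target URL in an href
def parse(self, response):
    for product in response.css('div.prdtBloc'):
        yield {'category': product.css('strong.prdtBCat::text').get()}
    next_url = response.css('.jsNxtPage.pgNext::attr(href)').get()
    if next_url:
        yield scrapy.Request(response.urljoin(next_url))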

2020-06-26