I'm trying to scrape a series of webpages, but I'm hitting a glitch: sometimes the site doesn't seem to send the HTML response correctly, which leaves blank rows in the CSV output file. How can I retry the request and the parsing up to n times when an XPath selector on the response comes back empty? Note that I'm not getting any HTTP errors.
You can do this with a custom retry middleware; you just need to override the process_response method of the current RetryMiddleware:
from scrapy.downloadermiddlewares.retry import RetryMiddleware
from scrapy.utils.response import response_status_message


class CustomRetryMiddleware(RetryMiddleware):

    def process_response(self, request, response, spider):
        if request.meta.get('dont_retry', False):
            return response
        if response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response

        # this is your check: retry when the spider's retry_xpath
        # matches nothing on an otherwise successful response
        if response.status == 200 and not response.xpath(spider.retry_xpath):
            reason = 'xpath "{}" returned nothing'.format(spider.retry_xpath)
            return self._retry(request, reason, spider) or response
        return response
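For context, _retry (inherited from RetryMiddleware) tracks the attempt count in request.meta and gives up after the configured maximum, at which point the "or response" fallback above hands the bad response through unchanged. A rough, Scrapy-free sketch of that bookkeeping (the function name and defaults here are illustrative, not Scrapy's actual API):

```python
def retry_bookkeeping(meta, max_retries=2):
    """Return the meta dict for the retried request, or None when exhausted."""
    retries = meta.get('retry_times', 0) + 1
    if retries > max_retries:
        return None  # give up; the middleware then returns the response as-is
    new_meta = dict(meta)
    new_meta['retry_times'] = retries  # the retried request carries the count
    return new_meta

print(retry_bookkeeping({}))                  # first retry: {'retry_times': 1}
print(retry_bookkeeping({'retry_times': 2}))  # exhausted: None
```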
Then enable it in settings.py in place of the default RetryMiddleware:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': None,
    'myproject.middlewarefilepath.CustomRetryMiddleware': 550,
}
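Because the custom middleware inherits from RetryMiddleware, it also honors Scrapy's standard retry settings, so the "n times" from the question is controlled by RETRY_TIMES in settings.py (the value below is just an illustration; Scrapy's default is 2):

```python
# settings.py -- the custom middleware reuses RetryMiddleware's settings
RETRY_TIMES = 5  # retry each failing request up to 5 times
```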
Now you have a middleware that you can configure from inside your Spider: use the retry_xpath attribute to set the xpath that triggers a retry:
class MySpider(Spider):
    name = "myspidername"

    retry_xpath = '//h2[@class="tadasdop-cat"]'
    ...
This won't necessarily retry when a field of your item is empty, but you can point retry_xpath at the same path as that field to make it work.