小能豆

Scrapy deploy has stopped working


I am trying to deploy a scrapy project with scrapyd, but it gives an error…

sudo scrapy deploy default -p eScraper
Building egg of eScraper-1371463750
'build/scripts-2.7' does not exist -- can't clean it
zip_safe flag not set; analyzing archive contents...
eScraperInterface.settings: module references __file__
eScraper.settings: module references __file__
Deploying eScraper-1371463750 to http://localhost:6800/addversion.json
Server response (200):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 18, in render
    return JsonResource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/txweb.py", line 10, in render
    r = resource.Resource.render(self, txrequest)
  File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
    return m(request)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 66, in render_POST
    spiders = get_spider_list(project)
  File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 65, in get_spider_list
    raise RuntimeError(msg.splitlines()[-1])
RuntimeError: OSError: [Errno 20] Not a directory: '/tmp/eScraper-1371463750-Lm8HLh.egg/images'

I used to be able to deploy the project without problems, but now it fails. However, running the spider directly with scrapy crawl spiderName works fine… Can anyone help me?


2024-12-24

1 Answer

小能豆

Try these two things (a cleanup sketch follows below):

1. You may have deployed too many versions; try deleting some of the old ones.
2. Before deploying, delete the build folder and the setup file.
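As a minimal sketch of both suggestions, assuming scrapyd is listening on http://localhost:6800 and the project name is eScraper as in the question (OLD_VERSION is a placeholder for whatever listversions.json reports):

# 1. list the versions currently deployed for the project
curl "http://localhost:6800/listversions.json?project=eScraper"

# delete one of the old versions (replace OLD_VERSION with a value from the list above)
curl -d project=eScraper -d version=OLD_VERSION http://localhost:6800/delversion.json

# 2. from the project directory, remove the build folder and setup file, then redeploy
# (scrapy deploy regenerates a default setup.py if it is missing)
rm -rf build/ setup.py *.egg-info
sudo scrapy deploy default -p eScraper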

As far as running the crawler goes, scrapyd will return an "OK" response along with a job id even if you schedule a spider with an arbitrary name that has not been deployed at all.
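For instance, scheduling a run through scrapyd's schedule.json endpoint returns something like {"status": "ok", "jobid": "..."} even for a made-up spider name ("whatever" below is purely illustrative); any failure only shows up later in the job's log, so a successful schedule response does not prove the deployment is healthy:

# schedule a crawl; the spider name is not validated at this point
curl -d project=eScraper -d spider=whatever http://localhost:6800/schedule.json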

2024-12-24