I'm using scrapy to crawl a site that seems to append a random value to the query string at the end of every URL. This turns the crawl into a kind of infinite loop.
How can I make scrapy ignore the query-string part of URLs?
Example code:

from urlparse import urlparse

o = urlparse('http://url.something.com/bla.html?querystring=stuff')
url_without_query_string = o.scheme + "://" + o.netloc + o.path
Example output:

Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from urlparse import urlparse
>>> o = urlparse('http://url.something.com/bla.html?querystring=stuff')
>>> url_without_query_string = o.scheme + "://" + o.netloc + o.path
>>> print url_without_query_string
http://url.something.com/bla.html
>>>
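A slightly more robust variant of the same idea, sketched for Python 3 (where urlparse moved to urllib.parse): instead of concatenating scheme, netloc, and path by hand, urlunparse rebuilds the URL from its components with the query (and params/fragment) blanked out. The strip_query helper name is my own, not from scrapy.

```python
# Sketch, assuming Python 3: urlparse now lives in urllib.parse.
from urllib.parse import urlparse, urlunparse

def strip_query(url):
    """Return the URL with its params, query string, and fragment removed."""
    parts = urlparse(url)
    # urlunparse takes a 6-tuple: (scheme, netloc, path, params, query, fragment).
    return urlunparse((parts.scheme, parts.netloc, parts.path, '', '', ''))

print(strip_query('http://url.something.com/bla.html?querystring=stuff'))
# → http://url.something.com/bla.html
```

A helper like this could be applied to each extracted link before yielding a request, so URLs that differ only in their random query value dedupe to the same string. Note that scrapy's companion library w3lib also provides query-cleaning utilities for this purpose.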