While learning web scraping I ran into an SSL error. I have tried every fix I could find online, with no luck. Strangely, some machines hit this problem and others don't. The error is:

```
requests.exceptions.SSLError: HTTPSConnectionPool(host='xiaohua.zol.com.cn', port=443): Max retries exceeded with url: /new/1.html (Caused by SSLError(SSLError(1, '[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled (_ssl.c:1006)')))
```

My code:
```python
import requests
from lxml import etree


def Parse_url(page_url, header, proxy):
    rep = requests.post(page_url, headers=header, proxies=proxy, verify=False)
    html = rep.text
    parser = etree.HTML(html)
    detail_url_list = parser.xpath('//ul[@class="article-list"]/li[@class="article-summary"]//a[@class="all-read"]/@href')
    for detail_url in detail_url_list:
        detail_url = 'http://xiaohua.zol.com.cn/' + detail_url
        Parse_detail(detail_url, header, proxy)


def Parse_detail(detail_url, header, proxy):
    rep = requests.post(detail_url, headers=header, proxies=proxy, verify=False)
    html = rep.text
    parser = etree.HTML(html)
    joke_title = parser.xpath('//h1/text()')[0]
    joke_content = ''.join(parser.xpath('//div[@class="article-text"]//text()')).strip()
    fp.write(joke_title + '\n' + joke_content + '\n')
    print(f'Joke "{joke_title}" downloaded!')


if __name__ == '__main__':
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36'
    }
    proxy = {
        'http': 'http://58.220.95.54:9400'
    }
    page_url = 'https://xiaohua.zol.com.cn/new/1.html'
    # fp was undefined in the original; opened here (filename is arbitrary)
    fp = open('jokes.txt', 'w', encoding='utf-8')
    Parse_url(page_url, header, proxy)
    fp.close()
```
The error `requests.exceptions.SSLError: HTTPSConnectionPool... unsafe legacy renegotiation disabled` occurs because recent versions of OpenSSL (3.0 and later) disable unsafe legacy SSL/TLS renegotiation by default. The behavior varies from machine to machine because they may be running different versions of the operating system, Python, and OpenSSL.
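Since the root cause is the client refusing legacy (pre-RFC 5746) renegotiation, one commonly cited workaround is to re-enable it on the client's SSL context rather than disabling certificate verification entirely. A minimal sketch using only the standard library (Python 3.12+ names the flag `ssl.OP_LEGACY_SERVER_CONNECT`; on older versions the raw flag value `0x4` is used):

```python
import ssl

# OP_LEGACY_SERVER_CONNECT tells OpenSSL to allow connections to servers
# that only support legacy renegotiation. Fall back to the raw flag value
# 0x4 on Python versions that don't expose the named constant.
LEGACY_RENEGOTIATION = getattr(ssl, "OP_LEGACY_SERVER_CONNECT", 0x4)

# Unlike verify=False, certificate verification stays enabled here.
ctx = ssl.create_default_context()
ctx.options |= LEGACY_RENEGOTIATION
```

To use this context with `requests`, you can subclass `requests.adapters.HTTPAdapter`, override `init_poolmanager` to pass `ssl_context=ctx` through to the pool manager, and mount the adapter on a `Session`; with plain `urllib.request`, pass `context=ctx` to `urlopen`. Either way, certificate checking remains on, which makes this safer than `verify=False`.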
Here are several ways to work around the problem:
Downgrade `requests` and `urllib3` to older versions that do not enforce the newer SSL security policy:
```
pip install requests==2.25.1 urllib3==1.26.5
```
You can also try setting the environment variable `PYTHONHTTPSVERIFY` to `0` to disable SSL certificate verification. This is not recommended, since it makes your connections insecure, and note that it only affects the standard library's HTTPS handling (`requests` controls verification through its own `verify` parameter), but it can be useful for a quick test:
```
export PYTHONHTTPSVERIFY=0
```
Suppress `urllib3`'s insecure-request warnings and send the requests with `verify=False`:
```python
import requests
from lxml import etree
import urllib3

# Silence the InsecureRequestWarning raised for every verify=False request
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def Parse_url(page_url, header, proxy):
    rep = requests.post(page_url, headers=header, proxies=proxy, verify=False)
    html = rep.text
    parser = etree.HTML(html)
    detail_url_list = parser.xpath('//ul[@class="article-list"]/li[@class="article-summary"]//a[@class="all-read"]/@href')
    for detail_url in detail_url_list:
        detail_url = 'http://xiaohua.zol.com.cn/' + detail_url
        Parse_detail(detail_url, header, proxy)


def Parse_detail(detail_url, header, proxy):
    rep = requests.post(detail_url, headers=header, proxies=proxy, verify=False)
    html = rep.text
    parser = etree.HTML(html)
    joke_title = parser.xpath('//h1/text()')[0]
    joke_content = ''.join(parser.xpath('//div[@class="article-text"]//text()')).strip()
    print(f'Joke "{joke_title}" downloaded!')


if __name__ == '__main__':
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36'
    }
    proxy = {
        'http': 'http://58.220.95.54:9400'
    }
    page_url = 'https://xiaohua.zol.com.cn/new/1.html'
    Parse_url(page_url, header, proxy)
```
Make sure your OpenSSL configuration file does not disable legacy SSL renegotiation. This file is usually located at `/etc/ssl/openssl.cnf` or `/usr/local/etc/openssl/openssl.cnf`. Find and adjust the following settings:
```
[system_default_sect]
MinProtocol = None
CipherString = DEFAULT:@SECLEVEL=1
```
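Note that lowering the security level does not by itself re-enable legacy renegotiation. The `openssl.cnf` change usually cited for this specific error is the `UnsafeLegacyRenegotiation` option. A sketch of the full section chain, assuming your file does not already define an `openssl_conf` section:

```ini
openssl_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
```

The `openssl_conf = openssl_init` line must appear near the top of the file, before the first section header; the section names other than `openssl_conf` are conventional and can be anything as long as they chain consistently.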
If you have many requests to send, you can use a `requests.Session` object to manage the session and disable SSL verification once for all of them:
```python
import requests
from lxml import etree
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def Parse_url(session, page_url, header, proxy):
    rep = session.post(page_url, headers=header, proxies=proxy, verify=False)
    html = rep.text
    parser = etree.HTML(html)
    detail_url_list = parser.xpath('//ul[@class="article-list"]/li[@class="article-summary"]//a[@class="all-read"]/@href')
    for detail_url in detail_url_list:
        detail_url = 'http://xiaohua.zol.com.cn/' + detail_url
        Parse_detail(session, detail_url, header, proxy)


def Parse_detail(session, detail_url, header, proxy):
    rep = session.post(detail_url, headers=header, proxies=proxy, verify=False)
    html = rep.text
    parser = etree.HTML(html)
    joke_title = parser.xpath('//h1/text()')[0]
    joke_content = ''.join(parser.xpath('//div[@class="article-text"]//text()')).strip()
    print(f'Joke "{joke_title}" downloaded!')


if __name__ == '__main__':
    header = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36'
    }
    proxy = {
        'http': 'http://58.220.95.54:9400'
    }
    page_url = 'https://xiaohua.zol.com.cn/new/1.html'
    # The session reuses one connection pool across all requests
    with requests.Session() as session:
        Parse_url(session, page_url, header, proxy)
```
One or a combination of the methods above should resolve the SSL error. Keep in mind that disabling SSL verification or downgrading libraries can introduce security risks in production, so use these workarounds with caution.