How to get the URLs that Scrapy failed to scrape?
Yes, it is possible.
Add a failed_urls list to a basic Spider class, and append the URL to it whenever the response status is 404 (this would need to be extended to cover other error statuses as well).

from scrapy import Spider, signals

class MySpider(Spider):
    handle_httpstatus_list = [404]
    name = "myspider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/thisurlexists.html',
        'http://www.example.com/thisurldoesnotexist.html',
        'http://www.example.com/neitherdoesthisone.html'
    ]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.failed_urls = []

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        # Run handle_spider_closed when the spider finishes, so the
        # collected URLs end up in the final stats dump.
        crawler.signals.connect(spider.handle_spider_closed, signals.spider_closed)
        return spider

    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(self, reason):
        self.crawler.stats.set_value('failed_urls', ', '.join(self.failed_urls))

    # process_exception is really a downloader-middleware hook; it is shown
    # here as in the original answer and counts download errors (e.g. DNS
    # lookup failures) by exception type.
    def process_exception(self, response, exception, spider):
        ex_class = "%s.%s" % (exception.__class__.__module__, exception.__class__.__name__)
        self.crawler.stats.inc_value('downloader/exception_count', spider=spider)
        self.crawler.stats.inc_value('downloader/exception_type_count/%s' % ex_class, spider=spider)
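Since the bookkeeping itself is plain Python, it can be exercised without running a crawl. The sketch below (all names hypothetical) stands in for Scrapy's stats collector to show exactly what parse and handle_spider_closed record:

```python
class FakeStats:
    """Minimal stand-in for Scrapy's stats collector (illustrative only)."""

    def __init__(self):
        self.values = {}

    def inc_value(self, key):
        self.values[key] = self.values.get(key, 0) + 1

    def set_value(self, key, value):
        self.values[key] = value


def record_response(stats, failed_urls, url, status):
    # Mirrors parse() above: count and remember 404 responses.
    if status == 404:
        stats.inc_value('failed_url_count')
        failed_urls.append(url)


def close_spider(stats, failed_urls):
    # Mirrors handle_spider_closed() above: publish the joined URL list.
    stats.set_value('failed_urls', ', '.join(failed_urls))
```

Feeding in the three example start URLs with one 200 and two 404 responses reproduces the failed_url_count and failed_urls entries seen in the stats dump below.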
Example output (note that the downloader/exception_count* stats only appear when exceptions are actually thrown; I simulated them by trying to run the spider after switching off my wireless adapter):
2012-12-10 11:15:26+0000 [myspider] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 15,
 'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 15,
 'downloader/request_bytes': 717,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 15209,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 2,
 'failed_url_count': 2,
 'failed_urls': 'http://www.example.com/thisurldoesnotexist.html, http://www.example.com/neitherdoesthisone.html',
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 874000),
 'log_count/DEBUG': 9,
 'log_count/ERROR': 2,
 'log_count/INFO': 4,
 'response_received_count': 3,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'spider_exceptions/NameError': 2,
 'start_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 560000)}
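The downloader/exception_type_count/* keys in the dump above are built from the exception's module and class name, exactly as in process_exception. A small helper (the function name is hypothetical) makes the key format explicit:

```python
def exception_stat_key(exception):
    # Mirrors the "%s.%s" formatting used in process_exception:
    # module path, a dot, then the class name of the raised exception.
    ex_class = "%s.%s" % (exception.__class__.__module__,
                          exception.__class__.__name__)
    return 'downloader/exception_type_count/%s' % ex_class
```

For a twisted.internet.error.DNSLookupError instance this yields the 'downloader/exception_type_count/twisted.internet.error.DNSLookupError' key shown above.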
I'm new to Scrapy, and I know it's an amazing crawler framework!
In my project I sent more than 90,000 requests, but some of them failed. I set the log level to INFO, and I can see some statistics but no details.
2012-12-05 21:03:04+0800 [pd_spider] INFO: Dumping spider stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/twisted.internet.error.ConnectionDone': 1,
 'downloader/request_bytes': 46282582,
 'downloader/request_count': 92383,
 'downloader/request_method_count/GET': 92383,
 'downloader/response_bytes': 123766459,
 'downloader/response_count': 92382,
 'downloader/response_status_count/200': 92382,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2012, 12, 5, 13, 3, 4, 836000),
 'item_scraped_count': 46191,
 'request_depth_max': 1,
 'scheduler/memory_enqueued': 92383,
 'start_time': datetime.datetime(2012, 12, 5, 12, 23, 25, 427000)}
Is there any way to get a more detailed report, for example to display those failed URLs? Thanks!