Using Scrapy: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
As the title says, I am using Scrapy to extract data from specific pages (one website / one domain). Let me first describe what my code is supposed to do.

The scraper collects data from OTOMOTO (https://otomoto.pl), one of the best-known car-advertisement portals in Poland. It extracts data from individual offer pages.

First, the user enters a car brand and then a car model. The program checks whether such a brand exists (the list of brands was collected earlier with BeautifulSoup; the list of models for that brand is then extracted, also with BeautifulSoup). This part works fine.

In the final version of this code I will also collect some additional parameters/filters, such as a price range or a range of production years, but let's leave that aside for now. This is only about building the link.
If those conditions are met, a URL with the list of offers for the given brand and model is created, like this: https://otomoto.pl/osobowe/BRAND/MODEL (for example: https://otomoto.pl/osobowe/opel/corsa). This is the starting point for collecting the URLs of the individual offers listed there. I only collect results from the first page, because I do not want to put too much load on the server; that is also why there is no need to implement crawling rules in this case.

Then BeautifulSoup finds the URLs that match the offer-page pattern (for example: https://otomoto.pl/oferta/BRAND-MODEL-blah-blah-blah) and adds them to an array. This array variable is set as a global variable.
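To make that link-collection step concrete, here is a simplified sketch of what this part does (the function name collect_offer_urls and the exact filtering are illustrative, not my exact code):

import requests
from bs4 import BeautifulSoup

def collect_offer_urls(brand, model):
    # Listing page for the chosen brand/model; only the first page is fetched.
    listing_url = f"https://otomoto.pl/osobowe/{brand}/{model}"
    soup = BeautifulSoup(requests.get(listing_url).text, "html.parser")

    offer_urls = []
    for link in soup.find_all("a", href=True):
        # Keep only links that match the offer-page pattern.
        if link["href"].startswith("https://otomoto.pl/oferta/"):
            offer_urls.append(link["href"])
    return offer_urls

# e.g. the example from above:
offerUrls = collect_offer_urls("opel", "corsa")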
At this point I ran into a problem. I want to get data from each of these offer pages (mileage, colour, engine power, etc.) - all of these are required to publish an advert, so there is no need to check whether a given value is present on the page. This data should then be assigned to a specific object (a "car"), which is why I thought of Scrapy's Item class. (In the end I want to save everything to a JSON or CSV file, but let's leave that aside for now.)

In short: this spider just collects data from the defined website (according to the user's restrictions), and the offer URLs are stored in a dynamically created array.
What I have done:
import scrapy


class otoMotoCarObjects(scrapy.Item):
    # One Item per car offer; each field holds a value scraped from the offer page.
    url = scrapy.Field()
    offerID = scrapy.Field()
    addDate = scrapy.Field()
    offerType = scrapy.Field()
    brand = scrapy.Field()
    model = scrapy.Field()
    price = scrapy.Field()
    productionYear = scrapy.Field()
    mileage = scrapy.Field()
    fuelType = scrapy.Field()
    power = scrapy.Field()
    cubicCapacity = scrapy.Field()
    gearBox = scrapy.Field()
    driveType = scrapy.Field()
    color = scrapy.Field()
    countryimport = scrapy.Field()
    location = scrapy.Field()
    state = scrapy.Field()
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor
It is worth mentioning that the code in the commented-out section of the spider below (return scrapy.Request ...) does not work either, but I left it in for reference. (Maybe it is fine and I am just invoking it improperly?) I have been using it interchangeably with yield. The spider itself:

class otoMotoCarScraper(CrawlSpider):
    name = 'car'
    allowed_domains = ['otomoto.pl']

    def __init__(self, urls=[], *args, **kwargs):
        super(otoMotoCarScraper, self).__init__(*args, **kwargs)
        self.start_urls = urls

    def start_requests(self):
        for url in self.start_urls:
            yield Request(url, callback=self.parse_items)
        """
        return [scrapy.Request(url=url, callback=self.parse)
                for url in self.start_urls]
        """

    def parse_items(self, response):
        car = otoMotoCarObjects()
        car['url'] = response.url
        print(car['url'])
        car['offerID'] = response.xpath('//div[@id="ad-id"]//text()').extract_first()
        print(car['offerID'])
        car['addDate'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[2]/div[1]/div[2]/div[4]/span[3]/text()').extract()
        car['offerType'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[1]/div/a/text()').extract()
        car['brand'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[3]/div/a/text()').extract()
        car['model'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[4]/div/a/text()').extract()
        car['price'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[2]/div[1]/div[1]/div[2]/div/span[1]/text()').extract()
        car['productionYear'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[5]/div/text()').extract()
        car['mileage'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[6]/div/text()').extract()
        car['fuelType'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[8]/div/a/text()').extract()
        car['power'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[9]/div/text()').extract()
        car['cubicCapacity'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[7]/div/text()').extract()
        car['gearBox'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[10]/div/a/text()').extract()
        car['driveType'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[1]/li[11]/div/a/text()').extract()
        car['color'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[2]/li[3]/div/a/text()').extract()
        car['countryimport'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[2]/li[8]/div/a/text()').extract()
        car['location'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[2]/div[1]/div[2]/div[4]/span[3]/text()').extract()
        car['state'] = response.xpath(
            '/html/body/div[3]/main/div[2]/div[1]/div[2]/div[1]/div[1]/div[3]/div[1]/ul[2]/li[10]/div/a/text()').extract()
        yield car
process = CrawlerProcess()
I launch this scraper in the main function with the following code:

scr.otoMotoCarScraper.process.crawl(scr.otoMotoCarScraper, urls=fun.offerUrls)

(where scr is the imported file in which this scraper is located, and fun is the imported file in which the function is defined and therefore where the offerUrls array is stored).
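For completeness, this is roughly how I understand the whole launch is supposed to be wired up. It is a simplified sketch: the LOG_LEVEL and FEEDS settings and the __main__ guard are illustrative additions (FEEDS would be the eventual JSON export mentioned earlier), not part of my current code.

from scrapy.crawler import CrawlerProcess

import scr   # module that contains otoMotoCarScraper
import fun   # module that builds and stores offerUrls


def main():
    process = CrawlerProcess(settings={
        "LOG_LEVEL": "INFO",                          # keep the log readable
        # "FEEDS": {"cars.json": {"format": "json"}}, # eventual JSON export
    })
    # Pass the collected offer URLs into the spider's __init__ as `urls`.
    process.crawl(scr.otoMotoCarScraper, urls=fun.offerUrls)
    process.start()  # blocks until the crawl is finished


if __name__ == "__main__":
    main()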
The Scrapy log looks like this:
2021-02-13 14:39:29 [scrapy.crawler] INFO: Overridden settings:
{}
2021-02-13 14:39:29 [scrapy.extensions.telnet] INFO: Telnet Password: 85e67e930e79de1f
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats','scrapy.extensions.telnet.TelnetConsole','scrapy.extensions.logstats.LogStats']
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware','scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware','scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware','scrapy.downloadermiddlewares.useragent.UserAgentMiddleware','scrapy.downloadermiddlewares.retry.RetryMiddleware','scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware','scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware','scrapy.downloadermiddlewares.redirect.RedirectMiddleware','scrapy.downloadermiddlewares.cookies.CookiesMiddleware','scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware','scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware','scrapy.spidermiddlewares.offsite.OffsiteMiddleware','scrapy.spidermiddlewares.referer.RefererMiddleware','scrapy.spidermiddlewares.urllength.UrlLengthMiddleware','scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-02-13 14:39:29 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2021-02-13 14:39:29 [scrapy.core.engine] INFO: Spider opened
2021-02-13 14:39:29 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-02-13 14:39:29 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
As you can see, unfortunately it crawled 0 pages and scraped 0 items, as shown by this line:

2021-02-13 14:39:29 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

I tried printing offerUrls: inside fun it is displayed correctly as an array, and I thought that this is the part that passes the parameter to the scraper, but it seems to be fine.

I also tried changing self.start_urls inside __init__, but nothing happened either.

My thoughts:

Maybe self.start_urls is not created correctly?

Or is the problem somewhere in __init__, or in the commented-out return variant?
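A small sketch of the kind of check I have in mind: add a couple of log lines to the spider to confirm that the URLs actually arrive and that start_requests runs. Only the two methods below change, the rest of the spider stays exactly as posted, and the log messages themselves are of course just illustrative.

# (imports as in the spider file above)
class otoMotoCarScraper(CrawlSpider):
    name = 'car'
    allowed_domains = ['otomoto.pl']

    def __init__(self, urls=[], *args, **kwargs):
        super(otoMotoCarScraper, self).__init__(*args, **kwargs)
        self.start_urls = urls
        # If this reports 0, the urls never reach the spider.
        self.logger.info("received %d start urls", len(self.start_urls))

    def start_requests(self):
        # If this line never appears in the log, start_requests is not called.
        self.logger.info("start_requests called with %r", self.start_urls)
        for url in self.start_urls:
            yield Request(url, callback=self.parse_items)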