
How to solve the Scrapy start_urls problem?

If you have run into the Scrapy start_urls problem during development, the answer below, based on everyday development experience, should help you solve it or at least point you in the right direction.

The start_urls class attribute contains the start URLs and nothing more. If you want to scrape URLs extracted from other pages, yield the corresponding Requests from the parse callback, each with [another] callback of its own:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class MySpider(BaseSpider):

    name = 'my_spider'
    start_urls = [
        'http://www.domain.com/',
    ]
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse the main page and extract category links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse a category page and extract links to the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...
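
For context: start_urls is consumed exactly once, at startup. In these old Scrapy versions the spider's default start_requests() turns each entry into an initial Request whose callback defaults to parse, roughly like the sketch below (a simplification; the real implementation also sets dont_filter=True so the start pages bypass the duplicate filter). Nothing ever re-reads start_urls later, which is why follow-up pages must be requested explicitly.

    def start_requests(self):
        # Rough sketch of the default behaviour: one initial Request
        # per start URL, handled by self.parse.
        for url in self.start_urls:
            yield Request(url, callback=self.parse)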

Solution

from scrapy.spider import Spider
from scrapy.selector import Selector

from dirbot.items import Website

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        """
        The lines below are a spider contract. For more info see:
        http://doc.scrapy.org/en/latest/topics/contracts.html

        @url http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/
        @scrapes name
        """
        sel = Selector(response)
        sites = sel.xpath('//ul[@class="directory-url"]/li')
        items = []

        for site in sites:
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
            items.append(item)

        return items

But why does it only scrape these two pages? I can see that allowed_domains = ["dmoz.org"], yet the two pages also contain links to other pages within the dmoz.org domain! Why doesn't it crawl those as well?
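
The reason is the same as in the answer above: this parse only returns items and never yields new Requests, so the crawl ends after the two start pages. allowed_domains only filters requests that are yielded; it does not generate any. Below is a minimal sketch of a fix, keeping the same old Selector-based API; it assumes you add import urlparse and from scrapy.http import Request at the top of the file, and following every in-domain link back into parse is just an illustrative choice.

    def parse(self, response):
        sel = Selector(response)
        # Emit the items found on this page, as before.
        for site in sel.xpath('//ul[@class="directory-url"]/li'):
            item = Website()
            item['name'] = site.xpath('a/text()').extract()
            item['url'] = site.xpath('a/@href').extract()
            item['description'] = site.xpath('text()').re('-\s[^\n]*\\r')
            yield item
        # Also follow links found on this page; requests outside
        # allowed_domains are dropped by Scrapy's offsite middleware.
        for href in sel.xpath('//a/@href').extract():
            yield Request(urlparse.urljoin(response.url, href),
                          callback=self.parse)

For a link-following crawl like this, Scrapy's CrawlSpider with crawling rules is usually the more idiomatic tool; the point here is simply that any request beyond start_urls has to be yielded explicitly.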
