Scrapy: pagination runs correctly, but the content scraped from the next pages is the same as the first page.
I am having trouble scraping data from the next pages: the spider walks through the pages correctly, but the data it scrapes from each of them is identical to the first page.
The same behaviour can be observed in the scrapy shell.
I am new to Scrapy; the code is given below. Thanks in advance for your help.
import scrapy

class MostactiveSpider(scrapy.Spider):
    name = 'mostactive'
    allowed_domains = ['finance.yahoo.com']

    # This function is used for the start URL.
    def start_requests(self):
        urls = ['https://finance.yahoo.com/most-active']
        for url in urls:
            print(url)
            yield scrapy.Request(url=url, callback=self.get_pages)

    # The function below is used for page navigation.
    def get_pages(self, response):
        count = str(response.xpath('//*[@id="fin-scr-res-
table"]/div[1]/div[1]/span[2]/span').css('::text').extract())
        print('########## this is count ' + count)
        print(int(count.split()[-2]))
        total_results = int(count.split()[-2])
        total_offsets = total_results // 25 + 1
        print('######### This is total offsets %s ' % total_offsets)
        offset_list = [i * 25 for i in range(total_offsets)]
        print(' ####### This is offset list %s ' % offset_list)
        for offset in offset_list:
            print(' ####### This is offset list in the for loop %s ' % offset)
            yield scrapy.Request(url=f'https://finance.yahoo.com/most-active?count=25&offset=
{offset}', callback=self.get_stocks)
            print(f'https://finance.yahoo.com/most-active?count=25&offset={offset}')

    # The function below scrapes the tickers from each page.
    def get_stocks(self, response):
        stocks = response.xpath('//*[@id="scr-res-
table"]/div[1]/table/tbody//tr/td[1]/a').css('::text').extract()
        print('get_stocks visited; stocks on this page are %s ' % stocks)
        for stock in stocks:
            yield scrapy.Request(url=f'https://finance.yahoo.com/quote/{stock}?p={stock}', callback=self.parse)
            print(f'https://finance.yahoo.com/quote/{stock}?p={stock}')

    # The function below scrapes the content on the final (quote) page.
    def parse(self, response):
        yield {
            'Price': response.xpath('//*[@id="quote-header-
info"]/div[3]/div[1]/div/span[1]').css('::text').extract_first(),
            'Change': response.xpath('//*[@id="quote-header-
info"]/div[3]/div[1]/div/span[2]').css('::text').extract_first(),
            'Ticker': response.xpath('//*[@id="quote-header-
info"]/div[2]/div[1]/div[1]/h1').css('::text').extract_first()
        }
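For reference, the offset arithmetic in get_pages can be checked in isolation with plain Python (no Scrapy needed). The count string below is a hypothetical example of what the XPath extracts from the page header:

```python
# Hypothetical count string, of the form the spider extracts from the page.
count = "1-25 of 252 results"

total_results = int(count.split()[-2])            # 252
total_offsets = total_results // 25 + 1           # 11 pages of up to 25 rows
offset_list = [i * 25 for i in range(total_offsets)]
print(offset_list)                                # 0, 25, 50, ... up to 250
```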
Thanks,
Your code contains very strange line breaks. After fixing the line breaks here, everything works:
for offset in offset_list:
    print(' ####### This is offset list in the for loop %s ' % offset)
    yield scrapy.Request(url=f'https://finance.yahoo.com/most-active?count=25&offset={offset}', callback=self.get_stocks)
    print(f'https://finance.yahoo.com/most-active?count=25&offset={offset}')
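Why this matters: if the URL string is wrapped mid-literal, the line break ends up inside the URL itself, so the offset query parameter is no longer a clean number, which would explain why every request came back looking like the first page. A minimal sketch of the difference, in plain Python with no Scrapy required (the concatenation stands in for the broken wrap):

```python
offset = 25

# Correct: the whole URL is one unbroken literal.
fixed = f'https://finance.yahoo.com/most-active?count=25&offset={offset}'

# Broken: a line break inside the literal becomes part of the URL,
# mangling the offset parameter (simulated here via concatenation).
broken = 'https://finance.yahoo.com/most-active?count=25&offset=\n' + f'{offset}'

print(repr(fixed))
print(repr(broken))   # note the embedded '\n' before the offset value
```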