(Scrapy framework) Crawling keyword trends from the titles of CSDN's 2021 site-wide hot ranking | A crawler case study
Contents
Preface
Environment setup
Implementation
Create the project
Define the Item entity
Keyword extraction tool
Building the spider
Middleware code
Custom pipeline
settings configuration
Run the main program
Results
Summary
Preface
Following up on my previous post, "How to crawl the titles of CSDN's site-wide hot ranking and count keyword frequencies | crawler case study" (阿良的博客-CSDN博客), I have re-implemented the same crawler on the Scrapy framework. The underlying way of fetching the page source is unchanged; Scrapy just structures the project more systematically. I will point out the pitfalls along the way.
GitHub repository: github本项目地址
Environment setup
Install scrapy:
pip install scrapy -i https://pypi.douban.com/simple
Install selenium:
pip install selenium -i https://pypi.douban.com/simple
Install jieba:
pip install jieba -i https://pypi.douban.com/simple
IDE: PyCharm
Download the Chrome driver that matches your browser: google chrome driver download address
Check your browser version first, then download the corresponding driver version.
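To double-check the match, open chrome://version in the browser and compare it against the driver version (this assumes chromedriver is already on your PATH):
chromedriver --version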
Implementation
Create the project
Let's get started. Create the project with the scrapy command:
scrapy startproject csdn_hot_words
The project structure follows the official layout.
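For reference, a freshly generated project, plus the extra files this article adds by hand (the tools/ package and main.py), looks roughly like this:
csdn_hot_words/
    scrapy.cfg
    main.py
    csdn_hot_words/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        tools/
            analyse_sentence.py
        spiders/
            __init__.py
            csdn.py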
Define the Item entity
Following the earlier logic, the item's main attribute is a dictionary mapping each title's keywords to their occurrence counts. The code:
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class CsdnHotWordsItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    words = scrapy.Field()
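In other words, each item carries a small dict for one title, mapping keyword to count (the values below are hypothetical):
item = CsdnHotWordsItem()
item['words'] = {'java': 1, 'spring': 1, 'redis': 1}  # keyword -> occurrences in this title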
Keyword extraction tool
The keyword extraction helper is built on jieba's TF-IDF tag extraction:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2021/11/5 23:47
# @Author : 至尊宝
# @Site :
# @File : analyse_sentence.py
import jieba.analyse


def get_key_word(sentence):
    result_dic = {}
    # extract the top-3 keywords of the title together with their weights
    words_lis = jieba.analyse.extract_tags(
        sentence, topK=3, withWeight=True, allowPOS=())
    for word, weight in words_lis:
        if word in result_dic:
            result_dic[word] += 1
        else:
            result_dic[word] = 1
    return result_dic
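A quick sanity check of the helper (the sample title is made up, and jieba's actual top-3 picks may differ):
if __name__ == '__main__':
    print(get_key_word('Java多线程与高并发面试题总结'))
    # e.g. {'多线程': 1, '高并发': 1, '面试题': 1}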
Building the spider
The spider is initialized with a browser instance so the page's dynamically loaded content can be rendered.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2021/11/5 23:47
# @Author : 至尊宝
# @Site :
# @File : csdn.py
import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

from csdn_hot_words.items import CsdnHotWordsItem
from csdn_hot_words.tools.analyse_sentence import get_key_word


class CsdnSpider(scrapy.Spider):
    name = 'csdn'
    # allowed_domains = ['blog.csdn.net']
    start_urls = ['https://blog.csdn.net/rank/list']

    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument('--headless')  # headless Chrome, no visible window
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--no-sandbox')
        self.browser = webdriver.Chrome(chrome_options=chrome_options,
                                        executable_path=r"E:\chromedriver_win32\chromedriver.exe")
        self.browser.set_page_load_timeout(30)

    def parse(self, response, **kwargs):
        titles = response.xpath("//div[@class='hosetitem-title']/a/text()")
        for x in titles:
            item = CsdnHotWordsItem()
            item['words'] = get_key_word(x.get())
            yield item
Code notes:
1. Chrome runs in headless mode, so no browser window needs to open; everything executes in the background.
2. You must point executable_path at your local chromedriver binary.
3. In parse, the XPath follows my earlier article: it grabs each title text, runs keyword extraction on it, and builds an item object. (If you are on a newer Selenium, see the sketch below.)
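In Selenium 4 the chrome_options= and executable_path= arguments are removed; assuming Selenium 4+, the equivalent initialization is roughly this (same local driver path as above):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--disable-gpu')
chrome_options.add_argument('--no-sandbox')
service = Service(executable_path=r"E:\chromedriver_win32\chromedriver.exe")
browser = webdriver.Chrome(service=service, options=chrome_options)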
Middleware code
The downloader middleware executes a JS snippet that scrolls the page so every ranking entry gets lazily loaded. Full middleware code:
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
import time

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class CsdnHotWordsSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class CsdnHotWordsDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Scroll down 500px every 500ms so the lazily loaded ranking
        # entries are rendered; stop scrolling after 20 seconds.
        js = '''
            let height = 0
            let interval = setInterval(() => {
                window.scrollTo({
                    top: height,
                    behavior: "smooth"
                });
                height += 500
            }, 500);
            setTimeout(() => {
                clearInterval(interval)
            }, 20000);
        '''
        try:
            spider.browser.get(request.url)
            spider.browser.execute_script(js)
            time.sleep(20)  # wait for the scrolling script to finish
            return HtmlResponse(url=spider.browser.current_url,
                                body=spider.browser.page_source,
                                encoding="utf-8", request=request)
        except TimeoutException as e:
            print('Timeout exception: {}'.format(e))
            spider.browser.execute_script('window.stop()')
        finally:
            # only a single page is crawled here, so the browser window
            # can be closed as soon as the response has been built
            spider.browser.close()

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
Custom pipeline
The pipeline tallies keyword frequencies across all collected items and writes the final count, sorted by frequency, to a file. The code:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class CsdnHotWordsPipeline:

    def __init__(self):
        self.file = open('result.txt', 'w', encoding='utf-8')
        self.all_words = []

    def process_item(self, item, spider):
        # just collect every item; the tally happens when the spider closes
        self.all_words.append(item)
        return item

    def close_spider(self, spider):
        key_word_dic = {}
        for y in self.all_words:
            print(y)
            for k, v in y['words'].items():
                if k.lower() in key_word_dic:
                    key_word_dic[k.lower()] += v
                else:
                    key_word_dic[k.lower()] = v
        # sort by frequency, highest first, and write "word,count" lines
        word_count_sort = sorted(key_word_dic.items(),
                                 key=lambda x: x[1], reverse=True)
        for word in word_count_sort:
            self.file.write('{},{}\n'.format(word[0], word[1]))
        self.file.close()
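Each line of result.txt is just "word,count", so the tally is trivial to load back if you want to post-process it, for example:
with open('result.txt', encoding='utf-8') as f:
    for line in f:
        word, count = line.strip().rsplit(',', 1)
        print(word, int(count))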
settings configuration
The settings need a few adjustments, as follows:
# Scrapy settings for csdn_hot_words project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'csdn_hot_words'

SPIDER_MODULES = ['csdn_hot_words.spiders']
NEWSPIDER_MODULE = 'csdn_hot_words.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'csdn_hot_words (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 30

# The download delay setting will honor only one of:
# CONCURRENT_REQUESTS_PER_DOMAIN = 16
# CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36'
}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'csdn_hot_words.middlewares.CsdnHotWordsSpiderMiddleware': 543,
}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'csdn_hot_words.middlewares.CsdnHotWordsDownloaderMiddleware': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     'scrapy.extensions.telnet.TelnetConsole': None,
# }

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'csdn_hot_words.pipelines.CsdnHotWordsPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
# HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Run the main program
You could run the crawl through the scrapy command line, but to make the logs easier to follow I added a small main program:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2021/11/5 22:41
# @Author : 至尊宝
# @Site :
# @File : main.py
from scrapy import cmdline
cmdline.execute('scrapy crawl csdn'.split())
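If you'd rather not shell out through cmdline, Scrapy's CrawlerProcess API launches the same crawl; this is standard Scrapy usage rather than anything project-specific:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('csdn')  # the spider name defined on CsdnSpider
process.start()        # blocks until the crawl finishes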
Results
Part of the execution log scrolls by while the crawl runs, and at the end we get the result.txt output.
Summary
See, Java is still the undisputed king ("yyds"). I have no idea why "2021" also ranks so high as a keyword, so I figured I should put 2021 in my own title as well.
Posting the GitHub repository once more: github本项目地址
To be clear, this case study is for research and learning purposes only, not for malicious attacks.