Setting the log level in Scrapy
When you run scrapy crawl xxx, the terminal is usually flooded with output, for example:
2019-07-11 17:16:41 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: hsh_research)
2019-07-11 17:16:41 [scrapy.utils.log] INFO: Versions: lxml 4.2.2.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.1, Python 3.6.5 (v3.6.5:f59c0932b4, Mar 28 2018, 17:00:18) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c 28 May 2019), cryptography 2.7, Platform Windows-7-6.1.7601-SP1
2019-07-11 17:16:41 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'hsh_research', 'NEWSPIDER_MODULE': 'hsh_research.spiders', 'SPIDER_MODULES': ['hsh_research.spiders']}
2019-07-11 17:16:41 [scrapy.extensions.telnet] INFO: Telnet Password: 1c151e3b9cbebcb7
2019-07-11 17:16:41 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-07-11 17:16:42 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-07-11 17:16:42 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-07-11 17:16:42 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-07-11 17:16:42 [scrapy.core.engine] INFO: Spider opened
2019-07-11 17:16:42 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-07-11 17:16:42 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-07-11 17:16:42 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://chem.chem99.com/news/s221.html> (referer: None)
2019-07-11 17:16:43 [scrapy.core.engine] INFO: Closing spider (finished)
2019-07-11 17:16:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 228,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 11125,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 7, 11, 9, 16, 43, 19215),
'log_count/DEBUG': 1,
'log_count/INFO': 9,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2019, 7, 11, 9, 16, 42, 814189)}
2019-07-11 17:16:43 [scrapy.core.engine] INFO: Spider closed (finished)
[Finished in 2.6s]
With all of that scrolling by, it is very inconvenient to find the output of your own print calls.
In fact, besides letting you launch a crawl from a small run script (a sketch follows below), Scrapy also lets you control the log level.
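A minimal run-script sketch, assuming a spider named xxx (a placeholder). scrapy.cmdline.execute is part of Scrapy itself and behaves just like typing the command in a terminal; save the file next to scrapy.cfg and run it with python run.py:

# run.py -- put this next to scrapy.cfg in the project root
from scrapy import cmdline

# equivalent to running "scrapy crawl xxx" in the terminal;
# replace "xxx" with your spider's name
cmdline.execute("scrapy crawl xxx".split())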
To set the log level, add the following line to settings.py:
LOG_LEVEL = "WARNING"
This suppresses every message below the WARNING level, so all of the INFO and DEBUG lines above disappear. The valid levels, from most to least severe, are CRITICAL, ERROR, WARNING, INFO and DEBUG; Scrapy's default is DEBUG, which is why everything gets printed.
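Your own messages still come through as long as you log them at WARNING or above. A minimal sketch of that pattern, with a hypothetical spider (the name is a placeholder; the URL is the one from the log above); self.logger is the logger Scrapy attaches to every Spider subclass:

import scrapy

class DemoSpider(scrapy.Spider):
    # hypothetical spider: name and start_urls are placeholders
    name = "demo"
    start_urls = ["http://chem.chem99.com/news/s221.html"]

    def parse(self, response):
        # with LOG_LEVEL = "WARNING" this line is still shown,
        # while Scrapy's own INFO/DEBUG chatter is hidden
        self.logger.warning("crawled %s with status %s", response.url, response.status)

If you would rather keep the full log but move it out of the terminal, the LOG_FILE = "scrapy.log" setting writes it to a file instead.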