Scrapy Web Crawling Framework in Depth: Request
Introduction
The Request class represents an HTTP request, and it is one of the most important classes for a crawler. A Request is usually created in a Spider and executed by the Downloader. Request also has a subclass, FormRequest, which is used for POST requests.
Typical usage in a Spider:
```python
yield scrapy.Request(url='http://zarten.com')  # the URL must include a scheme such as http://
```
Its class attributes and methods are:
- url
- method
- headers
- body
- meta
- copy()
- replace([url, method, headers, body, cookies, meta, encoding, dont_filter, callback, errback])
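A minimal sketch of these attributes and methods in action (the URL and meta values are placeholders):

```python
import scrapy

request = scrapy.Request(url='http://example.com', meta={'name': 'Zarten'})

print(request.url)     # 'http://example.com'
print(request.method)  # 'GET'
print(request.meta)    # {'name': 'Zarten'}

# copy() returns an identical copy of the request; replace() returns
# a copy with the given attributes overridden
same_request = request.copy()
post_request = request.replace(method='POST', dont_filter=True)
```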
Request
```
class scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback, flags])
```
Parameter descriptions:
- url: the URL to request
- callback: the callback function that receives the response to this request; if not specified, parse() is used by default
- method: the HTTP method, 'GET' by default; it rarely needs to be set explicitly. For POST requests, use FormRequest instead
- headers: the request headers; these are usually configured in settings, and can also be set in middlewares
- body: str, the request body; it rarely needs to be set (both GET and POST can in fact pass parameters through the body, but this is uncommon)
- cookies: dict or list, the cookies to send with the request. Dict form (name/value pairs):

```python
cookies = {'name1': 'value1', 'name2': 'value2'}
```

List form:

```python
cookies = [
    {'name': 'Zarten', 'value': 'my name is Zarten', 'domain': 'example.com', 'path': '/currency'},
]
```
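As a minimal sketch, passing the dict form when yielding a request (httpbin.org/cookies is used here only because it echoes back the cookies it receives):

```python
import scrapy

class CookieSpider(scrapy.Spider):
    name = 'cookie_demo'

    def start_requests(self):
        yield scrapy.Request(
            url='http://httpbin.org/cookies',
            cookies={'name1': 'value1', 'name2': 'value2'},
            callback=self.parse,
        )

    def parse(self, response):
        # httpbin echoes the cookies it received in the response body
        self.logger.info(response.text)
```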
- encoding: the encoding of the request, 'utf-8' by default
- priority: int, the priority of the request; higher numbers are scheduled earlier, negative values are allowed, 0 by default
- dont_filter: False by default; if set to True, this request is not filtered out (it is not added to the dedup queue), so the same request can be executed multiple times
- errback: a callback invoked when an error occurs while processing the request, such as a 404 page, a timeout, or a DNS lookup failure; its first parameter is a Twisted Failure instance. For example:
```python
import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError, TCPTimedOutError

class ToScrapeCSSSpider(scrapy.Spider):
    name = "toscrape-css"
    # start_urls = [
    #     'http://quotes.toscrape.com/',
    # ]
    start_urls = [
        "http://www.httpbin.org/",            # HTTP 200 expected
        "http://www.httpbin.org/status/404",  # Not found error
        "http://www.httpbin.org/status/500",  # server issue
        "http://www.httpbin.org:12345/",      # non-responding host, timeout expected
        "http://www.httphttpbinbin.org/",     # DNS error expected
    ]

    def start_requests(self):
        for u in self.start_urls:
            yield scrapy.Request(u, callback=self.parse_httpbin,
                                 errback=self.errback_httpbin,
                                 dont_filter=True)

    def parse_httpbin(self, response):
        self.logger.info('Got successful response from {}'.format(response.url))
        # do something useful here...

    def errback_httpbin(self, failure):
        # log all failures
        self.logger.info(repr(failure))

        # in case you want to do something special for some errors,
        # you may need the failure's type:
        if failure.check(HttpError):
            # these exceptions come from HttpError spider middleware
            # you can get the non-200 response
            response = failure.value.response
            self.logger.info('HttpError on %s', response.url)
        elif failure.check(DNSLookupError):
            # this is the original request
            request = failure.request
            self.logger.info('DNSLookupError on %s', request.url)
        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.info('TimeoutError on %s', request.url)
```
- flags: list, rarely used; flags sent with the request, generally used for logging
- meta: user-defined data to carry from the Request over to the Response; this parameter can also be processed in middlewares

```python
yield scrapy.Request(url='http://zarten.com', meta={'name': 'Zarten'})
```

In the Response:

```python
my_name = response.meta['name']
```
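Putting both halves together, a minimal sketch of passing a value through meta (the URL and key are placeholders):

```python
import scrapy

class MetaSpider(scrapy.Spider):
    name = 'meta_demo'

    def start_requests(self):
        # attach arbitrary data to the request...
        yield scrapy.Request(url='http://example.com',
                             meta={'name': 'Zarten'},
                             callback=self.parse)

    def parse(self, response):
        # ...and read it back from the response
        self.logger.info('name passed through meta: %s', response.meta['name'])
```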
Scrapy also provides a number of special built-in meta keys, which are very useful. They are as follows:

- proxy: sets a proxy for the request, usually configured in middlewares
An HTTP or HTTPS proxy can be set:

```python
request.meta['proxy'] = 'https://ip:port'
```
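For example, a minimal downloader middleware sketch that routes every request through a proxy (the address is a placeholder, and the class still has to be enabled in DOWNLOADER_MIDDLEWARES):

```python
class ProxyMiddleware:
    def process_request(self, request, spider):
        # assign the proxy before the request reaches the downloader
        request.meta['proxy'] = 'https://127.0.0.1:8080'
```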
- download_timeout: the time (in seconds) the downloader waits before timing out; usually configured globally via the DOWNLOAD_TIMEOUT setting, 180 seconds (3 minutes) by default
- max_retry_times: the maximum number of retries (not counting the first download), 2 by default; usually configured globally via the RETRY_TIMES setting
- dont_redirect: if set to True, the request will not be redirected
- dont_retry: if set to True, requests that fail with HTTP connection errors or timeouts will not be retried
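A minimal sketch of combining these per-request keys (the URL and values are placeholders):

```python
import scrapy

request = scrapy.Request(
    url='http://example.com',
    meta={
        'download_timeout': 10,  # give this request up after 10 seconds
        'max_retry_times': 5,    # allow up to 5 retries for this request
        'dont_redirect': True,   # do not follow redirects
    },
)
```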
- handle_httpstatus_list: HTTP status codes in the 2xx range are successful responses; anything outside that range is a failed response. Scrapy filters out such failures by default and does not pass them on for processing, but you can choose to handle particular error codes yourself:

```python
yield scrapy.Request(url='https://httpbin.org/get/zarten', meta={'handle_httpstatus_list': [404]})
```

The 404 response can then be handled in the parse function:

```python
def parse(self, response):
    print('Response body:', response.text)
```

- handle_httpstatus_all: if set to True, the Response is passed through for processing whatever its status code
- dont_merge_cookies: Scrapy automatically stores the cookies a response returns and sends them with its next requests. If you specify custom cookies and want to use them alone rather than merge in the stored ones, set this to True
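One way to use it is to bypass Scrapy's cookie handling entirely for a request and set the Cookie header by hand, as in this minimal sketch (URL and values are placeholders):

```python
import scrapy

request = scrapy.Request(
    url='http://www.example.com',
    headers={'Cookie': 'currency=USD'},  # send exactly this cookie...
    meta={'dont_merge_cookies': True},   # ...without merging stored ones
)
```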
- cookiejar: lets you keep track of multiple cookie sessions within a single spider. It is not sticky: you need to pass it along with every request
```python
def start_requests(self):
    urls = ['http://quotes.toscrape.com/page/1',
            'http://quotes.toscrape.com/page/3',
            'http://quotes.toscrape.com/page/5',
            ]
    for i, url in enumerate(urls):
        yield scrapy.Request(url=url, meta={'cookiejar': i})

def parse(self, response):
    next_page_url = response.css("li.next > a::attr(href)").extract_first()
    if next_page_url is not None:
        # carry the cookiejar key over to the next request
        yield scrapy.Request(response.urljoin(next_page_url),
                             meta={'cookiejar': response.meta['cookiejar']},
                             callback=self.parse_next)

def parse_next(self, response):
    print('cookiejar:', response.meta['cookiejar'])
```
- dont_cache: if set to True, the response will not be cached
- redirect_urls: a read-only key filled in by the redirect middleware; it holds the list of URLs the request passed through when it was redirected
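A minimal sketch of reading it in a callback (the key is only present if the request was actually redirected):

```python
def parse(self, response):
    # the chain of URLs this request was redirected through, if any
    self.logger.info('redirected via: %s', response.meta.get('redirect_urls', []))
```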
- bindaddress: the outgoing IP address to bind to when performing the request
- dont_obey_robotstxt: if set to True, the robots.txt policy is ignored; this is usually configured globally via the ROBOTSTXT_OBEY setting
- download_maxsize: the maximum size (in bytes) the downloader will download; usually configured globally via the DOWNLOAD_MAXSIZE setting, 1073741824 bytes (1024 MB = 1 GB) by default; set it to 0 to disable the maximum download limit. Several of these keys mirror project-wide settings, sketched below.
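A minimal settings.py sketch of the corresponding project-wide settings (the values shown are the defaults or illustrative):

```python
# settings.py
DOWNLOAD_TIMEOUT = 180         # downloader timeout in seconds (the default)
RETRY_TIMES = 2                # retries after the first attempt (the default)
ROBOTSTXT_OBEY = False         # do not obey robots.txt
DOWNLOAD_MAXSIZE = 1073741824  # maximum response size in bytes; 0 disables the limit
```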
- download_latency: read-only; the time (in seconds) it took to fetch the response after the request was started
```python
def start_requests(self):
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }
    yield scrapy.Request(url='https://www.amazon.com', headers=headers)

def parse(self, response):
    print('Download latency:', response.meta['download_latency'])
```
- download_fail_on_dataloss: rarely used; controls whether broken (truncated) responses count as failures, see the DOWNLOAD_FAIL_ON_DATALOSS setting in the Scrapy documentation for details
- referrer_policy: sets the Referrer Policy for this request
FormRequest
The FormRequest class is a subclass of Request, used for POST requests.
It adds one new parameter, formdata; the other parameters are the same as for Request and are described in detail above.
Typical usage:
```python
yield scrapy.FormRequest(url="http://www.example.com/post/action",
                         formdata={'name': 'Zarten', 'age': '27'},
                         callback=self.after_post)
```
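FormRequest also offers a from_response() class method that pre-fills fields from a form found in a response, which is handy for logging in; a minimal sketch (the URL, field names, and after_login callback are assumptions for the example):

```python
import scrapy

class LoginSpider(scrapy.Spider):
    name = 'login_demo'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        # pre-fill the login form found in the page, overriding the credential fields
        yield scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'Zarten', 'password': 'secret'},
            callback=self.after_login,
        )

    def after_login(self, response):
        self.logger.info('logged in, landed on %s', response.url)
```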