Introduction
The Request class represents an HTTP request and is central to any crawler. A Request is normally created in a Spider and then executed by the Downloader. Request also has a subclass, FormRequest, used for POST requests.

Typical usage in a Spider:
```python
yield scrapy.Request(url='http://zarten.com')
```
Class attributes and methods:

Request

```
class scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback, flags])
```

Parameters:

- url: the URL to request (required).
- callback: the function that handles the response; if not given, the spider's parse() method is used.
- method: the HTTP method, 'GET' by default.
- headers: the request headers, as a dict.
- body: the request body.
- cookies: the request cookies, given either as a dict or as a list of dicts.
- meta: a dict of arbitrary metadata attached to the request, readable as response.meta in the callback.
- encoding: the encoding of the request, 'utf-8' by default.
- priority: scheduling priority; requests with higher values are scheduled earlier (default 0).
- dont_filter: if True, the request is not filtered out by the duplicate filter.
- errback: the function called when an exception occurs while processing the request.
- flags: flags sent to the request, mainly used for logging.

The cookies parameter accepts two forms. Dict form:
```python
cookies = {'name1': 'value1', 'name2': 'value2'}
```
List form, which allows setting a domain and path per cookie:
```python
cookies = [
    {'name': 'Zarten', 'value': 'my name is Zarten', 'domain': 'example.com', 'path': '/currency'},
]
```
An example of errback usage, adapted from the Scrapy documentation:

```python
import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError, TCPTimedOutError

class ToScrapeCSSSpider(scrapy.Spider):
    name = "toscrape-css"

    start_urls = [
        "http://www.httpbin.org/",              # HTTP 200 expected
        "http://www.httpbin.org/status/404",    # Not found error
        "http://www.httpbin.org/status/500",    # server issue
        "http://www.httpbin.org:12345/",        # non-responding host, timeout expected
        "http://www.httphttpbinbin.org/",       # DNS error expected
    ]

    def start_requests(self):
        for u in self.start_urls:
            yield scrapy.Request(u, callback=self.parse_httpbin,
                                 errback=self.errback_httpbin,
                                 dont_filter=True)

    def parse_httpbin(self, response):
        self.logger.info('Got successful response from {}'.format(response.url))
        # do something useful here...

    def errback_httpbin(self, failure):
        # log all failures
        self.logger.info(repr(failure))

        # in case you want to do something special for some errors,
        # you may need the failure's type:
        if failure.check(HttpError):
            # these exceptions come from HttpError spider middleware
            # you can get the non-200 response
            response = failure.value.response
            self.logger.info('HttpError on %s', response.url)
        elif failure.check(DNSLookupError):
            # this is the original request
            request = failure.request
            self.logger.info('DNSLookupError on %s', request.url)
        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.info('TimeoutError on %s', request.url)
```
The meta parameter attaches user-defined metadata to a request:

```python
yield scrapy.Request(url='http://zarten.com', meta={'name': 'Zarten'})
```
The callback then reads it back from the response:

```python
my_name = response.meta['name']
```
Scrapy also reserves some built-in special meta keys, which are very useful; the ones covered below are proxy, handle_httpstatus_list, cookiejar, and download_latency.

The proxy key sets an HTTP or HTTPS proxy for the request:

```python
request.meta['proxy'] = 'https://' + 'ip:port'
```
The handle_httpstatus_list key lets responses with the listed status codes reach the callback; by default Scrapy only delivers responses in the 200 range:

```python
yield scrapy.Request(url='https://httpbin.org/get/zarten',
                     meta={'handle_httpstatus_list': [404]})
```

The 404 error can then be handled in the parse callback:

```python
def parse(self, response):
    print('Response body:', response.text)
```
The cookiejar key tracks multiple independent cookie sessions within a single spider. Note that it is not "sticky": it must be passed along explicitly on every follow-up request:

```python
def start_requests(self):
    urls = ['http://quotes.toscrape.com/page/1',
            'http://quotes.toscrape.com/page/3',
            'http://quotes.toscrape.com/page/5',
            ]
    for i, url in enumerate(urls):
        yield scrapy.Request(url=url, meta={'cookiejar': i})

def parse(self, response):
    next_page_url = response.css("li.next > a::attr(href)").extract_first()
    if next_page_url is not None:
        yield scrapy.Request(response.urljoin(next_page_url),
                             meta={'cookiejar': response.meta['cookiejar']},
                             callback=self.parse_next)

def parse_next(self, response):
    print('cookiejar:', response.meta['cookiejar'])
```
The headers parameter sets per-request headers, for example a browser User-Agent:

```python
def start_requests(self):
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }
    yield scrapy.Request(url='https://www.amazon.com', headers=headers)
```

The download_latency key records how long the response took to download, in seconds:

```python
def parse(self, response):
    print('Download latency:', response.meta['download_latency'])
```
FormRequest
FormRequest is a subclass of Request, used for POST requests.

It adds one new parameter, formdata; its other parameters are the same as Request's, described above.

Typical usage:

```python
yield scrapy.FormRequest(url="http://www.example.com/post/action",
                         formdata={'name': 'Zarten', 'age': '27'},
                         callback=self.after_post)
```
Article title: The Scrapy Web Crawler Framework in Detail: Request