  • Python tutorial: Scrapy in practice, a Sina.com categorized news crawler (2)

        # (continuation of parse() from the previous part; b_filename and item
        # were set up there)
        with open(b_filename, 'a+') as b:
            b.write(item['second_filename'] + '\t' + item['second_title'] + '\t' + item['second_urls'] + '\n')

        # Send a Request for each sub-category URL; the Response, together with
        # the meta data, is handed to the second_parse callback
        yield scrapy.Request(url=item['second_urls'], meta={'meta_1': copy.deepcopy(item)},
                             callback=self.second_parse)

    def second_parse(self, response):
        item = response.meta['meta_1']
        third_urls = response.xpath('//a/@href').extract()

        for i in range(0, len(third_urls)):
            # Keep only links that start with the big-category URL and end with 'shtml'
            if_belong = third_urls[i].startswith(item['first_urls']) and third_urls[i].endswith('shtml')
            if if_belong:
                item['third_urls'] = third_urls[i]
                yield scrapy.Request(url=item['third_urls'], meta={'meta_2': copy.deepcopy(item)},
                                     callback=self.detail_parse)

        b_filename = r"/Users/jvf/Downloads/数据分析/练习/0715-新浪网导航/DATA/222.txt"
        with open(b_filename, 'a+') as b:
            b.write(item['second_filename'] + '\t' + item['second_title'] + '\t' + item['second_urls'] + '\n')

    def detail_parse(self, response):
        item = response.meta['meta_2']

        # Extract the headline
        head = response.xpath("//li[@class='item']//a/text() | //title/text()").extract()[0]
        # The extracted body text comes back as a list of paragraph strings
        content = ""
        content_list = response.xpath('//div[@id="artibody"]/p/text()').extract()
        for i in content_list:
            content += i
        content = content.replace('\u3000', '')  # strip full-width spaces

        item['head'] = head
        item['content'] = content

        yield item

        b_filename = r"/Users/jvf/Downloads/数据分析/练习/0715-新浪网导航/DATA/333.txt"
        with open(b_filename, 'a+') as b:
            b.write(item['second_filename'] + '\t' + item['second_title'] + '\t' + item['second_urls'] + '\n')
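A note on why the spider wraps the item in copy.deepcopy() before putting it into meta: Scrapy schedules requests asynchronously, so every callback must receive its own snapshot of the item rather than a reference to the one dict that the loop keeps mutating. A minimal standalone sketch of the effect (the dict contents here are made up for illustration):

import copy

# The loop mutates a single shared dict, exactly like second_parse
# mutates item['third_urls'] on every iteration.
item = {'second_title': 'news', 'third_urls': ''}

snapshots = []
for url in ['http://news.sina.com.cn/a.shtml', 'http://news.sina.com.cn/b.shtml']:
    item['third_urls'] = url
    snapshots.append(copy.deepcopy(item))  # deepcopy freezes the current state

# Each snapshot keeps the url it was created with; appending the bare
# `item` instead would leave both entries pointing at the last url.
print([s['third_urls'] for s in snapshots])
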

