

Web crawler - Python 3.6 crawler keeps re-crawling the content of the first page


Problem description

The problem is as in the title: the crawler keeps fetching the first page over and over. I tried switching the loop to a while loop and various other changes, but nothing worked. Could anyone point out what is wrong?

# coding:utf-8
# from lxml import etree
import requests
import lxml.html
import os

class MyError(Exception):
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return repr(self.value)

def get_lawyers_info(url):
    r = requests.get(url)
    html = lxml.html.fromstring(r.content)
    # phones = html.xpath("//span[@class='law-tel']")
    phones = html.xpath("//span[@class='phone pull-right']")
    # names = html.xpath("//p[@class='fl']/p/a")
    names = html.xpath("//h4[@class='text-center']")
    if len(phones) == len(names):
        list(zip(names, phones))  # note: this statement has no effect
        phone_infos = [(names[i].text, phones[i].text_content()) for i in range(len(names))]
    else:
        error = 'Lawyers amount are not equal to the amount of phone_nums: ' + url
        raise MyError(error)
    phone_infos_list = []
    for phone_info in phone_infos:
        if phone_info[0] == '':
            info = '沒留姓名' + ': ' + phone_info[1] + '\r\n'  # '沒留姓名' = "no name given"
        else:
            info = phone_info[0] + ': ' + phone_info[1] + '\r\n'
        print(info)
        phone_infos_list.append(info)
    return phone_infos_list

dir_path = os.path.abspath(os.path.dirname(__file__))
print(dir_path)
file_path = os.path.join(dir_path, 'lawyers_info.txt')
print(file_path)
if os.path.exists(file_path):
    os.remove(file_path)

with open('lawyers_info.txt', 'ab') as file:
    for i in range(1000):
        url = 'http://www.xxxx.com/cooperative_merchants?searchText=&industry=100&provinceId=19&cityId=0&areaId=0&page=' + str(i + 1)
        # r = requests.get(url)
        # html = lxml.html.fromstring(r.content)
        # phones = html.xpath("//span[@class='phone pull-right']")
        # names = html.xpath("//h4[@class='text-center']")
        # if phones or names:
        info = get_lawyers_info(url)
        for each in info:
            file.write(each.encode('gbk'))
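A quick way to narrow down why every iteration returns the same list is to request two different page numbers and compare what comes back. The sketch below is only a diagnostic, not part of the original script; it reuses the masked URL pattern from the question (www.xxxx.com) and the same XPath for the names. If both pages yield identical names, the server is ignoring the page parameter (for example because the listing is rendered client-side or requires extra headers), and the loop itself is not the problem.

# coding: utf-8
# Diagnostic sketch (assumption: same masked URL pattern and XPath as in the question).
import requests
import lxml.html

BASE = 'http://www.xxxx.com/cooperative_merchants?searchText=&industry=100&provinceId=19&cityId=0&areaId=0&page='

def names_on_page(page):
    # Fetch one listing page and return the lawyer names found on it.
    r = requests.get(BASE + str(page))
    html = lxml.html.fromstring(r.content)
    return [h.text_content().strip() for h in html.xpath("//h4[@class='text-center']")]

page1 = names_on_page(1)
page2 = names_on_page(2)
print(page1[:3], page2[:3])
print('pages identical?', page1 == page2)  # True suggests the page parameter is being ignored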

Answers

Answer 1:

# coding: utf-8
import requests
from pyquery import PyQuery as Q

url = 'http://www.51myd.com/cooperative_merchants?industry=100&provinceId=19&cityId=0&areaId=0&page='

with open('lawyers_info.txt', 'ab') as f:
    for i in range(1, 5):
        r = requests.get('{}{}'.format(url, i))
        # select names and phone numbers by CSS class
        usernames = Q(r.text).find('.username').text().split()
        phones = Q(r.text).find('.phone').text().split()
        print(list(zip(usernames, phones)))
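The snippet above opens lawyers_info.txt but never writes to it. A minimal sketch that also saves each name/phone pair, keeping the answer's .username/.phone selectors (assumed to match the target page) and the gbk encoding from the question's script, could look like this:

# coding: utf-8
# Sketch adapted from the answer above, writing results to the file as the question intended.
import requests
from pyquery import PyQuery as Q

url = 'http://www.51myd.com/cooperative_merchants?industry=100&provinceId=19&cityId=0&areaId=0&page='

with open('lawyers_info.txt', 'ab') as f:
    for i in range(1, 5):
        r = requests.get('{}{}'.format(url, i))
        doc = Q(r.text)
        # .text() joins the text of every matched element with spaces, so split() gives one entry per match
        usernames = doc.find('.username').text().split()
        phones = doc.find('.phone').text().split()
        for name, phone in zip(usernames, phones):
            f.write('{}: {}\r\n'.format(name, phone).encode('gbk'))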

Tags: Python, Programming