陇州商情

Scraping Listing Data from Lianjia's Beijing Second-Hand Housing Site

(Source: site editor, 2024-10-31 00:30)

Scrape all listings on pages 1 through 10 of Lianjia's Beijing second-hand housing site. For each listing, collect the title (e.g. "云龙家园南北通透两居低楼层出行方便"), the address (e.g. "云龙家园-旧宫"), and the summary (e.g. "2室1厅|90.03平米|南北|简装|低楼层(共6层)|2005年|板楼"), then save the data to a file named houselists.csv.

Hints:

Page 1 of the site is at https://bj.lianjia.com/ershoufang/pg1/

Page 2 is at https://bj.lianjia.com/ershoufang/pg2/

Page 3 is at https://bj.lianjia.com/ershoufang/pg3/

……

Page 9 is at https://bj.lianjia.com/ershoufang/pg9/

Page 10 is at https://bj.lianjia.com/ershoufang/pg10/
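Since the page URLs above differ only in the pg{N} suffix, they can be generated programmatically. A minimal sketch:

```python
# Build the URLs for pages 1 through 10 from the pg{N} pattern shown above
urls = [f"https://bj.lianjia.com/ershoufang/pg{i}/" for i in range(1, 11)]

print(urls[0])   # first page
print(urls[-1])  # last page
```

The crawler below uses the same f-string pattern directly inside its page loop.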

import requests
from bs4 import BeautifulSoup
import csv
from time import sleep
import re

fp = open('houselists.csv', 'w', newline='', encoding='utf-8')
writer = csv.writer(fp)
writer.writerow(('房源标题', '房源地址', '房源信息'))
# A browser-like User-Agent is needed, or the site may serve an anti-crawler page
headers = {
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Edg/130.0.0.0"
}
for i in range(1, 11):
    url = f"https://bj.lianjia.com/ershoufang/pg{i}/"
    # Note: the original passed cookies=cookies, but no cookies variable was
    # ever defined, which raises a NameError; the request works without it
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'lxml')
    sleep(1)  # pause between pages to avoid hammering the server
    for j in range(1, 31):  # each result page lists 30 properties
        base = f'#content > div.leftContent > ul > li:nth-child({j}) > div.info.clear'
        title = soup.select_one(f'{base} > div.title > a').text.strip()
        location = soup.select_one(f'{base} > div.flood > div').text.strip()
        location = re.sub(r'\s+', '', location)  # collapse whitespace inside the address
        formation = soup.select_one(f'{base} > div.address > div').text.strip()
        writer.writerow((title, location, formation))
        print(title, location, formation)
fp.close()
print("over!")
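One weakness of the code above is that the positional li:nth-child({j}) selectors assume every one of the 30 list items is a real listing; if the site inserts an ad card into the list, select_one returns None and the .text access crashes. A more robust sketch iterates over the li nodes directly and skips anything that lacks the expected children. The HTML fragment and its class names (sellListContent, positionInfo, houseInfo) are assumptions modeled on the selectors in the article's code, and the second listing is made-up sample data; a live run would parse r.text instead:

```python
import re
from bs4 import BeautifulSoup

# Trimmed-down fragment mimicking the listing page's structure (an assumption
# based on the CSS selectors used above); the last <li> stands in for an ad card
SAMPLE = """
<ul class="sellListContent">
  <li class="clear">
    <div class="info clear">
      <div class="title"><a>云龙家园 南北通透两居 低楼层 出行方便</a></div>
      <div class="flood"><div class="positionInfo">云龙家园 - 旧宫</div></div>
      <div class="address"><div class="houseInfo">2室1厅|90.03平米|南北|简装</div></div>
    </div>
  </li>
  <li class="clear">
    <div class="info clear">
      <div class="title"><a>某小区 精装两居</a></div>
      <div class="flood"><div class="positionInfo">某小区 - 某地</div></div>
      <div class="address"><div class="houseInfo">2室1厅|85平米|南|精装</div></div>
    </div>
  </li>
  <li class="clear"><div class="ad">advertising card with no listing data</div></li>
</ul>
"""

soup = BeautifulSoup(SAMPLE, "html.parser")
rows = []
for li in soup.select("ul.sellListContent > li.clear"):
    title = li.select_one("div.title > a")
    location = li.select_one("div.flood div")
    info = li.select_one("div.address div")
    if not (title and location and info):
        continue  # skip ad cards or anything else lacking the expected nodes
    rows.append((title.text.strip(),
                 re.sub(r"\s+", "", location.text),  # same cleanup as above
                 info.text.strip()))

print(rows)
```

Because missing nodes are filtered out rather than dereferenced, a page with fewer than 30 genuine listings no longer crashes the crawler; the ad card in the sample is silently dropped and only the two real listings survive.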
