Basic Information
Source name: Baidu title crawler
Source size: 2.15 KB
File format: .py
Language: Python
Last updated: 2021-09-08
Source Introduction
Batch-fetches the titles of Baidu search result pages.

# -*- coding: utf-8 -*-
"""
Created on Mon Aug 23 15:38:33 2021
Beautiful Soup is a Python library whose main purpose is to extract data from web pages.
@Reference: https://blog.csdn.net/qq_34320337/article/details/104997452
"""

import time
import re

import requests
from bs4 import BeautifulSoup    # parse the fetched pages
# Note: the original sys/importlib reload for encoding is unnecessary in Python 3 (UTF-8 by default)
 

#ff = open('baocun.txt', 'w')
 
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, compress',
    'Accept-Language': 'en-us;q=0.5,en;q=0.3',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0'
}  # request headers
 
def getfromBaidu():
    start = time.perf_counter()    # time.clock() was removed in Python 3.8
    for k in range(1, 3):
        geturl(k)
    end = time.perf_counter()
    print(end - start)
 
def geturl(k):
    number = str((k - 1) * 10)
    path = 'https://www.baidu.com/s?wd=%E5%92%96%E5%95%A1&pn=' + number + '&oq=%E5%92%96%E5%95%A1&ie=utf-8&usm=1&rsv_pq=9ccd7f6500120ebb&rsv_t=d92fDeHr8TAXzN%2FuqzNW3xd3BcU3lunThKY2lkUUobFc3Ihjx46MPW4iNbc'
    #print(path)
    content = requests.get(path, headers=headers)
    # parse the HTML with BeautifulSoup
    soup = BeautifulSoup(content.text, 'html.parser')
 
    tagh3 = soup.find_all('div', {'class': 'result c-container '})
    #print(tagh3)
    for h3 in tagh3:
        try:
            title = h3.find(name="h3", attrs={"class": re.compile("t")}).find('a').text.replace("\"", "")
            print(title)
            #ff.write(title + '\n')
        except AttributeError:
            title = ''
        try:
            abstract = h3.find(name="div", attrs={"class": re.compile("c-abstract")}).text.replace("\"", "")
            print(abstract)
            #ff.write(abstract + '\n')
        except AttributeError:
            abstract = ''
        try:
            url = h3.find(name="a", attrs={"class": re.compile("c-showurl")}).get('href')
            print(url + '\n')
            #ff.write(url + '\n')
        except AttributeError:
            url = ''
  
        #ff.write('\n')
 
 
if __name__ == '__main__':
    getfromBaidu()
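
The file-writing lines (ff.write) are commented out in the script, so results are only printed. Below is a minimal sketch of how the titles could instead be saved to a text file. It is not part of the original script: the function name save_baidu_titles and the default page count are illustrative, the output filename baocun.txt is taken from the commented-out open() call, and the selector simply grabs every h3 link on the result page rather than the original class-based lookup.

# Sketch only: collect result titles page by page and write them to a file.
import requests
from bs4 import BeautifulSoup

def save_baidu_titles(pages=2, outfile='baocun.txt'):
    ua = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0'}
    with open(outfile, 'w', encoding='utf-8') as ff:
        for k in range(1, pages + 1):
            pn = str((k - 1) * 10)    # Baidu paginates in steps of 10
            path = 'https://www.baidu.com/s?wd=%E5%92%96%E5%95%A1&ie=utf-8&pn=' + pn
            resp = requests.get(path, headers=ua, timeout=10)
            soup = BeautifulSoup(resp.text, 'html.parser')
            for h3 in soup.find_all('h3'):
                a = h3.find('a')
                if a and a.text.strip():
                    ff.write(a.text.strip() + '\n')

if __name__ == '__main__':
    save_baidu_titles()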