PyQt Novel Reader
# Parse one page of the book list. `data` is the raw HTML of the list page;
# `sBaseUrl` is the site's base URL, used to build absolute links.
from bs4 import BeautifulSoup
from lxml import etree


def parseBookList(data, sBaseUrl):
    soup = BeautifulSoup(data, features='lxml')
    lis = soup.find_all('div', 'bookbox')
    novelList = []      # book names
    novelInfoList = []  # [category, name, link, latest chapter, author, time, latest-chapter link]
    linkList = []       # relative link to each book's page
    for li in lis:
        html = etree.HTML(str(li))
        '''
        class_ = html.xpath('//span[@class="s1"]/text()')
        name = html.xpath('//span[@class="s2"]/a/text()')
        link = html.xpath('//span[@class="s2"]/a/@href')
        new = html.xpath('//span[@class="s3"]/a/text()')
        author = html.xpath('//span[@class="s4"]/text()')
        time = html.xpath('//span[@class="s5"]/text()')
        now = html.xpath('//span[@class="s7"]/text()')
        old version of the page markup
        '''
        class_ = html.xpath('//div[@class="cat"]/text()')
        name = html.xpath('//h4[@class="bookname"]/a/text()')
        link = html.xpath('//h4[@class="bookname"]/a/@href')
        new = html.xpath('//div[@class="update"]/span/text()')
        author = html.xpath('//div[@class="author"]/text()')
        time = html.xpath('//div[@class="author"]/text()')
        now = html.xpath('//div[@class="update"]/a/@href')
        # Skip entries that are missing any of the required fields
        if class_ and now and new:
            novelList.append(name[0])
            novelInfoList.append([class_[0], name[0], sBaseUrl + link[0],
                                  new[0], author[0], time[0], sBaseUrl + now[0]])
            linkList.append(link[0])
    return [novelList, novelInfoList, linkList]
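Each `xpath(...)` call above returns a list of matched nodes, which is why the code checks truthiness before indexing with `[0]`. A minimal sketch of that extraction pattern on a hand-written sample block (the sample markup below is illustrative only; the real markup comes from the scraped site and may differ):

```python
from lxml import etree

# Illustrative stand-in for one 'bookbox' div from the list page
sample = '''
<div class="bookbox">
  <div class="cat">Fantasy</div>
  <h4 class="bookname"><a href="/book/1/">Example Novel</a></h4>
  <div class="author">Author: Someone</div>
  <div class="update"><span>Chapter 10</span><a href="/book/1/10.html">latest</a></div>
</div>
'''

html = etree.HTML(sample)
# Each xpath() call returns a list; an empty list means the field is absent
name = html.xpath('//h4[@class="bookname"]/a/text()')
link = html.xpath('//h4[@class="bookname"]/a/@href')
cat = html.xpath('//div[@class="cat"]/text()')
print(name[0], link[0], cat[0])  # → Example Novel /book/1/ Fantasy
```

Guarding with `if class_ and now and new:` before taking `[0]`, as the parser does, avoids an `IndexError` when a field is missing from a malformed entry.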