I have recently been looking into quantitative stock trading and wanted to start from tick-by-tick transaction details, but getting the historical data turned out to be a real problem, and the usual stock/brokerage software cannot export transaction records in bulk. So I spent two days and managed to scrape the data I needed from Sina Finance. Here is how. The Sina transaction-detail endpoint is

http://market.finance.sina.com.cn/transHis.php?symbol=sz000001&date=2018-07-17&page=3

The format is self-explanatory:

symbol = stock code

date = trading date (YYYY-MM-DD)

page = page number
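
As a side note (my addition, not from the original post), the same URL can be assembled from the three parameters with the standard library instead of concatenating strings by hand:

    from urllib.parse import urlencode

    base = 'http://market.finance.sina.com.cn/transHis.php'
    params = {'symbol': 'sz000001', 'date': '2018-07-17', 'page': 3}
    url = base + '?' + urlencode(params)
    # -> http://market.finance.sina.com.cn/transHis.php?symbol=sz000001&date=2018-07-17&page=3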

Open that URL and you will see a single table. It is the same data shown on the official Sina page, but this page is much easier to scrape because there is nothing else on it. Apparently the data used to be downloadable directly; unfortunately that no longer works, but no matter, let's scrape it ourselves.

1. First, fetch the raw HTML. I added a simple retry-on-timeout mechanism:

def getHtml(url):
    while True:
        try:
            html = urllib.request.urlopen(url, timeout=5).read()
            break
        except:
            print("timeout, retrying")
    html = html.decode('gbk')   # the page is served as GBK
    return html
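
One caveat: the bare while True above retries forever if the server stays unreachable. A bounded variant could look like the sketch below (my own addition; the name getHtmlWithLimit and the retries parameter are made up, not part of the original script):

    import time
    import urllib.request

    def getHtmlWithLimit(url, retries=3):
        # hypothetical variant: give up after a fixed number of attempts
        for attempt in range(retries):
            try:
                return urllib.request.urlopen(url, timeout=5).read().decode('gbk')
            except Exception:
                print("timeout, retrying")
                time.sleep(1)          # short pause before the next attempt
        raise RuntimeError("gave up after %d attempts" % retries)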

2. Extract the column names:

def getTitle(tableString):
    # grab everything between <thead ...> and </thead>
    s = r'(?<=<thead)>.*?([\s\S]*?)(?=</thead>)'
    pat = re.compile(s)
    code = pat.findall(tableString)
    # grab the single header row inside it
    s2 = r'(?<=<tr).*?>([\s\S]*?)(?=</tr>)'
    pat2 = re.compile(s2)
    code2 = pat2.findall(code[0])
    # pull the text out of each <th>/<td> cell
    s3 = r'(?<=<t[h,d]).*?>([\s\S]*?)(?=</t[h,d]>)'
    pat3 = re.compile(s3)
    code3 = pat3.findall(code2[0])
    return code3
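
To see what getTitle produces, here is a quick check against a hand-written table fragment (the real page's header cells are in Chinese; this fragment is invented purely to illustrate the regexes):

    sample = '<table><thead><tr><th>Time</th><th>Price</th><th>Volume</th></tr></thead></table>'
    print(getTitle(sample))
    # ['Time', 'Price', 'Volume']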

3. Extract all the records:

def getBody(tableString):
    # grab everything between <tbody ...> and </tbody>
    s = r'(?<=<tbody)>.*?([\s\S]*?)(?=</tbody>)'
    pat = re.compile(s)
    code = pat.findall(tableString)
    # grab every <tr> row
    s2 = r'(?<=<tr).*?>([\s\S]*?)(?=</tr>)'
    pat2 = re.compile(s2)
    code2 = pat2.findall(code[0])
    # pull the text out of each cell, stepping past any nested tag
    s3 = r'(?<=<t[h,d]).*?>(?!<)([\s\S]*?)(?=</)[^>]*>'
    pat3 = re.compile(s3)
    code3 = []
    for tr in code2:
        code3.append(pat3.findall(tr))
    return code3
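
Same idea for getBody. The (?!<) in the cell pattern makes it step past a nested tag, so a cell such as <td><span>10.05</span></td> still yields its text. Again, the fragment below is made up for illustration, not the actual Sina markup:

    sample = ('<table><tbody>'
              '<tr><td>09:30:03</td><td><span>10.05</span></td><td>200</td></tr>'
              '<tr><td>09:30:06</td><td><span>10.06</span></td><td>150</td></tr>'
              '</tbody></table>')
    print(getBody(sample))
    # [['09:30:03', '10.05', '200'], ['09:30:06', '10.06', '150']]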

4. Then wrap it all in a while loop that walks through every page, checking whether we have reached the last page, plus a couple of other details. (getTable, shown in the full source below, simply pulls the <table id="datatbl"> block out of the page.)

while True:
    Url = 'http://market.finance.sina.com.cn/transHis.php?symbol=' + symbol + '&date=' + date + '&page=' + str(page)
    html = getHtml(Url)
    table = getTable(html)
    if len(table) != 0:
        tbody = getBody(table[0])
        if len(tbody) == 0:
            # an empty tbody means we have run past the last page
            print("done")
            break
        if page == 1:
            thead = getTitle(table[0])
            print(thead)
        for tr in tbody:
            print(tr)
    else:
        # no table at all: nothing was traded on that date
        print("no data for this date")
        break
    page += 1
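
One extra detail worth considering (my addition, not something the original does): fetching many pages back-to-back may get the client throttled, so a short pause between requests is cheap insurance:

    import time

    page = 1
    while page <= 3:                 # e.g. walk the first three pages
        # ... fetch and parse the page exactly as above ...
        page += 1
        time.sleep(0.5)              # brief pause so we don't hammer the server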

5. Output the results

Not bad, right? From here you just load the data into a database. I am a student who has only been teaching himself Python for a short while, and I spent just half a day learning regular expressions, so the code is fairly crude; suggestions are very welcome.
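
As a rough sketch of that last step (my addition; the placeholder rows, column names, and output filename are all made up, since the script above only prints each record), the records could first be written to a CSV file and bulk-loaded into the database from there:

    import csv

    # assume thead came from getTitle and rows collects every tr returned by getBody
    thead = ['time', 'price', 'volume']              # placeholder column names
    rows = [['09:30:03', '10.05', '200']]            # placeholder records
    with open('ticks.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(thead)                       # header line
        writer.writerows(rows)                       # one line per transaction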

Full source:

import urllib.request
import re
import datetime
 
 
def getHtml(url):
    while True:
        try:
            html = urllib.request.urlopen(url, timeout=5).read()
            break
        except:
            print("超时重试")
    html = html.decode('gbk')
    return html
 
 
def getTable(html):
    s = r'(?<=<table class="datatbl" id="datatbl">)([\s\S]*?)(?=</table>)'
    pat = re.compile(s)
    code = pat.findall(html)
    return code
 
 
def getTitle(tableString):
    s = r'(?<=<thead)>.*?([\s\S]*?)(?=</thead>)'
    pat = re.compile(s)
    code = pat.findall(tableString)
    s2 = r'(?<=<tr).*?>([\s\S]*?)(?=</tr>)'
    pat2 = re.compile(s2)
    code2 = pat2.findall(code[0])
    s3 = r'(?<=<t[h,d]).*?>([\s\S]*?)(?=</t[h,d]>)'
    pat3 = re.compile(s3)
    code3 = pat3.findall(code2[0])
    return code3
 
 
def getBody(tableString):
    s = r'(?<=<tbody)>.*?([\s\S]*?)(?=</tbody>)'
    pat = re.compile(s)
    code = pat.findall(tableString)
    s2 = r'(?<=<tr).*?>([\s\S]*?)(?=</tr>)'
    pat2 = re.compile(s2)
    code2 = pat2.findall(code[0])
    s3 = r'(?<=<t[h,d]).*?>(?!<)([\s\S]*?)(?=</)[^>]*>'
    pat3 = re.compile(s3)
    code3 = []
    for tr in code2:
        code3.append(pat3.findall(tr))
    return code3
 
 
# stock code
symbol = 'sz000001'
# date to fetch
dateObj = datetime.datetime(2018, 6, 1)
date = dateObj.strftime("%Y-%m-%d")

# page number; the data spans several pages, so start from page 1
page = 1
 
while True:
    Url = 'http://market.finance.sina.com.cn/transHis.php?symbol=' + symbol + '&date=' + date + '&page=' + str(page)
    html = getHtml(Url)
    table = getTable(html)
    if len(table) != 0:
        tbody = getBody(table[0])
        if len(tbody) == 0:
            print("结束")
            break
        if page == 1:
            thead = getTitle(table[0])
            print(thead)
        for tr in tbody:
            print(tr)
    else:
        print("当日无数据")
        break
    page += 1
