This post is based mainly on the second report in the Founder Securities financial engineering (方正金工) "Spark" (星火) multi-factor series, "Barra Model Advanced: Risk Forecasting in Multi-Factor Models" (《Barra模型进阶,多因子模型风险预测》), and introduces the factor-risk part of the Barra model. Using the factor return matrix from the earlier post "A First Look at the Barra Model" (Barra模型初探), we forecast the future risk of each factor and apply a Newey-West autocorrelation adjustment, an eigenfactor adjustment, and a volatility regime adjustment, arriving at the covariance matrix that describes expected factor risk.
Note: it is recommended to read "A First Look at the Barra Model" (Barra模型初探) before this post.
This section briefly reviews the Barra multi-factor model. Suppose there are K common factors driving stock returns in the market; the multi-factor model can then be written as
$r_n=\sum_{k=1}^{K}X_{nk}f_k+u_n$
where $r_n$ is the return of stock $n$, $X_{nk}$ is the exposure of stock $n$ to factor $k$, $f_k$ is the return of factor $k$, and $u_n$ is the stock's specific return. Continuing from "A First Look at the Barra Model", we take the 45 factors in the JoinQuant factor library, drop 6 abnormal factors after the weighted least-squares regression, and keep the remaining 39 factors for the calculations below.
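Although the exposure matrix itself is built in the previous post, the following minimal sketch shows the idea behind estimating one day's factor returns $f_k$: a cross-sectional weighted least-squares regression of stock returns on factor exposures. The names `exposures`, `returns`, and `regression_weights` are hypothetical placeholders, not objects defined in this post.

import numpy as np
import statsmodels.api as sm

def estimate_factor_returns(exposures, returns, regression_weights):
    """One cross-section: returns is (N,), exposures is (N, K).
    WLS weights are typically proportional to sqrt(market cap)."""
    model = sm.WLS(returns, exposures, weights=regression_weights)
    return model.fit().params  # length-K vector: one return per factor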
Note: the final risk forecasts in this post may be less than ideal, because the factor selection here is rather coarse. In fact, after consulting the Barra handbook, the author found that the construction of its factors involves a series of elaborate steps, including various optimization routines. This post therefore focuses on laying out the workflow for computing risk; readers who want precise results should invest more effort in factor selection and data cleaning.
Following the steps in "A First Look at the Barra Model", we obtain a DataFrame whose index is the factor names and whose columns are the trading dates; each entry is the return of that factor on that trading day.
import pandas as pd
import numpy as np

# Load the factor return matrix produced in "A First Look at the Barra Model"
data = pd.read_csv("factor_return.csv")
data = data.loc[:, ~data.columns.str.contains('^Unnamed')]
# Drop the six factors that behaved abnormally
data.drop([15, 16, 18, 19, 28, 31], inplace=True)
# Re-number the remaining 39 factors with a flat integer index
data.index = np.arange(data.shape[0])
print(data)
    2019-02-01  2019-02-11  ...  2019-07-22  2019-07-23
0    -0.000408   -0.006660  ...   -0.008588   -0.000071
1     0.001256   -0.001301  ...    0.002085    0.002050
2    -0.002166   -0.005771  ...    0.001443   -0.000203
3     0.002350    0.000505  ...    0.001118   -0.002245
4     0.001824   -0.004884  ...   -0.007557    0.000608
5     0.001207    0.000336  ...   -0.000831   -0.001783
6     0.001471    0.001788  ...    0.000751    0.000558
7     0.001579    0.001243  ...   -0.000042    0.000366
8     0.000039   -0.000215  ...    0.000964    0.000781
9    -0.000613   -0.000291  ...   -0.000655    0.000189
10    0.027342   -0.005269  ...    0.000291    0.002884
11   -0.010864   -0.004725  ...   -0.003023    0.006102
12   -0.004798   -0.002315  ...    0.003580   -0.005725
13   -0.015248   -0.006055  ...    0.001300   -0.004151
14   -0.001119    0.001898  ...   -0.012952   -0.004506
15    0.013243    0.004960  ...    0.013513    0.019025
16    0.004592    0.009807  ...   -0.004921   -0.000567
17    0.011523    0.005991  ...   -0.015203    0.000644
18   -0.013368    0.008945  ...   -0.000774   -0.002092
19   -0.007912    0.004052  ...   -0.004180   -0.001515
20    0.000309    0.007758  ...   -0.002153   -0.002626
21    0.004286    0.001315  ...   -0.003850   -0.000058
22    0.000799   -0.000422  ...   -0.000795   -0.004828
23   -0.004288    0.001404  ...   -0.004968   -0.007250
24   -0.000963   -0.001635  ...   -0.002433   -0.001355
25   -0.003840   -0.003344  ...   -0.011625    0.009125
26   -0.009574   -0.003220  ...    0.006364    0.004616
27   -0.005543   -0.006330  ...   -0.000452   -0.013108
28   -0.004014   -0.001081  ...   -0.001428    0.001042
29   -0.003786   -0.004929  ...    0.001465    0.000946
30   -0.005901   -0.005770  ...    0.015465   -0.005526
31    0.007651   -0.005302  ...   -0.004968    0.015122
32   -0.003906   -0.001657  ...    0.004351   -0.005523
33    0.008511   -0.002214  ...    0.010832    0.010487
34   -0.007414   -0.007853  ...   -0.000808    0.000122
35   -0.004025    0.001428  ...   -0.009276   -0.011402
36   -0.002089   -0.000850  ...    0.001404    0.000229
37   -0.004031    0.000444  ...    0.011195   -0.005736
38    0.022488    0.012155  ...    0.011959    0.007467

[39 rows x 113 columns]
Having obtained the DataFrame above, we first compute the initial covariance matrix. Traditionally, one would use the plain covariance of the return series, which treats every day's data as equally important. In reality, however, factor returns exhibit serial dependence, which means recent data carry more predictive information about the future. We therefore compute the daily covariance matrix with a half-life exponentially weighted average:
$F_{kl}^{raw}=\frac{\sum_{s=1}^{h}\lambda^{h-s}(f_{k,s}-\overline{f_k})(f_{l,s}-\overline{f_l})}{\sum_{s=1}^{h}\lambda^{h-s}}$
where $f_{k,s}$ is the return of factor $k$ in period $s$, $\overline{f_k}$ is the exponentially weighted mean of factor $k$'s return over the sample period, $h$ is the sample length (113 here), $\tau$ is the half-life parameter, and $\lambda=0.5^{1/\tau}$. In our calculations we take $\tau=90$.
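As a quick illustration, the weighted covariance above can be computed directly; this is a simplified sketch of the formula, not the report's code (the full Newey-West implementation follows below), assuming factor_returns is the 39 x 113 matrix loaded earlier with the newest observation in the last column:

import numpy as np

def ewma_cov(factor_returns, half_life=90):
    f = np.asarray(factor_returns, dtype=float)   # K x T
    T = f.shape[1]
    lamb = 0.5 ** (1. / half_life)
    w = lamb ** np.arange(T - 1, -1, -1)          # newest day gets weight 1
    w = w / w.sum()
    mean = (f * w).sum(axis=1, keepdims=True)     # weighted mean per factor
    demeaned = f - mean
    return (demeaned * w) @ demeaned.T            # K x K covariance

# e.g. raw_cov = ewma_cov(data.values)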
The calculation above gives the raw covariance matrix, which we denote init_cov. However, because financial time series exhibit volatility clustering and autocorrelation, we must apply the econometric Newey-West adjustment. With Bartlett weights and lag length $D$, the adjusted monthly covariance is
$F^{NW}=21\left[F^{raw}+\sum_{\Delta=1}^{D}\frac{D+1-\Delta}{D+1}\left(C_{+\Delta}+C_{+\Delta}^{T}\right)\right]$
where $C_{+\Delta}$ is the exponentially weighted lag-$\Delta$ autocovariance matrix of the factor returns and the factor 21 scales the daily covariance to a monthly (21 trading day) horizon. In our calculations the report takes lag length $D=2$ for both the variance and covariance series (the code below defaults to a Bartlett rule for the lag length), with the half-life set to $90$.
# Import tool packages
import datetime
from jqdata import *
from jqfactor import *
import numpy as np
import pandas as pd
from statsmodels import regression
import statsmodels.api as sm
import matplotlib.pyplot as plt
from scipy import stats
from scipy.optimize import minimize
import warnings
warnings.filterwarnings('ignore')
# Plot style
plt.style.use('ggplot')
# Build a list of trading dates
def get_tradeday_list(start,end,frequency=None,count=None):
if count !=None:
df=get_price('000001.XSHG',end_date=end,count=count)
else:
df=get_price('000001.XSHG',start_date=start,end_date=end)
if frequency==None or frequency=='day':
return df.index
else:
df['year-month'] = [str(i)[0:7] for i in df.index]
if frequency == 'month':
return df.drop_duplicates('year-month').index
elif frequency == 'quarter':
df['month'] = [str(i)[5:7] for i in df.index]
df = df[(df['month']=='01') | (df['month']=='04') | (df['month']=='07') | (df['month']=='10') ]
return df.drop_duplicates('year-month').index
elif frequency =='halfyear':
df['month'] = [str(i)[5:7] for i in df.index]
df = df[(df['month']=='01') | (df['month']=='06')]
return df.drop_duplicates('year-month').index
# Find the trading day that is `shift` trading days after `date`
def ShiftTradingDay(date, shift):
    # Get all trading days as a list of datetime.date values
    tradingday = get_all_trade_days()
    # Position in that list of the day `shift` trading days after `date`
    date = datetime.date(int(str(date)[:4]), int(str(date)[5:7]), int(str(date)[8:10]))
    shiftday_index = list(tradingday).index(date) + shift
    # Return the date at that position (datetime.date)
    return tradingday[shiftday_index]
# Filter out recent IPOs and ST stocks; returns the filtered stock list
# !!! Does not filter suspended stocks
def filter_stock(stockList, date, days=21*3, limit=0):  # daily strategies can also filter open-at-limit stocks
    # Remove stocks listed less than 3 months before beginDate
    def delect_stop(stocks, beginDate, n=days):
        stockList = []
        beginDate = datetime.datetime.strptime(beginDate, "%Y-%m-%d")
        for stock in stocks:
            start_date = get_security_info(stock).start_date
            if start_date < (beginDate - datetime.timedelta(days=n)).date():
                stockList.append(stock)
        return stockList
    # Remove ST stocks
    st_data = get_extras('is_st', stockList, count=1, end_date=date)
    stockList = [stock for stock in stockList if not st_data[stock][0]]
    # Remove new listings and delisted stocks
    stockList = delect_stop(stockList, date)
    # Remove stocks that opened at the price limit
    if limit == 1:
        # For a close-at-limit filter, change the fields accordingly
        df = get_price(stockList, end_date=date, fields=['open', 'high_limit', 'low_limit'], count=1).iloc[:, 0, :]
        df['h_limit'] = (df['open'] == df['high_limit'])
        df['l_limit'] = (df['open'] == df['low_limit'])
        stockList = [df.index[i] for i in range(len(df)) if not (df.h_limit[i] or df.l_limit[i])]  # drop limit-up/limit-down stocks
    return stockList
# Tag each stock in the pool with its industry (returns a DataFrame); helper for the neutralization function
def get_industry_exposure(stock_list, date):
    df = pd.DataFrame(index=get_industries(name='sw_l1').index, columns=stock_list)
    for stock in stock_list:
        try:
            df[stock][get_industry_code_from_security(stock, date=date)] = 1
        except:
            continue
    return df.fillna(0)  # fill NaN with 0
# Look up the industry a stock belongs to (Shenwan level 1); support function of get_industry_exposure
def get_industry_code_from_security(security, date=None):
    industry_index = get_industries(name='sw_l1').index
    for i in range(0, len(industry_index)):
        try:
            index = get_industry_stocks(industry_index[i], date=date).index(security)
            return industry_index[i]
        except:
            continue
    return 'not found'
# Initial settings: statistics window and rebalancing frequency
index = '000905.XSHG'  # stock pool and benchmark; here the CSI 500
# Start and end dates of the sample
date_start = '2019-02-01'
date_end = '2019-07-24'
# Rebalancing frequency
trade_freq = 'day'  # 'month': each calendar month; 'day': every trading day; a number such as 5 means rebalance every 5 days
# Build the list of rebalancing dates
if trade_freq == 'month':
    # First trading day of each calendar month
    date_list = get_tradeday_list(start=date_start, end=date_end, frequency='month', count=None)
elif trade_freq == 'day':
    date_list = get_tradeday_list(start=date_start, end=date_end, count=None)  # all trading days in the window
else:
    date_day_list = get_tradeday_list(start=date_start, end=date_end, count=None)  # all trading days in the window
    date_list = [date_day_list[i] for i in range(len(date_day_list)) if i % int(trade_freq) == 0]
# Print the selected rebalancing dates (optional)
print(date_list)
DatetimeIndex(['2019-02-01', '2019-02-11', '2019-02-12', '2019-02-13', '2019-02-14', '2019-02-15', '2019-02-18', '2019-02-19', '2019-02-20', '2019-02-21', ... '2019-07-11', '2019-07-12', '2019-07-15', '2019-07-16', '2019-07-17', '2019-07-18', '2019-07-19', '2019-07-22', '2019-07-23', '2019-07-24'], dtype='datetime64[ns]', length=114, freq=None)
def Newey_West_Cov(hyperpanel, forewards=21, nlags='Bartlett',
                   half_life=90, left_decay=True):
    '''
    Calculate the covariance matrix with the Newey-West adjustment;
    nlags is estimated by the Bartlett kernel function.
    Parameters:
    -----------
    hyperpanel: N x T matrix. N denotes the number of factors,
                T is the time window. We set a threshold on the
                time window to ensure the covariance estimate is
                robust. The threshold is 100 samples.
    forewards:  The horizon of the covariance forecast.
                Following Barra CNE5, we set 21 days (one month).
    nlags:      'Bartlett' or an int. Number of lags when
                estimating the autocovariance.
    half_life:  half-life parameter of the decay adjustment.
    left_decay: If the latest day is in the rightmost column,
                left_decay is True. If the latest day is in the
                leftmost column, left_decay is False.
    Returns:
    --------
    np.array, N x N matrix.
    References:
    ----------
    方正证券: Barra模型进阶:多因子模型风险预测 '星火'多因子系列(二)
    statsmodels.stats.sandwich_covariance.cov_hac
    '''
    # Raise a ValueError if there are fewer than 100 samples (adjust as needed)
    if hyperpanel.shape[1] < 100:
        raise ValueError('Not enough samples')
    if not isinstance(hyperpanel, np.ndarray):  # make sure the input is an ndarray
        hyperpanel = np.array(hyperpanel)
    # Compute lambda from its definition
    if half_life is None:
        lamb = 1
    else:
        lamb = 0.5 ** (1. / half_life)
    # Using lamb ** (x / 2) splits each weight into two equal halves,
    # since a ** (b / 2) * a ** (b / 2) = a ** b; this simplifies the computation
    if left_decay:
        weights = np.array(list(map(lambda x: lamb ** (x / 2),
                                    np.arange(hyperpanel.shape[1] - 1, -1, -1))))
    else:
        weights = np.array(list(map(lambda x: lamb ** (x / 2),
                                    np.arange(hyperpanel.shape[1]))))
    weighted_panel = hyperpanel * weights
    # filter nan index
    nonan_index = np.where(~np.isnan(np.sum(hyperpanel, axis=0)))[0]
    filtered_weights = weights[nonan_index]
    filtered_weighted_panel = weighted_panel[:, nonan_index]
    if filtered_weighted_panel.shape[1] < 0.8 * hyperpanel.shape[1]:
        raise ValueError(
            'Too many nan values in hyperpanel, please handle nan first.')
    # calculate the initial covariance; np.nanmean handles missing values
    filtered_weighted_panel_demean = filtered_weighted_panel -\
        np.nanmean(filtered_weighted_panel, axis=1).reshape(-1, 1)
    init_covar = np.dot(filtered_weighted_panel_demean,
                        filtered_weighted_panel_demean.T) / sum(
                            filtered_weights ** 2)
    if nlags == 'Bartlett':
        nlags = int(4 * (filtered_weighted_panel.shape[1] / 100) ** (2 / 9))
    elif isinstance(nlags, str):
        raise ValueError('No such method, try an int or Bartlett.')
    def _autocov(panel, weight, lag):
        # align the weight vector with the T - lag paired columns
        if left_decay:
            weight = weight[lag:]
        else:
            weight = weight[:-lag]
        panel_lag = panel[:, :-lag] * weight
        panel_current = panel[:, lag:] * weight
        nonan_index = np.where(
            ~np.isnan(np.sum(panel_current + panel_lag, axis=0)))[0]
        panel_lag_demean = panel_lag -\
            np.nanmean(panel_lag, axis=1).reshape(-1, 1)
        panel_current_demean = panel_current -\
            np.nanmean(panel_current, axis=1).reshape(-1, 1)
        return np.dot(panel_current_demean[:, nonan_index],
                      panel_lag_demean[:, nonan_index].T) / sum(
                          weight[nonan_index] ** 2)
    def adj_term(delta):
        # Bartlett weight (nlags - delta + 1) / (nlags + 1) on the lag-delta autocovariance
        return (nlags - delta + 1) / (nlags + 1) * _autocov(hyperpanel, weights, delta)
    adj_term_sum = np.zeros(adj_term(1).shape)
    for delta in np.arange(1, nlags + 1):
        adj_term_sum += adj_term(delta)
    # scale the daily covariance to the forecast horizon (21 trading days)
    return forewards * (init_covar + adj_term_sum + adj_term_sum.T)
pd1 = pd.DataFrame(Newey_West_Cov(data))
pd1 = np.around(pd1, decimals=5)
print(pd1)
          0        1        2  ...       36       37       38
0   0.00031  0.00005 -0.00007  ...  0.00006 -0.00009  0.00012
1   0.00005  0.00015  0.00000  ...  0.00004  0.00009  0.00054
2  -0.00007  0.00000  0.00018  ...  0.00006 -0.00004 -0.00055
3   0.00004  0.00007 -0.00008  ...  0.00005  0.00003  0.00074
4   0.00026  0.00006 -0.00007  ...  0.00007 -0.00009  0.00031
5  -0.00000  0.00000 -0.00007  ...  0.00008  0.00002  0.00030
6  -0.00001 -0.00001 -0.00005  ...  0.00006 -0.00001  0.00050
7  -0.00001 -0.00002  0.00001  ... -0.00007  0.00001 -0.00013
8   0.00001  0.00000  0.00000  ... -0.00003  0.00001 -0.00005
9  -0.00001 -0.00001  0.00002  ...  0.00007 -0.00002 -0.00011
10  0.00013 -0.00011  0.00009  ...  0.00033 -0.00022 -0.00189
11  0.00026  0.00019 -0.00007  ...  0.00136 -0.00020  0.00166
12  0.00016  0.00009  0.00003  ...  0.00007 -0.00005  0.00059
13  0.00014  0.00009 -0.00003  ... -0.00004  0.00015  0.00016
14  0.00011 -0.00001 -0.00011  ... -0.00038 -0.00003  0.00021
15 -0.00007 -0.00000 -0.00002  ... -0.00017  0.00009  0.00012
16 -0.00020 -0.00006 -0.00001  ...  0.00026  0.00014 -0.00063
17 -0.00022 -0.00014  0.00023  ... -0.00046 -0.00011 -0.00186
18 -0.00034 -0.00014  0.00023  ... -0.00009  0.00001 -0.00076
19 -0.00008 -0.00005  0.00023  ... -0.00015  0.00003 -0.00077
20 -0.00015 -0.00007  0.00010  ...  0.00008 -0.00010 -0.00092
21 -0.00001  0.00004  0.00002  ...  0.00014  0.00008 -0.00003
22  0.00000 -0.00001 -0.00006  ... -0.00031 -0.00002  0.00082
23  0.00021  0.00001  0.00002  ...  0.00009 -0.00007  0.00013
24 -0.00005 -0.00004  0.00010  ... -0.00006 -0.00016 -0.00075
25 -0.00004 -0.00008  0.00012  ... -0.00009  0.00010 -0.00021
26 -0.00004 -0.00014 -0.00017  ... -0.00010 -0.00017  0.00007
27  0.00006  0.00007 -0.00001  ... -0.00019  0.00014  0.00039
28  0.00017  0.00014 -0.00002  ...  0.00014  0.00016  0.00077
29  0.00015 -0.00001  0.00005  ...  0.00064 -0.00009 -0.00040
30 -0.00016  0.00002  0.00011  ... -0.00029  0.00010  0.00007
31 -0.00010 -0.00002 -0.00014  ... -0.00027 -0.00018  0.00072
32 -0.00001 -0.00000  0.00000  ...  0.00008 -0.00014  0.00002
33  0.00004  0.00008 -0.00015  ... -0.00003  0.00012  0.00088
34  0.00001 -0.00005  0.00003  ... -0.00016  0.00000 -0.00204
35 -0.00009 -0.00004 -0.00042  ... -0.00078  0.00013  0.00180
36  0.00006  0.00004  0.00006  ...  0.00115 -0.00004  0.00005
37 -0.00009  0.00009 -0.00004  ... -0.00004  0.00053  0.00050
38  0.00012  0.00054 -0.00055  ...  0.00005  0.00050  0.00669

[39 rows x 39 columns]
As noted above, using the estimated covariance matrix directly leads to a clear underestimation of the risk of optimized portfolios. To address this, Menchero (2011) proposed the Eigenfactor Risk Adjustment as a further correction to the factor covariance matrix.
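As a quick diagnostic of our own (not part of the report), one can inspect the spectrum of the Newey-West covariance computed above; the smallest eigenvalues are exactly where a sample covariance matrix tends to understate true risk:

import numpy as np

eigvals = np.linalg.eigvalsh(pd1.values)  # pd1 is symmetric by construction
print("smallest five eigenvalues:", np.sort(eigvals)[:5])
print("largest five eigenvalues:", np.sort(eigvals)[-5:])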
First, perform an eigendecomposition of the sample covariance matrix $F^{NW}$, yielding a diagonal matrix $D_0$ and an orthogonal matrix $U_0$: $D_0=U_0^TF^{NW}U_0$
In addition, completing the correction requires $m$ Monte Carlo simulations. Mirroring the implementation below, each simulation proceeds as follows:
1. Generate simulated eigenfactor returns with diagonal covariance $D_0$ and map them back to factor space through $U_0$;
2. Estimate the covariance matrix of the simulated returns and eigendecompose it, obtaining eigenvalues $D_m$ and eigenvectors $U_m$;
3. Compute the true variances of the simulated eigenfactors, $\tilde{D}_m=\mathrm{diag}(U_m^TF^{NW}U_m)$;
4. Average the ratios $\tilde{D}_m(k)/D_m(k)$ over all simulations to obtain the simulated volatility bias $\lambda(k)$, then scale the eigenvalues of $F^{NW}$ by $\gamma(k)^2$, where $\gamma(k)=a[\lambda(k)-1]+1$ and $a$ is the adjustment coefficient (1.2 here).
def eig_value_adj(cov, simu_len, simu_times=10000, adj_coef=1.2):
    # cov must be a square (N x N) factor covariance matrix
    assert cov.ndim == 2
    assert cov.shape[0] == cov.shape[1]
    # m: number of Monte Carlo simulations
    m = simu_times
    # n: number of random variables (factors)
    n = cov.shape[0]
    # t: length of each simulated sample
    t = simu_len
    assert t >= n
    d_0, u_0 = np.linalg.eigh(cov)
    u_0 *= np.sqrt(d_0)
    # After multiplying by the square roots of the eigenvalues,
    # cov = u_0 @ u_0.T
    # m * n * t. iid standard normal random numbers
    standard_normal_nums = np.random.standard_normal((m, n, t))
    # m * n * t. For each m: t normal n-dimensional vectors with
    # zero mean and covariance matrix cov
    normal_nums = np.einsum('ij, ...jk -> ...ik',
                            u_0,
                            standard_normal_nums)
    del standard_normal_nums
    # m * n * n. Simulated covariance estimate for each m,
    # normalized by t so that its scale matches cov
    simu_cov = np.einsum('...ij, ...hj -> ...ih',
                         normal_nums,
                         normal_nums) / t
    del normal_nums
    # d_m: m * n. n eigenvalues for each m
    # u_m: m * n * n.
    d_m, u_m = np.linalg.eigh(simu_cov)
    del simu_cov
    # d_m_tilde: m * n. True variances of the simulated eigenfactors,
    # diag(u_m.T @ cov @ u_m)
    d_m_tilde = np.sum(
        np.einsum('...ij, ...ik -> ...jk',
                  u_m,
                  u_0) ** 2,
        axis=2,
    )
    del u_m
    # n-vector of simulated volatility biases (Menchero et al., 2011):
    # lambda(k) = sqrt(mean over simulations of d_m_tilde / d_m)
    lambda_diag = np.sqrt(np.mean(d_m_tilde / d_m, axis=0))
    del d_m, d_m_tilde
    # gamma(k) = adj_coef * (lambda(k) - 1) + 1; scale the eigenvalues by gamma ** 2
    gamma_diag = 1. - adj_coef + adj_coef * lambda_diag
    u_0 *= gamma_diag
    cov_adj = np.dot(u_0, u_0.T)
    return cov_adj
# Note: the author ran this block on a local machine; it could not be run on
# the platform at the time because of a numpy version issue
#Eigen = eig_value_adj(cov=pd1.values, simu_len=100, adj_coef=1.2, simu_times=10000)
#print(pd.DataFrame(Eigen))
Eigen = pd.read_csv("Eigen.csv")
Eigen = Eigen.loc[:, ~Eigen.columns.str.contains('^Unnamed')]
print(Eigen)
           0             1  ...            37        38
0   0.000088  1.962982e-05  ... -2.153498e-05  0.000023
1   0.000020  3.797352e-05  ...  3.599042e-05  0.000242
2  -0.000010 -1.181321e-05  ... -1.332392e-05 -0.000275
3   0.000009  2.755096e-05  ...  8.187678e-06  0.000365
4   0.000075  2.238943e-05  ... -1.933573e-05  0.000131
5   0.000002  5.622629e-06  ... -4.883357e-07  0.000143
6  -0.000002  8.719113e-06  ... -1.329107e-07  0.000244
7  -0.000005 -4.389544e-06  ...  6.529457e-06 -0.000064
8   0.000004  5.304369e-07  ...  5.650774e-06 -0.000025
9  -0.000001 -3.956646e-06  ... -7.822068e-06 -0.000064
10  0.000083 -4.428750e-05  ... -1.171252e-04 -0.000921
11  0.000125  8.934013e-05  ... -1.222596e-04  0.000559
12  0.000072  3.987611e-05  ... -9.702036e-06  0.000210
13  0.000058  3.488453e-05  ...  5.939074e-05 -0.000013
14  0.000071 -2.375931e-06  ... -5.342479e-06  0.000098
15 -0.000034 -3.769712e-06  ...  3.386489e-05  0.000189
16 -0.000079 -2.898435e-05  ...  4.010702e-05 -0.000344
17 -0.000099 -6.560087e-05  ... -4.903866e-05 -0.000979
18 -0.000142 -6.386673e-05  ...  1.570240e-05 -0.000358
19 -0.000040 -2.609586e-05  ...  2.382626e-05 -0.000376
20 -0.000067 -2.936802e-05  ... -3.319265e-05 -0.000480
21  0.000011  8.505965e-06  ...  2.800566e-05 -0.000031
22 -0.000009  8.432547e-06  ...  2.255956e-05  0.000437
23  0.000063  8.640901e-07  ... -1.699090e-05  0.000050
24 -0.000020 -2.618581e-05  ... -5.261820e-05 -0.000396
25 -0.000039 -2.873193e-05  ...  6.631432e-05 -0.000126
26 -0.000021 -5.078742e-05  ... -7.073230e-05  0.000148
27  0.000020  2.285620e-05  ...  5.709680e-05  0.000110
28  0.000075  5.271469e-05  ...  7.480393e-05  0.000285
29  0.000079  8.287857e-07  ... -3.642755e-05 -0.000233
30 -0.000063  2.446794e-06  ...  4.255306e-05  0.000057
31 -0.000080 -1.052780e-05  ... -5.965590e-05  0.000541
32 -0.000011  4.882952e-08  ... -5.355494e-05  0.000053
33  0.000031  3.402226e-05  ...  5.851331e-05  0.000546
34  0.000008 -4.308832e-05  ... -7.402376e-06 -0.001112
35 -0.000050  1.121426e-06  ...  8.802790e-05  0.001010
36  0.000045  1.838470e-05  ... -2.633858e-05 -0.000059
37 -0.000022  3.599042e-05  ...  1.375026e-04  0.000229
38  0.000023  2.419260e-04  ...  2.287141e-04  0.003255

[39 rows x 39 columns]
A traditional multi-factor model treats each factor as independent when estimating its individual risk; that is, each factor's risk is estimated solely from its own time series, without reference to the other factors. In practice, however, this leads to persistent over- or underestimation of risk, so one further step is needed: the Volatility Regime Adjustment.
First, from the daily data we compute the cross-sectional bias statistic $B_t^F$ of the $K$ factors:
$B_t^F=\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{f_{kt}}{\sigma_{kt}}\right)^2}$
where $f_{kt}$ is the return of factor $k$ on day $t$ and $\sigma_{kt}$ is its predicted volatility.
Note: the author consulted the relevant reports and the original Barra handbook but found no explicit definition of $\sigma_{kt}$, so here we use the $\sigma_{kt}$ computed from init_cov.
Next, using an exponentially weighted moving average, we compute the average bias over a trailing window, $\lambda_F$, known as the factor volatility multiplier:
$\lambda_F=\sqrt{\frac{\sum_t w_t (B_t^F)^2}{\sum_t w_t}}$
where the weights $w_t$ decay exponentially with a half-life parameter (42 days in the implementation below).
Finally, compute the factor covariance matrix $F^{VRA}$:
$F^{VRA}=\lambda_F^2F^{Eigen}$
import numpy as np
import pandas as pd

def initial_cov(sample, forewards=21, lag_type='Bartlett',
                half_life=90, ascending=True):
    # sample: T x N matrix of factor returns (T days, N factors)
    T, N = sample.shape
    if half_life is None:
        lamb = 1.
    else:
        lamb = 0.5 ** (1. / half_life)
    # ascending=True places the largest weight on the first row
    if ascending:
        weights = np.power(
            lamb, np.arange(T) / 2.).reshape(-1, 1)
    else:
        weights = np.power(
            lamb, np.arange(T - 1, -1, -1) / 2.).reshape(-1, 1)
    if lag_type == 'Bartlett':
        # Bartlett lag length, kept for interface parity with
        # Newey_West_Cov; not used by this plain estimator
        nlags = int(4 * (T / 100) ** (2 / 9))
    # 1 * N
    sample_mean = np.mean(sample, axis=0, keepdims=True)
    # T * N
    sample_weighted = (sample - sample_mean) * weights
    # N * N
    init_cov = np.dot(sample_weighted.T, sample_weighted) / np.sum(
        weights ** 2, keepdims=True)
    return init_cov
def volatility_regime_adjustment(factor_return, Eigen_matrix, window1=21, half_life=42):
    # Rolling daily volatility (standard deviation) of each factor
    var_list = pd.DataFrame()
    for i in range(window1, len(date_list) - 1):
        variance_matrix = initial_cov((factor_return.iloc[:, i - window1:i]).T.values)
        var_list[date_list[i]] = variance_matrix.diagonal()
    std_list = np.sqrt(var_list)
    factor1 = factor_return.iloc[:, window1:]
    # Cross-sectional bias statistic B_t^F for each day
    factor_num = std_list.shape[0]
    BF = np.sqrt(np.sum((factor1.values / std_list.values) ** 2, axis=0) / factor_num)
    if half_life is None:
        lamb = 1
    else:
        lamb = 0.5 ** (1. / half_life)
    # Exponentially decaying weights; the most recent day gets the largest weight
    weights = list(map(lambda x: lamb ** (x / 2), np.arange(len(BF))))
    weights.reverse()
    weights = np.array(weights)
    BF = np.array(list(BF ** 2)).reshape(1, -1)
    # Factor volatility multiplier lambda_F
    factor_vol_multiplier = np.sqrt(np.sum(BF * weights) / np.sum(weights))
    # F^{VRA} = lambda_F ** 2 * F^{Eigen}, as defined above
    return factor_vol_multiplier ** 2 * Eigen_matrix
Factor_cov=volatility_regime_adjustment(data,Eigen,window1=21,half_life=42)
print(pd.DataFrame(Factor_cov))
           0             1  ...            37        38
0   0.000091  2.029639e-05  ... -2.226624e-05  0.000024
1   0.000020  3.926299e-05  ...  3.721255e-05  0.000250
2  -0.000011 -1.221436e-05  ... -1.377636e-05 -0.000285
3   0.000009  2.848650e-05  ...  8.465707e-06  0.000378
4   0.000077  2.314971e-05  ... -1.999231e-05  0.000135
5   0.000002  5.813557e-06  ... -5.049182e-07  0.000148
6  -0.000002  9.015189e-06  ... -1.374239e-07  0.000252
7  -0.000005 -4.538600e-06  ...  6.751178e-06 -0.000066
8   0.000005  5.484489e-07  ...  5.842658e-06 -0.000026
9  -0.000001 -4.091002e-06  ... -8.087682e-06 -0.000066
10  0.000086 -4.579137e-05  ... -1.211025e-04 -0.000952
11  0.000129  9.237386e-05  ... -1.264111e-04  0.000578
12  0.000075  4.123019e-05  ... -1.003149e-05  0.000217
13  0.000060  3.606911e-05  ...  6.140748e-05 -0.000014
14  0.000074 -2.456611e-06  ... -5.523894e-06  0.000102
15 -0.000035 -3.897720e-06  ...  3.501484e-05  0.000195
16 -0.000082 -2.996858e-05  ...  4.146893e-05 -0.000356
17 -0.000103 -6.782848e-05  ... -5.070387e-05 -0.001012
18 -0.000146 -6.603546e-05  ...  1.623560e-05 -0.000370
19 -0.000041 -2.698200e-05  ...  2.463533e-05 -0.000389
20 -0.000069 -3.036527e-05  ... -3.431977e-05 -0.000496
21  0.000012  8.794802e-06  ...  2.895665e-05 -0.000032
22 -0.000009  8.718891e-06  ...  2.332561e-05  0.000452
23  0.000065  8.934320e-07  ... -1.756786e-05  0.000052
24 -0.000021 -2.707500e-05  ... -5.440496e-05 -0.000409
25 -0.000040 -2.970758e-05  ...  6.856616e-05 -0.000130
26 -0.000021 -5.251201e-05  ... -7.313416e-05  0.000153
27  0.000021  2.363232e-05  ...  5.903564e-05  0.000113
28  0.000078  5.450473e-05  ...  7.734405e-05  0.000295
29  0.000081  8.569288e-07  ... -3.766452e-05 -0.000241
30 -0.000065  2.529880e-06  ...  4.399803e-05  0.000059
31 -0.000083 -1.088529e-05  ... -6.168164e-05  0.000559
32 -0.000012  5.048763e-08  ... -5.537351e-05  0.000054
33  0.000032  3.517756e-05  ...  6.050024e-05  0.000565
34  0.000008 -4.455147e-05  ... -7.653738e-06 -0.001150
35 -0.000052  1.159506e-06  ...  9.101707e-05  0.001044
36  0.000046  1.900899e-05  ... -2.723296e-05 -0.000061
37 -0.000022  3.721255e-05  ...  1.421718e-04  0.000236
38  0.000024  2.501411e-04  ...  2.364806e-04  0.003365

[39 rows x 39 columns]
With the above, we have used the Barra risk model and the data from the previous report to estimate the covariance matrix between factors. To go further, one would also estimate the specific (idiosyncratic) risk of each stock and combine the two, which yields the covariance matrix between individual stocks. That matrix can in turn be used to construct Smart Beta and other strategies.
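As a pointer for that next step, here is a minimal sketch of the standard Barra decomposition $\Sigma=XFX^T+\Delta$; the exposure matrix X and the specific variances are assumed inputs that this post does not compute:

import numpy as np

def stock_covariance(X, factor_cov, specific_var):
    """Sigma = X F X^T + Delta.
    X: N x K factor exposures; factor_cov: K x K (e.g. Factor_cov above);
    specific_var: length-N vector of specific (idiosyncratic) variances."""
    X = np.asarray(X, dtype=float)
    F = np.asarray(factor_cov, dtype=float)
    return X @ F @ X.T + np.diag(np.asarray(specific_var, dtype=float))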