
Is the Information Coefficient really a random normally distributed variable?

Updated: Apr 2, 2021

Overview of Information Coefficients


The cornerstone of active portfolio management is the manager's skill in selecting stocks with positive risk-adjusted alphas. Measuring this skill is a difficult task. On one hand we have the forecasts of the manager, and on the other the realized alphas. The best way to measure the manager's skill is to check how well the forecasts are correlated with the realized alphas. In most cases a linear method is applied to measure the correlation between the two, although it is certainly possible that the relationship is non-linear. The estimated correlation between the manager's forecasts and the actual alphas is called the Information Coefficient (IC), as first described by Grinold & Kahn (2000).

Before delving deeper into the IC it is necessary to describe what we mean by "manager's skill", a commonly disputed issue. In the early stages of the evolution of active portfolio management, the common practice was for the active manager to pick investments based on professional judgment and skill. However, with the development of asset management theory and practice, managers' methods for making active bets have become more complex (more on active management here). Nowadays managers rely mainly on highly complex active investment strategies and models. But the notion of skill has not been lost: in order to develop these strategies and models, managers must use their skills. So it is correct to say that any active investment strategy embeds the skill of the manager who uses it. Furthermore, using this concept of skill we can also evaluate active managers who are not pure stock pickers and instead use some type of common-factor modelling to generate alpha. In the end, by "skill of the manager" we mean the correlation between any forecast of the manager (no matter how it is produced) and the actual alphas.

Having clarified what we mean by "skill of the manager", we can turn to the IC. The power of active portfolio management comes from applying active investment models in the cross-sectional space. Before applying any model to evaluate a manager's skill, however, we must take several issues into account. First, there are plenty of information signals that generate very different types of forecasts. For example, some managers apply quantile strategies, i.e. they sort stocks into groups by some criterion and go long the top quintile while shorting the bottom one, while other managers simply use the E/P ratio and make their bets accordingly. To bring all of these very different methods to common ground and evaluate them, we apply the following two-step procedure:

1) Rank the stocks – for long/short quintile strategies the stocks in the long quintile get 1's while the stocks in the short quintile get -1's. If fundamental factors are used, the stock with the highest E/P gets the highest rank, while the one with the lowest E/P gets the worst. Using ranks brings all information into common terms. Furthermore, when back-testing the strategy it protects us against the bias of outliers.

2) Normalize and standardize the ranks to have cross-sectional mean 0 and standard deviation of 1 – this is done to scale the dispersion of forecasts and alphas to a common level. It is especially required when a non-uniform ranking system is used, because otherwise the dispersion of the forecasts contaminates the results. The most commonly used tool for this is (1):

$$z_{i,t} = \frac{x_{i,t} - \bar{x}_t}{\sigma(x_t)} \qquad (1)$$

where $x_{i,t}$ is the rank of stock $i$ at time $t$, and $\bar{x}_t$ and $\sigma(x_t)$ are the cross-sectional mean and standard deviation of the ranks.

Adjusting the ranked forecasts in this way is an intuitive concept that is easily applied. However, it does not take into account whether the forecasts have time-varying volatility. Although this is a very interesting issue, it is outside the scope of this discussion.
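As a concrete illustration, here is a minimal Python sketch of the two-step procedure; the function name and the toy E/P cross-section are our own stand-ins, not part of any referenced methodology:

```python
import pandas as pd

def standardized_ranks(raw_signal: pd.Series) -> pd.Series:
    """Step 1: rank the raw forecasts cross-sectionally.
    Step 2: rescale the ranks to mean 0 and standard deviation 1,
    as in equation (1)."""
    ranks = raw_signal.rank()                       # 1 = lowest, N = highest
    return (ranks - ranks.mean()) / ranks.std()

# Hypothetical E/P cross-section for five stocks
ep = pd.Series({"A": 0.08, "B": 0.02, "C": 0.12, "D": 0.05, "E": 0.03})
z = standardized_ranks(ep)
print(z.round(2))   # cross-sectional mean ~0, standard deviation ~1
```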

The next issue we need to consider is the volatility of the alphas (residual returns), $\omega_i$. Different assets have different volatilities, which is a problem because higher residual returns may be driven simply by higher risk. To remedy this we also risk-adjust the residual returns before applying any model (2):

$$\tilde{\theta}_{i,t} = \frac{\theta_{i,t}}{\omega_{i,t}} \qquad (2)$$

where $\theta_{i,t}$ is the residual return of stock $i$ at time $t$ and $\omega_{i,t}$ is its residual volatility.
This is a common risk-adjustment process. The residual volatility can be estimated by any generic risk model, such as BARRA, or by simply applying a GARCH-type model. Note that with this standardization the residual returns would have unit cross-sectional dispersion only under a perfect risk model, and we know such perfection does not exist; so the issue of cross-sectional dispersion remains, but it does not contaminate the idea of this discussion.
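For illustration, here is a minimal sketch of the GARCH route, assuming the third-party `arch` package; a production setup would more likely rely on a full risk model such as BARRA:

```python
import pandas as pd
from arch import arch_model  # pip install arch

def risk_adjust(residual_returns: pd.Series) -> pd.Series:
    """Divide residual returns by a GARCH(1,1) estimate of their
    conditional volatility, as in equation (2)."""
    # Returns are scaled by 100 for numerical stability, then scaled back.
    fit = arch_model(residual_returns * 100, vol="Garch", p=1, q=1).fit(disp="off")
    omega = fit.conditional_volatility / 100
    return residual_returns / omega
```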

Applying standardization procedures is absolutely necessary when trying to estimate the IC of an investment strategy. After both residual returns and forecasts are properly standardized, we can estimate the IC. The best way to do so is with the following cross-sectional regression model (3):

$$\tilde{\theta}_{i,t} = \gamma_t \, z_{i,t} + \varepsilon_{i,t} \qquad (3)$$

Given that we use OLS for parameter estimation and both variables are properly standardized, the estimate $\gamma_t$ is simply the correlation between the two. As described above, the correlation between the standardized forecasts and the risk-adjusted residual returns is the IC, so from here on we substitute $\gamma_t$ with IC.
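A minimal sketch of equation (3) on simulated data (function name and data are ours), showing that the OLS slope on standardized inputs coincides with the sample correlation:

```python
import numpy as np

def cross_sectional_ic(forecasts: np.ndarray, resid_returns: np.ndarray) -> float:
    """OLS slope of standardized residual returns on standardized forecasts;
    with both variables standardized, the slope equals their correlation."""
    zf = (forecasts - forecasts.mean()) / forecasts.std()
    zr = (resid_returns - resid_returns.mean()) / resid_returns.std()
    return np.polyfit(zf, zr, 1)[0]

# Simulated cross-section with a true IC of roughly 0.05
rng = np.random.default_rng(0)
zf = rng.standard_normal(500)
zr = 0.05 * zf + rng.standard_normal(500)
print(cross_sectional_ic(zf, zr))          # ~0.05
print(np.corrcoef(zf, zr)[0, 1])           # identical value
```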


How can we check whether the IC is a random process?

Applying (3) for each period t, we get a time series of cross-sectional ICs: IC_1, IC_2, ..., IC_T. According to Ding & Martin (2017), managers should use the time-series sample average and sample standard deviation of the IC as expectations for the future value of their insight. The authors base this statement on the assumption that the IC is a random process, so that the sample estimates of the mean and standard deviation are valid. However, their paper does not provide evidence for this. Therefore we conduct our own investigation of the issue.

An important assumption in both Grinold & Kahn (2000) and Ding & Martin (2017) is that the IC is a random process, so that the estimated sample mean and standard deviation can be used in the development of an active portfolio management framework. This is a very serious proposition, because if the IC is not a random process with a normal distribution, the estimates can carry huge errors. Since there is no single test for whether a given time series is an independent and identically distributed (i.i.d.) process, we propose to fill in the picture with several tests that shed light on different aspects of the time series.

The Shapiro-Wilk test is used for testing the normality of the data. The null hypothesis is that the tested sample comes from a normal distribution; if it is rejected, the IC cannot be validated as normally distributed. The Shapiro-Wilk test is selected because of its greater statistical power compared to the Anderson-Darling and Kolmogorov-Smirnov tests.

The second step is to check the autocorrelation of the IC time series. We use the Ljung-Box test rather than the Durbin-Watson and Breusch-Godfrey tests. The null hypothesis is that the data are independently distributed.

An important property of i.i.d. variables is stationarity. We apply the ADF (Augmented Dickey-Fuller) test. The null hypothesis is the presence of a unit root, i.e. that the IC process is not mean stationary.

The Wald-Wolfowitz (runs) test is also designed to check independence in a time series. It is a non-parametric test whose null hypothesis is that the time-series observations of the IC are mutually independent.

Additionally, we apply Engle's Lagrange multiplier (LM) test for ARCH effects. If its null hypothesis is rejected, managers and analysts must apply some sort of GARCH modelling to properly estimate the IC volatility. This is an important feature when estimating the variation of the investment signal's quality, and it plays an important role in the strategy risk paradigm. The full battery of tests is sketched in code below.
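A minimal sketch of this test battery, assuming scipy and statsmodels and using a simulated IC series in place of real data (the function name and lag choices are ours):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox, het_arch
from statsmodels.tsa.stattools import adfuller
from statsmodels.sandbox.stats.runs import runstest_1samp

def iid_test_battery(ic: np.ndarray) -> dict:
    """p-values of the five tests described above for one IC series."""
    lb = acorr_ljungbox(ic, lags=[12], return_df=True)
    return {
        "shapiro_wilk": stats.shapiro(ic).pvalue,        # H0: normality
        "ljung_box": lb["lb_pvalue"].iloc[0],            # H0: no autocorrelation
        "adf": adfuller(ic)[1],                          # H0: unit root
        "runs": runstest_1samp(ic, cutoff="mean")[1],    # H0: independence
        "arch_lm": het_arch(ic)[1],                      # H0: no ARCH effects
    }

# Simulated monthly IC series, 108 observations as in the example below
rng = np.random.default_rng(42)
ic = rng.normal(loc=0.05, scale=0.10, size=108)
print(iid_test_battery(ic))
```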

Empirical example

After describing the methods that have to be applied in order to check whether the IC is truly a random process, we give an empirical example (data from Yahoo Finance). We test 10 different alpha factors on the Taiwanese stock exchange, chosen mainly because of the abundance of data. The investment universe is the constituents of the broad-market index TAIEX (which also serves as the benchmark). The 10 alpha factors we test are: Momentum (MOM), Net profit margin (NM), Return on Equity (ROE), Operating leverage (OLEV), Change in asset turnover (Δ(S/Assets)), Cash flow from operations to sales (CF_Ops./S), EBIT to EV (EBIT/EV), Sales to EV (S/EV), Earnings yield (E/P), and Book-to-Price (B/P). We estimate the cross-sectional IC each month over the period October 2007 to September 2016, giving us a total of 108 observations, which is enough for any time-series testing.
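To make the monthly estimation concrete, here is a sketch of the loop over months; the DataFrames and function names are hypothetical stand-ins, since we cannot reproduce the actual TAIEX dataset here:

```python
import numpy as np
import pandas as pd

def cs_ic(zf: np.ndarray, zr: np.ndarray) -> float:
    """Cross-sectional IC for one month: correlation of standardized
    forecasts with risk-adjusted residual returns, as in equation (3)."""
    return np.corrcoef(zf, zr)[0, 1]

def monthly_ic_series(z_forecasts: pd.DataFrame, z_resid: pd.DataFrame) -> pd.Series:
    """One IC per month; rows are month-ends, columns are index constituents."""
    return pd.Series({t: cs_ic(z_forecasts.loc[t].to_numpy(),
                               z_resid.loc[t].to_numpy())
                      for t in z_forecasts.index})
```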

Applying the commonly accepted methodology, the IC is the correlation between risk-adjusted historical residual returns and properly standardized ranked factor loadings. In exhibit (1) we show the IC time series for the B/P factor:
[Exhibit 1: Monthly cross-sectional IC time series for the B/P factor]
It is evident that the ICs of this particular factor do look like a random process, and at that the series appears to be both mean and variance stationary. Before proceeding to the application of the testing methodology, let's check the initial estimates of the sample mean and standard deviation of the ICs, shown in exhibit (2):
[Exhibit 2: Sample means and standard deviations of the monthly ICs for the 10 alpha factors]
From exhibit (2) we can conclude that the mean ICs of our alpha factors range from -0.03 to 0.15, while the standard deviations range from 0.05 to 0.16, a rather compact distribution. Given these estimates of the sample mean and sample standard deviation, it is interesting to see how reliable they are. If the IC is indeed an i.i.d. variable, then those estimates from (3) can be applied to estimate the quality of the investment signals and their respective volatility. To ascertain their reliability we apply our testing methodology. The results of all the tests we use are shown in exhibit (3):
[Exhibit 3: Results of the Shapiro-Wilk, Ljung-Box, ADF, Wald-Wolfowitz, and ARCH LM tests for each alpha factor]
The first test we ran was the Shapiro-Wilk test for normality. Out of the 10 alpha factors, in only two cases (6M_MOM and B/P) can we reject the null hypothesis that the estimated sample of ICs comes from a normally distributed population. This means that the ICs of the majority of factors are indeed consistent with a normal distribution.

The Ljung-Box test showed that all sample ICs are independent and exhibit no autocorrelation: in all 10 cases we could not reject the null hypothesis at the 5% significance level.

For all sample ICs the ADF test found no evidence of a unit root; in all cases the null hypothesis of non-stationarity is rejected. This is an important test because it shows that the ICs are indeed mean stationary.

The additional non-parametric test for independence validated the results of the Ljung-Box test in all cases except 6M_MOM. The Wald-Wolfowitz null hypothesis of independence is not rejected for all factors except the momentum factor. This further confirms the absence of autocorrelation in the ICs.

The last test we used is for second-order stationarity, i.e. the presence of ARCH effects. The results of the LM test showed that only in the case of CF_Ops./S is there time-varying volatility. In all other cases the sample IC volatility can be used as an estimate of the volatility of the signal quality.

Conclusion

Overall, from exhibit (3) we can conclude that in most cases the sample IC estimated by cross-sectional regression is indeed a random process that exhibits the properties of an i.i.d. variable.

This is a very important result, because many studies in active management assume that the IC is a random variable but none provide any evidence. Having confirmed this assumption, the estimated sample mean and standard deviation can now be used as expected values for the signal's quality and its volatility.