David Hendry, University of Oxford  Empirical Model Discovery
Model evaluation is reinterpreted as discovering what is wrong with a specification; robust statistics as discovering which subsample is reliable; nonparametrics as discovering what functional form best characterizes the evidence; and model selection as discovering which model best matches the criteria. Yet each is addressed in isolation from the others. Empirical Model Discovery seeks to tackle all of these jointly. Automatic methods enable formulation, selection, estimation and evaluation on a scale well beyond the powers of human intellect, including when there are more candidate variables than observations. The lectures explain how major recent developments facilitate the discovery of empirical models within which the best theoretical formulation is embedded, when the high dimensionality, nonlinearity, inertia, endogeneity, evolution, and abrupt change characteristic of economic data interact to make modelling so difficult in practice. Live computer illustrations using Autometrics show the remarkable power and feasibility of the approach.
Moudud Alam, Örebro University
 Likelihood prediction with generalized linear and mixed models under covariate uncertainty This paper demonstrates techniques for likelihood prediction with generalized linear mixed models. It also presents a way to deal with covariate uncertainty when producing a measure of prediction uncertainty. Several rather nontrivial prediction problems from the existing literature are reviewed and their likelihood solutions are presented.
Yushu Li, Linné University
 Linear and nonlinear causality tests in an LSTAR model: wavelet decomposition in a nonlinear environment In this paper, we use simulated data to investigate the power of different causality tests in a two-dimensional vector autoregressive (VAR) model. The data are generated in a nonlinear environment that is modelled using a logistic smooth transition autoregressive (LSTAR) function. We use both linear and nonlinear causality tests to investigate the unidirectional causality relationship and compare the power of these tests. The linear test is the commonly used Granger causality test. The nonlinear test is a nonparametric test based on Baek and Brock (1992) and Hiemstra and Jones (1994). When implementing the nonlinear test, we separately use the original data, the linear VAR-filtered residuals, and the wavelet-decomposed series based on wavelet multiresolution analysis (MRA). The VAR-filtered residuals and the wavelet-decomposed series are used to extract the nonlinear structure of the original data. The simulation results show that the nonparametric test based on the wavelet-decomposed series (a model-free approach) has the highest power to detect the causality relationship in nonlinear models.
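As a rough, self-contained illustration of the linear benchmark used in this paper, the Granger causality F-test for a bivariate system can be sketched as below. The data generating process, lag length, and function names are illustrative assumptions, not taken from the paper (which additionally studies LSTAR dynamics and the nonparametric Baek-Brock / Hiemstra-Jones test):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate system in which x Granger-causes y (linear case).
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()

def granger_f(y, x, p=1):
    """F-statistic for H0: lags of x do not help predict y, given lags of y."""
    T = len(y)
    Y = y[p:]
    lags = lambda v: [v[p - i - 1:T - i - 1] for i in range(p)]
    Xr = np.column_stack([np.ones(T - p)] + lags(y))   # restricted model
    Xu = np.column_stack([Xr] + lags(x))               # unrestricted model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = T - p - Xu.shape[1]                           # residual degrees of freedom
    return ((rss_r - rss_u) / p) / (rss_u / df)

F = granger_f(y, x, p=1)   # large values reject Granger non-causality
```

A large F relative to the F(p, T - 2p - 1) reference distribution indicates that lags of x help predict y; running the test in the reverse direction should give a much smaller statistic here.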

Peter Karlsson, Jönköping International Business School  The Incompleteness Problem of the APT Model The Arbitrage Pricing Theory provides a framework for quantifying risk and the reward for bearing it. While the theory itself is sound from most perspectives, its empirical version suffers from several shortcomings. One particularly delicate problem arises because the set of observable asset returns rarely has a complete history of observations. Traditionally, this problem has been handled by simply excluding assets without a complete set of observations from the analysis. Unfortunately, such a methodology can be shown (i) to induce, for any fixed time period, a selection bias whereby only the largest companies remain, and (ii) to yield, asymptotically, an empty set containing no observations at all. This paper discusses some possible solutions to this problem and also provides a case study based on Swedish OMX data for demonstration.
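The asymptotic emptiness of the complete-case set is easy to see numerically: if each asset-period is observed independently with probability p, the expected complete-case fraction is p^T, which vanishes as the sample period T grows. The toy simulation below (all parameters hypothetical, and ignoring the size-related selection mechanism the abstract describes) illustrates only this shrinkage:

```python
import numpy as np

rng = np.random.default_rng(7)

def complete_case_fraction(n_assets, T, p_obs):
    """Fraction of assets with a complete T-period history when each
    asset-period is observed independently with probability p_obs."""
    observed = rng.random((n_assets, T)) < p_obs
    return observed.all(axis=1).mean()

# Expected fraction is p_obs ** T, so it vanishes as T grows.
fractions = [complete_case_fraction(10000, T, 0.99) for T in (50, 200, 800)]
```

Even with 99% per-period availability, the complete-case fraction collapses from a majority of assets to essentially none as the history lengthens.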

Yuna Liu, Umeå University
 International Stock Market Integration and Market Risk: The Nordic Experience In this paper we study whether the creation of a uniform stock trading platform (OMX, NASDAQ OMX) for the Nordic countries (Sweden, Finland, Denmark and Iceland), which facilitates cross-border trading, has changed the long-run structure of stock market volatilities and correlations on the Nordic stock markets. To accomplish this, the trends in time-varying volatilities and correlations are filtered out in a first step using a nonparametric decomposition building on loess. In a second step, possible changes in these trends due to the integration of the markets are analyzed. The analysis is complemented by parametric CGARCH models and extensions of these (Component Correlation-GARCH models). The results indicate, among other things, that the long-run trend in volatility decreased on the Swedish and Finnish stock markets when a uniform trading platform was introduced for these countries.
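The first, nonparametric filtering step can be mimicked with a basic local linear (loess-style) smoother applied to squared returns as a volatility proxy. This is a sketch under simplified assumptions (tricube weights, a fixed neighbourhood fraction, simulated returns with a variance break), not the authors' exact decomposition:

```python
import numpy as np

def loess_trend(y, frac=0.3):
    """Local linear regression trend (loess-style) with tricube weights."""
    n = len(y)
    k = max(3, int(frac * n))                # neighbourhood size
    t = np.arange(n, dtype=float)
    out = np.empty(n)
    for i in range(n):
        d = np.abs(t - t[i])
        idx = np.argsort(d)[:k]              # k nearest neighbours in time
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel
        X = np.column_stack([np.ones(k), t[idx] - t[i]])
        sw = np.sqrt(w)                      # weighted least squares via rescaling
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y[idx], rcond=None)
        out[i] = beta[0]                     # fitted local level at time i
    return out

# Volatility proxy: squared returns from a series with a variance break.
rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(0, 1.0, 200), rng.normal(0, 2.0, 200)])
trend = loess_trend(r ** 2, frac=0.3)
```

The smoothed trend rises across the break, which is the kind of long-run shift the second step of the analysis would then test for.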

Farrukh Javed, Lund University
 GARCH-Type Models and the Performance of Information Criteria GARCH models have gained popularity over the last two decades, largely owing to their ability to capture the nonlinear dynamics often observed in real-life data, especially in financial markets. This paper discusses the relative ability of some common information criteria (AIC, AICc, SIC and HQ), using the probability of correct selection as a measure of performance, in the presence of GARCH effects. The investigation is performed using Monte Carlo simulation of conditional variance GARCH processes with six different kinds of DGPs, including GARCH(1,1) with leverage and GARCH(1,1) with spillover effects. All these models are further simulated with different parameter combinations to study the possible effect of volatility structures on these information criteria. We find that the volatility structure of the time series affects the performance of these criteria.
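The mechanics of comparing information criteria under a GARCH effect can be sketched as follows: simulate a GARCH(1,1) process, evaluate the Gaussian conditional log-likelihood, and penalize by the number of parameters. For simplicity this sketch evaluates the GARCH likelihood at the true parameters rather than estimating them, so it is illustrative only; all parameter values are hypothetical:

```python
import numpy as np

def simulate_garch11(T, omega, alpha, beta, rng):
    """Simulate r_t = sqrt(h_t) * z_t with h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    h = omega / (1 - alpha - beta)           # start at the unconditional variance
    r = np.empty(T)
    for t in range(T):
        r[t] = np.sqrt(h) * rng.normal()
        h = omega + alpha * r[t] ** 2 + beta * h
    return r

def garch11_loglik(r, omega, alpha, beta):
    """Gaussian conditional log-likelihood of a GARCH(1,1) filter."""
    h = np.mean(r ** 2)                      # initialize at the sample variance
    ll = 0.0
    for x in r:
        ll += -0.5 * (np.log(2 * np.pi * h) + x ** 2 / h)
        h = omega + alpha * x ** 2 + beta * h
    return ll

rng = np.random.default_rng(2)
r = simulate_garch11(1000, 0.05, 0.2, 0.75, rng)
T = len(r)
ll_g = garch11_loglik(r, 0.05, 0.2, 0.75)                     # GARCH(1,1), k = 3
ll_c = -0.5 * T * (np.log(2 * np.pi * np.mean(r ** 2)) + 1)   # constant variance, k = 1
aic = lambda ll, k: -2 * ll + 2 * k
sic = lambda ll, k: -2 * ll + k * np.log(T)
```

With data truly generated by a GARCH process, the conditional-variance model attains the lower (better) criterion value despite its larger penalty, which is the kind of correct-selection event whose probability the paper tracks.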

Shutong Ding, Örebro University
 Model Selection in Dynamic Factor Models Dynamic factor models have become popular in applied macroeconomics with the increased availability of large data sets. We consider model specification, i.e. the choice of lag length and the number of factors, in the setting of factor-augmented VAR models. In addition to the standard Bayesian approach based on Bayes factors and the marginal likelihood, we also study model choice based on the predictive likelihood, which is particularly appealing in a forecasting context. As a benchmark, we compare the performance of the Bayesian procedures with frequentist approaches, such as the factor selection method of Bai and Ng.
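In a toy conjugate setting the predictive likelihood coincides with the marginal likelihood and can be computed as a product of one-step-ahead predictive densities. The sketch below (a normal mean model with known variance; all numbers hypothetical, and far simpler than a factor-augmented VAR) shows the idea behind using it for model choice:

```python
import numpy as np

def log_predictive_likelihood(y, mu0, tau0, sigma):
    """Sum of one-step-ahead log predictive densities for y_t ~ N(mu, sigma^2)
    with a conjugate N(mu0, tau0^2) prior on mu; equals the log marginal likelihood."""
    ll, mu, tau2 = 0.0, mu0, tau0 ** 2
    for yt in y:
        s2 = tau2 + sigma ** 2                               # predictive variance
        ll += -0.5 * (np.log(2 * np.pi * s2) + (yt - mu) ** 2 / s2)
        w = tau2 / s2                                        # conjugate posterior update
        mu = mu + w * (yt - mu)
        tau2 = tau2 * sigma ** 2 / s2
    return ll

rng = np.random.default_rng(6)
y = rng.normal(1.0, 1.0, 200)
ll_good = log_predictive_likelihood(y, mu0=1.0, tau0=1.0, sigma=1.0)
ll_bad = log_predictive_likelihood(y, mu0=-3.0, tau0=1.0, sigma=1.0)
log_bayes_factor = ll_good - ll_bad       # > 0 favours the well-centred model
```

The difference of the two log predictive likelihoods is the log Bayes factor, so ranking specifications by predictive likelihood reproduces the standard Bayesian comparison in this conjugate case.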

Rashid Mansoor, Jönköping International Business School
 Estimating mean-variance ratios of financial data The Sharpe ratio, defined as the ratio of excess return to its risk, provides a measure of relative excess return. The mean-variance ratio, in turn, links the first raw moment to the second central moment. In this paper we suggest some estimators of the mean-variance ratio of financial data. The study is motivated by considering a functional form between the mean and the standard deviation of stock returns under an assumed multivariate normal distribution. Three potential estimators of the ratio are developed and their asymptotic properties are derived. An empirical investigation is then performed on several US stock returns in order to compare the different estimators of the ratio across sectors and to test whether they differ significantly.
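A minimal plug-in estimator of the mean-variance ratio, with a delta-method standard error derived under i.i.d. normality (so the sample mean and sample variance are independent), might look as follows. This is a sketch of the general idea, not one of the paper's three estimators, and the return parameters are hypothetical:

```python
import numpy as np

def mv_ratio(x):
    """Plug-in estimator of mu / sigma^2 with a delta-method standard error,
    derived under i.i.d. normality (mean and variance estimators independent)."""
    n = len(x)
    m, s2 = x.mean(), x.var(ddof=1)
    ratio = m / s2
    # Var(ratio) ~ Var(m)/s2^2 + m^2*Var(s2)/s2^4, with Var(m)=s2/n, Var(s2)=2*s2^2/n
    se = np.sqrt(1.0 / (n * s2) + 2.0 * m ** 2 / (n * s2 ** 2))
    return ratio, se

rng = np.random.default_rng(3)
x = rng.normal(0.05, 0.2, 5000)   # hypothetical returns: mu = 0.05, sigma = 0.2
ratio, se = mv_ratio(x)           # true mean-variance ratio is 0.05 / 0.04 = 1.25
```

The standard error makes cross-sector significance tests of the kind the abstract mentions straightforward via a normal approximation.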

Karin Stål, Stockholm University
 Added variable plots in nonlinear regression An added variable plot is commonly used in linear regression diagnostics. The plot provides information about the effect of adding a further regressor to the model. It can reveal nonlinearity in the selected regressor, as well as outliers and influential observations that may seriously affect the least-squares estimate of the parameter corresponding to the selected regressor. In this paper, added variable plots are derived for a nonlinear regression model with an additive error term. The added variable plot for this nonlinear regression model differs from the plot in the linear regression case: it is not created for a specific explanatory variable, but for a parameter. The plot can therefore be called an added parameter plot, since it provides information about the modification of the model by adding a parameter. The plot also gives a more formal tool for judging the importance of the parameter, since it is closely connected to the score test of the null hypothesis that the added parameter is zero. It is proved that the value of the score test statistic equals the SSR of the regression through the origin in the added parameter plot, divided by the estimated variance under the null hypothesis.
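In the linear case, the key identity behind the plot, namely that the through-origin slope in the added variable plot equals the coefficient of the added regressor in the full model (the Frisch-Waugh-Lovell theorem), can be verified directly. The simulated design below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
z = 0.5 * x + rng.normal(size=n)                  # candidate regressor to add
y = 1.0 + 2.0 * x + 0.7 * z + rng.normal(size=n)

def resid(v, X):
    """Residuals of v after least-squares projection on the columns of X."""
    b, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ b

X = np.column_stack([np.ones(n), x])
ey, ez = resid(y, X), resid(z, X)                 # the two axes of the added variable plot

slope = (ez @ ey) / (ez @ ez)                     # through-origin slope in the plot
full, *_ = np.linalg.lstsq(np.column_stack([X, z]), y, rcond=None)
```

The through-origin slope matches the coefficient on z in the full regression of y on (1, x, z); the SSR of that through-origin fit is the quantity the score-test result in the abstract generalizes to the nonlinear case.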

Petra Ornstein, Uppsala University
 A Generalized Rank-Based Polychoric Correlation Depending on the properties of the variables, different correlation measures are appropriate. If the variables are ordinal, the Pearson product-moment correlation is not valid, while the Spearman rank correlation can still be used. When interest lies in a hypothesized underlying continuous variable, however, the Spearman rank correlation does not work well. If the underlying variables follow a bivariate normal distribution, the polychoric correlation can recover the Pearson product-moment correlation. The purpose of this paper is to generalize the polychoric correlation so that it is more robust against the distributional assumption. We propose fitting the polychoric correlation using the Spearman rank correlation adjusted for discrete data. In a Monte Carlo simulation study, we find that our measure performs almost as well as the polychoric correlation when the assumptions hold, but outperforms it under skewness. We show that it is unbiased and that its properties can be derived from the Spearman rank correlation. For testing under the null hypothesis of zero correlation, our statistic is consistent and asymptotically normal.
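One ingredient of the rank-based approach can be illustrated with continuous bivariate normal data, where the classical relation rho = 2 sin(pi * rho_S / 6) recovers the latent Pearson correlation from the Spearman rank correlation, while the Pearson correlation of coarsely discretized (ordinal) versions is attenuated. This sketch is not the authors' discreteness-adjusted estimator; the thresholds and sample size are illustrative:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(5)
rho, n = 0.6, 20000
u = rng.normal(size=(n, 2))
x = u[:, 0]
y = rho * u[:, 0] + np.sqrt(1 - rho ** 2) * u[:, 1]   # latent correlation 0.6

rs = spearman(x, y)
rho_hat = 2 * np.sin(np.pi * rs / 6)     # classical inversion under bivariate normality

# Pearson correlation of coarse 3-category ordinal versions is attenuated.
cx = np.digitize(x, [-0.5, 0.5])
cy = np.digitize(y, [-0.5, 0.5])
r_ordinal = np.corrcoef(cx, cy)[0, 1]
```

The rank-based inversion recovers the latent correlation closely, while naive Pearson correlation on the ordinal data understates it, which is the gap the polychoric correlation and its generalization address.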

Bertil Wegmann, Stockholm University
 Inference in Second Price Auctions with Gamma Distributed Common Values Our paper explores possible limitations of the Gaussian model in Wegmann and Villani (2008, WV) due to intrinsically nonnegative values. The relative performance of the Gaussian model is compared with an extension of the Gamma model in Gordy (2008) within the symmetric second price common value model. A key feature of our approach is the derivation of an accurate approximation of the bid function for the Gamma model, which can be inverted and differentiated analytically. This is extremely valuable for fast and numerically stable evaluations of the likelihood function. The general MCMC algorithm in WV is applied to WV's eBay dataset of 1000 auctions of U.S. proof coin sets, as well as to simulated datasets from the Gamma model with different degrees of skewness in the value distribution. The Gaussian model fits this particular eBay dataset slightly better than the Gamma model, which can be explained by the fairly symmetric value distribution. The superiority of the Gamma model over the Gaussian model is shown to increase with the degree of skewness in the simulated datasets.

Jens Olofsson, Örebro University
 Algorithms to find exact inclusion probabilities for 2Pπps designs The statistical literature contains several proposals for methods generating fixed-size, without-replacement πps sampling designs. Methods for strict πps designs have rarely been used due to difficulties with implementation. On the other hand, approximate πps designs, such as the Conditional Poisson sampling design (Hájek), are popular alternatives. Laitila and Olofsson presented an easily implemented sampling design, the 2Pπps sampling design, using a two-phase approach. The first-order inclusion probabilities of the 2Pπps design are asymptotically equal to the target inclusion probabilities of a strict πps design. This paper extends the work on the 2Pπps design and presents algorithms for the calculation of exact first- and second-order inclusion probabilities. Starting from the probability mass function (pmf) of the sum of N independent, but not identically distributed, Bernoulli variables, the algorithms are based on derived expressions for the pmfs of sums of N-1 and N-2 variables, respectively. Exact inclusion probabilities facilitate standard design-based inference and provide a tool for studying the properties of the 2Pπps design. The Conditional Poisson sampling design is shown to be a special case of the 2Pπps design. However, empirical results presented show that the properties of the suggested point estimator can be improved by using a more general 2Pπps design.
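The starting point, the pmf of a sum of N independent but non-identically distributed Bernoulli variables (the Poisson-binomial distribution), can be computed exactly by a standard O(N^2) recursion over units. The function name and probabilities below are illustrative, not the paper's algorithm for the 2Pπps design itself:

```python
import numpy as np

def poisson_binomial_pmf(p):
    """Exact pmf of S = sum of independent Bernoulli(p_i) variables
    (the Poisson-binomial distribution), via a recursion over units."""
    pmf = np.array([1.0])                    # pmf of the empty sum
    for pi in p:
        # Unit i is either excluded (factor 1 - pi) or included (shift by one, factor pi).
        pmf = np.concatenate([pmf * (1 - pi), [0.0]]) + np.concatenate([[0.0], pmf * pi])
    return pmf

pmf = poisson_binomial_pmf([0.1, 0.4, 0.7])
```

The resulting pmf sums to one and has mean equal to the sum of the inclusion probabilities; conditioning such pmfs on a fixed sum is the basic operation behind exact inclusion-probability calculations of this kind.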

Dao Li, Örebro University
 Nonlinear Cointegration in Nonlinear Vector Autoregressive Models Existing cointegration studies mainly concern integrated time series. This is often not applicable when the time series are global processes. In this paper, we propose a definition of smooth-transition (ST) type nonlinear cointegration for a group of individually global time series. We study the smooth-transition vector autoregressive (STVAR) model to accommodate the proposed nonlinear cointegration. Our model is also suitable for studying common nonlinear factors in an economic system. We study the properties of STVAR models and test for common nonlinear factors, or nonlinear cointegration. Simulation studies are carried out to show the asymptotic characteristics of the tests. Finally, we apply our work to consumption and income data (United States, monthly from 1959:1 to 2010:3) and compare the forecasting results with a linear VAR model.
