A Meta Model Analysis of Exchange Rate Determination

A novel approach to modelling exchange rates is presented based on a set of models distinguished by the drivers of the rate and regime duration. The models are combined into a `meta model' using model averaging and non-nested hypothesis-testing techniques. The meta model accommodates periods of stability and slowly-evolving or abruptly-changing regimes involving multiple drivers. Estimated meta models for five exchange rates provide a compelling characterisation of their determination over the last forty years or so, identifying `phases' during which the influences from policy and financial market responses to news succumb to equilibrating macroeconomic pressures and vice versa.

1 Introduction

Obstfeld and Rogoff (2009) cite the weak relationship between the exchange rate and the rest of the economy as one of the major puzzles in international macroeconomics: the so-called "exchange rate disconnect puzzle".1 Engel et al. (2008) provide a useful framework which considers the exchange rate to depend on fundamental drivers and expected future rates and which highlights some of the sources of the disconnect. The framework accommodates any model incorporating Uncovered Interest Parity where, in this case, the drivers are the set of variables chosen to account for the behaviour of the interest rate differential. In practice, if the focus of attention is on the short term, the set of variables chosen to account for interest rate movements are those best able to capture the effects of policy responses or financial market responses to news. When attention is on the macroeconomy's longer-term adjustment to its steady-state level, variables that capture broader equilibrating pressures on interest rates are considered more appropriate.2 Of course, in reality, both sets of influences on the exchange rate could play a role at any one time, with their relative importance likely to change over time depending on the extent of business cycle shocks and turbulence in the financial markets and the scale and speed of changes in countries' longer-term macroeconomic

1 The disconnect lies behind the difficulties involved in forecasting exchange rates, which have been well-rehearsed since Meese and Rogoff's seminal (1983) paper where point predictions from a driftless random walk model were no worse than those from more sophisticated models. See Rossi (2013) for a review of the literature on exchange rate predictability.
2 On the basis of a survey of US foreign exchange traders, Cheung and Chinn (2001) report that conventional macroeconomic pressures are thought to be important for exchange rate movements by 1% of traders at the intraday horizon, but by 59% of traders in the medium run (i.e. up to six months) and by 88% of traders in the long run (i.e. over six months). Comparisons with the results of earlier surveys also lead them to conclude that these rankings of variables change substantially over time.
outlook. As Engel et al. point out, the forward-looking nature of exchange rate determination compounds these difficulties, shifting the influence from the current value of the interest rate fundamentals to their expected future paths. This means the relative weight of the different sets of influences can change in anticipation of changes in future policy or the macroeconomic outlook as well as in response to contemporaneous changes.
This inherent instability poses difficulties in applied work and is an explanation for why no single exchange rate model performs well in explaining or forecasting different currencies over different samples. Researchers have attempted to accommodate structural instability in exchange rate models through single-equation time-varying parameter models (see Wolff (1987) or Schinasi and Swamy (1989), for example) and through Markov-switching models (as in Engel (1994), for example). Model averaging, in which a variety of models are estimated (recursively or with a rolling window) and then combined with time-varying weights, has also been employed. This can be approached as a full Bayesian exercise, as in Wright (2008) or Byrne et al. (2017) for example, with the weights defined by an estimated posterior probability that the model holds true, or following a more standard forecast-combination approach in which, at each time, all models are given equal weight or a weight based on `out-of-sample' performance in a recent training period (see, for example, Sarno and Valente (2009)).
In this paper, we adopt a model averaging approach to deal with the inherent structural instability in exchange rate determination, but we emphasise the `regime uncertainty' surrounding the length of time for which a set of fundamentals exerts its influence as well as the `model instability' surrounding the choice of fundamentals.
This follows the suggestion of Pesaran and Timmermann (2007) to apply model averaging techniques to alternative models of the same type but estimated over different estimation windows. We allow for uncertainty across model fundamentals as in the literature then, but we also pay explicit attention to the duration of the period over which the different fundamentals are relevant. This distinguishes the approach from those exchange rate papers where the time-variation is introduced implicitly through the recursive nature of the modelling or through the application of a simple rolling window. The more implicit approach may be reasonable in forecasting exercises but it could obscure important regime shifts when the model averaging exercise is conducted to make economically-meaningful inferences. We use the term `meta modelling' to flag our emphasis on regime instability when compared to more usual model averaging methods.3 We also introduce a novel approach to constructing the time-varying weights in our model averages by adopting non-nested hypothesis-testing methods. Here, a characterisation of the data generating process based on a particular combination of fundamentals continues until there is evidence to reject it in favour of a new characterisation. Non-nested testing methods are involved as the new characterisation could be based on a very different combination of fundamentals. The approach has the advantage that, to the extent that it is warranted by the data, it builds in a degree of stability in the characterisation over time by taking the current model as the maintained hypothesis. Given the volatility of exchange rates, this is a feature that is often missing from models driven purely by Bayesian updating or weights based on forecast performance over the recent past, and this undermines those models' ability to provide an economic narrative to explain the changes over time.4

3 Lee et al. (2013, 2015) provide descriptions of the conduct of monetary policy in the UK and US based on estimated "meta-Taylor rules" for the two countries, obtained using similar methods to those of this paper, where the duration of different policy regimes is an important focus of interest.
4 See Timmermann (2006) and Aiolfi et al. (2011) for discussion of the approaches taken to model averaging in the forecasting context.

In the next section, we briefly comment on some traditional models of exchange rate determination to motivate the use of different fundamentals in different models and our characterisation of these as reflecting policy or financial market responses to news or equilibrating macroeconomic pressures. Section 3 elaborates on the model averaging approach that we adopt to construct our meta model. The methods are applied to monthly data for the exchange rates of five currencies against the US dollar spanning the last forty or fifty years in Section 4. Exchange rate determination in these countries is characterised according to a series of phases in which there is an ebb and flow between the pressures on the exchange rate from policy and financial market responses to news and those from longer-term macroeconomic adjustments.
Section 5 provides concluding remarks.

2 Exchange Rate Fundamentals and Structural Uncertainty
There are four structural models of exchange rate determination frequently found in the literature, which we characterise as being more or less relevant during periods of economic turbulence or stability.
All of these can be related to the uncovered interest parity condition

i_t - i_t^* = s_{t+1}^e - s_t,    (2.1)

where s_t is the nominal exchange rate at time t, defined as the home price of a unit of foreign currency, i_t and i_t^* are the nominal interest rates paid on domestic and foreign assets during period t respectively, the `e' superscript indicates expectations (formed at time t) and lower-case variables denote logarithms.

5 Rossi (2013) provides detailed descriptions of these models and the evidence relating to them.

These include models based
on interest rate parity fundamentals in which, iterating forwards, taking expectations and assuming that the expected future interest rate differential follows a simple autoregressive specification, we can write

Δs_t = α_1 + λ_1 (i_{t-1} - i_{t-1}^*) + ε_{1t},    (2.2)

where the λ's are parameters and ε_{1t} represents stationary innovations. Alternatively, working with the determinants of the interest rate as expressed in the Taylor rule (i.e. inflation Δp_t and the output gap ỹ_t) and assuming these influences affect domestic and foreign interest rates, a model based on Taylor rule fundamentals would be written as

Δs_t = α_2 + λ_21 (Δp_{t-1} - Δp_{t-1}^*) + λ_22 (ỹ_{t-1} - ỹ_{t-1}^*) + λ_23 q_{t-1} + ε_{2t},    (2.3)

where q_t is the log of the real exchange rate, the λ's are parameters and ε_{2t} again represents stationary innovations.
In less turbulent times, the future path of interest rates will reflect broader macroeconomic conditions and might be better captured by Mark's (1995) approach to modelling the exchange rate, in which deviations of the nominal exchange rate from its equilibrium are gradually eliminated over time according to

Δs_{t+1} = γ (s̄_t - s_t) + ε_{t+1},    (2.4)

where s̄_t - s_t is the deviation of the time-t equilibrium exchange rate, s̄_t, from the actual rate. The Purchasing Power Parity (PPP) hypothesis provides a candidate for the equilibrium level of the exchange rate based on the `law of one price', so we can write

Δs_{t+1} = γ_3 (p_t - p_t^* - s_t) + ε_{3,t+1},    (2.5)

where ε_{3t} again reflects stationary innovations. Alternatively, the monetary model of the exchange rate characterises the equilibrium exchange rate as depending on relative money supplies, relative income levels and the interest rate differential and can motivate a model of the form

Δs_{t+1} = γ_4 ((m_t - m_t^*) - κ (y_t - y_t^*) + λ (i_t - i_t^*) - s_t) + ε_{4,t+1}.    (2.6)

Modelling structural uncertainty

The four models outlined in (2.2), (2.3), (2.5) and (2.6) are all relatively standard in the literature. Our discussion emphasises that any one of them, or a combination of them, could be more or less relevant in different circumstances and over different sample windows. In what follows then, at time t, there are m × h models that can be used to characterise recent changes in the exchange rate, described by

Δs_τ = β'_{ij} x_{iτ} + ε_{ijτ},   τ = t - h_j, ..., t;  i = 1, ..., m;  j = 1, ..., h,    (2.7)

where h = h_max - h_min + 1 and the complexity of the subscripts reflects the flexibility of the modelling framework. Here, model M_ijt is assumed to explain the change in the exchange rate over the period t - h_j to t, and allowing h_j to vary means we contemplate models that might be relevant only for the most recent past or back to h_max periods in the past. The model involves x_{iτ}, which is the i-th of the m alternative sets of explanatory variables driving the exchange rate; these represent the fundamentals proposed by the interest rate parity, Taylor rule, PPP and monetary models respectively in the case of the models in (2.2)-(2.6) and, for models of the form in (2.4), the lagged exchange
rate level also.

The uncertainty surrounding the determination of exchange rates is reflected by the idea that the change in the exchange rate observed at any particular time t could be explained by any one of the m × h different models according to (2.7). The meta modelling approach accommodates this uncertainty by using a weighted average of the alternative models in (2.7). The approach starts from a Bayesian Model Averaging (BMA) formula but is classical in nature, avoiding the (often problematic) Bayesian assumption that the model set includes the true data generating process (dgp) and avoiding the need to specify prior probabilities for the unknown parameters in the models or for the models themselves. Indeed, the estimated meta model simply aims to characterise exchange rate movements taking account of the possibility of changes in the relative importance of the fundamentals at different times. As we shall see, the model weights in our preferred meta model are updated in each period on the basis of non-nested hypothesis tests, accommodating the possibility of a structural break by switching to an alternative structural model if there is evidence to reject the previously-held null. The meta model could reflect the true dgp if we know one of the fundamental models under consideration holds true at all times or if, for example, exchange rate decisions are made by different groups, each focused on different fundamentals, and the weights capture the proportions of individuals in the respective groups as these change over time. But the meta modelling approach is also consistent with the true dgp being distinct from all of the underlying structural models considered. Here the weights simply convey the real-time adequacy of the underlying structural models in characterising recent exchange rate movements, and a reduction of the weight on a model because of its rejection does not imply acceptance of its alternative but simply reflects the shortcomings of the previously-held null.
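To fix ideas, the model space in (2.7) can be enumerated directly. The sketch below is illustrative only: the variable names, window bounds and synthetic data are our own assumptions, not the paper's. It fits one OLS model per combination of fundamentals and sample length and records each model's final-observation residual, the ingredient used later in the weight updates.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 120  # months of synthetic data

# Hypothetical fundamentals: each entry plays the role of one x_i in (2.7)
ds = rng.normal(0, 1, T)                   # exchange-rate changes (synthetic)
fundamentals = {
    "IRP":      rng.normal(0, 1, (T, 1)),  # interest differential
    "Taylor":   rng.normal(0, 1, (T, 3)),  # inflation diff., output gaps, q
    "PPP":      rng.normal(0, 1, (T, 1)),  # relative-price gap
    "Monetary": rng.normal(0, 1, (T, 3)),  # money, income, interest diffs
}

def fit_ols(y, X):
    """OLS with intercept; return the residual vector."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return y - Z @ beta

def candidate_models(t, h_min=24, h_max=35):
    """All m x h models usable at time t: one per (fundamentals, window h_j)."""
    models = {}
    for name, X in fundamentals.items():
        for h in range(h_min, h_max + 1):
            resid = fit_ols(ds[t - h:t], X[t - h:t])
            models[(name, h)] = resid[-1]  # final-period residual
    return models

models = candidate_models(T)
# 4 fundamentals x 12 window lengths = 48 models, as in the empirical work
print(len(models))
```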

3 Model Averaging
The basis of the meta modelling approach is the BMA formula

Pr(θ_t | Z_t) = Σ_i Σ_j Pr(θ_t | M_ijt, Z_t) × Pr(M_ijt | Z_t),    (3.8)

where Z_t = (z_1, ..., z_t) represents the data available at t, with z_τ = (Δs_τ, x_τ), and θ_t represents the unknown parameters capturing the influence of all the fundamentals under consideration. The Pr(θ_t | Z_t) describes our understanding of the parameters of interest and the M_ijt represent the various models described at (2.7).
The BMA formula decomposes the uncertainties accommodated within Pr(θ_t | Z_t) into a weighted average of the conditional distributions, Pr(θ_t | M_ijt, Z_t), using as weights the model probabilities Pr(M_ijt | Z_t). A strict Bayesian requires a prior distribution for the unknown parameters of all the models to evaluate the conditional distributions. Alternatively, if no meaningful prior distribution is available, one can make the more classical assumption that

Pr(θ_t | Z_t, M_ijt) ≈ N(θ̂_ijt, V̂_ijt),

where θ̂_ijt is the familiar maximum likelihood estimate of the parameters under M_ijt, and V̂_ijt is the asymptotic covariance matrix of θ̂_ijt. This assumption treats θ_t as a random variable at the inference stage so that standard inference can be carried out for each model in turn.
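Under this normal approximation, the model-averaged distribution of any single parameter is a Gaussian mixture, and its first two moments follow from the standard mixture formulae. A minimal sketch, in which the estimates, variances and weights are invented for illustration:

```python
import numpy as np

# Hypothetical estimates of one parameter from three competing models
theta_hat = np.array([0.4, -0.1, 0.2])    # ML estimates under each model
var_hat   = np.array([0.02, 0.05, 0.03])  # asymptotic variances
w         = np.array([0.5, 0.3, 0.2])     # model weights Pr(M | Z)

# Moments of the Gaussian mixture  sum_i w_i N(theta_hat_i, var_hat_i)
mean = w @ theta_hat
var  = w @ (var_hat + theta_hat**2) - mean**2
print(round(mean, 3), round(var, 3))
```

The mixture variance exceeds the weighted average of the individual variances whenever the point estimates disagree, which is how model uncertainty feeds into the reported parameter uncertainty.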

The Model Weights
Turning to the model weights Pr(M_ijt | Z_t), we note that in the context of exchange rate determination, where the models under consideration are unlikely to be exhaustive even allowing for structural breaks, the strict Bayesian requirement to assign prior probabilities to all models at each point seems unrealistic, or at least very demanding.
The alternative `frequentist' model averaging approaches found in the literature are reviewed in Steel (2020), noting that there weights are chosen to deliver parameters with desirable properties under repeated sampling. The relative advantages of the different frequentist approaches considered in the review make little or no reference to models of different sample lengths though (and, indeed, the desirable properties are typically related to the asymptotic properties of the estimators). This reinforces our use of the term `meta modelling', with its focus on regime uncertainty and the choice of sampling window, to distinguish it from the more usual context for model averaging.
The Meta (Non-Nested Testing) Approach

A pragmatic approach to deriving model weights in these circumstances is to allow them to evolve over time, updating the weights in each period to reflect new evidence on whether the previously-held view continues to be valid or whether an alternative new-born model is now appropriate.
Since the new-born model could involve an entirely different set of fundamentals to those of the previously-held model, the evidence involves non-nested hypothesis-testing (NNT) methods, which are relevant when one model cannot be obtained from the other by imposition of parameter restrictions or through a limiting process.
The meta-NNT approach can be formalised by writing down, for any t and for all models with non-zero weight, the rule by which the time-(t-1) weights are reallocated at time t. In transferring weights, our interest is in whether the most recent observation confirms or flags shortcomings of our currently-held characterisation of the data. A natural statistic on which to base the test between two models is the ratio of the squared residuals obtained for the final observation of the two competing models, denoted R say. Here, a large (absolute) value of the residual from the null model casts doubt on its continued relevance, but this is judged relative to the performance of the realistic alternative models. In the case where the alternative is the same behavioural model but with changed parameters based on a shorter sample period, the alternative is nested within the null and the statistic provides a standard F-test of structural instability, itself a likelihood ratio test under the assumption of normally-distributed errors. But, more generally, neither model is nested within the other and non-nested testing procedures are required. The `Cox test' of two competing non-nested models involves modifying the likelihood ratio test statistic to obtain a statistic with known asymptotic distribution. The modification is required because, taking one model as the null, the alternative is misspecified and its estimated likelihood will depend on the parameters of the null model.7 In most cases, the required modification renders

6 Alternatively, as illustrated in the empirical exercise below, the weights could be reallocated according to the strength of the rejection (denoted the `meta-NNTp' approach).
7 Pesaran (1974) describes the modification required to take into account the misspecification in the case of two non-nested linear regression models estimated over a common sample and derives a statistic which is asymptotically normally distributed with zero mean and calculable finite variance.
the distribution of the statistic analytically intractable, so that simulation methods are required.
The simulation exercise involved here is computationally demanding but relatively straightforward. Here the previously-held model has a clear status as the null and so can be used to simulate R artificial samples of the exchange rate over τ = t - h_j, ..., t, using the estimated parameters of model M_ijt and making random draws from a Normal distribution with mean zero and variance equal to that estimated under M_ijt. For each artificial sample, the null and alternative models can be estimated and the ratio of the squared residuals obtained for the final observation of the two competing models, R^(r), can be calculated. The set of simulated R^(r) statistics provides the appropriate distribution against which to compare the observed R under the null that model M_ijt is true. Finding that this value lies in the upper 5%, say, of the simulated distribution provides significant evidence to reject the model in favour of the new alternative. Carrying out this exercise at each point in time, holding in turn each model with non-zero probability as the null and comparing it to all realistic alternative models, provides the means to update the weights over time.
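The procedure can be sketched as a small parametric bootstrap. Everything below (the function names, the R = 500 replications, the synthetic data) is our own illustrative scaffolding; only the logic (simulate under the null, recompute the final-residual ratio on each artificial sample, and compare the observed ratio against the simulated distribution) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

def final_resid(y, X):
    """OLS with intercept; return the residual at the final observation."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return (y - Z @ beta)[-1]

def nnt_pvalue(y, X_null, X_alt, R=500):
    """Simulated p-value for rejecting the null model in favour of the
    (possibly non-nested) alternative, based on the ratio of squared
    final-observation residuals."""
    n = len(y)
    Z = np.column_stack([np.ones(n), X_null])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted = Z @ beta
    sigma = np.sqrt(np.mean((y - fitted) ** 2))  # error s.d. under the null
    obs = final_resid(y, X_null) ** 2 / final_resid(y, X_alt) ** 2
    sims = np.empty(R)
    for r in range(R):
        y_sim = fitted + rng.normal(0, sigma, n)  # artificial sample under null
        sims[r] = final_resid(y_sim, X_null) ** 2 / final_resid(y_sim, X_alt) ** 2
    return np.mean(sims >= obs)  # small p-value: reject the null model

# Synthetic example: data actually generated by the 'alternative' fundamentals
n = 60
X_alt = rng.normal(size=(n, 1))
X_null = rng.normal(size=(n, 1))
y = 1.5 * X_alt[:, 0] + rng.normal(0, 0.3, n)
p = nnt_pvalue(y, X_null, X_alt)
```

Because the test is based on a single end-of-sample residual, any one p-value is noisy; in the paper's setting the exercise is repeated every period, which is what accumulates evidence against a failing null.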
See Pesaran and Weeks (2003) for a review of the non-nested testing literature.

Alternative Frequentist Model Averaging Approaches

The meta-NNT approach is related to Hansen et al.'s (2011) idea of a Model Confidence Set (MCS), in which a test is applied to a set of competing models and models are eliminated if they perform poorly by some user-specified criterion. The MCS is the set of (equally weighted) models which are not rejected as statistically inferior. In the meta-NNT approach, as we move through the sample, the weight from each model characterising exchange rate determination in one period is effectively transferred to the models in
its MCS in the next period.
The use of NNT in allocating weights has the advantage of building in a degree of stability in the weights over time through the `protection' provided to the null. Lee et al.'s (2015) approach to defining the `meta' weights also builds in a degree of stability by updating time-t weights at t+1 according to the probability of observing the time-(t+1) outcome based on the time-t model, where this latter probability is assumed proportional to the squared estimated residual at the end of the sample.8 Compared to the meta-NNT approach, the updating criterion in this meta model is more closely related to the approach to defining weights found in the forecast combination literature. Here, weights depend on the historical forecasting performance of the different models, sometimes discounting into the past or focusing on the "most recent best" (MRB) forecasts; see, for example, Diebold and Pauly (2007) or Sarno and Valente (2009) for discussion.
Comparison of the meta-NNT and meta-MRB approaches will provide insights on the role of the updating criterion. Comparison with a more standard model averaging approach, in which weights are updated as above but based on the MRB performance of models estimated in a rolling window of fixed sample length, would further isolate the `meta' contribution of accommodating regime change. And an exercise in which weights are based only on the MRB performance of a rolling model average, with no updating, would reveal the role of the smoothing. These exercises are considered in the empirical work below.
8 The weights also accommodate the possibility of new regimes being born with a fixed probability; see Lee et al. (2015) for details.

The Meta-NNT Model

The meta-NNT model characterising exchange rate determination over the whole sample t = 1, ..., T consists of the set of individual
estimated models of the form given by (2.7) plus the associated weights obtained using the non-nested testing procedure described above. Denoting the weights by w_ijt = Pr(M_ijt | Z_t), the meta-NNT model can be written as the weighted average

f_t = Σ_i Σ_j w_ijt M_ijt,    (3.10)

which attaches weights to all the possible models in (2.7), defined according to the definition of exchange rate fundamentals and to the different regime lengths.9 Changes in the size of the weights over time therefore provide useful information on how exchange rate determination has evolved; the duration statistic and behavioural model weight statistic defined below provide convenient summaries.

9 The meta-MRB model takes the form of (3.10) but with weights determined according to the most-recent-best performance as described above. The corresponding rolling window model average would use a fixed window size.

10 The Shannon entropy statistic S_t = - Σ_i Σ_j w_ijt log(w_ijt), as used in information theory, provides a useful summary measure of the extent of model uncertainty experienced at t.

4 Characterising Exchange Rate Determination for Five Currencies

We now apply the modelling approach to the analysis of the determination of five exchange rates over the last forty or fifty years; namely, the U.S. dollar (USD) exchange rates for the Canadian dollar, Danish krone, Japanese yen, Swedish krona and British pound. The data are measured monthly and are as provided in Rossi (2013), with the start dates for the analysis varying across countries to accommodate the differences in the dates at which the currency prices are considered to be floating
, as described in column (1) of Table 1, but all running to 2010:06.11 These data are derived originally from Datastream but were collated by Rossi to provide a set of variables that are reasonably comparable across countries. The choice of our five rates was based on the availability of a long run of data, and the results for these rates presented by Rossi provide a useful setting against which to judge our own results.
To be clear on definitions, the data for nominal exchange rates s_t are the end-of-month observations of the rate expressed as the price of one US dollar. Interest rates i_t are three-month Treasury Bill rates, output y_t is measured by monthly industrial production figures and the output gap ỹ_t is calculated as the percentage deviation of actual industrial production from the trend defined by applying a simple moving average to a forecast-augmented industrial output series.12 Prices p_t are measured by the CPI and we use relatively liquid measures of the money supply m_t in each country (e.g. M1 data for the US). Series are seasonally adjusted using one-sided moving averages with equal weights over the previous twelve months.
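These two transformations, the one-sided seasonal adjustment and the forecast-augmented output-gap construction described in footnote 12, can be sketched as follows; the synthetic series and the exact slicing conventions are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0.2, 1.0, 240)) + 100  # synthetic output-like series

def sa_one_sided(x, window=12):
    """Seasonal adjustment via an equally weighted one-sided moving average."""
    return np.array([x[max(0, t - window + 1): t + 1].mean()
                     for t in range(len(x))])

def output_gap(x, t, p=2, horizon=12, ma=24):
    """Gap at t: % deviation of x_t from a centred `ma`-month moving average
    applied to the series extended with AR(p) forecasts."""
    # Fit AR(2) by OLS on data up to t
    Y = x[p:t + 1]
    Z = np.column_stack([np.ones(len(Y)), x[p - 1:t], x[p - 2:t - 1]])
    phi, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    ext = list(x[:t + 1])
    for _ in range(horizon):                  # iterate forecasts forward
        ext.append(phi[0] + phi[1] * ext[-1] + phi[2] * ext[-2])
    ext = np.array(ext)
    half = ma // 2
    trend = ext[t - half: t + half].mean()    # centred moving average at t
    return 100 * (x[t] - trend) / trend

gap = output_gap(sa_one_sided(y), 200)
```

Extending the series with forecasts before taking the centred moving average is what allows a two-sided trend to be computed in real time at the end of the sample.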
A plot of the (logarithm of the) five exchange rates, and the corresponding price and interest rate differentials, is provided in the online Appendix and shows reasonably clear similarities in the movements of each country's exchange rate and its prices relative to those of the US over the forty or fifty years of the data sample. For instance, broadly speaking, the Canadian dollar rate s_t rises to the mid-eighties, falls through to the early nineties, rises again through to the early 2000s, drops sharply

11 The early years of data in some countries include observations during regimes of highly managed exchange rates. But these early observations provide a convenient way to initiate the modelling and, of course, the modelling strategy is specifically designed to disregard these observations in later years, by moving to shorter samples, as dictated by the data.
12 Specifically, at each period t, an AR(2) model was estimated for the output series and used to produce forecasts for t+1, ..., t+12. Trend output at t was identified as the value of the 24-month moving average centred at t applied to the extended series.

to 2007/8, rises briefly and then falls again at the end of the sample. Exactly the same description applies to Canadian relative prices p_t - p_t^*. In contrast, and again broadly speaking, the Japanese yen falls gradually throughout the sample. But so too do Japanese relative prices. As shown in column (2) of Table 1, the simple correlations between each country's exchange rate and its relative prices are high in all five countries, averaging 0.77, showing the importance of broad price pressures for exchange rate determination. On the other hand, these relationships are not one-for-one and divergences in the movements between the two series appear to persist. Simple ADF tests applied to the entire sample of data show, for all five currencies, that the nominal exchange rate and relative price series are both I(1) and, importantly, that the real exchange rate q_t = s_t + p_t^* - p_t is also I(1). In short, price pressures do appear to impact on the exchange rate but, given the periodic and permanent shifts in the series, it seems unlikely that exchange rate determination will be fully captured by a stable PPP or monetary model. Despite these broad patterns, each country's relative price movements are much smoother over time than those of its exchange rate. Column (4) of Table 1 shows the variance of the change in the exchange rate relative to the variance of the change in relative prices is very large in every country, averaging 67 times larger across these five countries. The volatility of exchange rates is much more in line with the volatility of the interest rate differentials, with the ratio of these variances averaging 16 across the five countries. This suggests that the asset market pressures
captured by the IRP and Taylor Rule models could provide a more important influence on exchange rates over short horizons. On the other hand, the simple correlations in each country between exchange rate changes and the interest rate differential over the whole sample, as reported in column (2) of Table 1, have an average of just -0.09, making it very unlikely that the IRP and Taylor rule models could provide the basis for explaining exchange rate movements over the sample in all the countries. An intuitive account that is consistent with these statistics is that there are equilibrating macroeconomic pressures moving exchange rates towards establishing PPP.
But there are also factors that change the relationship between exchange rates and relative prices permanently, and there are jumps and volatile movements in the exchange rates arising in response to news from global markets that are best represented by an IRP or Taylor Rule relationship. The relative strengths of these various pressures vary over time and the meta model allows them all to have an effect, with individual models having non-zero weight while their influence is apparent in the data.

The Meta-NNT Models
Our modelling work began by estimating, for each country, 12 versions of our four fundamental models based on three years of available data running up to the beginning of the period of analysis reported in column (1) of Table 1; e.g. up to 1965:5 for Canada. The different versions used data ranging between 24 months and 36 months prior to the beginning of the period of analysis, providing estimates of M_ijt for i = 1, ..., 4 and for each of the 12 sample lengths h_j. In this first iteration of the modelling, equal weight was given to all 4 × 12 = 48 models obtained for each country. The data window was then extended by one month and 52 models were estimated for each country, but in this case the weights were assigned to each model following the procedure in (3.9). This iterated procedure then continued for every t up to the end of the period of analysis in 2010:06. The estimated models and model weights obtained in this way provided the estimated `meta-NNT model' for each country.
Figure 1 provides graphical representations of the key features of the meta-NNT models obtained for Canada; the equivalent figures for the other countries are provided in the online Appendix. The figure shows the weighted average of the sample length.

5 Concluding Remarks

There is inherent instability over time in the process determining exchange rates and it is not surprising that explaining exchange rate movements and forecasting them is difficult in these circumstances. The model averaging underlying the meta model of this paper provides a very flexible approach to dealing with this inherent instability in real time. The approach accommodates regime uncertainty as well as model uncertainty, doing this in a way that can account for periods of stability, periods in which policy evolves gradually and episodes of abrupt changes in regime. The results of the paper show that, for the five currencies considered, the meta-NNT models provide sensible characterisations of exchange rate movements over the last 40-50 years, reflecting the ebb and flow of macroeconomic and `news' pressures on exchange rates.
The timing of the phases of the different pressures is country-specific, reflecting countries' individual experiences. But there is a striking similarity in the frequency of structural breaks (occurring every 17-28 months on average) and the duration of the phases in which macroeconomic or `news' pressures dominate (lasting 29-42 months on average). Comparison with alternative model averaging approaches shows that it is the meta model's ability to accommodate structural change that is central to its success in characterising the data, although the use of non-nested testing, as opposed to updating using most-recent-best criteria, is also important in building some useful stability in the evolution of model weights over time.

Notes to Table 1: (.,.) refers to the correlation between two variables; ADF(.) refers to the p-value of the ADF test applied to the variable (with a constant in the underlying ADF regression and the extent of augmentation chosen by AIC with max lag = 12). In tests, superscripts * and ** indicate significance at the 5% and 1% level respectively. V(.) refers to the variance of the variable; 'No. breaks' refers to the number of occasions on which the average sample length drops below 30 months (see text for details); 'No. phases' refers to phases defined by the occurrence of peaks/troughs (see text for details).
Under the reallocation rule (3.9), the currently-held null is either retained or rejected in favour of M_i'j't for j' = 1, ..., h and i' = 1, ..., m, so that the weight assigned at time t-1 to the model containing the i-th set of fundamentals and based on data t-1-h_j to t-1 is either transferred to the model with the same fundamentals based on one additional observation, i.e. data t-1-h_j to t, or to a new model based on a shorter sample of data running to t containing any one of the alternative sets of fundamentals, based on a non-nested test. If a model is rejected in favour of more than one alternative, the weight can be split equally among the alternative models.6

The duration statistic, D_t = Σ_i Σ_j h_j w_ijt, provides a time-t indication of the duration of the exchange rate regime in place at that time (whatever the nature of the regime). Similarly, the behavioural model weight statistic, π_it = Σ_j w_ijt, i = 1, ..., m, provides a time-t summary of the usefulness of the i-th of the alternative exchange rate models.10
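As an illustration of these summary statistics, the sketch below computes the duration statistic, the behavioural model weights and the Shannon entropy of footnote 10 from a hypothetical weight matrix; the weights themselves are randomly generated for the example:

```python
import numpy as np

# Hypothetical weights w[i, j] = Pr(M_ijt | Z_t) over m = 4 fundamentals
# and window lengths h_j = 24..35 months, normalised to sum to one
rng = np.random.default_rng(3)
w = rng.random((4, 12))
w /= w.sum()
h = np.arange(24, 36)                 # regime lengths h_j

duration = (w * h).sum()              # D_t: weighted regime duration
model_weight = w.sum(axis=1)          # pi_it: weight on each set of fundamentals
entropy = -(w * np.log(w)).sum()      # S_t: Shannon entropy (model uncertainty)
```

A low entropy flags periods when one model dominates; a sharp fall in the duration statistic flags a structural break, since weight has shifted to newly-born short-sample models.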

Table 1: Exchange Rate Summary Statistics
[Figure 1: Weighted Average Sample Size, and Smoothed Sum of Weights for the PPP and Monetary Models, for Canada: 1968m4-2010m6]