With the development of the time-series dimension, variables describing behavior, organizations and mechanisms could be studied in the context of their evolution over time. But this brings the following issue; consider the example below.

Assume a country produces output Y using only two factors, labor L and capital K, from time 1 to n. The relationship mapping these two inputs to output is called the production function, and in statistics it is called the Data Generating Process.

In economics we assume that all processes work efficiently and resources are fully utilized, so there is no room for improvement in the ability to produce output.

But as time passes, one issue arises: the realized value of output (y) in each time period should originate from the same distribution[1]. This means the economy should be working with the same efficiency all the time; in other words, year in and year out, labor and capital of the same quality should be used to produce output with consistent[2] motivation, efficiency and productivity.

Hence there should be only random variation above and below the average total output, which sums to zero.

This restriction is impossible to hold and to check, so we cannot say for sure that this output variable keeps the same properties throughout the period considered; no worker can work with the same motivation every year. So each year's output differs, one way or another, from its past values. This assumption is called strict stationarity of the variable.

**Stationarity Assumption**

Because of this practical limitation we adopt a second-best assumption called weak stationarity, which requires the variable to have a constant mean, a constant variance, and covariances that depend only on the lag between observations, not on time. Fulfilling these assumptions weakly assures us that we have the same variable throughout the time period and that it is not being influenced by any systematic force.

In conclusion, when we invoke the ceteris paribus assumption our motive is to see the pure effect of one variable on another; this stationarity concept speaks to the same idea: the variable should stay the same throughout time. See the graph below: a (weakly) stationary series should look like this, where the mean and the spread of observations (variance) appear constant.

When, in a graph, every second observation moves in the direction opposite to the previous one, so that the series keeps crossing the central line representing the mean, the series is likely to be stationary. This is a rough way to check whether a series is stationary without using any formal test.

**Origin of Non-Stationarity**

Before we can see how non-stationarity can enter a variable, we first have to look at the possible sources from which a variable can change.

The illustration above shows that there are four possible sources that can cause a variable to change.

As the purpose of regression is to see the effect of independent variables on the dependent variable, the DL component is not considered the enemy. While most of our regression analysis is based on estimating the effects of the DL component, minimizing the other components is the main problem a researcher faces when estimating a model.

Now we will discuss the others one by one.

**AR component**

Consider the simple model below, where Y is affected by its own past; this is also called inertia in the series. It arises from natural phenomena: for example, in output, the economy is not always fully employed and the population is always growing, so there will be opportunities to employ people and increase output. The following model is AR(1), meaning only one lag (past value) of the variable is directly involved.
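The model itself is not reproduced here; a standard AR(1) specification consistent with this description would be:

```latex
y_t = \alpha_0 + \alpha_1 y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim iid(0,\ \sigma^2)
```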

Now that we have the initial equation[3], we calculate the mean and variance of the variable *y _{t}*.
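A sketch of that calculation for the unit-root case α1 = 1 (taking α0 = 0 and a fixed starting value y0 for simplicity), by repeated substitution:

```latex
y_t = y_{t-1} + \varepsilon_t = y_0 + \sum_{i=1}^{t} \varepsilon_i
\quad\Rightarrow\quad
E[y_t] = y_0, \qquad Var(y_t) = t\,\sigma^2
```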

Hence this shows that the variance of y_{t} is a function of t, so it is not constant. The problem becomes bigger if the covariance is not zero: it will further increase the variance of the model, as there will be a few extra non-zero terms in variance equation 3 relative to equation 2. This shows that if the variable depends significantly on its past, then its variance is non-constant. Here "significantly" means that, in equation 1, the past value of y has a one-to-one relation with the present value. Such a series is also said to have a stochastic trend, as the trend affects the variance of the series y. One assumption and property of ordinary least squares is that the estimator has the constant and minimum possible variance; if the variance is changing, we cannot confirm that it is minimal, so the estimator cannot be efficient.
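This growth of the variance with t can be checked numerically; a small simulation (assuming standard normal shocks, with illustrative sample sizes):

```python
import numpy as np

# Monte Carlo check that a random walk's variance grows with t.
rng = np.random.default_rng(0)
reps, T = 2000, 200

# Each row is an independent random walk y_t = y_{t-1} + e_t, with y_0 = 0.
walks = rng.standard_normal((reps, T)).cumsum(axis=1)

# Cross-sectional variance at t = 20 vs t = 200: roughly sigma^2 * t.
var_early = walks[:, 19].var()
var_late = walks[:, -1].var()
print(round(var_early), round(var_late))  # close to 20 and 200
```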

**Trend Component:**

The trend component is a qualitative component that captures improvement or deterioration in the series due to repetition, experience or growth. For example, if we start constructing houses with no experience at all, the first house will take ages, but with time our speed and finish will improve; this evolution is called trend. Consider the simple trend model below, where Y is affected by a trend and an intercept.
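The trend model referred to can plausibly be written as:

```latex
y_t = \beta_0 + \beta_1 t + \varepsilon_t
```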

Here we can see that, because of the presence of the trend, the mean of the variable is a function of time. This effect usually appears when the variable is changing because experience, the working environment or efficiency is changing with time. So we can say the trend captures all other qualitative improvements or deteriorations in the variable, which make it non-comparable to its past values. This is called a deterministic trend.

**MA Component**

The MA component describes the effect of an event that has changed the nature of the series' behavior. For example, after the fall of the Bretton Woods system of exchange rates, currencies started to move faster and more independently. Unknown events are random shocks before they occur; when random shocks have long-lasting effects, they are considered structural breaks, and there will be a visible change in the behavior of the variable.

‘Most macroeconomic time series are not characterized by the presence of a unit root. Fluctuations are indeed stationary around a deterministic trend function. The only "shocks" which have had persistent effects are the 1929 crash and the 1973 oil price shock’ (Perron 1989, p. 1361).

See the example below: this WPI inflation series should have a consistent mean of around 5%, but changes in conditions from March 2006 onward cause an upward trend. If we do not include the information about that break, overall mean inflation will show an increasing trend[4], and the mean will become a function of time, following the purple line, with the same consequences as a trended series.

**Consequences:**

Consider the simple OLS model below.
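A plausible form of the model, with y the dependent and x the independent variable:

```latex
y_t = \beta_0 + \beta_1 x_t + u_t
```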

- Assume that either y or x is not weakly stationary, so that it is a function of its own past, of time, or of some random event not included in the model through a dummy variable.

Hence any of these three missing components will make the residuals a function of the past, of time, or of some random event respectively, instead of the white noise assumed in OLS, causing autocorrelation among the residuals and invalidating OLS.

- If both are functions of time, then OLS will capture the common deterministic trend and show deceptively high t-values; OLS will conclude that there is a lot of resemblance between the series, but in reality there is not: it is just the trend. At first glance the results will look perfect, but further testing will show that the residuals are no longer white noise.[5]

- If, in the following regression, the coefficient of the lag comes out to be 1, it causes further problems.
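The spurious-regression effect described above can be reproduced with a short simulation; the sample size and number of replications are illustrative choices:

```python
import numpy as np

# Regress one independent random walk on another, many times.
# High t-values appear even though the series are unrelated.
rng = np.random.default_rng(42)
reps, n = 500, 100
r2s, tstats = [], []
for _ in range(reps):
    x = rng.standard_normal(n).cumsum()
    y = rng.standard_normal(n).cumsum()
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)
    se = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())
    tstats.append(beta[1] / se)
    r2s.append(1 - resid @ resid / ((y - y.mean()) ** 2).sum())

print(np.mean(r2s))                            # average R^2 far above zero
print(np.mean(np.abs(np.array(tstats)) > 2))   # |t| > 2 far more often than 5%
```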

**Detection:**

The detection of a unit root is built from the simplest AR(1) model, where the trend and intercept components are optional, depending on the nature of the series. To see whether the variable has a constant mean and variance and time-invariant covariances, the present value of the variable is compared with its past.

Here, testing the null H0: α1 = 1 (against H1: α1 < 1) or H0: α2 = 0 (against H1: α2 ≠ 0) will reveal the presence of a stochastic or a deterministic trend respectively. In either case the variable will have a non-constant mean or variance, and hence will be non-stationary.

Because of its simplicity, this regression faces several problems:

- y can be affected by lags beyond its first.
- The error term can be autocorrelated, as there can be infinitely many variables that affect y but are missing from the model.
- If the data has seasons or cycles, these will also be reflected in the error term.

Hence one modification is made to solve these problems: lagged differences of the variable are added to the regression to absorb the autocorrelation.
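The resulting augmented Dickey-Fuller (ADF) regression, in its common form with optional intercept and trend (where γ = α1 − 1), is:

```latex
\Delta y_t = \alpha_0 + \alpha_2 t + \gamma\, y_{t-1} + \sum_{i=1}^{n} \beta_i\, \Delta y_{t-i} + \varepsilon_t,
\qquad H_0: \gamma = 0 \ \text{(unit root)}
```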

Hence the ADF test uses a special set of critical values; if the test statistic does not exceed them, the variable is taken to be non-stationary in nature. This test lacks power to distinguish between stationary and non-stationary variables in marginal cases. Hence it is advised to run it for several lag orders "n", and with and without trend and intercept, to confirm the findings. A few other tests, like Phillips-Perron and KPSS, can also be used to see whether the results are robust.

**Checking Structural Break:**

To test for the presence of a structural break, the Chow test can be used; for it, a simple AR(1) model is estimated on the full sample and on the subsamples before and after the suspected break date.

**Removal Method:**

Once we have detected the problem as in the sections above, the following steps can be used to remove it.

- Deterministic trend

Once we know the specification of the trend in the model, whether it is linear, squared or cubed, we estimate a simple OLS including one extra power: for example, if the trend here is linear, we also add its square to the OLS.
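A minimal sketch of removing a linear deterministic trend: regress the series on time by OLS and keep the residuals as the detrended series (the data below are simulated):

```python
import numpy as np

# Simulated series: linear deterministic trend plus stationary noise.
rng = np.random.default_rng(7)
t = np.arange(200)
y = 2.0 + 0.3 * t + rng.standard_normal(200)

# Fit the trend by OLS (degree-1 polynomial: intercept + linear term)
# and keep the residuals as the detrended series.
coefs = np.polyfit(t, y, deg=1)
detrended = y - np.polyval(coefs, t)

print(abs(detrended.mean()) < 1e-8)  # residuals are mean zero by construction
```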

- Stochastic trend

If (1 − α1) comes out to be zero, it is confirmed that the series is non-stationary; then subtracting the first lag, which is creating the problem, will make the resulting variable (the first difference of y) stationary. The decision on stationarity should not be based on a single test only, as the test has low power; a wide variety of tests can be applied to complement the results.
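A quick numerical illustration that first differencing a random walk recovers the stationary shocks:

```python
import numpy as np

# A pure random walk: y_t = y_{t-1} + e_t, built by cumulating shocks.
rng = np.random.default_rng(5)
e = rng.standard_normal(500)
y = e.cumsum()

# First differencing y_t - y_{t-1} recovers the stationary shocks exactly.
dy = np.diff(y)
print(np.allclose(dy, e[1:]))  # True: the difference is just e_t again
```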

- Structural Break

Confirming the presence and nature of a structural break has to pass through several rounds of rigorous testing found in the literature. One example is the Perron test below.

If the break coefficient (α3) turns out to be significant, then we estimate the model with the dummy only,

and extracting the error term will remove the problem of the structural break.

**Conclusion:**

This is a basic layout of unit root testing. These tests are useful for building a rough idea, and they should be complemented with other tests, including graphical analysis.

[1] I.e. the same probability density function, the same Data Generating Process.

Hi Nouman,

Thank you for your post. I have a question regarding unit root tests with multiple breakpoints. What software and/or package do you suggest we use to conduct this test? Eviews has a Breakpoint Unit Root Test, but it is only for a single structural break. I have 3-4 breakpoints in my model and need a type of unit root test that I can apply to it. I look forward to your advice.

Thanks in advance.

You should explore the R language; it may have a unit root test with multiple breaks. But you can also do it manually: find the breaks yourself and add them as exogenous dummy variables in a manual ADF regression.