This page will give you some examples of the typical methods used for forecasting using Python.

The libraries you will need include: pandas, matplotlib, scikit-learn and statsmodels.

NOTE 1: There are many different libraries and ways of doing forecasting. This page is intended to give an introduction to forecasting and its various methods, which you can build upon to explore each in more detail. The main Python library used in these examples is statsmodels.tsa.

NOTE 2: These forecasting examples are based on Univariate Forecasting, i.e. forecasting using one variable. Multivariate Forecasting will be covered as a separate topic.

**1. The data set**

The data set used in these examples is the Rossmann Sales Forecasting data set. It is available on Kaggle and many other websites. Download the data set; the train.csv file will be used in the following examples.

Kaggle Description: Rossmann operates over 3,000 drug stores in 7 European countries. Currently, Rossmann store managers are tasked with predicting their daily sales for up to six weeks in advance. Store sales are influenced by many factors, including promotions, competition, school and state holidays, seasonality, and locality. With thousands of individual managers predicting sales based on their unique circumstances, the accuracy of results can be quite varied.

Data fields

Most of the fields are self-explanatory. The following are descriptions for those that aren’t.

- **Id** – an Id that represents a (Store, Date) duple within the test set
- **Store** – a unique Id for each store
- **Sales** – the turnover for any given day (this is what you are predicting)
- **Customers** – the number of customers on a given day
- **Open** – an indicator for whether the store was open: 0 = closed, 1 = open
- **StateHoliday** – indicates a state holiday. Normally all stores, with few exceptions, are closed on state holidays. Note that all schools are closed on public holidays and weekends. a = public holiday, b = Easter holiday, c = Christmas, 0 = None
- **SchoolHoliday** – indicates if the (Store, Date) was affected by the closure of public schools
- **StoreType** – differentiates between 4 different store models: a, b, c, d
- **Assortment** – describes an assortment level: a = basic, b = extra, c = extended
- **CompetitionDistance** – distance in meters to the nearest competitor store
- **CompetitionOpenSince[Month/Year]** – gives the approximate year and month of the time the nearest competitor was opened
- **Promo** – indicates whether a store is running a promo on that day
- **Promo2** – Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating
- **Promo2Since[Year/Week]** – describes the year and calendar week when the store started participating in Promo2
- **PromoInterval** – describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store

**2. Data Preparation & Time Series Setup**

```python
import pandas as pd

df = pd.read_csv('/users/brendan.tierney/Dropbox/train.csv')
df.head(5)
```

```python
df.info()
```

```
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1017209 entries, 0 to 1017208
Data columns (total 9 columns):
Store            1017209 non-null int64
DayOfWeek        1017209 non-null int64
Date             1017209 non-null object
Sales            1017209 non-null int64
Customers        1017209 non-null int64
Open             1017209 non-null int64
Promo            1017209 non-null int64
StateHoliday     1017209 non-null object
SchoolHoliday    1017209 non-null int64
dtypes: int64(7), object(2)
memory usage: 69.8+ MB
```

The data is at a granular level (one row per store per day) and we will need to aggregate it to monthly sales. First, aggregate it into total daily sales.

```python
daily_sales = df.groupby(['Date'])['Sales'].sum().reset_index()
daily_sales
```

We can now work towards monthly sales figures. First, restrict the data to dates from 1st July 2012 onwards.

```python
# convert Date from a string to a proper datetime
daily_sales['Date'] = pd.to_datetime(daily_sales['Date'])

ds = daily_sales['Date'] >= '2012-07-01'
ds2 = daily_sales[ds]
ds2.head(20)
```

We now take this data frame and use the monthly date to define the index. This index will be used for the time series.

```python
ds3 = ds2.copy()
# make sure Date is a datetime so the .dt accessor works
ds3['Date'] = pd.to_datetime(ds3['Date'])

monthly_sales2 = ds3.groupby(ds3['Date'].dt.strftime('%Y-%m-01'))['Sales'].sum().reset_index()
monthly_sales2['Date'] = pd.to_datetime(monthly_sales2.Date)
monthly_sales2 = monthly_sales2.set_index(monthly_sales2.Date)
monthly_sales2.drop('Date', axis=1, inplace=True)
monthly_sales2
```

The data set is now in time series (ts) format.

To return subsets of this data set you can do the following:

```python
# partial string indexing on the DatetimeIndex
monthly_sales2.loc['2015']
```

Finally, create a line chart for the data set.

```python
import matplotlib.pyplot as plt

plt.figure(figsize=(18, 12))
plt.plot(monthly_sales2)
plt.show()
```

We are now ready to do some time series analysis and forecasting on this data set.

**3. Moving Averages**

With moving averages, a new value is calculated as the average of the previous X samples. If X = 1 it only looks at the current value; if X = 6 it averages the data values over the past 6 cases.
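As a quick numeric illustration of how the window size changes the result (using a toy series rather than the Rossmann data):

```python
import pandas as pd

# a toy series to illustrate how the rolling window size works
s = pd.Series([10, 20, 30, 40, 50, 60])

# X = 1 simply returns each value
print(s.rolling(window=1).mean().tolist())
# [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]

# X = 3 averages the current value and the two before it;
# the first two positions have no full window, so they are NaN
print(s.rolling(window=3).mean().tolist())
# [nan, nan, 20.0, 30.0, 40.0, 50.0]
```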

```python
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams

rcParams['figure.figsize'] = 15, 10

def test_stationarity(timeseries):
    # determine rolling statistics
    rolmean6 = timeseries.rolling(window=6).mean()

    # plot rolling statistics
    orig = plt.plot(timeseries, color='blue', label='Original')
    mean6 = plt.plot(rolmean6, color='yellow', label='Rolling Mean (6)')
    plt.legend(loc='best')
    plt.title('Rolling Mean - with different size windows')
    plt.show()

test_stationarity(monthly_sales2)
```

We can see how the moving average smooths out the peaks and troughs of the original data. One of the challenges of using this approach is determining what the size of the moving average window should be. Here is the code and chart for window sizes from 2 to 6 (2 <= X <= 6).

```python
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams

rcParams['figure.figsize'] = 15, 10

def test_stationarity(timeseries):
    # determine rolling statistics
    #rolmean1 = timeseries.rolling(window=1).mean()
    rolmean2 = timeseries.rolling(window=2).mean()
    rolmean3 = timeseries.rolling(window=3).mean()
    rolmean4 = timeseries.rolling(window=4).mean()
    rolmean5 = timeseries.rolling(window=5).mean()
    rolmean6 = timeseries.rolling(window=6).mean()

    # plot rolling statistics
    orig = plt.plot(timeseries, color='blue', label='Original')
    #mean1 = plt.plot(rolmean1, color='red', label='Rolling Mean (1)')
    mean2 = plt.plot(rolmean2, color='green', label='Rolling Mean (2)')
    mean3 = plt.plot(rolmean3, color='black', label='Rolling Mean (3)')
    mean4 = plt.plot(rolmean4, color='magenta', label='Rolling Mean (4)')
    mean5 = plt.plot(rolmean5, color='cyan', label='Rolling Mean (5)')
    mean6 = plt.plot(rolmean6, color='yellow', label='Rolling Mean (6)')
    plt.legend(loc='best')
    plt.title('Rolling Mean - with different size windows')
    plt.show()

test_stationarity(monthly_sales2)
```

**4. Autoregression**

The autoregression (AR) method models the next step in the sequence as a linear function of the observations at prior time steps.

In the following examples, the predicted values are calculated and then projected beyond the end of the data set to cover monthly predictions up to April 2016.

```python
# Autoregression models the next step in the sequence as a
# linear function of the prior observed time steps
from datetime import datetime
from statsmodels.tsa.ar_model import AR  # deprecated in newer statsmodels in favour of AutoReg

# fit the AR model
ar_model = AR(monthly_sales2, missing='drop')
model_fit = ar_model.fit(method='mle')

# make the forecasting prediction
ar_pred = model_fit.predict(start=datetime(2013, 3, 1), end=datetime(2016, 4, 1))

# print the results
ar_pred
```

This gives the projected values for the monthly figures beyond the last data point (July 2015):

```
2015-08-01    1.941865e+08
2015-09-01    1.811351e+08
2015-10-01    1.914105e+08
2015-11-01    1.909271e+08
2015-12-01    1.824975e+08
2016-01-01    1.827472e+08
2016-02-01    1.836780e+08
2016-03-01    1.819657e+08
2016-04-01    1.826685e+08
```

When we chart the original data points and the AR predicted values we get the following.
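A minimal sketch of how such a chart could be produced is below. It uses stand-in series so it runs on its own; in practice you would plot `monthly_sales2` and `ar_pred` from the steps above.

```python
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# stand-in series; substitute monthly_sales2 and ar_pred here
idx = pd.date_range('2013-01-01', periods=12, freq='MS')
actual = pd.Series(range(100, 112), index=idx)
predicted = pd.Series(range(101, 113), index=idx)

plt.figure(figsize=(18, 12))
plt.plot(actual, color='blue', label='Original')
plt.plot(predicted, color='red', label='AR predictions')
plt.legend(loc='best')
plt.title('Monthly Sales - Autoregression')
plt.savefig('ar_forecast.png')
```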

**5. Simple Exponential Smoothing**

Simple exponential smoothing forecasts using a weighted average of past observations, with the weights decaying exponentially for older observations.

```python
# Simple Exponential Smoothing
from datetime import datetime
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# fit the SES model
es_model = SimpleExpSmoothing(monthly_sales2)
model_fit = es_model.fit()

# make the forecasting prediction
es_pred = model_fit.predict(start=datetime(2013, 3, 1), end=datetime(2016, 4, 1))

# print the results
es_pred
```

**6. Holt Winters Exponential Smoothing**

Holt-Winters exponential smoothing extends simple exponential smoothing by also modelling trend and seasonal components.

```python
# Holt-Winters Exponential Smoothing
from datetime import datetime
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# fit the model; additive trend/seasonality with a 12-month period
# is one reasonable choice for monthly sales data
hw_model = ExponentialSmoothing(monthly_sales2, trend='add', seasonal='add', seasonal_periods=12)
model_fit = hw_model.fit()

# make the forecasting prediction
hw_pred = model_fit.predict(start=datetime(2013, 3, 1), end=datetime(2016, 4, 1))

# print the results
hw_pred
```

**7. Autoregressive Integrated Moving Averages (ARIMA)**

ARIMA is a very popular method for forecasting. It takes 3 main parameters.

- Number of AR (Auto-Regressive) terms (p): AR terms are lags of the dependent variable. For instance, if p is 5, the predictors for x(t) will be x(t-1)…x(t-5).
- Number of MA (Moving Average) terms (q): MA terms are lagged forecast errors in the prediction equation. For instance, if q is 5, the predictors for x(t) will be e(t-1)…e(t-5), where e(i) is the difference between the moving average at the ith instant and the actual value.
- Number of Differences (d): the number of non-seasonal differences applied to the series. You can either difference the variable yourself and pass d=0, or pass the original variable with d=1; both generate the same results.
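The differencing controlled by the d parameter can be seen directly with pandas:

```python
import pandas as pd

s = pd.Series([100, 110, 105, 120])

# first-order difference: each value minus the previous one;
# ARIMA with d=1 fits the model on this differenced series internally
print(s.diff().tolist())
# [nan, 10.0, -5.0, 15.0]
```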

Sometimes you might need to do a grid search over these values to determine which combination gives the best results. Here is one example of fitting an ARIMA model:

```python
# older statsmodels API; newer versions provide statsmodels.tsa.arima.model.ARIMA
from statsmodels.tsa.arima_model import ARIMA

arima_model = ARIMA(monthly_sales2, order=(2,1,5))
model_fit = arima_model.fit(disp=False)

# make predictions
arima_pred3 = model_fit.predict(start='2013-02-01', end='2016-04-01', typ='levels')
arima_pred3
```

As you can see it reflects the trend in the data closely. This is good and gives a better forecast than the previous methods. ARIMA is a very commonly used forecasting method in many industries.

Here are some examples of alternative values for the parameters and the charting comparing the results of all three combinations.

```python
arima_model = ARIMA(monthly_sales2, order=(2,1,1))
```

```python
arima_model = ARIMA(monthly_sales2, order=(0,1,2))
```

**8. Seasonal Autoregressive Integrated Moving Averages (SARIMA)**

The Seasonal Autoregressive Integrated Moving Average (SARIMA) method models the next step in the sequence as a linear function of the differenced observations, errors, differenced seasonal observations, and seasonal errors at prior time steps.

It combines the ARIMA model with the ability to perform the same autoregression, differencing, and moving average modeling at the seasonal level.

There is an additional parameter called ‘seasonal_order’. This contains four seasonal elements that are not part of ARIMA that must be configured; they are:

- **P**: Seasonal autoregressive order.
- **D**: Seasonal difference order.
- **Q**: Seasonal moving average order.
- **m**: The number of time steps for a single seasonal period.

```python
# SARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX

sarima_model = SARIMAX(monthly_sales2,
                       order=(2,1,5),
                       seasonal_order=(2,1,5,1),
                       enforce_stationarity=False,
                       enforce_invertibility=False)
model_fit = sarima_model.fit(disp=False)

# make predictions
sarima_pred = model_fit.predict(start='2013-02-01', end='2016-04-01', typ='levels')
sarima_pred
```

**9. Comparing Results from All Methods**

The following diagram shows the original data set along with the best forecasting predictions from all the different methods used above.
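A sketch of how such a comparison could be produced is below, again with stand-in series so it runs on its own; in practice you would plot `monthly_sales2` against the `ar_pred`, `es_pred`, `hw_pred`, `arima_pred3` and `sarima_pred` series from the sections above. It also prints a simple RMSE for each method, which is a useful numeric complement to the chart.

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

# stand-in series; substitute the actual prediction series here
idx = pd.date_range('2013-03-01', periods=12, freq='MS')
actual = pd.Series(np.linspace(180, 195, 12), index=idx)
preds = {
    'AR': actual + 2,
    'ARIMA': actual + 1,
    'SARIMA': actual - 1,
}

plt.figure(figsize=(18, 12))
plt.plot(actual, color='blue', label='Original')
for name, p in preds.items():
    plt.plot(p, label=name)
plt.legend(loc='best')
plt.title('Forecast comparison')
plt.savefig('forecast_comparison.png')

# a simple numeric comparison: RMSE of each method against the actuals
for name, p in preds.items():
    rmse = np.sqrt(((p - actual) ** 2).mean())
    print(name, 'RMSE:', round(rmse, 2))
```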