Capital Bikeshare (also called CapBi) is a bicycle-sharing system that serves Washington DC, Arlington County, Alexandria, Falls Church, Montgomery County, Prince George’s County, and Fairfax County. The system is owned by the local governments and operated by Motivate International, Inc. (Motivate International, Inc.). As of August 2019, Capital Bikeshare has 500 stations and 4,300 bicycles.
The distribution of the docks is shown below:
As we can see from the image above, the majority of the bicycle docks are in Washington DC.
Bike tours in Washington DC are not only a popular family activity; renting a bike is also a great way to get around without breaking the bank or sitting in traffic. Washington DC has dedicated bike lanes, which make riding safer and more convenient.
Capital Bikeshare is cheaper than its competitors, and its docks are conveniently placed around monumental locations. It is often faster than other modes of transportation, and its annual membership offers unlimited trips under 30 minutes, which helps save money. CapBi can be used to commute to work or to meet friends, and since the bikes are human-powered rather than electric, riding is also a form of exercise. CapBi saves fuel and prevents carbon emissions, so it is healthy for both the rider and the environment.
As CapBi services are very popular and always in demand, we want to predict the number of bikes riders will use per hour so that contingencies can be planned to meet demand. To estimate the number of bikes required, we will consider factors such as weather condition, temperature, working versus non-working hours, the hour of the day, and so on.
Fun fact: CapBi offers GWU students an annual membership for only $25 (“Capital Bikeshare Discount”).
The data is sourced from the official Capital Bikeshare website, https://www.capitalbikeshare.com/system-data. We have downloaded data for September 2013 to September 2019.
The official data contains only the following variables:
Variable | Description |
---|---|
Duration | Duration of trip |
Start Date | Includes start date and time |
End Date | Includes end date and time |
Start Station | Includes starting station name and number |
End Station | Includes ending station name and number |
Bike Number | Includes ID number of bike used for the trip |
Member Type | Indicates whether user was a “registered” member (Annual Member, 30-Day Member or Day Key Member) or a “casual” rider (Single Trip, 24-Hour Pass, 3-Day Pass or 5-Day Pass) |
From the official Capital Bikeshare dataset, we dropped the columns that are irrelevant to our analysis: Duration, End Date, End Station, Bike Number, and Member Type.
To predict the number of bikes used hourly, we scraped weather data from the following website: https://www.wunderground.com/history/daily/us/va/arlington-county/KDCA/.
To understand whether holidays influence bike usage, we downloaded a federal holiday dataset from https://www.kaggle.com/gsnehaa21/federal-holidays-usa-19662020.
We merged all the different data sources into a single file for our analysis.
Once the data is merged, we preprocess it. We dropped columns that are not useful for our analysis: Start.station, Wind, Wind.Gust..mph., Pressure..in., and Precip..in.
We condensed the Condition column, which has 47 levels, into 6 levels. If the condition is Cloudy, Cloudy / Windy, Mostly Cloudy, Mostly Cloudy / Windy, Partly Cloudy, or Partly Cloudy / Windy, we replace it with Cloudy.
Similar logic is applied to the other weather conditions, as the sketch after the table shows. We finally have the following levels in the Condition column:

Condition |
---|
Cloudy |
Fair |
Fog |
Rain |
Snow |
Windy |
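A minimal sketch of this recoding, assuming the merged data frame is named data; only the Cloudy group is spelled out, and the other five groups follow the same pattern:

```r
# Collapse the 47 raw weather descriptions into 6 broad levels
cloudy <- c("Cloudy", "Cloudy / Windy", "Mostly Cloudy",
            "Mostly Cloudy / Windy", "Partly Cloudy", "Partly Cloudy / Windy")

data$Condition <- as.character(data$Condition)
data$Condition[data$Condition %in% cloudy] <- "Cloudy"
# ...analogous vectors map the remaining raw levels to Fair, Fog, Rain, Snow, Windy
data$Condition <- factor(data$Condition)
```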
After merging in the federal holiday dataset, the Holiday column has the following levels:

Holiday |
---|
NA |
Columbus Day |
Veterans Day |
Thanksgiving Day |
Christmas Day |
New Year’s Day |
Birthday of Martin Luther King, Jr. |
Washington’s Birthday |
Memorial Day |
Independence Day |
Labor Day |
We convert the Holiday column from a factor into a binary column, where 0 means no holiday and 1 means holiday.
Since our dataset records the number of bikes used per hour across many stations, we simplify it by aggregating all the CapBi data on an hourly basis.
We also rename the columns for ease of use.
We use the lubridate package to extract the hour, month, day, and year from the Start_Date column, which is of type character. For example, for the date 2019-09-20 18:00:00, the hour is 18, the month is 09, the day is 20, and the year is 2019.
We convert the following columns to factors: HourOfDay, Month, Year, Day, Condition, Holiday, Weekday, TimeofDay, and Season.
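A minimal sketch of these steps, assuming the hourly aggregation has already produced a data frame named data whose columns match the processed table below:

```r
library(lubridate)

# Holiday: 1 if the date matches a federal holiday, 0 otherwise (NA = no holiday)
data$Holiday <- ifelse(is.na(data$Holiday), 0, 1)

# Start_Date is a character column such as "2019-09-20 18:00:00";
# parse it once, then extract the time components
start <- ymd_hms(data$Start_Date)
data$HourOfDay <- hour(start)
data$Month     <- month(start)
data$Day       <- day(start)
data$Year      <- year(start)

# Treat the categorical columns as factors
cat_cols <- c("HourOfDay", "Month", "Year", "Day", "Condition",
              "Holiday", "Weekday", "TimeofDay", "Season")
data[cat_cols] <- lapply(data[cat_cols], factor)
```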
The first 6 rows of the final processed dataset are:

Start_Date | Condition | Temp | Dew | Humidity | Windspeed | Holiday | Weekday | TimeofDay | Season | HourOfDay | Month | Day | Year | Total_Bikes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2013-10-01 00:00:00 | Cloudy | 62 | 52 | 70 | 3 | 0 | Weekday | Non Working Hour | Fall | 0 | 10 | 1 | 2013 | 38 |
2013-10-01 01:00:00 | Cloudy | 63 | 55 | 75 | 3 | 0 | Weekday | Non Working Hour | Fall | 1 | 10 | 1 | 2013 | 41 |
2013-10-01 02:00:00 | Cloudy | 63 | 55 | 75 | 5 | 0 | Weekday | Non Working Hour | Fall | 2 | 10 | 1 | 2013 | 23 |
2013-10-01 03:00:00 | Fair | 61 | 55 | 81 | 6 | 0 | Weekday | Non Working Hour | Fall | 3 | 10 | 1 | 2013 | 5 |
2013-10-01 04:00:00 | Fair | 60 | 55 | 83 | 5 | 0 | Weekday | Non Working Hour | Fall | 4 | 10 | 1 | 2013 | 4 |
2013-10-01 05:00:00 | Cloudy | 61 | 55 | 81 | 5 | 0 | Weekday | Non Working Hour | Fall | 5 | 10 | 1 | 2013 | 21 |
In this analysis, the question we want to answer is: how many bikes will riders use in a given hour, given the weather and time-related factors?
Exploratory Data Analysis is performed to understand the underlying patterns before moving to model creation. This helps us understand what to expect from the models.
From the plots below we can see that winter is the least popular season for renting bikes, while spring, summer, and fall show fairly similar patterns. This makes sense: roads covered with snow are difficult to cycle on, so demand for bikes drops during winter.
In this plot we also include temperature and observe that more bikes are rented as temperature increases, with the optimum between 80 and 90 degrees Fahrenheit.
The plot shows that people bike most in cloudy weather, followed by fair weather; rainy, snowy, and windy conditions are not preferred.
The number of bikes rented increased steadily up to 2017 and then decreased in 2018. There is a slight difference between weekdays and weekends, with more bikes rented on weekdays, when people commute to work.
Bike rentals peak during the 8 AM and 6 PM rush hours, when people are heading to or returning from work.
We notice that riders rent bikes more often on non-holidays, but the number of bikes rented during holidays is still significant. This could be because people like to bike around DC for sightseeing on holidays.
Temperature has a positive correlation of 0.44 with the number of bikes hired, while humidity has a negative correlation of -0.30.
The correlation table for the numerical variables Temp, Dew, Humidity, Windspeed, and HourOfDay with respect to Total_Bikes is as follows:

Variable | Total_Bikes | Temp | Dew | Humidity | Windspeed | HourOfDay |
---|---|---|---|---|---|---|
Total_Bikes | 1.00 | 0.44 | 0.24 | -0.30 | 0.10 | 0.42 |
Temp | 0.44 | 1.00 | 0.89 | 0.08 | -0.07 | 0.14 |
Dew | 0.24 | 0.89 | 1.00 | 0.50 | -0.18 | -0.01 |
Humidity | -0.30 | 0.08 | 0.50 | 1.00 | -0.29 | -0.29 |
Windspeed | 0.10 | -0.07 | -0.18 | -0.29 | 1.00 | 0.15 |
HourOfDay | 0.42 | 0.14 | -0.01 | -0.29 | 0.15 | 1.00 |
The pictorial representation of the above table is as follows:
For our analysis, we scale all the numeric variables (Temp, Dew, Humidity, and Windspeed) to avoid skewed results.
We now split our dataset into train and test sets. Samples from the years 2013 through 2018 are used to train the models, and samples from 2019 are used to validate model performance.
Our training set has 45,557 samples and our test set has 6,540 samples.
We also drop the Start_Date, Month, Day, and Year columns from the training and test sets, as these are not used when creating the models. A sketch of these steps follows.
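A sketch of the scaling and the year-based split (object names are ours):

```r
# Standardize the numeric predictors so no single scale dominates
num_cols <- c("Temp", "Dew", "Humidity", "Windspeed")
data[num_cols] <- scale(data[num_cols])

# Train on 2013-2018, hold out 2019 for validation
train <- data[data$Year != "2019", ]
test  <- data[data$Year == "2019", ]

# Drop the columns not used for modeling
drop_cols <- c("Start_Date", "Month", "Day", "Year")
train <- train[, !(names(train) %in% drop_cols)]
test  <- test[, !(names(test) %in% drop_cols)]
```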
Since we need to predict the number of bikes, a numerical value, we perform regression analysis. We will create a linear regression model and try to optimize it, use a decision tree for regression, try bagged decision trees, and finally create and tune a random forest model to get the best results.
We perform linear regression on the training set, using all the variables to predict the number of bikes per hour, our ‘y’ variable.
We notice that the R-squared value for the linear model with all the variables is 0.697.
We want to know how much each variable contributes to the linear model's R-squared value, i.e., the relative importance of each variable in the linear model. For this we use the relaimpo package and its calc.relimp function.
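A sketch of the fit and the LMG decomposition; object names are ours, and rela = TRUE normalizes the shares so they sum to 1, matching the table below:

```r
library(relaimpo)

# Full linear model on all predictors
full_lm <- lm(Total_Bikes ~ ., data = train)
summary(full_lm)$r.squared   # ~0.697 on the training data

# LMG share of the R-squared attributed to each predictor
rel_imp <- calc.relimp(full_lm, type = "lmg", rela = TRUE)
rel_imp@lmg
```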
Variable | LMG |
---|---|
HourOfDay | 0.5769237 |
Temp | 0.1340105 |
TimeofDay | 0.0807025 |
Humidity | 0.0674104 |
Dew | 0.0643029 |
Season | 0.0424971 |
Condition | 0.0256185 |
Windspeed | 0.0052728 |
Holiday | 0.0017618 |
Weekday | 0.0014998 |
The relative importance table shows that the hour of the day heavily influences the model, contributing more than fifty percent of the explained variation. Temperature, time of day, dew, and humidity are the next largest contributors.
Having evaluated the linear model with all predictor variables, we now perform feature selection so that we can build a linear model on a subset of the variables without compromising accuracy.
The output for forward feature selection is as follows:
We observe that adjusted R2 is maximum when the variables Temp, TimeofDay, and HourOfDay are included. Similarly, the BIC and Cp values are minimized when these same variables are in the model.
Hence, forward feature selection suggests that including Temp, TimeofDay, and HourOfDay improves the accuracy of the prediction.
The output for backward feature selection is as follows:
Similarly, we observe that adjusted R2 is maximum when the variables Temp, Humidity, and HourOfDay are included, and the BIC and Cp values are minimized when these same variables are in the model.
Hence, backward feature selection suggests that including Temp, Humidity, and HourOfDay improves the accuracy of the prediction.
Overall, from the results of forward/backward feature selection, we conclude that the variables Temp, Humidity, TimeofDay, and HourOfDay are important.
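A sketch of the two searches using the leaps package; note that regsubsets expands each factor into dummy columns, so selected terms are individual levels, and nvmax is illustrative:

```r
library(leaps)

fwd <- regsubsets(Total_Bikes ~ ., data = train, method = "forward",  nvmax = 10)
bwd <- regsubsets(Total_Bikes ~ ., data = train, method = "backward", nvmax = 10)

# Best subset size by each criterion: maximize adjusted R2, minimize Cp and BIC
which.max(summary(fwd)$adjr2)
which.min(summary(fwd)$cp)
which.min(summary(fwd)$bic)
```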
After removing the nonessential features described in the previous section, we fit a linear model with four selected variables: Temp, Dew, TimeofDay, and HourOfDay.
Moreover, we would like to examine the model further to find out which variables have the most impact on our outcome variable, the number of bikes.
We notice that the R-squared value for the linear model with Temp, Dew, TimeofDay, and HourOfDay as the variables is 0.681.
From the results of the reduced linear model, we also observe that the TimeofDay variable has a high p-value and is therefore not significant, so we exclude it and fit a new linear model with the three predictors Temp, Dew, and HourOfDay.
The R-squared value for the linear model with Temp, Dew, and HourOfDay as the variables is still 0.681, and now all the variables are significant, since their p-values are below the significance level.
An accurate multiple linear regression model should not include two predictors that are strongly correlated with each other, as the collinearity will affect overall performance.
Thus we proceed to examine and remove these potential relationships in our model, as sketched below.
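A sketch using the car package; vif reports GVIF when the model contains factors such as HourOfDay:

```r
library(car)

reduced_lm <- lm(Total_Bikes ~ Temp + Dew + HourOfDay, data = train)
vif(reduced_lm)   # GVIF values well above ~5 suggest collinearity
```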
Variable | GVIF |
---|---|
Temp | 6.740818 |
Dew | 6.420717 |
HourOfDay | 1.352372 |
As the result indicates, Temp and Dew are closely related, since both have much higher VIF values than HourOfDay. Because Temp and Dew serve the same purpose in our model, we keep whichever one is more relevant to the outcome.
To identify the more useful predictor, we build two new models containing Temp and Dew, respectively.
Model 1: y = Temp + HourOfDay
Model 2: y = Dew + HourOfDay
Model 1 (Temp) performs noticeably better than Model 2 (Dew) in several key respects, especially R2: the R2 value for Model 1 is 0.661, while for Model 2 it is 0.610.
The higher the R2 value, the stronger the relationship between the model and the outcome; the higher R2 also indicates that Temp is more important than Dew for predicting the number of bikes.
Therefore, the final model consists of Temp and HourOfDay.
With the final model containing the Temp and HourOfDay variables, we calculate what percentage each predictor contributes to the model.
Variable | LMG |
---|---|
HourOfDay | 0.760272 |
Temp | 0.239728 |
Of the 66% of variance explained by the final model, the predictor HourOfDay contributes well over 70% and Temp less than 30%, as shown in the bar chart below.
In this section, we have shown using multiple linear regression that HourOfDay and Temp are the most relevant predictors of the number of bikes. The section that follows moves on to decision trees.
After performing regression with a linear model, we now perform regression using decision trees. Decision trees have the advantage that they are simple to create and can handle non-linear data.
To save computational time, we store the decision tree model as an RDS file and read the RDS file back for our predictions.
We created the decision tree using the caret package. The “rpart2” method of caret’s train function builds a decision tree tuned on the maximum tree depth; we specified a maximum depth of 10 via the argument tuneLength = 9. We also performed 10-fold cross-validation, repeated 3 times, as sketched below.
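A sketch of the training call; the RDS file name is ours:

```r
library(caret)

ctrl <- trainControl(method = "repeatedcv", number = 10, repeats = 3)

# "rpart2" tunes over maxdepth; tuneLength sets how many depths are tried
dt_model <- train(Total_Bikes ~ ., data = train,
                  method = "rpart2", tuneLength = 9, trControl = ctrl)

saveRDS(dt_model, "dt_model.rds")    # cache the fitted model
dt_model <- readRDS("dt_model.rds")  # reload on later runs
plot(dt_model)                       # RMSE against maximum tree depth
```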
The final depth used for the decision tree is 10; the plot of RMSE against maximum tree depth is as follows:
As the depth of the tree increases, performance improves, i.e., the model's RMSE decreases, reaching its minimum at depth = 10.
The decision tree we will use for prediction is as follows:
At the root node is the TimeofDay variable with the value Working Hour; the variable Temp occurs 3 times in the tree, and HourOfDay with value 18 occurs twice. The more often a variable occurs, the more important it is.
The importance of the variables for the decision tree is shown below:
We notice that the variables Temp, WeekdayWeekend, Dew, SeasonWinter, and HourOfDay18 have high variable importance. A variable's importance depends on how high it appears in the decision tree and on how many times it repeats (Therneau, 2019, p. 11).
We trained our decision tree on the 2013-2018 data; we now make predictions on the 2019 data, as sketched below.
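A sketch of the prediction and metrics; caret's postResample reports RMSE, Rsquared, and MAE:

```r
dt_pred <- predict(dt_model, newdata = test)
postResample(pred = dt_pred, obs = test$Total_Bikes)
```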
The evaluation metrics of the decision tree are as follows:
R2 | RMSE | MAE |
---|---|---|
0.7319768 | 194.6294 | 139.3643 |
Decision trees have a drawback: they are not flexible and can perform poorly on new samples of data. Thus we use bagging, which combines bootstrapping the data with aggregation.
Bagged decision trees create an ensemble of decision trees and overcome this drawback by averaging the predictions of the individual trees.
To save computational time, we store the bagged decision tree model as an RDS file and read the RDS file back for our predictions.
We created the bagged decision tree using the caret package. The “treebag” method of caret’s train function builds a bagged decision tree; there is no tuning parameter. We performed 10-fold cross-validation, repeated 3 times, as sketched below.
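A sketch of the training call, reusing the trainControl object from the decision tree section:

```r
bag_model <- train(Total_Bikes ~ ., data = train,
                   method = "treebag", trControl = ctrl)
varImp(bag_model)   # variable importance, plotted below
```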
The importance of the variables for the bagged decision tree is shown below:
We notice that the variables Temp, WeekdayWeekend, Dew, HourOfDay18 and SeasonWinter have high variable importance.
We trained our bagged decision tree on the 2013-2018 data; we now make predictions on the 2019 data.
The evaluation metrics of the bagged decision tree are as follows:
R2 | RMSE | MAE |
---|---|---|
0.7447782 | 190.1936 | 136.4659 |
Thus, using the bagged decision tree, we got a slightly higher R-squared value than with the single decision tree.
Finally, we model using Random Forest.
In the bagged tree model, different bootstrap samples of the dataset are taken and a tree is trained on each. Because all variables are used to train each tree, we may overfit the model, and the most significant variables can be suppressed by the other variables in the dataset.
To overcome this, we can check model performance on a small subset of the feature variables to understand each variable's influence on the target. We have 10 variables, but we do not need to try every option: the previous models (linear regression, decision trees) showed that more than 80% of the variable importance comes from the top three variables. So we create a random forest using these three variables.
From linear regression through decision trees, two variables were consistently found to be important, but the third is harder to choose. For better clarity, we try all combinations of three variables, check each model's performance, and evaluate how the dominant variables relate to the number of bikes.
We used the randomForest package to train the model; for regression, it defaults mtry (the number of variables tried at each split) to one third of the number of predictors.
To save computational time, we trained the model in advance and load it from disk to check its performance. Using the loaded model, we predict on the test set and evaluate the metrics, as sketched below.
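A sketch of the training and evaluation. We show the full formula, which matches the ten-variable importance table below; a three-variable model would list the chosen predictors in place of the dot, and the seed is illustrative:

```r
library(randomForest)

set.seed(42)  # illustrative seed for reproducibility
# For regression, mtry defaults to one third of the predictors;
# importance = TRUE records %IncMSE and IncNodePurity
rf_model <- randomForest(Total_Bikes ~ ., data = train, importance = TRUE)

rf_pred <- predict(rf_model, newdata = test)
postResample(rf_pred, test$Total_Bikes)   # RMSE, Rsquared, MAE
importance(rf_model)                      # the table below
```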
The evaluation metrics of the random forest are as follows:
R2 | RMSE | MAE |
---|---|---|
0.9321908 | 97.85963 | 62.4085 |
From these metrics, we can see that this model gives us the best R2 value so far.
The relative importance of the variables helps us identify the dominant features in the model.
Variable | %IncMSE | IncNodePurity |
---|---|---|
Condition | 106.42494 | 163639162 |
Temp | 79.90901 | 862235383 |
Dew | 35.38582 | 295842195 |
Humidity | 62.20948 | 349574467 |
Windspeed | 50.81140 | 109738801 |
Holiday | 137.36333 | 53374646 |
Weekday | 450.50367 | 456264976 |
TimeofDay | 37.94861 | 417106486 |
Season | 41.75134 | 212546356 |
HourOfDay | 165.53598 | 3019676627 |
Mean Decrease Accuracy (%IncMSE) and Mean Decrease Gini (IncNodePurity) are computed on the trained model.
Mean Decrease Accuracy (%IncMSE) refers to how much model accuracy decreases if we leave that variable out.
Mean Decrease Gini (IncNodePurity) measures variable importance as the total decrease in node impurity from splits on that variable (the Gini index for classification; the residual sum of squares for regression, as here).
From the result, we can see that HourOfDay, Weekday, Holiday, Condition, and Temp are the most important variables, which is similar to the other models' relative importance.
We tried mtry values from 1 to 5 with 5-fold cross-validation, repeating the process 3 times to obtain averaged results.
We tune/optimize the random forest using a grid search over mtry in the range 1 to 5, as sketched below.
Training this model takes quite some time, so we trained it in advance, stored it, and read it directly from disk for faster execution and knitting.
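A sketch of the grid search; the RDS caching mirrors the earlier models, and the file name is ours:

```r
rf_ctrl <- trainControl(method = "repeatedcv", number = 5, repeats = 3)

rf_grid <- train(Total_Bikes ~ ., data = train, method = "rf",
                 tuneGrid = expand.grid(mtry = 1:5), trControl = rf_ctrl)

saveRDS(rf_grid, "rf_grid.rds")
rf_grid$results   # RMSE, Rsquared, MAE and their SDs for each mtry (table below)
```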
Models with mtry from 1 to 5 were tried, and we can see that the R2 value increases as the number of variables increases.
mtry | RMSE | Rsquared | MAE | RMSESD | RsquaredSD | MAESD |
---|---|---|---|---|---|---|
1 | 298.1751 | 0.6622144 | 234.8145 | 2.306013 | 0.0062524 | 1.548266 |
2 | 228.6833 | 0.7355588 | 175.3999 | 2.221711 | 0.0049081 | 1.521392 |
3 | 188.9903 | 0.7937174 | 140.6115 | 1.905449 | 0.0037658 | 1.418370 |
4 | 161.8658 | 0.8376181 | 116.6697 | 1.865326 | 0.0039560 | 1.271864 |
5 | 142.4377 | 0.8669464 | 99.4204 | 1.730465 | 0.0035588 | 1.176530 |
The plot shows that the mean squared error decreases as the number of predictors increases (not surprising).
Let's use this tuned model, which achieves accurate predictions with a small number of variables per split, to predict the test data.
R2 | RMSE | MAE |
---|---|---|
0.8875139 | 133.6627 | 92.47352 |
With a maximum of 5 variables per split, we were able to achieve an R2 of 0.88 on the test data. Let's check which variables contribute most to this R2 value.
As most of the variables are categorical, each level is treated as a separate term, and we can see that HourOfDay, Weekday, Condition, and Temp are the variables that contribute most to our model.
HourOfDay = 18 is the strongest predictor; whether the day is a weekend and whether the hour is a working hour are also good indicators of the total number of bikes used in a given hour.
Model | No. of Predictors | R2 [Test Data] | Variable Importance (desc) |
---|---|---|---|
Linear Regression | Full Model i.e. all variables | 0.69 | HourOfDay, Temp, TimeOfDay |
Linear Regression | 2 | 0.67 | HourOfDay, Temp |
Decision Tree | Full Model i.e. all variables | 0.73 | Temp, Weekday, Dew, Season, HourOfDay |
Bagged Tree | Full Model i.e. all variables | 0.74 | Temp, Weekday, Dew, HourOfDay, Season |
Random Forest | 3 | 0.93 | HourOfDay, Weekday, Holiday, Condition, Temp |
Random Forest- Grid Search | Combination of 1 to 5 variables | 0.88 | HourOfDay, Weekday, Holiday, Temp, Condition |
Thus the random forest performs best, with the highest R-squared value of 0.93.
In terms of modeling and prediction, we conclude that the random forest predicts most accurately on this dataset, with a maximum R-squared of 0.93, or 93%. The most important variable is the hour of day, with the most significant values being 6 PM-7 PM and 8 AM-9 AM, followed by the weekday variable (weekend), the time of day variable (working hour), and temperature (70-90 degrees Fahrenheit).
The insight from our analysis is that on a normal day, users tend to ride bikes to commute to offices, schools, etc., while on weekends and holidays people prefer bikes for travel and leisure. We also find that bikes are used most in moderate temperatures, and users tend to avoid them at both high and low temperatures.
Based on our analysis, we recommend that Capital Bikeshare increase bike availability during the high-demand morning and evening commute hours and on weekends/holidays, thus catering to more users and, in turn, securing more profits.
Humidity and Windspeed are, according to our analysis, non-dominant feature variables. Let's analyse how our models predict bike demand when all other variables are fixed but humidity and windspeed are changed.
The sample input shows that all variables are fixed except Humidity and Windspeed:
Condition | Temp | Dew | Humidity | Windspeed | Holiday | Weekday | TimeofDay | Season | HourOfDay |
---|---|---|---|---|---|---|---|---|---|
Windy | 1.673829 | 1.337800 | -0.4183304 | 0.5601230 | 0 | Weekday | Non Working Hour | Summer | 19 |
Windy | 1.618472 | 1.363495 | -0.2330757 | -0.2022479 | 0 | Weekday | Non Working Hour | Summer | 19 |
The predictions of the different models are shown below:
Actual Bike Demand | Full Linear Model | Reduced Linear Model | Decision Tree | Grid Search Random Forest | Random Forest |
---|---|---|---|---|---|
1059 | 879.0534 | 879.1654 | 898.7845 | 938.1532 | 1007.0551 |
968 | 869.3173 | 871.9652 | 898.7845 | 929.4174 | 963.6588 |
The 3-variable random forest (the last column) has the closest predictions; the second closest model in this scenario is the grid-search random forest.
We know that the hour of the day is a dominant feature, with HourOfDay = 18 the most important level. We want to see how our models predict when the hour of the day is varied and the other variables are kept constant.
Sample Input Set:
Condition | Temp | Dew | Humidity | Windspeed | Holiday | Weekday | TimeofDay | Season | HourOfDay |
---|---|---|---|---|---|---|---|---|---|
Fair | 0.1791985 | 0.0530644 | -0.3654005 | 1.2893474 | 0 | Weekday | Non Working Hour | Summer | 8 |
Fair | 0.7881220 | -0.3066616 | -1.7945077 | 1.8859856 | 0 | Weekday | Working Hour | Summer | 14 |
Fair | 0.7881220 | -0.5636087 | -2.0591572 | 1.8859856 | 0 | Weekday | Working Hour | Summer | 18 |
Fair | 0.6774087 | 0.3100115 | -0.7359098 | 0.8915887 | 0 | Weekday | Non Working Hour | Summer | 22 |
Fair | 0.6220520 | 0.4641798 | -0.3124706 | 0.0960711 | 0 | Weekday | Non Working Hour | Summer | 5 |
Fair | 0.7327653 | 0.5669587 | -0.3124706 | -0.5005670 | 0 | Weekday | Non Working Hour | Summer | 8 |
The predictions of the different models are shown below:
Actual Bike Demand | Full Linear Model | Reduced Linear Model | Decision Tree | Grid Search Random Forest | Random Forest |
---|---|---|---|---|---|
1065 | 765.1080 | 737.1794 | 888.5368 | 937.9541 | 1153.41739 |
522 | 618.9127 | 569.6468 | 509.5070 | 590.7918 | 570.07379 |
1599 | 1093.2136 | 1025.9506 | 1224.6965 | 1216.3499 | 1480.60021 |
371 | 378.3377 | 346.6182 | 85.7916 | 387.4753 | 412.94439 |
37 | 186.9565 | 144.9366 | 85.7916 | 101.0604 | 49.53663 |
1131 | 856.4953 | 809.1816 | 888.5368 | 913.3333 | 1088.83388 |
From the results, we notice that the 3-variable random forest (the last column) performs best.
We want to verify how well our created models predict the demand for bikes on weekdays and weekends.
Sample Input Set:
Condition | Temp | Dew | Humidity | Windspeed | Holiday | Weekday | TimeofDay | Season | HourOfDay |
---|---|---|---|---|---|---|---|---|---|
Cloudy | 0.7327653 | 0.1044538 | -1.1593489 | 1.488227 | 0 | Weekday | Working Hour | Spring | 16 |
Cloudy | 0.8434787 | 0.9266846 | 0.3226882 | -1.097205 | 0 | Weekend | Working Hour | Spring | 16 |
The predictions of the different models are shown below:
Actual Bike Demand | Full Linear Model | Reduced Linear Model | Decision Tree | Grid Search Random Forest | Random Forest |
---|---|---|---|---|---|
779 | 720.5074 | 657.4148 | 509.5070 | 691.4413 | 713.2958 |
947 | 662.5180 | 671.8152 | 861.6428 | 839.5023 | 946.1810 |
We observe that the error is smallest in the last column, which corresponds to the random forest.
We want to validate how well our generated models predict the demand for bikes on Holidays and Working Days.
Sample Input Set:
Condition | Temp | Dew | Humidity | Windspeed | Holiday | Weekday | TimeofDay | Season | HourOfDay |
---|---|---|---|---|---|---|---|---|---|
Fair | 0.9541921 | 0.6183481 | -0.5771201 | 0.6927093 | 0 | Weekend | Working Hour | Fall | 11 |
Fair | 1.1202622 | 1.0294635 | -0.0478211 | -1.0972052 | 0 | Weekend | Working Hour | Fall | 11 |
Fair | -2.3118526 | -2.1052914 | -1.4239984 | 2.6815031 | 1 | Weekday | Working Hour | Winter | 11 |
Fair | -2.0350691 | -2.2594597 | -1.2122788 | -0.5005670 | 0 | Weekday | Working Hour | Winter | 11 |
The predictions of the different models are shown below:
Actual Bike Demand | Full Linear Model | Reduced Linear Model | Decision Tree | Grid Search Random Forest | Random Forest |
---|---|---|---|---|---|
924 | 551.76454 | 505.7156 | 861.6428 | 777.7951 | 922.5449 |
900 | 565.48432 | 527.3163 | 861.6428 | 738.1631 | 847.4874 |
74 | 29.96823 | 80.9030 | 356.2176 | 155.6374 | 112.7801 |
104 | 166.05304 | 116.9041 | 356.2176 | 174.5768 | 127.8974 |
From the above outputs, we conclude that the random forest handles all of these variations in the dataset best.
Motivate International, Inc. (n.d.). Press Kit. Retrieved November 26, 2019, from https://www.capitalbikeshare.com/press-kit.
Capital Bikeshare Discount. (n.d.). Retrieved November 26, 2019, from https://benefits.gwu.edu/capital-bikeshare-discount.
Therneau, T. M. (2019, April 11). An Introduction to Recursive Partitioning Using the RPART Routines. Retrieved November 26, 2019, from https://cran.r-project.org/web/packages/rpart/vignettes/longintro.pdf.
Github Repository: