In [1]:
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import geopandas as gpd
import hvplot.pandas
import esri2gpd
import carto2gpd
import seaborn as sns

np.random.seed(42)
In [2]:
pd.options.display.max_columns = 999

Lecture 12B: Predictive Modeling Part 2¶

Nov 28, 2022

Housekeeping¶

  • HW #7 (required) posted, due on Wednesday 12/07
    • Includes final project proposal
    • Predictive modeling of housing prices in Philadelphia
  • Final project due on Tuesday December 20

Picking up where we left off¶

We'll start by recapping the exercise from last lecture: adding additional distance-based features to our housing price model...

Spatial amenity/disamenity features¶

The strategy

  • Get the data for a certain type of amenity (e.g., restaurants, bars) or disamenity (e.g., crimes)
    • Data sources: 311 requests, crime incidents, OpenStreetMap
  • Use scikit-learn's nearest neighbors algorithm to calculate the distance from each sale to its nearest neighbors in the amenity/disamenity dataset

Examples of new possible features...¶

Distance from each sale to:

  • Graffiti 311 Calls (last lecture)
  • Subway Stops (last lecture)
  • Universities
  • Parks
  • City Hall
  • New Construction Permits
  • Aggravated Assaults
  • Abandoned Vehicle 311 Calls
In [3]:
# Models
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Model selection
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV

# Pipelines
from sklearn.pipeline import make_pipeline

# Preprocessing
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, PolynomialFeatures, OneHotEncoder

# Neighbors
from sklearn.neighbors import NearestNeighbors

Load and clean the data:

In [4]:
# the CARTO API url
carto_url = "https://phl.carto.com/api/v2/sql"

# The table name
table_name = "opa_properties_public"

# Only pull 2021 sales for single family residential properties
where = "sale_date >= '2021-01-01' AND sale_date < '2022-01-01'" 
where = where + " AND category_code_description IN ('SINGLE FAMILY', 'Single Family')"

# Run the query
salesRaw = carto2gpd.get(carto_url, table_name, where=where)

# Optional: put it in a reproducible order for test/train splits later
salesRaw = salesRaw.sort_values("parcel_number")

# The feature columns we want to use
cols = [
    "sale_price",
    "total_livable_area",
    "total_area",
    "garage_spaces",
    "fireplaces",
    "number_of_bathrooms",
    "number_of_bedrooms",
    "number_stories",
    "exterior_condition",
    "zip_code",
    "geometry"
]

# Trim to these columns and remove NaNs
sales = salesRaw[cols].dropna()

# Trim zip code to only the first five digits
sales['zip_code'] = sales['zip_code'].astype(str).str.slice(0, 5)

# Trim very low and very high sales
valid = (sales['sale_price'] > 3000) & (sales['sale_price'] < 1e6)
sales = sales.loc[valid]
In [5]:
len(sales)
Out[5]:
19297

Add new distance-based features:

In [6]:
def get_xy_from_geometry(df):
    """
    Return a numpy array with two columns, where the 
    first holds the `x` geometry coordinate and the second 
    column holds the `y` geometry coordinate
    
    Note: this works with both Point() and Polygon() objects.
    """
    # NEW: use the centroid.x and centroid.y to support Polygon() and Point() geometries 
    x = df.geometry.centroid.x
    y = df.geometry.centroid.y
    
    return np.column_stack((x, y)) # stack as columns
In [7]:
# Convert to meters and EPSG=3857
sales_3857 = sales.to_crs(epsg=3857)

# Extract x/y for sales
salesXY = get_xy_from_geometry(sales_3857)

Example #1: 311 Graffiti Calls¶

Source: https://www.opendataphilly.org/dataset/311-service-and-information-requests

Download the data:

In [8]:
# Select only graffiti removal requests from 2021
where = "requested_datetime >= '2021-01-01' AND requested_datetime < '2022-01-01'"
where = where + " AND service_name = 'Graffiti Removal'"

# Pull the subset we want
graffiti = carto2gpd.get(carto_url, "public_cases_fc", where=where)

# Remove rows with missing geometries
graffiti = graffiti.loc[graffiti.geometry.notnull()]

Get the x/y coordinates for the graffiti calls:

In [9]:
# Convert to meters in EPSG=3857
graffiti_3857 = graffiti.to_crs(epsg=3857)

# Extract x/y for graffiti calls
graffitiXY = get_xy_from_geometry(graffiti_3857)

Run the neighbors algorithm to calculate the new feature:

In [10]:
# STEP 1: Initialize the algorithm
nbrs = NearestNeighbors(n_neighbors=5)

# STEP 2: Fit the algorithm on the "neighbors" dataset
nbrs.fit(graffitiXY)

# STEP 3: Get distances for sale to neighbors
grafDists, grafIndices = nbrs.kneighbors(salesXY) 

# STEP 4: Get average distance to neighbors
avgGrafDist = grafDists.mean(axis=1)

# Set zero distances to be small, but nonzero
# IMPORTANT: THIS WILL AVOID INF DISTANCES WHEN DOING THE LOG
avgGrafDist[avgGrafDist==0] = 1e-5

# STEP 5: Add the new feature
sales['logDistGraffiti'] = np.log10(avgGrafDist)
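
These five steps repeat for every amenity/disamenity feature below, so it can be convenient to wrap them in a small helper function. Below is a minimal sketch of such a helper (the function name and defaults are our own, not part of the original exercise), relying on the get_xy_from_geometry() function and NearestNeighbors import from above:

def add_log_distance_feature(salesXY, gdf, k=5):
    """
    Return the log10 of the average distance from each sale to its
    k nearest neighbors in the (dis)amenity GeoDataFrame `gdf`.
    """
    # x/y coordinates of the (dis)amenity data in meters (EPSG=3857)
    neighborsXY = get_xy_from_geometry(gdf.to_crs(epsg=3857))

    # Fit the neighbors algorithm on the (dis)amenity dataset
    nbrs = NearestNeighbors(n_neighbors=k)
    nbrs.fit(neighborsXY)

    # Average distance from each sale to its k nearest neighbors
    dists, _ = nbrs.kneighbors(salesXY)
    avgDist = dists.mean(axis=1)

    # Avoid infinite values when taking the log
    avgDist[avgDist == 0] = 1e-5

    return np.log10(avgDist)

# Equivalent to the graffiti feature above:
# sales["logDistGraffiti"] = add_log_distance_feature(salesXY, graffiti, k=5)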

Example #2: Subway stops¶

Use the osmnx package to get subway stops in Philly — we can use the ox.geometries_from_polygon() function.

  • To select subway stations, we can use the station=subway tag: see the OSM Wiki
  • See Lecture 9A for a reminder on osmnx!
In [11]:
import osmnx as ox

Get the geometry polygon for the Philadelphia city limits:

In [12]:
# Download the Philadelphia city limits
url = "https://services.arcgis.com/fLeGjb7u4uXqeF9q/arcgis/rest/services/City_Limits/FeatureServer/0"
city_limits = esri2gpd.get(url).to_crs(epsg=3857)

# Get the geometry from the city limits
city_limits_outline = city_limits.to_crs(epsg=4326).squeeze().geometry
In [13]:
city_limits_outline
Out[13]:
(shapely Polygon of the Philadelphia city limits, rendered as an image in the notebook)

Use osmnx to query OpenStreetMap to get all subway stations within the city limits:

In [14]:
# Get the subway stops within the city limits
subway = ox.geometries_from_polygon(city_limits_outline, tags={"station": "subway"})

# Convert to 3857 (meters)
subway = subway.to_crs(epsg=3857)

Get the distance to the nearest subway station ($k=1$):

In [15]:
# STEP 1: x/y coordinates of subway stops (in EPSG=3857)
subwayXY = get_xy_from_geometry(subway.to_crs(epsg=3857))

# STEP 2: Initialize the algorithm
nbrs = NearestNeighbors(n_neighbors=1)

# STEP 3: Fit the algorithm on the "neighbors" dataset
nbrs.fit(subwayXY)

# STEP 4: Get distances for sale to neighbors
subwayDists, subwayIndices = nbrs.kneighbors(salesXY)

# STEP 5: add back to the original dataset
sales["logDistSubway"] = np.log10(subwayDists.mean(axis=1))
In [16]:
sales.head()
Out[16]:
sale_price total_livable_area total_area garage_spaces fireplaces number_of_bathrooms number_of_bedrooms number_stories exterior_condition zip_code geometry logDistGraffiti logDistSubway
18317 430000.0 1203.0 779.0 0.0 0.0 1.0 3.0 3.0 2 19147 POINT (-75.14692 39.93129) 1.865279 3.372884
20169 520000.0 2050.0 1433.0 0.0 0.0 2.0 4.0 2.0 4 19147 POINT (-75.14703 39.93123) 1.858225 3.371136
4248 600000.0 1625.0 1623.0 1.0 0.0 2.0 3.0 2.0 4 19147 POINT (-75.14854 39.93144) 2.347568 3.338733
18639 765000.0 2558.0 1400.0 1.0 0.0 0.0 3.0 2.0 1 19147 POINT (-75.14798 39.93012) 2.151381 3.358594
4096 255000.0 1008.0 546.0 0.0 0.0 1.0 3.0 2.0 4 19147 POINT (-75.14886 39.93012) 2.285802 3.339511

Now, let's run the model...¶

In [17]:
# Numerical columns
num_cols = [
    "total_livable_area",
    "total_area",
    "garage_spaces",
    "fireplaces",
    "number_of_bathrooms",
    "number_of_bedrooms",
    "number_stories",
    "logDistGraffiti", # NEW
    "logDistSubway" # NEW
]

# Categorical columns
cat_cols = ["exterior_condition", "zip_code"]
In [18]:
# Set up the column transformer with two transformers
transformer = ColumnTransformer(
    transformers=[
        ("num", StandardScaler(), num_cols),
        ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
    ]
)
In [19]:
# Initialize the pipeline
# NOTE: only use 20 estimators here so it will run in a reasonable time
pipe = make_pipeline(
    transformer, RandomForestRegressor(n_estimators=20, random_state=42)
)
In [20]:
# Split the data 70/30
train_set, test_set = train_test_split(sales, test_size=0.3, random_state=42)

# the target labels
y_train = np.log(train_set["sale_price"])
y_test = np.log(test_set["sale_price"])
In [21]:
# Fit the training set
# REMINDER: use the training dataframe objects here rather than numpy array
pipe.fit(train_set, y_train);
In [22]:
# What's the test score?
# REMINDER: use the test dataframe rather than numpy array
pipe.score(test_set, y_test)
Out[22]:
0.6549563613331909

Can we improve on this?¶

Exercise from last lecture: How about other spatial features?¶

  • I've listed several other potential sources of new distance-based features from OpenDataPhilly
  • Choose a few and add new features
  • Re-fit the model and evaluate the performance on the test set and the feature importances

Example 1: Universities¶

New feature: Distance to the nearest university/college

  • Source: OpenDataPhilly
  • GeoService URL: https://services.arcgis.com/fLeGjb7u4uXqeF9q/ArcGIS/rest/services/Universities_Colleges/FeatureServer/0
In [23]:
# Get the data
url = "https://services.arcgis.com/fLeGjb7u4uXqeF9q/ArcGIS/rest/services/Universities_Colleges/FeatureServer/0"
univs = esri2gpd.get(url)

# Get the X/Y
univXY = get_xy_from_geometry(univs.to_crs(epsg=3857))

# Run the k nearest algorithm
nbrs = NearestNeighbors(n_neighbors=1)
nbrs.fit(univXY)
univDists, _ = nbrs.kneighbors(salesXY)

# Add the new feature
sales['logDistUniv'] = np.log10(univDists.mean(axis=1))
In [24]:
fig, ax = plt.subplots(figsize=(10,10), facecolor=plt.get_cmap('viridis')(0))

x = salesXY[:,0]
y = salesXY[:,1]
ax.hexbin(x, y, C=np.log10(univDists.mean(axis=1)), gridsize=60)

# Plot the city limits
city_limits.plot(ax=ax, facecolor='none', edgecolor='white', linewidth=4)

ax.set_axis_off()
ax.set_aspect("equal")
ax.set_title("Distance to Nearest University/College", fontsize=16, color='white');

Example 2: Parks¶

New feature: Distance to the nearest park centroid

  • Source: OpenDataPhilly
  • GeoService URL: https://services.arcgis.com/fLeGjb7u4uXqeF9q/ArcGIS/rest/services/PPR_Properties/FeatureServer/0

Notes

  • The park geometries are polygons, so you'll need to get the x and y coordinates of the park centroids and calculate the distance to these centroids.
  • You can use the geometry.centroid.x and geometry.centroid.y values to access these coordinates.
In [25]:
# Get the data
url = "https://services.arcgis.com/fLeGjb7u4uXqeF9q/ArcGIS/rest/services/PPR_Properties/FeatureServer/0"
parks = esri2gpd.get(url)

# Get the X/Y
parksXY = get_xy_from_geometry(parks.to_crs(epsg=3857))

# Run the k nearest algorithm
nbrs = NearestNeighbors(n_neighbors=1)
nbrs.fit(parksXY)
parksDists, _ = nbrs.kneighbors(salesXY)

# Add the new feature
sales["logDistParks"] = np.log10(parksDists.mean(axis=1))
In [26]:
fig, ax = plt.subplots(figsize=(10, 10), facecolor=plt.get_cmap("viridis")(0))

x = salesXY[:, 0]
y = salesXY[:, 1]
ax.hexbin(x, y, C=np.log10(parksDists.mean(axis=1)), gridsize=60)

# Plot the city limits
city_limits.plot(ax=ax, facecolor="none", edgecolor="white", linewidth=4)

ax.set_axis_off()
ax.set_aspect("equal")
ax.set_title("Distance to Nearest Park", fontsize=16, color="white");

Example 3: City Hall¶

New feature: Distance to City Hall.

  • Source: OpenDataPhilly
  • GeoService URL: https://services.arcgis.com/fLeGjb7u4uXqeF9q/ArcGIS/rest/services/CITY_LANDMARKS/FeatureServer/0

Notes

  • To identify City Hall, you'll need to pull data where "NAME='City Hall'" and "FEAT_TYPE='Municipal Building'"
  • As with the parks, the geometry will be a polygon, so you should calculate the distance to the centroid of the City Hall polygon
In [27]:
# Get the data
url = "https://services.arcgis.com/fLeGjb7u4uXqeF9q/ArcGIS/rest/services/CITY_LANDMARKS/FeatureServer/0"
cityHall = esri2gpd.get(
    url, where="NAME = 'City Hall' AND FEAT_TYPE = 'Municipal Building'"
)

# Get the X/Y
cityHallXY = get_xy_from_geometry(cityHall.to_crs(epsg=3857))

# Run the k nearest algorithm
nbrs = NearestNeighbors(n_neighbors=1)
nbrs.fit(cityHallXY)
cityHallDist, _ = nbrs.kneighbors(salesXY)

# Add the new feature
sales["logDistCityHall"] = np.log10(cityHallDist.mean(axis=1))
In [28]:
fig, ax = plt.subplots(figsize=(10, 10), facecolor=plt.get_cmap("viridis")(0))

x = salesXY[:, 0]
y = salesXY[:, 1]
ax.hexbin(x, y, C=np.log10(cityHallDist.mean(axis=1)), gridsize=60)

# Plot the city limits
city_limits.plot(ax=ax, facecolor="none", edgecolor="white", linewidth=4)

ax.set_axis_off()
ax.set_aspect("equal")
ax.set_title("Distance to City Hall", fontsize=16, color="white");

Example 4: New Construction Permits¶

New feature: Distance to the 5 nearest residential building permits issued in 2021

  • Source: OpenDataPhilly
  • CARTO table name: "permits"

Notes

  • The exercise suggests new construction permits (where permitdescription equals 'NEW CONSTRUCTION PERMIT'); the solution below uses the closely related 'RESIDENTIAL BUILDING PERMIT' category
  • You can select permits from 2021 only using the permitissuedate column
In [29]:
# Table name
table_name = "permits"

# Where clause
where = "permitissuedate >= '2021-01-01' AND permitissuedate < '2022-01-01'"
where = where + " AND permitdescription='RESIDENTIAL BUILDING PERMIT'"

# Query
permits = carto2gpd.get(carto_url, table_name, where=where)

# Remove missing
permits = permits.loc[permits.geometry.notnull()]

# Get the X/Y
permitsXY = get_xy_from_geometry(permits.to_crs(epsg=3857))

# Run the k nearest algorithm
nbrs = NearestNeighbors(n_neighbors=5)
nbrs.fit(permitsXY)
permitsDist, _ = nbrs.kneighbors(salesXY)

# Add the new feature
sales["logDistPermits"] = np.log10(permitsDist.mean(axis=1))
In [30]:
fig, ax = plt.subplots(figsize=(10, 10), facecolor=plt.get_cmap("viridis")(0))

x = salesXY[:, 0]
y = salesXY[:, 1]
ax.hexbin(x, y, C=np.log10(permitsDist.mean(axis=1)), gridsize=60)

# Plot the city limits
city_limits.plot(ax=ax, facecolor="none", edgecolor="white", linewidth=4)

ax.set_axis_off()
ax.set_aspect("equal")
ax.set_title("Distance to 5 Closest Residential Building Permits", fontsize=16, color="white");

Example 5: Aggravated Assaults¶

New feature: Distance to the 5 nearest aggravated assaults in 2021

  • Source: OpenDataPhilly
  • CARTO table name: "incidents_part1_part2"

Notes

  • You can pull aggravated assaults only by selecting where Text_General_Code equals 'Aggravated Assault No Firearm' or 'Aggravated Assault Firearm'
  • You can select crimes from only 2021 using the dispatch_date column
In [31]:
# Table name
table_name = "incidents_part1_part2"

# Where selection
where = "dispatch_date >= '2021-01-01' AND dispatch_date < '2022-01-01'"
where = where + " AND Text_General_Code IN ('Aggravated Assault No Firearm', 'Aggravated Assault Firearm')"

# Query
assaults = carto2gpd.get(carto_url, table_name, where=where)

# Remove missing 
assaults = assaults.loc[assaults.geometry.notnull()]
    
# Get the X/Y
assaultsXY = get_xy_from_geometry(assaults.to_crs(epsg=3857))

# Run the k nearest algorithm
nbrs = NearestNeighbors(n_neighbors=5)
nbrs.fit(assaultsXY)
assaultDists, _ = nbrs.kneighbors(salesXY)

# Add the new feature
sales['logDistAssaults'] = np.log10(assaultDists.mean(axis=1))
In [32]:
fig, ax = plt.subplots(figsize=(10, 10), facecolor=plt.get_cmap("viridis")(0))

x = salesXY[:, 0]
y = salesXY[:, 1]
ax.hexbin(x, y, C=np.log10(assaultDists.mean(axis=1)), gridsize=60)

# Plot the city limits
city_limits.plot(ax=ax, facecolor="none", edgecolor="white", linewidth=4)

ax.set_axis_off()
ax.set_aspect("equal")
ax.set_title("Distance to 5 Closest Aggravated Assaults", fontsize=16, color="white");

Example 6: Abandoned Vehicle 311 Calls¶

New feature: Distance to the 5 nearest abandoned vehicle 311 calls in 2021

  • Source: OpenDataPhilly
  • CARTO table name: "public_cases_fc"

Notes

  • You can pull abandoned vehicle calls only by selecting where service_name equals 'Abandoned Vehicle'
  • You can select calls from 2021 only using the requested_datetime column
In [33]:
# Table name
table_name = "public_cases_fc"

# Where selection
where = "requested_datetime >= '2021-01-01' AND requested_datetime < '2022-01-01'"
where = where + " AND service_name = 'Abandoned Vehicle'"

# Query
cars = carto2gpd.get(carto_url, table_name, where=where)

# Remove missing
cars = cars.loc[cars.geometry.notnull()]

# Get the X/Y
carsXY = get_xy_from_geometry(cars.to_crs(epsg=3857))

# Run the k nearest algorithm
nbrs = NearestNeighbors(n_neighbors=5)
nbrs.fit(carsXY)
carDists, _ = nbrs.kneighbors(salesXY)

# Handle any sales that have 0 distances
carDists[carDists == 0] = 1e-5  # a small, arbitrary value

# Add the new feature
sales["logDistCars"] = np.log10(carDists.mean(axis=1))
In [34]:
fig, ax = plt.subplots(figsize=(10, 10), facecolor=plt.get_cmap("viridis")(0))

x = salesXY[:, 0]
y = salesXY[:, 1]
ax.hexbin(x, y, C=np.log10(carDists.mean(axis=1)), gridsize=60)

# Plot the city limits
city_limits.plot(ax=ax, facecolor="none", edgecolor="white", linewidth=4)

ax.set_axis_off()
ax.set_aspect("equal")
ax.set_title(
    "Distance to 5 Closest Abandoned Vehicle 311 Calls", fontsize=16, color="white"
);

Fit the updated model¶

In [35]:
# Numerical columns
num_cols = [
    "total_livable_area",
    "total_area",
    "garage_spaces",
    "fireplaces",
    "number_of_bathrooms",
    "number_of_bedrooms",
    "number_stories",
    "logDistGraffiti", # NEW
    "logDistSubway", # NEW
    "logDistUniv", # NEW
    "logDistParks", # NEW
    "logDistCityHall", # NEW 
    "logDistPermits", # NEW
    "logDistAssaults", # NEW
    "logDistCars" # NEW
]

# Categorical columns
cat_cols = ["exterior_condition", "zip_code"]
In [36]:
# Set up the column transformer with two transformers
transformer = ColumnTransformer(
    transformers=[
        ("num", StandardScaler(), num_cols),
        ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),
    ]
)
In [37]:
# Two steps in pipeline: preprocessor and then regressor
pipe = make_pipeline(
    transformer, RandomForestRegressor(n_estimators=20, random_state=42)
)
In [38]:
# Split the data 70/30
train_set, test_set = train_test_split(sales, test_size=0.3, random_state=42)

# the target labels
y_train = np.log(train_set["sale_price"])
y_test = np.log(test_set["sale_price"])
In [39]:
# Fit the training set
pipe.fit(train_set, y_train);
In [40]:
# What's the test score?
pipe.score(test_set, y_test)
Out[40]:
0.6638053863757688

More improvement!¶

Feature importances:¶

In [41]:
# The one-hot step
ohe = transformer.named_transformers_["cat"]

# One column for each category type!
ohe_cols = ohe.get_feature_names_out()

# Full list of columns is numerical + one-hot
features = num_cols + list(ohe_cols)

# The regressor
regressor = pipe["randomforestregressor"]

# Create the dataframe with importances
importance = pd.DataFrame(
    {"Feature": features, "Importance": regressor.feature_importances_}
)

# Sort importance in descending order and get the top
importance = importance.sort_values("Importance", ascending=False).iloc[:30]

# Plot
importance.hvplot.barh(
    x="Feature", y="Importance", flip_yaxis=True, height=500
)
Out[41]:

Part 2: Predicting bikeshare demand in Philadelphia¶

The technical problem: predict bikeshare trip counts for the Indego bikeshare in Philadelphia

The policy question: how to best expand a bikeshare program?¶

  • Bikeshares typically require substantial capital when first launching, about $4,000 to $5,000 per bike
  • Once launched, revenue from riders typically covers about 80 percent of operating costs
  • Ridership peaks in busy downtown hubs, but how to best expand outwards, often to low-density and low-income communities?
  • Important cost/benefit questions of how/where to expand to maximize ridership and access and minimize the need for outside subsidies

For more info, see this blog post from Pew

Using predictive modeling as a policy tool¶

  • Construct a predictive model for trip counts by stations in a bikeshare
  • Use this model to estimate and map out the ridership for potential stations in new areas
  • Use the cost per ride to estimate the additional revenue added
  • Compare this additional revenue to the cost of adding new stations (a rough numerical sketch follows this list)
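
As a rough numerical sketch of the last two steps, where every dollar amount and trip count below is a hypothetical placeholder rather than an actual Indego figure:

# Hypothetical inputs -- NOT actual Indego numbers
predicted_annual_trips = 8000    # model's predicted trips per year at a proposed station
revenue_per_trip = 2.50          # assumed average revenue per ride ($)
station_capital_cost = 50_000    # assumed up-front cost of a new station ($)
annual_operating_cost = 10_000   # assumed yearly operating cost per station ($)

# Additional revenue implied by the predicted ridership
annual_revenue = predicted_annual_trips * revenue_per_trip

# Simple payback period, ignoring discounting, demand growth, etc.
net_annual = annual_revenue - annual_operating_cost
payback_years = station_capital_cost / net_annual if net_annual > 0 else float("inf")

print(f"Predicted annual revenue: ${annual_revenue:,.0f}")
print(f"Years to recoup the capital cost: {payback_years:.1f}")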

What are the key assumptions here?¶

Most important: adding new stations in new areas will not affect the demand for existing stations.

This allows the results from the predictive model for demand, built on existing stations, to translate to new stations.

The key assumption is that the bikeshare is not yet at full capacity, and riders in new areas will not decrease the demand in other areas.

Is this a good assumption?¶

  • Given that the bikeshare is looking to expand, it's a safe bet that they believe the program is not yet at full capacity
  • This is verifiable with existing data — examine trip counts in neighboring stations when a new station opens up.

Typically, this is a pretty safe assumption. But I encourage you to use historical data to verify it!
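
One rough way to run that check uses the trip-level data loaded later in this lecture (all_trips, with its start_time and start_station columns). A minimal sketch, where the neighboring station IDs and the opening date are hypothetical placeholders:

# Hypothetical example: stations near a newly opened station
new_station_open_date = pd.Timestamp("2019-06-01")  # hypothetical opening date
neighbor_station_ids = [3005, 3007]                 # hypothetical neighboring stations

# Monthly trip counts for the neighboring stations
monthly = (
    all_trips.loc[all_trips["start_station"].isin(neighbor_station_ids)]
    .set_index("start_time")
    .groupby("start_station")
    .resample("M")
    .size()
)

# Average monthly trips before vs. after the new station opened
dates = monthly.index.get_level_values("start_time")
print(monthly[dates < new_station_open_date].groupby(level="start_station").mean())
print(monthly[dates >= new_station_open_date].groupby(level="start_station").mean())

If the averages are roughly stable across the opening date, the no-cannibalization assumption looks reasonable for those stations.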

Getting trip data for the Indego bike share¶

  • Available by quarter from: https://www.rideindego.com/about/data/
  • I've pulled historic trip data from 2018 through 2022 and combined into a single CSV file (available in the data folder)

The data page also includes the live status of all of the stations in the system.

API endpoint: http://www.rideindego.com/stations/json

Two important columns:

  • totalDocks: the total number of docks at each station
  • kioskId: the unique identifier for each station
In [42]:
import requests
In [43]:
# API endpoint
API_endpoint = "http://www.rideindego.com/stations/json"

# Get the JSON
stations_geojson = requests.get(API_endpoint).json()

# Convert to a GeoDataFrame
stations = gpd.GeoDataFrame.from_features(stations_geojson, crs="EPSG:4326")
In [44]:
stations.head()
Out[44]:
geometry id name coordinates totalDocks docksAvailable bikesAvailable classicBikesAvailable smartBikesAvailable electricBikesAvailable rewardBikesAvailable rewardDocksAvailable kioskStatus kioskPublicStatus kioskConnectionStatus kioskType addressStreet addressCity addressState addressZipCode bikes closeTime eventEnd eventStart isEventBased isVirtual kioskId notes openTime publicText timeZone trikesAvailable latitude longitude
0 POINT (-75.16374 39.95378) 3004 Municipal Services Building Plaza [-75.16374, 39.95378] 30 12 17 11 0 6 17 12 FullService Active Active 1 1401 John F. Kennedy Blvd. Philadelphia PA 19102 [{'dockNumber': 6, 'isElectric': True, 'isAvai... None None None False False 3004 None None None 0 39.95378 -75.16374
1 POINT (-75.14403 39.94733) 3005 Welcome Park, NPS [-75.14403, 39.94733] 13 7 6 6 0 0 6 7 FullService Active Active 1 191 S. 2nd St. Philadelphia PA 19106 [{'dockNumber': 2, 'isElectric': False, 'isAva... None None None False False 3005 None None None 0 39.94733 -75.14403
2 POINT (-75.20311 39.95220) 3006 40th & Spruce [-75.20311, 39.9522] 17 14 3 1 0 2 3 14 FullService Active Active 1 246 S. 40th St. Philadelphia PA 19104 [{'dockNumber': 1, 'isElectric': False, 'isAva... None None None False False 3006 None None None 0 39.95220 -75.20311
3 POINT (-75.15993 39.94517) 3007 11th & Pine, Kahn Park [-75.15993, 39.94517] 20 4 16 11 0 5 16 4 FullService Active Active 1 328 S. 11th St. Philadelphia PA 19107 [{'dockNumber': 1, 'isElectric': False, 'isAva... None None None False False 3007 None None None 0 39.94517 -75.15993
4 POINT (-75.15067 39.98081) 3008 Temple University Station [-75.15067, 39.98081] 17 12 5 0 0 5 5 12 FullService Active Active 1 1076 Berks Street Philadelphia PA 19122 [{'dockNumber': 5, 'isElectric': True, 'isAvai... None None None False False 3008 None None None 0 39.98081 -75.15067

Let's plot the stations, colored by the number of docks¶

Use the contextily package to add a static basemap underneath the matplotlib plot of the stations.

In [45]:
import contextily as ctx
In [46]:
fig, ax = plt.subplots(figsize=(15, 10))

# stations
stations_3857 = stations.to_crs(epsg=3857)
stations_3857.plot(ax=ax, column='totalDocks', legend=True)

# plot the basemap underneath
ctx.add_basemap(ax=ax, crs=stations_3857.crs, source=ctx.providers.CartoDB.DarkMatter)

ax.set_axis_off()

Load historic trips data¶

In [47]:
all_trips = pd.read_csv("./data/indego-trips-2018-2022.csv.tar.gz")
/var/folders/49/ntrr94q12xd4rq8hqdnx96gm0000gn/T/ipykernel_69101/2512420577.py:1: DtypeWarning: Columns (10,14) have mixed types. Specify dtype option on import or set low_memory=False.
  all_trips = pd.read_csv("./data/indego-trips-2018-2022.csv.tar.gz")
In [48]:
for col in ['start_time', 'end_time']:
    all_trips[col] = pd.to_datetime(all_trips[col], format="%Y-%m-%d %H:%M:%S")
In [49]:
len(all_trips)
Out[49]:
3112619
In [50]:
all_trips.head()
Out[50]:
trip_id duration start_time end_time start_station start_lat start_lon end_station end_lat end_lon bike_id plan_duration trip_route_category passholder_type bike_type
0 223869188 18 2018-01-01 00:24:00 2018-01-01 00:42:00 3124 39.952950 -75.139793 3073 39.961430 -75.152420 3708 30.0 One Way Indego30 NaN
1 223872811 22 2018-01-01 00:48:00 2018-01-01 01:10:00 3026 39.941380 -75.145638 3023 39.950481 -75.172859 11735 30.0 One Way Indego30 NaN
2 223872810 21 2018-01-01 01:03:00 2018-01-01 01:24:00 3045 39.947922 -75.162369 3037 39.954239 -75.161377 5202 30.0 One Way Indego30 NaN
3 223872809 4 2018-01-01 01:05:00 2018-01-01 01:09:00 3115 39.972630 -75.167572 3058 39.967159 -75.170013 5142 30.0 One Way Indego30 NaN
4 223912960 658 2018-01-01 01:14:00 2018-01-01 12:12:00 3045 39.947922 -75.162369 3000 NaN NaN 5196 30.0 One Way Indego30 NaN

Dependent variable: total trips by starting station¶

In [51]:
start_trips = (
    all_trips.groupby("start_station").size().reset_index(name="total_start_trips")
)
In [52]:
start_trips.head()
Out[52]:
start_station total_start_trips
0 3000 552
1 3004 22038
2 3005 16796
3 3006 26149
4 3007 51651
In [53]:
# Now merge in the geometry for each station
bike_data = (
    stations[["geometry", "kioskId"]]
    .merge(start_trips, left_on="kioskId", right_on="start_station")
    .to_crs(epsg=3857)
)

Let's plot it...

In [54]:
fig, ax = plt.subplots(figsize=(10,10))

# stations
bike_data.plot(ax=ax, column='total_start_trips', legend=True)

# plot the basemap underneath
ctx.add_basemap(ax=ax, crs=bike_data.crs, source=ctx.providers.CartoDB.DarkMatter)

ax.set_axis_off()

Trips are clearly concentrated in Center City...

What features to use?¶

There are lots of possible options. Generally speaking, the possible features fall into a few different categories:

  • Internal
    • e.g., the number of docks per station
  • Demographic (Census)
    • e.g., population, income, percent commuting by car, percent with bachelor's degree or higher, etc.
  • Amenities/Disamenities
    • e.g., distance to nearest crime incidents, restaurants, or parks; whether the station is within Center City; etc.
  • Transportation network
    • e.g., distance to nearest bus stop, intersection nodes, nearest subway stop
  • Neighboring stations
    • e.g., average trips of nearest stations, distance to nearest stations

Let's add a few from each category...

1. Internal characteristics¶

Let's use the number of docks per station:

In [55]:
bike_data = bike_data.merge(stations[["kioskId", "totalDocks"]], on="kioskId")
bike_data.head()
Out[55]:
geometry kioskId start_station total_start_trips totalDocks
0 POINT (-8367189.263 4859227.987) 3004 3004 22038 30
1 POINT (-8364995.156 4858291.368) 3005 3005 16796 13
2 POINT (-8371571.911 4858998.543) 3006 3006 26149 17
3 POINT (-8366765.136 4857977.729) 3007 3007 51651 20
4 POINT (-8365734.317 4863154.033) 3008 3008 10317 17

2. Census demographic data¶

We'll try out percent commuting by car first.

In [56]:
import cenpy
In [57]:
acs = cenpy.remote.APIConnection("ACSDT5Y2020")
In [58]:
acs.varslike("B08134").sort_index().iloc[:15]
Out[58]:
label concept predicateType group limit predicateOnly hasGeoCollectionSupport attributes required
B08134_001E Estimate!!Total: MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_001EA,B08134_001M,B08134_001MA NaN
B08134_002E Estimate!!Total:!!Less than 10 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_002EA,B08134_002M,B08134_002MA NaN
B08134_003E Estimate!!Total:!!10 to 14 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_003EA,B08134_003M,B08134_003MA NaN
B08134_004E Estimate!!Total:!!15 to 19 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_004EA,B08134_004M,B08134_004MA NaN
B08134_005E Estimate!!Total:!!20 to 24 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_005EA,B08134_005M,B08134_005MA NaN
B08134_006E Estimate!!Total:!!25 to 29 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_006EA,B08134_006M,B08134_006MA NaN
B08134_007E Estimate!!Total:!!30 to 34 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_007EA,B08134_007M,B08134_007MA NaN
B08134_008E Estimate!!Total:!!35 to 44 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_008EA,B08134_008M,B08134_008MA NaN
B08134_009E Estimate!!Total:!!45 to 59 minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_009EA,B08134_009M,B08134_009MA NaN
B08134_010E Estimate!!Total:!!60 or more minutes MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_010EA,B08134_010M,B08134_010MA NaN
B08134_011E Estimate!!Total:!!Car, truck, or van: MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_011EA,B08134_011M,B08134_011MA NaN
B08134_012E Estimate!!Total:!!Car, truck, or van:!!Less th... MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_012EA,B08134_012M,B08134_012MA NaN
B08134_013E Estimate!!Total:!!Car, truck, or van:!!10 to 1... MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_013EA,B08134_013M,B08134_013MA NaN
B08134_014E Estimate!!Total:!!Car, truck, or van:!!15 to 1... MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_014EA,B08134_014M,B08134_014MA NaN
B08134_015E Estimate!!Total:!!Car, truck, or van:!!20 to 2... MEANS OF TRANSPORTATION TO WORK BY TRAVEL TIME... int B08134 0 NaN NaN B08134_015EA,B08134_015M,B08134_015MA NaN
In [59]:
# Means of Transportation to Work by Travel Time to Work
variables = ["NAME", "B08134_001E", "B08134_011E"]

# Pull data by census tract
census_data = acs.query(
    cols=variables,
    geo_unit="block group:*",
    geo_filter={"state": "42",
                "county": "101", 
                "tract": "*"},
)

# Convert to the data to floats
for col in variables[1:]:
    census_data[col] = census_data[col].astype(float)

# The percent commuting by car
census_data["percent_car"] = census_data["B08134_011E"] / census_data["B08134_001E"]
In [60]:
census_data.head()
Out[60]:
NAME B08134_001E B08134_011E state county tract block group percent_car
0 Block Group 3, Census Tract 1.01, Philadelphia... 0.0 0.0 42 101 000101 3 NaN
1 Block Group 1, Census Tract 1.02, Philadelphia... 1023.0 533.0 42 101 000102 1 0.521017
2 Block Group 4, Census Tract 1.02, Philadelphia... 100.0 62.0 42 101 000102 4 0.620000
3 Block Group 2, Census Tract 2, Philadelphia Co... 461.0 185.0 42 101 000200 2 0.401302
4 Block Group 2, Census Tract 5, Philadelphia Co... 0.0 0.0 42 101 000500 2 NaN

Merge with block group geometries for Philadelphia:

In [61]:
acs.set_mapservice("tigerWMS_Census2020")
Out[61]:
Connection to American Community Survey: 5-Year Estimates: Detailed Tables 5-Year(ID: https://api.census.gov/data/id/ACSDT5Y2020)
With MapServer: Census 2020 WMS
In [62]:
block_group_layer = acs.mapservice.layers[8]

block_group_layer
Out[62]:
(ESRILayer) Census Block Groups
In [63]:
block_group_layer._baseurl
Out[63]:
'https://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/tigerWMS_Census2020/MapServer/8'
In [64]:
# Use SQL to return geometries only for Philadelphia County in PA
where_clause = "STATE = '42' AND COUNTY = '101'"

# Query for tracts
block_groups = esri2gpd.get(block_group_layer._baseurl, where=where_clause)
In [65]:
census_data = block_groups.merge(
    census_data,
    left_on=["STATE", "COUNTY", "TRACT", "BLKGRP"],
    right_on=["state", "county", "tract", "block group"],
)

Finally, let's merge the census data into our dataframe of features by spatially joining the stations and the block groups:

Each station gets the census data associated with the block group the station is within.

In [66]:
bike_data = gpd.sjoin(
    bike_data,
    census_data.to_crs(bike_data.crs)[["geometry", "percent_car"]],
    predicate="within",
).drop(labels=["index_right"], axis=1)
In [67]:
bike_data.head()
Out[67]:
geometry kioskId start_station total_start_trips totalDocks percent_car
0 POINT (-8367189.263 4859227.987) 3004 3004 22038 30 NaN
1 POINT (-8364995.156 4858291.368) 3005 3005 16796 13 0.507079
9 POINT (-8365532.829 4858294.272) 3015 3015 22997 11 0.507079
2 POINT (-8371571.911 4858998.543) 3006 3006 26149 17 0.131944
101 POINT (-8371183.406 4858854.781) 3159 3159 20731 23 0.131944

"Impute" missing values with the median value¶

Note: scikit-learn contains preprocessing transformers for more advanced imputation methods. The documentation contains a good description of the options.

In particular, the SimpleImputer object (see the docs) can fill values based on the median, mean, most frequent value, or a constant value.
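
For reference, here is a minimal sketch of how the same median fill could be done with SimpleImputer instead of the manual approach used in the next cell (the column choice mirrors the code below):

from sklearn.impute import SimpleImputer

# Fill missing percent_car values with the column median
imputer = SimpleImputer(strategy="median")
bike_data[["percent_car"]] = imputer.fit_transform(bike_data[["percent_car"]])

In a full pipeline, the imputer would typically be added as a preprocessing step (e.g., inside the ColumnTransformer) so that the median is learned from the training set only.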

In [68]:
missing = bike_data['percent_car'].isnull()
bike_data.loc[missing, 'percent_car'] = bike_data['percent_car'].median()

3. Amenities/disamenities¶

Let's add two new features:

  1. Distances to the nearest 10 restaurants from Open Street Map
  2. Whether the station is located within Center City

Restaurants¶

Search https://wiki.openstreetmap.org/wiki/Map_Features for the OSM identifier for restaurants

In [69]:
restaurants = ox.geometries_from_polygon(
    city_limits_outline, tags={"amenity": "restaurant"}
).to_crs(epsg=3857)

restaurants.head()
Out[69]:
amenity created_by name source wheelchair geometry cuisine addr:housenumber addr:street craft microbrewery outdoor_seating addr:city addr:postcode phone diet:vegetarian opening_hours website description addr:state addr:housename designation diet:vegan email fax bar takeaway smoking alt_name short_name name:zh brand brand:wikidata brand:wikipedia official_name twitter drive_in internet_access not:brand:wikidata payment:cash delivery payment:bitcoin diet:halal addr:country air_conditioning capacity note opening_hours:covid19 contact:phone operator diet:pescetarian lgbtq payment:credit_cards payment:debit_cards reservation source:cuisine diet:kosher wheelchair:description facebook amenity_1 wikidata name:en check_date:opening_hours name:ca internet_access:fee level toilets start_date payment:american_express payment:discover_card payment:mastercard payment:visa internet_access:description internet_access:ssid addr:unit nodes building historic ship:type wikipedia tourism building:levels leisure area image building:material name:ja name:zn nonsquare kitchen_hours
element_type osmid
node 333786044 restaurant Potlatch 0.10f Sam's Morning Glory knowledge limited POINT (-8366653.605 4857351.702) NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
566683522 restaurant NaN Spring Garden Pizza & Restaurant NaN NaN POINT (-8366499.795 4860429.601) pizza NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
596230881 restaurant NaN NaN NaN NaN POINT (-8369759.051 4873925.895) NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
596245658 restaurant NaN Earth Bread & Brewery NaN NaN POINT (-8370151.987 4874548.737) NaN 7136 Germantown Ave brewery yes NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
596245661 restaurant NaN Cresheim Valley Grain Exchange NaN NaN POINT (-8370191.995 4874609.053) american 7152 Germantown Ave NaN NaN yes NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

Get x/y values for the stations:

In [70]:
stationsXY = get_xy_from_geometry(bike_data)
In [71]:
# STEP 1: x/y coordinates of restaurants (in EPSG=3857)
restsXY = get_xy_from_geometry(restaurants.to_crs(epsg=3857))

# STEP 2: Initialize the algorithm
nbrs = NearestNeighbors(n_neighbors=10)

# STEP 3: Fit the algorithm on the "neighbors" dataset
nbrs.fit(restsXY)

# STEP 4: Get distances for stations to neighbors
restsDists, restsIndices = nbrs.kneighbors(stationsXY)

# STEP 5: add back to the original dataset
bike_data['logDistRests'] = np.log10(restsDists.mean(axis=1))
In [72]:
bike_data.head()
Out[72]:
geometry kioskId start_station total_start_trips totalDocks percent_car logDistRests
0 POINT (-8367189.263 4859227.987) 3004 3004 22038 30 0.385340 2.593366
1 POINT (-8364995.156 4858291.368) 3005 3005 16796 13 0.507079 2.159451
9 POINT (-8365532.829 4858294.272) 3015 3015 22997 11 0.507079 2.627670
2 POINT (-8371571.911 4858998.543) 3006 3006 26149 17 0.131944 2.426370
101 POINT (-8371183.406 4858854.781) 3159 3159 20731 23 0.131944 2.642763
In [73]:
fig, ax = plt.subplots(figsize=(10,10))

# stations
bike_data.plot(ax=ax, column='logDistRests', legend=True)

# plot the basemap underneath
ctx.add_basemap(ax=ax, crs=bike_data.crs, source=ctx.providers.CartoDB.DarkMatter)

ax.set_axis_off()

Within the Center City business district¶

Available from OpenDataPhilly

In [74]:
url = "http://data.phl.opendata.arcgis.com/datasets/95366b115d93443eae4cc6f498cb3ca3_0.geojson"
ccbd = gpd.read_file(url).to_crs(epsg=3857)
In [75]:
ccbd_geo = ccbd.squeeze().geometry

ccbd_geo
Out[75]:
(shapely Polygon of the Center City business district, rendered as an image in the notebook)
In [76]:
# Add the new feature: 1 if within CC and 0 otherwise
bike_data['within_centerCity'] = bike_data.geometry.within(ccbd_geo).astype(int)

4. Transportation Network¶

  • Let's add a feature that calculates the average distance to the nearest street intersections
  • We can use the osmnx package
In [77]:
xmin, ymin, xmax, ymax = bike_data.to_crs(epsg=4326).total_bounds

Create the graph object from the bounding box:

In [78]:
G = ox.graph_from_bbox(ymax, ymin, xmax, xmin, network_type='bike')

G
Out[78]:
<networkx.classes.multidigraph.MultiDiGraph at 0x2c8e34a30>

Let's plot the graph using the built-in plotting from osmnx:

In [79]:
ox.plot_graph(G, figsize=(12, 12), node_size=0);

Now extract the nodes (intersections) from the graph into a GeoDataFrame:

In [80]:
intersections = ox.graph_to_gdfs(G, nodes=True, edges=False).to_crs(epsg=3857)
In [81]:
intersections.head()
Out[81]:
y x street_count highway geometry
osmid
109727072 39.920114 -75.176365 3 NaN POINT (-8368594.627 4854340.318)
109727084 39.918876 -75.176639 3 NaN POINT (-8368625.162 4854160.598)
109727165 39.917601 -75.176933 3 NaN POINT (-8368657.901 4853975.496)
109727173 39.917476 -75.176959 4 NaN POINT (-8368660.784 4853957.411)
109727183 39.917126 -75.177038 3 NaN POINT (-8368669.590 4853906.554)

Now, compute the average distance to the 3 nearest intersections:

In [82]:
# STEP 1: x/y coordinates of intersections (in EPSG=3857)
intersectionsXY = get_xy_from_geometry(intersections.to_crs(epsg=3857))

# STEP 2: Initialize the algorithm
nbrs = NearestNeighbors(n_neighbors=3)

# STEP 3: Fit the algorithm on the "neighbors" dataset
nbrs.fit(intersectionsXY)

# STEP 4: Get distances for stations to neighbors
interDists, interIndices = nbrs.kneighbors(stationsXY)

# STEP 5: add back to the original dataset
bike_data['logIntersectionDists'] = np.log10(interDists.mean(axis=1))

Let's plot the stations, coloring by the new feature (distance to intersections):

In [83]:
fig, ax = plt.subplots(figsize=(10,10))

# stations
bike_data.plot(ax=ax, column='logIntersectionDists', legend=True)

# plot the basemap underneath
ctx.add_basemap(ax=ax, crs=bike_data.crs, source=ctx.providers.CartoDB.DarkMatter)

ax.set_axis_off()

5. Neighboring Stations¶

  • We need to include features that encode the fact that demand at a specific station is likely related to demand at neighboring stations
  • This idea is known as spatial lag

We will add two new features:

  1. The average distance to the nearest 5 stations
  2. The average trip total for the nearest 5 stations

First, find the nearest 5 stations:

In [84]:
N = 5
k = N + 1
nbrs = NearestNeighbors(n_neighbors=k)
nbrs.fit(stationsXY)

stationDists, stationIndices = nbrs.kneighbors(stationsXY)

Notes¶

  • We are matching the stations to themselves to find the nearest neighbors
  • The closest match will always be the same station (distance of 0)
  • So we fit for $N+1$ neighbors and remove the closest neighbor (the station itself)

The log of the distances to the 5 nearest stations:

In [85]:
bike_data["logStationDists"] = np.log10(stationDists[:, 1:].mean(axis=1))
In [86]:
fig, ax = plt.subplots(figsize=(10,10))

# stations
bike_data.plot(ax=ax, column='logStationDists', legend=True)

# plot the basemap underneath
ctx.add_basemap(ax=ax, crs=bike_data.crs, source=ctx.providers.CartoDB.DarkMatter)

ax.set_axis_off()

Now, let's add the trip counts of the 5 neighboring stations:

Use the indices returned by the NearestNeighbors() algorithm to identify which stations are the 5 neighbors in the original data

In [87]:
stationIndices[:, 1:]
Out[87]:
array([[ 36,  14,  18,  33, 115],
       [ 46,  44,   2,  45,  22],
       [ 45,   1,  44,  46,  32],
       [101,   4, 106,  74,  20],
       [101,   3,  24,  37, 106],
       [ 84,  50,  43,   8,  31],
       [ 27,  13,  55,  64,  38],
       [ 24,  19,  71, 106, 107],
       [ 62,  43,   9, 119, 115],
       [ 62,   8,  77,  65, 119],
       [ 28,  30,  65,  93,  63],
       [ 54, 117,  58, 110,  52],
       [ 64,  38,  75,  39,  73],
       [  6,  81,  27,  78, 111],
       [ 33,  36,   0,  31,  48],
       [ 45,  48,  32, 113,  44],
       [ 37,  29,  17,  24, 108],
       [ 37,  16,   4, 101,  24],
       [ 76, 115,  60,   0, 117],
       [107,  61,  59,   7, 110],
       [101,   3,  74,   4,  91],
       [ 68,  25,  34,  67,  77],
       [ 23,  46,   1,  69,   2],
       [ 22,  69,  26,  25,  82],
       [106,   7,   4,  37,  16],
       [ 26,  21,  84,  68,  69],
       [ 25,  23,  69,  84,  21],
       [  6,  55,  13,  38,  64],
       [ 30,  10,  29, 108,  65],
       [ 30,  28, 108,  16,  10],
       [ 28,  29,  10, 108,  65],
       [ 32,  33,  50,  14,  48],
       [ 31,  48,  33,  45,  50],
       [ 14,  31,  36,  32,  48],
       [ 68,  21,  67, 109, 100],
       [ 86,  66,  71, 106,   7],
       [  0,  14,  33,  31,  48],
       [ 17,  16,   4,  24, 101],
       [ 39,  75,  12,  55,  94],
       [ 38,  94,  75,  55,  56],
       [ 57,  58,  56,  54,  75],
       [111,  78, 102,  89,  70],
       [100,  51, 105,  67,  63],
       [  8,   5,  50,  33,  31],
       [  1,  45, 113,   2,  46],
       [ 15,   2,  44,  32,  48],
       [  1,  22,  44,   2,  23],
       [113,  44,  70,   1,  15],
       [ 32,  15,  31,  33,  14],
       [ 85,  56,  53,  52,  94],
       [ 31,   5,  43,  32,  33],
       [ 42,  93,  63, 100,  10],
       [ 54,  11,  58,  53,  49],
       [ 85,  52,  49, 110,  71],
       [ 58,  11, 117,  52, 110],
       [ 27,  38,  39,   6,  64],
       [ 49,  40,  52,  94,  75],
       [ 40, 103,  75,  73,  58],
       [ 54, 117,  11,  52,  40],
       [107,  61,  60, 110, 112],
       [ 76, 112,  59, 117,  18],
       [108,  59, 107,  19, 112],
       [119,   8,  65,   9, 115],
       [ 10,  77,  93,  67,   9],
       [ 12,  38,  78,  55,  75],
       [ 62, 119,  10,  28,   9],
       [ 35,  86,  97, 106,  71],
       [ 34,  77,  21,  68,  63],
       [ 34,  21,  67,  25, 109],
       [ 82,  23,  26,  72,  25],
       [ 89,  47, 113,  41,  78],
       [ 35,  86,   7,  53,  19],
       [ 82,  69,  23,  22,  83],
       [103,  57,  15,  12,  48],
       [ 96,   3,  91,  20, 101],
       [ 38,  57,  39,  12,  40],
       [ 60,  18, 112, 115, 117],
       [  9,  67,  63,  21,  84],
       [ 41, 111,  64,  89,  70],
       [ 87,  80,  88,  98,  99],
       [ 87,  79,  88,  99,  98],
       [111, 102,  13,  41,  78],
       [ 69,  72,  83,  23,  26],
       [ 82, 104,  72,  69,  68],
       [  5,  25,  50,  26,  43],
       [ 53,  49,  52,  56,  71],
       [ 35,  71,  66,  92,  98],
       [ 79,  80,  88,  98,  99],
       [ 79,  98,  87,  92,  80],
       [ 70,  41,  47,  78, 113],
       [ 95,  97,  92,  66,  96],
       [ 96,  74,  20,   3, 101],
       [ 98,  86,  88,  66,  97],
       [ 10,  63,  51,  30,  28],
       [ 39,  56,  38,  75,  49],
       [ 90,  97,  92,  96,  66],
       [ 74,  91,  97,   3,  20],
       [ 66,  96,  92,  35,  74],
       [ 88,  92,  85,  86,  49],
       [ 94,  39,  55,  38,  27],
       [105, 109,  42,  67,  34],
       [  3,   4,  20,  17,  37],
       [111,  41,  81,  78,  89],
       [ 57,  73,  36,  48,  14],
       [109,  83, 105,  34, 100],
       [100, 109,  42, 104,  34],
       [ 24,   4,   3,   7,  35],
       [ 19,  59,  61, 110, 108],
       [ 61,  28,  29, 107,  59],
       [100, 105, 104,  34,  68],
       [ 11, 107,  59, 117,  54],
       [102,  41,  78,  81,  89],
       [ 60,  76, 115, 119,  59],
       [ 47,  44,  15,  45,   1],
       [116, 118, 105, 104, 109],
       [119,  18,  76, 112,  62],
       [114, 118, 105, 104, 100],
       [ 11,  58,  54,  60,  76],
       [114, 116, 105, 104, 100],
       [115,  62,   8, 112,  65]])
In [88]:
# the total trips for the stations from original data frame
total_start_trips = bike_data['total_start_trips'].values
In [89]:
total_start_trips
Out[89]:
array([22038, 16796, 22997, 26149, 20731, 51651, 10317, 36384, 70445,
       34602, 44831,  9205,  5406,  8067, 35035, 13792, 41447,  5442,
       45560, 43278, 16351, 24620, 24042, 46069, 32689, 28518, 25579,
       19193, 57827, 21751, 23696, 30286, 27938, 32100, 26794, 20140,
       30744, 29600, 20116, 12025, 31605, 17661, 22977, 57416, 42843,
       32433, 23403, 18946, 26421, 28939, 54614, 26525, 58090, 61954,
       23519, 26019, 34558, 29497, 18531, 35370, 31491, 20198, 34903,
       32024,  8247, 44914,  6431, 28959, 20498, 23698, 11075, 15597,
       16228, 20370, 19751, 23282, 35688, 26645, 17962,  5164,  5675,
       12089, 15401, 29556, 51814, 50119,  6821,  7546,  8213, 12655,
        7702, 12618,  3786, 29483, 10805,  3879, 11766,  7637, 19547,
        6547, 12560,  8769,  8539, 32930, 16940, 19532, 25233, 27312,
       37229, 17482, 19921, 11978, 48132, 22091,  3281, 27782,  3491,
       14166,  7890, 49466])
In [90]:
# get the trips for the 5 nearest neighbors (ignoring first match)
neighboring_trips = total_start_trips[stationIndices[:, 1:]]
In [91]:
neighboring_trips
Out[91]:
array([[30744, 35035, 45560, 32100, 27782],
       [23403, 42843, 22997, 32433, 24042],
       [32433, 16796, 42843, 23403, 27938],
       [ 8769, 20731, 25233, 19751, 16351],
       [ 8769, 26149, 32689, 29600, 25233],
       [51814, 54614, 57416, 70445, 30286],
       [19193,  8067, 26019,  8247, 20116],
       [32689, 43278, 15597, 25233, 27312],
       [34903, 57416, 34602, 49466, 27782],
       [34903, 70445, 26645, 44914, 49466],
       [57827, 23696, 44914, 29483, 32024],
       [23519, 14166, 18531, 19921, 58090],
       [ 8247, 20116, 23282, 12025, 20370],
       [10317, 12089, 19193, 17962, 11978],
       [32100, 30744, 22038, 30286, 26421],
       [32433, 26421, 27938, 22091, 42843],
       [29600, 21751,  5442, 32689, 37229],
       [29600, 41447, 20731,  8769, 32689],
       [35688, 27782, 31491, 22038, 14166],
       [27312, 20198, 35370, 36384, 19921],
       [ 8769, 26149, 19751, 20731, 12618],
       [20498, 28518, 26794, 28959, 26645],
       [46069, 23403, 16796, 23698, 22997],
       [24042, 23698, 25579, 28518, 15401],
       [25233, 36384, 20731, 29600, 41447],
       [25579, 24620, 51814, 20498, 23698],
       [28518, 46069, 23698, 51814, 24620],
       [10317, 26019,  8067, 20116,  8247],
       [23696, 44831, 21751, 37229, 44914],
       [23696, 57827, 37229, 41447, 44831],
       [57827, 21751, 44831, 37229, 44914],
       [27938, 32100, 54614, 35035, 26421],
       [30286, 26421, 32100, 32433, 54614],
       [35035, 30286, 30744, 27938, 26421],
       [20498, 24620, 28959, 17482, 12560],
       [ 6821,  6431, 15597, 25233, 36384],
       [22038, 35035, 32100, 30286, 26421],
       [ 5442, 41447, 20731, 32689,  8769],
       [12025, 23282,  5406, 26019, 10805],
       [20116, 10805, 23282, 26019, 34558],
       [29497, 18531, 34558, 23519, 23282],
       [11978, 17962,  8539, 12655, 11075],
       [12560, 26525, 19532, 28959, 32024],
       [70445, 51651, 54614, 32100, 30286],
       [16796, 32433, 22091, 22997, 23403],
       [13792, 22997, 42843, 27938, 26421],
       [16796, 24042, 42843, 22997, 46069],
       [22091, 42843, 11075, 16796, 13792],
       [27938, 13792, 30286, 32100, 35035],
       [50119, 34558, 61954, 58090, 10805],
       [30286, 51651, 57416, 27938, 32100],
       [22977, 29483, 32024, 12560, 44831],
       [23519,  9205, 18531, 61954, 28939],
       [50119, 58090, 28939, 19921, 15597],
       [18531,  9205, 14166, 58090, 19921],
       [19193, 20116, 12025, 10317,  8247],
       [28939, 31605, 58090, 10805, 23282],
       [31605, 32930, 23282, 20370, 18531],
       [23519, 14166,  9205, 58090, 31605],
       [27312, 20198, 31491, 19921, 48132],
       [35688, 48132, 35370, 14166, 45560],
       [37229, 35370, 27312, 43278, 48132],
       [49466, 70445, 44914, 34602, 27782],
       [44831, 26645, 29483, 28959, 34602],
       [ 5406, 20116, 17962, 26019, 23282],
       [34903, 49466, 44831, 57827, 34602],
       [20140,  6821,  7637, 25233, 15597],
       [26794, 26645, 24620, 20498, 32024],
       [26794, 24620, 28959, 28518, 17482],
       [15401, 46069, 25579, 16228, 28518],
       [12655, 18946, 22091, 17661, 17962],
       [20140,  6821, 36384, 61954, 43278],
       [15401, 23698, 46069, 24042, 29556],
       [32930, 29497, 13792,  5406, 26421],
       [11766, 26149, 12618, 16351,  8769],
       [20116, 29497, 12025,  5406, 31605],
       [31491, 45560, 48132, 27782, 14166],
       [34602, 28959, 32024, 24620, 51814],
       [17661, 11978,  8247, 12655, 11075],
       [ 7546,  5675,  8213, 19547,  6547],
       [ 7546,  5164,  8213,  6547, 19547],
       [11978,  8539,  8067, 17661, 17962],
       [23698, 16228, 29556, 46069, 25579],
       [15401, 16940, 16228, 23698, 20498],
       [51651, 28518, 54614, 25579, 57416],
       [61954, 28939, 58090, 34558, 15597],
       [20140, 15597,  6431,  3786, 19547],
       [ 5164,  5675,  8213, 19547,  6547],
       [ 5164, 19547,  7546,  3786,  5675],
       [11075, 17661, 18946, 17962, 22091],
       [ 3879,  7637,  3786,  6431, 11766],
       [11766, 19751, 16351, 26149,  8769],
       [19547,  6821,  8213,  6431,  7637],
       [44831, 32024, 26525, 23696, 57827],
       [12025, 34558, 20116, 23282, 28939],
       [ 7702,  7637,  3786, 11766,  6431],
       [19751, 12618,  7637, 26149, 16351],
       [ 6431, 11766,  3786, 20140, 19751],
       [ 8213,  3786, 50119,  6821, 28939],
       [10805, 12025, 26019, 20116, 19193],
       [19532, 17482, 22977, 28959, 26794],
       [26149, 20731, 16351,  5442, 29600],
       [11978, 17661, 12089, 17962, 12655],
       [29497, 20370, 30744, 26421, 35035],
       [17482, 29556, 19532, 26794, 12560],
       [12560, 17482, 22977, 16940, 26794],
       [32689, 20731, 26149, 36384, 20140],
       [43278, 35370, 20198, 19921, 37229],
       [20198, 57827, 21751, 27312, 35370],
       [12560, 19532, 16940, 26794, 20498],
       [ 9205, 27312, 35370, 14166, 23519],
       [ 8539, 17661, 17962, 12089, 12655],
       [31491, 35688, 27782, 49466, 35370],
       [18946, 42843, 13792, 32433, 16796],
       [ 3491,  7890, 19532, 16940, 17482],
       [49466, 45560, 35688, 48132, 34903],
       [ 3281,  7890, 19532, 16940, 12560],
       [ 9205, 18531, 23519, 31491, 35688],
       [ 3281,  3491, 19532, 16940, 12560],
       [27782, 34903, 70445, 48132, 44914]])
In [92]:
# Add to features
bike_data['laggedTrips'] = neighboring_trips.mean(axis=1)

Is there a correlation between trip counts and the spatial lag?¶

Yes!

We can use seaborn to make a quick plot:

In [93]:
sns.regplot(x=bike_data['laggedTrips'], y=bike_data['total_start_trips']);

Let's look at the correlations of all of our features¶

Again, use seaborn to investigate.

Remember: we don't want to include multiple features that are highly correlated in our model.

In [94]:
feature_cols = [
    "total_start_trips",
    "totalDocks",
    "percent_car",
    "logDistRests",
    "within_centerCity",
    "logIntersectionDists",
    "logStationDists",
    "laggedTrips",
]


sns.heatmap(
    bike_data[feature_cols].corr(), cmap="coolwarm", annot=True, vmin=-1, vmax=1
);

Let's fit a model!¶

Just as before, the plan is to:

  • Fit a random forest model, using cross validation to optimize (some) hyperparameters
  • Compare to a baseline linear regression model

Perform our test/train split¶

We'll use a 60%/40% split, due to the relatively small number of stations.

In [95]:
# Remove unnecessary columns
bike_features = bike_data.drop(labels=["geometry", "kioskId", "start_station"], axis=1)
In [96]:
# Split the data 
train_set, test_set = train_test_split(
    bike_features,
    test_size=0.4,
    random_state=12345
)

# the target labels: log of total start trips
y_train = np.log(train_set["total_start_trips"])
y_test = np.log(test_set["total_start_trips"])
In [97]:
# Extract out only the features we want to use
feature_cols = [
    "totalDocks",
    "percent_car",
    "logDistRests",
    "within_centerCity",
    "logIntersectionDists",
    "logStationDists",
    "laggedTrips",
]

train_set = train_set[feature_cols]
test_set = test_set[feature_cols]

Random forest results¶

Let's run a simple grid search to try to optimize our hyperparameters.

In [98]:
# Setup the pipeline with a standard scaler
pipe = make_pipeline(
    StandardScaler(), RandomForestRegressor(random_state=42)
)

Try out a few different values for two of the main parameters for random forests: n_estimators and max_depth:

In [99]:
model_name = "randomforestregressor"
param_grid = {
    f"{model_name}__n_estimators": [5, 10, 25, 50, 100, 200, 300, 500],
    f"{model_name}__max_depth": [None, 2, 5, 7, 9, 13],
}

param_grid
Out[99]:
{'randomforestregressor__n_estimators': [5, 10, 25, 50, 100, 200, 300, 500],
 'randomforestregressor__max_depth': [None, 2, 5, 7, 9, 13]}

Important: just like last week, we will need to prefix the parameter name with the name of the pipeline step, in this case, "randomforestregressor".

In [100]:
# Create the grid and use 10-fold CV
grid = GridSearchCV(pipe, param_grid, cv=10)

# Run the search
grid.fit(train_set, y_train);

Evaluate the best estimator on the test set:¶

In [101]:
grid.best_params_
Out[101]:
{'randomforestregressor__max_depth': 2,
 'randomforestregressor__n_estimators': 25}
In [102]:
# Evaluate the best random forest model
best_random = grid.best_estimator_
grid.score(test_set, y_test)
Out[102]:
0.7372172302891389

Evaluate a linear model (baseline) on the test set¶

In [103]:
# Set up a linear pipeline
linear = make_pipeline(StandardScaler(), LinearRegression())

# Fit on train set
linear.fit(train_set, y_train)

# Evaluate on test set
linear.score(test_set, y_test)
Out[103]:
0.6867355901513779

Only a modest improvement over the baseline model!¶

Which features were the most important?¶

From our earlier correlation analysis, we should expect the most important features to be:

  • the spatially lagged trip counts
  • the distance to the nearest stations
In [104]:
# The best model
regressor = grid.best_estimator_["randomforestregressor"]

# Create the dataframe with importances
importance = pd.DataFrame(
    {"Feature": train_set.columns, "Importance": regressor.feature_importances_}
)

# Sort importance in descending order and get the top
importance = importance.sort_values("Importance", ascending=False)

# Plot
importance.hvplot.barh(x="Feature", y="Importance", flip_yaxis=True, height=300)
Out[104]:
In [105]:
importance
Out[105]:
Feature Importance
6 laggedTrips 0.510225
2 logDistRests 0.311333
5 logStationDists 0.152421
1 percent_car 0.017618
0 totalDocks 0.005724
4 logIntersectionDists 0.002678
3 within_centerCity 0.000000

Let's analyze the spatial structure of the predictions visually¶

We'll plot the predicted and actual trip values

Use the test set index (test_set.index) to get the data from the original data frame (bike_data).

This will ensure we will have geometry info (not used in the modeling) for our test data set:

In [106]:
test_set.index
Out[106]:
Int64Index([101,  16, 116,  74,  21,  36,  62,  84,  82,  58,  14, 110,  33,
             52,  24, 102,  42,  59,  41,  54,  39,  56,  92,  22,  64,  78,
             38,  96,  31,  68,  55,  11,  91,  83,  17,  57,  98, 107,  80,
             71,   6,  29,   9,   4,  85,  76,  43,  27],
           dtype='int64')
In [107]:
# Extract the test data from the original dataset
# This will include the geometry data
X = bike_data.loc[test_set.index]
In [108]:
# test data extracted from our original data frame
X.head()
Out[108]:
geometry kioskId start_station total_start_trips totalDocks percent_car logDistRests within_centerCity logIntersectionDists logStationDists laggedTrips
101 POINT (-8371183.406 4858854.781) 3159 3159 20731 23 0.131944 2.642763 0 1.642285 2.786577 24488.0
16 POINT (-8369358.880 4859364.493) 3022 3022 43278 21 0.231481 2.767425 0 1.436911 2.774140 27837.0
116 POINT (-8366648.250 4858924.483) 3185 3185 32100 21 0.350725 2.221697 1 1.836466 2.610841 30084.8
74 POINT (-8366722.834 4857655.389) 3101 3101 51814 18 0.359375 2.604317 0 1.700408 2.791618 43555.6
21 POINT (-8370645.733 4859230.891) 3029 3029 32689 17 0.330544 2.315135 0 1.854447 2.865545 30679.0
In [109]:
test_set.head()
Out[109]:
totalDocks percent_car logDistRests within_centerCity logIntersectionDists logStationDists laggedTrips
101 23 0.131944 2.642763 0 1.642285 2.786577 24488.0
16 21 0.231481 2.767425 0 1.436911 2.774140 27837.0
116 21 0.350725 2.221697 1 1.836466 2.610841 30084.8
74 18 0.359375 2.604317 0 1.700408 2.791618 43555.6
21 17 0.330544 2.315135 0 1.854447 2.865545 30679.0

The data frame indices line up!¶

Now, make our predictions, and convert them from log to raw counts:

In [110]:
# Predictions for log of total trip counts
log_predictions = grid.best_estimator_.predict(test_set)

# Convert the predicted test values from log
X['prediction'] = np.exp(log_predictions)

Let's make the plot with side-by-side panels for actual and predicted:

In [111]:
# Plot two columns
fig, axs = plt.subplots(ncols=2, figsize=(10,10))

# Predicted values
X.plot(ax=axs[0], column='prediction')
ctx.add_basemap(ax=axs[0], crs=X.crs, source=ctx.providers.CartoDB.DarkMatter)
axs[0].set_title("Predicted Trip Counts")

# Actual values
X.plot(ax=axs[1], column='total_start_trips')
ctx.add_basemap(ax=axs[1], crs=X.crs, source=ctx.providers.CartoDB.DarkMatter)
axs[1].set_title("Actual Trip Counts")


axs[0].set_axis_off()
axs[1].set_axis_off()

The results are... not great¶

The good¶

We are capturing the general trend of higher ridership within Center City vs. outside Center City

The bad¶

The values of the trip counts for those stations within Center City do not seem to be well-represented

Can we improve the model?¶

Yes!

This is a classic example of underfitting, for a few reasons:

  • The analysis only used seven features. We should be able to improve the fit by adding more features, particularly features that capture information of the stations within Center City (where the model seems to be struggling).
  • The correlation analysis did not indicate much correlation between features, and the features all had a pretty large correlation with the total trip counts. So, in some sense, the features all seem to be important, and we just need to add more.

Features to investigate:¶

  • Additional census demographic data:
    • e.g., population, income, percent with bachelor's degree or higher
  • Amenities / disamenities
    • Lots of options from OpenDataPhilly and OpenStreetMap
    • Criminal incidents, distance to parks, restaurants, new construction permits, etc.
  • Transportation network
    • Distance to the nearest bus station is a good place to start (see https://wiki.openstreetmap.org/wiki/Map_Features for the amenity tag; a sketch follows this list)
  • Changing $k$ values for distance-based features
    • Experiment with different values of $k$ to see if they improve the model
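
For instance, here is a minimal sketch of the bus-stop feature mentioned above, reusing the nearest-neighbors pattern from earlier in the lecture. The OSM tag used here ("highway"="bus_stop") and the new column name are our assumptions (bus stations proper are tagged "amenity"="bus_station"):

# Bus stops within the city limits from OpenStreetMap
bus_stops = ox.geometries_from_polygon(
    city_limits_outline, tags={"highway": "bus_stop"}
).to_crs(epsg=3857)

# x/y coordinates for the bus stops
busXY = get_xy_from_geometry(bus_stops)

# Distance from each station to its nearest bus stop
nbrs = NearestNeighbors(n_neighbors=1)
nbrs.fit(busXY)
busDists, _ = nbrs.kneighbors(stationsXY)

# Guard against zero distances before taking the log
busDists[busDists == 0] = 1e-5

# Add the new feature
bike_data["logDistBusStop"] = np.log10(busDists.mean(axis=1))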

Other options for bikeshare data¶

When trying to improve the accuracy of the model, another option is incorporating additional data. In this case, we can look to other cities and include trip data for these cities. Some good places to start:

  • Boston
  • Chicago
  • Washington, D.C.

That's it!¶

On Wednesday we start our introduction to web browsers and dashboarding!

In [ ]: