Waze Project

Course 3 - Go Beyond the Numbers: Translate Data into Insights

Your team is still in the early stages of their user churn project. So far, you’ve completed a project proposal and used Python to inspect and organize Waze’s user data.

You check your inbox and notice a new message from Chidi Ga, your team’s Senior Data Analyst. Chidi is pleased with the work you have already completed and requests your assistance with exploratory data analysis (EDA) and further data visualization. Harriet Hadzic, Waze's Director of Data Analysis, will want to review a Python notebook that shows your data exploration and visualization.

A notebook was structured and prepared to help you in this project. Please complete the following questions and prepare an executive summary.

Course 3 End-of-course project: Exploratory data analysis

In this activity, you will examine the provided data and prepare it for analysis.

The purpose of this project is to conduct exploratory data analysis (EDA) on a provided dataset.

The goal is to continue the examination of the data that you began in the previous Course, adding relevant visualizations that help communicate the story that the data tells.

This activity has 4 parts:

Part 1: Imports, links, and loading

Part 2: Data Exploration

  • Data cleaning

Part 3: Building visualizations

Part 4: Evaluating and sharing results


Follow the instructions and answer the questions below to complete the activity. Then, you will complete an executive summary using the questions listed on the PACE Strategy Document.

Be sure to complete this activity before moving on. The next course item will provide you with a completed exemplar to compare to your own work.

Visualize a story in Python

PACE stages

Throughout these project notebooks, you'll see references to the problem-solving framework PACE. The following notebook components are labeled with the respective PACE stage: Plan, Analyze, Construct, and Execute.

PACE: Plan

Consider the questions in your PACE Strategy Document to reflect on the Plan stage.

Task 1. Imports and data loading

For EDA of the data, import the data and packages that will be most helpful, such as pandas, numpy, and matplotlib.

In [1]:
### YOUR CODE HERE ###

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

Read in the data and store it as a dataframe object called df.

Note: As shown in this cell, the dataset has been automatically loaded in for you. You do not need to download the .csv file, or provide more code, in order to access the dataset and proceed with this lab. Please continue with this activity by completing the following instructions.

In [2]:
# Load the dataset into a dataframe
df = pd.read_csv('waze_dataset.csv')

PACE: Analyze

Consider the questions in your PACE Strategy Document and those below where applicable to complete your code:

  1. Does the data need to be restructured or converted into usable formats?

  2. Are there any variables that have missing data?

==> ENTER YOUR RESPONSES TO QUESTIONS 1-2 HERE
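
To support your answer to question 2, a quick missing-value check like the sketch below (an optional aside, assuming `df`, `pandas`, and the other imports from the cells above) is often enough; the info() output later in this notebook confirms that only the label column has missing values (700 of them).

# Count missing values per column; in this dataset only `label` has nulls
print(df.isna().sum())

# Inspect data types to judge whether any conversion is needed
print(df.dtypes)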

Task 2. Data exploration and cleaning

Consider the following questions:

  1. Given the scenario, which data columns are most applicable?

  2. Which data columns can you eliminate, knowing they won’t solve your problem scenario?

  3. How would you check for missing data? And how would you handle missing data (if any)?

  4. How would you check for outliers? And how would you handle outliers (if any)?

==> ENTER YOUR RESPONSES TO QUESTIONS 1-4 HERE
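
One hedged way to support answers 3 and 4 is to compare the rows with a missing label against the labeled rows before deciding how to handle them; the sketch below assumes `df` from the loading cell above.

# Rows where the target `label` is missing
missing_label = df[df['label'].isna()]

# Compare summary statistics for rows with and without a label to judge
# whether the missing values appear to be random
print(missing_label.describe())
print(df[~df['label'].isna()].describe())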

Data overview and summary statistics

Use the following methods and attributes on the dataframe:

  • head()
  • size
  • describe()
  • info()

It's always helpful to have this information at the beginning of a project, so that you can refer back to it later if needed.

In [3]:
### YOUR CODE HERE ###
df.head()
Out[3]:
ID label sessions drives total_sessions n_days_after_onboarding total_navigations_fav1 total_navigations_fav2 driven_km_drives duration_minutes_drives activity_days driving_days device
0 0 retained 283 226 296.748273 2276 208 0 2628.845068 1985.775061 28 19 Android
1 1 retained 133 107 326.896596 1225 19 64 13715.920550 3160.472914 13 11 iPhone
2 2 retained 114 95 135.522926 2651 0 0 3059.148818 1610.735904 14 8 Android
3 3 retained 49 40 67.589221 15 322 7 913.591123 587.196542 7 3 iPhone
4 4 retained 84 68 168.247020 1562 166 5 3950.202008 1219.555924 27 18 Android
In [4]:
### YOUR CODE HERE ###
df.size
Out[4]:
194987

Generate summary statistics using the describe() method.

In [5]:
### YOUR CODE HERE ###
df.describe()
Out[5]:
ID sessions drives total_sessions n_days_after_onboarding total_navigations_fav1 total_navigations_fav2 driven_km_drives duration_minutes_drives activity_days driving_days
count 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000
mean 7499.000000 80.633776 67.281152 189.964447 1749.837789 121.605974 29.672512 4039.340921 1860.976012 15.537102 12.179879
std 4329.982679 80.699065 65.913872 136.405128 1008.513876 148.121544 45.394651 2502.149334 1446.702288 9.004655 7.824036
min 0.000000 0.000000 0.000000 0.220211 4.000000 0.000000 0.000000 60.441250 18.282082 0.000000 0.000000
25% 3749.500000 23.000000 20.000000 90.661156 878.000000 9.000000 0.000000 2212.600607 835.996260 8.000000 5.000000
50% 7499.000000 56.000000 48.000000 159.568115 1741.000000 71.000000 9.000000 3493.858085 1478.249859 16.000000 12.000000
75% 11248.500000 112.000000 93.000000 254.192341 2623.500000 178.000000 43.000000 5289.861262 2464.362632 23.000000 19.000000
max 14998.000000 743.000000 596.000000 1216.154633 3500.000000 1236.000000 415.000000 21183.401890 15851.727160 31.000000 30.000000

And summary information using the info() method.

In [6]:
### YOUR CODE HERE ###
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 14999 entries, 0 to 14998
Data columns (total 13 columns):
 #   Column                   Non-Null Count  Dtype  
---  ------                   --------------  -----  
 0   ID                       14999 non-null  int64  
 1   label                    14299 non-null  object 
 2   sessions                 14999 non-null  int64  
 3   drives                   14999 non-null  int64  
 4   total_sessions           14999 non-null  float64
 5   n_days_after_onboarding  14999 non-null  int64  
 6   total_navigations_fav1   14999 non-null  int64  
 7   total_navigations_fav2   14999 non-null  int64  
 8   driven_km_drives         14999 non-null  float64
 9   duration_minutes_drives  14999 non-null  float64
 10  activity_days            14999 non-null  int64  
 11  driving_days             14999 non-null  int64  
 12  device                   14999 non-null  object 
dtypes: float64(3), int64(8), object(2)
memory usage: 1.5+ MB

PACE: Construct

Consider the questions in your PACE Strategy Document to reflect on the Construct stage.

Consider the following questions as you prepare to deal with outliers:

  1. What are some ways to identify outliers?
  2. How do you make the decision to keep or exclude outliers from any future models?

==> ENTER YOUR RESPONSES TO QUESTIONS 1-2 HERE
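
One common way to flag potential outliers is the interquartile-range (IQR) rule, sketched below for a few of the numeric columns shown in info() above (an illustration only; Task 3b of this notebook uses a percentile-based cap instead). Whether to keep, cap, or drop flagged values depends on what you plan to do with the data downstream.

# IQR rule: values more than 1.5 * IQR above the third quartile are flagged
for column in ['sessions', 'drives', 'driven_km_drives']:
    q1 = df[column].quantile(0.25)
    q3 = df[column].quantile(0.75)
    upper_bound = q3 + 1.5 * (q3 - q1)
    n_outliers = (df[column] > upper_bound).sum()
    print(f'{column}: {n_outliers} values above {upper_bound:.2f}')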

Task 3a. Visualizations

Select data visualization types that will help you understand and explain the data.

Now that you know which data columns you’ll use, it is time to decide which data visualization makes the most sense for EDA of the Waze dataset.

Question: What type of data visualization(s) will be most helpful?

  • Line graph
  • Bar chart
  • Box plot
  • Histogram
  • Heat map
  • Scatter plot
  • A geographic map

==> ENTER YOUR RESPONSE HERE

Begin by examining the spread and distribution of important variables using box plots and histograms.

sessions

The number of occurrences of a user opening the app during the month

In [7]:
# Box plot
### YOUR CODE HERE ###

sns.boxplot(df['sessions'], notch=True)
plt.title('Number of occurrences of a user opening the app during the month')
plt.show()
In [8]:
# Histogram
### YOUR CODE HERE ###
plt.hist(df['sessions'], orientation='vertical')
plt.xlabel('Number of sessions')
plt.ylabel('Count')
plt.title('Sessions histogram')
plt.show()

The sessions variable is a right-skewed distribution with half of the observations having 56 or fewer sessions. However, as indicated by the boxplot, some users have more than 700.

drives

An occurrence of driving at least 1 km during the month

In [9]:
# Box plot
### YOUR CODE HERE ###

sns.boxplot(df['drives'])
plt.title('Drives Boxplot')
Out[9]:
Text(0.5, 1.0, 'Drives Boxplot')
In [10]:
# Histogram
### YOUR CODE HERE ###
plt.hist(df['drives'], orientation='vertical')
plt.title('Drives histogram')
plt.xlabel('Number of drives')
plt.ylabel('Count')
plt.show()

The drives information follows a distribution similar to the sessions variable. It is right-skewed, approximately log-normal, with a median of 48. However, some drivers had over 400 drives in the last month.

total_sessions

A model estimate of the total number of sessions since a user has onboarded

In [11]:
# Box plot
### YOUR CODE HERE ###
sns.boxplot(df['total_sessions'])
plt.title('Total sessions box plot')
Out[11]:
Text(0.5, 1.0, 'Total_session Boxplot')
In [12]:
# Histogram
### YOUR CODE HERE ###
plt.hist(df['total_sessions'], orientation='vertical')
plt.title('Total sessions histogram')
plt.xlabel('Total sessions')
plt.ylabel('Count')
plt.show()

The total_sessions is a right-skewed distribution. The median total number of sessions is 159.6. This is interesting information because, if the median number of sessions in the last month was 48 and the median total sessions was ~160, then it seems that a large proportion of a user's total drives might have taken place in the last month. This is something you can examine more closely later.

n_days_after_onboarding

The number of days since a user signed up for the app

In [13]:
# Box plot
### YOUR CODE HERE ###

sns.boxplot(df['n_days_after_onboarding'])
plt.title('Days_after_onboarding Boxplot')
Out[13]:
Text(0.5, 1.0, 'Days_after_onboarding Boxplot')
In [14]:
# Histogram
### YOUR CODE HERE ###

plt.hist(df['n_days_after_onboarding'], orientation='vertical')
plt.title('Days after onboarding histogram')
plt.xlabel('Days after onboarding')
plt.ylabel('Count')
plt.show()

The total user tenure (i.e., number of days since onboarding) is a uniform distribution with values ranging from near-zero to ~3,500 (~9.5 years).

driven_km_drives

Total kilometers driven during the month

In [15]:
# Box plot
### YOUR CODE HERE ###
sns.boxplot(df['driven_km_drives'])
plt.title('Driven km drives box plot')
Out[15]:
Text(0.5, 1.0, 'Driven_Drives(in KM) Boxplot')
In [16]:
# Histogram
### YOUR CODE HERE ###
plt.hist(df['driven_km_drives'], orientation='vertical')
plt.title('Driven km drives histogram')
plt.xlabel('Kilometers driven')
plt.ylabel('Count')
plt.show()

The distance driven in the last month per user follows a right-skewed distribution, with half the users driving under 3,495 kilometers. As you discovered in the analysis from the previous course, the users in this dataset drive a lot. The longest distance driven in the month was over half the circumference of the earth.

duration_minutes_drives

Total duration driven in minutes during the month

In [17]:
# Box plot
### YOUR CODE HERE ###
sns.boxplot(df['duration_minutes_drives'])
plt.title('Duration-Minutes_Drives Boxplot')
Out[17]:
Text(0.5, 1.0, 'Duration-Minutes_Drives Boxplot')
In [18]:
# Histogram
### YOUR CODE HERE ###
plt.hist(df['duration_minutes_drives'], orientation='vertical')
plt.title('Duration minutes drives histogram')
plt.xlabel('Minutes driven')
plt.ylabel('Count')
plt.show()

The duration_minutes_drives variable has a heavily skewed right tail. Half of the users drove less than ~1,478 minutes (~25 hours), but some users clocked over 250 hours over the month.

activity_days

Number of days the user opens the app during the month

In [19]:
# Box plot
### YOUR CODE HERE ###
sns.boxplot(df['activity_days'])
plt.title('Days Active Boxplot')
Out[19]:
Text(0.5, 1.0, 'Days Active Boxplot')
In [20]:
# Histogram
### YOUR CODE HERE ###
plt.hist(df['activity_days'], orientation='vertical')
plt.title('Days active histogram')
plt.xlabel('Number of active days')
plt.ylabel('Count')
plt.show()

Within the last month, users opened the app a median of 16 times. The box plot reveals a centered distribution. The histogram shows a nearly uniform distribution of ~500 people opening the app on each count of days. However, there are ~250 people who didn't open the app at all and ~250 people who opened the app every day of the month.

This distribution is noteworthy because it does not mirror the sessions distribution, which you might think would be closely correlated with activity_days.
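
If you want to quantify that relationship rather than eyeball it, a quick correlation check is enough (an optional aside, assuming `df` as loaded above):

# Pearson correlation between monthly sessions and days of app activity
print(df['sessions'].corr(df['activity_days']))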

driving_days

Number of days the user drives (at least 1 km) during the month

In [21]:
# Box plot
### YOUR CODE HERE ###
sns.boxplot(df['driving_days'])
plt.title('Driving Days Boxplot')
Out[21]:
Text(0.5, 1.0, 'Driving Days Boxplot')
In [22]:
# Histogram
### YOUR CODE HERE ###
plt.hist(df['driving_days'], orientation='vertical')
plt.title('Driving days histogram')
plt.xlabel('Number of driving days')
plt.ylabel('Count')
plt.show()

The number of days users drove each month is almost uniform, and it largely correlates with the number of days they opened the app that month, except the driving_days distribution tails off on the right.

However, there were almost twice as many users (~1,000 vs. ~550) who did not drive at all during the month. This might seem counterintuitive when considered together with the information from activity_days. That variable had ~500 users opening the app on each of most of the day counts, but there were only ~250 users who did not open the app at all during the month and ~250 users who opened the app every day. Flag this for further investigation later.

device

The type of device a user starts a session with

This is a categorical variable, so you do not plot a box plot for it. A good plot for a binary categorical variable is a pie chart.

In [23]:
# Pie chart
### YOUR CODE HERE ###
data= df['device'].value_counts()
expld = (0,0.1)
plt.pie(data, labels=[f'{data.index[0]}: {data.values[0]}',
                f'{data.index[1]}: {data.values[1]}'],
        autopct='%1.1f%%',
       explode = expld)
plt.title("Device Base")
plt.show()

There are nearly twice as many iPhone users as Android users represented in this data.

label

Binary target variable (“retained” vs “churned”) indicating whether a user churned at any point during the month

This is also a categorical variable, and as such would not be plotted as a box plot. Plot a pie chart instead.

In [24]:
# Pie chart
### YOUR CODE HERE ###

label_data = df['label'].value_counts()
plt.pie(label_data,
       labels=[f'{label_data.index[0]}: {label_data.values[0]}',
                f'{label_data.index[1]}: {label_data.values[1]}'],
        autopct='%1.1f%%')
plt.title('Retained vs. churned users')
plt.show()

Less than 18% of the users churned.
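
For the exact proportions rather than the rounded pie-chart labels, value_counts with normalize=True works (a small optional check, assuming `df` as above):

# Exact churn/retention proportions (missing labels excluded by default)
print(df['label'].value_counts(normalize=True))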

driving_days vs. activity_days

Because both driving_days and activity_days represent counts of days over a month and they're also closely related, you can plot them together on a single histogram. This will help to better understand how they relate to each other without having to scroll back and forth comparing histograms in two different places.

Plot a histogram that, for each day, has a bar representing the counts of driving_days and activity_days.

In [25]:
# Histogram
### YOUR CODE HERE ###
plt.figure(figsize=(15,7))
plt.hist([df['driving_days'], df['activity_days']],
         bins=range(0,32),
         label=['driving days', 'activity days'])
plt.xlabel('Days')
plt.ylabel('Count')
plt.legend()
plt.title('driving_days vs. activity_days')
plt.show()

As observed previously, this might seem counterintuitive. After all, why are there fewer people who didn't use the app at all during the month and more people who didn't drive at all during the month?

On the other hand, it could just be illustrative of the fact that, while these variables are related to each other, they're not the same. People probably just open the app more than they use the app to drive—perhaps to check drive times or route information, to update settings, or even just by mistake.

Nonetheless, it might be worthwhile to contact the data team at Waze to get more information about this, especially because it seems that the number of days in the month is not the same between variables.

Confirm the maximum number of days for each variable—driving_days and activity_days.

In [26]:
### YOUR CODE HERE ###
print("Maximum Driving Days", df['driving_days'].max())
print("Maximum Activity Days", df['activity_days'].max())
Maximum Driving Days 30
Maximum Activity Days 31

It's confirmed: driving_days has a maximum of 30, while activity_days has a maximum of 31. Although it's possible that not a single user drove all 31 days of the month, it's highly unlikely, considering there are 15,000 people represented in the dataset.

One other way to check the validity of these variables is to plot a simple scatter plot with the x-axis representing one variable and the y-axis representing the other.

In [27]:
# Scatter plot
### YOUR CODE HERE ###

sns.scatterplot(data=df, x='driving_days', y='activity_days')
plt.title('driving_days vs. activity_days')
plt.plot([0,31], [0,31], color='black', linestyle='-');
plt.show()

Notice that there is a theoretical limit. If you use the app to drive, then by definition it must count as a day of app activity as well. In other words, you cannot have more driving days than activity days. None of the samples in this data violate this rule, which is good.
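
You can confirm this programmatically; the one-liner below is a minimal sketch assuming `df` as loaded above.

# Number of rows where driving days exceed activity days; expected to be 0
print((df['driving_days'] > df['activity_days']).sum())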

Retention by device

Plot a histogram that has four bars—one for each device-label combination—to show how many iPhone users were retained/churned and how many Android users were retained/churned.

In [46]:
# Histogram
### YOUR CODE HERE ###

sns.histplot(data = df,
            x='device',
             hue='label',
             multiple='dodge',
            shrink=0.7)
plt.title("Retenetion by device")
plt.show()

The proportion of churned users to retained users is consistent between device types.
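
To put a number on that consistency, you could compute the churn rate within each device group (an optional check, assuming `df` as above):

# Churn/retention proportions within each device type
print(df.groupby('device')['label'].value_counts(normalize=True))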

Retention by kilometers driven per driving day

In the previous course, you discovered that the median distance driven last month for users who churned was 8.33 km, versus 3.36 km for people who did not churn. Examine this further.

  1. Create a new column in df called km_per_driving_day, which represents the mean distance driven per driving day for each user.

  2. Call the describe() method on the new column.

In [48]:
# 1. Create `km_per_driving_day` column
### YOUR CODE HERE ###
df['km_per_driving_day']= df['driven_km_drives']/df['driving_days']
# 2. Call `describe()` on the new column
### YOUR CODE HERE ###
df['km_per_driving_day'].describe()
Out[48]:
count    1.499900e+04
mean              inf
std               NaN
min      3.022063e+00
25%      1.672804e+02
50%      3.231459e+02
75%      7.579257e+02
max               inf
Name: km_per_driving_day, dtype: float64

What do you notice? The mean value is infinity, the standard deviation is NaN, and the max value is infinity. Why do you think this is?

This is the result of there being values of zero in the driving_days column. Pandas imputes a value of infinity in the corresponding rows of the new column because division by zero is undefined.
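
You can verify that every infinite value corresponds to a user with zero driving days; this quick check assumes the `km_per_driving_day` column created above, before the infinities are converted.

# The count of infinite ratios should match the count of users with 0 driving days
print(np.isinf(df['km_per_driving_day']).sum())
print((df['driving_days'] == 0).sum())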

  1. Convert these values from infinity to zero. You can use np.inf to refer to a value of infinity.

  2. Call describe() on the km_per_driving_day column to verify that it worked.

In [49]:
# 1. Convert infinite values to zero
### YOUR CODE HERE ###
df.loc[df['km_per_driving_day']==np.inf, 'km_per_driving_day']=0
# 2. Confirm that it worked
### YOUR CODE HERE ###
df['km_per_driving_day'].describe()
Out[49]:
count    14999.000000
mean       578.963113
std       1030.094384
min          0.000000
25%        136.238895
50%        272.889272
75%        558.686918
max      15420.234110
Name: km_per_driving_day, dtype: float64

The maximum value is 15,420 kilometers per drive day. This is physically impossible. Driving 100 km/hour for 12 hours is 1,200 km. It's unlikely many people averaged more than this each day they drove, so, for now, disregard rows where the distance in this column is greater than 1,200 km.

Plot a histogram of the new km_per_driving_day column, disregarding those users with values greater than 1,200 km. Each bar should be the same length and have two colors, one color representing the percent of the users in that bar that churned and the other representing the percent that were retained. This can be done by setting the multiple parameter of seaborn's histplot() function to fill.

In [50]:
# Histogram
### YOUR CODE HERE ###
sns.histplot(data=df,
             x='km_per_driving_day',
             bins=range(0,1201,20),
             hue='label',
             multiple='fill')
plt.ylabel('%', rotation=0)
plt.title('Churn rate by mean km per driving day');

The churn rate tends to increase as the mean daily distance driven increases, confirming what was found in the previous course. It would be worth investigating further the reasons for long-distance users to discontinue using the app.
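
If you'd like the underlying numbers rather than just the plot, a binned churn-rate table makes the trend explicit. This is a sketch only, assuming the `km_per_driving_day` column and `label` values as above; the 200 km bin width is an arbitrary illustrative choice.

# Churn rate within 200 km bins of mean daily distance, for users driving <= 1,200 km per day
capped = df[df['km_per_driving_day'] <= 1200].copy()
capped['km_bin'] = pd.cut(capped['km_per_driving_day'],
                          bins=range(0, 1400, 200), include_lowest=True)
print(capped.groupby('km_bin')['label']
            .apply(lambda labels: (labels.dropna() == 'churned').mean()))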

Churn rate per number of driving days

Create another histogram just like the previous one, only this time it should represent the churn rate for each number of driving days.

In [54]:
# Histogram
### YOUR CODE HERE ###
sns.histplot(data=df,
             x='driving_days',
             hue='label',
             multiple='fill',
             discrete=True)
plt.ylabel('%', rotation=0)
plt.title('Churn rate per driving day');

The churn rate is highest for people who didn't use Waze much during the last month. The more times they used the app, the less likely they were to churn. While 40% of the users who didn't use the app at all last month churned, nobody who used the app 30 days churned.

This isn't surprising. If people who used the app a lot churned, it would likely indicate dissatisfaction. When people who don't use the app churn, it might be the result of dissatisfaction in the past, or it might be indicative of a lesser need for a navigational app. Maybe they moved to a city with good public transportation and don't need to drive anymore.
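
If you want to verify the rates described above, a normalized crosstab gives the churn rate at each driving-day count (an optional check, assuming `df` as loaded earlier):

# Churn rate for users with 0 driving days vs. users with 30 driving days
by_days = pd.crosstab(df['driving_days'], df['label'], normalize='index')
print(by_days.loc[[0, 30]])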

Proportion of sessions that occurred in the last month

Create a new column percent_sessions_in_last_month that represents the percentage of each user's total sessions that were logged in their last month of use.

In [55]:
### YOUR CODE HERE ###

df['percent_sessions_in_last_month'] = df['sessions'] / df['total_sessions']

What is the median value of the new column?

In [56]:
### YOUR CODE HERE ###
df['percent_sessions_in_last_month'].median()
Out[56]:
0.42309702992763176

Now, create a histogram depicting the distribution of values in this new column.

In [69]:
# Histogram
### YOUR CODE HERE ###
sns.histplot(data=df,
             x='percent_sessions_in_last_month',
             hue='label')
plt.show()

Check the median value of the n_days_after_onboarding variable.

In [70]:
### YOUR CODE HERE ###
df['n_days_after_onboarding'].median()
Out[70]:
1741.0

Half of the people in the dataset had 40% or more of their sessions in just the last month, yet the overall median time since onboarding is almost five years.
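
You can double-check that claim directly; a quick sketch, assuming the `percent_sessions_in_last_month` column created above:

# Proportion of users whose last-month sessions are at least 40% of their total sessions
print((df['percent_sessions_in_last_month'] >= 0.4).mean())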

Make a histogram of n_days_after_onboarding for just the people who had 40% or more of their total sessions in the last month.

In [72]:
# Histogram
### YOUR CODE HERE ###

data = df.loc[df['percent_sessions_in_last_month'] >= 0.4]
sns.histplot(x=data['n_days_after_onboarding'])
plt.title('Days after onboarding for users with >=40% of sessions in the last month')
plt.show()

The number of days since onboarding for users with 40% or more of their total sessions occurring in just the last month is a uniform distribution. This is very strange. It's worth asking Waze why so many long-time users suddenly used the app so much in the last month.

Task 3b. Handling outliers

The box plots from the previous section indicated that many of these variables have outliers. These outliers do not seem to be data entry errors; they are present because of the right-skewed distributions.

Depending on what you'll be doing with this data, it may be useful to impute outlying data with more reasonable values. One way of performing this imputation is to set a threshold based on a percentile of the distribution.

To practice this technique, write a function that calculates the 95th percentile of a given column, then imputes any value greater than the 95th percentile with the value at the 95th percentile.

In [73]:
### YOUR CODE HERE ###
def check_outlier(column_name, percentile):
    # Calculate the value of the column at the given percentile
    x = df[column_name].quantile(percentile)

    # Impute values above the percentile threshold with the threshold value
    df.loc[df[column_name] > x, column_name] = x

    print('{:>25} | percentile: {}  | x: {}'.format(column_name, percentile,x) )

Next, apply that function to the following columns:

  • sessions
  • drives
  • total_sessions
  • driven_km_drives
  • duration_minutes_drives
In [74]:
### YOUR CODE HERE ###

for i in ['sessions', 'drives', 'total_sessions', 'driven_km_drives', 'duration_minutes_drives']:
    check_outlier(i, 0.95)
                 sessions | percentile: 0.95  | x: 243.0
                   drives | percentile: 0.95  | x: 201.0
           total_sessions | percentile: 0.95  | x: 454.3632037399997
         driven_km_drives | percentile: 0.95  | x: 8889.7942356
  duration_minutes_drives | percentile: 0.95  | x: 4668.899348999999

Call describe() to see if your change worked.

In [75]:
### YOUR CODE HERE ###
df.describe()
Out[75]:
ID sessions drives total_sessions n_days_after_onboarding total_navigations_fav1 total_navigations_fav2 driven_km_drives duration_minutes_drives activity_days driving_days km_per_driving_day percent_sessions_in_last_month
count 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000 14999.000000
mean 7499.000000 76.568705 64.058204 184.031320 1749.837789 121.605974 29.672512 3939.632764 1789.647426 15.537102 12.179879 578.963113 0.449255
std 4329.982679 67.297958 55.306924 118.600463 1008.513876 148.121544 45.394651 2216.041510 1222.705167 9.004655 7.824036 1030.094384 0.286919
min 0.000000 0.000000 0.000000 0.220211 4.000000 0.000000 0.000000 60.441250 18.282082 0.000000 0.000000 0.000000 0.000000
25% 3749.500000 23.000000 20.000000 90.661156 878.000000 9.000000 0.000000 2212.600607 835.996260 8.000000 5.000000 136.238895 0.196221
50% 7499.000000 56.000000 48.000000 159.568115 1741.000000 71.000000 9.000000 3493.858085 1478.249859 16.000000 12.000000 272.889272 0.423097
75% 11248.500000 112.000000 93.000000 254.192341 2623.500000 178.000000 43.000000 5289.861262 2464.362632 23.000000 19.000000 558.686918 0.687216
max 14998.000000 243.000000 201.000000 454.363204 3500.000000 1236.000000 415.000000 8889.794236 4668.899349 31.000000 30.000000 15420.234110 1.530637

Conclusion

Analysis revealed that the overall churn rate is ~17%, and that this rate is consistent between iPhone users and Android users.

Perhaps you feel that the more deeply you explore the data, the more questions arise. This is not uncommon! In this case, it's worth asking the Waze data team why so many users used the app so much in just the last month.

Also, EDA has revealed that users who drive very long distances on their driving days are more likely to churn, but users who drive more often are less likely to churn. The reason for this discrepancy is an opportunity for further investigation, and it would be something else to ask the Waze data team about.

PACE: Execute

Consider the questions in your PACE Strategy Document to reflect on the Execute stage.

Task 4a. Results and evaluation

Having built visualizations in Python, what have you learned about the dataset? What other questions have your visualizations uncovered that you should pursue?

Pro tip: Put yourself in your client's perspective. What would they want to know?

Use the following code fields to pursue any additional EDA based on the visualizations you've already plotted. Also use the space to make sure your visualizations are clean, easily understandable, and accessible.

Ask yourself: Did you consider color, contrast, emphasis, and labeling?

==> ENTER YOUR RESPONSE HERE

I have learned ....

My other questions are ....

My client would likely want to know ...

Use the following two code blocks (add more blocks if you like) to do additional EDA you feel is important based on the given scenario.

In [41]:
### YOUR CODE HERE ###
In [42]:
### YOUR CODE HERE ###
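
As one example of additional EDA you might run in the cells above (a sketch only, not a required answer), a correlation heat map of the numeric columns can surface relationships worth raising with the Waze data team; heat maps were one of the visualization options listed earlier, and the sketch assumes the imports and `df` from the loading cells.

# Correlation heat map of the numeric variables (ID dropped since it is just a row index)
numeric_cols = df.drop(columns=['ID']).select_dtypes('number')
plt.figure(figsize=(10, 8))
sns.heatmap(numeric_cols.corr(), annot=True, fmt='.2f', cmap='coolwarm')
plt.title('Correlation heat map of numeric variables')
plt.show()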

Task 4b. Conclusion

Now that you've explored and visualized your data, the next step is to share your findings with Harriet Hadzic, Waze's Director of Data Analysis. Consider the following questions as you prepare to write your executive summary. Think about key points you may want to share with the team, and what information is most relevant to the user churn project.

Questions:

  1. What types of distributions did you notice in the variables? What did this tell you about the data?

  2. Was there anything that led you to believe the data was erroneous or problematic in any way?

  3. Did your investigation give rise to further questions that you would like to explore or ask the Waze team about?

  4. What percentage of users churned and what percentage were retained?

  5. What factors correlated with user churn? How?

  6. Did newer users have greater representation in this dataset than users with longer tenure? How do you know?

==> ENTER YOUR RESPONSES TO QUESTIONS 1-6 HERE

Congratulations! You've completed this lab. However, you may not notice a green check mark next to this item on Coursera's platform. Please continue your progress regardless of the check mark. Just click on the "save" icon at the top of this notebook to ensure your work has been logged.