This assignment is based on a data challenge from the Michigan Data Science Team (MDST).
The Michigan Data Science Team (MDST) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences (MSSISS) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. Blight violations are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?
The first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.
All data for this assignment has been provided to us through the Detroit Open Data Portal. Only the data already included in your Coursera directory can be used for training the model for this assignment. Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection. We recommend taking a look at the following related datasets:
We provide you with two data files for use in training and validating your models: train.csv and test.csv. Each row in these two files corresponds to a single blight ticket and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing date, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, along with a handful of other variables that will not be available at test time, is only included in train.csv.
Note: All tickets where the violators were found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches. However, they are not included in the test set.
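Since the Null-compliance (not-responsible) tickets are excluded from evaluation, a common first step is to drop them before supervised training. A minimal sketch, assuming train.csv sits in the working directory (the path is a placeholder, not part of the assignment setup):
import pandas as pd
# Minimal sketch: keep only labeled (0/1 compliance) rows for supervised training.
train = pd.read_csv('train.csv', encoding='ISO-8859-1')
labeled = train[train['compliance'].notnull()]
print(labeled['compliance'].value_counts())  # quick class-balance check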
File descriptions (Use only this data for training your model!)
train.csv - the training set (all tickets issued 2004-2011)
test.csv - the test set (all tickets issued 2012-2016)
addresses.csv & latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates.
Note: misspelled addresses may be incorrectly geolocated.
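Attaching coordinates is a two-step merge: ticket_id to address via addresses.csv, then address to lat/lon via latlons.csv. A minimal sketch with placeholder paths; unmatched or misspelled addresses simply leave NaN coordinates:
import pandas as pd
# Sketch of the two-step join: ticket_id -> address -> lat/lon.
addresses = pd.read_csv('addresses.csv')
latlons = pd.read_csv('latlons.csv')
coords = addresses.merge(latlons, on='address', how='left')
train = pd.read_csv('train.csv', encoding='ISO-8859-1')
train = train.merge(coords, on='ticket_id', how='left')
print(train[['ticket_id', 'lat', 'lon']].head())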
Data fields
train.csv & test.csv
ticket_id - unique identifier for tickets
agency_name - Agency that issued the ticket
inspector_name - Name of inspector that issued the ticket
violator_name - Name of the person/organization that the ticket was issued to
violation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred
mailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator
ticket_issued_date - Date and time the ticket was issued
hearing_date - Date and time the violator's hearing was scheduled
violation_code, violation_description - Type of violation
disposition - Judgment and judgment type
fine_amount - Violation fine amount, excluding fees
admin_fee - $20 fee assigned to responsible judgments
state_fee - $10 fee assigned to responsible judgments
late_fee - 10% fee assigned to responsible judgments
discount_amount - discount applied, if any
clean_up_cost - DPW clean-up or graffiti removal cost
judgment_amount - Sum of all fines and fees
grafitti_status - Flag for graffiti violations
train.csv only
payment_amount - Amount paid, if any
payment_date - Date payment was made, if it was received
payment_status - Current payment status as of Feb 1 2017
balance_due - Fines and fees still owed
collection_status - Flag for payments in collections
compliance [target variable for prediction]
Null = Not responsible
0 = Responsible, non-compliant
1 = Responsible, compliant
compliance_detail - More information on why each ticket was marked compliant or non-compliant
Your predictions will be given as the probability that the corresponding blight ticket will be paid on time.
The evaluation metric for this assignment is the Area Under the ROC Curve (AUC).
Your grade will be based on the AUC score computed for your classifier. A model with an AUROC of 0.7 passes this assignment; a score over 0.75 will receive full points.
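To sanity-check your model against this threshold, you can estimate AUC yourself on a held-out split of train.csv. The sketch below uses a deliberately tiny feature list drawn from the fields above and a placeholder path; it illustrates the metric, not a recommended model:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
# Sketch only: a minimal numeric feature set from the documented fields.
train = pd.read_csv('train.csv', encoding='ISO-8859-1')
train = train[train['compliance'].notnull()]
features = ['fine_amount', 'late_fee', 'discount_amount', 'judgment_amount']
X = train[features].fillna(0)
y = train['compliance']
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print('validation AUC:', roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))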
For this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using train.csv. Using this model, return a Series of length 61001 where the data is the probability that each corresponding ticket from test.csv will be paid on time, and the index is the ticket_id.
Example:
ticket_id
284932 0.531842
285362 0.401958
285361 0.105928
285338 0.018572
...
376499 0.208567
376500 0.818759
369851 0.018528
Name: compliance, dtype: float32
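For reference, a sketch of how a Series in this shape can be assembled; probs is random placeholder data standing in for your model's predicted probabilities, and the file path is an assumption:
import pandas as pd
import numpy as np
# Sketch of the expected return shape: a float Series named 'compliance', indexed by ticket_id.
test = pd.read_csv('test.csv', encoding='ISO-8859-1')
probs = np.random.rand(len(test))  # placeholder for clf.predict_proba(X_test)[:, 1]
answer = pd.Series(probs, index=test['ticket_id'], name='compliance').astype('float32')
print(answer.head())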
Make sure your code is working before submitting it to the autograder.
Print out your result to see whether there is anything weird (e.g., all probabilities are the same).
Generally the total runtime should be less than 10 mins. You should NOT use Neural Network related classifiers (e.g., MLPClassifier) in this question.
Try to avoid global variables. If you have other functions besides blight_model, you should move those functions inside the scope of blight_model.
Refer to the pinned threads in Week 4's discussion forum if there is something you cannot figure out.
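A structural sketch of the nested-function layout suggested above; the paths, feature choice, and classifier are placeholders, not the reference solution:
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
def blight_model():
    # Helpers are nested so nothing leaks into global scope and the
    # autograder only needs to call blight_model().
    def load(path):
        return pd.read_csv(path, encoding='ISO-8859-1')  # placeholder paths
    def features(df):
        # deliberately tiny placeholder feature set
        return df[['fine_amount', 'late_fee', 'discount_amount']].fillna(0)
    train = load('train.csv')
    test = load('test.csv')
    train = train[train['compliance'].notnull()]
    clf = GradientBoostingClassifier(random_state=0)
    clf.fit(features(train), train['compliance'])
    probs = clf.predict_proba(features(test))[:, 1]
    return pd.Series(probs, index=test['ticket_id'], name='compliance')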
import pandas as pd
import numpy as np
def blight_model():
    from sklearn.preprocessing import LabelEncoder
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split, GridSearchCV
    from sklearn.metrics import roc_auc_score

    # load the data
    train = pd.read_csv('~/data/train2.csv', encoding="ISO-8859-1")
    test = pd.read_csv('~/data/test.csv')
    addresses = pd.read_csv('~/data/addresses.csv')
    latlons = pd.read_csv('~/data/latlons.csv')

    # drop rows with Null compliance and keep U.S. addresses only
    train = train[np.isfinite(train['compliance'])]
    train = train[train.country == 'USA']
    test = test[test.country == 'USA']

    # merge lat/lon coordinates into both sets
    train = pd.merge(train, pd.merge(addresses, latlons, on='address'), on='ticket_id')
    test = pd.merge(test, pd.merge(addresses, latlons, on='address'), on='ticket_id')

    # drop leakage columns and features that are unused or unavailable at test time
    train.drop(['agency_name', 'inspector_name', 'violator_name', 'non_us_str_code', 'violation_description', 'grafitti_status',
                'state_fee', 'admin_fee', 'ticket_issued_date', 'hearing_date', 'payment_amount', 'balance_due', 'payment_date',
                'payment_status', 'collection_status', 'compliance_detail', 'violation_zip_code', 'country', 'address',
                'violation_street_number', 'violation_street_name', 'mailing_address_str_number', 'mailing_address_str_name',
                'city', 'state', 'zip_code'], axis=1, inplace=True)

    # label-encode remaining object columns and fill missing coordinates
    label_encoder = LabelEncoder()
    for col in train.columns[train.dtypes == "object"]:
        train[col] = label_encoder.fit_transform(train[col])
    train['lat'] = train['lat'].fillna(method='pad')
    train['lon'] = train['lon'].fillna(method='pad')
    test['lat'] = test['lat'].fillna(method='pad')
    test['lon'] = test['lon'].fillna(method='pad')

    # keep only the feature columns shared by train and test
    train_columns = list(train.columns.values)
    train_columns.remove('compliance')
    test = test[train_columns]

    # grid-search a random forest regressor; its predictions are used directly as compliance probabilities
    X_train, X_test, y_train, y_test = train_test_split(train.loc[:, train.columns != 'compliance'], train['compliance'])
    rf = RandomForestRegressor()
    grid_values = {'n_estimators': [10, 200], 'max_depth': [3, 50]}
    grid_rf_auc = GridSearchCV(rf, param_grid=grid_values, scoring='roc_auc')
    grid_rf_auc.fit(X_train, y_train)
    print('Model best parameter (max. AUC): ', grid_rf_auc.best_params_)
    print('Model score (AUC): ', grid_rf_auc.best_score_)

    for col in test.columns[test.dtypes == "object"]:
        test[col] = label_encoder.fit_transform(test[col])
    ans = pd.DataFrame(grid_rf_auc.predict(test), test.ticket_id)
    return ans
blight_model()
import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
train = pd.read_csv("~/data/train.csv",encoding = 'ISO-8859-1')
train = pd.read_csv("~/data/train.csv",encoding = 'ISO-8859-1')
test = pd.read_csv("~/data/test.csv")
train = train[(train["compliance"] == 1) | (train["compliance"] ==0)]
addresses = pd.read_csv('~/data/addresses.csv')
latlons = pd.read_csv('~/data/latlons.csv')
temp = pd.merge(addresses, latlons, on = 'address')
train = pd.merge(train, temp, on = 'ticket_id')
test = pd.merge(test, temp, on = 'ticket_id')
pd.get_dummies?
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import LabelEncoder
# load data
train = pd.read_csv('~/data/train2.csv', encoding = "ISO-8859-1")
#train = train[np.isfinite(train['compliance'])]
#train.drop(['Unnamed: 34','Unnamed: 35'], axis=1, inplace=True)
test = pd.read_csv('~/data/test.csv')
addresses = pd.read_csv('~/data/addresses.csv')
latlons = pd.read_csv('~/data/latlons.csv')
# drop all rows with Null compliance
train = train[np.isfinite(train['compliance'])]
#train_df = train_df.dropna(subset = ['compliance'])#drop rows in training data set where target is NaN
# drop all rows not in the U.S
train = train[train.country == 'USA']
test = test[test.country == 'USA']
# merge latlons and addresses with data
train = pd.merge(train, pd.merge(addresses, latlons, on='address'), on='ticket_id')
test = pd.merge(test, pd.merge(addresses, latlons, on='address'), on='ticket_id')
# drop all unnecessary columns
train.drop(['agency_name', 'inspector_name', 'violator_name', 'non_us_str_code', 'violation_description','grafitti_status',
'state_fee', 'admin_fee', 'ticket_issued_date', 'hearing_date', 'payment_amount', 'balance_due', 'payment_date',
'payment_status','collection_status', 'compliance_detail', 'violation_zip_code', 'country', 'address',
'violation_street_number','violation_street_name', 'mailing_address_str_number', 'mailing_address_str_name',
'city', 'state', 'zip_code', 'address'], axis=1, inplace=True)
# discretizing relevant columns
#label_encoder = LabelEncoder()
#label_encoder.fit(train['disposition'].append(test['disposition'], ignore_index=True))
#train['disposition'] = label_encoder.transform(train['disposition'])
#test['disposition'] = label_encoder.transform(test['disposition'])
#label_encoder = LabelEncoder()
#label_encoder.fit(train['violation_code'].append(test['violation_code'], ignore_index=True))
#train['violation_code'] = label_encoder.transform(train['violation_code'])
#test['violation_code'] = label_encoder.transform(test['violation_code'])
label_encoder = LabelEncoder()
for col in train.columns[train.dtypes == "object"]:
    train[col] = label_encoder.fit_transform(train[col])
train['lat'] = train['lat'].fillna(method='pad') #train['lat'].mean()
train['lon'] = train['lon'].fillna(method='pad') #train['lon'].mean()
test['lat'] = test['lat'].fillna(method='pad') #test['lat'].mean()
test['lon'] = test['lon'].fillna(method='pad') #test['lon'].mean()
train_columns = list(train.columns.values)
train_columns.remove('compliance')
test = test[train_columns]
# train the model
X_train, X_test, y_train, y_test = train_test_split(train.loc[:, train.columns != 'compliance'], train['compliance'])
rf = RandomForestRegressor()
grid_values = {'n_estimators': [10, 30], 'max_depth': [3, 10]}
grid_rf_auc = GridSearchCV(rf, param_grid=grid_values, scoring='roc_auc')
grid_rf_auc.fit(X_train, y_train)
print('Grid best parameter (max. AUC): ', grid_rf_auc.best_params_)
print('Grid best score (AUC): ', grid_rf_auc.best_score_)
for col in test.columns[test.dtypes == "object"]:
    test[col] = label_encoder.fit_transform(test[col])
pd.DataFrame(grid_rf_auc.predict(test), test.ticket_id) #return
#---------------------------------------------------------------------
# Switch to a GradientBoostingClassifier for the classification model; the data preparation above is shared with the algorithm below.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, precision_score,recall_score, accuracy_score, precision_recall_curve
from sklearn.ensemble import GradientBoostingClassifier
grid_values = {'learning_rate': [0.01, 0.1, 1]}
clf = GradientBoostingClassifier(random_state = 0)
grid = GridSearchCV(clf, param_grid = grid_values, scoring = 'roc_auc')
grid.fit(X_train, y_train)
result = grid.predict_proba(test)[:, 1]
print(grid.best_score_) #0.82247378736744237
result_new = pd.Series(result, index = test.ticket_id)
result_new #return
test.dtypes
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, precision_score,recall_score, accuracy_score, precision_recall_curve
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV
def blight_model():
    # Your code here
    data_train = pd.read_csv("train.csv", encoding="ISO-8859-1")
    data_test = pd.read_csv("test.csv", encoding="ISO-8859-1")
    addresses = pd.read_csv("addresses.csv", encoding="ISO-8859-1")
    data_train = pd.merge(data_train, addresses, on="ticket_id", how="inner")
    data_test = pd.merge(data_test, addresses, on="ticket_id", how="inner")

    # drop Null-compliance rows and cast the target to int
    data_train = data_train.dropna(subset=["compliance"])
    data_train["compliance"] = data_train["compliance"].astype(int)

    convert_columns = {'country': 'category',
                       'non_us_str_code': 'category',
                       'compliance': 'category',
                       'state': 'category',
                       'zip_code': 'category'}
    for df in [data_test, data_train]:
        for col, col_type in convert_columns.items():
            if col in df:
                if col_type == 'category':
                    df[col] = df[col].replace(np.nan, "NA", regex=True).astype(col_type)
                elif col_type == 'int':
                    df[col] = df[col].replace(np.nan, 0, regex=True).astype(col_type)
    #print(data_train.head())
    #print(data_train.isnull().any())

    # dropping the columns we don't need
    common_cols_to_drop = ['agency_name', 'inspector_name', 'mailing_address_str_number',
                           'violator_name', 'violation_street_number', 'violation_street_name',
                           'mailing_address_str_name', 'address', 'admin_fee', 'violation_zip_code',
                           'state_fee', 'late_fee', 'ticket_issued_date', 'hearing_date', 'violation_description',
                           'fine_amount', 'clean_up_cost', 'disposition', 'grafitti_status',
                           'violation_code', 'city']
    data_train_cols_to_drop = ['payment_status', 'payment_date', 'balance_due', 'payment_amount'] + common_cols_to_drop
    data_test = data_test.drop(common_cols_to_drop, axis=1).set_index("ticket_id")
    data_train = data_train.drop(data_train_cols_to_drop, axis=1).set_index("ticket_id")
    #print(data_test.head())

    y_train = data_train["compliance"]
    data_train = data_train.drop(["compliance", "compliance_detail", "collection_status"], axis=1)

    # convert categorical columns to integer codes
    cat = data_train.select_dtypes(['category']).columns
    #print(cat)
    for df in [data_test, data_train]:
        df[cat] = df[cat].apply(lambda x: x.cat.codes)
    X_train = data_train.copy()

    # grid-search a gradient boosting classifier on AUC
    grid_values = {'learning_rate': [0.01, 0.1, 1]}
    clf = GradientBoostingClassifier(random_state=0)
    grid = GridSearchCV(clf, param_grid=grid_values, scoring='roc_auc')
    grid.fit(X_train, y_train)
    result = grid.predict_proba(data_test)[:, 1]
    result_new = pd.Series(result, index=data_test.index)
    return result_new
blight_model()
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score
def blight_model():
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.tree import DecisionTreeClassifier
    from datetime import datetime

    def time_gap(hearing_date_str, ticket_issued_date_str):
        # fall back to a fixed default gap (in days) when the hearing date is missing
        if not hearing_date_str or type(hearing_date_str) != str:
            return 73
        hearing_date = datetime.strptime(hearing_date_str, "%Y-%m-%d %H:%M:%S")
        ticket_issued_date = datetime.strptime(ticket_issued_date_str, "%Y-%m-%d %H:%M:%S")
        gap = hearing_date - ticket_issued_date
        return gap.days

    train_data = pd.read_csv('~/data/train2.csv', encoding='ISO-8859-1')
    #train_data.drop(['Unnamed: 34','Unnamed: 35'], axis=1, inplace=True)
    test_data = pd.read_csv('~/data/test.csv')
    train_data = train_data[(train_data['compliance'] == 0) | (train_data['compliance'] == 1)]

    # join coordinates onto both sets via the address mapping
    address = pd.read_csv('~/data/addresses.csv')
    latlons = pd.read_csv('~/data/latlons.csv')
    address = address.set_index('address').join(latlons.set_index('address'), how='left')
    train_data = train_data.set_index('ticket_id').join(address.set_index('ticket_id'))
    test_data = test_data.set_index('ticket_id').join(address.set_index('ticket_id'))
    train_data = train_data[~train_data['hearing_date'].isnull()]

    # engineered feature: days between ticket issue and hearing
    train_data['time_gap'] = train_data.apply(lambda row: time_gap(row['hearing_date'], row['ticket_issued_date']), axis=1)
    test_data['time_gap'] = test_data.apply(lambda row: time_gap(row['hearing_date'], row['ticket_issued_date']), axis=1)

    feature_to_be_splitted = ['agency_name', 'state', 'disposition']
    train_data.lat.fillna(method='pad', inplace=True)
    train_data.lon.fillna(method='pad', inplace=True)
    train_data.state.fillna(method='pad', inplace=True)
    test_data.lat.fillna(method='pad', inplace=True)
    test_data.lon.fillna(method='pad', inplace=True)
    test_data.state.fillna(method='pad', inplace=True)
    train_data = pd.get_dummies(train_data, columns=feature_to_be_splitted)
    test_data = pd.get_dummies(test_data, columns=feature_to_be_splitted)

    list_to_remove_train = [
        'balance_due',
        'collection_status',
        'compliance_detail',
        'payment_amount',
        'payment_date',
        'payment_status'
    ]
    list_to_remove_all = ['fine_amount', 'violator_name', 'zip_code', 'country', 'city',
                          'inspector_name', 'violation_street_number', 'violation_street_name',
                          'violation_zip_code', 'violation_description',
                          'mailing_address_str_number', 'mailing_address_str_name',
                          'non_us_str_code',
                          'ticket_issued_date', 'hearing_date', 'grafitti_status', 'violation_code']
    train_data.drop(list_to_remove_train, axis=1, inplace=True)
    train_data.drop(list_to_remove_all, axis=1, inplace=True)
    test_data.drop(list_to_remove_all, axis=1, inplace=True)

    # keep only the dummy columns that appear in both train and test
    train_features = train_data.columns.drop('compliance')
    train_features_set = set(train_features)
    for feature in set(train_features):
        if feature not in test_data:
            train_features_set.remove(feature)
    train_features = list(train_features_set)

    X_train = train_data[train_features]
    y_train = train_data.compliance
    X_test = test_data[train_features]
    scaler = MinMaxScaler()
    X_train_scaled = scaler.fit_transform(X_train)
    X_test_scaled = scaler.transform(X_test)

    clf = MLPClassifier(hidden_layer_sizes=[100, 10], alpha=5,
                        random_state=0, solver='lbfgs', verbose=0)
    # clf = DecisionTreeClassifier()
    clf.fit(X_train_scaled, y_train)
    test_proba = clf.predict_proba(X_test_scaled)[:, 1]
    test_df = pd.read_csv('~/data/test.csv', encoding="ISO-8859-1")
    test_df['compliance'] = test_proba
    test_df.set_index('ticket_id', inplace=True)
    return test_df.compliance
blight_model()
train_data = pd.read_csv("~/data/train2.csv",delimiter=",",encoding='ISO-8859-1')
train_data = train_data[np.isfinite(train_data['compliance'])]
train_data.info()
train_data.compliance.value_counts()
train_data = train_data[~(train_data.compliance.isnull())]
train_data.shape
addresses_data = pd.read_csv("~/data/addresses.csv",delimiter=",")
latlons_data = pd.read_csv("~/data/latlons.csv",delimiter=",")
add_loc_data = pd.merge(addresses_data,latlons_data,on="address")
train_data = pd.merge(train_data,add_loc_data,on="ticket_id")
add_loc_data.head()
train_data.head()
train_data = train_data.drop(["violation_street_number","violation_street_name","violation_zip_code",
"mailing_address_str_number","mailing_address_str_name","city","state","country",
"non_us_str_code","grafitti_status","address"],axis = 1)
train_data.info()
bad_zipcode = train_data.zip_code[train_data.zip_code.apply(lambda x: len(str(x)) != 5)].index
bad_zipcode
train_data.drop(bad_zipcode,axis = 0,inplace=True)
train_data.shape
train_data.zip_code = train_data.zip_code.astype("int64")
train_data = train_data[~(train_data.hearing_date.isnull())]
train_data.shape
train_data.payment_status.value_counts()
train_data.drop(["payment_date","collection_status","payment_status"],1,inplace=True)
train_data.drop(["payment_amount","balance_due","compliance_detail"],1,inplace= True)
train_data = train_data[~(train_data.lat.isnull())]
train_data.drop("violator_name",1,inplace=True)
train_data.drop("ticket_id",1,inplace=True)
train_data.info()
for col in train_data.columns[train_data.dtypes == "object"]:
print("The number of unique values for '{}' is {}.".format(col,train_data[col].nunique()))
train_data.disposition.value_counts()
test_data = pd.read_csv("test.csv",delimiter=",",encoding="ISO-8859-1")
for col in test_data.columns[test_data.dtypes == "object"]:
print("The number of unique values for '{}' is {}.".format(col,test_data[col].nunique()))
train_data.ticket_issued_date = pd.to_datetime(train_data.ticket_issued_date)
train_data.ticket_issued_date.head()
train_data["year"] = [date.isocalendar()[0] for date in train_data.ticket_issued_date]
train_data["dow"] = [date.isocalendar()[2] for date in train_data.ticket_issued_date]
train_data["woy"] = [date.isocalendar()[1] for date in train_data.ticket_issued_date]
train_data.hearing_date = pd.to_datetime(train_data.hearing_date)
train_data["h_year"] = [date.isocalendar()[0] for date in train_data.hearing_date]
train_data["h_woy"] = [date.isocalendar()[1] for date in train_data.hearing_date]
train_data["h_dow"] = [date.isocalendar()[2] for date in train_data.hearing_date]
train_data.drop(["ticket_issued_date","hearing_date"],1,inplace=True)
train_data.drop("violation_code",1,inplace=True)
import matplotlib.pyplot as plt
import seaborn as sns
corr = train_data.corr()
%matplotlib inline
plt.figure(figsize=(12,12))
sns.heatmap(corr,annot=True)
%matplotlib inline
plt.figure(figsize=(12,12))
sns.heatmap(train_data.drop(["h_year","fine_amount","judgment_amount"],1).corr(),annot=True)
train_data.drop(["h_year","fine_amount","judgment_amount"],1,inplace=True)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
for col in train_data.columns[train_data.dtypes == "object"]:
    train_data[col] = le.fit_transform(train_data[col])
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GridSearchCV
cv = StratifiedKFold(n_splits=5)
scaler = MinMaxScaler()
clfs = [LogisticRegression(),GaussianNB(),
RandomForestClassifier(random_state=0,n_estimators=100),AdaBoostClassifier()]
X = train_data.drop("compliance",1)
y = train_data.compliance
for clf in clfs:
    print(clf)
    for train, test in cv.split(X, y):
        X_train = scaler.fit_transform(X.iloc[train])
        X_test = scaler.transform(X.iloc[test])
        clf.fit(X_train, y.iloc[train])
        if hasattr(clf, "predict_proba"):
            prob_pos = clf.predict_proba(X_test)[:, 1]
        else:  # use decision function
            prob_pos = clf.decision_function(X_test)
            prob_pos = (prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
        fpr, tpr, _ = roc_curve(y.iloc[test], prob_pos)
        auc_score = auc(fpr, tpr)
        print(auc_score)
param_grid = {"n_estimators":[10,100],"learning_rate":[0.01,0.1,1]}
ada = AdaBoostClassifier()
gs = GridSearchCV(ada,param_grid=param_grid,cv=5,scoring="roc_auc")
gs.fit(X,y)
cv = StratifiedKFold(n_splits=10)
scaler = MinMaxScaler()
clfs = [GaussianNB(),AdaBoostClassifier(learning_rate=0.5, n_estimators=100)]
X = train_data.drop("compliance",1)
y = train_data.compliance
for clf in clfs:
    print(clf)
    sc = []
    for train, test in cv.split(X, y):
        X_train = scaler.fit_transform(X.iloc[train])
        X_test = scaler.transform(X.iloc[test])
        clf.fit(X_train, y.iloc[train])
        prob_pos = clf.predict_proba(X_test)[:, 1]
        fpr, tpr, _ = roc_curve(y.iloc[test], prob_pos)
        sc.append(auc(fpr, tpr))
    print(np.mean(sc))
My understanding is that get_dummies turns a variable that takes several distinct values into 0/1 indicator columns. For example, suppose Xiaoming has hats in three colors: yellow, red, and blue, and we code wearing the yellow hat as 1, the red hat as 2, and the blue hat as 3. The magnitudes of 1, 2, and 3 carry no meaning in themselves; they only distinguish the hat colors, so for the actual analysis the values 1, 2, and 3 need to be converted into 0/1 indicators, as the code below shows:
Author: dechuan. Link: https://www.jianshu.com/p/c324f4101785
http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing
import pandas as pd
xiaoming=pd.DataFrame([1,2,3],index=['yellow','red','blue'],columns=['hat'])
print(xiaoming)
hat_ranks=pd.get_dummies(xiaoming['hat'],prefix='hat')
print(hat_ranks.head())
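One practical follow-up when dummy-encoding train and test separately (as in the get_dummies calls earlier): categories present in only one frame produce mismatched columns, so the test dummies are usually reindexed onto the training columns. A small self-contained sketch with made-up categories:
import pandas as pd
# Align test dummy columns onto the train dummy columns, filling missing categories with 0.
train_d = pd.get_dummies(pd.Series(['A', 'B', 'C']), prefix='cat')
test_d = pd.get_dummies(pd.Series(['A', 'B']), prefix='cat')
test_d = test_d.reindex(columns=train_d.columns, fill_value=0)
print(test_d)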