
[Linear regression] Applying it to the Boston dataset

by 징여 2018. 7. 9.

Applying it to the Boston Dataset¶

In [36]:
from sklearn import datasets
boston_house_prices = datasets.load_boston()

# Print the keys of the loaded Boston dataset bunch
print(boston_house_prices.keys())
# Print the number of rows and columns of the data array
print(boston_house_prices.data.shape)
# Print the column (feature) names
print(boston_house_prices.feature_names)
dict_keys(['data', 'target', 'feature_names', 'DESCR'])
(506, 13)
['CRIM' 'ZN' 'INDUS' 'CHAS' 'NOX' 'RM' 'AGE' 'DIS' 'RAD' 'TAX' 'PTRATIO'
 'B' 'LSTAT']
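Besides data, the loaded bunch also carries the regression target (MEDV, the median home price in $1,000s) under boston_house_prices.target. A quick check, as a small sketch:

print(boston_house_prices.target.shape)  # one price per row of data, i.e. (506,)
print(boston_house_prices.target[:5])    # first few target values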
In [3]:
print(boston_house_prices.DESCR)
Boston House Prices dataset
===========================

Notes
------
Data Set Characteristics:  

    :Number of Instances: 506 

    :Number of Attributes: 13 numeric/categorical predictive
    
    :Median Value (attribute 14) is usually the target

    :Attribute Information (in order):
        - CRIM     per capita crime rate by town
        - ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
        - INDUS    proportion of non-retail business acres per town
        - CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
        - NOX      nitric oxides concentration (parts per 10 million)
        - RM       average number of rooms per dwelling
        - AGE      proportion of owner-occupied units built prior to 1940
        - DIS      weighted distances to five Boston employment centres
        - RAD      index of accessibility to radial highways
        - TAX      full-value property-tax rate per $10,000
        - PTRATIO  pupil-teacher ratio by town
        - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
        - LSTAT    % lower status of the population
        - MEDV     Median value of owner-occupied homes in $1000's

    :Missing Attribute Values: None

    :Creator: Harrison, D. and Rubinfeld, D.L.

This is a copy of UCI ML housing dataset.
http://archive.ics.uci.edu/ml/datasets/Housing


This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.

The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978.   Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980.   N.B. Various transformations are used in the table on
pages 244-261 of the latter.

The Boston house-price data has been used in many machine learning papers that address regression
problems.   
     
**References**

   - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
   - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
   - many more! (see http://archive.ics.uci.edu/ml/datasets/Housing)

Refining the Boston dataset into a DataFrame¶

In [11]:
import pandas as pd
data = pd.DataFrame(boston_house_prices.data)
data.tail()
Out[11]:
0 1 2 3 4 5 6 7 8 9 10 11 12
501 0.06263 0.0 11.93 0.0 0.573 6.593 69.1 2.4786 1.0 273.0 21.0 391.99 9.67
502 0.04527 0.0 11.93 0.0 0.573 6.120 76.7 2.2875 1.0 273.0 21.0 396.90 9.08
503 0.06076 0.0 11.93 0.0 0.573 6.976 91.0 2.1675 1.0 273.0 21.0 396.90 5.64
504 0.10959 0.0 11.93 0.0 0.573 6.794 89.3 2.3889 1.0 273.0 21.0 393.45 6.48
505 0.04741 0.0 11.93 0.0 0.573 6.030 80.8 2.5050 1.0 273.0 21.0 396.90 7.88

Renaming the columns with data.columns¶

In [15]:
data.columns = boston_house_prices.feature_names
data.tail()
Out[15]:
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT
501 0.06263 0.0 11.93 0.0 0.573 6.593 69.1 2.4786 1.0 273.0 21.0 391.99 9.67
502 0.04527 0.0 11.93 0.0 0.573 6.120 76.7 2.2875 1.0 273.0 21.0 396.90 9.08
503 0.06076 0.0 11.93 0.0 0.573 6.976 91.0 2.1675 1.0 273.0 21.0 396.90 5.64
504 0.10959 0.0 11.93 0.0 0.573 6.794 89.3 2.3889 1.0 273.0 21.0 393.45 6.48
505 0.04741 0.0 11.93 0.0 0.573 6.030 80.8 2.5050 1.0 273.0 21.0 396.90 7.88
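As a side note, the same result can be obtained in a single step by passing the feature names to the DataFrame constructor; a minimal alternative sketch (not the route taken in this post):

import pandas as pd
from sklearn import datasets

boston_house_prices = datasets.load_boston()

# Attach the column names while building the DataFrame
data = pd.DataFrame(boston_house_prices.data,
                    columns=boston_house_prices.feature_names)
data.tail()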
In [16]:
data['Price'] =  boston_house_prices.target
data.tail()
Out[16]:
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT Price
501 0.06263 0.0 11.93 0.0 0.573 6.593 69.1 2.4786 1.0 273.0 21.0 391.99 9.67 22.4
502 0.04527 0.0 11.93 0.0 0.573 6.120 76.7 2.2875 1.0 273.0 21.0 396.90 9.08 20.6
503 0.06076 0.0 11.93 0.0 0.573 6.976 91.0 2.1675 1.0 273.0 21.0 396.90 5.64 23.9
504 0.10959 0.0 11.93 0.0 0.573 6.794 89.3 2.3889 1.0 273.0 21.0 393.45 6.48 22.0
505 0.04741 0.0 11.93 0.0 0.573 6.030 80.8 2.5050 1.0 273.0 21.0 396.90 7.88 11.9

Drawing a scatter plot¶

In [25]:
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')

data.plot(kind='scatter', x ="RM", y="Price", figsize=(5, 5), color='black', xlim=(4,8), ylim=(10,45))
Out[25]:
<matplotlib.axes._subplots.AxesSubplot at 0x113ba5828>

Training the model¶

In [26]:
from sklearn import linear_model
In [27]:
linear_regression = linear_model.LinearRegression()
linear_regression.fit(X=pd.DataFrame(data['RM']), y=data['Price'])
prediction = linear_regression.predict(X=pd.DataFrame(data['RM']))
print('a value: ', linear_regression.intercept_)
print('b value: ',linear_regression.coef_)
a value:  -34.67062077643857
b value:  [9.10210898]
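In other words, the fitted line is Price ≈ -34.67 + 9.10 × RM: each additional room adds roughly $9,100 to the predicted median price. A minimal sanity-check sketch (the RM value 6.0 below is just an illustrative number, not taken from the post):

rm_value = 6.0  # hypothetical average number of rooms

# Evaluate the line by hand and via the fitted model; both should agree
manual = linear_regression.intercept_ + linear_regression.coef_[0] * rm_value
model_pred = linear_regression.predict(pd.DataFrame({'RM': [rm_value]}))[0]
print(manual, model_pred)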

Checking the goodness of fit¶

In [28]:
residuals = data['Price'] - prediction
residuals.describe()
Out[28]:
count    5.060000e+02
mean     2.134437e-15
std      6.609606e+00
min     -2.334590e+01
25%     -2.547477e+00
50%      8.976267e-02
75%      2.985532e+00
max      3.943314e+01
Name: Price, dtype: float64
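The residual mean is essentially zero, as expected for least squares. If you also want to inspect the residual distribution visually, a small optional sketch:

# Residuals from a reasonable fit should be centered around zero
residuals.plot(kind='hist', bins=30, figsize=(5, 4), color='black')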
In [29]:
SSE = (residuals**2).sum()
SST = ((data['Price']-data['Price'].mean())**2).sum()
R_squared = 1 - (SSE/SST)
print('R_squared: ', R_squared)
R_squared:  0.48352545599133423
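The same value can be cross-checked against scikit-learn's built-in metric; a short sketch:

from sklearn.metrics import r2_score

# Should match the R_squared computed manually above (about 0.4835)
print('r2_score: ', r2_score(data['Price'], prediction))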
In [31]:
data.plot(kind='scatter', x='RM', y='Price', figsize=(6,6), color='black',
         xlim=(4,8), ylim=(10, 45))

plt.plot(data['RM'], prediction, color='b')
Out[31]:
[<matplotlib.lines.Line2D at 0x11428eeb8>]

Evaluating performance¶

In [32]:
from sklearn.metrics import mean_squared_error
In [34]:
print('score: ', linear_regression.score(X=pd.DataFrame(data['RM']), y= data['Price']))
print('Mean Squared Error: ', mean_squared_error(prediction, data['Price']))
print('RMSE: ', mean_squared_error(prediction, data['Price'])**.5)
score:  0.4835254559913343
Mean Squared Error:  43.60055177116956
RMSE:  6.603071389222561
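Note that score simply returns the same R² computed in the previous section, and the RMSE is the root of the mean squared residual (about $6,600 here, since prices are in $1,000s). A small cross-check sketch; it comes out slightly below residuals.std(), which divides by n - 1 rather than n:

import numpy as np

# Root of the mean squared residual; equals the RMSE printed above
print('RMSE (from residuals): ', np.sqrt((residuals ** 2).mean()))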

