Flask + Machine Learning Model Integration - Building the Model
2020. 9. 20. 00:01 · Python/Flask
KNeighborsClassifier
In [32]:
import warnings
# Suppress unnecessary warning output.
warnings.filterwarnings('ignore')
1. Creating the Dataset
Step 1: Load the Dataset
In [6]:
from sklearn.datasets import load_iris
from IPython.core.display import display, HTML
# Widen the Jupyter notebook display area.
display(HTML("<style>.container {width:90% !important;}</style>"))
In [7]:
# Load the iris dataset.
iris = load_iris()
The returned object exposes the following keys:

- DESCR: a description of the dataset
- data: the feature data
- feature_names: the column names of the feature data
- target: the label data (numeric)
- target_names: the label names (strings)
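For a quick sanity check, these keys can be inspected directly; the loaded object behaves like a dict. A minimal sketch (the exact key list varies slightly by scikit-learn version):

# List the available keys and show the label names.
print(list(iris.keys()))
print(iris['target_names'])   # array(['setosa', 'versicolor', 'virginica'], dtype='<U10')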
In [14]:
print(iris['DESCR'])
.. _iris_dataset:

Iris plants dataset
--------------------

**Data Set Characteristics:**

    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica

    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken
from Fisher's paper. Note that it's the same as in R, but not as in the UCI
Machine Learning Repository, which has two wrong data points.

This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.

.. topic:: References

   - Fisher, R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments". IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al's AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...
In [15]:
# data
data = iris['data']
data[:5]
Out[15]:
array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       [4.6, 3.1, 1.5, 0.2],
       [5. , 3.6, 1.4, 0.2]])
In [16]:
# feature
feature_names = iris['feature_names']
feature_names
Out[16]:
['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']
- sepal: the outer, leaf-like part beneath the petals (the calyx)
- petal: the flower petal
In [17]:
# target(label)
target = iris['target']
target[:5]
Out[17]:
array([0, 0, 0, 0, 0])
Step 2: Build a DataFrame
In [9]:
import pandas as pd
In [12]:
df_iris = pd.DataFrame(iris['data'], columns=iris['feature_names'])
In [13]:
df_iris.head()
Out[13]:
|   | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) |
|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 |
In [24]:
df_iris['target'] = target
In [25]:
df_iris.head()
Out[25]:
|   | sepal length (cm) | sepal width (cm) | petal length (cm) | petal width (cm) | target |
|---|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | 0 |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | 0 |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | 0 |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | 0 |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | 0 |
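If you also want human-readable labels, the numeric target can be mapped through target_names. A small optional sketch (kept in a separate variable so the feature columns used below stay purely numeric):

# Map numeric labels (0/1/2) to species names for readability.
label_names = df_iris['target'].map(dict(enumerate(iris['target_names'])))
label_names.value_counts()   # 50 rows per species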
Step 3: Visualize the Data by Class
In [18]:
import matplotlib.pyplot as plt
import seaborn as sns
In [19]:
sns.scatterplot(x='sepal length (cm)', y='sepal width (cm)', hue=target, palette='muted', data=df_iris)
plt.title('Sepal')
plt.show()
In [20]:
sns.scatterplot(x='petal length (cm)', y='petal width (cm)', hue=target, palette='muted', data=df_iris)
plt.title('Petal')
plt.show()
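The two scatter plots show that the petal measurements separate the classes much more cleanly than the sepal measurements. To check every feature pair at once, seaborn's pairplot is a convenient extra (not part of the original walkthrough, just a sketch):

# Plot all pairwise feature relationships, colored by class.
sns.pairplot(df_iris, hue='target', palette='muted')
plt.show()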
2. Splitting into Training and Validation Data
In [21]:
from sklearn.model_selection import train_test_split
In [49]:
# Split into training (80%) and validation (20%) data.
x_train, x_valid, y_train, y_valid = train_test_split(
    df_iris.drop('target', axis=1), df_iris['target'],
    test_size=0.2, random_state=42)
In [50]:
# Check the shape of the training data.
x_train.shape, y_train.shape
Out[50]:
((120, 4), (120,))
In [51]:
# Check the shape of the validation data.
x_valid.shape, y_valid.shape
Out[51]:
((30, 4), (30,))
In [52]:
# Check the label distribution of the training set.
sns.countplot(y_train)
Out[52]:
<AxesSubplot:xlabel='target', ylabel='count'>
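The classes happen to stay fairly balanced with random_state=42, but if you want to guarantee that the 8:2 split preserves the 50/50/50 class ratio exactly, train_test_split accepts a stratify argument. A minimal sketch:

# Stratified split: each class keeps the same proportion in both subsets.
x_train_s, x_valid_s, y_train_s, y_valid_s = train_test_split(
    df_iris.drop('target', axis=1), df_iris['target'],
    test_size=0.2, random_state=42, stratify=df_iris['target'])
y_train_s.value_counts()   # 40 samples per class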
3. Creating the Model
In [85]:
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
from sklearn.metrics import accuracy_score
Step 1: Find the Optimal k
In [54]:
# Try k from 1 to 25.
k_range = range(1, 26)
scores = {}
score_list = []
for k in k_range:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(x_train, y_train)
    y_pred = knn.predict(x_valid)
    scores[k] = metrics.accuracy_score(y_valid, y_pred)
    print('k is %d, score is %f' % (k, scores[k]))
    score_list.append(scores[k])
k is 1, score is 1.000000
k is 2, score is 1.000000
k is 3, score is 1.000000
k is 4, score is 1.000000
k is 5, score is 1.000000
k is 6, score is 1.000000
k is 7, score is 0.966667
k is 8, score is 1.000000
k is 9, score is 1.000000
k is 10, score is 1.000000
k is 11, score is 1.000000
k is 12, score is 1.000000
k is 13, score is 1.000000
k is 14, score is 1.000000
k is 15, score is 1.000000
k is 16, score is 1.000000
k is 17, score is 1.000000
k is 18, score is 1.000000
k is 19, score is 1.000000
k is 20, score is 1.000000
k is 21, score is 1.000000
k is 22, score is 1.000000
k is 23, score is 1.000000
k is 24, score is 1.000000
k is 25, score is 1.000000
In [55]:
# Visualize the accuracy results.
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(k_range,score_list)
plt.xlabel('Value of k for KNN')
plt.ylabel('Testing Accuracy')
Out[55]:
Text(0, 0.5, 'Testing Accuracy')
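Rather than reading the best value off the chart, you can also pick it straight from the scores dict. A small sketch (on a tie, max returns the smallest k, since the dict was filled in ascending order):

# Choose the k with the highest validation accuracy.
best_k = max(scores, key=scores.get)
print('best k: %d, score: %f' % (best_k, scores[best_k]))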
Step 2: Instantiate the Model
In [68]:
# Use the default hyperparameters (n_neighbors=5).
model = KNeighborsClassifier()
In [74]:
model.get_params()
Out[74]:
{'algorithm': 'auto',
 'leaf_size': 30,
 'metric': 'minkowski',
 'metric_params': None,
 'n_jobs': None,
 'n_neighbors': 5,
 'p': 2,
 'weights': 'uniform'}
Step 3: Train the Model
In [75]:
model.fit(x_train, y_train)
Out[75]:
KNeighborsClassifier()
Step 4: Predict
In [79]:
y_pred = model.predict(x_valid)
y_pred[:5]
Out[79]:
array([1, 0, 2, 1, 1])
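KNeighborsClassifier can also report class probabilities, i.e. the fraction of the k nearest neighbors voting for each class; handy if the Flask API should later return a confidence value. A minimal sketch:

# Fraction of the 5 nearest neighbors that voted for each class.
y_proba = model.predict_proba(x_valid)
y_proba[:5]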
Step 5: Evaluate
In [87]:
print("accuracy_score: %.2f" % accuracy_score(y_valid, y_pred))
accuracy_score: 1.00
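Accuracy alone hides per-class behavior. A confusion matrix and classification report break the result down by species (a hedged extra step, not in the original notebook):

from sklearn.metrics import classification_report, confusion_matrix

# Per-class precision/recall plus the raw confusion counts.
print(confusion_matrix(y_valid, y_pred))
print(classification_report(y_valid, y_pred, target_names=iris['target_names']))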
4. Creating the Model File
In [89]:
import pickle
# Note: the 'model' directory must already exist before dumping.
pickle.dump(model, open('model/iris_prediction.pickle', 'wb'))
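To confirm the pickle round-trips correctly, and as a preview of what the Flask endpoint in the next post will do, load the file back and predict a single sample (a minimal sketch; the sample values are arbitrary):

# Reload the saved model and run one prediction, as the Flask API will.
loaded_model = pickle.load(open('model/iris_prediction.pickle', 'rb'))
loaded_model.predict([[5.1, 3.5, 1.4, 0.2]])   # -> array([0]), i.e. setosa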