K-Nearest Neighbors (KNN): Principles and Practice

1. Core Idea of the KNN Algorithm

K-Nearest Neighbors (KNN) is an instance-based supervised learning algorithm. Its core idea is that similar samples cluster together: compute the distance (for example, the Euclidean distance) between the test sample and every sample in the training set, select the K nearest neighbors, and assign the test sample to the class that receives the most votes among those K neighbors. A minimal from-scratch sketch of this procedure is given after the scikit-learn example below.

2. Python Code Example

from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load the data
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.3)

# Train the model
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Predict and evaluate
accuracy = knn.score(X_test, y_test)
print(f"Accuracy: {accuracy:.2f}")
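To make the mechanics of Section 1 explicit, here is a minimal from-scratch sketch of the same procedure: compute Euclidean distances, pick the K nearest training samples, and take a majority vote. The function name knn_predict, the toy data, and the choice of k=3 are illustrative assumptions, not part of the original post.

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # Euclidean distance from the query point to every training sample
    distances = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training samples
    nearest = np.argsort(distances)[:k]
    # Majority vote over the labels of those neighbors
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]

# Tiny usage example with two well-separated clusters (hypothetical data)
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([4.8, 5.1])))  # expected output: 1

scikit-learn's KNeighborsClassifier implements this same idea, with additional options such as distance weighting (weights) and alternative distance metrics (metric).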

February 20, 2024