
Missing Value Imputation In Python Using KNN

I have a dataset that looks like this:

1908  January    5.0   -1.4
1908  February   7.3    1.9
1908  March      6.2    0.3
1908  April      NaN    2.1
1908  May        NaN    7.7
1908  June       1

How can I fill in the missing (NaN) values using KNN imputation in Python?

Solution 1:

The fancyimpute package supports this kind of imputation, using the following API:

from fancyimpute import KNN    
# X is the complete data matrix
# X_incomplete has the same values as X except a subset have been replaced with NaN

# Use 3 nearest rows which have a feature to fill in each row's missing features
X_filled_knn = KNN(k=3).complete(X_incomplete)

Here are the imputations supported by this package (a short usage sketch follows the list):

•SimpleFill: Replaces missing entries with the mean or median of each column.

•KNN: Nearest-neighbor imputation that weights samples using the mean squared difference on features for which two rows both have observed data.

•SoftImpute: Matrix completion by iterative soft thresholding of SVD decompositions. Inspired by the softImpute package for R, which is based on Spectral Regularization Algorithms for Learning Large Incomplete Matrices by Mazumder et al.

•IterativeSVD: Matrix completion by iterative low-rank SVD decomposition. Should be similar to SVDimpute from Missing value estimation methods for DNA microarrays by Troyanskaya et al.

•MICE: Reimplementation of Multiple Imputation by Chained Equations.

•MatrixFactorization: Direct factorization of the incomplete matrix into low-rank U and V, with an L1 sparsity penalty on the elements of U and an L2 penalty on the elements of V. Solved by gradient descent.

•NuclearNormMinimization: Simple implementation of Exact Matrix Completion via Convex Optimization by Emmanuel Candes and Benjamin Recht using cvxpy. Too slow for large matrices.

•BiScaler: Iterative estimation of row/column means and standard deviations to get doubly normalized matrix. Not guaranteed to converge but works well in practice. Taken from Matrix Completion and Low-Rank SVD via Fast Alternating Least Squares.
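For illustration, here is a minimal sketch of how two of these imputers might be used. The matrix values are made up, and depending on your fancyimpute version you may need complete() instead of fit_transform() (see Solution 2 below):

import numpy as np
from fancyimpute import SimpleFill, SoftImpute

# Small made-up matrix with missing entries
X_incomplete = np.array([
    [1.0, 2.0, np.nan],
    [3.0, np.nan, 3.0],
    [np.nan, 6.0, 5.0],
    [8.0, 8.0, 7.0],
])

# SimpleFill: replace each missing entry with its column mean
X_filled_mean = SimpleFill(fill_method="mean").fit_transform(X_incomplete)

# SoftImpute: matrix completion by iterative soft thresholding of SVD decompositions
X_filled_soft = SoftImpute().fit_transform(X_incomplete)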


Solution 2:

fancyimpute's KNN imputation no longer supports the complete method suggested in the other answer; we now need to use fit_transform:

from fancyimpute import KNN

# X is the complete data matrix
# X_incomplete has the same values as X except a subset have been replaced with NaN
# Use 3 nearest rows which have a feature to fill in each row's missing features

X_filled_knn = KNN(k=3).fit_transform(X_incomplete)

Reference: https://github.com/iskandr/fancyimpute
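Note that fit_transform returns a plain NumPy array, so if your data lives in a pandas DataFrame you may want to wrap the result back into one. A minimal sketch (the DataFrame here is made up for illustration):

import numpy as np
import pandas as pd
from fancyimpute import KNN

# Made-up incomplete DataFrame for illustration
X_incomplete = pd.DataFrame(
    [[1.0, 2.0, np.nan], [3.0, 4.0, 3.0], [np.nan, 6.0, 5.0], [8.0, 8.0, 7.0]],
    columns=["a", "b", "c"],
)

# KNN(k=3).fit_transform returns a NumPy array; restore the index and column labels
X_filled_knn = pd.DataFrame(
    KNN(k=3).fit_transform(X_incomplete.to_numpy()),
    index=X_incomplete.index,
    columns=X_incomplete.columns,
)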


Solution 3:

scikit-learn v0.22 and later supports native KNN imputation via sklearn.impute.KNNImputer:

import numpy as np
from sklearn.impute import KNNImputer

# Each missing entry is filled using the 2 nearest neighbouring rows
X = [[1, 2, np.nan], [3, 4, 3], [np.nan, 6, 5], [8, 8, 7]]
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
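Applied to data shaped like the one in the question, a sketch might look like this (the column names tmax/tmin are assumptions, since the question does not give them):

import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Assumed reconstruction of the question's data; column names are guesses
df = pd.DataFrame({
    "year":  [1908, 1908, 1908, 1908, 1908],
    "month": ["January", "February", "March", "April", "May"],
    "tmax":  [5.0, 7.3, 6.2, np.nan, np.nan],
    "tmin":  [-1.4, 1.9, 0.3, 2.1, 7.7],
})

# KNNImputer works on numeric data only, so impute the numeric columns
# and write the filled values back into the DataFrame
numeric_cols = ["year", "tmax", "tmin"]
df[numeric_cols] = KNNImputer(n_neighbors=2).fit_transform(df[numeric_cols])
print(df)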

Solution 4:

This pull request to sklearn adds KNN support. You can get the code from here.

