
Value Error With Dimensions In Designing A Simple Autoencoder

Hi, I am trying out a simple autoencoder in Python 3.5 using the Keras library. The issue I face is: ValueError: Error when checking input: expected input_40 to have 2 dimensions, but

Solution 1:

The problem is here:

input_img = Input(shape=(65536,))

You told Keras the input to the network will have 65,536 dimensions, i.e. a vector of shape (samples, 65536), but your actual inputs have shape (samples, 256, 256, 3). An easy solution is to declare the real input shape and let the network perform the necessary reshaping:

from keras.layers import Input, Dense, Flatten, Reshape
from keras.models import Model

encoding_dim = 32  # assumed example value; use the value defined earlier in your code

input_img = Input(shape=(256, 256, 3))
flattened = Flatten()(input_img)
encoded = Dense(encoding_dim, activation='relu')(flattened)
decoded = Dense(256 * 256 * 3, activation='sigmoid')(encoded)
decoded = Reshape((256, 256, 3))(decoded)

autoencoder = Model(input_img, decoded)
encoder = Model(input_img, encoded)

# Standalone decoder: reuse the last Dense layer and the Reshape layer of the autoencoder
encoded_input = Input(shape=(encoding_dim,))
decoder_dense = autoencoder.layers[-2]
decoder_reshape = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_reshape(decoder_dense(encoded_input)))

Note that I added a Flatten layer to flatten the image first, and a Reshape layer to take the flattened output back to the shape (256, 256, 3). Because Reshape is now the last layer of the autoencoder, the standalone decoder has to chain the last Dense layer and the Reshape layer, as shown above.
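To check that the shapes now line up, here is a minimal training sketch; the array name x_train, the random example data, and the optimizer choice are assumptions made for illustration and are not part of the original question:

import numpy as np

# Assumed example data: 10 RGB images of shape (256, 256, 3) scaled to [0, 1]
x_train = np.random.rand(10, 256, 256, 3).astype('float32')

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=5, batch_size=2)

encoded_imgs = encoder.predict(x_train)       # shape (10, encoding_dim)
decoded_imgs = decoder.predict(encoded_imgs)  # shape (10, 256, 256, 3)

The fit call feeds the images in their original 4D shape; the Flatten and Reshape layers inside the model take care of the conversion to and from the flat vector.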
