python - Keras: ValueError: Input 0 is incompatible with layer

This question already has an answer here: Input Shape Error in Second-layer (but not first) of Keras LSTM (1 answer)
I am using Keras with Tensorflow as the backend, and the following model gives an incompatibility error:

model = Sequential()
model.add(LSTM(64, input_dim = 1))
model.add(Dropout(0.2))
model.add(LSTM(16))

The error produced is:

Traceback (most recent call last):
  File "train_lstm_model.py", line 36, in <module>
    model.add(LSTM(16))
  File "/home/***/anaconda2/lib/python2.7/site-packages/keras/models.py", line 332, in add
    output_tensor = layer(self.outputs[0])
  File "/home/***/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 529, in __call__
    self.assert_input_compatibility(x)
  File "/home/***/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 469, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2

How can I fix this?

Keras version: 1.2.2
Tensorflow version: 0.12

Best Answer

An LSTM layer expects input of shape (len_of_sequences, nb_of_features). The input shape you provided is only 1-dimensional, and that is the source of the error. The exact wording of the error message comes from the fact that the actual shape of the data includes the batch_size, so the data fed to the layer really has shape (batch_size, len_of_sequences, nb_of_features). Your data has shape (batch_size, 1), which is why the message complains about 3d vs 2d input.
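As an illustration of the shape fix (array names and sizes here are hypothetical), a single-feature series stored as a 2-D array of shape (batch_size, len_of_sequences) can be given the trailing feature axis that makes it the 3-D (batch_size, len_of_sequences, nb_of_features) layout the layer expects:

```python
import numpy as np

batch_size, len_of_sequences = 4, 10  # made-up sizes for the sketch
nb_of_features = 1                    # one value per timestep

# data as it often arrives: one value per timestep -> ndim=2
X = np.random.rand(batch_size, len_of_sequences)
print(X.ndim)   # 2 -> rejected by the LSTM layer

# add a feature axis -> ndim=3, the layout the LSTM layer expects
X3 = X.reshape(batch_size, len_of_sequences, nb_of_features)
print(X3.ndim)  # 3
```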

Moreover, the second layer has a similar problem: stacking a second LSTM requires the first one to return sequences. To make your first LSTM layer return sequences, change its definition to:

model.add(LSTM(64, input_shape=(len_of_sequences, nb_of_features), return_sequences=True))

or:

model.add(LSTM(64, input_dim=nb_of_features, input_length=len_of_sequences, return_sequences=True))
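To see what return_sequences changes, its effect on output shape can be mimicked in plain NumPy (a sketch with made-up numbers, not Keras internals): a recurrent layer computes one hidden vector per timestep; with return_sequences=True it emits all of them (ndim=3), otherwise only the last one (ndim=2), which is why the stacked lstm_2 above raised "expected ndim=3, found ndim=2".

```python
import numpy as np

batch_size, timesteps, units = 4, 10, 64  # hypothetical sizes

# pretend these are the hidden states the first LSTM computed, one per timestep
hidden = np.random.rand(batch_size, timesteps, units)

# return_sequences=True: every timestep's output is passed on -> ndim=3
seq_output = hidden

# return_sequences=False (the default): only the final timestep -> ndim=2
last_output = hidden[:, -1, :]

# a second, stacked LSTM needs ndim=3 input, so only seq_output would work
print(seq_output.ndim, last_output.ndim)  # 3 2
```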