
TensorFlow: Remembering the LSTM state for the next batch (stateful LSTM)

python

Given a trained LSTM model, I want to perform inference for single timesteps, i.e., seq_length = 1 in the example below. After each timestep, the internal LSTM state (memory and hidden state) needs to be remembered for the next "batch". At the very beginning of inference, the internal LSTM states init_c, init_h are computed from the given input. They are then stored in an LSTMStateTuple object that is passed to the LSTM. During training this state is updated at every timestep. For inference, however, I want the state to be preserved between batches, i.e. the initial states only need to be computed at the very beginning, and after that the LSTM state should be saved after each "batch" (n = 1).

I found this related StackOverflow question: Tensorflow, best way to save state in RNNs?. However, that approach only works with state_is_tuple=False, and TensorFlow will soon deprecate this behavior (see rnn_cell.py). Keras seems to have a nice wrapper that makes stateful LSTMs possible, but I don't know the best way to achieve this in TensorFlow. This issue on the TensorFlow GitHub is also related to my question: https://github.com/tensorflow/tensorflow/issues/2838

Does anyone have good suggestions for building a stateful LSTM model?

inputs  = tf.placeholder(tf.float32, shape=[None, seq_length, 84, 84], name="inputs")
targets = tf.placeholder(tf.float32, shape=[None, seq_length], name="targets")

num_lstm_layers = 2

with tf.variable_scope("LSTM") as scope:

    # 'initializer' is assumed to be defined elsewhere
    lstm_cell  = tf.nn.rnn_cell.LSTMCell(512, initializer=initializer, state_is_tuple=True)
    self.lstm  = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * num_lstm_layers, state_is_tuple=True)

    init_c = ...  # compute initial LSTM memory state from the 'inputs' placeholder
    init_h = ...  # compute initial LSTM hidden state from the 'inputs' placeholder
    # MultiRNNCell with state_is_tuple=True expects a tuple (not a list) of LSTMStateTuples
    self.state = tuple([tf.nn.rnn_cell.LSTMStateTuple(init_c, init_h)] * num_lstm_layers)

    outputs = []

    for step in range(seq_length):

        # share the LSTM weights across timesteps
        if step != 0:
            scope.reuse_variables()

        x_t = ...  # CNN features for this timestep, as input for the LSTM

        # LSTM step through time
        output, self.state = self.lstm(x_t, self.state)
        outputs.append(output)
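
As an aside: if the initial state does not have to be computed from the inputs, the cell's built-in zero_state helper already returns a correctly shaped tuple of LSTMStateTuples. A minimal sketch, not part of the original question (batch_size is assumed to be defined):

# Zero initial state: a tuple of LSTMStateTuple(c, h), one per layer,
# in exactly the structure self.lstm expects ('batch_size' assumed defined).
init_state = self.lstm.zero_state(batch_size, tf.float32)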


1 Answer


I found it easiest to save the entire state for all layers in a single placeholder.

import numpy as np
import tensorflow as tf

# Shape [num_layers, 2, batch_size, state_size]; the 2 holds (c, h) per layer.
init_state = np.zeros((num_layers, 2, batch_size, state_size))

...

state_placeholder = tf.placeholder(tf.float32, [num_layers, 2, batch_size, state_size])

Then unpack it and create a tuple of LSTMStateTuples before passing it to the native TensorFlow RNN API:

# Note: tf.unpack was renamed to tf.unstack in TensorFlow 0.12+.
l = tf.unpack(state_placeholder, axis=0)
rnn_tuple_state = tuple(
    [tf.nn.rnn_cell.LSTMStateTuple(l[idx][0], l[idx][1])
     for idx in range(num_layers)]
)

Then pass it to the RNN API as usual:

cell = tf.nn.rnn_cell.LSTMCell(state_size, state_is_tuple=True)
cell = tf.nn.rnn_cell.MultiRNNCell([cell]*num_layers, state_is_tuple=True)
outputs, state = tf.nn.dynamic_rnn(cell, x_input_batch, initial_state=rnn_tuple_state)

The state variable is then fed back through the placeholder for the next batch.
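
A minimal sketch of the resulting inference loop, assuming a session sess, a hypothetical get_next_batch() helper, and the tensors defined above. sess.run returns the state as nested tuples of arrays, which can be fed directly back into state_placeholder:

# Start from zeros, then carry the LSTM state across batches by feeding
# the previous run's state back into state_placeholder.
current_state = np.zeros((num_layers, 2, batch_size, state_size))
for _ in range(num_batches):  # 'num_batches' and 'get_next_batch' are assumptions
    x_batch = get_next_batch()
    batch_outputs, current_state = sess.run(
        [outputs, state],
        feed_dict={x_input_batch: x_batch,
                   state_placeholder: current_state})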
