This is the second post in my series on implementing RNN models with TensorFlow. Last time we built a bare-bones RNN on a toy example to explain the underlying principles; this time we combine RNNs with NLP and build a char-rnn language model. As in the introductory CNN post, we will directly analyze a heavily starred GitHub project, which lets us pick up good coding conventions while starting our RNN-NLP journey. Without further ado, let's introduce the char-rnn model we are going to implement.
The model was proposed by Andrej Karpathy; see his blog post The Unreasonable Effectiveness of Recurrent Neural Networks. His original code lives on GitHub but is written in Lua (Torch). sherjilozair later rewrote it in TensorFlow, and that rewrite is the code we study today. It uses a multi-layer RNN/LSTM, and as usual we will walk through it in the order: dataset, preprocessing, model construction, training, and result analysis.

Dataset and Preprocessing

The dataset used here is the Tiny Shakespeare corpus, a single plain txt file. Preprocessing mainly consists of collecting every character that appears in the dataset into a vocab, building an index ordered by character frequency, and then converting the data into integer indices. It also covers utilities such as splitting the data into batches. The functionality itself is simple; if I wrote it myself I would probably loop over the data several times to extract what I need, but the author implements it very concisely with collections.Counter plus the dict, map, and zip built-ins, which is worth learning from. Let's look at the code:
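Before reading the class, here is a minimal, self-contained sketch of the Counter/zip/dict idiom the author relies on. The toy string below is made up for illustration and stands in for the contents of input.txt:

```python
import collections

import numpy as np

data = "hello world"  # hypothetical toy corpus standing in for input.txt
counter = collections.Counter(data)                         # char -> count
count_pairs = sorted(counter.items(), key=lambda x: -x[1])  # most frequent first
chars, _ = zip(*count_pairs)                                # characters, ordered by frequency
vocab = dict(zip(chars, range(len(chars))))                 # char -> integer index
tensor = np.array(list(map(vocab.get, data)))               # corpus as an index array

print(len(chars))    # vocabulary size (8 distinct characters here)
print(tensor.size)   # one index per character of the corpus
```

The same five lines, applied to the Shakespeare text, produce the 65-character vocabulary used throughout the post.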

class TextLoader():
    def __init__(self, data_dir, batch_size, seq_length, encoding='utf-8'):
        self.data_dir = data_dir
        self.batch_size = batch_size
        self.seq_length = seq_length
        self.encoding = encoding
        #on the first run only input.txt exists; the other two files are produced by preprocessing
        input_file = os.path.join(data_dir, "input.txt")
        vocab_file = os.path.join(data_dir, "vocab.pkl")
        tensor_file = os.path.join(data_dir, "data.npy")
        #on the first run call preprocess; on later runs call load_preprocessed.
        if not (os.path.exists(vocab_file) and os.path.exists(tensor_file)):
            print("reading text file")
            self.preprocess(input_file, vocab_file, tensor_file)
        else:
            print("loading preprocessed files")
            self.load_preprocessed(vocab_file, tensor_file)
        self.create_batches()
        self.reset_batch_pointer()

    def preprocess(self, input_file, vocab_file, tensor_file):
        with codecs.open(input_file, "r", encoding=self.encoding) as f:
            data = f.read()
        #use Counter to tally the input: counter maps each character in data to its count
        counter = collections.Counter(data)
        #sort the counter entries so the most frequent characters come first
        count_pairs = sorted(counter.items(), key=lambda x: -x[1])
        #keep every character that appears in data; there are 65 here, so vocab_size=65
        self.chars, _ = zip(*count_pairs)
        self.vocab_size = len(self.chars)
        #chars is ordered by frequency; vocab maps each char to its rank, which makes converting data to indices easy
        self.vocab = dict(zip(self.chars, range(len(self.chars))))
        with open(vocab_file, 'wb') as f:
            #save chars
            cPickle.dump(self.chars, f)
        #convert every character in data to its index.
        self.tensor = np.array(list(map(self.vocab.get, data)))
        np.save(tensor_file, self.tensor)

    def load_preprocessed(self, vocab_file, tensor_file):
        #on later runs we can load the previously saved chars and tensor directly
        with open(vocab_file, 'rb') as f:
            self.chars = cPickle.load(f)
        self.vocab_size = len(self.chars)
        self.vocab = dict(zip(self.chars, range(len(self.chars))))
        self.tensor = np.load(tensor_file)
        self.num_batches = int(self.tensor.size / (self.batch_size *
                                                   self.seq_length))

    def create_batches(self):
        #first cut the data into batch_size pieces, then cut each piece by seq_length
        self.num_batches = int(self.tensor.size / (self.batch_size *
                                                   self.seq_length))

        if self.num_batches == 0:
            assert False, "Not enough data. Make seq_length and batch_size smaller."

        self.tensor = self.tensor[:self.num_batches * self.batch_size * self.seq_length]
        xdata = self.tensor
        #build the targets: we predict the next character from the current one, so y is simply x shifted by one character
        ydata = np.copy(self.tensor)
        ydata[:-1] = xdata[1:]
        ydata[-1] = xdata[0]
        #split the data. Suppose the total length is 10000, batch_size is 100 and seq_length is 10;
        # then num_batches=10, xdata becomes [100, 100] after the reshape, and splitting the second
        # dimension into 10 pieces yields a list of 10 batches, each of shape [100, 10]
        self.x_batches = np.split(xdata.reshape(self.batch_size, -1),
                                  self.num_batches, 1)
        self.y_batches = np.split(ydata.reshape(self.batch_size, -1),
                                  self.num_batches, 1)

    def next_batch(self):
        x, y = self.x_batches[self.pointer], self.y_batches[self.pointer]
        self.pointer += 1
        return x, y

    def reset_batch_pointer(self):
        self.pointer = 0
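The reshape-and-split logic in create_batches can be verified with a small numpy sketch. The sizes below are made-up toy values, not the ones used in training:

```python
import numpy as np

batch_size, seq_length = 4, 5
tensor = np.arange(2 * batch_size * seq_length)         # toy index tensor, 40 elements
num_batches = tensor.size // (batch_size * seq_length)  # 2 batches fit

tensor = tensor[:num_batches * batch_size * seq_length]
xdata = tensor
ydata = np.copy(tensor)
ydata[:-1] = xdata[1:]   # target is the input shifted left by one character
ydata[-1] = xdata[0]     # the last target wraps around to the first character

x_batches = np.split(xdata.reshape(batch_size, -1), num_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), num_batches, 1)

print(len(x_batches), x_batches[0].shape)  # 2 (4, 5)
```

Each element of y_batches is the matching x batch shifted by one position, which is exactly the next-character prediction target.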

Model Construction

In this part we build the RNN model with tf. Before reading the code, let's look at a few important tf functions and their parameters:
1. tf.contrib.rnn.BasicRNNCell/GRUCell/BasicLSTMCell/NASCell
Take BasicLSTMCell as an example:

__init__(
    num_units,
    forget_bias=1.0,
    input_size=None,
    state_is_tuple=True,
    activation=tf.tanh,
    reuse=None
)

num_units: the number of units (neurons) in the cell. Note that many tutorials draw an RNN cell as a single small circle, but it actually contains many neurons.
forget_bias: the bias added to the forget gate.
state_is_tuple: if True, accepted and returned states are 2-tuples holding c_state and m_state.
reuse: whether the cell's variables may be reused. If it is not True and this is not the first use, an error is raised.

zero_state(
    batch_size,
    dtype
)

Initializes the state to all zeros; the required argument is batch_size. The result depends on the cell's state_size: if state_size is an int, it returns a [batch_size x state_size] zero tensor; if state_size is a list, it returns a list of zero tensors of shape [batch_size x s] for each s in state_size.

2. The tf.contrib.rnn.DropoutWrapper function
Adds dropout to the RNNCell chosen above, to curb overfitting.

__init__(
    cell,
    input_keep_prob=1.0,
    output_keep_prob=1.0,
    state_keep_prob=1.0,
    variational_recurrent=False,
    input_size=None,
    dtype=None,
    seed=None
)

As the constructor shows, each cell can have dropout at three levels: input, output, and state.
cell: an RNNCell, e.g. one of the cell types listed above.
input_keep_prob: between 0 and 1, the probability of keeping each input unit
output_keep_prob: between 0 and 1, the probability of keeping each output unit
state_keep_prob: between 0 and 1, the probability of keeping each state unit; applied on top of the output dropout
variational_recurrent: if True, the same dropout mask is applied at every time step, and input_size must be set.
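To see what variational_recurrent means, here is a numpy illustration (not TF code) of the difference between sampling a fresh dropout mask per time step and reusing one mask across all steps; the sizes are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
num_steps, hidden = 6, 8
keep_prob = 0.5

# ordinary dropout: a fresh keep-mask at every time step
per_step_masks = rng.random((num_steps, hidden)) < keep_prob

# variational dropout: one mask sampled once, reused at every time step
shared = rng.random(hidden) < keep_prob
variational_masks = np.tile(shared, (num_steps, 1))

print(np.all(variational_masks == variational_masks[0]))  # True: same mask each step
```

The shared mask is what makes the recurrent dropout "variational": the same units are dropped along the entire unrolled sequence.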

3. The tf.contrib.rnn.MultiRNNCell function

__init__(
    cells,
    state_is_tuple=True
)

 1. cells: a list of RNNCells, one per layer of the stacked RNN.
 2. state_is_tuple: if True, accepted and returned states are n-tuples, where n is the number of layers.

zero_state(
    batch_size,
    dtype
)

4. The tf.contrib.legacy_seq2seq.rnn_decoder function

rnn_decoder(
    decoder_inputs,
    initial_state,
    cell,
    loop_function=None,
    scope=None
)

This function implements a simple RNN decoder. MultiRNNCell above builds the multi-layer cell for a single time step; rnn_decoder unrolls it over num_steps time steps.

  1. decoder_inputs: the input list, of length num_steps; each element is a 2-D tensor of shape [batch_size, input_size]
  2. initial_state: the initial state, a 2-D tensor of shape [batch_size x cell.state_size].
  3. cell: an RNNCell
  4. loop_function: if not None, this function is applied to the i-th output to produce the (i+1)-th input, in which case every element of decoder_inputs except the first is ignored. Its signature is loop(prev, i) = next, where prev has shape [batch_size x output_size], i is the step index, and next has shape [batch_size x input_size].

Reading the function's source helps deepen the understanding:

  with variable_scope.variable_scope(scope or "rnn_decoder"):
    state = initial_state
    outputs = []
    prev = None
      #iterate over the n time steps
    for i, inp in enumerate(decoder_inputs):
      #the two if statements below only take effect from the second time step onward
      if loop_function is not None and prev is not None:
        with variable_scope.variable_scope("loop_function", reuse=True):
          inp = loop_function(prev, i)
      if i > 0:
        variable_scope.get_variable_scope().reuse_variables()
      #the key part: call the cell repeatedly
      output, state = cell(inp, state)
      outputs.append(output)
      if loop_function is not None:
        prev = output
  return outputs, state
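Stripped of TF's variable-scope machinery, the control flow of rnn_decoder reduces to the plain Python loop below. toy_cell is a hypothetical stand-in for a real RNNCell, used only to make the loop executable:

```python
def toy_cell(inp, state):
    # hypothetical stand-in for cell(inp, state): state is a running sum of inputs
    new_state = state + inp
    return new_state, new_state  # (output, state)

def toy_rnn_decoder(decoder_inputs, initial_state, cell, loop_function=None):
    """Mirrors the control flow of legacy_seq2seq.rnn_decoder, minus variable scopes."""
    state = initial_state
    outputs, prev = [], None
    for i, inp in enumerate(decoder_inputs):
        if loop_function is not None and prev is not None:
            # feed the previous output back in; the provided input is ignored
            inp = loop_function(prev, i)
        output, state = cell(inp, state)
        outputs.append(output)
        if loop_function is not None:
            prev = output
    return outputs, state

print(toy_rnn_decoder([1, 2, 3], 0, toy_cell))  # ([1, 3, 6], 6)
```

Passing a loop_function makes every input after the first come from the previous output, which is exactly how sampling mode works in the Model class below.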

With the functions above introduced, the code below is easy to follow. In tf, the usual recipe for building an RNN model is RNNCell -> dropout -> MultiRNNCell -> unroll over time_steps (implemented here with legacy_seq2seq.rnn_decoder, though we could also write our own loop). Next, let's look at the model-construction part of model.py.

class Model():
    def __init__(self, args, training=True):
        self.args = args
        if not training:
            args.batch_size = 1
            args.seq_length = 1
        #the selectable rnn cell types
        if args.model == 'rnn':
            cell_fn = rnn.BasicRNNCell
        elif args.model == 'gru':
            cell_fn = rnn.GRUCell
        elif args.model == 'lstm':
            cell_fn = rnn.BasicLSTMCell
        elif args.model == 'nas':
            cell_fn = rnn.NASCell
        else:
            raise Exception("model type not supported: {}".format(args.model))

        cells = []
        #since this is a multi-layer RNN, the cell we unroll is a stack of cells;
        # a dropout layer is added depending on whether we are training and dropout is requested
        for _ in range(args.num_layers):
            cell = cell_fn(args.rnn_size)
            if training and (args.output_keep_prob < 1.0 or args.input_keep_prob < 1.0):
                cell = rnn.DropoutWrapper(cell,
                                          input_keep_prob=args.input_keep_prob,
                                          output_keep_prob=args.output_keep_prob)
            cells.append(cell)
        #MultiRNNCell takes the list of RNN cells defined above.
        # state_is_tuple defaults to True, meaning states are passed and returned as tuples; the False option will be removed in the future.
        self.cell = cell = rnn.MultiRNNCell(cells, state_is_tuple=True)

        self.input_data = tf.placeholder(
            tf.int32, [args.batch_size, args.seq_length])
        self.targets = tf.placeholder(
            tf.int32, [args.batch_size, args.seq_length])
        #define the initial state by calling cell.zero_state with batch_size
        self.initial_state = cell.zero_state(args.batch_size, tf.float32)

        with tf.variable_scope('rnnlm'):
            softmax_w = tf.get_variable("softmax_w",
                                        [args.rnn_size, args.vocab_size])
            softmax_b = tf.get_variable("softmax_b", [args.vocab_size])
        #map the input indices to embedding vectors
        embedding = tf.get_variable("embedding", [args.vocab_size, args.rnn_size])
        inputs = tf.nn.embedding_lookup(embedding, self.input_data)

        # dropout beta testing: double check which one should affect next line
        if training and args.output_keep_prob:
            inputs = tf.nn.dropout(inputs, args.output_keep_prob)
        #split the inputs along the time dimension
        inputs = tf.split(inputs, args.seq_length, 1)
        inputs = [tf.squeeze(input_, [1]) for input_ in inputs]

        def loop(prev, _):
            prev = tf.matmul(prev, softmax_w) + softmax_b
            prev_symbol = tf.stop_gradient(tf.argmax(prev, 1))
            return tf.nn.embedding_lookup(embedding, prev_symbol)
        #call rnn_decoder directly to build the RNN model
        outputs, last_state = legacy_seq2seq.rnn_decoder(inputs, self.initial_state, cell, loop_function=loop if not training else None, scope='rnnlm')
        output = tf.reshape(tf.concat(outputs, 1), [-1, args.rnn_size])

        #below: the loss, gradient computation, and optimizer definition
        self.logits = tf.matmul(output, softmax_w) + softmax_b
        self.probs = tf.nn.softmax(self.logits)
        loss = legacy_seq2seq.sequence_loss_by_example(
                [self.logits],
                [tf.reshape(self.targets, [-1])],
                [tf.ones([args.batch_size * args.seq_length])])
        with tf.name_scope('cost'):
            self.cost = tf.reduce_sum(loss) / args.batch_size / args.seq_length
        self.final_state = last_state
        self.lr = tf.Variable(0.0, trainable=False)
        tvars = tf.trainable_variables()
        #gradient clipping, common in RNNs, to prevent exploding gradients
        grads, _ = tf.clip_by_global_norm(tf.gradients(self.cost, tvars),
                args.grad_clip)
        with tf.name_scope('optimizer'):
            optimizer = tf.train.AdamOptimizer(self.lr)
        self.train_op = optimizer.apply_gradients(zip(grads, tvars))

        # instrument tensorboard
        tf.summary.histogram('logits', self.logits)
        tf.summary.histogram('loss', loss)
        tf.summary.scalar('train_loss', self.cost)
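The clip_by_global_norm step can be reproduced in numpy: take the norm of all gradients concatenated, and if it exceeds clip_norm, scale every gradient by clip_norm / global_norm. This is a sketch of the idea, with made-up toy gradients; TF's actual implementation differs in details:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # norm over the whole list of gradient tensors, as if concatenated into one vector
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > clip_norm:
        # rescale every gradient by the same factor so the global norm becomes clip_norm
        grads = [g * (clip_norm / global_norm) for g in grads]
    return grads, global_norm

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # toy gradients, global norm = 13
clipped, gn = clip_by_global_norm(grads, 5.0)
print(gn)  # 13.0
```

Because all gradients are scaled by one shared factor, clipping preserves the direction of the overall update while bounding its size.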

Model Training

Once the model is defined, all that remains is reading the data, building the model, and running the training loop. There is nothing new in this code, so we can read it directly:

def train(args):
    #read the data
    data_loader = TextLoader(args.data_dir, args.batch_size, args.seq_length)
    args.vocab_size = data_loader.vocab_size

    # check compatibility if training is continued from previously saved model
    if args.init_from is not None:
        # resume training from a previously saved model (you can skip this on a first read)
        assert os.path.isdir(args.init_from)," %s must be a path" % args.init_from
        assert os.path.isfile(os.path.join(args.init_from,"config.pkl")),"config.pkl file does not exist in path %s"%args.init_from
        assert os.path.isfile(os.path.join(args.init_from,"chars_vocab.pkl")),"chars_vocab.pkl file does not exist in path %s" % args.init_from
        ckpt = tf.train.get_checkpoint_state(args.init_from)
        assert ckpt, "No checkpoint found"
        assert ckpt.model_checkpoint_path, "No model path found in checkpoint"

        # open old config and check if models are compatible
        with open(os.path.join(args.init_from, 'config.pkl'), 'rb') as f:
            saved_model_args = cPickle.load(f)
        need_be_same = ["model", "rnn_size", "num_layers", "seq_length"]
        for checkme in need_be_same:
            assert vars(saved_model_args)[checkme]==vars(args)[checkme],"Command line argument and saved model disagree on '%s' "%checkme

        # open saved vocab/dict and check if vocabs/dicts are compatible
        with open(os.path.join(args.init_from, 'chars_vocab.pkl'), 'rb') as f:
            saved_chars, saved_vocab = cPickle.load(f)
        assert saved_chars==data_loader.chars, "Data and loaded model disagree on character set!"
        assert saved_vocab==data_loader.vocab, "Data and loaded model disagree on dictionary mappings!"

    if not os.path.isdir(args.save_dir):
        os.makedirs(args.save_dir)
    with open(os.path.join(args.save_dir, 'config.pkl'), 'wb') as f:
        cPickle.dump(args, f)
    with open(os.path.join(args.save_dir, 'chars_vocab.pkl'), 'wb') as f:
        cPickle.dump((data_loader.chars, data_loader.vocab), f)

    #build the model
    model = Model(args)

    with tf.Session() as sess:
        # write summaries
        summaries = tf.summary.merge_all()
        writer = tf.summary.FileWriter(
                os.path.join(args.log_dir, time.strftime("%Y-%m-%d-%H-%M-%S")))
        writer.add_graph(sess.graph)
        #initialize the variables
        sess.run(tf.global_variables_initializer())
        saver = tf.train.Saver(tf.global_variables())
        # restore model
        if args.init_from is not None:
            saver.restore(sess, ckpt.model_checkpoint_path)

        #start feeding batches and training
        for e in range(args.num_epochs):
            sess.run(tf.assign(model.lr,
                               args.learning_rate * (args.decay_rate ** e)))
            data_loader.reset_batch_pointer()
            state = sess.run(model.initial_state)
            for b in range(data_loader.num_batches):
                start = time.time()
                x, y = data_loader.next_batch()
                feed = {model.input_data: x, model.targets: y}
                for i, (c, h) in enumerate(model.initial_state):
                    feed[c] = state[i].c
                    feed[h] = state[i].h
                # run one training step, also fetching the summaries for tensorboard
                summ, train_loss, state, _ = sess.run([summaries, model.cost, model.final_state, model.train_op], feed)
                writer.add_summary(summ, e * data_loader.num_batches + b)

                end = time.time()
                print("{}/{} (epoch {}), train_loss = {:.3f}, time/batch = {:.3f}"
                      .format(e * data_loader.num_batches + b,
                              args.num_epochs * data_loader.num_batches,
                              e, train_loss, end - start))
                if (e * data_loader.num_batches + b) % args.save_every == 0\
                        or (e == args.num_epochs-1 and
                            b == data_loader.num_batches-1):
                    # save for the last result
                    checkpoint_path = os.path.join(args.save_dir, 'model.ckpt')
                    saver.save(sess, checkpoint_path,
                               global_step=e * data_loader.num_batches + b)
                    print("model saved to {}".format(checkpoint_path))
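One detail worth noting in the loop above is the learning-rate schedule: tf.assign sets lr = learning_rate * decay_rate ** e, i.e. an exponential decay per epoch. A quick sketch, assuming the repo's usual defaults of 0.002 and 0.97 (check your own arg parser values):

```python
# assumed defaults; substitute the values from your argument parser
learning_rate, decay_rate = 0.002, 0.97

# the per-epoch learning rate computed by sess.run(tf.assign(model.lr, ...))
lrs = [learning_rate * decay_rate ** e for e in range(5)]
print(lrs[0])  # 0.002
```

Each epoch therefore trains with a slightly smaller step size than the last, which helps the loss curve flatten out smoothly.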

Result Analysis

The code runs as-is; on a CPU it averages about three batches per second, and the resulting loss curve is quite smooth. Since only the loss is logged, for now we only show the final graph and the loss curve:


