API - Layers¶
To keep TensorLayer simple, we minimize the number of layer classes as much as possible, so we encourage you to use TensorFlow's own functions.
For example, although we provide a layer for local response normalization, we still suggest that users apply tf.nn.lrn
on network.outputs
directly.
More functions can be found in the TensorFlow API.
Understand Basic layer¶
All TensorLayer layers have a number of properties in common:
- layer.outputs : a Tensor, the outputs of the current layer.
- layer.all_params : a list of Tensors, all network variables in order.
- layer.all_layers : a list of Tensors, all network outputs in order.
- layer.all_drop : a dictionary of {placeholder : float}, the keeping probabilities of all noise layers.
All TensorLayer layers have a number of methods in common:
- layer.print_params() : print the network variable information in order (run after tl.layers.initialize_global_variables(sess)). Alternatively, print all variables with tl.layers.print_all_variables().
- layer.print_layers() : print the network layer information in order.
- layer.count_params() : return the number of parameters in the network.
A network is initialized by an input layer, after which further layers are stacked on top of it, as shown below; a network is itself a Layer
instance.
The most important properties of a network are network.all_params
, network.all_layers
and network.all_drop
.
all_params
is a list that stores references to all network parameters in order.
The following script defines a 3-layer network, so:
all_params
= [W1, b1, W2, b2, W_out, b_out]
To get specific variables, you can use network.all_params[2:3]
or get_variables_with_name()
.
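For example, a hedged sketch based on the network defined below (the train_only and printable arguments of get_variables_with_name are shown with their usual defaults):

W2 = network.all_params[2]   # W2, the weight matrix of the second dense layer
relu1_vars = tl.layers.get_variables_with_name('relu1', True, False)   # variables under the 'relu1' name scope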
all_layers
is a list that stores references to the outputs of all layers in order; for the following network:
all_layers
= [drop(?,784), relu(?,800), drop(?,800), relu(?,800), drop(?,800), identity(?,10)]
where ?
stands for any batch size. You can print the layer information and parameter information by
using network.print_layers()
and network.print_params()
.
To count the number of parameters in a network, run network.count_params()
.
import numpy as np
import tensorflow as tf
import tensorlayer as tl

sess = tf.InteractiveSession()

# define placeholders
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')

# define the network
network = tl.layers.InputLayer(x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
network = tl.layers.DenseLayer(network, n_units=800,
                               act=tf.nn.relu, name='relu1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop2')
network = tl.layers.DenseLayer(network, n_units=800,
                               act=tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3')
network = tl.layers.DenseLayer(network, n_units=10,
                               act=tl.activation.identity,
                               name='output_layer')

# define cost and training operation
y = network.outputs
y_op = tf.argmax(tf.nn.softmax(y), 1)
cost = tl.cost.cross_entropy(y, y_)

learning_rate = 0.0001
train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate, beta1=0.9, beta2=0.999,
                                  epsilon=1e-08, use_locking=False
                                  ).minimize(cost, var_list=train_params)

tl.layers.initialize_global_variables(sess)

network.print_params()
network.print_layers()
In addition, network.all_drop
is a dictionary which stores the keeping probabilities of all
noise layers. In the above network, these are the keeping probabilities of the dropout layers.
For training, enable all dropout layers as follows.
feed_dict = {x: X_train_a, y_: y_train_a}
feed_dict.update(network.all_drop)   # enable noise layers
loss, _ = sess.run([cost, train_op], feed_dict=feed_dict)
For evaluating and testing, disable all dropout layers as follows.
dp_dict = tl.utils.dict_to_one(network.all_drop)   # disable noise layers
feed_dict = {x: X_val, y_: y_val}
feed_dict.update(dp_dict)
print("   val loss: %f" % sess.run(cost, feed_dict=feed_dict))
print("   val acc: %f" % np.mean(y_val ==
                        sess.run(y_op, feed_dict=feed_dict)))
For more details, please read the MNIST examples on GitHub.
Understand Dense layer¶
Before creating your own TensorLayer layer, let's have a look at the Dense layer.
It creates a weight matrix and a bias vector if they do not exist, and then implements
the output expression.
At the end, since this is a layer with parameters, we also need to append the new parameters to all_params
.
class DenseLayer(Layer):
    """
    The :class:`DenseLayer` class is a fully connected layer.

    Parameters
    ----------
    layer : a :class:`Layer` instance
        The `Layer` class feeding into this layer.
    n_units : int
        The number of units of the layer.
    act : activation function
        The function that is applied to the layer activations.
    W_init : weights initializer
        The initializer for initializing the weight matrix.
    b_init : biases initializer
        The initializer for initializing the bias vector. If None, skip biases.
    W_init_args : dictionary
        The arguments for the weights tf.get_variable.
    b_init_args : dictionary
        The arguments for the biases tf.get_variable.
    name : a string or None
        An optional name to attach to this layer.
    """
    def __init__(
        self,
        layer = None,
        n_units = 100,
        act = tf.nn.relu,
        W_init = tf.truncated_normal_initializer(stddev=0.1),
        b_init = tf.constant_initializer(value=0.0),
        W_init_args = {},
        b_init_args = {},
        name = 'dense_layer',
    ):
        Layer.__init__(self, name=name)
        self.inputs = layer.outputs
        if self.inputs.get_shape().ndims != 2:
            raise Exception("The input dimension must be rank 2")
        n_in = int(self.inputs.get_shape()[-1])
        self.n_units = n_units
        print("  tensorlayer:Instantiate DenseLayer %s: %d, %s" % (self.name, self.n_units, act))
        with tf.variable_scope(name) as vs:
            W = tf.get_variable(name='W', shape=(n_in, n_units), initializer=W_init, **W_init_args)
            if b_init:
                b = tf.get_variable(name='b', shape=(n_units,), initializer=b_init, **b_init_args)
                self.outputs = act(tf.matmul(self.inputs, W) + b)
            else:
                self.outputs = act(tf.matmul(self.inputs, W))
        # Hint : list() and dict() pass by value (shallow copy).
        self.all_layers = list(layer.all_layers)
        self.all_params = list(layer.all_params)
        self.all_drop = dict(layer.all_drop)
        self.all_layers.extend([self.outputs])
        if b_init:
            self.all_params.extend([W, b])
        else:
            self.all_params.extend([W])
Your layer¶
A simple layer¶
To implement a custom layer in TensorLayer, you have to write a Python class
that subclasses Layer and implements the outputs
expression.
The following is an example implementation of a layer that multiplies its input by 2:
class DoubleLayer(Layer):
    def __init__(
        self,
        layer = None,
        name = 'double_layer',
    ):
        Layer.__init__(self, name=name)
        self.inputs = layer.outputs
        # the output is simply twice the input
        self.outputs = self.inputs * 2
        # copy the bookkeeping lists from the previous layer, then register
        # this layer's outputs; there are no parameters to append
        self.all_layers = list(layer.all_layers)
        self.all_params = list(layer.all_params)
        self.all_drop = dict(layer.all_drop)
        self.all_layers.extend([self.outputs])
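A minimal usage sketch (assuming the DoubleLayer class above is defined in scope); the custom layer stacks like any built-in layer:

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
network = tl.layers.InputLayer(x, name='input_layer')
network = DoubleLayer(network, name='double_layer')
# network.outputs now holds a Tensor with twice the input values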
Modifying Pre-train Behaviour¶
Greedy layer-wise pretraining is an important task for deep neural network initialization, and there are many kinds of pre-training methods for different network architectures and applications.
For example, the pre-training of a Vanilla Sparse Autoencoder can be implemented with a KL divergence penalty (for sigmoid activations) as in the following code, while for a Deep Rectifier Network the sparsity can be implemented with an L1 regularization of the activation outputs, as sketched after the snippet.
# Vanilla Sparse Autoencoder
beta = 4
rho = 0.15
p_hat = tf.reduce_mean(activation_out, reduction_indices=0)
KLD = beta * tf.reduce_sum(rho * tf.log(tf.div(rho, p_hat))
        + (1 - rho) * tf.log((1 - rho) / (tf.subtract(float(1), p_hat))))
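For the rectifier case, a hedged sketch of the L1 activation penalty (the 0.001 coefficient is an assumed hyper-parameter, not a TensorLayer default):

# Deep Rectifier Network: encourage sparsity by penalizing the
# absolute activation values (coefficient is an assumed value)
L1_a = 0.001 * tf.reduce_mean(tf.reduce_sum(tf.abs(activation_out), 1))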
Since there are many pre-training methods, TensorLayer provides a simple way to modify or design your
own pre-training method. For the autoencoder, TensorLayer uses ReconLayer.__init__()
to define the reconstruction layer and the cost function; to define your own cost
function, simply modify self.cost
in ReconLayer.__init__()
.
To create your own cost expression, please read TensorFlow Math.
By default, ReconLayer
only updates the weights and biases of the previous layer by using self.train_params = self.all_params[-4:]
, where the 4
parameters are [W_encoder, b_encoder, W_decoder, b_decoder]
:
W_encoder, b_encoder
belong to the previous DenseLayer, while W_decoder, b_decoder
belong to this ReconLayer.
In addition, if you want to update the parameters of the previous 2 layers at the same time, simply modify [-4:]
to [-6:]
.
ReconLayer.__init__(...):
    ...
    self.train_params = self.all_params[-4:]
    ...
    self.cost = mse + L1_a + L2_w
Layer list¶
get_variables_with_name(name[, train_only, …]) : Get a variable list by a given name scope.
get_layers_with_name([network, name, printable]) : Get a layer list in a network by a given name scope.
set_name_reuse([enable]) : Enable or disable reuse of layer names.
print_all_variables([train_only]) : Print all trainable and non-trainable variables without tl.layers.initialize_global_variables(sess).
initialize_global_variables([sess]) : Execute sess.run(tf.global_variables_initializer()) for TF12+ or sess.run(tf.initialize_all_variables()) for TF11.
Layer([inputs, name]) : The Layer class represents a single layer of a neural network.
InputLayer([inputs, n_features, name]) : The InputLayer class is the starting layer of a neural network.
Word2vecEmbeddingInputlayer([inputs, …]) : The Word2vecEmbeddingInputlayer class is a fully connected layer for word embedding.
EmbeddingInputlayer([inputs, …]) : The EmbeddingInputlayer class is a fully connected layer for word embedding.
DenseLayer([layer, n_units, act, W_init, …]) : The DenseLayer class is a fully connected layer.
ReconLayer([layer, x_recon, name, n_units, act]) : The ReconLayer class is a reconstruction layer based on DenseLayer, used to pre-train a DenseLayer.
DropoutLayer([layer, keep, is_fix, …]) : The DropoutLayer class is a noise layer which randomly sets some values to zero with a given keeping probability.
GaussianNoiseLayer([layer, mean, stddev, …]) : The GaussianNoiseLayer class is a noise layer that adds normally distributed noise to the activation.
DropconnectDenseLayer([layer, keep, …]) : The DropconnectDenseLayer class is a DenseLayer with DropConnect behaviour, which randomly removes connections between this layer and the previous layer with a given keeping probability.
Conv1dLayer([layer, act, shape, strides, …]) : The Conv1dLayer class is a 1D CNN layer, see tf.nn.conv1d.
Conv2dLayer([layer, act, shape, strides, …]) : The Conv2dLayer class is a 2D CNN layer, see tf.nn.conv2d.
DeConv2dLayer([layer, act, shape, …]) : The DeConv2dLayer class is a deconvolutional 2D layer, see tf.nn.conv2d_transpose.
Conv3dLayer([layer, act, shape, strides, …]) : The Conv3dLayer class is a 3D CNN layer, see tf.nn.conv3d.
DeConv3dLayer([layer, act, shape, …]) : The DeConv3dLayer class is a deconvolutional 3D layer, see tf.nn.conv3d_transpose.
PoolLayer([layer, ksize, strides, padding, …]) : The PoolLayer class is a pooling layer; you can choose tf.nn.max_pool and tf.nn.avg_pool for 2D, or tf.nn.max_pool3d() and tf.nn.avg_pool3d() for 3D.
PadLayer([layer, paddings, mode, name]) : The PadLayer class is a padding layer for any modes and dimensions, see tf.pad.
UpSampling2dLayer([layer, size, is_scale, …]) : The UpSampling2dLayer class is an up-sampling 2D layer, see tf.image.resize_images.
DownSampling2dLayer([layer, size, is_scale, …]) : The DownSampling2dLayer class is a down-sampling 2D layer, see tf.image.resize_images.
AtrousConv2dLayer([layer, n_filter, …]) : The AtrousConv2dLayer class is an atrous convolution (a.k.a. convolution with holes or dilated convolution) 2D layer, see tf.nn.atrous_conv2d.
LocalResponseNormLayer([layer, …]) : The LocalResponseNormLayer class is for local response normalization, see tf.nn.local_response_normalization.
Conv2d(net[, n_filter, filter_size, …]) : Wrapper for Conv2dLayer; if you don't understand how to use Conv2dLayer, this function may be easier.
DeConv2d(net[, n_out_channel, filter_size, …]) : Wrapper for DeConv2dLayer; if you don't understand how to use DeConv2dLayer, this function may be easier.
MaxPool2d(net[, filter_size, strides, …]) : Wrapper for PoolLayer.
MeanPool2d(net[, filter_size, strides, …]) : Wrapper for PoolLayer.
BatchNormLayer([layer, decay, epsilon, act, …]) : The BatchNormLayer class is a normalization layer, see tf.nn.batch_normalization and tf.nn.moments.
TimeDistributedLayer([layer, layer_class, …]) : The TimeDistributedLayer class applies a function to every timestep of the input tensor.
RNNLayer([layer, cell_fn, cell_init_args, …]) : The RNNLayer class is an RNN layer; you can implement vanilla RNN, LSTM and GRU with it.
BiRNNLayer([layer, cell_fn, cell_init_args, …]) : The BiRNNLayer class is a bidirectional RNN layer.
advanced_indexing_op(input, index) : Advanced indexing for sequences; returns the outputs at the given sequence lengths.
retrieve_seq_length_op(data) : An op to compute the length of a sequence from input of shape [batch_size, n_step(max), n_features]; it can be used when the features of the padding (on the right-hand side) are all zeros.
retrieve_seq_length_op2(data) : An op to compute the length of a sequence from input of shape [batch_size, n_step(max)]; it can be used when the features of the padding (on the right-hand side) are all zeros.
DynamicRNNLayer([layer, cell_fn, …]) : The DynamicRNNLayer class is a dynamic RNN layer, see tf.nn.dynamic_rnn.
BiDynamicRNNLayer([layer, cell_fn, …]) : The BiDynamicRNNLayer class is a bidirectional dynamic RNN layer; you can implement vanilla RNN, LSTM and GRU with it.
Seq2Seq([net_encode_in, net_decode_in, …]) : The Seq2Seq class is a simple DynamicRNNLayer-based seq2seq layer; both encoder and decoder are DynamicRNNLayer. For network details, see the Model and Sequence to Sequence Learning with Neural Networks.
PeekySeq2Seq([net_encode_in, net_decode_in, …]) : Waiting for contribution.
AttentionSeq2Seq([net_encode_in, …]) : Waiting for contribution.
FlattenLayer([layer, name]) : The FlattenLayer class is a layer which reshapes high-dimension input to a vector.
ReshapeLayer([layer, shape, name]) : The ReshapeLayer class is a layer which reshapes the tensor.
LambdaLayer([layer, fn, fn_args, name]) : The LambdaLayer class is a layer which is able to use the provided function.
ConcatLayer([layer, concat_dim, name]) : The ConcatLayer class is a layer which concatenates (merges) two or more DenseLayer into a single DenseLayer.
ElementwiseLayer([layer, combine_fn, name]) : The ElementwiseLayer class combines multiple Layer which have the same output shapes by a given element-wise operation.
ExpandDimsLayer([layer, axis, name]) : The ExpandDimsLayer class inserts a dimension of 1 into a tensor's shape, see tf.expand_dims().
TileLayer([layer, multiples, name]) : The TileLayer class constructs a tensor by tiling a given tensor, see tf.tile().
SlimNetsLayer([layer, slim_layer, …]) : The SlimNetsLayer class can be used to merge all TF-Slim nets into TensorLayer.
KerasLayer([layer, keras_layer, keras_args, …]) : The KerasLayer class can be used to merge all Keras layers into TensorLayer.
PReluLayer([layer, channel_shared, a_init, …]) : The PReluLayer class is a parametric rectified linear layer.
MultiplexerLayer([layer, name]) : The MultiplexerLayer selects one of several inputs and forwards the selected input to the output, see tutorial_mnist_multiplexer.py.
EmbeddingAttentionSeq2seqWrapper(…[, …]) : Sequence-to-sequence model with attention and for multiple buckets.
flatten_reshape(variable[, name]) : Reshapes high-dimension input to a vector.
clear_layers_name() : Clear all layer names in set_keep['_layers_name_list'], enabling layer name reuse.
initialize_rnn_state(state) : Return the initialized RNN state.
list_remove_repeat([l]) : Remove repeated items from a list and return the processed list.
Name Scope and Sharing Parameters¶
These functions help you reuse parameters across different inferences (graphs) and get a list of parameters by a given name. For more about TensorFlow parameter sharing, click here.
Get variables with name¶
Get layers with name¶
Enable layer name reuse¶
tensorlayer.layers.set_name_reuse(enable=True)[source]¶
Enable or disable reuse of layer names. By default, each layer must have a unique name. When you want two or more input placeholders (inferences) to share the same model parameters, you need to enable layer name reuse, which allows the parameters to share the same name scope.
Parameters:
- enable : boolean, enable name reuse.
Examples
>>> def embed_seq(input_seqs, is_train, reuse):
>>>     with tf.variable_scope("model", reuse=reuse):
>>>         tl.layers.set_name_reuse(reuse)
>>>         network = tl.layers.EmbeddingInputlayer(
...             inputs = input_seqs,
...             vocabulary_size = vocab_size,
...             embedding_size = embedding_size,
...             name = 'e_embedding')
>>>         network = tl.layers.DynamicRNNLayer(network,
...             cell_fn = tf.nn.rnn_cell.BasicLSTMCell,
...             n_hidden = embedding_size,
...             dropout = (0.7 if is_train else None),
...             initializer = w_init,
...             sequence_length = tl.layers.retrieve_seq_length_op2(input_seqs),
...             return_last = True,
...             name = 'e_dynamicrnn')
>>>         return network
>>>
>>> net_train = embed_seq(t_caption, is_train=True, reuse=False)
>>> net_test = embed_seq(t_caption, is_train=False, reuse=True)
- see tutorial_ptb_lstm.py for an example.
Print variables¶
Basic layer¶
class tensorlayer.layers.Layer(inputs=None, name='layer')[source]¶
The Layer class represents a single layer of a neural network. It should be subclassed when implementing new types of layers. Because each layer can keep track of the layer(s) feeding into it, a network's output Layer instance can double as a handle to the full network.
Parameters:
- inputs : a Layer instance
The Layer class feeding into this layer.
- name : a string or None
An optional name to attach to this layer.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Input layer¶
class tensorlayer.layers.InputLayer(inputs=None, n_features=None, name='input_layer')[source]¶
The InputLayer class is the starting layer of a neural network.
Parameters:
- inputs : a TensorFlow placeholder
The input tensor data.
- name : a string or None
An optional name to attach to this layer.
- n_features : an int
The number of features. If not specified, it assumes the input has the shape [batch_size, n_features] and selects the second element as n_features. It is used to specify the matrix size of the next layer. If a convolutional layer is applied after the InputLayer, n_features is not important.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Word Embedding Input layer¶
Word2vec layer for training¶
class tensorlayer.layers.Word2vecEmbeddingInputlayer(inputs=None, train_labels=None, vocabulary_size=80000, embedding_size=200, num_sampled=64, nce_loss_args={}, E_init=<tensorflow.python.ops.init_ops.RandomUniform object>, E_init_args={}, nce_W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, nce_W_init_args={}, nce_b_init=<tensorflow.python.ops.init_ops.Constant object>, nce_b_init_args={}, name='word2vec_layer')[source]¶
The Word2vecEmbeddingInputlayer class is a fully connected layer for word embedding. Words are input as integer indices; the output is the embedded word vector.
Parameters:
- inputs : placeholder
For word inputs, in integer index format.
- train_labels : placeholder
For word labels, in integer index format.
- vocabulary_size : int
The size of the vocabulary, i.e. the number of words.
- embedding_size : int
The number of embedding dimensions.
- num_sampled : int
The number of negative examples for NCE loss.
- nce_loss_args : a dictionary
The arguments for tf.nn.nce_loss().
- E_init : embedding initializer
The initializer for initializing the embedding matrix.
- E_init_args : a dictionary
The arguments for the embedding initializer.
- nce_W_init : NCE decoder weights initializer
The initializer for initializing the NCE decoder weight matrix.
- nce_W_init_args : a dictionary
The arguments for initializing the NCE decoder weight matrix.
- nce_b_init : NCE decoder biases initializer
The initializer for tf.get_variable() of the NCE decoder bias vector.
- nce_b_init_args : a dictionary
The arguments for tf.get_variable() of the NCE decoder bias vector.
- name : a string or None
An optional name to attach to this layer.
References
Examples
- Without TensorLayer : see tensorflow/examples/tutorials/word2vec/word2vec_basic.py
>>> train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
>>> train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
>>> embeddings = tf.Variable(
...     tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
>>> embed = tf.nn.embedding_lookup(embeddings, train_inputs)
>>> nce_weights = tf.Variable(
...     tf.truncated_normal([vocabulary_size, embedding_size],
...         stddev=1.0 / math.sqrt(embedding_size)))
>>> nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
>>> cost = tf.reduce_mean(
...     tf.nn.nce_loss(weights=nce_weights, biases=nce_biases,
...         inputs=embed, labels=train_labels,
...         num_sampled=num_sampled, num_classes=vocabulary_size,
...         num_true=1))
- With TensorLayer : see tutorial_word2vec_basic.py
>>> train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
>>> train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
>>> emb_net = tl.layers.Word2vecEmbeddingInputlayer(
...     inputs = train_inputs,
...     train_labels = train_labels,
...     vocabulary_size = vocabulary_size,
...     embedding_size = embedding_size,
...     num_sampled = num_sampled,
...     name = 'word2vec_layer')
>>> cost = emb_net.nce_cost
>>> train_params = emb_net.all_params
>>> train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(
...     cost, var_list=train_params)
>>> normalized_embeddings = emb_net.normalized_embeddings
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Embedding Input layer¶
class tensorlayer.layers.EmbeddingInputlayer(inputs=None, vocabulary_size=80000, embedding_size=200, E_init=<tensorflow.python.ops.init_ops.RandomUniform object>, E_init_args={}, name='embedding_layer')[source]¶
The EmbeddingInputlayer class is a fully connected layer for word embedding. Words are input as integer indices; the output is the embedded word vector. If you have a pre-trained matrix, you can assign it to this layer. To train a word embedding matrix, you can use the Word2vecEmbeddingInputlayer class.
Note that this embedding matrix should not be updated during training.
Parameters:
- inputs : placeholder
For word inputs, in integer index format; a 2D tensor : [batch_size, num_steps(num_words)].
- vocabulary_size : int
The size of the vocabulary, i.e. the number of words.
- embedding_size : int
The number of embedding dimensions.
- E_init : embedding initializer
The initializer for initializing the embedding matrix.
- E_init_args : a dictionary
The arguments for the embedding initializer.
- name : a string or None
An optional name to attach to this layer.
Examples
>>> vocabulary_size = 50000
>>> embedding_size = 200
>>> model_file_name = "model_word2vec_50k_200"
>>> batch_size = None
...
>>> all_var = tl.files.load_npy_to_any(name=model_file_name+'.npy')
>>> data = all_var['data']; count = all_var['count']
>>> dictionary = all_var['dictionary']
>>> reverse_dictionary = all_var['reverse_dictionary']
>>> tl.files.save_vocab(count, name='vocab_'+model_file_name+'.txt')
>>> del all_var, data, count
...
>>> load_params = tl.files.load_npz(name=model_file_name+'.npz')
>>> x = tf.placeholder(tf.int32, shape=[batch_size])
>>> y_ = tf.placeholder(tf.int32, shape=[batch_size, 1])
>>> emb_net = tl.layers.EmbeddingInputlayer(
...     inputs = x,
...     vocabulary_size = vocabulary_size,
...     embedding_size = embedding_size,
...     name = 'embedding_layer')
>>> tl.layers.initialize_global_variables(sess)
>>> tl.files.assign_params(sess, [load_params[0]], emb_net)
>>> word = b'hello'
>>> word_id = dictionary[word]
>>> print('word_id:', word_id)
... 6428
...
>>> words = [b'i', b'am', b'hao', b'dong']
>>> word_ids = tl.files.words_to_word_ids(words, dictionary)
>>> context = tl.files.word_ids_to_words(word_ids, reverse_dictionary)
>>> print('word_ids:', word_ids)
... [72, 1226, 46744, 20048]
>>> print('context:', context)
... [b'i', b'am', b'hao', b'dong']
...
>>> vector = sess.run(emb_net.outputs, feed_dict={x : [word_id]})
>>> print('vector:', vector.shape)
... (1, 200)
>>> vectors = sess.run(emb_net.outputs, feed_dict={x : word_ids})
>>> print('vectors:', vectors.shape)
... (4, 200)
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Dense layer¶
Dense layer¶
class tensorlayer.layers.DenseLayer(layer=None, n_units=100, act=<function identity>, W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='dense_layer')[source]¶
The DenseLayer class is a fully connected layer.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- n_units : int
The number of units of the layer.
- act : activation function
The function that is applied to the layer activations.
- W_init : weights initializer
The initializer for initializing the weight matrix.
- b_init : biases initializer or None
The initializer for initializing the bias vector. If None, skip biases.
- W_init_args : dictionary
The arguments for the weights tf.get_variable.
- b_init_args : dictionary
The arguments for the biases tf.get_variable.
- name : a string or None
An optional name to attach to this layer.
Notes
If the input to this layer has more than two axes, you need to flatten it first by using FlattenLayer.
Examples
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(
...     network,
...     n_units=800,
...     act = tf.nn.relu,
...     W_init = tf.truncated_normal_initializer(stddev=0.1),
...     name = 'relu_layer')
>>> # Without TensorLayer, you can do as follows.
>>> W = tf.Variable(
...     tf.random_uniform([n_in, n_units], -1.0, 1.0), name='W')
>>> b = tf.Variable(tf.zeros(shape=[n_units]), name='b')
>>> y = tf.nn.relu(tf.matmul(inputs, W) + b)
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Reconstruction layer for Autoencoder¶
class tensorlayer.layers.ReconLayer(layer=None, x_recon=None, name='recon_layer', n_units=784, act=<function softplus>)[source]¶
The ReconLayer class is a reconstruction layer based on DenseLayer, used to pre-train a DenseLayer.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- x_recon : tensorflow variable
The variables used for reconstruction.
- name : a string or None
An optional name to attach to this layer.
- n_units : int
The number of units of the layer; it should be equal to the dimension of x_recon.
- act : activation function
The activation function that is applied to the reconstruction layer. Normally, for a sigmoid layer, the reconstruction activation is sigmoid; for a rectifying layer, the reconstruction activation is softplus.
Notes
The input layer should be a DenseLayer or a layer that has only one axis. You may need to modify this part to define your own cost function. By default, the cost is implemented as follows:
- For a sigmoid layer, the implementation can follow UFLDL
- For a rectifying layer, the implementation can follow Glorot (2011), Deep Sparse Rectifier Neural Networks
Examples
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(network, n_units=196,
...     act=tf.nn.sigmoid, name='sigmoid1')
>>> recon_layer1 = tl.layers.ReconLayer(network, x_recon=x, n_units=784,
...     act=tf.nn.sigmoid, name='recon_layer1')
>>> recon_layer1.pretrain(sess, x=x, X_train=X_train, X_val=X_val,
...     denoise_name=None, n_epoch=1200, batch_size=128,
...     print_freq=10, save=True, save_name='w1pre_')
Methods
pretrain(self, sess, x, X_train, X_val, denoise_name=None, n_epoch=100, batch_size=128, print_freq=10, save=True, save_name='w1pre_') : Start to pre-train the parameters of the previous DenseLayer.
Noise layer¶
Dropout layer¶
class tensorlayer.layers.DropoutLayer(layer=None, keep=0.5, is_fix=False, is_train=True, name='dropout_layer')[source]¶
The DropoutLayer class is a noise layer which randomly sets some values to zero with a given keeping probability.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- keep : float
The keeping probability; the lower it is, the more values will be set to zero.
- is_fix : boolean
Default is False; if True, the keeping probability is fixed and cannot be changed via feed_dict.
- is_train : boolean
If False, skip this layer; default is True.
- name : a string or None
An optional name to attach to this layer.
Notes
- A frequent question regarding DropoutLayer is why it does not have an is_train switch like BatchNormLayer. In many simple cases, users may find it better to use one inference instead of two separate inferences for training and testing, and DropoutLayer allows you to control the dropout rate via feed_dict. However, you can fix the keeping probability by setting is_fix to True.
Examples
- Define network
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DropoutLayer(network, keep=0.8, name='drop1')
>>> network = tl.layers.DenseLayer(network, n_units=800, act = tf.nn.relu, name='relu1')
>>> ...
- For training, enable dropout as follows.
>>> feed_dict = {x: X_train_a, y_: y_train_a}
>>> feed_dict.update( network.all_drop )     # enable noise layers
>>> sess.run(train_op, feed_dict=feed_dict)
>>> ...
- For testing, disable dropout as follows.
>>> dp_dict = tl.utils.dict_to_one( network.all_drop )  # disable noise layers
>>> feed_dict = {x: X_val_a, y_: y_val_a}
>>> feed_dict.update(dp_dict)
>>> err, ac = sess.run([cost, acc], feed_dict=feed_dict)
>>> ...
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Gaussian noise layer¶
class tensorlayer.layers.GaussianNoiseLayer(layer=None, mean=0.0, stddev=1.0, is_train=True, name='gaussian_noise_layer')[source]¶
The GaussianNoiseLayer class is a noise layer that adds normally distributed noise to the activation.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- mean : float
The mean of the normal noise distribution.
- stddev : float
The standard deviation of the normal noise distribution.
- is_train : boolean
If False, skip this layer; default is True.
- name : a string or None
An optional name to attach to this layer.
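Examples
A minimal usage sketch (not from the original docs; layer sizes are assumed values):
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(network, n_units=100, act=tf.nn.relu, name='dense1')
>>> network = tl.layers.GaussianNoiseLayer(network, mean=0.0, stddev=1.0, name='gaussian1')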
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Dropconnect + Dense layer¶
class tensorlayer.layers.DropconnectDenseLayer(layer=None, keep=0.5, n_units=100, act=<function identity>, W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='dropconnect_layer')[source]¶
The DropconnectDenseLayer class is a DenseLayer with DropConnect behaviour, which randomly removes connections between this layer and the previous layer with a given keeping probability.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- keep : float
The keeping probability; the lower it is, the more values will be set to zero.
- n_units : int
The number of units of the layer.
- act : activation function
The function that is applied to the layer activations.
- W_init : weights initializer
The initializer for initializing the weight matrix.
- b_init : biases initializer
The initializer for initializing the bias vector.
- W_init_args : dictionary
The arguments for the weights tf.get_variable().
- b_init_args : dictionary
The arguments for the biases tf.get_variable().
- name : a string or None
An optional name to attach to this layer.
References
Examples
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DropconnectDenseLayer(network, keep = 0.8,
...     n_units=800, act = tf.nn.relu, name='dropconnect_relu1')
>>> network = tl.layers.DropconnectDenseLayer(network, keep = 0.5,
...     n_units=800, act = tf.nn.relu, name='dropconnect_relu2')
>>> network = tl.layers.DropconnectDenseLayer(network, keep = 0.5,
...     n_units=10, act = tl.activation.identity, name='output_layer')
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Convolutional layer (Pro)¶
1D Convolutional layer¶
class tensorlayer.layers.Conv1dLayer(layer=None, act=<function identity>, shape=[5, 5, 1], strides=1, padding='SAME', use_cudnn_on_gpu=None, data_format=None, W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='cnn_layer')[source]¶
The Conv1dLayer class is a 1D CNN layer, see tf.nn.conv1d.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer, [batch, in_width, in_channels].
- act : activation function, None for identity.
- shape : list of shape
shape of the filters, [filter_length, in_channels, out_channels].
- strides : an int.
The number of entries by which the filter is moved right at each step.
- padding : a string from: “SAME”, “VALID”.
The type of padding algorithm to use.
- use_cudnn_on_gpu : An optional bool. Defaults to True.
- data_format : An optional string from “NHWC”, “NCHW”. Defaults to “NHWC”, the data is stored in the order of [batch, in_width, in_channels]. The “NCHW” format stores data as [batch, in_channels, in_width].
- W_init : weights initializer
The initializer for initializing the weight matrix.
- b_init : biases initializer or None
The initializer for initializing the bias vector. If None, skip biases.
- W_init_args : dictionary
The arguments for the weights tf.get_variable().
- b_init_args : dictionary
The arguments for the biases tf.get_variable().
- name : a string or None
An optional name to attach to this layer.
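Examples
A minimal usage sketch (not from the original docs; shapes are assumed values):
>>> x = tf.placeholder(tf.float32, shape=[None, 100, 1])   # [batch, in_width, in_channels]
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.Conv1dLayer(network, act=tf.nn.relu,
...     shape=[5, 1, 32], strides=1, padding='SAME', name='cnn1d_layer1')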
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
2D Convolutional layer¶
class tensorlayer.layers.Conv2dLayer(layer=None, act=<function identity>, shape=[5, 5, 1, 100], strides=[1, 1, 1, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='cnn_layer')[source]¶
The Conv2dLayer class is a 2D CNN layer, see tf.nn.conv2d.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- act : activation function
The function that is applied to the layer activations.
- shape : list of shape
shape of the filters, [filter_height, filter_width, in_channels, out_channels].
- strides : a list of ints.
The stride of the sliding window for each dimension of input.
It must be in the same order as the dimensions specified by the format.
- padding : a string from: “SAME”, “VALID”.
The type of padding algorithm to use.
- W_init : weights initializer
The initializer for initializing the weight matrix.
- b_init : biases initializer or None
The initializer for initializing the bias vector. If None, skip biases.
- W_init_args : dictionary
The arguments for the weights tf.get_variable().
- b_init_args : dictionary
The arguments for the biases tf.get_variable().
- name : a string or None
An optional name to attach to this layer.
Notes
- shape = [h, w, the number of output channels of the previous layer, the number of output channels]
- the number of output channels of a layer is its last dimension.
Examples
>>> x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.Conv2dLayer(network,
...     act = tf.nn.relu,
...     shape = [5, 5, 1, 32],  # 32 features for each 5x5 patch
...     strides = [1, 1, 1, 1],
...     padding = 'SAME',
...     W_init = tf.truncated_normal_initializer(stddev=5e-2),
...     W_init_args = {},
...     b_init = tf.constant_initializer(value=0.0),
...     b_init_args = {},
...     name = 'cnn_layer1')     # output: (?, 28, 28, 32)
>>> network = tl.layers.PoolLayer(network,
...     ksize = [1, 2, 2, 1],
...     strides = [1, 2, 2, 1],
...     padding = 'SAME',
...     pool = tf.nn.max_pool,
...     name = 'pool_layer1')    # output: (?, 14, 14, 32)
>>> # Without TensorLayer, you can implement a 2D convolution as follows.
>>> W = tf.Variable(W_init(shape=[5, 5, 1, 32]), name='W_conv')
>>> b = tf.Variable(b_init(shape=[32]), name='b_conv')
>>> outputs = tf.nn.relu( tf.nn.conv2d(inputs, W,
...     strides=[1, 1, 1, 1],
...     padding='SAME') + b )
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
2D Deconvolutional layer¶
class tensorlayer.layers.DeConv2dLayer(layer=None, act=<function identity>, shape=[3, 3, 128, 256], output_shape=[1, 256, 256, 128], strides=[1, 2, 2, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='decnn2d_layer')[source]¶
The DeConv2dLayer class is a deconvolutional 2D layer, see tf.nn.conv2d_transpose.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- act : activation function
The function that is applied to the layer activations.
- shape : list of shape
shape of the filters, [height, width, output_channels, in_channels], filter’s in_channels dimension must match that of value.
- output_shape : list of output shape
representing the output shape of the deconvolution op.
- strides : a list of ints.
The stride of the sliding window for each dimension of the input tensor.
- padding : a string from: “SAME”, “VALID”.
The type of padding algorithm to use.
- W_init : weights initializer
The initializer for initializing the weight matrix.
- b_init : biases initializer
The initializer for initializing the bias vector. If None, skip biases.
- W_init_args : dictionary
The arguments for the weights initializer.
- b_init_args : dictionary
The arguments for the biases initializer.
- name : a string or None
An optional name to attach to this layer.
Notes
- shape = [h, w, the number of output channels of this layer, the number of output channels of the previous layer]
- output_shape = [batch_size, any, any, the number of output channels of this layer]
- the number of output channels of a layer is its last dimension.
Examples
- A part of the generator in DCGAN example
>>> batch_size = 64
>>> inputs = tf.placeholder(tf.float32, [batch_size, 100], name='z_noise')
>>> net_in = tl.layers.InputLayer(inputs, name='g/in')
>>> net_h0 = tl.layers.DenseLayer(net_in, n_units = 8192,
...     W_init = tf.random_normal_initializer(stddev=0.02),
...     act = tf.identity, name='g/h0/lin')
>>> print(net_h0.outputs._shape)
... (64, 8192)
>>> net_h0 = tl.layers.ReshapeLayer(net_h0, shape = [-1, 4, 4, 512], name='g/h0/reshape')
>>> net_h0 = tl.layers.BatchNormLayer(net_h0, act=tf.nn.relu, is_train=is_train, name='g/h0/batch_norm')
>>> print(net_h0.outputs._shape)
... (64, 4, 4, 512)
>>> net_h1 = tl.layers.DeConv2dLayer(net_h0,
...     shape = [5, 5, 256, 512],
...     output_shape = [batch_size, 8, 8, 256],
...     strides=[1, 2, 2, 1],
...     act=tf.identity, name='g/h1/decon2d')
>>> net_h1 = tl.layers.BatchNormLayer(net_h1, act=tf.nn.relu, is_train=is_train, name='g/h1/batch_norm')
>>> print(net_h1.outputs._shape)
... (64, 8, 8, 256)
- U-Net
>>> ....
>>> conv10 = tl.layers.Conv2dLayer(conv9, act=tf.nn.relu,
...     shape=[3,3,1024,1024], strides=[1,1,1,1], padding='SAME',
...     W_init=w_init, b_init=b_init, name='conv10')
>>> print(conv10.outputs)
... (batch_size, 32, 32, 1024)
>>> deconv1 = tl.layers.DeConv2dLayer(conv10, act=tf.nn.relu,
...     shape=[3,3,512,1024], strides=[1,2,2,1], output_shape=[batch_size,64,64,512],
...     padding='SAME', W_init=w_init, b_init=b_init, name='devcon1_1')
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
3D Convolutional layer¶
class tensorlayer.layers.Conv3dLayer(layer=None, act=<function identity>, shape=[2, 2, 2, 64, 128], strides=[1, 2, 2, 2, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='cnn3d_layer')[source]¶
The Conv3dLayer class is a 3D CNN layer, see tf.nn.conv3d.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- act : activation function
The function that is applied to the layer activations.
- shape : list of shape
shape of the filters, [filter_depth, filter_height, filter_width, in_channels, out_channels].
- strides : a list of ints, 1-D of length 5.
The stride of the sliding window for each dimension of input. Must be in the same order as the dimension specified with format.
- padding : a string from: “SAME”, “VALID”.
The type of padding algorithm to use.
- W_init : weights initializer
The initializer for initializing the weight matrix.
- b_init : biases initializer
The initializer for initializing the bias vector.
- W_init_args : dictionary
The arguments for the weights initializer.
- b_init_args : dictionary
The arguments for the biases initializer.
- name : a string or None
An optional name to attach to this layer.
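Examples
A minimal usage sketch (not from the original docs; shapes are assumed values):
>>> x = tf.placeholder(tf.float32, shape=[None, 16, 64, 64, 3])   # [batch, depth, height, width, channels]
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.Conv3dLayer(network, act=tf.nn.relu,
...     shape=[2, 2, 2, 3, 32], strides=[1, 2, 2, 2, 1], padding='SAME', name='cnn3d_layer1')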
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
3D Deconvolutional layer¶
class tensorlayer.layers.DeConv3dLayer(layer=None, act=<function identity>, shape=[2, 2, 2, 128, 256], output_shape=[1, 12, 32, 32, 128], strides=[1, 2, 2, 2, 1], padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='decnn3d_layer')[source]¶
The DeConv3dLayer class is a deconvolutional 3D layer, see tf.nn.conv3d_transpose.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- act : activation function
The function that is applied to the layer activations.
- shape : list of shape
shape of the filters, [depth, height, width, output_channels, in_channels], filter’s in_channels dimension must match that of value.
- output_shape : list of output shape
representing the output shape of the deconvolution op.
- strides : a list of ints.
The stride of the sliding window for each dimension of the input tensor.
- padding : a string from: “SAME”, “VALID”.
The type of padding algorithm to use.
- W_init : weights initializer
The initializer for initializing the weight matrix.
- b_init : biases initializer
The initializer for initializing the bias vector.
- W_init_args : dictionary
The arguments for the weights initializer.
- b_init_args : dictionary
The arguments for the biases initializer.
- name : a string or None
An optional name to attach to this layer.
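Examples
A minimal usage sketch (not from the original docs; it reuses the default shapes from the signature, and batch_size is an assumed variable):
>>> network = tl.layers.DeConv3dLayer(network, act=tf.identity,
...     shape=[2, 2, 2, 128, 256], output_shape=[batch_size, 12, 32, 32, 128],
...     strides=[1, 2, 2, 2, 1], padding='SAME', name='decnn3d_layer1')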
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
2D UpSampling layer¶
class tensorlayer.layers.UpSampling2dLayer(layer=None, size=[], is_scale=True, method=0, align_corners=False, name='upsample2d_layer')[source]¶
The UpSampling2dLayer class is an up-sampling 2D layer, see tf.image.resize_images.
Parameters:
- layer : a layer class with a 4-D Tensor of shape [batch, height, width, channels] or a 3-D Tensor of shape [height, width, channels].
- size : a tuple of int or float.
(height, width) scale factor or new size of height and width.
- is_scale : boolean, if True (default), size is scale factor, otherwise, size is number of pixels of height and width.
- method : 0, 1, 2, 3. ResizeMethod. Defaults to ResizeMethod.BILINEAR.
- ResizeMethod.BILINEAR, Bilinear interpolation.
- ResizeMethod.NEAREST_NEIGHBOR, Nearest neighbor interpolation.
- ResizeMethod.BICUBIC, Bicubic interpolation.
- ResizeMethod.AREA, Area interpolation.
- align_corners : bool. If true, exactly align all 4 corners of the input and output. Defaults to false.
- name : a string or None
An optional name to attach to this layer.
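Examples
A minimal usage sketch (not from the original docs): double the height and width of a feature map.
>>> network = tl.layers.UpSampling2dLayer(network, size=[2, 2], is_scale=True, name='upsample2d_layer1')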
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
2D DownSampling layer¶
class tensorlayer.layers.DownSampling2dLayer(layer=None, size=[], is_scale=True, method=0, align_corners=False, name='downsample2d_layer')[source]¶
The DownSampling2dLayer class is a down-sampling 2D layer, see tf.image.resize_images.
Parameters:
- layer : a layer class with a 4-D Tensor of shape [batch, height, width, channels] or a 3-D Tensor of shape [height, width, channels].
- size : a tuple of int or float.
(height, width) scale factor or new size of height and width.
- is_scale : boolean, if True (default), size is scale factor, otherwise, size is number of pixels of height and width.
- method : 0, 1, 2, 3. ResizeMethod. Defaults to ResizeMethod.BILINEAR.
- ResizeMethod.BILINEAR, Bilinear interpolation.
- ResizeMethod.NEAREST_NEIGHBOR, Nearest neighbor interpolation.
- ResizeMethod.BICUBIC, Bicubic interpolation.
- ResizeMethod.AREA, Area interpolation.
- align_corners : bool. If true, exactly align all 4 corners of the input and output. Defaults to false.
- name : a string or None
An optional name to attach to this layer.
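Examples
A minimal usage sketch (not from the original docs): halve the height and width of a feature map.
>>> network = tl.layers.DownSampling2dLayer(network, size=[0.5, 0.5], is_scale=True, name='downsample2d_layer1')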
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
2D Atrous convolutional layer¶
class tensorlayer.layers.AtrousConv2dLayer(layer=None, n_filter=32, filter_size=(3, 3), rate=2, act=None, padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='atrou2d')[source]¶
The AtrousConv2dLayer class is an atrous convolution (a.k.a. convolution with holes or dilated convolution) 2D layer, see tf.nn.atrous_conv2d.
Parameters:
- layer : a layer class with a 4-D Tensor of shape [batch, height, width, channels].
- filters : a 4-D Tensor with the same type as value and shape [filter_height, filter_width, in_channels, out_channels]. filters' in_channels dimension must match that of value. Atrous convolution is equivalent to standard convolution with upsampled filters with effective height filter_height + (filter_height - 1) * (rate - 1) and effective width filter_width + (filter_width - 1) * (rate - 1), produced by inserting rate - 1 zeros along consecutive elements across the filters' spatial dimensions.
- n_filter : number of filter.
- filter_size : tuple (height, width) for filter size.
- rate : A positive int32. The stride with which we sample input values across the height and width dimensions. Equivalently, the rate by which we upsample the filter values by inserting zeros across the height and width dimensions. In the literature, the same parameter is sometimes called input stride or dilation.
- act : activation function, None for linear.
- padding : A string, either ‘VALID’ or ‘SAME’. The padding algorithm.
- W_init : weights initializer. The initializer for initializing the weight matrix.
- b_init : biases initializer or None. The initializer for initializing the bias vector. If None, skip biases.
- W_init_args : dictionary. The arguments for the weights tf.get_variable().
- b_init_args : dictionary. The arguments for the biases tf.get_variable().
- name : a string or None, an optional name to attach to this layer.
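Examples
A minimal usage sketch (not from the original docs; filter count and rate are assumed values):
>>> network = tl.layers.AtrousConv2dLayer(network, n_filter=32, filter_size=(3, 3),
...     rate=2, act=tf.nn.relu, padding='SAME', name='atrous2d_layer1')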
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Convolutional layer (Simplified)¶
For users not familiar with TensorFlow, the following simplified functions may be easier to use. We will provide more simplified functions later; if you are good at TensorFlow, the professional APIs may suit you better.
2D Convolutional layer¶
tensorlayer.layers.Conv2d(net, n_filter=32, filter_size=(3, 3), strides=(1, 1), act=None, padding='SAME', W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='conv2d')[source]¶
Wrapper for Conv2dLayer; if you don't understand how to use Conv2dLayer, this function may be easier.
Parameters:
- net : TensorLayer layer.
- n_filter : number of filter.
- filter_size : tuple (height, width) for filter size.
- strides : tuple (height, width) for strides.
- act : None or activation function.
- others : see Conv2dLayer.
Examples
>>> w_init = tf.truncated_normal_initializer(stddev=0.01)
>>> b_init = tf.constant_initializer(value=0.0)
>>> inputs = InputLayer(x, name='inputs')
>>> conv1 = Conv2d(inputs, 64, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv1_1')
>>> conv1 = Conv2d(conv1, 64, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv1_2')
>>> pool1 = MaxPool2d(conv1, (2, 2), padding='SAME', name='pool1')
>>> conv2 = Conv2d(pool1, 128, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv2_1')
>>> conv2 = Conv2d(conv2, 128, (3, 3), act=tf.nn.relu, padding='SAME', W_init=w_init, b_init=b_init, name='conv2_2')
>>> pool2 = MaxPool2d(conv2, (2, 2), padding='SAME', name='pool2')
2D Deconvolutional layer¶
tensorlayer.layers.DeConv2d(net, n_out_channel=32, filter_size=(3, 3), out_size=(30, 30), strides=(2, 2), padding='SAME', batch_size=None, act=None, W_init=<tensorflow.python.ops.init_ops.TruncatedNormal object>, b_init=<tensorflow.python.ops.init_ops.Constant object>, W_init_args={}, b_init_args={}, name='decnn2d')[source]¶
Wrapper for DeConv2dLayer; if you don't understand how to use DeConv2dLayer, this function may be easier.
Parameters:
- net : TensorLayer layer.
, this function may be easier.Parameters: - net : TensorLayer layer.
- n_out_channel : int, number of output channel.
- filter_size : tuple of (height, width) for filter size.
- out_size : tuple of (height, width) of output.
- batch_size : int or None, the batch size. If None, try to find the batch_size from the first dimension of net.outputs (you should specify the batch_size when defining the input placeholder).
- strides : tuple of (height, width) for strides.
- act : None or activation function.
- others : see DeConv2dLayer.
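Examples
A minimal usage sketch (not from the original docs; sizes and batch_size are assumed values):
>>> net = tl.layers.DeConv2d(net, n_out_channel=64, filter_size=(3, 3),
...     out_size=(28, 28), strides=(2, 2), batch_size=batch_size,
...     act=tf.nn.relu, name='decnn2d_1')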
2D Max pooling layer¶
tensorlayer.layers.MaxPool2d(net, filter_size=(2, 2), strides=None, padding='SAME', name='maxpool')[source]¶
Wrapper for PoolLayer.
Parameters:
- net : TensorLayer layer.
- filter_size : tuple of (height, width) for filter size.
- strides : tuple of (height, width). Default is the same as filter_size.
- others : see PoolLayer.
2D Mean pooling layer¶
tensorlayer.layers.MeanPool2d(net, filter_size=(2, 2), strides=None, padding='SAME', name='meanpool')[source]¶
Wrapper for PoolLayer.
Parameters:
- net : TensorLayer layer.
- filter_size : tuple of (height, width) for filter size.
- strides : tuple of (height, width). Default is the same as filter_size.
- others : see PoolLayer.
Pooling layer¶
Pooling layer for any dimensions and any pooling functions.
class tensorlayer.layers.PoolLayer(layer=None, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME', pool=<function max_pool>, name='pool_layer')[source]¶
The PoolLayer class is a pooling layer; you can choose tf.nn.max_pool and tf.nn.avg_pool for 2D, or tf.nn.max_pool3d() and tf.nn.avg_pool3d() for 3D.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- ksize : a list of ints that has length >= 4.
The size of the window for each dimension of the input tensor.
- strides : a list of ints that has length >= 4.
The stride of the sliding window for each dimension of the input tensor.
- padding : a string from: “SAME”, “VALID”.
The type of padding algorithm to use.
- pool : a pooling function
- see TensorFlow pooling APIs
- class tf.nn.max_pool
- class tf.nn.avg_pool
- class tf.nn.max_pool3d
- class tf.nn.avg_pool3d
- name : a string or None
An optional name to attach to this layer.
Examples
- see Conv2dLayer.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Padding layer¶
Padding layer for any modes.
class tensorlayer.layers.PadLayer(layer=None, paddings=None, mode='CONSTANT', name='pad_layer')[source]¶
The PadLayer class is a padding layer for any modes and dimensions. Please see tf.pad for usage.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- paddings : a Tensor of type int32.
- mode : one of “CONSTANT”, “REFLECT”, or “SYMMETRIC” (case-insensitive)
- name : a string or None
An optional name to attach to this layer.
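Examples
A minimal usage sketch (not from the original docs): pad the height and width of a 4-D feature map by 3 pixels on each side.
>>> network = tl.layers.PadLayer(network,
...     paddings=[[0, 0], [3, 3], [3, 3], [0, 0]], mode='CONSTANT', name='pad_layer1')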
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Normalization layer¶
Local response normalization does not have any weights or arguments, so you can also apply tf.nn.lrn
on network.outputs
directly.
Batch Normalization¶
class tensorlayer.layers.BatchNormLayer(layer=None, decay=0.9, epsilon=1e-05, act=<function identity>, is_train=False, beta_init=<class 'tensorflow.python.ops.init_ops.Zeros'>, gamma_init=<tensorflow.python.ops.init_ops.RandomNormal object>, name='batchnorm_layer')[source]¶
The BatchNormLayer class is a normalization layer, see tf.nn.batch_normalization and tf.nn.moments. It performs batch normalization on fully-connected or convolutional maps.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- decay : float, default is 0.9.
A decay factor for ExponentialMovingAverage; use a larger value for a large dataset.
- epsilon : float
A small float number to avoid dividing by 0.
- act : activation function.
- is_train : boolean
Whether train or inference.
- beta_init : beta initializer
The initializer for initializing beta.
- gamma_init : gamma initializer
The initializer for initializing gamma.
- name : a string or None
An optional name to attach to this layer.
References
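Examples
A minimal usage sketch (not from the original docs; shapes are assumed values):
>>> network = tl.layers.Conv2dLayer(network, shape=[5, 5, 3, 32],
...     strides=[1, 1, 1, 1], padding='SAME', name='cnn_layer1')
>>> network = tl.layers.BatchNormLayer(network, act=tf.nn.relu, is_train=True, name='batchnorm_layer1')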
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Local Response Normalization¶
class tensorlayer.layers.LocalResponseNormLayer(layer=None, depth_radius=None, bias=None, alpha=None, beta=None, name='lrn_layer')[source]¶
The LocalResponseNormLayer class is for local response normalization, see tf.nn.local_response_normalization. The 4-D input tensor is treated as a 3-D array of 1-D vectors (along the last dimension), and each vector is normalized independently. Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius.
Parameters:
- layer : a layer class holding a 4-D tensor. Must be one of the following types: float32, half.
- depth_radius : An optional int. Defaults to 5. 0-D. Half-width of the 1-D normalization window.
- bias : An optional float. Defaults to 1. An offset (usually positive to avoid dividing by 0).
- alpha : An optional float. Defaults to 1. A scale factor, usually positive.
- beta : An optional float. Defaults to 0.5. An exponent.
- name : A string or None, an optional name to attach to this layer.
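Examples
A minimal usage sketch (not from the original docs; the parameter values are classic AlexNet-style settings, used here only as assumed examples):
>>> network = tl.layers.LocalResponseNormLayer(network, depth_radius=5,
...     bias=1.0, alpha=0.0001, beta=0.75, name='lrn_layer1')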
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Time distributed layer¶
class tensorlayer.layers.TimeDistributedLayer(layer=None, layer_class=None, args={}, name='time_distributed')[source]¶
The TimeDistributedLayer class applies a function to every timestep of the input tensor. For example, using DenseLayer as the layer_class, an input of [batch_size, length, dim] gives an output of [batch_size, length, new_dim].
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- layer_class : a Layer class
The class to apply to every timestep, e.g. DenseLayer.
- args : a dictionary
The arguments for the layer_class.
- name : a string or None
An optional name to attach to this layer.
Examples
>>> batch_size = 32
>>> timestep = 20
>>> input_dim = 100
>>> x = tf.placeholder(dtype=tf.float32, shape=[batch_size, timestep, input_dim], name="encode_seqs")
>>> net = InputLayer(x, name='input')
>>> net = TimeDistributedLayer(net, layer_class=DenseLayer, args={'n_units':50, 'name':'dense'}, name='time_dense')
... [TL] InputLayer  input: (32, 20, 100)
... [TL] TimeDistributedLayer time_dense: layer_class:DenseLayer
>>> print(net.outputs._shape)
... (32, 20, 50)
>>> net.print_params(False)
... param 0: (100, 50) time_dense/dense/W:0
... param 1: (50,) time_dense/dense/b:0
... num of params: 5050
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Fixed Length Recurrent layer¶
All recurrent layers can implement any type of RNN cell by feeding in a different cell function (LSTM, GRU, etc.).
RNN layer¶
class tensorlayer.layers.RNNLayer(layer=None, cell_fn=None, cell_init_args={}, n_hidden=100, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, n_steps=5, initial_state=None, return_last=False, return_seq_2d=False, name='rnn_layer')[source]¶
The RNNLayer class is an RNN layer; you can implement vanilla RNN, LSTM and GRU with it.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- cell_fn : a TensorFlow core RNN cell (note that the TF1.0+ and TF1.0- APIs differ).
- cell_init_args : a dictionary
The arguments for the cell initializer.
- n_hidden : an int
The number of hidden units in the layer.
- initializer : initializer
The initializer for initializing the parameters.
- n_steps : an int
The sequence length.
- initial_state : None or RNN State
If None, initial_state is zero_state.
- return_last : boolean
- If True, return only the last output, “sequence input and single output”.
- If False, return all outputs, “synced sequence input and output”.
- In other words, if you want to stack one or more RNNs on top of this layer, set it to False.
- return_seq_2d : boolean
- When return_last = False:
- If True, return a 2D Tensor [n_example, n_hidden], for stacking a DenseLayer after it.
- If False, return a 3D Tensor [n_example/n_steps, n_steps, n_hidden], for stacking multiple RNNs after it.
- name : a string or None
An optional name to attach to this layer.
Notes
Input dimension should be rank 3: [batch_size, n_steps, n_features]. If not, please see ReshapeLayer.
References
- Neural Network RNN Cells in TensorFlow
- tensorflow/python/ops/rnn.py
- tensorflow/python/ops/rnn_cell.py
- see the TensorFlow tutorial ptb_word_lm.py, and the TensorLayer tutorials tutorial_ptb_lstm*.py and tutorial_generate_text.py
Examples
- For words
>>> input_data = tf.placeholder(tf.int32, [batch_size, num_steps]) >>> network = tl.layers.EmbeddingInputlayer( ... inputs = input_data, ... vocabulary_size = vocab_size, ... embedding_size = hidden_size, ... E_init = tf.random_uniform_initializer(-init_scale, init_scale), ... name ='embedding_layer') >>> if is_training: >>> network = tl.layers.DropoutLayer(network, keep=keep_prob, name='drop1') >>> network = tl.layers.RNNLayer(network, ... cell_fn=tf.nn.rnn_cell.BasicLSTMCell, ... cell_init_args={'forget_bias': 0.0},# 'state_is_tuple': True}, ... n_hidden=hidden_size, ... initializer=tf.random_uniform_initializer(-init_scale, init_scale), ... n_steps=num_steps, ... return_last=False, ... name='basic_lstm_layer1') >>> lstm1 = network >>> if is_training: >>> network = tl.layers.DropoutLayer(network, keep=keep_prob, name='drop2') >>> network = tl.layers.RNNLayer(network, ... cell_fn=tf.nn.rnn_cell.BasicLSTMCell, ... cell_init_args={'forget_bias': 0.0}, # 'state_is_tuple': True}, ... n_hidden=hidden_size, ... initializer=tf.random_uniform_initializer(-init_scale, init_scale), ... n_steps=num_steps, ... return_last=False, ... return_seq_2d=True, ... name='basic_lstm_layer2') >>> lstm2 = network >>> if is_training: >>> network = tl.layers.DropoutLayer(network, keep=keep_prob, name='drop3') >>> network = tl.layers.DenseLayer(network, ... n_units=vocab_size, ... W_init=tf.random_uniform_initializer(-init_scale, init_scale), ... b_init=tf.random_uniform_initializer(-init_scale, init_scale), ... act = tl.activation.identity, name='output_layer')
- For CNN+LSTM
>>> x = tf.placeholder(tf.float32, shape=[batch_size, image_size, image_size, 1]) >>> network = tl.layers.InputLayer(x, name='input_layer') >>> network = tl.layers.Conv2dLayer(network, ... act = tf.nn.relu, ... shape = [5, 5, 1, 32], # 32 features for each 5x5 patch ... strides=[1, 2, 2, 1], ... padding='SAME', ... name ='cnn_layer1') >>> network = tl.layers.PoolLayer(network, ... ksize=[1, 2, 2, 1], ... strides=[1, 2, 2, 1], ... padding='SAME', ... pool = tf.nn.max_pool, ... name ='pool_layer1') >>> network = tl.layers.Conv2dLayer(network, ... act = tf.nn.relu, ... shape = [5, 5, 32, 10], # 10 features for each 5x5 patch ... strides=[1, 2, 2, 1], ... padding='SAME', ... name ='cnn_layer2') >>> network = tl.layers.PoolLayer(network, ... ksize=[1, 2, 2, 1], ... strides=[1, 2, 2, 1], ... padding='SAME', ... pool = tf.nn.max_pool, ... name ='pool_layer2') >>> network = tl.layers.FlattenLayer(network, name='flatten_layer') >>> network = tl.layers.ReshapeLayer(network, shape=[-1, num_steps, int(network.outputs._shape[-1])]) >>> rnn1 = tl.layers.RNNLayer(network, ... cell_fn=tf.nn.rnn_cell.LSTMCell, ... cell_init_args={}, ... n_hidden=200, ... initializer=tf.random_uniform_initializer(-0.1, 0.1), ... n_steps=num_steps, ... return_last=False, ... return_seq_2d=True, ... name='rnn_layer') >>> network = tl.layers.DenseLayer(rnn1, n_units=3, ... act = tl.activation.identity, name='output_layer')
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Bidirectional layer¶
class tensorlayer.layers.BiRNNLayer(layer=None, cell_fn=None, cell_init_args={'state_is_tuple': True, 'use_peepholes': True}, n_hidden=100, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, n_steps=5, fw_initial_state=None, bw_initial_state=None, dropout=None, n_layer=1, return_last=False, return_seq_2d=False, name='birnn_layer')[source]¶
The BiRNNLayer class is a bidirectional RNN layer.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- cell_fn : a TensorFlow core RNN cell (note that the TF1.0+ and TF1.0- APIs differ).
- cell_init_args : a dictionary
The arguments for the cell initializer.
- n_hidden : an int
The number of hidden units in the layer.
- initializer : initializer
The initializer for initializing the parameters.
- n_steps : an int
The sequence length.
- fw_initial_state : None or forward RNN State
If None, initial_state is zero_state.
- bw_initial_state : None or backward RNN State
If None, initial_state is zero_state.
- dropout : tuple of float: (input_keep_prob, output_keep_prob).
The input and output keep probability.
- n_layer : an int, default is 1.
The number of RNN layers.
- return_last : boolean
- If True, return only the last output, “sequence input and single output”.
- If False, return all outputs, “synced sequence input and output”.
- In other words, if you want to stack one or more RNNs on top of this layer, set it to False.
- return_seq_2d : boolean
- When return_last = False:
- If True, return a 2D Tensor [n_example, n_hidden], for stacking a DenseLayer after it.
- If False, return a 3D Tensor [n_example/n_steps, n_steps, n_hidden], for stacking multiple RNNs after it.
- name : a string or None
An optional name to attach to this layer.
Notes
- Input dimension should be rank 3: [batch_size, n_steps, n_features]. If not, please see ReshapeLayer.
- For prediction, the sequence length has to be the same as the sequence length used during training, whereas for a normal RNN we can use a sequence length of 1 for prediction.
References
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
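A minimal usage sketch (shapes and names are illustrative): with return_seq_2d=True the output is [n_example, 2 * n_hidden], so a DenseLayer can be stacked directly on top.
>>> batch_size, n_steps, n_features = 32, 20, 100
>>> x = tf.placeholder(tf.float32, shape=[batch_size, n_steps, n_features])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.BiRNNLayer(network,
...             cell_fn=tf.nn.rnn_cell.LSTMCell,
...             n_hidden=100,
...             n_steps=n_steps,
...             return_last=False,
...             return_seq_2d=True,
...             name='birnn_layer')
>>> network = tl.layers.DenseLayer(network, n_units=10,
...             act=tf.identity, name='output_layer')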
Advanced Ops for Dynamic RNN¶
These operations are usually used inside the dynamic RNN layers; they compute the sequence lengths for different situations and retrieve the last RNN outputs by indexing.
Output indexing¶
tensorlayer.layers.advanced_indexing_op(input, index)[source]¶
Advanced indexing for sequences: returns the outputs selected by the given sequence lengths. When returning the last output, DynamicRNNLayer uses it to get the last outputs given the sequence lengths.
Parameters:
- input : tensor for data
[batch_size, n_step(max), n_features]
- index : tensor for indexing, i.e. sequence_length in dynamic RNN.
[batch_size]
References
- Modified from TFlearn (the original code is for fixed-length RNNs); see references.
Examples
>>> batch_size, max_length, n_features = 3, 5, 2
>>> z = np.random.uniform(low=-1, high=1, size=[batch_size, max_length, n_features]).astype(np.float32)
>>> b_z = tf.constant(z)
>>> sl = tf.placeholder(dtype=tf.int32, shape=[batch_size])
>>> o = advanced_indexing_op(b_z, sl)
>>>
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>>
>>> order = np.asarray([1,1,2])
>>> print("real",z[0][order[0]-1], z[1][order[1]-1], z[2][order[2]-1])
>>> y = sess.run([o], feed_dict={sl:order})
>>> print("given",order)
>>> print("out", y)
... real [-0.93021595  0.53820813] [-0.92548317 -0.77135968] [ 0.89952248  0.19149846]
... given [1 1 2]
... out [array([[-0.93021595,  0.53820813],
...        [-0.92548317, -0.77135968],
...        [ 0.89952248,  0.19149846]], dtype=float32)]
Compute Sequence length 1¶
tensorlayer.layers.retrieve_seq_length_op(data)[source]¶
An op to compute the length of each sequence from an input of shape [batch_size, n_step(max), n_features]. It can be used when the padding features (on the right-hand side) are all zeros.
Parameters:
- data : tensor
[batch_size, n_step(max), n_features] with zero padding on the right-hand side.
References
- Borrowed from TFlearn.
Examples
>>> data = [[[1],[2],[0],[0],[0]],
...         [[1],[2],[3],[0],[0]],
...         [[1],[2],[6],[1],[0]]]
>>> data = np.asarray(data)
>>> print(data.shape)
... (3, 5, 1)
>>> data = tf.constant(data)
>>> sl = retrieve_seq_length_op(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> y = sl.eval()
... [2 3 4]
- Multiple features
>>> data = [[[1,2],[2,2],[1,2],[1,2],[0,0]],
...         [[2,3],[2,4],[3,2],[0,0],[0,0]],
...         [[3,3],[2,2],[5,3],[1,2],[0,0]]]
>>> sl
... [4 3 4]
Compute Sequence length 2¶
tensorlayer.layers.retrieve_seq_length_op2(data)[source]¶
An op to compute the length of each sequence from an input of shape [batch_size, n_step(max)]. It can be used when the padding values (on the right-hand side) are all zeros.
Parameters:
- data : tensor
[batch_size, n_step(max)] with zero padding on the right-hand side.
Examples
>>> data = [[1,2,0,0,0],
...         [1,2,3,0,0],
...         [1,2,6,1,0]]
>>> o = retrieve_seq_length_op2(data)
>>> sess = tf.InteractiveSession()
>>> tl.layers.initialize_global_variables(sess)
>>> print(o.eval())
... [2 3 4]
Dynamic RNN layer¶
RNN layer¶
class tensorlayer.layers.DynamicRNNLayer(layer=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, sequence_length=None, initial_state=None, dropout=None, n_layer=1, return_last=False, return_seq_2d=False, name='dyrnn_layer')[source]¶
The DynamicRNNLayer class is a dynamic RNN layer; see tf.nn.dynamic_rnn.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- cell_fn : a TensorFlow core RNN cell (note that the TF1.0+ and TF1.0- APIs differ).
- cell_init_args : a dictionary
The arguments for the cell initializer.
- n_hidden : an int
The number of hidden units in the layer.
- initializer : initializer
The initializer for initializing the parameters.
- sequence_length : a tensor, array or None
- The sequence length of each row of input data; see Advanced Ops for Dynamic RNN.
- If None, it uses retrieve_seq_length_op to compute the sequence_length, i.e. when the padding features (on the right-hand side) are all zeros.
- If using word embedding, you may need to compute the sequence_length from the ID array (the integer features before word embedding) by using retrieve_seq_length_op2 or retrieve_seq_length_op.
- You can also input a numpy array.
- More details about TensorFlow dynamic_rnn are in the Wild-ML Blog.
- initial_state : None or RNN State
If None, initial_state is zero_state.
- dropout : tuple of float: (input_keep_prob, output_keep_prob).
The input and output keep probability.
- n_layer : an int, default is 1.
The number of RNN layers.
- return_last : boolean
- If True, return only the last output, “sequence input and single output”.
- If False, return all outputs, “synced sequence input and output”.
- In other words, if you want to stack one or more RNNs on top of this layer, set it to False.
- return_seq_2d : boolean
- When return_last = False:
- If True, return a 2D Tensor [n_example, n_hidden], for stacking a DenseLayer or computing the cost after it.
- If False, return a 3D Tensor [n_example/n_steps(max), n_steps(max), n_hidden], for stacking multiple RNNs after it.
- name : a string or None
An optional name to attach to this layer.
Notes
Input dimension should be rank 3: [batch_size, n_steps(max), n_features]. If not, please see ReshapeLayer.
References
- Wild-ML Blog
- dynamic_rnn.ipynb
- tf.nn.dynamic_rnn
- tflearn rnn
- tutorial_dynamic_rnn.py
Examples
>>> input_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="input_seqs")
>>> network = tl.layers.EmbeddingInputlayer(
...             inputs = input_seqs,
...             vocabulary_size = vocab_size,
...             embedding_size = embedding_size,
...             name = 'seq_embedding')
>>> network = tl.layers.DynamicRNNLayer(network,
...             cell_fn = tf.contrib.rnn.BasicLSTMCell, # for TF0.2 tf.nn.rnn_cell.BasicLSTMCell,
...             n_hidden = embedding_size,
...             dropout = 0.7,
...             sequence_length = tl.layers.retrieve_seq_length_op2(input_seqs),
...             return_seq_2d = True,     # stack DenseLayer or compute cost after it
...             name = 'dynamic_rnn')
>>> network = tl.layers.DenseLayer(network, n_units=vocab_size,
...             act=tf.identity, name="output")
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Bidirectional layer¶
class tensorlayer.layers.BiDynamicRNNLayer(layer=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, sequence_length=None, fw_initial_state=None, bw_initial_state=None, dropout=None, n_layer=1, return_last=False, return_seq_2d=False, name='bi_dyrnn_layer')[source]¶
The BiDynamicRNNLayer class is a bidirectional dynamic RNN layer; you can implement vanilla RNN, LSTM and GRU with it.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- cell_fn : a TensorFlow core RNN cell (note that the TF1.0+ and TF1.0- APIs differ).
- cell_init_args : a dictionary
The arguments for the cell initializer.
- n_hidden : an int
The number of hidden units in the layer.
- initializer : initializer
The initializer for initializing the parameters.
- sequence_length : a tensor, array or None
- The sequence length of each row of input data; see Advanced Ops for Dynamic RNN.
- If None, it uses retrieve_seq_length_op to compute the sequence_length, i.e. when the padding features (on the right-hand side) are all zeros.
- If using word embedding, you may need to compute the sequence_length from the ID array (the integer features before word embedding) by using retrieve_seq_length_op2 or retrieve_seq_length_op.
- You can also input a numpy array.
- More details about TensorFlow dynamic_rnn are in the Wild-ML Blog.
- fw_initial_state : None or forward RNN State
If None, initial_state is zero_state.
- bw_initial_state : None or backward RNN State
If None, initial_state is zero_state.
- dropout : tuple of float: (input_keep_prob, output_keep_prob).
The input and output keep probability.
- n_layer : an int, default is 1.
The number of RNN layers.
- return_last : boolean
- If True, return only the last output, “sequence input and single output”.
- If False, return all outputs, “synced sequence input and output”.
- In other words, if you want to stack one or more RNNs on top of this layer, set it to False.
- return_seq_2d : boolean
- When return_last = False:
- If True, return a 2D Tensor [n_example, 2 * n_hidden], for stacking a DenseLayer or computing the cost after it.
- If False, return a 3D Tensor [n_example/n_steps(max), n_steps(max), 2 * n_hidden], for stacking multiple RNNs after it.
- name : a string or None
An optional name to attach to this layer.
Notes
Input dimension should be rank 3: [batch_size, n_steps(max), n_features]. If not, please see ReshapeLayer.
References
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
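A minimal usage sketch for variable-length sequences (the vocabulary and embedding sizes are illustrative); the sequence lengths are computed from the ID array before embedding, as described above, and the output is 2 * n_hidden wide (forward plus backward).
>>> batch_size, vocab_size, embedding_size = 32, 10000, 200
>>> input_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="input_seqs")
>>> network = tl.layers.EmbeddingInputlayer(
...             inputs=input_seqs,
...             vocabulary_size=vocab_size,
...             embedding_size=embedding_size,
...             name='seq_embedding')
>>> network = tl.layers.BiDynamicRNNLayer(network,
...             cell_fn=tf.contrib.rnn.BasicLSTMCell,  # for TF1.0-, tf.nn.rnn_cell.BasicLSTMCell
...             n_hidden=embedding_size,
...             sequence_length=tl.layers.retrieve_seq_length_op2(input_seqs),
...             return_seq_2d=True,
...             name='bi_dyrnn_layer')
>>> network = tl.layers.DenseLayer(network, n_units=vocab_size,
...             act=tf.identity, name='output')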
Sequence to Sequence¶
Simple Seq2Seq¶
class tensorlayer.layers.Seq2Seq(net_encode_in=None, net_decode_in=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, encode_sequence_length=None, decode_sequence_length=None, initial_state=None, dropout=None, n_layer=1, return_seq_2d=False, name='seq2seq')[source]¶
The Seq2Seq class is a simple DynamicRNNLayer-based Seq2seq layer; both the encoder and the decoder are DynamicRNNLayer. For network details, see Model and Sequence to Sequence Learning with Neural Networks.
Parameters:
- net_encode_in : a Layer instance
Encode sequences, [batch_size, None, n_features].
- net_decode_in : a Layer instance
Decode sequences, [batch_size, None, n_features].
- cell_fn : a TensorFlow core RNN cell (note that the TF1.0+ and TF1.0- APIs differ).
- cell_init_args : a dictionary
The arguments for the cell initializer.
- n_hidden : an int
The number of hidden units in the layer.
- initializer : initializer
The initializer for initializing the parameters.
- encode_sequence_length : tensor for the encoder sequence length; see DynamicRNNLayer.
- decode_sequence_length : tensor for the decoder sequence length; see DynamicRNNLayer.
- initial_state : None or forward RNN State
If None, the initial_state is the encoder's zero_state.
- dropout : tuple of float: (input_keep_prob, output_keep_prob).
The input and output keep probability.
- n_layer : an int, default is 1.
The number of RNN layers.
- return_seq_2d : boolean
- When return_last = False:
- If True, return a 2D Tensor [n_example, 2 * n_hidden], for stacking a DenseLayer or computing the cost after it.
- If False, return a 3D Tensor [n_example/n_steps(max), n_steps(max), 2 * n_hidden], for stacking multiple RNNs after it.
- name : a string or None
An optional name to attach to this layer.
Notes
- How to feed data: Sequence to Sequence Learning with Neural Networks
- input_seqs : ['how', 'are', 'you', '<PAD_ID>']
- decode_seqs : ['<START_ID>', 'I', 'am', 'fine', '<PAD_ID>']
- target_seqs : ['I', 'am', 'fine', '<END_ID>']
- target_mask : [1, 1, 1, 1, 0]
- related functions : tl.prepro <pad_sequences, precess_sequences, sequences_add_start_id, sequences_get_mask>
Examples
>>> from tensorlayer.layers import * >>> batch_size = 32 >>> encode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="encode_seqs") >>> decode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="decode_seqs") >>> target_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="target_seqs") >>> target_mask = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="target_mask") # tl.prepro.sequences_get_mask() >>> with tf.variable_scope("model") as vs:#, reuse=reuse): ... # for chatbot, you can use the same embedding layer, ... # for translation, you may want to use 2 seperated embedding layers >>> net_encode = EmbeddingInputlayer( ... inputs = encode_seqs, ... vocabulary_size = 10000, ... embedding_size = 200, ... name = 'seq_embedding') >>> vs.reuse_variables() >>> tl.layers.set_name_reuse(True) >>> net_decode = EmbeddingInputlayer( ... inputs = decode_seqs, ... vocabulary_size = 10000, ... embedding_size = 200, ... name = 'seq_embedding') >>> net = Seq2Seq(net_encode, net_decode, ... cell_fn = tf.nn.rnn_cell.LSTMCell, ... n_hidden = 200, ... initializer = tf.random_uniform_initializer(-0.1, 0.1), ... encode_sequence_length = retrieve_seq_length_op2(encode_seqs), ... decode_sequence_length = retrieve_seq_length_op2(decode_seqs), ... initial_state = None, ... dropout = None, ... n_layer = 1, ... return_seq_2d = True, ... name = 'seq2seq') >>> net_out = DenseLayer(net, n_units=10000, act=tf.identity, name='output') >>> e_loss = tl.cost.cross_entropy_seq_with_mask(logits=net_out.outputs, target_seqs=target_seqs, input_mask=target_mask, return_details=False, name='cost') >>> y = tf.nn.softmax(net_out.outputs) >>> net_out.print_params(False)
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
PeekySeq2Seq¶
class tensorlayer.layers.PeekySeq2Seq(net_encode_in=None, net_decode_in=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, in_sequence_length=None, out_sequence_length=None, initial_state=None, dropout=None, n_layer=1, return_seq_2d=False, name='peeky_seq2seq')[source]¶
Waiting for contribution. The PeekySeq2Seq class; see Model and Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
AttentionSeq2Seq¶
class tensorlayer.layers.AttentionSeq2Seq(net_encode_in=None, net_decode_in=None, cell_fn=None, cell_init_args={'state_is_tuple': True}, n_hidden=256, initializer=<tensorflow.python.ops.init_ops.RandomUniform object>, in_sequence_length=None, out_sequence_length=None, initial_state=None, dropout=None, n_layer=1, return_seq_2d=False, name='attention_seq2seq')[source]¶
Waiting for contribution. The AttentionSeq2Seq class; see Model and Neural Machine Translation by Jointly Learning to Align and Translate.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Shape layer¶
Flatten layer¶
class tensorlayer.layers.FlattenLayer(layer=None, name='flatten_layer')[source]¶
The FlattenLayer class is a layer which reshapes high-dimensional input into a vector, so that we can apply DenseLayer, RNNLayer, ConcatLayer, etc. on top of it.
[batch_size, mask_row, mask_col, n_mask] —> [batch_size, mask_row * mask_col * n_mask]
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- name : a string or None
An optional name to attach to this layer.
Examples
>>> x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.Conv2dLayer(network,
...                                 act = tf.nn.relu,
...                                 shape = [5, 5, 1, 64],  # 64 features for each 5x5 patch of the 1-channel input
...                                 strides=[1, 1, 1, 1],
...                                 padding='SAME',
...                                 name ='cnn_layer')
>>> network = tl.layers.PoolLayer(network,
...                                 ksize=[1, 2, 2, 1],
...                                 strides=[1, 2, 2, 1],
...                                 padding='SAME',
...                                 pool = tf.nn.max_pool,
...                                 name ='pool_layer')
>>> network = tl.layers.FlattenLayer(network, name='flatten_layer')
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Reshape layer¶
class tensorlayer.layers.ReshapeLayer(layer=None, shape=[], name='reshape_layer')[source]¶
The ReshapeLayer class is a layer which reshapes the tensor.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- shape : a list
The output shape.
- name : a string or None
An optional name to attach to this layer.
Examples
- The core of this layer is tf.reshape.
- Use TensorFlow only :
>>> x = tf.placeholder(tf.float32, shape=[None, 3])
>>> y = tf.reshape(x, shape=[-1, 3, 3])
>>> sess = tf.InteractiveSession()
>>> print(sess.run(y, feed_dict={x:[[1,1,1],[2,2,2],[3,3,3],[4,4,4],[5,5,5],[6,6,6]]}))
... [[[ 1.  1.  1.]
...   [ 2.  2.  2.]
...   [ 3.  3.  3.]]
...  [[ 4.  4.  4.]
...   [ 5.  5.  5.]
...   [ 6.  6.  6.]]]
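The same reshape expressed as a layer inside a TensorLayer network, a minimal sketch:
>>> x = tf.placeholder(tf.float32, shape=[None, 3])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.ReshapeLayer(network, shape=[-1, 3, 3], name='reshape_layer')
>>> print(network.outputs)
... # a Tensor of shape (?, 3, 3)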
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Lambda layer¶
-
class
tensorlayer.layers.
LambdaLayer
(layer=None, fn=None, fn_args={}, name='lambda_layer')[source]¶ The
LambdaLayer
class is a layer which is able to use the provided function.Parameters: - layer : a
Layer
instance The Layer class feeding into this layer.
- fn : a function
The function to apply to the outputs of the previous layer.
- fn_args : a dictionary
The arguments for the function (optional).
- name : a string or None
An optional name to attach to this layer.
Examples
>>> x = tf.placeholder(tf.float32, shape=[None, 1], name='x')
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = LambdaLayer(network, lambda x: 2*x, name='lambda_layer')
>>> y = network.outputs
>>> sess = tf.InteractiveSession()
>>> out = sess.run(y, feed_dict={x : [[1],[2]]})
... [[2],[4]]
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Merge layer¶
Concat layer¶
class tensorlayer.layers.ConcatLayer(layer=[], concat_dim=1, name='concat_layer')[source]¶
The ConcatLayer class is a layer which concatenates (merges) two or more DenseLayer outputs into a single DenseLayer.
Parameters:
- layer : a list of Layer instances
The Layer classes feeding into this layer.
- concat_dim : int
Dimension along which to concatenate.
- name : a string or None
An optional name to attach to this layer.
Examples
>>> sess = tf.InteractiveSession() >>> x = tf.placeholder(tf.float32, shape=[None, 784]) >>> inputs = tl.layers.InputLayer(x, name='input_layer') >>> net1 = tl.layers.DenseLayer(inputs, n_units=800, act = tf.nn.relu, name='relu1_1') >>> net2 = tl.layers.DenseLayer(inputs, n_units=300, act = tf.nn.relu, name='relu2_1') >>> network = tl.layers.ConcatLayer(layer = [net1, net2], name ='concat_layer') ... [TL] InputLayer input_layer (?, 784) ... [TL] DenseLayer relu1_1: 800, <function relu at 0x1108e41e0> ... [TL] DenseLayer relu2_1: 300, <function relu at 0x1108e41e0> ... [TL] ConcatLayer concat_layer, 1100 ... >>> tl.layers.initialize_global_variables(sess) >>> network.print_params() ... param 0: (784, 800) (mean: 0.000021, median: -0.000020 std: 0.035525) ... param 1: (800,) (mean: 0.000000, median: 0.000000 std: 0.000000) ... param 2: (784, 300) (mean: 0.000000, median: -0.000048 std: 0.042947) ... param 3: (300,) (mean: 0.000000, median: 0.000000 std: 0.000000) ... num of params: 863500 >>> network.print_layers() ... layer 0: Tensor("Relu:0", shape=(?, 800), dtype=float32) ... layer 1: Tensor("Relu_1:0", shape=(?, 300), dtype=float32) ...
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Element-wise layer¶
class tensorlayer.layers.ElementwiseLayer(layer=[], combine_fn=<function minimum>, name='elementwise_layer')[source]¶
The ElementwiseLayer class combines multiple Layer instances which have the same output shapes, using a given element-wise operation.
Parameters:
- layer : a list of Layer instances
The Layer classes feeding into this layer.
- combine_fn : a TensorFlow element-wise merge function
e.g. AND is tf.minimum; OR is tf.maximum; ADD is tf.add; MUL is tf.multiply; and so on. See the TensorFlow Math API.
- name : a string or None
An optional name to attach to this layer.
Examples
- AND Logic
>>> net_0 = tl.layers.DenseLayer(net_0, n_units=500,
...                              act = tf.nn.relu, name='net_0')
>>> net_1 = tl.layers.DenseLayer(net_1, n_units=500,
...                              act = tf.nn.relu, name='net_1')
>>> net_com = tl.layers.ElementwiseLayer(layer = [net_0, net_1],
...                                      combine_fn = tf.minimum,
...                                      name = 'combine_layer')
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Extend layer¶
Expand dims layer¶
class tensorlayer.layers.ExpandDimsLayer(layer=None, axis=None, name='expand_dims')[source]¶
The ExpandDimsLayer class inserts a dimension of 1 into a tensor's shape; see tf.expand_dims().
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- axis : int, 0-D (scalar).
Specifies the dimension index at which to expand the shape of input.
- name : a string or None
An optional name to attach to this layer.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
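A minimal sketch, adding a trailing channel dimension to 2-D inputs (shapes and names are illustrative):
>>> x = tf.placeholder(tf.float32, shape=[None, 100])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.ExpandDimsLayer(network, axis=2, name='expand_dims')
>>> # network.outputs now has shape (?, 100, 1)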
Tile layer¶
class tensorlayer.layers.TileLayer(layer=None, multiples=None, name='tile')[source]¶
The TileLayer class constructs a tensor by tiling a given tensor; see tf.tile().
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- multiples : a list of ints
Must be one of the following types: int32, int64. 1-D; its length must be the same as the number of dimensions in the input.
- name : a string or None
An optional name to attach to this layer.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
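A minimal sketch, continuing the ExpandDimsLayer example above by repeating the new dimension three times (shapes and names are illustrative):
>>> x = tf.placeholder(tf.float32, shape=[None, 100])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.ExpandDimsLayer(network, axis=2, name='expand_dims')
>>> network = tl.layers.TileLayer(network, multiples=[1, 1, 3], name='tile')
>>> # network.outputs now has shape (?, 100, 3)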
Connect TF-Slim¶
Yes! TF-Slim models can be connected to TensorLayer, and all of Google's pre-trained models can be used easily; see Slim-model.
class tensorlayer.layers.SlimNetsLayer(layer=None, slim_layer=None, slim_args={}, name='tfslim_layer')[source]¶
The SlimNetsLayer class can be used to merge any TF-Slim net into TensorLayer. Models can be found in slim-model; for more about slim, see slim-git.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- slim_layer : a slim network function
The network you want to stack on top of; it must end with return net, end_points.
- slim_args : dictionary
The arguments for the slim model.
- name : a string or None
An optional name to attach to this layer.
Notes
Because TF-Slim stores the layers in a dictionary, the all_layers in this network is not in order! Fortunately, the all_params are in order.
Examples
- see Inception V3 example on Github
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
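A hedged sketch along the lines of that example (the import path of the slim nets varies across TensorFlow versions, and slim_args depends on the chosen model):
>>> from tensorflow.contrib.slim.python.slim.nets.inception_v3 import inception_v3  # path may differ for your TF version
>>> x = tf.placeholder(tf.float32, shape=[None, 299, 299, 3])
>>> net_in = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.SlimNetsLayer(layer=net_in,
...             slim_layer=inception_v3,
...             slim_args={'num_classes': 1001, 'is_training': False},
...             name='InceptionV3')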
Connect Keras¶
Yes! Keras models can be connected to TensorLayer! See tutorial_keras.py.
class tensorlayer.layers.KerasLayer(layer=None, keras_layer=None, keras_args={}, name='keras_layer')[source]¶
The KerasLayer class can be used to merge any Keras layers into TensorLayer. An example can be found in tutorial_keras.py.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer.
- keras_layer : a keras network function
- keras_args : dictionary
The arguments for the keras model.
- name : a string or None
An optional name to attach to this layer.
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
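A hedged sketch of the pattern used in tutorial_keras.py: keras_layer is any function that takes a tensor and returns a tensor built from Keras layers (the layer sizes here are illustrative).
>>> import keras
>>> def keras_block(x):  # tensor in, tensor out
...     x = keras.layers.Dense(800, activation='relu')(x)
...     return keras.layers.Dense(10, activation='linear')(x)
>>> x = tf.placeholder(tf.float32, shape=[None, 784])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.KerasLayer(network, keras_layer=keras_block, name='keras_layer')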
Parametric activation layer¶
class tensorlayer.layers.PReluLayer(layer=None, channel_shared=False, a_init=<tensorflow.python.ops.init_ops.Constant object>, a_init_args={}, name='prelu_layer')[source]¶
The PReluLayer class is a Parametric Rectified Linear Unit layer.
Parameters:
- layer : a Layer instance
The Layer class feeding into this layer; its outputs must be a Tensor with type float, double, int32, int64, uint8, int16, or int8.
- channel_shared : bool. If True, a single weight (alpha) is shared by all channels.
- a_init : alpha initializer, default zero constant.
The initializer for initializing the alphas.
- a_init_args : dictionary
The arguments for the weights initializer.
- name : A name for this activation op (optional).
References
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
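A minimal usage sketch (names are illustrative): stack PReluLayer after a linear layer, so the learned alphas act as the activation.
>>> x = tf.placeholder(tf.float32, shape=[None, 784])
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(network, n_units=800,
...             act=tf.identity, name='dense1')
>>> network = tl.layers.PReluLayer(network,
...             channel_shared=False, name='prelu_layer')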
Flow control layer¶
class tensorlayer.layers.MultiplexerLayer(layer=[], name='mux_layer')[source]¶
The MultiplexerLayer selects one of several inputs and forwards the selected input to the output; see tutorial_mnist_multiplexer.py.
Parameters:
- layer : a list of Layer instances
The Layer classes feeding into this layer.
- name : a string or None
An optional name to attach to this layer.
References
- See tf.pack() (TF0.12) or tf.stack() (TF1.0) and tf.gather() at TensorFlow - Slicing and Joining
Examples
>>> x = tf.placeholder(tf.float32, shape=[None, 784], name='x') >>> y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_') >>> # define the network >>> net_in = tl.layers.InputLayer(x, name='input_layer') >>> net_in = tl.layers.DropoutLayer(net_in, keep=0.8, name='drop1') >>> # net 0 >>> net_0 = tl.layers.DenseLayer(net_in, n_units=800, ... act = tf.nn.relu, name='net0/relu1') >>> net_0 = tl.layers.DropoutLayer(net_0, keep=0.5, name='net0/drop2') >>> net_0 = tl.layers.DenseLayer(net_0, n_units=800, ... act = tf.nn.relu, name='net0/relu2') >>> # net 1 >>> net_1 = tl.layers.DenseLayer(net_in, n_units=800, ... act = tf.nn.relu, name='net1/relu1') >>> net_1 = tl.layers.DropoutLayer(net_1, keep=0.8, name='net1/drop2') >>> net_1 = tl.layers.DenseLayer(net_1, n_units=800, ... act = tf.nn.relu, name='net1/relu2') >>> net_1 = tl.layers.DropoutLayer(net_1, keep=0.8, name='net1/drop3') >>> net_1 = tl.layers.DenseLayer(net_1, n_units=800, ... act = tf.nn.relu, name='net1/relu3') >>> # multiplexer >>> net_mux = tl.layers.MultiplexerLayer(layer = [net_0, net_1], name='mux_layer') >>> network = tl.layers.ReshapeLayer(net_mux, shape=[-1, 800], name='reshape_layer') # >>> network = tl.layers.DropoutLayer(network, keep=0.5, name='drop3') >>> # output layer >>> network = tl.layers.DenseLayer(network, n_units=10, ... act = tf.identity, name='output_layer')
Methods
count_params() : Return the number of parameters in the network.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
Wrapper¶
Embedding + Attention + Seq2seq¶
class tensorlayer.layers.EmbeddingAttentionSeq2seqWrapper(source_vocab_size, target_vocab_size, buckets, size, num_layers, max_gradient_norm, batch_size, learning_rate, learning_rate_decay_factor, use_lstm=False, num_samples=512, forward_only=False, name='wrapper')[source]¶
Sequence-to-sequence model with attention and multiple buckets.
This example implements a multi-layer recurrent neural network as the encoder and an attention-based decoder. It is the same as the model described in Grammar as a Foreign Language; please look there for details, or into the seq2seq library for the complete model implementation. This example also allows the use of GRU cells in addition to LSTM cells, and sampled softmax to handle a large output vocabulary size. A single-layer version of this model, but with a bidirectional encoder, was presented in Neural Machine Translation by Jointly Learning to Align and Translate. The sampled softmax is described in Section 3 of On Using Very Large Target Vocabulary for Neural Machine Translation.
Parameters: - source_vocab_size : size of the source vocabulary.
- target_vocab_size : size of the target vocabulary.
- buckets : a list of pairs (I, O), where I specifies maximum input length
that will be processed in that bucket, and O specifies maximum output length. Training instances that have inputs longer than I or outputs longer than O will be pushed to the next bucket and padded accordingly. We assume that the list is sorted, e.g., [(2, 4), (8, 16)].
- size : number of units in each layer of the model.
- num_layers : number of layers in the model.
- max_gradient_norm : gradients will be clipped to maximally this norm.
- batch_size : the size of the batches used during training;
the model construction is independent of batch_size, so it can be changed after initialization if this is convenient, e.g., for decoding.
- learning_rate : learning rate to start with.
- learning_rate_decay_factor : decay learning rate by this much when needed.
- use_lstm : if true, we use LSTM cells instead of GRU cells.
- num_samples : number of samples for sampled softmax.
- forward_only : if set, we do not construct the backward pass in the model.
- name : a string or None
An optional name to attach to this layer.
Methods
count_params() : Return the number of parameters in the network.
get_batch(data, bucket_id[, PAD_ID, GO_ID, …]) : Get a random batch of data from the specified bucket, prepared for step.
print_layers() : Print all info of layers in the network.
print_params([details]) : Print all info of parameters in the network.
step(session, encoder_inputs, …) : Run a step of the model feeding the given inputs.
get_batch(data, bucket_id, PAD_ID=0, GO_ID=1, EOS_ID=2, UNK_ID=3)[source]¶
Get a random batch of data from the specified bucket and prepare it for step.
To feed data into step(..) it must be a list of batch-major vectors, while the data here contains single length-major cases. So the main logic of this function is to re-index the data cases into the proper format for feeding.
Parameters: - data : a tuple of size len(self.buckets) in which each element contains
lists of pairs of input and output data that we use to create a batch.
- bucket_id : integer, which bucket to get the batch for.
- PAD_ID : int
Index of Padding in vocabulary
- GO_ID : int
Index of GO in vocabulary
- EOS_ID : int
Index of End of sentence in vocabulary
- UNK_ID : int
Index of Unknown word in vocabulary
Returns:
The triple (encoder_inputs, decoder_inputs, target_weights) for the constructed batch, in the proper format to call step(…) later.
step(session, encoder_inputs, decoder_inputs, target_weights, bucket_id, forward_only)[source]¶
Run a step of the model feeding the given inputs.
Parameters: - session : tensorflow session to use.
- encoder_inputs : list of numpy int vectors to feed as encoder inputs.
- decoder_inputs : list of numpy int vectors to feed as decoder inputs.
- target_weights : list of numpy float vectors to feed as target weights.
- bucket_id : which bucket of the model to use.
- forward_only : whether to do the backward step or only forward.
Returns:
A triple consisting of the gradient norm (or None if we did not do a backward pass), the average perplexity, and the outputs.
Raises:
ValueError : if the length of encoder_inputs, decoder_inputs, or target_weights disagrees with the bucket size for the specified bucket_id.
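A construction sketch taken directly from the signature above; the hyperparameter values are illustrative, not recommendations.
>>> buckets = [(5, 10), (10, 15), (20, 25), (40, 50)]
>>> model = tl.layers.EmbeddingAttentionSeq2seqWrapper(
...             source_vocab_size=40000,
...             target_vocab_size=40000,
...             buckets=buckets,
...             size=256, num_layers=2,
...             max_gradient_norm=5.0,
...             batch_size=64,
...             learning_rate=0.5,
...             learning_rate_decay_factor=0.99,
...             use_lstm=False, num_samples=512,
...             forward_only=False, name='wrapper')
>>> # per training step (sketch):
>>> # encoder_inputs, decoder_inputs, target_weights = model.get_batch(data, bucket_id)
>>> # gradient_norm, perplexity, outputs = model.step(sess, encoder_inputs,
>>> #     decoder_inputs, target_weights, bucket_id, False)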
Helper functions¶
Flatten tensor¶
tensorlayer.layers.flatten_reshape(variable, name='')[source]¶
Reshapes high-dimensional input into a vector:
[batch_size, mask_row, mask_col, n_mask] —> [batch_size, mask_row * mask_col * n_mask]
Parameters: - variable : a tensorflow variable
- name : a string or None
An optional name to attach to this layer.
Examples
>>> W_conv2 = weight_variable([5, 5, 100, 32])   # 32 features for each 5x5 patch
>>> b_conv2 = bias_variable([32])
>>> W_fc1 = weight_variable([7 * 7 * 32, 256])
>>> h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
>>> h_pool2 = max_pool_2x2(h_conv2)
>>> h_pool2.get_shape()[:].as_list() = [batch_size, 7, 7, 32]
...   [batch_size, mask_row, mask_col, n_mask]
>>> h_pool2_flat = tl.layers.flatten_reshape(h_pool2)
...   [batch_size, mask_row * mask_col * n_mask]
>>> h_pool2_flat_drop = tf.nn.dropout(h_pool2_flat, keep_prob)
Permanently clear existing layer names¶
tensorlayer.layers.clear_layers_name()[source]¶
Clear all layer names in set_keep['_layers_name_list'], enabling layer name reuse.
Examples
>>> network = tl.layers.InputLayer(x, name='input_layer')
>>> network = tl.layers.DenseLayer(network, n_units=800, name='relu1')
...
>>> tl.layers.clear_layers_name()
>>> network2 = tl.layers.InputLayer(x, name='input_layer')
>>> network2 = tl.layers.DenseLayer(network2, n_units=800, name='relu1')
...