diff --git a/src/TensorFlowNET.Keras/Layers/LayersApi.cs b/src/TensorFlowNET.Keras/Layers/LayersApi.cs
index 458e87f5..3cd46c28 100644
--- a/src/TensorFlowNET.Keras/Layers/LayersApi.cs
+++ b/src/TensorFlowNET.Keras/Layers/LayersApi.cs
@@ -565,25 +565,31 @@ namespace Tensorflow.Keras.Layers
});
/// <summary>
- ///
+ /// Long Short-Term Memory layer - Hochreiter 1997.
/// </summary>
- /// <param name="units"></param>
- /// <param name="activation"></param>
- /// <param name="recurrent_activation"></param>
- /// <param name="use_bias"></param>
- /// <param name="kernel_initializer"></param>
- /// <param name="recurrent_initializer"></param>
- /// <param name="bias_initializer"></param>
- /// <param name="unit_forget_bias"></param>
- /// <param name="dropout"></param>
- /// <param name="recurrent_dropout"></param>
+ /// <param name="units">Positive integer, dimensionality of the output space.</param>
+ /// <param name="activation">Activation function to use. If you pass null, no activation is applied (i.e. "linear" activation: a(x) = x).</param>
+ /// <param name="recurrent_activation">Activation function to use for the recurrent step. If you pass null, no activation is applied (i.e. "linear" activation: a(x) = x).</param>
+ /// <param name="use_bias">Boolean (default True), whether the layer uses a bias vector.</param>
+ /// <param name="kernel_initializer">Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: glorot_uniform.</param>
+ /// <param name="recurrent_initializer">Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: orthogonal.</param>
+ /// <param name="bias_initializer">Initializer for the bias vector. Default: zeros.</param>
+ /// <param name="unit_forget_bias">Boolean (default True). If True, add 1 to the bias of the forget gate at initialization. Setting it to True will also force bias_initializer="zeros". This is recommended in Jozefowicz et al.</param>
+ /// <param name="dropout">Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0.</param>
+ /// <param name="recurrent_dropout">Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0.</param>
///
- /// <param name="return_sequences"></param>
- /// <param name="return_state"></param>
- /// <param name="go_backwards"></param>
- /// <param name="stateful"></param>
- /// <param name="time_major"></param>
- /// <param name="unroll"></param>
+ /// <param name="return_sequences">Boolean. Whether to return the last output in the output sequence, or the full sequence. Default: False.</param>
+ /// <param name="return_state">Whether to return the last state in addition to the output. Default: False.</param>
+ /// <param name="go_backwards">Boolean (default False). If True, process the input sequence backwards and return the reversed sequence.</param>
+ /// <param name="stateful">Boolean (default False). If True, the last state for each sample at index i in a batch will be used as initial state for the sample of index i in the following batch.</param>
+ /// <param name="time_major">
+ /// The shape format of the inputs and outputs tensors. If True, the inputs and outputs will be in shape [timesteps, batch, feature],
+ /// whereas in the False case, it will be [batch, timesteps, feature]. Using time_major = True is a bit more efficient because it avoids transposes at the
+ /// beginning and end of the RNN calculation. However, most TensorFlow data is batch-major, so by default this function accepts input and emits output in batch-major form.
+ /// </param>
+ /// <param name="unroll">Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed up an RNN,
+ /// although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
+ /// </param>
/// <returns></returns>
public Layer LSTM(int units,
Activation activation = null,