Next, we use TensorFlow/Keras to do the same task. We build a tf.keras.Sequential model with a SimpleRNN layer (the most basic recurrent layer) followed by a Dense output layer. The workflow is the same as before: create the synthetic sine data and split it into train/test sets, then define, train, and evaluate the model.
import numpy as np
import tensorflow as tf
# 1. Data preparation: same sine wave data and sequences as above
time_steps = np.linspace(0, 100, 500)
data = np.sin(time_steps) # (500,)
seq_length = 20
X, y = [], []
for i in range(len(data) - seq_length):
    X.append(data[i:i+seq_length])
    y.append(data[i+seq_length])
X = np.array(X) # (480, seq_length)
y = np.array(y) # (480,)
# reshape for RNN: (samples, timesteps, features)
X = X.reshape(-1, seq_length, 1) # (480, 20, 1)
y = y.reshape(-1, 1) # (480, 1)
# Split into train/test (80/20)
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]
Data: We use the same sine-wave sequence and sliding-window split as in the PyTorch example. The arrays are reshaped to (batch, timesteps, features), the input layout Keras RNN layers expect.
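As a quick sanity check (not part of the original example), you can confirm the resulting array shapes and verify that each target really is the value that follows its window:
# Sanity check: shapes and window/target alignment (assumes the arrays built above)
print("X_train:", X_train.shape)  # expected (384, 20, 1) after the 80/20 split of 480 samples
print("y_train:", y_train.shape)  # expected (384, 1)
print("X_test: ", X_test.shape)   # expected (96, 20, 1)
assert np.allclose(X[0, :, 0], data[:seq_length])  # first window matches the start of the series
assert np.isclose(y[0, 0], data[seq_length])       # first target is the next value after that window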
# 2. Model definition: Keras SimpleRNN and Dense
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(seq_length, 1)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse') # MSE loss and Adam optimizer
model.summary()
Explanation: SimpleRNN(16) creates a single recurrent layer with 16 hidden units. With the default return_sequences=False it outputs only the final hidden state, which Dense(1) maps to the one-step-ahead prediction; Keras unrolls over the time dimension internally, so no manual loop over timesteps is needed. model.summary() shows the layer output shapes and parameter counts.
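To see where the parameter counts in the summary come from, here is a rough back-of-the-envelope check (a sketch, assuming 16 units and 1 input feature as above): SimpleRNN stores an input kernel, a recurrent kernel, and a bias, while Dense stores a kernel and a bias.
# Manual parameter count; the total should match model.summary()
units, features = 16, 1
rnn_params = units * features + units * units + units  # input kernel + recurrent kernel + bias = 288
dense_params = units * 1 + 1                            # kernel + bias = 17
print("SimpleRNN:", rnn_params, "Dense:", dense_params, "Total:", rnn_params + dense_params)  # 288 17 305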
# 3. Training
history = model.fit(
    X_train, y_train,
    epochs=50,
    batch_size=32,
    validation_split=0.2,  # use 20% of the training data for validation
    verbose=1
)
Training: We train for 50 epochs. The fit call also reports validation loss (using a 20% split of the training data) to monitor generalization.
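If you want to inspect how training progressed, the History object returned by fit stores the per-epoch losses. A minimal plotting sketch (assuming matplotlib is available; not part of the original example):
import matplotlib.pyplot as plt
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MSE')
plt.legend()
plt.show()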
# 4. Evaluation on test set
test_loss = model.evaluate(X_test, y_test, verbose=0)
print(f'Test Loss: {test_loss:.4f}')
# (Optional) Predictions
predictions = model.predict(X_test)
print("Actual:", y_test.flatten()[:5])
print("Pred : ", predictions.flatten()[:5])
Evaluation: After training, we call model.evaluate on the test set. A low test loss indicates good forecasting accuracy. We also predict and compare a few samples of actual vs. predicted values. This completes the simple RNN forecasting example in TensorFlow.
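As an optional follow-up (a sketch built on the arrays above, not part of the original example), you can report the RMSE, which is in the same units as the signal and therefore easier to interpret than raw MSE, and plot the predictions against the true test targets:
# RMSE and a visual comparison of predicted vs. actual test targets (assumes matplotlib)
import matplotlib.pyplot as plt
rmse = np.sqrt(np.mean((predictions.flatten() - y_test.flatten()) ** 2))
print(f"Test RMSE: {rmse:.4f}")
plt.plot(y_test.flatten(), label="actual")
plt.plot(predictions.flatten(), label="predicted")
plt.xlabel("test sample index")
plt.ylabel("sin(t)")
plt.legend()
plt.show()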
Both examples use only basic RNN cells (no LSTM/GRU) and include data preparation, model definition, training, and evaluation. The PyTorch code uses nn.RNN, and the Keras code uses the SimpleRNN layer. Each code block above is self-contained and can be run independently with standard libraries (NumPy plus either PyTorch or TensorFlow).