6. Design and implement a deep learning network for forecasting time series data.
✅ Install Required Packages
pip install numpy scikit-learn tensorflow
PROGRAM:
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, TimeDistributed
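# Optional (not part of the original run, which used no seeds): fix seeds
# for reproducibility before generating data or building the model.
# np.random.seed(42)
# tf.random.set_seed(42)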
# ---------------------------
# 1) Generate synthetic time series
def generate_series(n=1000):
    t = np.arange(n)
    trend = 0.01 * t
    seasonal = np.sin(2 * np.pi * t / 50)
    noise = 0.1 * np.random.randn(n)
    series = 10 + trend + seasonal + noise
    return series
series = generate_series(1000)
# ---------------------------
# 2) Scale data
scaler = StandardScaler()
series_scaled = scaler.fit_transform(series.reshape(-1,1)).flatten()
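# Note: fitting the scaler on the full series leaks test statistics into
# training. Harmless for this synthetic demo; in practice, fit on the
# training portion only and reuse that scaler for the test data.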
# ---------------------------
# 3) Windowing
INPUT_WINDOW = 30
OUTPUT_WINDOW = 10
def make_windows(data, input_w, output_w):
    X, Y = [], []
    for i in range(len(data) - input_w - output_w):
        X.append(data[i:i+input_w])
        Y.append(data[i+input_w:i+input_w+output_w])
    return np.array(X), np.array(Y)
X, Y = make_windows(series_scaled, INPUT_WINDOW, OUTPUT_WINDOW)
X = X[..., np.newaxis]   # (samples, INPUT_WINDOW, 1)
Y = Y[..., np.newaxis]   # (samples, OUTPUT_WINDOW, 1)
# Train/test split (80/20)
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
Y_train, Y_test = Y[:split], Y[split:]
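# Optional sanity check: 1000 points with a 30-in / 10-out window give 960
# samples, split 768 train / 192 test -- i.e. the 24 and 6 batches of 32
# visible in the training log below.
# print(X_train.shape, Y_train.shape)   # (768, 30, 1) (768, 10, 1)
# print(X_test.shape, Y_test.shape)     # (192, 30, 1) (192, 10, 1)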
# ---------------------------
# 4) Build Seq2Seq LSTM model
# Encoder
encoder_inputs = Input(shape=(INPUT_WINDOW,1))
encoder_lstm = LSTM(64, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(encoder_inputs)
encoder_states = [state_h, state_c]
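# The encoder's final hidden and cell states summarise the input window and
# are used to initialise the decoder.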
# Decoder
decoder_inputs = Input(shape=(OUTPUT_WINDOW,1))
decoder_lstm = LSTM(64, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = TimeDistributed(Dense(1))   # one scalar forecast per decoder step
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
# Prepare decoder inputs: the decoder is fed zeros at every step (not true
# teacher forcing, since the targets are never fed back in). The forecast is
# therefore driven entirely by the encoder states.
decoder_input_train = np.zeros_like(Y_train)
decoder_input_test = np.zeros_like(Y_test)
# ---------------------------
# 5) Train
history = model.fit([X_train, decoder_input_train], Y_train,
                    validation_data=([X_test, decoder_input_test], Y_test),
                    epochs=20, batch_size=32, verbose=1)
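# Optional refinement (not used for the log below): stop early on stalled
# validation loss and keep the best weights.
# early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
#                                               restore_best_weights=True)
# history = model.fit([X_train, decoder_input_train], Y_train,
#                     validation_data=([X_test, decoder_input_test], Y_test),
#                     epochs=50, batch_size=32, callbacks=[early_stop], verbose=1)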
# ---------------------------
# 6) Evaluate
preds = model.predict([X_test, decoder_input_test])
# Inverse-transform so errors are reported in the original units of the series
preds_unscaled = scaler.inverse_transform(preds.reshape(-1,1)).reshape(preds.shape)
Y_test_unscaled = scaler.inverse_transform(Y_test.reshape(-1,1)).reshape(Y_test.shape)
mae = mean_absolute_error(Y_test_unscaled.flatten(), preds_unscaled.flatten())
rmse = np.sqrt(mean_squared_error(Y_test_unscaled.flatten(), preds_unscaled.flatten()))
print(f"Test MAE: {mae:.4f}, Test RMSE: {rmse:.4f}")
OUTPUT:
Epoch 1/20
24/24 [==============================] - 4s 49ms/step - loss: 0.2551 - mae: 0.3946 - val_loss: 0.3036 - val_mae: 0.4672
Epoch 2/20
24/24 [==============================] - 0s 13ms/step - loss: 0.0757 - mae: 0.2349 - val_loss: 0.1495 - val_mae: 0.3149
Epoch 3/20
24/24 [==============================] - 0s 13ms/step - loss: 0.0617 - mae: 0.2132 - val_loss: 0.1106 - val_mae: 0.2658
Epoch 4/20
24/24 [==============================] - 0s 17ms/step - loss: 0.0520 - mae: 0.1943 - val_loss: 0.1049 - val_mae: 0.2621
Epoch 5/20
24/24 [==============================] - 0s 17ms/step - loss: 0.0333 - mae: 0.1499 - val_loss: 0.0716 - val_mae: 0.2159
Epoch 6/20
24/24 [==============================] - 0s 15ms/step - loss: 0.0109 - mae: 0.0790 - val_loss: 0.0705 - val_mae: 0.2389
Epoch 7/20
24/24 [==============================] - 0s 16ms/step - loss: 0.0046 - mae: 0.0535 - val_loss: 0.0344 - val_mae: 0.1554
Epoch 8/20
24/24 [==============================] - 0s 16ms/step - loss: 0.0038 - mae: 0.0493 - val_loss: 0.0350 - val_mae: 0.1611
Epoch 9/20
24/24 [==============================] - 0s 14ms/step - loss: 0.0035 - mae: 0.0469 - val_loss: 0.0469 - val_mae: 0.1950
Epoch 10/20
24/24 [==============================] - 0s 16ms/step - loss: 0.0031 - mae: 0.0444 - val_loss: 0.0212 - val_mae: 0.1240
Epoch 11/20
24/24 [==============================] - 0s 17ms/step - loss: 0.0028 - mae: 0.0428 - val_loss: 0.0180 - val_mae: 0.1133
Epoch 12/20
24/24 [==============================] - 0s 17ms/step - loss: 0.0029 - mae: 0.0430 - val_loss: 0.0187 - val_mae: 0.1168
Epoch 13/20
24/24 [==============================] - 0s 17ms/step - loss: 0.0026 - mae: 0.0412 - val_loss: 0.0118 - val_mae: 0.0895
Epoch 14/20
24/24 [==============================] - 0s 12ms/step - loss: 0.0026 - mae: 0.0410 - val_loss: 0.0165 - val_mae: 0.1104
Epoch 15/20
24/24 [==============================] - 0s 16ms/step - loss: 0.0026 - mae: 0.0407 - val_loss: 0.0084 - val_mae: 0.0747
Epoch 16/20
24/24 [==============================] - 0s 16ms/step - loss: 0.0027 - mae: 0.0413 - val_loss: 0.0169 - val_mae: 0.1130
Epoch 17/20
24/24 [==============================] - 0s 16ms/step - loss: 0.0025 - mae: 0.0399 - val_loss: 0.0089 - val_mae: 0.0776
Epoch 18/20
24/24 [==============================] - 0s 17ms/step - loss: 0.0024 - mae: 0.0392 - val_loss: 0.0122 - val_mae: 0.0926
Epoch 19/20
24/24 [==============================] - 0s 17ms/step - loss: 0.0024 - mae: 0.0392 - val_loss: 0.0077 - val_mae: 0.0713
Epoch 20/20
24/24 [==============================] - 0s 15ms/step - loss: 0.0026 - mae: 0.0408 - val_loss: 0.0076 - val_mae: 0.0707
6/6 [==============================] - 1s 8ms/step
Test MAE: 0.2083, Test RMSE: 0.2570
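Note that the final MAE/RMSE are computed in the original (unscaled) units, which is why they differ from the scaled val_mae in the training log. As a quick visual check, the sketch below (assuming matplotlib, which is not in the install line above) overlays one predicted 10-step window on the actual values:

import matplotlib.pyplot as plt

i = 0  # first test window
plt.plot(range(OUTPUT_WINDOW), Y_test_unscaled[i].flatten(), label='actual')
plt.plot(range(OUTPUT_WINDOW), preds_unscaled[i].flatten(), label='predicted')
plt.xlabel('steps ahead')
plt.ylabel('series value')
plt.title('10-step forecast vs actual (first test window)')
plt.legend()
plt.show()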