๐ŸŒŸ Modern AI Learning, Stage 2: Recurrent Neural Networks (RNNs) and Time-Series Data

๐Ÿ“… Study period: months 7–9

๐ŸŽฏ Learning goal: understand time-series data processing and recurrent neural networks


๐Ÿ“ 1. ์ˆœํ™˜ ์‹ ๊ฒฝ๋ง(RNN) ๊ธฐ์ดˆ

์ˆœํ™˜ ์‹ ๊ฒฝ๋ง(RNN, Recurrent Neural Network)์€ ์ˆœ์ฐจ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒ˜๋ฆฌํ•˜๋Š” ๋ฐ ํŠนํ™”๋œ ์‹ ๊ฒฝ๋ง์ž…๋‹ˆ๋‹ค.

  • ์ž…๋ ฅ์ด ์—ฐ์†์ ์ธ ๊ฒฝ์šฐ์— ์ ํ•ฉ (์˜ˆ: ์ฃผ๊ฐ€ ์˜ˆ์ธก, ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ)
  • ์ด์ „์˜ ๊ณ„์‚ฐ ๊ฒฐ๊ณผ๋ฅผ ๋‹ค์Œ ๊ณ„์‚ฐ์— ํ”ผ๋“œ๋ฐฑํ•˜์—ฌ ํ™œ์šฉ

๐Ÿ“Œ 1-1. Understanding the RNN Structure

An RNN learns not only the relationship between input and output but also how the data evolves over time.

  • Input: time-series or other sequential data
  • Recurrent node: passes the previous state on to the next step
  • Output: a prediction or classification over the sequence

๐Ÿ”‘ 1) Mathematical Formulation of an RNN

In an RNN, the hidden state h_t changes over time:

h_t = f(W_x · x_t + W_h · h_{t-1} + b)

where

  • x_t: current input
  • h_{t-1}: previous hidden state
  • W_x, W_h: weight matrices
  • b: bias
  • f: activation function (typically tanh)
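The recurrence above can be sketched directly in NumPy. The dimensions and random weights here are illustrative assumptions, not trained values; the point is simply how the hidden state is carried from one step to the next:

```python
import numpy as np

# Toy dimensions chosen for illustration: input_size=3, hidden_size=4
rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4
W_x = rng.normal(size=(hidden_size, input_size))  # input-to-hidden weights
W_h = rng.normal(size=(hidden_size, hidden_size))  # hidden-to-hidden weights
b = np.zeros(hidden_size)                          # bias

def rnn_step(x_t, h_prev):
    # h_t = tanh(W_x · x_t + W_h · h_{t-1} + b)
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

h = np.zeros(hidden_size)               # initial hidden state h_0
xs = rng.normal(size=(5, input_size))   # a sequence of 5 inputs
for x_t in xs:
    h = rnn_step(x_t, h)                # the same weights are reused at every step

print(h.shape)  # (4,)
```

Because tanh squashes its input, every component of the hidden state stays in (-1, 1) no matter how long the sequence is.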

๐Ÿ’ป Code exercise: implementing a simple RNN

import torch
import torch.nn as nn

# Define the RNN model
class SimpleRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(SimpleRNN, self).__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.rnn(x)
        out = self.fc(out[:, -1, :])  # output of the last time step
        return out

# Create the model
model = SimpleRNN(input_size=1, hidden_size=10, output_size=1)
print(model)
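As a quick sanity check on the shapes involved (the dummy batch size and sequence length below are arbitrary), `nn.RNN` with `batch_first=True` returns the hidden state at every time step plus the final hidden state, which is why the model above keeps only `out[:, -1, :]`:

```python
import torch
import torch.nn as nn

# A batch of 2 sequences, 5 time steps each, 1 feature per step:
# batch_first=True means the input layout is (batch, seq_len, input_size).
rnn = nn.RNN(input_size=1, hidden_size=10, batch_first=True)
x = torch.randn(2, 5, 1)
out, h_n = rnn(x)

print(out.shape)  # torch.Size([2, 5, 10]) — hidden state at every time step
print(h_n.shape)  # torch.Size([1, 2, 10]) — final hidden state per sequence
```

For a single-layer, unidirectional RNN, `out[:, -1, :]` and `h_n[0]` are the same tensor, so slicing the last time step is exactly "take the final hidden state".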

๐Ÿ“Œ 1-2. Limitations of RNNs and Improvements

๋ฐ˜์‘ํ˜•

๐Ÿ”‘ 1) Limitations of RNNs

  • Long-term dependency problem: long-range context is hard to learn
  • Vanishing gradient problem: gradients shrink toward zero during backpropagation

๐Ÿ’ก Improvements:

  1. LSTM (Long Short-Term Memory)
    • Adds a cell state to address the long-term dependency problem
  2. GRU (Gated Recurrent Unit)
    • Simpler than an LSTM, with comparable performance

๐Ÿ” 2. LSTM๊ณผ GRU: ๊ฐœ์„ ๋œ ์ˆœํ™˜ ์‹ ๊ฒฝ๋ง

๐Ÿ“Œ 2-1. LSTM (Long Short-Term Memory)

LSTM์€ RNN์˜ ํ•œ๊ณ„๋ฅผ ๊ทน๋ณตํ•˜๊ธฐ ์œ„ํ•ด **์…€ ์ƒํƒœ(Cell State)**๋ฅผ ์ถ”๊ฐ€ํ•œ ๊ตฌ์กฐ์ž…๋‹ˆ๋‹ค.

  • ๊ฒŒ์ดํŠธ ๊ตฌ์กฐ: ์ž…๋ ฅ ๊ฒŒ์ดํŠธ, ๋ง๊ฐ ๊ฒŒ์ดํŠธ, ์ถœ๋ ฅ ๊ฒŒ์ดํŠธ
  • ์…€ ์ƒํƒœ: ์ค‘์š”ํ•œ ์ •๋ณด๋ฅผ ์žฅ๊ธฐ์ ์œผ๋กœ ๊ธฐ์–ต

๐Ÿ’ป Code exercise: implementing an LSTM

class SimpleLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(SimpleLSTM, self).__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.lstm(x)
        out = self.fc(out[:, -1, :])
        return out

model = SimpleLSTM(input_size=1, hidden_size=10, output_size=1)
print(model)

๐Ÿ“Œ 2-2. GRU (Gated Recurrent Unit)

A GRU is a simplified form of the LSTM that merges the memory cell into the hidden state and uses fewer gates.

  • Advantage: lower computational cost
  • Disadvantage: somewhat weaker at modeling complex patterns

๐Ÿ’ป Code exercise: implementing a GRU

class SimpleGRU(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(SimpleGRU, self).__init__()
        self.gru = nn.GRU(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.gru(x)
        out = self.fc(out[:, -1, :])
        return out

model = SimpleGRU(input_size=1, hidden_size=10, output_size=1)
print(model)
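One way to see the GRU's lower computational cost is to count parameters at the same layer sizes as the examples above. An LSTM layer has four weight blocks (three gates plus the candidate cell state) while a GRU has three, so the GRU comes out about 25% smaller:

```python
import torch.nn as nn

def n_params(module):
    # Total number of trainable parameters in a module
    return sum(p.numel() for p in module.parameters())

# Same sizes as the models above: input_size=1, hidden_size=10
lstm = nn.LSTM(input_size=1, hidden_size=10, batch_first=True)
gru = nn.GRU(input_size=1, hidden_size=10, batch_first=True)

# Per layer: blocks * (H*I + H*H + H + H) with H=10, I=1
print(n_params(lstm))  # 4 * (10 + 100 + 10 + 10) = 520
print(n_params(gru))   # 3 * (10 + 100 + 10 + 10) = 390
```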

๐Ÿ“ˆ 3. Time-Series Analysis with RNNs: A Worked Example

๐Ÿ“Œ 3-1. Stock-Price Prediction Model (using an LSTM)

Goal: predict stock-price movements
Data: historical prices downloaded from Yahoo Finance

๐Ÿ’ป Code exercise: stock-price prediction model

import yfinance as yf
from sklearn.preprocessing import MinMaxScaler
import numpy as np

# Load the data
stock = yf.download("AAPL", start="2022-01-01", end="2023-01-01")
close_prices = stock['Close'].values

# Preprocess: scale prices into the [0, 1] range
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(close_prices.reshape(-1, 1))

# Build (sequence, label) pairs with a sliding window
def create_sequences(data, seq_length):
    sequences = []
    for i in range(len(data) - seq_length):
        seq = data[i:i + seq_length]       # seq_length consecutive prices
        label = data[i + seq_length]       # the price right after the window
        sequences.append((seq, label))
    return sequences

sequences = create_sequences(scaled_data, 5)

# Define and train the model (SimpleLSTM from section 2-1)
model = SimpleLSTM(input_size=1, hidden_size=50, output_size=1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Training loop
for epoch in range(50):
    for seq, label in sequences:
        seq = torch.tensor(seq, dtype=torch.float32).unsqueeze(0)      # (1, 5, 1)
        label = torch.tensor(label, dtype=torch.float32).unsqueeze(0)  # (1, 1), matches the model output

        optimizer.zero_grad()
        output = model(seq)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()

    if (epoch + 1) % 10 == 0:
        print(f"Epoch {epoch+1}, Loss: {loss.item():.4f}")
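After training, making a prediction means feeding in the most recent window and undoing the scaling with `scaler.inverse_transform`. The sketch below uses synthetic prices and an untrained model purely to show the shape and unscaling flow, not a real forecast:

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler

# Same architecture as SimpleLSTM in section 2-1
class SimpleLSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])

prices = np.linspace(100.0, 150.0, 60)         # synthetic stand-in for close prices
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices.reshape(-1, 1))

model = SimpleLSTM(input_size=1, hidden_size=50, output_size=1)
model.eval()
with torch.no_grad():
    last_seq = torch.tensor(scaled[-5:], dtype=torch.float32).unsqueeze(0)  # (1, 5, 1)
    pred_scaled = model(last_seq)                                           # (1, 1), in [0, 1] scale
    pred_price = scaler.inverse_transform(pred_scaled.numpy())              # back to price units

print(pred_price.shape)  # (1, 1) — the predicted next close price
```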

๐Ÿ“ ํ•™์Šต ์ฒดํฌ๋ฆฌ์ŠคํŠธ:

  • RNN์˜ ๊ธฐ๋ณธ ๊ตฌ์กฐ๋ฅผ ์ดํ•ดํ•˜๊ณ  ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ๋‹ค.
  • LSTM๊ณผ GRU์˜ ์ฐจ์ด์ ์„ ์ดํ•ดํ•˜๊ณ  ์ง์ ‘ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ๋‹ค.
  • ์‹œ๊ณ„์—ด ๋ฐ์ดํ„ฐ ์ฒ˜๋ฆฌ์˜ ๊ธฐ๋ณธ ๊ฐœ๋…์„ ํŒŒ์•…ํ•˜๊ณ  ์˜ˆ์ธก ๋ชจ๋ธ์„ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ๋‹ค.
  • ์ฃผ๊ฐ€ ์˜ˆ์ธก ๋ชจ๋ธ์„ ํ†ตํ•ด ์‹œ๊ณ„์—ด ๋ฐ์ดํ„ฐ ํ•™์Šต์„ ์‹ค์Šตํ•  ์ˆ˜ ์žˆ๋‹ค.

RNN, LSTM, GRU, time-series data, stock-price prediction, deep learning, PyTorch, recurrent neural networks, time-series analysis, long-term dependency problem, vanishing gradient problem

โ€ป ์ด ํฌ์ŠคํŒ…์€ ์ฟ ํŒก ํŒŒํŠธ๋„ˆ์Šค ํ™œ๋™์˜ ์ผํ™˜์œผ๋กœ, ์ด์— ๋”ฐ๋ฅธ ์ผ์ •์•ก์˜ ์ˆ˜์ˆ˜๋ฃŒ๋ฅผ ์ œ๊ณต๋ฐ›์Šต๋‹ˆ๋‹ค.