---
title: Black-box models
order: 2
---

## Feed-forward networks

Feed-forward networks (FFNs), also known as Multi-layer Perceptrons (MLPs), map inputs to outputs instantaneously, with no notion of time. They are universal function approximators, provided their layers can be made arbitrarily large.

However, FFNs do not have 'memory': their past inputs do not affect their current outputs. Any history must therefore be encoded into a single input, e.g. by concatenating previous and current state variables into a longer vector (see the sketch below).
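
A minimal sketch (not part of the original notes, assuming PyTorch) of how history can be folded into a single FFN input by concatenating the previous and current state vectors; the dimensions are illustrative:

```python
import torch
import torch.nn as nn

state_dim = 4    # size of a single state vector (illustrative)
hidden_dim = 32

# The FFN sees 2 * state_dim inputs: [previous state, current state].
ffn = nn.Sequential(
    nn.Linear(2 * state_dim, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, state_dim),  # predict the next state
)

prev_state = torch.randn(1, state_dim)  # x_{t-1}
curr_state = torch.randn(1, state_dim)  # x_t
stacked = torch.cat([prev_state, curr_state], dim=1)

next_state = ffn(stacked)               # estimate of x_{t+1}
print(next_state.shape)                 # torch.Size([1, 4])
```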

## Recurrent networks

Recurrent neural networks (RNNs) are *stateful* models. An LSTM network is able to memorize earlier states so that they affect later outputs. This is important when modelling dynamic systems where the future states of the system depend on the rate of change of states or other higher-order phenomena.

An LSTM network has two architectural parameters:

* **Cell**: A cell is the equivalent of a layer in an LSTM network. A cell takes in sequential data and outputs a transformed sequence.

* **Hidden unit**: Each cell has memory. The number of hidden units sets the size of that memory: it is the dimensionality of the state the cell carries forward from one time step to the next. A cell with 10 hidden units summarizes the input sequence seen so far in a 10-dimensional state (see the sketch below).
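
A minimal sketch (not part of the original notes, assuming PyTorch) of an LSTM with 2 stacked cells and 10 hidden units per cell; the input dimension and sequence length are illustrative:

```python
import torch
import torch.nn as nn

input_dim = 4      # size of each element in the input sequence (illustrative)
hidden_units = 10  # dimensionality of each cell's hidden/cell state
num_cells = 2      # number of stacked LSTM cells (layers)

lstm = nn.LSTM(input_size=input_dim, hidden_size=hidden_units,
               num_layers=num_cells, batch_first=True)

# Batch of 1 sequence with 20 time steps.
sequence = torch.randn(1, 20, input_dim)

# `outputs` is the transformed sequence from the last cell;
# `(h_n, c_n)` hold the final hidden and cell states of every cell.
outputs, (h_n, c_n) = lstm(sequence)
print(outputs.shape)  # torch.Size([1, 20, 10])
print(h_n.shape)      # torch.Size([2, 1, 10])
```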