Evolving cellular automata to solve problems, part 1

Recently I finished reading Complexity: A Guided Tour, by Melanie Mitchell, which reminded me a lot of Gödel, Escher, Bach (indeed, the book is dedicated to Douglas Hofstadter). It reacquainted me with the fascinating concept of emergence – simple individual units somehow coming together to produce complex behaviour that cannot really be explained in terms of the components. One of the first examples presented is the army ant: even in groups of 100, these ants are completely useless and will walk in circles until they die of exhaustion. But in a colony, they show remarkable sophistication and a “superintelligence”. I would highly recommend the article Army Ants: a Collective Intelligence, by Nigel Franks, and this short David Attenborough clip for more information. Apart from ants, Mitchell also makes consistent reference to cellular automata, the study of which really led to the growth of complexity science as a distinct field.

Cellular automata (CAs) have managed to divert the attention of many great scientists – Stephen Wolfram considers them the foundation of a “new kind of science”. I am clearly not one of these great thinkers, because I have never really understood what the fuss is all about. Although I have written about Conway’s Game of Life (a special case of CAs) in a previous post, I admit that I was more interested in the implementation than in the CAs themselves.

But Complexity has given me a fascinating new perspective. In Chapter 11, Mitchell introduces the idea that CAs can be designed (or should I say evolved) to solve computational tasks. Although it is widely known that the Game of Life is Turing-complete (arguably more interesting than Mitchell’s idea), this always felt far too abstract for me to appreciate – the proof relies on a demonstration that some logic gates can be emulated, rather than actually constructing a nontrivial program. However, in Evolving Cellular Automata with Genetic Algorithms (the journal article on which Chapter 11 is based), Mitchell and coauthors tangibly demonstrate that genetic algorithms can be used to evolve a CA to perform a simple but nontrivial task.

The paper is very interesting, but does not come with source code (in their defense, it was written in 1996 and they refer to the internet as the “World Wide Web”). I think that their work deserves a modern recreation and analysis, with the benefit of intuitive code in python and more processing power.

Because this is quite a large topic, I will split it into parts. Today’s post will set up the problem, dealing with notation and terminology, as well as discussing the nature of the CAs considered. We will leave it to a future post to actually apply genetic algorithms to evolve CAs.

A quick introduction to cellular automata

A cellular automaton can be thought of as a set of units (cells) whose states change based on simple rules. In the Game of Life, the universe is a 2D grid of binary cells (either alive or dead), with the update rules as follows:

  • A living cell (black) will die if it has fewer than two or more than three live neighbours – roughly corresponding to underpopulation and overpopulation respectively.
  • A dead cell (white) will come alive if it has exactly three living neighbours – capturing the idea of reproduction.

Note that to begin applying the rules, some initial condition must be specified – in practice, small changes in this starting point can drastically affect the resulting behaviour.

For our purposes we will be dealing with an arguably simpler model: a 1D binary CA. In this case, the universe consists of a row of N cells, each of which can either be dead (0) or alive (1), and each cell is updated by looking at the cells within a radius R on either side of it. So if $R=3$, any update rule must map a 7-bit input (the 3 cells on either side, plus the cell itself) to a 1-bit output (whether the cell will be dead or alive). A given rule might look something like this, bearing in mind that the output bit determines the new state of the middle cell:

A ruleset is the set of mappings between all possible input configurations and their output states. From here on, we may as well speak in specific terms and fix $R=3$, in which case a ruleset is a mapping from all possible 7-bit binary strings to a binary output:

Input    Output
0000001  1
0000010  0
…        …
0100101  0
0100110  1
…        …
1111110  1
1111111  0

If we assume that the input is ordered, then as a convenience of notation we can say that a ruleset is equal to the binary string formed by concatenating the output bits – in the above example, this is $10…01…10$.

It is very important to be comfortable with this concept: we are representing rulesets with binary strings of length 128, because there are 128 possible 7-bit inputs. Later on, we will also be using binary numbers to represent configurations of cells, so please remember to distinguish the two.
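
To make this concrete, here is a minimal sketch of the conversion in both directions, assuming ruleset dictionaries of the kind we build below; sorting the keys puts the inputs in ascending binary order.

def ruleset_to_string(ruleset):
    # Concatenate the output bits in ascending order of input.
    return "".join(ruleset[key] for key in sorted(ruleset))

def string_to_ruleset(bits, radius=3):
    # Inverse: the i-th bit is the output for the i-th possible input.
    width = 2 * radius + 1
    return {bin(i)[2:].zfill(width): bits[i] for i in range(2 ** width)}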

Because we are dealing with long 1D universes, animation is probably not the visualisation tool of choice (though it works very nicely for 2D). Instead, we will use a space-time diagram, which essentially stacks the generations on top of each other (with time going down the page). Here’s an example, with a simple rule that switches input bits:

Computation

I am going to conveniently sidestep the issue of properly defining “computation”. As far as this post is concerned, computation is what a computer does – a CA “computes” if it solves some task for a variable input. I would refer you to Complexity if you’d like a meaningful discussion.

The paper’s main objective is the density classification task, in which the CA must decide whether the ‘density’ of 1s in the initial configuration is greater than some cutoff proportion. Because density is typically denoted by $\rho$, this is more precisely referred to as the $\rho_c = 1/2$ task.

To put it simply, the CA must decide whether the initial setup had more living or dead cells (1s or 0s). If there were more 1s, then after some number of iterations, we want the CA to display all 1s. Otherwise, if there were more 0s, we want the CA to display all 0s. An example mapping is shown below:

You may be questioning the difficulty of that task. After all, it’s as easy as iterating over the universe and counting the number of 1s, then determining the output based on this value! But remember the catch: each cell only has access to a fixed neighbourhood of three cells on either side. There is nowhere we can store the count: each cell can only make a decision based on its neighbours and change its state accordingly.

The majority ruleset

Before we bother with genetic algorithms, it is worth checking how a simple solution performs. For this, we will use the majority ruleset, in which each cell will just change its state to match the majority of its 7-cell neighbourhood.

Although it is conceptually straightforward, it is at this stage that we need to consider the details of the implementation in python. We will be using strings of binary digits to store configurations; they are efficient and easy to work with. At a high level, we need to do the following:

  • generate the majority ruleset (of course we are not going to hard code all 128 rules)
  • generate a random initial configuration (i.e a random binary number)
  • iterate the initial configuration according to the majority ruleset
  • plot the space-time diagram

Here are some of the global constants that we need for now, though we will introduce many more when we get onto genetic algorithms:

import random
import numpy as np
import matplotlib.pyplot as plt

R = 3  # size of neighbourhood
N_RULES = 2 ** (2 * R + 1)  # number of rules in a ruleset
N = 79  # number of cells in universe

The number of rules is $2^7 = 128$ (the number of possible 7-bit inputs). Incidentally, this means that the number of possible rulesets is $2^{128}$ – a clear indication that we will not be able to find a solution by brute force.

Generating the majority ruleset

We are going to treat rulesets as python dictionaries. This is quite natural, since a ruleset is essentially a mapping between all possible 7-bit inputs (keys) and their output bits (values). We will generate the keys and values separately.

def rule_keys(n_rules, radius=3):
    rule_input = []
    for i in range(n_rules):
        binary = str(bin(i))[2:].zfill(2*radius + 1)
        rule_input.append(binary)
    return rule_input

This could also be refactored into a one-line generator expression:

rule_keys = (str(bin(i))[2:].zfill(7) for i in range(N_RULES))

But out of respect for Guido I have kept it as a function with a loop inside (which is slightly faster anyway). The [2:] is used to slice away the ugly 0b that python prepends to a binary number, and zfill pads the string with leading zeroes.
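
As an aside, python’s built-in format can do the slicing and padding in one step, should you prefer it:

rule_keys = (format(i, "07b") for i in range(N_RULES))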

To complete the majority ruleset, we just need to know if there are more 1s or 0s in the input. We will use a bit of python trickery to make this more concise:

majority_ruleset = {}
for rule_input in rule_keys(N_RULES):
    # 1 if the majority of the neighbourhood is 1.
    out = int((sum(d == "1" for d in rule_input) > R))
    majority_ruleset.update({rule_input: str(out)})

The expression sum(d == "1" for d in rule_input) > R is True if there are more than 3 living cells in the input (remember that R = 3), and False otherwise. We then cast this to an int (1 or 0) and then a str (“1” or “0”), and update the ruleset with this key-value pair. This is an excerpt from the resulting ruleset:

 {
     '1010010': '0',
     '1010011': '1',
     '1010100': '0',
     '1010101': '1',
     '1010110': '1',
     '1010111': '1',
     '1011000': '0',
     '1011001': '1',
 }

Generating a random initial configuration

One might think that the best way to generate a random configuration is just to combine N random bits, where N is the size of the universe. However, consider the density of 1s in such a configuration: the number of 1s follows a (literal) textbook binomial distribution $B(N, 0.5)$, so the density $\rho$ clusters tightly around 0.5.
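
We can reproduce the figure with a quick sketch, simulating the number of 1s directly rather than generating full configurations:

# Density of 1s in 10,000 random N-bit configurations.
densities = np.random.binomial(N, 0.5, size=10000) / N
plt.hist(densities, bins=30)
plt.xlabel("Density of 1s in a random configuration")
plt.show()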

Notice the strong peak at 0.5, meaning that most of the initial configurations will have a roughly equal split of 0s and 1s. This is not ideal for us to explore the density classification task, because we do want to have enough easy cases (i.e clear majorities).

As such, we will adopt a slightly different strategy:

  1. Randomly choose a density of 1s that we want in the configuration (sampled from a uniform distribution).
  2. Form an ordered universe with that number of 1s, and the rest 0s.
  3. Shuffle the universe.

A small snippet that does this is as follows:

n_ones = random.randrange(0, N)
binary = "1" * n_ones + "0" * (N - n_ones)
result = "".join(random.sample(binary, N))

We could use random.shuffle() in the third line, but that works in-place (and requires a list), so the chosen solution is cleaner. In the end, I decided to generalise the above slightly, letting you change the range of the uniform sample and specify the length of the output string:

def uniform_random_binary(length, lower=0, upper=None):
    # The default upper bound is the length
    if upper is None:
        upper = length
    n_ones = random.randrange(lower, upper)
    binary = "1" * n_ones + "0" * (length - n_ones)
    return "".join(random.sample(binary, length))

Iterating an automaton

To remain general, our iteration function should take in a configuration, a ruleset, and a radius. One iteration consists of applying the appropriate rule to each bit in the universe’s configuration, which is done by getting a cell’s neighbours as a 7-bit binary string, then using the ruleset dictionary to find the output bit.

def single_iteration(config, ruleset, radius):
    next_iteration = ""
    # Support wrapping by extending either end
    extended_config = config[-radius:] + config + config[:radius]
    for i in range(radius, len(config) + radius):
        neighbours = extended_config[i-radius:i+radius+1]
        next_iteration += ruleset[neighbours]
    return next_iteration

The complexity in the above implementation arises because we need a circular universe, so we must extend the configuration to allow for ‘wrapping’.
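
As a quick sanity check, consider a toy 7-cell universe: with $R=3$ and wrapping, each neighbourhood covers the whole universe exactly once, so every cell sees the true global majority and the majority ruleset solves the task in a single step.

# "0011101" has four 1s, so the majority is 1:
print(single_iteration("0011101", majority_ruleset, radius=3))  # '1111111'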

To iterate multiple times, we repeatedly apply the single_iteration() function:

def iterate_automata(initial, ruleset, n_iter, radius=3,
                     verbose=False):
    config = initial
    history = [config]
    for _ in range(n_iter):
        config = single_iteration(config, ruleset, radius)
        if verbose:
            print(config)
        history.append(config)
    return history

This function also keeps track of the CA’s history, which is necessary if we want to use a space-time diagram for visualisation.

Space-time diagrams

If we run iterate_automata() with verbose=True, it’ll print the universe at each time step, so in a sense that is a space-time diagram. However, it is fair to ask for something more stylish, for which we will use matplotlib’s imshow:

def plot_spacetime_diagram(binary_history, filename=None):
    arr = np.array([[int(digit) for digit in row]
                     for row in binary_history])
    plt.figure()
    plt.imshow(arr, cmap="Purples_r")
    if filename is None:
        plt.show()
    else:
        plt.savefig(filename)

Tying it together

Now that we have implemented the pieces, testing the majority ruleset is very simple:

random.seed(0)
initial_config = uniform_random_binary(N)
majority_history = iterate_automata(initial_config,
                                    majority_ruleset,
                                    n_iter=N, 
                                    radius=R)
plot_spacetime_diagram(majority_history)

Looking at the space-time diagrams for different random seeds shows that in general, the majority ruleset does not consistently solve the $\rho_c = 1/2$ task:

The above results are very similar to what Mitchell et al. found (though they use $N = 149$), which gives us a good indication that our implementation thus far has been accurate.
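
We can also put a number on this. Below is a sketch that scores a run as correct only if the universe actually settles to the right uniform state (N = 79 is odd, so there are no ties); the choice of n_iter = 2 * N is an assumption, just enough steps for the CA to settle.

def fraction_correct(ruleset, n_trials=100, n_iter=2 * N):
    correct = 0
    for _ in range(n_trials):
        initial = uniform_random_binary(N)
        majority_bit = "1" if initial.count("1") > N // 2 else "0"
        final = iterate_automata(initial, ruleset, n_iter)[-1]
        # Correct only if the final state is uniform and matches the majority.
        correct += int(final == majority_bit * N)
    return correct / n_trials

print(fraction_correct(majority_ruleset))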

Conclusion

In this post, we have laid the framework for using cellular automata to solve a computational task, namely, classifying the density of living cells in the initial conditions.

We have examined the failings of a ‘local’ solution that naively treats a cell’s immediate neighbourhood as a proxy for the entire universe, and thus see the need for emergent computation, in which individual cells somehow learn to appreciate the rest of the universe and come up with a global solution.

In the next post, we will discuss how genetic algorithms can be applied to evolve the cellular automata to solve the density classification task, rather than having to supply a hand-crafted rule, or do a brute force search over all possible rulesets (which we have shown to be infeasible). Stay tuned.


Classifying financial time series using Discrete Fourier Transforms

Introduction

A financial time series represents the collective decisions of many individual traders; it is fair to hypothesise that the nature of these decisions differs based on the underlying asset. For example, a company with a higher market cap may be more liquid, and subject to larger individual buy/sell orders including institutional investment. Thus, there is a case to be made that information such as the market cap of a company should be ‘encoded’ into its price movements. While these characteristics may be difficult to pinpoint on a chart, in principle it may be possible for a machine learning algorithm to find statistical relationships between the time series and the market cap of the company. This post will investigate that claim.

Concretely, a raw dataset will be constructed, consisting of 100-day price charts which belong to companies that are respectively in the top and bottom 1000 tickers in the Russell 3000 ordered by market cap. We will then run feature extraction (which is the core of this post), before applying a standard XGBoost classifier.

This post thus represents an intuitive (if naive) first approximation for classifying whether a given price chart belongs to a stock with high or low market cap. The hypothesis is that this problem is learnable, so the goal is to show that a machine learning model can classify a time series with greater than 50% accuracy. Though accuracy often has flaws as a metric, in this case it is sufficient because the classes will be almost perfectly balanced. But as a matter of good practice, we will also present precision.

A discussion on classifying time series

Because much of real life data is temporal in nature, classifying and clustering time series has widespread importance. Some example applications:

  • speech recognition
  • medical data such as ECGs
  • cyber-kinetic prosthetics: classifying electrical activity of muscles in terms of what action it corresponds to
  • meteorological data
  • seismic data

Some general approaches to classifying time series are as follows (a thorough survey can be found in Rani & Sikka 2012):

  • directly clustering with an algorithm like k-Nearest Neighbours. What is important here is a relevant distance metric between two different time series. Euclidean distance may not be sufficient as it has trouble dealing with timeseries that have been shifted slightly (either vertically or horizontally).
  • Hidden Markov Models, which model the timeseries as a Markov Process, whose transition matrix may be used as input to a classifier.
  • LSTM Neural Networks, which are fed in a time series and contain a sequential ‘memory’. The output of a cell can be run through the sigmoid function to convert it into a binary class value.
  • Extracting summary statistics, then passing these results into a standard classifier.

This post will explore the last method. On the surface, it seems to be the most straightforward approach, but the magic is in choosing which aspects of the time series should be used. There are existing libraries that can extract a multitude of features, e.g tsfresh, but although it is very likely that there is predictive power somewhere in there, it would be nice to find a simpler (and perhaps faster) approach.

Thus, I have decided to take the Discrete Fourier Transform (DFT) of the time series, and to use the largest terms as features. Such a method is really under the domain of spectral analysis, a well established subfield of time series analysis. However, we will adopt a rather informal approach. It should be noted that the Fourier Transform certainly shouldn’t be used to forecast the future of a financial time series because of the strong assumptions of periodicity. But it should be able to extract the main (sinusoidal) signals from a time series, which we can then put into a classifier.

Preparing the raw data

The data used in this project will be:

  • keystats.csv: parsed yahoo finance key statistics from which the latest market cap of securities will be ascertained. In order to reproduce something like this dataset (though only for the S&P500), please refer to my repo on GitHub
  • My stock_prices database, which contains daily prices for many tickers. For more information about this database, I’ve written a post on how to create your own price database.

After extracting the data from the csv files and MySQL database, I was left with a pandas dataframe sorted_tickers_by_mcap, containing tickers and their market caps, sorted by market cap in ascending order.

print("Number of tickers with mcap data available:",
       len(sorted_tickers_by_mcap))
print(sorted_tickers_by_mcap.head())
print(sorted_tickers_by_mcap.tail())
Number of tickers with mcap data available: 3382
Ticker
heat      67830.0
aryx     103730.0
gmet     336260.0
coco    1090000.0
dvox    1140000.0
Name: Market Cap, dtype: float64
Ticker
xom      3.632700e+11
msft     4.304200e+11
goog     4.942700e+11
googl    4.944500e+11
aapl     5.413300e+11
Name: Market Cap, dtype: float64

I then extracted the top and bottom 1000 tickers (removing the tickers for which data was not available), resulting in two lists of tickers: top_mcap and bottom_mcap:

Available top market cap: 831
Available bottom market cap: 862

These tickers are then sorted by market cap, and 100-day windows are made from the last 750 datapoints (roughly corresponding to three market years). Three years was chosen as the cutoff because market cap values from more than three years ago are likely to be significantly different from the latest values, as companies grow or shrink over time. A solution would be to use the keystats dataset to find the market cap for a company in a given year, but I have opted for a simpler workaround in this post.

Here is the function I wrote to create the time series snapshots from the raw price data.

def create_snapshots(ticker, n_days=750, window_size=100, 
                     window_shift=50):
    """
    Returns a list of time series of length window_size, shifted
    by window_shift, for the last n_days.
    """
    df = pd.read_sql(f"SELECT adj_close_price FROM daily_price "
                    f"WHERE ticker_id={ticker_id[ticker]}", conn)
    df = df.iloc[-n_days:]
    snapshots = []
    for i in range(0, n_days, window_shift):
        window = df.iloc[i:i + window_size]
        window = window['adj_close_price'].values
        if len(window) == window_size:
            snapshots.append(window)
    return snapshots

These snapshots are then stacked into a numpy array as follows:

dataset = []

for i, ticker in enumerate(top_mcap):
    try:
        ticker_snapshots = create_snapshots(ticker)
    except Exception as e:
        print(str(e))
        continue
    ticker_snapshots = np.c_[np.array(ticker_snapshots),
                        np.ones(len(ticker_snapshots))]
    dataset.append(ticker_snapshots)

for i, ticker in enumerate(bottom_mcap):
    try:
        ticker_snapshots = create_snapshots(ticker)
    except Exception as e:
        print(str(e))
        continue
    ticker_snapshots = np.c_[np.array(ticker_snapshots), 
                        np.zeros(len(ticker_snapshots))]
    dataset.append(ticker_snapshots)

dataset = np.vstack([a for a in dataset if a.shape[1] == 101])

The first 10 rows of dataset are presented below. Note how the last column is always 1 – this is because the first half of the dataset consists of the top market cap tickers (classified as 1).

array([[35.639999, 35.610001, 39.220001, ..., 37.189999, 37.060001,
         1.      ],
       [40.369999, 40.369999, 40.419998, ..., 36.189999, 35.889999,
         1.      ],
       [37.810001, 37.849998, 38.419998, ..., 34.68    , 34.959999,
         1.      ],
       ...,
       [41.369999, 40.02    , 39.860001, ..., 35.279999, 35.27    ,
         1.      ],
       [38.5     , 38.669998, 38.599998, ..., 41.169998, 41.27    ,
         1.      ],
       [35.360001, 35.439999, 36.34    , ..., 42.740002, 43.349998,
         1.      ]])

We have now finished preparing the raw data for analysis. However, we still need to extract the features and preprocess the data for machine learning specifically. This will be covered in the next section.

Methodology

It is a remarkable fact of mathematics that periodic continuous functions (at least the well-behaved ones) can be decomposed into sines and cosines. Although price data probably isn’t periodic, if we treat the whole time series as one period, we can apply a Discrete Fourier Transform (DFT) to extract the main ‘signals’ in a time series. My motivation for using a DFT is that it can be used to de-noise a time series by ignoring the smaller terms, and that the coefficients of the resulting terms can be fed into a classifier. Additionally, there exist very efficient implementations in numpy, such as np.fft.fft(). One potential problem is that the results of the DFT are actually complex numbers:

array([ 3.86450001e+02+0.j        , -7.77637336e+00+2.22326054j,
       -3.85445493e+00+2.69405956j, -2.39862764e+00+4.72442769j,
       -1.07054807e+00+1.46787384j,  1.49997000e-01+0.j        ,
       -1.07054807e+00-1.46787384j, -2.39862764e+00-4.72442769j,
       -3.85445493e+00-2.69405956j, -7.77637336e+00-2.22326054j])

This raises the question of how we can input the DFT terms into a classifier. We will implement and compare two possible methods.

  • Taking the modulus (np.abs) of each term and using that as a feature
  • Splitting up the real and imaginary parts to use as separate features.

Clearly, we would expect the latter to have more predictive power, but at the cost of using twice as many terms.

There is another decision we have to make, namely, how many terms of the DFT we want to use. As it is impossible to know a priori how much information the Fourier terms hold, we’ll examine the first 5, 15, and 30 terms.

So in total, we have 6 datasets to test. Now, in order to avoid the data mining bias, we will apply a Bonferroni correction. The idea is that each additional test increases our chance of finding a spurious yet statistically significant relationship, so to counteract this, the significance level must be decreased to $\frac{\alpha}{m}$, where $m$ is the number of tests. So if the original significance level was $\alpha = 0.05$, the Bonferroni correction would imply a new significance of $\frac{\alpha}{6} = 0.00833$.

Feature extraction and data preprocessing

Adopting standard scikit-learn notation, X will denote the feature array and y the target vector. Using the dataset that we prepared before:

X = dataset[:,:-1]
y = dataset[:, -1]

Prior to putting our data through a DFT, I am first going to normalise each time series (scale everything to the range [0,1]) to ensure that we don’t actually learn some boring correlation between the share price of a company and its market cap. To maximise the efficiency of this computation, bearing in mind that it needs to be done for more than 20k rows, it is important to (ab)use numpy’s broadcasting:

numerator = X - X.min(axis=1).reshape((len(X), 1))
denominator = X.max(axis=1) - X.min(axis=1)
denominator = denominator.reshape((len(X),1))
X_scaled = numerator/denominator

We will now define functions to take the Fourier Transform, then slice for the first k terms, before either taking the modulus or splitting into real and imaginary parts.

def generate_modulus_dataset(X, k):
    return np.abs(np.fft.fft(X)[:, :k])

def generate_complex_dataset(X, k):
    fourier = np.fft.fft(X)[:, :k]
    return np.column_stack((fourier.real, fourier.imag))

With these methods ready, creating all of the datasets is simple:

modulus_5 = generate_modulus_dataset(X_scaled, 5)
modulus_15 = generate_modulus_dataset(X_scaled, 15)
modulus_30 = generate_modulus_dataset(X_scaled, 30)
complex_5 = generate_complex_dataset(X_scaled, 5)
complex_15 = generate_complex_dataset(X_scaled, 15)
complex_30 = generate_complex_dataset(X_scaled, 30)

Machine Learning

We are finally ready to apply machine learning using python’s wonderfully intuitive sklearn. I have chosen to use an XGBoost classifier - it trains quickly and normally produces good results with the default parameters. It should be noted that no hyperparameter optimisation will be done: the point of this experiment is to see if this problem is learnable (rather than trying to achieve excellent results). I have written the classification script as a function which can be easily applied to each of the datasets.

from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import precision_score, accuracy_score


def classify(features, target, return_acc=False):
    X_train, X_test, y_train, y_test = train_test_split(
                                            features, target)
    clf = XGBClassifier()
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    acc = accuracy_score(y_test, y_pred)
    prec = precision_score(y_test, y_pred)
    if return_acc:
        return acc
    print(f"Accuracy:\n {100* acc:.2f}%")
    print(f"Precision:\n {100* prec:.2f}%")

It is then a simple matter of applying the function to any of our datasets:

classify(modulus_5, y)

Running the above resulted in a test-set accuracy of 58%. Given balanced classes, this is clearly a significant difference, so in principle the experiment could be stopped here and we could conclude that classifying stock charts based on market cap is a learnable problem. This result was especially surprising to me because this particular example uses just the moduli of the first 5 terms of the Fourier Transform, which really ignores a lot of the price chart. To see what I mean, this is what a time series made of the absolute values of the first 5 Fourier terms looks like:
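
Such a reconstruction can be produced along the following lines (a sketch: for a real signal, terms 1 to 4 have complex-conjugate partners at the end of the spectrum, which must be kept so that the inverse transform is real):

import numpy as np
import matplotlib.pyplot as plt

ts = X_scaled[0]  # one normalised 100-day price series
spectrum = np.fft.fft(ts)
truncated = np.zeros_like(spectrum)
truncated[:5] = spectrum[:5]    # first 5 terms...
truncated[-4:] = spectrum[-4:]  # ...and the conjugate partners of terms 1-4
plt.plot(ts, label="original")
plt.plot(np.fft.ifft(truncated).real, label="5-term approximation")
plt.legend()
plt.show()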

Although you can see that some of the main trends are captured, it is clearly a crude approximation that ignores a lot of variation.

Despite having achieved our initial goal of demonstrating the learnability of this problem, seeing as we have already prepared the datasets it may be interesting to do further analysis.

Comparison with a benchmark

What happens if we try to naively classify based on the whole time series, i.e using each of the 100 days’ prices as a feature?

%%time
classify(X_scaled, y)
Accuracy:
 65.34%
Precision:
 64.48%
CPU times: user 18.6 s, sys: 135 ms, total: 18.7 s
Wall time: 19.2 s

So actually the naive benchmark has a much better accuracy. But note the relatively long compute time of 18.7s. The question is whether any of our other datasets can reach comparable accuracies more efficiently.

%%time
classify(complex_5, y)
Accuracy:
 60.96%
Precision:
 60.17%
CPU times: user 1.96 s, sys: 18.5 ms, total: 1.97 s
Wall time: 2.04 s

It makes sense that the complex dataset has a higher accuracy than the modulus dataset for the same number of Fourier terms: bear in mind that it includes twice as many features (since we have split the real and imaginary parts).

%%time
classify(modulus_15, y)
Accuracy:
 59.45%
Precision:
 57.72%
CPU times: user 3.77 s, sys: 62.8 ms, total: 3.83 s
Wall time: 4.33 s

This is an interesting result. 15 modulus terms (15 features) underperform 5 complex terms (i.e 10 features). This tells us that the phase information, which the modulus throws away, holds a lot of predictive value.

%%time
classify(complex_15, y)
Accuracy:
 64.97%
Precision:
 63.75%
CPU times: user 3.02 s, sys: 26.1 ms, total: 3.04 s
Wall time: 3.09 s

Using 15 complex terms (i.e 30 total features), comparable accuracy to the benchmark has been achieved, in about a sixth of the time!

%%time
classify(modulus_30, y)
Accuracy:
 59.74%
Precision:
 57.98%
CPU times: user 3.16 s, sys: 35.1 ms, total: 3.19 s
Wall time: 3.33 s
%%time
classify(complex_30, y)
Accuracy:
 66.63%
Precision:
 65.24%
CPU times: user 11.9 s, sys: 136 ms, total: 12.1 s
Wall time: 12.6 s

We can observe the diminishing returns: using more Fourier terms increases accuracy only slightly. However, it is a pretty remarkable result that using 30 complex terms (i.e 60 features), we can beat the benchmark while also being 1.5x faster.

To better visualise how accuracy varies with the number of terms, we can generate multiple datasets and run our classification method on them:

scores = []
# Start at 5 terms: a zero-term dataset would have no features to train on.
for k in range(5, 100, 5):
    print(k)
    dataset = generate_modulus_dataset(X_scaled, k)
    acc = classify(dataset, y, return_acc=True)
    scores.append(acc)

Doing the same for the complex datasets and plotting the results yields:
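
The plotting step is a sketch like the following, assuming a complex_scores list built in the same way from generate_complex_dataset:

import matplotlib.pyplot as plt

ks = list(range(5, 100, 5))
plt.plot(ks, scores, label="modulus")
plt.plot(ks, complex_scores, label="complex")
plt.axhline(y=0.6534, linestyle="--", label="benchmark")
plt.xlabel("Number of Fourier terms")
plt.ylabel("Test accuracy")
plt.legend()
plt.show()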

It is interesting to note that the modulus datasets never beat the benchmark: it seems that taking the modulus of the complex terms throws away too much information. The complex datasets are most predictive at around 40 terms, but the plateau and gradual decrease after that is a clear sign that overfitting occurs.

Conclusion

In this post, we have seen that 100-day time series of prices can be classified according to market cap, by extracting the main signals from a time series with a Discrete Fourier Transform to use as features in a classifier. The DFT is not only able to outperform the naive solution of using each day’s price as a feature, but can also do so with a large speedup. The implication of this is that the benchmark overfits to noise, and thus this is a practical example of how reducing the information available to a classifier can actually improve performance.

I think this is an important lesson for anyone applying machine learning to a real-world problem: throwing data at a classifier or deep learning model is not a solid approach: it is much better to genuinely try to extract important features, and discard those which only add noise.

Future work on this topic could involve comparing the classification performance of the DFT method to a classifier that uses different summary statistics, or perhaps even a clustering methodology like k-Nearest Neighbours. But for now I am satisfied that this simple and intuitive method is sufficient to do significantly better than random guessing.


DIY MachineLearningStocks

I recently released a machine learning stock prediction project on GitHub, unimaginatively named MachineLearningStocks. It is a project that I’ve put quite a lot of time into, and is in fact a simplified version of a system that I’ve been using to live trade. This post doesn’t really offer anything on top of the existing readme, but I figured it would be good to have a copy (with some minor changes) here as well.

MachineLearningStocks is designed to be an intuitive and highly extensible template project applying machine learning to making stock predictions. My hope is that the project will help readers to not only understand the overall workflow of using machine learning to predict stock movements, but also to appreciate some of the many subtleties. I’ve also provided a long list of possible improvements you could make, or ideas for moving forward, to put some spice into the vanilla project.

Concretely, we will be cleaning and preparing a dataset of historical stock prices and fundamentals using pandas, after which we will apply a scikit-learn classifier to discover the relationship between stock fundamentals (e.g PE ratio, debt/equity, float, etc) and the subsequent annual price change (compared with an index). We then conduct a simple backtest, before generating predictions on current data.

While I would not live trade based off of the predictions from this exact code, I do believe that you can use this project as a starting point for a profitable trading system – I have actually used code based on this project to live trade, with pretty decent results (around 20% returns on backtest and 10-15% on live trading).

This project has quite a lot of personal significance for me. It was my first proper python project, one of my first real encounters with ML, and the first time I used git. At the start, my code was rife with bad practice and inefficiency: I have since tried to amend most of this, but please be warned that some minor issues may remain (feel free to raise an issue, or fork and submit a PR). Both the project and myself as a programmer have evolved a lot since the first iteration, but there is always room to improve.

As a disclaimer, this is a purely educational project. Be aware that backtested performance may often be deceptive – trade at your own risk!

By the way, if you find the repository or this post interesting/useful in any way, don’t forget to leave a star!

Overview

The overall workflow to use machine learning to make stock predictions is as follows:

  1. Acquire historical fundamental data – these are the features or predictors
  2. Acquire historical stock price data – this will make up the dependent variable, or label (what we are trying to predict).
  3. Preprocess data
  4. Use a machine learning model to learn from the data
  5. Backtest the performance of the machine learning model
  6. Acquire current fundamental data
  7. Generate predictions from current fundamental data

This is a very generalised overview, but in principle this is all you need to build a fundamentals-based ML stock predictor.

Quickstart

For readers who want to throw away the instruction manual and play immediately, just clone the project, then download and unzip the data file into the same directory. Then, open an instance of terminal and cd to the project’s file path, e.g

cd Users/User/Desktop/MachineLearningStocks

Then, run the following in terminal:

pip install -r requirements.txt
python download_historical_prices.py
python parsing_keystats.py
python backtesting.py
python current_data.py
pytest -v
python stock_prediction.py

Otherwise, follow the step-by-step guide below.

Preliminaries

This project uses python 3, and the common data science libraries pandas and scikit-learn. A full list of requirements is included in the requirements.txt file. To install all of the requirements at once, run the following in terminal:

pip install -r requirements.txt

To get started, clone this project and unzip it. This will become our working directory, so make sure you cd your terminal instance into this directory.

Historical data

Data acquisition and preprocessing is probably the hardest part of most machine learning projects. But it is a necessary evil, so it’s best to not fret and just carry on.

For this project, we need three datasets:

  1. Historical stock fundamentals
  2. Historical stock prices
  3. Historical S&P500 prices

We need the S&P500 index prices as a benchmark: a 5% stock growth does not mean much if the S&P500 grew 10% in that time period, so all stock returns must be compared to those of the index.

Historical stock fundamentals

Historical fundamental data is actually very difficult to find (for free, at least). Although sites like Quandl do have datasets available, you often have to pay a pretty steep fee.

It turns out that there is a way to parse this data, for free, from Yahoo Finance. I will not go into details, because Sentdex has done it for us. On his page you will be able to find a file called intraQuarter.zip, which you should download, unzip, and place in your working directory. Relevant to this project is the subfolder called _KeyStats, which contains html files that hold stock fundamentals for all stocks in the S&P500 between 2003 and 2013, sorted by stock. However, at this stage, the data is unusable – we will have to parse it into a nice csv file before we can do any ML.

Historical price data

In the first iteration of the project, I used pandas-datareader, an extremely convenient library which can load stock data straight into pandas. However, after Yahoo Finance changed their UI, datareader no longer worked, so I switched to Quandl, which has free stock price data for a few tickers, and a python API. However, as pandas-datareader has been fixed, we will use that instead.

Likewise, we can easily use pandas-datareader to access data for the SPY ticker. Failing that, one could manually download it from Yahoo Finance, place it into the project directory and rename it sp500_index.csv.

The code for downloading historical price data can be run by entering the following into terminal:

python download_historical_prices.py

Creating the training dataset

Our ultimate goal for the training data is to have a ‘snapshot’ of a particular stock’s fundamentals at a particular time, and the corresponding subsequent annual performance of the stock.

For example, if our ‘snapshot’ consists of all of the fundamental data for AAPL on the date 28/1/2005, then we also need to know the percentage price change of AAPL between 28/1/05 and 28/1/06. Thus our algorithm can learn how the fundamentals impact the annual change in the stock price.

In fact, this is a slight oversimplification: what the algorithm will eventually learn is how fundamentals impact the outperformance of a stock relative to the S&P500 index. This is why we also need index data.
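
As a sketch of what this label looks like (with hypothetical stock_prices and index_prices series, and dates one year apart; the 10% threshold mirrors this project's default OUTPERFORMANCE parameter):

stock_return = stock_prices[end_date] / stock_prices[start_date] - 1
index_return = index_prices[end_date] / index_prices[start_date] - 1
# Class 1 ('buy') if the stock beat the index by the chosen threshold:
label = int(stock_return - index_return > 0.10)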

Preprocessing historical price data

When pandas-datareader downloads stock price data, it does not include rows for weekends and public holidays (when the market is closed).

However, referring to the example of AAPL above, if our snapshot includes fundamental data for 28/1/05 and we want to see the change in price a year later, we will get the nasty surprise that 28/1/2006 is a Saturday. Does this mean that we have to discard this snapshot?

By no means – data is too valuable to callously toss away. As a workaround, I instead decided to ‘fill forward’ the missing data, i.e we will assume that the stock price on Saturday 28/1/2006 is equal to the stock price on Friday 27/1/2006.
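
In pandas, this forward fill is only a couple of lines (a sketch, assuming a hypothetical df of prices indexed by trading date):

import pandas as pd

# Reindex onto the full calendar, then carry the last trading price forward.
all_days = pd.date_range(df.index.min(), df.index.max(), freq="D")
df = df.reindex(all_days).ffill()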

Features

Below is a list of some of the interesting variables that are available on Yahoo Finance.

Valuation measures

  • Market Cap
  • Enterprise Value
  • Trailing P/E
  • Forward P/E
  • PEG Ratio
  • Price/Sales
  • Price/Book
  • Enterprise Value/Revenue
  • Enterprise Value/EBITDA

Financials

  • Profit Margin
  • Operating Margin
  • Return on Assets
  • Return on Equity
  • Revenue
  • Revenue Per Share
  • Quarterly Revenue Growth
  • Gross Profit
  • EBITDA
  • Net Income Avi to Common
  • Diluted EPS
  • Quarterly Earnings Growth
  • Total Cash
  • Total Cash Per Share
  • Total Debt
  • Total Debt/Equity
  • Current Ratio
  • Book Value Per Share
  • Operating Cash Flow
  • Levered Free Cash Flow

Trading information

  • Beta
  • 50-Day Moving Average
  • 200-Day Moving Average
  • Avg Vol (3 month)
  • Shares Outstanding
  • Float
  • % Held by Insiders
  • % Held by Institutions
  • Shares Short
  • Short Ratio
  • Short % of Float
  • Shares Short (prior month)

Parsing

However, all of this data is locked up in HTML files. Thus, we need to build a parser. In this project, I did the parsing with regex, but please note that generally it is really not recommended to use regex to parse HTML. However, I think regex probably wins out for ease of understanding (this project being educational in nature), and from experience regex works fine in this case.

This is the exact regex used:

r'>' + re.escape(variable) + r'.*?(\-?\d+\.*\d*K?M?B?|N/A[\\n|\s]*|>0|NaN)%?(</td>|</span>)'

While it looks pretty arcane, all it is doing is searching for the first occurrence of the feature (e.g “Market Cap”), then looking forward until it finds a number immediately followed by a </td> or </span> (signifying the end of a table entry). The complexity of the expression above accounts for some subtleties in the parsing:

  • the numbers could be preceded by a minus sign
  • Yahoo Finance sometimes uses K, M, and B as abbreviations for thousand, million and billion respectively (see the sketch below)
  • some data are given as percentages
  • some datapoints are missing, so instead of a number we have to look for “N/A” or “NaN”.
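
The matched strings still need converting to numbers; a minimal helper (hypothetical, not the project's exact code) might look like this:

def parse_value(s):
    """Convert a Yahoo-style string such as '1.5M' or '-3.2B' to a float."""
    if s.startswith(("N/A", "NaN")):
        return float("nan")
    multipliers = {"K": 1e3, "M": 1e6, "B": 1e9}
    if s and s[-1] in multipliers:
        return float(s[:-1]) * multipliers[s[-1]]
    return float(s)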

Both the preprocessing of price data and the parsing of keystats are included in parsing_keystats.py. Run the following in your terminal:

python parsing_keystats.py

You should see the file keystats.csv appear in your working directory. Now that we have the training data ready, we are ready to actually do some machine learning.

Backtesting

Backtesting is arguably the most important part of any quantitative strategy: you must have some way of testing the performance of your algorithm before you live trade it.

Despite its importance, I originally did not want to include backtesting code in this repository. The reasons were as follows:

  • Backtesting is messy and empirical. The code is not very pleasant to use, and in practice requires a lot of manual interaction.
  • Backtesting is very difficult to get right, and if you do it wrong, you will be deceiving yourself with high returns.
  • Developing and working with your backtest is probably the best way to learn about machine learning and stocks – you’ll see what works, what doesn’t, and what you don’t understand.

Nevertheless, because of the importance of backtesting, I decided that I can’t really call this a ‘template machine learning stocks project’ without backtesting. Thus, I have included a simplistic backtesting script. Please note that there is a fatal flaw with this backtesting implementation that will result in much higher backtesting returns. It is quite a subtle point, but I will let you figure that out :)

Run the following in terminal:

python backtesting.py

You should get something like this:

Classifier performance
======================
Accuracy score:  0.81
Precision score:  0.75

Stock prediction performance report
===================================
Total Trades: 177
Average return for stock predictions:  37.8 %
Average market return in the same period:  9.2%
Compared to the index, our strategy earns  28.6 percentage points more

Again, the performance looks too good to be true and almost certainly is.

Current fundamental data

Now that we have trained and backtested a model on our data, we would like to generate actual predictions on current data.

As always, we can scrape the data from good old Yahoo Finance. My method is to literally just download the statistics page for each stock (here is the page for Apple), then to parse it using regex as before.

In fact, the regex should be almost identical, but because Yahoo has changed their UI a couple of times, there are some minor differences. This part of the project has to be fixed whenever Yahoo Finance changes their UI, so if you can’t get the project to work, the problem is most likely here.

Run the following in terminal:

python current_data.py

The script will then begin downloading the HTML into the forward/ folder within your working directory, before parsing this data and outputting the file forward_sample.csv. You might see a few miscellaneous errors for certain tickers (e.g ‘Exceeded 30 redirects.’), but this is to be expected.

Stock prediction

Now that we have the training data and the current data, we can finally generate actual predictions. This part of the project is very simple: the only thing you have to decide is the value of the OUTPERFORMANCE parameter (the percentage by which a stock has to beat the S&P500 to be considered a ‘buy’). I have set it to 10 by default, but it can easily be modified by changing the variable at the top of the file. Go ahead and run the script:

python stock_prediction.py

You should get something like this:

21 stocks predicted to outperform the S&P500 by more than 10%:
NOC FL SWK NFX LH NSC SCHL KSU DDS GWW AIZ ORLY R SFLY SHW GME DLX DIS AMP BBBY APD

Unit testing

I have included a number of unit tests (in the tests/ folder) which serve to check that things are working properly. However, due to the nature of some of this project’s functionality (downloading big datasets), you will have to run all the code once before running the tests. Otherwise, the tests themselves would have to download huge datasets (which I don’t think is optimal).

I thus recommend that you run the tests after you have run all the other scripts (except, perhaps, stock_prediction.py).

To run the tests, simply enter the following into a terminal instance in the project directory:

pytest -v

Please note that it is not considered best practice to include an __init__.py file in the tests/ directory (see here for more), but I have done it anyway because it is uncomplicated and functional.

Where to go from here

I have stated that this project is extensible, so here are some ideas to get you started and possibly increase returns (no promises).

Data acquisition

My personal belief is that better quality data is THE factor that will ultimately determine your performance. Here are some ideas:

  • Explore the other subfolders in Sentdex’s intraQuarter.zip.
  • Parse the annual reports that all companies submit to the SEC (have a look at the Edgar Database)
  • Try to find websites from which you can scrape fundamental data (this has been my solution).
  • Ditch US stocks and go global – perhaps better results may be found in markets that are less liquid. It’d be interesting to see whether the predictive power of features varies based on geography.
  • Buy Quandl data, or experiment with alternative data.

Data preprocessing

  • Build a more robust parser using BeautifulSoup
  • In this project, I have just ignored any rows with missing data, but this reduces the size of the dataset considerably. Are there any ways you can fill in some of this data?
    • hint: if the PE ratio is missing but you know the stock price and the earnings/share…
    • hint 2: how different is Apple’s book value in March to its book value in June?
  • Some form of feature engineering
    • e.g, calculate Graham’s number and use it as a feature
    • some of the features are probably redundant. Why not remove them to speed up training?
  • Speed up the construction of keystats.csv.
    • hint: don’t keep appending to one growing dataframe! Split it into chunks

Machine learning

Altering the machine learning stuff is probably the easiest and most fun to do.

  • The most important thing if you’re serious about results is to find the problem with the current backtesting setup and fix it. This will likely be quite a sobering experience, but if your backtest is done right, it should mean that any observed outperformance on your test set can be traded on (again, do so at your own discretion).
  • Try a different classifier – there is plenty of research that advocates the use of SVMs, for example. Don’t forget that other classifiers may require feature scaling etc.
  • Hyperparameter tuning: use gridsearch to find the optimal hyperparameters for your classifier. But make sure you don’t overfit!
  • Make it deep – experiment with neural networks (an easy way to start is with sklearn.neural_network).
  • Change the classification problem into a regression one: will we achieve better results if we try to predict the stock price rather than whether it outperformed?
  • Run the prediction multiple times (perhaps using different hyperparameters?) and select the k most common stocks to invest in. This is especially important if the algorithm is not deterministic (as is the case for Random Forest)
  • Experiment with different values of the outperformance parameter.
  • Try to plot the importance of different features to ‘see what the machine sees’.

Contributing

Feel free to fork, play around, and submit PRs. I would be very grateful for any bug fixes or more unit tests.

This project was originally based on Sentdex’s excellent machine learning tutorial, but it has since evolved far beyond that and the code is almost completely different. The complete series is also on his website.