Monday, September 18, 2023

006 - Lessons from Artificial Intelligence (Part IV)

In the fourth episode of this subseries, I discuss one of the variables in my experiments with artificial neural networks in algorithmic trading and what it taught me about intelligence, ideas, and policy.

Duration: 00:05:25


Episode Transcript

Good evening. You’re listening to the Reflections in Beige Podcast, hosted by Michael LeSane. The date is Sunday, September seventeenth, 2023.

Forward pass, backpropagation. These are the two processes through which a neural network, respectively, processes information and reconfigures itself.

Neural networks receive inputs, which are encoded into a form suitable for processing: the red, green, and blue dimensions of image pixels; the bits of the bytes that form letters and words; audio bitstreams; and more.

Think of it like vision or hearing. Your eyes or ears receive stimuli in the form of light or sound waves, but the brain must then process what is being seen or heard, whether to contemplate or to respond to it.

In a neural network, each input feeds forward into each node, or neuron, of the next layer, and each node of the next layer feeds into each node in the layer that follows, and so on until the output layer is reached. Some types of networks even feed node outputs back into nodes of previous layers, or of the same layer.

The connections between nodes in each layer are weighted, and these weights collectively represent the knowledge contained within a neural network, or its model if you will.

Sparing the granular details, during the process of backpropagation, these weights are iteratively adjusted according to the share of the network's error attributed to the node they feed into.
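As a rough illustration only, and not the network from my experiments, a forward pass and a backpropagation step for a tiny two-layer network might look like this; the layer sizes, learning rate, and XOR-style training data are all assumptions chosen for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 2 inputs -> 1 output (an XOR-like target).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights connecting the layers: these ARE the network's "knowledge".
W1 = rng.normal(size=(2, 3))  # input layer -> hidden layer
W2 = rng.normal(size=(3, 1))  # hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(2000):
    # Forward pass: every input feeds every hidden node,
    # and every hidden node feeds every output node.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    err = out - y
    losses.append(float((err ** 2).mean()))

    # Backpropagation: push each output node's error back through
    # the weights, adjusting each in proportion to its contribution.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)
```

Run long enough, the recorded loss shrinks as the weights converge on the input-output mapping; that iterative adjustment is the "reconfiguration" described above.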

That may be a bit to take in, but this information will be helpful to know as I proceed with this series.

Over the course of my experimentation, I ran two simulations at the same time. The only difference between these two simulations was the dimensions of the neural networks that they were working with. One network was ten by ten neurons, while the other was twice the size, at twenty by ten neurons.
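The trading environment itself isn't reproduced here, but the structural difference between the two networks can be sketched. Reading "ten by ten" and "twenty by ten" as the widths of two hidden layers is an assumption, as are the input and output counts below:

```python
import numpy as np

def make_network(layer_sizes, seed=0):
    """Return random weight matrices connecting consecutive layers."""
    rng = np.random.default_rng(seed)
    return [rng.normal(size=(a, b))
            for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]

n_inputs, n_outputs = 8, 3  # hypothetical: market features in, trade signals out

small = make_network([n_inputs, 10, 10, n_outputs])  # "ten by ten" hidden layers
large = make_network([n_inputs, 20, 10, n_outputs])  # "twenty by ten" hidden layers

def n_weights(net):
    return sum(w.size for w in net)

# The larger network has considerably more trainable weights,
# i.e. more capacity to represent aggressive, stock-specific strategies.
print(n_weights(small), n_weights(large))
```

The extra capacity is the only variable here; everything else about the two networks, and the simulations feeding them, stays identical.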

How did their performances differ? The answer might surprise you.

The smaller network converged on something resembling what I can only describe as index trading. It traded conservatively and didn't really take big risks with individual stocks, relying on what seemed to be tried-and-true indicators common to all of them.

The larger network, on the other hand, was more willing to take big risks and pick individual stocks. On good days, it saw returns which significantly outperformed the smaller network. Conversely though, on bad days, its losses also exceeded those of the smaller network.

A lesson I took from this is that more intellectually inclined agents can bring forth bold ideas based on empirical grounds. In aggregate, however, they simply amplify the range and magnitude of potential outcomes, and do not necessarily guarantee some particular desired outcome simply because they're intellectually inclined agents.

To rephrase this: intelligence can contribute to the formulation of more novel ideas based on observations, but in practice, there is not just potential for these ideas to be constructive; there is similarly the risk that these ideas could be catastrophic due to unanticipated consequences.

The intelligence of an individual or individuals, then, should not be a qualifier for the merit of their ideas. Ideas should be rigorously scrutinized, and implemented in a cautious and methodical manner, ideally in a diversity of environments or contexts, falling back on tried-and-true models as the baseline.

The end goal of novel ideas influencing systems should be to incrementally enhance tried-and-true models, not to replace them wholesale, maintaining the baseline as a risk-management strategy while making careful, simultaneous attempts at improving it.

This is just as applicable to financial models as it is to a priori governance models, as both harbor latent biases and are applied to chaotic and adversarial systems with a myriad of unanticipated externalities.

The intellectual caliber of technocrats observing summary metrics in a vacuum is no match for the dynamism of the situation on the ground, in which sanity is overwhelmingly governed by social forms, often latent or implicit, which have developed evolutionarily and often organically over long periods of time.

The best these experts can do is to join the fray and hope their contributions to the macro model hold up, improve outcomes, and introduce few if any new externalities that must be reckoned with.

This little lecture went a little longer than planned, and so I'd like to close this episode with a quote that has stuck with me from Linus Torvalds, the software developer who created the Linux kernel:

Quote:

And don’t EVER make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That’s giving your intelligence much too much credit.

End quote.

Let that be the lesson for tonight.

Thank you for listening and sharing, and have a good night.

Lessons from Artificial Intelligence

This episode is part of a series.
Monday, December 18, 2023

008 - Lessons from Artificial Intelligence (Part VI)

In the sixth and final episode of this subseries, I go into further detail about the similarities between artificial neural networks and the human mind, and how the same dynamics of learning are used to influence our behavior and perception of reality.

Duration: 00:07:13


Saturday, September 23, 2023

007 - Lessons from Artificial Intelligence (Part V)

In the fifth and penultimate episode of this subseries, I discuss the 2013 Flash Crash, sentiment analysis, my inroads into business analysis, and the phenomenon of neural overfitting.

Duration: 00:06:10


Saturday, September 09, 2023

005 - Lessons from Artificial Intelligence (Part III)

In the third episode of this subseries, I discuss some of my experiments with artificial neural networks in algorithmic trading and what one particularly interesting experiment taught me about the nature of anxiety.

Duration: 00:04:16


Saturday, September 02, 2023

003 - Lessons from Artificial Intelligence (Part II)

In the second episode of this subseries, I recount the beginnings of my interest in artificial intelligence, my early experimentation with artificial neural networks, and how this experimentation came to intersect with finance.

Duration: 00:02:50


Wednesday, August 30, 2023

002 - Lessons from Artificial Intelligence (Part I)

In this episode, I begin a subseries reflecting on my work with artificial neural networks, discussing observations and conclusions that were drawn from the experience.

Duration: 00:01:50