Saturday, September 09, 2023

005 - Lessons from Artificial Intelligence (Part III)

In the third episode of this subseries, I discuss some of my experiments with artificial neural networks in algorithmic trading and what one particularly interesting experiment taught me about the nature of anxiety.

Duration: 00:04:16


Episode Transcript

Reach out to loved ones. Express your admiration and appreciation for them before it’s too late. The role models of your formative years are not immortal.

Good evening. You’re listening to the Reflections in Beige Podcast, hosted by Michael LeSane. The date is Friday, September eighth, 2023.

As a general rule, the neural trading simulation turned a profit when the market trended upward that day, and turned a smaller profit when the market trended downward that day. This rule only applied, however, when the market had low volatility over the course of the day. Days characterized by high volatility tended to have lower returns at best, and at worst, losses.

One strategy for reducing losses was, for lack of a better term, training the neural network in a manner that encouraged varying degrees of aggressiveness in its trades, depending on the confidence it had in its forecasts.

As a risk management strategy, this proved quite effective, and losses were contained accordingly. On the flip side, however, the neural network was never sufficiently confident in its forecasts to trade as aggressively as it had before. As a consequence, its profits were also lower than they had been previously.
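
To make that idea concrete, here is a minimal sketch, in Python, of what confidence-scaled position sizing could look like. The function name, the linear ramp, and the confidence threshold are illustrative assumptions rather than the actual rule used in these experiments.

    def position_size(predicted_return, confidence, max_exposure=1.0, threshold=0.5):
        # Sit out entirely when the forecast is not confident enough.
        if confidence < threshold:
            return 0.0
        # Ramp exposure linearly from zero at the threshold up to max_exposure
        # at full confidence, in the direction of the predicted move.
        scale = (confidence - threshold) / (1.0 - threshold)
        direction = 1.0 if predicted_return > 0 else -1.0
        return direction * max_exposure * scale

Under a rule like this, a forecast held with 80 percent confidence would commit only 60 percent of the maximum exposure, which is exactly the trade-off described above: contained losses on bad days, but smaller profits on good ones.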

What was needed, I concluded, was a means of anticipating market volatility to inform how aggressively the neural network should be encouraged to approach its trading behavior on any given day. I’ll go into greater detail about that arc of my work at a later time.

Another strategy I briefly experimented with was to make the neural network aware of its gains or losses as part of its ongoing training cycles. I let this iteration of the simulation run in the morning before leaving to attend several tech talks at a conference at the Aspen Institute, and returned later to find an end-of-day report characterized by nontrivial losses and erratic trading behavior by the neural network.
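
For the curious, below is a rough sketch of how an outcome signal like this might leak into training, written with PyTorch. The architecture, the loss weighting, and the way the day's profit and loss enters the update are all assumptions made for illustration, not the code from these experiments.

    import torch
    import torch.nn.functional as F

    # A tiny stand-in forecaster; the real network and data pipeline are not shown.
    model = torch.nn.Linear(16, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    def training_step(features, actual_return, realized_pnl, pnl_weight=10.0):
        forecast = model(features)
        forecast_error = F.mse_loss(forecast, actual_return)
        # One plausible way to make the network "aware" of its gains or losses:
        # losing trades amplify the gradient, so the size of each update tracks
        # the trade outcome rather than forecast quality alone.
        outcome_scale = 1.0 + pnl_weight * max(0.0, -realized_pnl)
        loss = outcome_scale * forecast_error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because the realized profit or loss on any single trade is partly noise, a construction like this lets that noise whip the weights around, which is one way to arrive at the erratic behavior described above.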

I’ve come to refer to this incident as the invention of artificial anxiety. So too has it come to inform how I think about anxiety itself.

While there are forms of anxiety that stem from neurochemical imbalances, I concluded from this experiment that more situational forms of anxiety can be a product of excessive fixation – conscious or latent – on superficial outcomes at the expense of the greater process or mode of operation driving those outcomes, to the point of undermining the primary objective itself.

This could translate to anything from decision paralysis to throwing out the baby with the bathwater: that is, repeatedly discarding partially formed processes in reaction to what is tantamount to noise, or irrelevant information, rather than continuously refining a single process. This invites the incoherent and seemingly impulsive behavior that comes with an inconsistent mode of operation.

If the central goal of the neural network was to forecast prices, and these forecasts informed the simulation’s trading decisions, then how exactly the simulation was performing with these forecasts was – as it related to forecasting financial trends – just noise as far as the neural network was concerned.
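
By way of contrast with the earlier sketch, the separation being argued for here might look something like the following, again under assumed names and shapes rather than the actual implementation: the forecaster only ever trains on prediction error, and profit and loss stays a report for the human rather than a signal for the weights.

    import torch
    import torch.nn.functional as F

    # Illustrative forecaster trained on prediction error alone.
    forecaster = torch.nn.Linear(16, 1)
    optimizer = torch.optim.SGD(forecaster.parameters(), lr=1e-3)

    def forecast_training_step(features, actual_return):
        loss = F.mse_loss(forecaster(features), actual_return)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    def end_of_day_report(trade_log):
        # Trading outcomes are summarized for evaluation, never fed back
        # into the network's training signal.
        return sum(trade["pnl"] for trade in trade_log)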

This is especially the case when there’s limited bandwidth, or processing capacity, available to work with. In these experiments, for instance, the neural network had to be small enough to be trained in real time on consumer hardware. As a consequence, any irrelevant considerations were especially disastrous, muddling how the neural network configured itself.

In the grand scheme of things, if the forecasts are accurate, then decisions based on those forecasts will be sound – provided those decisions are executed properly and in a timely fashion, of course – and the outcomes will be desirable.

So, in short, focus on improving the process, focus on what matters, and don’t bite off more than you can chew with limited resources.

Let that be the lesson for tonight.

Thank you for listening and sharing, and have a good night.

Lessons from Artificial Intelligence

This episode is part of a series.
Monday, December 18, 2023

008 - Lessons from Artificial Intelligence (Part VI)

In the sixth and final episode of this subseries, I go into further detail about the similarities between artificial neural networks and the human mind, and how the same dynamics of learning are used to influence our behavior and perception of reality.

Duration: 00:07:13


Saturday, September 23, 2023

007 - Lessons from Artificial Intelligence (Part V)

In the fifth and penultimate episode of this subseries, I discuss the 2013 Flash Crash, sentiment analysis, my inroads into business analysis, and the phenomenon of neural overfitting.

Duration: 00:06:10


Monday, September 18, 2023

006 - Lessons from Artificial Intelligence (Part IV)

In the fourth episode of this subseries, I discuss one of the variables in my experiments with artificial neural networks in algorithmic trading and what it taught me about intelligence, ideas, and policy.

Duration: 00:05:25


Saturday, September 02, 2023

003 - Lessons from Artificial Intelligence (Part II)

In the second episode of this subseries, I recount the beginnings of my interest in artificial intelligence, my early experimentation with artificial neural networks, and how this experimentation came to intersect with finance.

Duration: 00:02:50


Wednesday, August 30, 2023

002 - Lessons from Artificial Intelligence (Part I)

In this episode, I begin a subseries reflecting on my work with artificial neural networks, discussing observations and conclusions drawn from the experience.

Duration: 00:01:50