Wednesday, August 30, 2023
002 - Lessons from Artificial Intelligence (Part I)
In this episode, I begin a subseries reflecting on my work with artificial neural networks, discussing observations and conclusions drawn from the experience.
Duration: 00:01:50
Episode Transcript
Good evening. You’re listening to the Reflections in Beige Podcast, hosted by Michael LeSane. The date is 29 August 2023.
Around this time ten years ago, I made the difficult decision to leave academia to focus on my mental well-being and to try my hand in the software industry.
As a postgraduate, I worked with a research group that focused on high-performance computing – or in layman’s terms, solving complex problems with supercomputers. My particular research interest was the application of our distributed computing technology to artificial neural networks.
For the uninitiated, neural networks are best described as computational simulations of brain cell clusters. They attempt to predict patterns based on inputs, and can dynamically reconfigure themselves based on the feedback they’re provided.
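To make that "predict, then adjust from feedback" loop concrete, here is a minimal sketch in Python of a single artificial neuron. Everything in it – the OR pattern as training data, the learning rate, the squared-error gradient update – is an illustrative choice for this episode, not anything specific to my research.

```python
import math
import random

def sigmoid(x):
    # Squash the weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the logical OR pattern (illustrative choice).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.5

for epoch in range(5000):
    for (x1, x2), target in examples:
        # Predict: a weighted sum of inputs, passed through an activation.
        prediction = sigmoid(weights[0] * x1 + weights[1] * x2 + bias)
        # Feedback: the error nudges each weight slightly
        # (gradient descent on squared error).
        delta = (target - prediction) * prediction * (1 - prediction)
        weights[0] += learning_rate * delta * x1
        weights[1] += learning_rate * delta * x2
        bias += learning_rate * delta

for (x1, x2), target in examples:
    p = sigmoid(weights[0] * x1 + weights[1] * x2 + bias)
    print(f"input=({x1},{x2})  target={target}  prediction={p:.2f}")
```

Real networks stack thousands or millions of such units, but the loop is the same: predict, compare against feedback, adjust.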
Neural networks are used for everything from medical diagnoses, to financial forecasting, to language translation. More recently, they have powered text-to-image technologies like Stable Diffusion and Midjourney, and large language models like those used by ChatGPT.
Though my academic research largely revolved around using our distributed computing technology to speed up the training of large neural networks, the best stories and most enduring insights from this period of my career came from my hobbyist adventures with neural networks outside of research.
Over the next episode or several, I would like to recount some of these stories and the lessons I took from them, and to discuss their potential broader implications.
