Neuroscience to AI: Transitioning Laterally
How do we hear? A sound wave first meets the pinna, the outermost part of your ear. It then travels down the ear canal and vibrates the tympanic membrane, causing the ossicles, the three smallest bones in your body, to push against the oval window. This sends pressure waves through the fluid of the cochlea, where hair cells transduce the motion into electrical signals that travel along your vestibulocochlear nerve and are relayed to your auditory cortex.
While this pathway shows how sound is carried from the environment to the brain, it does not explain how electrochemical activity translates into thought, nor does it detail why thought varies from person to person, or how.
Current technology in the neuroscience field is rudimentary; it lacks, for example, the development necessary to link our understanding of chemical reactions to the dissection of thought. For a neuroscience major, the common paths after graduation are attending medical school or spending a lifetime in academia. Neither of these interested me, so, combined with my affinity for programming, I decided to pursue computational or translational neuroscience and help refine the technological problems we currently face.
Neuroscience is still a relatively young, pioneering field, and the prevailing technology isn't enough to understand the brain itself: we can evaluate and research behavior, and study and dissect brains post mortem, but the tools for observing brain activity in real time cannot fully explain brain function on their own.
For instance, fMRI uses magnetic resonance imaging to track blood flow; using Blood Oxygen Level Dependent (BOLD) signals, an fMRI machine allows us to visualize where activity is present. However, tracking estimated blood flow in the brain is not enough for scientists and researchers to understand the implicit relationships between different lobes and regions: studies report a reproducibility rate of roughly 39%, variability between patients, and extraneous variables that are difficult to control for, such as heart rate, diet, blood pressure, and gene expression (Specht, 2020).
Originally, fMRI was used to map average human brain activity, but over time our growing understanding has made scientists realize that brains are more individual and unique than we thought. These differences between brains have made it difficult to produce results that are unanimously trusted in the neuroscience field.
Luckily, machine learning advancements have brought major improvements to the real-time reading and accuracy of fMRI technology; machine learning and deep learning have evolved into brain-imaging analysis tools used to "devise imaging-based diagnostic and classification systems of stroke, certain psychiatric disorders, epilepsy, etc." (Zhu et al., 2019). Kazeminejad and Sotero used machine learning algorithms to reevaluate fMRI data from 816 participants in a study classifying autism from the topological properties of brain networks. These algorithms allowed the researchers to achieve "accuracy, sensitivity, and specificity of 95, 97, and 95%, respectively" in finding biomarker commonality between participants (2019).
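The general shape of that pipeline — extract topological features from each participant's brain network, then train a classifier on them — can be sketched in miniature. Everything below is a synthetic toy: the feature values, group centroids, and nearest-centroid classifier are my own illustrative stand-ins, not the actual data or algorithm from the Kazeminejad and Sotero study.

```python
import random

random.seed(0)

def make_subject(group):
    """Generate synthetic stand-ins for two graph-topology features
    (think: clustering coefficient, characteristic path length).
    Real studies derive these from resting-state fMRI networks."""
    base = (0.40, 2.0) if group == "control" else (0.55, 2.6)
    features = [base[0] + random.gauss(0, 0.05),
                base[1] + random.gauss(0, 0.2)]
    return features, group

# 50 synthetic participants per group, split into train and test sets
data = [make_subject(g) for g in ("control", "asd") for _ in range(50)]
random.shuffle(data)
train, test = data[:70], data[70:]

def centroid(rows):
    """Per-feature mean of a list of feature vectors."""
    return [sum(r[i] for r in rows) / len(rows)
            for i in range(len(rows[0]))]

# Nearest-centroid classifier: average the training features per group,
# then label new subjects by whichever centroid is closest.
cents = {g: centroid([x for x, lab in train if lab == g])
         for g in ("control", "asd")}

def predict(x):
    return min(cents, key=lambda g: sum((a - b) ** 2
                                        for a, b in zip(x, cents[g])))

accuracy = sum(predict(x) == lab for x, lab in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic features like these, even this minimal classifier scores well; the hard part in the real study is engineering features from noisy fMRI data that separate the groups at all.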
fMRI advancement with machine learning is only one example of how software engineering has resolved discrepancies in our understanding of relationships within the brain. Neuroscience and software engineering also blend in a field that points toward a cyborg future: neural prosthetics.
Neural prosthetics: a minute but fascinating blend of neuroscience and computer science! One of my mentors in undergrad, the head researcher of my university's neuro-rehabilitation laboratory studying post-stroke sensation reception, held a doctorate in software engineering but had a deep interest in neuroscience. Her love for both biology and software engineering paved the way for me to understand that these two fields could mix.
Neural prosthetics are an enthralling advancement. Hugh Herr, a pioneer of bionics, created his own prosthetic limbs that register motor output, but not sensory input.
Sensory reception remains a crucial drawback within the bionic community. While there are methods for deducing sensation, the technology for receiving sensory input in real time, in a noninvasive manner, is extremely limited. With advances in software engineering, biological understanding, and mechanics, however, sensation reception can be restored.
In my junior and senior years of undergrad, I asked my advisors whether I should continue school for a second bachelor's or a master's in software engineering. To my luck, they encouraged me to skip a degree program and attend a coding bootcamp instead. Reaching a career in AI engineering could take years through traditional schooling; a coding bootcamp could get me there in less than a year.
I chose Flatiron School because of its unique immersive education system and undeniable belief in its students. Throughout the coding prep-work, I never felt isolated: my questions were always welcomed and well answered by patient coaches, and, with hard work, I was able to retain the material being taught. If you are a student in a biology-related field, or are interested in immersing yourself in the coding world, I encourage you to research and weigh your options, but at least check out Flatiron's coding bootcamp. The pre-med track is competitive and can be difficult to navigate and network within; I've never experienced such thorough encouragement and support from my peers and teachers as I have at Flatiron.
As Flatiron students, we have algorithm exercises to practice for coding interviews, an algorithm club, lectures, special guest speakers, and AI tech-interview events and prep.
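To give a flavor of those interview-style algorithm exercises, here is a classic example of the kind of problem that comes up: two-sum, solved in one pass with a hash map. This particular problem is my own illustration, not an official Flatiron assignment.

```python
def two_sum(nums, target):
    """Return the indices of the two numbers in nums that add up
    to target, or None if no such pair exists.
    One pass with a hash map: O(n) time, O(n) extra space."""
    seen = {}  # value -> index of where we saw it
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            return [seen[complement], i]
        seen[n] = i
    return None

print(two_sum([2, 7, 11, 15], 9))  # → [0, 1]
```

The interview follow-up is usually about the trade-off: the brute-force nested loop is O(n²) with no extra memory, while the hash map buys linear time at the cost of linear space.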
While Flatiron prepares you for the real-world interactions and expectations of software engineers, its empathy and compassion toward its students are incomparable: every day at 5:30, the cohort holds a "stand down." Stand down lets students talk through their struggles of the day and share the feelings that resonate throughout the group. The purpose of this exercise is not only to acknowledge that imposter syndrome is real, but to learn, through each other, that it is not a solo journey: you get through it by building a community.
Just as a tractor is an extension of the modern farmer's body, a computer is an extension of the brain. Software engineering is the vessel that allows tangible augmentation of the mind and interconnectivity around the world. I studied neuroscience for four and a half years and graduated from Colorado State University in December 2019. Whether through neural prosthetics, post-stroke research, human-computer interaction, or artificial intelligence, Brain-Computer Interface technology, built on software engineering and fused with neuroscience, is the career path I have chosen and will continue to choose.
Resources:
AJ+. (2016, January 14). This man can control his bionic arm with his mind [Video]. YouTube. www.youtube.com/watch?v=ocFRxDFawHA
Kazeminejad, A., & Sotero, R. C. (2019). Topological properties of resting-state fMRI functional networks improve machine learning-based autism classification. Frontiers in Neuroscience, 12, 1018. https://doi.org/10.3389/fnins.2018.01018
Lv, H., et al. (2018). Resting-state functional MRI: Everything that nonexperts have always wanted to know. American Journal of Neuroradiology. doi:10.3174/ajnr.a5527
Specht, K. (2020). Current challenges in translational and clinical fMRI and future directions. Frontiers in Psychiatry, 10. doi:10.3389/fpsyt.2019.00924
TEDtalksDirector. (2014, March 28). New bionics let us run, climb and dance | Hugh Herr [Video]. YouTube. www.youtube.com/watch?v=CDsNZJTWw0w
Wang, Z. I., & Jones, S. E. (2019, December 6). Integrating MRI post-processing with artificial intelligence in epilepsy. Consult QD. consultqd.clevelandclinic.org/integrating-mri-post-processing-with-artificial-intelligence-in-epilepsy/
Zhu, G., Jiang, B., Tong, L., Xie, Y., Zaharchuk, G., & Wintermark, M. (2019). Applications of deep learning to neuro-imaging techniques. Frontiers in Neurology, 10. doi:10.3389/fneur.2019.00869