After reading David Eagleman's new book Livewired, I did a deep dive into three aspects that computational neuroscientists should find interesting: the Potato Head model of evolution, reinforcement learning in the brain, and brain architecture search.

I've worked on Global Formula Racing's (GFR) Autonomous Vehicle team for two years, training deep neural networks to reliably detect cones in camera images. Here's a presentation on some of that work from May 2021.
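
The post doesn't describe the actual GFR training pipeline, so here is only a minimal sketch of what fine-tuning an off-the-shelf detector for cone perception can look like; the detector choice (torchvision Faster R-CNN), the class set, and the training hyperparameters are all assumptions.

```python
# Hypothetical sketch: fine-tuning a torchvision detector for cone detection.
# The real GFR pipeline is not described in the post; classes and settings are assumed.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # background + blue, yellow, and orange cones (assumed class set)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_step(images, targets):
    """One training step; targets hold per-image boxes and labels of annotated cones."""
    model.train()
    loss_dict = model(images, targets)   # classification + box-regression losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```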

I trained a ResNet18 deep neural network to infer the Toyota robot's position within a cognitive map of the social science building. This project was for the CARL robotics lab at UCI. Training took about 48 hours on a standard desktop computer.
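
For readers unfamiliar with this kind of setup, the sketch below shows one common way to adapt ResNet18 to predict a discrete location from a camera image; the number of map locations and the training details are assumptions, not specifics from the CARL project.

```python
# Hypothetical sketch: ResNet18 as a place classifier over a discretized cognitive map.
# NUM_LOCATIONS and the optimizer settings are assumed for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_LOCATIONS = 50  # assumed number of discretized places in the building map

model = models.resnet18(weights="IMAGENET1K_V1")          # start from ImageNet features
model.fc = nn.Linear(model.fc.in_features, NUM_LOCATIONS) # swap the classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, location_labels):
    """One training step: images -> predicted map location."""
    model.train()
    logits = model(images)                # [batch, NUM_LOCATIONS]
    loss = criterion(logits, location_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```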

I took part in the DARPA Machine Common Sense (MCS) research at OSU, collaborating with roboticists and developmental psychologists from the UoU and NYU. We competed against other research groups at Berkeley, MIT/IBM, Stanford, and elsewhere. Two of the three methods I helped develop at OSU took 1st place, and I was the sole developer for the Theory of Mind tasks. There are more tasks in the playlist, e.g. intuiting gravity, testing theory of mind, and navigating a room looking for a trophy. The code can be found in the video description.

Here's an earlier presentation from UCI's molecular neuroscience course. In this video I review a paper that combines many different cell sequencing datasets into one large dataset for the mouse primary motor cortex.

GFR is a collaboration between OSU and DHBW in Germany, and it has been a great source of knowledge and friendship. Here's another presentation, from January 2021.
