The world is inherently structured; our models should learn representations which reflect that.

I'm interested in machine learning models which explicitly represent the structure of the problem domain in their architecture. This kind of architectural constraint is desirable because it lets the model encode structural assumptions about the input data, assumptions which reduce sample complexity while allowing the model to learn meaningful, interpretable relations between entities.
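
As a minimal sketch of what I mean (a toy example, not anything from my own research), here is a small permutation-invariant network in the style of Deep Sets. The sum-pooling step hard-codes the structural assumption that the order of set elements carries no information, so the model never has to spend samples learning that invariance from data. All names here are illustrative.

```python
import torch
import torch.nn as nn

class DeepSetsClassifier(nn.Module):
    """Toy permutation-invariant model over sets of feature vectors.

    Sum pooling bakes in the assumption that element order is
    meaningless, an architectural constraint rather than something
    the model must learn from examples.
    """

    def __init__(self, in_dim: int, hidden: int, n_classes: int):
        super().__init__()
        # phi is applied independently to every element of the set
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # rho maps the pooled (order-free) summary to class logits
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, in_dim); summing over the set axis
        # makes the output identical under any element permutation
        return self.rho(self.phi(x).sum(dim=1))

# Shuffling the set elements leaves the prediction unchanged:
model = DeepSetsClassifier(in_dim=4, hidden=32, n_classes=3)
x = torch.randn(2, 5, 4)
perm = torch.randperm(5)
assert torch.allclose(model(x), model(x[:, perm]), atol=1e-6)
```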

The swirling, metallic device shown above is an astrolabe I came across while traveling in Florence. Astrolabes were used as navigational tools before the invention of the sextant, and were probably the earliest predictive instruments to explicitly represent the structure of their problem domain (namely, the spatiotemporal relationships between the earth and other celestial bodies) in their construction.



Biography

Like a healthy population of machine learning researchers, I have an academic background in physics. In undergrad, I conducted research on charge photogeneration and transfer in organic semiconductors under the supervision of Oksana Ostroverkhova. You can see some of our work here.

Following undergrad, I received a Master's in Applied Optical Physics from the University of Oregon, where I designed 3D-printed gradient-index phase masks for optical testing aboard NASA's James Webb Space Telescope.

I also spent a few wonderful years at Intel: first as a Process Engineer manufacturing photomasks in our factories, then as a Machine Learning Engineer in our AI group, where I worked on a truly enormous variety of projects (some of which are public and viewable here).



Get in touch

Email is the most reliable way to reach me, but feel free to get in touch on LinkedIn or find me on GitHub.


Site design inspired by Jaan Altosaar and, transitively, by Cole Townsend.