  1. Azure Machine Learning: CPU Optimizations

    Microsoft Azure's Data Science Virtual Machine is its most popular VM offering in the ML space. This project extended that offering with ML frameworks built and tuned for execution on Intel CPUs.

  2. Optimized Machine Learning VMs on AWS

    Developed orchestration software to ensure ML frameworks are built and configured optimally for the hardware they run on. If you launch a CPU machine learning instance on AWS, you can be confident the frameworks were built for that instance's hardware!
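The hardware-aware selection described above can be sketched in a few lines. This is a minimal illustration, not the actual orchestration software: it parses CPU feature flags in the format Linux exposes in /proc/cpuinfo and picks the most specialized build available. The build names and `BUILD_PREFERENCES` tiers are hypothetical.

```python
# Hypothetical build tiers, ordered from most to least specialized.
BUILD_PREFERENCES = [
    ("avx512f", "framework-avx512"),
    ("avx2", "framework-avx2"),
    ("sse4_2", "framework-sse4"),
]

def parse_flags(cpuinfo_text):
    """Extract the CPU feature flags from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def choose_build(flags):
    """Pick the most specialized framework build the CPU supports."""
    for feature, build in BUILD_PREFERENCES:
        if feature in flags:
            return build
    return "framework-generic"

sample = "flags\t\t: fpu vme sse4_2 avx avx2 fma"
print(choose_build(parse_flags(sample)))  # framework-avx2
```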

  3. Distributed Deep Learning: Convolutional Neural Networks for Brain Tumor Segmentation

    In collaboration with the AI team at General Electric's Global Research Center, we developed a distributed deep learning system for brain tumor segmentation.
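A standard pattern in distributed deep learning is synchronous data parallelism: each worker computes gradients on its shard of the batch, and the averaged gradient equals the full-batch gradient. The numpy sketch below illustrates that identity for a linear model; it is a sketch of the general technique, not the actual GE Global Research system.

```python
import numpy as np

def gradient(w, X, y):
    """Mean-squared-error gradient for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = np.zeros(3)

# "Distribute" the batch across two workers with equal-sized shards.
shards = [(X[:4], y[:4]), (X[4:], y[4:])]
worker_grads = [gradient(w, Xs, ys) for Xs, ys in shards]

# Synchronous all-reduce step: average the per-worker gradients.
avg_grad = np.mean(worker_grads, axis=0)

# With equal shard sizes, this matches the single-node full-batch gradient.
assert np.allclose(avg_grad, gradient(w, X, y))
```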

  4. Accelerating Memory Bound Machine Learning Applications

    Matrix-vector operations are often memory-bound and can substantially degrade training throughput in machine learning. In cooperation with Siemens AI Research, and using a memory-bound meta-learning model, we demonstrated that decreasing the number of threads utilized can yield throughput increases of up to 100x.
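The mechanism at play can be sketched with a threaded matrix-vector kernel whose thread count is a tunable parameter. This is an illustration of the technique (splitting the kernel across a configurable thread pool), not the Siemens study itself, and it checks correctness rather than reproducing the 100x throughput result, since timings vary by machine.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def matvec_threaded(A, x, n_threads):
    """Matrix-vector product with rows split across n_threads threads.

    Matrix-vector products do O(n) arithmetic per O(n) bytes touched,
    so they are typically memory-bandwidth-bound: once bandwidth is
    saturated, extra threads add contention rather than throughput.
    """
    chunks = np.array_split(np.arange(A.shape[0]), n_threads)
    out = np.empty(A.shape[0])

    def work(rows):
        out[rows] = A[rows] @ x

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(work, chunks))
    return out

rng = np.random.default_rng(1)
A = rng.normal(size=(512, 256))
x = rng.normal(size=256)

# The result is identical regardless of thread count; only throughput
# differs, and on a bandwidth-bound kernel fewer threads can win.
for n in (1, 2, 4):
    assert np.allclose(matvec_threaded(A, x, n), A @ x)
```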

  5. Biomedical Image Segmentation: Upsampling vs. Transposed Convolution

    The semantic segmentation topology U-Net is designed somewhat like an autoencoder. This work investigated the effects of upsampling via nearest-neighbor interpolation as well as transposed convolution (also called fractionally strided convolution).
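The two upsampling paths can be contrasted in a small numpy sketch of the operators themselves (not the U-Net experiments): nearest-neighbor upsampling is fixed pixel repetition, while transposed convolution scatters each input pixel through a kernel that can be learned. With an all-ones kernel and stride equal to the kernel size, the two coincide.

```python
import numpy as np

def nn_upsample(x, factor=2):
    """Nearest-neighbor upsampling: repeat each pixel factor times per axis."""
    return np.repeat(np.repeat(x, factor, axis=0), np.repeat(factor, 1)[0], axis=1)

def transposed_conv2d(x, kernel, stride=2):
    """Transposed (fractionally strided) convolution via scatter-add:
    each input pixel stamps a scaled copy of the kernel into the output."""
    h, w = x.shape
    k = kernel.shape[0]
    out = np.zeros(((h - 1) * stride + k, (w - 1) * stride + k))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride + k, j*stride:j*stride + k] += x[i, j] * kernel
    return out

x = np.array([[1.0, 2.0], [3.0, 4.0]])

# With an all-ones 2x2 kernel and stride 2, transposed convolution
# reduces to nearest-neighbor upsampling; learned kernels generalize it.
assert np.array_equal(transposed_conv2d(x, np.ones((2, 2)), stride=2),
                      nn_upsample(x, factor=2))
```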

  6. Thorn

    Thorn uses machine learning to identify and rescue children from sexual exploitation and human trafficking.

    During my tenure as a volunteer ML Scientist, I built machine learning tools for face identification and age detection, though what I'm most proud of is a method for scraping large amounts of labeled data from the web (for free!). This method enabled us to grow our training set by five orders of magnitude and increase the accuracy and confidence of our age detector.