NIPS 2017 — notes and thoughts

Last week Long Beach, CA hosted the annual NIPS (Neural Information Processing Systems) conference, with a record-breaking 8,000+ attendees. The conference is considered one of the biggest events in the ML/DNN research community.

Below are thoughts and notes on what was going on at NIPS. Hopefully these brief (and sometimes abrupt) statements will be intriguing enough to inspire your further research ;).

Key trends

    1. Deep learning everywhere – pervasive across the other topics listed below. Lots of vision/image processing applications. Mostly CNNs and variations thereof. Two new developments: Capsule Networks and WaveNet.
    2. Reinforcement Learning – strong comeback, with multiple sessions and papers on deep RL and multi-armed bandits.
    3. Meta-Learning and One-Shot Learning – often mentioned in the robotics and RL context.
    4. GANs – still popular, with several variations to speed up training / convergence and to address the mode collapse problem. Variational Auto-Encoders are also popular.
    5. Bayesian NNs – an area of active research.
    6. Fairness in ML – Keynote and several papers on dealing with / awareness of bias in models, approaches to generate explanations.
    7. Explainable ML — lots of attention to it.
    8. Tricks and approaches to speed up SGD.
    9. Graphical models are back! Deep learning meets probabilistic graphical modeling.

Continue reading