This year I will be presenting two sessions at the [de:code] conference, covering topics ranging from Meta-Learning and Object Detection @Edge to Hierarchical Attention Neural Networks. Thank you, Daiyu Hatakeyama-san, for warming up the audience 😉.
Presenting https://www.microsoft.com/developerblog/2018/03/06/sequence-intent-classification/ at The Data Science Conference: loved all the questions from the audience and the ideas on how to apply our work!
One of my favorites: using our malware-classification case study to analyze application logs 😉
Last week Long Beach, CA hosted the annual NIPS (Neural Information Processing Systems) Conference, with a record-breaking 8,000+ attendees. This conference is considered one of the biggest events in the ML/DNN research community.
Below are thoughts and notes on what was going on at NIPS. Hopefully these brief (and sometimes abrupt) statements will be intriguing enough to inspire your further research ;).
- Deep learning everywhere – pervasive across the other topics listed below. Lots of vision/image processing applications. Mostly CNNs and variations thereof. Two new developments: Capsule Networks and WaveNet.
- Reinforcement Learning – a strong comeback, with multiple sessions and papers on deep RL and multi-armed bandits.
- Meta-Learning and One-Shot Learning are often mentioned in the robotics and RL context.
- GANs – still popular, with several variations to speed up training / convergence and address the mode-collapse problem. Variational Auto-Encoders are also popular.
- Bayesian NNs are an area of active research.
- Fairness in ML – a keynote and several papers on dealing with / raising awareness of bias in models, plus approaches to generating explanations.
- Explainable ML – getting lots of attention.
- Tricks and approaches to speed up SGD.
- Graphical models are back! Deep learning meets probabilistic graphical modeling.
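As a toy illustration of one of the themes above – the multi-armed bandit setting from the RL sessions – here is a minimal epsilon-greedy agent in plain Python. This is my own sketch with made-up names and a Bernoulli reward model, not code from any NIPS paper:

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Run a simple epsilon-greedy agent on a stochastic multi-armed bandit.

    true_means: expected reward of each arm (Bernoulli rewards assumed).
    Returns (pull counts per arm, estimated mean reward per arm).
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # how many times each arm was pulled
    estimates = [0.0] * n_arms   # running mean reward per arm

    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                            # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])   # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental mean update: new_mean = old_mean + (x - old_mean) / n
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return counts, estimates

# The agent should mostly pull the best arm (index 2) and estimate it near 0.8.
counts, estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

The exploit step greedily picks the arm with the best running estimate, while the epsilon-fraction of random pulls keeps refreshing the estimates of the other arms.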
Debugging a neural network can be quite an adventure.
These blog posts have lots of useful info on the topic – check them out:
The last two days of the conference were workshops, which actually had less rock-star content.
Overall, ICML this year was well organized (well, minus the pass holders that emit a constant cowbell-like tinkling) and rich in content. I did not notice any breakthrough papers, though. Lots of RNNs, LSTMs, language/speech-related work, GANs, and Reinforcement Learning.
Toolset-wise, it “feels” like mostly TensorFlow, Caffe, and PyTorch; even MATLAB was mentioned a few times.
Principled Approaches to Deep Learning
This track was about the theoretical understanding of DNN architectures.
Do GANs actually learn the distribution? I personally had higher expectations of this talk. The main point was that, yes, it is problematic to quantify the success of a GAN training algorithm, and that mode collapse is a problem. That’s pretty much all there was to it.
The morning talk was on Deep Reinforcement Learning in complex environments, by Raia Hadsell from DeepMind. Overall there were lots of great talks at the conference from DeepMind and Google Brain. The talk was generously sprinkled with newly published papers by DeepMind researchers in the Reinforcement Learning/gaming space. Angry Birds is not yet solved, just FYI if somebody is up for a challenge.
The main algorithms/approaches covered in the talk were hierarchical reinforcement learning, continual learning, continuous control, multimodal agents, and auxiliary tasks. See quite entertaining and nicely annotated demos here.
Deep learning & hardware
Main theme: let’s use CPUs effectively and make NN computation efficient on mobile devices.
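One common trick in this space is weight quantization: storing weights as small integers plus a scale factor. As a self-contained sketch in plain Python (not tied to any particular framework or talk; the function names are my own), here is symmetric int8 quantization of a weight vector:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8 values.

    Returns (int_weights, scale); recover floats as w_int * scale.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    # round to nearest integer, clamp into the signed 8-bit range
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate float weights."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)   # close to w, within scale/2 per element
```

Each weight then costs one byte instead of four, and the rounding error per element is bounded by half the scale – which is why quantization is attractive for mobile inference.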
The Thirty-fourth International Conference on Machine Learning is being held in Sydney this year. The cumulative brain power of this event is overwhelming. I’m taking notes, so below are a few things that I found interesting.
Among the most popular areas of interest at this year’s conference are:
- Neural Networks and Deep Learning
- Optimization (Continuous)
- Online Learning
- Generative Models
- Graphical Models
Machine Learning for Autonomous Vehicles
There is quite a bit of interest in this topic at the conference: a two-hour tutorial on day 1 and a full-day workshop later in the week. The tutorial was presented by the Uber Advanced Technologies Group. It’s quite impressive that in slightly more than a year the team went from building their first self-driving vehicle (Dec ’15) to self-driving cars picking up Uber passengers in Pittsburgh (Feb ’17).
Well, a first blog post has to be cool, right? It just so happens that only a few days ago the official write-up about my team’s hackfest with Getty Images got published! Here we’ve got machine learning, DNNs, and GANs, sprinkled with Kubernetes. So here it is – my first blog post references this awesomeness: Learning Image to Image Translation with CycleGANs. Enjoy!