The last two days of the conference were workshops, and frankly had less rock-star content.
Overall, ICML this year was well organized (well, minus the pass holders that emit a constant cow-bell-like tinkling) and rich in content. I did not notice any breakthrough papers, though. Lots of RNNs, LSTMs, language/speech-related work, GANs, and Reinforcement Learning.
Toolset-wise it “feels” like mostly TensorFlow, Caffe, and PyTorch; even Matlab was mentioned a few times.
Principled Approaches to Deep Learning
This track was about the theoretical understanding of DNN architectures.
Do GANs actually learn the distribution? I personally had higher expectations of this talk. The main point was that yes, it is problematic to quantify the success of a GAN training algorithm, and that mode collapse is a problem. That’s pretty much all there was to it.
The morning talk was on Deep Reinforcement Learning in Complex Environments by Raia Hadsell from DeepMind. Overall, there were lots of great talks at the conference from DeepMind and Google Brain. The talk was generously sprinkled with newly published papers by DeepMind researchers in the Reinforcement Learning/gaming space. Angry Birds is not yet solved, just FYI if somebody is up for a challenge.
The main algorithms/approaches covered in the talk were hierarchical reinforcement learning, continual learning, continuous control, multimodal agents, and auxiliary tasks. See the quite entertaining and nicely annotated demos here.
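As a rough illustration of the auxiliary-task idea (the loss names and weights below are my own illustration, not taken from the talk): the agent optimizes its main RL objective plus weighted self-supervised auxiliary losses computed from the same shared network.

```python
# Sketch of the auxiliary-task idea: the agent's total loss combines
# the main RL loss with weighted self-supervised auxiliary losses
# (e.g. pixel-control, reward prediction). Names and weights here are
# illustrative, not from the talk.

def total_loss(main_rl_loss, aux_losses, weights):
    """Combine the main RL objective with auxiliary-task losses.

    aux_losses / weights: dicts keyed by auxiliary task name.
    """
    loss = main_rl_loss
    for name, aux in aux_losses.items():
        loss += weights.get(name, 0.0) * aux
    return loss

# Example: an actor-critic loss plus two auxiliary heads.
loss = total_loss(
    1.25,
    {"pixel_control": 0.8, "reward_prediction": 0.4},
    {"pixel_control": 0.05, "reward_prediction": 0.1},
)
```

The point is that the auxiliary heads only shape the shared representation during training; at test time the agent acts from the main policy alone.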
Deep Learning & Hardware
Main theme: let’s use CPUs effectively and make NN computation efficient on mobile devices.
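One common trick behind efficient mobile inference (my illustration, not tied to any specific talk in this track) is affine 8-bit quantization: store weights as small integer codes plus a scale and zero point, and dequantize on the fly.

```python
# Minimal sketch of affine (linear) 8-bit quantization, the kind of
# technique used to shrink networks for mobile inference. Details are
# my illustration; real frameworks add per-channel scales, calibration,
# and integer-only arithmetic on top of this.

def quantize(values, num_bits=8):
    """Map floats to integer codes in [0, 2^bits - 1] plus (scale, zero_point)."""
    lo, hi = min(values), max(values)
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels or 1.0  # avoid div-by-zero for constant input
    zero_point = round(-lo / scale)
    codes = [min(levels, max(0, round(v / scale) + zero_point)) for v in values]
    return codes, scale, zero_point

def dequantize(codes, scale, zero_point):
    return [(c - zero_point) * scale for c in codes]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
codes, scale, zp = quantize(weights)
approx = dequantize(codes, scale, zp)
# approx is close to the original weights, at 1/4 the storage of float32
```

Each dequantized value lands within one quantization step of the original, which is usually tolerable for inference accuracy.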
Here are my notes from ICML Day 3 (Tuesday).
Lots of interesting tracks (running in parallel) to choose from: Fisher approximations, Continuous optimization, RNNs, Reinforcement learning, Probabilistic inference, Clustering, Deep learning analysis, Game theory, etc.
The day kicked off with the “Test of Time Award” presentation. Each year the committee looks back roughly 10 years and chooses the paper that has proven to be most impactful. This time it was “Combining Online and Offline Knowledge in UCT” – the paper that laid the foundation for AlphaGo’s success. The original idea of MoGo was to leverage Reinforcement Learning and Monte-Carlo Tree Search; AlphaGo added a Deep Learning kick to it. Back in 2007 the authors made bets/predictions on the future of their algorithm; beating Go’s world champion within 10 years was one of them.
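For context, here is my own sketch of the UCT selection rule at the heart of that paper: at each tree node, UCT applies the UCB1 bandit rule, picking the child that maximizes mean value plus an exploration bonus (the paper’s contribution was mixing offline-learned knowledge into this online search).

```python
# Sketch of the UCT selection rule (UCB1 applied to tree search).
# The exploration constant c is illustrative; MoGo tunes it and blends
# in offline knowledge, which is what the awarded paper is about.
import math

def uct_select(children, c=1.4):
    """Return the index of the child maximizing mean value + exploration bonus.

    children: list of (total_value, visit_count) pairs.
    """
    parent_visits = sum(n for _, n in children)

    def score(child):
        value, visits = child
        if visits == 0:
            return float("inf")  # always try unvisited moves first
        return value / visits + c * math.sqrt(math.log(parent_visits) / visits)

    return max(range(len(children)), key=lambda i: score(children[i]))

# The unvisited third move is explored first:
best = uct_select([(10.0, 20), (3.0, 5), (0.0, 0)])  # → 2
```

With no unvisited children, the bonus term steers the search toward moves that are promising but under-sampled rather than greedily replaying the current best.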
Several policy evaluation approaches were discussed.
The amount of content is astounding. I am learning a lot and am truly impressed by the magnitude of highly promising research happening around the world.
Day 2 at ICML had a great variety of parallel tracks, with topics covering Online Learning, Probabilistic Learning, Deep Generative Models, Deep Learning Theory, Supervised Learning, Latent Feature Models, Reinforcement Learning, Continuous Optimization, Matrix Factorization, Metalearning, etc.
Bernhard Schölkopf kicked off the day with a talk on Causal Learning (book) and how causal ideas could be exploited for classical machine learning problems.
Deep Generative Models
Lots of interest in this area (no surprise). Here is what a few memorable talks were about (no links to papers, as they are easy to find with your favorite search engine ;)… maybe I will add them later).
The Thirty-fourth International Conference on Machine Learning is being held in Sydney this year. The cumulative brain power at this event is overwhelming. I’m taking notes, so below are a few things that I found interesting.
Among the most popular areas of interest at this year’s conference are:
- Neural Networks and Deep Learning
- Optimization (Continuous)
- Online Learning
- Generative Models
- Graphical Models
Machine Learning for Autonomous Vehicles
There is quite a bit of interest in this topic at the conference: a 2-hour tutorial on day 1 and a full-day workshop later in the week. The tutorial was presented by the Uber Advanced Technologies Group. It’s quite impressive that in slightly more than a year the team went from building its first self-driving vehicle (Dec ’15) to self-driving cars picking up Uber passengers in Pittsburgh (Feb ’17).