CVPR 2018 — recap, notes and trends

This year's CVPR (Computer Vision and Pattern Recognition) conference accepted 900+ papers. This blog post gives an overview of some of them: notes that my amazing colleague Tingting Zhao and I captured together.

The main conference ran the following presentation tracks over three days:

  • Special session: Workshop Competitions
  • Object Recognition and Scene Understanding
  • Analyzing Humans in Images
  • 3D Vision
  • Machine Learning for Computer Vision
  • Video Analytics
  • Computational Photography
  • Image Motion and Tracking
  • Applications

Below are some trends and topics worth mentioning:

  • Video analysis: captioning, action classification, predicting the direction a person (pedestrian) will move.
  • Visual sentiment analysis.
  • Agent orientation in space (rooms), virtual-room datasets — topics related to enabling machines to perform tasks.
  • Person re-identification in video feeds.
  • Style transfer (GANs) is still a theme.
  • Adversarial attack analysis (see the small sketch after this list).
  • Image enhancement — removing raindrops, removing shadows.
  • NLP + Computer Vision.
  • Image and video saliency.
  • Efficient computation on edge devices.
  • Weakly supervised learning for computer vision.
  • Domain adaptation.
  • Interpretable Machine Learning.
  • Applications of Reinforcement Learning to CV: optimizing networks, data, and the NN learning process.
  • Lots of interest in the data-labeling area.
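To make the adversarial-attacks bullet concrete, here is a minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch. This is a generic illustration, not code from any CVPR paper; the model and data below are toy placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: take one signed-gradient step in the
    direction that increases the loss, clipped to the image range [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()   # perturbation bounded by eps per pixel
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a placeholder linear "model" on flat 28x28 inputs.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)          # fake batch of images
y = torch.randint(0, 10, (4,))        # fake labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())        # max perturbation is at most eps
```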

The notes below are loosely grouped into subsections.

Here is a nice compilation of person re-identification papers (in Mandarin; online translators do an OK job 🙂).

For more info, please dig into the presentations and workshops archive.

Videos from sessions are here.


NIPS 2017 — notes and thoughts

Last week Long Beach, CA hosted the annual NIPS (Neural Information Processing Systems) conference, with a record-breaking number of attendees (8000+). This conference is considered one of the biggest events in the ML/DNN research community.

Below are thoughts and notes related to what was going on at NIPS. Hopefully these brief (and sometimes abrupt) statements will be intriguing enough to inspire your further research ;).

Key trends

    1. Deep learning everywhere – pervasive across the other topics listed below. Lots of vision/image-processing applications. Mostly CNNs and variations thereof. Two new developments: Capsule Networks and WaveNet.
    2. Reinforcement Learning – a strong comeback, with multiple sessions and papers on Deep RL and multi-armed bandits.
    3. Meta-Learning and One-Shot Learning are often mentioned in Robotics and RL contexts.
    4. GANs – still popular, with several variations to speed up training / convergence and address the mode-collapse problem. Variational Auto-Encoders are also popular.
    5. Bayesian NNs are an area of active research.
    6. Fairness in ML – a keynote and several papers on dealing with / raising awareness of bias in models, and approaches to generating explanations.
    7. Explainable ML — lots of attention to it.
    8. Tricks and approaches to speed up SGD (a toy momentum sketch follows this list).
    9. Graphical models are back! Deep learning meets graphical probabilistic modeling.
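On item 8, a classic example of such a trick is momentum. Here is a toy numpy sketch (my own illustration, not from any NIPS talk) on an ill-conditioned quadratic:

```python
import numpy as np

def sgd_momentum(grad_fn, w, lr=0.02, mu=0.9, steps=300):
    """Classical (heavy-ball) momentum: accumulate a velocity that
    smooths gradients and speeds progress along consistent directions."""
    v = np.zeros_like(w)
    for _ in range(steps):
        v = mu * v - lr * grad_fn(w)
        w = w + v
    return w

# Toy quadratic loss f(w) = 0.5 * w^T A w with an ill-conditioned A.
A = np.diag([1.0, 50.0])
w_final = sgd_momentum(lambda w: A @ w, w=np.array([1.0, 1.0]))
print(w_final)  # close to the minimum at [0, 0]
```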


Days 5 & 6 at ICML. All done.

The last two days of the conference were workshops and actually had less rock-star content.

Overall, ICML this year was well organized (well, minus the pass holders that emit constant cowbell-like tinkling) and rich in content. I did not notice any breakthrough papers, though. Lots of RNNs, LSTMs, language/speech-related work, GANs, and Reinforcement Learning.

Toolset-wise it “feels” like mostly TensorFlow, Caffe, and PyTorch; even Matlab was mentioned a few times.

 

Principled Approaches to Deep Learning

This track was about theoretical understanding of DNN architectures.

Do GANs actually learn the distribution? I personally had higher expectations of this talk. The main point was that, yes, it is problematic to quantify the success of a GAN training algorithm, and that mode collapse is a problem. That was pretty much it.
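For background: the line of work behind this talk proposes a birthday-paradox test. If near-duplicates show up in a batch of s generated samples, the support of the learned distribution is plausibly only on the order of s². Below is a rough sketch of the idea, with a collapsed toy “generator” standing in for a real GAN; the actual paper judges near-duplicates with visual similarity heuristics plus human inspection, not a raw pixel threshold.

```python
import numpy as np

def duplicate_pairs(samples, thresh=1e-3):
    """Birthday-paradox style check: count near-duplicate pairs in a batch.
    Frequent collisions in a batch of size s suggest a support size on the
    order of s**2, i.e., mode collapse."""
    s = len(samples)
    flat = samples.reshape(s, -1)
    # Pairwise squared Euclidean distances between all samples.
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    return sum(1 for i in range(s) for j in range(i + 1, s) if d2[i, j] < thresh)

# Placeholder generator: a collapsed "GAN" that only knows 20 images.
modes = np.random.rand(20, 8, 8)
samples = modes[np.random.randint(0, 20, size=64)]  # batch of 64 draws
print(duplicate_pairs(samples))  # many collisions => tiny support
```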


Day 4 at ICML 2017 — more Adversarial NNs

The morning talk was about Deep Reinforcement Learning in complex environments, by Raia Hadsell from DeepMind. Overall, there were lots of great talks at the conference from DeepMind and Google Brain. The talk was generously sprinkled with newly published papers by DeepMind researchers in the Reinforcement Learning/gaming space. Angry Birds is not yet solved, just FYI if somebody is up for a challenge.

The main algorithms/approaches covered in the talk were: hierarchical reinforcement learning, continual learning, continuous control, multimodal agents, and auxiliary tasks. See quite entertaining and nicely annotated demos here.

Deep learning & hardware

Main theme: let's use CPUs effectively and make NN computation efficient on mobile devices.
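One standard recipe for making NN computation cheap on mobile is low-precision inference. As a toy illustration (my own, not from a specific talk), here is symmetric 8-bit weight quantization in numpy:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization: map float weights to int8 with a
    single scale, so matmuls can run in cheap 8-bit integer arithmetic."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
print(np.abs(w - dequantize(q, scale)).max())  # small quantization error
```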


Day 3 at ICML 2017 — musical RNNs

Here are my notes from ICML Day 3 (Tuesday).

Lots of interesting tracks (running in parallel) to choose from: Fisher approximations, Continuous optimization, RNNs, Reinforcement learning, Probabilistic inference, Clustering, Deep learning analysis, Game theory, etc.

The day kicked off with the “Test of Time Award” presentation. Each year the committee looks back ~10 years and chooses the paper that has proven to be most impactful. This time it was “Combining Online and Offline Knowledge in UCT” – the paper that laid the foundation of AlphaGo's success. The original idea of MoGo was leveraging Reinforcement Learning and Monte-Carlo Tree Search; AlphaGo added a Deep Learning kick to it. Back in 2007 the authors made bets/predictions on the future of their algorithm; beating Go's world champion within 10 years was one of them.
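For context, UCT is UCB1 applied to tree search: at each node, pick the child maximizing its average value plus an exploration bonus that shrinks with visit count. A minimal sketch of the selection rule (the data layout here is my own, hypothetical):

```python
import math

def uct_select(children, c=1.4):
    """UCT selection: balance exploitation (mean value) against exploration
    (visit-count bonus). Each child is a (total_value, visits) pair."""
    parent_visits = sum(v for _, v in children)
    def score(child):
        value, visits = child
        if visits == 0:
            return float("inf")  # always try unvisited moves first
        return value / visits + c * math.sqrt(math.log(parent_visits) / visits)
    return max(range(len(children)), key=lambda i: score(children[i]))

# Three candidate moves: (accumulated reward, times tried).
print(uct_select([(5.0, 10), (3.0, 4), (0.0, 0)]))  # -> 2 (the unvisited move)
```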

Reinforcement learning

Several policy evaluation approaches were discussed.
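As a refresher on what policy evaluation means here: estimating the value function V(s) of a fixed policy. A toy tabular TD(0) sketch on the classic 5-state random walk (my own illustration, not from a specific paper):

```python
import numpy as np

def td0_evaluate(episodes, n_states, alpha=0.1, gamma=1.0):
    """Tabular TD(0) policy evaluation: nudge V(s) toward the one-step
    bootstrapped target r + gamma * V(s')."""
    V = np.zeros(n_states)
    for episode in episodes:
        for s, r, s_next in episode:
            target = r if s_next is None else r + gamma * V[s_next]
            V[s] += alpha * (target - V[s])
    return V

# Toy 5-state random walk: start in the middle, +1 reward at the right exit.
def random_walk_episode(n=5):
    s, steps = n // 2, []
    while True:
        s_next = s + np.random.choice([-1, 1])
        if s_next < 0:
            steps.append((s, 0.0, None)); return steps
        if s_next >= n:
            steps.append((s, 1.0, None)); return steps
        steps.append((s, 0.0, s_next)); s = s_next

V = td0_evaluate([random_walk_episode() for _ in range(2000)], n_states=5)
print(V.round(2))  # roughly [1/6, 2/6, 3/6, 4/6, 5/6]
```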


Brain endurance or Day 2 at ICML 2017

The amount of content is astounding. I am learning a lot and am truly impressed by the magnitude of highly promising research happening around the world.

Day 2 at ICML had a great variety of parallel tracks with topics covering Online Learning, Probabilistic Learning, Deep Generative Models, Deep Learning Theory, Supervised Learning, Latent Feature Models, Reinforcement Learning, Continuous Optimization, Matrix Factorization, Meta-learning, etc.

Bernhard Schölkopf kicked off the day with a talk on Causal Learning (book) and how causal ideas could be exploited for classical machine learning problems.

Deep Generative Models

Lots of interest in this area (no surprise). Here is what a few memorable talks were about (no links to papers as they are easy to find using your fav search engine ;)… maybe I will add those later).
