CVPR 2018 — recap, notes and trends

This year the CVPR (Computer Vision and Pattern Recognition) conference accepted 900+ papers. This blog post gives an overview of some of them, with notes that I captured together with my amazing colleague Tingting Zhao.

The main conference ran the following presentation tracks over 3 days:

  • Special session: Workshop Competitions
  • Object Recognition and Scene Understanding
  • Analyzing Humans in Images
  • 3D Vision
  • Machine Learning for Computer Vision
  • Video Analytics
  • Computational Photography
  • Image Motion and Tracking
  • Applications

Below are some trends and topics worth mentioning:

  • Video analysis: captioning, action classification, predicting which direction a person (pedestrian) will move.
  • Visual sentiment analysis.
  • Agent orientation in space (a room), virtual-room datasets: topics related to enabling machines to perform tasks.
  • Person re-identification in video feeds.
  • Style transfer (GANs) is still a theme.
  • Adversarial attack analysis (see the FGSM sketch after this list).
  • Image enhancement: removing raindrops, removing shadows.
  • NLP + Computer Vision.
  • Image and video saliency.
  • Efficient computation on edge devices.
  • Weakly supervised learning for computer vision.
  • Domain adaptation.
  • Interpretable Machine Learning.
  • Applications of Reinforcement Learning to CV: optimizing the network, the data, the NN learning process.
  • Lots of interest in the data-labeling area.
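
Since adversarial attacks come up repeatedly, here is a toy sketch of the simplest attack, FGSM (Fast Gradient Sign Method), run against a hand-rolled logistic classifier. This is my own minimal illustration, not code from any CVPR paper; the model, weights, and epsilon are invented for the example.

```python
import numpy as np

# Toy FGSM (Fast Gradient Sign Method) sketch on a hand-rolled logistic
# classifier. Model, weights, and epsilon are invented for illustration.
def fgsm(x, w, b, y, eps=0.1):
    # Logistic model: p = sigmoid(w.x + b). For cross-entropy loss,
    # the gradient w.r.t. the input x is (p - y) * w.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)   # perturb along the gradient's sign

w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, 0.5])            # classified positive (w.x = 1.5)
x_adv = fgsm(x, w, b, y=1.0)        # nudge toward the negative class
print(w @ x, w @ x_adv)             # margin shrinks: 1.5 -> 1.2
```

The point is just the mechanics: a tiny perturbation aligned with the sign of the input gradient is enough to push an example toward the decision boundary, which is what the attack papers exploit at scale.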

The notes below are loosely grouped into subsections.

Here is a nice compilation of person re-identification papers (in Mandarin; online translators do an OK job 🙂).

For more info, please dig into the presentations and workshops archive.

Videos from sessions are here.



Spark+AI gems (from the Summit)

Below are videos worth checking out from the recent Spark+AI Summit.

Building the Software 2.0 Stack – Andrej Karpathy (Tesla). Andrej’s talk at Spark+AI Summit. If I had time to watch only one, I’d pick this one.

A lot of our code is in the process of being transitioned from Software 1.0 (code written by humans) to Software 2.0 (code written by an optimization, commonly in the form of neural network training). In the new paradigm, much of the attention of a developer shifts from designing an explicit algorithm to curating large, varied, and clean datasets, which indirectly influence the code. I will provide a number of examples of this ongoing transition, cover the advantages and challenges of the new stack, and outline multiple opportunities for new tooling.
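
To make the 1.0-vs-2.0 contrast concrete, here is a toy sketch (my own illustration, not from the talk): the same spam-filter task written once as an explicit rule and once as behavior shaped by a curated dataset. The messages and labels are invented for the example.

```python
# Software 1.0: a human writes the decision logic explicitly.
def is_spam_v1(msg: str) -> bool:
    return "free money" in msg.lower() or msg.count("!") > 3

# Software 2.0: a human curates a dataset; an optimizer "writes" the logic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["FREE MONEY now!!!!", "lunch at noon?",
            "win a FREE prize", "see you at standup"]
labels = [1, 0, 1, 0]  # human effort moves here: labeling/curating data

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(messages), labels)

def is_spam_v2(msg: str) -> bool:
    return bool(clf.predict(vec.transform([msg]))[0])
```

In the 2.0 version, improving the system mostly means improving `messages` and `labels`, not editing `is_spam_v2`, which is exactly the shift of developer attention the abstract describes.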

Using AI to Build a Self-Driving Query Optimizer

NIPS 2017 — notes and thoughts

Last week Long Beach, CA hosted the annual NIPS (Neural Information Processing Systems) conference, with a record-breaking number of attendees (8000+). The conference is considered one of the biggest events in the ML/DNN research community.

Below are thoughts and notes on what was going on at NIPS. Hopefully these brief (and sometimes abrupt) statements will be intriguing enough to inspire your further research ;).

Key trends

    1. Deep learning everywhere – pervasive across the other topics listed below. Lots of vision/image processing applications. Mostly CNNs and variations thereof. Two new developments: Capsule Networks and WaveNet.
    2. Reinforcement Learning – strong comeback, with multiple sessions and papers on Deep RL and multi-armed bandits.
    3. Meta-Learning and One-Shot Learning are often mentioned in robotics and RL contexts.
    4. GANs – still popular, with several variations to speed up training/convergence and address the mode collapse problem. Variational Autoencoders are also popular.
    5. Bayesian NNs are an area of active research.
    6. Fairness in ML – a keynote and several papers on dealing with (and awareness of) bias in models, plus approaches to generating explanations.
    7. Explainable ML — lots of attention to it.
    8. Tricks and approaches to speed up SGD (see the sketch after this list).
    9. Graphical models are back! Deep learning meets probabilistic graphical modeling.
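
On point 8, as a quick refresher (my own sketch, not tied to any specific NIPS 2017 paper): the classic momentum trick keeps a running average of past gradients, which damps oscillations and accelerates SGD along consistently downhill directions. The toy objective and hyperparameters below are made up for illustration.

```python
import numpy as np

# Minimal sketch of SGD with momentum, one classic SGD speed-up.
def sgd_momentum(grad_fn, w, lr=0.1, beta=0.9, steps=100):
    v = np.zeros_like(w)      # velocity: running average of gradients
    for _ in range(steps):
        g = grad_fn(w)        # (stochastic) gradient at current weights
        v = beta * v + g      # accumulate momentum
        w = w - lr * v        # take the damped, accelerated step
    return w

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w_opt = sgd_momentum(lambda w: 2 * w, np.array([5.0, -3.0]))
print(w_opt)  # close to [0, 0]
```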


Day 5 & 6 at ICML. All done.

The last 2 days of the conference were workshops and actually had less rock-star content.

Overall, ICML this year was well organized (well, minus the pass holders that emit constant cowbell-like tinkling) and rich in content. I did not notice any breakthrough papers, though. Lots of RNNs, LSTMs, language/speech work, GANs, and Reinforcement Learning.

Toolset-wise it “feels” like mostly TensorFlow, Caffe, PyTorch; even Matlab was mentioned a few times.

 

Principled Approaches to Deep Learning

This track was about the theoretical understanding of DNN architectures.

Do GANs actually learn the distribution? I personally had higher expectations of this talk. The main point was that, yes, it is problematic to quantify the success of a GAN training algorithm, and that mode collapse is a problem. That’s pretty much it.
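
For context on why quantifying GAN success is hard: one empirical probe of mode collapse proposed in this line of work is a birthday-paradox-style test, i.e. if a generator truly covers N modes, near-duplicates within batches of roughly sqrt(N) samples should be rare. Below is a toy sketch of the idea (my own illustration, not the speaker’s code; the collapsed generator is made up).

```python
import numpy as np

# Toy birthday-paradox-style probe for mode collapse (illustrative only).
# If the generator truly covers many modes, a small batch should rarely
# contain near-duplicates; frequent duplicates suggest a tiny effective support.
def duplicate_rate(sample_fn, batch_size, n_trials=200, tol=1e-3):
    hits = 0
    for _ in range(n_trials):
        batch = sample_fn(batch_size)                       # (batch_size, dim)
        d = np.linalg.norm(batch[:, None] - batch[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)                         # ignore self-distances
        hits += (d.min() < tol)                             # any near-duplicate pair?
    return hits / n_trials

# A "collapsed" generator that only ever emits 10 distinct points.
modes = np.random.randn(10, 2)
collapsed = lambda k: modes[np.random.randint(10, size=k)]
print(duplicate_rate(collapsed, batch_size=20))  # high: support is tiny
```

A high duplicate rate at small batch sizes indicates that the generator’s effective support is far smaller than that of the training distribution.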
