[de:code] in Tokyo – looking forward

This year I will be presenting two sessions at the [de:code] conference, covering topics ranging from Meta-Learning and Object Detection @Edge to Hierarchical Attention Neural Networks. Thank you, Daiyu Hatakeyama-san, for warming up the audience 😉.


Do you have a sequence to classify?

Presenting https://www.microsoft.com/developerblog/2018/03/06/sequence-intent-classification/ at The Data Science Conference: loved all the questions from the audience and the ideas on how to apply our work!

One of my favorites is to use our malware-classification case study to analyze application logs 😉
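As a rough sketch of what that idea implies in practice (the event names and helper functions below are hypothetical, not from our case study): before any sequence model can classify logs, each session's event sequence is typically mapped to a fixed-length integer sequence.

```python
# Hypothetical sketch: turning application-log event sequences into
# fixed-length integer sequences, the usual preprocessing step before
# feeding them to a sequence classifier (e.g. an LSTM or CNN).

def build_vocab(sequences, pad_token="<PAD>", unk_token="<UNK>"):
    """Assign an integer id to every event seen in the training sequences."""
    vocab = {pad_token: 0, unk_token: 1}
    for seq in sequences:
        for event in seq:
            if event not in vocab:
                vocab[event] = len(vocab)
    return vocab

def encode(seq, vocab, max_len):
    """Map events to ids, truncating or right-padding to max_len."""
    ids = [vocab.get(e, vocab["<UNK>"]) for e in seq[:max_len]]
    return ids + [vocab["<PAD>"]] * (max_len - len(ids))

# Made-up sessions for illustration only.
logs = [
    ["login", "query", "query", "logout"],          # benign-looking session
    ["login", "escalate", "dump_table", "logout"],  # suspicious session
]
vocab = build_vocab(logs)
encoded = [encode(seq, vocab, max_len=6) for seq in logs]
```

Events never seen in training fall back to the `<UNK>` id, which is what lets the same pipeline handle logs from a live system.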

NIPS 2017 – notes and thoughts

Last week Long Beach, CA hosted the annual NIPS (Neural Information Processing Systems) conference, with a record-breaking number of attendees (8000+). This conference is considered one of the biggest events in the ML/DNN research community.

Below are thoughts and notes related to what was going on at NIPS. Hopefully those brief (and sometimes abrupt) statements will be intriguing enough to inspire your further research ;).

Key trends

    1. Deep learning everywhere – pervasive across the other topics listed below. Lots of vision/image processing applications. Mostly CNNs and variations thereof. Two new developments: Capsule Networks and WaveNet.
    2. Reinforcement Learning – strong comeback, with multiple sessions and papers on Deep RL and multi-armed bandits.
    3. Meta-Learning and One-Shot Learning are often mentioned in Robotics and RL contexts.
    4. GANs – still popular, with several variations to speed up training / convergence and address the mode collapse problem. Variational Auto-Encoders are also popular.
    5. Bayesian NNs are an area of active research.
    6. Fairness in ML – a keynote and several papers on awareness of, and dealing with, bias in models, plus approaches to generating explanations.
    7. Explainable ML – getting lots of attention.
    8. Tricks and approaches to speed up SGD.
    9. Graphical models are back! Deep learning meets probabilistic graphical modeling.
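The multi-armed bandits mentioned in trend 2 can be illustrated with a minimal epsilon-greedy sketch (this is my own toy example, not from any NIPS paper; the reward probabilities are made up):

```python
import random

# Minimal epsilon-greedy multi-armed bandit: with probability epsilon
# pull a random arm (explore), otherwise pull the arm with the best
# estimated mean reward so far (exploit).

def run_bandit(true_probs, steps=5000, epsilon=0.1, seed=0):
    """Play a Bernoulli bandit, keeping a running mean reward per arm."""
    rng = random.Random(seed)
    counts = [0] * len(true_probs)    # pulls per arm
    values = [0.0] * len(true_probs)  # estimated mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_probs))          # explore
        else:
            arm = max(range(len(true_probs)), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        # incremental mean update: new_mean = old_mean + (x - old_mean) / n
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = run_bandit([0.2, 0.5, 0.8])
```

After enough steps the estimate for the best arm converges toward its true reward probability, and exploitation concentrates the pulls on it.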


Days 5 & 6 at ICML. All done.

The last two days of the conference were workshops, which actually had less rock-star content.

Overall, ICML this year was well organized (well, minus the pass holders that emit constant cowbell-like tinkling) and rich in content. I did not notice any breakthrough papers, though. Lots of RNNs, LSTMs, language/speech-related work, GANs, and Reinforcement Learning.

Toolset-wise it “feels” like mostly TensorFlow, Caffe, and PyTorch; even Matlab was mentioned a few times.

 

Principled Approaches to Deep Learning

This track was about theoretical understanding of DNN architectures.

Do GANs actually learn the distribution? I personally had higher expectations of this talk. The main point was that yes, it is problematic to quantify the success of a GAN training algorithm, and that mode collapse is a problem. That's pretty much it.


Day 4 at ICML 2017 – more Adversarial NNs

The morning talk was about Deep Reinforcement Learning in complex environments, by Raia Hadsell from DeepMind. Overall, there were lots of great talks at the conference from DeepMind and Google Brain. The talk was generously sprinkled with newly published papers by DeepMind researchers in the Reinforcement Learning/gaming space. Angry Birds is not yet solved, just FYI if somebody is up for a challenge.

The main algorithms/approaches covered in the talk were hierarchical reinforcement learning, continual learning, continuous control, multimodal agents, and auxiliary tasks. See the quite entertaining and nicely annotated demos here.

Deep learning & hardware

Main theme: let's use CPUs effectively and make NN computation efficient on mobile devices.
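One standard approach behind that theme is quantization (my own toy illustration here, not code from the talks): store weights as small integers and recover approximate floats with an affine scale.

```python
# Toy sketch of linear 8-bit quantization of a weight vector: map floats
# to integers in [0, 255] with an affine transform, then reconstruct.
# Smaller integer weights mean less memory traffic and cheaper arithmetic
# on mobile CPUs, at the cost of a bounded rounding error.

def quantize(weights, num_bits=8):
    """Return (quantized ints, scale, offset) for a list of floats."""
    lo, hi = min(weights), max(weights)
    # Guard against all-equal weights, where the range (and scale) is zero.
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats from the quantized integers."""
    return [v * scale + lo for v in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
```

The reconstruction error per weight is bounded by the scale (the width of one quantization bucket), which is the basic trade-off these talks were navigating.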


ICML and my notes on day 1

The Thirty-fourth International Conference on Machine Learning is being held in Sydney this year. The cumulative brain power of this event is overwhelming. I'm taking notes, so below are a few things that I found interesting.

 

Among the most popular areas of interest at the conference this year are:

  • Neural Networks and Deep Learning
  • Optimization (Continuous)
  • Online Learning
  • Generative Models
  • Graphical Models

Machine Learning for Autonomous Vehicles

There is quite a bit of interest in this topic at the conference: a 2-hour tutorial on day 1 and a full-day workshop later in the conference. The tutorial was presented by the Uber Advanced Technologies Group. It's quite impressive that in slightly more than a year the team went from building their first self-driving vehicle (Dec '15) to self-driving cars picking up Uber passengers in Pittsburgh (Feb '17).
