
NeurIPS 2020 has gone virtual this year due to the ongoing global pandemic, allowing me to attend another conference on the cheap and without dealing with the business of airlines, travel, and getting out of my pyjamas. NeurIPS is one of the largest conferences on machine learning and artificial intelligence, with topics ranging from adversarial attacks to robotics to fairness and bias in machine learning. It takes a much broader view of machine learning than the European Conference on Computer Vision (ECCV), which I posted about previously. The papers at NeurIPS tend to be more theoretical and mathematical, and the program also covers topics like natural language processing.

I can’t say that the top papers at NeurIPS this year had the most accessible topics, with the exception of the one about GPT-3, Language Models are Few-Shot Learners by Tom Brown et al. GPT-3 is a language model with 175 billion parameters trained on a huge text dataset by simply having the model predict the next word of a passage given only the words that came before it, over and over. The key point of the presentation is that GPT-3 has become impressively good at producing human-like sentences and can even take on new tasks such as programming or arithmetic given only a handful of examples, or sometimes none at all. It still has problems, however, such as bias and weak reading comprehension.
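To make "few-shot" a bit more concrete, here is a toy sketch of what a prompt for an arithmetic task might look like: the task is conveyed entirely through a handful of worked examples in the prompt text, with no gradient updates. The examples and format below are hypothetical and not taken from the paper, and I leave out the actual model call.

```python
# Toy illustration of few-shot prompting: the task is "taught" purely through
# the prompt, with no fine-tuning. Examples and format are made up.

few_shot_examples = [
    ("12 + 7", "19"),
    ("33 + 48", "81"),
    ("5 + 66", "71"),
]

query = "24 + 39"

# Build a single prompt containing the worked examples followed by the query.
prompt = "\n".join(f"Q: {q}\nA: {a}" for q, a in few_shot_examples)
prompt += f"\nQ: {query}\nA:"

print(prompt)  # A language model would be asked to continue this text, ideally with "63".
```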

Of the sponsored presentations I saw, the ones from Neural Magic and Zalando were the most interesting. Neural Magic’s presentation was about pruning the weights of neural networks and quantizing them into discrete values so that the network becomes smaller and faster while maintaining accuracy. They plan to release an open source user interface called Sparsify next year to simplify pruning and quantization.
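To give a rough sense of those two ideas, here is a minimal sketch of magnitude pruning and uniform quantization applied to a weight array. This assumes nothing about Neural Magic's actual implementation or Sparsify; it is just the textbook version of each step.

```python
import numpy as np

def prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Magnitude pruning: zero out the `sparsity` fraction of smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights: np.ndarray, num_levels: int = 256) -> np.ndarray:
    """Symmetric uniform quantization: snap weights to a small set of discrete levels around zero."""
    scale = np.abs(weights).max() / (num_levels // 2)
    return np.round(weights / scale) * scale

weights = np.random.randn(1000)
compressed = quantize(prune(weights, sparsity=0.9))
print(f"non-zero weights: {np.count_nonzero(compressed)} / {weights.size}")
```

In practice the pruned network would also be stored in a sparse format and retrained briefly to recover accuracy, which is where tooling like Sparsify comes in.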

Zalando is an online clothing company that presented an algorithm using a generative adversarial network (GAN) to combine the styles of different articles of clothing and then match the result to similar items in their large catalogue. Curiously, they’ve found combinations that don’t closely match anything in their catalogue, perhaps opening new avenues for creativity. They also presented a generative model that could automatically dress models in articles of clothing chosen by the user. To me, it’s quite amazing to see GANs having such practical applications only a few years after their invention.
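As a rough illustration of the matching step only, the sketch below blends two made-up garment embeddings and retrieves the nearest catalogue items by cosine similarity. The embeddings, the averaging rule, and the catalogue are stand-ins for illustration; Zalando's actual GAN and retrieval pipeline were not described at this level of detail.

```python
import numpy as np

rng = np.random.default_rng(0)
catalogue = rng.normal(size=(10_000, 128))    # hypothetical catalogue item embeddings
style_a, style_b = rng.normal(size=(2, 128))  # embeddings of two source garments

blended = (style_a + style_b) / 2             # naive stand-in for the GAN's style mix

# Cosine similarity between the blended embedding and every catalogue item.
sims = catalogue @ blended / (
    np.linalg.norm(catalogue, axis=1) * np.linalg.norm(blended)
)
top5 = np.argsort(sims)[-5:][::-1]
print("closest catalogue items:", top5, "best similarity:", round(sims[top5[0]], 3))
```

A combination that lands far from every catalogue item under this kind of similarity measure is exactly the "nothing like it in the catalogue" case the presenters found intriguing.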

My favourite presentation was Online Learning for Adaptive Robotic Systems by Byron Boots. His team wanted to create a self-driving car that could go around a dirt track. The problem was that the car is always bouncing or sliding around on the track, making accurate predictions really difficult. So they proposed using online learning to act like a control system, aggressively reoptimizing on the track to control the motors. They also talked about using imitation learning to train the autonomous car to drive with a far cheaper sensor setup.
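As a loose illustration of the online-learning idea, the sketch below keeps re-fitting a simple linear dynamics model from streaming transitions, the way a controller would want an up-to-date model of a car that is constantly bouncing and sliding. This is a generic online gradient-descent loop under assumed toy dynamics, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4                       # hypothetical state dimension
A_hat = np.zeros((dim, dim))  # current estimate of the dynamics x_next ~ A @ x
A_true = 0.9 * np.eye(dim)    # the real (unknown, noisy) dynamics stand in for the car
lr = 0.01

x = rng.normal(size=dim)
for t in range(1000):
    x_next = A_true @ x + 0.05 * rng.normal(size=dim)

    # Online update: one gradient step on the squared prediction error for this transition.
    error = A_hat @ x - x_next
    A_hat -= lr * np.outer(error, x)

    x = x_next  # a controller would plan its next action using the freshly updated A_hat

print("remaining model error:", np.linalg.norm(A_hat - A_true))
```

The point is that the model is never trained once and frozen; it keeps adapting as the terrain surprises it, which is what makes the approach feel like a control system.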

Overall, I think the NeurIPS team did an admirable job of organizing this conference considering the situation. It was well worth the time I spent attending: I learned a lot about everyone’s research, and it showed me that this is an exciting time to practice machine learning.