The moment you put a model in production, it starts degrading. Building machine learning models that perform well in the wild at production time is still an open and challenging problem. Modern machine learning models are known to be brittle: even when they achieve impressive performance on the evaluation set, their performance can degrade significantly when they are exposed to new examples with differences in vocabulary and writing style. This discrepancy between the examples seen during training and those seen at inference can cause a drop in performance for ML models in production, which can pose unaffordable risks for…
‘Every experiment is sacred
Every experiment is great
If an experiment is wasted
God gets quite irate’ ~Sacred
Here I come clean: for a long time I have been a caveman. I have been using spreadsheets to log my ML experiments. It all started well, I was happy, and then a deadline came and all of a sudden it went messy, very messy... I trusted my self-discipline to keep things consistent, and it failed me. I am a waste of GPUs.
But don’t cast your stones yet; you have all done it. I saw you when I tried to reproduce your experiments. I…
“Code is available upon request”, “Authors promise ..”, a repo that only contains model.py .. we all know, no need to explain further.
On the first day of #EMNLP2018, Joel Grus, Matt and Mark from the Allen Institute for AI presented what was arguably the tutorial that attracted the most interest of all.
I like this type of tutorial, which discusses emerging issues in the research community, much more than a catwalk of SOTA models for a specific task. …
Machine Learning and NLP are fast-paced fields, particularly in recent years. Even for researchers with easy access to resources, it can prove challenging to keep up to date with the latest developments. It becomes even harder for researchers with fewer resources at their disposal. This unfortunately creates a fertile environment for biases, leaving some groups more and more underrepresented.
Diversity in AI has been a matter of concern and discussion in the scientific community. It has led to various efforts to close this gap, such as: Black in…
Copying, Pointing and Placeholders are terms that have been popping up in NLP papers since 2015, first in Neural Machine Translation (NMT) and Abstractive Summarization. Different adaptations of the same concepts have also appeared in related work on Question Generation and on summary generation from structured data.
Unlike the attention mechanism, these terms have not received many blog posts trying to explain them further. So this is the first of a series of blog posts aiming to: first, explain the original problem historically, and then group each family of techniques, with its related work, into its own post.
Neural models for text…
Research Scientist at NAVER LABS Europe. Interested in NLP and Machine Learning.