BERT Inner Workings
I created this notebook to better understand the inner workings of BERT. I followed a lot of tutorials to try to understand the architecture, but I was never able to really grasp what was happening under the hood. For me it always helps to see the actual code instead of abstract diagrams that often don’t match the actual implementation. If you’re like … [Read more...] about BERT Inner Workings
GPT2 For Text Classification Using Hugging Face Transformers
This notebook shows how to fine-tune the GPT2 model for text classification with the Hugging Face transformers library on a custom dataset. Hugging Face was very nice to us and included all the functionality needed for GPT2 to be used in classification tasks. Thank you Hugging Face! I wasn’t able to find much information on how to use GPT2 for classification, so I … [Read more...] about GPT2 For Text Classification Using Hugging Face Transformers
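As a rough sketch of the approach (not the notebook’s exact code), the transformers library exposes GPT2ForSequenceClassification; the main quirk is that GPT2 has no padding token by default, so one has to be assigned:

# Hedged sketch: the model name, label count and example text are illustrative.
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("The movie was great!", return_tensors="pt", padding=True, truncation=True)
logits = model(**inputs).logits  # one score per class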
Extractive Text Summarization Using Contextual Embeddings
Text summarization is the process of generating a compact and meaningful synopsis from a large volume of text. Sources for such text include news articles, blogs, social media posts, all kinds of documentation, and many more. If you are new to NLP and want to read more about text summarization, this article will help you understand the basic and advanced concepts. The … [Read more...] about Extractive Text Summarization Using Contextual Embeddings
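One common embedding-based recipe (a sketch under my own assumptions, not necessarily the article’s exact method) is to embed every sentence with a contextual encoder, score each one against the embedding of the whole document, and keep the top-scoring sentences in their original order:

# Illustrative extractive summarization via contextual embeddings;
# assumes the sentence-transformers package, the model name is an example.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
sentences = ["First sentence.", "Second sentence.", "Third sentence."]

doc_emb = model.encode(" ".join(sentences), convert_to_tensor=True)
sent_embs = model.encode(sentences, convert_to_tensor=True)

# Rank sentences by cosine similarity to the whole document, keep the top 2.
scores = util.cos_sim(sent_embs, doc_emb).squeeze(1)
top = scores.topk(2).indices.sort().values  # restore original order
summary = " ".join(sentences[int(i)] for i in top)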
Fine-tune Transformers in PyTorch Using Hugging Face Transformers
This notebook is designed to take a pretrained transformers model and fine-tune it on a classification task. The focus of this tutorial is the code itself and how to adjust it to your needs. The notebook uses the AutoClasses functionality from the Hugging Face transformers library, which can infer a model’s configuration, tokenizer and … [Read more...] about Fine-tune Transformers in PyTorch Using Hugging Face Transformers
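To illustrate what the AutoClasses do (a minimal sketch; the checkpoint name and label count are just examples), the architecture, configuration and tokenizer are all inferred from a single checkpoint string:

# Minimal AutoClasses sketch: swap the checkpoint name and the matching
# model, config and tokenizer classes are resolved automatically.
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-cased"  # any Hugging Face model id works here
config = AutoConfig.from_pretrained(checkpoint, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, config=config)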
Pretrain Transformers Models in PyTorch Using Hugging Face Transformers
This notebook is used to pretrain transformers models with Hugging Face on your own custom dataset. What do I mean by pretrain transformers? The definition of pretraining is to train in advance, and that is exactly what I mean here: train a transformer model so that it can later be used as a pretrained model and fine-tuned on a specific … [Read more...] about Pretrain Transformers Models in PyTorch Using Hugging Face Transformers
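For a sense of what that looks like in code, here is a rough sketch of pretraining with a masked-language-modeling objective, using the older-style transformers dataset helper; the file path, checkpoint and hyperparameters are illustrative, not the notebook’s:

# Hedged sketch: train.txt and all hyperparameters below are placeholders.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

# One training example per line of the text file.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="train.txt", block_size=128)
# Randomly mask 15% of tokens so the model learns to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pretrained", num_train_epochs=1),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()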
From Text to Knowledge: The Information Extraction Pipeline
I am thrilled to present the latest project I have been working on. In this blog post, I will present my implementation of an information extraction data pipeline, following my passion for combining natural language processing and knowledge graphs. Later on, I will also explain why I see the combination of NLP and graphs as one of the paths to explainable AI. If this in-depth … [Read more...] about From Text to Knowledge: The Information Extraction Pipeline
Building a Complete AI Based Search Engine with Elasticsearch, Kubeflow and Katib
Building search systems is hard. Preparing them to work with machine learning is really hard. Developing a complete search engine framework integrated with AI is really, really hard. So let’s make one. ✌️ In this post, we’ll build a search engine from scratch and discuss how to further optimize its results by adding a machine learning layer using Kubeflow and Katib. This … [Read more...] about Building a Complete AI Based Search Engine with Elasticsearch, Kubeflow and Katib
Linguistics Wisdom of NLP Models
This article is authored by Keyur Faldu and Dr. Amit Sheth. It elaborates on a niche aspect of the broader cover story “Rise of Modern NLP and the Need of Interpretability!” At Embibe, we focus on developing interpretable and explainable Deep Learning systems, and we survey the current state-of-the-art techniques to answer … [Read more...] about Linguistics Wisdom of NLP Models
Discovering the Encoded Linguistic Knowledge in NLP Models
This article is authored by Keyur Faldu and Dr. Amit Sheth. It elaborates on a niche aspect of the broader cover story “Rise of Modern NLP and the Need of Interpretability!” At Embibe, we seek answers to these open questions as we build the NLP platform to solve numerous problems for academic content. Modern NLP models (BERT, GPT, … [Read more...] about Discovering the Encoded Linguistic Knowledge in NLP Models
NeurIPS 2020 Papers: Takeaways for a Deep Learning Engineer
Advances in Deep Learning research are of great utility for a Deep Learning engineer working on real-world problems, since most Deep Learning research is empirical, with new techniques and theories validated on datasets that closely resemble real-world datasets and tasks (ImageNet pre-trained weights are still useful!). But churning through a vast amount of … [Read more...] about NeurIPS 2020 Papers: Takeaways for a Deep Learning Engineer