OpenAI, a nonprofit AI research company, recently released a new language model called GPT-2, which is capable of generating realistic text in a wide range of styles. In fact, the company stated that the model is so good at automatic text generation that it could be used for nefarious purposes; it therefore declined to release the trained model. The dangerous-to-release …
Content Generation
How to Create a Fake Video of a Real Person
Recent advances in artificial intelligence have made it possible to create convincing fake videos, or deepfakes, of real people. While the ethical implications and creative capabilities of this technology are only beginning to be explored, there are growing concerns that it could be used maliciously to ruin reputations and cause extensive damage. Can …
An In-Depth Guide To Generative Adversarial Networks (GANs)
Generative Adversarial Networks are a powerful class of neural networks with remarkable applications. They essentially consist of a system of two neural networks — the Generator and the Discriminator — dueling each other. Given a set of target samples, the Generator tries to produce samples that …
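The "dueling" the excerpt describes comes down to two opposing loss functions. As a minimal numpy sketch (not taken from the linked guide), here is the standard GAN objective evaluated on the Discriminator's outputs: the Discriminator is rewarded for scoring real samples near 1 and generated samples near 0, while the Generator's non-saturating loss pushes those generated samples toward 1.

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator objective: maximize log D(x) + log(1 - D(G(z))),
    # i.e. minimize the negated mean of those two terms.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    # Non-saturating Generator objective: minimize -log D(G(z)),
    # which is small only when the Discriminator is fooled.
    return -np.mean(np.log(d_fake))

# Discriminator scores on a toy batch: fairly confident on real data,
# fairly confident the fakes are fake -> low D loss, high G loss.
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.2, 0.1])
print(d_loss(d_real, d_fake))
print(g_loss(d_fake))
```

Training alternates gradient steps on these two losses; the equilibrium the excerpt hints at is reached when the Generator's samples are indistinguishable and the Discriminator can do no better than guessing.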
Intuitively Understanding Variational Autoencoders
In contrast to the more standard uses of neural networks as regressors or classifiers, Variational Autoencoders (VAEs) are powerful generative models, with applications as diverse as generating fake human faces and producing purely synthetic music. This post will explore what a VAE is, the intuition behind why it works so well, and its uses as a powerful …
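What makes the VAE generative rather than a plain autoencoder is that the encoder outputs a distribution (a mean and a log-variance) instead of a point, sampled via the reparameterization trick and regularized toward a standard normal prior. A minimal numpy sketch of those two ingredients (an illustration, not code from the linked post):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so the sampling step stays differentiable in mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL divergence between N(mu, sigma^2) and the N(0, I)
    # prior -- the regularizer that shapes the VAE's latent space.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

mu = np.array([0.0, 1.0])
log_var = np.zeros(2)  # sigma = 1 in both latent dimensions
z = reparameterize(mu, log_var)
```

Generation then amounts to drawing z from the prior and running only the decoder; the KL term is what makes the prior a sensible place to sample from.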
Mixture of Variational Autoencoders – a Fusion Between MoE and VAE
The Variational Autoencoder (VAE) is a paragon of neural networks that try to learn the shape of the input space. Once trained, the model can be used to generate new samples from the input space. If we have labels for our input data, it is also possible to condition the generation process on the label. In the MNIST case, this means we can specify …
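Conditioning the generation process on a label is commonly done by feeding the label to the decoder alongside the latent code. The sketch below uses a hypothetical linear "decoder" (the names `conditional_decode`, `W`, and `b` are illustrative, not from the linked post) to show the mechanics: a one-hot label concatenated with the latent code steers the same decoder toward a chosen digit.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(label, num_classes=10):
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def conditional_decode(z, label, W, b):
    # Toy stand-in for a decoder network: the one-hot label is
    # concatenated with the latent code before decoding.
    return W @ np.concatenate([z, one_hot(label)]) + b

latent_dim, num_classes, out_dim = 2, 10, 4
W = rng.normal(size=(out_dim, latent_dim + num_classes))
b = np.zeros(out_dim)
z = rng.normal(size=latent_dim)

# The same latent code decoded under two different labels yields
# two different generated samples.
sample_3 = conditional_decode(z, 3, W, b)
sample_7 = conditional_decode(z, 7, W, b)
```

In a real conditional VAE the encoder also receives the label, so the latent code is free to capture style while the label supplies the class.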
Variational Autoencoders Explained in Detail
In the previous post of this series I introduced the Variational Autoencoder (VAE) framework and explained the theory behind it. In this post I'll explain the VAE in more detail; in other words, I'll provide some code 🙂 After reading this post, you'll understand the technical details needed to implement a VAE. As a bonus, I'll show you how by …
Variational Autoencoders Explained
Ever wondered how the Variational Autoencoder (VAE) model works? Do you want to know how a VAE is able to generate new examples similar to the dataset it was trained on? After reading this post, you'll have a theoretical understanding of the inner workings of the VAE, and you'll be able to implement one yourself. In a future post I'll provide you with working code …
OpenAI GPT-2: Understanding Language Generation through Visualization
In the eyes of most NLP researchers, 2018 was a year of great technological advancement, with new pre-trained NLP models shattering records on tasks ranging from sentiment analysis to …
Novel Methods For Text Generation Using Adversarial Learning & Autoencoders
Just two years ago, text generation models were so unreliable that you needed to generate hundreds of samples in the hope of finding even one plausible sentence. Nowadays, OpenAI's pre-trained language model can generate relatively coherent news articles given only two sentences of context. Other approaches like Generative Adversarial Networks (GANs) and Variational …
5 New Generative Adversarial Network (GAN) Architectures For Image Synthesis
AI image synthesis has made impressive progress since Generative Adversarial Networks (GANs) were introduced in 2014. GANs were originally only capable of generating small, blurry, black-and-white pictures, but now we can generate high-resolution, realistic, and colorful pictures that are hardly distinguishable from real photographs. Here we have summarized 5 …