Deep learning neural network models have been successfully applied to natural language processing and are now radically changing how we interact with machines (Siri, Amazon Alexa, Google Home, Skype Translator, Google Translate, or the Google search engine). These models are able to infer a continuous representation for words and sentences and to generalize to new tasks with far less training data than earlier approaches. The seminar will introduce the main deep learning models used in natural language processing, allowing attendees to gain a hands-on understanding of these models and to implement them in Keras.
This course is a 20-hour introduction to the main deep learning models used in text processing, covering the latest developments, including Transformers and pre-trained (multilingual) language models such as GPT, T5, and BERT, and their use with fine-tuning and prompting. It combines theoretical lectures with practical hands-on classes. Attendees will be able to understand the models and implement them in Keras.
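To give a flavor of the hands-on part, the sketch below shows the "continuous representation for words" idea in Keras: an Embedding layer maps tokens to dense vectors, which are then pooled and classified. This is a minimal illustrative example, not actual course material; the toy dataset, vocabulary size, and embedding dimension are arbitrary assumptions.

```python
import numpy as np
import tensorflow as tf

# Toy sentiment data (illustrative only)
texts = np.array(["good movie", "bad movie", "great film", "terrible film"])
labels = np.array([1, 0, 1, 0])

# Map raw strings to integer token ids
vectorizer = tf.keras.layers.TextVectorization(
    max_tokens=100, output_sequence_length=4
)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,                                    # strings -> token ids
    tf.keras.layers.Embedding(100, 8),             # ids -> 8-d continuous word vectors
    tf.keras.layers.GlobalAveragePooling1D(),      # average word vectors into one
    tf.keras.layers.Dense(1, activation="sigmoid") # binary sentiment score
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(texts, labels, epochs=5, verbose=0)

probs = model.predict(texts, verbose=0)
print(probs.shape)  # one probability per input sentence
```

The learned embedding matrix is what gives the model its generalization power: words that behave similarly in training end up with nearby vectors, which is the starting point for the Transformer-based models covered later in the course.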