GenAI in Education, Science, and Society
Content
- Agents for information retrieval; classical term-weighting and association techniques (e.g. TF-IDF)
- Representation learning for sequential structures; learned word embeddings: word2vec (CBOW, skip-gram with negative sampling)
- Natural language processing with large language models (LLMs): recurrent neural networks (with so-called LSTMs or GRUs as base units), transformer networks (e.g. BERT, GPT)
- Basics of training generative pretrained transformers (GPTs); GPT generation parameters: temperature and top-p sampling
- Retrieval-augmented generation; embedding techniques for relational data (knowledge graphs), integration of knowledge graphs into language models, generation of knowledge graphs from texts
- Fine-tuning of pretrained generative models for special tasks; model distillation
- Prompt engineering: verbalization of context and task descriptions (including context-specific GPTs), in-context learning (zero-shot vs. few-shot prompting)
- Software development with LLMs (code generation)
- Image processing with convolutional networks and transformer networks: AlexNet, ResNet, vision transformers (ViT)
- Vision and language: large multimodal models (ViLBERT), contrastive pre-training (CLIP)
- Generation of images from textual descriptions (DALL-E)
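To make the retrieval topic above concrete, here is a minimal sketch of TF-IDF weighting on a toy tokenized corpus. The function name `tf_idf` and the example documents are illustrative, not part of the course material; real coursework would typically use a library implementation.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Toy TF-IDF: tf = count / doc length, idf = log(N / df).
    `docs` is a list of tokenized documents (lists of strings)."""
    n = len(docs)
    df = Counter()                     # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] += 1
    weights = []
    for doc in docs:
        counts = Counter(doc)
        length = len(doc)
        weights.append({t: (c / length) * math.log(n / df[t])
                        for t, c in counts.items()})
    return weights

docs = [["genai", "in", "education"],
        ["genai", "in", "science"],
        ["ethics", "in", "society"]]
w = tf_idf(docs)
# "in" occurs in every document, so its idf (and hence its weight) is 0;
# rarer terms like "education" receive higher weights
```

Note how the scheme downweights terms that appear everywhere and highlights terms that discriminate between documents, which is exactly what makes it useful for retrieval agents.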
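The skip-gram-with-negative-sampling topic can be sketched in a few lines. This is a deliberately simplified toy (uniform negative sampling, no subsampling, plain SGD); the function name `train_sgns` and all hyperparameters are illustrative assumptions, not the course's reference implementation.

```python
import math
import random

def train_sgns(corpus, dim=8, window=1, k=2, lr=0.05, epochs=200, seed=0):
    """Minimal skip-gram with negative sampling (SGNS) sketch.
    Pushes observed (word, context) pairs together and random
    negative pairs apart via logistic loss."""
    rng = random.Random(seed)
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    W_in = [[rng.uniform(-.5, .5) / dim for _ in range(dim)] for _ in range(V)]
    W_out = [[0.0] * dim for _ in range(V)]

    def sig(x):
        return 1.0 / (1.0 + math.exp(-x))

    for _ in range(epochs):
        for sent in corpus:
            for pos, w in enumerate(sent):
                for c in sent[max(0, pos - window): pos + window + 1]:
                    if c == w:
                        continue
                    # one positive pair plus k uniform negative samples
                    pairs = [(idx[c], 1.0)] + \
                            [(rng.randrange(V), 0.0) for _ in range(k)]
                    wi = W_in[idx[w]]
                    for ci, label in pairs:
                        co = W_out[ci]
                        dot = sum(a * b for a, b in zip(wi, co))
                        g = lr * (label - sig(dot))
                        for d in range(dim):
                            wi[d], co[d] = wi[d] + g * co[d], co[d] + g * wi[d]
    return W_in, idx

corpus = [["the", "cat", "runs"], ["the", "dog", "runs"]]
W_in, idx = train_sgns(corpus)
```

Because "cat" and "dog" occur in identical contexts here, their input vectors are driven toward similar directions, which is the core intuition behind distributional word embeddings.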
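The GPT generation parameters listed above (temperature and top-p sampling) can be illustrated without any model: given a vector of logits, the sketch below applies temperature scaling and nucleus (top-p) filtering before sampling. The function `sample` and its signature are this sketch's own; they do not mirror any particular library API.

```python
import math
import random

def sample(logits, temperature=1.0, top_p=1.0, rng=random):
    """Sample an index from `logits` with temperature and top-p (nucleus)
    filtering. temperature < 1 sharpens the distribution, > 1 flattens it;
    top-p keeps only the smallest high-probability set of tokens."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # numerically stable softmax
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # keep tokens in descending probability until cumulative mass >= top_p
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # renormalize over the kept set and draw one token
    total = sum(probs[i] for i in kept)
    r = rng.random() * total
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With a very small `top_p` or a very low `temperature`, sampling degenerates to greedy argmax decoding; large values of either make the output more diverse.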