Machine Learning for Speech and Audio Processing
Lecturer: Prof. Dr.-Ing. Peter Jax
Contact: Lars Thieling, Maximilian Kentgens
Type: Master lecture
Credits: 4
Lecture in RWTHonline
Exercise in RWTHonline
Learning room RWTHmoodle
(Registration via RWTHonline)
Course language: English
Material:
Lecture slides are sold in the first lecture and are also available from Irina Ronkartz. Exercise problems will be published in RWTHmoodle.
Exam
The exam is held orally on 24.02., 09.03., and 23.03. Dates are arranged individually; please contact Ms Sedgwick, sedgwick@iks.rwth-aachen.de.
Resources: You are allowed to bring one hand-written DIN A4 formula sheet (front and back). Any other written material (e.g., lecture notes, exercise notes) is not allowed. A non-programmable calculator is allowed.
Please note: The exam takes place digitally via Zoom at our institute. Please have your student ID (BlueCard) ready.
The new lecture "Machine Learning for Speech and Audio Processing" (MLSAP) is aimed especially at students of the Master's program "Electrical Engineering, Information Technology and Computer Engineering". Since the summer term 2019, the course has been anchored in the elective module catalogues of the majors "Communications Engineering" (COMM), "Computer Engineering" (COMP), and "Systems and Automation" (SYAT).
Content
In this one-term lecture, the fundamental methods of machine learning are presented, together with applications to problems in speech and audio signal processing:
- Fundamentals of Classification and Estimation
- Basic Problems of Classification
- Feature Extraction Techniques
- Basic Classification Schemes
- Probabilistic Models
- Stochastic Processes and Models
- Gaussian Mixture Models (GMMs)
- Hidden Markov Models (HMMs)
- Training Methods
- Bayesian Probability Theory: Classification and Estimation
- Particle Filter
- Non-Negative Matrix Factorization (NMF)
- Dictionary-based concept
- Neural Network and Deep Learning
- Feed-Forward Neural Networks
- Fundamental Applications
- Learning Strategies: Supervised vs Unsupervised vs Reinforcement Learning
- Training of Synaptic Weights: Backpropagation and Stochastic Gradient Descent
- Behavior of Learning and the “Magic” of Setting Hyperparameters
- Generative Networks as a Complement to Directed Graphs
- From “Shallow” to “Deep”: Trade Comprehensibility for Performance
- Specific Network Architectures
- Applications in Signal Processing
- Interpretations and Realizations
Exercises are offered to deepen the understanding on the basis of practical examples.
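As a flavour of the kind of practical example the exercises work with, here is a minimal, illustrative sketch (not official course material) of one listed topic, non-negative matrix factorization (NMF), using the classic Lee–Seung multiplicative update rules; the data matrix and rank are arbitrary placeholders:

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-10):
    """Factorize V ≈ W @ H with non-negative W, H.

    Multiplicative updates (Lee & Seung) for the Frobenius-norm cost;
    eps guards against division by zero.
    """
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Each update keeps the factors non-negative by construction.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy non-negative "spectrogram-like" matrix (placeholder data).
V = np.abs(np.random.default_rng(1).random((8, 6)))
W, H = nmf(V, rank=3)
err = np.linalg.norm(V - W @ H)
```

In audio applications, V would typically be a magnitude spectrogram, with the columns of W acting as spectral dictionary atoms and the rows of H as their activations over time.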
Summer term 2020
Participants of the evaluation (lecture/exercise): 16/10
Lecture:
Global grade: 1.4
Concept of the lecture: 1.4
Instruction and behaviour: 1.4
Exercise:
Global grade: 1.4
Concept of the exercise: 1.5
Instruction and behaviour: 1.4