Fine-tuning German LLMs with Model Merging and DPO for Improving Customer Support
by Daniel Hallmann
Let’s cover Model Merging and Direct Preference Optimization (DPO), two state-of-the-art approaches that can be combined to level up an LLM’s language performance.