Evaluation of Imitation Learning with Reinforcement Learning-Based Fine-Tuning for Different Control Tasks

Vyhodnocení kombinace učení imitací a posilovaného učení pro různé úlohy řízení


Publisher

České vysoké učení technické v Praze
Czech Technical University in Prague


Abstract

This thesis evaluates the performance of different learning-based control methods on a range of control tasks. It focuses on comparing techniques from Imitation Learning (IL) and Reinforcement Learning (RL) to understand their strengths and limitations. Based on these insights, a novel hybrid approach is proposed that combines the rapid learning capabilities of IL with the adaptability and robustness of RL. IL is beneficial because it enables an agent to learn quickly from expert demonstrations; however, it can struggle in situations that differ from the training data. In contrast, RL allows an agent to learn through trial and error, often resulting in better long-term outcomes, but it tends to be slower and less sample-efficient, especially on complex tasks. The proposed hybrid approach seeks to leverage the advantages of both methods and thereby address these challenges. A comprehensive evaluation of IL and RL algorithms is conducted to analyze their learning efficiency, task performance, and practical applicability to high-dimensional, continuous control tasks. Results demonstrate that the hybrid approach consistently outperforms standalone IL and RL, achieving a favorable balance between learning speed and task accuracy. This work underscores the potential of hybrid learning methods to tackle challenging control problems more efficiently and effectively.
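The two-stage pipeline the abstract describes — imitation pretraining followed by reinforcement-learning fine-tuning — can be illustrated with a deliberately tiny sketch. Everything below (the linear policy, the one-dimensional toy task, the Gaussian REINFORCE update, and all names) is an illustrative assumption, not the method actually used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: Imitation Learning (behavior cloning) ---
# Expert demonstrations: states paired with expert actions a* = 2.0 * s.
states = rng.uniform(-1.0, 1.0, size=(200, 1))
expert_actions = 2.0 * states

# Linear policy a = w * s, fitted to the demonstrations by least squares.
w_bc, *_ = np.linalg.lstsq(states, expert_actions, rcond=None)
w = float(w_bc[0, 0])  # cloned weight, ~2.0

# --- Stage 2: RL fine-tuning (REINFORCE on a Gaussian policy) ---
# The deployed task differs from the demonstrations: the optimal
# action is now a = 2.5 * s, so a pure-imitation policy is biased.
def reward(s, a):
    return -(a - 2.5 * s) ** 2

sigma, lr = 0.1, 0.05
for _ in range(3000):
    s = rng.uniform(-1.0, 1.0)
    a = w * s + sigma * rng.standard_normal()  # sample from the policy
    r = reward(s, a)
    # Score-function gradient: d/dw log N(a | w*s, sigma^2) = (a - w*s)*s/sigma^2
    w += lr * r * (a - w * s) * s / sigma**2

# After fine-tuning, w has been pulled from the cloned value (~2.0)
# toward the new task's optimum (~2.5).
```

The sketch mirrors the abstract's argument: cloning alone converges instantly but to a biased policy, while the RL stage corrects that bias at the cost of many additional trial-and-error interactions.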

Rights/License

A university thesis is a work protected by the Copyright Act of the Czech Republic. Extracts, copies and transcripts of the thesis may be made for personal use only and at one's own expense. Any use of the thesis must comply with the Copyright Act.
