Bayesian Ensembles for Exploration in Deep Reinforcement Learning; Code underlying the dissertation "Bayesian Model-Free Deep Reinforcement Learning"
DOI:10.4121/2dcc8aeb-fdbb-4455-add2-c55c5afcc5d4.v1
The DOI displayed above is for this specific version of this dataset, which is currently the latest. Newer versions may be published in the future.
For a link that will always point to the latest version, please use
DOI: 10.4121/2dcc8aeb-fdbb-4455-add2-c55c5afcc5d4
Datacite citation style
van der Vaart, Pascal (2025): Bayesian Ensembles for Exploration in Deep Reinforcement Learning; Code underlying the dissertation "Bayesian Model-Free Deep Reinforcement Learning". Version 1. 4TU.ResearchData. software. https://doi.org/10.4121/2dcc8aeb-fdbb-4455-add2-c55c5afcc5d4.v1
Other citation styles (APA, Harvard, MLA, Vancouver, Chicago, IEEE) available at Datacite
Software
Licence Apache-2.0
Interoperability
Code underlying the dissertation "Bayesian Model-Free Deep Reinforcement Learning". This repository contains code for the chapter "Bayesian Ensembles for Exploration in Deep Q-Learning". The code implements experiments that demonstrate how ensemble-based Bayesian methods can improve exploration efficiency in reinforcement learning, with a focus on the DQN architecture. It can be used to reproduce the results and can be modified by researchers for application to other benchmark problems. The code has no direct applications outside of reinforcement learning research.
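To give a sense of the ensemble-based exploration idea studied in the chapter, the sketch below shows a generic Thompson-sampling-style action selection over an ensemble of Q-functions. This is a minimal illustration only, not the dissertation's implementation: the class, parameter names, and the use of linear Q-heads are assumptions made for the example.

```python
import numpy as np

class EnsembleQ:
    """Illustrative sketch (not the dissertation's code): K independent
    linear Q-value heads over a feature vector, with one head sampled
    per episode to drive exploration."""

    def __init__(self, n_members, obs_dim, n_actions, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # One (obs_dim, n_actions) weight matrix per ensemble member.
        self.weights = [self.rng.normal(size=(obs_dim, n_actions))
                        for _ in range(n_members)]
        self.active = 0  # index of the member used for the current episode

    def reset_episode(self):
        # Acting greedily w.r.t. a randomly sampled member for a whole
        # episode approximates Thompson sampling over Q-functions.
        self.active = int(self.rng.integers(len(self.weights)))

    def act(self, obs):
        q_values = obs @ self.weights[self.active]
        return int(np.argmax(q_values))

ensemble = EnsembleQ(n_members=5, obs_dim=4, n_actions=2)
ensemble.reset_episode()
action = ensemble.act(np.ones(4))
```

Committing to one ensemble member per episode (rather than resampling every step) yields temporally consistent, "deep" exploration, which is the property the chapter's experiments investigate in the DQN setting.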
History
- 2025-08-11 first online, published, posted
Publisher
4TU.ResearchData
Format
.py
Organizations
TU Delft, Faculty of Electrical Engineering, Mathematics and Computer Science, Department of Intelligent Systems

To access the source code, use the following command:
git clone https://data.4tu.nl/v3/datasets/3a105521-4def-4d63-a350-5d3b4cadd35b.git