https://worldmodels.github.io/
We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own dream environment generated by its world model, and transfer this policy back into the actual environment.
Humans develop a mental model of the world based on what they are able to perceive with their limited senses. The decisions and actions we make are based on this internal model. Jay Wright Forrester, the father of system dynamics, described a mental model as:
“The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system.”
To handle the vast amount of information that flows through our daily lives, our brain learns an abstract representation of both spatial and temporal aspects of this information. We are able to observe a scene and remember an abstract description thereof.
One way of understanding the predictive model inside of our brains is that it might not be about just predicting the future in general, but predicting future sensory data given our current motor actions.
Take baseball for example. A baseball batter has milliseconds to decide how they should swing the bat -- shorter than the time it takes for visual signals from our eyes to reach our brain. The reason we are able to hit a 100 mph fastball is due to our ability to instinctively predict when and where the ball will go. For professional players, this all happens subconsciously. Their muscles reflexively swing the bat at the right time and location, in line with their internal models' predictions.
In many reinforcement learning (RL) problems, an artificial agent also benefits from having a good representation of past and present states, and a good predictive model of the future, preferably a powerful predictive model implemented on a general purpose computer such as a recurrent neural network (RNN).
Large RNNs are highly expressive models that can learn rich spatial and temporal representations of data. However, many model-free RL methods in the literature often only use small neural networks with few parameters. The RL algorithm is often bottlenecked by the credit assignment problem, which makes it hard for traditional RL algorithms to learn millions of weights of a large model; hence in practice, smaller networks are used, as they iterate faster to a good policy during training.
Ideally, we would like to be able to efficiently train large RNN-based agents. The backpropagation algorithm can be used to train large neural networks efficiently. In this work, we look at training a large neural network to tackle RL tasks by dividing the agent into a large world model and a small controller model, so that the training algorithm can focus the credit assignment problem on a small search space without sacrificing the capacity and expressiveness of the larger world model.
In this article, we combine several key concepts from a series of papers from 1990--2015 on RNN-based world models and controllers with more recent tools from probabilistic modelling, and present a simplified approach to test some of those key concepts in modern RL environments.
Most existing model-based RL approaches learn a model of the RL environment, but still train on the actual environment. Here, we also explore fully replacing the actual RL environment with a generated one, training our agent's controller only inside of the environment generated by its own internal world model, and transferring this policy back into the actual environment.
To overcome the problem of an agent exploiting imperfections of the generated environments, we adjust a temperature parameter of the internal world model to control the amount of uncertainty of the generated environments. We train an agent's controller inside of a noisier and more uncertain version of its generated environment, and demonstrate that this approach helps prevent our agent from taking advantage of the imperfections of its internal world model. We will also discuss other related works in the model-based RL literature that share similar ideas of learning a dynamics model and training an agent using this model.
We present a simple model inspired by our own cognitive system. In this model, our agent has a visual sensory component that compresses what it sees into a small representative code. It also has a memory component that makes predictions about future codes based on historical information. Finally, our agent has a decision-making component that decides what actions to take based only on the representations created by its vision and memory components.
Our agent consists of three components that work closely together: Vision (V), Memory (M), and Controller (C).
The environment provides our agent with a high dimensional input observation at each time step. This input is usually a 2D image frame that is part of a video sequence. The role of the V model is to learn an abstract, compressed representation of each observed input frame.
We use a Variational Autoencoder (VAE) as the V model in our experiments.
While it is the role of the V model to compress what the agent sees at each time frame, we also want to compress what happens over time. For this purpose, the role of the M model is to predict the future. The M model serves as a predictive model of the future $z$ vectors that V is expected to produce. Because many complex environments are stochastic in nature, we train our RNN to output a probability density function $p(z)$ instead of a deterministic prediction of $z_{t+1}$.
In our approach, we approximate $p(z)$ as a mixture of Gaussian distribution, and train the RNN to output the probability distribution of the next latent vector $z_{t+1}$ given the current and past information made available to it.
More specifically, the RNN will model $P(z_{t+1} \mid a_t, z_t, h_t)$, where $a_t$ is the action taken at time $t$ and $h_t$ is the hidden state of the RNN at time $t$. During sampling, we can adjust a temperature parameter $\tau$ to control model uncertainty, as done in prior work on sequence generation -- we will find adjusting $\tau$ to be useful for training our controller later on.
This approach is known as a Mixture Density Network combined with an RNN (MDN-RNN), and has been applied in the past to sequence generation problems such as generating handwriting and sketches.
The Controller (C) model is responsible for determining the course of actions to take in order to maximize the expected cumulative reward of the agent during a rollout of the environment. In our experiments, we deliberately make C as simple and small as possible, and train it separately from V and M, so that most of our agent's complexity resides in the world model (V and M).
C is a simple single-layer linear model that maps $z_t$ and $h_t$ directly to action $a_t$ at each time step:

$$a_t = W_c \, [z_t \; h_t] + b_c$$
In this linear model, $W_c$ and $b_c$ are the weight matrix and bias vector that map the concatenated input vector $[z_t \; h_t]$ to the output action vector $a_t$.
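As a concrete illustration, below is a minimal sketch of such a linear controller in Python. The class and method names are our own, and the dimensions shown (32 latent units, 256 hidden units, 3 actions) match the Car Racing setup described later:

```python
import numpy as np

class LinearController:
    """a_t = W_c [z_t h_t] + b_c, with 3 * (32 + 256) + 3 = 867 parameters."""
    def __init__(self, W_c, b_c):
        self.W_c = W_c   # shape (3, 32 + 256), illustrative dimensions
        self.b_c = b_c   # shape (3,)

    def action(self, z, h):
        x = np.concatenate([z, h])    # concatenated input [z_t h_t]
        a = self.W_c @ x + self.b_c   # linear map to the 3 action values
        # In practice the raw outputs are squashed/clipped to valid action
        # ranges, as described in the Appendix.
        return a
```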
The following flow diagram illustrates how V, M, and C interact with the environment:
Below is the pseudocode for how our agent model is used in an OpenAI Gym environment. Running this function with a given controller C will return the cumulative reward during a rollout of the environment.
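A minimal sketch of that rollout loop is shown below, assuming global env, vae, and rnn objects whose method names (encode, initial_state, forward) are illustrative:

```python
def rollout(controller):
    """Run one episode in the environment and return the cumulative reward."""
    # env, rnn and vae are assumed to be global objects in this sketch.
    obs = env.reset()
    h = rnn.initial_state()
    done = False
    cumulative_reward = 0
    while not done:
        z = vae.encode(obs)                  # V: compress the raw frame into z_t
        a = controller.action(z, h)          # C: act on [z_t h_t]
        obs, reward, done, _ = env.step(a)   # environment transition
        cumulative_reward += reward
        h = rnn.forward(a, z, h)             # M: update hidden state to h_{t+1}
    return cumulative_reward
```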
This minimal design for C also offers important practical benefits. Advances in deep learning have provided us with the tools to train large, sophisticated models efficiently, provided we can define a well-behaved, differentiable loss function. Our V and M models are designed to be trained efficiently with the backpropagation algorithm using modern GPU accelerators, so we would like most of the model's complexity, and most of its parameters, to reside in V and M. The number of parameters of C, a linear model, is minimal in comparison. This choice allows us to explore more unconventional ways to train C -- for example, even using evolution strategies (ES) to tackle more challenging RL tasks where the credit assignment problem is difficult.
To optimize the parameters of C, we chose the Covariance-Matrix Adaptation Evolution Strategy (CMA-ES) as our optimization algorithm, since it is known to work well for solution spaces of up to a few thousand parameters. We evolve the parameters of C on a single machine with multiple CPU cores, running multiple rollouts of the environment in parallel.
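The optimization loop can be sketched with the open-source cma package; the population size, initial step size, and the make_controller helper below are illustrative assumptions rather than the exact settings used here:

```python
import numpy as np
import cma

NUM_PARAMS = 867  # size of W_c and b_c for the Car Racing controller

def evaluate(params):
    """Average cumulative reward of a candidate controller over several rollouts."""
    controller = make_controller(params)   # hypothetical helper: unpack W_c, b_c
    return np.mean([rollout(controller) for _ in range(16)])

es = cma.CMAEvolutionStrategy(NUM_PARAMS * [0.0], 0.1, {'popsize': 64})
while not es.stop():
    candidates = es.ask()                            # sample a population of solutions
    fitnesses = [-evaluate(c) for c in candidates]   # CMA-ES minimizes, so negate reward
    es.tell(candidates, fitnesses)
```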
For more specific information about the models, training procedures, and environments used in our experiments, please refer to the Appendix.
A predictive world model can help us extract useful representations of space and time. By using these features as inputs to a controller, we can train a compact and minimal controller to perform a continuous control task, such as learning to drive from pixel inputs in a top-down car racing environment.
In this environment, the tracks are randomly generated for each trial, and our agent is rewarded for visiting as many tiles as possible in the least amount of time. The agent controls three continuous actions: steering left/right, acceleration, and brake.
To train our V model, we first collect a dataset of 10,000 random rollouts of the environment. We first have an agent act randomly to explore the environment multiple times, and record the random actions $a_t$ taken and the resulting observations from the environment.
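Data collection along these lines might look as follows. The CarRacing-v0 environment id and the classic 4-tuple Gym step API are assumptions here, and in practice each rollout would be written to disk rather than kept in memory:

```python
import numpy as np
import gym

env = gym.make('CarRacing-v0')
dataset = []
for episode in range(10000):
    obs = env.reset()
    done = False
    frames, actions = [], []
    while not done:
        a = env.action_space.sample()          # random policy
        obs, reward, done, info = env.step(a)  # classic Gym 4-tuple API
        frames.append(obs)                     # raw pixel observation
        actions.append(a)
    dataset.append((np.array(frames), np.array(actions)))
```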
We can now use our trained V model to pre-process each frame at time $t$ into $z_t$ to train our M model. Using this pre-processed data, along with the recorded random actions $a_t$ taken, our MDN-RNN can now be trained to model $P(z_{t+1} \mid a_t, z_t, h_t)$ as a mixture of Gaussians.
In this experiment, the world model (V and M) has no knowledge about the actual reward signals from the environment. Its task is simply to compress and predict the sequence of image frames observed. Only the Controller (C) Model has access to the reward information from the environment. Since there are a mere 867 parameters inside the linear controller model, evolutionary algorithms such as CMA-ES are well suited for this optimization task.
The figure below compares the actual observation given to the agent with the observation captured by the world model. We can use the VAE to reconstruct each frame from $z_t$ at each time step to visualize the quality of the information the agent actually sees during a rollout:
To summarize the Car Racing experiment, below are the steps taken:

1. Collect 10,000 rollouts from a random policy.
2. Train the VAE (V) to encode each frame into a latent vector $z \in \mathcal{R}^{32}$.
3. Train the MDN-RNN (M) to model $P(z_{t+1} \mid a_t, z_t, h_t)$.
4. Define the Controller (C) as $a_t = W_c \, [z_t \; h_t] + b_c$.
5. Use CMA-ES to solve for a $W_c$ and $b_c$ that maximize the expected cumulative reward.
Model | Parameter Count |
---|---|
VAE | 4,348,547 |
MDN-RNN | 422,368 |
Controller | 867 |
Training an agent to drive is not a difficult task if we have a good representation of the observation. Previous works have shown that with a good set of hand-engineered features describing the observation, such as LIDAR information, angles, positions, and velocities, one can easily train a small feed-forward network to take these features as input and output a satisfactory navigation policy. For this reason, we first test our agent by handicapping C to only have access to V but not M, so the controller is defined as $a_t = W_c \, z_t + b_c$.
Although the agent is still able to navigate the race track in this setting, we notice it wobbles around and misses the track on sharper corners. This handicapped agent achieved an average score of 632 ± 251 over 100 random trials, in line with the performance of other agents on OpenAI Gym's leaderboard.
The representation $z_t$ provided by our V model only captures a representation at a moment in time and doesn't have much predictive power. In contrast, M is trained to do one thing, and to do it really well, which is to predict $z_{t+1}$. Since M's prediction of $z_{t+1}$ is produced from the RNN's hidden state $h_t$ at time $t$, this vector is a good candidate for the set of learned features we can give to our agent. Combining $z_t$ with $h_t$ gives our controller C a good representation of both the current observation, and what to expect in the future.
Indeed, we see that allowing the agent to access both $z_t$ and $h_t$ greatly improves its driving capability. The driving is more stable, and the agent is able to attack sharp corners effectively. Furthermore, we see that in making these fast reflexive driving decisions during a car race, the agent does not need to plan ahead and roll out hypothetical scenarios of the future. Since $h_t$ contains information about the probability distribution of the future, the agent can just query the RNN instinctively to guide its action decisions. Like a seasoned Formula One driver or the baseball player discussed earlier, the agent can instinctively predict when and where to navigate in the heat of the moment.
Method | Average Score over 100 Random Tracks
---|---
DQN | 343 ± 18
A3C (continuous) | 591 ± 45
A3C (discrete) | 652 ± 10
ceobillionaire's algorithm (unpublished) | 838 ± 11
V model only, input $z_t$ | 632 ± 251
V model only, input $z_t$ with a hidden layer | 788 ± 141
Full World Model, $z_t$ and $h_t$ | 906 ± 21
Our agent was able to achieve a score of 906 ± 21 over 100 random trials, effectively solving the task and obtaining new state-of-the-art results. Previous attempts using traditional Deep RL methods obtained average scores in the 591--652 range, and the best reported solution on the leaderboard obtained an average score of 838 ± 11 over 100 random trials.
Since our world model is able to model the future, we are also able to have it come up with hypothetical car racing scenarios on its own. We can ask it to produce the probability distribution of $z_{t+1}$ given the current states, sample a $z_{t+1}$, and use this sample as the real observation. We can put our trained C back into this dream environment generated by M. The following demo shows how our world model can be used to generate the car racing environment:
We have just seen that a policy learned inside of the real environment appears to somewhat function inside of the dream environment. This begs the question -- can we train our agent to learn inside of its own dream, and transfer this policy back to the actual environment?
If our world model is sufficiently accurate for its purpose, and complete enough for the problem at hand, we should be able to substitute the actual environment with this world model. After all, our agent does not directly observe the reality, but only sees what the world model lets it see. In this experiment, we train an agent inside the dream environment generated by its world model trained to mimic a VizDoom environment.
The agent must learn to avoid fireballs shot by monsters from the other side of the room with the sole intent of killing the agent. There are no explicit rewards in this environment, so to mimic natural selection, the cumulative reward can be defined to be the number of time steps the agent manages to stay alive during a rollout. Each rollout in the environment runs for a maximum of 2100 time steps (~60 seconds), and the task is considered solved if the average survival time over 100 consecutive rollouts is greater than 750 time steps (~20 seconds).
The setup of our VizDoom experiment is largely the same as the Car Racing task, except for a few key differences. In the Car Racing task, M is only trained to model the next $z_t$. Since we want to build a world model we can train our agent in, our M model here will also predict whether the agent dies in the next frame (as a binary event $done_t$, or $d_t$ for short), in addition to the next frame $z_t$.
Since the M model can predict the $done$ state in addition to the next observation, we now have all of the ingredients needed for a full RL environment. We first build an OpenAI Gym environment interface by wrapping a gym.Env interface around M as if it were a real Gym environment, and then train our agent inside this virtual environment instead of the actual one.
In this simulation, we don't need the V model to encode any real pixel frames during the hallucination process, so our agent will train entirely in a latent-space environment. This has many advantages that will be discussed later on.
This virtual environment has an identical interface to the real environment, so after the agent learns a satisfactory policy in the virtual environment, we can easily deploy this policy back into the actual environment to see how well the policy transfers over.
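A sketch of such a wrapper is shown below. The DreamEnv name and the mdn_rnn methods (initial_state, sample_initial_z, sample) are illustrative assumptions, but the structure -- latent-only transitions, a per-step survival reward, and the 50% done cutoff described in the Appendix -- follows the text:

```python
import numpy as np
import gym

class DreamEnv(gym.Env):
    """A latent-space environment generated entirely by the M model."""
    def __init__(self, mdn_rnn, temperature=1.15):
        self.rnn = mdn_rnn
        self.tau = temperature   # extra uncertainty injected during sampling

    def reset(self):
        self.h = self.rnn.initial_state()
        self.z = self.rnn.sample_initial_z()   # start from a latent seen in the data
        return np.concatenate([self.z, self.h])

    def step(self, action):
        # M predicts the next latent vector and the probability of dying.
        self.z, p_done, self.h = self.rnn.sample(self.z, action, self.h, tau=self.tau)
        done = p_done > 0.5            # cutoff rule for the binary done event
        reward = 0.0 if done else 1.0  # reward = surviving one more time step
        return np.concatenate([self.z, self.h]), reward, done, {}
```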
To summarize the Take Cover experiment, below are the steps taken:

1. Collect 10,000 rollouts from a random policy.
2. Train the VAE (V) to encode each frame into a latent vector $z \in \mathcal{R}^{64}$, and use V to convert the images collected in (1) into the latent space.
3. Train the MDN-RNN (M) to model $P(z_{t+1}, d_{t+1} \mid a_t, z_t, h_t)$.
4. Define the Controller (C) as $a_t = W_c \, [z_t \; h_t] + b_c$.
5. Use CMA-ES to solve for a $W_c$ and $b_c$ that maximize the expected survival time inside the virtual environment.
6. Use the learned policy from (5) on the actual VizDoom environment.
Model | Parameter Count |
---|---|
VAE | 4,446,915 |
MDN-RNN | 1,678,785 |
Controller | 1,088 |
After some training, our controller learns to navigate around the dream environment and escape from deadly fireballs launched by monsters generated by the M model. Our agent achieved a score of ~900 time steps in this virtual environment.
The following demo shows how our agent navigates inside its own dream. The M model learns to generate monsters that shoot fireballs in the direction of the agent, while the C model discovers a policy to avoid these generated fireballs. Here, the V model is only used to decode the latent vectors produced by M into a sequence of pixel images we can see:
Here, our RNN-based world model is trained to mimic a complete game environment designed by human programmers. By learning only from raw image data collected from random episodes, it learns how to simulate the essential aspects of the game -- such as the game logic, enemy behaviour, physics, and also the 3D graphics rendering.
For instance, if the agent selects the left action, the M model learns to move the agent to the left and adjust its internal representation of the game states accordingly. It also learns to block the agent from moving beyond the walls on both sides of the level if the agent attempts to move too far in either direction. Occasionally, the M model needs to keep track of multiple fireballs being shot from several different monsters and coherently move them along in their intended directions. It must also detect whether the agent has been killed by one of these fireballs.
Unlike the actual game environment, however, we note that it is possible to add extra uncertainty into the virtual environment, thus making the game more challenging in the dream environment. We can do this by increasing the temperature parameter $\tau$ during the sampling process of $z_{t+1}$, as described earlier.
We find agents that perform well in higher temperature settings generally perform better in the normal setting. In fact, increasing $\tau$ helps prevent our controller from taking advantage of the imperfections of our world model -- we will discuss this in more depth later on.
We took the agent trained inside of the virtual environment and tested its performance on the original VizDoom scenario. The score over 100 random consecutive trials is ~1100 time steps, far beyond the required score of 750 time steps, and also much higher than the score obtained inside the more difficult virtual environment.
We see that even though the V model is not able to capture all of the details of each frame correctly -- for instance, it fails to get the number of monsters right -- the agent is still able to use the learned policy to navigate in the real environment. As the virtual environment cannot even keep track of the exact number of monsters in the first place, an agent that is able to survive the noisier and more uncertain virtual nightmare environment will thrive in the original, cleaner environment.
In our childhood, we may have encountered ways to exploit video games that were never intended by the original game designer.
Because M is only an approximate probabilistic model of the environment, it will occasionally generate trajectories that do not follow the laws governing the actual environment. As we previously pointed out, even the number of monsters on the other side of the room in the actual environment is not exactly reproduced by M. For this reason, our world model will be exploitable by C, even if such exploits do not exist in the actual environment.
As a result of using M to generate a virtual environment for our agent, we are also giving the controller access to all of the hidden states of M. This is essentially granting our agent access to all of the internal states and memory of the game engine, rather than only the game observations that the player gets to see. Therefore our agent can efficiently explore ways to directly manipulate the hidden states of the game engine in its quest to maximize its expected cumulative reward. The weakness of this approach of learning a policy inside of a learned dynamics model is that our agent can easily find an adversarial policy that can fool our dynamics model -- it will find a policy that looks good under our dynamics model, but will fail in the actual environment, usually because it visits states where the model is wrong, since those states are far from the training distribution.
This weakness could be the reason that many previous works that learn dynamics models of RL environments do not actually use those models to fully replace the actual environments.
To make it more difficult for our C to exploit deficiencies of M, we chose to use the MDN-RNN as the dynamics model, so that it models the distribution of possible outcomes in the actual environment rather than merely predicting a deterministic future. Even if the actual environment is deterministic, the MDN-RNN would in effect approximate it as a stochastic environment. This has the advantage of allowing us to train C inside a more stochastic version of any environment -- we can simply adjust the temperature parameter $\tau$ to control the amount of randomness in M, hence controlling the tradeoff between realism and exploitability.
Using a mixture of Gaussian model may seem excessive given that the latent space encoded with the VAE model is just a single diagonal Gaussian distribution. However, the discrete modes in a mixture density model are useful for environments with random discrete events, such as whether a monster decides to shoot a fireball or stay put. While a single diagonal Gaussian might be sufficient to encode individual frames, an RNN with a mixture density output layer makes it easier to model the logic behind a more complicated environment with discrete random states.
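To make the role of the temperature concrete, here is a sketch of temperature-adjusted sampling from a mixture density output for one latent dimension. The parameter names (logpi, mu, logsigma) and exact scaling are illustrative of the general MDN temperature trick, not the precise implementation used here:

```python
import numpy as np

def sample_mdn(logpi, mu, logsigma, tau=1.0):
    """Sample one value from a K-component Gaussian mixture at temperature tau."""
    # Higher tau flattens the mixture weights, making rare modes
    # (e.g. "a fireball is fired") more likely to be sampled.
    pi = np.exp(logpi / tau)
    pi /= pi.sum()
    k = np.random.choice(len(pi), p=pi)
    # Higher tau also widens each Gaussian, injecting extra uncertainty.
    sigma = np.exp(logsigma[k]) * np.sqrt(tau)
    return np.random.normal(mu[k], sigma)
```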
For instance, if we set the temperature parameter to a very low value such as $\tau = 0.1$, effectively training our C with an M that is almost identical to a deterministic LSTM, the monsters inside this generated environment fail to shoot fireballs, no matter what the agent does, due to mode collapse. M is not able to transition to another mode in the mixture of Gaussian model where fireballs are formed and shot. Whatever policy is learned inside of this generated environment will achieve a perfect score of 2100 most of the time, but will obviously fail when unleashed into the harsh reality of the actual world, underperforming even a random policy.
In the following demo, we show that even low values of $\tau$ make it difficult for the MDN-RNN to generate fireballs:
By making the temperature $\tau$ an adjustable parameter of M, we can see the effect of training C inside virtual environments with different levels of uncertainty, and see how well these policies transfer over to the actual environment. We experiment with varying $\tau$ of the virtual environment, training an agent inside this virtual environment, and observing its performance in the actual environment.
Temperature $\tau$ | Score in Virtual Environment | Score in Actual Environment
---|---|---
0.10 | 2086 ± 140 | 193 ± 58
0.50 | 2060 ± 277 | 196 ± 50
1.00 | 1145 ± 690 | 868 ± 511
1.15 | 918 ± 546 | 1092 ± 556
1.30 | 732 ± 269 | 753 ± 139
Random Policy Baseline | N/A | 210 ± 108
Gym Leaderboard | N/A | 820 ± 58
In the table above, while we see that increasing $\tau$ of M makes it more difficult for C to find adversarial policies, increasing it too much will make the virtual environment too difficult for the agent to learn anything, so in practice it is a hyperparameter we can tune. The temperature also affects the types of strategies the agent discovers. For example, although the best score obtained is 1092 ± 556 with $\tau = 1.15$, increasing $\tau$ a notch to 1.30 results in a lower score but, at the same time, a less risky strategy with a lower variance of returns. For comparison, the best reported score on the OpenAI Gym leaderboard is 820 ± 58.
In our experiments, the tasks are relatively simple, so a reasonable world model can be trained using a dataset collected from a random policy. But what if our environments become more sophisticated? In any difficult environment, parts of the world are made available to the agent only after it learns how to strategically navigate through its world.
For more complicated tasks, an iterative training procedure is required. We need our agent to be able to explore its world, and constantly collect new observations so that its world model can be improved and refined over time. An iterative training procedure, adapted from Learning To Think, is as follows:

1. Initialize M and C with random model parameters.
2. Roll out to the actual environment $N$ times. Save all actions $a_t$ and observations $x_t$ during the rollouts to storage.
3. Train M to model $P(x_{t+1}, r_{t+1}, a_{t+1}, d_{t+1} \mid x_t, a_t, h_t)$, and train C to optimize expected rewards inside of M.
4. Go back to (2) if the task has not been completed.
We have shown that one iteration of this training loop was enough to solve simple tasks. For more difficult tasks, we need our controller in Step 2 to actively explore parts of the environment that are beneficial to improving its world model. An exciting research direction is to look at ways to incorporate artificial curiosity and intrinsic motivation into the agent to encourage novel exploration.
In the present approach, since M is an MDN-RNN that models a probability distribution for the next frame, if it does a poor job, then it means the agent has encountered parts of the world that it is not familiar with. Therefore we can adapt and reuse M's training loss function to encourage curiosity. By flipping the sign of M's loss function in the actual environment, the agent will be encouraged to explore parts of the world that it is not familiar with. The new data it collects may improve the world model.
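One way to read this concretely is to use the loss M assigns to an observed transition as an intrinsic reward. The rnn.negative_log_likelihood call below is a hypothetical interface, sketched only to illustrate the sign flip:

```python
def curiosity_reward(rnn, z, action, h, z_next):
    """Intrinsic reward = M's loss on the observed transition.

    The reward is large exactly when the world model is surprised,
    so maximizing it drives the agent toward unfamiliar states.
    """
    return rnn.negative_log_likelihood(z_next, given=(z, action, h))
```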
The iterative training procedure requires the M model to not only predict the next observation $x$ and $done$, but also predict the action and reward for the next time step. This may be required for more difficult tasks. For instance, if our agent needs to learn complex motor skills to walk around its environment, the world model will learn to imitate its own C model that has already learned to walk. After difficult motor skills, such as walking, are absorbed into a large world model with lots of capacity, the smaller C model can rely on the motor skills already absorbed by the world model and focus on learning higher-level skills to navigate itself using the motor skills it has already learned.
An interesting connection to the neuroscience literature is the work on hippocampal replay that examines how the brain replays recent experiences when an animal rests or sleeps. Replaying recent experiences plays an important role in memory consolidation.
Iterative training could allow the C--M model to develop a natural hierarchical way to learn. Recent works about self-play in RL also explore methods that lead to a natural curriculum, and we feel this is one of the more exciting research directions in reinforcement learning.
There is extensive literature on learning a dynamics model and using this model to train a policy. Many concepts first explored in the 1980s for feed-forward neural networks (FNNs) and in the 1990s for RNNs laid some of the groundwork for this line of work. The more recent PILCO is a probabilistic model-based policy search method that uses a Gaussian process (GP) to learn the system dynamics, and then samples many trajectories from this model in order to train a controller to perform a desired task.
While Gaussian processes work well with a small set of low dimensional data, their computational complexity makes them difficult to scale up to model a large history of high dimensional observations. Other recent works use Bayesian neural networks instead of GPs to learn a dynamics model, and have demonstrated promising results on challenging control tasks.
In robotic control applications, the ability to learn the dynamics of a system from observing only camera-based video inputs is a challenging but important problem. Early work on RL for active vision trained an FNN to take the current image frame of a video sequence and predict the next frame.
Video game environments are also popular in model-based RL research as a testbed for new ideas. Guzdial et al. used a feed-forward convolutional neural network (CNN) to learn a forward simulation model of a video game.
The works mentioned above use FNNs to predict the next video frame. We may want to use models that can capture longer term time dependencies. RNNs are powerful models suitable for sequence modelling.
Using RNNs to develop internal models to reason about the future has been explored as early as 1990 in a paper called Making the World Differentiable.
In this work, we used evolution strategies (ES) to train our controller, as it offers many benefits. For instance, we only need to provide the optimizer with the final cumulative reward, rather than the entire history. ES is also easy to parallelize -- we can launch many instances of rollout with different solutions to many workers, and quickly compute a set of cumulative rewards in parallel. Recent works have confirmed that ES is a viable alternative to traditional Deep RL methods on many strong baseline tasks.
Before the popularity of Deep RL methods, evolution-based algorithms had already been shown to be effective at finding solutions to RL tasks, including tasks with high-dimensional pixel inputs.
We have demonstrated the possibility of training an agent to perform tasks entirely inside of its simulated latent space world. This approach offers many practical benefits. For instance, video game engines typically require heavy compute resources for rendering the game states into image frames, or calculating physics not immediately relevant to the game. We may not want to waste cycles training an agent in the actual environment, but instead train the agent as many times as we want inside its simulated environment. Agents that are trained incrementally to simulate reality may prove to be useful for transferring policies back to the real world. Our approach may complement sim2real approaches outlined in previous work.
Furthermore, we can take advantage of deep learning frameworks to accelerate our world model simulations using GPUs in a distributed environment. The benefit of implementing the world model as a fully differentiable recurrent computation graph also means that we may be able to train our agents in the dream directly using the backpropagation algorithm to fine-tune its policy to maximize an objective function.
The choice of implementing V as a VAE and training it as a standalone model also has its limitations, since it may encode parts of the observations that are not relevant to a task. After all, unsupervised learning cannot, by definition, know what will be useful for the task at hand. For instance, our VAE reproduced unimportant detailed brick tile patterns on the side walls in the Doom environment, but failed to reproduce task-relevant tiles on the road in the Car Racing environment. By training together with an M that predicts rewards, the VAE may learn to focus on task-relevant areas of the image, but the tradeoff here is that we may not be able to reuse the VAE effectively for new tasks without retraining. Learning task-relevant features has connections to neuroscience as well. Primary sensory neurons are released from inhibition when rewards are received, which suggests that they generally learn task-relevant features, rather than just any features, at least in adulthood.
Another concern is the limited capacity of our world model. While modern storage devices can store large amounts of historical data generated using an iterative training procedure, our LSTM-based world model may not be able to store all of the recorded information inside its weight connections.
Like early RNN-based C--M systems, ours simulates possible futures time step by time step.
If you would like to discuss any issues or give feedback, please visit the GitHub repository of this page for more information.
We would like to thank Blake Richards, Kory Mathewson, Kyle McDonald, Kai Arulkumaran, Ankur Handa, Denny Britz, Elwin Ha and Natasha Jaques for their thoughtful feedback on this article, and for offering their valuable perspectives and insights from their areas of expertise.
The interactive demos in this article were all built using p5.js. Deploying all of these machine learning models in a web browser was made possible with deeplearn.js, a hardware-accelerated machine learning framework for the browser, developed by the People+AI Research Initiative (PAIR) team at Google. A special thanks goes to Nikhil Thorat and Daniel Smilkov for their support.
We would like to thank Chris Olah and the rest of the Distill editorial team for their valuable feedback and generous editorial support, in addition to supporting the use of their distill.pub technology.
We would like to extend our thanks to Alex Graves, Douglas Eck, Mike Schuster, Rajat Monga, Vincent Vanhoucke, Jeff Dean and the Google Brain team for helpful feedback and for encouraging us to explore this area of research.
Any errors here are our own and do not reflect opinions of our proofreaders and colleagues. If you see mistakes or want to suggest changes, feel free to contribute feedback by participating in the discussion forum for this article.
The experiments in this article were performed on both a P100 GPU and a 64-core CPU Ubuntu Linux virtual machine provided by Google Cloud Platform, using TensorFlow and OpenAI Gym.
For attribution in academic contexts, please cite this work as
Ha and Schmidhuber, "Recurrent World Models Facilitate Policy Evolution", 2018.
BibTeX citation
@incollection{ha2018worldmodels, title = {Recurrent World Models Facilitate Policy Evolution}, author = {Ha, David and Schmidhuber, J{\"u}rgen}, booktitle = {Advances in Neural Information Processing Systems 31}, pages = {2451--2463}, year = {2018}, publisher = {Curran Associates, Inc.}, url = {https://papers.nips.cc/paper/7512-recurrent-world-models-facilitate-policy-evolution}, note = "\url{https://worldmodels.github.io}", }
The instructions to reproduce the experiments in this work are available here.
Diagrams and text are licensed under Creative Commons Attribution CC-BY 4.0 with the source available on GitHub, unless noted otherwise. The figures that have been reused from other sources don’t fall under this license and can be recognized by the citations in their caption.
In this section we describe in more detail the models and training methods used in this work.
We trained a Convolutional Variational Autoencoder (ConvVAE) model as our agent's V. Unlike a vanilla autoencoder, the VAE enforces a Gaussian prior over the latent vector $z$, which limits the information capacity available for compressing each frame; however, this Gaussian prior also makes the world model more robust to unrealistic $z$ vectors generated by M.
In the following diagram, we describe the shape of our tensor at each layer of the ConvVAE and also describe the details of each layer:
Our latent vector $z$ is sampled from a factored Gaussian distribution $N(\mu, \sigma^2 I)$, with mean $\mu$ and diagonal variance $\sigma^2$. As the environment may give us observations as high dimensional pixel images, we first resize each image to 64x64 pixels and use this resized image as V's observation. Each pixel is stored as three floating point values between 0 and 1 to represent each of the RGB channels. The ConvVAE takes in this 64x64x3 input tensor and passes it through 4 convolutional layers to encode it into low dimension vectors $\mu$ and $\sigma$. The dimensionality of $z$ is 32 for the Car Racing task and 64 for the Doom task. The latent vector $z$ is passed through 4 deconvolution layers used to decode and reconstruct the image.
Each convolution and deconvolution layer uses a stride of 2. The layers are indicated in the diagram in italics as Activation-type Output Channels x Filter Size. All convolutional and deconvolutional layers use relu activations except for the output layer, as we need the output to be between 0 and 1. We trained the model for 1 epoch over the data collected from a random policy, using the $L^2$ distance between the input image and the reconstruction to quantify the reconstruction loss we optimize for, in addition to the KL loss.
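For concreteness, here is a tf.keras sketch of an encoder/decoder with this structure (four stride-2 convolution layers, relu activations, a sigmoid output, and a diagonal-Gaussian latent). The specific filter counts and kernel sizes are illustrative assumptions rather than the exact architecture:

```python
import tensorflow as tf

Z_DIM = 32  # 32 for Car Racing, 64 for Doom

def sample_z(args):
    # Reparameterization trick: z = mu + sigma * epsilon
    mu, logvar = args
    eps = tf.random.normal(shape=tf.shape(mu))
    return mu + tf.exp(0.5 * logvar) * eps

inputs = tf.keras.Input(shape=(64, 64, 3))
x = tf.keras.layers.Conv2D(32, 4, strides=2, activation='relu')(inputs)
x = tf.keras.layers.Conv2D(64, 4, strides=2, activation='relu')(x)
x = tf.keras.layers.Conv2D(128, 4, strides=2, activation='relu')(x)
x = tf.keras.layers.Conv2D(256, 4, strides=2, activation='relu')(x)
x = tf.keras.layers.Flatten()(x)
mu = tf.keras.layers.Dense(Z_DIM)(x)
logvar = tf.keras.layers.Dense(Z_DIM)(x)
z = tf.keras.layers.Lambda(sample_z)([mu, logvar])

d = tf.keras.layers.Dense(1024)(z)
d = tf.keras.layers.Reshape((1, 1, 1024))(d)
d = tf.keras.layers.Conv2DTranspose(128, 5, strides=2, activation='relu')(d)
d = tf.keras.layers.Conv2DTranspose(64, 5, strides=2, activation='relu')(d)
d = tf.keras.layers.Conv2DTranspose(32, 6, strides=2, activation='relu')(d)
recon = tf.keras.layers.Conv2DTranspose(3, 6, strides=2, activation='sigmoid')(d)

vae = tf.keras.Model(inputs, recon)
# The training objective would combine an L2 reconstruction term with the
# KL term between N(mu, sigma^2 I) and the unit Gaussian prior (omitted here).
```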
For the M model, we use an LSTM recurrent neural network combined with a Mixture Density Network (MDN) as the output layer: the LSTM's output is used to parameterize a mixture of Gaussian distribution from which the next latent vector is sampled.
We use this network to model the probability distribution of the next latent vector $z_{t+1}$ as a Mixture of Gaussian distribution. This approach is very similar to previous work on sequence generation for handwriting and sketches.
Unlike the handwriting and sketch generation works, rather than using the MDN-RNN to model the pdf of the next pen stroke, we instead model the pdf of the next latent vector $z_t$. We would sample from this pdf at each time step to generate the environments. In the Doom task, we also use the MDN-RNN to predict the probability of whether the agent has died in this frame. If that probability is above 50%, then we set done to be True in the virtual environment. Given that death is a low probability event at each time step, we find this cutoff approach to be more stable compared to sampling from the Bernoulli distribution.
The MDN-RNNs were trained for 20 epochs on the data collected from a random policy agent. In the Car Racing task, the LSTM used 256 hidden units, while in the Doom task it used 512 hidden units. In both tasks, we used 5 Gaussian mixtures, but unlike the handwriting and sketch generation works, we did not model any correlation parameters, so each latent dimension is modelled by a factored mixture of Gaussian distributions.
When training the MDN-RNN using teacher forcing from the recorded data, we store a pre-computed set of $\mu_t$ and $\sigma_t$ for each of the frames, and sample an input $z_t \sim N(\mu_t, \sigma_t^2 I)$ each time we construct a training batch, to prevent overfitting our MDN-RNN to a specific sampled $z_t$.
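A sketch of that batch construction is shown below; the array names and shapes are illustrative (one rollout's stored $\mu$, $\sigma$, and actions), not the exact data pipeline:

```python
import numpy as np

def sample_training_batch(mu, sigma, actions, batch_size, seq_len):
    """Build one teacher-forcing batch, re-sampling z from the stored (mu, sigma)."""
    # Fresh z samples each time, so M never overfits to one fixed latent sequence.
    z = mu + sigma * np.random.randn(*mu.shape)
    starts = np.random.randint(0, len(mu) - seq_len - 1, size=batch_size)
    inputs = np.stack([np.concatenate([z[s:s + seq_len],
                                       actions[s:s + seq_len]], axis=-1)
                       for s in starts])
    targets = np.stack([z[s + 1:s + seq_len + 1] for s in starts])
    return inputs, targets
```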
For both environments, we applied nonlinearities to clip and bound the action space to the appropriate ranges. For instance, in the Car Racing task, the steering wheel has a range from -1.0 to 1.0, the acceleration pedal from 0.0 to 1.0, and the brakes from 0.0 to 1.0. In the Doom environment, we converted the discrete actions into a continuous action space between -1.0 and 1.0, and divided this range into thirds to indicate whether the agent is moving left, staying where it is, or moving to the right. We give C a feature vector as its input, consisting of $z_t$ and the hidden state of the MDN-RNN. In the Car Racing task, this hidden state is the output vector $h_t$ of the LSTM, while for the Doom task it is both the cell vector $c_t$ and the output vector $h_t$ of the LSTM.
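The exact squashing functions are not spelled out above, so the sketch below is one plausible reading of those ranges; the tanh/clipping choices and the Doom action labels are assumptions:

```python
import numpy as np

def car_racing_action(raw):
    """Map 3 unbounded controller outputs to (steer, accel, brake) ranges."""
    steer = np.tanh(raw[0])                     # steering in [-1, 1]
    accel = (np.tanh(raw[1]) + 1.0) / 2.0       # acceleration in [0, 1]
    brake = np.clip(np.tanh(raw[2]), 0.0, 1.0)  # brake in [0, 1]
    return np.array([steer, accel, brake])

def doom_action(raw):
    """Map one continuous output in [-1, 1] to a discrete move via thirds."""
    x = np.tanh(raw[0])
    if x < -1.0 / 3.0:
        return 'left'       # hypothetical label for moving left
    elif x > 1.0 / 3.0:
        return 'right'      # hypothetical label for moving right
    return 'stay'
```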
We used the Covariance-Matrix Adaptation Evolution Strategy (CMA-ES) to evolve C's weights, with a population size of 64, and had each agent perform the task 16 times with different initial random seeds. The fitness value for the agent is the average cumulative reward over the 16 random rollouts.
Since the requirement of this environment is to have an agent achieve an average score above 900 over 100 random rollouts, we took the best performing agent at the end of every 25 generations and tested it over 1024 random rollout scenarios to record the average shown on the red line. After 1800 generations, an agent was able to achieve an average score of 900.46 over 1024 random rollouts. We used 1024 random rollouts rather than 100 because each process of the 64-core machine was already configured to run 16 rollouts, so evaluating the best agent 1024 times effectively uses one full generation of compute after every 25 generations. In the figure below, we plot the results of the same agent evaluated over 100 rollouts:
We also experimented with an agent that has access to only the $z_t$ vector from the VAE, but not the RNN's hidden states. We tried 2 variations: in the first variation, C maps $z_t$ directly to the action space $a_t$; in the second variation, we added a hidden layer with 40 activations between $z_t$ and $a_t$, increasing the number of model parameters of C to 1443, making it more comparable with the original setup. These results are shown in the two figures below:
We conducted a similar experiment on the generated Doom environment we called DoomRNN. Please note that we did not attempt to train our agent on the actual VizDoom environment, but only used VizDoom for the purpose of collecting training data using a random policy. DoomRNN is more computationally efficient compared to VizDoom as it only operates in latent space without the need to render an image at each time step, and we do not need to run the actual Doom game engine.
In our virtual DoomRNN environment, we increased the temperature slightly, using $\tau = 1.15$ to make the agent learn in a more challenging environment. The best agent managed to obtain an average score of 959 over 1024 random rollouts. This is the highest score of the red line in the figure below:
This same agent achieved an average score of 1092 ± 556 over 100 random rollouts when deployed to the actual VizDoom environment.