CSE 571: Artificial Intelligence Project 4 solution


Max Pts: 25
CSE 571 Artificial Intelligence
Project 4 (Optional)

Make sure to download the project from the Blackboard content folder and not from the Berkeley website, as we have made changes to the project files.
Please use Python 2.7 for this project. Any other version of Python will cause the autograder to not work properly and will result in a grade of 0.
In this project, you will implement value iteration and Q-learning. You will test your agents first on Gridworld (from
class), then apply them to a simulated robot controller (Crawler) and Pacman.
As in previous projects, this project includes an autograder for you to grade your solutions on your machine. It can be run on all questions with the command:
python autograder.py
It can be run for one particular question, such as q2, by:
python autograder.py -q q2
It can be run for one particular test by commands of the form:
python autograder.py -t test_cases/q2/1-bridge-grid
See the autograder tutorial in Project 0 for more information about using the autograder.
Files to Edit and Submit: You will fill in portions of valueIterationAgents.py, qlearningAgents.py, and analysis.py during the assignment. You should submit ONLY these files as mentioned in the submission section. Please do not change the other files in this distribution or submit any of our original files other than these files.
Evaluation: Your code will be autograded for technical correctness. Please do not change the names of any provided functions or classes within the code, or you will wreak havoc on the autograder. However, the correctness of your implementation, not the autograder's judgments, will be the final judge of your score. If necessary, we will review and grade assignments individually to ensure that you receive due credit for your work.

Getting Help: You are not alone! If you find yourself stuck on something, contact us, come to office hours, and/or post issues on Piazza. We want these projects to be rewarding and instructional, not frustrating and demoralizing. But we don't know when or how to help unless you ask.
Discussion: Please be careful not to post spoilers.
Question 1 (6 points): Value Iteration
Write a value iteration agent in ValueIterationAgent, which has been partially specified for you in valueIterationAgents.py. Your value iteration agent is an offline planner, not a reinforcement learning agent, and so the relevant training option is the number of iterations of value iteration it should run (option -i) in its initial planning phase. ValueIterationAgent takes an MDP on construction and runs value iteration for the specified number of iterations before the constructor returns. Value iteration computes k-step estimates of the optimal values, Vk. In addition to running value iteration, implement the following methods for ValueIterationAgent using Vk.
• computeActionFromValues(state) computes the best action according to the value function given by self.values.
• computeQValueFromValues(state, action) returns the Q-value of the (state, action) pair given by the value
function given by self.values.
These quantities are all displayed in the GUI: values are numbers in squares, Q-values are numbers in square quarters,
and policies are arrows out from each square.
Important: Use the "batch" version of value iteration where each vector Vk is computed from a fixed vector Vk-1 (like in lecture), not the "online" version where one single weight vector is updated in place. This means that when a state's value is updated in iteration k based on the values of its successor states, the successor state values used in the value update computation should be those from iteration k-1 (even if some of the successor states had already been updated in iteration k).
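To make the batch update concrete, here is a minimal sketch of how the planning loop run by the constructor could look. It assumes the standard Gridworld MDP interface (mdp.getStates, mdp.getPossibleActions) and that computeQValueFromValues (sketched further below) is already implemented; the method name and structure are illustrative, not required.

    import util

    def runValueIteration(self):
        # Batch value iteration: every sweep reads only V_{k-1} (self.values) and
        # writes V_k into a fresh Counter, which replaces self.values at the end.
        for _ in range(self.iterations):
            newValues = util.Counter()
            for state in self.mdp.getStates():
                actions = self.mdp.getPossibleActions(state)
                if not actions:
                    continue  # terminal state: its value stays 0
                newValues[state] = max(self.computeQValueFromValues(state, action)
                                       for action in actions)
            self.values = newValues  # only now does V_k become visible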
Note: A policy synthesized from values of depth k (which reflect the next k rewards) will actually reflect the next k+1 rewards (i.e. you return πk+1). Similarly, the Q-values will also reflect one more reward than the values (i.e. you return Qk+1).
You should return the synthesized policy πk+1.
Hint: Use the util.Counter class in util.py, which is a dictionary with a default value of zero. Methods such as totalCount should simplify your code. However, be careful with argMax: the actual argmax you want may be a key not in the counter!
Note: Make sure to handle the case when a state has no available actions in an MDP (think about what this means
for future rewards).
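As a rough guide (a sketch that assumes the MDP exposes getTransitionStatesAndProbs, getReward, and getPossibleActions, as the provided Gridworld MDP does), the two methods might be structured along these lines:

    def computeQValueFromValues(self, state, action):
        # Q(s, a) = sum over s' of T(s, a, s') * [ R(s, a, s') + discount * V(s') ]
        return sum(prob * (self.mdp.getReward(state, action, nextState)
                           + self.discount * self.values[nextState])
                   for nextState, prob in self.mdp.getTransitionStatesAndProbs(state, action))

    def computeActionFromValues(self, state):
        # Best action under the current value function; None for states with no actions.
        actions = self.mdp.getPossibleActions(state)
        if not actions:
            return None
        return max(actions, key=lambda a: self.computeQValueFromValues(state, a))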
To test your implementation, run the autograder:
python autograder.py -q q1
The following command loads your ValueIterationAgent, which will compute a policy and execute it 10 times. Press a key to cycle through values, Q-values, and the simulation. You should find that the value of the start state (V(start), which you can read off of the GUI) and the empirical resulting average reward (printed after the 10 rounds of execution finish) are quite close.
python gridworld.py -a value -i 100 -k 10
Hint: On the default BookGrid, running value iteration for 5 iterations should give you this output:
python gridworld.py -a value -i 5
Grading: Your value iteration agent will be graded on a new grid. We will check your values, Q-values, and policies
after fixed numbers of iterations and at convergence (e.g. after 100 iterations).
Question 2 (1 point): Bridge Crossing Analysis
BridgeGrid is a grid world map with a low-reward terminal state and a high-reward terminal state separated by a narrow "bridge", on either side of which is a chasm of high negative reward. The agent starts near the low-reward state. With the default discount of 0.9 and the default noise of 0.2, the optimal policy does not cross the bridge. Change only ONE of the discount and noise parameters so that the optimal policy causes the agent to attempt to cross the bridge. Put your answer in question2() of analysis.py. (Noise refers to how often an agent ends up in an unintended successor state when they perform an action.) The default corresponds to:
python gridworld.py -a value -i 100 -g BridgeGrid --discount 0.9 --noise 0.2
Grading: We will check that you only changed one of the given parameters, and that with this change, a correct
value iteration agent should cross the bridge. To check your answer, run the autograder:
python autograder.py -q q2
Question 3 (5 points): Policies
Consider the DiscountGrid layout, shown below. This grid has two terminal states with positive payoff (in the middle row), a close exit with payoff +1 and a distant exit with payoff +10. The bottom row of the grid consists of terminal states with negative payoff (shown in red); each state in this "cliff" region has payoff -10. The starting state is the yellow square. We distinguish between two types of paths: (1) paths that "risk the cliff" and travel near the bottom row of the grid; these paths are shorter but risk earning a large negative payoff, and are represented by the red arrow in the figure below. (2) paths that "avoid the cliff" and travel along the top edge of the grid. These paths are longer but are less likely to incur huge negative payoffs. These paths are represented by the green arrow in the figure below.
In this question, you will choose settings of the discount, noise, and living reward parameters for this MDP to produce optimal policies of several different types. Your setting of the parameter values for each part should have the property that, if your agent followed its optimal policy without being subject to any noise, it would exhibit the given behavior. If a particular behavior is not achieved for any setting of the parameters, assert that the policy is impossible by returning the string 'NOT POSSIBLE'.
Here are the optimal policy types you should attempt to produce:
• Prefer the close exit (+1), risking the cliff (-10)
• Prefer the close exit (+1), but avoiding the cliff (-10)
• Prefer the distant exit (+10), risking the cliff (-10)
• Prefer the distant exit (+10), avoiding the cliff (-10)
• Avoid both exits and the cliff (so an episode should never terminate)
To check your answers, run the autograder:
python autograder.py -q q3
question3a() through question3e() should each return a 3-item tuple of (discount, noise, living reward) in analysis.py.
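For reference, each of these functions simply returns the three numbers (or the string 'NOT POSSIBLE'). The following is only a sketch of the expected shape; the values shown are placeholders (the defaults), not an answer:

    def question3a():
        # Placeholder values only -- replace with settings that produce the
        # desired policy, or return 'NOT POSSIBLE' if no setting works.
        answerDiscount = 0.9
        answerNoise = 0.2
        answerLivingReward = 0.0
        return answerDiscount, answerNoise, answerLivingReward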
Note: You can check your policies in the GUI. For example, using a correct answer to 3(a), the arrow in (0,1) should
point east, the arrow in (1,1) should also point east, and the arrow in (2,1) should point north.
Note: On some machines you may not see an arrow. In this case, press a button on the keyboard to switch to qValue
display, and mentally calculate the policy by taking the arg max of the available qValues for each state.
Grading: We will check that the desired policy is returned in each case.
Question 4 (5 points): Q-Learning
You will now write a Q-learning agent, which does very little on construction, but instead learns by trial and error from interactions with the environment through its update(state, action, nextState, reward) method. A stub of a Q-learner is specified in QLearningAgent in qlearningAgents.py, and you can select it with the option '-a q'. For this question, you must implement the update, computeValueFromQValues, getQValue, and computeActionFromQValues methods.
Note: For computeActionFromQValues, you should break ties randomly for better behavior. The random.choice()
function will help. In a particular state, actions that your agent hasn’t seen before still have a Q-value, specifically
a Q-value of zero, and if all of the actions that your agent has seen before have a negative Q-value, an unseen action
may be optimal.
Important: Make sure that in your computeValueFromQValues and computeActionFromQValues functions, you only
access Q values by calling getQValue . This abstraction will be useful for question 8 when you override getQValue
to use features of state-action pairs rather than state-action pairs directly.
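As a hedged sketch of how these four methods could fit together (the self.qValues Counter created in __init__ is our own assumption; the other names come from the provided stubs):

    import random
    import util

    # in __init__:  self.qValues = util.Counter()

    def getQValue(self, state, action):
        # Unseen (state, action) pairs default to 0.0 because util.Counter returns 0 for missing keys.
        return self.qValues[(state, action)]

    def computeValueFromQValues(self, state):
        actions = self.getLegalActions(state)
        if not actions:
            return 0.0  # terminal state
        return max(self.getQValue(state, action) for action in actions)

    def computeActionFromQValues(self, state):
        actions = self.getLegalActions(state)
        if not actions:
            return None
        bestValue = self.computeValueFromQValues(state)
        bestActions = [a for a in actions if self.getQValue(state, a) == bestValue]
        return random.choice(bestActions)  # break ties randomly

    def update(self, state, action, nextState, reward):
        # Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + discount * max_a' Q(s',a'))
        sample = reward + self.discount * self.computeValueFromQValues(nextState)
        self.qValues[(state, action)] = ((1 - self.alpha) * self.getQValue(state, action)
                                         + self.alpha * sample)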
With the Q-learning update in place, you can watch your Q-learner learn under manual control, using the keyboard:
python gridworld.py -a q -k 5 -m
Recall that -k will control the number of episodes your agent gets to learn. Watch how the agent learns about the state it was just in, not the one it moves to, and "leaves learning in its wake." Hint: to help with debugging, you can turn off noise by using the --noise 0.0 parameter (though this obviously makes Q-learning less interesting). If you manually steer Pacman north and then east along the optimal path for four episodes, you should see the following Q-values.
Grading: We will run your Q-learning agent and check that it learns the same Q-values and policy as our reference implementation when each is presented with the same set of examples. To grade your implementation, run the autograder:
python autograder.py -q q4
Question 5 (3 points): Epsilon Greedy
Complete your Q-learning agent by implementing epsilon-greedy action selection in getAction, meaning it chooses
random actions an epsilon fraction of the time, and follows its current best Q-values otherwise. Note that choosing a
random action may result in choosing the best action – that is, you should not choose a random sub-optimal action,
but rather any random legal action.
python gridworld.py -a q -k 100
Your final Q-values should resemble those of your value iteration agent, especially along well-traveled paths. However, your average returns will be lower than the Q-values predict because of the random actions and the initial learning phase.
You can choose an element from a list uniformly at random by calling the random.choice function. You can simulate
a binary variable with probability p of success by using util.flipCoin(p), which returns True with probability p and
False with probability 1-p.
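Putting the pieces together, getAction might look roughly like the sketch below; util.flipCoin and random.choice are used as described above, and the rest is an assumption about structure rather than the required code:

    import random
    import util

    def getAction(self, state):
        legalActions = self.getLegalActions(state)
        if not legalActions:
            return None  # terminal state
        if util.flipCoin(self.epsilon):
            return random.choice(legalActions)  # explore: any random legal action
        return self.computeActionFromQValues(state)  # exploit: current best action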
To test your implementation, run the autograder:
python autograder.py -q q5
With no additional code, you should now be able to run a Q-learning crawler robot:
python crawler.py
If this doesn't work, you've probably written some code too specific to the GridWorld problem and you should make it more general to all MDPs.
This will invoke the crawling robot from class using your Q-learner. Play around with the various learning parameters to see how they affect the agent's policies and actions. Note that the step delay is a parameter of the simulation, whereas the learning rate and epsilon are parameters of your learning algorithm, and the discount factor is a property of the environment.
Question 6 (1 point): Bridge Crossing Revisited
First, train a completely random Q-learner with the default learning rate on the noiseless BridgeGrid for 50 episodes and observe whether it finds the optimal policy.
python gridworld.py -a q -k 50 -n 0 -g BridgeGrid -e 1
Now try the same experiment with an epsilon of 0. Is there an epsilon and a learning rate for which it is highly likely (greater than 99%) that the optimal policy will be learned after 50 iterations? question6() in analysis.py should return EITHER a 2-item tuple of (epsilon, learning rate) OR the string 'NOT POSSIBLE' if there is none. Epsilon is controlled by -e, learning rate by -l.
Note: Your response should not depend on the exact tie-breaking mechanism used to choose actions. This means your answer should be correct even if, for instance, we rotated the entire bridge grid world 90 degrees.
To grade your answer, run the autograder:
python autograder.py -q q6
Question 7 (1 point): Approximate Q-learning I
Time to play some Pacman! Pacman will play games in two phases. In the first phase, training, Pacman will begin
to learn about the values of positions and actions. Because it takes a very long time to learn accurate Q-values
even for tiny grids, Pacman’s training games run in quiet mode by default, with no GUI (or console) display. Once
Pacman’s training is complete, he will enter testing mode. When testing, Pacman’s self.epsilon and self.alpha will be
set to 0.0, effectively stopping Q-learning and disabling exploration, in order to allow Pacman to exploit his learned policy. Test games are shown in the GUI by default. Without any code changes you should be able to run Q-learning Pacman for very tiny grids as follows:
python pacman.py -p PacmanQAgent -x 2000 -n 2010 -l smallGrid
Note that PacmanQAgent is already defined for you in terms of the QLearningAgent you've already written. PacmanQAgent is only different in that it has default learning parameters that are more effective for the Pacman problem (epsilon=0.05, alpha=0.2, gamma=0.8). You will receive full credit for this question if the command above works
without exceptions and your agent wins at least 80% of the time. The autograder will run 100 test games after the
2000 training games.
Hint: If your QLearningAgent works for gridworld.py and crawler.py but does not seem to be learning a good policy for Pacman on smallGrid, it may be because your getAction and/or computeActionFromQValues methods do not in
some cases properly consider unseen actions. In particular, because unseen actions have by definition a Q-value of
zero, if all of the actions that have been seen have negative Q-values, an unseen action may be optimal. Beware of
the argmax function from util.Counter!
To grade your answer, run:
python autograder.py -q q7
Note: If you want to experiment with learning parameters, you can use the option -a, for example -a epsilon=0.1,alpha=0.3,gamma=0.7.
These values will then be accessible as self.epsilon, self.gamma and self.alpha inside the agent.
Note: While a total of 2010 games will be played, the first 2000 games will not be displayed because of the option -x
2000, which designates the first 2000 games for training (no output). Thus, you will only see Pacman play the last
10 of these games. The number of training games is also passed to your agent as the option numTraining.
Note: If you want to watch 10 training games to see what’s going on, use the command:
python pacman.py -p PacmanQAgent -n 10 -l smallGrid -a numTraining=10
During training, you will see output every 100 games with statistics about how Pacman is faring. Epsilon is positive during training, so Pacman will play poorly even after having learned a good policy: this is because he occasionally makes a random exploratory move into a ghost. As a benchmark, it should take between 1,000 and 1,400 games before Pacman's rewards for a 100-episode segment become positive, reflecting that he's started winning more than losing. By the end of training, it should remain positive and be fairly high (between 100 and 350).
Make sure you understand what is happening here: the MDP state is the exact board configuration facing Pacman, with the now complex transitions describing an entire ply of change to that state. The intermediate game configurations in which Pacman has moved but the ghosts have not replied are not MDP states, but are bundled into the transitions.
Once Pacman is done training, he should win very reliably in test games (at least 90% of the time), since now he is exploiting his learned policy.
However, you will find that training the same agent on the seemingly simple mediumGrid does not work well. In our implementation, Pacman's average training rewards remain negative throughout training. At test time, he plays badly, probably losing all of his test games. Training will also take a long time, despite its ineffectiveness.
Pacman fails to win on larger layouts because each board configuration is a separate state with separate Q-values. He has no way to generalize that running into a ghost is bad for all positions. Obviously, this approach will not scale.
Question 8 (3 points): Approximate Q-Learning II
Implement an approximate Q-learning agent that learns weights for features of states, where many states might share the same features. Write your implementation in the ApproximateQAgent class in qlearningAgents.py, which is a subclass of PacmanQAgent.
Note: Approximate Q-learning assumes the existence of a feature function f(s, a) over state and action pairs, which yields a vector f1(s, a), ..., fn(s, a) of feature values. We provide feature functions for you in featureExtractors.py. Feature vectors are util.Counter (like a dictionary) objects containing the non-zero pairs of features and values; all omitted features have value zero.
The approximate Q-function takes the following form:
Q(s, a) = Σ_i fi(s, a) · wi   (sum over i = 1..n)
where each weight wi is associated with a particular feature fi(s, a). In your code, you should implement the weight vector as a dictionary mapping features (which the feature extractors will return) to weight values. You will update your weight vectors similarly to how you updated Q-values:
wi ← wi + α · difference · fi(s, a)
difference = (r + γ · max_a' Q(s', a')) − Q(s, a)
Note that the difference term is the same as in normal Q-learning, and r is the experienced reward.
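As an illustrative, non-authoritative sketch of how getQValue and update could implement the equations above (self.featExtractor follows the provided stub; keeping the weights in a util.Counter created in __init__ is our assumption):

    import util

    # in __init__:  self.weights = util.Counter()

    def getQValue(self, state, action):
        # Q(s, a) = sum_i f_i(s, a) * w_i
        features = self.featExtractor.getFeatures(state, action)
        return sum(self.weights[feature] * value for feature, value in features.items())

    def update(self, state, action, nextState, reward):
        # difference = (r + discount * max_a' Q(s', a')) - Q(s, a)
        difference = (reward + self.discount * self.computeValueFromQValues(nextState)
                      - self.getQValue(state, action))
        features = self.featExtractor.getFeatures(state, action)
        for feature, value in features.items():
            # w_i <- w_i + alpha * difference * f_i(s, a)
            self.weights[feature] += self.alpha * difference * value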
By default, ApproximateQAgent uses the IdentityExtractor, which assigns a single feature to every (state,action)
pair. With this feature extractor, your approximate Q-learning agent should work identically to PacmanQAgent.
You can test this with the following command:
python pacman.py -p ApproximateQAgent -x 2000 -n 2010 -l smallGrid
Important: ApproximateQAgent is a subclass of QLearningAgent, and it therefore shares several methods like getAction. Make sure that your methods in QLearningAgent call getQValue instead of accessing Q-values directly, so that when you override getQValue in your approximate agent, the new approximate Q-values are used to compute actions.
Once you’re confident that your approximate learner works correctly with the identity features, run your approximate
Q-learning agent with our custom feature extractor, which can learn to win with ease:
python pacman.py -p ApproximateQAgent -a extractor=SimpleExtractor -x 50 -n 60 -l mediumGrid
Even much larger layouts should be no problem for your ApproximateQAgent. (warning: this may take a few
minutes to train)
python pacman.py -p ApproximateQAgent -a extractor=SimpleExtractor -x 50 -n 60 -l mediumClassic
If you have no errors, your approximate Q-learning agent should win almost every time with these simple features,
even with only 50 training games.
Grading: We will run your approximate Q-learning agent and check that it learns the same Q-values and feature
weights as our reference implementation when each is presented with the same set of examples. To grade your
implementation, run the autograder:
python autograder.py -q q8
Congratulations! You have a learning Pacman agent!
Please submit ONLY valueIterationAgents.py, qlearningAgents.py, and analysis.py in ONE zip file to Blackboard. Please zip the files directly; DO NOT put them in a folder before zipping.
The file name should be your ASUID; for instance, if your ASUID is 1234567890, the file name should be 1234567890.zip. We do not accept 7zip and rar formats; only .zip is accepted. Failure to use the correct file format may result in some penalty.