
Windy Grid World

This assignment is to use reinforcement learning to solve the "Windy Grid World" problem illustrated in the picture above. Each cell in the image is a state. There are four actions: move up, down, left, and right. This is a deterministic domain: each action deterministically moves the agent one cell in the direction indicated. If the agent is on the boundary of the world and executes an action that would move it "off" the world, it remains on the grid in the same cell from which it executed the action.
Notice that there are arrows drawn in some states in the diagram. These are the "windy" states. In these states, the agent experiences an extra "push" upward. For example, if the agent is in a windy state and executes an action to the left or right, the result of the action is to move left or right (respectively) but also to move one cell upward. As a result, the agent moves diagonally upward to the left or right.
This is an episodic task where each episode lasts no more than 30 time steps. At the beginning of each episode, the agent is placed in the "Start" state. Reward in this domain is zero everywhere except when the agent is in the goal state (labeled "goal" in the diagram). The agent receives a reward of positive ten when it executes any action *from* the goal state. The episode ends after 30 time steps or when the agent takes any action after having landed in the goal state.
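
To make these dynamics concrete, here is a minimal Python/NumPy sketch of the environment. Since the picture is not reproduced here, the grid size, windy columns, start, and goal below are placeholder assumptions borrowed from the classic Sutton and Barto layout; substitute the values shown in your diagram.

    import numpy as np

    # Placeholder geometry: the picture is not reproduced here, so the grid
    # size, windy columns, start, and goal are assumptions. Substitute the
    # values from the assignment's diagram.
    N_ROWS, N_COLS = 7, 10            # hypothetical grid dimensions
    WINDY_COLS = {3, 4, 5, 6, 7, 8}   # hypothetical: windy states assumed to form columns
    START = (3, 0)                    # hypothetical start cell (row, col)
    GOAL = (3, 7)                     # hypothetical goal cell (row, col)

    # Actions as (row, col) offsets; row 0 is the top row.
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

    def step(state, action):
        """One deterministic transition: move, apply wind, clip to the grid.

        Returns (next_state, reward, done). Any action taken from the goal
        state yields reward +10 and ends the episode; all other rewards are 0.
        """
        if state == GOAL:
            return state, 10.0, True
        dr, dc = ACTIONS[action]
        r, c = state[0] + dr, state[1] + dc
        if state[1] in WINDY_COLS:    # wind pushes one extra cell upward
            r -= 1
        # Actions that would leave the grid keep the agent on the boundary.
        r = min(max(r, 0), N_ROWS - 1)
        c = min(max(c, 0), N_COLS - 1)
        return (r, c), 0.0, False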
You should solve the problem using Q-learning. Use epsilon-greedy exploration with epsilon = 0.1 (the agent takes a random action 10 percent of the time in order to explore). Use a learning rate of 0.1 and a discount rate of 0.9.
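
A tabular Q-learning loop with these parameters might look like the following sketch. It reuses step and the constants from the environment sketch above; the per-step update is Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), and the episode count of 500 is an arbitrary choice, not part of the assignment.

    def epsilon_greedy(Q, state, epsilon=0.1):
        """Random action with probability epsilon, otherwise greedy w.r.t. Q."""
        if np.random.rand() < epsilon:
            return np.random.randint(len(ACTIONS))
        return int(np.argmax(Q[state]))

    def run_episode(Q, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Run one episode of at most 30 steps, updating Q in place.

        Per-step update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        Returns the total reward collected during the episode.
        """
        state, total = START, 0.0
        for _ in range(30):
            a = epsilon_greedy(Q, state, epsilon)
            nxt, r, done = step(state, a)
            target = r + (0.0 if done else gamma * Q[nxt].max())
            Q[state][a] += alpha * (target - Q[state][a])
            total += r
            state = nxt
            if done:
                break
        return total

    Q = np.full((N_ROWS, N_COLS, len(ACTIONS)), 10.0)   # uniform init of 10 (see Updates)
    returns = [run_episode(Q) for _ in range(500)]      # 500 episodes: an arbitrary choice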
The programming should be done in MATLAB. Students may get access to MATLAB here. Alternatively, students may code in Python (using NumPy). If a student would rather code in a different language, please see Dr. Platt or the TA.
Students should submit their homework via email to the TA ([email protected]) in the form of a ZIP file that includes the following:
1. A PDF of a plot of the gridworld that illustrates the policy and a path found by Q-learning after it has approximately converged. The policy plot should identify the action taken by the policy in each state, and the path should begin in the start state and follow the policy to the goal state (a plotting sketch follows this list).
2. A PDF of a plot of reward per episode. It should look like Figure 6.13 in Sutton and Barto (SB).
3. A text file showing output from a sample run of your code.
4. A directory containing all source code for your project.
5. A short readme file enumerating the important files in your submission.
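
For deliverables 1 and 2, a sketch along the following lines would work in Python with matplotlib (MATLAB's quiver and plot functions are the direct analogues). It reuses Q, returns, and the helpers from the sketches above; the rendering details and output filenames are illustrative, not prescribed.

    import matplotlib.pyplot as plt

    def greedy_path(Q, max_steps=30):
        """Follow the greedy policy from START; return the visited cells."""
        path, state = [START], START
        for _ in range(max_steps):
            if state == GOAL:
                break
            state, _, _ = step(state, int(np.argmax(Q[state])))
            path.append(state)
        return path

    def plot_results(Q, returns):
        # Deliverable 1: one arrow per state for the greedy policy, plus the path.
        greedy = Q.argmax(axis=2)                 # best action in each cell
        U = np.choose(greedy, [0, 0, -1, 1])      # arrow x-components per action
        V = np.choose(greedy, [-1, 1, 0, 0])      # arrow y-components (y = row)
        rows, cols = np.mgrid[0:N_ROWS, 0:N_COLS]
        plt.figure()
        plt.quiver(cols, rows, U, V)
        path = np.array(greedy_path(Q))
        plt.plot(path[:, 1], path[:, 0], 'r-o')   # path from start to goal
        plt.gca().invert_yaxis()                  # draw row 0 at the top
        plt.savefig('policy.pdf')

        # Deliverable 2: reward per episode, as in SB Figure 6.13.
        plt.figure()
        plt.plot(returns)
        plt.xlabel('Episode')
        plt.ylabel('Reward per episode')
        plt.savefig('reward_per_episode.pdf')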
Updates
You can initialize the Q function randomly, or you can initialize it to a uniform value of 10, i.e., so that each value in the table is equal to 10.
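
In NumPy, either initialization is a one-liner (reusing the constants from the sketches above):

    # Pick one of the two initializations:
    Q = np.random.rand(N_ROWS, N_COLS, len(ACTIONS))    # random values in [0, 1)
    Q = np.full((N_ROWS, N_COLS, len(ACTIONS)), 10.0)   # every entry equal to 10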
There have been questions about how to know when the algorithm has converged. The algorithm has converged when the value function has stopped changing significantly and the policy has stopped changing completely. Since we are using Q-learning, the algorithm should converge to a single optimal policy.
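
One practical check, reusing run_episode from the sketch above, is to keep training until the greedy policy is unchanged and the largest Q-value change stays below a small tolerance for a stretch of consecutive episodes. The tolerance and patience values below are illustrative, not part of the assignment.

    def train_until_converged(tol=1e-4, patience=50, max_episodes=20000):
        """Train until max |dQ| < tol and the greedy policy is unchanged
        for `patience` consecutive episodes."""
        Q = np.full((N_ROWS, N_COLS, len(ACTIONS)), 10.0)
        stable = 0
        for _ in range(max_episodes):
            old_Q, old_pi = Q.copy(), Q.argmax(axis=2)
            run_episode(Q)
            if (np.abs(Q - old_Q).max() < tol
                    and np.array_equal(old_pi, Q.argmax(axis=2))):
                stable += 1
                if stable >= patience:
                    break
            else:
                stable = 0
        return Q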

