
Assignment Help:
Windy Grid World

This assignment is to use reinforcement learning to solve the "Windy Grid World" problem illustrated in the assignment figure (not reproduced on this page). Each cell in the grid is a state. There are four actions: move up, down, left, and right. This is a deterministic domain: each action deterministically moves the agent one cell in the direction indicated. If the agent is on the boundary of the world and executes an action that would move it "off" the grid, it remains in the same cell from which it executed the action.
Notice that arrows are drawn in some states in the diagram. These are the "windy" states. In these states, the agent experiences an extra "push" upward. For example, if the agent is in a windy state and executes an action to the left or right, the result is to move left or right (respectively) and also one cell upward, so the agent moves diagonally upward to the left or right.
This is an episodic task in which each episode lasts no more than 30 time steps. At the beginning of each episode, the agent is placed in the "Start" state. Reward in this domain is zero everywhere except at the goal state (labeled "Goal" in the diagram): the agent receives a reward of +10 when it executes any action *from* the goal state. The episode ends after 30 time steps or when the agent takes any action after having landed in the goal state.
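To make these dynamics concrete, here is a minimal environment sketch in Python with NumPy (one of the two languages permitted below). Because the assignment figure is not reproduced here, the grid size, per-column wind strengths, start, and goal are assumptions borrowed from the standard windy gridworld in Sutton and Barto; substitute the values from your own diagram.

```python
import numpy as np

# Assumed layout (from Sutton & Barto's windy gridworld) -- replace with
# the dimensions, wind strengths, Start, and Goal from the assignment figure.
N_ROWS, N_COLS = 7, 10
WIND = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0]         # upward push per column
START, GOAL = (3, 0), (3, 7)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right (row 0 = top)

def step(state, action):
    """Apply one action; returns (next_state, reward)."""
    r, c = state
    # +10 for executing any action *from* the goal state, else 0.
    reward = 10.0 if state == GOAL else 0.0
    dr, dc = ACTIONS[action]
    # Deterministic move plus the wind's upward push in the current column;
    # clipping keeps the agent on the grid when it would step "off" the world.
    nr = min(max(r + dr - WIND[c], 0), N_ROWS - 1)
    nc = min(max(c + dc, 0), N_COLS - 1)
    return (nr, nc), reward
```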
You should solve the problem using Q-learning. Use ε-greedy exploration with epsilon = 0.1 (the agent takes a random action 10 percent of the time in order to explore). Use a learning rate of 0.1 and a discount rate of 0.9.
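With those parameters, a minimal tabular Q-learning loop might look like the following, assuming the step() sketch above; the episode count and random seed are arbitrary choices, not part of the assignment.

```python
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
MAX_STEPS = 30                          # episodes last at most 30 time steps
rng = np.random.default_rng(0)
Q = np.zeros((N_ROWS, N_COLS, len(ACTIONS)))   # see the Updates section on init

def epsilon_greedy(state):
    if rng.random() < EPSILON:                 # explore 10% of the time
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[state]))            # otherwise act greedily

rewards_per_episode = []
for episode in range(500):
    state, total = START, 0.0
    for t in range(MAX_STEPS):
        action = epsilon_greedy(state)
        next_state, reward = step(state, action)
        done = (state == GOAL)          # acting from the goal ends the episode
        target = reward if done else reward + GAMMA * np.max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        total += reward
        state = next_state
        if done:
            break
    rewards_per_episode.append(total)
```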
The programming should be done in MATLAB. Students may get access to MATLAB here. Alternatively, students may code in Python (using NumPy). If you would rather code in a different language, please see Dr. Platt or the TA.
Students should submit their homework via email to the TA ([email protected]) in the form of a ZIP file that includes the following:
1. A PDF of a plot of the gridworld that illustrates the policy and a path found by Q-learning after it has approximately converged. The policy plot should identify the action taken by the policy in each state. The path should begin in the start state and follow the policy to the goal state.
2. A PDF of a plot of reward per episode. It should look like the diagram in Figure 6.13 of Sutton and Barto (SB). (A minimal plotting sketch for both plots follows this list.)
3. A text file showing output from a sample run of your code.
4. A directory containing all source code for your project.
5. A short readme file enumerating the important files in your submission.
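For deliverables 1 and 2, a sketch like the following could produce the two PDFs, assuming matplotlib and the Q, step(), START, and rewards_per_episode names from the sketches above; the output file names are placeholders.

```python
import matplotlib.pyplot as plt

# Deliverable 2: reward per episode.
plt.figure()
plt.plot(rewards_per_episode)
plt.xlabel("Episode")
plt.ylabel("Reward per episode")
plt.savefig("reward_per_episode.pdf")

# Deliverable 1: greedy policy (one arrow per state) plus the greedy path.
arrows = {0: "^", 1: "v", 2: "<", 3: ">"}      # up, down, left, right
plt.figure()
for r in range(N_ROWS):
    for c in range(N_COLS):
        plt.text(c, N_ROWS - 1 - r, arrows[int(np.argmax(Q[r, c]))],
                 ha="center", va="center")      # flip rows so row 0 plots on top
state = START
for _ in range(MAX_STEPS):                      # follow the policy from Start
    if state == GOAL:
        break
    nxt, _ = step(state, int(np.argmax(Q[state])))
    plt.plot([state[1], nxt[1]],
             [N_ROWS - 1 - state[0], N_ROWS - 1 - nxt[0]], "r-")
    state = nxt
plt.xlim(-1, N_COLS)
plt.ylim(-1, N_ROWS)
plt.savefig("policy_and_path.pdf")
```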
Updates
You can initialize the Q function randomly, or you can initialize it to a uniform value of 10; that is, initialize Q so that every entry in the table equals 10.
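In NumPy terms, assuming the Q-table shape and rng from the sketches above, the two options look like this:

```python
Q = rng.random((N_ROWS, N_COLS, len(ACTIONS)))      # random initialization
Q = 10.0 * np.ones((N_ROWS, N_COLS, len(ACTIONS)))  # uniform value of 10
```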
There have been questions about how to know when the algorithm has converged. The algorithm has converged when the value function has stopped changing significantly and the policy has stopped changing completely. Since we are using Q-learning, the algorithm should converge to a single optimal policy.
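One possible check, not mandated by the assignment: compare the greedy policy and the Q values between training checkpoints (the 1e-3 threshold below is an arbitrary choice).

```python
prev_policy = np.argmax(Q, axis=2)
prev_Q = Q.copy()
# ... run a further batch of training episodes here ...
policy = np.argmax(Q, axis=2)
converged = (np.array_equal(policy, prev_policy)      # policy unchanged
             and np.max(np.abs(Q - prev_Q)) < 1e-3)   # values nearly unchanged
```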

