Page-table lookups, Operating System

Assignment Help:

How exactly is a page table used to look up an address?

The CPU has a page-table base register (PTBR), which points to the base (entry 0) of the level-0 page table. Each process has its own page table, so on a context switch the PTBR is updated along with the other context registers. The PTBR contains a physical address, not a virtual address. When the MMU receives a virtual address that it needs to translate to a physical address, it uses the PTBR to find the level-0 page table. It then uses the level-0 index, taken from the most-significant bits (MSBs) of the virtual address, to find the appropriate table entry, which contains a pointer to the base address of the appropriate level-1 page table. From that base address, it uses the level-1 index to find the appropriate entry. In a 2-level page table, the level-1 entry is a PTE and points to the physical page itself. In a 3-level (or higher) page table there are correspondingly more steps: each additional level adds one more table lookup before the PTE is reached.
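To make the walk concrete, here is a minimal sketch in C of the two-level case, assuming a 32-bit virtual address split into a 10-bit level-0 index, a 10-bit level-1 index and a 12-bit page offset (4 KiB pages). The PTE_PRESENT bit and the phys_read() helper are illustrative stand-ins, not the entry format of any real MMU.

    /* Two-level page-table walk (sketch).  Address split, PTE_PRESENT bit
     * and phys_read() are illustrative assumptions, not a real MMU layout. */
    #include <stdint.h>
    #include <stdbool.h>

    #define PTE_PRESENT 0x1u                 /* "entry is valid" bit */

    uint32_t ptbr;                           /* physical base of the level-0 table */
    extern uint32_t phys_read(uint32_t pa);  /* hypothetical: read a word of physical memory */

    bool translate(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t l0_index = (vaddr >> 22) & 0x3FFu;  /* top 10 bits  */
        uint32_t l1_index = (vaddr >> 12) & 0x3FFu;  /* next 10 bits */
        uint32_t offset   =  vaddr        & 0xFFFu;  /* low 12 bits  */

        /* Level 0: the entry points to the base of a level-1 table. */
        uint32_t l0_entry = phys_read(ptbr + l0_index * 4);
        if (!(l0_entry & PTE_PRESENT))
            return false;                            /* page fault */

        /* Level 1: the entry is the PTE, holding the physical frame base. */
        uint32_t l1_base = l0_entry & ~0xFFFu;
        uint32_t pte     = phys_read(l1_base + l1_index * 4);
        if (!(pte & PTE_PRESENT))
            return false;                            /* page fault */

        *paddr = (pte & ~0xFFFu) | offset;           /* frame base + page offset */
        return true;
    }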

This sounds pretty slow: N page-table lookups for every memory access. But is it necessarily slow? A special cache called a TLB (translation lookaside buffer) caches the PTEs from recent lookups, so if a page's PTE is already in the TLB, the MMU can skip the multi-level page-table walk for that access entirely.
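A minimal sketch of that idea, continuing the walk above: check a small fully associative TLB first and fall back to translate() only on a miss. The TLB size and the replacement choice here are illustrative; real hardware compares all entries in parallel.

    /* Small fully associative TLB in front of the walk above (sketch). */
    #define TLB_ENTRIES 64

    struct tlb_entry {
        uint32_t vpn;        /* virtual page number   */
        uint32_t pfn;        /* physical frame number */
        bool     valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    bool lookup(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn = vaddr >> 12;

        /* Hit: one comparison per entry (done in parallel in hardware). */
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *paddr = (tlb[i].pfn << 12) | (vaddr & 0xFFFu);
                return true;
            }
        }

        /* Miss: do the full multi-level walk, then cache the mapping. */
        if (!translate(vaddr, paddr))
            return false;                        /* page fault */

        struct tlb_entry e = { .vpn = vpn, .pfn = *paddr >> 12, .valid = true };
        tlb[vpn % TLB_ENTRIES] = e;              /* crude replacement policy */
        return true;
    }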

When the scheduler switches processes, all the TLB entries are invalidated (a TLB flush; invalidating entries held by other processors is called TLB shootdown). The new process therefore starts with a "cold cache" for its TLB, and it takes a while for the TLB to "warm up". The scheduler should consequently not switch between processes too frequently, since a "warm" TLB is critical to making memory accesses fast. This is one reason that threads are so useful: switching threads within a process does not require the TLB to be invalidated, so the new thread starts with a "warm" TLB right away.

So what are the drawbacks of TLBs? The main drawback is that they need to be extremely fast, fully associative caches. TLBs are therefore very expensive in terms of power consumption and chip real estate, and additional chip real estate drives up the price dramatically. The TLB can account for a significant fraction of the total power consumed by a microprocessor, on the order of 10% or more. TLBs are therefore kept relatively small; typical sizes are between 8 and 2048 entries.
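To make the process-versus-thread switching cost concrete, here is a minimal sketch continuing the example above: a process switch reloads the PTBR and flushes the (software) TLB, while a thread switch within the same process leaves both intact. The process and thread structures are illustrative, not taken from any particular kernel.

    /* Context switch (sketch): only a change of address space costs a flush. */
    struct process { uint32_t page_table_base; /* ... rest of the context ... */ };
    struct thread  { struct process *owner;    /* ... saved registers ...     */ };

    static struct process *current_process;

    static void flush_tlb(void)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)
            tlb[i].valid = false;                /* cold cache until it re-warms */
    }

    void switch_to(struct thread *next)
    {
        if (next->owner != current_process) {    /* crossing address spaces */
            current_process = next->owner;
            ptbr = next->owner->page_table_base; /* point at the new level-0 table */
            flush_tlb();                         /* new process starts cold */
        }
        /* Same process: PTBR and TLB entries remain valid, so the thread
         * starts with a warm TLB.  Either way, restore next's registers. */
    }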

