Page-table lookups, Operating System

Assignment Help:

How exactly is a page table used to look up an address?

The CPU has a page table base register (PTBR) which points to the base (entry 0) of the level-0 page table. Each process has its own page table, so on a context switch the PTBR is updated along with the other context registers. The PTBR contains a physical address, not a virtual address. When the MMU receives a virtual address that it needs to translate to a physical address, it uses the PTBR to locate the level-0 page table. It then uses the level-0 index, taken from the most-significant bits (MSBs) of the virtual address, to find the appropriate table entry, which contains a pointer to the base address of the appropriate level-1 page table. From that base address, it uses the level-1 index to find the appropriate entry. In a 2-level page table, the level-1 entry is a PTE and points to the physical page itself. In a 3-level (or higher) page table, there would be more steps, one per additional level.
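To make the walk concrete, here is a minimal sketch in C (not from the original text). It assumes a 32-bit virtual address split into a 10-bit level-0 index, a 10-bit level-1 index and a 12-bit page offset, and uses illustrative names such as pte_t and translate(); a real MMU does this walk in hardware on physical addresses.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed layout: a 32-bit virtual address split as
     * 10-bit level-0 index | 10-bit level-1 index | 12-bit page offset. */
    #define L0_INDEX(va)    (((va) >> 22) & 0x3FF)
    #define L1_INDEX(va)    (((va) >> 12) & 0x3FF)
    #define PAGE_OFFSET(va) ((va) & 0xFFF)

    typedef struct {
        uint32_t frame;   /* physical frame number */
        bool     valid;   /* is this mapping present? */
    } pte_t;

    /* ptbr points at the level-0 table (here an array of pointers to level-1
     * tables); in real hardware it holds a physical address the MMU follows. */
    bool translate(pte_t **ptbr, uint32_t va, uint32_t *pa)
    {
        pte_t *l1 = ptbr[L0_INDEX(va)];   /* step 1: level-0 entry -> level-1 table */
        if (l1 == NULL)
            return false;                 /* no level-1 table mapped: page fault    */

        pte_t pte = l1[L1_INDEX(va)];     /* step 2: level-1 entry is the PTE       */
        if (!pte.valid)
            return false;                 /* page not present: page fault           */

        *pa = (pte.frame << 12) | PAGE_OFFSET(va);   /* step 3: frame + offset      */
        return true;
    }

Each additional page-table level would simply add one more indexing step before the final PTE is reached.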

This sounds pretty slow: N page-table lookups for every memory access. But is it necessarily slow? A special cache called a TLB (translation lookaside buffer) caches the PTEs from recent lookups, so if a page's PTE is in the TLB, the MMU can translate the address from the cached entry and skip the page-table walk entirely; a multi-level page table then costs no more per access than a single-level one.
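A rough software analogue of that behaviour is sketched below, with assumed names and an assumed 64-entry table: check the TLB first, and only fall back to the full walk (stubbed here as translate_slow()) on a miss.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 64   /* assumed size; real TLBs have roughly 8 to 2048 entries */

    typedef struct {
        uint32_t vpn;     /* virtual page number   */
        uint32_t frame;   /* physical frame number */
        bool     valid;
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Stands in for the multi-level walk sketched above; stubbed so this
     * fragment is self-contained. */
    static bool translate_slow(uint32_t va, uint32_t *pa) { (void)va; (void)pa; return false; }

    bool translate_with_tlb(uint32_t va, uint32_t *pa)
    {
        uint32_t vpn = va >> 12;

        /* Hit: a real fully associative TLB compares all entries in parallel in
         * hardware; this loop is just the software analogue of that comparison. */
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pa = (tlb[i].frame << 12) | (va & 0xFFF);
                return true;                          /* no page-table walk needed */
            }
        }

        /* Miss: do the full multi-level walk, then cache the result. */
        if (!translate_slow(va, pa))
            return false;                             /* page fault */

        tlb[vpn % TLB_ENTRIES] = (tlb_entry_t){ .vpn = vpn, .frame = *pa >> 12, .valid = true };
        return true;
    }

The replacement policy here (indexing by VPN) is deliberately trivial; the point is only that a hit avoids every memory access of the walk.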

When the scheduler switches processes, it invalidates all the TLB entries (also known as a TLB shootdown). The new process then starts with a "cold cache" for its TLB, and it takes a while for the TLB to "warm up". The scheduler therefore should not switch too frequently between processes, since a "warm" TLB is critical to making memory accesses fast. This is one reason that threads are so useful: switching threads within a process does not require the TLB to be invalidated, so a new thread within the same process starts up with a "warm" TLB cache right away.

So what are the drawbacks of TLBs? The main drawback is that they need to be extremely fast, fully associative caches. TLBs are therefore very expensive in terms of power consumption and chip real estate, and extra chip real estate drives up price dramatically. The TLB can account for a significant fraction of the total power consumed by a microprocessor, on the order of 10% or more. TLBs are therefore kept relatively small; typical sizes are between 8 and 2048 entries.
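To illustrate why thread switches are cheaper, here is a hypothetical context-switch sketch (illustrative names, stubbed hardware hooks): the PTBR is reloaded and the TLB flushed only when the address space actually changes, which it does not when switching between threads of the same process.

    #include <stdio.h>

    /* Hypothetical, simplified structures: each process has one address space
     * (page tables + PTBR value); threads of the same process share it. */
    struct address_space { int id; /* page tables, PTBR value, ... */ };
    struct task { struct address_space *mm; /* registers, stack, ... */ };

    /* Stubbed hardware hooks so the sketch is self-contained. */
    static void hw_load_ptbr(struct address_space *mm) { printf("load PTBR for address space %d\n", mm->id); }
    static void hw_flush_tlb(void) { printf("flush TLB\n"); }

    /* Switching between threads of the same process leaves mm unchanged, so the
     * PTBR reload and TLB flush are skipped and the TLB stays "warm". */
    void context_switch(struct task *prev, struct task *next)
    {
        if (prev->mm != next->mm) {
            hw_load_ptbr(next->mm);   /* new page tables take effect       */
            hw_flush_tlb();           /* old translations must not be used */
        }
        /* ...save and restore registers, switch kernel stacks, etc... */
    }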


