Virtual addresses are made up of two parts: the first part is the page number, and the second part is an offset inside that page. Suppose our pages are 4 KB (4096 = 2^12 bytes) long, and that our machine uses 32-bit addresses. Then we can have at most 2^32 addressable bytes of memory; therefore, we can fit at most 2^32 / 2^12 = 2^20 pages. This means that we need 20 bits to address any page. So, the page number in the virtual address is stored in 20 bits, and the offset is stored in the remaining 12 bits.
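To make this split concrete, the following C sketch extracts the page number and offset from a 32-bit virtual address, assuming the 4 KB pages described above; the constant names and the example address are illustrative, not taken from any particular system.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT  12                    /* 4 KB pages: 2^12 bytes        */
    #define PAGE_SIZE   (1u << PAGE_SHIFT)    /* 4096                          */
    #define OFFSET_MASK (PAGE_SIZE - 1)       /* selects the low 12 bits       */

    int main(void) {
        uint32_t va = 0x12345678u;                 /* an arbitrary 32-bit virtual address */
        uint32_t page_number = va >> PAGE_SHIFT;   /* upper 20 bits: which page           */
        uint32_t offset      = va & OFFSET_MASK;   /* lower 12 bits: byte within the page */
        printf("VA 0x%08x -> page 0x%05x, offset 0x%03x\n",
               (unsigned)va, (unsigned)page_number, (unsigned)offset);
        return 0;
    }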
Now suppose that we have one such page table per process. A page table with 2^20 entries, each entry taking, say, 4 bytes, would require 4 MB of memory! This is somewhat disturbing: a machine running 80 processes would need more than 300 MB just to store page tables! The solution to this dilemma is to use multi-level page tables. This approach allows page tables to point to other page tables, and so on. Consider a two-level system. In this case, each virtual address can be divided into a level-0 page table index (10 bits), a level-1 page table index (10 bits), and an offset (12 bits). Then, if we read the 10 most significant bits of a virtual address, we obtain an entry index into the level-0 page table; if we follow the pointer given by that entry, we get a pointer to a level-1 page table. The entry to be accessed in this page table is given by the next 10 bits of the virtual address.
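The sketch below shows how the three fields could be pulled out of a virtual address under the two-level split just described (10 + 10 + 12 bits); the macro names are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical field extractors for the two-level split described above:
     * bits 31..22 -> level-0 index, bits 21..12 -> level-1 index, bits 11..0 -> offset. */
    #define L0_INDEX(va)    (((va) >> 22) & 0x3FFu)   /* top 10 bits: level-0 table entry  */
    #define L1_INDEX(va)    (((va) >> 12) & 0x3FFu)   /* next 10 bits: level-1 table entry */
    #define PAGE_OFFSET(va) ((va) & 0xFFFu)           /* last 12 bits: offset in the page  */

    int main(void) {
        uint32_t va = 0xCAFEBABEu;   /* arbitrary example address */
        printf("L0 index %u, L1 index %u, offset 0x%03x\n",
               (unsigned)L0_INDEX(va), (unsigned)L1_INDEX(va), (unsigned)PAGE_OFFSET(va));
        return 0;
    }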
We can again follow the pointer specified in that level-1 page table entry, and finally arrive at a physical page. The last 12 bits of the virtual address give us the offset within that physical page. A drawback of this hierarchical approach is that every load or store instruction now requires several indirections, which of course makes everything slower. One way to mitigate this problem is to use a Translation Lookaside Buffer (TLB): the TLB is a fast, fully associative memory that caches page table entries. Typically, TLBs cache from 8 to 2048 page table entries.
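As a rough illustration of how a TLB short-circuits the page table walk, here is a small C sketch of a fully associative lookup keyed by virtual page number; the entry count, structure layout, and function name are assumptions for illustration, not a description of real hardware.

    #include <stdint.h>
    #include <stddef.h>

    #define TLB_ENTRIES 64u        /* illustrative size; the text cites 8 to 2048 entries */

    struct tlb_entry {
        uint32_t vpn;              /* virtual page number (the upper 20 bits of the VA)   */
        uint32_t frame;            /* physical frame base that the VPN maps to            */
        int      valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Fully associative lookup: the VPN is compared against every entry.
     * Hardware performs all comparisons in parallel; this loop only models it. */
    static int tlb_lookup(uint32_t vpn, uint32_t *frame_out) {
        for (size_t i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *frame_out = tlb[i].frame;
                return 1;          /* hit: translation available without any walk         */
            }
        }
        return 0;                  /* miss: the multi-level page table must be walked,
                                      and the resulting entry is then cached here         */
    }

Because the TLB is fully associative, any virtual page can be cached in any entry, which is why even a small number of entries can capture most translations for a running program.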