The question of fairness in page eviction is a hard one. How do we decide what is fair? Many operating systems use global LRU, in which pages from all processes are managed together using the approximate LRU algorithms described above. This is easy to implement, but it has a number of problems.
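To make the "managed together" point concrete, here is a minimal sketch of global replacement using the clock (second-chance) approximation of LRU. The frame structure, array size, and names are hypothetical; the key point is that the clock hand sweeps all physical frames and never consults which process owns a page.

```c
#include <stddef.h>
#include <stdbool.h>

struct frame {
    int  owner_pid;   /* which process the page belongs to -- never consulted */
    bool referenced;  /* hardware "accessed" bit, cleared as the hand passes */
};

#define NFRAMES 1024
static struct frame frames[NFRAMES];
static size_t hand = 0;

/* Pick a victim frame with the clock algorithm, ignoring process boundaries. */
size_t choose_victim(void)
{
    for (;;) {
        if (!frames[hand].referenced) {
            size_t victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;               /* evicted regardless of its owner */
        }
        frames[hand].referenced = false; /* give the page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}
```

Because the victim can belong to any process, a process that touches many pages keeps its working set resident at the expense of everyone else, which is exactly the isolation problem described next.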
For instance, with global LRU there is no isolation between processes, so greedy or badly-written programs will push other programs out of physical memory. In addition, the priority one gives a program generally applies to the scheduler, not to physical memory (though there are approaches that try to deal with this).
There's also the "sleepyhead" problem, in which an important but rarely-used program gets paged out and then starts up slowly. For instance, the ntpd (network time protocol) program doesn't run often, but when it does run, it needs to make precise timing measurements, which would be disturbed by page faults. ntpd and similar programs can get around this problem by "pinning" their pages, meaning that their physical pages are not allowed to be paged out to disk.
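On POSIX systems, pinning is typically done with mlock() or mlockall(). The sketch below assumes a program like ntpd that wants all of its current and future pages locked in physical memory; it requires sufficient privilege or a large enough RLIMIT_MEMLOCK limit.

```c
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
    /* Lock every page the process has now or maps later into RAM,
       so the kernel will not page them out. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");   /* commonly EPERM or ENOMEM without privilege */
        return 1;
    }

    /* ... timing-sensitive work proceeds without page faults ... */

    munlockall();             /* release the pins when no longer needed */
    return 0;
}
```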
In general, the problem of fairness is that processes that are greedy (or wasteful) are rewarded; processes with poor locality end up squeezing out those with good locality, because they "steal" memory. There is no simple solution to this problem.