Since most programs require a lot of memory allocation and deallocation, we expect memory management to be fast, to have low fragmentation, to make good use of locality, and to scale to multiple processors. Newer concerns include security against attacks (e.g. randomizing allocation locations to blunt the effect of buffer-overflow attacks) as well as reliability in the face of errors (e.g. detecting double free()s and buffer overruns).
When analyzing the speed of an allocator, the key components are the cost of malloc, the cost of free, and the cost of finding the size of an object (needed by realloc). In certain kinds of programs, for instance those doing heavy graphics manipulation, the allocator can become the performance bottleneck, so speed matters.
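As a rough illustration, a micro-benchmark along the following lines can estimate the combined cost of a malloc/free pair. The iteration count and request size are arbitrary choices made here for the sketch; a realistic measurement would also vary object sizes and lifetimes.

/* Minimal timing sketch: average cost of a paired malloc()/free()
 * for one fixed request size. Not a full allocator benchmark. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { ITERATIONS = 1000000, REQUEST_SIZE = 64 };
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {
        void *p = malloc(REQUEST_SIZE);
        if (p == NULL)
            return 1;
        free(p);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9
                      + (end.tv_nsec - start.tv_nsec);
    printf("average malloc+free cost: %.1f ns\n", elapsed_ns / ITERATIONS);
    return 0;
}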
We say that fragmentation occurs when the heap, due to characteristics of the allocation algorithm, gets "broken up" into several unusable spaces, or when the overall utilization of memory is compromised. External fragmentation is wasted space outside (i.e. in between) allocated objects; internal fragmentation is wasted space inside an allocated area.
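A small sketch of internal fragmentation, under the assumption (made only for illustration) that the allocator rounds requests up to power-of-two size classes: the bytes between the requested size and the size actually granted are wasted inside the allocated block.

#include <stdio.h>
#include <stddef.h>

/* Round a request up to the next power-of-two size class
 * (hypothetical policy, used here just to show the waste). */
static size_t size_class(size_t request)
{
    size_t class_size = 16;          /* smallest class in this sketch */
    while (class_size < request)
        class_size *= 2;
    return class_size;
}

int main(void)
{
    size_t request = 65;             /* caller asks for 65 bytes      */
    size_t granted = size_class(request);
    printf("requested %zu, granted %zu, internal fragmentation %zu bytes\n",
           request, granted, granted - request);   /* 65, 128, 63 */
    return 0;
}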
Remember that at any given time the allocator might have several pages with unused space from which it could draw pieces of memory to give to requesting programs. There are several ways to decide which of the spots with free memory to take memory from. The first-fit method finds the first chunk of the desired size and returns it; it is generally considered to be very fast. Best-fit finds the chunk that wastes the least space. Worst-fit takes memory from the largest existing spot, and as a consequence maximizes the free space remaining. As a general rule, first-fit is faster, but increases fragmentation. One useful technique that can be applied with any of these methods is to always coalesce adjacent free objects into one big free object, as in the sketch below.
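The following sketch shows a first-fit search over a singly linked, address-ordered free list, plus a pass that coalesces physically adjacent free blocks. The block layout and the names (struct block, free_list) are assumptions made for illustration, not a description of any particular real allocator.

#include <stddef.h>

struct block {
    size_t size;             /* usable bytes in this free block       */
    struct block *next;      /* next free block, kept in address order */
};

struct block *free_list;     /* head of the free list */

/* First fit: return the first block large enough for the request. */
struct block *first_fit(size_t size)
{
    for (struct block *b = free_list; b != NULL; b = b->next)
        if (b->size >= size)
            return b;
    return NULL;             /* nothing fits; the allocator must grow the heap */
}

/* Coalesce: merge each block with its successor whenever the two are
 * physically adjacent, so freed neighbors become one larger block. */
void coalesce(void)
{
    for (struct block *b = free_list; b != NULL && b->next != NULL; ) {
        char *end_of_b = (char *)(b + 1) + b->size;
        if (end_of_b == (char *)b->next) {
            b->size += sizeof(struct block) + b->next->size;
            b->next = b->next->next;   /* absorb the neighbor, stay on b */
        } else {
            b = b->next;
        }
    }
}

Best-fit and worst-fit would use the same loop structure as first_fit, but keep scanning the whole list to track the smallest (or largest) block that still satisfies the request.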