The book is available and called simply "Understanding The Linux Virtual Memory Manager". There is a lot of additional material in the book that is not available here, including details on later 2.4 kernels, introductions to 2.6, a whole new chapter on the shared memory filesystem, coverage of TLB management, a lot more code commentary, countless other additions and clarifications and a CD with lots of cool stuff on it. This material (although now dated and lacking in comparison to the book) will remain available although I obviously encourage you to buy the book from your favourite book store :-) . As the book is under the Bruce Perens Open Book Series, it will be available 90 days after appearing on the book shelves which means it is not available right now. When it is available, it will be downloadable from http://www.phptr.com/perens so check there for more information.
To be fully clear, this webpage is not the actual book.
3. Describing Physical Memory
Linux is available for a wide range of architectures so there needs to be an architecture-independent way of describing memory. This chapter describes the structures used to keep account of memory banks, pages and the flags that affect VM behavior.
The principal concept prevalent in the VM is Non-Uniform Memory Access (NUMA). With large scale machines, memory may be arranged into banks that incur a different cost to access depending on their ``distance'' from the processor. For example, there might be a bank of memory assigned to each CPU or a bank of memory very suitable for DMA near device cards.
Each bank is called a node and the concept is represented under Linux by a struct pg_data_t even if the architecture is UMA. Every node in the system is kept on a NULL terminated list called pgdat_list and each node is linked to the next with the field pg_data_t→node_next. For UMA architectures like PC desktops, only one static pg_data_t structure called contig_page_data is used. Nodes will be discussed further in Section 3.1.
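The node list described above can be sketched in userspace C. This is a simplified sketch, not the kernel's definition: the real pg_data_t (a typedef of struct pglist_data) carries many more fields such as the zone tables, a per-node mem_map pointer and the starting physical address, and count_pages here is a hypothetical helper for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified sketch of the 2.4-era node descriptor.  The real
 * pg_data_t (struct pglist_data) has many more fields; only the
 * ones needed to show the list traversal are kept. */
typedef struct pglist_data {
    unsigned long node_start_paddr;  /* physical start of this bank */
    unsigned long node_size;         /* size of this bank in page frames */
    struct pglist_data *node_next;   /* next node; NULL terminates the list */
} pg_data_t;

/* Hypothetical helper: walk the pgdat_list the way the kernel walks
 * its node list, following node_next until the NULL terminator. */
static unsigned long count_pages(pg_data_t *pgdat_list)
{
    unsigned long total = 0;
    pg_data_t *pgdat;

    for (pgdat = pgdat_list; pgdat != NULL; pgdat = pgdat->node_next)
        total += pgdat->node_size;
    return total;
}
```

On a UMA machine the list would contain only the single contig_page_data node, so the loop body runs exactly once.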
Each node is divided up into a number of blocks called zones which represent ranges within memory. Zones should not be confused with zone based allocators as they are unrelated. A zone is described by a struct zone_t and each one is one of ZONE_DMA, ZONE_NORMAL or ZONE_HIGHMEM. Each is suitable for a different type of usage. ZONE_DMA is memory in the lower physical memory ranges which certain ISA devices require. Memory within ZONE_NORMAL is directly mapped by the kernel in the upper region of the linear address space which is discussed further in Section 5.1.
With the x86 the zones are:

ZONE_DMA        First 16MiB of memory
ZONE_NORMAL     16MiB - 896MiB
ZONE_HIGHMEM    896MiB - End
It is important to note that many kernel operations can only take place using ZONE_NORMAL so it is the most performance critical zone. ZONE_HIGHMEM is the rest of memory. Zones are discussed further in Section 3.2.
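The x86 zone boundaries in the table above can be expressed as a small classification function. This is a sketch for illustration, not a kernel function: phys_to_zone is a hypothetical name, though the 16MiB and 896MiB limits are the boundaries the 2.4 kernel uses on x86.

```c
#include <assert.h>

/* The three zone types a physical page frame on x86 can belong to. */
enum zone_type { ZONE_DMA, ZONE_NORMAL, ZONE_HIGHMEM };

#define DMA_LIMIT    (16UL  << 20)   /* first 16MiB, reachable by ISA DMA */
#define NORMAL_LIMIT (896UL << 20)   /* upper bound of directly mapped memory */

/* Hypothetical helper: classify a physical address into its zone
 * using the x86 boundaries from the table above. */
static enum zone_type phys_to_zone(unsigned long paddr)
{
    if (paddr < DMA_LIMIT)
        return ZONE_DMA;
    if (paddr < NORMAL_LIMIT)
        return ZONE_NORMAL;
    return ZONE_HIGHMEM;
}
```

A machine with 512MiB of RAM, for instance, has no ZONE_HIGHMEM pages at all: every frame falls below the 896MiB limit.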
The system's memory is broken up into fixed sized chunks called page frames. Each physical page frame is represented by a struct page and all the structures are kept in a global mem_map array which is usually stored at the beginning of ZONE_NORMAL or just after the area reserved for the loaded kernel image in low memory machines. struct pages are discussed in detail in Section 3.3 and the global mem_map array is discussed in detail in Section 4.7. The basic relationship between all these structures is illustrated in .
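Because mem_map is one flat array indexed by page frame number (PFN), converting between a frame and its struct page is plain pointer arithmetic. The sketch below models that relationship in userspace; the field shown and the NR_FRAMES constant are placeholders, since the real struct page carries flags, reference counts, LRU links and more.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for the kernel's struct page; the real structure has
 * many more fields (flags, count, list links and so on). */
struct page {
    unsigned long flags;
};

#define NR_FRAMES 128   /* assumed number of page frames for this sketch */
static struct page mem_map[NR_FRAMES];

/* PFN -> struct page: index straight into the global array.  The
 * 2.4 kernel's conversion macros do essentially this arithmetic. */
static struct page *pfn_to_page(unsigned long pfn)
{
    return &mem_map[pfn];
}

/* struct page -> PFN: the offset of the entry within mem_map. */
static unsigned long page_to_pfn(struct page *page)
{
    return (unsigned long)(page - mem_map);
}
```

The two conversions are inverses of each other, which is why the kernel can afford to pass struct page pointers around freely and recover the physical frame on demand.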
As the amount of memory directly accessible by the kernel (ZONE_NORMAL) is limited in size, Linux supports the concept of High Memory which is discussed in detail in Chapter 10. This chapter will discuss how nodes, zones and pages are represented before introducing high memory management.
Mel 2004-02-15