Slide 1:
Chapter 4
Memory Management
4.1 Basic memory management
4.2 Swapping
4.3 Virtual memory
4.4 Page replacement algorithms
4.5 Modeling page replacement algorithms
4.6 Design issues for paging systems
4.7 Implementation issues
4.8 Segmentation
Slide 2:
Memory Management
* Ideally programmers want memory that is
  - large
  - fast
  - non-volatile
* Memory hierarchy
  - small amount of fast, expensive memory - cache
  - some medium-speed, medium-priced main memory
  - gigabytes of slow, cheap disk storage
* Memory manager handles the memory hierarchy
Slide 3:
Basic Memory Management
Monoprogramming without Swapping or Paging
[Figure: three memory layouts, addresses running from 0 upward to 0xFFF...: (a) operating system in RAM at the bottom with the user program above; (b) operating system in ROM at the top with the user program below; (c) device drivers in ROM at the top, user program in the middle, operating system in RAM at the bottom]
Three simple ways of organizing memory
- an operating system with one user process
Slide 4:
Multiprogramming with Fixed Partitions
[Figure: memory divided into fixed partitions 1-4 above the operating system at address 0, with boundaries at 100K, 200K, 400K, and 700K; (a) separate input queues, one per partition; (b) a single input queue feeding all partitions]
* Fixed memory partitions
- separate input queues for each partition
- single input queue
Slide 5:
Modeling Multiprogramming
[Figure: CPU utilization (in percent) plotted against the degree of multiprogramming (0-10), with separate curves for 50% and 80% I/O wait]
CPU utilization as a function of number of processes in memory
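A minimal sketch of the probabilistic model behind this curve, assuming each of the n processes independently spends a fraction p of its time waiting for I/O, so CPU utilization is 1 - p^n (the 50% and 80% wait values match the curves above):

    #include <math.h>
    #include <stdio.h>

    /* CPU utilization under the simple multiprogramming model:
     * with n processes each waiting on I/O a fraction p of the time,
     * the CPU is busy unless all n are waiting, i.e. 1 - p^n. */
    static double cpu_utilization(double io_wait, int n)
    {
        return 1.0 - pow(io_wait, n);
    }

    int main(void)
    {
        for (int n = 1; n <= 10; n++)
            printf("n=%2d  50%% wait: %4.0f%%   80%% wait: %4.0f%%\n",
                   n, 100 * cpu_utilization(0.5, n), 100 * cpu_utilization(0.8, n));
        return 0;
    }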
Slide 6:
Analysis of Multiprogramming System Performance
[Figure: (a) arrival times and CPU minutes needed for jobs 1-4, arriving at 10:00, 10:10, 10:15, and 10:20; (b) CPU idle, CPU busy, and CPU per process for 1-4 processes with 80% I/O wait; (c) timeline of the sequence of events as the jobs arrive and finish]
* Arrival and work requirements of 4 jobs
* CPU utilization for 1-4 jobs with 80% I/O wait
* Sequence of events as jobs arrive and finish
  - note: numbers show amount of CPU time jobs get in each interval
Slide 7:
Relocation and Protection
* Cannot be sure where program will be loaded in memory
  - address locations of variables, code routines cannot be absolute
  - must keep a program out of other processes' partitions
* Use base and limit values
  - address locations are added to base value to map to physical address
  - address locations larger than the limit value are an error
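A small illustrative sketch of base/limit relocation and protection; the BASE and LIMIT values are made up for the example:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical base/limit registers for one partition (illustrative values). */
    static const uint32_t BASE  = 0x00030000;  /* where the partition starts   */
    static const uint32_t LIMIT = 0x00010000;  /* size of the partition, bytes */

    /* Relocate a program-relative address and enforce protection:
     * addresses are added to the base, and anything at or beyond the
     * limit is rejected before it ever reaches memory. */
    static bool translate(uint32_t virtual_addr, uint32_t *physical_addr)
    {
        if (virtual_addr >= LIMIT)
            return false;              /* protection fault */
        *physical_addr = BASE + virtual_addr;
        return true;
    }

    int main(void)
    {
        uint32_t phys;
        if (translate(0x1234, &phys))
            printf("0x1234 -> 0x%08X\n", phys);
        if (!translate(0x20000, &phys))
            printf("0x20000 -> protection fault\n");
        return 0;
    }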
Slide 8:
Swapping (1)
[Figure: seven snapshots of memory with the operating system at the bottom and processes A, B, C, D being swapped in and out over time; shaded regions mark unused memory]
Memory allocation changes as
- processes come into memory
- processes leave memory
Shaded regions are unused memory
Slide 9:
Swapping (2)
[Figure: (a) a process with its program (A-Program) at the bottom, the part actually in use above it, and room for growth reserved for the data segment; (b) the same process with the stack growing down from the top and the data segment growing up, leaving room for growth between them; the operating system sits below at address 0]
* Allocating space for growing data segment
* Allocating space for growing stack & data segment
Slide 10:
Memory Management with Bitmaps
[Figure: (a) a stretch of memory holding five processes (A-E) and three holes, with tick marks showing allocation units and shaded regions free; (b) the corresponding bitmap; (c) the same information as a linked list]
* Part of memory with 5 processes, 3 holes
  - tick marks show allocation units
  - shaded regions are free
* Corresponding bitmap
* Same information as a list
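A hedged sketch of allocation with a bitmap; the map contents and the number of allocation units are invented for illustration:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define UNITS 32                       /* allocation units covered by the map */

    /* One bit per allocation unit: 1 = in use, 0 = free (illustrative layout). */
    static bool bitmap[UNITS] = {
        1,1,1,1,1, 0,0,0, 1,1,1,1,1,1, 0,0,0,0, 1,1,1, 0,0,0,0,0, 1,1,1,1,1,1,
    };

    /* Scan the bitmap for a run of 'k' consecutive free units and return the
     * index of the first one, or -1 if no hole is large enough.  This linear
     * search is the main cost of the bitmap scheme. */
    static int find_hole(size_t k)
    {
        size_t run = 0;
        for (size_t i = 0; i < UNITS; i++) {
            run = bitmap[i] ? 0 : run + 1;
            if (run == k)
                return (int)(i - k + 1);
        }
        return -1;
    }

    int main(void)
    {
        printf("hole of 4 units starts at unit %d\n", find_hole(4));
        printf("hole of 6 units starts at unit %d\n", find_hole(6));
        return 0;
    }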
Slide 11:
Memory Management with Linked Lists
[Figure: the free/used list before and after process X terminates, for the four possible combinations of X's neighbors (in use or free on either side); in each case X's space is merged with any adjacent holes]
Four neighbor combinations for the terminating process X
Slide 12:
Virtual Memory
Paging (1)
[Figure: the CPU package contains the CPU and the memory management unit (MMU); the CPU sends virtual addresses to the MMU, and the MMU sends physical addresses over the bus to the memory and the disk controller]
The position and function of the MMU
Slide 13:
Paging (2)
[Figure: a 64K virtual address space (virtual pages 0K-4K up to 60K-64K) mapped onto a 32K physical memory (page frames 0K-4K up to 28K-32K) with 4K pages; each virtual page either names the frame it maps to or is marked unmapped]
The relation between virtual addresses and physical memory addresses given by the page table
Slide 14:
Page Tables (1)
[Figure: an incoming 16-bit virtual address is split into a 4-bit virtual page number that indexes a 16-entry page table and a 12-bit offset that is copied directly into the outgoing physical address; each page table entry carries a present/absent bit that is checked before the page frame number is used]
Internal operation of the MMU with 16 4-KB pages
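A rough software model of the translation the figure describes, assuming 16-bit virtual addresses, 4-KB pages, and a hypothetical 16-entry page table:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                 /* 4-KB pages: 12-bit offset      */
    #define NUM_PAGES  16                 /* 16-bit addresses -> 16 entries */

    /* Hypothetical page table: present flag plus page frame number. */
    struct pte { int present; uint16_t frame; };

    static struct pte page_table[NUM_PAGES] = {
        [0] = {1, 2}, [1] = {1, 1}, [2] = {1, 6},   /* remaining pages absent */
    };

    /* Split a 16-bit virtual address into page number and offset, look the
     * page up, and rebuild the physical address.  A missing page would
     * trigger a page fault in a real MMU; here we just report it. */
    static int translate(uint16_t vaddr, uint32_t *paddr)
    {
        uint16_t page   = vaddr >> PAGE_SHIFT;
        uint16_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

        if (!page_table[page].present)
            return -1;                                  /* page fault */
        *paddr = ((uint32_t)page_table[page].frame << PAGE_SHIFT) | offset;
        return 0;
    }

    int main(void)
    {
        uint32_t paddr;
        if (translate(0x2004, &paddr) == 0)             /* page 2, offset 4 */
            printf("virtual 0x2004 -> physical 0x%05X\n", paddr);
        return 0;
    }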
Slide 15:
Page Tables (2)
[Figure: a 32-bit virtual address split into two 10-bit page table fields and a 12-bit offset; the top-level page table points to second-level page tables]
* 32-bit address with two page table fields
* Two-level page tables
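A hedged sketch of a two-level lookup with the 10/10/12-bit split described above; the table layout and the lookup_frame helper are illustrative, not any particular hardware's format:

    #include <stdint.h>
    #include <stdio.h>

    /* Field widths for the two-level scheme on the slide:
     * 10-bit top-level index, 10-bit second-level index, 12-bit offset. */
    #define PT1(addr)    (((addr) >> 22) & 0x3FF)
    #define PT2(addr)    (((addr) >> 12) & 0x3FF)
    #define OFFSET(addr) ((addr) & 0xFFF)

    /* The top-level table holds pointers to 1024-entry second-level tables;
     * a null pointer means the whole 4-MB region is unmapped, so that
     * second-level table need not exist at all. */
    static uint32_t *top_level[1024];

    int lookup_frame(uint32_t vaddr, uint32_t *frame)
    {
        uint32_t *second = top_level[PT1(vaddr)];
        if (second == 0)
            return -1;                     /* region not mapped: page fault */
        *frame = second[PT2(vaddr)];
        return 0;
    }

    int main(void)
    {
        uint32_t frame;
        uint32_t addr = 0x00C03123u;       /* PT1 = 3, PT2 = 3, offset = 0x123 */
        if (lookup_frame(addr, &frame) < 0)
            printf("page fault on 0x%08X (region not mapped)\n", addr);
        return 0;
    }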
Slide 16:
Page Tables (3)
[Figure: the fields of a page table entry - caching disabled, referenced, modified, protection, present/absent, and the page frame number]
Typical page table entry
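One possible way to model such an entry in C; the field order and widths are illustrative, since real hardware fixes its own layout:

    #include <stdint.h>

    /* An in-memory sketch of the entry shown above; treat the exact
     * bit positions as an assumption, not a hardware definition. */
    struct page_table_entry {
        uint32_t present       : 1;   /* page is in memory                  */
        uint32_t protection    : 3;   /* e.g. read / write / execute bits   */
        uint32_t modified      : 1;   /* set by hardware on a write (dirty) */
        uint32_t referenced    : 1;   /* set by hardware on any access      */
        uint32_t cache_disable : 1;   /* bypass the cache (device pages)    */
        uint32_t frame         : 20;  /* page frame number                  */
    };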
Slide 17:
TLBs - Translation Lookaside Buffers

Valid | Virtual page | Modified | Protection | Page frame
  1   |     140      |    1     |    RW      |     31
  1   |      20      |    0     |    R X     |     38
  1   |     130      |    1     |    RW      |     29
  1   |     129      |    1     |    RW      |     62
  1   |      19      |    0     |    R X     |     50
  1   |      21      |    0     |    R X     |     45
  1   |     860      |    1     |    RW      |     14
  1   |     861      |    1     |    RW      |     75
A TLB to speed up paging
Slide 18:
Inverted Page Tables
[Figure: on the left, a traditional page table with an entry for each of the 2^52 pages, indexed by virtual page (entries 0 to 2^52 - 1); on the right, the inverted table for a 256-MB physical memory with 2^16 4-KB page frames (entries 0 to 2^16 - 1), plus a hash table indexed by a hash on the virtual page whose chains hold (virtual page, page frame) pairs]
Comparison of a traditional page table with an inverted page table
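A simplified sketch of an inverted page table with hash chaining, using the 2^16-frame figure from the slide; the entry layout and the ipt_init/find_frame helpers are assumptions for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_FRAMES (1u << 16)          /* 256-MB memory / 4-KB frames     */
    #define HASH_SIZE  NUM_FRAMES          /* one bucket per frame is typical */

    /* One entry per page frame, chained through 'next' for hash collisions. */
    struct ipt_entry {
        uint64_t virtual_page;             /* which virtual page occupies this frame */
        int      used;
        int      next;                     /* next entry in the hash chain, -1 = end */
    };

    static struct ipt_entry frames[NUM_FRAMES];
    static int hash_table[HASH_SIZE];      /* head of chain for each hash bucket */

    static void ipt_init(void)
    {
        for (unsigned b = 0; b < HASH_SIZE; b++)
            hash_table[b] = -1;
    }

    /* Hash the virtual page, then walk the collision chain comparing virtual
     * page numbers until the owning frame is found.  The table has one entry
     * per frame, not one per virtual page, so its size tracks physical memory
     * rather than the huge virtual address space. */
    static int find_frame(uint64_t vpage)
    {
        for (int i = hash_table[vpage % HASH_SIZE]; i != -1; i = frames[i].next)
            if (frames[i].used && frames[i].virtual_page == vpage)
                return i;                  /* the frame number is the index */
        return -1;                         /* not in memory: page fault     */
    }

    int main(void)
    {
        ipt_init();
        frames[42] = (struct ipt_entry){ .virtual_page = 0x123456, .used = 1, .next = -1 };
        hash_table[0x123456 % HASH_SIZE] = 42;
        printf("virtual page 0x123456 is in frame %d\n", find_frame(0x123456));
        return 0;
    }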
Slide 19:
Page Replacement Algorithms
* Page fault forces a choice
  - which page must be removed
  - to make room for incoming page
* Modified page must first be saved
  - unmodified just overwritten
* Better not to choose an often used page
  - will probably need to be brought back in soon
Slide 20:
Optimal Page Replacement Algorithm
* Replace page needed at the farthest point in future
  - optimal but unrealizable
* Estimate by ...
  - logging page use on previous runs of process
  - although this is impractical
Slide 21:
Not Recently Used Page Replacement Algorithm
* Each page has a Referenced bit and a Modified bit
  - bits are set when page is referenced, modified
* Pages are classified
  1. not referenced, not modified
  2. not referenced, modified
  3. referenced, not modified
  4. referenced, modified
* NRU removes a page at random
  - from lowest numbered non-empty class
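A small sketch of NRU victim selection using the class numbering above; the page array contents are invented for the example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define NUM_PAGES 8

    /* Per-page status bits as maintained by the hardware/OS. */
    struct page { int referenced; int modified; };

    /* Class 1..4 as numbered on the slide:
     * 1 = not referenced, not modified   2 = not referenced, modified
     * 3 = referenced, not modified       4 = referenced, modified     */
    static int page_class(const struct page *p)
    {
        return 1 + (p->referenced ? 2 : 0) + (p->modified ? 1 : 0);
    }

    /* NRU: pick a random page from the lowest-numbered non-empty class. */
    static int nru_victim(const struct page pages[], int n)
    {
        int best_class = 5, candidates[NUM_PAGES], count = 0;

        for (int i = 0; i < n; i++) {
            int c = page_class(&pages[i]);
            if (c < best_class) { best_class = c; count = 0; }
            if (c == best_class) candidates[count++] = i;
        }
        return candidates[rand() % count];
    }

    int main(void)
    {
        struct page pages[NUM_PAGES] = {
            {1,1}, {1,0}, {0,1}, {1,1}, {0,1}, {1,0}, {1,1}, {1,0},
        };
        srand((unsigned)time(NULL));
        printf("evict page %d\n", nru_victim(pages, NUM_PAGES));
        return 0;
    }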
Slide 22:
FIFO Page Replacement Algorithm
* Maintain a linked list of all pages
  - in order they came into memory
* Page at beginning of list replaced
* Disadvantage
  - page in memory the longest may be often used
Slide 23:
Second Chance Page Replacement Algorithm
[Figure: (a) pages A-H kept in FIFO order with loading times 0, 3, 7, 8, 12, 14, 15, 18, from the page loaded first to the most recently loaded page; (b) if a fault occurs at time 20 and A has its R bit set, A is moved to the end with time 20 and treated like a newly loaded page]
* Operation of second chance
  - pages sorted in FIFO order
  - page list if a fault occurs at time 20 and A has its R bit set
  (numbers above pages are loading times)
Slide 24:
The Clock Page Replacement Algorithm
[Figure: pages A-L arranged in a circle with a clock hand pointing at one of them]
When a page fault occurs, the page the hand is pointing to is inspected.
The action taken depends on the R bit:
  R = 0: Evict the page
  R = 1: Clear R and advance hand
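A minimal sketch of the clock algorithm as described above; the frame contents and R bits are illustrative:

    #include <stdio.h>

    #define NUM_FRAMES 8

    /* One frame on the clock "dial": which page it holds and its R bit. */
    struct frame { int page; int referenced; };

    static struct frame frames[NUM_FRAMES] = {
        {10,1}, {11,0}, {12,1}, {13,1}, {14,0}, {15,1}, {16,0}, {17,1},
    };
    static int hand = 0;                   /* index the clock hand points at */

    /* On a page fault: inspect the frame under the hand.  R == 1 means the
     * page was recently used, so clear R and advance; R == 0 means evict it,
     * load the new page there, and advance the hand past it. */
    static int clock_replace(int new_page)
    {
        for (;;) {
            struct frame *f = &frames[hand];
            if (f->referenced == 0) {
                int victim = f->page;
                f->page = new_page;
                f->referenced = 1;
                hand = (hand + 1) % NUM_FRAMES;
                return victim;
            }
            f->referenced = 0;             /* give it a second chance */
            hand = (hand + 1) % NUM_FRAMES;
        }
    }

    int main(void)
    {
        printf("evicted page %d to load page 20\n", clock_replace(20));
        return 0;
    }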
Slide 25:
Least Recently Used (LRU)
* Assume pages used recently will be used again soon
  - throw out the page that has been unused for the longest time
* Must keep a linked list of pages
  - most recently used at front, least at rear
  - update this list on every memory reference!
* Alternatively keep a counter in each page table entry
  - choose page with lowest value counter
  - periodically zero the counter
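A sketch of the counter variant, assuming a hypothetical 64-bit counter copied into the entry on every reference:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_PAGES 4

    /* Hypothetical "time of last use" per page, as if the hardware copied
     * a running counter into the page table entry on every reference. */
    static uint64_t last_use[NUM_PAGES];
    static uint64_t clock_counter;         /* incremented on every reference */

    static void reference(int page)
    {
        last_use[page] = ++clock_counter;
    }

    /* LRU victim: the page whose counter value is lowest, i.e. the one
     * that has gone unreferenced for the longest time. */
    static int lru_victim(void)
    {
        int victim = 0;
        for (int p = 1; p < NUM_PAGES; p++)
            if (last_use[p] < last_use[victim])
                victim = p;
        return victim;
    }

    int main(void)
    {
        int refs[] = {0, 1, 2, 3, 2, 1, 0, 3};
        for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
            reference(refs[i]);
        printf("LRU page is %d\n", lru_victim());   /* 2 was used least recently */
        return 0;
    }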
Slide 26:
Simulating LRU in Software (1)
LRU using a matrix - pages referenced in order 0,1,2,3,2,1,0,3,2,3
[Figure: the 4x4 bit matrix after each of the ten references, panels (a)-(j); on a reference to page k, row k is set to all 1s and then column k is cleared, so the page whose row holds the smallest value is the least recently used]
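A short sketch of the matrix method, assuming 4 pages as in the figure; it replays the same reference string and reports the least recently used page:

    #include <stdio.h>

    #define N 4                            /* number of page frames */

    static unsigned char matrix[N][N];     /* the n x n LRU bit matrix */

    /* On a reference to page k: set row k to all 1s, then clear column k.
     * The row with the smallest binary value then belongs to the LRU page. */
    static void reference(int k)
    {
        for (int j = 0; j < N; j++) matrix[k][j] = 1;
        for (int i = 0; i < N; i++) matrix[i][k] = 0;
    }

    static int lru_page(void)
    {
        int victim = 0, best = -1;
        for (int i = 0; i < N; i++) {
            int value = 0;
            for (int j = 0; j < N; j++) value = (value << 1) | matrix[i][j];
            if (best == -1 || value < best) { best = value; victim = i; }
        }
        return victim;
    }

    int main(void)
    {
        int refs[] = {0, 1, 2, 3, 2, 1, 0, 3, 2, 3};
        for (unsigned i = 0; i < sizeof refs / sizeof refs[0]; i++)
            reference(refs[i]);
        printf("LRU page: %d\n", lru_page());   /* page 1 after this string */
        return 0;
    }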
Slide 27:
Simulating LRU in Software (2)
[Figure: the aging algorithm - R bits for pages 0-5 at five successive clock ticks (a)-(e), and the corresponding 8-bit counters; at each tick every counter is shifted right one bit and the page's R bit is inserted at the leftmost position]
* The aging algorithm simulates LRU in software
* Note 6 pages for 5 clock ticks, (a) - (e)
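A sketch of the aging algorithm with 8-bit counters; the R-bit values fed in at each tick are invented, not the ones in the figure:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_PAGES 6

    static uint8_t counters[NUM_PAGES];    /* one 8-bit aging counter per page */

    /* At each clock tick: shift every counter right one bit and insert the
     * page's R bit (collected by hardware since the last tick) at the left.
     * The page with the lowest counter is the eviction candidate. */
    static void clock_tick(const int r_bits[NUM_PAGES])
    {
        for (int p = 0; p < NUM_PAGES; p++)
            counters[p] = (uint8_t)((counters[p] >> 1) | (r_bits[p] << 7));
    }

    static int aging_victim(void)
    {
        int victim = 0;
        for (int p = 1; p < NUM_PAGES; p++)
            if (counters[p] < counters[victim])
                victim = p;
        return victim;
    }

    int main(void)
    {
        /* R bits for pages 0-5 at five ticks (illustrative values). */
        int ticks[5][NUM_PAGES] = {
            {1,0,1,0,1,1}, {1,1,0,0,1,0}, {1,0,0,1,0,1}, {1,0,0,0,1,0}, {0,1,1,0,0,0},
        };
        for (int t = 0; t < 5; t++)
            clock_tick(ticks[t]);
        printf("evict page %d\n", aging_victim());
        return 0;
    }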
Slide 28:
The Working Set Page Replacement Algorithm (1)
[Figure: w(k,t) plotted as a function of k - the working set size grows quickly for small k and then levels off]
* The working set is the set of pages used by the k most recent memory references
* w(k,t) is the size of the working set at time t
Slide 29:
The Working Set Page Replacement Algorithm (2)
[Figure: a page table in which each entry holds the time of last use and the R (Referenced) bit, together with the current virtual time. On a page fault, scan all pages examining the R bit:
  if (R == 1) set time of last use to current virtual time
  if (R == 0 and age > tau) remove this page
  if (R == 0 and age <= tau) remember the page with the smallest time]
The working set algorithm
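A simplified sketch of the scan in the figure (it ignores the dirty-page case and the periodic clock tick that clears R bits); TAU and the page structure are assumptions for illustration:

    #define NUM_PAGES 8
    #define TAU       100                  /* working-set window, in virtual time units */

    struct page {
        int  present;
        int  referenced;                   /* R bit, set by hardware               */
        long time_of_last_use;
    };

    /* One pass of the working set algorithm at a page fault: pages with R set
     * get a fresh timestamp; a page unreferenced for longer than TAU has left
     * the working set and is evicted; otherwise the oldest page is remembered
     * as the fallback victim.  Returns -1 if every page had R == 1. */
    int ws_scan(struct page pages[], long current_virtual_time)
    {
        int  victim = -1;
        long oldest = current_virtual_time + 1;

        for (int p = 0; p < NUM_PAGES; p++) {
            if (!pages[p].present)
                continue;
            if (pages[p].referenced) {
                pages[p].time_of_last_use = current_virtual_time;   /* R == 1 */
                continue;
            }
            long age = current_virtual_time - pages[p].time_of_last_use;
            if (age > TAU)
                return p;                  /* outside the working set: evict it */
            if (pages[p].time_of_last_use < oldest) {               /* remember oldest */
                oldest = pages[p].time_of_last_use;
                victim = p;
            }
        }
        return victim;
    }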
Slide 30:
The WSClock Page Replacement Algorithm
Operation of the WSClock algorithm
Slide 31:
Review of Page Replacement Algorithms

Algorithm                  | Comment
Optimal                    | Not implementable, but useful as a benchmark
NRU (Not Recently Used)    | Very crude
FIFO (First-In, First-Out) | Might throw out important pages
Second chance              | Big improvement over FIFO
Clock                      | Realistic
LRU (Least Recently Used)  | Excellent, but difficult to implement exactly
NFU (Not Frequently Used)  | Fairly crude approximation to LRU
Aging                      | Efficient algorithm that approximates LRU well
Working set                | Somewhat expensive to implement
WSClock                    | Good efficient algorithm
Slide 32:
Modeling Page Replacement Algorithms
Belady's Anomaly
* All page frames initially empty
* Reference string: 0 1 2 3 0 1 4 0 1 2 3 4
[Figure: (a) FIFO with 3 page frames - 9 page faults; (b) FIFO with 4 page frames - 10 page faults; the youngest and oldest pages are shown after each reference]
* FIFO with 3 page frames
* FIFO with 4 page frames
* P's show which page references cause page faults
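A small FIFO simulator that reproduces the anomaly for the reference string above, giving 9 faults with 3 frames and 10 with 4:

    #include <stdio.h>
    #include <string.h>

    /* Count page faults for pure FIFO replacement with the given number of
     * frames.  Frames form a circular queue; the oldest page is replaced. */
    static int fifo_faults(const int refs[], int n, int num_frames)
    {
        int frames[16], next = 0, faults = 0;
        memset(frames, -1, sizeof frames);

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < num_frames; f++)
                if (frames[f] == refs[i]) { hit = 1; break; }
            if (!hit) {
                frames[next] = refs[i];            /* replace oldest resident page */
                next = (next + 1) % num_frames;
                faults++;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = {0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));  /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));  /* 10 */
        return 0;
    }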
Slide 33:
Stack Algorithms
[Figure: an LRU stack simulation - the reference string, the state of the memory array M after each item in the reference string is processed (most recently used page on top), the resulting distance string, and the references that cause page faults]
State of memory array, M, after each item in reference string is processed
Slide 34:
The Distance String
[Figure: probability density functions p(d) for two hypothetical distance strings - one concentrated at small d, one spread out toward large d]
Probability density functions for two
hypothetical distance strings
Slide 35:
The Distance String
[Figure: (a) the C vector - C_k is the number of times distance k occurs in the distance string, with C_infinity counting first references; (b) the F vector - F_m, the number of page faults with m frames, is the sum of C_k over all k > m plus C_infinity]
* Computation of page fault rate from the distance string
  - the C vector
  - the F vector
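A hedged sketch that derives a distance string with an LRU stack and then builds the C and F vectors as described; the reference string is illustrative, not the one in the figure:

    #include <stdio.h>

    #define MAX_PAGES 8
    #define INF       (MAX_PAGES + 1)      /* "distance" for a first reference */

    /* Keep the pages on a stack (most recently used on top): the distance of
     * a reference is the depth at which the page is found.  Then:
     * C[k] = how many references had distance k,
     * F[m] = page faults with m frames = sum of C[k] for k > m, plus C[INF]. */
    int main(void)
    {
        int refs[] = {0, 1, 2, 3, 2, 1, 0, 3, 2, 3};
        int n = sizeof refs / sizeof refs[0];
        int stack[MAX_PAGES], depth = 0;
        int C[INF + 1] = {0};

        for (int i = 0; i < n; i++) {
            int d = INF, page = refs[i];
            for (int s = 0; s < depth; s++)
                if (stack[s] == page) { d = s + 1; break; }
            C[d]++;
            /* move (or push) the page to the top of the stack */
            int limit = (d == INF) ? depth++ : d - 1;
            for (int s = limit; s > 0; s--)
                stack[s] = stack[s - 1];
            stack[0] = page;
            printf("%d ", d == INF ? -1 : d);          /* -1 stands for infinity */
        }
        printf("  (distance string)\n");

        for (int m = 1; m <= MAX_PAGES; m++) {
            int faults = C[INF];
            for (int k = m + 1; k <= MAX_PAGES; k++)
                faults += C[k];
            printf("F[%d] = %d page faults\n", m, faults);
        }
        return 0;
    }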
Slide 36:
Design Issues for Paging Systems
Local versus Global Allocation Policies (1)
[Figure: the pages of three processes together with their ages, under the three policies listed below]
* Original configuration
* Local page replacement
* Global page replacement
Slide 37:
Local versus Global Allocation Policies (2)
[Figure: page faults/sec plotted against the number of page frames assigned - the fault rate falls as more frames are assigned]
Page fault rate as a function of the number
of page frames assigned
Slide 38:
Load Control
* Despite good designs, system may still thrash
* When the PFF algorithm indicates
  - some processes need more memory
  - but no processes need less
* Solution: reduce number of processes competing for memory
  - swap one or more to disk, divide up pages they held
  - reconsider degree of multiprogramming
Slide 39:
Page Size (1)
Small page size
* Advantages
  - less internal fragmentation
  - better fit for various data structures, code sections
  - less unused program in memory
* Disadvantages
  - programs need many pages, larger page tables
Slide 40:
Page Size (2)
* Overhead due to page table and internal fragmentation:
    overhead = s*e/p + p/2
  where the first term is page table space and the second is the internal fragmentation loss
* Where
  - s = average process size in bytes
  - p = page size in bytes
  - e = size of a page table entry in bytes
* Overhead is minimized when p = sqrt(2*s*e)
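A quick check of the formula with illustrative values (s = 1 MB, e = 8 bytes), for which the optimum works out to 4 KB:

    #include <math.h>
    #include <stdio.h>

    /* Total per-process overhead for page size p: the page table costs
     * s*e/p bytes and internal fragmentation wastes p/2 bytes on average. */
    static double overhead(double s, double e, double p)
    {
        return s * e / p + p / 2.0;
    }

    int main(void)
    {
        double s = 1 << 20;                /* illustrative: 1-MB average process */
        double e = 8;                      /* illustrative: 8-byte page entries  */

        printf("optimal page size = %.0f bytes\n", sqrt(2 * s * e));   /* 4096 */
        for (int shift = 10; shift <= 14; shift++) {
            double p = 1 << shift;
            printf("p = %5.0f  overhead = %6.0f bytes\n", p, overhead(s, e, p));
        }
        return 0;
    }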
Slide 41:
Separate Instruction and Data Spaces
[Figure: (a) one address space of 2^32 bytes holding both program and data, with unused pages at the top; (b) separate I and D spaces, each 2^32 bytes - the instruction (I) space holds the program and the data (D) space holds the data]
* One address space
* Separate I and D spaces
Slide 42:
Shared Pages
[Figure: two processes whose page tables both point to the same program pages, while Data 1 and Data 2 remain private to each process]
Two processes sharing the same program and its page tables
Slide 43:
Cleaning Policy
* Need for a background process, the paging daemon
  - periodically inspects state of memory
* When too few frames are free
  - selects pages to evict using a replacement algorithm
* It can use the same circular list (clock)
  - as the regular page replacement algorithm, but with a different pointer
Slide 44:
Implementation Issues
Operating System Involvement with Paging
Four times when the OS is involved with paging:
1. Process creation
   - determine program size
   - create page table
2. Process execution
   - MMU reset for new process
   - TLB flushed
3. Page fault time
   - determine virtual address causing fault
   - swap target page out, needed page in
4. Process termination time
   - release page table, pages
Slide 45:
Page Fault Handling (1)
Hardware traps to kernel
General registers saved
OS determines which virtual page needed
OS checks validity of address, seeks page frame
If selected frame is dirty, write it to disk
Slide 46:
Page Fault Handling (2)
OS schedules new page to be brought in from disk
Page tables updated
Faulting instruction backed up to when it began
Faulting process scheduled
Registers restored
Program continues
Slide 47:
Instruction Backup
MOVE.L #6(A1), 2(A0)
[Figure: the instruction laid out in 16-bit words - the MOVE opcode at address 1000, the first operand (6) at 1002, and the second operand (2) at 1004]
An instruction causing a page fault
Slide 48:
Locking Pages in Memory
* Virtual memory and I/O occasionally interact
* Process issues call for read from device into buffer
  - while waiting for I/O, another process starts up
  - has a page fault
  - buffer for the first process may be chosen to be paged out
* Need to specify some pages locked
  - exempted from being target pages
Slide 49:
Backing Store
[Figure: main memory and disk - (a) paging to a static swap area, where every page has a fixed slot reserved in the disk swap area; (b) backing up pages dynamically, where pages are written wherever there is room and a per-process disk map records where each page is]
(a) Paging to a static swap area
(b) Backing up pages dynamically
Slide 50:
Separation of Policy and Mechanism
[Figure: page fault handling with an external pager - the faulting user process and the external pager run in user space, the fault handler and MMU handler in kernel space; numbered steps show the fault reaching the kernel, the kernel requesting the page from the external pager ("request page"), the page arriving from disk, the reply ("here is page"), and the page being mapped in]
Page fault handling with an external pager
Slide 51:
Segmentation (1)
[Figure: a one-dimensional virtual address space holding several growing tables; the space currently being used by the parse tree is shown inside the address space allocated to it, free space is shrinking, and the symbol table has bumped into the source text table]
* One-dimensional address space with growing tables
* One table may bump into another
Slide 52:
Segmentation (2)
[Figure: the same tables as independent segments - segment 0 (symbol table), segment 1 (source text), segment 2 (constants), segment 3 (parse tree), segment 4 (call stack) - each starting at its own address 0K and having its own length]
Allows each table to grow or shrink independently
Slide 53:
Segmentation (3)

Consideration                                                       | Paging                                 | Segmentation
Need the programmer be aware that this technique is being used?     | No                                     | Yes
How many linear address spaces are there?                           | 1                                      | Many
Can the total address space exceed the size of physical memory?     | Yes                                    | Yes
Can procedures and data be distinguished and separately protected?  | No                                     | Yes
Can tables whose size fluctuates be accommodated easily?            | No                                     | Yes
Is sharing of procedures between users facilitated?                 | No                                     | Yes
Why was this technique invented?                                    | To get a large linear address space without having to buy more physical memory | To allow programs and data to be broken up into logically independent address spaces and to aid sharing and protection

Comparison of paging and segmentation
Slide 54:
Implementation of Pure Segmentation
[Figure: physical memory after several rounds of segment replacement - segments 0-7 of various sizes (3K to 10K) are swapped in and out, leaving small unusable holes between them, until compaction removes the holes]
(a)-(d) Development of checkerboarding
(e) Removal of the checkerboarding by compaction
Slide 55:
Segmentation with Paging: MULTICS (1)
[Figure: the MULTICS descriptor segment, with one descriptor per segment (segment 0, 1, 2, ... descriptors); each descriptor holds the main memory address of the segment's page table, the segment length (in pages), the page size (0 = 1024 words), a bit telling whether the segment is paged, miscellaneous bits, and protection bits; the page tables for segments 1 and 3 are shown with entries for pages 0, 1, 2]
* Descriptor segment points to page tables
* Segment descriptor - numbers are field lengths
Slide 56:
Segmentation with Paging: MULTICS (2)
[Figure: a 34-bit MULTICS virtual address - an 18-bit segment number, a 6-bit page number, and a 10-bit offset within the page; the page number and offset together form the address within the segment]
A 34-bit MULTICS virtual address
Slide 57:
Segmentation with Paging: MULTICS (3)
[Figure: the segment number selects a descriptor from the descriptor segment, which points to the segment's page table; the page number selects the page table entry giving the page frame, and the offset selects the word within the page]
Conversion of a two-part MULTICS address into a main memory address
Slide 58:
Segmentation with Paging: MULTICS (4)
[Figure: a simplified MULTICS TLB - each entry holds a comparison field (segment number and virtual page), the page frame, the protection bits (read/write, execute only), the age, and an "is this entry used?" bit]
* Simplified version of the MULTICS TLB
* Existence of 2 page sizes makes the actual TLB more complicated
Slide 59:
Segmentation with Paging: Pentium (1)
[Figure: a Pentium selector - a 13-bit index, one bit selecting GDT (0) or LDT (1), and a 2-bit privilege level (0-3)]
A Pentium selector
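A tiny sketch of pulling those three fields out of a 16-bit selector; the example value is illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Decompose a Pentium segment selector into its three fields:
     * bits 15..3 = index into the GDT or LDT, bit 2 = table indicator
     * (0 = GDT, 1 = LDT), bits 1..0 = privilege level. */
    struct selector {
        unsigned index;            /* which descriptor in the table */
        unsigned use_ldt;          /* 0 = GDT, 1 = LDT              */
        unsigned privilege;        /* 0 (kernel) .. 3 (user)        */
    };

    static struct selector decode_selector(uint16_t sel)
    {
        struct selector s;
        s.index     = sel >> 3;
        s.use_ldt   = (sel >> 2) & 1;
        s.privilege = sel & 3;
        return s;
    }

    int main(void)
    {
        struct selector s = decode_selector(0x002B);   /* illustrative value */
        printf("index=%u table=%s privilege=%u\n",
               s.index, s.use_ldt ? "LDT" : "GDT", s.privilege);
        return 0;
    }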
Slide 60:
Segmentation with Paging: Pentium (2)
[Figure: a Pentium code segment descriptor (8 bytes) - base address in bits 0-15, 16-23, and 24-31, limit in bits 0-15 and 16-19, and flag bits: 0 = 16-bit segment / 1 = 32-bit segment; 0 = limit is in bytes / 1 = limit is in pages; privilege level (0-3); 0 = system / 1 = application; segment type and protection; 0 = segment is absent from memory / 1 = segment is present in memory]
* Pentium code segment descriptor
* Data segments differ slightly
Slide 61:
Segmentation with Paging: Pentium (3)
[Figure: the selector picks a descriptor; the base address from the descriptor is added to the offset (after a check against the limit and the other fields) to form the 32-bit linear address]
Conversion of a (selector, offset) pair to a linear address
Slide 62:
Segmentation with Paging: Pentium (4)
[Figure: a 32-bit linear address split into a 10-bit directory field, a 10-bit page field, and a 12-bit offset; the directory entry points to a page table, the page table entry points to the page frame, and the offset selects the word within that frame]
Mapping of a linear address onto a physical address
Slide 63:
Segmentation with Paging: Pentium (5)
[Figure: four concentric protection rings, level 0 innermost to level 3 outermost, with typical uses of the levels - the kernel at level 0 and user programs at level 3]
Protection on the Pentium
