Chapter 20: Database System Architectures
Database System Concepts, 5th Ed.
©Silberschatz, Korth and Sudarshan
See www.db-book.com for conditions on re-use
Chapter 20: Database System Architectures
Centralized and Client-Server Systems
Server System Architectures
Parallel Systems
Distributed Systems
Network Types
Centralized Systems
Run on a single computer system and do not interact with other computer
systems.
General-purpose computer system: one to a few CPUs and a number of
device controllers that are connected through a common bus that provides access
to shared memory.
Single-user system (e.g., personal computer or workstation): desk-top unit,
single user, usually has only one CPU and one or two hard disks; the OS
may support only one user.
Multi-user system: more disks, more memory, multiple CPUs, and a multi-user
OS. Serves a large number of users who are connected to the system via
terminals. Often called server systems.
A Centralized Computer System
[figure: a single computer with one or more CPUs, device controllers, and shared memory connected by a common bus]
Client-Server Systems
Server systems satisfy requests generated at m client systems, whose general structure
is shown below:
Client-Server Systems (Cont.)
Database functionality can be divided into:
Back-end: manages access structures, query evaluation and optimization,
concurrency control and recovery.
Front-end: consists of tools such as forms, report-writers, and graphical user
interface facilities.
The interface between the front-end and the back-end is through SQL or through an
application program interface.
Client-Server Systems (Cont.)
Advantages of replacing mainframes with networks of workstations or personal
computers connected to back-end server machines:
better functionality for the cost
flexibility in locating resources and expanding facilities
better user interfaces
easier maintenance
Server System Architecture
Server systems can be broadly categorized into two kinds:
transaction servers which are widely used in relational database systems,
and
data servers, used in object-oriented database systems
Transaction Servers
Also called query server systems or SQL server systems
Clients send requests to the server
Transactions are executed at the server
Results are shipped back to the client.
Requests are specified in SQL, and communicated to the server through a
remote procedure call (RPC) mechanism.
Transactional RPC allows many RPC calls to form a transaction.
Open Database Connectivity (ODBC) is a C language application program
interface standard from Microsoft for connecting to a server, sending SQL
requests, and receiving results.
JDBC standard is similar to ODBC, for Java
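To make the request path above concrete, here is a minimal JDBC client sketch: it connects to a (hypothetical) server, sends one SQL request, and reads the shipped-back results. The connection URL, credentials, and table name are illustrative placeholders, not part of the slides.

// Minimal JDBC client sketch: open a connection to a transaction server,
// send an SQL request, and read the results shipped back.
// The URL, credentials, and table name are hypothetical placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcClientSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://dbserver.example.com:5432/bankdb";
        try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT account_id, balance FROM account")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getBigDecimal(2));
            }
        } // connection, statement, and result set are closed automatically
    }
}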
Transaction Server Process Structure
A typical transaction server consists of multiple processes accessing data in
shared memory.
Server processes
These receive user queries (transactions), execute them and send results
back
Processes may be multithreaded, allowing a single process to execute
several user queries concurrently
Typically multiple multithreaded server processes
Lock manager process
More on this later
Database writer process
Output modified buffer blocks to disks continually
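As a rough sketch of the multithreaded server process described above (not the structure of any particular DBMS), the following uses a fixed thread pool so that one process can execute several user queries concurrently; handleQuery is a hypothetical stand-in for the real parse/optimize/execute path.

// Sketch of a multithreaded server process: a pool of worker threads lets a
// single process execute several user queries (transactions) concurrently.
// handleQuery is a hypothetical stand-in for parsing, executing, and
// shipping results back to the client.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ServerProcessSketch {
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    // Called once per incoming client request.
    public void submit(String sqlText) {
        workers.submit(() -> handleQuery(sqlText));
    }

    private void handleQuery(String sqlText) {
        // Placeholder: execute the query and send results back to the client.
        System.out.println("executing: " + sqlText);
    }
}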
Transaction Server Processes (Cont.)
Log writer process
Server processes simply add log records to log record buffer
Log writer process outputs log records to stable storage.
Checkpoint process
Performs periodic checkpoints
Process monitor process
Monitors other processes, and takes recovery actions if any of the other
processes fail
E.g. aborting any transactions being executed by a server process and
restarting it
Transaction System Processes (Cont.)
[figure: server processes, lock manager, database writer, and log writer processes accessing shared memory containing the buffer pool, query plan cache, log buffer, and lock table]
Transaction System Processes (Cont.)
Shared memory contains shared data
Buffer pool
Lock table
Log buffer
Cached query plans (reused if same query submitted again)
All database processes can access shared memory
To ensure that no two processes are accessing the same data structure at the
same time, database systems implement mutual exclusion using either
Operating system semaphores
Atomic instructions such as test-and-set
To avoid overhead of interprocess communication for lock request/grant,
each database process operates directly on the lock table instead of sending
requests to lock manager process
Lock manager process still used for deadlock detection
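The mutual-exclusion point above can be illustrated with a small latch built on an atomic compare-and-set, standing in for a hardware test-and-set instruction: a database process acquires the latch, manipulates the shared lock table directly, and releases it. This is an illustrative sketch, not the lock-table code of any real system.

// Latch sketch: AtomicBoolean.compareAndSet plays the role of a hardware
// test-and-set instruction; it guards direct access to the shared lock table.
import java.util.concurrent.atomic.AtomicBoolean;

public class LockTableLatch {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void acquire() {
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // busy-wait; real systems back off or block
        }
    }

    public void release() {
        held.set(false);
    }
    // Usage: latch.acquire(); try { /* update a lock table entry */ } finally { latch.release(); }
}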
Data Servers
Used in high-speed LANs, in cases where
The clients are comparable in processing power to the server
The tasks to be executed are compute intensive.
Data are shipped to clients, where processing is performed, and the results are then
shipped back to the server.
This architecture requires full back-end functionality at the clients.
Used in many object-oriented database systems
Issues:
Page-Shipping versus Item-Shipping
Locking
Data Caching
Lock Caching
Data Servers (Cont.)
Page-shipping versus item-shipping
Smaller unit of shipping implies more messages
Worth prefetching related items along with requested item
Page shipping can be thought of as a form of prefetching
Locking
Overhead of requesting and getting locks from server is high due to
message delays
Can grant locks on requested and prefetched items; with page shipping,
transaction is granted lock on whole page.
Locks on a prefetched item can be called back by the server, and
returned by the client transaction if the prefetched item has not been used.
Locks on the page can be deescalated to locks on items in the page when
there are lock conflicts. Locks on unused items can then be returned to
server.
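A sketch of the page-shipping idea discussed above: when the client misses on one item, it asks the server for the whole page containing it, so the remaining items on that page arrive prefetched into the client cache. Page, Item, and ServerStub are hypothetical interfaces, not part of any real data-server API.

// Page shipping as a form of prefetching: one request ships a whole page,
// so later accesses to items on that page need no further messages.
// Page, Item, and ServerStub are hypothetical.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DataServerClientSketch {
    interface Item { String id(); }
    interface Page { List<Item> items(); }
    interface ServerStub { Page fetchPageContaining(String itemId); } // one round trip

    private final ServerStub server;
    private final Map<String, Item> cache = new HashMap<>();

    DataServerClientSketch(ServerStub server) { this.server = server; }

    Item fetch(String itemId) {
        Item cached = cache.get(itemId);
        if (cached != null) return cached;                   // prefetched earlier: no message
        Page page = server.fetchPageContaining(itemId);      // ship the whole page
        for (Item it : page.items()) cache.put(it.id(), it); // prefetch its items
        return cache.get(itemId);
    }
}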
Data Servers (Cont.)
Data Caching
Data can be cached at client even in between transactions
But check that data is up-to-date before it is used (cache coherency)
Check can be done when requesting lock on data item
Lock Caching
Locks can be retained by client system even in between transactions
Transactions can acquire cached locks locally, without contacting server
Server calls back locks from clients when it receives conflicting lock request.
Client returns lock once no local transaction is using it.
Similar to deescalation, but across transactions.
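The lock-caching behaviour above can be sketched as follows: locks granted by the server are kept cached at the client across transactions, local transactions reuse them without messages, and a server callback reclaims a lock once no local transaction still holds it. LockServerStub and its methods are hypothetical.

// Client-side lock cache sketch: cached locks are reused locally across
// transactions; a server callback returns a lock once it is locally unused.
// LockServerStub is hypothetical.
import java.util.HashMap;
import java.util.Map;

public class ClientLockCacheSketch {
    interface LockServerStub { void requestLock(String item); void returnLock(String item); }

    private final LockServerStub server;
    private final Map<String, Integer> localHolders = new HashMap<>(); // item -> # local txns holding it
    private final Map<String, Boolean> cachedLocks = new HashMap<>();  // locks retained at this client

    ClientLockCacheSketch(LockServerStub server) { this.server = server; }

    synchronized void lock(String item) {
        if (!cachedLocks.containsKey(item)) { // not cached: one round trip to the server
            server.requestLock(item);
            cachedLocks.put(item, true);
        }
        localHolders.merge(item, 1, Integer::sum); // granted locally, no message
    }

    synchronized void unlock(String item) {
        localHolders.merge(item, -1, Integer::sum); // lock stays cached for later transactions
    }

    // Invoked when the server calls back a conflicting lock.
    synchronized void onCallback(String item) {
        if (localHolders.getOrDefault(item, 0) <= 0) {
            cachedLocks.remove(item);
            server.returnLock(item); // returned only once no local transaction uses it
        }
        // A fuller sketch would remember the pending callback and return the
        // lock when the local transaction finishes.
    }
}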
Parallel Systems
Parallel database systems consist of multiple processors and multiple disks
connected by a fast interconnection network.
A coarse-grain parallel machine consists of a small number of powerful
processors
A massively parallel or fine grain parallel machine utilizes thousands of smaller
processors.
Two main performance measures:
throughput --- the number of tasks that can be completed in a given time
interval
response time --- the amount of time it takes to complete a single task from
the time it is submitted
Speed-Up and Scale-Up
Speedup: a fixed-sized problem executing on a small system is given to a system
which is N-times larger.
Measured by:
speedup = (small system elapsed time) / (large system elapsed time)
Speedup is linear if equation equals N.
Scaleup: increase the size of both the problem and the system
N-times larger system used to perform N-times larger job
Measured by:
scaleup = (small system, small problem elapsed time) / (big system, big problem elapsed time)
Scaleup is linear if equation equals 1.
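A worked example with hypothetical numbers: if a query takes 100 seconds on the small system and 25 seconds on a system that is N = 4 times larger, speedup = 100 / 25 = 4 = N, i.e. linear; if the larger system needs 40 seconds, speedup = 100 / 40 = 2.5 < N, i.e. sublinear. Similarly, if the small system runs the small problem in 100 seconds and the 4-times larger system runs the 4-times larger problem in 100 seconds, scaleup = 100 / 100 = 1 (linear); if it needs 125 seconds, scaleup = 100 / 125 = 0.8 (sublinear).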
Speedup
[figure: speedup vs. resources, showing linear and sublinear speedup curves]
Scaleup
[figure: scaleup vs. problem size, showing linear and sublinear scaleup curves]
Batch and Transaction Scaleup
Batch scaleup:
A single large job; typical of most database queries and scientific simulation.
Use an N-times larger computer on N-times larger problem.
Transaction scaleup:
Numerous small queries submitted by independent users to a shared
database; typical of transaction processing and timesharing systems.
N-times as many users submitting requests (hence, N-times as many
requests) to an N-times larger database, on an N-times larger computer.
Well-suited to parallel execution.
Factors Limiting Speedup and Scaleup
Speedup and scaleup are often sublinear due to:
Startup costs: Cost of starting up multiple processes may dominate computation
time, if the degree of parallelism is high.
Interference: Processes accessing shared resources (e.g., system bus,
disks, or locks) compete with each other, thus spending time waiting on other
processes, rather than performing useful work.
Skew: Increasing the degree of parallelism increases the variance in service
times of tasks executing in parallel. Overall execution time is determined by the
slowest of these parallel tasks.
Interconnection Network Architectures
Bus. System components send data on and receive data from a single
communication bus;
Does not scale well with increasing parallelism.
Mesh. Components are arranged as nodes in a grid, and each component is
connected to all adjacent components
Communication links grow with growing number of components, and so
scales better.
But may require up to 2√n hops to send a message to a node (or √n with
wraparound connections at the edges of the grid).
Hypercube. Components are numbered in binary; components are connected
to one another if their binary representations differ in exactly one bit.
n components are connected to log(n) other components and can reach each
other via at most log(n) links; reduces communication delays.
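A worked example with n = 1024 components: a mesh is a 32 x 32 grid, so a message may need up to about 2 * sqrt(1024) = 64 hops (about 32 with wraparound), while in a hypercube each component is connected to log2(1024) = 10 others and any message needs at most 10 links.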
Interconnection Architectures
[figure: bus, mesh, and hypercube interconnection architectures]
Parallel Database Architectures
Shared memory -- processors share a common memory
Shared disk -- processors share a common disk
Shared nothing -- processors share neither a common memory nor common
disk
Hierarchical -- hybrid of the above architectures
Parallel Database Architectures
[figure: (a) shared memory, (b) shared disk, (c) shared nothing, (d) hierarchical]
Shared Memory
Processors and disks have access to a common memory, typically via a bus or
through an interconnection network.
Extremely efficient communication between processors — data in shared
memory can be accessed by any processor without having to move it using
software.
Downside – architecture is not scalable beyond 32 or 64 processors since
the bus or the interconnection network becomes a bottleneck
Widely used for lower degrees of parallelism (4 to 8).
Shared Disk
All processors can directly access all disks via an interconnection network, but the
processors have private memories.
The memory bus is not a bottleneck
Architecture provides a degree of fault-tolerance — if a processor fails, the
other processors can take over its tasks since the database is resident on disks
that are accessible from all processors.
Examples: IBM Sysplex and DEC clusters (now part of Compaq) running
Rdb (now Oracle Rdb) were early commercial users
Downside: bottleneck now occurs at interconnection to the disk subsystem.
Shared-disk systems can scale to a somewhat larger number of processors, but
communication between processors is slower.
Shared Nothing
Node consists of a processor, memory, and one or more disks. Processors
at one node communicate with another processor at another node using an
interconnection network. A node functions as the server for the data on the disk
or disks the node owns.
Examples: Teradata, Tandem, Oracle nCUBE
Data accessed from local disks (and local memory accesses) do not pass
through interconnection network, thereby minimizing the interference of resource
sharing.
Shared-nothing multiprocessors can be scaled up to thousands of processors
without interference.
Main drawback: cost of communication and non-local disk access; sending data
involves software interaction at both ends.
Hierarchical
Combines characteristics of shared-memory, shared-disk, and shared-nothing
architectures.
Top level is a shared-nothing architecture – nodes connected by an
interconnection network, and do not share disks or memory with each other.
Each node of the system could be a shared-memory system with a few
processors.
Alternatively, each node could be a shared-disk system, and each of the systems
sharing a set of disks could be a shared-memory system.
Reduce the complexity of programming such systems by distributed virtual-memory architectures
Also called non-uniform memory architecture (NUMA)
Distributed Systems
Data spread over multiple machines (also referred to as sites or nodes).
Network interconnects the machines
Data shared by users on multiple machines
Distributed Databases
Homogeneous distributed databases
Same software/schema on all sites, data may be partitioned among sites
Goal: provide a view of a single database, hiding details of distribution
Heterogeneous distributed databases
Different software/schema on different sites
Goal: integrate existing databases to provide useful functionality
Differentiate between local and global transactions
A local transaction accesses data in the single site at which the transaction
was initiated.
A global transaction either accesses data in a site different from the one at
which the transaction was initiated or accesses data in several different sites.
Trade-offs in Distributed Systems
Sharing data – users at one site able to access the data residing at some other
sites.
Autonomy – each site is able to retain a degree of control over data stored locally.
Higher system availability through redundancy — data can be replicated at remote
sites, and system can function even if a site fails.
Disadvantage: added complexity required to ensure proper coordination among
sites.
Software development cost.
Greater potential for bugs.
Increased processing overhead.
Implementation Issues for Distributed Databases
Atomicity needed even for transactions that update data at multiple sites
The two-phase commit protocol (2PC) is used to ensure atomicity
Basic idea: each site executes the transaction until just before commit, and then leaves the
final decision to a coordinator
Each site must follow the decision of the coordinator, even if there is a failure while
waiting for the coordinator's decision
2PC is not always appropriate: other transaction models based on persistent
messaging, and workflows, are also used
Distributed concurrency control (and deadlock detection) required
Data items may be replicated to improve data availability
Details of above in Chapter 22
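A minimal sketch of the two-phase commit idea from the coordinator's side (logging, timeouts, and failure handling are covered in Chapter 22); SiteStub and its methods are hypothetical placeholders for the real inter-site protocol.

// Two-phase commit, coordinator side (sketch only): phase 1 collects
// prepare votes, phase 2 broadcasts the final commit/abort decision.
// SiteStub is hypothetical; logging, timeouts, and recovery are omitted.
import java.util.List;

public class TwoPhaseCommitSketch {
    interface SiteStub {
        boolean prepare(String txnId); // phase 1: is the site ready to commit?
        void commit(String txnId);     // phase 2: decision = commit
        void abort(String txnId);      // phase 2: decision = abort
    }

    boolean commitTransaction(String txnId, List<SiteStub> sites) {
        boolean allReady = true;
        for (SiteStub site : sites) {              // phase 1
            if (!site.prepare(txnId)) { allReady = false; break; }
        }
        for (SiteStub site : sites) {              // phase 2: every site follows the decision
            if (allReady) site.commit(txnId); else site.abort(txnId);
        }
        return allReady;
    }
}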
Network Types
Local-area networks (LANs) – composed of processors that are distributed
over small geographical areas, such as a single building or a few adjacent
buildings.
Wide-area networks (WANs) – composed of processors distributed over a
large geographical area.
Networks Types (Cont.)
WANs with continuous connection (e.g. the Internet) are needed for
implementing distributed database systems
Groupware applications such as Lotus Notes can work on WANs with
discontinuous connection:
Data is replicated.
Updates are propagated to replicas periodically.
Copies of data may be updated independently.
Non-serializable executions can thus result. Resolution is application
dependent.
End of Chapter
Database System Concepts, 5th Ed.
©Silberschatz, Korth and Sudarshan
See www.db-book.com for conditions on re-use