database_course_silberschatz_2005_ch14

Chapter 14: Query Optimization

Slide 1: Chapter 14: Query Optimization

Slide 2: Chapter 14: Query Optimization
- Introduction
- Transformation of Relational Expressions
- Catalog Information for Cost Estimation
- Statistical Information for Cost Estimation
- Cost-based optimization
- Dynamic Programming for Choosing Evaluation Plans
- Materialized views

Slide 3: Introduction
- Alternative ways of evaluating a given query
  - Equivalent expressions
  - Different algorithms for each operation (Chapter 13)
- Cost difference between a good and a bad way of evaluating a query can be enormous
- Need to estimate the cost of operations
  - Statistical information about relations. Examples: number of tuples, number of distinct values for an attribute, etc.
  - Statistics estimation for intermediate results to compute the cost of complex expressions

Slide 4: Introduction (Cont.)
- Relations generated by two equivalent expressions have the same set of attributes and contain the same set of tuples, although their tuples/attributes may be ordered differently.

Slide 5: Introduction (Cont.)
- Generation of query-evaluation plans for an expression involves several steps:
  1. Generating logically equivalent expressions using equivalence rules
  2. Annotating resultant expressions to get alternative query plans
  3. Choosing the cheapest plan based on estimated cost
- The overall process is called cost-based optimization.

Slide 6: Transformation of Relational Expressions
- Two relational algebra expressions are said to be equivalent if, on every legal database instance, the two expressions generate the same set of tuples
  - Note: order of tuples is irrelevant
- In SQL, inputs and outputs are multisets of tuples
  - Two expressions in the multiset version of the relational algebra are said to be equivalent if, on every legal database instance, the two expressions generate the same multiset of tuples
- An equivalence rule says that expressions of two forms are equivalent
  - Can replace an expression of the first form by the second, or vice versa

Slide 7: Equivalence Rules
1. Conjunctive selection operations can be deconstructed into a sequence of individual selections.
2. Selection operations are commutative.
3. Only the last in a sequence of projection operations is needed; the others can be omitted.
4. Selections can be combined with Cartesian products and theta joins:
   (a) σ_θ(E1 × E2) = E1 ⋈_θ E2
   (b) σ_θ1(E1 ⋈_θ2 E2) = E1 ⋈_{θ1 ∧ θ2} E2

Slide 8: Equivalence Rules (Cont.)
5. Theta-join operations (and natural joins) are commutative:
   E1 ⋈_θ E2 = E2 ⋈_θ E1
6. (a) Natural join operations are associative:
       (E1 ⋈ E2) ⋈ E3 = E1 ⋈ (E2 ⋈ E3)
   (b) Theta joins are associative in the following manner:
       (E1 ⋈_θ1 E2) ⋈_{θ2 ∧ θ3} E3 = E1 ⋈_{θ1 ∧ θ3} (E2 ⋈_θ2 E3)
       where θ2 involves attributes from only E2 and E3.

Slide 9: Pictorial Depiction of Equivalence Rules

Slide 10: Equivalence Rules (Cont.)
7. The selection operation distributes over the theta-join operation under the following two conditions:
   (a) When all the attributes in θ0 involve only the attributes of one of the expressions (E1) being joined:
       σ_θ0(E1 ⋈_θ E2) = (σ_θ0(E1)) ⋈_θ E2
   (b) When θ1 involves only the attributes of E1 and θ2 involves only the attributes of E2:
       σ_{θ1 ∧ θ2}(E1 ⋈_θ E2) = (σ_θ1(E1)) ⋈_θ (σ_θ2(E2))

Slide 11: Equivalence Rules (Cont.)
8. The projection operation distributes over the theta-join operation as follows:
   (a) If θ involves only attributes from L1 ∪ L2:
       Π_{L1 ∪ L2}(E1 ⋈_θ E2) = (Π_L1(E1)) ⋈_θ (Π_L2(E2))
   (b) Consider a join E1 ⋈_θ E2. Let L1 and L2 be sets of attributes from E1 and E2, respectively. Let L3 be attributes of E1 that are involved in join condition θ but are not in L1 ∪ L2, and let L4 be attributes of E2 that are involved in join condition θ but are not in L1 ∪ L2. Then:
       Π_{L1 ∪ L2}(E1 ⋈_θ E2) = Π_{L1 ∪ L2}((Π_{L1 ∪ L3}(E1)) ⋈_θ (Π_{L2 ∪ L4}(E2)))

Slide 12: Equivalence Rules (Cont.)
9. The set operations union and intersection are commutative:
   E1 ∪ E2 = E2 ∪ E1
   E1 ∩ E2 = E2 ∩ E1
   (set difference is not commutative)
10. Set union and intersection are associative:
    (E1 ∪ E2) ∪ E3 = E1 ∪ (E2 ∪ E3)
    (E1 ∩ E2) ∩ E3 = E1 ∩ (E2 ∩ E3)
11. The selection operation distributes over ∪, ∩ and –:
    σ_θ(E1 – E2) = σ_θ(E1) – σ_θ(E2)
    and similarly for ∪ and ∩ in place of –
    Also: σ_θ(E1 – E2) = σ_θ(E1) – E2
    and similarly for ∩ in place of –, but not for ∪
12. The projection operation distributes over union:
    Π_L(E1 ∪ E2) = (Π_L(E1)) ∪ (Π_L(E2))

Slide 13: Transformation Example
- Query: Find the names of all customers who have an account at some branch located in Brooklyn.
  Π_customer_name(σ_branch_city = “Brooklyn” (branch ⋈ (account ⋈ depositor)))
- Transformation using rule 7a:
  Π_customer_name((σ_branch_city = “Brooklyn” (branch)) ⋈ (account ⋈ depositor))
- Performing the selection as early as possible reduces the size of the relation to be joined.

Slide 14: Example with Multiple Transformations
- Query: Find the names of all customers with an account at a Brooklyn branch whose account balance is over $1000.
  Π_customer_name(σ_{branch_city = “Brooklyn” ∧ balance > 1000}(branch ⋈ (account ⋈ depositor)))
- Transformation using join associativity (Rule 6a):
  Π_customer_name((σ_{branch_city = “Brooklyn” ∧ balance > 1000}(branch ⋈ account)) ⋈ depositor)
- The second form provides an opportunity to apply the “perform selections early” rule, resulting in the subexpression
  σ_branch_city = “Brooklyn” (branch) ⋈ σ_balance > 1000 (account)
- Thus a sequence of transformations can be useful.

Slide 15: Multiple Transformations (Cont.)

Slide 16: Projection Operation Example
- Consider the query
  Π_customer_name((σ_branch_city = “Brooklyn” (branch) ⋈ account) ⋈ depositor)
- When we compute (σ_branch_city = “Brooklyn” (branch) ⋈ account) we obtain a relation whose schema is:
  (branch_name, branch_city, assets, account_number, balance)
- Push projections using equivalence rules 8a and 8b; eliminate unneeded attributes from intermediate results to get:
  Π_customer_name((Π_account_number(σ_branch_city = “Brooklyn” (branch) ⋈ account)) ⋈ depositor)
- Performing the projection as early as possible reduces the size of the relation to be joined.

Slide 17: Join Ordering Example
- For all relations r1, r2, and r3:
  (r1 ⋈ r2) ⋈ r3 = r1 ⋈ (r2 ⋈ r3)
- If r2 ⋈ r3 is quite large and r1 ⋈ r2 is small, we choose (r1 ⋈ r2) ⋈ r3 so that we compute and store a smaller temporary relation.

Slide 18: Join Ordering Example (Cont.)
- Consider the expression
  Π_customer_name((σ_branch_city = “Brooklyn” (branch)) ⋈ (account ⋈ depositor))
- Could compute account ⋈ depositor first, and join the result with σ_branch_city = “Brooklyn” (branch), but account ⋈ depositor is likely to be a large relation.
- Only a small fraction of the bank’s customers are likely to have accounts in branches located in Brooklyn, so it is better to compute
  σ_branch_city = “Brooklyn” (branch) ⋈ account
  first.

Slide 19: Enumeration of Equivalent Expressions
- Query optimizers use equivalence rules to systematically generate expressions equivalent to the given expression
- Conceptually, generate all equivalent expressions by repeatedly executing the following step until no more expressions can be found:
  - for each expression found so far, use all applicable equivalence rules, and add newly generated expressions to the set of expressions found so far
- The above approach is very expensive in space and time
- Space requirements are reduced by sharing common subexpressions:
  - when E1 is generated from E2 by an equivalence rule, usually only the top level of the two is different; the subtrees below are the same and can be shared
  - e.g. when applying join associativity
- Time requirements are reduced by not generating all expressions
  - More details shortly

Slide 20: Cost Estimation
- Cost of each operator is computed as described in Chapter 13
  - Need statistics of input relations
  - e.g. number of tuples, sizes of tuples
- Inputs can be results of sub-expressions
  - Need to estimate statistics of expression results
  - To do so, we require additional statistics
  - e.g. number of distinct values for an attribute
- More on cost estimation later

Slide 21: Evaluation Plan
- An evaluation plan defines exactly what algorithm is used for each operation, and how the execution of the operations is coordinated.

Slide 22: Choice of Evaluation Plans
- Must consider the interaction of evaluation techniques when choosing evaluation plans: choosing the cheapest algorithm for each operation independently may not yield the best overall algorithm. E.g.
  - merge join may be costlier than hash join, but may provide a sorted output which reduces the cost for an outer-level aggregation
  - nested-loop join may provide an opportunity for pipelining
- Practical query optimizers incorporate elements of the following two broad approaches:
  1. Search all the plans and choose the best plan in a cost-based fashion.
  2. Use heuristics to choose a plan.

Slide 23: Cost-Based Optimization
- Consider finding the best join order for r1 ⋈ r2 ⋈ . . . ⋈ rn.
- There are (2(n – 1))!/(n – 1)! different join orders for the above expression. With n = 7, the number is 665,280; with n = 10, the number is greater than 17.6 billion!
- No need to generate all the join orders. Using dynamic programming, the least-cost join order for any subset of {r1, r2, . . . , rn} is computed only once and stored for future use.
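
As a quick sanity check on these numbers, the short Python sketch below (an illustration added for this transcript, not part of the original slides) simply evaluates (2(n – 1))!/(n – 1)! for a few values of n:

    from math import factorial

    def num_join_orders(n: int) -> int:
        # number of different join orders for r1 ⋈ r2 ⋈ ... ⋈ rn: (2(n-1))! / (n-1)!
        return factorial(2 * (n - 1)) // factorial(n - 1)

    for n in (3, 5, 7, 10):
        print(n, num_join_orders(n))
    # n = 7  -> 665280
    # n = 10 -> 17643225600  (about 17.6 billion)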

Slide 24: Dynamic Programming in Optimization
- To find the best join tree for a set of n relations:
  - To find the best plan for a set S of n relations, consider all possible plans of the form S1 ⋈ (S – S1), where S1 is any non-empty subset of S.
  - Recursively compute costs for joining subsets of S to find the cost of each plan. Choose the cheapest of the 2^n – 1 alternatives.
  - When the plan for any subset is computed, store it and reuse it when it is required again, instead of recomputing it
    - Dynamic programming

Slide 25: Join Order Optimization Algorithm

    procedure findbestplan(S)
        if (bestplan[S].cost ≠ ∞)
            return bestplan[S]
        // else bestplan[S] has not been computed earlier, compute it now
        if (S contains only 1 relation)
            set bestplan[S].plan and bestplan[S].cost based on the best way
            of accessing S
        else
            for each non-empty subset S1 of S such that S1 ≠ S
                P1 = findbestplan(S1)
                P2 = findbestplan(S – S1)
                A = best algorithm for joining results of P1 and P2
                cost = P1.cost + P2.cost + cost of A
                if cost < bestplan[S].cost
                    bestplan[S].cost = cost
                    bestplan[S].plan = “execute P1.plan; execute P2.plan;
                                        join results of P1 and P2 using A”
        return bestplan[S]
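
A minimal runnable version of the same dynamic program, sketched in Python. The relation names and cardinalities in CARD and the scan_cost/join_size cost model are made-up placeholders; a real optimizer would plug in the Chapter 13 cost formulas and record the chosen join algorithm. The point is only to show the memoized recursion over subsets:

    from functools import lru_cache

    # hypothetical per-relation statistics: estimated cardinalities
    CARD = {"r1": 1000, "r2": 200, "r3": 50, "r4": 3000}

    def scan_cost(rel):
        return CARD[rel]                      # toy cost of reading one relation

    def join_size(card_left, card_right):
        return card_left * card_right // 10   # toy join-size estimate

    @lru_cache(maxsize=None)
    def find_best_plan(S: frozenset):
        """Return (cost, estimated size, plan string) for joining the relations in S."""
        if len(S) == 1:
            (rel,) = S
            return scan_cost(rel), CARD[rel], rel
        best = None
        members = sorted(S)
        # enumerate non-empty proper subsets S1 of S; S - S1 is the other input
        for mask in range(1, 2 ** len(members) - 1):
            S1 = frozenset(m for i, m in enumerate(members) if mask & (1 << i))
            S2 = S - S1
            c1, n1, p1 = find_best_plan(S1)
            c2, n2, p2 = find_best_plan(S2)
            size = join_size(n1, n2)
            cost = c1 + c2 + size             # toy model: join cost ~ output size
            if best is None or cost < best[0]:
                best = (cost, size, f"({p1} ⋈ {p2})")
        return best

    print(find_best_plan(frozenset(CARD)))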

Slide 26: Left Deep Join Trees
- In left-deep join trees, the right-hand-side input for each join is a relation, not the result of an intermediate join.

Slide 27: Cost of Optimization
- With dynamic programming, the time complexity of optimization with bushy trees is O(3^n).
  - With n = 10, this number is about 59,000 instead of 17.6 billion!
- Space complexity is O(2^n)
- To find the best left-deep join tree for a set of n relations:
  - Consider n alternatives with one relation as the right-hand-side input and the other relations as the left-hand-side input.
  - Using the (recursively computed and stored) least-cost join order for each alternative on the left-hand side, choose the cheapest of the n alternatives.
- If only left-deep trees are considered, the time complexity of finding the best join order is O(n 2^n)
  - Space complexity remains at O(2^n)
- Cost-based optimization is expensive, but worthwhile for queries on large datasets (typical queries have a small n, generally < 10)

Slide 28: Interesting Orders in Cost-Based Optimization
- Consider the expression (r1 ⋈ r2 ⋈ r3) ⋈ r4 ⋈ r5
- An interesting sort order is a particular sort order of tuples that could be useful for a later operation.
  - Generating the result of r1 ⋈ r2 ⋈ r3 sorted on the attributes common with r4 or r5 may be useful, but generating it sorted on the attributes common to only r1 and r2 is not useful.
  - Using merge join to compute r1 ⋈ r2 ⋈ r3 may be costlier, but may provide an output sorted in an interesting order.
- Not sufficient to find the best join order for each subset of the set of n given relations; must find the best join order for each subset, for each interesting sort order
  - Simple extension of earlier dynamic programming algorithms
  - Usually, the number of interesting orders is quite small and doesn’t affect time/space complexity significantly

Slide 29: Heuristic Optimization
- Cost-based optimization is expensive, even with dynamic programming.
- Systems may use heuristics to reduce the number of choices that must be made in a cost-based fashion.
- Heuristic optimization transforms the query tree by using a set of rules that typically (but not in all cases) improve execution performance:
  - Perform selection early (reduces the number of tuples)
  - Perform projection early (reduces the number of attributes)
  - Perform the most restrictive selection and join operations before other similar operations.
- Some systems use only heuristics; others combine heuristics with partial cost-based optimization.

Slide 30: Steps in Typical Heuristic Optimization
1. Deconstruct conjunctive selections into a sequence of single selection operations (Equiv. rule 1).
2. Move selection operations down the query tree for the earliest possible execution (Equiv. rules 2, 7a, 7b, 11).
3. Execute first those selection and join operations that will produce the smallest relations (Equiv. rule 6).
4. Replace Cartesian product operations that are followed by a selection condition by join operations (Equiv. rule 4a).
5. Deconstruct and move as far down the tree as possible lists of projection attributes, creating new projections where needed (Equiv. rules 3, 8a, 8b, 12).
6. Identify those subtrees whose operations can be pipelined, and execute them using pipelining.

Slide 31: Structure of Query Optimizers
- The System R/Starburst optimizer considers only left-deep join orders. This reduces optimization complexity and generates plans amenable to pipelined evaluation. System R/Starburst also uses heuristics to push selections and projections down the query tree.
- Heuristic optimization used in some versions of Oracle:
  - Repeatedly pick the “best” relation to join next
  - Starting from each of n starting points, pick the best among these.
- For scans using secondary indices, some optimizers take into account the probability that the page containing the tuple is in the buffer.
- Intricacies of SQL complicate query optimization
  - e.g. nested subqueries

Slide 32: Structure of Query Optimizers (Cont.)
- Some query optimizers integrate heuristic selection and the generation of alternative access plans.
  - System R and Starburst use a hierarchical procedure based on the nested-block concept of SQL: heuristic rewriting followed by cost-based join-order optimization.
- Even with the use of heuristics, cost-based query optimization imposes a substantial overhead.
  - This expense is usually more than offset by savings at query-execution time, particularly by reducing the number of slow disk accesses.

Slide 33: Statistical Information for Cost Estimation
- n_r: number of tuples in a relation r.
- b_r: number of blocks containing tuples of r.
- l_r: size of a tuple of r.
- f_r: blocking factor of r — i.e., the number of tuples of r that fit into one block.
- V(A, r): number of distinct values that appear in r for attribute A; same as the size of Π_A(r).
- If tuples of r are stored together physically in a file, then: b_r = ⌈n_r / f_r⌉
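
For the estimation sketches later in this transcript it is convenient to keep these catalog statistics in one small container. The Python dataclass below is illustrative only (the field names mirror the symbols above; the example numbers are taken from the running example on slide 37):

    from dataclasses import dataclass, field
    from math import ceil

    @dataclass
    class RelationStats:
        n_r: int                                  # number of tuples
        f_r: int                                  # blocking factor (tuples per block)
        v: dict = field(default_factory=dict)     # V(A, r) for selected attributes A

        @property
        def b_r(self) -> int:
            # if tuples are stored together physically: b_r = ceil(n_r / f_r)
            return ceil(self.n_r / self.f_r)

    customer = RelationStats(n_r=10_000, f_r=25, v={"customer_name": 10_000})
    depositor = RelationStats(n_r=5_000, f_r=50, v={"customer_name": 2_500})
    print(customer.b_r, depositor.b_r)            # 400 100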

Slide 34: Histograms
- Histogram on attribute age of relation person
- Equi-width histograms
- Equi-depth histograms
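
The slide itself is a picture; the sketch below shows, under assumed bucket boundaries, counts, and distinct values per bucket (none of which come from the figure), how an equi-width histogram refines an equality-selection estimate by using only the matching bucket:

    # hypothetical equi-width histogram on person.age: (low, high, tuple_count) buckets
    age_hist = [(0, 20, 300), (20, 40, 1200), (40, 60, 800), (60, 80, 150)]

    def estimate_age_equals(v, distinct_per_bucket=20):
        # assume values are uniformly distributed within the matching bucket
        for low, high, count in age_hist:
            if low <= v < high:
                return count / distinct_per_bucket
        return 0

    print(estimate_age_equals(25))    # 1200 / 20 = 60.0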

Slide 35: Selection Size Estimation
- σ_A=v(r)
  - n_r / V(A, r): number of records that will satisfy the selection
  - Equality condition on a key attribute: size estimate = 1
- σ_A≤v(r) (the case of σ_A≥v(r) is symmetric)
  - Let c denote the estimated number of tuples satisfying the condition.
  - If min(A, r) and max(A, r) are available in the catalog:
    - c = 0 if v < min(A, r)
    - c = n_r · (v – min(A, r)) / (max(A, r) – min(A, r)) otherwise
  - If histograms are available, the above estimate can be refined
  - In the absence of statistical information, c is assumed to be n_r / 2.
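
These two estimates translate directly into code. The Python sketch below is illustrative only; the equality example reuses V(customer_name, depositor) from the running example, while the range example uses made-up min/max values:

    def estimate_equality(n_r, v_a_r):
        # σ_{A=v}(r): n_r / V(A, r); for a key attribute V(A, r) = n_r, giving 1
        return n_r / v_a_r

    def estimate_less_equal(n_r, v, a_min, a_max):
        # σ_{A<=v}(r) when min(A, r) and max(A, r) are in the catalog
        if v < a_min:
            return 0
        if v >= a_max:
            return n_r
        return n_r * (v - a_min) / (a_max - a_min)

    print(estimate_equality(5000, 2500))             # 2.0 tuples per customer_name
    print(estimate_less_equal(10_000, 30, 18, 80))   # ~1935 tuples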

Slide 36: Size Estimation of Complex Selections
- The selectivity of a condition θi is the probability that a tuple in the relation r satisfies θi.
  - If si is the number of satisfying tuples in r, the selectivity of θi is given by si / n_r.
- Conjunction: σ_{θ1 ∧ θ2 ∧ . . . ∧ θn}(r). Assuming independence, the estimated number of tuples in the result is:
  n_r · (s1 · s2 · . . . · sn) / n_r^n
- Disjunction: σ_{θ1 ∨ θ2 ∨ . . . ∨ θn}(r). Estimated number of tuples:
  n_r · (1 – (1 – s1/n_r) · (1 – s2/n_r) · . . . · (1 – sn/n_r))
- Negation: σ_¬θ(r). Estimated number of tuples: n_r – size(σ_θ(r))
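
Under the same independence assumption, the combination rules can be written as below; this is an illustrative sketch with made-up condition sizes (two conditions, each selecting 1000 of 10,000 tuples):

    from functools import reduce

    def conjunction(n_r, sizes):
        # n_r * (s1 * s2 * ... * sk) / n_r**k
        return n_r * reduce(lambda acc, s: acc * (s / n_r), sizes, 1.0)

    def disjunction(n_r, sizes):
        # n_r * (1 - (1 - s1/n_r) * (1 - s2/n_r) * ... * (1 - sk/n_r))
        return n_r * (1 - reduce(lambda acc, s: acc * (1 - s / n_r), sizes, 1.0))

    def negation(n_r, size_theta):
        # n_r - size(σ_θ(r))
        return n_r - size_theta

    print(conjunction(10_000, [1000, 1000]))    # 100.0
    print(disjunction(10_000, [1000, 1000]))    # 1900.0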

Slide 37: Join Operation: Running Example
- Running example: depositor ⋈ customer
- Catalog information for join examples:
  - n_customer = 10,000.
  - f_customer = 25, which implies that b_customer = 10000/25 = 400.
  - n_depositor = 5000.
  - f_depositor = 50, which implies that b_depositor = 5000/50 = 100.
  - V(customer_name, depositor) = 2500, which implies that, on average, each customer has two accounts.
  - Also assume that customer_name in depositor is a foreign key on customer.
  - V(customer_name, customer) = 10000 (primary key!)

Slide 38: Estimation of the Size of Joins
- The Cartesian product r × s contains n_r · n_s tuples; each tuple occupies s_r + s_s bytes.
- If R ∩ S = ∅, then r ⋈ s is the same as r × s.
- If R ∩ S is a key for R, then a tuple of s will join with at most one tuple from r
  - therefore, the number of tuples in r ⋈ s is no greater than the number of tuples in s.
- If R ∩ S is a foreign key in S referencing R, then the number of tuples in r ⋈ s is exactly the same as the number of tuples in s.
  - The case of R ∩ S being a foreign key referencing S is symmetric.
- In the example query depositor ⋈ customer, customer_name in depositor is a foreign key of customer
  - hence, the result has exactly n_depositor tuples, which is 5000

Slide 39: Estimation of the Size of Joins (Cont.)
- If R ∩ S = {A} is not a key for R or S:
  - If we assume that every tuple t in R produces tuples in R ⋈ S, the number of tuples in R ⋈ S is estimated to be:
    n_r · n_s / V(A, s)
  - If the reverse is true, the estimate obtained will be:
    n_r · n_s / V(A, r)
  - The lower of these two estimates is probably the more accurate one.
- Can improve on the above if histograms are available
  - Use a formula similar to the above for each cell of the histograms on the two relations
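
In code the non-key case is just the smaller of the two ratios; the sketch below is illustrative and reproduces the depositor ⋈ customer numbers worked out on the next slide:

    def estimate_join_size(n_r, n_s, v_a_r, v_a_s):
        # R ∩ S = {A}, A a key for neither relation:
        # take the lower of n_r*n_s / V(A, s) and n_r*n_s / V(A, r)
        return min(n_r * n_s / v_a_s, n_r * n_s / v_a_r)

    # depositor ⋈ customer, ignoring the foreign-key information:
    print(estimate_join_size(5000, 10_000, v_a_r=2500, v_a_s=10_000))
    # min(5000, 20000) = 5000.0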

Slide 40: Estimation of the Size of Joins (Cont.)
- Compute the size estimates for depositor ⋈ customer without using information about foreign keys:
  - V(customer_name, depositor) = 2500, and V(customer_name, customer) = 10000
  - The two estimates are 5000 · 10000/2500 = 20,000 and 5000 · 10000/10000 = 5000
  - We choose the lower estimate, which in this case is the same as our earlier computation using foreign keys.

Slide 41: Size Estimation for Other Operations
- Projection: estimated size of Π_A(r) = V(A, r)
- Aggregation: estimated size of _A g_F(r) = V(A, r)
- Set operations
  - For unions/intersections of selections on the same relation: rewrite and use the size estimate for selections
    - e.g. σ_θ1(r) ∪ σ_θ2(r) can be rewritten as σ_{θ1 ∨ θ2}(r)
  - For operations on different relations:
    - estimated size of r ∪ s = size of r + size of s
    - estimated size of r ∩ s = minimum of the size of r and the size of s
    - estimated size of r – s = size of r
    - All three estimates may be quite inaccurate, but they provide upper bounds on the sizes.

Slide 42: Size Estimation (Cont.)
- Outer join:
  - Estimated size of r ⟕ s = size of r ⋈ s + size of r
    - The case of the right outer join is symmetric
  - Estimated size of r ⟗ s = size of r ⋈ s + size of r + size of s

Slide 43: Estimation of Number of Distinct Values
- Selections: σ_θ(r)
  - If θ forces A to take a specified value: V(A, σ_θ(r)) = 1.
    - e.g., A = 3
  - If θ forces A to take on one of a specified set of values: V(A, σ_θ(r)) = number of specified values.
    - e.g., (A = 1 ∨ A = 3 ∨ A = 4)
  - If the selection condition θ is of the form A op v: estimated V(A, σ_θ(r)) = V(A, r) · s, where s is the selectivity of the selection.
  - In all other cases: use the approximate estimate min(V(A, r), n_{σ_θ(r)})
    - A more accurate estimate can be obtained using probability theory, but this one generally works fine

Slide 44: Estimation of Distinct Values (Cont.)
- Joins: r ⋈ s
  - If all attributes in A are from r: estimated V(A, r ⋈ s) = min(V(A, r), n_{r ⋈ s})
  - If A contains attributes A1 from r and A2 from s, then estimated
    V(A, r ⋈ s) = min(V(A1, r) · V(A2 – A1, s), V(A1 – A2, r) · V(A2, s), n_{r ⋈ s})
  - A more accurate estimate can be obtained using probability theory, but this one generally works fine
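
The same min-based bounds in code (an illustrative sketch; the example call uses V(customer_name, depositor) = 2500 and the join size 5000 from the running example):

    def v_join_single_side(v_a_r, n_join):
        # all attributes in A come from r: min(V(A, r), n_{r ⋈ s})
        return min(v_a_r, n_join)

    def v_join_both_sides(v_a1_r, v_a2_minus_a1_s, v_a1_minus_a2_r, v_a2_s, n_join):
        # A = A1 ∪ A2 with A1 from r, A2 from s:
        # min(V(A1, r) * V(A2 - A1, s), V(A1 - A2, r) * V(A2, s), n_{r ⋈ s})
        return min(v_a1_r * v_a2_minus_a1_s, v_a1_minus_a2_r * v_a2_s, n_join)

    print(v_join_single_side(2500, 5000))    # 2500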

Slide 45: Estimation of Distinct Values (Cont.)
- Estimation of distinct values is straightforward for projections.
  - They are the same in Π_A(r) as in r.
- The same holds for the grouping attributes of an aggregation.
- For aggregated values:
  - For min(A) and max(A), the number of distinct values can be estimated as min(V(A, r), V(G, r)), where G denotes the grouping attributes
  - For other aggregates, assume all values are distinct, and use V(G, r)

Slide 46: Optimizing Nested Subqueries**
- SQL conceptually treats nested subqueries in the where clause as functions that take parameters and return a single value or set of values
  - Parameters are variables from the outer-level query that are used in the nested subquery; such variables are called correlation variables
- E.g.
    select customer_name
    from borrower
    where exists (select *
                  from depositor
                  where depositor.customer_name = borrower.customer_name)
- Conceptually, the nested subquery is executed once for each tuple in the cross-product generated by the outer-level from clause
  - Such evaluation is called correlated evaluation
  - Note: other conditions in the where clause may be used to compute a join (instead of a cross-product) before executing the nested subquery

Slide 47: Optimizing Nested Subqueries (Cont.)
- Correlated evaluation may be quite inefficient since
  - a large number of calls may be made to the nested query
  - there may be unnecessary random I/O as a result
- SQL optimizers attempt to transform nested subqueries to joins where possible, enabling the use of efficient join techniques
- E.g.: the earlier nested query can be rewritten as
    select customer_name
    from borrower, depositor
    where depositor.customer_name = borrower.customer_name
  - Note: the above query doesn’t correctly deal with duplicates; it can be modified to do so, as we will see
- In general, it is not possible/straightforward to move the entire nested subquery from clause into the outer-level query from clause
  - A temporary relation is created instead, and used in the body of the outer-level query

Slide 48: Optimizing Nested Subqueries (Cont.)
- In general, SQL queries of the form below can be rewritten as shown
- Rewrite:
    select …
    from L1
    where P1 and exists (select *
                         from L2
                         where P2)
- To:
    create table t1 as
      select distinct V
      from L2
      where P2₁
    select …
    from L1, t1
    where P1 and P2₂
- P2₁ contains predicates in P2 that do not involve any correlation variables
- P2₂ reintroduces predicates involving correlation variables, with relations renamed appropriately
- V contains all attributes used in predicates with correlation variables

Slide 49: Optimizing Nested Subqueries (Cont.)
- In our example, the original nested query would be transformed to
    create table t1 as
      select distinct customer_name
      from depositor
    select customer_name
    from borrower, t1
    where t1.customer_name = borrower.customer_name
- The process of replacing a nested query by a query with a join (possibly with a temporary relation) is called decorrelation.
- Decorrelation is more complicated when
  - the nested subquery uses aggregation, or
  - the result of the nested subquery is used to test for equality, or
  - the condition linking the nested subquery to the other query is not exists,
  - and so on.

Slide 50: Materialized Views**
- A materialized view is a view whose contents are computed and stored.
- Consider the view
    create view branch_total_loan(branch_name, total_loan) as
      select branch_name, sum(amount)
      from loan
      group by branch_name
- Materializing the above view would be very useful if the total loan amount is required frequently
  - Saves the effort of finding multiple tuples and adding up their amounts

Slide 51: Materialized View Maintenance
- The task of keeping a materialized view up-to-date with the underlying data is known as materialized view maintenance
- Materialized views can be maintained by recomputation on every update
- A better option is to use incremental view maintenance
  - Changes to database relations are used to compute changes to the materialized view, which is then updated
- View maintenance can be done by:
  - Manually defining triggers on insert, delete, and update of each relation in the view definition
  - Manually written code to update the view whenever database relations are updated
  - Support provided directly by the database

Slide 52: Incremental View Maintenance
- The changes (inserts and deletes) to a relation or expression are referred to as its differential
  - The sets of tuples inserted into and deleted from r are denoted i_r and d_r
- To simplify our description, we only consider inserts and deletes
  - We replace an update to a tuple by deletion of the tuple followed by insertion of the updated tuple
- We describe how to compute the change to the result of each relational operation, given changes to its inputs
- We then outline how to handle relational algebra expressions

Slide 53: Join Operation
- Consider the materialized view v = r ⋈ s and an update to r
- Let r_old and r_new denote the old and new states of relation r
- Consider the case of an insert to r:
  - We can write r_new ⋈ s as (r_old ∪ i_r) ⋈ s
  - and rewrite the above to (r_old ⋈ s) ∪ (i_r ⋈ s)
  - But (r_old ⋈ s) is simply the old value of the materialized view, so the incremental change to the view is just i_r ⋈ s
- Thus, for inserts: v_new = v_old ∪ (i_r ⋈ s)
- Similarly, for deletes: v_new = v_old – (d_r ⋈ s)
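
A small set-based sketch of this rule (a simplification, since the slides really work with multisets; the join here is a naive nested loop over Python tuples and exists only for illustration):

    def join(r, s, r_key, s_key):
        # naive nested-loop join on one attribute position, for illustration only
        return {(tr, ts) for tr in r for ts in s if tr[r_key] == ts[s_key]}

    def view_after_insert(v_old, i_r, s, r_key, s_key):
        # v_new = v_old ∪ (i_r ⋈ s)
        return v_old | join(i_r, s, r_key, s_key)

    def view_after_delete(v_old, d_r, s, r_key, s_key):
        # v_new = v_old − (d_r ⋈ s)
        return v_old - join(d_r, s, r_key, s_key)

    s = {("alice", 101), ("bob", 102)}              # (customer_name, account_number)
    r = {("alice", "Brooklyn")}                     # (customer_name, branch_city)
    v = join(r, s, 0, 0)                            # initial materialized view
    v = view_after_insert(v, {("bob", "Queens")}, s, 0, 0)
    print(v)                                        # now contains both joined pairs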

Slide 54: Selection and Projection Operations
- Selection: Consider a view v = σ_θ(r).
  - v_new = v_old ∪ σ_θ(i_r)
  - v_new = v_old – σ_θ(d_r)
- Projection is a more difficult operation
  - R = (A, B), and r(R) = {(a, 2), (a, 3)}
  - Π_A(r) has a single tuple (a).
  - If we delete the tuple (a, 2) from r, we should not delete the tuple (a) from Π_A(r), but if we then delete (a, 3) as well, we should delete the tuple
- For each tuple in a projection Π_A(r), we keep a count of how many times it was derived
  - On insert of a tuple to r, if the resultant tuple is already in Π_A(r) we increment its count, else we add a new tuple with count = 1
  - On delete of a tuple from r, we decrement the count of the corresponding tuple in Π_A(r)
    - if the count becomes 0, we delete the tuple from Π_A(r)
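
The derivation count described here is exactly what Python's collections.Counter provides; the sketch below (not from the book) replays the (a, 2)/(a, 3) example from this slide:

    from collections import Counter

    class ProjectionView:
        """Maintains Π_A(r) with a derivation count per projected tuple."""
        def __init__(self, project):
            self.project = project        # maps an r-tuple to its A-part
            self.counts = Counter()

        def insert(self, t):
            self.counts[self.project(t)] += 1

        def delete(self, t):
            key = self.project(t)
            self.counts[key] -= 1
            if self.counts[key] == 0:     # last derivation gone: drop the tuple
                del self.counts[key]

        def tuples(self):
            return set(self.counts)

    v = ProjectionView(lambda t: t[0])      # Π_A over r(A, B)
    v.insert(("a", 2)); v.insert(("a", 3))
    v.delete(("a", 2)); print(v.tuples())   # {'a'}   -- (a) survives
    v.delete(("a", 3)); print(v.tuples())   # set()   -- now it is removed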

Slide 55: Aggregation Operations
- count: v = _A g_count(B)(r)
  - When a set of tuples i_r is inserted:
    - For each tuple t in i_r, if the corresponding group is already present in v, we increment its count, else we add a new tuple with count = 1
  - When a set of tuples d_r is deleted:
    - For each tuple t in d_r, we look for the group t.A in v and subtract 1 from the count for the group.
      - If the count becomes 0, we delete from v the tuple for the group t.A
- sum: v = _A g_sum(B)(r)
  - We maintain the sum in a manner similar to count, except we add/subtract the B value instead of adding/subtracting 1 for the count
  - Additionally, we maintain the count in order to detect groups with no tuples. Such groups are deleted from v
    - Cannot simply test for sum = 0 (why?)
- To handle the case of avg, we maintain the sum and count aggregate values separately, and divide at the end
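
A sketch of maintaining _A g_sum(B) together with the per-group count; it is illustrative code, not the book's, and the final lines show why the count, rather than a zero sum, is what signals an empty group:

    class SumView:
        """Maintains sum(B) grouped by A, plus a tuple count per group."""
        def __init__(self):
            self.sums = {}
            self.counts = {}

        def insert(self, a, b):
            self.sums[a] = self.sums.get(a, 0) + b
            self.counts[a] = self.counts.get(a, 0) + 1

        def delete(self, a, b):
            self.sums[a] -= b
            self.counts[a] -= 1
            if self.counts[a] == 0:       # the group has no tuples left
                del self.sums[a], self.counts[a]

    v = SumView()
    v.insert("Brooklyn", 500); v.insert("Brooklyn", -500)
    print(v.sums)    # {'Brooklyn': 0} -- sum is 0 but the group still exists
    v.delete("Brooklyn", 500); v.delete("Brooklyn", -500)
    print(v.sums)    # {}              -- the count, not the sum, detects the empty group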

Slide 56: Aggregate Operations (Cont.)
- min, max: v = _A g_min(B)(r)
  - Handling insertions on r is straightforward.
  - Maintaining the aggregate values min and max on deletions may be more expensive. We have to look at the other tuples of r that are in the same group to find the new minimum.

Slide 57: Other Operations
- Set intersection: v = r ∩ s
  - When a tuple is inserted in r, we check if it is present in s, and if so we add it to v.
  - If the tuple is deleted from r, we delete it from the intersection if it is present.
  - Updates to s are symmetric
- The other set operations, union and set difference, are handled in a similar fashion.
- Outer joins are handled in much the same way as joins, but with some extra work
  - we leave the details to you.

Slide 58: Handling Expressions
- To handle an entire expression, we derive expressions for computing the incremental change to the result of each sub-expression, starting from the smallest sub-expressions.
- E.g. consider E1 ⋈ E2, where each of E1 and E2 may be a complex expression
  - Suppose the set of tuples to be inserted into E1 is given by D1
    - Computed earlier, since smaller sub-expressions are handled first
  - Then the set of tuples to be inserted into E1 ⋈ E2 is given by D1 ⋈ E2
    - This is just the usual way of maintaining joins

Slide 59: Query Optimization and Materialized Views
- Rewriting queries to use materialized views:
  - A materialized view v = r ⋈ s is available
  - A user submits a query r ⋈ s ⋈ t
  - We can rewrite the query as v ⋈ t
    - Whether to do so depends on cost estimates for the two alternatives
- Replacing a use of a materialized view by the view definition:
  - A materialized view v = r ⋈ s is available, but without any index on it
  - A user submits a query σ_A=10(v). Suppose also that s has an index on the common attribute B, and r has an index on attribute A.
  - The best plan for this query may be to replace v by r ⋈ s, which can lead to the query plan σ_A=10(r) ⋈ s
- The query optimizer should be extended to consider all of the above alternatives and choose the best overall plan

Slide 60: Materialized View Selection
- Materialized view selection: “What is the best set of views to materialize?” This decision must be made on the basis of the system workload.
- Indices are just like materialized views; the problem of index selection is closely related to that of materialized view selection, although it is simpler.
- Some database systems provide tools to help the database administrator with index and materialized view selection.

Slide 61: End of Chapter
