
INTRODUCTION TO ALGORITHMS SECOND EDITION PDF

Sunday, May 19, 2019


Introduction to Algorithms, Second Edition. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. The MIT Press, Cambridge.



The notes and solutions below are drawn from the Instructor's Manual by Thomas H. Cormen, Clara Lee, and Erica Lin to accompany Introduction to Algorithms, Second Edition.

Course Introduction

The looping constructs while, for, and repeat and the conditional constructs if, then, and else have interpretations similar to those in Pascal. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound.

We used this property in our correctness argument for insertion sort. Variables such as i, j, and key are local to the given procedure. We shall not use global variables without explicit indication. Array elements are accessed by specifying the array name followed by the index in square brackets.

For example, A[i] indicates the ith element of the array A. The notation ".." is used to indicate a range of values within an array. Thus, A[1..j] indicates the subarray of A consisting of the j elements A[1], A[2], ..., A[j]. Compound data are typically organized into objects, which are composed of attributes or fields.

A particular field is accessed using the field name followed by the name of its object in square brackets. For example, we treat an array as an object with the attribute length indicating how many elements it contains.

To specify the number of elements in an array A, we write length[A]. Although we use square brackets for both array indexing and object attributes, it will usually be clear from the context which interpretation is intended. A variable representing an array or object is treated as a pointer to the data representing the array or object. Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL. Parameters are passed to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure.
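As a hedged illustration of these conventions (hypothetical names, not from the book), here is a minimal Python sketch; Python's semantics happen to match the pseudocode's, in that reassigning a parameter is invisible to the caller, while assigning to a field of a passed object is visible, as described next.

class Obj:
    def __init__(self):
        self.length = 0

def callee(x, obj):
    x = 99            # reassignment: the caller's variable is unchanged
    obj.length = 42   # field assignment through the copied pointer: the caller sees it

x = 1
obj = Obj()
callee(x, obj)
print(x, obj.length)  # prints "1 42"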

When objects are passed, the pointer to the data representing the object is copied, but the object's fields are not. The boolean operators "and" and "or" are short circuiting. That is, when we evaluate the expression "x and y" we first evaluate x. If x evaluates to FALSE, the entire expression cannot be TRUE, so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression.

Exercises 2.1: Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties. Consider adding two n-bit binary integers: state the problem formally and write pseudocode for adding the two integers.
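A hedged sketch of the linear-search exercise in Python (the book asks for pseudocode); the loop-invariant argument is spelled out in the docstring.

def linear_search(A, v):
    """Return an index i with A[i] == v, or None if v is not in A.

    Loop invariant: at the start of each iteration, v is not in A[0:i].
    Initialization: A[0:0] is empty, so the invariant holds trivially.
    Maintenance: if A[i] != v, the prefix known to exclude v grows by one.
    Termination: either A[i] == v (a correct index is returned), or i reaches
    len(A) and the invariant says v appears nowhere in A (None is returned).
    """
    for i in range(len(A)):
        if A[i] == v:
            return i
    return None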

Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, a most efficient one can be easily identified. Such analysis may indicate more than one viable candidate, but several inferior algorithms are usually discarded in the process. Before we can analyze an algorithm, we must have a model of the implementation technology that will be used, including a model for the resources of that technology and their costs.

For most of this book, we shall assume a generic one-processor, random-access machine (RAM) model of computation as our implementation technology and understand that our algorithms will be implemented as computer programs.

In the RAM model, instructions are executed one after another, with no concurrent operations. In later chapters, however, we shall have occasion to investigate models for digital hardware. Strictly speaking, one should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts?

Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are designed. The RAM model contains instructions commonly found in real computers: arithmetic (add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return).

Each such instruction takes a constant amount of time. The data types in the RAM model are integer and floating point. Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data.

If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time, clearly an unrealistic scenario. Real computers contain instructions not listed above, and such instructions represent a gray area in the RAM model. For example, is exponentiation a constant-time instruction? In the general case, no; it takes several instructions to compute x^y when x and y are real numbers.

In restricted situations, however, exponentiation is a constant-time operation. Many computers have a "shift left" instruction, which in constant time shifts the bits of an integer by k positions to the left.

In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2. Shifting the bits by k positions to the left is equivalent to multiplication by 2^k. Therefore, such computers can compute 2^k in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2^k as a constant-time operation when k is a small enough positive integer.
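A small Python illustration of computing 2^k with a single shift, as described above (Python integers are unbounded, so this only models the constant-time claim for word-sized k).

def two_to_the(k):
    # Shift the integer 1 left by k bit positions; equivalent to 2**k.
    return 1 << k

assert two_to_the(10) == 1024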

In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory (which is most often implemented with demand paging). Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines.

A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, and they can be difficult to work with.

Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines. Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms "running time" and "size of input" more carefully. The best notion for input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input-for example, the array size n for sorting.

For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algorithm is a graph, the input size can be described by the numbers of vertices and edges in the graph.

We shall indicate which input size measure is being used with each problem we study. The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed. It is convenient to define the notion of step so that it is as machine-independent as possible.

For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the ith line takes time c_i, where c_i is a constant.

This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers.

Introduction to Algorithms, Second Edition Solution Manual

This simpler notation will also make it easy to determine whether one algorithm is more efficient than another. When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time. If the array is in reverse sorted order (that is, in decreasing order), the worst case results. Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting "randomized" algorithms whose behavior can vary even for a fixed input.

Worst-case and average-case analysis In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n.

We give three reasons for this orientation. Knowing it gives us a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse. For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database.

In some searching applications, searches for absent information may be frequent. The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1..j-1] to insert element A[j]? On average, half the elements in A[1..j-1] are less than A[j], and half the elements are greater. If we work out the resulting average-case running time, it turns out to be a quadratic function of the input size, just like the worst-case running time.
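For reference, a standard insertion sort in Python; the inner while loop is the scan over the sorted prefix whose average length drives the quadratic average case.

def insertion_sort(A):
    """Sort A in place. The inner while loop scans the sorted prefix
    right to left to find where A[j] belongs; on average about half of
    the prefix is examined, which is why the average case is quadratic."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]   # shift larger elements one slot to the right
            i -= 1
        A[i + 1] = key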

In some particular cases, we shall be interested in the average-case or expected running time of an algorithm; in Chapter 5, we shall see the technique of probabilistic analysis, by which we determine expected running times. One problem with performing an average-case analysis, however, is that it may not be apparent what constitutes an "average" input for a particular problem.

Often, we shall assume that all inputs of a given size are equally likely.

In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis. First, we ignored the actual cost of each statement, using the constants c_i to represent these costs. We thus ignored not only the actual statement costs, but also the abstract costs c_i.

We shall now make one more simplifying abstraction. It is the rate of growth, or order of growth, of the running time that really interests us. We therefore consider only the leading term of a formula (e.g., the an^2 term of a quadratic running time an^2 + bn + c). We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs.

We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth.

Due to constant factors and lower-order terms, this evaluation may be in error for small inputs. Consider sorting the numbers in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n - 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort.

What loop invariant does this algorithm maintain? Why does it need to run for only the first n - 1 elements, rather than for all n elements?
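A sketch answering the pseudocode part of the exercise (Python rather than the book's pseudocode); the loop invariant and the reason for stopping after n - 1 elements are noted in comments.

def selection_sort(A):
    """Loop invariant: A[0:i] holds the i smallest elements, in sorted order.
    After n - 1 iterations, A[0:n-1] holds the n - 1 smallest elements in
    sorted order, so A[n-1] must already be the largest: no nth pass needed."""
    n = len(A)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):   # find the smallest remaining element
            if A[j] < A[m]:
                m = j
        A[i], A[m] = A[m], A[i]     # exchange it into position i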

How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? Justify your answers. Solution to Exercise 5. : Candidate 1 is always hired. The best candidate, i.e., the one whose rank is n, is also always hired. If the best candidate is candidate 1, then that is the only candidate hired. Letting j denote the position in the interview order of the best candidate, let F be the event in which candidates 2, 3, ..., j - 1 are not hired.

Noting that the events E_1, E_2, ... are disjoint, we can add their probabilities. One could enumerate all n! permutations, but this would be a painstaking process, and the answer would turn out to be 1. We can use indicator random variables, however, to arrive at the same answer much more easily.

Note that this is a situation in which the indicator random variables are not independent. Despite the dependence, we can still use the technique of indicator random variables even in the presence of dependence.

The maintenance and termination parts remain the same; the initialization part is for the initial subarray. Since there are 3! possible permutations of three distinct elements, not all of them can be produced. The subtraction and addition of 1 in the index calculation is due to the 1-origin indexing. Thus, once offset is determined, so is the entire permutation. This procedure does not produce a uniform random permutation, however, since it can produce only n different permutations.

We view a toss as a success if it misses bin i and as a failure if it lands in bin i. In order for bin i to be empty, we need n successes in n trials. Now we determine the expected number of bins with exactly one ball. We want to compute E[V_n]. Both ways condition on the value held in the counter, but only the second way incorporates the conditioning into the expression for E[X_j].

The X_j are pairwise independent, and so the variance of their sum is the sum of their variances, by the equations in Appendix C.

Lecture Notes for Chapter 6: Heapsort

Heapsort runs in O(n lg n) time like merge sort, and sorts in place like insertion sort: it combines the best of both algorithms. A heap can be stored as an array A, and computing parent and child indices is fast, since with a binary representation they are implemented with shifts.
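With 0-indexed Python arrays (the book uses 1-origin indexing, where PARENT(i) = floor(i/2), LEFT(i) = 2i, and RIGHT(i) = 2i + 1), the index arithmetic looks like this:

def parent(i): return (i - 1) // 2
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2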

Here, we bypass these attributes and use parameter values instead. For min-heaps (smallest element at the root), the min-heap property is the mirror image of the max-heap property, and a similar argument applies to min-heaps. In general, heaps can be k-ary trees instead of binary. The MAX-HEAPIFY procedure is used to maintain the max-heap property.


Assume the left and right subtrees of i are max-heaps. [Parameter n replaces the attribute heap-size[A].] If necessary, swap A[i] with the larger of the two children to preserve the heap property.

Continue this process of comparing and swapping down the heap, until subtree rooted at i is max-heap. If we hit a leaf, then the subtree rooted at the leaf is trivially a max-heap. Compare node 2 with its children, and then swap it with the larger of the two children.

Continue down the tree, swapping until the value is properly placed at the root of a subtree that is a max-heap. In this case, the max-heap is a leaf.
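A minimal Python sketch of the procedure just described (0-indexed; the book's MAX-HEAPIFY uses 1-origin indexing):

def max_heapify(A, i, heap_size):
    """Assumes the subtrees rooted at i's children are max-heaps;
    floats A[i] down until the subtree rooted at i is a max-heap."""
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and A[l] > A[largest]:
        largest = l
    if r < heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]   # swap with the larger child
        max_heapify(A, largest, heap_size)    # continue down the tree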

Building a heap. The following procedure, given an unordered array, will produce a max-heap. By Exercise 6. , the nodes in the second half of the array are all leaves, and each leaf is a trivial max-heap. Children of node i are indexed higher than i, so by the loop invariant, they are both roots of max-heaps.

Decrementing i reestablishes the loop invariant at each iteration. At termination, by the loop invariant, each node, notably node 1, is the root of a max-heap. A good approach to analysis in general is to start by proving an easy bound, then try to tighten it; a tighter analysis shows that building a heap takes only O(n) time. The heapsort algorithm: given an input array, heapsort first builds a max-heap and then, starting with the root (the maximum element), places the maximum element into the correct place in the array by swapping it with the element in the last position in the array.
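A sketch of building a heap and then sorting, in Python, reusing max_heapify from above; build_max_heap works bottom-up over the internal nodes, and heapsort repeatedly swaps the root to the end.

def build_max_heap(A):
    # Leaves are trivial max-heaps; heapify the internal nodes bottom-up.
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))

def heapsort(A):
    build_max_heap(A)
    for end in range(len(A) - 1, 0, -1):
        A[0], A[end] = A[end], A[0]   # move the current maximum to its final slot
        max_heapify(A, 0, end)        # restore the heap property on the prefix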

Sort an example heap on the board. Analysis: the for loop makes O(n) iterations, each MAX-HEAPIFY call takes O(lg n) time, so the total time is O(n lg n). Though heapsort is a great algorithm, a well-implemented quicksort usually beats it in practice. These notes will deal with max-priority queues implemented with max-heaps. Min-priority queues are implemented with min-heaps similarly. A heap gives a good compromise between fast insertion but slow extraction and vice versa.

Both operations take O(lg n) time. Each set element has a key (an associated value). A max-priority queue supports the dynamic-set operations INSERT, MAXIMUM, EXTRACT-MAX, and INCREASE-KEY; an example application is scheduling jobs on a shared computer. A min-priority queue supports the analogous operations INSERT, MINIMUM, EXTRACT-MIN, and DECREASE-KEY; an example application is event-driven simulation. Actual implementations often have a handle in each heap element that allows access to an object in the application, and objects in the application often have a handle (likely an array index) to access the heap element.

We will examine how to implement the max-priority queue operations. Finding the maximum element is easy: it is at the root. Extracting the max element, given the array A: make a copy of the maximum element (the root); make the last node in the tree the new root; re-heapify the heap, with one fewer node; and return the copy of the maximum element. In the example, move 1 from node 10 to node 1 and erase node 10. Note that successive extractions will remove items in reverse sorted order.
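The four extraction steps, sketched in Python, again reusing max_heapify; the heap size is passed and returned explicitly, since plain Python lists have no heap-size attribute.

def heap_extract_max(A, heap_size):
    """Mirrors the four steps above; returns the maximum and the new heap size."""
    if heap_size < 1:
        raise IndexError("heap underflow")
    maximum = A[0]                # 1. copy the root (the maximum)
    A[0] = A[heap_size - 1]      # 2. make the last node the new root
    heap_size -= 1
    max_heapify(A, 0, heap_size) # 3. re-heapify with one fewer node
    return maximum, heap_size    # 4. return the copy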

Increasing a key value: given the set S, element x, and new key value k, update x's key and restore the heap property by moving the element up toward the root as necessary. Min-priority queue operations are implemented similarly with min-heaps. Solution to Exercise 6. : suppose the maximum is not at the root. Then the maximum element is somewhere else in the subtree, possibly even at more than one location. Let m be the index at which the maximum appears (the lowest such index if the maximum appears more than once).

Since the maximum is not at the root of the subtree, node m has a parent. There are two subtleties to beware of; in particular, the proof for an incomplete tree is tricky and is not derived from the proof for a complete tree. Proof: by induction on h. Let x be the number of nodes at depth H, that is, the number of nodes in the bottom (possibly incomplete) level.

Thus if n is odd, x is even, and if n is even, x is odd. To prove the base case, we must consider separately the case in which n is even (x is odd) and the case in which n is odd (x is even). Here are two ways to do this. First method of proving the base case: thus (see Exercise B. ) the count is as claimed; the latter equality holds because n is odd. Observe that we would also increase the number of leaves by 1, since we added a node to a parent that already had a child. The latter equality holds because n is even.

Second method of proving the base case. Inductive step: let n_h be the number of nodes at height h in the n-node tree T. Consider the following counterexample input array A. A d-ary heap can be represented in a 1-dimensional array as follows.
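A hedged sketch of the index arithmetic for a d-ary heap in a 0-indexed array (the solution's version uses 1-origin indexing):

def d_parent(i, d):
    return (i - 1) // d          # parent of node i in a d-ary heap

def d_child(i, d, j):
    return d * i + j + 1         # j-th child of node i, for 0 <= j < d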

d. Since only parent pointers are followed, the number of children a node has is irrelevant. Increasing an element may make it larger than its parent, in which case it must be moved higher up in the tree.

This can be done just as for insertion, traversing a path from the increased node toward the root. Using Lomuto partitioning helps simplify the analysis, which uses indicator random variables in the second edition. Quicksort has an expected running time of O(n lg n) and sorts in place. Description of quicksort: quicksort is based on the three-step process of divide-and-conquer. Divide: partition A[p..r] around a pivot. Conquer: sort the two subarrays recursively. Combine: no work is needed to combine the subarrays, because they are sorted in place.
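A Python sketch of Lomuto partitioning and the resulting quicksort (partitioning around the pivot A[r]; the recursive calls need no combine step):

def partition(A, p, r):
    """Lomuto partition around pivot A[r]: afterwards A[p:q] <= pivot,
    A[q] == pivot, and A[q+1:r+1] > pivot. Returns q."""
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)   # both halves are sorted in place,
        quicksort(A, q + 1, r)   # so no combine step is needed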

As the procedure executes, the array is partitioned into four regions, some of which may be empty: the entries known to be at most the pivot, the entries known to be greater than the pivot, the entries not yet examined, and the pivot itself. Trace the procedure on an 8-element subarray on the board. The time for partitioning an n-element subarray is Theta(n). Performance of quicksort: the running time of quicksort depends on the partitioning of the subarrays. If they are unbalanced, then quicksort can run as slowly as insertion sort.

Same running time as insertion sort. In fact, the worst-case running time occurs when quicksort takes a sorted array as input, but insertion sort runs in O(n) time in this case. There will usually be a mix of good and bad splits throughout the recursion tree.

There are still the same number of subarrays to sort, and only twice as much work was done to get to that point. This is not always true.

Solutions for Introduction to Algorithms, Second Edition

To correct this, we add randomization to quicksort. We could randomly permute the input array; instead, we use random sampling, picking one element at random. We add this randomization by not always using A[r] as the pivot, but instead randomly picking an element from the subarray that is being sorted.
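The randomized variant, sketched in Python on top of the partition function above:

import random

def randomized_partition(A, p, r):
    i = random.randint(p, r)     # pick the pivot uniformly from A[p..r]
    A[i], A[r] = A[r], A[i]      # move it into the pivot position
    return partition(A, p, r)    # then partition exactly as before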

Worst-case analysis. We will prove that a worst-case split at every level produces a worst-case running time of O(n^2). Substituting our guess into the above recurrence, the second derivative with respect to q is positive, so the expression achieves its maximum at an endpoint of q's range. Therefore, the worst-case running time of quicksort is O(n^2). We will now compute a bound on the overall number of comparisons. For ease of analysis: each pair of elements is compared at most once, because elements are compared only to the pivot element, and then the pivot element is never in any later call to PARTITION.

Solutions for Chapter 7: Quicksort. Solution to Exercise 7. : the minimum depth of the recursion tree corresponds to always taking the smaller part of the partition; similarly, the maximum depth corresponds to always taking the larger part of the partition, i.e., repeatedly getting the worst split. What randomization can do is make the chance of encountering a worst-case scenario small. c. We rewrite this function as follows.

Sorting in Linear Time (Chapter 8 overview)

How fast can we sort? We will prove a lower bound, then beat it by playing a different game.

All sorts seen so far are comparison sorts: the only operation used to gain information about the input is the comparison of pairs of elements, which abstracts away everything else. For insertion sort on 3 elements, draw the decision tree. Each leaf is labeled by the permutation of orders that the algorithm determines.

View the tree as if the algorithm splits in two at each node, based on the information it has determined up to that point. The tree models all possible execution traces. What is the length of the longest path from root to leaf? In other words, what is the height of the tree? Why is this useful? It is the worst-case number of comparisons. By the lemma below, a binary tree of height h has at most 2^h leaves, and a decision tree for sorting n elements must have at least n! leaves. Taking logs gives h >= lg(n!) = Omega(n lg n). Proof of the lemma, by induction on h: a tree of height 0 is just one node, which is a leaf; in the inductive step, each leaf becomes parent to at most two new leaves. Corollary: heapsort and merge sort are asymptotically optimal comparison sorts.
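Written out in LaTeX, the lower-bound chain (using n! >= (n/2)^{n/2}, since the n/2 largest factors of n! are each at least n/2) is:

\[
2^h \ge n! \implies h \ge \lg(n!) \ge \lg\!\left(\left(\tfrac{n}{2}\right)^{n/2}\right) = \tfrac{n}{2}\lg\tfrac{n}{2} = \Omega(n \lg n).
\]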

Sorting in linear time: non-comparison sorts. Counting sort depends on a key assumption: the keys to be sorted are integers in a known range, 0 to k. Array A and values n and k are given as parameters. B is assumed to be already allocated and is given as a parameter. Auxiliary storage: the array C[0..k]. How big a k is practical?

For large k, probably not; for moderate k, maybe, depending on n; for small k, probably, unless n is really small. Counting sort will be used in radix sort.
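A Python sketch of counting sort as described: keys are integers in 0..k, B is the output array, and the right-to-left copy pass makes the sort stable.

def counting_sort(A, k):
    """Stable sort of A, whose elements are integers in 0..k."""
    n = len(A)
    B = [0] * n
    C = [0] * (k + 1)
    for a in A:
        C[a] += 1                 # C[i] = number of elements equal to i
    for i in range(1, k + 1):
        C[i] += C[i - 1]          # C[i] = number of elements <= i
    for a in reversed(A):         # right-to-left scan keeps the sort stable
        C[a] -= 1
        B[C[a]] = a
    return B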

Radix sort: how IBM made its money. Card sorters worked on one column at a time; the human operator was part of the algorithm! Key idea: sort on the least significant digit first, with a stable sort per digit. To sort d digits, sort digit 1, then digit 2, and so on up to digit d. Correctness: assume digits 1, 2, ..., i - 1 are sorted. Show that a stable sort on digit i leaves digits 1, ..., i sorted: if two digits in position i are equal, the numbers are already in the right order by the inductive hypothesis.

The stable sort on digit i leaves them in the right order. Assume that we use counting sort as the intermediate sort. How to break each key into digits? Break each key into r-bit digits.
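A hedged Python sketch of radix sort; for simplicity it uses base-10 digits and per-digit buckets (appending to buckets in scan order is a stable stand-in for the counting-sort pass the notes assume).

def radix_sort(A, d, base=10):
    """Sort nonnegative integers of at most d digits, least significant digit first."""
    for digit in range(d):
        buckets = [[] for _ in range(base)]
        for a in A:
            # Appending preserves input order, so each pass is stable.
            buckets[(a // base**digit) % base].append(a)
        A = [a for bucket in buckets for a in bucket]
    return A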

Compare radix sort to merge sort and quicksort. How does radix sort violate the ground rules for a comparison sort? It used keys as array indices.

Bucket sort assumes the input is generated by a random process that distributes elements uniformly over [0, 1). Distribute the n input values into n buckets; sort each bucket; then go through the buckets in order, listing the elements in each one. But we need to do a careful analysis, taking expectations of both sides of the resulting expression. The analysis used a function of key values to index into an array. This is a probabilistic analysis: we used probability to analyze an algorithm whose running time depends on the distribution of inputs.
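A Python sketch under the stated assumption that inputs are uniform over [0, 1):

def bucket_sort(A):
    """Assumes the values of A lie in [0, 1) and are uniformly distributed."""
    n = len(A)
    buckets = [[] for _ in range(n)]
    for x in A:
        buckets[int(n * x)].append(x)   # distribute into n equal-width buckets
    result = []
    for b in buckets:
        result.extend(sorted(b))        # each bucket holds O(1) items on average
    return result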

This is different from a randomized algorithm, where we use randomization to impose a distribution. Solutions for Chapter 8: Sorting in Linear Time. Solution to Exercise 8. : use the same argument as in the proof of Theorem 8. ; in particular, the decision tree must still have at least n! leaves.

Proof. First notice that, as pointed out in the hint, we cannot prove the lower bound by multiplying together the lower bounds for sorting each subsequence. That would only prove that there is no faster algorithm that sorts the subsequences independently. This is not what we are asked to prove; we cannot introduce any extra assumptions. Now, consider the decision tree of height h for any comparison sort for S. Since the elements of each subsequence can be in any order, any of the k! permutations of each subsequence is possible.

Thus, any decision tree for sorting S must have at least (k!)^{n/k} leaves. The third line comes from the standard lower bound on k!. We implicitly assume here that k is even. Solution to Exercise 8. : the algorithm is correct no matter what order is used! The original algorithm was stable because an element taken from A later started out with a lower index than one taken earlier.

The number of integers in the range [a..b] can be read off in O(1) time from the prefix-sum array C computed by counting sort. When inserting A[j] into the sorted sequence A[1..j-1], continue as long as A[j] is smaller than the element to its left. Solution to Exercise 8. : radix sort sorts separately on each digit, starting from digit 1. Thus, radix sort of d digits sorts on digits 1, ..., d in turn. The sort on digit d will order the elements by their dth digit. Consider two elements, a and b, with dth digits a_d and b_d respectively. If the intermediate sort were not stable, it might rearrange elements whose dth digits were equal, elements that were in the right order after the sort on their lower-order digits.

Sort these 2-digit numbers with radix sort. A simple change that will preserve the linear expected running time and make the worst-case running time O(n lg n) is to use a worst-case O(n lg n)-time algorithm like merge sort instead of insertion sort when sorting the buckets. For a comparison algorithm A to sort, no two input permutations can reach the same leaf of the decision tree, so there must be at least n! leaves.

Since A is a deterministic algorithm, it must always reach the same leaf when given a particular permutation as input, so at most n! leaves are reached.

Therefore, exactly n! leaves are reached, one for each input permutation. These n! leaves will each have probability 1/n!, since each of the n! possible permutations is the input with probability 1/n!. Any remaining leaves will have probability 0, since they are not reached for any input. That is, we can assume that TA consists of only the n! leaves reached by some input. D(TA) is the sum of the decision-tree path lengths for sorting all input permutations, and the path lengths are proportional to the run time.

Since the n! permutations are equally likely, the expected running time is the average over these n! leaves. At each randomized node, pick the child with the smallest subtree (the subtree with the smallest average number of comparisons on a path to a leaf). Delete all the other children of the randomized node and splice out the randomized node itself. The randomized algorithm thus takes at least as much time on average as the corresponding deterministic one. The usual, unadorned radix sort algorithm will not solve this problem in the required time bound.

The number of passes, d, would have to be the number of digits in the largest integer. We assume that the range of a single digit is constant.

Let us assume without loss of generality that all the integers are positive and have no leading zeros. If there are negative integers or 0, deal with the positive numbers, negative numbers, and 0 separately. Under this assumption, we can observe that integers with more digits are always greater than integers with fewer digits. One way to solve this problem is by a radix sort from right to left.

Since the strings have varying lengths, however, we have to pad out all strings that are shorter than the longest string. Unfortunately, this scheme does not always run in the required time bound. Suppose that there are m strings and that the longest string has d characters. The correctness of this algorithm is straightforward.

Analyzing the running time is a bit trickier. Let us count the number of times that each string is sorted by a call of counting sort. Suppose that the ith string, s_i, has length l_i.

The string a is sorted its length, 1, time, plus one more time. For the jugs problem: compare each red jug with each blue jug. To solve the problem, an algorithm has to perform a series of comparisons until it has enough information to determine the matching.

We can view the computation of the algorithm in terms of a decision tree. Every internal node is labeled with two jugs (one red, one blue) which we compare, and has three outgoing edges (red jug smaller, same size, or larger than the blue jug). The leaves are labeled with a unique matching of jugs. The height of the decision tree is equal to the worst-case number of comparisons the algorithm has to make to determine the matching.

Now we can bound the height h of our decision tree. Every tree with a branching factor of 3 (every inner node has at most three children) has at most 3^h leaves. Since the decision tree must have at least n! leaves, it follows that 3^h >= n!, so h = Omega(n lg n). Assume that the red jugs are labeled with numbers 1, 2, ..., n, and so are the blue jugs. The numbers are arbitrary and do not correspond to the volumes of jugs, but are just used to refer to the jugs in the algorithm description.

Moreover, the output of the algorithm will consist of n distinct pairs (i, j), where the red jug i and the blue jug j have the same volume.

Once we pick r randomly from R, there will be a matching among the jugs in each of the resulting subsets. Termination is also easy to see. Still following the quicksort analysis: until a jug from R_ij is chosen, the entire set R_ij is together. The remainder of the analysis is the same as the quicksort analysis, and we arrive at the solution of O(n lg n) comparisons. Just like in quicksort, in the worst case we always choose the largest or smallest jug to partition the sets, which reduces the set sizes by only 1.

Lecture Notes for Chapter 9: Medians and Order Statistics. The ith order statistic is, in other words, the ith smallest element of A; when n is even, there are two medians, a lower and an upper one. The selection problem can be solved in O(n lg n) time: sort the numbers, then return the ith element in the sorted array. There are faster algorithms, however. Finding the minimum takes n - 1 comparisons, and this is the best we can do, because each element, except the minimum, must be compared to a smaller element at least once.

Process elements in pairs. Compare the elements of a pair to each other. Then compare the larger element to the maximum so far, and compare the smaller element to the minimum so far. This leads to only 3 comparisons for every 2 elements. Setting up the initial values for the min and max depends on whether n is odd or even.
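A Python sketch of this pairing scheme, including the odd/even setup and the 3-comparisons-per-pair processing continued below:

def min_and_max(A):
    """Find min and max with about 3 comparisons per 2 elements (n >= 1)."""
    n = len(A)
    if n % 2 == 1:                 # odd n: initialize both to the first element
        lo = hi = A[0]
        start = 1
    else:                          # even n: one comparison on the first pair
        lo, hi = (A[0], A[1]) if A[0] <= A[1] else (A[1], A[0])
        start = 2
    for i in range(start, n, 2):   # 3 comparisons per remaining pair
        small, big = (A[i], A[i + 1]) if A[i] <= A[i + 1] else (A[i + 1], A[i])
        if small < lo:             # compare the smaller of the pair with the min
            lo = small
        if big > hi:               # compare the larger of the pair with the max
            hi = big
    return lo, hi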

Then process the rest of the elements in pairs. The pivot element is the kth element of the subarray A[p..r], where k = q - p + 1. If the pivot element is the ith smallest element (i.e., if i = k), we are done. Otherwise, recurse on the subarray containing the ith smallest element. Analysis: the worst-case running time is quadratic, but because the algorithm is randomized, no particular input brings out the worst-case behavior consistently. We obtain an upper bound on E[T(n)] as follows: it depends on whether the ith smallest element is less than, equal to, or greater than the pivot element A[q].
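A self-contained Python sketch of the randomized selection procedure just described (the randomized Lomuto partition is inlined):

import random

def randomized_select(A, p, r, i):
    """Return the i-th smallest element (i >= 1) of A[p..r]; partitions in place."""
    if p == r:
        return A[p]
    s = random.randint(p, r)          # pivot chosen uniformly from A[p..r]
    A[s], A[r] = A[r], A[s]
    x, q = A[r], p - 1
    for j in range(p, r):
        if A[j] <= x:
            q += 1
            A[q], A[j] = A[j], A[q]
    q += 1
    A[q], A[r] = A[r], A[q]
    k = q - p + 1                     # the pivot is the k-th smallest of A[p..r]
    if i == k:
        return A[q]
    elif i < k:
        return randomized_select(A, p, q - 1, i)
    else:
        return randomized_select(A, q + 1, r, i - k)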

To obtain an upper bound, we assume that T(n) is monotonically increasing and that the ith smallest element is always in the larger subarray. The deterministic algorithm SELECT guarantees a good split when the array is partitioned. It executes the following steps: divide the n elements into groups of 5; sort each group and then just pick the median from each group, in O(1) time per group; recursively find the median x of these group medians; partition around x; and recurse on the appropriate side.

Now there are three possibilities: the rank of x equals i, is greater than i, or is less than i, and we recurse on the side that must contain the ith smallest element. Analysis: start by getting a lower bound on the number of elements that are greater than the partitioning element x. In the figure drawn on the board, each white circle is the median of a group, as found in step 2; arrows go from larger elements to smaller elements, based on what we know after step 4; and elements in the region on the lower right are known to be greater than x. Symmetrically, the same bound holds for the number of elements that are less than x. Steps 1, 2, and 4 each take O(n) time. Substitute the inductive hypothesis in the right-hand side of the recurrence.

We could have used any integer strictly greater than this threshold. Sorting algorithms that run in linear time need to make assumptions about their input.

Linear-time selection algorithms do not require any assumptions about their input. Solutions for Chapter 9: Medians and Order Statistics Solution to Exercise 9. Compare all the numbers in pairs. To show this more formally, draw a binary tree of the comparisons the algorithm does. The n numbers are the leaves, and each number that came out smaller in a comparison is the parent of the two numbers that were compared. In the search for the smallest number, the second smallest number must have come out smallest in every comparison made with it until it was eventually compared with the smallest.

So the second smallest is among the elements that were compared with the smallest during the tournament. Solution to Exercise 9. For groups of 3, however, the algorithm no longer works in linear time. You can also see that T n is nonlinear by noticing that each level of the recursion tree sums to n. S ELECT takes an array A, the bounds p and r of the subarray in A, and the rank i of an order statistic, and in time linear in the size of the subarray A[ p.

Now, if the median is in X but is not in X[k], then the above condition will not hold. If n is odd, then the optimal placement is on the oil well whose y-coordinate is the median. Proof: we examine various cases. In each case, we will start out with the pipeline at a particular y-coordinate and see what happens when we move it.

We start with the case in which n is even. Let us start with the pipeline somewhere on or between the two oil wells whose y-coordinates are the lower and upper medians. Now suppose that the pipeline goes through the oil well whose y-coordinate is the upper median.

We conclude that moving the pipeline up from the oil well at the upper median increases the total spur length. A symmetric argument shows that if we start with the pipeline going through the oil well whose y-coordinate is the lower median and move it down, then the total spur length increases.

We see, therefore, that when n is even, an optimal placement of the pipeline is anywhere on or between the two medians. Now we consider the case when n is odd. A symmetric argument shows that moving the pipeline down from the median also increases the total spur length, and so the claimed optimal placement of the pipeline is on the median. Solution to Problem: we assume that the numbers start out in an array. Total worst-case running time: implement the priority queue as a heap.

Note that method (c) is always asymptotically at least as good as the other two methods, and that method (b) is asymptotically at least as good as (a).

Comparing (c) to (b) is easy, but it is less obvious how to compare (c) and (b) to (a). The sum of two things that are O(n lg n) is also O(n lg n). The median x of the elements x_1, x_2, ..., x_n is found by the selection routine. The sorting phase can be done in O(n lg n) worst-case time (using merge sort or heapsort), and the scanning phase takes O(n) time. The total running time in the worst case, therefore, is O(n lg n). The weighted-median algorithm works as follows: if n is small, we sort and scan directly; otherwise, we proceed by finding the (unweighted) median, partitioning around it, and then computing the total weights of the two halves.

Let the n points be denoted by their coordinates x_1, x_2, ..., x_n. Let y be any point (real number) other than x. We are given n 2-dimensional points p_1, p_2, ..., p_n. When i is 1, no comparisons are needed; the larger cases follow by a similar case analysis.

Lecture Notes for Chapter 11: Hash Tables

A hash table is effective for implementing a dictionary.

A hash table is a generalization of an ordinary array.

This is called direct addressing. Direct addressing is applicable when we can afford to allocate an array with one position for every possible key. We use a hash table when we do not want to or cannot allocate an array with one position per possible key. A hash table is an array, but it typically uses a size proportional to the number of keys to be stored rather than the number of possible keys.

Instead, compute a function of k, and use that value to index into the array. We call this function a hash function. We must decide what to do when the hash function maps multiple keys to the same table entry. Direct-address tables. Scenario: maintain a dynamic set in which each element has a key, and no two elements have the same key. Slot k of the table T points to the element with key k, if one is present; otherwise, T[k] is empty, represented by NIL.

Often, the set K of keys actually stored is small compared to U, so that most of the space allocated for T is wasted. With hashing we can still get O(1) search time, but in the average case, not the worst case. Instead of storing an element with key k in slot k, use a function h and store the element in slot h(k). We say that k hashes to slot h(k). When two or more keys hash to the same slot, we have a collision. Therefore, we must be prepared to handle collisions in all cases.

We use two methods: chaining and open addressing. Chaining is usually better than open addressing. Collision resolution by chaining: put all elements that hash to the same slot into a linked list. How to implement dictionary operations with chaining: insertion places the element at the head of its list (it would take an additional search to check whether it was already inserted).

Deletion's worst-case running time is O(1) if the lists are doubly linked. The load factor α = n/m is the average number of elements per linked list.
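A minimal chained hash table in Python (Python lists stand in for the linked lists; the hash function here is a stand-in, not one of the schemes discussed below):

class ChainedHashTable:
    def __init__(self, m):
        self.m = m
        self.slots = [[] for _ in range(m)]   # one chain per slot

    def _h(self, k):
        return hash(k) % self.m               # stand-in hash function

    def insert(self, k, v):
        # O(1): insert at the head of the chain, without checking duplicates.
        self.slots[self._h(k)].insert(0, (k, v))

    def search(self, k):
        # Expected O(1 + alpha): scan the one chain that k hashes to.
        for key, v in self.slots[self._h(k)]:
            if key == k:
                return v
        return None

    def delete(self, k):
        chain = self.slots[self._h(k)]
        for i, (key, _) in enumerate(chain):
            if key == k:
                del chain[i]
                return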

Average case depends on how well the hash function distributes the keys among the slots. We focus on average-case performance of hashing with chaining. Assume that we can compute the hash function in O(1) time, so that the time required to search for the element with key k depends on the length n_{h(k)} of the list T[h(k)]. We consider two cases: if the hash table does not contain an element with key k, the search is unsuccessful; if the hash table does contain an element with key k, then the search is successful.

To search unsuccessfully for any key k, we need to search to the end of the list T[h(k)]. Successful search: the circumstances are slightly different from an unsuccessful search.

The probability that each list is searched is proportional to the number of elements it contains. Proof Assume that the element x being searched for is equally likely to be any of the n elements stored in the table. These are the elements inserted after x was inserted because we insert at the head of the list.

We must also count the element x itself. Since insertion takes O(1) worst-case time and deletion takes O(1) worst-case time when the lists are doubly linked, all dictionary operations take O(1) time on average. Hash functions: we discuss some issues regarding hash-function design and present schemes for hash function creation.

What makes a good hash function? Often we use heuristics, based on the domain of the keys, to create a hash function that performs well. One heuristic interprets a character string as an integer expressed in some radix notation.

Suppose the string is CLRS: interpreting the ASCII values (C = 67, L = 76, R = 82, S = 83) in radix 128 gives the integer (67 * 128^3) + (76 * 128^2) + (82 * 128) + 83. The division method, h(k) = k mod m, is fast, since it requires just one division operation, but we have to avoid certain values of m; a good choice for m is a prime not too close to an exact power of 2. The multiplication method is slower than the division method, but the value of m is not critical. Relatively easy implementation: let the word size of the machine be w bits, choose m = 2^p, and let s be an integer in the range 0 < s < 2^w. Writing k*s as r_1*2^w + r_0, the hash value is the p most significant bits of r_0, so we can just take these bits after having formed r_0 by multiplying k by s.
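Hedged sketches of both schemes in Python (the multiplication method is shown in its floating-point form rather than the word-arithmetic implementation above; the default A is Knuth's suggested constant):

import math

def hash_division(k, m):
    # Division method: m should be a prime not too close to a power of 2.
    return k % m

def hash_multiplication(k, m, A=(math.sqrt(5) - 1) / 2):
    # Multiplication method: h(k) = floor(m * (k*A mod 1)).
    return int(m * ((k * A) % 1.0))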

Using the implementation, how to choose A? Knuth suggests A close to (sqrt(5) - 1)/2, but the method works better with some values than with others, depending on the keys being hashed. Universal hashing. [We just touch on universal hashing in these notes.

See the book for a full treatment.] Otherwise an adversary could choose keys that all hash to the same slot, giving worst-case behavior. One way to defeat the adversary is to use a different hash function each time: you choose one at random at the beginning of your program. What we want is to randomly choose a single hash function from a set of good candidates.

Why are universal hash functions good? Theorem: using chaining and universal hashing on key k, the expected length of the list that k hashes to is at most α if k is not in the table, and at most 1 + α if it is. Open addressing stores all elements in the table itself: each slot contains either a key or NIL. To search for key k, we examine a sequence of slots; examining a slot is known as a probe. If slot h(k) contains key k, the search is successful. If this slot contains NIL, the search is unsuccessful.

Otherwise, we compute the index of some other slot, based on k and on the probe number (count from 0); thus, the hash function is h : U x {0, 1, ..., m-1} -> {0, 1, ..., m-1}, and the probe sequence h(k, 0), h(k, 1), ..., h(k, m-1) should be a permutation of the slot numbers. Pseudocode for searching and for insertion follows the probe sequence until the key or an empty slot is found; a sketch appears below. Deletion is tricky: we cannot just put NIL into the slot containing the key we want to delete. Suppose some key was inserted by a probe sequence that passed through slot j, and suppose we then deleted key k by storing NIL into slot j; a later search for that other key would stop at slot j and fail. How to compute probe sequences: the ideal situation is uniform hashing, in which each key is equally likely to have any of the m! permutations of the slot numbers as its probe sequence. This generalizes simple uniform hashing for a hash function that produces a whole probe sequence rather than just a single number.
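A hedged Python sketch of searching and insertion under open addressing, using linear probing (defined below) as the probe sequence and None for NIL; the table T is a list created as [None] * m.

def probe(k, i, m):
    # Linear probing: h(k, i) = (h'(k) + i) mod m, with hash() as a stand-in h'.
    return (hash(k) + i) % m

def oa_search(T, k):
    """Probe until we find k (success) or an empty slot (failure)."""
    m = len(T)
    for i in range(m):
        j = probe(k, i, m)
        if T[j] == k:
            return j
        if T[j] is None:
            return None
    return None

def oa_insert(T, k):
    m = len(T)
    for i in range(m):
        j = probe(k, i, m)
        if T[j] is None:
            T[j] = k
            return j
    raise OverflowError("hash table overflow")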

None of these techniques can produce all m! probe sequences.

Linear probing: given an auxiliary hash function h', the probe sequence starts at slot h'(k) and continues sequentially, h(k, i) = (h'(k) + i) mod m. Double hashing: use two auxiliary hash functions, h_1 and h_2, with h(k, i) = (h_1(k) + i*h_2(k)) mod m. Analysis of open-address hashing. Assumptions: uniform hashing and no deletion. Proof: since the search is unsuccessful, every probe is to an occupied slot, except for the last probe, which is to an empty slot.

By Exercise C. , and handling the boundary case separately, the claim follows. Using this claim: since there is no deletion, insertion uses the same probe sequence as an unsuccessful search. Theorem: the expected number of probes in a successful search is at most (1/α) ln(1/(1 - α)).

We need to average over all n keys, and we simplify by using the technique of bounding a summation by an integral. Solution to Exercise: the stack has an attribute top[S], so that only entries S[1..top[S]] are valid. The idea of this scheme is that entries of T and S validate each other. If key k is actually stored in T, then T[k] contains the index, say j, of a valid entry in S, and S[j] contains the value k.

Assuming that we also need to store pointers to objects in our direct-address table, we can store them in an array that is parallel to either T or S. The operations on the dictionary work as follows. Given key k, we check whether we have a validating cycle, i.e., whether 1 <= T[k] <= top[S] and S[T[k]] = k. To delete object x with key k, assuming that this object is in the dictionary, we need to break the validating cycle.

That is, we execute the following sequence of assignments. Solution to Exercise: the slot thus contains two pointers. A used slot contains an element and a pointer (possibly NIL) to the next element that hashes to this slot. Of course, that pointer points to another slot in the table. The free list must be doubly linked in order for this deletion to run in O(1) time.

Let j be the slot that the element x to be deleted hashes to. To do so, allocate a free slot (e.g., the head of the free list). Then insert the new element in the now-empty slot as usual. Check the slot the key hashes to, and if that is not the desired element, follow the chain of pointers from the slot. All the operations take expected O(1) time for the same reason they do with the version in the book. If the free list were singly linked, then operations that involved removing an arbitrary slot from the free list would not run in O(1) time.

One can prove this property formally, but informally, consider that both heapsort and quicksort work by interchanging pairs of elements and that they have to be able to produce any permutation of their input array. We now show that each such move increases the number of collisions, so that all the moves together must increase the number of collisions.

Suppose that we move an element from an underloaded value j to an overloaded value k, and we leave all other elements alone. Since we assume uniform hashing, we can use the same observation as is used in Corollary As in the proof of Theorem Hash Tables b. We start by showing two facts. Can be used as both a dictionary and as a priority queue. Basic operations take time proportional to the height of the tree. For linear chain of n nodes: Different types of search trees include binary search trees, red-black trees covered in Chapter 13 , and B-trees covered in Chapter We will cover binary search trees, tree walks, and operations on binary search trees.

Binary search trees are an important data structure for dynamic sets. Each node holds a key (and satellite data) and has left, right, and p (parent) pointers. Stored keys must satisfy the binary-search-tree property: if y is in the left subtree of x, then key[y] <= key[x], and if y is in the right subtree of x, then key[y] >= key[x]. Draw a sample tree and show that the binary-search-tree property holds. The inorder tree walk prints the elements in monotonically increasing order, which follows by induction directly from the binary-search-tree property.
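A minimal Python representation of a node and the inorder walk (the field names left, right, and p follow the notes):

class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.p = None   # p is the parent pointer

def inorder_tree_walk(x):
    """Prints the keys in monotonically increasing order."""
    if x is not None:
        inorder_tree_walk(x.left)
        print(x.key)
        inorder_tree_walk(x.right)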

Follows by induction directly from the binary-search-tree property. Binary Search Trees Example: Search for values D and C in the example tree from above.

The algorithm recurses, visiting nodes on a downward path from the root. Thus, the running time is O(h), where h is the height of the tree. The search can also be written iteratively; the above recursive procedure is more straightforward, however.

Traverse the appropriate pointers (left or right) until NIL is reached. Both procedures visit nodes that form a downward path from the root to a leaf. Both procedures run in O(h) time, where h is the height of the tree. For finding the minimum and maximum, no key comparisons are necessary. For finding a node's successor there are two cases: if node x has a nonempty right subtree, the successor is the minimum of that subtree; if node x has an empty right subtree, notice that the successor is the lowest ancestor of x whose left child is also an ancestor of x. Examples from the sample tree: the successor of the node with key value 15 is the node with key value 17; the successor of the node with key value 6 is the node with key value 7; and the successor of the node with key value 4 is the node with key value 6.

Similarly, the predecessor of the node with key value 6 is the node with key value 4. Both the successor and predecessor procedures run in O(h) time.
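Python sketches of iterative search and the successor procedure, covering both cases described above (using the Node class defined earlier):

def tree_search(x, k):
    """Iterative search: follow left/right pointers until k or NIL (None)."""
    while x is not None and k != x.key:
        x = x.left if k < x.key else x.right
    return x

def tree_minimum(x):
    while x.left is not None:
        x = x.left
    return x

def tree_successor(x):
    if x.right is not None:              # case 1: minimum of the right subtree
        return tree_minimum(x.right)
    y = x.p                              # case 2: lowest ancestor whose left
    while y is not None and x is y.right:   # child is also an ancestor of x
        x, y = y, y.p
    return y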

