Asymptotic Notation
Asymptotic notation deals with the behaviour of a function in the limit, that is, for sufficiently large values of its parameter. Often, when analysing the running time of an algorithm, it is easier to obtain an approximate formula for the running time that still gives a good indication of the algorithm's performance on large problem instances. For example, suppose the exact running time T(n) of an algorithm on an input of size n is T(n) = 5n² + 6n + 25 seconds. Then, since n ≥ 0, we have 5n² ≤ T(n) ≤ 6n² for all n ≥ 9. Thus T(n) is roughly proportional to n² for sufficiently large values of n. We write this as T(n) ∈ Θ(n²), or say that "T(n) is in the exact order of n²".
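As a quick numerical sanity check (not part of the original text), the sandwich 5n² ≤ T(n) ≤ 6n² can be verified directly, and the threshold n = 9 recovered, with a few lines of Python:

```python
# Verify that T(n) = 5n^2 + 6n + 25 satisfies 5n^2 <= T(n) <= 6n^2
# from some threshold on, and find that threshold.

def T(n):
    return 5 * n * n + 6 * n + 25

# The lower bound 5n^2 <= T(n) holds for every n >= 0.
assert all(5 * n * n <= T(n) for n in range(0, 1000))

# The upper bound T(n) <= 6n^2 requires n^2 >= 6n + 25, i.e. n >= 9.
threshold = next(n for n in range(1, 1000) if T(n) <= 6 * n * n)
print(threshold)  # -> 9
```

Indeed, T(9) = 484 ≤ 6·81 = 486, while T(8) = 393 > 6·64 = 384.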
Definition:
Let f(n) be an arbitrary function from the natural numbers N = {0, 1, 2, …} to the set R≥0 of nonnegative reals, and let R+ denote the set of positive reals. Define

O(f(n)) = {t : N → R≥0 : ∃c ∈ R+ ∃n0 ∈ N ∀n ≥ n0 (t(n) ≤ cf(n))}.

In other words, O(f(n)) – pronounced "big Oh of f(n)" – is the set of all functions t : N → R≥0 such that t(n) ≤ cf(n) for all n ≥ n0, for some positive real constant c and some integer threshold n0. Notice that O(f(n)) is a set, so we refer to a function t(n) as being in the order of f(n) if t(n) ∈ O(f(n)).
This differs from the way in which O( f (n)) is sometimes defined elsewhere in the literature.
Example:
Suppose that an algorithm takes in the worst case

t(n) = 27n² + (355/113)n + 12

microseconds on a certain computer to solve an instance of size n of a problem. We can show that t(n) is in the order of n² by the following argument:

t(n) = 27n² + (355/113)n + 12
     ≤ 27n² + (355/113)n² + 12n²   (provided n ≥ 1)
     = (42 + 16/113)n².

Thus there exist constants n0 (= 1) and c (= 42 16/113) such that t(n) ≤ cn² for all n ≥ n0. That is, t(n) ∈ O(n²).
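The constant c = 42 16/113 = 4762/113 derived above can be checked with exact rational arithmetic; this is an illustrative check, not part of the argument:

```python
# Confirm that c = 42 + 16/113 equals 27 + 355/113 + 12, and that
# t(n) = 27n^2 + (355/113)n + 12 <= c*n^2 for all n >= n0 = 1.
from fractions import Fraction

c = Fraction(42) + Fraction(16, 113)
assert c == 27 + Fraction(355, 113) + 12   # the three coefficients summed

def t(n):
    return 27 * n * n + Fraction(355, 113) * n + 12

assert all(t(n) <= c * n * n for n in range(1, 2000))
print(c)  # -> 4762/113
```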
Conditional asymptotic notation:
Many algorithms are easier to analyse if initially we restrict our attention to instances whose size satisfies a certain condition, such as being a power of 2. Consider, for example, the divide and conquer algorithm for multiplying large integers that we saw in the Introduction. Let n be the size of the integers to be multiplied. The algorithm proceeds directly if n = 1, which requires a microseconds for an appropriate constant a. If n>1, the algorithm proceeds by multiplying four pairs of integers of size n/2 (or three if we use the better algorithm). Moreover, it takes a linear amount of time to carry out additional tasks. For simplicity, let us say that the additional work takes at most bn microseconds for an appropriate constant b.
[Note: In actual fact, the recursive algorithm carries out one multiplication of two ⌊n/2⌋-digit integers, one multiplication of two ⌈n/2⌉-digit integers, and two multiplications of a ⌊n/2⌋-digit integer and a ⌈n/2⌉-digit integer. For simplicity we shall assume that the algorithm always carries out four multiplications of two ⌈n/2⌉-digit integers; it turns out that the running time of this simplified method is in the same order as that of the more detailed method.]

The worst-case time taken by this algorithm is therefore given by the function t : N → R≥0 recursively defined by

t(1) = a
t(n) = 4t(⌈n/2⌉) + bn   for n > 1.

We shall be studying techniques for solving recurrences in the next section, but unfortunately this equation cannot be handled directly by those techniques because the ceiling function ⌈n/2⌉ is rather troublesome. Nevertheless, our recurrence is easy to solve provided we consider only the case when n is a power of 2: in this case ⌈n/2⌉ = n/2 and the ceiling vanishes. The techniques of the next section yield
t(n) = (a + b)n² − bn

provided that n is a power of 2. Since the lower-order term "−bn" can be neglected, it follows that t(n) is in the exact order of n², still provided that n is a power of 2. This is denoted by t(n) ∈ Θ(n² | n is a power of 2).

More generally, let f, t : N → R≥0 be two functions from the natural numbers to the nonnegative reals, and let P : N → {true, false} be a property of the integers. We say that t(n) is in O(f(n) | P(n)) if t(n) is bounded above by a positive real multiple of f(n) for all sufficiently large n such that P(n) holds. Formally, O(f(n) | P(n)) is defined as

{t : N → R≥0 : ∃c ∈ R+ ∃n0 ∈ N ∀n ≥ n0 (P(n) → t(n) ≤ cf(n))}.
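The closed form for the multiplication recurrence can be checked against the recursive definition on powers of 2. A minimal sketch, assuming concrete constants a = b = 1 (the text leaves a and b symbolic):

```python
# Check that t(n) = (a + b)n^2 - bn agrees with the recurrence
# t(1) = a, t(n) = 4 t(n/2) + bn, whenever n is a power of 2.
a, b = 1, 1   # assumed constants, for illustration only

def t_rec(n):
    return a if n == 1 else 4 * t_rec(n // 2) + b * n

def t_closed(n):
    return (a + b) * n * n - b * n

for k in range(0, 11):          # n = 1, 2, 4, ..., 1024
    n = 2 ** k
    assert t_rec(n) == t_closed(n)
print(t_rec(1024))  # -> 2096128, i.e. 2*1024^2 - 1024
```

For n not a power of 2 the true recurrence uses ⌈n/2⌉, and the closed form no longer matches exactly; that is precisely why the conditional notation is needed.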
The sets Ω(f(n) | P(n)) and Θ(f(n) | P(n)) are defined in a similar way.

Conditional asymptotic notation is more than a mere notational convenience: its main interest is that it can generally be eliminated once it has been used to facilitate the analysis of an algorithm. For this we need a few definitions. A function f : N → R≥0 is eventually nondecreasing if there exists an integer threshold n0 such that f(n) ≤ f(n+1) for all n ≥ n0. This implies by mathematical induction that f(n) ≤ f(m) whenever m ≥ n ≥ n0. Let b ≥ 2 be any integer. Function f is b-smooth if, in addition to being eventually nondecreasing, it satisfies the condition f(bn) ∈ O(f(n)). In other words, there must exist a constant c (depending on b) such that f(bn) ≤ cf(n) for all n ≥ n0. A function is smooth if it is b-smooth for every integer
b ≥ 2. Most functions we are likely to encounter in the analysis of algorithms are smooth, such as lg n, n lg n, n², or any polynomial whose leading coefficient is positive. (Here lg denotes the logarithm to base 2.) However, functions that grow too fast, such as n^(lg n), 2^n or n!, are not smooth because the ratio f(2n)/f(n) is unbounded. For example,

(2n)^(lg 2n) = 2n² · n^(lg n),

which shows that (2n)^(lg 2n) ∉ O(n^(lg n)) because 2n² cannot be bounded above by a constant. Functions that are bounded above by a polynomial, on the other hand, are usually smooth provided they are eventually nondecreasing; and even if they are not eventually nondecreasing, there is a good chance that they are in the exact order of some other function that is smooth. For instance, let b(n) denote the number of bits equal to 1 in the binary expansion of n – such as b(13) = 3 because 13 is written as 1101 in binary – and consider f(n) = b(n) + lg n. It is easy to see that f(n) is not eventually nondecreasing – and therefore it is not smooth – because b(2^k − 1) = k whereas b(2^k) = 1 for all k. Nevertheless, f(n) ∈ Θ(lg n), a smooth function.
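The behaviour of f(n) = b(n) + lg n can be seen concretely. A small illustration (not from the text; it takes ⌊lg n⌋ as the integer logarithm) showing both the drop at powers of 2 and the claimed facts about b(n):

```python
# b(n) counts the 1-bits in the binary expansion of n.
def b(n):
    return bin(n).count("1")

assert b(13) == 3                      # 13 = 1101 in binary

for k in range(1, 20):
    assert b(2 ** k - 1) == k          # 2^k - 1 is k ones in binary
    assert b(2 ** k) == 1              # 2^k is a single one-bit
    # f(n) = b(n) + floor(lg n) drops when passing 2^k - 1 -> 2^k:
    f_before = b(2 ** k - 1) + (k - 1)     # floor(lg(2^k - 1)) = k - 1
    f_after = b(2 ** k) + k                # floor(lg(2^k)) = k
    if k >= 3:
        assert f_after < f_before          # 1 + k < k + (k - 1) for k >= 3
```

Since 1 ≤ b(n) ≤ 1 + ⌊lg n⌋, we get lg n ≤ f(n) ≤ 2 lg n + 2, which is why f(n) ∈ Θ(lg n) despite the local drops.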
A useful property of smoothness is that if f is b-smooth for any specific integer b ≥ 2, then it is in fact smooth. To prove this, consider any two integers a and b not smaller than 2. Assume that f is b-smooth; we must show that f is a-smooth as well. Let c and n0 be constants such that f(bn) ≤ cf(n) and f(n) ≤ f(n+1) for all n ≥ n0. Let i = ⌈log_b a⌉. By definition of the logarithm, a = b^(log_b a) ≤ b^(⌈log_b a⌉) = b^i. It is easy to show by mathematical induction from the b-smoothness of f that f(b^i n) ≤ c^i f(n) for any n ≥ n0. But f(an) ≤ f(b^i n) because f is eventually nondecreasing and b^i n ≥ an ≥ n0. It follows that f(an) ≤ ĉ f(n) for ĉ = c^i, and thus f is a-smooth.
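The proof can be traced numerically on a concrete function. A sketch for f(n) = n², which is 2-smooth with c = 4 (since f(2n) = 4f(n)); the proof then predicts 3-smoothness with ĉ = c^i where i = ⌈log₂ 3⌉ = 2:

```python
# Instantiate the b-smooth => a-smooth argument with f(n) = n^2,
# b = 2, c = 4, a = 3, so i = ceil(log_2 3) = 2 and c-hat = c^i = 16.
import math

def f(n):
    return n * n

b_, c, a_ = 2, 4, 3
i = math.ceil(math.log(a_, b_))
assert i == 2
c_hat = c ** i                          # 16

for n in range(1, 1000):
    assert f(b_ * n) <= c * f(n)        # 2-smoothness: f(2n) = 4 f(n)
    assert f(a_ * n) <= c_hat * f(n)    # implied bound: 9n^2 <= 16n^2
```

The bound ĉ = 16 is not tight (c = 9 would do for a = 3), but the proof only needs some constant to exist.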