MODULE III
3.1 THE GREEDY METHOD – CONTROL ABSTRACTION
The greedy method suggests that one can devise an algorithm that works in
stages, considering one input at a time. At each stage, a decision is made regarding
whether a particular input is in an optimal solution. This is done by considering the
inputs in an order determined by some selection procedure. If the inclusion of the next
input into the partially constructed optimal solution will result in an infeasible solution,
then this input is not added to the partial solution. Otherwise, it is added. The selection
procedure itself is based on some optimization measure. The measure may be the
objective function. In fact, several different optimization measures may be plausible for
a given problem. Most of these, however, will result in algorithms that generate
suboptimal solutions. This version of the greedy technique is called the subset
paradigm.
The function Select selects an input from a[ ] and removes it. The selected
input’s value is assigned to x. Feasible is a Boolean-valued function that determines
whether x can be included into the solution vector. The function Union combines x
with the solution and updates the objective function. The function Greedy describes the
essential way that a greedy algorithm will look, once a particular problem is chosen and
the functions Select, Feasible, and Union are properly implemented.
Algorithm Greedy(a, n)
// a[1 : n] contains the n inputs.
{
    solution := ø; // Initialize the solution.
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
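As an illustration, the skeleton above can be written as runnable Python. The instantiation of Select, Feasible, and Union below (picking values whose running total stays within a capacity) is a toy example invented for this sketch, not a problem from the text.

```python
def greedy(a, capacity):
    """Generic greedy skeleton, instantiated for a toy problem:
    pick values from a[] so that their total stays within capacity."""
    solution, total = [], 0
    for x in sorted(a, reverse=True):   # Select: largest remaining value first
        if total + x <= capacity:       # Feasible: inclusion keeps solution valid
            solution.append(x)          # Union: add x to the partial solution
            total += x
    return solution

print(greedy([7, 3, 5, 2], 10))  # picks 7, then 3
```

The subset paradigm is visible here: each input is examined once, and the only decision is whether it joins the partial solution.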
3.1.1 GENERAL KNAPSACK PROBLEM
In the knapsack problem, we have n objects and a knapsack or a bag. Object i
has a weight wi and the knapsack has a capacity m. If a fraction xi, 0 ≤ xi ≤ 1, of
object i is placed into the knapsack, then a profit of pixi is earned. The objective is to
obtain a filling of the knapsack that maximizes the total profit earned. Since the
knapsack capacity is m, we require the total weight of all chosen objects to be at most
m. Formally, the problem can be stated as
maximize ∑ pixi (1 ≤ i ≤ n)
subject to ∑ wixi ≤ m (1 ≤ i ≤ n)
and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n
The profits and weights are positive numbers. The problem is solved using the greedy method.
ALGORITHM
ALGORITHM GreedyKnapsack(m, n)
// p[1..n] and w[1..n] contain the profits and weights of the n objects,
// ordered so that p[i]/w[i] ≥ p[i+1]/w[i+1].
// m is the knapsack capacity and x[1..n] is the solution vector.
{
    for i := 1 to n do
        x[i] := 0.0;        // initialize the solution vector
    u := m;                 // remaining capacity
    for i := 1 to n do
    {
        if (w[i] > u) then break;
        x[i] := 1.0;        // take the whole object
        u := u - w[i];
    }
    if (i ≤ n) then x[i] := u / w[i];  // take a fraction of the first object that did not fit
}
If p1/w1 ≥ p2/w2 ≥ · · · ≥ pn/wn, then GreedyKnapsack generates an optimal solution to the given instance of the knapsack problem, and every optimal solution fills the knapsack exactly. The complexity of the algorithm is O(n), not counting the initial sort of the objects by profit-to-weight ratio.
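A runnable Python sketch of GreedyKnapsack follows; unlike the pseudocode, it performs the ratio sort itself rather than assuming pre-sorted input. The instance used below (m = 20, p = (25, 24, 15), w = (18, 15, 10)) is only an illustrative example.

```python
def greedy_knapsack(p, w, m):
    """Fractional knapsack: consider objects in non-increasing p/w order,
    take each whole object that fits, then a fraction of the first that doesn't."""
    x = [0.0] * len(p)
    u = m                                    # remaining capacity
    for i in sorted(range(len(p)), key=lambda i: p[i] / w[i], reverse=True):
        if w[i] > u:                         # object does not fit whole:
            x[i] = u / w[i]                  # take the fraction that fills the sack
            break
        x[i] = 1.0                           # take the whole object
        u -= w[i]
    return x

x = greedy_knapsack([25, 24, 15], [18, 15, 10], 20)
# x == [0.0, 1.0, 0.5]: all of object 2, half of object 3, profit 24 + 7.5 = 31.5
```

Because fractions are allowed, taking objects in ratio order can never be improved upon, which is why the sortedness condition above guarantees optimality.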
Greedy algorithms
Like dynamic programming, greedy algorithms are used to solve optimization problems. In contrast to dynamic programming, no globally optimal solution is computed directly; instead, locally optimal decisions are made at each step. That is why the approach is called greedy.
Example: Knapsack Problem I
Two possible greedy strategies exist. The first: at each step, select the most valuable object that still fits into the knapsack. Knapsack capacity: 15
Object Size Value
g1 3 3
g2 4 5
g3 6 8
g4 7 9
choose g4 (value 9); remaining capacity: 8
choose g4 again (value 9); remaining capacity: 1
⇒ not optimal: the greedy total is 18, while choosing g3, g3, and g1 fills the knapsack exactly for a total value of 19. (Note that an object may be chosen more than once in this example.)
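Read literally, the trace above picks g4 twice, so objects can evidently be chosen more than once here (an unbounded knapsack). Under that assumption, the most-valuable-first strategy can be sketched as:

```python
def greedy_by_value(objects, capacity):
    """Repeatedly pick the most valuable object that still fits;
    objects may be chosen more than once (unbounded knapsack)."""
    picks, total = [], 0
    while True:
        fitting = [(value, name, size) for name, size, value in objects
                   if size <= capacity]
        if not fitting:                    # nothing fits any more
            break
        value, name, size = max(fitting)   # most valuable object that fits
        picks.append(name)
        total += value
        capacity -= size
    return picks, total

objs = [("g1", 3, 3), ("g2", 4, 5), ("g3", 6, 8), ("g4", 7, 9)]
picks, total = greedy_by_value(objs, 15)
# picks == ["g4", "g4"], total == 18 -- while g3, g3, g1 would give 19
```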
Example: Knapsack Problem II
The second strategy: select the relatively most valuable object, i.e., the one that maximizes the ratio v(gi)/s(gi).
This leads to the optimal solution in the case described above. However, this need not always be the case!
Knapsack capacity: 50
Object  Size  Value  v/s
g1      30    90     3.0
g2      40    100    2.5
g3      50    110    2.2
The strategy chooses g1 (value 90); afterwards neither g2 nor g3 fits. However, the optimum would be to choose g3 alone (value 110). Greedy algorithms do not always provide the optimal solution. In exchange, they are far less expensive than dynamic programming: once the objects are sorted by ratio, the greedy knapsack runs in O(n) time.
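For comparison, here is a Python sketch of the ratio strategy on the 0/1 instance above (each object may be taken at most once):

```python
def greedy_by_ratio(objects, capacity):
    """0/1 selection: consider objects in order of decreasing value/size ratio."""
    chosen, total = [], 0
    for name, size, value in sorted(objects, key=lambda o: o[2] / o[1],
                                    reverse=True):
        if size <= capacity:            # take the object only if it fits whole
            chosen.append(name)
            total += value
            capacity -= size
    return chosen, total

objs = [("g1", 30, 90), ("g2", 40, 100), ("g3", 50, 110)]
chosen, total = greedy_by_ratio(objs, 50)
# chosen == ["g1"], total == 90 -- but taking g3 alone would give 110
```

The failure mode is exactly the one described in the text: the highest-ratio object blocks the capacity that the single best object needs.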
3.1.2 OPTIMAL STORAGE ON TAPES
Given: n programs P1, . . . , Pn with lengths l1, . . . , ln.
The programs are stored on a tape in order Pi1 , . . . , Pin .
Time to access Program Pij :
T(i1, . . . , in, j) = li1 + · · · + lij .
Average access time:
T(i1, . . . , in) = (1/n) ∑ (j = 1 to n) T(i1, . . . , in, j) = (1/n) ∑ (j = 1 to n) ∑ (k = 1 to j) lik
The greedy ordering that minimizes this average stores the programs in order of non-decreasing length, l1 ≤ l2 ≤ · · · ≤ ln.
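The formula above can be checked with a short Python sketch; the sample lengths are an assumption for illustration. Sorting the programs by non-decreasing length is the greedy ordering that minimizes the mean access time, which the exhaustive check below confirms for this instance.

```python
import itertools

def mean_access_time(lengths):
    """Mean retrieval time: accessing the j-th program on the tape
    reads programs 1..j, i.e. T(j) = l_1 + ... + l_j."""
    total, prefix = 0, 0
    for l in lengths:
        prefix += l            # running sum l_1 + ... + l_j
        total += prefix        # accumulate T(j) over all j
    return total / len(lengths)

lengths = [5, 10, 3]            # illustrative program lengths
greedy_order = sorted(lengths)  # greedy: shortest program first
assert all(mean_access_time(greedy_order) <= mean_access_time(p)
           for p in itertools.permutations(lengths))
```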