Best Answer

// returns the minimum value in ns - iterative
public static final int findMinIterative(final int[] ns) {
    if (ns.length == 0) {
        return 0; // arbitrary sentinel for an empty array; throwing an exception would be safer
    }
    // assume the first element is the minimum, then scan the rest
    int min = ns[0];
    for (int i = 1; i < ns.length; ++i) {
        // if an element smaller than min is found, store it as the new min
        if (ns[i] < min) {
            min = ns[i];
        }
    }
    return min;
}

// returns the minimum value in ns - recursive
// Note that this problem does not lend itself to recursion; the solution is
// very similar to the iterative approach
public static final int findMinRecursive(final int[] ns) {
    if (ns.length == 0) {
        return 0; // arbitrary sentinel for an empty array
    }
    // start the recursion at index 0 with ns[0] as the current minimum
    return findMinRecursive(0, ns[0], ns);
}

// recursive part of the algorithm
private static final int findMinRecursive(final int i, final int min, final int[] ns) {
    // bounds check: once past the end of the array, min holds the answer
    if (i >= ns.length) {
        return min;
    }
    // recurse on the next element of ns
    return findMinRecursive(i + 1, Math.min(min, ns[i]), ns);
}
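A minimal sketch of how the two methods might be exercised, with the methods wrapped in an assumed class name (MinDemo) so the example is self-contained:

```java
public class MinDemo {
    public static final int findMinIterative(final int[] ns) {
        if (ns.length == 0) return 0; // sentinel for an empty array
        int min = ns[0];
        for (int i = 1; i < ns.length; ++i) {
            if (ns[i] < min) min = ns[i];
        }
        return min;
    }

    public static final int findMinRecursive(final int[] ns) {
        if (ns.length == 0) return 0; // sentinel for an empty array
        return findMinRecursive(0, ns[0], ns);
    }

    private static final int findMinRecursive(final int i, final int min, final int[] ns) {
        if (i >= ns.length) return min;
        return findMinRecursive(i + 1, Math.min(min, ns[i]), ns);
    }

    public static void main(String[] args) {
        int[] ns = {7, -2, 9, 4};
        System.out.println(findMinIterative(ns)); // prints -2
        System.out.println(findMinRecursive(ns)); // prints -2
    }
}
```

Both versions perform exactly one comparison per remaining element, so they agree on every input; only the control flow differs.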

Wiki User ∙ 15y ago
Q: Design iterative and recursive algorithms to find the minimum among n elements?
Continue Learning about Engineering

What are the advantages of using a recursive function in c?

Advantages: Through recursion one can solve some problems simply where the iterative solution would be long and complex (e.g. Tower of Hanoi). A recursive call can also reduce the size of the code. Disadvantages: A recursive solution is often difficult to trace, debug and understand. Before each recursive call, the current values of the function's variables must be saved on the call stack, so recursive solutions can require a lot of memory. Remember: whatever can be done through recursion can also be done iteratively, although an iterative version sometimes needs to maintain its own explicit stack.
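Tower of Hanoi, mentioned above, is the classic example where the recursive solution is short while an iterative one is awkward. A minimal illustrative Java sketch (the class name Hanoi and the move counter are assumptions for the example):

```java
public class Hanoi {
    static int moves = 0; // counts individual disk moves

    // Move n disks from peg 'from' to peg 'to', using peg 'via' as scratch space.
    static void hanoi(int n, char from, char to, char via) {
        if (n == 0) {
            return; // nothing to move: the exit condition
        }
        hanoi(n - 1, from, via, to); // move n-1 disks out of the way
        moves++;                     // move the largest disk directly
        hanoi(n - 1, via, to, from); // move the n-1 disks back on top of it
    }

    public static void main(String[] args) {
        hanoi(3, 'A', 'C', 'B');
        System.out.println(moves); // 2^3 - 1 = 7 moves
    }
}
```

The whole algorithm is three lines of logic; this is the kind of problem where recursion pays for its call overhead in clarity.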


How to write a program to store ten elements to an array and display the smallest element using C programming?

Your data has to be stored in an array arr[] of size 10. For example:

double min(const double arr[], int arrSize) {
    double minimum = arr[0];
    for (int j = 1; j < arrSize; j++) {
        if (minimum > arr[j]) {
            minimum = arr[j];
        }
    }
    return minimum;
}


C program to copy one matrix to another matrix?

(Note: despite the question, this program finds the minimum element of an array; it does not copy a matrix.)

#include <stdio.h>

int main(void) {
    int array[100], minimum, size, c, location = 1;

    printf("Enter the number of elements in array\n");
    scanf("%d", &size);

    printf("Enter %d integers\n", size);
    for (c = 0; c < size; c++)
        scanf("%d", &array[c]);

    minimum = array[0];
    for (c = 1; c < size; c++) {
        if (array[c] < minimum) {
            minimum = array[c];
            location = c + 1;
        }
    }

    printf("Minimum element is present at location number %d and its value is %d.\n", location, minimum);
    return 0;
}


What is the importance of recursion?

In computer science, complex problems are solved using algorithms. An algorithm is a sequence of specific but simple steps that are carried out in a procedural manner. Some steps may need to be repeated, in which case we use an iterative loop. Sometimes, however, an individual step is itself a smaller instance of the same algorithm; such algorithms are said to be recursive.

An example of a recursive algorithm is the factorial. The factorial of a value n is the product of all values in the range 1 to n where n > 0. If n is zero, the factorial is 1; otherwise factorial(n) = n * factorial(n-1). Thus the factorial of any non-negative value can be expressed in pseudocode as follows:

function factorial (num) {
    if num = 0 then return 1
    otherwise return num * factorial (num - 1)
}

Note that any function that calls itself is a recursive function. Whenever we use recursion, it is important that the function has an exit condition, otherwise it would call itself repeatedly, creating an infinite recursion with no end. Here the exit condition occurs when n is 0, which returns 1 to the previous instance. At this point the recursions begin to "unwind", such that each instance returns its product to the previous instance. Thus if num were 3, we get the following computational steps:

factorial(3) = 3 * factorial(3-1)
factorial(2) = 2 * factorial(2-1)
factorial(1) = 1 * factorial(1-1)
factorial(0) = 1
factorial(1) = 1 * 1 = 1
factorial(2) = 2 * 1 = 2
factorial(3) = 3 * 2 = 6

Note how we cannot calculate 3 * factorial(3-1) until we know the value of factorial(3-1). It is the result of factorial(2) but, by the same token, we cannot work out 2 * factorial(2-1) until we know factorial(2-1). We continue these recursions until we reach factorial(0), at which point we can work our way back through the recursions and complete each calculation in reverse order.
Thus factorial(3) becomes 1*1*2*3 = 6.

Although algorithms that are naturally recursive imply a recursive solution, this isn't always the best choice in programming. The problem with recursion is that calling any function is an expensive operation, even when it is the same function. This is because the current function must push the return address and the arguments for the function being called before it can pass control to that function. The called function then pops its arguments off the stack and processes them; when it returns, the return address is popped off the stack and control returns to that address. All of this is done automatically, behind the scenes, in high-level languages.

Compilers can optimise away unnecessary function calls through inline expansion (replacing the function call with the actual code of the function, substituting the actual arguments from the call for the formal arguments). However, this increases code size, so the compiler has to weigh the performance benefit of inline expansion against the decreased performance from the larger code. With recursive functions, the benefits of inline expansion are quickly outweighed by the code expansion because each recursion must be expanded. Even when inline expansion is deemed beneficial, the compiler will often limit the expansions to a predetermined depth and use a recursive call for the remaining recursions.

Fortunately, many recursive algorithms can be implemented as an iterative loop rather than a recursive one. This usually leads to a more complex algorithm, but it is often more efficient than inline expansion. The factorial shown above is a typical example.
First, let's review the recursive algorithm:

function factorial (num) {
    if num = 0 then return 1
    otherwise return num * factorial (num - 1)
}

This can be expressed iteratively as follows:

function factorial (num) {
    let var := 1
    while 1 < num {
        var := var * num
        num := num - 1
    } end while
    return var
}

In this version we begin by initialising a variable, var, with the value 1. We then enter an iterative loop if 1 is less than num. Inside the loop we multiply var by num, assign the result back to var, and decrement num. If 1 is still less than num, we perform another iteration, and we continue iterating until num reaches 1, at which point we exit the loop. Finally we return var, which holds the factorial of the original value of num. Note that when the original value of num is 1 or 0 (so 1 is not less than num), the loop never executes and we simply return var.

Although the iterative solution is more complex, and the recursive solution expresses the algorithm more directly, the iterative solution is likely to be more efficient because all but one function call has been eliminated. Moreover, the implementation is simple enough to be inline expanded, which would eliminate all function calls entirely. Only a performance test will tell you whether the iterative solution really is any better.

Not all recursive algorithms can be easily expressed iteratively; divide-and-conquer algorithms are a case in point. Whereas a factorial is a gradual reduction of the same problem, divide-and-conquer spawns two or more instances of the same problem. A typical example is the quicksort algorithm. Quicksort is ideally suited to sorting an array: given the lower and upper bounds of a subarray, quicksort will sort that subarray.
It achieves this by selecting a pivot value from the subarray and splitting the subarray in two: values less than the pivot go in one part and all other values in the other, with the pivot in between. We then sort each of these two subarrays in turn using exactly the same algorithm. The exit condition occurs when a subarray has fewer than 2 elements, because any array (or subarray) with fewer than 2 elements can always be regarded as sorted.

Since each instance of quicksort results in two recursions (one for each part of the subarray), the total number of instances doubles with each level of recursion, hence it is a divide-and-conquer algorithm. However, it is a depth-first recursion, so only one of the two recursions is executing at any time. Nevertheless, each instance of the function needs to keep track of the lower and upper bounds of the subarray it is processing, as well as the position of the pivot value, because when we return from the first recursion we need those values in order to invoke the second recursion. With recursive function calls those values are maintained automatically on the call stack, but an iterative solution would have to maintain its own stack instead. Since we need a stack either way, the benefits of iteration are somewhat diminished; we might as well use the one we get for free with recursion.

However, when we invoke the second recursion we no longer need to remember those values, because when that recursion returns, the two halves of the subarray are sorted and there is nothing left to do but return to the previous instance. Knowing this, we can eliminate the second recursion entirely: once we return from the first recursion, we simply change the lower bound of the subarray and jump back to the beginning of the function.
This effectively halves the total number of recursions. When the final statement of a function is a recursive call to the same function, it is known as a "tail call". Although we can manually optimise functions to eliminate tail calls, compilers that are aware of tail-call recursion can perform the optimisation for us automatically. Since the point of tail-call optimisation is to reduce the number of recursions, it pays to apply it to the call that would normally produce the greatest depth of recursion. In quicksort, the deepest recursions always occur on the subarray with the most elements; therefore, if we recurse on the smaller subarray and tail-call the larger one, we reduce the depth of recursion accordingly.

Although recursion is expensive, we shouldn't assume that iterative solutions are any cheaper. Whenever we have a choice of implementation, it pays to run some performance tests. Quite often the benefits of iteration are not as significant as we might have thought, while the increased complexity makes the code significantly harder to read and maintain. Wherever possible we should express our ideas directly in code; but if more complex code yields measurable improvements in performance and/or memory consumption, it makes sense to choose that route instead.
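The recurse-on-the-smaller-subarray technique described above can be sketched in Java as follows. This is an illustrative sketch (class name, Lomuto partition scheme, and last-element pivot are assumptions), not the original answer's code:

```java
import java.util.Arrays;

public class QuickSortDemo {
    // Sorts a[lo..hi] in place. Recursion is limited to the smaller partition;
    // the larger partition is handled by the loop, so the stack depth stays O(log n).
    static void quicksort(int[] a, int lo, int hi) {
        while (lo < hi) {
            int p = partition(a, lo, hi);
            if (p - lo < hi - p) {
                quicksort(a, lo, p - 1); // recurse on the smaller left part
                lo = p + 1;              // loop (eliminated tail call) on the larger right part
            } else {
                quicksort(a, p + 1, hi); // recurse on the smaller right part
                hi = p - 1;              // loop (eliminated tail call) on the larger left part
            }
        }
    }

    // Lomuto partition: uses a[hi] as the pivot and returns its final index.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2};
        quicksort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 5, 8, 9]
    }
}
```

The while loop replaces the second recursive call exactly as the text describes: after the first (smaller) half is sorted, the bounds are narrowed to the larger half and control jumps back to the top of the function.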


Write a c function for minimum and maximum value from the given input?

Assume that all your data are saved in an array of type double and size 100:

int arrSize = 100;
double arr[100] = {0.0};
...
for (int i = 0; ...) {
    ...
    cin >> arr[i]; // here you enter all elements using a loop (C++ I/O; use scanf in plain C)
}
...

Outside of the function main you can define two functions, min and max, which take the array arr as an argument:

double min(const double arr[], int arrSize) {
    double minimum = arr[0];
    for (int j = 1; j < arrSize; j++) {
        if (minimum > arr[j]) {
            minimum = arr[j];
        }
    }
    return minimum;
}

double max(const double arr[], int arrSize) {
    double maximum = arr[0];
    for (int j = 1; j < arrSize; j++) {
        if (maximum < arr[j]) {
            maximum = arr[j];
        }
    }
    return maximum;
}
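Since the two functions above scan the same array twice, both values can also be found in a single pass. A minimal Java sketch of that variant (class and method names are assumptions for the example):

```java
public class MinMaxDemo {
    // Returns {min, max} of ns in one pass; ns must be non-empty.
    static double[] minMax(double[] ns) {
        double min = ns[0], max = ns[0];
        for (int i = 1; i < ns.length; i++) {
            if (ns[i] < min) min = ns[i]; // new smallest value
            if (ns[i] > max) max = ns[i]; // new largest value
        }
        return new double[] { min, max };
    }

    public static void main(String[] args) {
        double[] data = { 4.5, -1.0, 7.25, 3.0 };
        double[] mm = minMax(data);
        System.out.println(mm[0] + " " + mm[1]); // -1.0 7.25
    }
}
```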
