Best Answer

Recursion is important because many algorithms are naturally recursive, particularly those that use a divide-and-conquer technique: we divide a complex problem into smaller instances of the same problem, and each division reduces the problem further until the instances are small enough to be solved directly. We then combine those individual solutions to solve the original problem.

As a simple example, factorials can be calculated such that f(n) = n * f(n-1) for all n>1 where f(1) and f(0) are both 1. We can implement this function recursively as follows:

unsigned f (unsigned n) {
    return n<2 ? 1 : n * f(n-1);
}

While the function itself is naturally recursive, that doesn't mean we must use a recursive solution. In this case, an iterative solution would actually be more efficient:

unsigned f (unsigned n) {
    unsigned a = 1;
    while (1<n) a *= n--;
    return a;
}

However, in languages that allow compile-time computation (such as C++), recursive functions can be advantageous:

constexpr unsigned f (unsigned n) {
    return n<2 ? 1 : n * f(n-1);
}

With this function, a good compiler will convert f(6) to the literal constant 720, completely eliminating the runtime overhead of invoking the function recursively. However, for values that cannot be fully computed at compile time (for example, when the compiler's recursion limit is exceeded), the function will still be invoked at runtime; f(10) might be replaced with the constant expression 10 * f(9).
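For example, the following sketch (assuming only the constexpr f defined above) shows both behaviours: a constexpr context forces compile-time evaluation, while an argument that is only known at runtime produces an ordinary runtime call.

#include <iostream>

constexpr unsigned f (unsigned n) {
    return n<2 ? 1 : n * f(n-1);
}

int main () {
    constexpr unsigned a = f(6);   // constexpr context: a is the literal constant 720
    unsigned n;
    std::cin >> n;                 // n is not known until runtime
    unsigned b = f(n);             // so this call invokes f recursively at runtime
    std::cout << a << ' ' << b << '\n';
}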

Languages that support template metaprogramming can provide the means to completely eliminate the runtime overhead associated with recursion:

template <unsigned N>
constexpr unsigned f () {
    return N * f<N-1>();
}

template <>
constexpr unsigned f<1> () {
    return 1;
}

template <>
constexpr unsigned f<0> () {
    return 1;
}

Note that the terminating conditions are handled through specialisation rather than through recursion. At compile time, f<1>() invokes the first specialisation while f<0>() invokes the second. For all values N>1, the general function is invoked recursively at compile time.

constexpr unsigned x = f<10>();

Compile-time computation will effectively replace the above expression with:

constexpr unsigned x = 3628800;

Note that no code is generated for these template functions, so we cannot invoke them at runtime; they are used purely for compile-time computation.

In languages that do not support template metaprogramming or constexpr, we can still make use of recursion, particularly where an iterative approach is too complex to implement efficiently. Divide-and-conquer algorithms are a typical example.

The Quicksort algorithm is a relatively simple recursive algorithm that can be implemented as follows:

void qsort (int a[], int lo, int hi) {
    if (hi<=lo) return;                  // 0 or 1 elements: already sorted
    int pivot = partition (a, lo, hi);   // index of the pivot after partitioning
    qsort (a, lo, pivot-1);              // sort the left sub-array
    qsort (a, pivot+1, hi);              // sort the right sub-array
}

The partition() function does most of the actual work. Given the lower and upper bounds of the array it will select a pivot value and position this value such that all values to the left of it are less than it and all those to the right are not less than it. Once the array is partitioned, the index of the pivot value is returned. This value is used to determine the upper bound of the left sub-array (pivot-1) and the lower bound of the right sub-array (pivot+1).
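The exact partitioning scheme is left unspecified above; the following is a minimal Lomuto-style sketch consistent with that description (the choice of the last element as the pivot is an assumption):

#include <utility>    // std::swap

// Partition a[lo..hi] around a pivot value (here, the last element).
// On return, values to the left of the returned index are less than the
// pivot and values to the right are not less than it.
int partition (int a[], int lo, int hi) {
    int pivot = a[hi];          // assumed scheme: last element is the pivot
    int i = lo;                 // next slot for a value smaller than the pivot
    for (int j = lo; j < hi; ++j) {
        if (a[j] < pivot) {
            std::swap (a[i], a[j]);
            ++i;
        }
    }
    std::swap (a[i], a[hi]);    // move the pivot into its final position
    return i;                   // index of the pivot value
}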

Wiki User · 8y ago

More answers

Wiki User · 9y ago

Recursion uses the output of an algorithm as input to the same algorithm, gradually reducing a complex problem to a simple problem that can easily be solved.

For instance, given the positive value n, we can calculate the factorial of n with the following function:

unsigned factorial (unsigned n) {
    return n * factorial (n-1);
}

This is the essence of recursion; a function that calls itself. However, as it stands, this function will call itself infinitely (or until the stack is full at least). Therefore we need to define an endpoint. That endpoint is always the simplest case, the point at which the problem is small enough to be solved. In this case, the problem is solved when n is less than 2:

unsigned factorial (unsigned n) {
    if (n<2)
        return 1;   // 0! and 1! are both 1
    // else
    return n * factorial (n-1);
}

Thus if n is 0 or 1, the return value is 1. If n is 2, the return value is 2 * factorial (1), which is 2*1=2. Factorial 3 is therefore 3 * factorial(2), which becomes 3*2*1 = 6. And so on.

Wiki User · 10y ago

Recursion can be thought of as a data structure in the sense that the call stack is itself a structure. Recursion occurs when a function calls itself, thus its current local variables are pushed onto the stack to be recalled when the recursive call returns. You use recursion whenever a problem can be modified to produce smaller instances of the same problem, such as when implementing divide-and-conquer algorithms.
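As a sketch of that idea, the recursive quicksort shown in the first answer can be rewritten with an explicit stack standing in for the call stack (qsort_iterative is a hypothetical name; partition is the function described in the first answer):

#include <stack>
#include <utility>

int partition (int a[], int lo, int hi);   // as described in the first answer

// Iterative quicksort: an explicit stack of sub-array bounds replaces the call stack.
void qsort_iterative (int a[], int lo, int hi) {
    std::stack<std::pair<int,int>> pending;
    pending.push ({lo, hi});
    while (!pending.empty()) {
        auto [l, h] = pending.top();
        pending.pop();
        if (h <= l) continue;              // 0 or 1 elements: nothing to do
        int p = partition (a, l, h);
        pending.push ({l, p-1});           // each "recursive call" becomes a push
        pending.push ({p+1, h});
    }
}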

Wiki User · 11y ago

The process in which a function calls itself is called recursion; in short, the process repeats itself.

For example, consider the factorial of 5 (5!). The whole process:

5*4!
5*4*3!
5*4*3*2!
5*4*3*2*1!
5*4*3*2*1*0!
5*4*3*2*1*1

So the final answer produced is 120.

In other words, the process repeats until a specified condition is met.

Wiki User · 11y ago

Recursion occurs whenever a function calls itself. Recursive functions must have a clearly defined return path, as determined by a conditional expression within the function. Without a return path, the function will continually call itself until the call stack is depleted, at which point the program will crash.

As the function calls return (known as "unwinding"), variables placed on the stack can be recalled. This is akin to a loop whereby each iteration maintains its own set of variables.

However, if these variables are of no use to the function, then recursion should be avoided. A typical scenario is a tail call, where the last call in a function is a recursive call to the same function. Since it is the last call in the function, the local variables are no longer required, so it's better to modify the existing variables and jump to the appropriate point in the current instance of the function with a goto. Some compilers will recognise tail calls and optimise accordingly, but it's good practice to roll your own.
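A minimal sketch of that transformation, using an accumulator form of the factorial (the accumulator parameter and the function names are assumptions for illustration):

// Tail-recursive form: the recursive call is the very last thing the function does.
unsigned fact (unsigned n, unsigned acc) {
    if (n < 2) return acc;
    return fact (n-1, acc*n);      // tail call: no local state is needed afterwards
}

// Hand-rolled tail-call elimination: reuse the parameters and jump back with a goto.
unsigned fact_loop (unsigned n, unsigned acc) {
top:
    if (n < 2) return acc;
    acc *= n;
    --n;
    goto top;                      // the "recursive call" becomes a jump
}

// Usage: fact (5, 1) and fact_loop (5, 1) both return 120.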

When designing recursive functions, be aware of the depth of the calls, and keep arguments/parameters to an absolute minimum.

A good example of recursion is the quicksort algorithm, which repeatedly splits a subset into two smaller subsets around a pivot, with values smaller than the pivot in one subset and the remaining values in the other. The function then calls itself twice, once for each subset (sorting by the divide-and-conquer method). The function immediately returns when the subset has fewer than 2 items, so there is a clearly defined return path that must be reached at some point. This is also a good example of tail-call recursion, since the second recursive call is the last call of the function. The function can then be engineered to make a recursive call on the smaller of the two subsets and a tail call on the larger subset. Since larger subsets require more recursion than smaller ones, this greatly reduces the number of recursions (and therefore the depth of those recursions) required to sort any given subset.
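A sketch of that engineering, assuming the partition function described in the first answer (qsort2 is a hypothetical name):

int partition (int a[], int lo, int hi);   // as described in the first answer

// Recurse on the smaller sub-array; loop on the larger one in place of the
// tail call, keeping the recursion depth logarithmic in the array size.
void qsort2 (int a[], int lo, int hi) {
    while (lo < hi) {
        int p = partition (a, lo, hi);
        if (p - lo < hi - p) {      // left sub-array is the smaller one
            qsort2 (a, lo, p-1);    // recurse on the smaller side
            lo = p + 1;             // continue the loop on the larger side
        } else {
            qsort2 (a, p+1, hi);    // recurse on the smaller side
            hi = p - 1;             // continue the loop on the larger side
        }
    }
}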

Most recursive functions can be written as iterative functions (with internal loops). Recursive functions tend to be more elegant, even if they are often less efficient due to the additional function calls; however, the difference may be insignificant when compared with the additional programming time required to implement a naturally recursive function as an iterative one.

Wiki User · 11y ago

A recursive method (or function) is one that calls itself.

Here is a popular example: the factorial function n! (read the exclamation mark as "factorial of n", or "n factorial"), for a positive integer n, is the product of all integers from 1 up to n. For example, 4! = 1 x 2 x 3 x 4. In math, the factorial is sometimes defined as:

0! = 1

n! = n x (n-1)! (for n > 0)

You can write a function or method, using this definition. Here is the pseudocode:

function factorial(n)
    if (n = 0) return 1
    else return n * factorial(n - 1)

Note that this is not very efficient, but many problems that are extremely complicated to solve without recursion can be solved elegantly with it (for example, doing something with all files in a folder, including all its subfolders).
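A minimal C++17 sketch of the folder example (the function name and the use of std::filesystem are illustrative assumptions):

#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// Visit every file in 'dir', descending into each subfolder recursively.
// Assumes the folder tree contains no symbolic-link cycles.
void list_files (const fs::path& dir) {
    for (const auto& entry : fs::directory_iterator (dir)) {
        if (entry.is_directory())
            list_files (entry.path());           // recursive case: a subfolder
        else
            std::cout << entry.path() << '\n';   // base case work: an ordinary file
    }
}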

Wiki User · 13y ago

When a method calls itself (directly or indirectly).

Wiki User · 13y ago

Recursion is where the same bit of code (routine) calls itself.

Q: What is the importance of recursion in computer programming?
Related questions

What has the author Mitchell Wand written?

Mitchell Wand has written: 'Induction, recursion, and programming' -- subject(s): Computer programming, Induction (Mathematics), Recursion theory


What has the author Wim H Hesselink written?

Wim H. Hesselink has written: 'Programs, Recursion and Unbounded Choice (Cambridge Tracts in Theoretical Computer Science)' 'Programs, recursion, and unbounded choice' -- subject(s): Computer programming


What is the importance of programming?

Without programming, computers would be expensive doorstops. Computer hardware requires computer software. Programming, even in machine code, is essential in creating that software.


What type of science is recursion?

Recursion is a concept from computer science. Typically, computer programmers write a specific program to test out a recursive approach, based on the known facts of the problem, in order to define the solution.


What is the use of recursion function?

Read the part in your programming manual/textbook about recursion. The short answer, while easy, does not tell you anything about the power or dangers of recursion. It is the power and dangers of recursion that are important, because once they are understood you can use recursion to good effect without running out of stack space.


What are the importance of programming?

Programming is what makes a computer more than just a simple pocket calculator. Everything that you can do today on a computer had to be programmed by someone at some point.


What is the difference between left recursion and right recursion in a grammar?

Recursion is what it's called when a function calls itself. When a function calls itself immediately before returning, it's called tail recursion. Tail recursion can be more efficiently written as iteration. In fact a good compiler will recognize tail recursion and compile it as iteration. There is no such thing as left or right recursion in C programming.



How is recursion used in computer science?

Recursion is the process whereby one large problem is solved by solving several smaller instances of the same problem. It is used throughout computer science, particularly in divide-and-conquer algorithms.


What are the merits and demerits of recursion?

Merits of recursion: mathematical functions, such as Fibonacci series generation, can be implemented more easily using recursion than with an iterative technique. Demerits of recursion: some programming languages do not support recursion, so recursive mathematical functions must instead be implemented using iterative methods; and even though mathematical functions can be easily implemented using recursion, it is always at the cost of execution time and memory space, as recursive programs take considerably more storage and more time during processing.


What are demerits of recursion?

Demerits of recursion: some programming languages do not support recursion, so recursive mathematical functions must instead be implemented using iterative methods. Even though mathematical functions can be easily implemented using recursion, it is always at the cost of execution time and memory space: recursive programs take considerably more storage and more time during processing.


Did you mean recursion?

By definition, recursion means the repeated application of a recursive definition or procedure. It is used to define an object in terms of itself in computer science and mathematical logic.