Q: What is the algorithm and flowchart for calculating the factorial of a number using recursion?
Best Answer

A flowchart for the factorial of a number can be made using different software packages; Microsoft Word is a popular choice for beginners drawing simple flowcharts.
The flowchart follows the recursive definition of the factorial: start, read n, then call fact(n). Inside fact, a decision box tests whether n <= 1; the yes branch returns 1 and the no branch returns n * fact(n - 1), with the flow looping back through the recursive call until the base case is reached, as sketched below.
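
A minimal plain-text sketch of the algorithm such a flowchart would express (box shapes paraphrased in brackets; fact is the function name used elsewhere on this page):

    [start]    START
    [input]    read n
    [process]  result = fact(n)
    [output]   print result
    [stop]     STOP

    fact(n):
    [decision] n <= 1 ?  -- yes -->  return 1
        | no
        v
    [process]  return n * fact(n - 1)   <-- the recursive call flows back into fact(n)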

Wiki User (6y ago)

More answers

Wiki User (14y ago)

int fact(int n) { return (n <= 1 ? 1 : n * fact(n - 1)); } /* base case: 0! = 1! = 1 */
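
A minimal driver for the function above, as a sketch (the input handling shown is an assumption, not part of the original answer):

    #include <stdio.h>

    int fact(int n);   /* the recursive function shown above */

    int main(void)
    {
        int n;
        if (scanf("%d", &n) == 1 && n >= 0)   /* read a non-negative number */
            printf("%d! = %d\n", n, fact(n));
        return 0;
    }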

Wiki User (13y ago)

read n
fact = 1;
while (n >= 1)
{
    fact *= n;
    n--;
}
print fact
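
A direct C translation of that pseudocode, as a minimal sketch (the I/O details are assumptions):

    #include <stdio.h>

    int main(void)
    {
        int n;
        long fact = 1;                 /* long reduces the risk of overflow */
        if (scanf("%d", &n) != 1)      /* read n */
            return 1;
        while (n >= 1) {
            fact *= n;                 /* fact = n * (n-1) * ... * 1 */
            n--;
        }
        printf("%ld\n", fact);         /* print fact */
        return 0;
    }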

Wiki User (11y ago)

1. Flowchart of the C program.
2. Program to find the factorial of a number using a function.

Wiki User (11y ago)

A flow chart of the recursive function follows the structure described in the best answer above: test the base case, otherwise multiply and recurse.



Related questions

Program for calculating the factorial of a number using recursion in C?

The factorial of n is given by the formula n! = n x (n-1) x ... x 1. The following loop will calculate it:

    int i;
    long x = 1;
    for (i = n; i > 1; i--)
        x = x * i;

x is declared long to reduce the risk of integer overflow; checks that n is positive and so on still need to be added. Note that this version is iterative; a recursive sketch follows.
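
Since the question asks for recursion, here is a minimal recursive sketch (the name fact_r is a hypothetical choice, not from the original answer):

    /* n! = n * (n-1)!, with the base case 0! = 1! = 1 */
    long fact_r(int n)
    {
        if (n <= 1)
            return 1;
        return n * fact_r(n - 1);
    }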


Can you provide a solution to the diamond-square algorithm using Java and recursion?

Yes. It is possible to provide a solution to the diamond-square algorithm using Java and recursion.


Algorithm to find factorial using functions?

Factorial using recursion in C++:

    unsigned int fact(unsigned int a)
    {
        if (a <= 1)
            return 1;
        return a * fact(a - 1);
    }

Using a looping structure:

    unsigned int fact(unsigned int n)
    {
        unsigned int i, f = 1;
        for (i = 1; i <= n; i++)
            f *= i;
        return f;
    }


Which is faster for finding a factorial: recursion or a do-while loop?

Recursion is generally slower than iteration, because every recursive call incurs function-call overhead (pushing arguments and a return address, then unwinding) that a simple loop avoids.


What is a flow chart for finding the factorial of a given number using a recursive function?

The flow chart mirrors the recursive definition: start; read n; call fact(n). Inside fact, a decision box tests whether n <= 1; the yes branch returns 1, while the no branch returns n * fact(n - 1), with the recursive call flowing back into the top of fact until the base case is reached.


Write a program in Java to find the factorial of a number?

    import java.math.BigInteger;

    public class Factorial {
        public static void main(String[] args) {
            BigInteger n = BigInteger.ONE;
            for (int i = 1; i <= 20; i++) {
                n = n.multiply(BigInteger.valueOf(i));   // BigInteger avoids integer overflow
                System.out.println(i + "! = " + n);
            }
        }
    }


What are the problems involved in using recursive functions?

There are no problems when they are used correctly. The main problem is that inexperienced programmers often assume that a recursive algorithm requires a recursive function, when that is not the case at all; most recursive algorithms can be implemented as an iterative loop. However, even when recursion is required, it is often implemented incorrectly.

Consider the recursive algorithm to find the factorial of an unsigned value:

    const unsigned factorial (const unsigned n) {
        return n<2 ? 1 : n*factorial (n-1);
    }

The problem with this is that we are calculating a constant expression at runtime. E.g., the factorial of 5 is always going to be 120, so it makes more sense to calculate the factorial at compile time rather than at runtime:

    constexpr unsigned factorial (const unsigned n) {
        return n<2 ? 1 : n*factorial (n-1);
    }

So, when we write the following:

    constexpr unsigned x = factorial (5);

the compiler will automatically generate the following equivalent code for us:

    constexpr unsigned x = 120;

In other words, the recursive function is completely optimised away. That's fine when we're dealing with constant expressions, but when we deal with variables we have a problem, because there are no variables at compile time:

    unsigned y = rand (); /* a random value */
    unsigned x = factorial (y);

Now we cannot take advantage of compile-time recursion; we must sacrifice runtime performance in order to calculate the factorial of y, whatever y happens to be at runtime. Thus if y were 5, we incur the following series of function calls:

    factorial (5)
    factorial (4)
    factorial (3)
    factorial (2)
    factorial (1)

That's 5 function calls in total. Once we reach the exit condition (n<2), the functions unwind to calculate the result:

    factorial (1) = 1
    factorial (2) = 2*factorial(1) = 2*1 = 2
    factorial (3) = 3*factorial(2) = 3*2 = 6
    factorial (4) = 4*factorial(3) = 4*6 = 24
    factorial (5) = 5*factorial(4) = 5*24 = 120

That's a lot of work to be done at runtime. Now let's look at the iterative solution:

    unsigned factorial (unsigned n) {
        unsigned result = 1;
        while (1<n) result *= n--;
        return result;
    }

Now we only incur one function call at runtime. We can still use the constant-expression version for compile-time recursion, but for variables we gain better performance at runtime. Moreover, given the simplicity of the function, the compiler can still optimise away the function call entirely through inline expansion. Thus:

    unsigned y = rand (); /* a random value */
    const unsigned x = factorial (y);

becomes the equivalent of:

    unsigned y = rand (); /* a random value */
    unsigned t1 = y;
    unsigned t2 = 1;
    while (1<t1) t2 *= t1--;
    const unsigned x = t2;

We still have to perform the calculation, of course, but we no longer incur the penalty of making unnecessary function calls. At worst there will be just one function call, and only when expanding the function is not possible (which is unlikely in this case).

So the only time we really need recursion at runtime is when the benefit of inline expansion becomes counter-productive, resulting in runtime code that is overly complicated. Typical examples are divide-and-conquer algorithms such as quicksort, where two recursions are necessary. However, when the final call of a function is a recursion, we can take advantage of tail-call recursion to eliminate the second recursion: we simply modify the variables and re-invoke the current instance of the function rather than invoking a new instance. We can only do this when the local variables are not required after we return from the recursion.
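
To illustrate that tail-call point, here is a minimal sketch of factorial rewritten in tail-call form (the accumulator parameter acc is an addition for illustration, not from the original answer):

    /* The recursive call is the final action, so a tail-call-aware
       compiler can replace it with a jump: no stack growth. */
    unsigned factorial_tail (unsigned n, unsigned acc)
    {
        if (n < 2) return acc;                    /* exit condition */
        return factorial_tail (n - 1, acc * n);
    }

    /* usage: factorial_tail (5, 1) == 120 */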


What are the merits and demerits of recursion in algorithms?

The advantages of recursion revolve around the fact that quite a few algorithms lend themselves naturally to recursion (tree traversal, binary searches, quicksort, etc.). The disadvantages of recursion include:
* a finite number of recursive steps (limited stack space)
* speed/efficiency (it is cheaper to increment a loop counter than to call a function)
A sketch of one such naturally recursive algorithm follows.
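
A minimal sketch of in-order binary-tree traversal, one of the algorithms named above (the Node type and the name inorder are illustrative assumptions):

    #include <stdio.h>

    typedef struct Node {
        int value;
        struct Node *left, *right;
    } Node;

    /* Visit the left subtree, then the node itself, then the right
       subtree; each subtree is a smaller instance of the same problem. */
    void inorder(const Node *n)
    {
        if (n == NULL) return;   /* exit condition: empty subtree */
        inorder(n->left);
        printf("%d\n", n->value);
        inorder(n->right);
    }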


What is recursion in programming?

A recursive method (or function) is one that calls itself. Here is a popular example: the factorial function n! (read the exclamation mark as: factorial of n, or n factorial) is, for a positive integer, the product of all numbers up to that number. For example, 4! = 1 x 2 x 3 x 4. In math, the factorial is sometimes defined as:

    0! = 1
    n! = n x (n-1)!   (for n > 0)

You can write a function or method using this definition. Here is the pseudocode:

    function factorial(n)
        if (n = 0)
            return 1
        else
            return n * factorial(n - 1)

Note that this is not very efficient, but there are many problems that are extremely complicated without recursion but which can be solved elegantly with it (for example, doing something with all files in a folder, including all subfolders).
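
As a hedged illustration of that folder example, a minimal POSIX sketch that recursively visits every entry under a directory (the name list_files and the fixed path buffer are assumptions):

    #include <stdio.h>
    #include <string.h>
    #include <dirent.h>

    /* Print every entry under 'path', descending into subfolders.
       opendir fails on regular files, so the recursion simply
       returns for anything that is not a directory. */
    void list_files(const char *path)
    {
        DIR *dir = opendir(path);
        if (dir == NULL) return;
        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
                continue;        /* skip self and parent links */
            char child[1024];
            snprintf(child, sizeof child, "%s/%s", path, entry->d_name);
            printf("%s\n", child);
            list_files(child);   /* recurse into subdirectories */
        }
        closedir(dir);
    }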


How do you overcome limitations of stacks in polygon filling?

You overcome limitations of the stack in polygon filling, or in any other algorithm for that matter, by using an iterative technique rather than a recursive technique. Recursion is quite useful and can simplify algorithm design. Polygon filling, however, is a class of algorithm that can potentially have a very deep recursion depth. This causes stress on the stack, hence the need for iteration.
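
A minimal sketch of that iterative technique: a four-way flood fill that replaces the recursive calls with an explicit stack of pixel coordinates (the grid buffer, W and H are assumptions for illustration):

    #include <stdlib.h>

    #define W 640
    #define H 480
    static int grid[H][W];   /* hypothetical pixel buffer */

    typedef struct { int x, y; } Point;

    /* Fill the region of colour 'from' containing (x, y) with 'to'.
       The explicit heap-allocated stack replaces the call stack of
       the recursive version, so depth is limited by heap, not stack. */
    void flood_fill(int x, int y, int from, int to)
    {
        if (from == to) return;
        Point *stack = malloc(sizeof(Point) * (4 * W * H + 1));
        if (stack == NULL) return;
        int top = 0;
        stack[top++] = (Point){ x, y };
        while (top > 0) {
            Point p = stack[--top];
            if (p.x < 0 || p.x >= W || p.y < 0 || p.y >= H) continue;
            if (grid[p.y][p.x] != from) continue;
            grid[p.y][p.x] = to;
            stack[top++] = (Point){ p.x + 1, p.y };
            stack[top++] = (Point){ p.x - 1, p.y };
            stack[top++] = (Point){ p.x, p.y + 1 };
            stack[top++] = (Point){ p.x, p.y - 1 };
        }
        free(stack);
    }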


What is the need for recursion?

In computer science, complex problems are resolved using an algorithm. An algorithm is a sequence of specific but simple steps that need to be carried out in a procedural manner. Some steps may need to be repeated, in which case we use an iterative loop to repeat those steps. However, sometimes we encounter an individual step that is actually a smaller instance of the same algorithm. Such algorithms are said to be recursive.

An example of a recursive algorithm is the factorial algorithm. The factorial of a value, n, is the product of all values in the range 1 to n where n>0. If n is zero, the factorial is 1; otherwise factorial(n) = n * factorial(n-1). Thus the factorial of any positive value can be expressed in pseudocode as follows:

    function factorial (num) {
        if num = 0 then return 1
        otherwise return num * factorial (num - 1)
    }

Note that any function that calls itself is a recursive function. Whenever we use recursion, it is important that the function has an exit condition, otherwise the function would call itself repeatedly, creating an infinite recursion with no end. In this case the exit condition occurs when n is 0, which returns 1 to the previous instance. At this point the recursions begin to "unwind", such that each instance returns its product to the previous instance. Thus if num were 3, we get the following computational steps:

    factorial(3) = 3 * factorial(3-1)
    factorial(2) = 2 * factorial(2-1)
    factorial(1) = 1 * factorial(1-1)
    factorial(0) = 1
    factorial(1) = 1 * 1 = 1
    factorial(2) = 2 * 1 = 2
    factorial(3) = 3 * 2 = 6

Note how we cannot calculate 3 * factorial(3-1) until we know what the value of factorial(3-1) actually is. It is the result of factorial(2) but, by the same token, we cannot work out 2 * factorial(2-1) until we know what factorial(2-1) is. We continue these recursions until we reach factorial(0), at which point we can begin working our way back through the recursions and thus complete each calculation in reverse order. Thus factorial(3) becomes 1*1*2*3 = 6.

Although algorithms that are naturally recursive imply a recursive solution, this isn't always the case in programming. The problem with recursion is that calling any function is an expensive operation, even when it is the same function. This is because the current function must push the return address and the arguments to the function being called before it can pass control to that function. The called function can then pop its arguments off the stack and process them. When the function returns, the return address is popped off the stack and control returned to that address. All of this is done automatically, behind the scenes, in high-level languages.

Compilers can optimise away unnecessary function calls through inline expansion (replacing the function call with the actual code of the function, replacing the formal arguments with the actual arguments from the function call). However, this results in increased code size, so the compiler has to weigh up the performance benefits of inline expansion against the decreased performance from the increased code size. With recursive functions, the benefits of inline expansion are quickly outweighed by the code expansion, because each recursion must be expanded. Even if inline expansion is deemed beneficial, the compiler will often limit those expansions to a predetermined depth and use a recursive call for all remaining recursions. Fortunately, many recursive algorithms can be implemented as an iterative loop rather than a recursive loop.

This inevitably leads to a more complex algorithm, but it is often more efficient than inline expansion. The factorial example shown above is typical. First, let's review the recursive algorithm:

    function factorial (num) {
        if num = 0 then return 1
        otherwise return num * factorial (num - 1)
    }

This can be expressed iteratively as follows:

    function factorial (num) {
        let var := 1
        while 1 < num {
            var := var * num
            num := num - 1
        } end while
        return var
    }

In this version, we begin by initialising a variable, var, with the value 1. We then initiate an iterative loop if 1 is less than num. Inside the loop, we multiply var by num and assign the result back to var. We then decrement num. If 1 is still less than num, we perform another iteration of the loop. We continue iterating until num is 1, at which point we exit the loop. Finally, we return the value of var, which holds the factorial of the original value of num. Note that when the original value of num is either 1 or 0 (where 1 would not be less than num), the loop will not execute and we simply return the value of var.

Although the iterative solution is more complex than the recursive solution, and the recursive solution expresses the algorithm more effectively, the iterative solution is likely to be more efficient because all but one function call has been eliminated. Moreover, the implementation is not so complicated that it cannot be inline expanded, which would eliminate all function calls entirely. Only a performance test will tell you whether the iterative solution really is any better.

Not all recursive algorithms can be easily expressed iteratively. Divide-and-conquer algorithms are a case in point. Whereas a factorial is simply a gradual reduction of the same problem, divide-and-conquer uses two or more instances of the same problem. A typical example is the quicksort algorithm. Quicksort is ideally suited to sorting an array. Given the lower and upper bounds of an array (a subarray), quicksort will sort that subarray. It achieves this by selecting a pivot value from the subarray and then splitting the subarray into two subarrays, where values that are less than the pivot value are placed in one subarray and all other values are placed in the other, with the pivot value in between the two. We then sort each of these two subarrays in turn, using exactly the same algorithm. The exit condition occurs when a subarray has fewer than 2 elements, because any array (or subarray) with fewer than 2 elements can always be regarded as sorted.

Since each instance of quicksort results in two recursions (one for each half of the subarray), the total number of instances doubles with each recursion, hence it is a divide-and-conquer algorithm. However, it is a depth-first recursion, so only one of the two recursions is executing at any one time. Nevertheless, each instance of the function needs to keep track of the lower and upper bounds of the subarray it is processing, as well as the position of the pivot value, because when we return from the first recursion we need to recall those values in order to invoke the second recursion. With recursive function calls we automatically maintain those values through the call stack, but with an iterative solution the function needs to maintain its own stack instead. Since we need to maintain a stack either way, the benefits of iteration are somewhat diminished; we might as well use the one we get for free with recursion.

However, when we invoke the second recursion, we do not need to recall the values we used to invoke it, because when that recursion returns, the two halves of the subarray are sorted and there is nothing left to do but return to the previous instance. Knowing this, we can eliminate the second recursion entirely: once we return from the first recursion, we simply change the lower bound of the subarray and jump back to the beginning of the function. This effectively halves the total number of recursions.

When the final statement of a function is a recursive call to the same function, it is known as a "tail call". Although we can manually optimise functions to eliminate tail calls, compilers that are aware of tail-call recursion can perform the optimisation for us automatically. However, since the point of tail-call optimisation is to reduce the number of recursions, it pays to optimise away those calls that would normally result in the greatest depth of recursion. In the case of quicksort, the deepest recursions will always occur on the subarray that has the most elements. Therefore, if we recurse into the smaller subarray and tail-call the larger subarray, we reduce the depth of recursion accordingly.

Although recursions are expensive, we shouldn't assume that iterative solutions are any less expensive. Whenever we have a choice about the implementation, it pays to do some performance tests. Quite often we will find that the benefits of iteration are not as significant as we might have thought, while the increased complexity makes our code significantly harder to read and maintain. Wherever possible we should always try to express our ideas directly in code. However, if more complex code results in measurable improvements in performance and/or memory consumption, it makes sense to choose that route instead.
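
Returning to the quicksort example: a minimal C sketch of that optimisation, recursing into the smaller partition and looping over the larger one so recursion depth stays at worst logarithmic (the Lomuto partitioning scheme used here is an assumption; the answer above does not specify one):

    /* Sorts a[lo..hi] inclusive. The second recursive call has been
       replaced by updating lo/hi and looping: a hand-eliminated tail
       call. Recursing into the smaller side bounds the stack depth. */
    void quicksort(int a[], int lo, int hi)
    {
        while (lo < hi) {
            int pivot = a[hi];            /* last element as pivot */
            int i = lo;
            for (int j = lo; j < hi; j++) {
                if (a[j] < pivot) {
                    int t = a[i]; a[i] = a[j]; a[j] = t;
                    i++;
                }
            }
            int t = a[i]; a[i] = a[hi]; a[hi] = t;   /* pivot into place */
            if (i - lo < hi - i) {
                quicksort(a, lo, i - 1);  /* recurse: smaller left side */
                lo = i + 1;               /* iterate: larger right side */
            } else {
                quicksort(a, i + 1, hi);  /* recurse: smaller right side */
                hi = i - 1;               /* iterate: larger left side */
            }
        }
    }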

