Best Answer

Consider the following factorial algorithm (C#):

uint factorial(uint n) {
    if (n <= 1)
        return 1;
    else
        return factorial(n-1) * n;
}

This has a cyclomatic complexity of two.

---

A couple of notes. First, on correctness: the algorithm shown does handle the factorial of zero, because the test "if (n <= 1) return 1" covers both n = 0 and n = 1, and 0! = 1! = 1. (Writing the base case as "if (n == 0) return 1" would also work, at the cost of one extra recursive call.) Either way, the choice of base-case test doesn't change the cyclomatic complexity.

re: cyclomatic complexity of two

The issue, it would seem, is that the complexity of the recursive call itself is not accounted for (picture, if you will, the whole algorithm unfolding again in place of the recursive call). The heart of the question is really: how do we deal with the cyclomatic complexity of the recursive calls themselves?

To illustrate, any calculation of factorial will consist of a sequence of applications of the recursive rule (call it r1), terminating with the base case (call it b1). Examples are:

sequence b1 describes 0!

sequence r1, b1 describes 1!

sequence r1, r1, b1 describes 2!

sequence r1, r1, r1, b1 describes 3!

sequence r1, r1, r1, r1, b1 describes 4!

Etc..

We can describe the set of all sequences with a regular expression: r1*, b1. In words: zero or more applications of r1, ending with b1. The cyclomatic complexity of that regular expression is 3 (it's helpful to draw the control-flow graph of the regular expression to see that).

Wiki User, 8y ago
Q: What is Cyclomatic complexity of recursive factorial problem?
Related questions

Describe the complexity problem that arises in a multiprocessing system?

partial answer: Cyclomatic complexity (or conditional complexity) is a software metric (measurement). It was developed by Thomas J. McCabe in 1976 and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.


What is recursive algorithm?

An algorithm can be defined as a finite, interpretable set of instructions for carrying out a task, with recognizable end-points for given inputs and provisions for dealing with contingencies. It is a tool for solving a well-defined computational problem. A recursive algorithm is one which calls itself.


Undecidable problem that is recursively enumerable?

The halting problem. It is recursively enumerable (simulate the machine and accept if it halts), but no algorithm can decide, in general, whether a given machine halts.


What is simulation recursion in C?

The factorial f(n) = n * (n-1) * (n-2) * ... * 1. For example, factorial 5 (written as 5!) = 5 x 4 x 3 x 2 x 1 = 120. The function below returns the factorial of the parameter n:

int factorial(int n) {
    if (n <= 1)
        return 1;
    else
        return n * factorial(n-1);
}


When do you choose an iterative or recursive algorithm?

If you cannot find any iterative algorithm for the problem, you have to settle for a recursive one.


What are the problems involved in using the recursive functions?

There are no problems when they are used correctly. The main problem is that inexperienced programmers often assume that a recursive algorithm requires a recursive function, when that is not the case at all; most recursive algorithms can be implemented as an iterative loop. However, even when recursion is required, it is often implemented incorrectly. Consider the recursive algorithm to find the factorial of an unsigned value:

const unsigned factorial (const unsigned n) {
    return n<2 ? 1 : n * factorial (n-1);
}

The problem with this is that we are calculating a constant expression at runtime. E.g., the factorial of 5 is always going to be 120, so it makes more sense to calculate the factorial at compile time rather than at runtime:

constexpr unsigned factorial (const unsigned n) {
    return n<2 ? 1 : n * factorial (n-1);
}

So, when we write the following:

constexpr unsigned x = factorial (5);

the compiler will automatically generate the following equivalent code for us:

constexpr unsigned x = 120;

In other words, the recursive function is completely optimised away. That's fine when we're dealing with constant expressions, but when we deal with variables we have a problem, because there are no variables at compile time:

unsigned y = rand (); /* a random value */
unsigned x = factorial (y);

Now we cannot take advantage of compile-time recursion; we must sacrifice runtime performance in order to calculate the factorial of y, whatever y happens to be at runtime. Thus if y were 5, we would incur the following series of function calls:

factorial (5)
factorial (4)
factorial (3)
factorial (2)
factorial (1)

That's 5 function calls in total. Once we reach the exit condition (n<2), the functions unwind to calculate the result:

factorial (1) = 1
factorial (2) = 2*factorial(1) = 2*1 = 2
factorial (3) = 3*factorial(2) = 3*2 = 6
factorial (4) = 4*factorial(3) = 4*6 = 24
factorial (5) = 5*factorial(4) = 5*24 = 120

That's a lot of work that has to be done at runtime.

Now let's look at the iterative solution:

unsigned factorial (unsigned n) {
    unsigned result = 1;
    while (1<n) result *= n--;
    return result;
}

Now we only incur one function call at runtime. We can still use the constant-expression version for compile-time evaluation, but for variables we gain better performance at runtime. Moreover, given the simplicity of the function, the compiler can still optimise away the function call entirely through inline expansion. Thus:

unsigned y = rand (); /* a random value */
const unsigned x = factorial (y);

becomes the equivalent of:

unsigned y = rand (); /* a random value */
unsigned t1 = y;
unsigned t2 = 1;
while (1<t1) t2 *= t1--;
const unsigned x = t2;

We still have to perform the calculation, of course, but we no longer incur the penalty of making any unnecessary function calls. At worst there will be just one function call, and only when expanding the function is not possible (which is unlikely in this case).

So the only time we really need recursion at runtime is when the benefit of inline expansion becomes counter-productive, resulting in runtime code that is overly complicated. Typical examples are divide-and-conquer algorithms such as quicksort, where two recursions are necessary. However, when the final call of a function is a recursion, we can take advantage of tail-call recursion to eliminate that recursion; we simply modify the variables and re-invoke the current instance of the function rather than invoking a new instance. We can only do this when the local variables are not required after the recursion returns.


What does an exclamation mark mean after a number in a math problem?

An exclamation mark (!) after a number means factorial. So 3! is "3 factorial", which means 1x2x3 = 6. Likewise 2! or 2 factorial means 1x2 = 2, and 4! or 4 factorial means 1x2x3x4 = 24.


Can you calculate the complexity of a problem using computational techniques?

You can explore the computational complexity of a problem using free online tools; websites such as Pages and Shodor offer resources for this kind of analysis.


How do you write an algorithm to find the factorial of a number?

Factorial(n) = n * Factorial(n-1) for all positive n, given Factorial(1) = Factorial(0) = 1. Pseudo-code:

Function: factorial, f
Argument: positive number, n
IF n <= 1 THEN
    RETURN 1
ELSE
    RETURN n * f(n-1)
END IF


What is the time complexity of n queens problem?

Time complexity for n-queens is O(n!).


What is a sentence using the word complexity?

Due to the complexity of the situation, the policemen weren't able to come up with a definite answer to their problem.


Why recursive and non-recursive delivers wrong result?

Recursive and non-recursive (also known as iterative) are simply two different approaches to solving a problem. Properly implemented, they should give the same result. If they do not, then something is wrong, and you should spend the time to figure out why. This is a generic answer, because the topic is too broad to cover here; there are many different reasons that a particular algorithm may fail.