
In programming, there are two types of repetition: iterative and recursive. Both are a type of loop.

Iterative loops have three general types:

  1. while loop
  2. do loop
  3. for loop

The while loop is the simplest loop. It tests a conditional expression at the start of each iteration. If the expression holds true, the body of the loop executes for one iteration. The loop continues to iterate until the expression does not hold true.

The do loop is similar to a while loop except the conditional expression is tested at the end of each iteration. Thus the body of the loop will always execute at least once. A do loop is also known as a do-while loop.

The for loop is typically used to perform counted iterations. Unlike the while and do loops, a conditional expression is optional (evaluating true when omitted). In addition, a for loop can initialise a variable upon first entry to the loop and can perform an action at the end of each iteration. All clauses of the for loop are optional.

Within the body of any iterative loop, the programmer can also use additional conditional expressions in conjunction with continue, break, return or goto statements. A continue statement skips the remaining portion of the loop body and begins a new iteration (in a for loop, the end-of-iteration action executes before the condition is retested). A break exits the loop. A return exits the function and returns control to the caller. A goto jumps to the named label. Although goto statements are generally frowned upon, a goto is the simplest and most efficient way of breaking out of a nested loop (a loop within a loop).

Recursive loops make use of recursive function calls. That is, a function that calls itself. Recursive functions are typically used in divide-and-conquer algorithms where a large problem can be restated as one or more smaller problems of the same type, until the problem is small enough that it can be processed directly. This results in one or more small solutions that combine to form the larger solution.

To better explain the concept, let us consider the naturally recursive function to calculate the factorial of a number. Factorials determine the number of permutations within a given set. That is, a set of {abc} has 3 elements and therefore 6 permutations: {{abc},{acb},{bac},{bca},{cab},{cba}}. For any given set of n elements, the number of permutations is the product of all positive integers in the closed range [1:n]. Thus the first few factorials are as follows:

f(0) = 1

f(1) = 1

f(2) = 1x2 = 2

f(3) = 1x2x3 = 6

f(4) = 1x2x3x4 = 24

Note that 0! denotes the empty set which has one permutation, the same as a set of 1 element. However, for all values n>0 we can see that the result is the same as saying:

f(n) = f(n-1) x n

In other words, it is a recursive function. The end point is reached when n is less than 2 thus we can program this function as follows:

unsigned f (unsigned n) {
    if (n > 1)
        return f (n - 1) * n;
    return 1;
}

If n were 4, then the first instance of the function will be invoked as f(4). This in turn invokes f(3), f(2) and finally f(1) which returns 1 to f(2), which returns 2 to f(3) which returns 6 to f(4) which finally returns 24 to the caller.

[It should be noted that factorials produce huge numbers and are only practical up to f(12) using 32-bit integers and f(20) using 64-bit integers. If you need larger factorials, you must use a user-defined type capable of handling arbitrarily large integers.]

Unlike an iterative loop which simply cycles from one state to the next never to return to a previous state, a recursive loop maintains state between calls and revisits that state upon each return. This technique is the essence of all backtracking algorithms and is made possible by automatically pushing local variables onto the call stack with each function call and popping those same variables upon return, along with the result of the call. We also refer to this as "winding" and "unwinding" the stack, analogous to winding a clock spring with each recursion and unwinding with each return.

With each call to the function, we increase the depth of recursion. With each return, we decrease the depth of recursion. If we do not provide an end point in our function, the depth of recursion will be such that all stack space will eventually be consumed and our program will crash. However, a recursive function may increase and decrease its depth of recursion several times over during a single calculation and this is the essence of divide-and-conquer algorithms such as the quicksort algorithm. That is, each call of the function divides an unsorted set into two subsets around a single pivot (or several contiguous pivots of the same value). The pivot is in the correct place with respect to the two subsets, but those subsets are still unsorted. Thus quicksort is recursively applied to each of these subsets in turn. Eventually a subset will have fewer than 2 elements which represents the endpoint for that particular recursion. When all subsets have fewer than 2 elements, the entire set is sorted.

Given that the two subsets in a quicksort may not be of equal size, the larger subset will inevitably incur more recursions than the smaller subset. Thus it pays to recurse over the smaller subset and make a tail call for the larger one. A tail call is where the final statement in a function is a recursive call to the same function. Given that state does not need to be maintained for a tail call (because it is the last statement of the function), the same instance of the function can be invoked (with modified parameters) rather than going to the expense of making an otherwise unnecessary function call. Modern compilers generally include tail-call optimisation to take advantage of this.

Although iteration can be more efficient than recursion, modern compilers are capable of inline-expanding recursive functions to a certain degree (or rather depth), particularly if the depth can be calculated at compile time and the increased code size does not adversely affect performance. If the depth cannot be calculated in advance, inline expansion can still be applied up to a predefined depth, with a standard recursive call for any additional recursions required at runtime. That being the case, there is generally no reason to try to create an iterative solution to what would otherwise be a naturally recursive algorithm.


Wiki User

9y ago
