What are the major disadvantages of memory allocation schemes?
How do you remove recursion using stacks?
Using a stack won't remove recursion; it merely re-implements the recursion iteratively. In some cases we don't actually need a stack to implement a recursive algorithm, in which case an iterative implementation will typically perform better with little to no cost in additional memory. But if we require a stack in order to implement recursion iteratively, then we pay a cost in additional memory consumption (the "built-in" call stack is fixed-size and exists whether we use it or not). There may also be a performance cost if we cannot determine in advance how much additional memory we need.
As an example, consider the recursive quicksort algorithm:
template<typename T>
using iter = typename std::vector<T>::iterator;
template<typename T>
void quicksort (iter<T> begin, iter<T> end) {
    if (begin < end) {
        iter<T> pivot = partition (begin, end);
        if (begin < pivot) quicksort<T> (begin, pivot - 1);
        if (pivot < end) quicksort<T> (pivot + 1, end);
    } // end if
}
(Note that the iterator alias requires the typename keyword, and because iter<T> is a non-deduced context the recursive calls must name the template argument explicitly.)
Note that the partition algorithm is not shown for the sake of brevity. However, it is best implemented as a separate function as its local variables play no part in the recursion.
Being a divide-and-conquer algorithm, this algorithm requires a stack for back-tracking. Here is the iterative equivalent using a stack:
template<typename T>
using iter = typename std::vector<T>::iterator;
template<typename T>
void quicksort (iter<T> begin, iter<T> end) {
    if (begin < end) {
        std::stack<std::pair<iter<T>, iter<T>>> s {}; // requires <stack> and <utility>
        s.push ({begin, end});
        while (!s.empty ()) {
            begin = s.top ().first;   // first and second are data members, not functions
            end = s.top ().second;
            s.pop ();
            iter<T> pivot = partition (begin, end);
            if (pivot + 1 < end) s.push ({pivot + 1, end});
            if (begin + 1 < pivot) s.push ({begin, pivot - 1});
        } // end while
    } // end if
}
Note that the order in which we push the pairs at the end of the while loop is the reverse of the order in which we wish them to be processed. Correctness does not depend on this order, but it keeps the iterative algorithm consistent with the recursive one: a depth-first traversal from left to right.
This implementation is naive because the underlying container may allocate new memory as pair objects are pushed onto the stack and release that memory as they are popped. Allocating and releasing memory on demand like this is inefficient, so it's unlikely that this version will perform any better than the recursive algorithm.
However, the quicksort algorithm guarantees that there can never be more elements on the stack than there are elements in the initial range, so we can improve performance significantly by reserving sufficient memory in advance:
template<typename T>
using iter = typename std::vector<T>::iterator;
template<typename T>
void quicksort (iter<T> begin, iter<T> end) {
    if (begin < end) {
        std::vector<std::pair<iter<T>, iter<T>>> v {};
        v.reserve (end - begin);
        v.emplace_back (begin, end);
        while (!v.empty ()) {
            begin = v.back ().first;
            end = v.back ().second;
            v.pop_back ();
            iter<T> pivot = partition (begin, end);
            if (pivot + 1 < end) v.emplace_back (pivot + 1, end);
            if (begin + 1 < pivot) v.emplace_back (begin, pivot - 1);
        } // end while
    } // end if
}
Note that in this implementation we use a vector rather than a stack; however, all pushes and pops (implemented as emplace_back and pop_back operations) occur at the back of the vector where the unused capacity is, which is precisely how an efficient stack should be implemented. As a result, this version will perform significantly better than the previous version and should perform at least as well as the recursive implementation, if not better. The only significant cost is that of reserving memory in the vector.
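Putting the above together as a runnable sketch: the Lomuto-style partition below is an assumption, since the original omits its partition implementation, and the name partition_range is made up to avoid clashing with std::partition. The closed-range [begin, end] convention matches the code above.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

using iter = std::vector<int>::iterator;

// Lomuto-style partition over the closed range [begin, end] (assumption:
// the original answer omits its partition implementation).
// The last element is used as the pivot value.
iter partition_range (iter begin, iter end) {
    int pivot = *end;
    iter store = begin;
    for (iter it = begin; it != end; ++it)
        if (*it < pivot) std::iter_swap (it, store++);
    std::iter_swap (store, end);
    return store;  // iterator to the pivot's final position
}

// Iterative quicksort over the closed range [begin, end], using a vector
// as an explicit stack with memory reserved in advance.
void quicksort (iter begin, iter end) {
    if (begin < end) {
        std::vector<std::pair<iter, iter>> v;
        v.reserve (end - begin);
        v.emplace_back (begin, end);
        while (!v.empty ()) {
            begin = v.back ().first;
            end = v.back ().second;
            v.pop_back ();
            iter pivot = partition_range (begin, end);
            if (pivot + 1 < end) v.emplace_back (pivot + 1, end);
            if (begin + 1 < pivot) v.emplace_back (begin, pivot - 1);
        }
    }
}
```

Note the guards `pivot + 1 < end` and `begin + 1 < pivot` skip subranges of one element and also avoid computing `pivot - 1` when the pivot lands at the very start of the range.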
In programming, a node is an object that refers to some data, typically via a pointer. For instance, if we allocate objects non-contiguously we can treat them as if they were allocated contiguously by storing pointers to those objects in a contiguous array:
foo* a[100];
for (int i=0; i<100; ++i) a[i] = (foo*) malloc (sizeof (foo));
In the above example we have an array of foo nodes. The foo objects themselves are individually allocated on the heap but they need not be allocated contiguously. But this does not matter because an array is always allocated contiguously so the order of the objects is determined by the order of the nodes in the array. If we change the order of the nodes, we change the order of the objects, but the objects don't actually move. This is useful when the objects are large or complex because it means we can change their order much more efficiently just by moving the nodes.
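As a sketch (using a hypothetical foo type), we can change the order of the objects simply by sorting the array of nodes; the foo objects themselves never move:

```cpp
#include <algorithm>
#include <cassert>

struct foo { int value; };  // hypothetical data type (assumption)

// Reorder an array of foo "nodes" (pointers) by value, without moving
// or copying the underlying foo objects.
inline void sort_nodes (foo** a, int n) {
    std::sort (a, a + n, [](const foo* x, const foo* y) {
        return x->value < y->value;
    });
}
```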
As well as pointers, a node can also be defined as a structure:
struct node {
foo* data;
node* next;
};
Here, the node still points to an object but also points to another node, the next node in the sequence. In this case we don't need an array, we simply need to keep track of the first node in the sequence. This means we are no longer restricted by the array length; we can have as many or as few nodes as we require. Changing the order of the nodes is simply a matter of changing which data element each node points to. We can also insert and extract elements from the sequence by changing which node each node points to.
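As a minimal sketch (again assuming a hypothetical foo type), inserting a node into a singly-linked sequence is just a matter of rewiring two pointers:

```cpp
#include <cassert>

struct foo { int value; };  // hypothetical data type (assumption)

struct node {
    foo* data;
    node* next;
};

// Insert a new node holding d immediately after pos; returns the new node.
// Only pointers change -- no foo object is moved or copied.
inline node* insert_after (node* pos, foo* d) {
    node* n = new node {d, pos->next};
    pos->next = n;
    return n;
}
```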
More complex data structures require additional node pointers. For instance, a bi-directional list requires two node pointers, one to the next element in the sequence and one to the previous element:
struct node {
foo* data;
node* next;
node* prev;
};
A binary tree node also has two node pointers:
struct node {
foo* data;
node* left;
node* right;
};
A binary tree node might also point to its parent node:
struct node {
foo* data;
node* parent;
node* left;
node* right;
};
A ternary node has at least three node pointers:
struct node {
foo* data;
node* left;
node* middle;
node* right;
};
Nodes make it possible to construct highly-complex data structures regardless of where the data is physically allocated and the order in which it is allocated. The point is that we don't need to move any data around because the nodes themselves define the data structure. The nodes can be thought of as being metadata (data about data).
Any node that allows traversal to any other node in the structure can be used to keep track of the structure, however we typically use the node that provides the most efficient traversal. In a tree structure, that is always the root node. In a list, it is the head node.
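For instance, a sketch of depth-first (in-order) traversal of a binary tree starting from its root node, using the node structure above with a hypothetical foo type:

```cpp
#include <cassert>
#include <vector>

struct foo { int value; };  // hypothetical data type (assumption)

struct node {
    foo* data;
    node* left;
    node* right;
};

// In-order traversal from the root: left subtree, then the node itself,
// then the right subtree.
inline void in_order (const node* n, std::vector<int>& out) {
    if (!n) return;
    in_order (n->left, out);
    out.push_back (n->data->value);
    in_order (n->right, out);
}
```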
How does a linked list differ from stack and queue?
A linked list is a data structure that supports sequential traversal (bi-directional in the case of a doubly-linked list) with constant-time access to the head of the list (and to the tail, given a tail pointer). Elements may be inserted and extracted anywhere in the list.
A stack is a last-in-first-out (LIFO) structure where elements are pushed and popped only at the head (top) of the stack. A queue, by contrast, is a first-in-first-out (FIFO) structure where elements are pushed at the back and popped from the front.
A stack is typically implemented using an array (either fixed-length or variable-length) where elements are pushed and popped at the end of the array (keeping track of where the last-used element is). However, a stack can also be implemented as a singly-linked list, pushing and popping at the head of the list.
Stacks generally do not allow traversal of the elements.
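A sketch of the array-backed approach, using std::vector as the underlying variable-length array; the vector's size tracks where the last-used element is:

```cpp
#include <cassert>
#include <vector>

// A minimal stack over a variable-length array: elements are pushed and
// popped only at the end, where the unused capacity is.
template<typename T>
class array_stack {
    std::vector<T> v;
public:
    void push (const T& x)  { v.push_back (x); }
    void pop ()             { v.pop_back (); }
    const T& top () const   { return v.back (); }
    bool empty () const     { return v.empty (); }
};
```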
Inserting or extracting at the beginning of a singly-linked list, or at either end of a doubly-linked list, takes constant time. Inserting or extracting in the middle of a list is constant time, O(1), when the insertion or extraction point is already known in advance, but finding that point by traversal has linear complexity, O(n), in the worst case.
Scanner scan = new Scanner(System.in); // A Scanner object that reads from the keyboard (System.in);
                                       // you must import java.util.Scanner to use this
System.out.print("Input Integer: ");
int i = scan.nextInt();        // Reads the next int that the user inputs
System.out.print("Input Double: ");
double d = scan.nextDouble();  // Reads the next double that the user inputs
What is the difference between a function pointer and a pointer to a function?
The two terms mean the same thing: a function pointer, or pointer to a function, is a variable that stores the address (entry point) of a function, and calling through that pointer invokes the function.
A pointer to a function might be declared as "int (*pf)(int, int);", and made to point at a suitable function with "pf = &func;" (the & is optional).
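A minimal sketch (func is a hypothetical function used for illustration):

```cpp
#include <cassert>

int func (int a, int b) { return a + b; }  // hypothetical function (assumption)

inline int call_through_pointer (int x, int y) {
    int (*pf)(int, int);  // pf is a pointer to a function taking two ints
    pf = &func;           // point it at func (the & is optional)
    return pf (x, y);     // call through the pointer invokes func
}
```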
How do you calculate size of an unsized array?
In Java, every array has a definite size. Even if you did not initialize an array to a particular size (say the array was passed into a method), some other part of the program did. Because of this, the length of an array can always be read via its .length field.
Example:
int[] arr = new int[5];
System.out.print(arr.length); //5
public void k(int[] arr)
{
System.out.print(arr.length); //arr will always have a finite size
}
How do you describe a garbage value in C?
A garbage value is the indeterminate value of an uninitialised variable: the variable simply holds whatever bits happened to be in that memory.
/* Example of a garbage value */
#include <stdio.h>
int main (void)
{
int i; /* never initialised */
printf ("%d\n", i); /* prints an unpredictable (garbage) value */
return 0;
}
yywrap() is a function used by lex/yacc-generated scanners to handle end-of-file processing. When the scanner reaches the end of its input it calls yywrap(); returning 1 tells it there is no more input, while returning 0 tells it to continue scanning (for example, after switching to a new input file).
Which data type permits sharing memory among different types of data?
Types cannot share memory; only instances of a type (objects or variables) can share memory. This is achieved through the use of shared resource handles or "smart pointers". We can also use C-style pointer variables to share memory, however they are problematic because as soon as the shared memory is released, all pointers to it become invalid but there's no way to tell if a pointer is valid or not. Even without using shared memory, there is no notion of "ownership" as far as pointers are concerned.
In the absence of resource handles, the best strategy is to allocate memory in an outer scope and then create shared pointers within an inner scope. The shared pointers MUST NOT release the memory they refer to. When we return to the outer scope, the shared references will have fallen from scope, thus we can safely release the memory using the same pointer variable we used to allocate the memory. In this way we get some notion of ownership.
With resource handles we don't have this problem because the shared memory cannot be released accidentally until the very last reference falls from scope. The last reference need not be the one we originally used to allocate the memory.
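A sketch using std::shared_ptr as the resource handle: the memory survives until the last reference falls from scope, which need not be the original one.

```cpp
#include <cassert>
#include <memory>

// Returns a shared reference created in an inner scope; the original
// (inner) reference has already fallen from scope by the time we return,
// yet the memory is still valid.
inline std::shared_ptr<int> make_shared_value () {
    std::shared_ptr<int> outer;
    {
        auto inner = std::make_shared<int> (42);  // allocate
        outer = inner;                            // share ownership
        assert (inner.use_count () == 2);
    }  // inner falls from scope here; the memory is NOT released
    return outer;
}
```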
Sharing memory between threads is best kept to an absolute minimum. Although we can use locks to prevent data races, we achieve better performance with lock-free code and there's less risk of deadlock (where two threads are waiting on each other to release the memory they've locked). Shared memory is often unavoidable, but don't use it just because you can, only use it when it is appropriate to do so; there are always alternatives.
What is an A-level certificate in DOEACC?
I am Bikas Chandra Maity. I have lost all my clearance mark sheets. How can I obtain duplicate mark sheets?
Sylleptic branches grow out from lateral buds during the same growing season in which the buds are formed.
// Reads ten numbers, computes their average, then prints each number's
// deviation from that average.
#include <iostream>
#include <cstdlib> // for system()
using std::cin;
using std::cout;
using std::endl;
int main()
{
int arrSize = 10;
double arr[10] = {0.0};
cout << endl << "Enter " << arrSize << " numbers:";
for (int i = 0; i < arrSize; i++)
{
cout << endl << "Enter " << (i + 1) << " number :";
cin >> arr[i];
}
double avr = 0.0;
for (int j = 0; j < arrSize; j++)
{
avr += arr[j];
}
avr /= arrSize;
for (int k = 0; k < arrSize; k++)
{
cout << endl << (k + 1) << " result: " << (arr[k] - avr);
}
cout << endl;
system("PAUSE"); // Windows-only; keeps the console window open
return 0;
}
Why does an investigator change the value of a variable parameter during an investigation?
Which factor does the investigator change during an investigation?
Arrays are the cheapest data structures, with little overhead compared to other data structures. There are four basic types of array:
1. Static array - array size known at compile time.
// example code snippet:
char array[10];
2. Dynamic array - array size known at the beginning of execution time.
//example code snippet:
char *array;
int num_elements;
// Dynamic memory allocation using the malloc library call, which in turn
// uses the "brk" or "sbrk" system call under Unix
array = (char *)malloc (num_elements * sizeof(char));
3. Stretchable array - array size can be stretched (modified) during execution time.
//example code snippet.
char *array;
int num_elements;
array = (char *)malloc (num_elements * sizeof(char));
// Modify the memory allocation during execution time using the realloc library call
array = (char *) realloc (array, new_size * sizeof(char)); // new_size is the new element count
4. Tunable array - array size known at link time.
Suppose you have an object file called sample.o (the compiled sample.c) and a header file called sample.h.
ISVs (independent software vendors) usually won't provide the source code of proprietary software; instead they provide the object file and a corresponding header file containing symbolic literals (#define constants). You can then change the symbolic literals in the header file, and the change takes effect when you compile both files together.
//example snippet code:
In sample.c
char arr[MAX_SIZE];
In sample.h
#define MAX_SIZE 100 /* for example */
If you now change the symbolic literal MAX_SIZE in the header file sample.h, the size of the array arr changes when the program is next built.
Basically, static arrays fall under the category of "static memory allocation" and all the rest fall under "dynamic memory allocation".
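A sketch of the stretchable pattern above in full (the names are illustrative): grow the array with realloc, which preserves the existing contents.

```cpp
#include <cassert>
#include <cstdlib>

// Grow a heap-allocated char array to new_size elements, preserving the
// existing contents (a sketch of the realloc pattern described above).
inline char* grow_array (char* array, size_t new_size) {
    char* tmp = (char*) realloc (array, new_size * sizeof (char));
    // If realloc fails it returns NULL and the old block is still valid,
    // so assign through a temporary rather than overwriting array directly.
    return tmp ? tmp : array;
}
```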
Explain the merits of using a deque to implement a stack in data structures.
What are the five advantages and five disadvantages of using global variables?
The only case where a global variable is advantageous is when the global is constant and represents a truly global concept. The value of PI, for instance, is a truly global concept with a constant value. The exchange rate between dollars and Sterling is also a truly global concept, but it is non-constant and should not be declared as a global.
Global (non-constant) variables are problematic because it can be difficult to track down all the places that interact with a global, especially if the global has external linkage. Even if that is not a problem in itself, a global cannot cater for both single-threaded and multi-threaded applications. If intended purely for single-threaded applications then there will be no synchronisation mechanism, thus it cannot be used in a multi-threaded application as this could lead to a data race. Conversely, if intended purely for multi-threaded applications, it will have a synchronisation mechanism but this could lead to performance issues on single-threaded applications where synchronisation is not required.
In trivial applications, global variables can often be useful because the scope of a global can be well-defined. But in non-trivial applications, it becomes more difficult to limit the scope of a global. One way to limit the scope is to declare the global variable static, thus limiting its scope to the translation unit in which it is declared (static global variables cannot have external linkage). However, by limiting the scope, the variable is no longer a truly global concept.
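As a sketch, a truly global constant can safely live at namespace scope, while a mutable global can at least have its scope limited with static (internal linkage); the names here are illustrative assumptions:

```cpp
#include <cassert>

// A truly global concept with a constant value: safe as a global.
const double PI = 3.141592653589793;

// A non-constant global, given internal linkage with static so that it is
// visible only within this translation unit.
static int counter = 0;

int bump () { return ++counter; }  // the only access point to counter
```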