
C Programming

Questions related to the C computer programming language. This ranges all the way from K&R to the most recent ANSI incarnations. C has become one of the most popular languages today, and has been used to write all sorts of things for nearly all of the modern operating systems and applications. It is a good compromise between speed, power, and complexity.

9,649 Questions

C program to arrange 7 numbers in ascending order?

#include <stdio.h>

int main(void)
{
    int i, j, temp, a[7];
    printf("Enter 7 integer numbers:\n");
    for (i = 0; i < 7; i++)
        scanf("%d", &a[i]);
    for (i = 0; i < 7; i++)
    {
        for (j = i + 1; j < 7; j++)
        {
            if (a[i] > a[j]) /* swap when out of order, so smaller values bubble to the front */
            {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
        }
    }
    printf("\n\nThe 7 numbers sorted in ascending order are:\n");
    for (i = 0; i < 7; i++)
        printf("%d\t", a[i]);
    return 0;
}

How do you copy a string without using string functions?

There is almost nothing to explain if you know the C language. Here is the program:

#include <stdio.h>

void copyString(const char *src, char *dest);

int main(void) {
    char str1[100];
    char str2[100];
    printf("Enter the string: ");
    if (fgets(str1, sizeof str1, stdin)) {
        size_t i = 0;
        while (str1[i] && str1[i] != '\n') i++; /* find the trailing newline, if any */
        str1[i] = '\0';                          /* and strip it */
        copyString(str1, str2);
        printf("Copied string: %s\n", str2);
    }
    return 0;
}

void copyString(const char *src, char *dest) {
    while ((*dest++ = *src++));
}

As you can see, the actual copying is done in just one line of code. The while loop stops when it copies a zero value, and all strings in C are null-terminated (they end with a 0x00 byte, which is zero).

Testing:

Enter the string: Hello world, we have copy function for string!

Copied string: Hello world, we have copy function for string!

Note: you should never use gets() in a real application, because it cannot limit the number of characters read and therefore allows a buffer overflow; fgets(), which can, is the safe replacement. You may also get a warning on the while loop if you compile with GCC's -Wall option, which flags questionable constructs. An assignment inside a while condition is usually a mistyped comparison, but not in this situation; the compiler is simply warning the developer to be careful.

Flow chart for addition of two matrices?

For the resulting matrix, just add the corresponding elements from each of the matrices you add. Use coordinates, like "i" and "j", to loop through all the elements in the matrices. For example (for Java; code is similar in C):

for (i = 0; i < height; i++)
    for (j = 0; j < width; j++)
        matrix_c[i][j] = matrix_a[i][j] + matrix_b[i][j];


Write an algorithm for Knapsack Problem?

Answer

The pseudocode listed below solves the 0/1 knapsack problem with dynamic programming (each item may be taken at most once; for the unbounded variant, case 2 would look up the current row, V[i, kp - size[i]], instead of the previous one).

operation ks (n, K)
// n is the total number of items, K is the capacity of the knapsack
{
    for (int h = 0; h <= K; h++)
        V[0, h] = 0; // initialises the bottom row of the table
    for (int i = 1; i <= n; i++) {
        for (int kp = 0; kp <= K; kp++) {
            ans = V[i-1, kp]; // case 1: item i not included
            if (size[i] <= kp) { // if the ith item fits in capacity kp...
                other = val[i] + V[i-1, kp - size[i]]; // ...case 2: item i included
                if (other > ans) // both cases are possible, so take the max
                    ans = other;
            }
            V[i, kp] = ans;
        }
    }
    return V[n, K];
} // end ks

Program to check palindrome using recursion?

#include <stdio.h>

int main(void)
{
    int num, rev = 0, m, r;
    printf("enter any number: ");
    scanf("%d", &num);
    m = num;
    while (num > 0)
    {
        r = num % 10;
        rev = rev * 10 + r;
        num = num / 10;
    }
    if (rev == m)
        printf("the given number is a palindrome\n");
    else
        printf("the given number is not a palindrome\n");
    return 0;
}

Note that this solution reverses the digits iteratively rather than recursively.

Are primitive traits typical of broader or smaller clades?

Traits that evolved early, such as the hole in the hip socket, are called primitive traits.

What is a wildcard for more than one character?

In Windows and UNIX-based systems, when specifying filenames, the asterisk (*) is the wildcard that substitutes for any number of characters, including none; the question mark (?) substitutes for exactly one character.

In SQL databases, the percent sign (%) matches any sequence of zero or more characters, while the underscore (_) matches exactly one character.

What is the algorithm to convert miles to kilometers?

Multiply miles by 1.609344 to get kilometres:

double mile2km (double miles) { return miles * 1.609344; }

Multiply kilometres by 0.62137119 to get miles:

double km2mile (double km) { return km * 0.62137119; }

A c program to generate even numbers up to 50?

#include <stdio.h>

int main(void)
{
    int i;
    for (i = 2; i <= 50; i += 2) /* start at 2 and step by 2 to get the even numbers */
    {
        printf("\t%d", i);
    }
    return 0;
}

What is the default value of register storage class?

A register variable has no defined default value; like any other automatic variable, its value is indeterminate until you assign to it. The register storage class is merely a hint to the compiler that the variable will be used often and that, if it can, it should generate code to keep the variable's value in a CPU register. One side effect is that you cannot take the address of a register variable.

What is keywords?

Keywords are reserved words that have a special meaning to the compiler and cannot be used as identifiers. For example, in C, if, else, while, static and extern are all keywords. (Note that printf is not a keyword; it is a library function.)

Some editors show keywords in a different colour to help the user recognise them.

What is the importance of recursion?

In computer science, complex problems are resolved using an algorithm. An algorithm is a sequence of specific but simple steps that need to be carried out in a procedural manner. Some steps may need to be repeated in which case we will use an iterative loop to repeat those steps. However, sometimes we encounter an individual step that is actually a smaller instance of the same algorithm. Such algorithms are said to be recursive.

An example of a recursive algorithm is the factorial algorithm. The factorial of a value, n, is the product of all values in the range 1 to n where n>0. If n is zero, the factorial is 1, otherwise factorial(n)=n*factorial(n-1). Thus the factorial of any positive value can be expressed in pseudocode as follows:

function factorial (num)
{
    if num = 0 then
        return 1
    else
        return num * factorial (num - 1)
}

Note that any function that calls itself is a recursive function. Whenever we use recursion, it is important that the function has an exit condition otherwise the function would call itself repeatedly, creating an infinite recursion with no end. In this case the exit condition occurs when n is 0 which returns 1 to the previous instance. At this point the recursions begin to "unwind", such that each instance returns its product to the previous instance. Thus if num were 3, we get the following computational steps:

factorial(3) = 3 * factorial(3-1)

factorial(2) = 2 * factorial(2-1)

factorial(1) = 1 * factorial(1-1)

factorial(0) = 1

factorial(1) = 1 * 1 = 1

factorial(2) = 2 * 1 = 2

factorial(3) = 3 * 2 = 6

Note how we cannot calculate 3 * factorial(3-1) until we know what the value of factorial(3-1) actually is. It is the result of factorial(2) but, by the same token, we cannot work out 2 * factorial(2-1) until we know what factorial(2-1) is. We continue these recursions until we reach factorial(0) at which point we can begin working our way back through the recursions and thus complete each calculation in reverse order. Thus factorial(3) becomes 1*1*2*3=6.

Although algorithms that are naturally recursive imply a recursive solution, this isn't always the case in programming. The problem with recursion is that calling any function is an expensive operation -- even when it is the same function. This is because the current function must push the return address and the arguments to the function being called before it can pass control to the function. The function can then pop its arguments off the stack and process them. When the function returns, the return address is popped off the stack and control returned to that address. All of this is done automatically, behind the scenes, in high-level languages.

Compilers can optimise away unnecessary function calls through inline expansion (replacing the function call with the actual code in the function, replacing the formal arguments with the actual arguments from the function call). However, this results in increased code size, so the compiler has to weigh the performance benefits of inline expansion against the decreased performance from the increased code size. With recursive functions, the benefits of inline expansion are quickly outweighed by the code expansion because each recursion must be expanded. Even if inline expansion is deemed beneficial, the compiler will often limit those expansions to a predetermined depth and use a recursive call for all remaining recursions.

Fortunately, many recursive algorithms can be implemented as an iterative loop rather than a recursive loop. This inevitably leads to a more complex algorithm, but is often more efficient than inline expansion. The factorial example shown above is a typical example. First, let's review the recursive algorithm:

function factorial (num)
{
    if num = 0 then
        return 1
    else
        return num * factorial (num - 1)
}

This can be expressed iteratively as follows:

function factorial (num)
{
    let var := 1
    while 1 < num
    {
        var := var * num
        num := num - 1
    }
    return var
}

In this version, we begin by initialising a variable, var, with the value 1. We then initiate an iterative loop if 1 is less than num. Inside the loop, we multiply var by num and assign the result back to var. We then decrement num. If 1 is still less than num then we perform another iteration of the loop. We continue iterating until num is 1 at which point we exit the loop. Finally, we return the value of var, which holds the factorial of the original value of num.

Note that when the original value of num is either 1 or 0 (where 1 would not be less than num), then the loop will not execute and we simply return the value of var.

Although the iterative solution is more complex than the recursive solution and the recursive solution expresses the algorithm more effectively than the iterative solution, the iterative solution is likely to be more efficient because all but one function call has been completely eliminated. Moreover, the implementation is not so complicated that it cannot be inline expanded, which would eliminate all function calls entirely. Only a performance test will tell you whether the iterative solution really is any better.

Not all recursive algorithms can be easily expressed iteratively. Divide-and-conquer algorithms are a case in point. Whereas a factorial is simply a gradual reduction of the same problem, divide-and-conquer uses two or more instances of the same problem. A typical example is the quicksort algorithm.

Quicksort is ideally suited to sorting an array. Given the lower and upper bounds of an array (a subarray), quicksort will sort that subarray. It achieves this by selecting a pivot value from the subarray and then splits the subarray into two subarrays, where values that are less than the pivot value are placed in one subarray and all other values are placed in the other subarray, with the pivot value in between the two. We then sort each of these two subarrays in turn, using exactly the same algorithm. The exit condition occurs when a subarray has fewer than 2 elements because any array (or subarray) with fewer than 2 elements can always be regarded as being a sorted array.

Since each instance of quicksort will result in two recursions (one for each half of the subarray), the total number of instances doubles with each recursion, hence it is a divide-and-conquer algorithm. However, it is a depth-first recursion, so only one of the two recursions is executing upon each recursion. Nevertheless, each instance of the function needs to keep track of the lower and upper bounds of the subarray it is processing, as well as the position of the pivot value. This is because when we return from the first recursion we need to recall those values in order to invoke the second recursion. With recursive function calls we automatically maintain those values through the call stack, but with an iterative solution the function needs to maintain its own stack instead. Since we need to maintain a stack, the benefits of iteration are somewhat diminished; we might as well use the one we get for free with recursion.

However, when we invoke the second recursion, we do not need to recall the values that we used to invoke that recursion, because when that recursion returns the two halves of the subarray are sorted and there's nothing left to do but return to the previous instance. Knowing this, we can eliminate the second recursion entirely: once we return from the first recursion, we can simply change the lower bound of the subarray and jump back to the beginning of the function. This effectively reduces the total number of recursions by half.

When the final statement of a function is a recursive call to the same function it is known as a "tail call". Although we can manually optimise functions to eliminate tail calls, compilers that are aware of tail call recursion can perform the optimisation for us, automatically. However, since the point of tail call optimisation is to reduce the number of recursions, it pays to optimise the call upon those recursions that would normally result in the greatest depth of recursion. In the case of quicksort, the deepest recursions will always occur upon the subarray that has the most elements. Therefore, if we perform recursion upon the smaller subarray and tail call the larger subarray, we reduce the depth of recursion accordingly.

Although recursions are expensive, we shouldn't assume that iterative solutions are any less expensive. Whenever we have a choice about the implementation, it pays to do some performance tests. Quite often we will find that the benefits of iteration are not quite as significant as we might have thought while the increased complexity makes our code significantly harder to read and maintain. Wherever possible we should always try to express our ideas directly in code. However, if more complex code results in measurable improvements in performance and/or memory consumption, it makes sense to choose that route instead.

Give example of a language which uses more than one pass for compiling a program?

FORTRAN, Assembler, to name two. Effectively, any language that allows you to reference symbols before they are declared.

How many simple data types are there?

There are a total of 8 simple or primitive data types in Java. They are:

  • byte
  • short
  • int
  • long
  • float
  • double
  • boolean
  • char

(Note that String is not a primitive type; it is a class.)

Write a c program to accept the range of all prime numbers from 1 to n by using while loop?

#include <stdio.h>

#include <stdbool.h>

#include <math.h> // for sqrt()

bool is_prime (unsigned num) {

if (num < 2) return false; // 0 and 1 are neither prime nor composite

if (!(num % 2)) return num == 2; // 2 is the only even prime

// largest potential factor is the square root of num

unsigned max = (unsigned) sqrt ((double) num) + 1;

// test all odd factors

for (unsigned factor = 3; factor < max; factor += 2) if (!(num % factor)) return false;

return true; // if we get this far, num has no factors and is therefore prime

}

int main (void) {

// test all nums from 0 to 100 inclusive, using a while loop as the question asks

unsigned num = 0;

while (num <= 100) {

if (is_prime (num))

printf ("%u is prime\n", num);

else if (num > 1)

printf ("%u is composite\n", num);

else printf ("%u is neither prime nor composite\n", num);

++num;

}

return 0;

}

Write a program to find gcd using recursive method in java?

for two positive integers:

public static int gcd(int i1, int i2) {

// using Euclid's algorithm, recursively

if (i2 == 0)

return i1;

return gcd(i2, i1 % i2);

}

How do you count the number of words in a string?

The length property only gives the number of characters in a string; to count words you need to split the string on whitespace and count the pieces. For example, in JavaScript:

function countWords(str) {

var trimmed = str.trim();

if (trimmed === "") return 0; // an empty string has no words

return trimmed.split(/\s+/).length;

}

var n = countWords("Hello World!"); // n is 2

Hope this helps.

What are the different tree methodologies in data structure?

The question itself is a bit vague since it doesn't suggest whether you're asking about the types of trees or the operations which can be performed on trees.

A tree is a data structure which stores information in a logical way, typically one that is ideally suited to searching quickly, often by ordering the nodes or items of the tree. Ideally, nodes should also be able to be added to or removed from a tree quickly and efficiently. Often, trees are chosen simply as a means of structuring data in a meaningful way, with no concerns as to their performance.

When trees are chosen as an alternative to a list, the reason is to gain the benefits of rapid insertion, searching and deletion of nodes. Common tree structures for this purpose are the binary tree and the red-black tree. A binary tree has an extremely simple and extremely fast method of searching for data, but it is highly dependent on nodes being added in an order which is somewhat random. If ordered data is added to a binary tree, the depth of the tree will be linear, providing no benefits over a linked list. In addition, removal of nodes can cause a binary tree to have to be rebuilt, as all the nodes beneath the deleted node have to be re-added to the tree, typically under different nodes, commonly causing linear branches of the tree and slowing down application performance.

Red-black trees were designed to address many of the issues of a binary tree. By developing a data structure which intentionally keeps the depth of the tree shallow, high-speed searching and node removal can be achieved, but at a cost to the insertion algorithm, which is designed to "shake things up" a little. Red-black trees are far beyond the scope of this answer.

General trees are used less as a means of providing ideal performance and more as a means of providing structure for data. General trees store information in a way which reflects the data format itself. An example of a general tree is the file system of a disk drive. The root directory contains zero or more children, which each contain zero or more children, which each contain zero or more... you get the point. This structure is also used for things like the abstract syntax tree of a compiler. Source code is parsed into "tokens" which are then structured as nodes of a tree, which are then "walked" when optimising and producing binary code.

There are many more types of trees, many of which are covered by Donald Knuth in extensive, if not insane, detail in "The Art of Computer Programming".

Among the operations you'd perform on the tree are insertion, deletion, searching, walking, reduction, simplification and more.

How can you declare global and local variables in flowcharts?

The simple answer is you don't. The primary purpose of a flowchart is to show the flow of execution through an algorithm, with all primary functions and decisions described in broad, abstract terms.

There is no need to distinguish between local and global variables because that is an implementation detail -- it's far too low-level a concept for algorithm design.

All variables utilised by an algorithm should essentially be declared up front at the start of the flowchart and should therefore be treated as being local to that particular flowchart. It doesn't matter where those variables came from, only that they be initialised accordingly. The decision to make a variable global comes much later, once the interactions between different flowcharts have been established. Even so, a global variable should only ever be considered when the variable represents a truly global concept. In the vast majority of cases, passing arguments into functions is the better option, especially when separate algorithms are intended to run concurrently, because making a global variable thread-safe can seriously impact performance. But these are implementation details, not algorithm-design concerns; they are of importance only to the implementers.

How do you write a program that gives the GCD of three given numbers in C plus plus?

To find the GCD of three numbers, a, b and c, you need to find the GCD of a and b first, such that d = GCD(a, b). Then call GCD(d, c). Although you could simply call GCD(GCD(a, b), c), a more useful method is to use an array and iteratively call the GCD(a, b) function, such that a and b are the first two numbers in the first iteration, the result of which becomes a in the next iteration, while b is the next number. The following program demonstrates this method.

Note that the GCD of two numbers can either be calculated recursively or iteratively. This program includes both options, depending on whether RECURSIVE is defined or not. In a working program you'd use one or the other, but the iterative approach is usually faster because it requires just one function call and no additional stack space.

The program will create 10 random arrays of integers of length 3 to 5 and process each in turn. Note that the more numbers in the array, the more likely the GCD will be 1.

#include<iostream>

#include<cstdlib> // for srand() and rand()

#include<ctime> // for time()

#define RECURSIVE // comment out to use iterative method

#ifdef RECURSIVE

// Returns the GCD of the two given integers (recursive method)

unsigned int gcd(unsigned int a, unsigned int b)

{

if(!a)

return(b);

if(!b)

return(a);

if(a==b)

return(a);

if(~a&1)

{

if(b&1)

return(gcd(a>>1,b));

else

return(gcd(a>>1,b>>1)<<1);

}

if(~b&1)

return(gcd(a,b>>1));

if(a>b)

return(gcd((a-b)>>1,b));

return(gcd((b-a)>>1,a));

}

#else

// Returns the GCD of the two given integers (iterative method)

unsigned int gcd(unsigned int a, unsigned int b)

{

if(!a)

return(b);

if(!b)

return(a);

int c;

for(c=0; ((a|b)&1)==0; ++c)

{

a>>=1;

b>>=1;

}

while((a&1)==0)

a>>=1;

do{

while((b&1)==0)

b>>=1;

if(a>b)

{

unsigned int t=a;

a=b;

b=t;

}

b-=a;

}while(b);

return(a<<c);

}

#endif // RECURSIVE

// Returns the greatest common divisor in the given array

unsigned int gcd(const unsigned int n[], const unsigned int size)

{

if( size==0 )

return( 0 );

if( size==1 )

return( n[0] );

unsigned int hcf=gcd(n[0],n[1]);

for( unsigned int index=2; index<size; ++index )

hcf=gcd(hcf,n[index]);

return(hcf);

}

int main()

{

using std::cout;

using std::endl;

srand((unsigned) time(NULL));

for(unsigned int attempt=0; attempt<10; ++attempt)

{

unsigned int size=rand()%3+3;

unsigned int* num = new unsigned int[size];

unsigned int index=0;

while(index<size)

num[index++]=rand()%100;

unsigned int hcf=gcd(num,size);

cout<<"GCD(";

index=0;

cout<<num[index];

while(++index<size)

cout<<','<<num[index];

cout<<") = "<<hcf<<endl;

delete[]num;

}

cout<<endl;

}

What does the value 'float' mean in C plus plus?

The float data type is a fundamental numeric data type that can represent floating point values. Depending on the implementation, it is normally 4 bytes in size, with a precision around 6 decimal digits.

A float is a primitive data type in C++, representing a real number. On a 32-bit system, a float occupies 4 bytes, representing values in the range 1.2e-38 to 3.4e38. It is often called single in other languages.

A double is also a floating point data type, larger than or equal to a float, but shorter or equal to a long double. On a 32-bit system, a double is 8 bytes long, representing values in the range 2.2e-308 to 1.8e308.

A long double is the longest floating point type and is equal to or greater than a double. On 32-bit systems, a long double is generally 10 bytes long, representing values in the range 3.4e-4932 to 1.1e4932.

Note that in Microsoft's implementation of C++, a long double is the same length as a double, but they are still treated as different types.

Is C is better than JavaScript?

You can't really compare them; it's like apples and oranges. JavaScript is mainly geared towards client-side execution in a web browser. C is an older language and is, in a sense, the parent of most modern programming languages, including JavaScript, but it is used in totally different scenarios: for example, application development on a server or desktop rather than inside a web page.