What are the disadvantages of bubble sort?
The advantage of sorting is that it is quicker and easier to find things when they are organised in some way; the disadvantage is that sorting takes time. In computing terms, the cost of searching a few unsorted items is minimal, but when searching a large number of items (millions or perhaps billions), every search carries an unacceptable cost which can only be minimised by sorting those items first. The cost of that one-off sorting process is far outweighed by the speed with which we can subsequently locate items. Inserting new elements into a sorted set also incurs a cost, but no more than the cost of searching for an element; the only difference is that we search for the insertion point rather than for a specific element.
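For illustration, here is a minimal C sketch of binary search, the kind of lookup that sorted data makes possible (the function and array names are illustrative, not part of the original answer):

#include <stddef.h>

/* Returns the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], size_t n, int key)
{
    size_t lo = 0, hi = n;              /* search the half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return (int)mid;            /* found */
        else if (a[mid] < key)
            lo = mid + 1;               /* discard the lower half */
        else
            hi = mid;                   /* discard the upper half */
    }
    return -1;                          /* not found */
}

Each comparison halves the remaining search range, which is why searching sorted data scales so much better than scanning unsorted data.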
What is used to translate source code instructions into appropriate machine language instructions?
Languages are either "compiled" or "interpreted":
- A compiled language uses a compiler: another program that checks your code and then converts it to the correct machine code for the machine it is intended to run on. You can only run the program after you have compiled it. A compiler can help spot syntax errors and certain semantic errors, reporting them as "compilation errors".
- An interpreted language can be run directly as long as you have another program, called the interpreter, which translates your code into machine code while it is running. This means certain errors will not be caught before runtime (there is no concept of a compilation error), so you won't know until runtime whether certain errors are present in your code.
How are string constants declared in C?
Some strings are constants, others aren't; some constants are strings, others aren't. So these are two unrelated concepts. Examples:
"text" -- constant string
123 -- constant number
char s[40] -- variable string
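To answer the question directly, here is a minimal sketch of the usual ways a string constant is declared in C (the names are illustrative):

const char *greeting = "hello";   /* pointer to a string literal (constant) */
char buffer[40] = "hello";        /* modifiable array initialised from a string constant */
#define MESSAGE "hello"           /* preprocessor-defined string constant */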
What are the different forms of storage in a computer?
In a computer there are many forms of media storage. We can categorize them into three different categories: ROM (such as the BIOS), RAM (memory), and storage (flash drives, hard disk drives, etc.). Keep in mind that RAM retains information only while the power is on.
ROM is usually stored in the BIOS chip. It gives your computer its most basic instructions, telling it to run a system check and boot your operating system.
RAM is used by the computer for just about everything. When your computer executes a program, for example, it loads the program into RAM and runs it from there. RAM is generally much faster than storage devices, but it cannot retain information once power is cut.
A storage medium, as the name implies, is for storing data. It retains its contents without power, and all data, programs, and files must be present on it for the computer to do anything useful.
Where are data structures used?
It would probably be easier to ask where they aren't used. Even a one-dimensional array is a data structure, and thus all strings are data structures. Relatively few programs don't require strings, and even fewer don't require arrays; but even those will use one or more data structures of some kind, because a program without data would be completely useless.
Sum of a geometric series using a C program?
There are different ways of writing this program. The series 1 + x + x^2 + x^3 + ... has a closed-form formula for its sum: for a geometric series with initial value a and common ratio r, the sum of the first n terms is (a * (pow(r, n) - 1)) / (r - 1), where here a = 1 and r = x. Accept the values of x and n from the keyboard (remember to read x as a float), apply the formula, and be careful with the parentheses.
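A minimal sketch of such a program, assuming the series described above with first term a = 1 and common ratio x (compile with -lm for the math library):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float x;
    int n;
    printf("Enter the common ratio x and the number of terms n: ");
    if (scanf("%f %d", &x, &n) != 2)
        return 1;
    /* Sum of a geometric series: (a * (pow(r, n) - 1)) / (r - 1), with a = 1, r = x */
    float sum = (x == 1.0f) ? (float)n
                            : (powf(x, n) - 1.0f) / (x - 1.0f);
    printf("Sum = %f\n", sum);
    return 0;
}

Note the special case: when x is 1 the formula would divide by zero, so the sum is simply n.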
Types of programming language that is machine independent?
FORTRAN (FORmula TRANslation) is the best-known early example of a machine-independent language, meaning the language does not depend on the characteristics of a particular computer.
COBOL (COmmon Business-Oriented Language) is another machine-independent programming language. COBOL was developed in 1959 by the CODASYL committee, sponsored by the US Department of Defense, for business applications.
How can you use an array in the C language?
An array is a contiguous block of memory containing one or more elements of the same type and size. Each element is accessed as a zero-based offset from the start of the array, using the subscript operator [] or pointer arithmetic.
Examples (note: the dynamic allocation uses malloc/free, since new/delete belong to C++, not C):

#include <stdlib.h> // for malloc and free

int a[10];      // allocates memory for 10 integers (e.g., 40 bytes for a 4-byte int)
int x = a[5];   // accesses the 6th element (a[0] is the first element)
int *p = a;     // points to the start of the array
p += 5;         // advances the pointer by 5 * sizeof(int) bytes
*p = 10;        // accesses the 6th element and assigns the value 10
int *d = malloc(*p * sizeof *d); // dynamically allocates an array of 10 ints
free(d);        // releases the dynamic array
Write a program which uses Command Line Arguments?
In C++:

#include <iostream>

int main(int argc, char *argv[])
{
    using namespace std;
    cout << "There are " << argc << " arguments:" << endl;

    // Loop through each argument and print its number and value
    for (int nArg = 0; nArg < argc; nArg++)
        cout << nArg << " " << argv[nArg] << endl;

    return 0;
}
What is the difference between ternary operators and if-else statement?
The ternary (conditional) operator is a single expression, while even the most basic form of if-else contains an if statement and an else statement. The ternary operator only yields a value, but if-else can be used to do many other things, such as printing, assigning values, or executing arbitrary blocks of code.
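A short C sketch of the difference (the variable names are illustrative):

#include <stdio.h>

int main(void)
{
    int a = 3, b = 7, max;

    /* Ternary: a single expression that yields a value */
    max = (a > b) ? a : b;

    /* Equivalent if-else: a statement, which can also perform other actions */
    if (a > b)
        max = a;
    else
        max = b;

    printf("max = %d\n", max);
    return 0;
}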
How do you compile code with a compiler or an interpreter?
There is no generic answer to this question; it depends on your operating system and compiler.
For example in unix: cc -g -o myprog myprog.c
in linux: gcc -g -W -Wall -pedantic -o myprog myprog.c
What is the difference between binary file and executable file?
Windows supports two file formats:
1. text files
2. binary files
In a Windows text file, each line is terminated with a carriage return followed by a line feed character. When a file is read by a C program in text mode, the C library converts each carriage return/line feed pair into a single line feed character. In the case of a binary file, however, the program will see both the carriage return and the line feed characters.
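A minimal C sketch of the two modes (the file name is illustrative):

#include <stdio.h>

int main(void)
{
    /* Text mode: the C library converts each CR/LF pair into a single '\n' */
    FILE *ft = fopen("example.txt", "r");

    /* Binary mode: the program sees every byte, including both CR and LF */
    FILE *fb = fopen("example.txt", "rb");

    if (ft) fclose(ft);
    if (fb) fclose(fb);
    return 0;
}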
How do you draw a flowchart to find whether a given number is odd or even using a counter?
First we write START, then read the number. Next, check whether the number is exactly divisible by 2: if it is, the number is even; otherwise, the number is odd. Finally, print the result and END.
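The decision box at the heart of that flowchart corresponds to a single modulo test, sketched here in C:

#include <stdio.h>

int main(void)
{
    int n;
    printf("Enter a number: ");
    if (scanf("%d", &n) != 1)
        return 1;
    if (n % 2 == 0)                  /* exactly divisible by 2? */
        printf("%d is even\n", n);
    else
        printf("%d is odd\n", n);
    return 0;
}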
Why do you use arrays in the C language?
The idea of an array is to store data for different related items, using a single variable name. The different items are distinguished by a subscript (a number, which may also be a variable or some other expression)
For example, if you want to track scores for four different players in a computer game, you could create an array for those scores.
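For instance, a minimal sketch of the four-player score array described above (the names are illustrative):

#include <stdio.h>

int main(void)
{
    int scores[4] = {0, 0, 0, 0};   /* one element per player */

    scores[2] += 10;                /* the third player gains 10 points */

    for (int i = 0; i < 4; i++)     /* the subscript can itself be a variable */
        printf("Player %d: %d\n", i + 1, scores[i]);
    return 0;
}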
How are instructions converted to machine language?
Compilers or interpreters translate high-level code to machine language. Interpreted languages require a runtime to perform the conversion while the high-level code executes, whereas compiled languages are typically compiled to native machine code, which requires no further translation.

However, some languages compile the high-level code to an intermediate code known as byte code, which is then interpreted to produce the machine code. This is typically done to improve performance, because it is quicker to interpret byte code than it is to interpret high-level code (primarily because the byte code is more compact). Also, the byte code need only be compiled once but can be executed on any machine with a suitable interpreter. Java is a typical example of this: compiled Java byte code can be interpreted by the Java virtual machine on any physical machine.

While this greatly improves performance and portability, the need for a runtime means such a language is not suitable for systems programming (operating system kernels, device drivers, subsystems and so on); it can only be used to develop applications software. And despite the improved performance over interpreting source directly, compiled native machine code programs will always perform better than interpreted byte code. Although native machine code programs are not portable (they are machine-specific), the high-level source can be portable; it simply needs to be recompiled.
How can I write a program in Fortran using the GOTO statement?
The use of GOTOs in programming is generally considered to be bad form, because it very rapidly leads to "spaghetti code" where it is difficult or impossible to follow the program's logic flow.
However, given Fortran's comparatively weak set of flow controls, there are times when a GOTO is unavoidable or actually clearer than using a more-structured layout. A simple example would be a subroutine that checks its arguments for validity and exits immediately if it finds something incompatible. The alternatives would be
(A) Put a GOTO 99999 after each invalid condition is detected, where 99999 labels the subroutine's RETURN statement.
(B) Set flags after each condition, falling through and checking more and more flags until you "naturally" reach the module's RETURN.
An example of (A) would be (using slight variations on Fortran 90 syntax)
subroutine foo(x, y)
    implicit none
    real*4 x, y

    ! Check for negative arguments
    if (x < 0.0) then
        print *, 'Argument X is negative'
        goto 99999
    endif
    if (y < 0.0) then
        print *, 'Argument Y is negative'
        goto 99999
    endif

    ! (Code body goes here ....)

99999 continue
    return
end
What is the difference between DDA and Bresenham's line-drawing algorithm?
DDA uses floating-point numbers and operators such as division and multiplication in its calculations. Bresenham's algorithm uses integers and only addition and subtraction. Because it uses only addition, subtraction and bit shifting (multiplication and division use more resources and processor power), Bresenham's algorithm is faster than DDA at producing the line. If implemented carefully, both algorithms produce essentially the same line in the end.
One note concerning efficiency: fixed-point DDA algorithms are generally superior to Bresenham's algorithm on modern computers. The reason is that Bresenham's algorithm uses a conditional branch in the loop, which results in frequent branch mispredictions in the CPU. Fixed-point DDA also has fewer instructions in the loop body (one bit shift, one increment and one addition, to be exact, in addition to the loop instructions and the actual plotting). As CPU pipelines become deeper, misprediction penalties will become more severe.
Since DDA rounds off the pixel position obtained by multiplication or division, it causes an accumulation of error in succeeding pixels, whereas in Bresenham's line algorithm each new pixel is calculated with a small unit change in one direction, choosing the nearest pixel with a decision variable that satisfies the line equation.
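To make the contrast concrete, here is a minimal integer-only sketch of Bresenham's line algorithm in C; the plot() function that sets a single pixel is hypothetical and would depend on your graphics library:

#include <stdlib.h>

void plot(int x, int y); /* hypothetical: sets one pixel at (x, y) */

/* Draws a line from (x0, y0) to (x1, y1) using only integer
   addition, subtraction and comparison. */
void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                          /* decision variable */

    for (;;) {
        plot(x0, y0);
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }  /* step in y */
    }
}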
What are preprocessor directives?
Preprocessor directives can be used to mark code that is specific to a particular compiler, and thus to a specific machine architecture. In this way, programmers can write cross-platform code in the same source file and let the compiler decide which parts of the source to compile and which to ignore. In reality, the compiler never actually sees the preprocessor directives, since the preprocessor creates new files containing only the code that is to be compiled; hence the preprocessor is often called the precompiler. Normally, the intermediate source files are deleted as they are compiled; however, your development environment should contain an option that allows you to view these files so you can see what the compiler actually works with.
In C and C++, all preprocessor directives have a leading # symbol, such as #include and #define.
#include is by far the most common preprocessor directive. When the precompiler encounters a #include statement, the named header file is essentially copy/pasted in place of the directive. However, all header files should also contain #ifndef header guards to ensure headers are only included once per compilation and these have to be preprocessed as well. Macro definitions are also preprocessed, replacing all instances of the macro symbol with the definition. Macro functions are also inline expanded but since the compiler only sees the expanded code, never the macro itself, the compiler cannot help you debug errant macros. This is why non-trivial macro functions are best avoided.
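A minimal sketch of these directives in a hypothetical header file (the names are illustrative):

/* myheader.h */
#ifndef MYHEADER_H          /* header guard: skip the body if already included */
#define MYHEADER_H

#define BUFFER_SIZE 256     /* simple macro definition */
#define MAX(a, b) (((a) > (b)) ? (a) : (b))  /* macro function: inline-expanded by the preprocessor */

#endif /* MYHEADER_H */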
Is it true or false that a linked list is a collection of nodes?
It is true that a linked list is a collection of nodes. Each node contains a data part and a link part, the latter holding the address of the next node.
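A minimal C sketch of such a node:

struct node {
    int data;              /* the data part */
    struct node *next;     /* the link part: address of the next node */
};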
What arguments can be made against the idea of a single language for all programming domains?
You cannot. The only programming language understood natively by a machine is its own machine code. Every architecture has its own variant of machine code, and for good reason: just as the machine code for a player piano would make little or no sense to a Jacquard loom, the machine code for a mainframe would be impractical for a smartphone. Each machine has a specific purpose and therefore has its own unique set of opcodes to suit that purpose. Although some of those opcodes will be very similar and may have the same value associated with them, they won't necessarily operate in exactly the same way, so the sequence of opcodes is just as important as the opcodes themselves. Thus every machine not only has its own machine code, it also has its own low-level assembly language to produce that machine code.
We could argue that we only need one high-level language, of course, but then that one language would have to be suitable for all types of programming on all types of machine. This is quite simply impossible, because some languages are better suited to certain domains than others. For instance, Java is an incredibly useful language because it is highly portable, but it is only useful for writing application software. It is of no practical use when it comes to writing operating system kernels or low-level drivers, because all Java code is written against a common but ultimately non-existent virtual machine. If it were possible to write an operating system in Java, the extra level of abstraction required to convert the Java byte code to native machine code would result in far from optimal performance; never mind the fact that you need an interpreter to perform the conversion in the first place.
C++ is arguably more powerful than Java because it is general purpose and has zero overhead. Other than assembly, no language is capable of producing more efficient machine code than C++. However, C++ isn't a practical language for coding artificial intelligence systems; for that we need a language capable of rewriting its own source code, learning and adapting itself to new information. C++ is too low-level for that.
The mere fact that we have so many high-level languages is testament to the fact that we cannot have a single language across all programming domains. Languages are evolving all the time, borrowing ideas from each other. If a domain requires multiple paradigms that no single language can accommodate, we can easily interoperate between the languages that provide the specific paradigms we need, possibly creating an entirely new language in the process. That's precisely how languages have evolved into the ones we see today.
What is the difference between a null pointer and a null macro?
I'll assume you're asking about C-type language programming. A null pointer is a pointer that's guaranteed to point to nothing. Its internal representation may be all-bits-zero on a UNIX/Linux system or some other address pattern on another system. Using the NULL macro to set or initialize your pointers will make your programs more readable and portable among systems than using a bare 0.
#include <stdio.h>
char *c = 0;    // initialize to null -- not portable
char *p = NULL; // initialize to NULL as defined in stdio.h -- portable