What are the two WEP key lengths?
A WEP key is entered as a string of numbers and letters and is generally 64 bits or 128 bits long; some devices also support 256-bit keys. To simplify creating and entering these keys, many devices include a Passphrase option: the passphrase is an easy-to-remember word or phrase that is used to automatically generate the key.
Such specialists are usually paid to do work for others. They may have signed a contract stating that the work they get paid to do doesn't belong to them; even if they didn't sign such a contract, it is usually implicit. For more information, read the Wikipedia article entitled "Work for hire".
How are records logically deleted from files?
You might define a boolean field in the record meaning 'this record is logically deleted: yes/no'.
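A minimal sketch in C of that idea, assuming a file of fixed-length records and a made-up record_t layout; the flag simply marks the slot so that readers skip it and a later compaction pass can reclaim the space:

#include <stdio.h>

/* Hypothetical fixed-length record with a logical-deletion flag */
typedef struct {
    int  deleted;        /* 0 = live, 1 = logically deleted */
    char name[32];
} record_t;

/* Mark the nth record as deleted without moving any data.
 * The file must be open in update mode, e.g. fopen(path, "rb+"). */
int delete_record(FILE *fp, long n) {
    record_t rec;
    if (fseek(fp, n * (long)sizeof rec, SEEK_SET) != 0) return -1;
    if (fread(&rec, sizeof rec, 1, fp) != 1) return -1;
    rec.deleted = 1;
    if (fseek(fp, n * (long)sizeof rec, SEEK_SET) != 0) return -1;
    return fwrite(&rec, sizeof rec, 1, fp) == 1 ? 0 : -1;
}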
What is the efficiency of an ASCII character using asynchronous data with two stop bits?
The efficiency of an ASCII character using an asynchronous data transfer protocol with two stop bits is 8 in 11, or roughly 73%.
There is one start bit, eight data bits*, and two stop bits. That is 11 bit cells, in which a payload of 8 bits is possible, hence the 8 in 11.
*Actually, ASCII itself has only 7 data bits; Latin-1 and several other mutually incompatible extensions of ASCII use 8. Which one is in use varies: many European locales use different encodings that agree on the first 128 characters but differ in the second 128, depending on which extra characters the language in question requires.
If the payload was 7 bits, for pure ASCII, then the efficiency with one start bit and two stop bits would be 7 in 10, or 70%.
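The same arithmetic as a small C sketch, treating the framing (start bits, data bits, stop bits) as parameters; the function name is just illustrative:

#include <stdio.h>

/* Efficiency = data bits / total bit cells per character frame */
static double efficiency(int start_bits, int data_bits, int stop_bits) {
    return (double)data_bits / (start_bits + data_bits + stop_bits);
}

int main(void) {
    printf("8 data bits: %.1f%%\n", 100.0 * efficiency(1, 8, 2)); /* 72.7% */
    printf("7 data bits: %.1f%%\n", 100.0 * efficiency(1, 7, 2)); /* 70.0% */
    return 0;
}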
Why do we need dynamic initialization of objects in C++?
All objects must be initialised before we can use them. Initialisation can occur either statically (at compile time) or dynamically (at runtime); however, declaring an object static does not imply static initialisation.
Static initialisation relies on constant expressions. A constant expression is any expression that can be computed at compile time. If a computation involves one or more function calls, those functions must be declared constexpr in order to take advantage of compile-time computation. However, whether the computation actually occurs at compile time is implementation-defined. Consider the following code:
constexpr unsigned f (unsigned n) { return n<2?1:n*f(n-1); }
static unsigned num = f (4); // e.g., num = 4 * 3 * 2 * 1 = 24
int main () {
// ...
}
The above example is a potential candidate for compile-time computation. A good compiler will interpret this code as if it had been written:
static unsigned num = 24;
int main () {
// ...
}
However, if the function f () were not declared constexpr, the compiler would instead interpret the code as if it were actually written as follows:
unsigned f (unsigned n) { return n<2?1:n*f(n-1); }
static unsigned num; // zero-initialised; the real value is only assigned at runtime
int main () {
num = f (4);
// ...
}
In other words, num is dynamically initialised -- at runtime.
Local objects are always initialised dynamically, as are objects allocated on the heap (the free store). Static objects may be statically initialised; however, this depends on the complexity of the object's constructor as well as the compiler's ability to perform compile-time computation.
What is meant by process aging?
Process aging is the mechanism by which the kernel scheduler slowly reduces the execution priority of a process (more specifically, of the threads in a process) when that process or thread stays compute-bound (or CPU-pinned) for more than a short period of time.
This mechanism allows CPU-intensive processes to run at a lower priority than I/O-intensive (and especially interactive) processes. It is a compromise between performance and responsiveness.
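A toy illustration in C of the decay idea described above (not any particular kernel's algorithm; the constants and names are made up): on each scheduler tick, a thread that used its whole timeslice loses a little priority, while a thread that blocked on I/O drifts back toward its base priority.

#include <stdio.h>

#define BASE_PRIO 20   /* illustrative base priority; larger = more favoured */
#define MIN_PRIO   1

typedef struct {
    const char *name;
    int prio;          /* current effective priority */
    int cpu_bound;     /* 1 = used its whole timeslice, 0 = blocked on I/O */
} task_t;

/* One scheduler tick: decay CPU hogs, let I/O-bound tasks recover */
static void age(task_t *t) {
    if (t->cpu_bound) {
        if (t->prio > MIN_PRIO) t->prio--;   /* lose favour while compute-bound */
    } else {
        if (t->prio < BASE_PRIO) t->prio++;  /* recover toward base priority */
    }
}

int main(void) {
    task_t cruncher = { "cruncher", BASE_PRIO, 1 };
    task_t shell    = { "shell",    BASE_PRIO, 0 };
    for (int tick = 0; tick < 5; tick++) {
        age(&cruncher);
        age(&shell);
        printf("tick %d: %s=%d %s=%d\n", tick,
               cruncher.name, cruncher.prio, shell.name, shell.prio);
    }
    return 0;
}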
Calloc allocates a block of memory for an array of elements of a certain size?
Yes. The calloc function allocates a block of memory for an array: you pass it a count of elements and the size of each element, it multiplies the two for you, and it returns a pointer to the block. With malloc you pass a single size in bytes, so you must calculate yourself exactly how many bytes are needed for a given type and the number of elements of that type.
Examples (allocate 100 integers):
int* p = (int*) malloc (sizeof (int) * 100);
int* q = (int*) calloc (100, sizeof (int));
Note also that malloc does not initialise the memory whereas calloc does (the allocated memory is filled with zeros). As such, malloc can be slightly more efficient when you intend to initialise the memory yourself, for example by copying from other memory; there is no point zero-filling memory you are about to overwrite, so long as you do not access it before it is initialised.
What are the uses of index numbers?
Two reasons: They're often quicker to use and they make for a great filing system.
1) Speed - Suppose I had a set of objects {n1, n2, n3, n4, n5, n6, n7, n8, n9, n10, n11, n12, n13, n14, n15, n16, n17, n18} that I wanted you to add up, but I wanted you to show your work. Would you rather write this down:
n1 + n2 + n3 + n4 + n5 + n6 + n7 + n8 + n9 + n10 + n11 + n12 + n13 + n14 + n15 + n16 + n17 + n18,
or an equivalent expression, using indexes, like this:
∑(n_i, i, 1, 18)?
2) Filing - Suppose I had a 6 × 6 matrix, or array, and I wanted to talk about one specific element in it. If I hadn't done the proper filing, I would be fumbling for words in an effort to describe where it is: "Go two down from the top and three over from the left." I'd rather just have them filed and labeled with indexes, like this: n_(3,4).
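In code, the same two benefits show up as array indexing. A brief C sketch (the values are arbitrary):

#include <stdio.h>

int main(void) {
    /* 1) Speed of notation: one loop instead of writing out n1 + n2 + ... + n18 */
    int n[18] = { 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3 };
    int sum = 0;
    for (int i = 0; i < 18; i++)
        sum += n[i];
    printf("sum = %d\n", sum);

    /* 2) Filing: row/column indices name one element of a 6 x 6 matrix directly */
    int m[6][6] = { { 0 } };
    m[2][3] = 42;                      /* "row 3, column 4" using 0-based indices */
    printf("m[2][3] = %d\n", m[2][3]);
    return 0;
}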
What is the difference between equality of objects and equality of the references that refer to them?
References are equal if they both point to the same object; two objects are "meaningfully equal" if equals() says they are. For example (the class name and values here are only illustrative):
public class EqualsTest {
    public static void main(String[] args) {
        String o1 = new String("abc");
        String o2 = new String("abc");
        System.out.println(o1 == o2);      // false because they are two different objects (the references differ)
        System.out.println(o1.equals(o2)); // true because they are meaningfully equal
    }
}
"Meaningfully equal" is defined by overriding the equals method of class Object:
public boolean equals(Object obj) -- decides whether two objects are meaningfully equivalent.
4 bit combinational circuit decrementer using full adders?
Use a regular 4-bit full adder, but tie one of the inputs to 1111, the 2's complement representation of -1. This decrements the other input by 1. Throw away the 5th bit, the carry out.
Example
If 5 is entered:
0101
+ 1111
______
0100 = 4 (the carry out of the top bit is discarded)
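A quick C sketch of the same circuit, modelling each full adder at the bit level: adding 1111 (the two's complement representation of -1) and discarding the final carry decrements the 4-bit input.

#include <stdio.h>

/* One full adder: sum and carry-out from two input bits and a carry-in */
static void full_adder(int a, int b, int cin, int *sum, int *cout) {
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | (a & cin) | (b & cin);
}

/* 4-bit decrementer: ripple-add 1111 (= -1) and throw away the carry out */
static unsigned decrement4(unsigned x) {
    unsigned result = 0;
    int carry = 0;
    for (int i = 0; i < 4; i++) {
        int sum;
        full_adder((x >> i) & 1, 1, carry, &sum, &carry);  /* second input tied to 1 */
        result |= (unsigned)sum << i;
    }
    return result & 0xF;               /* the 5th bit (the carry) is discarded */
}

int main(void) {
    printf("%u\n", decrement4(5));     /* prints 4, matching the worked example */
    return 0;
}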
Why do we have two types of memory?
There are more than two types of memory. RAM has to be very fast in order for the computer to work quickly enough. Magnetic storage and WORM memory can be slower, but they are persistent (they retain their information when the power is shut off).
A command is a set of graphical choices arranged in a grid or in a list?
False. A gallery is a set of graphical choices arranged in a grid or in a list.
The Information Technology course module is designed with more emphasis on the software side, whereas Computer Science includes more of the hardware side, such as networking and chip-level knowledge, although some subjects are the same in both streams.
Answer: Information Technology is the business side of computers, usually dealing with databases, business, and accounting. The CS engineering degree usually deals with how to build microprocessors and how to write a compiler, and is usually more math-intensive than IT. One way to think of it: one deals with information (data), which is IT, and the other deals with the "science" or "how to make it" of computers.
Answer: The exact answer depends heavily on the college or university in question, as each tends to split things slightly differently. As a generalization, there are actually three fields commonly associated with computers:
Information Technology - this sometimes also goes by the names "Information Systems", "Systems Administration", or "Business Information Systems/Administration". It is a practical engineering field, concerned primarily with taking existing hardware and software components and designing a larger system to solve a particular business function. Here you learn about some basic information theory, applied mathematics, and things like network topology/design, database design, and the like. IT concerns itself with taking building blocks such as servers, operating systems, network switches, and software applications and creating a whole system to solve a problem (such as a sales-order handling system).
Computer Science - this is a theoretical field, with emphasis on the mathematical basis that underlies modern programming. That is, computer science is primarily software-oriented: it concerns itself with developing new algorithmic ways to solve a problem, and such algorithms are then actually implemented in software. Here you will learn about the fundamentals of programming languages, a large amount of information theory and algorithm theory (plus linear and discrete mathematics), how to design a software program, and how to run a successful software development team. CS can also encompass items such as compiler and operating system design and implementation. In general, if it concerns actually writing any form of software, whether to solve a practical problem or as part of a more academic research project, CS is the place to be.
Computer Engineering - this is a hardware engineering field; some places treat it as a specialty of Electrical Engineering. This field teaches the design of hardware components, and also the assembly of those components into a larger hardware system. It encompasses information theory, electrical engineering, VLSI design, and digital logic. Here you will be involved with designing CPUs and other integrated circuits to perform specific tasks, and will also learn about very low-level programming (usually the type of programming used to create firmware). In essence, CE involves the creation of hardware devices intended to perform a very specific function (e.g. a modem, a CPU, a DRAM chip, etc.).
What are components in UML?
A component represents a modular part of a software system that encapsulates (hides) its contents, exposes a public interface for the services it provides, and is replaceable within the system by an equivalent or conformant component.
A component defines its behavior in terms of provided and required interfaces (potentially exposed via ports). The internals of a component are hidden and inaccessible other than through its interfaces.
Larger pieces of a system's functionality may be assembled by reusing components as parts in an encompassing component or assembly of components, and wiring together their required and provided interfaces.
Stack example in C using push and pop?
http://www.osix.net/modules/article/?id=275
muzzy writes "Here's some code for you kids. It demonstrates the concept of stack, implemented with a linked list mechanism to support virtually infinitely large stack sizes. Happy reading."
/* Simple Dynamically Allocating Stack Implementation in C
*
* Copyright (C) 2002 Muzzy of Worst Coders
*/
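In case the linked article is no longer available, here is a minimal self-contained sketch of the same idea: a stack backed by a singly linked list, grown on the heap, with push and pop.

#include <stdio.h>
#include <stdlib.h>

/* Linked-list node: each push allocates one of these on the heap */
typedef struct node {
    int value;
    struct node *next;
} node_t;

/* Push: the new node becomes the new top of the stack */
static void push(node_t **top, int value) {
    node_t *n = malloc(sizeof *n);
    if (!n) { perror("malloc"); exit(EXIT_FAILURE); }
    n->value = value;
    n->next = *top;
    *top = n;
}

/* Pop: remove the top node and return its value (caller must check for empty) */
static int pop(node_t **top) {
    node_t *n = *top;
    int value = n->value;
    *top = n->next;
    free(n);
    return value;
}

int main(void) {
    node_t *stack = NULL;
    for (int i = 1; i <= 5; i++)
        push(&stack, i);
    while (stack != NULL)
        printf("%d\n", pop(&stack));   /* prints 5 4 3 2 1 (LIFO order) */
    return 0;
}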
Is there any difference in memory allocation between a class and an object?
A class is the definition of a type -- it consumes no memory in and of itself. An object is an instance of a class, and therefore consumes memory equal to the total size of all its member variables (attributes), including base class members, plus any padding for alignment. If the class declares any virtual methods, a v-table is also created for the class and each object carries a pointer to it, which consumes additional memory.
FAT32 is a widely recognized file system. While not necessarily the most modern or most reliable system, using FAT32 by default allows manufacturers to sell one version of a device for use with many computers and operating systems.
The same cannot be said of more advanced file systems such as NTFS or Reiser, to name just two.
Expert users can always re-format a device with the file system of choice prior to first use.
What is the difference between a forest and a domain in AD?
The term 'domain' is too general to compare directly with the idea of a forest. A domain and its Active Directory can be part of a forest; this takes in domain controllers, child domains, domain functionality levels, replication, the directory service, and so on. The forest concept has been part of Active Directory since Windows 2000 and was extended in the Windows Server 2003 architecture. Suffice to say that interoperability with Windows 2000 and NT domain controllers (which do not support the newer forest features) poses limitations and security issues, hence the four levels of functionality. Some of them are, in my opinion, basically unsound with regard to the security of a forest.
A forest is not to be taken lightly; it requires much research and preparation. The term 'domain' applies across the board in a forest. Moreover, a forest relies on security. The PC on which you start the first installation of a forest is considered the root, and it holds the high-level admin groups such as the Enterprise Admins and Schema Admins. Creating forest trusts (only on the root domain) facilitates communication between domains and directories that share the same SPN (service principal name), which has to be resolved at a remote location in another forest. The configuration also requires IAS, Kerberos, UPN, SPD, SID namespaces... what am I forgetting? Thinking about configuring the forest root on the first PC makes you dizzy with abbreviations, acronyms, protocols, group security, and so on. Comprehensive research and planning are crucial; managing forests and domains is hard enough as it is.
I'd say this basic principle of security could be considered the largest difference between a 'forest' and a 'domain'.