
Computer Science

Computer Science is the systematic study of algorithmic processes that describe and transform information. It includes the theoretical foundations of information and computation and the practical techniques for applying those foundations to computer systems. Among its many subfields are computer graphics, computer programming, computational complexity theory, and human-computer interaction. Questions about Computer Science, its terminology (such as algorithms and proofs), and its methodologies are encouraged in this category.

1,839 Questions

What is a supermarket checkout system?

All these processes are very helpful for the purposes of dealing with an individual customer's purchases. However, when computers are linked in a network, many new uses are possible.

Now, I am going to draw a different system boundary. The components of this supermarket checkout system are the checkout terminal, the network and the database server. A database server is used to make the data in databases available to other computers on the network, and therefore to users. You met a specialised form of database server in Section 14.2: the FirstClass server. In the following sections, I'll focus on the network first and then look at the database server.

You might be wondering about the users of the system: the customer and the checkout operator. For the time being, we are focusing on the computers and network, rather than thinking about the end users.

How do you count the number of digits in an integer?

Say you have some integer a (assume a ≠ 0; zero is usually counted as having one digit).

First take its absolute value. |a|

Next take its logarithm base 10. log10 |a|

Truncate this value, then add 1. trunc ( log10 |a| ) + 1

You now have the number of digits.
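Here is a minimal C sketch of that formula (the function name digit_count is just illustrative; note that the floating-point logarithm can misbehave near the limits of very large integer types):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Count the decimal digits of an integer via trunc(log10|a|) + 1. */
int digit_count(long a) {
    if (a == 0) return 1;  /* log10(0) is undefined */
    return (int) trunc(log10((double) labs(a))) + 1;
}

int main(void) {
    printf("%d\n", digit_count(0));      /* prints 1 */
    printf("%d\n", digit_count(-4096));  /* prints 4 */
    printf("%d\n", digit_count(123456)); /* prints 6 */
    return 0;
}

(Compile with -lm to link the math library.)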

What are the 4 basic things a CPU does with 2 pieces of info?

There are far more than 4 things - even "basic" things - a computer can do with two pieces of information. For example: add, subtract, multiply, divide, take the remainder of a division (modulo), compare, Boolean AND, Boolean OR, Boolean XOR, and exchange the two values.
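As a quick illustration, here is a small C program exercising each of those operations on two arbitrary values (at the machine level the Boolean operations are bitwise):

#include <stdio.h>

int main(void) {
    int a = 12, b = 5, tmp;
    printf("%d\n", a + b);   /* add: 17 */
    printf("%d\n", a - b);   /* subtract: 7 */
    printf("%d\n", a * b);   /* multiply: 60 */
    printf("%d\n", a / b);   /* divide: 2 */
    printf("%d\n", a % b);   /* remainder (modulo): 2 */
    printf("%d\n", a > b);   /* compare: 1 (true) */
    printf("%d\n", a & b);   /* bitwise AND: 4 */
    printf("%d\n", a | b);   /* bitwise OR: 13 */
    printf("%d\n", a ^ b);   /* bitwise XOR: 9 */
    tmp = a; a = b; b = tmp; /* exchange the two values */
    printf("a=%d b=%d\n", a, b);
    return 0;
}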

What is justified alignment?

There are four "main" types of text alignment.

  1. Left justified - Probably the most common, all text is aligned to the left side of the page.
  2. Right justified - Probably the least common, all text is aligned to the right side of the page.
  3. Center justified - The entire line of text is centered on the page.
  4. "Justified" - Sort of a mix between the other types. Text begins aligned to the left, but lines will "extend" themselves (by increasing the space between words) in order to completely fill the line with text. This type of alignment wants both the left and right sides of text to have straight edges.

What does ICT mean?

Information and Communications Technology.

What are the main advantages and disadvantages of using a layered network architecture?

The following are the advantages of a layered architecture:

Layered architecture increases flexibility, maintainability, and scalability. In a layered architecture we separate the user interface from the business logic, and the business logic from the data access logic. Separation of concerns among these logical layers and components is thus easily achieved.

Multiple applications can reuse the components. For example, if we want a Windows user interface rather than a web browser interface, this can be done easily and quickly by just replacing the UI component; all the other components (business logic, data access and the database) remain the same. Layered architecture allows you to swap and reuse components at will.

Layered architecture enables teams to work on different parts of the application in parallel, with minimal dependencies on other teams.

Layered architecture enables the development of loosely coupled systems.

Different components of the application can be independently deployed, maintained, and updated on different time schedules.

Layered architecture also makes it possible to configure different levels of security for components deployed on different boxes. So layered architecture enables you to secure portions of the application behind the firewall and make other components accessible from the Internet.

Layered architecture also helps you to test the components independently of each other.

The following are the disadvantages of a layered architecture:

There might be a negative impact on the performance as we have the extra overhead of passing through layers instead of calling a component directly.

Development of user-intensive applications can sometimes take longer if the layering prevents the use of user interface components that interact directly with the database.

The use of layers helps to control and encapsulate the complexity of large applications, but adds complexity to simple applications.

Changes to lower level interfaces tend to percolate to higher levels, especially if the relaxed layered approach is used.
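To make the separation concrete, here is a minimal C sketch (all names, such as UserStore and greet_user, are invented for illustration). The business layer depends only on a data-access interface, so the concrete store can be swapped without touching the layers above it:

#include <stdio.h>

/* Data-access layer: an interface the business layer depends on. */
typedef struct {
    const char *(*lookup_user)(int id);
} UserStore;

/* One concrete implementation; a real database version could be
   substituted without changing any code below. */
static const char *memory_lookup(int id) {
    return (id == 42) ? "Alice" : "unknown";
}

/* Business layer: talks only to the UserStore interface. */
static void greet_user(const UserStore *store, int id) {
    printf("Hello, %s!\n", store->lookup_user(id));
}

/* UI layer (here, just main) wires the layers together. */
int main(void) {
    UserStore store = { memory_lookup };
    greet_user(&store, 42);
    return 0;
}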

How many bits in a nibble?

Half a byte (4 bits) is referred to as a nibble.

What would be the best choice of study for a 50-year-old who wants to change his career path and get into the field of Computer Science or IT?

I would suggest you start with a visit to your home county community college. Community colleges are typically more community oriented and open to prospective students in your situation. You can ask to speak to an enrollment specialist in the Admissions Office, who will be able to guide and direct you. The community college - to start with - is also less expensive, with smaller class sizes and a better student-to-professor ratio, which means more individualized attention. Today, college students are a mixed bag in terms of age, with many individuals changing careers because of the job market. You should fare well there.

Are general purpose computers used mostly to control something else?

No.

It is possible to program them for that purpose, but they have many more uses that have nothing to do with control.

Is it true or false that modern OSs are interrupt driven?

True. Modern operating systems, which rely on multitasking, are interrupt-driven: the CPU responds to hardware and software interrupts as they occur, rather than continuously polling devices for events.

What is the size of L1 and L2 cache?

Cache sizes vary by processor, but the L2 cache is normally larger than the L1 cache. If a lookup misses in the L1 cache, the processor searches the L2 cache; if the data is in neither cache, it is fetched from main memory. Typically, L1 is tens of kilobytes per core, while L2 ranges from hundreds of kilobytes to several megabytes.

Threads belonging to the same process share the?

Threads belonging to the same process share the same address space and resources, such as code, global data, and open files. Each thread, however, has its own stack, registers, and program counter.
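A minimal POSIX-threads sketch of this sharing (the names counter and worker are illustrative): both threads update the same global variable because they run in one address space, which is exactly why the mutex is needed.

#include <pthread.h>
#include <stdio.h>

static int counter = 0;  /* shared by every thread in the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void) arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* shared memory needs synchronisation */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  /* 200000: both threads saw the same variable */
    return 0;
}

(Compile with -pthread.)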

Was the first computer HAL?

No. HAL is a fictitious computer that appears in the Arthur C. Clarke novel and Stanley Kubrick film 2001: A Space Odyssey. No such machine exists, nor is one likely to ever exist.

How many nibbles are there in this 0010101111001011?

A nibble is the term given to one half of one byte. Since a byte is eight bits, a nibble is four bits.

The number 0010 1011 1100 1011 contains four nibbles.
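As a quick check, a small C program can split that 16-bit value (0x2BCB in hexadecimal) into its four nibbles by shifting and masking:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t value = 0x2BCB;  /* 0010 1011 1100 1011 in binary */
    /* A 16-bit value holds four 4-bit nibbles; extract each in turn. */
    for (int shift = 12; shift >= 0; shift -= 4)
        printf("%X ", (value >> shift) & 0xF);  /* prints: 2 B C B */
    printf("\n");
    return 0;
}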

Which class is considered the superclass of all classes?

It depends on the language. Java provides the Object superclass (defined in java.lang) from which all other classes are implicitly derived. This allows us to treat any Java object as being "of the same type". This is not necessarily a good thing -- separate types should be kept separate -- however garbage collection would be difficult to implement without a common base class to refer to every type of object.

C++, on the other hand, is not a "pure" object-oriented language; it is a multi-paradigm language (procedural, object-oriented and generic). Programmers are free to decide for themselves how to classify user-defined objects, but the built-in types (such as int and double) are not derived from classes, so there can be no universal superclass. If there were, it would not be possible to write (let alone support) low-level C-style code where there can be no classes of any type. In addition, the standard library types (such as vector and string) have no superclass. In particular, a vector<T> and a vector<U> cannot be regarded as being "of the same type" even if U derives from T. If we really require this functionality, we can easily cater for it in code, but it isn't provided by default (via a superclass) because the language itself does not require it.

Name 4 units in which the memory of a storage device is measured?

As far as my knowledge goes, it is like this

1. The smallest memory measuring unit is the BIT

2. 8 BITS = 1 BYTE

3. 1024 BYTES = 1 KILOBYTE (KB)

4. 1024 KILOBYTES = 1 MEGABYTE (MB)

5. 1024 MEGABYTES = 1 GIGABYTE (GB)

6. 1024 GIGABYTES = 1 TERABYTE (TB)

and so on...

these units are generally used in computers and peripherals!

Thank you!
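A small C sketch of that arithmetic (the 3 GB starting value is arbitrary), stepping down through the units by dividing by 1024 at each stage:

#include <stdio.h>

int main(void) {
    /* Each unit is 1024 times the previous one. */
    unsigned long long bytes = 3ULL * 1024 * 1024 * 1024;  /* 3 GB */
    printf("%llu bytes\n", bytes);
    printf("%llu KB\n", bytes / 1024ULL);
    printf("%llu MB\n", bytes / (1024ULL * 1024));
    printf("%llu GB\n", bytes / (1024ULL * 1024 * 1024));
    return 0;
}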

What are the implications of using signed vs unsigned bytes in a computer system?

In C programming, the signed and unsigned modifiers only apply to integer data types: char, short int, int, long int and long long int. Using these modifiers affects the range of values that each of these types can physically represent; however, those ranges are implementation-defined.

Although we typically regard a byte as being 8 bits in length, this is not guaranteed. In programming, a byte is simply the smallest unit of addressable storage on the system, and the length of a byte is determined by the machine architecture of that system.

In C programming, all data types are measured in chars, thus sizeof (char) is always 1 (byte). However, to determine the number of bits per char we need to examine the CHAR_BIT macro defined in <limits.h>. In most cases this macro is defined with a value of 8 (bits), however we can never assume that is always the case; some systems use a 9-bit byte, others a 16-bit byte. The only thing the standard guarantees is that CHAR_BIT will be no less than 8. Although 7-bit and 6-bit systems do exist, they are non-standard and therefore require non-standard language implementations.

Generally, we don't really need to know the bit-length of an individual char; we simply need to know what range of values the integer types can physically represent (with respect to the current implementation). Again, we look to the macros defined in the implementation's <limits.h> header:

SCHAR_MIN : Minimum value signed char

SCHAR_MAX : Maximum value signed char

UCHAR_MAX : Maximum value unsigned char

CHAR_MIN : Minimum value char

CHAR_MAX : Maximum value char

SHRT_MIN : Minimum value short int

SHRT_MAX : Maximum value short int

USHRT_MAX : Maximum value unsigned short int

INT_MIN : Minimum value int

INT_MAX : Maximum value int

UINT_MAX : Maximum value unsigned int

LONG_MIN : Minimum value long int

LONG_MAX : Maximum value long int

ULONG_MAX : Maximum value unsigned long int

LLONG_MIN : Minimum value long long int

LLONG_MAX : Maximum value long long int

ULLONG_MAX : Maximum value unsigned long long int

Note that there are no macros defining the minimum value of unsigned data types because the minimum value for any unsigned data type is always 0. Also, a "plain" int is always signed, hence there is no SINT_MIN or SINT_MAX macro.
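As a quick check, a small C program can print a few of these limits for the current implementation (the exact output varies by platform):

#include <limits.h>
#include <stdio.h>

int main(void) {
    printf("CHAR_BIT  = %d\n", CHAR_BIT);  /* bits per byte */
    printf("CHAR_MIN  = %d\n", CHAR_MIN);
    printf("CHAR_MAX  = %d\n", CHAR_MAX);
    printf("INT_MIN   = %d\n", INT_MIN);
    printf("INT_MAX   = %d\n", INT_MAX);
    printf("UINT_MAX  = %u\n", UINT_MAX);
    return 0;
}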

The char, signed char and unsigned char are three distinct types. The standard does not specify whether a "plain" char should be signed or unsigned, but its range must match that of either the signed char or unsigned char data types. To determine whether a plain char is signed or not, we can use the following:

bool signed_char = (CHAR_MAX == SCHAR_MAX); // requires <limits.h> and <stdbool.h>

If signed_char is true, a char is equivalent to a signed char, otherwise it is equivalent to an unsigned char.

Negative integer values are represented using either ones-complement or twos-complement notation. Again, this is implementation-defined, although most modern systems use twos-complement. To determine which notation is in use, we simply look at the minimum value of a signed char: assuming an 8-bit char, a ones-complement system defines SCHAR_MIN as -127, while a twos-complement system uses -128.

In ones-complement notation, to flip the sign we simply invert all the bits, thus 01010101 becomes 10101010 (the ones-complement of 01010101). The high-order bit denotes the sign (0 for positive, 1 for negative). However, this then means that we have two distinct representations for the value zero: 00000000 (+0) and 11111111 (-0) but zero is neither positive nor negative. To eliminate this inconsistency, twos-complement adds one to the ones-complement, thus 11111111 + 1 = 00000000 (the overflowing bit is simply ignored). By eliminating the redundant representation for -0, the negative range of values increases by 1.

One of the implications of using signed and unsigned data types is that we must be careful when performing mixed-mode arithmetic as this can result in "narrowing" or loss of information. For instance:

void f (signed char s, unsigned char u) {

    u = (unsigned char) s; // ouch! a negative s wraps around

}

Here we used an explicit cast; however, if s is negative we will lose information, because an unsigned type cannot represent a negative value. Conversely, if s is positive we don't lose information, because the upper range of u exceeds that of s. So, before converting between signed and unsigned representations, it is worth ensuring the value is within the range of the destination type. When converting from unsigned to signed, it's usually a good idea to use a larger signed data type.
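As a sketch of such a range check (the helper name to_unsigned_char is invented for illustration), the conversion is performed only when the value is actually representable:

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

/* Convert a signed char to an unsigned char only when it fits. */
static bool to_unsigned_char(signed char s, unsigned char *out) {
    if (s < 0)
        return false;          /* unsigned char cannot hold a negative value */
    *out = (unsigned char) s;  /* safe: 0..SCHAR_MAX always fits */
    return true;
}

int main(void) {
    unsigned char u;
    if (to_unsigned_char(-5, &u))
        printf("converted: %u\n", u);
    else
        printf("out of range\n");  /* taken here, since -5 is negative */
    return 0;
}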