
A process is a program in execution; it needs resources such as CPU time, memory, files, and I/O devices to accomplish its task.

Threads are lightweight processes.


Continue Learning about Engineering

Explain about threads in computer networks?

A thread is the sequence of instructions followed by a CPU, and is an independently dispatchable unit in the run queue. A process can start and manage multiple threads, each managing an aspect of the overall processing. The operating system can schedule the threads independently, allowing them CPU time if they are ready, or blocking them if they are waiting on something, such as an I/O completion. In a network process, such as a web server, there can be many things going on concurrently. Threads are an ideal solution to the problem of managing all of these things, because the main process does not need to poll each sub-process (thread) to see if it needs or is ready to do work.
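As a rough sketch of that idea (not part of the original answer; the port number and handler are made up for illustration, and error checking is omitted), a thread-per-connection server using POSIX threads might look like this:

    /* Thread-per-connection sketch: the main thread accepts connections and
     * hands each one to a worker thread, so a slow client never blocks the
     * accept loop.  Compile with: cc server.c -lpthread */
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static void *handle_client(void *arg) {
        int fd = *(int *)arg;
        free(arg);
        const char *msg = "hello from a worker thread\n";
        write(fd, msg, strlen(msg));   /* blocking I/O stalls only this thread */
        close(fd);
        return NULL;
    }

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);           /* illustrative port */
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 16);

        for (;;) {
            int *client = malloc(sizeof *client);
            *client = accept(srv, NULL, NULL);
            pthread_t tid;
            pthread_create(&tid, NULL, handle_client, client);
            pthread_detach(tid);   /* no join needed; the thread cleans up itself */
        }
    }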


What are the circumstances of user level thread better than kernel level thread?

There are two distinct models of thread control: user-level threads and kernel-level threads. The thread library that implements user-level threads usually runs on top of the system in user mode, so these threads are invisible to the operating system. User-level threads have extremely low overhead and can achieve high performance in computation. However, if one thread makes a blocking system call such as read(), the entire process blocks. Also, because scheduling is controlled by the thread runtime system, some threads may gain exclusive access to the CPU and prevent other threads from ever obtaining it. Finally, access to multiple processors is not guaranteed, since the operating system is not aware that these threads exist. Kernel-level threads, on the other hand, do allow access to multiple processors, but their computing performance is lower than that of user-level threads because of the extra load on the system; synchronization and resource sharing among kernel-level threads are still less expensive than in a multiple-process model, but more expensive than among user-level threads. Thus, user-level threads are better than kernel-level threads when the work is compute-bound, makes no blocking system calls, and does not need to run on multiple processors.
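A small sketch of the blocking-call point above, assuming a 1:1 kernel-level threading model such as Linux pthreads: the thread blocked in read() stalls on its own while the other thread keeps getting CPU time, which a purely user-level thread library could not guarantee.

    /* With kernel-level (1:1) threads, a blocking read() stalls only the
     * calling thread; the counting loop keeps running.  Under a purely
     * user-level library, the whole process would block in the read().
     * Compile with: cc blocking_demo.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *reader(void *arg) {
        (void)arg;
        char buf[64];
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);  /* blocking system call */
        printf("reader got %zd bytes\n", n);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, reader, NULL);

        for (int i = 0; i < 5; i++) {        /* still scheduled while reader blocks */
            printf("worker tick %d\n", i);
            sleep(1);
        }
        pthread_join(t, NULL);               /* wait for the reader to finish */
        return 0;
    }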


What characteristics of the motherboard architecture determine the amount of memory that a CPU can address?

Primarily the width of the address bus and the chipset (memory controller), which set the upper limit on how much memory the CPU can address; the amount of installed RAM and the number of memory slots then determine how much of that address space is actually populated.


What does computer process contain?

A process contains the program's code, its data (globals, heap, and stack), and the execution context the operating system keeps for it, such as the program counter, registers, and open files. It is executed in the CPU, or central processing unit.


What is the difference between a Parent Process and Foreground Process?

A parent process is a process that has spawned one or more child processes, such that it takes "ownership" of those processes. A foreground process typically means a process that has a user interface, as opposed to a background process, which typically does not. Most background processes are services, while most foreground processes are applications.

It is important to note that the term "process" has a specific meaning in a multi-processing environment. A process is a computer program that has one or more threads of execution. A thread is the machine-level representation of a task, and a process may instantiate as many tasks as are required (hardware permitting). All tasks share the same memory space as the process itself; however, each task has its own stack for local storage, as well as to enable the thread's function call-and-return mechanism and thread-local exception handling.

Every process must have at least one thread of execution, typically referred to as the main thread, because that is the thread that executes the program's global main function (the entry point of the application) and subsequently instantiates any additional threads required by the main thread. The additional threads are usually called worker threads, and any thread can instantiate its own worker threads. Unlike parent/child processes, threads cannot "own" other threads; the threads are owned by the process. However, it is possible for worker threads to be controlled by other threads in the same process, so in that sense the controlling thread can be said to be a parent thread.

When the main function returns, the main thread terminates; however, any active threads within the process will continue executing, thus keeping the process "alive". In many cases this would be undesirable, particularly if those threads attempt to access shared memory that is released by the main thread. So although there is no notion of ownership amongst threads, any controlling thread that must terminate before its workers have completed their tasks should signal its workers to terminate and then wait for them to terminate before terminating itself.

Threads of execution within a process execute concurrently. All this means is that the operating system's task scheduler gives each thread a time slice of the CPU to do some work before saving the thread's state and moving on to the next thread (which may belong to another process entirely). Task switching is so rapid that it can appear as though all threads are executing simultaneously; however, that is only physically possible when the CPU has two or more cores available. Only one thread may execute upon any single core at any given moment.

Multiple threads are typically used to maintain responsiveness. In a GUI environment, a time-consuming task would take up the entire time slice of a process, so it would not be able to respond to any messages on the message queue unless the task specifically checked the queue periodically. However, by using a worker thread to carry out the task, the controlling thread can remain responsive to messages. The message queue can also be used by threads to signal other threads, so a controlling thread can respond to periodic progress reports from its worker threads if required. In this sense it can be said that the worker threads are background tasks, while the controlling thread is the foreground task.
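A minimal sketch of the main-thread/worker-thread pattern described above (not from the original answer; the thread count and timings are illustrative): the main thread spawns workers, signals them to stop, and joins them before it terminates, so no worker outlives the memory it relies on.

    /* A main thread spawning worker threads, then signalling them to stop and
     * joining them before it exits.  Compile with: cc workers.c -lpthread */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static atomic_bool stop_requested = false;

    static void *worker(void *arg) {
        int id = (int)(long)arg;
        while (!atomic_load(&stop_requested)) {
            printf("worker %d doing a unit of work\n", id);
            usleep(200 * 1000);              /* pretend to work */
        }
        printf("worker %d shutting down\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t workers[3];
        for (long i = 0; i < 3; i++)
            pthread_create(&workers[i], NULL, worker, (void *)i);

        sleep(1);                            /* the main thread does its own work */

        atomic_store(&stop_requested, true); /* signal the workers... */
        for (int i = 0; i < 3; i++)
            pthread_join(workers[i], NULL);  /* ...and wait for them to finish */
        return 0;
    }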

Related Questions

How does a thread work in a CPU?

A thread in a CPU is a sequence of instructions that the CPU can execute independently from other threads. Each thread has its own program counter, stack pointer, and set of registers. The CPU switches between threads to give the appearance of running multiple tasks simultaneously.
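For example, the following sketch (illustrative only, using POSIX threads) shows that each thread gets its own stack while sharing the process's globals: the same local variable lives at a different address in every thread.

    /* Each thread gets its own stack: the local variable has a different
     * address in every thread, while the global is shared by all of them.
     * Compile with: cc stacks.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;           /* one copy, visible to all threads */

    static void *show_stack(void *arg) {
        int local = 0;                       /* a separate copy on each thread's stack */
        printf("thread %ld: &local = %p, &shared_counter = %p\n",
               (long)arg, (void *)&local, (void *)&shared_counter);
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, show_stack, (void *)i);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        return 0;
    }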


What do you mean by overhead in a process, and why don't threads have that overhead?

A process has its own address space, while threads run inside their process's address space. When the CPU switches from one process to another it must save and restore that per-process state, and no productive work is done during that time. Threads within the same process share that state, so the CPU can switch from one thread to another much more directly. That is why switching between processes carries significant overhead, while switching between threads carries comparatively little.
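The difference can be seen in a short sketch (illustrative, POSIX assumed): a forked child gets its own copy of the address space, so its writes are invisible to the parent, while a thread writes directly into the memory its process already owns.

    /* Processes get separate address spaces; threads share one.  The forked
     * child's write to `value` is invisible to the parent, while the thread's
     * write is seen immediately.  Compile with: cc addrspace.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int value = 0;

    static void *thread_writer(void *arg) {
        (void)arg;
        value = 42;                          /* same address space as main */
        return NULL;
    }

    int main(void) {
        if (fork() == 0) {                   /* child process */
            value = 99;                      /* modifies the child's private copy */
            _exit(0);
        }
        wait(NULL);
        printf("after child wrote 99: value = %d\n", value);   /* still 0 */

        pthread_t t;
        pthread_create(&t, NULL, thread_writer, NULL);
        pthread_join(t, NULL);
        printf("after thread wrote 42: value = %d\n", value);  /* now 42 */
        return 0;
    }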


Is control unit another name for CPU?

Not exactly. If we look at the basic organization of a computer, we see that without a control unit the CPU cannot run a process at all, but the control unit is only one part of the CPU: the CPU is the combination of the control unit and the arithmetic logic unit.


How many threads can be run on a single CPU pipeline?

Only one thread executes in a single pipeline at any instant. With simultaneous multithreading (for example, Intel's Hyper-Threading) a core can interleave instructions from two or more hardware threads in one pipeline, and the operating system can time-slice any number of software threads onto it.


What is bus architecture?

Bus architecture is the pathway between the CPU and other peripherals. It is usually a shared input/output pathway. "Bus" is short for omnibus, which means "for all".


What is context scheduling?

Context switching is the process of saving the state of a process or thread, and then restoring the state of another process or thread for execution. Context switching enables multitasking by allowing multiple processes or threads to share a single CPU. It involves saving and restoring CPU registers, program counter, and stack pointers.
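On Linux and BSD systems the effect can be observed from user space with getrusage(), which reports how many voluntary and involuntary context switches a process has undergone (a sketch, not part of the original answer):

    /* Context switches can be counted from user space: getrusage() reports
     * voluntary switches (the process blocked, e.g. sleeping or on I/O) and
     * involuntary ones (the scheduler preempted it).  Linux/BSD sketch. */
    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    int main(void) {
        for (int i = 0; i < 5; i++)
            usleep(100 * 1000);   /* each blocking sleep causes a voluntary switch */

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
        return 0;
    }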


Explain the different types of Computer Architecture?

Computer architecture is concerned with how the CPU is organized and how it uses the computer's memory. Architectures are commonly classified by how instructions and data are stored (von Neumann versus Harvard) and by their instruction sets (CISC versus RISC).


Which two categories are used to classify CPU architecture?

CISC and RISC are the two categories that are used to classify CPU architecture. CISC is an acronym for complex instruction set computer, and RISC is an acronym for reduced instruction set computer.


What is the difference between process and thread in net?

A process is a collection of threads that share the same virtual memory. A process has at least one thread of execution, and a thread always runs in a process context. Threads are the units the CPU's work is scheduled onto; the different activities of a program are assigned to different threads, and most of the time they are independent of each other. For example, when we open MS Word and MS Excel on our PC, each instance runs as a process, and the work inside each is carried out and managed by its threads. A process is nothing but a program in execution, and many threads can run under a single process.


List reasons why a Mode switch between threads may be cheaper than a Mode switch between processes?

Answer 1:

1. The control blocks for processes are larger than those for threads (they hold more state information), so less information has to be moved during a thread switch than during a process context switch.

2. The major reason is that memory management is much simpler for threads than for processes. Threads share their memory, so during a mode switch the memory information does not have to be exchanged or changed: pages and page tables do not have to be switched, and so on. This makes a thread context switch much cheaper than a process switch, where the memory mappings do have to be exchanged.

3. Threads do not have to worry about accounting and other process-specific information, so they do not have to keep that information in their control block, and keeping the thread control block consistent is much faster.

4. Threads share open files, so when a mode switch happens between threads this information stays the same and does not need to be touched (similar to the accounting information), which makes the switch faster.

Answer 2:

Processes are generally heavyweight: the process control block (PCB) holds kernel objects, generally referred to as state information. At design time an application can be divided along two lines: into processes, which may affect the application's architecture, or into threads, which do not. Threads are typically spawned for short-term work, whereas processes are longer-lived, and a thread shares its process's address space (which on a 32-bit system is never larger than 4 GB). A single process may hold many threads. Exchanging values between processes costs many more CPU cycles, and if the CPU spends most of its time swapping between processes it can lead to thrashing, whereas threads can exchange values simply through variables in their shared address space.
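A rough way to see the cost difference (a machine-dependent sketch under the assumption that POSIX pipes, threads, and fork() are available; none of this is from the original answer): ping-pong a byte many times between two threads and then between two processes, forcing a switch on every hop. On many systems the thread case comes out cheaper because no page tables have to be switched, though the gap varies with the scheduler and the number of cores.

    /* Ping-pong a byte N times between two threads, then between two
     * processes, and time both.  Error checking and fd cleanup are omitted.
     * Compile with: cc switch_cost.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define N 100000
    static int ping[2], pong[2];             /* one pipe per direction */

    static void bounce_back(void) {          /* read from ping, reply on pong */
        char c;
        for (int i = 0; i < N; i++) {
            read(ping[0], &c, 1);
            write(pong[1], &c, 1);
        }
    }

    static void *thread_side(void *arg) { (void)arg; bounce_back(); return NULL; }

    static double run_main_side(void) {      /* drive N round trips, return seconds */
        struct timespec t0, t1;
        char c = 'x';
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            write(ping[1], &c, 1);
            read(pong[0], &c, 1);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        /* thread vs. thread */
        pipe(ping); pipe(pong);
        pthread_t t;
        pthread_create(&t, NULL, thread_side, NULL);
        printf("thread  ping-pong: %.3f s\n", run_main_side());
        pthread_join(t, NULL);

        /* process vs. process (fresh pipes; the old descriptors just leak) */
        pipe(ping); pipe(pong);
        if (fork() == 0) { bounce_back(); _exit(0); }
        printf("process ping-pong: %.3f s\n", run_main_side());
        wait(NULL);
        return 0;
    }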


What is meant by process aging?

Process aging is the mechanism by which the kernel scheduler slowly reduces the execution priority of a process (more specifically, of the threads in a process) when that process or thread stays compute-bound (CPU-pinned) for more than a short period of time. This mechanism allows CPU-intensive processes to run at a lower priority than I/O-intensive (or especially interactive) processes. It is a compromise between performance and responsiveness.