Q: What is the minimum number of threads in each process?

Best Answer

One -- the main thread.
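
A minimal sketch in standard C++ (not part of the original answer) showing that a freshly started process runs exactly one thread, the main thread, and that any further threads must be created explicitly:

```cpp
#include <iostream>
#include <thread>

int main() {
    // At this point the process contains a single thread: the main thread.
    std::cout << "main thread id: " << std::this_thread::get_id() << '\n';

    // A second thread exists only because we explicitly create it.
    std::thread worker([] {
        std::cout << "worker thread id: " << std::this_thread::get_id() << '\n';
    });

    worker.join();   // back to one thread; when main returns, the process ends
    return 0;
}
```
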
Continue Learning about Engineering

What is the difference between processes and threads?

A process is the memory space set aside for an application to execute in; within that process, the things that are actually executed are threads. The key difference is that processes are fully isolated from each other, whereas threads share (heap) memory with the other threads running in the same application. Threads share the address space of the process that created them; processes each have their own address space. Threads have direct access to the data segment of their process; a child process gets its own copy of the parent's data segment. Threads can communicate directly with other threads of the same process; processes must use inter-process communication to communicate with sibling processes. Threads have almost no overhead; processes have considerable overhead. New threads are easily created; new processes require duplication of the parent process. A thread can exercise considerable control over the other threads of its process; a process can only exercise control over its child processes.
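
A hedged illustration of that sharing, using standard C++ threads (the variable names are made up): all threads of one process can read and write the same object directly, with no inter-process communication.

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    std::vector<int> shared_data;   // one object in the process's address space
    std::mutex m;

    auto producer = [&](int value) {
        std::lock_guard<std::mutex> lock(m);  // threads share memory, so access is synchronised
        shared_data.push_back(value);         // both workers mutate the SAME vector
    };

    std::thread t1(producer, 1);
    std::thread t2(producer, 2);
    t1.join();
    t2.join();

    // The main thread sees the workers' writes directly; separate processes
    // would each have their own copy and would need IPC to share this result.
    std::cout << "values written by workers: " << shared_data.size() << '\n';
    return 0;
}
```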


Explain about threads in computer networks?

A thread is a sequence of instructions followed by a CPU and is an independently dispatchable unit in the run queue. A process can start and manage multiple threads, each managing one aspect of the overall processing. The operating system can schedule the threads independently, giving them CPU time when they are ready and blocking them when they are waiting on something, such as an I/O completion. In a network process, such as a web server, many things are going on at the same time. Threads are an ideal solution to the problem of managing all of them, because the main process does not need to poll each sub-task (thread) to see whether it needs, or is ready, to do work.
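
A rough sketch of that thread-per-task idea in standard C++. A real network server would accept sockets; here each "request" is simulated with a sleep so the example stays self-contained.

```cpp
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Simulated request handler: the sleep stands in for blocking network I/O,
// during which the OS is free to schedule the other threads.
void handle_request(int id) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::cout << "request " << id << " done\n";
}

int main() {
    std::vector<std::thread> workers;
    for (int id = 0; id < 4; ++id)           // pretend four clients connected
        workers.emplace_back(handle_request, id);

    // The main thread never polls the workers; it simply waits for them to finish.
    for (auto& t : workers) t.join();
    return 0;
}
```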


What is difference in operating threads or process threads?

A process is an instance of a computer program. Each process has its own address space and executes concurrently with other processes. If the system has multiple CPU cores, two or more processes may execute simultaneously. Every process has at least one thread of execution, the main thread, and any thread within a process can instantiate a new thread within that same process. Every thread shares the address space of the process, but each thread has its own local stack, which makes it possible for two threads to call the same function concurrently. If the system has multiple CPU cores, two or more threads within the same process may execute simultaneously.

Programmers use multiple threads to divide complex tasks into simpler tasks that can be executed concurrently. As a simple example, suppose we must search an unsorted array of 1 million elements. We could use a single thread and search the entire array from beginning to end, but the element we are looking for is just as likely to lie in the second half as in the first, so on average we would examine 500,000 elements. By dividing the array between two worker threads we should get a result roughly twice as quickly, because each thread now examines an average of only 250,000 elements. It follows that the more threads we utilise, the quicker we should get a result, but that is only true up to a point: the number of threads we can physically execute simultaneously depends on the number of CPU cores available. If all the threads execute on the same core, there is no point in multi-threading this particular task.

Nevertheless, the task is likely to take some time to complete, so it can still be beneficial to delegate it to at least one worker thread. For example, if the main thread manages a message queue, the worker thread can post messages to that queue to keep the main thread abreast of its progress. Meanwhile, the user can continue to interact with the process, posting messages to the queue to perhaps initiate another search (in another thread). The main thread's task is then reduced to nothing more than processing messages and delegating tasks to worker threads. This is an over-simplification, but it is this type of multi-threading that makes it possible for a graphical user interface to remain responsive to user interactions while time-consuming tasks are carried out in the background by worker threads.
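
A sketch of the two-worker search described above (the array contents and target value are invented for illustration):

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000);
    for (std::size_t i = 0; i < data.size(); ++i) data[i] = static_cast<int>(i);
    const int target = 750'000;                      // happens to lie in the second half

    std::atomic<bool> found{false};
    auto search = [&](std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end && !found; ++i)
            if (data[i] == target) found = true;     // tell the other worker it can stop
    };

    // Each worker searches half the array; on a multi-core CPU they can run in parallel.
    std::thread first(search, 0, data.size() / 2);
    std::thread second(search, data.size() / 2, data.size());
    first.join();
    second.join();

    std::cout << (found ? "found" : "not found") << '\n';
    return 0;
}
```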


What is the difference between a Parent Process and Foreground Process?

A parent process is a process that has spawned one or more child processes and so takes "ownership" of those processes. A foreground process typically means a process that has a user interface, as opposed to a background process, which typically does not. Most background processes are services, while most foreground processes are applications.

It's important to note that the term "process" has a specific meaning in a multi-processing environment. A process is a computer program that has one or more threads of execution. A thread is the machine-level representation of a task, and a process may instantiate as many tasks as are required (hardware permitting). All tasks share the same memory space as the process itself, but each task has its own stack for local storage, for the thread's function call-and-return mechanism, and for thread-local exception handling.

Every process must have at least one thread of execution, typically referred to as the main thread, because it executes the program's global main function (the entry point of the application) and subsequently instantiates any additional threads required. The additional threads are usually called worker threads, and any thread can instantiate its own worker threads. Unlike parent/child processes, threads cannot "own" other threads; all threads are owned by the process. However, it is possible for worker threads to be controlled by other threads in the same process, so in that sense the controlling thread can be said to be a parent thread.

When the main function returns, the main thread terminates, but any active threads within the process will continue executing, keeping the process "alive". In many cases this is undesirable, particularly if those threads attempt to access shared memory that is released by the main thread. So although there is no notion of ownership amongst threads, any controlling thread that must terminate before its workers have completed their tasks should signal its workers to terminate, and then wait for them to terminate, before terminating itself.

Threads of execution within a process execute concurrently. All this means is that the operating system's task scheduler gives each thread a time slice of the CPU to do some work, before saving the thread's state and moving on to the next thread (which may belong to another process entirely). Task switching is so rapid that it can appear as though all threads are executing simultaneously, but that is only physically possible when the CPU has two or more cores; only one thread may execute on any single core at any given moment.

Multiple threads are typically used to maintain responsiveness. In a GUI environment, a time-consuming task would take up the process's entire time slice, so the process would not be able to respond to messages on its message queue unless the task specifically checked the queue periodically. By using a worker thread to carry out the task, the controlling thread can remain responsive to messages. The message queue can also be used by threads to signal other threads, so a controlling thread can respond to periodic progress reports from its worker threads if required. In this sense the worker threads are background tasks, while the controlling thread is the foreground task.
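
A hedged sketch of the "worker posts progress messages to a queue processed by the controlling thread" pattern described above. The queue here is a plain std::queue guarded by a mutex, not any particular GUI framework's message queue.

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

std::queue<std::string> messages;   // stands in for the GUI message queue
std::mutex m;
std::condition_variable cv;

void worker() {
    for (int percent = 25; percent <= 100; percent += 25) {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));  // pretend work
        std::lock_guard<std::mutex> lock(m);
        messages.push("progress " + std::to_string(percent) + "%");
        cv.notify_one();             // signal the controlling (foreground) thread
    }
}

int main() {
    std::thread background(worker);                  // background task
    for (int received = 0; received < 4; ++received) {
        std::unique_lock<std::mutex> lock(m);        // foreground "message loop"
        cv.wait(lock, [] { return !messages.empty(); });
        std::cout << messages.front() << '\n';
        messages.pop();
    }
    background.join();               // wait for the worker before terminating
    return 0;
}
```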


What is processes threads?

Threads are different paths of control in a program that a computer might run at the same time if it has parallel-processing support for multithreaded execution. Threads and processes are two ways of supporting multitasking on a uniprocessor or multiprocessing on a multiprocessor. Threads are lighter weight: they take fewer OS resources to implement, but support only limited protection and security. Processes take more OS resources but can support full protection and security. Many operating systems support both processes and threads, allowing each process to have many threads.

Related questions

What of these components are shared by a multithreaded process?

Some resources are shared by the different threads of the same process and some are not. The threads share the address space, open files and global variables, but each thread has its own stack and its own copy of the registers (including the program counter).
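
A small sketch of those sharing rules in standard C++: the global counter is a single object visible to every thread, while each thread's local variable lives on that thread's own stack at its own address.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int shared_counter = 0;     // data segment: one object shared by all threads
std::mutex m;

void work(const char* name) {
    int local = 0;          // lives on this thread's private stack
    ++local;
    std::lock_guard<std::mutex> lock(m);
    ++shared_counter;
    std::cout << name << ": &shared_counter=" << &shared_counter
              << " &local=" << &local << '\n';  // same global address, different stack addresses
}

int main() {
    std::thread a(work, "thread A");
    std::thread b(work, "thread B");
    a.join();
    b.join();
    std::cout << "shared_counter seen by main: " << shared_counter << '\n';  // prints 2
    return 0;
}
```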


What is a thread and how does it relate to a process?

A process is composed of one or more threads of execution. Multiple threads allow a process to perform two or more operations concurrently. This is particularly useful in machines with two or more processors as the threads can execute simultaneously. All the threads of a process run in a shared memory space; separate processes run in separate memory spaces. A process must have at least one thread, the primary thread. However, threads can spawn new threads as required. Each thread has its own call stack but shares the same data segment and virtual address space as the process.


What are the advantages and disadvantages of user level threads and kernel level threads?

USER-LEVEL THREADS
Advantages:
· User-level threads can be implemented on an operating system that does not support threads.
· Implementing user-level threads does not require modification of the operating system; everything is managed by the thread library.
· Simple representation: a thread is represented by a thread ID, program counter, registers and stack, all stored in the user process's address space.
· Simple management: creating new threads, switching between threads and synchronization between threads can all be done without intervention of the kernel (a minimal user-level context switch is sketched after this answer).
· Fast and efficient: switching threads is much cheaper than a system call.
Disadvantages:
· There is a lack of coordination between the threads and the operating system kernel. A process gets one time slice whether it contains 1 thread or 10,000 threads; it is up to each thread to give up control to the other threads.
· If one thread makes a blocking system call, the entire process can be blocked in the kernel, even if other threads in the same process are ready to run.

KERNEL-LEVEL THREADS
Advantages:
· Because the kernel has full knowledge of all threads, the scheduler may decide to allocate more CPU time to a process with a large number of threads, which makes kernel threads useful for thread-intensive applications.
Disadvantages:
· Kernel-level threads are slower and less efficient, since the kernel must manage and schedule all threads as well as all processes. A full thread control block (TCB) is required for each thread, which increases overhead and kernel complexity.
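
A rough, Linux/glibc-specific sketch of a user-level context switch, using the legacy <ucontext.h> API (deprecated in POSIX but still shipped by glibc). Both switches below happen entirely in user space: the kernel schedules only one thread, and this code decides which path of execution runs, which is exactly the "no kernel intervention" property listed above.

```cpp
#include <iostream>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;
static char worker_stack[64 * 1024];            // the user-level thread's private stack

void worker() {
    std::cout << "worker: first run\n";
    swapcontext(&worker_ctx, &main_ctx);        // cooperative yield back to "main"
    std::cout << "worker: resumed, finishing\n";
}                                               // returning follows uc_link back to main

int main() {
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp   = worker_stack;
    worker_ctx.uc_stack.ss_size = sizeof worker_stack;
    worker_ctx.uc_link          = &main_ctx;    // where control goes when worker() returns
    makecontext(&worker_ctx, worker, 0);

    swapcontext(&main_ctx, &worker_ctx);        // switch into the user-level thread
    std::cout << "main: worker yielded to us\n";
    swapcontext(&main_ctx, &worker_ctx);        // resume it; it finishes and returns here
    std::cout << "main: done\n";
    return 0;
}
```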


How an operating system use time slicing?

The operating system divides CPU time into short slices and allocates a slice to each ready thread of each process/task in turn; when a thread's slice expires, its state is saved and the next thread is scheduled.


What is the difference between process and thread in net?

A process is a collection of threads that share the same virtual memory. A process has at least one thread of execution, and a thread always runs in a process context. Threads are the units to which the CPU's work is scheduled and distributed; different parts of a program are assigned to different threads, and most of the time they are independent of each other. For example, we can open many instances of MS Word and MS Excel on a PC, and each is monitored and managed by its own threads. A process is simply a program in execution, and many threads can run within one process.


What are the popular multiprocessor thread scheduling strategies?

Load Sharing: Processes are not assigned to a particular processor. A global queue of threads is maintained, and each processor, when idle, selects a thread from this queue.

Gang Scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis. Closely related threads/processes may be scheduled this way to reduce synchronization blocking and minimize process switching. Group scheduling predated this strategy.

Dedicated Processor Assignment: Provides implicit scheduling defined by the assignment of threads to processors. For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program. Processors are chosen from the available pool.

Dynamic Scheduling: The number of threads in a program can be altered during the course of execution.


What is the difference between Kernel and User level thread?

A kernel thread, sometimes called an LWP (lightweight process), is created and scheduled by the kernel. Kernel threads are often more expensive to create than user threads, and the system calls that create kernel threads directly are very platform specific. A user thread is normally created by a threading library, and scheduling is managed by the threading library itself (which runs in user mode). All user threads belong to the process that created them. The advantage of user threads is that they are portable.

The major difference can be seen on multiprocessor systems: user threads managed entirely by the threading library cannot run in parallel on different CPUs, although they run fine on uniprocessor systems. Since kernel threads use the kernel scheduler, different kernel threads can run on different CPUs.

Many systems implement threading differently. A many-to-one threading model maps many user threads onto one kernel thread, which can be thought of as the main process. A one-to-one threading model maps each user thread directly to one kernel thread; this model allows parallel processing on multiprocessor systems. Each kernel thread can be thought of as a virtual processor (VP), which is managed by the scheduler.
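
A small, hedged illustration tied to the one-to-one model above: on typical Linux/glibc toolchains each std::thread is backed by a kernel thread (a pthread), and the standard library exposes its native handle. That mapping is a property of the platform, not a guarantee of the C++ standard.

```cpp
#include <iostream>
#include <thread>

int main() {
    // How many threads the kernel can actually run in parallel (hardware threads/cores).
    std::cout << "hardware concurrency: " << std::thread::hardware_concurrency() << '\n';

    std::thread t([] {
        std::cout << "worker running on its own kernel-scheduled thread\n";
    });

    // native_handle() exposes the underlying kernel/pthread handle (platform specific).
    std::cout << "native handle: " << t.native_handle() << '\n';
    t.join();
    return 0;
}
```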