What is a Mutex?


Top Answer
Wiki User
Answered 2006-08-24 23:00:11

Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of un-shareable resources by pieces of computer code called critical sections. Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers.

The problem is acute because a thread can be stopped or started at any time. To illustrate: suppose a section of code is mutating a piece of data over several program steps, when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, the data, in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries overwriting that data, the ensuing state will probably be unrecoverable. These critical sections of code accessing shared data must therefore be protected, so that other processes which read from or write to the chunk of data are excluded from running.

On a uniprocessor system the common way to achieve mutual exclusion is to disable interrupts for the smallest possible number of instructions that will prevent corruption of the shared data structure, the so-called "critical region". This prevents interrupt code from running in the critical region. Besides this hardware-supported solution, some software solutions exist that use "busy-wait" to achieve the goal. Examples of these algorithms include:

* Dekker's algorithm
* Peterson's algorithm
* Lamport's bakery algorithm

In a computer in which several processors share memory, an indivisible test-and-set of a flag is used in a tight loop to wait until the other processor clears the flag. The test-and-set performs both operations without releasing the memory bus to another processor. When the code leaves the critical region, it clears the flag. This is called a "spinlock" or "busy-wait". Some computers have similar indivisible multiple-operation instructions for manipulating the linked lists used for event queues and other data structures commonly used in operating systems.

Most classical mutual exclusion methods attempt to reduce latency and busy-waits by using queuing and context switches. Some claim that benchmarks indicate that these special algorithms waste more time than they save.

Many forms of mutual exclusion have side-effects. For example, classic semaphores permit deadlocks, in which one process gets a semaphore, another process gets a second semaphore, and then both wait forever for the other semaphore to be released. Other common side-effects include starvation, in which a process never gets sufficient resources to run to completion; priority inversion, in which a higher-priority thread waits for a lower-priority thread; and "high latency", in which response to interrupts is not prompt. Much research is aimed at eliminating the above effects, such as by guaranteeing non-blocking progress. No perfect scheme is known.
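For illustration, here is a minimal sketch of that test-and-set spinlock using C++11's std::atomic_flag (the class name is our own):

    #include <atomic>

    // A spinlock built on an indivisible test-and-set, as described above.
    class spinlock
    {
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
    public:
        void lock()
        {
            // test_and_set atomically sets the flag and returns its old
            // value; busy-wait (spin) until we observe it was previously clear.
            while (flag.test_and_set(std::memory_order_acquire))
                ;
        }
        void unlock()
        {
            // Leaving the critical region clears the flag.
            flag.clear(std::memory_order_release);
        }
    };

Two threads calling lock() on the same spinlock serialize their critical sections; note that a waiting thread burns CPU, which is exactly the busy-wait trade-off described above.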


Related Questions

What is priority inversion?

Priority inversion is a situation in which a lower-priority task runs while a higher-priority task is blocked waiting for a resource (a mutex). For example, consider three tasks A, B and C, with A the highest-priority task and C the lowest, and look at this sequence of context switches: A blocks for I/O and unlocks the mutex. C, which was ready to run, starts running and locks the mutex. B becomes ready and, having higher priority, swaps out C while C still holds the mutex. When A becomes ready it cannot run, because the mutex is locked by C; and C never gets to run and release the mutex, because B has higher priority than C. The medium-priority task B thus indirectly blocks the highest-priority task A. The solution to priority inversion is priority inheritance.
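Priority inheritance is typically requested when the mutex is created. A minimal sketch with POSIX threads, assuming a platform that supports PTHREAD_PRIO_INHERIT (the function and variable names are illustrative):

    #include <pthread.h>

    pthread_mutex_t shared_mutex;   // the mutex contended by A, B and C

    void init_priority_inheritance_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        // With PTHREAD_PRIO_INHERIT, while low-priority C holds the mutex
        // and high-priority A blocks on it, C temporarily runs at A's
        // priority, so medium-priority B can no longer starve C (and A).
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&shared_mutex, &attr);
        pthread_mutexattr_destroy(&attr);
    }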


What is TVE?

TVE stands for Traffic Verification Entitlement. This is a subdomain of the mutex exchange protocol of SPPVTG's assertion suite.


How to write c programme of Sleeping-barber problem in operating system?

    #define CHAIRS 5              /* # chairs for waiting customers */

    typedef int semaphore;        /* use your imagination */

    semaphore customers = 0;      /* # of customers waiting for service */
    semaphore barbers = 0;        /* # of barbers waiting for customers */
    semaphore mutex = 1;          /* for mutual exclusion */
    int waiting = 0;              /* customers are waiting (not being cut) */

    void barber(void)
    {
        while (TRUE) {
            down(&customers);      /* go to sleep if # of customers is 0 */
            down(&mutex);          /* acquire access to 'waiting' */
            waiting = waiting - 1; /* decrement count of waiting customers */
            up(&barbers);          /* one barber is now ready to cut hair */
            up(&mutex);            /* release 'waiting' */
            cut_hair();            /* cut hair (outside critical region) */
        }
    }

    void customer(void)
    {
        down(&mutex);              /* enter critical region */
        if (waiting < CHAIRS) {    /* if there are no free chairs, leave */
            waiting = waiting + 1; /* increment count of waiting customers */
            up(&customers);        /* wake up barber if necessary */
            up(&mutex);            /* release access to 'waiting' */
            down(&barbers);        /* go to sleep if # of free barbers is 0 */
            get_haircut();         /* be seated and be served */
        } else {
            up(&mutex);            /* shop is full; do not wait */
        }
    }
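The code above is textbook pseudocode: down and up are the classic semaphore wait and signal operations, and TRUE, cut_hair() and get_haircut() are left to the reader. As a hedged sketch, the declarations might map onto POSIX semaphores like this (an assumption for illustration; any semaphore API with wait/post semantics would do):

    #include <semaphore.h>

    sem_t customers, barbers, mutex;

    void init_semaphores(void)
    {
        sem_init(&customers, 0, 0);  /* no customers waiting initially */
        sem_init(&barbers, 0, 0);    /* no barber ready initially */
        sem_init(&mutex, 0, 1);      /* binary semaphore guarding 'waiting' */
        /* down(&s) becomes sem_wait(&s); up(&s) becomes sem_post(&s) */
    }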



What about multithreading in c sharp?

Multithreading in C# covers a number of distinct topics, each usually treated in its own tutorial, such as: interaction between threads, producer/consumer patterns, using the thread pool, and using mutex objects.


Protecting a dedicated function in Visual C plus plus from interruption?

You can only protect a function from interruption by another thread in the same process -- you cannot prevent the operating system from interrupting your program, nor threads in a separate process. To prevent interruption by another thread, use a mutex. The thread that holds the mutex blocks all other threads from holding that same mutex -- they must wait for it to be released by the holder.

However, mutexes should only be used when two or more threads need to share information that must be written by one or more of those threads. They cannot all write simultaneously, and you cannot read information that is currently being written. So each thread must obtain the mutex immediately before performing a read or write, and must release it when the read or write is complete. In this way, only one thread has access to the information at any given moment, and all other threads are blocked.

This does not prevent interruption, however. You can reduce the chances of interruption simply by raising your thread's priority. Priority level 0 is reserved for the operating system's own zero-page thread; application threads have 31 levels available (1 to 31), where 31 is the highest. Use this sparingly and only when absolutely required.
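A minimal sketch of that obtain-before-access pattern using the standard C++ mutex (the shared data and thread count are illustrative):

    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex data_mutex;     // guards shared_total
    long shared_total = 0;     // shared information written by many threads

    void add_chunk(long chunk)
    {
        // Obtain the mutex immediately before the write...
        std::lock_guard<std::mutex> lock(data_mutex);
        shared_total += chunk;
        // ...and release it (automatically) as soon as the write completes.
    }

    int main()
    {
        std::vector<std::thread> workers;
        for (int i = 0; i != 4; ++i)
            workers.emplace_back(add_chunk, 100);
        for (auto& t : workers) t.join();
    }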


How do you write Multithreaded applications using C plus plus?

Use std::packaged_task (preferably) or std::thread to start a new thread. Use std::future (preferred), std::mutex or std::atomic to share information between threads.
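A minimal sketch of the std::packaged_task and std::future combination mentioned above (the computation is illustrative):

    #include <future>
    #include <iostream>
    #include <thread>

    int sum_to(int n)                        // work to run on another thread
    {
        int total = 0;
        for (int i = 1; i <= n; ++i) total += i;
        return total;
    }

    int main()
    {
        std::packaged_task<int(int)> task(sum_to);
        std::future<int> result = task.get_future();   // channel for the result
        std::thread t(std::move(task), 100);           // run the task on a new thread
        std::cout << "sum = " << result.get() << '\n'; // blocks until ready
        t.join();
    }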


What is a semaphore in RTOS?

A semaphore is a means of protecting a resource or data shared between threads. It is a token-based mechanism for controlling when a thread can have access to the resource or data. Usually a semaphore handle can be obtained from the system by name or id. See also the mutex, which is a simplification of the semaphore concept.
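A minimal sketch of the token idea using C++20's std::counting_semaphore; real RTOS APIs differ, but they follow the same acquire/release pattern (the resource and counts are illustrative):

    #include <semaphore>
    #include <thread>
    #include <vector>

    // Three tokens: at most three threads may use the resource at once.
    std::counting_semaphore<3> tokens(3);

    void use_resource()
    {
        tokens.acquire();   // take a token (blocks if none are free)
        // ... access the shared resource/data here ...
        tokens.release();   // hand the token back
    }

    int main()
    {
        std::vector<std::thread> pool;
        for (int i = 0; i != 8; ++i) pool.emplace_back(use_resource);
        for (auto& t : pool) t.join();
    }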


What is the use of 'lock' statement?

The lock prefix on the 8086/8088 prevents any other bus master from accessing the bus during this instruction, even if this instruction is a multiple-access instruction. It assures atomicity and data consistency in a multiprocessor (multi-master bus) environment, but only for a single instruction. It is generally used to manipulate a mutex or semaphore.
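For illustration, the same bus-locked read-modify-write survives today: on x86, compilers typically emit a lock-prefixed instruction for a C++ atomic increment (a sketch; the exact instruction chosen is compiler- and target-dependent):

    #include <atomic>

    std::atomic<int> counter{0};

    void hit()
    {
        // On x86 this typically compiles to `lock xadd` (or `lock inc`),
        // making the increment atomic with respect to all other processors.
        counter.fetch_add(1);
    }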


Discuss the class involved in implementation of thread in visual c plus plus?

The thread class (std::thread) was introduced with the C++11 standard. Indeed, C++11 introduced several classes to make it easier to implement concurrency. Consider the following example:

    #include <iostream>
    #include <thread>
    #include <mutex>

    void foo (char ch, size_t count, std::mutex* cout_mutex)
    {
        if (!cout_mutex) return;
        for (size_t i=0; i!=count; ++i)
        {
            cout_mutex->lock();
            std::cout << ch;
            cout_mutex->unlock();
        }
    }

    void do_it()
    {
        std::mutex my_mutex;
        std::thread t1 (foo, 'A', 30, &my_mutex); // print 30 A's
        std::thread t2 (foo, 'B', 25, &my_mutex); // print 25 B's
        std::thread t3 (foo, 'C', 20, &my_mutex); // print 20 C's
        std::thread t4 (foo, 'D', 15, &my_mutex); // print 15 D's

        // synchronize threads:
        t1.join();
        t2.join();
        t3.join();
        t4.join();
    }

    int main()
    {
        for (size_t i=0; i!=10; ++i)
        {
            do_it();
            std::cout << std::endl;
        }
        std::cout << "Complete!" << std::endl;
    }

Example output (ten lines of output, run together here):

AAAAAAAAABABABABABACBACBACBACBDACBDACBDACBDACBDACBDACBDACBDACBDACBDACBDACBDACBDACBDCBDCBCBAAAAAAAAAAAABABABABCABCABCABDCABDCABDCABDCABDCABDCABDCABDCABDCABDCABDCABDCABDCBDCBDCBCBCBBAAAAAAAAABABABABABABABABCABCABCABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDBCDBCDBCDCCAAAAAAAAAABABABABABCABCABCABCABCADBCADBCADBCADBCADBCADBCADBCADBCADBCADBCADBCADBCDBCDBCDBCBAAAAAAAAAAAABABABABABACBACBACBACDBACDBACDBACDBACDBACDBACDBACDBACDBACDBACDBCDBCDBCDBCDBCBCBAAAAAAAAAAABABABABACBACBACBADCBADCBADCBADCBADCBADCBADCBADCBADCBADCBADCBADCBADCBDCBDCBCBCBBAAAAAAAAABABABABABACBACBACBACDBACDBACDBACDBACDBACDBACDBACDBACDBACDBACDBACDBACDBACDBCDBCBCBAAAAAAAAAABABABABABACBACBACBACBADCBADCBADCBADCBADCBADCBADCBADCBADCBADCBADCBADCBDCBDCBDCBCBAAAAAAAAABABABABABABABCABCABCADBCADBCADBCADBCADBCADBCADBCADBCADBCADBCADBCADBCADBCDBCDBCBCCAAAAAAAAAAABABABABABABACBACBACBACBDACBDACBDACBDACBDACBDACBDACBDACBDACBDACBDCBDCBDCBDCBDCBC
Complete!

The main function invokes the do_it() function 10 times. On each invocation, four threads of execution are instantiated. The first thread prints 30 A's, the second prints 25 B's, the third prints 20 C's and the last prints 15 D's. All threads begin executing as soon as they are instantiated and run concurrently with the do_it function, which is part of the main thread (the thread in which main executes). Thus there are five threads in total.

The mutex is required to ensure that only one thread has access to the std::cout stream at a time. In this case the mutex is actually redundant, since each thread prints just one character at a time. However, it is included to demonstrate a simple locking technique when two threads attempt to access the same resource.

All four threads must be joined to the main thread's do_it function. If we failed to do this, the do_it function could return before the four threads had completed their tasks. This would cause the mutex (which is local to the do_it function) to fall from scope, which would then invalidate the mutex pointers in the four threads; they'd be pointing to a resource that no longer existed. By joining the threads, the do_it function will wait for all threads to complete their tasks.

You will note from the example output that the execution order isn't exactly the same for each iteration of the main loop. This is normal with concurrent tasks and will ultimately depend on whatever background activity is going on at the time. In the example output, the first thread starts printing A's while the second thread is being instantiated, then they alternate between A and B. However, on each iteration of the main loop, the B's start at different points. After a few iterations the C's begin printing, and then the D's. Note that the order isn't the same each time and that there's no guarantee that D will stop before A, B or C. This shows that each thread runs independently of all the others.


Thread in java?

Yes, Java supports threaded execution. Threads can be created explicitly by constructing and starting a Thread object, or implicitly by running in a managed environment. For instance, a web server typically runs a thread for each HTTP connection. Java has specific constructs for threading: the "synchronized" and "volatile" language keywords, and the "wait" and "notify" methods on the base Object class. Additionally, there are objects in the standard class libraries such as thread-safe collections and higher-level mutex utility objects.


Why is c plus plus standard library needed?

It isn't needed, but it exists to provide common functionality. Without it you would have to re-invent the wheel to provide that common functionality yourself. The standard library includes the standard C library (which includes C-style memory management functions, general utilities and common types such as strings), common container classes (including lists, queues, stacks and vectors), input/output streams (including iostream, fstream, streambuf, etc), thread support (including thread and mutex) and language support (typeinfo, exception, new and limits).


What is the definition of thread?

A computer runs many applications at once; each instance of an application is known as a process. Each process is made of one or more threads, and each thread is a sequence of code, often responsible for one aspect of the program, or one task a program has been given. For instance, a program doing a complex long calculation may split into two threads: one to keep a user interface responsive, and one (or more) to progress through the lengthy calculation.

The catch is that, whilst it is guaranteed each individual thread will progress through its own code in sequence, it is not known where each thread will be relative to the others. One thread may progress more quickly than another, which means great care must be taken when two threads access one resource; this is usually done through a mutex.

Answer: In computing terms, a thread is a separate line of execution inside a process which shares the instruction portion of the process, but which has unique data associated with that thread.

To use an analogy: let's assume we have to make 10 cakes. A normal process would start with its instructions (the cake recipe) and the data (the ingredients for the 10 cakes), then follow the recipe iteratively 10 times in sequence, producing 10 cakes. A threaded model for this would say: making cake 4 does not require the exact same egg being used to make cake 2; there is an egg (data) for each cake. So, using threads, each thread would use the same instructions (recipe), but have its own unique data (set of ingredients).

The problem with threads (as brought up above) is when they must share some resource. In our example, let's assume we only have one oven. Each of the threads would go about preparing its cake, then go over to use the oven. The first thread to get there would take a lock (mutex) on the oven to bake its cake. The remaining threads would sit around and wait for the lock (mutex) to be cleared; then one of them would grab it and bake its cake.
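A minimal sketch of the cake-and-oven analogy in C++ (the names are ours):

    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex oven;                    // only one cake can bake at a time

    void bake_cake()
    {
        // preparing ingredients: private per-thread data, no locking needed
        std::lock_guard<std::mutex> lock(oven); // first thread here locks the oven
        // baking: the other threads wait until this lock is released
    }

    int main()
    {
        std::vector<std::thread> bakers;
        for (int i = 0; i != 10; ++i)   // ten cakes, one thread each
            bakers.emplace_back(bake_cake);
        for (auto& t : bakers) t.join();
    }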


What is the difference between MUTEX and Semaphore?

From Wikipedia: "A mutex is a binary semaphore, usually including extra features like ownership or priority inversion protection. The differences between mutexes and semaphores are operating system dependent. Mutexes are meant to be used for mutual exclusion only and binary semaphores are meant to be used for event notification and mutual exclusion."

They also have a good example as to the use of a semaphore: "A thread named A needs information from two databases before it can proceed. Access to these databases is controlled by two separate threads B, C. These two threads have a message-processing loop; anybody needing to use one of the databases posts a message into the corresponding thread's message queue. Thread A initializes a semaphore S with init(S,-1). A then posts a data request, including a pointer to the semaphore S, to both B and C. Then A calls P(S), which blocks. The other two threads meanwhile take their time obtaining the information; when each thread finishes obtaining the information, it calls V(S) on the passed semaphore. Only after both threads have completed will the semaphore's value be positive and A be able to continue. A semaphore used in this way is called a 'counting semaphore.'"

Basically, think of a semaphore as a lock that allows multiple threads to wait in line for the resource to be free. Usually they will block, and the semaphore will wake them up when it is their turn.
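A minimal sketch of that pattern using C++20's std::counting_semaphore; it is initialized to 0 rather than -1, since the C++ type does not allow a negative initial count, so A simply acquires once per expected completion (thread roles as in the quoted example):

    #include <semaphore>
    #include <thread>

    std::counting_semaphore<2> done(0);   // completion count, starts at 0

    void worker()                         // plays the role of B or C
    {
        // ... obtain the requested information ...
        done.release();                   // V(S): signal one completion
    }

    int main()                            // plays the role of A
    {
        std::thread b(worker), c(worker);
        done.acquire();                   // P(S): wait for the first completion
        done.acquire();                   // wait for the second completion
        // ... both results are now available ...
        b.join();
        c.join();
    }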


What are the types of semaphores?

There are three types of semaphores:

1. General/counting semaphores (can take any non-negative value): used when you might have multiple instances of a resource (like 3 printers or multiple memory buffers).
2. Binary semaphores (can be either 0 or 1): used to gain exclusive access to a single resource (like the serial port, a non-reentrant library routine, or a hard disk drive). A counting semaphore that has a maximum value of 1 is equivalent to a binary semaphore (because the semaphore's value can only be 0 or 1).
3. Mutex semaphores: optimized for use in controlling mutually exclusive access to a resource. There are several implementations of this type of semaphore.


What are recursive locks?

Recursive locks (also called recursive thread mutexes) are those that allow a thread to recursively acquire the same lock that it is holding. Note that this behavior is different from a normal lock: in the normal case, if a thread that is already holding a normal lock attempts to acquire the same lock again, it will deadlock. Recursive locks behave exactly like normal locks when another thread tries to acquire a lock that is already being held. Note that a recursive lock is said to be released if and only if the number of times it has been acquired matches the number of times it has been released by the owner thread. Many operating systems do not provide recursive locks natively, so it is sometimes necessary to emulate the behavior using primitive features like plain mutexes (locks) and condition variables.
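A minimal sketch using C++11's std::recursive_mutex, where a public function calls a helper that acquires the same lock (the names are illustrative):

    #include <mutex>

    std::recursive_mutex m;
    int value = 0;

    void helper()                 // also called directly from elsewhere
    {
        std::lock_guard<std::recursive_mutex> lock(m); // second acquisition: OK
        ++value;
    }

    void update()
    {
        std::lock_guard<std::recursive_mutex> lock(m); // first acquisition
        helper();  // would deadlock with a plain std::mutex
        // the lock is fully released only after both guards have unlocked
    }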


Purpose advantage and disadvantage of destructor?

Discussing advantage and disadvantage is kind of a moot point: if you use a C++ class, it has both a constructor and a destructor. If you don't need to do anything when a class goes out of scope, you can have an empty destructor; no problem! The destructor is always called whenever a class instance goes out of scope, no matter how that happens: even in an exception case, the destructor is guaranteed to be called. A very simple example of using a constructor and destructor to ensure clean programming is the case of acquiring and releasing a semaphore. You can create a simple class that consists solely of a constructor that acquires the semaphore and a destructor that releases it. Instantiating this class automatically does the acquire, and when it goes out of scope, even if it's because of an exception, the semaphore is released. That means there is no way you can possibly forget to release the semaphore; it happens automatically. What could be better than that?
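A minimal sketch of such a guard class (the classic RAII idiom); the semaphore here is stood in for by a mutex, purely for illustration:

    #include <mutex>

    std::mutex sem;                                // stand-in for the semaphore
    void acquire_semaphore() { sem.lock(); }
    void release_semaphore() { sem.unlock(); }

    class semaphore_guard
    {
    public:
        semaphore_guard()  { acquire_semaphore(); } // acquire on construction
        ~semaphore_guard() { release_semaphore(); } // release on destruction,
                                                    // even during stack unwinding
    };

    void critical_work()
    {
        semaphore_guard guard;   // acquired here
        // ... work that may return early or throw ...
    }                            // released here, no matter how we leave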


What are the merits of volatile in c plus plus?

The volatile specifier informs the compiler that an object could be modified by something that is external to the thread of execution; something that is not part of the program. For instance, when reading a hardware clock, you would rightly expect the value of that clock to change from one read to the next. But from the compiler's perspective, the clock remains constant because your code does not alter it between reads and is unaware of any external code that controls its value. Therefore the compiler will (rightly) optimise away what it believes to be redundant read operations, which could easily break the semantics of your code. By declaring an object volatile (even a constant object), you inform the compiler that the object may change between successive read or write operations and that redundancy optimisations must not be applied to any code that uses that object. Although volatile is often misused as a method of synchronisation between concurrent threads within the same process that share the same object, you must not use it for this purpose. C++ provides very specific mechanisms for that, such as an atomic, a mutex or a condition_variable. The volatile specifier must only be used when writing low-level code that deals directly with hardware, such as device drivers and embedded-system software.
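A minimal sketch of the legitimate use: polling a memory-mapped hardware clock register (the address and register are hypothetical):

    #include <cstdint>

    // Hypothetical memory-mapped clock register; the address is illustrative.
    volatile std::uint32_t* const clock_reg =
        reinterpret_cast<volatile std::uint32_t*>(0x40001000);

    std::uint32_t wait_for_tick()
    {
        std::uint32_t start = *clock_reg;  // real read: cannot be optimised away
        while (*clock_reg == start)        // re-read on every iteration
            ;                              // spin until the hardware changes it
        return *clock_reg;
    }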


What is a constant member function c plus plus?

A constant member function is an instance method that can only modify the mutable members of the class instance. This is achieved by implicitly declaring the this pointer constant. The this pointer is a hidden parameter that is implicitly passed to every instance function, and that points to the current instance of the class. Static member functions do not have an implicit this pointer and therefore cannot be declared const. Consider the following simple class that has both mutable and non-mutable member variables, and both constant and non-constant member functions:

    struct foo
    {
        void constant() const;
        void nonconstant();
    private:
        int data1;
        mutable int data2;
    };

    void foo::constant() const
    {
        data1++; // not permitted: the implicit this pointer is declared const
        data2++; // ok: data2 is mutable
    }

    void foo::nonconstant()
    {
        data1++; // ok: the implicit this pointer is declared non-const
        data2++; // ok: data2 is mutable
    }

(As written, foo::constant() will not compile, precisely because of the data1++ line; it is shown to illustrate the error.) Note that data2 can always be modified since it is declared mutable, but data1 can only be modified by non-constant member functions. Members should only be declared mutable when they are only used internally by a class (such as when locking a mutex for thread safety), or where a value needs to be calculated and cached the first time it is accessed.


What is critical region in operating systems?

1) CRITICAL REGIONS

a) Motivation

Time-dependent errors can easily be generated when semaphores are used to solve the critical section problem. To overcome this difficulty, a new language construct, the critical region, was introduced.

b) Definition and notation

A variable v of type T, which is to be shared among many processes, can be declared:

    VAR v: SHARED T;

The variable v can be accessed only inside a region statement of the following form:

    REGION v DO S;

This construct means that while statement S is being executed, no other process can access the variable v. Thus, if the two statements

    REGION v DO S1;
    REGION v DO S2;

are executed concurrently in distinct sequential processes, the result will be equivalent to the sequential execution S1 followed by S2, or S2 followed by S1.

To illustrate this construct, consider the frames CLASS defined in abstract data type. Since mutual exclusion is required when accessing the array free, we need to declare it as a shared array:

    VAR free: SHARED ARRAY [1..n] OF boolean;

The acquire procedure must be rewritten as follows:

    PROCEDURE ENTRY acquire (VAR index: integer);
    BEGIN
        REGION free DO
            FOR index := 1 TO n DO
                IF free[index] THEN
                BEGIN
                    free[index] := false;
                    exit;
                END;
        index := 1;
    END;

The critical-region construct guards against some simple errors associated with the semaphore solution to the critical section problem which may be made by a programmer.

c) Compiler implementation of the critical region construct

For each declaration

    VAR v: SHARED T;

the compiler generates a semaphore v-mutex initialized to 1. For each statement

    REGION v DO S;

the compiler generates the following code:

    P(v-mutex);
    S;
    V(v-mutex);

Critical regions may also be nested. In this case, however, deadlocks may result. Example of a deadlock:

    VAR x, y: SHARED T;
    PARBEGIN
        Q: REGION x DO REGION y DO S1;
        R: REGION y DO REGION x DO S2;
    PAREND;

2) CONDITIONAL CRITICAL REGIONS

The critical region construct can be effectively used to solve the critical section problem. It cannot, however, be used to solve some general synchronization problems. For this reason the conditional critical region was introduced. The major difference between the critical region and the conditional critical region constructs is in the region statement, which now has the form:

    REGION v WHEN B DO S;

where B is a boolean expression. As before, regions referring to the same shared variable exclude each other in time. Now, however, when a process enters the critical region, the boolean expression B is evaluated. If the expression is true, statement S is executed; if it is false, the process relinquishes the mutual exclusion and is delayed until B becomes true and no other process is in the region associated with v.

Example for the bounded buffer problem:

    VAR buffer: SHARED RECORD
        pool: ARRAY [0..n-1] OF item;
        count, in, out: integer;
    END;

The producer process inserts a new item nextp into the buffer by executing:

    REGION buffer WHEN count < n DO
    BEGIN
        pool[in] := nextp;
        in := (in + 1) MOD n;
        count := count + 1;
    END;

The consumer process removes an item from the shared buffer and puts it in nextc by executing:

    REGION buffer WHEN count > 0 DO
    BEGIN
        nextc := pool[out];
        out := (out + 1) MOD n;
        count := count - 1;
    END;

However, the CLASS concept alone cannot guarantee that such sequences will be observed:

* A process might operate on the file without first gaining access permission to it;
* A process might never release the file once it has been granted access to it;
* A process might attempt to release a file that it never acquired;
* A process might request the same file twice.

Note that we have now encountered difficulties that are similar in nature to those that motivated us to develop the critical region construct in the first place. Previously, we had to worry about the correct use of semaphores. Now we have to worry about the correct use of higher-level programmer-defined operations, with which the compiler can no longer assist us.
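For comparison, a minimal sketch of how REGION buffer WHEN B DO S can be expressed today with a mutex plus a condition variable in C++ (the bounded buffer mirrors the example above; the names are our own):

    #include <condition_variable>
    #include <mutex>

    const int n = 8;
    int pool[n];
    int count = 0, in_pos = 0, out_pos = 0;
    std::mutex buffer_mutex;            // the implicit region lock
    std::condition_variable changed;    // re-evaluates the WHEN condition

    void produce(int nextp)             // REGION buffer WHEN count < n DO ...
    {
        std::unique_lock<std::mutex> region(buffer_mutex);
        changed.wait(region, []{ return count < n; }); // delay until B is true
        pool[in_pos] = nextp;
        in_pos = (in_pos + 1) % n;
        ++count;
        changed.notify_all();           // some waiter's B may now hold
    }

    int consume()                       // REGION buffer WHEN count > 0 DO ...
    {
        std::unique_lock<std::mutex> region(buffer_mutex);
        changed.wait(region, []{ return count > 0; });
        int nextc = pool[out_pos];
        out_pos = (out_pos + 1) % n;
        --count;
        changed.notify_all();
        return nextc;
    }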


What is syncronization in java?

Synchronization is a process whereby a mutually-exclusive (mutex) lock is placed on a particular object for the duration of the execution of a block of code. It cannot be applied to primitives (double, int, etc.).

Synchronization ensures that only one thread has access to the synchronized object at a time, such as to prevent two threads from modifying a variable, or to prevent the contents of a collection being changed while it is being iterated over. Whole methods and arbitrary individual parts of one may be declared as synchronized.

An example of a synchronized method is:

    public synchronized void setBalance(int amount) { ... }

which effectively locks the object to which that method belongs (this), so it may be modified by only one thread at a time.

An example of a synchronized block, which requires exclusive access to the object act, is:

    public void addMoney(Account act, int amount) {
        // ... non-critical code can go here ...
        synchronized (act) {
            // it is imperative that nothing else is able to change
            // the balance in this person's account while we do this!
            act.setBalance(act.getBalance() + amount);
        }
        // ... more non-critical code can go here ...
    }

In both cases, while either object is locked, any additional threads which try to call either the setBalance or addMoney methods will block, meaning they will 'freeze' until they can gain exclusive access. Any code before or after the synchronized section can run as normal before and after the synchronized section completes: the block happens when a synchronized code section is encountered and it is not possible to get exclusive access to the target object.

Synchronization in Java is very expensive (in terms of time) and can easily become a performance bottleneck. However, it is sometimes possible to overcome such performance loss simply by moving superfluous synchronization to another place. In a situation where a data structure is iterated over continually but rarely changed, it would be far slower to synchronize MyDataStructure.getData(), resulting in continual, largely unnecessary locking/unlocking, than it would be simply to synchronize the add/remove element methods.


What is OLTP database?

Databases tend to get split up into a variety of different categories based on their application and requirements. All of these different categories naturally get nifty buzzwords to help classify them and make distinctions in features more apparent. The most popular buzzword (well, acronym anyway) is OLTP, or Online Transaction Processing. Other classifications include Decision Support Systems (DSS), data warehouses, data marts, etc.

OLTP databases, as the name implies, handle real-time transactions, which inherently have some special requirements. If you're running a store, for instance, you need to ensure that as people order products they are properly and efficiently updating the inventory tables while they are updating the purchases tables, while they're updating the customer tables, and so on and so forth. OLTP databases must be atomic in nature (an entire transaction either succeeds or fails; there is no middle ground), consistent (each transaction leaves the affected data in a consistent and correct state), isolated (no transaction affects the states of other transactions), and durable (changes resulting from committed transactions are persistent). All of this can be a fairly tall order but is essential to running a successful OLTP database.

Because OLTP databases tend to be the real front-line warriors, as far as databases go, they need to be extremely robust and scalable to meet needs as they grow. Whereas an undersized DSS database might force you to go to lunch early, an undersized OLTP database will cost you customers. Nobody is going to order books from an online bookstore if the OLTP database can't update their shopping cart in less than 15 seconds.

The OLTP feature you tend to hear about most often is "row-level locking", in which a given record in a table can be locked from updates by any other process until the transaction on that record is complete. This is akin to mutex locks in POSIX threading. In fact, OLTP shares a number of the same problems programmers face in concurrent programming. Just as you'll find anywhere, when you've got a bunch of different persons or processes all grabbing for the same thing at the same time (or at least the potential for that to occur), you're going to run into problems, and raw performance (getting your hands in and out as quickly as possible) is generally one of the solutions.

Several other factors come into play with OLTP databases, and the Oracle10g documentation library even has a whole section dedicated just to OLTP. Find more information in the Oracle10g docs.


When a user-level thread executes a system call, not only is that thread blocked, but all the threads within the process are blocked. Why is that so?

A system call is actually a transition from user mode to kernel space to carry out a required operation. For example, the write() function resides in kernel space and can be accessed only through a corresponding wrapper function implemented in user space. In the case of the Windows OS, the Win32 API is the implementation of such wrapper functions, which make the actual system calls from user mode.

A kernel thread is created, scheduled and destroyed by the kernel, whereas a library thread (user thread) is created, scheduled and destroyed by a user-level library, such as a special threading library. The kernel knows nothing about library threads, since they live within a process's boundaries and are bound tightly to its life cycle. A process is not the primary execution unit in all operating systems: when a process is made by the kernel, a new kernel thread is created and attached to that process. So the library threads created in user mode by a user-mode library must share the time slices given to that kernel thread by the scheduler during the lifetime of the process. A process thus effectively has one kernel thread, and all the library threads have to share that kernel thread's cycles. Hence, when a library thread makes a blocking call, all the threads within the process are blocked, because the process has only one kernel thread assigned to it and all the others are trying to make use of it.

To prevent other threads from blocking, either use library threads that make use of kernel threads, or create kernel threads directly, e.g. with the CreateThread() Win32 API function; in that case, the synchronization mechanism must be provided by the programmer using events, signals, mutexes, semaphores, etc.

Sun, the BSD Unix flavours, Windows and others follow the same POSIX-style threading architecture in their systems. In Linux, however, a thread is a process (that's part of why Linux is so strong in server systems). Control is left to programmers to create a POSIX-style threading model via the clone(2) system call, so address space, data, etc. can be shared by lightweight processes easily. When a Linux kernel thread (child process) crashes, it won't affect the rest of the threads belonging to the parent process; this is just the opposite of other operating systems, where a crashing thread will destroy all of the threads in the process. NPTL is a great threading library that was implemented using the Linux clone(2) system call. Linux also has another type of kernel thread that lives only in kernel space and can be utilized by kernel code such as modules.

Because of all this, user threads can't run in parallel on different CPUs; however, they are portable. Kernel threads can be scheduled by the kernel to run on an SMP system. Hope this helps. hsaq19@ TH Algan


Discuss in detail the process management, memory management, I/O and file management, and security and protection in the Windows Vista operating system?

Each process in the Windows Vista operating system contains its own independent virtual address space with both code and data, protected from other processes. Each process, in turn, contains one or more independently executing threads. A thread running within a process can create new threads, create new independent processes, and manage communication and synchronization between the objects. By creating and managing processes, applications can have multiple, concurrent tasks processing files, performing computations, or communicating with other networked systems. It is even possible to exploit multiple processors to speed processing. The following sections explain the basics of process management and also introduce the basic synchronization operations.

• Job: collection of processes that share quotas and limits
• Process: container for holding resources
• Thread: entity scheduled by the kernel
• Fiber: lightweight thread managed entirely in user space

Interprocess Communication
Threads can communicate in many ways, including: pipes, named pipes, mailslots, sockets, remote procedure calls, and shared files.

Semaphores
A semaphore is created using the 'CreateSemaphore' API function. Semaphores are kernel objects and thus have security descriptors and handles. The handle for a semaphore can be duplicated, and as a result multiple processes can be synchronised on the same semaphore. Calls for up (ReleaseSemaphore) and down (WaitForSingleObject) are present. A calling thread can eventually be released via a timeout on WaitForSingleObject, even if the semaphore remains at 0.

Mutexes
Mutexes are kernel objects used for synchronization and are simpler than semaphores, as they do not have counters. They are locks, with API functions for locking (WaitForSingleObject) and unlocking (ReleaseMutex). Like semaphore handles, mutex handles can be duplicated.

Critical Sections (Critical Regions)
This is the third synchronization mechanism, and it is similar to mutexes. It is pertinent to note that critical sections are not kernel objects; they do not have handles or security descriptors and cannot be passed between processes. Locking and unlocking is done using EnterCriticalSection and LeaveCriticalSection, respectively. As these API functions are performed initially in user space and only make kernel calls when blocking is needed, they are faster than mutexes.

Events
This synchronization mechanism uses kernel objects called events, of which there are two types: manual-reset events and auto-reset events.

The number of Win32 API calls dealing with processes, threads, and fibers is nearly 100. Windows Vista knows nothing about fibers; fibers are implemented in user space. As a result, the CreateFiber call does its work entirely in user space without making system calls.

Scheduling
There is no central scheduling thread in the Windows Vista operating system. Instead, when a thread cannot run any more, the thread moves to kernel mode and runs the scheduler itself to see which thread to switch to. Concurrency control is reached through the following conditions:

• The thread blocks on a semaphore, mutex, event, I/O, etc.: in this situation the thread is already running in kernel mode to carry out the operation on the dispatcher or I/O object. It cannot possibly continue, so it must save its own context, run the scheduler to pick its successor, and load that thread's context to start it.
• It signals an object: in this situation, the thread is running in kernel mode and, after signaling some object, it can continue, as signaling an object never blocks. The thread runs the scheduler to verify whether the result of its action has created a higher-priority thread. If so, a thread switch occurs, because Windows Vista is fully pre-emptive.
• The running thread's quantum expires: in this case, a trap to kernel mode occurs, and the thread executes the scheduler to see who runs next. The same thread may be selected again, in which case it gets a new quantum and continues running; otherwise a thread switch takes place.

MEMORY MANAGEMENT
Windows Vista has an advanced virtual memory management system. It provides a number of Win32 functions for using it, and a part of the executive plus six dedicated kernel threads for managing it. In Windows 2000, each user process has its own virtual address space, which is 32 bits long (or 4 GB of virtual address space). The lower 2 GB, minus approximately 256 MB, are reserved for the process's code and data; the upper 2 GB map onto kernel memory in a protected way. The virtual address space is demand-paged, with a fixed page size.

INPUT/OUTPUT IN WINDOWS VISTA
The goal of the Windows Vista I/O system is to provide a framework for efficiently handling various I/O devices. These include keyboards, mice, touch pads, joysticks, scanners, still cameras, television cameras, bar code readers, microphones, monitors, printers, plotters, beamers, CD recorders, sound cards, clocks, networks, telephones, and camcorders.

FILE SYSTEM MANAGEMENT
Windows Vista supports several file systems, such as FAT-16, FAT-32, and NTFS. It also supports read-only file systems for CD-ROMs and DVDs. It is possible to have access to multiple file-system types on the same running system.

New Technology File System (NTFS)
NTFS stands for New Technology File System. Microsoft created NTFS to compensate for the features it felt FAT was lacking. These features include increased fault tolerance and enhanced security. Individual file names in NTFS are limited to 255 characters; full paths are limited to 32,767 characters. File names are in Unicode, but the Win32 API does not fully support case-sensitivity for file names. An NTFS file consists of multiple attributes, each of which is represented by a stream of bytes.

Security
NTFS has many security options. You can grant various permissions to directories and to individual files. These permissions protect files and directories locally and remotely. The NT operating system was designed to meet the U.S. Department of Defense's C2 (Orange Book) security requirements. Windows Vista was not designed for C2 compliance, but it inherits many security properties from NT, including: (1) secure login with an anti-spoofing mechanism; (2) discretionary access controls; (3) privileged access controls; (4) address-space protection per process; (5) new pages must be zeroed before being mapped in; (6) security auditing.

File Compression
Another advantage of NTFS is native support for file compression. NTFS compression offers you the chance to compress individual files and folders of your choice.
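A minimal sketch of the mutex usage described above, using the Win32 calls the answer names (error handling omitted for brevity):

    #include <windows.h>

    int main()
    {
        // Kernel mutex object: it has a handle (and optionally a name),
        // so it can also synchronize threads in different processes.
        HANDLE hMutex = CreateMutex(NULL, FALSE, NULL);

        WaitForSingleObject(hMutex, INFINITE);  // lock (down)
        // ... critical section: touch the shared resource here ...
        ReleaseMutex(hMutex);                   // unlock (up)

        CloseHandle(hMutex);
        return 0;
    }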


Describe briefly six window functions usually called while creating a window?

WinMain is the entry-point function; RegisterClass registers the main window class; CreateWindowEx creates the window; CreateMutex creates a mutex; GetLastError reports any error during creation; ShowWindow sets the specified window's show state and brings it to the front.

Hi. Actually, the 4th and 5th functions above are not mandatory. If you check a typical WinMain function from a Win32 program in MSDN, Schildt's book or Jim Conger's book, you can find the functions used in it. I can list the following functions:

1. int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
WinMain is the program's entry point. It typically registers the window class, creates the main application window, makes it visible, and runs the message loop.

2. ATOM RegisterClass(CONST WNDCLASS* lpwc)
RegisterClass creates new window classes that can be used to create windows. The same class can be used to create as many windows as your application needs. It is used primarily in WinMain to create a class for the main window of an application. You can also use RegisterClass to create custom control classes and other windows.

3. HWND CreateWindow(LPCTSTR lpszClassName, LPCTSTR lpszWindowName, DWORD dwStyle, int x, int y, int nWidth, int nHeight, HWND hwndParent, HMENU hmenu, HANDLE hinst, LPVOID lpvParam)
CreateWindow is used to create a window based on one of the predefined classes or a class that was created with RegisterClass. The size, location, and style of the window are determined by the parameters passed to CreateWindow. It is used in the WinMain function to create the main application window, and within the application for creating child windows and user interface controls.

4. BOOL ShowWindow(HWND hwnd, int nCmdShow)
ShowWindow sets a window's show state. The show state determines how a window is displayed on the screen. Usually, after a window is created, the ShowWindow function is called to make it visible. ShowWindow can also be used to minimize or maximize a window. ShowWindow does not make sure the window is in the foreground; use the SetWindowPos or SetActiveWindow function to bring a window to the foreground.

5. BOOL GetMessage(LPMSG lpmsg, HWND hwnd, UINT uMsgFilterMin, UINT uMsgFilterMax)
GetMessage retrieves the next waiting message from the calling thread's message queue into the MSG structure pointed to by lpmsg. Messages are not retrieved for other threads or applications.

6. LONG DispatchMessage(CONST MSG* lpmsg)
DispatchMessage sends messages from within an application's message loop to the appropriate window procedure for processing. You typically use DispatchMessage within an application's message loop after completing any necessary translation.

Yes, this question is asked frequently in the UGC-NET examination in India.
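A minimal sketch tying these calls together in the usual order (the class and window names are illustrative, and error checks are omitted):

    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProc(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow)
    {
        WNDCLASS wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInstance;
        wc.lpszClassName = TEXT("DemoClass");
        RegisterClass(&wc);                          // 2. register the class

        HWND hwnd = CreateWindow(TEXT("DemoClass"),  // 3. create the window
                                 TEXT("Demo"), WS_OVERLAPPEDWINDOW,
                                 CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
                                 NULL, NULL, hInstance, NULL);
        ShowWindow(hwnd, nCmdShow);                  // 4. make it visible

        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0)     // 5. pump messages
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);                   // 6. route to WndProc
        }
        return (int) msg.wParam;
    }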


Discuss methods for handling deadlocks?

There are four general approaches:

* Ignore the problem altogether (the "ostrich algorithm"): deadlock may occur very infrequently, and the cost of detection or prevention may not be worth it.
* Detection and recovery.
* Avoidance by careful resource allocation.
* Prevention by structurally negating one of the four necessary conditions.

Deadlock Prevention

The difference from avoidance is that here the system itself is built in such a way that there are no deadlocks: make sure at least one of the four deadlock conditions is never satisfied. This may, however, be even more conservative than a deadlock avoidance strategy.

* Attacking the mutual exclusion condition: never grant exclusive access. This may not be possible for several resources.
* Attacking preemption: not something you want to do.
* Attacking the hold-and-wait condition: make a process hold at most one resource at a time, or make all the requests at the beginning (an all-or-nothing policy; if refused, retry), e.g. two-phase locking.
* Attacking circular wait: order all the resources, and make sure that requests are issued in the correct order so that there are no cycles present in the resource graph. Resources are numbered 1 ... n and can be requested only in increasing order; i.e., you cannot request a resource whose number is less than any you may be holding.

Deadlock Avoidance

Avoid actions that may lead to a deadlock. Think of it as a state machine moving from one state to another as each instruction is executed.

A safe state is one where:

* it is not a deadlocked state;
* there is some sequence by which all requests can be satisfied.

To avoid deadlocks, we try to make only those transitions that take us from one safe state to another; we avoid transitions to an unsafe state (a state that is not deadlocked, but is not safe). For example, with a total of 12 instances of a resource and entries of the form (Max, Allocated, Still Needs):

    P0 (10, 5, 5)   P1 (4, 2, 2)   P2 (9, 2, 7)   Free = 3   -- Safe

The sequence P1, P0, P2 is a reducible sequence, so the first state is safe. What if P2 requests 1 more and is allocated 1 more instance? That results in an unsafe state, so do not allow P2's request to be satisfied.

Banker's Algorithm for Deadlock Avoidance

When a request is made, check whether, after the request is satisfied, there is (at least) one sequence of moves that can satisfy all the remaining requests, i.e. whether the new state is safe. If so, satisfy the request; else make the request wait. How do you find whether a state is safe? With n processes and m resources, using arrays Max[n, m], Allocated[n, m], Still_Needs[n, m], Available[m], Temp[m] and Done[n] (a C++ sketch of this check appears at the end of this answer):

    Temp[j] = Available[j] for all j
    loop:
        find an i such that
            a) Done[i] = FALSE
            b) Still_Needs[i, j] <= Temp[j] for all j
        if such an i exists:
            Temp[j] += Allocated[i, j] for all j
            Done[i] = TRUE
            repeat the loop
        else if Done[i] = TRUE for all i: the state is safe
        else: the state is unsafe

Detection and Recovery

Is there a deadlock currently?

With one resource of each type (1 printer, 1 plotter, 1 terminal, etc.): check whether there is a cycle in the resource graph. For each node N in the graph, do a DFS (depth-first search) of the graph with N as the root; if in the DFS you come back to a node already traversed, then there is a cycle.

With multiple resources of each type, m resources and n processes:

    Max resources in existence      E = [E1, E2, E3, ..., Em]
    Current allocation matrix       C (n x m)
    Resources currently available   A = [A1, A2, ..., Am]
    Request matrix                  R (n x m)
    Invariant: Sum over i of C[i, j] + A[j] = E[j]

Overview of the deadlock detection algorithm: check the R matrix and find a row i such that R[i] <= A (componentwise). If such a process is found, add C[i] to A and remove process i from the system. Keep doing this until either you have removed all processes, or you cannot remove any other process. Whatever remains is deadlocked. The basic idea is that there is at least one execution order which will un-deadlock the system, if any exists.

Recovery:

* Through preemption.
* Rollback: keep checkpointing periodically; when a deadlock is detected, see which resource is needed and take that resource away from the process currently holding it. Later on, you can restart this process from a checkpointed state, where it may need to reacquire the resource.
* Killing processes: where possible, kill a process that can be rerun from the beginning without ill effects.
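A minimal sketch of the safety check at the heart of the Banker's algorithm, in C++; the matrices mirror the Max/Allocated/Still_Needs arrays above, and the sample data is the 12-instance example (which the check reports as safe):

    #include <iostream>
    #include <vector>

    // Returns true if the state is safe: some order exists in which
    // every process can obtain its remaining need and then finish.
    bool is_safe(const std::vector<std::vector<int>>& still_needs,
                 const std::vector<std::vector<int>>& allocated,
                 std::vector<int> temp)              // copy of Available
    {
        const size_t n = still_needs.size();
        const size_t m = temp.size();
        std::vector<bool> done(n, false);

        for (;;) {
            bool found = false;
            for (size_t i = 0; i != n && !found; ++i) {
                if (done[i]) continue;
                bool fits = true;
                for (size_t j = 0; j != m; ++j)
                    if (still_needs[i][j] > temp[j]) { fits = false; break; }
                if (fits) {                       // process i can run to completion
                    for (size_t j = 0; j != m; ++j)
                        temp[j] += allocated[i][j];  // it returns its resources
                    done[i] = true;
                    found = true;
                }
            }
            if (!found) break;                    // no further process can finish
        }
        for (bool d : done) if (!d) return false;
        return true;
    }

    int main()
    {
        // The example above: one resource type, 12 instances, Free = 3.
        std::vector<std::vector<int>> still_needs = {{5}, {2}, {7}};
        std::vector<std::vector<int>> allocated   = {{5}, {2}, {2}};
        std::cout << (is_safe(still_needs, allocated, {3}) ? "safe" : "unsafe")
                  << '\n';                        // prints "safe"
    }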