Last week on Archipelago
1. Since a non-blocking receive is rare, what is the purpose of a blocking receive in asynchronous message passing?
Receive should be blocking; non-blocking receive is a feature for experienced programmers only.
The receiver blocks because it needs to wait for the message.
UNIX pipes are a primitive form of message passing.
You cannot proceed without the data you need, so a blocking receive must exist.
2. Is it possible for threads to use their own files and not share them?
E.g., each thread runs a different part of the code and is responsible for writing to a different file.
Different threads can open different files and work on them privately.
However, one thread can still use a file opened by another, since threads share the file descriptor table.
unshare() is a Linux system call that lets a thread stop sharing parts of its execution context (such as the file descriptor table), so other threads cannot use its open files.
3. Does calling pthread_join in a for loop block on the joins in front?
Joining threads one by one in a loop does not add blocking overall: a join on a thread that has already terminated returns immediately, so the loop only waits as long as the slowest thread.
4. Does multithreading run concurrently in the hardware context, or is it sequential processing at a smaller scale?
Every thread has a logical copy of the hardware context; the physical hardware context is shared in a serial (time-sliced) fashion.
For multiple threads/processes to run truly in parallel, we need parallelism in the hardware (i.e., multiple cores), with each thread running on a different core.
5. What are the advantages and disadvantages of simultaneous multithreading (SMT) versus a multicore system for running a single process with multiple threads?
SMT = running multiple software threads concurrently on a single hardware core.
That core must have at least two physical copies of the hardware context:
- General purpose registers (GPRs)
- Special registers
We can run two threads on the same core if we have these; we just need to map each thread onto a context.
What is the difference?
The two threads will still share the ALU and caches.
These threads have independent hardware contexts, but they compete for functional units like the ALU.
Race Condition
Problems with concurrent execution
When two processes try to:
- Modify a shared memory location and also use it
- This can cause synchronization problems
The same applies to threads.
The execution of a single sequential process is deterministic:
- Repeated execution gives the same results.
Execution of concurrent processes may be non-deterministic:
- The outcome depends on who accesses the shared data first.
- This is a race condition.
Problems arise when a context switch happens, or when processes running on different cores try to access the same memory location.
One problem occurs when the compiler translates an addition on a variable,
i.e., x = x + 100
into three instructions: load, add, and store.
What if the hardware could just add to memory directly?
It depends on the hardware.
On a single-core system, such a statement would run without being intercepted (the hardware executes one instruction atomically).
On a multicore system, if core 1 runs P1 and core 2 runs P2, core 1 may load x from memory while core 2 also loads it.
At some point, both have to store.
There can be interleaving between the execution phases even within a single instruction, and this interleaving cannot be helped.
So the problem exists even if the hardware supports an instruction that adds to memory directly: the memory accesses may still interleave.
The problem is worst with statements that cannot be executed in a single instruction; on multicore it will definitely happen.
Solution
- Incorrect execution is due to unsynchronized access to a shared, modified resource.
Need: synchronization to control the interleaving of accesses:
- Allow all correct interleavings
- Disallow all incorrect ones
Imagine a process with shared variables and around 100 possible interleavings.
Some restrictions may lose some efficiency.
e.g., X = X + 1000
One process executes load and store, and so does the other.
The only two correct interleavings are P1 entirely before P2, or P2 entirely before P1.
But what if we force P1 to always go first? Then P2 must wait for P1 even when P2 arrives first. That is a correct solution, but it is not efficient.
The abstraction of a critical section ensures there is no interleaving within the code of the critical section (only one process is allowed to run that part of the code). This simple abstraction lets us solve the synchronization problem.
Critical Section
A code segment that only one process is allowed to execute in at a time.
We should keep the critical section short.
e.g.
// Normal code
Enter CS
// Critical work
x = x + 1000
Exit CS
// Normal code
Properties of a correct implementation
- Mutual exclusion
If process P is executing in the CS, all other processes are prevented from entering the CS.
- Progress
If no process is in the CS, one of the waiting processes should be granted access.
- Bounded wait
A waiting process will eventually get its chance; there is a limit on how many times other processes can enter ahead of it.
- Independence
A process not executing in the CS should never block other processes.
Symptoms of an incorrect implementation
- Incorrect output/behavior
Usually due to multiple processes being in the CS at once.
- Deadlock
All processes are waiting for each other; all processes are blocked.
- Livelock
Processes are not blocked and can change state, but progress is not guaranteed.
Livelock usually happens when we are trying to avoid deadlock.
- Starvation
Some processes never get a chance to enter the CS.
Implementation of critical sections
Low-level (hardware) solution
The problem with the naive lock-variable attempt (shown below under the high-level solutions) is that another process can jump in between the read and the write of the lock variable.
What if we fix this? We ensure there is a single instruction that both reads and writes the lock variable:
TestAndSet Register, MemoryLocation
This loads the value from MemoryLocation into Register, then stores a 1 into MemoryLocation, all as a single atomic machine operation.
No one can intercept between the read and the write.
On multicore, it is possible for two cores to attempt TestAndSet at the same time; the hardware must arbitrate so that the operations do not interleave.
// How this is done is not covered in this course
void EnterCS(int *lock) {
    while (TestAndSet(lock) == 1);  // busy wait
}
void ExitCS(int *lock) {
    *lock = 0;
}
Only one process will read 0 from the lock and set it; the others will read 1 and keep looping.
- There is mutual exclusion
Only one process will read 0, and that must be the first one to flip the lock and enter the CS.
- If a process P1 wants to enter and the CS is free, it can just enter
Progress and independence are satisfied.
- But there is no bounded wait
This can cause starvation: one process may keep re-acquiring the lock in a loop, while a waiting process keeps waking up only to find the lock already taken.
The implementation works, but it employs busy waiting (wastes CPU).
Other atomic primitives:
- Compare-and-exchange
- Atomic swap
- Load-linked/store-conditional
- Test-and-test-and-set
High-level programming language solution
Attempt 1: use a boolean lock variable, set to 1 once someone is inside.
while (lock != 0);  // wait
lock = 1;           // take the lock
// critical section
lock = 0;           // release
This solution does not guarantee mutual exclusion: another process can read the value of lock after we read it but before we set it.
Fixed attempt 1:
Disable interrupts on entering the CS and re-enable them on leaving.
- This only works on a single core.
- The process must have the privilege to disable and enable interrupts.
However, disabling interrupts is bad practice: it affects other, unrelated parts of the system.
Fixed attempt 2:
In attempt 1, each process both reads and writes the shared variable before entering the CS, but another process can jump in between the read and the write. Let us fix exactly that part.
We add a shared variable called turn, which is 0 or 1.
turn indicates which process is allowed to enter the CS next.
turn acts like a switch that lets only one process through at a time, so the two processes cannot get confused about who enters the CS.
However, turn forces strict alternation: if it is P1's turn, P2 must wait even if P2 arrived first.
If turn == 0, process P0 can enter; if turn == 1, it cannot:
while (turn != 0);  // P0 waits here
Once the other process sets turn back to 0, P0 can proceed.
This solution guarantees mutual exclusion, but it overly restricts execution:
If a process comes and finds it is not its turn, it keeps waiting. If the other process is stuck doing something unrelated, this is not efficient.
This solution also violates the independence property:
P1 cannot enter the CS because the other process is off executing some unrelated code,
and there is no guarantee that P1 will ever get into the CS.
Fixed Attempt 3:
We will use an array of two flags, want[2]:
want[0] = 1 means P0 wants to enter the CS
want[1] = 1 means P1 wants to enter the CS
For process P0:
1. Indicate intent to enter the CS (want[0] = 1)
2. Wait while the other process wants to enter: while (want[1]);
3. Enter the CS
4. On finishing, reset the flag (want[0] = 0)
- There is mutual exclusion
A process enters only when the other's flag is down, so both can never be inside together.
- There is a possibility that both processes wait for each other forever (livelock)
Each is looping, waiting for the other to go first, and neither can progress.
Progress is not guaranteed.
- Independence is guaranteed
A process waits only if the other one wants to enter.
- However, the two processes may starve each other.
Fixed attempt 4:
- Keep want[2]
- Add a shared variable turn
1. Each process uses want the same way (to show that it wants to enter the CS)
2. Each process then lets the other go first by setting turn to the ID of the other process
3. Wait only if the other process wants to enter the CS and it is the other's turn
4. On exit, reset the flag (want[0] = 0 for P0)
- There is mutual exclusion
Case 1: one process enters before the other has executed the turn assignment.
Case 2: both want to enter together; turn breaks the tie.
- Progress is satisfied
A process waits only if the other one wants to enter the CS.
- Bounded wait is satisfied
The two processes will alternate if both keep wanting in.
This solution is called Peterson's algorithm.
Disadvantages:
- Busy waiting
The waiting process spins doing nothing, which is not efficient.
We want to be able to block waiting processes and unblock them when it is their turn.
- Low level
These basic primitive constructs work, but they are:
- Not general
- Complex to use correctly
High-level OS abstraction
What if we use the OS to ensure the critical section?
Semaphores:
- A generalized synchronization mechanism
- Only the behavior is specified -> there can be different implementations
Provides:
- A way to block a number of processes (put them to sleep)
- A way to unblock/wake up one or more sleeping processes
Semaphore operations:
- wait(S)
If S <= 0, the caller blocks (goes to sleep).
Decrements S.
- signal(S)
Increments S.
Wakes up one or more sleeping processes.
signal never blocks the process that calls it.
(Some implementations reschedule the caller after signal.)
A semaphore is an integer plus a list of blocked processes.
Example: the integer is initialized to 1.
After P1 calls wait, P1 enters the CS and decrements the integer to 0.
If P2 then tries to enter the CS, it sees the integer is 0, blocks, and is added to the list.
Invariant: S_current = S_initial + #signal(S) - #wait(S)
General and binary semaphores
General (counting) semaphores:
- Increment and decrement by 1, over any non-negative value
- Provided for convenience
Binary semaphores:
- Only allow the values 0 and 1
- This is sufficient in principle
- Often have a special implementation
- Also called a mutex
A binary semaphore can be used to implement a general one, and vice versa.
Semaphore example:
- Binary semaphore S = 1
wait(S)
// CS
signal(S)
S can only be 0 or 1.
This is a mutex:
1. The first process executes wait and decrements the semaphore to 0.
2. The second sees the semaphore is 0, is blocked, and is added to the list.
3. When the first process finishes, it increments the semaphore and wakes one process.
Mutex: proof
N_CS = number of processes in the CS = number that completed wait() but not signal()
N_CS = #wait - #signal
S_init = 1
S_current = 1 + #signal(S) - #wait(S) = 1 - N_CS
Since S_current >= 0, we get N_CS <= 1: at most one process is in the CS.
Deadlock:
- Deadlock means all processes are stuck at wait(S), so no process is in the critical section.
That would mean S_current = 0 and N_CS = 0.
// This is not possible (unless the semaphore is misused), because
S_current + N_CS = 1
Examples
Starvation:
Yes and no; it depends on the signal operation.
If signal selects the next process in FIFO order, there is no starvation; otherwise there might be.
1. Suppose P1 is blocked at wait(S).
2. P1 is in the list of blocked processes.
3. P2 calls signal and exits, incrementing S.
4. Depending on how the next process is selected, it is possible that P4 is woken up and runs before P1.
Semaphores as a general synchronization tool
Suppose two processes must execute such that section A in P1 runs before section B in P2:
- Use a semaphore (initialized to 0) to send a signal only after A is done.
- Put a wait in front of section B.
// This ensures that B always waits while A is not yet done.
Semaphores are the most powerful of these synchronization primitives.
Common alternative: condition variables
- If some variable needs to have a certain value before execution, the task can wait on that variable.
- There is the ability to broadcast to other processes that are waiting for the value to change.
- Can choose which process to run.
Classical synchronization problems (3/5/20)
Crowded Nightclub
- A highly popular nightclub has limited space and strict regulations: no more than N customers can be present in the club at any time.
Semaphore sem = N
void client() {
    wait(sem);   // decrements the count of free spots
    dance();
    signal(sem);
}
A client blocks until someone leaves. This is where general semaphores are useful, because we can let multiple processes enter at once.
If we build the counting semaphore out of binary semaphores, the leave operation looks like:
wait(mutex)
allowed++
if (allowed <= 0)
    signal(queue)
signal(mutex)
allowed counts the free spots (it goes negative when clients are waiting outside); if someone is waiting, we signal(queue) to let one in.
Potential problem with the binary-semaphore version:
- It is possible for the program to block for a long time.
- Suppose multiple clients are inside and a couple of clients are waiting in the queue, so the value of the semaphore is 0. Now some clients leave, and each sends a signal.
But what if a context switch happens before a waiting process has time to decrement the semaphore after a signal?
Since the semaphore is binary, it cannot increase beyond 1, so two signals in a row might be lost.
Conclusion:
Given the most primitive semaphore, we can build a more general one, with some pros and cons.
Producer-Consumer
- Processes share a bounded buffer of size K
- Producers produce items to insert into the buffer
- They may only do so when the buffer is not full (< K items)
- Consumers remove items from the buffer
- Only when the buffer is not empty
There is a concurrency issue when two consumers try to pull from the same slot, or two producers try to put into the same slot of the buffer.
Solving with BUSY WAITING
Producer:
- Loop:
if (count < K) {
    buffer[in] = item;
    in = (in + 1) % K;
    count++;
}
count keeps track of how many items we have.
Consumer:
if (count > 0) {
    item = buffer[out];
    out = (out + 1) % K;
    count--;
}
With busy waiting, every producer and consumer blocks on the same mutex around these checks.
Workability: it is correct, but the performance is poor.
If there is only a consumer, it will spin forever, since nothing is produced.
Fix: use a canConsume flag and a canProduce flag.
Let the consumer loop in while(!canConsume) when nothing has been produced:
while (canProduce == false);
wait(mutex);
Problem: when a consumer is waiting for the buffer to become non-empty, all the consumers sit spinning in this while loop.
This burns power for nothing; those cycles could be used by other processes that really need them.
Second solution: BLOCKING
Use semaphores to block the producer when the buffer is full and the consumer when the buffer is empty. We want the producer to signal the consumer when a new item is placed in the buffer.
- A mutex that guards the shared variables (buffer, count)
- One semaphore (notFull) to indicate whether the buffer has space (whether a producer can work)
- Another semaphore (notEmpty) to indicate whether a consumer can work (the buffer is not empty)
When the producer produces something, it signals the consumer to go. The consumer waits as long as the buffer is empty. When an item is consumed, the consumer signals the producer.
Initial values:
- notFull semaphore: K (the length of the buffer)
- notEmpty semaphore: 0
- Binary lock: 1 (it is a lock)
Workability:
Yes, it works.
Producers and consumers never modify the shared structure concurrently, because the mutex guards it.
Homework: think about another implementation. There are two circular-buffer implementations, and the key difference is whether we need the variable count or not. Is this variable necessary?
-> No, it is not needed: the semaphores eliminate the need for count.
wait(notFull): puts a producer to sleep when the buffer is full
wait(notEmpty): puts a consumer to sleep when the buffer is empty
signal(notFull): a consumer wakes up one producer
signal(notEmpty): a producer wakes up one consumer
Solving with message passing
- Use message passing:
mQueue.send(item)
mQueue.receive(item)
receive blocks the consumer when there is nothing to receive.
For the sender, we want it to block when the queue is full.
Thus we use:
MessageQueue mQueue = new MessageQueue(K);
It is asynchronous: the sender normally does not block, as it can continuously produce the next item as long as there is space in the buffer; the producer can run up to K items ahead.
This is a non-blocking send.
- The system handles mutual exclusion: the OS ensures that messages are sent coherently, so we do not need to worry about synchronization ourselves.
Readers and Writers
- Processes share a data structure D
- Reader: retrieves information from D
- Writer: modifies information in D
Specification:
- Readers can read together
- Only one writer may be writing at a time
- Readers cannot read while a writer is writing
Sub-problems to solve:
- Detect when the room is empty and signal the writer
- Readers increment a counter when they enter and decrement it when they leave
- Readers wait if there is a writer
- A reader signals when it has finished reading
Mutexes:
- One mutex to ensure the bookkeeping variables are not edited concurrently
- Another mutex to ensure readers do not try to read while the file is being written
Workability:
The problem is that a writer never gets a chance if there is a continuous stream of readers:
readers will starve the writers.
Deadlocks
Dining Philosophers Problem
- 5 philosophers sit at a circular table with 5 chairs and 5 chopsticks
- When a philosopher becomes hungry, they pick up the chopsticks on their left and right
- A philosopher can only pick up one chopstick at a time
- They must hold both to eat
- When finished eating, they put both down
semaphore chopstick[5] = {1,1,1,1,1}
Solution 1:
- A philosopher grabs each chopstick by calling wait:
int left = i;
int right = (i + 1) % 5;
while (1) {
    wait(chopstick[right]);
    wait(chopstick[left]);
    eat();
    signal(chopstick[right]);
    signal(chopstick[left]);
    think();
}
But what if every philosopher picks up one chopstick and then waits for the other? There are no chopsticks left: that is a deadlock.
Solution 2: (related to avoidance and prevention below)
- Have a boolean check that both chopsticks are free before picking them up
- This check might not be atomic: someone might take a chopstick while another is checking
Solution 3:
- Wait on the left chopstick
- If the right chopstick is available, take it
- Else release the left
Both a livelock and a race condition can happen:
Livelock: everyone takes their left, tries to take the right, and instead of blocking, drops the left and tries again, all in lockstep.
Race: someone grabs the chopstick while another philosopher is still checking whether it is available.
Solution 4:
- while (TestAndSet(chopstick[left]) == 1);
- if (TestAndSet(chopstick[right]) == 0) ...
- This ensures no one can read a chopstick's state while it is being read/written
- This solves the race problem in Solution 3
However, livelock might still happen, just as in Solution 3.
Modelling
Resource types: memory space, I/O devices, locks
Each resource type has W_i identical instances
Each process utilizes a resource as follows:
- Request
- Use
- Release
Using a resource-allocation graph:
<slide 30>
For deadlock to happen, there must be a cycle and a limited number of resources.
No cycles -> no deadlocks.
If there are cycles:
- Every resource type has only one instance -> deadlock
- Multiple instances -> it depends (are the multi-instance resources involved in the cycle? Run a detection algorithm)
Necessary conditions
- Mutual exclusion: an instance of a resource can only be used by one process at a time.
If a resource is shareable, there is no reason for anything to block and enter deadlock.
- Hold and wait: a process holds at least one resource while waiting to acquire additional resources held by other processes.
- No preemption: a resource can be released only voluntarily by the process holding it, after that process has finished using it.
- Circular wait: there exists a set of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for P2, ..., Pn-1 is waiting for P0.
Prevention
Remove at least one of the four necessary conditions for deadlock.
Avoid mutual exclusion
Allow sharing of resources:
- Impossible for non-sharable resources (printers, data areas being written, chopsticks)
- Minimize it: do not take exclusive access until necessary
- Shareable: code sections, read-only data
Avoid hold and wait
Prevent a process from holding some resources while waiting for others:
1. Allow processes to use only one resource at a time
- Very restrictive
2. Preallocate all resources in advance
- Proceed only when all needed resources are available
- Low resource utilization and low system throughput
3. Use non-blocking primitives
- Check if the second resource is available without blocking
- Release the first chopstick if the second is unavailable
- Possible livelock and starvation
Allow preemption (break the no-preemption condition)
- Allow the system to preempt resources
- E.g., if a process holding some resources requests another resource that cannot be allocated to it immediately (or after some time), the system takes away all the resources it currently holds and restarts the process
This might cause livelock or starvation. We may want to restart the process when this happens, but not all processes are restartable.
Break circular wait
- Do not allow cycles to form:
- E.g., allow at most 4 philosophers to be hungry simultaneously
- General approach: create a total ordering of all resources and ensure every process requests resources strictly in that order
It is hard to find an ordering if there are a lot of resources.
Avoidance
Instead of restricting the processes, improve something else:
- Resources
- Scheduling order
This avoids restricting the system, but we must have some information about the processes.
Let processes execute as usual, but track their resource requests and releases. If a specific action can lead to a deadlock, do not execute it.
Dynamically examine the state of resource allocation to ensure that the system never goes into an unsafe state.
Note: an unsafe state does not guarantee that a deadlock happens; it only means a deadlock is possible.
Safe state and safe execution sequence
Safe state: a safe execution sequence exists.
- If a resource is not available, a process waits until some other process finishes.
- When that process finishes, the waiting process can get the resource, execute, and return its resources before terminating.
- Since each process terminates, this ensures all processes can finish.
One way to see it: use a semaphore to represent each available resource.
Give resources to a process whose remaining need can be satisfied by what is currently available.
This ensures that the process will be able to terminate and free its resources.
System in a safe state -> deadlocks can be avoided.
Unsafe state -> may or may not deadlock, depending on what the processes do.
Banker's Algorithm
Requires a few data structures:
n = number of processes
m = number of resource types
An Allocation matrix (n x m) shows how many instances of each resource type are currently allocated to each process.
1. Initialize values
Let Work and Finish be vectors of length m and n:
Work = Available
Allocation[i][j] = number of instances of resource type j allocated to process i
Finish[i] = false for i = 1, 2, ..., n
2. Find a process that has not finished yet and whose need can be satisfied with what is currently available:
Find an i such that both:
- Finish[i] == false
- Need[i] <= Work
If no such i exists, go to step 4.
3. Pretend process i runs to completion and releases its resources:
Work = Work + Allocation[i]
Finish[i] = true
Go to step 2.
4. If Finish[i] == true for all i, the system is in a safe state; otherwise it is unsafe.
This algorithm determines whether a particular state is safe.
A resource request may be granted only if the resulting state passes the algorithm.
This ensures that the system stays in a safe state before resources are allocated to a process.
However, this algorithm costs O(m * n^2) time, which is expensive; it is not practical, and nobody actually uses it.
Detection and Recovery
Allow the system to enter deadlock, detect it with a detection algorithm, and invoke a recovery algorithm after detection.
- Available: a vector of length m indicating the number of available instances of each resource type
- Allocation: an n x m matrix showing the number of instances of each resource type currently allocated to each process
- Request: an n x m matrix indicating the current requests of each process.
If Request[i][j] = k, then process i is requesting k more instances of resource type j.
Detection mechanism
Use a variant of the Banker's safety algorithm to check whether there is a deadlock:
if Finish[i] == false for some i, then the system is in a deadlocked state.
However, the algorithm needs O(m * n^2) operations to check for deadlock.
Deadlock recovery
There is no elegant solution:
- Kill deadlocked processes
Kill one, or kill gradually until the cycle is broken
Restart the killed processes
- Preempt resources from one process and give them to another
Roll back the victim process (if possible)
However, some processes cannot be restarted (e.g., those doing I/O)
Ostrich Method
Ignore the problem and pretend it never occurred.
Unix does this.
Livelock
A situation in which a set of processes is not blocked but cannot make progress.
This is less serious than a deadlock:
- Often the result of deadlock prevention/avoidance
- Can be resolved by chance :(