fork: copies the parent process; returns the child's PID in the parent and 0 in the child (parent ? pid_of_child : 0); sketch below
wait: the parent waits for a child to finish
exec: replaces the current process image with a new program
states: running, ready, blocked (waiting)
data structures: process list, struct proc
machine state: memory, registers
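A minimal sketch of the fork/exec/wait pattern; the program being exec'd ("ls -l") is just a placeholder:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 // copy the calling process
    if (pid < 0) {                      // fork failed
        perror("fork");
        exit(1);
    } else if (pid == 0) {              // child: fork returned 0
        char *args[] = {"ls", "-l", NULL};
        execvp(args[0], args);          // replace the child's image with "ls"
        perror("execvp");               // only reached if exec fails
        exit(1);
    } else {                            // parent: fork returned the child's PID
        int status;
        waitpid(pid, &status, 0);       // parent waits for the child
        printf("child %d exited\n", (int) pid);
    }
    return 0;
}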
limited direct execution
user mode, kernel mode, system call, trap (a system call contains a trap instruction)
context switch
cooperative approach: the OS regains control only when the process makes a system call or yields
non-cooperative approach: a timer interrupt
then the scheduler picks what runs next
T_turnaround = T_completion - T_arrival
T_response = T_firstrun - T_arrival
FIFO/FCFS: convoy effect (short jobs stuck behind a long one), bad turnaround
SJF: bad when jobs do not all arrive at the same time (a long job may already be running)
STCF/PSJF: preemptive SJF; preempt when a shorter job arrives
RR: time-slicing; good for response time, bad for turnaround; worked example below
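A quick worked example with example job lengths A=100, B=10, C=10, all arriving at t=0. FIFO runs A, B, C: average turnaround = (100 + 110 + 120) / 3 = 110. SJF runs B, C, A: average turnaround = (10 + 20 + 120) / 3 = 50. That gap is the convoy effect in numbers.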
MLFQ (multi-level feedback queue)
Rule 1: If Priority(A) > Priority(B), A runs (B doesn’t).
Rule 2: If Priority(A) = Priority(B), A & B run in RR.
Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue).
Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
lottery scheduling: each job holds tickets; pick a random winning number, then walk the job list keeping a running ticket counter until it passes the winner; that job runs
stride scheduling: deterministic version; a job's stride is inversely proportional to its tickets; always run the job with the lowest pass value, then add its stride to its pass (sketch below)
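A minimal stride-scheduling sketch; the ticket counts and the 10000 constant used to derive strides are assumed:

#include <stdio.h>

#define NJOBS 3

// Hypothetical job table: stride = 10000 / tickets, pass starts at 0.
struct job { const char *name; int tickets; int stride; int pass; };

int main(void) {
    struct job jobs[NJOBS] = {
        {"A", 100, 10000 / 100, 0},
        {"B",  50, 10000 /  50, 0},
        {"C", 250, 10000 / 250, 0},
    };

    for (int slice = 0; slice < 10; slice++) {
        // pick the job with the minimum pass value
        int min = 0;
        for (int i = 1; i < NJOBS; i++)
            if (jobs[i].pass < jobs[min].pass)
                min = i;
        printf("run %s (pass=%d)\n", jobs[min].name, jobs[min].pass);
        jobs[min].pass += jobs[min].stride;   // advance its pass by its stride
    }
    return 0;
}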
memory
every address generated by a user program is a virtual address
e.g. any address the program prints out is a virtual address
location of code  : 0x1095afe50
location of heap  : 0x1096008c0
location of stack : 0x7fff691aea64
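Output like the above comes from a tiny program in the style of OSTEP's va.c; the exact addresses vary from run to run (ASLR):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    printf("location of code  : %p\n", (void *) main);    // address of a function
    int *heap = malloc(100);
    printf("location of heap  : %p\n", (void *) heap);    // address returned by malloc
    int x = 3;
    printf("location of stack : %p\n", (void *) &x);      // address of a local variable
    free(heap);
    return 0;
}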
base and bounds, in the MMU (memory management unit); base: used to compute the physical address (pa = va + base); bounds: protection, holds either the size of the address space or the physical address of its end
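A minimal base-and-bounds translation sketch; the register values are assumed:

#include <stdio.h>

// base and bounds stand in for the MMU's hardware registers (values assumed)
static unsigned long base = 0x10000, bounds = 0x4000;   // a 16KB address space

// pa = va + base, after checking va against bounds
static long translate(unsigned long va) {
    if (va >= bounds)
        return -1;                 // out of range: the hardware raises a fault
    return (long)(va + base);
}

int main(void) {
    printf("va 0x100  -> pa 0x%lx\n", (unsigned long) translate(0x100));
    printf("va 0x5000 -> %ld (fault)\n", translate(0x5000));
    return 0;
}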
static relocation (done in software, before hardware-based dynamic relocation): the loader rewrites the program's addresses at load time
downside: no protection, and the address space is hard to move afterwards
segmentation
different sections (code, heap, stack) can have different base and bounds
VA translation: va -> segment + offset (e.g. the top bits of the address select the segment)
the stack grows backwards: add a bit per segment indicating whether it grows in the positive or negative direction
protection: add permission bits (read/write/execute) per segment; see the sketch below
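A segmented-translation sketch, assuming a 14-bit virtual address whose top two bits pick the segment; the segment table contents are made up:

#include <stdio.h>

#define SEG_SHIFT   12          // 4KB maximum segment size
#define OFFSET_MASK 0xFFF

struct segment { unsigned long base; unsigned long size; int grows_negative; };

static struct segment segtab[4] = {
    {0x08000, 0x800, 0},        // code
    {0x0A000, 0x800, 0},        // heap
    {0x0C000, 0x000, 0},        // unused
    {0x10000, 0x800, 1},        // stack, grows downward
};

static long translate(unsigned long va) {
    unsigned int seg = (va >> SEG_SHIFT) & 0x3;   // top two bits select the segment
    unsigned long offset = va & OFFSET_MASK;
    struct segment *s = &segtab[seg];
    if (!s->grows_negative) {
        if (offset >= s->size) return -1;         // segmentation fault
        return (long)(s->base + offset);
    }
    // negative-growth segment: offset is measured back from the segment's top
    unsigned long neg = (OFFSET_MASK + 1) - offset;
    if (neg > s->size) return -1;
    return (long)(s->base - neg);
}

int main(void) {
    printf("heap  va 0x1068 -> pa 0x%lx\n", (unsigned long) translate(0x1068));
    printf("stack va 0x3F00 -> pa 0x%lx\n", (unsigned long) translate(0x3F00));
    return 0;
}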
space waste:
internal fragmentation: wasted space inside a segment
external fragmentation: free space scattered between segments, mainly because segments have different sizes; pick a suitable free-list algorithm; free space needs coalescing (merging adjacent free chunks); the allocator puts a header block just before each allocated region that records its size, which explains why free() does not need a size argument (sketch below)
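A minimal sketch of the header idea; my_alloc/my_free are toy wrappers (they just lean on malloc) meant only to show why free() needs no size argument:

#include <stdio.h>
#include <stdlib.h>

// Header placed just before every region the allocator hands out.
typedef struct {
    size_t size;     // size of the region that follows the header
    int    magic;    // sanity check, e.g. 1234567
} header_t;

static void *my_alloc(size_t size) {
    header_t *h = malloc(sizeof(header_t) + size);
    if (h == NULL) return NULL;
    h->size = size;
    h->magic = 1234567;
    return h + 1;                         // hand out the memory after the header
}

static void my_free(void *ptr) {
    header_t *h = (header_t *) ptr - 1;   // step back to the header
    printf("freeing %zu bytes (magic %d)\n", h->size, h->magic);
    free(h);                              // caller never passes a size
}

int main(void) {
    void *p = my_alloc(100);
    my_free(p);
    return 0;
}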
fit algorithms (first-fit sketch after this list):
best fit
worst fit
first fit
next fit
segregated lists
buddy allocation: block sizes are powers of two (2^N), which makes it easy to find and coalesce a freed block's buddy
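A first-fit sketch over a singly linked free list; the chunk sizes are assumed:

#include <stddef.h>
#include <stdio.h>

// Free-list node for a first-fit search.
typedef struct node {
    size_t size;
    struct node *next;
} node_t;

// First fit: return the first free chunk big enough for the request.
static node_t *first_fit(node_t *head, size_t need) {
    for (node_t *n = head; n != NULL; n = n->next)
        if (n->size >= need)
            return n;
    return NULL;   // no chunk large enough
}

int main(void) {
    node_t c = {30, NULL}, b = {10, &c}, a = {20, &b};   // free list: 20 -> 10 -> 30
    node_t *hit = first_fit(&a, 15);
    printf("first fit for 15 bytes: chunk of %zu\n", hit ? hit->size : (size_t) 0);
    return 0;
}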
paging: split the address space into fixed-sized units (pages) instead of variable-sized segments
page for virtual, page frame for physical
page table: a per-process data structure, stored in memory; va -> virtual page number (VPN) + offset; VPN --(page table)--> page frame number (PFN); pa = PFN + offset
the linear page table is simply an array indexed by VPN (sketch below)
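A linear page-table lookup sketch, assuming 4KB pages and a made-up table:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT  12        // 4KB pages: low 12 bits are the offset
#define OFFSET_MASK 0xFFF
#define PTE_VALID   0x1
#define NPAGES      16

static uint64_t page_table[NPAGES];   // indexed by VPN; each entry holds PFN and flags

static long translate(uint64_t va) {
    uint64_t vpn = va >> PAGE_SHIFT;
    uint64_t offset = va & OFFSET_MASK;
    if (vpn >= NPAGES)
        return -1;                                // outside this tiny address space
    uint64_t pte = page_table[vpn];
    if (!(pte & PTE_VALID))
        return -1;                                // invalid: raise a page fault
    uint64_t pfn = pte >> PAGE_SHIFT;             // flag bits live in the low bits
    return (long)((pfn << PAGE_SHIFT) | offset);  // pa = pfn + offset
}

int main(void) {
    page_table[2] = (7ULL << PAGE_SHIFT) | PTE_VALID;   // VPN 2 -> PFN 7
    printf("va 0x2abc -> pa 0x%lx\n", (unsigned long) translate(0x2abc));
    return 0;
}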
translation lookaside buffer (TLB): sits in the MMU; it is just a cache of recent translations
on a TLB miss: update the TLB first, then retry the lookup so that it hits
TLB entry: VPN, PFN, flag bits, ASID (address-space identifier, essentially the PID)
problem: page tables take too much memory!
bigger pages: fewer entries but more internal fragmentation; page size is usually 4KB
paging plus segmentation: one page table per segment, so invalid entries between segments no longer need to be stored, shrinking the page tables
multi-level page tables
VA translation: the VPN is split into one index per level (two-level sketch below)
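A two-level walk sketch: a 20-bit VPN split into a 10-bit page-directory index and a 10-bit page-table index; the 4KB page size, the field widths, and the table contents are assumed, and for brevity a single second-level table is used instead of following the PFN stored in the directory entry:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT  12
#define OFFSET_MASK 0xFFF
#define LEVEL_BITS  10
#define LEVEL_MASK  0x3FF
#define PTE_VALID   0x1

static uint64_t page_dir[1 << LEVEL_BITS];   // top level (page directory)
static uint64_t page_tab[1 << LEVEL_BITS];   // one second-level table, for the demo

static long translate(uint64_t va) {
    uint64_t vpn   = va >> PAGE_SHIFT;
    uint64_t pdidx = (vpn >> LEVEL_BITS) & LEVEL_MASK;   // index into the page directory
    uint64_t ptidx = vpn & LEVEL_MASK;                   // index into the page table
    if (!(page_dir[pdidx] & PTE_VALID))
        return -1;           // whole chunk unmapped: no second-level table needed at all
    uint64_t pte = page_tab[ptidx];                      // demo table (see note above)
    if (!(pte & PTE_VALID))
        return -1;                                       // page fault
    return (long)(((pte >> PAGE_SHIFT) << PAGE_SHIFT) | (va & OFFSET_MASK));
}

int main(void) {
    page_dir[0] = PTE_VALID;                             // mark directory entry 0 valid
    page_tab[5] = (42ULL << PAGE_SHIFT) | PTE_VALID;     // VPN 5 -> PFN 42
    printf("va 0x5123 -> pa 0x%lx\n", (unsigned long) translate(0x5123));
    return 0;
}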
page fault
handled by the OS (software!): the page-fault handler
TLB miss -> page-table base register -> walk the page table (the PTE may point into swap space) -> PTE -> PFN -> PA
swap daemon: when free pages fall below the low watermark, it evicts pages until the high watermark is reached
page out
average memory access time (AMAT) = T_M + P_miss * T_D (memory access time plus miss probability times disk access time)
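A quick worked example with assumed numbers: T_M = 100 ns, T_D = 10 ms, P_miss = 0.001. Then AMAT = 100 ns + 0.001 x 10 ms ≈ 10.1 us; even a tiny miss rate is dominated by the disk.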
FIFO: simple
random: simple
LRU (least recently used): performs badly when the workload loops over slightly more pages than fit in the cache
approximating LRU with a use bit (the clock algorithm):
when a page is used, its use bit is set to 1; clock hand: on eviction, look at the page the hand points to; if use bit == 1, clear it to 0 and move on; if use bit == 0, evict that page
dirty bit: prefer to page out clean pages rather than dirty ones (dirty pages must be written back to disk first)
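A clock-replacement sketch, a circular scan over use bits; the frame count and initial bits are assumed:

#include <stdio.h>

#define NFRAMES 4

static int use_bit[NFRAMES];
static int hand;                       // index of the frame the clock hand points to

// Returns the frame to evict: clear (and skip) frames whose use bit is set.
static int clock_evict(void) {
    while (use_bit[hand]) {
        use_bit[hand] = 0;             // give this page a second chance
        hand = (hand + 1) % NFRAMES;
    }
    int victim = hand;
    hand = (hand + 1) % NFRAMES;
    return victim;
}

int main(void) {
    use_bit[0] = use_bit[1] = 1;                 // frames 0 and 1 were recently used
    printf("evict frame %d\n", clock_evict());   // -> frame 2
    return 0;
}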
clustering/grouping: write out a batch of dirty pages to swap at once instead of one at a time
page 0 is kept invalid, which explains why dereferencing a null pointer faults
COW (copy on write): when copying (e.g. on fork), just share the pages; reads use the shared address, and only a write triggers an actual copy
LRU cache: a hash map pointing into a doubly linked list, most recently used at the front

#include <list>
#include <unordered_map>
using namespace std;

class LRUCache {
public:
    LRUCache(int capacity) : cap(capacity) {}

    int get(int key) {
        if (map.find(key) == map.end()) return -1;
        auto key_value = *map[key];
        cache.erase(map[key]);
        cache.push_front(key_value);        // move the touched entry to the front
        map[key] = cache.begin();
        return key_value.second;
    }

    void put(int key, int value) {
        if (map.find(key) == map.end()) {
            if (cache.size() == cap) {      // full: evict the least recently used (back)
                map.erase(cache.back().first);
                cache.pop_back();
            }
        } else {
            cache.erase(map[key]);          // drop the old position of an existing key
        }
        cache.push_front({key, value});
        map[key] = cache.begin();
    }

private:
    int cap;
    list<pair<int, int>> cache;                              // front = most recently used
    unordered_map<int, list<pair<int, int>>::iterator> map;  // key -> node in the list
};
LFU (least frequently used): evict the entry with the smallest use count, oldest first on ties
#include <set>
#include <unordered_map>
using namespace std;

struct Node {
    int cnt, time, key, value;
    Node(int _cnt, int _time, int _key, int _value)
        : cnt(_cnt), time(_time), key(_key), value(_value) {}
    // order by use count, breaking ties with the older access time first
    bool operator<(const Node& rhs) const {
        return cnt == rhs.cnt ? time < rhs.time : cnt < rhs.cnt;
    }
};

class LFUCache {
    int capacity, time;                     // cache capacity and a logical timestamp
    unordered_map<int, Node> key_table;     // key -> node
    set<Node> S;                            // nodes ordered by (cnt, time); begin() = victim

public:
    LFUCache(int _capacity) {
        capacity = _capacity;
        time = 0;
        key_table.clear();
        S.clear();
    }

    int get(int key) {
        auto it = key_table.find(key);
        if (it == key_table.end()) return -1;
        Node cache = it->second;
        S.erase(cache);                     // re-insert with updated count and time
        cache.cnt += 1;
        cache.time = ++time;
        S.insert(cache);
        it->second = cache;
        return cache.value;
    }

    void put(int key, int value) {
        if (capacity == 0) return;
        auto it = key_table.find(key);
        if (it == key_table.end()) {
            if (key_table.size() == capacity) {       // evict the least frequently used
                key_table.erase(S.begin()->key);
                S.erase(S.begin());
            }
            Node cache = Node(1, ++time, key, value); // brand-new entry: count starts at 1
            key_table.insert({key, cache});
            S.insert(cache);
        } else {
            Node cache = it->second;                  // existing key: bump count, update value
            S.erase(cache);
            cache.cnt += 1;
            cache.time = ++time;
            cache.value = value;
            S.insert(cache);
            it->second = cache;
        }
    }
};
locks: futex-based mutex, the unlock path

void mutex_unlock(int *mutex) {
    /* Adding 0x80000000 to the counter results in 0 if and only if
       there are no other interested threads. */
    if (atomic_add_zero(mutex, 0x80000000))
        return;

    /* There are other threads waiting for this mutex; wake one of them up. */
    futex_wake(mutex);
}
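For context, a sketch of the matching lock path in the same lowlevellock style; atomic_bit_test_set, atomic_increment/decrement, and futex_wait are the same assumed primitives as above, not real portable C:

void mutex_lock(int *mutex) {
    int v;
    /* Bit 31 was clear, we got the mutex (the fast path). */
    if (atomic_bit_test_set(mutex, 31) == 0)
        return;
    atomic_increment(mutex);                 /* count ourselves as a waiter */
    while (1) {
        if (atomic_bit_test_set(mutex, 31) == 0) {
            atomic_decrement(mutex);         /* got it; no longer waiting */
            return;
        }
        /* Make sure the futex value we are monitoring is truly
           negative (i.e. locked) before sleeping. */
        v = *mutex;
        if (v >= 0)
            continue;
        futex_wait(mutex, v);
    }
}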
lock-based data structures
thread safe: protect the structure's operations with locks
counter: the simple version takes one global lock around every update
sloppy counter: per-CPU local counters (each with its own lock) that get flushed into a global counter once they reach a threshold; see the sketch after the struct below
typedef struct __counter_t {
    int             global;            // global count
    pthread_mutex_t glock;             // global lock
    int             local[NUMCPUS];    // per-CPU local counts ...
    pthread_mutex_t llock[NUMCPUS];    // ... and per-CPU locks
    int             threshold;         // how big a local count may get before it is flushed
} counter_t;
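A sketch of the update path for the struct above; init and get are omitted, NUMCPUS is assumed to be defined, and mapping a thread onto a local counter is a simple modulo here:

#include <pthread.h>

void update(counter_t *c, int threadID, int amt) {
    int cpu = threadID % NUMCPUS;                // pick this thread's local counter
    pthread_mutex_lock(&c->llock[cpu]);
    c->local[cpu] += amt;
    if (c->local[cpu] >= c->threshold) {         // local count is big enough:
        pthread_mutex_lock(&c->glock);           // transfer it into the global count
        c->global += c->local[cpu];
        pthread_mutex_unlock(&c->glock);
        c->local[cpu] = 0;
    }
    pthread_mutex_unlock(&c->llock[cpu]);
}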