Posts

CST 334 - Week 8

Wow, this is the final week for this course in Operating Systems! I have learned a lot this week and over the course of eight weeks. Concepts that did not really make sense to me during the week they were introduced were revisited through the opportunity to make up programming assignments. I am pleased that I was able to make some changes to my code to pass all of its tests. If I can succeed in attempting another programming assignment to get full points, I will feel some sense of accomplishment. I was excited to learn about operating systems and came in with no real understanding of them beyond the basics. I had no idea how in depth the course would go and how much I would struggle to grasp the concepts. Perhaps if there was more time. However, when I started this class, I was also given a special project assignment to take live data and use it to point an antenna. Although I only had to focus on the mathematical portions while my partner set up everything else using C, I was puzzled about ...

CST 334 - Week 7

This week I learned about persistence, with a focus on hard drives and I/O devices. The first half of the lessons taught how to calculate values like rotational delay and the time it takes to read or write data. Running the calculations shows that sequential reads perform much better than random reads, making sequential access the preferred strategy. Disk schedulers decide the order in which disk requests are processed. I also learned that the main goals for persistent storage are to be high performing, reliable, and protected, and to follow a naming scheme that makes sense. At the lowest level, the name of a file is its "inode number," and for now, we are to assume that every file has an inode number associated with it. While directories are also associated with an inode number, their contents are more specific, such as the user-readable names that map to inode numbers. I also finished researching with my team about how IO scheduling might...

CST 334 - Week 6

This week I learned about condition variables. A condition variable is a queue that allows threads to wait for a condition to become true before proceeding to execute. It is an alternative to spinning on a lock, which is rather inefficient. A condition variable can be declared with pthread_cond_t c and offers two operations: wait(), which puts the calling thread to sleep, and signal(), which wakes a sleeping thread. wait() takes a mutex as a parameter so it can release the lock while putting the thread to sleep and reacquire it when the thread is signaled. I also learned the importance of using while statements instead of if with condition variables, to ensure the thread re-checks the condition after waking up. Good practice with condition variables is to always hold the lock while signaling. I also learned about semaphores, a synchronization primitive. Here, it is important to consider the execution path, as it affects what value will be printed when there are multiple thr...

CST 334 - Week 5

This week, I learned about multi-threaded code. Threads are helpful because they exist under a single process and share the same virtual address space. This way the program does not have to start up another process, and it allows for concurrency. Concurrency offers benefits such as an IDE handling edits while background compilation is occurring. With this come some risks of poor execution that could lead to problems. A critical section is a piece of code that accesses a shared resource, and it is important that two threads do not run in this section at the same time. Mutual exclusion is a property that helps with this by ensuring that at most one thread is within a critical section at a time. A race condition is when the output of a multi-threaded program depends on the relative speed or scheduling of the threads. It should be avoided because it means the program's output is not consistent. Ens...

CST 334 - Week 4

This week, I learned about how the translation-lookaside buffer (TLB) is used in address translation. The TLB is a hardware cache, implemented as an associative cache, maintained to speed up look-ups. Although it can be very useful, it is only as good as the data it contains. If the VPN of the virtual address being looked up is not within the TLB contents, the look-up misses until that data is stored. If the data is not within the cache, it is retrieved from the page table instead. The data selected to be cached is determined by how often it is accessed. I also learned how to calculate the number of bits required for the VPN and how many bits make up the offset. I also learned about the process of swapping, where the most important pages are selected to be stored in physical memory. This is how the issue of running out of memory space is addressed. To help this process, a present bit is utilized in the page table to repre...

CST 334 - Week 3

This week, I learned about the various factors of free space management. This includes the way operating systems virtualize memory through the use of virtual addresses. When a process tries to load from an address, the OS, with the support of hardware, ensures that the virtual address is translated to a different physical address. This gives the illusion that there is a large, private address space for each running program. This tactic is transparent to the programs; only the OS knows where in physical memory the instructions and data reside. To help eliminate wasted free space in memory, a segmentation tactic is used where a base and bounds pair is generated for each logical segment instead. With this tactic, the code, stack, and heap are placed in different parts of physical memory independently of one another by the OS. I spent a significant amount of time practicing and ensuring I identified the patterns in getting the calculations correct when determining th...

CST 334 - Week 2

This week, I spent a lot of time learning scheduling algorithms. These include: FIFO, which runs jobs in the order they arrive; SJF, which runs the shortest job first and so forth; STCF, which, every time a new job enters, determines which job has the least time left and reschedules using that information; and Round Robin, which slices each job into a certain time frame, taking turns to run each slice. Each one has its pros and cons, depending on what is needed. Take, for example, how Round Robin has the fastest response time, but its average turnaround time suffers. FIFO makes sense, but if the job that starts first takes a large amount of time, the remaining jobs will suffer in performance, which can hurt turnaround time. In addition to these concepts, I spent a tremendous amount of time trying to figure out what style works best for me when it comes to making calculations of average response time and turnaround time. It is easy to get lost in the c...