CSCI 4210 — Operating Systems


Simulation Project Part II (document version 1.0)

Processes and CPU Scheduling

Overview

•  This assignment is due in Submitty by 11:59PM EST on Thursday, August 15, 2024

•  This project is to be completed either individually or in a team of at most three students; as with Project Part I, form your team within the Submitty gradeable, but do not submit any code until we announce that auto-grading is available

•  NEW: If you worked on a team for Part I, feel free to change your team for Part II; all code is reusable from Part I even if you change teams

•  Beyond your team (or yourself if working alone), do not share your code; however, feel free to discuss the project content and your findings with one another on our Discussion Forum

•  To appease Submitty, you must use one of the following programming languages:  C, C++, or Python (be sure you choose only one language for your entire implementation)

• You will have five penalty-free submissions on Submitty, after which points will slowly be deducted, e.g., -1 on submission #6, etc.

• You can use at most three late days on this assignment; in such cases, each team member must use a late day

• You will have at least three days before the due date to submit your code to Submitty; if the auto-grading is not available three days before the due date, the due date will be 11:59PM EDT three days after auto-grading becomes available

•  NEW: Given that your simulation results might not entirely match the expected output on Submitty, we will cap your auto-graded grade at 50  points even though there will be more than 50 auto-graded points per language available in Submitty

• All submitted code must successfully compile and run on Submitty, which currently uses Ubuntu v22.04.4 LTS

• If you use C or C++, your program must successfully compile via gcc or g++ with no warning messages when the -Wall  (i.e., warn all) compiler option is used; we will also use -Werror, which will treat all warnings as critical errors; the -lm flag will also be included; the gcc/g++ compiler is currently version 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04)

•  For source file naming conventions, be sure to use *.c for C and *.cpp for C++; in either case, you can also include *.h files

• For Python, you must use python3, which is currently Python 3.10.12; be sure to name your main Python file project.py; also be sure no warning messages or extraneous output occur during interpretation

•  Please “flatten” all directory structures to a single directory of source files

•  Note that you can use square brackets in your code

Project specifications

For Part II of our simulation project, given the set of processes pseudo-randomly generated in Part I, you will implement a series of simulations of a running operating system. The overall focus will again be on processes, assumed to be resident in memory, waiting to use the CPU. Memory and the I/O subsystem will not be covered in depth in either part of this project.

Conceptual design  (from Part I)

A process is defined as a program in execution.  For this assignment, processes are in one of the following three states, corresponding to the picture shown further below.

•  RUNNING: actively using the CPU and executing instructions

•  READY: ready to use the CPU, i.e., ready to execute a CPU burst

• WAITING: blocked on I/O or some other event

  RUNNING                    READY                    WAITING (on I/O)
   STATE                     STATE                         STATE

 +-------+                                          +-----------------+
 |       |        +-------------------+             |                 |
 |  CPU  |  <==   |    |    |    |    |             |  I/O Subsystem  |
 |       |        +-------------------+             |                 |
 +-------+        <<< queue <<<<<<<<<               +-----------------+

Processes in the READY  state reside in a queue called the ready queue.  This queue is ordered based on a configurable CPU scheduling algorithm.  You will implement specific CPU scheduling algorithms in Part II of this project.

All implemented algorithms (in Part II) will be simulated for the same  set  of processes, which will therefore support a comparative analysis of results. In Part I, the focus is on generating useful sets of processes via pseudo-random number generators.

Back to the conceptual model, when a process is in the READY state and reaches the front of the queue, once the CPU is free to accept the next process, the given process enters the RUNNING state and starts executing its CPU burst.

After each CPU burst is completed, if the process does not terminate, the process enters the WAITING  state, waiting for an I/O operation to complete (e.g., waiting for data to be read in from a file).  When the I/O operation completes, depending on the scheduling algorithm, the process either (1) returns to the READY  state and is added to the ready queue or (2) preempts the currently running process and switches into the RUNNING state.

Note that preemptions occur only for certain algorithms.

Algorithms — (Part II)

The four algorithms that you must simulate are first-come-first-served (FCFS); shortest job first (SJF); shortest remaining time (SRT); and round robin (RR). When you run your program, all four algorithms are to be simulated in succession with the same initial set of processes.

Each algorithm is summarized below.

First-come-first-served  (FCFS)

The FCFS algorithm is a non-preemptive algorithm in which processes simply line up in the ready queue, waiting to use the CPU. This is your baseline algorithm.

Shortest job first  (SJF)

In SJF, processes are stored in the ready queue in order of priority based on their anticipated CPU burst times.  More specifically, the process with the shortest predicted CPU burst time will be selected as the next process executed by the CPU. SJF is non-preemptive.

Shortest remaining time  (SRT)

The SRT algorithm is a preemptive version of the SJF algorithm. In SRT, when a process arrives, if it has a predicted CPU burst time that is less than the remaining predicted time of the currently running process, a preemption occurs.  When such a preemption occurs, the currently running process is added to the ready queue based on priority, i.e., based on its remaining predicted CPU burst time.
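
As a concrete illustration of the SRT preemption decision, the check below is a minimal sketch in C; the process struct and field names (e.g., remaining_tau) are assumptions for illustration only, not a required design.

/* Sketch (assumed struct/field names): decide whether a newly arrived
 * process should preempt the currently running process under SRT. */
typedef struct {
    char id[3];           /* two-character process ID, e.g., "A0" */
    int  remaining_tau;   /* remaining *predicted* CPU burst time (ms) */
} process_t;

int srt_should_preempt(const process_t *arriving, const process_t *running)
{
    /* Preempt only when the arriving process's predicted remaining time
     * is strictly less than that of the currently running process. */
    return arriving->remaining_tau < running->remaining_tau;
}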

Round robin  (RR)

The RR algorithm is essentially the FCFS algorithm with time slice t_slice.  Each process is given t_slice amount of time to complete its CPU burst. If the time slice expires, the process is preempted and added to the end of the ready queue.

If a process completes its CPU burst before a time slice expiration, the next process on the ready queue is context-switched in to use the CPU.

For your simulation, if a preemption occurs and there are no other processes on the ready queue, do not perform a context switch. For example, given process G is using the CPU and the ready queue is empty, if process G is preempted by a time slice expiration, do not context-switch process G back to the empty queue; instead, keep process G running with the CPU and do not count this as a context switch. In other words, when the time slice expires, check the queue to determine if a context switch should occur.
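
The time-slice rule above can be summarized with a small sketch; the return type and the ready_queue_size parameter are assumptions for illustration, not part of the spec.

/* Sketch: decide what happens when the RR time slice expires. */
typedef enum { KEEP_RUNNING, PREEMPT } slice_action_t;

slice_action_t on_slice_expire(int ready_queue_size)
{
    if (ready_queue_size == 0) {
        /* Ready queue is empty: keep the current process on the CPU and
         * do not count this as a context switch (per the rule above). */
        return KEEP_RUNNING;
    }
    /* Otherwise preempt: the running process moves to the END of the
     * ready queue, and a context switch (t_cs/2 out + t_cs/2 in) occurs. */
    return PREEMPT;
}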

 

Simulation configuration  (extended from Part I)

The key to designing a useful simulation is to provide a number of configurable parameters. This allows you to simulate and tune for a variety of scenarios, e.g., a large number of CPU-bound processes, differing average process interarrival times, multiple CPUs, etc.

Define the simulation parameters shown below as tunable constants within your code, all of which will be given as command-line arguments. For Part II of the project, two additional parameters, *(argv+7) and *(argv+8), have been added to those from Part I.

•  *(argv+1):  Define n as the number of processes to simulate.  Process IDs are assigned a two-character code consisting of an uppercase letter from A to Z followed by a number from 0 to 9. Processes are assigned in order A0, A1, A2, . . ., A9, B0, B1, . . ., Z9.

•  *(argv+2): Define n_cpu as the number of processes that are CPU-bound. For this project, we will classify processes as I/O-bound or CPU-bound.  The n_cpu CPU-bound processes, when generated, will have CPU burst times that are longer by a factor of 4 and will have I/O burst times that are shorter by a factor of 8.

•  *(argv+3):  We will use a pseudo-random number generator to determine the interarrival times  of CPU bursts.  This command-line argument, i.e. seed, serves as the seed for the pseudo-random number sequence. To ensure predictability and repeatability, use srand48() with this given seed before simulating each  scheduling algorithm and drand48() to obtain the next value in the range [0.0, 1.0). Since Python does not have these functions, implement an equivalent 48-bit linear congruential generator, as described in the man page for these functions in C.

•  *(argv+4): To determine interarrival times, we will use an exponential distribution, as illustrated in the exp-random.c example. This command-line argument is parameter λ; remember that 1/λ will be the average random value generated, e.g., if λ = 0.01, then the average should be approximately 100.

In the exp-random.c example, use the formula shown in the code, i.e., −ln(r)/λ, where r is the uniform random value obtained via drand48().

•  *(argv+5):  For the exponential distribution, this command-line argument represents the upper bound for valid pseudo-random numbers.  This threshold is used to avoid values far down the long tail of the exponential distribution.  As an example, if this is set to 3000, all generated values above 3000 should be skipped. For cases in which this value is used in the ceiling function (see below), be sure the ceiling is still valid according to this upper bound.

•  *(argv+6): Define t_cs as the time, in milliseconds, that it takes to perform a context switch. Specifically, the first half of the context switch time (i.e., t_cs/2) is the time required to remove the given process from the CPU; the second half of the context switch time is the time required to bring the next process in to use the CPU. Therefore, require t_cs to be a positive even integer.

 

•  *(argv+7): For the SJF and SRT algorithms, since we do not know the actual CPU burst times beforehand, we will rely on estimates determined via exponential averaging.  As such, this command-line argument is the constant Q, which must be a numeric floating-point value in the range [0, 1].

Note that the initial guess for each process is τ0 = 1/λ.

Also, when calculating τ values, use the “ceiling” function for all calculations (see the sketch after this list).

•  *(argv+8): For the RR algorithm, define the time slice value, t_slice, measured in milliseconds. Require t_slice to be a positive integer.
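
A minimal sketch of the τ recalculation referenced in *(argv+7) is given below; the handout does not restate the exponential-averaging formula, so the standard textbook form τ_new = Q × t + (1 − Q) × τ_old (with t the actual length of the just-completed CPU burst) is assumed here, with the ceiling applied as required above.

#include <math.h>   /* ceil() */

/* Sketch (assumed formula): recalculate a process's tau estimate after a
 * CPU burst completes, using exponential averaging with constant Q and
 * taking the ceiling, as required for all tau calculations. */
int recalc_tau(int actual_burst, int tau_old, double Q)
{
    return (int)ceil(Q * actual_burst + (1.0 - Q) * tau_old);
}

For example, with Q = 0.5, a previous estimate τ_old = 100 ms, and an actual burst of 60 ms, the new estimate would be ceil(0.5 × 60 + 0.5 × 100) = 80 ms.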

Pseudo-random numbers and predictability  (from Part I)

A key aspect of this assignment is to compare the results of each of the simulated algorithms with one another given the same initial conditions, i.e., the same initial set of processes.

To ensure each CPU scheduling algorithm runs with the same set of processes, carefully follow the algorithm below to create the set of processes.

For each of the n processes, in order A0 through Z9, perform the steps below, with CPU-bound processes generated first. Note that all generated values are integers.

Define your exponential distribution pseudo-random number generation function as next_exp() (or another similar name); a sketch of this generator and of the steps below appears after the list.

1. Identify the initial process arrival time as the “floor” of the next random number in the sequence given by next_exp(); note that you could therefore have a zero arrival time

2. Identify the number of CPU bursts for the given process as the “ceiling” of the next random number generated from the uniform distribution obtained via drand48() multiplied by **; this should obtain a random integer in the inclusive range [1; **]

3. For each  of these CPU bursts, identify the CPU burst time and the I/O burst time as the “ceiling” of the next two random numbers in the sequence given by next_exp(); multiply the I/O burst time by 8 such that I/O burst time is close to an order of magnitude longer than CPU burst time; as noted above, for CPU-bound processes, multiply the CPU burst time by 4 and divide the I/O burst time by 8 (i.e., do not bother multiplying the original I/O burst time by 8 in this case); for the last CPU burst, do not generate an I/O burst time (since each process ends with a final CPU burst)
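
The sketch below is one possible shape for next_exp() and the per-process generation steps in C; the helper and parameter names are illustrative only, burst_count_max stands in for the multiplier elided as ** above, and the generated values are simply discarded since the data structures are up to you.

#include <stdlib.h>   /* srand48(), drand48() */
#include <math.h>     /* log(), floor(), ceil() */

/* Per the man page, drand48() is the 48-bit LCG
 *   X_{n+1} = (0x5DEECE66D * X_n + 0xB) mod 2^48, seeded by srand48(seed)
 *   with X_0 = (seed << 16) | 0x330E, returning X_{n+1} / 2^48 as a double;
 * Python users implement this recurrence directly. */

/* Sketch: exponential-distribution value, skipping values above the
 * upper bound (*(argv+5)); lambda is *(argv+4). */
double next_exp(double lambda, double upper_bound)
{
    double x;
    do {
        double r = drand48();      /* uniform in [0.0, 1.0) */
        x = -log(r) / lambda;      /* exponential with mean 1/lambda */
    } while (x > upper_bound);     /* skip values past the tail cutoff */
    return x;
}

/* Sketch of the generation steps 1-3 for a single process. */
void generate_process(double lambda, double bound, int burst_count_max,
                      int is_cpu_bound)
{
    int arrival = (int)floor(next_exp(lambda, bound));      /* step 1 */
    int bursts  = (int)ceil(drand48() * burst_count_max);   /* step 2 */
    for (int i = 0; i < bursts; i++) {                      /* step 3 */
        int cpu_burst = (int)ceil(next_exp(lambda, bound));
        if (is_cpu_bound) cpu_burst *= 4;
        if (i < bursts - 1) {           /* no I/O after the last burst */
            int io_burst = (int)ceil(next_exp(lambda, bound));
            if (!is_cpu_bound) io_burst *= 8;
            (void)io_burst;             /* store as appropriate */
        }
        (void)cpu_burst;                /* store as appropriate */
    }
    (void)arrival;                      /* store as appropriate */
}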

Simulation specifics  (Part II)

Your simulator keeps track of elapsed time t (measured in milliseconds), which is initially zero for each scheduling algorithm.  As your simulation proceeds, t  advances to each “interesting” event that occurs, displaying a specific line of output that describes each event.

The “interesting” events are:

•  Start of simulation for a specific algorithm

•  Process arrival (i.e., initially and at each I/O completion)

•  Process starts using the CPU

•  Process finishes using the CPU (i.e., completes a CPU burst)

•  Process has its τ value recalculated (i.e., after a CPU burst completion)

•  Process preemption (SRT and RR only)

•  Process starts an I/O burst

•  Process finishes an I/O burst

•  Process terminates by finishing its last CPU burst

• End of simulation for a specific algorithm

Note that the “process arrival” event occurs each time a process arrives, which includes both the initial arrival time and when a process completes an I/O burst. In other words, processes “arrive” within the subsystem that consists only of the CPU and the ready queue.

The “process preemption” event occurs each time a process is preempted.  When a preemption occurs, a context switch occurs, except when the ready queue is empty for the RR algorithm.

After you simulate each scheduling algorithm, you must reset your simulation back to the initial set of processes and set your elapsed time back to zero.

Note that there may be times during your simulation in which the simulated CPU is idle because no processes have arrived yet or all processes are busy performing I/O. Also, your simulation ends when all processes terminate.

If different types of events occur at the same time, simulate these events in the following order: (a) CPU burst completion; (b) process starts using the CPU; (c) I/O burst completions; and (d) new process arrivals.

Further, any “ties” that occur within  one of these categories are to be broken using process ID order.  As an example, if processes G1  and S9 happen to both complete I/O bursts at the same time, process G1 wins this “tie” (because G1 is lexicographically before S9) and is therefore added to the ready queue before process S9.
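
One way to realize this ordering is a comparator over simulation events, sketched below; the event struct and category codes are assumptions for illustration, not a required design.

#include <string.h>   /* strcmp() */

/* Sketch: ordering for simultaneous events.  A lower category value means
 * higher priority, matching (a)-(d) above; ties within a category are
 * broken by process ID (e.g., "G1" before "S9"). */
enum { EV_CPU_BURST_DONE = 0, EV_CPU_START = 1, EV_IO_DONE = 2, EV_ARRIVAL = 3 };

typedef struct {
    int  time;        /* event time in ms */
    int  category;    /* one of the EV_* values above */
    char pid[3];      /* two-character process ID, e.g., "A0" */
} event_t;

int event_cmp(const event_t *a, const event_t *b)
{
    if (a->time     != b->time)     return a->time     - b->time;
    if (a->category != b->category) return a->category - b->category;
    return strcmp(a->pid, b->pid);  /* lexicographic process-ID tie-break */
}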

Be sure you do not implement any additional logic for the I/O subsystem.  In other words, there are no specific I/O queues to implement.

Measurements  (from Part I)

There are a number of measurements you will want to track in your simulation. For each algorithm, you will count the number of preemptions and the number of context switches that occur. Further, you will measure CPU utilization by tracking CPU usage and CPU idle time.

Specifically, for each  CPU  burst, you will track CPU burst time (given), turnaround time, and wait time.

CPU burst time

CPU burst times are randomly generated for each process that you simulate via the above algorithm. CPU burst time is defined as the amount of time a process is actually using the CPU. Therefore, this measure does not include context switch times.

Turnaround time

Turnaround times are to be measured for each process that you simulate.  Turnaround time is defined as the end-to-end time a process spends in executing a single  CPU  burst.

More specifically, this is measured from process arrival time through to when the CPU burst is completed and the process is switched out of the CPU. Therefore, this measure includes the second half of the initial context switch in and the first half of the final context switch out, as well as any other context switches that occur while the CPU burst is being completed (i.e., due to preemptions).
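
Putting the pieces of this definition together, for a single CPU burst:

turnaround time = (time at which the process has been switched out of the CPU after completing the burst) − (time the process arrived in the ready queue for that burst)

which, in the simplest case with no preemptions, works out to wait time + CPU burst time + t_cs (i.e., the half context switch in plus the half context switch out).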

Wait time

Wait times are to be measured for each CPU burst. Wait time is defined as the amount of time a process spends waiting to use the CPU, which equates to the amount of time the given process is actually in the ready queue. Therefore, this measure does not include context switch times that the given process experiences, i.e., only measure the time the given process is actually in the ready queue.

CPU utilization

Calculate CPU utilization by tracking how much time the CPU is actively running CPU bursts versus total elapsed simulation time.
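
As a minimal sketch (assuming you accumulate the busy time yourself as CPU bursts execute), CPU utilization is the ratio of busy time to total elapsed simulation time, expressed as a percentage:

/* Sketch: CPU utilization for a given algorithm.  cpu_busy_ms is the sum
 * of all CPU burst time actually executed; total_ms is the simulation end
 * time for that algorithm. */
double cpu_utilization(long cpu_busy_ms, long total_ms)
{
    if (total_ms == 0) return 0.0;
    return 100.0 * (double)cpu_busy_ms / (double)total_ms;
}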

 
