COMP3230 Principles of Operating Systems
Programming Assignment Two
Due date: Nov. 19, 2023, at 23:59
Total 11 points
Programming Exercise – Accelerate LLM Inference via Multi-Threading

Objectives
1. An assessment task related to ILO 4 [Practicability] – “demonstrate knowledge in applying system
software and tools available in the modern operating system for software development”.
2. A learning activity related to ILO 2.
3. The goals of this programming exercise are:
§ to have direct practice in designing and developing multithreading programs;
§ to learn how to use POSIX pthreads (and semaphore) libraries to create, manage, and
coordinate multiple threads in a shared memory environment;
§ to design and implement synchronization schemes for multithreaded processes using
semaphores, or mutex locks and condition variables.
Tasks
Optimize the matrix-vector-multiplication algorithm of GPT by multi-threading. Like other
neural networks, GPT and its variants rely on matrix-vector-multiplication (also called the
fully-connected/linear layer in deep learning) to apply the learned parameters, and it accounts for more than 70% of the whole
computation. Thus, to accelerate GPT and get faster responses, it is critical to make matrix-vector-multiplication faster, and multi-threading is usually considered a powerful tool for this.
In this assignment, we will use an open-source variation of GPT, llama2 released by Meta, and we
provide a complete pure-C implementation of its inference in seq.c as the baseline of your work,
along with model weights. You need to use pthread.h with either semaphores or (mutex locks
+ condition variables) to implement a multi-threaded version of matrix-vector-multiplication. This
multi-threaded version will significantly accelerate the inference of the Large Language Model.
Acknowledgement: This assignment is based on the open-source project llama2.c by Andrej
Karpathy; thanks to the open-source community.
GPT-based Large Language Model
At a high level, GPT is a machine that generates words one by one based on the previous words (also
known as the prompt), and Figure 1a illustrates the basic workflow of GPT when generating “How are you”:
Figure 1. GPT insight. a) GPT generates text token by token, and each output becomes the input of the next generation step. b) GPT has four
major components: the Tokenizer turns each word (string) into a vector, Softmax + Sampling give the next token, and each layer has
Attention and an FFN (Feed-Forward Network), consisting of many Matrix-Vector-Multiplications.
Figure 1b showcases the inference workflow of each word like “You” in “How are you”: First, words
are transformed into tokens using a tokenizer, which is essentially a (python) dictionary that assigns
a unique vector to each word. The embedding vectors go through multiple layers, each consisting of
three steps.
§ The first step is attention, where the model calculates attention scores based on the cosine
similarity between the current word's query embedding and the embeddings of previous words
(keys). The attention output is a weighted average of the value embeddings, and this process
involves learnable parameters in the form of Matrix-Vector-Multiplication (linear layer).
§ The second step is a feed-forward network (FFN) that adds more learnable parameters through
Matrix-Vector-Multiplication.
§ The third step is positional embedding, which takes into account the ordering of words in natural
language by adding positional information to the attention calculations.
After going through all the layers, the embeddings are classified to generate a specific word as the
output. This involves using a softmax function to convert the embeddings into a probability
distribution, and randomly sampling a word from the distribution.
Understanding GPT is not required for this assignment. Just remember that an LLM uses a lot of Matrix-Vector-Multiplication to apply its learned parameters, which is what makes it powerful.
Task: Matrix-Vector-Multiplication
Figure 2. Matrix-Vector-Multiplication Algorithm.
As shown in Figure 2, Matrix-Vector-Multiplication can be expressed as two nested iterations:
For Each Row i
For Column j, accumulate Matrix[i][j] * Vector[j] to Out[i]
More specifically, a sample C implementation is shown below (also in seq.c):
void mat_vec_mul(float* out, float* vec, float* mat, int col, int row) {
    for (int i = 0; i < row; i++) {
        float val = 0.0f;
        for (int j = 0; j < col; j++) {
            val += mat[i * col + j] * vec[j]; // mat[i * col + j] := mat[i][j]
        }
        out[i] = val;
    }
}
Your task in this assignment is to parallelize the outer iteration (at the 2nd line) by allocating rows to
threads. More specifically, in the case of a matrix with d rows and n threads working on the
computation, if d is divisible by n, the k-th thread (k = 0, 1, …, n − 1) will handle the rows from
k × (d / n) to (k + 1) × (d / n) − 1. To illustrate, if we have a 6-row matrix with 2 threads, the 0th
thread will handle rows 0 to 2, while the 1st thread will handle rows 3 to 5. If d is not divisible by n,
we can assign the first n − 1 threads (k = 0, 1, …, n − 2) ⌈d / n⌉ rows each, while the last thread handles the
remaining rows. More explanation of this design can be found in Appendix a. Parallelism Checking. A small sketch of this row split is shown below.
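The following is a minimal sketch (a hypothetical helper, not part of the starter code) of the row split described above: thread k of n computes rows [start, end) of a d-row matrix, with the last busy thread absorbing the remainder when d is not divisible by n.
void row_range(int k, int n, int d, int *start, int *end) {
    int per = (d + n - 1) / n;   /* ceil(d / n); equals d / n when d is divisible by n */
    *start = k * per;
    *end   = (k + 1) * per;
    if (*start > d) *start = d;  /* threads past the end of the matrix get an empty range */
    if (*end   > d) *end   = d;  /* the last busy thread takes whatever rows remain       */
}
For example, with d = 6 and n = 2 this gives rows [0, 3) and [3, 6), matching the illustration above; with d = 7 and n = 2 it gives [0, 4) and [4, 7).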
Moreover, in order to reduce overhead, you are required to create one set of threads and reuse
them for all mat_vec_mul() function calls, instead of creating new threads for each
mat_vec_mul() function call. One popular way, based on synchronization, is illustrated in Figure 3.
Figure 3. Reference synchronization workflow, consisting of three functions: a) CREATE_MAT_VEC_MUL function: creates n
threads, and each thread falls asleep immediately; b) MAT_VEC_MUL function: assigns new parameters, wakes up the threads to
work on the parameters, and waits until the threads finish before returning; c) DESTROY_MAT_VEC_MUL function: wakes up the threads to
collect their system usage and exit, then waits until the threads exit and collects their usage.
More specifically, the synchronization workflow illustrated in Figure 3 consists of three functions plus the
thread function:
1. create_mat_vec_mul(int thr_count): to be called at the beginning of the program, shall:
a. Create n threads
b. Let the threads identify themselves, i.e., each thread knows it is the i-th thread
c. Let the created threads fall asleep immediately
2. void mat_vec_mul(float* out, float* vec, float* mat, int col, int row):
the API exposed to do Matrix-Vector-Multiplication, shall:
a. Assign the new parameters (out, vec, mat, col, row) to the threads
b. Wake up the threads to do the calculation
c. Have the main thread wait until all threads have finished the task, and then return
3. destroy_mat_vec_mul(): to be called at the end of the program, shall:
a. Wake up the threads to collect their own system usage and terminate
b. Wait until all threads exit and collect the system usage of the threads
c. Collect the system usage of the main thread, and display the usage of each thread and of the main thread
d. Clear all resources related to multi-threading, and return
4. void* thr_func(void* arg): the thread function that does the Matrix-Vector-Multiplication, shall:
a. Fall asleep immediately after initialization
b. Be able to be woken up by the main thread to work on the assigned task
c. After finishing the task, inform the main thread
d. Be able to collect its own system usage and terminate
More details and the reasons behind this design can be found in Appendix b. Design of Context.
Certainly, other synchronization workflows are possible, and we are open to your ideas. However,
due to the large class size, we can only accept submissions following the design above; a minimal illustrative sketch of it follows.
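For concreteness, here is a minimal, self-contained sketch of the reuse-threads workflow above, using one mutex and two condition variables (the alternative semaphore-based design is equally acceptable). It is illustrative only, not the official solution: the names job_t, generation, done_count, work_cv, etc. are hypothetical and are not part of the starter code, and your real code must live inside llama2_[UID].c between the marked lines.
#include <pthread.h>
#include <stdlib.h>

typedef struct { float *out, *vec, *mat; int col, row; } job_t;

static pthread_t *threads;
static int n_thr;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t work_cv = PTHREAD_COND_INITIALIZER; // main -> workers
static pthread_cond_t done_cv = PTHREAD_COND_INITIALIZER; // workers -> main
static job_t job;
static int generation = 0;    // bumped once per mat_vec_mul() call
static int done_count = 0;    // workers that finished the current call
static int shutting_down = 0;

static void *thr_func(void *arg) {
    int id = (int)(long)arg, seen = 0;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (generation == seen && !shutting_down)
            pthread_cond_wait(&work_cv, &lock);      // sleep until new work or exit
        if (shutting_down) { pthread_mutex_unlock(&lock); break; }
        seen = generation;
        job_t j = job;                               // copy parameters under the lock
        pthread_mutex_unlock(&lock);
        int per = (j.row + n_thr - 1) / n_thr;       // ceil(row / n_thr)
        int start = id * per;
        int end = (start + per < j.row) ? start + per : j.row;
        for (int i = start; i < end; i++) {          // this worker's block of rows
            float val = 0.0f;
            for (int c = 0; c < j.col; c++) val += j.mat[i * j.col + c] * j.vec[c];
            j.out[i] = val;
        }
        pthread_mutex_lock(&lock);
        if (++done_count == n_thr) pthread_cond_signal(&done_cv); // last worker wakes main
        pthread_mutex_unlock(&lock);
    }
    // before returning, a real solution would also record this thread's usage (see below)
    return NULL;
}

void create_mat_vec_mul(int thr_count) {
    n_thr = thr_count;
    threads = malloc(sizeof(pthread_t) * n_thr);
    for (long i = 0; i < n_thr; i++)
        pthread_create(&threads[i], NULL, thr_func, (void *)i);
}

void mat_vec_mul(float *out, float *vec, float *mat, int col, int row) {
    pthread_mutex_lock(&lock);
    job = (job_t){ out, vec, mat, col, row };
    done_count = 0;
    generation++;                                    // publish the new parameters
    pthread_cond_broadcast(&work_cv);                // wake all workers
    while (done_count < n_thr)
        pthread_cond_wait(&done_cv, &lock);          // wait until every row is written
    pthread_mutex_unlock(&lock);
}

void destroy_mat_vec_mul(void) {
    pthread_mutex_lock(&lock);
    shutting_down = 1;
    pthread_cond_broadcast(&work_cv);
    pthread_mutex_unlock(&lock);
    for (int i = 0; i < n_thr; i++) pthread_join(threads[i], NULL);
    free(threads);
}
Whichever scheme you choose, the key properties to preserve are that exactly thr_count threads are created for the whole program and that mat_vec_mul() does not return until every row of out has been written.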
Specifications
a. Preparing Environment
Download the start_code.zip from course’s Moodle – including sequential version seq.c, along
with utility functions in utilities.c and utilities.h. Compile the seq.c with gcc:
gcc -o seq seq.c utilities.c -O2 -lm
Please include utilities.c, use the -lm flag to link the math library, and the -O2 flag to apply level-2 optimization.
Please stick to -O2 and do not use other optimization levels, for fairness. You do not need to understand
utilities.c and utilities.h, and you are not allowed to modify them.
Download the model files. There are two files required, model.bin for model weight and
tokenizer.bin for tokenizer. Please use following instructions to download them:
wget -O model.bin https://huggingface.co/huangs0/llama2.c/resolve/main/model.bin
wget -O tokenizer.bin https://huggingface.co/huangs0/llama2.c/resolve/main/tokenizer.bin
Run the compiled program by giving an integer as the random seed for sampling.
./seq <seed>
Upon invocation, the program will configure the random seed and begin sentence generation
starting from a special <START> token. The program calls the transformer function to generate the
next token, and uses printf with fflush to print the generated word to the shell immediately. A pair of
calls to the utility time-measurement function time_in_ms measures the elapsed time with millisecond accuracy:
long start = time_in_ms(); // measure time in ms accuracy
int next, token = 1, pos = 0; // token = 1 -> <START>
while (pos < config.seq_len) { // not exceed max length
    next = transformer(token, pos, &config, &state, &weights); // generate next
    printf("%s", vocab[next]); fflush(stdout); // force print
    token = next; pos++; // record token and shift position
}
long end = time_in_ms(); // measure time in ms accuracy
This program will start generating tiny stories. Finally, when generation is finished, the length of the
generated text, total time, average speed, and system usage will be printed, for example:
One day, a little girl named Lucy
......
Carrying a brightly stepped for one dog ladybuging once she had
length: 256, time: 4.400000 s, achieved tok/s: 58.181818
main thread - user: 4.3881 s, system: 0.0599 s
By using the same machine (workbench2) and the same random seed, the generated text can be
exactly replicated. For example, the above sample was produced on workbench2 with random seed
42. Moreover, achieved tok/s represents the average number of tokens generated per
second, and we use it as the metric for speed measurement. Because the system load fluctuates from
time to time, the generation speed will fluctuate around some level.
b. Implement the parallel Matrix-Vector-Multiplication by multi-threading
Open llama2_[UID].c, replace [UID] with your UID, and implement the workflow
illustrated in Figure 3 by completing the four functions and adding appropriate global variables. For
synchronization, please use either semaphores or (mutex locks and condition variables). You can
only modify the code between // YOUR CODE STARTS HERE at line 43 and // YOUR
CODE ENDS HERE at line 67 in llama2_[UID].c.
Here are some suggestions for the implementation:
1. How to assign new tasks and tell the threads to terminate? Note that all threads can access global
variables, so you can update the global variables and then wake the threads up.
2. The main thread shall wait until the threads have finished their work or have terminated.
3. For collecting system usage, please consider getrusage (a small sketch follows this list).
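As a hedged example of suggestion 3, the snippet below shows one possible way to report a thread's own CPU usage in the format of the sample output; the helper name print_thread_usage is hypothetical. RUSAGE_THREAD is Linux-specific (it requires _GNU_SOURCE before the includes), while RUSAGE_SELF reports the usage of the whole process and may be what you want for the main-thread line.
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/resource.h>

void print_thread_usage(int id) {
    struct rusage ru;
    getrusage(RUSAGE_THREAD, &ru);   // CPU usage of the calling thread only
    printf("Thread %d has completed - user: %ld.%04ld s, system: %ld.%04ld s\n",
           id,
           ru.ru_utime.tv_sec, ru.ru_utime.tv_usec / 100,
           ru.ru_stime.tv_sec, ru.ru_stime.tv_usec / 100);
}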
Your implementation shall be able to be compiled by the following command:
gcc -o llama2_[UID] llama2_[UID].c utilities.c -O2 -pthread -lm
Then run the compiled program. It now accepts two arguments, seed and thr_count. Code
related to reading the arguments has been provided in llama2_[UID].c. You can use thr_count
to specify the number of threads to use.
./llama2_[UID] <seed> <thr_count>
If your implementation is correct, then under the same random seed the generated text shall be identical
to that of the sequential version, but the generation will be faster. Moreover, you shall report the system
usage of each thread separately. For example, this is the output for random seed 42 on
workbench2 with 4 threads:
One day, a little girl named Lucy
......
Carrying a brightly stepped for one dog ladybuging once she had
length: 256, time: 2.100000 s, achieved tok/s: 121.****62
Thread 0 has completed - user: 1.2769 s, system: 0.0363 s
Thread 1 has completed - user: 1.2658 s, system: 0.0361 s
Thread 2 has completed - user: 1.2749 s, system: 0.0277 s
Thread 3 has completed - user: 1.2663 s, system: 0.0**3 s
main thread - user: 5.7126 s, system: 0.**9 s
c. Measure the performance and report your findings
Benchmark your implementation (tok/s) on your own computer with different numbers of threads and
report the metrics in a table like the following:
Thread Numbers       Speed (tok/s)   User Time   System Time   User Time / System Time
0 (Sequential)
1 (1 child thread)
2
4
6
8
10
12
16
Regarding system usage (user time / system time), please report the usage of the whole process
instead of each thread. Then, based on the table above, briefly analyze the relation between
performance and the number of threads, and explain the relationship. Submit the table, your analysis, and
your reasoning in a one-page PDF document.
IMPORTANT: Due to the large number of students this year, please conduct the benchmark on your
own computer instead of the workbench2 server. Grading of your report is based on your analysis
and reasoning, not on the speed you achieved. When you are working on workbench2, please be
reminded that you are limited to a maximum number of threads (128) and processes (512), so
please do not conduct benchmarking on the workbench2 server.
Submission
Submit your program to the Programming # 2 submission page at the course’s moodle website.
Name the program to llama2_[UID].c (replace [UID] with your HKU student number). As the
Moodle site may not accept source code submission, you can compress files to the zip format before
uploading. Submission checklist:
§ Your source code llama2_[UID].c, must be self-contained. (No dependencies other
than utilities.c and utilities.h)
§ Your report, including the benchmark table, your analysis, and your reasoning
§ Please do not submit the model and tokenizer binary files (model.bin and tokenizer.bin).
Documentation
1. At the head of the submitted source code, i.e., llama2_[UID].c, state the:
§ File name
§ Student’s Name and UID
§ Development Platform
§ Remark – describe how much you have completed (See Grading Criteria)
2. Inline comments (try to be detailed so that your code could be understood by others easily)
Computer Platform to Use
For this assignment, you can develop and test your program on any Linux/Mac platform, but you
must make sure that the program can correctly execute on the workbench2 Linux server (as the
tutors will use this platform to do the grading). Your program must be written in C and successfully
compiled with gcc on the server.
Please note that the only server for COMP3230 is workbench2.cs.hku.hk; please do not use
any other CS department server, especially academy11 and academy21, as they are reserved for
other courses. In case you cannot log in to workbench2, please contact the tutor(s) for help.
Grading Criteria
1. Your submission will be primarily tested on the workbench2 server. Make sure that your
program can be compiled without any errors. Otherwise, we have no way to test your
submission and you will get a zero mark.
2. As the tutors will check your source code, please write your program with good readability (i.e.,
with good coding conventions and sufficient comments) so that you will not lose marks due to
confusion.
3. You may only use pthread.h and semaphore.h (if needed); using other external libraries such as
OpenMP or LAPACK will lead to a zero mark.
Detailed Grading Criteria
§ Documentation: -1 point if missing
§ Include necessary documentation to explain the logic of the program
§ Include the required student's info at the beginning of the program
§ Report: 1 point
§ Measure the performance of the sequential program and your parallel program on your
computer with various numbers of threads (0, 1, 2, 4, 6, 8, 10, 12, 16).
§ Briefly analyze the relation between performance and the number of threads and explain the relationship
§ Implementation: 10 points, evaluated progressively:
1. (+2 points = 2 points) Achieve a correct result and use multi-threading. Correct means the
generated text of the multi-threaded and sequential versions are identical for the same random seed.
2. (+3 points = 5 points total) All in 1., and achieve >10% acceleration over the sequential version
with 4 threads. Acceleration is measured in tok/s and
must result from multi-threading rather than from other factors such as compiler optimization (-O3), etc.
3. (+5 points = 10 points total) All in 2., and reuse threads in the multi-threaded version. Reusing threads
means the number of threads created over the whole program must be constant and equal to thr_count.
Plagiarism
Plagiarism is a very serious offense. Students should understand what constitutes plagiarism, the
consequences of committing an offense of plagiarism, and how to avoid it. Please note that we may
request you to explain to us how your program functions, and we may also make use of
software tools to detect software plagiarism.
Note: You must clearly acknowledge it if you use ChatGPT or any AI tools to generate code in your
implementation. Please mark GPT-generated code with // GPT Code Start Here and
// GPT Code End Here.
Appendix
a. Parallelism Checking
To parallelize with multi-threading, it is critical to verify that the computation is independent, in order to avoid race
conditions and the potential need for locks. More specifically, we need to pay special attention to checking for,
and avoiding, writes to the same memory location while preserving correctness.
For example, the 1st iteration (the outer for-loop) meets the requirement of independence, since the
computation of each row does not affect the others, and the only two writes are to out[i] and val. Writing to
the same out[i] can be avoided by partitioning i between threads, and val can be implemented as a stack
variable in each thread, so no two threads write to the same memory.
By contrast, the 2nd iteration (the inner for-loop) is not a good candidate for multi-threading, even though
the only write is to val. If val is implemented as a stack variable, then each thread only holds part of the
correct answer. If val is implemented as a heap variable shared among the threads, then val
requires a lock to avoid racing writes, as sketched below.
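To make the point concrete, here is a small hedged sketch (the names accumulate_slice and val_lock are hypothetical) of what parallelizing the inner loop would force: every thread accumulates into the same shared val, so each update must be protected by a mutex, which serializes the additions and adds locking overhead.
#include <pthread.h>

static float val;                                      /* shared accumulator        */
static pthread_mutex_t val_lock = PTHREAD_MUTEX_INITIALIZER;

/* each thread would run this on its own slice [j_start, j_end) of row i's columns */
void accumulate_slice(const float *mat, const float *vec, int i, int col,
                      int j_start, int j_end) {
    for (int j = j_start; j < j_end; j++) {
        pthread_mutex_lock(&val_lock);                 /* without this, updates race */
        val += mat[i * col + j] * vec[j];
        pthread_mutex_unlock(&val_lock);
    }
}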
b. Design of Context
A straightforward solution to the above problem is to let the thread function do the computation and exit
when finished, and to let the original mat_vec_mul function create the threads and wait for them to exit with
pthread_join. This would provide the same synchronization.
However, this implementation is problematic because each call to mat_vec_mul would create
n new threads. Unfortunately, to generate a sentence, an LLM like llama2 calls mat_vec_mul
thousands of times, so thousands of threads would be created and destroyed, which leads to significant
overhead in the operating system.
Note that all the calls to mat_vec_mul do the same task, i.e., Matrix-Vector-Multiplication,
and the only difference between calls is the parameters. Thus, a straightforward
optimization is to reuse the threads. At a high level, we can create n threads in advance, and when
mat_vec_mul is called, we assign the new parameters to the thread functions and let the threads work on
them.
Moreover, it is worth noting that mat_vec_mul is only valid within the context, i.e., between
create_mat_vec_mul and destroy_mat_vec_mul; outside it, there are no threads other than the main one (they have not
yet been created or have already been destroyed). This kind of context provides efficient and robust control over
local state, and has been integrated into high-level languages such as Python's `with` statement. A usage sketch of this context pattern follows.
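As a small illustration of the context pattern (assuming the three functions described in Figure 3), a program brackets all generation work between the create and destroy calls, so the same thr_count threads are reused for every mat_vec_mul call; the thr_count value shown is hypothetical and would normally come from the command line.
int main(void) {
    int thr_count = 4;                   /* hypothetical; normally read from argv      */
    create_mat_vec_mul(thr_count);       /* enter the context: threads exist from here */
    /* ... the generation loop calls mat_vec_mul(...) thousands of times,
           always reusing the same thr_count threads ... */
    destroy_mat_vec_mul();               /* leave the context: threads are joined and
                                            all multi-threading resources are freed    */
    return 0;
}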