COMP3230 Principles of Operating Systems Programming Assignment One
Due date: Oct. 17, 2024, at 23:59. Total 13 points – Release Candidate Version 2
Programming Exercise – Implement an LLM Chatbot Interface
Objectives
1. An assessment task related to ILO 4 [Practicability] – “demonstrate knowledge in applying system software and tools available in the modern operating system for software development”.
2. A learning activity related to ILO 2a.
3. The goals of this programming exercise are:
• To have hands-on practice in designing and developing a chatbot program, which involves the creation, management and coordination of processes.
• To learn how to use various important Unix system functions:
§ to perform process creation and program execution
§ to support interaction between processes by using signals and pipes
§ to get the processes' running status by reading the /proc file system
§ to configure the scheduling policy of the process via syscall
Tasks
Chatbots like ChatGPT or Poe are the most common user interfaces to large language models (LLMs). Compared with a standalone inference program, a chatbot provides a natural way to interact with LLMs. For example, after you enter "What is Fibonacci Number" and press Enter, the chatbot will use the LLM to generate a response based on your prompt, e.g., "Fibonacci Number is a series of numbers whose value is the sum of the previous two...". But that need not be the end: you can enter a further prompt like "Write a Python program to generate Fibonacci Numbers.", and the model will continue to generate based on the previous messages, e.g., "def fibonacci_sequence(n): ...".
Moreover, in practice, we usually separate the inference process, which handles the LLM, from the main process, which handles user input and output. This separable design facilitates in-depth control over the inference process. For example, we can observe the status of the running process by reading the /proc file system, or even control the scheduling policy of the inference process from the main process via the relevant syscall.
Though understanding the GPT structure is not required, in this assignment we use Llama3, an open-source variation of GPT, and we provide a complete single-threaded LLM inference engine as the starting point of your work. You need to use the Unix process API to create an inference process that runs the LLM, use a pipe and signals to communicate between the two processes, read the /proc pseudo file system to monitor the running status of the inference process, and use the sched syscall to set the scheduler of the inference process and observe the performance changes.
Acknowledgement: The inference framework used in this assignment is based on the open-source project llama2.c by Andrej Karpathy. The LLM used in this assignment is based on SmolLM by HuggingfaceTB. Thanks, open source!
    
Specifications
a. Preparing Environment
Download the start code – Download start.zip from the course's Moodle site and unzip it to get a folder with:
Rename [UID] in inference_[UID].c and main_[UID].c with your UID, then open the Makefile, rename [UID] on line 5, and make sure no space is left after your UID.
Download the model files. Two binary files are required: model.bin (the model weights) and tokenizer.bin (the tokenizer). Use the following instructions to download them:
Compile and run the inference program. The initial inference_[UID].c is a complete single-threaded C inference program that can be compiled as follows:
Please use the -lm flag to link the math library and the -O3 flag to apply the best optimization allowed within the C standard. Please stick to -O3 and don't use other optimization levels. The compiled program is executed with an integer specifying the random seed and a series of strings as prompts (up to 4 prompts allowed), supplied via command-line arguments, aka argv:
Upon invocation, the program configures the random seed and begins sentence generation based on the prompts provided via the command line. The program calls the generate function, which runs the LLM on the given prompt (prompts[idx] in the example below) to generate new tokens, and uses printf with fflush to print the decoded tokens to stdout immediately.
start
├── common.h          # common and helper macro defns, read through first
├── main_[UID].c      # [your task] template for main process implementation
├── inference_[UID].c # [your task] template for inference child process implementation
├── Makefile          # makefile for the project, update [UID] on line 5
├── model.h           # GPT model definition, modification not allowed
└── avg_cpu_use.py    # utility to parse the log and calculate average cpu usage
make prepare # downloads model.bin and tokenizer.bin if not present
# or manually download via wget; this forces a repeated download, not recommended
wget -O model.bin https://huggingface.co/huangs0/smollm/resolve/main/model.bin
wget -O tokenizer.bin https://huggingface.co/huangs0/smollm/resolve/main/tokenizer.bin
 make -B inference # -B := --always-make, force rebuild
# or manually
gcc -o inference inference_[UID].c -O3 -lm # replace [UID] with yours
./inference <seed> "<prompt>" "<prompt>" # prompts must be quoted with ""
# examples
./inference 42 "What’s the answer to life the universe and everything?" # answer is 42!
./inference 42 "What’s Fibonacci Number?" "Write a python program to generate Fibonaccis."
for (int idx = 0; idx < num_prompt; idx++) { // 0 < num_prompt <= 4
    printf("user: %s \n", prompts[idx]);     // print user prompt for our information
    generate(prompts[idx]);                  // handles everything including model, printf, fflush
}

The following is an example run of ./inference. It's worth noting that, when generation finishes, the current sequence length (SEQ LEN), consisting of both the user prompt and the generated text, is printed:
$ ./inference 42 "What is Fibonacci Number?"
user
What is Fibonacci Number?
assistant
A Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding numbers (1, 1, 2, 3, 5, 8, 13, ...)
......
F(n) = F(n-1) + F(n-2) where F(n) is the nth Fibonacci number. The Fibonacci sequence is a powerful mathematical concept that has numerous applications in various<|im_end|>
[INFO] SEQ LEN: 266, Speed: 61.1776 tok/s
If multiple prompts are provided, they are applied in the same session instead of being treated independently, alternating with model generation: the 2nd prompt is applied after the 1st generation, the 3rd prompt after the 2nd generation, and so on. You can observe SEQ LEN increasing with every generation:
 $ ./inference 42 "What is Fibonacci Numbers?" "Write a program to generate Fibonacci Numbers."
user
What is Fibonacci Number?
assistant
A Fibonacci sequence is a sequence of numbers in which each number is the sum of the two preceding numbers (1, 1, 2, 3, 5, 8, 13, ...)
......
F(n) = F(n-1) + F(n-2) where F(n) is the nth Fibonacci number. The Fibonacci sequence is a powerful mathematical concept that has numerous applications in various<|im_end|>
[INFO] SEQ LEN: 266, Speed: 61.1776 tok/s
user
Write a program to generate Fibonacci Numbers.
assistant
Here's a Python implementation of the Fibonacci sequence using recursion: ```python
def fibonacci_sequence(n):
if n <= 1: return 1
else:
return fibonacci_sequence(n - 1) + fibonacci_sequence(n – 2)
......
[INFO] SEQ LEN: 538, Speed: 54.2636 tok/s
It's worth noting that with the same machine, random seed, and prompt (case-sensitive), inference generates exactly the same output. To avoid time-consuming long generations, the maximum number of new tokens generated per response turn is limited to 256, the maximum prompt length is limited to 256 characters (normally equivalent to 10-50 tokens), and the maximum number of turns is limited to 4 (at most 4 prompts are accepted; the rest are unused).
b. Implement the Chatbot Interface
Open main_[UID].c and inference_[UID].c, and implement the chatbot interface, which shall:

1. Inference based on user input: Accept prompt input via the chatbot shell; when the user presses `Enter`, start inference (generate) based on the prompt and print the generated text to stdout.
2. Support sessions: During inference, stop accepting new prompt input. After each generation, accept new prompt input via the chatbot shell and continue the generation based on the new prompt and the previous conversation (prompts and generated tokens). Prompts must be treated as one continuous session (SEQ LEN keeps growing).
3. Separate main and inference processes: Separate the inference workload into a child process; the main process is only in charge of receiving user input, displaying output and maintaining the session.
4. Collect the exit status of the inference process on exit: A user can press Ctrl+C to terminate both the main process and the inference process. Moreover, the main process shall wait for the termination of the inference child process, then collect and display its exit status before terminating.
5. Monitor the status of the inference process: During inference, the main process shall monitor the status of the inference process by reading the /proc file system and print the status to stderr every 300 ms.
6. Set the scheduling policy of the inference process: Before the first generation, the main process shall be able to set the scheduling policy and parameters of the inference process via the SYS_sched_setattr syscall.
Your implementation shall be able to be compiled by the following command:
Then run the compiled program with ./main (or ./inference if you are at Stage 1). It accepts an argument named seed that specifies the random seed. For Stage 3, to avoid stdout and stderr congesting the console, use 2>proc.log to redirect the /proc log to a file.
We suggest you divide the implementation into three stages:
• Stage 1 – Convert inference_[UID].c to accept a seed argument and read the prompt from stdin (a sketch follows this list).
§ Implement prompt input reading, call generate to generate new tokens and print the result.
• Stage 2 – Separate the user-input workload into main_[UID].c (main process) and the inference workload into inference_[UID].c (inference process). Add code to the main process to (see the sketch after the build and run commands below):
§ use fork to create the child process and exec to run the inference program
§ use pipe to forward user input from the main process to the inference process's stdin
§ add a signal handler to correctly handle SIGINT for termination; more details in the specifications
§ use signals (handlers and kill) to synchronize the main process and the inference process
§ the main process shall receive a signal from the inference process upon finishing each generation for the prompt
§ use wait to wait for the inference process to terminate and print its exit status
• Stage 3 – Add code to the main process that:
§ during inference, reads the /proc file system to get the cpu usage and memory usage of the inference process, and prints them to stderr every 300 ms
§ before the first generation, uses the SYS_sched_setattr syscall to set the scheduling policy and related scheduling parameters for the inference child process
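For Stage 1, the argv-based prompt loop shown earlier might become a stdin loop along the following lines. This is a minimal sketch: the generate signature and the 4-turn limit are taken from the description above, and the details of the provided start code may differ.

```c
/* Stage 1 sketch (illustrative only): read prompts from stdin instead of
 * argv. generate() comes from the provided inference engine; its exact
 * signature here is an assumption, not the official template. */
#include <stdio.h>
#include <string.h>

#define MAX_PROMPT_LEN 512            /* per common.h, see the suggestions below */
extern void generate(char *prompt);   /* assumed signature of the engine */

void chat_loop(void) {
    char prompt[MAX_PROMPT_LEN];
    int turns = 0;
    while (turns < 4 && fgets(prompt, sizeof(prompt), stdin) != NULL) {
        prompt[strcspn(prompt, "\n")] = '\0'; /* strip the trailing newline */
        printf("user: %s \n", prompt);        /* echo, as the argv loop did */
        generate(prompt);                     /* prints tokens with fflush */
        turns++;                              /* at most 4 turns accepted */
    }
}
```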
make -B    # -B := --always-make, force rebuild; applicable after renaming [UID]
# or manually
gcc -o inference inference_[UID].c -O3 -lm   # replace [UID] with yours
gcc -o main main_[UID].c                     # replace [UID] with yours

./inference <seed>    # stage 1, replace <seed> with a number
./main <seed>         # stage 2, replace <seed> with a number
./main <seed> 2>log   # stage 3, replace <seed> with a number, redirect stderr to file
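To make the Stage 2 plumbing concrete, here is a minimal sketch under the assumptions above. The signal handshake and SIGINT handling described in the stage list are elided; treat the structure as illustrative, not as the reference solution.

```c
/* Minimal Stage 2 sketch, NOT the reference solution: fork + exec the
 * inference program, with a pipe feeding the child's stdin. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc < 2) { fprintf(stderr, "usage: %s <seed>\n", argv[0]); return 1; }

    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                      /* child: becomes the inference process */
        close(fds[1]);                   /* write end is unused in the child */
        dup2(fds[0], STDIN_FILENO);      /* pipe read end becomes stdin */
        close(fds[0]);
        execl("./inference", "inference", argv[1], (char *)NULL);
        perror("execl");                 /* reached only if exec failed */
        _exit(1);
    }

    close(fds[0]);                       /* parent: read end is unused */
    char buf[512];
    printf(">>> "); fflush(stdout);
    while (fgets(buf, sizeof(buf), stdin) != NULL) {
        write(fds[1], buf, strlen(buf)); /* forward the prompt to the child */
        /* ... here: wait for the child's "generation done" signal ... */
        printf(">>> "); fflush(stdout);
    }
    close(fds[1]);                       /* EOF lets the child terminate */

    int status;
    waitpid(pid, &status, 0);            /* collect the exit status */
    if (WIFEXITED(status))
        printf("Child exited with %d\n", WEXITSTATUS(status));
    return 0;
}
```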

The following are some further specifications of the behavior of your chatbot interface:
• Your chatbot interface shall print >>> to indicate user prompt input.
§ >>> shall be printed before every user prompt input.
§ Your main process shall wait until the user presses `Enter` before forwarding the prompt to the inference process.
§ Your main process shall stop accepting user input until model generation is finished.
§ >>> shall be printed immediately AFTER model generation finishes.
§ After >>> is printed again, your main process shall resume accepting user input.
• Your inference process shall wait for the user prompt forwarded from the main process and, after finishing model generation, wait again until the next user prompt is received.
§ Though blocked, the inference process shall correctly receive and handle SIGINT to terminate.
• Your program shall terminate when 4 prompts have been received or a SIGINT signal is received.
§ Your main process shall wait for the inference process to terminate, then collect and print its exit status (in the form Child exited with <status>) before it terminates.
• Your main process shall collect the running status of the inference process ONLY while the inference model is running, every 300 ms. All information about the statistics of a process can be found in the files under the /proc/{pid} directory. It is a requirement of this assignment to use the /proc filesystem to extract the running statistics of a process; you may refer to the manpage of the /proc file system and the kernel documentation. Here we mainly focus on /proc/{pid}/stat, which contains 52 space-separated fields on a single line. You need to parse, extract and display the following fields:
./main <seed>
>>> Do you know Fibonacci Numer?
Fibonacci number! It's a fascinating...<|im_end|>
>>> Write a Program to generate Fibonacci Number?   // NOTE: print >>> here!!!
def generate_fibonacci(n):...
pid       Process id
tcomm     Executable name
state     Running status (R is running, S is sleeping, D is sleeping in an uninterruptible wait, Z is zombie, T is traced or stopped)
policy    Scheduling policy (hint: get_sched_name helps convert it into a string)
nice      Nice value (hint: priority used by the default scheduler, default is 0)
vsize     Virtual memory size
task_cpu  CPU id the process is scheduled on, named cpuid
utime     Running time the process has spent in user mode, unit is 10 ms (aka 0.01 s)
stime     Running time the process has spent in system mode, unit is 10 ms (aka 0.01 s)
Moreover, you will need to calculate the cpu usage in percentage (cpu%) based on utime and stime. CPU usage is the difference between the current and the last measurement divided by the interval length; since we don't distinguish stime from utime, sum the two differences. For example, if the current utime and stime are 457 and 13, and the last utime and stime were 430 and 12, then the usage is ((457-430)+(13-12))/30 = 93.33% (all units are 10 ms). For a real-world check, verify against htop. Finally, you shall print to stderr in the following form; to separate it from stdout, use ./main <seed> 2>log to redirect stderr to a log file.
[pid] 6**017 [tcomm] (inference) [state] R [policy] SCHED_OTHER [nice] 0 [vsize] 358088704 [task_cpu] 4 [utime] 10 [stime] 3 [cpu%] 100.00%   # NOTE: color not required!!!
[pid] 6**017 [tcomm] (inference) [state] R [policy] SCHED_OTHER [nice] 0 [vsize] 358088704 [task_cpu] 4 [utime] 20 [stime] 3 [cpu%] 100.00%
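One plausible way to extract these fields is sketched below. Field offsets follow proc(5); the struct and helper names are my own, and the 10 ms tick unit assumes the usual 100 Hz clock, so check your platform. The main process would call this every 300 ms, e.g. after usleep(300 * 1000).

```c
/* Sketch: extract the required fields from /proc/<pid>/stat and compute
 * cpu%. Field positions follow proc(5); error handling is abbreviated. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

typedef struct {
    int pid;
    char tcomm[64];
    char state;
    long nice, policy, task_cpu;
    unsigned long vsize, utime, stime;
} proc_stat_t;

int read_proc_stat(pid_t pid, proc_stat_t *ps) {
    char path[64], buf[2048];
    snprintf(path, sizeof(path), "/proc/%d/stat", pid);
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    if (!fgets(buf, sizeof(buf), f)) { fclose(f); return -1; }
    fclose(f);

    /* tcomm may contain spaces, so locate the last ')' before tokenizing */
    char *rp = strrchr(buf, ')');
    if (!rp) return -1;
    sscanf(buf, "%d (%63[^)]", &ps->pid, ps->tcomm);

    char *tok[64] = {0};
    int n = 0;                       /* tok[i] holds field i+3 of proc(5) */
    for (char *t = strtok(rp + 2, " "); t && n < 64; t = strtok(NULL, " "))
        tok[n++] = t;
    if (n < 39) return -1;

    ps->state    = tok[0][0];                  /* field 3:  state   */
    ps->utime    = strtoul(tok[11], NULL, 10); /* field 14: utime   */
    ps->stime    = strtoul(tok[12], NULL, 10); /* field 15: stime   */
    ps->nice     = atol(tok[16]);              /* field 19: nice    */
    ps->vsize    = strtoul(tok[20], NULL, 10); /* field 23: vsize   */
    ps->task_cpu = atol(tok[36]);              /* field 39: cpu id  */
    ps->policy   = atol(tok[38]);              /* field 41: policy  */
    return 0;
}

/* cpu% over one 300 ms window: utime/stime tick in 10 ms units (assuming
 * a 100 Hz clock), so the window spans 30 ticks. */
double cpu_percent(const proc_stat_t *cur, const proc_stat_t *last) {
    return ((cur->utime - last->utime) + (cur->stime - last->stime))
           / 30.0 * 100.0;
}
```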

• Before the first generation, the main process shall be able to set the scheduling policy and nice value of the inference process. To make setting the policy and parameters uniform, you must use the raw syscall SYS_sched_setattr instead of other glibc bindings like sched_setscheduler. Linux currently implements and supports the following scheduling policies, in two categories:
§ Normal policies:
§ SCHED_OTHER: the default scheduling policy of Linux, also named SCHED_NORMAL
§ SCHED_BATCH: for non-interactive cpu-intensive workloads
§ SCHED_IDLE: for low-priority background tasks
§ Realtime policies: need sudo privilege, not required in this assignment.
§ [NOT REQUIRED] SCHED_FIFO: First-In-First-Out policy with preemption
§ [NOT REQUIRED] SCHED_RR: Round-Robin policy
§ [NOT REQUIRED] SCHED_DEADLINE: Earliest-Deadline-First with preemption
For the normal policies (SCHED_OTHER, SCHED_BATCH, SCHED_IDLE), the scheduling priority is configured via the nice value, an integer between -20 (highest priority) and +19 (lowest priority), with 0 as the default. You can find more info in the manpage.
Please note that on workbench2, without sudo, you are not allowed to set real-time policies, or to set normal policies with nice < 0, due to resource limits; please do so only when benchmarking in your own environment. Grading of this part on workbench2 will be limited to setting SCHED_OTHER, SCHED_IDLE and SCHED_BATCH with nice >= 0.
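Since glibc ships no wrapper, a raw-syscall sketch in the spirit of the sched_setattr(2) manpage example could look like the following. The struct layout follows the manpage; note that very recent glibc versions define their own struct sched_attr in <sched.h>, so rename it if the definitions clash.

```c
/* Sketch: set the inference child's policy and nice value with the raw
 * SYS_sched_setattr syscall. glibc has no wrapper, so we declare the
 * attribute struct ourselves, following sched_setattr(2). */
#define _GNU_SOURCE
#include <sched.h>         /* SCHED_OTHER, SCHED_BATCH, SCHED_IDLE */
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

struct sched_attr {
    uint32_t size;           /* sizeof(struct sched_attr) */
    uint32_t sched_policy;   /* SCHED_OTHER / SCHED_BATCH / SCHED_IDLE */
    uint64_t sched_flags;
    int32_t  sched_nice;     /* nice value, used by the normal policies */
    uint32_t sched_priority; /* realtime priority, unused here */
    uint64_t sched_runtime;  /* SCHED_DEADLINE parameters, unused here */
    uint64_t sched_deadline;
    uint64_t sched_period;
};

/* Returns 0 on success, -1 on error (errno is set by the kernel). */
int set_inference_policy(pid_t pid, uint32_t policy, int32_t nice) {
    struct sched_attr attr = {
        .size         = sizeof(attr),
        .sched_policy = policy,
        .sched_nice   = nice,
    };
    return (int)syscall(SYS_sched_setattr, pid, &attr, 0); /* flags = 0 */
}
```

The main process would call, for example, set_inference_policy(child_pid, SCHED_BATCH, 10) before signaling the child to start its first generation.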
c. Measure the performance and report your finding
Benchmark the generation speed (tok/s) and average cpu usage (%) of your implementation with different scheduling policies and nice values.
Scheduling Policy   Priority / Nice   Speed (tok/s)   Avg CPU Usage (%)
SCHED_OTHER         0
SCHED_OTHER         2
SCHED_OTHER         10
SCHED_BATCH         0
SCHED_BATCH         2
SCHED_BATCH         10
SCHED_IDLE          0 (only 0)
                                For simplicity and fairness, use only the following prompt to benchmark speed:
For the average cpu usage, take the average of the cpu usage values from the log (as in the example above). For your convenience, we provide a Python script avg_cpu_use.py that parses the log (given its path) and prints the average. Use it like: python3 avg_cpu_use.py ./log
Based on the above table, briefly analyze the relation between scheduling policy and speed (together with cpu usage), and briefly report your findings (in one or two paragraphs). Please be advised that this is an open question with no clear or definite answer (just like most problems in our life); any finding that corresponds to your experimental results is acceptable (including that different schedulers make nearly no impact on performance).
 ./main <seed> 2>log
>>> Do you know Fibonacci Numer?
...... # some model generated text
[INFO] SEQ LEN: xxx, Speed: xx.xxxx tok/s # <- speed here!

IMPORTANT: We don't limit the platform for benchmarking. You may use: 1) workbench2; 2) your own Linux machine (if any); 3) Docker on Windows/MacOS; 4) a hosted container like Codespaces. Please note that due to the large number of students this year, benchmarking on workbench2 could be slow as the deadline approaches.
Submit the table and your analysis in a one-page PDF document. Grading of your benchmarking and report is based on your analysis (whether it corresponds to your results) rather than the speed you achieved.
Suggestions for implementation
• You may consider scanf or fgets to read user input; user input is bounded to 512 characters, defined as the macro MAX_PROMPT_LEN in common.h (many other useful macros are included there too).
• To forward user input to the inference process's stdin, you may consider using dup2.
• You may consider using SIGUSR1 and SIGUSR2 with sigwait to synchronize the main process and the inference process (a sketch follows this list).
• There are no glibc bindings for the SYS_sched_setattr and SYS_sched_getattr syscalls, so please use the raw syscall interface; check the manpage for more info.
• To convert a scheduling policy from int to string, use get_sched_name defined in common.h.
• Check the manpage first if you hit any problem: either Google "man <sth>" or run man <sth> in a shell.
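As a sketch of the sigwait suggestion above (illustrative only; SIGUSR2 could play a symmetric role for the reverse direction, and SIGINT handling is omitted):

```c
/* Sketch of the SIGUSR1 + sigwait handshake: block the signal in main()
 * before fork (the mask is inherited by the child), then sigwait for it
 * after each prompt is forwarded to the inference process. */
#include <signal.h>
#include <stdio.h>

void wait_generation_done(void) {
    sigset_t set;
    int sig;
    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    /* in main(), before fork(): sigprocmask(SIG_BLOCK, &set, NULL); */
    sigwait(&set, &sig);   /* sleeps until the inference process signals */
    printf(">>> ");        /* now safe to accept the next user prompt */
    fflush(stdout);
}

/* In the inference process, after generate() returns:
 *     kill(getppid(), SIGUSR1);
 */
```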
Submission
Submit your program to the Programming # 1 submission page on the course's Moodle website. Name the programs inference_[UID].c and main_[UID].c (replace [UID] with your HKU student number). As the Moodle site may not accept source code submissions, please compress all files into a zip archive before uploading.
Checklist for your submission:
• Your source code inference_[UID].c and main_[UID].c (must be self-contained, with no dependencies other than the provided model.h and common.h).
• Your report, including the benchmark table, your analysis and reasoning.
• Your GenAI usage report containing the GenAI models used (if any), prompts and responses.
• Please do not compress and submit the model and tokenizer binary files (use make clear_bin).
Documentation
1. At the head of the submitted source code, state the:
• File name
• Name and UID
• Development Platform (Please include compiler version by gcc -v)
• Remark – describe how much you have completed (See Grading Criteria)
2. Inline comments (try to be detailed so that your code can be easily understood by others)
 
Computer Platform to Use
For this assignment, you can develop and test your program on any Linux platform, but you must make sure that it executes correctly on the workbench2 Linux server (as the tutors will use this platform for grading). Your program must be written in C and compile successfully with gcc on the server.
It's worth noting that the only server for COMP3230 is workbench2.cs.hku.hk; please do not use any other CS department server, especially academy11 and academy21, as they are reserved for other courses. In case you cannot log in to workbench2, please contact the tutor(s) for help.
Grading Criteria
1. Your submission will be primarily tested on the workbench2 server. Make sure that your program can be compiled without any errors using the Makefile (update if needed). Otherwise, we have no way to test your submission and you will get a zero mark.
2. As tutors will check your source code, please write your program with good readability (i.e., with good code convention and sufficient comments) so that you won’t lose marks due to confusion.
3. You can only use the Standard C library on Linux platform (aka glibc).
Detailed Grading Criteria
• Documentation: -1 point if not done
• Include necessary documentation to explain the logic of the program.
• Include the required student info at the beginning of the program.
• Report: 1 point
• Measure the performance and average cpu usage of your chatbot on your own computer.
• Briefly analyze the relation between performance and scheduling policy and report your findings.
• Your finding will be graded based on the reasoning part.
• Implementation: 12 points
1. [1pt] Build a chatbot that accepts user input, runs inference, and prints the generated text to stdout.
2. [2pt] Separate Inference Process and Main Process (for chatbot interface) via pipe and exec
3. [1pt] Correctly forward user input from the main process to the subprocess via pipe
4. [1pt] Correctly synchronize the main process with the inference process for the completion of
inference generation.
5. [2pt] Correctly handle SIGINT that terminates both main and inference processes and collect
the exit status of the inference process.
6. [2.5pt] Correctly parse the /proc file system of the inference process during inferencing to
collect and print required fields to stderr.
7. [0.5pt] Correctly calculate the cpu usage in percentage and print to stderr.
8. [2pt] Correctly use SYS_sched_setattr to set the scheduling policy and parameters.
Plagiarism
Plagiarism is a very serious offense. Students should understand what constitutes plagiarism, the consequences of committing an offense of plagiarism, and how to avoid it. Please note that we may ask you to explain how your program functions, and we may also use software tools to detect software plagiarism.

GenAI Usage Report
Following the course syllabus, you are allowed to use Generative AI to help complete the assignment; please clearly state your GenAI usage in the GenAI report, including:
• Which GenAI models you used
• Your conversations, including your prompts and the responses.
