        Project #2 for CEG5304: Generating Images through Prompting and Diffusion-based Models.
        Spring (Semester 2), AY 202**024
        In this exploratory project, you are to explore how to generate (realistic) images via diffusion-based models (such as DALLE and Stable Diffusion) through prompting, in particular hard prompting. To recall and recap the concepts of prompting, prompt engineering, LLVM (Large Language Vision Models), and LMM (Large Multi-modal Models), please refer to the slides on Week 5 (“Lect5-DL_prompt.pdf”).
        Before beginning this project, please read the following instructions carefully; failure to comply with them may be penalized:
        1. This project does not involve compulsory coding. Complete your project in the given Word document by filling in the “TO FILL” spaces, then save the completed file as a PDF for submission. Please do NOT modify anything else (including these instructions) in your submission file.
        2. The marking of this project is based on how detailed your descriptions and discussions of the given questions are. To score well, please make sure your descriptions and discussions are readable and that adequate visualizations are provided.
        3. The marking of this project is NOT based on any quantitative evaluation criterion (e.g., PSNR) applied to the generated image. Generating a good image does NOT guarantee a high score.
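For context on the PSNR metric mentioned above: it measures pixel-level fidelity between two equal-sized images. The following is a minimal pure-Python sketch (images represented as flat lists of 8-bit grayscale values, an illustrative simplification); it is not part of the required submission.

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized images,
    given as flat sequences of pixel intensities in [0, 255]."""
    assert img_a and len(img_a) == len(img_b)
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Example: two 2x2 images whose pixels all differ by 10 -> MSE = 100
print(round(psnr([0, 50, 100, 150], [10, 60, 110, 160]), 2))  # ≈ 28.13
```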
        4. You may use ChatGPT/Claude or any online LLM service for polishing your writing. However, using these services purely for answering the questions is prohibited (and is, in practice, very obvious). If it is suspected that you generated your answers wholesale with these services, your assignment may be treated as plagiarism.
        5. Submit your completed PDF on Canvas before the deadline: 1759 SGT on 20 April 2024 (updated from the slides). Please note that the deadline is strict, and late submissions will be deducted 10 points (out of 100) for every 24 hours.
        6. The report must be done individually. You may discuss with your peers, but NO plagiarism is allowed. The University, College, Department, and the teaching team take plagiarism very seriously. An originality report may be generated from iThenticate when necessary. A zero mark will be given to anyone found plagiarizing, and a formal report will be handed to the Department/College for further investigation.

        Task 1: generating an image with Stable Diffusion (via Huggingface Spaces) and comparing it with the objective real image. (60%)
        In this task, you are to generate an image with the Stable Diffusion model in Huggingface Spaces. The link is provided here: CLICK ME. You can play with different prompts and negative prompts (prompts that instruct the model NOT to generate something). Your objective is to generate an image that looks like the following image:

        1a) First, select a rather coarse text prompt. A coarse text prompt may not include many details but should be a good starting point for generating images towards our objective. An example could be “A Singaporean university campus with a courtyard.”. Display your generated image and its corresponding text prompt (as well as the negative prompt, if applicable) below: (10%)
        TO FILL
        TO FILL
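For intuition, behind its UI the Huggingface Space drives a call roughly like the sketch below. This is purely illustrative and optional (the project requires no coding); the model name and the prompt strings here are assumptions, not part of the task.

```python
# Purely illustrative: how a prompt and negative prompt feed a
# text-to-image pipeline. Model id and prompts are assumptions.
prompt = "A Singaporean university campus with a courtyard, photorealistic"
negative_prompt = "cartoon, painting, low resolution, distorted buildings"

def generate(prompt, negative_prompt=""):
    """Run Stable Diffusion locally if the `diffusers` library is installed."""
    try:
        import torch
        from diffusers import StableDiffusionPipeline
    except ImportError:
        return None  # library unavailable: this remains a sketch
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt, negative_prompt=negative_prompt).images[0]

print(prompt)
```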
        1b) Describe, in detail, how the generated image compares to the objective image. You may discuss, for example, components of the objective image that are missing from the generated image, or anything generated that does not make sense in the real world. (20%)
        TO FILL
        TO FILL
        Next, you are to improve the generated image with prompt engineering. Note that you may well still be unable to reproduce the objective image exactly. A good reference on prompt engineering can be found here: PROMPT ENGINEERING.
        1c) Describe in detail how you improve your generated image. The description should display the generated images and their corresponding prompts, with detailed reasoning for each change in prompt. If the final improved image is the result of several iterations of prompt refinement, show each step in detail: display the result of each prompt change and discuss it. You should also compare your improved image with both the first image you generated above and the objective image. (30%)
        TO FILL
        TO FILL
        TO FILL
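One way to keep the iteration history required by 1c organized is a small log of each prompt change and its rationale. A purely illustrative Python sketch follows; the prompts and rationales are hypothetical placeholders, not suggested answers.

```python
# Illustrative bookkeeping for prompt-engineering iterations (Task 1c).
# All prompts and rationales below are hypothetical.
from dataclasses import dataclass

@dataclass
class PromptIteration:
    prompt: str
    negative_prompt: str
    rationale: str  # why this change was made

history = [
    PromptIteration(
        "A Singaporean university campus with a courtyard.",
        "",
        "Coarse starting prompt from Task 1a.",
    ),
    PromptIteration(
        "A Singaporean university campus with a courtyard, "
        "photorealistic, daytime, tropical trees.",
        "cartoon, painting, low resolution",
        "Added style and lighting cues; negative prompt suppresses "
        "non-photographic styles.",
    ),
]

for i, step in enumerate(history, 1):
    print(f"Iteration {i}: {step.prompt!r} | negative: {step.negative_prompt!r}")
```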
        Task 2: generating images with another diffusion-based model, DALL-E (mini-DALL-E, via Huggingface Spaces). (40%)
        Stable Diffusion is not the only diffusion-based model capable of generating good-quality images; DALL-E is an alternative. Here, however, we do not compare the two models technically, but rather compare their generated images qualitatively (in a subjective manner). The link to generating with mini-DALL-E is provided here: MINI-DALL-E.
        2a) You should first use the same prompt as in Task 1a and generate the image with mini-DALL-E. Display the generated image and compare, in detail, the newly generated image with the one generated by Stable Diffusion. (10%)
        TO FILL
        TO FILL
        2b) Similar to what we did for Stable Diffusion, you are again to improve the generated image with prompt engineering. Describe in detail how you improve your generated image. As before, if the final improved image is the result of several iterations of prompt refinement, show each step in detail. The description should display the generated images and their corresponding prompts, with detailed reasoning for each change in prompt. You should compare your improved image with both the first image you generated above and the objective image.
        In addition, describe how this improvement process is similar to or different from the one you used with Stable Diffusion. (10%)
        TO FILL
        TO FILL
        2c) From the generation processes in Task 1 and Task 2, discuss the capabilities and limitations of image generation with off-the-shelf diffusion-based models and prompt engineering. You could further elaborate on possible alternatives or improvements that could yield images that are more realistic or closer to the objective image. (20%)
        TO FILL
        TO FILL
