CSC345/M45 Big Data and Machine Learning
Coursework: Object Recognition
Policy
1. To be completed by students working individually.
2. Feedback: Individual feedback on the report is given via the rubric within Canvas.
3. Learning outcome: The tasks in this assignment are based on both your practical
work in the lab sessions and your understanding of the theories and methods. Thus,
through this coursework, you are expected to demonstrate both practical skills and
theoretical knowledge that you have learned through this module. You also learn to
formally present your understandings through technical writing. It is an opportunity
to apply analytical and critical thinking, as well as practical implementation.
4. Unfair practice: This work is to be attempted individually. You may get help from
your lecturer, academic tutor, and lab tutor, but you may not collaborate with your
peers. Copy and paste from the internet is not allowed. Using external code
without proper referencing is also considered as breaching academic integrity.
5. University Academic Integrity and Academic Misconduct Statement: By
submitting this coursework, electronically and/or hardcopy, you state that you fully
understand and are complying with the university's policy on Academic Integrity and
Academic Misconduct.
The policy can be found at https://www.swansea.ac.uk/academic-services/academic-guide/assessment-issues/academic-integrity-academic-misconduct.
6. Submission deadline: Both the report and your implemented code in Python need to
be submitted electronically to Canvas by 11AM 14th December.
1. Task
The amount of image data is growing exponentially, due in part to convenient and cheap camera
equipment. Teaching computers to recognise objects within a scene has tremendous application
prospects, with applications ranging from medical diagnostics to Snapchat filters. Object
recognition problems have been studied for years in machine learning and computer vision
fields; however, it is still a challenging and open problem for both academic and industry
researchers. The following task is hopefully your first small step on this interesting question
within machine learning.
You are provided with a small image dataset, where there are 100 different categories of objects,
each of which has 500 images for training and 100 images for testing. Each individual image
only contains one object. The task is to apply machine learning algorithms to classify the testing
images into object categories. Code to compute image features and visualize an image is
provided; you can use it to visualize the images and compute features to use in your machine
learning algorithms. You will then use a model to perform classification and report quantitative
results. You do not have to use all the provided code or methods discussed in the labs so far.
You may add additional steps to the process if you wish. You are encouraged to use the
implemented methodology from established Python packages taught in the labsheets (i.e.
sklearn, skimage, keras, scipy,…). You must present a scientific approach, where you make
suitable comparison between at least two methods.
2. Image Dataset – Subset of CIFAR-100
We provide the 100 object categories from the complete CIFAR-100 dataset. Each category
contains 500 training images and 100 testing images, which are stored in two 4D arrays. The
corresponding category labels are also provided. The objects are also grouped into 20 “superclasses”. The size of each image is fixed at **x**x3, corresponding to height, width, and colour
channel, respectively. The training images will be used to train your model(s), and the testing
images will be used to evaluate your model(s). You can download the image dataset and
relevant code for visualization and feature extraction from the Canvas page.
There are six numpy files provided, as follows:
• trnImage, **x**x3x50000 matrix, training images (RGB image)
• trnLabel_fine, 50000 vector, training labels (fine granularity)
• trnLabel_coarse, 50000 vector, training labels (coarse granularity)
• tstImage, **x**x3x10000 matrix, testing images (RGB image)
• tstLabel_fine, 10000 vector, testing labels (fine granularity)
• tstLabel_coarse, 10000 vector, testing labels (coarse granularity)
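Loading the six files might look like the sketch below. The `.npy` file extension is an assumption (the exact names on Canvas may differ); for illustration, the commented-out loads are replaced with a tiny synthetic batch that mimics the same layout.

```python
import numpy as np

# Hypothetical file names -- the assignment provides six numpy files,
# but the exact paths/extensions on Canvas may differ:
# trn_images = np.load("trnImage.npy")        # shape (H, W, 3, 50000)
# trn_fine   = np.load("trnLabel_fine.npy")   # shape (50000,)

# For illustration, mimic the layout with a tiny synthetic batch:
H, W, N = 8, 8, 10
trn_images = np.random.rand(H, W, 3, N)        # height x width x channels x samples
trn_fine = np.random.randint(0, 100, size=N)   # one fine label per sample

print(trn_images.shape)   # (8, 8, 3, 10)
print(trn_fine.shape)     # (10,)
```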
The data is stored within a 4D matrix, and for many of you this will be the first time seeing
a high-dimensional tensor. Although this can seem intimidating, it is relatively
straightforward. The first dimension is the height of the image, the second dimension is the
width, the third dimension is the colour channels (RGB), and the fourth dimension is the
samples. Indexing into the matrix works just like indexing any other numeric array in
Python, but now we deal with the additional dimensions. So, in a 4D matrix ‘X’, to index
all pixels in all channels of the 5th image, we use the index notation X[:, :, :, 4]. In
generic form, to index the i,j,k,l-th element of X we use X[i, j, k, l].
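The indexing described above can be demonstrated on a small synthetic array (the shapes here are toy values, not the real dataset dimensions):

```python
import numpy as np

# Tiny 4D array standing in for the image tensor:
# height x width x channels x samples
X = np.arange(2 * 3 * 3 * 5).reshape(2, 3, 3, 5)

# All pixels in all channels of the 5th image (index 4):
fifth_image = X[:, :, :, 4]
print(fifth_image.shape)   # (2, 3, 3)

# Generic element access X[i, j, k, l] picks one pixel value:
print(X[1, 2, 0, 4] == fifth_image[1, 2, 0])   # True
```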
Figure 1. Coarse Categories of CIFAR-100 Dataset
• aquatic mammals
• fish
• flowers
• food containers
• fruit and vegetables
• household electrical devices
• household furniture
• insects
• large carnivores
• large man-made outdoor things
• large natural outdoor scenes
• large omnivores and herbivores
• medium-sized mammals
• non-insect invertebrates
• people
• reptiles
• small mammals
• trees
• vehicles 1
• vehicles 2
3. Computing Features and Visualizing Images
A notebook, RunMe.ipynb, is provided to explain the concept of computing image features.
The notebook is provided to showcase how to use the skimage.feature.hog() function to obtain
features we wish to train our models on, how to visualize these features as an image, and how
to visualize a raw image from the 4D array. You do not need to use this if your experiments
do not require it! You should also consider the dimensionality of the problem and the
features being used to train your models; this may lead to some questions you might want
to explore.
The function utilises the Histogram of Oriented Gradients method to represent image domain
features as a vector. You are NOT asked to understand how these features are extracted from
the images, but feel free to explore the algorithm, underlying code, and the respective Python
package APIs. You can simply treat the features the same as those you loaded from the
Fisher Iris dataset in the lab work. Note that the hog() method can return two outputs: the
first is the feature vector, the second is an image representation of those features.
Computing the second output is costly and not needed, but RunMe.ipynb provides it for
your information.
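A minimal sketch of extracting the feature vector alone (skipping the costly image output) is shown below. The parameter values are illustrative assumptions, not necessarily those used in RunMe.ipynb, and the example uses a synthetic grayscale image (real CIFAR images are colour and could be converted with skimage.color.rgb2gray first):

```python
import numpy as np
from skimage.feature import hog

# Synthetic 32x32 grayscale image standing in for one CIFAR image.
image = np.random.rand(32, 32)

# Only the first return value (the feature vector) is requested here;
# passing visualize=True would also return the costly image rendering.
features = hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
print(features.shape)   # one 1-D feature vector per image
```

Each image then contributes one row to the feature matrix used to train a classifier.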
4. Learning Algorithms
You can find all relevant learning algorithms in the lab sheets and lecture notes. You can use
the following algorithms (Python (and associated packages) built-in functions) to analyse the
data and carry out the classification task. Please note: if you feed certain algorithms with a
large chunk of data, it may take a long time to train. Not all methods are relevant to the task.
• Lab sheet 2:
o K-Means
o Gaussian Mixture Models
• Lab sheet 3:
o Linear Regression
o Principal Component Analysis
o Linear Discriminant Analysis
• Lab sheet 4:
o Support Vector Machine
o Neural Networks
o Convolutional Neural Networks
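One way to combine the listed methods is an sklearn pipeline, e.g. PCA for dimensionality reduction followed by an SVM. This is only a sketch on synthetic stand-in features; the choice of components, kernel, and scaler are illustrative assumptions, not a prescribed setup:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for HOG feature matrices (rows = images, cols = features).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 324))
y_train = rng.integers(0, 5, size=200)   # toy labels from 5 classes
X_test = rng.normal(size=(50, 324))

# Standardise, reduce dimensionality, then classify.
model = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print(predictions.shape)   # (50,)
```

Swapping the final estimator (or the reduction step) gives the second method needed for the required comparison.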
5. Benchmark and Discussion
Your proposed method should be trained on the training set alone, and then evaluated on the
testing set. To evaluate: you should count, for each category, the percentage of correct
recognition (i.e., classification), and report the confusion matrix. Note that the confusion
matrix can be large, so you may need to think of ways to present it appropriately; you can
place it in your appendices if you wish, or show a particularly interesting sub-region.
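The per-category percentage and the confusion matrix can both be read off sklearn's confusion_matrix: the diagonal divided by each row sum gives per-category accuracy. A toy 3-class sketch (the labels are made up for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical true and predicted labels for a 3-class toy problem.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

cm = confusion_matrix(y_true, y_pred)   # rows = true class, cols = predicted
# Per-category accuracy = diagonal (correct) / row sums (samples per class).
per_class = cm.diagonal() / cm.sum(axis=1)
print(cm)
print(per_class)   # [0.5 1.  0.5]
```

Averaging `per_class` over the 20 coarse or 100 fine categories gives the figure to set against the benchmark.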
The benchmark to compare your methods with is 39.43%, averaged across all 20 super
categories, and 24.49% for the finer granularity categories. Note: this is a reference, not a
target. You will not lose marks for being slightly under this target, but you should be aware of
certain indicative results (very low or very high) that show your method/implementation may
not be correct. Your report will contain a section in which you discuss your results.
6. Assessment
You are required to write a 3-page conference/publication style report to summarize your
proposed method and the results. Your report should contain the following sections:
1. Introduction. Overview of the problem, proposed solution, and experimental results.
2. Method. Present your proposed method in detail. This should cover how the features
are extracted, any feature processing you use (e.g., clustering and histogram generation,
dimensionality reduction), which classifier(s) is/are used, and how they are trained and
tested. This section may contain multiple sub-sections.
3. Results. Present your experimental results in this section. Explain the evaluation
metric(s) you use and present the quantitative results (including the confusion matrix).
4. Conclusion. Provide a summary for your method and the results. Provide your critical
analysis; including shortcomings of the methods and how they may be improved.
5. References. Include correctly formatted references where appropriate. References are
not included in the page limit.
6. Appendices. You may include appendix content if you wish for completeness,
however the content you want graded must be in the main body of the report.
Appendices are not included in the page limit.
Page Limit: The main body of the report should be no more than 3 pages. Font size should be
no smaller than 10, and the text area is approximately 9.5x6 inches. You may use images but
do so with care; do not use images to fill up the pages. You may use an additional cover sheet,
which has your name and student number.
Source Code: Your submission should be professionally implemented and must be formatted
as an ipynb notebook. You may produce your notebook either locally (Jupyter, VSCode etc.),
or you may utilize Google Colab to develop your notebook, however your submission must be
an ipynb notebook. Remember to carefully structure, comment, and markdown your
implementation for clarity.
7. Submission
You will be given the marking rubric in advance of the submission deadline. This assignment
is worth 20% of the total module credit.
Submit your work electronically to Canvas. Your report should be in PDF format only.
Your code must be in a .ipynb format. Both files should be named with your student number,
i.e. 123456.pdf and 123456.ipynb, where 123456 is your student number.
There are two submission areas on Canvas, one for the report and another for the .ipynb
notebook. You must upload both submissions to the correct area by the deadline.
The deadline for this coursework is 11AM 14th December.