        IMSE7140 Assignment 2
        Cracking CAPTCHAs
        (20 points)
        2.1 Brief Introduction
        CAPTCHA or captcha is the acronym for “Completely Automated Public Turing test
        to tell Computers and Humans Apart.” You are probably already familiar with it
        because of its popularity in preventing bot attacks and spam everywhere. This
        assignment, however, will guide you in implementing a deep learning model that can
        crack a commercial-level captcha!
        Your deliverables for this assignment should include
        1. A single PDF file answers.pdf with answers to all the questions explicitly marked
        by “Q” with a serial number in this document, and
        2. A train.py file to fulfill the programming task requirements marked by “PT.”
        Of course, GPUs can facilitate your experiments. Don’t worry if you don’t have any;
        the training requirement is deliberately simplified.
        2.2 Training your model
        The captchas we will crack come from the multicolorcaptcha package. Please pip
        install the exact version 1.2.0 (currently the latest) in case other releases introduce
        any incompatibility. We use the following code to generate captchas.
        from multicolorcaptcha import CaptchaGenerator

        generator = CaptchaGenerator(0)
        captcha = generator.gen_captcha_image(difficult_level=0)
        image = captcha.image
        characters = captcha.characters
        image.save(f"{characters}.png", "PNG")
        In this snippet, CaptchaGenerator(0) configures the image size to 256 × 144 pixels,
        and the difficulty level is set to 0 so that the captchas contain only four 0–9 digits.
        Please run the code snippet on your computer. If the captcha is successfully generated,
        it should look like Figure 2.1.
        Figure 2.1: Sample captcha with digits 0570
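        For reference, the snippet above can be wrapped in a small loop to batch-generate a
        folder of labelled captchas. The sketch below is only an illustration under my own
        assumptions: the folder name, the sample count, and the index suffix appended to each
        file name are not part of the assignment hand-out.

        import os
        from multicolorcaptcha import CaptchaGenerator

        def generate_dataset(folder, n_samples):
            # Hypothetical helper: write n_samples labelled captchas into `folder`
            os.makedirs(folder, exist_ok=True)
            generator = CaptchaGenerator(0)  # 256 x 144 images, as above
            for i in range(n_samples):
                captcha = generator.gen_captcha_image(difficult_level=0)
                # Encode the label in the file name; the index avoids collisions when
                # the same four digits are drawn more than once
                filename = f"{captcha.characters}_{i:04d}.png"
                captcha.image.save(os.path.join(folder, filename), "PNG")

        generate_dataset("capts_extra", 100)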
        The training and the validation datasets are generated and attached in folders
        capts train and capts val. For any machine learning problem, before you start to
        devise a solution, it is always a good idea to observe the data and gain some intuition
        first. You may immediately recognize some difficulties in this task:
        • The digits have a set of random fonts and colors;
        • Random rotations within a certain range are applied to the digits;
        • Some line segments are randomly added to the image.
        Such a task is considered impossible for traditional pattern recognition methods,
        which would tackle the problem in a pipeline like this: image thresholding, segmentation,
        handcrafted filter design, and pattern matching. We can conjecture that the “filter
        design” step may fail to capture useful features and that “pattern matching” may
        perform poorly.
        Fortunately, in the deep learning era, we can delegate the pattern or feature extraction
        job to deep neural networks. As introduced in the previous lecture “Deep Learning
        for Computer Vision,” the slide “Understand feature maps: CAPTCHA recognition”
        shows that a typical architecture for the task consists of two parts:
        1. A backbone model to extract a feature map from the captcha image, and
        2. A certain amount of prediction heads to interpret the feature map to readable
        forms.
        We will follow this architecture in this assignment. I encourage you to search for
        open-source solutions and learn from their experience. Here we follow this Kaggle post
        by Ashadullah Shawon; a minimal sketch of the two-part structure is shown below.
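        The following Keras sketch illustrates the backbone-plus-heads layout. It is only an
        illustration, not Shawon’s solution or the reference implementation; the layer sizes,
        depths, and output names are assumptions of mine.

        from tensorflow import keras
        from tensorflow.keras import layers

        inputs = keras.Input(shape=(144, 256, 3))  # height x width x RGB channels

        # Backbone: stacked conv/pool blocks that produce a shared feature map
        x = inputs
        for filters in (32, 64, 128):
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            x = layers.MaxPooling2D(2)(x)
        x = layers.Flatten()(x)
        x = layers.Dense(256, activation="relu")(x)

        # Heads: one independent 10-way softmax classifier per digit position
        outputs = [layers.Dense(10, activation="softmax", name=f"digit{i}")(x)
                   for i in range(4)]

        model = keras.Model(inputs, outputs)
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])

        With this layout, Keras reports a separate accuracy for each head (digit0 to digit3),
        which is the per-digit accuracy that the requirements below refer to.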
        PT| Using capts train as the training dataset, capts val as the validation dataset, and
        Keras as the deep learning framework, and referring to Shawon’s solution, provide the
        training code train.py that fulfills the following requirements (a bare-bones skeleton
        is sketched after the list). Copying and pasting the code from the original post is
        allowed, as is other AI-generated code.
        1. The maximal number of epochs should be 10. Considering that some students
        will train the model on a CPU, it is fair to limit the number of epochs so that the
        training time for the model is less than half an hour.
        2. The accuracy for one digit should be no less than 30% after training for
        10 epochs. The training outputs contain four accuracies, one for each of the four
        digits. Since they are similar, you only need to examine one of them. Keep in
        mind that 30% for one digit means that the overall accuracy for recognizing the
        whole captcha is only 0.3^4 = 0.81%. Such a low accuracy is not useful for cracking
        the captcha. However, on the one hand, you may need a GPU to experiment with a
        practical solution; on the other hand, a wild guess for a 0–9 digit has an accuracy
        of 10%, so if your model’s accuracy can reach 30% after 10 epochs, it already
        indicates that the model learns from the training set. Hint: if the accuracy for one
        digit keeps wandering around 0.1 and does not increase in the first two or three
        epochs, that is a signal that you should modify something in your code and try again.
        3. The trained model should be saved as a file my model.keras after training.
        This model file does not need to be uploaded, though.
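        A bare-bones skeleton of train.py that satisfies these requirements might look like the
        sketch below. It assumes the file names in the dataset folders encode the four digits, as
        in the generation snippet of Section 2.2, and it writes the folder and model names with
        underscores (capts_train, capts_val, my_model.keras); adjust these to match the actual
        hand-out. It is a sketch under those assumptions, not the reference solution.

        import glob
        import os
        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        def load_folder(folder):
            # Load every captcha PNG and parse its label from the file name
            images, labels = [], []
            for path in sorted(glob.glob(os.path.join(folder, "*.png"))):
                digits = os.path.basename(path)[:4]  # e.g. "0570" from "0570.png"
                img = keras.utils.img_to_array(keras.utils.load_img(path)) / 255.0
                images.append(img)
                labels.append([int(d) for d in digits])
            labels = np.array(labels)
            x = np.stack(images)
            # One list entry of one-hot targets per prediction head
            y = [keras.utils.to_categorical(labels[:, i], 10) for i in range(4)]
            return x, y

        x_train, y_train = load_folder("capts_train")
        x_val, y_val = load_folder("capts_val")

        # Backbone-plus-heads model, as sketched in Section 2.2
        inputs = keras.Input(shape=(144, 256, 3))
        x = inputs
        for filters in (32, 64, 128):
            x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
            x = layers.MaxPooling2D(2)(x)
        x = layers.Dense(256, activation="relu")(layers.Flatten()(x))
        outputs = [layers.Dense(10, activation="softmax", name=f"digit{i}")(x)
                   for i in range(4)]
        model = keras.Model(inputs, outputs)
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])

        model.fit(x_train, y_train,
                  validation_data=(x_val, y_val),
                  epochs=10, batch_size=32)  # requirement 1: at most 10 epochs
        model.save("my_model.keras")         # requirement 3: save the trained model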
        Q1| Can we convert the captcha images to grayscale at the preprocessing stage before
        training? What is the possible advantage of doing that? If any, can you point out a
        possible disadvantage?
        Q2| After the 10-epoch training, what are your accuracies for one digit on the training
        and the validation datasets, respectively?
        Q3| Is the accuracy for the validation dataset lower than that for the training dataset? What
        are the possible reasons?
        Q4| How can we improve the model’s performance on the validation dataset? List at least
        three different measures.
        2.3 Example: A practical model
        To demonstrate that the backbone–heads architecture can actually solve the real-world
        captcha, I trained a relatively large model on an Nvidia GeForce RTX 30** GPU.
        You may find attached the model file 099**0.9956.keras and the inference code
        inference.py. The accuracies versus training epochs are shown in Figure 2.2. The
        inference code reads a randomly generated captcha, runs inference with the model, and
        compares the predicted results with the targets. You can press “n” for the next captcha
        or “q” to quit the program. You may need to pip install keras-cv to run the code.
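        For intuition, a much-simplified inference loop (printing to the console instead of
        showing an image window) could look like the sketch below. The attached inference.py
        is the authoritative version; the model file name used here is an assumption.

        import numpy as np
        from tensorflow import keras
        from multicolorcaptcha import CaptchaGenerator

        model = keras.models.load_model("my_model.keras")  # assumed file name
        generator = CaptchaGenerator(0)

        while True:
            # Generate a fresh captcha and run it through the model
            captcha = generator.gen_captcha_image(difficult_level=0)
            x = keras.utils.img_to_array(captcha.image)[np.newaxis] / 255.0
            preds = model.predict(x, verbose=0)  # list of four softmax vectors
            guess = "".join(str(int(np.argmax(p))) for p in preds)
            print(f"target: {captcha.characters}  predicted: {guess}")
            if input("Press Enter for the next captcha, or q to quit: ").strip().lower() == "q":
                break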
        Q5| What kind of backbone did I use in the model 099**0.9956.keras?
        Q6| The backbone’s pre-trained weights on the ImageNet 2012 dataset were loaded before
        training. What is the possible advantage of doing that?
        Q7| Why didn’t I use any dropout in the model? Guess the reason.
        Q8| In Figure 2.2, you may have noticed that the accuracies rise very quickly from 0 to 0.9,
        but much more slowly from 0.95 to 0.99. Explain this phenomenon.
        Q9| Using the same hardware (which means you can’t upgrade the GPU, for example), how
        can we speed up the learning process of the model, i.e. the rate of convergence?
        Figure 2.2: Accuracies through 1000 epochs in training (model accuracy versus epoch
        for each of the four digit heads, digi0–digi3)
        Q10| Since the accuracy for one digit is about 99%, the overall accuracy for a captcha is
        0.99^4 ≈ 96%. This performance would be better than that of humans. Can you propose
        some methods to improve the performance even further?
        Please note that not all the questions above have a definite answer. You may also
        need to do some research, as the course doesn’t cover every detail in class. The source
        code for training this model and the reference answers will be available on Moodle or
        sent by email after all students have completed their submissions.

