COMP9417 - Machine Learning Homework 1: Regularized Optimization & Gradient Methods
Introduction
In this homework we will explore gradient-based optimization. Gradient-based algorithms have been crucial to the development of machine learning in the last few decades. The most famous example is the backpropagation algorithm used in deep learning, which is in fact just a particular application of a simple algorithm known as (stochastic) gradient descent. We will first implement gradient descent from scratch on a deterministic problem (no data), and then extend our implementation to solve a real-world regression problem.

Points Allocation
There are a total of 30 marks.

Question 1 a): 2 marks
Question 1 b): 4 marks
Question 1 c): 2 marks
Question 1 d): 2 marks
Question 1 e): 6 marks
Question 1 f): 6 marks
Question 1 g): 4 marks
Question 1 h): 2 marks
Question 1 i): 2 marks
What to Submit
A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
.py file(s) containing all code you used for the project, which should be provided in a separate .zip
This code must match the code provided in the report.
You may be deducted points for not following these instructions.
You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.

You cannot submit a Jupyter notebook; this will receive a mark of zero. You may still develop your code in a notebook and then copy it into a .py file, or use a tool such as nbconvert.
We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
Please complete your homework on your own; do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge in your submission anyone you discussed any of the problems with (including their name(s) and zID).
As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
You may not use SymPy or any other symbolic programming toolkits to answer the derivation questions. This will result in an automatic grade of zero for the relevant question. You must do the derivations manually.
When and Where to Submit
Due date: Week 4, Monday March 4th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
Submission must be made on Moodle, no exceptions.
Question 1. Gradient Based Optimization
The general framework for a gradient method for finding a minimizer of a function f : Rn → R is defined by

x(k+1) = x(k) − αk∇f(x(k)),        k = 0, 1, 2, . . . ,        (1)

where αk > 0 is known as the step size, or learning rate. Consider the following simple example of minimizing the function g(x) = 2√(x³ + 1). We first note that g′(x) = 3x²(x³ + 1)^(−1/2). We then need to choose a starting value of x, say x(0) = 1. Let's also take the step size to be constant, αk = α = 0.1. Then we have the following iterations:

x(1) = x(0) − 0.1 × 3(x(0))²((x(0))³ + 1)^(−1/2) = 0.7878679656440357
x(2) = x(1) − 0.1 × 3(x(1))²((x(1))³ + 1)^(−1/2) = 0.6352617090300827
x(3) = 0.5272505146487…

and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up and compare it to the true minimum of the function, which is x∗ = −1; does the algorithm converge to the true minimizer? Why/why not?). This idea works for functions that have vector-valued inputs, which is often the case in machine learning. For example, when we minimize a loss function we do so with respect to a weight vector, β. When we take the step size to be constant at each iteration, this algorithm is known as gradient descent. For the entirety of this question, do not use any existing implementations of gradient methods; doing so will result in an automatic mark of zero for the entire question.
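As a quick sketch of the exercise suggested above (not part of the required submission), the worked example might be coded as follows; the 100-iteration cap and the number of printed values are purely illustrative choices:

import numpy as np

def g_prime(x):
    # g'(x) = 3x^2 (x^3 + 1)^(-1/2), the derivative from the worked example
    return 3 * x**2 / np.sqrt(x**3 + 1)

alpha = 0.1   # constant step size, as in the example
x = 1.0       # starting value x(0) = 1
for k in range(100):   # illustrative iteration cap
    x = x - alpha * g_prime(x)
    if k < 3:
        print(f"x({k + 1}) = {x}")
print("final x:", x)   # compare with the true minimizer x* = -1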

(a) Consider the following optimisation problem:

min_{x∈Rⁿ} f(x),        where        f(x) = (1/2)‖Ax − b‖₂² + (γ/2)‖x‖₂²,

and where A ∈ Rm×n, b ∈ Rm are defined as

A = [ −1   3   0  −4
       0   2  −1  −2
       3   0   2   7 ],        b = [ −4
                                      3
                                      1 ],

and γ is a positive constant. Run gradient descent on f using a step size of α = 0.01 and γ = 2 and starting point of x(0) = (1, 1, 1, 1). You will need to terminate the algorithm when the following condition is met: ‖∇f(x(k))‖₂ < 0.001. In your answer, clearly write down the version of the gradient steps (1) for this problem. Also, print out the first 5 and last 5 values of x(k), clearly indicating the value of k.

What to submit: an equation outlining the explicit gradient update, a print out of the first 5 (k = 5 inclusive) and last 5 rows of your iterations. Use the round function to round your numbers to 4 decimal places. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
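A generic sketch of fixed-step gradient descent with the gradient-norm stopping rule used in this question is given below; grad_f is a placeholder for the gradient you derive as part of your answer, and the commented-out call only illustrates how such a function might be invoked:

import numpy as np

def gradient_descent(grad_f, x0, alpha=0.01, tol=0.001, max_iter=100000):
    # run x <- x - alpha * grad_f(x) until ||grad_f(x)||_2 < tol
    x = x0.copy()
    history = [x.copy()]
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g
        history.append(x.copy())
    return np.array(history)

# grad_f must be derived from f (it depends on A, b and gamma), e.g.:
# history = gradient_descent(grad_f, np.ones(4), alpha=0.01, tol=0.001)
# print(np.round(history[:5], 4)); print(np.round(history[-5:], 4))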

Consider now a slightly different problem: let y, β ∈ Rp and λ > 0. Further, we define the matrix W ∈ R(p−2)×p by

W = [ 1  −2   1
          1  −2   1
              ⋱    ⋱    ⋱
                   1  −2   1 ],

where blanks denote zero elements (if it is not already clear: for the first row of W, W11 = 1, W12 = −2, W13 = 1 and W1j = 0 for any j ≥ 4; for the second row, W21 = 0, W22 = 1, W23 = −2, W24 = 1 and W2j = 0 for any j ≥ 5, and so on). Define the loss function:

L(β) = (1/(2p))‖y − β‖₂² + λ‖Wβ‖₂².        (2)

Code to load in the data needed for this problem is provided in code student.py. Note that the t variable is purely for plotting purposes; it should not appear in any of your calculations.

(b) Show that

β̂ = arg min_β L(β) = (I + 2λpWᵀW)⁻¹y.

Update the following code (a copy is provided in code student.py) so that it returns a plot of β̂ and calculates L(β̂). For your code implementation only, set λ = 0.9.

def create_W(p):
    ## generate W, which is a (p-2) x p matrix as defined in the question
    W = np.zeros((p-2, p))
    b = np.array([1, -2, 1])
    for i in range(p-2):
        W[i, i:i+3] = b
    return W

def loss(beta, y, W, L):
    ## compute the loss for a given vector beta, data y, matrix W and regularization parameter L (lambda)
    # your code here
    return loss_val

## your code here, e.g. compute betahat and loss, and set other params..

plt.plot(t_var, y_var, zorder=1, color='red', label='truth')
plt.plot(t_var, beta_hat, zorder=3, color='blue',
         linewidth=2, linestyle='--', label='fit')
plt.legend(loc='best')
plt.title(f"L(beta_hat) = {loss(beta_hat, y, W, L)}")
plt.show()

What to submit: a closed form expression along with your working, a single plot and a screen shot of your code along with a copy of your code in your .py file.
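For concreteness, a minimal sketch of evaluating the closed form stated above might look as follows; the synthetic y is only a placeholder (in the assignment, y comes from the provided data), and the inline loss computation simply restates (2):

import numpy as np

def create_W(p):
    # (p-2) x p matrix with rows (1, -2, 1), as in the provided skeleton
    W = np.zeros((p - 2, p))
    for i in range(p - 2):
        W[i, i:i + 3] = np.array([1, -2, 1])
    return W

p = 50                                          # placeholder dimension
y = np.random.default_rng(0).normal(size=p)     # placeholder data
lam = 0.9
W = create_W(p)

# beta_hat = (I + 2*lambda*p*W^T W)^{-1} y, via a linear solve rather than an explicit inverse
beta_hat = np.linalg.solve(np.eye(p) + 2 * lam * p * W.T @ W, y)

# L(beta_hat) from (2): (1/(2p))||y - beta||_2^2 + lambda * ||W beta||_2^2
L_hat = np.sum((y - beta_hat) ** 2) / (2 * p) + lam * np.sum((W @ beta_hat) ** 2)
print(round(L_hat, 4))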

(c) Write out each of the two terms that make up the loss function ((1/(2p))‖y − β‖₂² and λ‖Wβ‖₂²) explicitly using summations. Use this representation to explain the role played by each of the two terms. Be as specific as possible. What to submit: your answer, and any working either typed or handwritten.
(d) Show that we can write (2) in the following way:

L(β) = (1/p) ∑_{j=1}^{p} Lj(β),

where Lj(β) depends on the data y1, . . . , yp only through yj. Further, show that

∇Lj(β) = (0, . . . , 0, −(yj − βj), 0, . . . , 0)ᵀ + 2λWᵀWβ,        j = 1, . . . , p.

Note that the first vector is the p-dimensional vector with zero everywhere except for the j-th index. Take a look at the supplementary material if you are confused by the notation. What to submit: your answer, and any working either typed or handwritten.
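As an optional numerical sanity check (it does not replace the requested derivation), one can verify that the average of the per-observation gradients stated above matches a finite-difference estimate of the gradient of (2); the data below are synthetic placeholders:

import numpy as np

def create_W(p):
    W = np.zeros((p - 2, p))
    for i in range(p - 2):
        W[i, i:i + 3] = np.array([1, -2, 1])
    return W

p, lam = 10, 0.001
rng = np.random.default_rng(1)
y, beta = rng.normal(size=p), rng.normal(size=p)
W = create_W(p)

def L_full(b):
    # the loss (2): (1/(2p))||y - b||_2^2 + lambda * ||W b||_2^2
    return np.sum((y - b) ** 2) / (2 * p) + lam * np.sum((W @ b) ** 2)

# average of the per-observation gradients grad L_j(beta) given above
grad_terms = []
for j in range(p):
    e_j = np.zeros(p)
    e_j[j] = 1.0
    grad_terms.append(-(y[j] - beta[j]) * e_j + 2 * lam * W.T @ W @ beta)
avg_grad = np.mean(grad_terms, axis=0)

# central finite-difference estimate of the gradient of L at beta
eps = 1e-6
numeric = np.array([(L_full(beta + eps * np.eye(p)[j]) - L_full(beta - eps * np.eye(p)[j])) / (2 * eps)
                    for j in range(p)])
print(np.allclose(avg_grad, numeric))   # expected: True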

(e) In this question, you will implement (batch) GD from scratch to solve (2). Use an initial estimate β(0) = 1p (the p-dimensional vector of ones) and λ = 0.001, and run the algorithm for 1000 epochs (an epoch is one pass over the entire data, so a single GD step). Repeat this for the following step sizes:

α ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2}

To monitor the performance of the algorithm, we will plot the value

∆(k) = L(β(k)) − L(β̂),

where β̂ is the true (closed form) solution derived earlier. Present your results in a single 3 × 3 grid plot, with each subplot showing the progression of ∆(k) when running GD with a specific step-size. State which step-size you think is best in terms of speed of convergence. What to submit: a single plot. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.

(f) We will now implement SGD from scratch to solve (2). Use an initial estimate β(0) = 1p (the vector of ones) and λ = 0.001, and run the algorithm for 4 epochs (this means a total of 4p updates of β). Repeat this for the following step sizes:
α ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2}

Present an analogous single 3 × 3 grid plot as in the previous question. Instead of choosing an index randomly at each step of SGD, we will cycle through the observations in the order they are stored in y to ensure consistent results. Report the best step-size choice. In some cases you might observe that the value of ∆(k) jumps up and down, and this is not something you would have seen using batch GD. Why do you think this might be happening?

What to submit: a single plot and some commentary. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.

An alternative Coordinate Based scheme: In GD, SGD and mini-batch GD, we always update the entire p-dimensional vector β at each iteration. An alternative approach is to update each of the p parameters individually. To make this idea more clear, we write the loss function of interest L(β) as L(β1, β2 . . . , βp). We initialize β(0) , and then solve for k = 1, 2, 3, . . . ,

β1(k) = arg min_{β1} L(β1, β2(k−1), β3(k−1), . . . , βp(k−1))
β2(k) = arg min_{β2} L(β1(k), β2, β3(k−1), . . . , βp(k−1))
⋮
βp(k) = arg min_{βp} L(β1(k), β2(k), β3(k), . . . , βp).

Note that each of the minimizations is over a single (1-dimensional) coordinate of β, and also that as soon as we update βj(k), we use the new value when solving the update for βj+1(k), and so on. The idea is then to cycle through these coordinate-level updates until convergence. In the next two parts we will implement this algorithm from scratch for the problem we have been working on (2).
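Schematically, the cyclic coordinate scheme described above can be organised as in the sketch below; argmin_coord is a placeholder for the per-coordinate minimizer you derive in part (g):

import numpy as np

def coordinate_scheme(beta0, argmin_coord, n_updates=1000):
    # cycle through the coordinates, replacing beta_j by the minimizer of the
    # loss over beta_j with all other coordinates held at their current values
    beta = beta0.copy()
    p = len(beta)
    trace = [beta.copy()]
    for k in range(n_updates):
        j = k % p                        # coordinates are updated in order 1, 2, ..., p, 1, 2, ...
        beta[j] = argmin_coord(j, beta)  # closed-form coordinate minimizer (placeholder)
        trace.append(beta.copy())
    return np.array(trace)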

(g) Derive closed-form expressions for β̂1, β̂2, . . . , β̂p where for j = 1, 2, . . . , p:

β̂j = arg min_{βj} L(β1, . . . , βj−1, βj, βj+1, . . . , βp).

What to submit: a closed form expression along with your working.

Hint: Be careful, this is not as straight-forward as it might seem at first. It is recommended to choose a value for p, e.g. p = 8 and first write out the expression in terms of summations. Then take derivatives to get the closed form expressions.

(h) Implement both gradient descent and the coordinate scheme in code (from scratch) and apply them to the provided data. In your implementation:
Use λ = 0.001 for the coordinate scheme, and step-size α = 1 for your gradient descent scheme.
Initialize both algorithms with β = 1p, the p-dimensional vector of ones.
For the coordinate scheme, be sure to update the βj's in order (i.e. 1, 2, 3, . . .).
For your coordinate scheme, terminate the algorithm after 1000 updates (each time you update a single coordinate, that counts as an update).
For your GD scheme, terminate the algorithm after 1000 epochs.
Create a single plot of k vs ∆(k) = L(β(k)) − L(β̂), where β̂ is the closed form expression derived earlier. Your plot should have both the coordinate scheme (blue) and GD (green) displayed and should start from k = 0. Your plot should have a legend.
What to submit: a single plot and a screen shot of your code along with a copy of your code in your .py file.

(i) Based on your answer to the previous part, when would you prefer GD? When would you prefer the coordinate scheme? What to submit: Some commentary.

Supplementary: Background on Gradient Descent
As noted in the lectures, there are a few variants of gradient descent that we will briefly outline here. Recall that in gradient descent our update rule is

β(k+1) = β(k) − αk∇L(β(k)),        k = 0, 1, 2, . . . ,

where L(β) is the loss function that we are trying to minimize. In machine learning, it is often the case that the loss function takes the form

L(β) = (1/n) ∑_{i=1}^{n} Li(β),

i.e. the loss is an average of n functions that we have labelled Li, and each Li depends on the data only through (xi, yi). It then follows that the gradient is also an average of the form

∇L(β) = (1/n) ∑_{i=1}^{n} ∇Li(β).

We can now define some popular variants of gradient descent.

(i) Gradient Descent (GD) (also referred to as batch gradient descent): here we use the full gradient, as in we take the average over all n terms, so our update rule is:

β(k+1) = β(k) − (αk/n) ∑_{i=1}^{n} ∇Li(β(k)),        k = 0, 1, 2, . . . .

(ii) Stochastic Gradient Descent (SGD): instead of considering all n terms, at the k-th step we choose an index ik randomly from {1, . . . , n}, and update

β(k+1) = β(k) − αk∇Lik(β(k)),        k = 0, 1, 2, . . . .

Here, we are approximating the full gradient ∇L(β) using ∇Lik(β).

(iii) Mini-Batch Gradient Descent: GD (using all terms) and SGD (using a single term) represent the two possible extremes. In mini-batch GD we choose batches of size 1 < B < n randomly at each step, call their indices {ik1, ik2, . . . , ikB}, and then we update

β(k+1) = β(k) − (αk/B) ∑_{j=1}^{B} ∇Likj(β(k)),        k = 0, 1, 2, . . . ,

so we are still approximating the full gradient but using more than a single element as is done in SGD.
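Schematically, the three variants correspond to the following update steps, where grad_Li(i, beta) stands for ∇Li(β) and is assumed to be supplied by the caller (this is an illustrative sketch, not required code):

import numpy as np

def gd_step(beta, grad_Li, n, alpha):
    # batch GD: average the gradient over all n terms
    return beta - alpha * np.mean([grad_Li(i, beta) for i in range(n)], axis=0)

def sgd_step(beta, grad_Li, n, alpha, rng):
    # SGD: approximate the full gradient with a single randomly chosen term
    i = rng.integers(n)
    return beta - alpha * grad_Li(i, beta)

def minibatch_step(beta, grad_Li, n, alpha, B, rng):
    # mini-batch GD: average the gradient over a random batch of size 1 < B < n
    batch = rng.choice(n, size=B, replace=False)
    return beta - alpha * np.mean([grad_Li(i, beta) for i in batch], axis=0)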