CS 189/289A Introduction to Machine Learning
Due: Wednesday, February 26 at 11:59 pm
• Homework 3 consists of coding assignments and math problems.
• We prefer that you typeset your answers using LaTeX or other word processing software. If you haven't yet learned LaTeX, one of the crown jewels of computer science, now is a good time! Neatly handwritten and scanned solutions will also be accepted.
• In all of the questions, show your work, not just the final answer.
• The assignment covers concepts on Gaussian distributions and classifiers. Some of the material may not have been covered in lecture; you are responsible for finding resources to understand it.
• Start early; you can submit models to Kaggle only twice a day!
Deliverables:
1. Submit your predictions for the test sets to Kaggle as early as possible. Include your Kaggle
scores in your write-up. The Kaggle competition for this assignment can be found at
• MNIST: https://www.kaggle.com/t/ca07d5e39d9b49cd946deb02583ad31f
• SPAM: https://www.kaggle.com/t/3fb20b97254049f8acbf189a75830627
2. Write-up: Submit your solution in PDF format to “Homework 3 Write-Up” in Gradescope.
• On the same page as the honor code, please list students and their SIDs with whom you
collaborated.
• Start each question on a new page. If there are graphs, include those graphs on the
same pages as the question write-up. DO NOT put them in an appendix. We need each
solution to be self-contained on pages of its own.
• Only PDF uploads to Gradescope will be accepted. You are encouraged to use LaTeX or Word to typeset your solution. You may also scan a neatly handwritten solution to produce the PDF.
• Replicate all your code in an appendix. Begin the code for each coding question on a fresh page. Do not put code from multiple questions on the same page. When you upload this PDF to Gradescope, make sure that you assign the relevant pages of your code from the appendix to the correct questions.
• While collaboration is encouraged, everything in your solution must be your (and only
your) creation. Copying the answers or code of another student is strictly forbidden.
Furthermore, all external material (i.e., anything outside lectures and assigned readings,
including figures and pictures) should be cited properly. We wish to remind you that
consequences of academic misconduct are particularly severe!
3. Code: Submit your code as a .zip file to “Homework 3 Code”. The code must be in a form that
enables the readers to compile (if necessary) and run it to produce your Kaggle submissions.
• Set a seed for all pseudo-random numbers generated in your code. This ensures
your results are replicated when readers run your code. For example, you can seed
numpy with np.random.seed(42).
• Include a README with your name, student ID, the values of the random seeds you used,
and instructions for compiling (if necessary) and running your code. If the data files
need to be anywhere other than the main directory for your code to run, let us know
where.
• Do not submit any data files. Supply instructions on how to add data to your code.
• Code requiring exorbitant memory or execution time might not be considered.
• Code submitted here must match that in the PDF Write-up. The Kaggle score will not
be accepted if the code provided a) does not compile/run or b) runs but does not produce
the file submitted to Kaggle.
1 Honor Code
1. Please list the names and SIDs of all students you have collaborated with below.
2. Declare and sign the following statement (Mac Preview, PDF Expert, and FoxIt PDF Reader,
among others, have tools to let you sign a PDF file):
“I certify that all solutions are entirely my own and that I have not looked at anyone else’s
solution. I have given credit to all external sources I consulted.”
Signature:
2 Gaussian Classification
Let f_{X|Y=Ci}(x) ∼ N(µi, σ²) for a two-class, one-dimensional (d = 1) classification problem with classes C1 and C2, P(Y = C1) = P(Y = C2) = 1/2, and µ2 > µ1.
1. Find the Bayes optimal decision boundary and the corresponding Bayes decision rule by
finding the point(s) at which the posterior probabilities are equal. Use the 0-1 loss function.
2. Suppose the decision boundary for your classifier is x = b. The Bayes error is the probability
of misclassification, namely,
Pe = P((C1 misclassified as C2) ∪ (C2 misclassified as C1)).
Show that the Bayes error associated with this decision rule, in terms of b, is
\[
P_e(b) = \frac{1}{2\sqrt{2\pi}\,\sigma}\left[\int_{-\infty}^{b} \exp\!\left(-\frac{(x-\mu_2)^2}{2\sigma^2}\right)dx \;+\; \int_{b}^{\infty} \exp\!\left(-\frac{(x-\mu_1)^2}{2\sigma^2}\right)dx\right].
\]
3. Using the expression above for the Bayes error, calculate the optimal decision boundary b* that minimizes Pe(b). How does this value compare to that found in part 1? Hint: Pe(b) is convex for µ1 < b < µ2.
3 Classification and Risk
Suppose we have a classification problem with classes labeled 1, . . . , c and an additional "doubt" category labeled c + 1. Let r : R^d → {1, . . . , c + 1} be a decision rule. Define the loss function
\[
L(r(x) = i, y = j) =
\begin{cases}
0 & \text{if } i = j, \quad i, j \in \{1, \dots, c\}, \\
\lambda_c & \text{if } i \neq j, \quad i \in \{1, \dots, c\}, \\
\lambda_d & \text{if } i = c + 1,
\end{cases}
\tag{1}
\]
where λc ≥ 0 is the loss incurred for making a misclassification and λd ≥ 0 is the loss incurred for
choosing doubt. In words this means the following:
• When you are correct, you should incur no loss.
• When you are incorrect, you should incur some penalty λc for making the wrong choice.
• When you are unsure about what to choose, you might want to select a category corresponding to "doubt" and you should incur a penalty λd.
The risk of classifying a new data point x as a class i ∈ {1, 2, . . . , c + 1} is
\[
R(r(x) = i \mid x) = \sum_{j=1}^{c} L(r(x) = i, y = j)\, P(Y = j \mid x).
\]
To be clear, the actual label Y can never be c + 1.
1. First, we will simplify the risk function using our specific loss function separately for when
r(x) is or is not the doubt category.
(a) Prove that R(r(x) = i | x) = λc (1 − P(Y = i | x)) when i is not the doubt category (i.e., i ≠ c + 1).
(b) Prove that R(r(x) = c + 1 | x) = λd.
2. Show that the following policy ropt(x) obtains the minimum risk:
• (R1) Find the non-doubt class i such that P(Y = i | x) ≥ P(Y = j | x) for all j, meaning you pick the class with the highest probability given x.
• (R2) Choose class i if P(Y = i | x) ≥ 1 − λd/λc.
• (R3) Choose doubt otherwise.
3. How would you modify your optimum decision rule if λd = 0? What happens if λd > λc?
Explain why this is or is not consistent with what one would expect intuitively.
4 Maximum Likelihood Estimation and Bias
Let X1, . . . , Xn ∈ R be n sample points drawn independently from univariate normal distributions such that Xi ∼ N(µ, σi²), where σi = σ/√i for some parameter σ. (Every sample point comes from a distribution with a different variance.) Note the word "univariate"; we are working in dimension d = 1, and each "point" is just a real number.
1. Derive the maximum likelihood estimates, denoted µ̂ and σ̂, for the mean µ and the parameter σ. (The formulae from class don't apply here, because every point has a different variance.) You may write an expression for σ̂² rather than σ̂ if you wish—it's probably simpler that way. Show all your work.
2. Given the true value of a statistic θ and an estimator θ̂ of that statistic, we define the bias of the estimator to be the expected difference from the true value. That is,
\[
\operatorname{bias}(\hat{\theta}) = E[\hat{\theta}] - \theta.
\]
We say that an estimator is unbiased if its bias is 0.
Either prove or disprove the following statement: The MLE sample estimator µ̂ is unbiased.
Hint: Neither the true µ nor the true σ² is known when estimating sample statistics; thus we need to plug in appropriate estimators.
3. Either prove or disprove the following statement: The MLE sample estimator σ̂² is unbiased.
Hint: Neither the true µ nor the true σ² is known when estimating sample statistics; thus we need to plug in appropriate estimators.
4. Suppose the Variance Fairy drops by to give us the true value of σ², so that we only have to estimate µ. Given the loss function L(µ̂, µ) = (µ̂ − µ)², what is the risk of our MLE estimator µ̂?
5 Covariance Matrices and Decompositions
As described in lecture, the covariance matrix Var(R) ∈ R^{d×d} for a random variable R ∈ R^d with mean µ ∈ R^d is
\[
\operatorname{Var}(R) = \operatorname{Cov}(R, R) = E\!\left[(R - \mu)(R - \mu)^\top\right] =
\begin{bmatrix}
\operatorname{Var}(R_1) & \operatorname{Cov}(R_1, R_2) & \cdots & \operatorname{Cov}(R_1, R_d) \\
\operatorname{Cov}(R_2, R_1) & \operatorname{Var}(R_2) & & \operatorname{Cov}(R_2, R_d) \\
\vdots & & \ddots & \vdots \\
\operatorname{Cov}(R_d, R_1) & \operatorname{Cov}(R_d, R_2) & \cdots & \operatorname{Var}(R_d)
\end{bmatrix},
\]
where Cov(R_i, R_j) = E[(R_i − µ_i)(R_j − µ_j)] and Var(R_i) = Cov(R_i, R_i).
If the random variable R is sampled from the multivariate normal distribution N(µ, Σ) with mean µ and covariance matrix Σ, then as you proved in Homework 2, Var(R) = Σ.
Given n points X1, X2, . . . , Xn sampled from N(µ, Σ), we can estimate Σ with the maximum likelihood estimator
\[
\hat{\Sigma} = \frac{1}{n} \sum_{i=1}^{n} (X_i - \hat{\mu})(X_i - \hat{\mu})^\top,
\]
which is also known as the sample covariance matrix.
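For concreteness (and not as part of any required answer), the following is a minimal NumPy sketch of this 1/n-normalized estimator; the data matrix X here is a hypothetical placeholder with one sample point per row.

```python
import numpy as np

# Hypothetical data: n = 100 sample points in d = 3 dimensions, one point per row.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))

mu_hat = X.mean(axis=0)                       # MLE of the mean
centered = X - mu_hat                         # subtract the sample mean from every point
Sigma_hat = (centered.T @ centered) / len(X)  # 1/n normalization (MLE), not 1/(n-1)

# np.cov divides by n-1 by default; bias=True reproduces the 1/n estimator above.
assert np.allclose(Sigma_hat, np.cov(X, rowvar=False, bias=True))
```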
1. The estimate Σ̂ makes sense as an approximation of Σ only if Σ̂ is invertible. Under what circumstances is Σ̂ not invertible? Express your answer in terms of the geometric arrangement of the sample points Xi. We want a geometric characterization, not an algebraic one. Make sure your answer is complete; i.e., it includes all cases in which the covariance matrix of the sample is singular. (No proof is required.)
2. Suggest a way to fix a singular covariance matrix estimator Σˆ by replacing it with a similar but
invertible matrix. Your suggestion may be a kludge, but it should not change the covariance
matrix too much. Note that infinitesimal numbers do not exist; if your solution uses a very
small number, explain how to calculate a number that is sufficiently small for your purposes.
3. Consider the normal distribution N(0, Σ) with mean µ = 0. Consider all vectors of length 1; i.e., any vector x for which ‖x‖ = 1. Which vector(s) x of length 1 maximize the PDF f(x)? Which vector(s) x of length 1 minimize f(x)? Your answers should depend on the properties of Σ. Explain your answer.
4. Suppose we have X ∼ N(0, Σ), X ∈ R^n, and a unit vector y ∈ R^n. We can compute the projection of the random vector X onto a unit direction vector y as p = y^⊤X. First, compute the variance of p. Second, with this information, what does the largest eigenvalue λmax of the covariance matrix tell us about the variances of expressions of the form y^⊤X?
6 Isocontours of Normal Distributions
Let f(µ, Σ) be the probability density function of a normally distributed random variable in R².
Write code to plot the isocontours of the following functions, each on its own separate figure. Make
sure it is clear which figure belongs to which part. You’re free to use any plotting libraries or stats
utilities you like; for instance, in Python you can use Matplotlib and SciPy. Choose the boundaries
of the domain you plot large enough to show the interesting characteristics of the isocontours (use
your judgment). Make sure we can tell what isovalue each contour is associated with—you can do
this with labels or a colorbar/legend.
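For reference only, here is a minimal sketch of how one might produce labeled isocontour plots with Matplotlib and SciPy; the mean, covariance, and plotting range below are arbitrary placeholders rather than the actual functions asked for in this question.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

# Placeholder parameters; substitute the mean and covariance for each part.
mu = np.array([1.0, 1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Evaluate the density on a grid large enough to show the contours' shape.
xs = np.linspace(-5.0, 7.0, 400)
ys = np.linspace(-5.0, 7.0, 400)
XX, YY = np.meshgrid(xs, ys)
ZZ = multivariate_normal(mean=mu, cov=Sigma).pdf(np.dstack((XX, YY)))

fig, ax = plt.subplots(figsize=(6, 6))
contours = ax.contour(XX, YY, ZZ, levels=8)
ax.clabel(contours, inline=True, fontsize=8)  # label each contour with its isovalue
ax.set_xlabel("x1")
ax.set_ylabel("x2")
ax.set_title("Isocontours of a bivariate normal density")
ax.set_aspect("equal")
plt.show()
```

A colorbar (e.g., via contourf and fig.colorbar) works equally well for indicating isovalues.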
7 Eigenvectors of the Gaussian Covariance Matrix
Consider two one-dimensional random variables X1 ∼ N(3, 9) and X2 ∼ (1/2)X1 + N(4, 4), where N(µ, σ²) is a Gaussian distribution with mean µ and variance σ². (This means that you have to draw X1 first and use it to compute a random X2. Be mindful that most packages for sampling from a Gaussian distribution use standard deviation, not variance, as input.)
Write a program that draws n = 100 random two-dimensional sample points from (X1, X2). For
each sample point, the value of X2 is a function of the value of X1 for that same sample point,
but the sample points are independent of each other. In your code, make sure to choose and set a
fixed random number seed for whatever random number generator you use, so your simulation is
reproducible, and document your choice of random number seed and random number generator in
your write-up. For each of the following parts, include the corresponding output of your program.
1. Compute the mean (in R²) of the sample.
2. Compute the 2 × 2 covariance matrix of the sample (based on the sample mean, not the true
mean—which you would not know given real-world data). Ensure that the sample covariance
uses the maximum likelihood estimator as described in Question 5.
3. Compute the eigenvectors and eigenvalues of this covariance matrix.
4. On a two-dimensional grid with a horizontal axis for X1 with range [−15, 15] and a vertical axis for X2 with range [−15, 15], plot
(i) all n = 100 data points, and
(ii) arrows representing both covariance eigenvectors. The eigenvector arrows should originate at the mean and have magnitudes equal to their corresponding eigenvalues.
Hint: make sure your plotting software is set so the figure is square (i.e., the horizontal and
vertical scales are the same). Not doing that may lead to hours of frustration!
5. Let U = [v1 v2] be a 2×2 matrix whose columns are the unit eigenvectors of the covariance matrix, where v1 is the eigenvector with the larger eigenvalue. We use U^⊤ as a rotation matrix to rotate each sample point from the (X1, X2) coordinate system to a coordinate system aligned with the eigenvectors. (As U^⊤ = U^{−1}, the matrix U reverses this rotation, moving back from the eigenvector coordinate system to the original coordinate system.) Center your sample points by subtracting the mean µ from each point; then rotate each point by U^⊤, giving x_rotated = U^⊤(x − µ). Plot these rotated points on a new two-dimensional grid, again with both axes having range [−15, 15]. (You are not required to plot the eigenvectors, which would be horizontal and vertical.)
In your plots, clearly label the axes and include a title. Moreover, make sure the horizontal and vertical axes have the same scale! The aspect ratio should be one. A brief end-to-end sketch of these steps is given below.
8 Gaussian Classifiers for Digits and Spam
In this problem, you will build classifiers based on Gaussian discriminant analysis. Unlike Homework 1, you are NOT allowed to use any libraries for out-of-the-box classification (e.g., sklearn).
You may use anything in numpy and scipy.
The training and test data can be found with this homework. Do NOT use the training/test data
from Homework 1, as they have changed for this homework. The starter code is similar to
HW1's; we provide check.py and save_csv.py files for you to produce your Kaggle submission
files. Submit your predicted class labels for the test data on the Kaggle competition website and be
sure to include your Kaggle display name and scores in your writeup. Also be sure to include an
appendix of your code at the end of your writeup.
Reminder: please also select relevant code from the appendix on Gradescope for your answer to
each question.
1. (Code) Taking pixel values as features (no new features yet, please), fit a Gaussian distribution to each digit class using maximum likelihood estimation. This involves computing a mean and a covariance matrix for each digit class, as discussed in Lecture 9 and Section 4.4 of An Introduction to Statistical Learning. Attach the relevant code as your answer to this part.
Hint: You may, and probably should, contrast-normalize the images before using their pixel values. One way to normalize is to divide the pixel values of an image by the ℓ2-norm of its pixel values.
2. (Written Answer + Graph) Visualize the covariance matrix for a particular class (digit). Tell
us which digit and include your visualization in your write-up. How do the diagonal terms
compare with the off-diagonal terms? What do you conclude from this?
3. Classify the digits in the test set on the basis of posterior probabilities with two different
approaches.
(a) (Graph) Linear discriminant analysis (LDA). Model the class conditional probabilities as Gaussians N(µC, Σ) with different means µC (for class C) and the same pooled within-class covariance matrix Σ, which you compute from a weighted average of the 10 covariance matrices from the 10 classes, as described in Lecture 9.
In your implementation, you might run into issues of determinants overflowing or underflowing, or normal PDF probabilities underflowing. These problems might be solved by learning about numpy.linalg.slogdet and/or scipy.stats.multivariate_normal.logpdf.
To implement LDA, you will sometimes need to compute a matrix-vector product of the form Σ⁻¹x for some vector x. You should not compute the inverse of Σ (nor the determinant of Σ), as it is not guaranteed to be invertible. Instead, you should find a way to solve the implied linear system without computing the inverse (one possible approach is sketched after this question's parts).
Hold out 10,000 randomly chosen training points for a validation set. (You may reuse your Homework 1 solution or an out-of-the-box library for dataset splitting only.)
Classify each image in the validation set into one of the 10 classes. Compute the error rate (1 − (# points correctly classified)/(# total points)) on the validation set and plot it over the following numbers of randomly chosen training points: 100, 200, 500, 1,000, 2,000, 5,000, 10,000, 30,000, 50,000. (Expect unpredictability in your error rate when few training points are used.)
(b) (Graph) Quadratic discriminant analysis (QDA). Model the class conditional probabilities as Gaussians N(µC, ΣC), where ΣC is the estimated covariance matrix for class C. (If any of these covariance matrices turn out singular, implement the trick you described in Q5(b). You are welcome to use validation to choose the right constant(s) for that trick.)
Repeat the same tests and error rate calculations you did for LDA.
(c) (Written Answer) Which of LDA and QDA performed better? (Note: We don’t expect
everybody to get the same answer.) Why?
(d) (Written Answer + Graphs) Include two plots, one using LDA and one using QDA, of
validation error versus the number of training points for each digit. Each plot should
include all the 10 curves on the same graph as shown in Figure 1. Which digit is easiest
to classify for LDA/QDA? Write down your answer and suggest why you think it’s the
easiest digit.
Figure 1: Sample graph with 10 plots
4. (Written Answer) With mnist-data-hw3.npz, train your best classifier for the training data and classify the images in the test data. Submit your labels to the online Kaggle competition. Record your optimum prediction rate in your write-up and include your Kaggle username. Don't forget to use the "submissions" tab or link on Kaggle to select your best submission!
You are welcome to compute extra features for the Kaggle competition, as long as they do
not use an exterior learned model for their computation (no transfer learning!). If you do so,
please describe your implementation in your assignment. Please use extra features only for
the Kaggle portion of the assignment.
5. (Written Answer) Next, apply LDA or QDA (your choice) to spam (spam-data-hw3.npz).
Submit your test results to the online Kaggle competition. Record your optimum prediction
rate in your submission. If you use additional features (or omit features), please describe
them. We include a featurize.py file (similar to HW1’s) that you may modify to create
new features.
Optional: If you use the defaults, expect relatively low classification rates. We suggest using
a Bag-Of-Words model. You are encouraged to explore alternative hand-crafted features, and
are welcome to use any third-party library to implement them, as long as they do not use a
separate model for their computation (no large language models, BERT, or word2vec!).
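As mentioned in part 3(a), here is one possible sketch of evaluating the Gaussian log-density without forming Σ⁻¹ or the raw determinant, under the assumption that the covariance matrix has been made symmetric positive definite (e.g., by the singular-covariance fix from Question 5); the function and variable names are illustrative, not part of the provided starter code.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_log_density(X, mu, Sigma):
    """Log of N(x; mu, Sigma) for each row x of X, without explicitly inverting Sigma."""
    d = mu.shape[0]
    centered = X - mu                                   # shape (n, d)
    factor = cho_factor(Sigma)                          # Cholesky factorization of Sigma
    solved = cho_solve(factor, centered.T).T            # rows of Sigma^{-1} (x - mu)
    maha = np.sum(centered * solved, axis=1)            # (x - mu)^T Sigma^{-1} (x - mu)
    logdet = 2.0 * np.sum(np.log(np.diag(factor[0])))   # log|Sigma| from the Cholesky factor
    return -0.5 * (maha + logdet + d * np.log(2.0 * np.pi))
```

An equivalent route uses numpy.linalg.slogdet for the log-determinant and numpy.linalg.solve (or scipy.stats.multivariate_normal.logpdf directly) for the rest.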
Submission Checklist
Please ensure you have completed the following before your final submission.
At the beginning of your writeup...
1. Have you copied and hand-signed the honor code specified in Question 1?
2. Have you listed all students (Names and SIDs) that you collaborated with?
In your writeup for Question 8...
1. Have you included your Kaggle Score and Kaggle Username for both questions 8.4 and
8.5?
At the end of the writeup...
1. Have you provided a code appendix including all code you wrote in solving the homework?
2. Have you included featurize.py in your code appendix if you modified it?
Executable Code Submission
1. Have you created an archive containing all “.py” files that you wrote or modified to generate
your homework solutions (including featurize.py if you modified it)?
2. Have you removed all data and extraneous files from the archive?
3. Have you included a README file in your archive briefly describing how to run your code
on the test data and reproduce your Kaggle results?
Submissions
1. Have you submitted your test set predictions for both MNIST and SPAM to the appropriate
Kaggle challenges?
2. Have you submitted your written solutions to the Gradescope assignment titled HW3 Write-Up and selected pages appropriately?
3. Have you submitted your executable code archive to the Gradescope assignment titled HW3
Code?
Congratulations! You have completed Homework 3.
