Basic Information
Resource name: VAMP算法.pdf
File size: 0.12 MB
File format: .pdf
Development language: MATLAB
Last updated: 2021-01-20
Description
Abstract—The standard linear regression (SLR) problem is to recover a vector x0 from noisy linear observations y = Ax0 + w. The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices A, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal. AMP, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a "vector AMP" (VAMP) algorithm and shows that VAMP has a rigorous scalar state evolution that holds under a much broader class of large random matrices A: those that are right-rotationally invariant. After performing an initial singular value decomposition (SVD) of A, the per-iteration complexity of VAMP is similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error recently derived by Tulino, Caire, Verdú, and Shamai.
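
To make the algorithm described above concrete, the following is a minimal MATLAB sketch of a VAMP-style recursion for recovering a sparse x0 from y = A*x0 + w. It alternates an MMSE denoiser for an assumed Bernoulli-Gaussian prior with an exact LMMSE stage computed from a one-time SVD of A. The problem sizes, the prior parameters rho and vx, the assumption of a known noise precision gamw, and the fixed iteration count are all illustrative choices; this is a sketch written from the abstract, not the paper's reference implementation.

rng(0);
N = 1024;  M = 512;                       % signal / measurement dimensions (illustrative)
rho = 0.1;  vx = 1;                       % assumed sparsity rate and active-coefficient variance
gamw = 1e4;                               % noise precision, assumed known here
A  = randn(M, N) / sqrt(M);               % example i.i.d. Gaussian measurement matrix
x0 = (rand(N, 1) < rho) .* (sqrt(vx) * randn(N, 1));
y  = A * x0 + randn(M, 1) / sqrt(gamw);

% One-time SVD, so each LMMSE step below needs only matrix-vector products.
[~, S, V] = svd(A, 'econ');
s2  = diag(S).^2;  R = numel(s2);
Aty = A' * y;
gauss = @(r, v) exp(-r.^2 ./ (2 * v)) ./ sqrt(2 * pi * v);

r1 = zeros(N, 1);  gam1 = 1e-2;           % initial message: zero mean, low precision

for k = 1:30
    % Stage 1: MMSE denoiser for the Bernoulli-Gaussian prior.
    v1    = 1 / gam1;
    num   = rho * gauss(r1, vx + v1);                 % "active" hypothesis
    den   = num + (1 - rho) * gauss(r1, v1);          % plus "zero" hypothesis
    ppi   = num ./ max(den, realmin);                 % posterior P(x_n ~= 0 | r1)
    m     = r1 * (vx / (vx + v1));                    % conditional mean if active
    c     = vx * v1 / (vx + v1);                      % conditional variance if active
    x1    = ppi .* m;                                 % posterior mean E[x | r1]
    vpost = ppi .* (c + m.^2) - x1.^2;                % posterior variance
    a1    = gam1 * mean(vpost);                       % divergence <g1'>
    eta1  = gam1 / a1;
    gam2  = max(eta1 - gam1, 1e-8);                   % guard against non-positive precision
    r2    = (eta1 * x1 - gam1 * r1) / gam2;           % extrinsic message to stage 2

    % Stage 2: exact LMMSE estimate, computed via the precomputed SVD of A.
    q    = gamw * Aty + gam2 * r2;
    Vq   = V' * q;
    x2   = V * (Vq ./ (gamw * s2 + gam2)) + (q - V * Vq) / gam2;
    a2   = (sum(gam2 ./ (gamw * s2 + gam2)) + (N - R)) / N;   % divergence <g2'>
    eta2 = gam2 / a2;
    gam1 = eta2 - gam2;
    r1   = (eta2 * x2 - gam2 * r2) / gam1;            % extrinsic message back to stage 1
end

fprintf('NMSE = %.1f dB\n', 20 * log10(norm(x1 - x0) / norm(x0)));

Computing the SVD once, outside the loop, is what keeps the per-iteration cost comparable to AMP, as the abstract notes. Practical implementations typically also damp the precision updates; that refinement is omitted here for brevity.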