Consider the problem of estimating the expectation:
$$
\ell=\mathbb{E}_{\mathbf{u}}[H(\mathbf{X})]=\int H(\mathbf{x}) f(\mathbf{x} ; \mathbf{u}) \mathrm{d} \mathbf{x}
$$
This can be estimated with importance sampling, drawing samples $\mathbf{X}_i$ from a proposal density $g$:
$$
\hat{\ell}=\frac{1}{N} \sum_{i=1}^{N} H\left(\mathbf{X}_{i}\right) \frac{f\left(\mathbf{X}_{i} ; \mathbf{u}\right)}{g\left(\mathbf{X}_{i}\right)}
$$
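As a minimal sketch of this estimator (the choice of $H$, $f$, and $g$ below is my own illustration, not from the notes): take $H(x)=x^2$ under $f=\mathcal{N}(0,1)$, so $\ell=\mathbb{E}[X^2]=1$, and sample from a wider proposal $g=\mathcal{N}(0,2^2)$, reweighting each sample by $f/g$:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at x."""
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

# Target: ell = E_f[H(X)] with H(x) = x^2 and f = N(0, 1), so ell = 1.
N = 100_000
x = rng.normal(0.0, 2.0, N)                            # X_i ~ g = N(0, 4)
w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0)  # weights f(X_i) / g(X_i)
ell_hat = np.mean(x ** 2 * w)                          # importance-sampling estimate
print(ell_hat)  # close to 1.0
```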
The optimal choice of $g$ is:
$$
g^{*}(\mathbf{x})=H(\mathbf{x}) f(\mathbf{x} ; \mathbf{u}) / \ell
$$
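Substituting $g^{*}$ into the estimator shows why it is optimal (assuming $H \geq 0$ so that $g^{*}$ is a valid density): every weighted sample equals $\ell$ exactly, so the estimator has zero variance.

$$
H\left(\mathbf{X}_{i}\right) \frac{f\left(\mathbf{X}_{i} ; \mathbf{u}\right)}{g^{*}\left(\mathbf{X}_{i}\right)}
= H\left(\mathbf{X}_{i}\right) \frac{f\left(\mathbf{X}_{i} ; \mathbf{u}\right)\, \ell}{H\left(\mathbf{X}_{i}\right) f\left(\mathbf{X}_{i} ; \mathbf{u}\right)}
= \ell
$$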
But $\ell$ is unknown, so the algorithm instead chooses $g$ by minimizing the KL divergence between $g^{*}$ and $g$, returning the estimate once the iteration converges.

Although the original purpose of the method is estimating expectations, it can also be used for optimization. For example, to maximize $S(x)=\mathrm{e}^{-(x-2)^{2}}+0.8\, \mathrm{e}^{-(x+2)^{2}}$, recast the problem as estimating $\mathbb{P}_{\theta}(S(X) \geq \gamma)$, where $\gamma$ is a quantile of the sampled objective values; $g$ and $\gamma$ are then updated alternately. (Intuitively, in importance sampling $g^{*}$ should be large wherever both $H$ and $f$ are large.)
// Inputs: initial μ, σ2; sample size N; elite count Ne; tolerance ε; iteration cap maxits; t := 0
while t < maxits and σ2 > ε do
    // Obtain N samples from the current sampling distribution
    X := SampleGaussian(μ, σ2, N)
    // Evaluate the objective function at the sampled points
    S := exp(−(X − 2)^2) + 0.8 exp(−(X + 2)^2)
    // Sort X by objective value in descending order
    X := sort(X, S)
    // Refit the sampling distribution to the Ne elite samples
    μ := mean(X(1:Ne))
    σ2 := var(X(1:Ne))
    t := t + 1
// Return μ as the estimate of the maximizer
return μ
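The loop above can be run directly in NumPy. The initial values below ($\mu=0$, $\sigma^2=10$, $N=1000$, $N_e=100$) are my own choices for illustration; the notes do not specify them:

```python
import numpy as np

def S(x):
    """Objective with a global maximum near x = 2 and a local one near x = -2."""
    return np.exp(-(x - 2) ** 2) + 0.8 * np.exp(-(x + 2) ** 2)

def cross_entropy_maximize(mu=0.0, sigma2=10.0, N=1000, Ne=100,
                           eps=1e-8, maxits=100, seed=0):
    rng = np.random.default_rng(seed)
    t = 0
    while t < maxits and sigma2 > eps:
        x = rng.normal(mu, np.sqrt(sigma2), N)    # sample from current g
        elite = x[np.argsort(S(x))[::-1][:Ne]]    # Ne samples with highest S
        mu, sigma2 = elite.mean(), elite.var()    # refit Gaussian to the elites
        t += 1
    return mu

print(cross_entropy_maximize())  # ≈ 2.0, the global maximizer
```

Because the elite set concentrates on the higher peak, the fitted Gaussian contracts around $x=2$ rather than the lower local maximum at $x=-2$.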