# V-NET: Ideas from the Multi-Document Reading Comprehension Paper (Part 2)

## Article Content

Introduction

In this blog we discuss the main idea of V-NET, a model for machine reading comprehension (MRC). Good MRC systems can help with many NLP problems, so I have worked extensively on the MRC task and found that R-NET, V-NET, and QA-NET all perform well on it. I have written three blog posts about these papers; this is the second part of the V-NET post.

### V-NET Highlights

• Question-passage pairs (Q-P pairs)
• Pointer Network

$P_{k}^{c}={\rm sigmoid}\left(w_{1}^{c\top}\,{\rm ReLU}\left(W_{2}^{c}\,v_{k}^{p_{i}}\right)\right)$. Here the Baidu paper notes that this scoring model is chosen on intuition; it then gives the associated loss function, which we will not reproduce here. From these content scores we obtain the joint representation $r^{A_{i}}$ needed in the next section, computed as:
$r^{A_{i}}=\frac{1}{|P_{i}|}\sum_{k=1}^{|P_{i}|}P_{k}^{c}\left[e_{k}^{P_{i}},c_{k}^{P_{i}}\right]$
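The two formulas above can be sketched in code. This is a minimal NumPy sketch, not the paper's implementation: the variable names `v`, `e`, `c`, `W2`, `w1` and all dimensions are illustrative assumptions mirroring the symbols $v_{k}^{p_{i}}$, $e_{k}^{P_{i}}$, $c_{k}^{P_{i}}$, $W_{2}^{c}$, $w_{1}^{c}$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def answer_representation(v, e, c, W2, w1):
    """Pool one passage's token features into an answer vector r^{A_i}.

    v  : (T, d)  token representations v_k^{p_i}
    e  : (T, d1) word embeddings e_k^{P_i}
    c  : (T, d2) char-level embeddings c_k^{P_i}
    W2 : (h, d)  projection matrix W_2^c
    w1 : (h,)    scoring vector w_1^c
    """
    # P_k^c = sigmoid(w1^T ReLU(W2 v_k)): a per-token content score in (0, 1)
    scores = sigmoid(np.maximum(v @ W2.T, 0.0) @ w1)   # (T,)
    # r^{A_i} = (1/|P_i|) * sum_k P_k^c [e_k ; c_k]: score-weighted mean pooling
    feats = np.concatenate([e, c], axis=1)             # (T, d1+d2)
    return (scores[:, None] * feats).mean(axis=0)      # (d1+d2,)
```

Note that the pooled vector lives in the concatenated embedding space $[e;c]$, not in the space of $v$: the content scores only decide how much each token contributes.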

### Multi-Document Answer Verification via Cross-Attention

$s_{i,j}=\begin{cases} 0, & \text{if } i=j \\ r^{A_{i}\top}\, r^{A_{j}}, & \text{otherwise} \end{cases}$

$\alpha_{i,j}=\exp(s_{i,j})\Big/\sum_{k=1}^{n}\exp(s_{i,k})$

$\tilde{r}^{A_{i}}=\sum_{j=1}^{n}\alpha_{i,j}\,r^{A_{j}}$
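The three cross-attention steps above can be sketched as follows; this is a hypothetical NumPy version in which `R` stacks the $n$ answer representations $r^{A_{1}},\dots,r^{A_{n}}$ row-wise.

```python
import numpy as np

def cross_passage_attention(R):
    """Attend each answer representation over those from other passages.

    R : (n, d) matrix whose rows are r^{A_1}, ..., r^{A_n}.
    Returns the (n, d) matrix of verified representations r~^{A_i}.
    """
    # s_{i,j} = r^{A_i} . r^{A_j}, with s_{i,i} set to 0 (the i = j case)
    S = R @ R.T
    np.fill_diagonal(S, 0.0)
    # alpha_{i,j}: softmax of s_{i,j} over j (rowmax subtracted for stability)
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)
    # r~^{A_i} = sum_j alpha_{i,j} r^{A_j}
    return A @ R
```

One detail worth noticing: because $s_{i,i}=0$ rather than $-\infty$, the softmax still gives each passage a nonzero weight on itself ($e^{0}=1$ appears in the denominator); zeroing the diagonal only prevents a passage's self-similarity from dominating the attention.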

