This repository was used in our paper:
Multi-task Sequence Tagging for Emotion-Cause Pair Extraction via Tag Distribution Refinement
Chuang Fan, Chaofa Yuan, Lin Gui, Yue Zhang, Ruifeng Xu*. TASLP 2021
Please cite our paper if you use this code.
- `20-fold-data-splits` - A directory containing the data splits following Fan et al.
  - `train.pkl`: A list containing two items. `train[0]` is a list of documents and `train[1]` is a list of the corresponding emotion-cause pairs. For example, if `train[0][0] = "Last week, I lost my phone while shopping, I feel sad now"`, then `train[1][0] = [(2, 1)]`.
  - `valid.pkl`: Similar to `train.pkl`.
  - `test.pkl`: Similar to `train.pkl`.
- `10-fold-data-splits` - A directory containing the data splits following Xia and Ding. The data format is the same as in `20-fold-data-splits`.
- `bert-base-chinese` - Put the downloaded PyTorch BERT model here.
- `Utils` - A directory containing several Python scripts used by this code.
  - `Evaluation.py`: Evaluates the performance of the proposed model.
  - `Metrics.py`: Metrics for emotion extraction, cause extraction and emotion-cause pair extraction.
  - `PrepareData.py`: The script for preparing the data.
- `Config.py` - Holds all the model configuration.
- `Modules.py` - Contains the proposed multi-task sequence tagging model.
- `Run.py` - The main script to train and evaluate the proposed model on the different splits.
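The pickle format described above can be sketched as follows. This is a minimal, self-contained example that writes and reads back a toy split in the same two-item layout (the document string and pair label here are illustrative, not taken from the real corpus):

```python
import pickle

# Build a toy split in the format described above: item 0 is the list of
# documents, item 1 the list of corresponding emotion-cause pair labels.
toy_train = [
    ["Last week, I lost my phone while shopping, I feel sad now"],
    [[(2, 1)]],  # clause 2 carries the emotion, clause 1 its cause
]
with open("train.pkl", "wb") as f:
    pickle.dump(toy_train, f)

# Loading follows the same pattern for train.pkl, valid.pkl and test.pkl.
with open("train.pkl", "rb") as f:
    train = pickle.load(f)

documents, pair_labels = train[0], train[1]
print(documents[0])   # the document text
print(pair_labels[0])  # [(2, 1)]
```

The real splits live under per-fold subdirectories of `20-fold-data-splits` / `10-fold-data-splits`; only the two-item list structure is assumed here.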
The BibTeX entry for the citation is as follows:
```bibtex
@ARTICLE{9457144,
  author={Fan, Chuang and Yuan, Chaofa and Gui, Lin and Zhang, Yue and Xu, Ruifeng},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title={Multi-Task Sequence Tagging for Emotion-Cause Pair Extraction Via Tag Distribution Refinement},
  year={2021},
  volume={29},
  pages={2339-2350},
  abstract={The task emotion-cause pair extraction deals with finding all emotions and the corresponding causes from emotion texts. Existing joint methods solve it as multi-task learning, which introduces two auxiliary tasks (i.e., emotion extraction and cause extraction) to make use of task correlations for their mutual benefits. However, these methods focus on capturing such correlations by sharing parameters in an implicit way, not only have a limitation of cannot explicitly model their information interaction, but also suffer from low interpretability. Towards these issues, we propose a multi-task sequence tagging framework, which can extract emotions with the associated causes simultaneously by encoding their distances into a novel tagging scheme. In addition, the output of both auxiliary tasks can be directly used as inductive bias, to refine the tag distribution for benefiting emotion-cause pair extraction, so that the information exchange between them can be more explicit and interpretable. Results show that our model achieves the best performance, outperforming a number of competitive baselines by at least 1.03% (p < 0.01) in F1 score. The comprehensive analysis further confirms the superiority and robustness of our model.},
  doi={10.1109/TASLP.2021.3089837},
  ISSN={2329-9304},
}
```
Open source sentiment analysis algorithms, including Aspect-Based Sentiment Analysis (-ABSA) and Emotion Cause Extraction (-ECE).