# HGNN_AC[WWW2021]
## How to run

- Clone OpenHGNN-DGL

  ```bash
  python main.py -m HGNN_AC -t node_classification -d imdb4MAGNN -g 0
  ```

  If you do not have a GPU, set `-g -1`.

- The dataset imdb4MAGNN is supported.
## Performance: Node classification

- Device: CPU, Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
- Dataset: IMDB
| Node classification | Macro-F1 | Micro-F1 |
| ------------------- | -------- | -------- |
| MAGNN               | 58.65%   | 59.20%   |
| paper               | 60.75%   | 60.98%   |
| OpenHGNN            | 60.54%   | 60.70%   |
The experiments above follow the setting of the paper, which uses an SVM classifier, so the results differ slightly from semi-supervised node classification. Running the model directly uses the semi-supervised node classification trainerflow.
## Dataset

- We process the IMDB dataset given by MAGNN. It is saved as a `dgl.heterograph` and can be loaded by `dgl.load_graphs`.
### Description

- imdb4MAGNN

  - Number of nodes

    | node type | number |
    | --------- | ------ |
    | movie     | 4278   |
    | director  | 2081   |
    | actor     | 5257   |

  - Number of edges

    | edge type      | number |
    | -------------- | ------ |
    | movie-director | 4278   |
    | movie-actor    | 12828  |
  - Types of metapaths: MDM, MAM, DMD, DMAMD, AMA, AMDMA. Note that M is movie, D is director, A is actor, and all edges above are bidirectional.
[TODO]

## TrainerFlow: Node classification trainer

## Hyper-parameters specific to the model

You can modify the parameters in openhgnn/config.ini.

### Description
```ini
feats_drop_rate = 0.3  # feature drop rate used to build the feature drop list
attn_vec_dim = 64      # dimension of the vector in the attention layer
feats_opt = 110        # which node types need their features completed
loss_lambda = 0.2      # weighted coefficient balancing the two loss terms
src_node_type = 2      # the node type that has raw attributes
dropout = 0.1          # drop rate used when dropping attributes
num_heads = 8          # number of heads in the multi-head attention mechanism
HIN = MAGNN            # the HIN model combined with HGNN_AC
```
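If you want to read these values programmatically, a minimal sketch with Python's `configparser` follows; the real values live in openhgnn/config.ini, and the section name `[HGNN_AC]` here is an assumption:

```python
import configparser

# Minimal sketch of parsing HGNN_AC hyper-parameters in config.ini
# syntax; the section name "HGNN_AC" is an assumption for illustration.
sample = """
[HGNN_AC]
feats_drop_rate = 0.3
attn_vec_dim = 64
num_heads = 8
"""
conf = configparser.ConfigParser()
conf.read_string(sample)
print(conf.getfloat('HGNN_AC', 'feats_drop_rate'))  # 0.3
print(conf.getint('HGNN_AC', 'num_heads'))          # 8
```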
## More

### Contributor

Yaoqi Liu [GAMMA LAB]

If you have any questions, submit an issue or email YaoqiLiu@bupt.edu.cn.