All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
- `LinkLoader` according to source and destination node weights (#9316)
- `EdgeIndex.unbind` (#9298)
- `torch_geometric.Index` into `torch_geometric.EdgeIndex` (#9296)
- `EdgeIndex.sparse_narrow` for non-sorted edge indices (#9291)
- `torch_geometric.Index` (#9276, #9277, #9278, #9279, #9280, #9281, #9284, #9285, #9286, #9287, #9288, #9289, #9297)
- `EdgeIndex` in `message_and_aggregate` (#9131)
- `CornellTemporalHyperGraphDataset` (#9090)
- `GAT` in single node Papers100m examples (#8173)
- `VariancePreservingAggregation` (VPA) (#9075)
- `from_smiles` functionality to `PCQM4Mv2` and `MoleculeNet` (#9073)
- `group_cat` functionality (#9029)
- `EdgeIndex` in `spmm` (#9026)
- `ApproxKNN` (#9046)
- `EdgeIndex` in `MessagePassing` (#9007)
- `torch.compile` in combination with `EdgeIndex` (#9007)
- `ogbn-mag240m` example (#8249)
- `EdgeIndex.sparse_resize_` functionality (#8983)
- `faiss`-based KNN-search (#8952)
- `MoleculeNet` (#9318)
- `OnDiskDataset` for multi-threaded `get` calls (#9140)
- `None` outputs in `FeatureStore.get_tensor()` - `KeyError` should now be raised based on the implementation in `FeatureStore._get_tensor()` (#9102)
- `ogbn-papers100m` default hyperparameters and adding evaluation on all ranks (#8823)
- `pytest` (#8978)
- `trim_to_layer` functionality (#9021)
- `scatter` operations in `MessagePassing` in case `torch.use_deterministic_algorithms` is not set (#9009)
- `MessagePassing` interface thread-safe (#9001)
- `EdgeIndex` in `cugraph` GNN layers (#8938)
- `dim` arg to `torch.cross` calls (#8918)
- `data.edge_index` (#9317)
- `MessagePassing.propagate` (#9245)
- `RCDD` dataset (#9234)
- `edge_label` and `edge_label_index` in `ToSparseTensor` transform (#9199)
- `EgoData` processing in `SnapDataset` in case filenames are unsorted (#9195)
- `to_dgl` (#9188)
- `to_scipy_sparse_matrix` when cuda is set as default torch device (#9146)
- `MetaPath2Vec` in case the last node is isolated (#9145)
- `MessagePassing` via `torch.load` (#9105)
- `propagate` functions (#9079)
- `self.propagate` appearances in comments when parsing `MessagePassing` implementation (#9044)
- `OSError` on read-only file systems within `MessagePassing` (#9032)
- `Dataset` (#8999)
- `MessagePassing` modules with nested inheritance (#8973)
- `segment` in case `torch-scatter` is not installed (#8852)
- `visualize_graph()` (#8816)
- `torch_geometric.distributed` (#8718, #8815, #8874)
- `TreeGraph` and `GridMotif` generators (#8736)
- `num_graphs` option to the `StochasticBlockModelDataset` (#8648)
- `ViSNet` model (#8287)
- `Data` (#8454)
- `to_networkx` (#8575)
- `profileit` decorator (#8532)
- `KNNIndex` exclusion logic (#8573)
- `dataset.num_classes` on regression problems (#8550)
- `dropout_node` (#8524)
- `mypy` (#8254)
- `ClusterData` (#8438)
- `is_torch_instance` to check against the original class of compiled models (#8461)
- `AddRandomWalkPE` (#8431)
- `fsspec` as file system backend (#8379, #8426, #8434, #8474)
- `FakeDataset` and `FakeHeteroDataset` (#8404)
- `InMemoryDataset` (#8402)
- `NeighborLoader` and `LinkNeighborLoader` (#8372, #8428)
- `torch.compile` in `ModuleDict` and `ParameterDict` (#8363)
- `force_reload` option to `Dataset` and `InMemoryDataset` to reload datasets (#8352, #8357, #8436)
- `torch.compile` in `MultiAggregation` (#8345)
- `torch.compile` in `HeteroConv` (#8344)
- `sparse_cross_entropy` (#8340)
- `KGEModel.test()` (#8298)
- `examples/multi_gpu/model_parallel.py` (#8309)
- `ogbn-papers100M` (#8070)
- `to_hetero_with_bases` on static graphs (#8247)
- `RCDD` dataset (#8196)
- `GAT + ogbn-products` example targeting XPU device (#8032)
- `conv.explain = False` (#8216)
- `use_segment_matmul` based on benchmarking (from a heuristic-based version) (#8615)
- `utils.group_argsort` if its input tensor is empty (#8752)
- `NELL` and `AttributedGraphDataset` are now represented as `torch.sparse_csr_tensor` instead of `torch_sparse.SparseTensor` (#8679)
- `torch.sparse` tensors (#8670)
- `DistLoader` with `atexit` not executed correctly in `worker_init_fn` (#8605)
- `ExplainerDataset` will now contain node labels for any motif generator (#8519)
- `utils.softmax` faster via `softmax_csr` (#8399)
- `utils.mask.mask_select` faster (#8369)
- `DistNeighborSampler` (#8209, #8367, #8375, #8624, #8722)
- `GraphStore` and `FeatureStore` to support distributed training (#8083)
- `add_self_loops=True` in `GCNConv(normalize=False)` (#8210)
- `torch_geometric.compile` (#8220)
- `MessagePassing.jittable` (#8781)
- `torch_geometric.compile`; use `torch.compile` instead (#8780)
- `typing` argument in `MessagePassing.jittable()` (#8731)
- `torch_geometric.data.makedirs` in favor of `os.makedirs` (#8421)
- `DataParallel` in favor of `DistributedDataParallel` (#8250)
- `to_homogeneous()` (#8858)
- `InMemoryDataset` did not reconstruct the correct data class when a `pre_transform` has modified it (#8692)
- `OnDiskDataset` (#8663)
- `DMoNPooling` loss function (#8285)
- `NaN` handling in `SQLDatabase` (#8479)
- `CaptumExplainer` in case no `index` is passed (#8440)
- `edge_index` construction in the `UPFD` dataset (#8413)
- `AttentionalAggregation` and `DeepSetsAggregation` (#8406)
- `GraphMaskExplainer` for GNNs with more than two layers (#8401)
- `GATConv` depending on whether the input is bipartite or non-bipartite (#8397)
- `input_id` computation in `NeighborLoader` in case a `mask` is given (#8312)
- `Linear` layers (#8311)
- `Data.subgraph()`/`HeteroData.subgraph()` in case `edge_index` is not defined (#8277)
- `MetaPath2Vec` (#8248)
- `AttentionExplainer` usage within `AttentiveFP` (#8244)
- `load_from_state_dict` in lazy `Linear` modules (#8242)
- `DimeNet++` performance on `QM9` (#8239)
- `GNNExplainer` usage within `AttentiveFP` (#8216)
- `to_networkx(to_undirected=True)` in case the input graph is not undirected (#8204)
- `TwoHop` and `AddRandomWalkPE` transformations (#8197, #8225)
- `HeteroData` converted using `ToSparseTensor()` when `torch_sparse` is not installed (#8356)
- `torch_geometric.compile` (#8698)
- `ogc` method as example (#8168)
- `NeighborLoader` (#7931)
- `segment_matmul`/`grouped_matmul` via the `torch_geometric.backend.use_segment_matmul` flag (#8148)
- `NeuroGraphDataset` benchmark collection (#8122)
- `mask` tensor in `dense_to_sparse` (#8117)
- `to_on_disk_dataset()` method to convert `InMemoryDataset` instances to `OnDiskDataset` instances (#8116)
- `torch-frame` support (#8110, #8118, #8151, #8152)
- `DistLoader` base class (#8079)
- `HyperGraphData` to support hypergraphs (#7611)
- `PCQM4Mv2` dataset as a reference implementation for `OnDiskDataset` (#8102)
- `module_headers` property to `nn.Sequential` models (#8093)
- `OnDiskDataset` interface with data loader support (#8066, #8088, #8092, #8106)
- `Node2Vec` and `MetaPath2Vec` usage (#7938)
- `edge_attr` support to `ResGatedGraphConv` (#8048)
- `Database` interface and `SQLiteDatabase`/`RocksDatabase` implementations (#8028, #8044, #8046, #8051, #8052, #8054, #8057, #8058)
- `NeighborLoader`/`LinkNeighborLoader` (#8038)
- `MixHopConv` layer and a corresponding example (#8025)
- `BasicGNN` and `MLP` (#8024, #8033)
- `IBMBNodeLoader` and `IBMBBatchLoader` data loaders (#6230)
- `NeuralFingerprint` model for learning fingerprints of molecules (#7919)
- `SparseTensor` support to `WLConvContinuous`, `GeneralConv`, `PDNConv` and `ARMAConv` (#8013)
- `LCMAggregation`, an implementation of Learnable Commutative Monoids, along with an example (#7976, #8020, #8023, #8026, #8075)
- `HeteroData.validate()` (#7995)
- `utils.cumsum`
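As an aside on the `utils.cumsum` entry above: the semantics commonly associated with such a helper are a cumulative sum with a prepended zero, as used for building `ptr`-style offset vectors from per-graph counts. The pure-Python sketch below is an illustrative assumption, not the library implementation:

```python
def cumsum(values):
    """Cumulative sum with a leading zero: output has length
    len(values) + 1, where out[i] is the sum of values[:i]."""
    out = [0]
    for v in values:
        out.append(out[-1] + v)
    return out

# Per-graph node counts -> offsets delimiting each graph.
print(cumsum([3, 2, 4]))  # [0, 3, 5, 9]
```

With such offsets, graph `i` owns the index range `out[i]:out[i + 1]`.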
  implementation (#7994)
- `BrcaTcga` dataset (#7905)
- `MyketDataset` (#7959)
- `ogbn-papers100M` example (#7921)
- `group_argsort`
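On the `group_argsort` entry above: a grouped argsort ranks each element among the elements sharing its group id. The following pure-Python sketch (function name and signature are hypothetical, for illustration only) shows the idea:

```python
from collections import defaultdict

def group_argsort(src, groups):
    """For each element of `src`, return its sort rank among the
    elements that share the same entry in `groups`."""
    buckets = defaultdict(list)
    for idx, (g, v) in enumerate(zip(groups, src)):
        buckets[g].append((v, idx))
    out = [0] * len(src)
    for items in buckets.values():
        # Rank elements within their group by value.
        for rank, (_, idx) in enumerate(sorted(items)):
            out[idx] = rank
    return out

print(group_argsort([5, 1, 2, 4], [0, 0, 1, 1]))  # [1, 0, 0, 1]
```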
  implementation (#7948)
- `CachedLoader` implementation (#7896, #7897)
- `utils.ppr` for personalized PageRank computation (#7917)
- `PrefetchLoader` (#7918)
- `Dataset`, e.g., `dataset[:0.9]` (#7915)
- `HalfHop` graph upsampling augmentation (#7827)
- `Wikidata5M` dataset (#7864)
- `BasicGNN` models (#7865)
- `batch_size` argument to `unbatch` functionalities (#7851)
- `graphlearn-for-pytorch` (#7402)
- `neg_sampling_ratio` into `TemporalDataLoader` (#7644)
- `faiss`-based `KNNIndex` classes for L2 or maximum inner product search (#7842)
- `OSE_GVCS` dataset (#7811)
- `output_initializer` argument to `DimeNet` models (#7774, #7780)
- `lexsort` implementation (#7775)
- `HeteroData` support in `to_networkx` (#7713)
- `FlopsCount` support via `fvcore` (#7693)
- `Data.sort()` and `HeteroData.sort()` functionalities (#7649)
- `torch.nested_tensor` support in `Data` and `Batch` (#7643, #7647)
- `interval` argument to `Cartesian`, `LocalCartesian` and `Distance` transformations (#7533, #7614, #7700)
- `LightGCN` example on the `AmazonBook` dataset (#7603)
- `HypergraphConv` via the `attention_mode` argument (#7601)
- `FilterEdges` graph coarsening operator (#7361)
- `DirGNN` model for learning on directed graphs (#7458)
- `NodeLoader` and `LinkLoader` (#7572)
- `embedding_device` option to allow for GPU inference in `BasicGNN` (#7548, #7829)
- `Performer` to `GPSConv` and remove `attn_dropout` argument from `GPSConv` (#7465)
- `LinkNeighborLoader` to return number of sampled nodes and edges per hop (#7516)
- `HM` personalized fashion recommendation dataset (#7515)
- `GraphMixer` model (#7501, #7459)
- `disable_dynamic_shape` experimental flag (#7246, #7534)
- `MovieLens-1M` heterogeneous dataset (#7479)
- `map_index` implementation (#7493, #7764, #7765)
- `AmazonBook` heterogeneous dataset (#7483)
- `torch_geometric.distributed` package (#7451, #7452, #7482, #7502, #7628, #7671, #7846, #7715, #7974)
- `GDELTLite` dataset (#7442)
- `approx_knn` function for approximated nearest neighbor search (#7421)
- `IGMCDataset` (#7441)
- `cross_entropy` implementation (#7447, #7466)
- `MovieLens-100K` heterogeneous dataset (#7398)
- `PMLP` model and an example (#7370, #7543)
- `HeteroData.to_homogeneous()` in case feature dimensionalities do not match (#7374)
- `batch_size` argument to `fps`, `knn`, `knn_graph`, `radius` and `radius_graph` (#7368)
- `PrefetchLoader` capabilities (#7376, #7378, #7383)
- `add_pad_mask` argument to the `Pad` transform (#7339)
- `keep_inter_cluster_edges` option to `ClusterData` to support inter-subgraph edge connections when doing graph partitioning (#7326)
- `ModuleDict`/`ParameterDict` (#7294)
- `NodePropertySplit` transform for creating node-level splits using structural node properties (#6894)
- `CitationFull` datasets (#7275)
- `torch.sparse.Tensor` in `DataLoader` (#7252)
- `save` and `load` methods to `InMemoryDataset` (#7250, #7413)
- `CaptumExplainer` (#7096)
- `visualize_feature_importance` functionality to `HeteroExplanation` (#7096)
- `AddRemainingSelfLoops` transform (#7192)
- `optimizer_resolver` (#7209)
- `type_ptr` argument to `HeteroLayerNorm` (#7208)
- `"any"`-reductions in `scatter` (#7198)
- `NodeLoader` and `LinkLoader` (#7197)
- `torch.sparse` support (#7155)
- `LightGCN` (#7157)
- `SparseTensor` support to `trim_to_layer` function (#7089)
- `ComposeFilters` class to compose `pre_filter` functions in `Dataset` (#7097)
- `EllipticBitcoinDataset` called `EllipticBitcoinTemporalDataset` (#7011)
- `to_dgl` and `from_dgl` conversion functions (#7053)
- `torch.jit.script` within `MessagePassing` layers without `torch_sparse` being installed (#7061, #7062)
- `torch.sparse` tensors (#7037)
- `RotatE` KGE model (#7026)
- `HeteroConv` for layers that have a non-default argument order, e.g., `GCN2Conv` (#8166)
- `ModuleDict` and `ParameterDict` (#8163)
- `torch.compile(dynamic=True)` in PyTorch 2.1.0 (#8145)
- `AddLaplacianEigenvectorPE` for small-scale graphs (#8143)
- `DynamicBatchSampler.__len__` to raise an error in case `num_steps` is undefined (#8137)
- `DimeNet` models (#8019)
- `trim_to_layer` function to filter out non-reachable node and edge types when operating on heterogeneous graphs (#7942)
- `top_k` computation in `TopKPooling` (#7737)
- `GIN` implementation in kernel benchmarks to have sequential batchnorms (#7955)
- `cache` argument in heterogeneous models (#7956)
- `batch.e_id` was not correctly computed on unsorted graph inputs (#7953)
- `from_networkx` conversion from `nx.stochastic_block_model` graphs (#7941)
- `bias_initializer` in `HeteroLinear` (#7923)
- `HGBDataset` (#7907)
- `SetTransformerAggregation` produced `NaN` values for isolated nodes (#7902)
- `model_summary` on modules with uninitialized parameters (#7884)
- `QM9` data pre-processing to include the SMILES string (#7867)
- `add_self_loops` for a dynamic number of nodes (#7330)
- `PNAConv.get_degree_histogram` (#7830)
- `edge_label_time` when using temporal sampling on homogeneous graphs (#7807)
- `torch_geometric.contrib.explain.GraphMaskExplainer` to `torch_geometric.explain.algorithm.GraphMaskExplainer` (#7779)
- `FieldStatus` enum picklable to avoid `PicklingError` in a multi-process setting (#7808)
- `edge_label_index` computation in `LinkNeighborLoader` for the `homogeneous+disjoint` mode (#7791)
- `CaptumExplainer` for `binary_classification` tasks (#7787)
- `training` flag in `to_hetero` modules (#7772)
- `HeteroData` (#7714)
- `dest` argument to `dst` in `utils.geodesic_distance` (#7708)
- `add_random_edge` to only add true negative edges (#7654)
- `BasicGNN` models in `DeepGraphInfomax` (#7648)
- `Data.keys` a method rather than a property (#7629)
- `num_edges` parameter to the forward method of `HypergraphConv` (#7560)
- `get_mesh_laplacian` for `normalization="sym"` (#7544)
- `dim_size` to initialize output size of the `EquilibriumAggregation` layer (#7530)
- `max_num_elements` parameter to the forward method of `GraphMultisetTransformer`, `GRUAggregation`, `LSTMAggregation` and `SetTransformerAggregation` (#7529)
- `SparseTensor` (#7519)
- `scaler` tensor in `GeneralConv` to the correct device (#7484)
- `HeteroLinear` bug when used via mixed precision (#7473)
- `output_size` in the `repeat_interleave` operation in `QuantileAggregation` (#7426)
- `utils.spmm` (#7428)
- `ClusterLoader` to integrate `pyg-lib` METIS routine (#7416)
- `QuantileAggregation` when `dim_size` is passed (#7407)
- `filter_per_worker` option will not get automatically inferred by default based on the device of the underlying data (#7399)
- `LightGCN.recommendation_loss()` to only use the embeddings of the nodes involved in the current mini-batch (#7384)
- `max_num_elements` argument to `SortAggregation` (#7367)
- `fill_value` as a `torch.tensor` to `utils.to_dense_batch` (#7367)
- `to_hetero_with_bases` (#7363)
- `node_default` and `edge_default` attributes in `from_networkx` (#7348)
- `NeighborLoader` instead of `NeighborSampler` (#7152)
- `HGTConv` utility function `_construct_src_node_feat` (#7194)
- `batch_size` argument to `avg_pool_x` and `max_pool_x` (#7216)
- `subgraph` on unordered inputs (#7187)
- `HeteroDictLinear` (#7185)
- `from_networkx` memory footprint by reducing unnecessary copies (#7119)
- `batch_size` argument to `LayerNorm`, `GraphNorm`, `InstanceNorm`, `GraphSizeNorm` and `PairNorm` (#7135)
- `numpy` incompatibility when reading files for `Planetoid` datasets (#7141)
- `Data.num_edges` for native `torch.sparse.Tensor` adjacency matrices (#7104)
- `MultiAggregation` (#7077)
- `HeterophilousGraphDataset` are now undirected by default (#7065)
- `FastHGTConv` that computed values via parameters used to compute the keys (#7050)
- `torch_sparse.SparseTensor` logic to utilize `torch.sparse_csr` instead (#7041)
- `batch_size` and `max_num_nodes` arguments to `MemPooling` layer (#7239)
- `CaptumExplainer` to be called multiple times in a row (#7391)
- `layer_type` argument in `contrib.explain.GraphMaskExplainer` (#7445)
- `FastHGTConv` with `HGTConv` (#7117)
- `utils.one_hot` implementation (#7005)
- `HeteroDictLinear` and an optimized `FastHGTConv` module (#6178, #6998)
- `DenseGATConv` module (#6928)
- `trim_to_layer` utility function for more efficient `NeighborLoader` use-cases (#6661)
- `DistMult` KGE model (#6958)
- `HeteroData.set_value_dict` functionality (#6961, #6974)
- `ComplEx` KGE model (#6898)
- `HeteroLayerNorm` and `HeteroBatchNorm` layers (#6838)
- `HeterophilousGraphDataset` suite (#6846)
- `NeighborLoader` to return number of sampled nodes and edges per hop (#6834)
- `ZipLoader` to execute multiple `NodeLoader` or `LinkLoader` instances (#6829)
- `utils.select` and `utils.narrow` functionality to support filtering of both tensors and lists (#6162)
- `normalization` customization in `get_mesh_laplacian` (#6790)
- `TemporalEncoding` module (#6785)
- `spmm_reduce` functionality via CSR format (#6699, #6759)
- `MD17` dataset (#6734)
- `RECT_L` model (#6727)
- `Node2Vec` model (#6726)
- `utils.to_edge_index` to convert sparse tensors to edge indices and edge attributes (#6728)
- `PolBlogs` dataset (#6714)
- `SimpleConv` to perform non-trainable propagation (#6718)
- `RemoveDuplicatedEdges` transform (#6709)
- `LINKX` model (#6712)
- `torch.jit` examples for `example/film.py` and `example/gcn.py` (#6602)
- `Pad` transform (#5940, #6697, #6731, #6758)
- `cat` aggregation type to the `HeteroConv` class so that features can be concatenated during grouping (#6634)
- `torch.compile` support and benchmark study (#6610, #6952, #6953, #6980, #6983, #6984, #6985, #6986, #6989, #7002)
- `AntiSymmetricConv` layer (#6577)
- `nn.conv.cugraph` via `cugraph-ops` (#6278, #6388, #6412)
- `index_sort` function from `pyg-lib` for faster sorting (#6554)
- `EquilibriumAggregation` (#6560)
- `dense_to_sparse()` (#6546)
- `BAMultiShapesDataset` (#6541)
- `n_id` and `e_id` attributes to mini-batches produced by `NodeLoader` and `LinkLoader` (#6524)
- `PGMExplainer` to `torch_geometric.contrib` (#6149, #6588, #6589)
- `NumNeighbors` helper class for specifying the number of neighbors when sampling (#6501, #6505, #6690)
- `is_node_attr()` and `is_edge_attr()` calls (#6492)
- `ToHeteroLinear` and `ToHeteroMessagePassing` modules to accelerate `to_hetero` functionality (#5992, #6456)
- `GraphMaskExplainer` (#6284)
- `GRBCD` and `PRBCD` adversarial attack models (#5972)
- `dropout` option to `SetTransformer` and `GraphMultisetTransformer` (#6484)
- `LightningNodeData` and `LightningLinkData` (#6450, #6456)
- `num_neighbors` in `NeighborSampler` after instantiation (#6446)
- `Taobao` dataset and a corresponding example for it (#6144)
- `pyproject.toml` (#6431)
- `torch_geometric.contrib` sub-package (#6422)
- `pyright` type checker support (#6415)
- `CaptumExplainer` (#6383, #6387, #6433, #6487, #6966)
- `HeteroData` mini-batch class in remote backends (#6377)
- `GNNFF` model (#5866)
- `MLPAggregation`, `SetTransformerAggregation`, `GRUAggregation`, and `DeepSetsAggregation` as adaptive readout functions (#6301, #6336, #6338)
- `Dataset.to_datapipe` for converting PyG datasets into a torchdata `DataPipe` (#6141)
- `to_nested_tensor` and `from_nested_tensor` functionality (#6329, #6330, #6331, #6332)
- `GPSConv` Graph Transformer layer and example (#6326, #6327)
- `networkit` conversion utilities (#6321)
- `dataset.{attr_name}` (#6319)
- `TransE` KGE model and example (#6314)
- `FB15k_237` dataset (#3204)
- `Data.update()` and `HeteroData.update()` functionality (#6313)
- `PGExplainer` (#6204)
- `AirfRANS` dataset (#6287)
- `AttentionExplainer` (#6279)
- `LinkNeighborLoader` (#6264)
- `BA2MotifDataset` explainer dataset (#6257)
- `CycleMotif` motif generator to generate `n`-node cycle shaped motifs (#6256)
- `InfectionDataset` to evaluate explanations (#6222)
- `characterization_score` and `fidelity_curve_auc` explainer metrics (#6188)
- `get_message_passing_embeddings` (#6201)
- `PointGNNConv` layer (#6194)
- `GridGraph` graph generator to generate grid graphs (#6220)
- `visualize_feature_importance` to support node feature visualizations (#6094)
- `Explanation` framework (#6091, #6218)
- `CustomMotif` motif generator (#6179)
- `ERGraph` graph generator to generate Erdos-Renyi (ER) graphs (#6073)
- `BAGraph` graph generator to generate Barabasi-Albert graphs - the usage of `datasets.BAShapes` is now deprecated (#6072)
- `seed_time` attribute to temporal `NodeLoader` outputs in case `input_time` is given (#6196)
- `Data.edge_subgraph` and `HeteroData.edge_subgraph` functionalities (#6193)
- `input_time` option to `LightningNodeData` and `transform_sampler_output` to `NodeLoader` and `LinkLoader` (#6187)
- `summary` for PyG/PyTorch models (#5859, #6161)
- `torch.sparse` support to PyG (#5906, #5944, #6003, #6033, #6514, #6532, #6748, #6847, #6868, #6874, #6897, #6930, #6932, #6936, #6937, #6939, #6947, #6950, #6951, #6957)
- `inputs_channels` back in training benchmark (#6154)
- `utils.to_dense_batch` in case `max_num_nodes` is smaller than the number of nodes (#6124)
- `pyproject.toml` for packaging (#6880)
- `__dunder__` names (#6999)
- `sort_edge_index`, `coalesce` and `to_undirected` to only return single `edge_index` information in case the `edge_attr` argument is not specified (#6875, #6879, #6893)
- `to_hetero` when using an uninitialized submodule without implementing `reset_parameters` (#6863)
- `get_mesh_laplacian` (#6790)
- `GNNExplainer` on link prediction tasks (#6787)
- `ChebConv` within `GNNExplainer` (#6778)
- `EdgeStorage.num_edges` property (#6710)
- `utils.bipartite_subgraph()` and updated docs of `HeteroData.subgraph()` (#6654)
- `data_list` cache of an `InMemoryDataset` when accessing `dataset.data` (#6685)
- `Data.subgraph()` and `HeteroData.subgraph()` (#6613)
- `PNAConv` and `DegreeScalerAggregation` to correctly incorporate degree statistics of isolated nodes (#6609)
- `data.to_heterogeneous()` filtered attributes in the wrong dimension (#6522)
- `pyg-lib>0.1.0` (#6517)
- `DataLoader` workers with affinity to start at `cpu0` (#6512)
- `global_*_pool` functions (#6504)
- `RGCNConv` (#6482)
- `numpy 1.24.0` (#6495)
- `examples/mnist_voxel_grid.py` (#6478)
- `LightningNodeData` and `LightningLinkData` code paths (#6473)
- `RGCNConv` (#6463)
- `DataParallel` class (#6376)
- `ImbalancedSampler` on sliced `InMemoryDataset` (#6374)
- `GraphMultisetTransformer` (#6343)
- `transforms.GDC` to not crash on graphs with isolated nodes (#6242)
- `InMemoryDataset.data` (#6318)
- `SparseTensor` dependency in `GraphStore` (#5517)
- `NeighborSampler` with `NeighborLoader` in the distributed sampling example (#6204)
- `transforms.RemoveIsolatedNodes` (#6308)
- `DimeNet` that causes an output dimension mismatch (#6305)
- `Data.to_heterogeneous()` with empty `edge_index` (#6304)
- `Explanation.node_mask` and `Explanation.node_feat_mask` (#6267)
- `Explainer` to `Explanation` (#6215)
- `HeteroLinear` for un-sorted type vectors (#6198)
- `ExplainerConfig` arguments to the `Explainer` class (#6176)
- `NeighborSampler` to be input-type agnostic (#6173)
- `profileit` decorator (#6164)
- `GDC` example (#6159)
- `torch_geometric.data.lightning` (#6140)
- `torch_sparse` an optional dependency (#6132, #6134, #6138, #6139, #7387)
- `utils.softmax` implementation (#6113, #6155, #6805)
- `topk` implementation for large enough graphs (#6123)
- `torch-sparse` is now an optional dependency (#6625, #6626, #6627, #6628, #6629, #6630)
- `torch-scatter` dependencies (#6394, #6395, #6399, #6400, #6615, #6617)
- `GNNExplainer` and `Explainer` from `nn.models` (#6382)
- `target_index` argument in the `Explainer` interface (#6270)
- `Aggregation.set_validate_args` option (#6175)
- `GNNExplainer` to support edge level explanations (#6056, #6083)
- `NodeLoader` (#6005)
- `LinkNeighborLoader` (#6004)
- `FusedAggregation` of simple scatter reductions (#6036)
- `to_smiles` function (#6038)
- `PNAConv` (#6039)
- `semi_grad` option in `VarAggregation` and `StdAggregation` (#6042)
- `MultiAggregation` (#6036, #6040)
- `HeteroData` support for `to_captum_model` and added `to_captum_input` (#5934)
- `HeteroData` support in `RandomNodeLoader` (#6007)
- `GraphSAGE` example (#5834)
- `LRGBDataset` to include 5 datasets from the Long Range Graph Benchmark (#5935)
- `HeteroData` (#5990)
- `int32` support in `NeighborLoader` (#5948)
- `dgNN` support and `FusedGATConv` implementation (#5140)
- `lr_scheduler_solver` and customized `lr_scheduler` classes (#5942)
- `to_fixed_size` graph transformer (#5939)
- `SchNet` model (#5938)
- `SchNet` model (#5919)
- `torch.sparse` support to PyG (#5906, #5944, #6003, #6633)
- `HydroNet` water cluster dataset (#5537, #5902, #5903)
- `SparseTensor` support to `SuperGATConv` (#5888)
- `AttentiveFP` (#5868)
- `num_steps` argument to training and inference benchmarks (#5898)
- `torch.onnx.export` support (#5877, #5997)
- `sampler` support in `LightningDataModule` (#5820)
- `return_semantic_attention_weights` argument `HANConv` (#5787)
- `disjoint` argument to `NeighborLoader` and `LinkNeighborLoader` (#5775)
- `input_time` in `NeighborLoader` (#5763)
- `disjoint` mode for temporal `LinkNeighborLoader` (#5717)
- `HeteroData` support for `transforms.Constant` (#5700)
- `np.memmap` support in `NeighborLoader` (#5696)
- `assortativity` that computes degree assortativity coefficient (#5587)
- `SSGConv` layer (#5599)
- `shuffle_node`, `mask_feature` and `add_random_edge` augmentation methods (#5548)
- `dropout_path` augmentation that drops edges from a graph based on random walks (#5531)
- `HeteroData.to_homogeneous()` (#5540)
- `temporal_strategy` option to `neighbor_sample` (#5576)
- `torch_geometric.sampler` package to docs (#5563)
- `DGraphFin` dynamic graph dataset (#5504)
- `dropout_edge`
  augmentation that randomly drops edges from a graph - the usage of `dropout_adj` is now deprecated (#5495)
- `dropout_node` augmentation that randomly drops nodes from a graph (#5481)
- `AddRandomMetaPaths` that adds edges based on random walks along a metapath (#5397)
- `WLConvContinuous` for performing WL refinement with continuous attributes (#5316)
- `print_summary` method for the `torch_geometric.data.Dataset` interface (#5438)
- `sampler` support to `LightningDataModule` (#5456, #5457)
- `MalNetTiny` dataset (#5078)
- `IndexToMask` and `MaskToIndex` transforms (#5375, #5455)
- `FeaturePropagation` transform (#5387)
- `PositionalEncoding` (#5381)
- `torch_geometric.sampler`, enabling ease of extensibility in the future (#5312, #5365, #5402, #5404, #5418)
- `pyg-lib` neighbor sampling (#5384, #5388)
- `pyg_lib.segment_matmul` integration within `HeteroLinear` (#5330, #5347)
- `bf16` support in benchmark scripts (#5293, #5341)
- `Aggregation.set_validate_args` option to skip validation of `dim_size` (#5290)
- `SparseTensor` support to inference and training benchmark suite (#5242, #5258, #5881)
- `utils.scatter` (#5232, #5241, #5386)
- `HGBDataset` (#5233)
- `BaseStorage.get()` functionality (#5240)
- `to_hetero` works with `SparseTensor` (#5222)
- `torch_geometric.explain` module with base functionality for explainability methods (#5804, #6054, #6089)
- `GNNExplainer` from `torch_geometric.nn` to `torch_geometric.explain.algorithm` (#5967, #6065)
- `dense_mincut_pool` (#5908)
- `VirtualNode` mistakenly treated node features as edge features (#5819)
- `setter` and `getter` handling in `BaseStorage` (#5815)
- `path` in `hetero_conv_dblp.py` example (#5686)
- `auto_select_device` routine in GraphGym for PyTorch Lightning>=1.7 (#5677)
- `in_channels` with `tuple` in `GENConv` for bipartite message passing (#5627, #5641)
- `RandomLinkSplit` (#5642)
- `RGCN+pyg-lib` for `LongTensor` input (#5610)
- `mode_kwargs` in `MultiAggregation` (#5601)
- `BatchNorm` to allow for batches of size one during training (#5530, #5614)
- `PNAConv` (#5514)
- `.` in `ParameterDict` key names (#5494)
- `drop_unconnected_nodes` to `drop_unconnected_node_types` and `drop_orig_edges` to `drop_orig_edge_types` in `AddMetapaths` (#5490)
- `utils.scatter` performance by explicitly choosing better implementation for `add` and `mean` reduction (#5399)
- `to_dense_adj` with empty `edge_index` (#5476)
- `AttentionalAggregation` module can now be applied to compute attention on a per-feature level (#5449)
- `num_neighbors` across edge types in `NeighborLoader` (#5444)
- `TUDataset` in which node features were wrongly constructed whenever `node_attributes` only hold a single feature (e.g., in `PROTEINS`) (#5441)
- `num_neighbors` as an attribute of loader (#5404)
- `ASAPooling` is now jittable (#5395)
- `GraphSAGE` example to leverage `LinkNeighborLoader` (#5317)
- `torch.scatter_reduce` API (#5353)
- `PointTransformerConv` now correctly uses `sum` aggregation (#5332)
- `MessagePassing` (#5339)
- `Dataset` to be specified as either property and method (#5338)
- `SparseTensor` within `InMemoryDataset` (#5299)
- `GLIBC` errors within `torch-spline-conv` (#5276)
- `Dataset.num_classes` in case a `transform` modifies `data.y` (#5274)
- `PNAConv` (#5262)
- `InMemoryDataset` cache on `dataset.num_features` (#5264)
- `dblp` datasets to instead use synthetic data (#5250)
- `custom_graphgym` (#5243)
- `scatter_reduce` option from experimental mode (#5399)
- `DeepGCNLayer` (#5704)
- `.` in `ModuleDict` key names (#5227)
- `edge_label_time` argument to `LinkNeighborLoader` (#5137, #5173)
- `ImbalancedSampler` accept `torch.Tensor` as input (#5138)
- `flow` argument to `gcn_norm` to correctly normalize the adjacency matrix in `GCNConv` (#5149)
- `NeighborSampler` supports graphs without edges (#5072)
- `MeanSubtractionNorm` layer (#5068)
- `pyg_lib.segment_matmul` integration within `RGCNConv` (#5052, #5096)
- `SparseTensor` as edge label in `LightGCN` (#5046)
- `BasicGNN` models within `to_hetero` (#5091)
- `AddMetapaths` (#5049)
- `bias` and `dropout` per layer in the `MLP` model (#4981)
- `EdgeCNN` model (#4991)
- `inference` mode in `BasicGNN` with layer-wise neighbor loading (#4977)
- `unbatch_edge_index`
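On the `unbatch_edge_index` entry above: splitting a global edge list back into per-graph edge lists uses the node-to-graph `batch` vector to find each graph's node offset and re-index nodes locally. The pure-Python sketch below is an illustration of the idea under the assumption that no edge crosses graph boundaries; it is not the library implementation:

```python
def unbatch_edge_index(edge_index, batch):
    """Split a global edge list into per-graph lists according to
    a node-to-graph assignment vector, shifting node ids so they
    are local to each graph."""
    # Offset of each graph = index of its first node
    # (assumes `batch` is sorted, as produced by batching).
    offsets = {}
    for node, g in enumerate(batch):
        offsets.setdefault(g, node)
    out = {g: [] for g in offsets}
    for src, dst in edge_index:
        g = batch[src]  # assumes no cross-graph edges
        out[g].append((src - offsets[g], dst - offsets[g]))
    return [out[g] for g in sorted(out)]

batch = [0, 0, 0, 1, 1]  # 3 nodes in graph 0, 2 nodes in graph 1
edges = [(0, 1), (1, 2), (3, 4)]
print(unbatch_edge_index(edges, batch))  # [[(0, 1), (1, 2)], [(0, 1)]]
```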
  functionality for splitting an `edge_index` tensor according to a `batch` vector (#4903)
- `LayerNorm` (#4944)
- `normalization_resolver` (#4926, #4951, #4958, #4959)
- `torch_geometric.nn.aggr` package to documentation (#4927)
- `follow_batch` for lists or dictionaries of tensors (#4837)
- `Data.validate()` and `HeteroData.validate()` functionality (#4885)
- `LinkNeighborLoader` support to `LightningDataModule` (#4868)
- `predict()` support to the `LightningNodeData` module (#4884)
- `time_attr` argument to `LinkNeighborLoader` (#4877, #4908)
- `filter_per_worker` argument to data loaders to allow filtering of data within sub-processes (#4873)
- `NeighborLoader` benchmark script (#4815, #4862)
- `FeatureStore` and `GraphStore` in `NeighborLoader` (#4817, #4851, #4854, #4856, #4857, #4882, #4883, #4929, #4992, #4962, #4968, #5037, #5088, #5270, #5307, #5318)
- `normalize` parameter to `dense_diff_pool` (#4847)
- `size=None` explanation to jittable `MessagePassing` modules in the documentation (#4850)
- `DataLoaderIterator` class (#4838)
- `GraphStore` support to `Data` and `HeteroData` (#4816)
- `FeatureStore` support to `Data` and `HeteroData` (#4807, #4853)
- `FeatureStore` and `GraphStore` abstractions (#4534, #4568, #5120)
- `global_*_pool` (#4827)
- `JumpingKnowledge` module (#4805)
- `max_sample` argument to `AddMetaPaths` in order to tackle very dense metapath edges (#4750)
- `HANConv` with empty tensors (#4756, #4841)
- `bias` vector to the `GCN` model definition in the "Create Message Passing Networks" tutorial (#4755)
- `transforms.RootedSubgraph` interface with two implementations: `RootedEgoNets` and `RootedRWSubgraph` (#3926)
- `ptr` vectors for `follow_batch` attributes within `Batch.from_data_list` (#4723)
- `torch_geometric.nn.aggr` package (#4687, #4721, #4731, #4762, #4749, #4779, #4863, #4864, #4865, #4866, #4872, #4934, #4935, #4957, #4973, #4973, #4986, #4995, #5000, #5034, #5036, #5039, #4522, #5033, #5085, #5097, #5099, #5104, #5113, #5130, #5098, #5191)
- `DimeNet++` model (#4432, #4699, #4700, #4800)
- `GroupAddRev` module with support for reducing training GPU memory (#4671, #4701, #4715, #4730)
- `wandb` (#4656, #4672, #4676)
- `unbatch` functionality (#4628)
- `to_hetero()` works with custom functions, e.g., `dropout_adj` (#4653)
- `MLP.plain_last=False` option (#4652)
- `HeteroConv` and `to_hetero()` to ensure that `MessagePassing.add_self_loops` is disabled (#4647)
- `HeteroData.subgraph()`, `HeteroData.node_type_subgraph()` and `HeteroData.edge_type_subgraph()` support (#4635)
- `AQSOL` dataset (#4626)
- `HeteroData.node_items()` and `HeteroData.edge_items()` functionality (#4644)
- `MLP` models (#4625)
- `NeighborLoader` in case edge indices are already sorted (via `is_sorted=True`) (#4620, #4702)
- `AddPositionalEncoding` transform (#4521)
- `HeteroData.is_undirected()` support (#4604)
- `Genius` and `Wiki` datasets to `nn.datasets.LINKXDataset` (#4570, #4600)
- `nn.aggr.EquilibriumAggregation` implicit global layer (#4522)
- `to_hetero` (#4582)
- `CHANGELOG.md` (#4581)
- `HeteroData` support to the `RemoveIsolatedNodes` transform (#4479)
- `HeteroData.num_features` functionality (#4504)
- `SAGEConv` (#4437)
- `Geom-GCN` splits to the `Planetoid` datasets (#4442)
- `LinkNeighborLoader` for training scalable link prediction models (#4396, #4439, #4441, #4446, #4508, #4509)
- `GraphSAGE` example on `PPI` (#4416)
- `LSTM` aggregation in `SAGEConv` (#4379)
- `RandomLinkSplit` (#4311, #4383)
- `torch.data` `DataPipes` (#4302, #4345, #4349)
- `cosine` argument in the `KNNGraph`/`RadiusGraph` transforms (#4344)
- `networkx` conversion (#4343)
- `HeteroData.rename` (#4329)
- `MessagePassing.explain_message` method to customize making explanations on messages (#4278, #4448)
- `GATv2Conv` in the `nn.models.GAT` model (#4357)
- `HeteroData.subgraph` functionality (#4243)
- `MaskLabel` module and a corresponding masked label propagation example (#4197)
- `NeighborLoader` (#4025)
- `RandomLinkSplit` (#5190)
- `scatter_reduce` implementation - experimental feature (#5120)
- `RGATConv` device mismatches for `f-scaled` mode (#5187)
- `edge_labels` in `LinkNeighborLoader` (#5186)
- `GINEConv` bug with non-sequential input (#5154)
- `HGTLoader` bug which produced outputs with missing edge types (#5067)
- `load_state_dict` in `Linear` with `strict=False` mode (#5094)
- `MaskLabel.ratio_mask` (#5093)
- `data.num_node_features` computation for sparse matrices (#5089)
- `torch.fx` bug with `torch.nn.aggr` package (#5021)
- `GenConv` test (#4993)
- `act_dict` (part of `graphgym`) to create individual instances instead of reusing the same ones everywhere (#4978)
- `F.one_hot` (#4970)
- `bool` arguments in `argparse` in `benchmark/` (#4967)
- `BasicGNN` for `num_layers=1`, which now respects a desired number of `out_channels` (#4943)
- `len(batch)` will now return the number of graphs inside the batch, not the number of attributes (#4931)
- `data.subgraph` generation for 0-dim tensors (#4932)
- `InMemoryDataset` inferring wrong `len` for lists of tensors (#4837)
- `Batch.separate` when using it for lists of tensors (#4837)
- `TUDataset` where `pre_filter` was not applied whenever `pre_transform` was present
- `RandomTranslate` to `RandomJitter` - the usage of `RandomTranslate` is now deprecated (#4828)
- `HeteroData` with two node types when there exists multiple relations between these types (#4782)
- `edge_type == rev_edge_type` argument in `RandomLinkSplit` (#4757, #5221)
- `GeneralConv` and `neighbor_sample` tests (#4754)
- `HANConv` in which destination node features rather than source node features were propagated (#4753)
- `checkout` and `setup-python` in CI (#4751)
- `protobuf` version (#4719)
- `setter` properties in `Data` (#4682, #4686)
- `edge_weight` in `GCN2Conv` (#4670)
- `TUDataset` and `pre_transform` that modify node features (#4669)
- `pyg_sphinx_theme` documentation template (#4664, #4667)
- `MLP.jittable()` bug in case `return_emb=True` (#4645, #4648)
- `StochasticBlockModelDataset` are now ordered with respect to their labels (#4617)
- `bias` argument in `TAGConv` is now actually applied (#4597)
- `process` and `download` in `Dataset` (#4586)
- `__cat_dim__ != 0` (#4629)
- `SparseTensor` support in `NeighborLoader` (#4320)
- `PNAConv` (#4312)
- `from_networkx` in case some attributes are PyTorch tensors (#4486)
- `DimeNet` (#4506, #4562)
- `DBP15K` (#4428)
- `DimeNet` when resetting parameters (#4424)
- `flow="target_to_source"` (#4418)
- `num_nodes` was not properly updated in the `FixedPoints` transform (#4394)
- `GATConv` was not jittable (#4347)
- `nn.models.GAT` did not produce `out_channels`-many output channels (#4299)
- `GCNConv` could not be combined with `to_hetero` on heterogeneous graphs with one node type (#4279)
- `torchmetrics` (#4287)
Thank you for your continuous support to the Openl Qizhi Community AI Collaboration Platform. In order to protect your usage rights and ensure network security, we updated the Openl Qizhi Community AI Collaboration Platform Usage Agreement in January 2024. The updated agreement specifies that users are prohibited from using intranet penetration tools. After you click "Agree and continue", you can continue to use our services. Thank you for your cooperation and understanding.
For more agreement content, please refer to the《Openl Qizhi Community AI Collaboration Platform Usage Agreement》