@@ -6,7 +6,7 @@ English | [简体中文](ConstraintList.md)
- [Torch.nn](#jump4)
- [nn.functional](#jump5)
- [torch.linalg](#jump6)
- [torch.utils.data](#jump7)
## <span id="jump1">API Constraints List</span>
@@ -22,8 +22,8 @@ English | [简体中文](ConstraintList.md)
| torch.imag | Currently not support on GRAPH mode |
| torch.max | Currently not support other, Not support on GRAPH mode |
| torch.sum | Currently not support on GRAPH mode |
| torch.lu | Currently not support GRAPH mode, input `get_infos=True` currently cannot detect errors, mindspore not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| torch.lu_solve | Currently not support GRAPH mode, input `left=False` not support, only support 2-D square matrix as input, not support 3-D input |
| torch.lu | Currently not support GRAPH mode, not support gradient computation, input `get_infos=True` currently cannot detect errors, mindspore not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| torch.lu_solve | Currently not support GRAPH mode, not support gradient computation, input `left=False` not support, only support 2-D square matrix as input, not support 3-D input |
| torch.lstsq | Currently not support return the second result QR, not support on GRAPH mode, not support gradient computation |
| torch.svd | Currently not support GRAPH mode on Ascend, not support gradient computation on Ascend |
| torch.nextafter | Currently not support float32 on CPU |
@@ -31,8 +31,7 @@ English | [简体中文](ConstraintList.md)
| torch.i0 | Currently not support gradient computation on Ascend, currently not support GRAPH mode on Ascend |
| torch.index_add | Not support `input` of more than 2-D or `dim` >= 1. Not support GRAPH mode |
| torch.index_copy | Not support `input` of more than 2-D or `dim` >= 1. Not support GRAPH mode |
| torch.scatter_reduce | Currently not support `reduce`="mean" |
| torch.histogramdd | Currently not support float64 input |
| torch.scatter_reduce | Currently not support `reduce`="mean", not support `reduce`="prod" with `dim`>0 on Ascend |
| torch.asarray | Currently not support input `device`, `copy`, `requires_grad` as configuration |
| torch.complex | Currently not support float16 input |
| torch.fmin | Currently not support gradient computation, not support GRAPH mode |
@@ -41,29 +40,41 @@ English | [简体中文](ConstraintList.md)
| torch.float_power | Currently not support complex input |
| torch.add | Currently not support both bool type input and return bool output |
| torch.polygamma | When `n` is zero, the result may be wrong |
| torch.matmul | Currently not support int type input on GPU |
| torch.geqrf | Currently not support input ndim > 2 |
| torch.repeat_interleave | Currently not support `output_size` |
| torch.index_reduce | Currently not support `reduce`="mean" |
| torch.view_as_complex | Currently the output tensor is produced by copying data rather than as a view sharing memory |
| torch.pad | when `padding_mode` is 'reflect', not support 5D input |
| torch.pad | when `padding_mode` is 'reflect', not support pad last 3 dimensions |
| torch.corrcoef | Currently not support complex inputs |
| torch.symeig | Currently not support gradient computation, not support GRAPH mode |
| torch.fmax | Currently not support gradient computation on GPU and Ascend, not support GRAPH mode on GPU and Ascend |
| torch.fft | Currently not support gradient computation, not support GRAPH mode |
| torch.rfft | Currently not support gradient computation, not support GRAPH mode |
| torch.poisson | Currently not support gradient computation on Ascend |
| torch.poisson | Currently not support gradient computation on Ascend, not support GRAPH mode on Ascend |
| torch.norm | 1.When `p` is 0/1/-1/-2, matrix norm is not supported; 2.int/float `p` values other than inf/-inf/0/1/-1/2/-2 are not supported |
| torch.xlogy | Currently only support float16 and float32 on Ascend |
| torch.digamma | Currently only support float16 and float32 on Ascend |
| torch.lgamma | Currently only support float16 and float32 on Ascend |
| torch.logspace | Currently not support float type `base`. Currently only support GPU |
| torch.sgn | Currently not support int16 on Ascend |
| torch.mm | Currently not support int type input on GPU |
| torch.inner | Currently not support int type input on Ascend |
| torch.isclose | Currently not support equal_nan=False on Ascend |
| torch.matrix_rank | Currently not support complex input, not support GRAPH mode, not support gradient computation on Ascend |
| torch.autograd.functional.vjp | `create_graph`, `strict` not support |
| torch.autograd.functional.jvp | `create_graph`, `strict` not support |
| torch.autograd.functional.jacobian | `create_graph`, `strict` not support |
| torch.inference_mode | Currently equivalent to 'no_grad' |
| torch.tensordot | Currently not support int type input on GPU |
| torch.cuda.amp.GradScaler | 1.The unscale_ method needs to pass in the corresponding gradients: unscale_(optimizer, grads); 2.The step method needs to pass in the corresponding gradients: step(optimizer, grads); 3.The unscale_ method not support GRAPH mode |
| Tensor.bool | Not support parameter memory_format|
| Tensor.bool | Currently not support `memory_format` |
| Tensor.expand | Type is constrained, only support Tensor[Float16], Tensor[Float32], Tensor[Int32], Tensor[Int8], Tensor[UInt8] |
| Tensor.float | Currently not support memory_format |
| Tensor.float | Currently not support `memory_format` |
| Tensor.scatter | Currently not support `reduce`='multiply', Ascend not support `reduce`='add', not support indices.shape != src.shape |
| Tensor.std | Currently not support complex number and float64 input |
| Tensor.xlogy | Currently only support float16 and float32 on Ascend |
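The torch.cuda.amp.GradScaler constraint above means gradients are passed to `unscale_` and `step` explicitly rather than read from parameter `.grad` fields. The loss-scaling arithmetic itself is framework-agnostic; a minimal pure-Python sketch (the helper names are illustrative, not the MSAdapter API):

```python
# Loss scaling multiplies the loss by a large factor so small gradients
# survive float16; the resulting gradients must be divided by the same
# factor before the optimizer step. Passing grads explicitly, as in
# MSAdapter's unscale_(optimizer, grads), performs this division step.
def scale_loss(loss: float, scale: float) -> float:
    return loss * scale

def unscale_grads(grads: list, scale: float) -> list:
    return [g / scale for g in grads]

# Gradients computed from a loss scaled by 1024 are 1024x too large:
true_grads = [0.5, -1.0]
scaled_grads = [g * 1024.0 for g in true_grads]
print(unscale_grads(scaled_grads, 1024.0))  # [0.5, -1.0]
```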
@@ -117,8 +128,8 @@ English | [简体中文](ConstraintList.md)
| Tensor.logical_xor_ | Currently not support on GRAPH mode |
| Tensor.lt_ | Currently not support on GRAPH mode |
| Tensor.less_ | Currently not support on GRAPH mode |
| Tensor.lu | Currently not support GRAPH mode, input `get_infos=True` currently cannot detect errors, not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| Tensor.lu_solve | Currently not support GRAPH mode, input `left=False` not support, only support 2-D square matrix as input, not support 3-D input |
| Tensor.lu | Currently not support GRAPH mode, not support gradient computation, input `get_infos=True` currently cannot detect errors, not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| Tensor.lu_solve | Currently not support GRAPH mode, not support gradient computation, input `left=False` not support, only support 2-D square matrix as input, not support 3-D input |
| Tensor.lstsq | Not support return the second result QR, not support on GRAPH mode, not support gradient computation |
| Tensor.mul_ | Currently not support on GRAPH mode |
| Tensor.multiply_ | Currently not support on GRAPH mode |
@@ -158,12 +169,11 @@ English | [简体中文](ConstraintList.md)
| Tensor.nextafter_ | Currently not support float32 on CPU |
| Tensor.fmin | Currently not support gradient computation, not support GRAPH mode |
| Tensor.imag | Currently not support on GRAPH mode |
| Tensor.scatter_reduce | Currently not support `reduce`="mean" |
| Tensor.scatter_reduce_ | Currently not support `reduce`="mean" and GRAPH mode |
| Tensor.scatter_reduce | Currently not support `reduce`="mean", not support `reduce`="prod" with `dim`>0 on Ascend |
| Tensor.scatter_reduce_ | Currently not support `reduce`="mean" and GRAPH mode, not support `reduce`="prod" with `dim`>0 on Ascend |
| Tensor.neg | Currently not support uint32, uint64 |
| Tensor.add | Currently not support both bool type input and return bool output |
| Tensor.polygamma | When `n` is zero, the result may be wrong |
| Tensor.matmul | Currently not support int type input on GPU |
| Tensor.geqrf | Currently not support input ndim > 2 |
| Tensor.repeat_interleave | Currently not support `output_size` |
| Tensor.index_reduce | Currently not support `reduce`="mean" |
@@ -181,6 +191,20 @@ English | [简体中文](ConstraintList.md)
| Tensor.digamma | Currently only support float16 and float32 on Ascend |
| Tensor.lgamma | Currently only support float16 and float32 on Ascend |
| Tensor.arcsinh_ | Currently not support on GRAPH mode |
| Tensor.long | Currently not support `memory_format` |
| Tensor.half | Currently not support `memory_format` |
| Tensor.int | Currently not support `memory_format` |
| Tensor.double | Currently not support `memory_format` |
| Tensor.char | Currently not support `memory_format` |
| Tensor.byte | Currently not support `memory_format` |
| Tensor.short | Currently not support `memory_format` |
| Tensor.new_full | 1.Currently not support `device`; 2.Currently not support `requires_grad`; 3.Currently not support `layout`; 4.Currently not support `pin_memory`; |
| Tensor.new_zeros | 1.Currently not support `device`; 2.Currently not support `requires_grad`; |
| Tensor.sgn | Currently not support int16 on Ascend |
| Tensor.mm | Currently not support int type input on GPU |
| Tensor.inner | Currently not support int type input on Ascend |
| Tensor.scatter_add_ | Requires updates_shape = indices_shape + input_shape[1:] on Ascend. Currently not supported on GPU |
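The shape rule in the Tensor.scatter_add_ row can be checked mechanically; a small pure-Python sketch of that check (the helper name is illustrative):

```python
# On Ascend, scatter_add_ requires:
#   updates.shape == indices.shape + input.shape[1:]
def scatter_add_shapes_ok(input_shape, indices_shape, updates_shape):
    return tuple(updates_shape) == tuple(indices_shape) + tuple(input_shape)[1:]

# input (4, 5) with indices (2, 3) -> updates must be (2, 3, 5)
print(scatter_add_shapes_ok((4, 5), (2, 3), (2, 3, 5)))  # True
print(scatter_add_shapes_ok((4, 5), (2, 3), (2, 3)))     # False
```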
### <span id="jump4">Torch.nn</span>
| MSAdapter APIs | Constraint conditions |
@@ -197,30 +221,32 @@ English | [简体中文](ConstraintList.md)
| nn.RReLU | inplace not support GRAPH mode |
| nn.SELU | inplace not support GRAPH mode |
| nn.CELU | inplace not support GRAPH mode |
| nn.Mish | inplace not support GRAPH mode |
| nn.Mish | 1.`inplace` not support GRAPH mode; 2.Not support float64 |
| nn.Threshold | inplace not support GRAPH mode |
| nn.Softshrink | Not support float64 |
| nn.LogSoftmax | Not support float64, not support 8D and higher dimensions |
| nn.Linear | `device`, `dtype` parameters not support |
| nn.UpsamplingNearest2d | Not support size=None |
| nn.Conv1d | 1.`padding_mode` only support 'zeros'; 2.On Ascend, `groups` can only support 1 or equal to `in_channels` |
| nn.Conv2d | 1.`padding_mode` only support 'zeros'; 2.On Ascend, `groups` can only support 1 or equal to `in_channels` |
| nn.Conv3d | 1.Not support complex number; 2.`padding_mode` only support 'zeros'; 3.`groups`, `dilation` only support 1 on Ascend |
| nn.Conv1d | On Ascend, `groups` can only support 1 or equal to `in_channels` |
| nn.Conv2d | On Ascend, `groups` can only support 1 or equal to `in_channels` |
| nn.Conv3d | 1.Not support complex number; 2.`padding_mode` not support 'reflect'; 3.`groups`, `dilation` only support 1 on Ascend |
| nn.ConvTranspose1d | 1.`output_padding`,`output_size` not support; 2.On Ascend, `groups` can only support 1 or equal to `in_channels` |
| nn.ConvTranspose2d | 1.`output_padding`,`output_size` not support. 2.On Ascend, `groups` can only support 1 or equal to `in_channels` |
| nn.AdaptiveLogSoftmaxWithLoss | Not support GRAPH mode |
| nn.LSTM | Currently `proj_size` not support |
| nn.ReflectionPad1d | `padding` not support negative values |
| nn.ReflectionPad2d | `padding` not support negative values |
| nn.LSTM | Under GRAPH mode, `input` not support PackedSequence type |
| nn.ReflectionPad3d | `padding` not support negative values |
| nn.Transformer | Not support assigning values to keyword arguments with `=` operator. Not support input tensors of shape 0 |
| nn.TransformerEncoder | Not support assigning values to keyword arguments with `=` operator. Not support input tensors of shape 0 |
| nn.TransformerDecoder | Not support assigning values to keyword arguments with `=` operator. Not support input tensors of shape 0 |
| nn.TransformerEncoderLayer | Not support assigning values to keyword arguments with `=` operator. Not support input tensors of shape 0 |
| nn.TransformerDecoderLayer | Not support assigning values to keyword arguments with `=` operator. Not support input tensors of shape 0 |
| nn.Transformer | Not support input tensors of shape 0 |
| nn.TransformerEncoder | Not support input tensors of shape 0 |
| nn.TransformerDecoder | Not support input tensors of shape 0 |
| nn.TransformerEncoderLayer | Not support input tensors of shape 0 |
| nn.TransformerDecoderLayer | Not support input tensors of shape 0 |
| nn.AdaptiveMaxPool1d | `return_indices` not support on Ascend |
| nn.AdaptiveMaxPool2d | `return_indices` not support on Ascend |
| nn.Embedding | 1.`scale_grad_by_freq`, `sparse` are not supported; 2.`norm_type` can only be 2 |
| nn.Upsample | Not support `recompute_scale_factor` |
| nn.RNN | Under GRAPH mode, `input` not support PackedSequence type |
| nn.GRU | Under GRAPH mode, `input` not support PackedSequence type |
| nn.CrossEntropyLoss | There is a risk of overflow when `target` type is int64 |
### <span id="jump5">nn.functional</span>
| MSAdapter APIs | Constraint conditions |
@@ -246,14 +272,19 @@ English | [简体中文](ConstraintList.md)
| functional.instance_norm | In GRAPH mode, when in training mode, `running_mean` and `running_var` are not supported |
| functional.batch_norm | In GRAPH mode, when in training mode, `running_mean` and `running_var` are not supported |
| functional.embedding | 1.`scale_grad_by_freq`, `sparse` are not supported; 2.`norm_type` can only be 2 |
| functional.mish | 1.`inplace` not support GRAPH mode; 2.Not support float64 |
| functional.selu | `inplace` not support GRAPH mode |
| functional.celu | 1.`inplace` not support GRAPH mode; 2.Not support float64 |
| functional.grid_sample | Not support `mode='bicubic'` |
| functional.cross_entropy | There is a risk of overflow when `target` type is int64 |
### <span id="jump6">torch.linalg</span>
| MSAdapter APIs | Constraint conditions |
| --------------- | -------------- |
| lu | Currently not support on GRAPH mode, not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| lu_solve | Currently not support on GRAPH mode, input `left=False` not support, only support 2-D square matrix as input, not support 3-D input |
| lu_factor | Currently not support on GRAPH mode, only support 2-D square matrix as input, not support (*,M,N) shape input |
| lu_factor_ex | Currently not support on GRAPH mode, input `get_infos=True` currently cannot detect errors, not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| lu | Currently not support on GRAPH mode, not support gradient computation, not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| lu_solve | Currently not support on GRAPH mode, not support gradient computation, input `left=False` not support, only support 2-D square matrix as input, not support 3-D input |
| lu_factor | Currently not support on GRAPH mode, not support gradient computation, only support 2-D square matrix as input, not support (*,M,N) shape input |
| lu_factor_ex | Currently not support on GRAPH mode, not support gradient computation, input `get_infos=True` currently cannot detect errors, not support `pivot=False`, only support 2-D square matrix as input, not support (*,M,N) shape input |
| lstsq | Currently not support on GRAPH mode, not support gradient computation |
| eigvals | Currently not support GRAPH mode, not support gradient computation |
| svd | `driver` only support None as input, not support gradient computation on Ascend, currently not support GRAPH mode on Ascend |
@@ -261,6 +292,22 @@ English | [简体中文](ConstraintList.md)
| norm | Currently not support complex input; `ord` not support float input; `ord` as nuclear norm, float('inf') or int not supported on Ascend |
| vector_norm | Currently not support complex input, `ord` not support float input |
| matrix_power | Currently not support `n` < 0 on GPU |
| eigvalsh | not support gradient computation |
| eigvalsh | Currently not support on GRAPH mode, not support gradient computation |
| eigh | Currently not support on GRAPH mode, not support gradient computation |
| solve | Currently not support gradient computation |
| solve | Currently not support on GRAPH mode, not support gradient computation |
| cholesky | Currently not support integer input on GPU |
| cholesky_ex | Input `check_errors=True` currently cannot detect errors, not support integer input on GPU |
| inv_ex | Input `check_errors=True` currently cannot detect errors |
| matrix_norm | Currently input `ord` not support +2/-2 norm and nuclear norm on Ascend, not support complex input |
| matrix_rank | Currently not support complex input, not support GRAPH mode, not support gradient computation on Ascend |
| solve_triangular | Currently not support on Ascend, not support `left=False` |
| cond | Currently only support 2-D square matrix as input, not support complex input on Ascend; float32 type input only support `p=1/-1/'fro'/'inf'/'-inf'`; float64 type input only support `p='fro'`; complex128 type input only support `p=2/-2`, complex64 type input only support `p='fro'/'nuc'` on GPU and CPU |
@@ -7,18 +7,19 @@ English | [简体中文](SupportedList.md)
- [nn.functional](#jump5)
- [torch.linalg](#jump6)
- [torch.optim](#jump7)
- [torch.utils.data](#jump9)
### <span id="jump8">General Constraint</span>
- Not support configuring `layout`, `device`, `requires_grad`, `memory_format`.
- Not support `Generator` objects that manage the state of pseudo-random number generation.
- Not support calculations on 7D and higher dimensions.
- Complex type support is being improved.
- Ascend does not fully support float64 input; if a function does not work with float64, try float32 or float16 instead.
- Currently, nan and inf inputs are not supported on Ascend; if the input contains nan or inf values, the results may be incorrect.
- The function of [PyTorch APIs that support tensor to be a view](https://pytorch.org/docs/1.12/tensor_view.html) is constrained: instead of sharing memory between the input and output tensor, MSAdapter currently copies the data.
- On Ascend and GPU, MindSpore and PyTorch handle overflow differently (for example, at the upper limits of int16 and int32). Avoid passing inputs beyond a type's upper or lower limits, and avoid casting data that significantly exceeds a type's range to that smaller type, to prevent unexpected results.
- For the function with note "Function is constrained", please check the [APIs Constraints List](ConstraintList_en.md) for more details.
- For general constraints related to optimizers, see [Optimizer General Constraints](#jump10) and [lr_scheduler General Constraints](#jump11)
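The overflow caveat above can be illustrated without either framework: two's-complement integer types wrap around at their limits, so a value just past a type's range comes back with the opposite sign. A pure-Python sketch (the helper is illustrative):

```python
def wrap_int16(x: int) -> int:
    """Emulate two's-complement int16 wraparound."""
    return (x + 2**15) % 2**16 - 2**15

print(wrap_int16(32767 + 1))   # -32768: one past the max wraps to the min
print(wrap_int16(-32768 - 1))  # 32767: one past the min wraps to the max
```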
## <span id="jump1">List of PyTorch APIs supported by MSAdapter</span>
@@ -193,7 +194,7 @@ English | [简体中文](SupportedList.md)
| torch.prod | Supported | |
| torch.qr | Supported | |
| torch.std | Supported | |
| torch.sgn | Supported | |
| torch.sgn | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.unique_consecutive | Supported | |
| torch.var | Supported | |
| torch.count_nonzero | Supported | |
@@ -246,7 +247,7 @@ English | [简体中文](SupportedList.md)
| torch.flatten | Supported | |
| torch.flip | Supported | |
| torch.flipud | Supported | |
| torch.histc | Partly supported | Currently not support on GPU |
| torch.histc | Supported | |
| torch.meshgrid | Supported | |
| torch.ravel | Supported | |
| torch.not_equal | Supported | |
@@ -261,12 +262,13 @@ English | [简体中文](SupportedList.md)
| torch.bmm | Supported | |
| torch.cholesky | Supported | |
| torch.cholesky_inverse | Partly supported | Currently not support on GPU |
| torch.cholesky_solve | Supported | |
| torch.dot | Supported | |
| torch.repeat_interleave | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.addbmm | Supported | |
| torch.det | Supported | |
| torch.addmm | Supported | |
| torch.matmul | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.matmul | Supported | |
| torch.mv | Supported | |
| torch.orgqr | Supported | |
| torch.outer | Supported | |
@@ -275,7 +277,7 @@ English | [简体中文](SupportedList.md)
| torch.inner | Supported | |
| torch.logdet | Supported | |
| torch.lstsq | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.mm | Supported | |
| torch.cuda.is_available | Supported | |
| torch.ByteTensor | Supported | |
| torch.CharTensor | Supported | |
@@ -317,12 +319,12 @@ English | [简体中文](SupportedList.md)
| torch.argsort | Supported | |
| torch.cross | Partly supported | Currently not support on GPU |
| torch.cummax | Partly supported | Currently not support on Ascend |
| torch.einsum | Partly supported | Only support on GPU |
| torch.einsum | Supported | |
| torch.fliplr | Supported | |
| torch.hamming_window | Supported | |
| torch.svd | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.searchsorted | Supported | |
| torch.fmax | Partly supported | Only support on CPU |
| torch.fmax | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.fmin | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.inverse | Partly supported | Currently not support on Ascend |
| torch.poisson | Partly supported | [Function is constrained](ConstraintList_en.md) |
@@ -335,9 +337,9 @@ English | [简体中文](SupportedList.md)
| torch.resolve_conj | Partly supported | Currently not support on GRAPH mode |
| torch.index_add | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.scatter_reduce | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.scatter_add | Supported | |
| torch.scatter_add | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.index_copy | Supported | |
| torch.histogramdd | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.histogramdd | Supported | |
| torch.diag_embed | Supported | |
| torch.resolve_neg | Partly supported | Currently not support on GRAPH mode |
| torch.pinverse | Partly supported | Currently not support on Ascend |
@@ -356,7 +358,7 @@ English | [简体中文](SupportedList.md)
| torch.gcd | Supported | |
| torch.histogram | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.lcm | Supported | |
| torch.tensordot | Supported | |
| torch.tensordot | Partly supported | [Function is constrained](ConstraintList_en.md) |
| torch.tril_indices | Supported | |
| torch.triu_indices | Supported | |
| torch.geqrf | Partly supported | [Function is constrained](ConstraintList_en.md) |
@@ -376,12 +378,26 @@ English | [简体中文](SupportedList.md)
@@ -389,10 +405,10 @@ English | [简体中文](SupportedList.md)
| Tensor.acosh | Supported | |
| Tensor.new | Supported | |
| Tensor.new_tensor | Supported | |
| Tensor.new_full | Supported | |
| Tensor.new_full | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.new_empty | Supported | |
| Tensor.new_ones | Supported | |
| Tensor.new_zeros | Supported | |
| Tensor.new_zeros | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.is_cuda | Supported | |
| Tensor.ndim | Supported | |
| Tensor.add | Partly supported | [Function is constrained](ConstraintList_en.md) |
@@ -431,11 +447,12 @@ English | [简体中文](SupportedList.md)
| Tensor.bmm | Supported | |
| Tensor.bool | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.broadcast_to | Supported | |
| Tensor.byte | Supported | |
| Tensor.byte | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.ceil | Supported | |
| Tensor.char | Supported | |
| Tensor.char | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.cholesky | Supported | |
| Tensor.cholesky_inverse | Partly supported | Currently not support on GPU |
| Tensor.cholesky_solve | Supported | |
| Tensor.clamp | Supported | |
| Tensor.clip | Supported | |
| Tensor.clone | Supported | |
@@ -460,7 +477,7 @@ English | [简体中文](SupportedList.md)
| Tensor.dist | Supported | |
| Tensor.divide | Supported | |
| Tensor.dot | Supported | |
| Tensor.double | Supported | |
| Tensor.double | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.dsplit | Supported | |
| Tensor.eig | Partly supported | Currently not support on GPU |
| Tensor.eq | Supported | |
@@ -484,13 +501,13 @@ English | [简体中文](SupportedList.md)
| Tensor.greater | Supported | |
| Tensor.greater_equal | Supported | |
| Tensor.gt | Supported | |
| Tensor.half | Supported | |
| Tensor.half | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.hardshrink | Supported | |
| Tensor.heaviside | Supported | |
| Tensor.hsplit | Supported | |
| Tensor.hypot | Supported | |
| Tensor.index_select | Supported | |
| Tensor.int | Supported | |
| Tensor.int | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.is_complex | Supported | |
| Tensor.isclose | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.isfinite | Supported | |
@@ -499,7 +516,6 @@ English | [简体中文](SupportedList.md)
| Tensor.isneginf | Supported | |
| Tensor.isposinf | Supported | |
| Tensor.isreal | Supported | |
| Tensor.is_tensor | Supported | |
| Tensor.item | Supported | |
| Tensor.le | Supported | |
| Tensor.less | Supported | |
@@ -514,20 +530,20 @@ English | [简体中文](SupportedList.md)
| Tensor.logical_or | Supported | |
| Tensor.logical_xor | Supported | |
| Tensor.logsumexp | Supported | |
| Tensor.long | Supported | |
| Tensor.long | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.lt | Supported | |
| Tensor.lu | Partly supported | Currently not support on Ascend |
| Tensor.lu_solve | Partly supported | Currently not support on Ascend |
| Tensor.lstsq | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.masked_fill | Supported | |
| Tensor.matmul | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.matmul | Supported | |
| Tensor.max | Supported | |
| Tensor.maximum | Supported | |
| Tensor.mean | Supported | |
| Tensor.min | Supported | |
| Tensor.fmax | Partly supported | Only support on CPU |
| Tensor.fmax | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.fmin | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.histc | Partly supported | Currently not support on GPU |
| Tensor.histc | Supported | |
| Tensor.minimum | Supported | |
| Tensor.moveaxis | Supported | |
| Tensor.movedim | Supported | |
@@ -537,6 +553,7 @@ English | [简体中文](SupportedList.md)
| Tensor.nanmean | Supported | |
| Tensor.nansum | Supported | |
| Tensor.narrow | Supported | |
| Tensor.narrow_copy | Supported | |
| Tensor.ndimension | Supported | |
| Tensor.ne | Supported | |
| Tensor.neg | Partly supported | [Function is constrained](ConstraintList_en.md) |
@@ -567,7 +584,7 @@ English | [简体中文](SupportedList.md)
| Tensor.rsqrt_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.rsqrt | Supported | |
| Tensor.select | Supported | |
| Tensor.short | Supported | |
| Tensor.short | Partly supported | [Function is constrained](ConstraintList_en.md)|
| Tensor.sigmoid | Supported | |
| Tensor.sign | Supported | |
| Tensor.signbit | Supported | |
@@ -734,8 +751,8 @@ English | [简体中文](SupportedList.md)
| Tensor.index_fill_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.index_add | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.index_add_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.scatter_add | Supported | |
| Tensor.scatter_add_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.scatter_add | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.scatter_add_ | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.index_copy | Supported | |
| Tensor.index_copy_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.diag_embed | Supported | |
@@ -764,7 +781,7 @@ English | [简体中文](SupportedList.md)
| Tensor.igammac_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.positive | Supported | |
| Tensor.remainder_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.sgn | Supported | |
| Tensor.sgn | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.sgn_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.subtract_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.argmax | Supported | |
@@ -772,7 +789,7 @@ English | [简体中文](SupportedList.md)
| Tensor.histogram | Supported | |
| Tensor.lcm | Supported | |
| Tensor.geqrf | Partly supported | [Function is constrained](ConstraintList_en.md) |
| Tensor.inner | Supported | |
| Tensor.kthvalue | Supported | |
| Tensor.adjoint | Supported | |
| Tensor.angle | Supported | |
@@ -836,6 +853,11 @@ English | [简体中文](SupportedList.md)
| Tensor.map_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.diagonal_scatter | Supported | |
| Tensor.apply_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.nanmedian | Partly supported | Currently not support on GPU or Ascend |
| Tensor.frexp | Supported | |
| Tensor.detach_ | Partly supported | Currently not support on GRAPH mode |
| Tensor.backward | Not supported | For differentiation, use MindSpore's differential interfaces [ms.grad](https://www.mindspore.cn/docs/en/r2.0/api_python/mindspore/mindspore.grad.html) or [ms.value_and_grad](https://www.mindspore.cn/docs/en/r2.0/api_python/mindspore/mindspore.value_and_grad.html). For usage in an actual network, refer to the [mobilenet_v2 example](https://openi.pcl.ac.cn/OpenI/MSAdapterModelZoo/src/branch/master/official/cv/mobilenet_v2/mobilenet_v2_adapter.py) |
| Tensor.triangular_solve | Partly supported | Currently not support on Ascend |
### <span id="jump4">Torch.nn</span>
| MSAdapter APIs | Status | Restrictions |
@@ -861,8 +883,8 @@ English | [简体中文](SupportedList.md)
| nn.AdaptiveAvgPool1d | Supported | |
| nn.AdaptiveAvgPool2d | Supported | |
| nn.AdaptiveAvgPool3d | Supported | |
| nn.ReflectionPad1d | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.ReflectionPad2d | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.ReflectionPad1d | Supported | |
| nn.ReflectionPad2d | Supported | |
| nn.ReflectionPad3d | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.ReplicationPad1d | Supported | |
| nn.ReplicationPad2d | Supported | |
@@ -882,12 +904,12 @@ English | [简体中文](SupportedList.md)
| nn.ReLU | Supported | |
| nn.ReLU6 | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.RReLU | Partly supported | inplace not support on GRAPH mode |
| nn.SELU | Partly supported | inplace not support on GRAPH mode |
| nn.CELU | Partly supported | inplace not support on GRAPH mode |
| nn.GELU | Supported | |
| nn.Sigmoid | Supported | |
| nn.SiLU | Supported | |
| nn.Mish | Partly supported | inplace not support on GRAPH mode |
| nn.Mish | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.Softplus | Supported | |
| nn.Softshrink | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.Softsign | Supported | |
@@ -909,7 +931,7 @@ English | [简体中文](SupportedList.md)
| nn.LayerNorm | Supported | |
| nn.LocalResponseNorm | Supported | |
| nn.RNNBase | Supported | |
| nn.RNN | Supported | |
| nn.RNN | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.RNNCell | Supported | |
| nn.LSTMCell | Supported | |
| nn.GRUCell | Supported | |
| nn.PairwiseDistance | Supported | |
| nn.L1Loss | Supported | |
| nn.MSELoss | Supported | |
| nn.CrossEntropyLoss | Partly supported | [Function is constrained](ConstraintList_en.md) |
| nn.CTCLoss | Supported | |
| nn.NLLLoss | Supported | |
| nn.PoissonNLLLoss | Supported | |
### <span id="jump6">torch.linalg</span>
| MSAdapter APIs | Status | Restrictions |
| --------------- | -------- | ------------ |
| norm | Partly supported | [Function is constrained](ConstraintList_en.md) |
| vector_norm | Partly supported | [Function is constrained](ConstraintList_en.md) |
| matrix_norm | Partly supported | [Function is constrained](ConstraintList_en.md) |
| diagonal | Supported | |
| det | Supported | |
| slogdet | Supported | |
| cond | Partly supported | [Function is constrained](ConstraintList_en.md) |
| matrix_rank | Partly supported | [Function is constrained](ConstraintList_en.md) |
| cholesky | Partly supported | [Function is constrained](ConstraintList_en.md) |
| qr | Unsupported | |
| lu | Partly supported | [Function is constrained](ConstraintList_en.md) |
| lu_factor | Partly supported | [Function is constrained](ConstraintList_en.md) |
| svd | Partly supported | [Function is constrained](ConstraintList_en.md) |
| svdvals | Partly supported | [Function is constrained](ConstraintList_en.md) |
| solve | Partly supported | [Function is constrained](ConstraintList_en.md) |
| solve_triangular | Partly supported | [Function is constrained](ConstraintList_en.md) |
| lu_solve | Unsupported | |
| lstsq | Partly supported | [Function is constrained](ConstraintList_en.md) |
| inv | Partly supported | [Function is constrained](ConstraintList_en.md) |
| qr | Supported | |
| matrix_exp | Unsupported | |
| matrix_power | Partly supported | [Function is constrained](ConstraintList_en.md) |
| cross | Partly supported | Currently not support on GPU |
| matmul | Supported | |
| vecdot | Unsupported | |
| multi_dot | Supported | |
| householder_product | Supported | |
| tensorinv | Unsupported | |
| tensorsolve | Unsupported | |
| vander | Supported | |
| cholesky_ex | Partly supported | [Function is constrained](ConstraintList_en.md) |
| inv_ex | Partly supported | [Function is constrained](ConstraintList_en.md) |
| solve_ex | Unsupported | |
| lu_factor_ex | Unsupported | |
| ldl_factor | Unsupported | |
### <span id="jump7">torch.optim</span>
<span id="jump10">Optimizer General Constraints:</span>
- The properties in the member variable `param_group` can all be modified under PyNative mode, but only `lr` can be modified in Graph mode.
- For the following optimizers, for compatibility with MindSpore Graph mode, `param_group['lr']` is initialized as a MindSpore `Parameter`. When `param_group['lr']` needs to be modified, `param_group['lr'] = lr` is supported in PyNative mode, but in Graph mode `lr = mindspore.ops.depend(lr, mindspore.ops.assign(param_group['lr'], lr))` is required.
- Since `param_group['lr']` is initialized as a `Parameter` as described above, use `float(param_group['lr'])` to convert it when it needs to be printed.
- Due to differences in the differentiation mechanisms, `optimizer.step()` needs to be replaced with `optimizer.step(grads)`, where `grads` can be obtained via `mindspore.grad` or `mindspore.value_and_grad`.
| MSAdapter APIs | Status | Restrictions |
| --------------- | -------- | ------------ |
| Optimizer | Supported | |
| Adadelta | Unsupported | Please use [mindspore.nn.Adadelta](https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Adadelta.html#mindspore.nn.Adadelta) instead|
| Adagrad | Unsupported | Please use [mindspore.nn.Adagrad](https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Adagrad.html#mindspore.nn.Adagrad) instead|
| Adam | Supported | |
| AdamW | Supported | |
| SparseAdam | Unsupported | |
| Adamax | Unsupported | Please use [mindspore.nn.AdaMax](https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.AdaMax.html#mindspore.nn.AdaMax) instead|
| ASGD | Unsupported | Please use [mindspore.nn.ASGD](https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.ASGD.html#mindspore.nn.ASGD) instead|
| RAdam | Unsupported | |
| RMSprop | Unsupported | Please use [mindspore.nn.RMSprop](https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.RMSProp.html#mindspore.nn.RMSProp) instead|
| Rprop | Unsupported | Please use [mindspore.nn.Rprop](https://www.mindspore.cn/docs/en/master/api_python/nn/mindspore.nn.Rprop.html#mindspore.nn.Rprop) instead |
| SGD | Supported | |
<span id="jump11">lr_scheduler General Constraints:</span>
- Since the optimizer's `lr` may be of type `Parameter`, the lr_scheduler's `base_lr` and `_last_lr` may also be of type `Parameter`. When these variables need to be saved or restored, convert their types in advance so that saving and restoring work normally. For example, when saving, change `return state_dict` in the `state_dict` function to `return self._process_state_dict(state_dict)`, where `_process_state_dict` is a public method defined in the parent class `LRScheduler` that converts the corresponding variables from `Parameter` to plain Python numeric types. Similarly, when restoring, call `_process_state_dict_revert` in the parent class to convert them back to `Parameter`.