# MSAdapter
简体中文 | [English]
## Introduction
MSAdapter is a MindSpore tool for adapting the PyTorch interface, designed to let PyTorch code run efficiently on Ascend without changing the habits of existing PyTorch users.
- PyTorch interface support: MSAdapter aims to preserve the original PyTorch syntax. Users only need to replace `import torch` in their PyTorch source code with `import ms_adapter.pytorch` so that the model can be trained on Ascend. The support status of the high-level APIs used in models can be found in the Supported List.
- PyTorch interface support scope: MSAdapter currently adapts mainly the data-processing and model-structure parts of PyTorch code. It fully supports training in MindSpore's PyNative mode, and some network structures also support Graph mode training. The training-loop code must be written by the user; for concrete usage, refer to the User Guide.
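The import swap described in the first bullet can be automated with a simple text replacement across a codebase; a minimal Python sketch (the `msadapter_demo` directory and `model.py` file are hypothetical examples, and the `as torch` alias keeps the rest of the source unchanged, matching the style of the training example below):

```python
import re
from pathlib import Path

# Create a hypothetical demo project containing PyTorch-style imports.
demo = Path("msadapter_demo")
demo.mkdir(exist_ok=True)
src = demo / "model.py"
src.write_text("import torch\nimport torch.nn as nn\n")

# Rewrite whole import lines only; aliasing as 'torch'/'nn' means
# the remaining code does not need to change.
replacements = [
    (r"^import torch$", "import ms_adapter.pytorch as torch"),
    (r"^import torch\.nn as nn$", "import ms_adapter.pytorch.nn as nn"),
]
for py_file in demo.rglob("*.py"):
    text = py_file.read_text()
    for pattern, repl in replacements:
        text = re.sub(pattern, repl, text, flags=re.MULTILINE)
    py_file.write_text(text)

print(src.read_text())
```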
## Install
### Install MindSpore
Please install MindSpore 2.0.0 Nightly by following the Installation Guide on the MindSpore official website.
### Install MSAdapter
Via pip:

```shell
pip install ms_adapter
```
Via source code:

```shell
git clone https://git.openi.org.cn/OpenI/MSAdapter.git
cd MSAdapter
python setup.py install
```
If an insufficient-permissions message appears, install as follows:

```shell
python setup.py install --user || exit 1
```
## User guide
For data processing and model building, MSAdapter is used in the same way as PyTorch, while the model-training code needs to be customized, as shown in the following example.
### 1. Data processing (only modify the imported package)
```python
from ms_adapter.pytorch.utils.data import DataLoader
from ms_adapter.torchvision import datasets, transforms
from ms_adapter.torchvision.transforms import InterpolationMode

transform = transforms.Compose([
    transforms.Resize((224, 224), interpolation=InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.247, 0.2435, 0.2616]),
])

train_images = datasets.CIFAR10('./', train=True, download=True, transform=transform)
train_data = DataLoader(train_images, batch_size=128, shuffle=True, num_workers=2, drop_last=True)
```
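For reference, `Normalize` applies `(x - mean) / std` to each channel of the tensor produced by `ToTensor`. A minimal NumPy sketch of that arithmetic, independent of MSAdapter (the dummy image is an illustrative placeholder):

```python
import numpy as np

# Per-channel CIFAR-10 statistics from the transform above.
mean = np.array([0.4914, 0.4822, 0.4465]).reshape(3, 1, 1)
std = np.array([0.247, 0.2435, 0.2616]).reshape(3, 1, 1)

# A dummy CHW image with every pixel at 0.5 (the [0, 1] range ToTensor produces).
img = np.full((3, 4, 4), 0.5)

# Broadcasting subtracts each channel's mean and divides by its std.
normalized = (img - mean) / std
print(normalized[0, 0, 0])  # channel 0: (0.5 - 0.4914) / 0.247
```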
### 2. Model construction (only modify the imported package)
```python
from ms_adapter.pytorch.nn import Module, Linear, Flatten

class MLP(Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.flatten = Flatten()
        self.line1 = Linear(in_features=1024, out_features=64)
        self.line2 = Linear(in_features=64, out_features=128, bias=False)
        self.line3 = Linear(in_features=128, out_features=10)

    def forward(self, inputs):
        x = self.flatten(inputs)
        x = self.line1(x)
        x = self.line2(x)
        x = self.line3(x)
        return x
```
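Each `Linear` layer maps `in_features` to `out_features`, so a batch flows through shapes 1024 → 64 → 128 → 10. A NumPy sketch of the same shape arithmetic (random placeholder weights, not a real MSAdapter model):

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, in_f, out_f):
    # y = x @ W.T, mirroring a Linear layer's shape behavior
    # with a random placeholder weight matrix.
    w = rng.standard_normal((out_f, in_f))
    return x @ w.T

batch = rng.standard_normal((128, 1024))  # a batch of flattened inputs
x = linear(batch, 1024, 64)
x = linear(x, 64, 128)
x = linear(x, 128, 10)
print(x.shape)  # one 10-class logit vector per sample
```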
### 3. Model training (custom training)
```python
import ms_adapter.pytorch as torch
import ms_adapter.pytorch.nn as nn
import mindspore as ms

net = MLP()
net.train()
epochs = 500
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)

# Define the training process
loss_net = ms.nn.WithLossCell(net, criterion)
train_net = ms.nn.TrainOneStepCell(loss_net, optimizer)

for i in range(epochs):
    for X, y in train_data:
        res = train_net(X, y)
        print("epoch:{}, loss:{:.6f}".format(i, res.asnumpy()))

# Save model
ms.save_checkpoint(net, "save_path.ckpt")
```
## Resources
- Model library: MSAdapter supports a rich set of deep learning applications; migrations of official PyTorch models to MSAdapter are collected in the Model Resources.
## Contributing
Developers are welcome to contribute. For more details, please see our Contribution Guidelines.
## License
Apache License 2.0