MindSpore is a new open-source deep learning training/inference framework that can be used in mobile, edge, and cloud scenarios. MindSpore is designed to provide a friendly development experience and efficient execution for data scientists and algorithm engineers, native support for the Ascend AI processor, and software-hardware co-optimization. At the same time, MindSpore, as a global AI open-source community, aims to further advance the development and enrichment of the AI software/hardware application ecosystem.

For more details, please check out our Architecture Guide.
Currently, there are two automatic differentiation techniques in mainstream deep learning frameworks: operator overloading (OO) and source transformation (ST).
PyTorch uses OO. Compared to ST, OO generates the gradient graph at runtime, so it does not need to take function calls and control flow into consideration, which makes development easier. However, OO cannot optimize the gradient graph at compilation time, and control flow has to be unfolded at runtime, so it is difficult to achieve peak performance.
MindSpore implements automatic differentiation based on ST. On the one hand, it supports automatic differentiation of automatic control flow, so building models is just as convenient as in PyTorch. On the other hand, MindSpore can perform static compilation optimization on neural networks to achieve great performance.
The implementation of MindSpore automatic differentiation can be understood as symbolic differentiation of the program itself. Because MindSpore IR is a functional intermediate representation, it has an intuitive correspondence with composite functions in basic algebra: each primitive operation in MindSpore IR corresponds to a basic function, and the derivative of any composite function built from these basic functions can be derived, including functions with complex control flow.
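As a minimal sketch (assuming MindSpore is installed with the CPU backend), the following differentiates a composite function expressed as an `nn.Cell`; `ops.GradOperation` is part of the public API, while the `Cube` cell itself is purely illustrative:

```python
import numpy as np
import mindspore.context as context
import mindspore.nn as nn
from mindspore import Tensor, ops

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")

class Cube(nn.Cell):
    """Illustrative composite function f(x) = x * x * x."""
    def construct(self, x):
        return x * x * x

# GradOperation transforms the whole function into its gradient function
# at compile time (source transformation) instead of tracing it at runtime.
grad_fn = ops.GradOperation()(Cube())
x = Tensor(np.array([1.0, 2.0]).astype(np.float32))
print(grad_fn(x))  # f'(x) = 3x^2 -> [ 3. 12.]
```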
The goal of MindSpore automatic parallelism is to build a training method that combines data parallelism, model parallelism, and hybrid parallelism. It can automatically select a minimum-cost model-splitting strategy to achieve automatic distributed parallel training.
At present, MindSpore uses a fine-grained parallel strategy of splitting operators; that is, each operator in the graph is split across the device cluster so that it runs in parallel. The splitting strategies involved can be very complicated, but as a developer who favors Pythonic code you don't need to care about the underlying implementation, as long as the top-level API computes efficiently; a hedged configuration sketch follows.
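The sketch below is an assumption-laden illustration, not a complete recipe: it shows how the cost-model-driven strategy search is switched on, and assumes a multi-device Ascend or GPU setup whose processes have already been launched with the appropriate distributed tooling; the `device_num=8` value is purely illustrative:

```python
import mindspore.context as context
from mindspore.communication.management import init

# Initialize the communication backend for this process
# (requires the distributed environment to be set up beforehand).
init()

# Ask MindSpore to search for a minimum-cost operator-splitting strategy
# rather than hand-writing data/model parallelism.
context.set_auto_parallel_context(
    parallel_mode=context.ParallelMode.AUTO_PARALLEL,
    device_num=8,  # illustrative assumption: an 8-device cluster
)
```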
MindSpore offers build options across multiple backends:

Hardware Platform | Operating System | Status |
---|---|---|
Ascend910 | Ubuntu-x86 | ✔️ |
Ascend910 | Ubuntu-aarch64 | ✔️ |
Ascend910 | EulerOS-aarch64 | ✔️ |
Ascend910 | CentOS-x86 | ✔️ |
Ascend910 | CentOS-aarch64 | ✔️ |
GPU CUDA 10.1 | Ubuntu-x86 | ✔️ |
CPU | Ubuntu-x86 | ✔️ |
CPU | Ubuntu-aarch64 | ✔️ |
CPU | Windows-x86 | ✔️ |
For installation using `pip`, take the `CPU` and `Ubuntu-x86` build version as an example:
Download the whl package from the MindSpore download page, and install it.
```bash
pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/1.2.0-rc1/MindSpore/cpu/ubuntu_x86/mindspore-1.2.0rc1-cp37-cp37m-linux_x86_64.whl
```
Run the following Python code to verify the installation.
```python
import numpy as np
import mindspore.context as context
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.ops import operations as P

context.set_context(mode=context.GRAPH_MODE, device_target="CPU")

class Mul(nn.Cell):
    """Element-wise multiplication wrapped in a Cell."""
    def __init__(self):
        super(Mul, self).__init__()
        self.mul = P.Mul()

    def construct(self, x, y):
        return self.mul(x, y)

x = Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))
y = Tensor(np.array([4.0, 5.0, 6.0]).astype(np.float32))

mul = Mul()
print(mul(x, y))
```

Expected output:

```text
[ 4. 10. 18.]
```
Use the `pip` installation method to install MindSpore in different environments. Refer to the following documents.
Use the source code compilation method to install MindSpore in different environments. Refer to the following documents.
The MindSpore Docker image is hosted on Docker Hub; the following containerized build options are currently supported:
Hardware Platform | Docker Image Repository | Tag | Description |
---|---|---|---|
CPU | mindspore/mindspore-cpu | x.y.z | Production environment with pre-installed MindSpore x.y.z CPU release. |
CPU | mindspore/mindspore-cpu | devel | Development environment provided to build MindSpore (with CPU backend) from the source; refer to https://www.mindspore.cn/install/en for installation details. |
CPU | mindspore/mindspore-cpu | runtime | Runtime environment provided to install MindSpore binary package with CPU backend. |
GPU | mindspore/mindspore-gpu | x.y.z | Production environment with pre-installed MindSpore x.y.z GPU release. |
GPU | mindspore/mindspore-gpu | devel | Development environment provided to build MindSpore (with GPU CUDA 10.1 backend) from the source; refer to https://www.mindspore.cn/install/en for installation details. |
GPU | mindspore/mindspore-gpu | runtime | Runtime environment provided to install MindSpore binary package with GPU CUDA 10.1 backend. |
NOTICE: For the GPU `devel` Docker image, it is NOT suggested to directly install the whl package after building from the source; instead, we strongly RECOMMEND transferring and installing the whl package inside the GPU `runtime` Docker image.
CPU

For the `CPU` backend, you can directly pull and run the latest stable image using the commands below:

```bash
docker pull mindspore/mindspore-cpu:1.1.0
docker run -it mindspore/mindspore-cpu:1.1.0 /bin/bash
```
GPU

For the `GPU` backend, please make sure the `nvidia-container-toolkit` has been installed in advance. Here are some installation guidelines for `Ubuntu` users:
```bash
DISTRIBUTION=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$DISTRIBUTION/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit nvidia-docker2
sudo systemctl restart docker
```
Then edit the file `daemon.json`:

```bash
$ vim /etc/docker/daemon.json
```

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```
Restart Docker again:

```bash
sudo systemctl daemon-reload
sudo systemctl restart docker
```
Then you can pull and run the latest stable image using the commands below:

```bash
docker pull mindspore/mindspore-gpu:1.1.0
docker run -it -v /dev/shm:/dev/shm --runtime=nvidia --privileged=true mindspore/mindspore-gpu:1.1.0 /bin/bash
```
To test whether the Docker image works, please execute the Python code below and check the output:
```python
import numpy as np
import mindspore.context as context
from mindspore import Tensor
from mindspore.ops import functional as F

context.set_context(mode=context.PYNATIVE_MODE, device_target="GPU")

# Element-wise addition of two all-ones tensors on the GPU.
x = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
y = Tensor(np.ones([1, 3, 3, 4]).astype(np.float32))
print(F.tensor_add(x, y))
```

Expected output:

```text
[[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]],
[[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.],
[ 2. 2. 2. 2.]]]
```
If you want to learn more about the building process of MindSpore docker images,
please check out docker repo for the details.
See the Quick Start to implement image classification.

For more details about the installation guide, tutorials, and APIs, please see the User Documentation.
Check out how MindSpore Open Governance works.
#mindspore (only for meeting-minutes logging purposes)

Contributions are welcome. See our Contributor Wiki for more details.
Project stable branches will be in one of the following states:
State | Time frame | Summary |
---|---|---|
Planning | 1 - 3 months | Features are under planning. |
Development | 3 months | Features are under development. |
Maintained | 6 - 12 months | All bugfixes are appropriate. Releases produced. |
Unmaintained | 0 - 3 months | All bugfixes are appropriate. No Maintainers and No Releases produced. |
End Of Life (EOL) | N/A | Branch no longer accepting changes. |
Branch | Status | Initial Release Date | Next Phase | EOL Date |
---|---|---|---|---|
r2.2 | Maintained | 2023-10-18 | Unmaintained 2024-10-18 estimated | |
r2.1 | Maintained | 2023-07-29 | Unmaintained 2024-07-29 estimated | |
r2.0 | Maintained | 2023-06-15 | Unmaintained 2024-06-15 estimated | |
r1.10 | End Of Life | 2023-02-02 | | 2024-02-02 |
r1.9 | End Of Life | 2022-10-26 | | 2023-10-26 |
r1.8 | End Of Life | 2022-07-29 | | 2023-07-29 |
r1.7 | End Of Life | 2022-04-29 | | 2023-04-29 |
r1.6 | End Of Life | 2022-01-29 | | 2023-01-29 |
r1.5 | End Of Life | 2021-10-15 | | 2022-10-15 |
r1.4 | End Of Life | 2021-08-15 | | 2022-08-15 |
r1.3 | End Of Life | 2021-07-15 | | 2022-07-15 |
r1.2 | End Of Life | 2021-04-15 | | 2022-04-29 |
r1.1 | End Of Life | 2020-12-31 | | 2021-09-30 |
r1.0 | End Of Life | 2020-09-24 | | 2021-07-30 |
r0.7 | End Of Life | 2020-08-31 | | 2021-02-28 |
r0.6 | End Of Life | 2020-07-31 | | 2020-12-30 |
r0.5 | End Of Life | 2020-06-30 | | 2021-06-30 |
r0.3 | End Of Life | 2020-05-31 | | 2020-09-30 |
r0.2 | End Of Life | 2020-04-30 | | 2020-08-31 |
r0.1 | End Of Life | 2020-03-28 | | 2020-06-30 |
For the release notes, see our RELEASE.