#### MindConverter Provides Two Modes for PyTorch:
1. **Abstract Syntax Tree (AST) based conversion**: Using the `--in_file` argument enables the AST mode.
2. **Computational Graph based conversion**: Using the `--model_file` and `--shape` arguments enables the Graph mode.
> The AST mode will be enabled if both `--in_file` and `--model_file` are specified.
For the Graph mode, `--shape` is mandatory.
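For example, a minimal sketch of the two invocations (file paths and the input shape are placeholders; other optional arguments are omitted):

```bash
# AST mode: convert an existing PyTorch script.
mindconverter --in_file /path/to/model.py

# Graph mode: convert a serialized PyTorch model; --shape gives the model input shape.
mindconverter --model_file /path/to/model.pth --shape 1,3,224,224
```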
> AST mode is not supported for TensorFlow, only computational graph based mode is available.
## Scenario
MindConverter provides two modes for different migration demands.
> 2. The Dropout operator will be lost after conversion because the inference mode is used to load the PyTorch or TensorFlow model. Manual re-implementation is necessary.
> 3. The Graph-based mode will be continuously developed and optimized with further updates.
Supported models list (models in the table below have been tested based on PyTorch 1.14.0 and TensorFlow 1.15.0, x86 Ubuntu released version):
| Supported Model | PyTorch Script | TensorFlow Script |
| :----: | :----: | :----: |
#### TensorFlow Model Scripts Conversion
To use TensorFlow model script migration, you need to export the TensorFlow model to the Pb format first and obtain the names of the model's input node and output node. You can refer to the following method to export the model and obtain the node names:
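A minimal sketch of such an export, assuming TensorFlow 1.15 and a Keras application model as a stand-in (the choice of ResNet50 is an assumption; the save location follows the `/home/user/xxx/frozen_model.pb` path mentioned below):

```python
import tensorflow as tf

tf.compat.v1.keras.backend.set_learning_phase(0)             # load the model in inference mode
model = tf.keras.applications.ResNet50(weights='imagenet')   # stand-in model

# Record the node names that MindConverter will need later.
INPUT_NODE = model.input.name      # e.g. "input_1:0"
OUTPUT_NODE = model.output.name    # e.g. "predictions/Softmax:0"
print(INPUT_NODE, OUTPUT_NODE)

# Freeze the variables into constants and write the graph as a Pb file.
sess = tf.compat.v1.keras.backend.get_session()
frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [out.op.name for out in model.outputs])
tf.io.write_graph(frozen_graph, "/home/user/xxx", "frozen_model.pb", as_text=False)
```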
After the above code is executed, the model will be saved to `/home/user/xxx/frozen_model.pb`. `INPUT_NODE` can be passed to `--input_nodes`, and `OUTPUT_NODE` is the corresponding `--output_nodes`.
Suppose the input node name is `input_1:0`, the output node name is `predictions/Softmax:0`, and the input shape of the model is `1,224,224,3`; then the following command can be used to generate the script:
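A sketch of that command, limited to the arguments introduced in this section; any arguments controlling where the script and report are written are omitted here:

```bash
mindconverter --model_file /home/user/xxx/frozen_model.pb \
              --shape 1,224,224,3 \
              --input_nodes input_1:0 \
              --output_nodes predictions/Softmax:0
```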
After execution, the generated MindSpore script and the conversion report can be found in the corresponding output directory.
The format of the conversion report generated by the graph-based script generation scheme is the same as that of the AST scheme. However, since the graph-based scheme is a generative method, the original PyTorch script is not referenced during conversion. Therefore, the code line and column numbers in the generated conversion report refer to the generated script.
In addition, for operators that are not converted successfully, the input and output shapes of the node's tensors will be identified in the code by `input_shape` and `output_shape`. For an example, please refer to [PyTorch Model Scripts Conversion](#manual_modify).
## Caution
Classes and functions that can't be converted (these need to be re-implemented manually; see the sketch after this list):
1. The use of the `.shape`, `.ndim` and `.dtype` members of `torch.Tensor`.
2. `torch.nn.AdaptiveXXXPoolXd` and `torch.nn.functional.adaptive_XXX_poolXd()`.
3. `torch.nn.functional.Dropout`.
4. `torch.unsqueeze()` and `torch.Tensor.unsqueeze()`.
5. `torch.chunk()` and `torch.Tensor.chunk()`.
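A minimal sketch of two such manual replacements, assuming the MindSpore 1.x `ops` primitives (the input tensor is a placeholder):

```python
import numpy as np
import mindspore
from mindspore import Tensor, ops

x = Tensor(np.ones((2, 4)), mindspore.float32)   # placeholder input

# torch.unsqueeze(x, 0) can be rewritten with ops.ExpandDims.
expand_dims = ops.ExpandDims()
expanded = expand_dims(x, 0)                     # shape: (1, 2, 4)

# torch.chunk(x, 2, dim=1) can be rewritten with ops.Split.
split = ops.Split(axis=1, output_num=2)
parts = split(x)                                 # tuple of two (2, 2) tensors
```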
### Situation2
Subclassing from the subclasses of `nn.Module`, e.g. (code snippet from `torchvision.models.mobilenet`):
```python
from torch import nn


class ConvBNReLU(nn.Sequential):
    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
        padding = (kernel_size - 1) // 2
        super(ConvBNReLU, self).__init__(
            nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False),
            nn.BatchNorm2d(out_planes),
            nn.ReLU6(inplace=True)
        )
```
Q1. `terminate called after throwing an instance of 'std::system_error', what(): Resource temporarily unavailable, Aborted (core dumped)`:
> Answer: This problem is caused by TensorFlow. The first step of the conversion process is loading the TensorFlow model into memory with the TensorFlow module, at which point TensorFlow starts to request the resources it needs. When a required resource is unavailable, for example because the maximum process number allowed by the Linux system has been exceeded, TensorFlow raises an error from its C/C++ layer. For more detail, please refer to the TensorFlow official repository. There are some known issues for reference only: