
!1190 Update mindconverter readme

From: @liuchongming74
Reviewed-by: @ouwenchang
Signed-off-by:
tags/v1.2.0-rc1
mindspore-ci-bot committed 4 years ago
parent commit 55ccd5f23d
2 changed files with 17 additions and 0 deletions

  1. mindinsight/mindconverter/README.md (+8, -0)
  2. mindinsight/mindconverter/README_CN.md (+9, -0)

mindinsight/mindconverter/README.md (+8, -0)

@@ -394,6 +394,14 @@ Q2. Can MindConverter run on ARM platform?
Q3. Why did I get the message `Error detail: [NodeInputMissing] ...` when converting a PyTorch model?
> Answer: For PyTorch models, node parsing may fail if operations from `torch.nn.functional.xxx`, `torch.xxx`, or `torch.Tensor.xxx` are used. It is better to replace those operations with their `torch.nn.xxx` equivalents, as in the sketch below.
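
A minimal sketch of such a replacement, assuming PyTorch is installed (the module names `NetFunctional` and `NetModule` are hypothetical, for illustration only):

```python
import torch
import torch.nn as nn


class NetFunctional(nn.Module):
    """Harder to parse: the activation is a bare functional call."""

    def forward(self, x):
        return torch.nn.functional.relu(x)


class NetModule(nn.Module):
    """Preferred: the same activation expressed as a `torch.nn` layer."""

    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x)


if __name__ == "__main__":
    x = torch.randn(1, 3)
    print(NetModule()(x))
```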


Q4. Why does the conversion take a long time (more than 10 minutes) even though the model is not large?
> Answer: When converting, MindConverter needs protobuf to deserialize the model file. Please make sure that the protobuf installed in the Python environment is backed by the C++ implementation. The check is shown below. If the output is "python", install the C++-backed Python protobuf (download the protobuf source code, enter the "python" subdirectory of the source, and install it with `python setup.py install --cpp_implementation`). If the output is "cpp" and the conversion still takes a long time, set the environment variable `export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp` before conversion.

```python
from google.protobuf.internal import api_implementation
print(api_implementation.Type())
```
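
For completeness, a minimal sketch of flipping the same switch from inside a Python process (this only affects that process, so for command line conversion the `export` above still applies; the variable must be set before the first protobuf import):

```python
import os

# Must be set before the first `google.protobuf` import in this process.
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "cpp"

from google.protobuf.internal import api_implementation

# Expected to print "cpp" once the C++-backed protobuf is installed.
print(api_implementation.Type())
```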

## Appendix


### TensorFlow Pb model exporting


mindinsight/mindconverter/README_CN.md (+9, -0)

@@ -407,6 +407,15 @@ Q3. Why does converting a PyTorch model report `Error detail: [NodeInputMissing] ...`?


> Answer: For PyTorch models, node parsing may fail if `torch.nn.functional.xxx`, `torch.xxx`, or `torch.Tensor.xxx` operators are used in the network; users need to replace them manually with `torch.nn` layer operators.


Q4. Why does model conversion with MindConverter take a long time (more than ten minutes) even though the model is not large?

> Answer: When converting, MindConverter needs protobuf to deserialize the model file. Please make sure that the protobuf installed in the Python environment is backed by the C++ implementation. The check is shown below. If the output is "python", install the C++-backed Python protobuf (download the protobuf source code, enter the "python" subdirectory of the source, and install it with `python setup.py install --cpp_implementation`). If the output is "cpp" and the conversion still takes a long time, set the environment variable `export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp` before conversion.

```python
from google.protobuf.internal import api_implementation
print(api_implementation.Type())
```

## Appendix


### TensorFlow Pb model exporting

