# MindInsight Profiler Introduction

[View the Chinese version](./README_CN.md)

MindInsight Profiler is a performance analysis tool for MindSpore. It helps users analyse and optimize the performance of neural networks.

The Profiler enables users to:

* Start/finish profiling the neural network by adding two simple Profiler APIs to the script.
* Analyse the performance of the operators in the neural network.

## Add profiling code to MindSpore script

To enable profiling in MindSpore, the MindInsight Profiler APIs should be added to the script:

1. Import the Profiler

```
from mindspore.profiler import Profiler
```
2. Initialize the Profiler between setting the context and initializing the network and HCCL.

Example:

```
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=int(os.environ["DEVICE_ID"]))
profiler = Profiler(output_path="./data", is_detail=True, is_show_op_path=False, subgraph='all')
net = Net()
```
Parameters of the Profiler include the following (a brief example follows the list):

- subgraph (str): Which subgraph to monitor and analyse; can be 'all', 'Default' or 'Gradients'.
- is_detail (bool): Whether to show profiling data at the op_instance level; only the optype level is shown if False.
- is_show_op_path (bool): Whether to save the full path for each op instance.
- output_path (str): Output data path.
- optypes_to_deal (list): Op type names whose data should be collected and analysed; all op types are dealt with if empty.
- optypes_not_deal (list): Op type names whose data will not be collected and analysed.
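As an illustration of these parameters, a hedged sketch of an initialization that skips a few operator types; the op type names and output path are only examples, not required values:

```
# Sketch only: detailed profiling of the 'Default' subgraph, skipping the listed
# op types; the op type names below are illustrative.
profiler = Profiler(output_path="./profiler_data",
                    is_detail=True,
                    is_show_op_path=True,
                    subgraph='Default',
                    optypes_not_deal=['Cast', 'TransData'])
```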
3. Call `Profiler.analyse()` at the end of the program

`Profiler.analyse()` will collect the profiling data and generate the analysis results.

After training, we can analyse the performance through the MindInsight UI.
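Putting the three steps together, a minimal sketch of a training script might look as follows. `Net`, the loss/optimizer choices and `train_dataset` are placeholders for the user's own definitions, and import paths may differ slightly between MindSpore versions:

```
import os

from mindspore import context, nn, Model
from mindspore.profiler import Profiler

# Steps 1 and 2: set the context, then create the Profiler before building the network.
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend",
                    device_id=int(os.environ["DEVICE_ID"]))
profiler = Profiler(output_path="./data", is_detail=True)

net = Net()                                            # placeholder network
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)   # placeholder loss
opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)   # placeholder optimizer
model = Model(net, loss, opt)

model.train(1, train_dataset)                          # placeholder dataset

# Step 3: collect the profiling data and generate the analysis results.
profiler.analyse()
```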
## Performance Analysis

Users can access the Performance Profiler by selecting a specific training from the training list and clicking the performance profiling link.

![performance_overall.png](./images/performance_overall.png)

Figure 1: Overall Performance

Figure 1 displays the overall performance of the training, including the overall data of Step Trace, Operator Performance, MindData Performance and Timeline. Users can click the corresponding link to check the details of each component. Besides, the MindInsight Profiler can analyse the performance data, and the assistant on the left shows performance tuning suggestions for this training.
### Step Trace Analysis

The Step Trace component is used to show the general performance of the stages in the training. Step Trace divides the training into several stages:
Step Gap, Forward/Backward Propagation, All Reduce and Parameter Update. It shows the execution time for each stage and helps to find the bottleneck
stage quickly.

![step_trace.png](./images/step_trace.png)

Figure 2: Step Trace Analysis

Figure 2 displays the Step Trace page. The Step Trace detail shows the start/finish time for each stage. By default, it shows the average time over all the steps. Users
can also choose a specific step to see its step trace statistics. The graphs at the bottom of the page show how the execution time of Step Gap, Forward/Backward Propagation and
Step Tail changes across different steps, which helps users decide whether the performance of some stages could be further optimized.

*Notice:* MindSpore chooses the Forward Start/Backward End operators automatically; the names of the two operators are shown on the page. The Profiler does not guarantee that these two operators are
always chosen as the user expects. Users can choose the two operators according to the execution graph and specify them manually by setting the `FP_POINT` and `BP_POINT` environment variables.
For example: `export FP_POINT=fp32_vars/conv2d/conv2Dfp32_vars/BatchNorm/FusedBatchNorm_Reduce` and `export BP_POINT=loss_scale/gradients/AddN_70`.
### Operator Performance Analysis

The operator performance analysis component is used to display the execution time of the operators during MindSpore runtime.

![op_type_statistics.png](./images/op_type_statistics.PNG)

Figure 3: Statistics for Operator Types

Figure 3 displays the statistics for the operator types, including:

- Choose a pie or bar graph to show the proportion of time occupied by each operator type. The time of one operator type is calculated by accumulating the execution time of the operators belonging to this type.
- Display the top 20 operator types with the longest execution time, showing the proportion and execution time (ms) of each operator type.

![op_statistics.png](./images/op_statistics.PNG)

Figure 4: Statistics for Operators

Figure 4 displays the statistics table for the operators, including:

- Choose All: Display statistics for the operators, including operator name, type, execution time, full scope time, information, etc. The table is sorted by execution time by default.
- Choose Type: Display statistics for the operator types, including operator type name, execution time, execution frequency and proportion of total time. Users can click on each line to query all the operators belonging to this type.
- Search: The search box in the upper right corner supports fuzzy search for operators/operator types.
### MindData Performance Analysis

The MindData performance analysis component is used to analyse the execution of the data input pipeline for the training. The data input pipeline can be divided into three stages:
the data process pipeline, data transfer from host to device, and data fetch on device. The component analyses the performance of each stage in detail and displays the results.

![minddata_profile.png](./images/minddata_profile.png)

Figure 5: MindData Performance Analysis

Figure 5 displays the page of the MindData performance analysis component. It consists of two tabs: Step Gap and Data Process.

The Step Gap tab is used to analyse whether there is a performance bottleneck in the three stages. Conclusions can be drawn from the data queue graphs:

- The data queue size stands for the queue length when the training fetches data from the queue on the device. If the data queue size is 0, the training will wait until there is data in
the queue; if the data queue size is above 0, the training can get data quickly, which means MindData is not the bottleneck for this training step.
- The host queue size can be used to infer the speed of data processing and data transfer. If the host queue size is 0, it means the data process stage needs to be sped up.
- If the host queue remains large and the data queue size is very small, data transfer may be the bottleneck.
![data_op_profile.png](./images/data_op_profile.png)

Figure 6: Data Process Pipeline Analysis

Figure 6 displays the page of data process pipeline analysis. The data queues are used to exchange data between the MindData operators. The data size of the queues reflects the
data consumption speed of the operators and can be used to infer the bottleneck operator. The queue usage percentage stands for the average data size in the queue divided by the maximum queue size; the higher
the usage percentage, the more data accumulates in the queue. The graph at the bottom of the page shows the MindData pipeline operators with their data queues. The user can click one queue to see how
the data size changes over time, and which operators are connected to the queue. The data process pipeline can be analysed as follows:

- When the input queue usage percentage of one operator is high, and the output queue usage percentage is low, the operator may be the bottleneck;
- For the leftmost operator, if the usage percentage of the queues on the right are all low, the operator may be the bottleneck;
- For the rightmost operator, if the usage percentage of the queues on the left are all high, the operator may be the bottleneck.

To optimize the performance of MindData operators, there are some suggestions (see the sketch after this list):

- If the `Dataset` operator is the bottleneck, try to increase `num_parallel_workers`;
- If a `GeneratorOp` type operator is the bottleneck, try to increase `num_parallel_workers` or replace the operator with `MindRecordDataset`;
- If a `MapOp` type operator is the bottleneck, try to increase `num_parallel_workers`; if it is a Python operator, try to optimize the training script;
- If a `BatchOp` type operator is the bottleneck, try to adjust `prefetch_size`.
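As a rough illustration of these suggestions, the sketch below shows where `num_parallel_workers` and the prefetch size are typically set in a MindSpore dataset pipeline; the file path, column names and numeric values are placeholders, and module paths may differ between MindSpore versions:

```
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C

# A larger prefetch size can help when BatchOp is the bottleneck (value is illustrative).
ds.config.set_prefetch_size(16)

# Placeholder MindRecord file and column names.
dataset = ds.MindDataset("/path/to/data.mindrecord", columns_list=["image", "label"],
                         num_parallel_workers=8)

# More parallel workers for the map stage if MapOp is the bottleneck.
dataset = dataset.map(operations=[C.Resize((224, 224))], input_columns=["image"],
                      num_parallel_workers=8)

dataset = dataset.batch(32, num_parallel_workers=8)
```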
### Timeline Analysis

The Timeline component can display:

- Which device the operators (AICore/AICPU operators) are executed on;
- The MindSpore stream split strategy for this neural network;
- The time of tasks executed on the device.

How to view the timeline:

To view the detailed information of the timeline, click the "Download" button to save the file with the timeline information locally, and then view it through a tracing tool.
We recommend the Chrome built-in tool chrome://tracing, or the Perfetto tool: https://ui.perfetto.dev/#!viewer.

- Select one of the tools mentioned above, enter the address in the browser and press Enter;
- After entering the page, click the button to load the file to view the timeline.
  - For chrome tracing, use the "Load" button in the upper left corner.
  - For Perfetto, use "Open trace file" in the left column.

Users can get the most detailed information from the Timeline:

- At a high level, users can analyse whether the stream split strategy can be optimized and whether the step tail is too long;
- At a low level, users can analyse the execution time of each operator, etc.
![timeline.png](./images/timeline.png)

Figure 7: Timeline Analysis

The Timeline consists of the following parts:

- *Device and Stream List*: Shows the stream list on each device. Each stream consists of a series of tasks. One rectangle stands for one task, and its area stands for the execution time of the task;
- *The Operator Information*: When a task is clicked, the corresponding operator of this task is shown at the bottom.

W/A/S/D can be used to zoom in and out of the timeline graph.
## Limitations

The Profiler currently has the following limitations:

* Only programs running on the Ascend chip are supported.
* To limit the amount of data generated by the Profiler, it is suggested that for a large neural network the number of profiled steps should be below 10.
* Parsing the Timeline data is time consuming, and data from several steps is usually enough for analysis. In order to speed up data parsing and UI
display, the Profiler will show at most 20MB of data (which contains more than 10 steps of information for large networks).