Compare commits


109 Commits
master ... r0.6

Author SHA1 Message Date
  mindspore-ci-bot 309fe331cc !757 update securec repository link for r0.6 5 years ago
  liangyongxiong 66e3a10ba6 update securec repository link 5 years ago
  mindspore-ci-bot 37dd464ae9 !500 update url of readme, from master into r0.6 5 years ago
  gaocongli 9983b01469 update the urls in readme,from master into r0.6 5 years ago
  mindspore-ci-bot 276b0bee8e !498 Fix the available hints 5 years ago
  Li Hongzhang 985ae60ce8 fix the available hints 5 years ago
  mindspore-ci-bot 5f86a9f056 !496 UI display timeline detail of profile dashboard 5 years ago
  WeiFeng-mindinsight b78d33f12a UI display timeline detail of profile dashboard 5 years ago
  mindspore-ci-bot 4bc0887c5d !494 append r0.6 description to RELEASE.md for r0.6 branch 5 years ago
  liangyongxiong 08857cb03d append r0.6 description to RELEASE.md 5 years ago
  mindspore-ci-bot b4b4129e3d !492 UI fix multiSelect component limit value does not effect 5 years ago
  xiayifan 50c43b01df UI fix multiSelect component limit value does not effect 5 years ago
  mindspore-ci-bot 8846c1d4bc !456 update readme for profiler 5 years ago
  mindspore-ci-bot c98efd23d6 !488 1. add the limitation of the number of tag in tensor visualization; update the max step per tensor tag to 20; 3. support query one train_job in the interface of train_jobs. 5 years ago
  mindspore-ci-bot 867e34126f !487 UI add loading icon in training dashboard 5 years ago
  wangshuide2020 a542f004ef 1. add the limitation of the number of tag in tensor visualization; 2. update the max step per tensor tag to 20; 3. support query one train_job in the interface of train_jobs. 5 years ago
  xiayifan efb6bab284 UI add loading icon in training dashboard 5 years ago
  mindspore-ci-bot f95ac403e2 !484 ui display no data modification 5 years ago
  fengxuefeng aa20fcbf3e ui display no data modification 5 years ago
  mindspore-ci-bot cd65f2516b !483 save the choose effect for memory in hardware resource. 5 years ago
  mindspore-ci-bot 93713d7ceb !479 UI fix table display when toggle full-screen with autoupdate open in tensor page 5 years ago
  mindspore-ci-bot 582463f78b !480 fix dsmi_get_device_utilization_rate 5 years ago
  fengxuefeng c46d6c2398 save the choose effect for memory 5 years ago
  Li Hongzhang 70116923ba fix dsmi_get_device_utilization_rate 5 years ago
  xiayifan 43e3c7cb47 UI fix the top row form tensor tables displays incorrectly form 5 years ago
  mindspore-ci-bot 3a7b4fa16c !467 Fix several bugs 5 years ago
  Li Hongzhang 02217977b4 fix several fixes 5 years ago
  mindspore-ci-bot 77be6b54db !469 fix The timer refresh edit box cannot clear 0 5 years ago
  mindspore-ci-bot cfc92d8017 !472 UI fix bug of graph that click more to modify aggregation scope page display error 5 years ago
  mindspore-ci-bot 22808ac6f8 !474 UI fix hardware issue 5 years ago
  mindspore-ci-bot eb61e7ff98 !476 new a thread to load detail info 5 years ago
  luopengting ca56b3b89b mainly to new a thread to load detail info 5 years ago
  ph 29ac51a106 fix hardware issue: 1. Hard to distinguish between specific states for health and available;2.Cannot uncheck after selecting a single cpu at U;3.its better not display ascend usage in hardware resource for gpu env 5 years ago
  WeiFeng-mindinsight 25e6492b40 UI fix bug of graph that click more to modify aggregation scope page display error 5 years ago
  fengxuefeng f3ea15ceed fix The timer refresh edit box cannot clear 0 5 years ago
  mindspore-ci-bot 46099e8290 !465 UI Modify the error message display mode of the tensor 5 years ago
  mindspore-ci-bot 27b25267fe !463 UI fix bug of step trace page that graph covers others and graph data show clearly 5 years ago
  mindspore-ci-bot fd8bd4f61b !461 Fix the timeout decorator to support different args and lock 5 years ago
  WeiFeng-mindinsight 5518cb7d75 UI fix bug of step trace page that graph covers others and graph data show clearly 5 years ago
  Li Hongzhang 6b43959314 fix npu timeout mechanism 5 years ago
  yelihua c62bae1f26 update readme 5 years ago
  mindspore-ci-bot c35b45ffae !454 Pre-triggering the querying in the background and store for later use 5 years ago
  Li Hongzhang bb93390ab4 pretrigger the npu queryings and set timeout 5 years ago
  mindspore-ci-bot fdcc25cc8e !453 Upgrade VERSION to 0.6.0 5 years ago
  Li Hongzhang 3155b6751a upgrade VERSION to 0.6.0 5 years ago
  mindspore-ci-bot 51024d3b6e !451 use forkserver multiprocess context to avoid forking child process with locked stream resource(eg stdout) 5 years ago
  mindspore-ci-bot 16d94309d0 !450 Remove bytes2human which not presented in psutil 5.6.1 5 years ago
  wenkai 70462daedb use forkserver multiprocess context to avoid forking child process with locked stream resource(eg stdout) 5 years ago
  Li Hongzhang 8dbbf2288b rm bytes2human which not presented in psutil 5.6.1 5 years ago
  mindspore-ci-bot a589d02cfd !446 Parse next lineage file when parsing a summary with an Exception 5 years ago
  luopengting f92e47b943 fix lineage parsing, parse next summary when there is an Exception 5 years ago
  mindspore-ci-bot 68b682c95c !442 convert the object of `NAN`、`-INF`and `INF` to string so that browser can view these tensor data 5 years ago
  mindspore-ci-bot 5a32f4e00a !447 limit the tensor count in one step and one tag and the length of event string. 5 years ago
  wangshuide2020 0d9af857fa limit the tensor count in one step and one tag and the length of event string. 5 years ago
  mindspore-ci-bot 5b6de4a67b !441 Remove profiler user interface and parser. 5 years ago
  mindspore-ci-bot 9c6ae7c026 !445 fix sometimes deserialized protobuf data cannot be pickled to be sent to another process 5 years ago
  mindspore-ci-bot 08dc87def0 !443 Optimize number of max processes used for computing and limit number of thread running load_data_in_thread() 5 years ago
  wenkai 0877c6f7b1 Optimize number of max processes used for computing and limit number of thread running load_data_in_thread() 5 years ago
  mindspore-ci-bot a5cabe8ca7 !440 Change the mindinsight multiprocessing computing code to use a unified manager, add new features 5 years ago
  wenkai 902972e3eb fix sometimes deserialized protobuf data cannot be pickled to be sent to another process. 5 years ago
  mindspore-ci-bot 933bf5940c !444 Table text is displayed in the center 5 years ago
  mindspore-ci-bot 8e0d7de8bd !427 Hardware resource monitor API for getting Ascend, CPU, memory metrics 5 years ago
  fengxuefeng b4a6a036f5 Table text is displayed in the center 5 years ago
  wenkai 26fabf4770 Refactor the mindinsight multiprocessing computing code to use a unified manager. 5 years ago
  yuximiao d3cc7a89a3 remove profiler user interface. 5 years ago
  Li Hongzhang 3da4d71dff add the resource monitor api 5 years ago
  wangshuide2020 4ab7af0c5f bug fix. convert the object of `NAN`、`-INF`and `INF` to string so that browser can show these tensor data. 5 years ago
  mindspore-ci-bot f674ae3e4c !439 modify operator full screen display problem 5 years ago
  fengxuefeng cf2e811f57 Modify the bug of operator full screen display 5 years ago
  mindspore-ci-bot 008bdf5632 !437 Use multiple processes to calc events 5 years ago
  wangshuide2020 7877f33b70 Use multiple processes to calc events. 5 years ago
  mindspore-ci-bot 4211e0ecca !438 Modify the visual tips and icons of hardware resources 5 years ago
  fengxuefeng dce99fab27 Modify prompts and icons 5 years ago
  mindspore-ci-bot 6dd5698025 !436 give the costing time an acurate unit 5 years ago
  mindspore-ci-bot 174747be93 !434 fix step trace bug that click confirm button show error message when step input box is empty 5 years ago
  mindspore-ci-bot 8464784148 !435 UI display the error message in table area 5 years ago
  fengxuefeng 4e3213a842 give the costing time an acurate unit 5 years ago
  WeiFeng-mindinsight 879558b170 fix step trace bug that click confirm button show error message when step input box is empty 5 years ago
  mindspore-ci-bot 3407444072 !431 Add hardware resource visualization 5 years ago
  mindspore-ci-bot 78edb521ef !430 fix graph bug that attributes of selected node show empty in content area 5 years ago
  mindspore-ci-bot 6323b4c3f1 !433 supplement the docstring of Histogram class 5 years ago
  mindspore-ci-bot 74e4593cdd !432 UI tensor bugfix 5 years ago
  wangshuide2020 5d9473de6d supplement the docstring of Histogram class. 5 years ago
  fengxuefeng b3ef94968a Add hardware resource visualization 5 years ago
  WeiFeng-mindinsight 1c4ec5e0b4 fix graph bug that attributes of selected node show empty in content area 5 years ago
  mindspore-ci-bot 9a63a44d27 !426 UI fix bug of data map that the attribute of selected node display empty when it is null 5 years ago
  mindspore-ci-bot 5a10c87b10 !429 classify reduce time info in step trace 5 years ago
  yelihua f8f5c7a987 classify reduce events into different types 5 years ago
  mindspore-ci-bot 6c82ec3ede !420 UI fixed the coordinate axis text is not clearly displayed 5 years ago
  mindspore-ci-bot ab4210e13b !422 The model traceability judgment includes the use of the includes method, which results in the index not being obtained 5 years ago
  mindspore-ci-bot 071e8f3b5d !425 rectify hyperlink in README.md 5 years ago
  WeiFeng-mindinsight 7811d5d965 UI fix bug of data map that the attribute of selected node display empty when it is null 5 years ago
  liangyongxiong f00106ab2b rectify hyperlink in README.md 5 years ago
  mindspore-ci-bot 1ef2205514 !424 profiler: fixed 500 error when timeline summary file dose not exist 5 years ago
  qin_jun_yan 5827e27f1f Judgment that the use of the includes method is incorrect and the index cannot be obtained 5 years ago
  zhangyunshu b4fbc0f274 profiler: fixed 500 error when there is no timeline summary file 5 years ago
  mindspore-ci-bot 4f6d8a3ca7 !411 Add copyright information in container.py 5 years ago
  mindspore-ci-bot ca05afb5aa !419 Modify this variable in data traceability to point to the problem 5 years ago
  qin_jun_yan 77562d16d0 Modify this variable in data traceability to point to the problem 5 years ago
  mindspore-ci-bot b17f7c1e55 !417 UI fix step trace bug that text covers others in svg 5 years ago
  mindspore-ci-bot 9a0150656c !415 UI add tensors feature (V2) 5 years ago
  mindspore-ci-bot c72416792c !416 New delete all tag threshold function 5 years ago
  mindspore-ci-bot 06b88ee59a !418 Model traceability code review problem modification, method split optimization and string hard coding rectification 5 years ago
  mindspore-ci-bot 995f7d64d2 !413 support tensor visualization 5 years ago
  qin_jun_yan 3e26964791 Model traceability code review problem modification, method split optimization and string hard coding rectification 5 years ago
  WeiFeng-mindinsight 1647882a3b UI fix step trace bug that text covers others 5 years ago
  wwx691809 8dc7da1fa8 1.New delete all tag threshold function 5 years ago
  wangshuide2020 e8ffeb70ef Support tensor visualization. 1.Tensor display in a table, it can support no more than two dimensions tensor visualization; 2.Tensor histogram visualization for all step in cache. 5 years ago
  maning bec7ffaca5 Add copyright information to container.py. 5 years ago
100 changed files with 68829 additions and 3902 deletions
1. .gitmodules (+1, -1)
2. README.md (+2, -2)
3. RELEASE.md (+20, -0)
4. mindinsight/_version.py (+1, -1)
5. mindinsight/backend/application.py (+1, -0)
6. mindinsight/backend/datavisual/__init__.py (+2, -0)
7. mindinsight/backend/datavisual/sysmetric_api.py (+39, -0)
8. mindinsight/backend/datavisual/task_manager_api.py (+2, -1)
9. mindinsight/backend/datavisual/train_visual_api.py (+20, -0)
10. mindinsight/backend/run.py (+5, -6)
11. mindinsight/conf/constants.py (+35, -1)
12. mindinsight/datavisual/common/enums.py (+8, -0)
13. mindinsight/datavisual/common/exceptions.py (+27, -0)
14. mindinsight/datavisual/data_transform/data_loader.py (+8, -3)
15. mindinsight/datavisual/data_transform/data_manager.py (+61, -36)
16. mindinsight/datavisual/data_transform/events_data.py (+4, -1)
17. mindinsight/datavisual/data_transform/histogram.py (+237, -0)
18. mindinsight/datavisual/data_transform/histogram_container.py (+15, -217)
19. mindinsight/datavisual/data_transform/image_container.py (+30, -0)
20. mindinsight/datavisual/data_transform/ms_data_loader.py (+87, -32)
21. mindinsight/datavisual/data_transform/reservoir.py (+7, -7)
22. mindinsight/datavisual/data_transform/tensor_container.py (+269, -0)
23. mindinsight/datavisual/processors/tensor_processor.py (+381, -0)
24. mindinsight/datavisual/processors/train_task_manager.py (+49, -27)
25. mindinsight/datavisual/utils/tools.py (+12, -0)
26. mindinsight/lineagemgr/cache_item_updater.py (+13, -10)
27. mindinsight/lineagemgr/lineage_parser.py (+9, -8)
28. mindinsight/profiler/README.md (+3, -3)
29. mindinsight/profiler/__init__.py (+1, -13)
30. mindinsight/profiler/analyser/analyser_factory.py (+1, -1)
31. mindinsight/profiler/analyser/timeline_analyser.py (+46, -8)
32. mindinsight/profiler/parser/aicpu_data_parser.py (+0, -182)
33. mindinsight/profiler/parser/container.py (+0, -99)
34. mindinsight/profiler/parser/framework_parser.py (+0, -598)
35. mindinsight/profiler/parser/hwts_log_parser.py (+0, -109)
36. mindinsight/profiler/parser/minddata_parser.py (+0, -93)
37. mindinsight/profiler/parser/minddata_pipeline_parser.py (+0, -289)
38. mindinsight/profiler/parser/optime_parser.py (+0, -247)
39. mindinsight/profiler/parser/step_trace_parser.py (+0, -312)
40. mindinsight/profiler/profiling.py (+0, -460)
41. mindinsight/scripts/stop.py (+22, -14)
42. mindinsight/sysmetric/collector/__init__.py (+41, -0)
43. mindinsight/sysmetric/collector/_collect_cpu.py (+37, -0)
44. mindinsight/sysmetric/collector/_collect_mem.py (+27, -0)
45. mindinsight/sysmetric/collector/_collect_npu.py (+420, -0)
46. mindinsight/sysmetric/common/__init__.py (+0, -0)
47. mindinsight/sysmetric/common/exceptions.py (+25, -0)
48. mindinsight/sysmetric/common/log.py (+5, -1)
49. mindinsight/ui/src/assets/images/cpu-bg.svg (+22, -0)
50. mindinsight/ui/src/components/autocomplete.vue (+0, -1)
51. mindinsight/ui/src/components/gridTableSimple.vue (+6, -2)
52. mindinsight/ui/src/components/header.vue (+123, -32)
53. mindinsight/ui/src/components/multiselectGroup.vue (+12, -11)
54. mindinsight/ui/src/locales/zh-cn.json (+47, -6)
55. mindinsight/ui/src/main.js (+3, -0)
56. mindinsight/ui/src/router.js (+4, -0)
57. mindinsight/ui/src/services/request-service.js (+6, -0)
58. mindinsight/ui/src/store.js (+17, -0)
59. mindinsight/ui/src/views/train-manage/data-map.vue (+5, -1)
60. mindinsight/ui/src/views/train-manage/data-traceback.vue (+45, -26)
61. mindinsight/ui/src/views/train-manage/graph.vue (+81, -64)
62. mindinsight/ui/src/views/train-manage/hardware-visual.vue (+981, -0)
63. mindinsight/ui/src/views/train-manage/model-traceback.vue (+84, -54)
64. mindinsight/ui/src/views/train-manage/operator.vue (+32, -4)
65. mindinsight/ui/src/views/train-manage/profiling-dashboard.vue (+230, -128)
66. mindinsight/ui/src/views/train-manage/scalar.vue (+212, -126)
67. mindinsight/ui/src/views/train-manage/step-trace.vue (+218, -149)
68. mindinsight/ui/src/views/train-manage/tensor.vue (+2, -1)
69. mindinsight/ui/src/views/train-manage/training-dashboard.vue (+42, -1)
70. mindinsight/utils/computing_resource_mgr.py (+265, -0)
71. mindinsight/utils/constant.py (+9, -0)
72. mindinsight/utils/log.py (+3, -0)
73. tests/st/func/profiler/conftest.py (+2, -22)
74. tests/st/func/profiler/test_analyse.py (+3, -18)
75. tests/st/func/profiler/test_minddata_pipeline_analyser.py (+2, -23)
76. tests/st/func/profiler/test_op_analyser.py (+2, -21)
77. tests/ut/backend/datavisual/conftest.py (+1, -14)
78. tests/ut/backend/datavisual/test_request_error.py (+0, -5)
79. tests/ut/datavisual/conftest.py (+0, -45)
80. tests/ut/datavisual/data_transform/test_data_loader.py (+4, -3)
81. tests/ut/datavisual/data_transform/test_data_manager.py (+3, -2)
82. tests/ut/datavisual/data_transform/test_histogram_container.py (+15, -15)
83. tests/ut/datavisual/data_transform/test_ms_data_loader.py (+4, -3)
84. tests/ut/datavisual/processors/test_train_task_manager.py (+1, -1)
85. tests/ut/profiler/parser/test_aicpu_parser.py (+0, -74)
86. tests/ut/profiler/parser/test_framework_parser.py (+0, -158)
87. tests/ut/profiler/parser/test_minddata_pipeline_parser.py (+0, -93)
88. tests/ut/sysmetric/__init__.py (+15, -14)
89. tests/ut/sysmetric/metrics_collector.py (+42, -0)
90. tests/utils/log_generators/tensor_log_generator.py (+110, -0)
91. tests/utils/log_operations.py (+3, -1)
92. tests/utils/resource/JOB3/Framework.host.vm.point.1.slice_0 (+6, -2)
93. tests/utils/resource/run_1/normal_run/profiler/aicore_intermediate_1_detail.csv (+200, -0)
94. tests/utils/resource/run_1/normal_run/profiler/aicore_intermediate_1_type.csv (+30, -0)
95. tests/utils/resource/run_1/normal_run/profiler/framework_raw_1.csv (+200, -0)
96. tests/utils/resource/run_1/normal_run/profiler/min_cycle_counter_1.txt (+1, -0)
97. tests/utils/resource/run_1/normal_run/profiler/minddata_pipeline_raw_1.csv (+5, -0)
98. tests/utils/resource/run_1/normal_run/profiler/output_format_data_hwts_1.txt (+62850, -0)
99. tests/utils/resource/run_1/normal_run/profiler/output_op_compute_time_1.txt (+203, -0)
100. tests/utils/resource/run_1/normal_run/profiler/output_op_compute_time_detail_1.txt (+705, -0)

.gitmodules (+1, -1)

@@ -1,3 +1,3 @@
[submodule "third_party/securec"]
path = third_party/securec
url = https://gitee.com/openeuler/bounds_checking_function.git
url = https://gitee.com/openeuler/libboundscheck.git

README.md (+2, -2)

@@ -105,11 +105,11 @@ See [Install MindInsight](https://www.mindspore.cn/install/en).

# QuickStart

See [guidance](https://www.mindspore.cn/tutorial/en/0.1.0-alpha/advanced_use/visualization_tutorials.html)
See [guidance](https://www.mindspore.cn/tutorial/en/r0.6/advanced_use/visualization_tutorials.html)

# Docs

See [API Reference](https://www.mindspore.cn/api/en/master/index.html)
See [API Reference](https://www.mindspore.cn/api/en/r0.6/index.html)

# Community



RELEASE.md (+20, -0)

@@ -1,5 +1,25 @@
## MindInsight

# Release 0.6.0-beta

## Major Features and Improvements
* Provide monitoring capabilities for Ascend AI processors and other hardware resources, including CPU and memory.
* Visualization of weight, gradient and other tensor data in model training.
* Provide tabular presentation of tensor data.
* Provide histograms to show the distribution of tensor data and its change over time.

## Bugfixes
* UI fix for the error message display mode of the tensor during real-time training. ([!465](https://gitee.com/mindspore/mindinsight/pulls/465))
* The summary file size is larger than max_file_size. ([!3481](https://gitee.com/mindspore/dashboard/projects/mindspore/mindspore/pulls/3481))
* Fix real-time training error when disk is full. ([!3058](https://gitee.com/mindspore/mindspore/pulls/3058))

## Thanks to our Contributors
Thanks goes to these wonderful people:

Congli Gao, Weifeng Huang, Zhenzhong Kou, Hongzhang Li, Longfei Li, Yongxiong Liang, Chongming Liu, Pengting Luo, Yanming Miao, Gongchang Ou, Yongxiu Qu, Hui Pan, Luyu Qiu, Junyan Qin, Kai Wen, Weining Wang, Yue Wang, Zhuanke Wu, Yifan Xia, Lihua Ye, Weibiao Yu, Ximiao Yu, Yunshu Zhang, Ting Zhao, Jianfeng Zhu, Ning Ma, Yihui Zhang, Shuide Wang.

Contributions of any kind are welcome!

# Release 0.5.0-beta

## Major Features and Improvements


mindinsight/_version.py (+1, -1)

@@ -14,4 +14,4 @@
# ============================================================================
"""Mindinsight version module."""

VERSION = '0.5.0'
VERSION = '0.6.0'

mindinsight/backend/application.py (+1, -0)

@@ -111,6 +111,7 @@ def create_app():
static_folder_path = os.path.realpath(os.path.join(os.path.dirname(__file__), os.pardir, 'ui', 'dist', 'static'))

app = Flask(__name__, static_url_path=static_url_path, static_folder=static_folder_path)
app.config['JSON_SORT_KEYS'] = False

if settings.ENABLE_CORS:
CORS(app, supports_credentials=True)


mindinsight/backend/datavisual/__init__.py (+2, -0)

@@ -17,6 +17,7 @@
from mindinsight.backend.datavisual.static_resource_api import init_module as static_init_module
from mindinsight.backend.datavisual.task_manager_api import init_module as task_init_module
from mindinsight.backend.datavisual.train_visual_api import init_module as train_init_module
from mindinsight.backend.datavisual.sysmetric_api import init_module as sysmetric_init_module


def init_module(app):
@@ -30,3 +31,4 @@ def init_module(app):
static_init_module(app)
task_init_module(app)
train_init_module(app)
sysmetric_init_module(app)

mindinsight/backend/datavisual/sysmetric_api.py (+39, -0)

@@ -0,0 +1,39 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""System metrics API."""

from flask import Blueprint, jsonify
from mindinsight.conf import settings
from mindinsight.sysmetric.collector import get_metrics

BLUEPRINT = Blueprint("sysmetric", __name__, url_prefix=settings.URL_PATH_PREFIX + settings.API_PREFIX)


@BLUEPRINT.route("/sysmetric/current", methods=["GET"])
def query_sysmetric():
"""Query the system metrics."""

return jsonify(get_metrics())


def init_module(app):
"""
Init module entry.

Args:
app: the application obj.

"""
app.register_blueprint(BLUEPRINT)
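A minimal client-side sketch of the new endpoint (not part of the diff): it assumes a local MindInsight instance on port 8080 and the default `/v1/mindinsight` API prefix built from `settings.URL_PATH_PREFIX` and `settings.API_PREFIX`.

```python
# Hypothetical query of the new system-metrics endpoint; the host, port
# and URL prefix are assumptions, not values taken from this change.
import requests

resp = requests.get("http://127.0.0.1:8080/v1/mindinsight/sysmetric/current")
resp.raise_for_status()
print(resp.json())  # metrics dict produced by sysmetric.collector.get_metrics()
```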

mindinsight/backend/datavisual/task_manager_api.py (+2, -1)

@@ -66,12 +66,13 @@ def query_train_jobs():
"""Query train jobs."""
offset = request.args.get("offset", default=0)
limit = request.args.get("limit", default=10)
train_id = get_train_id(request)

offset = Validation.check_offset(offset=offset)
limit = Validation.check_limit(limit, min_value=1, max_value=SummaryWatcher.MAX_SUMMARY_DIR_COUNT)

processor = TrainTaskManager(DATA_MANAGER)
total, train_jobs = processor.query_train_jobs(offset, limit)
total, train_jobs = processor.query_train_jobs(offset, limit, train_id)

return jsonify({
'name': os.path.basename(os.path.realpath(settings.SUMMARY_BASE_DIR)),
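For illustration, a client call exercising the new `train_id` filter might look like the sketch below; the host, port, run id, and the `/datavisual/train-jobs` route string are all assumptions, since the route itself is not visible in this hunk.

```python
# Hypothetical query of query_train_jobs() with the new train_id filter;
# every concrete value here (URL, route, run id) is an assumption.
import requests

resp = requests.get(
    "http://127.0.0.1:8080/v1/mindinsight/datavisual/train-jobs",
    params={"offset": 0, "limit": 10, "train_id": "./run_1"},
)
print(resp.json())  # expected keys include 'name' plus the train jobs payload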


mindinsight/backend/datavisual/train_visual_api.py (+20, -0)

@@ -25,6 +25,7 @@ from mindinsight.conf import settings
from mindinsight.datavisual.utils.tools import get_train_id
from mindinsight.datavisual.utils.tools import if_nan_inf_to_none
from mindinsight.datavisual.processors.histogram_processor import HistogramProcessor
from mindinsight.datavisual.processors.tensor_processor import TensorProcessor
from mindinsight.datavisual.processors.images_processor import ImageProcessor
from mindinsight.datavisual.processors.scalars_processor import ScalarsProcessor
from mindinsight.datavisual.processors.graph_processor import GraphProcessor
@@ -173,6 +174,25 @@ def get_scalars():
return jsonify({'scalars': scalars})


@BLUEPRINT.route("/datavisual/tensors", methods=["GET"])
def get_tensors():
"""
Interface to obtain tensor data.

Returns:
Response, which contains a JSON object.
"""
train_ids = request.args.getlist('train_id')
tags = request.args.getlist('tag')
step = request.args.get("step", default=None)
dims = request.args.get("dims", default=None)
detail = request.args.get("detail", default=None)

processor = TensorProcessor(DATA_MANAGER)
response = processor.get_tensors(train_ids, tags, step, dims, detail)
return jsonify(response)


def init_module(app):
"""
Init module entry.
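A sketch of fetching tensor data through the new route; the host, port and all parameter values (run id, tag, the `dims` slice string) are illustrative assumptions rather than values from this change.

```python
# Hypothetical call to the new /datavisual/tensors route.
import requests

params = {
    "train_id": "./run_1",   # assumed run id
    "tag": "conv1.weight",   # assumed tensor tag
    "step": 1,
    "detail": "data",
    "dims": "[0, :, :]",     # slice spec; the exact grammar is an assumption
}
resp = requests.get("http://127.0.0.1:8080/v1/mindinsight/datavisual/tensors",
                    params=params)
print(resp.json())
```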


mindinsight/backend/run.py (+5, -6)

@@ -236,9 +236,10 @@ def start():
process = subprocess.Popen(
shlex.split(cmd),
shell=False,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
# Change stdout to DEVNULL to prevent broken pipe error when creating new processes.
stdin=subprocess.DEVNULL,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT
)

# sleep 1 second for the gunicorn application to load modules
@@ -246,9 +247,7 @@ def start():

# check if gunicorn application is running
if process.poll() is not None:
_, stderr = process.communicate()
for line in stderr.decode().split('\n'):
console.error(line)
console.error("Start MindInsight failed. See log for details.")
else:
state_result = _check_server_start_stat(errorlog_abspath, log_size)
# print gunicorn start state to stdout
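The hunk above swaps PIPE for DEVNULL so the detached gunicorn child never blocks on, or breaks, a parent-owned pipe. A self-contained sketch of the same pattern, with a placeholder command standing in for the real gunicorn command line:

```python
# Minimal sketch of the detached-stdio launch pattern above; the command
# is a placeholder, not MindInsight's actual gunicorn invocation.
import shlex
import subprocess
import time

cmd = 'python -c "print(123)"'  # placeholder command line
process = subprocess.Popen(
    shlex.split(cmd),
    shell=False,
    stdin=subprocess.DEVNULL,   # no interactive input for a daemonized child
    stdout=subprocess.DEVNULL,  # discard output: no pipe left to break
    stderr=subprocess.STDOUT,   # fold stderr into the discarded stream
)
time.sleep(1)                   # give the child time to come up
if process.poll() is not None and process.returncode != 0:
    print("Start failed. See log for details.")
```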


mindinsight/conf/constants.py (+35, -1)

@@ -14,6 +14,36 @@
# ============================================================================
"""Constants module for mindinsight settings."""
import logging
import math
import os


_DEFAULT_MAX_THREADS_COUNT = 15


def _calc_default_max_processes_cnt():
"""Calc default processes count."""

# We need to make sure every summary directory has a process to load data.
min_cnt = _DEFAULT_MAX_THREADS_COUNT
# Do not use too many processes to avoid system problems (eg. out of memory).
max_cnt = 45
used_cpu_ratio = 0.75

cpu_count = os.cpu_count()
if cpu_count is None:
return min_cnt

processes_cnt = math.floor(cpu_count * used_cpu_ratio)

if processes_cnt < min_cnt:
return min_cnt

if processes_cnt > max_cnt:
return max_cnt

return processes_cnt


####################################
# Global default settings.
@@ -47,13 +77,17 @@ API_PREFIX = '/v1/mindinsight'
####################################
# Datavisual default settings.
####################################
MAX_THREADS_COUNT = 15
MAX_THREADS_COUNT = _DEFAULT_MAX_THREADS_COUNT
MAX_PROCESSES_COUNT = _calc_default_max_processes_cnt()

MAX_TAG_SIZE_PER_EVENTS_DATA = 300
DEFAULT_STEP_SIZES_PER_TAG = 500

MAX_GRAPH_TAG_SIZE = 10
MAX_TENSOR_TAG_SIZE = 6
MAX_IMAGE_STEP_SIZE_PER_TAG = 10
MAX_SCALAR_STEP_SIZE_PER_TAG = 1000
MAX_GRAPH_STEP_SIZE_PER_TAG = 1
MAX_HISTOGRAM_STEP_SIZE_PER_TAG = 50
MAX_TENSOR_STEP_SIZE_PER_TAG = 20
MAX_TENSOR_RESPONSE_DATA_SIZE = 100000
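To make the sizing rule concrete, here is a standalone reproduction of `_calc_default_max_processes_cnt()` with a few illustrative CPU counts; the result is `floor(cpu_count * 0.75)` clamped to the range [15, 45].

```python
# Standalone reproduction of the sizing rule above, for illustration only.
import math

def calc_default_max_processes_cnt(cpu_count, min_cnt=15, max_cnt=45, ratio=0.75):
    """floor(cpu_count * ratio), clamped to [min_cnt, max_cnt]."""
    if cpu_count is None:  # os.cpu_count() may return None
        return min_cnt
    return min(max(math.floor(cpu_count * ratio), min_cnt), max_cnt)

for cores in (None, 8, 32, 96):
    print(cores, "->", calc_default_max_processes_cnt(cores))
# None -> 15, 8 -> 15 (floor(6) is below the minimum), 32 -> 24, 96 -> 45
```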

mindinsight/datavisual/common/enums.py (+8, -0)

@@ -32,12 +32,20 @@ class DataManagerStatus(BaseEnum):
INVALID = 'INVALID'


class DetailCacheManagerStatus(BaseEnum):
"""Data manager status."""
INIT = 'INIT'
LOADING = 'LOADING'
DONE = 'DONE'


class PluginNameEnum(BaseEnum):
"""Plugin Name Enum."""
IMAGE = 'image'
SCALAR = 'scalar'
GRAPH = 'graph'
HISTOGRAM = 'histogram'
TENSOR = 'tensor'


@enum.unique


mindinsight/datavisual/common/exceptions.py (+27, -0)

@@ -161,6 +161,33 @@ class HistogramNotExistError(MindInsightException):
http_code=400)


class TensorNotExistError(MindInsightException):
"""Unable to get tensor values based on a given condition."""
def __init__(self, error_detail):
error_msg = f'Tensor value does not exist. Detail: {error_detail}'
super(TensorNotExistError, self).__init__(DataVisualErrors.TENSOR_NOT_EXIST,
error_msg,
http_code=400)


class StepTensorDataNotInCacheError(MindInsightException):
"""Tensor data with specific step does not in cache."""
def __init__(self, error_detail):
error_msg = f'Tensor data not in cache. Detail: {error_detail}'
super(StepTensorDataNotInCacheError, self).__init__(DataVisualErrors.STEP_TENSOR_DATA_NOT_IN_CACHE,
error_msg,
http_code=400)


class ResponseDataExceedMaxValueError(MindInsightException):
"""Response data exceed max value based on a given condition."""
def __init__(self, error_detail):
error_msg = f'Response data exceed max value. Detail: {error_detail}'
super(ResponseDataExceedMaxValueError, self).__init__(DataVisualErrors.MAX_RESPONSE_DATA_EXCEEDED_ERROR,
error_msg,
http_code=400)


class TrainJobDetailNotInCacheError(MindInsightException):
"""Detail info of given train job is not in cache."""
def __init__(self, error_detail="no detail provided."):


mindinsight/datavisual/data_transform/data_loader.py (+8, -3)

@@ -34,8 +34,13 @@ class DataLoader:
self._summary_dir = summary_dir
self._loader = None

def load(self):
"""Load the data when loader is exist."""
def load(self, computing_resource_mgr):
"""Load the data when loader is exist.

Args:
computing_resource_mgr (ComputingResourceManager): The ComputingResourceManager instance.
"""

if self._loader is None:
ms_dataloader = MSDataLoader(self._summary_dir)
loaders = [ms_dataloader]
@@ -48,7 +53,7 @@ class DataLoader:
logger.warning("No valid files can be loaded, summary_dir: %s.", self._summary_dir)
raise exceptions.SummaryLogPathInvalid()

self._loader.load()
self._loader.load(computing_resource_mgr)

def get_events_data(self):
"""


mindinsight/datavisual/data_transform/data_manager.py (+61, -36)

@@ -35,14 +35,16 @@ from mindinsight.conf import settings
from mindinsight.datavisual.common import exceptions
from mindinsight.datavisual.common.enums import CacheStatus
from mindinsight.datavisual.common.log import logger
from mindinsight.datavisual.common.enums import DataManagerStatus
from mindinsight.datavisual.common.enums import DataManagerStatus, DetailCacheManagerStatus
from mindinsight.datavisual.common.enums import PluginNameEnum
from mindinsight.datavisual.common.exceptions import TrainJobNotExistError
from mindinsight.datavisual.data_transform.loader_generators.loader_generator import MAX_DATA_LOADER_SIZE
from mindinsight.datavisual.data_transform.loader_generators.data_loader_generator import DataLoaderGenerator
from mindinsight.utils.computing_resource_mgr import ComputingResourceManager
from mindinsight.utils.exceptions import MindInsightException
from mindinsight.utils.exceptions import ParamValueError
from mindinsight.utils.exceptions import UnknownError
from mindinsight.datavisual.utils.tools import exception_wrapper


class _BasicTrainJob:
@@ -414,6 +416,13 @@ class _DetailCacheManager(_BaseCacheManager):
self._loader_pool_mutex = threading.Lock()
self._max_threads_count = 30
self._loader_generators = loader_generators
self._status = DetailCacheManagerStatus.INIT.value
self._loading_mutex = threading.Lock()

@property
def status(self):
"""Get loading status, if it is loading, return True."""
return self._status

def has_content(self):
"""Whether this cache manager has train jobs."""
@@ -434,6 +443,20 @@ class _DetailCacheManager(_BaseCacheManager):
"""Get loader pool size."""
return len(self._loader_pool)

def _load_in_cache(self):
"""Generate and execute loaders."""
def load():
self._generate_loaders()
self._execute_load_data()
try:
exception_wrapper(load)()
except UnknownError as ex:
logger.warning("Load event data failed. Detail: %s.", str(ex))
finally:
self._status = DetailCacheManagerStatus.DONE.value
logger.info("Load event data end, status: %r, and loader pool size is %r.",
self._status, self.loader_pool_size())

def update_cache(self, disk_train_jobs: Iterable[_BasicTrainJob]):
"""
Update cache.
@@ -444,8 +467,13 @@ class _DetailCacheManager(_BaseCacheManager):
disk_train_jobs (Iterable[_BasicTrainJob]): Basic info about train jobs on disk.

"""
self._generate_loaders()
self._execute_load_data()
with self._loading_mutex:
if self._status == DetailCacheManagerStatus.LOADING.value:
logger.debug("Event data is loading, and loader pool size is %r.", self.loader_pool_size())
return
self._status = DetailCacheManagerStatus.LOADING.value
thread = threading.Thread(target=self._load_in_cache, name="load_detail_in_cache")
thread.start()

def cache_train_job(self, train_id):
"""Cache given train job."""
@@ -510,7 +538,7 @@ class _DetailCacheManager(_BaseCacheManager):
logger.debug("delete loader %s", loader_id)
self._loader_pool.pop(loader_id)

def _execute_loader(self, loader_id):
def _execute_loader(self, loader_id, computing_resource_mgr):
"""
Load data form data_loader.

@@ -518,7 +546,7 @@ class _DetailCacheManager(_BaseCacheManager):

Args:
loader_id (str): An ID for `Loader`.
computing_resource_mgr (ComputingResourceManager): The ComputingResourceManager instance.
"""
try:
with self._loader_pool_mutex:
@@ -527,7 +555,7 @@ class _DetailCacheManager(_BaseCacheManager):
logger.debug("Loader %r has been deleted, will not load data.", loader_id)
return

loader.data_loader.load()
loader.data_loader.load(computing_resource_mgr)

# Update loader cache status to CACHED.
# Loader with cache status CACHED should remain the same cache status.
@@ -580,13 +608,17 @@ class _DetailCacheManager(_BaseCacheManager):

logger.info("Start to execute load data. threads_count: %s.", threads_count)

with ThreadPoolExecutor(max_workers=threads_count) as executor:
futures = []
loader_pool = self._get_snapshot_loader_pool()
for loader_id in loader_pool:
future = executor.submit(self._execute_loader, loader_id)
futures.append(future)
wait(futures, return_when=ALL_COMPLETED)
with ComputingResourceManager(
executors_cnt=threads_count,
max_processes_cnt=settings.MAX_PROCESSES_COUNT) as computing_resource_mgr:

with ThreadPoolExecutor(max_workers=threads_count) as executor:
futures = []
loader_pool = self._get_snapshot_loader_pool()
for loader_id in loader_pool:
future = executor.submit(self._execute_loader, loader_id, computing_resource_mgr)
futures.append(future)
wait(futures, return_when=ALL_COMPLETED)

def _get_threads_count(self):
"""
@@ -706,8 +738,7 @@ class _DetailCacheManager(_BaseCacheManager):

loader = self._get_loader(train_id)
if loader is None:
logger.warning("No valid summary log in train job %s, "
"or it is not in the cache.", train_id)
logger.info("No valid summary log in train job %s, or it is not in the cache.", train_id)
return None

train_job = loader.to_dict()
@@ -831,6 +862,11 @@ class DataManager:
self._detail_cache = _DetailCacheManager(loader_generators)
self._brief_cache = _BriefCacheManager()

# This lock is used to make sure that only one self._load_data_in_thread() is running.
# Because self._load_data_in_thread() will create process pool when loading files, we can not
# afford to run multiple self._load_data_in_thread() simultaneously (will create too many processes).
self._load_data_lock = threading.Lock()

@property
def summary_base_dir(self):
"""Get summary base dir."""
@@ -886,19 +922,12 @@ class DataManager:
def _load_data_in_thread_wrapper(self):
"""Wrapper for load data in thread."""
try:
self._load_data_in_thread()
except MindInsightException as exc:
with self._load_data_lock:
exception_wrapper(self._load_data)()
except UnknownError as exc:
# Not raising the exception here to ensure that data reloading does not crash.
logger.warning(exc.message)

def _load_data_in_thread(self):
"""Log (but not swallow) exceptions in thread to help debugging."""
try:
self._load_data()
except Exception as exc:
logger.exception(exc)
raise UnknownError('Load data thread error.')

def _load_data(self):
"""This function will load data once and ignore it if the status is loading."""
logger.info("Start to load data, reload interval: %r.", self._reload_interval)
@@ -928,13 +957,13 @@ class DataManager:
self._brief_cache.update_cache(basic_train_jobs)
self._detail_cache.update_cache(basic_train_jobs)

if not self._brief_cache.has_content() and not self._detail_cache.has_content():
if not self._brief_cache.has_content() and not self._detail_cache.has_content() \
and self._detail_cache.status == DetailCacheManagerStatus.DONE.value:
self.status = DataManagerStatus.INVALID.value
else:
self.status = DataManagerStatus.DONE.value

logger.info("Load event data end, status: %r, and loader pool size is %r.",
self.status, self._detail_cache.loader_pool_size())
logger.info("Load brief data end, and loader pool size is %r.", self._detail_cache.loader_pool_size())

@staticmethod
def check_reload_interval(reload_interval):
@@ -1035,14 +1064,6 @@ class DataManager:

return TrainJob(brief_train_job, detail_train_job)

def list_train_jobs(self):
"""
List train jobs.

To be implemented.
"""
raise NotImplementedError()

@property
def status(self):
"""
@@ -1077,5 +1098,9 @@ class DataManager:
"""Get brief train job."""
return self._brief_cache.get_train_job(train_id)

def get_detail_cache_status(self):
"""Get detail status, just for ut/st."""
return self._detail_cache.status


DATA_MANAGER = DataManager(settings.SUMMARY_BASE_DIR)
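The status flag plus mutex in `update_cache` above form a simple "one load in flight" guard. A distilled, generic sketch of that pattern (the class and names here are illustrative, not the MindInsight ones):

```python
# Generic one-shot background-load guard, mirroring _DetailCacheManager's
# LOADING/DONE handshake; names are illustrative.
import threading

class BackgroundLoader:
    def __init__(self):
        self._status = "INIT"
        self._loading_mutex = threading.Lock()

    def _load_in_cache(self):
        try:
            pass  # generate loaders and execute the load here
        finally:
            self._status = "DONE"

    def update_cache(self):
        with self._loading_mutex:
            if self._status == "LOADING":
                return  # a load is already running; skip this trigger
            self._status = "LOADING"
            threading.Thread(target=self._load_in_cache,
                             name="load_detail_in_cache").start()
```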

mindinsight/datavisual/data_transform/events_data.py (+4, -1)

@@ -35,13 +35,15 @@ CONFIG = {
'max_tag_sizes_per_plugin':
{
PluginNameEnum.GRAPH.value: settings.MAX_GRAPH_TAG_SIZE,
PluginNameEnum.TENSOR.value: settings.MAX_TENSOR_TAG_SIZE
},
'max_step_sizes_per_tag':
{
PluginNameEnum.SCALAR.value: settings.MAX_SCALAR_STEP_SIZE_PER_TAG,
PluginNameEnum.IMAGE.value: settings.MAX_IMAGE_STEP_SIZE_PER_TAG,
PluginNameEnum.GRAPH.value: settings.MAX_GRAPH_STEP_SIZE_PER_TAG,
PluginNameEnum.HISTOGRAM.value: settings.MAX_HISTOGRAM_STEP_SIZE_PER_TAG
PluginNameEnum.HISTOGRAM.value: settings.MAX_HISTOGRAM_STEP_SIZE_PER_TAG,
PluginNameEnum.TENSOR.value: settings.MAX_TENSOR_STEP_SIZE_PER_TAG
}
}

@@ -84,6 +86,7 @@ class EventsData:
deleted_tag = self._check_tag_out_of_spec(plugin_name)
if deleted_tag is not None:
if tag in self._deleted_tags:
logger.debug("Tag is in deleted tags: %s.", tag)
return
self.delete_tensor_event(deleted_tag)



mindinsight/datavisual/data_transform/histogram.py (+237, -0)

@@ -0,0 +1,237 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Histogram data."""
import math

from mindinsight.utils.exceptions import ParamValueError
from mindinsight.datavisual.utils.utils import calc_histogram_bins


def mask_invalid_number(num):
"""Mask invalid number to 0."""
if math.isnan(num) or math.isinf(num):
return type(num)(0)

return num


class Bucket:
"""
Bucket data class.

Args:
left (double): Left edge of the histogram bucket.
width (double): Width of the histogram bucket.
count (int): Count of numbers fallen in the histogram bucket.
"""
def __init__(self, left, width, count):
self._left = left
self._width = width
self._count = count

@property
def left(self):
"""Gets left edge of the histogram bucket."""
return self._left

@property
def count(self):
"""Gets count of numbers fallen in the histogram bucket."""
return self._count

@property
def width(self):
"""Gets width of the histogram bucket."""
return self._width

@property
def right(self):
"""Gets right edge of the histogram bucket."""
return self._left + self._width

def as_tuple(self):
"""Gets the bucket as tuple."""
return self._left, self._width, self._count

def __repr__(self):
"""Returns repr(self)."""
return "Bucket(left={}, width={}, count={})".format(self._left, self._width, self._count)


class Histogram:
"""
Histogram data class.

Args:
buckets (tuple[Bucket]): The buckets of histogram data.
max_val (number): The max value of histogram data.
min_val (number): The min value of histogram data.
count (int): The count of histogram data.
"""

# Max quantity of original buckets.
MAX_ORIGINAL_BUCKETS_COUNT = 90

def __init__(self, buckets, max_val, min_val, count):
self._visual_max = max_val
self._visual_min = min_val
self._count = count
self._original_buckets = buckets
# default bin number
self._visual_bins = calc_histogram_bins(count)
# Note that tuple is immutable, so sharing tuple is often safe.
self._re_sampled_buckets = ()

@property
def original_buckets_count(self):
"""Gets original buckets quantity."""
return len(self._original_buckets)

def set_visual_range(self, max_val: float, min_val: float, bins: int) -> None:
"""
Sets visual range for later re-sampling.

It's caller's duty to ensure input is valid.

Why do we need a visual range for histograms? Aligned buckets between steps help users see the trend of
tensors, while misaligned buckets can mislead them. For a given tensor, thinner buckets lower the count of
every bucket, whereas thicker buckets raise it. When displayed together, users might think the histogram
with thicker buckets has more values, which is misleading. So we need to unify buckets across steps, and
the visual range is the mechanism for unifying buckets.

Args:
max_val (float): Max value for visual histogram.
min_val (float): Min value for visual histogram.
bins (int): Bins number for visual histogram.
"""
if max_val < min_val:
raise ParamValueError(
"Invalid input. max_val({}) is less or equal than min_val({}).".format(max_val, min_val))

if bins < 1:
raise ParamValueError("Invalid input bins({}). Must be greater than 0.".format(bins))

self._visual_max = max_val
self._visual_min = min_val
self._visual_bins = bins

# mark _re_sampled_buckets to empty
self._re_sampled_buckets = ()

def _calc_intersection_len(self, max1, min1, max2, min2):
"""Calculates intersection length of [min1, max1] and [min2, max2]."""
if max1 < min1:
raise ParamValueError(
"Invalid input. max1({}) is less than min1({}).".format(max1, min1))

if max2 < min2:
raise ParamValueError(
"Invalid input. max2({}) is less than min2({}).".format(max2, min2))

if min1 <= min2:
if max1 <= min2:
# return value must be calculated by max1.__sub__
return max1 - max1
if max1 <= max2:
return max1 - min2
# max1 > max2
return max2 - min2

# min1 > min2
if max2 <= min1:
return max2 - max2
if max2 <= max1:
return max2 - min1
return max1 - min1

def _re_sample_buckets(self):
"""Re-samples buckets according to visual_max, visual_min and visual_bins."""
if self._visual_max == self._visual_min:
# Adjust visual range if max equals min.
self._visual_max += 0.5
self._visual_min -= 0.5

width = (self._visual_max - self._visual_min) / self._visual_bins

if not self._count:
self._re_sampled_buckets = tuple(
Bucket(self._visual_min + width * i, width, 0)
for i in range(self._visual_bins))
return

re_sampled = []
original_pos = 0
original_bucket = self._original_buckets[original_pos]
for i in range(self._visual_bins):
cur_left = self._visual_min + width * i
cur_right = cur_left + width
cur_estimated_count = 0.0

# Skip no bucket range.
if cur_right <= original_bucket.left:
re_sampled.append(Bucket(cur_left, width, math.ceil(cur_estimated_count)))
continue

# Skip no intersect range.
while cur_left >= original_bucket.right:
original_pos += 1
if original_pos >= len(self._original_buckets):
break
original_bucket = self._original_buckets[original_pos]

# entering with this condition: cur_right > original_bucket.left and cur_left < original_bucket.right
while True:
if original_pos >= len(self._original_buckets):
break
original_bucket = self._original_buckets[original_pos]

intersection = self._calc_intersection_len(
min1=cur_left, max1=cur_right,
min2=original_bucket.left, max2=original_bucket.right)
if not original_bucket.width:
estimated_count = original_bucket.count
else:
estimated_count = (intersection / original_bucket.width) * original_bucket.count

cur_estimated_count += estimated_count
if cur_right > original_bucket.right:
# Need to sample next original bucket to this visual bucket.
original_pos += 1
else:
# Current visual bucket has taken all intersect buckets into account.
break

re_sampled.append(Bucket(cur_left, width, math.ceil(cur_estimated_count)))

self._re_sampled_buckets = tuple(re_sampled)

def buckets(self, convert_to_tuple=True):
"""
Get visual buckets instead of original buckets.

Args:
convert_to_tuple (bool): Whether convert bucket object to tuple.

Returns:
tuple, contains buckets.
"""
if not self._re_sampled_buckets:
self._re_sample_buckets()

if not convert_to_tuple:
return self._re_sampled_buckets

return tuple(bucket.as_tuple() for bucket in self._re_sampled_buckets)
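A short usage sketch of the class above, assuming the import path added by this diff; the bucket values are made up. With a shared visual range of [0, 4] and 4 bins, the two original buckets land in the first two visual bins.

```python
from mindinsight.datavisual.data_transform.histogram import Bucket, Histogram

buckets = (Bucket(left=0.0, width=1.0, count=10),
           Bucket(left=1.0, width=1.0, count=30))
hist = Histogram(buckets, max_val=2.0, min_val=0.0, count=40)

# Unify this step's buckets with a range shared across steps, then re-sample.
hist.set_visual_range(max_val=4.0, min_val=0.0, bins=4)
print(hist.buckets())
# ((0.0, 1.0, 10), (1.0, 1.0, 30), (2.0, 1.0, 0), (3.0, 1.0, 0))
```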

mindinsight/datavisual/data_transform/histogram_container.py (+15, -217)

@@ -13,90 +13,26 @@
# limitations under the License.
# ============================================================================
"""Histogram data container."""
import math

from mindinsight.datavisual.data_transform.histogram import Histogram, Bucket, mask_invalid_number
from mindinsight.datavisual.proto_files.mindinsight_summary_pb2 import Summary
from mindinsight.utils.exceptions import ParamValueError
from mindinsight.datavisual.utils.utils import calc_histogram_bins


def _mask_invalid_number(num):
"""Mask invalid number to 0."""
if math.isnan(num) or math.isinf(num):
return type(num)(0)

return num


class Bucket:
"""
Bucket data class.

Args:
left (double): Left edge of the histogram bucket.
width (double): Width of the histogram bucket.
count (int): Count of numbers fallen in the histogram bucket.
"""
def __init__(self, left, width, count):
self._left = left
self._width = width
self._count = count

@property
def left(self):
"""Gets left edge of the histogram bucket."""
return self._left

@property
def count(self):
"""Gets count of numbers fallen in the histogram bucket."""
return self._count

@property
def width(self):
"""Gets width of the histogram bucket."""
return self._width

@property
def right(self):
"""Gets right edge of the histogram bucket."""
return self._left + self._width

def as_tuple(self):
"""Gets the bucket as tuple."""
return self._left, self._width, self._count

def __repr__(self):
"""Returns repr(self)."""
return "Bucket(left={}, width={}, count={})".format(self._left, self._width, self._count)


class HistogramContainer:
"""
Histogram data container.

Args:
histogram_message (Summary.Histogram): Histogram message in summary file.
"""

# Max quantity of original buckets.
MAX_ORIGINAL_BUCKETS_COUNT = 90

def __init__(self, histogram_message: Summary.Histogram):
self._msg = histogram_message
original_buckets = [Bucket(bucket.left, bucket.width, bucket.count) for bucket in self._msg.buckets]
original_buckets = [Bucket(bucket.left, bucket.width, bucket.count) for bucket in histogram_message.buckets]
# Ensure buckets are sorted from min to max.
original_buckets.sort(key=lambda bucket: bucket.left)
self._original_buckets = tuple(original_buckets)
self._count = sum(bucket.count for bucket in self._original_buckets)
self._max = _mask_invalid_number(histogram_message.max)
self._min = _mask_invalid_number(histogram_message.min)
self._visual_max = self._max
self._visual_min = self._min
# default bin number
self._visual_bins = calc_histogram_bins(self._count)
# Note that tuple is immutable, so sharing tuple is often safe.
self._re_sampled_buckets = ()
self._count = sum(bucket.count for bucket in original_buckets)
self._max = mask_invalid_number(histogram_message.max)
self._min = mask_invalid_number(histogram_message.min)
self._histogram = Histogram(tuple(original_buckets), self._max, self._min, self._count)

@property
def max(self):
@@ -114,148 +50,10 @@ class HistogramContainer:
return self._count

@property
def original_msg(self):
"""Gets original proto message."""
return self._msg

@property
def original_buckets_count(self):
"""Gets original buckets quantity."""
return len(self._original_buckets)

def set_visual_range(self, max_val: float, min_val: float, bins: int) -> None:
"""
Sets visual range for later re-sampling.

It's caller's duty to ensure input is valid.

Why we need visual range for histograms? Aligned buckets between steps can help users know about the trend of
tensors. Miss aligned buckets between steps might miss-lead users about the trend of a tensor. Because for
given tensor, if you have thinner buckets, count of every bucket will get lower, however, if you have
thicker buckets, count of every bucket will get higher. When they are displayed together, user might think
the histogram with thicker buckets has more values. This is miss-leading. So we need to unify buckets across
steps. Visual range for histogram is a technology for unifying buckets.

Args:
max_val (float): Max value for visual histogram.
min_val (float): Min value for visual histogram.
bins (int): Bins number for visual histogram.
"""
if max_val < min_val:
raise ParamValueError(
"Invalid input. max_val({}) is less or equal than min_val({}).".format(max_val, min_val))

if bins < 1:
raise ParamValueError("Invalid input bins({}). Must be greater than 0.".format(bins))

self._visual_max = max_val
self._visual_min = min_val
self._visual_bins = bins

# mark _re_sampled_buckets to empty
self._re_sampled_buckets = ()

def _calc_intersection_len(self, max1, min1, max2, min2):
"""Calculates intersection length of [min1, max1] and [min2, max2]."""
if max1 < min1:
raise ParamValueError(
"Invalid input. max1({}) is less than min1({}).".format(max1, min1))

if max2 < min2:
raise ParamValueError(
"Invalid input. max2({}) is less than min2({}).".format(max2, min2))

if min1 <= min2:
if max1 <= min2:
# return value must be calculated by max1.__sub__
return max1 - max1
if max1 <= max2:
return max1 - min2
# max1 > max2
return max2 - min2

# min1 > min2
if max2 <= min1:
return max2 - max2
if max2 <= max1:
return max2 - min1
return max1 - min1

def _re_sample_buckets(self):
"""Re-samples buckets according to visual_max, visual_min and visual_bins."""
if self._visual_max == self._visual_min:
# Adjust visual range if max equals min.
self._visual_max += 0.5
self._visual_min -= 0.5

width = (self._visual_max - self._visual_min) / self._visual_bins

if not self.count:
self._re_sampled_buckets = tuple(
Bucket(self._visual_min + width * i, width, 0)
for i in range(self._visual_bins))
return

re_sampled = []
original_pos = 0
original_bucket = self._original_buckets[original_pos]
for i in range(self._visual_bins):
cur_left = self._visual_min + width * i
cur_right = cur_left + width
cur_estimated_count = 0.0

# Skip no bucket range.
if cur_right <= original_bucket.left:
re_sampled.append(Bucket(cur_left, width, math.ceil(cur_estimated_count)))
continue

# Skip no intersect range.
while cur_left >= original_bucket.right:
original_pos += 1
if original_pos >= len(self._original_buckets):
break
original_bucket = self._original_buckets[original_pos]

# entering with this condition: cur_right > original_bucket.left and cur_left < original_bucket.right
while True:
if original_pos >= len(self._original_buckets):
break
original_bucket = self._original_buckets[original_pos]

intersection = self._calc_intersection_len(
min1=cur_left, max1=cur_right,
min2=original_bucket.left, max2=original_bucket.right)
if not original_bucket.width:
estimated_count = original_bucket.count
else:
estimated_count = (intersection / original_bucket.width) * original_bucket.count

cur_estimated_count += estimated_count
if cur_right > original_bucket.right:
# Need to sample next original bucket to this visual bucket.
original_pos += 1
else:
# Current visual bucket has taken all intersect buckets into account.
break

re_sampled.append(Bucket(cur_left, width, math.ceil(cur_estimated_count)))

self._re_sampled_buckets = tuple(re_sampled)

def buckets(self, convert_to_tuple=True):
"""
Get visual buckets instead of original buckets.

Args:
convert_to_tuple (bool): Whether convert bucket object to tuple.

Returns:
tuple, contains buckets.
"""
if not self._re_sampled_buckets:
self._re_sample_buckets()

if not convert_to_tuple:
return self._re_sampled_buckets

return tuple(bucket.as_tuple() for bucket in self._re_sampled_buckets)

def histogram(self):
"""Gets histogram data"""
return self._histogram

def buckets(self):
"""Gets histogram buckets"""
return self._histogram.buckets()

mindinsight/datavisual/data_transform/image_container.py (+30, -0)

@@ -0,0 +1,30 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Image container."""
from mindinsight.datavisual.proto_files.mindinsight_summary_pb2 import Summary


class ImageContainer:
"""
Container for image to allow pickling.

Args:
image_message (Summary.Image): Image proto buffer message.
"""
def __init__(self, image_message: Summary.Image):
self.height = image_message.height
self.width = image_message.width
self.colorspace = image_message.colorspace
self.encoded_image = image_message.encoded_image

mindinsight/datavisual/data_transform/ms_data_loader.py (+87, -32)

@@ -32,14 +32,18 @@ from mindinsight.datavisual.data_access.file_handler import FileHandler
from mindinsight.datavisual.data_transform.events_data import EventsData
from mindinsight.datavisual.data_transform.events_data import TensorEvent
from mindinsight.datavisual.data_transform.graph import MSGraph
from mindinsight.datavisual.proto_files import mindinsight_summary_pb2 as summary_pb2
from mindinsight.datavisual.data_transform.histogram import Histogram
from mindinsight.datavisual.data_transform.histogram_container import HistogramContainer
from mindinsight.datavisual.data_transform.image_container import ImageContainer
from mindinsight.datavisual.data_transform.tensor_container import TensorContainer, MAX_TENSOR_COUNT
from mindinsight.datavisual.proto_files import mindinsight_anf_ir_pb2 as anf_ir_pb2
from mindinsight.datavisual.proto_files import mindinsight_summary_pb2 as summary_pb2
from mindinsight.datavisual.utils import crc32
from mindinsight.utils.exceptions import UnknownError
from mindinsight.datavisual.data_transform.histogram_container import HistogramContainer

HEADER_SIZE = 8
CRC_STR_SIZE = 4
MAX_EVENT_STRING = 500000000


class MSDataLoader:
@@ -77,11 +81,14 @@ class MSDataLoader:
"we will reload all files in path %s.", self._summary_dir)
self.__init__(self._summary_dir)

def load(self):
def load(self, computing_resource_mgr):
"""
Load all log valid files.

When the file is reloaded, it will continue to load from where it left off.

Args:
computing_resource_mgr (ComputingResourceManager): The ComputingResourceManager instance.
"""
logger.debug("Start to load data in ms data loader.")
filenames = self.filter_valid_files()
@@ -92,8 +99,9 @@ class MSDataLoader:
self._valid_filenames = filenames
self._check_files_deleted(filenames, old_filenames)

for parser in self._parser_list:
parser.parse_files(filenames, events_data=self._events_data)
with computing_resource_mgr.get_executor() as executor:
for parser in self._parser_list:
parser.parse_files(executor, filenames, events_data=self._events_data)

def filter_valid_files(self):
"""
@@ -123,11 +131,12 @@ class _Parser:
self._latest_mtime = 0
self._summary_dir = summary_dir

def parse_files(self, filenames, events_data):
def parse_files(self, executor, filenames, events_data):
"""
Load files and parse files content.

Args:
executor (Executor): The executor instance.
filenames (list[str]): File name list.
events_data (EventsData): The container of event data.
"""
@@ -175,7 +184,7 @@ class _Parser:
class _PbParser(_Parser):
"""This class is used to parse pb file."""

def parse_files(self, filenames, events_data):
def parse_files(self, executor, filenames, events_data):
pb_filenames = self.filter_files(filenames)
pb_filenames = self.sort_files(pb_filenames)
for filename in pb_filenames:
@@ -253,11 +262,12 @@ class _SummaryParser(_Parser):
self._summary_file_handler = None
self._events_data = None

def parse_files(self, filenames, events_data):
def parse_files(self, executor, filenames, events_data):
"""
Load summary file and parse file content.

Args:
executor (Executor): The executor instance.
filenames (list[str]): File name list.
events_data (EventsData): The container of event data.
"""
@@ -283,7 +293,9 @@ class _SummaryParser(_Parser):

self._latest_file_size = new_size
try:
self._load_single_file(self._summary_file_handler)
self._load_single_file(self._summary_file_handler, executor)
# Wait for data in this file to be processed to avoid loading multiple files at the same time.
executor.wait_all_tasks_finish()
except UnknownError as ex:
logger.warning("Parse summary file failed, detail: %r,"
"file path: %s.", str(ex), file_path)
@@ -302,14 +314,14 @@ class _SummaryParser(_Parser):
lambda filename: (re.search(r'summary\.\d+', filename)
and not filename.endswith("_lineage")), filenames))

def _load_single_file(self, file_handler):
def _load_single_file(self, file_handler, executor):
"""
Load a log file data.

Args:
file_handler (FileHandler): A file handler.
executor (Executor): The executor instance.
"""
logger.debug("Load single summary file, file path: %s.", file_handler.file_path)
while True:
start_offset = file_handler.offset
try:
@@ -317,9 +329,34 @@ class _SummaryParser(_Parser):
if event_str is None:
file_handler.reset_offset(start_offset)
break

event = summary_pb2.Event.FromString(event_str)
self._event_parse(event)
if len(event_str) > MAX_EVENT_STRING:
logger.warning("file_path: %s, event string: %d exceeds %d and drop it.",
file_handler.file_path, len(event_str), MAX_EVENT_STRING)
continue

future = executor.submit(self._event_parse, event_str, self._latest_filename)

def _add_tensor_event_callback(future_value):
try:
tensor_values = future_value.result()
for tensor_value in tensor_values:
if tensor_value.plugin_name == PluginNameEnum.GRAPH.value:
try:
graph_tags = self._events_data.list_tags_by_plugin(PluginNameEnum.GRAPH.value)
except KeyError:
graph_tags = []

summary_tags = self.filter_files(graph_tags)
for tag in summary_tags:
self._events_data.delete_tensor_event(tag)

self._events_data.add_tensor_event(tensor_value)
except Exception as exc:
# Log exception for debugging.
logger.exception(exc)
raise

future.add_done_callback(_add_tensor_event_callback)
except exceptions.CRCFailedError:
file_handler.reset_offset(start_offset)
logger.warning("Check crc faild and ignore this file, file_path=%s, "
@@ -379,19 +416,29 @@ class _SummaryParser(_Parser):

return event_str

def _event_parse(self, event):
@staticmethod
def _event_parse(event_str, latest_file_name):
"""
Transform serialized `Event` data into a list of tensor events.

This method is static to avoid sending unnecessary objects to other processes.

Args:
event (Event): Message event in summary proto, data read from file handler.
event_str (str): Message event string in summary proto, data read from file handler.
latest_file_name (str): Latest file name.
"""

plugins = {
'scalar_value': PluginNameEnum.SCALAR,
'image': PluginNameEnum.IMAGE,
'histogram': PluginNameEnum.HISTOGRAM,
'tensor': PluginNameEnum.TENSOR
}
logger.debug("Start to parse event string. Event string len: %s.", len(event_str))
event = summary_pb2.Event.FromString(event_str)
logger.debug("Deserialize event string completed.")

ret_tensor_events = []
if event.HasField('summary'):
for value in event.summary.value:
for plugin in plugins:
@@ -399,44 +446,52 @@ class _SummaryParser(_Parser):
continue
plugin_name_enum = plugins[plugin]
tensor_event_value = getattr(value, plugin)
logger.debug("Processing plugin value: %s.", plugin_name_enum)

if plugin == 'histogram':
if plugin == PluginNameEnum.HISTOGRAM.value:
tensor_event_value = HistogramContainer(tensor_event_value)
# Drop steps if original_buckets_count exceeds Histogram.MAX_ORIGINAL_BUCKETS_COUNT
# to avoid the time-consuming re-sample process.
if tensor_event_value.original_buckets_count > HistogramContainer.MAX_ORIGINAL_BUCKETS_COUNT:
if tensor_event_value.histogram.original_buckets_count > Histogram.MAX_ORIGINAL_BUCKETS_COUNT:
logger.info('original_buckets_count exceeds '
'Histogram.MAX_ORIGINAL_BUCKETS_COUNT')
continue

elif plugin == PluginNameEnum.TENSOR.value:
tensor_event_value = TensorContainer(tensor_event_value)
tensor_count = 1
for d in tensor_event_value.dims:
tensor_count *= d
if tensor_count > MAX_TENSOR_COUNT:
logger.warning('tag: %s/tensor, tensor count: %d exceeds %d, dropping it.',
value.tag, tensor_count, MAX_TENSOR_COUNT)
continue

elif plugin == PluginNameEnum.IMAGE.value:
tensor_event_value = ImageContainer(tensor_event_value)

tensor_event = TensorEvent(wall_time=event.wall_time,
step=event.step,
tag='{}/{}'.format(value.tag, plugin_name_enum.value),
plugin_name=plugin_name_enum.value,
value=tensor_event_value,
filename=self._latest_filename)
self._events_data.add_tensor_event(tensor_event)
filename=latest_file_name)
logger.debug("Tensor event generated, plugin is %s, tag is %s, step is %s.",
plugin_name_enum, value.tag, event.step)
ret_tensor_events.append(tensor_event)

elif event.HasField('graph_def'):
graph = MSGraph()
graph.build_graph(event.graph_def)
tensor_event = TensorEvent(wall_time=event.wall_time,
step=event.step,
tag=self._latest_filename,
tag=latest_file_name,
plugin_name=PluginNameEnum.GRAPH.value,
value=graph,
filename=self._latest_filename)

try:
graph_tags = self._events_data.list_tags_by_plugin(PluginNameEnum.GRAPH.value)
except KeyError:
graph_tags = []

summary_tags = self.filter_files(graph_tags)
for tag in summary_tags:
self._events_data.delete_tensor_event(tag)
filename=latest_file_name)
ret_tensor_events.append(tensor_event)

self._events_data.add_tensor_event(tensor_event)
return ret_tensor_events

@staticmethod
def _compare_summary_file(current_file, dst_file):


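The reworked loader parses each event string on an executor and merges results through a done-callback, so heavy protobuf deserialization leaves the loading thread. A minimal sketch of that submit/callback pattern, using concurrent.futures as an illustrative stand-in for the ComputingResourceManager executor:

from concurrent.futures import ProcessPoolExecutor

def parse_event(event_str):
    # Stand-in for the static _event_parse: a pure function that returns results.
    return [len(event_str)]

def on_done(future):
    # Runs in the submitting process once parsing finishes, mirroring
    # _add_tensor_event_callback, which adds the returned events to EventsData.
    for value in future.result():
        print("parsed:", value)

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as executor:
        future = executor.submit(parse_event, b"raw-event-bytes")
        future.add_done_callback(on_done)
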
mindinsight/datavisual/data_transform/reservoir.py (+7, -7)

@@ -205,15 +205,15 @@ class HistogramReservoir(Reservoir):
visual_range = _VisualRange()
max_count = 0
for sample in self._samples:
histogram = sample.value
if histogram.count == 0:
histogram_container = sample.value
if histogram_container.count == 0:
# ignore empty tensor
continue
max_count = max(histogram.count, max_count)
visual_range.update(histogram.max, histogram.min)
max_count = max(histogram_container.count, max_count)
visual_range.update(histogram_container.max, histogram_container.min)

if visual_range.max == visual_range.min and not max_count:
logger.info("Max equals to min. Count is zero.")
logger.debug("Max equals to min. Count is zero.")

bins = calc_histogram_bins(max_count)

@@ -225,7 +225,7 @@ class HistogramReservoir(Reservoir):
bins,
max_count)
for sample in self._samples:
histogram = sample.value
histogram = sample.value.histogram
histogram.set_visual_range(visual_range.max, visual_range.min, bins)

self._visual_range_up_to_date = True
@@ -245,6 +245,6 @@ class ReservoirFactory:
Returns:
Reservoir, reservoir instance for given plugin name.
"""
if plugin_name == PluginNameEnum.HISTOGRAM.value:
if plugin_name in (PluginNameEnum.HISTOGRAM.value, PluginNameEnum.TENSOR.value):
return HistogramReservoir(size)
return Reservoir(size)

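For background on the Reservoir returned in the default branch: a reservoir keeps a bounded, uniformly random sample of a stream. A minimal sketch of the classic Algorithm R, independent of the MindInsight classes:

import random

def reservoir_sample(stream, size):
    """Keep a uniform random sample of `size` items from an iterable stream."""
    reservoir = []
    for index, item in enumerate(stream):
        if index < size:
            reservoir.append(item)
        else:
            # Replace an existing element with decreasing probability.
            slot = random.randint(0, index)
            if slot < size:
                reservoir[slot] = item
    return reservoir
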
mindinsight/datavisual/data_transform/tensor_container.py (+269, -0)

@@ -0,0 +1,269 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Tensor data container."""
import threading

import numpy as np

from mindinsight.datavisual.common.log import logger
from mindinsight.datavisual.data_transform.histogram import Histogram, Bucket
from mindinsight.datavisual.utils.utils import calc_histogram_bins
from mindinsight.utils.exceptions import ParamValueError

F32_MIN, F32_MAX = np.finfo(np.float32).min, np.finfo(np.float32).max
MAX_TENSOR_COUNT = 10000000


class Statistics:
"""Statistics data class.

Args:
max_value (float): max value of tensor data.
min_value (float): min value of tensor data.
avg_value (float): avg value of tensor data.
count (int): total count of tensor data.
nan_count (int): count of NAN.
neg_inf_count (int): count of negative INF.
pos_inf_count (int): count of positive INF.
"""

def __init__(self, max_value=0, min_value=0, avg_value=0,
count=0, nan_count=0, neg_inf_count=0, pos_inf_count=0):
self._max = max_value
self._min = min_value
self._avg = avg_value
self._count = count
self._nan_count = nan_count
self._neg_inf_count = neg_inf_count
self._pos_inf_count = pos_inf_count

@property
def max(self):
"""Get max value of tensor."""
return self._max

@property
def min(self):
"""Get min value of tensor."""
return self._min

@property
def avg(self):
"""Get avg value of tensor."""
return self._avg

@property
def count(self):
"""Get total count of tensor."""
return self._count

@property
def nan_count(self):
"""Get count of NAN."""
return self._nan_count

@property
def neg_inf_count(self):
"""Get count of negative INF."""
return self._neg_inf_count

@property
def pos_inf_count(self):
"""Get count of positive INF."""
return self._pos_inf_count


def get_statistics_from_tensor(tensors):
"""
Calculates statistics data of tensor.

Args:
tensors (numpy.ndarray): A numpy.ndarray of tensor data.

Returns:
Statistics, an instance of Statistics.
"""
ma_value = np.ma.masked_invalid(tensors)
total, valid = tensors.size, ma_value.count()
invalids = []
for isfn in np.isnan, np.isposinf, np.isneginf:
if total - valid > sum(invalids):
count = np.count_nonzero(isfn(tensors))
invalids.append(count)
else:
invalids.append(0)

nan_count, pos_inf_count, neg_inf_count = invalids
if not valid:
logger.warning('There are no valid values in the tensors (size=%d, shape=%s)', total, tensors.shape)
statistics = Statistics(max_value=0,
min_value=0,
avg_value=0,
count=total,
nan_count=nan_count,
neg_inf_count=neg_inf_count,
pos_inf_count=pos_inf_count)
return statistics

# BUG: max of a masked array with dtype np.float16 returns inf
# See numpy issue#15077
if issubclass(tensors.dtype.type, np.floating):
tensor_min = ma_value.min(fill_value=np.PINF)
tensor_max = ma_value.max(fill_value=np.NINF)
if tensor_min < F32_MIN or tensor_max > F32_MAX:
logger.warning('Values(%f, %f) are too large, you may encounter some undefined '
'behaviours hereafter.', tensor_min, tensor_max)
else:
tensor_min = ma_value.min()
tensor_max = ma_value.max()
tensor_sum = ma_value.sum(dtype=np.float64)
statistics = Statistics(max_value=tensor_max,
min_value=tensor_min,
avg_value=tensor_sum / valid,
count=total,
nan_count=nan_count,
neg_inf_count=neg_inf_count,
pos_inf_count=pos_inf_count)
return statistics


def _get_data_from_tensor(tensor):
"""
Get data from tensor and convert to tuple.

Args:
tensor (TensorProto): Tensor proto data.

Returns:
tuple, the item of tensor value.
"""
return tuple(tensor.float_data)


def calc_original_buckets(np_value, stats):
"""
Calculate buckets from tensor data.

Args:
np_value (numpy.ndarray): A numpy.ndarray of tensor data.
stats (Statistics): An instance of Statistics about tensor data.

Returns:
list, a list of buckets for the tensor data.

Raises:
ParamValueError, If np_value or stats is None.
"""
if np_value is None or stats is None:
raise ParamValueError("Invalid input. np_value or stats is None.")
valid_count = stats.count - stats.nan_count - stats.neg_inf_count - stats.pos_inf_count
if not valid_count:
return []

bins = calc_histogram_bins(valid_count)
first_edge, last_edge = stats.min, stats.max

if not first_edge < last_edge:
first_edge -= 0.5
last_edge += 0.5

bins = np.linspace(first_edge, last_edge, bins + 1, dtype=np_value.dtype)
hists, edges = np.histogram(np_value, bins=bins)

buckets = []
for hist, edge1, edge2 in zip(hists, edges, edges[1:]):
bucket = Bucket(edge1, edge2 - edge1, hist)
buckets.append(bucket)

return buckets


class TensorContainer:
"""
Tensor data container.

Args:
tensor_message (Summary.TensorProto): Tensor message in summary file.
"""

def __init__(self, tensor_message):
self._lock = threading.Lock()
# The original dims message can not be pickled for transfer to other processes, so a tuple is used.
self._dims = tuple(tensor_message.dims)
self._data_type = tensor_message.data_type
self._np_array = None
self._data = _get_data_from_tensor(tensor_message)
self._stats = get_statistics_from_tensor(self.get_or_calc_ndarray())
original_buckets = calc_original_buckets(self.get_or_calc_ndarray(), self._stats)
self._count = sum(bucket.count for bucket in original_buckets)
self._max = self._stats.max
self._min = self._stats.min
self._histogram = Histogram(tuple(original_buckets), self._max, self._min, self._count)

@property
def dims(self):
"""Get dims of tensor."""
return self._dims

@property
def data_type(self):
"""Get data type of tensor."""
return self._data_type

@property
def max(self):
"""Get max value of tensor."""
return self._max

@property
def min(self):
"""Get min value of tensor."""
return self._min

@property
def stats(self):
"""Get statistics data of tensor."""
return self._stats

@property
def count(self):
"""Get count value of tensor."""
return self._count

@property
def histogram(self):
"""Get histogram data."""
return self._histogram

def buckets(self):
"""Get histogram buckets."""
return self._histogram.buckets()

def get_or_calc_ndarray(self):
"""Get or calculate ndarray."""
with self._lock:
if self._np_array is None:
self._convert_to_numpy_array()
return self._np_array

def _convert_to_numpy_array(self):
"""Convert a list data to numpy array."""
try:
ndarray = np.array(self._data).reshape(self._dims)
except ValueError as ex:
logger.error("Reshape array fail, detail: %r", str(ex))
return

self._np_array = ndarray

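A short usage sketch of the statistics helper defined above; invalid entries are masked out before min/max, and expected values are noted in comments:

import numpy as np

from mindinsight.datavisual.data_transform.tensor_container import get_statistics_from_tensor

values = np.array([1.0, 2.0, np.nan, np.inf, -np.inf], dtype=np.float32)
stats = get_statistics_from_tensor(values)
print(stats.min, stats.max, stats.avg)  # 1.0 2.0 1.5
print(stats.count, stats.nan_count, stats.pos_inf_count, stats.neg_inf_count)  # 5 1 1 1
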
mindinsight/datavisual/processors/tensor_processor.py (+381, -0)

@@ -0,0 +1,381 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Tensor Processor APIs."""
from urllib.parse import unquote

import numpy as np

from mindinsight.datavisual.utils.tools import to_int
from mindinsight.utils.exceptions import ParamValueError, UrlDecodeError
from mindinsight.conf.constants import MAX_TENSOR_RESPONSE_DATA_SIZE
from mindinsight.datavisual.common.validation import Validation
from mindinsight.datavisual.common.exceptions import StepTensorDataNotInCacheError, TensorNotExistError
from mindinsight.datavisual.common.exceptions import ResponseDataExceedMaxValueError
from mindinsight.datavisual.data_transform.tensor_container import TensorContainer, get_statistics_from_tensor
from mindinsight.datavisual.processors.base_processor import BaseProcessor
from mindinsight.datavisual.proto_files import mindinsight_anf_ir_pb2 as anf_ir_pb2


def convert_array_from_str(dims, limit=0):
"""
Convert string of dims data to array.

Args:
dims (str): Specify dims of tensor.
limit (int): The max flexible dimension count, default value is 0 which means that there is no limitation.

Returns:
list, for example the string "[0, 0, :, :]" converts to [0, 0, None, None].

Raises:
ParamValueError, If flexible dimensions exceed limit value.
"""
dims = dims.replace('[', '') \
.replace(']', '')
dims_list = []
count = 0
for dim in dims.split(','):
dim = dim.strip()
if dim == ':':
dims_list.append(None)
count += 1
else:
dims_list.append(to_int(dim, "dim"))
if limit and count > limit:
raise ParamValueError("Flexible dimensions cannot exceed limit value: {}, size: {}"
.format(limit, count))
return dims_list


def get_specific_dims_data(ndarray, dims, tensor_dims):
"""
Get specific dims data.

Args:
ndarray (numpy.ndarray): An ndarray of numpy.
dims (list): A list of specific dims.
tensor_dims (list): A list of tensor dims.

Returns:
numpy.ndarray, an ndarray of specific dims tensor data.

Raises:
ParamValueError, If the length of param dims is not equal to the length of tensor dims or
the index of param dims is out of range.
"""
if len(dims) != len(tensor_dims):
raise ParamValueError("The length of param dims: {}, is not equal to the "
"length of tensor dims: {}.".format(len(dims), len(tensor_dims)))
indices = []
for k, d in enumerate(dims):
if d is not None:
if d >= tensor_dims[k]:
raise ParamValueError("The index: {} of param dims out of range: {}.".format(d, tensor_dims[k]))
indices.append(d)
else:
indices.append(slice(0, tensor_dims[k]))
return ndarray[tuple(indices)]


def get_statistics_dict(stats):
"""
Get statistics dict according to statistics value.

Args:
stats (Statistics): An instance of Statistics.

Returns:
dict, a dict including 'max', 'min', 'avg', 'count', 'nan_count', 'neg_inf_count', 'pos_inf_count'.
"""
statistics = {
"max": stats.max,
"min": stats.min,
"avg": stats.avg,
"count": stats.count,
"nan_count": stats.nan_count,
"neg_inf_count": stats.neg_inf_count,
"pos_inf_count": stats.pos_inf_count
}
return statistics


class TensorProcessor(BaseProcessor):
"""Tensor Processor."""
def get_tensors(self, train_ids, tags, step, dims, detail):
"""
Get tensor data for given train_ids, tags, step, dims and detail.

Args:
train_ids (list): Specify list of train job ID.
tags (list): Specify list of tag.
step (int): Specify step of the tensor; required when detail is 'data'.
dims (str): Specify dims of the tensor; required when detail is 'data'.
detail (str): Specify which data to query; available values: 'stats', 'histogram' and 'data'.

Returns:
dict, a dict including the `tensors`.

Raises:
UrlDecodeError, If unquote train id error with strict mode.
"""
Validation.check_param_empty(train_id=train_ids, tag=tags)
for index, train_id in enumerate(train_ids):
try:
train_id = unquote(train_id, errors='strict')
except UnicodeDecodeError:
raise UrlDecodeError('Unquote train id error with strict mode')
else:
train_ids[index] = train_id

tensors = []
for train_id in train_ids:
tensors += self._get_train_tensors(train_id, tags, step, dims, detail)

return {"tensors": tensors}

def _get_train_tensors(self, train_id, tags, step, dims, detail):
"""
Get tensor data for given train_id, tags, step, dims and detail.

Args:
train_id (str): Specify the train job ID.
tags (list): Specify list of tag.
step (int): Specify step of tensor, it's necessary when detail is set to 'data'.
dims (str): Specify dims of tensor, it's necessary when detail is set to 'data'.
detail (str): Specify which data to query, available values: 'stats', 'histogram' and 'data'.

Returns:
list[dict], a list of dictionaries containing the `train_id`, `tag`, `values`.

Raises:
TensorNotExistError, If tensor with specific train_id and tag does not exist in cache.
ParamValueError, If the value of detail is not within available values:
'stats', 'histogram' and 'data'.
"""

tensors_response = []
for tag in tags:
try:
tensors = self._data_manager.list_tensors(train_id, tag)
except ParamValueError as err:
raise TensorNotExistError(err.message)

if tensors and not isinstance(tensors[0].value, TensorContainer):
raise TensorNotExistError("there is no tensor data in this tag: {}".format(tag))

if detail is None or detail == 'stats':
values = self._get_tensors_summary(detail, tensors)
elif detail == 'data':
Validation.check_param_empty(step=step, dims=dims)
step = to_int(step, "step")
values = self._get_tensors_data(step, dims, tensors)
elif detail == 'histogram':
values = self._get_tensors_histogram(tensors)
else:
raise ParamValueError('Unsupported value for detail: {}.'.format(detail))

tensor = {
"train_id": train_id,
"tag": tag,
"values": values
}
tensors_response.append(tensor)

return tensors_response

def _get_tensors_summary(self, detail, tensors):
"""
Builds a JSON-serializable object with information about tensor summary.

Args:
detail (str): Specify which data to query, detail value is None or 'stats' at this method.
tensors (list): The list of _Tensor data.

Returns:
dict, a dict including the `wall_time`, `step`, and `value` for each tensor.
{
"wall_time": 0,
"step": 0,
"value": {
"dims": [1],
"data_type": "DT_FLOAT32"
"statistics": {
"max": 0,
"min": 0,
"avg": 0,
"count": 1,
"nan_count": 0,
"neg_inf_count": 0,
"pos_inf_count": 0
} This dict is set only when detail is equal to 'stats'.
}
}
"""
values = []
for tensor in tensors:
# This value is an instance of TensorContainer
value = tensor.value
value_dict = {
"dims": value.dims,
"data_type": anf_ir_pb2.DataType.Name(value.data_type)
}
if detail and detail == 'stats':
stats = get_statistics_dict(value.stats)
value_dict.update({"statistics": stats})

values.append({
"wall_time": tensor.wall_time,
"step": tensor.step,
"value": value_dict
})

return values

def _get_tensors_data(self, step, dims, tensors):
"""
Builds a JSON-serializable object with information about tensor dims data.

Args:
step (int): Specify step of tensor.
dims (str): Specify dims of tensor.
tensors (list): The list of _Tensor data.

Returns:
dict, a dict including the `wall_time`, `step`, and `value` for each tensor.
{
"wall_time": 0,
"step": 0,
"value": {
"dims": [1],
"data_type": "DT_FLOAT32",
"data": [[0.1]]
"statistics": {
"max": 0,
"min": 0,
"avg": 0,
"count": 1,
"nan_count": 0,
"neg_inf_count": 0,
"pos_inf_count": 0
}
}
}

Raises:
ResponseDataExceedMaxValueError, If the size of response data exceed max value.
StepTensorDataNotInCacheError, If query step is not in cache.
"""
values = []
step_in_cache = False
dims = convert_array_from_str(dims, limit=2)
for tensor in tensors:
# This value is an instance of TensorContainer
value = tensor.value
if step != tensor.step:
continue
step_in_cache = True
ndarray = value.get_or_calc_ndarray()
res_data = get_specific_dims_data(ndarray, dims, list(value.dims))
flatten_data = res_data.flatten().tolist()
if len(flatten_data) > MAX_TENSOR_RESPONSE_DATA_SIZE:
raise ResponseDataExceedMaxValueError("the size of response data: {} exceeds max value: {}."
.format(len(flatten_data), MAX_TENSOR_RESPONSE_DATA_SIZE))

def transfer(array):
if not isinstance(array, np.ndarray):
# The list is used here so that len function can be used
# when the value of array is `NAN`, `-INF` or `INF`.
array = [array]
transfer_data = [None] * len(array)
for index, data in enumerate(array):
if isinstance(data, np.ndarray):
transfer_data[index] = transfer(data)
else:
if np.isnan(data):
transfer_data[index] = 'NAN'
elif np.isneginf(data):
transfer_data[index] = '-INF'
elif np.isposinf(data):
transfer_data[index] = 'INF'
else:
transfer_data[index] = data
return transfer_data

stats = get_statistics_from_tensor(res_data)
if stats.nan_count + stats.neg_inf_count + stats.pos_inf_count > 0:
tensor_data = transfer(res_data)
else:
tensor_data = res_data.tolist()
values.append({
"wall_time": tensor.wall_time,
"step": tensor.step,
"value": {
"dims": value.dims,
"data_type": anf_ir_pb2.DataType.Name(value.data_type),
"data": tensor_data,
"statistics": get_statistics_dict(stats)
}
})
break
if not step_in_cache:
raise StepTensorDataNotInCacheError("data for step {} may have been dropped.".format(step))

return values

def _get_tensors_histogram(self, tensors):
"""
Builds a JSON-serializable object with information about tensor histogram data.

Args:
tensors (list): The list of _Tensor data.

Returns:
dict, a dict including the `wall_time`, `step`, and `value` for each tensor.
{
"wall_time": 0,
"step": 0,
"value": {
"dims": [1],
"data_type": "DT_FLOAT32",
"histogram_buckets": [[0.1, 0.2, 3]]
"statistics": {
"max": 0,
"min": 0,
"avg": 0,
"count": 1,
"nan_count": 0,
"neg_inf_count": 0,
"pos_inf_count": 0
}
}
}
"""
values = []
for tensor in tensors:
# This value is an instance of TensorContainer
value = tensor.value
buckets = value.buckets()
values.append({
"wall_time": tensor.wall_time,
"step": tensor.step,
"value": {
"dims": value.dims,
"data_type": anf_ir_pb2.DataType.Name(value.data_type),
"histogram_buckets": buckets,
"statistics": get_statistics_dict(value.stats)
}
})

return values

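A usage sketch of the two dims helpers above: the string form keeps at most `limit` flexible dimensions, and the converted list then selects a slice of the tensor:

import numpy as np

from mindinsight.datavisual.processors.tensor_processor import (
    convert_array_from_str, get_specific_dims_data)

dims = convert_array_from_str("[0, :, :]", limit=2)  # -> [0, None, None]
ndarray = np.arange(24).reshape(2, 3, 4)
selected = get_specific_dims_data(ndarray, dims, [2, 3, 4])
print(selected.shape)  # (3, 4): dim 0 fixed, remaining dims fully sliced
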
mindinsight/datavisual/processors/train_task_manager.py (+49, -27)

@@ -83,17 +83,24 @@ class TrainTaskManager(BaseProcessor):
plugins=plugins
)

def query_train_jobs(self, offset=0, limit=10):
def query_train_jobs(self, offset=0, limit=10, request_train_id=None):
"""
Query train jobs.

Args:
offset (int): Specify page number. Default is 0.
limit (int): Specify page size. Default is 10.
request_train_id (str): Specify train id. Default is None.

Returns:
tuple, the total number of train jobs and the list of train jobs specified by offset and limit.
"""
if request_train_id is not None:
train_job_item = self._get_train_job_item(request_train_id)
if train_job_item is None:
return 0, []
return 1, [train_job_item]

brief_cache = self._data_manager.get_brief_cache()
brief_train_jobs = list(brief_cache.get_train_jobs().values())
brief_train_jobs.sort(key=lambda x: x.basic_info.update_time, reverse=True)
@@ -106,37 +113,52 @@ class TrainTaskManager(BaseProcessor):
train_ids = [train_job.basic_info.train_id for train_job in brief_train_jobs[start:end]]

for train_id in train_ids:
try:
train_job = self._data_manager.get_train_job(train_id)
except exceptions.TrainJobNotExistError:
logger.warning('Train job %s not existed', train_id)
train_job_item = self._get_train_job_item(train_id)
if train_job_item is None:
continue

basic_info = train_job.get_basic_info()
train_job_item = dict(
train_id=basic_info.train_id,
relative_path=basic_info.train_id,
create_time=basic_info.create_time.strftime('%Y-%m-%d %H:%M:%S'),
update_time=basic_info.update_time.strftime('%Y-%m-%d %H:%M:%S'),
profiler_dir=basic_info.profiler_dir,
cache_status=train_job.cache_status.value,
)

if train_job.cache_status == CacheStatus.CACHED:
plugins = self.get_plugins(train_id)
else:
plugins = dict(plugins={
'graph': [],
'scalar': [],
'image': [],
'histogram': [],
})

train_job_item.update(plugins)
train_jobs.append(train_job_item)

return total, train_jobs

def _get_train_job_item(self, train_id):
"""
Get train job item.

Args:
train_id (str): Specify train id.

Returns:
dict, a dict of the train job item, or None if the train job does not exist.
"""
try:
train_job = self._data_manager.get_train_job(train_id)
except exceptions.TrainJobNotExistError:
logger.warning('Train job %s does not exist', train_id)
return None

basic_info = train_job.get_basic_info()
train_job_item = dict(
train_id=basic_info.train_id,
relative_path=basic_info.train_id,
create_time=basic_info.create_time.strftime('%Y-%m-%d %H:%M:%S'),
update_time=basic_info.update_time.strftime('%Y-%m-%d %H:%M:%S'),
profiler_dir=basic_info.profiler_dir,
cache_status=train_job.cache_status.value,
)

if train_job.cache_status == CacheStatus.CACHED:
plugins = self.get_plugins(train_id)
else:
plugins = dict(plugins={
'graph': [],
'scalar': [],
'image': [],
'histogram': [],
})

train_job_item.update(plugins)
return train_job_item

def cache_train_jobs(self, train_ids):
"""
Cache train jobs.


mindinsight/datavisual/utils/tools.py (+12, -0)

@@ -21,7 +21,9 @@ from numbers import Number
from urllib.parse import unquote

from mindinsight.datavisual.common.exceptions import MaxCountExceededError
from mindinsight.datavisual.common.log import logger
from mindinsight.utils import exceptions
from mindinsight.utils.exceptions import UnknownError

_IMG_EXT_TO_MIMETYPE = {
'bmp': 'image/bmp',
@@ -216,6 +218,16 @@ def if_nan_inf_to_none(name, value):
return value


def exception_wrapper(func):
def wrapper(*args, **kwargs):
try:
# Return the result so the wrapper is transparent to callers.
return func(*args, **kwargs)
except Exception as exc:
logger.exception(exc)
raise UnknownError(str(exc))
return wrapper


class Counter:
"""Count accumulator with limit checking."""


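A usage sketch of the decorator above: any exception raised by the wrapped function is logged with a stack trace and re-raised as UnknownError:

from mindinsight.datavisual.utils.tools import exception_wrapper
from mindinsight.utils.exceptions import UnknownError

@exception_wrapper
def risky_parse(payload):
    return int(payload)  # ValueError for bad input is re-raised as UnknownError

try:
    risky_parse("not-a-number")
except UnknownError as err:
    print("wrapped:", err)
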

mindinsight/lineagemgr/cache_item_updater.py (+13, -10)

@@ -17,7 +17,7 @@ import os

from mindinsight.datavisual.data_transform.data_manager import BaseCacheItemUpdater, CachedTrainJob
from mindinsight.lineagemgr.common.log import logger
from mindinsight.lineagemgr.common.exceptions.exceptions import MindInsightException
from mindinsight.lineagemgr.common.exceptions.exceptions import LineageFileNotFoundError
from mindinsight.lineagemgr.common.validator.validate import validate_train_id, validate_added_info
from mindinsight.lineagemgr.lineage_parser import LineageParser, LINEAGE
from mindinsight.utils.exceptions import ParamValueError
@@ -59,19 +59,13 @@ class LineageCacheItemUpdater(BaseCacheItemUpdater):

try:
lineage_parser = self._lineage_parsing(cache_item)
except MindInsightException as err:
with cache_item.lock_key(LINEAGE):
try:
cache_item.delete(key=LINEAGE)
logger.info("Parse failed, delete the tran job %s. Detail: %s.", relative_path, str(err))
except ParamValueError:
logger.debug("Parse failed, no need to delete the train job %s. "
"Detail: %s.", relative_path, str(err))
except LineageFileNotFoundError:
self._delete_lineage_in_cache(cache_item, LINEAGE, relative_path)
return

super_lineage_obj = lineage_parser.super_lineage_obj
if super_lineage_obj is None:
logger.debug("There is no lineage to update in tran job %s.", relative_path)
logger.debug("There is no lineage to update in train job %s.", relative_path)
return

cache_item.set(key=LINEAGE, value=lineage_parser)
@@ -91,3 +85,12 @@ class LineageCacheItemUpdater(BaseCacheItemUpdater):
lineage_parser.load()

return lineage_parser

def _delete_lineage_in_cache(self, cache_item, key, relative_path):
with cache_item.lock_key(key):
try:
cache_item.delete(key=key)
logger.info("Parse failed, delete the tran job %s.", relative_path)
except ParamValueError:
logger.debug("Parse failed, and it is not in cache, "
"no need to delete the train job %s.", relative_path)

mindinsight/lineagemgr/lineage_parser.py (+9, -8)

@@ -100,7 +100,15 @@ class LineageParser:
continue

self._latest_file_size = new_size
self._parse_summary_log()
try:
self._parse_summary_log()
except (LineageSummaryAnalyzeException,
LineageEventNotExistException,
LineageEventFieldNotExistException) as error:
logger.debug("Parse file failed, file_path is %s. Detail: %s.", file_path, str(error))
except MindInsightException as error:
logger.exception(error)
logger.debug("Parse file failed, file_path is %s.", file_path)

def _init_if_files_deleted(self, file_list):
"""Init variables if files deleted."""
@@ -189,13 +197,6 @@ class LineageOrganizer:
self._super_lineage_objs.update({abs_summary_dir: super_lineage_obj})
except LineageFileNotFoundError:
no_lineage_count += 1
except (LineageSummaryAnalyzeException,
LineageEventNotExistException,
LineageEventFieldNotExistException) as error:
logger.warning("Parse file failed under summary_dir %s. Detail: %s.", relative_dir, str(error))
except MindInsightException as error:
logger.exception(error)
logger.warning("Parse file failed under summary_dir %s.", relative_dir)

if no_lineage_count == len(relative_dirs):
logger.info('There is no summary log file under summary_base_dir.')


mindinsight/profiler/README.md (+3, -3)

@@ -11,11 +11,11 @@ The Profiler enables users to:

To enable profiling on MindSpore, the MindInsight Profiler apis should be added to the script:

1. Import MindInsight Profiler
1. Import the Profiler
```
from mindinsight.profiler import Profiler
from mindspore.profiler import Profiler
```
2. Initialize the Profiler after set context, and before the network initialization.
2. Initialize the Profiler after setting the context, and before the network and HCCL initialization.

Example:


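A minimal sketch of the flow described above, assuming the mindspore.profiler.Profiler API (construct the Profiler after set_context, and call analyse() once training ends):

from mindspore import context
from mindspore.profiler import Profiler

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
profiler = Profiler()   # initialize before network and HCCL initialization

# ... define and run the network here ...

profiler.analyse()      # stop profiling and analyse the results
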
mindinsight/profiler/__init__.py (+1, -13)

@@ -12,16 +12,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
Profiler Module Introduction.

This module provides Python APIs to enable the profiling of MindSpore neural networks.
Users can import the mindinsight.profiler.Profiler, initialize the Profiler object to start profiling,
and use Profiler.analyse() to stop profiling and analyse the results.
To visualize the profiling results, users can open MindInsight Web, find the corresponding run
and click the profile link.
Now, Profiler supports the AICore operator analysis.
"""
from mindinsight.profiler.profiling import Profiler

__all__ = ["Profiler"]
"""Profiler Module Introduction."""

mindinsight/profiler/analyser/analyser_factory.py (+1, -1)

@@ -15,7 +15,7 @@
"""The analyser factory."""
import threading

import mindinsight.profiler.analyser as analyser_module
from mindinsight.profiler import analyser as analyser_module
from mindinsight.profiler.common.exceptions.exceptions import \
ProfilerAnalyserNotExistException



mindinsight/profiler/analyser/timeline_analyser.py (+46, -8)

@@ -17,7 +17,6 @@ import json
import os

from mindinsight.profiler.analyser.base_analyser import BaseAnalyser
from mindinsight.profiler.parser.container import TimelineContainer
from mindinsight.profiler.common.exceptions.exceptions import ProfilerFileNotFoundException, \
ProfilerIOException
from mindinsight.profiler.common.log import logger
@@ -27,6 +26,48 @@ from mindinsight.profiler.common.validator.validate_path import validate_and_nor
SIZE_LIMIT = 20 * 1024 * 1024 # 20MB


class TimelineContainer:
"""
A container of operator computation metadata.

Args:
split_list (list): The split list of metadata in op_compute output file.
"""
def __init__(self, split_list):
self._op_name = split_list[0]
self._stream_id = int(split_list[1])
self._start_time = float(split_list[2])
self._duration = float(split_list[3])
self._pid = None
if len(split_list) == 5:
self._pid = int(split_list[4])

@property
def op_name(self):
"""Get the name of the operator."""
return self._op_name

@property
def stream_id(self):
"""Get the stream id of the operator."""
return self._stream_id

@property
def start_time(self):
"""Get the execution start time of the operator."""
return self._start_time

@property
def duration(self):
"""Get the duration of the operator execution."""
return self._duration

@property
def pid(self):
"""Get the pid of the operator execution."""
return self._pid


class TimelineAnalyser(BaseAnalyser):
"""
Analyse timeline data from file.
@@ -62,9 +103,7 @@ class TimelineAnalyser(BaseAnalyser):
Returns:
json, the content of timeline data.
"""
# Search timeline json file under profiling dir.
display_filename = self._display_filename.format(self._device_id)
# Check if there is a timeline json file for display
file_path = os.path.join(self._profiling_dir, display_filename)
file_path = validate_and_normalize_path(
file_path, raise_key='Invalid timeline json path.'
@@ -90,11 +129,8 @@ class TimelineAnalyser(BaseAnalyser):
Returns:
json, the content of timeline summary information.
"""
file_path = None
summary_file_name = 'timeline_summary_{}.json'.format(self._device_id)
if summary_file_name in os.listdir(self._profiling_dir):
file_path = os.path.join(self._profiling_dir, summary_file_name)

summary_filename = self._timeline_summary_filename.format(self._device_id)
file_path = os.path.join(self._profiling_dir, summary_filename)
file_path = validate_and_normalize_path(
file_path, raise_key='Invalid timeline summary path.'
)
@@ -107,6 +143,8 @@ class TimelineAnalyser(BaseAnalyser):
except (IOError, OSError, json.JSONDecodeError) as err:
logger.error('Error occurred when read timeline summary file: %s', err)
raise ProfilerIOException
else:
logger.info('No timeline summary file. Please check the output path.')

return timeline_summary


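A usage sketch for the relocated TimelineContainer: each line of the op_compute output file splits into [op_name, stream_id, start_time, duration] with an optional trailing pid:

from mindinsight.profiler.analyser.timeline_analyser import TimelineContainer

line = "MatMul-op1 0 123.4 5.6 9000"
container = TimelineContainer(line.split())
print(container.op_name, container.stream_id, container.start_time,
      container.duration, container.pid)
# MatMul-op1 0 123.4 5.6 9000
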

mindinsight/profiler/parser/aicpu_data_parser.py (+0, -182)

@@ -1,182 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
The parser for AI CPU preprocess data.
"""
import os

from tabulate import tabulate

from mindinsight.profiler.common._utils import fwrite_format, get_file_join_name
from mindinsight.profiler.common.log import logger


class DataPreProcessParser:
"""
The Parser for AI CPU preprocess data.

Args:
input_path(str): The profiling job path.
output_filename(str): The output data path and name.

"""

_source_file_target = 'DATA_PREPROCESS.dev.AICPU.'
_dst_file_title = 'title:DATA_PREPROCESS AICPU'
_dst_file_column_title = ['serial_number', 'node_type_name', 'total_time(ms)',
'dispatch_time(ms)', 'run_start', 'run_end']
_ms_unit = 1000

def __init__(self, input_path, output_filename):
self._input_path = input_path
self._output_filename = output_filename
self._source_file_name = self._get_source_file()
self._ms_kernel_flag = 3
self._other_kernel_flag = 6
self._thread_flag = 7
self._ms_kernel_run_end_index = 2
self._other_kernel_run_end_index = 5
self._result_list = []
self._min_cycle_counter = float('inf')

def _get_source_file(self):
"""Get log file name, which was created by ada service."""
file_name = get_file_join_name(self._input_path, self._source_file_target)
if not file_name:
data_path = os.path.join(self._input_path, "data")
file_name = get_file_join_name(data_path, self._source_file_target)
return file_name

def _get_kernel_result(self, number, node_list, thread_list):
"""Get the profiling data form different aicpu kernel"""
try:
if len(node_list) == self._ms_kernel_flag and len(thread_list) == self._thread_flag:
node_type_name = node_list[0].split(':')[-1]
run_end_index = self._ms_kernel_run_end_index
elif len(node_list) == self._other_kernel_flag and len(thread_list) == self._thread_flag:
node_type_name = node_list[0].split(':')[-1].split('/')[-1].split('-')[0]
run_end_index = self._other_kernel_run_end_index
else:
logger.warning("the data format can't support 'node_list':%s", str(node_list))
return None

run_start = node_list[1].split(':')[-1].split(' ')[0]
run_end = node_list[run_end_index].split(':')[-1].split(' ')[0]
total_time = float(thread_list[-1].split('=')[-1].split()[0]) / self._ms_unit
dispatch_time = float(thread_list[-2].split('=')[-1].split()[0]) / self._ms_unit

return [number, node_type_name, total_time, dispatch_time,
run_start, run_end]
except IndexError as e:
logger.exception(e)
return None

def execute(self):
"""Execute the parser, get result data, and write it to the output file."""

if not os.path.exists(self._source_file_name):
logger.info("Did not find the aicpu profiling source file")
return

with open(self._source_file_name, 'rb') as ai_cpu_data:
ai_cpu_str = str(ai_cpu_data.read().replace(b'\n\x00', b' ___ ')
.replace(b'\x00', b' ___ '))[2:-1]
ai_cpu_lines = ai_cpu_str.split(" ___ ")

result_list = list()
ai_cpu_total_time_summary = 0
# Node serial number.
serial_number = 1
for i in range(len(ai_cpu_lines) - 1):
node_line = ai_cpu_lines[i]
thread_line = ai_cpu_lines[i + 1]
result = []
if "Node" in node_line and "Thread" in thread_line:
# Get the node data from node_line
node_list = node_line.split(',')
thread_list = thread_line.split(',')
result = self._get_kernel_result(serial_number, node_list, thread_list)

if result is None:
continue

result_list.append(result)
# Calculate the total time.
total_time = result[2]
ai_cpu_total_time_summary += total_time
# Increase node serial number.
serial_number += 1
elif "Node" in node_line and "Thread" not in thread_line:
node_type_name = node_line.split(',')[0].split(':')[-1]
logger.warning("The node type:%s cannot find thread data", node_type_name)

if result_list:
ai_cpu_total_time = format(ai_cpu_total_time_summary, '.6f')
result_list.append(["AI CPU Total Time(ms):", ai_cpu_total_time])
fwrite_format(self._output_filename, data_source=self._dst_file_title, is_print=True,
is_start=True)
fwrite_format(self._output_filename,
data_source=tabulate(result_list, self._dst_file_column_title,
tablefmt='simple'),
is_start=True, is_print=True)

# For timeline display.
self._result_list = result_list

def query_aicpu_data(self):
"""
Get execution time of AI CPU operator.

Returns:
a dict, the metadata of AI CPU operator execution time.
"""
stream_id = 0 # Default stream id for AI CPU.
pid = 9000 # Default pid for AI CPU.
factor = 1000 # Convert time unit from 1us to 1ms
total_time = 0
min_cycle_counter = float('inf')
aicpu_info = []
op_count_list = []
for aicpu_item in self._result_list:
if "AI CPU Total Time(ms):" in aicpu_item:
total_time = aicpu_item[-1]
continue

op_name = aicpu_item[1]
start_time = float(aicpu_item[4]) / factor
min_cycle_counter = min(min_cycle_counter, start_time)
end_time = float(aicpu_item[5]) / factor
duration = end_time - start_time
aicpu_info.append([op_name, stream_id, start_time, duration, pid])

# Record the number of operator types.
if op_name not in op_count_list:
op_count_list.append(op_name)

self._min_cycle_counter = min_cycle_counter
aicpu_dict = {
'info': aicpu_info,
'total_time': float(total_time),
'op_exe_times': len(aicpu_info),
'num_of_ops': len(op_count_list),
'num_of_streams': 1
}

return aicpu_dict

@property
def min_cycle_counter(self):
"""Get minimum cycle counter in AI CPU."""
return self._min_cycle_counter

mindinsight/profiler/parser/container.py (+0, -99)

@@ -1,99 +0,0 @@
"""The container of metadata used in profiler parser."""


class HWTSContainer:
"""
HWTS output container.

Args:
split_list (list): The split list of metadata in HWTS output file.
"""
def __init__(self, split_list):
self._op_name = ''
self._duration = None
self._status = split_list[0]
self._task_id = split_list[6]
self._cycle_counter = float(split_list[7])
self._stream_id = split_list[8]

@property
def status(self):
"""Get the status of the operator, i.e. Start or End."""
return self._status

@property
def task_id(self):
"""Get the task id of the operator."""
return self._task_id

@property
def cycle_counter(self):
"""Get the cycle counter."""
return self._cycle_counter

@property
def stream_id(self):
"""Get the stream id of the operator."""
return self._stream_id

@property
def op_name(self):
"""Get the name of the operator."""
return self._op_name

@op_name.setter
def op_name(self, name):
"""Set the name of the operator."""
self._op_name = name

@property
def duration(self):
"""Get the duration of the operator execution."""
return self._duration

@duration.setter
def duration(self, value):
"""Set the duration of the operator execution."""
self._duration = value


class TimelineContainer:
"""
A container of operator computation metadata.

Args:
split_list (list): The split list of metadata in op_compute output file.
"""
def __init__(self, split_list):
self._op_name = split_list[0]
self._stream_id = int(split_list[1])
self._start_time = float(split_list[2])
self._duration = float(split_list[3])
self._pid = None
if len(split_list) == 5:
self._pid = int(split_list[4])

@property
def op_name(self):
"""Get the name of the operator."""
return self._op_name

@property
def stream_id(self):
"""Get the stream id of the operator."""
return self._stream_id

@property
def start_time(self):
"""Get the execution start time of the operator."""
return self._start_time

@property
def duration(self):
"""Get the duration of the operator execution."""
return self._duration

@property
def pid(self):
"""Get the pid of the operator execution."""
return self._pid

mindinsight/profiler/parser/framework_parser.py (+0, -598)

@@ -1,598 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Thr parser for parsing framework files."""
import csv
import enum
import json
import os
import re

from marshmallow import ValidationError

from mindinsight.profiler.common.exceptions.exceptions import \
ProfilerPathErrorException, ProfilerDirNotFoundException, \
ProfilerFileNotFoundException, ProfilerDeviceIdMismatchException, \
ProfilerRawFileException, ProfilerParamValueErrorException
from mindinsight.profiler.common.validator.validate_path import \
validate_and_normalize_path


class VmDataType(enum.IntEnum):
"""Definition of vm data type."""
NUMBER_TYPE_BEGIN = 26
NUMBER_TYPE_BOOL = 27
NUMBER_TYPE_INT = 28
NUMBER_TYPE_INT8 = 29
NUMBER_TYPE_INT16 = 30
NUMBER_TYPE_INT32 = 31
NUMBER_TYPE_INT64 = 32
NUMBER_TYPE_UINT = 33
NUMBER_TYPE_UINT8 = 34
NUMBER_TYPE_UINT16 = 35
NUMBER_TYPE_UINT32 = 36
NUMBER_TYPE_UINT64 = 37
NUMBER_TYPE_FLOAT = 38
NUMBER_TYPE_FLOAT16 = 39
NUMBER_TYPE_FLOAT32 = 40
NUMBER_TYPE_FLOAT64 = 41
NUMBER_TYPE_END = 42

@classmethod
def get_data_type_name(cls, num):
"""
Get the name of data type by enum number.

Args:
num (int): Enum number.

Returns:
str, the name of data type.
"""
data_type = cls._value2member_map_.get(num)
return 'UNKNOWN' if data_type is None else data_type.name


class GeDataType(enum.IntEnum):
"""Definition of ge data type."""
DT_FLOAT = 0
DT_FLOAT16 = 1
DT_INT8 = 2
DT_INT16 = 6
DT_UINT16 = 7
DT_UINT8 = 4
DT_INT32 = 3
DT_INT64 = 9
DT_UINT32 = 8
DT_UINT64 = 10
DT_BOOL = 12
DT_DOUBLE = 11
DT_STRING = 13
DT_DUAL_SUB_INT8 = 14
DT_DUAL_SUB_UINT8 = 15
DT_COMPLEX64 = 16
DT_COMPLEX128 = 17
DT_QINT8 = 18
DT_QINT16 = 19
DT_QINT32 = 20
DT_QUINT8 = 21
DT_QUINT16 = 22
DT_RESOURCE = 23
DT_STRING_REF = 24
DT_DUAL = 25
DT_UNDEFINED = 26

@classmethod
def get_data_type_name(cls, num):
"""
Get the name of data type by enum number.

Args:
num (int): Enum number.

Returns:
str, the name of data type.
"""
data_type = cls._value2member_map_.get(num)
return 'UNKNOWN' if data_type is None else data_type.name


class GeFormat(enum.IntEnum):
"""Definition of ge format type."""
FORMAT_NCHW = 0
FORMAT_NHWC = 1
FORMAT_ND = 2
FORMAT_NC1HWC0 = 3
FORMAT_FRACTAL_Z = 4
FORMAT_NC1C0HWPAD = 5
FORMAT_NHWC1C0 = 6
FORMAT_FSR_NCHW = 7
FORMAT_FRACTAL_DECONV = 8
FORMAT_C1HWNC0 = 9
FORMAT_FRACTAL_DECONV_TRANSPOSE = 10
FORMAT_FRACTAL_DECONV_SP_STRIDE_TRANS = 11
FORMAT_NC1HWC0_C04 = 12
FORMAT_FRACTAL_Z_C04 = 13
FORMAT_CHWN = 14
FORMAT_FRACTAL_DECONV_SP_STRIDE8_TRANS = 15
FORMAT_HWCN = 16
FORMAT_NC1KHKWHWC0 = 17
FORMAT_BN_WEIGHT = 18
FORMAT_FILTER_HWCK = 19
FORMAT_HASHTABLE_LOOKUP_LOOKUPS = 20
FORMAT_HASHTABLE_LOOKUP_KEYS = 21
FORMAT_HASHTABLE_LOOKUP_VALUE = 22
FORMAT_HASHTABLE_LOOKUP_OUTPUT = 23
FORMAT_HASHTABLE_LOOKUP_HITS = 24
FORMAT_C1HWNCOC0 = 25
FORMAT_MD = 26
FORMAT_NDHWC = 27
FORMAT_FRACTAL_ZZ = 28
FORMAT_FRACTAL_NZ = 29
FORMAT_NCDHW = 30
FORMAT_DHWCN = 31
FORMAT_NDC1HWC0 = 32
FORMAT_FRACTAL_Z_3D = 33
FORMAT_CN = 34
FORMAT_NC = 35
FORMAT_DHWNC = 36
FORMAT_FRACTAL_Z_3D_TRANSPOSE = 37
FORMAT_RESERVED = 38
FORMAT_ALL = 39

@classmethod
def get_format_name(cls, num):
"""
Get the name of format type by enum number.

Args:
num (int): Enum number.

Returns:
str, the name of format type.
"""
format_type = cls._value2member_map_.get(num)
return 'UNKNOWN' if format_type is None else format_type.name


class FrameworkParser:
"""
Thr parser for parsing framework files.

Args:
profiling_id (str): The profiling ID.
device_id (str): The device ID.
output_path (str): The directory of the parsed file. Default: `./`.
"""
_raw_data_dir = '/var/log/npu/profiling'
_regex_framework = r'Framework\.host\.(?P<data_type>.+)\.(?P<device_id>\d).+'
_regex_framework_in_data = r'Framework\.host\.(?P<data_type>.+)\.' \
r'(?P<device_id>\d)\.(?P<profiling_id>[a-zA-Z0-9]+).+'
_col_names = [
'task_id', 'stream_id', 'block_dim', 'full_op_name', 'op_name',
'op_type', 'subgraph', 'op_info'
]
_graph_attr_name = [
'input_format', 'input_data_type', 'input_shape', 'output_format',
'output_data_type', 'output_shape'
]

# if the task id is less than the task id threshold, The combination of
# task id and Stream id represents one operator, else the task id represents
# one operator
_task_id_threshold = 25000

def __init__(self, profiling_id, device_id, output_path='./'):
self._profiling_path = self._get_raw_profiling_path(profiling_id)
self._backend_type = None
self._framework_path = {'graph': [], 'task': [], 'point': []}
self._search_file(profiling_id, device_id)
self._device_id = device_id
self._save_path = self._get_save_path(device_id, output_path)
self._task_id_full_op_name_dict = {}
self._task_cache = {}
self._point_info = {}
self._parse_task_files()
self._parse_point_files()

@property
def save_path(self):
"""
The property of save path.

Returns:
str, the save path.
"""
return self._save_path

@property
def point_info(self):
"""
The property of the framework point information.

Returns:
dict, the framework point information.
"""
return self._point_info

def to_task_id_full_op_name_dict(self):
"""
Get the task id and full operator name dict.

Returns:
dict, the task id and full operator name dict.
"""
return self._task_id_full_op_name_dict

def parse(self):
"""Parse the framework files."""
self._parse_graph_files_and_save(self._task_cache)
del self._task_cache

def check_op_name(self, op_name, is_prefix=True):
"""
Check whether the operator name exists.

Args:
op_name (str): The operator name or operator name prefix.
is_prefix (bool): `True` if the op_name is prefix, else `False`.
Default: True.

Returns:
bool, `True` if the operator name does exist in framework file, else
`False`.
"""
if not op_name:
raise ProfilerParamValueErrorException('The op_name should exist.')
for full_op_name in self._task_id_full_op_name_dict.values():
if full_op_name:
if is_prefix and full_op_name.startswith(op_name):
return True
if not is_prefix and op_name == full_op_name:
return True
return False

def _get_raw_profiling_path(self, profiling_id):
"""
Get raw profiling path.

Args:
profiling_id (str): The profiling ID.

Returns:
str, the raw profiling path.

Raises:
ProfilerPathErrorException: If the profiling path is invalid.
ProfilerDirNotFoundException: If the profiling dir is not found.
"""
profiling_path = os.path.join(self._raw_data_dir, profiling_id)
try:
profiling_path = validate_and_normalize_path(
profiling_path, 'profiler'
)
except ValidationError:
raise ProfilerPathErrorException('Profiling path is invalid.')
if not os.path.isdir(profiling_path):
raise ProfilerDirNotFoundException(profiling_path)
return profiling_path

def _search_file(self, profiling_id, device_id):
"""
Search all framework files in raw profiling path.

Args:
profiling_id (str): The profiling ID.
device_id (str): The device ID.

Raises:
ProfilerFileNotFoundException: If the framework files are not found.
"""
# first search in the JOB dir, and if not, search in the sub directory
# in the JOB
self._search_file_from_job_path(device_id, search_in_sub_path=False)
if self._backend_type is None:
self._search_file_from_job_path(device_id, search_in_sub_path=True)
self._search_file_from_data_path(profiling_id, device_id)

if self._backend_type is None:
raise ProfilerFileNotFoundException('Framework')
self._framework_path['graph'].sort()
self._framework_path['task'].sort()

def _search_file_from_job_path(self, device_id, search_in_sub_path=False):
"""
Search framework files from job path.

Args:
device_id (str): The device ID.
search_in_sub_path (bool): `True` if search file in profiling dir,
else search in profiling sub dir. Default: False.

Raises:
ProfilerRawFileException: If the framework file type is inconsistent.
ProfilerDeviceIdMismatchException: If the device id is mismatch
with framework in the raw dir.
"""
profiling_dir = os.path.join(self._profiling_path, 'data') \
if search_in_sub_path else self._profiling_path
if not os.path.isdir(profiling_dir):
return

files = os.listdir(profiling_dir)
for file in files:
pattern = re.search(self._regex_framework, file)
if not pattern or file.endswith('.done'):
continue
attrs = pattern.groupdict()

device_id_in_path = attrs.get('device_id')
if device_id_in_path != device_id:
raise ProfilerDeviceIdMismatchException()

data_type = attrs.get('data_type')
if data_type.startswith('vm.'):
if self._backend_type and self._backend_type != 'vm':
raise ProfilerRawFileException('Backend type is inconsistent.')
self._backend_type = 'vm'
data_type = data_type.split('.')[1]
else:
if self._backend_type and self._backend_type != 'ge':
raise ProfilerRawFileException('Backend type is inconsistent.')
self._backend_type = 'ge'
if data_type.startswith('graph_desc_info'):
self._framework_path['graph'].append(
os.path.join(profiling_dir, file)
)
elif data_type.startswith('task_desc_info'):
self._framework_path['task'].append(
os.path.join(profiling_dir, file)
)
elif data_type.startswith('point'):
self._framework_path['point'].append(
os.path.join(profiling_dir, file)
)

def _search_file_from_data_path(self, profiling_id, device_id):
"""
Search framework files from data path.

Args:
profiling_id (str): The profiling ID.
device_id (str): The device ID.

Raises:
ProfilerRawFileException: If the framework file type is inconsistent.
ProfilerDeviceIdMismatchException: If the device id is mismatch
with framework in the raw dir.
"""
profiling_data_path = os.path.join(
self._raw_data_dir, 'container', device_id, 'data'
)
if not os.path.isdir(profiling_data_path):
return

files = os.listdir(profiling_data_path)
for file in files:
pattern = re.search(self._regex_framework_in_data, file)
if not pattern or file.endswith('.done') or file.endswith('.zip'):
continue
attrs = pattern.groupdict()

profiling_id_in_path = attrs.get('profiling_id')
if profiling_id_in_path != profiling_id:
continue

device_id_in_path = attrs.get('device_id')
if device_id_in_path != device_id:
raise ProfilerDeviceIdMismatchException()

data_type = attrs.get('data_type')
if data_type.startswith('vm.'):
if self._backend_type and self._backend_type != 'vm':
raise ProfilerRawFileException('Backend type is inconsistent.')
self._backend_type = 'vm'
data_type = data_type.split('.')[1]
else:
if self._backend_type and self._backend_type != 'ge':
raise ProfilerRawFileException('Backend type is inconsistent.')
self._backend_type = 'ge'
if data_type.startswith('graph_desc_info'):
self._framework_path['graph'].append(
os.path.join(profiling_data_path, file)
)
elif data_type.startswith('task_desc_info'):
self._framework_path['task'].append(
os.path.join(profiling_data_path, file)
)
elif data_type.startswith('point'):
self._framework_path['point'].append(
os.path.join(profiling_data_path, file)
)

def _get_save_path(self, device_id, output_path):
"""
Get the save path.

Args:
device_id (str): The device ID.
output_path (str): The output dir.

Returns:
str, the save path.

Raises:
ProfilerPathErrorException: If the output path is invalid.
ProfilerDirNotFoundException: If the output dir is not found.
"""
try:
output_dir = validate_and_normalize_path(output_path, 'profiler')
except ValidationError:
raise ProfilerPathErrorException('Output path is invalid.')
if not os.path.isdir(output_dir):
raise ProfilerDirNotFoundException(output_dir)
return os.path.join(
output_dir, '_'.join(['framework', 'raw', device_id]) + '.csv'
)

def _parse_task_files(self):
"""Parse the framework task files."""
for path in self._framework_path['task']:
with open(path, 'r') as file:
for task_info in file:
infos = task_info.strip('\n').split(' ')
# key is op name, values is task id, stream id, block_dim
self._task_cache[infos[0]] = [infos[2], infos[3], infos[1]]

# if the task id is less than the task id threshold, the
# stream id and task id correspond to an operator
task_id = infos[2]
if int(task_id) < self._task_id_threshold:
task_id = '_'.join([infos[3], task_id])
self._task_id_full_op_name_dict[task_id] = infos[0]

def _parse_graph_files_and_save(self, task_cache):
"""
Parse the framework graph files and save the framework information.

Args:
task_cache (dict): The task information cache.
"""
with open(self._save_path, 'w') as save_file:
csv_writer = csv.writer(save_file)
csv_writer.writerow(self._col_names)
for path in self._framework_path['graph']:
with open(path, 'r') as graph_file:
for graph_info in graph_file:
result = self._parse_one_row_graph_info(graph_info)
task_info = task_cache.get(result[0])
if task_info:
task_info.extend(result)
csv_writer.writerow(task_info)
del task_cache[result[0]]
else:
save_info = [None, None, None]
save_info.extend(result)
csv_writer.writerow(save_info)

none_list = [None, None, None, None]
for key, value in task_cache.items():
value.append(key)
value.extend(none_list)
csv_writer.writerow(value)

def _parse_one_row_graph_info(self, row_info):
"""
Parse the graph information in one row.

Args:
row_info (str): One row graph information.

Returns:
list[str], the parsed graph information.
"""
full_op_name = None
op_name = None
subgraph_name = None
op_type = None
op_info = dict()
cur_op_info_key = None

infos = row_info.strip('\n').split(' ')
for info in infos:
attr_name, attr_value = info.split(':', 1)
if attr_name == 'op_name':
full_op_name = attr_value
subgraph_name = self._get_subgraph_name(full_op_name)
op_name = self._get_op_name(full_op_name, subgraph_name)
elif attr_name == 'op_type':
op_type = attr_value
elif attr_name in ['input_id', 'output_id']:
cur_op_info_key = '{}_{}'.format(
attr_name.split('_')[0], attr_value
)
op_info[cur_op_info_key] = dict()
elif attr_name in self._graph_attr_name:
op_attr = attr_name.split('_', 1)[1]
if op_attr == 'shape':
attr_value = attr_value.strip('"')
if self._backend_type == 'vm':
if op_attr == 'data_type':
attr_value = VmDataType.get_data_type_name(
int(attr_value)
)
else:
if op_attr == 'data_type':
attr_value = GeDataType.get_data_type_name(
int(attr_value)
)
elif op_attr == 'format':
attr_value = GeFormat.get_format_name(int(attr_value))

op_info[cur_op_info_key][op_attr] = attr_value

# the returned list holds full_op_name, op_name, op_type, subgraph_name and op_info
return [full_op_name, op_name, op_type, subgraph_name,
json.dumps(op_info)]

def _get_subgraph_name(self, full_op_name):
"""
Get subgraph name.

Args:
full_op_name (str): The full operator name.

Returns:
str, the subgraph name.
"""
subgraph_name = full_op_name.split('/', 1)[0]
if subgraph_name in ['Default', 'Gradients']:
return subgraph_name
return None

def _get_op_name(self, full_op_name, subgraph_name):
"""
Get operator name.

Args:
full_op_name (str): The full operator name.
subgraph_name (str): The subgraph name.

Returns:
str, the operator name.
"""
if subgraph_name is None:
return full_op_name

if self._backend_type == 'vm':
return full_op_name.split('/')[-1]

strs = full_op_name.split(subgraph_name + '/')
op_name = None
for name_str in strs:
if not name_str:
continue
if op_name is None:
op_name = name_str.split('/')[-1]
else:
op_name = '+'.join([op_name, name_str.split('/')[-1]])
return op_name

def _parse_point_files(self):
"""Parse the framework point files."""
for path in self._framework_path['point']:
with open(path, 'r') as file:
for point_info in file:
infos = point_info.strip('\n').split(' ')
self._point_info[int(infos[0])] = infos[1]

+ 0 - 109 mindinsight/profiler/parser/hwts_log_parser.py

@@ -1,109 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""The parser for hwts log file."""
import os
import struct
from mindinsight.profiler.common._utils import fwrite_format, get_file_join_name
from mindinsight.profiler.common.log import logger


class HWTSLogParser:
"""
The Parser for hwts log files.

Args:
input_path (str): The profiling job path. Such as: '/var/log/npu/profiling/JOBAIFGJEJFEDCBAEADIFJAAAAAAAAAA'.
output_filename (str): The output data path and name. Such as: './output_format_data_hwts_0.txt'.
"""

_source_file_target = 'hwts.log.data.45.dev.profiler_default_tag'
_dst_file_title = 'title:45 HWTS data'
_dst_file_column_title = 'Type cnt Core_ID Block_ID Task_ID Cycle_counter Stream_ID'

def __init__(self, input_path, output_filename):
self._input_path = input_path
self._output_filename = output_filename
self._source_file_name = self._get_source_file()

def _get_source_file(self):
"""Get hwts log file name, which was created by ada service."""

file_name = get_file_join_name(self._input_path, self._source_file_target)
if not file_name:
data_path = os.path.join(self._input_path, "data")
file_name = get_file_join_name(data_path, self._source_file_target)
if not file_name:
msg = ("Fail to find hwts log file, under profiling directory")
raise RuntimeError(msg)

return file_name

def execute(self):
"""
Execute the parser, get result data, and write it to the output file.

Returns:
bool, whether the hwts log was analysed successfully.
"""

content_format = ['QIIIIIIIIIIII', 'QIIQIIIIIIII', 'IIIIQIIIIIIII']
log_type = ['Start of task', 'End of task', 'Start of block', 'End of block', 'Block PMU']

result_data = ""

with open(self._source_file_name, 'rb') as hwts_data:
while True:
line = hwts_data.read(64)
if line:
if not line.strip():
continue
else:
break
byte_first_four = struct.unpack('BBHHH', line[0:8])
byte_first = bin(byte_first_four[0]).replace('0b', '').zfill(8)
ms_type = byte_first[-3:]
is_warn_res0_ov = byte_first[4]
cnt = int(byte_first[0:4], 2)
core_id = byte_first_four[1]
blk_id, task_id = byte_first_four[3], byte_first_four[4]
if ms_type in ['000', '001', '010']: # log type 0,1,2
result = struct.unpack(content_format[0], line[8:])
syscnt = result[0]
stream_id = result[1]
elif ms_type == '011': # log type 3
result = struct.unpack(content_format[1], line[8:])
syscnt = result[0]
stream_id = result[1]
elif ms_type == '100': # log type 4
result = struct.unpack(content_format[2], line[8:])
stream_id = result[2]
if is_warn_res0_ov == '0':
syscnt = result[4]
else:
syscnt = None
else:
logger.info("Profiling: invalid hwts log record type %s", ms_type)
continue

if int(task_id) < 25000:
task_id = str(stream_id) + "_" + str(task_id)
result_data += ("%-14s %-4s %-8s %-9s %-8s %-15s %s\n" %(log_type[int(ms_type, 2)], cnt, core_id,
blk_id, task_id, syscnt, stream_id))

fwrite_format(self._output_filename, data_source=self._dst_file_title, is_start=True)
fwrite_format(self._output_filename, data_source=self._dst_file_column_title)
fwrite_format(self._output_filename, data_source=result_data)

return True

+ 0 - 93 mindinsight/profiler/parser/minddata_parser.py

@@ -1,93 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Minddata aicpu parser."""
import os
from tabulate import tabulate
from mindinsight.profiler.common._utils import get_file_join_name, fwrite_format
from mindinsight.profiler.common.log import logger


class MinddataParser:
"""Minddata Aicpu Parser."""

@staticmethod
def parse_minddata_aicpu_data(minddata_aicpu_source_path):
"""
Parse minddata get_next info which contains queue size and execute time.
Args:
minddata_aicpu_source_path (str): the source file path.
Returns:
list[Union[str, float]], the converted data.
"""
result = list()
try:
with open(minddata_aicpu_source_path) as source_data_file:
source_data = source_data_file.read()
step_data = source_data.split("\x00")
for one_step in step_data:
if one_step:
node_info = one_step.split(", ")
node_name, node_start, node_end, queue_size = "", 0, 0, 0
if node_info:
node_name = node_info[0].replace("Node:", "")
if len(node_info) > 2:
node_start = node_info[1].replace("Run start:", "")
if node_start.isdigit():
node_start = int(node_start)
node_end = node_info[2].replace("Run end:", "")
if node_end.isdigit():
node_end = int(node_end)
if len(node_info) > 3:
queue_size = node_info[3].replace("queue size:", "")
if queue_size.isdigit():
queue_size = int(queue_size)
one_step_list = [node_name, node_start, node_end, queue_size]
result.append(one_step_list)
except OSError:
logger.error("Open get_next profiling file error.")
return result

@staticmethod
def execute(source_path, output_path, device_id):
"""
Execute the parser.
Args:
source_path (str): the source file path.
output_path (str): the output file path.
device_id (str): the device id.
"""
col_names = ["node_name", "start_time", "end_time", "queue_size"]
minddata_aicpu_source_path = get_file_join_name(
input_path=source_path, file_name='DATA_PREPROCESS.dev.AICPUMI')
if not minddata_aicpu_source_path:
minddata_aicpu_source_path = get_file_join_name(
input_path=os.path.join(source_path, "data"), file_name='DATA_PREPROCESS.dev.AICPUMI')
if not minddata_aicpu_source_path:
return
minddata_aicpu_output_path = os.path.join(output_path, "minddata_aicpu_" + device_id + ".txt")
minddata_aicpu_data = MinddataParser.parse_minddata_aicpu_data(minddata_aicpu_source_path)
if minddata_aicpu_data:
fwrite_format(
minddata_aicpu_output_path,
tabulate(minddata_aicpu_data, col_names, tablefmt='simple'),
is_start=True
)

+ 0 - 289 mindinsight/profiler/parser/minddata_pipeline_parser.py

@@ -1,289 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Thr parser for parsing minddata pipeline files."""
import csv
import json
import os
from queue import Queue

from marshmallow import ValidationError

from mindinsight.profiler.common.exceptions.exceptions import \
ProfilerPathErrorException, ProfilerFileNotFoundException, \
ProfilerDirNotFoundException, ProfilerRawFileException
from mindinsight.profiler.common.log import logger
from mindinsight.profiler.common.validator.validate_path import \
validate_and_normalize_path


class MinddataPipelineParser:
"""
The parser for parsing minddata pipeline files.

Args:
source_dir (str): The minddata pipeline source dir.
device_id (str): The device ID.
output_path (str): The directory of the parsed file. Default: `./`.

Raises:
ProfilerPathErrorException: If the minddata pipeline file path or
the output path is invalid.
ProfilerFileNotFoundException: If the minddata pipeline file or
the output dir does not exist.
"""
_raw_pipeline_file_name = 'pipeline_profiling_{}.json'
_parsed_pipeline_file_name = 'minddata_pipeline_raw_{}.csv'
_col_names = [
'op_id', 'op_type', 'num_workers', 'output_queue_size',
'output_queue_average_size', 'output_queue_length',
'output_queue_usage_rate', 'sample_interval', 'parent_id', 'children_id'
]

def __init__(self, source_dir, device_id, output_path='./'):
self._device_id = device_id
self._pipeline_path = self._get_pipeline_path(source_dir)
self._save_path = self._get_save_path(output_path)

@property
def save_path(self):
"""
The property of save path.

Returns:
str, the save path.
"""
return self._save_path

def parse(self):
"""
Parse the minddata pipeline files.

Raises:
ProfilerRawFileException: If parsing the raw file of
minddata pipeline fails or the file is empty.
"""
with open(self._pipeline_path, 'r') as file:
try:
pipeline_info = json.load(file)
except (json.JSONDecodeError, TypeError) as err:
logger.exception(err)
raise ProfilerRawFileException(
'Fail to parse minddata pipeline file.'
)
if not pipeline_info:
logger.warning('The minddata pipeline file is empty.')
raise ProfilerRawFileException(
'The minddata pipeline file is empty.'
)

self._parse_and_save(pipeline_info)

def _get_pipeline_path(self, source_dir):
"""
Get the minddata pipeline file path.

Args:
source_dir (str): The minddata pipeline source dir.

Returns:
str, the minddata pipeline file path.
"""
pipeline_path = os.path.join(
source_dir,
self._raw_pipeline_file_name.format(self._device_id)
)

try:
pipeline_path = validate_and_normalize_path(pipeline_path, 'profiler')
except ValidationError:
logger.warning('Minddata pipeline file is invalid.')
raise ProfilerPathErrorException('Minddata pipeline file is invalid.')
if not os.path.isfile(pipeline_path):
logger.warning(
'The minddata pipeline file <%s> not found.', pipeline_path
)
raise ProfilerFileNotFoundException(pipeline_path)

return pipeline_path

def _get_save_path(self, output_path):
"""
Get the save path.

Args:
output_path (str): The output dir.

Returns:
str, the save path.
"""
try:
output_dir = validate_and_normalize_path(output_path, 'profiler')
except ValidationError:
logger.warning('Output path is invalid.')
raise ProfilerPathErrorException('Output path is invalid.')
if not os.path.isdir(output_dir):
logger.warning('The output dir <%s> not found.', output_dir)
raise ProfilerDirNotFoundException(output_dir)
return os.path.join(
output_dir, self._parsed_pipeline_file_name.format(self._device_id)
)

def _parse_and_save(self, pipeline_info):
"""
Parse and save the parsed minddata pipeline file.

Args:
pipeline_info (dict): The pipeline info read from the raw file of
the minddata pipeline.

Raises:
ProfilerRawFileException: If the format of minddata pipeline raw
file is wrong.
"""
sample_interval = pipeline_info.get('sampling_interval')
op_info = pipeline_info.get('op_info')
if sample_interval is None or not op_info:
raise ProfilerRawFileException(
'The format of minddata pipeline raw file is wrong.'
)

op_id_info_cache = {}
for item in op_info:
op_id_info_cache[item.get('op_id')] = item

with open(self._save_path, 'w') as save_file:
csv_writer = csv.writer(save_file)
csv_writer.writerow(self._col_names)
self._parse_and_save_op_info(
csv_writer, op_id_info_cache, sample_interval
)

def _parse_and_save_op_info(self, csv_writer, op_id_info_cache,
sample_interval):
"""
Parse and save the minddata pipeline operator information.

Args:
csv_writer (csv.writer): The csv writer.
op_id_info_cache (dict): The operator id and information cache.
sample_interval (int): The sample interval.

Raises:
ProfilerRawFileException: If the operator whose id is 0 does not exist.
"""
queue = Queue()
root_node = op_id_info_cache.get(0)
if not root_node:
raise ProfilerRawFileException(
'The format of minddata pipeline raw file is wrong, '
'the operator whose id is 0 does not exist.'
)
root_node['parent_id'] = None
queue.put_nowait(root_node)

while not queue.empty():
node = queue.get_nowait()
self._update_child_node(node, op_id_info_cache)
csv_writer.writerow(self._get_op_info(node, sample_interval))

op_id = node.get('op_id')
children_ids = node.get('children')
if not children_ids:
continue
for child_op_id in children_ids:
sub_node = op_id_info_cache.get(child_op_id)
sub_node['parent_id'] = op_id
queue.put_nowait(sub_node)

def _update_child_node(self, node, op_id_info_cache):
"""
Updates the child node information of the operator.

Args:
node (dict): The node represents an operator.
op_id_info_cache (dict): The operator id and information cache.
"""
child_op_ids = node.get('children')
if not child_op_ids:
return

queue = Queue()
self._cp_list_item_to_queue(child_op_ids, queue)

new_child_op_ids = []
while not queue.empty():
child_op_id = queue.get_nowait()
child_node = op_id_info_cache.get(child_op_id)
if child_node is None:
continue
metrics = child_node.get('metrics')
if not metrics or not metrics.get('output_queue'):
op_ids = child_node.get('children')
if op_ids:
self._cp_list_item_to_queue(op_ids, queue)
else:
new_child_op_ids.append(child_op_id)

node['children'] = new_child_op_ids

def _get_op_info(self, op_node, sample_interval):
"""
Get the operator information.

Args:
op_node (dict): The node represents an operator.
sample_interval (int): The sample interval.

Returns:
list[str, int, float], the operator information.
"""
queue_size = None
queue_average_size = None
queue_length = None
queue_usage_rate = None
metrics = op_node.get('metrics')
if metrics:
output_queue = metrics.get('output_queue')
if output_queue:
queue_size = output_queue.get('size')
queue_average_size = sum(queue_size) / len(queue_size)
queue_length = output_queue.get('length')
queue_usage_rate = queue_average_size / queue_length

children_id = op_node.get('children')
op_info = [
op_node.get('op_id'),
op_node.get('op_type'),
op_node.get('num_workers'),
queue_size,
queue_average_size,
queue_length,
queue_usage_rate,
sample_interval,
op_node.get('parent_id'),
children_id if children_id else None
]
return op_info

def _cp_list_item_to_queue(self, inner_list, queue):
"""
Copy the contents of a list to a queue.

Args:
inner_list (list): The list.
queue (Queue): The target queue.
"""
for item in inner_list:
queue.put_nowait(item)

+ 0 - 247 mindinsight/profiler/parser/optime_parser.py

@@ -1,247 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Op compute time files parser."""
import os
from mindinsight.profiler.common._utils import fwrite_format
from mindinsight.profiler.common.exceptions.exceptions import ProfilerFileNotFoundException, \
ProfilerIOException
from mindinsight.profiler.common.log import logger
from mindinsight.profiler.common.validator.validate_path import validate_and_normalize_path
from mindinsight.profiler.parser.container import HWTSContainer

TIMELINE_FILE_COLUMN_TITLE = 'op_name, stream_id, start_time(ms), duration(ms)'

class OPComputeTimeParser:
"""
Join hwts info and framework info, get op time info, and output to the result file.

Args:
hwts_output_file (str): The file path of hwts_output_file. Such as: './output_format_data_hwts_0.txt'.
output_filename (str): The output data file path and name. Such as: './output_op_compute_time_0.txt'.
op_task_info (dict): The task and op relation info. The format: {task_id: [op_name, stream_id, block_dim]}.
"""

_dst_file_title = 'title:op compute time'
_dst_file_column_title = 'op_name compute_time(ms) stream_id'
_dst_file_column_title += '\n------------ --------------- ---------'

def __init__(self, hwts_output_file, output_filename, op_task_info,
output_path, device_id):
hwts_output_file = validate_and_normalize_path(
hwts_output_file, raise_key='Invalid hwts output file path.'
)
self._hwts_output_file = hwts_output_file
self._output_filename = output_filename
self._op_task_info = op_task_info
self._output_path = output_path
self._device_id = device_id
self._min_cycle_counter = float("inf")

def _get_op_task_id_map(self):
"""
Read hwts data file, get the task time info.

Returns:
list, all hwts task time info.
"""

op_map_result = []
hwts_list = []

if not os.path.exists(self._hwts_output_file):
logger.error('The hwts output file does not exist.')
raise ProfilerFileNotFoundException('hwts output file')

with open(self._hwts_output_file, 'r') as data_file:
lines = data_file.readlines()
for line in lines:
if line.startswith("Start of task") or line.startswith("End of task"):
line_split = line.split()
container = HWTSContainer(line_split)
hwts_list.append(container)

# hwts op map by taskId
for hwts in hwts_list:
if hwts.task_id in self._op_task_info.keys():
hwts.op_name = self._op_task_info[hwts.task_id]
op_map_result.append(hwts)

return op_map_result

def execute(self):
"""Execute the parser, compute all op, get op time, and write it to the output file."""
# Calculate the execution time of operators,
# and update the minimum cycle counter.
tmp_result_data = self._calculate_op_execution_time()

# Convert time units from nanoseconds to milliseconds.
# The unit of the cycle counter is 10 nanoseconds.
op_name_time_dict = {}
op_name_stream_dict = {}
op_name_count_dict = {}
op_name_task_dict = {}
op_name_start_time = {}
self._convert_op_time_unit(
tmp_result_data, op_name_time_dict, op_name_stream_dict,
op_name_count_dict, op_name_task_dict, op_name_start_time
)

result_data = ""
total_time = 0
for op_name, time in op_name_time_dict.items():
if op_name in op_name_stream_dict.keys():
stream_id = op_name_stream_dict[op_name]
avg_time = time / op_name_count_dict[op_name]
total_time += avg_time
result_data += ("%s %s %s\n" %(op_name, str(avg_time), stream_id))
result_data += ("total op %s 0" %(str(total_time)))

timeline_data = []
for op_name, time in op_name_time_dict.items():
if op_name in op_name_stream_dict.keys():
stream_id = op_name_stream_dict[op_name]
start_time_list = op_name_start_time.get(op_name)
for (start_time, duration) in start_time_list:
timeline_data.append([op_name, stream_id, start_time, duration])

# Write the metadata of operators into the file,
# including operator name, average time, and stream id.
self._write_op_time_into_file(result_data)
# Write the timeline data into file,
# including operator name, stream id, start time, and duration.
self._write_timeline_data_into_file(timeline_data)

def _write_op_time_into_file(self, result_data):
"""
Write the metadata of operators into the file, including
op name, average time, and stream id.

Args:
result_data (str): The metadata to be written into the file.
'op_name_1', 'avg_time_1', 'stream_id_1',
'op_name_2', 'avg_time_2', 'stream_id_2',
...
"""

fwrite_format(self._output_filename, data_source=self._dst_file_title, is_start=True)
fwrite_format(self._output_filename, data_source=self._dst_file_column_title)
fwrite_format(self._output_filename, data_source=result_data)

def _write_timeline_data_into_file(self, timeline_data):
"""
Write the timeline information into the file, including
operator name, stream id, start time and duration.

Args:
timeline_data (list): The metadata to be written into the file.
[
['op_name_1', 'stream_id_1', 'start_time_1', 'duration_1'],
['op_name_2', 'stream_id_2', 'start_time_2', 'duration_2'],
[...]
]
"""
# sort by start time
timeline_data.sort(key=lambda x: float(x[2]))
filename = 'output_timeline_data_{}.txt'.format(self._device_id)
file_path = os.path.join(self._output_path, filename)
file_path = validate_and_normalize_path(file_path, raise_key='Invalid file path of timeline data.')

# write to file
try:
with open(file_path, 'w') as f_obj:
f_obj.write(TIMELINE_FILE_COLUMN_TITLE + '\n')
for timeline in timeline_data:
timeline = [str(item) for item in timeline]
f_obj.write(','.join(timeline) + '\n')
except (IOError, OSError) as err:
logger.error('Error occurred when writing intermediate timeline file: %s', err)
raise ProfilerIOException

def _calculate_op_execution_time(self):
"""
Calculate the execution time of each operator.

Returns:
list, including the intermediate data of op execution time.
"""
tmp_result_data = []
op_map_list = self._get_op_task_id_map()

cur_index = 0
length = len(op_map_list)
min_cycle_counter = float("inf")
while cur_index < length:
if cur_index + 1 == length:
break

op_start = op_map_list[cur_index]
op_end = op_map_list[cur_index + 1]
if op_start.status == "Start" and op_end.status == "End" \
and op_start.op_name == op_end.op_name:
op_start.duration = op_end.cycle_counter - op_start.cycle_counter
tmp_result_data.append(op_start)
cur_index += 2
if not op_start.op_name.startswith("assign"):
min_cycle_counter = min(min_cycle_counter, op_start.cycle_counter)
else:
cur_index += 1

# Update the value of minimum cycle counter.
self._min_cycle_counter = min_cycle_counter / 1e5 # Convert the time unit from 10ns to 1ms

return tmp_result_data

def _convert_op_time_unit(self, op_data_list, op_name_time_dict, op_name_stream_dict,
op_name_count_dict, op_name_task_dict, op_name_start_time):
"""
Calculate the execution time of operator and convert it into millisecond.

Args:
op_data_list (list): The list of operator metadata.
op_name_time_dict (dict): The mapping relation of operator name and its execution time.
op_name_stream_dict (dict): The mapping relation of operator name and its stream id.
op_name_count_dict (dict): The mapping relation of operator name and its count.
op_name_task_dict (dict): The mapping relation of operator name and its task id.
op_name_start_time (dict): The mapping relation of operator name and its start time.
"""
factor = 1e5
for item in op_data_list:
op_name = item.op_name
# Unit conversion: converting the cycle counter into ms.
op_start_time_str = str(item.cycle_counter / factor)
op_duration = item.duration / factor
op_duration_str = str(item.duration / factor)
if op_name in op_name_time_dict.keys():
op_name_time_dict[op_name] += op_duration
if item.task_id == op_name_task_dict[op_name]:
op_name_count_dict[op_name] += 1
op_name_start_time[op_name].append(
(op_start_time_str, op_duration_str)
)

else:
op_name_time_dict[op_name] = op_duration
op_name_stream_dict[op_name] = item.stream_id
op_name_task_dict[op_name] = item.task_id
op_name_count_dict[op_name] = 1
op_name_start_time[op_name] = []
op_name_start_time[op_name].append(
(op_start_time_str, op_duration_str)
)

@property
def min_cycle_counter(self):
"""Get minimum cycle counter."""
return self._min_cycle_counter

+ 0 - 312 mindinsight/profiler/parser/step_trace_parser.py

@@ -1,312 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""The parser for step trace data."""
import csv
import json
import os
import stat
import struct
from collections import namedtuple
from decimal import Decimal

from mindinsight.profiler.common.exceptions.exceptions import ProfilerPathErrorException, \
JobIdMismatchException, ProfilerIOException
from mindinsight.profiler.common.log import logger as log
from mindinsight.profiler.common.util import get_summary_for_step_trace

StepTraceStruct = namedtuple(
'TrainingTraceStruct', ['tag_id', 'task_id', 'stream_id', 'sys_count']
)


class StepTraceParser:
"""
The parser for step trace data.

Args:
input_dir (str): The directory that contains original step trace data.
output_file_path (str): The output file path.
job_id (int): The job id used to define the start of new step. Default: 0.
skip_first_step (bool): Whether skip the first step or not.
"""
_event_size = 20
_fp_tag = 1
_bp_tag = 2

def __init__(self, input_dir, output_file_path, job_id=0, skip_first_step=False):
self._input_dir = input_dir
self._output_path = output_file_path
self._job_id = job_id
self._skip_first_step = skip_first_step
self._result = []
self._header = []
self._step_num = 0

@property
def output_file(self):
"""The property of step trace header."""
file_name = self._output_path.rsplit('/', 2)
return file_name[-1] if len(file_name) == 3 else ''

def show(self):
"""The property of step trace info."""
summary_info = {}
if self._result:
summary_info = get_summary_for_step_trace(self._result[-1], self._header)
summary_info['total_steps'] = len(self._result) - 1
print('\nStep trace summary info (unit: syscnt):')
print(summary_info)
print('\nThe step trace parse result is saved under ${summary_dir}/profiler/%s'
% self.output_file)

def parse_and_save(self):
"""Parse step trace files and save the result."""
try:
source_files = self._get_step_trace_files()
self._parse(source_files)
self._save()
except IOError as err:
log.exception(err)
raise ProfilerIOException()
else:
log.info("Finish to save intermediate result for step trace file.")

def record_point_info(self, point_info, output_path):
"""
Record point info into json.

Args:
point_info (dict): The point info about tag id and the related op name.
output_path (str): The output path for saving point info.

Returns:
dict, parsed point info.
"""
points = {
'fp_start': point_info.get(self._fp_tag, ''),
'bp_end': point_info.get(self._bp_tag, '')
}
try:
with open(output_path, 'w') as json_file:
json.dump(points, json_file)
os.chmod(output_path, stat.S_IREAD)
except (IOError, OSError) as err:
log.warning('Failed to save point info. %s', err)
raise ProfilerIOException
return points

def _get_step_trace_files(self):
"""Get step trace files."""
# step trace files may be under $profiler_dir or $profiler_dir/data
profiler_dir = self._input_dir
step_trace_files = self._search_file(profiler_dir)
if not step_trace_files:
# try to find step trace files under $profiler_dir/data
profiler_dir = os.path.join(profiler_dir, 'data')
step_trace_files = self._search_file(profiler_dir)
if not step_trace_files:
raise ProfilerPathErrorException('Training trace file does not exist.')

return step_trace_files

@staticmethod
def _search_file(input_dir):
"""Search step trace file under specific input directory."""
# validate input_dir
if not os.path.isdir(input_dir):
raise ProfilerPathErrorException(
'{} does not exist or is not a dir'.format(input_dir)
)
# get step trace files
files = os.listdir(input_dir)
step_trace_files = list(
filter(
lambda file: file.startswith('training_trace') and not file.endswith('.done'),
files
)
)
# validate result
if len(step_trace_files) > 1:
# the format of file name is like
# `training_trace.46.dev.profiler_default_tag.$id.slice_$number`
# use the $number as the sorted key
try:
step_trace_files.sort(key=lambda path: int(path.rsplit('_', 1)[-1]))
except ValueError as err:
log.warning("Unable to parse file names: %s. %s", step_trace_files, err)
step_trace_files = []

file_paths = [os.path.join(input_dir, file) for file in step_trace_files]
log.info("Find %d step trace files.", len(file_paths))
return file_paths

def _parse(self, source_files):
"""Parse source step trace files."""
log.info("Start to parse step trace file.")
event_info = {}
for source_file in source_files:
with open(source_file, 'rb') as handler:
content = handler.read()
for step_trace in self._get_next_step_trace(content, event_info):
if self._skip_first_step:
self._skip_first_step = False
continue
self._record_trace_event(step_trace)
self._record_average_info()
log.info("Finish to parse step trace file.")

def _get_next_step_trace(self, content, event_info):
"""
Get next step trace info.

Args:
content (bytes): The input step trace info.
event_info (dict): The event info.

Returns:
Generator, return the step trace one by one.
"""
for pos in range(0, len(content), self._event_size):
next_event = self._get_trace_struct(content[pos:pos + self._event_size])
self._construct_event_info(next_event, event_info)
if event_info.get('end'):
yield event_info

def _get_trace_struct(self, bin_info):
"""Translate event info to StepTraceStruct."""
if len(bin_info) == self._event_size:
parsed_info = struct.unpack('=QHHQ', bin_info)
return StepTraceStruct(*parsed_info)
return None

def _construct_event_info(self, next_event, event_info):
"""Construct event info according to next_event."""
min_job_id = 255
step_flag = lambda tag: tag > min_job_id or tag == 0
end_flag = lambda tag: tag == min_job_id
fp_flag = lambda tag: tag == self._fp_tag
bp_flag = lambda tag: tag == self._bp_tag

def _on_step_event():
"""Handle step event."""
self._validate_tag_id(tag_id)
start_time = event_info.get('end', '-')
event_info.clear()
event_info['start'] = start_time
event_info['reduce'] = {}

def _on_reduce_event():
"""Handle reduce event."""
stream_id = next_event.stream_id
if event_info['reduce'].get(stream_id):
event_info['reduce'][stream_id].append(sys_count)
else:
event_info['reduce'][stream_id] = [sys_count]

tag_id = next_event.tag_id
sys_count = next_event.sys_count
if end_flag(tag_id):
event_info['end'] = sys_count
elif step_flag(tag_id):
_on_step_event()
elif fp_flag(tag_id):
event_info['fp'] = sys_count
elif bp_flag(tag_id):
event_info['bp'] = sys_count
else:
_on_reduce_event()

def _validate_tag_id(self, job_id):
"""Check the job id in source step trace file is same os user set."""
if not self._job_id:
self._job_id = job_id
elif self._job_id != job_id:
raise JobIdMismatchException()

def _record_trace_event(self, step_trace):
"""Record trace event."""
self._step_num += 1
start_time = step_trace.get('start')
end_time = step_trace.get('end')
fp_time = step_trace.get('fp')
bp_time = step_trace.get('bp')
if not (start_time and end_time and fp_time and bp_time):
log.warning("The step %d is missing basic time.", self._step_num)
return
if start_time == '-':
start_time = fp_time
row_data = {
'step_num': self._step_num,
'start_point': start_time,
'end_point': end_time,
'total': end_time - start_time,
'fp_point': fp_time,
'bp_point': bp_time,
'iteration_interval': fp_time - start_time,
'fp_and_bp': bp_time - fp_time,
'tail': end_time - bp_time
}
# update reduce info
self._update_reduce_info(step_trace, row_data)
# save the row data
if not self._header:
self._header = list(row_data.keys())
row_data_list = [row_data.get(header_name, 0) for header_name in self._header]
self._result.append(row_data_list)

@staticmethod
def _update_reduce_info(step_trace, row_data):
"""Extract reduce info."""
reduce_time = step_trace.get('reduce', {})
for stream_id, time_points in reduce_time.items():
time_point_num = len(time_points)
if time_point_num % 2:
log.warning("Stream %d has %d reduce time points.", stream_id, time_point_num)
continue
for index, point_id in enumerate(range(0, time_point_num, 2)):
field_name = f'stream_{stream_id}_parallel_{index}'
row_data[field_name + '_start_point'] = time_points[point_id]
row_data[field_name + '_end_point'] = time_points[point_id + 1]
row_data[field_name] = time_points[point_id + 1] - time_points[point_id]

def _record_average_info(self):
"""Calculate average info."""
result_size = len(self._result)
# calculate average data for each column in result data
average_data = [0] * len(self._header)
if result_size >= 2:
for row_info in self._result[1:]:
average_data = [
Decimal(i) + Decimal(j) for i, j in zip(row_info, average_data)
]
average_data = [
round((item / (result_size - 1))) for item in average_data
]
# change step num info in average_data to None
step_num_index = self._header.index('step_num')
average_data[step_num_index] = '-'
self._result.append(average_data)
log.info("Finish add average info for step trace.")

def _save(self):
log.info("Start to save step trace file.")
if not self._header:
return
with open(self._output_path, 'w') as file_handle:
csv_writer = csv.writer(file_handle)
csv_writer.writerow(self._header)
for row_data in self._result:
csv_writer.writerow(row_data)
os.chmod(self._output_path, stat.S_IREAD)

+ 0 - 460 mindinsight/profiler/profiling.py

@@ -1,460 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Profiling api file."""
import os
import time

from marshmallow import ValidationError
from tabulate import tabulate

from mindinsight.profiler.analyser.analyser_factory import AnalyserFactory
from mindinsight.profiler.analyser.integrator import Integrator
from mindinsight.profiler.common._utils import get_file_names, fwrite_format
from mindinsight.profiler.common.exceptions.exceptions import ProfilerFileNotFoundException, \
ProfilerIOException
from mindinsight.profiler.common.log import logger
from mindinsight.profiler.common.validator.checkparam import \
check_bool, check_subgraph
from mindinsight.profiler.common.validator.validate_path import \
validate_and_normalize_path
from mindinsight.profiler.parser.aicpu_data_parser import DataPreProcessParser
from mindinsight.profiler.parser.framework_parser import FrameworkParser
from mindinsight.profiler.parser.hwts_log_parser import HWTSLogParser
from mindinsight.profiler.parser.minddata_parser import MinddataParser
from mindinsight.profiler.parser.minddata_pipeline_parser import \
MinddataPipelineParser
from mindinsight.profiler.parser.optime_parser import OPComputeTimeParser
from mindinsight.profiler.parser.step_trace_parser import StepTraceParser
from mindinsight.utils.exceptions import MindInsightException

PROFILING_LOG_BASE_PATH = "/var/log/npu/profiling"
INIT_OP_NAME = 'Default/InitDataSetQueue'


class Profiler:
"""
Performance profiling API.

Enable MindSpore users to profile the performance of neural network.

Args:
subgraph (str): Define which subgraph to monitor and analyse, can be 'all', 'Default', 'Gradients'.
is_detail (bool): Whether to show profiling data for op_instance level, only show optype level if False.
is_show_op_path (bool): Whether to save the full path for each op instance.
output_path (str): Output data path.
optypes_to_deal (str): Op type names whose data should be collected and analysed;
all op types are handled if empty. Different op types should be separated by commas.
optypes_not_deal (str): Op type names whose data will not be collected and analysed;
different op types should be separated by commas.

Examples:
>>> from mindinsight.profiler import Profiler
>>> context.set_context(mode=context.GRAPH_MODE, device_target="Ascend",
>>> device_id=int(os.environ["DEVICE_ID"]))
>>> profiler = Profiler(subgraph='all', is_detail=True, is_show_op_path=False, output_path='./data')
>>> model = Model(train_network)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
>>> profiler.analyse()
"""

_base_profiling_container_path = "/var/log/npu/profiling/container"
_hwts_output_filename_target = "output_format_data_hwts_"
_opcompute_output_filename_target = "output_op_compute_time_"
_aicpu_op_output_filename_target = "output_data_preprocess_aicpu_"

def __init__(self, subgraph='all', is_detail=True, is_show_op_path=False, output_path='./data',
optypes_to_deal='', optypes_not_deal='Variable', job_id=""):
# get device_id and device_target
self._get_devid_and_devtarget()
self._container_path = os.path.join(self._base_profiling_container_path, self._dev_id)
data_path = os.path.join(self._container_path, "data")
if not os.path.exists(data_path):
os.makedirs(data_path, exist_ok=True)
self._output_path = validate_and_normalize_path(output_path,
'Profiler output path (' + output_path + ')')
self._output_path = os.path.join(self._output_path, "profiler")
if not os.path.exists(self._output_path):
os.makedirs(self._output_path, exist_ok=True)

os.environ['PROFILING_MODE'] = 'true'
os.environ['PROFILING_OPTIONS'] = 'training_trace:task_trace'
os.environ['MINDDATA_PROFILING_DIR'] = self._output_path
os.environ['DEVICE_ID'] = self._dev_id
# use context interface to open profiling, for the new mindspore version(after 2020.5.21)
try:
import mindspore.context as context
context.set_context(enable_profiling=True, profiling_options="training_trace:task_trace")
except ImportError:
logger.error("Profiling: fail to import context from mindspore.")
except ValueError:
logger.error("Profiling: fail to set context enable_profiling")

os.environ['AICPU_PROFILING_MODE'] = 'true'
os.environ['PROFILING_DIR'] = str(self._container_path)
self._subgraph = check_subgraph(subgraph)
self._valid_optype_name = optypes_to_deal.split(",") if optypes_to_deal else []
self._filt_optype_names = optypes_not_deal.split(",") if optypes_not_deal else []
self._detail = check_bool(is_detail, 'is_detail')
self._withfullpath = check_bool(is_show_op_path, 'is_show_op_path')
self._profiling_job_id = job_id
# add job id env through user input later
self._job_id_env = 0
self._start_time = int(time.time() * 10000000)
logger.info("Profiling: profiling start time: %d", self._start_time)

def analyse(self):
"""
Collect and analyse performance data, called after training or during training.

Examples:
>>> from mindinsight.profiler import Profiler
>>> context.set_context(mode=context.GRAPH_MODE, device_target="Ascend",
>>> device_id=int(os.environ["DEVICE_ID"]))
>>> profiler = Profiler(subgraph='all', is_detail=True, is_show_op_path=False, output_path='./data')
>>> model = Model(train_network)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
>>> profiler.analyse()
"""

try:
from mindspore.communication.management import release
release()
except ImportError:
logger.error("Profiling: fail to import release from mindspore.")

job_id = self._get_profiling_job_id()
logger.info("Profiling: job id is %s ", job_id)

source_path = os.path.join(PROFILING_LOG_BASE_PATH, job_id)
# parse hwts.log.data.45.dev file, and get task profiling data
hwts_output_filename = self._hwts_output_filename_target + self._dev_id + ".txt"
hwts_output_filename = os.path.join(self._output_path, hwts_output_filename)
hwtslog_parser = HWTSLogParser(source_path, hwts_output_filename)
result = hwtslog_parser.execute()
if not result:
logger.error("Profiling: fail to parse hwts log file.")
return

# parse Framework file, and get the relation of op and tasks
framework_parser = FrameworkParser(job_id, self._dev_id, self._output_path)
framework_parser.parse()
op_task_dict = framework_parser.to_task_id_full_op_name_dict()
if not op_task_dict:
logger.error("Profiling: fail to parse framework files.")
return

# get op compute time from hwts data and framework data, write output_op_compute_time.txt
opcompute_output_filename = self._opcompute_output_filename_target + self._dev_id + ".txt"
opcompute_output_filename = os.path.join(self._output_path, opcompute_output_filename)
optime_parser = OPComputeTimeParser(
hwts_output_filename, opcompute_output_filename,
op_task_dict, self._output_path, self._dev_id
)
optime_parser.execute()

# parse DATA_PREPROCESS.dev.AICPU file, write output_data_preprocess_aicpu_x.txt
output_data_preprocess_aicpu = self._aicpu_op_output_filename_target + self._dev_id + ".txt"
output_data_preprocess_aicpu = os.path.join(self._output_path, output_data_preprocess_aicpu)
aicpu_data_parser = DataPreProcessParser(source_path, output_data_preprocess_aicpu)
aicpu_data_parser.execute()

# Parsing minddata AICPU profiling
MinddataParser.execute(source_path, self._output_path, self._dev_id)

# parse minddata pipeline operator and queue
try:
pipeline_parser = MinddataPipelineParser(self._output_path, self._dev_id, self._output_path)
pipeline_parser.parse()
except MindInsightException as err:
logger.warning(err.message)

# analyse op compute time info
try:
self._analyser_op_info()
except MindInsightException as err:
logger.warning(err.message)

# analyse step trace info
try:
self._analyse_step_trace(source_path, framework_parser)
except MindInsightException as err:
logger.warning(err.message)

# analyse timeline info
try:
self._analyse_timeline(aicpu_data_parser, optime_parser)
except (ProfilerIOException, ProfilerFileNotFoundException, ValidationError) as err:
logger.warning('Fail to write timeline data: %s', err)

def _analyse_step_trace(self, source_path, framework_parser):
"""
Analyse step trace data and save the result.

Args:
source_path (str): The directory that contains the step trace original data.
framework_parser (FrameworkParser): The framework parse instance.
"""
logger.info("Begin to parse step trace.")
# construct output path
step_trace_intermediate_file_path = os.path.join(
self._output_path,
f'step_trace_raw_{self._dev_id}_detail_time.csv'
)
point_info_file_path = os.path.join(
self._output_path,
'step_trace_point_info.json'
)
# whether to keep the first step
skip_first_step_flag = framework_parser.check_op_name(INIT_OP_NAME)
point_info = framework_parser.point_info
# parse the step trace files and save the result to disk
parser = StepTraceParser(input_dir=source_path,
output_file_path=step_trace_intermediate_file_path,
job_id=self._job_id_env,
skip_first_step=skip_first_step_flag)
parser.parse_and_save()
point_info = parser.record_point_info(point_info, point_info_file_path)
# print parser result
parser.show()
logger.info("Finish saving the intermediate result: %s", step_trace_intermediate_file_path)
logger.info("The point info is: %s", point_info)

def _analyse_timeline(self, aicpu_parser, optime_parser):
"""
Analyse and parse timeline info.

Args:
aicpu_parser (DataPreProcessParser): The parser instance for AI CPU operator
execution time calculation.
optime_parser (OPComputeTimeParser): The parser instance for AI Core
operator execution time calculation.
"""
timeline_analyser = AnalyserFactory.instance().get_analyser(
'timeline', self._output_path, self._dev_id
)

# Get framework info
aicoredetail_analyser = AnalyserFactory.instance().get_analyser(
'aicore_detail', self._output_path, self._dev_id
)
framework_info = aicoredetail_analyser.query()

# Get all reduce info
step_trace_analyser = AnalyserFactory.instance().get_analyser(
'step_trace', self._output_path, self._dev_id
)
all_reduce_info = step_trace_analyser.query_for_all_reduce()

# Get timeline info
logger.info('Start writing timeline info...')
logger.info('Note: it could take a few minutes if you are training '
'with a complex network or more than 10 steps.')
# Add info into timeline, such as AI CPU, AllReduce, framework info.
aicpu_info = aicpu_parser.query_aicpu_data()
min_cycle_counter = min(aicpu_parser.min_cycle_counter, optime_parser.min_cycle_counter)
timeline_analyser.init_timeline(all_reduce_info, framework_info, aicpu_info, min_cycle_counter)
timeline_analyser.write_timeline()
timeline_analyser.write_timeline_summary()

def __del__(self):
"""Disable the profiling collection service, called after training."""

os.environ['PROFILING_MODE'] = str("false")
try:
import mindspore.context as context
context.set_context(enable_profiling=False)
except ImportError:
pass

def _get_profiling_job_id(self):
"""Get profiling job id, which was generated by ada service.

Returns:
str, the profiling job id.
"""

if self._profiling_job_id:
return self._profiling_job_id

job_id = ""
cmd = "ls -t " + PROFILING_LOG_BASE_PATH + "|grep JOB|awk '{print $1}'"
r = os.popen(cmd)
profiling_job_dirs = r.readlines()
r.close()
for item in profiling_job_dirs:
path = os.path.join(PROFILING_LOG_BASE_PATH, item.strip())
log_file = get_file_names(path, "host_start.log")
if not log_file:
logger.error("Profiling: job path %s, host_start.log not exist.", path)
continue

log_file = os.path.join(path, log_file[0])
item_dict = self._parse_host_start_log(log_file)

if not item_dict:
logger.error("Profiling: job path %s, fail to get job start info.", path)
continue
if self._start_time > int(item_dict["start_time"]):
logger.info("Profiling: job path %s, start_time %s, training start_time %d.",
path, item_dict["start_time"], self._start_time)
break

if self._dev_id != item_dict["device_id"]:
logger.info("Profiling: job path %s, dev id %s, training device id %s.",
path, item_dict["device_id"], self._dev_id)
continue

job_id = item.strip()
break

if not job_id:
msg = ("Fail to get profiling job, please check whether job dir was generated")
raise RuntimeError(msg)

return job_id

def _parse_host_start_log(self, input_file):
"""
Parse host start log file, get the device id and start time of the job.

Args:
input_file (str): The file path of the host start log file.

Returns:
dict, job start time and device id.
"""

item_dict = {}
for line in open(input_file):
if "Device" in line:
item_dict["device_id"] = line[7:len(line)-2]
elif "clock_realtime" in line:
item_dict["start_time"] = line[16:len(line)-3]

return item_dict

def _analyser_op_info(self):
"""Analyse the operator information."""
integrator = Integrator(self._output_path, self._dev_id)
integrator.integrate()

aicore_type_result = self._query_op_type_info()
detail_file_path = os.path.join(
self._output_path,
'output_op_compute_time_detail_{}.txt'.format(self._dev_id)
)
fwrite_format(detail_file_path, data_source='title:op compute time')
display_names = [
'optype_name', 'compute_time(ms, per-step)',
'called_times(per-step)', 'percent'
]
data_source = tabulate(aicore_type_result, display_names)
fwrite_format(detail_file_path, data_source=data_source, is_print=True)

if self._detail:
op_type_order = [item[0] for item in aicore_type_result]
aicore_detail_result = self._query_op_detail_info(op_type_order)
fwrite_format(detail_file_path, data_source='', is_print=True)
fwrite_format(detail_file_path, data_source='Detail:', is_print=True)
data_source = tabulate(
aicore_detail_result.get('object'),
aicore_detail_result.get('col_name')
)
fwrite_format(detail_file_path, data_source=data_source, is_print=True)

def _query_op_type_info(self):
"""
Query AICORE operator type information.

Returns:
list[list], the AICORE operator type and execution time information.
"""
condition = {
'sort_condition': {
'name': 'execution_time',
'type': 'descending'
}
}
analyser = AnalyserFactory.instance().get_analyser(
'aicore_type', self._output_path, self._dev_id
)
result = analyser.query(condition)
return result.get('object')

def _query_op_detail_info(self, op_type_order):
"""
Query AICORE operator detail information.

Args:
op_type_order(list): The name of the op type in order.

Returns:
dict, the AICORE operator detail information.
"""

op_type_condition = {}
if self._valid_optype_name:
op_type_condition['in'] = self._valid_optype_name
if self._filt_optype_names:
op_type_condition['not_in'] = self._filt_optype_names

subgraph_condition = {}
if self._subgraph != 'all':
subgraph_condition['in'] = [self._subgraph]

filter_condition = {
'op_type': op_type_condition,
'subgraph': subgraph_condition,
'is_display_detail': False,
'is_display_full_op_name': self._withfullpath
}
analyser = AnalyserFactory.instance().get_analyser(
'aicore_detail', self._output_path, self._dev_id
)
result = analyser.query_and_sort_by_op_type(
filter_condition, op_type_order
)
return result


def _get_devid_and_devtarget(self):
"""Get device id and target of this training."""

device_target = ""
dev_id = ""
try:
import mindspore.context as context
dev_id = str(context.get_context("device_id"))
device_target = context.get_context("device_target")
except ImportError:
logger.error("Profiling: fail to import context from mindspore.")
except ValueError as err:
logger.error("Profiling: fail to get context, %s", err)

if not dev_id or not dev_id.isdigit():
dev_id = os.getenv('DEVICE_ID')
if not dev_id or not dev_id.isdigit():
dev_id = "0"
logger.error("Fail to get DEVICE_ID, use 0 instead.")

if device_target and device_target != "Davinci" \
and device_target != "Ascend":
msg = ("Profiling: unsupport backend: %s" \
% device_target)
raise RuntimeError(msg)

self._dev_id = dev_id

+ 22 - 14 mindinsight/scripts/stop.py

@@ -103,21 +103,17 @@ class Command(BaseCommand):
self.logfile.info('Stop mindinsight with port %s and pid %s.', port, pid)

process = psutil.Process(pid)
child_pids = [child.pid for child in process.children()]
processes_to_kill = [process]
# Set recursive to True to kill grand children processes.
for child in process.children(recursive=True):
processes_to_kill.append(child)

# kill gunicorn master process
try:
os.kill(pid, signal.SIGKILL)
except PermissionError:
self.console.info('kill pid %s failed due to permission error', pid)
sys.exit(1)

# cleanup gunicorn worker processes
for child_pid in child_pids:
for proc in processes_to_kill:
self.logfile.info('Stopping mindinsight process %s.', proc.pid)
try:
os.kill(child_pid, signal.SIGKILL)
except ProcessLookupError:
pass
proc.send_signal(signal.SIGKILL)
except psutil.Error as ex:
self.logfile.warning("Stop process %s failed. Detail: %s.", proc.pid, str(ex))

for hook in HookUtils.instance().hooks():
hook.on_shutdown(self.logfile)
@@ -154,7 +150,19 @@ class Command(BaseCommand):
if user != process.username():
continue

pid = process.pid if process.ppid() == 1 else process.ppid()
gunicorn_master_process = process

# The gunicorn master process might have grand children (eg forked by process pool).
while True:
parent_process = gunicorn_master_process.parent()
if parent_process is None or parent_process.pid == 1:
break
parent_cmd = parent_process.cmdline()
if ' '.join(parent_cmd).find(self.cmd_regex) == -1:
break
gunicorn_master_process = parent_process

pid = gunicorn_master_process.pid

for open_file in process.open_files():
if open_file.path.endswith(self.access_log_path):


+ 41 - 0 mindinsight/sysmetric/collector/__init__.py

@@ -0,0 +1,41 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""The metrics collector."""
from ._collect_cpu import collect_cpu
from ._collect_mem import collect_mem
from ._collect_npu import collect_npu

__all__ = ['collect_cpu', 'collect_mem', 'collect_npu', 'get_metrics']


def get_metrics():
mem = collect_mem()
mem_total = mem.get('total')
mem_available = mem.get('available')
mem_used = mem.get('used')
return {
'npu': collect_npu(),
'cpu': {
'overall': collect_cpu(percent=True),
'percpu': collect_cpu(percpu=True, percent=True)
},
'memory': {
'virtual': {
'available': mem_available,
'used': mem_used,
'others': max(mem_total - mem_available - mem_used, 0)
}
}
}

+ 37 - 0 mindinsight/sysmetric/collector/_collect_cpu.py

@@ -0,0 +1,37 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""The cpu collector."""

import psutil


def collect_cpu(percpu=False, percent=False):
"""
Collect the CPU info.

Args:
percpu (bool): Whether to return a list of CPU info for each logical CPU on the system.
percent (bool): Whether to represent the values as percentages.

Returns:
Union[dict, List[dict]], the CPU info.
"""
if percent:
times = psutil.cpu_times_percent(percpu=percpu)
else:
times = psutil.cpu_times(percpu=percpu)
if not percpu:
return dict(times._asdict())
return [dict(time._asdict()) for time in times]
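# A usage sketch (illustration only, not part of this patch) of the four query modes,
# assuming psutil is installed; raw times are cumulative seconds:
#
#   collect_cpu()                           # dict of cumulative CPU times
#   collect_cpu(percent=True)               # dict of percentages since the last call
#   collect_cpu(percpu=True)                # list with one dict per logical CPU
#   collect_cpu(percpu=True, percent=True)  # list of percentage dicts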

+ 27
- 0
mindinsight/sysmetric/collector/_collect_mem.py View File

@@ -0,0 +1,27 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""The memory collector."""

import psutil


def collect_mem():
"""
Collect the virtual memory info.

Returns:
dict, the virtual memory info.
"""
return dict(psutil.virtual_memory()._asdict())
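# A usage sketch (illustration only, not part of this patch): the dict mirrors
# psutil.virtual_memory(), so values are byte counts, e.g.
#
#   mem = collect_mem()
#   mem['total'], mem['available'], mem['used']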

+ 420
- 0
mindinsight/sysmetric/collector/_collect_npu.py View File

@@ -0,0 +1,420 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""The npu collector."""

import inspect
from collections import defaultdict
from ctypes import CDLL, Structure, byref, c_char, c_int, c_uint, c_ulong, c_ushort
from functools import lru_cache, wraps
from threading import Lock, Thread

from mindinsight.sysmetric.common.exceptions import DsmiQueryingException
from mindinsight.sysmetric.common.log import logger


def _timeout(seconds, default):
"""
The timeout decorator waits for the specified seconds or returns the default value.

Args:
seconds (float): The specified seconds.
default (Any): The default value.
"""

def outer(fn):
cached, lockdict = {}, defaultdict(Lock)

def target(*args):
lock = lockdict[args]
if lock.acquire(blocking=False):
try:
cached[args] = fn(*args)
finally:
lock.release()
else:
logger.debug('%s%r skipped.', fn.__name__, args)

@wraps(fn)
def inner(*args):
thread = Thread(target=target, args=args, daemon=True)
thread.start()
thread.join(seconds)
if thread.is_alive():
logger.debug('%s%r timed out.', fn.__name__, args)
return cached.get(args, default)

return inner

return outer
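# A behavioral sketch (illustration only, not part of this patch); slow_query is a
# hypothetical stand-in for a hanging driver call:
#
#   @_timeout(0.1, default=-1)
#   def slow_query(device_id):
#       time.sleep(1)
#       return device_id * 2
#
#   slow_query(3)    # -1: join(0.1) expires while the worker thread keeps running
#   time.sleep(1.5)  # give the background thread time to populate the cache
#   slow_query(3)    # 6: the cached result of the earlier call is returned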


def _fallback_to_prev_result(fn):
"""Fallback to previous successful result when failing."""
prev_result = None

@wraps(fn)
def wrap(*args):
nonlocal prev_result
success, result = fn(*args)
if success:
prev_result = result
return success, result
if prev_result is not None:
return success, prev_result
raise RuntimeError(f'{fn.__name__} querying failed and no previous successful result.')

return wrap
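# A behavioral sketch (illustration only, not part of this patch); flaky_query is a
# hypothetical wrapped function that succeeds only on its first call:
#
#   calls = []
#
#   @_fallback_to_prev_result
#   def flaky_query(device_id):
#       calls.append(device_id)
#       return len(calls) == 1, len(calls) * 100
#
#   flaky_query(0)   # (True, 100): success, the result is remembered
#   flaky_query(0)   # (False, 100): failure falls back to the previous result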


def _libsmicall(*args):
"""
Call the lib function to query NPU metrics.

Returns:
bool, True when the query succeeds, False otherwise.
"""
if not libsmi:
logger.error('Trying to call the libdrvdsmi_host which is not loaded.')
raise ValueError('Trying to call the libdrvdsmi_host which is not loaded.')
fname = inspect.stack()[1].function
error_code = getattr(libsmi, fname)(*args)
if error_code != 0:
logger.error('%s querying failed with error code %d.', fname, error_code)
return error_code == 0
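# Note (illustration only, not part of this patch): _libsmicall resolves the C symbol
# from the caller's own name via inspect.stack(), so each dsmi_* wrapper below
# implicitly invokes the identically named function in libdrvdsmi_host.so:
#
#   def dsmi_get_device_count():
#       ...
#       _libsmicall(byref(device_count))   # calls libsmi.dsmi_get_device_count(...)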


@lru_cache(maxsize=4)
def dsmi_get_device_count():
"""
Get device count.

Returns:
int, the device count.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""
device_count = c_int()

if _libsmicall(byref(device_count)):
return device_count.value
raise RuntimeError('Querying device count failed.')


@lru_cache(maxsize=4)
def dsmi_list_device(count):
"""
List the device IDs.

Args:
count (int): The device count.

Returns:
List[int], the device IDs.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""
device_id_array = c_int * count
device_id_list = device_id_array()
count = c_int(count)

if _libsmicall(device_id_list, count):
return list(device_id_list)
raise RuntimeError('Querying device id list failed.')


@lru_cache(maxsize=8)
@_fallback_to_prev_result
def dsmi_get_chip_info(device_id):
"""
Get chip info.

Args:
device_id (int): The specific device id.

Returns:
dict, the chip info:
- chip_type (str): The chip type.
- chip_name (str): The chip name.
- chip_ver (str): The chip version.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""

class ChipInfoStruct(Structure):
_fields_ = [('chip_type', c_char * 32), ('chip_name', c_char * 32), ('chip_ver', c_char * 32)]

device_id = c_int(device_id)
chip_info = ChipInfoStruct()
success = _libsmicall(device_id, byref(chip_info))
return success, {
'chip_type': chip_info.chip_type.decode('utf-8'),
'chip_name': chip_info.chip_name.decode('utf-8'),
'chip_ver': chip_info.chip_ver.decode('utf-8')
}


@_fallback_to_prev_result
def dsmi_get_device_health(device_id):
"""
Get device health.

Args:
device_id (int): The specific device id.

Returns:
int, 0 indicates normal, 1 minor alarm, 2 major alarm, 3 critical alarm, 0xffffffff device not found.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""
device_id = c_int(device_id)
health = c_uint()

success = _libsmicall(device_id, byref(health))

return success, health.value


@lru_cache(maxsize=8)
@_fallback_to_prev_result
def dsmi_get_device_ip_address(device_id):
"""
Get device IP address.

Args:
device_id (int): The specific device ID.

Returns:
dict, the device IP address:
- ip_address (str): the IP address.
- mask_address (str): the mask address.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""
is_ipv6, port_type, port_id = False, 1, 0

class Ipaddrstruct(Structure):
_fields_ = [('u_addr', c_char * (16 if is_ipv6 else 4)), ('ip_type', c_int)]

ip_type = c_int(1 if is_ipv6 else 0)

device_id = c_int(device_id)
ip_address = Ipaddrstruct(b'', ip_type)
mask_address = Ipaddrstruct(b'', ip_type)

success = _libsmicall(device_id, port_type, port_id, byref(ip_address), byref(mask_address))

def pad(u_addr):
for i in range(4):
if i < len(u_addr):
yield u_addr[i]
else:
yield 0

return success, {
'ip_address': '.'.join(str(c) for c in pad(ip_address.u_addr)),
'mask_address': '.'.join(str(c) for c in pad(mask_address.u_addr))
}


@_fallback_to_prev_result
def dsmi_get_hbm_info(device_id):
"""
Get the HBM info.

Args:
device_id (int): The specific device id.

Returns:
dict, the HBM info:
memory_size (int), the total HBM memory, in KB.
freq (int), the HBM frequency, in MHz.
memory_usage (int), the used HBM memory, in KB.
temp (int), the HBM temperature, in °C.
bandwith_util_rate (int), the bandwidth utilization rate, in %.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""

class HbmInfoStruct(Structure):
_fields_ = [('memory_size', c_ulong), ('freq', c_uint), ('memory_usage', c_ulong), ('temp', c_int),
('bandwith_util_rate', c_uint)]

device_id = c_int(device_id)
hbm_info = HbmInfoStruct()

success = _libsmicall(device_id, byref(hbm_info))

return success, {
'memory_size': hbm_info.memory_size,
'freq': hbm_info.freq,
'memory_usage': hbm_info.memory_usage,
'temp': hbm_info.temp,
'bandwith_util_rate': hbm_info.bandwith_util_rate
}


@_timeout(0.2, -1)
def dsmi_get_device_utilization_rate(device_id, device_type):
"""
Get device utilization rate, %.

Note: Querying the AI Core while profiling is turned on will fail.

Args:
device_id (int): The specific device id.
device_type (int): The device type, 1 for memory, 2 for AI Core, 5 for memory bandwidth, 6 for HBM, 10 for HBM bandwidth.

Returns:
int, the utilization rate, returning -1 to indicate querying failed.
"""
device_id = c_int(device_id)
device_type = c_int(device_type)
utilization_rate = c_uint()
if _libsmicall(device_id, device_type, byref(utilization_rate)):
return utilization_rate.value
return -1
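# A usage sketch (illustration only, not part of this patch) with the device_type
# codes from the docstring above:
#
#   AICORE = 2
#   rate = dsmi_get_device_utilization_rate(0, AICORE)
#   if rate == -1:
#       pass  # the query failed or exceeded the 0.2s budget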


@_fallback_to_prev_result
def dsmi_get_device_power_info(device_id):
"""
Get the device power.

Args:
device_id (int): The specific device id.

Returns:
dict, the device power info.
- power, the device power, in watts.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""

class PowerInfoStruct(Structure):
_fields_ = [('power', c_ushort)]

power_info = PowerInfoStruct()
device_id = c_int(device_id)

success = _libsmicall(device_id, byref(power_info))
return success, {'power': round(power_info.power * 0.1, 2)}


@_fallback_to_prev_result
def dsmi_get_device_temperature(device_id):
"""
Get the device temperature.

Args:
device_id (int): The specific device id.

Returns:
int, the device temperature, in °C.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""
device_id = c_int(device_id)
temperature = c_uint()

success = _libsmicall(device_id, byref(temperature))

return success, temperature.value


def collect_npu():
"""Collect the metrics for each NPUs.

Returns:
List[dict], the metrics of each NPUs.

Raises:
DsmiQueryingException, when querying dsmi returning non-zero.
"""
try:
return _collect_npus()
except RuntimeError as e:
logger.warning(e.args[0])
raise DsmiQueryingException(e.args[0])
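# A usage sketch (illustration only, not part of this patch), assuming at least one
# NPU is visible to the driver:
#
#   for npu in collect_npu() or []:
#       print(npu['device_id'], npu['available'], npu['aicore_rate'])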


def _collect_npus():
"""Collect the metrics for each NPUs.

Returns:
List[dict], the metrics of each NPUs.

Raises:
RuntimeError, when querying dsmi returning non-zero.
"""
if not libsmi:
return None
count = dsmi_get_device_count()
device_ids = dsmi_list_device(count)
npus = []
for device_id in device_ids:
npu = _collect_one(device_id)
npus.append(npu)
return npus


def _collect_one(device_id):
"""
Collect NPU info by the device_id.

Args:
device_id (int): The specific device id.

Returns:
dict, the NPU info.

Raises:
RuntimeError, when the dsmi query returns non-zero.
"""
kb_to_mb, memory_threshold, success = 1024, 4, [True] * 6
success[0], health = dsmi_get_device_health(device_id)
success[1], hbm_info = dsmi_get_hbm_info(device_id)
success[2], chip_info = dsmi_get_chip_info(device_id)
success[3], ip_addr = dsmi_get_device_ip_address(device_id)
success[4], power_info = dsmi_get_device_power_info(device_id)
success[5], temperature = dsmi_get_device_temperature(device_id)
aicore_rate = dsmi_get_device_utilization_rate(device_id, 2)
return {
'chip_name': chip_info.get('chip_name'),
'device_id': device_id,
'available': all(success) and health == 0 and hbm_info.get('memory_usage', 0) // kb_to_mb < memory_threshold,
'health': health,
'ip_address': ip_addr.get('ip_address'),
'aicore_rate': aicore_rate,
'hbm_info': {
'memory_size': hbm_info.get('memory_size') // kb_to_mb,
'memory_usage': hbm_info.get('memory_usage') // kb_to_mb
},
'power': power_info.get('power'),
'temperature': temperature,
'success': all(success)
}
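# Note (illustration only, not part of this patch): 'available' is a heuristic. It is
# True only when every query succeeded, health is 0 (normal) and used HBM stays under
# the 4 MB threshold, i.e. the device looks idle; the UI presents it as a hint only.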


try:
libsmi = CDLL('libdrvdsmi_host.so')
except OSError:
logger.info('Failed to load libdrvdsmi_host.so.')
libsmi = None

mindinsight/profiler/parser/__init__.py → mindinsight/sysmetric/common/__init__.py View File


+ 25
- 0
mindinsight/sysmetric/common/exceptions.py View File

@@ -0,0 +1,25 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Define custom exception."""

from mindinsight.utils.exceptions import MindInsightException
from mindinsight.utils.constant import SysmetricErrors


class DsmiQueryingException(MindInsightException):
"""Dsmi Querying Failure"""

def __init__(self, message):
super(DsmiQueryingException, self).__init__(SysmetricErrors.DSMI_QUERYING_NONZERO, message)

tests/ut/datavisual/common/__init__.py → mindinsight/sysmetric/common/log.py View File

@@ -1,4 +1,4 @@
# Copyright 2019 Huawei Technologies Co., Ltd
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,3 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Resource logger."""
from mindinsight.utils.log import setup_logger

logger = setup_logger(sub_module='sysmetric', log_name='sysmetric')

+ 22
- 0
mindinsight/ui/src/assets/images/cpu-bg.svg View File

@@ -0,0 +1,22 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="1920px" height="1080px" viewBox="0 0 1920 1080" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<!-- Generator: Sketch 63.1 (92452) - https://sketch.com -->
<title>矩形</title>
<desc>Created with Sketch.</desc>
<defs>
<polygon id="path-1" points="0 0 1920 0 1920 1080 0 1080"></polygon>
<pattern id="pattern-3" width="16.4850993" height="16.4850993" x="-16.4850993" y="-16.4850993" patternUnits="userSpaceOnUse">
<use xlink:href="#image-4" transform="scale(0.34343957,0.34343957)"></use>
</pattern>
<image id="image-4" width="48" height="48" xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADAAAAAwCAYAAABXAvmHAAAEGWlDQ1BrQ0dDb2xvclNwYWNlR2VuZXJpY1JHQgAAOI2NVV1oHFUUPrtzZyMkzlNsNIV0qD8NJQ2TVjShtLp/3d02bpZJNtoi6GT27s6Yyc44M7v9oU9FUHwx6psUxL+3gCAo9Q/bPrQvlQol2tQgKD60+INQ6Ium65k7M5lpurHeZe58853vnnvuuWfvBei5qliWkRQBFpquLRcy4nOHj4g9K5CEh6AXBqFXUR0rXalMAjZPC3e1W99Dwntf2dXd/p+tt0YdFSBxH2Kz5qgLiI8B8KdVy3YBevqRHz/qWh72Yui3MUDEL3q44WPXw3M+fo1pZuQs4tOIBVVTaoiXEI/MxfhGDPsxsNZfoE1q66ro5aJim3XdoLFw72H+n23BaIXzbcOnz5mfPoTvYVz7KzUl5+FRxEuqkp9G/Ajia219thzg25abkRE/BpDc3pqvphHvRFys2weqvp+krbWKIX7nhDbzLOItiM8358pTwdirqpPFnMF2xLc1WvLyOwTAibpbmvHHcvttU57y5+XqNZrLe3lE/Pq8eUj2fXKfOe3pfOjzhJYtB/yll5SDFcSDiH+hRkH25+L+sdxKEAMZahrlSX8ukqMOWy/jXW2m6M9LDBc31B9LFuv6gVKg/0Szi3KAr1kGq1GMjU/aLbnq6/lRxc4XfJ98hTargX++DbMJBSiYMIe9Ck1YAxFkKEAG3xbYaKmDDgYyFK0UGYpfoWYXG+fAPPI6tJnNwb7ClP7IyF+D+bjOtCpkhz6CFrIa/I6sFtNl8auFXGMTP34sNwI/JhkgEtmDz14ySfaRcTIBInmKPE32kxyyE2Tv+thKbEVePDfW/byMM1Kmm0XdObS7oGD/MypMXFPXrCwOtoYjyyn7BV29/MZfsVzpLDdRtuIZnbpXzvlf+ev8MvYr/Gqk4H/kV/G3csdazLuyTMPsbFhzd1UabQbjFvDRmcWJxR3zcfHkVw9GfpbJmeev9F08WW8uDkaslwX6avlWGU6NRKz0g/SHtCy9J30o/ca9zX3Kfc19zn3BXQKRO8ud477hLnAfc1/G9mrzGlrfexZ5GLdn6ZZrrEohI2wVHhZywjbhUWEy8icMCGNCUdiBlq3r+xafL549HQ5jH+an+1y+LlYBifuxAvRN/lVVVOlwlCkdVm9NOL5BE4wkQ2SMlDZU97hX86EilU/lUmkQUztTE6mx1EEPh7OmdqBtAvv8HdWpbrJS6tJj3n0CWdM6busNzRV3S9KTYhqvNiqWmuroiKgYhshMjmhTh9ptWhsF7970j/SbMrsPE1suR5z7DMC+P/Hs+y7ijrQAlhyAgccjbhjPygfeBTjzhNqy28EdkUh8C+DU9+z2v/oyeH791OncxHOs5y2AtTc7nb/f73TWPkD/qwBnjX8BoJ98VQNcC+8AAAFFSURBVGgF7djbDYMwDAXQBjEMYqyqY1WMxTppLiJIQIA8ariWkp8IPuweO60am67r7MstY8wwjuPH7dMz3uUua63p+/7r9rePIRW/9QmQzCV9ub0YgSIgzhxvQkjFb1EZBAdEKolkfJTqtnb7bv/zOJm58moRE0AzYgFoRawAGhE7gDZEEKAJcQjQgjgFaEBcAtgRUQBmRDSAFZEEYEQkA9gQWQAmRDaABVEEYEAUAx5H4EaGOyw+SMl66mbXzNOD4k6gCPNUY/CFwF1YOn5zRxJJRINqaUbUuZA/79sd3wm3xIdbdS60rXzo2XVCbO60/HxKJgFKKv4CkEziuyKBWAE0InYAbYggQBPiEKAFcQrQgLgEsCOiAMyIaAArIgnAiEgGsCGyAEyIbAALogjAgCgGPI7AX9w6F3JtqHMhnMWLFSrSD9jOnakVHpZYAAAAAElFTkSuQmCC"></image>
</defs>
<g id="硬件资源可视-特性文档" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<mask id="mask-2" fill="white">
<use xlink:href="#path-1"></use>
</mask>
<g id="矩形">
<use fill="#F2F5FC" xlink:href="#path-1"></use>
<use fill-opacity="0.2" fill="url(#pattern-3)" style="mix-blend-mode: multiply;" xlink:href="#path-1"></use>
</g>
</g>
</svg>

+ 0
- 1
mindinsight/ui/src/components/autocomplete.vue View File

@@ -22,7 +22,6 @@ limitations under the License.
:aria-expanded="suggestionVisible"
:aria-owns="id">
<el-input ref="input"
class="rtl-item"
v-bind="[$props, $attrs]"
@input="handleChange"
@focus="handleFocus"


+ 6
- 2
mindinsight/ui/src/components/gridTableSimple.vue View File

@@ -296,7 +296,9 @@ export default {
*/
accuracyChange(value) {
this.formateGridArray();
this.updateGrid();
if (!this.requestError && !this.incorrectData) {
this.updateGrid();
}
},
/**
* Dimension selection changed
@@ -369,7 +371,9 @@ export default {
}
this.formateGridArray();
this.formateColumnsData();
this.updateGrid();
if (!this.incorrectData) {
this.updateGrid();
}
});
},
/**


+ 123
- 32
mindinsight/ui/src/components/header.vue View File

@@ -32,6 +32,7 @@ limitations under the License.
<el-menu-item index="/model-traceback">{{$t("summaryManage.modelTraceback")}}</el-menu-item>
<el-menu-item index="/data-traceback">{{$t("summaryManage.dataTraceback")}}</el-menu-item>
<el-menu-item index="/compare-plate">{{$t("summaryManage.comparePlate")}}</el-menu-item>
<el-menu-item index="/hardware-visual">{{$t("summaryManage.hardwareVisual")}}</el-menu-item>
</el-menu>
</div>
</div>
@@ -42,28 +43,62 @@ limitations under the License.
|| this.$route.path.indexOf('/histogram') > 0
|| this.$route.path.indexOf('/tensor') > 0
|| this.$route.path.indexOf('/training-dashboard') > 0
|| !this.$route.path.indexOf('/compare-plate')">
<!-- automatic refresh switch -->
<el-switch v-model="isTimeReload"
:active-text="$t('header.timeReload')+$t('symbols.leftbracket')+
timeReloadValue+$t('header.timeSecond')+$t('symbols.rightbracket')"
@change="timeReload"></el-switch>
<i class="el-icon-edit"
:title="$t('header.timeReloadScope')"
v-if="isTimeReload && !isShowInp"
@click="editTime"></i>
|| !this.$route.path.indexOf('/compare-plate')
|| !this.$route.path.indexOf('/hardware-visual')">
<div class="reload-training"
v-if="this.$route.path.indexOf('/scalar') > 0
|| this.$route.path.indexOf('/image') > 0
|| this.$route.path.indexOf('/histogram') > 0
|| this.$route.path.indexOf('/tensor') > 0
|| this.$route.path.indexOf('/training-dashboard') > 0
|| !this.$route.path.indexOf('/compare-plate')">
<!-- automatic refresh switch -->
<el-switch v-model="isTimeReload"
:active-text="$t('header.timeReload')+$t('symbols.leftbracket')+
timeReloadValue+$t('header.timeSecond')+$t('symbols.rightbracket')"
@change="timeReload"></el-switch>
<i class="el-icon-edit"
:title="$t('header.timeReloadScope')"
v-if="isTimeReload && !isShowInp"
@click="editTime"></i>

<el-input v-if="isTimeReload && isShowInp"
v-model="newReloadValue"
type="text"
@input="timeValueChange"></el-input>
<el-input v-if="isTimeReload && isShowInp"
v-model="newReloadValue"
type="text"
@input="timeValueChange"></el-input>

<i class="el-icon-check"
v-if="isTimeReload && isShowInp"
@click="saveTimeValue"></i>
<i class="el-icon-close"
v-if="isTimeReload && isShowInp"
@click="cancelTimeValue"></i>
</div>
<div class="reload-hardware"
v-if="!this.$route.path.indexOf('/hardware-visual')">
<!-- automatic refresh switch -->
<el-switch v-model="isHardwareTimeReload"
:active-text="$t('header.timeReload')+$t('symbols.leftbracket')+
hardwareTimeReloadValue+$t('header.timeSecond')+$t('symbols.rightbracket')"
@change="hardwareTimeReload"></el-switch>
<i class="el-icon-edit"
:title="$t('header.timeReloadScope')"
v-if="isHardwareTimeReload && !isShowHardwareInp"
@click="editHardwareTime"></i>

<el-input v-if="isHardwareTimeReload && isShowHardwareInp"
v-model="newHardwareReloadValue"
type="text"
@input="hardwareTimeValueChange"></el-input>

<i class="el-icon-check"
v-if="isHardwareTimeReload && isShowHardwareInp"
@click="saveHardwareTimeValue"></i>
<i class="el-icon-close"
v-if="isHardwareTimeReload && isShowHardwareInp"
@click="cancelHardwareTimeValue"></i>
</div>

<i class="el-icon-check"
v-if="isTimeReload && isShowInp"
@click="saveTimeValue"></i>
<i class="el-icon-close"
v-if="isTimeReload && isShowInp"
@click="cancleTimeValue"></i>

<!-- manual refresh switch -->
<img src="../assets/images/reload.png"
@@ -90,6 +125,9 @@ export default {
isShowInp: false,
timeReloadValue: this.$store.state.timeReloadValue,
newReloadValue: this.$store.state.timeReloadValue,
isShowHardwareInp: false,
hardwareTimeReloadValue: this.$store.state.hardwareTimeReloadValue,
newHardwareReloadValue: this.$store.state.hardwareTimeReloadValue,
};
},
computed: {
@@ -104,6 +142,13 @@ export default {
},
set(val) {},
},
// set and get isHardwareTimeReload status
isHardwareTimeReload: {
get() {
return this.$store.state.isHardwareTimeReload;
},
set(val) {},
},
},
watch: {},
mounted() {},
@@ -117,7 +162,7 @@ export default {
relPath(path) {
this.$router.push(path);
},
// save isTimeReload status
// training reload setting
timeReload(val) {
localStorage.isTimeReload = val;
this.$store.commit('setIsTimeReload', val);
@@ -128,7 +173,7 @@ export default {
},

saveTimeValue() {
if (this.newReloadValue) {
if (this.newReloadValue >= 0) {
this.newReloadValue =
this.newReloadValue < 3
? 3
@@ -141,20 +186,65 @@ export default {
this.$store.commit('setTimeReloadValue', timeValue);
this.isShowInp = false;
} else {
this.cancleTimeValue();
this.cancelTimeValue();
}
},
cancleTimeValue() {
cancelTimeValue() {
this.isShowInp = false;
this.newReloadValue = this.timeReloadValue;
},
timeValueChange() {
if (this.newReloadValue === '') {
return;
}
this.newReloadValue = this.newReloadValue
.toString()
.replace(/[^\.\d]/g, '')
.replace(/\./g, '');
this.newReloadValue = Number(this.newReloadValue);
},
// hardware reload setting
hardwareTimeReload(val) {
localStorage.isHardwareTimeReload = val;
this.$store.commit('setIsHardwareTimeReload', val);
},

editHardwareTime() {
this.isShowHardwareInp = true;
},

saveHardwareTimeValue() {
if (this.newHardwareReloadValue >= 0) {
this.newHardwareReloadValue =
this.newHardwareReloadValue < 3
? 3
: this.newHardwareReloadValue > 300
? 300
: this.newHardwareReloadValue;
const timeValue = this.newHardwareReloadValue;
this.hardwareTimeReloadValue = timeValue;
localStorage.hardwareTimeReloadValue = timeValue;
this.$store.commit('setHardwareTimeReloadValue', timeValue);
this.isShowHardwareInp = false;
} else {
this.cancelHardwareTimeValue();
}
},
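// Note (illustration only, not part of this patch): the hardware refresh interval is
// clamped to the range [3, 300] seconds; an input of 0 is raised to 3 and an input of
// 9999 is lowered to 300.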
cancelHardwareTimeValue() {
this.isShowHardwareInp = false;
this.newHardwareReloadValue = this.hardwareTimeReloadValue;
},
hardwareTimeValueChange() {
if (this.newHardwareReloadValue === '') {
return;
}
this.newHardwareReloadValue = this.newHardwareReloadValue
.toString()
.replace(/[^\.\d]/g, '')
.replace(/\./g, '');
this.newHardwareReloadValue = Number(this.newHardwareReloadValue);
},

// get active menu item
getActive() {
const str = this.$route.path.split('/');
@@ -217,6 +307,13 @@ export default {
.el-icon-close {
color: #f56c6c;
}
.el-input {
width: 45px;
input {
padding: 0;
text-align: center;
}
}
}

// reload style
@@ -232,16 +329,10 @@ export default {
transform: rotate(1turn);
}
}
.cl-header-right .el-input {
width: 45px;
input {
padding: 0;
text-align: center;
}
}

.cl-header-nav {
margin-left: 50px;
flex: 1;
flex: 1.5;

.el-menu {
border-bottom: none;


+ 12
- 11
mindinsight/ui/src/components/multiselectGroup.vue View File

@@ -164,25 +164,26 @@ export default {
listSelectAll() {
this.operateSelectAll = !this.operateSelectAll;
this.multiSelectedItemNames = {};
this.selectedNumber = 0;
// Setting the status of list items
if (this.operateSelectAll) {
if (this.isLimit) {
const loopCount = this.checkListArr.length;
for (let i = 0; i < loopCount; i++) {
if (this.selectedNumber >= this.limitNum) {
break;
}
const listItem = this.checkListArr[i];
if (listItem.checked) {
this.selectedNumber++;
if (listItem.show) {
if (this.selectedNumber >= this.limitNum) {
if (listItem.checked && listItem.show) {
this.multiSelectedItemNames[listItem.label] = true;
}
} else if (listItem.show) {
listItem.checked = true;
this.multiSelectedItemNames[listItem.label] = true;
this.selectedNumber++;
} else {
if (listItem.checked) {
if (listItem.show) {
this.multiSelectedItemNames[listItem.label] = true;
}
} else if (listItem.show) {
listItem.checked = true;
this.multiSelectedItemNames[listItem.label] = true;
this.selectedNumber++;
}
}
}
} else {


+ 47
- 6
mindinsight/ui/src/locales/zh-cn.json View File

@@ -45,7 +45,8 @@
"modelTraceback": "模型溯源",
"dataTraceback": "数据溯源",
"comparePlate": "对比看板",
"disableProfilerTip": "无profiler日志,无法查看性能分析"
"disableProfilerTip": "无profiler日志,无法查看性能分析",
"hardwareVisual": "硬件资源"
},
"modelTraceback": {
"summaryPath": "训练日志路径",
@@ -99,7 +100,8 @@
"samplingData": "数据抽样",
"imagesampleSwitch": "切换标签",
"invalidId": "无效的训练作业",
"summaryDirPath": "训练日志路径:"
"summaryDirPath": "训练日志路径:",
"loadingTip": "加载中"
},
"scalar": {
"titleText": "标量",
@@ -368,7 +370,7 @@
"FPMessage": "前向起始算子:",
"BPMessage": "反向终止算子:",
"approximateTime": "总时长 ≈ ",
"stepInputTip": "请输入step值(1~{max}的正整数)",
"stepInputTip": "请输入step值(1~{max}的正整数,值为空时展示平均值)",
"inputError": "输入参数异常,请输入一个1~{max}的正整数",
"defaultTip": "默认展示平均值",
"downloadTimeline": "下载",
@@ -384,7 +386,45 @@
"title3": "如何使用时间线:",
"content31": "您可以通过时间线信息分析流切分方法是否合理、迭代间隙和拖尾时间是否过长等;",
"content32": "也可以具体定位到某个算子,查看分析它的执行时间。"
}
},
"unit": "ms/次"
},
"hardwareVisual": {
"processor": "昇腾AI处理器",
"ram": "内存",
"selectedCpu": "CPU-选中:",
"allCpu": "CPU-总计:",
"chipNameTip": "芯片名称",
"deviceIdTip": "芯片号",
"availableTip": "芯片是否空闲(仅供参考)",
"healthTip": "芯片健康指数",
"ipTip": "芯片IP地址",
"aicoreTip": "芯片利用率",
"hbmTip": "芯片已用HBM内存",
"powerTip": "芯片功耗",
"temperatureTip": "芯片温度",
"cpuUserTip": "运行于用户态的时间百分比",
"cpuSystemTip": "运行于内核态的时间百分比",
"cpuIdleTip": "处于空闲状态的时间百分比",
"cpuNiceTip": "运行低优先级进程的时间百分比",
"cpuIowaitTip": "等待IO的时间百分比",
"cpuIrqTip": "处理硬中断的时间百分比",
"cpuSoftirqTip": "处理软中断的时间百分比",
"cpuStealTip": "被其他虚拟机抢夺的时间百分比",
"cpuGuestTip": "运行虚拟机的时间百分比",
"cpuGuestniceTip": "运行低优先级虚拟机的时间百分比",
"cpuInterruptTip": "处理硬中断的时间百分比",
"cpuDpcTip": "远程调用的时间百分比",
"noNpuInfo": "暂无昇腾AI处理器信息",
"normal": "正常",
"generalWarn": "一般警告",
"importantWarn": "重要警告",
"emergencyWarn": "紧急警告",
"noChip": "芯片不存在或未启动",
"availableFree": "芯片空闲",
"availableBusy": "芯片已被占用或不可用",
"failQueryChip": "芯片信息查询有误",
"faliQuery": "查询有误"
},
"components": {
"summaryTitle": "训练选择",
@@ -422,6 +462,7 @@
"50542218": "筛选参数错误",
"50545012": "张量数据不存在,请刷新。",
"50545013": "请求的数据过大,请使用其他维度重试。",
"50545014": "查询的张量数据已被新数据替换,请刷新。"
"50545014": "查询的张量数据已被新数据替换,请刷新。",
"50548001": "昇腾AI处理器信息查询超时"
}
}
}

+ 3
- 0
mindinsight/ui/src/main.js View File

@@ -40,6 +40,9 @@ router.beforeEach((to, from, next) => {
store.commit('setIsReload', false);
next();
});
router.onError((error) => {
Vue.prototype.$message.error(i18n.messages[i18n.locale].public.netWorkError);
});

// forbidden showing production tip
Vue.config.productionTip = false;


+ 4
- 0
mindinsight/ui/src/router.js View File

@@ -102,5 +102,9 @@ export default new Router({
},
],
},
{
path: '/hardware-visual',
component: () => import('./views/train-manage/hardware-visual.vue'),
},
],
});

+ 6
- 0
mindinsight/ui/src/services/request-service.js View File

@@ -288,4 +288,10 @@ export default {
},
});
},
getMetricsData() {
return axios({
method: 'get',
url: 'v1/mindinsight/sysmetric/current',
});
},
};

+ 17
- 0
mindinsight/ui/src/store.js View File

@@ -30,10 +30,20 @@ export default new Vuex.Store({
timeReloadValue: localStorage.timeReloadValue
? localStorage.timeReloadValue
: 3,
// Scheduled hardware reload flag
isHardwareTimeReload: localStorage.isHardwareTimeReload === 'false' ? false : true,
// hardware reload time
hardwareTimeReloadValue: localStorage.hardwareTimeReloadValue
? localStorage.hardwareTimeReloadValue
: 5,
// multiSelevtGroup component count
multiSelectedGroupCount: 0,
tableId: 0,
componentsCount: 0,
summaryDirList: undefined,
selectedBarList: [],
hidenDirChecked: [],
customizedColumnOptions: [],
},
mutations: {
// set cancelTokenArr
@@ -71,6 +81,13 @@ export default new Vuex.Store({
setTimeReloadValue: (state, val) => {
state.timeReloadValue = val;
},
// set isHardwareTimeReload
setIsHardwareTimeReload: (state, val) => {
state.isHardwareTimeReload = val;
},
setHardwareTimeReloadValue: (state, val) => {
state.hardwareTimeReloadValue = val;
},
multiSelectedGroupComponentNum(state) {
state.multiSelectedGroupCount++;
},


+ 5
- 1
mindinsight/ui/src/views/train-manage/data-map.vue View File

@@ -172,7 +172,9 @@ export default {
return;
}
this.trainJobID = this.$route.query.train_id;
document.title = `${decodeURIComponent(this.trainJobID)}-${this.$t('trainingDashboard.dataMap')}-MindInsight`;
document.title = `${decodeURIComponent(this.trainJobID)}-${this.$t(
'trainingDashboard.dataMap',
)}-MindInsight`;
this.$nextTick(() => {
this.queryGraphData();
});
@@ -554,6 +556,8 @@ export default {
const value =
select[item] instanceof Array
? select[item].join(', ')
: select[item] === null
? 'None'
: select[item];
this.selectedNode.push({key: item, value: value});
}


+ 45
- 26
mindinsight/ui/src/views/train-manage/data-traceback.vue View File

@@ -435,6 +435,14 @@ export default {
'learning_rate',
'device_num',
],
valueType: {
float: 'float',
int: 'int',
string: 'string',
model_size: 'model_size',
learning_rate: 'learning_rate',
dataset_mark: 'dataset_mark',
},
table: {
columnOptions: {
summary_dir: {
@@ -964,10 +972,10 @@ export default {
id: item,
checked: true,
};
if (value && value.type === 'float') {
obj.type = 'float';
} else if (value && value.type === 'int') {
obj.type = 'int';
if (value && value.type === this.valueType.float) {
obj.type = this.valueType.float;
} else if (value && value.type === this.valueType.int) {
obj.type = this.valueType.int;
}
arrayTemp.push(obj);
});
@@ -1006,14 +1014,14 @@ export default {
content.name === this.repeatTitle ||
content.name === this.shuffleTitle ||
content.id === this.deviceNum ||
(content.type && content.type === 'int')
(content.type && content.type === this.valueType.int)
) {
obj.scale = true;
obj.minInterval = 1;
this.setColorOfSelectedBar(selectedBarList, obj);
} else if (
this.numberTypeIdList.includes(content.id) ||
(content.type && content.type === 'float')
(content.type && content.type === this.valueType.float)
) {
obj.scale = true;
this.setColorOfSelectedBar(selectedBarList, obj);
@@ -1024,7 +1032,7 @@ export default {
show: false,
};
this.setColorOfSelectedBar(selectedBarList, obj);
if (content.id === 'dataset_mark') {
if (content.id === this.valueType.dataset_mark) {
obj.axisLabel = {
show: false,
};
@@ -1073,13 +1081,16 @@ export default {
if (this.parallelEchart) {
this.parallelEchart.off('axisareaselected', null);
window.removeEventListener('resize', this.resizeChart, false);
} else {
this.parallelEchart = Echarts.init(
document.querySelector('#data-echart'),
);
}
this.parallelEchart = Echarts.init(
document.querySelector('#data-echart'),
);
this.parallelEchart.setOption(option, true);
window.addEventListener('resize', this.resizeChart, false);

this.chartEventsListen(parallelAxis);
},
chartEventsListen(parallelAxis) {
this.parallelEchart.on('axisareaselected', (params) => {
this.recordsNumber = 0;
this.showNumber = 0;
@@ -1154,12 +1165,14 @@ export default {
}
const strs = val.split('');
let str = '';
if (val.length > 100) {
return val.substring(0, 12) + '...';
const maxStringLength = 100;
const showStringLength = 12;
if (val.length > maxStringLength) {
return val.substring(0, showStringLength) + '...';
} else {
for (let i = 0, s = ''; (s = strs[i++]); ) {
str += s;
if (!(i % 12)) {
if (!(i % showStringLength)) {
str += '\n';
}
}
@@ -1204,20 +1217,25 @@ export default {
if (isNaN(value) || !value) {
return value;
} else {
if (key === 'learning_rate') {
let temp = value.toPrecision(4);
const numDigits = 4;
if (key === this.valueType.learning_rate) {
let temp = value.toPrecision(numDigits);
let row = 0;
while (temp < 1) {
temp = temp * 10;
row += 1;
}
temp = this.toFixedFun(temp, 4);
temp = this.toFixedFun(temp, numDigits);
return `${temp}${row ? `e-${row}` : ''}`;
} else if (key === 'model_size') {
} else if (key === this.valueType.model_size) {
return value + 'MB';
} else {
if (value < 1000) {
return Math.round(value * Math.pow(10, 4)) / Math.pow(10, 4);
const num = 1000;
if (value < num) {
return (
Math.round(value * Math.pow(10, numDigits)) /
Math.pow(10, numDigits)
);
} else {
const reg = /(?=(\B)(\d{3})+$)/g;
return (value + '').replace(reg, ',');
@@ -1245,7 +1263,8 @@ export default {
* @param {Object} scope
*/
showDialogData(val, scope) {
if (typeof val !== 'string' || val === '{}') {
const emptyObjectStr = '{}';
if (typeof val !== this.valueType.string || val === emptyObjectStr) {
return;
} else {
const isJson = this.isJSON(val);
@@ -1541,7 +1560,7 @@ export default {
hideDataMarkTableData() {
const result = [];
this.selectedBarList.forEach((item) => {
if (item !== 'dataset_mark') {
if (item !== this.valueType.dataset_mark) {
result.push(item);
}
});
@@ -1906,10 +1925,6 @@ export default {
.el-color-alpha-slider {
display: none;
}
.el-select > .el-input {
width: 280px !important;
max-width: 500px !important;
}
.select-inner-input {
width: calc(100% - 140px);
margin: 2px 4px;
@@ -1958,6 +1973,10 @@ export default {
height: 100%;
overflow-y: auto;
position: relative;
.el-select > .el-input {
width: 280px !important;
max-width: 500px !important;
}
.el-table th.is-leaf {
background: #f5f7fa;
}


+ 81
- 64
mindinsight/ui/src/views/train-manage/graph.vue View File

@@ -467,7 +467,7 @@ export default {
show: false,
info: '',
},
scaleRange: [0.0001, 10000], // graph zooms in and zooms out.
scaleRange: [0.001, 1000], // graph zooms in and zooms out.
rightShow: true, // Check whether the right side bar is displayed.
fullScreen: false, // Display Full Screen
totalMemory: 16777216 * 2, // Memory size of the graph plug-in
@@ -583,29 +583,32 @@ export default {
* @param {String} dot dot statement encapsulated in graph data
*/
initGraph(dot) {
this.graphviz = d3
.select('#graph')
.graphviz({useWorker: false, totalMemory: this.totalMemory})
.zoomScaleExtent(this.scaleRange)
.dot(dot)
.attributer(this.attributer)
.render(() => {
this.initSvg();
this.afterInitGraph();
});
try {
this.graphviz = d3
.select('#graph')
.graphviz({useWorker: false, totalMemory: this.totalMemory})
.zoomScaleExtent(this.scaleRange)
.dot(dot)
.attributer(this.attributer)
.render(() => {
this.initSvg();
this.afterInitGraph();
});
} catch (error) {
const svg = document.querySelector('#graph svg');
if (svg) {
svg.remove();
}
this.initGraph(dot);
}

// Generate the dom of the submap.
if (!d3.select('#graphTemp').size()) {
d3.select('body')
.append('div')
.attr('id', 'graphTemp')
.attr('style', 'visibility: collapse');
d3.select('body').append('div').attr('id', 'graphTemp');
}
// Stores the dom of all the sorted subgraphs.
if (!d3.select('#subgraphTemp').size()) {
d3.select('body')
.append('div')
.attr('id', 'subgraphTemp')
.attr('style', 'visibility: collapse');
d3.select('body').append('div').attr('id', 'subgraphTemp');
}
},
initSvg() {
@@ -654,10 +657,7 @@ export default {
this.$nextTick(() => {
this.loading.show = false;
});
const elements = d3
.select('#graph')
.selectAll('g.node, g.edge')
.nodes();
const elements = d3.select('#graph').selectAll('g.node, g.edge').nodes();
elements.forEach((ele) => {
if (!ele.hasAttribute('transform')) {
ele.setAttribute('transform', 'translate(0,0)');
@@ -728,6 +728,7 @@ export default {
.parentNode.id;
name = parentId.replace('_unfold', '');
this.allGraphData[name].index += changePage;
this.selectedNode.name = name;
}
if (unfoldFlag) {
this.dealDoubleClick(name);
@@ -928,26 +929,39 @@ export default {
* @param {Boolean} toUnfold Expand the namespace.
*/
layoutNamescope(name, toUnfold) {
setTimeout(() => {
this.$nextTick(() => {
const dotStr = this.packageNamescope(name);
this.graphvizTemp = d3
.select('#graphTemp')
.graphviz({useWorker: false, totalMemory: this.totalMemory})
.dot(dotStr)
.zoomScaleExtent(this.scaleRange)
.attributer((datum, index, nodes) => {
if (
datum.tag === 'polygon' &&
datum.attributes.stroke !== 'transparent'
) {
datum.attributes.stroke = 'rgb(167, 167, 167)';
}
})
.render(() => {
this.fitGraph('graphTemp');
this.dealNamescopeTempGraph(name);
});
}, 20);
try {
this.graphvizTemp = d3
.select('#graphTemp')
.graphviz({useWorker: false, totalMemory: this.totalMemory})
.dot(dotStr)
.zoomScaleExtent(this.scaleRange)
.attributer((datum, index, nodes) => {
if (
datum.tag === 'polygon' &&
datum.attributes.stroke !== 'transparent'
) {
datum.attributes.stroke = 'rgb(167, 167, 167)';
}
})
.render(() => {
this.fitGraph('graphTemp');
this.dealNamescopeTempGraph(name);
});
} catch (error) {
const graphTempSvg = document.querySelector('#graphTemp svg');
if (graphTempSvg) {
graphTempSvg.remove();
}
const subGraphTempSvg = document.querySelector('#subgraphTemp svg');
if (subGraphTempSvg) {
subGraphTempSvg.remove();
}

this.dealDoubleClick(this.selectedNode.name);
}
});
},
/**
* To obtain graph data, initialize and expand the namespace or aggregate nodes.
@@ -1665,10 +1679,7 @@ export default {
.attr('width', g.node().getBBox().width + this.frameSpace * 2)
.attr('height', g.node().getBBox().height + this.frameSpace * 2);

boxTemp = d3
.select(`${idStr}g[id="${name}_unfold"]`)
.node()
.getBBox();
boxTemp = d3.select(`${idStr}g[id="${name}_unfold"]`).node().getBBox();
// After the namespace dom is successfully encapsulated, set the related data of the data object.
this.allGraphData[name].isUnfold = true;
this.allGraphData[name].size = [boxTemp.width / 72, boxTemp.height / 72];
@@ -1680,8 +1691,9 @@ export default {
const node = document.querySelector(`#graphTemp g[id="${name}"]`);
const box = node.getBBox();
const boxTemp = nodeTemp.getBBox();
const translateStr = `translate(${box.x - boxTemp.x},${box.y -
boxTemp.y})`;
const translateStr = `translate(${box.x - boxTemp.x},${
box.y - boxTemp.y
})`;
nodeTemp.setAttribute('transform', translateStr);
node.parentNode.appendChild(nodeTemp);
document.querySelector('#subgraphTemp svg').remove();
@@ -1732,8 +1744,9 @@ export default {
if (node && nodeTemp) {
const box = node.getBBox();
const boxTemp = nodeTemp.getBBox();
const translateStr = `translate(${box.x - boxTemp.x},${box.y -
boxTemp.y})`;
const translateStr = `translate(${box.x - boxTemp.x},${
box.y - boxTemp.y
})`;
nodeTemp.setAttribute('transform', translateStr);
node.parentNode.appendChild(nodeTemp);
node.remove();
@@ -1809,10 +1822,7 @@ export default {
this.loading.show = true;
}
if (name.includes('/')) {
const subPath = name
.split('/')
.slice(0, -1)
.join('/');
const subPath = name.split('/').slice(0, -1).join('/');
this.layoutNamescope(subPath, true);
} else {
const svg = document.querySelector('#graph svg');
@@ -2029,9 +2039,13 @@ export default {
selectedNode.type === 'name_scope' ||
selectedNode.type === 'aggregation_scope';
this.selectedNode.count = selectedNode.subnode_count;
this.selectedNode.info.attributes = JSON.parse(
JSON.stringify(selectedNode.attr),
);
const attrTemp = JSON.parse(JSON.stringify(selectedNode.attr || {}));
this.selectedNode.info.attributes = Object.keys(attrTemp).map((key) => {
return {
name: key,
value: attrTemp[key],
};
});

Object.keys(selectedNode.input).forEach((key) => {
const value = this.getEdgeLabel(selectedNode.input[key]);
@@ -2288,10 +2302,7 @@ export default {
if (subPsth && this.allGraphData[subPsth]) {
// The virtual node and its subnodes need to return their namespaces.
if (this.allGraphData[subPsth].independent_layout) {
subPsth = subPsth
.split('/')
.slice(0, -1)
.join('/');
subPsth = subPsth.split('/').slice(0, -1).join('/');
}
}
return subPsth;
@@ -2615,7 +2626,7 @@ export default {
this.insideBox.height = this.smallResize.height;
this.insideBox.top = this.insideBox.left = 0;
this.styleSet('#inside-box', this.insideBox);
insideBox.style.cursor = 'not-allowed';
this.insideBox.style.cursor = 'not-allowed';
} else {
let transformString = '';
const transTemp = this.graph.dom.attributes.transform || null;
@@ -2753,8 +2764,9 @@ export default {
`<svg xmlns="http://www.w3.org/2000/svg" ` +
`xmlns:xlink="http://www.w3.org/1999/xlink" width="100%" height="100%" ` +
`viewBox="0.00 0.00 ${this.svg.originSize.width} ${this.svg.originSize.height}"` +
`><g id="smallGraph" class="graph" transform="translate(4,${this.svg
.originSize.height - 4}) scale(1)"` +
`><g id="smallGraph" class="graph" transform="translate(4,${
this.svg.originSize.height - 4
}) scale(1)"` +
`>${this.graph.dom.innerHTML}</g></svg>`;

smallMap.innerHTML = svgOuterHtml;
@@ -3489,4 +3501,9 @@ export default {
padding-right: 32px;
}
}
#graphTemp,
#subgraphTemp {
position: absolute;
bottom: 0;
}
</style>

+ 981
- 0
mindinsight/ui/src/views/train-manage/hardware-visual.vue View File

@@ -0,0 +1,981 @@
<!--
Copyright 2020 Huawei Technologies Co., Ltd. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

<template>
<div class="cl-hardware-visual">
<div class="cl-hardware-content"
v-if="!(chipTableData.length === 0 && cpuList.length===0)">
<div class="cl-hardware-top"
v-if="chipTableData.length">
<div class="cl-hardware-left">
<div class="cl-sub-title"
v-if="chipTableData.length">
{{$t('hardwareVisual.processor')}}
</div>
<div class="cl-chip-wrap">
<el-table v-if="chipTableData.length"
:data="chipTableData"
width="100%"
height="100%"
:row-class-name="tableRowClassName">
<el-table-column width="120">
<template slot="header">
<span class="cl-text-center">
Name
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.chipNameTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</span>
</template>
<template slot-scope="scope">
<span class="cl-text-center">{{ scope.row.chip_name }}</span>
</template>
</el-table-column>
<el-table-column width="80">
<template slot="header">
<span class="cl-text-center">
NPU
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.deviceIdTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</span>
</template>
<template slot-scope="scope">
<span class="cl-text-center">{{ scope.row.device_id }}</span>
</template>
</el-table-column>
<el-table-column width="110">
<template slot="header">
<span class="cl-text-center">
Available
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.availableTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</span>
</template>
<template slot-scope="scope">
<span class="cl-text-center">
<i class="el-icon-success"
v-if="scope.row.available"
:title="$t('hardwareVisual.availableFree')"></i>
<i class="el-icon-question"
:title="$t('hardwareVisual.availableBusy')"
v-else></i>
</span>
</template>
</el-table-column>
<el-table-column width="80">
<template slot="header">
<span class="cl-text-center">
Health
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.healthTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</span>
</template>
<template slot-scope="scope">
<span class="cl-text-center">
<i class="el-icon-success"
v-if="scope.row.health===0"
:title="$t('hardwareVisual.normal')"></i>
<i class="el-icon-warning normal"
v-if="scope.row.health===1"
:title="$t('hardwareVisual.generalWarn')"></i>
<i class="el-icon-warning important"
v-if="scope.row.health===2"
:title="$t('hardwareVisual.importantWarn')"></i>
<i class="el-icon-warning emergency"
v-if="scope.row.health===3"
:title="$t('hardwareVisual.emergencyWarn')"></i>
<i class="el-icon-remove"
v-if="scope.row.health=== 0xffffffff"
:title="$t('hardwareVisual.noChip')"></i>
</span>
</template>
</el-table-column>
<el-table-column width="130">
<template slot="header">
<span class="cl-text-center">
IP Address
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.ipTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</span>
</template>
<template slot-scope="scope">
<span class="cl-text-center">{{ scope.row.ip_address }}</span>
</template>
</el-table-column>
<el-table-column prop="aicore">
<template slot="header">
AI Core(%)
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.aicoreTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</template>
<template slot-scope="scope">
<div class="core-wrap">
<el-progress :percentage="scope.row.aicore_rate===-1?0:scope.row.aicore_rate"
:format="format(scope.row.aicore_rate)"></el-progress>
</div>
</template>

</el-table-column>
<el-table-column prop="hbm_usage"
min-width="100">
<template slot="header">
HBM-Usage(MB)
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.hbmTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</template>
<template slot-scope="scope">
<div class="hbs-wrap">
<el-progress :percentage="scope.row.hbm_info.memory_size?
parseInt(scope.row.hbm_info.memory_usage/scope.row.hbm_info.memory_size*100):0"
:format="formatHbm(scope.row.hbm_info)"></el-progress>
</div>
</template>
</el-table-column>
<el-table-column prop="power">
<template slot="header">
Power(W)
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.powerTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</template>
<template slot-scope="scope">
<div class="power-wrap">
<div class="power"
:style="{width:`${scope.row.power/powerMax*100}%`}">{{scope.row.power}}</div>
</div>
</template>

</el-table-column>
<el-table-column prop="temp"
width="150">
<template slot="header">
Temp(℃)
<el-tooltip class="item"
effect="light"
:content="$t('hardwareVisual.temperatureTip')"
placement="top-start">
<i class="el-icon-info"></i>
</el-tooltip>
</template>
<template slot-scope="scope">
<div class="temp-wrap">
<div class="circle"
:class="{zero:!scope.row.temperature}"></div>
<div class="process-wrap">
<div class="process-cover"
:style="{width:temperatureMax?scope.row.temperature/temperatureMax*100+'%':0}"></div>
</div>
<span>{{scope.row.temperature}}</span>
</div>
</template>
</el-table-column>
</el-table>
<div class="image-noData"
v-if="chipTableData.length === 0">
<div>
<img :src="require('@/assets/images/nodata.png')"
alt="" />
</div>
<p>{{$t("hardwareVisual.noNpuInfo")}}</p>
</div>
</div>
</div>
</div>
<div class="cl-hardware-bottom"
:class="{noNpu:!chipTableData.length}">
<div class="cl-hardware-left">
<div class="cl-sub-title">
CPU
</div>
<div class="cl-cpu-wrap">
<div class="cpu-items">
<div class="cpu-item"
v-for="(item,key) in cpuList"
:key="key">
<div class="cpu"
:class="{selected:item.selected}"
:style="{backgroundColor:item.idle!==undefined?
`rgba(250,152,65,${(100-item.idle).toFixed(2)/100})`:'#ccc'}"
:title="item.idle!==undefined?`Core ${key}`:''"
@click="viewPerCpuInfo(key)">
{{ item.idle!==undefined?(100-item.idle).toFixed(2):'' }}
</div>
</div>
</div>
<div class="cpu-detail">
<div class="all-cpu-info">
<span>{{$t('hardwareVisual.allCpu')}}</span>
<div class="info-item"
v-for="(item,index) in overallCpuInfo"
:key="index">
<el-tooltip class="item"
effect="light"
:content="item.tips"
placement="top-start">
<span>
<span class="label">{{item.label}}</span>
<span class="value">{{`${item.value}%`}}</span>
</span>
</el-tooltip>
</div>
</div>
<div class="selected-cpu-info"
v-if="selectedCpuIndex!==null">
<span>{{$t('hardwareVisual.selectedCpu')}}</span>
<div class="info-item"
v-for="(item,index) in selectedCpuInfo"
:key="index">
<el-tooltip class="item"
effect="light"
:content="item.tips"
placement="top-start">
<span>
<span class="label">{{item.label}}</span>
<span class="value">{{`${item.value}%`}}</span>
</span>
</el-tooltip>
</div>
</div>
</div>
</div>
</div>
<div class="cl-hardware-right">
<div class="cl-sub-title ram">
{{$t('hardwareVisual.ram')}}
</div>
<div class="cl-ram-wrap">
<div class="virtual-wrap">
<div id="virtual"></div>
</div>
</div>
</div>
</div>
</div>
<div class="image-noData"
v-if="chipTableData.length === 0 && cpuList.length===0 && initOver">
<div>
<img :src="require('@/assets/images/nodata.png')"
alt="" />
</div>
<p>{{$t("public.noData")}}</p>
</div>
</div>
</template>

<script>
import echarts from 'echarts';
import RequestService from '../../services/request-service';
export default {
data() {
return {
chipTableData: [],
powerMax: null,
temperatureMax: null,
virtualChart: {
id: 'virtual',
chartDom: null,
data: [],
legend: [],
totalValue: null,
},
defaultCpuNum: 96,
cpuList: [],
overallCpuInfo: [],
selectedCpuInfo: [],
selectedCpuIndex: null,
pieColorArr: ['#5e7ce0', '#ccc', '#a6dd82'],
autoUpdateTimer: null, // Automatic refresh timer
isReloading: false, // Manually refresh
legendSelected: {},
initOver: false,
};
},
computed: {
/**
* Global refresh switch
* @return {Boolean}
*/
isReload() {
return this.$store.state.isReload;
},
/**
* Automatic hardware refresh switch
* @return {Boolean}
*/
isHardwareTimeReload() {
return this.$store.state.isHardwareTimeReload;
},
/**
* Automatic hardware refresh value
* @return {Boolean}
*/
hardwareTimeReloadValue() {
return this.$store.state.hardwareTimeReloadValue;
},
},
watch: {
/**
* Global refresh switch Listener
* @param {Boolean} newVal Value After Change
* @param {Boolean} oldVal Value Before Change
*/
isReload(newVal, oldVal) {
if (newVal) {
this.isReloading = true;
if (this.isHardwareTimeReload) {
this.autoUpdateSamples();
}
this.init();
}
},
/**
* Automatic refresh switch Listener
* @param {Boolean} newVal Value After Change
* @param {Boolean} oldVal Value Before Change
*/
isHardwareTimeReload(newVal, oldVal) {
if (newVal) {
this.autoUpdateSamples();
} else {
this.stopUpdateSamples();
}
},
/**
* The refresh time is changed.
*/
hardwareTimeReloadValue() {
this.autoUpdateSamples();
},
},
destroyed() {
// Disable the automatic refresh function
if (this.autoUpdateTimer) {
clearInterval(this.autoUpdateTimer);
this.autoUpdateTimer = null;
}
// Stop Refreshing
if (this.isReloading) {
this.$store.commit('setIsReload', false);
this.isReloading = false;
}
},
mounted() {
document.title = this.$t('summaryManage.hardwareVisual') + '-MindInsight';
// Automatic refresh
if (this.isHardwareTimeReload) {
this.autoUpdateSamples();
}
this.init();
},

methods: {
/**
* Initialization data
*/
init() {
RequestService.getMetricsData().then(
(res) => {
this.initOver = true;
if (this.isReloading) {
this.$store.commit('setIsReload', false);
this.isReloading = false;
}
if (res && res.data) {
this.chipTableData = res.data.npu || [];
if (this.chipTableData.length === 0) {
this.defaultCpuNum = 192;
}
this.powerMax =
Math.max(...this.chipTableData.map((val) => val.power)) * 1.2;
this.temperatureMax =
Math.max(...this.chipTableData.map((val) => val.temperature)) *
1.2;
// The factor 1.2 adds headroom so the longest bar is not rendered at full width
if (res.data.memory && res.data.memory.virtual) {
this.dealChartData(this.virtualChart, res.data.memory.virtual);
this.setOption(this.virtualChart);
}
if (res.data.cpu) {
const overall = res.data.cpu.overall || {};
this.overallCpuInfo = Object.keys(overall).map((val) => {
return {
label: val,
value: overall[val],
};
});
this.addtips(this.overallCpuInfo);
this.cpuList = (res.data.cpu.percpu || []).map((val) => {
return {...val, selected: false};
});
while (this.cpuList.length < this.defaultCpuNum) {
this.cpuList.push({});
}
if (this.selectedCpuIndex !== null) {
this.viewPerCpuInfo(this.selectedCpuIndex);
} else {
this.selectedCpuInfo = [];
}
this.$nextTick(() => {
const doms = document.querySelectorAll('.fail-row');
if (doms) {
for (let i = 0; i < doms.length; i++) {
doms[i].setAttribute(
'title',
this.$t('hardwareVisual.failQueryChip'),
);
}
}
});
}
}
},
(err) => {
this.chipTableData = [];
this.cpuList = [];
this.initOver = true;
if (this.isReloading) {
this.$store.commit('setIsReload', false);
this.isReloading = false;
}
},
);
},
tableRowClassName({row, rowIndex}) {
if (!row.success) {
return 'fail-row';
}
return '';
},
/**
* add tips
* @param {Array} arr cpu Info
*/
addtips(arr) {
arr.forEach((val) => {
switch (val.label) {
case 'user':
val.tips = this.$t('hardwareVisual.cpuUserTip');
break;
case 'nice':
val.tips = this.$t('hardwareVisual.cpuNiceTip');
break;
case 'system':
val.tips = this.$t('hardwareVisual.cpuSystemTip');
break;
case 'idle':
val.tips = this.$t('hardwareVisual.cpuIdleTip');
break;
case 'iowait':
val.tips = this.$t('hardwareVisual.cpuIowaitTip');
break;
case 'irq':
val.tips = this.$t('hardwareVisual.cpuIrqTip');
break;
case 'softirq':
val.tips = this.$t('hardwareVisual.cpuSoftirqTip');
break;
case 'steal':
val.tips = this.$t('hardwareVisual.cpuStealTip');
break;
case 'guest':
val.tips = this.$t('hardwareVisual.cpuGuestTip');
break;
case 'guest_nice':
val.tips = this.$t('hardwareVisual.cpuGuestniceTip');
break;
case 'interrupt':
val.tips = this.$t('hardwareVisual.cpuInterruptTip');
break;
case 'dpc':
val.tips = this.$t('hardwareVisual.cpuDpcTip');
break;
}
});
},
/**
* View the information of each cpu
* @param {Number} index index
*/
viewPerCpuInfo(index) {
this.cpuList.forEach((val, key) => {
if (val.idle !== undefined) {
if (index === key) {
this.selectedCpuIndex = key;
val.selected = !val.selected;
if (val.selected) {
this.selectedCpuInfo = Object.keys(this.cpuList[index]).map(
(val) => {
return {
label: val,
value: this.cpuList[index][val],
};
},
);
this.selectedCpuInfo.pop();
} else {
this.selectedCpuIndex = null;
this.selectedCpuInfo = [];
}
} else {
if (this.cpuList[index].idle !== undefined) {
val.selected = false;
}
}
}
});
this.addtips(this.selectedCpuInfo);
},
/**
* Handling pie chart data
* @param {Object} chart chart object
* @param {Object} data chart data
*/
dealChartData(chart, data) {
if (data.others === 0) {
chart.legend = ['used', 'available'];
} else {
chart.legend = ['used', 'others', 'available'];
}
chart.data = chart.legend.map((val) => {
return {
value: data[val],
name: val,
};
});
chart.totalValue = 0;
chart.data.forEach((val) => {
chart.totalValue += val.value;
});
},
/**
* Data unit conversion
* @param {Number} n the number of bytes
* @param {Boolean} type format type
* @return {String}
*/
bytesHuman(n, type) {
const symbols = 'KMG'
.split('')
.map((symbol, index) => [symbol, 1 << ((index + 1) * 10)]);
for (const [symbol, prefix] of symbols.reverse()) {
if (n >= prefix) {
if (type) {
return `${n}(${(n / prefix).toFixed(1)}${symbol})`;
} else {
return `${(n / prefix).toFixed(1)}${symbol}`;
}
}
}
return `${n}`;
},
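// Note (illustration only, not part of this patch): bytesHuman scales with binary
// prefixes built from bit shifts (1 << 10 = 1024 for K, 1 << 20 for M, 1 << 30 for G):
//   bytesHuman(8589934592)        -> '8.0G'
//   bytesHuman(8589934592, true)  -> '8589934592(8.0G)'
//   bytesHuman(500)               -> '500' (below 1K, passed through unscaled)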
format(percentage, item) {
return () => {
return percentage === -1
? this.$t('hardwareVisual.faliQuery')
: `${percentage}`;
};
},
formatHbm(hbmInfo) {
return function() {
return `${hbmInfo.memory_usage}/${hbmInfo.memory_size}`;
};
},
/**
* Enable automatic hardware refresh
*/
autoUpdateSamples() {
if (this.autoUpdateTimer) {
clearInterval(this.autoUpdateTimer);
this.autoUpdateTimer = null;
}
this.autoUpdateTimer = setInterval(() => {
this.$store.commit('clearToken');
this.init();
}, this.hardwareTimeReloadValue * 1000);
},
/**
* Disable automatic refresh
*/
stopUpdateSamples() {
if (this.autoUpdateTimer) {
clearInterval(this.autoUpdateTimer);
this.autoUpdateTimer = null;
}
},
setOption(chart) {
const option = {
tooltip: {
trigger: 'item',
formatter: (params) => {
return `${params.name}<br>
${params.marker}${this.bytesHuman(params.value, true)}`;
},
confine: true,
},
legend: {
orient: 'vertical',
left: '50%',
top: '35%',
icon: 'circle',
data: chart.legend,
formatter: (params) => {
let legendStr = '';
for (let i = 0; i < chart.data.length; i++) {
if (chart.data[i].name === params) {
const name = chart.data[i].name;
legendStr = `{a|${this.bytesHuman(
chart.data[i].value,
true,
)}}\n{b|${name}}`;
}
}
return legendStr;
},
selected: this.legendSelected,
textStyle: {
rich: {
a: {
fontSize: 14,
},
b: {
color: '#aeb2bf',
},
},
},
},
series: [
{
name: '',
center: ['25%', '50%'],
type: 'pie',
radius: this.chipTableData.length ? ['40%', '60%'] : ['30%', '40%'],
avoidLabelOverlap: false,
label: {
show: true,
formatter: () => {
return `{a|${this.bytesHuman(chart.totalValue)}}{b|All}`;
},
position: 'center',
textStyle: {
rich: {
a: {
fontSize: 20,
color: '#000',
},
b: {
color: '#aeb2bf',
},
},
},
},
labelLine: {
show: false,
},
data: chart.data,
itemStyle: {
normal: {
color: (params) => {
return this.pieColorArr[params.dataIndex];
},
},
},
},
],
};
this.$nextTick(() => {
const cpuDom = document.getElementById(chart.id);
if (cpuDom) {
chart.chartDom = echarts.init(cpuDom, null);
chart.chartDom.setOption(option, true);
chart.chartDom.resize();
chart.chartDom.on('legendselectchanged', (obj) => {
this.legendSelected = obj.selected;
});
}
});
},
},
};
</script>
<style lang="scss" >
.cl-hardware-visual {
height: 100%;
background-color: #fff;

.cl-hardware-content {
height: 100%;
padding: 0 24px 24px 24px;
.cl-hardware-top {
height: calc(100% - 372px);
padding-top: 16px;
& > div {
width: 100%;
.cl-text-center {
display: inline-block;
text-align: center;
width: 100%;
}
.el-table::before {
height: 0px;
}
}
}
.cl-hardware-bottom {
height: 360px;
.cl-hardware-left {
width: calc(100% - 466px);
margin-right: 16px;
}
.cl-hardware-right {
width: 450px;
}
}
& > div {
height: calc(50% - 8px);
margin-bottom: 16px;
& > div {
float: left;
height: 100%;
border: 1px solid #eee;
border-radius: 4px;
padding: 16px;
.cl-sub-title {
font-weight: bold;
font-size: 16px;
margin-bottom: 15px;
}
.cl-sub-title.ram {
margin-bottom: 10px;
}
.cl-chip-wrap {
height: calc(100% - 36px);
overflow: auto;
.el-icon-question::before {
color: #f06281;
}
.el-icon-success:before {
color: #57d7ac;
}
.el-icon-error:before {
color: #e37783;
}
.el-icon-warning.normal:before {
color: #6f81e4;
}
.el-icon-warning.important:before {
color: #faa048;
}
.el-icon-warning.emergency:before {
color: #f06281;
}
.el-icon-remove:before {
color: #8b8e95;
}
.temp-wrap {
.circle {
width: 10px;
height: 10px;
border-radius: 5px;
background: #ffaa00;
display: inline-block;
position: absolute;
left: 1px;
top: 50%;
margin-top: -4px;
}
.circle.zero {
background: #e6ebf5;
}
.process-wrap {
background: #e6ebf5;
width: calc(100% - 50px);
height: 6px;
display: inline-block;
border-top-right-radius: 50px;
border-bottom-right-radius: 50px;
margin-right: 5px;
.process-cover {
height: 6px;
border-top-right-radius: 50px;
border-bottom-right-radius: 50px;
background: #ff5100;
background-image: linear-gradient(to right, #ffaa00, #ff5100);
}
}
}
.hbs-wrap {
.el-progress-bar {
padding-right: 140px;
margin-right: -145px;
}
}
.core-wrap {
.el-progress-bar {
padding-right: 80px;
margin-right: -85px;
}
}
.power {
background: #e5f6f6;
padding-left: 10px;
}
}
.cl-ram-wrap {
height: calc(100% - 36px);
.virtual-wrap {
height: 100%;
overflow: auto;
#virtual {
height: 100%;
overflow: hidden;
}
}
}
.cl-disk-wrap {
height: calc(100% - 36px);
overflow: auto;
}
.cl-cpu-wrap {
height: 201px;
.cpu-items {
height: 100%;
overflow: auto;
background: url('../../assets/images/cpu-bg.svg') repeat;
padding: 3px 0 0 3px;
.cpu-item {
float: left;
width: calc(6.25% - 3px);
height: 30px;
text-align: center;
background: #fff;
margin-right: 3px;
margin-bottom: 3px;
cursor: pointer;
.cpu {
height: 100%;
line-height: 30px;
}
.cpu.selected {
line-height: 30px;
outline: 3px solid #00a5a7;
}
}
}
.cpu-detail {
& > div {
margin-top: 10px;
& > span {
margin-right: 5px;
color: #b2b4bb;
}
& > div {
display: inline-block;
padding: 0 7px;
border-right: 1px solid #ccc;
&:last-child {
border-right: none;
}
.label {
margin-right: 5px;
cursor: pointer;
}
.value {
display: inline-block;
width: 40px;
text-align: right;
cursor: pointer;
}
}
}
}
}
}
}
.cl-hardware-bottom.noNpu {
padding-top: 16px;
height: 570px;
.cl-cpu-wrap {
height: 399px;
}
}
.el-table thead tr {
background: #f0f3fa;
}
.el-table th.is-leaf .cell {
border-left: 1px solid #d4d9e6;
}
.el-table th.is-leaf:first-child .cell {
border-left: none;
}
.el-pagination {
margin: 7px 0;
float: right;
}
}
.el-table th {
height: 32px;
}
.image-noData {
width: 100%;
height: 100%;
display: flex;
justify-content: center;
align-items: center;
flex-direction: column;
p {
font-size: 16px;
padding-top: 10px;
}
}
.el-icon-info:before {
color: #6c7280;
}
.el-table .fail-row {
opacity: 0.24;
filter: grayscale(1);
}
}
</style>

+84 -54 mindinsight/ui/src/views/train-manage/model-traceback.vue

@@ -128,7 +128,7 @@ limitations under the License.
:prop="key"
:label="table.columnOptions[key].label.substring(3)"
show-overflow-tooltip
min-width="150"
min-width="120"
sortable="custom">
<template slot="header"
slot-scope="scope">
@@ -152,7 +152,7 @@ limitations under the License.
:prop="key"
:label="table.columnOptions[key].label.substring(3)"
show-overflow-tooltip
min-width="150"
min-width="120"
sortable="custom">
<template slot="header"
slot-scope="scope">
@@ -426,6 +426,26 @@ export default {
metric: 'metric/',
userDefined: 'user_defined/',
},
valueType: {
int: 'int',
str: 'str',
mixed: 'mixed',
category: 'category',
model_size: 'model_size',
dataset_mark: 'dataset_mark',
},
valueName: {
userDefined: 'userDefined',
metric: 'metric',
UserDefined: 'UserDefined',
Metric: 'Metric',
},
labelValue: {
loss: 'loss',
batch_size: 'batch_size',
epoch: 'epoch',
learning_rate: 'learning_rate',
},
};
},
computed: {},
@@ -479,8 +499,9 @@ export default {
}
this.addIconBorder(row);
this.tagDialogShow = true;
const dialogHeight = 130;
document.getElementById('tag-dialog').style.top =
window.event.clientY - 130 + 'px';
window.event.clientY - dialogHeight + 'px';
},

/**
@@ -790,7 +811,7 @@ export default {
required: true,
},
loss: {
label: 'loss',
label: this.labelValue.loss,
required: true,
},
network: {
@@ -814,11 +835,11 @@ export default {
required: false,
},
epoch: {
label: 'epoch',
label: this.labelValue.epoch,
required: false,
},
batch_size: {
label: 'batch_size',
label: this.labelValue.batch_size,
required: false,
},
device_num: {
@@ -909,9 +930,9 @@ export default {
} else if (item.indexOf('user_defined/') === 0) {
userDefinedArray.push(item);
} else if (
item === 'epoch' ||
item === 'batch_size' ||
item === 'learning_rate'
item === this.labelValue.epoch ||
item === this.labelValue.batch_size ||
item === this.labelValue.learning_rate
) {
hyperArray.push(item);
} else {
@@ -965,7 +986,9 @@ export default {
.then(
(res) => {
if (res && res.data && res.data.object) {
const list = this.setDataOfModel(res.data.object);
const listTemp = this.setDataOfModel(res.data.object);
const list = JSON.parse(JSON.stringify(listTemp));
const tempEchartData = JSON.parse(JSON.stringify(listTemp));
if (allData) {
let customized = {};
if (res.data.customized) {
@@ -973,17 +996,17 @@ export default {
const customizedKeys = Object.keys(customized);
if (customizedKeys.length) {
customizedKeys.forEach((i) => {
if (customized[i].type === 'int') {
if (customized[i].type === this.valueType.int) {
this.keysOfIntValue.push(i);
} else if (customized[i].type === 'str') {
} else if (customized[i].type === this.valueType.str) {
this.keysOfStringValue.push(i);
} else if (customized[i].type === 'mixed') {
} else if (customized[i].type === this.valueType.mixed) {
// list of type mixed
this.keysOfMixed.push(i);
this.keysOfStringValue.push(i);
}
if (i.startsWith(this.replaceStr.userDefined)) {
this.labelObj.userDefined = 'userDefined';
this.labelObj.userDefined = this.valueName.userDefined;
customized[i].label = customized[i].label.replace(
this.replaceStr.userDefined,
'[U]',
@@ -997,7 +1020,7 @@ export default {
this.replaceStr.metric,
'[M]',
);
this.labelObj.metric = 'metric';
this.labelObj.metric = this.valueName.metric;
const metricObject = {value: '', label: ''};
metricObject.value = customized[i].label;
metricObject.label = customized[i].label;
@@ -1033,7 +1056,7 @@ export default {
];
if (this.labelObj.metric) {
const metricTemp = {
label: 'Metric',
label: this.valueName.Metric,
options: this.metricOptions,
};
this.checkOptions.push(metricTemp);
@@ -1041,7 +1064,7 @@ export default {
}
if (this.labelObj.userDefined) {
const userTemp = {
label: 'UserDefined',
label: this.valueName.UserDefined,
options: this.userOptions,
};
this.checkOptions.push(userTemp);
@@ -1049,19 +1072,19 @@ export default {
}
Object.keys(this.table.columnOptions).forEach((item) => {
if (
item !== 'epoch' &&
item !== 'learning_rate' &&
item !== 'batch_size'
item !== this.labelValue.epoch &&
item !== this.labelValue.learning_rate &&
item !== this.labelValue.batch_size
) {
const index = this.table.optionsNotInCheckbox.indexOf(
const haveItem = this.table.optionsNotInCheckbox.includes(
item,
);
if (index < 0) {
if (!haveItem) {
const otherType = {value: '', label: ''};
otherType.value = this.table.columnOptions[item].label;
otherType.label = this.table.columnOptions[item].label;
if (
otherType.value === 'loss' ||
otherType.value === this.labelValue.loss ||
otherType.value ===
this.$t('modelTraceback.network') ||
otherType.value ===
@@ -1119,7 +1142,6 @@ export default {
this.noData = !res.data.object.length;
this.showEchartPic = !!res.data.object.length;
if (this.hidenDirChecked.length) {
const tempEchartData = this.setDataOfModel(res.data.object);
this.hidenDirChecked.forEach((dir) => {
tempEchartData.forEach((item, index) => {
if (item.summary_dir === dir) {
@@ -1230,8 +1252,9 @@ export default {
? item.added_info.tag
: 0;
const modelData = JSON.parse(JSON.stringify(item.model_lineage));
const byteNum = 1024;
modelData.model_size = parseFloat(
((modelData.model_size || 0) / 1024 / 1024).toFixed(2),
((modelData.model_size || 0) / byteNum / byteNum).toFixed(2),
);
const keys = Object.keys(modelData.metric || {});
if (keys.length) {
@@ -1512,9 +1535,9 @@ export default {
values[i[key].toString()] = i[key].toString();
}
});
obj.type = 'category';
obj.type = this.valueType.category;
obj.data = Object.keys(values);
if (key === 'dataset_mark') {
if (key === this.valueType.dataset_mark) {
obj.axisLabel = {
show: false,
};
@@ -1612,15 +1635,15 @@ export default {
if (this.echart.chart) {
this.echart.chart.off('axisareaselected', null);
window.removeEventListener('resize', this.resizeChart, false);
} else {
this.echart.chart = Echarts.init(document.querySelector('#echart'));
}

this.echart.chart = Echarts.init(document.querySelector('#echart'));
this.echart.chart.setOption(echartOption, true);
window.addEventListener('resize', this.resizeChart, false);

// register the axis area selection listener
this.chartEventsListen(parallelAxis);
},
chartEventsListen(parallelAxis) {
this.echart.chart.on('axisareaselected', (params) => {
// key of mixed item
this.recordsNumber = 0;
this.showNumber = 0;
const key = params.parallelAxisId;
@@ -1649,15 +1672,16 @@ export default {
const [axisData] = parallelAxis.filter((i) => {
return i.id === key;
});

if (axisData && range.length === 2) {
if (axisData && axisData.id === 'model_size') {
const lineLength = 2;
if (axisData && range.length === lineLength) {
if (axisData && axisData.id === this.valueType.model_size) {
const byteNum = 1024;
range = [
parseInt(range[0] * 1024 * 1024, 0),
parseInt(range[1] * 1024 * 1024, 0),
parseInt(range[0] * byteNum * byteNum, 0),
parseInt(range[1] * byteNum * byteNum, 0),
];
}
if (axisData.type === 'category') {
if (axisData.type === this.valueType.category) {
const rangeData = {};
for (let i = range[0]; i <= range[1]; i++) {
rangeData[axisData.data[i]] = axisData.data[i];
@@ -1720,11 +1744,11 @@ export default {
];
this.keysOfMixed = [];
customizedKeys.forEach((i) => {
if (customized[i].type === 'int') {
if (customized[i].type === this.valueType.int) {
this.keysOfIntValue.push(i);
} else if (customized[i].type === 'str') {
} else if (customized[i].type === this.valueType.str) {
this.keysOfStringValue.push(i);
} else if (customized[i].type === 'mixed') {
} else if (customized[i].type === this.valueType.mixed) {
// list of type mixed
this.keysOfMixed.push(i);
this.keysOfStringValue.push(i);
@@ -1858,20 +1882,25 @@ export default {
if (isNaN(value) || !value) {
return value;
} else {
if (key === 'learning_rate') {
let temp = value.toPrecision(4);
const numDigits = 4;
if (key === this.labelValue.learning_rate) {
let temp = value.toPrecision(numDigits);
let row = 0;
while (temp < 1) {
temp = temp * 10;
row += 1;
}
temp = this.toFixedFun(temp, 4);
temp = this.toFixedFun(temp, numDigits);
return `${temp}${row ? `e-${row}` : ''}`;
} else if (key === 'model_size') {
} else if (key === this.valueType.model_size) {
return value + 'MB';
} else {
if (value < 1000) {
return Math.round(value * Math.pow(10, 4)) / Math.pow(10, 4);
const num = 1000;
if (value < num) {
return (
Math.round(value * Math.pow(10, numDigits)) /
Math.pow(10, numDigits)
);
} else {
const reg = /(?=(\B)(\d{3})+$)/g;
return (value + '').replace(reg, ',');
@@ -1883,7 +1912,9 @@ export default {
* Resizing Chart
*/
resizeChart() {
this.echart.chart.resize();
if (this.echart && this.echart.chart) {
this.echart.chart.resize();
}
},
},
/**
@@ -1927,11 +1958,6 @@ export default {
.el-tag.el-tag--info .el-tag__close {
color: #fff;
}
// select
.el-select > .el-input {
min-width: 280px !important;
max-width: 500px !important;
}
.select-inner-input {
width: calc(100% - 140px);
margin: 2px 4px;
@@ -2091,7 +2117,11 @@ export default {
-webkit-box-shadow: 0 1px 0 0 rgba(200, 200, 200, 0.5);
box-shadow: 0 1px 0 0 rgba(200, 200, 200, 0.5);
overflow: hidden;

// select
.el-select > .el-input {
min-width: 180px !important;
max-width: 500px !important;
}
.top-area {
margin: 24px 32px 12px;
display: flex;


+32 -4 mindinsight/ui/src/views/train-manage/operator.vue

@@ -22,6 +22,7 @@ limitations under the License.
<el-tab-pane label="AI CORE"
name="core">
<div class="cl-profiler-top"
:class="{fullScreen:fullScreen}"
v-if="coreCharts.data.length">
<div>
<span class="profiler-title">
@@ -46,10 +47,15 @@ limitations under the License.
</div>
</div>
<div class="cl-profiler-bottom"
:class="{fullScreen:fullScreen}"
v-if="coreCharts.data.length">
<span class="profiler-title">
{{ $t('operator.operatorStatistics') }}
</span>
<img src="../../assets/images/full-screen.png"
:title="$t('graph.fullScreen')"
class="fullScreen"
@click="fullScreenControl()">
<div>
<el-radio-group v-model="statisticType"
@change="coreTableChange"
@@ -103,7 +109,7 @@ limitations under the License.
:width="(ele==='avg_execution_time'|| ele==='subgraph' ||
ele==='op_name'|| ele==='op_type')?'220':''"
show-overflow-tooltip
:label="ele==='avg_execution_time'?`${ele} (ms)`:ele">
:label="ele==='avg_execution_time'?`${ele} (${$t('profiling.unit')})`:ele">
</el-table-column>
</el-table>
<el-pagination :current-page="props.row.opDetailPage.offset + 1"
@@ -120,7 +126,7 @@ limitations under the License.
:property="item"
:key="$index"
sortable
:label="item==='execution_time'?`${item} (ms)`:item">
:label="item==='execution_time'?`${item} (${$t('profiling.unit')})`:item">
</el-table-column>
</el-table>
<el-table v-show="statisticType && opAllTypeList.opDetailCol && opAllTypeList.opDetailCol.length"
@@ -135,7 +141,7 @@ limitations under the License.
<el-table-column v-for="(item, $index) in opAllTypeList.opDetailCol"
:property="item"
:key="$index"
:label="item==='avg_execution_time'?`${item} (ms)`:item"
:label="item==='avg_execution_time'?`${item} (${$t('profiling.unit')})`:item"
:sortable="item === 'op_info' ? false : 'custom'"
:width="(item==='avg_execution_time'|| item==='subgraph' ||
item==='op_name'|| item==='op_type')?'220':''"
@@ -198,7 +204,8 @@ limitations under the License.
<el-table-column v-for="(item, $index) in opCpuList.opDetailCol"
:property="item"
:key="$index"
:label="(item==='total_time' || item==='dispatch_time')?`${item} (ms)`:item"
:label="(item==='total_time' || item==='dispatch_time')?
`${item} (${$t('profiling.unit')})`:item"
sortable="custom"
show-overflow-tooltip>
</el-table-column>
@@ -318,6 +325,7 @@ export default {
childProp: null,
childOrder: null,
},
fullScreen: false,
};
},
watch: {
@@ -347,6 +355,14 @@ export default {
}, 300);
}
},
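/**
* Toggle full-screen display of the statistics table
*/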
fullScreenControl() {
this.fullScreen = !this.fullScreen;
if (this.coreCharts.chartDom && !this.fullScreen) {
this.$nextTick(() => {
this.coreCharts.chartDom.resize();
});
}
},
/**
* Current device change
*/
@@ -1106,13 +1122,25 @@ export default {
.cl-search-box {
float: right;
margin-bottom: 10px;
margin-right: 20px;
}
.cl-profiler-top {
height: 45%;
}
.cl-profiler-top.fullScreen {
display: none;
}
.cl-profiler-bottom {
height: 55%;
padding-top: 10px;
.fullScreen {
float: right;
margin-top: 5px;
cursor: pointer;
}
}
.cl-profiler-bottom.fullScreen {
height: 100%;
}
.cpu-tab {
.cl-profiler-top {


+230 -128 mindinsight/ui/src/views/train-manage/profiling-dashboard.vue

@@ -73,8 +73,8 @@ limitations under the License.
markerHeight="8"
orient="auto">
<path d="M1,1 L1,7 L9,4 z"
fill="#E6EBF5"
stroke="#E6EBF5"></path>
fill="#6c7280"
stroke="#6c7280"></path>
</marker>
<marker id="marker_start"
refX="5"
@@ -83,14 +83,14 @@ limitations under the License.
markerHeight="8"
orient="auto">
<path d="M9,1 L9,7 L1,4 z"
fill="#E6EBF5"
stroke="#E6EBF5"></path>
fill="#6c7280"
stroke="#6c7280"></path>
</marker>
</defs>
</svg>
</div>
<div class="image-noData"
v-if="svg.noData">
v-if="svg.noData && svg.initOver">
<div>
<img :src="require('@/assets/images/nodata.png')"
alt="" />
@@ -236,7 +236,7 @@ limitations under the License.
</div>
</div>
<div class="image-noData"
v-if="processSummary.noData">
v-if="processSummary.noData && processSummary.initOver">
<div>
<img :src="require('@/assets/images/nodata.png')"
alt="" />
@@ -257,7 +257,7 @@ limitations under the License.
</div>
</div>
<div class="image-noData"
v-if="pieChart.noData && pieChart.data.length === 0">
v-if="pieChart.noData && pieChart.data.length === 0 && pieChart.initOver">
<div>
<img :src="require('@/assets/images/nodata.png')"
alt="" />
@@ -281,7 +281,7 @@ limitations under the License.
<span class="time">
<span class="bar"
:style="{width: item.time / pieChart.topN[0].time * 100 + '%'}"></span>
<span class="value">{{item.time}}ms</span>
<span class="value">{{item.time + $t('profiling.unit')}}</span>
</span>
</li>
</ul>
@@ -337,7 +337,7 @@ limitations under the License.

</div>
<div class="image-noData"
v-if="timelineInfo.noData">
v-if="timelineInfo.noData && timelineInfo.initOver">
<div>
<img :src="require('@/assets/images/nodata.png')"
alt="" />
@@ -372,19 +372,27 @@ export default {
svgPadding: 20,
totalWidth: 0,
totalTime: 0,
rowHeight: 60,
cellHeight: 40,
cellPadding: 0,
rowPadding: 20,
rowMargin: 10,
totalHeight: 0,
markerPadding: 4,
minRate: 0.05,
minRate: 0.1,
minTime: 0,
minWidth: 1,
fontSize: 12,
textMargin: 21,
namespaceURI: 'http://www.w3.org/2000/svg',
resizeTimer: null,
colorList: [
['#A6DD82', '#edf8e6'],
['#6CBFFF', '#e2f2ff'],
['#fa8e5b', '#fff4de'],
['#01a5a7', '#cceded'],
],
colorIndex: 0,
colors: {
iteration_interval: ['#A6DD82', '#edf8e6'],
fp_and_bp: ['#6CBFFF', '#e2f2ff'],
tail: ['#fa8e5b', '#fff4de'],
stream_parallel: ['#01a5a7', '#cceded'],
},
noData: false,
initOver: false,
},
trainingJobId: this.$route.query.id,
summaryPath: this.$route.query.dir,
@@ -396,6 +404,7 @@ export default {
noData: false,
topN: [],
colorList: ['#6C92FA', '#6CBFFF', '#4EDED2', '#7ADFA0', '#A6DD82'],
initOver: false,
},
timeLine: {
data: null,
@@ -407,6 +416,7 @@ export default {
opNum: 0,
opTimes: 0,
noData: true,
initOver: false,
},
processSummary: {
noData: true,
@@ -422,6 +432,7 @@ export default {
full: 0,
total: 0,
},
initOver: false,
},
};
},
@@ -474,6 +485,7 @@ export default {
device_id: this.currentCard,
};
RequestService.queryProcessSummary(params).then((resp) => {
this.processSummary.initOver = true;
if (resp && resp.data) {
const data = JSON.parse(JSON.stringify(resp.data));
this.processSummary.count = Object.keys(data).length;
@@ -498,6 +510,7 @@ export default {
}
} else {
this.dealProcess(null);
this.processSummary.initOver = true;
}
});
},
@@ -573,6 +586,7 @@ export default {
};
RequestService.getProfilerOpData(params)
.then((res) => {
this.pieChart.initOver = true;
if (res && res.data) {
if (res.data.object) {
this.pieChart.data = [];
@@ -614,6 +628,7 @@ export default {
})
.catch(() => {
this.pieChart.noData = true;
this.pieChart.initOver = true;
});
},
queryTrainingTrace() {
@@ -624,21 +639,20 @@ export default {
};
RequestService.queryTrainingTrace(params).then(
(res) => {
this.svg.initOver = true;
if (
res.data &&
res.data.training_trace_graph &&
res.data.training_trace_graph.length
) {
this.svg.noData = false;
document.querySelector('#trace').style.height = `${res.data
.training_trace_graph.length * this.svg.rowHeight}px`;
this.svg.data = JSON.parse(
JSON.stringify(res.data.training_trace_graph),
);
this.removeTrace();
setTimeout(() => {
this.dealTraceData();
}, 100);
this.$nextTick(() => {
this.packageTraceData(
JSON.parse(JSON.stringify(res.data.training_trace_graph)),
);
});

if (res.data.summary) {
this.fp_and_bp_percent = res.data.summary.fp_and_bp_percent;
this.iteration_interval_percent =
@@ -664,44 +678,86 @@ export default {
document.querySelector('#trace').style.height = '0px';
this.svg.noData = true;
this.svg.data = [];
this.svg.initOver = true;
this.removeTrace();
},
);
},

packageTraceData(traceGraph) {
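// Group each trace row: spans shorter than minTime are isolated into
// their own sub-row so their labels stay readable, then row heights
// and the total SVG height are computed.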
this.svg.totalTime = 0;
this.svg.minTime = 0;
this.svg.totalHeight = 0;
const data = [];

if (traceGraph && traceGraph[0] && traceGraph[0][0]) {
this.svg.totalTime = traceGraph[0][0].duration;
this.svg.minTime = this.svg.minRate * this.svg.totalTime;

traceGraph.forEach((row, index) => {
const rowObj = {
rowCount: 0,
data: [],
height: 0,
startY: this.svg.totalHeight,
};
let obj = [];
for (let i = 0; i < row.length; i++) {
if (row[i].duration < this.svg.minTime) {
if (obj.length) {
rowObj.data.push(obj);
obj = [];
rowObj.rowCount++;
}
rowObj.data.push([row[i]]);
rowObj.rowCount++;
} else {
obj.push(row[i]);
}

if (i === row.length - 1 && obj.length) {
rowObj.data.push(obj);
obj = [];
rowObj.rowCount++;
}
}
rowObj.height =
rowObj.rowCount * this.svg.cellHeight +
(rowObj.rowCount - 1) * this.svg.cellPadding +
(index ? this.svg.rowPadding * 2 : 0);

this.svg.totalHeight += rowObj.height + this.svg.rowMargin;
data.push(rowObj);
});

this.svg.totalHeight += this.svg.rowPadding;
document.querySelector(
'#trace',
).style.height = `${this.svg.totalHeight}px`;
this.svg.data = JSON.parse(JSON.stringify(data));

this.$nextTick(() => {
this.dealTraceData();
});
}
},

dealTraceData() {
const traceDom = document.querySelector('#trace');
if (traceDom) {
this.svg.totalWidth = traceDom.offsetWidth - this.svg.svgPadding * 2;

if (this.svg.data[0] && this.svg.data[0].length) {
if (this.svg.data[0] && this.svg.data[0].data.length) {
const svg = traceDom.querySelector('svg');
this.svg.totalTime = this.svg.data[0][0].duration;

if (this.svg.totalTime) {
this.svg.colorIndex = 0;
const minTime = this.svg.minRate * this.svg.totalTime;

this.svg.data.forEach((row, index) => {
if (row && row.length) {
const dashedLine = this.addDashedLine(index);
svg.insertBefore(dashedLine, svg.querySelector('g'));

row.forEach((i) => {
if (i.duration) {
if (i.name) {
const tempDom = this.createRect(i, index);
const tempStr = `g${
i.duration > minTime ? '' : '.arrow'
}`;
svg.insertBefore(tempDom, svg.querySelector(tempStr));
} else {
const tempDom = this.createArrow(i, index);
svg.appendChild(tempDom);
}
}
});
this.svg.data.forEach((item, index) => {
let itemDom = {};
if (index) {
itemDom = this.createMultipleRowContainer(item);
} else {
itemDom = this.createRowContainer(item.data, item.startY);
}
svg.appendChild(itemDom);
});
}
} else {
@@ -709,40 +765,69 @@ export default {
}
}
},
addDashedLine(index) {
const x1 = this.svg.svgPadding;
const x2 = this.svg.svgPadding + this.svg.totalWidth;
const y = index * this.svg.rowHeight;
const line = document.createElementNS(this.svg.namespaceURI, 'line');
line.setAttribute('x1', x1);
line.setAttribute('y1', y);
line.setAttribute('x2', x2);
line.setAttribute('y2', y);
line.setAttribute('style', 'stroke:#E2E2E2;stroke-width:1');
line.setAttribute('stroke-dasharray', '5 5');

createMultipleRowContainer(item) {
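// Wrap a non-first trace row in a grey background rect container.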
const rectContainer = document.createElementNS(
this.svg.namespaceURI,
'g',
);
rectContainer.setAttribute('class', 'container');

const rect = document.createElementNS(this.svg.namespaceURI, 'rect');
rect.setAttribute('x', this.svg.svgPadding);
rect.setAttribute('y', item.startY + this.svg.rowPadding);
rect.setAttribute('height', item.height);
rect.setAttribute('width', this.svg.totalWidth);
rect.setAttribute('style', 'fill:#edf0f5;stroke:#E2E2E2;stroke-width:1');
rectContainer.appendChild(rect);

const temp = this.createRowContainer(
item.data,
item.startY + this.svg.rowPadding,
);
rectContainer.appendChild(temp);
return rectContainer;
},

createRowContainer(data, startY) {
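// Render one packaged row: named spans with a duration become rects,
// unnamed ones become arrows.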
const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'dashedLine');
g.appendChild(line);

data.forEach((row, index) => {
const y =
startY +
this.svg.rowPadding +
index * (this.svg.cellPadding + this.svg.cellHeight);
row.forEach((i) => {
if (i.duration) {
let temp;
if (i.name) {
temp = this.createRect(i, y);
g.insertBefore(temp, g.querySelector('g'));
} else {
temp = this.createArrow(i, y);
g.appendChild(temp);
}
}
});
});
return g;
},
createRect(data, rowIndex) {
const color = this.svg.colorList[
rowIndex > 1 ? 3 : this.svg.colorIndex++ % 4
];
const height = 40;
const width = (data.duration / this.svg.totalTime) * this.svg.totalWidth;
const fontSize = 12;
const normalRect = data.duration > this.svg.minRate * this.svg.totalTime;

createRect(data, startY) {
const color =
data.name && this.svg.colors[data.name]
? this.svg.colors[data.name]
: this.svg.colors.stream_parallel;

const x1 =
(data.start / this.svg.totalTime) * this.svg.totalWidth +
this.svg.svgPadding;
const y1 =
rowIndex * this.svg.rowHeight + (this.svg.rowHeight - height) / 2;

const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'rect');
const gChild = document.createElementNS(this.svg.namespaceURI, 'g');
const width = Math.max(
this.svg.minWidth,
(data.duration / this.svg.totalTime) * this.svg.totalWidth,
);

let name = '';
switch (data.name) {
case 'iteration_interval':
@@ -759,102 +844,117 @@ export default {
break;
}

const textContent = `${name}: ${data.duration.toFixed(4)}ms`;
const textWidth = this.getTextWidth(textContent);
const normalSize = data.duration >= this.svg.minTime;

const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'rect');

const rect = document.createElementNS(this.svg.namespaceURI, 'rect');
rect.setAttribute('x', x1);
rect.setAttribute('y', y1);
rect.setAttribute('height', height);
rect.setAttribute('y', startY);
rect.setAttribute('height', this.svg.cellHeight);
rect.setAttribute('width', width);
rect.setAttribute('style', `fill:${color[1]};stroke:${color[1]};`);
rect.setAttribute('style', `fill:${color[1]};stroke:${color[0]};`);

const foreignObject = document.createElementNS(
this.svg.namespaceURI,
'foreignObject',
);
foreignObject.textContent = `${name}: ${data.duration.toFixed(4)}ms`;
const textWidth = this.getTextWidth(foreignObject.textContent);

foreignObject.textContent = textContent;
foreignObject.setAttribute(
'x',
normalRect
normalSize
? x1
: Math.min(
this.svg.svgPadding * 2 + this.svg.totalWidth - textWidth,
Math.max(0, x1 + width / 2 - textWidth / 2),
this.svg.svgPadding * 2 +
this.svg.totalWidth -
textWidth -
this.svg.textMargin,
Math.max(this.svg.textMargin, x1 + width / 2 - textWidth / 2),
),
);

foreignObject.setAttribute(
'y',
y1 + (height - fontSize) / 2 + (normalRect ? 0 : fontSize),
);
foreignObject.setAttribute('height', fontSize);
foreignObject.setAttribute('y', startY);
foreignObject.setAttribute('height', this.svg.cellHeight);
foreignObject.setAttribute('width', width);
foreignObject.setAttribute('style', `color:${color[0]}`);
foreignObject.setAttribute(
'class',
`content${normalRect ? '' : ' content-mini'}`,
`content${normalSize ? '' : ' content-mini'}`,
);

const title = document.createElementNS(this.svg.namespaceURI, 'title');
title.textContent = `${name}: ${data.duration.toFixed(4)}ms`;
title.textContent = textContent;

gChild.appendChild(rect);
gChild.appendChild(foreignObject);
gChild.appendChild(title);
g.appendChild(gChild);
g.appendChild(rect);
g.appendChild(foreignObject);
g.appendChild(title);
return g;
},

createArrow(data, rowIndex) {
createArrow(data, startY) {
const width = (data.duration / this.svg.totalTime) * this.svg.totalWidth;
const x1 =
(data.start / this.svg.totalTime) * this.svg.totalWidth +
this.svg.markerPadding +
this.svg.svgPadding;
const x2 = x1 + width - this.svg.markerPadding * 2;
const y = rowIndex * this.svg.rowHeight + this.svg.rowHeight / 2;
const centerY = startY + this.svg.cellHeight / 2;
const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'arrow');

const line = document.createElementNS(this.svg.namespaceURI, 'line');
line.setAttribute('x1', x1);
line.setAttribute('y1', y);
line.setAttribute('x2', x2);
line.setAttribute('y2', y);
line.setAttribute('style', 'stroke:#E6EBF5;stroke-width:1');
line.setAttribute('marker-end', 'url(#marker_end)');
line.setAttribute('marker-start', 'url(#marker_start)');
line.setAttribute('y1', centerY);
line.setAttribute('y2', centerY);
line.setAttribute('style', 'stroke:#6c7280;stroke-width:1');
if (width > this.svg.markerPadding) {
line.setAttribute('x1', x1 + this.svg.markerPadding);
line.setAttribute('x2', x1 + width - this.svg.markerPadding);
line.setAttribute('marker-end', 'url(#marker_end)');
line.setAttribute('marker-start', 'url(#marker_start)');
} else {
line.setAttribute('x1', x1);
line.setAttribute('x2', x1 + width);
}

const text = document.createElementNS(this.svg.namespaceURI, 'text');
text.textContent = `${
rowIndex === 0 ? this.$t('profiling.approximateTime') : ''
data.duration === this.svg.totalTime
? this.$t('profiling.approximateTime')
: ''
}${data.duration.toFixed(4)}ms`;
const textWidth = this.getTextWidth(text.textContent);
const textWidth = text.textContent
? this.getTextWidth(text.textContent)
: 0;
text.setAttribute(
'x',
Math.min(
this.svg.svgPadding * 2 + this.svg.totalWidth - textWidth,
Math.max(0, (x2 - x1) / 2 + x1 - textWidth / 2),
this.svg.svgPadding * 2 +
this.svg.totalWidth -
textWidth -
this.svg.textMargin,
Math.max(this.svg.textMargin, width / 2 + x1 - textWidth / 2),
),
);
text.setAttribute('y', y - 6);
text.setAttribute('font-size', 12);
text.setAttribute('fill', '#6c7280');
text.setAttribute('y', centerY - this.svg.fontSize / 2);
text.setAttribute('font-size', this.svg.fontSize);
text.setAttribute('fill', 'black');

const startLine = document.createElementNS(this.svg.namespaceURI, 'line');
startLine.setAttribute('x1', x1 - this.svg.markerPadding);
startLine.setAttribute('y1', y - this.svg.rowHeight / 4);
startLine.setAttribute('x2', x1 - this.svg.markerPadding);
startLine.setAttribute('y2', y + this.svg.rowHeight / 4);
startLine.setAttribute('style', 'stroke:#E6EBF5;stroke-width:1');
startLine.setAttribute('x1', x1);
startLine.setAttribute('y1', startY);
startLine.setAttribute('x2', x1);
startLine.setAttribute('y2', startY + this.svg.cellHeight);
startLine.setAttribute('style', 'stroke:#6c7280;stroke-width:1');
g.appendChild(startLine);

const endLine = document.createElementNS(this.svg.namespaceURI, 'line');
endLine.setAttribute('x1', x1 + width - this.svg.markerPadding);
endLine.setAttribute('y1', y - this.svg.rowHeight / 4);
endLine.setAttribute('x2', x1 + width - this.svg.markerPadding);
endLine.setAttribute('y2', y + this.svg.rowHeight / 4);
endLine.setAttribute('style', 'stroke:#E6EBF5;stroke-width:1');
endLine.setAttribute('x1', x1 + width);
endLine.setAttribute('y1', startY);
endLine.setAttribute('x2', x1 + width);
endLine.setAttribute('y2', startY + this.svg.cellHeight);
endLine.setAttribute('style', 'stroke:#6c7280;stroke-width:1');
g.appendChild(endLine);
g.appendChild(line);
g.appendChild(text);
@@ -901,12 +1001,14 @@ export default {
return new Uint8Array(arr);
},
queryTimeline() {
this.timeLine.waiting = true;
const params = {
dir: this.relativePath,
device_id: this.currentCard,
};
RequestService.queryTimlineInfo(params)
.then((res) => {
this.timelineInfo.initOver = true;
if (res && res.data) {
this.timelineInfo.noData = false;
this.timelineInfo.totalTime = res.data.total_time.toFixed(4);
@@ -919,8 +1021,8 @@ export default {
})
.catch(() => {
this.timelineInfo.noData = true;
this.timelineInfo.initOver = true;
});
this.timeLine.waiting = true;
RequestService.queryTimeline(params)
.then((res) => {
if (res && res.data && res.data.length) {
@@ -1093,7 +1195,7 @@ export default {
margin-bottom: 15px;
.trace-container {
width: 100%;
height: calc(100% - 50px);
height: calc(100% - 54px);
overflow: auto;
.training-trace {
position: relative;
@@ -1104,7 +1206,7 @@ export default {
text-overflow: ellipsis;
white-space: nowrap;
font-size: 12px;
line-height: 12px;
line-height: 40px;
}
.content-mini {
overflow: visible;


+212 -126 mindinsight/ui/src/views/train-manage/scalar.vue

@@ -222,6 +222,31 @@ limitations under the License.
</span>
</el-dialog>

<el-dialog :title="$t('scalar.info')"
:visible.sync="delThresholdVisible"
custom-class="delDialog"
:close-on-click-modal="false"
@close="delThresholdCancel"
top="35vh"
width="425px">
<div class="delThresholdItem">
<span class="delThresholdIcon el-icon-warning"></span>
<span class="delThresholdInfo">{{$t('scalar.isDelete')}}</span>
</div>
<div class="delThresholdItem">
<span class="delThresholdIcon">
<el-switch v-model="delThresholdSwitch"></el-switch>
</span>
<span class="delThresholdInfo">{{$t('scalar.applyAllSelectTag')}}</span>
</div>
<span slot="footer"
class="dialog-footer">
<el-button @click="delThresholdCancel">{{$t('public.cancel')}}</el-button>
<el-button type="primary"
@click="delThresholdCommit">{{$t('public.sure')}}</el-button>
</span>
</el-dialog>

</div>
</template>
<script>
@@ -262,9 +287,9 @@ export default {
yAxisScaleTimer: null, // yAxis scale timer
compare: false, // Comparison Page
scalarCompare: this.$t('scalar')['comparison'],
abort: false, // whether any charts have not been drawn yet
trainingJobId: this.$route.query.train_id, // ID of the current training job
thresholdDialogVisible: false,
delThresholdVisible: false,
currentTagName: '',
currentSample: {},
thresholdErrorMsg: '',
@@ -285,6 +310,7 @@ export default {
],
thresholdLocal: null,
thresholdSwitch: false,
delThresholdSwitch: false,
thresholdColor: '#f00',
decodeTrainingJobId: '',
};
@@ -579,6 +605,10 @@ export default {
.then((res) => {
// error
if (!res || !res.data || !res.data.metadatas) {
// canceled
if (res.toString() === 'false') {
return;
}
if (sampleObject.charObj) {
sampleObject.charObj.clear();
sampleObject.onePoint = false;
@@ -666,8 +696,6 @@ export default {
// Draw chart
if (!this.compare) {
this.updateOrCreateChar(sampleIndex, true);
} else {
this.abort = true;
}
});
})
@@ -844,7 +872,6 @@ export default {
scale: true,
nameGap: 30,
minInterval: this.isActive === 0 ? 1 : 0,

axisLine: {
lineStyle: {
color: '#E6EBF5',
@@ -1263,68 +1290,65 @@ export default {
clearTimeout(this.axisBenchChangeTimer);
this.axisBenchChangeTimer = null;
}
switch (val) {
case this.$t('scalar.step'):
this.curBenchX = 'stepData';
this.curAxisName = this.$t('scalar.step');
this.isActive = 0;
break;
case this.$t('scalar.relativeTime'):
this.curBenchX = 'relativeData';
this.curAxisName = this.$t('scalar.relativeTime');
this.isActive = 1;
break;
case this.$t('scalar.absoluteTime'):
this.curBenchX = 'absData';
this.curAxisName = this.$t('scalar.absoluteTime');
this.isActive = 2;
break;
default:
this.curBenchX = 'stepData';
this.curAxisName = this.$t('scalar.step');
this.isActive = 0;
break;
}
this.axisBenchChangeTimer = setTimeout(() => {
switch (val) {
case this.$t('scalar.step'):
this.curBenchX = 'stepData';
this.curAxisName = this.$t('scalar.step');
this.isActive = 0;
break;
case this.$t('scalar.relativeTime'):
this.curBenchX = 'relativeData';
this.curAxisName = this.$t('scalar.relativeTime');
this.isActive = 1;
break;
case this.$t('scalar.absoluteTime'):
this.curBenchX = 'absData';
this.curAxisName = this.$t('scalar.absoluteTime');
this.isActive = 2;
break;
default:
this.curBenchX = 'stepData';
this.curAxisName = this.$t('scalar.step');
this.isActive = 0;
break;
}
// Update the horizontal benchmark of the default data
this.curPageArr.forEach((sampleObject) => {
if (sampleObject.charObj) {
sampleObject.charData.oriData.forEach((originData, index) => {
const seriesData = sampleObject.charData.charOption.series;
const oriIndexData = sampleObject.charData.oriData[index];
if (sampleObject.log) {
sampleObject.charData.charOption.series[
index * 2
].data = this.formateSmoothData(
sampleObject.charData.oriData[index].logData[this.curBenchX],
seriesData[index * 2].data = this.formateSmoothData(
oriIndexData.logData[this.curBenchX],
);
sampleObject.charData.charOption.series[index * 2 + 1].data =
sampleObject.charData.oriData[index].logData[this.curBenchX];
seriesData[index * 2 + 1].data =
oriIndexData.logData[this.curBenchX];
} else {
sampleObject.charData.charOption.series[
index * 2
].data = this.formateSmoothData(
sampleObject.charData.oriData[index].valueData[this.curBenchX],
seriesData[index * 2].data = this.formateSmoothData(
oriIndexData.valueData[this.curBenchX],
);
sampleObject.charData.charOption.series[index * 2 + 1].data =
sampleObject.charData.oriData[index].valueData[
this.curBenchX
];
seriesData[index * 2 + 1].data =
oriIndexData.valueData[this.curBenchX];
}
});
sampleObject.charData.charOption.xAxis.minInterval =
this.isActive === 0 ? 1 : 0;
sampleObject.charData.charOption.xAxis.axisLabel.rotate =
this.isActive === 2 ? 0 : 90;

const optionxAxis = sampleObject.charData.charOption.xAxis;
const seriesData = sampleObject.charData.charOption.series[0];
optionxAxis.minInterval = this.isActive === 0 ? 1 : 0;
optionxAxis.axisLabel.rotate = this.isActive === 2 ? 0 : 90;
sampleObject.updateFlag = true;
sampleObject.charObj.clear();

if (sampleObject.charData.charOption.series[0].data.length === 1) {
sampleObject.charData.charOption.series[0].showSymbol = true;
if (seriesData.data.length === 1) {
seriesData.showSymbol = true;
sampleObject.onePoint = true;
} else {
sampleObject.charData.charOption.series[0].showSymbol = false;
seriesData.showSymbol = false;
sampleObject.onePoint = false;
}
this.updateOrCreateChar(sampleObject.sampleIndex);
this.updateOrCreateChar(sampleObject.sampleIndex, true);
}
});
}, 500);
@@ -1763,40 +1787,35 @@ export default {
return;
}
this.yAxisScaleTimer = setTimeout(() => {
const tempOption = sampleObject.charData.charOption;
const tempOriData = sampleObject.charData.oriData;
const log = !sampleObject.log;
if (log) {
sampleObject.charData.charOption.toolbox.feature.myTool2.iconStyle.borderColor =
'#3E98C5';
sampleObject.charData.charOption.yAxis.type = 'log';
tempOption.toolbox.feature.myTool2.iconStyle.borderColor = '#3E98C5';
tempOption.yAxis.type = 'log';
} else {
sampleObject.charData.charOption.yAxis.type = 'value';
sampleObject.charData.charOption.toolbox.feature.myTool2.iconStyle.borderColor =
'#666';
tempOption.yAxis.type = 'value';
tempOption.toolbox.feature.myTool2.iconStyle.borderColor = '#666';
}
sampleObject.charData.oriData.forEach((originData, index) => {
tempOriData.forEach((originData, index) => {
if (log) {
sampleObject.charData.charOption.series[
index * 2
].data = this.formateSmoothData(
sampleObject.charData.oriData[index].logData[this.curBenchX],
tempOption.series[index * 2].data = this.formateSmoothData(
tempOriData[index].logData[this.curBenchX],
);
sampleObject.charData.charOption.series[index * 2 + 1].data =
sampleObject.charData.oriData[index].logData[this.curBenchX];
tempOption.series[index * 2 + 1].data =
tempOriData[index].logData[this.curBenchX];
} else {
sampleObject.charData.charOption.series[
index * 2
].data = this.formateSmoothData(
sampleObject.charData.oriData[index].valueData[this.curBenchX],
tempOption.series[index * 2].data = this.formateSmoothData(
tempOriData[index].valueData[this.curBenchX],
);
sampleObject.charData.charOption.series[index * 2 + 1].data =
sampleObject.charData.oriData[index].valueData[this.curBenchX];
tempOption.series[index * 2 + 1].data =
tempOriData[index].valueData[this.curBenchX];
}
});
sampleObject.log = log;
sampleObject.updateFlag = true;
sampleObject.charObj.clear();

const tempOption = sampleObject.charData.charOption;
const dataObj = tempOption.series[0];

// one point
@@ -1834,19 +1853,12 @@ export default {
} else {
this.scalarCompare = this.$t('scalar.comparison');

if (this.abort) {
this.curPageArr.forEach((sampleObject) => {
this.$nextTick(() => {
// Draw chart
if (!this.compare) {
this.updateOrCreateChar(sampleObject.sampleIndex);
} else {
this.abort = true;
}
});
});
this.abort = false;
}
this.curPageArr.forEach((sampleObject) => {
// Draw chart
if (!sampleObject.charObj) {
this.updateOrCreateChar(sampleObject.sampleIndex);
}
});

this.$nextTick(() => {
this.resizeCallback();
@@ -1896,49 +1908,9 @@ export default {

delThreshold(sampleItem) {
this.stopUpdateSamples();
this.$confirm(this.$t('scalar.isDelete'), this.$t('scalar.info'), {
confirmButtonText: this.$t('public.sure'),
cancelButtonText: this.$t('public.cancel'),
type: 'warning',
})
.then(() => {
this.getCache();
if (
this.thresholdLocal &&
this.thresholdLocal[this.decodeTrainingJobId] &&
this.thresholdLocal[this.decodeTrainingJobId][sampleItem.tagName]
) {
delete this.thresholdLocal[this.decodeTrainingJobId][
sampleItem.tagName
];
this.clearCache();
localStorage.setItem(
'thresholdCache',
JSON.stringify(this.thresholdLocal),
);
}
sampleItem.pieceStr = '';
const tempCharOption = sampleItem.charData.charOption;

if (
tempCharOption.visualMap &&
tempCharOption.visualMap['pieces'] &&
tempCharOption.visualMap['pieces'].length > 0
) {
tempCharOption.visualMap = null;
tempCharOption.series[0].markLine = null;
tempCharOption.series[0].lineStyle['color'] = sampleItem.colors;
}
sampleItem.charObj.setOption(tempCharOption, false);
if (this.isTimeReload) {
this.autoUpdateSamples();
}
})
.catch(() => {
if (this.isTimeReload) {
this.autoUpdateSamples();
}
});
this.currentTagName = sampleItem.tagName;
this.currentSample = sampleItem;
this.delThresholdVisible = true;
},

/**
@@ -2325,6 +2297,90 @@ export default {
this.autoUpdateSamples();
}
},

/**
* Cancel threshold deletion and restore auto refresh if enabled
*/
delThresholdCancel() {
this.currentTagName = '';
this.currentSample = {};
this.delThresholdSwitch = false;
this.delThresholdVisible = false;
if (this.isTimeReload) {
this.autoUpdateSamples();
}
},

/**
* Commit threshold deletion for the current tag or all selected tags
*/
delThresholdCommit() {
this.getCache();
if (this.delThresholdSwitch) {
this.originDataArr.forEach((sampleObject) => {
if (this.multiSelectedTagNames[sampleObject.tagName]) {
if (
this.thresholdLocal &&
this.thresholdLocal[this.decodeTrainingJobId] &&
this.thresholdLocal[this.decodeTrainingJobId][
sampleObject.tagName
]
) {
delete this.thresholdLocal[this.decodeTrainingJobId][
sampleObject.tagName
];
sampleObject.pieceStr = '';
const tempCharOption = sampleObject.charData.charOption;
if (
tempCharOption.visualMap &&
tempCharOption.visualMap['pieces'] &&
tempCharOption.visualMap['pieces'].length > 0
) {
tempCharOption.visualMap = null;
tempCharOption.series[0].markLine = null;
tempCharOption.series[0].lineStyle['color'] =
sampleObject.colors;
}
if (sampleObject.charObj) {
sampleObject.charObj.setOption(tempCharOption, false);
}
}
}
});
} else {
if (
this.thresholdLocal &&
this.thresholdLocal[this.decodeTrainingJobId] &&
this.thresholdLocal[this.decodeTrainingJobId][this.currentTagName]
) {
delete this.thresholdLocal[this.decodeTrainingJobId][
this.currentTagName
];
this.currentSample.pieceStr = '';
const tempCharOption = this.currentSample.charData.charOption;
if (
tempCharOption.visualMap &&
tempCharOption.visualMap['pieces'] &&
tempCharOption.visualMap['pieces'].length > 0
) {
tempCharOption.visualMap = null;
tempCharOption.series[0].markLine = null;
tempCharOption.series[0].lineStyle[
'color'
] = this.currentSample.colors;
}
this.currentSample.charObj.setOption(tempCharOption, false);
}
}
this.clearCache();
localStorage.setItem(
'thresholdCache',
JSON.stringify(this.thresholdLocal),
);
this.delThresholdVisible = false;
},
},
components: {
ScalarButton,
@@ -2601,7 +2657,7 @@ export default {

span {
cursor: pointer;
width: 80px;
width: auto;
height: 39px;
display: inline-block;
}
@@ -2702,7 +2758,7 @@ export default {
.fs16 {
font-size: 16px;
color: #6c7280;
width: 160px;
width: 180px;
}
}
.tooltip-show-content {
@@ -2711,4 +2767,34 @@ export default {
.cl-title-right {
padding-right: 20px;
}

.delDialog {
.el-dialog__header {
padding: 15px 15px 10px;
}
.el-dialog__title {
font-weight: normal;
line-height: 18px;
}
.el-dialog__body {
padding: 10px 15px;
}

.delThresholdItem {
display: flex;
margin-bottom: 10px;
}

.delThresholdIcon {
color: #e6a23c;
font-size: 24px;
width: 40px;
margin-right: 10px;
}

.delThresholdInfo {
line-height: 24px;
height: 24px;
}
}
</style>

+218 -149 mindinsight/ui/src/views/train-manage/step-trace.vue

@@ -59,10 +59,6 @@ limitations under the License.
<div id="trace-container">
<div id="trace"
class="training-trace">
<div :title="$t('graph.downloadPic')"
class="download-button"
@click="downloadSVG">
</div>
<svg version="1.1"
xmlns="http://www.w3.org/2000/svg"
height="100%"
@@ -75,8 +71,8 @@ limitations under the License.
markerHeight="8"
orient="auto">
<path d="M1,1 L1,7 L9,4 z"
fill="#E6EBF5"
stroke="#E6EBF5"></path>
fill="#6c7280"
stroke="#6c7280"></path>
</marker>
<marker id="marker_start"
refX="5"
@@ -85,14 +81,14 @@ limitations under the License.
markerHeight="8"
orient="auto">
<path d="M9,1 L9,7 L1,4 z"
fill="#E6EBF5"
stroke="#E6EBF5"></path>
fill="#6c7280"
stroke="#6c7280"></path>
</marker>
</defs>
</svg>
</div>
<div class="image-noData svg"
v-if="svg.data.length === 0">
v-if="svg.data.length === 0 && svg.initOver">
<div>
<img :src="require('@/assets/images/nodata.png')"
alt="" />
@@ -162,19 +158,27 @@ export default {
svgPadding: 20,
totalWidth: 0,
totalTime: 0,
rowHeight: 60,
cellHeight: 40,
cellPadding: 0,
rowPadding: 20,
rowMargin: 10,
totalHeight: 0,
markerPadding: 4,
minRate: 0.05,
minRate: 0.1,
minTime: 0,
minWidth: 1,
fontSize: 12,
textMargin: 21,
namespaceURI: 'http://www.w3.org/2000/svg',
resizeTimer: null,
colorList: [
['#A6DD82', '#edf8e6'],
['#6CBFFF', '#e2f2ff'],
['#fa8e5b', '#fff4de'],
['#01a5a7', '#cceded'],
],
colorIndex: 0,
colors: {
iteration_interval: ['#A6DD82', '#edf8e6'],
fp_and_bp: ['#6CBFFF', '#e2f2ff'],
tail: ['#fa8e5b', '#fff4de'],
stream_parallel: ['#01a5a7', '#cceded'],
},
noData: false,
initOver: false,
},
deviceId: 0,
radio: this.$t('profiling.lterationGap'),
@@ -270,7 +274,7 @@ export default {
this.queryTrainingTrace(0);
},
changeStep(value) {
if (value === 0) {
if (value === 0 || (!this.steps.step && this.steps.step !== 0)) {
this.steps.step = null;
this.steps.trueStep = null;
this.queryTrainingTrace(0);
@@ -421,6 +425,7 @@ export default {
};
RequestService.queryTrainingTrace(params).then(
(res) => {
this.svg.initOver = true;
if (
res.data &&
res.data.training_trace_graph &&
@@ -438,15 +443,13 @@ export default {
this.fp_start = '--';
this.bp_end = '--';
}
document.querySelector('#trace').style.height = `${res.data
.training_trace_graph.length * this.svg.rowHeight}px`;
this.svg.data = JSON.parse(
JSON.stringify(res.data.training_trace_graph),
);

this.removeTrace();
setTimeout(() => {
this.dealTraceData();
}, 100);
this.$nextTick(() => {
this.packageTraceData(
JSON.parse(JSON.stringify(res.data.training_trace_graph)),
);
});
} else {
this.fp_start = '--';
this.bp_end = '--';
@@ -460,44 +463,85 @@ export default {
this.bp_end = '--';
this.svg.data = [];
this.svg.noData = true;
this.svg.initOver = true;
this.removeTrace();
},
);
},
packageTraceData(traceGraph) {
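// Group each trace row: spans shorter than minTime are isolated into
// their own sub-row so their labels stay readable, then row heights
// and the total SVG height are computed.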
this.svg.totalTime = 0;
this.svg.minTime = 0;
this.svg.totalHeight = 0;
const data = [];

if (traceGraph && traceGraph[0] && traceGraph[0][0]) {
this.svg.totalTime = traceGraph[0][0].duration;
this.svg.minTime = this.svg.minRate * this.svg.totalTime;

traceGraph.forEach((row, index) => {
const rowObj = {
rowCount: 0,
data: [],
height: 0,
startY: this.svg.totalHeight,
};
let obj = [];
for (let i = 0; i < row.length; i++) {
if (row[i].duration < this.svg.minTime) {
if (obj.length) {
rowObj.data.push(obj);
obj = [];
rowObj.rowCount++;
}
rowObj.data.push([row[i]]);
rowObj.rowCount++;
} else {
obj.push(row[i]);
}

if (i === row.length - 1 && obj.length) {
rowObj.data.push(obj);
obj = [];
rowObj.rowCount++;
}
}
rowObj.height =
rowObj.rowCount * this.svg.cellHeight +
(rowObj.rowCount - 1) * this.svg.cellPadding +
(index ? this.svg.rowPadding * 2 : 0);

this.svg.totalHeight += rowObj.height + this.svg.rowMargin;
data.push(rowObj);
});

this.svg.totalHeight += this.svg.rowPadding;
document.querySelector(
'#trace',
).style.height = `${this.svg.totalHeight}px`;
this.svg.data = JSON.parse(JSON.stringify(data));

this.$nextTick(() => {
this.dealTraceData();
});
}
},

dealTraceData() {
const traceDom = document.querySelector('#trace');
if (traceDom) {
this.svg.totalWidth = traceDom.offsetWidth - this.svg.svgPadding * 2;

if (this.svg.data[0] && this.svg.data[0].length) {
if (this.svg.data[0] && this.svg.data[0].data.length) {
const svg = traceDom.querySelector('svg');
this.svg.totalTime = this.svg.data[0][0].duration;

if (this.svg.totalTime) {
this.svg.colorIndex = 0;
const minTime = this.svg.minRate * this.svg.totalTime;

this.svg.data.forEach((row, index) => {
if (row && row.length) {
const dashedLine = this.addDashedLine(index);
svg.insertBefore(dashedLine, svg.querySelector('g'));

row.forEach((i) => {
if (i.duration) {
if (i.name) {
const tempDom = this.createRect(i, index);
const tempStr = `g${
i.duration > minTime ? '' : '.arrow'
}`;
svg.insertBefore(tempDom, svg.querySelector(tempStr));
} else {
const tempDom = this.createArrow(i, index);
svg.appendChild(tempDom);
}
}
});
this.svg.data.forEach((item, index) => {
let itemDom = {};
if (index) {
itemDom = this.createMultipleRowContainer(item);
} else {
itemDom = this.createRowContainer(item.data, item.startY);
}
svg.appendChild(itemDom);
});
}
} else {
@@ -505,40 +549,69 @@ export default {
}
}
},
addDashedLine(index) {
const x1 = this.svg.svgPadding;
const x2 = this.svg.svgPadding + this.svg.totalWidth;
const y = index * this.svg.rowHeight;
const line = document.createElementNS(this.svg.namespaceURI, 'line');
line.setAttribute('x1', x1);
line.setAttribute('y1', y);
line.setAttribute('x2', x2);
line.setAttribute('y2', y);
line.setAttribute('style', 'stroke:#E2E2E2;stroke-width:1');
line.setAttribute('stroke-dasharray', '5 5');

createMultipleRowContainer(item) {
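// Wrap a non-first trace row in a grey background rect container.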
const rectContainer = document.createElementNS(
this.svg.namespaceURI,
'g',
);
rectContainer.setAttribute('class', 'container');

const rect = document.createElementNS(this.svg.namespaceURI, 'rect');
rect.setAttribute('x', this.svg.svgPadding);
rect.setAttribute('y', item.startY + this.svg.rowPadding);
rect.setAttribute('height', item.height);
rect.setAttribute('width', this.svg.totalWidth);
rect.setAttribute('style', 'fill:#edf0f5;stroke:#E2E2E2;stroke-width:1');
rectContainer.appendChild(rect);

const temp = this.createRowContainer(
item.data,
item.startY + this.svg.rowPadding,
);
rectContainer.appendChild(temp);
return rectContainer;
},

createRowContainer(data, startY) {
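// Render one packaged row: named spans with a duration become rects,
// unnamed ones become arrows.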
const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'dashedLine');
g.appendChild(line);

data.forEach((row, index) => {
const y =
startY +
this.svg.rowPadding +
index * (this.svg.cellPadding + this.svg.cellHeight);
row.forEach((i) => {
if (i.duration) {
let temp;
if (i.name) {
temp = this.createRect(i, y);
g.insertBefore(temp, g.querySelector('g'));
} else {
temp = this.createArrow(i, y);
g.appendChild(temp);
}
}
});
});
return g;
},
createRect(data, rowIndex) {
const color = this.svg.colorList[
rowIndex > 1 ? 3 : this.svg.colorIndex++ % 4
];
const height = 40;
const width = (data.duration / this.svg.totalTime) * this.svg.totalWidth;
const fontSize = 12;
const normalRect = data.duration > this.svg.minRate * this.svg.totalTime;

createRect(data, startY) {
const color =
data.name && this.svg.colors[data.name]
? this.svg.colors[data.name]
: this.svg.colors.stream_parallel;

const x1 =
(data.start / this.svg.totalTime) * this.svg.totalWidth +
this.svg.svgPadding;
const y1 =
rowIndex * this.svg.rowHeight + (this.svg.rowHeight - height) / 2;

const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'rect');
const gChild = document.createElementNS(this.svg.namespaceURI, 'g');
const width = Math.max(
this.svg.minWidth,
(data.duration / this.svg.totalTime) * this.svg.totalWidth,
);

let name = '';
switch (data.name) {
case 'iteration_interval':
@@ -555,102 +628,117 @@ export default {
break;
}

const textContent = `${name}: ${data.duration.toFixed(4)}ms`;
const textWidth = this.getTextWidth(textContent);
const normalSize = data.duration >= this.svg.minTime;

const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'rect');

const rect = document.createElementNS(this.svg.namespaceURI, 'rect');
rect.setAttribute('x', x1);
rect.setAttribute('y', y1);
rect.setAttribute('height', height);
rect.setAttribute('y', startY);
rect.setAttribute('height', this.svg.cellHeight);
rect.setAttribute('width', width);
rect.setAttribute('style', `fill:${color[1]};stroke:${color[1]};`);
rect.setAttribute('style', `fill:${color[1]};stroke:${color[0]};`);

const foreignObject = document.createElementNS(
this.svg.namespaceURI,
'foreignObject',
);
foreignObject.textContent = `${name}: ${data.duration.toFixed(4)}ms`;
const textWidth = this.getTextWidth(foreignObject.textContent);

foreignObject.textContent = textContent;
foreignObject.setAttribute(
'x',
normalRect
normalSize
? x1
: Math.min(
this.svg.svgPadding * 2 + this.svg.totalWidth - textWidth,
Math.max(0, x1 + width / 2 - textWidth / 2),
this.svg.svgPadding * 2 +
this.svg.totalWidth -
textWidth -
this.svg.textMargin,
Math.max(this.svg.textMargin, x1 + width / 2 - textWidth / 2),
),
);

foreignObject.setAttribute(
'y',
y1 + (height - fontSize) / 2 + (normalRect ? 0 : fontSize),
);
foreignObject.setAttribute('height', fontSize);
foreignObject.setAttribute('y', startY);
foreignObject.setAttribute('height', this.svg.cellHeight);
foreignObject.setAttribute('width', width);
foreignObject.setAttribute('style', `color:${color[0]}`);
foreignObject.setAttribute(
'class',
`content${normalRect ? '' : ' content-mini'}`,
`content${normalSize ? '' : ' content-mini'}`,
);

const title = document.createElementNS(this.svg.namespaceURI, 'title');
title.textContent = `${name}: ${data.duration.toFixed(4)}ms`;
title.textContent = textContent;

gChild.appendChild(rect);
gChild.appendChild(foreignObject);
gChild.appendChild(title);
g.appendChild(gChild);
g.appendChild(rect);
g.appendChild(foreignObject);
g.appendChild(title);
return g;
},

createArrow(data, rowIndex) {
createArrow(data, startY) {
const width = (data.duration / this.svg.totalTime) * this.svg.totalWidth;
const x1 =
(data.start / this.svg.totalTime) * this.svg.totalWidth +
this.svg.markerPadding +
this.svg.svgPadding;
const x2 = x1 + width - this.svg.markerPadding * 2;
const y = rowIndex * this.svg.rowHeight + this.svg.rowHeight / 2;
const centerY = startY + this.svg.cellHeight / 2;
const g = document.createElementNS(this.svg.namespaceURI, 'g');
g.setAttribute('class', 'arrow');

const line = document.createElementNS(this.svg.namespaceURI, 'line');
line.setAttribute('x1', x1);
line.setAttribute('y1', y);
line.setAttribute('x2', x2);
line.setAttribute('y2', y);
line.setAttribute('style', 'stroke:#E6EBF5;stroke-width:1');
line.setAttribute('marker-end', 'url(#marker_end)');
line.setAttribute('marker-start', 'url(#marker_start)');
line.setAttribute('y1', centerY);
line.setAttribute('y2', centerY);
line.setAttribute('style', 'stroke:#6c7280;stroke-width:1');
if (width > this.svg.markerPadding) {
line.setAttribute('x1', x1 + this.svg.markerPadding);
line.setAttribute('x2', x1 + width - this.svg.markerPadding);
line.setAttribute('marker-end', 'url(#marker_end)');
line.setAttribute('marker-start', 'url(#marker_start)');
} else {
line.setAttribute('x1', x1);
line.setAttribute('x2', x1 + width);
}

const text = document.createElementNS(this.svg.namespaceURI, 'text');
text.textContent = `${
rowIndex === 0 ? this.$t('profiling.approximateTime') : ''
data.duration === this.svg.totalTime
? this.$t('profiling.approximateTime')
: ''
}${data.duration.toFixed(4)}ms`;
const textWidth = this.getTextWidth(text.textContent);
const textWidth = text.textContent
? this.getTextWidth(text.textContent)
: 0;
text.setAttribute(
'x',
Math.min(
this.svg.svgPadding * 2 + this.svg.totalWidth - textWidth,
Math.max(0, (x2 - x1) / 2 + x1 - textWidth / 2),
this.svg.svgPadding * 2 +
this.svg.totalWidth -
textWidth -
this.svg.textMargin,
Math.max(this.svg.textMargin, width / 2 + x1 - textWidth / 2),
),
);
text.setAttribute('y', y - 6);
text.setAttribute('font-size', 12);
text.setAttribute('fill', '#6c7280');
text.setAttribute('y', centerY - this.svg.fontSize / 2);
text.setAttribute('font-size', this.svg.fontSize);
text.setAttribute('fill', 'black');

const startLine = document.createElementNS(this.svg.namespaceURI, 'line');
startLine.setAttribute('x1', x1 - this.svg.markerPadding);
startLine.setAttribute('y1', y - this.svg.rowHeight / 4);
startLine.setAttribute('x2', x1 - this.svg.markerPadding);
startLine.setAttribute('y2', y + this.svg.rowHeight / 4);
startLine.setAttribute('style', 'stroke:#E6EBF5;stroke-width:1');
startLine.setAttribute('x1', x1);
startLine.setAttribute('y1', startY);
startLine.setAttribute('x2', x1);
startLine.setAttribute('y2', startY + this.svg.cellHeight);
startLine.setAttribute('style', 'stroke:#6c7280;stroke-width:1');
g.appendChild(startLine);

const endLine = document.createElementNS(this.svg.namespaceURI, 'line');
endLine.setAttribute('x1', x1 + width - this.svg.markerPadding);
endLine.setAttribute('y1', y - this.svg.rowHeight / 4);
endLine.setAttribute('x2', x1 + width - this.svg.markerPadding);
endLine.setAttribute('y2', y + this.svg.rowHeight / 4);
endLine.setAttribute('style', 'stroke:#E6EBF5;stroke-width:1');
endLine.setAttribute('x1', x1 + width);
endLine.setAttribute('y1', startY);
endLine.setAttribute('x2', x1 + width);
endLine.setAttribute('y2', startY + this.svg.cellHeight);
endLine.setAttribute('style', 'stroke:#6c7280;stroke-width:1');
g.appendChild(endLine);
g.appendChild(line);
g.appendChild(text);
@@ -689,15 +777,6 @@ export default {
this.svg.resizeTimer = null;
}, 500);
},
downloadSVG() {
const svgDom = document.querySelector('svg').outerHTML;
const src = `data:image/svg+xml;base64,
${window.btoa(unescape(encodeURIComponent(svgDom)))}`;
const a = document.createElement('a');
a.href = src;
a.download = new Date().valueOf();
a.click();
},
},
destroyed() {
window.removeEventListener('resize', this.resizeTrace, false);
@@ -747,8 +826,8 @@ export default {
}
}
.step-message {
height: 24px;
line-height: 24px;
height: 32px;
line-height: 16px;
margin-top: 6px;
margin-left: 14px;
overflow-y: auto;
@@ -765,7 +844,7 @@ export default {
font-weight: bold;
}
.pf-content-middle {
padding: 15px 15px 0;
padding: 10px 15px 0;
height: calc(100% - 62px);
#trace-container {
width: 100%;
@@ -775,23 +854,13 @@ export default {
.training-trace {
position: relative;
height: 0;
.download-button {
display: none;
position: absolute;
width: 12px;
height: 12px;
right: 10px;
top: 10px;
cursor: pointer;
background-image: url('../../assets/images/download.png');
}
.content {
overflow: hidden;
text-align: center;
text-overflow: ellipsis;
white-space: nowrap;
font-size: 12px;
line-height: 12px;
line-height: 40px;
}
.content-mini {
overflow: visible;


mindinsight/ui/src/views/train-manage/tensor.vue (+2, -1)

@@ -707,13 +707,14 @@ export default {
this.$nextTick(() => {
elementItem = this.$refs[sampleItem.ref];
if (elementItem) {
elementItem[0].updateGridData();
if (showLimitError) {
elementItem[0].showRequestErrorMessage(
errorMsg,
sampleItem.formateData.value.dims,
sampleItem.filterStr,
);
} else {
elementItem[0].updateGridData();
}
}
});


mindinsight/ui/src/views/train-manage/training-dashboard.vue (+42, -1)

@@ -21,6 +21,9 @@ limitations under the License.
<div class="cl-dashboard-top-title">
{{$t('trainingDashboard.trainingDashboardTitle')}}
</div>
<div :title="$t('trainingDashboard.loadingTip')"
v-if="trainJobCached"
class="el-icon-loading loading-icon"></div>
<div class="path-message">
<span>{{$t('symbols.leftbracket')}}</span>
<span>{{$t('trainingDashboard.summaryDirPath')}}</span>
@@ -234,6 +237,12 @@ export default {
mousedown: 'mousedown',
mouseup: 'mouseup',
},
trainJobCached: false,
cacheKey: {
notInCache: 'NOT_IN_CACHE',
caching: 'CACHING',
cached: 'CACHED',
},
};
},
computed: {
@@ -262,6 +271,7 @@ export default {
}
this.getDatavisualPlugins(true);
this.queryDatasetGraph();
this.queryTrainJobCacheState();
setTimeout(() => {
this.$store.commit('setIsReload', false);
}, this.reloadStopTime);
@@ -349,6 +359,7 @@ export default {
this.startAutoUpdate();
}
this.queryDatasetGraph();
this.queryTrainJobCacheState();
},

/**
@@ -504,6 +515,7 @@ export default {
if (!Object.keys(this.allDatasetGraphData).length) {
this.queryDatasetGraph();
}
this.queryTrainJobCacheState();
}, this.timeReloadValue * 1000);
},
/**
@@ -990,9 +1002,10 @@ export default {
this.$nextTick(() => {
const elementItem = this.$refs.tensorChart;
if (elementItem) {
elementItem.updateGridData();
if (showLimitError) {
elementItem.showRequestErrorMessage(errorMsg);
} else {
elementItem.updateGridData();
}
}
});
@@ -1940,6 +1953,31 @@ export default {
.selectAll('title')
.remove();
},
/**
* Query the cache status of the training job
*/
queryTrainJobCacheState() {
const params = {
train_id: this.trainingJobId,
};
RequestService.querySummaryList(params, true).then((response) => {
if (
response &&
response.data &&
response.data.train_jobs &&
response.data.train_jobs.length
) {
const curTrain = response.data.train_jobs[0];
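// Any status other than CACHED (i.e. NOT_IN_CACHE or CACHING) keeps the loading icon visible.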
if (curTrain.cache_status !== this.cacheKey.cached) {
this.trainJobCached = true;
} else {
this.trainJobCached = false;
}
} else {
this.trainJobCached = false;
}
});
},
},
};
</script>
@@ -1962,6 +2000,9 @@ export default {
padding: 18px 16px;
font-weight: bold;
}
.loading-icon {
margin-left: 5px;
}
.cl-dashboard-top-title {
float: left;
color: #000000;


mindinsight/utils/computing_resource_mgr.py (+265, -0)

@@ -0,0 +1,265 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Compute resource manager."""
import fractions
import math
import threading
import multiprocessing
from concurrent import futures

from mindinsight.utils.log import utils_logger as logger
from mindinsight.utils.constant import GeneralErrors
from mindinsight.utils.exceptions import MindInsightException


_MP_CONTEXT = multiprocessing.get_context(method="forkserver")


class ComputingResourceManager:
"""
Manager for computing resources.

This class provides executors for computing tasks. Executors can only be used once.

Args:
executors_cnt (int): Number of executors to be provided by this class.
max_processes_cnt (int): Max number of processes to be used for computing.
"""
def __init__(self, executors_cnt, max_processes_cnt):
self._max_processes_cnt = max_processes_cnt
self._executors_cnt = executors_cnt
self._lock = threading.Lock()
self._executors = {
ind: Executor(
self, executor_id=ind,
available_workers=fractions.Fraction(self._max_processes_cnt, self._executors_cnt))
for ind in range(self._executors_cnt)
}
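# Each executor starts with an equal Fraction(max_processes_cnt, executors_cnt)
# share, so a worker count may be fractional until sibling executors are destroyed.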
self._remaining_executors = len(self._executors)
self._backend = futures.ProcessPoolExecutor(max_workers=max_processes_cnt, mp_context=_MP_CONTEXT)
logger.info("Initialized ComputingResourceManager with executors_cnt=%s, max_processes_cnt=%s.",
executors_cnt, max_processes_cnt)

def __enter__(self):
"""This method is not thread safe."""
return self

def __exit__(self, exc_type, exc_val, exc_tb):
"""
This should not block because every executor has already waited. If it blocks, there may be a problem.

This method is not thread safe.
"""
self._backend.shutdown()

def get_executor(self):
"""
Get an executor.

Returns:
Executor, which can be used for submitting tasks.

Raises:
ComputingResourceManagerException: when no more executors are available.
"""
with self._lock:
self._remaining_executors -= 1
if self._remaining_executors < 0:
raise ComputingResourceManagerException("No more executors.")
return self._executors[self._remaining_executors]

def destroy_executor(self, executor_id):
"""
Destroy an executor so that its workers can be reused.

Args:
executor_id (int): Id of the executor to be destroyed.
"""
with self._lock:
released_workers = self._executors[executor_id].available_workers
self._executors.pop(executor_id)

remaining_executors = len(self._executors)
logger.info("Destroy executor %s. Will release %s worker(s). Remaining executors: %s.",
executor_id, released_workers, remaining_executors)
if not remaining_executors:
return

for executor in self._executors.values():
executor.add_worker(
fractions.Fraction(
released_workers.numerator,
released_workers.denominator * remaining_executors))
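# Worked example (illustrative numbers, not in the module): with
# max_processes_cnt=4 and executors_cnt=3, every executor starts with
# Fraction(4, 3) workers; destroying one releases 4/3, and each of the two
# survivors gains Fraction(4, 3 * 2) == 2/3, ending with exactly 2 workers.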

def submit(self, *args, **kwargs):
"""
Submit a task.

See concurrent.futures.Executor.submit() for details.

This method should only be called by Executor. Users should not call this method directly.
"""
with self._lock:
return self._backend.submit(*args, **kwargs)


class ComputingResourceManagerException(MindInsightException):
"""
Indicates a computing resource error has occurred.

This exception should not be presented to end users.

Args:
msg (str): Exception message.
"""
def __init__(self, msg):
super().__init__(error=GeneralErrors.COMPUTING_RESOURCE_ERROR, message=msg)


class WrappedFuture:
"""
Wrap Future objects with custom logic to release compute slots.

Args:
executor (Executor): The executor which generates this future.
original_future (futures.Future): Original future object.
"""
def __init__(self, executor, original_future: futures.Future):
self._original_future = original_future
self._executor = executor

def add_done_callback(self, callback):
"""
Add done callback.

See futures.Future.add_done_callback() for details.
"""
def _wrapped_callback(*args, **kwargs):
logger.debug("Future callback called.")
try:
return callback(*args, **kwargs)
finally:
self._executor.release_slot()
self._executor.remove_done_future(self._original_future)
self._original_future.add_done_callback(_wrapped_callback)


class Executor:
"""
Task executor.

Args:
mgr (ComputingResourceManager): The ComputingResourceManager that generates this executor.
executor_id (int): Executor id.
available_workers (fractions.Fraction): Available workers.
"""
def __init__(self, mgr: ComputingResourceManager, executor_id, available_workers):
self._mgr = mgr
self.closed = False
self._available_workers = available_workers
self._effective_workers = self._calc_effective_workers(self._available_workers)
self._slots = threading.Semaphore(value=self._effective_workers)
self._id = executor_id
self._futures = set()

self._lock = threading.Lock()

logger.debug("Available workers: %s.", available_workers)

def __enter__(self):
"""This method is not thread safe."""
if self.closed:
raise ComputingResourceManagerException("Can not reopen closed executor.")
return self

def __exit__(self, exc_type, exc_val, exc_tb):
"""This method is not thread safe."""
self._close()

def submit(self, *args, **kwargs):
"""
Submit task.

See concurrent.futures.Executor.submit() for details. This method is not thread safe.
"""
logger.debug("Task submitted to executor %s.", self._id)

if self.closed:
raise ComputingResourceManagerException("Cannot submit task to a closed executor.")

# Thread will wait on acquire().
self._slots.acquire()
future = self._mgr.submit(*args, **kwargs)

# set.add is atomic in c-python.
self._futures.add(future)
return WrappedFuture(self, future)
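# Note: the slot acquired above is only released through the done callback
# installed by WrappedFuture.add_done_callback, so callers must register a
# callback for every submitted task or submit() will eventually block forever.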

def release_slot(self):
"""
Release a slot for new tasks to be submitted.

Semaphore is itself thread safe, so no lock is needed.

This method should only be called by WrappedFuture.
"""
self._slots.release()

def remove_done_future(self, future):
"""
Remove done futures so the executor will not track them.

This method should only be called by WrappedFuture.
"""
# set.remove is atomic in c-python so no lock is needed.
self._futures.remove(future)

@staticmethod
def _calc_effective_workers(available_workers):
return 1 if available_workers <= 1 else math.floor(available_workers)
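# E.g. Fraction(2, 3) yields 1 slot (never zero), while Fraction(7, 3) yields 2.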

def _close(self):
self.closed = True
logger.debug("Executor is being closed, futures to wait: %s", self._futures)
futures.wait(self._futures)
logger.debug("Executor wait futures completed.")
self._mgr.destroy_executor(self._id)
logger.debug("Executor is closed.")

@property
def available_workers(self):
"""Get available workers."""
with self._lock:
return self._available_workers

def add_worker(self, added_available_workers):
"""This method should only be called by ComputeResourceManager."""
logger.debug("Add worker: %s", added_available_workers)
with self._lock:
self._available_workers += added_available_workers
new_effective_workers = self._calc_effective_workers(self._available_workers)
if new_effective_workers > self._effective_workers:
for _ in range(new_effective_workers - self._effective_workers):
self._slots.release()

self._effective_workers = new_effective_workers

def wait_all_tasks_finish(self):
"""
Wait for all tasks to finish.

This method is not thread safe.
"""
futures.wait(self._futures)
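
The snippet below is a minimal usage sketch, not part of the diff: the _square helper, the worker counts, and the queue-based result collection are illustrative assumptions. Because the module uses a forkserver context, it needs a Unix host, and the submitted callable must be importable at module level.

import queue

from mindinsight.utils.computing_resource_mgr import ComputingResourceManager


def _square(x):
    return x * x


def main():
    done = queue.Queue()
    with ComputingResourceManager(executors_cnt=1, max_processes_cnt=2) as mgr:
        with mgr.get_executor() as executor:
            for i in range(4):
                # submit() blocks while all slots are taken; the done callback
                # releases the slot and hands back the finished future.
                executor.submit(_square, i).add_done_callback(done.put)
            print(sorted(done.get().result() for _ in range(4)))  # [0, 1, 4, 9]


if __name__ == '__main__':
    main()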

mindinsight/utils/constant.py (+9, -0)

@@ -31,6 +31,7 @@ class MindInsightModules(Enum):
DATAVISUAL = 5
PROFILERMGR = 6
SCRIPTCONVERTER = 7
SYSMETRIC = 8


class GeneralErrors(Enum):
@@ -43,6 +44,7 @@ class GeneralErrors(Enum):
FILE_SYSTEM_PERMISSION_ERROR = 8
PORT_NOT_AVAILABLE_ERROR = 9
URL_DECODE_ERROR = 10
COMPUTING_RESOURCE_ERROR = 11


class ProfilerMgrErrors(Enum):
@@ -71,7 +73,14 @@ class DataVisualErrors(Enum):
HISTOGRAM_NOT_EXIST = 15
TRAIN_JOB_DETAIL_NOT_IN_CACHE = 16
QUERY_STRING_CONTAINS_NULL_BYTE = 17
TENSOR_NOT_EXIST = 18
MAX_RESPONSE_DATA_EXCEEDED_ERROR = 19
STEP_TENSOR_DATA_NOT_IN_CACHE = 20


class ScriptConverterErrors(Enum):
"""Enum definition for mindconverter errors."""

class SysmetricErrors(Enum):
"""Enum definition for sysmetric errors."""
DSMI_QUERYING_NONZERO = 1

mindinsight/utils/log.py (+3, -0)

@@ -224,3 +224,6 @@ def setup_logger(sub_module, log_name, **kwargs):
logger.addHandler(logfile_handler)

return logger


utils_logger = setup_logger("utils", "utils")
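
A short consumption sketch (assuming mindinsight is importable; the message itself is illustrative):

from mindinsight.utils.log import utils_logger

utils_logger.info("computing resource manager initialized with %d workers", 4)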

tests/st/func/profiler/conftest.py (+2, -22)

@@ -15,31 +15,11 @@
"""The st config."""

import os
import shutil
import sys
import tempfile

import pytest

from tests.st.func.profiler import RAW_DATA_BASE
from tests.utils import mindspore

sys.modules['mindspore'] = mindspore

BASE_SUMMARY_DIR = tempfile.mkdtemp(prefix='test_profiler_summary_dir_base_')


@pytest.fixture(scope="session")
def create_summary_dir():
"""Create summary directory for profiler module."""

try:
if os.path.exists(BASE_SUMMARY_DIR):
shutil.rmtree(BASE_SUMMARY_DIR)
permissions = os.R_OK | os.W_OK | os.X_OK
mode = permissions << 6
if not os.path.exists(BASE_SUMMARY_DIR):
os.mkdir(BASE_SUMMARY_DIR, mode=mode)
yield
finally:
if os.path.exists(BASE_SUMMARY_DIR):
shutil.rmtree(BASE_SUMMARY_DIR)
BASE_SUMMARY_DIR = os.path.realpath(os.path.join(RAW_DATA_BASE, "run_1"))

tests/st/func/profiler/test_analyse.py (+3, -18)

@@ -21,19 +21,16 @@ Usage:
"""

import os
from unittest import mock, TestCase
from unittest import TestCase

import pytest

from mindinsight.profiler.analyser.analyser_factory import AnalyserFactory
from mindinsight.profiler.common.exceptions.exceptions import StepNumNotSupportedException, \
ProfilerParamValueErrorException
from mindinsight.profiler.profiling import Profiler, FrameworkParser
from tests.st.func.profiler import RAW_DATA_BASE
from tests.st.func.profiler.conftest import BASE_SUMMARY_DIR


@pytest.mark.usefixtures('create_summary_dir')
class TestProfilerAnalyse(TestCase):
"""Test Converter module."""
JOB_ID = 'JOB3'
@@ -42,26 +39,14 @@ class TestProfilerAnalyse(TestCase):
def setup_class(cls):
"""Generate parsed files."""
cls.step_trace_file = 'step_trace_raw_1_detail_time.csv'
cls.generate_parsed_files()
cls.summary_dir = os.path.join(BASE_SUMMARY_DIR, 'normal_run')
cls.profiler = os.path.join(cls.summary_dir, 'profiler')

def setUp(self):
"""Setup before each test."""
self.step_trace_analyser = AnalyserFactory.instance().get_analyser(
'step_trace', self.profiler, '1')

@classmethod
def generate_parsed_files(cls):
"""Test parse raw info about profiler."""
cls.summary_dir = os.path.join(BASE_SUMMARY_DIR, 'normal_run')
cls.profiler = os.path.join(cls.summary_dir, 'profiler')
FrameworkParser._raw_data_dir = RAW_DATA_BASE
if not os.path.exists(cls.summary_dir):
os.makedirs(cls.summary_dir)
Profiler._base_profiling_container_path = os.path.join(RAW_DATA_BASE, 'container')
with mock.patch('mindinsight.profiler.profiling.PROFILING_LOG_BASE_PATH', RAW_DATA_BASE):
profiler = Profiler(subgraph='all', is_detail=True, is_show_op_path=False,
output_path=cls.summary_dir, job_id=cls.JOB_ID)
profiler.analyse()

@pytest.mark.level0
@pytest.mark.env_single


tests/st/func/profiler/test_minddata_pipeline_analyser.py (+2, -23)

@@ -19,19 +19,13 @@ Usage:
pytest tests/st/func/profiler
"""
import os
import shutil
from unittest import mock

import pytest

from mindinsight.profiler import Profiler
from mindinsight.profiler.analyser.analyser_factory import AnalyserFactory
from mindinsight.profiler.parser.framework_parser import FrameworkParser
from tests.st.func.profiler.conftest import BASE_SUMMARY_DIR
from tests.ut.profiler import RAW_DATA_BASE


@pytest.mark.usefixtures('create_summary_dir')
class TestMinddataPipelineAnalyser:
"""Test minddata pipeline analyser module."""
JOB_ID = 'JOB3'
@@ -39,29 +33,14 @@ class TestMinddataPipelineAnalyser:
@classmethod
def setup_class(cls):
"""Generate parsed files."""
cls.generate_parsed_files()
cls.summary_dir = os.path.join(BASE_SUMMARY_DIR, 'normal_run')
cls.profiler = os.path.join(cls.summary_dir, 'profiler')

def setup_method(self):
"""Create analyser."""
self._analyser = AnalyserFactory.instance().get_analyser(
'minddata_pipeline', self.profiler, '1')

@classmethod
def generate_parsed_files(cls):
"""Test parse raw info about profiler."""
cls.summary_dir = os.path.join(BASE_SUMMARY_DIR, 'normal_run')
cls.profiler = os.path.join(cls.summary_dir, 'profiler')
FrameworkParser._raw_data_dir = RAW_DATA_BASE
if not os.path.exists(cls.summary_dir):
os.makedirs(cls.summary_dir)
os.makedirs(cls.profiler, exist_ok=True)
pipeline_path = os.path.join(RAW_DATA_BASE, 'profiler', 'pipeline_profiling_1.json')
shutil.copy(pipeline_path, cls.profiler)
Profiler._base_profiling_container_path = os.path.join(RAW_DATA_BASE, 'container')
with mock.patch('mindinsight.profiler.profiling.PROFILING_LOG_BASE_PATH', RAW_DATA_BASE):
profiler = Profiler(subgraph='all', is_detail=True, is_show_op_path=False,
output_path=cls.summary_dir, job_id=cls.JOB_ID)
profiler.analyse()

@pytest.mark.level0
@pytest.mark.env_single


tests/st/func/profiler/test_op_analyser.py (+2, -21)

@@ -19,16 +19,11 @@ Usage:
pytest tests/st/func/profiler
"""
import os
from unittest import mock

import pytest

from mindinsight.profiler import Profiler
from mindinsight.profiler.analyser.analyser_factory import AnalyserFactory
from mindinsight.profiler.parser.framework_parser import FrameworkParser
from tests.st.func.profiler.conftest import BASE_SUMMARY_DIR
from tests.ut.profiler import RAW_DATA_BASE


OP_GATHER_V2_INFO = {
'col_name': [
@@ -84,7 +79,6 @@ OP_GATHER_V2_INFO = {
}


@pytest.mark.usefixtures('create_summary_dir')
class TestOpAnalyser:
"""Test AICORE and AICPU analyser module."""
JOB_ID = 'JOB3'
@@ -92,7 +86,8 @@ class TestOpAnalyser:
@classmethod
def setup_class(cls):
"""Generate parsed files."""
cls.generate_parsed_files()
cls.summary_dir = os.path.join(BASE_SUMMARY_DIR, 'normal_run')
cls.profiler = os.path.join(cls.summary_dir, 'profiler')

def setup_method(self):
"""Create analyser."""
@@ -101,20 +96,6 @@ class TestOpAnalyser:
self._analyser_aicore_detail = AnalyserFactory.instance().get_analyser(
'aicore_detail', self.profiler, '1')

@classmethod
def generate_parsed_files(cls):
"""Test parse raw info about profiler."""
cls.summary_dir = os.path.join(BASE_SUMMARY_DIR, 'normal_run')
cls.profiler = os.path.join(cls.summary_dir, 'profiler')
FrameworkParser._raw_data_dir = RAW_DATA_BASE
if not os.path.exists(cls.summary_dir):
os.makedirs(cls.summary_dir)
Profiler._base_profiling_container_path = os.path.join(RAW_DATA_BASE, 'container')
with mock.patch('mindinsight.profiler.profiling.PROFILING_LOG_BASE_PATH', RAW_DATA_BASE):
profiler = Profiler(subgraph='all', is_detail=True, is_show_op_path=False,
output_path=cls.summary_dir, job_id=cls.JOB_ID)
profiler.analyse()

@pytest.mark.level0
@pytest.mark.env_single
@pytest.mark.platform_x86_cpu


tests/ut/backend/datavisual/conftest.py (+1, -14)

@@ -15,28 +15,15 @@
"""
Description: This file is used for some common utilities.
"""
from unittest.mock import Mock

import pytest
from flask import Response

from mindinsight.backend import datavisual
from mindinsight.datavisual.utils import tools
from mindinsight.backend.application import APP


@pytest.fixture
def client():
"""This fixture is flask client."""
mock_data_manager = Mock()
mock_data_manager.start_load_data = Mock()
datavisual.DATA_MANAGER = mock_data_manager

packages = ["mindinsight.backend.data_visual"]

mock_obj = Mock(return_value=packages)
tools.find_app_package = mock_obj

from mindinsight.backend.application import APP
APP.response_class = Response
app_client = APP.test_client()



tests/ut/datavisual/common/test_error_handler.py → tests/ut/backend/datavisual/test_request_error.py

@@ -22,12 +22,10 @@ from unittest.mock import patch

from werkzeug.exceptions import MethodNotAllowed, NotFound

from mindinsight.datavisual.processors import scalars_processor
from mindinsight.datavisual.processors.scalars_processor import ScalarsProcessor

from ....utils.tools import get_url
from ...backend.datavisual.conftest import TRAIN_ROUTES
from ..mock import MockLogger


class TestErrorHandler:
@@ -36,7 +34,6 @@ class TestErrorHandler:
@patch.object(ScalarsProcessor, 'get_metadata_list')
def test_handle_http_exception_error_not_found(self, mock_scalar_processor, client):
"""Test handle http exception error not found."""
scalars_processor.logger = MockLogger
text = 'Test Message'

# NotFound
@@ -59,7 +56,6 @@ class TestErrorHandler:
@patch.object(ScalarsProcessor, 'get_metadata_list')
def test_handle_http_exception_error_method_not_allowed(self, mock_scalar_processor, client):
"""Test handling http exception error method not allowed."""
scalars_processor.logger = MockLogger
text = 'Test Message'

# MethodNotAllowed
@@ -82,7 +78,6 @@ class TestErrorHandler:
@patch.object(ScalarsProcessor, 'get_metadata_list')
def test_handle_http_exception_error_method_other_errors(self, mock_scalar_processor, client):
"""Test handling http exception error method other errors."""
scalars_processor.logger = MockLogger
text = 'Test Message'

# Other errors

tests/ut/datavisual/conftest.py (+0, -45)

@@ -1,45 +0,0 @@
# Copyright 2019 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""
Description: This file is used for some common util.
"""
from unittest.mock import Mock

import pytest
from flask import Response

from mindinsight.backend import datavisual
from mindinsight.datavisual import utils


@pytest.fixture
def client():
"""This fixture is flask client."""
mock_data_manager = Mock()
mock_data_manager.start_load_data = Mock()
datavisual.DATA_MANAGER = mock_data_manager

packages = ["mindinsight.backend.raw_dataset",
"mindinsight.backend.train_dataset",
"mindinsight.backend.data_visual"]

mock_obj = Mock(return_value=packages)
utils.find_app_package = mock_obj

from mindinsight.backend.application import APP
APP.response_class = Response
app_client = APP.test_client()

yield app_client

tests/ut/datavisual/data_transform/test_data_loader.py (+4, -3)

@@ -27,6 +27,7 @@ import pytest
from mindinsight.datavisual.common.exceptions import SummaryLogPathInvalid
from mindinsight.datavisual.data_transform import data_loader
from mindinsight.datavisual.data_transform.data_loader import DataLoader
from mindinsight.utils.computing_resource_mgr import ComputingResourceManager

from ..mock import MockLogger

@@ -57,7 +58,7 @@ class TestDataLoader:
"""Test loading method with empty file list."""
loader = DataLoader(self._summary_dir)
with pytest.raises(SummaryLogPathInvalid):
loader.load()
loader.load(ComputingResourceManager(1, 1))
assert 'No valid files can be loaded' in str(MockLogger.log_msg['warning'])

def test_load_with_invalid_file_list(self):
@@ -66,7 +67,7 @@ class TestDataLoader:
self._generate_files(self._summary_dir, file_list)
loader = DataLoader(self._summary_dir)
with pytest.raises(SummaryLogPathInvalid):
loader.load()
loader.load(ComputingResourceManager(1, 1))
assert 'No valid files can be loaded' in str(MockLogger.log_msg['warning'])

def test_load_success(self):
@@ -77,6 +78,6 @@ class TestDataLoader:
file_list = ['summary.001', 'summary.002']
self._generate_files(dir_path, file_list)
dataloader = DataLoader(dir_path)
dataloader.load()
dataloader.load(ComputingResourceManager(1, 1))
assert dataloader._loader is not None
shutil.rmtree(dir_path)

tests/ut/datavisual/data_transform/test_data_manager.py (+3, -2)

@@ -81,8 +81,9 @@ class TestDataManager:
def test_start_load_data_success(self):
"""Test start_load_data method success."""
summary_base_dir = tempfile.mkdtemp()
dir_num = 3
train_ids = []
for i in range(3):
for i in range(dir_num):
log_path = os.path.join(summary_base_dir, f'dir{i}')
self._make_path_and_file_list(log_path)
train_ids.append(f'./dir{i}')
@@ -215,7 +216,7 @@ class TestDataManager:
expected_loader_ids = expected_loader_ids[-MAX_DATA_LOADER_SIZE:]

# Make sure to finish loading, make it init.
mock_data_manager._status = DataManagerStatus.INIT
mock_data_manager._detail_cache._status = DataManagerStatus.INIT.value
mock_generate_loaders.return_value = loader_dict
mock_data_manager.start_load_data(reload_interval=0)
check_loading_done(mock_data_manager)


tests/ut/datavisual/data_transform/test_histogram_container.py (+15, -15)

@@ -29,9 +29,9 @@ class TestHistogram:
mocked_bucket.width = 1
mocked_bucket.count = 1
mocked_input.buckets = [mocked_bucket]
histogram = hist.HistogramContainer(mocked_input)
histogram.set_visual_range(max_val=1, min_val=0, bins=1)
buckets = histogram.buckets()
histogram_container = hist.HistogramContainer(mocked_input)
histogram_container.histogram.set_visual_range(max_val=1, min_val=0, bins=1)
buckets = histogram_container.buckets()
assert buckets == ((0.0, 1.0, 1),)

def test_re_sample_buckets_split_original(self):
@@ -42,9 +42,9 @@ class TestHistogram:
mocked_bucket.width = 1
mocked_bucket.count = 1
mocked_input.buckets = [mocked_bucket]
histogram = hist.HistogramContainer(mocked_input)
histogram.set_visual_range(max_val=1, min_val=0, bins=3)
buckets = histogram.buckets()
histogram_container = hist.HistogramContainer(mocked_input)
histogram_container.histogram.set_visual_range(max_val=1, min_val=0, bins=3)
buckets = histogram_container.buckets()
assert buckets == ((0.0, 0.3333333333333333, 1), (0.3333333333333333, 0.3333333333333333, 1),
(0.6666666666666666, 0.3333333333333333, 1))

@@ -60,9 +60,9 @@ class TestHistogram:
mocked_bucket2.width = 1
mocked_bucket2.count = 2
mocked_input.buckets = [mocked_bucket, mocked_bucket2]
histogram = hist.HistogramContainer(mocked_input)
histogram.set_visual_range(max_val=3, min_val=-1, bins=4)
buckets = histogram.buckets()
histogram_container = hist.HistogramContainer(mocked_input)
histogram_container.histogram.set_visual_range(max_val=3, min_val=-1, bins=4)
buckets = histogram_container.buckets()
assert buckets == ((-1.0, 1.0, 0), (0.0, 1.0, 1), (1.0, 1.0, 2), (2.0, 1.0, 0))

def test_re_sample_buckets_merge_bucket(self):
@@ -77,9 +77,9 @@ class TestHistogram:
mocked_bucket2.width = 1
mocked_bucket2.count = 10
mocked_input.buckets = [mocked_bucket, mocked_bucket2]
histogram = hist.HistogramContainer(mocked_input)
histogram.set_visual_range(max_val=3, min_val=-1, bins=5)
buckets = histogram.buckets()
histogram_container = hist.HistogramContainer(mocked_input)
histogram_container.histogram.set_visual_range(max_val=3, min_val=-1, bins=5)
buckets = histogram_container.buckets()
assert buckets == (
(-1.0, 0.8, 0), (-0.19999999999999996, 0.8, 1), (0.6000000000000001, 0.8, 5), (1.4000000000000004, 0.8, 6),
(2.2, 0.8, 0))
@@ -96,9 +96,9 @@ class TestHistogram:
mocked_bucket2.width = 0
mocked_bucket2.count = 2
mocked_input.buckets = [mocked_bucket, mocked_bucket2]
histogram = hist.HistogramContainer(mocked_input)
histogram.set_visual_range(max_val=2, min_val=0, bins=3)
buckets = histogram.buckets()
histogram_container = hist.HistogramContainer(mocked_input)
histogram_container.histogram.set_visual_range(max_val=2, min_val=0, bins=3)
buckets = histogram_container.buckets()
assert buckets == (
(0.0, 0.6666666666666666, 1),
(0.6666666666666666, 0.6666666666666666, 3),


tests/ut/datavisual/data_transform/test_ms_data_loader.py (+4, -3)

@@ -30,6 +30,7 @@ from mindinsight.datavisual.data_transform.ms_data_loader import MSDataLoader
from mindinsight.datavisual.data_transform.ms_data_loader import _PbParser
from mindinsight.datavisual.data_transform.events_data import TensorEvent
from mindinsight.datavisual.common.enums import PluginNameEnum
from mindinsight.utils.computing_resource_mgr import ComputingResourceManager

from ..mock import MockLogger
from ....utils.log_generators.graph_pb_generator import create_graph_pb_file
@@ -85,7 +86,7 @@ class TestMsDataLoader:
write_file(file1, SCALAR_RECORD)
ms_loader = MSDataLoader(summary_dir)
ms_loader._latest_summary_filename = 'summary.00'
ms_loader.load()
ms_loader.load(ComputingResourceManager(1, 1))
shutil.rmtree(summary_dir)
tag = ms_loader.get_events_data().list_tags_by_plugin('scalar')
tensors = ms_loader.get_events_data().tensors(tag[0])
@@ -98,7 +99,7 @@ class TestMsDataLoader:
file2 = os.path.join(summary_dir, 'summary.02')
write_file(file2, SCALAR_RECORD)
ms_loader = MSDataLoader(summary_dir)
ms_loader.load()
ms_loader.load(ComputingResourceManager(1, 1))
shutil.rmtree(summary_dir)
assert 'Check crc faild and ignore this file' in str(MockLogger.log_msg['warning'])

@@ -124,7 +125,7 @@ class TestMsDataLoader:
summary_dir = tempfile.mkdtemp()
create_graph_pb_file(output_dir=summary_dir, filename=filename)
ms_loader = MSDataLoader(summary_dir)
ms_loader.load()
ms_loader.load(ComputingResourceManager(1, 1))
events_data = ms_loader.get_events_data()
plugins = events_data.list_tags_by_plugin(PluginNameEnum.GRAPH.value)
shutil.rmtree(summary_dir)


tests/ut/datavisual/processors/test_train_task_manager.py (+1, -1)

@@ -69,7 +69,7 @@ class TestTrainTaskManager:
def load_data(self):
"""Load data."""
log_operation = LogOperations()
self._plugins_id_map = {'image': [], 'scalar': [], 'graph': [], 'histogram': []}
self._plugins_id_map = {'image': [], 'scalar': [], 'graph': [], 'histogram': [], 'tensor': []}
self._events_names = []
self._train_id_list = []



tests/ut/profiler/parser/test_aicpu_parser.py (+0, -74)

@@ -1,74 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Test the aicpu parser."""
import os
import tempfile
import shutil

from unittest import TestCase

from mindinsight.profiler.parser.aicpu_data_parser import DataPreProcessParser


def get_result(file_path):
"""
Get result from the aicpu file.

Args:
file_path (str): The aicpu file path.

Returns:
list[list], the parsed aicpu information.
"""
result = []
try:
file = open(file_path, 'r')
result.append(file.read())
return result
finally:
if file:
file.close()


class TestAicpuParser(TestCase):
"""Test the class of Aicpu Parser."""

def setUp(self) -> None:
"""Initialization before test case execution."""
self.profiling_dir = os.path.realpath(os.path.join(os.path.dirname(__file__),
'../../../utils/resource/'
'JOB_AICPU/data'))
self.expect_dir = os.path.realpath(os.path.join(os.path.dirname(__file__),
'../../../utils/resource/'
'JOB_AICPU/expect'))
self.output_path = tempfile.mkdtemp(prefix='output_data_preprocess_aicpu_')
self.output_file = os.path.join(self.output_path, 'output_data_preprocess_aicpu_0.txt')
self.expect_file = os.path.join(self.expect_dir, 'output_data_preprocess_aicpu_0.txt')

def test_aicpu_parser(self):
"""Test the class of Aicpu Parser."""
data = DataPreProcessParser(self.profiling_dir, self.output_file)
data.execute()
expect_result = get_result(self.expect_file)
result = get_result(self.output_file)
shutil.rmtree(self.output_path)
assert expect_result == result

def test_aicpu_parser_file_not_exist(self):
"""Test the class of Aicpu Parser."""
profiling_dir = os.path.realpath(os.path.join(self.profiling_dir, 'data'))
data = DataPreProcessParser(profiling_dir, self.output_file)
data.execute()
shutil.rmtree(self.output_path)

tests/ut/profiler/parser/test_framework_parser.py (+0, -158)

@@ -1,158 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Test the framework parser module."""
import csv
import os
import shutil
import tempfile
from unittest import mock

import pytest
from marshmallow import ValidationError

from mindinsight.profiler.common.exceptions.exceptions import \
ProfilerPathErrorException, ProfilerDirNotFoundException, \
ProfilerFileNotFoundException
from mindinsight.profiler.parser.framework_parser import FrameworkParser
from tests.ut.profiler import PROFILER_DIR, RAW_DATA_BASE


def get_framework_result(file_path):
"""
Get framework result from the framework file.

Args:
file_path (str): The framework file path.

Returns:
list[list], the parsed framework information.
"""
result = []
with open(file_path, 'r') as file:
csv_reader = csv.reader(file)
for row in csv_reader:
result.append(row)
return result


class TestFrameworkParser:
"""Test the class of `FrameworkParser`."""
def setup_method(self):
"""Initialization before test case execution."""
FrameworkParser._raw_data_dir = RAW_DATA_BASE

self._output_path_1 = tempfile.mkdtemp(prefix='test_framework_parser_')
self._parser_1 = FrameworkParser('JOB1', '0', self._output_path_1)

self._output_path_2 = tempfile.mkdtemp(prefix='test_framework_parser_')
self._parser_2 = FrameworkParser('JOB2', '0', self._output_path_2)

self._output_path_4 = tempfile.mkdtemp(prefix='test_framework_parser_')
self._parser_4 = FrameworkParser('JOB4', '0', self._output_path_4)

def teardown_method(self) -> None:
"""Clear up after test case execution."""
shutil.rmtree(self._output_path_1)
shutil.rmtree(self._output_path_2)
shutil.rmtree(self._output_path_4)
FrameworkParser._raw_data_dir = '/var/log/npu/profiling'

def test_save_path(self):
"""Test the querying save path function."""
expect_result = os.path.join(self._output_path_1, 'framework_raw_0.csv')
assert expect_result == self._parser_1.save_path

expect_result = os.path.join(self._output_path_2, 'framework_raw_0.csv')
assert expect_result == self._parser_2.save_path

def test_point_info(self):
"""Test the querying point info function."""
expect_result = {
1: 'Default/Cast-op6',
2: 'Default/TransData-op7'
}
assert expect_result == self._parser_4.point_info

def test_to_task_id_full_op_name_dict(self):
"""Test the querying task id and full operator name dict function."""
expect_result = {
'51517': 'Default/Cast-op6',
'51518': 'Default/TransData-op7',
'51519': 'Default/network-WithLossCell/_backbone-ResNet/conv1-Conv2d/Cast-op5',
'51522': 'Default/network-WithLossCell/_backbone-ResNet/'
'layer1-SequentialCell/0-ResidualBlock/conv1-Conv2d/Cast-op28'
}
assert expect_result == self._parser_1.to_task_id_full_op_name_dict()
assert expect_result == self._parser_2.to_task_id_full_op_name_dict()

expect_result = {
'0_1': 'Default/Cast-op6',
'0_2': 'Default/TransData-op7',
'0_3': 'Default/network-WithLossCell/_backbone-ResNet/conv1-Conv2d/Cast-op5',
'0_4': 'Default/network-WithLossCell/_backbone-ResNet/layer1-SequentialCell/'
'0-ResidualBlock/conv1-Conv2d/Cast-op28'
}
assert expect_result == self._parser_4.to_task_id_full_op_name_dict()

def test_parse(self):
"""Test the parse function."""
expect_framework_file = os.path.join(PROFILER_DIR, 'framework_raw_0.csv')
expect_framework_file = os.path.realpath(expect_framework_file)
expect_result = get_framework_result(expect_framework_file)

self._parser_1.parse()
framework_file = os.path.join(self._output_path_1, 'framework_raw_0.csv')
result = get_framework_result(framework_file)
assert expect_result == result

self._parser_2.parse()
framework_file = os.path.join(self._output_path_2, 'framework_raw_0.csv')
result = get_framework_result(framework_file)
assert expect_result == result

@mock.patch('mindinsight.profiler.parser.framework_parser.validate_and_normalize_path')
def test_create_framework_parser_fail_1(self, *args):
"""Test the function of fail to create framework parser."""
args[0].side_effect = ValidationError({'profiler': {"The path is invalid!"}})

with pytest.raises(ProfilerPathErrorException) as exc_info:
FrameworkParser('JOB1', '0')
assert exc_info.value.error_code == '50546081'
assert exc_info.value.message == 'Path error. Profiling path is invalid.'

@mock.patch('os.path.isdir')
def test_create_framework_parser_fail_2(self, *args):
"""Test the function of fail to create framework parser."""
args[0].return_value = False
FrameworkParser._raw_data_dir = '/var/log/npu/profiling'

with pytest.raises(ProfilerDirNotFoundException) as exc_info:
FrameworkParser('JOB1', '0')
assert exc_info.value.error_code == '50546083'
assert exc_info.value.message == \
'The dir </var/log/npu/profiling/JOB1> not found.'

@mock.patch('os.listdir')
@mock.patch('os.path.isdir')
def test_create_framework_parser_fail_3(self, *args):
"""Test the function of fail to create framework parser."""
args[0].return_value = True
args[1].return_value = []
FrameworkParser._raw_data_dir = '/var/log/npu/profiling'

with pytest.raises(ProfilerFileNotFoundException) as exc_info:
FrameworkParser('JOB1', '0')
assert exc_info.value.error_code == '50546084'
assert exc_info.value.message == 'The file <Framework> not found.'

tests/ut/profiler/parser/test_minddata_pipeline_parser.py (+0, -93)

@@ -1,93 +0,0 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Test the minddata pipeline parser module."""
import csv
import os
import shutil
import tempfile

from mindinsight.profiler.parser.minddata_pipeline_parser import \
MinddataPipelineParser
from tests.ut.profiler import PROFILER_DIR, RAW_DATA, RAW_DATA_JOB2


def get_minddata_pipeline_result(file_path):
"""
Get minddata pipeline result from the minddata pipeline file.

Args:
file_path (str): The minddata pipeline file path.

Returns:
list[list], the parsed minddata pipeline information.
"""
result = []
with open(file_path, 'r') as file:
csv_reader = csv.reader(file)
for row in csv_reader:
result.append(row)
return result


class TestMinddataPipelineParser:
"""Test the class of `MinddataPipelineParser`."""
def setup_method(self):
"""Initialization before test case execution."""
self._output_path_1 = tempfile.mkdtemp(
prefix='test_minddata_pipeline_parser_'
)
self._parser_1 = MinddataPipelineParser(
RAW_DATA, '0', self._output_path_1
)

self._output_path_2 = tempfile.mkdtemp(
prefix='test_minddata_pipeline_parser_'
)
self._parser_2 = MinddataPipelineParser(
RAW_DATA_JOB2, '0', self._output_path_2
)

def teardown_method(self) -> None:
"""Clear up after test case execution."""
shutil.rmtree(self._output_path_1)
shutil.rmtree(self._output_path_2)

def test_save_path(self):
"""Test the querying save path function."""
expect_result = os.path.join(
self._output_path_1, 'minddata_pipeline_raw_0.csv'
)
assert expect_result == self._parser_1.save_path

def test_parse(self):
"""Test the parse function."""
expect_pipeline_file = os.path.join(
PROFILER_DIR, 'minddata_pipeline_raw_0.csv'
)
expect_result = get_minddata_pipeline_result(expect_pipeline_file)

self._parser_1.parse()
pipeline_file = os.path.join(
self._output_path_1, 'minddata_pipeline_raw_0.csv'
)
result = get_minddata_pipeline_result(pipeline_file)
assert expect_result == result

self._parser_2.parse()
pipeline_file = os.path.join(
self._output_path_2, 'minddata_pipeline_raw_0.csv'
)
result = get_minddata_pipeline_result(pipeline_file)
assert expect_result == result

tests/ut/profiler/parser/__init__.py → tests/ut/sysmetric/__init__.py

@@ -1,14 +1,15 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Test the system metrics."""

tests/ut/sysmetric/metrics_collector.py (+42, -0)

@@ -0,0 +1,42 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Test the metrics collector."""
from os import cpu_count

from mindinsight.sysmetric.collector import collect_cpu, collect_mem, collect_npu


def test_collect_cpu():
overall = collect_cpu(percent=True)
assert isinstance(overall, dict)
for value in overall.values():
assert 0 <= value <= 100
for key in 'user', 'system', 'idle':
assert key in overall
cores = collect_cpu(percpu=True)
assert isinstance(cores, list) and len(cores) == cpu_count()


def test_collect_mem():
mem = collect_mem()
assert 'total' in mem
assert 'available' in mem
assert mem['total'] > mem['available']


def test_collect_npu():
npu = collect_npu()
if npu is not None:
assert len(npu) == 8

tests/utils/log_generators/tensor_log_generator.py (+110, -0)

@@ -0,0 +1,110 @@
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Log generator for tensor data."""
import time

from operator import mul
from functools import reduce
import numpy as np
from mindinsight.datavisual.proto_files import mindinsight_anf_ir_pb2 as anf_ir_pb2
from mindinsight.datavisual.proto_files import mindinsight_summary_pb2 as summary_pb2

from .log_generator import LogGenerator


class TensorLogGenerator(LogGenerator):
"""
Log generator for tensor data.

This is a log generator that writes tensor data. Users can use it to generate
fake summary logs containing tensors.
"""

def generate_event(self, values):
"""
Method for generating tensor event.

Args:
values (dict): A dict containing:
{
wall_time (float): Timestamp.
step (int): Train step.
value (dict): Tensor value, including dims, data_type and float_data.
tag (str): Tag name.
}

Returns:
summary_pb2.Event.

"""
tensor_event = summary_pb2.Event()
tensor_event.wall_time = values.get('wall_time')
tensor_event.step = values.get('step')

value = tensor_event.summary.value.add()
value.tag = values.get('tag')
tensor = values.get('value')

value.tensor.dims[:] = tensor.get('dims')
value.tensor.data_type = tensor.get('data_type')
value.tensor.float_data[:] = tensor.get('float_data')
print(tensor.get('float_data'))

return tensor_event

def generate_log(self, file_path, steps_list, tag_name):
"""
Generate log for external calls.

Args:
file_path (str): Path to write logs.
steps_list (list): A list of steps.
tag_name (str): Tag name.

Returns:
list[dict], generated tensor metadata.
dict, generated tensor values keyed by step.

"""
tensor_metadata = []
tensor_values = dict()
for step in steps_list:
tensor = dict()

wall_time = time.time()
tensor.update({'wall_time': wall_time})
tensor.update({'step': step})
tensor.update({'tag': tag_name})
dims = list(np.random.randint(1, 10, 4))
mul_value = reduce(mul, dims)
tensor.update({'value': {
"dims": dims,
"data_type": anf_ir_pb2.DataType.DT_FLOAT32,
"float_data": np.random.randn(mul_value)
}})
tensor_metadata.append(tensor)
tensor_values.update({step: tensor})

self._write_log_one_step(file_path, tensor)

return tensor_metadata, tensor_values


if __name__ == "__main__":
tensor_log_generator = TensorLogGenerator()
test_file_name = '%s.%s.%s' % ('tensor', 'summary', str(time.time()))
test_steps = [1, 3, 5]
test_tag = "test_tensor_tag_name"
tensor_log_generator.generate_log(test_file_name, test_steps, test_tag)

tests/utils/log_operations.py (+3, -1)

@@ -25,12 +25,14 @@ from .log_generators.graph_log_generator import GraphLogGenerator
from .log_generators.images_log_generator import ImagesLogGenerator
from .log_generators.scalars_log_generator import ScalarsLogGenerator
from .log_generators.histogram_log_generator import HistogramLogGenerator
from .log_generators.tensor_log_generator import TensorLogGenerator

log_generators = {
PluginNameEnum.GRAPH.value: GraphLogGenerator(),
PluginNameEnum.IMAGE.value: ImagesLogGenerator(),
PluginNameEnum.SCALAR.value: ScalarsLogGenerator(),
PluginNameEnum.HISTOGRAM.value: HistogramLogGenerator()
PluginNameEnum.HISTOGRAM.value: HistogramLogGenerator(),
PluginNameEnum.TENSOR.value: TensorLogGenerator()
}




tests/utils/resource/JOB3/Framework.host.vm.point.1.slice_0 (+6, -2)

@@ -1,4 +1,8 @@
1 Default/Cast-op6
2 Default/TransData-op7
3 Default/network-WithLossCell/_backbone-ResNet/conv1-Conv2d/Cast-op5
4 Default/network-WithLossCell/_backbone-ResNet/layer1-SequentialCell/0-ResidualBlock/conv1-Conv2d/Cast-op28
3 Default/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/AllGather-op136
4 Default/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/AllGather-op136
5 Default/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/ReduceScatter-op145
6 Default/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/ReduceScatter-op145
7 Gradients/Default/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradReduceScatter/AllGather-op147
8 Gradients/Default/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradReduceScatter/AllGather-op147

tests/utils/resource/run_1/normal_run/profiler/aicore_intermediate_1_detail.csv (+200, -0)

@@ -0,0 +1,200 @@
full_op_name,execution_time
Default/AssignAdd-op414,0.001688
Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op29,0.0012020000000000002
Default/network-TrainStepWrap/optimizer_d-Adam/Assign-op30,0.0013606666666666667
Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op31,0.0011659999999999997
Default/network-TrainStepWrap/optimizer_d-Adam/Assign-op32,0.001116
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/GatherV2-op33,0.9352293333333332
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Mul-op35,0.010222666666666666
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/ReduceSum-op36,0.015073333333333333
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op37,0.003832666666666666
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Cast-op34,0.001396666666666667
Default/TransData-op216,0.006697333333333332
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Square-op38,0.009799333333333334
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Split-op39,0.09720533333333335
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Concat-op40,0.08841666666666667
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/StridedSlice-op41,0.012427333333333335
Default/AtomicAddrClean-op418,0.001378666666666667
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/ReduceSum-op42,0.009832666666666665
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op48,0.001400666666666667
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/RealDiv-op44,0.0014346666666666666
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Mul-op28,0.001468
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Cast-op47,0.004459333333333333
Default/TransData-op281,0.0027733333333333334
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Cast-op46,0.004600000000000001
Default/TransData-op278,0.004403333333333333
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Cast-op45,0.00711
Default/TransData-op275,0.005461333333333334
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Cast-op52,0.023115999999999994
Default/TransData-op272,0.009749333333333332
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op43,0.0013153333333333335
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op53,0.004243333333333333
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/RealDiv-op54,0.004824666666666667
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op49,0.003735
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/RealDiv-op50,0.0045564285714285715
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/grad_VirtualDiv/RealDiv-op51,0.004516428571428571
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/GatherV2-op55,42.220212142857136
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op56,0.00871357142857143
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradStridedSlice/StridedSliceGrad-op57,0.15243714285714288
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op58,0.9626657142857143
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op59,1.0643285714285715
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op60,0.9675764285714286
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op61,0.9675435714285714
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op62,1.0075085714285714
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op63,0.9250400000000002
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op64,1.1294107142857144
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op65,1.0091157142857143
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSplit/Concat-op66,0.051030714285714276
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Split-op67,2.617072142857143
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Concat-op68,3.084827142857143
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/StridedSlice-op69,0.3331414285714285
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Mul-op70,0.37437785714285715
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/ReLU-op71,0.32776857142857135
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Mul-op72,0.33151499999999995
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Cast-op73,0.2518214285714286
Default/TransData-op271,0.14980214285714283
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/MatMul-op74,0.45218500000000006
Default/TransData-op240,0.09184714285714284
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/RealDiv-op76,0.10391071428571431
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/BiasAdd-op77,0.11015571428571427
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/ReLU-op78,0.10085142857142855
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Mul-op79,0.10943071428571426
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Cast-op80,0.04727285714285715
Default/TransData-op274,0.03735642857142857
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/MatMul-op81,0.09832214285714284
Default/TransData-op245,0.037176428571428576
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/RealDiv-op83,0.036798571428571424
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/BiasAdd-op84,0.04016857142857143
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/ReLU-op85,0.027936428571428574
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Mul-op86,0.039065
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Cast-op87,0.02587642857142857
Default/TransData-op277,0.01939
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/MatMul-op88,0.03152
Default/TransData-op250,0.020935000000000002
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/RealDiv-op90,0.025487142857142854
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/BiasAdd-op91,0.021720714285714288
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/ReLU-op92,0.016717857142857142
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Mul-op93,0.021017857142857147
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Cast-op94,0.014929999999999999
Default/TransData-op280,0.012425714285714285
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/MatMul-op95,0.013189999999999997
Default/TransData-op255,0.014586428571428571
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/RealDiv-op97,0.015751428571428572
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/BiasAdd-op98,0.013145
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/ReLU-op99,0.010007857142857143
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Mul-op100,0.01205
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Cast-op101,0.009261428571428571
Default/TransData-op215,0.009404285714285714
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/MatMul-op102,0.007625
Default/TransData-op204,0.016274285714285713
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/RealDiv-op104,0.004828571428571428
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/BiasAdd-op105,0.004472857142857142
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op106,0.003925714285714286
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/SigmoidCrossEntropyWithLogits-op107,0.004808571428571428
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSigmoidCrossEntropyWithLogits/SigmoidCrossEntropyWithLogitsGrad-op109,0.004950714285714286
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSigmoidCrossEntropyWithLogits/SigmoidCrossEntropyWithLogitsGrad-op108,0.004631428571428572
Default/AtomicAddrClean-op425,0.0015150000000000003
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/ReduceMean-op110,0.004534999999999999
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradBiasAdd/BiasAddGrad-op112,0.0030614285714285717
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradRealDiv/RealDiv-op113,0.004547142857142856
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op111,0.0031428571428571426
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradCast/Cast-op115,0.0026614285714285715
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op114,0.027466428571428576
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSquare/Mul-op116,0.03313571428571428
Default/TransData-op257,0.06620642857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradMul/Mul-op117,0.010132142857142855
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSquare/Mul-op121,0.020947142857142855
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMatMul/MatMul-op119,0.009299285714285715
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMatMul/MatMul-op120,0.009546428571428572
Default/AtomicAddrClean-op427,0.002937142857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradGatherV2/UnsortedSegmentSum-op123,7.355592142857143
Default/TransData-op235,0.014415714285714283
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMul/Mul-op128,0.012212857142857145
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradReLU/ReluGrad-op131,0.02228428571428571
Default/AtomicAddrClean-op428,0.001404285714285714
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradBiasAdd/BiasAddGrad-op134,0.008783571428571429
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradRealDiv/RealDiv-op135,0.013412857142857143
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradCast/Cast-op136,0.008197142857142856
Default/TransData-op252,0.008572857142857142
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMatMul/MatMul-op138,0.029589285714285724
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMatMul/MatMul-op139,0.016685
Default/TransData-op233,0.020412142857142858
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMul/Mul-op143,0.020592142857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradReLU/ReluGrad-op145,0.03936785714285714
Default/AtomicAddrClean-op429,0.0014571428571428572
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradBiasAdd/BiasAddGrad-op147,0.012325714285714285
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradRealDiv/RealDiv-op148,0.021508571428571432
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradCast/Cast-op149,0.012591428571428571
Default/TransData-op247,0.012454999999999997
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMatMul/MatMul-op151,0.053485000000000005
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMatMul/MatMul-op152,0.03651357142857143
Default/TransData-op231,0.03276571428571429
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMul/Mul-op156,0.037129999999999996
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradReLU/ReluGrad-op158,0.09050499999999999
Default/AtomicAddrClean-op430,0.001497142857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradBiasAdd/BiasAddGrad-op160,0.017480714285714283
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradRealDiv/RealDiv-op161,0.042566428571428575
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradCast/Cast-op162,0.02489785714285714
Default/TransData-op242,0.019189285714285714
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMatMul/MatMul-op164,0.10608857142857142
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMatMul/MatMul-op165,0.1160064285714286
Default/TransData-op229,0.09212928571428572
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMul/Mul-op169,0.10092642857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradReLU/ReluGrad-op171,0.18744071428571424
Default/AtomicAddrClean-op431,0.0014599999999999997
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradBiasAdd/BiasAddGrad-op173,0.030029999999999998
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradRealDiv/RealDiv-op174,0.13704571428571427
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradCast/Cast-op175,0.04649285714285715
Default/TransData-op237,0.03681785714285714
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMatMul/MatMul-op177,0.42144428571428577
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/RealDiv-op118,0.001617857142857143
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op124,0.0014600000000000001
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMatMul/MatMul-op178,0.5351814285714286
Default/TransData-op284,0.32571142857142854
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMul/Mul-op182,0.3179142857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradReLU/ReluGrad-op184,0.5144707142857143
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradMul/Mul-op186,0.3859778571428571
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradStridedSlice/StridedSliceGrad-op187,1.3543971428571429
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op188,1.5460985714285713
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op189,1.5340514285714286
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op190,1.540242857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op191,1.5514735714285715
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op192,1.5607435714285713
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op193,1.5385385714285713
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op194,1.537682857142857
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op195,1.5342942857142856
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradSplit/Concat-op196,2.584179285714286
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/grad_MirrorOperator/Mul-op130,0.005715714285714287
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/grad_MirrorOperator/Mul-op129,0.0015964285714285716
Default/Mul-op183,0.0016557142857142858
Default/Mul-op170,0.0015957142857142856
Default/Mul-op157,0.0015314285714285714
Default/Mul-op144,0.0014735714285714285
Default/Mul-op122,0.0012207142857142855
Default/TransData-op206,0.02777642857142857
Default/TransData-op208,0.008395714285714286
Default/TransData-op210,0.006270714285714287
Default/TransData-op212,0.003332857142857143
Default/TransData-op214,0.0024235714285714286
Default/Mul-op197,0.016677857142857147
Default/Mul-op176,0.007605000000000001
Default/Mul-op163,0.0062528571428571425
Default/Mul-op150,0.004635
Default/Mul-op137,0.0016078571428571429
Default/AtomicAddrClean-op434,0.007719285714285714
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradGatherV2/UnsortedSegmentSum-op199,37.25223428571428
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/AddN-op200,0.012836428571428572
Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op201,0.010897142857142855
Default/network-TrainStepWrap/optimizer_w-FTRL/ApplyFtrl-op133,0.02319642857142857
Default/network-TrainStepWrap/optimizer_w-FTRL/ApplyFtrl-op132,0.0022571428571428573
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op185,0.003688571428571429
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op172,0.003175714285714285
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op159,0.0029478571428571436
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op146,0.0028899999999999998
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op127,0.0022257142857142853
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op198,0.133745
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op179,0.03321571428571428
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op166,0.010665
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op153,0.006292857142857143
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op140,0.002818571428571428
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op202,0.08427071428571428
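The rows above pair each operator's full graph path with what reads as its average execution time in milliseconds. As a quick sanity check on the fixture, the embedding-related operators dominate: GatherV2-op55 (~42.22) and UnsortedSegmentSum-op199 (~37.25) sit an order of magnitude above everything else. Below is a minimal sketch for ranking them, assuming only the two-column name,time layout shown here; the path in the usage line mirrors the sibling files' naming and is a guess at this file's name, not something confirmed by the diff:

import csv

def top_ops(path, n=5):
    """Rank operators by execution time from a (full_op_name, time) CSV."""
    ops = []
    with open(path) as f:
        for row in csv.reader(f):
            if len(row) != 2:
                continue
            try:
                ops.append((row[0], float(row[1])))
            except ValueError:
                continue  # skip a header row or malformed line
    return sorted(ops, key=lambda pair: pair[1], reverse=True)[:n]

for name, cost in top_ops(
        "tests/utils/resource/run_1/normal_run/profiler/"
        "aicore_intermediate_1_detail.csv"):
    print(f"{cost:10.4f}  {name.rsplit('/', 1)[-1]}")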

tests/utils/resource/run_1/normal_run/profiler/aicore_intermediate_1_type.csv

@@ -0,0 +1,30 @@
op_type,execution_time,execution_frequency,percent
AssignAdd,0.001688,1,0.00
Mul,1.9029486666666665347,32,1.51
Assign,0.0024766666666666667,2,0.00
GatherV2,43.1554414761904692,2,34.13
ReduceSum,0.0307648571428571411,5,0.02
TensorAdd,0.0092183809523809521,3,0.01
Cast,0.4846848571428571735,15,0.38
TransData,1.1151575238095237340,30,0.88
Square,0.009799333333333334,1,0.01
Split,2.71427747619047635,2,2.15
Concat,5.808453809523809946,4,4.59
StridedSlice,0.345568761904761835,2,0.27
AtomicAddrClean,0.0193686666666666662,8,0.02
RealDiv,0.4228071904761904831,15,0.33
Tile,0.044158333333333339,4,0.03
StridedSliceGrad,1.50683428571428578,2,1.19
Slice,20.3763149999999997,16,16.12
ReLU,0.483282142857142759,5,0.38
MatMul,1.936681428571428733,15,1.53
BiasAdd,0.189662857142857130,5,0.15
SigmoidCrossEntropyWithLogits,0.004808571428571428,1,0.00
SigmoidCrossEntropyWithLogitsGrad,0.009582142857142858,2,0.01
ReduceMean,0.004534999999999999,1,0.00
BiasAddGrad,0.0716814285714285667,5,0.06
UnsortedSegmentSum,44.607826428571423,2,35.28
ReluGrad,0.85406857142857138,5,0.68
AddN,0.012836428571428572,1,0.01
ApplyFtrl,0.0254535714285714273,2,0.02
Adam,0.2859357142857142737,11,0.23
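This summary is consistent with a plain roll-up of the per-operator detail rows above: execution_time sums the detail times for each op_type, execution_frequency counts them, and percent is the type's share of the run total. The two SigmoidCrossEntropyWithLogitsGrad detail rows (0.004950714285714286 and 0.004631428571428572), for instance, sum to the 0.009582142857142858 listed here. A minimal sketch of that aggregation follows; the derivation is inferred from the numbers in this fixture, not from any documented contract:

from collections import defaultdict

def aggregate_by_type(detail_rows):
    """Roll (full_op_name, execution_time) rows up into
    (op_type, execution_time, execution_frequency, percent)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for full_name, cost in detail_rows:
        # Op type is the leaf name without its "-opNNN" suffix,
        # e.g. ".../gradConcat/Slice-op58" -> "Slice".
        op_type = full_name.rsplit("/", 1)[-1].rsplit("-op", 1)[0]
        sums[op_type] += float(cost)
        counts[op_type] += 1
    total = sum(sums.values())
    return [(op, sums[op], counts[op], round(100 * sums[op] / total, 2))
            for op in sums]

rows = [("Gradients/.../SigmoidCrossEntropyWithLogitsGrad-op109",
         "0.004950714285714286"),
        ("Gradients/.../SigmoidCrossEntropyWithLogitsGrad-op108",
         "0.004631428571428572")]
print(aggregate_by_type(rows))
# -> SigmoidCrossEntropyWithLogitsGrad: ~0.009582142857142858, freq 2, 100.0%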

tests/utils/resource/run_1/normal_run/profiler/framework_raw_1.csv

@@ -0,0 +1,200 @@
task_id,stream_id,block_dim,full_op_name,op_name,op_type,subgraph,op_info
30092,3,1,Default/AssignAdd-op414,AssignAdd-op414,AssignAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_INT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_INT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_INT32"", ""shape"": ""1""}}"
30093,3,1,Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op29,Mul-op29,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
30094,3,1,Default/network-TrainStepWrap/optimizer_d-Adam/Assign-op30,Assign-op30,Assign,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
30095,3,1,Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op31,Mul-op31,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
30096,3,1,Default/network-TrainStepWrap/optimizer_d-Adam/Assign-op32,Assign-op32,Assign,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
30103,3,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/GatherV2-op33,GatherV2-op33,GatherV2,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_INT32"", ""shape"": ""16000,39""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}}"
30104,3,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Mul-op35,Mul-op35,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}}"
30105,3,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/ReduceSum-op36,ReduceSum-op36,ReduceSum,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
30106,3,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op37,TensorAdd-op37,TensorAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
30107,3,1,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Cast-op34,Cast-op34,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""128,1""}}"
30108,3,8,Default/TransData-op216,TransData-op216,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""128,1""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1,8,16,16""}}"
30109,3,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Square-op38,Square-op38,Square,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
30453,7,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Split-op39,Split-op39,Split,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1477568,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
30454,7,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Concat-op40,Concat-op40,Concat,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}}"
30455,7,22,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/StridedSlice-op41,StridedSlice-op41,StridedSlice,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""23087,64""}}"
30456,7,1,Default/AtomicAddrClean-op418,AtomicAddrClean-op418,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
30457,7,33,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/ReduceSum-op42,ReduceSum-op42,ReduceSum,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""23087,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
30646,9,1,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op48,ReduceSum-op48,ReduceSum,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
30837,11,1,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/RealDiv-op44,RealDiv-op44,RealDiv,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
30838,11,1,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Mul-op28,Mul-op28,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
30839,11,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Cast-op47,Cast-op47,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""256,128""}}"
30840,11,16,Default/TransData-op281,TransData-op281,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""256,128""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,16,16,16""}}"
30841,11,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Cast-op46,Cast-op46,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""512,256""}}"
30842,11,32,Default/TransData-op278,TransData-op278,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""512,256""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,32,16,16""}}"
30843,11,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Cast-op45,Cast-op45,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1024,512""}}"
30844,11,32,Default/TransData-op275,TransData-op275,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1024,512""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,64,16,16""}}"
30845,11,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Cast-op52,Cast-op52,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""2496,1024""}}"
30846,11,32,Default/TransData-op272,TransData-op272,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""2496,1024""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,156,16,16""}}"
30847,11,1,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op43,ReduceSum-op43,ReduceSum,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
31038,13,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op53,Tile-op53,Tile,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31039,13,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/RealDiv-op54,RealDiv-op54,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31231,15,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op49,Tile-op49,Tile,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31232,15,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/RealDiv-op50,RealDiv-op50,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31233,15,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/grad_VirtualDiv/RealDiv-op51,RealDiv-op51,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31236,15,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/GatherV2-op55,GatherV2-op55,GatherV2,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_INT32"", ""shape"": ""128000,39""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
31409,17,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op56,Tile-op56,Tile,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""23087,64""}}"
31410,17,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradStridedSlice/StridedSliceGrad-op57,StridedSliceGrad-op57,StridedSliceGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""23087,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}}"
31411,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op58,Slice-op58,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31412,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op59,Slice-op59,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31413,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op60,Slice-op60,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31414,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op61,Slice-op61,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31415,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op62,Slice-op62,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31416,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op63,Slice-op63,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31417,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op64,Slice-op64,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31418,17,23087,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op65,Slice-op65,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31419,17,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSplit/Concat-op66,Concat-op66,Concat,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1477568,8""}}"
31598,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Split-op67,Split-op67,Split,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024000,39,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
31599,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Concat-op68,Concat-op68,Concat,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}}"
31600,19,15,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/StridedSlice-op69,StridedSlice-op69,StridedSlice,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,64""}}"
31601,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Mul-op70,Mul-op70,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,64""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,64""}}"
31602,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/ReLU-op71,ReLU-op71,ReLU,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}}"
31603,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Mul-op72,Mul-op72,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}}"
31604,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Cast-op73,Cast-op73,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,2496""}}"
31605,19,32,Default/TransData-op271,TransData-op271,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,2496""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""156,1000,16,16""}}"
31606,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/MatMul-op74,MatMul-op74,MatMul,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""156,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,156,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""64,1000,16,16""}}"
31607,19,32,Default/TransData-op240,TransData-op240,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""64,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31608,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/RealDiv-op76,RealDiv-op76,RealDiv,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31609,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/BiasAdd-op77,BiasAdd-op77,BiasAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31610,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/ReLU-op78,ReLU-op78,ReLU,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31611,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Mul-op79,Mul-op79,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31612,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Cast-op80,Cast-op80,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,1024""}}"
31613,19,32,Default/TransData-op274,TransData-op274,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,1000,16,16""}}"
31614,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/MatMul-op81,MatMul-op81,MatMul,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,64,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""32,1000,16,16""}}"
31615,19,32,Default/TransData-op245,TransData-op245,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""32,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31616,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/RealDiv-op83,RealDiv-op83,RealDiv,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31617,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/BiasAdd-op84,BiasAdd-op84,BiasAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31618,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/ReLU-op85,ReLU-op85,ReLU,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31619,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Mul-op86,Mul-op86,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31620,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Cast-op87,Cast-op87,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,512""}}"
31621,19,32,Default/TransData-op277,TransData-op277,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,1000,16,16""}}"
31622,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/MatMul-op88,MatMul-op88,MatMul,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,32,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16,1000,16,16""}}"
31623,19,32,Default/TransData-op250,TransData-op250,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31624,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/RealDiv-op90,RealDiv-op90,RealDiv,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31625,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/BiasAdd-op91,BiasAdd-op91,BiasAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31626,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/ReLU-op92,ReLU-op92,ReLU,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31627,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Mul-op93,Mul-op93,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31628,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Cast-op94,Cast-op94,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,256""}}"
31629,19,32,Default/TransData-op280,TransData-op280,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,1000,16,16""}}"
31630,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/MatMul-op95,MatMul-op95,MatMul,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,16,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""8,1000,16,16""}}"
31631,19,32,Default/TransData-op255,TransData-op255,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""8,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31632,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/RealDiv-op97,RealDiv-op97,RealDiv,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31633,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/BiasAdd-op98,BiasAdd-op98,BiasAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31634,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/ReLU-op99,ReLU-op99,ReLU,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31635,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Mul-op100,Mul-op100,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31636,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Cast-op101,Cast-op101,Cast,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,128""}}"
31637,19,32,Default/TransData-op215,TransData-op215,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,1000,16,16""}}"
31638,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/MatMul-op102,MatMul-op102,MatMul,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1,8,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1,1000,16,16""}}"
31639,19,32,Default/TransData-op204,TransData-op204,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31640,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/RealDiv-op104,RealDiv-op104,RealDiv,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31641,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/BiasAdd-op105,BiasAdd-op105,BiasAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31642,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op106,TensorAdd-op106,TensorAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31643,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/SigmoidCrossEntropyWithLogits-op107,SigmoidCrossEntropyWithLogits-op107,SigmoidCrossEntropyWithLogits,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31644,19,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSigmoidCrossEntropyWithLogits/SigmoidCrossEntropyWithLogitsGrad-op109,SigmoidCrossEntropyWithLogitsGrad-op109,SigmoidCrossEntropyWithLogitsGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31645,19,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSigmoidCrossEntropyWithLogits/SigmoidCrossEntropyWithLogitsGrad-op108,SigmoidCrossEntropyWithLogitsGrad-op108,SigmoidCrossEntropyWithLogitsGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31646,19,1,Default/AtomicAddrClean-op425,AtomicAddrClean-op425,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
31647,19,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/ReduceMean-op110,ReduceMean-op110,ReduceMean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
31648,19,1,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradBiasAdd/BiasAddGrad-op112,BiasAddGrad-op112,BiasAddGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
31649,19,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradRealDiv/RealDiv-op113,RealDiv-op113,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}}"
31650,19,1,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op111,ReduceSum-op111,ReduceSum,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}}"
31839,21,16,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradCast/Cast-op115,Cast-op115,Cast,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,1""}}"
31840,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op114,Tile-op114,Tile,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}}"
31843,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSquare/Mul-op116,Mul-op116,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31844,21,32,Default/TransData-op257,TransData-op257,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,1""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1,1000,16,16""}}"
31845,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradMul/Mul-op117,Mul-op117,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}}"
31846,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSquare/Mul-op121,Mul-op121,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
31847,21,8,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMatMul/MatMul-op119,MatMul-op119,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1,1000,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1,8,16,16""}}"
31848,21,63,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMatMul/MatMul-op120,MatMul-op120,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""1,8,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""8,1000,16,16""}}"
31849,21,16,Default/AtomicAddrClean-op427,AtomicAddrClean-op427,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}}"
31850,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradGatherV2/UnsortedSegmentSum-op123,UnsortedSegmentSum-op123,UnsortedSegmentSum,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_INT32"", ""shape"": ""16000,39""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}}"
31851,21,32,Default/TransData-op235,TransData-op235,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""8,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31852,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMul/Mul-op128,Mul-op128,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31853,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradReLU/ReluGrad-op131,ReluGrad-op131,ReluGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31854,21,1,Default/AtomicAddrClean-op428,AtomicAddrClean-op428,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}}"
31855,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradBiasAdd/BiasAddGrad-op134,BiasAddGrad-op134,BiasAddGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}}"
31856,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradRealDiv/RealDiv-op135,RealDiv-op135,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}}"
31857,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradCast/Cast-op136,Cast-op136,Cast,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,128""}}"
31858,21,32,Default/TransData-op252,TransData-op252,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,128""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,1000,16,16""}}"
31859,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMatMul/MatMul-op138,MatMul-op138,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,1000,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""8,16,16,16""}}"
31860,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMatMul/MatMul-op139,MatMul-op139,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""8,16,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16,1000,16,16""}}"
31861,21,32,Default/TransData-op233,TransData-op233,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31862,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMul/Mul-op143,Mul-op143,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31863,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradReLU/ReluGrad-op145,ReluGrad-op145,ReluGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31864,21,1,Default/AtomicAddrClean-op429,AtomicAddrClean-op429,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}}"
31865,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradBiasAdd/BiasAddGrad-op147,BiasAddGrad-op147,BiasAddGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}}"
31866,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradRealDiv/RealDiv-op148,RealDiv-op148,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}}"
31867,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradCast/Cast-op149,Cast-op149,Cast,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,256""}}"
31868,21,32,Default/TransData-op247,TransData-op247,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,256""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,1000,16,16""}}"
31869,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMatMul/MatMul-op151,MatMul-op151,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,1000,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16,32,16,16""}}"
31870,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMatMul/MatMul-op152,MatMul-op152,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16,32,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""32,1000,16,16""}}"
31871,21,32,Default/TransData-op231,TransData-op231,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""32,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31872,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMul/Mul-op156,Mul-op156,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31873,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradReLU/ReluGrad-op158,ReluGrad-op158,ReluGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31874,21,1,Default/AtomicAddrClean-op430,AtomicAddrClean-op430,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}}"
31875,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradBiasAdd/BiasAddGrad-op160,BiasAddGrad-op160,BiasAddGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}}"
31876,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradRealDiv/RealDiv-op161,RealDiv-op161,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}}"
31877,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradCast/Cast-op162,Cast-op162,Cast,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,512""}}"
31878,21,32,Default/TransData-op242,TransData-op242,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,512""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,1000,16,16""}}"
31879,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMatMul/MatMul-op164,MatMul-op164,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,1000,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""32,64,16,16""}}"
31880,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMatMul/MatMul-op165,MatMul-op165,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""32,64,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""64,1000,16,16""}}"
31881,21,32,Default/TransData-op229,TransData-op229,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""64,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31882,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMul/Mul-op169,Mul-op169,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31883,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradReLU/ReluGrad-op171,ReluGrad-op171,ReluGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31884,21,1,Default/AtomicAddrClean-op431,AtomicAddrClean-op431,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}}"
31885,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradBiasAdd/BiasAddGrad-op173,BiasAddGrad-op173,BiasAddGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}}"
31886,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradRealDiv/RealDiv-op174,RealDiv-op174,RealDiv,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}}"
31887,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradCast/Cast-op175,Cast-op175,Cast,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,1024""}}"
31888,21,32,Default/TransData-op237,TransData-op237,TransData,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""16000,1024""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,1000,16,16""}}"
31889,21,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMatMul/MatMul-op177,MatMul-op177,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""156,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,1000,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""64,156,16,16""}}"
32218,23,1,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/RealDiv-op118,RealDiv-op118,RealDiv,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
32219,23,1,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op124,TensorAdd-op124,TensorAdd,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
32220,23,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMatMul/MatMul-op178,MatMul-op178,MatMul,Gradients,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,1000,16,16""}, ""input_1"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT16"", ""shape"": ""64,156,16,16""}, ""output_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""156,1000,16,16""}}"
32221,23,32,Default/TransData-op284,TransData-op284,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""156,1000,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}}"
32222,23,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMul/Mul-op182,Mul-op182,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}}"
32223,23,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradReLU/ReluGrad-op184,ReluGrad-op184,ReluGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}}"
32224,23,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradMul/Mul-op186,Mul-op186,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,2496""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,64""}}"
32225,23,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradStridedSlice/StridedSliceGrad-op187,StridedSliceGrad-op187,StridedSliceGrad,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}}"
32226,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op188,Slice-op188,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32227,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op189,Slice-op189,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32228,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op190,Slice-op190,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32229,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op191,Slice-op191,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32230,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op192,Slice-op192,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32231,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op193,Slice-op193,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32232,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op194,Slice-op194,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32233,23,640,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op195,Slice-op195,Slice,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,64""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}}"
32234,23,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradSplit/Concat-op196,Concat-op196,Concat,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024000,39,8""}}"
32414,25,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/grad_MirrorOperator/Mul-op130,Mul-op130,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}}"
32415,25,1,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/grad_MirrorOperator/Mul-op129,Mul-op129,Mul,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
32416,25,2,Default/Mul-op183,Mul-op183,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}}"
32417,25,1,Default/Mul-op170,Mul-op170,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}}"
32418,25,1,Default/Mul-op157,Mul-op157,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}}"
32419,25,1,Default/Mul-op144,Mul-op144,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}}"
32420,25,1,Default/Mul-op122,Mul-op122,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
32421,25,32,Default/TransData-op206,TransData-op206,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""64,156,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}}"
32422,25,32,Default/TransData-op208,TransData-op208,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""32,64,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}}"
32423,25,32,Default/TransData-op210,TransData-op210,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""16,32,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}}"
32424,25,16,Default/TransData-op212,TransData-op212,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""8,16,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}}"
32425,25,8,Default/TransData-op214,TransData-op214,TransData,Default,"{""input_0"": {""format"": ""FRACTAL_NZ"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1,8,16,16""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}}"
32426,25,32,Default/Mul-op197,Mul-op197,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}}"
32427,25,32,Default/Mul-op176,Mul-op176,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}}"
32428,25,32,Default/Mul-op163,Mul-op163,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}}"
32429,25,32,Default/Mul-op150,Mul-op150,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}}"
32430,25,1,Default/Mul-op137,Mul-op137,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}}"
32431,25,31,Default/AtomicAddrClean-op434,AtomicAddrClean-op434,AtomicAddrClean,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
32434,25,32,Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradGatherV2/UnsortedSegmentSum-op199,UnsortedSegmentSum-op199,UnsortedSegmentSum,Gradients,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128000,39,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_INT32"", ""shape"": ""128000,39""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
32435,25,32,Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/AddN-op200,AddN-op200,AddN,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
32436,25,32,Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op201,Mul-op201,Mul,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
32437,25,29,Default/network-TrainStepWrap/optimizer_w-FTRL/ApplyFtrl-op133,ApplyFtrl-op133,ApplyFtrl,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,1""}}"
32438,25,1,Default/network-TrainStepWrap/optimizer_w-FTRL/ApplyFtrl-op132,ApplyFtrl-op132,ApplyFtrl,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
32439,25,1,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op185,Adam-op185,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024""}}"
32440,25,1,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op172,Adam-op172,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512""}}"
32441,25,1,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op159,Adam-op159,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256""}}"
32442,25,1,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op146,Adam-op146,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128""}}"
32443,25,1,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op127,Adam-op127,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}}"
32444,25,32,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op198,Adam-op198,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""2496,1024""}}"
32445,25,31,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op179,Adam-op179,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1024,512""}}"
32446,25,31,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op166,Adam-op166,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""512,256""}}"
32447,25,16,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op153,Adam-op153,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""256,128""}}"
32448,25,1,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op140,Adam-op140,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}}"
32449,25,31,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op202,Adam-op202,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""input_3"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_4"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""1""}, ""input_5"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_6"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_7"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_8"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": """"}, ""input_9"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_0"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_1"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}, ""output_2"": {""format"": ""DefaultFormat"", ""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""184696,8""}}"
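For orientation, the rows above are CSV records whose last column is a JSON blob describing each operator's input/output tensors (format, data type, shape). Below is a minimal parsing sketch; the column names are assumptions inferred from the values, not names taken from the profiler source, and the sample row is shortened to a single input for readability.

import csv
import json

# Assumed column layout for the detail rows shown above; the names are
# guesses inferred from the values, not from the profiler source.
COLUMNS = ["task_id", "stream_id", "block_dim", "full_op_name",
           "op_name", "op_type", "subgraph", "op_info"]

def parse_detail_row(line):
    fields = next(csv.reader([line]))     # csv handles the quoted JSON field
    record = dict(zip(COLUMNS, fields))
    record["op_info"] = json.loads(record["op_info"])  # embedded tensor info
    return record

# Shortened sample row (hypothetical, same shape as the records above).
sample = ('32448,25,1,Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op140,'
          'Adam-op140,Adam,Default,"{""input_0"": {""format"": ""DefaultFormat"", '
          '""data_type"": ""NUMBER_TYPE_FLOAT32"", ""shape"": ""128,1""}}"')
rec = parse_detail_row(sample)
print(rec["op_type"], rec["op_info"]["input_0"]["shape"])   # Adam 128,1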

tests/utils/resource/run_1/normal_run/profiler/min_cycle_counter_1.txt  (+1, -0)

@@ -0,0 +1 @@
43806841592.0

tests/utils/resource/run_1/normal_run/profiler/minddata_pipeline_raw_1.csv  (+5, -0)

@@ -0,0 +1,5 @@
op_id,op_type,num_workers,output_queue_size,output_queue_average_size,output_queue_length,output_queue_usage_rate,sample_interval,parent_id,children_id
0,Batch,4,,,,,10,,[1]
1,Shuffle,1,"[10, 20, 30]",20.0,64,0.3125,10,0,"[2, 3]"
2,TFReader,4,"[10, 20, 30]",20.0,64,0.3125,10,1,
3,TFReader,4,"[10, 20, 30]",20.0,64,0.3125,10,1,
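The queue columns above are internally consistent: output_queue_usage_rate appears to be output_queue_average_size divided by output_queue_length (20.0 / 64 = 0.3125). A one-line check, assuming that relationship holds:

# Assumed relationship: usage rate = average queue size / queue capacity.
average_size = 20.0   # output_queue_average_size (Shuffle and TFReader rows)
queue_length = 64     # output_queue_length
assert average_size / queue_length == 0.3125   # output_queue_usage_rate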

tests/utils/resource/run_1/normal_run/profiler/output_format_data_hwts_1.txt  (+62850, -0; diff suppressed because it is too large)


tests/utils/resource/run_1/normal_run/profiler/output_op_compute_time_1.txt  (+203, -0)

@@ -0,0 +1,203 @@
====================op compute time====================
op_name compute_time(ms) stream_id
------------ --------------- ---------
Default/AssignAdd-op414 0.001688 519
Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op29 0.0012020000000000002 519
Default/network-TrainStepWrap/optimizer_d-Adam/Assign-op30 0.0013606666666666667 519
Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op31 0.0011659999999999997 519
Default/network-TrainStepWrap/optimizer_d-Adam/Assign-op32 0.001116 519
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/GatherV2-op33 0.9352293333333332 519
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Mul-op35 0.010222666666666666 519
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/ReduceSum-op36 0.015073333333333333 519
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op37 0.003832666666666666 519
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Cast-op34 0.001396666666666667 519
Default/TransData-op216 0.006697333333333332 519
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Square-op38 0.009799333333333334 519
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Split-op39 0.09720533333333335 523
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Concat-op40 0.08841666666666667 523
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/StridedSlice-op41 0.012427333333333335 523
Default/AtomicAddrClean-op418 0.001378666666666667 523
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/ReduceSum-op42 0.009832666666666665 523
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op48 0.001400666666666667 525
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/RealDiv-op44 0.0014346666666666666 527
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/Mul-op28 0.001468 527
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Cast-op47 0.004459333333333333 527
Default/TransData-op281 0.0027733333333333334 527
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Cast-op46 0.004600000000000001 527
Default/TransData-op278 0.004403333333333333 527
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Cast-op45 0.00711 527
Default/TransData-op275 0.005461333333333334 527
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Cast-op52 0.023115999999999994 527
Default/TransData-op272 0.009749333333333332 527
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op43 0.0013153333333333335 527
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op53 0.004243333333333333 529
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/RealDiv-op54 0.004824666666666667 529
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op49 0.003735 531
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/RealDiv-op50 0.0045564285714285715 531
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/grad_VirtualDiv/RealDiv-op51 0.004516428571428571 531
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/GatherV2-op55 42.220212142857136 531
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op56 0.00871357142857143 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradStridedSlice/StridedSliceGrad-op57 0.15243714285714288 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op58 0.9626657142857143 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op59 1.0643285714285715 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op60 0.9675764285714286 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op61 0.9675435714285714 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op62 1.0075085714285714 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op63 0.9250400000000002 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op64 1.1294107142857144 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradConcat/Slice-op65 1.0091157142857143 533
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSplit/Concat-op66 0.051030714285714276 533
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Split-op67 2.617072142857143 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Concat-op68 3.084827142857143 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/StridedSlice-op69 0.3331414285714285 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/Mul-op70 0.37437785714285715 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/ReLU-op71 0.32776857142857135 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Mul-op72 0.33151499999999995 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/Cast-op73 0.2518214285714286 535
Default/TransData-op271 0.14980214285714283 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/MatMul-op74 0.45218500000000006 535
Default/TransData-op240 0.09184714285714284 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/RealDiv-op76 0.10391071428571431 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/BiasAdd-op77 0.11015571428571427 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/ReLU-op78 0.10085142857142855 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Mul-op79 0.10943071428571426 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/Cast-op80 0.04727285714285715 535
Default/TransData-op274 0.03735642857142857 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/MatMul-op81 0.09832214285714284 535
Default/TransData-op245 0.037176428571428576 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/RealDiv-op83 0.036798571428571424 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/BiasAdd-op84 0.04016857142857143 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/ReLU-op85 0.027936428571428574 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Mul-op86 0.039065 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/Cast-op87 0.02587642857142857 535
Default/TransData-op277 0.01939 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/MatMul-op88 0.03152 535
Default/TransData-op250 0.020935000000000002 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/RealDiv-op90 0.025487142857142854 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/BiasAdd-op91 0.021720714285714288 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/ReLU-op92 0.016717857142857142 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Mul-op93 0.021017857142857147 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/Cast-op94 0.014929999999999999 535
Default/TransData-op280 0.012425714285714285 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/MatMul-op95 0.013189999999999997 535
Default/TransData-op255 0.014586428571428571 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/RealDiv-op97 0.015751428571428572 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/BiasAdd-op98 0.013145 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/ReLU-op99 0.010007857142857143 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Mul-op100 0.01205 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/Cast-op101 0.009261428571428571 535
Default/TransData-op215 0.009404285714285714 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/MatMul-op102 0.007625 535
Default/TransData-op204 0.016274285714285713 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/RealDiv-op104 0.004828571428571428 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/BiasAdd-op105 0.004472857142857142 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op106 0.003925714285714286 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/SigmoidCrossEntropyWithLogits-op107 0.004808571428571428 535
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSigmoidCrossEntropyWithLogits/SigmoidCrossEntropyWithLogitsGrad-op109 0.004950714285714286 535
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSigmoidCrossEntropyWithLogits/SigmoidCrossEntropyWithLogitsGrad-op108 0.004631428571428572 535
Default/AtomicAddrClean-op425 0.0015150000000000003 535
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/ReduceMean-op110 0.004534999999999999 535
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradBiasAdd/BiasAddGrad-op112 0.0030614285714285717 535
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradRealDiv/RealDiv-op113 0.004547142857142856 535
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradRealDiv/ReduceSum-op111 0.0031428571428571426 535
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradCast/Cast-op115 0.0026614285714285715 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradReduceMean/Tile-op114 0.027466428571428576 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSquare/Mul-op116 0.03313571428571428 537
Default/TransData-op257 0.06620642857142857 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradMul/Mul-op117 0.010132142857142855 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/gradSquare/Mul-op121 0.020947142857142855 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMatMul/MatMul-op119 0.009299285714285715 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMatMul/MatMul-op120 0.009546428571428572 537
Default/AtomicAddrClean-op427 0.002937142857142857 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradGatherV2/UnsortedSegmentSum-op123 7.355592142857143 537
Default/TransData-op235 0.014415714285714283 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradMul/Mul-op128 0.012212857142857145 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_5-DenseLayer/gradReLU/ReluGrad-op131 0.02228428571428571 537
Default/AtomicAddrClean-op428 0.001404285714285714 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradBiasAdd/BiasAddGrad-op134 0.008783571428571429 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradRealDiv/RealDiv-op135 0.013412857142857143 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradCast/Cast-op136 0.008197142857142856 537
Default/TransData-op252 0.008572857142857142 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMatMul/MatMul-op138 0.029589285714285724 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMatMul/MatMul-op139 0.016685 537
Default/TransData-op233 0.020412142857142858 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradMul/Mul-op143 0.020592142857142857 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_4-DenseLayer/gradReLU/ReluGrad-op145 0.03936785714285714 537
Default/AtomicAddrClean-op429 0.0014571428571428572 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradBiasAdd/BiasAddGrad-op147 0.012325714285714285 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradRealDiv/RealDiv-op148 0.021508571428571432 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradCast/Cast-op149 0.012591428571428571 537
Default/TransData-op247 0.012454999999999997 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMatMul/MatMul-op151 0.053485000000000005 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMatMul/MatMul-op152 0.03651357142857143 537
Default/TransData-op231 0.03276571428571429 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradMul/Mul-op156 0.037129999999999996 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_3-DenseLayer/gradReLU/ReluGrad-op158 0.09050499999999999 537
Default/AtomicAddrClean-op430 0.001497142857142857 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradBiasAdd/BiasAddGrad-op160 0.017480714285714283 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradRealDiv/RealDiv-op161 0.042566428571428575 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradCast/Cast-op162 0.02489785714285714 537
Default/TransData-op242 0.019189285714285714 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMatMul/MatMul-op164 0.10608857142857142 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMatMul/MatMul-op165 0.1160064285714286 537
Default/TransData-op229 0.09212928571428572 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradMul/Mul-op169 0.10092642857142857 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_2-DenseLayer/gradReLU/ReluGrad-op171 0.18744071428571424 537
Default/AtomicAddrClean-op431 0.0014599999999999997 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradBiasAdd/BiasAddGrad-op173 0.030029999999999998 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradRealDiv/RealDiv-op174 0.13704571428571427 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradCast/Cast-op175 0.04649285714285715 537
Default/TransData-op237 0.03681785714285714 537
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMatMul/MatMul-op177 0.42144428571428577 537
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/RealDiv-op118 0.001617857142857143 539
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/TensorAdd-op124 0.0014600000000000001 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMatMul/MatMul-op178 0.5351814285714286 539
Default/TransData-op284 0.32571142857142854 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradMul/Mul-op182 0.3179142857142857 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/dense_layer_1-DenseLayer/gradReLU/ReluGrad-op184 0.5144707142857143 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradMul/Mul-op186 0.3859778571428571 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradStridedSlice/StridedSliceGrad-op187 1.3543971428571429 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op188 1.5460985714285713 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op189 1.5340514285714286 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op190 1.540242857142857 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op191 1.5514735714285715 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op192 1.5607435714285713 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op193 1.5385385714285713 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op194 1.537682857142857 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradConcat/Slice-op195 1.5342942857142856 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradSplit/Concat-op196 2.584179285714286 539
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/grad_MirrorOperator/Mul-op130 0.005715714285714287 541
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/grad_MirrorOperator/Mul-op129 0.0015964285714285716 541
Default/Mul-op183 0.0016557142857142858 541
Default/Mul-op170 0.0015957142857142856 541
Default/Mul-op157 0.0015314285714285714 541
Default/Mul-op144 0.0014735714285714285 541
Default/Mul-op122 0.0012207142857142855 541
Default/TransData-op206 0.02777642857142857 541
Default/TransData-op208 0.008395714285714286 541
Default/TransData-op210 0.006270714285714287 541
Default/TransData-op212 0.003332857142857143 541
Default/TransData-op214 0.0024235714285714286 541
Default/Mul-op197 0.016677857142857147 541
Default/Mul-op176 0.007605000000000001 541
Default/Mul-op163 0.0062528571428571425 541
Default/Mul-op150 0.004635 541
Default/Mul-op137 0.0016078571428571429 541
Default/AtomicAddrClean-op434 0.007719285714285714 541
Gradients/Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/network-WideDeepModel/gradGatherV2/UnsortedSegmentSum-op199 37.25223428571428 541
Default/network-TrainStepWrap/network-VirtualDatasetCellTriple/_backbone-NetWithLossClass/AddN-op200 0.012836428571428572 541
Default/network-TrainStepWrap/optimizer_d-Adam/Mul-op201 0.010897142857142855 541
Default/network-TrainStepWrap/optimizer_w-FTRL/ApplyFtrl-op133 0.02319642857142857 541
Default/network-TrainStepWrap/optimizer_w-FTRL/ApplyFtrl-op132 0.0022571428571428573 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op185 0.003688571428571429 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op172 0.003175714285714285 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op159 0.0029478571428571436 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op146 0.0028899999999999998 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op127 0.0022257142857142853 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op198 0.133745 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op179 0.03321571428571428 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op166 0.010665 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op153 0.006292857142857143 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op140 0.002818571428571428 541
Default/network-TrainStepWrap/optimizer_d-Adam/Adam-op202 0.08427071428571428 541
total op 126.43631757142849 0
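A small sketch of how a table in this format could be parsed and sanity-checked; it assumes the data rows are exactly three whitespace-separated columns (op names contain no spaces) and that the trailing "total op" value is the sum of the per-op times:

def parse_op_compute_time(text):
    """Parse an 'op compute time' table into (op_name, time_ms, stream_id) rows."""
    rows, total = [], None
    for line in text.splitlines():
        if line.startswith("total op"):
            total = float(line.split()[2])
            continue
        parts = line.split()
        if len(parts) != 3:
            continue
        try:                        # skips the header and '---' separator lines
            rows.append((parts[0], float(parts[1]), int(parts[2])))
        except ValueError:
            continue
    return rows, total

with open("output_op_compute_time_1.txt") as f:    # hypothetical local copy
    ops, total = parse_op_compute_time(f.read())
print(abs(sum(t for _, t, _ in ops) - total))      # ~0 if the assumption holds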

tests/utils/resource/run_1/normal_run/profiler/output_op_compute_time_detail_1.txt  (+705, -0)

@@ -0,0 +1,705 @@
====================op compute time====================
optype_name compute_time(ms, per-step) called_times(per-step) percent
--------------------------------- ---------------------------- ------------------------ ---------
UnsortedSegmentSum 44.6078 2 35.28
GatherV2 43.1554 2 34.13
Slice 20.3763 16 16.12
Concat 5.80845 4 4.59
Split 2.71428 2 2.15
MatMul 1.93668 15 1.53
Mul 1.90295 32 1.51
StridedSliceGrad 1.50683 2 1.19
TransData 1.11516 30 0.88
ReluGrad 0.854069 5 0.68
Cast 0.484685 15 0.38
ReLU 0.483282 5 0.38
RealDiv 0.422807 15 0.33
StridedSlice 0.345569 2 0.27
Adam 0.285936 11 0.23
BiasAdd 0.189663 5 0.15
BiasAddGrad 0.071681 5 0.06
Tile 0.044158 4 0.03
ReduceSum 0.030765 5 0.02
ApplyFtrl 0.025454 2 0.02
AtomicAddrClean 0.019369 8 0.02
AddN 0.012836 1 0.01
Square 0.009799 1 0.01
SigmoidCrossEntropyWithLogitsGrad 0.009582 2 0.01
TensorAdd 0.009218 3 0.01
SigmoidCrossEntropyWithLogits 0.004809 1 0
ReduceMean 0.004535 1 0
Assign 0.002477 2 0
AssignAdd 0.001688 1 0

Detail:
op_name op_type avg_execution_time subgraph
--------------------------------------- --------------------------------- -------------------- ----------
UnsortedSegmentSum-op199 UnsortedSegmentSum 37.2522 Gradients
UnsortedSegmentSum-op123 UnsortedSegmentSum 7.35559 Gradients
GatherV2-op55 GatherV2 42.2202 Default
GatherV2-op33 GatherV2 0.935229 Default
Slice-op192 Slice 1.56074 Gradients
Slice-op191 Slice 1.55147 Gradients
Slice-op188 Slice 1.5461 Gradients
Slice-op190 Slice 1.54024 Gradients
Slice-op193 Slice 1.53854 Gradients
Slice-op194 Slice 1.53768 Gradients
Slice-op195 Slice 1.53429 Gradients
Slice-op189 Slice 1.53405 Gradients
Slice-op64 Slice 1.12941 Gradients
Slice-op59 Slice 1.06433 Gradients
Slice-op65 Slice 1.00912 Gradients
Slice-op62 Slice 1.00751 Gradients
Slice-op60 Slice 0.967576 Gradients
Slice-op61 Slice 0.967544 Gradients
Slice-op58 Slice 0.962666 Gradients
Slice-op63 Slice 0.92504 Gradients
Concat-op68 Concat 3.08483 Default
Concat-op196 Concat 2.58418 Gradients
Concat-op40 Concat 0.0884167 Default
Concat-op66 Concat 0.0510307 Gradients
Split-op67 Split 2.61707 Default
Split-op39 Split 0.0972053 Default
MatMul-op178 MatMul 0.535181 Gradients
MatMul-op74 MatMul 0.452185 Default
MatMul-op177 MatMul 0.421444 Gradients
MatMul-op165 MatMul 0.116006 Gradients
MatMul-op164 MatMul 0.106089 Gradients
MatMul-op81 MatMul 0.0983221 Default
MatMul-op151 MatMul 0.053485 Gradients
MatMul-op152 MatMul 0.0365136 Gradients
MatMul-op88 MatMul 0.03152 Default
MatMul-op138 MatMul 0.0295893 Gradients
MatMul-op139 MatMul 0.016685 Gradients
MatMul-op95 MatMul 0.01319 Default
MatMul-op120 MatMul 0.00954643 Gradients
MatMul-op119 MatMul 0.00929929 Gradients
MatMul-op102 MatMul 0.007625 Default
Mul-op186 Mul 0.385978 Gradients
Mul-op70 Mul 0.374378 Default
Mul-op72 Mul 0.331515 Default
Mul-op182 Mul 0.317914 Gradients
Mul-op79 Mul 0.109431 Default
Mul-op169 Mul 0.100926 Gradients
Mul-op86 Mul 0.039065 Default
Mul-op156 Mul 0.03713 Gradients
Mul-op116 Mul 0.0331357 Gradients
Mul-op93 Mul 0.0210179 Default
Mul-op121 Mul 0.0209471 Gradients
Mul-op143 Mul 0.0205921 Gradients
Mul-op197 Mul 0.0166779 Default
Mul-op128 Mul 0.0122129 Gradients
Mul-op100 Mul 0.01205 Default
Mul-op201 Mul 0.0108971 Default
Mul-op35 Mul 0.0102227 Default
Mul-op117 Mul 0.0101321 Gradients
Mul-op176 Mul 0.007605 Default
Mul-op163 Mul 0.00625286 Default
Mul-op130 Mul 0.00571571 Gradients
Mul-op150 Mul 0.004635 Default
Mul-op183 Mul 0.00165571 Default
Mul-op137 Mul 0.00160786 Default
Mul-op129 Mul 0.00159643 Gradients
Mul-op170 Mul 0.00159571 Default
Mul-op157 Mul 0.00153143 Default
Mul-op144 Mul 0.00147357 Default
Mul-op28 Mul 0.001468 Default
Mul-op122 Mul 0.00122071 Default
Mul-op29 Mul 0.001202 Default
Mul-op31 Mul 0.001166 Default
StridedSliceGrad-op187 StridedSliceGrad 1.3544 Gradients
StridedSliceGrad-op57 StridedSliceGrad 0.152437 Gradients
TransData-op284 TransData 0.325711 Default
TransData-op271 TransData 0.149802 Default
TransData-op229 TransData 0.0921293 Default
TransData-op240 TransData 0.0918471 Default
TransData-op257 TransData 0.0662064 Default
TransData-op274 TransData 0.0373564 Default
TransData-op245 TransData 0.0371764 Default
TransData-op237 TransData 0.0368179 Default
TransData-op231 TransData 0.0327657 Default
TransData-op206 TransData 0.0277764 Default
TransData-op250 TransData 0.020935 Default
TransData-op233 TransData 0.0204121 Default
TransData-op277 TransData 0.01939 Default
TransData-op242 TransData 0.0191893 Default
TransData-op204 TransData 0.0162743 Default
TransData-op255 TransData 0.0145864 Default
TransData-op235 TransData 0.0144157 Default
TransData-op247 TransData 0.012455 Default
TransData-op280 TransData 0.0124257 Default
TransData-op272 TransData 0.00974933 Default
TransData-op215 TransData 0.00940429 Default
TransData-op252 TransData 0.00857286 Default
TransData-op208 TransData 0.00839571 Default
TransData-op216 TransData 0.00669733 Default
TransData-op210 TransData 0.00627071 Default
TransData-op275 TransData 0.00546133 Default
TransData-op278 TransData 0.00440333 Default
TransData-op212 TransData 0.00333286 Default
TransData-op281 TransData 0.00277333 Default
TransData-op214 TransData 0.00242357 Default
ReluGrad-op184 ReluGrad 0.514471 Gradients
ReluGrad-op171 ReluGrad 0.187441 Gradients
ReluGrad-op158 ReluGrad 0.090505 Gradients
ReluGrad-op145 ReluGrad 0.0393679 Gradients
ReluGrad-op131 ReluGrad 0.0222843 Gradients
Cast-op73 Cast 0.251821 Default
Cast-op80 Cast 0.0472729 Default
Cast-op175 Cast 0.0464929 Gradients
Cast-op87 Cast 0.0258764 Default
Cast-op162 Cast 0.0248979 Gradients
Cast-op52 Cast 0.023116 Default
Cast-op94 Cast 0.01493 Default
Cast-op149 Cast 0.0125914 Gradients
Cast-op101 Cast 0.00926143 Default
Cast-op136 Cast 0.00819714 Gradients
Cast-op45 Cast 0.00711 Default
Cast-op46 Cast 0.0046 Default
Cast-op47 Cast 0.00445933 Default
Cast-op115 Cast 0.00266143 Gradients
Cast-op34 Cast 0.00139667 Default
ReLU-op71 ReLU 0.327769 Default
ReLU-op78 ReLU 0.100851 Default
ReLU-op85 ReLU 0.0279364 Default
ReLU-op92 ReLU 0.0167179 Default
ReLU-op99 ReLU 0.0100079 Default
RealDiv-op174 RealDiv 0.137046 Gradients
RealDiv-op76 RealDiv 0.103911 Default
RealDiv-op161 RealDiv 0.0425664 Gradients
RealDiv-op83 RealDiv 0.0367986 Default
RealDiv-op90 RealDiv 0.0254871 Default
RealDiv-op148 RealDiv 0.0215086 Gradients
RealDiv-op97 RealDiv 0.0157514 Default
RealDiv-op135 RealDiv 0.0134129 Gradients
RealDiv-op104 RealDiv 0.00482857 Default
RealDiv-op54 RealDiv 0.00482467 Gradients
RealDiv-op50 RealDiv 0.00455643 Gradients
RealDiv-op113 RealDiv 0.00454714 Gradients
RealDiv-op51 RealDiv 0.00451643 Gradients
RealDiv-op118 RealDiv 0.00161786 Default
RealDiv-op44 RealDiv 0.00143467 Default
StridedSlice-op69 StridedSlice 0.333141 Default
StridedSlice-op41 StridedSlice 0.0124273 Default
Adam-op198 Adam 0.133745 Default
Adam-op202 Adam 0.0842707 Default
Adam-op179 Adam 0.0332157 Default
Adam-op166 Adam 0.010665 Default
Adam-op153 Adam 0.00629286 Default
Adam-op185 Adam 0.00368857 Default
Adam-op172 Adam 0.00317571 Default
Adam-op159 Adam 0.00294786 Default
Adam-op146 Adam 0.00289 Default
Adam-op140 Adam 0.00281857 Default
Adam-op127 Adam 0.00222571 Default
BiasAdd-op77 BiasAdd 0.110156 Default
BiasAdd-op84 BiasAdd 0.0401686 Default
BiasAdd-op91 BiasAdd 0.0217207 Default
BiasAdd-op98 BiasAdd 0.013145 Default
BiasAdd-op105 BiasAdd 0.00447286 Default
BiasAddGrad-op173 BiasAddGrad 0.03003 Gradients
BiasAddGrad-op160 BiasAddGrad 0.0174807 Gradients
BiasAddGrad-op147 BiasAddGrad 0.0123257 Gradients
BiasAddGrad-op134 BiasAddGrad 0.00878357 Gradients
BiasAddGrad-op112 BiasAddGrad 0.00306143 Gradients
Tile-op114 Tile 0.0274664 Gradients
Tile-op56 Tile 0.00871357 Gradients
Tile-op53 Tile 0.00424333 Gradients
Tile-op49 Tile 0.003735 Gradients
ReduceSum-op36 ReduceSum 0.0150733 Default
ReduceSum-op42 ReduceSum 0.00983267 Default
ReduceSum-op111 ReduceSum 0.00314286 Gradients
ReduceSum-op48 ReduceSum 0.00140067 Gradients
ReduceSum-op43 ReduceSum 0.00131533 Gradients
ApplyFtrl-op133 ApplyFtrl 0.0231964 Default
ApplyFtrl-op132 ApplyFtrl 0.00225714 Default
AtomicAddrClean-op434 AtomicAddrClean 0.00771929 Default
AtomicAddrClean-op427 AtomicAddrClean 0.00293714 Default
AtomicAddrClean-op425 AtomicAddrClean 0.001515 Default
AtomicAddrClean-op430 AtomicAddrClean 0.00149714 Default
AtomicAddrClean-op431 AtomicAddrClean 0.00146 Default
AtomicAddrClean-op429 AtomicAddrClean 0.00145714 Default
AtomicAddrClean-op428 AtomicAddrClean 0.00140429 Default
AtomicAddrClean-op418 AtomicAddrClean 0.00137867 Default
AddN-op200 AddN 0.0128364 Default
Square-op38 Square 0.00979933 Default
SigmoidCrossEntropyWithLogitsGrad-op109 SigmoidCrossEntropyWithLogitsGrad 0.00495071 Gradients
SigmoidCrossEntropyWithLogitsGrad-op108 SigmoidCrossEntropyWithLogitsGrad 0.00463143 Gradients
TensorAdd-op106 TensorAdd 0.00392571 Default
TensorAdd-op37 TensorAdd 0.00383267 Default
TensorAdd-op124 TensorAdd 0.00146 Default
SigmoidCrossEntropyWithLogits-op107 SigmoidCrossEntropyWithLogits 0.00480857 Default
ReduceMean-op110 ReduceMean 0.004535 Default
Assign-op30 Assign 0.00136067 Default
Assign-op32 Assign 0.001116 Default
AssignAdd-op414 AssignAdd 0.001688 Default
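The summary at the top of this file can be reproduced from the detail rows: each op type's compute_time is the sum of the avg_execution_time of its ops, and percent is that sum over the overall total from output_op_compute_time_1.txt (for example UnsortedSegmentSum: 37.2522 + 7.35559 ≈ 44.6078 ms, and 44.6078 / 126.436 ≈ 35.28%). A minimal sketch under that assumption:

from collections import defaultdict

total_ms = 126.43631757142849   # 'total op' row in output_op_compute_time_1.txt

# A few detail rows copied from the table above.
detail = [
    ("UnsortedSegmentSum-op199", "UnsortedSegmentSum", 37.2522),
    ("UnsortedSegmentSum-op123", "UnsortedSegmentSum", 7.35559),
    ("GatherV2-op55", "GatherV2", 42.2202),
    ("GatherV2-op33", "GatherV2", 0.935229),
]

by_type = defaultdict(float)
for _name, op_type, ms in detail:
    by_type[op_type] += ms

for op_type, ms in sorted(by_type.items(), key=lambda kv: -kv[1]):
    print(f"{op_type:20s} {ms:8.4f} ms  {100 * ms / total_ms:5.2f}%")
# UnsortedSegmentSum    44.6078 ms  35.28%
# GatherV2              43.1554 ms  34.13%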
(The op-type summary and per-op detail tables above repeat twice more, verbatim, making up the remainder of the 705 added lines.)
BiasAddGrad-op112 BiasAddGrad 0.00306143 Gradients
Tile-op114 Tile 0.0274664 Gradients
Tile-op56 Tile 0.00871357 Gradients
Tile-op53 Tile 0.00424333 Gradients
Tile-op49 Tile 0.003735 Gradients
ReduceSum-op36 ReduceSum 0.0150733 Default
ReduceSum-op42 ReduceSum 0.00983267 Default
ReduceSum-op111 ReduceSum 0.00314286 Gradients
ReduceSum-op48 ReduceSum 0.00140067 Gradients
ReduceSum-op43 ReduceSum 0.00131533 Gradients
ApplyFtrl-op133 ApplyFtrl 0.0231964 Default
ApplyFtrl-op132 ApplyFtrl 0.00225714 Default
AtomicAddrClean-op434 AtomicAddrClean 0.00771929 Default
AtomicAddrClean-op427 AtomicAddrClean 0.00293714 Default
AtomicAddrClean-op425 AtomicAddrClean 0.001515 Default
AtomicAddrClean-op430 AtomicAddrClean 0.00149714 Default
AtomicAddrClean-op431 AtomicAddrClean 0.00146 Default
AtomicAddrClean-op429 AtomicAddrClean 0.00145714 Default
AtomicAddrClean-op428 AtomicAddrClean 0.00140429 Default
AtomicAddrClean-op418 AtomicAddrClean 0.00137867 Default
AddN-op200 AddN 0.0128364 Default
Square-op38 Square 0.00979933 Default
SigmoidCrossEntropyWithLogitsGrad-op109 SigmoidCrossEntropyWithLogitsGrad 0.00495071 Gradients
SigmoidCrossEntropyWithLogitsGrad-op108 SigmoidCrossEntropyWithLogitsGrad 0.00463143 Gradients
TensorAdd-op106 TensorAdd 0.00392571 Default
TensorAdd-op37 TensorAdd 0.00383267 Default
TensorAdd-op124 TensorAdd 0.00146 Default
SigmoidCrossEntropyWithLogits-op107 SigmoidCrossEntropyWithLogits 0.00480857 Default
ReduceMean-op110 ReduceMean 0.004535 Default
Assign-op30 Assign 0.00136067 Default
Assign-op32 Assign 0.001116 Default
AssignAdd-op414 AssignAdd 0.001688 Default
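Because the summary and detail tables above are plain whitespace-separated text, the per-operator rows can be re-aggregated with a small script, for example to cross-check the per-type totals. Below is a minimal sketch, assuming the profiler output has been saved to a local file named op_summary.txt; the file name, and the assumption that every data row has exactly four whitespace-separated fields, are illustrative and not part of any profiler API.

# Minimal sketch: parse the "Detail:" table and re-aggregate time per op type.
# Assumption: the text above is saved as op_summary.txt (hypothetical name).
from collections import defaultdict

def parse_detail_table(path):
    """Return (op_name, op_type, avg_execution_time, subgraph) rows."""
    rows = []
    in_detail = False
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line == "Detail:":
                in_detail = True        # everything before "Detail:" is the summary
                continue
            if not in_detail or not line or line.startswith(("op_name", "---")):
                continue                # skip header, rule line, and blanks
            parts = line.split()
            if len(parts) != 4:
                continue                # ignore anything that is not a data row
            op_name, op_type, avg_time, subgraph = parts
            rows.append((op_name, op_type, float(avg_time), subgraph))
    return rows

if __name__ == "__main__":
    rows = parse_detail_table("op_summary.txt")
    totals = defaultdict(float)
    for _, op_type, avg_time, _ in rows:
        totals[op_type] += avg_time
    # Print op types by descending total time, mirroring the summary table's order.
    for op_type, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{op_type:<35} {total:.6f}")

Summing the detail rows this way should reproduce the per-type execution-time column of the summary table (e.g. the five ReLU rows total roughly 0.483, matching the ReLU summary line), which makes the script a quick sanity check on exported profiler text.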
