# Using Joint Inference Service in Helmet Detection Scenario

This case introduces how to use the joint inference service in a helmet detection scenario.
In the safety helmet detection scenario, helmet detection shows lower performance due to limited resources on the edge.
The joint inference service can improve overall performance: hard examples identified by the hard example mining algorithm are uploaded to the cloud and inferred there.
The data used in the experiment is a video of workers wearing safety helmets.
The joint inference service detects whether safety helmets are worn in the video.
## Helmet Detection Experiment

### Install Sedna

Follow the [Sedna installation document](/docs/setup/install.md) to install Sedna.
### Prepare Data and Model

* step1: download the [little model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz) to your edge node.

```
mkdir -p /data/little-model
cd /data/little-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz
tar -zxvf little-model.tar.gz
```

* step2: download the [big model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz) to your cloud node.

```
mkdir -p /data/big-model
cd /data/big-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz
tar -zxvf big-model.tar.gz
```
### Prepare Images

This example uses these images:

1. little model inference worker: `kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0`
2. big model inference worker: `kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0`

These images are generated by the script [build_image.sh](/examples/build_image.sh).
### Create Joint Inference Service

#### Create Big Model Resource Object for Cloud

```
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model
  namespace: default
spec:
  url: "/data/big-model/yolov3_darknet.pb"
  format: "pb"
EOF
```
#### Create Little Model Resource Object for Edge

```
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-little-model
  namespace: default
spec:
  url: "/data/little-model/yolov3_resnet18.pb"
  format: "pb"
EOF
```
#### Create JointInferenceService

Note the setting of the following parameters, which must be the same as in the script [little_model.py](/examples/joint_inference/helmet_detection_inference/little_model/little_model.py):

- hardExampleMining: set the hard example mining algorithm, chosen from {IBT, CrossEntropy}, for inference on the edge side.
- video_url: set the url of the video stream.
- all_examples_inference_output: set the output path for all inference results.
- hard_example_edge_inference_output: set the output path for edge-side inference results on hard examples.
- hard_example_cloud_inference_output: set the output path for cloud-side inference results on hard examples.
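To give an intuition for the `threshold_img` and `threshold_box` parameters used below, here is a rough Python sketch of an IBT-style filter. This is only an illustration, not the actual Sedna implementation; the box format `[x0, y0, x1, y1, score, label]` and the exact comparison rule are assumptions.

```python
def ibt_is_hard_example(boxes, threshold_img=0.9, threshold_box=0.9):
    """Sketch of an IBT-style hard example filter (assumed logic).

    boxes: detections as [x0, y0, x1, y1, score, label].
    The image is treated as a hard example when the fraction of
    low-confidence boxes (score <= threshold_box) reaches
    1 - threshold_img.
    """
    if not boxes:
        return True  # nothing detected on the edge: defer to the cloud
    low_conf = [b for b in boxes if b[4] <= threshold_box]
    return len(low_conf) / len(boxes) >= (1 - threshold_img)
```

Under this reading, raising `threshold_box` makes more boxes count as low-confidence, and raising `threshold_img` lowers the fraction of low-confidence boxes needed to send the frame to the cloud.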
Prepare the output directory on the edge node:

```
mkdir -p /joint_inference/output
```
Create the joint inference service:

```
CLOUD_NODE="cloud-node-name"
EDGE_NODE="edge-node-name"

kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: JointInferenceService
metadata:
  name: helmet-detection-inference-example
  namespace: default
spec:
  edgeWorker:
    model:
      name: "helmet-detection-inference-little-model"
    hardExampleMining:
      name: "IBT"
      parameters:
        - key: "threshold_img"
          value: "0.9"
        - key: "threshold_box"
          value: "0.9"
    template:
      spec:
        nodeName: $EDGE_NODE
        containers:
        - image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0
          imagePullPolicy: IfNotPresent
          name: little-model
          env:  # user defined environments
          - name: input_shape
            value: "416,736"
          - name: "video_url"
            value: "rtsp://localhost/video"
          - name: "all_examples_inference_output"
            value: "/data/output"
          - name: "hard_example_cloud_inference_output"
            value: "/data/hard_example_cloud_inference_output"
          - name: "hard_example_edge_inference_output"
            value: "/data/hard_example_edge_inference_output"
          resources:  # user defined resources
            requests:
              memory: 64M
              cpu: 100m
            limits:
              memory: 2Gi
          volumeMounts:
          - name: outputdir
            mountPath: /data/
        volumes:  # user defined volumes
        - name: outputdir
          hostPath:
            # user must create the directory in host
            path: /joint_inference/output
            type: Directory
  cloudWorker:
    model:
      name: "helmet-detection-inference-big-model"
    template:
      spec:
        nodeName: $CLOUD_NODE
        containers:
        - image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0
          name: big-model
          imagePullPolicy: IfNotPresent
          env:  # user defined environments
          - name: "input_shape"
            value: "544,544"
          resources:  # user defined resources
            requests:
              memory: 2Gi
EOF
```
### Check Joint Inference Status

```
kubectl get jointinferenceservices.sedna.io
```
### Mock Video Stream for Inference in Edge Side

* step1: install the open source video streaming server [EasyDarwin](https://github.com/EasyDarwin/EasyDarwin/tree/dev).
* step2: start the EasyDarwin server.
* step3: download the [video](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz).
* step4: push a video stream to the url (e.g., `rtsp://localhost/video`) that the inference service can connect to.

```
wget https://github.com/EasyDarwin/EasyDarwin/releases/download/v8.1.0/EasyDarwin-linux-8.1.0-1901141151.tar.gz
tar -zxvf EasyDarwin-linux-8.1.0-1901141151.tar.gz
cd EasyDarwin-linux-8.1.0-1901141151
./start.sh

mkdir -p /data/video
cd /data/video
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz
tar -zxvf video.tar.gz

ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video
```
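Note that the `ffmpeg` command above exits once the sample video finishes, after which the edge worker receives no more frames. One way to keep the stream alive is to replay the input indefinitely with `-stream_loop -1`; a small Python sketch (the paths and RTSP url are the ones assumed in the steps above):

```python
import subprocess

def build_push_command(video="/data/video/video.mp4",
                       url="rtsp://localhost/video"):
    # -stream_loop -1 makes ffmpeg replay the input file indefinitely,
    # so the RTSP stream does not stop when the video ends.
    return ["ffmpeg", "-re", "-stream_loop", "-1", "-i", video,
            "-vcodec", "libx264", "-f", "rtsp", url]

if __name__ == "__main__":
    subprocess.run(build_push_command(), check=True)
```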
### Check Inference Result

You can check the inference results in the output path (e.g., `/joint_inference/output`) defined in the JointInferenceService config.

* the result of edge inference vs the result of joint inference

![](images/inference-result.png)
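To watch results arrive without browsing the node manually, you could poll the output directory for the most recently written files. A minimal sketch; the layout under `/joint_inference/output` is an assumption, so adjust the path to whatever you configured:

```python
import glob
import os

def latest_results(output_dir, count=5):
    """Return the `count` most recently modified files under output_dir."""
    files = glob.glob(os.path.join(output_dir, "**", "*"), recursive=True)
    files = [f for f in files if os.path.isfile(f)]
    files.sort(key=os.path.getmtime)
    return files[-count:]
```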