# Using Joint Inference Service in Helmet Detection Scenario

This case introduces how to use a joint inference service in a helmet detection scenario.
In the safety helmet detection scenario, helmet detection shows lower performance due to the limited resources at the edge.
However, the joint inference service can improve the overall performance: hard examples identified by the hard example mining algorithm are uploaded to the cloud and inferred there.
The data used in the experiment is a video of workers wearing safety helmets.
The joint inference service needs to detect whether safety helmets are worn in the video.
## Helmet Detection Experiment

### Install Sedna

Follow the [Sedna installation document](/docs/setup/install.md) to install Sedna.
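Before continuing, it is worth checking that the Sedna control-plane components came up; assuming the default installation namespace `sedna`:

```
# GlobalManager (cloud) and LocalController (edge) pods should be Running
kubectl get pods -n sedna
```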
### Prepare Data and Model

* step1: download the [little model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz) to your edge node.

```
mkdir -p /data/little-model
cd /data/little-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz
tar -zxvf little-model.tar.gz
```

* step2: download the [big model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz) to your cloud node.

```
mkdir -p /data/big-model
cd /data/big-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz
tar -zxvf big-model.tar.gz
```
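After extraction, each node should contain the frozen graph referenced by the Model objects created below; a quick sanity check, assuming the archives unpack directly into the target directories:

```
# On the edge node: expect yolov3_resnet18.pb
ls -lh /data/little-model

# On the cloud node: expect yolov3_darknet.pb
ls -lh /data/big-model
```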
### Prepare Images

This example uses these images:

1. little model inference worker: ```kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0```
2. big model inference worker: ```kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0```

These images are generated by the script [build_image.sh](/examples/build_image.sh).
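The images are pulled automatically when the workers start, but pre-pulling them avoids a slow first start; assuming Docker is the container runtime on both nodes:

```
# On the edge node
docker pull kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0

# On the cloud node
docker pull kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0
```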
### Create Joint Inference Service

#### Create Big Model Resource Object for Cloud

```
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model
  namespace: default
spec:
  url: "/data/big-model/yolov3_darknet.pb"
  format: "pb"
EOF
```
#### Create Little Model Resource Object for Edge

```
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-little-model
  namespace: default
spec:
  url: "/data/little-model/yolov3_resnet18.pb"
  format: "pb"
EOF
```
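Both Model objects should now be visible through the Sedna CRD:

```
# Expect both the big and little models in the default namespace
kubectl get models.sedna.io -n default
```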
#### Create JointInferenceService

Note the setting of the following parameters, which must be consistent with the script [little_model.py](/examples/joint_inference/helmet_detection_inference/little_model/little_model.py):

- hardExampleMining: set the hard example mining algorithm, chosen from {IBT, CrossEntropy}, for inference on the edge side.
- video_url: set the url of the video stream.
- all_examples_inference_output: set the output path for all inference results.
- hard_example_edge_inference_output: set the output path for edge-side inference results on hard examples.
- hard_example_cloud_inference_output: set the output path for cloud-side inference results on hard examples.
Prepare the output directory on the edge node:

```
mkdir -p /joint_inference/output
```
Create the joint inference service:
```
CLOUD_NODE="cloud-node-name"
EDGE_NODE="edge-node-name"

kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: JointInferenceService
metadata:
  name: helmet-detection-inference-example
  namespace: default
spec:
  edgeWorker:
    model:
      name: "helmet-detection-inference-little-model"
    hardExampleMining:
      name: "IBT"
      parameters:
        - key: "threshold_img"
          value: "0.9"
        - key: "threshold_box"
          value: "0.9"
    template:
      spec:
        nodeName: $EDGE_NODE
        dnsPolicy: ClusterFirstWithHostNet
        containers:
          - image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0
            imagePullPolicy: IfNotPresent
            name: little-model
            env:  # user defined environments
              - name: input_shape
                value: "416,736"
              - name: "video_url"
                value: "rtsp://localhost/video"
              - name: "all_examples_inference_output"
                value: "/data/output"
              - name: "hard_example_cloud_inference_output"
                value: "/data/hard_example_cloud_inference_output"
              - name: "hard_example_edge_inference_output"
                value: "/data/hard_example_edge_inference_output"
            resources:  # user defined resources
              requests:
                memory: 64M
                cpu: 100m
              limits:
                memory: 2Gi
            volumeMounts:
              - name: outputdir
                mountPath: /data/
        volumes:  # user defined volumes
          - name: outputdir
            hostPath:
              # user must create the directory in host
              path: /joint_inference/output
              type: Directory
  cloudWorker:
    model:
      name: "helmet-detection-inference-big-model"
    template:
      spec:
        nodeName: $CLOUD_NODE
        dnsPolicy: ClusterFirstWithHostNet
        containers:
          - image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0
            name: big-model
            imagePullPolicy: IfNotPresent
            env:  # user defined environments
              - name: "input_shape"
                value: "544,544"
            resources:  # user defined resources
              requests:
                memory: 2Gi
EOF
```
### Check Joint Inference Status

```
kubectl get jointinferenceservices.sedna.io
```
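For more detail than the list view, the standard kubectl inspection commands apply:

```
# Detailed status and events for the service
kubectl describe jointinferenceservices.sedna.io helmet-detection-inference-example

# The edge and cloud workers run as ordinary pods; check they are Running
kubectl get pods -o wide
```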
### Mock Video Stream for Inference in Edge Side

* step1: install the open source video streaming server [EasyDarwin](https://github.com/EasyDarwin/EasyDarwin/tree/dev).
* step2: start the EasyDarwin server.
* step3: download the [video](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz).
* step4: push the video stream to a url (e.g., `rtsp://localhost/video`) that the inference service can connect to.

```
wget https://github.com/EasyDarwin/EasyDarwin/releases/download/v8.1.0/EasyDarwin-linux-8.1.0-1901141151.tar.gz
tar -zxvf EasyDarwin-linux-8.1.0-1901141151.tar.gz
cd EasyDarwin-linux-8.1.0-1901141151
./start.sh

mkdir -p /data/video
cd /data/video
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz
tar -zxvf video.tar.gz

ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video
```
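The sample video is finite, so the stream stops when ffmpeg reaches the end of the file. If you want the stream to keep running while you observe the workers, ffmpeg's `-stream_loop` input option can repeat it; this is a convenience beyond the original example:

```
# Loop the input indefinitely (-1) so the RTSP stream keeps running
ffmpeg -re -stream_loop -1 -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video
```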
### Check Inference Result

You can check the inference results in the output path (e.g. `/joint_inference/output`) defined in the JointInferenceService config.

* the result of edge inference vs the result of joint inference

![](images/inference-result.png)
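A quick way to confirm that results are being written on the edge node: since the workers' `/data/` mount maps to the hostPath `/joint_inference/output`, the container output paths configured above should appear as subdirectories there (an assumption worth verifying for your setup):

```
# Newest files first; subdirectory names follow the env values
# set in the JointInferenceService (e.g. /data/output -> output)
ls -lt /joint_inference/output/output | head
ls -lt /joint_inference/output/hard_example_cloud_inference_output | head
```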