

# Using Joint Inference Service in Helmet Detection Scenario on S3

This example is based on the example: [Using Joint Inference Service in Helmet Detection Scenario](/examples/joint_inference/helmet_detection_inference/README.md).

### Prepare Nodes

Assume you have created a [KubeEdge](https://github.com/kubeedge/kubeedge) cluster that has one cloud node (e.g., `cloud-node`)
and one edge node (e.g., `edge-node`).
### Create a Secret with Your S3 User Credentials

```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  annotations:
    s3-endpoint: play.min.io # replace with your s3 endpoint
    s3-usehttps: "1" # by default 1, if testing with minio you can set to 0
stringData:
  ACCESS_KEY_ID: Q3AM3UQ867SPQQA43P2F
  SECRET_ACCESS_KEY: zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
EOF
```
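Note that `stringData` accepts plain-text values, which Kubernetes base64-encodes for you on creation. If you instead populate the `data:` field of a Secret, the values must be base64-encoded first, for example:

```shell
# Equivalent base64 encoding for use in a `data:` field instead of `stringData:`
echo -n 'Q3AM3UQ867SPQQA43P2F' | base64
# → UTNBTTNVUTg2N1NQUVFBNDNQMkY=
```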
### Prepare Model

* Download the [little model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz)
and the [big model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz).
* Put the unzipped model files into the bucket of your cloud storage service.
* Attach the created secret to the Models and create the Models.

```shell
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: big-model
spec:
  url: "s3://kubeedge/model/big-model/yolov3_darknet.pb"
  format: "pb"
  credentialName: mysecret
EOF
```

```shell
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: little-model
spec:
  url: "s3://kubeedge/model/little-model/yolov3_resnet18.pb"
  format: "pb"
  credentialName: mysecret
EOF
```
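The `url` field packs the bucket name and the object key into a single `s3://` URI, which is resolved against the endpoint from the attached secret. A shell sketch of how such a URI splits (illustrative only; the variable names are not part of any Sedna API):

```shell
# Split an s3:// URI into bucket and object key (illustrative)
url="s3://kubeedge/model/big-model/yolov3_darknet.pb"
path="${url#s3://}"     # strip the scheme
bucket="${path%%/*}"    # first path segment is the bucket
key="${path#*/}"        # the rest is the object key
echo "bucket=$bucket key=$key"
# → bucket=kubeedge key=model/big-model/yolov3_darknet.pb
```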
### Prepare Images

This example uses these images:

1. little model inference worker: `kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0`
2. big model inference worker: `kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0`

These images are generated by the script [build_image.sh](/examples/build_image.sh).
### Prepare Job

* Create the output directory on the edge node:

```shell
mkdir -p /joint_inference/output
```

* Attach the created secret to the Job and create the Job.
```shell
LITTLE_MODEL_IMAGE=kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0
BIG_MODEL_IMAGE=kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0

kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: JointInferenceService
metadata:
  name: helmet-detection-inference-example
  namespace: default
spec:
  edgeWorker:
    model:
      name: "little-model"
    hardExampleMining:
      name: "IBT"
      parameters:
        - key: "threshold_img"
          value: "0.9"
        - key: "threshold_box"
          value: "0.9"
    template:
      spec:
        nodeName: edge-node
        containers:
          - image: $LITTLE_MODEL_IMAGE
            imagePullPolicy: IfNotPresent
            name: little-model
            env: # user defined environments
              - name: input_shape
                value: "416,736"
              - name: "video_url"
                value: "rtsp://localhost/video"
              - name: "all_examples_inference_output"
                value: "/data/output"
              - name: "hard_example_cloud_inference_output"
                value: "/data/hard_example_cloud_inference_output"
              - name: "hard_example_edge_inference_output"
                value: "/data/hard_example_edge_inference_output"
            resources: # user defined resources
              requests:
                memory: 64M
                cpu: 100m
              limits:
                memory: 2Gi
            volumeMounts:
              - name: outputdir
                mountPath: /data/
        volumes: # user defined volumes
          - name: outputdir
            hostPath:
              # user must create the directory in host
              path: /joint_inference/output
              type: DirectoryOrCreate
  cloudWorker:
    model:
      name: "big-model"
    template:
      spec:
        nodeName: cloud-node
        containers:
          - image: $BIG_MODEL_IMAGE
            name: big-model
            imagePullPolicy: IfNotPresent
            env: # user defined environments
              - name: "input_shape"
                value: "544,544"
            resources: # user defined resources
              requests:
                memory: 2Gi
EOF
```
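The worker containers write their results to paths under `/data`, which the `outputdir` hostPath volume maps onto `/joint_inference/output` on the edge node. A small sketch of that mapping (the file name below is hypothetical):

```shell
# Map a container output path to its location on the edge node's host
mount_path=/data
host_path=/joint_inference/output
container_file=/data/output/frame-001.jpg   # hypothetical output file
host_file="$host_path${container_file#"$mount_path"}"
echo "$host_file"
# → /joint_inference/output/output/frame-001.jpg
```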
### Mock Video Stream for Inference on the Edge Side

Refer to [here](https://github.com/kubeedge/sedna/tree/main/examples/joint_inference/helmet_detection_inference#mock-video-stream-for-inference-in-edge-side).