# Quick Start
The following shows how to run a joint inference job with Sedna.
#### 0. Check the Environment
The Sedna all-in-one installation requires:
- 1 VM **(one machine is OK, a cluster is not required)**
- 2 CPUs or more
- 2 GB+ free memory, depending on the node number setting
- 10 GB+ free disk space
- Internet connection (Docker Hub, GitHub, etc.)
- Linux platform, such as Ubuntu/CentOS
- Docker 17.06+
You can check the Docker version with the following command:
```bash
docker -v
```
If the output looks like the following, your Docker version meets the requirement:
```
Docker version 19.03.6, build 369ce74a3c
```
#### 1. Deploy Sedna
Sedna provides three deployment methods; choose one according to your actual situation:
- [Install Sedna AllinOne](../setup/all-in-one.md) (used for development; this guide uses it).
- [Install Sedna local up](../setup/local-up.md).
- [Install Sedna on a cluster](../setup/install.md).
The [all-in-one script](/scripts/installation/all-in-one.sh) installs Sedna along with a minimal local Kubernetes environment, including:
- A Kubernetes v1.21 cluster with multiple worker nodes (zero worker nodes by default).
- KubeEdge with multiple edge nodes (the latest KubeEdge and one edge node by default).
- Sedna (the latest version by default).
```bash
curl https://raw.githubusercontent.com/kubeedge/sedna/master/scripts/installation/all-in-one.sh | NUM_EDGE_NODES=1 bash -
```
You then get two nodes, `sedna-mini-control-plane` and `sedna-mini-edge0`. You can get into each node with the following commands:
```bash
# get into the cloud node
docker exec -it sedna-mini-control-plane bash
```
```bash
# get into the edge node
docker exec -it sedna-mini-edge0 bash
```
#### 2. Prepare Data and Model Files
* step 1: download the [little model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz) to your edge node.
```bash
mkdir -p /data/little-model
cd /data/little-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz
tar -zxvf little-model.tar.gz
```
* step 2: download the [big model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz) to your cloud node.
```bash
mkdir -p /data/big-model
cd /data/big-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz
tar -zxvf big-model.tar.gz
```
#### 3. Create the Big Model Resource Object for the Cloud
On the cloud node:
```bash
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model
  namespace: default
spec:
  url: "/data/big-model/yolov3_darknet.pb"
  format: "pb"
EOF
```
#### 4. Create the Little Model Resource Object for the Edge
On the cloud node:
```bash
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-little-model
  namespace: default
spec:
  url: "/data/little-model/yolov3_resnet18.pb"
  format: "pb"
EOF
```
#### 5. Create the JointInferenceService
Note the settings of the following parameters, which must match the script [little_model.py](/examples/joint_inference/helmet_detection_inference/little_model/little_model.py):
- hardExampleMining: set the hard example mining algorithm ({IBT, CrossEntropy}) used for inference on the edge side.
- video_url: set the URL of the video stream.
- all_examples_inference_output: set the output path for all inference results.
- hard_example_edge_inference_output: set the output path for edge-side inference results on hard examples.
- hard_example_cloud_inference_output: set the output path for cloud-side inference results on hard examples.
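To illustrate where these parameters live, here is a hypothetical, trimmed sketch of a JointInferenceService spec. The exact schema is defined by the Sedna CRD; the field layout below (notably `edgeWorker.hardExampleMining` and the worker pod template) is an assumption for illustration, and the authoritative definition is the example YAML referenced in this step:

```yaml
apiVersion: sedna.io/v1alpha1
kind: JointInferenceService
metadata:
  name: helmet-detection-inference-example   # hypothetical name
spec:
  edgeWorker:
    hardExampleMining:
      name: "IBT"                            # or "CrossEntropy"
    template:
      spec:
        containers:
          - name: little-model
            env:
              - name: video_url
                value: "rtsp://localhost/video"
              - name: all_examples_inference_output
                value: "/joint_inference/output"
              - name: hard_example_edge_inference_output
                value: "/joint_inference/output"
```

Whatever paths you set here must exist on the corresponding node, which is why the next step creates the output directory on the edge node first.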
Make preparations on the edge node:
```bash
mkdir -p /joint_inference/output
```
Create the joint inference service:
```bash
CLOUD_NODE="sedna-mini-control-plane"
EDGE_NODE="sedna-mini-edge0"
kubectl create -f https://raw.githubusercontent.com/jaypume/sedna/main/examples/joint_inference/helmet_detection_inference/helmet_detection_inference.yaml
```
#### 6. Check the Joint Inference Status
```
kubectl get jointinferenceservices.sedna.io
```
#### 7. Mock a Video Stream for Inference on the Edge Side
* step 1: install the open-source video streaming server [EasyDarwin](https://github.com/EasyDarwin/EasyDarwin/tree/dev).
* step 2: start the EasyDarwin server.
* step 3: download the [video](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz).
* step 4: push a video stream to a URL (e.g., `rtsp://localhost/video`) that the inference service can connect to.
```bash
wget https://github.com/EasyDarwin/EasyDarwin/releases/download/v8.1.0/EasyDarwin-linux-8.1.0-1901141151.tar.gz
tar -zxvf EasyDarwin-linux-8.1.0-1901141151.tar.gz
cd EasyDarwin-linux-8.1.0-1901141151
./start.sh
mkdir -p /data/video
cd /data/video
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz
tar -zxvf video.tar.gz
ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video
```
### Check the Inference Result
You can check the inference results in the output path (e.g. `/joint_inference/output`) defined in the JointInferenceService config.
* the result of edge inference vs. the result of joint inference
![](../../examples/joint_inference/helmet_detection_inference/images/inference-result.png)
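For a quick command-line check, you can list the newest files in the output directory on the edge node (assuming the default output path created in step 5):

```shell
# List the most recent inference result files on the edge node
ls -lt /joint_inference/output | head -n 10
```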
## API
- control-plane: Please refer to this [link](api/crd).
- Lib: Please refer to this [link](api/lib).
## Contributing
Contributions are very welcome!
- control-plane: Please refer to this [link](contributing/control-plane/development.md).
- Lib: Please refer to this [link](contributing/lib/development.md).
## Community
Sedna is an open source project and, in the spirit of openness and freedom, we welcome new contributors to join us.
You can get in touch with the community in the following ways:
* [GitHub Issues](https://github.com/kubeedge/sedna/issues)
* [Regular Community Meeting](https://zoom.us/j/4167237304)
* [Slack channel](https://kubeedge.io/docs/community/slack/)