Signed-off-by: JimmyYang20 <yangjin39@huawei.com> (tags/v0.4.0)
Example: [Using Federated Learning Job in Surface Defect Detection Scenario](./f
### Shared Storage
| Support Protocols |Support Features| Examples |Release|
| :-------------: | :-------------: |:-------------: | :-------------: |
| S3 |Incremental Learning| [Using Incremental Learning Job in Helmet Detection Scenario on S3](storage/s3/incremental_learning/README.md) | 0.3.1+|
| S3 |Joint Inference| [Using Joint Inference Service in Helmet Detection Scenario on S3](storage/s3/joint_inference/README.md) | 0.3.0+|
# Using Incremental Learning Job in Helmet Detection Scenario on S3
This example is based on the example: [Using Incremental Learning Job in Helmet Detection Scenario](/examples/incremental_learning/helmet_detection/README.md).
### Prepare Nodes
Assume you have created a [KubeEdge](https://github.com/kubeedge/kubeedge) cluster that has two cloud nodes (e.g., `cloud-node1`, `cloud-node2`)
and one edge node (e.g., `edge-node`).
### Create a secret with your S3 user credential.
```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  annotations:
    s3-endpoint: s3.amazonaws.com # replace with your s3 endpoint e.g minio-service.kubeflow:9000
    s3-usehttps: "1" # by default 1, if testing with minio you can set to 0
stringData: # use `stringData` for raw credential string or `data` for base64 encoded string
  ACCESS_KEY_ID: XXXX
  SECRET_ACCESS_KEY: XXXXXXXX
EOF
```
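The Secret above uses `stringData`, which accepts raw strings. If you prefer the `data` field instead, each value must be base64 encoded first; a minimal sketch (the `XXXX` placeholders stand in for your real credentials):

```shell
# Encode placeholder credentials for the `data` form of the Secret.
# `printf` (not `echo`) avoids a trailing newline sneaking into the value.
ACCESS_KEY_ID_B64=$(printf 'XXXX' | base64)
SECRET_ACCESS_KEY_B64=$(printf 'XXXXXXXX' | base64)
echo "ACCESS_KEY_ID: ${ACCESS_KEY_ID_B64}"
echo "SECRET_ACCESS_KEY: ${SECRET_ACCESS_KEY_B64}"
```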
### Prepare Model
* Download [models](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/model.tar.gz).
* Put the unzipped model file into the bucket of your cloud storage service.
* Attach the created secret to the Model and create Model.
```shell
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
EOF
```
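The "put the unzipped model file into the bucket" step above can be sketched with the AWS CLI (an assumption — any S3-compatible client works; the bucket name and key prefix follow the `s3://kubeedge/model/...` URLs used in this example):

```shell
# Compose the destination URL from the bucket and key prefix used in this example.
BUCKET=kubeedge
PREFIX=model/deploy_model
DEST="s3://${BUCKET}/${PREFIX}/"
echo "$DEST"
# With the AWS CLI installed and configured for your endpoint, the upload would be:
#   aws s3 cp --recursive ./model/ "$DEST"
```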
```shell
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
spec:
  url: "s3://kubeedge/model/deploy_model/saved_model.pb"
  format: "pb"
  credentialName: mysecret
EOF
```
### Prepare Dataset
* Download [dataset](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/dataset.tar.gz).
* Put the unzipped dataset file into the bucket of your cloud storage service.
* Attach the created secret to the Dataset and create Dataset.
```shell
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Dataset
metadata:
spec:
  url: "s3://kubeedge/data/helmet_detection/train_data/train_data.txt"
  format: "txt"
  nodeName: cloud-node1
  credentialName: mysecret
EOF
```
### Prepare Image
This example uses the image:
```shell
kubeedge/sedna-example-incremental-learning-helmet-detection:v0.3.1
```
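Pulling the image onto each node ahead of time avoids a cold start when the workers are scheduled; a sketch, assuming Docker is the container runtime on your nodes:

```shell
IMAGE=kubeedge/sedna-example-incremental-learning-helmet-detection:v0.3.1
# Run on every node that will host a worker (Docker assumed as the runtime):
PULL_CMD="docker pull ${IMAGE}"
echo "$PULL_CMD"
```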
This image is generated by the script [build_image.sh](/examples/build_image.sh) and is used for the training, eval and inference workers.
### Prepare Job
* The Inference/Train/Eval workers can now be deployed on multiple nodes via `nodeName` or [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector).
* Make sure the following local directory exists on the edge node:
```shell
mkdir -p /incremental_learning/video/
```
* Download [video](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/video.tar.gz), unzip video.tar.gz, and put it into `/incremental_learning/video/`.
```shell
cd /incremental_learning/video/
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/video.tar.gz
tar -zxvf video.tar.gz
```
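If you want to rehearse the unpack step before the real download finishes, the same `tar` invocation can be exercised on a throwaway archive (every file name below is a placeholder, not the real archive contents):

```shell
# Build a dummy video.tar.gz, then extract it with the same command as above.
workdir=$(mktemp -d)
cd "$workdir"
mkdir video
echo demo > video/clip.mp4          # placeholder file, not the real video
tar -zcf video.tar.gz video
rm -r video
tar -zxvf video.tar.gz              # same flags as the step above
cat video/clip.mp4
```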
* Attach the created secret to the Job and create Job.
```shell
IMAGE=kubeedge/sedna-example-incremental-learning-helmet-detection:v0.3.1
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: IncrementalLearningJob
spec:
  trainSpec:
    template:
      spec:
        nodeName: cloud-node1
        containers:
          - image: $IMAGE
            name: train-worker
            imagePullPolicy: IfNotPresent
            args: ["train.py"]
  evalSpec:
    template:
      spec:
        nodeName: cloud-node2
        containers:
          - image: $IMAGE
            name: eval-worker
            imagePullPolicy: IfNotPresent
            args: ["eval.py"]
            value: "0.9"
    template:
      spec:
        nodeName: edge-node
        containers:
          - image: $IMAGE
            name: infer-worker
            imagePullPolicy: IfNotPresent
            args: ["inference.py"]
          - name: localvideo
            hostPath:
              path: /incremental_learning/video/
              type: DirectoryOrCreate
          - name: hedir
            hostPath:
              path: /incremental_learning/he/
              type: DirectoryOrCreate
  credentialName: mysecret
  outputDir: "s3://kubeedge/incremental_learning/output"
EOF
```
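Once the job runs, its artifacts land under the S3 `outputDir` declared above rather than on a node-local path; with an S3 client (the AWS CLI here is an assumption) you could inspect them:

```shell
# The outputDir from the Job spec above.
OUTPUT_DIR="s3://kubeedge/incremental_learning/output"
echo "$OUTPUT_DIR"
# With credentials matching the `mysecret` Secret, a listing would look like:
#   aws s3 ls "${OUTPUT_DIR}/" --endpoint-url <your-s3-endpoint>
```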
# Using Joint Inference Service in Helmet Detection Scenario on S3
This example is based on the example: [Using Joint Inference Service in Helmet Detection Scenario](/examples/joint_inference/helmet_detection_inference/README.md).
### Prepare Nodes
Assume you have created a [KubeEdge](https://github.com/kubeedge/kubeedge) cluster that has one cloud node (e.g., `cloud-node`)
and one edge node (e.g., `edge-node`).
### Create a secret with your S3 user credential.
```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  annotations:
    s3-endpoint: s3.amazonaws.com # replace with your s3 endpoint e.g minio-service.kubeflow:9000
    s3-usehttps: "1" # by default 1, if testing with minio you can set to 0
stringData: # use `stringData` for raw credential string or `data` for base64 encoded string
  ACCESS_KEY_ID: XXXX
  SECRET_ACCESS_KEY: XXXXXXXX
EOF
```
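As the comments in the Secret note, pointing Sedna at a non-AWS backend only requires changing the two annotations. For example, a MinIO service reachable inside the cluster over plain HTTP (the service name and port are assumptions taken from the comment above) would use:

```shell
# Annotation values for an in-cluster MinIO backend (assumed names).
S3_ENDPOINT="minio-service.kubeflow:9000"
S3_USEHTTPS="0"   # MinIO test deployments often serve plain HTTP
echo "s3-endpoint: ${S3_ENDPOINT}"
echo "s3-usehttps: \"${S3_USEHTTPS}\""
```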
### Prepare Model
* Download [little model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz)
and [big model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz).
* Put the unzipped model files into the bucket of your cloud storage service.
* Attach the created secret to the Model and create Model.
```shell
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: big-model
spec:
  url: "s3://kubeedge/model/big-model/yolov3_darknet.pb"
  format: "pb"
  credentialName: mysecret
EOF
```
```shell
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: little-model
spec:
  url: "s3://kubeedge/model/little-model/yolov3_resnet18.pb"
  format: "pb"
  credentialName: mysecret
EOF
```
### Prepare Images
This example uses these images:
1. little model inference worker: `kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0`
2. big model inference worker: `kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0`

These images are generated by the script [build_image.sh](/examples/build_image.sh).
### Prepare Job
* Make preparations on the edge node:
```shell
mkdir -p /joint_inference/output
```
* Attach the created secret to the Job and create Job.
```shell
LITTLE_MODEL_IMAGE=kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0
BIG_MODEL_IMAGE=kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: JointInferenceService
metadata:
  name: helmet-detection-inference-example
  namespace: default
spec:
  edgeWorker:
    model:
      name: "little-model"
    hardExampleMining:
      name: "IBT"
      parameters:
        - key: "threshold_img"
          value: "0.9"
        - key: "threshold_box"
          value: "0.9"
    template:
      spec:
        nodeName: edge-node
        containers:
          - image: $LITTLE_MODEL_IMAGE
            imagePullPolicy: IfNotPresent
            name: little-model
            env: # user defined environments
              - name: input_shape
                value: "416,736"
              - name: "video_url"
                value: "rtsp://localhost/video"
              - name: "all_examples_inference_output"
                value: "/data/output"
              - name: "hard_example_cloud_inference_output"
                value: "/data/hard_example_cloud_inference_output"
              - name: "hard_example_edge_inference_output"
                value: "/data/hard_example_edge_inference_output"
            resources: # user defined resources
              requests:
                memory: 64M
                cpu: 100m
              limits:
                memory: 2Gi
            volumeMounts:
              - name: outputdir
                mountPath: /data/
        volumes: # user defined volumes
          - name: outputdir
            hostPath:
              # user must create the directory in host
              path: /joint_inference/output
              type: DirectoryOrCreate
  cloudWorker:
    model:
      name: "big-model"
    template:
      spec:
        nodeName: cloud-node
        containers:
          - image: $BIG_MODEL_IMAGE
            name: big-model
            imagePullPolicy: IfNotPresent
            env: # user defined environments
              - name: "input_shape"
                value: "544,544"
            resources: # user defined resources
              requests:
                memory: 2Gi
EOF
```
### Mock Video Stream for Inference in Edge Side
Refer to [here](https://github.com/kubeedge/sedna/tree/main/examples/joint_inference/helmet_detection_inference#mock-video-stream-for-inference-in-edge-side).