# Using Incremental Learning Job in Helmet Detection Scenario on S3
This example is based on the example: [Using Incremental Learning Job in Helmet Detection Scenario](/examples/incremental_learning/helmet_detection/README.md).
### Prepare Nodes
Assume you have created a [KubeEdge](https://github.com/kubeedge/kubeedge) cluster that has two cloud nodes (e.g., `cloud-node1`, `cloud-node2`)
and one edge node (e.g., `edge-node`).
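You can confirm that all nodes have joined the cluster and are `Ready` before continuing; the node names are just the example placeholders above:

```shell
# List the cluster nodes with their roles and status
kubectl get nodes -o wide
```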
### Create a secret with your S3 user credential.
```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  annotations:
    s3-endpoint: s3.amazonaws.com # replace with your s3 endpoint e.g minio-service.kubeflow:9000
    s3-usehttps: "1" # by default 1, if testing with minio you can set to 0
stringData: # use `stringData` for raw credential string or `data` for base64 encoded string
  ACCESS_KEY_ID: XXXX
  SECRET_ACCESS_KEY: XXXXXXXX
EOF
```
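Optionally, verify that the secret exists and carries the S3 annotations before wiring it into the Sedna resources:

```shell
# Inspect the created secret; credential values are shown base64-encoded
kubectl get secret mysecret -o yaml
```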
### Attach the created secret to the Model/Dataset/Job.
`EDGE_NODE` and `CLOUD_NODE` are custom nodes; fill in the node names you actually run the workers on.
* Attach the created secret to the Job (IncrementalLearningJob) by referencing it from the job spec, as sketched below.
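The complete job manifest is in the base example; the fragment below is only a minimal sketch of where the secret reference goes. It assumes the IncrementalLearningJob spec (and likewise the Dataset/Model specs) accepts a `credentialName` field next to `outputDir`; the bucket path and resource name are placeholders:

```yaml
apiVersion: sedna.io/v1alpha1
kind: IncrementalLearningJob
metadata:
  name: helmet-detection-demo          # placeholder; use the job name from the base example
spec:
  # ... dataset / initialModel / trainSpec / evalSpec / deploySpec as in the base example,
  #     with nodeName set to $EDGE_NODE or $CLOUD_NODE for each worker ...
  outputDir: "s3://kubeedge/incremental_learning/output"  # placeholder S3 bucket path
  credentialName: mysecret             # assumed field: the secret created above
```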
The worker image for this example is generated by the script [build_image.sh](/examples/build_image.sh); it is used for creating the training, eval and inference workers.
### Prepare Job
* Inference/Train/Eval workers can now be deployed by `nodeName` and [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) on multiple nodes.
* Make sure the local directory below exists on the edge side:
```shell
mkdir -p /incremental_learning/video/
```
* Download [video](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/video.tar.gz), extract video.tar.gz, and put the video into `/incremental_learning/video/`, for example as sketched below.
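A possible way to do this on the edge node (assuming `wget` and `tar` are available; the URL and directory are the ones given above):

```shell
cd /incremental_learning/video/
# Download the sample helmet-detection video archive
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/video.tar.gz
# Extract the video into the directory created above
tar -zxvf video.tar.gz
```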
# Using Joint Inference Service in Helmet Detection Scenario on S3
This example is based on the example: [Using Joint Inference Service in Helmet Detection Scenario](/examples/joint_inference/helmet_detection_inference/README.md).
### Prepare Nodes
Assume you have created a [KubeEdge](https://github.com/kubeedge/kubeedge) cluster that has one cloud node (e.g., `cloud-node`)
and one edge node (e.g., `edge-node`).
### Create a secret with your S3 user credential.
```shell
kubectl create -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  annotations:
    s3-endpoint: s3.amazonaws.com # replace with your s3 endpoint e.g minio-service.kubeflow:9000
    s3-usehttps: "1" # by default 1, if testing with minio you can set to 0
stringData: # use `stringData` for raw credential string or `data` for base64 encoded string
  ACCESS_KEY_ID: XXXX
  SECRET_ACCESS_KEY: XXXXXXXX
EOF
```
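As in the incremental learning example above, the secret is then referenced from the Sedna resources that read from S3. The fragment below is only a hedged sketch, assuming the Model spec accepts a `credentialName` field; the model name, format and bucket path are placeholders, and the full big/little model specs are in the base example:

```yaml
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model   # placeholder model name
spec:
  url: "s3://kubeedge/models/big/yolov3_darknet.pb"  # placeholder S3 bucket path
  format: "pb"
  credentialName: mysecret                     # assumed field: the secret created above
```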
### Mock Video Stream for Inference in Edge Side
Refer to [here](https://github.com/kubeedge/sedna/tree/main/examples/joint_inference/helmet_detection_inference#mock-video-stream-for-inference-in-edge-side).
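For convenience, a heavily hedged sketch of what that section boils down to: it assumes an RTSP server (e.g. EasyDarwin, as used in the base example) is already running on the edge node and that the sample video has been downloaded locally; the file path and stream URL below are placeholders:

```shell
# Push the downloaded sample video to the local RTSP server as a mock camera stream
ffmpeg -re -i /path/to/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video
```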