| @@ -1,5 +1,5 @@ | |||
| =========================================== | |||
| Sedna Python SDK | |||
| Python API Use Guide | |||
| =========================================== | |||
| .. mdinclude:: ../../../lib/sedna/README.md | |||
| @@ -58,7 +58,7 @@ with open(f'{_base_path}/lib/sedna/VERSION', "r", encoding="utf-8") as fh: | |||
| # -- Project information ----------------------------------------------------- | |||
| project = 'Sedna' | |||
| copyright = '2020, Kubeedge' | |||
| copyright = '2021, Kubeedge' | |||
| author = 'Kubeedge' | |||
| version = __version__ | |||
| @@ -9,11 +9,24 @@ Sedna is an edge-cloud synergy AI project incubated in KubeEdge SIG AI. Benefiti | |||
| Sedna can simply enable edge-cloud synergy capabilities to existing training and inference scripts, bringing the benefits of reducing costs, improving model performance, and protecting data privacy. | |||
| .. toctree:: | |||
| :maxdepth: 1 | |||
| :caption: QUICK START | |||
| :caption: GUIDE | |||
| quickstart | |||
| index/guide | |||
| index/quickstart | |||
| .. toctree:: | |||
| :maxdepth: 1 | |||
| :titlesonly: | |||
| :glob: | |||
| :caption: DEPLOY | |||
| Cluster Installation (for production) <setup/install> | |||
| AllinOne Installation (for development) <setup/all-in-one> | |||
| Standalone Installation (for hello world) <setup/local-up> | |||
| .. toctree:: | |||
| @@ -31,34 +44,28 @@ Sedna can simply enable edge-cloud synergy capabilities to existing training and | |||
| proposals/object-tracking | |||
| .. toctree:: | |||
| :maxdepth: 1 | |||
| :titlesonly: | |||
| :glob: | |||
| :caption: DEPLOY | |||
Installation <setup/install>
| Standalone <setup/local-up> | |||
| .. toctree:: | |||
| :maxdepth: 1 | |||
| :glob: | |||
| :caption: EXAMPLES | |||
| examples/federated_learning/surface_defect_detection/README | |||
| examples/incremental_learning/helmet_detection/README | |||
| examples/joint_inference/helmet_detection_inference/README | |||
| examples/incremental_learning/helmet_detection/README | |||
| examples/federated_learning/surface_defect_detection/README | |||
| examples/federated_learning/yolov5_coco128_mistnet/README | |||
| examples/lifelong_learning/atcii/README | |||
| examples/storage/s3/* | |||
| .. toctree:: | |||
| :maxdepth: 1 | |||
| :caption: API | |||
| :caption: API REFERENCE | |||
| :titlesonly: | |||
| :glob: | |||
| api/lib/* | |||
| Python API <autoapi/lib/sedna/index> | |||
| .. toctree:: | |||
| :maxdepth: 1 | |||
| @@ -69,26 +76,17 @@ Sedna can simply enable edge-cloud synergy capabilities to existing training and | |||
| Control Plane <contributing/prepare-environment> | |||
| .. toctree:: | |||
| :maxdepth: 1 | |||
| :caption: API REFERENCE | |||
| :titlesonly: | |||
| :glob: | |||
| autoapi/lib/sedna/index | |||
| .. toctree:: | |||
| :caption: ROADMAP | |||
| :hidden: | |||
| roadmap | |||
| index/roadmap | |||
| RELATED LINKS | |||
| ============= | |||
| .. mdinclude:: related_link.md | |||
| .. mdinclude:: index/related_link.md | |||
| Indices and tables | |||
| ================== | |||
| @@ -0,0 +1,24 @@ | |||
| ## Guide | |||
- If you are new to Sedna, you can try the commands step by step in the [quick start](./quickstart.md).
- If you have worked through the example above, you can find more [examples](/examples/README.md).
- If you want to know more about Sedna's architecture and components, you can find them at the [sedna home].
| - If you're looking to contribute documentation improvements, you'll specifically want to see the [kubernetes documentation style guide] before [filing an issue][file-an-issue]. | |||
| - If you're planning to contribute code changes, you'll want to read the [development preparation guide] next. | |||
| - If you're planning to add a new synergy feature directly, you'll want to read the [guide][add-feature-guide] next. | |||
| [proposals]: /docs/proposals | |||
| [development preparation guide]: ../contributing/prepare-environment.md | |||
| [add-feature-guide]: ../contributing/control-plane/add-a-new-synergy-feature.md | |||
| [sedna home]: https://github.com/kubeedge/sedna | |||
| [issues]: https://github.com/kubeedge/sedna/issues | |||
| [file-an-issue]: https://github.com/kubeedge/sedna/issues/new/choose | |||
| [file-a-fr]: https://github.com/kubeedge/sedna/issues/new?labels=kind%2Ffeature&template=enhancement.md | |||
| [github]: https://github.com/ | |||
| [kubernetes documentation style guide]: https://github.com/kubernetes/community/blob/master/contributors/guide/style-guide.md | |||
| [recommended Git workflow]: https://github.com/kubernetes/community/blob/master/contributors/guide/github-workflow.md#workflow | |||
| [pull request best practices]: https://github.com/kubernetes/community/blob/master/contributors/guide/pull-requests.md#best-practices-for-faster-reviews | |||
| [Kubernetes help wanted]: https://www.kubernetes.dev/docs/guide/help-wanted/ | |||
| @@ -0,0 +1,183 @@ | |||
| # Quick Start | |||
The following shows how to run a joint inference job with Sedna.
| ## Quick Start | |||
| #### 0. Check the Environment | |||
The Sedna all-in-one installation requires:
- 1 VM **(one machine is OK, a cluster is not required)**
- 2 CPUs or more
- 2GB+ free memory, depending on the number of nodes
- 10GB+ free disk space
- Internet connection (Docker Hub, GitHub, etc.)
- Linux platform, such as Ubuntu or CentOS
| - Docker 17.06+ | |||
You can check the Docker version with the following command:
| ```bash | |||
| docker -v | |||
| ``` | |||
If the output looks like the following, your Docker version fits the bill:
| ``` | |||
| Docker version 19.03.6, build 369ce74a3c | |||
| ``` | |||
| #### 1. Deploy Sedna | |||
Sedna provides three deployment methods; choose one according to your situation:
- [Install Sedna AllinOne](setup/all-in-one.md) (used for development; this guide uses it).
| - [Install Sedna local up](setup/local-up.md). | |||
| - [Install Sedna on a cluster](setup/install.md). | |||
| The [all-in-one script](/scripts/installation/all-in-one.sh) is used to install Sedna along with a mini Kubernetes environment locally, including: | |||
- A Kubernetes v1.21 cluster with multiple worker nodes (zero worker nodes by default).
- KubeEdge with multiple edge nodes (the latest KubeEdge and one edge node by default).
- Sedna (the latest version by default).
| ```bash | |||
| curl https://raw.githubusercontent.com/kubeedge/sedna/master/scripts/installation/all-in-one.sh | NUM_EDGE_NODES=1 bash - | |||
| ``` | |||
You then get two nodes, `sedna-mini-control-plane` and `sedna-mini-edge0`; you can get into each node with the following commands:
| ```bash | |||
| # get into cloud node | |||
| docker exec -it sedna-mini-control-plane bash | |||
| ``` | |||
| ```bash | |||
| # get into edge node | |||
| docker exec -it sedna-mini-edge0 bash | |||
| ``` | |||
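Optionally, once inside the cloud node, you can confirm that both nodes registered with the cluster (a quick sanity check with standard kubectl; the node names are the defaults created by the all-in-one script):
```bash
# run inside sedna-mini-control-plane, or wherever kubectl is configured for this cluster
kubectl get nodes -o wide
```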
#### 2. Prepare Data and Model File
| * step1: download [little model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz) to your edge node. | |||
| ``` | |||
| mkdir -p /data/little-model | |||
| cd /data/little-model | |||
| wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz | |||
| tar -zxvf little-model.tar.gz | |||
| ``` | |||
| * step2: download [big model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz) to your cloud node. | |||
| ``` | |||
| mkdir -p /data/big-model | |||
| cd /data/big-model | |||
| wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz | |||
| tar -zxvf big-model.tar.gz | |||
| ``` | |||
#### 3. Create Big Model Resource Object for Cloud
On the cloud node:
| ``` | |||
| kubectl create -f - <<EOF | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: Model | |||
| metadata: | |||
| name: helmet-detection-inference-big-model | |||
| namespace: default | |||
| spec: | |||
| url: "/data/big-model/yolov3_darknet.pb" | |||
| format: "pb" | |||
| EOF | |||
| ``` | |||
#### 4. Create Little Model Resource Object for Edge
On the cloud node:
| ``` | |||
| kubectl create -f - <<EOF | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: Model | |||
| metadata: | |||
| name: helmet-detection-inference-little-model | |||
| namespace: default | |||
| spec: | |||
| url: "/data/little-model/yolov3_resnet18.pb" | |||
| format: "pb" | |||
| EOF | |||
| ``` | |||
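Optionally verify that both Model resources now exist (run on the cloud node; standard kubectl listing of the Sedna custom resource):
```bash
kubectl get models.sedna.io
```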
#### 5. Create JointInferenceService
Note the settings of the following parameters, which must match the script [little_model.py](/examples/joint_inference/helmet_detection_inference/little_model/little_model.py) (a sketch of how to adjust them before creating the service follows this list):
- hardExampleMining: the hard example mining algorithm ({IBT, CrossEntropy}) used for inference on the edge side.
- video_url: the URL of the video stream.
- all_examples_inference_output: the output path for all inference results.
- hard_example_edge_inference_output: the output path for edge-side inference results on hard examples.
- hard_example_cloud_inference_output: the output path for cloud-side inference results on hard examples.
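If you need different values, one option is to download the example service definition, edit these fields, and create the service from the edited file instead of directly from the URL used in the step below (a sketch; the URL and file name come from that step):
```bash
# fetch the example definition, adjust the parameters listed above, then apply it
curl -O https://raw.githubusercontent.com/jaypume/sedna/main/examples/joint_inference/helmet_detection_inference/helmet_detection_inference.yaml
# edit env values such as video_url and the *_inference_output paths, plus hardExampleMining, as needed
kubectl create -f helmet_detection_inference.yaml
```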
Prepare the output directory on the edge node:
| ``` | |||
| mkdir -p /joint_inference/output | |||
| ``` | |||
| Create joint inference service | |||
| ``` | |||
| CLOUD_NODE="sedna-mini-control-plane" | |||
| EDGE_NODE="sedna-mini-edge0" | |||
| kubectl create -f https://raw.githubusercontent.com/jaypume/sedna/main/examples/joint_inference/helmet_detection_inference/helmet_detection_inference.yaml | |||
| ``` | |||
#### 6. Check Joint Inference Status
| ``` | |||
| kubectl get jointinferenceservices.sedna.io | |||
| ``` | |||
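For more detail, for example when a worker is not starting, standard kubectl inspection also works on the custom resource; the service name below is the one used in the example YAML:
```bash
kubectl describe jointinferenceservices.sedna.io helmet-detection-inference-example
kubectl get pods -o wide   # the edge and cloud inference workers should be Running
```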
#### 7. Mock Video Stream for Inference on the Edge Side
| * step1: install the open source video streaming server [EasyDarwin](https://github.com/EasyDarwin/EasyDarwin/tree/dev). | |||
* step2: start the EasyDarwin server.
* step3: download [video](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz).
* step4: push a video stream to the URL (e.g., `rtsp://localhost/video`) that the inference service can connect to.
| ``` | |||
| wget https://github.com/EasyDarwin/EasyDarwin/releases/download/v8.1.0/EasyDarwin-linux-8.1.0-1901141151.tar.gz | |||
| tar -zxvf EasyDarwin-linux-8.1.0-1901141151.tar.gz | |||
| cd EasyDarwin-linux-8.1.0-1901141151 | |||
| ./start.sh | |||
| mkdir -p /data/video | |||
| cd /data/video | |||
| wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz | |||
| tar -zxvf video.tar.gz | |||
| ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video | |||
| ``` | |||
| ### Check Inference Result | |||
| You can check the inference results in the output path (e.g. `/joint_inference/output`) defined in the JointInferenceService config. | |||
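For example, on the edge node (a minimal sketch assuming the default output path prepared earlier in this guide):
```bash
ls /joint_inference/output
```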
* The result of edge-only inference vs. the result of joint inference:
|  | |||
|  | |||
| ## API | |||
| - control-plane: Please refer to this [link](api/crd). | |||
| - Lib: Please refer to this [link](api/lib). | |||
| ## Contributing | |||
| Contributions are very welcome! | |||
| - control-plane: Please refer to this [link](contributing/control-plane/development.md). | |||
| - Lib: Please refer to this [link](contributing/lib/development.md). | |||
| ## Community | |||
Sedna is an open source project and, in the spirit of openness and freedom, we welcome new contributors to join us.
You can get in touch with the community through the following channels:
| * [Github Issues](https://github.com/kubeedge/sedna/issues) | |||
| * [Regular Community Meeting](https://zoom.us/j/4167237304) | |||
| * [slack channel](https://app.slack.com/client/TDZ5TGXQW/C01EG84REVB/details) | |||
| @@ -0,0 +1,12 @@ | |||
| ### Release | |||
[Sedna 0.4.0 released: federated learning with representation extraction reduces edge-side resource requirements](https://mp.weixin.qq.com/s/_m5q0t0yYY7gnfQUAssjFg)
[KubeEdge subproject Sedna 0.3.0 released, with support for edge-cloud collaborative lifelong learning!](https://mp.weixin.qq.com/s/kSFL_pf2BTyVvH5c9zv0Jg)
[Try the edge-cloud collaborative AI framework: KubeEdge subproject Sedna 0.1 released](https://mp.weixin.qq.com/s/3Ei8ynSAxnfuoIWYdb7Gpg)
[Accelerating edge-cloud AI collaboration: the KubeEdge community establishes the Sedna subproject](https://mp.weixin.qq.com/s/FX2DOsctS_Z7CKHndFByRw)
[What else can edge intelligence do? A tour with the KubeEdge AI SIG](https://mp.weixin.qq.com/s/t10_ZrZW42AZoYnisVAbpg)
| ### Meetup and Conference | |||
[HDC.Cloud 2021: edge-cloud synergy bridges the last mile of AI](https://xie.infoq.cn/article/b22e72afe8de50ca34269bb21)
[How KubeEdge Sedna improves edge AI model accuracy by 50%](https://www.huaweicloud.com/zhishi/hdc2021-Track-24-18.html)
| @@ -4,18 +4,12 @@ This document defines a high level roadmap for sedna development. | |||
| The [milestones defined in GitHub](https://github.com/kubeedge/sedna/milestones) represent the most up-to-date plans. | |||
| ## 2022 Roadmap | |||
| ## 2021 Q1 Roadmap | |||
| - Support edge model and dataset management. | |||
- Support incremental learning, with time triggers, sample-size triggers, and precision-based triggers, and integrate hard example discovery algorithms.
- Support collaborative training, integrating some common weight/gradient compression algorithms.
| ## Future | |||
| - Integrate some common multi-task migration algorithms to resolve the problem of low precision caused by small size samples. | |||
| - Integrate some common multi-task migration algorithms to resolve the problem of low precision caused by small size | |||
| samples. | |||
| - Integrate KubeFlow and ONNX into Sedna, to enable interoperability of edge models with diverse formats. | |||
| - Integrate typical AI frameworks into Sedna, include Tensorflow, Pytorch, PaddlePaddle and Mindspore etc. | |||
| - Integrate typical AI frameworks into Sedna, include Tensorflow, Pytorch, PaddlePaddle and Mindspore etc. | |||
| - Support edge model and dataset management. | |||
| @@ -1,481 +0,0 @@ | |||
| # Quick Start | |||
| ## Introduction | |||
Sedna provides some examples of running Sedna jobs [here](/examples/README.md).
Here is a general guide to quickly starting an incremental learning job.
| ### Get Sedna | |||
| You can find the latest Sedna release [here](https://github.com/kubeedge/sedna/releases). | |||
| ### Deploying Sedna | |||
Sedna provides two deployment methods; choose one according to your situation:
| - Install Sedna on a cluster Step By Step: [guide here](setup/install.md). | |||
| - Install Sedna AllinOne : [guide here](setup/local-up.md). | |||
| ### Component | |||
| Sedna consists of the following components: | |||
|  | |||
| #### GlobalManager | |||
| * Unified edge-cloud synergy AI task management | |||
| * Cross edge-cloud synergy management and collaboration | |||
| * Central Configuration Management | |||
| #### LocalController | |||
| * Local process control of edge-cloud synergy AI tasks | |||
| * Local general management: model, dataset, and status synchronization | |||
| #### Worker | |||
* Does inference or training, based on existing ML frameworks.
* Launched on demand; think of workers as Docker containers.
* Different workers for different features.
* Can run on the edge or in the cloud.
| #### Lib | |||
* Exposes the edge AI features to applications, i.e., training or inference programs.
| ### System Design | |||
There are three stages in an [incremental learning job](./proposals/incremental-learning.md): train/eval/deploy.
|  | |||
| ## Deployment Guide | |||
| ### 1. Prepare | |||
| #### 1.1 Deployment Planning | |||
In this example there is only one host with two nodes, which were created as a Kubernetes cluster with `kind`.
| | NAME | ROLES | Ip Address | Operating System | Host Configuration | Storage | Deployment Module | | |||
| | ----- | ------- | ----------------------------- | ----------------------- | ------------------ | ------- | ------------------------------------------------------------ | | |||
| | edge-node | agent,edge | 192.168.0.233 | Ubuntu 18.04.5 LTS | 8C16G | 500G | LC,lib, inference worker | | |||
| | sedna-control-plane | control-plane,master | 172.18.0.2 | Ubuntu 20.10 | 8C16G | 500G | GM,LC,lib,training worker,evaluate worker | | |||
| #### 1.2 Network Requirements | |||
In this example the node **sedna-control-plane** has an internal IP of `172.18.0.2`, and **edge-node** can access it.
| ### 2. Project Deployment | |||
#### 2.1 (Optional) Create a virtual environment
| ```bash | |||
| python3.6 -m venv venv | |||
| source venv/bin/activate | |||
| pip3 install -U pip | |||
| ``` | |||
#### 2.2 Install the Sedna SDK
| ```bash | |||
cd $SEDNA_ROOT/lib
| python3.6 setup.py bdist_wheel | |||
| pip3 install dist/sedna*.whl | |||
| ``` | |||
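A quick sanity check that the SDK is importable (assuming the virtual environment from 2.1, if any, is active):
```bash
python3 -c "import sedna; print(sedna.__file__)"
```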
| #### 2.3 Prepare your machine learning model and datasets | |||
##### 2.3.1 Encapsulate an Estimator
Sedna implements several pre-made Estimators in the [examples](/examples); you can find them in the Python scripts named `interface`.
Sedna supports Estimators built from popular AI frameworks such as TensorFlow, PyTorch, PaddlePaddle, and MindSpore. Custom Estimators can also be used, following our interface document.
All Estimators, whether pre-made or custom, are classes that should encapsulate the following actions:
| - Training | |||
| - Evaluation | |||
| - Prediction | |||
| - Export/load | |||
See [here](/lib/sedna/README.md) for more details; a [toy example](/examples/incremental_learning/helmet_detection/training/interface.py) looks like this:
| ```python | |||
import os

os.environ['BACKEND_TYPE'] = 'TENSORFLOW'
| class Estimator: | |||
| def __init__(self, **kwargs): | |||
| ... | |||
| def train(self, train_data, valid_data=None, **kwargs): | |||
| ... | |||
| def evaluate(self, data, **kwargs): | |||
| ... | |||
| def predict(self, data, **kwargs): | |||
| ... | |||
| def load(self, model_url, **kwargs): | |||
| ... | |||
| def save(self, model_path, **kwargs): | |||
| ... | |||
| def get_weights(self): | |||
| ... | |||
| def set_weights(self, weights): | |||
| ... | |||
| ``` | |||
##### 2.3.2 Prepare the dataset
In incremental learning jobs, the following files are indispensable:
- base model: a TensorFlow object-detection model to be fine-tuned from an existing checkpoint.
- deploy model: a TensorFlow object-detection model used for inference.
- train data: labeled images used to fine-tune the model.
- test data: a video stream used for model inference.
| ```bash | |||
| # download models, including base model and deploy model. | |||
| cd / | |||
| wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/models.tar.gz | |||
| tar -zxvf models.tar.gz | |||
| # download train data | |||
| cd /data/helmet_detection # notes: files here will be monitored and used to trigger the incremental training | |||
| wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/dataset.tar.gz | |||
| tar -zxvf dataset.tar.gz | |||
| # download test data | |||
| cd /incremental_learning/video/ | |||
| wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection/video.tar.gz | |||
| tar -zxvf video.tar.gz | |||
| ``` | |||
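After extraction, you can take a quick look at the training index file; the path below is the one referenced later by the `incremental-dataset` resource (assuming the archive extracts into `train_data/`):
```bash
head /data/helmet_detection/train_data/train_data.txt
```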
##### 2.3.3 Prepare the scripts
In incremental learning jobs, the following scripts are indispensable:
- train.py: script for model fine-tuning/training.
- eval.py: script for model evaluation.
- inference.py: script for data inference.
You can also find demos [here](/examples/incremental_learning/helmet_detection).
Some interfaces you should know about in the job pipeline:
- `BaseConfig` provides the capability of obtaining the config from environment variables:
| ```python | |||
| from sedna.common.config import BaseConfig | |||
| train_dataset_url = BaseConfig.train_dataset_url | |||
| model_url = BaseConfig.model_url | |||
| ``` | |||
- `Context` provides the capability of obtaining the context from the CRD:
| ```python | |||
| from sedna.common.config import Context | |||
| obj_threshold = Context.get_parameters("obj_threshold") | |||
| nms_threshold = Context.get_parameters("nms_threshold") | |||
| input_shape = Context.get_parameters("input_shape") | |||
| epochs = Context.get_parameters('epochs') | |||
| batch_size = Context.get_parameters('batch_size') | |||
| ``` | |||
- the `datasources` base class: since Sedna's core features need to identify the features and labels in the input data, the data passed as the first parameter to the ML framework's train/evaluate should be wrapped in a `BaseDataSource`:
| ```python | |||
| from sedna.datasources import BaseDataSource | |||
| train_data = BaseDataSource(data_type="train") | |||
| train_data.x = [] | |||
| train_data.y = [] | |||
| for item in mnist_ds.create_dict_iterator(): | |||
| train_data.x.append(item["image"].asnumpy()) | |||
| train_data.y.append(item["label"].asnumpy()) | |||
| ``` | |||
- `sedna.core` contains all edge-cloud features; please note that each feature has its own parameters.
- The **hard example mining** algorithm in IncrementalLearning is named `hard_example_mining`:
| ```python | |||
| from sedna.core.incremental_learning import IncrementalLearning | |||
| hard_example_mining = IncrementalLearning.get_hem_algorithm_from_config( | |||
| threshold_img=0.9 | |||
| ) | |||
# initialize an incremental learning instance
incremental_instance = IncrementalLearning(
    estimator=Estimator,
    hard_example_mining=hard_example_mining
)
| ) | |||
| # Call the interface according to the job state | |||
| # train.py | |||
| incremental_instance.train(train_data=train_data, epochs=epochs, | |||
| batch_size=batch_size, | |||
| class_names=class_names, | |||
| input_shape=input_shape, | |||
| obj_threshold=obj_threshold, | |||
| nms_threshold=nms_threshold) | |||
| # inference | |||
| results, _, is_hard_example = incremental_instance.inference( | |||
| data, input_shape=input_shape) | |||
| ``` | |||
| ### 3. Configuration | |||
#### 3.1 Prepare Image
| This example uses the image: | |||
| ``` | |||
| kubeedge/sedna-example-incremental-learning-helmet-detection:v0.4.0 | |||
| ``` | |||
This image is generated by the script [build_images.sh](/examples/build_image.sh) and is used to create the training, eval, and inference workers.
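Optionally, pre-pull the image on the worker node so the first job start is faster (the tag is the one used throughout this example):
```bash
docker pull kubeedge/sedna-example-incremental-learning-helmet-detection:v0.4.0
```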
#### 3.2 Create Incremental Job
In this example, `$WORKER_NODE` is a custom node; set it to the node where you actually run the workers.
| ``` | |||
| WORKER_NODE="edge-node" | |||
| ``` | |||
| - Create Dataset | |||
| ``` | |||
| kubectl create -f - <<EOF | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: Dataset | |||
| metadata: | |||
| name: incremental-dataset | |||
| spec: | |||
| url: "/data/helmet_detection/train_data/train_data.txt" | |||
| format: "txt" | |||
| nodeName: $WORKER_NODE | |||
| EOF | |||
| ``` | |||
- Create an Initial Model to simulate the initial model in the incremental learning scenario.
| ``` | |||
| kubectl create -f - <<EOF | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: Model | |||
| metadata: | |||
| name: initial-model | |||
| spec: | |||
| url : "/models/base_model" | |||
| format: "ckpt" | |||
| EOF | |||
| ``` | |||
| - Create Deploy Model | |||
| ``` | |||
| kubectl create -f - <<EOF | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: Model | |||
| metadata: | |||
| name: deploy-model | |||
| spec: | |||
| url : "/models/deploy_model/saved_model.pb" | |||
| format: "pb" | |||
| EOF | |||
| ``` | |||
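You can optionally confirm that the dataset and the two models were created (a standard kubectl listing of the Sedna custom resources):
```bash
kubectl get datasets.sedna.io,models.sedna.io
```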
| ### 4. Run | |||
* Incremental learning supports hot model updates and cold model updates. Jobs use cold model updates by default. If you want to use hot model updates, add the following fields:
| ```yaml | |||
| deploySpec: | |||
| model: | |||
| hotUpdateEnabled: true | |||
| pollPeriodSeconds: 60 # default value is 60 | |||
| ``` | |||
| * create the job: | |||
| ``` | |||
| IMAGE=kubeedge/sedna-example-incremental-learning-helmet-detection:v0.4.0 | |||
| kubectl create -f - <<EOF | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: IncrementalLearningJob | |||
| metadata: | |||
| name: helmet-detection-demo | |||
| spec: | |||
| initialModel: | |||
| name: "initial-model" | |||
| dataset: | |||
| name: "incremental-dataset" | |||
| trainProb: 0.8 | |||
| trainSpec: | |||
| template: | |||
| spec: | |||
| nodeName: $WORKER_NODE | |||
| containers: | |||
| - image: $IMAGE | |||
| name: train-worker | |||
| imagePullPolicy: IfNotPresent | |||
| args: ["train.py"] | |||
| env: | |||
| - name: "batch_size" | |||
| value: "32" | |||
| - name: "epochs" | |||
| value: "1" | |||
| - name: "input_shape" | |||
| value: "352,640" | |||
| - name: "class_names" | |||
| value: "person,helmet,helmet-on,helmet-off" | |||
| - name: "nms_threshold" | |||
| value: "0.4" | |||
| - name: "obj_threshold" | |||
| value: "0.3" | |||
| trigger: | |||
| checkPeriodSeconds: 60 | |||
| timer: | |||
| start: 02:00 | |||
| end: 20:00 | |||
| condition: | |||
| operator: ">" | |||
| threshold: 500 | |||
| metric: num_of_samples | |||
| evalSpec: | |||
| template: | |||
| spec: | |||
| nodeName: $WORKER_NODE | |||
| containers: | |||
| - image: $IMAGE | |||
| name: eval-worker | |||
| imagePullPolicy: IfNotPresent | |||
| args: ["eval.py"] | |||
| env: | |||
| - name: "input_shape" | |||
| value: "352,640" | |||
| - name: "class_names" | |||
| value: "person,helmet,helmet-on,helmet-off" | |||
| deploySpec: | |||
| model: | |||
| name: "deploy-model" | |||
| hotUpdateEnabled: true | |||
| pollPeriodSeconds: 60 | |||
| trigger: | |||
| condition: | |||
| operator: ">" | |||
| threshold: 0.1 | |||
| metric: precision_delta | |||
| hardExampleMining: | |||
| name: "IBT" | |||
| parameters: | |||
| - key: "threshold_img" | |||
| value: "0.9" | |||
| - key: "threshold_box" | |||
| value: "0.9" | |||
| template: | |||
| spec: | |||
| nodeName: $WORKER_NODE | |||
| containers: | |||
| - image: $IMAGE | |||
| name: infer-worker | |||
| imagePullPolicy: IfNotPresent | |||
| args: ["inference.py"] | |||
| env: | |||
| - name: "input_shape" | |||
| value: "352,640" | |||
| - name: "video_url" | |||
| value: "file://video/video.mp4" | |||
| - name: "HE_SAVED_URL" | |||
| value: "/he_saved_url" | |||
| volumeMounts: | |||
| - name: localvideo | |||
| mountPath: /video/ | |||
| - name: hedir | |||
| mountPath: /he_saved_url | |||
| resources: # user defined resources | |||
| limits: | |||
| memory: 2Gi | |||
| volumes: # user defined volumes | |||
| - name: localvideo | |||
| hostPath: | |||
| path: /incremental_learning/video/ | |||
type: DirectoryOrCreate
| - name: hedir | |||
| hostPath: | |||
| path: /incremental_learning/he/ | |||
type: DirectoryOrCreate
| outputDir: "/output" | |||
| EOF | |||
| ``` | |||
1. The `Dataset` describes data with labels, and `HE_SAVED_URL` indicates the address in the deploy container where hard examples are uploaded. Users will label the hard examples at that address.
2. Ensure that the path of `outputDir` in the YAML file exists on your node; this path will be mounted directly into the container (see the sketch below).
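A minimal sketch of creating the host paths referenced by the job spec above on `$WORKER_NODE` (paths taken from the `outputDir` and `volumes` fields):
```bash
mkdir -p /output /incremental_learning/video /incremental_learning/he
```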
| ### 5. Monitor | |||
#### Check Incremental Learning Job
| Query the service status: | |||
| ``` | |||
| kubectl get incrementallearningjob helmet-detection-demo | |||
| ``` | |||
| In the `IncrementalLearningJob` resource helmet-detection-demo, the following trigger is configured: | |||
| ``` | |||
| trigger: | |||
| checkPeriodSeconds: 60 | |||
| timer: | |||
| start: 02:00 | |||
| end: 20:00 | |||
| condition: | |||
| operator: ">" | |||
| threshold: 500 | |||
| metric: num_of_samples | |||
| ``` | |||
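With this configuration, training is considered only between 02:00 and 20:00, the condition is checked every 60 seconds, and it fires once more than 500 new samples have arrived. You can watch the job move through its train/eval/deploy stages with standard kubectl inspection (the exact status fields depend on the Sedna version):
```bash
kubectl describe incrementallearningjob helmet-detection-demo
```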
| ## API | |||
| - control-plane: Please refer to this [link](api/crd). | |||
| - Lib: Please refer to this [link](api/lib). | |||
| ## Contributing | |||
| Contributions are very welcome! | |||
| - control-plane: Please refer to this [link](contributing/control-plane/development.md). | |||
| - Lib: Please refer to this [link](contributing/lib/development.md). | |||
| ## Community | |||
| Sedna is an open source project and in the spirit of openness and freedom, we welcome new contributors to join us. | |||
| You can get in touch with the community according to the ways: | |||
| * [Github Issues](https://github.com/kubeedge/sedna/issues) | |||
| * [Regular Community Meeting](https://zoom.us/j/4167237304) | |||
| * [slack channel](https://app.slack.com/client/TDZ5TGXQW/C01EG84REVB/details) | |||
| @@ -1,9 +0,0 @@ | |||
| [支持边云协同终身学习特性,KubeEdge 子项目 Sedna 0.3.0 版本发布!](https://juejin.cn/post/6970866022286884878) | |||
| [【HDC.Cloud 2021】边云协同,打通AI最后一公里](https://xie.infoq.cn/article/b22e72afe8de50ca34269bb21) | |||
| [KubeEdge Sedna如何实现边缘AI模型精度提升50%](https://www.huaweicloud.com/zhishi/hdc2021-Track-24-18.html) | |||
| [KubeEdge子项目Sedna 0.1版本发布!支持边云协同增量学习、联邦学习、协同推理](https://mp.weixin.qq.com/s/3Ei8ynSAxnfuoIWYdb7Gpg) | |||
| [加速AI边云协同创新!KubeEdge社区建立Sedna子项目](https://cloud.tencent.com/developer/article/1791739) | |||
| @@ -4,7 +4,7 @@ For interested readers, Sedna also has two important components that would be me | |||
| If you don't have an existing Kubernetes, you can: | |||
| 1) Install Kubernetes by following the [Kubernetes website](https://kubernetes.io/docs/setup/). | |||
| 2) Or follow [quick start](quick-start.md) for other options. | |||
| 2) Or follow [quick start](index/quick-start.md) for other options. | |||
| ### Prerequisites | |||
| - [Kubectl][kubectl] with right kubeconfig | |||
| @@ -0,0 +1,105 @@ | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: IncrementalLearningJob | |||
| metadata: | |||
| name: helmet-detection-demo | |||
| spec: | |||
| initialModel: | |||
| name: "initial-model" | |||
| dataset: | |||
| name: "incremental-dataset" | |||
| trainProb: 0.8 | |||
| trainSpec: | |||
| template: | |||
| spec: | |||
| nodeName: $WORKER_NODE | |||
| containers: | |||
| - image: $IMAGE | |||
| name: train-worker | |||
| imagePullPolicy: IfNotPresent | |||
| args: [ "train.py" ] | |||
| env: | |||
| - name: "batch_size" | |||
| value: "32" | |||
| - name: "epochs" | |||
| value: "1" | |||
| - name: "input_shape" | |||
| value: "352,640" | |||
| - name: "class_names" | |||
| value: "person,helmet,helmet-on,helmet-off" | |||
| - name: "nms_threshold" | |||
| value: "0.4" | |||
| - name: "obj_threshold" | |||
| value: "0.3" | |||
| trigger: | |||
| checkPeriodSeconds: 60 | |||
| timer: | |||
| start: 02:00 | |||
| end: 20:00 | |||
| condition: | |||
| operator: ">" | |||
| threshold: 500 | |||
| metric: num_of_samples | |||
| evalSpec: | |||
| template: | |||
| spec: | |||
| nodeName: $WORKER_NODE | |||
| containers: | |||
| - image: $IMAGE | |||
| name: eval-worker | |||
| imagePullPolicy: IfNotPresent | |||
| args: [ "eval.py" ] | |||
| env: | |||
| - name: "input_shape" | |||
| value: "352,640" | |||
| - name: "class_names" | |||
| value: "person,helmet,helmet-on,helmet-off" | |||
| deploySpec: | |||
| model: | |||
| name: "deploy-model" | |||
| hotUpdateEnabled: true | |||
| pollPeriodSeconds: 60 | |||
| trigger: | |||
| condition: | |||
| operator: ">" | |||
| threshold: 0.1 | |||
| metric: precision_delta | |||
| hardExampleMining: | |||
| name: "IBT" | |||
| parameters: | |||
| - key: "threshold_img" | |||
| value: "0.9" | |||
| - key: "threshold_box" | |||
| value: "0.9" | |||
| template: | |||
| spec: | |||
| nodeName: $WORKER_NODE | |||
| containers: | |||
| - image: $IMAGE | |||
| name: infer-worker | |||
| imagePullPolicy: IfNotPresent | |||
| args: [ "inference.py" ] | |||
| env: | |||
| - name: "input_shape" | |||
| value: "352,640" | |||
| - name: "video_url" | |||
| value: "file://video/video.mp4" | |||
| - name: "HE_SAVED_URL" | |||
| value: "/he_saved_url" | |||
| volumeMounts: | |||
| - name: localvideo | |||
| mountPath: /video/ | |||
| - name: hedir | |||
| mountPath: /he_saved_url | |||
| resources: # user defined resources | |||
| limits: | |||
| memory: 2Gi | |||
| volumes: # user defined volumes | |||
| - name: localvideo | |||
| hostPath: | |||
| path: /incremental_learning/video/ | |||
type: DirectoryOrCreate
| - name: hedir | |||
| hostPath: | |||
| path: /incremental_learning/he/ | |||
type: DirectoryOrCreate
| outputDir: "/output" | |||
| @@ -0,0 +1,66 @@ | |||
| apiVersion: sedna.io/v1alpha1 | |||
| kind: JointInferenceService | |||
| metadata: | |||
| name: helmet-detection-inference-example | |||
| namespace: default | |||
| spec: | |||
| edgeWorker: | |||
| model: | |||
| name: "helmet-detection-inference-little-model" | |||
| hardExampleMining: | |||
| name: "IBT" | |||
| parameters: | |||
| - key: "threshold_img" | |||
| value: "0.9" | |||
| - key: "threshold_box" | |||
| value: "0.9" | |||
| template: | |||
| spec: | |||
| nodeName: $EDGE_NODE | |||
| containers: | |||
| - image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0 | |||
| imagePullPolicy: IfNotPresent | |||
| name: little-model | |||
| env: # user defined environments | |||
| - name: input_shape | |||
| value: "416,736" | |||
| - name: "video_url" | |||
| value: "rtsp://localhost/video" | |||
| - name: "all_examples_inference_output" | |||
| value: "/data/output" | |||
| - name: "hard_example_cloud_inference_output" | |||
| value: "/data/hard_example_cloud_inference_output" | |||
| - name: "hard_example_edge_inference_output" | |||
| value: "/data/hard_example_edge_inference_output" | |||
| resources: # user defined resources | |||
| requests: | |||
| memory: 64M | |||
| cpu: 100m | |||
| limits: | |||
| memory: 2Gi | |||
| volumeMounts: | |||
| - name: outputdir | |||
| mountPath: /data/ | |||
| volumes: # user defined volumes | |||
| - name: outputdir | |||
| hostPath: | |||
| # user must create the directory in host | |||
| path: /joint_inference/output | |||
| type: Directory | |||
| cloudWorker: | |||
| model: | |||
| name: "helmet-detection-inference-big-model" | |||
| template: | |||
| spec: | |||
| nodeName: $CLOUD_NODE | |||
| containers: | |||
| - image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0 | |||
| name: big-model | |||
| imagePullPolicy: IfNotPresent | |||
| env: # user defined environments | |||
| - name: "input_shape" | |||
| value: "544,544" | |||
| resources: # user defined resources | |||
| requests: | |||
| memory: 2Gi | |||