# Quick Start

This document shows how to run a joint inference job with Sedna.

#### 0. Check the Environment

For the Sedna all-in-one installation, you need:

- 1 VM **(one machine is OK, a cluster is not required)**
- 2 CPUs or more
- 2GB+ free memory, depending on the node number setting
- 10GB+ free disk space
- Internet connection (Docker Hub, GitHub, etc.)
- Linux platform, such as Ubuntu/CentOS
- Docker 17.06+

You can check the Docker version with the following command:

```bash
docker -v
```

If the output looks like the following, your version fits the bill:

```
Docker version 19.03.6, build 369ce74a3c
```

#### 1. Deploy Sedna

Sedna provides three deployment methods, which can be selected according to your actual situation:

- [Install Sedna AllinOne](../setup/all-in-one.md) (used for development; we use it here).
- [Install Sedna local up](../setup/local-up.md).
- [Install Sedna on a cluster](../setup/install.md).

The [all-in-one script](/scripts/installation/all-in-one.sh) installs Sedna along with a mini Kubernetes environment locally, including:

- A Kubernetes v1.21 cluster with multiple worker nodes; the default is zero worker nodes.
- KubeEdge with multiple edge nodes; the default is the latest KubeEdge and one edge node.
- Sedna; the default is the latest version.

```bash
curl https://raw.githubusercontent.com/kubeedge/sedna/master/scripts/installation/all-in-one.sh | NUM_EDGE_NODES=1 bash -
```

You then get two nodes, `sedna-mini-control-plane` and `sedna-mini-edge0`. You can get into each node with the following commands:

```bash
# get into cloud node
docker exec -it sedna-mini-control-plane bash
```

```bash
# get into edge node
docker exec -it sedna-mini-edge0 bash
```

#### 2. Prepare Data and Model Files

* step1: download the [little model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz) to your edge node.
```bash
mkdir -p /data/little-model
cd /data/little-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz
tar -zxvf little-model.tar.gz
```

* step2: download the [big model](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz) to your cloud node.

```bash
mkdir -p /data/big-model
cd /data/big-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz
tar -zxvf big-model.tar.gz
```

#### 3. Create Big Model Resource Object for Cloud

In the cloud node:

```
kubectl create -f - <
```
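The `kubectl create -f -` command above is truncated; it feeds a manifest to the cluster via stdin (typically with a `<<EOF ... EOF` heredoc). As a rough sketch only — the resource `name`, model `url`, and `format` values below are assumptions for illustration, not the exact manifest of this example — a Sedna `Model` object for the big model might look like:

```yaml
# Hypothetical sketch of a Sedna Model resource for the big model.
# The name, url, and format values are assumptions; adjust them to
# match the files extracted under /data/big-model on your cloud node.
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model
  namespace: default
spec:
  url: "/data/big-model/yolov3_darknet.pb"
  format: "pb"
```

The `url` field points at the model file on the node's local filesystem, which is why the big model archive is extracted to `/data/big-model` in the step above.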