The following shows how to run a joint inference job with Sedna.
For the Sedna all-in-one installation, Docker is required. You can check your Docker version with the following command:
docker -v
If the output looks like the following, your Docker version fits the bill:
Docker version 19.03.6, build 369ce74a3c
Sedna provides three deployment methods, which can be selected according to your situation; this guide uses the all-in-one script. The all-in-one script installs Sedna along with a mini Kubernetes environment locally, including a Kubernetes control-plane node, one or more KubeEdge edge nodes (controlled by NUM_EDGE_NODES), and Sedna itself:
curl https://raw.githubusercontent.com/kubeedge/sedna/master/scripts/installation/all-in-one.sh | NUM_EDGE_NODES=1 bash -
Then you get two nodes, sedna-mini-control-plane and sedna-mini-edge0. You can get into each node with the following commands:
# get into cloud node
docker exec -it sedna-mini-control-plane bash
# get into edge node
docker exec -it sedna-mini-edge0 bash
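Optionally, verify that the installation succeeded. A minimal check, assuming the default all-in-one node names and that Sedna's components are installed in the sedna namespace (the installer's default):
# both nodes should report Ready
docker exec sedna-mini-control-plane kubectl get nodes
# Sedna's GM and LC pods should be Running
docker exec sedna-mini-control-plane kubectl get pods -n sedna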
Download the little model onto the edge node:
mkdir -p /data/little-model
cd /data/little-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/little-model.tar.gz
tar -zxvf little-model.tar.gz
Download the big model onto the cloud node:
mkdir -p /data/big-model
cd /data/big-model
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/big-model.tar.gz
tar -zxvf big-model.tar.gz
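Optionally, confirm that the model files were extracted to the paths that the Model resources below will reference:
# on the cloud node: expect to see yolov3_darknet.pb
ls /data/big-model
# on the edge node: expect to see yolov3_resnet18.pb
ls /data/little-model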
Create the big model resource in the cloud node:
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-big-model
  namespace: default
spec:
  url: "/data/big-model/yolov3_darknet.pb"
  format: "pb"
EOF
Then create the little model resource, also in the cloud node:
kubectl create -f - <<EOF
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: helmet-detection-inference-little-model
  namespace: default
spec:
  url: "/data/little-model/yolov3_resnet18.pb"
  format: "pb"
EOF
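You can check that both Model resources were created. In the cloud node:
# both helmet-detection models should be listed
kubectl get models.sedna.io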
Note the settings of the following parameters, which have to be the same as in the script little_model.py:
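For illustration only: in the example's service YAML, these parameters appear under the hardExampleMining section of the edge worker spec. The algorithm name and threshold values below are assumptions and must be checked against what little_model.py actually uses:
# illustrative fragment of the JointInferenceService spec (values are assumptions)
hardExampleMining:
  name: "IBT"
  parameters:
    - key: "threshold_img"
      value: "0.9"
    - key: "threshold_box"
      value: "0.9"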
Make preparations in the edge node:
mkdir -p /joint_inference/output
Create joint inference service
CLOUD_NODE="sedna-mini-control-plane"
EDGE_NODE="sedna-mini-edge0"
kubectl create -f https://raw.githubusercontent.com/jaypume/sedna/main/examples/joint_inference/helmet_detection_inference/helmet_detection_inference.yaml
kubectl get jointinferenceservices.sedna.io
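To see more detail, for example while waiting for the workers to start:
# show detailed status for all joint inference services
kubectl describe jointinferenceservices.sedna.io
# the cloud and edge inference workers run as pods
kubectl get pods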
Mock a video stream on the edge node, so that there is a video stream URL (e.g. rtsp://localhost/video) that the inference service can connect to. Install and start the open-source streaming server EasyDarwin:
wget https://github.com/EasyDarwin/EasyDarwin/releases/download/v8.1.0/EasyDarwin-linux-8.1.0-1901141151.tar.gz
tar -zxvf EasyDarwin-linux-8.1.0-1901141151.tar.gz
cd EasyDarwin-linux-8.1.0-1901141151
./start.sh
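EasyDarwin listens for RTSP on port 554 by default (an assumption; check its configuration file if your build differs). You can verify the server is up:
# confirm the RTSP port is listening
ss -tlnp | grep 554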
Download the test video:
mkdir -p /data/video
cd /data/video
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/helmet-detection-inference/video.tar.gz
tar -zxvf video.tar.gz
Push the video stream to the streaming server:
ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video
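The command above streams the file once and then exits. If you want the video to loop while you watch the inference output, ffmpeg's -stream_loop option (placed before -i) repeats the input:
# -stream_loop -1 loops the input indefinitely; -re reads at native frame rate
ffmpeg -stream_loop -1 -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video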
You can check the inference results in the output path (e.g. /joint_inference/output) defined in the JointInferenceService config.
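For example, on the edge node:
# inference outputs accumulate here
ls -lh /joint_inference/output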

Contributions are very welcome!
Sedna is an open source project and in the spirit of openness and freedom, we welcome new contributors to join us.
You can get in touch with the community in the following ways: