Let's start with a typical Edge AI application to introduce inference on edge.
Object tracking is an essential technology in computer vision. It is widely used for suspect tracking in security surveillance, container tracking in ports, material tracking in warehouses, carrier tracking in epidemiological surveys, and many other scenarios, and it has grown increasingly popular in recent years.
In practical applications, however, we found problems such as the following:
A standalone object tracking algorithm often cannot achieve good results.
To address the problems above, we believe that the collaboration of multiple edge nodes can offer a reliable solution for the object tracking scenario.
Therefore, based on KubeEdge's edge-cloud collaboration and resource management capabilities, we propose an end-to-end collaborative object tracking approach employing the flexible synergy of multiple edge nodes. Our proposed approach can enhance data privacy protection, reduce transmission bandwidth, and improve tracking accuracy on edge nodes with limited computing resources.
We propose using Kubernetes Custom Resource Definitions (CRDs) to describe the object tracking service specification/status and a controller to synchronize these updates between edge and cloud.

The MultiEdgeInference CRD will be namespace-scoped.
The table below summarizes the group, API version, and kind of the CRD.
| Field | Description |
|---|---|
| Group | sedna.io |
| APIVersion | v1alpha1 |
| Kind | MultiEdgeInference |
OpenAPI v3 schema-based validation can be used to guard against bad requests: invalid field values (for example, a string value for a boolean field) are rejected at admission time.
The image below shows the system architecture and its simplified workflow:
The multi-edge-inference controller starts three separate goroutines, referred to here as the upstream, downstream, and multi-edge-inference controllers. They are not separate controllers as such; the names are used only for clarity.
The multi-edge-inference controller watches the K8S API server for updates to multi-edge-inference tasks and their corresponding pods.
Updates are categorized below along with the possible actions:
| Update Type | Action |
|---|---|
| New Multi-edge-inference Created | Create the cloud/edge worker |
| Multi-edge-inference Deleted | NA. The corresponding workers will be deleted by the GM (GlobalManager). |
| The corresponding pod created/running/completed/failed | Update the status of multi-edge-inference task. |
The downstream controller watches the K8S API server for multi-edge-inference updates.
Updates are categorized below along with the possible actions that the downstream controller can take:
| Update Type | Action |
|---|---|
| New Multi-edge-inference Created | Sends the task information to the LCs (LocalControllers). |
| Multi-edge-inference Deleted | The controller sends the delete event to LCs. |
The upstream controller watches for multi-edge-inference task updates from the edge node and applies these updates to the API server in the cloud.
Updates are categorized below along with the possible actions that the upstream controller can take:
| Update Type | Action |
|---|---|
| Multi-edge-inference Reported State Updated | The controller appends the reported status to the multi-edge-inference task in the cloud. |