Reorganize folder structure of multiedgetracking.
Remove the sync-with-edge function from the multiedgetracking controllers (currently unused).
Edit README.md and Dockerfiles.
Remove some explicit parameters from the YAML files (default values are used instead).
Add a trailing empty line to the YAML and Python files.
Add comments to the multiedgeinference code.
Minor edits to the run.sh script.
Update Docker image version number in YAML files.
Signed-off-by: Vittorio Cozzolino <vittorio.cozzolino@huawei.com>
The image below shows the system architecture and its simplified workflow:


## Components
**ReID Job**: performs the pedestrian re-identification (ReID) step.
- Available for CPU only.
- Folder with specific implementation `examples/multiedgeinference/pedestrian_tracking/reid`.
- Component specs in `lib/sedna/core/multi_edge_inference/components/reid.py`.
- Defined by the Dockerfile `multi-edge-inference-reid.Dockerfile`.
**Feature Extraction Service**: extracts the features required by the ReID step.
- Available for CPU and GPU.
- Folder with specific implementation details `examples/multiedgeinference/pedestrian_tracking/feature_extraction`.
- Component specs in `lib/sedna/core/multi_edge_inference/components/feature_extraction.py`.
- Defined by the Dockerfile `multi-edge-inference-feature-extraction.Dockerfile` or `multi-edge-inference-gpu-feature-extraction.Dockerfile`.
- It loads the model defined by the CRD in the YAML file `yaml/models/model_m3l.yaml`.
**VideoAnalytics Job**: detects and tracks objects (pedestrians) in a video.
- Available for CPU and GPU.
- Folder with specific implementation details `examples/multiedgeinference/pedestrian_tracking/detection`.
- AI model code in `examples/multiedgeinference/pedestrian_tracking/detection/estimator/bytetracker.py`.
- Component specs in `lib/sedna/core/multi_edge_inference/components/detection.py`.
- Defined by the Dockerfile `multi-edge-inference-videoanalytics.Dockerfile` or `multi-edge-inference-gpu-videoanalytics.Dockerfile`.
- It loads the model defined by the CRD in the YAML file `yaml/models/model_detection.yaml`.
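
The model CRDs referenced above (`yaml/models/model_m3l.yaml` and `yaml/models/model_detection.yaml`) are instances of Sedna's `Model` kind. A minimal sketch of such a manifest, assuming the standard `url`/`format` fields (the name and model path below are illustrative, not the values shipped with this example):

```yaml
apiVersion: sedna.io/v1alpha1
kind: Model
metadata:
  name: detection-model               # illustrative name
spec:
  url: "/models/detection/model.pth"  # illustrative path to the pre-trained weights
  format: "pth"
```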
## Build Phase
Go to the `sedna/examples` directory and run: `./build_image.sh -r <your-docker-private-repo> multiedgeinference` to build the Docker images. Remember to **push** the images to your own Docker repository!
Run `make crds` in `SEDNA_HOME` and then register the new CRDs in the K8S cluster with `make install crds`, or apply the generated manifests directly, as sketched below.
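
A sketch of the manual registration, assuming the CRD manifests generated by `make crds` (the exact file names and output directory depend on your checkout):

```shell
kubectl apply -f build/crds/sedna.io_featureextractionservices.yaml
kubectl apply -f build/crds/sedna.io_reidjobs.yaml
kubectl apply -f build/crds/sedna.io_videoanalyticsjobs.yaml
```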
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from sedna.core.multi_edge_inference.components import BaseService
from sedna.core.multi_edge_inference.plugins import PLUGIN, PluggableModel, PluggableNetworkService
from sedna.core.multi_edge_inference.plugins.registered import Feature_Extraction, VideoAnalytics_I
class FEService(BaseService):
    """
    In MultiEdgeInference, the Feature Extraction component is deployed
    at the edge or in the cloud. It extracts ReID features from the frames
    received from the ObjectDetector component and sends the enriched data
    back to it using Kafka or the REST API.

    Parameters
    ----------
    consumer_topics : List
        A list of Kafka topics used to communicate with the Object
        Detector service (to receive data from it).
        This is accessed only if the Kafka backend is in use.
    producer_topics : List
        A list of Kafka topics used to communicate with the Object
        Detector service (to send data to it).
        This is accessed only if the Kafka backend is in use.
    plugins : List
        A list of PluggableNetworkService. It can be left empty
        as the FeatureExtraction service is already preconfigured
        to connect to the correct network services.
    models : List
        A list of PluggableModel. By passing a specific model instance,
        it is possible to customize the FeatureExtraction component to,
        for example, extract the object features in a different way.
    timeout : int
        Sets a timeout condition that terminates the main fetch loop
        after the specified number of seconds has passed since the last
        frame was received.
    asynchronous : bool
        If True, the AI processing will be decoupled from the data
        acquisition step. If False, the processing will be sequential.
        In general, set it to True when ingesting a stream (e.g., RTSP)
        and to False when reading from disk (e.g., a video file).

    Examples
    --------
    >>> model = FeatureExtractionAI() # A class implementing the PluggableModel abstract class (example in pedestrian_tracking/feature_extraction/worker.py)
    >>> service = FEService(models=[model], asynchronous=False)
    """


class DetTrackResult:
    """
    Base data object exchanged by the MultiEdgeInference components.
    """

    def __init__(self, frame_index: int = 0, bbox: List = None, scene=None,
                 confidence: List = None, detection_time: List = None,
                 camera: int = 0, bbox_coord: List = [], tracking_ids: List = [],
                 features: List = [], is_target: bool = False, ID: List = []):
        # Name of the end user of the application, used to bind
        # the user to the results.
        self.userID = "DEFAULT"
        self.frame_index = frame_index  # Video frame index number