The table below summarizes the group, kind and API version details for the CRD.

| Field | Description |
|-------|-------------|
| Group | sedna.io |
| Version | v1alpha1 |
| Kind | LifelongLearningJob |
### Lifelong learning CRD
See the [crd source](/build/crds/sedna/sedna.io_lifelonglearningjobs.yaml) for details.
### Lifelong learning job type definition
See the [golang source](/pkg/apis/sedna/v1alpha1/lifelonglearningjob_types.go) for details.
#### Validation
[Open API v3 Schema based validation](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#validation) can be used to guard against bad requests.
Here is a list of validations we need to support:
1. The edge node name specified in the CRD should exist in Kubernetes.
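To make this concrete, here is a minimal, hypothetical fragment of an OpenAPI v3 schema for the CRD (the `required` choice and field constraints are assumptions for illustration; the authoritative schema is in the CRD source linked above):

```yaml
# Hypothetical validation fragment; not the actual Sedna schema.
versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            required: ["dataset"]       # assumed required field
            properties:
              dataset:
                type: object
                properties:
                  name:
                    type: string
                    minLength: 1        # rejects an empty dataset name
```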
### Lifelong learning job sample
See the [source](/build/crd-samples/sedna/lifelonglearningjob_v1alpha1.yaml) for an example.
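For orientation, a minimal skeleton of such a resource might look like the following (the metadata name and the `/output` path are illustrative; see the linked sample for a complete spec):

```yaml
apiVersion: sedna.io/v1alpha1
kind: LifelongLearningJob
metadata:
  name: atcii-classifier-demo    # illustrative name
spec:
  dataset: {}                    # dataset reference (fields omitted)
  trainSpec: {}                  # training worker template (omitted)
  deploySpec: {}                 # deployment worker template (omitted)
  outputDir: "/output"           # where artifacts (models, samples) are written
```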
# Using Lifelong Learning Job in Thermal Comfort Prediction Scenario
This document introduces how to use the lifelong learning job in a thermal comfort prediction scenario.
Using the lifelong learning job, our application can automatically retrain, evaluate,
and update models based on the data generated at the edge.
## Thermal Comfort Prediction Experiment
In this example, you can use the ASHRAE Global Thermal Comfort Database II.
We provide a well-processed [dataset](https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/atcii-classifier/dataset.tar.gz), including train (`trainData.csv`), evaluation (`testData.csv`) and incremental (`trainData2.csv`) data.
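For example, the archive can be fetched and unpacked like this (placing it under `/data` is an assumption that matches the `/data/testData.csv` path used by the job spec below):

```shell
mkdir -p /data
cd /data
wget https://kubeedge.obs.cn-north-1.myhuaweicloud.com/examples/atcii-classifier/dataset.tar.gz
tar -zxvf dataset.tar.gz   # unpacks trainData.csv, testData.csv and trainData2.csv
```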
- name: "model_threshold" # Threshold for filtering deploy models
value: "0.5"
deploySpec:
template:
@@ -109,9 +109,9 @@ spec:
The deployment worker runs the inference script, with the unseen-task save path and the simulated inference samples configured through environment variables:

```yaml
          imagePullPolicy: IfNotPresent
          args: ["inference.py"]
          env:
            - name: "UT_SAVED_URL"        # unseen tasks save path
              value: "/ut_saved_url"
            - name: "infer_dataset_url"   # simulation of the inference samples
              value: "/data/testData.csv"
          volumeMounts:
            - name: utdir
```
>**Note**: `outputDir` can be set to an S3 storage URL to save artifacts (model, samples, etc.) to S3; follow [this guide](/examples/storage/s3/README.md) to set the credentials.
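For instance, a hypothetical spec fragment pointing `outputDir` at S3 (bucket name and prefix are placeholders):

```yaml
  outputDir: "s3://your-bucket/lifelonglearning/output"   # hypothetical bucket/prefix
```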