Add a tag '_speed_buildx_for_cgo_alpine' for Dockerfiles that:
1. require golang with CGO_ENABLED=1
2. use an alpine image.
It is used in 'build/lc/Dockerfile'.
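The tag is just a marker the build tooling can look for; a minimal sketch of how a helper script might use it (the marker form and the check are assumptions, not necessarily how this repo wires it up):

    # Hypothetical helper logic: only Dockerfiles carrying the marker get the
    # CGO-on-alpine speed-up path when building with buildx.
    dockerfile=build/lc/Dockerfile
    if grep -q '_speed_buildx_for_cgo_alpine' "$dockerfile"; then
        echo "$dockerfile: CGO_ENABLED=1 on alpine, use the speed-up path"
    fi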
Signed-off-by: llhuii <liulinghui@huawei.com>
We use docker buildx to build our component images for different
platforms.
Some languages, such as golang, have good built-in support for building
for multiple platforms, and buildx can take advantage of that.
In Sedna, the GM/LC are written in golang. This commit adds support for
that.
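For reference, a multi-platform build of a Go component with buildx looks roughly like this; the image name, platform list, and the GM Dockerfile path (mirroring build/lc) are placeholders, not the exact values our build scripts use:

    # Cross-build one Go component for two platforms in a single invocation.
    # Inside the Dockerfile, `ARG TARGETOS TARGETARCH` plus
    # `GOOS=$TARGETOS GOARCH=$TARGETARCH go build ...` lets the Go toolchain
    # cross-compile instead of emulating the target architecture.
    docker buildx build \
        --platform linux/amd64,linux/arm64 \
        -f build/gm/Dockerfile \
        -t example.local/sedna/gm:dev \
        --push .   # a multi-platform result cannot be loaded into the local daemon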
Signed-off-by: llhuii <liulinghui@huawei.com>
Since we switched to a pod template, which makes the CRD YAML large,
`kubectl apply -f` fails with the error `metadata.annotations: Too long:
must have at most 262144 bytes`.
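The limit is hit because client-side `kubectl apply` stores the whole object in the `kubectl.kubernetes.io/last-applied-configuration` annotation. Two common ways around it for large CRDs (whether this commit uses either is not stated here):

    # Option 1: create/replace, which writes no last-applied-configuration annotation.
    kubectl create -f build/crds/sedna/ || kubectl replace -f build/crds/sedna/
    # Option 2: server-side apply, which also avoids the annotation.
    kubectl apply --server-side -f build/crds/sedna/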
Signed-off-by: llhuii <liulinghui@huawei.com>
Instead of manually maintaining the CRD YAML files under
build/crds/sedna, we use kubebuilder to generate them.
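Generation goes through controller-gen (the CLI that kubebuilder projects use); a sketch of the kind of invocation involved, where the API package path and the exact flags are assumptions about this repo's layout:

    # Regenerate the CRD manifests from the Go API types instead of
    # hand-editing the YAML under build/crds/sedna.
    controller-gen crd paths="./pkg/apis/..." output:crd:artifacts:config=build/crds/sedna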
Signed-off-by: llhuii <liulinghui@huawei.com>
this avoids our images being removed, which would otherwise make the
local-up script fail, since edgecore image GC is triggered at high
image disk usage (>=80%).
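For context, the threshold comes from edgecore's edged module, which exposes kubelet-style image GC settings; a quick way to inspect the values on the edge node (the config path and field names follow KubeEdge defaults and may differ by version):

    # Image GC starts evicting images once disk usage crosses
    # imageGCHighThreshold (80 by default), which is what removed our images.
    grep -i 'imageGC' /etc/kubeedge/config/edgecore.yaml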
Signed-off-by: llhuii <liulinghui@huawei.com>
1. Add e2e framework code stolen from k8s.io/test/e2e/framework, with
unnecessary code removed for simplicity.
2. Add run-e2e.sh and a GitHub Actions job to run the e2e tests.
3. Add a simple dataset test case; TODO for the other CRs.
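An assumed way to run the suite locally (the hack/ location and the KUBECONFIG handling are guesses; only the script name comes from this commit):

    # Run the e2e tests against an already-running cluster.
    KUBECONFIG=$HOME/.kube/config bash hack/run-e2e.sh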
Signed-off-by: llhuii <liulinghui@huawei.com>
Developers can run `hack/local-up.sh` to set up a local environment
including the following (example invocation after the list):
1. a local k8s cluster with a master node.
2. a kubeedge node.
3. our gm/lc.
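Example invocation; the verification commands are illustrative only:

    # Bring up the all-in-one local environment described above, then check
    # that the master node and the kubeedge edge node are both Ready.
    bash hack/local-up.sh
    kubectl get nodes -o wide
    kubectl get pods --all-namespaces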
Based on the kubeedge localup script, which builds a local k8s cluster
and kubeedge, our local-up script installs our packages locally for
simple development and as preparation for the e2e tests.
It does the following (sketched in shell after the list):
1. build the gm/lc/worker images.
2. download kubeedge source code and run its localup script.
3. prepare our k8s env.
4. write the gm config and start the gm.
5. start lc.
6. register the cleanup handlers.
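Roughly, the control flow looks like the sketch below; the function names are made up, only the ordering mirrors the list above:

    #!/bin/bash
    # Illustrative outline of hack/local-up.sh; every function body is a stub.
    set -e
    build_images()         { echo "build the gm/lc/worker images"; }
    run_kubeedge_localup() { echo "download kubeedge and run its localup script"; }
    prepare_k8s()          { echo "install CRDs and other k8s prerequisites"; }
    start_gm()             { echo "write the gm config and start gm"; }
    start_lc()             { echo "start lc"; }
    cleanup()              { echo "delete gm/lc first, then tear down kubeedge/k8s"; }
    trap cleanup EXIT
    build_images && run_kubeedge_localup && prepare_k8s && start_gm && start_lc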
For cleanup, our cleanups need to run before the kubeedge cleanup;
otherwise the lc cleanup (via kubectl delete) gets stuck and the lc
container is left running.
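A sketch of that ordering with a single exit trap; the resource kind, name, and namespace for the lc are assumptions, and kubeedge_cleanup stands in for whatever teardown the kubeedge localup script provides:

    # Delete sedna resources while edgecore can still report pod deletion
    # back to the apiserver, then hand over to the kubeedge teardown.
    sedna_cleanup() {
        kubectl -n sedna delete daemonset lc --ignore-not-found=true --timeout=60s || true
    }
    kubeedge_cleanup() { echo "stop edgecore/cloudcore and the local k8s cluster"; }  # stub
    all_cleanup() {
        sedna_cleanup     # ours first, otherwise the lc container keeps running
        kubeedge_cleanup
    }
    trap all_cleanup EXIT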
Signed-off-by: llhuii <liulinghui@huawei.com>