Hi Wei

Thanks for testing. Have you tested the HPA adaptor?

Sheng Wu 吴晟
Twitter, wusheng1108


Wei Zhang <[email protected]> 于2021年1月22日周五 下午10:07写道:

> Hi All
>
> I deployed the operator, UI, and fetcher from the skywalking-swck v0.2.0
> tag and they worked very well. The Istio control plane monitoring data is
> visible in the UI.
>
> $ kubectl get all -n sw
>
> NAME                                             READY   STATUS    RESTARTS   AGE
> pod/default-oap-59556446d9-ck2z9                 1/1     Running   0          23m
> pod/default-ui-6c5549846d-md24c                  1/1     Running   0          23m
> pod/istio-prod-cluster-fetcher-84499749d-nf8x5   1/1     Running   0          23m
>
> NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                        AGE
> service/default-oap   ClusterIP   10.104.3.147   <none>        12800/TCP,11800/TCP,1234/TCP   23m
> service/default-ui    NodePort    10.96.172.18   <none>        80:30561/TCP                   23m
>
> NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
> deployment.apps/default-oap                  1/1     1            1           23m
> deployment.apps/default-ui                   1/1     1            1           23m
> deployment.apps/istio-prod-cluster-fetcher   1/1     1            1           23m
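For anyone reproducing this, workloads like the ones above are created by swck custom resources. A hypothetical sketch follows; the kind names track the swck operator's CRDs, but the API group and spec field names here are illustrative and have not been checked against the v0.2.0 CRD definitions:

```yaml
# Hypothetical swck custom resources that could yield the pods listed above.
# Field names are illustrative assumptions, not the verified v0.2.0 schema.
apiVersion: operator.skywalking.apache.org/v1alpha1
kind: OAPServer
metadata:
  name: default          # yields pods/services named default-oap
  namespace: sw
spec:
  version: 8.3.0         # assumed OAP backend version
  instances: 1
---
apiVersion: operator.skywalking.apache.org/v1alpha1
kind: Fetcher
metadata:
  name: istio-prod-cluster   # yields the istio-prod-cluster-fetcher pod
  namespace: sw
spec:
  type: ["prometheus"]                  # scrape Istio's Prometheus
  clusterName: istio-prod-cluster
  OAPServerAddress: default-oap.sw:11800
```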
>
>
> $ kubectl get all -n istio-system
>
> NAME                                        READY   STATUS    RESTARTS   AGE
> pod/istio-egressgateway-ff79ddbc6-dldx8     1/1     Running   0          52m
> pod/istio-ingressgateway-6d8576fbcc-nx4qv   1/1     Running   0          52m
> pod/istiod-5756c7769c-fllds                 1/1     Running   0          52m
> pod/prometheus-6d87d85c88-glz2q             2/2     Running   0          49m
>
> NAME                           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                       AGE
> service/istio-egressgateway    ClusterIP      10.96.32.194     <none>        80/TCP,443/TCP,15443/TCP                                                      52m
> service/istio-ingressgateway   LoadBalancer   10.102.40.120    <pending>     15021:32372/TCP,80:31464/TCP,443:31097/TCP,31400:32572/TCP,15443:30768/TCP   52m
> service/istiod                 ClusterIP      10.103.84.53     <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                         52m
> service/prometheus             NodePort       10.100.243.154   <none>        9090:32568/TCP                                                                49m
> NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
> deployment.apps/istio-egressgateway    1/1     1            1           52m
> deployment.apps/istio-ingressgateway   1/1     1            1           52m
> deployment.apps/istiod                 1/1     1            1           52m
> deployment.apps/prometheus             1/1     1            1           49m
> --
>
> Wei Zhang 张伟
> Github, @arugal
>
>
> Hongtao Gao <[email protected]> wrote on Thu, Jan 21, 2021 at 9:56 AM:
>
> > The test build of SkyWalking Cloud on Kubernetes 0.2.0 is now available.
> >
> > We welcome any comments you may have, and will take all feedback into
> > account if a quality vote is called for this build.
> >
> > Release notes:
> >
> >  * https://github.com/apache/skywalking-swck/blob/0.2.0/CHANGES.md
> >
> > Release Candidate:
> >
> >  * https://dist.apache.org/repos/dist/dev/skywalking/swck/$VERSION
> >  * sha512 checksums
> >    - f93036b73261a40ffb748724e066a2116315b118e62f8a86cba0cee25f945e59adac69ae5f852c91442565f04fef121906152d0a5b1bea0993ac6b636a5b28cf
> >      skywalking-swck-0.2.0-bin.tgz
> >    - 3814cb913ac8fc979a5651441230be6a30abb3775a05395ff7d28aaf7d19f158f51dc38aae53f12d1140da2c7bfb73efc0e1668e0fe3bae803431435117d9fe5
> >      skywalking-swck-0.2.0-src.tgz
> > Release Tag:
> >
> >  * (Git Tag) 0.2.0
> >
> > Release Commit Hash:
> >
> >  * https://github.com/apache/skywalking-swck/tree/74f6b6b88c5838230bf029b29153e2d5330a74bb
> > Keys to verify the Release Candidate:
> >
> >  * https://dist.apache.org/repos/dist/release/skywalking/KEYS
> >
> > Guide to build the release from source :
> >
> >  * https://github.com/apache/skywalking-swck/blob/0.2.0/docs/release.md
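For anyone checking the candidate, the usual Apache verification steps can be sketched as follows. The download and file names are assumptions based on the dist.apache.org links and checksum list above; adjust to the actual layout:

```shell
# Sketch of release-candidate verification (paths/names assumed from the
# links above):
#
#   sha512sum -c skywalking-swck-0.2.0-src.tgz.sha512
#   curl -s https://dist.apache.org/repos/dist/release/skywalking/KEYS | gpg --import
#   gpg --verify skywalking-swck-0.2.0-src.tgz.asc skywalking-swck-0.2.0-src.tgz

# Self-contained demonstration of the checksum step:
printf 'demo artifact' > artifact.tgz          # stand-in for the real tarball
sha512sum artifact.tgz > artifact.tgz.sha512   # publish side: record the digest
sha512sum -c artifact.tgz.sha512               # verify side: prints "artifact.tgz: OK"
```

`sha512sum -c` reads "digest  filename" pairs from the listed file and exits non-zero on any mismatch, which is why the published `.sha512` files can be fed to it directly.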
> >
> > A vote regarding the quality of this test build will be initiated
> > within the next couple of days.
> >
>
