Deploying k8s with Ansible (Part 2)

### 3.2 Check flanneld

```
systemctl status flanneld | grep Active
ip addr show | grep flannel
ip addr show | grep docker
cat /run/flannel/docker
cat /run/flannel/subnet.env
```

#### List the directories in the etcd key-value store

```
etcdctl --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/flanneld.pem \
  --key-file=/etc/kubernetes/ssl/flanneld-key.pem \
  ls -r
```

The output looks like:

```
/kubernetes
/kubernetes/network
/kubernetes/network/config
/kubernetes/network/subnets
/kubernetes/network/subnets/172.30.12.0-24
/kubernetes/network/subnets/172.30.43.0-24
/kubernetes/network/subnets/172.30.9.0-24
```

#### Check the pod network segment

```
etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/flanneld.pem \
  --key-file=/etc/kubernetes/ssl/flanneld-key.pem \
  get /kubernetes/network/config
```

#### Check the list of allocated pod subnets

```
etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/flanneld.pem \
  --key-file=/etc/kubernetes/ssl/flanneld-key.pem \
  ls /kubernetes/network/subnets
```

#### Check the node IP and flannel interface corresponding to a pod subnet

```
etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/flanneld.pem \
  --key-file=/etc/kubernetes/ssl/flanneld-key.pem \
  get /kubernetes/network/subnets/172.30.74.0-24
```

### 3.3 Check nginx and keepalived

```
ps -ef | grep nginx
ps -ef | grep keepalived
netstat -lntup | grep nginx
ip addr | grep 192.168
```

Checking the VIP, the output looks like:

```
inet 192.168.10.11/24 brd 192.168.10.255 scope global noprefixroute ens32
inet 192.168.10.100/32 scope global ens32
```

### 3.4 Check kube-apiserver

```
netstat -lntup | grep kube-apiser
# Output:
# tcp  0  0  192.168.10.11:6443  0.0.0.0:*  LISTEN  115454/kube-apiserv
```

```
kubectl cluster-info
```

The output looks like:

```
Kubernetes master is running at https://192.168.10.100:8443
Elasticsearch is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
CoreDNS is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://192.168.10.100:8443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```

```
kubectl get all --all-namespaces
kubectl get cs
```

The output looks like:

```
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```

#### Print the data kube-apiserver has written to etcd

```
ETCDCTL_API=3 etcdctl --endpoints="https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379" \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  get /registry/ --prefix --keys-only
```

#### If you hit this error

```
unexpected ListAndWatch error: storage/cacher.go:/secrets: Failed to list *core.Secret: unable to transform key "/registry/secrets/kube-system/bootstrap-token-2z8s62": invalid padding on input
```

##### Cause

The kube-apiserver encryption tokens are inconsistent across the cluster. The file involved is encryption-config.yaml; the `secret` parameter must be identical on every master.

### 3.5 Check kube-controller-manager

```
netstat -lntup | grep kube-control
# Output:
# tcp   0  0  127.0.0.1:10252  0.0.0.0:*  LISTEN  117775/kube-control
# tcp6  0  0  :::10257         :::*       LISTEN  117775/kube-control
```

```
kubectl get cs
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
```

The output below shows that kube12 has become the leader:

```
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube12_753e65bf-1e65-11ea-b9c4-000c293dd01c","leaseDurationSeconds":15,"acquireTime":"2019-12-14T11:32:49Z","renewTime":"2019-12-14T12:43:20Z","leaderTransitions":0}'
  creationTimestamp: "2019-12-14T11:32:49Z"
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "8282"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 753d2be7-1e65-11ea-b980-000c29e3f448
```

### 3.6 Check kube-scheduler

```
netstat -lntup | grep kube-sche
# Output:
# tcp   0  0  127.0.0.1:10251  0.0.0.0:*  LISTEN  119678/kube-schedul
# tcp6  0  0  :::10259         :::*       LISTEN  119678/kube-schedul
```

```
kubectl get cs
kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml
```

The output below shows that kube12 has become the leader:

```
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube12_89050e00-1e65-11ea-8f5e-000c293dd01c","leaseDurationSeconds":15,"acquireTime":"2019-12-14T11:33:23Z","renewTime":"2019-12-14T12:45:22Z","leaderTransitions":0}'
  creationTimestamp: "2019-12-14T11:33:23Z"
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "8486"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: 899d1625-1e65-11ea-b980-000c29e3f448
```
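The subnet.env file checked in section 3.2 is a plain KEY=VALUE file written by flanneld, so it can be sourced directly from a script. A minimal sketch, using sample values rather than a live /run/flannel/subnet.env (the four variable names are the standard ones flanneld writes; the values here are illustrative):

```shell
# subnet.env is a sourceable KEY=VALUE file written by flanneld.
# The values below are illustrative samples, not taken from a live node.
cat > subnet.env <<'EOF'
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.12.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Source it and report the subnet this node hands out to pods.
. ./subnet.env
echo "cluster pod network: $FLANNEL_NETWORK"
echo "node pod subnet:     $FLANNEL_SUBNET (MTU $FLANNEL_MTU)"
```

On a real node you would source /run/flannel/subnet.env instead of writing the sample file.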

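The subnet keys listed under /kubernetes/network/subnets encode each CIDR with a "-" in place of the "/" (e.g. 172.30.12.0-24). A small bash sketch of turning such a key back into standard CIDR notation, based on the key layout shown in section 3.2:

```shell
# Sample key as it appears in the etcd listing in section 3.2.
key="/kubernetes/network/subnets/172.30.12.0-24"

cidr="${key##*/}"    # strip the directory prefix -> "172.30.12.0-24"
cidr="${cidr/-//}"   # put the "/" back           -> "172.30.12.0/24"
echo "$cidr"         # prints 172.30.12.0/24
```

This uses bash parameter expansion only, so it works in a pipeline after `etcdctl ls` without extra tools.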

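The leader-election annotation shown in sections 3.5 and 3.6 is a JSON blob, and the node name is the part of holderIdentity before the underscore. A rough sketch of extracting it with sed, using the annotation value copied from the kube-controller-manager output above (in practice `kubectl -o jsonpath` or jq would be more robust):

```shell
# Annotation value copied from the kube-controller-manager endpoint above.
ann='{"holderIdentity":"kube12_753e65bf-1e65-11ea-b9c4-000c293dd01c","leaseDurationSeconds":15,"acquireTime":"2019-12-14T11:32:49Z","renewTime":"2019-12-14T12:43:20Z","leaderTransitions":0}'

# Pull out holderIdentity, then keep only the node name before the "_".
leader=$(printf '%s' "$ann" | sed -n 's/.*"holderIdentity":"\([^"]*\)".*/\1/p')
echo "holderIdentity: $leader"
echo "leader node:    ${leader%%_*}"
```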