Collecting K8s Logs with an EFK Logging Stack, Part 2: Container stdout Logs

6. Deploy a single-node Elasticsearch database

6.1 Create a dedicated namespace for EFK

```shell
[root@master-1 es-single-node]# kubectl create ns ops
namespace/ops created
```

6.2 Create elasticsearch.yaml

```shell
cat > elasticsearch.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: ops
  labels:
    k8s-app: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      containers:
      - image: elasticsearch:7.9.2
        name: elasticsearch
        resources:
          limits:
            cpu: 2
            memory: 3Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: "discovery.type"
          value: "single-node"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx2g"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: elasticsearch-data
        persistentVolumeClaim:
          claimName: es-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es-pvc
  namespace: ops
spec:
  # Name of the dynamic-provisioning StorageClass
  storageClassName: "elastic-nfs-client"
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: ops
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: 9200
  selector:
    k8s-app: elasticsearch
EOF
```

```shell
[root@master-1 es-single-node]# kubectl apply -f elasticsearch.yaml
deployment.apps/elasticsearch created
persistentvolumeclaim/es-pvc created
service/elasticsearch created
```

6.3 Check the Elasticsearch pod and service status

```shell
[root@master-1 es-single-node]# kubectl get pod -n ops -l k8s-app=elasticsearch
NAME                            READY   STATUS    RESTARTS   AGE
elasticsearch-97f7d74f5-qr6d4   1/1     Running   0          2m41s

[root@master-1 es-single-node]# kubectl get service -n ops
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   10.0.0.126   <none>        9200/TCP   2m41s
```

7. Deploy Kibana for visualization

7.1 Create kibana.yaml

```shell
cat > kibana.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: ops
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
      - name: kibana
        image: kibana:7.9.2
        resources:
          limits:
            cpu: 2
            memory: 4Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: ELASTICSEARCH_HOSTS
          # Service name of Elasticsearch; remember to append the namespace (.ops)
          value: http://elasticsearch.ops:9200
        - name: I18N_LOCALE
          value: zh-CN
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: ops
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30601
  selector:
    k8s-app: kibana
EOF
```

```shell
[root@master-1 es-single-node]# kubectl apply -f kibana.yaml
deployment.apps/kibana created
service/kibana created
```

7.2 Check the Kibana pod and service status

```shell
[root@master-1 es-single-node]# kubectl get pod -n ops -l k8s-app=kibana
NAME                      READY   STATUS    RESTARTS   AGE
kibana-5c96d89b65-zgphp   1/1     Running   0          7m

[root@master-1 es-single-node]# kubectl get service -n ops
NAME     TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kibana   NodePort   10.0.0.164   <none>        5601:30601/TCP   7m
```

7.3 Open the Kibana dashboard

Open Kibana in a browser at http://nodeIP:30601 (the IP of any node, via the NodePort service above).
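Before creating index patterns in Kibana, it is worth confirming that Elasticsearch is actually reachable through its in-cluster service name. A quick check from a throwaway pod (the pod name `es-check` and the `curlimages/curl` image are illustrative choices, not part of the original setup):

```shell
# Query the Elasticsearch cluster-health API from inside the cluster,
# using the same service address Kibana was configured with.
kubectl run es-check -n ops --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s "http://elasticsearch.ops:9200/_cluster/health?pretty"
```

A healthy single-node deployment typically reports `"status" : "green"` or `"yellow"`; a connection error here means Kibana will not be able to reach Elasticsearch either.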

8. Log collection

8.1 Collecting container stdout logs
General approach:
- Deploy Filebeat as a DaemonSet so that one log-collection Pod runs on every Node, and mount the host's /var/lib/docker/containers directory into the Filebeat container via a hostPath volume. The files under /var/lib/docker/containers are the stdout logs of every container on that node.
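Each file under /var/lib/docker/containers/&lt;container-id&gt;/ uses Docker's json-file logging format: one JSON object per line with `log`, `stream`, and `time` fields, which is what Filebeat's docker input consumes. A minimal sketch of one such line (the sample message is invented for illustration; Filebeat does the real JSON parsing and attaches Kubernetes metadata):

```shell
# One line of a Docker json-file log, as Filebeat reads it from the host path.
line='{"log":"hello from app\n","stream":"stdout","time":"2021-01-01T00:00:00.000000000Z"}'

# Pull out the raw message with POSIX parameter expansion (illustration only).
msg=${line#*\"log\":\"}   # drop everything up to and including "log":"
msg=${msg%%\\n*}          # drop the trailing \n escape and the rest of the record
echo "$msg"               # -> hello from app
```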
8.1.1 Create filebeat-kubernetes.yaml

Note the quoted heredoc delimiter ('EOF'): filebeat.yml contains `${path.config}`, which an unquoted heredoc would let the shell expand.

```shell
cat > filebeat-kubernetes.yaml << 'EOF'
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: ops
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      inputs:
        # Mounted `filebeat-inputs` configmap:
        path: ${path.config}/inputs.d/*.yml
        # Reload inputs configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    output.elasticsearch:
      hosts: ['elasticsearch.ops:9200']
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-inputs
  namespace: ops
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: ops
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:7.9.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: inputs
          mountPath: /usr/share/filebeat/inputs.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: inputs
        configMap:
          defaultMode: 0600
          name: filebeat-inputs
      # The data folder stores a registry of read status for all files, so we
      # don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: ops
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: ops
  labels:
    k8s-app: filebeat
EOF
```

```shell
[root@master-1 es-single-node]# kubectl apply -f filebeat-kubernetes.yaml
configmap/filebeat-config created
configmap/filebeat-inputs created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created
```
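Once the DaemonSet pods are Running, Filebeat should begin writing indices to Elasticsearch (by default named like `filebeat-7.9.2-<date>` for this version). A quick way to confirm, assuming the elasticsearch:7.9.2 image's bundled curl is available:

```shell
# List the indices in Elasticsearch from inside the elasticsearch pod.
kubectl -n ops exec deploy/elasticsearch -- \
  curl -s "http://localhost:9200/_cat/indices?v"
```

If a `filebeat-*` index appears, create a matching index pattern in Kibana (Stack Management > Index Patterns) to browse the collected container stdout logs.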

