# Kubernetes Resource Configuration
## Overview

Kubernetes resource configuration is at the core of managing containerized applications: configuration files in YAML or JSON format define an application's desired state. Well-chosen resource settings help ensure application stability, performance, and efficient resource utilization.
## Resource Configuration Basics

### 1. Resource Requests and Limits

Kubernetes manages a Pod's resource usage through resource requests (`requests`) and limits (`limits`).

#### CPU Resource Configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
spec:
  containers:
  - name: cpu-demo-ctr
    image: vish/stress
    resources:
      requests:
        cpu: "0.5"   # request 0.5 CPU cores
      limits:
        cpu: "1"     # allow at most 1 CPU core
    args:
    - -cpus
    - "2"            # the stress tool tries to use 2 CPUs, so the 1-CPU limit throttles it
```
#### Memory Resource Configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "50Mi"   # request 50Mi of memory
      limits:
        memory: "100Mi"  # allow at most 100Mi of memory
    command: ["stress"]
    # stress allocates 200M, which exceeds the 100Mi limit,
    # so the container will be OOM-killed
    args: ["--vm", "1", "--vm-bytes", "200M", "--vm-hang", "1"]
```
#### Resource Units

CPU units:

- 1 CPU = one physical core or one virtual core
- Fractional values and millicores (m) are supported: 0.1 CPU = 100m CPU

Memory units:

- E, P, T, G, M, K (decimal, powers of 1000)
- Ei, Pi, Ti, Gi, Mi, Ki (binary, powers of 1024)
- For example: 1000M = 1G, while 1024Mi = 1Gi

The snippet after this list shows a few equivalent spellings of the same quantities.
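A minimal, illustrative fragment (not a complete manifest; it belongs inside a container's spec):

```yaml
resources:
  requests:
    cpu: 500m       # identical to cpu: "0.5"
    memory: 123Mi   # 123 * 1024 * 1024 = 128974848 bytes, roughly 129M in decimal units
```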
### 2. Resource Quotas (ResourceQuota)

A ResourceQuota caps the total resource consumption within a namespace.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4
```
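A quick usage sketch, assuming the manifest above is saved as `quota.yaml` and applied to a hypothetical namespace `team-a`:

```bash
kubectl create namespace team-a                        # hypothetical namespace
kubectl apply -f quota.yaml -n team-a                  # quota.yaml is an assumed file name
kubectl describe resourcequota mem-cpu-demo -n team-a  # compares used totals against the hard limits
```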
### 3. LimitRange

A LimitRange sets default resource requests and limits for Pods and containers in a namespace.
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
```
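With this LimitRange in place, a container created without an explicit `resources` section receives the defaults above. You can confirm this after creating such a Pod (the Pod name below is a placeholder):

```bash
kubectl describe limitrange mem-limit-range   # shows the configured defaults
# The injected values (request 256Mi, limit 512Mi) should appear in the Pod spec:
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resources}'
```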
## Workload Resource Configuration

### 1. Deployment Resource Configuration
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
```
### 2. StatefulSet Resource Configuration
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```
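Note that `serviceName: "nginx"` refers to a headless Service that must exist so the StatefulSet's Pods get stable network identities. A matching definition, following the standard pattern for this example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None   # headless: no virtual IP; DNS resolves directly to the Pods
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
```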
### 3. DaemonSet Resource Configuration
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```
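The toleration above targets the legacy `node-role.kubernetes.io/master` taint. On clusters from roughly Kubernetes 1.24 onward, control-plane nodes carry the `node-role.kubernetes.io/control-plane` taint instead, so a DaemonSet that should also run there typically tolerates both (a sketch, assuming the default control-plane taints):

```yaml
tolerations:
- key: node-role.kubernetes.io/master         # legacy taint on older clusters
  effect: NoSchedule
- key: node-role.kubernetes.io/control-plane  # current taint name
  effect: NoSchedule
```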
## Storage Resource Configuration

### 1. PersistentVolume Configuration
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
```
### 2. PersistentVolumeClaim Configuration
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
```
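Once the claim is bound, a Pod consumes it by referencing the claim name (a minimal sketch; the Pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod              # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: storage
      mountPath: /usr/share/nginx/html
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: pv-claim   # the PVC defined above
```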
### 3. StorageClass Configuration
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
```
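A claim opts into dynamic provisioning with this class simply by naming it (a minimal sketch; the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim          # illustrative name
spec:
  storageClassName: fast    # the StorageClass defined above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```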
## Network Resource Configuration

### 1. Service Configuration
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
  type: ClusterIP
```
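After creating the Service, you can verify that the selector actually matches running Pods by checking its endpoints:

```bash
kubectl get service my-service     # shows the ClusterIP and exposed port
kubectl get endpoints my-service   # lists the Pod IPs selected by app: MyApp
```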
### 2. Ingress Configuration
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
```
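An Ingress only takes effect when an ingress controller (for example ingress-nginx, which the annotation above targets) is running in the cluster. To inspect the resource after applying it:

```bash
kubectl get ingress minimal-ingress        # ADDRESS stays empty until a controller picks it up
kubectl describe ingress minimal-ingress   # shows the rules and backend service
```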
### 3. NetworkPolicy Configuration
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
```
## Configuration and Secret Management

### 1. ConfigMap Configuration
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
```
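Pods can consume these entries as environment variables or as mounted files (a minimal sketch; the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod        # illustrative name
spec:
  containers:
  - name: demo
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    env:
    - name: PLAYER_INITIAL_LIVES  # injected from the property-like key
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: player_initial_lives
    volumeMounts:
    - name: config
      mountPath: /config
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: game-demo             # file-like keys appear as files under /config
```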
### 2. Secret Configuration
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=          # base64 of "admin"
  password: MWYyZDFlMmU2N2Rm  # base64 of "1f2d1e2e67df"
```
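Values under `data` must be base64-encoded; `kubectl` can build the same Secret and handle the encoding for you:

```bash
echo -n 'admin' | base64   # prints YWRtaW4=
kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=1f2d1e2e67df
```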
## Resource Management Best Practices

### 1. Setting Resource Requests and Limits
```yaml
# Set resource requests and limits deliberately for every container
apiVersion: v1
kind: Pod
metadata:
  name: best-practice-pod
spec:
  containers:
  - name: app
    image: myapp:v1.0
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```
### 2. Health Check Configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-readiness-pod
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
```
### 3. Security Context Configuration
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
```
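You can verify the effect of the Pod-level settings by running `id` inside the container; the output should show uid 1000 and gid 3000 as configured:

```bash
kubectl exec security-context-demo -- id
```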
## Resource Monitoring and Optimization

### 1. Monitoring Resource Usage
```bash
# Show node resource usage
kubectl top nodes
# Show Pod resource usage
kubectl top pods
# Show Pod resource usage in a specific namespace
kubectl top pods -n namespace-name
```
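Note that `kubectl top` depends on the metrics-server add-on being installed in the cluster. The output can also be sorted to surface the heaviest consumers:

```bash
kubectl top pods --sort-by=cpu      # highest CPU consumers first
kubectl top pods --sort-by=memory   # highest memory consumers first
```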
### 2. Resource Optimization Strategies
```yaml
# Use a HorizontalPodAutoscaler for automatic scaling
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```
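The same autoscaler can be created imperatively, which is handy for experiments:

```bash
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
kubectl get hpa php-apache   # shows current vs. target utilization and replica count
```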
### 3. Vertical Pod Autoscaling
```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
```
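Unlike the HPA, the VerticalPodAutoscaler is not part of core Kubernetes: its CRDs and controllers must be installed separately (from the kubernetes/autoscaler project) before this manifest can be applied. Once it is running:

```bash
kubectl describe vpa my-app-vpa   # shows recommended CPU/memory per container
```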
## Troubleshooting Resource Issues

### 1. Insufficient Resources
```bash
# Inspect Pod status and scheduling events
kubectl describe pod pod-name
# List recent cluster events
kubectl get events
# Check the namespace's resource quota
kubectl describe resourcequota -n namespace-name
```
### 2. Analyzing Resource Usage

```bash
# Show detailed node capacity and allocation
kubectl describe nodes
# Show a Pod's resource requests and limits
kubectl describe pod pod-name | grep -A 10 "Resources:"
```
### 3. Performance Tuning

```bash
# Analyze per-container resource usage patterns
kubectl top pods --containers
# Adjust a resource quota in place
kubectl patch resourcequota mem-cpu-demo --patch '{"spec":{"hard":{"requests.cpu":"2"}}}'
```
## Common Resource Configuration Commands

| Command | Description |
| --- | --- |
| `kubectl apply -f file.yaml` | Apply a resource configuration |
| `kubectl get pods` | List Pods |
| `kubectl describe pod pod-name` | Show detailed Pod information |
| `kubectl top nodes` | Show node resource usage |
| `kubectl top pods` | Show Pod resource usage |
| `kubectl edit deployment deployment-name` | Edit a Deployment in place |
| `kubectl scale deployment deployment-name --replicas=5` | Scale a Deployment up or down |
## Summary

Resource configuration is a core skill for running containerized applications on Kubernetes. Well-chosen settings keep applications stable, performant, and efficient in their resource use. In practice, set resource requests and limits according to actual business needs and available capacity, configure health checks, apply security policies, and monitor and tune resource usage regularly.