Deploying an External OpenStack Cloud Provider with Kubeadm
This document describes how to install a single control-plane Kubernetes v1.15 cluster on CentOS using kubeadm, and then deploy the external OpenStack cloud provider and the Cinder CSI plugin so that Cinder volumes can be used as persistent volumes in Kubernetes.
Preparation in OpenStack
This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first.
- A project/tenant for this Kubernetes cluster
- A user in this project for Kubernetes, to query node information, attach volumes, and so on
- A private network and subnet
- A router for this private network, connected to the public network to get floating IPs
- A security group for all the Kubernetes VMs
- One VM as the control-plane node and a few VMs as worker nodes
The security group will have the following rules to open the ports used by Kubernetes.
Control-Plane Node
Protocol | Port Number | Description |
---|---|---|
TCP | 6443 | Kubernetes API server |
TCP | 2379-2380 | etcd server client API |
TCP | 10250 | Kubelet API |
TCP | 10251 | kube-scheduler |
TCP | 10252 | kube-controller-manager |
TCP | 10255 | Read-only Kubelet API |
Worker Nodes
Protocol | Port Number | Description |
---|---|---|
TCP | 10250 | Kubelet API |
TCP | 10255 | Read-only Kubelet API |
TCP | 30000-32767 | NodePort services |
CNI Ports on Both Control-Plane and Worker Nodes
Protocol | Port Number | Description |
---|---|---|
TCP | 179 | Calico BGP network |
TCP | 9099 | Calico felix (health check) |
UDP | 8285 | Flannel |
UDP | 8472 | Flannel |
TCP | 6781-6784 | Weave Net |
UDP | 6783-6784 | Weave Net |
The CNI-specific ports only need to be opened when that particular CNI plugin is used. In this guide we use Weave Net, so only the Weave Net ports (TCP 6781-6784 and UDP 6783-6784) need to be opened in the security group.
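Applying the control-plane port table to the security group can be scripted. The sketch below only generates the `openstack security group rule create` commands (dry run); the group name `k8s-sec-group` is an assumption. With OpenStack credentials loaded, pipe the output to `sh` to actually create the rules.

```shell
# Generate the security group rule commands for the control-plane ports
# listed in the table above. SEC_GROUP is an assumed group name.
SEC_GROUP=k8s-sec-group
cmds=""
for rule in tcp:6443 tcp:2379-2380 tcp:10250 tcp:10251 tcp:10252 tcp:10255; do
  proto=${rule%%:*}       # protocol (before the first colon)
  range=${rule#*:}        # port or port range (after the first colon)
  min=${range%%-*}        # lower bound (whole value if no dash)
  max=${range##*-}        # upper bound (whole value if no dash)
  cmds="$cmds
openstack security group rule create --protocol $proto --dst-port $min:$max $SEC_GROUP"
done
echo "$cmds"
```

The same loop works for the worker-node and CNI tables by swapping the rule list.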
The control-plane node needs at least 2 cores and 4GB RAM. After a VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to /etc/hosts.
For example, if the VM is called master1 and has an internal IP 192.168.1.4, add that to /etc/hosts and set the hostname to master1:
echo "192.168.1.4 master1" >> /etc/hosts
hostnamectl set-hostname master1
Install Docker and Kubernetes
Next, we'll install Docker and Kubernetes using kubeadm, following the official documents.
Note that it is a Kubernetes best practice to use systemd as the cgroup driver. If you use an internal container registry, add it to the Docker config.
# Install Docker CE
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2
### Add Docker repository.
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce
## Create /etc/docker directory.
mkdir /etc/docker
# Configure the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux in permissive mode (effectively disabling it)
# Caveat: In a production environment you may not want to disable SELinux, please refer to Kubernetes documents about SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# check if br_netfilter module is loaded
lsmod | grep br_netfilter
# if not, load it explicitly with
modprobe br_netfilter
For the official document on how to create a single control-plane cluster, see the Creating a single control-plane cluster with kubeadm document.
We'll mostly follow that document, but also add extra things for the cloud provider.
To make things more clear, we'll use a kubeadm-config.yml for the control-plane node.
In this config we specify the external OpenStack cloud provider and where to find its configuration.
We also enable the storage API in the API server's runtime config, so we can use OpenStack volumes as persistent volumes in Kubernetes.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
cloud-provider: "external"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "v1.15.1"
apiServer:
extraArgs:
enable-admission-plugins: NodeRestriction
runtime-config: "storage.k8s.io/v1=true"
controllerManager:
extraArgs:
external-cloud-volume-plugin: openstack
extraVolumes:
- name: "cloud-config"
hostPath: "/etc/kubernetes/cloud-config"
mountPath: "/etc/kubernetes/cloud-config"
readOnly: true
pathType: File
networking:
serviceSubnet: "10.96.0.0/12"
podSubnet: "10.224.0.0/16"
dnsDomain: "cluster.local"
Now we'll create the cloud config, /etc/kubernetes/cloud-config, for OpenStack.
Note that the tenant here is the one we created for all the Kubernetes VMs in the beginning.
All the VMs should be launched in this project/tenant.
In addition, you need to create a user in this tenant for Kubernetes to do the queries.
The ca-file is the CA root certificate for the OpenStack API endpoint, for example https://openstack.cloud:5000/v3.
At the time of writing, the cloud provider doesn't allow insecure connections (skipping the CA check).
[Global]
region=RegionOne
username=username
password=password
auth-url=https://openstack.cloud:5000/v3
tenant-id=14ba698c0aec4fd6b7dc8c310f664009
domain-id=default
ca-file=/etc/kubernetes/ca.pem
[LoadBalancer]
subnet-id=b4a9a292-ea48-4125-9fb2-8be2628cb7a1
floating-network-id=bc8a590a-5d65-4525-98f3-f7ef29c727d5
[BlockStorage]
bs-version=v2
[Networking]
public-network-name=public
ipv6-support-disabled=false
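The [LoadBalancer] section above is what lets the cloud provider create an OpenStack load balancer, with a floating IP from the configured network, whenever a Service of type LoadBalancer is created. A minimal sketch (the Service name and pod label are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb        # assumed name
spec:
  type: LoadBalancer  # handled by the OpenStack cloud provider
  selector:
    app: web          # assumed pod label
  ports:
  - port: 80
    targetPort: 80
```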
The next step is to run kubeadm to bootstrap the control-plane node:
kubeadm init --config=kubeadm-config.yml
After the initialization is done, copy the admin config to .kube:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
At this stage, the control-plane node is created but not ready. All the nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and are waiting to be initialized by the cloud-controller-manager.
# kubectl describe no master1
Name: master1
Roles: master
......
Taints: node-role.kubernetes.io/master:NoSchedule
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
......
Now, deploy the OpenStack cloud controller manager into the cluster, following using controller manager with kubeadm.
Create a secret with the cloud-config for the openstack cloud provider:
kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml
kubectl apply -f cloud-config-secret.yaml
Get the CA certificate for the OpenStack API endpoints and put it into /etc/kubernetes/ca.pem.
Create the RBAC resources:
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml
We'll run the OpenStack cloud controller manager as a DaemonSet rather than a plain pod.
The manager only runs on the control-plane nodes, so if there are multiple control-plane nodes, multiple pods will run for high availability.
Create openstack-cloud-controller-manager-ds.yaml containing the following manifests, then apply it:
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: cloud-controller-manager
namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: openstack-cloud-controller-manager
namespace: kube-system
labels:
k8s-app: openstack-cloud-controller-manager
spec:
selector:
matchLabels:
k8s-app: openstack-cloud-controller-manager
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
k8s-app: openstack-cloud-controller-manager
spec:
nodeSelector:
node-role.kubernetes.io/master: ""
securityContext:
runAsUser: 1001
tolerations:
- key: node.cloudprovider.kubernetes.io/uninitialized
value: "true"
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
- effect: NoSchedule
key: node.kubernetes.io/not-ready
serviceAccountName: cloud-controller-manager
containers:
- name: openstack-cloud-controller-manager
image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0
args:
- /bin/openstack-cloud-controller-manager
- --v=1
- --cloud-config=$(CLOUD_CONFIG)
- --cloud-provider=openstack
- --use-service-account-credentials=true
- --address=127.0.0.1
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/config
name: cloud-config-volume
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/kubernetes
name: ca-cert
readOnly: true
resources:
requests:
cpu: 200m
env:
- name: CLOUD_CONFIG
value: /etc/config/cloud.conf
hostNetwork: true
volumes:
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- name: cloud-config-volume
secret:
secretName: cloud-config
- name: ca-cert
secret:
secretName: openstack-ca-cert
Once the controller manager is running, it queries OpenStack for information about the nodes and removes the taints. In the node info you'll see the UUID of the VM in OpenStack.
# kubectl describe no master1
Name: master1
Roles: master
......
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
......
Message: docker: network plugin is not ready: cni config uninitialized
......
PodCIDR: 10.224.0.0/24
ProviderID: openstack:///548e3c46-2477-4ce2-968b-3de1314560a5
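The ProviderID is what links the Kubernetes node to the OpenStack server: the Nova VM UUID follows the openstack:/// prefix. A small sketch of extracting it, using the sample value from the output above:

```shell
# Extract the Nova VM UUID from a node's ProviderID. The sample value comes
# from the describe output above; on a live cluster read it with:
#   kubectl get node master1 -o jsonpath='{.spec.providerID}'
provider_id="openstack:///548e3c46-2477-4ce2-968b-3de1314560a5"
vm_uuid="${provider_id##*/}"   # strip everything up to the last slash
echo "$vm_uuid"
# Cross-check against Nova with: openstack server show "$vm_uuid"
```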
Now install your favourite CNI and the control-plane node will become ready.
For example, to install Weave Net, run this command:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Next we'll set up the worker nodes.
Firstly, install docker and kubeadm in the same way as on the control-plane node. To join them to the cluster we need the token and ca cert hash from the output of the control-plane installation. If the token has expired or was lost, we can re-create it with these commands:
# check if token is expired
kubeadm token list
# re-create token and show join command
kubeadm token create --print-join-command
Create kubeadm-config.yml for the worker nodes with the above token and ca cert hash:
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
bootstrapToken:
apiServerEndpoint: 192.168.1.7:6443
token: 0c0z4p.dnafh6vnmouus569
caCertHashes: ["sha256:fcb3e956a6880c05fc9d09714424b827f57a6fdc8afc44497180905946527adf"]
kind: JoinConfiguration
nodeRegistration:
kubeletExtraArgs:
cloud-provider: "external"
apiServerEndpoint is the control-plane node; the token and caCertHashes can be taken from the join command printed in the output of the "kubeadm token create" command above.
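The caCertHashes value can also be recomputed from the CA certificate itself: kubeadm's hash is the SHA-256 digest of the certificate's DER-encoded public key. A sketch, assuming the default kubeadm PKI path on the control-plane node:

```shell
# Recompute kubeadm's discovery CA cert hash: the SHA-256 of the
# DER-encoded public key (SubjectPublicKeyInfo) of the CA certificate.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" -noout \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print "sha256:" $NF}'
}
# On the control-plane node:
#   ca_cert_hash /etc/kubernetes/pki/ca.crt
```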
Run kubeadm and the worker nodes will join the cluster:
kubeadm join --config kubeadm-config.yml
At this stage we have a Kubernetes cluster with an external OpenStack cloud provider. The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs. When Kubernetes wants to attach a persistent volume to a pod, it finds out from this mapping which OpenStack VM the pod is running on, and attaches the underlying OpenStack volume to that VM.
Deploy Cinder CSI
The integration with Cinder is provided by the external Cinder CSI plugin, as described in the Cinder CSI documentation.
We'll perform the following steps to install the Cinder CSI plugin. First, create a secret with the CA certificate for OpenStack's API endpoints. This is the same certificate file we used for the cloud provider above:
kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml
kubectl apply -f openstack-ca-cert.yaml
Then create the RBAC resources:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
The Cinder CSI plugin includes a controller plugin and a node plugin.
The controller communicates with the Kubernetes API and the Cinder API to create/attach/detach/delete Cinder volumes. The node plugin, in turn, runs on each worker node to bind a storage device (an attached volume) into a pod, and unbind it during deletion.
Create cinder-csi-controllerplugin.yaml and apply it to create the csi controller:
kind: Service
apiVersion: v1
metadata:
name: csi-cinder-controller-service
namespace: kube-system
labels:
app: csi-cinder-controllerplugin
spec:
selector:
app: csi-cinder-controllerplugin
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: csi-cinder-controllerplugin
namespace: kube-system
spec:
serviceName: "csi-cinder-controller-service"
replicas: 1
selector:
matchLabels:
app: csi-cinder-controllerplugin
template:
metadata:
labels:
app: csi-cinder-controllerplugin
spec:
serviceAccount: csi-cinder-controller-sa
containers:
- name: csi-attacher
image: quay.io/k8scsi/csi-attacher:v1.0.1
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-provisioner
image: quay.io/k8scsi/csi-provisioner:v1.0.1
args:
- "--provisioner=csi-cinderplugin"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-snapshotter
image: quay.io/k8scsi/csi-snapshotter:v1.0.1
args:
- "--connection-timeout=15s"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
imagePullPolicy: Always
volumeMounts:
- mountPath: /var/lib/csi/sockets/pluginproxy/
name: socket-dir
- name: cinder-csi-plugin
image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
args:
- /bin/cinder-csi-plugin
- "--v=5"
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
- "--cloud-config=$(CLOUD_CONFIG)"
- "--cluster=$(CLUSTER_NAME)"
env:
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix://csi/csi.sock
- name: CLOUD_CONFIG
value: /etc/config/cloud.conf
- name: CLUSTER_NAME
value: kubernetes
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: secret-cinderplugin
mountPath: /etc/config
readOnly: true
- mountPath: /etc/kubernetes
name: ca-cert
readOnly: true
volumes:
- name: socket-dir
hostPath:
path: /var/lib/csi/sockets/pluginproxy/
type: DirectoryOrCreate
- name: secret-cinderplugin
secret:
secretName: cloud-config
- name: ca-cert
secret:
secretName: openstack-ca-cert
Create cinder-csi-nodeplugin.yaml and apply it to create the csi node:
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-cinder-nodeplugin
namespace: kube-system
spec:
selector:
matchLabels:
app: csi-cinder-nodeplugin
template:
metadata:
labels:
app: csi-cinder-nodeplugin
spec:
serviceAccount: csi-cinder-node-sa
hostNetwork: true
containers:
- name: node-driver-registrar
image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
- "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"]
env:
- name: ADDRESS
value: /csi/csi.sock
- name: DRIVER_REG_SOCK_PATH
value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
- name: cinder-csi-plugin
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
args:
- /bin/cinder-csi-plugin
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
- "--cloud-config=$(CLOUD_CONFIG)"
env:
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix://csi/csi.sock
- name: CLOUD_CONFIG
value: /etc/config/cloud.conf
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: pods-mount-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: "Bidirectional"
- name: kubelet-dir
mountPath: /var/lib/kubelet
mountPropagation: "Bidirectional"
- name: pods-cloud-data
mountPath: /var/lib/cloud/data
readOnly: true
- name: pods-probe-dir
mountPath: /dev
mountPropagation: "HostToContainer"
- name: secret-cinderplugin
mountPath: /etc/config
readOnly: true
- mountPath: /etc/kubernetes
name: ca-cert
readOnly: true
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/cinder.csi.openstack.org
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry/
type: Directory
- name: kubelet-dir
hostPath:
path: /var/lib/kubelet
type: Directory
- name: pods-mount-dir
hostPath:
path: /var/lib/kubelet/pods
type: Directory
- name: pods-cloud-data
hostPath:
path: /var/lib/cloud/data
type: Directory
- name: pods-probe-dir
hostPath:
path: /dev
type: Directory
- name: secret-cinderplugin
secret:
secretName: cloud-config
- name: ca-cert
secret:
secretName: openstack-ca-cert
When both of them are running, create a storage class for Cinder:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-sc-cinderplugin
provisioner: csi-cinderplugin
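The storage class above relies on Cinder defaults. The cinder-csi provisioner also accepts parameters such as the volume type and availability zone; a sketch, where the type "ssd" is an assumption and must exist in your Cinder deployment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin-ssd
provisioner: csi-cinderplugin
parameters:
  type: ssd          # Cinder volume type (see: openstack volume type list)
  availability: nova # Cinder availability zone
```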
Then we can create a PVC with this storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myvol
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: csi-sc-cinderplugin
When the PVC is created, a Cinder volume is created correspondingly:
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myvol Bound pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad 1Gi RWO csi-sc-cinderplugin 3s
In OpenStack, the volume name matches the name of the generated Kubernetes persistent volume. In this example it would be: pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad
Now we can create a pod with the PVC:
apiVersion: v1
kind: Pod
metadata:
name: web
spec:
containers:
- name: web
image: nginx
ports:
- name: web
containerPort: 80
hostPort: 8081
protocol: TCP
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: myvol
When the pod is running, the volume is attached to the pod. If we go back to OpenStack, we can see that the Cinder volume is mounted to the worker node where the pod runs.
# openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f
+--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments | [{u'server_id': u'1c5e1439-edfa-40ed-91fe-2a0e12bc7eb4', u'attachment_id': u'11a15b30-5c24-41d4-86d9-d92823983a32', u'attached_at': u'2019-07-24T05:02:34.000000', u'host_name': u'compute-6', u'volume_id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f', u'device': u'/dev/vdb', u'id': u'6b5f3296-b0eb-40cd-bd4f-2067a0d6287f'}] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2019-07-24T05:02:18.000000 |
| description | Created by OpenStack Cinder CSI driver |
| encrypted | False |
| id | 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f |
| migration_status | None |
| multiattach | False |
| name | pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad |
| os-vol-host-attr:host | rbd:volumes@rbd#rbd |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 14ba698c0aec4fd6b7dc8c310f664009 |
| properties | attached_mode='rw', cinder.csi.openstack.org/cluster='kubernetes' |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | in-use |
| type | rbd |
| updated_at | 2019-07-24T05:02:35.000000 |
| user_id | 5f6a7a06f4e3456c890130d56babf591 |
+--------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Summary
In this walk-through, we deployed a Kubernetes cluster on OpenStack VMs and integrated it with OpenStack using the external OpenStack cloud provider. Then, on this Kubernetes cluster, we deployed the Cinder CSI plugin, which can create Cinder volumes and expose them in Kubernetes as persistent volumes.