Installing KubeSphere

2020-10-23 14:34:00
admin

Steps to install KubeSphere:

1. Install Helm

First download helm-v2.16.3-linux-amd64.tar.gz, then extract it:

tar -zxvf helm-v2.16.3-linux-amd64.tar.gz

Then move the helm binary to a directory on your PATH:

cp linux-amd64/helm  /usr/local/bin/
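The download, extract, and copy steps above can be collected into one small script. This is a minimal sketch, assuming Linux amd64 and the official get.helm.sh release host; the helper names are just for illustration:

```shell
#!/usr/bin/env bash
# Sketch of the Helm v2 client install steps above (Linux amd64 assumed).
set -e

# Build the official release tarball URL for a given Helm version.
helm_tarball_url() {
  local version="$1"   # e.g. v2.16.3
  echo "https://get.helm.sh/helm-${version}-linux-amd64.tar.gz"
}

# Download, extract, and place the helm binary on the PATH.
install_helm() {
  local version="$1"
  curl -fsSL -o "helm-${version}-linux-amd64.tar.gz" "$(helm_tarball_url "$version")"
  tar -zxvf "helm-${version}-linux-amd64.tar.gz"
  cp linux-amd64/helm /usr/local/bin/
}

# install_helm v2.16.3
```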

2. Install Tiller

First set up RBAC permissions for Tiller: create a file named helm-rbac.yaml with the following content:


apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Then apply it:

kubectl apply -f helm-rbac.yaml

Next, initialize Tiller:

helm init --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.16.3   --history-max 300

This may fail with a message like:

Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version

If that happens (gcr.io is often unreachable from China), initialize again using the Aliyun mirror image:

helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.3 --skip-refresh

Once the installation succeeds, helm version should report both the client and the server (Tiller) at v2.16.3.

3. Install OpenEBS to provide the LocalPV storage class

Check the node names:

kubectl get node -o wide


4. Check whether the master node has a taint. Here the master node (k8s-node1) is tainted:

kubectl describe node k8s-node1 | grep Taint


5. Remove the taint from the master node:

kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
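The trailing "-" is what removes the taint; the same key:effect expression without it re-adds the taint (as step 9 does). A small sketch making that symmetry explicit; the function name is hypothetical, introduced only for illustration:

```shell
#!/usr/bin/env bash
# Build the kubectl taint expression for the master control-plane taint.
#   "add"    -> node-role.kubernetes.io/master=:NoSchedule   (apply taint)
#   "remove" -> node-role.kubernetes.io/master:NoSchedule-   (trailing "-" removes it)
master_taint_expr() {
  local action="$1"
  if [ "$action" = "add" ]; then
    echo "node-role.kubernetes.io/master=:NoSchedule"
  else
    echo "node-role.kubernetes.io/master:NoSchedule-"
  fi
}

# Example: kubectl taint nodes k8s-node1 "$(master_taint_expr remove)"
```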

6. Create the OpenEBS namespace; all OpenEBS resources will be created under it:

kubectl create ns openebs

If the install in the next step fails with:

Error: failed to download "stable/openebs" (hint: running `helm repo update` may help)

add the Azure China mirror of the stable chart repository first:

helm repo add stable http://mirror.azure.cn/kubernetes/charts/


7. Install OpenEBS with Helm:

helm install --namespace openebs --name openebs stable/openebs --version 1.5.0

Check the pods and the storage classes:

kubectl get pods --all-namespaces


kubectl get sc

Installation may take five or six minutes. Once it succeeds, kubectl get sc lists the OpenEBS storage classes, including openebs-hostpath.

8. Set openebs-hostpath as the default StorageClass:

kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
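The patch only sets the standard storageclass.kubernetes.io/is-default-class annotation; running the same command with "false" demotes the class again. A sketch that builds the patch body (the helper name is hypothetical):

```shell
#!/usr/bin/env bash
# Emit the JSON patch that marks a StorageClass as default (or not).
default_sc_patch() {
  local value="${1:-true}"   # "true" or "false"
  printf '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"%s"}}}' "$value"
}

# Example: kubectl patch storageclass openebs-hostpath -p "$(default_sc_patch true)"
```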

9. Re-apply the taint to the master node (when installing the latest KubeSphere, do not re-apply the taint):

kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule

10. Get the installer manifest from the following address:


https://github.com/kubesphere/ks-installer/blob/v2.1.1/kubesphere-minimal.yaml

vi kubesphere-minimal.yaml (paste the contents of the file above into it)

Apply it:

kubectl apply -f kubesphere-minimal.yaml

Monitor the installation progress:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Final step: uninstalling KubeSphere

kubectl delete -f kubesphere-minimal.yaml

This alone leaves resources behind. For a clean uninstall, create delete.sh with the following content:


#!/usr/bin/env bash

# set -x: Print commands and their arguments as they are executed.
# set -e: Exit immediately if a command exits with a non-zero status.

# set -xe

# delete ks-install
kubectl delete deploy ks-installer -n kubesphere-system

# delete helm releases (Helm 3 syntax; the Helm 2 equivalent is `helm delete --purge <release>`)
for namespaces in kubesphere-system kubesphere-devops-system kubesphere-monitoring-system kubesphere-logging-system istio-system kube-federation-system kube-system openpitrix-system
do
  helm list -n $namespaces | grep -v NAME | awk '{print $1}' | sort -u | xargs -r -L1 helm uninstall -n $namespaces
done

# delete kubesphere deployment
kubectl delete deployment -n kubesphere-system `kubectl get deployment -n kubesphere-system -o jsonpath="{.items[*].metadata.name}"`

# delete monitor statefulset
kubectl delete statefulset -n kubesphere-monitoring-system `kubectl get statefulset -n kubesphere-monitoring-system -o jsonpath="{.items[*].metadata.name}"`

# delete pvc
pvcs="kubesphere-system|openpitrix-system|kubesphere-monitoring-system|kubesphere-devops-system|kubesphere-logging-system"
kubectl --no-headers=true get pvc --all-namespaces -o custom-columns=:metadata.namespace,:metadata.name | grep -E $pvcs | xargs -n2 kubectl delete pvc -n


# delete rolebindings
delete_role_bindings() {
  for rolebinding in `kubectl -n $1 get rolebindings -l iam.kubesphere.io/user-ref -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete rolebinding $rolebinding
  done
}

# delete roles
delete_roles() {
  kubectl -n $1 delete role admin
  kubectl -n $1 delete role operator
  kubectl -n $1 delete role viewer
  for role in `kubectl -n $1 get roles -l iam.kubesphere.io/role-template -o jsonpath="{.items[*].metadata.name}"`
  do
    kubectl -n $1 delete role $role
  done
}

# remove useless labels and finalizers
for ns in `kubectl get ns -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl label ns $ns kubesphere.io/workspace-
  kubectl label ns $ns kubesphere.io/namespace-
  kubectl patch ns $ns -p '{"metadata":{"finalizers":null,"ownerReferences":null}}'
  delete_role_bindings $ns
  delete_roles $ns
done

# delete workspaces
for ws in `kubectl get workspaces -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch workspace $ws -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete workspaces --all

# delete clusters
for cluster in `kubectl get clusters -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch cluster $cluster -p '{"metadata":{"finalizers":null}}' --type=merge
done

# delete validatingwebhookconfigurations
for webhook in ks-events-admission-validate users.iam.kubesphere.io validating-webhook-configuration
do
  kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io $webhook
done

# delete mutatingwebhookconfigurations
for webhook in ks-events-admission-mutate logsidecar-injector-admission-mutate mutating-webhook-configuration
do
  kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io $webhook
done

# delete users
for user in `kubectl get users -o jsonpath="{.items[*].metadata.name}"`
do
  kubectl patch user $user -p '{"metadata":{"finalizers":null}}' --type=merge
done
kubectl delete users --all

# delete crds
for crd in `kubectl get crds -o jsonpath="{.items[*].metadata.name}"`
do
  if [[ $crd == *kubesphere.io ]]; then kubectl delete crd $crd; fi
done

# delete relevance ns
for ns in kubesphere-alerting-system kubesphere-controls-system kubesphere-devops-system kubesphere-logging-system kubesphere-monitoring-system openpitrix-system istio-system
do
  kubectl delete ns $ns
done
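Two of the less obvious lines in delete.sh can be exercised without a cluster: the PVC pipeline, where `-o custom-columns=:metadata.namespace,:metadata.name` prints a "namespace name" pair per PVC, grep -E keeps the KubeSphere-related namespaces, and xargs -n2 feeds each pair back to kubectl delete pvc -n; and the CRD filter, which is a bash glob (not a regex) matching names ending in kubesphere.io. A sketch with hypothetical sample data:

```shell
#!/usr/bin/env bash
# Simulated output of:
#   kubectl get pvc --all-namespaces -o custom-columns=:metadata.namespace,:metadata.name
sample="kubesphere-system data-redis-0
default my-app-data
kubesphere-monitoring-system prometheus-k8s-db"

# Same namespace filter as in delete.sh: keep only KubeSphere-related PVCs.
pvcs="kubesphere-system|openpitrix-system|kubesphere-monitoring-system|kubesphere-devops-system|kubesphere-logging-system"
matched=$(echo "$sample" | grep -E "$pvcs")
# Each remaining line is a "<namespace> <name>" pair; in the real script
# `xargs -n2 kubectl delete pvc -n` turns it into `kubectl delete pvc -n <ns> <name>`.

# Same suffix test as the CRD loop: a bash glob on the name's ending.
is_kubesphere_crd() {
  [[ "$1" == *kubesphere.io ]]
}
```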