
Building a Docker Cluster with Kubernetes

This article walks through, in some detail, how to build a Docker cluster on top of Kubernetes. The setup is practical, so it is shared here as a reference; hopefully you get something useful out of it.


1. Environment

Component versions:

OS: CentOS 7

Kubernetes: 0.17.1

etcd: 2.1.1

Docker: 1.6.2

Hosts:

etcd:    172.16.0.3

master:  172.16.0.2   kubernetes + docker

minion1: 172.16.0.4   kubernetes + docker

minion2: 172.16.0.5   kubernetes + docker
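If you prefer name-based access between the machines, you can add entries to /etc/hosts on every host. A sketch for this topology (the hostnames are illustrative; pick your own):

```text
172.16.0.3  etcd
172.16.0.2  master
172.16.0.4  minion1
172.16.0.5  minion2
```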

2. System setup

Update the yum repos (note the downloaded EPEL rpm must also be installed before updating):

# yum -y install wget ntpdate bind-utils

# wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/epel-release-7-2.noarch.rpm

# rpm -ivh epel-release-7-2.noarch.rpm

# yum update

Firewall setup (optional; adapt to your own environment)

Disable firewalld:

# systemctl stop firewalld.service    # stop firewalld now

# systemctl disable firewalld.service # keep it from starting at boot

Install iptables:

# yum install iptables-services       # install

# systemctl start iptables.service    # start the firewall so the rules take effect

# systemctl enable iptables.service   # start the firewall at boot
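If you keep iptables running, the ports this walkthrough relies on must be reachable between the hosts: 8080 (apiserver), 4001 and 2379 (etcd client ports), 10250 (kubelet), and 2375 (the Docker remote API configured in 3.5). A sketch of rules for /etc/sysconfig/iptables, inserted before the final REJECT rule (adjust to your own policy):

```text
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8080 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 4001 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2379 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 2375 -j ACCEPT
```

Reload with `systemctl restart iptables.service` after editing.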

2. Install and configure etcd

2.1 Install

# yum install etcd

2.2 Configure

[root@etcd ~]# grep -Ev "^#|^$" /etc/etcd/etcd.conf

ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"

ETCD_ADVERTISE_CLIENT_URLS="http://172.16.0.3:4001"

2.3 Start

[root@etcd ~]# systemctl start etcd.service

2.4 Verify

[root@etcd ~]# etcd -version

etcd Version: 2.0.13

Git SHA: 92e3895

Go Version: go1.4.2

Go OS/Arch: linux/amd64

# from the master, check connectivity to etcd

[root@master ~]# telnet 172.16.0.3 4001

2.5 Write the flannel network config into etcd

[root@etcd ~]# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'

{"Network":"172.17.0.0/16"}

[root@etcd ~]# etcdctl get /coreos.com/network/config

{"Network":"172.17.0.0/16"}
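The value stored under /coreos.com/network/config can carry more than the network CIDR. For example, flannel also honours an optional per-host subnet length and backend type (the values below are illustrative; udp is flannel's default backend):

```json
{
  "Network": "172.17.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "udp" }
}
```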

3. Install Kubernetes

On all servers:

# yum install kubernetes

To upgrade to a specific release build:

# mkdir -p /home/install && cd /home/install

# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.6.2/kubernetes.tar.gz

# tar -zxvf kubernetes.tar.gz

# tar -zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz

# cp kubernetes/server/bin/kube* /usr/bin

3.1 Configure Kubernetes on the master

The master runs three components: apiserver, scheduler, and controller-manager; the configuration below covers exactly those.

[/etc/kubernetes/config]

[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/config 

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow_privileged=false"

KUBE_MASTER="--master=http://172.16.0.2:8080"

[/etc/kubernetes/apiserver]

[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/apiserver

KUBE_API_ADDRESS="--address=0.0.0.0"

KUBE_API_PORT="--port=8080"

KUBELET_PORT="--kubelet_port=10250"

KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:4001"

KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"

KUBE_API_ARGS=""

[/etc/kubernetes/controller-manager]

[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/controller-manager

KUBELET_ADDRESSES="--machines=127.0.0.1,172.16.0.4,172.16.0.5"

KUBE_CONTROLLER_MANAGER_ARGS=""

[/etc/kubernetes/scheduler]

[root@master ~]# grep -Ev "^$|^#" /etc/kubernetes/scheduler 

KUBE_SCHEDULER_ARGS=""
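These files are plain environment files; the packaged systemd units read them and splice the variables into the daemon command line, which is why empty variables are harmless. A sketch of how the CentOS package's kube-apiserver.service consumes them (paths and variable list per that package; details may differ by version):

```text
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
    $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS \
    $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT \
    $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES \
    $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS
```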

3.2 Start the Kubernetes services on the master

# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service

# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service

3.3 Check the Kubernetes version

[root@master ~]# kubectl version

Client Version: version.Info{Major:"1", Minor:"0+", GitVersion:"v1.0.0-290-gb2dafdaef5acea", GitCommit:"b2dafdaef5aceafad503ab56254b60f80da9e980", GitTreeState:"clean"}

Server Version: version.Info{Major:"1", Minor:"0+", GitVersion:"v1.0.0-290-gb2dafdaef5acea", GitCommit:"b2dafdaef5aceafad503ab56254b60f80da9e980", GitTreeState:"clean"}

Possible error:

[root@master ~]# kubectl version

Client Version: version.Info{Major:"1", Minor:"0+", GitVersion:"v1.0.0-290-gb2dafdaef5acea", GitCommit:"b2dafdaef5aceafad503ab56254b60f80da9e980", GitTreeState:"clean"}

error: couldn't read version from server: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused

Fix:

The Kubernetes services on the master have to be configured and started first; see the configuration above.

3.4 Configure Kubernetes on the minions

Each minion runs two components, kubelet and kube-proxy; the corresponding configuration files are config and kubelet.

Docker also needs to be configured on the minions; see 3.5.

[/etc/kubernetes/config]

[root@minion1 ~]# grep -Ev "^$|^#" /etc/kubernetes/config 

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=0"

KUBE_ALLOW_PRIV="--allow_privileged=false"

KUBE_MASTER="--master=http://172.16.0.2:8080"

[root@localhost ~]# grep -Ev "^$|^#" /etc/kubernetes/kubelet 

KUBELET_ADDRESS="--address=0.0.0.0"

KUBELET_HOSTNAME="--hostname_override=172.16.0.4"

KUBELET_API_SERVER="--api_servers=http://172.16.0.2:8080"

KUBELET_ARGS=""
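The same files go on minion2; only the hostname override changes to that host's own address:

```text
KUBELET_HOSTNAME="--hostname_override=172.16.0.5"
```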

3.5 Configure Docker on the minions

Configure Docker so the daemon can be managed remotely:

[root@minion1 ~]# grep -Ev "^$|^#" /etc/sysconfig/docker

OPTIONS='--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://'

DOCKER_CERT_PATH=/etc/docker

# Docker may fail to start with these options set; you can leave them unconfigured at first

3.6 Configure flanneld

[root@minion1 ~]# grep -Ev "^$|^#" /etc/sysconfig/flanneld

FLANNEL_ETCD="http://172.16.0.3:4001"

FLANNEL_ETCD_KEY="/coreos.com/network"

3.7 Start Kubernetes, Docker, and flanneld on the minions

[root@minion1 ~]# systemctl start docker.service flanneld.service

[root@minion1 ~]# systemctl start kubelet.service kube-proxy.service

If docker0 ends up with an IP range different from the subnet flannel assigned, reset the bridge as follows:

# systemctl stop docker

# ifconfig docker0 down

# brctl delbr docker0    # brctl is provided by the bridge-utils package

# systemctl start docker
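One way to spot the mismatch is to check whether docker0's address actually falls inside the flannel network. A small pure-bash helper (the addresses below are illustrative; on a real minion you would feed it docker0's address from `ip addr show docker0` and the Network value stored in etcd):

```shell
#!/bin/bash
# Check whether an IPv4 address falls inside a CIDR network,
# e.g. whether docker0's address agrees with the flannel subnet.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Print 1 if address $1 lies inside CIDR network $2, else 0.
in_cidr() {
  local addr net bits mask
  addr=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  echo $(( (addr & mask) == (net & mask) ))
}

in_cidr 172.17.42.1 172.17.0.0/16   # prints 1: docker0 agrees with flannel
in_cidr 192.168.1.1 172.17.0.0/16   # prints 0: the bridge needs resetting
```

If the check prints 0, apply the stop/delete/restart sequence above so docker recreates the bridge inside the flannel subnet.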

3.8 Docker fails to start

[root@localhost sysconfig]# systemctl start docker

Job for docker.service failed. See 'systemctl status docker.service' and 'journalctl -xn' for details.

[root@localhost sysconfig]# systemctl status docker.service

docker.service - Docker Application Container Engine

   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)

  Drop-In: /usr/lib/systemd/system/docker.service.d

           └─flannel.conf

   Active: failed (Result: exit-code) since 三 2015-09-16 14:18:47 CST; 11s ago

     Docs: http://docs.docker.com

  Process: 9150 ExecStart=/usr/bin/docker -d $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY (code=exited, status=1/FAILURE)

 Main PID: 9150 (code=exited, status=1/FAILURE)

9月 16 14:18:47 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...

9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.842291856+08:00" level=info msg="Listening for...sock)"

9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.861153138+08:00" level=error msg="WARNING: No ...n use"

9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.889459632+08:00" level=info msg="[graphdriver]...per\""

9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.902509183+08:00" level=warning msg="Running mo...tus 1"

9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.907255506+08:00" level=info msg="Firewalld run...false"

9月 16 14:18:47 localhost.localdomain docker[9150]: time="2015-09-16T14:18:47.949811560+08:00" level=fatal msg="Error starti....61.1"

9月 16 14:18:47 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE

9月 16 14:18:47 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.

9月 16 14:18:47 localhost.localdomain systemd[1]: Unit docker.service entered failed state.

Hint: Some lines were ellipsized, use -l to show in full.

[root@localhost sysconfig]# docker -d

INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) 

ERRO[0000] WARNING: No --storage-opt dm.thinpooldev specified, using loopback; this configuration is strongly discouraged for production use 

INFO[0000] [graphdriver] using prior storage driver "devicemapper" 

WARN[0000] Running modprobe bridge nf_nat br_netfilter failed with message: , error: exit status 1 

INFO[0000] Firewalld running: false                     

INFO[0000] Loading containers: start.                   

INFO[0000] Loading containers: done.                    

INFO[0000] Daemon has completed initialization          

INFO[0000] Docker daemon                                 commit=3043001/1.7.1 execdriver=native-0.2 graphdriver=devicemapper version=1.7.1

Fix:

As in 3.7, if docker0's IP range differs from the subnet flannel assigned, reset the bridge:

# systemctl stop docker

# ifconfig docker0 down

# brctl delbr docker0

# systemctl start docker

Note: when testing on virtual machines, if you clone a minion VM, both minion clones end up with the same docker0/flannel subnet, which leaves the nodes in NotReady. Clone the master VM instead and reconfigure the clone as a minion.

4. Cluster operations

Check node status:

[root@master ~]# kubectl get nodes

Error 1:

Error from server: 501: All the given peers are not reachable (failed to propose on members [http://172.16.0.3:4001] twice [last error: Get http://172.16.0.3:4001/v2/keys/registry/minions?quorum=false&recursive=true&sorted=true: dial tcp 172.16.0.3:4001: i/o timeout]) [0]

Cause:

The node never registered in etcd; on inspection, docker was not running.

Error 2:

[root@localhost ~]# kubectl get nodes

NAME         LABELS                              STATUS

127.0.0.1    kubernetes.io/hostname=127.0.0.1    NotReady

172.16.0.2   kubernetes.io/hostname=172.16.0.2   NotReady

172.16.0.4   kubernetes.io/hostname=172.16.0.4   NotReady

172.16.0.5   kubernetes.io/hostname=172.16.0.5   NotReady

Cause:

The minions failed to register with etcd. Check whether the etcd service is listening on port 2379 (the default client port for etcd 2.0.13) and whether the minions can telnet to that port.
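Once the nodes report Ready, the cluster can schedule workloads. A minimal pod definition for this API version, as a quick smoke test (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test          # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx            # illustrative image
    ports:
    - containerPort: 80
```

Save it as nginx-pod.yaml, create it with `kubectl create -f nginx-pod.yaml`, then `kubectl get pods` shows which minion it was scheduled onto.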

That covers building a Docker cluster with Kubernetes. Hopefully the material above is of some help.
