Installing Kubernetes v1.2 on CentOS 7.2

Overview


Abstract

After building a Docker cluster with Swarm we ran into quite a few problems. Swarm is good, but it is still at an early stage and lacks features, so we use Kubernetes to solve these problems.

Kubernetes vs. Swarm

Advantages

  • Replica sets and health maintenance

  • Service discovery and load balancing

  • Rolling (gray-release) upgrades

  • Garbage collection: automatic cleanup of dead images and containers

  • Decoupled from the container engine: not limited to Docker containers

  • User authentication and resource isolation

Disadvantages

Being large and all-encompassing means higher complexity: everything from deployment to day-to-day use is much more involved than with Swarm. Swarm, by comparison, is lightweight and integrates better with the Docker engine. In spirit I still favor Swarm, but its feature set is too limited for now. A few days ago SwarmKit was released with a lot of new management features; once it matures I may well go back to Swarm.

Introduction to K8s core concepts

pod

A pod is the smallest deployable unit in k8s; containers run inside pods. A pod can run multiple containers, and containers in the same pod share network and storage and can reach each other directly.
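As a quick illustration (a sketch only; the pod name demo-pod and the nginx/busybox images are placeholders, not part of this guide's deployment), a pod with two containers that share one network namespace can be created like this:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod              # illustrative name
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar             # both containers share the pod's network, so this one
    image: busybox            # can reach "web" on localhost:80
    command: ["sleep", "3600"]
EOF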


Replication controller

Replication controller: creates identical replicas of a pod and runs them on different nodes. In practice you rarely create a standalone pod; instead you create pods through an RC, which controls and manages the pod lifecycle and keeps the pods healthy.
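For example (again just a sketch with illustrative names), an RC that keeps two copies of an nginx pod running; if a pod or its node dies, the RC recreates the pod elsewhere, and scaling is a single command:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: demo-rc               # illustrative name
spec:
  replicas: 2                 # desired number of identical pods
  selector:
    app: demo
  template:                   # pod template used to stamp out the replicas
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx
EOF

kubectl scale rc demo-rc --replicas=3    # grow (or shrink) the replica count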

service

Every time a container is restarted it comes back with a different IP address, so you need service discovery and load balancing to deal with this. A service fulfils that need: once created it exposes a stable port and is bound to the matching pods.
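Continuing the sketch above (assuming the illustrative demo-rc exists), a service gives those pods one stable virtual IP and port no matter how often the pods themselves are recreated:

kubectl expose rc demo-rc --port=80 --target-port=80 --name=demo-svc
kubectl get svc demo-svc     # the CLUSTER-IP shown here stays fixed while pod IPs come and go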

Introduction to K8s core components


apiserver

Provides the external REST API. It runs on the master node; after validating a request it updates the data stored in etcd.

scheduler

The scheduler runs on the master node. It watches for changes through the apiserver and, when a pod needs to run, uses its scheduling algorithm to pick a node that can run it.

controller-manager

The controller manager runs on the master node and periodically runs several controllers:

1) Replication controller manager: manages and tracks the state of all RCs

2) Service endpoint manager: keeps the pods bound to each service up to date and unbinds pods that have failed

3) Node controller manager: periodically health-checks and monitors the cluster nodes

4) Resource quota manager: tracks cluster resource usage

kubelet (minion node)

Manages and maintains all containers on its node, e.g. creating new containers as instructed and garbage-collecting unused images.

kube-proxy (minion node)

Load-balances client requests across the pods behind a service. It is the concrete implementation of a service and shields clients from the pods' changing IPs. The proxy implements the forwarding by modifying iptables rules.
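You can see this mechanism on any node once the cluster is up (an optional check, not an installation step); kube-proxy's rules all live in iptables chains whose names start with KUBE:

iptables-save | grep KUBE    # in userspace mode these rules redirect service traffic to a local proxy port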


Workflow

[Workflow diagram]


K8s installation procedure:

I. Host plan:

IP address       Role                  Installed packages                                               Services to start (in order)
192.168.20.60    k8s-master + minion   kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2   etcd, flannel, docker, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy
192.168.20.61    k8s-minion1           kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2   etcd, flannel, docker, kubelet, kube-proxy
192.168.20.62    k8s-minion2           kubernetes-v1.2.4, etcd-v2.3.2, flannel-v0.5.5, docker-v1.11.2   etcd, flannel, docker, kubelet, kube-proxy



II. Environment preparation

System environment: CentOS-7.2


#yum update

# Stop firewalld and install iptables-services

systemctl stop firewalld.service
systemctl disable firewalld.service
yum -y install iptables-services
systemctl restart iptables.service
systemctl enable iptables.service


# Disable SELinux

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0

# Add the Docker yum repo

#tee /etc/yum.repos.d/docker.repo <<-'EOF'

[dockerrepo]

name=Docker Repository

baseurl=https://yum.dockerproject.org/repo/main/centos/7/

enabled=1

gpgcheck=1

gpgkey=https://yum.dockerproject.org/gpg

EOF

yum install docker-engine

# Use the internal private registry

# vi /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/docker daemon --insecure-registry=192.168.4.231:5000 -H fd://

# Start docker

systemctl start docker
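Because the unit file was edited above, systemd has to reload it before the flag takes effect; a quick way to confirm it did (a small sketch, not part of the original steps):

systemctl daemon-reload
systemctl restart docker
ps -ef | grep "docker daemon"    # the --insecure-registry=192.168.4.231:5000 flag should show up in the daemon's command line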

III. Install the etcd cluster (provides strongly consistent storage for k8s)

tar zxf etcd-v2.3.2-linux-amd64.tar.gz
cd etcd-v2.3.2-linux-amd64
cp etcd* /usr/local/bin/

# Register the systemd service
# vi /usr/lib/systemd/system/etcd.service

[Unit]
Description=etcd

[Service]
# ETCD_NAME must be unique; on the minion nodes use that host's name instead.
# The peer/client URLs below use this machine's IP; change them to the local IP on the other machines.
# If the cluster gets into a bad state, the data directory can be deleted and the cluster reconfigured.
Environment=ETCD_NAME=k8s-master
Environment=ETCD_DATA_DIR=/var/lib/etcd
Environment=ETCD_INITIAL_ADVERTISE_PEER_URLS=http://192.168.20.60:7001
Environment=ETCD_LISTEN_PEER_URLS=http://192.168.20.60:7001
Environment=ETCD_LISTEN_CLIENT_URLS=http://192.168.20.60:4001,http://127.0.0.1:4001
Environment=ETCD_ADVERTISE_CLIENT_URLS=http://192.168.20.60:4001
# The cluster token and membership list are identical on all three nodes.
Environment=ETCD_INITIAL_CLUSTER_TOKEN=etcd-k8s-1
Environment=ETCD_INITIAL_CLUSTER=k8s-master=http://192.168.20.60:7001,k8s-minion1=http://192.168.20.61:7001,k8s-minion2=http://192.168.20.62:7001
Environment=ETCD_INITIAL_CLUSTER_STATE=new
ExecStart=/usr/local/bin/etcd

[Install]
WantedBy=multi-user.target

# Start the service
systemctl start etcd

# Check that the etcd cluster is healthy
[root@k8s-minion2 etcd]# etcdctl cluster-health

member 2d3a022000105975 is healthy: got healthy result from http://192.168.20.61:4001
member 34a68a46747ee684 is healthy: got healthy result from http://192.168.20.62:4001
member fe9e66405caec791 is healthy: got healthy result from http://192.168.20.60:4001
cluster is healthy      # this output means the cluster started correctly


# Then set the internal network range that will be bridged between hosts

etcdctl set /coreos.com/network/config '{ "Network": "172.20.0.0/16" }'
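Optionally read the key back from any node to confirm all three etcd members agree on the overlay range:

etcdctl get /coreos.com/network/config    # should print { "Network": "172.20.0.0/16" } on every node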





IV. Install and start Flannel (bridges the container networks so containers on different hosts can reach each other)

tar zxf flannel-0.5.5-linux-amd64.tar.gz
mv flannel-0.5.5 /usr/local/flannel
cd /usr/local/flannel

# Register the systemd service
# vi /usr/lib/systemd/system/flanneld.service
[Unit]
Description=flannel
After=etcd.service
After=docker.service

[Service]
EnvironmentFile=/etc/sysconfig/flanneld
ExecStart=/usr/local/flannel/flanneld \
    -etcd-endpoints=${FLANNEL_ETCD} $FLANNEL_OPTIONS

[Install]
WantedBy=multi-user.target


# Create the config file
# vi /etc/sysconfig/flanneld
FLANNEL_ETCD="http://192.168.20.60:4001,http://192.168.20.61:4001,http://192.168.20.62:4001"

@H_867_197@

# Start the service
systemctl start flanneld
mk-docker-opts.sh -i
source /run/flannel/subnet.env
ifconfig docker0 ${FLANNEL_SUBNET}
systemctl restart docker

# Verify that it worked

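One way to verify (a sketch; flannel0 and docker0 are the default interface names): docker0 should now sit inside the per-host subnet flannel leased out of 172.20.0.0/16, and a container started on one host should be able to ping a container on another host.

cat /run/flannel/subnet.env     # FLANNEL_SUBNET is this host's lease out of 172.20.0.0/16
ip addr show flannel0           # the flannel overlay interface
ip addr show docker0            # should carry an address from FLANNEL_SUBNET
# On two different hosts, start a test container and ping one from the other:
docker run -d --name net-test busybox sleep 3600
docker inspect -f '{{ .NetworkSettings.IPAddress }}' net-test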

V. Install Kubernetes

1. Download the release

cd /usr/local/
git clone https://github.com/kubernetes/kubernetes.git

cd kubernetes/server/
tar zxf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kubectl kube-scheduler kube-controller-manager kube-proxy kubelet /usr/local/bin/




2. Register the systemd services

# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kubelet
User=root
ExecStart=/usr/local/bin/kubelet \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_ALLOW_PRIV \
    $KUBELET_ADDRESS \
    $KUBELET_PORT \
    $KUBELET_HOSTNAME \
    $KUBELET_API_SERVER \
    $KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/config
EnvironmentFile=/etc/kubernetes/kube-proxy
User=root
ExecStart=/usr/local/bin/kube-proxy \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3. Create the config files

mkdir /etc/kubernetes

vi /etc/kubernetes/config

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.20.60:4001"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
#KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_ALLOW_PRIV="--allow-privileged=true"

vi /etc/kubernetes/kubelet

# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
# (on both master and minion nodes, put that machine's own IP here)
KUBELET_HOSTNAME="--hostname-override=192.168.20.60"
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.20.60:8080"
# Add your own! (the cluster DNS settings are used later by the DNS add-on)
KUBELET_ARGS="--cluster-dns=192.168.20.64 --cluster-domain=cluster.local"

vi /etc/kubernetes/kube-proxy

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Add your own!
# Proxy mode: userspace is used here. The iptables mode is more efficient, but make sure
# your kernel and iptables versions meet its requirements, otherwise it will fail.
KUBE_PROXY_ARGS="--proxy-mode=userspace"

For background on choosing a proxy mode, see this explanation on Stack Overflow:

http://stackoverflow.com/questions/36088224/what-does-userspace-mode-means-in-kube-proxys-proxy-mode?rq=1

4. The services above must run on every node. The master node additionally runs the following services:

kube-apiserver

kube-controller-manager

kube-scheduler



4.1. Configure the master services

# vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=root
ExecStart=/usr/local/bin/kube-apiserver \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_ETCD_SERVERS \
    $KUBE_API_ADDRESS \
    $KUBE_API_PORT \
    $KUBELET_PORT \
    $KUBE_ALLOW_PRIV \
    $KUBE_SERVICE_ADDRESSES \
    $KUBE_ADMISSION_CONTROL \
    $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target


# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=root
ExecStart=/usr/local/bin/kube-controller-manager \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=root
ExecStart=/usr/local/bin/kube-scheduler \
    $KUBE_LOGTOSTDERR \
    $KUBE_LOG_LEVEL \
    $KUBE_MASTER \
    $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

4.2. Config files

vi /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=192.168.20.0/24"
#KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""

vi /etc/kubernetes/controller-manager

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""

vi /etc/kubernetes/scheduler

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://192.168.20.60:8080"
# Add your own!
KUBE_SCHEDULER_ARGS=""

More options are documented in the official reference:

http://kubernetes.io/docs/admin/kube-proxy/

# Start the master services
systemctl start kubelet
systemctl start kube-proxy
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler

# Start the minion services
systemctl start kubelet
systemctl start kube-proxy

# Check that the services came up correctly
[root@k8s-master bin]# kubectl get no
NAME            STATUS    AGE
192.168.20.60   Ready     24s
192.168.20.61   Ready     46s
192.168.20.62   Ready     35s
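On the master you can also ask the apiserver about the other control-plane components (an optional check):

kubectl get cs    # componentstatuses: scheduler, controller-manager and the etcd members should all report Healthy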

# Restart commands
systemctl restart kubelet
systemctl restart kube-proxy
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

# Pitfall
The pause image lives on gcr.io, which is blocked; without it k8s cannot start any pods and reports errors like:

image pull failed for gcr.io/google_containers/pause:2.0

Use the image from Docker Hub instead, or pull it into the local registry, then re-tag it. Every node needs this image.

docker pull kubernetes/pause
docker tag kubernetes/pause gcr.io/google_containers/pause:2.0
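If you go the local-registry route mentioned above (a sketch; 192.168.4.231:5000 is the private registry used elsewhere in this guide), push the image once and then pull and re-tag it on every node:

# On a machine that can reach Docker Hub:
docker pull kubernetes/pause
docker tag kubernetes/pause 192.168.4.231:5000/pause:2.0
docker push 192.168.4.231:5000/pause:2.0
# On every node:
docker pull 192.168.4.231:5000/pause:2.0
docker tag 192.168.4.231:5000/pause:2.0 gcr.io/google_containers/pause:2.0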


[root@k8s-master addons]# docker images
REPOSITORY                        TAG    IMAGE ID       CREATED        SIZE
192.168.4.231:5000/pause          2.0    2b58359142b0   9 months ago   350.2 kB
gcr.io/google_containers/pause    2.0    2b58359142b0   9 months ago   350.2 kB

5. The official source tree ships some add-ons, e.g. the dashboard and DNS

cd /usr/local/kubernetes/cluster/addons/


5.1. Dashboard add-on

cd /usr/local/kubernetes/cluster/addons/dashboard

This directory contains two files:

=============================================================

dashboard-controller.yaml    # defines the deployment: replica count, image, resource limits, etc.

apiVersion: v1
kind: ReplicationController
metadata:
  # Keep the name in sync with image version and
  # gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-v1.0.1
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    version: v1.0.1
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1                 # number of replicas
  selector:
    k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: v1.0.1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 192.168.4.231:5000/kubernetes-dashboard:v1.0.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        # Without this flag the dashboard looks for the apiserver on localhost instead of the
        # master. Also watch the indentation (spaces) throughout this file.
        - --apiserver-host=http://192.168.20.60:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

========================================================

dashboard-service.yaml    # exposes the dashboard as a service

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

=========================================================

kubectl create -f ./                             # create both resources
kubectl --namespace=kube-system get po           # check the status of the system pods
kubectl --namespace=kube-system get po -o wide   # see which node each system pod landed on

# To remove the add-on again:
kubectl delete -f ./

Then open http://192.168.20.60:8080/ui/ in a browser.


5.2. DNS add-on

# Raw IP addresses are hard to remember; inside the cluster, DNS can bind names to service IPs and keep them updated automatically.
cd /usr/local/kubernetes/cluster/addons/dns
cp skydns-rc.yaml.in /opt/dns/skydns-rc.yaml
cp skydns-svc.yaml.in /opt/dns/skydns-svc.yaml


# /opt/dns/skydns-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: 192.168.4.231:5000/etcd-amd64:2.2.1   # official images pulled into the local registry first
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: 192.168.4.231:5000/kube2sky:1.14   # from the local registry
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters; this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain=cluster.local                       # pitfall: must match the value in /etc/kubernetes/kubelet
        - --kube_master_url=http://192.168.20.60:8080  # the master node
      - name: skydns
        image: 192.168.4.231:5000/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters; this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.                       # another pitfall: the trailing "." is required
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: 192.168.4.231:5000/exechealthz:1.0      # from the local registry
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null   # same domain pitfall again
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default   # Don't use cluster DNS.


========================================================================

# /opt/dns/skydns-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 192.168.20.100   # cluster DNS service IP; it must match the --cluster-dns value in /etc/kubernetes/kubelet
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

==================================================================

# Start it
cd /opt/dns/
kubectl create -f ./

# Check pod status; once all 4/4 containers are ready you can move on to the verification step.
kubectl --namespace=kube-system get pod -o wide


Verification procedure from the official documentation

URL: https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/addons/dns/README.md

How do I test if it is working?

First deploy DNS as described above.

1. Create a simple Pod to use as a test environment.

Create a file named busybox.yaml with the following contents:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always

Then create a pod using this file:

kubectl create -f busybox.yaml

2. Wait for this pod to go into the running state.

You can get its status with:

kubectl get pods busybox

You should see:

NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>

3. Validate that DNS works

Once that pod is running, you can exec nslookup in that environment:

kubectl exec busybox -- nslookup kubernetes.default

You should see something like:

Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes.default
Address 1: 10.0.0.1

If you see that, DNS is working correctly.


5.3. k8s-manager add-on

http://my.oschina.net/fufangchun/blog/703985


mkdir /opt/k8s-manage

cd /opt/k8s-manage

================================================

# cat k8s-manager-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  replicas: 1
  selector:
    app: k8s-manager
  template:
    metadata:
      labels:
        app: k8s-manager
    spec:
      containers:
      - image: mlamina/k8s-manager:latest
        name: k8s-manager
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 80
          name: http

=================================================

# cat k8s-manager-svr.yaml

apiVersion: v1
kind: Service
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: k8s-manager

=================================================

# Start it

kubectl create -f ./


Open in a browser:

http://192.168.20.60:8080/api/v1/proxy/namespaces/kube-system/services/k8s-manager
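This URL goes through the apiserver's built-in service proxy, so the same pattern works for any cluster service; for example, from the shell:

curl http://192.168.20.60:8080/api/v1/proxy/namespaces/kube-system/services/k8s-manager/    # returns the manager UI's HTML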



Worked example

1. Deploy zookeeper, ActiveMQ, redis and mongodb services

mkdir /opt/service/

cd /opt/service

==========================================================

# cat service.yaml


apiVersion: v1
kind: Service
metadata:
  name: zk-amq-rds-mgd        # service name
  labels:
    run: zk-amq-rds-mgd
spec:
  type: NodePort
  ports:
  - port: 2181                # service port
    nodePort: 31656           # port exposed on the nodes for external access
    targetPort: 2181          # port inside the container
    protocol: TCP             # protocol
    name: zk-app              # port name
  - port: 8161
    nodePort: 31654
    targetPort: 8161
    protocol: TCP
    name: amq-http
  - port: 61616
    nodePort: 31655
    targetPort: 61616
    protocol: TCP
    name: amq-app
  - port: 27017
    nodePort: 31653
    targetPort: 27017
    protocol: TCP
    name: mgd-app
  - port: 6379
    nodePort: 31652
    targetPort: 6379
    protocol: TCP
    name: rds-app
  selector:
    run: zk-amq-rds-mgd
---
#apiVersion: extensions/v1beta1
apiVersion: v1
kind: ReplicationController
metadata:
  name: zk-amq-rds-mgd
spec:
  replicas: 2                 # two replicas
  template:
    metadata:
      labels:
        run: zk-amq-rds-mgd
    spec:
      containers:
      - name: zookeeper       # application name
        image: 192.168.4.231:5000/zookeeper:0524   # local registry image
        imagePullPolicy: IfNotPresent              # pull the image only if it is not already present on the node
        ports:
        - containerPort: 2181 # service port inside the container
        env:
        - name: LANG
          value: en_US.UTF-8
        volumeMounts:
        - mountPath: /tmp/zookeeper   # mount point inside the container
          name: zookeeper-d           # must match one of the volume names defined below
      - name: activemq
        image: 192.168.4.231:5000/activemq:v2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8161
        - containerPort: 61616
        volumeMounts:
        - mountPath: /opt/apache-activemq-5.10.2/data
          name: activemq-d
      - name: mongodb
        image: 192.168.4.231:5000/mongodb:3.0.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /var/lib/mongo
          name: mongodb-d
      - name: redis
        image: 192.168.4.231:5000/redis:2.8.25
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        volumeMounts:
        - mountPath: /opt/redis/var
          name: redis-d
      volumes:
      - hostPath:
          path: /mnt/mfs/service/zookeeper/data    # host mount point; a shared distributed filesystem (MooseFS) is used here so multiple replicas see consistent data
        name: zookeeper-d
      - hostPath:
          path: /mnt/mfs/service/activemq/data
        name: activemq-d
      - hostPath:
          path: /mnt/mfs/service/mongodb/data
        name: mongodb-d
      - hostPath:
          path: /mnt/mfs/service/redis/data
        name: redis-d

===========================================================================================

# Create the resources

kubectl create -f ./
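Once the pods are running, the NodePorts declared above are reachable on any node's IP. A quick sanity check (a sketch; it assumes nc and redis-cli are installed on the machine you test from):

kubectl get svc zk-amq-rds-mgd       # shows the cluster IP and the 316xx node ports
kubectl get po -o wide               # the two replicas and the nodes they were scheduled on
echo ruok | nc 192.168.20.60 31656   # zookeeper should answer "imok"
redis-cli -h 192.168.20.60 -p 31652 ping    # redis should answer PONG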


Reference: http://my.oschina.net/jayqqaa12/blog/693919
