Installing Kubernetes: Moving From Physical Servers to Containers
There are plenty of container-management systems to choose from these days, including Amazon EC2 Container Service, Rancher, Kubernetes, and others. I chose Kubernetes because it can be installed in many different environments and doesn't lock you into a single vendor.
For years I ran basic services such as Apache, Bind, MySQL, and PHP on physical servers, occasionally migrating between servers to cut costs. But hosting several websites on one server makes updates painful, because each site depends on a different PHP version. To avoid the hassle of migrating, I often kept old servers running longer than I should have. I suspect many of you face the same situation in practice.
I had been following the Docker ecosystem and realized that moving to containers would solve these problems. So I began migrating from three Gentoo servers hosted in France to one Debian server hosted in the US and one Gentoo server hosted in France.
Today, I'll show you how to install Docker and manage your containers with Kubernetes.
Installing Docker
The first step is easy.
Install Docker on Debian:
apt-get install docker.io
Install Docker on Gentoo:
emerge -v docker
The next step is installing Kubernetes.
Installing Kubernetes
If you go with the simple single-Docker setup, you don't need a cluster, but you also lose most of what makes Kubernetes worthwhile.
A multi-Docker environment is clearly the better option. In that setup, the Docker service first starts etcd and flannel, which enable the shared network and let Kubernetes manage and share its configuration.
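As a sketch of that bootstrap order, the commands below start etcd and then flannel as Docker containers on one node. The image tags, the node IP, and the flag values are illustrative assumptions, not the exact commands from this setup; the script only prints them so you can adapt them for each node.

```shell
#!/bin/sh
# Bootstrap sketch for one node: etcd first, then flannel on top of it.
# NODE_IP, the image tags, and the flags are illustrative assumptions.
NODE_IP=91.121.82.118

# The commands are printed rather than executed, so the sketch stays
# self-contained; run them on each node with that node's own IP.
BOOTSTRAP=$(cat <<EOF
docker run -d --net=host --name etcd quay.io/coreos/etcd:v2.2.1 \\
  --advertise-client-urls http://$NODE_IP:2379,http://$NODE_IP:4001 \\
  --listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \\
  --initial-advertise-peer-urls http://$NODE_IP:2380 \\
  --listen-peer-urls http://0.0.0.0:2380
docker run -d --net=host --privileged --name flannel \\
  quay.io/coreos/flannel:0.5.5 /opt/bin/flanneld \\
  --etcd-endpoints=http://127.0.0.1:4001
EOF
)
echo "$BOOTSTRAP"
```

Once every node's etcd is up, `etcdctl member list` should show all of them, as below.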
root@c1:/home/shared# etcdctl member list
eacd7f155934262: name=b5.loopingz.com peerURLs=http://91.121.82.118:2380 clientURLs=http://91.121.82.118:2379,http://91.121.82.118:4001
2f0f8b2f17fffe3c: name=c2.loopingz.com peerURLs=http://198.245.51.134:2380 clientURLs=http://198.245.51.134:2379,http://198.245.51.134:4001
88314cdfe9bc1797: name=default peerURLs=http://142.4.214.129:2380 clientURLs=http://142.4.214.129:2379,http://142.4.214.129:4001
You now have several distinct networks:
10.0.0.0/16 holds the Kubernetes services/load balancers.
10.1.0.0/16 is where pods are created on the cluster nodes; each node's pods get a 10.1.xx.0/24 subnet.
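flannel carves those per-node pod subnets out of a network config stored in etcd. The exact config used here isn't shown in the article, but a minimal one along these lines, written to flannel's default key with `etcdctl set /coreos.com/network/config '<json>'`, would produce the layout above (SubnetLen 24 gives each node its /24; the vxlan backend matches the flannel.1 interface shown below):

```json
{
  "Network": "10.1.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "vxlan" }
}
```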
flannel.1 Link encap:Ethernet HWaddr c2:67:be:06:2c:11
          inet addr:10.1.72.0 Bcast:0.0.0.0 Mask:255.255.0.0
          inet6 addr: fe80::c067:beff:fe06:2c11/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
          RX packets:5412360 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4174530 errors:0 dropped:21 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1651767659 (1.5 GiB) TX bytes:3453452284 (3.2 GiB)
Installing GlusterFS
Depending on each pod's configuration and node selection, a container can be scheduled onto any node in the cluster, which means you need storage shared across the nodes. There are plenty of options here too, including Amazon EFS (still in beta), Google Cloud Storage, GlusterFS, and others.
GlusterFS is an open shared-storage solution. It runs as a daemon and listens on TCP port 24007.
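As a sketch, creating and mounting a volume replicated across the three nodes might look like this. The volume name (shared) and the brick path (/data/gluster) are assumptions; the host names are this cluster's. The commands are printed rather than executed so the sketch can be tried safely.

```shell
#!/bin/sh
# Sketch: one replica of the volume on each of the three nodes, mounted at
# /home/shared. Volume name and brick path are assumptions for illustration.
CMDS=$(cat <<EOF
gluster volume create shared replica 3 \\
  b5.loopingz.com:/data/gluster \\
  c1.loopingz.com:/data/gluster \\
  c2.loopingz.com:/data/gluster
gluster volume start shared
mount -t glusterfs 127.0.0.1:/shared /home/shared
EOF
)
# Printed rather than executed; run on one node once glusterd is up everywhere.
echo "$CMDS"
```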
Installing the Firewall Rules
To update the firewall on all servers at once, I wrote a small shell script that watches for changes on the etcd nodes and updates each firewall according to the rules and the current cluster nodes.
Updating the firewall configuration
The firewall is configured through an fw.conf file shared on a GlusterFS volume:
#!/bin/sh
MD5_TARGET=$(md5sum /home/shared/configs/firewall/fw.conf | awk '{print $1}')
MD5_NEW=$(md5sum fw.conf | awk '{print $1}')
if [ "$MD5_TARGET" = "$MD5_NEW" ]; then
    echo "No config change"
    exit 0
fi
cp fw.conf /home/shared/configs/firewall/
etcdctl set /cluster/firewall/update "$MD5_NEW"
On each host, a curl call with ?wait=true blocks until the script above changes the value (or the request times out), after which the loop updates that host's firewall:
while :
do
    curl -L http://127.0.0.1:4001/v2/keys/cluster/firewall/update?wait=true
    NEW_HASH=$(etcdctl get /cluster/firewall/update)
    if [ "$NEW_HASH" != "$FW_HASH" ]; then
        echo "Update the firewall"
        source /usr/local/bin/firewall_builder
        FW_HASH=$NEW_HASH
    fi
done
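The firewall_builder script itself isn't shown in the article, so here is a hypothetical sketch of what it could look like. It assumes fw.conf holds one "proto port" pair per line; the real script would execute the iptables commands, while this sketch only echoes them so it can be tried safely.

```shell
#!/bin/sh
# Hypothetical firewall_builder sketch. Assumption: fw.conf holds one
# "proto port" pair per line, e.g. "tcp 24007". The real script would
# execute iptables; this sketch echoes the rules instead.

build_rules() {
    while read -r proto port; do
        [ -z "$proto" ] && continue           # skip blank lines
        case $proto in \#*) continue ;; esac  # skip comments
        echo "iptables -A INPUT -p $proto --dport $port -j ACCEPT"
    done < "$1"
}

# Demo with a sample fw.conf (the real one lives on the GlusterFS volume):
SAMPLE=$(mktemp)
printf '# cluster ports\ntcp 24007\nudp 8472\n' > "$SAMPLE"
build_rules "$SAMPLE"
rm -f "$SAMPLE"
```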
Installing the Docker Registry
To store the container images that Kubernetes will deploy, you're best off running your own registry.
So let's use Kubernetes to deploy our first pod.
The default format is YAML, but I personally prefer JSON.
Here is the docker-rc.json file:
{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "docker-repository",
    "labels": {
      "app": "docker-repository",
      "version": "v1"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "app": "docker-repository",
      "version": "v1"
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "docker-repository",
          "version": "v1"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "config",
            "hostPath": { "path": "/home/shared/configs/docker" }
          },
          {
            "name": "data",
            "hostPath": { "path": "/home/shared/docker" }
          }
        ],
        "containers": [
          {
            "name": "registry",
            "image": "registry:2.2.1",
            "volumeMounts": [
              { "name": "config", "mountPath": "/etc/docker/" },
              { "name": "data", "mountPath": "/var/lib/registry" }
            ],
            "resources": {
              "limits": {
                "cpu": "100m",
                "memory": "50Mi"
              },
              "requests": {
                "cpu": "100m",
                "memory": "50Mi"
              }
            },
            "ports": [
              {
                "containerPort": 5000
              }
            ]
          }
        ]
      }
    }
  }
}
And here is the docker-svc.json file:
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "docker-repository",
    "labels": {
      "app": "docker-repository"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "app": "docker-repository"
    },
    "clusterIP": "10.0.0.204",
    "ports": [
      {
        "protocol": "TCP",
        "port": 5000,
        "targetPort": 5000
      }
    ]
  }
}
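The two definitions are then loaded into the cluster with kubectl. The commands below are printed rather than executed, since they need a kubectl configured against the cluster; the label selector matches the app label defined in the files.

```shell
#!/bin/sh
# Sketch: create the registry ReplicationController and Service, then
# verify they came up. Printed here; run on a node with kubectl configured.
CMDS="kubectl create -f docker-rc.json
kubectl create -f docker-svc.json
kubectl get pods -l app=docker-repository
kubectl get svc docker-repository"
echo "$CMDS"
```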
Installing the nginx Proxy
To host the various domains on the servers, a reverse proxy is indispensable. Installing nginx is straightforward, but note the handful of headers that need to be added.
In my example, the first vhost is docker.loopingz.com:
server {
    listen 443 ssl;
    server_name docker.loopingz.com;
    access_log /var/log/nginx/docker.loopingz.com_access_log main;
    error_log /var/log/nginx/docker.loopingz.com_error_log info;
    client_max_body_size 0;
    chunked_transfer_encoding on;
    location / {
        include /etc/nginx/conf.d/dev-auth;
        proxy_pass http://10.0.0.204:5000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header 'Docker-Distribution-Api-Version' 'registry/2.0' always;
    }
    ssl_ciphers ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!CAMELLIA;
    ssl_prefer_server_ciphers on;
    ssl_certificate /etc/letsencrypt/live/docker.loopingz.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/docker.loopingz.com/privkey.pem;
}
As you can see, we proxy to 10.0.0.204, the cluster IP defined in docker-svc.json. Kubernetes maps that IP to our Docker registry container.
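Once the registry answers behind that vhost, images are pushed to it by tagging them with the registry host name. In this sketch, "mysite" is a placeholder image name; the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch: pushing an image through the new registry.
# "mysite" is a placeholder image name.
REGISTRY=docker.loopingz.com
IMAGE=mysite
TAG_CMD="docker tag $IMAGE $REGISTRY/$IMAGE"
PUSH_CMD="docker push $REGISTRY/$IMAGE"
# Printed here; run them on any host that can reach the nginx vhost.
echo "$TAG_CMD"
echo "$PUSH_CMD"
```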
I want nginx deployed on every cluster node, so that every node can serve as the cluster's http/https entry point. We therefore define a replicationController for nginx:
{
  "apiVersion": "v1",
  "kind": "ReplicationController",
  "metadata": {
    "name": "nginx",
    "labels": {
      "app": "nginx",
      "version": "v1"
    }
  },
  "spec": {
    "replicas": 3,
    "selector": {
      "app": "nginx",
      "version": "v1"
    },
    "template": {
      "metadata": {
        "labels": {
          "app": "nginx",
          "version": "v1"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "config",
            "hostPath": { "path": "/home/shared/configs/nginx/" }
          },
          {
            "name": "logs",
            "hostPath": { "path": "/home/shared/logs/nginx/" }
          },
          {
            "name": "certs",
            "hostPath": { "path": "/home/shared/letsencrypt/" }
          },
          {
            "name": "static",
            "hostPath": { "path": "/home/shared/nginx/" }
          }
        ],
        "containers": [
          {
            "name": "nginx",
            "image": "nginx:latest",
            "volumeMounts": [
              { "name": "static", "readOnly": true, "mountPath": "/var/www/" },
              { "name": "certs", "readOnly": true, "mountPath": "/etc/letsencrypt" },
              { "name": "logs", "mountPath": "/var/log/nginx/" },
              { "name": "config", "readOnly": true, "mountPath": "/etc/nginx/conf.d/" }
            ],
            "resources": {
              "limits": {
                "cpu": "100m",
                "memory": "50Mi"
              },
              "requests": {
                "cpu": "100m",
                "memory": "50Mi"
              }
            },
            "ports": [
              {
                "containerPort": 80,
                "hostPort": 80
              },
              {
                "containerPort": 443,
                "hostPort": 443
              }
            ]
          }
        ]
      }
    }
  }
}
Encrypting with Let's Encrypt
To enable SSL on every vhost, we now use Let's Encrypt to obtain free SSL certificates, valid for three months. Issuance can be automated, so every nginx vhost includes the following in its configuration:
location /.well-known/acme-challenge {
    add_header "Content-Type:" "application/jose+json" always;
    root /etc/nginx/conf.d;
}
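With that location in place, a certificate can be requested using the webroot plugin, answering the ACME challenge through nginx. The client name and flags below follow the classic letsencrypt CLI of the era and are shown as an assumption (printed rather than executed); adapt them for certbot if needed.

```shell
#!/bin/sh
# Sketch: webroot-based certificate issuance for one vhost.
# Client name and flags are assumptions for the 2016-era letsencrypt CLI.
DOMAIN=docker.loopingz.com
CMD="letsencrypt certonly --webroot -w /etc/nginx/conf.d -d $DOMAIN"
# Printed here; certificates land in /etc/letsencrypt/live/$DOMAIN/.
echo "$CMD"
```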
You now know how to install Kubernetes. Go give it a try!
Original title: Installing Kubernetes: Moving From Physical Servers to Containers
[Exclusive translation by 51CTO.com; partner sites, please credit the source when republishing]