

[NBP] K8S Setup (Kubespray)

김한성 2020. 9. 2. 12:33

□ Tools that automate building a Kubernetes cluster

 

There are several tools that automate building a Kubernetes cluster, including kubeadm, kops, and kubespray. This post uses Kubespray, which is incubating as a sub-project of Kubernetes (kubernetes-sigs). The details of the setup are as follows.

 

master nodes: 3
worker nodes: 3
bastion node: 1

The IaaS layer runs on NBP (Naver Cloud Platform): three master nodes, three worker nodes, and one additional bastion node. A Network Interface was also created in NBP so that all nodes can communicate on the same 192.168.100.x subnet.

 

□ Register hosts on the bastion server

sudo vi /etc/hosts

192.168.100.181  cloud-k8s-master001 node01
192.168.100.182  cloud-k8s-master002 node02
192.168.100.183  cloud-k8s-master003 node03
192.168.100.184  cloud-k8s-worker001 node04
192.168.100.185  cloud-k8s-worker002 node05
192.168.100.186  cloud-k8s-worker003 node06

□ Generate an SSH key on the bastion server

[hskim@cloud-bastion .ssh]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hskim/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hskim/.ssh/id_rsa.
Your public key has been saved in /home/hskim/.ssh/id_rsa.pub.
.
.
.
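The interactive prompts above can also be skipped. As a sketch, the same RSA key pair with an empty passphrase can be generated in one shot (the `-N ""` and `-f` flags are standard `ssh-keygen` options; the 4096-bit size is a choice, not something the original used):

```shell
# Non-interactive variant of the keygen step above:
# empty passphrase (-N ""), explicit output path (-f), no prompts.
ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"
```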

□ Copy the key to each node with ssh-copy-id

[hskim@cloud-analysis-bastion .ssh]$ ssh-copy-id hskim@192.168.100.182
The authenticity of host '192.168.100.182 (192.168.100.182)' can't be established.
ECDSA key fingerprint is 06:34:9d:1e:b3:f4:1b:34:76:4c:2b:9e:56:ac:2a:ta.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hskim@192.168.100.182's password:


Number of key(s) added: 1


Now try logging into the machine, with:  "ssh 'hskim@192.168.100.182'"
and check to make sure that only the key(s) you wanted were added.
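The step above has to be repeated for every node. A hypothetical loop over the six IPs registered in /etc/hosts saves the repetition (it assumes the same user, hskim, and password on every node; each iteration will still prompt for that node's password):

```shell
# Push the bastion's public key to all six nodes in one pass.
for ip in 192.168.100.181 192.168.100.182 192.168.100.183 \
          192.168.100.184 192.168.100.185 192.168.100.186; do
  ssh-copy-id "hskim@${ip}"
done
```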

□ Disable swap (on every master and worker node)

[hskim@cloud-analysis-k8s-master-001 ~]$ free -m
              total        used        free      shared  buff/cache  available
Mem:          3763        323        884          8        2555        3152
Swap:          2047          0        2047
[hskim@cloud-analysis-k8s-master-001 ~]$ swapoff -a
swapoff: Not superuser.
[hskim@cloud-analysis-k8s-master-001 ~]$ sudo swapoff -a
[hskim@cloud-analysis-k8s-master-001 ~]$ free -m
              total        used        free      shared  buff/cache  available
Mem:          3763        322        886          8        2554        3153
Swap:            0          0          0
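Note that `swapoff -a` only lasts until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab can also be commented out (a sketch; the `sed` pattern assumes a standard fstab line with whitespace-separated fields):

```shell
# Disable swap now, and comment out the fstab entry so it stays
# disabled across reboots (a .bak backup of fstab is kept).
sudo swapoff -a
sudo sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab
```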

□ Install Kubespray on the bastion server, since the installation will be driven from there

 

□ Install base packages

$ sudo yum -y install epel-release
$ sudo yum install -y python3-pip

□ Download the Kubespray release source code

[hskim@cloud-analytics-bastion ~]$ git clone https://github.com/kubernetes-sigs/kubespray
Cloning into 'kubespray'...
remote: Enumerating objects: 44731, done.
remote: Total 44731 (delta 0), reused 0 (delta 0), pack-reused 44731
Receiving objects: 100% (44731/44731), 13.01 MiB | 4.21 MiB/s, done.
Resolving deltas: 100% (24956/24956), done.

Move into the kubespray directory, install the requirements, and copy the sample inventory:

$ cd kubespray

$ sudo pip3 install -r requirements.txt

$ cp -rfp inventory/sample inventory/analysis-cluster

□ Build the inventory file (hosts.yaml)

$ cd /home/hskim/kubespray

[hskim@cloud-analysis-bastion kubespray]$ declare -a IPS=(192.168.100.181 192.168.100.182 192.168.100.183 192.168.100.184 192.168.100.185 192.168.100.186)
[hskim@cloud-analysis-bastion kubespray]$ CONFIG_FILE=inventory/analysis-cluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
DEBUG: Adding group all
DEBUG: Adding group kube-master
DEBUG: Adding group kube-node
DEBUG: Adding group etcd
DEBUG: Adding group k8s-cluster
DEBUG: Adding group calico-rr
DEBUG: adding host node1 to group all
DEBUG: adding host node2 to group all
DEBUG: adding host node3 to group all
DEBUG: adding host node4 to group all
DEBUG: adding host node5 to group all
DEBUG: adding host node6 to group all
DEBUG: adding host node1 to group etcd
DEBUG: adding host node2 to group etcd
DEBUG: adding host node3 to group etcd
DEBUG: adding host node1 to group kube-master
DEBUG: adding host node2 to group kube-master
DEBUG: adding host node1 to group kube-node
DEBUG: adding host node2 to group kube-node
DEBUG: adding host node3 to group kube-node
DEBUG: adding host node4 to group kube-node
DEBUG: adding host node5 to group kube-node
DEBUG: adding host node6 to group kube-node
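Note that the builder placed only node1 and node2 in the kube-master group, while this setup calls for three masters. The generated hosts.yaml can be edited by hand so node3 also runs the control plane (an excerpt sketch; the exact layout may differ by Kubespray version, but the group names follow the DEBUG output above):

```yaml
# inventory/analysis-cluster/hosts.yaml (excerpt)
    children:
      kube-master:
        hosts:
          node1:
          node2:
          node3:   # added by hand for a three-master control plane
```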

□ ping test

[hskim@cloud-analysis-bastion kubespray]$ ansible -i inventory/analysis-cluster/hosts.yaml -m ping all
node2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node5 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node4 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

□ Network settings

$ vi /home/hskim/kubespray/inventory/analysis-cluster/group_vars/k8s-cluster/k8s-cluster.yml

kube_service_addresses: 172.18.0.0/16

kube_pods_subnet: 172.19.0.0/16
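Both CIDRs are chosen here so they do not overlap the 192.168.100.x node network. A quick sanity check that the values were saved in the inventory created earlier (path assumed from the `cp` step above):

```shell
# Confirm the service and pod CIDRs in the cluster config.
grep -E '^(kube_service_addresses|kube_pods_subnet):' \
  inventory/analysis-cluster/group_vars/k8s-cluster/k8s-cluster.yml
```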

□ Run the installation with ansible-playbook

ansible-playbook -i inventory/analysis-cluster/hosts.yaml --become --become-user=root cluster.yml

□ If the ansible run fails partway, use reset.yml to wipe the partial installation, then rerun cluster.yml from the start

 

ansible-playbook -i inventory/analysis-cluster/hosts.yaml --become --become-user=root reset.yml
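Once cluster.yml completes, the cluster can be verified from one of the master nodes. A sketch, assuming the admin kubeconfig is at /etc/kubernetes/admin.conf (the location Kubespray and kubeadm use by default):

```shell
# Copy the admin kubeconfig to the current user and list the nodes.
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
kubectl get nodes
```

All six nodes should report a Ready status.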

 

 
