Infra/Server

[OpenStack] Installing OpenStack with Kolla-Ansible, plus Ceph

김 숨 2025. 3. 29. 18:32

Environment

  • OS: Rocky 9.3
  • Container engine: Docker
  • kolla-ansible version: 2024.1
  • Server hostnames: c01, c02, c03

    Network
    (VIP: 10.201.125.110) ens3 = internal / ens8 = provider / ens5 = dedicated to Ceph
    • node-1
      • ens3 10.201.125.111 / ens8 10.201.124.111 / ens5 10.201.126.111
    • node-2
      • ens3 10.201.125.112 / ens8 10.201.124.112 / ens5 10.201.126.112
    • node-3
      • ens3 10.201.125.113 / ens8 10.201.124.113 / ens5 10.201.126.113

 

* If the interface names differ between servers, it is best to bond them so the names are uniform across nodes.
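A bond can be created with NetworkManager. A rough sketch, where the device names eno1/eno2, the bond mode, and the address are placeholders for your environment:

```shell
# Hypothetical example: enslave two NICs into bond0 so every node exposes the
# same interface name. Device names, bond mode and IP below are placeholders.
nmcli connection add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100"
nmcli connection add type ethernet ifname eno1 master bond0 con-name bond0-eno1
nmcli connection add type ethernet ifname eno2 master bond0 con-name bond0-eno2
nmcli connection modify bond0 ipv4.addresses 10.201.125.111/24 ipv4.method manual
nmcli connection up bond0
```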

 

Reference: OpenStack installation flow


1. Networking setup

  Configure the network interfaces on the controller, compute, and storage nodes → record them in /etc/hosts

2. Configure NTP on each node

3. Install and configure the base services

  MariaDB → RabbitMQ → Memcached → HAProxy → etcd (optional)

4. Install the OpenStack services

   *Keystone → *Glance → *Placement → *Nova → *Neutron → Horizon → Cinder (* = required)
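For step 1, the /etc/hosts entries on every node would look like this, using the hostnames and internal addresses from the environment above:

```
10.201.125.111 c01
10.201.125.112 c02
10.201.125.113 c03
```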

 

Kolla-Ansible execution flow

Generate passwords.yml with kolla-genpwd (editing it by hand if needed) → edit globals.yml → run bootstrap-servers (installs the deploy dependencies) → run prechecks (verifies the state and environment of each node) → pull the images → deploy → install the OpenStack client → run post-deploy

 

1. Environment setup

  1. Configure a user on each node
    The user must be able to run sudo without a password prompt (Ansible needs this).
    * You can also simply run everything as root instead of a regular user.

      
adduser install
passwd install
visudo
# add the following line:
install ALL=(ALL) NOPASSWD: ALL

 2. Key exchange


      
(kolla) $ ssh-keygen
(kolla) $ ssh-copy-id $USER@localhost
(kolla) $ ssh-copy-id $USER@10.201.125.111
(kolla) $ ssh-copy-id $USER@10.201.125.112
(kolla) $ ssh-copy-id $USER@10.201.125.113

 

 

3. Install dependency packages


      
sudo dnf install git python3-devel libffi-devel gcc openssl-devel python3-libselinux
python3 -V

 

4. Install dependencies for the virtual environment

* A virtual environment is not required, but it is recommended.


      
mkdir -p ~/kolla   # no sudo here: the venv must stay writable by the install user
python3 -m venv ~/kolla/venv
source ~/kolla/venv/bin/activate
(kolla) $ pip install -U pip
(kolla) $ pip install 'ansible-core>=2.15,<2.17'
# kolla-ansible 2024.1 requires ansible-core 2.15–2.16

 

Note: kolla-ansible 2024.1 requires an ansible-core version between 2.15 and 2.16.

Check which ansible-core versions your OS actually packages and supports, otherwise the install may fail.
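The version constraint can be checked mechanically with `sort -V`. This helper function is illustrative, not part of kolla-ansible:

```shell
# Illustrative helper: check that an ansible-core version string falls inside
# the [2.15, 2.17) range required by kolla-ansible 2024.1.
version_in_range() {
  v="$1"
  # the lowest of (v, 2.15) must be 2.15, and v must sort strictly below 2.17
  [ "$(printf '%s\n' "$v" 2.15 | sort -V | head -n1)" = "2.15" ] &&
  [ "$(printf '%s\n' "$v" 2.17 | sort -V | head -n1)" = "$v" ] &&
  [ "$v" != "2.17" ]
}
version_in_range 2.16.3 && echo "2.16.3 ok"
version_in_range 2.18.1 || echo "2.18.1 out of range"
```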

 

2. Install kolla-ansible

- Check the git branches and install the version that matches your release


      
(kolla) $ pip install git+https://opendev.org/openstack/kolla-ansible@stable/2024.1

 

- Create the configuration directory


      
(kolla) $ sudo mkdir -p /etc/kolla
(kolla) $ sudo chown $USER:$USER /etc/kolla

 

- Copy the example inventory and passwords.yml files


      
(kolla) $ cp -r ~/kolla/venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla
(kolla) $ cp ~/kolla/venv/share/kolla-ansible/ansible/inventory/multinode /etc/kolla/

3. Install the Ansible Galaxy requirements

  • This only runs successfully if ansible-core is installed correctly

      
(kolla) $ kolla-ansible install-deps

** If Octavia is enabled, the following command is also required


      
kolla-ansible octavia-certificates

4. Initial configuration

  1. Edit the multinode inventory file

      
[control]
10.201.125.[111:113] ansible_ssh_user=install
[network]
10.201.125.[111:113] ansible_ssh_user=install
[compute]
10.201.125.[111:113] ansible_ssh_user=install
[monitoring]
10.201.125.[111:113] ansible_ssh_user=install
[storage]
10.201.125.[111:113] ansible_ssh_user=install
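The `[111:113]` range syntax is expanded by Ansible into one host per value; the equivalent host list, generated in bash:

```shell
# Expand the inventory range 10.201.125.[111:113] the way Ansible does
for i in $(seq 111 113); do
  echo "10.201.125.$i ansible_ssh_user=install"
done
```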
  2. Generate passwords (saved to /etc/kolla/passwords.yml)

      
(kolla) $ kolla-genpwd
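Conceptually, kolla-genpwd fills every empty `key:` entry in passwords.yml with a random value (the real tool also handles special entries such as SSH and Fernet keys). A simplified sketch, with a demo file and keys that are purely illustrative:

```shell
# Demo input: two empty password entries, as shipped in the example passwords.yml
cat > /tmp/passwords-demo.yml <<'EOF'
database_password:
keystone_admin_password:
EOF
# Fill each empty "key:" line with a 16-character random string
while IFS= read -r line; do
  case "$line" in
    *:) printf '%s %s\n' "$line" "$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)" ;;
    *)  printf '%s\n' "$line" ;;
  esac
done < /tmp/passwords-demo.yml > /tmp/passwords-demo.out
cat /tmp/passwords-demo.out
```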
  3. Edit globals.yml
    * Only a minimal set of options is enabled here

a. Image options


      
###################
# Ansible options
###################
workaround_ansible_issue_8743: yes
###############
# Kolla options
###############
kolla_base_distro: "rocky"
openstack_release: "2024.1"
kolla_internal_vip_address: "10.201.125.110"
kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
##################
# Container engine
##################
kolla_container_engine: docker

b. Networking


      
##############################
# Neutron - Networking Options
##############################
network_interface: "ens3"
neutron_external_interface: "ens8"

c. Service enablement


      
###################
# OpenStack options
###################
enable_glance: "{{ enable_openstack_core | bool }}"
#enable_hacluster: "no"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_keystone: "{{ enable_openstack_core | bool }}"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "{{ enable_openstack_core | bool }}"
enable_nova: "{{ enable_openstack_core | bool }}"
enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
enable_cinder: "yes"
enable_cinder_backup: "yes"
#enable_cinder_backend_hnas_nfs: "no"
#enable_cinder_backend_iscsi: "{{ enable_cinder_backend_lvm | bool }}"
#enable_cinder_backend_lvm: "yes"
enable_horizon: "{{ enable_openstack_core | bool }}"
## For Ceph integration
ceph_glance_user: "glance"
ceph_glance_keyring: "client.{{ ceph_glance_user }}.keyring"
ceph_glance_pool_name: "images"
ceph_cinder_user: "cinder"
ceph_cinder_keyring: "client.{{ ceph_cinder_user }}.keyring"
ceph_cinder_pool_name: "volumes"
ceph_cinder_backup_user: "cinder-backup"
ceph_cinder_backup_keyring: "client.{{ ceph_cinder_backup_user }}.keyring"
ceph_cinder_backup_pool_name: "backups"
ceph_nova_keyring: "client.nova.keyring"
ceph_nova_user: "nova"
ceph_nova_pool_name: "vms"
keystone_admin_user: "admin"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
prechecks_enable_host_ntp_checks: false

      
egrep -v '^#|^$' /etc/kolla/globals.yml

 Double-check the result with the command above. (The sample output below appears to have been captured on a host using bonded interfaces, so its VIP and interface names differ from the values set earlier.)


      
---
workaround_ansible_issue_8743: yes
kolla_base_distro: "rocky"
openstack_release: "2024.1"
kolla_internal_vip_address: "10.201.211.10"
kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
kolla_container_engine: docker
network_interface: "bond0"
neutron_external_interface: "bonds0"
enable_openstack_core: "yes"
enable_glance: "{{ enable_openstack_core | bool }}"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_keystone: "{{ enable_openstack_core | bool }}"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "{{ enable_openstack_core | bool }}"
enable_nova: "{{ enable_openstack_core | bool }}"
enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
enable_cinder: "yes"
enable_horizon: "{{ enable_openstack_core | bool }}"
ceph_glance_user: "glance"
ceph_glance_keyring: "client.{{ ceph_glance_user }}.keyring"
ceph_glance_pool_name: "images"
ceph_cinder_user: "cinder"
ceph_cinder_keyring: "client.{{ ceph_cinder_user }}.keyring"
ceph_cinder_pool_name: "volumes"
ceph_cinder_backup_user: "cinder-backup"
ceph_cinder_backup_keyring: "client.{{ ceph_cinder_backup_user }}.keyring"
ceph_cinder_backup_pool_name: "backups"
ceph_nova_keyring: "client.nova.keyring"
ceph_nova_user: "nova"
ceph_nova_pool_name: "vms"
keystone_admin_user: "admin"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
prechecks_enable_host_ntp_checks: false

 ** (Note) If you set enable_cinder_backend_lvm: "yes"

you must create the cinder-volumes volume group on all three nodes:


      
(kolla) [install@node-1 kolla]$ lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda           8:0    0  50G  0 disk
├─sda1        8:1    0   1G  0 part /boot
└─sda2        8:2    0  49G  0 part
  ├─rl-root 253:0    0  44G  0 lvm  /
  └─rl-swap 253:1    0   5G  0 lvm  [SWAP]
sdb           8:16   0 100G  0 disk
$ sudo pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
$ sudo vgcreate cinder-volumes /dev/sdb
  Volume group "cinder-volumes" successfully created


5. Install Ceph

* To install/deploy Ceph on the other hosts,

a container engine (Docker) and chrony must already be installed on them.

 

In my opinion it is also convenient to install these prerequisites in advance with kolla-ansible bootstrap-servers,

since the bootstrap-servers step includes the following actions:

  • installing and configuring the Docker engine, configuring the NTP daemon, disabling the firewall, and disabling SELinux

      
dnf search release-ceph
dnf install --assumeyes centos-release-ceph-squid
dnf install --assumeyes cephadm
CEPH_RELEASE=19.2.1
curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
chmod +x cephadm
./cephadm add-repo --release 19.2.1
# or: ./cephadm add-repo --release squid
./cephadm install
which cephadm
./cephadm bootstrap --mon-ip 10.201.127.111 --allow-fqdn-hostname
./cephadm shell -- ceph -s
./cephadm install ceph-common
ssh-copy-id -f -i /etc/ceph/ceph.pub root@c02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@c03
ceph status
ceph orch host add c02 10.201.127.112
ceph orch host add c03 10.201.127.113
ceph orch host ls
ceph orch apply mon --placement="c01,c02,c03"
ceph orch apply mgr --placement="c01,c02,c03"
#ceph orch apply mds myfs --placement="c01,c02,c03"
# scan all available disks on the hosts and create OSDs on them automatically
ceph orch apply osd --all-available-devices
ceph status
ceph orch ps
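Before relying on `--all-available-devices`, it can help to check which disks cephadm actually considers available (a device only qualifies if it is empty, with no partitions or filesystem):

```shell
# List the storage devices cephadm discovered on each host and their availability
ceph orch device ls
# Force a fresh inventory scan for a single host
ceph orch device ls c02 --refresh
```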

* Notes

 1) To specify the dashboard user yourself:


      
cephadm bootstrap --mon-ip 127.0.0.1 --registry-json cephadm.txt --initial-dashboard-user admin --initial-dashboard-password zbiql951ar --dashboard-password-noupdate --allow-fqdn-hostname

 

2) To deploy with a separate cluster-spec.yml:


      
./cephadm bootstrap --ssh-user=root --mon-ip 10.201.127.113 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json

 

cluster-spec.yml


      
---
service_type: host
addr: 10.201.127.111   ## <XXX.XXX.XXX.XXX>
hostname: c01          ## <ceph-hostname-1>
location:
  root: default
labels:
  - osd
  - mon
  - mgr
---
service_type: host
addr: 10.201.127.112
hostname: c02
labels:
  - mgr
  - osd
  - mon
---
service_type: host
addr: 10.201.127.113
hostname: c03
labels:
  - osd
  - mon
  - mgr
---
service_type: mon
placement:
  label: "mon"
---
service_type: mds
service_id: fs_name
placement:
  label: "mds"
---
service_type: mgr
service_name: mgr
placement:
  label: "mgr"
---

 


      
for pool_name in volumes backups images vms metrics; do
  ceph osd pool create $pool_name
done
ceph auth get-or-create client.cinder mon 'profile rbd' osd 'allow class-read object_prefix rbd_children, profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images' -o /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups' -o /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=vms, allow rx pool=images' -o /etc/ceph/ceph.client.nova.keyring
mkdir /etc/kolla/config
mkdir -p /etc/kolla/config/cinder/cinder-volume
mkdir /etc/kolla/config/cinder/cinder-backup
mkdir /etc/kolla/config/glance
mkdir /etc/kolla/config/nova
mkdir /etc/kolla/config/metrics
sed -i 's/\t//g' /etc/ceph/ceph.conf
cat /etc/ceph/ceph.conf
cp /etc/ceph/ceph.conf /etc/kolla/config/cinder/
cp /etc/ceph/ceph.conf /etc/kolla/config/nova/
cp /etc/ceph/ceph.conf /etc/kolla/config/glance/
cp /etc/ceph/ceph.conf /etc/kolla/config/metrics/
cp /etc/ceph/ceph.client.glance.keyring /etc/kolla/config/glance/
cp /etc/ceph/ceph.client.nova.keyring /etc/kolla/config/nova/
cp /etc/ceph/ceph.client.gnocchi.keyring /etc/kolla/config/metrics/
cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/nova/
cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
cp /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-backup/
cp /etc/ceph/ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
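After the copies above, the override directory should look roughly like this (kolla-ansible merges whatever it finds under /etc/kolla/config into the matching service containers):

```
/etc/kolla/config/
├── cinder/
│   ├── ceph.conf
│   ├── cinder-backup/
│   │   ├── ceph.client.cinder-backup.keyring
│   │   └── ceph.client.cinder.keyring
│   └── cinder-volume/
│       └── ceph.client.cinder.keyring
├── glance/
│   ├── ceph.conf
│   └── ceph.client.glance.keyring
├── metrics/
│   ├── ceph.conf
│   └── ceph.client.gnocchi.keyring
└── nova/
    ├── ceph.conf
    ├── ceph.client.cinder.keyring
    └── ceph.client.nova.keyring
```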

6. Deploy with kolla-ansible


      
(kolla) $ kolla-ansible -i /etc/kolla/multinode bootstrap-servers

      
(kolla) $ kolla-ansible -i /etc/kolla/multinode prechecks

      
(kolla) $ kolla-ansible -i /etc/kolla/multinode deploy

 

 * If no inventory file is passed explicitly, kolla-ansible falls back to /etc/ansible/hosts

 

7. Using OpenStack


      
(kolla) $ pip install python-openstackclient -c https://releases.openstack.org/constraints/upper/2024.1

      
(kolla) $ kolla-ansible -i /etc/kolla/multinode post-deploy

8. Verify the configuration


      
(kolla) $ source /etc/kolla/admin-openrc.sh
(kolla) [install@node-1 kolla]$ openstack service list
+----------------------------------+-----------+----------------+
| ID | Name | Type |
+----------------------------------+-----------+----------------+
| 0df7f423e35249bba961acc646be644c | glance | image |
| 3bc3ff492c3a417593c49ea2e03618c8 | placement | placement |
| 4e5c98039dcc4972834404b3bb805d37 | neutron | network |
| 770571f36a0c4737bc50adf48f8b1762 | keystone | identity |
| d8e3ff74aa2045aa9e15402513d8dbc7 | heat-cfn | cloudformation |
| ddf2104703524b9aad7c6d3ed4120849 | nova | compute |
| e5142896643a413b99a0234d174c2c1c | cinderv3 | volumev3 |
| eebfac21cb364766b1b279dd8f86190c | heat | orchestration |
+----------------------------------+-----------+----------------+
(kolla) [install@node-1 kolla]$ openstack compute service list
+--------------------------------------+----------------+--------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+--------------------------------------+----------------+--------+----------+---------+-------+----------------------------+
| b2d34940-6bac-423a-9430-8978d98c6fca | nova-scheduler | node-1 | internal | enabled | up | 2025-02-05T04:12:01.000000 |
| a36b88f5-fd74-4ae7-af83-bb91932ef1f1 | nova-scheduler | node-3 | internal | enabled | up | 2025-02-05T04:12:00.000000 |
| a35d39e6-2ceb-4be4-b584-440a4e2de13d | nova-scheduler | node-2 | internal | enabled | up | 2025-02-05T04:12:01.000000 |
| 95683513-7f6d-40b6-997e-da3a9ce51ebf | nova-conductor | node-3 | internal | enabled | up | 2025-02-05T04:12:02.000000 |
| eff2fe2c-9a69-4372-949d-3d43baa57898 | nova-conductor | node-1 | internal | enabled | up | 2025-02-05T04:12:02.000000 |
| 057333ef-96dd-422d-9512-3d747e7936d3 | nova-conductor | node-2 | internal | enabled | up | 2025-02-05T04:12:02.000000 |
| bdf2078e-502a-4d2f-b3dd-86e38cf3b6fc | nova-compute | node-3 | nova | enabled | up | 2025-02-05T04:12:01.000000 |
| 93b5552b-a843-43c6-a68d-2546152c933d | nova-compute | node-1 | nova | enabled | up | 2025-02-05T04:12:02.000000 |
| 0c4c7914-9130-43a3-8f9c-1088aabeaca2 | nova-compute | node-2 | nova | enabled | up | 2025-02-05T04:12:02.000000 |
+--------------------------------------+----------------+--------+----------+---------+-------+----------------------------+
(kolla) [install@node-1 kolla]$ openstack network agent list
+--------------------------------------+--------------------+--------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+--------+-------------------+-------+-------+---------------------------+
| 2149c5ef-667c-4594-a785-0deeef1d1ed3 | Metadata agent | node-1 | None | :-) | UP | neutron-metadata-agent |
| 50fa1a70-f9bd-46e0-805a-8694c975009c | Metadata agent | node-3 | None | :-) | UP | neutron-metadata-agent |
| 52943a09-b756-4c5d-847f-b7c6e6e8bbc5 | L3 agent | node-3 | nova | :-) | UP | neutron-l3-agent |
| 6701c2ef-3026-492d-aa06-0e1be23ed210 | Open vSwitch agent | node-3 | None | :-) | UP | neutron-openvswitch-agent |
| 70b396f3-19eb-4727-8c67-d5d929c3d344 | Open vSwitch agent | node-1 | None | :-) | UP | neutron-openvswitch-agent |
| 71d765e9-b7db-4833-accc-0cf8f2e51555 | L3 agent | node-1 | nova | :-) | UP | neutron-l3-agent |
| b125deae-e78c-474a-b9f7-c5db8a15fcae | DHCP agent | node-3 | nova | :-) | UP | neutron-dhcp-agent |
| bfcc96f0-4d90-497b-b8ce-b52649f85d66 | L3 agent | node-2 | nova | :-) | UP | neutron-l3-agent |
| cad340fa-4d53-4c4d-a856-02ca16b1c91c | DHCP agent | node-2 | nova | :-) | UP | neutron-dhcp-agent |
| cf7cc91f-8257-4bb0-961f-3cf7ed2aaccf | Open vSwitch agent | node-2 | None | :-) | UP | neutron-openvswitch-agent |
| e14c62c9-793b-4e63-b728-10d9dbd74e2a | Metadata agent | node-2 | None | :-) | UP | neutron-metadata-agent |
| f7b5c05a-1b89-496b-bea5-f15be939302d | DHCP agent | node-1 | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+--------+-------------------+-------+-------+---------------------------+
(kolla) [install@node-1 kolla]$ openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | node-3 | nova | enabled | up | 2025-02-05T04:12:27.000000 |
| cinder-scheduler | node-2 | nova | enabled | up | 2025-02-05T04:12:27.000000 |
| cinder-scheduler | node-1 | nova | enabled | up | 2025-02-05T04:12:27.000000 |
| cinder-volume | node-1@lvm-1 | nova | enabled | up | 2025-02-05T04:12:30.000000 |
| cinder-volume | node-3@lvm-1 | nova | enabled | up | 2025-02-05T04:12:28.000000 |
| cinder-volume | node-2@lvm-1 | nova | enabled | up | 2025-02-05T04:12:29.000000 |
| cinder-backup | node-3 | nova | enabled | down | 2025-02-05T00:23:09.000000 |
| cinder-backup | node-2 | nova | enabled | down | 2025-02-05T00:23:10.000000 |
| cinder-backup | node-1 | nova | enabled | down | 2025-02-05T00:23:10.000000 |
+------------------+--------------+------+---------+-------+----------------------------+

 

* Networking

 

You can attach to the openvswitch_vswitchd container to inspect the Open vSwitch configuration:

`ovs-vsctl show` 

br-int is created by OVS (Open vSwitch) and managed by the Neutron Open vSwitch agent
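For example (container name per kolla defaults; bridge names may vary with your configuration):

```shell
# Show the full OVS configuration (bridges, ports, tunnels)
docker exec openvswitch_vswitchd ovs-vsctl show
# List just the bridges; a default kolla deployment creates br-int, br-tun and br-ex
docker exec openvswitch_vswitchd ovs-vsctl list-br
```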

 

To get a shell in any of the containers:


      
docker exec -it <container_ID> /bin/bash

 
