Ceph Distributed Storage Installation

I. Environment Preparation

Hostname   IP               Software
ceph-1 172.16.100.100 ceph-deploy ceph ceph-radosgw
ceph-2 172.16.100.101 ceph ceph-radosgw
ceph-3 172.16.100.102 ceph ceph-radosgw
1. Disable the firewall and SELinux on all nodes
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld
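The sed substitution can be dry-run against a scratch copy before touching /etc/selinux/config on the nodes; a minimal sketch (the scratch file stands in for the real config):

```shell
# Copy of the relevant lines from /etc/selinux/config, written to a
# scratch file so the substitution can be verified without root.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Same edit as above, applied to the scratch copy.
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$cfg"

grep '^SELINUX=' "$cfg"
rm -f "$cfg"
```

Note that disabling SELinux in the config file only takes effect after a reboot; `setenforce 0` switches it to permissive for the current boot.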
2. Add the Ceph yum repository on all nodes
[root@ceph-1 ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
priority=1

[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=0
priority=1
3. Configure passwordless SSH for the cluster and /etc/hosts (update the hosts file on all nodes)
[root@ceph-1 ~]# ssh-keygen -t rsa  # press Enter at every prompt
[root@ceph-1 ~]# for host in ceph-1 ceph-2 ceph-3; do ssh-copy-id -i .ssh/id_rsa.pub $host; done
[root@ceph-1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.100.100 ceph-1
172.16.100.101 ceph-2
172.16.100.102 ceph-3
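Since every node needs the same three entries, a guarded append keeps re-runs of the setup from duplicating lines. A small sketch; the `add_host` helper is illustrative and is demonstrated on a scratch file rather than the real /etc/hosts:

```shell
# Append "IP hostname" to a hosts file only when the hostname is absent,
# so the setup step can be repeated safely.
add_host() {
  local ip=$1 name=$2 file=$3
  grep -qw "$name" "$file" || echo "$ip $name" >> "$file"
}

demo=$(mktemp)                            # stand-in for /etc/hosts
add_host 172.16.100.100 ceph-1 "$demo"
add_host 172.16.100.101 ceph-2 "$demo"
add_host 172.16.100.102 ceph-3 "$demo"
add_host 172.16.100.100 ceph-1 "$demo"    # already present: skipped
cat "$demo"
```

On the nodes, pass `/etc/hosts` as the third argument (as root).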
4. Install ceph-deploy (the deployment tool) on ceph-1, the deployment node
[root@ceph-1 ~]# yum install -y ceph-deploy
5. Create the cluster on ceph-1
[root@ceph-1 ~]# mkdir /etc/ceph
[root@ceph-1 ~]# cd /etc/ceph
[root@ceph-1 ceph]# ceph-deploy new ceph-1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy new ceph-1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f329e1d9668>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f329d954638>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-1']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /usr/sbin/ip link show
[ceph-1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-1][DEBUG ] IP addresses found: [u'172.16.100.100']
[ceph_deploy.new][DEBUG ] Resolving host ceph-1
[ceph_deploy.new][DEBUG ] Monitor ceph-1 at 172.16.100.100
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.16.100.100']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@ceph-1 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

Notes:
ceph.conf                cluster configuration file
ceph-deploy-ceph.log     log of the ceph-deploy run
ceph.mon.keyring         monitor authentication keyring
6. Install the Ceph packages

Install the ceph and ceph-radosgw packages on every node in the cluster:

yum install ceph ceph-radosgw -y
[root@ceph-1 ceph]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
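All three nodes should end up on the same release; one quick check is to cut the version number out of `ceph -v`. Parsed here from the captured output above, so the snippet runs even where ceph is not installed:

```shell
# The third whitespace-separated field of `ceph -v` is the version number.
out='ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)'
ver=$(echo "$out" | awk '{print $3}')
echo "$ver"
```

On the live cluster, running `ssh $host ceph -v` in a loop over ceph-1, ceph-2, and ceph-3 gives a side-by-side comparison.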
7. Create the mon (monitor) component
[root@ceph-1 ceph]# vim ceph.conf 
[global]
fsid = bf545aa3-4f5f-48ee-856d-5b9412370880
mon_initial_members = ceph-1
mon_host = 172.16.100.100
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 172.16.100.0/24    # add this line

[root@ceph-1 ceph]# ceph-deploy mon create-initial    # create the mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbda63a1fc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7fbda6389758>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-1
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-1 ...
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.7.1908 Core
[ceph-1][DEBUG ] determining if provided host has same hostname in remote
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] deploying mon to ceph-1
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] remote hostname: ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][DEBUG ] create the mon path if it does not exist
[ceph-1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-1/done
[ceph-1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-1/done
[ceph-1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-1.mon.keyring
[ceph-1][DEBUG ] create the monitor keyring file
[ceph-1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph-1 --keyring /var/lib/ceph/tmp/ceph-ceph-1.mon.keyring --setuser 167 --setgroup 167
[ceph-1][DEBUG ] ceph-mon: renaming mon.noname-a 172.16.100.100:6789/0 to mon.ceph-1
[ceph-1][DEBUG ] ceph-mon: set fsid to bf545aa3-4f5f-48ee-856d-5b9412370880
[ceph-1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ceph-1 for mon.ceph-1
[ceph-1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-1.mon.keyring
[ceph-1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-1][DEBUG ] create the init path if it does not exist
[ceph-1][INFO  ] Running command: systemctl enable ceph.target
[ceph-1][INFO  ] Running command: systemctl enable ceph-mon@ceph-1
[ceph-1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-1.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph-1][INFO  ] Running command: systemctl start ceph-mon@ceph-1
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][DEBUG ] status for monitor: mon.ceph-1
[ceph-1][DEBUG ] {
[ceph-1][DEBUG ]   "election_epoch": 3, 
[ceph-1][DEBUG ]   "extra_probe_peers": [], 
[ceph-1][DEBUG ]   "monmap": {
[ceph-1][DEBUG ]     "created": "2020-10-06 11:41:39.629417", 
[ceph-1][DEBUG ]     "epoch": 1, 
[ceph-1][DEBUG ]     "fsid": "bf545aa3-4f5f-48ee-856d-5b9412370880", 
[ceph-1][DEBUG ]     "modified": "2020-10-06 11:41:39.629417", 
[ceph-1][DEBUG ]     "mons": [
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.16.100.100:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-1", 
[ceph-1][DEBUG ]         "rank": 0
[ceph-1][DEBUG ]       }
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "name": "ceph-1", 
[ceph-1][DEBUG ]   "outside_quorum": [], 
[ceph-1][DEBUG ]   "quorum": [
[ceph-1][DEBUG ]     0
[ceph-1][DEBUG ]   ], 
[ceph-1][DEBUG ]   "rank": 0, 
[ceph-1][DEBUG ]   "state": "leader", 
[ceph-1][DEBUG ]   "sync_provider": []
[ceph-1][DEBUG ] }
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][INFO  ] monitor: mon.ceph-1 is running
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.ceph-1
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.ceph-1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpRuwczj
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] fetch remote file
[ceph-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-1/keyring auth get client.admin
[ceph-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-1/keyring auth get client.bootstrap-mds
[ceph-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-1/keyring auth get client.bootstrap-mgr
[ceph-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-1/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[ceph-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-1/keyring auth get client.bootstrap-osd
[ceph-1][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph-1/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.client.admin.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpRuwczj
[root@ceph-1 ceph]# ceph -s    # check the cluster status
  cluster:
    id:     bf545aa3-4f5f-48ee-856d-5b9412370880
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-1
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
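Scripts that wait for the cluster to settle usually only need the health field; it can be pulled out of `ceph -s`. Run here against the captured output above rather than a live cluster:

```shell
# Extract the value after "health:" from `ceph -s` output.
status='  cluster:
    id:     bf545aa3-4f5f-48ee-856d-5b9412370880
    health: HEALTH_OK'
health=$(echo "$status" | awk '/health:/ {print $2}')
echo "$health"
```

On a node this becomes `ceph -s | awk '/health:/ {print $2}'`, which prints HEALTH_OK, HEALTH_WARN, or HEALTH_ERR.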
8. Create the mgr
ceph-deploy mgr create ceph-1
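After this, the mgr line in `ceph -s` should change from "no daemons active" to an active daemon, whose name can be parsed out if a script needs it. The sample line below is illustrative of mimic's output format:

```shell
# "mgr: ceph-1(active)" -> pull out the active daemon's name.
line='    mgr: ceph-1(active)'
active=$(echo "$line" | sed -n 's/.*mgr: \([^(]*\)(active).*/\1/p')
echo "$active"
```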
9. Ceph cluster disk management
  1. Add a disk to each physical machine

  2. Create the OSDs
    A disk joins the Ceph cluster as an OSD.

    List the disks on every node, then wipe them with the zap command to prepare them for OSD creation.

    [root@ceph-1 ceph]# ceph-deploy disk list ceph-1
    [root@ceph-1 ceph]# ceph-deploy disk list ceph-2
    [root@ceph-1 ceph]# ceph-deploy disk list ceph-3
    
    [root@ceph-1 ceph]# ceph-deploy disk zap ceph-1 /dev/vdb
    [root@ceph-1 ceph]# ceph-deploy disk zap ceph-2 /dev/vdb
    [root@ceph-1 ceph]# ceph-deploy disk zap ceph-3 /dev/vdb

    Create the OSD disks

    [root@ceph-1 ceph]# ceph-deploy osd create --data /dev/vdb ceph-1
    [root@ceph-1 ceph]# ceph-deploy osd create --data /dev/vdb ceph-2
    [root@ceph-1 ceph]# ceph-deploy osd create --data /dev/vdb ceph-3
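The three zap/create pairs above can be collapsed into one loop; a sketch, assuming (as in this environment) the data disk is /dev/vdb on every node:

```shell
# Wipe each node's /dev/vdb and create an OSD on it, in one pass.
# Run from the deploy directory (/etc/ceph) on ceph-1.
osd_prepare() {
  local host
  for host in ceph-1 ceph-2 ceph-3; do
    ceph-deploy disk zap "$host" /dev/vdb
    ceph-deploy osd create --data /dev/vdb "$host"
  done
}
```

Once the function has been run, `ceph -s` should report three OSDs up and in.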