Posted by POOPE on 2021-7-8 18:32:53

TiDB Cluster Operations: Scaling Out and Scaling In

  A TiDB cluster can be scaled out and scaled in without interrupting online service. This post uses TiUP to add and remove TiDB, TiKV, PD, TiCDC, and TiFlash nodes in a cluster.
1. Cluster status before scaling out
# tiup cluster display tidbcluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.5.2/tiup-cluster display tidbcluster
Cluster type:       tidb
Cluster name:       tidbcluster
Cluster version:    v5.1.0
Deploy user:      tidb
SSH type:         builtin
Dashboard URL:      http://192.168.120.201:2379/dashboard
ID                     Role        Host             Ports                            OS/Arch       Status   Data Dir                             Deploy Dir
--                     ----        ----             -----                            -------       ------   --------                             ----------
192.168.120.202:8300   cdc         192.168.120.202  8300                             linux/x86_64  Up       /u01/tidb/tidb-data/cdc-8300         /u01/tidb/tidb-deploy/cdc-8300
192.168.120.203:8300   cdc         192.168.120.203  8300                             linux/x86_64  Up       /u01/tidb/tidb-data/cdc-8300         /u01/tidb/tidb-deploy/cdc-8300
192.168.120.203:3000   grafana     192.168.120.203  3000                             linux/x86_64  Up       -                                    /u01/tidb/tidb-deploy/grafana-3000
192.168.120.201:2379   pd          192.168.120.201  2379/2380                        linux/x86_64  Up|L|UI  /u01/tidb/tidb-data/pd-2379          /u01/tidb/tidb-deploy/pd-2379
192.168.120.203:2379   pd          192.168.120.203  2379/2380                        linux/x86_64  Up       /u01/tidb/tidb-data/pd-2379          /u01/tidb/tidb-deploy/pd-2379
192.168.120.202:9090   prometheus  192.168.120.202  9090                             linux/x86_64  Up       /u01/tidb/tidb-data/prometheus-9090  /u01/tidb/tidb-deploy/prometheus-9090
192.168.120.201:4000   tidb        192.168.120.201  4000/10080                       linux/x86_64  Up       -                                    /u01/tidb/tidb-deploy/tidb-4000
192.168.120.202:4000   tidb        192.168.120.202  4000/10080                       linux/x86_64  Up       -                                    /u01/tidb/tidb-deploy/tidb-4000
192.168.120.201:9000   tiflash     192.168.120.201  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /u01/tidb/tidb-data/tiflash-9000     /u01/tidb/tidb-deploy/tiflash-9000
192.168.120.201:20160  tikv        192.168.120.201  20160/20180                      linux/x86_64  Up       /u01/tidb/tidb-data/tikv-20160       /u01/tidb/tidb-deploy/tikv-20160
192.168.120.202:20161  tikv        192.168.120.202  20161/20181                      linux/x86_64  Up       /u01/tidb/tidb-data/tikv-20161       /u01/tidb/tidb-deploy/tikv-20161
192.168.120.203:20162  tikv        192.168.120.203  20162/20182                      linux/x86_64  Up       /u01/tidb/tidb-data/tikv-20162       /u01/tidb/tidb-deploy/tikv-20162
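  As the cluster grows, the full listing gets long; the output can be narrowed to a single role or instance. A small sketch, assuming the `-R` (role) and `-N` (node) filter flags of recent tiup-cluster releases:
# tiup cluster display tidbcluster -R pd
# tiup cluster display tidbcluster -N 192.168.120.201:20160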
2. Create the scale-out.yaml file
  Here a new node, 192.168.120.204, is added, with TiDB, CDC, and PD services deployed on it:
# vi scale-out.yaml
tidb_servers:
  - host: 192.168.120.204
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /u01/tidb/tidb-deploy/tidb-4000
    log_dir: /u01/tidb/tidb-deploy/tidb-4000/log

cdc_servers:
  - host: 192.168.120.204

pd_servers:
  - host: 192.168.120.204
    ssh_port: 22
    name: pd-192.168.120.204-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /u01/tidb/tidb-deploy/pd-2379
    data_dir: /u01/tidb/tidb-data/pd-2379
    log_dir: /u01/tidb/tidb-deploy/pd-2379/log
3. Scale out with the tiup command
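  Before applying the change, the new host can optionally be checked against the scale-out topology for environment risks. A sketch; the scale-out form of `check` and its `--apply` auto-fix flag are assumed available in your tiup-cluster version:
# tiup cluster check tidbcluster scale-out.yaml --cluster --user root
# tiup cluster check tidbcluster scale-out.yaml --cluster --apply --user root
  The scale-out itself is then a single command: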
# tiup cluster scale-out tidbcluster scale-out.yaml
Starting component `cluster`: /root/.tiup/components/cluster/v1.5.2/tiup-cluster scale-out tidbcluster scale-out.yaml
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidbcluster
Cluster version: v5.1.0
Role  Host             Ports       OS/Arch       Directories
----  ----             -----       -------       -----------
pd    192.168.120.204  2379/2380   linux/x86_64  /u01/tidb/tidb-deploy/pd-2379,/u01/tidb/tidb-data/pd-2379
tidb  192.168.120.204  4000/10080  linux/x86_64  /u01/tidb/tidb-deploy/tidb-4000
cdc   192.168.120.204  8300        linux/x86_64  /u01/tidb/tidb-deploy/cdc-8300,/u01/tidb/tidb-data/cdc-8300
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? : (default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub

- Download tidb:v5.1.0 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=192.168.120.204, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=192.168.120.204
+ [ Serial ] - Mkdir: host=192.168.120.204, directories='/u01/tidb/tidb-deploy','/u01/tidb/tidb-data'
+ - UserSSH: user=tidb, host=192.168.120.203
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.203
+ - UserSSH: user=tidb, host=192.168.120.203
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.203

+ [ Serial ] - UserSSH: user=tidb, host=192.168.120.204
+ [ Serial ] - UserSSH: user=tidb, host=192.168.120.204
+ [ Serial ] - Mkdir: host=192.168.120.204, directories='/u01/tidb/tidb-deploy/cdc-8300','/u01/tidb/tidb-deploy/cdc-8300/bin','/u01/tidb/tidb-deploy/cdc-8300/conf','/u01/tidb/tidb-deploy/cdc-8300/scripts'
+ [ Serial ] - Mkdir: host=192.168.120.204, directories='/u01/tidb/tidb-deploy/tidb-4000','/u01/tidb/tidb-deploy/tidb-4000/bin','/u01/tidb/tidb-deploy/tidb-4000/conf','/u01/tidb/tidb-deploy/tidb-4000/scripts'
+ [ Serial ] - UserSSH: user=tidb, host=192.168.120.204
+ [ Serial ] - Mkdir: host=192.168.120.204, directories='/u01/tidb/tidb-deploy/pd-2379','/u01/tidb/tidb-deploy/pd-2379/bin','/u01/tidb/tidb-deploy/pd-2379/conf','/u01/tidb/tidb-deploy/pd-2379/scripts'
+ [ Serial ] - Mkdir: host=192.168.120.204, directories='/u01/tidb/tidb-deploy/monitor-9100','/u01/tidb/tidb-data/monitor-9100','/u01/tidb/tidb-deploy...
- Copy blackbox_exporter -> 192.168.120.204 ... Done
- Copy node_exporter -> 192.168.120.204 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidbcluster, user=tidb, host=192.168.120.204, service=tidb-4000.service, deploy_dir=/u01/tidb/tidb-deploy/tidb-4000, data_dir=[], log_dir=/u01/tidb/tidb-deploy/tidb-4000/log, cache_dir=
+ [ Serial ] - ScaleConfig: cluster=tidbcluster, user=tidb, host=192.168.120.204, service=pd-2379.service, deploy_dir=/u01/tidb/tidb-deploy/pd-2379, data_dir=, log_dir=/u01/tidb/tidb-deploy/pd-2379/log, cache_dir=
+ [ Serial ] - ScaleConfig: cluster=tidbcluster, user=tidb, host=192.168.120.204, service=cdc-8300.service, deploy_dir=/u01/tidb/tidb-deploy/cdc-8300, data_dir=, log_dir=/u01/tidb/tidb-deploy/cdc-8300/log, cache_dir=
script path: /root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/run_pd_192.168.120.204_2379.sh
+ Check status
Enabling component pd
      Enabling instance 192.168.120.204:2379
      Enable instance 192.168.120.204:2379 success
Enabling component tidb
      Enabling instance 192.168.120.204:4000
      Enable instance 192.168.120.204:4000 success
Enabling component cdc
      Enabling instance 192.168.120.204:8300
      Enable instance 192.168.120.204:8300 success
Enabling component node_exporter
      Enabling instance 192.168.120.204
      Enable 192.168.120.204 success
Enabling component blackbox_exporter
      Enabling instance 192.168.120.204
      Enable 192.168.120.204 success
+ - UserSSH: user=tidb, host=192.168.120.204
+ - UserSSH: user=tidb, host=192.168.120.204
+ - UserSSH: user=tidb, host=192.168.120.204
+ [ Serial ] - Save meta
+ [ Serial ] - StartCluster
Starting component pd
      Starting instance 192.168.120.204:2379
      Start instance 192.168.120.204:2379 success
Starting component tidb
      Starting instance 192.168.120.204:4000
      Start instance 192.168.120.204:4000 success
Starting component cdc
      Starting instance 192.168.120.204:8300
      Start instance 192.168.120.204:8300 success
Starting component node_exporter
      Starting instance 192.168.120.204
      Start 192.168.120.204 success
Starting component blackbox_exporter
      Starting instance 192.168.120.204
      Start 192.168.120.204 success
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.203, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/grafana-3000.service, deploy_dir=/u01/tidb/tidb-deploy/grafana-3000, data_dir=[], log_dir=/u01/tidb/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.204, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/pd-2379.service, deploy_dir=/u01/tidb/tidb-deploy/pd-2379, data_dir=, log_dir=/u01/tidb/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.204, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tidb-4000.service, deploy_dir=/u01/tidb/tidb-deploy/tidb-4000, data_dir=[], log_dir=/u01/tidb/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.203, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/cdc-8300.service, deploy_dir=/u01/tidb/tidb-deploy/cdc-8300, data_dir=, log_dir=/u01/tidb/tidb-deploy/cdc-8300/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.204, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/cdc-8300.service, deploy_dir=/u01/tidb/tidb-deploy/cdc-8300, data_dir=, log_dir=/u01/tidb/tidb-deploy/cdc-8300/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.201, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/pd-2379.service, deploy_dir=/u01/tidb/tidb-deploy/pd-2379, data_dir=, log_dir=/u01/tidb/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.201, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tiflash-9000.service, deploy_dir=/u01/tidb/tidb-deploy/tiflash-9000, data_dir=, log_dir=/u01/tidb/tidb-deploy/tiflash-9000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.201, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tikv-20160.service, deploy_dir=/u01/tidb/tidb-deploy/tikv-20160, data_dir=, log_dir=/u01/tidb/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.202, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/prometheus-9090.service, deploy_dir=/u01/tidb/tidb-deploy/prometheus-9090, data_dir=, log_dir=/u01/tidb/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.202, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/cdc-8300.service, deploy_dir=/u01/tidb/tidb-deploy/cdc-8300, data_dir=, log_dir=/u01/tidb/tidb-deploy/cdc-8300/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.203, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/pd-2379.service, deploy_dir=/u01/tidb/tidb-deploy/pd-2379, data_dir=, log_dir=/u01/tidb/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.201, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tidb-4000.service, deploy_dir=/u01/tidb/tidb-deploy/tidb-4000, data_dir=[], log_dir=/u01/tidb/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.202, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tikv-20161.service, deploy_dir=/u01/tidb/tidb-deploy/tikv-20161, data_dir=, log_dir=/u01/tidb/tidb-deploy/tikv-20161/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.203, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tikv-20162.service, deploy_dir=/u01/tidb/tidb-deploy/tikv-20162, data_dir=, log_dir=/u01/tidb/tidb-deploy/tikv-20162/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - InitConfig: cluster=tidbcluster, user=tidb, host=192.168.120.202, path=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache/tidb-4000.service, deploy_dir=/u01/tidb/tidb-deploy/tidb-4000, data_dir=[], log_dir=/u01/tidb/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidbcluster/config-cache
+ [ Serial ] - SystemCtl: host=192.168.120.202 action=reload prometheus-9090.service
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
Scaled cluster `tidbcluster` out successfully
  Verify the result:
# tiup cluster display tidbcluster
Starting component `cluster`: /root/.tiup/components/cluster/v1.5.2/tiup-cluster display tidbcluster
Cluster type:       tidb
Cluster name:       tidbcluster
Cluster version:    v5.1.0
Deploy user:      tidb
SSH type:         builtin
Dashboard URL:      http://192.168.120.201:2379/dashboard
ID                     Role        Host             Ports                            OS/Arch       Status   Data Dir                             Deploy Dir
--                     ----        ----             -----                            -------       ------   --------                             ----------
192.168.120.202:8300   cdc         192.168.120.202  8300                             linux/x86_64  Up       /u01/tidb/tidb-data/cdc-8300         /u01/tidb/tidb-deploy/cdc-8300
192.168.120.203:8300   cdc         192.168.120.203  8300                             linux/x86_64  Up       /u01/tidb/tidb-data/cdc-8300         /u01/tidb/tidb-deploy/cdc-8300
192.168.120.204:8300   cdc         192.168.120.204  8300                             linux/x86_64  Up       /u01/tidb/tidb-data/cdc-8300         /u01/tidb/tidb-deploy/cdc-8300
192.168.120.203:3000   grafana     192.168.120.203  3000                             linux/x86_64  Up       -                                    /u01/tidb/tidb-deploy/grafana-3000
192.168.120.201:2379   pd          192.168.120.201  2379/2380                        linux/x86_64  Up|L|UI  /u01/tidb/tidb-data/pd-2379          /u01/tidb/tidb-deploy/pd-2379
192.168.120.203:2379   pd          192.168.120.203  2379/2380                        linux/x86_64  Up       /u01/tidb/tidb-data/pd-2379          /u01/tidb/tidb-deploy/pd-2379
192.168.120.204:2379   pd          192.168.120.204  2379/2380                        linux/x86_64  Up       /u01/tidb/tidb-data/pd-2379          /u01/tidb/tidb-deploy/pd-2379
192.168.120.202:9090   prometheus  192.168.120.202  9090                             linux/x86_64  Up       /u01/tidb/tidb-data/prometheus-9090  /u01/tidb/tidb-deploy/prometheus-9090
192.168.120.201:4000   tidb        192.168.120.201  4000/10080                       linux/x86_64  Up       -                                    /u01/tidb/tidb-deploy/tidb-4000
192.168.120.202:4000   tidb        192.168.120.202  4000/10080                       linux/x86_64  Up       -                                    /u01/tidb/tidb-deploy/tidb-4000
192.168.120.204:4000   tidb        192.168.120.204  4000/10080                       linux/x86_64  Up       -                                    /u01/tidb/tidb-deploy/tidb-4000
192.168.120.201:9000   tiflash     192.168.120.201  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /u01/tidb/tidb-data/tiflash-9000     /u01/tidb/tidb-deploy/tiflash-9000
192.168.120.201:20160  tikv        192.168.120.201  20160/20180                      linux/x86_64  Up       /u01/tidb/tidb-data/tikv-20160       /u01/tidb/tidb-deploy/tikv-20160
192.168.120.202:20161  tikv        192.168.120.202  20161/20181                      linux/x86_64  Up       /u01/tidb/tidb-data/tikv-20161       /u01/tidb/tidb-deploy/tikv-20161
192.168.120.203:20162  tikv        192.168.120.203  20162/20182                      linux/x86_64  Up       /u01/tidb/tidb-data/tikv-20162       /u01/tidb/tidb-deploy/tikv-20162
Total nodes: 15
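  Beyond `display`, two quick sanity checks are possible. A sketch, assuming the versioned `ctl` component form and a MySQL client on the control machine (addresses are the ones used in this post):
# tiup ctl:v5.1.0 pd -u http://192.168.120.201:2379 member
# mysql -h 192.168.120.204 -P 4000 -u root -p
  The first confirms the new PD joined the member list; the second opens a SQL session through the newly added TiDB server.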
4. Scale in with the tiup command
  Scaling in is straightforward. Here all of the services on node 204 are removed in one command, shown after the note below.
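  One caveat first: PD, TiDB, and CDC instances are stopped and destroyed immediately, but scaling in a TiKV or TiFlash store is asynchronous. The store first shows Pending Offline while PD migrates its Regions away, and only later becomes Tombstone. A sketch of that flow (the node address is illustrative, and the `prune` subcommand is assumed available in your tiup-cluster version):
# tiup cluster scale-in tidbcluster --node 192.168.120.203:20162
# tiup cluster display tidbcluster
# tiup cluster prune tidbcluster
  Repeat `display` until the store reports Tombstone, then `prune` removes it for good. The PD/TiDB/CDC removal on node 204 completes in a single pass: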
# tiup cluster scale-in tidbcluster --node 192.168.120.204:8300 --node 192.168.120.204:2379 --node 192.168.120.204:4000
Starting component `cluster`: /root/.tiup/components/cluster/v1.5.2/tiup-cluster scale-in tidbcluster --node 192.168.120.204:8300 --node 192.168.120.204:2379 --node 192.168.120.204:4000
This operation will delete the 192.168.120.204:8300,192.168.120.204:2379,192.168.120.204:4000 nodes in `tidbcluster` and all their data.
Do you want to continue? :(default=N) y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidbcluster/ssh/id_rsa.pub
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.203
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.203
+ - UserSSH: user=tidb, host=192.168.120.203
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.204
+ - UserSSH: user=tidb, host=192.168.120.204
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.201
+ - UserSSH: user=tidb, host=192.168.120.202
+ - UserSSH: user=tidb, host=192.168.120.203
+ - UserSSH: user=tidb, host=192.168.120.204
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes: Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[] ShowUptime:false JSON:false Operation:StartOperation}
Stopping component pd
      Stopping instance 192.168.120.204
      Stop pd 192.168.120.204:2379 success
Destroying component pd
Destroying instance 192.168.120.204
Destroy 192.168.120.204 success
- Destroy pd paths:
Stopping component tidb
      Stopping instance 192.168.120.204
      Stop tidb 192.168.120.204:4000 success
Destroying component tidb
Destroying instance 192.168.120.204
Destroy 192.168.120.204 success
- Destroy tidb paths:
Stopping component cdc
      Stopping instance 192.168.120.204
      Stop cdc 192.168.120.204:8300 success
Destroying component cdc
Destroying instance 192.168.120.204
Destroy 192.168.120.204 success
- Destroy cdc paths:
Stopping component node_exporter
      Stopping instance 192.168.120.204
      Stop 192.168.120.204 success
Stopping component blackbox_exporter
      Stopping instance 192.168.120.204
      Stop 192.168.120.204 success
Destroying monitored 192.168.120.204
      Destroying instance 192.168.120.204
Destroy monitored on 192.168.120.204 success
Delete public key 192.168.120.204
Delete public key 192.168.120.204 success
+ [ Serial ] - UpdateMeta: cluster=tidbcluster, deleted=`'192.168.120.204:2379','192.168.120.204:4000','192.168.120.204:8300'`
+ [ Serial ] - UpdateTopology: cluster=tidbcluster
+ Refresh instance configs
- Regenerate config pd -> 192.168.120.201:2379 ... Done
- Regenerate config pd -> 192.168.120.203:2379 ... Done
- Regenerate config tikv -> 192.168.120.201:20160 ... Done
- Regenerate config tikv -> 192.168.120.202:20161 ... Done
- Regenerate config tikv -> 192.168.120.203:20162 ... Done
- Regenerate config tidb -> 192.168.120.201:4000 ... Done
- Regenerate config tidb -> 192.168.120.202:4000 ... Done
- Regenerate config tiflash -> 192.168.120.201:9000 ... Done
- Regenerate config cdc -> 192.168.120.202:8300 ... Done
- Regenerate config cdc -> 192.168.120.203:8300 ... Done
- Regenerate config prometheus -> 192.168.120.202:9090 ... Done
- Regenerate config grafana -> 192.168.120.203:3000 ... Done
+ [ Serial ] - SystemCtl: host=192.168.120.202 action=reload prometheus-9090.service
Scaled cluster `tidbcluster` in successfully
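  After the scale-in, `display` should no longer list any instance on 192.168.120.204:
# tiup cluster display tidbcluster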
5. Deploy a DM cluster with TiUP
# vi dm.yaml
---
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/u01/tidb/dm/deploy"
  data_dir: "/u01/tidb/dm/data"

master_servers:
- host: 192.168.120.201
- host: 192.168.120.202
- host: 192.168.120.203

worker_servers:
- host: 192.168.120.201
- host: 192.168.120.202
- host: 192.168.120.203

monitoring_servers:
- host: 192.168.120.201

grafana_servers:
- host: 192.168.120.201

alertmanager_servers:
- host: 192.168.120.201
# tiup dm deploy dmcluster v2.0.4 ./dm.yaml --user root
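  Note that `deploy` only installs the components; the DM cluster has to be started before `display` reports its instances as up:
# tiup dm start dmcluster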
# tiup dm display dmcluster                     
Starting component `dm`: /root/.tiup/components/dm/v1.5.2/tiup-dm display dmcluster
Cluster type:       dm
Cluster name:       dmcluster
Cluster version:    v2.0.4
Deploy user:      tidb
SSH type:         builtin
ID                    Role          Host             Ports      OS/Arch       Status     Data Dir                             Deploy Dir
--                    ----          ----             -----      -------       ------     --------                             ----------
192.168.120.201:9093  alertmanager  192.168.120.201  9093/9094  linux/x86_64  Up         /u01/tidb/dm/data/alertmanager-9093  /u01/tidb/dm/deploy/alertmanager-9093
192.168.120.201:8261  dm-master     192.168.120.201  8261/8291  linux/x86_64  Healthy|L  /u01/tidb/dm/data/dm-master-8261     /u01/tidb/dm/deploy/dm-master-8261
192.168.120.202:8261  dm-master     192.168.120.202  8261/8291  linux/x86_64  Healthy    /u01/tidb/dm/data/dm-master-8261     /u01/tidb/dm/deploy/dm-master-8261
192.168.120.203:8261  dm-master     192.168.120.203  8261/8291  linux/x86_64  Healthy    /u01/tidb/dm/data/dm-master-8261     /u01/tidb/dm/deploy/dm-master-8261
192.168.120.201:8262  dm-worker     192.168.120.201  8262       linux/x86_64  Free       /u01/tidb/dm/data/dm-worker-8262     /u01/tidb/dm/deploy/dm-worker-8262
192.168.120.202:8262  dm-worker     192.168.120.202  8262       linux/x86_64  Free       /u01/tidb/dm/data/dm-worker-8262     /u01/tidb/dm/deploy/dm-worker-8262
192.168.120.203:8262  dm-worker     192.168.120.203  8262       linux/x86_64  Free       /u01/tidb/dm/data/dm-worker-8262     /u01/tidb/dm/deploy/dm-worker-8262
192.168.120.201:3000  grafana       192.168.120.201  3000       linux/x86_64  Up         -                                    /u01/tidb/dm/deploy/grafana-3000
192.168.120.201:9090  prometheus    192.168.120.201  9090       linux/x86_64  Up         /u01/tidb/dm/data/prometheus-9090    /u01/tidb/dm/deploy/prometheus-9090
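  From here the DM cluster is driven through dmctl; a minimal sketch, assuming the dmctl component matches the deployed DM version:
# tiup dmctl --master-addr 192.168.120.201:8261 list-member
  This lists the masters and workers, mirroring what `display` showed above.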
  
Source: 51CTO tech blog, https://blog.51cto.com/candon123/3010062