Replica synchronization
1.
The default ClickHouse server config /etc/clickhouse-server/config.xml indicates that /etc/metrika.xml is loaded by default as the substitution file for remote servers (here it is placed in a different directory by hand).
The default substitution name for the cluster is the one given by the incl attribute, clickhouse_remote_servers:
<remote_servers incl="clickhouse_remote_servers" />
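The incl/include_from mechanism can be illustrated offline. A minimal Python sketch (illustrative only, not ClickHouse's actual implementation; the element names mirror the config above) of the substitution ClickHouse performs when it loads the include file:

```python
import xml.etree.ElementTree as ET

# Main config: an element carrying an incl attribute is a placeholder.
main_config = ET.fromstring(
    '<clickhouse><remote_servers incl="clickhouse_remote_servers" /></clickhouse>'
)

# Include file (e.g. /etc/metrika.xml): the top-level child whose tag matches
# the incl value supplies the replacement content.
metrika = ET.fromstring(
    "<yandex>"
    "<clickhouse_remote_servers>"
    "<my_cluster><shard><replica><host>hadoop001</host></replica></shard></my_cluster>"
    "</clickhouse_remote_servers>"
    "</yandex>"
)

def apply_incl(config: ET.Element, include_root: ET.Element) -> None:
    """Copy the children of the matching include-file element into every
    element that references it via an incl attribute."""
    for elem in config.iter():
        name = elem.attrib.pop("incl", None)
        if name is None:
            continue
        source = include_root.find(name)
        if source is not None:
            elem.extend(source)  # substitute the children in place

apply_incl(main_config, metrika)
print(ET.tostring(main_config, encoding="unicode"))
```

After substitution, the placeholder <remote_servers> element carries the cluster definition from the include file.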
2.
clickhouse-client -u default --password "" --query "SELECT * FROM system.clusters"
The default /etc/clickhouse-server/config.xml already defines several shard/replica clusters for local testing:
Local single-shard test cluster: test_shard_localhost
Two shards on localhost: test_cluster_two_shards_localhost
Two-shard test cluster: test_cluster_two_shards
Two-shard test cluster with internal replication: test_cluster_two_shards_internal_replication
Secure local single-shard test cluster: test_shard_localhost_secure
Test cluster with an unavailable shard: test_unavailable_shard
① Edit /etc/clickhouse-server/config.xml and comment out the default remote_servers configuration.
② Add the following configuration to /etc/clickhouse-server/config.xml:
<!-- Remote servers (distributed configuration); the cluster substitution name is clickhouse_remote_servers -->
<!-- <remote_servers incl="clickhouse_remote_servers" /> -->
<!-- When using replicated tables, store replica metadata in ZooKeeper -->
<zookeeper incl="zookeeper-servers" optional="true" />
<!-- Default substitution file for remote_servers -->
<include_from>/etc/metrika.xml</include_from>
<!-- ClickHouse remote servers: cluster configuration (3 shards, 2 replicas per shard) -->
<remote_servers>
<!-- Cluster name: user-defined -->
<cluster_3shards_2replicas>
<!-- shard: data shard 1 -->
<shard>
<!-- The data is split into three shards; each shard has two replicas on separate hosts, so each host stores one third of the data -->
<internal_replication>true</internal_replication> <!-- Replicas within a shard sync with each other -->
<replica> <!-- replica: first replica of this shard -->
<host>hadoop004</host>
<port>9000</port>
</replica>
<replica> <!-- replica: second replica of this shard -->
<host>hadoop001</host>
<port>9000</port>
</replica>
</shard>
<!-- shard: data shard 2 -->
<shard>
<!-- false: the Distributed table itself writes to every replica; true: it writes to a single replica and ZooKeeper-coordinated replication copies the data to the others -->
<internal_replication>true</internal_replication> <!-- Replicas within a shard sync with each other -->
<replica> <!-- replica: first replica of this shard -->
<host>hadoop005</host>
<port>9000</port>
</replica>
<replica> <!-- replica: second replica of this shard -->
<host>hadoop002</host>
<port>9000</port>
</replica>
</shard>
<!-- shard: data shard 3 -->
<shard>
<internal_replication>true</internal_replication> <!-- Replicas within a shard sync with each other -->
<replica> <!-- replica: first replica of this shard -->
<host>hadoop006</host>
<port>9000</port>
</replica>
<replica> <!-- replica: second replica of this shard -->
<host>hadoop003</host>
<port>9000</port>
</replica>
</shard>
</cluster_3shards_2replicas>
</remote_servers>
<!-- Macro definitions: global variables; {shard} and {replica} expand to the values configured on each machine -->
<!-- <macros> -->
<!-- layer: distinguishes different clusters; only one cluster is used here, so it is the same on every machine -->
<!-- <layer>layer01</layer> -->
<!-- Shard name: one shard spans two machines -->
<!-- <shard>shard01</shard> -->
<!-- <shard>shard02</shard> -->
<!-- <shard>shard03</shard> -->
<!-- <shard>shard01</shard> -->
<!-- <shard>shard02</shard> -->
<!-- <shard>shard03</shard> -->
<!-- Replica name: one replica per machine -->
<!-- <replica>replica01_01</replica> -->
<!-- <replica>replica02_02</replica> -->
<!-- <replica>replica03_03</replica> -->
<!-- <replica>replica01_01</replica> -->
<!-- <replica>replica02_02</replica> -->
<!-- <replica>replica03_03</replica> -->
<!-- </macros> -->
<!-- Hadoop001 -->
<macros>
<shard>shard01</shard>
<replica>replica_01_02</replica>
</macros>
<!-- ZooKeeper cluster configuration -->
<zookeeper-servers>
<node index="1">
<host>hadoop001</host>
<port>2181</port>
</node>
<node index="2">
<host>hadoop002</host>
<port>2181</port>
</node>
<node index="3">
<host>hadoop003</host>
<port>2181</port>
</node>
</zookeeper-servers>
③ Check the <macros> configuration:
sed -n '751, 755p' /etc/clickhouse-server/config.xml
sed -n '78, 81p' /etc/clickhouse-server/config.d/metrika.xml
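As an offline sanity check (a sketch, not something ClickHouse requires; the server validates its config on startup), the cluster block above can be parsed to confirm the intended 3-shard, 2-replica layout:

```python
import xml.etree.ElementTree as ET

# The <remote_servers> block from above, abbreviated to hosts and ports.
cluster_xml = """
<remote_servers>
  <cluster_3shards_2replicas>
    <shard><internal_replication>true</internal_replication>
      <replica><host>hadoop004</host><port>9000</port></replica>
      <replica><host>hadoop001</host><port>9000</port></replica>
    </shard>
    <shard><internal_replication>true</internal_replication>
      <replica><host>hadoop005</host><port>9000</port></replica>
      <replica><host>hadoop002</host><port>9000</port></replica>
    </shard>
    <shard><internal_replication>true</internal_replication>
      <replica><host>hadoop006</host><port>9000</port></replica>
      <replica><host>hadoop003</host><port>9000</port></replica>
    </shard>
  </cluster_3shards_2replicas>
</remote_servers>
"""

root = ET.fromstring(cluster_xml)
shards = root.find("cluster_3shards_2replicas").findall("shard")
assert len(shards) == 3, "expected 3 shards"
hosts = []
for shard in shards:
    replicas = shard.findall("replica")
    assert len(replicas) == 2, "expected 2 replicas per shard"
    assert shard.findtext("internal_replication") == "true"
    hosts += [r.findtext("host") for r in replicas]
assert len(set(hosts)) == 6, "each host should appear exactly once"
print(sorted(hosts))  # → ['hadoop001', 'hadoop002', 'hadoop003', 'hadoop004', 'hadoop005', 'hadoop006']
```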
④ Add the /etc/metrika.xml file
Only the <macros> block differs between machines; for this 3-shard, 2-replica layout the per-node values are as follows:
<!-- Hadoop001 -->
<macros>
<shard>shard01</shard>
<replica>replica_01_02</replica>
</macros>
<!-- Hadoop002 -->
<macros>
<shard>shard02</shard>
<replica>replica_02_02</replica>
</macros>
<!-- Hadoop003 -->
<macros>
<shard>shard03</shard>
<replica>replica_03_02</replica>
</macros>
<!-- Hadoop004 -->
<macros>
<shard>shard01</shard>
<replica>replica_01_01</replica>
</macros>
<!-- Hadoop005 -->
<macros>
<shard>shard02</shard>
<replica>replica_02_01</replica>
</macros>
<!-- Hadoop006 -->
<macros>
<shard>shard03</shard>
<replica>replica_03_01</replica>
</macros>
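Putting the per-node macros together: with '{shard}' and '{replica}' in a ReplicatedMergeTree path, each node resolves its own values, so the two replicas of a shard share one ZooKeeper table path while every replica name stays unique. A sketch of that resolution (the host-to-macro mapping is taken from the <macros> blocks above; the table name st_order_mt is the one used in the test later):

```python
# Per-node <macros> values from /etc/metrika.xml above.
MACROS = {
    "hadoop001": {"shard": "shard01", "replica": "replica_01_02"},
    "hadoop002": {"shard": "shard02", "replica": "replica_02_02"},
    "hadoop003": {"shard": "shard03", "replica": "replica_03_02"},
    "hadoop004": {"shard": "shard01", "replica": "replica_01_01"},
    "hadoop005": {"shard": "shard02", "replica": "replica_02_01"},
    "hadoop006": {"shard": "shard03", "replica": "replica_03_01"},
}

# ReplicatedMergeTree('/clickhouse/tables/{shard}/st_order_mt', '{replica}'):
# each node substitutes its own macro values into the ZooKeeper path.
def resolve(host: str, table: str = "st_order_mt") -> tuple[str, str]:
    m = MACROS[host]
    return (f"/clickhouse/tables/{m['shard']}/{table}", m["replica"])

paths = {}
for host in MACROS:
    zk_path, replica = resolve(host)
    paths.setdefault(zk_path, set()).add(replica)

# Three shared table paths (one per shard), two distinct replica names under each.
for zk_path, replicas in sorted(paths.items()):
    print(zk_path, sorted(replicas))
```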
5. Test
Run on hadoop001:
create table t_order_rep (
id UInt32,
sku_id String,
total_amount Decimal(16,2),
create_time Datetime
) engine =ReplicatedMergeTree('/clickhouse/table/shard01/t_order_rep','replica01')
partition by toYYYYMMDD(create_time)
primary key (id)
order by (id,sku_id);
insert into t_order_rep values
(101,'sku_001',1000.00,'2021-09-16 18:40:00'),
(102,'sku_002',2000.00,'2021-09-16 18:40:00'),
(103,'sku_004',2500.00,'2021-09-16 18:40:00'),
(104,'sku_002',2000.00,'2021-09-16 18:40:00'),
(105,'sku_003',3000.00,'2021-09-16 12:40:00');
Run on hadoop002:
create table t_order_rep (
id UInt32,
sku_id String,
total_amount Decimal(16,2),
create_time Datetime
) engine =ReplicatedMergeTree('/clickhouse/table/shard01/t_order_rep','replica02')
partition by toYYYYMMDD(create_time)
primary key (id)
order by (id,sku_id);
Query on both hadoop001 and hadoop002; both return the data (replica synchronization succeeded):
clickhouse-client -m --query="SELECT * FROM t_order_rep;"
Shard synchronization
1. On every node of the ClickHouse cluster, define a global-variable file; the values follow the <macros> blocks in metrika.xml:
hadoop001 shard01 replica_01_02
hadoop002 shard02 replica_02_02
hadoop003 shard03 replica_03_02
hadoop004 shard01 replica_01_01
hadoop005 shard02 replica_02_01
hadoop006 shard03 replica_03_01
For example, on hadoop001:
echo "export shard=shard01
export replica=replica_01_02" > /home/clickhouse/macros
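Rather than typing the echo on every host, the per-node file content can be generated from a single mapping. A sketch under the assumptions above (the mapping mirrors the <macros> blocks in metrika.xml and the /home/clickhouse/macros path from this step; copying the file to each host, e.g. via scp, is omitted):

```python
# Per-host shard/replica assignment, matching the <macros> blocks in metrika.xml.
NODES = {
    "hadoop001": ("shard01", "replica_01_02"),
    "hadoop002": ("shard02", "replica_02_02"),
    "hadoop003": ("shard03", "replica_03_02"),
    "hadoop004": ("shard01", "replica_01_01"),
    "hadoop005": ("shard02", "replica_02_01"),
    "hadoop006": ("shard03", "replica_03_01"),
}

def macros_file(host: str) -> str:
    """Content of /home/clickhouse/macros for one host."""
    shard, replica = NODES[host]
    return f"export shard={shard}\nexport replica={replica}\n"

print(macros_file("hadoop001"))
```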
2. Source the global-variable file to make it take effect:
source /home/clickhouse/macros
echo ${shard} ${replica}
3. Create the local replicated table on every machine:
clickhouse-client -m --query "
create table st_order_mt on cluster cluster_3shards_2replicas (
id UInt32,
sku_id String,
total_amount Decimal(16,2),
create_time Datetime
) engine=ReplicatedMergeTree('/clickhouse/tables/${shard}/st_order_mt','${replica}')
partition by toYYYYMMDD(create_time)
primary key (id)
order by (id,sku_id);"
Resolved: there is a pitfall here. ON CLUSTER runs the DDL on every node, while ${shard} and ${replica} are expanded by the shell on the issuing machine only, so the statement complains that the table already exists and was not created on the current machine. After running it on each machine a table does exist everywhere, but its shard/replica path may not match that node's own cluster configuration. Using the ClickHouse macros '{shard}' and '{replica}' in the ReplicatedMergeTree arguments (left unexpanded, so each server substitutes its own <macros> values) avoids this.
4. Verify that the table was created correctly on each node:
clickhouse-client -m --query "SHOW CREATE TABLE st_order_mt"
5. On any one machine, create the distributed table across the cluster:
clickhouse-client -m --query "
create table st_order_mt_all on cluster cluster_3shards_2replicas (
id UInt32,
sku_id String,
total_amount Decimal(16,2),
create_time Datetime
) engine = Distributed(cluster_3shards_2replicas, default, st_order_mt, hiveHash(sku_id));"
clickhouse-client -m --query "SHOW TABLES"
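The Distributed engine routes each row by the sharding key: hiveHash is javaHash (Java's String.hashCode) with the sign bit zeroed, and with the default shard weight of 1 the row goes to shard hash % 3. A Python sketch of that routing (illustrative; correct for ASCII keys such as the sku ids used here):

```python
def java_hash(s: str) -> int:
    """Java String.hashCode as a 32-bit signed int (ASCII inputs assumed)."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - (1 << 32) if h & (1 << 31) else h

def hive_hash(s: str) -> int:
    """hiveHash: javaHash with the sign bit zeroed, so it is never negative."""
    return java_hash(s) & 0x7FFFFFFF

def target_shard(sku_id: str, n_shards: int = 3) -> int:
    """Shard index for a row, assuming the default weight of 1 per shard."""
    return hive_hash(sku_id) % n_shards

for sku in ["sku_001", "sku_002", "sku_003", "sku_004"]:
    print(sku, "-> shard", target_shard(sku))
```

Rows with the same sku_id always land on the same shard, which keeps per-key data co-located.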
6. Insert data into the distributed table from any cluster node
Rows inserted into the distributed table are spread across the shards according to the sharding key, and the replicas within each shard then replicate the data between themselves:
clickhouse-client -m --query "
insert into st_order_mt_all values
(201,'sku_001',1000.00,'2021-03-01 19:00:00'),
(202,'sku_002',2000.00,'2021-03-01 19:00:00'),
(203,'sku_004',2500.00,'2021-03-01 19:00:00'),
(204,'sku_002',2000.00,'2021-03-01 19:00:00'),
(205,'sku_003',1200.00,'2021-03-02 19:00:00');"
7. Check the data through the distributed table
Query both the local table and the distributed table. Expected result: the distributed table returns the full dataset, while each shard's local table returns only that shard's rows (identical on both replicas of the shard):
clickhouse-client -m --query "SELECT * FROM st_order_mt;"
clickhouse-client -m --query "SELECT * FROM st_order_mt_all;"
8. Drop the local table and the distributed table:
clickhouse-client -m --query "DROP TABLE st_order_mt_all"
clickhouse-client -m --query "DROP TABLE st_order_mt"
clickhouse-client -m --query "SHOW TABLES"