Replacing ELK: ClickHouse + Kafka + FileBeat Is the Killer Combination
SaaS services will increasingly run into data-security and compliance requirements, so the business needs a private-deployment capability to stay competitive in the industry.
To round out the platform, we also need a data pipeline that helps the operations team analyze campaign performance and work more effectively.
In practice, however, deploying a full big-data stack outright would be a hefty server bill for users, so we settled on a middle-ground approach to building out our analytics capability.
Elasticsearch vs ClickHouse
ClickHouse is a high-performance columnar, distributed database management system. We benchmarked it and found the following advantages:
High write throughput
A single server ingests 50 MB/s to 200 MB/s of logs, i.e. more than 600,000 records per second, over 5x what ES achieves.
Problems that are fairly common in ES, such as write rejections causing data loss and high write latency, rarely occur in ClickHouse.
Fast queries
Officially, with data in the page cache, a single server scans at roughly 2-30 GB/s; when data is not cached, query speed depends on disk read throughput and the data's compression ratio. In our tests ClickHouse queried 5-30x faster than ES.
Lower server cost than ES
First, ClickHouse achieves a higher compression ratio: the same data occupies only 1/3 to 1/30 of the disk space ES needs. Besides saving disk, this also cuts disk I/O, which is one reason ClickHouse queries run faster.
Second, ClickHouse uses less memory and CPU than ES. We estimate that moving log processing to ClickHouse will cut our server cost roughly in half.
Cost analysis
The comparison below uses Alibaba Cloud list prices, with no discounts applied.
[Figure: ES vs ClickHouse server cost comparison]
Environment setup
1. ZooKeeper cluster deployment
yum install java-1.8.0-openjdk-devel.x86_64
Configure the environment variables in /etc/profile (the export lines below).
Sync the system clock:
yum install ntpdate
ntpdate asia.pool.ntp.org
Create the install, data, and log directories that zoo.cfg will reference:
mkdir -p /usr/zookeeper/data /usr/zookeeper/logs
wget --no-check-certificate https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.7.1/apache-zookeeper-3.7.1-bin.tar.gz
tar -zvxf apache-zookeeper-3.7.1-bin.tar.gz -C /usr/zookeeper
export ZOOKEEPER_HOME=/usr/zookeeper/apache-zookeeper-3.7.1-bin
export PATH=$ZOOKEEPER_HOME/bin:$PATH
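Reload the profile so the two exports take effect in the current shell:
source /etc/profile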
Enter the ZooKeeper config directory:
cd $ZOOKEEPER_HOME/conf
Create the config file:
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/zookeeper/data
dataLogDir=/usr/zookeeper/logs
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
Create ZooKeeper's myid on each server, one distinct value per node:
echo "1" > /usr/zookeeper/data/myid   # on zk1
echo "2" > /usr/zookeeper/data/myid   # on zk2
echo "3" > /usr/zookeeper/data/myid   # on zk3
Enter the ZooKeeper bin directory and start the service:
cd $ZOOKEEPER_HOME/bin
sh zkServer.sh start
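Once all three nodes are up, it's worth confirming the ensemble actually elected a leader. A minimal check, assuming the zk1/zk2/zk3 hostnames from zoo.cfg resolve (srvr is in ZooKeeper's default four-letter-word whitelist):
sh zkServer.sh status                  # prints Mode: leader on one node, Mode: follower on the others
echo srvr | nc zk1 2181 | grep Mode    # the same check over the client port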
2. Kafka cluster deployment
mkdir -p /usr/kafka
chmod 777 -R /usr/kafka
wget --no-check-certificate https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/3.2.0/kafka_2.12-3.2.0.tgz
tar -zvxf kafka_2.12-3.2.0.tgz -C /usr/kafka
Edit config/server.properties. Give each broker a distinct broker.id, e.g. 1, 2, 3:
broker.id=1
listeners=PLAINTEXT://ip:9092
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dir=/usr/kafka/logs
num.partitions=5
num.recovery.threads.per.data.dir=3
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=3
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
zookeeper.connection.timeout.ms=30000
group.initial.rebalance.delay.ms=0
Start Kafka as a resident background process:
nohup /usr/kafka/kafka_2.12-3.2.0/bin/kafka-server-start.sh /usr/kafka/kafka_2.12-3.2.0/config/server.properties > /usr/kafka/logs/kafka.log 2>&1 &
To stop Kafka:
/usr/kafka/kafka_2.12-3.2.0/bin/kafka-server-stop.sh
Handy commands: list topics, tail a topic from the beginning, and create the topic the pipeline will use:
$KAFKA_HOME/bin/kafka-topics.sh --list --bootstrap-server ip:9092
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server ip:9092 --topic test --from-beginning
$KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server ip:9092 --replication-factor 2 --partitions 3 --topic xxx_data
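Before wiring FileBeat in, a quick produce/consume round trip catches most connectivity problems early. A sketch, where the test topic and the JSON payload are purely illustrative:
$KAFKA_HOME/bin/kafka-topics.sh --describe --bootstrap-server ip:9092 --topic xxx_data
echo '{"event_name":"demo"}' | $KAFKA_HOME/bin/kafka-console-producer.sh --bootstrap-server ip:9092 --topic test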
3. FileBeat deployment
sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Create a file with a .repo extension (for example, elastic.repo) in the /etc/yum.repos.d/ directory and add the following lines:
[elastic-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install FileBeat and register it to start on boot:
yum install filebeat
systemctl enable filebeat
chkconfig --add filebeat
Notes on the FileBeat config file. Gotcha 1: keys_under_root: true must be set; otherwise each log line arrives in Kafka buried inside a single message field instead of as flattened top-level fields.
The config file lives at /etc/filebeat/filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/logs/xxx/inner/*.log
  # Without this setting everything is nested under "message";
  # with it, the JSON fields are flattened to the top level.
  json:
    keys_under_root: true
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]
  topic: 'xxx_data_clickhouse'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
processors:
  # Drop FileBeat metadata fields that are useless downstream
  - drop_fields:
      fields: ["input", "agent", "ecs", "log", "metadata", "timestamp"]
      ignore_missing: false
nohup ./filebeat -e -c /etc/filebeat/filebeat.yml > /usr/filebeat/filebeat.log 2>&1 &
Output goes to filebeat.log for easy troubleshooting; -e logs to stderr, hence the 2>&1.
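To verify what FileBeat actually publishes, peek at one message; the shapes below are hypothetical illustrations of the keys_under_root difference:
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic xxx_data_clickhouse --from-beginning --max-messages 1
# without keys_under_root, the log line is buried in "message":
#   {"@timestamp":"...","message":"{\"event_name\":\"demo\"}",...}
# with keys_under_root: true, the JSON fields sit at the top level:
#   {"@timestamp":"...","event_name":"demo",...}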
4. ClickHouse deployment
Check whether the CPU supports SSE 4.2; if it does not, ClickHouse has to be built from source:
grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
Create the data directory, placing it on a path mounted on a large disk:
mkdir -p /data/clickhouse
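ClickHouse defaults to /var/lib/clickhouse for storage; if the data should live under /data/clickhouse instead, point the server's <path> setting at it and hand the directory to the clickhouse user once the packages below are installed. A sketch:
# in /etc/clickhouse-server/config.xml: <path>/data/clickhouse/</path>
chown -R clickhouse:clickhouse /data/clickhouse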
Add the ClickHouse nodes to /etc/hosts, for example:
10.190.85.92 bigdata-clickhouse-01
10.190.85.93 bigdata-clickhouse-02
Server performance settings:
CPU frequency: pin the governor to performance so cores always run at their highest supported frequency instead of scaling dynamically; this gives the best performance:
echo 'performance' | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
Memory: do not disable overcommit:
echo 0 | tee /proc/sys/vm/overcommit_memory
Always disable transparent huge pages; they interfere with the memory allocator and cause a significant performance drop:
echo 'never' | tee /sys/kernel/mm/transparent_hugepage/enabled
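These three tweaks apply only to the running kernel and are lost on reboot; one common way to persist them is to replay them at boot, e.g. via rc.local (a sketch; a systemd unit or tuned profile works just as well):
cat >> /etc/rc.d/rc.local <<'EOF'
echo 'performance' | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
echo 0 > /proc/sys/vm/overcommit_memory
echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
EOF
chmod +x /etc/rc.d/rc.local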
First, add the official repository:
yum install yum-utils
rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
List the installable ClickHouse versions:
yum list | grep clickhouse
Run the install command:
yum -y install clickhouse-server clickhouse-client
In the /etc/clickhouse-server/config.xml config file, change the log level from the default trace to information:
<level>information</level>
Log locations:
Normal log:
/var/log/clickhouse-server/clickhouse-server.log
Error log:
/var/log/clickhouse-server/clickhouse-server.err.log
Check the installed ClickHouse version:
clickhouse-server --version
Connect with the client (it prompts for the password), and stop/start the server:
clickhouse-client --password
sudo clickhouse stop
sudo clickhouse start
Some problems hit during the ClickHouse deployment, and their fixes:
1) Creating the Kafka engine table in ClickHouse
CREATE TABLE default.kafka_clickhouse_inner_log ON CLUSTER clickhouse_cluster (
log_uuid String ,
date_partition UInt32 ,
event_name String ,
activity_name String ,
activity_type String ,
activity_id UInt16
) ENGINE = Kafka SETTINGS
kafka_broker_list = 'kafka1:9092,kafka2:9092,kafka3:9092',
kafka_topic_list = 'data_clickhouse',
kafka_group_name = 'clickhouse_xxx',
kafka_format = 'JSONEachRow',
kafka_row_delimiter = '\n',
kafka_num_consumers = 1;
Problem 1: the ClickHouse client cannot query the Kafka engine table
Direct select is not allowed. To enable use setting stream_like_engine_allow_direct_select.(QUERY_NOT_ALLOWED) (version 22.5.2.53 (official build))
Solution: start clickhouse-client with --stream_like_engine_allow_direct_select 1:
clickhouse-client --stream_like_engine_allow_direct_select 1 --password xxxxx
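One caveat: a direct SELECT on a Kafka engine table acts as a real consumer, advancing the table's consumer-group offsets, so messages read this way will not be re-delivered to the materialized view created later. A hedged example:
clickhouse-client --stream_like_engine_allow_direct_select 1 --password xxxxx -q "SELECT * FROM default.kafka_clickhouse_inner_log LIMIT 5"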
2) Creating the local table on each ClickHouse node
Problem 2: the local table cannot be created because the shard macro is missing
Code: 62. DB::Exception: There was an error on [10.74.244.57:9000]: Code: 62. DB::Exception: No macro 'shard' in config while processing substitutions in '/clickhouse/tables/default/bi_inner_log_local/{shard}' at '50' or macro is not supported here. (SYNTAX_ERROR) (version 22.5.2.53 (official build)). (SYNTAX_ERROR) (version 22.5.2.53 (official build))
Create the local table (using the replicated, deduplicating ReplicatedReplacingMergeTree engine):
create table default.bi_inner_log_local ON CLUSTER clickhouse_cluster (
log_uuid String ,
date_partition UInt32 ,
event_name String ,
activity_name String ,
credits_bring Int16 ,
activity_type String ,
activity_id UInt16
) ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/default/bi_inner_log_local/{shard}','{replica}')
PARTITION BY date_partition
ORDER BY (event_name,date_partition,log_uuid)
SETTINGS index_granularity = 8192;
Solution: configure a shard macro on every ClickHouse node, with a different shard name on each node:
<macros>
<shard>01</shard>
<replica>example01-01-1</replica>
</macros>
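Each node's substitutions can be verified from the client; system.macros shows what the server will substitute for {shard} and {replica}:
clickhouse-client --password -q "SELECT macro, substitution FROM system.macros"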
Problem 3: the replica node already exists in ZooKeeper
Code: 253. DB::Exception: There was an error on : Code: 253. DB::Exception: Replica /clickhouse/tables/default/bi_inner_log_local/01/replicas/example01-01-1 already exists. (REPLICA_IS_ALREADY_EXIST) (version 22.5.2.53 (official build)). (REPLICA_IS_ALREADY_EXIST) (version 22.5.2.53 (official build))
Solution: open a ZooKeeper client, delete the stale znodes, and recreate the ReplicatedReplacingMergeTree table. This guarantees that every ClickHouse node consumes data from the Kafka partitions.
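A sketch of that cleanup with the ZooKeeper CLI, assuming the table path from the DDL above (deleteall removes a znode recursively in ZooKeeper 3.5+):
sh zkCli.sh -server zk1:2181
deleteall /clickhouse/tables/default/bi_inner_log_local
# then re-run the CREATE TABLE ... ON CLUSTER statement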
3) Creating the cluster-wide distributed table
Create the distributed table (rows are sharded by log_uuid, so rows with the same log_uuid land on the same shard, which is what makes deduplication work during later merges):
CREATE TABLE default.bi_inner_log_all ON CLUSTER clickhouse_cluster AS default.bi_inner_log_local
ENGINE = Distributed(clickhouse_cluster, default, bi_inner_log_local, xxHash32(log_uuid));
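The Distributed engine routes each row by the sharding key modulo the total shard weight, so the same log_uuid always maps to the same shard, where ReplacingMergeTree can collapse duplicates at merge time; merges run in the background, so FINAL forces deduplication at read time. A sketch (the UUID is hypothetical, and % 2 assumes the two shards configured below):
clickhouse-client --password -q "SELECT xxHash32('demo-log-uuid') % 2 AS shard_index"
clickhouse-client --password -q "SELECT count() FROM default.bi_inner_log_local FINAL"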
Problem 4: the distributed table cannot be queried
Code: 516. DB::Exception: Received from 10.74.244.57:9000. DB::Exception: default: Authentication failed: password is incorrect or there is no user with such name. (AUTHENTICATION_FAILED) (version 22.5.2.53 (official build))
Solution: declare the cluster, with credentials, in config.xml:
<!-- distributed table configuration -->
<remote_servers>
    <clickhouse_cluster><!-- cluster name; choose freely, it is referenced later when creating databases and tables -->
        <shard>
            <!-- internal_replication (default false): when enabled, a write through the -->
            <!-- distributed engine goes to a single node per shard rather than to every replica -->
            <internal_replication>true</internal_replication>
            <replica>
                <host>ip1</host>
                <port>9000</port>
                <user>default</user>
                <password>xxxx</password>
            </replica>
        </shard>
        <shard>
            <internal_replication>true</internal_replication>
            <replica>
                <host>ip2</host>
                <port>9000</port>
                <user>default</user>
                <password>xxxx</password>
            </replica>
        </shard>
    </clickhouse_cluster>
</remote_servers>
4) Creating the materialized view
Create a materialized view that pipes data from the Kafka consumer table into the ClickHouse distributed table:
CREATE MATERIALIZED VIEW default.view_bi_inner_log ON CLUSTER clickhouse_cluster TO default.bi_inner_log_all AS
SELECT
log_uuid ,
date_partition ,
event_name ,
activity_name ,
credits_bring ,
activity_type ,
activity_id
FROM default.kafka_clickhouse_inner_log;
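Once the view exists, rows arriving on the Kafka topic should show up in the distributed table within seconds; a quick sanity check:
clickhouse-client --password -q "SELECT count() FROM default.bi_inner_log_all"
clickhouse-client --password -q "SELECT event_name, count() FROM default.bi_inner_log_all GROUP BY event_name ORDER BY count() DESC LIMIT 10"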
Hard work pays off: with all of the problems above solved, data now flows through the whole pipeline! Every component here is a fairly recent release, so most issues were worked out step by step against the official docs and manuals.
One-sentence takeaway: when you hit a problem, reach for the official docs or --help and experiment; bit by bit you'll level up.