
I. Preface
In enterprise systems that handle large volumes of data, Elasticsearch is a very common choice: once a table grows past tens of millions of rows, MySQL struggles no matter how much you tune it. The first question is how to get that much data synced into Elasticsearch, and among the open-source tools there is one built exactly for this job: Logstash. The next question is how to inspect the synced data, which is where Kibana comes in; its visual interface makes the whole thing much easier to work with. Together the three form the classic ELK stack. ELK is most often used for log collection, but here it is used only to speed up queries on a single large, slow table; a dedicated log-collection ELK setup will be covered in a later article.
II. The three components
1. Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the core of the Elastic Stack, it centrally stores your data so you can search it quickly, fine-tune relevance, run powerful analytics, and scale with ease.
2. Kibana
Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. It covers everything from tracking query load to understanding how requests flow through your applications.
3. Logstash
Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash".
III. Choosing a version
The latest release at the time of writing is 8.5, but tutorials for it are still scarce and its quirks less well known, so this article sticks with the 7.x line for stability.
A quick look at the commonly used tags on Docker Hub settled it: 7.17.7.
Docker Hub page: https://hub.docker.com/_/elasticsearch
The official requirement:
When installing the Elastic Stack, you must use the same version across the entire stack. For example, if you are using Elasticsearch 7.17.7, you install Beats 7.17.7, APM Server 7.17.7, Elasticsearch Hadoop 7.17.7, Kibana 7.17.7, and Logstash 7.17.7.
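Accordingly, all images used below are pinned to the same tag. Pulling them up front is optional but makes the later steps faster; a minimal sketch, using the tags that appear in the docker-compose file further down:

# pull the three ELK images at the same version, as required above
docker pull elasticsearch:7.17.7
docker pull kibana:7.17.7
docker pull logstash:7.17.7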

IV. Setting up MySQL
1. Pull the MySQL image
sudo docker pull mysql:5.7

2. Start MySQL with Docker
sudo docker run -p 3306:3306 --name mysql \
-v /mydata/mysql/log:/var/log/mysql \
-v /mydata/mysql/data:/var/lib/mysql \
-v /mydata/mysql/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7
#### Everything below this line is explanation only; do not paste it into the shell ####
--name  names the container
-v      mounts the container paths onto the Linux host
-e      sets the initial root password
-p      maps a container port to a host port (container port 3306 is exposed as 3306 on the Linux host, so the database can also be reached from Windows)
-d      runs the container in the background
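To confirm the container came up and the root password works, a quick check (a sketch; it assumes the container name and password from the command above):

docker ps --filter name=mysql    # the container should show as "Up"
docker exec mysql mysql -uroot -proot -e "SELECT VERSION();"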

3. Configure MySQL
vim /mydata/mysql/conf/my.cnf # create the file and open it for editing
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve

4. Restart MySQL so the configuration takes effect
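A restart plus a quick check that the character-set settings were picked up (using the container name from above):

docker restart mysql
docker exec mysql mysql -uroot -proot -e "SHOW VARIABLES LIKE 'character%';"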
5. Create a database
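The Logstash configuration later in this article connects to a database named test, so a command-line sketch of this step (the database name is taken from that JDBC URL; any client works just as well):

docker exec mysql mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS test DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;"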

6. Create a test table
DROP TABLE IF EXISTS `sys_log`;
CREATE TABLE `sys_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'log primary key',
  `title` varchar(50) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT 'module title',
  `business_type` int(2) NULL DEFAULT 0 COMMENT 'business type (0 other, 1 insert, 2 update, 3 delete)',
  `method` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT 'method name',
  `request_method` varchar(10) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT 'HTTP request method',
  `oper_name` varchar(50) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT 'operator name',
  `oper_url` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT 'request URL',
  `oper_ip` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT '' COMMENT 'client host address',
  `oper_time` datetime(0) NULL DEFAULT NULL COMMENT 'operation time',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1585197503834284034 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = 'operation log' ROW_FORMAT = Dynamic;
SET FOREIGN_KEY_CHECKS = 1;
V. Preparing the ELK setup
1. Create the directories to mount
Elasticsearch mounts:
mkdir -p /mydata/elk/elasticsearch/{config,plugins,data,logs}
Kibana mount:
mkdir -p /mydata/elk/kibana/config
Logstash mount:
mkdir -p /mydata/elk/logstash/config
2. Elasticsearch configuration
vim /mydata/elk/elasticsearch/config/elasticsearch.yml
Put the following in it:
http.host: 0.0.0.0
xpack.security.enabled: false
http.host: allow access from any address.
xpack.security.enabled: disable password authentication.
3. Kibana configuration
vim /mydata/elk/kibana/config/kibana.yml
Content:
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://192.168.239.131:9200" ]
elasticsearch.hosts: points at the Elasticsearch address.
4. Logstash configuration
vim /mydata/elk/logstash/config/logstash.yml
Content:
http.host: 0.0.0.0
xpack.monitoring.elasticsearch.hosts: [ "http://192.168.239.131:9200" ]
Sync pipeline configuration (this file also decides where the last-run record is kept):
vim /mydata/elk/logstash/config/logstash.conf
Notes on a few of the settings in the file below:
jdbc_driver_library: you must download mysql-connector-java-8.0.28.jar yourself (pick whichever version you need) from https://mvnrepository.com/artifact/mysql/mysql-connector-java and place it in the config directory, as sketched after these notes;
statement: if the SQL is long, you can point to a .sql file instead (via statement_filepath); note that all paths here are paths inside the container;
last_run_metadata_path: the file that records where the previous run left off; create it as shown in the sketch below.
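A sketch of preparing those two files on the host (the Maven Central URL follows the standard repository layout for mysql-connector-java 8.0.28; adjust it if you chose a different version):

cd /mydata/elk/logstash/config
# JDBC driver that logstash.conf points at
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar
# empty file for last_run_metadata_path
touch log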

input {
  stdin {
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://192.168.239.131:3306/test?useUnicode=true&characterEncoding=utf8&serverTimezone=UTC"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_driver_library => "/usr/share/logstash/config/mysql-connector-java-8.0.28.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "300000"
    statement => "SELECT id, title, business_type, method, request_method, oper_name, oper_url, oper_ip, oper_time FROM sys_log"
    schedule => "*/1 * * * *"
    use_column_value => false
    tracking_column_type => "timestamp"
    tracking_column => "oper_time"
    record_last_run => true
    jdbc_default_timezone => "Asia/Shanghai"
    last_run_metadata_path => "/usr/share/logstash/config/log"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.239.131:9200"]
    index => "sys_log"
    document_id => "%{id}"
  }
  stdout {
    codec => json_lines
  }
}
Point the pipeline at the configuration file above:
vim /mydata/elk/logstash/config/pipelines.yml
Content:
- pipeline.id: sys_log
  path.config: "/usr/share/logstash/config/logstash.conf"
/mydata/elk/logstash/config/ should now contain logstash.yml, logstash.conf, pipelines.yml, the mysql-connector-java-8.0.28.jar driver, and the log file.

To avoid permission problems when the containers write to these mounts, grant write permission on the directories and files created above, for example as follows:
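A blunt but common approach for a test environment (a sketch; tighten the permissions for anything production-facing):

chmod -R 777 /mydata/elk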
VI. Running the containers
1. One-shot setup with Docker Compose
Create a docker-compose.yml in the elk directory (/mydata/elk).
Content:
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.17.7
    container_name: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - cluster.name=elasticsearch
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /mydata/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins
      - /mydata/elk/elasticsearch/data:/usr/share/elasticsearch/data
      - /mydata/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
  kibana:
    image: kibana:7.17.7
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    environment:
      I18N_LOCALE: zh-CN
    volumes:
      - /mydata/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
  logstash:
    image: logstash:7.17.7
    container_name: logstash
    ports:
      - "5044:5044"
    volumes:
      - /mydata/elk/logstash/config:/usr/share/logstash/config
    depends_on:
      - elasticsearch
Be sure to run the command from the directory that contains docker-compose.yml!
Run:
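Depending on your Docker installation, use either the compose plugin or the standalone binary:

docker compose up -d      # Docker with the compose plugin
# or: docker-compose up -d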
Once everything is up, you can jump straight to step 5 below to open Kibana.

Alternatively, without Docker Compose, the containers can be started one at a time:
1. Run Elasticsearch
docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx512m" -v /mydata/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /mydata/elk/elasticsearch/data:/usr/share/elasticsearch/data -v /mydata/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins -d elasticsearch:7.17.7
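Once the container has had a minute to start, Elasticsearch should answer on port 9200 (a quick check, using the host IP from earlier):

curl http://192.168.239.131:9200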

2. Run Kibana
docker run --name kibana -e ELASTICSEARCH_HOSTS=http://192.168.239.131:9200 -p 5601:5601 -d kibana:7.17.7

3. Run Logstash
docker run -d -p 5044:5044 -v /mydata/elk/logstash/config:/usr/share/logstash/config --name logstash logstash:7.17.7

4. All containers up and running

5. Access Kibana
http://192.168.239.131:5601/app/home#/

VII. Creating the index
In Kibana Dev Tools, create the sys_log index with an explicit mapping:
PUT /sys_log
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "index": {
      "max_result_window": 100000000
    }
  },
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "@version": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "business_type": {
        "type": "integer"
      },
      "title": {
        "type": "text"
      },
      "method": {
        "type": "text"
      },
      "request_method": {
        "type": "text"
      },
      "oper_name": {
        "type": "text"
      },
      "oper_url": {
        "type": "text"
      },
      "oper_ip": {
        "type": "text"
      },
      "oper_time": {
        "type": "date"
      },
      "id": {
        "type": "long"
      }
    }
  }
}
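The same request can also be issued from the shell instead of Dev Tools; a sketch that assumes the JSON body above has been saved to a local file named sys_log_mapping.json (a hypothetical name):

curl -X PUT "http://192.168.239.131:9200/sys_log" -H 'Content-Type: application/json' -d @sys_log_mapping.json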
VIII. Testing
Insert a few rows into MySQL, then watch the Logstash log.
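A sketch of inserting a test row and tailing the Logstash output (the column values are made up for illustration):

docker exec mysql mysql -uroot -proot -e "INSERT INTO test.sys_log (title, business_type, method, request_method, oper_name, oper_url, oper_ip, oper_time) VALUES ('user module', 1, 'addUser', 'POST', 'admin', '/system/user', '192.168.239.1', NOW());"
# the jdbc input polls every minute, so the new row should appear in the Logstash output shortly
docker logs -f logstash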

Now check in Kibana whether the documents are there:
Run:
GET /sys_log/_search
{
  "query": {
    "match_all": {}
  }
}
We can see 6 hits, matching MySQL exactly!


IX. Summary
It took a full day, but the stack is finally up and running. The next article will cover building an ELK setup for log collection!