Super Useful Shell Scripts for Linux Servers (Worth Bookmarking!)
Shell scripting is a powerful tool for raising productivity, simplifying tasks, and automating common workflows. Whether for system administration, data processing, task automation, or rapid prototyping, shell scripts are an essential programming tool. Below are several highly practical shell scripts.
1. Real-time data sync with inotify + rsync
Run it with: bash inotify_rsyncs.sh
Contents of inotify_rsyncs.sh:
#!/bin/bash
# Author: Harry
# chkconfig: - 85 15
# description: It is used to serve
# Watch /data for file changes, excluding the Temp directory
INOTIFY_CMD="inotifywait -mrq -e modify,create,move,delete /data/ --exclude=Temp"
# Commands to sync the data to the two backup hosts
RSYNC_CMD1="rsync -avz /data/ --exclude-from=/etc/rc.d/init.d/exclude.txt harry@10.14.2.102:/data/ --delete"
RSYNC_CMD2="rsync -avz /data/ --exclude-from=/etc/rc.d/init.d/exclude.txt harry@10.14.2.103:/data/ --delete"
$INOTIFY_CMD | while read DIRECTORY EVENT FILE
do
if [ $(pgrep rsync | wc -l) -le 0 ]; then
# Log the output of both rsync runs (the original redirect only captured the second)
$RSYNC_CMD1 >> rsync.log 2>&1 && $RSYNC_CMD2 >> rsync.log 2>&1
fi
done
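A note on the command-in-a-variable pattern above: storing a full rsync invocation in a plain string breaks as soon as any argument contains spaces. A minimal sketch of a safer variant, using bash arrays (the hosts and paths below are the same placeholders as in inotify_rsyncs.sh):

```shell
#!/bin/bash
# Sketch: build the rsync commands as arrays so each option stays one argument.
# Hosts, paths, and the exclude file are the placeholders from the script above.
RSYNC_OPTS=(-avz --delete --exclude-from=/etc/rc.d/init.d/exclude.txt)
TARGETS=(harry@10.14.2.102:/data/ harry@10.14.2.103:/data/)

sync_all() {
    local target
    for target in "${TARGETS[@]}"; do
        # "${RSYNC_OPTS[@]}" expands every option as its own word
        rsync "${RSYNC_OPTS[@]}" /data/ "$target" >> rsync.log 2>&1
    done
}
```

Inside the inotifywait loop you would then call sync_all instead of expanding the two string variables.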
2. MySQL automatic backup and cleanup script
Run it with: bash db_backup.sh
Contents of db_backup.sh:
#!/bin/bash
# Author: Harry
# Description: Database backup script
dbback(){
# Define variables
db_user="ma_prd"
db_passwd="<password>"
db_path="/data/bakmysql"
db_file="backuprecord"
db_date=$(date +%Y%m%d_%H%M%S)
# Make sure the backup directory exists
[ -d $db_path ] || exit 2
# Dump the database with mysqldump and compress it with gzip
mysqldump -u$db_user -p$db_passwd --single-transaction ma | gzip > $db_path/${db_date}_ma.sql.gz
REVAL=$?
if [ $REVAL -eq 0 ]
then
echo "$db_date ma db backup succeeded" >>$db_path/$db_file
else
echo "$db_date ma db backup failed" >>$db_path/$db_file
fi
}
# Delete backups older than 7 days
delbak(){
local db_path="/data/bakmysql"
find $db_path -type f -name "*ma*.gz" -mtime +7 -exec rm -f {} \;
}
dbback
delbak
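One caveat with the pipeline above: `$?` after `mysqldump | gzip` is gzip's exit status, not mysqldump's, so a failed dump can still be logged as successful. A hedged sketch of a stricter check using bash's PIPESTATUS plus a `gzip -t` integrity test (the /tmp path and the printf stand-in for mysqldump are illustrative):

```shell
#!/bin/bash
# Sketch: verify both pipeline stages and the produced archive.
# printf stands in for mysqldump; /tmp/demo_ma.sql.gz stands in for the real path.
out=/tmp/demo_ma.sql.gz

printf 'CREATE TABLE demo (id INT);\n' | gzip > "$out"
rc=("${PIPESTATUS[@]}")   # snapshot both stages' exit codes at once

if [ "${rc[0]}" -eq 0 ] && [ "${rc[1]}" -eq 0 ] && gzip -t "$out"; then
    echo "backup OK"
else
    echo "backup FAILED"
fi
```

Note that PIPESTATUS must be copied in one step: it is reset by the very next command, including a plain variable assignment.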
3. Website availability check with curl
Run it with: bash web_check_with_curl.sh
Contents of web_check_with_curl.sh:
#!/usr/bin/env bash
# Author: Harry
# Version:1.1
# Description: Web check with curl
# Define colors
red='\e[0;31m'
RED='\e[1;31m'
green='\e[0;32m'
GREEN='\e[1;32m'
blue='\e[0;34m'
BLUE='\e[1;34m'
cyan='\e[0;36m'
CYAN='\e[1;36m'
NC='\e[0m'
date=`date +%Y-%m-%d' '%H:%M:%S`
# Define the User-Agent
ua="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.9 Safari/537.36"
pass_count=0
fail_count=0
# URLs to check
urls=(
"http://www.xxx.com"
)
function request(){
status=$(curl -sk -o /dev/null --retry 1 --connect-timeout 1 -w '%{http_code}' --user-agent "$ua" $1)
if [ $status -eq '200' -o $status -eq '301' \
-o $status -eq '302' ]; then
echo -e "[${GREEN} Passed ${NC}] => $1"
((pass_count ++))
else
echo -e "[${RED} Failed ${NC}] => $1"
((fail_count ++))
fi
}
function main(){
echo "Start checking ..."
for((i=0;i<${#urls[*]};i++))
do
request ${urls[i]};
done
# Print the pass/fail summary
echo -e "======================== Summary ======================== "
echo -e "Total: ${cyan} $((pass_count + fail_count))${NC} Passed: ${green}${pass_count}${NC} Failed: ${red}${fail_count}${NC} Time: $date"
}
main $*
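The accept-list logic inside request() can be exercised without any network access. A small sketch of the same pass/fail decision (200/301/302 pass, everything else fails) as a standalone function:

```shell
#!/bin/bash
# Sketch: the decision request() makes on curl's status code, minus the curl call.
check_status() {
    case "$1" in
        200|301|302) echo "Passed" ;;
        *)           echo "Failed" ;;
    esac
}

check_status 200   # -> Passed
check_status 404   # -> Failed
check_status 000   # curl reports 000 when no connection was made -> Failed
```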
4. Detect and ban abnormal IP addresses
Run it with: bash ban_ip.sh
Contents of ban_ip.sh:
#!/bin/bash
# Current date and time, formatted (to the minute) to match the nginx log timestamp
DATE=$(date +%d/%b/%Y:%H:%M)
# Paths to the access log and the ban record file
LOG_FILE="/usr/local/nginx/logs/access.log"
BANNED_IP_LOG="/usr/local/nginx/logs/banned_ip.log"
# Find abnormal IPs: read the last 10000 log lines, keep entries from the current minute,
# and count requests per client IP
ABNORMAL_IP=$(tail -n 10000 "$LOG_FILE" | grep "$DATE" | awk '{a[$1]++}END{for(i in a) if(a[i]>10) print i}')
# Ban the abnormal IPs
declare -a IP_LIST
for IP in $ABNORMAL_IP; do
if ! iptables -vnL | grep -q "$IP"; then
iptables -I INPUT -s "$IP" -j DROP
echo "$(date +'%F_%T') $IP" >> "$BANNED_IP_LOG"
IP_LIST+=("$IP")
fi
done
# Print the banned IPs
if [ ${#IP_LIST[@]} -gt 0 ]; then
echo "The following IP addresses have been banned:"
printf "%s\n" "${IP_LIST[@]}"
else
echo "No IP addresses needed banning."
fi
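Before wiring this into iptables, the awk counting step can be dry-run against sample log lines. The sketch below exercises the same per-IP threshold (more than 10 requests) with fabricated IPs:

```shell
#!/bin/bash
# Sketch: the per-IP counting awk used above, fed with fabricated log lines.
sample_log() {
    local i
    for i in $(seq 1 12); do echo "10.0.0.1 - - [request $i]"; done
    for i in $(seq 1 3);  do echo "10.0.0.2 - - [request $i]"; done
}

# Prints only IPs seen more than 10 times (here: 10.0.0.1)
ABNORMAL_IP=$(sample_log | awk '{a[$1]++} END{for(i in a) if(a[i]>10) print i}')
echo "$ABNORMAL_IP"   # -> 10.0.0.1
```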
5. Real-time NIC traffic monitor
Run it with: bash interface_moniter.sh eth0
Contents of interface_moniter.sh:
#!/bin/bash
# Default to the lo interface if no argument is given
NIC=${1:-lo}
echo -e " In ------ Out"
while true; do
# Read the interface's RX byte counter (field 2) and TX byte counter (field 10)
# from /proc/net/dev; the interface name is field 1, followed by a colon
OLD_IN=$(awk -v nic="$NIC" '$1 == nic":" {print $2}' /proc/net/dev)
OLD_OUT=$(awk -v nic="$NIC" '$1 == nic":" {print $10}' /proc/net/dev)
# Wait one second
sleep 1
# Read the counters again
NEW_IN=$(awk -v nic="$NIC" '$1 == nic":" {print $2}' /proc/net/dev)
NEW_OUT=$(awk -v nic="$NIC" '$1 == nic":" {print $10}' /proc/net/dev)
# Compute the receive and transmit rates in KB/s (integer arithmetic)
IN="$(( (NEW_IN - OLD_IN) / 1024 ))KB/s"
OUT="$(( (NEW_OUT - OLD_OUT) / 1024 ))KB/s"
# Print the rates
echo "$IN $OUT"
sleep 1
done
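The original pattern match `$0~"$NIC"` is loose: asking for lo would also match any line merely containing "lo". In /proc/net/dev the interface name is the first field, followed by a colon, so an exact match on field 1 is safer. A self-contained sketch using a here-doc that mimics the file's layout (the counters are made up):

```shell
#!/bin/bash
# Sketch: exact-match extraction of the RX byte counter ($2) for one interface.
# The here-doc mimics /proc/net/dev's layout with fabricated counters.
get_rx_bytes() {
    awk -v nic="$1" '$1 == nic":" {print $2}' <<'EOF'
    lo: 1000 10 0 0 0 0 0 0 1000 10 0 0 0 0 0 0
  eth0: 5000 50 0 0 0 0 0 0 2000 20 0 0 0 0 0 0
EOF
}

get_rx_bytes eth0   # -> 5000
```

In the real script, simply read /proc/net/dev instead of the here-doc.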
6. Access log analysis script
Run it with: bash log_analyze.sh access.log
Contents of log_analyze.sh:
#!/bin/bash
# Log format: $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for"
LOG_FILE=$1
echo "Top 10 IPs by request count"
awk '{a[$1]++}END{print "UV:",length(a);for(v in a)print v,a[v]}' $LOG_FILE | sort -k2 -nr | head -10
echo "----------------------"
echo "Top IPs within a given time window"
# Note: this string comparison is only reliable within a single day, since the
# timestamp format is not lexicographically sortable across days or months
awk '$4>="[27/Nov/2018:13:20:25" && $4<="[27/Nov/2018:16:20:49"{a[$1]++}END{for(v in a)print v,a[v]}' $LOG_FILE | sort -k2 -nr | head -10
echo "----------------------"
echo "Pages requested more than 10 times"
awk '{a[$7]++}END{print "PV:",length(a);for(v in a){if(a[v]>10)print v,a[v]}}' $LOG_FILE | sort -k2 -nr
echo "----------------------"
echo "Request counts per page and status code (more than 5 hits)"
awk '{a[$7" "$9]++}END{for(v in a){if(a[v]>5)print v,a[v]}}' $LOG_FILE
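The "count, sort, head" idiom used throughout this script is easy to verify on a few fabricated log lines before running it on a real access log:

```shell
#!/bin/bash
# Sketch: top-IP counting on fabricated access-log lines (field 1 = client IP).
logs() {
    printf '%s\n' \
        '1.1.1.1 - - [x] "GET /  HTTP/1.1" 200 1' \
        '1.1.1.1 - - [x] "GET /a HTTP/1.1" 200 1' \
        '2.2.2.2 - - [x] "GET /  HTTP/1.1" 404 1'
}

# Same shape as the awk pipelines in log_analyze.sh
top_ip=$(logs | awk '{a[$1]++} END{for(v in a) print v, a[v]}' | sort -k2 -nr | head -1)
echo "$top_ip"   # -> 1.1.1.1 2
```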