What is the difference between varchar(50) and varchar(500) in MySQL? Do you know?
I. Problem statement
When we design table schemas, our design guidelines contain the following rule:
- For variable-length fields, use the shortest declared length that still satisfies the requirements.
Why this rule? Searching online, the justification comes down to two considerations:
- storage space
- query performance
Online sources claim that varchar(50) and varchar(500) take exactly the same storage space. Is that really true?
And on the performance side, does an overly long declared length actually slow queries down?
In this article I will investigate and verify both questions.
II. Verifying the storage space difference
1. Prepare two tables
CREATE TABLE `category_info_varchar_50` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `name` varchar(50) NOT NULL COMMENT 'category name',
  `is_show` tinyint(4) NOT NULL DEFAULT '0' COMMENT 'visibility: 0 hidden, 1 shown',
  `sort` int(11) NOT NULL DEFAULT '0' COMMENT 'sort order',
  `deleted` tinyint(1) DEFAULT '0' COMMENT 'soft-delete flag',
  `create_time` datetime NOT NULL COMMENT 'creation time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`) USING BTREE,
  KEY `idx_name` (`name`) USING BTREE COMMENT 'name index'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='category';
CREATE TABLE `category_info_varchar_500` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `name` varchar(500) NOT NULL COMMENT 'category name',
  `is_show` tinyint(4) NOT NULL DEFAULT '0' COMMENT 'visibility: 0 hidden, 1 shown',
  `sort` int(11) NOT NULL DEFAULT '0' COMMENT 'sort order',
  `deleted` tinyint(1) DEFAULT '0' COMMENT 'soft-delete flag',
  `create_time` datetime NOT NULL COMMENT 'creation time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`) USING BTREE,
  KEY `idx_name` (`name`) USING BTREE COMMENT 'name index'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='category';
2. Prepare the data
Insert identical data into both tables. To make any difference stand out, insert 1,000,000 rows:
DELIMITER $$
CREATE PROCEDURE batchInsertData(IN total INT)
BEGIN
  DECLARE start_idx INT DEFAULT 1;
  DECLARE end_idx INT;
  DECLARE batch_size INT DEFAULT 500;
  DECLARE insert_values TEXT;
  SET end_idx = LEAST(total, start_idx + batch_size - 1);
  WHILE start_idx <= total DO
    SET insert_values = '';
    WHILE start_idx <= end_idx DO
      SET insert_values = CONCAT(insert_values, CONCAT('(\'name', start_idx, '\', 0, 0, 0, NOW(), NOW()),'));
      SET start_idx = start_idx + 1;
    END WHILE;
    SET insert_values = LEFT(insert_values, LENGTH(insert_values) - 1); -- remove the trailing comma
    SET @sql = CONCAT('INSERT INTO category_info_varchar_50 (name, is_show, sort, deleted, create_time, update_time) VALUES ', insert_values, ';');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    SET @sql = CONCAT('INSERT INTO category_info_varchar_500 (name, is_show, sort, deleted, create_time, update_time) VALUES ', insert_values, ';');
    PREPARE stmt FROM @sql;
    EXECUTE stmt;
    SET end_idx = LEAST(total, start_idx + batch_size - 1);
  END WHILE;
END$$
DELIMITER ;
CALL batchInsertData(1000000);
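The batching logic of the stored procedure can also be sketched outside SQL. Below is a minimal Python illustration of my own (it only assembles the statements and does not connect to a database; the names mirror the procedure above):

```python
def build_batches(total, batch_size=500, table="category_info_varchar_50"):
    """Assemble batched INSERT statements, mirroring the stored procedure:
    rows 1..total are grouped into VALUES lists of at most batch_size rows."""
    batches = []
    start = 1
    while start <= total:
        end = min(total, start + batch_size - 1)
        values = ",".join(
            f"('name{i}', 0, 0, 0, NOW(), NOW())" for i in range(start, end + 1)
        )
        batches.append(
            f"INSERT INTO {table} "
            "(name, is_show, sort, deleted, create_time, update_time) "
            f"VALUES {values};"
        )
        start = end + 1
    return batches

# 1200 rows in batches of 500 -> 3 statements (500 + 500 + 200 rows)
for stmt in build_batches(1200):
    print(stmt[:80], "...")
```

Batching ~500 rows per INSERT keeps statement sizes manageable while amortizing per-statement overhead, which is why the procedure does the same.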
3. Verify storage space
SQL to inspect the first table:
SELECT
  table_schema AS "database",
  table_name AS "table",
  table_rows AS "rows",
  TRUNCATE(data_length / 1024 / 1024, 2) AS "data size (MB)",
  TRUNCATE(index_length / 1024 / 1024, 2) AS "index size (MB)"
FROM
  information_schema.TABLES
WHERE
  table_schema = 'test_mysql_field'
  AND table_name = 'category_info_varchar_50'
ORDER BY
  data_length DESC,
  index_length DESC;
Query result:
(screenshot not reproduced here)
SQL to inspect the second table:
SELECT
  table_schema AS "database",
  table_name AS "table",
  table_rows AS "rows",
  TRUNCATE(data_length / 1024 / 1024, 2) AS "data size (MB)",
  TRUNCATE(index_length / 1024 / 1024, 2) AS "index size (MB)"
FROM
  information_schema.TABLES
WHERE
  table_schema = 'test_mysql_field'
  AND table_name = 'category_info_varchar_500'
ORDER BY
  data_length DESC,
  index_length DESC;
Query result:
(screenshot not reproduced here)
4. Conclusion
The two tables occupy the same amount of space; at this level of measurement there is no difference.
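One nuance the aggregate numbers hide (my own addition, based on MySQL's documented storage requirements rather than the measurement above): a varchar is stored as its actual bytes plus a length prefix of 1 byte when the column's maximum possible byte length is at most 255, and 2 bytes otherwise. Under utf8mb4 (up to 4 bytes per character), varchar(50) tops out at 200 bytes and gets a 1-byte prefix, while varchar(500) tops out at 2000 bytes and gets a 2-byte prefix. That single extra byte per row is invisible at the granularity of 16 KB InnoDB pages. A quick sketch:

```python
def varchar_on_disk_bytes(value: str, declared_len: int,
                          max_bytes_per_char: int = 4) -> int:
    """Approximate stored size of one varchar value: the actual encoded
    bytes plus a 1- or 2-byte length prefix, chosen by the column's
    maximum possible byte length (declared length x bytes per char)."""
    max_possible = declared_len * max_bytes_per_char
    prefix = 1 if max_possible <= 255 else 2
    return prefix + len(value.encode("utf-8"))

# The same value stored in varchar(50) vs varchar(500) under utf8mb4:
print(varchar_on_disk_bytes("name100000", 50))   # 1 + 10 = 11 bytes
print(varchar_on_disk_bytes("name100000", 500))  # 2 + 10 = 12 bytes
```

So "identical storage" is very nearly, but not literally, true; for any practical purpose the difference is negligible.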
III. Verifying the performance difference
1. Covering-index queries
select name from category_info_varchar_50 where name = 'name100000';
-- elapsed: 0.012s
select name from category_info_varchar_500 where name = 'name100000';
-- elapsed: 0.012s
select name from category_info_varchar_50 order by name;
-- elapsed: 0.370s
select name from category_info_varchar_500 order by name;
-- elapsed: 0.379s
With covering-index queries, the performance difference is negligible.
2. Index lookups
select * from category_info_varchar_50 where name = 'name100000';
-- elapsed: 0.012s
select * from category_info_varchar_500 where name = 'name100000';
-- elapsed: 0.012s
select * from category_info_varchar_50 where name in('name100','name1000','name100000','name10000','name1100000',
'name200','name2000','name200000','name20000','name2200000','name300','name3000','name300000','name30000','name3300000',
'name400','name4000','name400000','name40000','name4400000','name500','name5000','name500000','name50000','name5500000',
'name600','name6000','name600000','name60000','name6600000','name700','name7000','name700000','name70000','name7700000','name800',
'name8000','name800000','name80000','name8800000','name900','name9000','name900000','name90000','name9900000');
-- elapsed: 0.011s-0.014s
-- with ORDER BY name added: 0.012s-0.015s
select * from category_info_varchar_500 where name in('name100','name1000','name100000','name10000','name1100000',
'name200','name2000','name200000','name20000','name2200000','name300','name3000','name300000','name30000','name3300000',
'name400','name4000','name400000','name40000','name4400000','name500','name5000','name500000','name50000','name5500000',
'name600','name6000','name600000','name60000','name6600000','name700','name7000','name700000','name70000','name7700000','name800',
'name8000','name800000','name80000','name8800000','name900','name9000','name900000','name90000','name9900000');
-- elapsed: 0.012s-0.014s
-- with ORDER BY name added: 0.014s-0.017s
Index range queries perform essentially the same; once ORDER BY is added, a small gap starts to appear.
3. Full-table scans and sorting
Full scan without sorting:
select * from category_info_varchar_50;
-- elapsed: 0.875s
select * from category_info_varchar_500;
-- elapsed: 0.866s
Full scan with sorting:
select * from category_info_varchar_50 order by name;
-- elapsed: 1.498s
select * from category_info_varchar_500 order by name;
-- elapsed: 4.875s
Conclusion:
With a full scan and no sort, the two tables perform the same; with a full scan plus ORDER BY, the performance gap is dramatic.
Why the difference?
Profiling the full-table sort on the varchar(50) table:
(screenshot of the profile output)
I found that 86% of the time went to transferring data. Next let's look at the status counters, focusing on Created_tmp_files and Sort_merge_passes:
(screenshots of the status output)
Created_tmp_files is 3.
Sort_merge_passes is 95.
Profiling the full-table sort on the varchar(500) table:
(screenshot of the profile output)
An extra temporary-table sort step shows up.
(screenshots of the status output)
Created_tmp_files is 4.
Sort_merge_passes is 645.
The MySQL documentation describes Sort_merge_passes as follows:
Number of merge passes that the sort algorithm has had to do. If this value is large, you may want to increase the value of the sort_buffer_size.
Sort_merge_passes is simply the number of merge passes MySQL's filesort performed. A large value means there is a large gap between sort_buffer_size and the volume of data being sorted. We can reduce the number of merge passes either by increasing sort_buffer_size or by shrinking the rows that get placed into the sort buffer.
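To make that relationship concrete, here is a back-of-the-envelope model (my own simplification, not MySQL's exact filesort implementation) of how many buffer-sized sorted chunks get spilled to disk before the merge phase, assuming the sort buffer reserves the declared maximum width for each row:

```python
import math

def filesort_chunks(num_rows, bytes_per_row, sort_buffer_size):
    """Rough external-sort model: how many sorted, buffer-sized chunks
    must be written out before merging can produce the final order."""
    rows_per_fill = max(1, sort_buffer_size // bytes_per_row)
    return math.ceil(num_rows / rows_per_fill)

SORT_BUFFER = 256 * 1024   # assumed sort_buffer_size (the MySQL default is 256 KB)
ROWS = 1_000_000
# utf8mb4 reserves up to 4 bytes per declared character in the sort key
print(filesort_chunks(ROWS, 50 * 4, SORT_BUFFER))    # varchar(50)
print(filesort_chunks(ROWS, 500 * 4, SORT_BUFFER))   # varchar(500): ~10x more chunks
```

The roughly tenfold gap in chunk counts under this toy model echoes the measured jump in Sort_merge_passes from 95 to 645.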
IV. Final conclusion
From all of the above: when we sort on a varchar column, MySQL estimates the memory needed per row from the column's declared maximum length, not from the actual data. If the declared length is far larger than necessary, the estimate can blow past sort_buffer_size, forcing MySQL into an on-disk merge sort with temporary files, which ultimately hurts query performance.
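As a practical takeaway, the same arithmetic shows directly how the declared length bounds the number of rows per sort-buffer fill (my own rule-of-thumb calculation, assuming utf8mb4 and a 256 KB sort_buffer_size):

```python
def rows_per_sort_buffer(declared_len, sort_buffer_size=256 * 1024,
                         max_bytes_per_char=4):
    """Rows that fit in one sort-buffer fill when the declared maximum
    byte width is reserved for the varchar sort key."""
    return sort_buffer_size // (declared_len * max_bytes_per_char)

print(rows_per_sort_buffer(50))    # 1310 rows per fill
print(rows_per_sort_buffer(500))   # 131 rows per fill
```

In short: declare varchar lengths close to what the data actually needs; the extra headroom is nearly free on disk but expensive in the sort buffer.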