Oracle Direct NFS Network Traffic Testing Explained
NFS can also be put to use on the database side, and this article covers Oracle Direct NFS: we walk through the test procedure and the configuration in detail. Testing the Oracle Direct NFS feature shows that it improves I/O concurrency by opening multiple TCP connections to the NFS server. As noted earlier, one reason plain NFS I/O throughput is limited is that operations from the NFS client to the NFS server are serialized: a normal NFS client establishes a single connection to the server, and each request is processed only after the previous one completes, so random-read I/O cannot scale. Oracle Direct NFS establishes multiple TCP connections to the NFS server so requests can be processed concurrently, which in theory can improve NFS performance substantially.
In practice, Direct NFS reads were fast: we measured up to 400 MB/s with no obvious bottleneck. Writes, however, were slow: while inserting data, write traffic was only about 3.4 MB/s. The cause of the slow writes is unknown; our guess is that the Linux NFS server does not interact well with Oracle Direct NFS. Note also that when an RMAN backup targets a path covered by Direct NFS, the backup automatically goes through Direct NFS as well.
Test procedure:
First switch the ODM library so that the Direct NFS-capable ODM library is loaded:
[oracle@nfs_client lib]$ ls -l *odm*
-rw-r--r-- 1 oracle oinstall 54764 Sep 11 2008 libnfsodm11.so
lrwxrwxrwx 1 oracle oinstall 12 Jul 8 18:55 libodm11.so -> libodmd11.so
-rw-r--r-- 1 oracle oinstall 12755 Sep 11 2008 libodmd11.so
[oracle@nfs_client lib]$ rm libodm11.so
[oracle@nfs_client lib]$ ln -s libnfsodm11.so libodm11.so
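The swap above only repoints the libodm11.so symlink. A minimal sketch of the same operation, run in a scratch directory so it is safe to execute anywhere (on a real system the directory would be $ORACLE_HOME/lib, with the instance shut down):

```shell
#!/bin/sh
# Demonstrate the ODM symlink swap in a temporary directory,
# using empty files as stand-ins for the real libraries.
set -e
dir=$(mktemp -d)
cd "$dir"
touch libodmd11.so libnfsodm11.so   # stand-ins for the real libraries
ln -s libodmd11.so libodm11.so      # initial state: stub ODM library
rm libodm11.so                      # remove the old symlink, as above
ln -s libnfsodm11.so libodm11.so    # point it at the Direct NFS ODM library
readlink libodm11.so                # prints: libnfsodm11.so
```

After restarting the instance, the alert log should report that the instance is running with the Oracle Direct NFS ODM library.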
On the NFS server, share a directory. To keep the disks from becoming the I/O bottleneck, build a RAID 0 array from eight disks, create an ext3 filesystem on it, and use it as the NFS server's export:
mdadm -C /dev/md0 --level raid0 -c 8 -n 8 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mkfs -t ext3 /dev/md0
mount /dev/md0 /nfs
Then configure the export in /etc/exports:
/nfs 192.168.172.132(rw,no_root_squash,insecure)
service nfs restart
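A minimal sanity check of the export entry, sketched against a scratch copy so it can run anywhere; on the real server the file is /etc/exports:

```shell
#!/bin/sh
# Write the export line to a scratch file and verify its format.
set -e
exports_file=$(mktemp)
echo '/nfs 192.168.172.132(rw,no_root_squash,insecure)' > "$exports_file"
# On the server itself you would then run:
#   exportfs -ra    # re-read /etc/exports without restarting the service
#   showmount -e    # confirm /nfs is exported
grep -c '^/nfs ' "$exports_file"    # prints: 1
```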
On the database host (the NFS client):
[oracle@nfs_client dbs]$ cat oranfstab
server: node_data1
path: 192.168.172.128
export: /nfs mount: /opt/oracle/oradata/nfs
Then mount the export through the regular kernel NFS client:
mount -t nfs 192.168.172.128:/nfs /opt/oracle/oradata/nfs
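The oranfstab entry maps the server's export to the local mount point. A small sketch that writes the same entry to a scratch file and extracts that mapping (the field positions are an assumption based on the format shown above):

```shell
#!/bin/sh
# Write a scratch oranfstab and extract the export path and mount point.
# The real file lives in $ORACLE_HOME/dbs (or /etc) on the database host.
set -e
f=$(mktemp)
cat > "$f" <<'EOF'
server: node_data1
path: 192.168.172.128
export: /nfs mount: /opt/oracle/oradata/nfs
EOF
# On the "export:" line, $2 is the exported path and $4 the local mount point.
awk '/^export:/ {print $2, $4}' "$f"    # prints: /nfs /opt/oracle/oradata/nfs
```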
The two machines are connected by 10-gigabit NICs; the network itself was measured at over 800 MB/s.
Create a database:
CREATE DATABASE oratest
USER SYS IDENTIFIED BY sys
USER SYSTEM IDENTIFIED BY system
CONTROLFILE REUSE
LOGFILE GROUP 1 ('/opt/oracle/oradata/oratest/redo_1_1.log') SIZE 200M REUSE,
GROUP 2 ('/opt/oracle/oradata/oratest/redo_2_1.log') SIZE 200M REUSE,
GROUP 3 ('/opt/oracle/oradata/oratest/redo_3_1.log') SIZE 200M REUSE,
GROUP 4 ('/opt/oracle/oradata/oratest/redo_4_1.log') SIZE 200M REUSE,
GROUP 5 ('/opt/oracle/oradata/oratest/redo_5_1.log') SIZE 200M REUSE
MAXLOGFILES 20
MAXLOGMEMBERS 5
MAXLOGHISTORY 1000
MAXDATAFILES 1000
MAXINSTANCES 2
NOARCHIVELOG
CHARACTER SET US7ASCII
NATIONAL CHARACTER SET AL16UTF16
DATAFILE '/opt/oracle/oradata/oratest/system01.dbf' SIZE 2046M REUSE
SYSAUX DATAFILE '/opt/oracle/oradata/oratest/sysaux01.dbf' SIZE 2046M REUSE
EXTENT MANAGEMENT LOCAL
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/opt/oracle/oradata/oratest/temp01.dbf' SIZE 2046M REUSE
UNDO TABLESPACE undotbs1
DATAFILE '/opt/oracle/oradata/oratest/undotbs01.dbf' SIZE 2046M REUSE
SET TIME_ZONE = '+08:00';
Then create a tablespace tbs_test on the NFS mount:
create tablespace tbs_test datafile '/opt/oracle/oradata/nfs/test01.dbf' size 2047M;
SQL> col svrname format a40
SQL> col dirname format a40
SQL> set linesize 200
SQL> select * from v$dnfs_servers;

        ID SVRNAME                                  DIRNAME                                     MNTPORT    NFSPORT      WTMAX      RTMAX
---------- ---------------------------------------- ---------------------------------------- ---------- ---------- ---------- ----------
         1 nfs_server                               /nfs                                            907       2049      32768      32768

1 row selected.
SQL> col filename format a40
SQL> select * from v$dnfs_files;

FILENAME                                   FILESIZE       PNUM     SVR_ID
---------------------------------------- ---------- ---------- ----------
/opt/oracle/oradata/nfs/test01.dbf       2145394688          9          1

SQL> col path format a30
SQL> select * from V$DNFS_CHANNELS;
PNUM SVRNAME PATH CH_ID SVR_ID SENDS RECVS PINGS
---------- ---------------------------------------- ------------------------------ ---------- ---------- ---------- ---------- ----------
5 nfs_server 192.168.172.128 0 1 9 25 0
9 nfs_server 192.168.172.128 0 1 28 75 0
11 nfs_server 192.168.172.128 0 1 96 250 0
12 nfs_server 192.168.172.128 0 1 166 552 0
13 nfs_server 192.168.172.128 0 1 216 955 0
14 nfs_server 192.168.172.128 0 1 3 7 0
15 nfs_server 192.168.172.128 0 1 351 1057 0
17 nfs_server 192.168.172.128 0 1 899 2708 0
18 nfs_server 192.168.172.128 0 1 3 7 0
19 nfs_server 192.168.172.128 0 1 2 4 0
20 nfs_server 192.168.172.128 0 1 10 30 0
21 nfs_server 192.168.172.128 0 1 37 109 0
22 nfs_server 192.168.172.128 0 1 18 52 0
13 rows selected.
On the NFS server, check the connections to port 2049:
[root@nfs_server data]# netstat -an |grep 2049
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN
tcp 0 0 192.168.172.128:2049 192.168.172.132:14111 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:51478 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:61228 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:52532 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:10827 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:31047 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:55132 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:866 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:32634 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:54646 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:47987 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:22448 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:49091 ESTABLISHED
While running insert into test select * from test;, watching NIC traffic with a home-grown iftop-style script showed write traffic of only about 3.4 MB/s:
ifname in_kbytes/s out_kbytes/s all_kbytes/s in_packets/s out_packets/s all_packets/s
--------- ----------- ------------ ------------ ------------ ------------- -------------
eth2 3133 99 3232 2370 770 3140
eth2 3364 147 3511 2559 837 3396
eth2 3630 1511 5142 2828 1845 4673
eth2 3315 103 3419 2517 785 3302
eth2 3380 105 3486 2535 796 3331
eth2 3627 113 3741 2718 854 3572
eth2 3610 112 3722 2704 853 3557
eth2 3586 113 3700 2713 862 3575
eth2 3471 107 3579 2589 804 3393
eth2 3470 108 3578 2618 822 3440
eth2 3347 105 3453 2525 807 3332
eth2 3406 106 3512 2549 809 3358
eth2 3351 106 3458 2547 814 3361
eth2 3248 101 3349 2427 769 3196
eth2 2743 87 2831 2080 666 2746
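The per-interface numbers above came from a home-grown script. A minimal sketch of the same idea, sampling the RX/TX byte counters in /proc/net/dev twice and printing the rates (the interface name and interval are assumptions; it defaults to lo so it runs on any Linux box):

```shell
#!/bin/sh
# Sample /proc/net/dev twice and print in/out rates in kbytes/s.
rate_kb() {   # usage: rate_kb bytes_before bytes_after seconds
    echo $(( ($2 - $1) / 1024 / $3 ))
}
sample() {    # usage: sample ifname -> "rx_bytes tx_bytes"
    # tr handles counters that abut the "ifname:" prefix; after splitting,
    # field 2 is RX bytes and field 10 is TX bytes.
    tr ':' ' ' < /proc/net/dev | awk -v ifc="$1" '$1 == ifc {print $2, $10}'
}
ifname=${1:-lo}
interval=${2:-1}
set -- $(sample "$ifname"); rx1=$1; tx1=$2
sleep "$interval"
set -- $(sample "$ifname"); rx2=$1; tx2=$2
echo "$ifname in_kbytes/s=$(rate_kb "$rx1" "$rx2" "$interval") out_kbytes/s=$(rate_kb "$tx1" "$tx2" "$interval")"
```

Run as `./ifrate.sh eth2 1` to sample a specific interface once per second.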
When executing select count(*) from test;, network traffic was much higher, peaking at around 400 MB/s. Checking the NFS server again shows many connections to port 2049, unlike the operating system's NFS client, which keeps only a single connection to the server. This confirms that Oracle Direct NFS achieves highly concurrent I/O, and thus better NFS performance, by opening multiple TCP connections to the server. The number of connections varies with load: the heavier the load, the more connections.
[root@nfs_server nfs]# netstat -an |grep 2049
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN
tcp 166768 0 192.168.172.128:2049 192.168.172.132:20048 ESTABLISHED
tcp 173716 140 192.168.172.128:2049 192.168.172.132:22625 ESTABLISHED
tcp 172772 0 192.168.172.128:2049 192.168.172.132:28796 ESTABLISHED
tcp 170832 0 192.168.172.128:2049 192.168.172.132:4468 ESTABLISHED
tcp 171764 140 192.168.172.128:2049 192.168.172.132:42147 ESTABLISHED
tcp 172684 0 192.168.172.128:2049 192.168.172.132:63693 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:48835 ESTABLISHED
tcp 170500 0 192.168.172.128:2049 192.168.172.132:57326 ESTABLISHED
tcp 171772 0 192.168.172.128:2049 192.168.172.132:43246 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:36080 ESTABLISHED
udp 0 0 0.0.0.0:2049 0.0.0.0:*
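Counting the established Direct NFS connections is a one-line filter; it is sketched here against a captured sample so it runs anywhere, and on the server itself you would pipe `netstat -an` into the same awk filter:

```shell
#!/bin/sh
# Count ESTABLISHED connections whose local address ($4) ends in :2049 (NFS).
# The LISTEN socket is excluded by the state test on $6.
set -e
count=$(awk '$6 == "ESTABLISHED" && $4 ~ /:2049$/' <<'EOF' | wc -l
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN
tcp 0 0 192.168.172.128:2049 192.168.172.132:20048 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:22625 ESTABLISHED
tcp 0 0 192.168.172.128:2049 192.168.172.132:28796 ESTABLISHED
EOF
)
echo "$count"    # prints: 3
```

Sampling this count under increasing load shows the connection scaling described above.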