Examining the HDFS File Structure of an HBase Table
Create a table in HBase with the following schema:
- {NAME => 'USER_TEST_TABLE',
- MEMSTORE_FLUSHSIZE => '67108864',
- MAX_FILESIZE => '1073741824',
- FAMILIES => [
- {NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0',
- COMPRESSION => 'NONE', VERSIONS => '1', TTL => '2147483647',
- BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'
- },
- {NAME => 'info2', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0',
- COMPRESSION => 'NONE', VERSIONS => '1', TTL => '2147483647',
- BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'
- }
- ]
- }
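For reference, here is a minimal sketch of an HBase shell statement that creates such a table; exactly where table-level attributes such as MAX_FILESIZE and MEMSTORE_FLUSHSIZE go varies between shell versions, so treat this as illustrative:
- create 'USER_TEST_TABLE',
-   {NAME => 'info', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0',
-    COMPRESSION => 'NONE', VERSIONS => '1', TTL => '2147483647',
-    BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
-   {NAME => 'info2', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0',
-    COMPRESSION => 'NONE', VERSIONS => '1', TTL => '2147483647',
-    BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'},
-   MAX_FILESIZE => '1073741824', MEMSTORE_FLUSHSIZE => '67108864'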
After inserting some test data into this table, we can look at how the table is stored as files in HDFS.
The HDFS location for storing HBase files, specified in the HBase server configuration, is:
hdfs://HADOOPCLUS01:<port>/hbase
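This path is set by the hbase.rootdir property; a sketch of the corresponding hbase-site.xml entry, with the port left as a placeholder as above:
- <property>
-   <name>hbase.rootdir</name>
-   <value>hdfs://HADOOPCLUS01:<port>/hbase</value>
- </property>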
Log in to the namenode server (HADOOPCLUS01 here) and use the hadoop command to inspect this table's files in HDFS.
1. View the HBase root directory:
- [hadoop@HADOOPCLUS01 bin]$ hadoop fs -ls /hbase
- Found 37 items
- drwxr-xr-x - hadoop cug-admin 0 2013-03-27 09:29 /hbase/-ROOT-
- drwxr-xr-x - hadoop cug-admin 0 2013-03-27 09:29 /hbase/.META.
- drwxr-xr-x - hadoop cug-admin 0 2013-03-26 13:15 /hbase/.corrupt
- drwxr-xr-x - hadoop cug-admin 0 2013-03-27 09:48 /hbase/.logs
- drwxr-xr-x - hadoop cug-admin 0 2013-03-30 17:49 /hbase/.oldlogs
- drwxr-xr-x - hadoop cug-admin 0 2013-03-30 17:49 /hbase/splitlog
- drwxr-xr-x - hadoop cug-admin 0 2013-03-30 17:49 /hbase/USER_TEST_TABLE
Here you can see every table's directory: each table gets one directory of its own in HDFS.
The catalog tables -ROOT- and .META. are no exception, and both share the same table structure; for their schema and how they map the HRegions of all tables in the HBase deployment, see the article reposted in the previous post.
The splitlog and .corrupt directories are used by the log-split process to store intermediate split files and damaged log files, respectively.
The .logs and .oldlogs directories hold the HLogs (write-ahead logs).
.oldlogs holds HLogs that are no longer needed (every Put they record has been fully written to HBase); they are deleted later on.
An HLog file is a SequenceFile composed of individual HLog.Entry records; the Entry is the basic building block of an HLog and also the basic unit of reads and writes.
Each Entry consists of two parts: an HLogKey and a WALEdit.
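Because an HLog is a SequenceFile of such entries, it can be dumped in readable form. A sketch, assuming a release that ships the HLogPrettyPrinter class (newer releases expose the same function as the wal command); the bracketed path components are placeholders:
- [hadoop@HADOOPCLUS01 bin]$ hbase org.apache.hadoop.hbase.regionserver.wal.HLogPrettyPrinter \
-     /hbase/.logs/<regionserver>/<hlog-file>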
2. View the contents of the table's directory in HDFS:
- [hadoop@HADOOPCLUS01 bin]$ hadoop fs -ls /hbase/USER_TEST_TABLE
- Found 2 items
- drwxr-xr-x - hadoop cug-admin 0 2013-03-28 10:18 /hbase/USER_TEST_TABLE/03d99a89a256f0e09d0222709b1d0cbe
- drwxr-xr-x - hadoop cug-admin 0 2013-03-28 10:18 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce
There are two directories, which means this table has already split into two HRegions.
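The split can also be confirmed from the HBase shell by scanning the .META. catalog table, which carries one row per region; a sketch for this generation of HBase (later releases renamed the table hbase:meta):
- hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}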
3. Next, look at the file layout of one of the HRegions:
- [hadoop@HADOOPCLUS01 bin]$ hadoop fs -ls /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce
- Found 4 items
- -rw-r--r-- 3 hadoop cug-admin 1454 2013-03-28 10:18 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/.regioninfo
- drwxr-xr-x - hadoop cug-admin 0 2013-03-29 15:21 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/.tmp
- drwxr-xr-x - hadoop cug-admin 0 2013-03-29 15:21 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info
- drwxr-xr-x - hadoop cug-admin 0 2013-03-28 10:18 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info2
.regioninfo holds this HRegion's metadata, including the StartRowKey and EndRowKey that record the slice of the table this Region covers.
info and info2 are the two ColumnFamilies, each stored in its own directory.
4. Next, use cat on the .regioninfo file to view its contents. With the binary noise filtered out, the stored information is:
- [hadoop@HADOOPCLUS01 bin]$ hadoop fs -cat /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/.regioninfo
- USER_TEST_TABLE,AAA-AAA11110UC1,1364437081331.68b8ad74920040ba9f39141e908c67ce.
- AA-AAA11110UC1
- USER_TEST_TABLE
- IS_ROOT false
- IS_META false
- MAX_FILESIZE 1073741824
- MEMSTORE_FLUSHSIZE 67108864
- info
- BLOOMFILTER NONE
- REPLICATION_SCOPE 0
- VERSIONS 1
- COMPRESSION NONE
- TTL 2147483647
- BLOCKSIZE 65536
- IN_MEMORY false
- BLOCKCACHE true
- info2
- BLOOMFILTER NONE
- REPLICATION_SCOPE 0
- VERSIONS 1
- COMPRESSION NONE
- TTL 2147483647
- BLOCKSIZE 65536
- IN_MEMORY false
- BLOCKCACHE true
- REGION => {NAME => 'USER_TEST_TABLE,\x00\x00\x00\x0A\x00\x00\x00\x09AAA-AAA11110UC1\x00\x00\x00\x00,
- 1364437081331.68b8ad74920040ba9f39141e908c67ce.',
- STARTKEY => '\x00\x00\x00\x0A\x00\x00\x00\x09AAA-AAA11110UC1\x00\x00\x00\x00',
- ENDKEY => '',
- ENCODED => 68b8ad74920040ba9f39141e908c67ce,
- TABLE => {{NAME => 'USER_TEST_TABLE', MAX_FILESIZE => '1073741824',
- MEMSTORE_FLUSHSIZE => '67108864',
- FAMILIES => [{NAME => 'info', BLOOMFILTER => 'NONE',
- REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE',
- TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
- BLOCKCACHE => 'true'},
- {NAME => 'info2', BLOOMFILTER => 'NONE',
- REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE',
- TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
- BLOCKCACHE => 'true'}]}}
5. View the files and directories under the info ColumnFamily:
- [hadoop@HADOOPCLUS01 bin]$ hadoop fs -ls /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info
- Found 4 items
- -rw-r--r-- 3 hadoop cug-admin 547290902 2013-03-28 10:18 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info/4024386696476133625
- -rw-r--r-- 3 hadoop cug-admin 115507832 2013-03-29 15:20 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info/5184431447574744531
- -rw-r--r-- 3 hadoop cug-admin 220368457 2013-03-29 15:13 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info/6150330155463854827
- -rw-r--r-- 3 hadoop cug-admin 24207459 2013-03-29 15:21 /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info/7480382738108050697
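The contents of these store files (HFiles) can be printed in readable form with the HFile tool that ships with HBase; a sketch using one of the files listed above (exact flags can vary by version):
- [hadoop@HADOOPCLUS01 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -p -f \
-     /hbase/USER_TEST_TABLE/68b8ad74920040ba9f39141e908c67ce/info/4024386696476133625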
6. Examine the actual HBase data held in these HDFS store files.
They contain part of the test data inserted earlier.
When data is stored in HBase, each Qualifier carries five attributes: RowKey, ColumnFamily, Qualifier, TimeStamp, and Value. One stored record looks like this:
- AA-AAA11110UDFinfoCountry=1 13560596974000
# the AA-AAA11110UDF part corresponds to the RowKey;
# info corresponds to the ColumnFamily;
# Country corresponds to the Qualifier;
# 1 corresponds to the Value;
# 13560596974000 corresponds to the TimeStamp.
A later post will analyze how the RowKey maps onto the stored AA-AAA11110UDF prefix.
7. Viewing files over HTTP:
Besides the commands above, the files in HDFS can also be browsed over HTTP. Configure the following in hdfs-site.xml:
- <property>
- <name>dfs.datanode.http.address</name>
- <value>0.0.0.0:62075</value>
- </property>
The URL for browsing HDFS over HTTP is therefore:
- http://HADOOPCLUS02:62075/browseDirectory.jsp?namenodeInfoPort=62070&dir=/
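The same listing can also be fetched from the command line, for example with curl (a sketch using the address above):
- curl 'http://HADOOPCLUS02:62075/browseDirectory.jsp?namenodeInfoPort=62070&dir=/'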
Original article: http://greatwqs.iteye.com/blog/1839232