Java Classes and the HBase Data Model
HBaseConfiguration
Package: org.apache.hadoop.hbase.HBaseConfiguration
Purpose: configures HBase. Usage example (note that the no-arg constructor is deprecated; the HBaseConfiguration.create() factory method is preferred in recent versions):

    Configuration hconfig = HBaseConfiguration.create();
    hconfig.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin
包名 : org.apache.hadoop.hbase.client.HBaseAdmin
Purpose: provides an interface for managing HBase table metadata. Its methods cover creating tables, deleting tables, listing tables, enabling or disabling tables, and adding or removing column families. Usage example:

    HBaseAdmin admin = new HBaseAdmin(config);
    admin.disableTable("tablename");
HTableDescriptor
包名: org.apache.hadoop.hbase.HTableDescriptor
Purpose: holds the name of a table and its column families. Usage example:

    HTableDescriptor htd = new HTableDescriptor(table);
    htd.addFamily(new HColumnDescriptor("family"));
HColumnDescriptor
包名: org.apache.hadoop.hbase.HColumnDescriptor
Purpose: maintains information about a column family, such as the number of versions and compression settings. It is typically used when creating a table or adding a column family to a table. A column family cannot be modified directly once created; the only way to change it is to delete it and re-create it. When a column family is deleted, the data inside it is deleted as well. Usage example:

    HTableDescriptor htd = new HTableDescriptor(tablename);
    HColumnDescriptor col = new HColumnDescriptor("content:");
    htd.addFamily(col);
HTable
包名: org.apache.hadoop.hbase.client.HTable
Purpose: used to communicate directly with a single HBase table. This class is not thread-safe for update operations. Usage example:

    HTable table = new HTable(conf, Bytes.toBytes(tablename));
    ResultScanner scanner = table.getScanner(family);
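Because HTable is not thread-safe for updates, one common workaround (besides the pooling approach described next) is to give every thread its own instance. The following is a minimal, self-contained sketch of that per-thread-instance idea using ThreadLocal; the nested Table class here is a hypothetical stand-in for any non-thread-safe client object, not the HBase class:

```java
// Sketch: one client object per thread via ThreadLocal.
// "Table" is a hypothetical stand-in, NOT org.apache.hadoop.hbase.client.HTable.
public class PerThreadTable {

    static class Table {
        // record which thread created this instance, for demonstration
        final long ownerThread = Thread.currentThread().getId();
    }

    // Each thread lazily receives its own Table, so no instance is shared.
    private static final ThreadLocal<Table> TABLES =
            ThreadLocal.withInitial(Table::new);

    public static Table get() {
        return TABLES.get();
    }

    public static void main(String[] args) throws InterruptedException {
        Table main = get();
        final Table[] other = new Table[1];
        Thread t = new Thread(() -> other[0] = get());
        t.start();
        t.join();
        // Different threads see different instances.
        System.out.println(main != other[0]);  // true
    }
}
```

The same thread always gets the same instance back, so a request-handling thread can perform many operations without re-creating its client object.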
HTablePool
包名: org.apache.hadoop.hbase.client.HTablePool
Purpose: solves HTable's thread-safety problem and, by maintaining a fixed number of HTable objects, lets the program reuse those HTable resources at run time. Notes:
1. HTablePool creates HTable objects automatically and is completely transparent to the client, avoiding concurrent-modification problems between threads.
2. The HTable objects inside an HTablePool share a common Configuration connection, which reduces network overhead.
Using HTablePool is straightforward: before each operation, obtain an HTable object via HTablePool's getTable method, perform the put/get/scan/delete operation, and finally return the HTable object to the pool via HTablePool's putTable method.
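The pooling idea itself can be sketched with the standard library alone. The following is an illustrative, generic fixed-capacity pool, not the HBase implementation: a getTable-style checkout reuses an idle object or creates a new one, and returning an object retains at most maxSize references, mirroring the behavior described above.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Supplier;

// Illustrative sketch of HTablePool-style reuse; NOT the HBase class.
public class SimplePool<T> {
    private final Queue<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;
    private final int maxSize;   // most references ever retained

    public SimplePool(Supplier<T> factory, int maxSize) {
        this.factory = factory;
        this.maxSize = maxSize;
    }

    // Like HTablePool.getTable: reuse an idle object, or create a new one.
    public synchronized T get() {
        T t = idle.poll();
        return t != null ? t : factory.get();
    }

    // Like (deprecated) HTablePool.putTable: keep at most maxSize objects.
    public synchronized void release(T t) {
        if (idle.size() < maxSize) {
            idle.add(t);
        }
    }

    public synchronized int idleCount() {
        return idle.size();
    }
}
```

A released object is handed back on the next get(), so repeated operations avoid re-creating expensive client objects; excess returns beyond maxSize are simply dropped.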
    /**
     * A simple pool of HTable instances.
     *
     * Each HTablePool acts as a pool for all tables. To use, instantiate an
     * HTablePool and use {@link #getTable(String)} to get an HTable from the pool.
     *
     * This method is not needed anymore, clients should call
     * HTableInterface.close() rather than returning the tables to the pool.
     *
     * Once you are done with it, close your instance of {@link HTableInterface}
     * by calling {@link HTableInterface#close()} rather than returning the tables
     * to the pool with (deprecated) {@link #putTable(HTableInterface)}.
     *
     * A pool can be created with a maxSize which defines the most HTable
     * references that will ever be retained for each table. Otherwise the
     * default is {@link Integer#MAX_VALUE}.
     *
     * Pool will manage its own connections to the cluster. See
     * {@link HConnectionManager}.
     * @deprecated as of 0.98.1. See {@link HConnection#getTable(String)}.
     */
    @InterfaceAudience.Private
    @Deprecated
    public class HTablePool implements Closeable {}
Put
包名: org.apache.hadoop.hbase.client.Put
Purpose: performs insert operations on a single row. Usage example:

    HTable table = new HTable(conf, Bytes.toBytes(tablename));
    Put p = new Put(brow);               // create a Put for the given row key
    p.add(family, qualifier, value);
    table.put(p);
Get
包名: org.apache.hadoop.hbase.client.Get
Purpose: retrieves the contents of a single row. Usage example:

    HTable table = new HTable(conf, Bytes.toBytes(tablename));
    Get g = new Get(Bytes.toBytes(row));
    table.get(g);
Result
包名: org.apache.hadoop.hbase.client.Result
Purpose: stores the single-row value obtained from a Get or Scan operation. Its methods give direct access to individual values or to various map views (key-value pairs) of the row.
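The map views that Result exposes mirror HBase's data model: within one row, a column family maps qualifiers to values. As a self-contained illustration (plain java.util maps, not the HBase classes; the class and method names here are invented), a single row can be modeled as nested sorted maps, similar in shape to what Result.getNoVersionMap() returns in the HBase client:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Illustrative model of one row as family -> qualifier -> value,
// mirroring the shape of Result's nested-map view (NOT the HBase class).
public class RowModel {
    final NavigableMap<String, NavigableMap<String, String>> row = new TreeMap<>();

    void put(String family, String qualifier, String value) {
        // create the per-family map on first use, then store the cell
        row.computeIfAbsent(family, f -> new TreeMap<>()).put(qualifier, value);
    }

    String getValue(String family, String qualifier) {
        NavigableMap<String, String> fam = row.get(family);
        return fam == null ? null : fam.get(qualifier);
    }
}
```

Looking up a value means descending family first, then qualifier, which is exactly the access pattern Result.getValue(family, qualifier) provides.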
ResultScanner
包名: org.apache.hadoop.hbase.client.ResultScanner
Purpose: the client-side scanner returned by a Scan operation (e.g. via HTable.getScanner). Unlike Result, which holds a single row, a ResultScanner is iterated to obtain one Result per row, using next() or a for-each loop; it should be closed when no longer needed.
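The scanner contract is essentially an iterator over rows: each call to next() yields the next row until the scan is exhausted. A self-contained sketch of that contract (plain Java over in-memory strings; RowScanner is a hypothetical class, not the HBase interface):

```java
import java.util.Iterator;
import java.util.List;

// Illustrative next()/for-each scanner over in-memory rows;
// mirrors how ResultScanner hands back one row per next() call.
public class RowScanner implements Iterable<String> {
    private final List<String> rows;
    private int pos = 0;

    public RowScanner(List<String> rows) {
        this.rows = rows;
    }

    // Like ResultScanner.next(): the next row, or null when exhausted.
    public String next() {
        return pos < rows.size() ? rows.get(pos++) : null;
    }

    // Supports for-each over the remaining rows.
    @Override
    public Iterator<String> iterator() {
        return rows.subList(pos, rows.size()).iterator();
    }
}
```

The null return after the last row is how a caller detects the end of the scan when using explicit next() calls.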
Example program
    package HbaseAPI;

    import java.io.IOException;
    import java.util.LinkedList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.MasterNotRunningException;
    import org.apache.hadoop.hbase.ZooKeeperConnectionException;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseConnection {

        private String rootDir;
        private String zkServer;
        private String port;
        private Configuration conf;
        private HConnection hConn = null;

        private HBaseConnection(String rootDir, String zkServer, String port) throws IOException {
            this.rootDir = rootDir;
            this.zkServer = zkServer;
            this.port = port;

            conf = HBaseConfiguration.create();
            conf.set("hbase.rootdir", rootDir);
            conf.set("hbase.zookeeper.quorum", zkServer);
            conf.set("hbase.zookeeper.property.clientPort", port);
            hConn = HConnectionManager.createConnection(conf);
        }

        public void createTable(String tableName, List<String> cols) {
            try {
                // HBaseAdmin manages table metadata
                HBaseAdmin admin = new HBaseAdmin(conf);
                if (admin.tableExists(tableName)) {
                    throw new Exception("table exists");
                } else {
                    HTableDescriptor tableDesc = new HTableDescriptor(tableName);
                    for (String col : cols) {
                        // one HColumnDescriptor per column family
                        HColumnDescriptor colDesc = new HColumnDescriptor(col);
                        colDesc.setCompressionType(Algorithm.GZ);
                        colDesc.setDataBlockEncoding(DataBlockEncoding.DIFF);
                        tableDesc.addFamily(colDesc);
                    }
                    // create the table
                    admin.createTable(tableDesc);
                }
            } catch (MasterNotRunningException e) {
                e.printStackTrace();
            } catch (ZooKeeperConnectionException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        // insert data
        public void putData(String tableName, List<Put> puts) throws IOException {
            HTableInterface table = hConn.getTable(tableName);
            // disable auto-flush BEFORE the puts so they are buffered
            // client-side and sent in one round trip by flushCommits()
            table.setAutoFlush(false);
            table.put(puts);
            table.flushCommits();
            table.close();
        }

        // fetch data
        public Result getData(String tableName, String rowkey) throws IOException {
            HTableInterface table = hConn.getTable(tableName);
            // Get retrieves a single row
            Get get = new Get(Bytes.toBytes(rowkey));
            return table.get(get);
        }

        public void format(Result result) {
            // row key
            String rowkey = Bytes.toString(result.getRow());
            // the cells of the Result as an array of KeyValues
            KeyValue[] kvs = result.raw();
            for (KeyValue kv : kvs) {
                // column family
                String family = Bytes.toString(kv.getFamily());
                // column qualifier
                String qualifier = Bytes.toString(kv.getQualifier());
                String value = Bytes.toString(result.getValue(Bytes.toBytes(family),
                        Bytes.toBytes(qualifier)));
                System.out.println("rowkey->" + rowkey + ", family->"
                        + family + ", qualifier->" + qualifier);
                System.out.println("value->" + value);
            }
        }

        public static void main(String[] args) throws IOException {
            String rootDir = "hdfs://hadoop1:8020/hbase";
            String zkServer = "hadoop1";
            String port = "2181";

            // initialize the connection
            HBaseConnection conn = new HBaseConnection(rootDir, zkServer, port);

            // create the table
            List<String> cols = new LinkedList<>();
            cols.add("basicInfo");
            cols.add("moreInfo");
            conn.createTable("students", cols);

            // insert data
            List<Put> puts = new LinkedList<>();
            Put put1 = new Put(Bytes.toBytes("Tom"));
            // add(family, qualifier, value)
            put1.add(Bytes.toBytes("basicInfo"), Bytes.toBytes("age"), Bytes.toBytes("27"));
            put1.add(Bytes.toBytes("basicInfo"), Bytes.toBytes("tel"), Bytes.toBytes("3432"));
            Put put2 = new Put(Bytes.toBytes("Joson"));
            put2.add(Bytes.toBytes("basicInfo"), Bytes.toBytes("age"), Bytes.toBytes("24"));
            put2.add(Bytes.toBytes("basicInfo"), Bytes.toBytes("tel"), Bytes.toBytes("34322"));
            puts.add(put1);
            puts.add(put2);
            conn.putData("students", puts);

            // print the result
            Result result = conn.getData("students", "Tom");
            conn.format(result);
        }
    }