And what kind of performance do you see vs. what you expect to see? How big is your cluster in production/how much total data will you be storing in production?
On Sunday, August 28, 2016, Manjeet Singh <[email protected]> wrote:

> Hi,
>
> I performed this testing on a 2-node cluster; each node has an i7
> processor with 8 cores and 16 GB of RAM.
>
> I have very frequent get/put operations against HBase from Spark
> Streaming and Spark SQL, where we aggregate data with a Spark group-by
> and save the result to HBase.
>
> Can you give us more specifics about what kind of performance you're
> expecting, Manjeet, and what kind of performance you're actually seeing?
> Also, how big is your cluster (i.e. number of nodes, amount of RAM/CPU
> per node)? It's also important to realize that performance can be
> impacted by the write patterns of the data you're trying to query; if
> compactions haven't occurred at the time that you try to do your reads,
> HBase may have to go to disk repeatedly to access HFiles, even when only
> accessing columns within one row.
>
> On Sat, Aug 27, 2016 at 11:12 AM, Manjeet Singh
> <[email protected]> wrote:
>
> > Thanks Vladimir for your reply.
> >
> > I took this design from Twitter, where the row key is the Twitter ID
> > and the tweets and hashtags are in columns.
> >
> > I have mobile numbers or IPs as row keys, with the domains visited in
> > the column qualifiers.
> >
> > Can you please tell me how I can index my row key by qualifier when I
> > don't know how many columns I have?
> >
> > On 27 Aug 2016 22:21, "Vladimir Rodionov" <[email protected]> wrote:
> >
> > > >> Problem is its very slow
> > >
> > > Rows are not indexed by column qualifier, so you need to scan all of
> > > them. I suggest you consider a different row-key design or add an
> > > additional index table for your table.
> > >
> > > -Vlad
> > >
> > > On Sat, Aug 27, 2016 at 4:12 AM, Manjeet Singh
> > > <[email protected]> wrote:
> > >
> > > > Hi All,
> > > >
> > > > Can anybody suggest an improvement to my code below? The purpose
> > > > of this code is to get column qualifiers by prefix scan. The
> > > > problem is that it's very slow.
> > > >
> > > > public static ArrayList<String> getColumnQualifyerByPrefixScan(String rowKey, String prefix) {
> > > >     ArrayList<String> list = new ArrayList<String>();
> > > >     try {
> > > >         FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
> > > >         Filter filterB = new QualifierFilter(CompareFilter.CompareOp.EQUAL,
> > > >                 new BinaryPrefixComparator(Bytes.toBytes(prefix)));
> > > >         filterList.addFilter(filterB);
> > > >
> > > >         Get get1 = new Get(Bytes.toBytes(rowKey));
> > > >         get1.setFilter(filterList);
> > > >         Result rs1 = hTable.get(get1);
> > > >         for (KeyValue kv : rs1.raw()) {
> > > >             list.add(Bytes.toString(kv.getQualifier()));
> > > >         }
> > > >     } catch (Exception e) {
> > > >         e.printStackTrace();
> > > >     }
> > > >     return list;
> > > > }
> > > >
> > > > On Fri, Aug 26, 2016 at 7:56 PM, Manjeet Singh
> > > > <[email protected]> wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I am using a wide-table approach where I might have more than
> > > > > 1,000,000 column qualifiers.
> > > > >
> > > > > I am running into two problems. First, a heap-size problem when
> > > > > using scan from the shell; as a workaround I increased the Java
> > > > > heap size to 4 GB using Cloudera Manager.
> > > > >
> > > > > Second, the native API code below takes a very long time to
> > > > > process. Can anyone help me with the same?
> > > > >
> > > > > (same getColumnQualifyerByPrefixScan code as above)
> > > > >
> > > > > Thanks,
> > > > > Manjeet
> > > > >
> > > > > --
> > > > > luv all
> > > >
> > > > --
> > > > luv all
>
> --
> -Dima

--
-Dima
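Vlad's suggestion of a different row-key design can be sketched without any HBase dependency. The idea, under assumed names (`CompositeKeySketch`, `user42`, the `domain:` prefix are all hypothetical, not from the thread): instead of one row per entity with ~1,000,000 column qualifiers, make each (entity, qualifier) pair its own row key. A qualifier-prefix lookup then becomes an ordinary row-range scan with a start and stop row, which HBase *does* index, rather than a filter that walks every cell of one huge row.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CompositeKeySketch {
    // Hypothetical separator byte, assumed never to occur in either key part.
    static final byte SEP = 0x00;

    // Row key "entityId \0 qualifier": the qualifier moves into the sorted
    // row-key space instead of being one of a million columns in a single row.
    static byte[] rowKey(String entityId, String qualifier) {
        byte[] id = entityId.getBytes(StandardCharsets.UTF_8);
        byte[] q = qualifier.getBytes(StandardCharsets.UTF_8);
        byte[] out = new byte[id.length + 1 + q.length];
        System.arraycopy(id, 0, out, 0, id.length);
        out[id.length] = SEP;
        System.arraycopy(q, 0, out, id.length + 1, q.length);
        return out;
    }

    // Inclusive start row for "all qualifiers of entityId starting with prefix".
    static byte[] startRow(String entityId, String prefix) {
        return rowKey(entityId, prefix);
    }

    // Exclusive stop row: the start row with its last byte incremented
    // (simplification: ignores the 0xFF-carry case HBase's helpers handle).
    static byte[] stopRow(String entityId, String prefix) {
        byte[] stop = startRow(entityId, prefix);
        stop[stop.length - 1]++;
        return stop;
    }

    public static void main(String[] args) {
        byte[] hit  = rowKey("user42", "domain:abc.com");
        byte[] miss = rowKey("user42", "ip:10.0.0.1");
        byte[] lo = startRow("user42", "domain:");
        byte[] hi = stopRow("user42", "domain:");
        // A scan over [lo, hi) picks up the "domain:*" keys and skips the rest.
        System.out.println(Arrays.compareUnsigned(lo, hit) <= 0
                && Arrays.compareUnsigned(hit, hi) < 0);   // true
        System.out.println(Arrays.compareUnsigned(lo, miss) <= 0
                && Arrays.compareUnsigned(miss, hi) < 0);  // false
    }
}
```

With keys laid out this way, the `[lo, hi)` pair would feed a `Scan` with start and stop rows on the real table; each region server seeks directly to the range instead of materializing one multi-megabyte row, which also sidesteps the shell's heap problem.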
