You would have to measure the incoming/outgoing traffic on the affected
machine.
The easiest way is to periodically check the output of ifconfig. If all data is
local and the query just returns a count, I would not expect much (if any)
network traffic, even after running the query multiple times.
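That periodic ifconfig check is easy to script. A minimal sketch, assuming a Linux RS, that samples the kernel's per-interface byte counters (the same numbers ifconfig reports) and prints the per-second delta; IFACE=lo is only for illustration, point it at the RegionServer's real NIC:

```shell
#!/bin/sh
# Sample RX/TX byte counters from /proc/net/dev twice, 2 s apart,
# and print the per-second delta. These are the counters ifconfig shows.
IFACE=${IFACE:-lo}   # illustration only; use e.g. eth0 on a real RS

sample() {
  # After stripping the "iface:" prefix, field 1 is RX bytes, field 9 is TX bytes.
  awk -v ifc="$IFACE" '$0 ~ "^ *" ifc ":" { sub(/^[^:]*:/, ""); print $1, $9 }' \
      /proc/net/dev
}

set -- $(sample); rx1=$1; tx1=$2
sleep 2
set -- $(sample); rx2=$1; tx2=$2
echo "RX B/s: $(( (rx2 - rx1) / 2 ))  TX B/s: $(( (tx2 - tx1) / 2 ))"
```

Running it while the COUNT(*) query executes should show near-zero deltas if the aggregation really stays server-side.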
You will get better overall performance by increasing the number of regions
(salt buckets).
With 23 machines x 32 CPUs I would try at least 300 regions.
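In Phoenix the bucket count is fixed when the table is created. A hypothetical DDL sketch (table and column names are made up); note that Phoenix caps SALT_BUCKETS at 256, so reaching ~300 regions would additionally require manual splits on the HBase side:

```sql
-- Hypothetical table: SALT_BUCKETS pre-splits the table into that many
-- regions and prefixes row keys with a salt byte to spread writes.
-- Phoenix allows at most 256 salt buckets.
CREATE TABLE t_example (
    id  VARCHAR NOT NULL PRIMARY KEY,
    val BIGINT
) SALT_BUCKETS = 96;
```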
Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com
FYI, scanner caching defaults to 1000 in Phoenix but, as folks have pointed
out, that's not relevant in this case because only a single row is returned
from the server for a COUNT(*) query.
On Sat, Dec 21, 2013 at 2:51 PM, Kristoffer Sjögren wrote:
There are quite a lot of ESTABLISHED and TIME_WAIT connections between the
RS on port 50010, but I don't know a good way of monitoring how much data is
going through each connection (if that's what you meant)?
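Per-connection throughput needs a tool like iftop, but the socket counts themselves can be tallied from /proc/net/tcp, where ports appear in hex (50010 = 0xC35A). A sketch, assuming the default DataNode transfer port:

```shell
#!/bin/sh
# Count TCP sockets whose local or remote port is 50010 (hex C35A),
# the default HDFS DataNode transfer port, grouped by socket state.
# In /proc/net/tcp, state 01 = ESTABLISHED and 06 = TIME_WAIT.
awk 'NR > 1 {
    split($2, l, ":"); split($3, r, ":")
    if (l[2] == "C35A" || r[2] == "C35A") count[$4]++
}
END {
    printf "ESTABLISHED: %d\nTIME_WAIT:   %d\n", count["01"], count["06"]
}' /proc/net/tcp
```

Watching those counts during a scan at least shows whether the RS is opening many short-lived DataNode connections, which would point at non-local reads.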
On Sun, Dec 22, 2013 at 12:00 AM, Kristoffer Sjögren wrote:
Scans on RS 19 and 23, which have 5 regions instead of 4, stand out more
than scans on RS 20, 21, 22. But scans on RS 7 and 18, which also have 5
regions, are doing fine: not the best, but still in the mid-range.
On Sat, Dec 21, 2013 at 11:51 PM, Kristoffer Sjögren wrote:
Yeah, I'm doing a count(*) query on the 96-region table. Do you mean to
check network traffic between the RS?
From debugging the Phoenix code I can see that there are 96 scans sent, and
each response returned to the client contains only the row count; these
counts are then aggregated and returned.
Thanks Kristoffer,
yeah, that's the right metric. I would put my bet on the slower network.
But you're also doing a select count(*) query in Phoenix, right? So nothing
should really be sent across the network.
When you do the queries, can you check whether there is any network traffic?
-- Lars
Btw, I have tried different numbers of rows with similar symptoms on the bad
RS.
On Sat, Dec 21, 2013 at 10:28 PM, Kristoffer Sjögren wrote:
@pradeep scanner caching should not be an issue since the data transferred to
the client is tiny.
@lars Yes, the data might be small for this particular case :-)
I have checked everything I can think of on the RS (CPU, network, HBase
console, uptime etc.) and nothing stands out, except for the pings (netw
Hi Kristoffer,
For this particular problem: are many regions on the same RegionServers? Did
you profile those RegionServers? Anything weird on that box?
Slower pings might well be an issue. How's the data locality? (You can check on
a RegionServer's overview page.)
If needed, you can issue a major compaction.
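A major compaction rewrites each region's store files on its hosting RegionServer, which is what restores data locality after region moves. Triggering one for a single table might look like this from the command line (t_96 taken from the thread; requires the hbase client on that host):

```shell
# Ask HBase to major-compact every region of table t_96.
echo "major_compact 't_96'" | hbase shell
```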
What is your scanner caching set to? I haven't worked with Phoenix, so I'm
not sure what defaults, if any, it uses. In 0.94 HBase, I believe the default
caching is set to 1. This could be exacerbating your problem.
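For plain HBase clients the 0.94-era default can be raised cluster-wide via hbase-site.xml; a sketch (the value 1000 is just an example, and per-scan Scan.setCaching(...) still overrides it):

```xml
<!-- hbase-site.xml: rows fetched per scanner RPC round trip.
     0.94 defaults this to 1, i.e. one RPC per row. -->
<property>
  <name>hbase.client.scanner.caching</name>
  <value>1000</value>
</property>
```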
On Sat, Dec 21, 2013 at 7:52 PM, Kristoffer Sjögren wrote:
Yes, I'm waiting on a response from them. It's just.. the ping difference is
tiny while the scan difference is huge, 2 sec vs 4 sec.
Note the ping I mentioned is within the cluster. Pings from outside into the
cluster show hardly any (if at all) noticeable difference.
On Sat, Dec 21, 2013 at 8:37 PM
Do you know if machines 19-23 are on a different rack? It seems to me that
your problem might be a networking problem. Whether it is hardware,
configuration or something else entirely, I'm not sure. It might be
worthwhile to talk to your systems administrator to see why pings to these
machines are slower.
Hi
I have been performance tuning HBase 0.94.6 running Phoenix 2.2.0 the last
couple of days and need some help.
Background.
- 23-machine cluster, 32 cores each, 4 GB heap per RS.
- Table t_24 has 24 online regions (24 salt buckets).
- Table t_96 has 96 online regions (96 salt buckets).
- 10.5 milli
Your question would get a better response from the cdh-dev mailing list.
Cheers
On Sat, Dec 21, 2013 at 7:01 AM, Kristoffer Sjögren wrote:
It's a CDH related question, let me forward this thread to the right list.
Thanks,
From: Kristoffer Sjögren [sto...@gmail.com]
Sent: December 21, 2013, 23:01
To: user@hbase.apache.org
Subject: Upgrade from HBase 0.94.6-cdh4.4.0 to 0.94.14
Hi
We are running HBase 0.94.6-cdh4.4.0 and wonder what the best way would be
to upgrade to 0.94.14 and still keep some compatibility with CDH?
As far as I know, there are no Cloudera apt packages for 0.94.14?
Cheers,
-Kristoffer