Finally it worked for me with the following hbase-site.xml changes:

<property>
        <name>hbase.snapshot.master.timeoutMillis</name>
        <value>1800000</value>
</property>

<property>
        <name>hbase.snapshot.region.timeout</name>
        <value>1800000</value>
</property>

<property>
        <name>hbase.snapshot.master.timeout.millis</name>
        <value>1800000</value>
</property>
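
One thing worth noting: even after the shell gives up with the 60000 ms timeout, the master can still finish the snapshot in the background. That is why my second attempt below hit SnapshotExistsException for 'snapshot_test_3'. If you need to retake a snapshot after such a timeout, check for and drop the stale one first, e.g.:

  hbase> list_snapshots
  hbase> delete_snapshot 'snapshot_test_3'
  hbase> snapshot 'table', 'snapshot_test_3'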

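In case it helps, once the snapshot is in place the MR side can read it directly with TableSnapshotInputFormat, which ships with 0.98, wired up through TableMapReduceUtil.initTableSnapshotMapperJob. The sketch below is only illustrative (the mapper, job name, and restore directory path are placeholders you would adjust):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class SnapshotScanJob {

  // Minimal mapper: just counts the rows it reads from the snapshot files.
  static class RowCountMapper extends TableMapper<NullWritable, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context) {
      context.getCounter("snapshot", "rows").increment(1);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "scan-snapshot_test_3");
    job.setJarByClass(SnapshotScanJob.class);

    Scan scan = new Scan();
    scan.setCacheBlocks(false); // typical setting for full-scan MR jobs

    // Reads the snapshot's HFiles directly; the last argument is a temporary
    // HDFS directory where the snapshot is restored for the job (placeholder path).
    TableMapReduceUtil.initTableSnapshotMapperJob(
        "snapshot_test_3", scan, RowCountMapper.class,
        NullWritable.class, NullWritable.class, job,
        true, new Path("/tmp/snapshot_restore"));

    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Since the input format reads the snapshot's HFiles from HDFS instead of going through the region servers, the scan itself does not depend on the region server scanner timeouts, which is usually the point of running MR over a snapshot of a table this size.
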
Thanks,
Rahul

On Fri, May 8, 2015 at 12:53 PM, rahul malviya <[email protected]>
wrote:

> Hi Ted,
>
> I tried doing this multiple times and this is what I see. It looks like some
> of my timeout property settings are not right and the shell gets an ERROR
> with a timeout.
>
> Am I missing some config change? As mentioned earlier, I have these
> properties defined.
>
> hbase-site.xml
>
> <property>
>         <name>hbase.snapshot.master.timeoutMillis</name>
>         <value>1800000</value>
> </property>
> <property>
>         <name>hbase.snapshot.region.timeout</name>
>         <value>1800000</value>
> </property>
> <property>
>    <name>hbase.snapshot.enabled</name>
>    <value>true</value>
> </property>
>
>
>
> tail -f master log:
>
> 2015-05-08 19:47:26,983 WARN org.apache.hadoop.hbase.HBaseConfiguration:
> Config option "hbase.regionserver.lease.period" is deprecated. Instead, use
> "hbase.client.scanner.timeout.period"
>
> hbase shell:
>
> hbase(main):004:0> snapshot 'table','snapshot_test_3'
>
> ERROR: Snapshot 'snapshot_test_3' wasn't completed in expectedTime:60000 ms
>
> Here is some help for this command:
> Take a snapshot of specified table. Examples:
>
>   hbase> snapshot 'sourceTable', 'snapshotName'
>   hbase> snapshot 'namespace:sourceTable', 'snapshotName', {SKIP_FLUSH => true}
>
> hbase(main):005:0> snapshot 'device_table','snapshot_test_3'
>
> ERROR: org.apache.hadoop.hbase.snapshot.SnapshotExistsException: Snapshot 'snapshot_test_3' already stored on the filesystem.
>     at org.apache.hadoop.hbase.master.snapshot.SnapshotManager.takeSnapshot(SnapshotManager.java:526)
>     at org.apache.hadoop.hbase.master.HMaster.snapshot(HMaster.java:2842)
>     at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42291)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>     at org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
>
> Here is some help for this command:
> Take a snapshot of specified table. Examples:
>
>   hbase> snapshot 'sourceTable', 'snapshotName'
>   hbase> snapshot 'namespace:sourceTable', 'snapshotName', {SKIP_FLUSH => true}
>
>
> On Fri, May 8, 2015 at 11:58 AM, Ted Yu <[email protected]> wrote:
>
>> bq. ERROR: Snapshot 'snapshot_test' wasn't completed in expectedTime:60000 ms
>>
>> Rahul:
>> Please check your master log around 15:15 (when the timeout happened). You
>> should find some clue there.
>>
>> Cheers
>>
>> On Fri, May 8, 2015 at 11:40 AM, rahul malviya <
>> [email protected]>
>> wrote:
>>
>> > We have 16000 regions with approx 70 regions per regionserver.
>> >
>> > Thanks,
>> > Rahul
>> >
>> > On Fri, May 8, 2015 at 11:25 AM, Jean-Marc Spaggiari <
>> > [email protected]> wrote:
>> >
>> > > How many regions do you have on this table, how many RS and how many
>> > > regions per RS?
>> > >
>> > > JM
>> > >
>> > > 2015-05-08 14:21 GMT-04:00 Rahul Malviya <[email protected]
>> >:
>> > >
>> > > > Hi,
>> > > >
>> > > > I want to run an MR job on a snapshot of a large HBase table, but I am
>> > > > getting timeout errors while doing so. Can you tell me what I am doing
>> > > > wrong?
>> > > >
>> > > > I am using hbase-0.98.6-cdh5.2.0.
>> > > >
>> > > > hbase-site.xml
>> > > > <property>
>> > > >         <name>hbase.snapshot.master.timeoutMillis</name>
>> > > >         <value>1800000</value>
>> > > > </property>
>> > > > <property>
>> > > >         <name>hbase.snapshot.region.timeout</name>
>> > > >         <value>1800000</value>
>> > > > </property>
>> > > > <property>
>> > > >    <name>hbase.snapshot.enabled</name>
>> > > >    <value>true</value>
>> > > > </property>
>> > > >
>> > > > Error :
>> > > > $ hbase shell
>> > > > HBase Shell; enter 'help<RETURN>' for list of supported commands.
>> > > > Type "exit<RETURN>" to leave the HBase Shell
>> > > > Version 0.98.6-cdh5.2.0, rUnknown, Sat Oct 11 15:14:25 PDT 2014
>> > > >
>> > > > hbase(main):001:0> snapshot 'table','snapshot_test'
>> > > > SLF4J: Class path contains multiple SLF4J bindings.
>> > > > SLF4J: Found binding in
>> > > > [jar:file:/ngs/app/awdp/RELEASE-5.2/hbase-0.98.6-cdh5.2.0/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > > > SLF4J: Found binding in
>> > > > [jar:file:/ngs/app/awdp/RELEASE-5.2/hadoop-2.5.0-cdh5.2.0/share/hadoop/mapreduce1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > > > SLF4J: Found binding in
>> > > > [jar:file:/ngs/app/awdp/RELEASE-5.2/hadoop-2.5.0-cdh5.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > > > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> > > > SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> > > >
>> > > > ERROR: Snapshot 'snapshot_test' wasn't completed in expectedTime:60000 ms
>> > > >
>> > > > Here is some help for this command:
>> > > > Take a snapshot of specified table. Examples:
>> > > >
>> > > >  hbase> snapshot 'sourceTable', 'snapshotName'
>> > > >  hbase> snapshot 'namespace:sourceTable', 'snapshotName', {SKIP_FLUSH => true}
>> > > >
>> > > >
>> > > > hbase(main):002:0>
>> > > >
>> > > > Thanks,
>> > > > Rahul
>> > >
>> >
>>
>
>
