[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Loknath Priyatham Teja Singamsetty updated HBASE-15924:
    Attachment: HBASE-15924.master.0004.patch

> Enhance hbase services autorestart capability to hbase-daemon.sh
> ----------------------------------------------------------------
>
>                 Key: HBASE-15924
>                 URL: https://issues.apache.org/jira/browse/HBASE-15924
>             Project: HBase
>          Issue Type: Improvement
>    Affects Versions: 0.98.19
>            Reporter: Loknath Priyatham Teja Singamsetty
>            Assignee: Loknath Priyatham Teja Singamsetty
>             Fix For: 0.98.24
>
>         Attachments: HBASE-15924.master.0001.patch, HBASE-15924.master.0002.patch, HBASE-15924.master.0003.patch, HBASE-15924.master.0004.patch
>
> As part of HBASE-5939, autorestart for HBase services was added to deal with scenarios where an HBase service (master/regionserver/master-backup) gets killed or goes down, leading to unplanned outages. The changes were made to hbase-daemon.sh to support the autorestart option.
> However, the autorestart implementation doesn't work in standalone mode and, beyond that, has a few gaps relative to the release notes of HBASE-5939. This is an attempt to re-design and fix the functionality, considering all possible use cases for hbase service operations.
>
> Release Notes of HBASE-5939:
> --
> When launched with autorestart, HBase processes will automatically restart if they are not properly terminated, either by a "stop" command or by a cluster stop. To ensure that it does not overload the system when the server itself is corrupted and the process cannot be restarted, the server sleeps for 5 minutes before restarting if the previous start was less than 5 minutes ago. To use it, launch the process with "bin/start-hbase autorestart". This option is not fully compatible with the existing "restart" command: if you ask for a restart on a server launched with autorestart, the server will restart, but the next server instance won't be automatically restarted.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
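The 5-minute throttle described in those release notes can be sketched as a small helper (a minimal illustration under stated assumptions; the function and variable names are hypothetical, not taken from hbase-daemon.sh):

```shell
#!/bin/sh
# Sketch of the HBASE-5939 throttle: if the previous start happened less
# than 5 minutes ago, the supervisor should sleep for 5 minutes before
# restarting the process. Names are illustrative only.
THROTTLE_SECS=300

maybe_throttle() {
  last_start_ts=$1
  now_ts=$2
  if [ $((now_ts - last_start_ts)) -lt "$THROTTLE_SECS" ]; then
    echo "sleep"      # caller would: sleep "$THROTTLE_SECS" first
  else
    echo "restart"    # last start is old enough; restart immediately
  fi
}

maybe_throttle 1000 1100   # started 100s ago
maybe_throttle 1000 1400   # started 400s ago
```

The first call prints "sleep" (100s is inside the window) and the second prints "restart" (400s is past it).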
[jira] [Commented] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588112#comment-15588112 ]

Loknath Priyatham Teja Singamsetty commented on HBASE-15924:

[~apurtell]
{quote}
There is an off-by-one error dealing with the autostart window retry limit. I tried this with ./bin/hbase-daemon.sh --autostart-window-retry-limit 3 autostart regionserver, then in order to stop autostart had to kill the regionserver 4 times.
{quote}
After testing it again, I believe this logic is correct. --autostart-window-retry-limit 3 means the supervisor will retry at most 3 times to bring the process back up, which implies that the 4th time the process gets killed is when the autostart process itself exits.
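The counting semantics argued for in that comment (limit 3 means the 4th kill is the one that ends autostart) can be simulated with a toy loop; this is an illustration of the argument, not the actual hbase-daemon.sh code:

```shell
#!/bin/sh
# Simulate a supervisor with a retry limit of 3: the service dies, and
# the supervisor restarts it until the limit is exhausted. With limit 3,
# the service is restarted 3 times, so the 4th kill ends autostart.
retry_limit=3
retries=0
kills=0
while :; do
  kills=$((kills + 1))                 # the service process dies
  if [ "$retries" -ge "$retry_limit" ]; then
    break                              # limit exhausted: supervisor gives up
  fi
  retries=$((retries + 1))             # supervisor restarts the service
done
echo "kills=$kills retries=$retries"   # kills=4 retries=3
```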
[jira] [Commented] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15588054#comment-15588054 ]

Loknath Priyatham Teja Singamsetty commented on HBASE-15924:

[~apurtell]
{quote}
One option is to make a PID file of the supervisor, check if the supervisor PID file exists and is valid, if so then send a signal to the supervisor to terminate it, then terminate the child under watch.
{quote}
Autostart works by placing a file such as regionserver.autostart under HBASE_PID_DIR. As soon as stop is issued, it first removes this file so that autostart no longer kicks in.
{quote}
In another test, I started the regionserver with ./bin/hbase-daemon.sh --autostart-window-retry-limit 3 autostart regionserver and in another SSH session then attempted to stop the regionserver with ./bin/hbase-daemon.sh stop regionserver. This appears to work, although I can see by tailing the regionserver log output file that the regionserver process is partially restarted and rapidly killed.
{quote}
This did not occur for me; please provide the repro steps. Assuming you are not using sfdc packages: in our internal packages we have a different mechanism, using cron, which starts the process when it is killed. Please re-check whether you are testing on one of our internal clusters where the cron-based autorestart is already enabled.

Also note that I added a minor enhancement to wait for 20 seconds after the hmaster/regionserver process is killed ungracefully. This allows any shutdown hook to run before the start command is triggered by autostart. Attached a new patch.
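The sentinel-file mechanism described in that comment (a <command>.autostart file under HBASE_PID_DIR, removed by "stop" so the restart loop exits) can be sketched as follows; the loop structure and helper names here are illustrative assumptions, only the file-name convention comes from the comment:

```shell
#!/bin/sh
# Sketch: the supervisor keeps restarting the service only while the
# sentinel file exists. "stop" removes the file before killing the
# service, so the loop observes the missing file and exits cleanly.
HBASE_PID_DIR=$(mktemp -d)
command=regionserver
autostart_file="$HBASE_PID_DIR/$command.autostart"

touch "$autostart_file"            # created by "hbase-daemon.sh autostart"

stop_service() { rm -f "$autostart_file"; }   # what "stop" does first

restarts=0
for _ in 1 2 3; do                 # simulate the service dying 3 times
  [ -f "$autostart_file" ] || break
  restarts=$((restarts + 1))       # supervisor restarts the service
  [ "$restarts" -eq 2 ] && stop_service   # operator issues "stop"
done
echo "restarts=$restarts"          # restarts=2: loop stopped by the stop
rmdir "$HBASE_PID_DIR"
```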
[jira] [Commented] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571460#comment-15571460 ]

Loknath Priyatham Teja Singamsetty commented on HBASE-15924:

Will get back on this one.
[jira] [Commented] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15571454#comment-15571454 ]

Loknath Priyatham Teja Singamsetty commented on HBASE-15924:

[~apurtell]
{quote}
./bin/hbase-daemon.sh autostart and autorestart work differently from start in that we don't see a message printed like echo starting $command, logging to $HBASE_LOGOUT
{quote}
This message has been missing earlier as well. It makes sense to be consistent; will incorporate the suggested change.
{quote}
There is an off-by-one error dealing with the autostart window retry limit. I tried this with ./bin/hbase-daemon.sh --autostart-window-retry-limit 3 autostart regionserver, then in order to stop autostart had to kill the regionserver 4 times.
{quote}
Will fix this as part of the new patch. The condition used is -gt but should instead be -ge.
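The -gt vs. -ge difference the comment describes can be demonstrated with a toy counting loop (a hypothetical helper, not the actual hbase-daemon.sh code): with -gt, the check "retries greater than limit" only fires after one extra restart, while -ge stops exactly at the limit.

```shell
#!/bin/sh
# Count how many restarts a supervisor loop performs before its guard
# condition fires, for a given comparison operator (-gt or -ge).
retry_limit=3

count_restarts() {
  op=$1
  retries=0
  while :; do
    # Guard: stop restarting once the condition holds.
    if [ "$retries" "$op" "$retry_limit" ]; then break; fi
    retries=$((retries + 1))
  done
  echo "$retries"
}

count_restarts -gt   # 4: 3 is not greater than 3, so one extra restart
count_restarts -ge   # 3: stops exactly at the limit
```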
[jira] [Commented] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15551276#comment-15551276 ]

Loknath Priyatham Teja Singamsetty commented on HBASE-15924:

Thanks [~apurtell]
[jira] [Comment Edited] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548182#comment-15548182 ]

Loknath Priyatham Teja Singamsetty edited comment on HBASE-15924 at 10/5/16 10:10 AM:

[~apurtell] [~larsh] [~anoop.hbase] Need help pushing this patch. The ASF license errors are misleading. Fixed the warnings and errors from shellcheck; however, there are multiple notification messages that have existed for a long time and now surface as part of refactored/displaced code.

was (Author: singamteja):
[~apurtell] [~larsh] [~anoop.hbase] Need help pushing this patch. The ASF license errors are misleading. Fixed the warnings and errors from shellcheck; however, there are multiple notification messages that have existed for a long time.
[jira] [Comment Edited] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548182#comment-15548182 ]

Loknath Priyatham Teja Singamsetty edited comment on HBASE-15924 at 10/5/16 9:27 AM:

[~apurtell] [~larsh] [~anoop.hbase] Need help pushing this patch. The ASF license errors are misleading. Fixed the warnings and errors from shellcheck; however, there are multiple notification messages that have existed for a long time.

was (Author: singamteja):
[~apurtell] [~larsh] Need help pushing this patch. The ASF license errors are misleading. Fixed the warnings and errors from shellcheck; however, there are multiple notification messages that have existed for a long time.
[jira] [Commented] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15548182#comment-15548182 ]

Loknath Priyatham Teja Singamsetty commented on HBASE-15924:

[~apurtell] [~larsh] Need help pushing this patch. The ASF license errors are misleading. Fixed the warnings and errors from shellcheck; however, there are multiple notification messages that have existed for a long time.
[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Loknath Priyatham Teja Singamsetty updated HBASE-15924:
    Attachment: HBASE-15924.master.0003.patch
[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Loknath Priyatham Teja Singamsetty updated HBASE-15924:
    Attachment: HBASE-15924.master.0002.patch
[jira] [Comment Edited] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513446#comment-15513446 ]

Loknath Priyatham Teja Singamsetty edited comment on HBASE-15924 at 9/22/16 2:37 PM:

[~apurtell] [~larsh] [~lhofhansl] [~giacomotaylor] Please review the changes submitted to support autorestart with enhanced configurability. Note that as part of this change, some of the helper scripts under "bin" are modified to fix bugs identified while testing the autostart functionality in local, pseudo-distributed and distributed modes.

Release notes: HBase services can now be started with the "autostart/autorestart" feature enabled in a controlled fashion, using "--autostart-window-size" to define the window period and "--autostart-window-retry-limit" to define the number of times the services may be restarted after being killed/terminated abnormally within that window period. The following cases are supported with "autostart/autorestart":
a) --autostart-window-size=0 and --autostart-window-retry-limit=0 indicates an infinite window size and no retry limit
b) not providing the args defaults to a)
c) --autostart-window-size=0 and --autostart-window-retry-limit=y indicates the autostart process should bail out once the retry limit is exceeded, irrespective of any window period
d) --autostart-window-size=x and --autostart-window-retry-limit=y indicates the autostart process should bail out if the retry limit "y" is exceeded within the last window period "x"

was (Author: singamteja):
[~apurtell] [~larsh] [~lhofhansl] [~giacomotaylor] Please review the changes submitted to support autorestart with enhanced configurability. Note that as part of this change, some of the helper scripts under "bin" are modified to fix bugs identified while testing the autostart functionality in local, pseudo-distributed and distributed modes.

Release notes: HBase services can now be started with the "autostart/autorestart" feature enabled in a controlled fashion, using "autostart-window-size" to define the window period and "autostart-window-retry-limit" to define the number of times the services may be restarted after being killed/terminated abnormally within that window period. The following cases are supported with "autostart/autorestart":
a) --autostart-window-size=0 and --autostart-window-retry-limit=0 indicates an infinite window size and no retry limit
b) not providing the args defaults to a)
c) --autostart-window-size=0 and --autostart-window-retry-limit=y indicates the autostart process should bail out once the retry limit is exceeded, irrespective of any window period
d) --autostart-window-size=x and --autostart-window-retry-limit=y indicates the autostart process should bail out if the retry limit "y" is exceeded within the last window period "x"
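Case (d) above, bailing out when the retry limit is reached within the trailing window, could be sketched like this (all variable and function names are illustrative assumptions, not the patch's actual code):

```shell
#!/bin/sh
# Sketch of the windowed retry limit: given the timestamps of past
# restarts, bail out if at least $retry_limit of them fall within the
# last $window_size seconds. A window size of 0 counts every restart
# (case c: no window, limit only).
window_size=60
retry_limit=2

should_bail() {
  # args: current epoch time, then the epoch timestamps of past restarts
  now=$1; shift
  recent=0
  for ts in "$@"; do
    if [ "$window_size" -eq 0 ] || [ $((now - ts)) -le "$window_size" ]; then
      recent=$((recent + 1))
    fi
  done
  if [ "$recent" -ge "$retry_limit" ]; then echo bail; else echo continue; fi
}

should_bail 1000 100 200   # old restarts fell out of the 60s window
should_bail 1000 950 990   # 2 restarts within the window
```

The first call prints "continue" (both restarts are outside the window) and the second prints "bail" (the limit of 2 is reached inside it).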
[jira] [Commented] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15513446#comment-15513446 ] Loknath Priyatham Teja Singamsetty commented on HBASE-15924: - [~apurtell] [~larsh] [~lhofhansl] [~giacomotaylor] Please review the changes submitted to support autorestart with enhanced capability to configure. Please note that as part of this change, some of the helper scripts under "bin" are modified to fix bugs identified while testing the autostart functionality in local, pseudo-distributed and distributed modes. Release notes: (autostart/autorestart release note text as above)
[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-15924: Release Note: (autostart/autorestart release note text as above) Status: Patch Available (was: In Progress)
[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-15924: Attachment: HBASE-15924.master.0001.patch Added autostart capability, with the ability to configure the window size and the retry count within that window via --autostart-window-size and --autostart-window-retry-limit.
[jira] [Work started] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HBASE-15924 started by Loknath Priyatham Teja Singamsetty.
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15455829#comment-15455829 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375: - I have created and attached HBASE-16375.master.001.patch for the master branch. Do I need to create separate patches for all the branches > 0.98? Is there any way the same patch can be applied to all branches > 0.98?
> Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
> Issue Type: Bug
> Affects Versions: 1.2.0
> Reporter: Loknath Priyatham Teja Singamsetty
> Assignee: Loknath Priyatham Teja Singamsetty
> Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.1.6, 0.98.22, 1.2.4
>
> Attachments: HBASE-16375.master.001.patch, HBASE-16375_0.98_and_above.patch, HBASE-16375_0.98_and_above_with_tests.patch, HBASE-16375_0.98_and_above_with_tests_format.patch
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set "yarn.resourcemanager.webapp.address" and "mapreduce.jobhistory.webapp.address", which are required for getting the submitted yarn apps via the mapreduce webapp. These properties are not copied from the jobConf of MapReduceTestingShim, so they fall back to default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); // Hadoop MiniMR overwrites this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly run. So we copy the
> // necessary config properties here. YARN-129 required adding a few properties.
> conf.set("mapreduce.jobtracker.address", jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress = jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e test to succeed, the lines below need to be added as well:
> {quote}
> String rmWebappAddress = jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties to the HBaseTestingUtility conf object.
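The closing suggestion above, copying every jobConf property instead of maintaining a hand-picked allow-list of addresses, could look roughly like this. To stay self-contained, this hypothetical sketch models both configuration objects as plain maps rather than Hadoop's Configuration/JobConf types:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: propagate every property from the mini MR cluster's
// jobConf into the HBaseTestingUtility conf, instead of copying a hand-picked
// allow-list of addresses. Maps stand in for Hadoop's Configuration objects.
class ConfCopy {
    static void copyAll(Map<String, String> jobConf, Map<String, String> conf) {
        for (Map.Entry<String, String> e : jobConf.entrySet()) {
            // Only copy properties that are actually set, mirroring the
            // null checks in the existing per-property code.
            if (e.getValue() != null) {
                conf.put(e.getKey(), e.getValue());
            }
        }
    }
}
```

The trade-off is that a blanket copy can clobber values the test utility set deliberately (e.g. "mapreduce.cluster.local.dir" above), which is presumably why the existing code copies only specific keys.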
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375: Attachment: HBASE-16375.master.001.patch
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15455715#comment-15455715 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375: - Attached the patch using --format-patch. In this case, the single patch applies to the master branch as well, so I am not attaching multiple patches.
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375: Attachment: HBASE-16375_0.98_and_above_with_tests_format.patch
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15455277#comment-15455277 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375: - Added the test and attached the patch. Please review.
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375: Attachment: HBASE-16375_0.98_and_above_with_tests.patch
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15448627#comment-15448627 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375:

[~busbey] [~apurtell] I see that there is a TestHBaseTestingUtility.java. However, no tests were written specifically to check the configuration that is already being set:
{quote}
String rmAddress = jobConf.get("yarn.resourcemanager.address");
if (rmAddress != null) {
  conf.set("yarn.resourcemanager.address", rmAddress);
}
String historyAddress = jobConf.get("mapreduce.jobhistory.address");
if (historyAddress != null) {
  conf.set("mapreduce.jobhistory.address", historyAddress);
}
String schedulerAddress = jobConf.get("yarn.resourcemanager.scheduler.address");
if (schedulerAddress != null) {
  conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
}
{quote}
Newly added configs:
{quote}
+String mrJobHistoryWebappAddress =
+    jobConf.get("mapreduce.jobhistory.webapp.address");
+if (mrJobHistoryWebappAddress != null) {
+  conf.set("mapreduce.jobhistory.webapp.address", mrJobHistoryWebappAddress);
+}
+String yarnRMWebappAddress =
+    jobConf.get("yarn.resourcemanager.webapp.address");
+if (yarnRMWebappAddress != null) {
+  conf.set("yarn.resourcemanager.webapp.address", yarnRMWebappAddress);
+}
{quote}
If we need code-coverage tests for these configs, we need to write a unit test that starts the mini mapreduce cluster and verifies that the jobConf values from MapReduceTestingShim are populated into the "conf" instance variable of HBaseTestingUtility. Are we looking for such a test to be added here (covering both the existing and the newly added configs)?
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15448544#comment-15448544 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375:

FYI - just verified: the mapred and yarn configs added to HBTU as part of this JIRA are supported in Hadoop 2.2 and above.
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446285#comment-15446285 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375:

{quote}
I'm unsure if you're saying this is actually a bug in how HBase handles the configs from the MiniCluster instances managed by HBTU, or if instead it's an Apache Phoenix test presuming access to some internals that aren't exposed.
{quote}
The newly added Apache Phoenix test needs the configs, which aren't exposed from HBTU.
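For context on why the Phoenix e2e test needs the webapp address: submitted applications are typically read from the ResourceManager's REST endpoint, which is derived from "yarn.resourcemanager.webapp.address". The helper below is hypothetical (not Phoenix or HBase code) and only shows how such a test might build that URL; /ws/v1/cluster/apps is the standard RM REST path for listing applications:

```java
public class RmAppsUrlSketch {
    // Hypothetical helper: derive the RM REST endpoint that lists submitted
    // applications from the propagated webapp address. Without the address
    // being copied into HBTU's conf, a test would hit the wrong default host.
    static String appsUrl(String rmWebappAddress) {
        return "http://" + rmWebappAddress + "/ws/v1/cluster/apps";
    }

    public static void main(String[] args) {
        // With the RM's default webapp port:
        System.out.println(appsUrl("localhost:8088"));
        // -> http://localhost:8088/ws/v1/cluster/apps
    }
}
```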
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446279#comment-15446279 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375:

Sure.
[jira] [Comment Edited] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446175#comment-15446175 ] Loknath Priyatham Teja Singamsetty edited comment on HBASE-16375 at 8/29/16 3:33 PM:

The actual test resides in Phoenix; it uses the added configuration for the e2e test of the async secondary index, which relies on the resource manager webapp address. I am not sure whether any tests need to be added for this in HBase. If needed, please let me know which test should be replicated in the hbase test suite.

was (Author: singamteja):
The actual test resides in Phoenix; it uses the added configuration for the e2e test of the async secondary index, which relies on the resource manager webapp address. I am not sure whether any tests need to be added for this. If needed, please let me know which test should be replicated in the hbase test suite.
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375:

    Attachment: (was: HBASE-16375_1.2.0_and_above.patch)
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375:

    Attachment: HBASE-16375_0.98_and_above.patch
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375:

    Fix Version/s: 0.98.22
                   1.1.6
[jira] [Comment Edited] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445589#comment-15445589 ] Loknath Priyatham Teja Singamsetty edited comment on HBASE-16375 at 8/29/16 3:26 PM: -- Attached a patch which has to go in release 0.98 and higher. Kindly review and let me know in case of any modifications. [~busbey] [~apurtell] [~larsh] was (Author: singamteja): Attached a patch which has to go in release 1.2.0 and higher. Kindly review and let me know in case of any modifications. [~busbey] [~apurtell] [~larsh]
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446175#comment-15446175 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375: - The actual test resides in Phoenix, which uses the added configuration in an e2e test of async secondary indexing that reads the ResourceManager webapp address. I am not sure whether any tests need to be added for this; if so, please let me know which test should be replicated in the HBase test suite.
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15446168#comment-15446168 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375: - Just came to know that this change is required for both 1.1 and 0.98 as well. Please push the changes to 0.98 and above. CC: [~jamestaylor]
[jira] [Comment Edited] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445589#comment-15445589 ] Loknath Priyatham Teja Singamsetty edited comment on HBASE-16375 at 8/29/16 11:25 AM: --- Attached a patch which has to go in release 1.2.0 and higher. Kindly review and let me know in case of any modifications. [~busbey] [~apurtell] [~larsh] was (Author: singamteja): Attached a patch which has to go in release 1.2.0 and higher. [~busbey] [~apurtell] [~larsh]
[jira] [Commented] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15445589#comment-15445589 ] Loknath Priyatham Teja Singamsetty commented on HBASE-16375: - Attached a patch which has to go in release 1.2.0 and higher. [~busbey] [~apurtell] [~larsh]
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375: Attachment: HBASE-16375_1.2.0_and_above.patch
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375: Fix Version/s: 1.3.1 1.4.0 1.3.0 2.0.0 Status: Patch Available (was: Open)
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375: Fix Version/s: 1.2.0 1.2.2 1.2.4 1.2.3
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated HBASE-16375: Affects Version/s: 1.2.0
[jira] [Created] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
Loknath Priyatham Teja Singamsetty created HBASE-16375: --- Summary: Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim Key: HBASE-16375 URL: https://issues.apache.org/jira/browse/HBASE-16375 Project: HBase Issue Type: Bug Reporter: Loknath Priyatham Teja Singamsetty Assignee: Loknath Priyatham Teja Singamsetty Starting mapreduce mini cluster using HBaseTestingUtility is not setting "yarn.resourcemanager.webapp.address" and "mapreduce.jobhistory.webapp.address" which are required for getting the submitted yarn apps using mapreduce webapp. These properties are not being copied from jobConf of MapReduceTestingShim resulting in default values. {quote} HBaseTestingUtility.java // Allow the user to override FS URI for this map-reduce cluster to use. mrCluster = new MiniMRCluster(servers, FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1, null, null, new JobConf(this.conf)); JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster); if (jobConf == null) { jobConf = mrCluster.createJobConf(); } jobConf.set("mapreduce.cluster.local.dir", conf.get("mapreduce.cluster.local.dir")); //Hadoop MiniMR overwrites this while it should not LOG.info("Mini mapreduce cluster started"); // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance and updates settings. // Our HBase MR jobs need several of these settings in order to properly run. So we copy the // necessary config properties here. YARN-129 required adding a few properties. 
conf.set("mapreduce.jobtracker.address", jobConf.get("mapreduce.jobtracker.address"));
// this for mrv2 support; mr1 ignores this
conf.set("mapreduce.framework.name", "yarn");
conf.setBoolean("yarn.is.minicluster", true);
String rmAddress = jobConf.get("yarn.resourcemanager.address");
if (rmAddress != null) {
  conf.set("yarn.resourcemanager.address", rmAddress);
}
String historyAddress = jobConf.get("mapreduce.jobhistory.address");
if (historyAddress != null) {
  conf.set("mapreduce.jobhistory.address", historyAddress);
}
String schedulerAddress = jobConf.get("yarn.resourcemanager.scheduler.address");
if (schedulerAddress != null) {
  conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
}
{quote}
As an immediate fix for the Phoenix e2e tests to succeed, the lines below need to be added as well:
{quote}
String rmWebappAddress = jobConf.get("yarn.resourcemanager.webapp.address");
if (rmWebappAddress != null) {
  conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
}
String historyWebappAddress = jobConf.get("mapreduce.jobhistory.webapp.address");
if (historyWebappAddress != null) {
  conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
}
{quote}
Eventually, we should also see if we can copy over all the jobConf properties to the HBaseTestingUtility conf object.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
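The closing suggestion (copying over all jobConf properties rather than cherry-picking one address at a time) can be sketched generically. This is an illustrative sketch only, not the HBase implementation: it uses plain java.util maps as a stand-in for Hadoop Configuration objects, and the class and method names are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfCopySketch {
    // Copy every property from the mini cluster's jobConf into the target
    // conf, mirroring the "copy over all the jobConf properties" idea.
    static void copyAll(Map<String, String> jobConf, Map<String, String> conf) {
        for (Map.Entry<String, String> e : jobConf.entrySet()) {
            conf.put(e.getKey(), e.getValue());
        }
    }

    // Cherry-pick a single property, skipping it when unset: the pattern the
    // existing HBaseTestingUtility code repeats for each individual address.
    static void copyIfSet(Map<String, String> jobConf, Map<String, String> conf, String key) {
        String value = jobConf.get(key);
        if (value != null) {
            conf.put(key, value);
        }
    }

    public static void main(String[] args) {
        Map<String, String> jobConf = new LinkedHashMap<>();
        jobConf.put("yarn.resourcemanager.webapp.address", "localhost:8088");
        jobConf.put("mapreduce.jobhistory.webapp.address", "localhost:19888");

        Map<String, String> conf = new LinkedHashMap<>();
        copyIfSet(jobConf, conf, "yarn.resourcemanager.webapp.address");
        copyIfSet(jobConf, conf, "mapreduce.jobhistory.webapp.address");
        copyIfSet(jobConf, conf, "yarn.resourcemanager.scheduler.address"); // unset, so skipped
        System.out.println(conf.size()); // prints 2

        Map<String, String> all = new LinkedHashMap<>();
        copyAll(jobConf, all);
        System.out.println(all.equals(jobConf)); // prints true
    }
}
```

The trade-off the ticket hints at: copy-all removes the risk of forgetting a property (as happened with the two webapp addresses), at the cost of possibly dragging in mini-cluster settings the test conf should not inherit.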
[jira] [Updated] (HBASE-16375) Mapreduce mini cluster using HBaseTestingUtility not setting correct resourcemanager and jobhistory webapp address of MapReduceTestingShim
[ https://issues.apache.org/jira/browse/HBASE-16375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Loknath Priyatham Teja Singamsetty updated HBASE-16375:
Priority: Minor (was: Major)

> Mapreduce mini cluster using HBaseTestingUtility not setting correct
> resourcemanager and jobhistory webapp address of MapReduceTestingShim
>
> Key: HBASE-16375
> URL: https://issues.apache.org/jira/browse/HBASE-16375
> Project: HBase
> Issue Type: Bug
> Reporter: Loknath Priyatham Teja Singamsetty
> Assignee: Loknath Priyatham Teja Singamsetty
> Priority: Minor
>
> Starting the mapreduce mini cluster using HBaseTestingUtility does not set
> "yarn.resourcemanager.webapp.address" and "mapreduce.jobhistory.webapp.address",
> which are required for retrieving the submitted yarn apps via the mapreduce
> webapp. These properties are not copied from the jobConf of
> MapReduceTestingShim, so they end up with default values.
> {quote}
> HBaseTestingUtility.java
> // Allow the user to override FS URI for this map-reduce cluster to use.
> mrCluster = new MiniMRCluster(servers,
>   FS_URI != null ? FS_URI : FileSystem.get(conf).getUri().toString(), 1,
>   null, null, new JobConf(this.conf));
> JobConf jobConf = MapreduceTestingShim.getJobConf(mrCluster);
> if (jobConf == null) {
>   jobConf = mrCluster.createJobConf();
> }
> jobConf.set("mapreduce.cluster.local.dir",
>   conf.get("mapreduce.cluster.local.dir")); // Hadoop MiniMR overwrites this while it should not
> LOG.info("Mini mapreduce cluster started");
> // In hadoop2, YARN/MR2 starts a mini cluster with its own conf instance and updates settings.
> // Our HBase MR jobs need several of these settings in order to properly run. So we copy the
> // necessary config properties here. YARN-129 required adding a few properties.
> conf.set("mapreduce.jobtracker.address",
>   jobConf.get("mapreduce.jobtracker.address"));
> // this for mrv2 support; mr1 ignores this
> conf.set("mapreduce.framework.name", "yarn");
> conf.setBoolean("yarn.is.minicluster", true);
> String rmAddress = jobConf.get("yarn.resourcemanager.address");
> if (rmAddress != null) {
>   conf.set("yarn.resourcemanager.address", rmAddress);
> }
> String historyAddress = jobConf.get("mapreduce.jobhistory.address");
> if (historyAddress != null) {
>   conf.set("mapreduce.jobhistory.address", historyAddress);
> }
> String schedulerAddress = jobConf.get("yarn.resourcemanager.scheduler.address");
> if (schedulerAddress != null) {
>   conf.set("yarn.resourcemanager.scheduler.address", schedulerAddress);
> }
> {quote}
> As an immediate fix for the Phoenix e2e tests to succeed, the lines below need to be added as well:
> {quote}
> String rmWebappAddress = jobConf.get("yarn.resourcemanager.webapp.address");
> if (rmWebappAddress != null) {
>   conf.set("yarn.resourcemanager.webapp.address", rmWebappAddress);
> }
> String historyWebappAddress = jobConf.get("mapreduce.jobhistory.webapp.address");
> if (historyWebappAddress != null) {
>   conf.set("mapreduce.jobhistory.webapp.address", historyWebappAddress);
> }
> {quote}
> Eventually, we should also see if we can copy over all the jobConf properties to the HBaseTestingUtility conf object.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Loknath Priyatham Teja Singamsetty updated HBASE-15924:
Description:
As part of HBASE-5939, autorestart for hbase services was added to deal with scenarios where hbase services (master/regionserver/master-backup) get killed or go down, leading to unplanned outages. The changes were made in hbase-daemon.sh to support the autorestart option.
However, the autorestart implementation doesn't work in standalone mode and, beyond that, has a few gaps relative to the release notes of HBASE-5939. Here is an attempt to re-design and fix the functionality considering all possible use cases with hbase service operations.
Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if they are not properly terminated, either by a "stop" command or by a cluster stop. To ensure that it does not overload the system when the server itself is corrupted and the process cannot be restarted, the server sleeps for 5 minutes before restarting if it was last started less than 5 minutes ago. To use it, launch the process with "bin/start-hbase autorestart". This option is not fully compatible with the existing "restart" command: if you ask for a restart on a server launched with autorestart, the server will restart but the next server instance won't be automatically restarted.

was:
As part of HBASE-5939, autorestart for hbase services was added to deal with scenarios where hbase services (master/regionserver/master-backup) get killed or go down, leading to unplanned outages. The changes were made in hbase-daemon.sh to support the autorestart option. However, the autorestart implementation doesn't work in standalone mode and, beyond that, has a few gaps relative to the release notes of HBASE-5939.
Here is an attempt to re-design and fix the functionality considering all possible use cases with hbase service operations.
Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if they are not properly terminated, either by a "stop" command or by a cluster stop. To ensure that it does not overload the system when the server itself is corrupted and the process cannot be restarted, the server sleeps for 5 minutes before restarting if it was last started less than 5 minutes ago. To use it, launch the process with "bin/start-hbase autorestart". This option is not fully compatible with the existing "restart" command: if you ask for a restart on a server launched with autorestart, the server will restart but the next server instance won't be automatically restarted.

> Enhance hbase services autorestart capability to hbase-daemon.sh
> -
>
> Key: HBASE-15924
> URL: https://issues.apache.org/jira/browse/HBASE-15924
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 0.98.19
> Reporter: Loknath Priyatham Teja Singamsetty
> Fix For: 0.98.19
>
> As part of HBASE-5939, autorestart for hbase services was added to deal with
> scenarios where hbase services (master/regionserver/master-backup) get killed
> or go down, leading to unplanned outages. The changes were made in
> hbase-daemon.sh to support the autorestart option.
> However, the autorestart implementation doesn't work in standalone mode and,
> beyond that, has a few gaps relative to the release notes of HBASE-5939.
> Here is an attempt to re-design and fix the functionality considering all
> possible use cases with hbase service operations.
> Release Notes of HBASE-5939:
> --
> When launched with autorestart, HBase processes will automatically restart if
> they are not properly terminated, either by a "stop" command or by a cluster
> stop.
> To ensure that it does not overload the system when the server itself is
> corrupted and the process cannot be restarted, the server sleeps for 5
> minutes before restarting if it was last started less than 5 minutes ago.
> To use it, launch the process with "bin/start-hbase autorestart". This option
> is not fully compatible with the existing "restart" command: if you ask for a
> restart on a server launched with autorestart, the server will restart but
> the next server instance won't be automatically restarted.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
[ https://issues.apache.org/jira/browse/HBASE-15924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Loknath Priyatham Teja Singamsetty updated HBASE-15924:
Description:
As part of HBASE-5939, autorestart for hbase services was added to deal with scenarios where hbase services (master/regionserver/master-backup) get killed or go down, leading to unplanned outages. The changes were made in hbase-daemon.sh to support the autorestart option.
However, the autorestart implementation doesn't work in standalone mode and, beyond that, has a few gaps relative to the release notes of HBASE-5939. Here is an attempt to re-design and fix the functionality considering all possible use cases with hbase service operations.
Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if they are not properly terminated, either by a "stop" command or by a cluster stop. To ensure that it does not overload the system when the server itself is corrupted and the process cannot be restarted, the server sleeps for 5 minutes before restarting if it was last started less than 5 minutes ago. To use it, launch the process with "bin/start-hbase autorestart". This option is not fully compatible with the existing "restart" command: if you ask for a restart on a server launched with autorestart, the server will restart but the next server instance won't be automatically restarted.

was:
As part of HBASE-5939, autorestart for hbase services was added to deal with scenarios where hbase services (master/regionserver/master-backup) get killed or go down, leading to unplanned outages. The changes were made in hbase-daemon.sh with the autorestart option. However, the autorestart implementation doesn't work in standalone mode and has a few gaps relative to the release notes of HBASE-5939.
Here is an attempt to re-design and fix the functionality considering all possible use cases with hbase service operations.
Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if they are not properly terminated, either by a "stop" command or by a cluster stop. To ensure that it does not overload the system when the server itself is corrupted and the process cannot be restarted, the server sleeps for 5 minutes before restarting if it was last started less than 5 minutes ago. To use it, launch the process with "bin/start-hbase autorestart". This option is not fully compatible with the existing "restart" command: if you ask for a restart on a server launched with autorestart, the server will restart but the next server instance won't be automatically restarted.

> Enhance hbase services autorestart capability to hbase-daemon.sh
> -
>
> Key: HBASE-15924
> URL: https://issues.apache.org/jira/browse/HBASE-15924
> Project: HBase
> Issue Type: Improvement
> Affects Versions: 0.98.19
> Reporter: Loknath Priyatham Teja Singamsetty
> Fix For: 0.98.19
>
> As part of HBASE-5939, autorestart for hbase services was added to deal with
> scenarios where hbase services (master/regionserver/master-backup) get killed
> or go down, leading to unplanned outages. The changes were made in
> hbase-daemon.sh to support the autorestart option.
> However, the autorestart implementation doesn't work in standalone mode and,
> beyond that, has a few gaps relative to the release notes of HBASE-5939.
> Here is an attempt to re-design and fix the functionality considering all
> possible use cases with hbase service operations.
> Release Notes of HBASE-5939:
> --
> When launched with autorestart, HBase processes will automatically restart if
> they are not properly terminated, either by a "stop" command or by a cluster
> stop.
> To ensure that it does not overload the system when the server itself is
> corrupted and the process cannot be restarted, the server sleeps for 5
> minutes before restarting if it was last started less than 5 minutes ago.
> To use it, launch the process with "bin/start-hbase autorestart". This option
> is not fully compatible with the existing "restart" command: if you ask for a
> restart on a server launched with autorestart, the server will restart but
> the next server instance won't be automatically restarted.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HBASE-15924) Enhance hbase services autorestart capability to hbase-daemon.sh
Loknath Priyatham Teja Singamsetty created HBASE-15924:
---
Summary: Enhance hbase services autorestart capability to hbase-daemon.sh
Key: HBASE-15924
URL: https://issues.apache.org/jira/browse/HBASE-15924
Project: HBase
Issue Type: Improvement
Affects Versions: 0.98.19
Reporter: Loknath Priyatham Teja Singamsetty
Fix For: 0.98.19

As part of HBASE-5939, autorestart for hbase services was added to deal with scenarios where hbase services (master/regionserver/master-backup) get killed or go down, leading to unplanned outages. The changes were made in hbase-daemon.sh with the autorestart option. However, the autorestart implementation doesn't work in standalone mode and has a few gaps relative to the release notes of HBASE-5939. Here is an attempt to re-design and fix the functionality considering all possible use cases with hbase service operations.
Release Notes of HBASE-5939:
--
When launched with autorestart, HBase processes will automatically restart if they are not properly terminated, either by a "stop" command or by a cluster stop. To ensure that it does not overload the system when the server itself is corrupted and the process cannot be restarted, the server sleeps for 5 minutes before restarting if it was last started less than 5 minutes ago. To use it, launch the process with "bin/start-hbase autorestart". This option is not fully compatible with the existing "restart" command: if you ask for a restart on a server launched with autorestart, the server will restart but the next server instance won't be automatically restarted.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
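The 5-minute throttle described in the HBASE-5939 release notes can be sketched as a small helper. This is an illustrative sketch only: the real logic lives in the hbase-daemon.sh shell loop, and the class, method, and constant names here are invented for the example.

```java
public class AutorestartBackoffSketch {
    // Minimum uptime before an immediate restart is allowed. If the process
    // died sooner than this, we sleep first, so a corrupted server cannot
    // drive a tight crash/restart loop that overloads the machine.
    static final long MIN_UPTIME_MS = 5 * 60 * 1000L;

    // How long to sleep before restarting, given the last start time and now.
    static long restartDelayMs(long lastStartMs, long nowMs) {
        long uptime = nowMs - lastStartMs;
        return uptime >= MIN_UPTIME_MS ? 0L : MIN_UPTIME_MS;
    }

    public static void main(String[] args) {
        // Process ran 10 minutes before dying: restart immediately.
        System.out.println(restartDelayMs(0L, 10 * 60 * 1000L)); // prints 0
        // Process died after 1 minute: sleep the full 5 minutes first.
        System.out.println(restartDelayMs(0L, 60 * 1000L)); // prints 300000
    }
}
```

The standalone-mode gap the ticket targets is orthogonal to this throttle: the backoff rule itself is sound, but the surrounding restart loop in hbase-daemon.sh does not fire for all service stop/kill paths.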