[jira] Created: (ZOOKEEPER-913) Version parser fails to parse "3.3.2-dev" from build.xml.

2010-10-25 Thread Anthony Urso (JIRA)
Version parser fails to parse "3.3.2-dev" from build.xml.
----------------------------------------------------------

 Key: ZOOKEEPER-913
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-913
 Project: Zookeeper
  Issue Type: Bug
  Components: build
Affects Versions: 3.3.1
Reporter: Anthony Urso
Priority: Critical
 Attachments: zk-build.patch, zk-version.patch

Cannot build 3.3.1 from the release tarball due to the VerGen parser's 
inability to parse "3.3.2-dev".

version-info:
 [java] All version-related parameters must be valid integers!
 [java] Exception in thread "main" java.lang.NumberFormatException: For input string: "2-dev"
 [java] at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
 [java] at java.lang.Integer.parseInt(Integer.java:481)
 [java] at java.lang.Integer.parseInt(Integer.java:514)
 [java] at org.apache.zookeeper.version.util.VerGen.main(VerGen.java:131)
 [java] Java Result: 1
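
For reference, a minimal sketch of a parser that tolerates a qualifier suffix
such as "-dev" (illustrative only; the attached zk-version.patch and
zk-build.patch may take a different approach):

    public class VersionParse {
        public static void main(String[] args) {
            String version = "3.3.2-dev";             // the string VerGen chokes on
            String[] parts = version.split("\\.", 3); // major, minor, rest
            int major = Integer.parseInt(parts[0]);
            int minor = Integer.parseInt(parts[1]);
            // Split off any qualifier ("-dev", "-beta", ...) before parsing micro.
            String[] microQual = parts[2].split("-", 2);
            int micro = Integer.parseInt(microQual[0]);
            String qualifier = microQual.length > 1 ? microQual[1] : null;
            System.out.printf("major=%d minor=%d micro=%d qualifier=%s%n",
                    major, minor, micro, qualifier);
        }
    }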


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (ZOOKEEPER-913) Version parser fails to parse "3.3.2-dev" from build.xml.

2010-10-25 Thread Anthony Urso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Urso updated ZOOKEEPER-913:
-----------------------------------

Attachment: zk-version.patch
zk-build.patch

Two possible solutions are attached; choose whichever is most effective.

> Version parser fails to parse "3.3.2-dev" from build.xml.
> ---------------------------------------------------------
>
> Key: ZOOKEEPER-913
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-913
> Project: Zookeeper
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1
>Reporter: Anthony Urso
>Priority: Critical
> Attachments: zk-build.patch, zk-version.patch
>
>
> Cannot build 3.3.1 from the release tarball due to the VerGen parser's 
> inability to parse "3.3.2-dev".
> version-info:
>  [java] All version-related parameters must be valid integers!
>  [java] Exception in thread "main" java.lang.NumberFormatException: For input string: "2-dev"
>  [java]   at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>  [java]   at java.lang.Integer.parseInt(Integer.java:481)
>  [java]   at java.lang.Integer.parseInt(Integer.java:514)
>  [java]   at org.apache.zookeeper.version.util.VerGen.main(VerGen.java:131)
>  [java] Java Result: 1

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (ZOOKEEPER-912) ZooKeeper client logs trace and debug messages at level INFO

2010-10-25 Thread Anthony Urso (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anthony Urso updated ZOOKEEPER-912:
-----------------------------------

Attachment: zk-loglevel.patch

This patch lowers the log level of debug and trace messages to where they 
belong.
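
To illustrate the intent (the patch itself edits the existing ZooKeeper client
classes; this standalone example just uses plain log4j): per-packet diagnostic
detail belongs at TRACE or DEBUG, guarded so it costs nothing when disabled,
while INFO is reserved for genuinely noteworthy events:

    import org.apache.log4j.Logger;

    class LogLevelExample {
        private static final Logger LOG = Logger.getLogger(LogLevelExample.class);

        void onPacket(String packet) {
            if (LOG.isTraceEnabled()) {
                LOG.trace("Received packet " + packet);  // per-packet detail: TRACE
            }
        }

        void onSessionEstablished(long sessionId) {
            // Lifecycle events an operator actually needs to see: INFO
            LOG.info("Session established, id=0x" + Long.toHexString(sessionId));
        }
    }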

> ZooKeeper client logs trace and debug messages at level INFO
> ------------------------------------------------------------
>
> Key: ZOOKEEPER-912
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-912
> Project: Zookeeper
>  Issue Type: Bug
>  Components: java client
>Affects Versions: 3.3.1
>Reporter: Anthony Urso
>Priority: Minor
> Attachments: zk-loglevel.patch
>
>
> ZK logs a lot of uninformative trace and debug messages at level INFO.  This 
> clutters the log and makes it easy to miss useful information. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (ZOOKEEPER-912) ZooKeeper client logs trace and debug messages at level INFO

2010-10-25 Thread Anthony Urso (JIRA)
ZooKeeper client logs trace and debug messages at level INFO


 Key: ZOOKEEPER-912
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-912
 Project: Zookeeper
  Issue Type: Bug
  Components: java client
Affects Versions: 3.3.1
Reporter: Anthony Urso
Priority: Minor


ZK logs a lot of uninformative trace and debug messages at level INFO.  This 
clutters the log and makes it easy to miss useful information. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Apache now has reviewboard

2010-10-25 Thread Henry Robinson
Yes!

On 25 October 2010 22:47, Patrick Hunt  wrote:

> And we're on it: https://reviews.apache.org/groups/zookeeper/
>
> We should rework our
> "howtocommit" to incorporate this.
>
> Patrick
>
> On Mon, Oct 25, 2010 at 10:16 PM, Patrick Hunt  wrote:
>
> > FYI:
> > https://blogs.apache.org/infra/entry/reviewboard_instance_running_at_the
> >
> > We should start using this, I've used it for other projects and it worked
> > out quite well.
> >
> > Patrick
> >
>



-- 
Henry Robinson
Software Engineer
Cloudera
415-994-6679


Re: Apache now has reviewboard

2010-10-25 Thread Patrick Hunt
And we're on it: https://reviews.apache.org/groups/zookeeper/

We should rework our
"howtocommit" to incorporate this.

Patrick

On Mon, Oct 25, 2010 at 10:16 PM, Patrick Hunt  wrote:

> FYI:
> https://blogs.apache.org/infra/entry/reviewboard_instance_running_at_the
>
> We should start using this, I've used it for other projects and it worked
> out quite well.
>
> Patrick
>


Apache now has reviewboard

2010-10-25 Thread Patrick Hunt
FYI:
https://blogs.apache.org/infra/entry/reviewboard_instance_running_at_the

We should start using this, I've used it for other projects and it worked
out quite well.

Patrick


Re: [VOTE] ZooKeeper as TLP?

2010-10-25 Thread Benjamin Reed

+1

On 10/22/2010 02:42 PM, Patrick Hunt wrote:

Please vote as to whether you think ZooKeeper should become a
top-level Apache project, as discussed previously on this list. I've
included below a draft board resolution.

Do folks support sending this request on to the Hadoop PMC?

Patrick



 X. Establish the Apache ZooKeeper Project

WHEREAS, the Board of Directors deems it to be in the best
interests of the Foundation and consistent with the
Foundation's purpose to establish a Project Management
Committee charged with the creation and maintenance of
open-source software related to distributed system coordination
for distribution at no charge to the public.

NOW, THEREFORE, BE IT RESOLVED, that a Project Management
Committee (PMC), to be known as the "Apache ZooKeeper Project",
be and hereby is established pursuant to Bylaws of the
Foundation; and be it further

RESOLVED, that the Apache ZooKeeper Project be and hereby is
responsible for the creation and maintenance of software
related to distributed system coordination; and be it further

RESOLVED, that the office of "Vice President, Apache ZooKeeper" be
and hereby is created, the person holding such office to
serve at the direction of the Board of Directors as the chair
of the Apache ZooKeeper Project, and to have primary responsibility
for management of the projects within the scope of
responsibility of the Apache ZooKeeper Project; and be it further

RESOLVED, that the persons listed immediately below be and
hereby are appointed to serve as the initial members of the
Apache ZooKeeper Project:

  * Patrick Hunt
  * Flavio Junqueira
  * Mahadev Konar
  * Benjamin Reed
  * Henry Robinson

NOW, THEREFORE, BE IT FURTHER RESOLVED, that Patrick Hunt
be appointed to the office of Vice President, Apache ZooKeeper, to
serve in accordance with and subject to the direction of the
Board of Directors and the Bylaws of the Foundation until
death, resignation, retirement, removal or disqualification,
or until a successor is appointed; and be it further

RESOLVED, that the initial Apache ZooKeeper PMC be and hereby is
tasked with the creation of a set of bylaws intended to
encourage open development and increased participation in the
Apache ZooKeeper Project; and be it further

RESOLVED, that the Apache ZooKeeper Project be and hereby
is tasked with the migration and rationalization of the Apache
Hadoop ZooKeeper sub-project; and be it further

RESOLVED, that all responsibilities pertaining to the Apache
Hadoop ZooKeeper sub-project encumbered upon the
Apache Hadoop Project are hereafter discharged.




[jira] Commented: (ZOOKEEPER-897) C Client seg faults during close

2010-10-25 Thread Jared Cantwell (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924748#action_12924748
 ] 

Jared Cantwell commented on ZOOKEEPER-897:
------------------------------------------

We are using the 3.3.2 release.  I don't think the patch leaks memory because 
destroy() will eventually get called (by the reentrant call to 
zookeeper_close()), which calls cleanup_bufs() and frees those buffers, right?  
Also, I had a test that reproduced this error, but it was easiest to reproduce 
if I injected artificial sleeps into the zookeeper.c file.  If that's OK, then 
I can submit that.  Otherwise, I'll see if I can devise a test that reproduces 
it without the sleeps.
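
The cleanup pattern under discussion, sketched in Java purely for illustration
(the real code is C and uses inc_ref_counter(), cleanup_bufs(), and destroy()):
the last reference holder performs the buffer cleanup, so close() never frees
memory the IO thread is still reading:

    import java.util.concurrent.atomic.AtomicInteger;

    class Handle {
        // Starts at 1 for the owner; the IO thread holds one while running.
        private final AtomicInteger refs = new AtomicInteger(1);

        void ioLoop() {
            refs.incrementAndGet();
            try {
                // ... read the socket and process input buffers safely ...
            } finally {
                release();
            }
        }

        void close() {
            // Complete outstanding synchronous calls here (the guarded part),
            // but leave freeing the buffers to the last reference holder.
            release();
        }

        private void release() {
            if (refs.decrementAndGet() == 0) {
                destroy();
            }
        }

        private void destroy() { /* free the to_send/to_process buffers */ }
    }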

> C Client seg faults during close
> --------------------------------
>
> Key: ZOOKEEPER-897
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-897
> Project: Zookeeper
>  Issue Type: Bug
>  Components: c client
>Reporter: Jared Cantwell
>Assignee: Jared Cantwell
> Fix For: 3.3.2, 3.4.0
>
> Attachments: ZOOKEEEPER-897.diff, ZOOKEEPER-897.patch
>
>
> We observed a crash while closing our C client.  It was in the do_io() thread, 
> which was processing events during the close() call.
> #0  queue_buffer (list=0x6bd4f8, b=0x0, add_to_front=0) at src/zookeeper.c:969
> #1  0x0046234e in check_events (zh=0x6bd480, events=<value optimized out>) 
> at src/zookeeper.c:1687
> #2  0x00462d74 in zookeeper_process (zh=0x6bd480, events=2) at 
> src/zookeeper.c:1971
> #3  0x00469c34 in do_io (v=0x6bd480) at src/mt_adaptor.c:311
> #4  0x77bc59ca in start_thread () from /lib/libpthread.so.0
> #5  0x76f706fd in clone () from /lib/libc.so.6
> #6  0x in ?? ()
> We tracked down the sequence of events, and the cause is that input_buffer is 
> being freed from a thread other than the do_io thread that relies on it:
> 1. do_io() call check_events()
> 2. if(events&ZOOKEEPER_READ) branch executes
> 3. if (rc > 0) branch executes
> 4. if (zh->input_buffer != &zh->primer_buffer) branch executes
> .in the meantime..
>  5. zookeeper_close() called
>  6. if (inc_ref_counter(zh,0)!=0) branch executes
>  7. cleanup_bufs() is called
>  8. input_buffer is freed at the end
> . back to check_events().
> 9. queue_events() is called on a NULL buffer.
> I believe the patch is to only call free_completions() in zookeeper_close() 
> and not cleanup_bufs().  The original reason cleanup_bufs() was added was to 
> call any outstanding synchronous completions, so only free_completions (which 
> is guarded) is needed.  I will submit a patch for review with this change.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (ZOOKEEPER-897) C Client seg faults during close

2010-10-25 Thread Mahadev konar (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mahadev konar updated ZOOKEEPER-897:


Status: Open  (was: Patch Available)

Jared,
  The patch that you provided leaks memory in the ZooKeeper client. We have to 
clean up the to_send and to_process buffers on close and free them. With which 
release did you observe the problem?

I had tried to fix all the issues with zookeeper_close() in ZOOKEEPER-591. 
Also, Michi has fixed a couple of other issues in ZOOKEEPER-804. 

What version of the code are you running? Also, can you provide a test case 
that causes this issue? (I know it's hard to reproduce, but even a test that 
reproduces it once in 10-20 runs is good enough.)


> C Client seg faults during close
> --------------------------------
>
> Key: ZOOKEEPER-897
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-897
> Project: Zookeeper
>  Issue Type: Bug
>  Components: c client
>Reporter: Jared Cantwell
>Assignee: Jared Cantwell
> Fix For: 3.3.2, 3.4.0
>
> Attachments: ZOOKEEEPER-897.diff, ZOOKEEPER-897.patch
>
>
> We observed a crash while closing our C client.  It was in the do_io() thread, 
> which was processing events during the close() call.
> #0  queue_buffer (list=0x6bd4f8, b=0x0, add_to_front=0) at src/zookeeper.c:969
> #1  0x0046234e in check_events (zh=0x6bd480, events=<value optimized out>) 
> at src/zookeeper.c:1687
> #2  0x00462d74 in zookeeper_process (zh=0x6bd480, events=2) at 
> src/zookeeper.c:1971
> #3  0x00469c34 in do_io (v=0x6bd480) at src/mt_adaptor.c:311
> #4  0x77bc59ca in start_thread () from /lib/libpthread.so.0
> #5  0x76f706fd in clone () from /lib/libc.so.6
> #6  0x in ?? ()
> We tracked down the sequence of events, and the cause is that input_buffer is 
> being freed from a thread other than the do_io thread that relies on it:
> 1. do_io() call check_events()
> 2. if(events&ZOOKEEPER_READ) branch executes
> 3. if (rc > 0) branch executes
> 4. if (zh->input_buffer != &zh->primer_buffer) branch executes
> .in the meantime..
>  5. zookeeper_close() called
>  6. if (inc_ref_counter(zh,0)!=0) branch executes
>  7. cleanup_bufs() is called
>  8. input_buffer is freed at the end
> . back to check_events().
> 9. queue_events() is called on a NULL buffer.
> I believe the patch is to only call free_completions() in zookeeper_close() 
> and not cleanup_bufs().  The original reason cleanup_bufs() was added was to 
> call any outstanding synchronous completions, so only free_completions (which 
> is guarded) is needed.  I will submit a patch for review with this change.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (ZOOKEEPER-805) four letter words fail with latest ubuntu nc.openbsd

2010-10-25 Thread Mahadev konar (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924732#action_12924732
 ] 

Mahadev konar commented on ZOOKEEPER-805:
-----------------------------------------

Sure, do you want to add some documentation to the ZooKeeper admin guide to 
clarify the use of -q and the issue with OpenBSD nc?


> four letter words fail with latest ubuntu nc.openbsd
> ----------------------------------------------------
>
> Key: ZOOKEEPER-805
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-805
> Project: Zookeeper
>  Issue Type: Bug
>  Components: documentation, server
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Patrick Hunt
>Priority: Critical
> Fix For: 3.3.2, 3.4.0
>
>
> In both 3.3 branch and trunk "echo stat|nc localhost 2181" fails against the 
> ZK server on Ubuntu Lucid Lynx.
> I noticed this after upgrading to Lucid Lynx, which now ships OpenBSD nc as 
> the default:
> OpenBSD netcat (Debian patchlevel 1.89-3ubuntu2)
> versus traditional nc [v1.10-38], which works fine.
> Not sure if this is a bug in us or nc.openbsd, but it's 
> currently not working for me. Ugh.
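
For anyone bitten by this in the meantime, a workaround sketch that sidesteps
nc entirely: open a socket, send the four-letter word, half-close the write
side (the behavior nc -q effectively arranges), and read the reply. The host
and port are example values:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.Socket;

    public class FourLetterWord {
        public static void main(String[] args) throws IOException {
            Socket sock = new Socket("localhost", 2181);
            try {
                sock.getOutputStream().write("stat".getBytes());
                sock.getOutputStream().flush();
                sock.shutdownOutput();  // half-close: we are done sending
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(sock.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            } finally {
                sock.close();
            }
        }
    }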

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Allowing a ZooKeeper server to be part of multiple clusters

2010-10-25 Thread Vishal K
Hi Mahadev,

It lets one run multiple 2-node clusters. Suppose I have an application that
does a simple 2-way mirroring of my data and uses ZK for clustering. If I
need to support many 2-node clusters, where will I find the spare machines
to run the third instance for each cluster?

-Vishal

On Mon, Oct 25, 2010 at 5:14 PM, Mahadev Konar wrote:

> Hi Vishal,
>  This idea (2.) had been kicked around initially by Flavio. I think he'll
> probably chip in on the discussion. I am just curious: what is the idea
> behind your proposal? Is it to provide some kind of failure guarantees
> between a 2-node and a 3-node cluster?
>
> Thanks
> mahadev
>
> On 10/25/10 1:05 PM, "Vishal K"  wrote:
>
> > Hi All,
> >
> > I am thinking about the choices one would have to support multiple 2-node
> > clusters. Assume that for some reason one needs to support multiple 2-node
> > clusters.
> > This would mean they will have to figure out a way to run a third instance
> > of the ZK server for each cluster somewhere to ensure that a ZK cluster is
> > available after a failure.
> >
> > This works well if we have to run one or two 2-node clusters. However,
> > what if we have to run many 2-node clusters?
> >
> > I have the following options:
> > 1. Find m machines to run the third instance of each cluster. Run n/m
> > instances of ZK on each machine.
> > 2. Modify the ZooKeeper server to participate in multiple clusters. This
> > will allow us to run y instances of the third node where each instance
> > will be part of n/y clusters.
> > 3. Run the third instance of the ZK server required for the i-th cluster
> > on one of the servers of the (i+1)%n-th cluster. Essentially, distribute
> > the third instance across the other clusters.
> >
> > The pros and cons of each approach are fairly obvious. While I prefer the
> > third approach, I would like to check what everyone thinks about the
> > second approach.
> >
> > Thanks.
> > -Vishal
> >
>
>


Re: Allowing a ZooKeeper server to be part of multiple clusters

2010-10-25 Thread Mahadev Konar
Hi Vishal,
 This idea (2.) had been kicked around initially by Flavio. I think he'll
probably chip in on the discussion. I am just curious: what is the idea
behind your proposal? Is it to provide some kind of failure guarantees
between a 2-node and a 3-node cluster?

Thanks
mahadev

On 10/25/10 1:05 PM, "Vishal K"  wrote:

> Hi All,
> 
> I am thinking about the choices one would have to support multiple 2-node
> clusters. Assume that for some reason one needs to support multiple 2-node
> clusters.
> This would mean they will have to figure out a way to run a third instance
> of the ZK server for each cluster somewhere to ensure that a ZK cluster is
> available after a failure.
> 
> This works well if we have to run one or two 2-node clusters. However, what
> if we have to run many 2-node clusters?
> 
> I have the following options:
> 1. Find m machines to run the third instance of each cluster. Run n/m
> instances of ZK on each machine.
> 2. Modify the ZooKeeper server to participate in multiple clusters. This will
> allow us to run y instances of the third node where each instance will be part
> of n/y clusters.
> 3. Run the third instance of the ZK server required for the i-th cluster on one
> of the servers of the (i+1)%n-th cluster. Essentially, distribute the third
> instance across the other clusters.
> 
> The pros and cons of each approach are fairly obvious. While I prefer the
> third approach, I would like to check what everyone thinks about the second
> approach.
> 
> Thanks.
> -Vishal
> 



Allowing a ZooKeeper server to be part of multiple clusters

2010-10-25 Thread Vishal K
Hi All,

I am thinking about the choices one would have to support multiple 2-node
clusters. Assume that for some reason one needs to support multiple 2-node
clusters.
This would mean they will have to figure out a way to run a third instance
of the ZK server for each cluster somewhere to ensure that a ZK cluster is
available after a failure.

This works well if we have to run one or two 2-node clusters. However, what
if we have to run many 2-node clusters?

I have the following options:
1. Find m machines to run the third instance of each cluster. Run n/m
instances of ZK on each machine.
2. Modify the ZooKeeper server to participate in multiple clusters. This will
allow us to run y instances of the third node where each instance will be part
of n/y clusters.
3. Run the third instance of the ZK server required for the i-th cluster on one
of the servers of the (i+1)%n-th cluster. Essentially, distribute the third
instance across the other clusters.

The pros and cons of each approach are fairly obvious. While I prefer the
third approach, I would like to check what everyone thinks about the second
approach.

Thanks.
-Vishal
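
As a toy illustration of option 3's placement rule (example values only): with
n clusters, cluster i's third ZK instance is hosted on a machine belonging to
cluster (i+1)%n, so the extra load is spread evenly and no cluster hosts its
own tie-breaker:

    public class Placement {
        public static void main(String[] args) {
            int n = 4;  // number of 2-node clusters (arbitrary example)
            for (int i = 0; i < n; i++) {
                System.out.printf(
                        "cluster %d: third instance hosted on cluster %d%n",
                        i, (i + 1) % n);
            }
        }
    }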


[jira] Updated: (ZOOKEEPER-905) enhance zkServer.sh for easier zookeeper automation-izing

2010-10-25 Thread Patrick Hunt (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Hunt updated ZOOKEEPER-905:
-----------------------------------

Fix Version/s: 3.4.0
 Assignee: Nicholas Harteau

> enhance zkServer.sh for easier zookeeper automation-izing
> ---------------------------------------------------------
>
> Key: ZOOKEEPER-905
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-905
> Project: Zookeeper
>  Issue Type: Improvement
>  Components: scripts
>Reporter: Nicholas Harteau
>Assignee: Nicholas Harteau
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: zkServer.sh.diff
>
>
> zkServer.sh is good at starting zookeeper and figuring out the right options 
> to pass along.
> Unfortunately, if you want to wrap ZooKeeper startup/shutdown in any 
> significant way, you have to reimplement a bunch of the logic there.
> The attached patch addresses a couple of simple issues:
> 1. add a 'start-foreground' option to zkServer.sh - this allows things that 
> expect to manage a foregrounded process (daemontools, launchd, etc) to use 
> zkServer.sh instead of rolling their own to launch zookeeper
> 2. add a 'print-cmd' option to zkServer.sh - rather than launching zookeeper 
> from the script, just give me the command you'd normally use to exec 
> zookeeper.  I found this useful when writing automation to start/stop 
> zookeeper as part of smoke testing zookeeper-based applications
> 3. Deal more gracefully with supplying alternate configuration files to 
> zookeeper - currently the script assumes all config files reside in 
> $ZOOCFGDIR - also useful for smoke testing
> 4. communicate extra info ("JMX enabled") about zookeeper on STDERR rather 
> than STDOUT (necessary for #2)
> 5. fixes an issue on macos where readlink doesn't have the '-f' option.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (ZOOKEEPER-904) super digest is not actually acting as a full superuser

2010-10-25 Thread Camille Fournier (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Camille Fournier updated ZOOKEEPER-904:
---------------------------------------

Attachment: ZOOKEEPER-904-332.patch

for 3.3.2 release

> super digest is not actually acting as a full superuser
> -------------------------------------------------------
>
> Key: ZOOKEEPER-904
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-904
> Project: Zookeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.3.1
>Reporter: Camille Fournier
>Assignee: Camille Fournier
> Fix For: 3.3.2, 3.4.0
>
> Attachments: ZOOKEEPER-904-332.patch, ZOOKEEPER-904.patch
>
>
> The documentation states:
> New in 3.2:  Enables a ZooKeeper ensemble administrator to access the znode 
> hierarchy as a "super" user. In particular no ACL checking occurs for a user 
> authenticated as super.
> However, if a super user does something like:
> zk.setACL("/", Ids.READ_ACL_UNSAFE, -1);
> the super user is now bound by read-only ACL. This is not what I would expect 
> to see given the documentation. It can be fixed by moving the check for the 
> "super" authId in PrepRequestProcessor.checkACL to before the for(ACL a : 
> acl) loop.
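
A simplified sketch of the proposed reordering (the real
PrepRequestProcessor.checkACL signature also carries the requested permission
bits and server state; this only shows the shape of the fix): the super check
runs before any ACL entries are consulted, so a super user can never lock
themselves out:

    import java.util.List;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.data.ACL;
    import org.apache.zookeeper.data.Id;

    class AclCheckSketch {
        static void checkACL(List<ACL> acl, List<Id> authInfo)
                throws KeeperException.NoAuthException {
            for (Id authId : authInfo) {
                if ("super".equals(authId.getScheme())) {
                    return;  // superuser: no ACL checking occurs, per the docs
                }
            }
            for (ACL a : acl) {
                // ... normal scheme/permission matching for non-super users,
                // returning when a matching ACL grants the access ...
            }
            throw new KeeperException.NoAuthException();  // nothing matched
        }
    }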

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Fix release 3.3.2 planning, status.

2010-10-25 Thread Patrick Hunt
Quick update: 794 was resolved, but a new blocker came in in the meantime:
https://issues.apache.org/jira/browse/ZOOKEEPER-907

Ben, can you comment on this? A review would be great (I'm willing to
commit).

There are 3 other "patch available" issues for 3.3.2; if someone could review
those as well, it would be great to include them.

Patrick

On Wed, Oct 20, 2010 at 2:00 PM, Fournier, Camille F. [Tech] <
camille.fourn...@gs.com> wrote:

> FWIW it looks solid to me.
>
> C
>
> -Original Message-
> From: Patrick Hunt [mailto:ph...@apache.org]
> Sent: Wednesday, October 20, 2010 1:53 PM
> To: zookeeper-dev@hadoop.apache.org
> Subject: Re: Fix release 3.3.2 planning, status.
>
> https://issues.apache.org/jira/browse/ZOOKEEPER-794
> I've done a bunch of testing on a number of machines, could someone take a
> look at this and +1 it? (or not) I'd like to get 3.3.2 moving.
>
> Regards,
>
> Patrick
>
> On Mon, Oct 18, 2010 at 9:19 AM, Patrick Hunt  wrote:
>
> > Hi Camille, unfortunately there's a blocker on 3.3.2 at the moment.
> > http://bit.ly/asOSNl I just updated that patch to fix a build issue,
> > hopefully one of the committers can review asap.
> >
> > Additionally there are a number of other "patch available" patches attached
> > to 3.3.2. I'd like to get those included given everyone's done a bunch of
> > work on them. Again, committers need to review/commit/reject appropriately.
> >
> > What do ppl think, are we pretty close? Ben/Flavio/Henry/Mahadev please
> > review some of the outstanding patches. Coordinate with me if you have
> > issues/questions.
> >
> > Regards,
> >
> > Patrick
> >
> >
> > On Mon, Oct 18, 2010 at 7:56 AM, Fournier, Camille F. [Tech] <
> > camille.fourn...@gs.com> wrote:
> >
> >> Hi guys,
> >>
> >> Any updates on the 3.3.2 release schedule? Trying to plan a release myself
> >> and wondering if I'll have to go to production with patched 3.3.1 or have
> >> time to QA with the 3.3.2 release.
> >>
> >> Thanks,
> >> Camille
> >>
> >> -Original Message-
> >> From: Patrick Hunt [mailto:ph...@apache.org]
> >> Sent: Thursday, September 23, 2010 12:45 PM
> >> To: zookeeper-dev@hadoop.apache.org
> >> Subject: Fix release 3.3.2 planning, status.
> >>
> >> Looking at the JIRA queue for 3.3.2 I see that there are two blockers, one
> >> is currently PA and the other is pretty close (it has a patch that should
> >> go in soon).
> >>
> >> There are a few JIRAs that already went into the branch that are important
> >> to get out there ASAP, esp ZOOKEEPER-846 (fix close issue found by hbase).
> >>
> >> One issue that's been slowing us down is Hudson. The trunk was not passing
> >> its Hudson validation, which was causing a slowdown in patch review.
> >> Mahadev and I fixed this. However, with recent changes to the Hudson
> >> hw/security environment the patch testing process (automated) is broken.
> >> Giri is working on this. In the meantime we'll have to test ourselves.
> >> Committers -- be sure to verify RAT, Findbugs, etc... in addition to
> >> verifying via test. I've set up an additional Hudson environment inside
> >> Cloudera that also verifies the trunk/branch. If issues are found I will
> >> report them (unfortunately I can't provide access to Cloudera's Hudson env
> >> to non-Cloudera employees at this time).
> >>
> >> I'd like to clear out the PAs asap and get a release candidate built. Anyone
> >> see a problem with shooting for an RC mid next week?
> >>
> >> Patrick
> >>
> >
> >
>


[jira] Commented: (ZOOKEEPER-904) super digest is not actually acting as a full superuser

2010-10-25 Thread Camille Fournier (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924673#action_12924673
 ] 

Camille Fournier commented on ZOOKEEPER-904:


I would love it in 3.3.2, will upload a patch for that version.

> super digest is not actually acting as a full superuser
> -------------------------------------------------------
>
> Key: ZOOKEEPER-904
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-904
> Project: Zookeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.3.1
>Reporter: Camille Fournier
>Assignee: Camille Fournier
> Fix For: 3.3.2, 3.4.0
>
> Attachments: ZOOKEEPER-904.patch
>
>
> The documentation states:
> New in 3.2:  Enables a ZooKeeper ensemble administrator to access the znode 
> hierarchy as a "super" user. In particular no ACL checking occurs for a user 
> authenticated as super.
> However, if a super user does something like:
> zk.setACL("/", Ids.READ_ACL_UNSAFE, -1);
> the super user is now bound by read-only ACL. This is not what I would expect 
> to see given the documentation. It can be fixed by moving the check for the 
> "super" authId in PrepRequestProcessor.checkACL to before the for(ACL a : 
> acl) loop.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (ZOOKEEPER-904) super digest is not actually acting as a full superuser

2010-10-25 Thread Patrick Hunt (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Hunt updated ZOOKEEPER-904:
-----------------------------------

Fix Version/s: 3.3.2

We should consider this for 3.3.2 as well, or at least 3.3.3.

> super digest is not actually acting as a full superuser
> -------------------------------------------------------
>
> Key: ZOOKEEPER-904
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-904
> Project: Zookeeper
>  Issue Type: Bug
>  Components: server
>Affects Versions: 3.3.1
>Reporter: Camille Fournier
>Assignee: Camille Fournier
> Fix For: 3.3.2, 3.4.0
>
> Attachments: ZOOKEEPER-904.patch
>
>
> The documentation states:
> New in 3.2:  Enables a ZooKeeper ensemble administrator to access the znode 
> hierarchy as a "super" user. In particular no ACL checking occurs for a user 
> authenticated as super.
> However, if a super user does something like:
> zk.setACL("/", Ids.READ_ACL_UNSAFE, -1);
> the super user is now bound by read-only ACL. This is not what I would expect 
> to see given the documentation. It can be fixed by moving the check for the 
> "super" authId in PrepRequestProcessor.checkACL to before the for(ACL a : 
> acl) loop.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (ZOOKEEPER-896) Improve C client to support dynamic authentication schemes

2010-10-25 Thread Patrick Hunt (JIRA)

[ 
https://issues.apache.org/jira/browse/ZOOKEEPER-896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924669#action_12924669
 ] 

Patrick Hunt commented on ZOOKEEPER-896:


Ben/Mahadev any comments on this approach?

Botond, can you add a test for this?

> Improve C client to support dynamic authentication schemes
> ----------------------------------------------------------
>
> Key: ZOOKEEPER-896
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-896
> Project: Zookeeper
>  Issue Type: Improvement
>  Components: c client
>Affects Versions: 3.3.1
>Reporter: Botond Hejj
>Assignee: Botond Hejj
> Fix For: 3.4.0
>
> Attachments: ZOOKEEPER-896.patch
>
>
> When we started exploring zookeeper for our requirements we found the 
> authentication mechanism is not flexible enough.
> We want to use kerberos for authentication but using the current API we ran 
> into a few problems. The idea is that we get a kerberos token on the client 
> side and then send that token to the server with a kerberos scheme. A server 
> side authentication plugin can use that token to authenticate the client and 
> also use the token for authorization.
> We ran into two problems with this approach:
> 1. A different kerberos token is needed for each different server that the client 
> can connect to since kerberos uses mutual authentication. That means when the 
> client acquires this kerberos token it has to know which server it connects 
> to and generate the token according to that. The client currently can't 
> generate a token for a specific server. The token stored in the auth_info is 
> used for all the servers.
> 2. The kerberos token might have an expiry time so if the client loses the 
> connection to the server and then tries to reconnect, it should acquire a 
> new token. That is not possible currently since the token is stored in 
> auth_info and reused for every connection.
> The problem can be solved if we allow the client to register a callback for 
> authentication instead of a static token. This can be a callback with an 
> argument which passes the current host string. The zookeeper client code 
> could call this callback before it sends the authentication info to the 
> server to get a fresh server specific token.
> This would solve our problem with the kerberos authentication and also could 
> be used for other more dynamic authentication schemes.
> The solution could be generalized for the Java client as well.
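
A sketch of what the proposed callback could look like on the Java side (all
names here are hypothetical, not existing ZooKeeper API): instead of
registering a static token, the client registers a provider that is asked for
a fresh, server-specific token on every (re)connect:

    // Hypothetical API sketch: none of these names exist in ZooKeeper today.
    interface AuthTokenProvider {
        // Called just before authentication, with the host the client actually
        // connected to, so a Kerberos implementation can mint a ticket for
        // that specific server.
        byte[] tokenFor(String hostPort);
    }

    class DynamicAuthSketch {
        private final AuthTokenProvider provider;

        DynamicAuthSketch(AuthTokenProvider provider) {
            this.provider = provider;
        }

        void onConnected(String hostPort) {
            byte[] token = provider.tokenFor(hostPort);  // fresh token per connection
            // zk.addAuthInfo("kerberos", token);  // how the client would then use it
        }
    }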

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (ZOOKEEPER-896) Improve C client to support dynamic authentication schemes

2010-10-25 Thread Patrick Hunt (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Hunt updated ZOOKEEPER-896:
-----------------------------------

Assignee: Botond Hejj

> Improve C client to support dynamic authentication schemes
> ----------------------------------------------------------
>
> Key: ZOOKEEPER-896
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-896
> Project: Zookeeper
>  Issue Type: Improvement
>  Components: c client
>Affects Versions: 3.3.1
>Reporter: Botond Hejj
>Assignee: Botond Hejj
> Fix For: 3.4.0
>
> Attachments: ZOOKEEPER-896.patch
>
>
> When we started exploring zookeeper for our requirements we found the 
> authentication mechanism is not flexible enough.
> We want to use kerberos for authentication but using the current API we ran 
> into a few problems. The idea is that we get a kerberos token on the client 
> side and then send that token to the server with a kerberos scheme. A server 
> side authentication plugin can use that token to authenticate the client and 
> also use the token for authorization.
> We ran into two problems with this approach:
> 1. A different kerberos token is needed for each different server that the client 
> can connect to since kerberos uses mutual authentication. That means when the 
> client acquires this kerberos token it has to know which server it connects 
> to and generate the token according to that. The client currently can't 
> generate a token for a specific server. The token stored in the auth_info is 
> used for all the servers.
> 2. The kerberos token might have an expiry time so if the client loses the 
> connection to the server and then tries to reconnect, it should acquire a 
> new token. That is not possible currently since the token is stored in 
> auth_info and reused for every connection.
> The problem can be solved if we allow the client to register a callback for 
> authentication instead of a static token. This can be a callback with an 
> argument which passes the current host string. The zookeeper client code 
> could call this callback before it sends the authentication info to the 
> server to get a fresh server specific token.
> This would solve our problem with the kerberos authentication and also could 
> be used for other more dynamic authentication schemes.
> The solution could be generalized for the Java client as well.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (ZOOKEEPER-907) Spurious "KeeperErrorCode = Session moved" messages

2010-10-25 Thread Vishal K (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vishal K updated ZOOKEEPER-907:
-------------------------------

Attachment: ZOOKEEPER-907.patch_v2

Attaching cleaned-up patch.
Ben, let me know what you think about the test that I suggested.

> Spurious "KeeperErrorCode = Session moved" messages
> ---------------------------------------------------
>
> Key: ZOOKEEPER-907
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-907
> Project: Zookeeper
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Vishal K
>Assignee: Vishal K
>Priority: Blocker
> Fix For: 3.3.2, 3.4.0
>
> Attachments: ZOOKEEPER-907.patch, ZOOKEEPER-907.patch_v2
>
>
> The sync request does not set the session owner in Request.
> As a result, the leader keeps printing:
> 2010-07-01 10:55:36,733 - INFO  [ProcessThread:-1:preprequestproces...@405] - 
> Got user-level KeeperException when processing sessionid:0x298d3b1fa9 
> type:sync: cxid:0x6 zxid:0xfffe txntype:unknown reqpath:/ Error 
> Path:null Error:KeeperErrorCode = Session moved
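
For context, a self-contained sketch of the failure mode (a simplified model,
not the actual ZooKeeper classes): every session operation must carry the
owner token of the connection it arrived on, and the sync path forgot to set
it, so the leader's session-moved check rejects the request:

    class SessionMovedSketch {
        static final Object LOCAL_OWNER = new Object();

        static class Request {
            Object owner;  // the sync path never set this, per the bug
        }

        static void checkSession(Request r) {
            if (r.owner != LOCAL_OWNER) {
                throw new IllegalStateException("KeeperErrorCode = Session moved");
            }
        }

        public static void main(String[] args) {
            Request sync = new Request();   // owner left unset, as in the bug
            try {
                checkSession(sync);
            } catch (IllegalStateException e) {
                System.out.println("Got: " + e.getMessage());
            }
            sync.owner = LOCAL_OWNER;       // the fix: set the owner like other ops
            checkSession(sync);
            System.out.println("After fix: check passes");
        }
    }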

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.