Integrated: 8297385: Remove duplicated null typos in javadoc

2022-11-24 Thread Dongxu Wang
On Wed, 23 Nov 2022 06:58:09 GMT, Dongxu Wang  wrote:

> 8297385: Remove duplicated null typos in javadoc

This pull request has now been integrated.

Changeset: 0ed8b337
Author:    Dongxu Wang 
Committer: Yi Yang 
URL:   https://git.openjdk.org/jdk/commit/0ed8b337eaa59881a62af5dcc0abb454761f2e71
Stats: 3 lines in 1 file changed: 0 ins; 0 del; 3 mod

8297385: Remove duplicated null typos in javadoc

Reviewed-by: dfuchs, rriggs

-

PR: https://git.openjdk.org/jdk/pull/11311


RFR: 8297385: Remove duplicated null typos in javadoc

2022-11-22 Thread Dongxu Wang
8297385: Remove duplicated null typos in javadoc

-

Commit messages:
 - 8297385: Remove duplicated null typo in javadoc

Changes: https://git.openjdk.org/jdk/pull/11311/files
 Webrev: https://webrevs.openjdk.org/?repo=jdk&pr=11311&range=00
  Issue: https://bugs.openjdk.org/browse/JDK-8297385
  Stats: 3 lines in 1 file changed: 0 ins; 0 del; 3 mod
  Patch: https://git.openjdk.org/jdk/pull/11311.diff
  Fetch: git fetch https://git.openjdk.org/jdk pull/11311/head:pull/11311

PR: https://git.openjdk.org/jdk/pull/11311


Withdrawn: 8297385: Remove duplicated null typos in javadoc

2022-11-22 Thread Dongxu Wang
On Tue, 15 Nov 2022 15:05:45 GMT, Dongxu Wang  wrote:

> 8297385: Remove duplicated null typos in javadoc

This pull request has been closed without being integrated.

-

PR: https://git.openjdk.org/jdk/pull/11169


Re: RFR: 8297385: Remove duplicated null typos in javadoc [v2]

2022-11-22 Thread Dongxu Wang
On Wed, 23 Nov 2022 06:49:56 GMT, Dongxu Wang  wrote:

>> 8297385: Remove duplicated null typos in javadoc
>
> Dongxu Wang has updated the pull request with a new target base due to a 
> merge or a rebase. The incremental webrev excludes the unrelated changes 
> brought in by the merge/rebase. The pull request contains two additional 
> commits since the last revision:
> 
>  - Merge branch 'openjdk:master' into master
>  - Minor remove duplicate null typo

Use #11311 instead; closing this PR.

-

PR: https://git.openjdk.org/jdk/pull/11169


Re: RFR: 8297385: Remove duplicated null typos in javadoc [v2]

2022-11-22 Thread Dongxu Wang
On Wed, 23 Nov 2022 06:43:56 GMT, Yi Yang  wrote:

> This looks good, but I'm not a Reviewer; you still need an approval from a 
> Reviewer.

Thanks

-

PR: https://git.openjdk.org/jdk/pull/11169


Re: RFR: 8297385: Remove duplicated null typos in javadoc [v2]

2022-11-22 Thread Dongxu Wang
> 8297385: Remove duplicated null typos in javadoc

Dongxu Wang has updated the pull request with a new target base due to a merge 
or a rebase. The incremental webrev excludes the unrelated changes brought in 
by the merge/rebase. The pull request contains two additional commits since the 
last revision:

 - Merge branch 'openjdk:master' into master
 - Minor remove duplicate null typo

-

Changes:
  - all: https://git.openjdk.org/jdk/pull/11169/files
  - new: https://git.openjdk.org/jdk/pull/11169/files/65327c89..f731384b

Webrevs:
 - full: https://webrevs.openjdk.org/?repo=jdk&pr=11169&range=01
 - incr: https://webrevs.openjdk.org/?repo=jdk&pr=11169&range=00-01

  Stats: 44876 lines in 726 files changed: 16653 ins; 16626 del; 11597 mod
  Patch: https://git.openjdk.org/jdk/pull/11169.diff
  Fetch: git fetch https://git.openjdk.org/jdk pull/11169/head:pull/11169

PR: https://git.openjdk.org/jdk/pull/11169


Re: RFR: 8297385: Remove duplicated null typos in javadoc

2022-11-22 Thread Dongxu Wang
On Tue, 22 Nov 2022 03:33:55 GMT, Yi Yang  wrote:

> > > good catch, do you need a JBS issue for this?
> > 
> > 
> > Thank you if you can help with that.
> 
> I filed https://bugs.openjdk.org/browse/JDK-8297385 for this. You can change 
> your PR title and commit message to [8297385: Remove duplicated null typo in 
> javadoc](https://bugs.openjdk.org/browse/JDK-8297385), and the OpenJDK robot 
> will guide you through the remaining steps.

Thank you, can you also help review?

-

PR: https://git.openjdk.org/jdk/pull/11169


Re: RFR: 8297385: Remove duplicated null typos in javadoc

2022-11-21 Thread Dongxu Wang
On Mon, 21 Nov 2022 15:16:14 GMT, Yi Yang  wrote:

> good catch, do you need a JBS issue for this?

Thank you if you can help with that.

-

PR: https://git.openjdk.org/jdk/pull/11169


RFR: 8297385: Remove duplicated null typos in javadoc

2022-11-21 Thread Dongxu Wang
8297385: Remove duplicated null typos in javadoc

-

Commit messages:
 - Minor remove duplicate null typo

Changes: https://git.openjdk.org/jdk/pull/11169/files
 Webrev: https://webrevs.openjdk.org/?repo=jdk&pr=11169&range=00
  Issue: https://bugs.openjdk.org/browse/JDK-8297385
  Stats: 3 lines in 1 file changed: 0 ins; 0 del; 3 mod
  Patch: https://git.openjdk.org/jdk/pull/11169.diff
  Fetch: git fetch https://git.openjdk.org/jdk pull/11169/head:pull/11169

PR: https://git.openjdk.org/jdk/pull/11169


sub

2022-05-13 Thread Dongxu Wang



Re: [VOTE] Retire Apache James HUPA

2021-07-26 Thread Dongxu Wang
+1

On Mon, Jul 26, 2021 at 7:38 PM Dongxu 王东旭  wrote:

> +1
>
> ccing Manolo, thank you.
>
> On Mon, Jul 26, 2021 at 10:16 AM Rene Cordier  wrote:
>
>> +1,
>>
>> Rene.
>>
>> On 23/07/2021 16:00, btell...@apache.org wrote:
>> > Hello all,
>> >
>> > Following a first email on the topic [1] I would like to call for a
>> > formal vote on Apache James Hupa retirement.
>> >
>> > [1]
>> https://www.mail-archive.com/server-dev@james.apache.org/msg70575.html
>> >
>> > Rationale:
>> >   - The latest release (0.3.0) dates from 2012 which is an eternity in
>> > computing.
>> >   - The latest tag on Github is 0.0.3
>> >   - The pom references 0.0.5-SNAPSHOT suggesting that 0.0.4 release is
>> > lost :-(
>> >   - This repository is crippled by multiple CVEs (quick dependabot
>> review):
>> >- CVE-2021-29425 (commons-io)
>> >- GHSA-m6cp-vxjx-65j6 CVE-2017-7656 CVE-2015-2080 CVE-2017-7657
>> > CVE-2019-10241 CVE-2019-10247 (Jetty server)
>> >- CVE-2020-9447 (gwtupload)
>> >- GHSA-g3wg-6mcf-8jj6 (jetty-webapp)
>> >- CVE-2019-17571 (log4j)
>> >- CVE-2016-131 CVE-2016-3092 (commons-fileupload)
>> >   - Sporadic activity since 2012
>> >   - Zero to no exchanges for several years on the mailing lists.
>> >
>> > Given that alternatives exist, and that the project is likely not
>> > mature, unmaintained, and insecure, I propose to retire this
>> > Apache James subproject.
>> >
>> > Voting rules:
>> >   - This is a majority vote as stated in [2] for procedural issues.
>> >   - The vote starts at Friday 23rd of July 2021, 4pm UTC+7
>> >   - The vote ends at Friday 30th of July 2021, 4pm UTC+7
>> > [2] https://www.apache.org/foundation/voting.html
>> >
>> > Following this retirement, follow up steps are to be taken as described
>> > in [3]:
>> > [3] https://www.mail-archive.com/server-dev@james.apache.org/msg70585.html
>> >   - 1. Get a formal vote on server-dev mailing list
>> >   - 2. Place a RETIRED_PROJECT file marker in the git
>> >   - 3. Add a note in the project README
>> >   - 4. Retire the ISSUE trackers (Project names HUPA and POSTAGE)
>> >   - 5. Announce it on gene...@james.apache.org and announce@apache
>> >   - 6. Add a notice to the Apache website, if present
>> >   - 7. Remove releases from downloads.apache.org
>> >   - 8. Add notices on the Apache release archives (example
>> >     https://archive.apache.org/dist/ant/antidote/)
>> >
>> > Best regards,
>> >
>> > Benoit Tellier
>> >
>> >
>> > -
>> > To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
>> > For additional commands, e-mail: server-dev-h...@james.apache.org
>> >
>> >
>>
>> -
>> To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
>> For additional commands, e-mail: server-dev-h...@james.apache.org
>>
>>


Re: [VOTE] Retire Apache James HUPA

2021-07-26 Thread Dongxu 王东旭
+1

ccing Manolo, thank you.

On Mon, Jul 26, 2021 at 10:16 AM Rene Cordier  wrote:

> +1,
>
> Rene.
>
> On 23/07/2021 16:00, btell...@apache.org wrote:
> > Hello all,
> >
> > Following a first email on the topic [1] I would like to call for a
> > formal vote on Apache James Hupa retirement.
> >
> > [1]
> https://www.mail-archive.com/server-dev@james.apache.org/msg70575.html
> >
> > Rationale:
> >   - The latest release (0.3.0) dates from 2012 which is an eternity in
> > computing.
> >   - The latest tag on Github is 0.0.3
> >   - The pom references 0.0.5-SNAPSHOT suggesting that 0.0.4 release is
> > lost :-(
> >   - This repository is crippled by multiple CVEs (quick dependabot
> review):
> >- CVE-2021-29425 (commons-io)
> >- GHSA-m6cp-vxjx-65j6 CVE-2017-7656 CVE-2015-2080 CVE-2017-7657
> > CVE-2019-10241 CVE-2019-10247 (Jetty server)
> >- CVE-2020-9447 (gwtupload)
> >- GHSA-g3wg-6mcf-8jj6 (jetty-webapp)
> >- CVE-2019-17571 (log4j)
> >- CVE-2016-131 CVE-2016-3092 (commons-fileupload)
> >   - Sporadic activity since 2012
> >   - Zero to no exchanges for several years on the mailing lists.
> >
> > Given that alternatives exist, and that the project is likely not
> > mature, unmaintained, and insecure, I propose to retire this
> > Apache James subproject.
> >
> > Voting rules:
> >   - This is a majority vote as stated in [2] for procedural issues.
> >   - The vote starts at Friday 23rd of July 2021, 4pm UTC+7
> >   - The vote ends at Friday 30th of July 2021, 4pm UTC+7
> > [2] https://www.apache.org/foundation/voting.html
> >
> > Following this retirement, follow up steps are to be taken as described
> > in [3]:
> > [3] https://www.mail-archive.com/server-dev@james.apache.org/msg70585.html
> >   - 1. Get a formal vote on server-dev mailing list
> >   - 2. Place a RETIRED_PROJECT file marker in the git
> >   - 3. Add a note in the project README
> >   - 4. Retire the ISSUE trackers (Project names HUPA and POSTAGE)
> >   - 5. Announce it on gene...@james.apache.org and announce@apache
> >   - 6. Add a notice to the Apache website, if present
> >   - 7. Remove releases from downloads.apache.org
> >   - 8. Add notices on the Apache release archives (example
> >     https://archive.apache.org/dist/ant/antidote/)
> >
> > Best regards,
> >
> > Benoit Tellier
> >
> >
> > -
> > To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
> > For additional commands, e-mail: server-dev-h...@james.apache.org
> >
> >
>
> -
> To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
> For additional commands, e-mail: server-dev-h...@james.apache.org
>
>


Re: [Development] matrix math help needed - https://bugreports.qt.io/browse/QTBUG-84441

2020-05-27 Thread Dongxu Li
Hi,

I think the documentation is actually clear on the order, given the scale()
example. I'm not sure whether we should explicitly add the concept of an
intrinsic transform:

https://en.wikipedia.org/wiki/Euler_angles#Definition_by_intrinsic_rotations

The current QTransform documentation consistently speaks of transforming the
coordinate system. Presumably that is easier to understand than the word
"intrinsic".

The order of extrinsic transforms would be exactly the opposite of
intrinsic transforms.

Regards,

Dongxu

On Wed, May 27, 2020 at 10:09 AM Edward Welbourne 
wrote:

>
> > Here is, for example, the documentation of QTransform::scale:
> >
> >   Scales the coordinate system by sx horizontally and sy vertically,
> >   and returns a reference to the matrix.
> >
> > *Nothing* there clearly states, at least to my reading, whether the
> > "new" transform happens *before* or *after* any existing transforms that
> > the QTransform is already doing.
>
> ___
> Development mailing list
> Development@qt-project.org
> https://lists.qt-project.org/listinfo/development
>


-- 
Dongxu Li, Ph.D.
___
Development mailing list
Development@qt-project.org
https://lists.qt-project.org/listinfo/development


[james-hupa] branch trunk updated: Use HTTPS instead of HTTP to resolve dependencies

2020-02-12 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 1c0a2eb  Use HTTPS instead of HTTP to resolve dependencies
 new 6d93850  Merge pull request #1 from 
JLLeitschuh/fix/JLL/use_https_to_resolve_dependencies
1c0a2eb is described below

commit 1c0a2ebeaeb2c39e743940d8465349f8f6148365
Author: Jonathan Leitschuh 
AuthorDate: Mon Feb 10 19:05:34 2020 -0500

Use HTTPS instead of HTTP to resolve dependencies

This fixes a security vulnerability in this project where the `pom.xml`
files were configuring Maven to resolve dependencies over HTTP instead of
HTTPS.

Signed-off-by: Jonathan Leitschuh 
---
 pom.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/pom.xml b/pom.xml
index a65ece7..d932e3b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -335,15 +335,15 @@
 <repositories>
     <repository>
         <id>repo1</id>
-        <url>http://repo1.maven.org/maven2/</url>
+        <url>https://repo1.maven.org/maven2/</url>
     </repository>
     <repository>
         <id>JBoss repository</id>
-        <url>http://repository.jboss.org/nexus/content/groups/public/</url>
+        <url>https://repository.jboss.org/nexus/content/groups/public/</url>
     </repository>
     <repository>
         <id>sonatype</id>
-        <url>http://oss.sonatype.org/content/repositories/snapshots</url>
+        <url>https://oss.sonatype.org/content/repositories/snapshots</url>
         <snapshots><enabled>true</enabled></snapshots>
         <releases><enabled>false</enabled></releases>


-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



Re: [VOTE] Release Apache Roller 6.0.0

2019-12-09 Thread Dongxu 王东旭
+1

On Tue, Dec 10, 2019 at 11:38 AM Gaurav Saini 
wrote:

> +1
>
> On Tue, Dec 10, 2019, 05:19 Dave  wrote:
>
> > Please vote to release RC2 as Apache Roller 6.0.0. Release candidate
> files
> > are here:
> > https://dist.apache.org/repos/dist/dev/roller/roller-6.0/v6.0.0/
> >
> > Please vote +1 to release or -1 with reason(s) not to release.
> >
> > Thanks,
> > Dave
> >
> >
> > PS. This is the proposed release announcement:
> >
> > The Apache Roller project is pleased to announce the release of Roller
> > 6.0.0.
> >
> > You can find a list of the issues resolved in Roller 6 here:
> > https://issues.apache.org/jira/projects/ROL/versions/12344884
> >
> > In summary, Roller 6 is a new version of Roller with these features:
> > * Web interface has been rewritten to use Twitter bootstrap via the
> Struts
> > 2 Bootstrap tags.
> > * Most dependencies have been upgraded to latest version.
> > * Compiled with Java 11 and requires Java 11.
> > * The installation guide has been converted from OpenOffice to AsciiDocs.
> >
> > It should be relatively easy to upgrade from Roller 5.2.4 to Roller 6
> > because there are no changes to the database schema (that means you can
> > easily roll back if you find problems). The user interface is different
> and
> > we hope you'll find it better, easier to use, more intuitive and with a
> > more modern feel.
> >
> > Thanks to the many contributors to Roller for this new release. We hope
> > you'll enjoy and find it useful.
> >
>


Re: [PATCH] net: Adding parameter detection in __ethtool_get_link_ksettings.

2019-08-26 Thread Dongxu Liu
> On 8/26/19 9:23 AM, Dongxu Liu wrote:
> The __ethtool_get_link_ksettings symbol will be exported,
> and external users may use an illegal address.
> We should check the parameters before using them,
> otherwise the system will crash.
> 
> [ 8980.991134] BUG: unable to handle kernel NULL pointer dereference at   
> (null)
> [ 8980.993049] IP: [] 
> __ethtool_get_link_ksettings+0x27/0x140
> [ 8980.994285] PGD 0
> [ 8980.995013] Oops:  [#1] SMP
> [ 8980.995896] Modules linked in: sch_ingress ...
> [ 8981.013220] CPU: 3 PID: 25174 Comm: kworker/3:3 Tainted: G   O   
> V---   3.10.0-327.36.58.4.x86_64 #1
> [ 8981.017667] Workqueue: events linkwatch_event
> [ 8981.018652] task: 8800a8348000 ti: 8800b045c000 task.ti: 
> 8800b045c000
> [ 8981.020418] RIP: 0010:[]  [] 
> __ethtool_get_link_ksettings+0x27/0x140
> [ 8981.022383] RSP: 0018:8800b045fc88  EFLAGS: 00010202
> [ 8981.023453] RAX:  RBX: 8800b045fcac RCX: 
> 
> [ 8981.024726] RDX: 8800b658f600 RSI: 8800b045fcac RDI: 
> 8802296e
> [ 8981.026000] RBP: 8800b045fc98 R08:  R09: 
> 0001
> [ 8981.027273] R10: 73e0 R11: 082b0cc8adea R12: 
> 8802296e
> [ 8981.028561] R13: 8800b566e8c0 R14: 8800b658f600 R15: 
> 8800b566e000
> [ 8981.029841] FS:  () GS:88023ed8() 
> knlGS:
> [ 8981.031715] CS:  0010 DS:  ES:  CR0: 80050033
> [ 8981.032845] CR2:  CR3: b39a9000 CR4: 
> 003407e0
> [ 8981.034137] DR0:  DR1:  DR2: 
> 
> [ 8981.035427] DR3:  DR6: fffe0ff0 DR7: 
> 0400
> [ 8981.036702] Stack:
> [ 8981.037406]  8800b658f600 9c40 8800b045fce8 
> a047a71d
> [ 8981.039238]  004d 8800b045fcc8 8800b045fd28 
> 815cb198
> [ 8981.041070]  8800b045fcd8 810807e6 e8212951 
> 0001
> [ 8981.042910] Call Trace:
> [ 8981.043660]  [] bond_update_speed_duplex+0x3d/0x90 
> [bonding]
> [ 8981.045424]  [] ? inetdev_event+0x38/0x530
> [ 8981.046554]  [] ? put_online_cpus+0x56/0x80
> [ 8981.047688]  [] bond_netdev_event+0x137/0x360 [bonding]
> ...
> 
> Signed-off-by: Dongxu Liu 
> ---
>  net/core/ethtool.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/net/core/ethtool.c b/net/core/ethtool.c index 
> 6288e69..9a50b64 100644
> --- a/net/core/ethtool.c
> +++ b/net/core/ethtool.c
> @@ -545,6 +545,8 @@ int __ethtool_get_link_ksettings(struct net_device 
> *dev,  {
>   ASSERT_RTNL();
>  
> + if (!dev || !dev->ethtool_ops)
> + return -EOPNOTSUPP;

> I do not believe dev can possibly be NULL at this point.

>   if (!dev->ethtool_ops->get_link_ksettings)
>   return -EOPNOTSUPP;
>  
> 

> I tried to find an appropriate Fixes: tag.

> It seems this particular bug was added either by

> Fixes: 9856909c2abb ("net: bonding: use __ethtool_get_ksettings")

> or generically in :

> Fixes: 3f1ac7a700d0 ("net: ethtool: add new ETHTOOL_xLINKSETTINGS API")

In fact, "dev->ethtool_ops" is a null pointer in my environment.
I did not hit the case where "dev" itself is a null pointer.
Maybe "if (!dev->ethtool_ops)" alone is more accurate for this bug.

I found this bug in version 3.10, where the function was named
__ethtool_get_settings. After 3f1ac7a700d0 ("net: ethtool: add new
ETHTOOL_xLINKSETTINGS API"), this function evolved into
__ethtool_get_link_ksettings.
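
To illustrate the guard being discussed, here is a minimal user-space
sketch (not the kernel code; the struct and function names are invented
for the example) of checking an ops table before dereferencing its members:

#include <stdio.h>
#include <errno.h>

struct ops {
    int (*get_settings)(int *out);   /* may be unset */
};

struct device {
    const struct ops *ops;           /* may be NULL, as in the reported crash */
};

static int get_settings(struct device *dev, int *out)
{
    /* guard the ops table itself before touching its members */
    if (!dev->ops || !dev->ops->get_settings)
        return -EOPNOTSUPP;
    return dev->ops->get_settings(out);
}

int main(void)
{
    struct device d = { .ops = NULL };
    int v;
    /* returns a negative errno instead of crashing on the NULL table */
    printf("rc=%d\n", get_settings(&d, &v));
    return 0;
}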



[PATCH] net: Adding parameter detection in __ethtool_get_link_ksettings.

2019-08-26 Thread Dongxu Liu
The __ethtool_get_link_ksettings symbol will be exported,
and external users may use an illegal address.
We should check the parameters before using them,
otherwise the system will crash.

[ 8980.991134] BUG: unable to handle kernel NULL pointer dereference at 
  (null)
[ 8980.993049] IP: [] __ethtool_get_link_ksettings+0x27/0x140
[ 8980.994285] PGD 0
[ 8980.995013] Oops:  [#1] SMP
[ 8980.995896] Modules linked in: sch_ingress ...
[ 8981.013220] CPU: 3 PID: 25174 Comm: kworker/3:3 Tainted: G   O   
V---   3.10.0-327.36.58.4.x86_64 #1
[ 8981.017667] Workqueue: events linkwatch_event
[ 8981.018652] task: 8800a8348000 ti: 8800b045c000 task.ti: 
8800b045c000
[ 8981.020418] RIP: 0010:[]  [] 
__ethtool_get_link_ksettings+0x27/0x140
[ 8981.022383] RSP: 0018:8800b045fc88  EFLAGS: 00010202
[ 8981.023453] RAX:  RBX: 8800b045fcac RCX: 
[ 8981.024726] RDX: 8800b658f600 RSI: 8800b045fcac RDI: 8802296e
[ 8981.026000] RBP: 8800b045fc98 R08:  R09: 0001
[ 8981.027273] R10: 73e0 R11: 082b0cc8adea R12: 8802296e
[ 8981.028561] R13: 8800b566e8c0 R14: 8800b658f600 R15: 8800b566e000
[ 8981.029841] FS:  () GS:88023ed8() 
knlGS:
[ 8981.031715] CS:  0010 DS:  ES:  CR0: 80050033
[ 8981.032845] CR2:  CR3: b39a9000 CR4: 003407e0
[ 8981.034137] DR0:  DR1:  DR2: 
[ 8981.035427] DR3:  DR6: fffe0ff0 DR7: 0400
[ 8981.036702] Stack:
[ 8981.037406]  8800b658f600 9c40 8800b045fce8 
a047a71d
[ 8981.039238]  004d 8800b045fcc8 8800b045fd28 
815cb198
[ 8981.041070]  8800b045fcd8 810807e6 e8212951 
0001
[ 8981.042910] Call Trace:
[ 8981.043660]  [] bond_update_speed_duplex+0x3d/0x90 
[bonding]
[ 8981.045424]  [] ? inetdev_event+0x38/0x530
[ 8981.046554]  [] ? put_online_cpus+0x56/0x80
[ 8981.047688]  [] bond_netdev_event+0x137/0x360 [bonding]
...

Signed-off-by: Dongxu Liu 
---
 net/core/ethtool.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/core/ethtool.c b/net/core/ethtool.c
index 6288e69..9a50b64 100644
--- a/net/core/ethtool.c
+++ b/net/core/ethtool.c
@@ -545,6 +545,8 @@ int __ethtool_get_link_ksettings(struct net_device *dev,
 {
ASSERT_RTNL();
 
+   if (!dev || !dev->ethtool_ops)
+   return -EOPNOTSUPP;
if (!dev->ethtool_ops->get_link_ksettings)
return -EOPNOTSUPP;
 
-- 
2.12.3




[PATCH] net: Add the same IP detection for duplicate address.

2019-08-20 Thread Dongxu Liu
The network sends an ARP REQUEST packet to determine
whether another host holds the same IP.
Windows and some other hosts may send their own source IP
address instead of 0.
When IN_DEV_ORCONF(in_dev, DROP_GRATUITOUS_ARP) is enabled,
such a REQUEST will be dropped.
When IN_DEV_ORCONF(in_dev, DROP_GRATUITOUS_ARP) is disabled,
this case should be added to the IP conflict handling process.

Signed-off-by: Dongxu Liu 
---
 net/ipv4/arp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index 05eb42f..a51c921 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -801,7 +801,7 @@ static int arp_process(struct net *net, struct sock *sk, 
struct sk_buff *skb)
GFP_ATOMIC);
 
/* Special case: IPv4 duplicate address detection packet (RFC2131) */
-   if (sip == 0) {
+   if (sip == 0 || sip == tip) {
if (arp->ar_op == htons(ARPOP_REQUEST) &&
inet_addr_type_dev_table(net, dev, tip) == RTN_LOCAL &&
!arp_ignore(in_dev, sip, tip))
-- 
2.12.3
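
For illustration, the condition the patch above installs can be exercised
in a self-contained user-space sketch (plain C; the helper name here is
invented, this is not the kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* An ARP request counts as duplicate address detection (DAD) when the
 * sender IP is zero (the RFC 2131 style probe Linux sends) or when the
 * sender IP equals the target IP (the style some Windows hosts send). */
static bool is_dad_probe(uint32_t sip, uint32_t tip)
{
    return sip == 0 || sip == tip;
}

int main(void)
{
    printf("%d\n", is_dad_probe(0, 0x0a000001));          /* 1: zero sender */
    printf("%d\n", is_dad_probe(0x0a000001, 0x0a000001)); /* 1: sip == tip */
    printf("%d\n", is_dad_probe(0x0a000002, 0x0a000001)); /* 0: normal ARP */
    return 0;
}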




[PATCH] net: Fix detection for IPv4 duplicate address.

2019-08-20 Thread Dongxu Liu
The network sends an ARP REQUEST packet to determine
whether another host holds the same IP.
The source IP address of the packet is normally 0.
However, Windows may also send its own source IP address
for this detection, in which case the source IP address
is equal to the destination IP address.

Signed-off-by: Dongxu Liu 
---
 net/ipv4/arp.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index 05eb42f..944f8e8 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -800,8 +800,11 @@ static int arp_process(struct net *net, struct sock *sk, 
struct sk_buff *skb)
iptunnel_metadata_reply(skb_metadata_dst(skb),
GFP_ATOMIC);
 
-   /* Special case: IPv4 duplicate address detection packet (RFC2131) */
-   if (sip == 0) {
+   /* Special case: IPv4 duplicate address detection packet (RFC2131).
+    * Linux usually sends zero to detect duplication, and windows may
+    * send a same ip (not zero, sip equal to tip) to do this detection.
+    */
+   if (sip == 0 || sip == tip) {
if (arp->ar_op == htons(ARPOP_REQUEST) &&
inet_addr_type_dev_table(net, dev, tip) == RTN_LOCAL &&
!arp_ignore(in_dev, sip, tip))
-- 
2.12.3




[no subject]

2019-03-06 Thread Dongxu Wang



[seL4] create a new thread in sel4

2018-09-18 Thread Dongxu Ji
Hello,
My platform is imx6q sabrelite and I have run sel4-test successfully. Now
I'm trying to create a new thread in the initial thread according to sel4
tutorial2. But it didn't run.

The code is shown below:

struct driver_env env;
allocman_t *allocman;

/* static pool for the allocator and stack for the new thread
 * (sizes here are illustrative) */
static char allocator_mem_pool[ALLOCATOR_STATIC_POOL_SIZE];
static char thread_2_stack[4096];

/* function to run in the new thread */
void thread_2(void) {
    printf("thread_2: hallo wereld\n");
    while (1);
}

int main(void)
{
    int error;
    reservation_t virtual_reservation;
    seL4_BootInfo *info = platsupport_get_bootinfo();

    seL4_DebugNameThread(seL4_CapInitThreadTCB, "sel4test-driver");

    /* initialise libsel4simple */
    simple_default_init_bootinfo(&env.simple, info);

    /* create an allocator */
    allocman = bootstrap_use_current_simple(&env.simple,
        ALLOCATOR_STATIC_POOL_SIZE, allocator_mem_pool);

    /* create a vka */
    allocman_make_vka(&env.vka, allocman);

    /* create a vspace */
    error = sel4utils_bootstrap_vspace_with_bootinfo_leaky(&env.vspace,
        &env.data, simple_get_pd(&env.simple), &env.vka,
        platsupport_get_bootinfo());

    /* fill the allocator with virtual memory */
    void *vaddr;
    virtual_reservation = vspace_reserve_range(&env.vspace,
        ALLOCATOR_VIRTUAL_POOL_SIZE, seL4_AllRights, 1, &vaddr);
    if (virtual_reservation.res == 0) {
        ZF_LOGF("Failed to provide virtual memory for allocator");
    }

    bootstrap_configure_virtual_pool(allocman, vaddr,
        ALLOCATOR_VIRTUAL_POOL_SIZE, simple_get_pd(&env.simple));

    /* Allocate slots for, and obtain the caps for, the hardware we will be
     * using, in the same function.
     */
    sel4platsupport_init_default_serial_caps(&env.vka, &env.vspace,
        &env.simple, &env.serial_objects);

    vka_t serial_vka = env.vka;
    serial_vka.utspace_alloc_at = arch_get_serial_utspace_alloc_at(&env);

    /* Construct a simple wrapper for returning the I/O ports. */
    simple_t serial_simple = env.simple;

    /* enable serial driver */
    platsupport_serial_setup_simple(&env.vspace, &serial_simple, &serial_vka);

    simple_print(&env.simple);

    /* get our cspace root cnode */
    seL4_CPtr cspace_cap;
    cspace_cap = simple_get_cnode(&env.simple);

    /* get our vspace root page directory */
    seL4_CPtr pd_cap;
    pd_cap = simple_get_pd(&env.simple);

    /* create a new TCB */
    vka_object_t tcb_object = {0};
    error = vka_alloc_tcb(&env.vka, &tcb_object);

    /* initialise the new TCB */
    error = seL4_TCB_Configure(tcb_object.cptr, seL4_CapNull, cspace_cap,
        seL4_NilData, pd_cap, seL4_NilData, 0, 0);

    /* give the new thread a name */
    name_thread(tcb_object.cptr, "hello-2: thread_2");

    UNUSED seL4_UserContext regs = {0};

    /* set instruction pointer where the thread should start running */
    sel4utils_set_instruction_pointer(&regs, (seL4_Word)thread_2);
    error = sel4utils_get_instruction_pointer(regs);

    /* check that the stack is aligned correctly */
    const int stack_alignment_requirement = sizeof(seL4_Word) * 2;
    uintptr_t thread_2_stack_top = (uintptr_t)thread_2_stack +
        sizeof(thread_2_stack);

    /* set stack pointer for the new thread */
    sel4utils_set_stack_pointer(&regs, thread_2_stack_top);
    error = sel4utils_get_sp(regs);

    /* actually write the TCB registers */
    error = seL4_TCB_WriteRegisters(tcb_object.cptr, 0, 0, 2, &regs);

    /* start the new thread running */
    error = seL4_TCB_Resume(tcb_object.cptr);

    /* we are done, say hello */
    printf("main: hello world\n");

    return 0;
}

And I can get some output:
cspace_cap is 2

pd_cap is 3

tcb_object.cptr is 5f7

tcb_object.ut is 1208

tcb_object.type is 1

tcb_object.size_bits is a
regs.pc is 9640
regs.sp is 3f24f0

Any advice about the issue?

Thank you,
Dongxu Ji
___
Devel mailing list
Devel@sel4.systems
https://sel4.systems/lists/listinfo/devel


Re: [seL4] SMC in seL4

2018-08-28 Thread Dongxu Ji
Hi Yanyan,
According to your advice, I modified the avail_p_regs[] in the hardware.h
file as on tk1. 1 MiB starting from 0x37f00000 is reserved for monitor mode.

{ /* .start = */ 0x10000000, /* .end = */ 0x37f00000 }

And then
#define MON_PA_START (0x10000000 + 0x27f00000)
#define MON_PA_SIZE  (1 << 20)
#define MON_PA_END   (MON_PA_START + MON_PA_SIZE)
#define MON_PA_STACK (MON_PA_END - 0x10)
#define MON_VECTOR_START (MON_PA_START)
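
As a quick sanity check of this layout (my own arithmetic, using the
addresses above), a tiny C program can print the derived values:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* values mirror the defines above */
    uint32_t mon_pa_start = 0x10000000u + 0x27f00000u;  /* 0x37f00000 */
    uint32_t mon_pa_size  = 1u << 20;                   /* 1 MiB */
    uint32_t mon_pa_end   = mon_pa_start + mon_pa_size; /* 0x38000000 */
    uint32_t mon_pa_stack = mon_pa_end - 0x10;          /* 0x37fffff0 */

    printf("start=%#x end=%#x stack=%#x\n",
           mon_pa_start, mon_pa_end, mon_pa_stack);
    return 0;
}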

set sp:
asm volatile ("cps %0\n\t"
  "isb\n"
  "mov sp, %1\n\t"
 ::"I"(MONITOR_MODE),"r"(MON_PA_STACK));

copy monitor mode vector and write into MVBAR:
uint32_t size = arm_monitor_vector_end - arm_monitor_vector;
printf("Copy monitor mode vector from %x to %x size %x \n",
(arm_monitor_vector), MON_VECTOR_START, size);
memcpy((void *)MON_VECTOR_START, (void *)(arm_monitor_vector), size);
asm volatile ("dmb\n isb\n");
asm volatile ("mcr p15, 0, %0, c12, c0, 1"::"r"(MON_VECTOR_START));

I used the arm_monitor_vector provided by the source code in monitor.S and
just didn't perform the operation 'blx r0'. I think it can jump to
smc_handler and return successfully.

#define VECTOR_BASE 0x37f00000
#define STACK_TOP   (VECTOR_BASE + (1 << 20) - 0x10)

arm_monitor_vector:
ldr pc, [pc, #28]
ldr pc, [pc, #24]
ldr pc, [pc, #16]
ldr pc, [pc, #16]
ldr pc, [pc, #12]
ldr pc, [pc, #8]
ldr pc, [pc, #4]
ldr pc, [pc, #0]

smc_handler_addr:
.word   VECTOR_BASE + (smc_handler - arm_monitor_vector)//0x102c
smc_halt_addr:
.word   VECTOR_BASE + (smc_halt - arm_monitor_vector)
smc_stack:
.word   STACK_TOP

smc_handler:
/* always have a valid stack */
ldr sp, [pc, #-12]
push {lr}
//blx r0
isb
mrc p15, 0, r5, c1, c1, 0
/* set the NS bit */
orr r5, r5, #1
mcr p15, 0, r5, c1, c1, 0
pop {lr}
isb
movs pc, lr

However, the problem still exists. And I have another question: why is the
sp in monitor mode set to the sp used in SVC mode?
asm volatile ("mov r8, sp\n\t"
  "cps %0\n\t"
      "isb\n"
      "mov sp, r8\n\t"
 ::"I"(MONITOR_MODE));


Thank you,
Dongxu Ji

 wrote on Tue, 28 Aug 2018 at 10:10 PM:

> Hi Dongxu,
>
>
> As you can see, the arm_monitor_vector uses PC-relative addressing so that
> the code can be moved around in memory.  I think ldr pc, =smc_handler
> breaks this. Also, please set the NS bit in SCR to 1 before returning.
>
>
> To reserve a memory region for the monitor-mode code and data, I suggest
> you modify the avail_p_regs[] in 
> kernel/include/plat/imx6/plat/machine/hardware.h
> file. See the kernel/include/plat/tk1/plat/machine/hardware.h as an example.
>
>
> Regards,
>
> Yanyan
>
>
> --
> *From:* Dongxu Ji 
> *Sent:* Wednesday, August 29, 2018 12:02 AM
> *To:* devel@sel4.systems; Shen, Yanyan (Data61, Kensington NSW)
> *Subject:* Fwd: [seL4] SMC in seL4
>
> Hi  Yanyan,
> 1. It doesn't set the NS bit to 1 in SCR (I just want it to return without
> doing anything). The arm_monitor_vector and the smc_handler():
>
> arm_monitor_vector:
> ldr pc, [pc, #28]
> ldr pc, [pc, #24]
> ldr pc, =smc_handler
> ldr pc, [pc, #16]
> ldr pc, [pc, #12]
> ldr pc, [pc, #8]
> ldr pc, [pc, #4]
> ldr pc, [pc, #0]
>
> smc_handler:
> movs pc, lr
>
> 2. I didn't do any extra work other than the boot log:
>
> ..
> ELF-loader started on CPU: ARM Ltd. Cortex-A9 r2p10
>
>   paddr=[20000000..203fbfff]
>
> ELF-loading image 'kernel'
>
>   paddr=[10000000..10026fff]
>
>   vaddr=[e0000000..e0026fff]
>
>   virt_entry=e0000000
>
> ELF-loading image 'sel4test-driver'
>
>   paddr=[10027000..10500fff]
>
>   vaddr=[8000..4e1fff]
>
>   virt_entry=25a6c
>
> Enabling MMU and paging
>
> Jumping to kernel-image entry point...
>
> 3. The initialization operations in platform_init.c:
> set sp:
> #define MONITOR_MODE (0x16)
> #define MON_VECTOR_START (0x11000000)
> #define VECTOR_BASE 0x11000000
> #define STACK_TOP   (VECTOR_BASE + (1 << 12) - 0x10)
>
> asm volatile ( "mrs r1, cpsr\n\t"
>   "cps %0\n\t"
>   "isb\n"
>   "mov sp, %1\n\t"
>   "msr cpsr, r1\n\t"
>  ::"I"(MONITOR_MODE),"r"(STACK_TOP));
>
> copy monitor mode vector to MON_VECTOR_START  and write into MVBAR:
> uint32_t size = arm_monitor_vector_end - arm_monitor_vector;

[seL4] Fwd: SMC in seL4

2018-08-28 Thread Dongxu Ji
Hi  Yanyan,
1. It doesn't set the NS bit to 1 in SCR (I just want it to return without
doing anything). The arm_monitor_vector and the smc_handler():

arm_monitor_vector:
ldr pc, [pc, #28]
ldr pc, [pc, #24]
ldr pc, =smc_handler
ldr pc, [pc, #16]
ldr pc, [pc, #12]
ldr pc, [pc, #8]
ldr pc, [pc, #4]
ldr pc, [pc, #0]

smc_handler:
movs pc, lr

2. I didn't do any extra work other than the boot log:

..
ELF-loader started on CPU: ARM Ltd. Cortex-A9 r2p10

  paddr=[20000000..203fbfff]

ELF-loading image 'kernel'

  paddr=[10000000..10026fff]

  vaddr=[e0000000..e0026fff]

  virt_entry=e0000000

ELF-loading image 'sel4test-driver'

  paddr=[10027000..10500fff]

  vaddr=[8000..4e1fff]

  virt_entry=25a6c

Enabling MMU and paging

Jumping to kernel-image entry point...

3. The initialization operations in platform_init.c:
set sp:
#define MONITOR_MODE (0x16)
#define MON_VECTOR_START (0x11000000)
#define VECTOR_BASE 0x11000000
#define STACK_TOP   (VECTOR_BASE + (1 << 12) - 0x10)

asm volatile ( "mrs r1, cpsr\n\t"
  "cps %0\n\t"
  "isb\n"
  "mov sp, %1\n\t"
  "msr cpsr, r1\n\t"
 ::"I"(MONITOR_MODE),"r"(STACK_TOP));

copy monitor mode vector to MON_VECTOR_START  and write into MVBAR:
uint32_t size = arm_monitor_vector_end - arm_monitor_vector;
printf("Copy monitor mode vector from %x to %x size %x\n",
(arm_monitor_vector), MON_VECTOR_START, size);
memcpy((void *)MON_VECTOR_START, (void *)(arm_monitor_vector), size);
asm volatile ("dmb\n isb\n");
asm volatile ("mcr p15, 0, %0, c12, c0, 1"::"r"(MON_VECTOR_START));

I enter SVC mode via a software interrupt and call the function smc():
  asm (".arch_extension sec\n");
  asm volatile ("stmfd sp!, {r3-r11, lr}\n\t"
    "dsb\n"
    "smc #0\n"
    "ldmfd sp!, {r3-r11, pc}");

and then the problem arises.

Thank you,
Dongxu Ji


 wrote on Tue, 28 Aug 2018 at 8:30 PM:

> Hi,
>
> The smc_handler() in monitor.S does nothing but "movs pc, lr".
>
> Does it set the NS bit to 1 in SCR?
>
> Also, what did you do to ensure that 0x11000000 is not used by the kernel?
>
> Could you share the code (if possible) so that I could better understand
> the problem.
>
> Regards,
> Yanyan
>
>
> --
> *From:* Devel  on behalf of 冀东旭 <
> jidongxu1...@gmail.com>
> *Sent:* Tuesday, August 28, 2018 1:02 PM
> *To:* devel@sel4.systems
> *Subject:* [seL4] SMC in seL4
>
> Hello,
>
> I'm porting sel4 to imx6q sabrelite as the trusted OS in trustzone.  I 
> initialize the monitor mode by setting the sp to  STACK_TOP and copying 
> arm_monitor_vector to MON_VECTOR_START according to the functions 
> "install_monitor_hook()" and "switch_to_mon_mode()" in "platform_init.c".
>
> #define VECTOR_BASE 0x11000000  (addr is not used by the seL4 kernel)
>
> #define STACK_TOP   (VECTOR_BASE + (1 << 12) - 0x10)
>
> #define MON_VECTOR_START 0x11000000  (the VECTOR_BASE is the same as
> MON_VECTOR_START)
>
> The smc_handler() in monitor.S does nothing but "movs pc, lr". After
> calling smc in SVC mode, it hangs without any log. If I comment out the
> "smc #0", it can return to the caller successfully in usr mode.
>
> stmfd sp!, {r3-r11, lr}
> dsb
> smc #0
> ldmfd sp!, {r3-r11, pc}
>
> Is the sp in monitor mode appropriate? Or do I need to do something else in
> the initialization operations? What's wrong with it? Do you have any ideas?
>
> Thank you!
>
> Dongxu Ji
>
>
___
Devel mailing list
Devel@sel4.systems
https://sel4.systems/lists/listinfo/devel


[james-hupa] branch trunk updated: Change the todo to another line so that we can do there.

2018-03-05 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 6dced18  Change the todo to another line so that we can do there.
6dced18 is described below

commit 6dced18ae5da1e5ddee4f522b715cc2857741435
Author: Echo Wang <don...@apache.org>
AuthorDate: Mon Mar 5 17:58:34 2018 +0800

Change the todo to another line so that we can do there.
---
 .../main/java/org/apache/hupa/client/ui/FolderListView.java| 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java 
b/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
index 8d5b965..b3dbefa 100644
--- a/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
+++ b/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
@@ -29,10 +29,11 @@ import org.apache.hupa.client.activity.ToolBarActivity;
 import org.apache.hupa.client.place.FolderPlace;
 import org.apache.hupa.client.storage.HupaStorage;
 import org.apache.hupa.shared.domain.ImapFolder;
-
 import com.google.gwt.cell.client.AbstractCell;
 import com.google.gwt.core.client.Duration;
 import com.google.gwt.core.client.GWT;
+import com.google.gwt.core.client.Scheduler;
+import com.google.gwt.core.client.Scheduler.ScheduledCommand;
 import com.google.gwt.place.shared.PlaceController;
 import com.google.gwt.query.client.Function;
 import com.google.gwt.safehtml.shared.SafeHtmlBuilder;
@@ -102,13 +103,18 @@ public class FolderListView extends Composite implements 
FolderListActivity.Disp
 msgListDisplay.refresh();
 }
 });
+//TODO not only refresh data, but highlight the folder list item. <= 
https://issues.apache.org/jira/browse/HUPA-117
+Scheduler.get().scheduleDeferred(new ScheduledCommand() {
+public void execute() {
+SelectionChangeEvent.fire(selectionModel);
+}
+});
 pagerPanel.setDisplay(cellList);
 thisView.setWidget(pagerPanel);
 }
 
 @Override
 public void refresh() {
-   //TODO not only refresh data, but highlight the folder list item. <= 
https://issues.apache.org/jira/browse/HUPA-117
 data.refresh();
 }
 

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 05/05: Refactoring README to improve the content.

2018-02-28 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit 379669b416267c43bfecc2e8f654a69b29bc2ba9
Author: Echo Wang <don...@apache.org>
AuthorDate: Thu Mar 1 11:23:28 2018 +0800

Refactoring README to improve the content.
---
 README.md | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index dd71fc2..045190a 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@ Hupa is a Rich IMAP-based Webmail application written in GWT.
 Hupa has been entirely written in java to be coherent with the language used 
in the James project.
 It has been a development reference using GWT good practices (MVP pattern and 
Unit testing)
 
-It is ready for reading, sending,  and managing messages and folders, but it 
still lacks of many features email clients nowadays have.
+It is ready for reading, sending, and managing messages and folders, but it 
still lacks of many features email clients nowadays have.
 
 # Bulding #
 Hupa use maven as building tool. To build hupa download maven 
(http://maven.apache.org), unpack maven and install it.
@@ -28,7 +28,7 @@ $ java -jar target/hupa-${version}.war
 Then point your browser to the url:
 http://localhost:8282
 
-If you prefer to use any other servlet container you can deploy the provided 
.war file in it.
+If you prefer to use any other servlet container you can deploy the provided 
.war file into it.
 
 # Hupa and IMAP/SMTP servers  #
 Hupa is able to discover most of the imap/smtp configuration based on the 
email domain part.
@@ -40,8 +40,8 @@ email provider servers.
 Hupa is compatible with most email providers, gmail, yahoo, hotmail, outlook, 
exchange, james, etc.
 
 # Eclipse GWT Plugin notes #
-- Hupa uses maven to be built, before inporting the project, you should have 
installed m2eclipse
-and google plugins, then go to Import -> New maven project and select the 
modules:
+- Hupa uses maven to be built, before importing the project, you should have 
installed m2eclipse
+and GWT Eclipse Plugin (3.0.0), then go to Import -> Existing Maven Projects 
and select the modules:
 shared, mock, server, widgets, client and hupa.
 
-- To run hupa in hosted mode, select: Run as -> (Google) Web application.
+- To run hupa in hosted mode, select hupa: Run As/Debug As -> GWT Development 
Mode with Jetty.

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 03/05: Use header5 for the title of doc.

2018-02-28 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit 42cf07669bac239186a5286ffd671a2a04c46932
Author: Echo Wang <don...@apache.org>
AuthorDate: Wed Feb 28 08:24:00 2018 +0800

Use header5 for the title of doc.
---
 README.md | 11 +--
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index ef15443..912d963 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,4 @@
-
-## Introduction ##
+# Introduction #
 Hupa is a Rich IMAP-based Webmail application written in GWT.
 
 Hupa has been entirely written in java to be coherent with the language used 
in the James project.
@@ -7,12 +6,12 @@ It has been a development reference using GWT good practices 
(MVP pattern and Un
 
 It is ready for reading, sending,  and managing messages and folders, but it 
still lacks of many features email clients nowadays have.
 
-## Bulding ##
+# Bulding #
 Hupa use maven as building tool. To build hupa download maven 
(http://maven.apache.org), unpack maven and install it.
 After that change to hupa directory and execute the following cmd:
 $ mvn clean package
 
-## Configuring server side  
+# Configuring server side  #
 Hupa uses a properties file to know the IMAP and SMTP servers configuration.
 There is an example configuration file in 
'hupa/src/main/webapp/WEB-INF/conf/config.properties'
 
@@ -31,7 +30,7 @@ http://localhost:8282
 
 If you prefer to use any other servlet container you can deploy the provided 
.war file in it.
 
-## Hupa and IMAP/SMTP servers  #
+# Hupa and IMAP/SMTP servers  #
 Hupa is able to discover most of the imap/smtp configuration based on the 
email domain part.
 When you are prompted to login, type your email address and wait few seconds, 
if you click on the
 gear button you can see the configuration discovered by Hupa, you can modify 
it if it does not match
@@ -40,7 +39,7 @@ email provider servers.
 
 Hupa is compatible with most email providers, gmail, yahoo, hotmail, outlook, 
exchange, james, etc.
 
-## Eclipse GWT Plugin notes 
+# Eclipse GWT Plugin notes #
 - Hupa uses maven to be built, before inporting the project, you shoul install 
m2eclipse
 and google plugins, then go to Import -> New maven project and select the 
modules:
 shared, mock, server, widgets, client and hupa.

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 02/05: Remove maven version since we are using maven 3.

2018-02-28 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit fdf35f14b24a4a9ef57b4637564624392a79fa08
Author: Echo Wang <don...@apache.org>
AuthorDate: Tue Feb 27 09:25:15 2018 +0800

Remove maven version since we are using maven 3.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9eefe36..ef15443 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ It has been a development reference using GWT good practices 
(MVP pattern and Un
 It is ready for reading, sending,  and managing messages and folders, but it 
still lacks of many features email clients nowadays have.
 
 ## Bulding ##
-Hupa use maven2 as building tool. To build hupa download maven2 
(http://maven.apache.org), unpack maven2 and install it.
+Hupa use maven as building tool. To build hupa download maven 
(http://maven.apache.org), unpack maven and install it.
 After that change to hupa directory and execute the following cmd:
 $ mvn clean package
 

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 01/05: Remove commented out content.

2018-02-28 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit 8050e548d63ea3de3304ae46293111f9fc81e8f8
Author: Echo Wang <don...@apache.org>
AuthorDate: Mon Feb 26 09:10:04 2018 +0800

Remove commented out content.
---
 .../org/apache/hupa/client/place/FolderPlace.java| 20 
 1 file changed, 20 deletions(-)

diff --git a/client/src/main/java/org/apache/hupa/client/place/FolderPlace.java 
b/client/src/main/java/org/apache/hupa/client/place/FolderPlace.java
index 540685e..cd0d9ef 100644
--- a/client/src/main/java/org/apache/hupa/client/place/FolderPlace.java
+++ b/client/src/main/java/org/apache/hupa/client/place/FolderPlace.java
@@ -47,24 +47,4 @@ public class FolderPlace extends HupaPlace {
 return place.getToken();
 }
 }
-//
-//@Override
-//public boolean equals(Object o) {
-//if (o == null)
-//return false;
-//if (o == this)
-//return true;
-//if (o.getClass() != getClass())
-//return false;
-//FolderPlace place = (FolderPlace) o;
-//return (token == place.token || (token != null && 
token.equals(place.token)));
-//}
-//
-//@Override
-//public int hashCode() {
-//final int prime = 31;
-//int result = 1;
-//result = prime * result + ((token == null) ? 0 : token.hashCode());
-//return result;
-//}
 }

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] branch trunk updated (697eda5 -> 379669b)

2018-02-28 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git.


from 697eda5  Add TODO tag for issue#HUPA-117.
 new 8050e54  Remove commented out content.
 new fdf35f1  Remove maven version since we are using maven 3.
 new 42cf076  Use header5 for the title of doc.
 new f4b8f1d  Fix typo for README.
 new 379669b  Refactoring README to improve the content.

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 README.md  | 23 +++---
 .../org/apache/hupa/client/place/FolderPlace.java  | 20 ---
 2 files changed, 11 insertions(+), 32 deletions(-)

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 04/05: Fix typo for README.

2018-02-28 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit f4b8f1d17e5b4f599dc03d01f0c9f45e9057782a
Author: Echo Wang <don...@apache.org>
AuthorDate: Thu Mar 1 11:15:32 2018 +0800

Fix typo for README.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 912d963..dd71fc2 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ email provider servers.
 Hupa is compatible with most email providers, gmail, yahoo, hotmail, outlook, 
exchange, james, etc.
 
 # Eclipse GWT Plugin notes #
-- Hupa uses maven to be built, before inporting the project, you shoul install 
m2eclipse
+- Hupa uses maven to be built, before inporting the project, you should have 
installed m2eclipse
 and google plugins, then go to Import -> New maven project and select the 
modules:
 shared, mock, server, widgets, client and hupa.
 

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



SUB dubbo

2018-02-26 Thread dongxu



james-site git commit: Update Hupa documents banner.

2018-02-26 Thread dongxu
Repository: james-site
Updated Branches:
  refs/heads/asf-site 8f1511840 -> 652615d9c


Update Hupa documents banner.


Project: http://git-wip-us.apache.org/repos/asf/james-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/james-site/commit/652615d9
Tree: http://git-wip-us.apache.org/repos/asf/james-site/tree/652615d9
Diff: http://git-wip-us.apache.org/repos/asf/james-site/diff/652615d9

Branch: refs/heads/asf-site
Commit: 652615d9c7c5b9ec7a8e725b89a04a5091756752
Parents: 8f15118
Author: Echo Wang 
Authored: Mon Feb 26 19:56:13 2018 +0800
Committer: Echo Wang 
Committed: Mon Feb 26 19:56:13 2018 +0800

--
 content/hupa/images/logos/asf_logo_small.png | Bin 0 -> 12945 bytes
 content/hupa/index.html  |  67 +-
 2 files changed, 27 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/james-site/blob/652615d9/content/hupa/images/logos/asf_logo_small.png
--
diff --git a/content/hupa/images/logos/asf_logo_small.png 
b/content/hupa/images/logos/asf_logo_small.png
new file mode 100644
index 000..e8093ea
Binary files /dev/null and b/content/hupa/images/logos/asf_logo_small.png differ

http://git-wip-us.apache.org/repos/asf/james-site/blob/652615d9/content/hupa/index.html
--
diff --git a/content/hupa/index.html b/content/hupa/index.html
index 06e04fa..ce0e218 100644
--- a/content/hupa/index.html
+++ b/content/hupa/index.html
@@ -60,54 +60,41 @@
 
   
 
-  
-  
-
-
-
-  http://www.apache.org/index.html; 
id="bannerRight">
-  
-
-
-
-
-
-  
+
+
+
+
+
+http://www.apache.org/index.html; id="bannerRight">
+
+
+
+
+
+
+
 
 
-
-
-
-Last Published: 2012-06-07
-  
-Home
-|
-Server
-|
-Hupa
-|
-Protocols
-|
-IMAP
-|
-Mailets
-|
-Mailbox
+
+
+Last Published: 2018-02-26
+
+Home
 |
-Mime4J
+James
 |
-jSieve
+Mime4J
 |
-jSPF
+jSieve
 |
-jDKIM
+jSPF
 |
-MPT
+jDKIM
 |
-Postage
-  
-
-  
+Hupa
+
+
+
   
 
   


-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



james-site git commit: Update XSS issues fixed information for Hupa release 0.0.3.

2018-02-26 Thread dongxu
Repository: james-site
Updated Branches:
  refs/heads/asf-site e6edcf292 -> 8f1511840


Update XSS issues fixed information for Hupa release 0.0.3.


Project: http://git-wip-us.apache.org/repos/asf/james-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/james-site/commit/8f151184
Tree: http://git-wip-us.apache.org/repos/asf/james-site/tree/8f151184
Diff: http://git-wip-us.apache.org/repos/asf/james-site/diff/8f151184

Branch: refs/heads/asf-site
Commit: 8f151184018c42d018352d245bcdcf6efa975207
Parents: e6edcf2
Author: Echo Wang 
Authored: Mon Feb 26 19:21:32 2018 +0800
Committer: Echo Wang 
Committed: Mon Feb 26 19:21:32 2018 +0800

--
 content/hupa/index.html | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/james-site/blob/8f151184/content/hupa/index.html
--
diff --git a/content/hupa/index.html b/content/hupa/index.html
index dd49404..06e04fa 100644
--- a/content/hupa/index.html
+++ b/content/hupa/index.html
@@ -278,7 +278,11 @@
 
 News
 2012
-Jun/2012 - Hupa 0.0.2 released
+Aug/2012 - Hupa 0.0.3 released
+
+Fixes http://svn.apache.org/viewvc?view=revision&revision=1373762; 
target="_blank">various XSS issues CVE-2012-3536
+
+Jun/2012 - Hupa 0.0.2 
released
 
First stable version.
 


-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 01/04: Add license copyright for the roundcube theme.

2018-02-25 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit 1519cf937cf3ef68f8927bad43a2b7ae37d87a0a
Author: Echo Wang <don...@apache.org>
AuthorDate: Sat Feb 24 08:31:44 2018 +0800

Add license copyright for the roundcube theme.
---
 .../src/main/java/org/apache/hupa/client/ui/README | 29 --
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/client/src/main/java/org/apache/hupa/client/ui/README 
b/client/src/main/java/org/apache/hupa/client/ui/README
index 0a344ff..5b7397a 100644
--- a/client/src/main/java/org/apache/hupa/client/ui/README
+++ b/client/src/main/java/org/apache/hupa/client/ui/README
@@ -1,2 +1,27 @@
-The majority of the theme resources in this ui package are copied from 
http://roundcube.net
-Therefore, the theme comply with https://roundcube.net/license/
\ No newline at end of file
+The majority of the theme resources in the ui package are referred from 
http://roundcube.net
+Therefore, the theme complies with https://roundcube.net/license/
+
+The original license README:
+
+ROUNDCUBE WEBMAIL DEFAULT SKIN
+==
+
+This skin package contains the current development theme of the Roundcube
+Webmail software. It can be used, modified and redistributed according to
+the terms described in the LICENSE section.
+
+For information about building or modifiying Roundcube skins please visit
+https://github.com/roundcube/roundcubemail/wiki/Skins
+
+The theme uses icons originally designed by Stephen Horlander and Kevin Gerich
+for Mozilla.org. In case of redistribution giving credit to these artwork
+creators is mandatory.
+
+
+LICENSE
+---
+The contents of this folder are subject to the Creative Commons
+Attribution-ShareAlike License. It is allowed to copy, distribute,
+transmit and to adapt the work by keeping credits to the original
+autors in the README file.
+See http://creativecommons.org/licenses/by-sa/3.0/ for details.

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 03/04: Reformat code.

2018-02-25 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit 73a8e70193b1484bb4f179bd4325d90effb99870
Author: Echo Wang <don...@apache.org>
AuthorDate: Sun Feb 25 17:59:19 2018 +0800

Reformat code.
---
 .../src/main/java/org/apache/hupa/client/ui/FolderListView.java   | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java 
b/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
index f659bef..a36c071 100644
--- a/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
+++ b/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
@@ -70,11 +70,11 @@ public class FolderListView extends Composite implements 
FolderListActivity.Disp
 }
 
 public static final ProvidesKey KEY_PROVIDER = new 
ProvidesKey() {
-  @Override
-  public Object getKey(LabelNode item) {
+@Override
+public Object getKey(LabelNode item) {
 return item == null ? null : item.getPath();
-  }
-};
+}
+};
 
 protected void onAttach() {
 super.onAttach();

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 04/04: Add TODO tag for issue#HUPA-117.

2018-02-25 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit 697eda5a53b74f09b3e4162fa03721284b062e8d
Author: Echo Wang <don...@apache.org>
AuthorDate: Sun Feb 25 18:03:45 2018 +0800

Add TODO tag for issue#HUPA-117.
---
 client/src/main/java/org/apache/hupa/client/ui/FolderListView.java | 1 +
 1 file changed, 1 insertion(+)

diff --git a/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java 
b/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
index a36c071..8d5b965 100644
--- a/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
+++ b/client/src/main/java/org/apache/hupa/client/ui/FolderListView.java
@@ -108,6 +108,7 @@ public class FolderListView extends Composite implements 
FolderListActivity.Disp
 
 @Override
 public void refresh() {
+   //TODO not only refresh data, but highlight the folder list item. <= 
https://issues.apache.org/jira/browse/HUPA-117
 data.refresh();
 }
 

-- 
To stop receiving notification emails like this one, please contact
don...@apache.org.

-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



[james-hupa] 02/04: Clean code.

2018-02-25 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git

commit 8cac67181dcf18cb9b79f5653bc441fd2e2a1e76
Author: Echo Wang <don...@apache.org>
AuthorDate: Sat Feb 24 08:39:51 2018 +0800

Clean code.
---
 client/src/main/java/org/apache/hupa/client/ui/HupaLayoutable.java | 2 ++
 client/src/main/java/org/apache/hupa/client/ui/HupaPlugins.java| 3 ---
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/client/src/main/java/org/apache/hupa/client/ui/HupaLayoutable.java 
b/client/src/main/java/org/apache/hupa/client/ui/HupaLayoutable.java
index d0a921d..a8b3ded 100644
--- a/client/src/main/java/org/apache/hupa/client/ui/HupaLayoutable.java
+++ b/client/src/main/java/org/apache/hupa/client/ui/HupaLayoutable.java
@@ -24,6 +24,7 @@ import org.apache.hupa.client.place.SettingPlace;
 import com.google.gwt.user.client.ui.AcceptsOneWidget;
 
 public interface HupaLayoutable extends Layoutable {
+   
 AcceptsOneWidget getTopBarView();
 
 AcceptsOneWidget getLogoView();
@@ -49,6 +50,7 @@ public interface HupaLayoutable extends Layoutable {
 AcceptsOneWidget getNotificationView();
 
 AcceptsOneWidget getLabelListView();
+
 AcceptsOneWidget getAddressListView();
 
 AcceptsOneWidget getLabelPropertiesView();
diff --git a/client/src/main/java/org/apache/hupa/client/ui/HupaPlugins.java 
b/client/src/main/java/org/apache/hupa/client/ui/HupaPlugins.java
index 1f61b7e..32d9a71 100644
--- a/client/src/main/java/org/apache/hupa/client/ui/HupaPlugins.java
+++ b/client/src/main/java/org/apache/hupa/client/ui/HupaPlugins.java
@@ -6,7 +6,4 @@ public interface HupaPlugins {
 
 }
 
-
-
-
 }




[james-hupa] branch trunk updated (586ac80 -> 697eda5)

2018-02-25 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git.


from 586ac80  Reformat, remove the blank space.
 new 1519cf9  Add license copyright for the roundcube theme.
 new 8cac671  Clean code.
 new 73a8e70  Reformat code.
 new 697eda5  Add TODO tag for issue#HUPA-117.

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../org/apache/hupa/client/ui/FolderListView.java  |  9 ---
 .../org/apache/hupa/client/ui/HupaLayoutable.java  |  2 ++
 .../org/apache/hupa/client/ui/HupaPlugins.java |  3 ---
 .../src/main/java/org/apache/hupa/client/ui/README | 29 --
 4 files changed, 34 insertions(+), 9 deletions(-)




[jira] [Assigned] (HUPA-109) Mail Attachment does not work in HTTPS mode In internet Explorer When URL is added in trusted sites

2018-02-25 Thread Dongxu Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HUPA-109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongxu Wang reassigned HUPA-109:


Assignee: Dongxu Wang  (was: Manuel Carrasco Moñino)

> Mail Attachment does not work in HTTPS mode  In internet Explorer When URL is 
> added in trusted sites  
> --
>
> Key: HUPA-109
> URL: https://issues.apache.org/jira/browse/HUPA-109
> Project: James Hupa
>  Issue Type: Bug
>  Components: client, server
> Environment: Windows XP/7
> Internet Explorer 8/9
>Reporter: ajay kumar
>Assignee: Dongxu Wang
>Priority: Blocker
>
> Mail Attachment does not work in HTTPS mode  In internet Explorer When URL is 
> added in trusted sites  
> Steps to reproduce:
> ***
> 1. Browse the application over HTTPS in Internet Explorer only (versions 8 and 9).
> 2. Add the browsing URL under Internet Options -> Security -> Trusted Sites.
> 3. Now send a mail with an attachment to any person.
> 4. The mail is sent, but the attachment is not sent.
> Note: if we do not perform step 2 above, it works fine.
> It also works fine in browsers other than Internet Explorer.
> Can you please diagnose whether this can be fixed in code, and what its cause is?






[jira] [Assigned] (HUPA-117) The folder item should be highlighted after refreshing the page

2018-02-25 Thread Dongxu Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HUPA-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dongxu Wang reassigned HUPA-117:


Assignee: Dongxu Wang  (was: Manuel Carrasco Moñino)

> The folder item should be highlighted after refreshing the page
> ---
>
> Key: HUPA-117
> URL: https://issues.apache.org/jira/browse/HUPA-117
> Project: James Hupa
>  Issue Type: Bug
>  Components: client
>Reporter: Echo Wang
>Assignee: Dongxu Wang
>Priority: Major
>
> The items are located in the left panel.






[james-hupa] branch trunk updated: Reformat, remove the blank space.

2018-02-22 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 586ac80  Reformat, remove the blank space.
586ac80 is described below

commit 586ac8020ec037d289781a0c0b32edb6150731c6
Author: Echo Wang <don...@apache.org>
AuthorDate: Fri Feb 23 15:37:28 2018 +0800

Reformat, remove the blank space.
---
 client/src/main/java/org/apache/hupa/Launcher.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/client/src/main/java/org/apache/hupa/Launcher.java 
b/client/src/main/java/org/apache/hupa/Launcher.java
index 4481a75..2101eef 100644
--- a/client/src/main/java/org/apache/hupa/Launcher.java
+++ b/client/src/main/java/org/apache/hupa/Launcher.java
@@ -32,7 +32,7 @@ import org.eclipse.jetty.webapp.WebAppContext;
 public final class Launcher {
public static void main(String[] args) throws Exception {
 
-   int port = Integer.parseInt(System.getProperty("port", "8282"));
+  int port = Integer.parseInt(System.getProperty("port", "8282"));
   String bindAddress = System.getProperty("host", "0.0.0.0");
 
   InetSocketAddress a = new InetSocketAddress(bindAddress, port);
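
For anyone running the embedded Jetty launcher above, the two knobs it reads are plain
system properties with the defaults shown in the diff, overridable with -Dport=... and
-Dhost=... on the command line. A tiny runnable sketch of the same lookup (the class name
is illustrative; the defaults are copied from the diff):

    import java.net.InetSocketAddress;

    public class LauncherPropsDemo {
        public static void main(String[] args) {
            // Same defaults as Hupa's Launcher: 8282 and 0.0.0.0.
            int port = Integer.parseInt(System.getProperty("port", "8282"));
            String bindAddress = System.getProperty("host", "0.0.0.0");
            InetSocketAddress a = new InetSocketAddress(bindAddress, port);
            System.out.println("would bind Jetty to " + a);
        }
    }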




james-site git commit: Update to hupa release 0.0.3 for doc.

2018-02-22 Thread dongxu
Repository: james-site
Updated Branches:
  refs/heads/asf-site d0142be60 -> e6edcf292


Update to hupa release 0.0.3 for doc.


Project: http://git-wip-us.apache.org/repos/asf/james-site/repo
Commit: http://git-wip-us.apache.org/repos/asf/james-site/commit/e6edcf29
Tree: http://git-wip-us.apache.org/repos/asf/james-site/tree/e6edcf29
Diff: http://git-wip-us.apache.org/repos/asf/james-site/diff/e6edcf29

Branch: refs/heads/asf-site
Commit: e6edcf292be49cb6c8ca0bbc1b67a61d29f51968
Parents: d0142be
Author: Echo Wang <don...@apache.org>
Authored: Thu Feb 22 17:23:48 2018 +0800
Committer: Echo Wang <don...@apache.org>
Committed: Thu Feb 22 17:23:48 2018 +0800

--
 content/hupa/index.html | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/james-site/blob/e6edcf29/content/hupa/index.html
--
diff --git a/content/hupa/index.html b/content/hupa/index.html
index 950bb39..dd49404 100644
--- a/content/hupa/index.html
+++ b/content/hupa/index.html
@@ -255,17 +255,17 @@
 
 Releases
 
-Last release is Hupa 0.0.2:
-<a href="http://repo1.maven.org/maven2/org/apache/james/hupa/hupa/0.0.2/hupa-0.0.2.war"> binary </a>: ready to run or to deploy in any servlet container.
-<a href="http://repo1.maven.org/maven2/org/apache/james/hupa/hupa-parent/0.0.2/hupa-parent-0.0.2-source-release.zip"> sources </a>.
+Last release is Hupa 0.0.3:
+<a href="http://central.maven.org/maven2/org/apache/james/hupa/hupa/0.0.3/hupa-0.0.3.war"> binary </a>: ready to run or to deploy in any servlet container.
+<a href="http://central.maven.org/maven2/org/apache/james/hupa/hupa-parent/0.0.3/hupa-parent-0.0.3-source-release.zip"> sources </a>.
 
-
+
 
 
 Demo





[james-hupa] branch trunk updated: Use google code archive link since google code has been deprecated.

2018-02-21 Thread dongxu
This is an automated email from the ASF dual-hosted git repository.

dongxu pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/james-hupa.git


The following commit(s) were added to refs/heads/trunk by this push:
 new f56eff1  Use google code archive link since google code has been 
deprecated.
f56eff1 is described below

commit f56eff1a08670f757ee6cd2cd7faaebb1a403de1
Author: Echo Wang <don...@apache.org>
AuthorDate: Thu Feb 22 11:18:35 2018 +0800

Use google code archive link since google code has been deprecated.
---
 client/README.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/client/README.txt b/client/README.txt
index 415e6fb..34336fc 100644
--- a/client/README.txt
+++ b/client/README.txt
@@ -1,2 +1,2 @@
 About the com.google.gwt.gen2.event.shared.HandlerManager:
-See http://code.google.com/p/google-web-toolkit-incubator/issues/detail?id=340
+See https://code.google.com/archive/p/google-web-toolkit-incubator/issues/340




Fwd: [jira] [Updated] (INFRA-15751) Migrate james-hupa origin to git

2018-01-20 Thread dongxu
Hi Tellier,

Can you, as a PMC member, help with the self-serve git migration for Hupa? The
information might be:

Repository name: hupa
Repository description: Apache James Hupa Repo
Commit notification list: server-dev@james.apache.org
GitHub notification list: server-dev@james.apache.org



-- Forwarded message --
From: Chris Lambertus (JIRA) 
Date: Sat, Jan 13, 2018 at 2:05 AM
Subject: [jira] [Updated] (INFRA-15751) Migrate james-hupa origin to git
To: don...@apache.org



 [ 
https://issues.apache.org/jira/browse/INFRA-15751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Chris Lambertus updated INFRA-15751:

Status: Waiting for user  (was: Waiting for Infra)

svn -> git migration is self-serve now. You can request a new repo via
https://gitbox.apache.org/setup/newrepo.html, then use svn2git to test
and tune the migration details. You will need our authors.txt file,
available at http://git-wip-us.apache.org/authors.txt



> Migrate james-hupa origin to git
> 
>
> Key: INFRA-15751
> URL: https://issues.apache.org/jira/browse/INFRA-15751
> Project: Infrastructure
>  Issue Type: Improvement
>  Components: Git, Github
>Reporter: Tellier Benoit
>
> Hi,
> Would it be possible to migrate james-hupa origin from *svn* to *git*?
> This would enhance the development process of that part of the James project; 
> it has been asked for by active committers, and was accepted by the PMCs.
> Cheers,
> Benoit Tellier






Re: Git Repos

2018-01-01 Thread dongxu
Hi Benoit,

I will follow the issue status.
Thanks for your help.


On Tue, Jan 2, 2018 at 9:55 AM, Benoit Tellier <btell...@linagora.com> wrote:
> Hi Dong Xu,
>
> After a PMC consultation I opened this ticket:
>
> https://issues.apache.org/jira/browse/INFRA-15751
>
> Cheers,
>
> Benoit
>
> On 20/12/2017 at 08:02, dongxu wrote:
>> Hey Eric,
>>
>> can you please help push a request to migrate james-hupa from SVN to
>> Git https://reporeq.apache.org/
>>
>> Thanks.
>>
>> On Mon, Dec 18, 2017 at 9:03 AM, dongxu <don...@apache.org> wrote:
>>> Hi guys,
>>>
>>> Can I apply to move james-hupa to git and then continue to maintain
>>> the project?
>>>
>>> On Sun, Jul 3, 2016 at 5:17 PM, Eric Charles <e...@apache.org> wrote:
>>>>>> What about http://james.apache.org/contribute.html where we could also
>>>>>> introduce the different repositories and explain the overall
>>>>>> architecture of the James project and how we accept pull requests from
>>>>>> github.
>>>>>>
>>>>>> Any thoughts?
>>>>>
>>>>>
>>>>> The website needs a lot of love. Thank you for taking your time to go
>>>>> through all these tasks that need to be done.
>>>>>
>>>>
>>>> This can be followed on JAMES-1789 (consolidate documentation) and
>>>> INFRA-12204 (Migrate James website from svnpubsub to the git workflow)
>>>>
>>>>



Re: Git Repos

2017-12-19 Thread dongxu
Hey Eric,

can you please help push a request to migrate james-hupa from SVN to
Git https://reporeq.apache.org/

Thanks.

On Mon, Dec 18, 2017 at 9:03 AM, dongxu <don...@apache.org> wrote:
> Hi guys,
>
> Can I apply to move james-hupa to git and then continue to maintain
> the project?
>
> On Sun, Jul 3, 2016 at 5:17 PM, Eric Charles <e...@apache.org> wrote:
>>>> What about http://james.apache.org/contribute.html where we could also
>>>> introduce the different repositories and explain the overall
>>>> architecture of the James project and how we accept pull requests from
>>>> github.
>>>>
>>>> Any thoughts?
>>>
>>>
>>> The website needs a lot of love. Thank you for taking your time to go
>>> through all these tasks that need to be done.
>>>
>>
>> This can be followed on JAMES-1789 (consolidate documentation) and
>> INFRA-12204 (Migrate James website from svnpubsub to the git workflow)
>>
>>



Re: Git Repos

2017-12-17 Thread dongxu
Hi guys,

Can I apply to move james-hupa to git and then continue to maintain
the project?

On Sun, Jul 3, 2016 at 5:17 PM, Eric Charles <e...@apache.org> wrote:
>>> What about http://james.apache.org/contribute.html where we could also
>>> introduce the different repositories and explain the overall
>>> architecture of the James project and how we accept pull requests from
>>> github.
>>>
>>> Any thoughts?
>>
>>
>> The website needs a lot of love. Thank you for taking your time to go
>> through all these tasks that need to be done.
>>
>
> This can be followed on JAMES-1789 (consolidate documentation) and
> INFRA-12204 (Migrate James website from svnpubsub to the git workflow)
>
>



svn commit: r1818266 - /james/hupa/trunk/pom.xml

2017-12-15 Thread dongxu
Author: dongxu
Date: Fri Dec 15 10:58:43 2017
New Revision: 1818266

URL: http://svn.apache.org/viewvc?rev=1818266&view=rev
Log:
Remove 404 repository, add jboss repository for cobogw.

Modified:
james/hupa/trunk/pom.xml

Modified: james/hupa/trunk/pom.xml
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/pom.xml?rev=1818266&r1=1818265&r2=1818266&view=diff
==
--- james/hupa/trunk/pom.xml (original)
+++ james/hupa/trunk/pom.xml Fri Dec 15 10:58:43 2017
@@ -338,18 +338,8 @@
 http://repo1.maven.org/maven2/
 
 
-guice, gin, gwt-vl, gwt-incubator, gwt-dnd
-http://gwtquery-plugins.googlecode.com/svn/mavenrepo
-
-
-gwt-presenter
-GWT Presenter repository at googlecode
-http://gwt-presenter.googlecode.com/svn/maven2
-
-
-cobogw
-Cobogw repository at googlecode
-http://cobogw.googlecode.com/svn/maven2
+JBoss repository
+http://repository.jboss.org/nexus/content/groups/public/
 
 
sonatype






svn commit: r1818263 - /james/hupa/trunk/README.txt

2017-12-15 Thread dongxu
Author: dongxu
Date: Fri Dec 15 10:36:21 2017
New Revision: 1818263

URL: http://svn.apache.org/viewvc?rev=1818263&view=rev
Log:
Remove README.txt.

Removed:
james/hupa/trunk/README.txt





svn commit: r1818262 - /james/hupa/trunk/README.md

2017-12-15 Thread dongxu
Author: dongxu
Date: Fri Dec 15 10:35:37 2017
New Revision: 1818262

URL: http://svn.apache.org/viewvc?rev=1818262&view=rev
Log:
Update readme from txt to md.

Added:
james/hupa/trunk/README.md

Added: james/hupa/trunk/README.md
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/README.md?rev=1818262&view=auto
==
--- james/hupa/trunk/README.md (added)
+++ james/hupa/trunk/README.md Fri Dec 15 10:35:37 2017
@@ -0,0 +1,48 @@
+
+## Introduction ##
+Hupa is a Rich IMAP-based Webmail application written in GWT.
+
+Hupa has been entirely written in java to be coherent with the language used 
in the James project.
+It has been a development reference using GWT good practices (MVP pattern and 
Unit testing)
+
+It is ready for reading, sending, and managing messages and folders, but it 
still lacks many features that email clients have nowadays.
+
+## Building ##
+Hupa uses maven2 as its build tool. To build Hupa, download maven2 
(http://maven.apache.org), unpack it and install it.
+After that, change to the hupa directory and execute the following command:
+$ mvn clean package
+
+## Configuring server side  
+Hupa uses a properties file to know the IMAP and SMTP servers configuration.
+There is an example configuration file in 
'hupa/src/main/webapp/WEB-INF/conf/config.properties'
+
+- You can set your configuration parameters in either of these files:
+  $HOME/.hupa/config.properties
+  /etc/default/hupa
+- Or in any other file if you start your application server with the parameter:
+  -Dhupa.config.file=full_path_to_your_properties_file
+
+# Running Hupa #
+Hupa comes packaged with a servlet-container, so once you have compiled the 
app just run:
+$ java -jar target/hupa-${version}.war
+
+Then point your browser to the url:
+http://localhost:8282
+
+If you prefer to use any other servlet container you can deploy the provided 
.war file in it.
+
+## Hupa and IMAP/SMTP servers  #
+Hupa is able to discover most of the imap/smtp configuration based on the 
email domain part.
+When you are prompted to login, type your email address and wait few seconds, 
if you click on the
+gear button you can see the configuration discovered by Hupa, you can modify 
it if it does not match
+your email provider configuration. Then type your inbox password and you will 
be logged into your
+email provider servers.
+
+Hupa is compatible with most email providers, gmail, yahoo, hotmail, outlook, 
exchange, james, etc.
+
+## Eclipse GWT Plugin notes 
+- Hupa is built with maven; before importing the project, you should install 
m2eclipse
+and the Google plugins, then go to Import -> New maven project and select the 
modules:
+shared, mock, server, widgets, client and hupa.
+
+- To run hupa in hosted mode, select: Run as -> (Google) Web application.






svn commit: r1802007 - in /james/hupa/trunk: ./ client/pom.xml hupa/pom.xml pom.xml

2017-07-15 Thread dongxu
Author: dongxu
Date: Sat Jul 15 09:36:19 2017
New Revision: 1802007

URL: http://svn.apache.org/viewvc?rev=1802007&view=rev
Log:
Downgrade the dnd and cobogw versions since they cannot be found in the central 
maven repository.

Modified:
james/hupa/trunk/   (props changed)
james/hupa/trunk/client/pom.xml
james/hupa/trunk/hupa/pom.xml
james/hupa/trunk/pom.xml

Propchange: james/hupa/trunk/
--
--- svn:ignore (original)
+++ svn:ignore Sat Jul 15 09:36:19 2017
@@ -1,6 +1,4 @@
 target
-tomcat
-www-test
-.*
-war
-coverage.ec
+.settings
+.project
+MANIFEST.MF

Modified: james/hupa/trunk/client/pom.xml
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/pom.xml?rev=1802007&r1=1802006&r2=1802007&view=diff
==
--- james/hupa/trunk/client/pom.xml (original)
+++ james/hupa/trunk/client/pom.xml Sat Jul 15 09:36:19 2017
@@ -72,7 +72,7 @@
     <artifactId>cobogw</artifactId>
     </dependency>
     <dependency>
-    <groupId>com.google.code.gwt-dnd</groupId>
+    <groupId>com.allen-sauer.gwt.dnd</groupId>
     <artifactId>gwt-dnd</artifactId>
     </dependency>
 

Modified: james/hupa/trunk/hupa/pom.xml
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/hupa/pom.xml?rev=1802007&r1=1802006&r2=1802007&view=diff
==
--- james/hupa/trunk/hupa/pom.xml (original)
+++ james/hupa/trunk/hupa/pom.xml Sat Jul 15 09:36:19 2017
@@ -84,7 +84,7 @@
     <artifactId>cobogw</artifactId>
     </dependency>
     <dependency>
-    <groupId>com.google.code.gwt-dnd</groupId>
+    <groupId>com.allen-sauer.gwt.dnd</groupId>
     <artifactId>gwt-dnd</artifactId>
     </dependency>
 

Modified: james/hupa/trunk/pom.xml
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/pom.xml?rev=1802007&r1=1802006&r2=1802007&view=diff
==
--- james/hupa/trunk/pom.xml (original)
+++ james/hupa/trunk/pom.xml Sat Jul 15 09:36:19 2017
@@ -162,7 +162,7 @@
 
 org.cobogw.gwt
 cobogw
-1.3.2
+1.2.5
 provided
 
 
@@ -202,14 +202,14 @@
 
 com.google.gwt
 gwt-incubator
-20101117-r1766
+2.0.1
 
 
 
 
-com.google.code.gwt-dnd
+com.allen-sauer.gwt.dnd
 gwt-dnd
-3.1.1
+3.1.2
 provided
 
 






svn commit: r1801935 - /james/hupa/trunk/README.txt

2017-07-14 Thread dongxu
Author: dongxu
Date: Fri Jul 14 10:59:14 2017
New Revision: 1801935

URL: http://svn.apache.org/viewvc?rev=1801935&view=rev
Log:
Update doc and ping the repository.

Modified:
james/hupa/trunk/README.txt

Modified: james/hupa/trunk/README.txt
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/README.txt?rev=1801935&r1=1801934&r2=1801935&view=diff
==
--- james/hupa/trunk/README.txt (original)
+++ james/hupa/trunk/README.txt Fri Jul 14 10:59:14 2017
@@ -8,7 +8,7 @@ It has been a development reference usin
 It is ready for reading, sending,  and managing messages and folders, but it 
still lacks of many features email clients nowadays have.
 
 ## Bulding ##
-Hupa use maven2 as build tool. To build hupa download maven2 
(http://maven.apache.org), unpack maven2 and install it.
+Hupa use maven2 as building tool. To build hupa download maven2 
(http://maven.apache.org), unpack maven2 and install it.
 After that change to hupa directory and execute the following cmd:
 $ mvn clean package
 









[jira] [Created] (HUPA-111) Using gwt-polymer-elements for theme

2015-11-13 Thread dongxu (JIRA)
dongxu created HUPA-111:
---

 Summary: Using gwt-polymer-elements for theme
 Key: HUPA-111
 URL: https://issues.apache.org/jira/browse/HUPA-111
 Project: James Hupa
  Issue Type: Improvement
Reporter: dongxu
Assignee: dongxu
Priority: Minor


Replace the current theme by polymer-elements






svn commit: r1684702 - /james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java

2015-06-10 Thread dongxu
Author: dongxu
Date: Wed Jun 10 16:19:15 2015
New Revision: 1684702

URL: http://svn.apache.org/r1684702
Log:
HUPA-110: the message select event should take place when the user refreshes a 
message detail page.

Modified:

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java?rev=1684702&r1=1684701&r2=1684702&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java
 Wed Jun 10 16:19:15 2015
@@ -133,7 +133,7 @@ public class MessageListActivity extends
 }
 
-protected void onMessageSelected(Message message) {
+public void onMessageSelected(Message message) {
 antiSelectMessages(display.getGrid().getVisibleItems());
 GetMessageDetailsRequest req = rf.messageDetailsRequest();
 GetMessageDetailsAction action = req.create(GetMessageDetailsAction.class);
@@ -242,6 +242,7 @@ public class MessageListActivity extends
 int l = messages.size();
 for (int i = 0; i < l; i++){
 Message m = messages.get(i);
+MessageListActivity.this.onMessageSelected(m); //FIXME for fixing https://issues.apache.org/jira/browse/HUPA-110
 if (m.getUid() == event.messageDetails.getUid()) {
 List<IMAPFlag> flags = m.getFlags();
 if (!flags.contains(IMAPFlag.SEEN)) {
 if (!flags.contains(IMAPFlag.SEEN)) {






[jira] [Resolved] (HUPA-110) All tool buttons are disabled when refreshing some message page.

2015-06-10 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HUPA-110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu resolved HUPA-110.
-
   Resolution: Fixed
Fix Version/s: 0.1

 All tool buttons are disabled when refreshing some message page.
 

 Key: HUPA-110
 URL: https://issues.apache.org/jira/browse/HUPA-110
 Project: James Hupa
  Issue Type: Bug
  Components: client
Affects Versions: 0.1
Reporter: dongxu
Assignee: dongxu
 Fix For: 0.1


 When refreshing on some message page, like:
 http://127.0.0.1:/hupa/Hupa.html#message:INBOX:15205
 Even though the tool buttons have the active style, they are actually not 
 clickable.






[jira] [Created] (HUPA-110) All tool buttons are disabled when refreshing some message page.

2015-06-09 Thread dongxu (JIRA)
dongxu created HUPA-110:
---

 Summary: All tool buttons are disabled when refreshing some 
message page.
 Key: HUPA-110
 URL: https://issues.apache.org/jira/browse/HUPA-110
 Project: James Hupa
  Issue Type: Bug
  Components: client
Affects Versions: 0.1
Reporter: dongxu
Assignee: dongxu


When refreshing on some message page, like:
http://127.0.0.1:/hupa/Hupa.html#message:INBOX:15205

Even though the tool buttons have the active style, they are actually not 
clickable.






svn commit: r1684470 - in /james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui: ToolBarView.java ToolBarView.ui.xml

2015-06-09 Thread dongxu
Author: dongxu
Date: Tue Jun  9 16:28:51 2015
New Revision: 1684470

URL: http://svn.apache.org/r1684470
Log:
Fix the UiHandlers to use an enable/disable check rather than manual handler registration.

Modified:

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/ToolBarView.java

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/ToolBarView.ui.xml

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/ToolBarView.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/ToolBarView.java?rev=1684470&r1=1684469&r2=1684470&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/ToolBarView.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/ToolBarView.java
 Tue Jun  9 16:28:51 2015
@@ -29,10 +29,8 @@ import org.apache.hupa.shared.events.Sho
 
 import com.google.gwt.core.client.GWT;
 import com.google.gwt.event.dom.client.ClickEvent;
-import com.google.gwt.event.dom.client.ClickHandler;
 import com.google.gwt.event.dom.client.HasClickHandlers;
 import com.google.gwt.event.shared.EventBus;
-import com.google.gwt.event.shared.HandlerRegistration;
 import com.google.gwt.place.shared.PlaceController;
 import com.google.gwt.resources.client.CssResource;
 import com.google.gwt.uibinder.client.UiBinder;
@@ -44,12 +42,13 @@ import com.google.gwt.user.client.ui.Dec
 import com.google.gwt.user.client.ui.FlowPanel;
 import com.google.gwt.user.client.ui.HTMLPanel;
 import com.google.gwt.user.client.ui.PopupPanel;
+import com.google.gwt.user.client.ui.UIObject;
 import com.google.gwt.user.client.ui.VerticalPanel;
 import com.google.gwt.user.client.ui.Widget;
 import com.google.inject.Inject;
 
 public class ToolBarView extends Composite implements ToolBarActivity.Displayable {
-
+   
 @Inject private PlaceController placeController;
 @Inject private EventBus eventBus;
 
@@ -68,18 +67,6 @@ public class ToolBarView extends Composite
 @UiField public HTMLPanel replyAllTip;
 @UiField public HTMLPanel forwardTip;
 
-
-// FIXME:  The handlers management in this view is awful.
-// It should use @UiHandlers with a enable/disble property.
-
-// Absolutely!!!
-
-HandlerRegistration deleteReg;
-HandlerRegistration markReg;
-HandlerRegistration replyReg;
-HandlerRegistration replyAllReg;
-HandlerRegistration forwardReg;
-
 @UiField public Style style;
 
 public interface Style extends CssResource {
@@ -182,61 +182,59 @@ public class ToolBarView extends Composite
 }
 
 @UiHandler("compose")
-public void handleClick(ClickEvent e) {
+public void handleCompose(ClickEvent e) {
 placeController.goTo(new ComposePlace("new").with(parameters));
 }
-
-private ClickHandler forwardHandler = new ClickHandler() {
-
-@Override
-public void onClick(ClickEvent event) {
+
+@UiHandler("forward")
+public void handleForward(ClickEvent e) {
+   if(isEnabled(forward)){
 placeController.goTo(new ComposePlace("forward").with(parameters));
-}
-
-};
-private ClickHandler replyAllHandler = new ClickHandler() {
-
-@Override
-public void onClick(ClickEvent event) {
-placeController.goTo(new ComposePlace("replyAll").with(parameters));
-}
-
-};
-private ClickHandler replyHandler = new ClickHandler() {
-
-@Override
-public void onClick(ClickEvent event) {
-placeController.goTo(new ComposePlace("reply").with(parameters));
-}
-
-};
-private ClickHandler deleteHandler = new ClickHandler() {
-
-@Override
-public void onClick(ClickEvent event) {
-eventBus.fireEvent(new DeleteClickEvent());
-}
-};
-
-private ClickHandler markHandler = new ClickHandler() {
-public void onClick(ClickEvent event) {
-// Reposition the popup relative to the button
-Widget source = (Widget) event.getSource();
-int left = source.getAbsoluteLeft();
-int top = source.getAbsoluteTop() + source.getOffsetHeight();
-simplePopup.setPopupPosition(left, top);
-simplePopup.show();
-}
-};
-
-private ClickHandler rawHandler = new ClickHandler() {
-@Override
-public void onClick(ClickEvent event) {
-eventBus.fireEvent(new ShowRawEvent());
-}
-};
-
-private HandlerRegistration rawReg;
+   }
+}
+
+@UiHandler("replyAll")
+public void handleReplyAll(ClickEvent e) {
+   if(isEnabled(replyAll)){
+   placeController.goTo(new ComposePlace("replyAll").with(parameters));
+   }
+}
+
+@UiHandler("reply")
+public void handleReply(ClickEvent e) {
+   if(isEnabled(reply)){
+   placeController.goTo(new ComposePlace("reply
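
The shape of this change is worth spelling out: with UiBinder, a method annotated
@UiHandler("field") is wired to the widget declared under that ui:field name, so instead of
adding and removing HandlerRegistrations to enable or disable a button, the handler stays
attached and its body is gated on the enabled state. A self-contained sketch of that guard
pattern, with plain stand-ins for GWT's ClickHandler machinery (all names illustrative):

    import java.util.ArrayList;
    import java.util.List;

    public class GuardedHandlerDemo {
        interface ClickHandler { void onClick(); }

        /** Minimal stand-in for a GWT button that dispatches click events. */
        static class Button {
            private final List<ClickHandler> handlers = new ArrayList<>();
            void addClickHandler(ClickHandler h) { handlers.add(h); }
            void click() { for (ClickHandler h : handlers) h.onClick(); }
        }

        public static void main(String[] args) {
            Button reply = new Button();
            final boolean[] enabled = { false };

            // Attach once; the guard replaces registering/unregistering handlers.
            reply.addClickHandler(() -> {
                if (enabled[0]) {
                    System.out.println("go to compose:reply");
                }
            });

            reply.click();   // disabled: nothing happens
            enabled[0] = true;
            reply.click();   // enabled: prints "go to compose:reply"
        }
    }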

svn commit: r1683771 - /james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/README

2015-06-05 Thread dongxu
Author: dongxu
Date: Fri Jun  5 14:44:07 2015
New Revision: 1683771

URL: http://svn.apache.org/r1683771
Log:
add ui skin's license information.

Modified:
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/README

Modified: james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/README
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/README?rev=1683771&r1=1683770&r2=1683771&view=diff
==
--- james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/README 
(original)
+++ james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/README Fri 
Jun  5 14:44:07 2015
@@ -1 +1,2 @@
-Lots of theme resources in this package are borrowed from http://roundcube.net
\ No newline at end of file
+The majority of the theme resources in this ui package are copied from 
http://roundcube.net
+Therefore, the theme comply with https://roundcube.net/license/
\ No newline at end of file






svn commit: r1683766 - /james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java

2015-06-05 Thread dongxu
Author: dongxu
Date: Fri Jun  5 14:34:03 2015
New Revision: 1683766

URL: http://svn.apache.org/r1683766
Log:
remove warnings at _ToolPanel

Modified:

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java?rev=1683766&r1=1683765&r2=1683766&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/MessageListActivity.java
 Fri Jun  5 14:34:03 2015
@@ -19,8 +19,6 @@
 
 package org.apache.hupa.client.activity;
 
-import static com.google.gwt.query.client.GQuery.console;
-
 import java.util.Collection;
 import java.util.LinkedHashSet;
 import java.util.List;






svn commit: r1683575 - /james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ComposeActivity.java

2015-06-04 Thread dongxu
Author: dongxu
Date: Thu Jun  4 15:22:08 2015
New Revision: 1683575

URL: http://svn.apache.org/r1683575
Log:
It will redirect to the inbox rather than throw a NullPointerException.

Modified:

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ComposeActivity.java

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ComposeActivity.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ComposeActivity.java?rev=1683575&r1=1683574&r2=1683575&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ComposeActivity.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ComposeActivity.java
 Thu Jun  4 15:22:08 2015
@@ -395,8 +395,6 @@ public class ComposeActivity extends App
 }
 });
 } else if ("forward".equals(place.getToken())) {
-// FIXME will get a NullPointerException given accessing
-// directly from some URL like #/compose:forward
 SendForwardMessageRequest req = rf.sendForwardMessageRequest();
 SendForwardMessageAction action = 
req.create(SendForwardMessageAction.class);
 action.setReferences(oldDetails.getReferences());






svn commit: r1683571 - in /james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation: EmailListValidator.java NotEmptyValidator.java

2015-06-04 Thread dongxu
Author: dongxu
Date: Thu Jun  4 15:06:12 2015
New Revision: 1683571

URL: http://svn.apache.org/r1683571
Log:
Remove some warnings using @SuppressWarnings.

Modified:

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/EmailListValidator.java

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/NotEmptyValidator.java

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/EmailListValidator.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/EmailListValidator.java?rev=1683571&r1=1683570&r2=1683571&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/EmailListValidator.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/EmailListValidator.java
 Thu Jun  4 15:06:12 2015
@@ -40,7 +40,8 @@ public class EmailListValidator extends
 this.text = text;
 }
 
-@Override
+@SuppressWarnings("unchecked")
+   @Override
 public void invokeActions(ValidationResult result) {
 for (ValidationAction<HasText> action : getFailureActions())
 action.invoke(result, text);

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/NotEmptyValidator.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/NotEmptyValidator.java?rev=1683571&r1=1683570&r2=1683571&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/NotEmptyValidator.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/validation/NotEmptyValidator.java
 Thu Jun  4 15:06:12 2015
@@ -38,7 +38,8 @@ public class NotEmptyValidator extends V
 public NotEmptyValidator(HasText text) {
 this.text = text;
 }
-@Override
+@SuppressWarnings("unchecked")
+   @Override
 public void invokeActions(ValidationResult result) {
 for (ValidationAction<HasText> action : getFailureActions())
 action.invoke(result, text);
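
For context on the annotation: once generics are erased, the superclass's failure-action
collection is effectively raw, so iterating it as ValidationAction<HasText> is an unchecked
conversion. A self-contained illustration of the same warning and the narrow suppression
(the types below are stand-ins, not the gwt validation API):

    import java.util.ArrayList;
    import java.util.List;

    public class UncheckedDemo {
        /** A legacy API returning a raw List, as pre-generics libraries often do. */
        @SuppressWarnings({"rawtypes", "unchecked"})
        static List failureActions() {
            List actions = new ArrayList();
            actions.add("highlight the offending field");
            return actions;
        }

        @SuppressWarnings("unchecked") // the cast from a raw List cannot be verified
        public static void main(String[] args) {
            List<String> actions = (List<String>) failureActions();
            for (String action : actions) {
                System.out.println(action);
            }
        }
    }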






svn commit: r1683360 - in /james/hupa/trunk: client/src/test/java/org/apache/hupa/client/mock/ mock/src/main/java/org/apache/hupa/server/guice/ mock/src/main/java/org/apache/hupa/server/mock/ server/s

2015-06-03 Thread dongxu
Author: dongxu
Date: Wed Jun  3 15:07:09 2015
New Revision: 1683360

URL: http://svn.apache.org/r1683360
Log:
remove some warnings for the source code.

Modified:

james/hupa/trunk/client/src/test/java/org/apache/hupa/client/mock/MockMessageSendDisplay.java

james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/guice/AbstractGuiceTestModule.java

james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockConstants.java

james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockHttpSession.java

james/hupa/trunk/server/src/main/java/org/apache/hupa/server/service/GetMessageDetailsServiceImpl.java

james/hupa/trunk/server/src/test/java/org/apache/hupa/server/integration/StoreBugTest.java

james/hupa/trunk/server/src/test/java/org/apache/hupa/server/utils/TestUtils.java

james/hupa/trunk/widgets/src/main/java/org/apache/hupa/widgets/editor/ColorPicker.java

james/hupa/trunk/widgets/src/main/java/org/apache/hupa/widgets/editor/Editor.java

james/hupa/trunk/widgets/src/main/java/org/apache/hupa/widgets/editor/FontPicker.java

james/hupa/trunk/widgets/src/main/java/org/apache/hupa/widgets/ui/MultiValueSuggestArea.java

Modified: 
james/hupa/trunk/client/src/test/java/org/apache/hupa/client/mock/MockMessageSendDisplay.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/test/java/org/apache/hupa/client/mock/MockMessageSendDisplay.java?rev=1683360&r1=1683359&r2=1683360&view=diff
==
--- 
james/hupa/trunk/client/src/test/java/org/apache/hupa/client/mock/MockMessageSendDisplay.java
 (original)
+++ 
james/hupa/trunk/client/src/test/java/org/apache/hupa/client/mock/MockMessageSendDisplay.java
 Wed Jun  3 15:07:09 2015
@@ -27,7 +27,6 @@ import org.apache.hupa.widgets.ui.HasEna
 
 import com.google.gwt.event.dom.client.HasClickHandlers;
 import com.google.gwt.event.dom.client.HasFocusHandlers;
-import com.google.gwt.user.client.TakesValue;
 import com.google.gwt.user.client.ui.Focusable;
 import com.google.gwt.user.client.ui.HasHTML;
 import com.google.gwt.user.client.ui.HasText;

Modified: 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/guice/AbstractGuiceTestModule.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/guice/AbstractGuiceTestModule.java?rev=1683360&r1=1683359&r2=1683360&view=diff
==
--- 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/guice/AbstractGuiceTestModule.java
 (original)
+++ 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/guice/AbstractGuiceTestModule.java
 Wed Jun  3 15:07:09 2015
@@ -75,7 +75,8 @@ import com.google.inject.name.Named;
  */
 public abstract class AbstractGuiceTestModule extends AbstractModule{
 
-protected static class TestUser extends UserImpl {
+@SuppressWarnings("serial")
+   protected static class TestUser extends UserImpl {
 
 @Inject
 public TestUser(@Named("Username") String username,

Modified: 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockConstants.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockConstants.java?rev=1683360&r1=1683359&r2=1683360&view=diff
==
--- 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockConstants.java
 (original)
+++ 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockConstants.java
 Wed Jun  3 15:07:09 2015
@@ -33,7 +33,8 @@ public class MockConstants {
 
 public static String SESSION_ID = "MockID";
 
-public final static Settings mockSettings = new SettingsImpl() {
+@SuppressWarnings("serial")
+   public final static Settings mockSettings = new SettingsImpl() {
 {
 setInboxFolderName(MockIMAPStore.MOCK_INBOX_FOLDER);
 setSentFolderName(MockIMAPStore.MOCK_SENT_FOLDER);
@@ -69,7 +70,8 @@ public class MockConstants {
 }
 };
 
-public final static User mockUser = new UserImpl() {
+@SuppressWarnings("serial")
+   public final static User mockUser = new UserImpl() {
 {
 setName(MockIMAPStore.MOCK_LOGIN);
 setPassword(MockIMAPStore.MOCK_LOGIN);

Modified: 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockHttpSession.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockHttpSession.java?rev=1683360&r1=1683359&r2=1683360&view=diff
==
--- 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockHttpSession.java
 (original)
+++ 
james/hupa/trunk/mock/src/main/java/org/apache/hupa/server/mock/MockHttpSession.java
 Wed Jun  3 15:07:09 2015
@@ -50,7 +50,7 @@ public class MockHttpSession
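
A note on the warning silenced in this commit: each anonymous subclass of a Serializable
type (UserImpl, SettingsImpl) is a distinct synthetic class without its own
serialVersionUID, which compilers flag under the "serial" lint category. A minimal
reproduction of the pattern and the suppression (the SettingsImpl stand-in below is
illustrative, not Hupa's class):

    import java.io.Serializable;

    public class SerialWarningDemo {
        /** Illustrative stand-in for SettingsImpl. */
        static class SettingsImpl implements Serializable {
            private static final long serialVersionUID = 1L;
            String inboxFolderName;
        }

        // Suppress the "serial" warning for the anonymous subclass created below,
        // mirroring the MockConstants change in this commit.
        @SuppressWarnings("serial")
        static final SettingsImpl MOCK_SETTINGS = new SettingsImpl() {
            {
                inboxFolderName = "INBOX"; // instance-initializer style, as above
            }
        };

        public static void main(String[] args) {
            System.out.println(MOCK_SETTINGS.inboxFolderName);
        }
    }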

[jira] [Updated] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column value is NULL

2015-03-31 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-6644:
--
Description: 
In Hive, the schema of a partition may differ from the table schema; for 
example, we add a new column. When we use Spark SQL to query the data of a 
partition whose schema differs from the table schema, problems appear.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL.

[According to the following steps]:

case class TestData(key: Int, value: String)
val testData = TestHive.sparkContext.parallelize((1 to 10).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

// init
sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")
// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")
sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug will cause a wrong query count when we run a query like this:

select count(1) from table_with_partition where key1 is not NULL

  was:
In Hive, the schema of a partition may differ from the table schema; for 
example, we add a new column. When we use Spark SQL to query the data of a 
partition whose schema differs from the table schema, problems appear.
Some problems are solved (https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL.

[According to the following steps]:

case class TestData(key: Int, value: String)
val testData = TestHive.sparkContext.parallelize((1 to 10).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")

sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")
// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")
sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug will cause a wrong query count when we run a query like this:

select count(1) from table_with_partition where key1 is not NULL


 [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), 
 new column value is NULL
 --

 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu

 In Hive, the schema of a partition may differ from the table schema; for 
 example, we add a new column. When we use Spark SQL to query the data of a 
 partition whose schema differs from the table schema, problems appear.
 Some problems have been solved at PR4289 
 (https://github.com/apache/spark/pull/4289), 
 but if you add a new column and put new data into the old partition, the new 
 column value is NULL.
 [According to the following steps]:
 case class TestData(key: Int, value: String)
 val testData = TestHive.sparkContext.parallelize((1 to 10).map(i => TestData(i, i.toString))).toDF()
 testData.registerTempTable("testData")
 // init
 sql("DROP TABLE IF EXISTS table_with_partition")
 sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")
 // add column to table
 sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
 sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")
 sql("select * from table_with_partition where ds='1'").collect().foreach(println

[jira] [Updated] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column value is NULL

2015-03-31 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-6644:
--
Description: 
In Hive, the schema of a partition may differ from the table schema; for 
example, we add a new column. When we use Spark SQL to query the data of a 
partition whose schema differs from the table schema, problems appear.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL.

[According to the following steps]:

case class TestData(key: Int, value: String)

val testData = TestHive.sparkContext.parallelize((1 to 2).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")

// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")

sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug will cause a wrong query count when we run a query like this:

select count(1) from table_with_partition where key1 is not NULL

  was:
In Hive, the schema of a partition may differ from the table schema; for 
example, we add a new column. When we use Spark SQL to query the data of a 
partition whose schema differs from the table schema, problems appear.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL.

[According to the following steps]:

case class TestData(key: Int, value: String)
val testData = TestHive.sparkContext.parallelize((1 to 10).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

// init
sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")
// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")
sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug will cause a wrong query count when we run a query like this:

select count(1) from table_with_partition where key1 is not NULL


 [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), 
 new column value is NULL
 --

 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu

 In Hive, the schema of a partition may differ from the table schema; for 
 example, we add a new column. When we use Spark SQL to query the data of a 
 partition whose schema differs from the table schema, problems appear.
 Some problems have been solved at PR4289 
 (https://github.com/apache/spark/pull/4289), 
 but if you add a new column and put new data into the old partition, the new 
 column value is NULL.
 [According to the following steps]:
 case class TestData(key: Int, value: String)
 val testData = TestHive.sparkContext.parallelize((1 to 2).map(i => TestData(i, i.toString))).toDF()
 testData.registerTempTable("testData")
 sql("DROP TABLE IF EXISTS table_with_partition")
 sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")
 // add column to table
 sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
 sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")
 sql("select * from table_with_partition where ds='1'").collect().foreach

[jira] [Updated] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column value is NULL

2015-03-31 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-6644:
--
Description: 
In Hive, the schema of a partition may differ from the table schema; for 
example, we add a new column. When we use Spark SQL to query the data of a 
partition whose schema differs from the table schema, problems appear.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL.

[According to the following steps]:
--
case class TestData(key: Int, value: String)

val testData = TestHive.sparkContext.parallelize((1 to 2).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")

// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")

sql("select * from table_with_partition where ds='1'").collect().foreach(println)

-
result: 
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug will cause a wrong query count when we query:

select count(1) from table_with_partition where key1 is not NULL

  was:
In Hive, the schema of a partition may differ from the table schema; for 
example, we add a new column. When we use Spark SQL to query the data of a 
partition whose schema differs from the table schema, problems appear.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL.

[According to the following steps]:

case class TestData(key: Int, value: String)

val testData = TestHive.sparkContext.parallelize((1 to 2).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")

// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value, 'test', 1.11 FROM testData")

sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug will cause a wrong query count when we run a query like this:

select count(1) from table_with_partition where key1 is not NULL


 [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), 
 new column value is NULL
 --

 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu

 In Hive, the schema of a partition may differ from the table schema; for 
 example, we add a new column. When we use Spark SQL to query the data of a 
 partition whose schema differs from the table schema, problems appear.
 Some problems have been solved at PR4289 
 (https://github.com/apache/spark/pull/4289), 
 but if you add a new column and put new data into the old partition, the new 
 column value is NULL.
 [According to the following steps]:
 --
 case class TestData(key: Int, value: String)
 val testData = TestHive.sparkContext.parallelize((1 to 2).map(i => TestData(i, i.toString))).toDF()
 testData.registerTempTable("testData")
 sql("DROP TABLE IF EXISTS table_with_partition")
 sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key, value FROM testData")
 // add column to table
 sql("ALTER TABLE

[jira] [Updated] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column value is NULL

2015-03-31 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-6644:
--
Description: 
In Hive, the schema of a partition may differ from the table schema. For 
example, we may add a new column. Problems arise when we use spark-sql to 
query the data of a partition whose schema differs from the table schema.
Some problems are solved (https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL

[According to the following steps]:

case class TestData(key: Int, value: String)
val testData = TestHive.sparkContext.parallelize(
  (1 to 10).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")

sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")
// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")
sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug causes a wrong query count when we run a query like:

select count(1) from table_with_partition where key1 is not NULL

  was:
In Hive, the schema of a partition may differ from the table schema. For 
example, we may add a new column. Problems arise when we use spark-sql to 
query the data of a partition whose schema differs from the table schema.
Some problems are solved (https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL

[According to the following steps]:

case class TestData(key: Int, value: String)
val testData = TestHive.sparkContext.parallelize(
  (1 to 10).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")
sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")
// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")
sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]



 [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), 
 new column value is NULL
 --

 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu

 In Hive, the schema of a partition may differ from the table schema. For 
 example, we may add a new column. Problems arise when we use spark-sql to 
 query the data of a partition whose schema differs from the table schema.
 Some problems are solved (https://github.com/apache/spark/pull/4289), 
 but if you add a new column and put new data into the old partition, the 
 new column value is NULL
 [According to the following steps]:
 case class TestData(key: Int, value: String)
 val testData = TestHive.sparkContext.parallelize(
   (1 to 10).map(i => TestData(i, i.toString))).toDF()
 testData.registerTempTable("testData")
 sql("DROP TABLE IF EXISTS table_with_partition")
 sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")
 // add column to table
 sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
 sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")
 sql("select * from table_with_partition where ds='1'").collect().foreach(println)
 
 result:
 [1,1,null,null,1]
 [2,2,null,null,1]
 
 result we expect:
 [1,1,test,1.11,1]
 [2,2,test,1.11,1]
 This bug causes a wrong query count when we run a query like:
 select count(1

[jira] [Updated] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column value is NULL

2015-03-31 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-6644:
--
Description: 
In Hive, the schema of a partition may differ from the table schema. For 
example, we may add a new column. Problems arise when we use spark-sql to 
query the data of a partition whose schema differs from the table schema.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if we add a new column and put new data into the old partition schema, 
the new column value is NULL

[According to the following steps]:
--
case class TestData(key: Int, value: String)

val testData = TestHive.sparkContext.parallelize(
  (1 to 2).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")

// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")

sql("select * from table_with_partition where ds='1'").collect().foreach(println)

-
result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug causes a wrong query count when we run:

select count(1) from table_with_partition where key1 is not NULL

  was:
In Hive, the schema of a partition may differ from the table schema. For 
example, we may add a new column. Problems arise when we use spark-sql to 
query the data of a partition whose schema differs from the table schema.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition schema, 
the new column value is NULL

[According to the following steps]:
--
case class TestData(key: Int, value: String)

val testData = TestHive.sparkContext.parallelize(
  (1 to 2).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")

// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")

sql("select * from table_with_partition where ds='1'").collect().foreach(println)

-
result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug causes a wrong query count when we run:

select count(1) from table_with_partition where key1 is not NULL


 [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), 
 new column value is NULL
 --

 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu

 In Hive, the schema of a partition may differ from the table schema. For 
 example, we may add a new column. Problems arise when we use spark-sql to 
 query the data of a partition whose schema differs from the table schema.
 Some problems have been solved at PR4289 
 (https://github.com/apache/spark/pull/4289), 
 but if we add a new column and put new data into the old partition schema, 
 the new column value is NULL
 [According to the following steps]:
 --
 case class TestData(key: Int, value: String)
 val testData = TestHive.sparkContext.parallelize(
   (1 to 2).map(i => TestData(i, i.toString))).toDF()
 testData.registerTempTable("testData")
 sql("DROP TABLE IF EXISTS table_with_partition")
 sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value 
 string) PARTITIONED

[jira] [Updated] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column value is NULL

2015-03-31 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-6644:
--
Summary: [SPARK-SQL]when the partition schema does not match table 
schema(ADD COLUMN), new column value is NULL  (was: [SPARK-SQL]when the 
partition schema does not match table schema(ADD COLUMN), new column is NULL)

 [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), 
 new column value is NULL
 --

 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu

 In Hive, the schema of a partition may differ from the table schema. For 
 example, we may add a new column. Problems arise when we use spark-sql to 
 query the data of a partition whose schema differs from the table schema.
 Some problems are solved (https://github.com/apache/spark/pull/4289), 
 but if you add a new column and put new data into the old partition, the 
 new column value is NULL
 [According to the following steps]:
 case class TestData(key: Int, value: String)
 val testData = TestHive.sparkContext.parallelize(
   (1 to 10).map(i => TestData(i, i.toString))).toDF()
 testData.registerTempTable("testData")
 sql("DROP TABLE IF EXISTS table_with_partition")
 sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")
 // add column to table
 sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
 sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
 sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")
 sql("select * from table_with_partition where ds='1'").collect().foreach(println)
  
 result : 
 [1,1,null,null,1]
 [2,2,null,null,1]
  
 result we expect:
 [1,1,test,1.11,1]
 [2,2,test,1.11,1]






[jira] [Created] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column is NULL

2015-03-31 Thread dongxu (JIRA)
dongxu created SPARK-6644:
-

 Summary: [SPARK-SQL]when the partition schema does not match table 
schema(ADD COLUMN), new column is NULL
 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu


In Hive, the schema of a partition may differ from the table schema. For 
example, we may add a new column. Problems arise when we use spark-sql to 
query the data of a partition whose schema differs from the table schema.
Some problems are solved (https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL

[According to the following steps]:

case class TestData(key: Int, value: String)
val testData = TestHive.sparkContext.parallelize(
  (1 to 10).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")
sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")
// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")
sql("select * from table_with_partition where ds='1'").collect().foreach(println)

result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]







[jira] [Updated] (SPARK-6644) [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), new column value is NULL

2015-03-31 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-6644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-6644:
--
Description: 
In Hive, the schema of a partition may differ from the table schema. For 
example, we may add a new column. Problems arise when we use spark-sql to 
query the data of a partition whose schema differs from the table schema.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition schema, 
the new column value is NULL

[According to the following steps]:
--
case class TestData(key: Int, value: String)

val testData = TestHive.sparkContext.parallelize(
  (1 to 2).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")

// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")

sql("select * from table_with_partition where ds='1'").collect().foreach(println)

-
result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug causes a wrong query count when we run:

select count(1) from table_with_partition where key1 is not NULL

  was:
In Hive, the schema of a partition may differ from the table schema. For 
example, we may add a new column. Problems arise when we use spark-sql to 
query the data of a partition whose schema differs from the table schema.
Some problems have been solved at PR4289 
(https://github.com/apache/spark/pull/4289), 
but if you add a new column and put new data into the old partition, the new 
column value is NULL

[According to the following steps]:
--
case class TestData(key: Int, value: String)

val testData = TestHive.sparkContext.parallelize(
  (1 to 2).map(i => TestData(i, i.toString))).toDF()
testData.registerTempTable("testData")

sql("DROP TABLE IF EXISTS table_with_partition")
sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value string) PARTITIONED by (ds string) location '${tmpDir.toURI.toString}'")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value FROM testData")

// add column to table
sql("ALTER TABLE table_with_partition ADD COLUMNS(key1 string)")
sql("ALTER TABLE table_with_partition ADD COLUMNS(destlng double)")
sql("INSERT OVERWRITE TABLE table_with_partition partition (ds='1') SELECT key,value,'test',1.11 FROM testData")

sql("select * from table_with_partition where ds='1'").collect().foreach(println)

-
result:
[1,1,null,null,1]
[2,2,null,null,1]

result we expect:
[1,1,test,1.11,1]
[2,2,test,1.11,1]

This bug causes a wrong query count when we run:

select count(1) from table_with_partition where key1 is not NULL


 [SPARK-SQL]when the partition schema does not match table schema(ADD COLUMN), 
 new column value is NULL
 --

 Key: SPARK-6644
 URL: https://issues.apache.org/jira/browse/SPARK-6644
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.3.0
Reporter: dongxu

 In Hive, the schema of a partition may differ from the table schema. For 
 example, we may add a new column. Problems arise when we use spark-sql to 
 query the data of a partition whose schema differs from the table schema.
 Some problems have been solved at PR4289 
 (https://github.com/apache/spark/pull/4289), 
 but if we add a new column and put new data into the old partition schema, 
 the new column value is NULL
 [According to the following steps]:
 --
 case class TestData(key: Int, value: String)
 val testData = TestHive.sparkContext.parallelize(
   (1 to 2).map(i => TestData(i, i.toString))).toDF()
 testData.registerTempTable("testData")
 sql("DROP TABLE IF EXISTS table_with_partition")
 sql(s"CREATE TABLE IF NOT EXISTS table_with_partition(key int, value 
 string) PARTITIONED by (ds string

Re: svn commit: r1660222 - in /james/hupa/trunk: client/src/main/java/org/apache/hupa/public/Hupa-sd.html hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml

2015-02-22 Thread dongxu
It is working. Thank you @manolo.
By the way, the Eclipse for Java edition (Luna) also works, and I have to add
-Xmx1024m to the VM args, otherwise the console complains:

Compiling 1 permutation
  Compiling permutation 0...
  [ERROR] OutOfMemoryError: Increase heap size or lower gwt.jjs.maxThreads
java.lang.OutOfMemoryError: Java heap space
  at java.util.HashMap.resize(HashMap.java:559)
  at java.util.HashMap.addEntry(HashMap.java:851)
  at java.util.HashMap.put(HashMap.java:484)
  at com.google.gwt.dev.jjs.impl.JsFunctionClusterer.updateSourceInfoMap(JsFunctionClusterer.java:234)
  at com.google.gwt.dev.jjs.impl.JsAbstractTextTransformer.recomputeJsAndStatementRanges(JsAbstractTextTransformer.java:132)
  at com.google.gwt.dev.jjs.impl.JsFunctionClusterer.exec(JsFunctionClusterer.java:154)
  at com.google.gwt.dev.jjs.JavaToJavaScriptCompiler.generateJavaScriptCode(JavaToJavaScriptCompiler.java:1169)
  at com.google.gwt.dev.jjs.JavaToJavaScriptCompiler.compilePermutation(JavaToJavaScriptCompiler.java:506)
  at com.google.gwt.dev.jjs.UnifiedAst.compilePermutation(UnifiedAst.java:134)
  at com.google.gwt.dev.CompilePerms.compile(CompilePerms.java:195)
  at com.google.gwt.dev.ThreadedPermutationWorkerFactory$ThreadedPermutationWorker.compile(ThreadedPermutationWorkerFactory.java:49)
  at com.google.gwt.dev.PermutationWorkerFactory$Manager$WorkerThread.run(PermutationWorkerFactory.java:73)
  at java.lang.Thread.run(Thread.java:722)
  [ERROR] Out of memory; to increase the amount of memory, use the -Xmx flag at startup (java -Xmx128M ...)
  [ERROR] Unrecoverable exception, shutting down
com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries)
  at com.google.gwt.dev.ThreadedPermutationWorkerFactory$ThreadedPermutationWorker.compile(ThreadedPermutationWorkerFactory.java:56)
  at com.google.gwt.dev.PermutationWorkerFactory$Manager$WorkerThread.run(PermutationWorkerFactory.java:73)
  at java.lang.Thread.run(Thread.java:722)
  [ERROR] Not all permutation were compiled , completed (0/1)

On Tue, Feb 17, 2015 at 8:58 PM, Manuel Carrasco Moñino man...@apache.org
wrote:

 with all the changes we did to GWT DevMode and the Google plugin in the last
 releases, it should be very easy.

 Just install Eclipse for JEE (preferably Luna), install GPE (the Google
 Eclipse plugin), import Hupa (using the import-existing-maven-projects
 wizard), then select the 'hupa' project (note that it's not hupa-client) and
 under the 'Run as' menu you should have a 'super dev mode' item.

 Since we are using RF, you should set up the RF annotation processing for
 the hupa project, otherwise you will get RF obfuscation exceptions.

 - Manolo


 On Tue, Feb 17, 2015 at 11:03 AM, dongxu don...@apache.org wrote:

 Hi Manolo,
 Is Hupa working under SuperDevMode now? If so, could you list the short
 steps for running Hupa in SuperDevMode? I was trying to make it run but
 failed.

 Thanks a lot.

 On Tue, Feb 17, 2015 at 6:07 AM, man...@apache.org wrote:

 Author: manolo
 Date: Mon Feb 16 22:07:11 2015
 New Revision: 1660222

 URL: http://svn.apache.org/r1660222
 Log:
 Latest GWT use SD by default, linker must be xsiframe

 Removed:

 james/hupa/trunk/client/src/main/java/org/apache/hupa/public/Hupa-sd.html
 Modified:
 james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml

 Modified:
 james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml
 URL:
 http://svn.apache.org/viewvc/james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml?rev=1660222r1=1660221r2=1660222view=diff

 ==
 --- james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml
 (original)
 +++ james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml
 Mon Feb 16 22:07:11 2015
 @@ -29,9 +29,12 @@
    <set-configuration-property name="locale.useragent" value="Y"/>
 
    <!-- Compile for all browsers -->
 -  <set-property name="user.agent" value="gecko1_8,safari,ie9"/>
 +  <set-property name="user.agent" value="gecko1_8,safari,ie9,ie10"/>
 +  <add-linker name="xsiframe"/>
 
    <set-configuration-property name="CssResource.style" value="obf"/>
 
    <entry-point class='org.apache.hupa.client.Hupa'/>
 +
 +  <collapse-all-properties />
   </module>









Re: svn commit: r1660222 - in /james/hupa/trunk: client/src/main/java/org/apache/hupa/public/Hupa-sd.html hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml

2015-02-17 Thread dongxu
Hi Manolo,
Is Hupa working under SuperDevMode now? If so, could you list the short
steps for running Hupa in SuperDevMode? I was trying to make it run but
failed.

Thanks a lot.

On Tue, Feb 17, 2015 at 6:07 AM, man...@apache.org wrote:

 Author: manolo
 Date: Mon Feb 16 22:07:11 2015
 New Revision: 1660222

 URL: http://svn.apache.org/r1660222
 Log:
 Latest GWT use SD by default, linker must be xsiframe

 Removed:

 james/hupa/trunk/client/src/main/java/org/apache/hupa/public/Hupa-sd.html
 Modified:
 james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml

 Modified:
 james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml
 URL:
 http://svn.apache.org/viewvc/james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml?rev=1660222r1=1660221r2=1660222view=diff

 ==
 --- james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml
 (original)
 +++ james/hupa/trunk/hupa/src/main/java/org/apache/hupa/HupaProd.gwt.xml
 Mon Feb 16 22:07:11 2015
 @@ -29,9 +29,12 @@
    <set-configuration-property name="locale.useragent" value="Y"/>
 
    <!-- Compile for all browsers -->
 -  <set-property name="user.agent" value="gecko1_8,safari,ie9"/>
 +  <set-property name="user.agent" value="gecko1_8,safari,ie9,ie10"/>
 +  <add-linker name="xsiframe"/>
 
    <set-configuration-property name="CssResource.style" value="obf"/>
 
    <entry-point class='org.apache.hupa.client.Hupa'/>
 +
 +  <collapse-all-properties />
   </module>







[sheepdog] [PATCH 4/5] tests: fix content of 052.out

2015-02-11 Thread Wang dongxu
Since the code is printf("%s\n", sd_strerror(rsp->result));, 052.out should
gain a new line.

Signed-off-by: Wang dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/052.out | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tests/functional/052.out b/tests/functional/052.out
index 2a533d5..f4487d0 100644
--- a/tests/functional/052.out
+++ b/tests/functional/052.out
@@ -52,6 +52,7 @@ Failed to read object 807c2b25 Waiting for other 
nodes to join cluster
 Failed to read inode header
   Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
 Cluster status: Waiting for other nodes to join cluster
+
 Failed to read object 807c2b25 Waiting for other nodes to join cluster
 Failed to read inode header
   Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
-- 
2.1.0





[sheepdog] [PATCH 0/5] tests: fix some test cases to suitable for new sheepdog and QEMU

2015-02-11 Thread Wang dongxu
QEMU and sheepdog changed some output formats while upgrading to new versions,
so the tests/functional test cases need some changes.

Wang dongxu (5):
  tests: avoid qemu-io warning
  tests: avoid qemu-img snapshot warning
  tests: correct vdi list
  tests: fix content of 052.out
  tests:fix vnode strategy output

 tests/functional/013 |  8 
 tests/functional/017 | 14 +++---
 tests/functional/024 |  6 +++---
 tests/functional/025 |  4 ++--
 tests/functional/030.out |  1 +
 tests/functional/039 | 22 +++---
 tests/functional/052.out |  1 +
 tests/functional/058 |  2 +-
 tests/functional/059 |  2 +-
 tests/functional/073.out |  2 +-
 tests/functional/075 |  2 +-
 tests/functional/081.out |  6 +++---
 tests/functional/082.out |  6 +++---
 tests/functional/087.out | 10 +-
 tests/functional/089.out |  2 +-
 tests/functional/090.out |  6 +++---
 tests/functional/096.out |  2 ++
 tests/functional/099.out |  4 ++--
 18 files changed, 52 insertions(+), 48 deletions(-)

-- 
2.1.0





[sheepdog] [PATCH 1/5] tests: avoid qemu-io warning

2015-02-11 Thread Wang dongxu
The qemu-io command adds a warning message because probing a raw image is
dangerous, so add the -f option to avoid this.

Signed-off-by: Wang dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/013 |  6 +++---
 tests/functional/017 |  2 +-
 tests/functional/024 |  6 +++---
 tests/functional/025 |  4 ++--
 tests/functional/039 | 22 +++---
 tests/functional/058 |  2 +-
 tests/functional/059 |  2 +-
 tests/functional/075 |  2 +-
 8 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/tests/functional/013 b/tests/functional/013
index b35b806..f724841 100755
--- a/tests/functional/013
+++ b/tests/functional/013
@@ -14,11 +14,11 @@ _cluster_format -c 1
 
 _vdi_create test 4G
 for i in `seq 1 9`; do
-$QEMU_IO -c write 0 512 -P $i sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 512 -P $i sheepdog:test | _filter_qemu_io
 $QEMU_IMG snapshot -c tag$i sheepdog:test
 done
 
-$QEMU_IO -c read 0 512 -P 9 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 512 -P 9 sheepdog:test | _filter_qemu_io
 for i in `seq 1 9`; do
-$QEMU_IO -c read 0 512 -P $i sheepdog:test:tag$i | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 512 -P $i sheepdog:test:tag$i | _filter_qemu_io
 done
diff --git a/tests/functional/017 b/tests/functional/017
index 5ebe7da..1c22c76 100755
--- a/tests/functional/017
+++ b/tests/functional/017
@@ -20,7 +20,7 @@ $QEMU_IMG snapshot -c tag3 sheepdog:test
 _vdi_create test2 4G
 $QEMU_IMG snapshot -c tag1 sheepdog:test2
 $QEMU_IMG snapshot -c tag2 sheepdog:test2
-$QEMU_IO -c write 0 512 sheepdog:test2:1 | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 512 sheepdog:test2:1 | _filter_qemu_io
 $QEMU_IMG snapshot -c tag3 sheepdog:test2
 
 $DOG vdi tree | _filter_short_date
diff --git a/tests/functional/024 b/tests/functional/024
index e1c1180..e8a33c4 100755
--- a/tests/functional/024
+++ b/tests/functional/024
@@ -23,14 +23,14 @@ _vdi_create ${VDI_NAME} ${VDI_SIZE}
 sleep 1
 
 echo filling ${VDI_NAME} with data
-$QEMU_IO -c write 0 ${VDI_SIZE} sheepdog:${VDI_NAME} | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 ${VDI_SIZE} sheepdog:${VDI_NAME} | _filter_qemu_io
 
 echo reading back ${VDI_NAME}
-$QEMU_IO -c read 0 1m sheepdog:${VDI_NAME} | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 1m sheepdog:${VDI_NAME} | _filter_qemu_io
 
 echo starting second sheep
 _start_sheep 6
 _wait_for_sheep 7
 
 echo reading data from second sheep
-$QEMU_IO -c read 0 ${VDI_SIZE} sheepdog:localhost:7001:${VDI_NAME} | 
_filter_qemu_io
+$QEMU_IO -f raw -c read 0 ${VDI_SIZE} sheepdog:localhost:7001:${VDI_NAME} | 
_filter_qemu_io
diff --git a/tests/functional/025 b/tests/functional/025
index 8f89ccb..37af0ea 100755
--- a/tests/functional/025
+++ b/tests/functional/025
@@ -26,10 +26,10 @@ echo creating vdi ${NAME}
 $DOG vdi create ${VDI_NAME} ${VDI_SIZE}
 
 echo filling ${VDI_NAME} with data
-$QEMU_IO -c write 0 ${VDI_SIZE} sheepdog:${VDI_NAME} | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 ${VDI_SIZE} sheepdog:${VDI_NAME} | _filter_qemu_io
 
 echo reading back ${VDI_NAME} from second zone
-$QEMU_IO -c read 0 1m sheepdog:localhost:7002:${VDI_NAME} | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 1m sheepdog:localhost:7002:${VDI_NAME} | 
_filter_qemu_io
 
 echo starting a sheep in the third zone
 for i in `seq 3 3`; do
diff --git a/tests/functional/039 b/tests/functional/039
index 5b2540f..fddd4fb 100755
--- a/tests/functional/039
+++ b/tests/functional/039
@@ -13,37 +13,37 @@ _wait_for_sheep 6
 _cluster_format -c 6
 _vdi_create test 4G
 
-$QEMU_IO -c write 0 512 -P 1 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 512 -P 1 sheepdog:test | _filter_qemu_io
 $DOG vdi snapshot test -s snap1
-$QEMU_IO -c write 0 512 -P 2 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 512 -P 2 sheepdog:test | _filter_qemu_io
 
 echo yes | $DOG vdi rollback test -s snap1
-$QEMU_IO -c read 0 512 -P 1 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 512 -P 1 sheepdog:test | _filter_qemu_io
 $DOG vdi tree | _filter_short_date
 _vdi_list
 
-$QEMU_IO -c write 0 512 -P 2 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 512 -P 2 sheepdog:test | _filter_qemu_io
 $DOG vdi snapshot test -s snap2
-$QEMU_IO -c write 0 512 -P 3 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c write 0 512 -P 3 sheepdog:test | _filter_qemu_io
 
 echo yes | $DOG vdi rollback test -s snap1
-$QEMU_IO -c read 0 512 -P 1 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 512 -P 1 sheepdog:test | _filter_qemu_io
 $DOG vdi tree | _filter_short_date
 _vdi_list
 
 echo yes | $DOG vdi rollback test -s snap2
-$QEMU_IO -c read 0 512 -P 2 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 512 -P 2 sheepdog:test | _filter_qemu_io
 $DOG vdi tree | _filter_short_date
 _vdi_list
 
 echo yes | $DOG vdi rollback test -s snap1
-$QEMU_IO -c read 0 512 -P 1 sheepdog:test | _filter_qemu_io
+$QEMU_IO -f raw -c read 0 512 -P 1 sheepdog:test | _filter_qemu_io
 $DOG vdi tree | _filter_short_date
 _vdi_list

[sheepdog] [PATCH 2/5] tests: avoid qemu-img snapshot warning

2015-02-11 Thread Wang dongxu
The qemu-img snapshot option will print a warning message while probing a raw
image, so filter it out using sed.

Signed-off-by: Wang dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/013 |  2 +-
 tests/functional/017 | 12 ++--
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tests/functional/013 b/tests/functional/013
index f724841..d19d8f8 100755
--- a/tests/functional/013
+++ b/tests/functional/013
@@ -15,7 +15,7 @@ _cluster_format -c 1
 _vdi_create test 4G
 for i in `seq 1 9`; do
 $QEMU_IO -f raw -c write 0 512 -P $i sheepdog:test | _filter_qemu_io
-$QEMU_IMG snapshot -c tag$i sheepdog:test
+$QEMU_IMG snapshot -c tag$i sheepdog:test 2>&1 | sed '/WARNING/, +2 d'
 done
 
 $QEMU_IO -f raw -c read 0 512 -P 9 sheepdog:test | _filter_qemu_io
diff --git a/tests/functional/017 b/tests/functional/017
index 1c22c76..2c34a55 100755
--- a/tests/functional/017
+++ b/tests/functional/017
@@ -13,14 +13,14 @@ _wait_for_sheep 6
 _cluster_format -c 1
 
 _vdi_create test 4G
-$QEMU_IMG snapshot -c tag1 sheepdog:test
-$QEMU_IMG snapshot -c tag2 sheepdog:test
-$QEMU_IMG snapshot -c tag3 sheepdog:test
+$QEMU_IMG snapshot -c tag1 sheepdog:test 2>&1 | sed '/WARNING/, +2 d'
+$QEMU_IMG snapshot -c tag2 sheepdog:test 2>&1 | sed '/WARNING/, +2 d'
+$QEMU_IMG snapshot -c tag3 sheepdog:test 2>&1 | sed '/WARNING/, +2 d'
 
 _vdi_create test2 4G
-$QEMU_IMG snapshot -c tag1 sheepdog:test2
-$QEMU_IMG snapshot -c tag2 sheepdog:test2
+$QEMU_IMG snapshot -c tag1 sheepdog:test2 2>&1 | sed '/WARNING/, +2 d'
+$QEMU_IMG snapshot -c tag2 sheepdog:test2 2>&1 | sed '/WARNING/, +2 d'
 $QEMU_IO -f raw -c write 0 512 sheepdog:test2:1 | _filter_qemu_io
-$QEMU_IMG snapshot -c tag3 sheepdog:test2
+$QEMU_IMG snapshot -c tag3 sheepdog:test2 2>&1 | sed '/WARNING/, +2 d'
 
 $DOG vdi tree | _filter_short_date
-- 
2.1.0





[sheepdog] [PATCH 3/5] tests: correct vdi list

2015-02-11 Thread Wang dongxu
dog vdi list adds a "Block Size Shift" column; add it to the test cases.

Signed-off-by: Wang dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/073.out |  2 +-
 tests/functional/081.out |  6 +++---
 tests/functional/082.out |  6 +++---
 tests/functional/087.out | 10 +-
 tests/functional/089.out |  2 +-
 tests/functional/090.out |  6 +++---
 tests/functional/099.out |  4 ++--
 7 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/tests/functional/073.out b/tests/functional/073.out
index 8dd2173..3c5fd47 100644
--- a/tests/functional/073.out
+++ b/tests/functional/073.out
@@ -6,6 +6,6 @@ Cluster created at DATE
 
 Epoch Time   Version [Host:Port:V-Nodes,,,]
 DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   test 0  4.0 MB  0.0 MB  0.0 MB DATE   7c2b25  3  
 hello
diff --git a/tests/functional/081.out b/tests/functional/081.out
index 92df8ca..7092e97 100644
--- a/tests/functional/081.out
+++ b/tests/functional/081.out
@@ -55,7 +55,7 @@ vdi.c
  HTTP/1.1 416 Requested Range Not Satisfiable
  HTTP/1.1 416 Requested Range Not Satisfiable
  HTTP/1.1 416 Requested Range Not Satisfiable
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   sd/dog   0   16 PB   56 MB  0.0 MB DATE   5a5cbf4:2  
   sd   0   16 PB  8.0 MB  0.0 MB DATE   7927f24:2  
   sd/sheep 0   16 PB  144 MB  0.0 MB DATE   8ad11e4:2  
@@ -65,7 +65,7 @@ data137
 data19
 data4
 data97
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   sd/dog   0   16 PB   56 MB  0.0 MB DATE   5a5cbf4:2  
   sd   0   16 PB  8.0 MB  0.0 MB DATE   7927f24:2  
   sd/sheep 0   16 PB  144 MB  0.0 MB DATE   8ad11e4:2  
@@ -73,7 +73,7 @@ data97
   sd/sheep/allocator 0   16 PB  268 MB  0.0 MB DATE   fd57fc4:2
  
 dog
 sheep
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   sd/dog   0   16 PB   56 MB  0.0 MB DATE   5a5cbf4:2  
   sd   0   16 PB  8.0 MB  0.0 MB DATE   7927f24:2  
   sd/sheep 0   16 PB  144 MB  0.0 MB DATE   8ad11e4:2  
diff --git a/tests/functional/082.out b/tests/functional/082.out
index b3f4dd9..78c5e6a 100644
--- a/tests/functional/082.out
+++ b/tests/functional/082.out
@@ -60,7 +60,7 @@ trace.c
 treeview.c
 trunk.c
 vdi.c
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   sd/dog   0   16 PB   56 MB  0.0 MB DATE   5a5cbf4:2  
   sd   0   16 PB  8.0 MB  0.0 MB DATE   7927f24:2  
   sd/sheep 0   16 PB  176 MB  0.0 MB DATE   8ad11e4:2  
@@ -78,7 +78,7 @@ data6
 data7
 data8
 data9
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   sd/dog   0   16 PB   56 MB  0.0 MB DATE   5a5cbf4:2  
   sd   0   16 PB  8.0 MB  0.0 MB DATE   7927f24:2  
   sd/sheep 0   16 PB  176 MB  0.0 MB DATE   8ad11e4:2  
@@ -86,7 +86,7 @@ data9
   sd/sheep/allocator 0   16 PB  316 MB  0.0 MB DATE   fd57fc4:2
  
 dog
 sheep
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   sd/dog   0   16 PB   56 MB  0.0 MB DATE   5a5cbf4:2  
   sd   0   16 PB  8.0 MB  0.0 MB DATE   7927f24:2  
   sd/sheep 0   16 PB  176 MB  0.0 MB DATE   8ad11e4:2  
diff --git a/tests/functional/087.out b/tests/functional/087.out
index 04e4210..0fcc7f3 100644
--- a/tests/functional/087.out
+++ b/tests/functional/087.out
@@ -2,11 +2,11 @@ QA output created by 087
 using backend plain store
 206
 206
-  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
+  Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag   Block Size Shift
   sd   0   16 PB  4.0 MB  0.0 MB DATE   7927f24:2  
   sd/sheep 0   16 PB  8.0 MB  0.0 MB DATE   8ad11e4:2  
   sd/sheep/allocator 0   16 PB

[sheepdog] [PATCH 5/5] tests:fix vnode strategy output

2015-02-11 Thread Wang dongxu
Since commit 5fed9d6, cluster vnodes strategy information is printed, so fix
the expected outputs in the test cases.

Signed-off-by: Wang dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/030.out | 1 +
 tests/functional/096.out | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/tests/functional/030.out b/tests/functional/030.out
index 2c788f0..baf51a7 100644
--- a/tests/functional/030.out
+++ b/tests/functional/030.out
@@ -37,6 +37,7 @@ s test22   10 MB   12 MB  0.0 MB DATE   fd3816  3 
   22
   test20   10 MB  0.0 MB   12 MB DATE   fd3817  322
 Cluster status: running, auto-recovery enabled
 Cluster store: plain with 6 redundancy policy
+Cluster vnodes strategy: auto
 Cluster vnode mode: node
 Cluster created at DATE
 
diff --git a/tests/functional/096.out b/tests/functional/096.out
index 2ff9dc6..a555287 100644
--- a/tests/functional/096.out
+++ b/tests/functional/096.out
@@ -27,6 +27,7 @@ $ ../../dog/dog cluster format -c 3
 $ ../../dog/dog cluster info -v
 Cluster status: running, auto-recovery enabled
 Cluster store: plain with 3 redundancy policy
+Cluster vnodes strategy: auto
 Cluster vnode mode: node
 Cluster created at DATE
 
@@ -80,6 +81,7 @@ The cluster's redundancy level is set to 2, the old one was 3.
 $ ../../dog/dog cluster info -v
 Cluster status: running, auto-recovery enabled
 Cluster store: plain with 2 redundancy policy
+Cluster vnodes strategy: auto
 Cluster vnode mode: node
 Cluster created at DATE
 
-- 
2.1.0





[jira] [Updated] (SPARK-5616) Add examples for PySpark API

2015-02-08 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-5616:
--
Description: 
There are fewer PySpark API examples than Spark Scala API examples. For example:

1. Broadcast: how to use the broadcast operation API.
2. Module: how to import another Python file contained in a zip file.

Add more examples for newcomers who want to use PySpark.

  was:
PySpark API examples are less than Spark scala API. For example:  

1.Boardcast: how to use boardcast operation APi 
2.Module: how to import a other python file in zip file.

Add more examples for freshman who wanna use PySpark.


 Add examples for PySpark API
 

 Key: SPARK-5616
 URL: https://issues.apache.org/jira/browse/SPARK-5616
 Project: Spark
  Issue Type: Improvement
  Components: PySpark
Reporter: dongxu
Priority: Minor
  Labels: examples, pyspark, python

 There are fewer PySpark API examples than Spark Scala API examples. For example:
 1. Broadcast: how to use the broadcast operation API.
 2. Module: how to import another Python file contained in a zip file.
 Add more examples for newcomers who want to use PySpark.
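
As a minimal sketch (hypothetical data, and assuming an existing SparkContext
`sc`), here is the Scala-side broadcast usage that such a PySpark example
could mirror:

// broadcast a small lookup table once, then reference it from tasks
val lookup = sc.broadcast(Map(1 -> "one", 2 -> "two"))
val named = sc.parallelize(Seq(1, 2, 2)).map(i => lookup.value(i)).collect()
// named: Array(one, two, two)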






[jira] [Created] (SPARK-5616) Add examples for PySpark API

2015-02-05 Thread dongxu (JIRA)
dongxu created SPARK-5616:
-

 Summary: Add examples for PySpark API
 Key: SPARK-5616
 URL: https://issues.apache.org/jira/browse/SPARK-5616
 Project: Spark
  Issue Type: Improvement
  Components: PySpark
Reporter: dongxu
 Fix For: 1.3.0


PySpark API examples are less than Spark scala API. For example:  

1.Boardcast: how to use boardcast operation APi 
2.Module: how to import a other python file in zip file.

Add more examples for freshman who wanna use PySpark.






[jira] [Updated] (SPARK-5527) Add standalone document configiration to explain how to make cluster conf file consistency

2015-02-02 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-5527:
--
Description: 
We must make all nodes' conf files consistent when we start our standalone 
cluster. For example, we set SPARK_WORKER_INSTANCES=2 to start 2 workers on 
each machine.
I see this code in $SPARK_HOME/sbin/spark-daemon.sh:

   if [ "$SPARK_MASTER" != "" ]; then
      echo rsync from "$SPARK_MASTER"
      rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' "$SPARK_MASTER/" "$SPARK_HOME"
   fi

I think we had better mention it in the documentation.

  was:
We need to make all nodes' conf files consistent when we start our standalone 
cluster. For example, we set SPARK_WORKER_INSTANCES=2 to start 2 workers on 
each machine.
I see this code in $SPARK_HOME/sbin/spark-daemon.sh:

   if [ "$SPARK_MASTER" != "" ]; then
      echo rsync from "$SPARK_MASTER"
      rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' "$SPARK_MASTER/" "$SPARK_HOME"
   fi

I think we had better mention it in the documentation.


 Add standalone document configiration to explain  how to make cluster conf 
 file consistency
 ---

 Key: SPARK-5527
 URL: https://issues.apache.org/jira/browse/SPARK-5527
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 1.2.0, 1.3.0
Reporter: dongxu
Priority: Minor
  Labels: docuentation, starter
   Original Estimate: 10m
  Remaining Estimate: 10m

 We must make all nodes' conf files consistent when we start our standalone 
 cluster. For example, we set SPARK_WORKER_INSTANCES=2 to start 2 workers 
 on each machine.
 I see this code in $SPARK_HOME/sbin/spark-daemon.sh:
    if [ "$SPARK_MASTER" != "" ]; then
       echo rsync from "$SPARK_MASTER"
       rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' "$SPARK_MASTER/" "$SPARK_HOME"
    fi
 I think we had better mention it in the documentation.






[jira] [Updated] (SPARK-5527) Improvements to standalone doc. - how to make cluster conf file consistency

2015-02-02 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-5527:
--
Summary: Improvements to standalone doc. - how to make cluster conf file 
consistency  (was: Add standalone document configiration to explain  how to 
make cluster conf file consistency)

 Improvements to standalone doc. - how to make cluster conf file consistency
 ---

 Key: SPARK-5527
 URL: https://issues.apache.org/jira/browse/SPARK-5527
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 1.2.0, 1.3.0
Reporter: dongxu
Priority: Minor
  Labels: docuentation, starter
   Original Estimate: 10m
  Remaining Estimate: 10m

 We must make all nodes' conf files consistent when we start our standalone 
 cluster. For example, we set SPARK_WORKER_INSTANCES=2 to start 2 workers 
 on each machine.
 I see this code in $SPARK_HOME/sbin/spark-daemon.sh:
    if [ "$SPARK_MASTER" != "" ]; then
       echo rsync from "$SPARK_MASTER"
       rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' "$SPARK_MASTER/" "$SPARK_HOME"
    fi
 I think we had better mention it in the documentation.






[jira] [Updated] (SPARK-5527) Improvements to standalone doc. - how to sync cluster conf file

2015-02-02 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-5527:
--
Summary: Improvements to standalone doc. - how to sync cluster conf file  
(was: Improvements to standalone doc. - how to make cluster conf file 
consistency)

 Improvements to standalone doc. - how to sync cluster conf file
 ---

 Key: SPARK-5527
 URL: https://issues.apache.org/jira/browse/SPARK-5527
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 1.2.0, 1.3.0
Reporter: dongxu
Priority: Minor
  Labels: docuentation, starter
   Original Estimate: 10m
  Remaining Estimate: 10m

 We must make all nodes' conf files consistent when we start our standalone 
 cluster. For example, we set SPARK_WORKER_INSTANCES=2 to start 2 workers 
 on each machine.
 I see this code in $SPARK_HOME/sbin/spark-daemon.sh:
    if [ "$SPARK_MASTER" != "" ]; then
       echo rsync from "$SPARK_MASTER"
       rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' "$SPARK_MASTER/" "$SPARK_HOME"
    fi
 I think we had better mention it in the documentation.






[jira] [Created] (SPARK-5527) Add standalone document configiration to explain how to make cluster conf file consistency

2015-02-02 Thread dongxu (JIRA)
dongxu created SPARK-5527:
-

 Summary: Add standalone document configiration to explain  how to 
make cluster conf file consistency
 Key: SPARK-5527
 URL: https://issues.apache.org/jira/browse/SPARK-5527
 Project: Spark
  Issue Type: Documentation
  Components: Documentation
Affects Versions: 1.2.0, 1.3.0
Reporter: dongxu
Priority: Minor


We need to make all nodes' conf files consistent when we start our standalone 
cluster. For example, we set SPARK_WORKER_INSTANCES=2 to start 2 workers on 
each machine.
I see this code in $SPARK_HOME/sbin/spark-daemon.sh:

   if [ "$SPARK_MASTER" != "" ]; then
      echo rsync from "$SPARK_MASTER"
      rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*' --exclude='contrib/hod/logs/*' "$SPARK_MASTER/" "$SPARK_HOME"
   fi

I think we had better mention it in the documentation.






[sheepdog] [PATCH] Change tests outputs to be suitable for new cluster info format

2015-02-02 Thread Wang Dongxu
From: Wang Dongxu wangdon...@cmss.chinamobile.com

Since commit 4fea6f95a2de90f45f90415f289083c6b29120a7, dog cluster info changed
its output format; modify these outputs so the tests/functional cases match the
new format.

Signed-off-by: Wang Dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/001.out |   36 +++---
 tests/functional/002.out |   30 +-
 tests/functional/003.out |   30 +-
 tests/functional/004.out |  100 ++--
 tests/functional/005.out |   90 
 tests/functional/007.out |   20 
 tests/functional/010.out |8 ++--
 tests/functional/025.out |   24 
 tests/functional/030.out |4 +-
 tests/functional/043.out |   82 +++---
 tests/functional/051.out |   18 +++---
 tests/functional/052.out |  116 +-
 tests/functional/053.out |  128 +++---
 tests/functional/054.out |   10 ++--
 tests/functional/055.out |   20 
 tests/functional/056.out |   14 +++---
 tests/functional/057.out |   18 +++---
 tests/functional/063.out |8 ++--
 tests/functional/064.out |   10 ++--
 tests/functional/065.out |8 ++--
 tests/functional/066.out |   72 +-
 tests/functional/068.out |   36 +++---
 tests/functional/069.out |8 ++--
 tests/functional/070.out |   36 +++---
 tests/functional/073.out |4 +-
 tests/functional/085.out |4 +-
 tests/functional/088.out |   12 ++--
 tests/functional/096.out |8 ++--
 tests/functional/098.out |   26 +-
 29 files changed, 490 insertions(+), 490 deletions(-)

diff --git a/tests/functional/001.out b/tests/functional/001.out
index 82d27da..ef39e0a 100644
--- a/tests/functional/001.out
+++ b/tests/functional/001.out
@@ -5,29 +5,29 @@ Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  4 [127.0.0.1:7002]
-DATE  3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  4 [127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
 Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  4 [127.0.0.1:7002]
-DATE  3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  4 [127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
 Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  4 [127.0.0.1:7002]
-DATE  3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  4 [127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
diff --git a/tests/functional/002.out b/tests/functional/002.out
index ce99957..0efa4be 100644
--- a/tests/functional/002.out
+++ b/tests/functional/002.out
@@ -5,26 +5,26 @@ Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  4 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  3 [127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  4 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
 Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  4 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  3 [127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  4 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7002:128

[sheepdog] [PATCH] Change tests outputs to be suitable for new cluster info format

2015-02-02 Thread Wang Dongxu
Since commit 4fea6f95a2de90f45f90415f289083c6b29120a7, dog cluster info changed
its output format; modify these outputs so the tests/functional cases match the
new format.

Signed-off-by: Wang Dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/001.out |   36 +++---
 tests/functional/002.out |   30 +-
 tests/functional/003.out |   30 +-
 tests/functional/004.out |  100 ++--
 tests/functional/005.out |   90 
 tests/functional/007.out |   20 
 tests/functional/010.out |8 ++--
 tests/functional/025.out |   24 
 tests/functional/030.out |4 +-
 tests/functional/043.out |   82 +++---
 tests/functional/051.out |   18 +++---
 tests/functional/052.out |  116 +-
 tests/functional/053.out |  128 +++---
 tests/functional/054.out |   10 ++--
 tests/functional/055.out |   20 
 tests/functional/056.out |   14 +++---
 tests/functional/057.out |   18 +++---
 tests/functional/063.out |8 ++--
 tests/functional/064.out |   10 ++--
 tests/functional/065.out |8 ++--
 tests/functional/066.out |   72 +-
 tests/functional/068.out |   36 +++---
 tests/functional/069.out |8 ++--
 tests/functional/070.out |   36 +++---
 tests/functional/073.out |4 +-
 tests/functional/085.out |4 +-
 tests/functional/088.out |   12 ++--
 tests/functional/096.out |8 ++--
 tests/functional/098.out |   26 +-
 29 files changed, 490 insertions(+), 490 deletions(-)

diff --git a/tests/functional/001.out b/tests/functional/001.out
index 82d27da..ef39e0a 100644
--- a/tests/functional/001.out
+++ b/tests/functional/001.out
@@ -5,29 +5,29 @@ Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  4 [127.0.0.1:7002]
-DATE  3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  4 [127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
 Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  4 [127.0.0.1:7002]
-DATE  3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  4 [127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
 Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  4 [127.0.0.1:7002]
-DATE  3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  4 [127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
diff --git a/tests/functional/002.out b/tests/functional/002.out
index ce99957..0efa4be 100644
--- a/tests/functional/002.out
+++ b/tests/functional/002.out
@@ -5,26 +5,26 @@ Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  4 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  3 [127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  4 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  1 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
 Cluster status: running, auto-recovery enabled
 
 Cluster created at DATE
 
-Epoch Time   Version
-DATE  4 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE  3 [127.0.0.1:7002]
-DATE  2 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE  1 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
+Epoch Time   Version [Host:Port:V-Nodes,,,]
+DATE  4 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE  3 [127.0.0.1:7002:128]
+DATE  2 [127.0.0.1:7001:128, 127.0.0.1:7002:128

svn commit: r1637049 - /james/hupa/trunk/emma.test.txt

2014-11-05 Thread dongxu
Author: dongxu
Date: Thu Nov  6 07:46:38 2014
New Revision: 1637049

URL: http://svn.apache.org/r1637049
Log:
remove the test file from emma

Removed:
james/hupa/trunk/emma.test.txt


-
To unsubscribe, e-mail: server-dev-unsubscr...@james.apache.org
For additional commands, e-mail: server-dev-h...@james.apache.org



svn commit: r1636808 - /james/hupa/trunk/README.txt

2014-11-04 Thread dongxu
Author: dongxu
Date: Wed Nov  5 07:22:26 2014
New Revision: 1636808

URL: http://svn.apache.org/r1636808
Log:
typo

Modified:
james/hupa/trunk/README.txt

Modified: james/hupa/trunk/README.txt
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/README.txt?rev=1636808&r1=1636807&r2=1636808&view=diff
==
--- james/hupa/trunk/README.txt (original)
+++ james/hupa/trunk/README.txt Wed Nov  5 07:22:26 2014
@@ -1,6 +1,6 @@
 
 ## Introduction ##
-Hupa is an Rich IMAP-based Webmail application written in GWT.
+Hupa is a Rich IMAP-based Webmail application written in GWT.
 
 Hupa has been entirely written in java to be coherent with the language used 
in the James project.
 It has been a development reference using GWT good practices (MVP pattern and 
Unit testing)






[jira] [Created] (SPARK-4201) Can't use concat() on partition column in where condition (Hive compatibility problem)

2014-11-02 Thread dongxu (JIRA)
dongxu created SPARK-4201:
-

 Summary: Can't use concat() on partition column in where condition 
(Hive compatibility problem)
 Key: SPARK-4201
 URL: https://issues.apache.org/jira/browse/SPARK-4201
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 1.1.0, 1.0.0
 Environment: Hive 0.12+hadoop 2.4/hadoop 2.2 +spark 1.1
Reporter: dongxu
Priority: Minor


The team used Hive for queries; we are trying to move them to Spark SQL.
When I run a query such as
select count(1) from gulfstream_day_driver_base_2 where
concat(year,month,day) = '20140929';
it fails, although it works well in Hive. I have to rewrite the SQL as
select count(1) from
gulfstream_day_driver_base_2 where year = 2014 and month = 09 and day = 29.
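
Side by side, for readability, the failing query and the workaround (table
name, column names, and versions exactly as reported in this issue; nothing
else is assumed):

-- Fails in Spark SQL 1.0/1.1 with the "Task not serializable" error shown
-- below (works in Hive 0.12):
SELECT count(1)
FROM gulfstream_day_driver_base_2
WHERE concat(year, month, day) = '20140929';

-- Workaround: compare the partition columns directly.
SELECT count(1)
FROM gulfstream_day_driver_base_2
WHERE year = 2014 AND month = 09 AND day = 29;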
The error log follows:
14/11/03 15:05:03 ERROR SparkSQLDriver: Failed in [select count(1) from  
gulfstream_day_driver_base_2 where  concat(year,month,day) = '20140929']
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Aggregate false, [], [SUM(PartialCount#1390L) AS c_0#1337L]
 Exchange SinglePartition
  Aggregate true, [], [COUNT(1) AS PartialCount#1390L]
   HiveTableScan [], (MetastoreRelation default, gulfstream_day_driver_base_2, 
None), 
Some((HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat(year#1339,month#1340,day#1341)
 = 20140929))

at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:47)
at org.apache.spark.sql.execution.Aggregate.execute(Aggregate.scala:126)
at 
org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
at 
org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
at 
org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:415)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:59)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: 
execute, tree:
Exchange SinglePartition
 Aggregate true, [], [COUNT(1) AS PartialCount#1390L]
  HiveTableScan [], (MetastoreRelation default, gulfstream_day_driver_base_2, 
None), 
Some((HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat(year#1339,month#1340,day#1341)
 = 20140929))

at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:47)
at org.apache.spark.sql.execution.Exchange.execute(Exchange.scala:44)
at 
org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1.apply(Aggregate.scala:128)
at 
org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1.apply(Aggregate.scala:127)
at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:46)
... 16 more
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: 
execute, tree:
Aggregate true, [], [COUNT(1) AS PartialCount#1390L]
 HiveTableScan [], (MetastoreRelation default, gulfstream_day_driver_base_2, 
None), 
Some((HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat(year#1339,month#1340,day#1341)
 = 20140929))

at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:47)
at org.apache.spark.sql.execution.Aggregate.execute(Aggregate.scala:126)
at 
org.apache.spark.sql.execution.Exchange$$anonfun$execute$1.apply(Exchange.scala:86)
at 
org.apache.spark.sql.execution.Exchange$$anonfun$execute$1.apply(Exchange.scala:45)
at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:46)
... 20 more
Caused by: org.apache.spark.SparkException: Task not serializable
at 
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
at org.apache.spark.SparkContext.clean(SparkContext.scala:1242

[jira] [Updated] (SPARK-4201) Can't use concat() on partition column in where condition (Hive compatibility problem)

2014-11-02 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-4201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated SPARK-4201:
--
Description: 
The team used Hive for queries; we are trying to move them to Spark SQL.
When I run a query such as
select count(1) from gulfstream_day_driver_base_2 where
concat(year,month,day) = '20140929';
it fails, although it works well in Hive. I have to rewrite the SQL as
select count(1) from
gulfstream_day_driver_base_2 where year = 2014 and month = 09 and day = 29.
The error log follows:
14/11/03 15:05:03 ERROR SparkSQLDriver: Failed in [select count(1) from  
gulfstream_day_driver_base_2 where  concat(year,month,day) = '20140929']
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Aggregate false, [], [SUM(PartialCount#1390L) AS c_0#1337L]
 Exchange SinglePartition
  Aggregate true, [], [COUNT(1) AS PartialCount#1390L]
   HiveTableScan [], (MetastoreRelation default, gulfstream_day_driver_base_2, 
None), 
Some((HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat(year#1339,month#1340,day#1341)
 = 20140929))

at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:47)
at org.apache.spark.sql.execution.Aggregate.execute(Aggregate.scala:126)
at 
org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:360)
at 
org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:360)
at 
org.apache.spark.sql.hive.HiveContext$QueryExecution.stringResult(HiveContext.scala:415)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:59)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: 
execute, tree:
Exchange SinglePartition
 Aggregate true, [], [COUNT(1) AS PartialCount#1390L]
  HiveTableScan [], (MetastoreRelation default, gulfstream_day_driver_base_2, 
None), 
Some((HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat(year#1339,month#1340,day#1341)
 = 20140929))

at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:47)
at org.apache.spark.sql.execution.Exchange.execute(Exchange.scala:44)
at 
org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1.apply(Aggregate.scala:128)
at 
org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1.apply(Aggregate.scala:127)
at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:46)
... 16 more
Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: 
execute, tree:
Aggregate true, [], [COUNT(1) AS PartialCount#1390L]
 HiveTableScan [], (MetastoreRelation default, gulfstream_day_driver_base_2, 
None), 
Some((HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcat(year#1339,month#1340,day#1341)
 = 20140929))

at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:47)
at org.apache.spark.sql.execution.Aggregate.execute(Aggregate.scala:126)
at 
org.apache.spark.sql.execution.Exchange$$anonfun$execute$1.apply(Exchange.scala:86)
at 
org.apache.spark.sql.execution.Exchange$$anonfun$execute$1.apply(Exchange.scala:45)
at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:46)
... 20 more
Caused by: org.apache.spark.SparkException: Task not serializable
at 
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
at org.apache.spark.SparkContext.clean(SparkContext.scala:1242)
at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:597)
at 
org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1.apply(Aggregate.scala:128)
at 
org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1.apply(Aggregate.scala:127)
at 
org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:46

[gmx-users] Cannot combine residue position based cog selection with index output for g_select

2014-10-24 Thread Huang Dongxu
Hi,

I have a system consisting of a box of water and a monolayer of polymer in
the center. I am trying to extract solvent molecules that have z
coordinates greater than 5 nm using g_select and output an index file.
Basically I want to separate my system into two portions: a box of waters
above the monolayer and the rest of the system containing the monolayer.
With the water-only portion extracted by trjconv, I want to use genion to
replace some water molecules with salt and put this modified portion back
into my system that contains the monolayer.

Right now, I am able to use g_select -on -select 'resname SOL and z > 5' to
select all atoms that belong to SOL and have z > 5 nm. However, this way
does not guarantee that all the atoms selected are from SOL molecules whose
cog is above the z = 5 plane. As a result, some atoms that are part of SOL
molecules below the z = 5 plane are also selected, and the entire selection
contains some complete water molecules plus some atoms from below-plane
water molecules (incomplete waters).

I am not sure whether, with this kind of selection, if I extract it with
trjconv and replace water with salt using genion, and then put it back into
the original system, I will get correct alignment of those atoms from
incomplete water molecules and end up with the desired configuration. (I
think genion will only replace complete residues; however, when using
trjconv to generate new coordinate files, I am not sure whether the system
will be re-centered at the origin. If so, the coordinates will definitely
be misplaced, and when I put the modified part back there will be
misalignment.)

To avoid all that, I'd like to use the 'cog of SELECTION' form in
g_select, except for one problem: the -on option only works with atom
selections.

So my essential question is: is there a way to use g_select with both the
-on option and the cog option? Or is there another way to accomplish my
task?
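
What I have in mind is something along these lines (input file names are
placeholders; this keys on the water oxygen rather than the true cog, which
for water should be a close approximation):

# Sketch: select whole SOL residues whose oxygen lies above z = 5 nm.
# "same residue as" pulls in every atom of each matching residue, so the
# index group written by -on contains only complete water molecules.
g_select -s topol.tpr -f confout.gro -on sol_above.ndx \
         -select 'resname SOL and same residue as (name OW and z > 5)'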

I have searched the archive and did not find something that answers my
question.

Thank you all for helping out.

Best,

Dongxu Huang
Graduate student at Northwestern University
-- 
Gromacs Users mailing list

* Please search the archive at 
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a 
mail to gmx-users-requ...@gromacs.org.


Re: [PerlChina] Notes on perlchina

2014-08-26 Thread Dongxu Ma

-- 
-Dongxu

-- 
You received this message because you are subscribed to the "PerlChina
Mongers" discussion group on Google Groups.
To unsubscribe from this group and stop receiving emails from it, send an
email to perlchina+unsubscr...@googlegroups.com.
To post to this group, send email to perlchina@googlegroups.com.
Visit this group at http://groups.google.com/group/perlchina.
For more options, visit https://groups.google.com/d/optout.


[jira] [Updated] (ROL-2046) Sorry! We couldn't find your document

2014-08-03 Thread dongxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/ROL-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dongxu updated ROL-2046:


Description: 
After logging in and navigating to the design tab of the settings, we can
choose the 'basic mobile' theme. When we then preview the mobile theme and
click 'View Mobile Weblog', this complaint appears: "Sorry! We couldn't find
your document."

[USER SOLUTION]: If you want to go back to normal, you have to remove the
cookie key roller_user_request_type under the roller domain in your browser.

However, we have to say it is a bug.

  was:
After logging in and navigating to the design tab of the settings, we can
choose the 'basic mobile' theme. When we then preview the mobile theme and
click 'View Mobile Weblog', this complaint appears: "Sorry! We couldn't find
your document."

[USER SOLUTION]: you have to remove the cookie key roller_user_request_type
under the roller domain in your browser.

However, we have to say it is a bug.


 Sorry! We couldn't find your document
 -

 Key: ROL-2046
 URL: https://issues.apache.org/jira/browse/ROL-2046
 Project: Apache Roller
  Issue Type: Bug
  Components: Themes and Macros, User Interface - General
 Environment: Tomcat 7.0.55 installed on Ubuntu 12.04 with MySQL 5.1
Reporter: dongxu
Assignee: Roller Unassigned

 After logging in and navigating to the design tab of the settings, we can 
 choose the 'basic mobile' theme. When we then preview the mobile theme and 
 click 'View Mobile Weblog', this complaint appears: "Sorry! We couldn't 
 find your document."
 [USER SOLUTION]: If you want to go back to normal, you have to remove the 
 cookie key roller_user_request_type under the roller domain in your browser 
 (see the sketch after this block).
 However, we have to say it is a bug.
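
For illustration, a minimal server-side sketch of the same cookie removal in
Java, assuming a standard servlet environment (the class name, method name,
and cookie path are hypothetical; only the cookie name comes from the report
above):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Hypothetical helper: expire the roller_user_request_type cookie so the
// standard (non-mobile) view is served again on the next request.
final class RequestTypeCookieReset {
    static void clearRequestTypeCookie(HttpServletResponse response) {
        Cookie cookie = new Cookie("roller_user_request_type", "");
        cookie.setPath("/");  // assumed path; must match how the cookie was set
        cookie.setMaxAge(0);  // a max-age of 0 tells the browser to delete it
        response.addCookie(cookie);
    }
}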



--
This message was sent by Atlassian JIRA
(v6.2#6252)


svn commit: r1613760 - in /james/hupa/trunk/client/src/main/java/org/apache/hupa/client: activity/ ui/

2014-07-27 Thread dongxu
Author: dongxu
Date: Sun Jul 27 07:47:50 2014
New Revision: 1613760

URL: http://svn.apache.org/r1613760
Log:
scrub code

Modified:

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ContactsListActivity.java

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/LabelListActivity.java

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/TopBarActivity.java

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/AddressListView.java

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/MessageContentView.java

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/MessageListView.java

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ContactsListActivity.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ContactsListActivity.java?rev=1613760&r1=1613759&r2=1613760&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ContactsListActivity.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/ContactsListActivity.java
 Sun Jul 27 07:47:50 2014
@@ -46,8 +46,6 @@ public class ContactsListActivity extend
 
 @Inject private HupaController hupaController;
 @Inject private Displayable display;
-@Inject private LabelPropertiesActivity.Displayable labelProperties;
-
 
 @Override
 public void start(AcceptsOneWidget container, EventBus eventBus) {

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/LabelListActivity.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/LabelListActivity.java?rev=1613760&r1=1613759&r2=1613760&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/LabelListActivity.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/LabelListActivity.java
 Sun Jul 27 07:47:50 2014
@@ -46,8 +46,6 @@ public class LabelListActivity extends A
 
 @Inject private HupaController hupaController;
 @Inject private Displayable display;
-@Inject private LabelPropertiesActivity.Displayable labelProperties;
-
 
 @Override
 public void start(AcceptsOneWidget container, EventBus eventBus) {

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/TopBarActivity.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/TopBarActivity.java?rev=1613760&r1=1613759&r2=1613760&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/TopBarActivity.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/activity/TopBarActivity.java
 Sun Jul 27 07:47:50 2014
@@ -20,9 +20,7 @@
 package org.apache.hupa.client.activity;
 
 import org.apache.hupa.client.HupaController;
-import org.apache.hupa.client.place.DefaultPlace;
 import org.apache.hupa.client.rf.LogoutUserRequest;
-import org.apache.hupa.client.ui.LoginLayoutable;
 import org.apache.hupa.shared.domain.LogoutUserResult;
 import org.apache.hupa.shared.events.LogoutEvent;
 
@@ -31,12 +29,10 @@ import com.google.gwt.event.dom.client.C
 import com.google.gwt.event.dom.client.HasClickHandlers;
 import com.google.gwt.event.shared.EventBus;
 import com.google.gwt.uibinder.client.UiField;
-import com.google.gwt.user.client.Window;
 import com.google.gwt.user.client.ui.AcceptsOneWidget;
 import com.google.gwt.user.client.ui.HTML;
 import com.google.gwt.user.client.ui.HTMLPanel;
 import com.google.gwt.user.client.ui.IsWidget;
-import com.google.gwt.user.client.ui.RootLayoutPanel;
 import com.google.inject.Inject;
 import com.google.web.bindery.requestfactory.shared.Receiver;
 import com.google.web.bindery.requestfactory.shared.ServerFailure;
@@ -44,7 +40,6 @@ import com.google.web.bindery.requestfac
 public class TopBarActivity extends AppBaseActivity {
 
 @Inject private Displayable display;
-@Inject private LoginLayoutable loginLayout;
 
 @UiField protected HTMLPanel userLabel;
 

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/AddressListView.java
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/AddressListView.java?rev=1613760&r1=1613759&r2=1613760&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/AddressListView.java
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/AddressListView.java
 Sun Jul 27 07:47:50 2014
@@ -21,8 +21,6 @@ package

svn commit: r1595968 - /james/hupa/trunk/pom.xml

2014-05-19 Thread dongxu
Author: dongxu
Date: Mon May 19 17:31:16 2014
New Revision: 1595968

URL: http://svn.apache.org/r1595968
Log:
upgrade gwt to the latest 2.6.1

Modified:
james/hupa/trunk/pom.xml

Modified: james/hupa/trunk/pom.xml
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/pom.xml?rev=1595968&r1=1595967&r2=1595968&view=diff
==
--- james/hupa/trunk/pom.xml (original)
+++ james/hupa/trunk/pom.xml Mon May 19 17:31:16 2014
@@ -57,8 +57,8 @@
 </site>
 </distributionManagement>
 <properties>
-<gwtVersion>2.6.0</gwtVersion>
-<gwtMavenVersion>2.6.0</gwtMavenVersion>
+<gwtVersion>2.6.1</gwtVersion>
+<gwtMavenVersion>2.6.1</gwtMavenVersion>
 <gwt.moduleSuffix />
 <gwt.logLevel>INFO</gwt.logLevel>
 <jettyVersion>7.3.0.v20110203</jettyVersion>






svn commit: r1580208 - /james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/LoginView.ui.xml

2014-03-22 Thread dongxu
Author: dongxu
Date: Sat Mar 22 13:19:05 2014
New Revision: 1580208

URL: http://svn.apache.org/r1580208
Log:
Disable the glass of setting in login panel

Modified:

james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/LoginView.ui.xml

Modified: 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/LoginView.ui.xml
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/LoginView.ui.xml?rev=1580208&r1=1580207&r2=1580208&view=diff
==
--- 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/LoginView.ui.xml
 (original)
+++ 
james/hupa/trunk/client/src/main/java/org/apache/hupa/client/ui/LoginView.ui.xml
 Sat Mar 22 13:19:05 2014
@@ -272,7 +272,7 @@
 </g:HTML>
 </g:FlowPanel>
 <g:PopupPanel ui:field="settingsPopup" styleName="{style.imapSetting}"
-  modal="true" autoHideEnabled="true" glassEnabled="true" >
+  modal="true" autoHideEnabled="true" glassEnabled="false" >

   <g:HTMLPanel>
 <table>
   <tr>






svn commit: r1579882 - /james/hupa/trunk/pom.xml

2014-03-20 Thread dongxu
Author: dongxu
Date: Fri Mar 21 04:47:14 2014
New Revision: 1579882

URL: http://svn.apache.org/r1579882
Log:
Change gQuery version to the latest 1.4.1

Modified:
james/hupa/trunk/pom.xml

Modified: james/hupa/trunk/pom.xml
URL: 
http://svn.apache.org/viewvc/james/hupa/trunk/pom.xml?rev=1579882&r1=1579881&r2=1579882&view=diff
==
--- james/hupa/trunk/pom.xml (original)
+++ james/hupa/trunk/pom.xml Fri Mar 21 04:47:14 2014
@@ -331,7 +331,7 @@
 <dependency>
 <groupId>com.googlecode.gwtquery</groupId>
 <artifactId>gwtquery</artifactId>
-<version>1.4.1-SNAPSHOT</version>
+<version>1.4.1</version>
 <scope>provided</scope>
 </dependency>
 /dependencies






Re: [Sikuli-driver] [Question #240496]: Quit in a second on osx 10.9 (Mavericks)

2013-12-09 Thread Dongxu Yang
Question #240496 on Sikuli changed:
https://answers.launchpad.net/sikuli/+question/240496

Status: Open => Solved

Dongxu Yang confirmed that the question is solved:
I have updated to JDK 7. It's OK now.

-- 
You received this question notification because you are a member of
Sikuli Drivers, which is an answer contact for Sikuli.

___
Mailing list: https://launchpad.net/~sikuli-driver
Post to : sikuli-driver@lists.launchpad.net
Unsubscribe : https://launchpad.net/~sikuli-driver
More help   : https://help.launchpad.net/ListHelp


[Sikuli-driver] [Question #240496]: Quit in a second on osx 10.9 (Mavericks)

2013-12-08 Thread Dongxu Yang
New question #240496 on Sikuli:
https://answers.launchpad.net/sikuli/+question/240496

I have installed Sikuli on my MacBook Pro with OS X 10.9 (Mavericks).
After I copied sikuli.app into the /Applications folder, I double-clicked the
icon in Launchpad, but the software quit within a second. Did I forget to
update something? I have updated X11 and Xcode after installing OS X 10.9.

Thank you very much!



  1   2   3   4   5   >