Re: Dovecot FTS not using plugins

2021-01-11 Thread Joan Moreau

Sorry, I always forget that Dovecot does not do multi-threading (why?).

The process was waiting for another process.

On 2021-01-11 14:57, Aki Tuomi wrote:


On 11/01/2021 16:51 Joan Moreau  wrote:

Hello,
With a recent git version of Dovecot, I can see that FTS no longer uses
the configured plugin, but tries to sort the mailbox directly on the spot
(which is of course very painful).
Is there a change needed in the configuration file to recover the old
behavior, or has something else changed?

Thank you
Joan


Can you share `doveconf -n` and output of `doveadm -Dv search -u victim 
text foobar`?


Aki
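
For reference, a minimal sketch of what an FTS plugin configuration in
dovecot.conf usually looks like; the xapian backend below is an assumption
for illustration, not taken from the poster's setup:

  mail_plugins = $mail_plugins fts fts_xapian

  plugin {
    fts = xapian
    fts_autoindex = yes
  }

With a block like this in effect, searches are handed to the configured FTS
backend instead of being run over the mailbox on the spot.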

Dovecot FTS not using plugins

2021-01-11 Thread Joan Moreau

Hello,

With a recent git version of Dovecot, I can see that FTS no longer uses
the configured plugin, but tries to sort the mailbox directly on the spot
(which is of course very painful).


Is there a change needed in the configuration file to recover the old
behavior, or has something else changed?


Thank you

Joan

Re: [VOTE] couchdb 4.0 transaction semantics

2021-01-09 Thread Joan Touzet
If this proposal means v3.x replicators can't replicate one-shot / 
normal / non-continuous changes from 4.x+ endpoints, that sounds like a 
big break in compatibility.


I'm -0.5, tending towards -1, but mostly because I'm having trouble 
understanding if it's even possible - unless a proposal is being made to 
release a 3.2 that introduces replication compatibility with 4.x in tandem.


-Joan

On 2021-01-09 6:45 p.m., Nick Vatamaniuc wrote:

I withdraw my vote until I can get a clearer view. Nick would you mind

re-stating?

Not at all! The longer version and other considerations were stated in
my last reply to the discussion thread, so I assumed it was accepted
as consensus since nobody replied arguing otherwise.

https://lists.apache.org/thread.html/r45bff6ca4339f775df631f47e77657afbca83ee0ef03c6aa1a1d45cb%40%3Cdev.couchdb.apache.org%3E

But the gist of it is that existing (< 3.x) replicators won't be able
to replicate non-continuous (normal) changes from >= 4.x endpoints.

Regards,
-Nick

On Sat, Jan 9, 2021 at 1:26 AM Joan Touzet  wrote:


Wait, what? I thought you agreed with this approach in that thread.

I withdraw my vote until I can get a clearer view. Nick would you mind
re-stating?

-Joan

On 2021-01-08 11:37 p.m., Nick V wrote:

+1 for 1 through 3

-1 for 4  as I think the exception should apply to normal change feeds as well, 
as described in the thread

Cheers,
-Nick


On Jan 8, 2021, at 17:12, Joan Touzet  wrote:

Thanks, then it's a solid +1 from me.

-Joan


On 2021-01-08 4:13 p.m., Robert Newson wrote:
You are probably thinking of a possible “group commit”. That is anticipated and 
not contradicted by this proposal. This proposal is explicitly about not using 
multiple states of the database for a single doc lookup, view query, etc.

On 8 Jan 2021, at 19:53, Joan Touzet  wrote:


+1.

This is for now I presume, as I thought that there was feeling about
relaxing this restriction somewhat for the 5.0 timeframe? Memory's dim.

-Joan

On 07/01/2021 06:00, Robert Newson wrote:

Hi,

Following on from the discussion at 
https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29%40%3Cdev.couchdb.apache.org%3E
 
<https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29@%3Cdev.couchdb.apache.org%3E>

The proposal is:

"With the exception of the changes endpoint when in feed=continuous mode, that 
all data-bearing responses from CouchDB are constructed from a single, immutable 
snapshot of the database at the time of the request."

Paul Davis summarised the discussion in four bullet points, reiterated here for 
context:

1. A single CouchDB API call should map to a single FDB transaction
2. We absolutely do not want to return a valid JSON response to any
streaming API that hit a transaction boundary (because data
loss/corruption)
3. We're willing to change the API requirements so that 2 is not an issue.
4. None of this applies to continuous changes since that API call was
never a single snapshot.


Please vote accordingly, we’ll run this as lazy consensus per the bylaws 
(https://couchdb.apache.org/bylaws.html#lazy 
<https://couchdb.apache.org/bylaws.html#lazy>)

B.
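
To make point 4 above concrete, the distinction is visible directly at the
_changes endpoint; feed=normal and feed=continuous are standard CouchDB
parameters, while the host and database below are placeholders:

  # one-shot (normal) feed: under the proposal, served from a single snapshot
  curl 'http://127.0.0.1:5984/mydb/_changes?feed=normal&since=0'

  # continuous feed: streams across many snapshots, hence the exception
  curl 'http://127.0.0.1:5984/mydb/_changes?feed=continuous&since=now'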




Re: [VOTE] couchdb 4.0 transaction semantics

2021-01-08 Thread Joan Touzet

Wait, what? I thought you agreed with this approach in that thread.

I withdraw my vote until I can get a clearer view. Nick would you mind 
re-stating?


-Joan

On 2021-01-08 11:37 p.m., Nick V wrote:

+1 for 1 through 3

-1 for 4  as I think the exception should apply to normal change feeds as well, 
as described in the thread

Cheers,
-Nick


On Jan 8, 2021, at 17:12, Joan Touzet  wrote:

Thanks, then it's a solid +1 from me.

-Joan


On 2021-01-08 4:13 p.m., Robert Newson wrote:
You are probably thinking of a possible “group commit”. That is anticipated and 
not contradicted by this proposal. This proposal is explicitly about not using 
multiple states of the database for a single doc lookup, view query, etc.

On 8 Jan 2021, at 19:53, Joan Touzet  wrote:


+1.

This is for now I presume, as I thought that there was feeling about
relaxing this restriction somewhat for the 5.0 timeframe? Memory's dim.

-Joan

On 07/01/2021 06:00, Robert Newson wrote:

Hi,

Following on from the discussion at 
https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29%40%3Cdev.couchdb.apache.org%3E
 
<https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29@%3Cdev.couchdb.apache.org%3E>

The proposal is:

"With the exception of the changes endpoint when in feed=continuous mode, that 
all data-bearing responses from CouchDB are constructed from a single, immutable 
snapshot of the database at the time of the request."

Paul Davis summarised the discussion in four bullet points, reiterated here for 
context:

1. A single CouchDB API call should map to a single FDB transaction
2. We absolutely do not want to return a valid JSON response to any
streaming API that hit a transaction boundary (because data
loss/corruption)
3. We're willing to change the API requirements so that 2 is not an issue.
4. None of this applies to continuous changes since that API call was
never a single snapshot.


Please vote accordingly, we’ll run this as lazy consensus per the bylaws 
(https://couchdb.apache.org/bylaws.html#lazy 
<https://couchdb.apache.org/bylaws.html#lazy>)

B.




Re: [VOTE] couchdb 4.0 transaction semantics

2021-01-08 Thread Joan Touzet

Thanks, then it's a solid +1 from me.

-Joan

On 2021-01-08 4:13 p.m., Robert Newson wrote:

You are probably thinking of a possible “group commit”. That is anticipated and 
not contradicted by this proposal. This proposal is explicitly about not using 
multiple states of the database for a single doc lookup, view query, etc.


On 8 Jan 2021, at 19:53, Joan Touzet  wrote:

+1.

This is for now I presume, as I thought that there was feeling about
relaxing this restriction somewhat for the 5.0 timeframe? Memory's dim.

-Joan

On 07/01/2021 06:00, Robert Newson wrote:

Hi,

Following on from the discussion at 
https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29%40%3Cdev.couchdb.apache.org%3E
 
<https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29@%3Cdev.couchdb.apache.org%3E>

The proposal is:

"With the exception of the changes endpoint when in feed=continuous mode, that 
all data-bearing responses from CouchDB are constructed from a single, immutable 
snapshot of the database at the time of the request."

Paul Davis summarised the discussion in four bullet points, reiterated here for 
context:

1. A single CouchDB API call should map to a single FDB transaction
2. We absolutely do not want to return a valid JSON response to any
streaming API that hit a transaction boundary (because data
loss/corruption)
3. We're willing to change the API requirements so that 2 is not an issue.
4. None of this applies to continuous changes since that API call was
never a single snapshot.


Please vote accordingly, we’ll run this as lazy consensus per the bylaws 
(https://couchdb.apache.org/bylaws.html#lazy 
<https://couchdb.apache.org/bylaws.html#lazy>)

B.






Re: [VOTE] couchdb 4.0 transaction semantics

2021-01-08 Thread Joan Touzet
+1.

This is for now I presume, as I thought that there was feeling about
relaxing this restriction somewhat for the 5.0 timeframe? Memory's dim.

-Joan

On 07/01/2021 06:00, Robert Newson wrote:
> Hi,
> 
> Following on from the discussion at 
> https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29%40%3Cdev.couchdb.apache.org%3E
>  
> <https://lists.apache.org/thread.html/rac6c90c4ae03dc055c7e8be6eca1c1e173cf2f98d2afe6d018e62d29@%3Cdev.couchdb.apache.org%3E>
> 
> The proposal is:
> 
> "With the exception of the changes endpoint when in feed=continuous mode, 
> that all data-bearing responses from CouchDB are constructed from a single, 
> immutable snapshot of the database at the time of the request."
> 
> Paul Davis summarised the discussion in four bullet points, reiterated here 
> for context:
> 
> 1. A single CouchDB API call should map to a single FDB transaction
> 2. We absolutely do not want to return a valid JSON response to any
> streaming API that hit a transaction boundary (because data
> loss/corruption)
> 3. We're willing to change the API requirements so that 2 is not an issue.
> 4. None of this applies to continuous changes since that API call was
> never a single snapshot.
> 
> 
> Please vote accordingly, we’ll run this as lazy consensus per the bylaws 
> (https://couchdb.apache.org/bylaws.html#lazy 
> <https://couchdb.apache.org/bylaws.html#lazy>)
> 
> B.
> 
> 


RE: Response buffer size

2021-01-08 Thread Joan ventusproxy
Hi,

Sorry, I'm using HttpAsyncClient 4.1.4.

Thanks,
Joan.


From: Joan grupoventus  
Sent: Friday, January 8, 2021 4:57 PM
To: 'Joan ventusproxy' 
Subject: Response buffer size

Hello,

I'm using HttpClient 4.5.7, reading responses from a backend through an
'HttpAsyncResponseConsumer', in its 'consumeContent' method, in this way:

while ( (numBytesRead = decoder.read(this.bbuf)) > 0 ) {
( . . . )
}

where  this.bbuf = ByteBuffer.allocate(32768);


The buffer size in the async http instance is configured in this way ('phccm'
is a 'PoolingNHttpClientConnectionManager'):
this.phccm.setDefaultConnectionConfig(ConnectionConfig.custom().setBufferSize(32768).setFragmentSizeHint(32768).build());

And on the IOReactor:
IOReactorConfig ioReactorConfig = IOReactorConfig.custom().setRcvBufSize(32768) 
...


But when we start reading the response in the consumeContent method, the byte
buffer is filled with just 16K of data:
Cycle 0 :: bytes read = 15812, total size (K) = 15
Cycle 1 :: bytes read = 16368, total size (K) = 31
Cycle 2 :: bytes read = 16376, total size (K) = 47
Cycle 3 :: bytes read = 16384, total size (K) = 63
Cycle 4 :: bytes read = 16376, total size (K) = 79
Cycle 5 :: bytes read = 16384, total size (K) = 95
Cycle 6 :: bytes read = 16376, total size (K) = 111
Cycle 7 :: bytes read = 16384, total size (K) = 127

What am I missing?

Thanks,

Joan.







Response buffer size

2021-01-08 Thread Joan grupoventus
Hello,

I'm using HttpClient 4.5.7, reading responses from a backend through an
'HttpAsyncResponseConsumer', in its 'consumeContent' method, in this way:

while ( (numBytesRead = decoder.read(this.bbuf)) > 0 ) {
( . . . )
}

where this.bbuf = ByteBuffer.allocate(32768);

The buffer size in the async http instance is configured in this way ('phccm'
is a 'PoolingNHttpClientConnectionManager'):

this.phccm.setDefaultConnectionConfig(ConnectionConfig.custom().setBufferSize(32768).setFragmentSizeHint(32768).build());

And on the IOReactor:

IOReactorConfig ioReactorConfig = IOReactorConfig.custom().setRcvBufSize(32768) 
...

But when we start reading the response in the consumeContent method, the byte
buffer is filled with just 16K of data:

Cycle 0 :: bytes read = 15812, total size (K) = 15
Cycle 1 :: bytes read = 16368, total size (K) = 31
Cycle 2 :: bytes read = 16376, total size (K) = 47
Cycle 3 :: bytes read = 16384, total size (K) = 63
Cycle 4 :: bytes read = 16376, total size (K) = 79
Cycle 5 :: bytes read = 16384, total size (K) = 95
Cycle 6 :: bytes read = 16376, total size (K) = 111
Cycle 7 :: bytes read = 16384, total size (K) = 127

What am I missing?

Thanks,

Joan.



Re: Prompt not centered when started as detached

2021-01-06 Thread Joan Albert
Hi Geraint,

> I believe that the above is the answer. The size defaults to 80 x 24.
> 
> If you're a C dev, I think those defaults are in these lines:
> https://git.savannah.gnu.org/cgit/screen.git/tree/src/window.c#n607

Thank you very much for the tip!
I am not a C dev, but I have actually been wanting to learn C.

Do you think it would be possible (when I have the required skills) for me to
write a patch?
My idea is to have height and width as screen command-line options when
started in detached mode, in order to have more control over them.

Regards,
TS
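
A possible stopgap, as an untested sketch (the session name comes from the
original post; 'width' is a documented screen command per screen(1), but
whether it applies cleanly to a detached session is an assumption):

  screen -d -m -S personal
  # ask screen to resize the session's window to 200 columns by 50 lines
  screen -S personal -X width -w 200 50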



RE: trap changes made for VRF

2021-01-06 Thread Joan Landry
Can you please also explain why the pre-existing clientaddr x.x.x.x:port
functionality no longer works, and what I need to do to get it working again?
Also, where should I look for this patch, and any idea on when it might be
available?
Thanks,
Joan



From: Bart Van Assche 
Sent: Wednesday, January 6, 2021 11:28 PM
To: stann...@cumulusnetworks.com
Cc: Joan Landry ; net-snmp-users@lists.sourceforge.net
Subject: Re: trap changes made for VRF


Hi Sam,

Can you submit a patch that documents how to use the changes in the following 
two commits:
* 02de400544de ("libsnmp: Set Linux VRF iface on Trap sink IP addresses")
* 3ca90c2c1260 ("libsnmp/transports/UDP: Add support for VRF")
Thanks,

Bart.

On 1/6/21 12:39 PM, Joan Landry wrote:
Can someone please provide a link to the documentation that describes how to
get rc = netsnmp_bindtodevice(t->sock, ep->iface);
to work? Apparently the code that sends traps has been redesigned
significantly, in that NETSNMP_DS_LIB_CLIENT_ADDR no longer works as it used to.

What is the change in snmpd.conf that makes this work? Apparently clientaddr
x.x.x.x:port no longer works as it used to.

I have not been able to locate any documentation on these changes, on how to
set the VRF interface, or on how to allow the code to set an IP address and
port using NETSNMP_DS_LIB_CLIENT_ADDR.

Any info on this would be greatly appreciated.




trap changes made for VRF

2021-01-06 Thread Joan Landry
Can someone please provide a link to the documentation that describes how to
get rc = netsnmp_bindtodevice(t->sock, ep->iface);
to work? Apparently the code that sends traps has been redesigned
significantly, in that NETSNMP_DS_LIB_CLIENT_ADDR no longer works as it used to.

What is the change in snmpd.conf that makes this work? Apparently clientaddr
x.x.x.x:port no longer works as it used to.

I have not been able to locate any documentation on these changes, on how to
set the VRF interface, or on how to allow the code to set an IP address and
port using NETSNMP_DS_LIB_CLIENT_ADDR.

Any info on this would be greatly appreciated.





RE: snmptrapd for V3 informs

2021-01-06 Thread Joan Landry
Hi,
I am trying to upgrade to net-snmp 5.9 and noticed a change for the VRF
support that was added:
commit 3ca90c2c1260e036a5abd73a40f83d4ded545580
Author: Bart Van Assche
Date:   Fri Dec 28 11:57:11 2018 -0800

libsnmp/transports/UDP: Add support for VRF

Prior to 5.9 I was using NETSNMP_DS_LIB_CLIENT_ADDR for the VRF source port,
and after upgrading to 5.9 this no longer appears to work.

Can you tell me what you changed, and how to get NETSNMP_DS_LIB_CLIENT_ADDR
to do what it used to do before these modifications were added?

Thanks,
Joan Landry



From: Feroz 
Sent: Wednesday, January 6, 2021 10:11 AM
To: net-snmp-users@lists.sourceforge.net
Subject: snmptrapd for V3 informs


Anyone tried forwarding V3 informs with snmptrapd?
Can someone share the snmptrapd.conf file?

-Feroz



RE: snmpd.conf security

2021-01-06 Thread Joan Landry
I switched over to using /var/net-snmp/snmpd.conf and I call update_config,
but the passwords do not get changed to localized keys in the file; the v3
credentials do work correctly.

What triggers the agent to rewrite the createUser line in the snmpd.conf file
to remove the passwords when a new v3 user is added?
Thanks,
Joan





-Original Message-
From: Wes Hardaker 
Sent: Tuesday, January 5, 2021 3:40 PM
To: Joan Landry 
Cc: net-snmp-users@lists.sourceforge.net
Subject: Re: snmpd.conf security

Joan Landry  writes:

> Would like to know if there is a way to make the snmpd.conf file more
> secure - as currently it shows the password for a USM user.
> createUser v3user MD5 abcdefghij DES abcdefghij
> trapsess -r 10 -t 3 -l authPriv -u v3user -a MD5 -A abcdefghij -x DES
> -X abcdefghij 10.11.12.98

Per the documentation, a createUser line should *only* go into the persistent 
file (/var/net-snmp/snmpd.conf) and is replaced by the agent with a usmUser 
line after startup.  The usmUser line is also sensitive, however, as it 
contains a private key, though fortunately one that is at least localized to 
just that agent.  That file is written by the process owner and should only be 
readable by the process owner (typically root), and that is the best that can 
be achieved given the protocol's need to store localized keys.
--
Wes Hardaker
USC/ISI
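
As a sketch of the rewrite Wes describes, the persistent file changes shape
roughly like this; the engine ID and key values below are made-up placeholders
and the exact usmUser field layout varies by build:

  # /var/net-snmp/snmpd.conf before the agent first starts:
  createUser v3user MD5 abcdefghij DES abcdefghij

  # after startup the agent replaces it with localized keys, roughly:
  usmUser 1 3 0x80001f8880c0ffee "v3user" "v3user" NULL .1.3.6.1.6.3.10.1.1.2 0x1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d .1.3.6.1.6.3.10.1.2.2 0x1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d ""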


RE: Snmpv3 users details are not deleting from /var/net-snmp/snmpd.conf file

2021-01-06 Thread Joan Landry
Try to call  update_config(); instead.

From: chandrasekharreddy chinnapareddygari 
Sent: Saturday, December 12, 2020 10:54 PM
To: net-snmp-cod...@lists.sourceforge.net; net-snmp-users@lists.sourceforge.net
Subject: Snmpv3 users details are not deleting from /var/net-snmp/snmpd.conf 
file


Hi team,
I'm using net-snmp version 5.8. My requirement is that the conf files should
update without restarting snmpd.

I'm sending a SIGHUP signal to update the SNMP data without restarting snmpd,
but the SNMPv3 details are not updating.
Please help me with how to proceed further.


Thanks,
Chandra.






net-snmp snmpd.conf question

2021-01-05 Thread Joan Landry
We are using the snmpd.conf file to configure the net-snmp library.

Would like to know if there is a way to make this file more secure, as
currently it shows the password for a USM user:
createUser v3user MD5 abcdefghij DES abcdefghij
trapsess -r 10 -t 3 -l authPriv -u v3user -a MD5 -A abcdefghij -x DES -X 
abcdefghij 10.11.12.98

I tried deleting the file, and tried removing the user info after calling the
update_config() function, but that results in connectivity issues.

Any info on how to secure this file would be greatly appreciated.
Thanks,
Joan Landry



RE: Snmpv3 users details are not deleting from /var/net-snmp/snmpd.conf file

2021-01-05 Thread Joan Landry
Thanks, but my question was about actually being the master agent and
internally updating the net-snmp library with data received via a CLI.
I believe the only way to do this is via the snmpd.conf file.
So any info on this would be greatly appreciated.
Thanks,


From: Larry Hayes 
Sent: Monday, December 14, 2020 10:48 AM
To: chandrasekharreddy chinnapareddygari 
Cc: net-snmp-coders@lists.sourceforge.net; net-snmp-us...@lists.sourceforge.net
Subject: Re: Snmpv3 users details are not deleting from 
/var/net-snmp/snmpd.conf file


I am no expert, but do deal with creating and deleting SNMP v3 users in my job.

You may have to use the snmpusm tool to remove V3 users without restarting
the snmpd daemon.

From the man page:
" snmpusm is an SNMP application that can be used to do simple maintenance on 
the users known to an SNMP agent, by manipulating the agent's User-based 
Security Module (USM) table. The user needs write access to the usmUserTable 
MIB table. This tool can be used to create, delete, clone, and change the 
passphrase of users configured on a running SNMP agent."
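
For instance, deleting a v3 user on a running agent might look roughly like
this (the agent address and admin credentials are hypothetical):

  snmpusm -v3 -u admin -l authPriv -a MD5 -A adminpass -x DES -X adminpass localhost delete v3user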




On Sat, Dec 12, 2020 at 9:55 PM chandrasekharreddy chinnapareddygari wrote:
Hi team,
I'm using net-snmp version 5.8. My requirement is that the conf files should
update without restarting snmpd.

I'm sending a SIGHUP signal to update the SNMP data without restarting snmpd,
but the SNMPv3 details are not updating.
Please help me with how to proceed further.


Thanks,
Chandra.






snmpd.conf security

2021-01-05 Thread Joan Landry
Would like to know if there is a way to make the snmpd.conf file more secure,
as currently it shows the password for a USM user:
createUser v3user MD5 abcdefghij DES abcdefghij
trapsess -r 10 -t 3 -l authPriv -u v3user -a MD5 -A abcdefghij -x DES -X
abcdefghij 10.11.12.98

I tried deleting the file, and tried removing the user info after calling the
update_config() function, but that results in connectivity issues.

Any info on how to secure this file would be greatly appreciated.
Thanks,
Joan Landry



Prompt not centered when started as detached

2021-01-05 Thread Joan Albert
Hi everyone,

I am currently starting a screen session with my window manager in detached
mode (screen -d -m -S personal), so that all new terminals can be attached to
it afterwards. The thing is that when I attach any terminal to it
(screen -x personal), the bash prompt within the screen session sits at the
bottom of the terminal window, instead of being at the top as usual.

What I tried, but did not work:
- Starting screen in detached mode manually (unrelated to the window manager)
- Using several screen command options while starting/attaching screen (-O, -A)
- Emptying .screenrc
- Trying with multiple displays

Could it be that detached mode has some kind of default size (one smaller
than my terminal window)?

This happens on Debian testing (bullseye) with xterm (363), bash (5.1) and
screen (4.08).

Thank you very much,
TS



Re: A hard drive issue

2021-01-03 Thread Joan
Merci Josep,

Un amic m'ha comentat que ell també va sol·lucionar un problema
semblant canviant el cable. Així que provaré això primer...

-- 
Joan Cervan i Andreu
http://personal.calbasi.net

"El meu paper no és transformar el món ni l'home sinó, potser, el de
ser útil, des del meu lloc, als pocs valors sense els quals un món no
val la pena viure'l" A. Camus

i pels que teniu fe:
"Déu no és la Veritat, la Veritat és Déu"
Gandhi


On Sun, 3 Jan 2021 09:29:35 +0100
Josep Lladonosa  wrote:

> Hi, Joan,
> 
> 
> It might just be the SATA cable.
> At work we have had similar experiences, and replacing it solved them.
> 
> Hard drives are not very reliable; it is always advisable to keep
> backups and to run them in pairs, in RAID 1 for example.
> 
> Each manufacturer states its own warranty.
> For me the worst are Seagate. The best, Hitachi (HGST, which I think
> belongs to Western Digital now, and which is also fine).
> 
> Happy New Year,
> Josep
> 
> On Sun, 3 Jan 2021, 9:01, Joan  wrote:
> 
> > The problem I have has happened twice in two weeks, and I have doubts
> > about whether it is a physical issue with the disk (a 4 TB SATA disk),
> > not very old, maybe a couple of years, or a software problem that is
> > "unhinging" the disk
> >
> > It is a secondary disk (the system is on an SSD) where I keep videos,
> > photos, etc. One of my suspects as the cause of it all could be
> > aMule.
> >
> > Well, the thing is that when I boot the system things go wrong, and it
> > stays in emergency mode, because it detects an error:
> >
> > de gen. 02 16:21:12 pc2019 systemd-fsck[502]: magatzem: Inode
> > 38666373 has an invalid extent node (blk 154697780, lblk 0) de gen.
> > 02 16:21:12 pc2019 systemd-fsck[502]: magatzem: UNEXPECTED
> > INCONSISTENCY; RUN fsck MANUALLY. de gen. 02 16:21:12 pc2019
> > systemd-fsck[502]: (i.e., without -a or -p options) de gen.
> > 02 16:21:12 pc2019 systemd-fsck[430]: fsck failed with exit status
> > 4. de gen. 02 16:21:12 pc2019 systemd-fsck[430]: Running request
> > emergency.target/start/replace de gen. 02 16:21:12 pc2019
> > systemd[1]: systemd-fsck@dev-disk-by
> > \x2duuid-eabfd9a3\x2d1b1f\x2d4144\x2da9d3\x2dd514566fa3fb.service:
> > Main process exited, code=exited, status=1/FAILURE de gen. 02
> > 16:21:12 pc2019 systemd[1]:
> > systemd-fsck@dev-disk-by
> > \x2duuid-eabfd9a3\x2d1b1f\x2d4144\x2da9d3\x2dd514566fa3fb.service:
> > Failed with result 'exit-code'. de gen. 02 16:21:12 pc2019
> > systemd[1]: Failed to start File System Check on
> > /dev/disk/by-uuid/eabfd9a3-1b1f-4144-a9d3-d514566fa3fb. de gen. 02
> > 16:21:12 pc2019 systemd[1]: Dependency failed for /media/magatzem.
> > de gen. 02 16:21:12 pc2019 systemd[1]: Dependency failed for Local
> > File Systems. de gen. 02 16:21:12 pc2019 systemd[1]:
> > local-fs.target: Job local-fs.target/start failed with result
> > 'dependency'. de gen. 02 16:21:12 pc2019 systemd[1]:
> > local-fs.target: Triggering OnFailure= dependencies. de gen. 02
> > 16:21:12 pc2019 systemd[1]: media-magatzem.mount: Job
> > media-magatzem.mount/start failed with result 'dependency'.
> >
> > And it shows me this screen:
> >
> >
> > https://upload.disroot.org/r/APnYtXLB#NArCJjbVYVzxd9Hui4K9xb9xhkHzk9i1vE++Qf8BQQA=
> >
> > Then, to fix it, I run e2fsck -c /dev/sdb1
> >
> > which gives me these screens (I'm summarizing, because basically it's
> > 20 minutes of saying "yes" to everything it proposes, after the check,
> > which takes some 8 hours or more):
> >
> >
> > https://upload.disroot.org/r/kRLsL2RX#bF9doWYguCMHAvj3APaJNb+GbUBq9zCX2mdrkLJhMAQ=
> >
> > https://upload.disroot.org/r/sYqhJfcy#Wv3pVBo0OuvfosT/i1LfCRx+6sTWwSkpWGDJIl4uTkI=
> >
> > https://upload.disroot.org/r/UTbxj19F#u5TA97h7ykB7KFj58OSPhgFLqwqFBSv00nHAQ8FoPpU=
> >
> > So, my questions:
> >
> > 1) Does it look to you like a hardware failure (the disk starting to
> > play up at only 15 months), and should I get moving, buy another one
> > and clone it over with Clonezilla?
> >
> > 2) Could it be a problem caused by software? (in that sense I don't
> > know whether to update my Debian Testing, which in general I don't
> > update all at once, but in little chunks).
> >
> > 3) I don't know whether an fsck (or whatever it's called) is run on
> > the secondary disk, the check done on the primary every so many
> > boots. I'd say not, and that it's a configurable option in fstab. My
> > fstab is this:
> >
> > UUID=... /   ext4    errors=remount-ro

A hard drive issue

2021-01-03 Thread Joan
The problem I have has happened twice in two weeks, and I have doubts
about whether it is a physical issue with the disk (a 4 TB SATA disk),
not very old, maybe a couple of years, or a software problem that is
"unhinging" the disk.

It is a secondary disk (the system is on an SSD) where I keep videos,
photos, etc. One of my suspects as the cause of it all could be aMule.

Well, the thing is that when I boot the system things go wrong, and it
stays in emergency mode, because it detects an error:

de gen. 02 16:21:12 pc2019 systemd-fsck[502]: magatzem: Inode 38666373
has an invalid extent node (blk 154697780, lblk 0) de gen. 02 16:21:12
pc2019 systemd-fsck[502]: magatzem: UNEXPECTED INCONSISTENCY; RUN fsck
MANUALLY. de gen. 02 16:21:12 pc2019 systemd-fsck[502]: (i.e.,
without -a or -p options) de gen. 02 16:21:12 pc2019 systemd-fsck[430]:
fsck failed with exit status 4. de gen. 02 16:21:12 pc2019
systemd-fsck[430]: Running request emergency.target/start/replace de
gen. 02 16:21:12 pc2019 systemd[1]:
systemd-fsck@dev-disk-by\x2duuid-eabfd9a3\x2d1b1f\x2d4144\x2da9d3\x2dd514566fa3fb.service:
Main process exited, code=exited, status=1/FAILURE de gen. 02 16:21:12
pc2019 systemd[1]:
systemd-fsck@dev-disk-by\x2duuid-eabfd9a3\x2d1b1f\x2d4144\x2da9d3\x2dd514566fa3fb.service:
Failed with result 'exit-code'. de gen. 02 16:21:12 pc2019 systemd[1]:
Failed to start File System Check on
/dev/disk/by-uuid/eabfd9a3-1b1f-4144-a9d3-d514566fa3fb. de gen. 02
16:21:12 pc2019 systemd[1]: Dependency failed for /media/magatzem. de
gen. 02 16:21:12 pc2019 systemd[1]: Dependency failed for Local File
Systems. de gen. 02 16:21:12 pc2019 systemd[1]: local-fs.target: Job
local-fs.target/start failed with result 'dependency'. de gen. 02
16:21:12 pc2019 systemd[1]: local-fs.target: Triggering OnFailure=
dependencies. de gen. 02 16:21:12 pc2019 systemd[1]:
media-magatzem.mount: Job media-magatzem.mount/start failed with result
'dependency'.

And it shows me this screen:

https://upload.disroot.org/r/APnYtXLB#NArCJjbVYVzxd9Hui4K9xb9xhkHzk9i1vE++Qf8BQQA=

Then, to fix it, I run e2fsck -c /dev/sdb1

which gives me these screens (I'm summarizing, because basically it's 20
minutes of saying "yes" to everything it proposes, after the check, which
takes some 8 hours or more):

https://upload.disroot.org/r/kRLsL2RX#bF9doWYguCMHAvj3APaJNb+GbUBq9zCX2mdrkLJhMAQ=
https://upload.disroot.org/r/sYqhJfcy#Wv3pVBo0OuvfosT/i1LfCRx+6sTWwSkpWGDJIl4uTkI=
https://upload.disroot.org/r/UTbxj19F#u5TA97h7ykB7KFj58OSPhgFLqwqFBSv00nHAQ8FoPpU=

So, my questions:

1) Does it look to you like a hardware failure (the disk starting to play
up at only 15 months), and should I get moving, buy another one and clone
it over with Clonezilla?

2) Could it be a problem caused by software? (in that sense I don't know
whether to update my Debian Testing, which in general I don't update all
at once, but in little chunks).

3) I don't know whether an fsck (or whatever it's called) is run on the
secondary disk, the check done on the primary every so many boots. I'd
say not, and that it's a configurable option in fstab. My fstab is this:

UUID=... /   ext4    errors=remount-ro 0   1
# /home was on /dev/sdb6 during installation
UUID=... /home   ext4    defaults    0   2
# swap was on /dev/sdb5 during installation
UUID=...    swap    sw  0   0
# Second 4 TB hard disk
UUID=e... /media/magatzem   ext4    defaults    0   2

(in fact, now that I think about it, I don't know whether the fsck is run
on the /home partition either). I'd say it has to do with the last number
in the column, but I've now seen that systemd handles it differently and
only distinguishes the value zero (or empty) from everything else:

https://unix.stackexchange.com/a/248578

So I no longer know when or how the checks are done.

4) A colleague told me that he forces a SMART test via a script, I don't
know whether at boot time... I don't know if that's a good option... Do
you have any suggestions in this regard, for looking after the good health
of the disks (assuming that if the disk starts failing because of its
planned obsolescence, there's nothing to be done)? See the sketch below.

5) By the way, do you know what warranty hard drives carry? And, if buying
a new one, whether there are any that offer more reliability?
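
On question 4, a minimal sketch using smartmontools; the device name is
taken from the e2fsck command above, and when to schedule it (boot, cron)
is left open:

  smartctl -t short /dev/sdb        # start a short SMART self-test in the background
  smartctl -H -l selftest /dev/sdb  # later: overall health plus the self-test log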

See you!

-- 
Joan Cervan i Andreu
http://personal.calbasi.net

"My role is not to transform the world or mankind but, perhaps, to be
useful, from where I stand, to the few values without which a world is
not worth living in" A. Camus

and for those of you with faith:
"God is not Truth, Truth is God"
Gandhi



Re: Incorrect date format from «ls -l»

2020-12-27 Thread Joan Montané
Message from Jordi Pujol  on Sun, 27 Dec 2020 at 14:57:

> How should alignment with the CLDR be checked?

For the case at hand, the month abbreviations are here [1]

[1] 
https://unicode-org.github.io/cldr-staging/charts/latest/summary/ca.html#360c210ad4f23085

>
> It seems to me that we have different points of view because we use
> different levels of Debian; my systems are unstable, and there this
> change is satisfactory.

It is satisfactory if the main use is the terminal, but on the desktop it
means non-standard abbreviations get used, and in some cases the date
cannot be written correctly (with the preposition).

> Before, "ls -l" always added the preposition "de" and didn't let the
> month names be seen; with this change, long-format directory listings
> are displayed correctly.
>

It didn't let you see the months? That should never happen. «ls -l» does
not truncate the month names. You should always see the month names. What
happens, on sid, is this:
if the «ls» translation specifies %Ob (without the preposition, which is
what the translation currently has), the preposition doesn't appear, but
«ls» doesn't pad the months, and April, August and February fall out of
alignment.
If the «ls» translation specifies %b (with the preposition), ls pads the
months and everything lines up in columns.

The question is whether «ls -l» honours the translated string or not
(that depends on the symlink I mentioned in the first message of the
thread), but in any case the months should always be visible.

Or perhaps you mean that you couldn't see the month names in some other
program? That would indeed be important. Is there any program that
truncates the month names?

> >
> > I'll take the opportunity to expand on the fix for "date" that I
> > mentioned in the previous mail.
> >
>
> the change you propose has already been made to ca_ES in unstable.

Yes! Like the symlink issue that makes «ls -l» honour the string with the
date format, it's already patched. Sooner or later it will reach stable.

Joan Montané



Re: Debugging gnumach

2020-12-26 Thread Joan Lledó

Hi,

On 25/12/20 at 23:43, Almudena Garcia wrote:

My script is here
https://gist.github.com/AlmuHS/73bae6dadf19b0482a34eaab567bfdfa 



Thanks. This didn't work for me. Question: what's the content of
hurd_qemu/hurd.img? Is it the image of a single partition, or does it
contain multiple partitions with GRUB installed on it?




Re: Incorrect date format from «ls -l»

2020-12-26 Thread Joan Montané
Message from Jordi Pujol  on Fri, 25 Dec 2020 at 9:42:
>
> Good morning everyone,
> this is the first time I write to this list;
> I've been working with Debian Linux systems for a long time and have
> developed a few things.
> In this case I can contribute a simple solution.
> It consists of modifying "/usr/share/i18n/locales/ca_ES"
> # patch ca_ES locale
> sed -i.bak -e '/^ab_alt_mon/,/^[^ ]/ {/"de /s//"/g
> /"d/s//"/g }' \
> -e '/^abmon/,/^[^ ]/ {/"de /s//"/g
> /"d/s//"/g }' \
> -e '/febr./s//feb./' \
> -e '/ag./s//ago./' \
> "/usr/share/i18n/locales/ca_ES"
>
> This can be done on an already-installed system; log out of the shell
> and back in to see the result.
>
> $ ls -lA
> total 44
> -rw---   1 jpujol users 80183 25 des. 09:30 .bash_history
> -rw---   1 jpujol users 84057 15 gen.  2020 .bash_history.old
> -rw-r--r--   1 jpujol users  4380 15 gen.  2020 .bashrc
> drwxr-xr-x   2 jpujol users  4096 15 abr.  2020 Públic
> drwx--   3 jpujol users  4096 29 ago. 19:56 .sane
> drwx--x--x   2 jpujol users  4096  7 feb.  2020 .ssh
> Regards,
>
> Jordi Pujol
>

Thanks, Jordi

That would be equivalent (if one wanted the change to reach everyone) to
changing the values associated with %b (abbreviated months).
I don't like it because it breaks alignment with the CLDR and because it
changes the month abbreviations that are used everywhere.
It does have the positive point (the big pro) of making column layout
easier in terminal programs, true.

I'll take the opportunity to expand on the fix for "date" that I mentioned
in the previous mail.

In the same file "/usr/share/i18n/locales/ca_ES" mentioned above, the line
to change is:

d_t_fmt    "%A, %-d %B de %Y, %T %Z"

replacing it with:
d_t_fmt    "%A, %-d %B de %Y, %T"
date_fmt   "%A, %-d %B de %Y, %T %Z"

That is, the time zone is removed from the d_t_fmt variable, and a new
date_fmt variable is defined, which is the one "date" uses.

Save the changes and, in my case, logging out of the shell was not enough;
I had to regenerate the locale:
locale-gen

And now date shows the date correctly.

One last reflection on this thread.

My intention in opening the thread was to pass along the fix for the
«ls -l» problem, which has nothing to do with the change to the Catalan
months. It's a problem that had been getting on my nerves for a long time,
and in the summer I spent a while on it until I understood why «ls -l» was
failing. Once I had the problem diagnosed and reported, I could sleep
easy. I had helped, with a grain of sand, to make Debian better. Until I
saw that the cause of the "date" problem was completely different ;)

The change to the months is a completely different matter. Have things
broken? Yes. But allow me to leave the subject of the month abbreviations
here. We're a pile of messages in, and there are arguments for and against
the current abbreviations. I doubt we'll reach agreement. I'd rather enjoy
the Christmas holidays with the family :)

With all that said, I'm returning to my usual "radio mode" on the list.
Thanks to everyone for the comments and contributions.

Happy Holidays!
Joan Montané



Re: Incorrect date format from «ls -l»

2020-12-26 Thread Joan Montané
Message from Ernest Adrogué  on Thu, 24 Dec 2020 at 18:29:
>
> 2020-12-24, 13:22 (+0100); Joan Montané writes:
> > We would need to see the impact of the change you suggest. For
> > example in calendar applications. It would also break alignment with
> > the CLDR.
>
> Who proposed the changes to the CLDR?  I've tried to look into it, but
> haven't cleared anything up.

I take part in the CLDR, contributing data, but I didn't ask for this
feature. The fields appeared in the database and the contributors went on
filling them in. The usual contributors for Catalan at the CLDR are: a
representative from Apple, a representative from Google, a representative
from Microsoft and the odd individual participant, like me. The vote of
individual contributors carries much less weight than that of Unicode
members. Curiously, the Catalan case already appeared in the
documentation [1]. My reading is that one of the big corporations above
(full-weight Unicode members) is who promoted it.

[1] 
http://cldr.unicode.org/translation/date-time-1/date-time-patterns#TOC-When-to-use-Standalone-vs.-Formatting

>
> > Not everything worked perfectly before 2016. These changes are what
> > make it possible to have the date format correctly written, with the
> > preposition "de" apostrophized when needed. For example in the
> > desktop calendar. I got fed up with seeing "xxx de abril...".
> > I consider that yes, they had to be made.
>
> Probably there was no need to modify the locale to fix a problem in the
> Gnome calendar.  And in the unlikely event that modifying the locale was
> the only solution, it was foreseeable that these changes would bring
> problems to other programs.  The same people who made the changes should
> have taken responsibility for making the necessary adaptations to the
> affected programs so they would keep working normally.  If they weren't
> prepared to do that, they shouldn't have modified the locale.  I don't
> think it's right to make modifications and then wash one's hands of the
> negative consequences of those modifications.
>

I agree with you that there is work to do fixing problems, and that
perhaps the case of programs that use monospaced fonts and present their
content as a table wasn't foreseen. A more exhaustive review should have
been done, especially when a change can (and in fact does) break things.

The user community can help here too. For example, the problem of the
"month day" order in "ls -l" has nothing to do with the abbreviation
change, and has been with us since stretch at least. Ten years went by
before someone looked at it with care, and it affects a ton of languages,
not just Catalan! Yes, I know we all have limited time, but 10 years
strikes me as excessive.

Now that I've gone back over my notes from when I looked into the "ls -l"
issue, there's a similar problem with a different cause (also unrelated to
the change in the month abbreviations). The "date" command on buster shows
the month first and then the day of the month. Why? Because the locale
lacks the date_fmt variable [2]. My reading is that at some point "date"
switched from using "d_t_fmt" to using "date_fmt", the news didn't reach
the locale maintainers, and there we have the problem.


[2] https://sourceware.org/bugzilla/show_bug.cgi?id=24054

> > We have now identified 2 completely different problems, and it's
> > important to tell them apart.
> >
> > Problem 1: this one is general. If a program uses %b or %B in its date
> > format, since 2016 (in production since 2018?) it gets back months
> > with the preposition. The locale string changed. In these cases the
> > translation just needs fixing so it uses %Ob or %OB. It's part of the
> > translation/localization work. When the %b and %B values were changed,
> > it was assumed there would be a transition period during which
> > translations wouldn't look right (the preposition would appear
> > duplicated, or be missing). Most translations have already been
> > adapted to use %Ob and %OB. Any left? Then they get fixed in the
> > translation. That's what should be done in mutt.
>
> It's that, apart from the preposition, the abbreviated month names have
> gone from three characters to a variable length (period included).  This
> locale data doesn't seem suitable for computer programs.  Text-interface
> programs are designed to work on 80-column terminals.  80 columns means
> space is extremely limited and textual information has to be minimalist.
> If we start including periods, unnecessary characters and variable-length
> texts that break alignment, the usability of these programs degrades so
> much that users probably won't

Debugging gnumach

2020-12-25 Thread Joan Lledó

Hi Hurd,

Recently I tried to implement ranges on memory object proxies [1]. I had
never worked on gnumach before, so, as expected, it failed. That's OK, but
now I'd like to debug gnumach in order to find the issue.


The docs [2] say I can either use the built-in debugger or gdb over qemu.
I tried kdb, but I don't know how to find the address of a local variable
inside a function. I also tried the gdb approach, but I can't boot the
kernel because I don't know how to load the modules from the qemu command
line ("panic: No bootstrap code loaded with the kernel!").


- How do you guys debug gnumach? gdb or kdb?

- If gdb, what command do you use?

- If kdb, how do you read the value for a local variable?


---
[1] 
http://git.savannah.gnu.org/cgit/hurd/gnumach.git/log/?h=jlledom-mem-obj-proxy
[2] 
https://www.gnu.org/software/hurd/microkernel/mach/gnumach/debugging.html
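
For the gdb route, an untested sketch, assuming hurd.img is a full disk
image with GRUB and the Hurd installed (so GRUB loads the modules and no
-kernel option is needed); the breakpoint symbol is just an example from
the proxy code:

  qemu-system-i386 -m 1G -drive file=hurd.img,format=raw -s &
  gdb gnumach
  (gdb) target remote :1234
  (gdb) break memory_object_create_proxy
  (gdb) continue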




Re: Incorrect date format from «ls -l»

2020-12-24 Thread Joan Montané
I'll comment between paragraphs,

Message from Ernest Adrogué  on Thu, 24 Dec 2020 at 12:36:
>
> Hi,
>
> 2020-12-24, 09:41 (+0100); Joan Montané writes:
> > In short, I still think the compromise solution is to use %b and
> > have the month column with the preposition, but properly aligned.
>
> I don't think it makes sense to include the preposition in the
> abbreviated month names.  If we use the abbreviated version it's
> because we want to save space, and if we want to save space, the
> preposition is completely superfluous.
>

We would need to see the impact of the change you suggest. For example in
calendar applications. It would also break alignment with the CLDR.

> On the other hand, 'ls' is not the only affected program.  Another
> affected program is 'mutt'.  There are probably others.  These programs
> worked perfectly until the text of the month names was modified in the
> ca_ES locale in April 2016 [1].  Those changes are the origin of all
> these problems.  Considering that, after 4 years, they are neither
> resolved, nor is it clear how to resolve them, it seems to me we have to
> seriously consider reverting the changes to the locale and leaving the
> month names as they were.

Let's see, I think we're mixing things up.

Not everything worked perfectly before 2016. These changes are what make
it possible to have the date format correctly written, with the
preposition "de" apostrophized when needed. For example in the desktop
calendar. I got fed up with seeing "xxx de abril...". I consider that yes,
they had to be made.

We have now identified 2 completely different problems, and it's important
to tell them apart.

Problem 1: this one is general. If a program uses %b or %B in its date
format, since 2016 (in production since 2018?) it gets back months with
the preposition. The locale string changed. In these cases the translation
just needs fixing so it uses %Ob or %OB. It's part of the
translation/localization work. When the %b and %B values were changed, it
was assumed there would be a transition period during which translations
wouldn't look right (the preposition would appear duplicated, or be
missing). Most translations have already been adapted to use %Ob and %OB.
Any left? Then they get fixed in the translation. That's what should be
done in mutt.

Problem 2: this one is very specific to «ls», which was the reason for
this thread. «ls» takes no notice of the translated string, because 10
years ago a symlink got removed. And that's why the "day month" order
appears as the English "month day" when running «ls -l». It affects all
the languages that want the "day month" order and have it defined that way
in the coreutils translation (curiously, Spanish doesn't).

As you can see, they are completely different problems. The first has an
easy fix. So does the second. The problem that is "hard" to solve well is
that, when problem 2 is fixed, a 3rd problem appears.

Problem 3: «ls» internally parses %b so that all the months have the same
width and the columns stay aligned. This is an i18n problem. We can
improve «ls» or choose a compromise solution. Are there other affected
programs that parse dates internally? Let's talk about it, because then we
really could improve the i18n of those programs.

>
> Another argument in favour of reverting the changes is that the current
> ca_ES locale uses extensions that are not in the POSIX standard (at
> least I can't find any reference to the 'ab_alt_mon' and 'alt_mon'
> elements in the POSIX specification [2]).  It's a problem because, when
> adapting programs so that they work with this locale, we are at the same
> time introducing a dependency on these extensions that are not part of
> POSIX.
>

Interessant. No vaig participar directament en el canvi dels mesos del
locale català, però sí que sé que al CLDR el català està definit com
ara ho tenim al locale de la glibc (primer es va fer el canvi al
CLDR). Potser en un futur l'estàndard POSIX afegirà els mesos
alternatius (el català no és l'única llengua que els usa)? O potser
no?

Joan Montané



Re: Incorrect date format from «ls -l»

2020-12-24 Thread Joan Montané
Message from Eloi  on Thu, 24 Dec 2020 at 9:06:
>
> On 24/12/20 at 7:29, Joan Montané wrote:
> > Message from Joan Montané  on Thu, 24 Dec
> > 2020 at 6:47:
> >
> >> As I see it, the best option, as a compromise, is to change the string in
> >> coreutils so it uses %b instead of %Ob. That would only affect «ls».
> >> The only drawback is that the month column would carry the preposition,
> >> but the columns will line up.
> >>
> > If anyone wants to apply this compromise fix, it's not complicated at all.
> >
> > In a working directory.
> >
> > Download the .po file (I used version 8.30 for buster's coreutils package)
> > curl https://translationproject.org/PO-files/ca/coreutils-8.30.79.ca.po
> > -o coreutils.po
> >
> > Change the strings with %Ob to %b in the file above, with any decent
> > text editor or from the terminal. These changes only affect
> > «ls -l»:
> > sed -e "s/^msgstr \"%e %Ob /msgstr \"%e %b /" < coreutils.po >
> > coreutils-fixed.po
> >
> > Compile the .po into a .mo
> > msgfmt coreutils-fixed.po -o coreutils.mo
> >
> > With root permissions, copy the .mo to the proper directory (if you
> > want, make a copy of the .mo you are about to overwrite)
> > sudo cp ./coreutils.mo /usr/share/locale/ca/LC_MESSAGES/
> >
> > And that's it, the columns line up when running «ls -l», :)
> >
> > Cheers!
> > Joan Montané
> >
> I haven't tried it by compiling the locale, but there would be another
> possible solution using the following format string: "%e %5Ob %Y".

Mmmm, good idea; as you say, it would come out right-aligned.

I've tried it, to see how it would look. There's another problem.
%5b or %5Ob counts bytes, not characters.
That means it fails (it doesn't count right, for our purposes) in March,
because in UTF-8 encoding the cedilla-c takes up 2 bytes, :_(
In that case one space is missing and the column falls out of line.
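
A quick way to see the byte counting from a terminal (date hands its
+FORMAT to strftime; the %5Ob width and the ca_ES.UTF-8 locale are the
ones discussed in this thread):

  LC_TIME=ca_ES.UTF-8 date -d 2021-03-01 +'%e %5Ob %Y'   # "març": the ç takes 2 bytes, pads one column short
  LC_TIME=ca_ES.UTF-8 date -d 2021-05-01 +'%e %5Ob %Y'   # "maig": all single-byte, lines up as expected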

In short, I still think the compromise solution is to use %b and have the
month column with the preposition, but properly aligned.

Cheers!
Joan Montané



Re: Incorrect date format from «ls -l»

2020-12-23 Thread Joan Montané
Message from Joan Montané  on Thu, 24 Dec 2020 at 6:47:

> As I see it, the best option, as a compromise, is to change the string in
> coreutils so it uses %b instead of %Ob. That would only affect «ls».
> The only drawback is that the month column would carry the preposition, but
> the columns will line up.
>

If anyone wants to apply this compromise fix, it's not complicated at all.

In a working directory.

Download the .po file (I used version 8.30 for buster's coreutils package)
curl https://translationproject.org/PO-files/ca/coreutils-8.30.79.ca.po
-o coreutils.po

Change the strings with %Ob to %b in the file above, with any decent
text editor or from the terminal. These changes only affect «ls -l»:
sed -e "s/^msgstr \"%e %Ob /msgstr \"%e %b /" < coreutils.po >
coreutils-fixed.po

Compile the .po into a .mo
msgfmt coreutils-fixed.po -o coreutils.mo

With root permissions, copy the .mo to the proper directory (if you
want, make a copy of the .mo you are about to overwrite)
sudo cp ./coreutils.mo /usr/share/locale/ca/LC_MESSAGES/

And that's it, the columns line up when running «ls -l», :)

Cheers!
Joan Montané



Re: Incorrect date format from «ls -l»

2020-12-23 Thread Joan Montané
Message from Josep Ma. Ferrer  on Wed, 23 Dec 2020 at 20:54:

>
> But another problem has surfaced: all the months are abbreviated to 4
> characters, except February (febr.) and August (ag.), with 5 and 3
> characters respectively. This makes the columns of «ls -l» from the
> month onwards fall out of alignment:
>
>
You're right. I had forgotten about that.

It's a quirk (or defect) of «ls». It parses the date format string in
order to make the columns line up. Specifically, it parses the first %b of
the string, but in Catalan (on buster) we have %OB, which is why the
columns don't line up.

There is no simple and perfect solution. It's a double i18n problem; two
things would need changing in the «ls» code for it to handle Catalan
properly.

As I see it, the best option, as a compromise, is to change the string in
coreutils so it uses %b instead of %Ob. That would only affect «ls». The
only drawback is that in the month column we would have the preposition,
but the columns will line up.

The other options require: either many changes to «ls», or changing the
month abbreviations (which affects other programs), or leaving the columns
misaligned.

Regards,
Joan Montané


Incorrect date format from «ls -l»

2020-12-23 Thread Joan Montané
Hi,

Following on from the automysqlbackup email the other day, I already gave
the link to the bug with the fix, but in case it's useful to anyone here,
I'm writing up in detail how to patch Debian (and derivatives) so you
don't go crazy looking at the dates when running «ls -l». A little
Christmas present, :)


Description of the problem:
When running «ls -l», the month column appears first and then the day of
the month. The month appears *without* the preposition on stretch and
*with* the preposition on buster.

Short fix:
*With root permissions*, all you need is the LC_TIME directory and a
symlink to coreutils.mo. That is:

mkdir /usr/share/locale/ca/LC_TIME

ln -s /usr/share/locale/ca/LC_MESSAGES/coreutils.mo
/usr/share/locale/ca/LC_TIME/coreutils.mo

With these two actions, running «ls -l» should already give you the day
column first, and then the month (without the preposition, on both stretch
and buster).
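
An alternative per-invocation workaround that needs no root access is to
override the timestamp format directly; --time-style=+FORMAT is standard
GNU ls, and %Ob needs a recent enough glibc:

  ls -l --time-style=+'%e %Ob %Y'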

Explanation of the problem:
Ten years ago [1] the coreutils.mo symlink was removed from the LC_TIME
directory. It was assumed to serve no purpose, and so it got dropped. The
problem is that the «ls» command does need it. And this affects a ton of
locales, not just Catalan. It's already fixed in sid [2].

Without that coreutils.mo symlink, «ls -l» uses the original English
string for the date format, which is "%b %e %Y" (month day year). On
stretch %b returns the month (glibc didn't allow months with a
preposition), but on buster, in Catalan, %b returns the month with the
preposition, and the problem is even more annoying. But the point is that
it uses the original string, because the symlink doesn't exist.

If the coreutils.mo symlink exists, then «ls -l» uses the translated
string. On stretch that is "%e %b %Y" (day month year) and on buster
"%e %Ob %Y" (day month-without-preposition year).

In short, if you don't want to wait for the next Debian release, and the
date format annoys you as much as it does me, now you know the fix :)

Merry Christmas and Happy New Year!

Joan Montané

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=584837
[2] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=963513


Re: Problems with automysqlbackup combined with a Catalan "locale"

2020-12-20 Thread Joan Montané
Message from Josep Lladonosa  on Sun, 20 Dec 2020 at 11:56:

> Hi,
>
> It looks like the solution is already on its way, but I wanted to offer
> my opinion:
>
> For me this is a bug in the script, since it should not assume that
> dates contain no spaces if they are going to be used in a file name,
> because a file name can contain spaces.
>
> Perhaps a bug can be reported to the script's author, and even a
> proposal made to fix it by "escaping" the file name.
>
>
Indeed, in the particular case at hand, two things coincide to make the
problem appear:

1. In some languages, the variable holding the month name contains a
space. This is the case for Catalan if the %B format is used, for every
month except April, August and October.
2. The script does not escape the possible spaces in its variables, and if
there are spaces, the script does not do what it is supposed to do.

The bug is in 2 (you cannot blithely assume that a variable contains no
spaces). The problem could be worked around by using "%OB" in the script to
extract the month names (every language I have checked returns the months
without a space), but that still does not guarantee that the strings will
never contain spaces, and the problem described in 2 could show up again.
The proper fix is to escape the spaces in the variables, with double
quotes, as the sketch below shows.
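
A minimal shell sketch of the failure mode (the file names are
hypothetical; run under a Catalan locale):

  month=$(LC_TIME=ca_ES.UTF-8 date +%B)  # e.g. "de gener", with a space
  touch backup-$month.sql    # wrong: splits into "backup-de" and "gener.sql"
  touch "backup-$month.sql"  # right: double quotes keep one file name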

As Josep says, the best thing is to open a bug against automysqlbackup
and, ideally, contribute the fix there.

Cheers!
Joan Montané

PS: we can at least be thankful that with %B, "d'abril", "d'agost" and
"d'octubre" use the typographic apostrophe, because I suspect that if they
used the straight apostrophe, the script would also have had trouble in
those months.


[SCM] GNU Mach branch, jlledom-mem-obj-proxy, updated. v1.8-218-g4191c68

2020-12-19 Thread Joan Lledó
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU Mach".

The branch, jlledom-mem-obj-proxy has been updated
   via  4191c68b2121ec78d3ed3580cc3565f638fd56c5 (commit)
  from  4868356b517d60fa6df3a141ae70fecfc9299f60 (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -
commit 4191c68b2121ec78d3ed3580cc3565f638fd56c5
Author: Joan Lledó 
Date:   Sat Dec 19 12:36:42 2020 +0100

Memory object proxy: add support for ranges

---

Summary of changes:
 vm/memory_object_proxy.c | 13 -
 vm/memory_object_proxy.h |  4 +++-
 vm/vm_user.c |  9 -
 3 files changed, 19 insertions(+), 7 deletions(-)


hooks/post-receive
-- 
GNU Mach



[SCM] GNU Mach branch, jlledom-mem-obj-proxy, created. v1.8-217-g4868356

2020-12-19 Thread Joan Lledó
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU Mach".

The branch, jlledom-mem-obj-proxy has been created
at  4868356b517d60fa6df3a141ae70fecfc9299f60 (commit)

- Log -
commit 4868356b517d60fa6df3a141ae70fecfc9299f60
Author: Joan Lledó 
Date:   Sun Nov 22 19:34:23 2020 +0100

Remove trailing whitespaces

---


hooks/post-receive
-- 
GNU Mach



Re: Contributing - available projects?

2020-12-17 Thread Joan Lledó

Hi,

El 15/12/20 a les 21:40, Edward Haigh ha escrit:
Of course! I'll set a dev environment up and do a little research on 
existing frameworks, then come back to you.




I remember I used to use the Perl test suite to test the lwip 
translator, you may want to take a look.




[Bug 1908200] [NEW] package phpmyadmin 4:4.9.7+dfsg1-1 failed to install/upgrade: el subprocés «s'ha instal·lat el script phpmyadmin del paquet post-installation» retornà el codi d'eixida d'error 1

2020-12-14 Thread Joan Pagès Rosas
Public bug reported:

This error appeared during the automatic system update, along with
others.

ProblemType: Package
DistroRelease: Ubuntu 20.10
Package: phpmyadmin 4:4.9.7+dfsg1-1
ProcVersionSignature: Ubuntu 5.8.0-33.36-generic 5.8.17
Uname: Linux 5.8.0-33-generic x86_64
ApportVersion: 2.20.11-0ubuntu50.2
Architecture: amd64
CasperMD5CheckResult: skip
Date: Thu Dec 10 09:38:26 2020
ErrorMessage: el subprocés «s'ha instal·lat el script phpmyadmin del paquet 
post-installation» retornà el codi d'eixida d'error 1
InstallationDate: Installed on 2020-11-15 (29 days ago)
InstallationMedia: Ubuntu 20.10 "Groovy Gorilla" - Release amd64 (20201022)
PackageArchitecture: all
Python3Details: /usr/bin/python3.8, Python 3.8.6, python3-minimal, 
3.8.6-0ubuntu1
PythonDetails: N/A
RelatedPackageVersions:
 dpkg 1.20.5ubuntu2
 apt  2.1.10ubuntu0.1
SourcePackage: phpmyadmin
Title: package phpmyadmin 4:4.9.7+dfsg1-1 failed to install/upgrade: el 
subprocés «s'ha instal·lat el script phpmyadmin del paquet post-installation» 
retornà el codi d'eixida d'error 1
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: phpmyadmin (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package groovy

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1908200

Title:
  package phpmyadmin 4:4.9.7+dfsg1-1 failed to install/upgrade: el
  subprocés «s'ha instal·lat el script phpmyadmin del paquet post-
  installation» retornà el codi d'eixida d'error 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/phpmyadmin/+bug/1908200/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

RE: Snmpv3 users details are not deleting from /var/net-snmp/snmpd.conf file

2020-12-14 Thread Joan Landry
Thanks – but my question was regarding actually being the master agent and 
internally updating the net-snmp library with data received via a CLI.
I believe the only way to do this is via the snmpd.conf file.
So any info on this would be greatly appreciated.
Thanks,


From: Larry Hayes 
Sent: Monday, December 14, 2020 10:48 AM
To: chandrasekharreddy chinnapareddygari 
Cc: net-snmp-cod...@lists.sourceforge.net; net-snmp-users@lists.sourceforge.net
Subject: Re: Snmpv3 users details are not deleting from 
/var/net-snmp/snmpd.conf file

External email: [net-snmp-users-boun...@lists.sourceforge.net]

I am no expert, but do deal with creating and deleting SNMP v3 users in my job.

You may have to use the snmpusm tool to remove V3 users without restarting 
the snmpd daemon.

From the man page:
" snmpusm is an SNMP application that can be used to do simple maintenance on 
the users known to an SNMP agent, by manipulating the agent's User-based 
Security Module (USM) table. The user needs write access to the usmUserTable 
MIB table. This tool can be used to create, delete, clone, and change the 
passphrase of users configured on a running SNMP agent."
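
For reference, a hedged sketch of the kind of snmpusm invocation meant
here (the agent address, admin credentials and user names are all
placeholders):

  # delete a v3 user on a running agent:
  snmpusm -v 3 -l authPriv -u admin -a SHA -A adminpass -x AES -X adminpass \
      localhost delete olduser

  # create a new v3 user by cloning an existing template user:
  snmpusm -v 3 -l authPriv -u admin -a SHA -A adminpass -x AES -X adminpass \
      localhost create newuser templateuser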




On Sat, Dec 12, 2020 at 9:55 PM chandrasekharreddy chinnapareddygari 
mailto:chandrasekhar...@hotmail.com>> wrote:
Hi team,
I'm using net-snmp version 5.8. My requirement is that the conf files 
should update without restarting snmpd.

I'm sending a SIGHUP signal to update SNMP data without restarting snmpd, 
but the snmpv3 details are not updating.
Please help me how to proceed further.


Thanks,
Chandra.



Get Outlook for 
Android
___
Net-snmp-coders mailing list
net-snmp-cod...@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/net-snmp-coders

Please see our privacy statement at 
https://www.adva.com/en/about-us/legal/privacy-statement for details of how 
ADVA processes personal information.
___
Net-snmp-users mailing list
Net-snmp-users@lists.sourceforge.net
Please see the following page to unsubscribe or change other options:
https://lists.sourceforge.net/lists/listinfo/net-snmp-users


[Bf-blender-cvs] [68d5ad99839] master: Fix T75539: Cycles missing geometry update when switching displacement method

2020-12-14 Thread Joan Bonet Orantos
Commit: 68d5ad998393856da3285e0d51290b31e5360770
Author: Joan Bonet Orantos
Date:   Mon Dec 14 13:42:34 2020 +0100
Branches: master
https://developer.blender.org/rB68d5ad998393856da3285e0d51290b31e5360770

Fix T75539: Cycles missing geometry update when switching displacement method

The shaders were not tagged for a needed geometry update when the displacement 
method was modified, neither were the Geometry and Object managers.

Reviewed By: kevindietrich

Maniphest Tasks: T75539

Differential Revision: https://developer.blender.org/D8896

===

M   intern/cycles/render/shader.cpp

===

diff --git a/intern/cycles/render/shader.cpp b/intern/cycles/render/shader.cpp
index cf49dedc426..7e06b427e4d 100644
--- a/intern/cycles/render/shader.cpp
+++ b/intern/cycles/render/shader.cpp
@@ -347,8 +347,15 @@ void Shader::tag_update(Scene *scene)
   foreach (ShaderNode *node, graph->nodes)
    node->attributes(this, &attributes);
 
-  if (has_displacement && displacement_method == DISPLACE_BOTH) {
-attributes.add(ATTR_STD_POSITION_UNDISPLACED);
+  if (has_displacement) {
+if (displacement_method == DISPLACE_BOTH) {
+  attributes.add(ATTR_STD_POSITION_UNDISPLACED);
+}
+if (displacement_method_is_modified()) {
+  need_update_geometry = true;
+  scene->geometry_manager->need_update = true;
+  scene->object_manager->need_flags_update = true;
+}
   }
 
   /* compare if the attributes changed, mesh manager will check

___
Bf-blender-cvs mailing list
Bf-blender-cvs@blender.org
https://lists.blender.org/mailman/listinfo/bf-blender-cvs


Re: fsck with raid1 + lvm

2020-12-10 Thread Joan
The disroot service is cool ;-)


On Thu, 10 Dec 2020 09:02:15 +0100
Narcis Garcia  wrote:

> The googleusercontent link requires logging in as a client or
> registered user of the company.
> I can suggest an independent service for sharing images temporarily,
> which moreover runs on free software:
> https://upload.disroot.org/
> 
> 
> 
> Narcis Garcia
> 
> __
> I'm using this dedicated address because personal addresses aren't
> masked enough at this mail public archive. Public archive
> administrator should fix this against automated addresses collectors.
> On 9/12/20 at 20:45, Lluís Gras wrote:
> > Hello Àlex and company,
> > 
> > The raid + lvm was set up with the installer of a Stretch that has
> > been upgraded to Buster, and little more. I don't have experience
> > with lvmraid either.
> > 
> > Below, the screenshot I took, in case someone can get more
> > information out of it.
> > 
> > https://lh3.googleusercontent.com/MVklyTTBM3KWcLjLVO1dR6MJPQAOzd24S-TQ2dnwwOyWpYDuGm2yi59MDH_PzTjI_X1vDkkPKbrp1daxHUNOx6T0LOzFxLtjUYflKiuVXSEVuU1iSGkzfRAI-H_Sf3A3yEwT2ToIXg2HyuWU9jtmoGpBB0I0BOF7feui5w4Z-4pifZYW1L0LC27BgvGTEEK5-qW8zn_wt-woEKd037aj5NUjHCN0XULxmAkN0w2iO1tdcJO373Br080snDsXGyzkFG0qt3CrCqm63f42XbJCkUPEbI_02cWlv60OdT97JinvNlZBgD8aorORcvEGD3f_oG2LxF67ksBjogi3sQsVfBWeRBYWEYCS9cwOHJLmFKTiO4YI61R7Kv6ANbQVcI6P5gUDlHhpY566khi_la2jcCLoFjY5foTuuH1FbQ-1szS5QLt01sKkvSvyIuK4y4Ox0sYMd7VXDCKV1t_ZtnTK4tdoulLBcKNbkPhP9KQBKHWdGhTeL_KcbAtP5YUh3xA8uaWwleeEpuerU3xfYpW0a-TS5iFUp4XWNoEhDzaYBVzDS8oMjYqgjLeelCZzjXGw5qaQ8K6OnioSNQemfsWU3rnMoW8j57zRC9HUgXE6SeMJazaBu0IBl2zcMZFTQIBtGoTWrf6nVz391iE378FYfztoeMkMTw3Xk2F3_twbGj1y4v4U2yKxU5sLpQRqgTU=w495-h880-no?authuser=0
> > <https://lh3.googleusercontent.com/MVklyTTBM3KWcLjLVO1dR6MJPQAOzd24S-TQ2dnwwOyWpYDuGm2yi59MDH_PzTjI_X1vDkkPKbrp1daxHUNOx6T0LOzFxLtjUYflKiuVXSEVuU1iSGkzfRAI-H_Sf3A3yEwT2ToIXg2HyuWU9jtmoGpBB0I0BOF7feui5w4Z-4pifZYW1L0LC27BgvGTEEK5-qW8zn_wt-woEKd037aj5NUjHCN0XULxmAkN0w2iO1tdcJO373Br080snDsXGyzkFG0qt3CrCqm63f42XbJCkUPEbI_02cWlv60OdT97JinvNlZBgD8aorORcvEGD3f_oG2LxF67ksBjogi3sQsVfBWeRBYWEYCS9cwOHJLmFKTiO4YI61R7Kv6ANbQVcI6P5gUDlHhpY566khi_la2jcCLoFjY5foTuuH1FbQ-1szS5QLt01sKkvSvyIuK4y4Ox0sYMd7VXDCKV1t_ZtnTK4tdoulLBcKNbkPhP9KQBKHWdGhTeL_KcbAtP5YUh3xA8uaWwleeEpuerU3xfYpW0a-TS5iFUp4XWNoEhDzaYBVzDS8oMjYqgjLeelCZzjXGw5qaQ8K6OnioSNQemfsWU3rnMoW8j57zRC9HUgXE6SeMJazaBu0IBl2zcMZFTQIBtGoTWrf6nVz391iE378FYfztoeMkMTw3Xk2F3_twbGj1y4v4U2yKxU5sLpQRqgTU=w495-h880-no?authuser=0>
> > 
> > Message from Alex Muntada <mailto:al...@debian.org> on Wed, 9 Dec
> > 2020 at 10:28:
> > 
> > Hello Lluís
> >   
> > > the question is whether anyone has run into this, and why fsck
> > > tells me it has already fixed all the errors, and yet on the next
> > > boot it finds them again on different inodes, etc ...  
> > 
> > I have never found myself in the situation you describe, despite
> > having managed quite a few servers for a good number of years with
> > mdadm in RAID1 and with LVM for the volumes. Over those years we
> > had a pile of failed disks and some power cuts that the UPS could
> > not handle, but I don't recall a scenario like the one you
> > describe.
> > 
> > Thinking about it a bit, it occurs to me that maybe the difference
> > in your case is that the RAID1 is managed by lvmraid instead of
> > mdadm? I have no experience with lvmraid, so I can't tell you
> > whether that's where the problem lies, but it's the only difference
> > I can think of between your experience and mine.
> > 
> > Cheers and best regards!
> > Alex
> > 
> > --
> >   ⢀⣴⠾⠻⢶⣦⠀
> >   ⣾⠁⢠⠒⠀⣿⡁   Alex Muntada  > <mailto:al...@debian.org>> ⢿⡄⠘⠷⠚⠋   Debian Developer 
> > log.alexm.org <http://log.alexm.org> ⠈⠳⣄
> >   
> 



-- 
Joan Cervan i Andreu
http://personal.calbasi.net

"El meu paper no és transformar el món ni l'home sinó, potser, el de
ser útil, des del meu lloc, als pocs valors sense els quals un món no
val la pena viure'l" A. Camus

i pels que teniu fe:
"Déu no és la Veritat, la Veritat és Déu"
Gandhi



[Phonon] [Bug 430198] New: Cannot play mov files through phonon

2020-12-09 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=430198

Bug ID: 430198
   Summary: Cannot play mov files through phonon
   Product: Phonon
   Version: 4.11.1
  Platform: Neon Packages
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: unassigned-b...@kde.org
  Reporter: aseq...@gmail.com
CC: myr...@kde.org, romain.per...@gmail.com,
sit...@kde.org
  Target Milestone: ---

Created attachment 133959
  --> https://bugs.kde.org/attachment.cgi?id=133959&action=edit
Media information

SUMMARY

I have a video in mov format, recorded from a Canon DSLR (EOS 550D), that
can't be played with phonon (neither gstreamer nor vlc); in both cases,
when playing the file (with either dragon or gwenview) I only get a black
screen with no sound. On the console there are several of these messages:

  WARNING: bool Phonon::FactoryPrivate::createBackend() phonon backend plugin
could not be loaded
  WARNING: bool Phonon::FactoryPrivate::createBackend() phonon backend plugin
could not be loaded
  WARNING: bool Phonon::FactoryPrivate::createBackend() phonon backend plugin
could not be loaded
  WARNING: Phonon::createPath: Cannot connect  Phonon::MediaObject ( no
objectName ) to  Phonon::VideoWidget ( no objectName ).


STEPS TO REPRODUCE
1. Remove phonon-backend-gstreamer
2. Try to play mov file in any program using phonon
3. Black screen with no sound
4. Install again  phonon-backend-gstreamer and remove phonon-backend-vlc
5. Try to play mov file in any program using phonon
6. Black screen with no sound

The same file can be played without problems with both:
- gst-launch-1.0  filesrc location=MVI_6781.mov ! queue ! decodebin !
autovideosink
- vlc MVI_6781.mov

SOFTWARE/OS VERSIONS

Linux/KDE Plasma: 
KDE Frameworks Version: Frameworks 5.76.0 
Qt Version: Qt 5.15.1 
Plasma Neon

-- 
You are receiving this mail because:
You are watching all bug changes.

[kphotoalbum] [Bug 430097] New: Crash when viewing videos

2020-12-06 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=430097

Bug ID: 430097
   Summary: Crash when viewing videos
   Product: kphotoalbum
   Version: 5.7.0
  Platform: Neon Packages
OS: Linux
Status: REPORTED
  Keywords: drkonqi
  Severity: crash
  Priority: NOR
 Component: general
  Assignee: kpab...@willden.org
  Reporter: aseq...@gmail.com
  Target Milestone: ---

Application: kphotoalbum (5.7.0)

Qt Version: 5.15.1
Frameworks Version: 5.76.0
Operating System: Linux 5.4.0-56-generic x86_64
Windowing system: X11
Distribution: KDE neon User Edition 5.20

-- Information about the crash:
- What I was doing when the application crashed:
I opened one of the videos of my collection and kphotoalbum just crashed;
there still seem to be issues with the video integration.

The crash can be reproduced sometimes.

-- Backtrace:
Application: KPhotoAlbum (kphotoalbum), signal: Aborted

[New LWP 58667]
[New LWP 58668]
[New LWP 58669]
[New LWP 58670]
[New LWP 58671]
[New LWP 58685]
[New LWP 58686]
[New LWP 58687]
[New LWP 59271]
[New LWP 59279]
[New LWP 59281]
[New LWP 59284]
[New LWP 59287]
[New LWP 59288]
[New LWP 59289]
[New LWP 59290]
[New LWP 59291]
[New LWP 59292]
[New LWP 59294]
[New LWP 59318]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x7f69ff30baff in __GI___poll (fds=0x7ffe33504728, nfds=1, timeout=1000) at
../sysdeps/unix/sysv/linux/poll.c:29
[Current thread is 1 (Thread 0x7f69ec64d1c0 (LWP 58665))]

Thread 21 (Thread 0x7f69357fa700 (LWP 59318)):
#0  futex_wait_cancelable (private=, expected=0,
futex_word=0x7f69357f92f8) at ../sysdeps/nptl/futex-internal.h:183
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x7f69357f92a8,
cond=0x7f69357f92d0) at pthread_cond_wait.c:508
#2  __pthread_cond_wait (cond=0x7f69357f92d0, mutex=0x7f69357f92a8) at
pthread_cond_wait.c:638
#3  0x7f69f8e4494e in base::ConditionVariable::Wait() () from
/usr/lib/x86_64-linux-gnu/libQt5WebEngineCore.so.5
#4  0x in ?? ()

Thread 20 (Thread 0x7f6937fff700 (LWP 59294)):
#0  0x7f69ff23f0c0 in __GI_getenv (name=0x7f69ff3ad434 "NGUAGE",
name@entry=0x7f69ff3ad432 "LANGUAGE") at getenv.c:75
#1  0x7f69ff22f05c in guess_category_value (categoryname=0x7f69ff393493
<_nl_category_names+51> "LC_MESSAGES", category=5) at dcigettext.c:1565
#2  __dcigettext (domainname=0x7f69ff3ad405 <_libc_intl_domainname> "libc",
msgid1=0x7f69ff3ad8ac "Bad file descriptor", msgid2=msgid2@entry=0x0,
plural=plural@entry=0, n=n@entry=0, category=category@entry=5) at
dcigettext.c:647
#3  0x7f69ff22d993 in __GI___dcgettext (domainname=,
msgid=, category=category@entry=5) at dcgettext.c:47
#4  0x7f69ff298672 in __GI___strerror_r (errnum=errnum@entry=9,
buf=buf@entry=0x0, buflen=buflen@entry=0) at _strerror.c:71
#5  0x7f69ff298593 in strerror (errnum=9) at strerror.c:31
#6  0x7f69f3d3dcf6 in event_warn () from
/usr/lib/x86_64-linux-gnu/libevent-2.1.so.7
#7  0x7f69f3d3f768 in ?? () from
/usr/lib/x86_64-linux-gnu/libevent-2.1.so.7
#8  0x7f69f3d35625 in event_base_loop () from
/usr/lib/x86_64-linux-gnu/libevent-2.1.so.7
#9  0x7f69f8e512b3 in
base::MessagePumpLibevent::Run(base::MessagePump::Delegate*) () from
/usr/lib/x86_64-linux-gnu/libQt5WebEngineCore.so.5
#10 0x0100 in ?? ()
#11 0x7f6930001d19 in ?? ()
#12 0x7f6930001d18 in ?? ()
#13 0x7f6937ffe250 in ?? ()
#14 0x7f6937ffe248 in ?? ()
#15 0x7f69300027a0 in ?? ()
#16 0x0008 in ?? ()
#17 0x01833c76 in ?? ()
#18 0x0019 in ?? ()
#19 0x0005c436 in ?? ()
#20 0x7f6937ffe318 in ?? ()
#21 0x7f696006b920 in ?? ()
#22 0x7fff in ?? ()
#23 0x7f696006b9c8 in ?? ()
#24 0x7fff in ?? ()
#25 0x0001 in ?? ()
#26 0x7f6937ffe2d0 in ?? ()
#27 0x7f69f8df4f89 in
base::sequence_manager::internal::ThreadControllerWithMessagePumpImpl::Run(bool,
base::TimeDelta) () from /usr/lib/x86_64-linux-gnu/libQt5WebEngineCore.so.5
#28 0x7f6937ffe318 in ?? ()
#29 0x7f6937ffe3b0 in ?? ()
#30 0x7f6937ffe310 in ?? ()
#31 0x7f6937ffe3b0 in ?? ()
#32 0x in ?? ()

Thread 19 (Thread 0x7f696cff9700 (LWP 59292)):
#0  futex_wait_cancelable (private=, expected=0,
futex_word=0x7f696cff82f8) at ../sysdeps/nptl/futex-internal.h:183
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x7f696cff82a8,
cond=0x7f696cff82d0) at pthread_cond_wait.c:508
#2  __pthread_cond_wait (cond=0x7f696cff82d0, mutex=0x7f696cff82a8) at
pthread_cond_wait.c:638
#3  0x7f69f8e4494e in base::ConditionVariable::Wait() () from
/usr/lib/x86_64-linux-gnu/libQt5WebEngineCore.so.5
#4  0x in ?? ()

Thread 18 (Thread 0x7f696d7fa700 (LWP 59291)):
#0  futex_wait_cancelable (private=, expected=0,
futex_word=0x7f696d7f91a8) at ../sysdeps/nptl/futex-internal.h:183
#1  

[kphotoalbum] [Bug 427780] Crash when tagging images (not videos)

2020-12-05 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=427780

--- Comment #4 from Joan  ---
Hi, I finally found time to organize the pictures again. About your
questions (today I haven't experienced any crashes so far):

   - The job queue is empty (I remember that back then there were more
   items in the queue and it was blinking all the time)
   - Maybe the import queue was finally processed, or there might be some
   improvements in video in the current Neon version

If more errors appear I'll let you know

Missatge de Johannes Zarl-Zierl  del dia ds., 21
de nov. 2020 a les 1:44:

> https://bugs.kde.org/show_bug.cgi?id=427780
>
> Johannes Zarl-Zierl  changed:
>
>What|Removed |Added
>
> 
>  CC||johan...@zarl-zierl.at
>
> --- Comment #3 from Johannes Zarl-Zierl  ---
> Hi Joan,
>
> Thanks for your patience and for keeping the crash reports coming.
>
> The backtraces all seem related to the video thumbnailer.
>
> Summarizing your reports so far, it seems that the crashes occur during
> random
> times and do not seem related to a specific task that you are doing.
>
> If you start kphotoalbum and take a look at the background job queue
> (press the
> "LED-like" thing in the status bar) - is it mostly empty? Does it show
> failed
> jobs (red status).
>
> Did you try just opening kphotoalbum and not doing anything? Does it crash
> after a while?
>
> Thanks,
>   Johannes
>
> --
> You are receiving this mail because:
> You reported the bug.
> You are on the CC list for the bug.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: Nano branch pruning

2020-12-01 Thread Joan Touzet
Consider this email my +1 on all of them :)

-Joan

On 01/12/2020 10:54, Glynn Bird wrote:
> I've made a start with these PRs:
> 
> https://github.com/apache/couchdb/pull/3285
> https://github.com/apache/couchdb-fauxton/pull/1302
> https://github.com/apache/couchdb-docker/pull/193
> https://github.com/apache/couchdb-pkg/pull/75
> https://github.com/apache/couchdb-helm/pull/47
> https://github.com/apache/couchdb-documentation/pull/609
> 
> 
> On Mon, 30 Nov 2020 at 19:12, Glynn Bird  wrote:
> 
>> I will. I did start to look at this then got sidetracked. I'll look this
>> week.
>>
>> On Mon, 30 Nov 2020 at 19:01, Joan Touzet  wrote:
>>
>>> Glynn, might you find time to work on this this week? I just noticed
>>> that `main` isn't protected on couchdb-documentation, which is a bad
>>> thing.
>>>
>>> If not I'll try and do it myself.
>>>
>>> -Joan
>>>
>>> On 09/11/2020 12:23, Joan Touzet wrote:
>>>> I think I just heard you volunteer to do the PRs on the repos you
>>>> mentioned below! ;)
>>>>
>>>> I'd leave off www, this isn't something where we need the multi-step
>>>> process.
>>>>
>>>> -Joan
>>>>
>>>> On 09/11/2020 03:55, Glynn Bird wrote:
>>>>> It turns out that branch protection doesn't require Infra intervention,
>>>>> there's a bunch of configuration flags that can be switched on via the
>>>>> .asf.yaml file:
>>>>>
>>> https://cwiki.apache.org/confluence/display/INFRA/git+-+.asf.yaml+features#git.asf.yamlfeatures-BranchProtection
>>>>>
>>>>> A simple configuration example would be:
>>>>>
>>>>> github:
>>>>>   protected_branches:
>>>>>     main
>>>>>
>>>>> On Thu, 5 Nov 2020 at 11:59, Glynn Bird  wrote:
>>>>>
>>>>>> Created Infra ticket
>>> https://issues.apache.org/jira/browse/INFRA-21076
>>>>>>
>>>>>> On Wed, 4 Nov 2020 at 16:48, Glynn Bird  wrote:
>>>>>>
>>>>>>> https://github.com/apache/couchdb - main not protected
>>>>>>> https://github.com/apache/couchdb-fauxton - main not protected
>>>>>>> https://github.com/apache/couchdb-docker - main not protected
>>>>>>> https://github.com/apache/couchdb-www - still using master
>>>>>>> https://github.com/apache/couchdb-pkg - main not protected
>>>>>>> https://github.com/apache/couchdb-helm - main not protected
>>>>>>> ...
>>>>>>>
>>>>>>> There are numerous others. I couldn't find an example where the main
>>>>>>> branch _was_ protected, but in most cases the old master branch was.
>>>>>>>
>>>>>>> Glynn
>>>>>>>
>>>>>>>
>>>>>>> On Wed, 4 Nov 2020 at 16:36, Glynn Bird 
>>> wrote:
>>>>>>>
>>>>>>>> I'm on it.
>>>>>>>>
>>>>>>>> Glynn
>>>>>>>>
>>>>>>>> On Wed, 4 Nov 2020 at 16:23, Joan Touzet  wrote:
>>>>>>>>
>>>>>>>>> Yes, you'll have to file a ticket with Infra for this.
>>>>>>>>>
>>>>>>>>> We probably need to do this on quite a few repos, which
>>> unfortunately
>>>>>>>>> means that we may be forced to write a script to address it.
>>>>>>>>>
>>>>>>>>> Would you be willing to volunteer to check and see which ones have
>>>>>>>>> unprotected main branches - at least for the big 4-5 or so
>>> (couchdb,
>>>>>>>>> docs, fauxton come to mind)?
>>>>>>>>>
>>>>>>>>> -Joan
>>>>>>>>>
>>>>>>>>> On 04/11/2020 11:13, Glynn Bird wrote:
>>>>>>>>>> I've published the 9.0.0 release of Nano, the Node.js library for
>>>>>>>>> CouchDB
>>>>>>>>>> which features:
>>>>>>>>>>
>>>>>>>>>> - request library replaced with axios
>>>>>>>>>> - rewritten changes follower 
>>>>>>>>>> - fewer dependencies 
>>>>>>>>>>
>>>>>>>>>> https://www.npmjs.com/package/nano
>>>>>>>>>>
>>>>>>>>>> Thanks to all who helped in the work to build and test this
>>> release.
>>>>>>>>>>
>>>>>>>>>> While tidying up afterwards I noticed that the new default main
>>>>>>>>> branch is
>>>>>>>>>> "unprotected" and the old default ("master") is "protected" so I
>>> can't
>>>>>>>>>> delete it.
>>>>>>>>>>
>>>>>>>>>> Is this something Apache Infra would be able to help with?
>>>>>>>>>>
>>>>>>>>>> Glynn
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>
>>>
>>
> 


Re: Nano branch pruning

2020-11-30 Thread Joan Touzet
Glynn, might you find time to work on this this week? I just noticed
that `main` isn't protected on couchdb-documentation, which is a bad thing.

If not I'll try and do it myself.

-Joan

On 09/11/2020 12:23, Joan Touzet wrote:
> I think I just heard you volunteer to do the PRs on the repos you
> mentioned below! ;)
> 
> I'd leave off www, this isn't something where we need the multi-step
> process.
> 
> -Joan
> 
> On 09/11/2020 03:55, Glynn Bird wrote:
>> It turns out that branch protection doesn't require Infra intervention,
>> there's a bunch of configuration flags that can be switched on via the
>> .asf.yaml file:
>> https://cwiki.apache.org/confluence/display/INFRA/git+-+.asf.yaml+features#git.asf.yamlfeatures-BranchProtection
>>
>> A simple configuration example would be:
>>
>> github:
>>   protected_branches:
>>     main
>>
>> On Thu, 5 Nov 2020 at 11:59, Glynn Bird  wrote:
>>
>>> Created Infra ticket https://issues.apache.org/jira/browse/INFRA-21076
>>>
>>> On Wed, 4 Nov 2020 at 16:48, Glynn Bird  wrote:
>>>
>>>> https://github.com/apache/couchdb - main not protected
>>>> https://github.com/apache/couchdb-fauxton - main not protected
>>>> https://github.com/apache/couchdb-docker - main not protected
>>>> https://github.com/apache/couchdb-www - still using master
>>>> https://github.com/apache/couchdb-pkg - main not protected
>>>> https://github.com/apache/couchdb-helm - main not protected
>>>> ...
>>>>
>>>> There are numerous others. I couldn't find an example where the main
>>>> branch _was_ protected, but in most cases the old master branch was.
>>>>
>>>> Glynn
>>>>
>>>>
>>>> On Wed, 4 Nov 2020 at 16:36, Glynn Bird  wrote:
>>>>
>>>>> I'm on it.
>>>>>
>>>>> Glynn
>>>>>
>>>>> On Wed, 4 Nov 2020 at 16:23, Joan Touzet  wrote:
>>>>>
>>>>>> Yes, you'll have to file a ticket with Infra for this.
>>>>>>
>>>>>> We probably need to do this on quite a few repos, which unfortunately
>>>>>> means that we may be forced to write a script to address it.
>>>>>>
>>>>>> Would you be willing to volunteer to check and see which ones have
>>>>>> unprotected main branches - at least for the big 4-5 or so (couchdb,
>>>>>> docs, fauxton come to mind)?
>>>>>>
>>>>>> -Joan
>>>>>>
>>>>>> On 04/11/2020 11:13, Glynn Bird wrote:
>>>>>>> I've published the 9.0.0 release of Nano, the Node.js library for
>>>>>> CouchDB
>>>>>>> which features:
>>>>>>>
>>>>>>> - request library replaced with axios
>>>>>>> - rewritten changes follower 
>>>>>>> - fewer dependencies 
>>>>>>>
>>>>>>> https://www.npmjs.com/package/nano
>>>>>>>
>>>>>>> Thanks to all who helped in the work to build and test this release.
>>>>>>>
>>>>>>> While tidying up afterwards I noticed that the new default main
>>>>>> branch is
>>>>>>> "unprotected" and the old default ("master") is "protected" so I can't
>>>>>>> delete it.
>>>>>>>
>>>>>>> Is this something Apache Infra would be able to help with?
>>>>>>>
>>>>>>> Glynn
>>>>>>>
>>>>>>
>>>>>
>>


Reverting a PR, Ilya please reopen

2020-11-30 Thread Joan Touzet
Hi Ilya,

I accidentally merged

https://github.com/apache/couchdb-documentation/pull/550

before realizing that it was 4.0 documentation. I'm going to revert that
now. You'll want to re-prep that PR for merging later; my suspicion is
that we will want a 3.2 or 3.1.2 release before 4.0 comes out and I'd
rather not have 4.0 stuff committed before we're ready to make that
change. (Maintaining the 3.x branch on dev is hard enough, I don't want
to branch the docs prematurely.)

Sorry about that.

-Joan


Re: Does the number of partitions affect performance?

2020-11-26 Thread Joan Touzet
Hi Olaf, I don't have extensive experience with partitioned databases,
but I'm happy to provide some recommendations.

On 26/11/2020 01:48, Olaf Krueger wrote:
> Hi guys,
> 
> I am in the process of preparing our docs for partitioning.
> I guess a lot of us are already using something like a "docType" prop within 
> each document.
> So it seems to be obvious to partition the docs by its "docType".

This isn't typically what partitioned databases are used for. As the
documentation example shows, partitioning is great when you want,
*consistently*, to select a small percentage of documents out of a very
large database.

In the provided example, you have an IoT application, with (let's say)
100,000 sensors that all record a few readings a day:

100,000 sensors * 10 readings a day = 1mil documents a day

But most often you look at this data per-sensor, and let's say you're
looking at an individual sensor's data for a week. That would be:

10 readings * 7 days = 70 documents / 7mil documents = 0.001%

This is why an index doesn't make sense in this case. Indexing by sensor
name, then retrieving just that 0.001% of the data, requires looking
across all of the shards in your database and getting an answer from all
of them. In CouchDB 2.x, this would be q=8 shards by default - but a
database that is growing by 1mil documents a day might well have q=16,
24, 32 or even more shards. Multiply by n=3 and that could be up to over
100 shards consulted across the cluster to retrieve a very small amount
of documents -- many of which will return no matches.

Partitioning keeps all of the related documents together in a single shard,
so the
query for 70 documents above will all come from a single shard. That
means not waiting for 8*3=24 Erlang sub-processes (internal to CouchDB)
all to respond, then have those results collated, before getting a response.
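
As a concrete sketch (the endpoints are CouchDB 3.x's partitioned-database
API; the host, credentials, database and document names are placeholders):

  # create a partitioned database:
  curl -X PUT 'http://admin:pass@localhost:5984/sensors?partitioned=true'

  # document ids take the form "<partition>:<doc id>":
  curl -X PUT 'http://admin:pass@localhost:5984/sensors/sensor-1234:r-0001' \
       -H 'Content-Type: application/json' -d '{"value": 21.7}'

  # a partitioned query only touches the shard holding sensor-1234:
  curl 'http://admin:pass@localhost:5984/sensors/_partition/sensor-1234/_all_docs'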

The more critical portion is that secondary indexes are also scoped only
to that partition. As the documentation says on the last line:

> To be clear, this means that global queries perform identically to queries on 
> non-partitioned databases. Only partitioned queries on a partitioned database 
> benefit from the performance improvements.

So the real reason for jumping through the partitioned database hoops is
only when you know, conclusively, that you're going to want primarily to
ask questions only of your partitions, not globally. Keep in mind that
the recommendation for partitioned databases is to have a very large
number of partitions. That means that if you ever need to ask a global
question, you might not just be consulting 8 or 16 shards, but something
like 100,000 partitions for your answer. That's considerably slower (and
harder for CouchDB to collate) than asking just 8 shards.

In my opinion, you only want to make this optimization if your data
meets this specific design pattern. (Another example would be a unified,
partition-per-user approach.) Maybe it makes sense in a different ratio
of docs-to-partitions, but I've not had exposure to that scenario (yet).

> Depending on the database, this would lead to a certain number of partitions.
> Let's say 10, or maybe 100 or more over time.

In your case, standard Mango indexes (or JavaScript queries) is the
right approach. Partitions were introduced for a very specific reason:
when the pattern of user data leads to partitioning better than
CouchDB's automatic sharding algorithm, and where both primary and
secondary index lookups are only ever going to access documents within a
specific partition of documents.

> So I wonder, is there any limit for the number of partitions so that should 
> we think about more wisely about how to partition our database?

You also ask:

> Imagine we have e.g. 100.000 docs "of the same type" within a single 
> partition and we're far away from having 100.000 partitions and more across 
> the database.
> Could this be a hint that our docs are too complex and should rather be
> split into smaller docs?

That's a very different question...one that would require looking more
in depth at your documents and query patterns.

I would personally look at Mango partial indexes first - where you build
an index that contains only documents of a certain type. You can then
more easily ask sub-queries of that document type, such as a sub-date
range, or a sub-type.
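
For instance, a hedged sketch of such a partial index (the field and
docType names are hypothetical; note that with Mango, a query's selector
must also match the partial_filter_selector for the index to be used):

  curl -X POST 'http://admin:pass@localhost:5984/mydb/_index' \
       -H 'Content-Type: application/json' \
       -d '{"index": {"fields": ["createdAt"],
                      "partial_filter_selector": {"docType": "invoice"}},
            "ddoc": "invoices-by-date", "type": "json"}'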

One last thing: CouchDB 4.x will not (under the covers) implement
partitioned databases, as they provide no speedup in the data storage.
We're keeping the endpoints for now, just for compatibility, but they'll
eventually be dropped. Given this, unless there's a real compelling need
to bake partition-based queries throughout your app code base, I would
avoid them.

-Joan "parted -l /dev/couchdb" Touzet


[kphotoalbum] [Bug 427780] Crash when tagging images (not videos)

2020-11-20 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=427780

--- Comment #2 from Joan  ---
Created attachment 133510
  --> https://bugs.kde.org/attachment.cgi?id=133510&action=edit
New crash information added by DrKonqi

kphotoalbum (5.7.0) using Qt 5.15.1

- What I was doing when the application crashed:

After tagging some photos I tried to delete some of them.

-- Backtrace (Reduced):
#4  0x55c3fb3638d0 in
BackgroundJobs::HandleVideoThumbnailRequestJob::sendResult
(this=0x55c4078ab780, image=...) at
./BackgroundJobs/HandleVideoThumbnailRequestJob.cpp:106
#5  0x55c3fb363ac7 in
BackgroundJobs::HandleVideoThumbnailRequestJob::frameLoaded
(this=this@entry=0x55c4078ab780, image=...) at
./BackgroundJobs/HandleVideoThumbnailRequestJob.cpp:72
#6  0x55c3fb1655ad in
BackgroundJobs::HandleVideoThumbnailRequestJob::qt_static_metacall
(_o=0x55c4078ab780, _c=, _id=, _a=) at
./obj-x86_64-linux-gnu/kphotoalbum_autogen/UHUIEV64BD/moc_HandleVideoThumbnailRequestJob.cpp:73
#7  0x7fea7b28d980 in doActivate (sender=0x55c403206190,
signal_index=3, argv=0x7ffc24bb99a0) at
../../include/QtCore/../../src/corelib/kernel/qobjectdefs_impl.h:395
[...]
#9  0x55c3fb15ad56 in ImageManager::ExtractOneVideoFrame::result
(this=this@entry=0x55c403206190, _t1=...) at
./obj-x86_64-linux-gnu/kphotoalbum_autogen/NAEE7Z5ID4/moc_ExtractOneVideoFrame.cpp:144

-- 
You are receiving this mail because:
You are watching all bug changes.

[kphotoalbum] [Bug 427780] Crash when tagging images (not videos)

2020-11-20 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=427780

Joan  changed:

   What|Removed |Added

 CC||aseq...@gmail.com

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: Test CouchDB 4.0

2020-11-17 Thread Joan Touzet
Hello yhilem,

CouchDB 4.0 isn't anywhere near a packaged release yet, or ready for
end-user testing. It is not at feature parity with previous CouchDBs.
This is why we haven't announced anything publicly.

If you're interested in helping develop along with us, you can check out
the code and run the code through our docker container. Here is how our
CI system does it:

https://github.com/apache/couchdb/blob/main/build-aux/Jenkinsfile.pr

-Joan "not ready for prime time players" Touzet

On 17/11/2020 02:11, yhilem wrote:
> Hi,
> I want to test version 4.0 but I can't find a package for windows 10 or a
> docker image.
> Thanks in advance.
> 
> 
> 
> --
> Sent from: http://couchdb-development.1959287.n2.nabble.com/
> 


Re: Any recommendations? A simple graphical git client

2020-11-16 Thread Joan Albert
Hi!

> I consider GitLab freer than GitHub for the following reasons:
> 
> 1. It can be self-hosted, that is, you can get the same software to
> set up your own independent installation.
> 2. The self-hostable software is free software (MIT/X11).
> 3. GitHub is owned by Microsoft, a corporation that works against
> technological freedoms.

And while we're at it, I looked for 100% free-software platforms and
found two good options: Sourcehut[1] and Codeberg[2] (in case anyone
else had concerns about this too).

[1] sourcehut.org
[2] codeberg.org

Cheers!



Re: Any recommendations? A simple graphical git client

2020-11-14 Thread Joan
On Sun, 15 Nov 2020 at 00:27 +0100, Alex Muntada wrote:
> Some time ago I used gitk quite a lot, but since
> 
> I configured git diff to highlight differences at
> 
> the line level, I no longer need it.

How do you do that, with git diff?

PS: I've seen that gitk brings up a graphical interface, so if I put it
on a remote server that I access via terminal, it won't work for me...
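
For reference, two built-in ways to get this kind of highlighting (a
sketch; whether Alex used these exact options is an assumption):

  git diff --color-words   # highlight changed words inline, in color
  git diff --word-diff     # mark changes as [-old-]{+new+} word markers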




Re: Any recommendations? A simple graphical git client

2020-11-14 Thread Joan Cervan i Andreu
Replying to myself again:

I've seen that tig tries to "improve" git's output (and I suppose that if
you dig into it, just as when you dive into git's commands, it can be
interesting).

But it is not a graphical solution (with all the intuitiveness and low
learning curve that a graphical interface has).

So I stand by the question I asked at the beginning :-)


On Sat, 14 Nov 2020 at 17:18 +0100, Joan Cervan i Andreu wrote:
> Replying to myself, because I'd say that installing tig on the
> development servers would be enough for me...
> 
> On Sat, 14 Nov 2020 at 17:06 +0100, Joan Cervan i Andreu wrote:
> > Related to this, in case anyone has a suggestion, let me lay out my
> > particular case/need:
> > 
> > I don't work locally on my repositories; I work remotely, on
> > several servers where I have projects under development.
> > 
> > So a tool like the ones you mention would suit me, as long as it
> > can access the repositories via ssh (remotely).
> > 
> > Do you know of any tool that can do that?
> > 
> > Joan Cervan
> > calbasi.net
> > 
> > On Sat, 14 Nov 2020 at 08:42 +0100, Àlex wrote:
> > > Good morning, community,
> > > 
> > > I'm new to git. From now on I'll have to do some basic tasks:
> > > edit local files in markdown and push them to Github, and upload
> > > the odd pdf too.
> > > 
> > > Of the simple graphical git clients, which one do you recommend?
> > > Gitg? Geany's Git plugin? Is there no markdown editor where I can
> > > see the result (WYSIWYG) and also push changes via Git? vscode?
> > > 
> > > I work with Debian Testing, with Mate and XFCE desktops.
> > > 
> > > Thanks and regards
> > > 
> > > 
> > > 



Re: Any recommendations? A simple graphical git client

2020-11-14 Thread Joan Cervan i Andreu
Replying to myself, because I'd say that installing tig on the
development servers would be enough for me...

On Sat, 14 Nov 2020 at 17:06 +0100, Joan Cervan i Andreu wrote:
> Related to this, in case anyone has a suggestion, let me lay out my
> particular case/need:
> 
> I don't work locally on my repositories; I work remotely, on several
> servers where I have projects under development.
> 
> So a tool like the ones you mention would suit me, as long as it can
> access the repositories via ssh (remotely).
> 
> Do you know of any tool that can do that?
> 
> Joan Cervan
> calbasi.net
> 
> On Sat, 14 Nov 2020 at 08:42 +0100, Àlex wrote:
> > Good morning, community,
> > 
> > I'm new to git. From now on I'll have to do some basic tasks: edit
> > local files in markdown and push them to Github, and upload the odd
> > pdf too.
> > 
> > Of the simple graphical git clients, which one do you recommend?
> > Gitg? Geany's Git plugin? Is there no markdown editor where I can
> > see the result (WYSIWYG) and also push changes via Git? vscode?
> > 
> > I work with Debian Testing, with Mate and XFCE desktops.
> > 
> > Thanks and regards
> > 
> > 
> > 



Re: Any recommendations? A simple graphical git client

2020-11-14 Thread Joan Cervan i Andreu
Related to this, in case anyone has a suggestion, let me lay out my
particular case/need:

I don't work locally on my repositories; I work remotely, on several
servers where I have projects under development.

So a tool like the ones you mention would suit me, as long as it can
access the repositories via ssh (remotely).

Do you know of any tool that can do that?

Joan Cervan
calbasi.net

On Sat, 14 Nov 2020 at 08:42 +0100, Àlex wrote:
> Good morning, community,
> 
> I'm new to git. From now on I'll have to do some basic tasks: edit
> local files in markdown and push them to Github, and upload the odd
> pdf too.
> 
> Of the simple graphical git clients, which one do you recommend?
> Gitg? Geany's Git plugin? Is there no markdown editor where I can see
> the result (WYSIWYG) and also push changes via Git? vscode?
> 
> I work with Debian Testing, with Mate and XFCE desktops.
> 
> Thanks and regards
> 
> 
> 



Re: Any recommendations? A simple graphical git client

2020-11-14 Thread Joan
Related to this, in case anyone has a suggestion, let me lay out my
particular case/need:

I don't work locally on my repositories; I work remotely, on several
servers where I have projects under development.

So a tool like the ones you mention would suit me, as long as it can
access the repositories via ssh (remotely).

Do you know of any tool that can do that?

Joan Cervan
calbasi.net

> On Sat, 14 Nov 2020 at 08:42 +0100, Àlex wrote:
> > Good morning, community,
> > 
> > I'm new to git. From now on I'll have to do some basic tasks: edit
> > local files in markdown and push them to Github, and upload the odd
> > pdf too.
> > 
> > Of the simple graphical git clients, which one do you recommend?
> > Gitg? Geany's Git plugin? Is there no markdown editor where I can
> > see the result (WYSIWYG) and also push changes via Git? vscode?
> > 
> > I work with Debian Testing, with Mate and XFCE desktops.
> > 
> > Thanks and regards
> > 
> > 
> > 



Re: Travis CI migration

2020-11-12 Thread Joan Touzet
Hi Gavin, thanks for doing this. Just yesterday these 3 URLs crossed my
desktop:

  https://mailchi.mp/3d439eeb1098/travis-ciorg-is-moving-to-travis-cicom
  https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing
  https://www.theregister.com/2020/11/02/travis_ci_pricng/

Guess I missed the announcements in the election run-up.

Please migrate these 4 repos for us:

couchdb-docker
couchdb-ci
couchdb-pkg
couchdb-fauxton

We'll eventually get these into Jenkins, but there's no rush, as they
build very infrequently by comparison.

Cheers,
Joan

On 12/11/2020 07:33, Gavin McDonald wrote:
> Hi All,
> 
> I have attempted to start the (required) migration of project builds from
> travis-ci.org
> to travis-ci.com.
> 
> Please check https://travis-ci.com/github/apache and see if your project is
> there.
> If so, please do any needed changes your end to start using travis-ci.com
> 
> If your project is not listed on travis-ci.com and you have been using
> travis-ci.org
> previously, then let me know so I can take a look.
> 
> Thanks
> 


Re: Shall we hold an online BSF?

2020-11-10 Thread Joan
I very much doubt I can fix any bug (who knows), but I could certainly
use the gathering to do Debian tasks (mainly translating the
website/documentation into Catalan).

-- 
Joan Cervan i Andreu
http://personal.calbasi.net

"El meu paper no és transformar el món ni l'home sinó, potser, el de
ser útil, des del meu lloc, als pocs valors sense els quals un món no
val la pena viure'l" A. Camus

i pels que teniu fe:
"Déu no és la Veritat, la Veritat és Déu"
Gandhi


On Sun, 8 Nov 2020 22:09:37 +0100
Àlex  wrote:

> Bugs here:
> 
>     https://udd.debian.org/bugs/
> 






Re: Nano branch pruning

2020-11-09 Thread Joan Touzet
I think I just heard you volunteer to do the PRs on the repos you
mentioned below! ;)

I'd leave off www, this isn't something where we need the multi-step
process.

-Joan

On 09/11/2020 03:55, Glynn Bird wrote:
> It turns out that branch protection doesn't require Infra intervention,
> there's a bunch of configuration flags that can be switched on via the
> .asf.yaml file:
> https://cwiki.apache.org/confluence/display/INFRA/git+-+.asf.yaml+features#git.asf.yamlfeatures-BranchProtection
> 
> A simple configuration example would be:
> 
> github:
>   protected_branches:
>     main
> 
> On Thu, 5 Nov 2020 at 11:59, Glynn Bird  wrote:
> 
>> Created Infra ticket https://issues.apache.org/jira/browse/INFRA-21076
>>
>> On Wed, 4 Nov 2020 at 16:48, Glynn Bird  wrote:
>>
>>> https://github.com/apache/couchdb - main not protected
>>> https://github.com/apache/couchdb-fauxton - main not protected
>>> https://github.com/apache/couchdb-docker - main not protected
>>> https://github.com/apache/couchdb-www - still using master
>>> https://github.com/apache/couchdb-pkg - main not protected
>>> https://github.com/apache/couchdb-helm - main not protected
>>> ...
>>>
>>> There are numerous others. I couldn't find an example where the main
>>> branch _was_ protected, but in most cases the old master branch was.
>>>
>>> Glynn
>>>
>>>
>>> On Wed, 4 Nov 2020 at 16:36, Glynn Bird  wrote:
>>>
>>>> I'm on it.
>>>>
>>>> Glynn
>>>>
>>>> On Wed, 4 Nov 2020 at 16:23, Joan Touzet  wrote:
>>>>
>>>>> Yes, you'll have to file a ticket with Infra for this.
>>>>>
>>>>> We probably need to do this on quite a few repos, which unfortunately
>>>>> means that we may be forced to write a script to address it.
>>>>>
>>>>> Would you be willing to volunteer to check and see which ones have
>>>>> unprotected main branches - at least for the big 4-5 or so (couchdb,
>>>>> docs, fauxton come to mind)?
>>>>>
>>>>> -Joan
>>>>>
>>>>> On 04/11/2020 11:13, Glynn Bird wrote:
>>>>>> I've published the 9.0.0 release of Nano, the Node.js library for
>>>>> CouchDB
>>>>>> which features:
>>>>>>
>>>>>> - request library replaced with axios
>>>>>> - rewritten changes follower 
>>>>>> - fewer dependencies 
>>>>>>
>>>>>> https://www.npmjs.com/package/nano
>>>>>>
>>>>>> Thanks to all who helped in the work to build and test this release.
>>>>>>
>>>>>> While tidying up afterwards I noticed that the new default main
>>>>> branch is
>>>>>> "unprotected" and the old default ("master") is "protected" so I can't
>>>>>> delete it.
>>>>>>
>>>>>> Is this something Apache Infra would be able to help with?
>>>>>>
>>>>>> Glynn
>>>>>>
>>>>>
>>>>
> 


D6628: Fix and normalize license in .desktop files

2020-11-09 Thread Joan Maspons
maspons added inline comments.

INLINE COMMENTS

> metadata.desktop:49
>  X-KDE-PluginInfo-Depends=
> -X-KDE-PluginInfo-License=GPL
>  X-KDE-PluginInfo-EnabledByDefault=true

Are the changes from GPL to LGPL intended?

> metadata.desktop:60
>  X-KDE-PluginInfo-Depends=
> -X-KDE-PluginInfo-License=GPL
>  X-KDE-PluginInfo-EnabledByDefault=true

Are the changes from GPL to LGPL intended?

> metadata.desktop:66
>  X-KDE-PluginInfo-Depends=
> -X-KDE-PluginInfo-License=GPL
>  X-KDE-PluginInfo-EnabledByDefault=true

Are the changes from GPL to LGPL intended?

> metadata.desktop:115
>  X-KDE-PluginInfo-Depends=
> -X-KDE-PluginInfo-License=GPL
>  X-KDE-PluginInfo-EnabledByDefault=true

Are the changes from GPL to LGPL intended?

> plasma-toolbox-paneltoolbox.desktop:119
>  X-KDE-PluginInfo-EnabledByDefault=true
> -X-KDE-PluginInfo-License=GPL
>  X-KDE-PluginInfo-Name=org.kde.paneltoolbox

Are the changes from GPL to LGPL intended?

REPOSITORY
  R119 Plasma Desktop

REVISION DETAIL
  https://phabricator.kde.org/D6628

To: sebas, #plasma, sitter, mart, broulik
Cc: maspons, bam, mak, mart, plasma-devel, Orage, LeGast00n, The-Feren-OS-Dev, 
cblack, jraleigh, zachus, fbampaloukas, ragreen, ZrenBot, ngraham, himcesjf, 
lesliezhai, ali-mohamed, jensreuterberg, abetts, sebas, apol, ahiemstra


Re: Implement paging on the pci arbiter

2020-11-08 Thread Joan Lledó
Hi,

El 3/11/20 a les 23:13, Samuel Thibault ha escrit:
> 
> That would probably work, yes.
> 
> 

I got something pushed to my branch at [1]. But I found the
implementation for pager proxies in gnu mach is incomplete. In
particular I can't restrict a range to be mapped. I think I could fix it
but need some help.

- Would it be enough to add the range info to the memory_object_proxy
  struct and then read them from vm_map and use them to restrict the
  values of the offset and size being sent to vm_map_enter?
  (vm_user.c:395)
- Why is the proxy interface designed to work with arrays? Is vm_map
  supposed to call vm_map_enter multiple times for proxies?

---
1. http://git.savannah.gnu.org/cgit/hurd/hurd.git/log/?h=jlledom-pci-mem



[SCM] Hurd branch, jlledom-pci-mem, created. v0.9.git20200930-6-g5b2100f

2020-11-08 Thread Joan Lledó
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "Hurd".

The branch, jlledom-pci-mem has been created
at  5b2100fb28bb61b9a4585217264e36a3e151b475 (commit)

- Log -
commit 5b2100fb28bb61b9a4585217264e36a3e151b475
Author: Joan Lledó 
Date:   Sun Nov 8 10:23:33 2020 +0100

pci-arbiter: Implement netfs_get_filemap()

* pci-arbiter/netfs_impl.c:
* Implement callback: netfs_get_filemap
* Check whether the file being mapped is a region
  file
* Return the proxy if exists
* Create a new proxy and return it

commit dc859c3d4ba4015a2dae7ce63769238dcb3e
Author: Joan Lledó 
Date:   Sun Nov 8 10:17:21 2020 +0100

pci arbiter: add a memory object proxy to directory entries

* pci-arbiter/pcifs.h:
* struct pcifs_dirent: New field: memproxy
* pci-arbiter/pcifs.c:
* create_dir_entry: Initialize memproxy to MACH_PORT_NULL

commit 374f97ac78bd1a85609ff15f5e82f0c5023195df
Author: Marcus Brinkmann 
Date:   Wed Oct 31 01:14:17 2001 +0100

libnetfs: Implement RPC: io_map

* libnetfs/iostubs.c: implement io_map

commit 0e4f1220a6d7ab4cc75f5f31e58cb4ce3a7f584d
Author: Joan Lledó 
Date:   Thu Nov 5 12:45:37 2020 +0100

libnetfs: new user callback: netfs_get_filemap()

Provide the user with a new callback so they can implement file
mapping over file system nodes.

* libnetfs/netfs.h: Add prototype for netfs_get_filemap

---


hooks/post-receive
-- 
Hurd



Re: vsz_limit

2020-11-06 Thread Joan Moreau

Sorry, my mistake, the conversion type was wrong.

So restrict_get_process_size is indeed consistent with vsz_limit

Now, for the memory usage of the process, getrusage gives only the /max/ 
memory used (ru_maxrss), not the current usage


The only way I found is to fopen("/proc/self/status") and read the 
correct line. Do you have a better way?
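
A minimal C sketch of that approach (Linux-specific; VmRSS is the current
resident set size, in kB):

#include <stdio.h>
#include <string.h>

/* Returns the current resident set size in kB, or -1 on failure. */
static long current_vmrss_kb(void)
{
	FILE *f = fopen("/proc/self/status", "r");
	char line[256];
	long kb = -1;

	if (f == NULL)
		return -1;
	while (fgets(line, sizeof(line), f) != NULL) {
		if (strncmp(line, "VmRSS:", 6) == 0) {
			(void)sscanf(line + 6, "%ld", &kb);
			break;
		}
	}
	fclose(f);
	return kb;
}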


thank you

On 2020-11-06 14:16, Joan Moreau wrote:


ok found it,

However, it returns me some random number. Maybe I am missing something

On 2020-11-06 13:57, Aki Tuomi wrote:
Duh... src/lib/restrict-process-size.h

Should be in the installed include files as well,

/usr/include/dovecot/restrict-process-size.h

Aki

On 06/11/2020 15:56 Joan Moreau  wrote:

Hello
I can't find "src/lib/restrict.h" . Is it in dovecot source ?

On 2020-11-06 13:20, Aki Tuomi wrote: Seems I had forgotten that you 
can use src/lib/restrict.h, in particular, restrict_get_process_size() 
to figure out the limit. You can combine this with getrusage to find 
out current usage.


Aki

On 06/11/2020 13:26 Joan Moreau  wrote:

yes, will do so.
It would be nice however to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote: You could also add it as setting 
for the fts_xapian plugin parameters?


Aki

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much ram 
remains available before starting indexing a mail, so I can commit 
everything on disk before the ram is exhausted (and break the process)
I tried to put a "fake" allocation to test if it fails, (so it can fail 
separately, and I can "if ram remaining is above X") but the is really 
not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

Re: vsz_limit

2020-11-06 Thread Joan Moreau

ok found it,

However, it returns me some random number. Maybe I am missing something

On 2020-11-06 13:57, Aki Tuomi wrote:


Duh... src/lib/restrict-process-size.h

Should be in the installed include files as well,

/usr/include/dovecot/restrict-process-size.h

Aki

On 06/11/2020 15:56 Joan Moreau  wrote:

Hello
I can't find "src/lib/restrict.h" . Is it in dovecot source ?

On 2020-11-06 13:20, Aki Tuomi wrote: Seems I had forgotten that you 
can use src/lib/restrict.h, in particular, restrict_get_process_size() 
to figure out the limit. You can combine this with getrusage to find 
out current usage.


Aki

On 06/11/2020 13:26 Joan Moreau  wrote:

yes, will do so.
It would be nice however to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote: You could also add it as setting 
for the fts_xapian plugin parameters?


Aki

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much ram 
remains available before starting indexing a mail, so I can commit 
everything on disk before the ram is exhausted (and break the process)
I tried to put a "fake" allocation to test if it fails, (so it can fail 
separately, and I can "if ram remaining is above X") but the is really 
not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

Re: vsz_limit

2020-11-06 Thread Joan Moreau

Hello

I can't find "src/lib/restrict.h" . Is it in dovecot source ?

On 2020-11-06 13:20, Aki Tuomi wrote:

Seems I had forgotten that you can use src/lib/restrict.h, in 
particular, restrict_get_process_size() to figure out the limit. You 
can combine this with getrusage to find out current usage.


Aki

On 06/11/2020 13:26 Joan Moreau  wrote:

yes, will do so.
It would be nice however to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote: You could also add it as setting 
for the fts_xapian plugin parameters?


Aki

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much ram 
remains available before starting indexing a mail, so I can commit 
everything on disk before the ram is exhausted (and break the process)
I tried to put a "fake" allocation to test if it fails, (so it can fail 
separately, and I can "if ram remaining is above X") but the is really 
not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

Fatal: write(indexer) failed: Resource temporarily unavailable

2020-11-06 Thread Joan Moreau

Hello

I have this issue with the Xapian plugin:

https://github.com/grosjo/fts-xapian/issues/62

But I am not sure where it comes from.

Is dovecot calling some specific function in the plugin after the init 
that would create such an error?


Is doveadm dealing with plugins differently than dovecot core does?

Have there been some recent changes in the plugin framework that would 
lead to such an error?


Thank you

Re: vsz_limit

2020-11-06 Thread Joan Moreau

yes, will do so.

It would be nice, however, to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote:


You could also add it as a setting among the fts_xapian plugin parameters?

Aki
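
A minimal sketch of reading such a parameter from the plugin block, using 
mail_user_plugin_getenv() and t_strsplit_spaces() from the Dovecot API; 
the "lowmemory" key name and its parsing are purely illustrative:

#include <stdlib.h>
#include <string.h>
#include "lib.h"        /* Dovecot: t_strsplit_spaces() */
#include "mail-user.h"  /* Dovecot: mail_user_plugin_getenv() */

/* Parse a hypothetical "lowmemory=<MB>" token out of
 * plugin { fts_xapian = partial=2 full=20 lowmemory=300 }.
 * Returns 0 when the parameter is absent. */
static long fts_xapian_get_lowmemory_mb(struct mail_user *user)
{
    const char *env = mail_user_plugin_getenv(user, "fts_xapian");
    const char *const *tmp;

    if (env == NULL)
        return 0;
    for (tmp = t_strsplit_spaces(env, " "); *tmp != NULL; tmp++) {
        if (strncmp(*tmp, "lowmemory=", 10) == 0)
            return strtol(*tmp + 10, NULL, 10);
    }
    return 0;
}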

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much ram 
remains available before starting to index a mail, so I can commit 
everything to disk before the ram is exhausted (and the process breaks).
I tried to put in a "fake" allocation to test whether it fails (so it 
can fail separately, and I can check "if the ram remaining is above 
X"), but this is really not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

WriteListener.onWritePossible is never called back again if the origin cuts the socket

2020-11-05 Thread Joan ventusproxy
Hello,

Tomcat 8.5.55 (also tried with 8.5.37).
Similar to “Bug 62614 - Async servlet over HTTP/2 WriteListener does not work 
because onWritePossible is never called back again” but using NIO connector:


I’m unable to create an example that reproduces this issue. So I will explain 
what’s happening and I hope someone can give me some clue about what’s going on.

The case is simple: we connect to our servlet using HttpComponents, setting a 
response timeout of 10 seconds. Since our servlet takes less than 5 seconds to 
get the response from the backend, it returns all the content to the client 
(it’s about 180K).
The problem arises when we set a response timeout of, for instance, 2 seconds. 
In this case the client closes the socket with the servlet before the servlet 
can return the response. In fact, in most situations, by the time we set the 
WriteListener on the async response this socket is already closed. In this 
situation, 2 different things happen at random:

1. The expected: the WriteListener throws an IOException: 
org.apache.catalina.connector.ClientAbortException: java.io.IOException: Broken 
pipe, and the ‘onError’ method is called:

while (this.numIterations > 0 && this.os.isReady()) {
    this.os.write(this.response, this.startIdx, this.endIdx - this.startIdx);  // <-- The error happens here.
    ( . . . )
}

2. The unexpected. The ‘onWritePossible’ method is called just once. 
In this case ‘onWritePossible’ is called, this.os.isReady is true and the 
execution enters the loop, but the above ‘this.os.write’ does not throw any 
exception; then ‘this.os.isReady’ becomes false, so the execution exits the 
loop and the ‘onWritePossible’ method terminates. It is never called again 
(nor is the ‘onError’ method).


Here I leave a link to the interesting part of the WriteListener code and the 3 
traces: the right one returning the document (trace_OK.txt), the right one 
returning the error (trace_OK_with_broken_pipe.txt) and the wrong one calling 
‘onWritePossible’ just once (trace_KO.txt) : 
https://github.com/joanbalaguero/Tomcat.git

I tried to find a solution for this, without success. I developed a simple test 
case, and it was impossible to reproduce the issue. I’m pretty sure it’s a lack 
of knowledge on my part about how this listener works in this case (socket 
already closed), but after reading tutorial after tutorial I’m not able to find 
the solution.

So any help would be very very appreciated.

Thanks for your time.

Joan.



-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



[www.kde.org] [Bug 428701] New: When returning after doing a donation the page gives a 404

2020-11-04 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=428701

Bug ID: 428701
   Summary: When returning after doing a donation the page gives a
404
   Product: www.kde.org
   Version: unspecified
  Platform: Other
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: kde-...@kde.org
  Reporter: aseq...@gmail.com
  Target Milestone: ---

I just did a donation to kde using the page at
https://kde.org/community/donations/; after confirming the donation I got a
404.
I imagine that the confirmation page is where I could customize the donation
message.

When looking at the page at
https://kde.org/community/donations/previousdonations/ it doesn't show any
messages customized by the users since August.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: Nano branch pruning

2020-11-04 Thread Joan Touzet
Yes, you'll have to file a ticket with Infra for this.

We probably need to do this on quite a few repos, which unfortunately
means that we may be forced to write a script to address it.

Would you be willing to volunteer to check and see which ones have
unprotected main branches - at least for the big 4-5 or so (couchdb,
docs, fauxton come to mind)?

-Joan

On 04/11/2020 11:13, Glynn Bird wrote:
> I've published the 9.0.0 release of Nano, the Node.js library for CouchDB
> which features:
> 
> - request library replaced with axios
> - rewritten changes follower
> - fewer dependencies
> 
> https://www.npmjs.com/package/nano
> 
> Thanks to all who helped in the work to build and test this release.
> 
> While tidying up afterwards I noticed that the new default main branch is
> "unprotected" and the old default ("master") is "protected" so I can't
> delete it.
> 
> Is this something Apache Infra would be able to help with?
> 
> Glynn
> 


Re: vsz_limit

2020-11-04 Thread Joan Moreau
For machines with low memory, I would like to detect how much ram 
remains available before starting to index a mail, so I can commit 
everything to disk before the ram is exhausted (and the process breaks).

I tried to put in a "fake" allocation to test whether it fails (so it 
can fail separately, and I can check "if the ram remaining is above 
X"), but this is really not clean
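
To make the hack concrete, the probe looks roughly like this (a sketch 
only: WANTED_HEADROOM is an illustrative threshold, and on Linux memory 
overcommit means a large malloc() usually succeeds even when the memory 
cannot actually be backed later, which is exactly why the approach is 
not clean):

#include <stdlib.h>

#define WANTED_HEADROOM (64 * 1024 * 1024) /* 64 MB, illustrative */

/* Returns 1 if a "fake" allocation of WANTED_HEADROOM bytes succeeds,
 * 0 if it is time to commit the index to disk first. */
static int enough_ram_for_next_mail(void)
{
    void *probe = malloc(WANTED_HEADROOM);

    if (probe == NULL)
        return 0;
    free(probe);
    return 1;
}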


On 2020-11-04 06:28, Aki Tuomi wrote:


On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you


Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

vsz_limit

2020-11-03 Thread Joan Moreau

Hello

I am looking for help around memory management

1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker


2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?


Thank you

Re: Implement paging on the pci arbiter

2020-11-03 Thread Joan Lledó
Hi,

On 26/8/20 at 17:01, Samuel Thibault wrote:
> I'm unsure if libpager will be useful actually, since all you need is
> to pass on a memory object clamped to the target physical memory. See
> gnumach's support for proxy memory object, which possibly is just
> enough.
> 
> Samuel
> 

I did some research on this and think I could implement file mapping in
libnetfs based on this old patch by Marcus Brinkmann:

https://lists.gnu.org/archive/html/bug-hurd/2001-10/msg00305.html

Thomas Bushnell BSG replied to that message, saying

> You must promise the kernel that *any* change to the underlying data
> for the mapped object will never change except with you telling the
> kernel

Which makes sense, but it seems to me it should be OK to enable file
mapping in a net filesystem provided only memory proxy objects are used
for this, since each proxy will belong to a memory object that meets the
requirement of keeping the kernel updated on any change in the
underlying data.

So we can take the implementation of io_map from that patch, but add a
check to ensure that whatever is returned by the user from netfs_get_filemap()
is a proxy object. Is there a way to check that?

With that, from the arbiter side we only need to get the default pager
and create a proxy for each region file, with its boundaries and
permissions, and write an implementation of netfs_get_filemap which
returns the proper proxy for each request.

What do you think?





Re: Docker rate limits likely spell DOOM for any Apache project CI workflow relying on Docker Hub

2020-11-02 Thread Joan Touzet

Hey Gavin,

To avoid the rate limiting, this means that we need to bake CI 
credentials into jobs for accounts inside of the apache org. Those 
credentials need to be used for all `docker pull` commands.


How can we do this in a way that complies with ASF Infra policy?

Thanks,
Joan "the battle wages on / for Toy Soldiers" Touzet


On 2020-11-02 4:57 a.m., Gavin McDonald wrote:

Hi All,

Any project under the 'apache' org on DockerHub is not affected by the
restrictions.

Kind Regards

Gavin "The futures so bright you gotta wear shades" McDonald


On Thu, Oct 29, 2020 at 11:08 PM Gavin McDonald 
wrote:


Hi ,

Just to note I have emailed DockerHub, asking for clarification on our
account and what our benefits are.


On Thu, Oct 29, 2020 at 6:34 PM Allen Wittenauer
 wrote:




On Oct 29, 2020, at 9:21 AM, Joan Touzet  wrote:

(Sidebar about the script's details)


 Sure.


I tried to read the shell script, but I'm not in the headspace to fully
parse it at the moment. If I'm understanding correctly, this will still
catch CouchDB's CI docker images if they haven't changed in a week, which
happens often enough, negating the cache.

 Correct. We actually tried something similar for a while and
discovered that in a lot of cases, upstream packages would disappear (or
worse, have security problems) thus making it look like the image is still
"good" when it's not.  So a rebuild weekly at least guarantees some level
of "yup, still good" without having too much of a negative impact.


As a project, we're kind of stuck between a rock and a hard place. We
want to force a docker pull on the base CI image if it's out of date or the
image is corrupted. Otherwise we want to cache forever, not just for a
week. I can probably manage the "do we need to re-pull?" bit with some
clever CI scripting (check for the latest image hash locally, validate the
local image, pull if either fails) but I don't understand how the script
resolves the latter.

 Most projects that use Yetus for their actual CI testing build
the image used for the CI as part of the CI.  It is a multi-stage,
multi-file docker build that has each run use a 'base' Dockerfile (provided
by the project) that rarely changed and a per-run file that Yetus generates
on the fly, with both images tagged by either git sha or branch (depending
upon context). Due to how docker image reference counts on the layers work,
this makes the docker images effectively used as a "rolling cache" and
(beyond a potential weekly cache removal) full builds are rare.. thus
making them relatively cheap (typically <1m runtime) unless the base image
had a change far up the chain (so structure wisely).  Of course, this also
tests the actual image of the CI build as part of the CI.  (What tests the
testers? philosophy)   Given that Jenkins tries really hard to have job
affinity, re-runs were still cheap after the initial one. [Ofc, now that
the cache is getting nuked every day]

 Actually, looking at some of the ci-hadoop jobs, it looks like
yetus is managing the cache on them.  I'm seeing individual run containers
from days ago at least.  So that's a good sign.


Can an exemption list be passed to the script so that images matching a
certain regex are excluded? You say the script ignores labels entirely, so
perhaps not...

 Patches accepted. ;)

 FWIW, I've been testing on my local machine for unrelated reasons
and I keep blowing away running containers I care about so I might end up
adding it myself.  That said: the code was specifically built for CI
systems where the expectation should be that nothing is permanent.




--

*Gavin McDonald*
Systems Administrator
ASF Infrastructure Team






Re: Warning: Incoming Docker Hub rate limits are going to negatively impact CouchDB CI workflows

2020-10-30 Thread Joan Touzet
Following up the follow-up, I did some time comparisons on the 3.x
branch (after fixing a packaging error).

For full builds:

                        | Pre-change | Post-change
 =======================|============|=============
  Build Release Tarball | 4m34s      | 3m31s
 -----------------------|------------|-------------
  Test and Package      | ~20m38s    | ~18m06s
 -----------------------|------------|-------------
  Publish               | 2m38s      | 1m39s


For PR builds, the time difference seems to be negligible, dominated by
the actual test run variance, but it also appears to be about 1 minute
faster by skipping the download.

-Joan "time for taquitos" Touzet


On 29/10/2020 18:09, Joan Touzet wrote:
> Following up,
> 
> I've implemented a new Jenkins job that re-pulls all current couchdbdev
> images on each docker node every night. The job takes 12 minutes to run.
> 
> Once a week, it also runs the "pull them all" set of images I mentioned
> a few months ago, to keep our images on Docker Hub from disappearing.
> (That run takes about an hour and happens Sunday nights, randomly,
> between 0-7h UTC.)
> 
> With this we can drop the "alwaysPull true" in the Jenkinsfiles, which
> should speed up PRs and builds by a couple of minutes each.
> 
> Those PRs are up here:
> 
>   https://github.com/apache/couchdb/pull/3234
>   https://github.com/apache/couchdb/pull/3233
> 
> which need +1s to land.
> 
> That leaves a single problem scenario, namely when the couchdbdev images
> are updated and Docker stubbornly refuses to pull the latest image,
> causing build issues. In that case, simply log into our Jenkins and
> click "Build Now" here:
> 
> https://ci-couchdb.apache.org/job/jenkins-cm1/job/Update%20Docker%20Containers/
> 
> 
> which forces the image removal and re-pull.
> 
> I hope this is enough to avoid any rate limiting problems.
> 
> -Joan "Gir, stop singing this instant!" Touzet
> 
> On 2020-10-29 12:05 a.m., Joan Touzet wrote:
>> I just posted about this on the ASF-wide bui...@apache.org list:
>>
>> https://lists.apache.org/thread.html/r5ccf60da8072b3c2b587152256ebaf6a0e7b81182d5e240a2b2a0f02%40%3Cbuilds.apache.org%3E
>>
>>
>> TL;DR: We're not immune, even with our build machines, and the new
>> limits kick in Monday.
>>
>> We can remove some of our workarounds for badly-cached images (at the
>> cost of lots of pain every time the couchdbci build environment image
>> changes), but on a busy day we'll probably still hit the limit.
>>
>> Let's see if ASF Infra comes through quickly with a pull-through caching
>> registry of sufficient size. If so we can make very minor tweaks to our
>> Jenkinsfiles, and go back to life as normal.
>>
>> -Joan "Doom doom, doomy doomy doom" Touzet
>>


Re: Warning: Incoming Docker Hub rate limits are going to negatively impact CouchDB CI workflows

2020-10-29 Thread Joan Touzet

Following up,

I've implemented a new Jenkins job that re-pulls all current couchdbdev 
images on each docker node every night. The job takes 12 minutes to run.


Once a week, it also runs the "pull them all" set of images I mentioned 
a few months ago, to keep our images on Docker Hub from disappearing. 
(That run takes about an hour and happens Sunday nights, randomly, 
between 0-7h UTC.)


With this we can drop the "alwaysPull true" in the Jenkinsfiles, which 
should speed up PRs and builds by a couple of minutes each.


Those PRs are up here:

  https://github.com/apache/couchdb/pull/3234
  https://github.com/apache/couchdb/pull/3233

which need +1s to land.

That leaves a single problem scenario, namely when the couchdbdev images 
are updated and Docker stubbornly refuses to pull the latest image, 
causing build issues. In that case, simply log into our Jenkins and 
click "Build Now" here:


https://ci-couchdb.apache.org/job/jenkins-cm1/job/Update%20Docker%20Containers/

which forces the image removal and re-pull.

I hope this is enough to avoid any rate limiting problems.

-Joan "Gir, stop singing this instant!" Touzet

On 2020-10-29 12:05 a.m., Joan Touzet wrote:

I just posted about this on the ASF-wide bui...@apache.org list:

https://lists.apache.org/thread.html/r5ccf60da8072b3c2b587152256ebaf6a0e7b81182d5e240a2b2a0f02%40%3Cbuilds.apache.org%3E

TL;DR: We're not immune, even with our build machines, and the new
limits kick in Monday.

We can remove some of our workarounds for badly-cached images (at the
cost of lots of pain every time the couchdbci build environment image
changes), but on a busy day we'll probably still hit the limit.

Let's see if ASF Infra comes through quickly with a pull-through caching
registry of sufficient size. If so we can make very minor tweaks to our
Jenkinsfiles, and go back to life as normal.

-Joan "Doom doom, doomy doomy doom" Touzet



Re: Docker rate limits likely spell DOOM for any Apache project CI workflow relying on Docker Hub

2020-10-29 Thread Joan Touzet

On 2020-10-29 11:37 a.m., Allen Wittenauer wrote:




On Oct 28, 2020, at 11:57 PM, Chris Lambertus  wrote:

Infra would LOVE a smarter way to clean the cache. We have to use a heavy 
hammer because there are 300+ projects that want a piece of it, and who don’t 
clean up. We are not build engineers, so we rely on the community to advise us 
in dealing with the challenges we face. I would be very happy to work with you 
on tooling to improve the cleanup if it improves the experience for all 
projects.


I'll work on YETUS-1063 so that things make more sense.  But in short, Yetus' 
"docker-cleanup --sentinel" will  purge container images if they are older than 
a week, then kill stuck containers after 24 hours. That order prevents running jobs from 
getting into trouble.  But it also means that in some cases it doesn't look very clean 
until two or three days later.  But that's ok: it is important to remember that an empty 
cache is a useless cache.  Those values came from experiences with Hadoop and HBase, but 
we can certainly add some way to tune them.  Oh, and unlike the docker tools, it pretty 
much ignores labels.  It does _not_ do anything with volumes, probably something we need 
to add.



(Sidebar about the script's details)

I tried to read the shell script, but I'm not in the headspace to fully 
parse it at the moment. If I'm understanding correctly, this will still 
catch CouchDB's CI docker images if they haven't changed in a week, 
which happens often enough, negating the cache.


As a project, we're kind of stuck between a rock and a hard place. We 
want to force a docker pull on the base CI image if it's out of date or 
the image is corrupted. Otherwise we want to cache forever, not just for 
a week. I can probably manage the "do we need to re-pull?" bit with some 
clever CI scripting (check for the latest image hash locally, validate 
the local image, pull if either fails) but I don't understand how the 
script resolves the latter.


Can an exemption list be passed to the script so that images matching a 
certain regex are excluded? You say the script ignores labels entirely, 
so perhaps not...


-Joan


Re: A problematic side effect of archiving branches

2020-10-29 Thread Joan Touzet




On 2020-10-29 11:08 a.m., Ilya Khlopotov wrote:

Why not just lay down a new tag on main to work around this?

Good idea. This could work. We would need to do it for each dependency if we 
archived branches on it.


I have only archived branches on apache/couchdb . I was planning on 
doing fauxton and documentation as well, but will lay down a tag if 
necessary to solve the problem there.


I've applied the tag as mentioned.

-Joan


On 2020/10/29 14:32:17, Joan Touzet  wrote:

Hi Ilya,

Sorry about this trouble. Based on this feedback I will not pursue
removal of our release branches.

On 2020-10-29 5:56 a.m., Ilya Khlopotov wrote:


❯ git describe --always --tags
archive/prototype/fdb-layer-get-doc-spans-580-gdfb27b48a


but:

$ git checkout 3.x
Branch '3.x' set up to track remote branch '3.x' from 'origin'.
Switched to a new branch '3.x'

$ git describe --always --tags
3.1.1-18-gffbf695ff

This is only happening because the most recent tag found on the 'main'
branch has archive/ in it.

Why not just lay down a new tag on main to work around this? Example:

$ git checkout main
$ git tag post-fdb-merge
$ git describe --always --tags
post-fdb-merge

Future commits past that will reference post-fdb-merge in the git
describe command, and not that archived tag. Further, new branches that
are merged will simply be deleted, not archived, so this shouldn't be an
issue.

-Joan



Re: A problematic side effect of archiving branches

2020-10-29 Thread Joan Touzet

Hi Ilya,

Sorry about this trouble. Based on this feedback I will not pursue 
removal of our release branches.


On 2020-10-29 5:56 a.m., Ilya Khlopotov wrote:


❯ git describe --always --tags
archive/prototype/fdb-layer-get-doc-spans-580-gdfb27b48a


but:

$ git checkout 3.x
Branch '3.x' set up to track remote branch '3.x' from 'origin'.
Switched to a new branch '3.x'

$ git describe --always --tags
3.1.1-18-gffbf695ff

This is only happening because the most recent tag found on the 'main' 
branch has archive/ in it.


Why not just lay down a new tag on main to work around this? Example:

$ git checkout main
$ git tag post-fdb-merge
$ git describe --always --tags
post-fdb-merge

Future commits past that will reference post-fdb-merge in the git 
describe command, and not that archived tag. Further, new branches that 
are merged will simply be deleted, not archived, so this shouldn't be an 
issue.


-Joan


Warning: Incoming Docker Hub rate limits are going to negatively impact CouchDB CI workflows

2020-10-28 Thread Joan Touzet
I just posted about this on the ASF-wide bui...@apache.org list:

https://lists.apache.org/thread.html/r5ccf60da8072b3c2b587152256ebaf6a0e7b81182d5e240a2b2a0f02%40%3Cbuilds.apache.org%3E

TL;DR: We're not immune, even with our build machines, and the new
limits kick in Monday.

We can remove some of our workarounds for badly-cached images (at the
cost of lots of pain every time the couchdbci build environment image
changes), but on a busy day we'll probably still hit the limit.

Let's see if ASF Infra comes through quickly with a pull-through caching
registry of sufficient size. If so we can make very minor tweaks to our
Jenkinsfiles, and go back to life as normal.

-Joan "Doom doom, doomy doomy doom" Touzet


Docker rate limits likely spell DOOM for any Apache project CI workflow relying on Docker Hub

2020-10-28 Thread Joan Touzet
Got your attention?

Here's what arrived in my inbox around 4 hours ago:

> You are receiving this email because of a policy change to Docker products 
> and services you use. On Monday, November 2, 2020 at 9am Pacific Standard 
> Time, Docker will begin enforcing rate limits on container pulls for 
> Anonymous and Free users. Anonymous (unauthenticated) users will be limited 
> to 100 container image pulls every six hours, and Free (authenticated) users 
> will be limited to 200 container image pulls every six hours, when 
> enforcement is fully implemented. 

Their referenced blog posts are here:

https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/

https://www.docker.com/blog/understanding-inner-loop-development-and-pull-rates/

Since I haven't seen this discussed on the builds list yet (and I'm not
subscribed to users@infra), I wanted to make clear the impact. I would
bet that just about every workflow using Jenkins, buildbot, GHA or
otherwise uses uncredentialed `docker pull` commands. If you're using
the shared Apache CI workers, every pull you're making is counting
towards this 100 pulls/6 hour limit. Multiply that by every ASF project
on those servers, and multiply that again by the total number of PRs /
change requests / builds per project, and :(

Apache's going to hit these new limits real fast. And we must act fast
to avoid problems, as those new limits kick in **MONDAY**.

Even for those of us lucky enough to have sponsorship for dedicated CI
workers, it's still a problem. Infra has scripts to wipe all
not-currently-in-use Docker containers off of each machine every 24
hours (or did, last I looked). That means you can't rely on local
caching. Other projects may also have added --force to their `docker
pull` requests in their CI workflows, to work around issues with cached,
corrupted downloads (a big problem for us on the shared CI
infrastructure), or to work around issues with the :latest tag caching
when it shouldn't.

This extends beyond projects using CI in the way Docker outlines on
their second blog post linked above, namely their encouragement to use
multi-stage builds. If local caching can't be relied on, there's no
advantage. If what's being pulled down is an image containing that
project's full build environment - this is what CouchDB does and I
expect others do as well, as setting up our build environment, even
automated, takes 30-45 minutes - frequent changes to the build
dependencies require frequent pulls of those images, which cannot be
mitigated via the Docker-recommended multi-stage builds.

=

Proposed solutions:

1. Infra provides credentialed logins through the Docker Hub apache
organisation to projects. Every project would have to update their
Jenkins/buildbot/GHA/etc workflows to consume and use these credentials
for every `docker pull` command. This depends on Apache actually being
exempted for the new limits (I'm not sure, are we?) and those creds
being distributed widely...which may run into Infra Policy issues.

2. Infra provides their own Docker registry. Projects that need images
can host them there. These will be automatically exempt. Infra will have
to plan for sufficient storage (this will get big *fast*) and bandwidth
(same). They will also have to firewall it off from non-Apache projects.

This should be configured as a pull-through caching registry, so that
attempts to `docker pull docker.apache.org/ubuntu:latest` will
automatically reach out to hub.docker.com and store that image locally.
Infra can populate this registry with credentials within the ASF Docker
Hub org that are, hopefully, exempt from these requirements.

3. Like #2, but per-project, on Infra-provided VMs. Today this is not
practical, as the standard Infra-provided VM only has ~20GB of local
storage. Just a handful of Docker images will eat that space nearly
immediately.

===

I think #2 above is the most logical and expedient, but it requires a
commitment from Infra to make happen - and to get the message out - with
only 4 days until DOOM.

What does the list think? More importantly, what does Infra think?

-Joan "I'm gonna sing The Doom Song now!" Touzet


Re: [PROPOSAL] Archiving git branches

2020-10-21 Thread Joan Touzet
Hah!

OK, it's all done. At present I cannot remove any of the #.#.# branches
as Infra has protected them. (I may or may not bother to open a ticket
on this.)

Here's the remaining branch list:

  remotes/origin/1.3.x
  remotes/origin/1.4.x
  remotes/origin/1.5.x
  remotes/origin/1.6.x
  remotes/origin/1.x.x
  remotes/origin/1278-add-clustered-db-info
  remotes/origin/2.0.x
  remotes/origin/2.1.x
  remotes/origin/2.3.x
  remotes/origin/2493-remove-auth-cache
  remotes/origin/3.0.x
  remotes/origin/3.x
  remotes/origin/HEAD -> origin/main
  remotes/origin/access
  remotes/origin/bump-ibrowse
  remotes/origin/feat/access-3.x
  remotes/origin/feat/access-master-clean
  remotes/origin/feat/add-same-site-secure/master
  remotes/origin/ioq-per-shard-or-user
  remotes/origin/main
  remotes/origin/master
  remotes/origin/prototype/fdb-layer-db-version-as-vstamps
  remotes/origin/re-enable-most-elixir-tests
  remotes/origin/record_last_compaction
  remotes/origin/smoosh-update-operator-guide
  remotes/origin/update-fauxton-1.2.6

That's about 6x shorter. Phew!

Remember, if you are trying to recover a branch that is now gone, just:

  git checkout -b <branch> archive/<branch>

-Joan

On 21/10/2020 14:59, Jan Lehnardt wrote:
> *makes chain saw noises*
> 
> (trimming branches, get it?)
> 
> Thanks Joan!
> 
> Best
> Jan
> —
> 
>> On 21. Oct 2020, at 20:23, Joan Touzet  wrote:
>>
>> I am starting the work now. As there was no response, I'm going to only
>> keep these branches:
>>
>> 2.3.x
>> 3.x
>> main
>> master (with a README saying you're in the wrong place)
>>
>> plus these PR branches:
>>
>> bump-ibrowse (#3208)
>> smoosh-update-operator-guide (#3184)
>> re-enable-most-elixir-tests (#3175)
>> feat/add-same-site-secure/master (#3131)
>> feat/access-master-clean (#3038)
>> prototype/fdb-layer-db-version-as-vstamps (#2952)
>> feat/access-3.x (#2943)
>> ioq-per-shard-or-user (#1998)
>> 1278-add-clustered-db-info (#1443)
>> record_last_compaction (#1272)
>>
>> Any of the above that were targeted to prototype/fdb-layer or master
>> have been re-targeted to main (except a couple clustering-specific ones
>> which were set to 3.x). Please check if this is correct for your PRs.
>>
>> -Joan "clean all the things?" Touzet
>>
>> On 14/10/2020 13:19, Joan Touzet wrote:
>>> A reminder about this: I intend to start this work tomorrow. If you have
>>> any PRs or branches you want left alone, speak now.
>>>
>>> Based on feedback I've received, it sounds like the prototype/fdb-*
>>> branches are now done? If this is **NOT** the case, speak up.
>>>
>>> -Joan
>>>
>>> On 07/10/2020 18:10, Joan Touzet wrote:
>>>> Hi there,
>>>>
>>>> I'd like to clean up our branches in git on the main couchdb repo. This
>>>> would involve deleting some of our obsolete branches, after tagging the
>>>> final revision on each branch. This way, we retain the history but the
>>>> branch no longer appears in the dropdown on GitHub, or in git branch
>>>> listings at the cli.
>>>>
>>>> Example process:
>>>>
>>>> git tag archive/1.3.x 1.3.x
>>>> git branch -d 1.3.x
>>>> git push origin :1.3.x
>>>> git push --tags
>>>>
>>>> If we ever needed the branch back, we just:
>>>>
>>>> git checkout -b 1.3.x archive/1.3.x
>>>>
>>>> I would propose to do this for all branches except:
>>>>
>>>> main
>>>> master (for now)
>>>> 2.3.x
>>>> 3.x
>>>> prototype/fdb-layer
>>>>
>>>> ...plus any branches that have been touched in the past 90 days, that
>>>> still have open PRs, or that someone specifically asks me to retain in
>>>> this thread.
>>>>
>>>> I'd also like to do this on couchdb-documentation and couchdb-fauxton.
>>>>
>>>> I would propose to do this about 1 week from now, let's say on October
>>>> 15th.
>>>>
>>>> Thoughts?
>>>>
>>>> -Joan "fall cleaning" Touzet
> 


Re: [PROPOSAL] Archiving git branches

2020-10-21 Thread Joan Touzet
I am starting the work now. As there was no response, I'm going to only
keep these branches:

2.3.x
3.x
main
master (with a README saying you're in the wrong place)

plus these PR branches:

bump-ibrowse (#3208)
smoosh-update-operator-guide (#3184)
re-enable-most-elixir-tests (#3175)
feat/add-same-site-secure/master (#3131)
feat/access-master-clean (#3038)
prototype/fdb-layer-db-version-as-vstamps (#2952)
feat/access-3.x (#2943)
ioq-per-shard-or-user (#1998)
1278-add-clustered-db-info (#1443)
record_last_compaction (#1272)

Any of the above that were targeted to prototype/fdb-layer or master
have been re-targeted to main (except a couple clustering-specific ones
 which were set to 3.x). Please check if this is correct for your PRs.

-Joan "clean all the things?" Touzet

On 14/10/2020 13:19, Joan Touzet wrote:
> A reminder about this: I intend to start this work tomorrow. If you have
> any PRs or branches you want left alone, speak now.
> 
> Based on feedback I've received, it sounds like the prototype/fdb-*
> branches are now done? If this is **NOT** the case, speak up.
> 
> -Joan
> 
> On 07/10/2020 18:10, Joan Touzet wrote:
>> Hi there,
>>
>> I'd like to clean up our branches in git on the main couchdb repo. This
>> would involve deleting some of our obsolete branches, after tagging the
>> final revision on each branch. This way, we retain the history but the
>> branch no longer appears in the dropdown on GitHub, or in git branch
>> listings at the cli.
>>
>> Example process:
>>
>> git tag archive/1.3.x 1.3.x
>> git branch -d 1.3.x
>> git push origin :1.3.x
>> git push --tags
>>
>> If we ever needed the branch back, we just:
>>
>> git checkout -b 1.3.x archive/1.3.x
>>
>> I would propose to do this for all branches except:
>>
>> main
>> master (for now)
>> 2.3.x
>> 3.x
>> prototype/fdb-layer
>>
>> ...plus any branches that have been touched in the past 90 days, that
>> still have open PRs, or that someone specifically asks me to retain in
>> this thread.
>>
>> I'd also like to do this on couchdb-documentation and couchdb-fauxton.
>>
>> I would propose to do this about 1 week from now, let's say on October
>> 15th.
>>
>> Thoughts?
>>
>> -Joan "fall cleaning" Touzet


Re: SSO certificate renewal

2020-10-18 Thread Joan Lledó
Thanks Alex, in the end I created a new account and that was that.

On 17/10/20 at 21:57, Alex Muntada wrote:
> Hi Joan,
> 
>> I am the maintainer of the lwip package https://tracker.debian.org/pkg/lwip
>>
>> My SSO certificate has expired, and supposedly I should be able
>> to renew it from https://sso.debian.org/ but I have forgotten
>> the password :(
> 
> If you don't remember your old alioth password, I'm afraid you
> won't be able to obtain a certificate again. What you can try is
> resetting your password directly on tracker.debian.org:
> 
> https://tracker.debian.org/accounts/+forgot-password/
> 
> If that doesn't work, then I would try registering a new
> account:
> 
> https://tracker.debian.org/accounts/register/
> 
> The old certificate service at sso.debian.org is now only
> available to official developers and will most likely end up
> disappearing as authentication via salsa.debian.org is rolled
> out. It seems the tracker doesn't integrate it yet.
> 
> If after all that you still can't manage, you can try asking for
> help by opening a bug:
> 
> https://salsa.debian.org/qa/distro-tracker/issues
> 
> Cheers,
> Alex
> --
>   ⢀⣴⠾⠻⢶⣦⠀
>   ⣾⠁⢠⠒⠀⣿⡁   Alex Muntada 
>   ⢿⡄⠘⠷⠚⠋   Debian Developer  log.alexm.org
>   ⠈⠳⣄
> 





[Bug 1014492] Re: kipi facebook plugin use causes crash

2020-10-17 Thread Joan
Marked as invalid: since version 5.7.0 the kipi plugins are no longer included 
in kphotoalbum (kipi is replaced with the purpose library).
Announcement here: https://www.kphotoalbum.org/news/?item=0066

** Changed in: kphotoalbum (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1014492

Title:
  kipi facebook plugin use causes crash

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/kphotoalbum/+bug/1014492/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

SSO certificate renewal

2020-10-17 Thread Joan Lledó
Hello,

I am the maintainer of the lwip package https://tracker.debian.org/pkg/lwip

My SSO certificate has expired, and supposedly I should be able to renew it
from https://sso.debian.org/ but I have forgotten the password :(

Does anyone know what I can do, or where I can write, to get it renewed?

Thanks!



Re: Migration and consolidation helm charts for ASF projects from helm/charts to apache/charts git

2020-10-16 Thread Joan Touzet
Hi Jarek,

On 16/10/2020 05:45, Jarek Potiuk wrote:

> Joan - I hope you are back and we can continue the discussion. 

Sorry, I just don't have the time or the energy to carry this further.

I only brought up OO because it was one of the outliers, and the first
"user application" that came to mind when considering Apache projects
that deliver actual end user software vs. e.g. Java libraries. All
deference to Dave Fisher and the OO PMC; they have many concerns that
extend far beyond the terms of this policy. Apologies for pulling that
community into this thread.

I'm glad my comments have given you some food for thought. I would
expect significant push back from Legal and the Board. That said, good
luck with your efforts.

Please drop me from this thread.

-Joan

> I'd love
> to come up with the proposal that will include the specifics of OOffice
> releases. The proposal is here -
> https://cwiki.apache.org/confluence/display/COMDEV/Updates+of+policies+for+the+convenience+packages
> <https://cwiki.apache.org/confluence/display/COMDEV/Updates+of+policies+for+the+convenience+packages>.
> Happy to hear some responses on my comments/proposal
> Also anyone who is interested - I invite you to comment on the proposal.
> 
> My plan is to - possibly - come up with a proposal that we all
> people here will be ok with (hopefully next week) and submit it to the
> legal team of ASF for review/opinions.
> 
> J.
> 
> 
> On Tue, Sep 15, 2020 at 2:06 PM Jarek Potiuk  <mailto:ja...@potiuk.com>> wrote:
> 
> Cool. Thanks Bertrand - I am aware of some of those, but this list
> will be helpful so I can make some of the references to those in my
> proposal (and I encourage everyone else to do so). I am super-happy
> to merge yours with mine. I believe the "spirit" of those is rather
> similar. I am fully aware legal review should be done - I want now
> to get some initial feedback and see what controversies the proposal
> raises and polish it a bit, but I 100% understand the policies are
> binding for the ASF and the risk and legal side of it should be very
> carefully considered.
> 
> Note - I just run through a few of the comments we have there and
> just for the sake of keeping people informed on what has changed so
> far here are some "gists" of my changes comparing to the first draft:
> 
> * there is an open question about the viability of putting all the
> instructions or scripts to build the binary dependencies into 
> released sources. Giving the example of OpenOffice, CouchDB and
> Erlang which makes it next to impossible to do. So I proposed to
> explicitly say that any form of the instructions: scripts, manual
> instruction or POINTERS to the right instructions is fine. A simple
> HTTP link where you can find how to build an external OSS library
> should be perfectly fine. My ultimate goal is that whenever
> the source .tar.gz package is created - the goal is that the user
> can get the sources and following the instructions (including the
> links to instructions) can - potentially rebuild the software
> legally. It might be a complex and recursive process (build a
> library to build a library) and at times those instructions might
> not work (as it is with all the instructions) but at least an
> attempt should be made to make it possible.
> 
> * The "official" 3rd-party binary package is not a good name - I
> replaced it for now with the "maintained OSS" binary package. The
> idea behind it is that shortly - it should be open-source and it
> should be maintained. So while the name does not reflect all the
> subtleties of "maintained" and "OSS" but it reflects the spirit. I
> tried to make the "recursive" definition as relaxed as possible
> (in terms of SHOULD vs. MUST except with the Licensing which is a MUST) 
> 
> * In pretty much all cases where I write about "best practices",
> they are not absolute requirements - so whenever possible they are
> SHOULD instead of MUST. I am very far from imposing all the best
> practices on all ASF projects - that would be an impractical and stupid
> thing to do. I really treat those "best practices" as "beacons" -
> targets that we can have in mind but might never fully achieve them.
> And as long as we have good reason, not to follow those practices -
> by all means we do not have to. But if easy and possible, I see the
> best practices as a powerful message that improves the "Brand" of
> ASF in general from the user perspective. There are no "bonus
> point

Re: Catalan translation of the Debian website, small updates

2020-10-16 Thread Joan Albert Erraez
Hi Laura,

> I was wondering if any of you would like to help update the Catalan
> translation of the Debian website.

Did someone take this up? If not, I would like to help with the Catalan 
translation, but unfortunately I do not have enough knowledge of how 
translations are managed within the Debian website.
Is there any tutorial explaining the process? I may not be the only one willing 
to help.

Thank you for your great work!
Joan Albert



[kphotoalbum] [Bug 427780] New: Crash when tagging images (not videos)

2020-10-15 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=427780

Bug ID: 427780
   Summary: Crash when tagging images (not videos)
   Product: kphotoalbum
   Version: 5.7.0
  Platform: Neon Packages
OS: Linux
Status: REPORTED
  Keywords: drkonqi
  Severity: crash
  Priority: NOR
 Component: general
  Assignee: kpab...@willden.org
  Reporter: aseq...@gmail.com
  Target Milestone: ---

Application: kphotoalbum (5.7.0)

Qt Version: 5.15.0
Frameworks Version: 5.75.0
Operating System: Linux 5.4.0-51-generic x86_64
Windowing system: X11
Distribution: KDE neon User Edition 5.20

-- Information about the crash:
- What I was doing when the application crashed:

I have reported some crashes when browsing videos; this time the crash happened
when opening an image.


qt.qpa.xcb: QXcbConnection: XCB error: 3 (BadWindow), sequence: 9014, resource
id: 14680208, major code: 40 (TranslateCoords), minor code: 0
Warning: Directory NikonPreview has an unexpected next pointer; ignored.
Warning: Directory NikonPreview, entry 0x0201: Data area exceeds data buffer,
ignoring it.
Warning: Directory NikonPreview, entry 0x has unknown Exif (TIFF) type 0;
setting type size 1.
Warning: Directory Minolta, entry 0x0088: Data area exceeds data buffer,
ignoring it.
Warning: Directory Minolta, entry 0x0088: Data area exceeds data buffer,
ignoring it.
Warning: Directory Canon has an unexpected next pointer; ignored.
Warning: Directory Canon has an unexpected next pointer; ignored.
Warning: Directory Canon has an unexpected next pointer; ignored.
Warning: Directory Canon has an unexpected next pointer; ignored.
KCrash: crashing... crashRecursionCounter = 2
KCrash: Application Name = kphotoalbum path = /usr/bin pid = 1905

The crash can be reproduced sometimes.

-- Backtrace:
Application: KPhotoAlbum (kphotoalbum), signal: Segmentation fault

[New LWP 1907]
[New LWP 1908]
[New LWP 1909]
[New LWP 1910]
[New LWP 1911]
[New LWP 1940]
[New LWP 1941]
[New LWP 1942]
[New LWP 2214]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x7ff7aeac1aff in __GI___poll (fds=0x7ffeaee97528, nfds=1, timeout=1000) at
../sysdeps/unix/sysv/linux/poll.c:29
[Current thread is 1 (Thread 0x7ff79bfa4180 (LWP 1905))]

Thread 10 (Thread 0x7ff7697fc700 (LWP 2214)):
#0  __GI___libc_read (nbytes=10, buf=0x7ff7697fb25e, fd=21) at
../sysdeps/unix/sysv/linux/read.c:26
#1  __GI___libc_read (fd=21, buf=0x7ff7697fb25e, nbytes=10) at
../sysdeps/unix/sysv/linux/read.c:24
#2  0x7ff7a579c955 in pa_read () from
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-13.99.so
#3  0x7ff7ade85416 in pa_mainloop_prepare () from
/usr/lib/x86_64-linux-gnu/libpulse.so.0
#4  0x7ff7ade85eb4 in pa_mainloop_iterate () from
/usr/lib/x86_64-linux-gnu/libpulse.so.0
#5  0x7ff7ade85f70 in pa_mainloop_run () from
/usr/lib/x86_64-linux-gnu/libpulse.so.0
#6  0x7ff7ade9411d in ?? () from /usr/lib/x86_64-linux-gnu/libpulse.so.0
#7  0x7ff7a57cb67c in ?? () from
/usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-13.99.so
#8  0x7ff7ae948609 in start_thread (arg=) at
pthread_create.c:477
#9  0x7ff7aeace293 in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 9 (Thread 0x7ff7723fd700 (LWP 1942)):
#0  futex_wait_cancelable (private=, expected=0,
futex_word=0x55c5d34eeba0) at ../sysdeps/nptl/futex-internal.h:183
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c5d34eeb50,
cond=0x55c5d34eeb78) at pthread_cond_wait.c:508
#2  __pthread_cond_wait (cond=0x55c5d34eeb78, mutex=0x55c5d34eeb50) at
pthread_cond_wait.c:638
#3  0x7ff7aefbf10b in QWaitConditionPrivate::wait (deadline=...,
this=0x55c5d34eeb50) at thread/qwaitcondition_unix.cpp:146
#4  QWaitCondition::wait (this=, mutex=0x55c5d36af880,
deadline=...) at thread/qwaitcondition_unix.cpp:225
#5  0x7ff7aefbf1d1 in QWaitCondition::wait (this=this@entry=0x55c5d36af878,
mutex=mutex@entry=0x55c5d36af880, time=time@entry=18446744073709551615) at
../../include/QtCore/../../src/corelib/kernel/qdeadlinetimer.h:68
#6  0x55c5cc6e9f5a in ImageManager::AsyncLoader::next (this=0x55c5d36af850)
at ./ImageManager/AsyncLoader.cpp:186
#7  0x55c5cc6e9656 in ImageManager::ImageLoaderThread::run
(this=0x55c5d3670030) at ./ImageManager/ImageLoaderThread.cpp:59
#8  0x7ff7aefb920c in QThreadPrivate::start (arg=0x55c5d3670030) at
thread/qthread_unix.cpp:342
#9  0x7ff7ae948609 in start_thread (arg=) at
pthread_create.c:477
#10 0x7ff7aeace293 in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thread 8 (Thread 0x7ff773fff700 (LWP 1941)):
#0  futex_wait_cancelable (private=, expected=0,
futex_word=0x55c5d34eeba0) at ../sysdeps/nptl/futex-internal.h:183
#1  __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x55c5d34eeb50,
cond=0x55c5d34eeb78) at pthread_cond_wait.c:508
#2  __pthread_cond_wait (cond=0x55c5d34eeb78, 

Re: [PROPOSAL] Archiving git branches

2020-10-14 Thread Joan Touzet
A reminder about this: I intend to start this work tomorrow. If you have
any PRs or branches you want left alone, speak now.

Based on feedback I've received, it sounds like the prototype/fdb-*
branches are now done? If this is **NOT** the case, speak up.

-Joan

On 07/10/2020 18:10, Joan Touzet wrote:
> Hi there,
> 
> I'd like to clean up our branches in git on the main couchdb repo. This
> would involve deleting some of our obsolete branches, after tagging the
> final revision on each branch. This way, we retain the history but the
> branch no longer appears in the dropdown on GitHub, or in git branch
> listings at the cli.
> 
> Example process:
> 
> git tag archive/1.3.x 1.3.x
> git branch -d 1.3.x
> git push origin :1.3.x
> git push --tags
> 
> If we ever needed the branch back, we just:
> 
> git checkout -b 1.3.x archive/1.3.x
> 
> I would propose to do this for all branches except:
> 
> main
> master (for now)
> 2.3.x
> 3.x
> prototype/fdb-layer
> 
> ...plus any branches that have been touched in the past 90 days, that
> still have open PRs, or that someone specifically asks me to retain in
> this thread.
> 
> I'd also like to do this on couchdb-documentation and couchdb-fauxton.
> 
> I would propose to do this about 1 week from now, let's say on October
> 15th.
> 
> Thoughts?
> 
> -Joan "fall cleaning" Touzet


[NOTICE] Default git branch changed to main

2020-10-14 Thread Joan Touzet
By agreement of the PMC and a lazy majority of committers, the default
git branch is now `main` for all of our repositories.

If you have any pending PRs, please be sure to re-target them to this
branch. For any new PRs, be sure to branch off of main for 4.x work. The
3.x branch is unaffected.

Big thanks to Paul Davis who helped corral all the repositories, and ASF
Infra for making this happen.

-Joan


Re: [DISCUSS] Deprecate custom reduce functions

2020-10-13 Thread Joan Touzet

On 13/10/2020 11:48, Robert Samuel Newson wrote:

Hi All,

As part of CouchDB 4.0, which moves the storage tier of CouchDB into 
FoundationDB, we have struggled to reproduce the full map/reduce functionality. 
Happily this has now happened, and that work is now merged to the couchdb main 
branch.


\o/


This functionality includes the use of custom (javascript) reduce functions. It 
is my experience that these are very often problematic, in that much more often 
than not the functions do not significantly reduce the input parameters into a 
smaller result (indeed, sometimes the output is the same or larger than the 
input).


Agreed, it is very rare that I find a well-written custom reduce 
function. It happens, though, and the people who write them are also 
advanced or expert CouchDB users. They would know how to toggle the default.

To that end, I'm asking if we should deprecate the feature entirely.


and, from the reply to Jonathan:


I also think if custom reduce was disabled by default that we would be 
motivated to expand this set of built-in reduce functions.

If deprecation means eventual removal, we need to take additional steps.

What would help inform this decision would be a survey of the community 
for custom reduce functions. If this can then inform writing more 
built-in _reduces that we ship in various 4.x releases, and remove the 
feature in 5.0, that could work.


There needs to be a concerted effort to reach out to users and 
understand these use cases, followed by a similar effort to write 
replacements and have the community vet them. To date we've only added 
two new built-in enhancements I can remember, and that's the HyperLogLog 
stuff, plus the ability to do _sum / _count / _stats on lists and 
objects (which was a Cloudant donation about 6 years ago, IIRC).


Here's some examples of custom reduces I've seen recently that could not 
be satisfied by our current built-ins:


* wallet/balance calculation, based on transactional data
* _stats like functionality, but derived from complex documents that
  have lists of objects that must be iterated over
* advanced statistical calculation: ANOVA, t-test, linear regression,
  bayesian, etc.

None of these are unsolveable, but they will require effort. I'm ready 
to help talk to users if this is the direction we want to go, but I want 
to see a firm commitment by other developers to help implement new 
built-in reduces brought to the table before +1'ing this decision. 
Companies like IBM/Cloudant and Neighbourhoodie have special access 
here, and would be key players in helping get this work done.


Let's contrast this with a famous deprecation that didn't go as well: 
list/show/rewrites removal. Most of us agree that this functionality is 
much better served by parallel servers that have a huge plethora of 
functionality available to them, plus a wide base of support outside of 
our own ecosystem. Critically, these functions are purely 
transformative: none store new data into the database. I don't think a 
similar approach makes sense for custom reduce, since those results 
*are* pre-calculated and stored.


One more contrast. Two years ago, I wrote up a spec to introduce VDU and 
update handler functionality into Mango[1]. Here's a situation where 
there was broad user acceptance, and general agreement on the direction 
to move forward. We could arguably deprecate our current approach for 
these once this functionality has built. The problem has been finding 
someone willing to develop it -- I don't have the time.


Looking forward to others' thoughts.

-Joan "developers, developers, developers" Touzet

[1]: https://github.com/apache/couchdb/issues/1554




In scope for this thread is the middle ground proposal that Paul Davis has 
written up here;

https://github.com/apache/couchdb/pull/3214

Where custom reduces are not allowed by default but can be enabled.

The core _ability_ to do custom reduces will always be maintained; this is 
intrinsic to the design of ebtree, the structure we use on top of FoundationDB 
to hold and maintain intermediate reduce values.

My view is that we should merge #3214 and disable custom reduces by default.

B.



Jenkins improvements

2020-10-13 Thread Joan Touzet

Thanks to our CloudBees contact, we now have some new Jenkins functionality:

1. You can now retrigger a PR Jenkins build by posting the
   comment "jenkins rebuild" to your PR. (This may or may not be a regex
   match.) You can still retrigger a Jenkins build by updating your PR.

2. Full build pipeline status is reported back to a PR - all stages.

3. Erroring jobs will report back the last line of the failure.
   (This may or may not be useful given how verbose our `make test`
   output is.)

Enjoy,
Joan "Monday Monday, can't trust that day" Touzet


Re: GitHub PR comment build trigger

2020-10-12 Thread Joan Touzet
And to add to this, with the Blue Ocean UI for Multibranch Pipeline, it 
is a single click to rebuild a build. It's not as friendly as 
commenting, but it's a single button on the results view for your build, 
which is linked right from the PR.


Of course, this is limited to only people who have Jenkins accounts, 
which is all committers to our repo.


-Joan

On 2020-10-12 11:10 p.m., Christopher wrote:

Hi Andor,

I'm not sure if INFRA is going to enable that plugin, but I thought
I'd suggest some alternatives if they don't:

In Accumulo, we set up a "PR Builder" job in Jenkins, that we can
manually trigger. It is a parameterized build that takes two
parameters: PR and PR_Variant.
The PR is the PR number, and the variant is either "head" or "merge".
The branch specifier to check out is:
refs/remotes/origin/pr/${PR}/${PR_variant}
The refspec to fetch from the repository configuration looks like:
+refs/pull/*:refs/remotes/origin/pr/* (you have to click on Advanced
to see this option)
We also use the "Set Build Name" option to: PR #${PR} (Build #${BUILD_NUMBER})
And include a "Pre-Build step" to upload the build description to:
Pull Request #${PR} - ${GIT_COMMIT}

This works well for us. It may work for you also. The only thing is
you have to go to Jenkins to trigger the build manually.

We also use GitHub Actions, which is probably even easier to build,
because GitHub has a "rebuild jobs" option, right in the interface (to
work around transient build problems), and you can configure some
manually triggered jobs as well. We have several that might be useful
examples at: https://github.com/apache/accumulo/tree/main/.github/workflows/

I hope this helps somebody, if not you,

Christopher

On Mon, Oct 12, 2020 at 8:53 AM Andor Molnar  wrote:


Hi,

Sorry if the topic is redundant, I haven’t been following builds@ list for a 
while and couldn’t find the archives online.

Is there already a way to configure GitHub PR comment to trigger build in the 
new Jenkins instance?

I think it was the ‘GitHub PR Comment Build’ plugin in the old instance.
https://plugins.jenkins.io/github-pr-comment-build

Thanks,
Andor




Re: [PROPOSAL] Archiving git branches

2020-10-08 Thread Joan Touzet

Hi Alex, nice to see you!

On 08/10/2020 04:57, Alexander Shorin wrote:

+1, but...

Old release branches could be just dropped without a worry. If something
there wasn't released since today - well, nobody actually has any need for
it. If someone does - make a release, tag it and drop the branch.
Unexpetable action.


The only release branches I'm proposing to keep are those that are 
nominally still active. We haven't officially said that we're done with 
2.x releases yet - if you want to propose a VOTE on that, please do so. 
3.x and main track active releases. master needs to stay until Paul's 
work to replace it with main is done. And I'm not sure everything from 
prototype/fdb-layer is in main yet, I'll let active devs on it comment.


The work to add the tags for restoration is optional, but doesn't hurt 
anyone.



90 days is quite long. 30 would be more than enough according to actual
activity.


Just an "overabundance of caution" :)


And don't we have a policy that merged branches should be deleted
automatically as well?


Yes, but in certain cases (like nebraska-merge) we've been lenient. 
Obviously these can be cleaned up with prejudice.


-Joan


[PROPOSAL] Archiving git branches

2020-10-07 Thread Joan Touzet

Hi there,

I'd like to clean up our branches in git on the main couchdb repo. This 
would involve deleting some of our obsolete branches, after tagging the 
final revision on each branch. This way, we retain the history but the 
branch no longer appears in the dropdown on GitHub, or in git branch 
listings at the CLI.


Example process:

git tag archive/1.3.x 1.3.x
git branch -D 1.3.x
git push origin :1.3.x
git push --tags

(Note the -D: a plain -d would refuse to delete branches that aren't 
merged into HEAD.)

If we ever needed the branch back, we just:

git checkout -b 1.3.x archive/1.3.x
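
(Plus, presumably, a `git push -u origin 1.3.x` afterwards if the 
branch ever needs to be public again.)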

I would propose to do this for all branches except:

main
master (for now)
2.3.x
3.x
prototype/fdb-layer

...plus any branches that have been touched in the past 90 days, that 
still have open PRs, or that someone specifically asks me to retain in 
this thread.


I'd also like to do this on couchdb-documentation and couchdb-fauxton.

I would propose to do this about 1 week from now, let's say on October 15th.

Thoughts?

-Joan "fall cleaning" Touzet


[kphotoalbum] [Bug 423811] Crash during kphotoalbum usage

2020-10-07 Thread Joan
https://bugs.kde.org/show_bug.cgi?id=423811

--- Comment #5 from Joan  ---
I have been trying kphotoalbum 5.7.0 and I haven't experienced any crash so
far. The environment has changed because I'm using neon (so packages have been
updated) and the release has been upgraded to 20.04 (instead of 18.04).
For me the ticket could be closed.


Re: [Goanet] Goanet Digest, Vol 15, Issue 630

2020-09-30 Thread joan tellis
Frederic, re: Bosco.

Bosco D'souza.
Band name: Max Dorado.
They play in Panjim, and on Sundays at Thaal in Keg Dovelim.

On Tue, Sep 29, 2020, 23:10  wrote:

> Today's Topics:
>
>1. Christian Leader of the western world (a billionaire) paid
>   little tax over the years. (BT Yahoo Mail)
>2. Panchayat websites in Goa... check how many work
>   (Frederick Noronha)
>3. GOOD GOVERNANCE (Aires Rodrigues)
>4. : The Radio Ceylon story (eric pinto)
>5. Re: RINGERS BAND GROUP LISTED (Emercio Rodrigues)
>
>
> --
>
> Message: 1
> Date: Mon, 28 Sep 2020 16:45:56 +0100 (BST)
> From: BT Yahoo Mail 
> To: GOANET 
> Subject: [Goanet] Christian Leader of the western world (a
> billionaire) paid little tax over the years.
> Message-ID: <6426314a.5f0f.174d564532e.webtop...@btinternet.com>
> Content-Type: text/plain; charset=UTF-8; format=flowed; delsp=no
>
>
> President Donald Trump paid $750 in federal income taxes in 2016 and the
> same amount in 2017, and paid no taxes at all in several previous years,
> largely because his business empire has reported losing more money than
> it made, according to a new report in The New York Times.
> In a story posted Sunday afternoon, The Times said it had obtained
> tax-return data for Trump and his businesses covering much of the last
> two decades. Trump has refused to release his tax returns - making him
> the only president in recent history to do so - and he went to the
> Supreme Court earlier this year to stop Congress and the Manhattan
> District Attorney from accessing them.
>
> Read more:
>
> https://www.ndtv.com/world-news/donald-trump-avoided-paying-income-taxes-for-years-report-2301940
>
> Eddie
>
>
> --
>
> Message: 2
> Date: Mon, 28 Sep 2020 21:34:51 +0530
> From: Frederick Noronha 
> To: Goanet 
> Subject: [Goanet] Panchayat websites in Goa... check how many work
> Message-ID: <camcr53k55brr0vzqscqv5cgq-y0m99nf-uhbbotv68uwda-...@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> PANCHAYAT WEBSITES
>
> MORMUGAO BLOCK:
> Village Panchayat Nagoa website :-   http://www.nagoapanchayat.com
> Village Panchayat Cansaulim-Arossim-Cuelim website :-
> http://www.vpcansaulim-arossim-cuelim.in
> Village Panchayat Chicalim website :-http://www.vpchicalim.in
> Village Panchayat Chicolna-Bogmalo website :-
> http://www.vpchicolna-bogmalo.in
> Village Panchayat Cortalim website :-http://www.vpcortalim.in
> Village Panchayat Majorda-Utorda-Calata website :-
> http://www.vpmajorda-utorda-calata.in
> Village Panchayat Sancoale website :-http://www.vpsancoale.in
> Village Panchayat Velsao-Pale-Issorcim website :-
> http://www.vpvelsao-pale-issorcim.in
> Village Panchayat Quelossim website :-http://www.vpquelossim.in
> Village Panchayat Verna website :-http://www.vpverna.in
>
> SALCETE BLOCK:
> Village Panchayat Navelim website :-  www.vpnavelim.com
> Village Panchayat Benaulim website :-  www.vpcanabenaulim.com
> Village Panchayat Camorlim - Salcete website :- www.vpcamorlimsalcete.com
> Village Panchayat Varca website :-  www.varcapanchayat.com
> Village Panchayat  Curtorim website :- www.curtorimpanchayat.com
> Village Panchayat Colva:- http://colvapanchayat.in
> Village Panchayat Chandor-Cavorim:- http://vpcandor-cavorim.in
> Village Panchayat Paroda:- http://vpparoda.com
> Village Panchayat Sarzora:- www.sarzorapanchayat.in
> Village Panchayat Guirdolim:- http://vpguirdolim.in
> Village Panchayat Sao Jose de Areal:- www.vpsjda.com
> Village Panchayat Ambelim:- http://vpambelim.in
> Village Panchayat Rumdamol Davorlim:- http://vpdavorlim-dicarpale.com
> Village Panchayat Seraulim:- http://vpseraulim.in
> Village Panchayat Assolna:- www.vpassolna.com
> Village Panchayat Betalbatim:- www.betalbatimpanchayat.in
> Village panchayat Curtorim:- www.curtorimpanchayat.com
> Village Panchayat Aquem - Baixo:- http://vpaquem-baixo.com
> Village Panchayat Velim:- http://vpvelim.in
> Village Panchayat Davorlim Dicarpale:- http://vpdavorlim-dicarpale.com
> Village Panchayat Carmona:- https://vpcarmona.in
> Village Panchayat Chinchinim Deussua:- http://vpchinchinim-deussa.com
> Village Panchayat Cavelossim:- http://vpcavelossim.com
> Village Panchayat Orlim:- https://vporlim.com
> Village Panchayat Macasana:- http://vpmacasana.com/macasana-village
> Village Panchayat Dramapur Sirlim:- 

[Bug 1897286] [NEW] package libc6:amd64 2.31-0ubuntu9.1 failed to install/upgrade: package libc6:amd64 is already installed and configured

2020-09-25 Thread Joan Saló Grau
Public bug reported:

vx

ProblemType: Package
DistroRelease: Ubuntu 20.04
Package: libc6:amd64 2.31-0ubuntu9.1
ProcVersionSignature: Ubuntu 5.4.0-48.52-generic 5.4.60
Uname: Linux 5.4.0-48-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.9
AptdaemonVersion: 1.1.1+bzr982-0ubuntu32.2
Architecture: amd64
CasperMD5CheckResult: skip
CrashReports: 640:0:125:230175:2020-09-25 15:24:12.132361173 +0200:2020-09-25 
15:24:13.132361173 +0200:/var/crash/libc6:amd64.0.crash
Date: Fri Sep 25 15:24:13 2020
DpkgTerminalLog:
 dpkg: error processing package libc6:amd64 (--configure):
  package libc6:amd64 is already installed and configured
ErrorMessage: package libc6:amd64 is already installed and configured
InstallationDate: Installed on 2020-09-25 (0 days ago)
InstallationMedia: Ubuntu 20.04.1 LTS "Focal Fossa" - Release amd64 (20200731)
Python3Details: /usr/bin/python3.8, Python 3.8.2, python3-minimal, 
3.8.2-0ubuntu2
PythonDetails: N/A
RelatedPackageVersions:
 dpkg 1.19.7ubuntu3
 apt  2.0.2ubuntu0.1
SourcePackage: dpkg
Title: package libc6:amd64 2.31-0ubuntu9.1 failed to install/upgrade: package 
libc6:amd64 is already installed and configured
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: dpkg (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: already-installed amd64 apport-package focal


Re: [DISCUSS] Prometheus endpoint in CouchDB 4.x

2020-09-23 Thread Joan Touzet

Looking at the Prometheus scrape documentation, you can specify a full URL.

https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config

Jan's suggestion of using /_{info|metrics}?accept=prometheus with the 
default being JSON would be better for CouchDB than the default output 
being Prometheus, and is a minor change.


So the job config would look like:

- job_name: 'couchdb'
  metrics_path: /_{info|metrics}
  params:
    accept: ['prometheus']
  static_configs:
    - targets: ['my-couchdb-host.domain.com:5984']

I would be OK with a first pass only providing this format but I would 
strongly prefer the JSON version come ASAP, and be the default for the 
endpoint if no query parameter is passed - unless Prometheus provides 
something we can use as a unique identifier in the request headers. (If 
it does you can use that to eliminate the params: line.)


As I'm on holiday for another 10 days that's about the most I want to 
think about this right now. Jan has covered my other concerns and 
discussion should happen in his thread.


-Joan

On 23/09/2020 05:17, jiangph wrote:

Thanks Joan for your quick response and suggestion. As we can see, JSON is not 
the standard Prometheus format for a _metrics endpoint. To be compatible with 
the many monitoring tools out there, what about going with the standard 
Prometheus format first, and adding JSON format support later?

Best regards,
Peng Hui


On Sep 23, 2020, at 9:41 AM, Joan Touzet  wrote:

I like this, but not at the expense of JSON output. It would be the only new 
API surface for CouchDB that isn't JSON-based, and there needs to be excellent 
justification for such. Prometheus is well-known enough to be supported, but we 
should continue to put out JSON stats for the foreseeable future.

I know that Prometheus can't send a header, but sending an accepts 
application/json to /_metrics and having it send back the same data as 
Prometheus, but in JSON, would be lovely. If you feel up to it :)

-Joan "on vacation" Touzet

On 2020-09-22 8:55 a.m., jiangph wrote:

Hey all,
We would like to add a Prometheus metrics endpoint for CouchDB and wanted to 
see if the community would be interested in us contributing this to CouchDB 4.x.
Prometheus is a CNCF open-source project and the Prometheus metrics endpoint 
format is supported by many monitoring tools. Its data model is based around 
having a metric name which then contains a label name and a label value:
<metric name>{<label name>=<label value>, ...}
And it supports the Counter, Gauge, Histogram, and Summary metric types.
The idea for the new Prometheus endpoint, /_metrics, would be that the endpoint 
is a consolidation of the _stats [1],  _system [2], and _active_tasks [3] 
endpoints.
For _stats and _system, the conversion from JSON to Prometheus-based format 
seems to be straightforward.
JSON format:
{
  "value": {
   "min": 0,
   "max": 0,
   "arithmetic_mean": 0,
   "geometric_mean": 0,
   "harmonic_mean": 0,
   "median": 0,
   "variance": 0,
   "standard_deviation": 0,
...
"percentile": [
[
 50,
 0
],
[
 75,
 0
],
[
 90,
 0
],
[
 95,
 0
],
[
 99,
 0
],
[
 999,
 0
]
   ],
   "histogram": [
[
 0,
 0
]
   ],
}
Prometheus-based format:
couchdb_stats{value="min"} 0
couchdb_stats{value="max"} 0
couchdb_stats{value="percentile50"} 0
couchdb_stats{value="percentile75"} 0
couchdb_stats{value="percentile95"} 0
For _active_tasks, the change will be a bit more complicated, and some fields 
will be added to labels and tags.
JSON format:
{
 "checkpointed_source_seq": 68585,
 "continuous": false,
 "doc_id": null,
 "doc_write_failures": 0,
 "docs_read": 4524,
 "docs_written": 4524,
 "missing_revisions_found": 4524,
 "pid": "<0.1538.5>",
 "progress": 44,
 "replication_id": "9bc1727d74d49d9e157e260bb8bbd1d5",
 "revisions_checked": 4524,
 "source": "mailbox",
 "source_seq": 154419,
 "started_on": 1376116644,
 "target": "http://mailsrv:5984/mailbox <http://mailsrv:5984/mailbox> 
<http://mailsrv:5984/mailbox <http://mailsrv:5984/mailbox>>",
 "type": "replication",
 "updated_on": 1376116651
}
Prometheus-based would look something like:
couchdb_active_task{type="replication", source="mailbox", target="http://mailsrv:5984/mailbox", docs_count="docs_read"} 4524
couchdb_active_task{type="replication", source="mail

Re: [DISCUSS] Prometheus endpoint in CouchDB 4.x

2020-09-22 Thread Joan Touzet
I like this, but not at the expense of JSON output. It would be the only 
new API surface for CouchDB that isn't JSON-based, and there needs to be 
excellent justification for such. Prometheus is well-known enough to be 
supported, but we should continue to put out JSON stats for the 
foreseeable future.


I know that Prometheus can't send a header, but sending an accepts 
application/json to /_metrics and having it send back the same data as 
Prometheus, but in JSON, would be lovely. If you feel up to it :)


-Joan "on vacation" Touzet

On 2020-09-22 8:55 a.m., jiangph wrote:

Hey all,

We would like to add a Prometheus metrics endpoint for CouchDB and wanted to 
see if the community would be interested in us contributing this to CouchDB 4.x.

Prometheus is a CNCF open-source project and the Prometheus metrics endpoint 
format is supported by many monitoring tools. Its data model is based around 
having a metric name which then contains a label name and a label value:

<metric name>{<label name>=<label value>, ...}

And it supports the Counter, Gauge, Histogram, and Summary metric types.

The idea for the new Prometheus endpoint, /_metrics, would be that the endpoint 
is a consolidation of the _stats [1],  _system [2], and _active_tasks [3] 
endpoints.

For _stats and _system, the conversion from JSON to Prometheus-based format 
seems to be straightforward.

JSON format:
{
  "value": {
   "min": 0,
   "max": 0,
   "arithmetic_mean": 0,
   "geometric_mean": 0,
   "harmonic_mean": 0,
   "median": 0,
   "variance": 0,
   "standard_deviation": 0,
...
"percentile": [
[
 50,
 0
],
[
 75,
 0
],
[
 90,
 0
],
[
 95,
 0
],
[
 99,
 0
],
[
 999,
 0
]
   ],
   "histogram": [
[
 0,
 0
]
   ],
}

Prometheus-based format:

couchdb_stats{value="min"} 0
couchdb_stats{value="max"} 0
couchdb_stats{value="percentile50"} 0
couchdb_stats{value="percentile75"} 0
couchdb_stats{value="percentile95"} 0

For _active_tasks, the change will be a bit more complicated, and some fields 
will be added to labels and tags.

JSON format:

{
 "checkpointed_source_seq": 68585,
 "continuous": false,
 "doc_id": null,
 "doc_write_failures": 0,
 "docs_read": 4524,
 "docs_written": 4524,
 "missing_revisions_found": 4524,
 "pid": "<0.1538.5>",
 "progress": 44,
 "replication_id": "9bc1727d74d49d9e157e260bb8bbd1d5",
 "revisions_checked": 4524,
 "source": "mailbox",
 "source_seq": 154419,
 "started_on": 1376116644,
 "target": "http://mailsrv:5984/mailbox <http://mailsrv:5984/mailbox>",
 "type": "replication",
 "updated_on": 1376116651
}

Prometheus-based would look something like:

couchdb_active_task{type="replication", source="mailbox", target="http://mailsrv:5984/mailbox", docs_count="docs_read"} 4524
couchdb_active_task{type="replication", source="mailbox", target="http://mailsrv:5984/mailbox", docs_count="docs_written"} 4524
couchdb_active_task{type="replication", source="mailbox", target="http://mailsrv:5984/mailbox", docs_count="missing_revisions_found"} 4524


Best regards,
Garren Smith
Peng Hui Jiang

[1] https://docs.couchdb.org/en/latest/api/server/common.html#node-node-name-stats
[2] https://docs.couchdb.org/en/latest/api/server/common.html#node-node-name-system
[3] https://docs.couchdb.org/en/latest/api/server/common.html#active-tasks



Re: [Community-Discuss] AFRINIC Elections Results

2020-09-21 Thread Joan Hope K
Congratulations to all the new leaders.

Joan

On Fri, Sep 18, 2020 at 9:33 PM AMADU YUSIF  wrote:

> Congratulations to you all. I wish you well for this journey and hope
> you will contribute meaningfully to the development of AFRINIC and the open
> internet.
>
> Yusif
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
>  Original message 
> From: Alioune Traore via Community-Discuss 
>
> Date: 18/09/2020 4:57 p.m. (GMT+00:00)
> To: community-discuss@afrinic.net, AFRINIC Communication <
> co...@afrinic.net>
> Subject: Re: [Community-Discuss] AFRINIC Elections Results
>
> Congratulations to all!
>
> Dr Ing. Alioune Badara TRAORE
>
> Member of the AMRTP council in charge of ICT
>
> President of FEMAT
>
> Technical Director of CNOSM
>
> 7th Dan Black Belt
> Olympic Taekwondo Referee
> +223 6678 58 31
>
>
> On Friday, 18 September 2020 at 16:23:16 UTC+1, AFRINIC Communication <
> co...@afrinic.net> wrote:
>
>
>
>
> Dear colleagues,
>
> It was nice seeing and meeting with you at the AIS’20 online. Please allow
> me to thank you most sincerely on behalf of AFRINIC for attending the
> meeting during which different elections were also held.
>
> As per the election guidelines, I would like to announce the results of
> the just-concluded elections:
>
>   • PDWG Co-Chair: Abdulkarim Oloyede
>   • NRO-NC Representative: Saul Stein, to serve a three-year term
>   • Governance Committee: Ali Hussein, to serve a three-year term
>   • Board - Seat 4 (Central Africa): Serge Kabwika Ilunga, to serve a three-year term
>   • Board - Seat 6 (Eastern Africa): Abdalla Omari, to serve a two-year term
>   • Board - Seat 3 (Indian Ocean): Subramanian Moonesamy, to serve a three-year term
>   • Board - Seat 8 (Non-Regional): Benjamin Eshun, to serve a three-year term
>
> Congratulations to all the successful candidates elected to serve in the
> various capacities. We wish you well as you serve our community and the
> organisation.
>
> Thank you all once again for your commitment to AFRINIC and the community
> and for attending the meeting.
>
> Eddy Kayihura
>
> Chief Executive Officer
>
