+#if ! (defined __BIG_ENDIAN__ || defined __LITLE_ENDIAN__)
typo: __LITLE_ENDIAN__ should be __LITTLE_ENDIAN__
___
Bug-coreutils mailing list
Bug-coreutils@gnu.org
http://lists.gnu.org/mailman/listinfo/bug-coreutils
I disagree. The = change inserts a conditional branch into the control
flow, with the potential to save a single memory access. I count ca. 2 CPU
cycles for a memory access and ca. 8 CPU cycles for a conditional jump,
therefore I would say that the change slows down the program a bit.
I cannot change 'close-stream', since you own that module. But for
'fwriteerror',
which I use in GNU gettext - and where I don't want to have spurious,
timing-dependent error messages - I'm applying this:
I tend to agree with you on EPIPE, but OTOH this is as bad as it can be.
Can't we
Jim,
Imagine a scenario in which the pipe reader is expected always to
be reading, and so the pipe writer can expect that any write failure with
errno==EPIPE indicates the reader has terminated unexpectedly.
If the writer should terminate first, the reader can still detect the
failure using
Bruno Haible wrote:
Paolo Bonzini wrote:
Is it possible to implement the tee
--ignore-sigpipe as you did (delaying SIGPIPE until the last input
closes, which I also think is the right thing to do) while having
close-stream ignore EPIPE?
Yes it is. The complete patch was posted here
Jim Meyering wrote:
Paolo Bonzini wrote:
Jim,
Imagine a scenario in which the pipe reader is expected always to
be reading, and so the pipe writer can expect that any write failure with
errno==EPIPE indicates the reader has terminated unexpectedly.
If the writer should
Bruno Haible wrote:
Paolo Bonzini wrote:
http://lists.gnu.org/archive/html/bug-coreutils/2008-09/msg00024.html
Doesn't the comment in patch 2
To make it clear:
- Patch 1 only: applies if close_stdout were modified to ignore EPIPE
always (which Jim has rejected).
- Patch 1 + 2
What else do you propose to cover these cases, if not a global variable?
If only one behavior is needed across an entire package, a dummy module
with gl_MODULE_INDICATOR would do. Better than having fwriteerror do
one thing and close_stdout do another.
Paolo
Bruno Haible wrote:
Jim Meyering wrote:
The RHEL bugzilla I mentioned initially is filed against glibc.
Or perhaps you meant glibc's own bugzilla?
Yes, that's what I meant: I'm not sure that Ulrich Drepper or Jakub Jelinek
get informed about Debian or RHEL bugs.
Jakub surely gets Fedora
A) # tail trace.txt | grep com
- WORKS: produces output
B) # tail trace.txt | grep com | cat
- WORKS: produces output
C) # tail -f trace.txt | grep com
- WORKS: produces output, then waits and reports new lines
D) # tail -f trace.txt | grep com | cat
- FAILS: no output from existing lines,
But if the shared library is installed, you can
instead use your own line-buffer function:
line-buffer()
{
  LD_PRELOAD=/t/linebuf.so "$@"
}
but that doesn't expand the command name when it's an alias or function.
It actually works for functions. To support aliases, you can do
That looks fine as the first param to select
is the highest-numbered file descriptor + 1.
Arguably 1 is more correct than 0.
I think this patch is fine. OTOH 1 is *not* more correct than 0, because it
implies that fd 0 might be tested.
The other uses are for WinSock only, so they should
OK, I'll work on the creation of a GNU project called 'libunistring', that
will export the functions from gnulib as a shared library.
That's simply great to hear.
Paolo
systems have a `utime'
that behaves this way. New programs need not use this macro.
I think that then _this_ is the cross-compilation default to be fixed.
Ok?
Paolo
2009-04-09 Paolo Bonzini bonz...@gnu.org
* lib/autoconf/functions.m4 (AC_FUNC_UTIME_NULL): Assume
On 10/06/2009 11:05 AM, Pádraig Brady wrote:
Also a minor nit in s/Linux/Gnu\/Linux/
Definitely not when it's talking explicitly of a kernel version?
Paolo
$ ./configure
...
Configure findings:
FFI: no (user requested: default)
readline: yes (user requested: default)
libsigsegv: no, consider installing GNU libsigsegv
Especially for packages with many dependencies or many configuration
options it would
On 10/22/2009 01:09 PM, Pádraig Brady wrote:
Jim Meyering wrote:
Pádraig Brady wrote:
p.s. I'll look at bypassing stdio on input to see
if I can get at least the 2% back
IMHO, even if it did, it would not be worth it.
Right, a quick test here shows only a 0.8% gain from
bypassing stdio.
+ char* buffer = malloc(BLOCKSIZE + 72);
Spacing:
char *buffer = malloc (BLOCKSIZE + 72);
+ if (!buffer)
+ return 1;
Where is that memory freed?
Everything else is fine by me.
Paolo
On Sat, Oct 24, 2009 at 16:04, Bruno Haible br...@clisp.org wrote:
Pádraig Brady wrote:
+ * lib/copy-file.c (copy_file_preserving): Use a 32 KiB malloc'd
buffer.
Fine with me too. Yes, 32 KB is more than you can safely allocate on the stack
in a multithreaded program: The default
On 10/26/2009 11:33 AM, Pádraig Brady wrote:
So how about sort -j,--jobs to match `make`?
Agreed. However, I think that for coreutils programs it should be the
default to use threads whenever possible.
Paolo
Some programs, like 'msgmerge' from GNU gettext, already pay
attention to the OMP_NUM_THREADS variable - a convention shared
by all programs that rely on OpenMP. Can you make the 'sort'
program use the same convention?
I am not working on the multi-threaded sort, but if somebody asks I can
On 10/27/2009 01:16 PM, Pádraig Brady wrote:
I already suggested to the xargs maintainer that `xargs -P`
should be equivalent to xargs -P$(nproc).
I was thinking of an additional option that would automatically decrease
-n so that the requested number of processes is started (then of course
the load may not be well balanced).
So you mean, rather than the current situation of:
$ yes . | head -n13 | xargs -n4 -P2
. . . .
. . . .
. . . .
.
+diff --git a/lib/regcomp.c b/lib/regcomp.c
This is okay.
diff --git a/gl/lib/regex_internal.c.diff b/gl/lib/regex_internal.c.diff
This is okay. There is one caller of re_node_set_remove_at, in
regexec.c, which might pass SIZE_MAX as the second parameter to
re_node_set_remove_at. This
On 10/29/2009 10:02 AM, Jim Meyering wrote:
IMHO it is a bug fix.
A semantically unsigned variable must never be decremented to -1.
I didn't try to see if it could induce misbehavior.
No, it couldn't. The problem is that the variable is semantically
unsigned in gnulib because of the IMHO
On 10/29/2009 10:24 AM, Jim Meyering wrote:
In making your case, it would be good to be able to itemize
the glibc bug fixes that have not yet been ported to gnulib,
I think there are none pending.
and contrast that gain with the (theoretical?) loss of support for
strings of length >= 2 GiB.
seq 1 13 | xargs --parallel -P4
1 5 9 13
2 6 10
3 7 11
4 8 12
(Note there's no -n). Same for
seq 1 13 | xargs --parallel
on a 4-core machine. This is _by design_ rearranging files, so it
requires an option.
Right, you're not auto decreasing -n, but when we read all args and
we pass
On 11/04/2009 01:24 AM, Pádraig Brady wrote:
BTW, it wouldn't be ambiguous to the program, nor would it
be different than the existing meaning, but as you say,
users could mistakenly do -P0 when they meant -0P.
So I'll make the arg mandatory, but what to choose?
n is all I can come up with in my
I have updated the new nproc program to use this change in gnulib.
Thanks to Bruno, nproc now has no logic of its own; it is a mere
wrapper around the gnulib module.
As arguments to the new program I used the same names as the
`nproc_query' enum, except using --overridable instead of
Since I had to reboot to Windows I gave a try at the grep snapshot
under Cygwin. I have Cygwin 1.5.25. All grep tests passed except:
- fmbtest.sh and euc-mb which were skipped
- help-version which failed because the sleep process created when
setting up kill_args keeps the $tmp directory busy
* help-version: Change each *_args variable to a *_setup function.
---
tests/help-version | 156 +++-
1 files changed, 82 insertions(+), 74 deletions(-)
diff --git a/tests/help-version b/tests/help-version
index 21841e4..436a4e2 100755
---
I recently got by private email a report that sed -i changed the line
endings of the file to bare linefeeds on cygwin. The reason for this is
that mkstemp on cygwin hardcodes the flags to O_EXCL|O_BINARY:
http://www.cygwin.com/ml/cygwin-patches/2006-q2/msg00013.html
I fixed it by using
On 07/16/2010 11:47 PM, Paul Eggert wrote:
On 07/16/10 13:27, Paolo Bonzini wrote:
I fixed it by using instead mkostemp(template, 0). From a quick git
grep, it seems like sort and tac are affected by the bug in coreutils.
tac accesses the temp file in binary mode, so there's no problem
On 07/18/2010 01:54 AM, Eric Blake wrote:
By the way, newer cygwin provides mkostemp() - did you only fix the
problem for older cygwin that lacks mkostemp and thus gets the gnulib
fallback that doesn't force binary?
mkostemp also forces binary? That's a bug IMO, since the caller can
tell
On 07/19/2010 03:28 PM, Eric Blake wrote:
By the way, I don't see your patch for using mkostemp on cygwin in
git://git.sv.gnu.org/sed.git; am I missing something, or is that not the
latest git repository for sed?
I wanted to make sure you liked it before pushing. :) I'll now push it.
I also
On 07/19/2010 06:33 PM, Eric Blake wrote:
Yuck - that means if /tmp is mounted differently than ., then using
mkostemp(,0) will force the wrong line endings (converting binary to
text, or converting text to binary, depending on which direction the
mismatch is between the mount modes). If you
On 07/26/2010 08:53 PM, Paul Eggert wrote:
I noticed thirteen inlines in coreutils/src/sort.c. Just for fun, I
removed them all. In ten cases, removing inline made no difference to
the generated machine code on my platform (RHEL 5, x86-64, GCC 4.1.2,
compiled with the typical gcc -O2). In the
On 08/05/2010 01:44 AM, Simon Josefsson wrote:
Paul Eggertegg...@cs.ucla.edu writes:
Come to think of it, looking at gnulib memxfrm gave me an idea
to improve the performance of GNU sort by bypassing the need for an
memxfrm-like function entirely. I pushed a patch to do that at
On 08/06/2010 01:29 AM, Paul Eggert wrote:
1) why bother with memxfrm as a tie-breaker? isn't memcmp good enough?
If two keys K1 and K2 compare equal, their random hashes are supposed
to compare equal too. So if memcoll(K1,K2)==0, the random hashes must
be the same. Hence we can't just do a
On 07/04/2011 11:08 AM, Pádraig Brady wrote:
* execute.c (open_next_file): Only reopen stdin on Windows.
Applied, thanks.
Paolo
[For bug-coreutils: the context here is that sed -i, just like perl -i,
breaks hard links and thus destroys the content of files with 0400
permission].
On 06/09/2012 12:38, John Darrington wrote:
That's expected of programs that break hard links.
I wonder how many users who are not
On 06/09/2012 14:30, Pádraig Brady wrote:
I consider shuf foo -o foo (on a read-write file) to be insecure.
Besides, it works by chance just because it reads everything in memory
first. If it used mmap to process the input file, shuf foo -o foo
would be broken, and the only way to fix
On 06/09/2012 18:11, Paul Eggert wrote:
I consider shuf foo -o foo (on a read-write file) to be insecure.
Besides, it works by chance
It's not by chance. shuf is designed to let you
shuffle a file in-place, and is documented to work,
by analogy with sort -o foo foo. If we ever
On 06/09/2012 18:35, Paul Eggert wrote:
A program that reads the target file will never
be able to observe an intermediate result.
Sure, but that doesn't fix the race condition I
mentioned. If some other process is writing F
while I run 'sed -i F', F is not replaced atomically.
How
On 06/09/2012 20:21, Bob Friesenhahn wrote:
No, unlink/rename sed -i replaces the file atomically. A program that
POSIX rename assures that the destination path always exists if it
already existed.
My bad, I meant link-breaking/rename. Of course you must not unlink first.
Paolo
On 06/09/2012 19:23, Paul Eggert wrote:
The file replacement is atomic. The reading of the file is not.
Sure, but the point is that from the end user's
point of view, 'sed -i' is not atomic, and can't
be expected to be atomic.
Atomic file replacement is what matters for security.