printf '\x41' prints \x41 instead of A.
Also it has no --help or --version switch, but I seem to be running
version 8.21-1ubuntu5 (obviously on Ubuntu).
On 05/26/2014 12:50 PM, Pádraig Brady wrote:
$ type printf
printf is a shell builtin
Of course, darn builtins! Sorry for the noise.
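The resolution Pádraig pointed at is worth spelling out: in an interactive shell, printf resolves to the shell builtin, whose escape handling can differ from the coreutils binary. A quick way to check which one runs, and to force the coreutils version (a minimal sketch, assuming coreutils printf is on PATH):

```shell
type printf          # reports "printf is a shell builtin" in bash/dash
env printf '\x41\n'  # env bypasses builtins, so coreutils printf runs: prints A
```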
On 11/26/2013 06:37 PM, Bernhard Voelker wrote:
As already mentioned, the current implementation is not ideal. It
is a compromise between the requirements which hit 'df' at that
time:
* showing the real root file system instead of early-boot
On 11/24/2013 05:24 AM, Bernhard Voelker wrote:
Thanks for the suggestion, but that is not possible. For the kernel,
all bind mounts are actually equal among each other, and there's no
information about bind flags in /proc/self/mounts (which
On 10/16/2013 3:19 AM, Peter D. wrote:
Hi,
Is it deliberate that dd can not read from, or write to the host
protected area? Or is it a bug?
The HPA is a feature of the drive, not the OS or software, so dd has
no idea whether or not there is one
On 4/21/2011 4:10 PM, Phillip Susi wrote:
I noticed that some posts to the list today were not being filtered into
the correct mailbox because the List-Id and List-Post fields were
changed from gnu.org to nongnu.org. Was this intentional, and why?
I didn't file a bug report. This message
On 4/1/2010 7:49 AM, phoenix wrote:
Hey guys,
I've found a serious error in the program false: If I pipe some text to it
(e.g. echo i lost the game | false), it does not fail but rather returns a
success. Any suggestions? I need this program very much for completing my
thesis on
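For the record (note the April 1st date on that message): false ignores its stdin entirely and exits with status 1 unconditionally, which is easy to verify:

```shell
echo "i lost the game" | false
echo $?   # prints 1: false always fails, regardless of what is piped in
```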
On 3/11/2010 7:37 AM, Andreas Schwab wrote:
Incidentally, due to the increasing use of SSD and their tendency not to
reuse recently used blocks it may become again easier in future to
recover data.
Actually once TRIM support becomes common recovering deleted files on
SSD will be impossible
Kevin Pulo wrote:
r...@bebique:~# du -axk /home/kev/mnt/sf
du: cannot access `/home/kev/mnt/sf/home': Permission denied
4 /home/kev/mnt/sf
r...@bebique:~# du -axk --exclude=/home/kev/mnt/sf/home /home/kev/mnt/sf
du: cannot access `/home/kev/mnt/sf/home': Permission denied
4
James Youngman wrote:
This version should be race-free:
find -type f -print0 |
xargs -0 -n 8 --max-procs=16 md5sum > ~/md5sums 2>&1
I think that writing into a pipe should be OK, since pipes are
non-seekable. However, with pipes in this situation you still have a
problem if processes try to
Micah Cowan wrote:
He means that there _is_ no optimization. When you're applying ls -i
directly to files (ls -i non-directory, the scenario he mentioned as
not being affected), there is no readdir, there are no directory
entries, and so there is no optimization to be made. A call to stat is
Jim Meyering wrote:
When I say not affected I mean it.
Turning off the readdir optimization affects ls -i only
when it reads directory entries.
You mean you are only disabling the optimization and calling stat()
anyway for directory entries, and not normal files? Then the effect is
only
Jim Meyering wrote:
EVERY application that invokes ls -i is affected.
Please name one.
I'm not sure why this isn't getting through to you. ANY and EVERY
invoker of ls -i that does or possibly could exist is affected by a
degradation of its performance.
Jim Meyering wrote:
Here are two reasons:
- lack of convincing arguments: any program that runs
ls -i non-directory ... is not affected at all.
Of course it is affected -- it takes much longer to run.
- lack of evidence that users would be adversely affected:
the only program
Jim Meyering wrote:
From what I've read, POSIX does not specify this.
If you know of wording that is more precise, please post a quote.
That was my point: the standard does not specify that this behavior
is an error, and since every Unix system since the dawn of time has
behaved this way,
Wayne Pollock wrote:
How can either ls or readdir be considered correct when
the output is so inconsistent? What behavior do you expect from
backup scripts (and similar tools) that use find (or readdir)?
It seems clear to me that returning the underlying inode numbers
must result in having the
Jim Meyering wrote:
Ultimately, neither POSIX nor any other official standard defines what
is right for coreutils. POSIX usually serves as a fine reference, but
I don't follow it blindly. In rare cases I've had a well-considered
disagreement with some aspect of a standard, and I have
Jim Meyering wrote:
The change I expect to implement does not go against POSIX.
On the contrary, the standard actually says the current readdir
behavior is buggy. See my previous reference to a quote from
readdir's rationale.
Going against the standard behavior means differing in behavior
Andreas Schwab wrote:
It would match the behaviour as defined by ASYNCHRONOUS EVENTS in 1.11
Utility Description Defaults.
Could you quote that section or give me a url to somewhere I can see it
myself? I have no idea what it says nor where to look it up.
Also what about the issue where
Andreas Schwab wrote:
It seems to me that tee should have a SIGPIPE handler which closes the
broken fd and stops trying to write to it, and if ALL outputs have been
closed, exit.
That would not be compatible with POSIX.
In what way?
Also, won't ignoring SIGPIPE cause problems later when tee
Andreas Schwab wrote:
Bruno Haible [EMAIL PROTECTED] writes:
How about adding an option '-p' to 'tee', that causes it to ignore SIGPIPE
while writing to stdout?
Just add a trap '' SIGPIPE before starting tee.
Wouldn't that only trap SIGPIPE sent to the shell, not tee? Aren't all
signal
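One detail worth noting on the `trap '' SIGPIPE` suggestion: a disposition of "ignore" (unlike a trap with a handler) does survive fork and exec, so children such as tee inherit it. A minimal demonstration of that inheritance:

```shell
# Ignoring SIGPIPE in the parent shell...
trap '' PIPE
# ...is inherited by an exec'd child: SIGPIPE would normally terminate
# this subshell, but here it survives and prints.
sh -c 'kill -s PIPE $$; echo survived'
```

Without the trap line, the child is killed by SIGPIPE's default disposition and prints nothing.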
Hauke Laging wrote:
Hello,
I just read an interesting hint in the German shell Usenet group
([EMAIL PROTECTED]). As I could not find anything about
that point in your mailing list archive I would like to mention it here.
The author claims that he achieved a huge performance increase (more
Philip Rowlands wrote:
Coreutils manpages tend to be short reference sheets listing the
available options. Further documentation is provided in the info
command, as is mentioned at the end of each manpage.
From the docs:
`-b'
`--binary'
Treat each input file as binary, by reading
Pádraig Brady wrote:
The CPU percentage of dd process sometimes is 30% to 50%,
which is higher than we expect (= 20%), and there is no other big
program running at the same time.
If the disc in SATA ODD is CD-R instead of DVD-R, the percentage is
much smaller(=20).
That just means that dd is
Bob Proulx wrote:
It appears that something failed getting the file system values. Try
debugging this using strace. The following produces useful debug
output on my GNU/Linux system for usb storage devices.
Most likely that is the case as this subfs does not appear to have been
actively
Jim Meyering wrote:
Yes, it's expected, whenever you use a file system with
imperfect-by-definition inode emulation.
AFAIR, the fat driver uses the starting cluster of the file as the inode
number, so unless you defrag or something, it shouldn't change.
Roberto Spadim wrote:
i think that DF is understanding /mnt/smb (smbfs mount point) as / disk
usage
but if i umount it and get again df and du -s /*, df still with 88%
No, df asks the filesystem itself for the information with statfs(), so
the only way it is wrong is if the fs is damaged.
[EMAIL PROTECTED] wrote:
Is it normal to see two users on the same tty?
$ who
jidanni pts/0 ...
ralph   pts/0 ...
jim pts/1 ...
$ ls -l /dev/pts/0
crw--w---- 1 jidanni tty 136, 0 2007-08-17 00:58 /dev/pts/0
The administrator (Dreamhost) says
You could potentially see many more than that
Andreas Schwab wrote:
Chris Moore [EMAIL PROTECTED] writes:
$ mv dir ..
mv: cannot move `dir' to a subdirectory of itself, `../dir'
With coreutils 6.9 you'll get Directory not empty.
That also seems incorrect. Shouldn't the error be "A file (directory)
with that name already exists"?
Bob Proulx wrote:
NOT this:
$* >> nohup.out.$name-$pid
But this:
$* | sed "s/^/$name-$pid: /" >> nohup.out
$* | sed "s/^/$timestamp: /" >> nohup.out
OH! I see now... yea, that would require active participation.
trap "" 1 15
if test -t 2>&1 ; then
echo Sending output to 'nohup.out'
Micah Cowan wrote:
Untrue, actually: _handling_ the signals would not handle them in the
exec'd child (for obvious reasons), but POSIX requires blocked signals
to remain blocked after an exec.
Try the following:
#!/bin/sh
trap '' TSTP
exec yes > /dev/null
and then try suspending the
Bob Proulx wrote:
Uhm... I think we drifted from the feature discussion:
How so?
Jack van de Vossenberg wrote:
My request is: could the output be preceded by
1) the name/PID of the process that produces the output.
2) the time that the output was produced.
I don't think that is possible
Bob Proulx wrote:
Well, perhaps in a sense *anything* is possible with enough code to
implement it. However as originally designed and currently written it
is not possible for nohup to do this. It is only possible for nohup
if it were rewritten to be a completely different program. It would
Pádraig Brady wrote:
My request is: could the output be preceded by
1) the name/PID of the process that produces the output.
That's not possible unfortunately, as nohup just
sets things up, and replaces itself with the command.
It might suffice to have separate files for each command,
which
Alfred M. Szmidt wrote:
Standards should never be followed blindly, and standards should be
broken when one thinks one has good reasons.
SI also conflicts with POSIX in this case. Not to mention that SI
does not define prefixes for all possible units, only SI units, and a
byte is not a SI
Jim Meyering wrote:
Which ls option(s) are you using?
I used ls -Ui to list the inode number and do not sort. I expected this
to simply return the contents from getdents, but I see stat64 calls on
each file, I believe in the order they are returned by getdents in,
which causes a massive
Jim Meyering wrote:
That's good, but libc version matters too.
And the kernel version. Here, I have linux-2.6.18 and
Debian/unstable's libc-2.3.6.
How does the kernel or libc version matter at all? What matters is the
on disk filesystem layout and how it is not optimized for fetching stat
I have noticed that performing commands such as ls (even with -U) and
du in a Maildir with many thousands of small files takes ages to
complete. I have investigated and believe this is due to the order in
which the files are stat()ed. I believe that these utilities are simply
stat()ing the
Why not simply cap the size at 4 MB? If it is greater than 4 MB just go
with 4 MB instead of 512 bytes. In fact, you might even want to cap it
at less than 4 MB, say 1 MB or 512 KB. I think you will find that any
size larger than the 32-128 kb range yields no further performance
increase
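A rough way to see the plateau (a sketch only; copying /dev/zero to /dev/null isolates per-call overhead rather than real disk behavior):

```shell
# Copy the same 4 MiB at several block sizes; past roughly 64K the
# reported rate barely moves, while the syscall count keeps dropping.
for bs in 512 4096 65536 1048576; do
  dd if=/dev/zero of=/dev/null bs=$bs count=$(( 4194304 / bs )) 2>&1 | tail -n 1
done
```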
Tony Ernst wrote:
I believe the larger block sizes are especially beneficial with RAID.
I'm adding Geoffrey Wehrman to the CC list, as he understands disk I/O
much better than I do.
I believe most kernels always perform the actual IO in the same size
chunks due to the block layer and cache,
What is atomic about having dd do this? open() with O_DIRECTORY to test
for existence of a directory is exactly what test does isn't it? If
your goal is to test for the existence of a directory then test is what
you want to use, not dd. The purpose of dd is to copy/convert data
between
Paul Eggert wrote:
No, because test -d foo && test -r foo is _two_ invocations of
test, not one. A race condition is therefore possible. The race
condition is not possible with dd if=foo iflag=directory count=0.
Ok, so this allows you to atomically test if the named object is both a
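So the one-invocation form does the whole check in a single open(2) with O_DIRECTORY, and count=0 means nothing is actually read. A small sketch of Paul Eggert's recipe (file names here are just placeholders):

```shell
mkdir -p demo_dir
touch demo_file
# Succeeds only if the operand can be opened as a directory:
dd if=demo_dir  iflag=directory count=0 2>/dev/null && echo "demo_dir is a readable directory"
# Fails on a regular file (open with O_DIRECTORY returns ENOTDIR):
dd if=demo_file iflag=directory count=0 2>/dev/null || echo "demo_file is not a directory"
```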
This sounds like an autofs problem. I'm running ubuntu and hal auto
mounts removable media when it is inserted. When it is not mounted, df
will not show a line for it at all, since df only shows mounted points.
I think what you are seeing is an autofs mount point being mounted
there which
I'm confused. You can't open() and write() to a directory, so how does
it make any sense to ask dd to set O_DIRECTORY?
Paul Eggert wrote:
I wanted to use dd iflag=directory (to test whether a file is a
directory, atomically), and noticed that dd didn't have it. The use
was a fairly obscure
Or grep.
Paul Eggert wrote:
N Gandhi Raja [EMAIL PROTECTED] writes:
Can we use test command in UNIX to compare a *string* with the
*regular expression*?
No. You might look at 'expr' or 'awk' instead.
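For reference, expr's `:` operator matches its first operand against a basic regular expression anchored at the start of the string, printing the number of characters matched (or the `\(...\)` capture, if one is used):

```shell
expr "abc123" : '[a-z]*123'   # prints 6: the whole string matched
# The BRE is anchored at the start, so this does not match:
expr "abc123" : '[0-9]' >/dev/null || echo "no digit at the start"
```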
Bug-coreutils mailing list
Shouldn't it be made consistent? IMHO, the command mv a b/ means move
the file or directory named a into the directory named b, so if b does
not exist or is not a directory, it should fail. If you want to make mv
deviate from this behavior, then at least shouldn't it behave the same
on all
Maybe I misunderstood you but you seem to think that each hard link to
the same file can have different ownerships. This is not the case.
Hard links are just additional names for the same inode, and permissions
and ownership is associated with the inode, not the name(s).
Also I just tested
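Easy to confirm: both names resolve to the same inode, so an ownership or permission change made through one name is visible through the other (file names below are just examples):

```shell
touch original
ln -f original alias      # hard link: a second name for the same inode
ls -i original alias      # both lines show the same inode number
chmod 600 original
ls -l alias               # the mode change shows through the other name
```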
Hard drive sectors are 512 bytes so use a bs of 512 and skip FF68
blocks. I'm not sure if dd will accept hex numbers; try prefixing it
with 0x (the C convention for hex numbers). Otherwise, convert the
hex number to decimal.
Mark Perino wrote:
How does one convert from LBA to skip,
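I'm not sure which dd versions, if any, accept a 0x prefix, but POSIX shell arithmetic definitely understands C-style hex constants, so the conversion can be done inline. The device name below is a placeholder:

```shell
echo $(( 0xFF68 ))    # prints 65384, the decimal form of hex FF68
# then, with /dev/sdX standing in for the real drive:
#   dd if=/dev/sdX of=sector.bin bs=512 skip=$(( 0xFF68 )) count=1
```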
Most likely this is the access timestamps being updated on the files
being read, try adding the noatime option to your mount options to
prevent this.
Jochen Baier wrote:
hi,
I ran into a weird problem: a script which uses the sleep command
generates hard disk access every x seconds. this is
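To make that persistent across reboots, noatime goes into the mount options in /etc/fstab. An illustrative entry (device, mount point, and filesystem type are placeholders):

```
# /etc/fstab -- noatime stops reads from triggering inode writes
/dev/sda1  /home  ext4  defaults,noatime  0  2
```

For a one-off test, `mount -o remount,noatime /home` applies it without editing anything.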
I have always thought that the very name sync is completely
misleading. The option really has nothing at all to do with IO being
synchronous or asynchronous; you can still perform IO either way (think
non-blocking and Linux async IO). What this option really does is
simply cause the cache
Robert Latham wrote:
I mean no offense cutting out most of your points. You describe great
ways to achieve high I/O rates for anyone writing a custom file mover.
I shouldn't have mentioned network file systems. It's a distraction
from the real point of my patch: cp(1) should consider both the
It is a general design philosophy of linux, and unix in general, that
the kernel will not enforce locking of files. This is why you can
upgrade software without rebooting: the old file can be deleted and
replaced with the new file, even though it is still in use. Of course,
it isn't actually
What would such network filesystems report as their blocksize? I have a
feeling it isn't going to be on the order of a MB. At least for local
filesystems, the ideal transfer block size is going to be quite a bit
larger than the filesystem block size ( if the filesystem is even block
I don't see why the filesystem's cluster size should have a thing to do
with the buffer size used to copy files. For optimal performance, the
larger the buffer, the better. Diminishing returns applies of course,
so at some point the increase in buffer size results in little to no
further
there was over a year
between the release of 5.2.1 and 5.92, with nothing in between.
Paul Eggert wrote:
Phillip Susi [EMAIL PROTECTED] writes:
I searched the archives and found a thread from over a year ago
talking about adding support to dd for O_DIRECT, but it is not
documented in the man pages
I searched the archives and found a thread from over a year ago talking
about adding support to dd for O_DIRECT, but it is not documented in the
man pages. Did the man pages not get updated, or did this patch not
make it in?
If O_DIRECT is supported, but not documented, then I wonder: does
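For anyone trying it, the flags in question are dd's iflag=direct and oflag=direct. O_DIRECT support depends on the filesystem (tmpfs, for instance, rejects it), so a sketch should tolerate failure:

```shell
# Write 16 KiB with O_DIRECT if the filesystem allows it:
dd if=/dev/zero of=direct_test.bin bs=4096 count=4 oflag=direct 2>/dev/null \
  && echo "wrote with O_DIRECT" \
  || echo "O_DIRECT not supported on this filesystem"
```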