bug#29939: HELP: entering an "ls" command closes the OS connection

2018-01-01 Thread lil...@chinaunicom.cn
Hi:
When I enter a "ls " command at the directory "/root",the OS connection 
will closed.
   
The OS release is:
SUSE Linux Enterprise Server 11 (x86_64)
VERSION = 11
PATCHLEVEL = 1

There are not many files in the directory; I used the "du -sh *" command to
list all of them:
TJTV_APIGW_AEPPortal01:~ # du -sh *
4.0K 1211.cap
16K 123.cap
8.0K 1.cap
0 1.pcap
8.0K 202.cap
152K 211.cap
4.0K 21.cap
4.0K 21.pcap
4.0K 22.cap
4.0K 2.cap
8.0K aaa.cap
4.0K A.cap
4.0K aep1211.cap
4.0K apigw.cap
252K atop-1.27-3.929.x86_64.rpm
56K autoinst.xml
4.0K bin
240K bmuSudoScripts
7.8G breeze
8.0K build_trust_relation.exp
4.0K CpServer
4.0K Desktop
20K Documents
4.0K error.cap
4.0K Foundweakpasswds.txt
4.0K gate.cap
4.0K getOrders
12K ha.cap
8.0K hx.cap
4.0K ideploy_file_history
52K inst-sys
4.0K main
24M OralcePathPak
24M OralcePathPak.tar
4.0K p.cap
4.0K queryOrder
4.0K queryOrder.1
4.0K queryOrder.2
4.0K qyw13.cap
4.0K qyw1.cap
544K remoteExecScript
216K secbackup
4.0K sendsmsauthcode.jsp
8.3M suse11sp1
8.0M suse11sp1.zip
4.0K sysconf.temp
92K uniagent
4.0K weakpasswd.txt
4.0K yoy.cap
4.0K yy.cap





China United Network Communications Co., Ltd. (China Unicom)
TV Value-Added Services Operations Center
Li Lei
Tel: 15620012157
Email: lil...@chinaunicom.cn
Address: 2nd Floor, No. 1151 Nanma Road, Nankai District, Tianjin


bug#29921: O(n^2) performance of rm -r

2018-01-01 Thread Niklas Hambüchen
On 01/01/2018 17.40, Pádraig Brady wrote:
> The user and sys components of the above are largely linear,
> so the increasing delay is waiting on the device.

My understanding of commit 24412edeaf556a was that "waiting on the
device" is exactly the part the patch claims to have made linear -- it
explicitly mentions that it fixes behaviour that is only present on
seeking devices, so that would be real time, not user/sys, right?

Also, the test added at the end of the patch measures real time, not
user/sys.

So I'm relatively sure that what Jim Meyering was measuring/fixing when
he wrote the patch and commit message was real time.
But it would be nice if he could confirm that.
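
For anyone reproducing this, bash's `time` keyword already reports the
three components separately; in a seek-bound case only the real column
should blow up while user and sys stay roughly linear. A minimal,
untested sketch (the file count is arbitrary):

  mkdir dirtest && ( cd dirtest && seq 1 100000 | xargs touch )
  time rm -r dirtest    # compare "real" against "user" + "sys"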

> To confirm I did a quick check on a ramdisk (/dev/shm)
> to minimize the latency effects, and that showed largely
> linear behaviour in the real column also.

The patch I linked says specifically:

  "RAM-backed file systems (tmpfs) are not affected,
  since there is no seek penalty"

So I am not sure what checking on a ramdisk gains us here, unless you
think that this statement in the patch was incorrect.

> Note if you want to blow away a whole tree quite often,
> it may be more efficient to recreate the file system
> on a separate (loopback) device.

While a nice idea, this is not very practical advice: when an
application goes crazy and writes millions of files to the disk, I
can't just wipe the entire file system.

An O(n^2) wall-time `rm -r` is a big problem.

Of course it may not be coreutils's fault, but since coreutils was
clearly aware of the issue in the past and claimed it was fixed, I think
we should investigate to be really sure this isn't a regression anywhere
in the ecosystem.

> I presume that's coming from increased latency effects
> as the cached operations are sync'd more to the device
> with increasing numbers of operations.
> There are many Linux kernel knobs for adjusting
> caching behaviour in the io scheduler.

I don't follow this reasoning.
Shouldn't any form of caching eventually settle into linear behaviour?

Also, if it were as you say, shouldn't the `touch` step eventually
become quadratic as well?
It seems not: creating the directory entries appears perfectly linear.
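
A rough way to double-check that is to time only the creation step,
doubling the count each round (untested sketch; the counts are just
examples):

  for n in 100000 200000 400000 800000; do
      mkdir dirtest
      echo "$n files"
      time ( cd dirtest && seq 1 "$n" | xargs touch )   # creation only
      rm -r dirtest                                      # cleanup, untimed
  done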





bug#29921: O(n^2) performance of rm -r

2018-01-01 Thread Pádraig Brady
tag 29921 notabug
close 29921
stop

On 31/12/17 21:07, Niklas Hambüchen wrote:
> Hello,
> 
> in commit
> 
>   rm -r: avoid O(n^2) performance for a directory with very many entries
>   http://git.savannah.gnu.org/cgit/coreutils.git/commit/?id=24412edeaf556a
> 
> it says that `rm -r`
> 
>   "now displays linear performance, even when operating on million-entry
> directories on ext3 and ext4"
> 
> I found this not to be the case on my systems running ext4 on Linux 4.9
> on a WD 4TB spinning disk, coreutils rm 8.28.
> As reported on https://bugs.python.org/issue32453#msg309303, I got:
> 
>  nfiles     real     user      sys
> 
>  100000    0.51s    0.07s    0.43s
>  200000    2.46s    0.15s    0.89s
>  400000   10.78s    0.26s    2.21s
>  800000   44.72s    0.58s    6.03s
> 1600000  180.37s    1.06s   10.70s
> 
> Each 2x increase in the number of files results in a 4x increase in
> deletion time, making for a clean O(n^2) quadratic curve.
> 
> I'm testing this with:
> 
>   set -e
>   rm -rf dirtest/
>   echo  100000 && (mkdir dirtest && cd dirtest && seq 1  100000 | xargs touch) && time rm -r dirtest/
>   echo  200000 && (mkdir dirtest && cd dirtest && seq 1  200000 | xargs touch) && time rm -r dirtest/
>   echo  400000 && (mkdir dirtest && cd dirtest && seq 1  400000 | xargs touch) && time rm -r dirtest/
>   echo  800000 && (mkdir dirtest && cd dirtest && seq 1  800000 | xargs touch) && time rm -r dirtest/
>   echo 1600000 && (mkdir dirtest && cd dirtest && seq 1 1600000 | xargs touch) && time rm -r dirtest/
> 
> 
> On another system, Ubuntu 16.04, coreutils rm 8.25, with Linux 4.10 on
> ext4, I get:
> 
>  nfiles     real     user       sys
> 
>  100000    0.94s    0.06s    0.516s
>  200000    2.94s    0.10s    1.060s
>  400000   10.88s    0.30s    2.508s
>  800000   34.60s    0.48s    4.912s
> 1600000  203.87s    0.99s   11.708s
> 
> Also quadratic.
> 
> Same machine on XFS:
> 
>  nfiles     real     user      sys
> 
>  100000    3.37s    0.04s    0.98s
>  200000    7.20s    0.06s    2.03s
>  400000   22.52s    0.16s    5.11s
>  800000   53.31s    0.37s   11.46s
> 1600000  200.83s    0.76s   22.41s
> 
> Quadratic.
> 
> What is the complexity of `rm -r` supposed to be?
> Was it really linear in the past as the patch describes, and this is a
> regression?
> Can we make it work linearly again?

Note that the patch mentioned above sorts entries by inode order,
which is usually a win, since it avoids random access across the device.
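
The effect can be approximated from the shell; the following is only an
illustration of the idea (not what rm/fts does internally), and it
assumes GNU find/sort/cut and a flat test directory:

  find dirtest -type f -printf '%i\t%p\0' |  # emit "inode<TAB>path", NUL-terminated
    sort -zn |                               # numeric sort on the leading inode number
    cut -z -f2- |                            # keep only the path
    xargs -0 rm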

The user and sys components of the above are largely linear,
so the increasing delay is waiting on the device.
I presume that's coming from increased latency effects
as the cached operations are sync'd more to the device
with increasing numbers of operations.
There are many Linux kernel knobs for adjusting
caching behaviour in the io scheduler.
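
For example (values purely illustrative, effects vary by kernel and
workload, and "sda" is a placeholder for the actual device):

  cat /sys/block/sda/queue/scheduler               # current I/O scheduler
  sysctl vm.dirty_background_ratio vm.dirty_ratio  # writeback thresholds
  sudo sysctl vm.dirty_background_ratio=20 vm.dirty_ratio=40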

To confirm I did a quick check on a ramdisk (/dev/shm)
to minimize the latency effects, and that showed largely
linear behaviour in the real column also.
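
Such a check can look like this (sketch only; the file count is
arbitrary, and /dev/shm is tmpfs on typical Linux systems):

  mkdir /dev/shm/dirtest
  ( cd /dev/shm/dirtest && seq 1 100000 | xargs touch )
  time rm -r /dev/shm/dirtest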

Note if you want to blow away a whole tree quite often,
it may be more efficient to recreate the file system
on a separate (loopback) device.
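
Roughly, something like (untested; the paths, size and fs type are
placeholders, and mkfs/mount need root):

  truncate -s 10G /var/tmp/scratch.img            # backing file for the scratch fs
  mkfs.ext4 -Fq /var/tmp/scratch.img
  mkdir -p /mnt/scratch
  mount -o loop /var/tmp/scratch.img /mnt/scratch
  # ... the application writes its millions of files under /mnt/scratch ...
  umount /mnt/scratch
  mkfs.ext4 -Fq /var/tmp/scratch.img              # recreating the fs replaces rm -r
  mount -o loop /var/tmp/scratch.img /mnt/scratch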

cheers,
Pádraig