Re: [U2] [UV] Large File Operations Kill Linux

2013-02-05 Thread Symeon Breen
Ext3 and 132GB RAM all sounds good. RHEL 5 always uses kernel 2.6.18; there may be patches available, as Brian says, so going through Red Hat support is the best bet. It is, after all, what you pay for; otherwise you would just have CentOS.





___
U2-Users mailing list
U2-Users@listserver.u2ug.org
http://listserver.u2ug.org/mailman/listinfo/u2-users
-
No virus found in this message.
Checked by AVG - www.avg.com
Version: 2012.0.2238 / Virus Database: 2639/5581 - Release Date: 02/04/13



Re: [U2] [UV] Large File Operations Kill Linux

2013-02-05 Thread Hona, David
Yes, sounds like it's been identified and fixed a while ago... like Dan says, a kernel update will be the simplest way to address it (time and outage permitting):
https://bugzilla.redhat.com/show_bug.cgi?id=735946


-Original Message-
From: u2-users-boun...@listserver.u2ug.org 
[mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Dan Fitzgerald
Sent: Tuesday, 5 February 2013 9:32 AM
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] [UV] Large File Operations Kill Linux


Other users could have been hanging in malloc(). With a swappiness of 100 (on some kernels), or 0 (on others), or anything that's not 0 or 100 (not sure which behavior you get on 2.6.18), pages wouldn't be getting freed up quickly enough during the creation/copying of a large file.
 
Another thing to look at (although I prefer the support route, since you have it) is /sys/kernel/mm/transparent_hugepage/defrag. Other people who have had this problem alleviated it by setting it to never.

Of course, others fixed it by updating the kernel. My aged eyes read what you have as 2.6.8.1...
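For reference, the knob Dan mentions can be inspected and changed like this where the path exists (writing it needs root, and some Red Hat kernels expose it as /sys/kernel/mm/redhat_transparent_hugepage instead):

```shell
# Show the current THP defrag policy; the bracketed word is the active one,
# e.g. "[always] madvise never"
cat /sys/kernel/mm/transparent_hugepage/defrag

# Stop the kernel from synchronously compacting memory for huge-page
# allocations, which is what can stall other processes
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```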

** IMPORTANT MESSAGE **
This e-mail message is intended only for the addressee(s) and contains information which may be confidential. If you are not the intended recipient please advise the sender by return email, do not use or disclose the contents, and delete the message and any attachments from your system. Unless specifically indicated, this email does not constitute formal advice or commitment by the sender or the Commonwealth Bank of Australia (ABN 48 123 123 124) or its subsidiaries. We can be contacted through our web site: commbank.com.au. If you no longer wish to receive commercial electronic messages from us, please reply to this e-mail by typing Unsubscribe in the subject line.
**





Re: [U2] [UV] Large File Operations Kill Linux

2013-02-05 Thread Perry Taylor
I have engaged Red Hat support and it has already been escalated to their kernel team, so at least it seems I have their attention :). I'll provide updates as they become available.

Perry


CONFIDENTIALITY NOTICE: This e-mail message, including any 
attachments, is for the sole use of the intended recipient(s) 
and may contain confidential and privileged information.  Any
unauthorized review, use, disclosure or distribution is 
prohibited. ZirMed, Inc. has strict policies regarding the 
content of e-mail communications, specifically Protected Health 
Information, any communications containing such material will 
be returned to the originating party with such advisement 
noted. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the 
original message.


Re: [U2] [UV] Large File Operations Kill Linux

2013-02-05 Thread Jeffrey Butera
Perry

I'm curious: how large is "large" for you?

Jeff Butera
--
A tree falls the way it leans.
Be careful which way you lean.
The Lorax



Re: [U2] [UV] Large File Operations Kill Linux

2013-02-05 Thread Perry Taylor
Here's the one I'm using for the test...

[root@qauv2 zmopsx]# ls -l /data/traxnl3/trax2011/ERA.DET
-rw-rw 1 perryt trax 123736145920 Feb  5 15:53 /data/traxnl3/trax2011/ERA.DET

So yeah.. they're pretty big.  (There are others even bigger)  
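A quick conversion of that byte count (plain arithmetic, nothing assumed):

```python
# Convert the byte count from the ls output above into binary gigabytes
size_bytes = 123736145920          # from: ls -l .../ERA.DET
size_gib = size_bytes / 2**30      # 1 GiB = 2**30 bytes
print(round(size_gib, 1))          # prints 115.2
```

So roughly 115 GiB for this one file alone.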

Perry


[U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Perry Taylor
Looking for some ideas on how to keep Linux from becoming largely unresponsive when creating large files.  What happens is that as the new file is being created, the I/O buffer cache quickly fills up with dirty buffers.  Until the kernel can flush these out to disk there are no available buffers for I/O operations from other processes.  The most troubling manifestation of this is that the transaction logging checkpoint daemon gets *way* behind, putting us at risk if we were to have a failure of some kind.
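The dirty-buffer buildup described here is governed by a pair of VM tunables that existed on 2.6.18-era kernels; a sketch of inspecting and lowering them (the values shown are illustrative, not recommendations, and writing them needs root):

```shell
# Show the current writeback thresholds, as percentages of RAM
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Illustrative lower values: start background flushing at 1% of RAM and
# throttle writers at 5%, so a single large file creation cannot fill
# the cache with dirty pages before the flusher keeps up
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=5
```

With 132GB of RAM, defaults of that era (around 10% and 40%) would let tens of gigabytes of dirty data accumulate before writers are throttled, which matches the symptom described.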

I have tried using ionice and renice to slow the file creation down as much as possible.  This helps a little, but it is still a big problem.  Any ideas how to get CREATE.FILE/RESIZE to play nice on Linux?
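The throttling attempt described above amounts to something like the following sketch; `<resize-command>` is a placeholder for whatever actually drives the CREATE.FILE/RESIZE, and note that ionice only has an effect under the CFQ I/O scheduler:

```shell
# Run the file build in the idle I/O class and at the lowest CPU priority
ionice -c3 nice -n 19 <resize-command>

# If the idle class starves completely, best-effort at lowest priority
# (class 2, level 7) is a gentler alternative
ionice -c2 -n7 <resize-command>
```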

Thanks.
Perry
Perry Taylor
Senior MV Architect
ZirMed
888 West Market Street, Suite 400
Louisville, KY 40202
www.zirmed.com





Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Symeon Breen
A few questions: what Linux version/distro are you on, what type of file system, and how much RAM do you have?



Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Perry Taylor
We're on RHEL5 (2.6.18-348.el5), ext3, and 132GB RAM.



Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Dan Fitzgerald

What's the value in /proc/sys/vm/swappiness?
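For anyone following along, that value is read straight from procfs (0 to 100 on kernels of this era, higher meaning the kernel more readily swaps out process pages to keep the page cache large):

```shell
# Print the current swappiness setting
cat /proc/sys/vm/swappiness
```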
 


Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Wols Lists
On 04/02/13 21:05, Dan Fitzgerald wrote:
 
 What's the value in /proc/sys/vm/swappiness?

How will that make any difference? 2.6.18-348 sounds like an ancient (in Linux terms) kernel. Are you on Red Hat support?

This is a problem with the Linux kernel that was addressed recently, IIRC. Large amounts of I/O from a single process can swamp the queue, and the latest kernels have it fixed.

If you've got RH support, see if you can find out if that's been
backported into your kernel.

Cheers,
Wol
  


Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Perry Taylor
70.

-Original Message-
From: u2-users-boun...@listserver.u2ug.org 
[mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Dan Fitzgerald
Sent: Monday, February 04, 2013 2:06 PM
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] [UV] Large File Operations Kill Linux


What's the value in /proc/sys/vm/swappiness?
 
 From: perry.tay...@zirmed.com
 To: u2-users@listserver.u2ug.org
 Date: Mon, 4 Feb 2013 20:53:13 +
 Subject: Re: [U2] [UV] Large File Operations Kill Linux
 
 We're on RHEL5 (2.6.18-348.el5), ext3 and 132GB ram.
 
 -Original Message-
 From: u2-users-boun...@listserver.u2ug.org 
 [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Symeon Breen
 Sent: Monday, February 04, 2013 9:23 AM
 To: 'U2 Users List'
 Subject: Re: [U2] [UV] Large File Operations Kill Linux
 
   A few questions: what Linux version/distro are you on, what type of
  file system is it, and how much RAM do you have?
 
 -Original Message-
 From: u2-users-boun...@listserver.u2ug.org
 [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Perry Taylor
 Sent: 04 February 2013 15:57
 To: U2-Users List
 Subject: [U2] [UV] Large File Operations Kill Linux
 
  Looking for some ideas on how to keep Linux from becoming largely
  unresponsive when creating large files.  What happens is that as the new
  file is being created, the I/O buffer cache quickly fills up with dirty
  buffers.  Until the kernel can flush these out to disk there are no
  available buffers for I/O operations from other processes.  The most
  troubling manifestation of this is that the transaction logging checkpoint
  daemon gets *way* behind, putting us at risk if we were to have a failure
  of some kind.
  
  I have tried using ionice and renice to slow the file creation down as
  much as possible.  This helps a little but it is still a big problem.  Any
  ideas how to get CREATE.FILE/RESIZE to play nice on Linux?
 
 Thanks.
 Perry
 Perry Taylor
 Senior MV Architect
 ZirMed
 888 West Market Street, Suite 400
 Louisville, KY 40202
  www.zirmed.com <http://www.zirmed.com/>
 
 
 
 CONFIDENTIALITY NOTICE: This e-mail message, including any attachments, is
 for the sole use of the intended recipient(s) and may contain confidential
 and privileged information.  Any unauthorized review, use, disclosure or
 distribution is prohibited. ZirMed, Inc. has strict policies regarding the
 content of e-mail communications, specifically Protected Health Information,
 any communications containing such material will be returned to the
 originating party with such advisement noted. If you are not the intended
 recipient, please contact the sender by reply e-mail and destroy all copies
 of the original message.
 
  


Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Perry Taylor
Yes we are on RH support.  I'll run it by them and see.

Thanks.

-Original Message-
From: u2-users-boun...@listserver.u2ug.org 
[mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Wols Lists
Sent: Monday, February 04, 2013 2:15 PM
To: u2-users@listserver.u2ug.org
Subject: Re: [U2] [UV] Large File Operations Kill Linux

On 04/02/13 21:05, Dan Fitzgerald wrote:
 
 What's the value in /proc/sys/vm/swappiness?

How will that make any difference? 2.6.18-348 SOUNDS like an ancient (in
linux terms) kernel. Are you on RedHat support?

This is a problem with the linux kernel that was addressed recently,
iirc. Large amounts of io from a single process can swamp the queue, and
the latest kernels have it fixed.

If you've got RH support, see if you can find out if that's been
backported into your kernel.

Cheers,
Wol
  
 From: perry.tay...@zirmed.com
 To: u2-users@listserver.u2ug.org
 Date: Mon, 4 Feb 2013 20:53:13 +
 Subject: Re: [U2] [UV] Large File Operations Kill Linux

 We're on RHEL5 (2.6.18-348.el5), ext3 and 132GB RAM.

 -Original Message-
 From: u2-users-boun...@listserver.u2ug.org 
 [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Symeon Breen
 Sent: Monday, February 04, 2013 9:23 AM
 To: 'U2 Users List'
 Subject: Re: [U2] [UV] Large File Operations Kill Linux

   A few questions: what Linux version/distro are you on, what type of
  file system is it, and how much RAM do you have?

 -Original Message-
 From: u2-users-boun...@listserver.u2ug.org
 [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Perry Taylor
 Sent: 04 February 2013 15:57
 To: U2-Users List
 Subject: [U2] [UV] Large File Operations Kill Linux

  Looking for some ideas on how to keep Linux from becoming largely
  unresponsive when creating large files.  What happens is that as the new
  file is being created, the I/O buffer cache quickly fills up with dirty
  buffers.  Until the kernel can flush these out to disk there are no
  available buffers for I/O operations from other processes.  The most
  troubling manifestation of this is that the transaction logging checkpoint
  daemon gets *way* behind, putting us at risk if we were to have a failure
  of some kind.
  
  I have tried using ionice and renice to slow the file creation down as
  much as possible.  This helps a little but it is still a big problem.  Any
  ideas how to get CREATE.FILE/RESIZE to play nice on Linux?

 Thanks.
 Perry
 Perry Taylor
 Senior MV Architect
 ZirMed
 888 West Market Street, Suite 400
 Louisville, KY 40202
  www.zirmed.com <http://www.zirmed.com/>




Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Wols Lists
On 04/02/13 21:34, Perry Taylor wrote:
 Yes we are on RH support.  I'll run it by them and see.

Again, this is from memory, but I think somebody noticed that copying a
single very large file brought a system to its knees until the copy
finished, and the whole thing spiralled from there. Probably about 6
months to a year ago.

Chances are I picked up the story from LWN.
 
 Thanks.

Cheers,
Wol


Re: [U2] [UV] Large File Operations Kill Linux

2013-02-04 Thread Dan Fitzgerald

Other users could have been hanging at malloc. With a swappiness of 100 (on 
some kernels), or anything other than 0 or 100 (on others; not sure which 
behavior you get on 2.6.18), pages wouldn't be getting freed up quickly 
enough during the creation/copying of a large file.
 
Another thing to look at (although I prefer the support route, since you have 
it) is /sys/kernel/mm/transparent_hugepage/defrag. Other people who have had 
this problem alleviated it by setting this to never.
 
Of course, others fixed it by updating the kernel. My aged eyes read what you 
have as 2.6.8.1...
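[Editor's note: a sketch of checking the knob mentioned above. Kernel 2.6.18 (RHEL 5) predates transparent hugepages, so the path only exists on newer kernels; some RHEL 6 releases expose it as /sys/kernel/mm/redhat_transparent_hugepage/defrag instead.]

```shell
#!/bin/sh
# Report the THP defrag setting if this kernel has one.
for thp in /sys/kernel/mm/transparent_hugepage/defrag \
           /sys/kernel/mm/redhat_transparent_hugepage/defrag; do
    if [ -r "$thp" ]; then
        echo "$thp: $(cat "$thp")"
        found=1
        break
    fi
done
found=${found:-0}
echo "thp_knob_found=${found}"

# To disable defrag stalls (root; a common mitigation, not a cure-all):
#   echo never > "$thp"
```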
 
 Date: Mon, 4 Feb 2013 21:15:25 +
 From: antli...@youngman.org.uk
 To: u2-users@listserver.u2ug.org
 Subject: Re: [U2] [UV] Large File Operations Kill Linux
 
 On 04/02/13 21:05, Dan Fitzgerald wrote:
  
  What's the value in /proc/sys/vm/swappiness?
 
 How will that make any difference? 2.6.18-348 SOUNDS like an ancient (in
 linux terms) kernel. Are you on RedHat support?
 
 This is a problem with the linux kernel that was addressed recently,
 iirc. Large amounts of io from a single process can swamp the queue, and
 the latest kernels have it fixed.
 
 If you've got RH support, see if you can find out if that's been
 backported into your kernel.
 
 Cheers,
 Wol
   
  From: perry.tay...@zirmed.com
  To: u2-users@listserver.u2ug.org
  Date: Mon, 4 Feb 2013 20:53:13 +
  Subject: Re: [U2] [UV] Large File Operations Kill Linux
 
  We're on RHEL5 (2.6.18-348.el5), ext3 and 132GB RAM.
 
  -Original Message-
  From: u2-users-boun...@listserver.u2ug.org 
  [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Symeon Breen
  Sent: Monday, February 04, 2013 9:23 AM
  To: 'U2 Users List'
  Subject: Re: [U2] [UV] Large File Operations Kill Linux
 
    A few questions: what Linux version/distro are you on, what type of
   file system is it, and how much RAM do you have?
 
  -Original Message-
  From: u2-users-boun...@listserver.u2ug.org
  [mailto:u2-users-boun...@listserver.u2ug.org] On Behalf Of Perry Taylor
  Sent: 04 February 2013 15:57
  To: U2-Users List
  Subject: [U2] [UV] Large File Operations Kill Linux
 
  Looking for some ideas on how to keep Linux from becoming largely
  unresponsive when creating large files.  What happens is that as the new
  file is being created, the I/O buffer cache quickly fills up with dirty
  buffers.  Until the kernel can flush these out to disk there are no
  available buffers for I/O operations from other processes.  The most
  troubling manifestation of this is that the transaction logging checkpoint
  daemon gets *way* behind, putting us at risk if we were to have a failure
  of some kind.
 
  I have tried using ionice and renice to slow the file creation down as
  much as possible.  This helps a little but it is still a big problem.  Any
  ideas how to get CREATE.FILE/RESIZE to play nice on Linux?
 
  Thanks.
  Perry
  Perry Taylor
  Senior MV Architect
  ZirMed
  888 West Market Street, Suite 400
  Louisville, KY 40202
   www.zirmed.com <http://www.zirmed.com/>
 
  