On 2024/06/07 11:03, Jaco Kroon wrote (replying to Zdenek Kabelac, 2024/06/07 00:17):

I'm guessing your "simple way" is workable for the generic case as well;
towards that end, is a rel...
On 2024/06/05 10:59, Jaco Kroon wrote:

On 2024/06/04 18:07, Zdenek Kabelac wrote:
> Last but not least - disk scheduling policies also have an impact - i.e.
> to ensure better fairness - at the price of lower throughput...

We normally use mq...
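(A minimal sketch of inspecting and switching the I/O scheduler via sysfs;
the device name sda and the choice of bfq are illustrative assumptions,
not details taken from this thread:)

    # The active scheduler is shown in brackets,
    # e.g. "[mq-deadline] kyber bfq none".
    cat /sys/block/sda/queue/scheduler
    # Trade some throughput for fairness by switching to bfq.
    echo bfq > /sys/block/sda/queue/scheduler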
Roger Heflin wrote:

The SATA disks work OK if you use smartctl to set the SCTERC timeout as
low as possible (smartctl -l scterc,20,20 /dev/${drive}). I have a set of
commands that starts high and sets it lower, with the idea that each
manufacturer's disks will have a different minimum value, and I simply
want it...
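(Roger's actual command set isn't shown; below is a hypothetical sketch of
the "start high, then lower" idea, assuming smartctl returns nonzero when
a drive rejects a value. The device list and the candidate timeouts, in
tenths of a second, are illustrative:)

    # Walk each drive down through decreasing SCTERC timeouts and keep
    # the lowest value the drive accepts.
    for drive in sda sdb sdc; do
        for t in 100 70 50 20; do
            smartctl -l scterc,$t,$t /dev/$drive >/dev/null 2>&1 || break
        done
        smartctl -l scterc /dev/$drive   # show the setting that stuck
    done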
On 2024/06/04 15:46, Stuart D Gathman wrote (replying to Roger Heflin):

I don't see a use case for having large values. It seems to have no real
upside and several downsides. Get the buffer size small enough and you...
On 2024/06/04 15:30, Roger Heflin wrote:

My experience is that heavy disk-IO/batch-IO systems work better with
these values being smallish, i.e. both even under 10MB or so. About all
that having the number larger has done is trick IO benchmarks that don't
force a sync at the end, and/or appear to make large saves happen
faster...
On 2024/06/04 13:52, Jaco Kroon wrote (replying to Roger Heflin, below):

I have been using this for quite a while:

vm.dirty_background_bytes = 300...
vm.dirty_bytes = 50...
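(A sketch of pinning the writeback limits with the *_bytes knobs, using
illustrative numbers in the "smallish" range suggested above rather than
Jaco's actual settings; note that writing a *_bytes value makes the kernel
ignore the corresponding *_ratio knob:)

    # /etc/sysctl.d/99-writeback.conf -- illustrative values only
    vm.dirty_background_bytes = 4194304   # 4 MiB: start background writeback early
    vm.dirty_bytes = 8388608              # 8 MiB: hard-throttle writers past this

    # Apply without a reboot:
    sysctl -p /etc/sysctl.d/99-writeback.conf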
On 2024/06/04 12:48, Roger Heflin wrote:

Use the *_bytes values. If they are non-zero then they are used, and that
allows setting even below 1% (quite large on anything with a lot of RAM).

On Tue, Jun 4, 2024 at 4:06 AM Jaco Kroon wrote:
> Please refer below.
>
> >> This could potentially be due to extremely heavy disk IO, or LVM
> >> itself freezing IO.
> >
> > Well, reducing the percentage in '/proc/sys/vm/dirty_ratio' may
> > possibly help when your disk system...
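(To check which of the two families of knobs is currently in effect; a
ratio reads as 0 when its _bytes counterpart has been set, and vice versa:)

    grep . /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio \
           /proc/sys/vm/dirty_bytes /proc/sys/vm/dirty_background_bytes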
On 2024/06/03 14:56, Jaco Kroon wrote:

Thanks for the insight. Please refer below.

On 2024/05/31 14:34, Zdenek Kabelac wrote:
> I'm kind of missing here to see your 'deadlock' scenario from this
> description.

Well, stuff blocks, until...
On 2024/05/30 12:21, Jaco Kroon wrote:

Possible lvm2 command deadlock scenario:

crowsnest [12:15:47] /run/lvm # fuser //run/lock/lvm/*
/run/lock/lvm/P_global:  17231
/run/lock/lvm/V_lvm:     16087 17231
crowsnest [12:15:54] /run/lvm # ps axf | grep -E '16087|17231'
24437 pts/1 S+ 0:00 | \_ grep --colour=auto -E 16087|17...
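(A small sketch generalising the diagnostic above: list each LVM lock file
together with the state, wait channel, and command line of every PID
holding it. fuser prints PIDs on stdout and the path on stderr, hence the
redirect; lock paths are as used in this thread:)

    for lock in /run/lock/lvm/*; do
        echo "== $lock"
        for pid in $(fuser "$lock" 2>/dev/null); do
            ps -o pid,stat,wchan,args -p "$pid"
        done
    done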