Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-11 Thread Gleb Kurtsou
On (10/02/2011 16:56), Bruce Cran wrote:
 On Wed, 19 Jan 2011 11:09:31 +0100
 Attila Nagy b...@fsn.hu wrote:
  I hope somebody can find the time to look into this, it's pretty
  annoying... 
 
 It's also listed as a bug on OpenSolaris:
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6804661

Could you try my patch that I mentioned above in the thread:
http://marc.info/?l=freebsd-fs&m=129735686129438&w=2

I've reproduced the test scenario from the OpenSolaris bug report and it
worked as expected for me.

System: amd64, 4GB RAM, ~5GB swap

/boot/loader.conf:
vm.kmem_size=6G
vfs.zfs.prefetch_disable=1
vfs.zfs.txg.timeout=5

# mount -t tmpfs -o size=$((5*1024*1024*1024)) none /mnt
# dd if=/dev/zero of=test bs=1m count=$((3*1024))
# dd if=test of=/dev/zero bs=1m
# dd if=test of=/dev/zero bs=1m
# dd if=test of=/dev/zero bs=1m

top statistics:
Mem: 429M Active, 272M Inact, 2889M Wired, 96K Cache, 1328K Buf, 196M Free
Swap: 5120M Total, 5120M Free
ZFS seems to consume most of RAM

# cp test /mnt

top statistics:
Mem: 2808M Active, 247M Inact, 623M Wired, 104M Cache, 1328K Buf, 5052K Free
Swap: 5120M Total, 619M Used, 4501M Free, 12% Inuse
ZFS cache has shrunk, swap usage has increased, and most of tmpfs remains in memory

# df -h /mnt
Filesystem    Size    Used   Avail Capacity  Mounted on
tmpfs         5.0G    3.0G    2.0G    60%    /mnt

Thanks,
Gleb.

 
 -- 
 Bruce Cran
 ___
 freebsd...@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-fs
 To unsubscribe, send any mail to freebsd-fs-unsubscr...@freebsd.org
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-10 Thread Gleb Kurtsou
On (07/02/2011 15:35), Ivan Voras wrote:
 On 7 February 2011 14:37, Gleb Kurtsou gleb.kurt...@gmail.com wrote:
 
  It's up to the user to mount tmpfs filesystems of a reasonable size to
  prevent resource exhaustion. Anyway, an enormously large tmpfs killing
  all your processes is not the way to go.
 
 Of course not, but as I see it (from admin perspective), tmpfs should
 behave as close to regular processes in consuming memory as possible
 (where possible; obviously it cannot be subject to the OOM killer :)
 ).
Could you test the patch? It sets the file system size to half of RAM by
default and makes tmpfs behave much like a regular process for the vm
subsystem. It no longer depends on inactive/wired memory stats, but
checks whether swap is nearly full. I've added a vfs.tmpfs.swap_reserved
sysctl to limit tmpfs growth.

In my tests the system didn't panic or invoke the OOM killer while
consuming all available RAM and swap. Unfortunately I wasn't able to
test it with ZFS; I'd appreciate it if you could run several tests to
see how ZFS and tmpfs behave in a low memory situation.

If it works as expected I'm going to implement a resize feature, update
the man page, and change mount option parsing to allow specifying the
size in human-readable form, e.g. size=1g.

Thanks,
Gleb.
commit 185bc042f0647a38b86aa78c5dda25a4bf0ea3dd
Author: Gleb Kurtsou gleb.kurt...@gmail.com
Date:   Thu Feb 10 18:38:44 2011 +0200

tmpfs: Change the way available memory is calculated

Try to allocate pages until filesystem size limit hit,
fail in low memory situation.

By default set filesystem size to half of available memory

Add vfs.tmpfs.swap_reserved sysctl; set default to 2048 pages (8m or 16m)

Check if free pages available before allocating new node

Reorganize limits and mount option parsing

diff --git a/sys/fs/tmpfs/tmpfs.h b/sys/fs/tmpfs/tmpfs.h
index b1c4249..07f521c 100644
--- a/sys/fs/tmpfs/tmpfs.h
+++ b/sys/fs/tmpfs/tmpfs.h
@@ -487,61 +487,30 @@ int   tmpfs_truncate(struct vnode *, off_t);
  * Memory management stuff.
  */
 
-/* Amount of memory pages to reserve for the system (e.g., to not use by
- * tmpfs).
- * XXX: Should this be tunable through sysctl, for instance? */
-#define TMPFS_PAGES_RESERVED (4 * 1024 * 1024 / PAGE_SIZE)
-
 /*
- * Returns information about the number of available memory pages,
- * including physical and virtual ones.
- *
- * Remember to remove TMPFS_PAGES_RESERVED from the returned value to avoid
- * excessive memory usage.
- *
+ * Number of reserved swap pages should not be lower than
+ * swap_pager_almost_full high water mark.
  */
+#define TMPFS_SWAP_MINRESERVED 1024
+
 static __inline size_t
-tmpfs_mem_info(void)
+tmpfs_pages_max(struct tmpfs_mount *tmp)
 {
-   size_t size;
-
-   size = swap_pager_avail + cnt.v_free_count + cnt.v_inactive_count;
-   size -= size > cnt.v_wire_count ? cnt.v_wire_count : size;
-   return size;
+   return (tmp->tm_pages_max);
 }
 
-/* Returns the maximum size allowed for a tmpfs file system.  This macro
- * must be used instead of directly retrieving the value from tm_pages_max.
- * The reason is that the size of a tmpfs file system is dynamic: it lets
- * the user store files as long as there is enough free memory (including
- * physical memory and swap space).  Therefore, the amount of memory to be
- * used is either the limit imposed by the user during mount time or the
- * amount of available memory, whichever is lower.  To avoid consuming all
- * the memory for a given mount point, the system will always reserve a
- * minimum of TMPFS_PAGES_RESERVED pages, which is also taken into account
- * by this macro (see above). */
 static __inline size_t
-TMPFS_PAGES_MAX(struct tmpfs_mount *tmp)
+tmpfs_pages_used(struct tmpfs_mount *tmp)
 {
-   size_t freepages;
-
-   freepages = tmpfs_mem_info();
-   freepages -= freepages < TMPFS_PAGES_RESERVED ?
-   freepages : TMPFS_PAGES_RESERVED;
+   const size_t node_size = sizeof(struct tmpfs_node) +
+   sizeof(struct tmpfs_dirent);
+   size_t meta_pages;
 
-   return MIN(tmp->tm_pages_max, freepages + tmp->tm_pages_used);
+   meta_pages = howmany((uintmax_t)tmp->tm_nodes_inuse * node_size,
+   PAGE_SIZE);
+   return (meta_pages + tmp->tm_pages_used);
 }
 
-/* Returns the available space for the given file system. */
-#define TMPFS_META_PAGES(tmp) (howmany((tmp)->tm_nodes_inuse * (sizeof(struct tmpfs_node) \
-   + sizeof(struct tmpfs_dirent)), PAGE_SIZE))
-#define TMPFS_FILE_PAGES(tmp) ((tmp)->tm_pages_used)
-
-#define TMPFS_PAGES_AVAIL(tmp) (TMPFS_PAGES_MAX(tmp) > \
-   TMPFS_META_PAGES(tmp)+TMPFS_FILE_PAGES(tmp)? \
-   TMPFS_PAGES_MAX(tmp) - TMPFS_META_PAGES(tmp) \
-   - TMPFS_FILE_PAGES(tmp):0)
-
 #endif
 
 /* - */
diff --git a/sys/fs/tmpfs/tmpfs_subr.c b/sys/fs/tmpfs/tmpfs_subr.c

Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-10 Thread Bruce Cran
On Wed, 19 Jan 2011 11:09:31 +0100
Attila Nagy b...@fsn.hu wrote:

 On 01/19/11 09:46, Jeremy Chadwick wrote:
  On Wed, Jan 19, 2011 at 09:37:35AM +0100, Attila Nagy wrote:
  I first noticed this problem on machines with more memory (32GB
  eg.), but now it happens on 4G machines too:
  tmpfs   0B  0B  0B
  100%/tmp
  FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan
  8 22:11:54 CET 2011
 
  Maybe it's related, that I use zfs on these machines...
 
  Sometimes it grows and shrinks, but generally there is no space
  even for a small file, or a socket to create.
  http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html
 
 Oh crap. :(
 
 I hope somebody can find the time to look into this, it's pretty
 annoying... 

It's also listed as a bug on OpenSolaris:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6804661

-- 
Bruce Cran


Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-10 Thread Attila Nagy

 On 02/10/2011 05:56 PM, Bruce Cran wrote:

On Wed, 19 Jan 2011 11:09:31 +0100
Attila Nagyb...@fsn.hu  wrote:


On 01/19/11 09:46, Jeremy Chadwick wrote:

On Wed, Jan 19, 2011 at 09:37:35AM +0100, Attila Nagy wrote:

I first noticed this problem on machines with more memory (32GB
eg.), but now it happens on 4G machines too:
tmpfs   0B  0B  0B
100%/tmp
FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan
8 22:11:54 CET 2011

Maybe it's related, that I use zfs on these machines...

Sometimes it grows and shrinks, but generally there is no space
even for a small file, or a socket to create.

http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html


Oh crap. :(

I hope somebody can find the time to look into this, it's pretty
annoying...

It's also listed as a bug on OpenSolaris:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6804661

ZFS is a great innovation, which forces sysadmins to learn kernel and VM 
internals. :-O



Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-07 Thread Gleb Kurtsou
On (19/01/2011 17:27), Ivan Voras wrote:
 On 19 January 2011 16:02, Kostik Belousov kostik...@gmail.com wrote:
 
  http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch
 
  I don't think this is a complete solution but it's a start. If you can,
  try it and see if it helps.
  This is not a start, and actually a step in the wrong direction.
  Tmpfs is wrong now, but the patch would make the wrongness even bigger.
 
  The issue is that the current tmpfs calculation should not depend on the
  length of the inactive queue or the amount of free pages. This data only
  measures the pressure on the pagedaemon, and has absolutely no relation
  to the amount of data that can be put into anonymous objects before the
  system runs out of swap.
 
  The vm_lowmem handler is invoked in two situations:
  - when KVA cannot satisfy a request for space allocation;
  - when the pagedaemon has to start a scan.
  Neither situation has any direct correlation with what tmpfs needs to
  check, which is "Is there enough swap to keep all my future anonymous
  memory requests?".
 
  It might be that the swap reservation numbers could be useful for tmpfs
  reporting. It also might be that tmpfs should reserve swap explicitly at
  start, instead of attempting to guess how much can be allocated at a
  random moment.
 
 Thank you for your explanation! I'm still not very familiar with VM
 and VFS. Could you also read my report at
 http://www.mail-archive.com/freebsd-current@freebsd.org/msg126491.html
 ? I'm curious about the fact that there is lots of 'free' memory here
 in the same situation.
 
 Do you think that there is something which can be done as a band-aid
 without a major modification to tmpfs?

It's up to the user to mount tmpfs filesystems of a reasonable size to
prevent resource exhaustion. Anyway, an enormously large tmpfs killing
all your processes is not the way to go.

Unless there are objections, I'm planning to do the following:

1. By default set tmpfs size to max(all swap/2, all memory/2) and print
a warning that the filesystem size should be specified manually.
max(swap/2, mem/2) is used as a band-aid for the case when no swap is set up.

3. Remove live filesystem size checks, i.e. do not depend on
free/inact memory.

2. Add support for resizing tmpfs on the fly:
mount -u -o size=newsize /tmpfs

Reserving swap for tmpfs might not be what the user expects: generally I
use tmpfs as a work dir for building ports, and it's unused most of the time.

Btw, what do Linux and OpenSolaris do when available mem/swap gets low
due to tmpfs, and how is the filesystem size determined at run time?

Thanks,
Gleb.


Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-07 Thread Ivan Voras
On 7 February 2011 14:37, Gleb Kurtsou gleb.kurt...@gmail.com wrote:

 It's up to the user to mount tmpfs filesystems of a reasonable size to
 prevent resource exhaustion. Anyway, an enormously large tmpfs killing
 all your processes is not the way to go.

Of course not, but as I see it (from admin perspective), tmpfs should
behave as close to regular processes in consuming memory as possible
(where possible; obviously it cannot be subject to the OOM killer :)
).

The problem described in this thread is that there is enough memory in
various lists and tmpfs still reports 0 bytes free. See my message:
the machine had more than 8 GB of "free" memory (reported by top)
and still 0 bytes free in tmpfs - and that's not counting inactive
and other forms of used memory which could be freed or swapped out
(and also not counting swap).

By "as close to regular processes in consuming memory" I mean that I
would expect tmpfs to allocate from the same total pool of memory as
processes and be subject to the same VM mechanisms, including swap.
If that is not possible, I would (again, as an admin) like to extend
the tmpfs(5) man page and other documentation with information about
what types of memory will and will not count as available to
tmpfs.

 Unless there are objections, I'm planning to do the following:

 1. By default set tmpfs size to max(all swap/2, all memory/2) and print
 warning that filesystem size should be specified manually.
 Max(swap/2,mem/2) is used as a band-aid for the case when no swap is setup.

You mean as a reservation, a maximum limit, or something else? If a tmpfs
with a size of e.g. 16 GB is configured, will the memory be
preallocated? Wired?

I don't think there should be default hard size limits on tmpfs - it
should be able to hold sudden bursts of large temp files (using swap
if needed). But that could be achieved by configuring a tmpfs whose
size is RAM+swap; if the memory is not preallocated it's not a big
problem.

 3. Remove live filesystem size checks, i.e. do not depend on
 free/inact memory.

I'm for it, if it's possible in light of #1

 2. Add support for resizing tmpfs on the fly:
        mount -u -o size=newsize /tmpfs

ditto.

 Reserving swap for tmpfs might not be what user expects: generally I use
 tmpfs for work dir for building ports, it's unused most of the time.

It looks like we think the opposite of it :) I would like it to be
swapped out if needed, making room for running processes etc. as
regular VM paging algorithms decide. Of course, if that could be
controlled with a flag we'd both be happy :)

 btw, what linux and opensolaris do when available mem/swap gets low due
 to tmpfs and how filesystem size determined at real-time?

There's some information here: http://en.wikipedia.org/wiki/Tmpfs


Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-07 Thread Gleb Kurtsou
On (07/02/2011 15:35), Ivan Voras wrote:
 On 7 February 2011 14:37, Gleb Kurtsou gleb.kurt...@gmail.com wrote:
 
  It's up to the user to mount tmpfs filesystems of a reasonable size to
  prevent resource exhaustion. Anyway, an enormously large tmpfs killing
  all your processes is not the way to go.
 
 Of course not, but as I see it (from admin perspective), tmpfs should
 behave as close to regular processes in consuming memory as possible
 (where possible; obviously it cannot be subject to the OOM killer :)
 ).
Here is the key difference: it's not subject to being killed by the OOM
killer, and thus can exhaust all resources for real. I propose to require
the user to specify an upper limit on the filesystem size.

 The problem described in this thread is that there is enough memory in
 various lists and tmpfs still reports 0 bytes free. See my message:
 the machine had more than 8 GB of free memory (reported by top)
 and still 0 bytes free in tmpfs - and that's not counting inactive
 and other forms of used memory which could be freed or swapped out
 (and also not counting swap).
That's because tmpfs incorrectly checks how much memory is available,
counting both swap and RAM. In the VM world that's not so easy.

 By as close to regular processes in consuming memory I mean that I
 would expect tmpfs to allocate from the same total pool of memory as
 processes and be subject to the same mechanisms of VM, including swap.
 If that is not possible, I would (again, as an admin) like to extend
 the tmpfs(5) man page and other documentation with information about
 what types of memory will and will not count towards available to
 tmpfs.
 
  Unless there are objections, I'm planning to do the following:
 
  1. By default set tmpfs size to max(all swap/2, all memory/2) and print
  warning that filesystem size should be specified manually.
  Max(swap/2,mem/2) is used as a band-aid for the case when no swap is setup.
 
 You mean as a reservation, maximum limit or something else? If a tmpfs
 with size of e.g. 16 GB is configured, will the memory be
 preallocated? wired?
Memory in tmpfs is allocated/freed as needed; there is no preallocated
or reserved memory/swap. It already behaves the way you've described.
I'm against preallocating or reserving memory.

There is ramfs in Linux that does preallocation, but it looks
deprecated.

 I don't think there should be default hard size limits to tmpfs - it
 should be able to hold sudden bursts of large temp files (using swap
 if needed), but that could be achieved by configuring a tmpfs whose
 size is RAM+swap if the memory is not preallocated so not a big
 problem.
But there is one in Linux (Documentation/filesystems/tmpfs.txt):

  size:  The limit of allocated bytes for this tmpfs instance. The
         default is half of your physical RAM without swap. If you
         oversize your tmpfs instances the machine will deadlock
         since the OOM handler will not be able to free that memory.

That's actually what I've proposed: size=mem/2 vs size=max(mem/2,swap/2)

The limit should be there so as not to panic the system.

  3. Remove live filesystem size checks, i.e. do not depend on
  free/inact memory.
 
 I'm for it, if it's possible in the light of #1
 
  2. Add support for resizing tmpfs on the fly:
         mount -u -o size=newsize /tmpfs
 
 ditto.
It's trivial: if it can be resized, change maxsize in the struct, fail
otherwise.

  Reserving swap for tmpfs might not be what user expects: generally I use
  tmpfs for work dir for building ports, it's unused most of the time.
 
 It looks like we think the opposite of it :) I would like it to be
 swapped out if needed, making room for running processes etc. as
 regular VM paging algorithms decide. Of course, if that could be
 controlled with a flag we'd both be happy :)
Perhaps there is a bit of misunderstanding: it will be swapped out and
will behave exactly as it does now, but it will have a sane default
filesystem size limit, and the semantics of calculating available
memory will change: I want it to try hard to allocate memory unless the
filesystem limit is hit, failing only if there is clearly a memory
shortage (as I said, it is up to the user to configure it properly).

  btw, what linux and opensolaris do when available mem/swap gets low due
  to tmpfs and how filesystem size determined at real-time?
 
 There's some information here: http://en.wikipedia.org/wiki/Tmpfs


Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-02-01 Thread Attila Nagy

On 01/30/11 12:09, Kostik Belousov wrote:

On Wed, Jan 19, 2011 at 05:27:38PM +0100, Ivan Voras wrote:

On 19 January 2011 16:02, Kostik Belousovkostik...@gmail.com  wrote:


http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch

I don't think this is a complete solution but it's a start. If you can,
try it and see if it helps.

This is not a start, and actually a step in the wrong direction.
Tmpfs is wrong now, but the patch would make the wrongness even bigger.

The issue is that the current tmpfs calculation should not depend on the
length of the inactive queue or the amount of free pages. This data only
measures the pressure on the pagedaemon, and has absolutely no relation
to the amount of data that can be put into anonymous objects before the
system runs out of swap.

The vm_lowmem handler is invoked in two situations:
- when KVA cannot satisfy a request for space allocation;
- when the pagedaemon has to start a scan.
Neither situation has any direct correlation with what tmpfs needs to
check, which is "Is there enough swap to keep all my future anonymous
memory requests?".

It might be that the swap reservation numbers could be useful for tmpfs
reporting. It also might be that tmpfs should reserve swap explicitly at
start, instead of attempting to guess how much can be allocated at a
random moment.

Thank you for your explanation! I'm still not very familiar with VM
and VFS. Could you also read my report at
http://www.mail-archive.com/freebsd-current@freebsd.org/msg126491.html
? I'm curious about the fact that there is lots of 'free' memory here
in the same situation.

This is another ugliness in the dynamic calculation. Your wired is around
15GB, which is always greater than available swap + free + inactive.
As a result, tmpfs_mem_info() always returns 0.
In this situation TMPFS_PAGES_MAX() seems to return a negative value, and
then TMPFS_PAGES_AVAIL() clamps it at 0.

Well, if nobody can take care of this now, could you please state this 
in the BUGS section of the tmpfs man page?


Thanks,


Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-01-30 Thread Kostik Belousov
On Wed, Jan 19, 2011 at 05:27:38PM +0100, Ivan Voras wrote:
 On 19 January 2011 16:02, Kostik Belousov kostik...@gmail.com wrote:
 
  http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch
 
  I don't think this is a complete solution but it's a start. If you can,
  try it and see if it helps.
  This is not a start, and actually a step in the wrong direction.
  Tmpfs is wrong now, but the patch would make the wrongness even bigger.
 
  The issue is that the current tmpfs calculation should not depend on the
  length of the inactive queue or the amount of free pages. This data only
  measures the pressure on the pagedaemon, and has absolutely no relation
  to the amount of data that can be put into anonymous objects before the
  system runs out of swap.
 
  The vm_lowmem handler is invoked in two situations:
  - when KVA cannot satisfy a request for space allocation;
  - when the pagedaemon has to start a scan.
  Neither situation has any direct correlation with what tmpfs needs to
  check, which is "Is there enough swap to keep all my future anonymous
  memory requests?".
 
  It might be that the swap reservation numbers could be useful for tmpfs
  reporting. It also might be that tmpfs should reserve swap explicitly at
  start, instead of attempting to guess how much can be allocated at a
  random moment.
 
 Thank you for your explanation! I'm still not very familiar with VM
 and VFS. Could you also read my report at
 http://www.mail-archive.com/freebsd-current@freebsd.org/msg126491.html
 ? I'm curious about the fact that there is lots of 'free' memory here
 in the same situation.
This is another ugliness in the dynamic calculation. Your wired is around
15GB, which is always greater than available swap + free + inactive.
As a result, tmpfs_mem_info() always returns 0.
In this situation TMPFS_PAGES_MAX() seems to return a negative value, and
then TMPFS_PAGES_AVAIL() clamps it at 0.

 
 Do you think that there is something which can be done as a band-aid
 without a major modification to tmpfs?




tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-01-19 Thread Attila Nagy

Hi,

I first noticed this problem on machines with more memory (e.g. 32GB),
but now it happens on 4G machines too:
tmpfs                  0B      0B      0B   100%    /tmp
FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan  8 
22:11:54 CET 2011


Maybe it's related, that I use zfs on these machines...

Sometimes it grows and shrinks, but generally there is no space to
create even a small file or a socket.



Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-01-19 Thread Jeremy Chadwick
On Wed, Jan 19, 2011 at 09:37:35AM +0100, Attila Nagy wrote:
 I first noticed this problem on machines with more memory (32GB
 eg.), but now it happens on 4G machines too:
 tmpfs   0B  0B  0B
 100%/tmp
 FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan  8
 22:11:54 CET 2011
 
 Maybe it's related, that I use zfs on these machines...
 
 Sometimes it grows and shrinks, but generally there is no space even
 for a small file, or a socket to create.

http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |



Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-01-19 Thread Attila Nagy

On 01/19/11 09:46, Jeremy Chadwick wrote:

On Wed, Jan 19, 2011 at 09:37:35AM +0100, Attila Nagy wrote:

I first noticed this problem on machines with more memory (32GB
eg.), but now it happens on 4G machines too:
tmpfs   0B  0B  0B
100%/tmp
FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan  8
22:11:54 CET 2011

Maybe it's related, that I use zfs on these machines...

Sometimes it grows and shrinks, but generally there is no space even
for a small file, or a socket to create.

http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html


Oh crap. :(

I hope somebody can find the time to look into this, it's pretty annoying...


Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-01-19 Thread Ivan Voras

On 19/01/2011 11:09, Attila Nagy wrote:

On 01/19/11 09:46, Jeremy Chadwick wrote:

On Wed, Jan 19, 2011 at 09:37:35AM +0100, Attila Nagy wrote:

I first noticed this problem on machines with more memory (32GB
eg.), but now it happens on 4G machines too:
tmpfs 0B 0B 0B
100% /tmp
FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan 8
22:11:54 CET 2011

Maybe it's related, that I use zfs on these machines...

Sometimes it grows and shrinks, but generally there is no space even
for a small file, or a socket to create.

http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html



Oh crap. :(

I hope somebody can find the time to look into this, it's pretty
annoying...


http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch

I don't think this is a complete solution but it's a start. If you can, 
try it and see if it helps.





Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-01-19 Thread Kostik Belousov
On Wed, Jan 19, 2011 at 11:39:41AM +0100, Ivan Voras wrote:
 On 19/01/2011 11:09, Attila Nagy wrote:
 On 01/19/11 09:46, Jeremy Chadwick wrote:
 On Wed, Jan 19, 2011 at 09:37:35AM +0100, Attila Nagy wrote:
 I first noticed this problem on machines with more memory (32GB
 eg.), but now it happens on 4G machines too:
 tmpfs 0B 0B 0B
 100% /tmp
 FreeBSD builder 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #0: Sat Jan 8
 22:11:54 CET 2011
 
 Maybe it's related, that I use zfs on these machines...
 
 Sometimes it grows and shrinks, but generally there is no space even
 for a small file, or a socket to create.
 http://lists.freebsd.org/pipermail/freebsd-stable/2011-January/060867.html
 
 
 Oh crap. :(
 
 I hope somebody can find the time to look into this, it's pretty
 annoying...
 
 http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch
 
 I don't think this is a complete solution but it's a start. If you can, 
 try it and see if it helps.
This is not a start, and actually a step in the wrong direction.
Tmpfs is wrong now, but the patch would make the wrongness even bigger.

The issue is that the current tmpfs calculation should not depend on the
length of the inactive queue or the amount of free pages. This data only
measures the pressure on the pagedaemon, and has absolutely no relation
to the amount of data that can be put into anonymous objects before the
system runs out of swap.

The vm_lowmem handler is invoked in two situations:
- when KVA cannot satisfy a request for space allocation;
- when the pagedaemon has to start a scan.
Neither situation has any direct correlation with what tmpfs needs to
check, which is "Is there enough swap to keep all my future anonymous
memory requests?".

It might be that the swap reservation numbers could be useful for tmpfs
reporting. It also might be that tmpfs should reserve swap explicitly at
start, instead of attempting to guess how much can be allocated at a
random moment.




Re: tmpfs is zero bytes (no free space), maybe a zfs bug?

2011-01-19 Thread Ivan Voras
On 19 January 2011 16:02, Kostik Belousov kostik...@gmail.com wrote:

 http://people.freebsd.org/~ivoras/diffs/tmpfs.h.patch

 I don't think this is a complete solution but it's a start. If you can,
 try it and see if it helps.
 This is not a start, and actually a step in the wrong direction.
 Tmpfs is wrong now, but the patch would make the wrongness even bigger.

 The issue is that the current tmpfs calculation should not depend on the
 length of the inactive queue or the amount of free pages. This data only
 measures the pressure on the pagedaemon, and has absolutely no relation
 to the amount of data that can be put into anonymous objects before the
 system runs out of swap.

 The vm_lowmem handler is invoked in two situations:
 - when KVA cannot satisfy a request for space allocation;
 - when the pagedaemon has to start a scan.
 Neither situation has any direct correlation with what tmpfs needs to
 check, which is "Is there enough swap to keep all my future anonymous
 memory requests?".

 It might be that the swap reservation numbers could be useful for tmpfs
 reporting. It also might be that tmpfs should reserve swap explicitly at
 start, instead of attempting to guess how much can be allocated at a
 random moment.

Thank you for your explanation! I'm still not very familiar with VM
and VFS. Could you also read my report at
http://www.mail-archive.com/freebsd-current@freebsd.org/msg126491.html
? I'm curious about the fact that there is lots of 'free' memory here
in the same situation.

Do you think that there is something which can be done as a band-aid
without a major modification to tmpfs?