Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-08 Thread Orvar Korvar
Ok, thanx for your input, guys. So Bvian's comment still is valid. I'll tell the Linux guys that OpenSolaris on 32-bit will fragment the memory to the point that you have to reboot once in a while. It shouldn't corrupt your data when it runs out of RAM. Vodevick. -- This message posted from

Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-08 Thread Joerg Schilling
James C. McPherson [EMAIL PROTECTED] wrote: On Sat, 06 Dec 2008 22:28:36 -0500 Joseph Zhou [EMAIL PROTECTED] wrote: Ian, Tim, again, thank you very much in answering my question. I am a bit disappointed that the whole discussion group does not have one person to stand up and say yeah,

Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-08 Thread Casper . Dik
Ok, thanx for your input, guys. So Bvian's comment still is valid. I tell the Linux guys that OpenSolaris on 32-bit will fragment the memory to the point that you have to reboot once in a while. It shouldn't corrupt your data when it runs out of RAM. I'm not sure that that is completely true;

Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-08 Thread Casper . Dik
In my experimentation (using my own buffer program), it's the receive side buffering you need. The size of the buffer needs to be large enough to hold 5 seconds worth of data. How much data/second you get will depend on which part of your system is the limiting factor. In my case, with 7200
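The receive-side buffering described above is typically wired up like this (a sketch only: the pool, snapshot, and host names are hypothetical, and the buffer sizes are illustrative, not tuned values from the thread):

```shell
# Buffer on the *receiving* side, as suggested above: mbuffer absorbs
# bursts arriving over the network while zfs receive drains them at its
# own pace. Pool/snapshot/host names below are hypothetical.
zfs send tank/data@today | \
    ssh backuphost 'mbuffer -s 128k -m 1G | zfs receive -F backup/data'
```

The `-m 1G` buffer is sized per the rule of thumb above: large enough to hold several seconds' worth of data at your link speed.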

Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-08 Thread Andrew Gabriel
[EMAIL PROTECTED] wrote: In my experimentation (using my own buffer program), it's the receive side buffering you need. The size of the buffer needs to be large enough to hold 5 seconds worth of data. How much data/second you get will depend on which part of your system is the limiting

Re: [zfs-discuss] How to use mbuffer with zfs send/recv

2008-12-08 Thread Joerg Schilling
[EMAIL PROTECTED] wrote: For ufs ufsdump | ufsrestore I have found that I prefer the buffer on the receive side, but it should be much bigger. ufsrestore starts with creating all directories and that is SLOW. This is why copying filesystems via star is much faster: - There is no pipe

Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-08 Thread Paul Kraus
On Mon, Dec 8, 2008 at 4:39 AM, [EMAIL PROTECTED] wrote: I'm not sure that that is completely true; I've run a small 32-bit file server and it ran for half a year or more (except when I wanted to upgrade) But that system had only 512 MB memory and I made the kernel's VA bigger than the 512

[zfs-discuss] Monitoring ZFS Statistic

2008-12-08 Thread Roman Ivanov
By combining two great tools, arcstat and dimstat, you can get ZFS statistics in: * table view * chart view * any date/time interval * host-to-host compare. For example, online table and chart view. Read more here: http://blogs.sun.com/pomah/entry/monitoring_zfs_statistic
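The counters that arcstat post-processes come from the kernel's kstat interface; a raw view on any Solaris/OpenSolaris host looks like this (the grep pattern is just an illustrative filter):

```shell
# Dump the raw ZFS ARC counters that arcstat summarizes, keeping only
# the size and hit/miss statistics.
kstat -m zfs -n arcstats | egrep 'size|hits|misses'
```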

Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-08 Thread Brian Hechinger
On Mon, Dec 08, 2008 at 10:39:45AM +0100, [EMAIL PROTECTED] wrote: Ok, thanx for your input, guys. So Bvian's comment still is valid. I tell the Linux guys that OpenSolaris on 32-bit will fragment the memory to the point that you have to reboot once in a while. It shouldn't corrupt your data

Re: [zfs-discuss] ZFS fragments 32 bits RAM? Problem?

2008-12-08 Thread Casper . Dik
But it is a problem when you have more memory than you can map (1 GB will probably still work, but 2 GB is too big). This here is my problem. I've got 4GB of RAM in the box. It's painful. ;) Have you changed kernelbase? Lowering it will help performance. Casper
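For reference, kernelbase on 32-bit Solaris x86 can be changed through eeprom(1M); the value below is purely illustrative, not a recommendation from the thread:

```shell
# Lower kernelbase so the 32-bit kernel gets a larger virtual address
# range (at the cost of user address space). 0x80000000 is an
# illustrative value; a reboot is required for it to take effect.
eeprom kernelbase=0x80000000
```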

Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-08 Thread Nicolas Williams
On Sun, Dec 07, 2008 at 03:20:01PM -0600, Brian Cameron wrote: Thanks for the information. Unfortunately, using chmod/chown does not seem a workable solution to me, unless I am missing something. Normally logindevperm(4) is used for managing the ownership and permissions of device files

Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-08 Thread Courtney Malone
Unfortunately, I've tried zpool attach -f and exporting and reimporting the pool, both with and without the disk present.

Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-08 Thread Bob Friesenhahn
On Mon, 8 Dec 2008, Joerg Schilling wrote: An OS that feels slower may actually be much faster, just because people have subjective impressions and because one OS may have been optimized to give the best subjective impression only. Microsoft Windows is surely faster than any Unix because

Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-08 Thread Casper . Dik
Microsoft Windows is surely faster than any Unix because it executes the foreground task (the program with the highlighted title bar) with far more priority than any other task. Microsoft Windows XP Home Edition is fastest since it maximally cranks up the priority on the foreground task

Re: [zfs-discuss] OpenSolaris vs Linux

2008-12-08 Thread Bob Friesenhahn
On Mon, 8 Dec 2008, [EMAIL PROTECTED] wrote: Solaris does the same thing. (The X server will run the foreground processes with a higher priority) Yes, it does, but I suspect not to the extreme degree as seen under Windows. The performance difference between the foreground and background

[zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-08 Thread Seymour Krebs
Please if anyone can help with this mess, I'd appreciate it.
~# beadm list
BE      Active  Mountpoint  Space   Policy  Created
b100    -       -           6.88G   static  2008-10-30 13:59
b101a   -       -

Re: [zfs-discuss] help please - The pool metadata is corrupted

2008-12-08 Thread Eric Schrock
Well it shows that you're not suffering from a known bug. The symptoms you were describing were the same as those seen when a device spontaneously shrinks within a raid-z vdev. But it looks like the sizes are the same (config asize = asize), so I'm at a loss. - Eric On Sun, Dec 07, 2008 at

Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-08 Thread Will Murnane
On Mon, Dec 8, 2008 at 13:03, Seymour Krebs [EMAIL PROTECTED] wrote: ~# zfs destroy -r rpool/ROOT/b99 cannot destroy 'rpool/ROOT/b99': filesystem has dependent clones Take a look at the output of zfs get origin for the other filesystems in the pool. One of them is a clone of rpool/ROOT/b99; to
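Will's procedure can be sketched as follows (the b99 dataset name is from the thread; the b101a clone name is a hypothetical stand-in for whatever `zfs get origin` actually reveals):

```shell
# Find which dataset was cloned from the BE you want to destroy.
zfs get -r -o name,value origin rpool | grep 'rpool/ROOT/b99'

# Promote that clone so it no longer depends on b99's snapshots
# (b101a is a hypothetical clone name).
zfs promote rpool/ROOT/b101a

# With the dependency reversed, the old BE can now be destroyed.
zfs destroy -r rpool/ROOT/b99
```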

Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-08 Thread Miles Nordin
jh == Johan Hartzenberg [EMAIL PROTECTED] writes: jh raid5 suffers from the write-hole problem. This is only when you use it without a battery.

Re: [zfs-discuss] SMART data

2008-12-08 Thread Joe S
Thanks for the recommendations. I should have mentioned that I already tried smartmontools. I've read that there are problems with smartmontools and Solaris. Sure enough, I get this error: # /usr/local/sbin/smartctl -a -d ata /dev/rdsk/c2t0d0 smartctl version 5.38 [i386-pc-solaris2.11]

Re: [zfs-discuss] Upgrading my ZFS server

2008-12-08 Thread Joe S
I did not use the Marvell nic. I use an Intel gigabit pci nic (e1000g0). On Sun, Dec 7, 2008 at 2:03 PM, SV [EMAIL PROTECTED] wrote: js.lists , or anyone else who is using a XFX MDA72P7509 Motherboard --- that onboard NIC is a Marvell? - Do you choose not to use it in favor of the Intel

Re: [zfs-discuss] SMART data

2008-12-08 Thread Rob Logan
The sata framework uses the sd driver, so it's: 4 % smartctl -d scsi -a /dev/rdsk/c4t2d0s0 smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen Home page is http://smartmontools.sourceforge.net/ Device: ATA WDC WD1001FALS-0 Version: 0K05 Serial number: Device type:

Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-08 Thread Brian Cameron
Nicholas: On Sun, Dec 07, 2008 at 03:20:01PM -0600, Brian Cameron wrote: Thanks for the information. Unfortunately, using chmod/chown does not seem a workable solution to me, unless I am missing something. Normally logindevperm(4) is used for managing the ownership and permissions of

Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-08 Thread Nicolas Williams
On Mon, Dec 08, 2008 at 02:22:01PM -0600, Brian Cameron wrote: That said, I don't see why di_devperm_login() couldn't stomp all over the ACL too. So you'll need to make sure that di_devperm_login() doesn't stomp over the ACL, which will probably mean running an ARC case and updating the

Re: [zfs-discuss] SMART data

2008-12-08 Thread Miles Nordin
rl == Rob Logan [EMAIL PROTECTED] writes: rl the sata framework uses the sd driver so its: yes but this is a really tiny and basically useless amount of output compared to what smartctl gives on Linux with SATA disks, where SATA disks also use the sd driver (the same driver Linux uses for

Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-08 Thread Courtney Malone
Unfortunately I get the same thing whether I use either 11342560969745958696 or 17096229131581286394: zpool replace data 11342560969745958696 c0t2d0 returns: cannot replace 11342560969745958696 with c0t2d0: cannot replace a replacing device

Re: [zfs-discuss] zfs is a co-dependent parent and won't let children leave home

2008-12-08 Thread Seymour Krebs
Will, thanks for the info on the 'zfs get origin' command. I had previously tried to promote the BEs but saw no effect. I can now see with 'origin' that there is a sequential promotion scheme and some of the BEs had to be promoted twice to free them from their life of servitude and disgrace.

Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-08 Thread Brian Cameron
Nicolas: I agree that the solution of GDM messing with ACL's is not an ideal solution. No matter how we resolve this problem, I think a scenario could be imagined where the audio would not be managed as expected. This is because if multiple users are competing for the same audio device,

Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-08 Thread Nicolas Williams
On Mon, Dec 08, 2008 at 03:27:49PM -0600, Brian Cameron wrote: Once VT is enabled in the Xserver and GDM, users can start multiple graphical logins with GDM. So, if a user logs into the first graphical Ah, right, I'd forgotten this. login, they get the audio device. Then you can use VT

Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-08 Thread Brian Cameron
Nicolas: On Mon, Dec 08, 2008 at 03:27:49PM -0600, Brian Cameron wrote: login, they get the audio device. Then you can use VT switching in GDM to start up a second graphical login. If this user needs text-to-speech, they are out of luck since they can't access the audio device from their

Re: [zfs-discuss] Problem with ZFS and ACL with GDM

2008-12-08 Thread Nicolas Williams
On Mon, Dec 08, 2008 at 04:46:37PM -0600, Brian Cameron wrote: Is there a shortcomming in VT here? I guess it depends on how you think VT should work. My understanding is that VT works on a first-come-first-serve basis, so the first user who calls logindevperm interfaces gets permission.

[zfs-discuss] zfs iscsi sustained write performance

2008-12-08 Thread milosz
Hi all, currently having trouble with sustained write performance with my setup... MS Server 2003/MS iSCSI initiator 2.08 w/ Intel e1000g NIC directly connected to snv_101 w/ Intel e1000g NIC. Basically, given enough time, the sustained write behavior is perfectly periodic. If I copy a large

Re: [zfs-discuss] zfs iscsi sustained write performance

2008-12-08 Thread Rob
(with iostat -xtc 1) it sure would be nice to know if actv > 0, so we would know if the LUN was busy because its queue is full or just slow (svc_t > 200). For tracking errors try `iostat -xcen 1` and `iostat -E`. Rob

Re: [zfs-discuss] zfs iscsi sustained write performance

2008-12-08 Thread Brent Jones
On Mon, Dec 8, 2008 at 3:09 PM, milosz [EMAIL PROTECTED] wrote: hi all, currently having trouble with sustained write performance with my setup... ms server 2003/ms iscsi initiator 2.08 w/intel e1000g nic directly connected to snv_101 w/ intel e1000g nic. basically, given enough time, the

Re: [zfs-discuss] zfs iscsi sustained write performance

2008-12-08 Thread milosz
Compression is off across the board. svc_t is only maxed during the periods of heavy write activity (2-3 seconds every 10 or so seconds)... otherwise disks are basically idling.

Re: [zfs-discuss] zfs iscsi sustained write performance

2008-12-08 Thread Bob Friesenhahn
On Mon, 8 Dec 2008, milosz wrote: compression is off across the board. svc_t is only maxed during the periods of heavy write activity (2-3 seconds every 10 or so seconds)... otherwise disks are basically idling. Check for some hardware anomaly which might impact disks 11, 12, and 13 but

Re: [zfs-discuss] zfs iscsi sustained write performance

2008-12-08 Thread milosz
My apologies... 11s, 12s, and 13s represent the number of seconds in a read/write period, not disks. So, 11 seconds into a period, %b suddenly jumps to 100 after having been 0 for the first 10.

[zfs-discuss] ZFS resize partitions

2008-12-08 Thread gsorin
Hello, I have the following issue: I'm running Solaris 10 in a VMware environment. I have a virtual HDD of 8 GB (for example). At some point I can increase the hard drive to 10 GB. How can I resize the ZFS pool to take advantage of the new available space? The same question applies for physical
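At the time of this thread the usual answer was to export and re-import the pool (or relabel the disk) so ZFS re-reads the device size; on later ZFS releases the autoexpand pool property automates this. A sketch with a hypothetical pool and device name:

```shell
# On newer ZFS releases: let the pool grow once the underlying
# virtual disk or LUN has been enlarged. Pool/device names are
# hypothetical.
zpool set autoexpand=on tank
zpool online -e tank c1t1d0   # expand this vdev to use the new capacity
zpool list tank               # SIZE should now reflect the larger device
```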

Re: [zfs-discuss] ZFS resize partitions

2008-12-08 Thread Romain Chatelain
Hello, ZFS filesystems != zpool; this is your mistake, I think. Have you tried to export and import your zpool? I've read something like that on the list, but not sure... -C -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of gsorin Sent: Tuesday 9

[zfs-discuss] zfs ioctls and cli concepts

2008-12-08 Thread shelly
I would like to know how to add an ioctl in zfs_ioctl.c, so I would be grateful if somebody explained how ZFS ioctls work.

Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-08 Thread Ross
This is only a guess, but have you tried # zpool replace data c0t2d0

Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-08 Thread Ross
And I'm also wondering if it might be worth trying a different disk. I wonder if it's struggling now because it's seeing the same disk it's already tried to use, or if the zeroing of the disk confused it. Do you have another drive of the same size you could try?

Re: [zfs-discuss] zpool cannot replace a replacing device

2008-12-08 Thread Courtney Malone
# zpool replace data c0t2d0 cannot replace c0t2d0 with c0t2d0: cannot replace a replacing device I don't have another drive of that size unfortunately, though since the device was zeroed there shouldn't be any pool config data on it
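One workaround sometimes suggested for a vdev stuck in the replacing state is to detach the unwanted member by its GUID before retrying; whether it applies to this pool is not certain. A sketch using the GUID quoted earlier in the thread:

```shell
# Inspect the stuck replacing vdev and note each member's GUID.
zpool status -v data

# Detach the stuck member by GUID, then retry the replacement.
zpool detach data 11342560969745958696
zpool replace data c0t2d0
```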

Re: [zfs-discuss] SMART data

2008-12-08 Thread Carsten Aulbert
Hi all, Miles Nordin wrote: rl == Rob Logan [EMAIL PROTECTED] writes: rl the sata framework uses the sd driver so its: yes but this is a really tiny and basically useless amount of output compared to what smartctl gives on Linux with SATA disks, where SATA disks also use the sd