Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa

Hi all,

I fully understand that, from a cost-effectiveness point of view,
developing Fishworks for a reduced set of hardware makes a lot of
sense.
However, I think that Sun/Oracle would increase their user base if they
made available a Fishworks framework certified only for a reduced set
of hardware, e.g.:


   * it needs Western Digital HDD firmware version x.y.z
   * it needs a SAS/SATA controller from a specific brand, model and
     firmware (e.g. LSI SAS1068E)
   * if SSDs are used, they need to be from vendor X with firmware Y
   * the system motherboard chipset needs to be from vendor X or Y and
     not from Z

In this possible landscape I'm pretty sure that a lot more customers
would pay for the Fishworks stack and support, given that not all
customers need, a.k.a. can afford, the Unified Storage platform from Sun.


Anyway... Fishworks is an awesome product! Congratulations on the
extremely good job.


Regards,
Bruno

Adam Leventhal wrote:
With that said, I'm concerned that there appears to be a fork between
the open source version of ZFS and the ZFS that is part of the
Sun/Oracle FishWorks 7nnn series appliances.  I understand (implicitly)
that Sun (/Oracle), as a commercial concern, is free to choose their
own priorities in terms of how they use their own IP (Intellectual
Property) - in this case, the source for the ZFS filesystem.


Hey Al,

I'm unaware of specific plans for management either at Sun or at 
Oracle, but from an engineering perspective suffice it to say that it 
is simpler and therefore more cost effective to develop for a single, 
unified code base, to amortize the cost of testing those 
modifications, and to leverage the enthusiastic ZFS community to 
assist with the development and testing of ZFS.


Again, this isn't official policy, just the simple facts on the ground 
from engineering.


I'm not sure what would lead you to believe that there is a fork
between the open source / OpenSolaris ZFS and what we have in Fishworks.
Indeed, we've made efforts to make sure there is a single ZFS for the
reason stated above. Any differences that exist are quickly migrated
to ON, as you can see from the consistent work of Eric Schrock.


Adam

--
Adam Leventhal, Fishworks  http://blogs.sun.com/ahl

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] root pool can not have multiple vdevs ?

2009-10-27 Thread Dennis Clarke

This seems like a bit of a restriction... is this intended?

# cat /etc/release
 Solaris Express Community Edition snv_125 SPARC
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
Assembled 05 October 2009

# uname -a
SunOS neptune 5.11 snv_125 sun4u sparc SUNW,Sun-Fire-880

# zpool status
  pool: neptune_rpool
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
neptune_rpool  ONLINE   0 0 0
  mirror-0ONLINE   0 0 0
c1t0d0s0  ONLINE   0 0 0
c1t3d0s0  ONLINE   0 0 0

errors: No known data errors

Now I want to add two more mirrors to that pool, because the V880 has
more drives to offer that are not used at the moment.

So I'd like to add in a mirror of c1t1d0 and c1t4d0 :

# zpool add -f neptune_rpool c1t1d0
cannot label 'c1t1d0': EFI labeled devices are not supported on root pools.

Okay... I can live with that.

# prtvtoc -h /dev/rdsk/c1t0d0s0 | fmthard -s - /dev/rdsk/c1t1d0s0
fmthard:  New volume table of contents now in place.
# prtvtoc -h /dev/rdsk/c1t0d0s0 | fmthard -s - /dev/rdsk/c1t4d0s0
fmthard:  New volume table of contents now in place.

# zpool add -f neptune_rpool c1t1d0s0
cannot add to 'neptune_rpool': root pool can not have multiple vdevs or
separate logs

So essentially there is no way to grow that zpool. Is this the case?

-- 
Dennis


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] root pool can not have multiple vdevs ?

2009-10-27 Thread Fajar A. Nugraha
On Tue, Oct 27, 2009 at 4:39 PM, Dennis Clarke dcla...@blastwave.org wrote:
 So essentially there is no way to grow that zpool. Is this the case?

There's the option of getting a bigger disk and doing a send/receive.
I'm guessing the restriction is necessary for simplicity's sake, to
allow bootloaders to work with a ZFS root.
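
For the archive, a minimal sketch of that migration on this SPARC box
(the target device, snapshot name, and boot dataset below are
hypothetical; the new disk needs an SMI label, and the OBP boot-device
must be updated afterwards):

# zpool create -f rpool2 c1t1d0s0
# zfs snapshot -r neptune_rpool@migrate
# zfs send -R neptune_rpool@migrate | zfs receive -Fd rpool2
# zpool set bootfs=rpool2/ROOT/snv_125 rpool2
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c1t1d0s0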

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] root pool can not have multiple vdevs ?

2009-10-27 Thread Casper . Dik

On Tue, Oct 27, 2009 at 4:39 PM, Dennis Clarke dcla...@blastwave.org wrote:
 So essentially there is no way to grow that zpool. Is this the case?

There's the option of getting a bigger disk and doing a send/receive.
I'm guessing the restriction is necessary for simplicity's sake, to
allow bootloaders to work with a ZFS root.


You can grow your slice (I have done that), but that does require free
space after your slice.
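
A sketch of that route, using the slice and pool names from the thread
(the slice is first enlarged in format(1M)'s partition menu; the
expansion commands assume a build recent enough to have them):

# format c1t0d0          (partition menu: grow s0 into the free space)
# zpool set autoexpand=on neptune_rpool
# zpool online -e neptune_rpool c1t0d0s0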

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Best practices for zfs create?

2009-10-27 Thread Orvar Korvar
When I create a ZFS filesystem there are lots of options. Which options
are recommended?

I use CIFS, so I set casesensitivity=mixed. But it turns out that (due
to a bug, fixed in b127) if I have non-UTF-8 characters in a file name,
I cannot see the file in listings. So I should use utf8only, which
rejects files with non-UTF-8 characters in their names. I didn't know
of this option. Which other options do you use?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practices for zfs create?

2009-10-27 Thread Darren J Moffat

Orvar Korvar wrote:

When I create a ZFS filesystem there are lots of options. Which options
are recommended?


That depends on what your needs are.

The first consideration should be what kind of data you are storing.

For a filesystem intended to be full of MP3 files it is wasteful of CPU 
and memory resources to enable compression.


On the other hand, if it is full of source code or ASCII text, enabling
compression could potentially improve performance - depending on the
read and write access patterns.
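
As a concrete illustration of choosing options up front, a sketch of a
CIFS-oriented filesystem (pool and dataset names are examples;
casesensitivity, utf8only, and normalization can only be set at
creation time):

# zfs create -o casesensitivity=mixed -o utf8only=on \
    -o normalization=formD -o compression=on tank/cifs_share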


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Cindy Swearingen

Jeremy,

I generally suspect device failures in a case like this. If possible,
review the contents of /var/adm/messages and the output of fmdump -eV
to see whether the pool hang could be attributed to failed or failing
devices.

Cindy



On 10/26/09 17:28, Jeremy Kitchen wrote:

Cindy Swearingen wrote:

Hi Jeremy,

Can you use the command below and send me the output, please?

Thanks,

Cindy

# mdb -k

::stacks -m zfs


ack!  it *just* fully died.  I've had our noc folks reset the machine
and I will get this info to you as soon as it happens again (I'm fairly
certain it will, if not on this specific machine, one of our other
machines!)

-Jeremy



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] resolve zfs properties default to actual value

2009-10-27 Thread F. Wessels
Hi,

how can I find out what the actual value is when the default applies to
a ZFS property?

# zfs get checksum mpool
NAME   PROPERTY  VALUE  SOURCE
mpool  checksum  on     default

(In this particular case I know what the value is: either fletcher2 or
fletcher4, depending on the build.)
But how can one find out, in general, what the actual value is, for any
property?

thank you,

Frederik
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resolve zfs properties default to actual value

2009-10-27 Thread Cindy Swearingen

Hi Frederik,

In most cases, you can use the zfs get syntax below, or you can use
zfs get all fs-name to review all current property settings.

The checksum property is a bit different in that you need to review the
checksum property description in the zfs(1M) man page to determine the
value behind the default checksum setting.

You can use other methods to determine the current checksum value, but
no easy way exists.

The property descriptions and their defaults are up-to-date in
table 6-1, here:

http://docs.sun.com/app/docs/doc/817-2271/gazss?a=view

Thanks,

Cindy

On 10/27/09 09:07, F. Wessels wrote:

Hi,

how can I find out what the actual value is when the default applies to
a ZFS property?

# zfs get checksum mpool
NAME   PROPERTY  VALUE  SOURCE
mpool  checksum  on     default

(In this particular case I know what the value is: either fletcher2 or
fletcher4, depending on the build.)
But how can one find out, in general, what the actual value is, for any
property?

thank you,

Frederik

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resolve zfs properties default to actual value

2009-10-27 Thread F. Wessels
Thank you for the reply.

I must admit that upon closer inspection a lot of properties indeed do
present the actual value.
For the checksum property I used zdb - | grep fletcher to determine
whether fletcher2 or fletcher4 was used for checksumming the
filesystem. Using the OS build number to determine whether this was a
fletcher2 filesystem isn't reliable.

Perhaps this is a candidate for an RFE:
change the value reported by zfs get checksum dataset from on to the
actual algorithm used, like fletcher2, fletcher4, or sha256.

And what about compression? Off is off. But what about on? Is that gzip
or lzjb? Same problem.
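
To illustrate the ambiguity (a sketch; historically compression=on
selects lzjb, but the reported value stays on, while explicit values
are reported as-is):

# zfs set compression=on mpool
# zfs get compression mpool
NAME   PROPERTY     VALUE  SOURCE
mpool  compression  on     local
# zfs set compression=gzip mpool
# zfs get compression mpool
NAME   PROPERTY     VALUE  SOURCE
mpool  compression  gzip   local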

Thanks,

Frederik
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Tim Cook
On Tue, Oct 27, 2009 at 2:35 AM, Bruno Sousa bso...@epinfante.com wrote:

  Hi all,

 I fully understand that, from a cost-effectiveness point of view,
 developing Fishworks for a reduced set of hardware makes a lot of sense.
 However, I think that Sun/Oracle would increase their user base if they
 made available a Fishworks framework certified only for a reduced set of
 hardware, e.g.:

    - it needs Western Digital HDD firmware version x.y.z
    - it needs a SAS/SATA controller from a specific brand, model and
      firmware (e.g. LSI SAS1068E)
    - if SSDs are used, they need to be from vendor X with firmware Y
    - the system motherboard chipset needs to be from vendor X or Y and
      not from Z

 In this possible landscape I'm pretty sure that a lot more customers
 would pay for the Fishworks stack and support, given that not all
 customers need, a.k.a. can afford, the Unified Storage platform from Sun.

 Anyway... Fishworks is an awesome product! Congratulations on the
 extremely good job.

 Regards,
 Bruno




You're making a very, very bad assumption: that the price of Fishworks
would be cheap for just the software.  Sun hardware does not cost that
much more than its competitors' when it comes down to it.  You should
expect the software to make up the difference in price if they were to
unbundle it.  Heck, I would expect it to be MORE if they're forced into
having to deal with third-party vendors that are pointing fingers at
software problems vs. hardware problems and wasting Sun support
engineers' valuable time.  I think you'd find yourself unpleasantly
surprised at the end price tag.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dumb idea?

2009-10-27 Thread Wing Choi


I remember BFS (BeOS) did something very similar: it had extended
metadata attributes akin to having a relational DB built in. Very
searchable; it also had tagging with callback notification on file
changes, so you don't have to waste cycles with periodic checking.
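
Solaris has a rough analog in per-file extended attributes, reachable
with runat(1); a sketch of BFS-style tagging (the paths and attribute
names are made up):

# runat /tank/movies/heat.avi "echo 'Al Pacino' > actor"
# runat /tank/movies/heat.avi cat actor
Al Pacino
# runat /tank/movies/heat.avi ls
actor

There is no callback notification here, though; a crawler would still
have to poll for changes.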



On 10/26/09 01:22, zfs-discuss-requ...@opensolaris.org wrote:

--

Message: 2
Date: Sun, 25 Oct 2009 19:48:55 +
From: j...@lentecs.com
To: Orvar Korvar knatte_fnatte_tja...@yahoo.com,zfs Discuss
zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Dumb idea?

This actually sounds a little like what MS is trying to accomplish in
Win7 with libraries.  They will act as standard folders if you treat
them as such, but they are really designed to group different pools of
files into one easy place.  You just have to configure it to pull from
local and remote sources.  I have heard it works well with Windows Home
Server and Win7 networks.


It's also similar to what Google and the like are doing with their web
crawlers.

But I think this is something better left to run on top of the file
system, rather than integrated into the file system.  A true database
and crawling bot would seem to be the better method of implementing
this.


--Original Message--
From: Orvar Korvar
Sender: zfs-discuss-boun...@opensolaris.org
To: zfs Discuss
Subject: [zfs-discuss] Dumb idea?
Sent: Oct 24, 2009 8:12 AM

Would this be possible to implement on top of ZFS? Maybe it is a dumb
idea, I don't know. What do you think, and how could this be improved?

Assume all files are put in the zpool, helter-skelter. And then you can
create arbitrary filters that show you the files you want to see.

As of now, you have files in one directory structure. This makes the
organization of the files hardcoded. You have /Movies/Action and that
is it. But if you had all movies in one large zpool, and if you could
programmatically define different structures that act as filters, you
could have different directory structures.

Programmatically defined directory structure 1, acting on the zpool:
/Movies/Action

Programmatically defined directory structure 2:
/Movies/Actors/AlPacino

etc.

Maybe this is what MS WinFS was about? Maybe tag the files? Maybe a
relational database on top of ZFS? Maybe no directories at all? I don't
know, just brainstorming. Is this a dumb idea? Or an old one?






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa

Hi,

I can agree that the software is what really adds the value, but in my
opinion, allowing a stack like Fishworks to run outside the Sun Unified
Storage line would lead to a lower price per unit (Fishworks license)
but maybe an increase in revenue. Why an increase in revenue? Well, I
assume that a lot of customers would buy Fishworks to put onto their
XYZ high-end servers.

I often find customers who say that it's far easier to convince the
Board of Directors to buy software rather than hardware or an
appliance. Maybe the first step could be the possibility of running
Fishworks on any Sun server, without the need to buy their Unified
Storage server. In this way, support for the software and the hardware
would come from the same vendor, avoiding the dance between multiple
vendors when it comes to fixing issues.

Anyway, only time/the market will tell what's the best approach.

Bruno


Tim Cook wrote:



On Tue, Oct 27, 2009 at 2:35 AM, Bruno Sousa bso...@epinfante.com 
mailto:bso...@epinfante.com wrote:


Hi all,

I fully understand that, from a cost-effectiveness point of view,
developing Fishworks for a reduced set of hardware makes a lot of
sense.
However, I think that Sun/Oracle would increase their user base if
they made available a Fishworks framework certified only for a
reduced set of hardware, e.g.:

    * it needs Western Digital HDD firmware version x.y.z
    * it needs a SAS/SATA controller from a specific brand, model
      and firmware (e.g. LSI SAS1068E)
    * if SSDs are used, they need to be from vendor X with firmware Y
    * the system motherboard chipset needs to be from vendor X or
      Y and not from Z

In this possible landscape I'm pretty sure that a lot more
customers would pay for the Fishworks stack and support, given that
not all customers need, a.k.a. can afford, the Unified Storage
platform from Sun.

Anyway... Fishworks is an awesome product! Congratulations on the
extremely good job.

Regards,
Bruno




You're making a very, very bad assumption: that the price of Fishworks
would be cheap for just the software.  Sun hardware does not cost
that much more than its competitors' when it comes down to it.  You
should expect the software to make up the difference in price if they
were to unbundle it.  Heck, I would expect it to be MORE if they're
forced into having to deal with third-party vendors that are pointing
fingers at software problems vs. hardware problems and wasting Sun
support engineers' valuable time.  I think you'd find yourself
unpleasantly surprised at the end price tag.


--Tim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Sniping a bad inode in zfs?

2009-10-27 Thread Dale Ghent

I have a single-fs, mirrored pool on my hands which recently went
through a bout of corruption. I've managed to clean up a good bit of
it, but it appears that I'm left with some directories which have bad
refcounts.

For example, I have what should be an empty directory, foo, which,
when you cd into it and ls -al it, shows an incorrect refcount for an
empty directory:

total 444
drwxr-xr-x   2 daleg  users    3 Aug 17 13:20 ./
drwx--x--x  64 daleg  users  117 Aug 17 13:20 ../

Thus, attempts to remove this directory via rmdir fail with
"directory not empty", and rm -rf gacks with "File exists".

I can touch a new file in this dir, with the refcount incrementing to
4, and removing it poses no problem either, with the refcount
decrementing back to 3. However, 3 is the wrong number. It should of
course be only 2 (. and ..).

Normally on UFS I would just take the 'nuke it from orbit' route and
use clri to wipe the directory's inode. However, clri doesn't appear to
be ZFS-aware (there's not even a ZFS analog of clri in
/usr/lib/fs/zfs), and I don't immediately see an option in zdb which
would help cure this.
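
zdb can at least dump the suspect directory's dnode for inspection,
even if it can't clear it (a sketch; the dataset name is an example,
and the object number comes from ls -di):

# ls -di /tank/fs/foo
      1234 /tank/fs/foo
# zdb -dddd tank/fs 1234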

Any suggestions would be appreciated.

/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sniping a bad inode in zfs?

2009-10-27 Thread Toby Thain


On 27-Oct-09, at 1:43 PM, Dale Ghent wrote:


I have a single-fs, mirrored pool on my hands which recently went
through a bout of corruption. I've managed to clean up a good bit of
it


How did this occur? Isn't a mirrored pool supposed to self-heal?

--Toby


but it appears that I'm left with some directories which have bad
refcounts.
...
/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sniping a bad inode in zfs?

2009-10-27 Thread Marion Hakanson
da...@elemental.org said:
 Normally on UFS I would just take the 'nuke it from orbit' route and use
 clri to wipe the directory's inode. However, clri doesn't appear to be
 ZFS-aware (there's not even a ZFS analog of clri in /usr/lib/fs/zfs),
 and I don't immediately see an option in zdb which would help cure this.

Well, it might make things worse, but have you tried /usr/sbin/unlink?
I'm on Solaris 10, so I don't know if that's still part of OpenSolaris.
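
For reference, a sketch of that route (unlink(1M) calls unlink(2)
directly, bypassing rmdir's empty-directory check; root only, it can
orphan the directory's object, and ZFS may simply refuse it with EPERM,
so a scrub afterwards is prudent; the path is an example):

# /usr/sbin/unlink /tank/fs/foo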

Regards,

Marion



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Richard Elling

On Oct 27, 2009, at 12:35 AM, Bruno Sousa wrote:


Hi all,

I fully understand that, from a cost-effectiveness point of view,
developing Fishworks for a reduced set of hardware makes a lot of
sense.
However, I think that Sun/Oracle would increase their user base if
they made available a Fishworks framework certified only for a reduced
set of hardware, e.g.:

• it needs Western Digital HDD firmware version x.y.z
• it needs a SAS/SATA controller from a specific brand, model and
  firmware (e.g. LSI SAS1068E)
• if SSDs are used, they need to be from vendor X with firmware Y
• the system motherboard chipset needs to be from vendor X or Y and
  not from Z


Do not underestimate the cost and complexity of maintaining
compatibility matrices (I call them sparse matrices for a reason :-)

In this possible landscape I'm pretty sure that a lot more customers
would pay for the Fishworks stack and support, given that not all
customers need, a.k.a. can afford, the Unified Storage platform from
Sun.


There are competitors delivered as software-only: NexentaStor seems to
be well designed, and EON is progressing nicely.
 -- richard



Anyway... Fishworks is an awesome product! Congratulations on the
extremely good job.


Regards,
Bruno

Adam Leventhal wrote:


With that said, I'm concerned that there appears to be a fork between
the open source version of ZFS and the ZFS that is part of the
Sun/Oracle FishWorks 7nnn series appliances.  I understand (implicitly)
that Sun (/Oracle), as a commercial concern, is free to choose their
own priorities in terms of how they use their own IP (Intellectual
Property) - in this case, the source for the ZFS filesystem.


Hey Al,

I'm unaware of specific plans for management either at Sun or at  
Oracle, but from an engineering perspective suffice it to say that  
it is simpler and therefore more cost effective to develop for a  
single, unified code base, to amortize the cost of testing those  
modifications, and to leverage the enthusiastic ZFS community to  
assist with the development and testing of ZFS.


Again, this isn't official policy, just the simple facts on the  
ground from engineering.


I'm not sure what would lead you to believe that there is a fork
between the open source / OpenSolaris ZFS and what we have in
Fishworks. Indeed, we've made efforts to make sure there is a single
ZFS for the reason stated above. Any differences that exist are quickly
migrated to ON, as you can see from the consistent work of Eric
Schrock.


Adam

--
Adam Leventhal, Fishworks  http://blogs.sun.com/ahl

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Jeremy Kitchen
Cindy Swearingen wrote:
 Jeremy,
 
 I generally suspect device failures in this case and if possible,
 review the contents of /var/adm/messages and fmdump -eV to see
 if the pool hang could be attributed to failed or failing devices.

perusing /var/adm/messages, I see:

Oct 22 05:06:11 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:11 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:11 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x0
Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x1
Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x0

lots of messages like that just prior to rsync warnings:

Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
rsync error: error in rsync protocol data stream (code 12) at io.c(453)
[receiver=2.6.9]
Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
rsync error: error in rsync protocol data stream (code 12) at io.c(453)
[receiver=2.6.9]
Oct 22 06:25:27 homiebackup10 rsyncd[776]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]

I think the rsync warnings are indicative of the pool being hung.  So
it would seem that the bus is freaking out, and then the pool dies, and
that's that?  The strange thing is that this machine is way underloaded
compared to another one we have (which has 5 shelves, so ~150TB of
storage attached), which hasn't really had any problems like this.  We
had issues with that one when rebuilding drives, but it's been pretty
stable since.

looking at fmdump -eV, I see lots and lots of these:

Oct 24 2009 05:02:54.098815545 ereport.io.scsi.cmd.disk.tran
nvlist version: 0
class = ereport.io.scsi.cmd.disk.tran
ena = 0x882108543f200401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
device-path = /p...@0,0/pci8086,4...@5/pci1000,3...@0/s...@30,0
(end detector)

driver-assessment = retry
op-code = 0x28
cdb = 0x28 0x0 0x51 0x9c 0xa5 0x80 0x0 0x0 0x80 0x0
pkt-reason = 0x4
pkt-state = 0x0
pkt-stats = 0x10
__ttl = 0x1
__tod = 0x4ae2ecee 0x5e3ce39



always with the same device name.  So, it would appear that the drive
at that location is probably broken, and ZFS just isn't detecting it
properly?

Also, I'm wondering if this is related to the thread just recently
titled [zfs-discuss] SNV_125 MPT warning in logfile, as we're using the
same controller that person mentions.

We're going to order some beefier controllers with the next shipment,
any suggestions on what to get?  If we find that the new controllers
work much better, we may even go as far as replacing the ones in the
existing machines (or at least any machines experiencing these issues).

We're not married to LSI, but we use LSI controllers in our webservers
for the most part and they're pretty solid there (though admittedly
those are hardware RAID, rather than JBOD).

Thanks so much for your help!

-Jeremy



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bryan Cantrill

I can agree that the software is what really adds the value, but in
my opinion, allowing a stack like Fishworks to run outside the Sun
Unified Storage line would lead to a lower price per unit (Fishworks
license) but maybe an increase in revenue.

I'm afraid I don't see that argument at all; I think that the economics
that you're advocating would be more than undermined by the necessarily
higher costs of validating and supporting a broader range of hardware and
firmware...

- Bryan

--
Bryan Cantrill, Sun Microsystems Fishworks.   http://blogs.sun.com/bmc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bob Friesenhahn

On Tue, 27 Oct 2009, Bruno Sousa wrote:

I can agree that the software is what really adds the value, but in
my opinion, allowing a stack like Fishworks to run outside the Sun
Unified Storage line would lead to a lower price per unit (Fishworks
license) but maybe an increase in revenue. Why an increase in
revenue? Well, I assume that a lot of customers would buy Fishworks
to put onto their XYZ high-end servers.


Fishworks products (products that the Fishworks team developed) are 
designed, tweaked, and tuned for particular hardware configurations. 
It is not like general purpose OpenSolaris where the end user gets to 
experiment with hardware configurations and tunings to get the best 
performance (but might not achieve it).


Fishworks engineers are even known to holler at the drives as part 
of the rigorous product testing.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Nils Goroll

Hi Adam,

thank you for your precise statement. Even if only from an engineering
standpoint, this is the kind of argument I was expecting (and hoping
for).


I'm not sure what would lead you to believe that there is a fork
between the open source / OpenSolaris ZFS and what we have in Fishworks.


I caught myself thinking along these lines a couple of weeks ago,
before the fix for
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6604403 got
integrated. I had thought they must already have that fix in Fishworks,
and I am glad to see that it's been put back into snv_125.


At any rate, I think that the main selling point for the 7xxx series is
really the add-on s/w, and I believe that making the core technology
openly available will strengthen the product rather than weaken it.


Thank you all for your great work,

Nils
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Dale Ghent

On Oct 27, 2009, at 2:00 PM, Bryan Cantrill wrote:




  I can agree that the software is what really adds the value, but in
  my opinion, allowing a stack like Fishworks to run outside the Sun
  Unified Storage line would lead to a lower price per unit (Fishworks
  license) but maybe an increase in revenue.


I'm afraid I don't see that argument at all; I think that the
economics that you're advocating would be more than undermined by the
necessarily higher costs of validating and supporting a broader range
of hardware and firmware...


(Just playing Devil's Advocate here)

There could be no economics at all: a basic warranty would be provided,
but running a standalone product is a wholly on-your-own proposition
once one ventures outside a very small hardware support matrix.


Perhaps Fishworks/AK could have an OpenSolaris edition: leave the bulk
of the actual hardware support up to a support infrastructure that's
already geared towards making wide ranges of hardware supportable.
OpenSolaris/Solaris, after all, does allow that.


Perhaps this could be a version of Fishworks that's not as integrated
as what you get on a SUS platform; if some of the Fishworks
functionality that depends on a precise hardware combo could be reduced
or generalized, perhaps it's worth consideration. Knowing the little I
do about what's going on under the hood of a SUS system, I wouldn't
expect the version of Fishworks used on the SUS systems to have 100%
parity with an unbundled Fishworks edition, but the core features, by
and large, would convey.


/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send... too slow?

2009-10-27 Thread David Dyer-Bennet

On Sun, October 25, 2009 03:45, Orvar Korvar wrote:

 It seems that zfs send takes quite some time: 300GB has taken 10 hours
 so far, and I have 3TB in total to back up. This means it will take 100
 hours. Is this normal? If I had 30TB to back up, it would take 1000
 hours, which is more than a month. Can I speed this up?

That seems pretty bad. I back up around 650GB to a USB 2.0 external
drive (not the world's fastest!) in about 7 hours, last I checked.

 Is rsync faster? As I have understood it, zfs send gives me an exact
 replica, whereas rsync doesn't necessarily do that; maybe the ACLs are
 not replicated, etc. Is this correct about rsync vs. zfs send?

rsync doesn't seem to cover the ACLs, last I looked closely.  Which is
why I converted my working rsync-based setup to a zfs send/receive
version that only supports full backups and isn't automated yet.
Grumble.  (The ACLs are key for in-kernel CIFS, which is what drove me
here.)
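
One rough way to tell whether the send stream itself (rather than the
receiving side or the transport) is the bottleneck is to time a send
into /dev/null and divide the snapshot's size by the elapsed time (a
sketch; the snapshot name is hypothetical):

# /usr/bin/time zfs send tank/data@backup > /dev/null
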
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Dale Ghent


On Oct 27, 2009, at 2:58 PM, Bryan Cantrill wrote:




I can agree that the software is what really adds the value, but in
my opinion, allowing a stack like Fishworks to run outside the Sun
Unified Storage line would lead to a lower price per unit (Fishworks
license) but maybe an increase in revenue.


I'm afraid I don't see that argument at all; I think that the
economics that you're advocating would be more than undermined by the
necessarily higher costs of validating and supporting a broader range
of hardware and firmware...


(Just playing Devil's Advocate here)

There could be no economics at all: a basic warranty would be provided,
but running a standalone product is a wholly on-your-own proposition
once one ventures outside a very small hardware support matrix.

Perhaps Fishworks/AK could have an OpenSolaris edition: leave the bulk
of the actual hardware support up to a support infrastructure that's
already geared towards making wide ranges of hardware supportable.
OpenSolaris/Solaris, after all, does allow that.

Perhaps this could be a version of Fishworks that's not as integrated
as what you get on a SUS platform; if some of the Fishworks
functionality that depends on a precise hardware combo could be
reduced or generalized, perhaps it's worth consideration. Knowing the
little I do about what's going on under the hood of a SUS system, I
wouldn't expect the version of Fishworks used on the SUS systems to
have 100% parity with an unbundled Fishworks edition, but the core
features, by and large, would convey.


Why would we do this?  I'm all for zero-cost endeavors, but this isn't
zero-cost -- and I'm having a hard time seeing the business case here,
especially when we have so many paying customers for whom the business
case for our time and energy is crystal clear...


Hey, I was just offering food for thought from the technical end :)

Of course, the cost in man-hours to attain a reasonable, unbundled
version would have to be justifiable. If that aspect isn't currently
justifiable, then that's as far as the conversation needs to go.
However, times change, and one day demand could very well justify the
business costs.


/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Jeremy Kitchen
Jeremy Kitchen wrote:
 Cindy Swearingen wrote:
 Jeremy,

 I generally suspect device failures in this case and if possible,
 review the contents of /var/adm/messages and fmdump -eV to see
 if the pool hang could be attributed to failed or failing devices.
 
 perusing /var/adm/messages, I see:
 
 Oct 22 05:06:11 homiebackup10 scsi: [ID 365881 kern.info]
 /p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
 Oct 22 05:06:11 homiebackup10   Log info 0x3108 received for target 5.
 Oct 22 05:06:11 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
 scsi_state=0x0
 Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
 /p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
 Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
 Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
 scsi_state=0x1
 Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
 /p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
 Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
 Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
 scsi_state=0x0
 
 lots of messages like that just prior to rsync warnings:
 
 Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
 rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
 Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
 rsync error: error in rsync protocol data stream (code 12) at io.c(453)
 [receiver=2.6.9]
 Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
 rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
 Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
 rsync error: error in rsync protocol data stream (code 12) at io.c(453)
 [receiver=2.6.9]
 Oct 22 06:25:27 homiebackup10 rsyncd[776]: [ID 702911 daemon.warning]
 rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
 
 I think the rsync warnings are indicative of the pool being hung.  So it
 would seem that the bus is freaking out and then the pool dies and
 that's that?  The strange thing is that this machine is way underloaded
 compared to another one we have (which has 5 shelves, so ~150TB of
 storage attached) which hasn't really had any problems like this.  We
 had issues with that one when rebuilding drives, but it's been pretty
 stable since.
 
 looking at fmdump -eV, I see lots and lots of these:
 
 Oct 24 2009 05:02:54.098815545 ereport.io.scsi.cmd.disk.tran
 nvlist version: 0
 class = ereport.io.scsi.cmd.disk.tran
 ena = 0x882108543f200401
 detector = (embedded nvlist)
 nvlist version: 0
 version = 0x0
 scheme = dev
 device-path = 
 /p...@0,0/pci8086,4...@5/pci1000,3...@0/s...@30,0
 (end detector)
 
 driver-assessment = retry
 op-code = 0x28
 cdb = 0x28 0x0 0x51 0x9c 0xa5 0x80 0x0 0x0 0x80 0x0
 pkt-reason = 0x4
 pkt-state = 0x0
 pkt-stats = 0x10
 __ttl = 0x1
 __tod = 0x4ae2ecee 0x5e3ce39

so doing some more reading here on the list and mucking about a bit
more, I've come across this in the fmdump log:

Oct 22 2009 05:03:56.687818542 ereport.fs.zfs.io
nvlist version: 0
class = ereport.fs.zfs.io
ena = 0x99eb889c6fe1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x90ed10dfd0191c3b
vdev = 0xf41193d6d1deedc2
(end detector)

pool = raid3155
pool_guid = 0x90ed10dfd0191c3b
pool_context = 0
pool_failmode = wait
vdev_guid = 0xf41193d6d1deedc2
vdev_type = disk
vdev_path = /dev/dsk/c6t5d0s0
vdev_devid = id1,s...@n5000c50010a7666b/a
parent_guid = 0xcbaa8ea60a3c133
parent_type = raidz
zio_err = 5
zio_offset = 0xab2901da00
zio_size = 0x200
zio_objset = 0x4b
zio_object = 0xa26ef4
zio_level = 0
zio_blkid = 0xf
__ttl = 0x1
__tod = 0x4ae04a2c 0x28ff472e


c6t5d0 is in the problem pool (raid3155) so I've gone ahead and offlined
the drive and will be replacing it shortly.  Hopefully that will take
care of the problem!

If this doesn't solve the problem, do you have any suggestions on what
more I can look at to try to figure out what's wrong?  Is there some
sort of setting I can set which will prevent the zpool from hanging up
the entire system in the event of a single drive failure like this?
It's really annoying to not be able to log into the machine (and having
to forcefully reboot the machine) when this happens.

Thanks again for your help!

-Jeremy



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bryan Cantrill

   I can agree that the software is what really adds the value, but in
   my opinion, allowing a stack like Fishworks to run outside the Sun
   Unified Storage line would lead to a lower price per unit (Fishworks
   license) but maybe an increase in revenue.

  I'm afraid I don't see that argument at all; I think that the
  economics that you're advocating would be more than undermined by the
  necessarily higher costs of validating and supporting a broader range
  of hardware and firmware...

  (Just playing Devil's Advocate here)

  There could be no economics at all: a basic warranty would be
  provided, but running a standalone product is a wholly on-your-own
  proposition once one ventures outside a very small hardware support
  matrix.

  Perhaps Fishworks/AK could have an OpenSolaris edition - leave the
  bulk of the actual hardware support up to a support infrastructure
  that's already geared towards making wide ranges of hardware
  supportable; OpenSolaris/Solaris, after all, does allow that.

  Perhaps this could be a version of Fishworks that's not as integrated
  as what you get on a SUS platform; if some of the Fishworks
  functionality that depends on a precise hardware combo could be
  reduced or generalized, perhaps it's worth consideration. Knowing the
  little I do about what's going on under the hood of a SUS system, I
  wouldn't expect the version of Fishworks used on the SUS systems to
  have 100% parity with an unbundled Fishworks edition - but the core
  features, by and large, would convey.

Why would we do this?  I'm all for zero-cost endeavors, but this isn't
zero-cost -- and I'm having a hard time seeing the business case here,
especially when we have so many paying customers for whom the business
case for our time and energy is crystal clear...

- Bryan

--
Bryan Cantrill, Sun Microsystems Fishworks.   http://blogs.sun.com/bmc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Cindy Swearingen

Hi Jeremy,

The ereport.io.scsi.cmd.disk.tran errors describe connection problems
to the /p...@0,0/pci8086,4...@5/pci1000,3...@0/s...@30,0 device. I
think the .tran suffix is for transient.

ZFS might be reporting problems with the device as well, but if the
zpool/zfs commands are hanging, then it might be difficult to get this
confirmation. The zpool status command will report device problems.

When a device in a pool fails, I/O to the pool is blocked, though reads
might be successful. See the failmode property description in
zpool(1M).

Is this pool redundant? If so, you can attempt to offline this
device until it is replaced. If you have another device available,
you might replace the suspect drive and see if that solves the
pool hang problem.

Cindy



On 10/27/09 12:04, Jeremy Kitchen wrote:

Cindy Swearingen wrote:

Jeremy,

I generally suspect device failures in this case and if possible,
review the contents of /var/adm/messages and fmdump -eV to see
if the pool hang could be attributed to failed or failing devices.


perusing /var/adm/messages, I see:

Oct 22 05:06:11 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:11 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:11 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x0
Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x1
Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x0

lots of messages like that just prior to rsync warnings:

Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
rsync error: error in rsync protocol data stream (code 12) at io.c(453)
[receiver=2.6.9]
Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
rsync error: error in rsync protocol data stream (code 12) at io.c(453)
[receiver=2.6.9]
Oct 22 06:25:27 homiebackup10 rsyncd[776]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]

I think the rsync warnings are indicative of the pool being hung.  So it
would seem that the bus is freaking out and then the pool dies and
that's that?  The strange thing is that this machine is way underloaded
compared to another one we have (which has 5 shelves, so ~150TB of
storage attached) which hasn't really had any problems like this.  We
had issues with that one when rebuilding drives, but it's been pretty
stable since.

looking at fmdump -eV, I see lots and lots of these:

Oct 24 2009 05:02:54.098815545 ereport.io.scsi.cmd.disk.tran
nvlist version: 0
class = ereport.io.scsi.cmd.disk.tran
ena = 0x882108543f200401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
device-path = /p...@0,0/pci8086,4...@5/pci1000,3...@0/s...@30,0
(end detector)

driver-assessment = retry
op-code = 0x28
cdb = 0x28 0x0 0x51 0x9c 0xa5 0x80 0x0 0x0 0x80 0x0
pkt-reason = 0x4
pkt-state = 0x0
pkt-stats = 0x10
__ttl = 0x1
__tod = 0x4ae2ecee 0x5e3ce39



always with the same device name.  So, it would appear that the drive at
 that location is probably broken, and zfs just isn't detecting it properly?

Also, I'm wondering if this is related to the thread just recently
titled [zfs-discuss] SNV_125 MPT warning in logfile, as we're using the
same controller that person mentions.

We're going to order some beefier controllers with the next shipment,
any suggestions on what to get?  If we find that the new controllers
work much better, we may even go as far as replacing the ones in the
existing machines (or at least any machines experiencing these issues).

We're not married to LSI, but we use LSI controllers in our webservers
for the most part and they're pretty solid there (though admittedly
those are hardware raid, rather than JBOD)

Thanks so much for your help!

-Jeremy


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Tim Cook
On Tue, Oct 27, 2009 at 2:13 PM, Dale Ghent da...@elemental.org wrote:


 On Oct 27, 2009, at 2:58 PM, Bryan Cantrill wrote:


  I can agree that the software is what really adds the value, but in
  my opinion, allowing a stack like Fishworks to run outside the Sun
  Unified Storage line would lead to a lower price per unit (Fishworks
  license) but maybe an increase in revenue.


 I'm afraid I don't see that argument at all; I think that the
 economics that you're advocating would be more than undermined by the
 necessarily higher costs of validating and supporting a broader range
 of hardware and firmware...


 (Just playing Devil's Advocate here)

 There could be no economics at all: a basic warranty would be provided,
 but running a standalone product is a wholly on-your-own proposition
 once one ventures outside a very small hardware support matrix.

 Perhaps Fishworks/AK could have an OpenSolaris edition: leave the bulk
 of the actual hardware support up to a support infrastructure that's
 already geared towards making wide ranges of hardware supportable -
 OpenSolaris/Solaris, after all, does allow that.

 Perhaps this could be a version of Fishworks that's not as integrated
 as what you get on a SUS platform; if some of the Fishworks
 functionality that depends on a precise hardware combo could be
 reduced or generalized, perhaps it's worth consideration. Knowing the
 little I do about what's going on under the hood of a SUS system, I
 wouldn't expect the version of Fishworks used on the SUS systems to
 have 100% parity with an unbundled Fishworks edition - but the core
 features, by and large, would convey.


 Why would we do this?  I'm all for zero-cost endeavors, but this isn't
 zero-cost -- and I'm having a hard time seeing the business case here,
 especially when we have so many paying customers for whom the business
 case for our time and energy is crystal clear...


 Hey, I was just offering food for thought from the technical end :)

 Of course the cost in man hours to attain a reasonable, unbundled version
 would have to be justifiable. If that aspect isn't currently justifiable,
 then that's as far as the conversation needs to go. However, times change
 and one day demand could very well justify the business costs.


 /dale




The problem is, most of the things that make Fishworks desirable are
the things that wouldn't work.  Want to light up a failed drive with an
LED?  Clustering?  Timeouts for failed hardware?

The fact of the matter is, people asking for this are people that
aren't willing to spend the money that Sun would be asking for anyway.
I mean, seriously, a 7110 is $10,000 LIST!  Assuming you absolutely
despise bartering on price, you can get the thing for 20% off just by
using Try and Buy.  If you're balking at that price, you wouldn't like
the price of the software.  No amount of "but you don't have to support
it" is going to change that.  I think you're failing to take into
consideration the PR suicide it would be for Sun to offer Fishworks on
any platform people want, offer support contracts (that's the ONLY way
this will make them money), and then turn around and tell people the
reason feature XYZ isn't working is that their hardware isn't
supported... oh, and that they have no plans to ever add support,
either.

I honestly can't believe this is even a discussion.  What next, are you
going to ask NetApp to support ONTAP on Dell systems, and EMC to support
Enginuity on HP blades?

Just because the underpinnings are based on an open source OS that supports
many platforms doesn't mean this customized build can or ever should.

And one last example: QLogic and Brocade FC switches run Linux.  I
wouldn't expect or ask them to make a version that I could run on a
desktop full of HBAs to act as my very own FC switch, even though it is
entirely possible for them to do so.

And just as a reminder: if you look back through the archives, I am FAR
from a Sun fanboy.  I just feel you guys aren't grounded in reality
when making these requests.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool getting in a stuck state?

2009-10-27 Thread Cindy Swearingen

Jeremy,

I can't comment on your hardware because I'm not familiar with it.

If you have a storage pool with ZFS redundancy and one device fails or
begins failing, the pool keeps going in a degraded mode but is
generally available.

You can try setting the failmode property to continue, which would
allow reads to continue in case of a device failure and might prevent
the pool from hanging.
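
A sketch of that setting (the pool name is taken from the fmdump output
earlier in the thread):

# zpool set failmode=continue raid3155
# zpool get failmode raid3155
NAME      PROPERTY  VALUE     SOURCE
raid3155  failmode  continue  local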

If offlining the disk or replacing the disk doesn't help, let us know.

Cindy

On 10/27/09 13:13, Jeremy Kitchen wrote:

Jeremy Kitchen wrote:

Cindy Swearingen wrote:

Jeremy,

I generally suspect device failures in this case and if possible,
review the contents of /var/adm/messages and fmdump -eV to see
if the pool hang could be attributed to failed or failing devices.

perusing /var/adm/messages, I see:

Oct 22 05:06:11 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:11 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:11 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x0
Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x1
Oct 22 05:06:19 homiebackup10 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,4...@1/pci1000,3...@0 (mpt1):
Oct 22 05:06:19 homiebackup10   Log info 0x3108 received for target 5.
Oct 22 05:06:19 homiebackup10   scsi_status=0x0, ioc_status=0x804b,
scsi_state=0x0

lots of messages like that just prior to rsync warnings:

Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
Oct 22 05:55:29 homiebackup10 rsyncd[29746]: [ID 702911 daemon.warning]
rsync error: error in rsync protocol data stream (code 12) at io.c(453)
[receiver=2.6.9]
Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
Oct 22 06:10:29 homiebackup10 rsyncd[178]: [ID 702911 daemon.warning]
rsync error: error in rsync protocol data stream (code 12) at io.c(453)
[receiver=2.6.9]
Oct 22 06:25:27 homiebackup10 rsyncd[776]: [ID 702911 daemon.warning]
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]

I think the rsync warnings are indicative of the pool being hung.  So it
would seem that the bus is freaking out and then the pool dies and
that's that?  The strange thing is that this machine is way underloaded
compared to another one we have (which has 5 shelves, so ~150TB of
storage attached) which hasn't really had any problems like this.  We
had issues with that one when rebuilding drives, but it's been pretty
stable since.

looking at fmdump -eV, I see lots and lots of these:

Oct 24 2009 05:02:54.098815545 ereport.io.scsi.cmd.disk.tran
nvlist version: 0
class = ereport.io.scsi.cmd.disk.tran
ena = 0x882108543f200401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
device-path = /p...@0,0/pci8086,4...@5/pci1000,3...@0/s...@30,0
(end detector)

driver-assessment = retry
op-code = 0x28
cdb = 0x28 0x0 0x51 0x9c 0xa5 0x80 0x0 0x0 0x80 0x0
pkt-reason = 0x4
pkt-state = 0x0
pkt-stats = 0x10
__ttl = 0x1
__tod = 0x4ae2ecee 0x5e3ce39


so doing some more reading here on the list and mucking about a bit
more, I've come across this in the fmdump log:

Oct 22 2009 05:03:56.687818542 ereport.fs.zfs.io
nvlist version: 0
class = ereport.fs.zfs.io
ena = 0x99eb889c6fe1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x90ed10dfd0191c3b
vdev = 0xf41193d6d1deedc2
(end detector)

pool = raid3155
pool_guid = 0x90ed10dfd0191c3b
pool_context = 0
pool_failmode = wait
vdev_guid = 0xf41193d6d1deedc2
vdev_type = disk
vdev_path = /dev/dsk/c6t5d0s0
vdev_devid = id1,s...@n5000c50010a7666b/a
parent_guid = 0xcbaa8ea60a3c133
parent_type = raidz
zio_err = 5
zio_offset = 0xab2901da00
zio_size = 0x200
zio_objset = 0x4b
zio_object = 0xa26ef4
zio_level = 0
zio_blkid = 0xf
__ttl = 0x1
__tod = 0x4ae04a2c 0x28ff472e


c6t5d0 is in the problem pool (raid3155) so I've gone ahead and offlined
the drive and will be replacing it shortly.  Hopefully that will take
care of the problem!

If this doesn't solve the problem, do you have any suggestions on what
more I can look at to try to figure out what's wrong?  Is there some
sort of setting I can set which will prevent the zpool from hanging up
the entire system in the 

Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Rob Logan


 are you going to ask NetApp to support ONTAP on Dell systems,

well, ONTAP 5.0 is built on FreeBSD, so it wouldn't be too hard to boot
on Dell hardware. Hey, at least it can do aggregates larger than 16T
now...
http://www.netapp.com/us/library/technical-reports/tr-3786.html

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] gigabyte iram

2009-10-27 Thread Miles Nordin
 ma == Matthias Appel matthias.ap...@lanlabor.com writes:

ma At the moment I'm considering using a Gigabyte iRAM as ZIL

acard ans-9010 backs up to CF on power loss.  also has a sneaky ``ECC
emulation'' mode that rearranges non-ECC memory into blocks and then
dedicates some bits to ECC, so you don't have to pay unfair prices for
the 9th bit out of every eight.  See the ``quick install'' pdf on
acard's site which is written as a FAQ.

it is a little goofy because it goes into this sort of ``crisis mode''
when you yank the power, where it starts using the battery and writing
to the CF, which it doesn't do under normal operation.  This exposes
it to UPS-like silly problems if the battery is bad, or the CF is bad,
or the CF isn't shoved in all the way, whatever, while something like
stec that uses the dram and commits to flash continuously can report
its failure before you have to count on it, which is particularly
well-adapted to something like a slog that's not even read back unless
there's a power failure.  but...the acard thing gives stec performance
for intel prices!  let us know how your .tw ramdisk scheme goes.

ma (speed up sync. IO especially for NFS)?

IIUC it will never give you more improvement than disabling the ZIL,
so you should try disabling the ZIL and running tests to get an upper
bound on the improvement you can expect.
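
a minimal sketch of that test on this era of bits, assuming a pool named
tank and a spare device c3t0d0 for the slog; zil_disable is a global,
unsupported tunable, so only try it on a test box.  add to /etc/system
and reboot:

set zfs:zil_disable = 1

run the NFS workload to get the upper bound, then remove the line,
reboot, and if the gain justifies it try the ramdisk as a dedicated log
device:

# zpool add tank log c3t0d0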


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Trevor Pretty

Bruno Sousa wrote:

Hi,

I can agree that the software is the one that really has the added
value, but in my opinion allowing a stack like Fishworks to run outside
the Sun Unified Storage would lead to a lower price per unit (Fishworks
license) but maybe increase revenue. Why an increase in revenues? Well,
I assume that a lot of customers would buy Fishworks to put into their
XYZ high-end servers.

But in Bryan's blog (http://blogs.sun.com/bmc/date/200811):

"but one that also embedded an apt acronym: "FISH", Mike explained,
stood for "fully-integrated software and hardware" -- which is exactly
what we wanted to go build. I agreed that it captured us perfectly --
and Fishworks was born."

Bruno, I agree it would be great to have this sort of BUI on
OpenSolaris. For example, it makes CIFS integration in an AD/Windows
shop a breeze (even I got it to work in a couple of minutes), but this
would not be FISH.

What the Fishworks team have shown is that Sun can make an admin GUI
that is easy to use if they have a goal. Perhaps Oracle will help, but
I see more lost sales of Solaris due to it being "difficult to manage"
than for any other reason. We may all dislike MS Windows, but you can't
say it's not easy to use. Compare its RBAC implementation with
Solaris's: one is a straightforward tick-box GUI (admittedly not very
extensible as far as I can see), the other a complete nightmare of
files that need editing with vi! Guess which one is used the most?

OpenSolaris is getting there, but 99% of Sun's customers never see
it, as they are on Solaris 10. I recently bought a laptop just to run
OpenSolaris and most things "just work"; it's my preferred desktop at
home, but it still only does the simple stuff that Mac and Windows have
done for years. Using any of the advanced features, however, requires a
degree in Systems Engineering.

Ever wondered what makes Apple so successful? Apple makes FISH.








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-27 Thread Miles Nordin
 jc == Jake Caferilla j...@tanooshka.com writes:

jc But remember, this example system, has massive parallel
jc scalability.

jc I issue 2 read requests, both read requests return after 1
jc minute.

yeah, it's interesting to consider overall software stacks that might
have unnecessary serialization points.

For example, Postfix will do one or two fsync()'s per mail it receives
from the Internet into its internal queue, but I think it can be
working on receiving many mails at once, all waiting on fsync's
together.  If the filesystem stack pushes these fsync's all the way
down to the disk and serializes them, you'll only end up with a few
parallel I/O transactions open, because almost everything postfix
wants to do is ``write, sync, let me know when done.''

OTOH, if you ran three instances of Postfix on the same single box
with three separate queue directories on three separate
filesystems/zpools, inside zones for example, and split the mail load
among them using MX records or a load balancer, you might through this
ridiculous arrangement increase the parallelism of I/O reaching the
disk even on a system that serializes fsync's.

but if ZFS is smart enough to block several threads on fsync at once,
batch up their work to a single ZIL write-and-sync, then the 
three-instance scheme will have no benefit.

anyway, though, I do agree it makes sense to quote ``latency'' as the
write-and-sync latency, equivalent to a hard disk with the write cache
off, and if you want to tell a story about how much further you can
stretch performance through parallelism, quote ``sustained write
io/s''.  I don't think it's as difficult to quantify write performance
as you make out, and obviously hard disk vendors are already
specifying in this honest way otherwise their write latency numbers
would be stupidly low, but it sounds like SSD vendors are sometimes
waving their hands around ``oh it's all chips anyway we didn't know
what you MEANT it's AMBIGUOUS is that a rabbit over there?'' giving
dishonest latency numbers.  Latency numbers for writes that will not
survive power loss are *MEANINGLESS*.  period.  And that is worth
complaining about.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-27 Thread Andrew Gabriel

Miles Nordin wrote:

For example, Postfix will do one or two fsync()'s per mail it receives
from the Internet into its internal queue, but I think it can be
working on receiving many mails at once, all waiting on fsync's
together.  If the filesystem stack pushes these fsync's all the way
down to the disk and serializes them, you'll only end up with a few
parallel I/O transactions open, because almost everything postfix
wants to do is ``write, sync, let me know when done.''

OTOH, if you ran three instances of Postfix on the same single box
with three separate queue directories on three separate
filesystems/zpools, inside zones for example, and split the mail load
among them using MX records or a load balancer, you might through this
ridiculous arrangement increase the parallelism of I/O reaching the
disk even on a system that serializes fsync's.

but if ZFS is smart enough to block several threads on fsync at once,
batch up their work to a single ZIL write-and-sync, then the 
three-instance scheme will have no benefit.
  


ZFS does exactly this.
I demonstrate it on the SSD Discovery Days I run periodically in the UK.
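
One rough way to watch the batching for yourself (a sketch: fsync(3C)
enters the kernel via the fdsync syscall, and zil_lwb_write_start is the
log-block write path in current ON bits, though fbt probe names can
change between builds):

# dtrace -n '
    syscall::fdsync:entry { @["fsync calls"] = count(); }
    fbt:zfs:zil_lwb_write_start:entry { @["ZIL log-block writes"] = count(); }
    tick-10s { exit(0); }'

Under a parallel fsync load there should be noticeably fewer log-block
writes than fsync calls.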

--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread Paul Lyons
I know this is opensolaris and Solaris, but I'm stuck...

I want to demonstrate to my client how to recover an unbootable system from a 
zfs snapshot. (Say some dope rm -rf /kernel/drv...) Running Solaris 10 U8 sparc.

Normal procedures are boot cdrom -s (or boot net -s) 
zpool import rpool
zfs rollback snapshot
reboot and all is well

I've done this before with earlier rev's of Sol 10.

When I boot off Solaris 10 U8 I get the error that pool is formatted using an 
incompatible version. Status show the pool as Newer Version

I know Update 8 has version 15, but it looks like the miniroot from the 
install media is only version 10.

This is not good.

Any advice? I am already thinking about installing U7 on my test box to 
demonstrate. Glad I haven't rolled out u8 into production.

Thanks,

Paul
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa


Trevor,

Could not agree more, but not every customer likes having only a fancy
GUI, even if that GUI is very well designed.

However, my point of view is based on the idea that the software behind
Fishworks could be made installable on other Sun servers besides the
7xxx series.

Regarding Apple... well, they have marketing gurus.

Bruno


On Wed, 28 Oct 2009 09:47:31 +1300, Trevor Pretty wrote:

 [...]

-- 
Bruno Sousa

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread Tim Cook
On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons paulrly...@gmail.com wrote:

 I know this is opensolaris and Solaris, but I'm stuck...

 I want to demonstrate to my client how to recover an unbootable system from
 a zfs snapshot. (Say some dope rm -rf /kernel/drv...) Running Solaris 10 U8
 sparc.

 Normal procedures are boot cdrom -s (or boot net -s)
 zpool import rpool
 zfs rollback snapshot
 reboot and all is well

 I've done this before with earlier rev's of Sol 10.

 When I boot off Solaris 10 U8 I get the error that pool is formatted using
 an incompatible version. Status show the pool as Newer Version

 I know Update 8 has version 15, but it looks like the miniroot from the
 install media is only version 10.

 This is not good.

 Any advice? I am already thinking about installing U7 on my test box to
 demonstrate. Glad I haven't rolled out u8 into production.

 Thanks,

 Paul




Not to be a jerk, but is there a question in there?  The system told you
exactly what is wrong, and you seem to already know.  You're booting from an
old CD that has an old version of ZFS.  Grab a new ISO.

How would you expect a system that shipped with version 10 of ZFS to know
what to do with version 15?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa
I'm just curious to see how much effort it would take to get the FISH
software running on a Sun X4275...
Anyway... let's wait and see.

Bruno

On Tue, 27 Oct 2009 13:29:24 -0500 (CDT), Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
 On Tue, 27 Oct 2009, Bruno Sousa wrote:
 
 I can agree that the software is the one that really has the added 
 value, but in my opinion allowing a stack like Fishworks to run 
 outside the Sun Unified Storage would lead to a lower price per 
 unit (Fishworks license) but maybe increase revenue. Why an increase 
 in revenues? Well, I assume that a lot of customers would buy 
 Fishworks to put into their XYZ high-end servers.
 
 Fishworks products (products that the Fishworks team developed) are 
 designed, tweaked, and tuned for particular hardware configurations. 
 It is not like general purpose OpenSolaris where the end user gets to 
 experiment with hardware configurations and tunings to get the best 
 performance (but might not achieve it).
 
 Fishworks engineers are even known to holler at the drives as part 
 of the rigorous product testing.
 
 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us,
http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

-- 
Bruno Sousa


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread dick hoogendijk

Tim Cook wrote:



On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons paulrly...@gmail.com wrote:


When I boot off Solaris 10 U8 I get the error that pool is
formatted using an incompatible version.


You're booting from an old cd that has an old version of zfs.  Grab a 
new iso. 
It might be that I can't read, but doesn't the OP state he is booting off 
the Solaris 10 update 8 DVD? What can be newer than that one? If the 
miniroot really only supports ZFS v10 then this is indeed not good 
(unworkable/unusable/...).


--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u8 10/09 | OpenSolaris 2010.03 b125
+ All that's really worth doing is what we do for others (Lewis Carrol)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa
Hi,

Given that I worked in the healthcare industry, and a lot of my former
customers wished they could run the former Sun NAS 5310 software on other
hardware, I can see an interesting possible business case.
In my former job, my customers liked the software used in the Sun
StorageTek NAS appliance, but very few of them liked the concept of an
appliance... they preferred to have the same software in a non-appliance
configuration, even if that meant Sun supported only one server for such
a solution.

Anyway, I fully understand that Fishworks is a combination of hardware
and software with some specific targets in mind, and I think Fishworks
is the best of what the market has to offer these days...

Bruno


On Tue, 27 Oct 2009 18:58:19 +, Bryan Cantrill b...@eng.sun.com
wrote:
   I can agree that the software is the one that really has the added
   value, but in my opinion allowing a stack like Fishworks to run
   outside the Sun Unified Storage would lead to a lower price per
   unit (Fishworks license) but maybe increase revenue.
 
 I'm afraid I don't see that argument at all; I think that the
 economics that you're advocating would be more than undermined by the
 necessarily higher costs of validating and supporting a broader range
 of hardware and firmware...
 
 (Just playing Devil's Advocate here)
 
 There could be no economics at all. A basic warranty would be
 provided, but running a standalone product is a wholly on-your-own
 proposition once one ventures outside a very small hardware support
 matrix.
 
 Perhaps Fishworks/AK would have an OpenSolaris edition - leave the
 bulk of the actual hardware support up to a support infrastructure
 that's already geared towards making wide ranges of hardware
 supportable - OpenSolaris/Solaris, after all, does allow that.
 
 Perhaps this could be a version of Fishworks that's not as integrated
 with what you get on a SUS platform; if some of the Fishworks
 functionality that depends on a precise hardware combo could be
 reduced or generalized, perhaps it's worth consideration. Knowing the
 little I do about what's going on under the hood of a SUS system, I
 wouldn't expect the version of Fishworks used on the SUS systems to
 have 100% parity with an unbundled Fishworks edition - but the core
 features, by and large, would convey.
 
 Why would we do this?  I'm all for zero-cost endeavors, but this isn't
 zero-cost -- and I'm having a hard time seeing the business case here,
 especially when we have so many paying customers for whom the business
 case for our time and energy is crystal clear...
 
   - Bryan
 

--
 Bryan Cantrill, Sun Microsystems Fishworks.  http://blogs.sun.com/bmc
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Bruno Sousa


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import single user mode incompatible version

2009-10-27 Thread Tim Cook
On Tue, Oct 27, 2009 at 4:59 PM, dick hoogendijk d...@nagual.nl wrote:

 Tim Cook wrote:



 On Tue, Oct 27, 2009 at 4:25 PM, Paul Lyons paulrly...@gmail.com wrote:

When I boot off Solaris 10 U8 I get the error that pool is
formatted using an incompatible version.


 You're booting from an old cd that has an old version of zfs.  Grab a new
 iso.

 It might be that I can't read but does OP not state he is booting off
 Solaris 10 update 8 DVD?
 What can be newer than that one? If the miniroot really only supports ZFS
 v10 then this is indeed not good (unworkable/unusable/..)


Assuming he didn't accidentally burn the wrong media... and the 10u8 media
does in fact default to pool version 15 but carry a version-10 miniroot
(which sounds more than a bit odd to me), it's a matter of simply grabbing
an OpenSolaris ISO instead to do the exact same thing.
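
If the miniroot really is stuck at v10, one workaround for the demo (a
sketch, assuming the u8 zpool accepts the -o version option its man page
documents) is to keep the demo pool at the version the install media
understands, since pools can be upgraded but never downgraded:

# zpool upgrade -v                          (lists supported versions)
# zpool create -o version=10 demopool c1t1d0s0

That won't help an rpool that is already at v15, but it would let the
boot-media recovery procedure be demonstrated on a data pool.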

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Bruno Sousa


Hi,

Maybe during these emails you have missed the point that no one is
requesting anything... we are just discussing a possible use of Fishworks
outside of the 7xxx series, more specifically on another Sun server.

From a personal point of view, my biggest wish is to be able to run
Fishworks on something other than an appliance... who knows, maybe
Solaris 11 on a Sun X4275 instead of a 7110.

Bruno

On Tue, 27 Oct 2009 14:29:57 -0500, Tim Cook wrote:

 On Tue, Oct 27, 2009 at 2:13 PM, Dale Ghent wrote:

 On Oct 27, 2009, at 2:58 PM, Bryan Cantrill wrote:

 I can agree that the software is the one that really has the added
 value, but in my opinion allowing a stack like Fishworks to run
 outside the Sun Unified Storage would lead to a lower price per unit
 (Fishworks license) but maybe increase revenue.

 I'm afraid I don't see that argument at all; I think that the
 economics that you're advocating would be more than undermined by the
 necessarily higher costs of validating and supporting a broader range
 of hardware and firmware...

 (Just playing Devil's Advocate here)

 There could be no economics at all. A basic warranty would be
 provided, but running a standalone product is a wholly on-your-own
 proposition once one ventures outside a very small hardware support
 matrix.

 Perhaps Fishworks/AK would have an OpenSolaris edition - leave the
 bulk of the actual hardware support up to a support infrastructure
 that's already geared towards making wide ranges of hardware
 supportable - OpenSolaris/Solaris, after all, does allow that.

 Perhaps this could be a version of Fishworks that's not as integrated
 with what you get on a SUS platform; if some of the Fishworks
 functionality that depends on a precise hardware combo could be
 reduced or generalized, perhaps it's worth consideration. Knowing the
 little I do about what's going on under the hood of a SUS system, I
 wouldn't expect the version of Fishworks used on the SUS systems to
 have 100% parity with an unbundled Fishworks edition - but the core
 features, by and large, would convey.

 Why would we do this? I'm all for zero-cost endeavors, but this isn't
 zero-cost -- and I'm having a hard time seeing the business case here,
 especially when we have so many paying customers for whom the business
 case for our time and energy is crystal clear...

 Hey, I was just offering food for thought from the technical end :)

 Of course the cost in man hours to attain a reasonable, unbundled
 version would have to be justifiable. If that aspect isn't currently
 justifiable, then that's as far as the conversation needs to go.
 However, times change and one day demand could very well justify the
 business costs.

 /dale

 The problem is, most of the things that make Fishworks desirable are
 the things that wouldn't work. Want to light up a failed drive with an
 LED? Clustering? Timeouts for failed hardware?

 The fact of the matter is, people asking for this are people that
 aren't willing to spend the money that Sun would be asking for anyway.
 I mean, seriously, a 7110 is $10,000 LIST! Assuming you absolutely
 despise bartering on price, you can get the thing for 20% off just by
 using try-and-buy. If you're balking at that price, you wouldn't like
 the price of the software. No amount of "but you don't have to support
 it" is going to change that. I think you're failing to take into
 consideration the PR suicide it would be for Sun to offer Fishworks on
 any platform people want, offer support contracts (that's the ONLY way
 this will make them money), and then turn around and tell people the
 reason feature XYZ isn't working is that their hardware isn't
 supported... oh, and they have no plans to ever add support either.

 I honestly can't believe this is even a discussion. What next, are you
 going to ask NetApp to support ONTAP on Dell systems, and EMC to
 support Enginuity on HP blades?

 Just because the underpinnings are based on an open source OS that
 supports many platforms doesn't mean this customized build can or ever
 should.

 And one last example... QLogic and Brocade FC switches run Linux... I
 wouldn't expect or ask them to make a version that I could run on a
 desktop full of HBAs to act as my very own FC switch even though it is
 entirely possible for them to do so.

 And just as a reminder... if you look back through the archives, I am
 FAR from a Sun fanboy... I just feel you guys aren't even grounded in
 reality when making these requests.

 --Tim

-- 
Bruno Sousa

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool failmode

2009-10-27 Thread deniz rende
Hi,

I am trying to understand the behavior of 'zpool set failmode=continue rpool'.

I've read the man page regarding this, and I understand that the default mode 
is wait. So if I set my pool's failmode to continue, what is this setting 
supposed to do in the case of loss of connectivity?

Does setting failmode=continue prevent the system from panicking when an event 
like loss of connectivity or pool failure occurs? And what does the EIO term 
in the man page for this setting refer to?

Could somebody explain what this setting really does?

Thanks

Deniz
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool failmode

2009-10-27 Thread Tim Cook
On Tue, Oct 27, 2009 at 5:13 PM, deniz rende solarisw...@gmail.com wrote:

 Hi,

 I am trying to understand the behavior of zpool failmode=continue rpool.

 I've read the man page regarding to this and I understand that the default
 mode is set to wait. So If I set up my zfs pool to continue, in the case of
 loss of connectivity, what is this setting supposed to do?

 Does setting failmode=continue prevent the system not to panic when an
 event like loss of connectivity or pool failure occur? What does the EIO
 term refer to in the man page for this setting?

 Could somebody explain what really this setting does?

 Thanks

 Deniz


wait causes all I/O to hang while the system attempts to retry the
device.  continue returns EIO (an I/O error) to new write requests but
allows reads from the remaining healthy devices to proceed.  panic will
cause the system to panic and core dump.  The only real advantage I see in
wait is that it will alert the admin to a failure rather quickly if you
aren't checking the health of the system on a regular basis.
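
For reference, a quick sketch of inspecting and changing the property
(tank is a placeholder pool name; per the man page, failmode only takes
effect on catastrophic pool failure, such as loss of device connectivity):

# zpool get failmode tank
# zpool set failmode=continue tank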

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Erast
As far as I know, it's an effort! Not just for the X4275 specifically, but 
in general for any other x86 hardware and storage-oriented software. A lot 
of work is required to support a final solution as well. What Nexenta does 
with its version of NexentaStor is enable third-party partners to 
integrate the software into HW/SW solutions ready for production use. 
There is even a social network for Nexenta partners, where partners 
talk to each other as well as to Nexenta experts, polishing their 
final NexentaStor solutions. It's a process and it works!


List of Partners: http://www.nexenta.com/partners

Bruno Sousa wrote:

I'm just curious to see how much effort it would take to get the FISH
software running on a Sun X4275...
Anyway... let's wait and see.

Bruno

On Tue, 27 Oct 2009 13:29:24 -0500 (CDT), Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:

On Tue, 27 Oct 2009, Bruno Sousa wrote:

I can agree that the software is the one that really has the added 
value, but in my opinion allowing a stack like Fishworks to run 
outside the Sun Unified Storage would lead to a lower price per 
unit (Fishworks license) but maybe increase revenue. Why an increase 
in revenues? Well, I assume that a lot of customers would buy 
Fishworks to put into their XYZ high-end servers.
Fishworks products (products that the Fishworks team developed) are 
designed, tweaked, and tuned for particular hardware configurations. 
It is not like general purpose OpenSolaris where the end user gets to 
experiment with hardware configurations and tunings to get the best 
performance (but might not achieve it).


Fishworks engineers are even known to holler at the drives as part 
of the rigorous product testing.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us,

http://www.simplesystems.org/users/bfriesen/

GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Moving an dataset zpool from a local zone to the global zone

2009-10-27 Thread Peter
I'm wondering if anyone has run into this before.

I've got a zone that has a zpool added into it as a dataset.  I need to remove 
the zpool from the zone and mount it on the global zone directly.

I can remove the dataset from the zone config and remove the zpool from the 
zone, but I am having trouble getting the zpool to mount again on the global 
zone. It imports and exports fine but will not let me change the mount point, 
stating that it is part of a non-global zone.  Any thoughts on this?


ie: 
  server1 
  runs zone1
  datapool is a zpool exported to zone1 as a dataset

I am going to remove the zone and need server1 to host datapool and all its 
filesystems.

Also datapool is not the rootpool of the zone.
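
For what it's worth, the usual culprit for exactly this symptom is the
pool's top-level dataset still carrying zoned=on from its time as a
delegated dataset. A sketch of the likely fix, run from the global zone
(using the pool name from the example above):

# zpool import datapool
# zfs get zoned datapool        (delegated datasets come back with zoned=on)
# zfs set zoned=off datapool
# zfs set mountpoint=/datapool datapool
# zfs mount -a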
 
Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Eric D. Mudama

On Tue, Oct 27 at 18:58, Bryan Cantrill wrote:

Why would we do this?  I'm all for zero-cost endeavors, but this isn't
zero-cost -- and I'm having a hard time seeing the business case here,
especially when we have so many paying customers for whom the business
case for our time and energy is crystal clear...

- Bryan


I don't have a need for a large 7110 box, my group's file serving
needs are quite small.  I decided on a Dell T610 running OpenSolaris,
with half the drives populated now and half to be populated as we get
close to filling them up.  Pair of mirrored vdevs for performance,
with an SSD cache.

I'd have loved to have, instead, the nice fishworks gui interface to
the whole thing, and if that existed on something like an X2270,
that's what we would have bought instead of the Dell box.

Ultimately, I wanted the simplicity of a Drobo, capable of saturating
a Gig-E port or two, in an easy to maintain and administer system.
One and a half out of three ain't bad, but Fishworks GUI on a 4-disk
X2270 would have been a 3 for 3 solution I believe.  We just can't
afford to spend $8-10k to try a 7110 which is likely complete
overkill for our needs, and we have no expectation of our business
growing into it within the next two years.

$2k was our absolute ceiling for a trial purchase, and I knew that if
my OpenSolaris experiment didn't work out, I could just repurpose the
Dell box with Debian, EXT3, software RAID and Samba and get a 75-80%
solution.

Yes, this may not make business sense for Sun-as-structured, but
someone will figure out how to scratch that itch because it's real for
a LOT of small businesses.  They want that low cost entry into a
business-grade NAS without having to build it themselves, something
that's a step up from a whitebox 2-disk mirror from some no-name
vendor who won't exist in 6 months.

--eric

PS: Not having enough engineers to support a growing and paying
customer base is a *good* problem to have.  The opposite is much, much
worse.


--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Tim Cook
On Wed, Oct 28, 2009 at 12:15 AM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:

 On Tue, Oct 27 at 18:58, Bryan Cantrill wrote:

 Why would we do this?  I'm all for zero-cost endeavors, but this isn't
 zero-cost -- and I'm having a hard time seeing the business case here,
 especially when we have so many paying customers for whom the business
 case for our time and energy is crystal clear...

- Bryan


 I don't have a need for a large 7110 box, my group's file serving
 needs are quite small.  I decided on a Dell T610 running OpenSolaris,
 with half the drives populated now and half to be populated as we get
 close to filling them up.  Pair of mirrored vdevs for performance,
 with an SSD cache.

 I'd have loved to have, instead, the nice fishworks gui interface to
 the whole thing, and if that existed on something like an X2270,
 that's what we would have bought instead of the Dell box.

 Ultimately, I wanted the simplicity of a Drobo, capable of saturating
 a Gig-E port or two, in an easy to maintain and administer system.
 One and a half out of three ain't bad, but Fishworks GUI on a 4-disk
 X2270 would have been a 3 for 3 solution I believe.  We just can't
 afford to spend $8-10k to try a 7110 which is likely complete
 overkill for our needs, and we have no expectation of our business
 growing into it within the next two years.

 $2k was our absolute ceiling for a trial purchase, and I knew that if
 my OpenSolaris experiment didn't work out, I could just repurpose the
 Dell box with Debian, EXT3, software RAID and Samba and get a 75-80%
 solution.

 Yes, this may not make business sense for Sun-as-structured, but
 someone will figure out how to scratch that itch because it's real for
 a LOT of small businesses.  They want that low cost entry into a
 business-grade NAS without having to build it themselves, something
 that's a step up from a whitebox 2-disk mirror from some no-name
 vendor who won't exist in 6 months.

 --eric

 PS: Not having enough engineers to support a growing and paying
 customer base is a *good* problem to have.  The opposite is much, much
 worse.



So use Nexenta?

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread C. Bergström

Tim Cook wrote:



PS: Not having enough engineers to support a growing and paying
customer base is a *good* problem to have.  The opposite is much, much
worse.



So use Nexenta?

Got data you care about?

Verify extensively before you jump to that ship... :)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss