Ross Walker wrote:
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn wrote:
On Wed, 18 Aug 2010, Joerg Schilling wrote:
Linus is right with his primary decision, but this also applies to static
linking. See Lawrence Rosen for more information; the GPL does not distinguish
between static and dynamic linking.
On 2010-Aug-18 04:40:21 +0800, Joerg Schilling wrote:
>Ian Collins wrote:
>> Some applications benefit from the extended register set and function
>> call ABI, others suffer due to increased sizes impacting the cache.
>
>Well, please verify your claims as they do not match my experience.
I would
On 08/18/10 08:40 AM, Joerg Schilling wrote:
Ian Collins wrote:
Some applications benefit from the extended register set and function
call ABI, others suffer due to increased sizes impacting the cache.
Well, please verify your claims as they do not match my experience.
It may be that
On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn wrote:
> On Wed, 18 Aug 2010, Joerg Schilling wrote:
>>
>> Linus is right with his primary decision, but this also applies to static
>> linking. See Lawrence Rosen for more information; the GPL does not distinguish
>> between static and dynamic linking.
On 8/18/10 3:58 PM -0400 Linder, Doug wrote:
Erik Trimble wrote:
That said, stability vs new features has NOTHING to do with the OSS
development model. It has everything to do with the RELEASE model.
[...]
All that said, using the OSS model for actual *development* of an
Operating System is co
Well, I seem to have hit on that hot-button topic of NFSv4 (good
thing I didn't mention that we are running IPv4).
To get back to the topic: is anyone running ZFS group quotas on a large
filesystem with lots of smaller files and thousands of groups per filesystem,
or have any quota-related experience to share?
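For illustration, a minimal sketch of the per-group quota setup being asked about (the pool, filesystem and group names below are placeholders, not from the original post):

  # cap what each group may consume on the shared filesystem
  zfs set groupquota@staff=500G tank/export/data
  zfs set groupquota@students=2T tank/export/data

  # report per-group usage against those quotas
  zfs groupspace tank/export/data

  # read one group's quota back
  zfs get groupquota@staff tank/export/data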
>
> Also the Linux NFSv4 client is buggy (as in hang-the-whole-machine buggy).
> I am deploying a new osol fileserver for home directories and I'm using NFSv3
> + automounter (because I am also using one dataset per user, and thus I have
> to mount each home dir separately).
We are also in th
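As a rough sketch of the dataset-per-user layout described above (the pool, user and host names are placeholders):

  zfs create -o mountpoint=/export/home tank/home
  zfs create tank/home/alice
  zfs set sharenfs=rw tank/home/alice     # each home directory is its own NFS export

  # Linux client side, forcing NFSv3 through autofs (auto.home entry):
  #   alice  -fstype=nfs,vers=3  fileserver:/export/home/alice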
On 18 Aug 2010, at 21:24, David Magda wrote:
> On Wed, August 18, 2010 15:14, Linder, Doug wrote:
>> I've noticed that every time someone mentions using NFS with ZFS here, they
>> always seem to be using NFSv3. Is there a reason for this that I just
>> don't know about?
> At $WOR
In message <4c6c4e30.7060...@ianshome.com>, Ian Collins writes:
>If you count Monday this week as lately, we have never had to wait more
>than 24 hours for replacement drives for our 45x0 or 7000 series
Same here, but two weeks ago for a failed drive in an X4150.
Last week SunSolve was sending
On 08/19/10 03:44 AM, Ethan Erchinger wrote:
Have you dealt with Sun/Oracle support lately? lololol It's a disaster.
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever. The 7000
series support is no better, possi
On Wed, Aug 18, 2010 at 1:34 PM, Miles Nordin wrote:
> > "ee" == Ethan Erchinger writes:
>
>ee> We've had a failed disk in a fully supported Sun system for over
>ee> 3 weeks, Explorer data turned in, and been given the runaround
>ee> forever.
>
> that sucks.
>
> but while NetApp ma
This is a 64-bit system, and I already used 2 of these drives in a raidz1 pool
and they worked great, except I needed to use the SATA controller card and not
the motherboard SATA. Any ideas?
Hmm, zdb has been running since last night. Does anyone have any suggestions or
advice on how to proceed with this issue?
Thanks,
Matthew Ellison
Begin forwarded message:
> From: Matthew Ellison
> Date: August 18, 2010 3:15:39 AM EDT
> To: zfs-discuss@opensolaris.org
> Subject: Kernel panic on import /
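In case it helps, a hedged sketch of the usual next step when an import panics the box (the pool name "tank" is a placeholder; this is not guaranteed to recover anything, and rewinding discards the most recent transactions):

  zpool import -nF tank    # dry run: show what a rewind import would discard
  zpool import -F tank     # roll back to an earlier txg and import for real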
On 08/19/10 04:56 AM, seth keith wrote:
I had a perfectly working 7-drive raidz pool using some on-board SATA
connectors and some on PCI SATA controller cards. My pool was using 500GB
drives. I had the stupid idea to replace my 500GB drives with 2TB (Mitsubishi)
drives. This process resulted
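For reference, a rough sketch of the usual one-at-a-time upgrade path (pool and device names are placeholders): replace a single disk, wait for the resilver, then do the next.

  zpool set autoexpand=on tank        # let the pool grow once every disk is bigger
  zpool replace tank c2t0d0 c3t0d0    # old 500GB disk, new 2TB disk
  zpool status tank                   # wait for "resilver completed" before the next swap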
Erik Trimble wrote:
> That said, stability vs new features has NOTHING to do with the OSS
> development model. It has everything to do with the RELEASE model.
> [...]
> All that said, using the OSS model for actual *development* of an
> Operating System is considerably superior to using a close
On 8/18/2010 12:24 PM, Linder, Doug wrote:
On Fri, Aug 13 at 19:06, Frank Cusack wrote:
OpenSolaris is for enthusiasts and great great folks like Nexenta.
Solaris lags so far behind it's not really an upgrade path.
It's often hard for OSS-minded people to believe, but there are an awful lot
On Fri, Aug 13 at 19:06, Frank Cusack wrote:
> OpenSolaris is for enthusiasts and great great folks like Nexenta.
> Solaris lags so far behind it's not really an upgrade path.
It's often hard for OSS-minded people to believe, but there are an awful lot of
places that actively DO NOT want the la
On Wed, August 18, 2010 15:14, Linder, Doug wrote:
> I've noticed that every time someone mentions using NFS with ZFS here, they
> always seem to be using NFSv3. Is there a reason for this that I just
> don't know about? To me, using NFSv4 is a no-brainer. ZFS supports it
> natively, it supports
Jordan Schwartz wrote:
> There is one large filesystem per server that is served via NFSv3 to
I've noticed that every time someone mentions using NFS with ZFS here, they
always seem to be using NFSv3. Is there a reason for this that I just don't
know about? To me, using NFSv4 is a no-brainer.
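A short sketch of the pieces involved (names are placeholders; the NFS protocol version is negotiated between client and server, which is why either version works on top of ZFS):

  zfs set sharenfs=rw tank/export/data             # share the dataset over NFS

  # pin NFSv3 on a Solaris client:
  mount -F nfs -o vers=3 fileserver:/export/data /mnt

  # or cap the server side in /etc/default/nfs:
  #   NFS_SERVER_VERSMAX=3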
> "ee" == Ethan Erchinger writes:
ee> We've had a failed disk in a fully supported Sun system for over
ee> 3 weeks, Explorer data turned in, and been given the runaround
ee> forever.
that sucks.
but while NetApp may replace your disk immediately, they are an
abusive partner with
Thanks Cindy,
I just needed to delete all the LUNs first:
sbdadm delete-lu 600144F00800270514BC4C1E29FB0001
itadm delete-target -f iqn.1986-03.com.sun:02:f38e0b34-be30-ca29-dfbd-d1d28cd75502
And then I was able to destroy the ZFS pool itself.
You need to let the resilver complete before you can detach the spare. This is
a known problem, CR 6909724.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote:
> Hi!
>
> I had trouble with my raidz in the way, that some o
Hi!
I had trouble with my raidz the other day: some of the block devices were not
found by the OSOL box, so the spare device was attached automatically.
After fixing the problem, the missing device came back online, but I am unable
to detach the spare device, even though all d
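A minimal sketch of the sequence once the resilver has finished (pool and device names are placeholders):

  zpool status -v tank       # wait until the scan line reports the resilver completed
  zpool detach tank c4t2d0   # then the spare can be returned to the spare list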
On Wed, Aug 18, 2010 at 11:44 AM, Ethan Erchinger wrote:
>
> Frank wrote:
>> Have you dealt with RedHat "Enterprise" support? lol.
>
> Have you dealt with Sun/Oracle support lately? lololol It's a disaster.
> We've had a failed disk in a fully supported Sun system for over 3 weeks,
> Explorer data
+1: This thread is relevant and productive discourse that'll assist
OpenSolaris orphans in pending migration choices.
On 08/18/10 12:27, Edward Ned Harvey wrote:
Compatibility of ZFS & Linux, as well as the future development of ZFS, and
the health and future of OpenSolaris / Solaris, Oracle &
Hi Alxen4,
If /tank/macbook0-data is a ZFS volume that has been shared as an iSCSI
LUN, then you will need to unshare/remove those features before removing
it.
Thanks,
Cindy
On 08/18/10 00:10, Alxen4 wrote:
I have a pool with a zvol (OpenSolaris b134).
When I try zpool destroy tank I get "po
On 8/18/10 9:29 AM -0700 Ethan Erchinger wrote:
Edward wrote:
I have had wonderful support, up to and including recently, on my Sun
hardware.
I wish we had the same luck. We've been handed off between 3 different
"technicians" at this point, each one asking for the same information.
Do they
"Garrett D'Amore" wrote:
> All of this is entirely legal conjecture, by people who aren't lawyers,
> for issues that have not been tested by court and are clearly subject to
> interpretation. Since it no longer is relevant to the topic of the
> list, can we please either take the discussion offl
I had a perfectly working 7-drive raidz pool using some on-board SATA
connectors and some on PCI SATA controller cards. My pool was using 500GB
drives. I had the stupid idea to replace my 500GB drives with 2TB (Mitsubishi)
drives. This process resulted in me losing much of my data (see my o
What you say is true only on the system itself. On an NFS client system, 30
seconds of lost data in the middle of a file (as per my earlier example) is a
corrupt file.
-original message-
Subject: Re: [zfs-discuss] Solaris startup script location
From: Edward Ned Harvey
Date: 18/08/2010 17:17
>
On Aug 18, 2010, at 5:11 AM, Paul Kraus wrote:
> On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble
> wrote:
>
>> I tried this with NetBackup, and decided against it pretty rapidly.
>> Basically, we
>> got hardly any dedup at all. (Something like 3%; compression gave us
>> much better results.) Tin
Edward wrote:
> That is really weird. What are you calling "failed?" If you're
getting
> either a red blinking light, or a checksum failure on a device in a
zpool...
> You should get your replacement with no trouble.
Yes, failed, with all the normal "failed" signs, cfgadm not finding it,
"FAULTE
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Garrett D'Amore
>
> interpretation. Since it no longer is relevant to the topic of the
> list, can we please either take the discussion offline, or agree to just
> let the topic die (on the
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ethan Erchinger
>
> We've had a failed disk in a fully supported Sun system for over 3 weeks,
> Explorer data turned in, and been given the runaround forever. The 7000
> series support is no b
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alxen4
>
> For example I'm trying to use ramdisk as ZIL device (ramdiskadm )
Other people have already corrected you about ramdisk for log.
It's already been said, use SSD, or disable ZIL comp
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Alxen4
>
> Disabling the ZIL converts all synchronous calls to asynchronous ones, which
> makes ZFS report data acknowledgment before it has actually been written
> to stable storage, which in turn improves
All of this is entirely legal conjecture, by people who aren't lawyers,
for issues that have not been tested by court and are clearly subject to
interpretation. Since it no longer is relevant to the topic of the
list, can we please either take the discussion offline, or agree to just
let the topic
It's hard to tell what caused the SMART predictive failure message;
it could be something like a temperature fluctuation. If ZFS noticed that a disk
wasn't available yet, then I would expect a message to that effect.
In any case, I think I would have a replacement disk available.
The important thing is that you continue to m
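A quick sketch of where the underlying fault data can be inspected (the device name is a placeholder):

  fmdump -eV | less       # raw FMA error telemetry, including disk ereports
  fmadm faulty            # faults the diagnosis engine has already called
  iostat -En c7t0d0       # per-device error counters, including predictive failure analysis
  zpool status -x         # what ZFS itself currently thinks of the pool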
Frank wrote:
> Have you dealt with RedHat "Enterprise" support? lol.
Have you dealt with Sun/Oracle support lately? lololol It's a disaster.
We've had a failed disk in a fully supported Sun system for over 3 weeks,
Explorer data turned in, and been given the runaround forever. The 7000
series su
On Wed, 18 Aug 2010, Joerg Schilling wrote:
Linus is right with his primary decision, but this also applies to static
linking. See Lawrence Rosen for more information; the GPL does not distinguish
between static and dynamic linking.
GPLv2 does not address linking at all and only makes vague refe
On Wed, Aug 18, 2010 at 12:16:04AM -0700, Alxen4 wrote:
> Is there any way to run a start-up script before a non-root pool is mounted?
>
> For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
> So I need to create the ramdisk before the actual pool is mounted, otherwise it
> complains that the log devi
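A hedged sketch of the manual sequence being scripted here (the pool, ramdisk name and size are placeholders); making it automatic would mean wrapping the first command in an SMF service that svc:/system/filesystem/local is made to wait for:

  ramdiskadm -a zilram 1g      # creates /dev/ramdisk/zilram
  zpool import tank            # the pool now finds its log device at import time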
On Wed, Aug 18, 2010 at 7:51 AM, Peter Tribble wrote:
> I tried this with NetBackup, and decided against it pretty rapidly.
> Basically, we
> got hardly any dedup at all. (Something like 3%; compression gave us
> much better results.) Tiny changes in block alignment completely ruin the
> possibil
On Wed, Aug 18, 2010 at 9:48 AM, Sigbjorn Lie wrote:
> Hi,
>
> We are considering using a ZFS based storage as a staging disk for Networker.
> We're aiming at
> providing enough storage to be able to keep 3 months worth of backups on
> disk, before it's moved
> to tape.
>
> To provide storage fo
Hello,
we use ZFS on Solaris 10u8 as a backup-to-disk solution with EMC Networker.
We use the standard recordsize of 128k and ZFS compression.
We can't use dedup, because of Solaris 10.
But we are working on using more features and looking for further improvements...
Still, we are happy with this solution.
H
Miles Nordin wrote:
> > "gd" == Garrett D'Amore writes:
>
> >> Joerg is correct that CDDL code can legally live right
> >> alongside the GPLv2 kernel code and run in the same program.
>
> gd> My understanding is that no, this is not possible.
>
> GPLv2 and CDDL are incompatible
On 18 Aug 2010, at 10:20, Alxen4 wrote:
> My NFS Client is ESXi so the major question is there risk of corruption for
> VMware images if I disable ZIL ?
I use ZFS the same way. I had a huge improvement in performance by using
mirrors instead of raidz. How is your zpool confi
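A rough sketch of the mirrored layout being recommended for ESXi over NFS (device and dataset names are placeholders); mirrored vdevs handle small synchronous writes with far more IOPS than a single raidz vdev:

  zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0
  zfs create tank/vmware
  zfs set sharenfs=rw tank/vmware    # exported to the ESXi host over NFS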
Thanks. Everything is clear now.
Hi,
We are considering using a ZFS based storage as a staging disk for Networker.
We're aiming at
providing enough storage to be able to keep 3 months worth of backups on disk,
before it's moved
to tape.
To provide storage for 3 months of backups, we want to utilize the dedup
functionality in
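For what it's worth, a minimal sketch of how dedup would be evaluated and enabled for this (pool and dataset names are placeholders; the dedup table needs to fit comfortably in RAM/L2ARC or write performance suffers badly):

  zdb -S tank                        # simulate dedup on existing data and print the ratio
  zfs set dedup=on tank/staging      # enable it for the Networker staging dataset
  zpool list tank                    # the DEDUP column shows the achieved ratio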
On Wed, 2010-08-18 at 01:20 -0700, Alxen4 wrote:
> Thanks... Now I think I understand...
>
> Let me summarize it and let me know if I'm wrong.
>
> Disabling the ZIL converts all synchronous calls to asynchronous ones, which makes ZFS
> report data acknowledgment before it has actually been written to stable
Alxen4 wrote:
Thanks... Now I think I understand...
Let me summarize it and let me know if I'm wrong.
Disabling the ZIL converts all synchronous calls to asynchronous ones, which makes ZFS
report data acknowledgment before it has actually been written to stable storage,
which in turn improves performance
Thanks... Now I think I understand...
Let me summarize it and let me know if I'm wrong.
Disabling the ZIL converts all synchronous calls to asynchronous ones, which makes ZFS
report data acknowledgment before it has actually been written to stable storage,
which in turn improves performance but might cause
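As a concrete sketch of the two ways this is typically done (dataset names are placeholders; the /etc/system tunable is machine-wide and needs a reboot, while the per-dataset property only exists in builds newer than snv_134):

  echo "set zfs:zil_disable = 1" >> /etc/system    # snv_134-era, whole machine
  # ...reboot...

  # newer builds instead allow it per dataset, no reboot:
  zfs set sync=disabled tank/vmware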
Andrew Gabriel wrote:
Alxen4 wrote:
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is mounted, otherwise
it complains that the log device is missing :)
For sure
On Wed, 2010-08-18 at 00:49 -0700, Alxen4 wrote:
> Any argument as to why?
Because a RAMDISK defeats the purpose of a ZIL, which is to provide
fast *stable storage* for data being written. If you are using a
RAMDISK, you are not getting any non-volatility guarantees that the ZIL
is supposed to
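A short sketch of the alternative being argued for here, a real non-volatile log device (pool and device names are placeholders):

  zpool add tank log c4t1d0    # dedicated SSD as a separate log (slog) device
  zpool status tank            # it shows up under the "logs" section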
Any argument as to why?
Alxen4 wrote:
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is mounted, otherwise it
complains that the log device is missing :)
For sure I can manually remove/a
On Wed, 2010-08-18 at 00:16 -0700, Alxen4 wrote:
> Is there any way to run a start-up script before a non-root pool is mounted?
>
> For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
> So I need to create the ramdisk before the actual pool is mounted, otherwise it
> complains that the log device is m
I have a box running snv_134 that had a little boo-boo.
The problem first started a couple of weeks ago with some corruption on two
filesystems in an 11-disk 10TB raidz2 set. I ran a couple of scrubs that
revealed a handful of corrupt files on my 2 de-duplicated ZFS filesystems. No
biggie.
I
Is there any way to run a start-up script before a non-root pool is mounted?
For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is mounted, otherwise it
complains that the log device is missing :)
For sure I can manually remove/and add it by sc