-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I have ZFS root/boot in my environment, and I am interested in
separating /var into an independent dataset. How can I do it? I can use
Live Upgrade, if needed.
- --
Jesus Cea Avion _/_/ _/_/_/_/_/_/
[EMAIL PROTECTED]
Kristof Van Damme wrote:
Hi All,
We have set up a zpool on OpenSolaris 2008.11, but have difficulties copying
files with special chars in the name when the name is encoded in ISO8859-15.
When the name is in UTF-8 we don't have this problem. The error we get is
"Operation not supported".
We want to copy
I tried this question in the CIFS forum and didn't get any responses, but maybe
it is more appropriate for this forum.
I have many large zfs filesystems on Solaris 10 servers that I would like to
upgrade to OpenSolaris so the filesystems can be shared using the CIFS Service
(I'm currently
Hi Tim,
Thanks for having a look.
The 'utf8only' setting is set to off.
Important bit of additional information:
We only seem to have this problem when copying to a zfs filesystem with the
casesensitivity=mixed property. We need this because the filesystem will
eventually be shared over NFS and
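A quick way to see why this combination fails (a sketch, not OpenSolaris-specific): datasets created with casesensitivity=mixed require filenames to be valid UTF-8, and the ISO8859-15 byte for characters such as 'ñ' (0xF1) is not a valid UTF-8 sequence. iconv demonstrates the difference:

```shell
# The byte 0xF1 is 'ñ' in ISO8859-15 but an invalid sequence in UTF-8.
# Converting from ISO8859-15 succeeds; validating the same bytes as UTF-8 fails.
printf 'se\361or' | iconv -f ISO8859-15 -t UTF-8
printf '\n'
printf 'se\361or' | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1 || echo "rejected: not valid UTF-8"
```

So a file whose name is raw ISO8859-15 bytes can be created on a plain dataset but refused by one whose creation-time properties force UTF-8 name semantics.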
On Wed, Dec 10, 2008 at 10:51 AM, Jay Anderson [EMAIL PROTECTED]wrote:
I tried this question in the CIFS forum and didn't get any responses, but
maybe it is more appropriate for this forum.
I have many large zfs filesystems on Solaris 10 servers that I would like
to upgrade to OpenSolaris so
First a background to my problem. I am using a Windows XP Pro laptop to host a
VirtualBox OpenSolaris guest. The VDI file is now 30 GB in size, but df tells me
that the ZFS pool is using only 20 GB. I want to free up that 10 GB. Hold on, I
know what you're thinking, but this is not a VirtualBox
On 10 December, 2008 - Alexander Cerna sent me these 1,1K bytes:
First a background to my problem. I am using a Windows XP Pro laptop
to host a VirtualBox OpenSolaris guest. The VDI file is now 30 GB big
but a df tells me that the ZFS pool is using only 20 GB. I want to
free up that 10 GB.
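In case it helps, a rough sketch of the usual compaction procedure (file names here are assumptions, and the VBoxManage syntax is the 2008-era CLI). VirtualBox can only reclaim blocks that read back as zero, and ZFS's copy-on-write layout means zero-filling may not release everything:

```shell
# Inside the guest: fill the free space with zeros, then delete the file.
# (Disable compression on the target dataset first, or the zeros shrink away.)
dd if=/dev/zero of=/zerofile bs=1024k
rm /zerofile
# Shut the guest down, then on the Windows XP host:
VBoxManage modifyvdi opensolaris.vdi compact
```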
On Wed, Dec 10, 2008 at 11:40:16AM -0600, Tim wrote:
On Wed, Dec 10, 2008 at 10:51 AM, Jay Anderson [EMAIL PROTECTED]wrote:
I have many large zfs filesystems on Solaris 10 servers that I would like
to upgrade to OpenSolaris so the filesystems can be shared using the CIFS
Service (I'm
Large sites that have centralized their data with a SAN typically have
a storage device export block-oriented storage to a server, with a
Fibre Channel or iSCSI connection between the two. The server sees
this as a single virtual disk. On the storage device, the blocks of
data may be spread
Jesus Cea wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I have ZFS root/boot in my environment, and I am interested in
separating /var into an independent dataset. How can I do it? I can use
Live Upgrade, if needed.
It's an install option.
- --
The correct sig. delimiter is --
On Wed, 10 Dec 2008, Gary Mills wrote:
This is a split responsibility configuration where the storage device
is responsible for integrity of the storage and ZFS is responsible for
integrity of the filesystem. How can it be made to behave in a
reliable manner? Can ZFS be better than UFS in
On Wed, Dec 10, 2008 at 18:46, Gary Mills [EMAIL PROTECTED] wrote:
The storage device provides reliability and integrity for the blocks of
data that it serves, and does this well.
But not well enough. Even if the storage does a perfect job keeping
its bits correct on disk, there are a lot of
On Wed, Dec 10, 2008 at 11:46 AM, Nico wrote:
On Wed, Dec 10, 2008 at 10:51 AM, Jay Anderson wrote:
I have many large zfs filesystems on Solaris 10 servers that I would like
to upgrade to OpenSolaris so the filesystems can be shared using the CIFS
Service (I'm currently using Samba). ZFS on
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Ian Collins wrote:
I have ZFS root/boot in my environment, and I am interested in
separating /var into an independent dataset. How can I do it? I can use
Live Upgrade, if needed.
It's an install option.
But I am not installing; I am doing a Live Upgrade.
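For what it's worth, one manual approach that has been described (a sketch only; the dataset names rpool/ROOT/s10be are assumptions, and details such as canmount/mountpoint handling vary by release, so consult lucreate(1M) and test on a spare boot environment first):

```shell
# Create a child dataset for /var under the boot environment.
zfs create rpool/ROOT/s10be/var.new
# From single-user mode, copy the existing /var across, preserving permissions.
cd /var && find . -xdev -print | cpio -pdum /rpool/ROOT/s10be/var.new
# After verifying the copy, move the old contents aside and arrange for the
# new dataset to mount at /var in this boot environment.
```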
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
disk. UFS is most common, but ZFS has a number of advantages over
UFS. Two of these are dynamic space management and snapshots. There
are also a number of
On Wed, Dec 10, 2008 at 11:13:21AM -0800, Jay Anderson wrote:
The casesensitivity option is just like utf8only and normalization: it
can only be set at creation time. The result from attempting to change
it on an existing filesystem:
# zfs set casesensitivity=mixed pool0/data1
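Since the property is creation-time only, the usual workaround is to create a fresh dataset with the desired value and migrate the files. A sketch with assumed names (pool0/data1 existing, pool0/data2 new); note that a plain zfs send/receive would not apply the new creation-time property, so copy at the file level:

```shell
zfs create -o casesensitivity=mixed pool0/data2
rsync -a /pool0/data1/ /pool0/data2/
# swap mountpoints once the copy is verified
```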
I agree completely with your assessment of the problems, Gary: when ZFS can't
correct your data you do seem to be at high risk of losing data, although some
people are able to recover it with the help of a couple of helpful souls on
this forum.
I can think of one scenario where you might be
On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
disk. UFS is most common, but ZFS has a number of advantages over
UFS. Two of these are
On 12/10/08 12:15, Jesus Cea wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Ian Collins wrote:
I have ZFS root/boot in my environment, and I am interested in
separating /var into an independent dataset. How can I do it? I can use
Live Upgrade, if needed.
It's an install option.
On Wed, 10 Dec 2008 11:13:21 PST
Jay Anderson [EMAIL PROTECTED] wrote:
Yep, that's why I'm planning to upgrade to OpenSolaris.
And do you think it really is stable/secure enough for a production
server replacing S10?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS
Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
disk. UFS is most common, but ZFS has a number of advantages
I've never encountered that error myself, so I'm not at all sure this
suggestion will work, but I did run into something similar and the answer was
to install Windows on the drive and then pop the drive back in my server.
Prior to that, OpenSolaris/ZFS remembered the disk and wouldn't let me
On Wed, Dec 10, 2008 at 12:58:48PM -0800, Richard Elling wrote:
Nicolas Williams wrote:
But note that the setup you describe puts ZFS in no worse a situation
than any other filesystem.
Well, actually, it does. ZFS is susceptible to a class of failure modes
I classify as "kill the canary"
nw == Nicolas Williams [EMAIL PROTECTED] writes:
wm == Will Murnane [EMAIL PROTECTED] writes:
nw ZFS has very strong error detection built-in,
nw ZFS can also store multiple copies of data and metadata even
nw in non-mirrored/non-RAID-Z pools.
nw Whoever is making those
re == Richard Elling [EMAIL PROTECTED] writes:
re ZFS will detect errors and complain about them, which results
re in people blaming ZFS (the canary).
this is some really sketchy spin.
Sometimes you will say ZFS stores multiple copies of metadata, so even
on an unredundant pool a few
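For reference, the per-dataset knob being discussed (the dataset name here is an assumption): metadata already gets redundant "ditto" copies even on a single-disk pool, and the copies property extends the same idea to data blocks:

```shell
zfs create -o copies=2 tank/important   # each data block stored twice
zfs get copies tank/important
```

This guards against localized media errors, not against losing the whole device.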
On Tue, Dec 9, 2008 at 8:37 AM, Courtney Malone
[EMAIL PROTECTED] wrote:
I have another drive on the way, which will be handy in the future, but it
doesn't solve the problem that zfs won't let me manipulate that pool in a
manner that will return it to a non-degraded state, (even with a
When I create a volume I am unable to mount it locally. I'm pretty sure it has
something to do with the other volumes in the same ZFS pool being shared out as
iSCSI LUNs. For some reason ZFS thinks the base volume is iSCSI. Is there a
flag that I am missing? Thanks in advance for the help.
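Two things worth checking (a hedged sketch; the pool/volume names are assumptions, and shareiscsi was the 2008-era property name): whether the volume inherited an iSCSI-sharing property from its parent, and whether you are mounting the zvol's block device rather than the dataset itself:

```shell
# Did the new volume inherit iSCSI sharing?
zfs get shareiscsi pool0/vol1
zfs set shareiscsi=off pool0/vol1
# A zvol is a block device, not a filesystem: put a filesystem on it and
# mount the device node, e.g.
newfs /dev/zvol/rdsk/pool0/vol1
mount /dev/zvol/dsk/pool0/vol1 /mnt
```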
On Wed, 10 Dec 2008, Miles Nordin wrote:
The objection, to review, is that people are losing entire ZFS pools
on SAN's more often than UFS pools on the same SAN. This is
It sounds like you have access to a source of information that the
rest of us don't have access to. Perhaps it is a secret
On Wed, Dec 10, 2008 at 02:08:28PM -0800, John Smith wrote:
When I create a volume I am unable to mount it locally. I'm pretty sure
it has something to do with the other volumes in the same ZFS pool
being shared out as iSCSI LUNs. For some reason ZFS thinks the base
volume is iSCSI. Is there a
Should SMF have a THAW method that fires when a system is woken from being
hibernated?
I think it should. The zfs snapshot service could use this to snapshot on thaw,
and while it would be possible to leave a daemon around to catch the signal,
that would potentially leave lots of daemons around
Bug report filed on 12/9, #6782540
http://bugs.opensolaris.org/view_bug.do?bug_id=6782540
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hello,
I was wondering if there are any problems with cyrus and ZFS? Or have
all the problems of yester-release been ironed out?
Thanks, Jonny
+--
| On 2008-12-10 16:48:37, Jonny Gerold wrote:
|
| Hello,
| I was wondering if there are any problems with cyrus and ZFS? Or have
| all the problems of yester-release been ironed out?
Yester-release?
I've been using
It sounds like you have access to a source of information that the
rest of us don't have access to.
I think if you read the archives of this mailing list, and compare it to the
discussions on the other Solaris mailing lists re UFS, it's a reasonable
conclusion.
A while back I started a thread "Possible ZFS panic on Solaris 10
Update 6" and it now turns out the cause is one incremental stream.
Sending this stream to an x86 system running Solaris 10 Update 6 or SXCE
b103 (and probably anything in between) panics the receiving system.
I have a crash dump