Hi
Is it possible to convert a live 3-disk zpool from raidz to raidz2?
And is it possible to add 1 new disk to a raidz configuration without backups and
recreating the zpool from scratch?
Thanks
Hi there,
On Mon, 2007-04-02 at 00:37 -0700, homerun wrote:
Is it possible to convert a live 3-disk zpool from raidz to raidz2
Unfortunately not - you'd need to back up your data, destroy the pool,
create the new pool and restore your data.
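A minimal sketch of that workflow, with hypothetical pool, dataset, and
device names:

  # snapshot the data, stream it to scratch space, rebuild, restore
  zfs snapshot tank/data@migrate
  zfs send tank/data@migrate > /scratch/tank-data.zfs
  zpool destroy tank
  # recreate the pool as raidz2 (here with a fourth disk added)
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zfs receive tank/data < /scratch/tank-data.zfs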
And is it possible to add 1 new disk to a raidz
Jason J. W. Williams writes:
Hi Guys,
Rather than starting a new thread, I thought I'd continue this one.
I've been running Build 54 on a Thumper since mid-January and wanted
to ask a question about the zfs_arc_max setting. We set it to
0x100000000 # 4GB; however, it's creeping over
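For reference, a cap like that is normally set in /etc/system; a minimal
sketch, assuming the 4GB value above:

  * /etc/system entry capping the ZFS ARC at 4GB (takes effect on reboot)
  set zfs:zfs_arc_max = 0x100000000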
I did some more testing; here is what I found:
- I can destroy older and newer snapshots, just not that particular snapshot
- I added some more memory (total 1GB); now after I start the destroy command,
~500MB of RAM is taken right away, and there is still ~200MB or so left.
o The machine is
You are definitely hitting a bug. Not sure which one (hopefully someone else
will chime in on that). It should take mere milliseconds to destroy a snapshot
regardless of size.
Do you have any disk errors?
What would happen if you scrubbed the pool?
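For example (pool name assumed to be tank):

  # look for read/write/checksum errors on each device
  zpool status -v tank
  # kick off a scrub, then re-check status for progress and results
  zpool scrub tank
  zpool status tank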
Eric
Hi,
One of my legacy machines was created without an adequate swap partition, so I
created a ZFS volume and added it to the server's swap as illustrated in the ZFS
admin guide:
swap -a /dev/zvol/dsk/tank/vol
This works fine except it doesn't persist across reboots. Do I need to add this
to
On Mon, Apr 02, 2007 at 12:37:24AM -0700, homerun wrote:
Is it possible to convert a live 3-disk zpool from raidz to raidz2?
And is it possible to add 1 new disk to a raidz configuration without
backups and recreating the zpool from scratch?
The reason that's not possible is that RAID-Z uses a
Chris Jackson wrote:
Hi,
One of my legacy machines was created without an adequate swap partition, so I
created a ZFS volume and added it to the server's swap as illustrated in the ZFS
admin guide:
swap -a /dev/zvol/dsk/tank/vol
This works fine except it doesn't persist across reboots. Do I
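The usual fix, per the ZFS admin guide, is an /etc/vfstab entry so the
volume is added as swap at boot; a sketch, using the volume name from the
post:

  # /etc/vfstab -- device to mount, device to fsck, mount point,
  # FS type, fsck pass, mount at boot, options
  /dev/zvol/dsk/tank/vol  -  -  swap  -  no  -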
Hello,
I've already read many posts about checksum errors on zpools, but I'd like to
have some more information, please.
We use 2 Sun servers (AMD x64, SunOS 5.10 Generic_118855-36, hopefully all
patches) with two hardware RAIDs (RAID 10) connected through Fibre Channel.
Disk space is about 3
On March 27, 2007 1:43:01 PM +0800 Wee Yeh Tan wrote:
On 3/24/07, Frank Cusack wrote:
On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan wrote:
I should be able to reply to you next Tuesday -- my 6140 SATA
expansion tray is due to arrive.
I have a couple of questions:
1.)
I am working with a v240 IMAP server that is currently set up with 3 ZFS
pools: one (conf-pool) on the internal disks, and two (email-pool and
email1-pool) that are spread across 12 disks in an attached JBOD, like so:
pool: email-pool
state: ONLINE
If I create a symlink inside a ZFS file system and point the link to a
file on a UFS file system on the same node, how much space should I
expect to see taken in the pool as used? Has this changed in the last
few months? I know work is being done under 6516171 to make symlinks
dittoable but I
Hello Joseph,
Monday, April 2, 2007, 9:42:24 PM, you wrote:
JB I have a couple of questions:
JB 1.)
JB I am working with a v240 IMAP server that is currently set up with 3 ZFS
JB pools: one (conf-pool) on the internal disks, and two (email-pool and
JB email1-pool) that are spread across 12
Robert Milkowski wrote:
JB So, normally, when the script runs, all snapshots finish in maybe a minute
JB total. However, on Sundays, it continues to take longer and longer. On
JB 2/25 it took 30 minutes, and this last Sunday, it took 2:11. The only
JB special thing about Sunday's
All file systems provide writes by default which are
atomic with respect to readers of the file.
Surely, only in the absence of a crash - otherwise,
POSIX would require implementation of transactional
write semantics in all file systems. Or is that what
you meant by the last sentence in
On Mon, Apr 02, 2007 at 03:27:39PM -0700, Anton B. Rang wrote:
All file systems provide writes by default which are
atomic with respect to readers of the file.
Surely, only in the absence of a crash - otherwise,
POSIX would require implementation of transactional
write semantics in
Joseph Barbey wrote:
Robert Milkowski wrote:
JB So, normally, when the script runs, all snapshots finish in maybe a minute
JB total. However, on Sundays, it continues to take longer and longer. On
JB 2/25 it took 30 minutes, and this last Sunday, it took 2:11. The only
JB special thing
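The original script isn't shown in the thread; a generic per-filesystem
snapshot loop of the kind being described might look like this (pool name
hypothetical):

  # take a dated snapshot of every filesystem under tank
  for fs in $(zfs list -H -o name -t filesystem -r tank); do
      zfs snapshot "$fs@$(date +%Y%m%d)"
  done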
Eric Schrock wrote:
This can't be done due to the way ZFS property inheritance works in the
DSL. You can explicitly set it to the empty string, but you can't unset
the property altogether. This is exactly why the 'zfs get -s local'
option exists, so you can find only locally-set properties.
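A quick illustration of that option, with a hypothetical dataset name:

  # list only properties set locally on this dataset,
  # excluding inherited and default values
  zfs get -s local all tank/home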
Miroslav Pendev wrote:
I did some more testing; here is what I found:
- I can destroy older and newer snapshots, just not that particular snapshot
- I added some more memory (total 1GB); now after I start the destroy command, ~500MB of RAM is taken right away, and there is still ~200MB or so left.
o
Niclas Sodergard wrote:
On 3/29/07, Ed Plese wrote:
Is there any solution here other than moving the zone root to a smaller disk?
Set a quota (10G should work just fine) on the filesystem and then
perform the zone install. Afterwards remove the quota.
Thanks, seems to work just
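In command form, that trick looks roughly like this (dataset and zone
names hypothetical):

  # temporarily cap the filesystem so the zone installer's space
  # check passes, then lift the quota once the install completes
  zfs set quota=10g tank/zones/myzone
  zoneadm -z myzone install
  zfs set quota=none tank/zones/myzone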
It's hard to say precisely, but asymptotically you should see one znode and one
directory entry (plus a bit of associated tree overhead) for smaller symlinks
(56 bytes?), and an additional data block of 512 or 1024 bytes for larger
symlinks.