On May 20, 2009, at 6:32 PM, zhihui Chen wrote:
I have created a pool on external storage with B114. Then I export this pool
and import it on another system with B110. But this import fails and shows the
error: cannot import 'tpool': pool is formatted using a newer ZFS version.
Did some big change in ZFS in B114 lead to this compatibility issue?
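For what it's worth, one way around this, assuming the pool has to stay importable on the older build, is to create it on B114 with an explicit older on-disk version. A rough sketch (the version number 14 is an assumption; check what the B110 system actually reports):

(on B110)  zpool upgrade -v          # lists the highest pool version this build supports
(on B114)  zpool create -o version=14 tpool <devices>

A pool created at the older version won't get the B114-only features, but it should import cleanly on both builds.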
Oh I forgot the more important question.
Importing all the user quota settings: currently this is done as a long file of "zfs
set" commands, which is taking a really long time. For example,
yesterday's import is still running.
Are there bulk-import solutions? Like zfs set -f file.txt or similar?
If not
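A minimal sketch of driving the settings from a flat file instead, assuming a hypothetical two-column file quotas.txt (user name and quota per line) and a dataset tank/home:

while read user q; do
    zfs set "userquota@$user=$q" tank/home
done < quotas.txt

Each zfs set still runs one at a time, so this tidies the process more than it speeds it up.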
Matthew Ahrens wrote:
Thanks for the feedback!
Thank you for the work, it sure is great!
This should work, at least on Solaris clients. Perhaps you can only
request information about yourself from the client?
Odd, but I just assumed it wouldn't work and didn't check further. But
t
-bash-3.2# zpool export exchbk
cannot remove device links for 'exchbk/exchbk-2': dataset is busy
This is a zvol used as a COMSTAR iSCSI backend:
-bash-3.2# stmfadm list-lu -v
LU Name: 600144F0EAC009004A0A4F410001
Operational Status: Offline
Provider Name : sbd
Alias
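One possible workaround sketch, assuming the dataset is busy only because COMSTAR still holds the logical unit (the GUID is the one from the list-lu output above): take the LU offline and delete it before the export, then recreate it after the import. Deleting the LU removes the SCSI definition, not the zvol's data.

-bash-3.2# stmfadm offline-lu 600144F0EAC009004A0A4F410001
-bash-3.2# stmfadm delete-lu 600144F0EAC009004A0A4F410001
-bash-3.2# zpool export exchbk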
On Wed, 20 May 2009, Wong, James (Nagi Long) wrote:
I understand ZFS has high write performance, even with lots of snapshots.
How about read performance? To me, it will be very fragmented due to
the COW. And no matter how carefully you allocate the blocks, you can
only put a SHARED block in the p
Not sure if this is a wacky question.
Given that a slog device does not really need much more than 10 GB: if I were
to use a pair of X25-Es (or STEC devices or whatever) in a mirror as the boot
device and then either 1. created a loopback file vdev or 2. used a separate
mirrored slice for the slog, would this ca
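Option 2 might look something like this, assuming a data pool named tank and a spare slice s4 set aside on each boot SSD (the device names are hypothetical):

zpool add tank log mirror c0t0d0s4 c0t1d0s4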
Hi,
I'd like to know more about this topic. However, googling doesn't give
me much. If you guys know of any info, please point me to it.
I understand ZFS has high write performance, even with lots of snapshots.
How about read performance? To me, it will be very fragmented due to
the COW. A
> "re" == Richard Elling writes:
re> Whoa.
re> The slog is a top-level vdev like the others. The current
re> situation is that loss of a top-level vdev results in a pool
re> that cannot be imported.
this taxonomy is wilfully ignorant of the touted way pools will keep
workin
On Wed, May 20, 2009 at 12:42, Miles Nordin wrote:
>> "djm" == Darren J Moffat writes:
> djm> a) it was highly dangerous and involved using multiple
> djm> different zfs kernel modules as well as
>
> however...utter hogwash! Nothing is ``highly dangerous'' when your
> pool is completely
Jorgen,
"quota user1" only prints out information if user1's quota is exceeded.
Try "quota -v user1".
e.g.
(server)
suzuki# zfs set userquota@lling=10m foo/fs
suzuki# share
-@foo/fs        /foo/fs   rw   ""
(client)
headers# quota -v lling
Disk quotas for lling (uid 23498):
Fil
Jorgen Lundman wrote:
I have been playing around with osol-nv-b114 version, and the ZFS user
and group quotas.
First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone
else involved).
Thanks for the feedback!
I was unable to get ZFS quota to work with rquota. (i.e., NFS mount t
> "djm" == Darren J Moffat writes:
djm> I do; because I've done it to my own personal data pool.
djm> However it is not a procedure I'm willing to tell anyone how
djm> to do - so please don't ask -
k, fine, fair enough and noted.
djm> a) it was highly dangerous and involved usi
> "re" == Richard Elling writes:
re> in the case of a properly exported pool, we should be allowed
re> to import sans slog.
seems so, but the non properly exported case is still important. for
example NFS HA clusters would stop working if slogs were always
ignored on import---you ha
You can add disks to a RAID-Z configuration. The following shows how
to convert a storage pool with one RAID-Z device to a storage pool with
two RAID-Z devices of 3 disks each.
# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
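The step that actually grows the pool is a single zpool add; a sketch with hypothetical disk names:

# zpool add rpool raidz c1t3d0 c1t4d0 c1t5d0
# zpool status

Afterwards zpool status shows two raidz1 top-level vdevs, and writes are striped across both. Note that this adds a second RAID-Z group; it does not widen the existing one.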
On Wed, 20 May 2009, Darren J Moffat wrote:
> How Sun Services reports the status of escalations to customers under
> contract is not a discussion for a public alias like this so I won't
> comment on this.
Heh, but maybe it should be a discussion for some internal forum; more
information = less a
On Wed, 20 May 2009, Darren J Moffat wrote:
> Why do you think there is no progress ?
Sorry if that's a wrong assumption, but I posted questions regarding it to
this list with no response from a Sun employee until yours, and the
engineer assigned to my support ticket was unable to provide any
inf
Hi there,
I'm building a small NAS with 5x 1 TB disks. The disks currently contain
some data, with NTFS as the filesystem, and they aren't in a RAID.
Now I'm wondering whether it's possible to add the parity later, so that I add
the disks to the pool one by one, and when I add the last disk, I enable the parity.
(i
>
> Currently, no.
>
> If you haven't filled the pool with more than one disk's worth of data, you
> can juggle the data around:
>
> * Create a new pool with just one disk, copy everything over (either via
> zfs send+recv or just plain copy).
> * Create a 4 disk raidz1 with one disk missing:
> * mkfile -n 250g /
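A sketch of where that sequence is headed, with hypothetical names (the sparse file stands in for the disk that can't be freed up yet):

mkfile -n 950g /var/tmp/fakedisk            # sized to roughly match the real disks
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0 /var/tmp/fakedisk
zpool offline tank /var/tmp/fakedisk        # degrade the pool before filling it
# ... copy the data in, wipe the freed real disk, then:
zpool replace tank /var/tmp/fakedisk c1t4d0

The pool runs degraded until the replace completes, so a single real-disk failure during the copy would lose the data.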
Hi,
I'm new to this mailing list and my English is very bad.
I installed OpenSolaris 2008.11, and I created a pool_nas raidz1 with 3 disks:
zpool create -f pool_nas raidz1 c3d0 c3d1 c4d0
zpool status pool_nas
pool: pool_nas
state: ONLINE
scrub: none requested
config:
NAME            STATE
Paul B. Henson wrote:
I was checking with Sun support regarding this issue, and they say "The CR
currently has a high priority and the fix is understood. However, there is
no eta, workaround, nor IDR."
If it's a high priority, and it's known how to fix it, I was curious as to
why there has been
I have been playing around with osol-nv-b114 version, and the ZFS user
and group quotas.
First of all, it is fantastic. Thank you all! (Sun, Ahrens and anyone
else involved).
I'm currently copying over one of the smaller user areas, and setting up
their quotas, so I have yet to start large
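For anyone else trying the b114 properties, the basic shape is (a sketch; the user and dataset names are hypothetical):

zfs set userquota@alice=10G tank/home
zfs get userquota@alice tank/home
zfs userspace tank/home          # per-user usage/quota summary

Group quotas work the same way via groupquota@.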