Hi All,
Is it possible to create a ZPool on SVM volumes? What are the limitations
of doing this?
On a Solaris machine, how many zpools can we create? Is there any
limit on the number of zpools per system?
-Mastahn
dudekula mastan wrote:
Is it possible to create a ZPool on SVM volumes ? What are the
limitations for this ?
Not as far as I am aware. libdiskmgmt gets in the
way - it protects you.
on a solaris machine, how many number of zpools we can create ? Is there
any limitation on number of zpools
Hello Louwtjie,
Monday, June 4, 2007, 9:14:26 AM, you wrote:
LB On 5/30/07, James C. McPherson [EMAIL PROTECTED] wrote:
Louwtjie Burger wrote:
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun SG-XPCIESAS-E-Z
card, 3Gb
Robert Milkowski wrote:
Hello Louwtjie,
Monday, June 4, 2007, 9:14:26 AM, you wrote:
LB On 5/30/07, James C. McPherson [EMAIL PROTECTED] wrote:
Louwtjie Burger wrote:
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun
I would suggest that this thread will be moved to an
apple-related
list since it has nothing to do with zfs anymore.
Hmm, I don't know how you figure this has nothing to do with zfs. This is all
about zfs and seems to me zfs-discuss is the perfect thread for it.
Once you switch over to zfs root, adding new hardware
should just behave
as what you expect on ufs root.
Copy /devices and /dev is just a one-time thing (as
part of
'installation') to setup the initial zfs root.
Ok, but what about the first boot? Why can't /devices and /dev be generated
I would suggest that this thread will be moved to an
apple-related
list since it has nothing to do with zfs anymore.
Hmm, I don't know how you figure this has nothing to do with zfs. This is all
about zfs and seems to me zfs-discuss is the perfect thread for it.
Because the discussion
Hello James,
Wednesday, June 13, 2007, 1:06:22 PM, you wrote:
JCM Robert Milkowski wrote:
Hello Louwtjie,
Monday, June 4, 2007, 9:14:26 AM, you wrote:
LB On 5/30/07, James C. McPherson [EMAIL PROTECTED] wrote:
Louwtjie Burger wrote:
I know the above mentioned kit (2530) is new, but has
Robert Milkowski wrote:
...
JCM Yes, my team's test plan did include ST2530 array attached
JCM to SAS hba.
But there's a 2530 with a RAID controller and SAS external ports.
To clarify, I was asking about expansion trays without any RAID
controllers - just a 2530 JBOD attached with dual links to a host
Robert Milkowski wrote:
...
JCM As far as I understand it, I do not think that a plain
JCM jbod version of the ST2530 is supported. I believe that
JCM a jbod attached to the ST2540 (fc-connected) is supported.
If it works it doesn't have to be supported.
and practically speaking, I expect
On Tue, 12 Jun 2007, Tim Cook wrote:
This pool should have 7 drives total, which it does, but for some reason
c4d0 is displayed twice. Once as online (which it is), and once as
unavail (which it is not).
What's the name of the 7th drive? Did you take all the drives from the
old system and
I have a system that is running Solaris 10 Update 3 TX with 1 zpool and 5
zones. Everything on it is running fine. I take the drive to my disk duplicator
and dupe it bit by bit to another drive, put the newly duped drive in the same
machine, and boot it up; everything boots fine. Then I do a
Hello,
as the president of the French OSUG [1], I'll give a talk about ZFS and zones
at RMLL [2] (the libre software meeting), and I have a few questions about Jeff
Bonwick's slides [3], especially about slide 11.
I just want to be sure I understand correctly; here is my understanding: (I hope I'm
not
Hello Bruno,
Wednesday, June 13, 2007, 3:45:07 PM, you wrote:
BB Hello,
BB as the president of the french OSUG [1], I'll give a talk about
BB ZFS and zones at RMLL [2] (libre software meeting) and I have few
BB questions about Jeff Bonwick's slides [3], especially for slide 11.
BB I just want
Hello,
I have the following situation:
1) A ZFS filesystem, created with zfs create:
- multipack/u01
2) Data created in said filesystem
3) A snapshot taken of this filesystem:
- multipack/[EMAIL PROTECTED]
4) A clone filesystem created from the snapshot:
- multipack/u09
multipack/u01
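The sequence described above can be sketched with the following commands. The dataset names follow the list, but the snapshot name (redacted above) and the sample data are hypothetical examples:

```shell
# Hypothetical reconstruction of the steps above; "@snap1" stands in for
# the redacted snapshot name.
zfs create multipack/u01                       # 1) create the filesystem
cp /etc/motd /multipack/u01/                   # 2) put some data in it
zfs snapshot multipack/u01@snap1               # 3) snapshot the filesystem
zfs clone multipack/u01@snap1 multipack/u09    # 4) clone from the snapshot
```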
Hi Lin,
A few moments after replying to your post, I had an idea. I had tweaked
almost every part of the script, but I couldn't figure out what the difference
was between the script and the manual execution.
The difference is (as I found later) that when I created the ZFS root fs by
On Wed, 13 Jun 2007, James C. McPherson wrote:
Robert Milkowski wrote:
...
JCM As far as I understand it, I do not think that a plain
JCM jbod version of the ST2530 is supported. I believe that
JCM a jbod attached to the ST2540 (fc-connected) is supported.
If it works it doesn't have to be
dudekula mastan wrote:
Is it possible to create a ZPool on SVM volumes ? What are the
limitations for this ?
Not as far as I am aware. libdiskmgmt gets in the
way - it protects you.
Should be able to. We've had some threads about ZFS on top of SVM.
dudekula mastan wrote:
Is it possible to create a ZPool on SVM volumes ? What are the limitations
for this ?
Not as far as I am aware. libdiskmgmt gets in the
way - it protects you.
This is incorrect. If you attempt to use the same underlying disks, then
libdiskmgmt will protect you.
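For readers following along, a minimal sketch of what "ZFS on top of SVM" looks like in practice. All device and metadevice names below are hypothetical examples, not taken from this thread:

```shell
# Hypothetical sketch: build a pool on an SVM metadevice.
# Create a simple one-slice SVM metadevice:
metainit d10 1 1 c1t2d0s0
# Then create a pool on the metadevice's block device:
zpool create tank /dev/md/dsk/d10
# zpool create should refuse with an "in use" error if the underlying
# slice already belongs to another pool or SVM configuration - that is
# the libdiskmgmt protection mentioned above.
```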
On Mon, 11 Jun 2007, Rick Mann wrote:
ZFS Readonly implemntation is loaded!
Is that a copy-n-paste error, or is that typo in the actual output?
It's a typo in the actual output.
On 13-Jun-07, at 1:14 PM, Rick Mann wrote:
From (http://www.informationweek.com/news/showArticle.jhtml;?
articleID=199903525)
... Croll explained, ZFS is not the default file system for
Leopard. We are exploring it as a file system option for high-end
storage systems with really large
I just want to be sure I understand correctly; here is my understanding: (I
hope I'm not totally wrong ;p)
The slide demonstrates how an existing file is modified. The boxes in
blue represent the existing data, and the green ones the new data. So
when an application wants to modify the existing
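The copy-on-write behavior described here can be illustrated with a toy model. This is a sketch of the general COW idea only, not ZFS code: a modified block is written to a fresh location, and only the pointer is switched afterward, so the old block survives (which is what makes snapshots cheap).

```python
# Toy copy-on-write store: blocks are never overwritten in place.
class CowStore:
    def __init__(self, data_blocks):
        self.blocks = list(data_blocks)             # append-only block storage
        self.table = list(range(len(data_blocks)))  # live logical->physical map

    def write(self, logical_index, new_data):
        # Write the new data to a fresh block instead of overwriting.
        self.blocks.append(new_data)
        # Repoint the logical block at the new copy (the "atomic" switch).
        self.table[logical_index] = len(self.blocks) - 1

    def read(self, logical_index):
        return self.blocks[self.table[logical_index]]

store = CowStore(["old-A", "old-B"])
store.write(1, "new-B")
assert store.read(1) == "new-B"       # reads see the new data
assert "old-B" in store.blocks        # the old block still exists
```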
From
(http://www.informationweek.com/news/showArticle.jhtml;?articleID=199903525)
---
[...]
Seeking to clarify a statement made on Monday by Brian Croll, senior director
of Mac OS X Product Marketing, to two InformationWeek reporters that Apple's
new Leopard operating system would not include
Toby Thain, et al,
I am guessing here, but to just be able to access
the FS data locally without the headaches of
verifying FS consistency, write caches, etc.
Mitchell Erblich
Toby Thain wrote:
On 13-Jun-07, at 1:14 PM, Rick Mann
Toby Thain wrote:
What possible use is read only ZFS?
A user of an OS that _does_ support read+write ZFS might, for
example, have one spare USB disk/drive.
The user may opt for ZFS for that one disk, gaining the benefits of
COW, rollback etc..
The user will be able to read (only) that
The whole read-only business sounds like baloney to me. Read-only ZFS
implies that the file system would be created elsewhere - and I don't know
if there will be continuing compatibility between
Solaris/Linux(FUSE)/FreeBSD implementations - so they would presumably
support read-only of Solaris'
So you can migrate all your ZFS volumes to HFS+ ;-)
Toby Thain [EMAIL PROTECTED] 6/13/2007 12:22 PM
On 13-Jun-07, at 1:14 PM, Rick Mann wrote:
From (http://www.informationweek.com/news/showArticle.jhtml;?
articleID=199903525)
... Croll explained, ZFS is not the default file system for
2007/6/10, arb [EMAIL PROTECTED]:
Hello, I'm new to OpenSolaris and ZFS so my apologies if my questions are naive!
I've got solaris express (b52) and a zfs mirror, but this command locks up my
box within 5 seconds:
% cmp first_4GB_file second_4GB_file
It's not just these two 4GB files, any
On June 13, 2007 9:14:48 AM -0700 Rick Mann [EMAIL PROTECTED] wrote:
From
(http://www.informationweek.com/news/showArticle.jhtml;?articleID=199903
525)
...
In a follow-up interview today, Croll explained, ZFS is not the default
file system for Leopard. We are exploring it as a file system
So it's been what, a day (2?) and no one has tried to import a pool on
the Leopard beta?
-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi,
I find that fchmod(2) on a ZFS filesystem can sometimes generate errno =
ENOSPC. However, this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
zfs`dsl_dir_tempreserve_space+0x4e
On 13-Jun-07, at 4:09 PM, Frank Cusack wrote:
On June 13, 2007 9:14:48 AM -0700 Rick Mann [EMAIL PROTECTED]
wrote:
From
(http://www.informationweek.com/news/showArticle.jhtml;?
articleID=199903
525)
...
In a follow-up interview today, Croll explained, ZFS is not the
default
file system
Al Hopper wrote:
On Wed, 13 Jun 2007, James C. McPherson wrote:
Robert Milkowski wrote:
...
JCM As far as I understand it, I do not think that a plain
JCM jbod version of the ST2530 is supported. I believe that
JCM a jbod attached to the ST2540 (fc-connected) is supported.
If it works it
OK, so I get the reason behind this message, but I do not understand
why we're unmounting the clone filesystem in the first place.
This is bug 6472202 'zfs rollback' and 'zfs rename' requires that clones be
unmounted.
Sorry about that,
--matt
[EMAIL PROTECTED] wrote:
I believe we should rather educate other people that st_size/24 is a bad
solution.
That's all well and good but fixing all clients, including potentially
really old ones, might not be feasible. Being correct doesn't help
our customers.
To summarize my
On Wed, Jun 13, 2007 at 05:27:18PM -0700, Matthew Ahrens wrote:
[EMAIL PROTECTED] wrote:
I believe we should rather educate other people that st_size/24 is a bad
solution.
That's all well and good but fixing all clients, including potentially
really old ones, might not be feasible.
On Thu, 14 Jun 2007, James C. McPherson wrote:
Al Hopper wrote:
On Wed, 13 Jun 2007, James C. McPherson wrote:
Robert Milkowski wrote:
...
JCM As far as I understand it, I do not think that a plain
JCM jbod version of the ST2530 is supported. I believe that
JCM a jbod attached to the ST2540
Manoj Joseph wrote:
Hi,
I find that fchmod(2) on a zfs filesystem can sometimes generate errno =
ENOSPC. However this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
zfs`dsl_dir_tempreserve_space+0x4e
On Wed, Jun 13, 2007 at 05:27:18PM -0700, Matthew Ahrens wrote:
To summarize my understanding of this issue: st_size on directories is
undefined; apps/libs which do anything other than display it are broken.
However, we should avoid exercising this bug in these broken apps if
possible.
Ed Ravin wrote:
As mentioned before, NetBSD's scandir(3) implementation was one. The
NetBSD project has fixed this in their CVS. OpenBSD and FreeBSD's scandir()
looks like another, I'll have to drop them a line.
...
Thanks much for investigating this and pushing for fixes!
--matt
On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:
As mentioned before, NetBSD's scandir(3) implementation was one. The
NetBSD project has fixed this in their CVS. OpenBSD and FreeBSD's scandir()
looks like another, I'll have to drop them a line.
I heard from an OpenBSD developer who
Matthew Ahrens wrote:
Manoj Joseph wrote:
Hi,
I find that fchmod(2) on a zfs filesystem can sometimes generate errno
= ENOSPC. However this error value is not in the manpage of fchmod(2).
Here's where ENOSPC is generated.
zfs`dsl_dir_tempreserve_impl
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME    STATE     READ WRITE CKSUM
tank    ONLINE       0