It starts with Z, which makes it one of the last to be considered if
it's listed alphabetically?
Nathan.
Rahul wrote:
hi
can you give some disadvantages of the ZFS file system??
plzz its urgent...
help me.
This message posted from opensolaris.org
Tell you what, go away, do some work on your own reading around the topics
you're interested in, and when you come back try asking some more intelligent
questions instead of expecting us to do everything for you.
You'll find you get much more helpful replies that way. Right now you come
My server runs S10u5. All slices are UFS. I run a couple of sparse
zones on a separate slice mounted on /zones.
When S10u6 comes out, booting off ZFS will become possible. That is great
news. However, will it be possible to have those zones I run now too?
I always understood ZFS and root zones are
Rahul wrote:
hi
can you give some disadvantages of the ZFS file system??
plzz its urgent...
help me.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Lori Alt wrote:
Alan Burlison wrote:
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool/ROOT  5.58G  53.4G    18K  legacy
What's the legacy mount for? Is it related to zones?
Basically, it means that we don't want it mounted at all
because it's a placeholder dataset. It's
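The behavior of a legacy mount point can be sketched from the shell. This is a rough illustration, not from the thread itself; the dataset name is taken from the listing above and the mount point path is hypothetical:

```shell
# A dataset with mountpoint=legacy is skipped by 'zfs mount -a';
# ZFS never mounts it automatically.
zfs get mountpoint pool/ROOT

# If you did want it mounted, you would have to do so explicitly,
# via mount(1M) or an /etc/vfstab entry (path hypothetical):
mount -F zfs pool/ROOT /mnt/root

# Marking a placeholder dataset as legacy in the first place:
zfs set mountpoint=legacy pool/ROOT
```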
Did anybody ever get this card working? SuperMicro only have Windows and Linux
drivers listed on their site. Do Sun's generic drivers work with this card?
On Mon, Aug 4, 2008 at 8:02 AM, Ross [EMAIL PROTECTED] wrote:
Did anybody ever get this card working? SuperMicro only have Windows and
Linux drivers listed on their site. Do Sun's generic drivers work with this
card?
Still waiting to buy a set. I've already got the SuperMicro Marvell
Todd E. Moore wrote:
I'm used to using fstat() and other calls to get atime, ctime, and mtime
values, but I understand that the znode also stores a file's creation
time in its crtime attribute.
Which system call can I use to retrieve this information?
You can use the getattrat() or
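For a quick check from the shell rather than C, the creation time is also exposed through ls's system-attribute options on Solaris/illumos, assuming a release new enough to have them (the file path here is hypothetical):

```shell
# Show only the creation timestamp of a file on ZFS:
ls -% crtime /tank/somefile

# Or show all system attributes in compact form:
ls -/ c /tank/somefile
```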
Did anybody ever find out if this option worked? Does setting this hidden
option mean that drives are always listed and named according to the order they
are physically connected?
Ross
Hi Kent:
So, in using the lsiutil utility, what do I find,
but the following option: (this was under
Darren J Moffat wrote:
Lori Alt wrote:
Alan Burlison wrote:
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool/ROOT  5.58G  53.4G    18K  legacy
What's the legacy mount for? Is it related to zones?
Basically, it means that we don't want it mounted at all
I'm trying to import a pool I just exported but I can't, even -f doesn't help.
Every time I try I'm getting an error:
cannot import 'rc-pool': one or more devices is currently unavailable
Now I suspect the reason it's not happy is that the pool used to have a ZIL :)
However I know the pool
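A rough sketch of the recovery path, assuming the problem really is the removed log device. Note that on builds of that era a missing ZIL blocked import outright; the `-m` option for importing with a missing log device only exists on later ZFS releases:

```shell
# List importable pools and see which device is reported unavailable:
zpool import

# Force the import (will still fail if the log device is required):
zpool import -f rc-pool

# On later ZFS versions only: import despite a missing log device.
zpool import -m -f rc-pool
```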
Lori Alt wrote:
Darren J Moffat wrote:
Lori Alt wrote:
Alan Burlison wrote:
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool/ROOT  5.58G  53.4G    18K  legacy
What's the legacy mount for? Is it related to zones?
Basically, it means that we don't
Ross wrote:
I'm trying to import a pool I just exported but I can't, even -f doesn't
help. Every time I try I'm getting an error:
cannot import 'rc-pool': one or more devices is currently unavailable
Now I suspect the reason it's not happy is that the pool used to have a ZIL :)
Richard Elling wrote:
Ross wrote:
I'm trying to import a pool I just exported but I can't, even -f doesn't
help. Every time I try I'm getting an error:
cannot import 'rc-pool': one or more devices is currently unavailable
Now I suspect the reason it's not happy is that the pool used to
Machine is running x86 snv_94 after a recent upgrade from OpenSolaris 2008.05.
ZFS and zpool reported no troubles except suggesting an upgrade from ver. 10 to
ver. 11. Seemed like a good idea at the time. System was up for several days after
that point, then taken down for some unrelated maintenance.
Seymour Krebs wrote:
Machine is running x86 snv_94 after recent upgrade from opensolaris 2008.05.
ZFS and zpool reported no troubles except suggesting an upgrade from ver. 10
to ver. 11. Seemed like a good idea at the time. System up for several days
after that point then took down for
Michael Schuster wrote:
Lori Alt wrote:
Darren J Moffat wrote:
Lori Alt wrote:
Alan Burlison wrote:
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool/ROOT  5.58G  53.4G    18K  legacy
What's the legacy mount for? Is it related to zones?
--
Via iPhone 3G
On 04-Aug-08, at 19:46, Lori Alt [EMAIL PROTECTED] wrote:
I'll try to help, but I'm confused by a few things. First, when
you say that you upgraded from OpenSolaris 2008.05 to snv_94,
what do you mean? Because I'm not sure how one upgrades
an IPS-based release to the
The first attempt at this went well...
Anyway, he meant updating to the latest Indiana repo, which is based
on snv_94.
Regards,
-mg
--
Via iPhone 3G
On 04-Aug-08, at 19:46, Lori Alt [EMAIL PROTECTED] wrote:
Seymour Krebs wrote:
Machine is running x86 snv_94 after recent upgrade from
Did you do the extra required grub step between the 'pkg image-update' and
rebooting? If I recall correctly, it needs to happen once between snv_86 (which
I think is stock OS2008.05) and snv_89+
I think the grub step is documented at opensolaris.org in the downloads section
where it talks about
Hey folks,
just saw more cool news this morning - Nexenta Systems released
documentation for its remote API and a Windows SDK with demos for accessing
NexentaStor. The news itself:
http://www.nexenta.com/corp/index.php?option=com_content&task=view&id=154&Itemid=56
ZFS and the rest of appliance
On Aug 3, 2008, at 8:46 PM, Rahul wrote:
hi
can you give some disadvantages of the ZFS file system??
plzz its urgent...
help me.
So, it looks like only snv_94 is capable of understanding the upgraded zfs
pool.
[EMAIL PROTECTED]:~# zpool history
no pools available
[EMAIL PROTECTED]:~# zpool import
pool: rpool
id: 17601658646371843627
state: UNAVAIL
status: The pool was last accessed by another system.
action: The
Created and shared a zfs pool containing user file systems on a Solaris server.
Mounted this on a Solaris client. Can view the user file systems on the client but
not the user files. If I mount the user file systems individually on the client, then
I can see the files. Both running Solaris 10. What is going wrong? Thanks
On Mon, Aug 4, 2008 at 16:26, John Stoneback [EMAIL PROTECTED] wrote:
Created and shared zfs pool containing user file systems on Solaris server.
Mount this on Solaris client. Can view user file systems on client but not
user files. If I mount the user file systems on the client, then can see
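The usual explanation for this symptom is that each child ZFS file system is a separate NFS share: mounting only the parent shows empty directories where the children should be. A hedged sketch of the fix (dataset, server, and path names are hypothetical):

```shell
# On the server: share the parent; child datasets such as
# pool/users/alice inherit the sharenfs property.
zfs set sharenfs=on pool/users

# On the client: each file system must be mounted on its own,
# not just the top-level one.
mount -F nfs server:/pool/users/alice /home/alice
```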
On Mon, Aug 4, 2008 at 6:49 AM, Tim [EMAIL PROTECTED] wrote:
really had the motivation or the cash to do so yet. I've been keeping my
eye out for a board that supports the opteron 165 and the wider lane dual
pci-E slots that isn't strictly a *gaming* board. I'm starting to think the
PS (prescript?): it's considered good form to reply all so that other
people can see and thus reply to your question as well. After all, I
could have been hit by a bus between the last message and this one ;)
On Mon, Aug 4, 2008 at 16:42, John P. Stoneback [EMAIL PROTECTED] wrote:
Will,
Thanks
Thanks for the link. I'll consider those, but it still means a new CPU, and
it appears it does not support any of the opteron line-up.
On Mon, Aug 4, 2008 at 3:58 PM, Brandon High [EMAIL PROTECTED] wrote:
On Mon, Aug 4, 2008 at 6:49 AM, Tim [EMAIL PROTECTED] wrote:
really had the
And I can certainly vouch for that series of chipsets... I have a
750a-sli chipset (the one below the 790) and the SATA ports (in AHCI
mode) Just Work(tm) under nevada / opensolaris.
I'm yet to give it a whirl on S10, mostly as I pretty much run nevada
everywhere... As S10 does indeed have an
On Mon, Aug 4, 2008 at 2:52 PM, Tim [EMAIL PROTECTED] wrote:
Thanks for the link. I'll consider those, but it still means a new CPU, and
it appears it does not support any of the opteron line-up.
It should support any AM2/AM2+ dual-core Opteron like the 1220, etc.
as well as the quad-core
bh == Brandon High [EMAIL PROTECTED] writes:
nk == Nathan Kroenert [EMAIL PROTECTED] writes:
nk And I can certainly vouch for that series of chipsets... I
nk have a 750a-sli chipset (the one below the 790)
um...what?
750a is an nVidia chip
np == Neal Pollack [EMAIL PROTECTED] writes:
wj == wan jm [EMAIL PROTECTED] writes:
np Yes, it's too easy to administer. This makes it rough to
np charge a lot as a sysadmin.
yeah, sure, until you get a simple question like this:
wj there are two disks in one ZFS pool used as
re == Richard Elling [EMAIL PROTECTED] writes:
pf == Paul Fisher [EMAIL PROTECTED] writes:
re I was able to reproduce this in b93, but might have a
re different interpretation
You weren't able to reproduce the hang of 'zpool status'?
Your 'zpool status' was after the FMA fault kicked
Miles Nordin wrote:
The other is the requirement for a workaround of the BA/B2 stepping TLB bug.
This was fixed some months ago, and it should be hard to find
the old B2 chips anymore (not many were made or sold).
-- richard
Thanks for the help folks, esp. Mikee, who had experienced a similar problem and
provided a concise solution.
Basically, after an excruciating download of the sxce_b94.iso, I was able to
boot from the DVD and 'zpool import -f rpool'.
This gave me failures to mount (x4, "unable to create mount point"), so I
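The "unable to create mount point" failures on a forced root-pool import can usually be sidestepped by importing under an alternate root, so stale mount points don't collide with the live DVD environment. A hedged sketch (the altroot and dataset names are hypothetical):

```shell
# Import the pool relocated under /a instead of /:
zpool import -f -R /a rpool

# Then inspect and repair any dataset whose mountpoint is wrong:
zfs list -o name,mountpoint -r rpool
zfs set mountpoint=/export/home rpool/export/home
```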