On Tue, 1 Jul 2008, Brian McBride wrote:
Customer:
I would like to know more about zfs's checksum feature. I'm guessing
it is something that is applied to the data and not the disks (as in
raid-5).
Data and metadata.
For performance reasons, I turned off checksum on our zfs filesystem
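For reference, checksums are a per-dataset property, so checking and changing them is
a one-liner with zfs(1M); the dataset name below is only an example:
# zfs get checksum zp01       # show the current setting (the default is on)
# zfs set checksum=off zp01   # affects only blocks written from this point on
# zfs set checksum=on zp01    # re-enable; existing blocks keep whatever they were written with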
Hi,
How difficult would it be to write some code to change the GUID of a pool?
Thanks
Peter
How difficult would it be to write some code to change the GUID of a pool?
As a recreational hack, not hard at all. But I cannot recommend it
in good conscience, because if the pool contains more than one disk,
the GUID change cannot possibly be atomic. If you were to crash or
lose power in
Let ZFS deal with the redundancy part. I'm not
counting the redundancy offered by traditional RAID, as you can
see just from the posts in this forum:
1. It doesn't work.
2. It bites when you least expect it to.
3. You can do nothing but resort to tapes and a LOT of
aspirin when you get bitten.
Can you try just deleting the zpool.cache file and letting it rebuild on import? I
would guess a listing of your old devices was in there when the system came
back up with the new stuff. The OS stayed the same.
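Roughly, that looks like the following; the pool name here is just an example, and
removing the cache file only affects which pools get opened automatically at boot:
# rm /etc/zfs/zpool.cache
# zpool import                # scans attached devices and lists importable pools
# zpool import -f zp01        # -f only if the pool still looks active on the old host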
On Wed, Jul 2, 2008 at 9:55 AM, Peter Pickford [EMAIL PROTECTED] wrote:
Hi,
How difficult would it be to write some code to change the GUID of a pool?
Not too difficult - I did it some time ago for a customer, who wanted it badly.
I guess you are trying to import pools cloned by the storage
Dave Miner wrote:
jan damborsky wrote:
...
[2] dump and swap devices will be considered optional during
fresh installation and will be created only if there is
appropriate space available on the disk provided.
Minimum disk space required will
Jeff Bonwick wrote:
To be honest, it is not quite clear to me how we might utilize
dumpadm(1M) to help us calculate/recommend the size of the dump device.
Could you please elaborate more on this?
dumpadm(1M) -c specifies the dump content, which can be kernel, kernel plus
current process, or all
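For reference, the content setting is queried and changed along these lines (the
paths in the output will be whatever dumpadm reports on the machine in question):
# dumpadm                 # prints the current dump device, savecore directory and content
# dumpadm -c kernel       # dump kernel pages only (smallest dumps)
# dumpadm -c all          # dump all memory pages (largest dumps)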
Hi Robert,
you are quite welcome !
Thank you very much for your comments.
Jan
Robert Milkowski wrote:
Hello jan,
Tuesday, July 1, 2008, 11:09:54 AM, you wrote:
jd Hi all,
jd Based on the further comments I received, following
jd approach would be taken as far as calculating default
Hi
I have managed to get this:
HOSTNAME$ zpool status
pool: zp01
state: ONLINE
scrub: resilver completed with 0 errors on Wed Jul 2 11:55:27 2008
config:
NAME      STATE   READ WRITE CKSUM
zp01      ONLINE     0     0     0
c0t2d0    ONLINE     0     0
Hi,
According to the Sun Handbook, there is a new array :
SAS interface
12 disks SAS or SATA
ZFS could be used nicely with this box.
There is an another version called
J4400 with 24 disks.
Doc is here :
http://docs.sun.com/app/docs/coll/j4200
Does someone know price and availability for these
This array has not been formally announced yet and information on
general availability is not available as far as I know. I saw the
docs last week and the product was supposed to be launched a couple of
weeks ago.
Unofficially this is Sun's continued push to develop cheaper storage
Hi Tommaso
Have a look at the man page for zfs and the attach section in
particular, it will do the job nicely.
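In other words, something like this (the device names below are made up; for a root
pool the new device must be a slice, and you also need boot blocks on it afterwards,
e.g. via installgrub on x86):
# zpool attach rpool c5t0d0s0 c5t4d0s0    # attach a mirror side and start the resilver
# zpool status rpool                      # watch the resilver complete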
Enda
Tommaso Boccali wrote:
Ciao,
the root filesystem of my thumper is a ZFS pool with a single disk:
bash-3.2# zpool status rpool
pool: rpool
state: ONLINE
scrub: none
Brian McBride wrote:
I have some questions from a customer about zfs checksums.
Could anyone answer some of these? Thanks.
Brian
Customer:
I would like to know more about zfs's checksum feature. I'm guessing
it is something that is applied to the data and not the disks (as in
raid-5).
On 02 July, 2008 - Mark McDonald sent me these 0,7K bytes:
Hi
I have managed to get this:
HOSTNAME$ zpool status
pool: zp01
state: ONLINE
scrub: resilver completed with 0 errors on Wed Jul 2 11:55:27 2008
config:
NAME      STATE   READ WRITE CKSUM
zp01
On Jun 30, 2008, at 19:19, Jeff Bonwick wrote:
Dump is mandatory in the sense that losing crash dumps is criminal.
Swap is more complex. It's certainly not mandatory. Not so long ago,
swap was typically larger than physical memory.
These two statements kind of imply that dump and swap are
David Magda wrote:
Quite often swap and dump are the same device, at least in the
installs that I've worked with, and I think the default for Solaris
is that if dump is not explicitly specified it defaults to swap, yes?
Is there any reason why they should be separate?
I believe
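For what it's worth, with a ZFS root the two can be split onto separate zvols; a rough
sketch (dataset names and sizes below are only examples, not a recommendation):
# zfs create -V 2g rpool/dump              # dedicated dump volume
# dumpadm -d /dev/zvol/dsk/rpool/dump      # point the dump subsystem at it
# zfs create -V 2g rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap         # add swap as a separate device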
Mark,
If you don't want to backup the data, destroy the pool, and
recreate the pool as a mirrored configuration, then another
option is to attach two more disks to create 2 mirrors of 2
disks.
See the output below.
Cindy
# zpool create zp01 c1t3d0 c1t4d0
# zpool status
pool: zp01
state:
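Presumably that example continued with something along these lines (the two
additional device names here are hypothetical):
# zpool attach zp01 c1t3d0 c1t5d0      # mirror the first disk
# zpool attach zp01 c1t4d0 c1t6d0      # mirror the second disk
# zpool status zp01                    # both top-level vdevs now show up as mirrors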
David Magda wrote:
On Jun 30, 2008, at 19:19, Jeff Bonwick wrote:
Dump is mandatory in the sense that losing crash dumps is criminal.
Swap is more complex. It's certainly not mandatory. Not so long ago,
swap was typically larger than physical memory.
These two statements kind of imply
On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote:
Quite often swap and dump are the same device, at least in the
installs that I've worked with, and I think the default for Solaris
is that if dump is not explicitly specified it defaults to swap, yes?
Is there any reason why
Depends on what benefit you are looking for.
If you are looking for ways to improve redundancy you can still benefit from ZFS:
a) ZFS snapshots will give you the ability to withstand soft/user
errors.
b) ZFS checksum...
c) ZFS can mirror (sync or async) a 6140 LUN to another
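As a quick illustration of (a), snapshots and rollback are one-liners (the dataset
and snapshot names are just examples):
# zfs snapshot tank/data@before-change    # cheap, near-instant point-in-time copy
# zfs rollback tank/data@before-change    # undo soft/user errors made since then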
Availability may depend on where you are located, but the J4200 and J4400 are
available for most regions.
This equipment is engineered to go well with Sun open storage components
like ZFS.
Besides the price advantage, the J4200 and J4400 offer unmatched bandwidth to hosts
or to stacking units.
You can get
Hi all.
We are currently rolling out a number of iSCSI servers based on Thumpers
(x4500) running both Solaris 10 and OpenSolaris build 90+. The
targets on the machines are based on ZVOLs. Some of the clients use those
iSCSI disks to build mirrored zpools. As the volume size on the x4500
can easily
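For anyone following along, a ZVOL-backed target on that vintage of OpenSolaris looks
roughly like this (pool, volume name and size are examples only):
# zfs create -V 100g tank/vol0        # carve a zvol out of the pool
# zfs set shareiscsi=on tank/vol0     # export it through the iSCSI target daemon
# iscsitadm list target               # confirm the target is being advertised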
Christiaan Willemsen wrote:
Hi Richard,
Richard Elling wrote:
It should cost less than a RAID array...
Advertisement: Sun's low-end servers have 16 DIMM slots.
Sadly, those are far more expensive than what I have here from our
own server supplier...
ok, that pushed a button. Let's
Tommaso Boccali wrote:
Ciao,
the root filesystem of my thumper is a ZFS pool with a single disk:
bash-3.2# zpool status rpool
pool: rpool
state: ONLINE
scrub: none requested
config:
NAME      STATE   READ WRITE CKSUM
rpool     ONLINE     0     0     0
Mike Gerdts wrote:
On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote:
Quite often swap and dump are the same device, at least in the
installs that I've worked with, and I think the default for Solaris
is that if dump is not explicitly specified it defaults to swap, yes?
sanjay nadkarni (Laptop) wrote:
Mike Gerdts wrote:
On Wed, Jul 2, 2008 at 10:08 AM, David Magda [EMAIL PROTECTED] wrote:
Quite often swap and dump are the same device, at least in the
installs that I've worked with, and I think the default for Solaris
is that if dump is not
Kyle McDonald wrote:
David Magda wrote:
Quite often swap and dump are the same device, at least in the
installs that I've worked with, and I think the default for Solaris
is that if dump is not explicitly specified it defaults to swap, yes?
Is there any reason why they should be
A few things to try: put in a different ethernet card if you have one,
on one or more ends. Realtek works, but I've been unimpressed with
their performance in the past. An Intel x1 PCI Express card will only
run you around $40, and I've seen much better results with them.
I first
On Wed, Jul 2, 2008 at 13:16, Juho Mäkinen [EMAIL PROTECTED] wrote:
Then I went and bought an Intel PCI Gigabit Ethernet card for 25€ which seems
to have solved the problem. I still need to do some testing though to verify.
Glad to hear it.
Is hardware checksum offloading enabled on either
On Wed, Jul 02, 2008 at 04:49:26AM -0700, Ben B. wrote:
According to the Sun Handbook, there is a new array :
SAS interface
12 disks SAS or SATA
ZFS could be used nicely with this box.
Doesn't seem to have any NVRAM storage on board, so seems like JBOD.
There is an another version called
So when are they going to release the MSRP?
On 7/2/08, Mertol Ozyoney [EMAIL PROTECTED] wrote:
Availability may depend on where you are located, but the J4200 and J4400 are
available for most regions.
This equipment is engineered to go well with Sun open storage components
like ZFS.
Besides the price
I created a filesystem dedicated to /var/log so I could keep compression on the
logs. Unfortunately, this caused problems at boot time because my log ZFS
dataset couldn't be mounted because /var/log already contained bits. Some of
that, to be fair, could be fixed by having some SMF services
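For the record, the dataset itself is simple to create; the hard part is the boot-time
ordering described above (names here are examples):
# zfs create -o compression=on rpool/varlog
# zfs set mountpoint=/var/log rpool/varlog   # the mount fails if /var/log is not empty,
                                             # which is exactly the boot-time problem above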
Dan McDonald wrote:
I created a filesystem dedicated to /var/log so I could keep compression on
the logs. Unfortunately, this caused problems at boot time because my log
ZFS dataset couldn't be mounted because /var/log already contained bits.
Some of that, to be fair, could be fixed by
Dan McDonald wrote:
I created a filesystem dedicated to /var/log so I could keep compression on
the logs. Unfortunately, this caused problems at boot time because my log
ZFS dataset couldn't be mounted because /var/log already contained bits.
Some of that, to be fair, could be fixed by
I was making my way through the evil tuning guide and noticed a couple
updates that seem appropriate. I tried to create an account to be
able to add this into the discussion tab but account creation seems
to be a NOP.
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#RFEs
-
Remember, you can not delete a device, so be careful what you add.
Orvar Korvar wrote:
Remember, you can not delete a device, so be careful what you add.
You can detach disks from mirrors.
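For example (hypothetical pool and device names):
# zpool detach tank c1t2d0    # removes one side of a mirror, leaving a plain single-disk vdev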
-- richard
On Wed, 02 Jul 2008 13:41:18 -0700
Richard Elling [EMAIL PROTECTED] wrote:
Orvar Korvar wrote:
Remember, you can not delete a device, so be careful what you add.
You can detach disks from mirrors.
So, a mirror of two disks becomes a system of two separate disks?
--
Dick Hoogendijk --
dick hoogendijk wrote:
On Wed, 02 Jul 2008 13:41:18 -0700
Richard Elling [EMAIL PROTECTED] wrote:
Orvar Korvar wrote:
Remember, you can not delete a device, so be careful what you add.
You can detach disks from mirrors.
So, a mirror of two disks becomes a system of
Can't say about /var/log, but I have a system here with /var on zfs.
My assumption was that, not just /var/log, but essentially all of /var is
supposed to be runtime cruft, and so can be treated equally.
Akhilesh Mritunjai wrote:
Can't say about /var/log, but I have a system here with /var on zfs.
My assumption was that, not just /var/log, but essentially all of /var is
supposed to be runtime cruft, and so can be treated equally.
Not really. Please see the man page for filesystem for
to answer my own question -- yes, it worked beautifully (zpool import -f tank).
Now to figure out why my network connection doesn't want to work after being
set up the exact same way again :(
# rm /etc/zfs/zpool.cache
# zpool import
pool: zfs
id: 3801622416844369872
state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
I'll have to do some thunkin' on this. We just need to get back one of the
disks; both would be great, but one would do the trick.
After all other avenues have been tried, one thing that you can try is to use
the 2008.05 livecd and boot into the livecd without installing the OS. Import
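From the live environment the import would look something like this (the pool name is
the one from the listing above; -R keeps it from mounting over the live CD's own
filesystems):
# zpool import -f -R /mnt zfs    # force-import under an alternate root
# zpool status zfs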