Generally, you choose your data pool config based on data size,
redundancy, and performance requirements. If those are all satisfied with
your single mirror, the only thing left for you to do is think about
splitting your data off onto a separate pool for better performance
etc. (Because
Hi. I have a development system on Intel commodity hardware with a 500G ZFS
root mirror. I have another 500G drive same as the other two. Is there any
way to use this disk to good advantage in this box? I don't think I need any
more redundancy, I would like to increase performance if possible. I
Thanks guys.
I'm only planning to move some directories. Not the complete dataset.
Just a couple of GBs.
Would it be safe to use mv?
Of course, provided your system doesn't crash during the move.
Using rsync -avPHK --remove-source-files SRC/ DST/
isn't that just the same as copying the files?
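For moving a few directories between datasets, the rsync invocation mentioned above can be wrapped as follows. This is a minimal sketch; the source and destination paths are placeholders, and unlike a plain mv across filesystems, rsync can resume a partial transfer and only deletes each source file after it has been copied.

```shell
# Placeholder dataset mount points; adjust to your layout.
SRC=/tank/old/projects
DST=/tank/new/projects

# Archive mode, progress/partial, preserve hard links, keep dir symlinks;
# each source file is removed only after it transfers successfully.
rsync -avPHK --remove-source-files "$SRC"/ "$DST"/

# rsync leaves the now-empty source directory tree behind; clean it up.
find "$SRC" -depth -type d -empty -delete
```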
Hans J. Albertsson hans.j.alberts...@branneriet.se wrote:
I think the problem is with disks that are 4k organised, but report
their blocksize as 512.
If the disk reports its blocksize correctly as 4096, then ZFS should
not have a problem.
At least my 2TB Seagate Barracuda disks seemed
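You can check what sector size ZFS actually recorded for a pool. A sketch, assuming a placeholder pool name "tank"; exact zdb output varies between releases:

```shell
# ashift=9 means 512-byte sectors, ashift=12 means 4 KiB.
# The value is fixed per vdev at creation time and cannot be changed later.
zdb -C tank | grep ashift
```

If a 4K drive lied about its sector size at pool creation, the pool ends up with ashift=9 and suffers read-modify-write overhead on every sub-4K write.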
On 08/30/2012 12:07 PM, Anonymous wrote:
Hi. I have a spare off the shelf consumer PC and was thinking about loading
Solaris on it for a development box since I use Studio @work and like it
better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
has only one drive
I asked what I thought was a simple question but most of the answers don't
have too much to do with the question. Now it seems to be an argument of
your filesystem is better than any other filesystem. I don't think it is
because I have seen the horror stories lurking on this list. I had no
I turned compression on for several ZFS filesystems and found performance
was still fine. I turned gzip on and it was also fine and compression on
certain filesystems is excellent. I realize all the files that were on the
filesystem when compression=on did not get the benefit of gzip when I set
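The behaviour described above can be demonstrated directly; the dataset name here is a placeholder:

```shell
# Changing the compression property only affects blocks written afterwards;
# existing blocks keep whatever encoding they were written with.
zfs set compression=gzip tank/docs

# compressratio reflects the mix of old and new blocks on the dataset.
zfs get compression,compressratio tank/docs
```

Rewriting the files (for example with the rsync approach discussed earlier) is the usual way to get old data onto the new compression setting.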
Thanks for the help Chris!
Cheers,
Fritz
You wrote:
original and and rename the new one, or zfs send or ?? Can I do a send and
receive into a filesystem with attributes set as I want or does the receive
keep the same attributes as well? Thank you.
That will work. Just create the new
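A minimal sketch of that approach, with placeholder pool and dataset names. A plain send (without -p or -R) does not carry locally set properties, so the received dataset inherits from its parent; you can then set the attributes you want on the new dataset:

```shell
zfs snapshot tank/data@migrate
zfs send tank/data@migrate | zfs receive tank/data_new
zfs set compression=gzip tank/data_new
```

Once the new dataset is verified, the old one can be destroyed and the new one renamed into its place.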
If you plan to generate a lot of data, why use the root pool? You can put
the /home and /proj filesystems (/export/...) on a separate pool, thus
off-loading the root pool.
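A sketch of that layout, assuming a placeholder device name for the spare disk:

```shell
# A separate pool keeps user and project data off the root pool.
zpool create datapool c0t2d0
zfs create -o mountpoint=/export/home datapool/home
zfs create -o mountpoint=/export/proj datapool/proj
```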
I don't, it's a development box with not a lot happening.
My two cents,
thanks
Hi Dave,
Hi Cindy.
Consider the easiest configuration first and it will probably save
you time and money in the long run, like this:
73g x 73g mirror (one large s0 on each disk) - rpool
73g x 73g mirror (use whole disks) - data pool
Then, get yourself two replacement disks, a good
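The data-pool half of that layout looks roughly like this; device names are placeholders (the rpool mirror is normally set up on s0 slices at install time, since older Solaris requires the root pool on a slice):

```shell
# Whole disks for the data pool; ZFS labels them itself.
zpool create datapool mirror c0t2d0 c0t3d0
```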
I am having a problem after a new install of Solaris 10. The installed rpool
works fine when I have only those disks connected. When I connect disks from
an rpool I created during a previous installation, my newly installed rpool
is ignored even though the BIOS (x86) is set to boot only from the
Hi Roy, things got a lot worse since my first email. I don't know what
happened but I can't import the old pool at all. It shows no errors but when
I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
nv) which looks like is fixed by patch 6801926 which is applied in Solaris
You wrote:
Hi Roy, things got a lot worse since my first email. I don't know what
happened but I can't import the old pool at all. It shows no errors but when
I import it I get a kernel panic from assertion failed: zvol_get_stats(os,
nv) which looks like is fixed by patch 6801926 which
On Solaris 10 If I install using ZFS root on only one drive is there a way
to add another drive as a mirror later? Sorry if this was discussed
already. I searched the archives and couldn't find the answer. Thank you.
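Yes, this is what zpool attach is for. A sketch with placeholder device and slice names, for Solaris 10 where the root pool lives on a slice:

```shell
# Give the new disk a label matching the existing root disk, then attach.
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
zpool attach rpool c0t0d0s0 c0t1d0s0

# On x86, install boot blocks on the new mirror half so it can boot.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
```

After the attach, zpool status shows the resilver progress; wait for it to complete before trusting the mirror.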
Thank you all for your answers and links :-)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hello all,
Trying to reply to everyone so far in one post.
casper@oracle.com said
Did you try:
iostat -En
I issued that command and I see (soft) errors from all 4 drives. There is a
serial no. field in the message headers but it has no contents.
messages in
Richard Elling said
If the errors bubble up to ZFS, then they will be shown in the output of
zpool status
On the console I was seeing retryable read errors that eventually
failed. The block number and drive path were included but not any info I
could relate to the actual disk.
zpool status
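The two diagnostic views mentioned in this thread complement each other:

```shell
# Per-device error counters, plus model and serial number when the drive
# reports them (useful for mapping a device path to a physical disk):
iostat -En

# Pool-level read/write/checksum error counters; -v also lists any files
# with unrecoverable errors:
zpool status -v
```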
I've been watching the heat control issue carefully since I had to take a
job offshore (cough reverse H1B cough) in a place without adequate AC and I
was able to get them to ship my servers and some other gear. Then I read
Intel is guaranteeing their servers will work up to 100 degrees F ambient
You wrote:
2012-07-23 18:37, Anonymous wrote:
Really, it would be so helpful to know which drives we can buy with
confidence and which should be avoided...is there any way to know from the
manufacturers' web sites or do you have to actually buy one and see what it
does? Thanks
It depends on the model. Consumer models are less likely to
immediately flush. My understanding is that this is done in part to do
some write coalescing and reduce the number of P/E cycles. Enterprise
models should either flush, or contain a super capacitor that provides
enough power for the
Hi Darren,
On 08/30/12 11:07, Anonymous wrote:
Hi. I have a spare off the shelf consumer PC and was thinking about loading
Solaris on it for a development box since I use Studio @work and like it
better than gcc. I was thinking maybe it isn't so smart to use ZFS since it
has only one
Thank you.