We have a server with a couple X-25E's and a bunch of larger SATA
disks.
To save space, we want to install Solaris 10 (our install is only about
1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL
attached to a zpool created from the SATA drives.
Currently we do this by
On 7/1/2010 10:17 PM, Neil Perrin wrote:
On 07/01/10 22:33, Erik Trimble wrote:
On 7/1/2010 9:23 PM, Geoff Nordli wrote:
Hi Erik.
Are you saying the DDT will automatically be stored in an
L2ARC device if one exists in the pool, instead of in the ARC?
Or is there some sort of memory
However, SVM+UFS is more annoying to work with as far as LiveUpgrade is
concerned. We'd love to use a ZFS root, but that requires that the
entire SSD be dedicated as an rpool leaving no space for ZIL. Or does
it?
It appears that we could do a:
# zfs create -V 24G rpool/zil
On our
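The layout described above could be sketched roughly as follows. Pool and device names are illustrative assumptions, and note that carving a log device out of the root pool is exactly the cross-pool configuration cautioned against later in this thread:

```shell
# Carve a zvol out of the root pool's leftover SSD space
# (size and names are illustrative).
zfs create -V 24G rpool/zil

# Create the data pool from the SATA drives and attach the
# zvol as a separate log device (ZIL).
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool add tank log /dev/zvol/dsk/rpool/zil
zpool status tank
```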
On Thu, Jul 1, 2010 at 04:33, Lutz Schumann presa...@storageconcepts.de wrote:
Hello list,
I wanted to test deduplication a little and did an experiment.
My question was: can I dedupe infinitely, or is there an upper limit?
So for that I did a very basic test.
- I created a ramdisk-pool (1GB)
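A minimal version of such a test might look like this on Solaris/OpenSolaris; the ramdisk and pool names are assumptions, not taken from the original post:

```shell
# Create a 1GB ramdisk and build a dedup-enabled pool on it.
ramdiskadm -a rdtest 1g
zpool create ramtest /dev/ramdisk/rdtest
zfs set dedup=on ramtest

# Write the same data repeatedly and watch the dedup ratio climb.
cp /var/adm/messages /ramtest/copy1
cp /var/adm/messages /ramtest/copy2
zpool get dedupratio ramtest
```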
On 07/ 2/10 04:12 PM, Peter Taps wrote:
Folks,
While going through a quick tutorial on zfs, I came across a way to create a zfs
filesystem within a filesystem. For example:
# zfs create mytest/peter
where mytest is a zpool filesystem.
When done this way, the new filesystem has the mount point
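What distinguishes the child filesystem from a plain directory is that it carries its own properties and snapshots; a quick sketch (names from the example above):

```shell
zfs create mytest/peter
zfs get mountpoint mytest/peter       # defaults to /mytest/peter

# Unlike a directory, it can have its own properties and snapshots:
zfs set compression=on mytest/peter
zfs set quota=10G mytest/peter
zfs snapshot mytest/peter@before-upgrade
```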
On 01/07/2010 23:58, Derek Olsen wrote:
Folks.
My env is Solaris 10 update 8 amd64. Does LUN alignment matter when I'm
creating zpools on disks (LUNs) with EFI labels and providing zpool the
entire disk?
http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified
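One way to check where the EFI label actually starts the slice (the disk device name here is an assumption):

```shell
# Show the partition map; the "First Sector" column reveals whether
# slice 0 starts at 34 (older releases) or 256 (update 8 and later).
prtvtoc /dev/rdsk/c0t0d0s0
```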
--
On Jul 1, 2010, at 7:29 PM, Derek Olsen wrote:
Doh! It turns out the host in question is actually a Solaris 10 update 6
host. It appears that a Solaris 10 update 8 host actually sets the start
sector at 256.
Yes, this is a silly bug, fixed years ago.
So, to simplify the question: if
Dear Cindy and Edward
Many thanks for your input. Indeed there is something wrong with the SSD.
Smartmontools also confirmed a couple of errors.
So I opened a case and hopefully they will replace the SSD. What did I learn?
- Be careful of special offers
- Use also rock solid components for your
Sorry Roy, but reading the post you pointed me to,
it means about 1.2GB per 1TB stored on 128kB blocks.
I have 1.5TB and 4GB of RAM, and not all of this is deduped.
Why do you say it's *way* too small? It should be *way* more than enough.
From the performance point of view, it is not a problem, I use that machine
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Peter Taps
# zfs create mytest/peter
where mytest is a zpool filesystem.
When does it make sense to create such a filesystem versus just
creating a directory?
This is a thorny bush,
Cindy Swearingen cindy.swearingen at oracle.com writes:
Cindy - this discusses how to rename the rpool temporarily. Is there a way to
do it permanently, and will it break anything? I have to rename a root pool
because of a typo.
This is on a Solaris sparc environment.
Please help!
thanks
Hi Ray,
In general, using components from one pool for another pool is
discouraged because this configuration can cause deadlocks. Using this
configuration for ZIL usage would probably work fine (with a performance
hit because of the volume) until something unforeseen goes wrong. This
config is
Thank you all, especially Edward, for the enlightenment.
Regards,
Peter
So, if I boot up off of CD-ROM and do an export of the root pool under the name
rpool1, can I reimport it under rpool2 (using the same disk), keep it at
that name permanently, and have no issues with booting in the future or any
other O/S-related issues?
That is the question. :-)
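The rename-at-import step itself would look something like this (pool names taken from the question; whether the system still boots afterward is the open issue, since boot metadata may still reference the old name):

```shell
# From the live CD environment:
zpool export rpool1

# zpool import renames the pool when given old and new names.
zpool import -f rpool1 rpool2
```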
On Fri, Jul 2,
Hi,
I don't know about the rest of your test, but writing zeroes to a ZFS
filesystem is probably not a very good test, because ZFS recognizes
these blocks of zeroes and doesn't actually write anything. Unless
maybe encryption is on, but maybe not even then.
Not true. If I want ZFS to
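To rule the zero-detection out, the test data can be made incompressible with random input instead of zeroes (the path and pool name here are illustrative):

```shell
# Zeroes may be optimized away (and compress to nothing); random
# data forces real writes.
dd if=/dev/urandom of=/ramtest/random.dat bs=128k count=1024
zpool get dedupratio ramtest
```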
Hi Julie,
I think the answer is no, you cannot rename the root pool and expect
that any other O/S-related boot operation will complete successfully.
Live Upgrade in particular would be unhappy and changing the root
dataset mount point might cause the system not to boot.
Thanks,
Cindy
On
On 7/2/2010 6:30 AM, Neil Perrin wrote:
On 07/02/10 00:57, Erik Trimble wrote:
That's what I assumed. One further thought, though: is the DDT
treated as a single entity - so it's *all* either in the ARC or in
the L2ARC? Or does it move one entry at a time into the L2ARC as it
fills the
I have created one of my systems from a flash archive which was created from a
system running ZFS root .. but since it's update 6, it didn't work with the flash
archive .. after the system is built from the same flash archive .. the system is
up but I get the
following error for rpool ..
How can I remove
np == Neil Perrin neil.per...@oracle.com writes:
np The L2ARC just holds blocks that have been evicted from the
np ARC due to memory pressure. The DDT is no different than any
np other object (e.g. file).
The other cacheable objects require pointers to stay in the ARC
pointing to
On Jul 1, 2010, at 10:28 AM, Andrew Jones wrote:
Victor,
I've reproduced the crash and have vmdump.0 and dump device files. How do I
query the stack on crash for your analysis? What other analysis should I
provide?
Output of 'echo ::threadlist -v | mdb 0' can be a good start in this
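Assuming the dump was saved as a compressed vmdump.0, the analysis might start along these lines (dcmd choices beyond ::threadlist are suggestions, not from the original reply):

```shell
# Expand the compressed crash dump into unix.0 / vmcore.0 first.
savecore -vf vmdump.0

# Then run dcmds against dump instance 0.
echo "::status" | mdb 0
echo "::threadlist -v" | mdb 0
echo "::msgbuf" | mdb 0
```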
Dear Forum
Just technical info for all who have the same problem with the AMD chipset and
Kingston SSDs.
Official statement from Kingston:
We have two major problem sources:
- the chipset comes from AMD (there is no better chipset for SSDs than Intel's)
- your OS is not Windows (in which case all
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention
some backported patches in this release.
Aside from their management features / UI what is the core OS difference if we
move to Nexenta from OpenSolaris b134?
These DeDup bugs are my main frustration - if a staff
Andrew,
Looks like the zpool is telling you the devices are still doing work of
some kind, or that there are locks still held.
Agreed; it appears the CSV1 volume is in a fundamentally inconsistent state
following the aborted zfs destroy attempt. See later in this thread where
Victor
Like most others, I've been having issues with dedup.
Here's my situation: a 4TB pool for daily backups of SQL Server, dedup enabled -
so a typical directory has 100+ files that are mostly identical (sometimes all are
identical).
If I do rm *, OpenSolaris is dead, ZFS hung, etc.; sometimes it
I think I'll try booting from a b134 Live CD and see
if that will let me fix things.
Sadly it appears not - at least not straight away.
Running zpool import now gives
pool: storage2
id: 14701046672203578408
state: FAULTED
status: The pool was last accessed by another system.
action: The
On Jul 2, 2010, at 12:53 PM, Steve Radich, BitShop, Inc. wrote:
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention
some backported patches in this release.
Yes. These patches are in the code tree, currently at b143 (~18 weeks
newer than b134)
Aside from their
On Jul 2, 2010, at 6:48 PM, Tim Cook wrote:
Given that the most basic of functionality was broken in Nexenta, and not
Opensolaris, and I couldn't get a single response, I have a hard time
recommending ANYONE go to Nexenta. It's great they're employing you now, but
the community edition has