and a broken
/a/zones/myzonename.
The errors lucreate delivers:
8<
Creating snapshot for on rootpool/ROOT/s10u8-01/zo...@s10u8-20100106.
Creating clone for rootpool/ROOT/s10u8-01/zo...@s10u8-20100106
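For context, output like the above comes from creating a new boot
environment; a minimal sketch of the invocation (the BE name here is
inferred from the snapshot names and is an assumption):

  # lucreate snapshots and clones each zone root dataset under the
  # current BE as part of building the new one
  lucreate -n s10u8-20100106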
On Wed, Jan 06, 2010 at 11:00:49PM -0800, Richard Elling wrote:
>On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote:
>>
>>On Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
>>
>>> Rather, ZFS works very nicely with "hardware RAID" systems or JBODs
>>>
On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote:
On Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
Rather, ZFS works very nicely with "hardware RAID" systems or JBODs
iSCSI, et al. You can happily add the
I'm not sure how ZFS works very nicely with, say, an EMC Cx310 array?
On my NAS I use Velitium: http://sourceforge.net/projects/velitium/ which goes
down to about 70MB at its smallest.
(2010/01/07 15:23), Frank Cusack wrote:
been searching and searching ...
--
Jorgen Lundman |
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, T
On Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
>Rather, ZFS works very nicely with "hardware RAID" systems or JBODs
>iSCSI, et al. You can happily add the
I'm not sure how ZFS works very nicely with, say, an EMC Cx310 array?
-Alex
been searching and searching ...
i know many (most?) folks here are using opensolaris. surely most of you
are not using the default heavyweight install with gnome et al.? how are
you installing a minimal "server" system?
i've been banging my head forever with samba and ADS and want to try
open
On Wed, Jan 6, 2010 at 20:41, Mark Bennett wrote:
> Will,
>
> sorry for picking an old thread,
That's okay---I liked this thread ;)
> but you mentioned a PSU monitor to supplement the CSE-PTJBOD-CB1.
> I have two of these and am interested in your design.
> Oddly, the LSI backplane chipset suppor
Will,
Sorry for picking up an old thread, but you mentioned a PSU monitor to
supplement the CSE-PTJBOD-CB1.
I have two of these and am interested in your design.
Oddly, the LSI backplane chipset supports 2 x I2C busses that Supermicro didn't
make use of for monitoring the PSUs.
Mark.
On 06/01/2010 11:03, Lutz Schumann wrote:
Snapshots do not impact write performance. Deletion of snapshots also seems
to take constant time per snapshot (total time = number of snapshots x some
constant).
That's not entirely true. By keeping a snapshot you are not releasing the
space, forcing zfs
On Wed, Jan 6, 2010 at 2:40 PM, Saso Kiselkov wrote:
> Buffering the writes in the OS would work for me as well - I've got RAM
> to spare. Slowing down rm is perhaps one way to go, but definitely not a
> real solution. On rare occasions I could s
A good place to review before you begin testing the new dedup features
in the Nevada release is the ZFS dedup FAQ, which includes a list of
known issues:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/dedup
Thanks,
Cindy
On 01/05/10 08:38, Bob Friesenhahn wrote:
On Mon, 4 Jan 2010,
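For anyone about to test, dedup is a per-dataset property; a minimal sketch
with placeholder pool/dataset names (not taken from the FAQ itself):

  # enable dedup on a dataset; only blocks written afterwards are deduped
  zfs set dedup=on tank/data
  # the DEDUP column of zpool list shows the pool-wide dedup ratio
  zpool list tank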
Buffering the writes in the OS would work for me as well - I've got RAM
to spare. Slowing down rm is perhaps one way to go, but definitely not a
real solution. On rare occasions I could still get lockups, leading to
screwed up recordings and if it's one
Check if your card has the latest firmware.
Mark.
On Wed, Jan 6, 2010 at 4:30 PM, Wes Felter wrote:
> Michael Herf wrote:
>
>> I agree that RAID-DP is much more scalable for reads than RAIDZx, and
>> this basically turns into a cost concern at scale.
>>
>> The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be
>> used instead of n
On Jan 6, 2010, at 1:30 PM, Wes Felter wrote:
Michael Herf wrote:
I agree that RAID-DP is much more scalable for reads than RAIDZx, and
this basically turns into a cost concern at scale.
The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be
used instead of NetApp. But this
On Wed, 6 Jan 2010, Saso Kiselkov wrote:
I'm aware of the theory and realize that deleting stuff requires writes.
I'm also running on the latest b130 and write stuff to disk in large
128k chunks. The thing I was wondering about is whether there is a
mechanism that might lower the I/O scheduling
Hi Yuriy,
A couple of options are:
1. If you can boot this system from a system or DVD that is running
at least build 128, then you can try to import the pool using the
following syntax:
zpool import -F -c
The -F option provides a way to roll back the last few transactions.
I'm not sure I un
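A minimal sketch of the recovery import described above, with a placeholder
pool name and the usual cachefile path (both assumptions):

  # discard the last few transactions and import the pool
  zpool import -F tank
  # or point the import at an alternate cachefile, as in the truncated
  # example above
  zpool import -F -c /etc/zfs/zpool.cache tank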
I'm aware of the theory and realize that deleting stuff requires writes.
I'm also running on the latest b130 and write stuff to disk in large
128k chunks. The thing I was wondering about is whether there is a
mechanism that might lower the I/O schedul
Michael Herf wrote:
I agree that RAID-DP is much more scalable for reads than RAIDZx, and
this basically turns into a cost concern at scale.
The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be
used instead of NetApp. But this certainly reduces the cost advantage
significantly
Having gotten back to the rep and asked further questions, I'm forced to
agree - the rep doesn't know what they're talking about.
It does look like the Intel-based 40G Kingston may not yet be available
in Australia.
What a drag. :-)
T
On 6/01/2010 10:46 PM, Al Hopper wrote:
On Tue, Jan 5,
> meandering off topic here ...
>
> i use one of those 64G Kingston JMicron/Toshiba drives in my mac.
>
> The "stuttering" problems attributed to the older jmicron drives are
> non-existent with this one in my experience.
>
>
This is great news. I've read this but it's good to know that someone on
On Mon, Jan 04, 2010 at 08:33:23PM -0600, Al Hopper wrote:
> On Mon, Jan 4, 2010 at 4:39 PM, Thomas Burgess wrote:
> >
> > I'm PRETTY sure the Kingston drives i ordered are as good/better
> >
> > i just didn't know that they weren't "good enough"
>
> I disagree that those drives are "good enough"
zpool split
http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck
I came across this around noon today, originally on http://c0t0d0s0.org .
More here:
http://opensolaris.org/jive/thread.jspa?threadID=113685&tstart=60
Too bad this probably won't make it to the final release of OpenSola
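In sketch form, what the new feature does (pool names are placeholders,
based on the linked blog post rather than a final man page):

  # detach one side of every mirror in "tank" and make it a new pool
  zpool split tank tankcopy
  # the new pool is left exported; import it to use it
  zpool import tankcopy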
On Wed, 6 Jan 2010, Saso Kiselkov wrote:
I've encountered a new problem on the opposite end of my app - the
write() calls to disk sometimes block for a terribly long time (5-10
seconds) when I start deleting stuff on the filesystem where my recorder
processes are writing. Looking at iostat I can
Note to self: drink coffee before posting :-)
Thanks Glenn, et al.
-- richard
On Jan 6, 2010, at 9:54 AM, Glenn Lagasse wrote:
* Richard Elling (richard.ell...@gmail.com) wrote:
Hi Pradeep,
This is the ZFS forum. You might have better luck on the caiman-discuss
forum which is where the fol
I've encountered a new problem on the opposite end of my app - the
write() calls to disk sometimes block for a terribly long time (5-10
seconds) when I start deleting stuff on the filesystem where my recorder
processes are writing. Looking at iostat I
On Tue, Jan 5, 2010 at 10:35 AM, Carl Rathman wrote:
> On Tue, Jan 5, 2010 at 10:12 AM, Richard Elling
> wrote:
>> On Jan 5, 2010, at 7:54 AM, Carl Rathman wrote:
>>
>>> I didn't mean to destroy the pool. I used zpool destroy on a zvol,
>>> when I should have used zfs destroy.
>>>
>>> When I use
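For anyone skimming, the distinction that caused the trouble here, with
placeholder names:

  # zpool destroy takes a pool name and destroys the entire pool
  zpool destroy tank
  # zfs destroy takes a dataset name and destroys only that zvol/filesystem
  zfs destroy tank/myzvol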
* Richard Elling (richard.ell...@gmail.com) wrote:
> Hi Pradeep,
> This is the ZFS forum. You might have better luck on the caiman-discuss
> forum which is where the folks who work on the installers hang out.
Except, that's not where the people who work on the legacy Solaris 10
installers hang out
On Wed, 6 Jan 2010, R.G. Keen wrote:
I probably won't ever trust these drives; they were just convenient
for the test system, and may have the advantage (?!) of more
failures to try out the beauties of zfs.
The drives are probably just fine. Most likely Seagate "unbricked"
them and install
Hi Pradeep,
This is the ZFS forum. You might have better luck on the caiman-discuss
forum which is where the folks who work on the installers hang out.
-- richard
On Jan 6, 2010, at 5:26 AM, Pradeep wrote:
Hi,
I am trying to install Solaris 10 update 8 on a SAN array using a
Solaris JumpStart server.
Well that wasn't it, but it got me doing better Google searches...I have a
Realtek rge0 interface...
http://opensolaris.org/jive/thread.jspa?messageID=439296 . I wasn't
considering the net interface because the problem didn't manifest for me
until I turned zfs compression on.
http://sigtar.com/
Michael Herf wrote:
> I've written about my slow-to-dedupe RAIDZ.
>
> After a week of... waiting... I finally bought a little $100 30G OCZ
> Vertex and plugged it in as a cache.
>
> After <2 hours of warmup, my zfs send/receive rate on the pool is
> >16MB/sec (reading and writing each at 16M
Stupid question, but would it by any chance be an Intel network adapter? I
had a weird problem on Windows which had the same issue... a new network
driver solved the problem... wonder if the Intel driver has the same problem
on Solaris...
--Tiernan
On Wed, Jan 6, 2010 at 2:28 PM, John wrote:
> I'm us
Well, there had to be some reason that they had enough of them come back to run
a "recertifying" program. 8-)
I rather expected something of that sort; thanks for doing the homework for me!
I appreciate the help.
I probably won't ever trust these drives; they were just convenient for the
test
I'm using snv_111 to host iSCSI for my backups. This went fine until I enabled
compression on the volume. About halfway through a backup (~250GB done),
Solaris loses its network connection with no errors logged (/var/adm/messages
and /var/log/* with no entries for an hour preceding). After refor
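For reference, compression on the backing volume is a one-line property
change; a sketch with placeholder names (this shows how the setting is
applied, not a fix for the network drop):

  # enable (lzjb) compression; already-written blocks stay uncompressed
  zfs set compression=on tank/iscsivol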
Hi,
I am trying to install Solaris 10 update 8 on a SAN array using a Solaris
JumpStart server.
My configuration is:
-> SAN install on an array disk using a Mellanox InfiniHost card
-> Modified x86.miniroot with hermon and ib packages
-> Used a Solaris JumpStart server for installation
Installatio
Looks like this part got cut off somehow:
the filesystem mount point is set to /usr/local/local. I just want to
do a simple backup/restore; can anyone tell me something obvious that I'm not
doing right?
Using OpenSolaris development build 130.
Thanks,
John
Hi Folks --
I'm an old ZFS user, but brand new to send/receive for backing up. I've
created a zfs filesystem using:
zfs create \
-o mountpoint=/usr/local \
-o casesensitivity=mixed \
-o nbmand=on \
-o sharesmb=ro...@192.168.1 \
rpool/local
and am backing up with
zfs snapshot rpool/l
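A minimal sketch of the snapshot + send/receive cycle being described; the
snapshot names and backup pool are assumptions, since the message is cut off:

  zfs snapshot rpool/local@backup1
  # full send; -u keeps the received copy unmounted, which may avoid
  # mountpoint surprises like the /usr/local/local one mentioned earlier
  zfs send rpool/local@backup1 | zfs receive -u backup/local
  # later, send only the changes since the first snapshot
  zfs snapshot rpool/local@backup2
  zfs send -i @backup1 rpool/local@backup2 | zfs receive backup/local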
On Tue, Jan 5, 2010 at 11:57 PM, Eric D. Mudama wrote:
> On Wed, Jan 6 at 14:56, Tristan Ball wrote:
>>
>> For those searching list archives, the SNV125-S2/40GB given below is not
>> based on the Intel controller.
>>
>> I queried Kingston directly about this because there appears to be so
>> much
Snapshots do not impact write performance. Deletion of snapshots also seems
to take constant time per snapshot (total time = number of snapshots x some
constant).
However see http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786.
When importing a pool with many snapshots (which happens dur
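To make the cost model concrete: deleting N snapshots is N separate destroys,
each roughly constant-time. A sketch with placeholder names:

  # list snapshots of a dataset, oldest first
  zfs list -t snapshot -o name -s creation -r tank/data
  # each destroy is one unit of work; total cleanup scales with the count
  zfs destroy tank/data@snap1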
so you're saying that if i do something like fail a drive, then remove it,
with my controller i'll have a crash more than likely?
I can understand why SAS might be better for some stuff but i thought SATA
was supposed to support hot swap as well... especially with the controller
i chose...
Don't
On 06 January, 2010 - Thomas Burgess sent me these 5,8K bytes:
> I think the confusing part is that the 64GB version seems to use a different
> controller altogether
It does.
> I couldn't find any SNV125-S2/40's in stock so i got 3 SNV125-S2/64's
> thinking it would be the same, only bigger...
Wow, that is cheap for an "enterprise class" drive.
A little over 1/3 of the reviews at Newegg rated this drive as very poor:
http://www.newegg.com/Product/ProductReview.aspx?Item=N82E16822148295
Hopefully, they've fixed whatever issues there were with your drives :-)
Be sure to do the firmware update
h
The earlier (2008) OpenSolaris drivers tended to crash the server if you pulled
out an active drive. It may have improved in later releases.
In the case of the Sun Storage Appliances, the SATA (and SAS) drivers used are
different from those in OpenSolaris and are considerably more fully featured.
My