, will they?
Solaris 11.1 has ZFS with SCSI UNMAP support.
Seem to have skipped that one... Are there any related tools, e.g. to
release all zero blocks or the like? Of course it's then up to the admin
to know what all this is about, or risk wrecking the data
Thomas
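In the absence of a dedicated tool, the workaround usually cited for releasing unused blocks back to a thin-provisioned backing store is to fill the free space with zeros and delete the file. This is a hedged sketch, not anything from the thread; the path is illustrative, and note that zeros only reach the backing store if compression is off on that dataset:

```shell
# Fill free space with zeros so the storage layer (or SCSI UNMAP,
# where supported) can reclaim the blocks. Runs until the FS is full.
# Requires compression=off on the dataset, or the zeros are never written.
dd if=/dev/zero of=/tank/fs/zerofile bs=1M
sync
rm /tank/fs/zerofile
```

As the poster says, this is squarely in "admin must know what they're doing" territory: filling a pool to 100% can hurt other writers while the dd runs.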
Thanks for all the answers (more inline).
On 01/18/2013 02:42 AM, Richard Elling wrote:
On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us
mailto:bfrie...@simple.dallas.tx.us wrote:
On Wed, 16 Jan 2013, Thomas Nau wrote:
Dear all
I've a question concerning possible
through all the blocks and we hardly see average network traffic going
over 45 MB/s (almost idle 1G link).
So here's the question: would increasing or decreasing the volblocksize improve
the send/receive operation, and what influence might it have on the iSCSI side?
Thanks for any help
Thomas
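One way to answer the volblocksize question empirically is to create test zvols at different block sizes and time a raw send of each. A sketch under stated assumptions (pool and zvol names are hypothetical; volblocksize is fixed at creation time, so changing it on the real volume means recreating it and copying the data):

```shell
# Create two test zvols with different volblocksize values.
zfs create -V 10G -o volblocksize=8K  tank/vol8k
zfs create -V 10G -o volblocksize=64K tank/vol64k
# ...populate them with representative data, then snapshot and time a send:
zfs snapshot tank/vol8k@t1
time zfs send tank/vol8k@t1 | dd of=/dev/null bs=1M   # raw send rate, no network
```

Comparing the local send rate against the observed 45 MB/s over the wire also helps separate a send-side bottleneck from a network or receive-side one.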
Jamie
We ran into the same issue and had to migrate the pool while it was imported read-only. On
top of that we were advised NOT to use an L2ARC. Maybe you should consider that as well
Thomas
Am 12.12.2012 um 19:21 schrieb Jamie Krier jamie.kr...@gmail.com:
I've hit this bug on four of my Solaris 11 servers
to rewrite a
tool to do it?
Subsidiary: Is there an official response of Oracle in front of such
case? How do they officially deal with Binary Copied disks, as it's
common to do such copy with UFS to copy SAP environment or Databases...
Thanks in advance,
Thomas
On Thu, 11 Oct 2012, Freddie Cash wrote:
On Thu, Oct 11, 2012 at 2:47 PM, andy thomas a...@time-domain.co.uk wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
partition or slice
On Thu, 11 Oct 2012, Richard Elling wrote:
On Oct 11, 2012, at 2:58 PM, Phillip Wagstrom phillip.wagst...@gmail.com
wrote:
On Oct 11, 2012, at 4:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use
According to a Sun document called something like 'ZFS best practice' I
read some time ago, best practice was to use the entire disk for ZFS and
not to partition or slice it in any way. Does this advice hold good for
FreeBSD as well?
I looked at a server earlier this week that was running
I have a ZFS filesystem and create weekly snapshots over a period of 5
weeks, called week01, week02, week03, week04 and week05 respectively. My
question is: how do the snapshots relate to each other - does week03
contain the changes made since week02, or does it contain all the changes
made
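For what it's worth, each snapshot records the full filesystem state at that instant, but unchanged blocks are shared, so week03 only *occupies* the space of what changed since week02. The delta between two snapshots can be sent explicitly (dataset name is from the question; the pool name is assumed):

```shell
# Incremental send: only the changes between @week02 and @week03.
# The receiving side must already hold @week02.
zfs send -i tank/home@week02 tank/home@week03 > week02-to-week03.zfs
```

Destroying week02 does not lose data referenced by week03; blocks still needed by later snapshots are kept.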
and
then change it back to start at cylinder 1.
I always leave cylinder 0 alone since then.
Thomas
2012-06-16 18:23, Richard Elling skrev:
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
by the way
when you format, start with cylinder 1; do not use 0
There is no requirement for skipping
Dear all
I'm about to answer my own question with some really useful hints
from Steve, thanks for that!!!
On 03/02/2012 07:43 AM, Thomas Nau wrote:
Dear all
I asked before but without much feedback. As the issue
is persistent I want to give it another try. We disabled
panicking
On Thu, 16 Feb 2012, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of andy thomas
One of my most vital servers is a Netra 150 dating from 1997 - still going
strong, crammed with 12 x 300 Gb disks and running Solaris 9
On Wed, 15 Feb 2012, David Dyer-Bennet wrote:
While I'm not in need of upgrading my server at an emergency level, I'm
starting to think about it -- to be prepared (and an upgrade could be
triggered by a failure at this point; my server dates to 2006).
One of my most vital servers is a Netra
RAIDz1 pools on this
server but these are working fine.
Andy
-
Andy Thomas,
Time Domain Systems
Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk
___
zfs-discuss mailing list
zfs-discuss
On Tue, 14 Feb 2012, Richard Elling wrote:
Hi Andy
On Feb 14, 2012, at 10:37 AM, andy thomas wrote:
On one of our servers, we have a RAIDz1 ZFS pool called 'maths2' consisting of
7 x 300 Gb disks which in turn contains a single ZFS filesystem called 'home'.
Yesterday, using the 'ls
Bob,
On 01/31/2012 09:54 PM, Bob Friesenhahn wrote:
On Tue, 31 Jan 2012, Thomas Nau wrote:
Dear all
We have two JBODs with 20 or 21 drives available per JBOD hooked up
to a server. We are considering the following setups:
RAIDZ2 made of 4 drives
RAIDZ2 made of 6 drives
The first
but the system goes down when
a JBOD goes down. Each of the JBODs comes with dual controllers, redundant
fans and power supplies, so do I need to be paranoid and use option #1?
Of course it also gives us more IOPS, but high-end logging devices should take
care of that
Thanks for any hint
Thomas
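Option #1 (4-drive RAIDZ2 vdevs) is the one that can survive a whole-JBOD failure, provided each vdev is split evenly across the two enclosures: losing a JBOD then costs each vdev exactly its two parity drives. A sketch with placeholder device names (c1* in one JBOD, c2* in the other):

```shell
# Each raidz2 vdev has 2 disks per JBOD; a JBOD failure removes 2 disks
# per vdev, which raidz2 tolerates. A 6-drive raidz2 (3 per JBOD) would
# lose 3 disks and fail.
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 \
  raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0
```

More, smaller vdevs also means more IOPS, as the poster notes, at the cost of lower usable capacity (50% here vs. 67% for 6-drive raidz2).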
not rely on benchmarks that provide a summary metric.
-- richard
I had good experience with filebench. It resembles your workload as
well as you are able to describe it, but takes some time to get things
set up if you cannot find your workload in one of the many provided
examples
Thomas
Does anyone know where I can still find the SUNWsmbs and SUNWsmbskr
packages for the SPARC version of OpenSolaris? I wanted to experiment with
ZFS/CIFS on my SPARC server but the ZFS share command fails with:
zfs set sharesmb=on tank1/windows
cannot share 'tank1/windows': smb
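That failure message is the usual symptom of the in-kernel SMB server not being installed or enabled. On systems where the packages are available (the SPARC availability is exactly what the poster is asking about), the sequence would look roughly like this:

```shell
# Install the SMB server packages (kernel module requires a reboot),
# then enable the service and retry the share.
pkg install SUNWsmbs SUNWsmbskr
# ...reboot...
svcadm enable -r smb/server
zfs set sharesmb=on tank1/windows
```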
Dear all
We use a STEC ZeusRAM as a log device for a 200TB RAID-Z2 pool.
As they are supposed to be read only after a crash or when booting,
and those nice things are pretty expensive, I'm wondering if mirroring
the log devices is a must / highly recommended
Thomas
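The risk a mirrored slog covers is narrow but real: the log device dying at the same moment as a crash or power loss, taking the not-yet-flushed synchronous writes with it. If a second ZeusRAM is in the budget, adding the mirror is one command (device names are placeholders):

```shell
# Attach a mirrored log vdev to the existing pool.
zpool add tank log mirror c3t0d0 c3t1d0
```

With a single slog, a device failure during normal operation just falls back to the in-pool ZIL; it's only the crash-plus-failure coincidence that loses data.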
[garbled iostat columns snipped]
So does it look good, bad or ugly ;)
Thomas
You're probably hitting bug 7056738 - http://wesunsolve.net/bugid/id/7056738
Looks like it's not fixed yet at Oracle, anyway...
Were you using crypto on your datasets?
Regards,
Thomas
On Tue, 16 Aug 2011 09:33:34 -0700 (PDT)
Stu Whitefish swhitef...@yahoo.com wrote:
- Original Message
ZIL is no option, but I expected much better performance,
especially since the ZeusRAM only gets us a speed-up of about 1.8x.
Is this test realistic for a typical fileserver scenario or does it require many
more clients to push the limits?
Thanks
Thomas
Have you already extracted the core file from the kernel crash?
(And, by the way, activated a dump device so such a dump happens at next reboot...)
Have you also tried applying the latest kernel/zfs patches and try
importing the pool afterwards ?
Thomas
On 08/18/2011 06:40 PM, Stu Whitefish wrote:
Hi
Tim
the client is identical to the server but with no SAS drives attached.
Also, right now only one 1 Gbit Intel NIC is available.
Thomas
Am 18.08.2011 um 17:49 schrieb Tim Cook t...@cook.ms:
What are the specs on the client?
On Aug 18, 2011 10:28 AM, Thomas Nau thomas@uni-ulm.de wrote:
Dear
but are there any
other things I should take into consideration? It's not a major problem as
the system is intended for storage and users are not supposed to go in and
untar huge tarfiles on it as it's not a fast system ;-)
Andy
Andy Thomas,
Time Domain Systems
Tel: +44 (0
On Sat, 13 Aug 2011, Bob Friesenhahn wrote:
On Sat, 13 Aug 2011, andy thomas wrote:
However, one of our users recently put a 35 Gb tar.gz file on this server
and uncompressed it to a 215 Gb tar file. But when he tried to untar it,
after about 43 Gb had been extracted we noticed the disk usage
On Sat, 13 Aug 2011, Joerg Schilling wrote:
andy thomas a...@time-domain.co.uk wrote:
What 'tar' program were you using? Make sure to also try using the
Solaris-provided tar rather than something like GNU tar.
I was using GNU tar actually as the original archive was created on a
Linux
with a
4K
sector device that doesn't lie (eg, iscsi target).
Are you referring to the ashift patches, and what do you mean by tricking them
by using an iSCSI target?
Thanks,
Thomas
Dear all
Sorry if it's kind of off-topic for the list but after talking
to lots of vendors I'm running out of ideas...
We are looking for JBOD systems which
(1) hold 20+ 3.3 SATA drives
(2) are rack mountable
(3) have all the nice hot-swap stuff
(4) allow 2 hosts to connect via SAS (4+ lines
require use of SAS drives, no SATA (while the single-path BPs are okay with
both
SAS and SATA). Still, according to the forums, SATA disks on shared backplanes
often cause too much headache and may deliver too little performance in
comparison...
I would be fine with SAS as well
Thomas
So there is no current way to specify the creation of a 3 disk raid-z
array with a known missing disk?
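There is no supported flag for this, but the trick usually passed around on the list is to stand in a sparse file for the missing disk and offline it immediately, then replace it with the real disk once the data has been migrated off it. A sketch with illustrative sizes, paths, and device names (note the pool runs degraded until the replace finishes):

```shell
# Create a sparse file the size of the future disk and use it as the
# third raidz member, then offline it before writing any real data.
mkfile -n 400g /var/tmp/fakedisk
zpool create tank raidz c0t1d0 c0t2d0 /var/tmp/fakedisk
zpool offline tank /var/tmp/fakedisk
# ...copy the data from the old 400GB disk into tank, then:
zpool replace tank /var/tmp/fakedisk c0t3d0
```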
On 12/5/06, David Bustos david.bus...@sun.com wrote:
Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
I currently have a 400GB disk that is full of data on a linux system.
If I
I'm having some very strange nfs issues that are driving me somewhat mad.
I'm running b134 and have been for months now, without issue. Recently I
enabled 2 services to get Bonjour notifications working in OS X:
/network/dns/multicast:default
/system/avahi-bridge-dsd:default
and i added a few
Hi all
I'm currently moving a fairly big dataset (~2TB) within the same zpool. Data is
being moved from one dataset to another, which has dedup enabled.
The transfer started at quite a slow speed, maybe 12 MB/s, but it is
now crawling to a near halt. Only 800 GB has been moved in 48
Thanks, I'm going to do that. I'm just worried about corrupting my data, or
other problems. I wanted to make sure there is nothing I really should be
careful with.
--
This message posted from opensolaris.org
You are saying ZFS will detect and rectify this kind of corruption in a
deduped pool automatically if enough redundancy is present? Can that fail
sometimes? Under what conditions?
I would hate to restore a 1.5TB pool from backup just because one 5MB file
is gone bust. And I have a known good
you can upgrade by changing to the dev repository, or if you don't mind
re-installing you can download the b134 image at genunix:
http://www.genunix.org/
On Sat, Aug 21, 2010 at 1:25 AM, Long Tran opensolaris.stor...@gmail.comwrote:
Hi,
I hit a ZFS bug that should be resolved in a later snv
On Thu, Aug 19, 2010 at 4:33 PM, Mike Kirk mike.k...@halcyoninc.com wrote:
Hi all,
Halcyon recently started to add ZFS pool stats to our Solaris Agent, and
because many people were interested in the previous OpenSolaris beta* we've
rolled it into our OpenSolaris build as well.
I've already
df serves a purpose, though.
There are other commands which output that information.
On Thu, Aug 19, 2010 at 3:01 PM, Fred Liu fred_...@issi.com wrote:
Not sure if there was similar threads in this list before.
Three scenarios:
1): df cannot count snapshot space in a file system with quota
can't the zfs command provide that information?
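Yes; the space that df cannot see (snapshots inside a quota, reservations, child datasets) is broken out by zfs list itself. A minimal sketch, with an assumed pool name:

```shell
# Per-dataset space accounting that df cannot provide:
# USEDSNAP is space held only by snapshots, USEDDS by the live dataset,
# USEDCHILD by descendants.
zfs list -o space -r tank
# NAME  AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
```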
2010/8/20 Fred Liu fred_...@issi.com
Can you shed more light on the **other commands** which output that
information?
Appreciations.
Fred
*From:* Thomas Burgess [mailto:wonsl...@gmail.com]
*Sent:* 星期五, 八月 20, 2010 17:34
*To:* Fred Liu
as for the difference between the two df's, one is the GNU df (like you'd
have on Linux) and the other is the Solaris df.
2010/8/20 Thomas Burgess wonsl...@gmail.com
can't the zfs command provide that information?
2010/8/20 Fred Liu fred_...@issi.com
Can you shed more light on the **other
default
cn03/3 mlslabel none default
cn03/3 com.sun:auto-snapshot true inherited from cn03
Thanks.
Fred
*From:* Thomas Burgess [mailto:wonsl...@gmail.com]
*Sent:* 星期五, 八月 20, 2010 18:44
*To:* Fred Liu
*Cc:* ZFS Discuss
*Subject
I've been running OpenSolaris for months, and today while poking around, I
noticed a ton of errors in my logs... I'm wondering what they mean and if
it's anything to worry about.
I've found a few things on Google but not a whole lot... anyways, here's a
pastie of the log:
http://pastie.org/1104916
On Mon, Aug 16, 2010 at 11:17 PM, Frank Cusack
frank+lists/z...@linetwo.netwrote:
On 8/16/10 9:57 AM -0400 Ross Walker wrote:
No, the only real issue is the license, and I highly doubt Oracle will
re-release ZFS under the GPL and dilute its competitive advantage.
You're saying Oracle wants to
On Wed, Aug 11, 2010 at 12:57 AM, Peter Taps ptr...@yahoo.com wrote:
Hi Eric,
Thank you for your help. At least one part is clear now.
I still am confused about how the system is still functional after one disk
fails.
Consider my earlier example of 3 disks zpool configured for raidz-1. To
On Fri, Aug 6, 2010 at 6:44 AM, P-O Yliniemi p...@bsd-guide.net wrote:
Hello!
I have built a OpenSolaris / ZFS based storage system for one of our
customers. The configuration is about this:
Motherboard/CPU: SuperMicro X7SBE / Xeon (something, sorry - can't remember
and do not have my
I've found the Seagate 7200.12 1tb drives and Hitachi 7k2000 2TB drives to
be by far the best.
I've read lots of horror stories about any WD drive with 4k
sectors... it's best to stay away from them.
I've also read plenty of people say that the green drives are terrible.
On Wed, Jul 21, 2010 at 12:42 PM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
Are there any drawbacks to partition a SSD in two parts and use L2ARC on
one partition, and ZIL on the other? Any thoughts?
On Fri, Jul 23, 2010 at 3:11 AM, Sigbjorn Lie sigbj...@nixtra.com wrote:
Hi,
I've been searching around on the Internet to fine some help with this, but
have been
unsuccessfull so far.
I have some performance issues with my file server. I have an OpenSolaris
server with a Pentium D
3GHz
On Fri, Jul 23, 2010 at 5:00 AM, Sigbjorn Lie sigbj...@nixtra.com wrote:
I see I have already received several replies, thanks to all!
I would not like to risk losing any data, so I believe a ZIL device would
be the way for me. I see
these exists in different prices. Any reason why I would
http://www.opencsw.org/packages/CSWmbuffer/
- Thomas
Conclusion: This device will make an excellent slog device. I'll order
them today ;)
I have one and I love it... I sliced it, though: 9 GB for the ZIL and the
rest for L2ARC (my server is on a smallish network with about 10 clients).
It made a huge difference in NFS performance and other
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen pa...@iki.fi wrote:
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote:
Well, I've searched my brains out and I can't seem to find a reason for
this.
I'm getting bad to medium performance with my new test storage device.
I've got 24
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. ceco...@uga.eduwrote:
Oh! Yes. dedup. not compression, but dedup, yes.
dedup may be your problem... it requires plenty of RAM and/or a decent L2ARC
from what I've been reading.
Also, the disks were replaced one at a time last year from 73GB to 300GB to
increase the size of the pool. Any idea why the pool is showing up as the
wrong size in b134 and have anything else to try? I don't want to upgrade
the pool version yet and then not be able to revert back...
On Mon, Jun 14, 2010 at 4:41 AM, Arne Jansen sensi...@gmx.net wrote:
Hi,
I known it's been discussed here more than once, and I read the
Evil tuning guide, but I didn't find a definitive statement:
There is absolutely no sense in having slog devices larger than
main memory, because it
in production after pulling data from
the backup tapes. Scrubbing didn't show any error so any idea what's
behind the problem? Any chance to fix the FS?
Thomas
---
panic[cpu3]/thread=ff0503498400: BAD TRAP: type=e (#pf Page fault)
rp=ff001e937320 addr=20 occurred in module zfs due
Thanks for the link Arne.
On 06/13/2010 03:57 PM, Arne Jansen wrote:
Thomas Nau wrote:
Dear all
We ran into a nasty problem the other day. One of our mirrored zpool
hosts several ZFS filesystems. After a reboot (all FS mounted and in use
at that time) the machine panicked (console output
Arne,
On 06/13/2010 03:57 PM, Arne Jansen wrote:
Thomas Nau wrote:
Dear all
We ran into a nasty problem the other day. One of our mirrored zpool
hosts several ZFS filesystems. After a reboot (all FS mounted and in use
at that time) the machine panicked (console output further down). After
Yeah, this is what I was thinking too...
Is there any way to retain snapshot data this way? I've read about the ZFS
replay/mirror features, but my impression was that this was more of a
development mirror for testing rather than a reliable backup? This is the
only way I know of that
On Sun, Jun 13, 2010 at 12:18 AM, Joe Auty j...@netmusician.org wrote:
Thomas Burgess wrote:
Yeah, this is what I was thinking too...
Is there anyway to retain snapshot data this way? I've read about the ZFS
replay/mirror features, but my impression was that this was more so
recommend that you
turn off disk write caching in VirtualBox. Search the OpenSolaris forum
of VirtualBox; there is an article somewhere on how to do this. IIRC the
subject is something like 'zfs pool corruption'. But it is also
somewhere in the docs.
HTH,
Thomas
Very interesting. This could be useful for a number of us. Would you be willing
to share your work?
On Wed, May 26, 2010 at 5:47 PM, Brandon High bh...@freaks.com wrote:
On Sat, May 15, 2010 at 4:01 AM, Marc Bevand m.bev...@gmail.com wrote:
I have done quite some research over the past few years on the best (ie.
simple, robust, inexpensive, and performant) SATA/SAS controllers for
ZFS.
I thought it did... I couldn't imagine Sun using that chip in the original
Thumper if it didn't support NCQ. Also, I've read where people have had to
DISABLE NCQ on this driver to fix one bug or another (as a workaround).
On Wed, May 26, 2010 at 8:40 PM, Marty Faltesek
I was just wondering:
I added a SLOG/ZIL to my new system today... I noticed that the L2ARC shows
up under its own heading but the SLOG/ZIL doesn't... is this correct?
see:
        capacity     operations    bandwidth
pool    alloc  free  read  write   read  write
------  -----  ----  ----  -----   ----  -----
The last couple of times I've seen this question, people normally responded
with:
It depends.
You might not even NEED a slog; there is a script floating around which can
help determine that...
If you could benefit from one, it's going to be IOPS which help you, so if
the usb drive has more
Is there a best practice on keeping a backup of the zpool.cache file? Is it
possible? Does it change with changes to vdevs?
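The cache file is an ordinary file and can be copied, but it does change whenever the vdev configuration changes, so a one-off backup goes stale. It is also not strictly needed for recovery, since a pool can be imported by scanning the devices. A sketch (the backup path is illustrative):

```shell
# Copy the current cache file (refresh it after any vdev change):
cp /etc/zfs/zpool.cache /backup/zpool.cache
# Without a cache file, the pool config can be rebuilt by scanning:
zpool import -d /dev/dsk tank
```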
i am running the last release from the genunix page
uname -a output:
SunOS wonslung-raidz2 5.11 snv_134 i86pc i386 i86pc Solaris
On Tue, May 25, 2010 at 10:33 AM, Cindy Swearingen
cindy.swearin...@oracle.com wrote:
Hi Thomas,
This looks like a display bug. I'm seeing it too.
Let me know
On Tue, May 25, 2010 at 11:27 AM, Edward Ned Harvey
solar...@nedharvey.comwrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL
At least to me, this was not clearly asking about losing the ZIL and was
not clearly asking about power loss. Sorry for answering the question you
thought you didn't ask.
I was only responding to your response of WRONG!!! The guy wasn't wrong in
regards to my questions. I'm sorry for
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 24 May 2010, Thomas Burgess wrote:
It's a SandForce SF-1500 model but without a supercap... here's some info
on it:
Maximum Performance
* Max Read: up to 270MB/s
* Max Write: up to 250MB/s
Also, let me note, it came with a 3 year warranty so I expect it to last at
least 3 years...but if it doesn't, i'll just return it under the warranty.
On Tue, May 25, 2010 at 1:26 PM, Thomas Burgess wonsl...@gmail.com wrote:
On Tue, May 25, 2010 at 12:38 PM, Bob Friesenhahn
bfrie
I recently got a new SSD (ocz vertex LE 50gb)
It seems to work really well as a ZIL, performance-wise. My question is, how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss or is it pool loss?
Also, does the fact that I have a UPS matter?
the
ZFS is always consistent on-disk, by design. Loss of the ZIL will result
in loss of the data in the ZIL which hasn't been flushed out to the hard
drives, but otherwise, the data on the hard drives is consistent and
uncorrupted.
This is what i thought. I have read this list on and off
Not familiar with that model
It's a SandForce SF-1500 model but without a supercap... here's some info on
it:
Maximum Performance
- Max Read: up to 270MB/s
- Max Write: up to 250MB/s
- Sustained Write: up to 235MB/s
- Random Write 4k: 15,000 IOPS
- Max 4k IOPS: 50,000
From earlier in the thread, it sounds like none of the SF-1500 based
drives even have a supercap, so it doesn't seem that they'd necessarily
be a better choice than the SLC-based X-25E at this point unless you
need more write IOPS...
Ray
I think the upcoming OCZ Vertex 2 Pro will have a
did this come out?
http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
i was googling trying to find info about the next release and ran across
this
Does this mean it's actually about to come out before the end of the month
or is this something else?
never mind... just found more info on this... should have held back from
asking
On Mon, May 24, 2010 at 1:26 AM, Thomas Burgess wonsl...@gmail.com wrote:
did this come out?
http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
i was googling trying to find info about the next
. If you don't, there's nothing you can do.
It's probably taking a while to restart because the sends that were
interrupted need to be rolled back.
Sent from my Nexus One.
On May 21, 2010 9:44 PM, Thomas Burgess wonsl...@gmail.com wrote:
I can't tell you for sure
For some reason the server
install smartmontools
There is no package for it, but it's EASY to install.
Once you do, you can get output like this:
pfexec /usr/local/sbin/smartctl -d sat,12 -a /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen,
i don't think there is but it's dirt simple to install.
I followed the instructions here:
http://cafenate.wordpress.com/2009/02/22/setting-up-smartmontools-on-opensolaris/
On Sat, May 22, 2010 at 3:19 AM, Andreas Iannou
andreas_wants_the_w...@hotmail.com wrote:
Thanks Thomas, I thought
:22 PM, Thomas Burgess wonsl...@gmail.com
wrote:
yah, it seems that rsync is faster for what I need anyways... at least
right now...
If you don't have snapshots you want to keep in the new copy, then
probably...
-B
--
Brandon High : bh...@freaks.com
If you install OpenSolaris with the AHCI settings off, then switch them on,
it will fail to boot.
I had to reinstall with the settings correct.
the best way to tell if ahci is working is to use cfgadm
if you see your drives there, ahci is on
if not, then you may need to reinstall with it on
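The cfgadm check from the previous message looks roughly like this (controller and target numbers are illustrative; with AHCI active the disks appear as sata attachment points):

```shell
# List SATA attachment points; their presence means the AHCI/SATA
# framework is driving the controller rather than IDE emulation.
cfgadm | grep sata
# e.g.  sata0/0::dsk/c7t0d0   disk   connected   configured   ok
```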
just to make sure I understand what is going on here:
you have an rpool which is having performance issues, and you discovered AHCI
was disabled?
You enabled it, and now it won't boot. Correct?
This happened to me and the solution was to export my storage pool and
reinstall my rpool with the
it
was basically an IDE emulation mode for SATA; long story short, I ended up
with OpenSolaris installed in IDE mode.
I had to reinstall. I tried the livecd/import method and it still failed to
boot.
On Sat, May 22, 2010 at 5:30 PM, Ian Collins i...@ianshome.com wrote:
On 05/23/10 08:52 AM, Thomas
this old thread has info on how to switch from ide-sata mode
http://opensolaris.org/jive/thread.jspa?messageID=448758#448758
On Sat, May 22, 2010 at 5:32 PM, Ian Collins i...@ianshome.com wrote:
On 05/23/10 08:43 AM, Brian wrote:
Is there a way within opensolaris to detect if AHCI is
GREAT, glad it worked for you!
On Sat, May 22, 2010 at 7:39 PM, Brian broco...@vt.edu wrote:
Ok. What worked for me was booting with the live CD and doing:
pfexec zpool import -f rpool
reboot
After that I was able to boot with AHCI enabled. The performance issues I
was seeing are now
I'm confused... I have a filesystem on server 1 called tank/nas/dump
I made a snapshot called first:
zfs snapshot tank/nas/d...@first
then i did a zfs send/recv like:
zfs send tank/nas/d...@first | ssh wonsl...@192.168.1.xx /bin/pfexec
/usr/sbin/zfs recv tank/nas/dump
this worked fine, next
On Sat, May 22, 2010 at 9:26 PM, Ian Collins i...@ianshome.com wrote:
On 05/23/10 01:18 PM, Thomas Burgess wrote:
this worked fine. Next, today, I wanted to send what has changed.
I did:
zfs snapshot tank/nas/d...@second
now, here's where I'm confused... from reading the man page I
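For the follow-up send, the incremental form covers exactly this case; the receiving dataset must still be at @first, and -F rolls back any local changes made on the receiver since then (this mirrors the commands quoted in the thread, with the obfuscated names kept generic):

```shell
# Send only what changed between @first and @second; -F on the receiver
# discards any modifications made there after @first was received.
zfs send -i tank/nas/dump@first tank/nas/dump@second | \
  ssh user@host pfexec /usr/sbin/zfs recv -F tank/nas/dump
```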
ok, so forcing just basically makes it drop whatever changes were made.
That's what I was wondering... this is what I expected.
On Sun, May 23, 2010 at 12:05 AM, Ian Collins i...@ianshome.com wrote:
On 05/23/10 03:56 PM, Thomas Burgess wrote:
let me ask a question though.
Lets say i have
On the PCIe side, I noticed there's a new card coming from LSI that claims
150,000 4k random writes. Unfortunately this might end up being an OEM-only
card.
I also notice on the ddrdrive site that they now have an opensolaris driver and
are offering it in a beta program.
--
This message
I seem to be getting decent speed with arcfour (this was what I was using to
begin with).
Thanks for all the help... this honestly was just me being stupid... looking
back on yesterday, I can't even remember what I was doing wrong now. I was
REALLY tired when I asked this question.
On Fri, May
supported_frequencies_Hz
8:10:12:15:20
supported_max_cstates 0
vendor_id AuthenticAMD
On Mon, May 17, 2010 at 5:55 PM, Dennis Clarke dcla...@blastwave.orgwrote:
On 05-17-10, Thomas Burgess wonsl
Something i've been meaning to ask
I'm transferring some data from my older server to my newer one. The older
server has a socket 775 Intel Q9550, 8 GB DDR2-800, and 20 1TB drives in raidz2
(3 vdevs, 2 with 7 drives, one with 6) connected to 3 AOC-SAT2-MV8 cards,
spread as evenly across them as I
is 3 zfs recv's random?
On Fri, May 21, 2010 at 10:03 PM, Brandon High bh...@freaks.com wrote:
On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wonsl...@gmail.com
wrote:
shouldn't the newer server have LESS load?
Please forgive my ubernoobness.
Depends on what it's doing!
Load average
wider
stripes instead of 3 I'd gain another TB or two... for my use I don't think
that would be a horrible thing.
On Fri, May 21, 2010 at 10:03 PM, Brandon High bh...@freaks.com wrote:
On Fri, May 21, 2010 at 5:54 PM, Thomas Burgess wonsl...@gmail.com
wrote:
shouldn't the newer server have
up?
it's stuck on Reading ZFS config
and there is a FLURRY of hard drive lights blinking (all 10 in sync )
On Sat, May 22, 2010 at 12:26 AM, Brandon High bh...@freaks.com wrote:
On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess wonsl...@gmail.com
wrote:
is 3 zfs recv's random?
It might
yah, it seems that rsync is faster for what I need anyways... at least right
now...
On Sat, May 22, 2010 at 1:07 AM, Ian Collins i...@ianshome.com wrote:
On 05/22/10 04:44 PM, Thomas Burgess wrote:
I can't tell you for sure
For some reason the server lost power and it's taking forever
3.14.2 6 13 c6t6d0
0.9 201.9 34.2 25338.0 3.8 0.5 18.92.6 51 52 c8t5d0
0.00.00.00.0 0.0 0.00.00.0 0 0 c4t7d0
On Sat, May 22, 2010 at 12:26 AM, Brandon High bh...@freaks.com wrote:
On Fri, May 21, 2010 at 7:57 PM, Thomas Burgess wonsl