On Jan 30, 2008 1:34 AM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
> If this is Sun's cp, file a bug. It's failing to notice that it didn't
> provide a large enough buffer to getdents(), so it only got partial results.
>
> Of course, the getdents() API is rather unfortunate. It appears the only
> sa
>Christopher Gorski wrote:
>
>> I noticed that the first calls in the "cp" and "ls" to getdents() return
>> similar file lists, with the same values.
>>
>> However, in the "ls", it makes a second call to getdents():
>
>If this is Sun's cp, file a bug. It's failing to notice that it didn't
>prov
>That code appears to error out and return incomplete results if a) the
>filename is too long or b) an integer overflows. Christopher's
>filenames are only 96 chars; could Unicode be involved somehow? b)
>seems unlikely in the extreme. It still seems like a bug, but I don't
>see where it is.
Jonathan Loran writes:
>
> Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
> that exist in the various recent Open Solaris flavors? I would like to
> move my ZIL to solid state storage, but I fear I can't do it until I
> have another update. Heck, I would be hap
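(For reference, on the Nevada/OpenSolaris builds that do have separate-intent-log support, moving the ZIL to an SSD is a single zpool command. A minimal sketch, assuming a pool called tank and placeholder device names; this syntax is not in S10u4, which is the point of the question above.)

  zpool add tank log c5t0d0                 # dedicate one SSD as a separate intent log (slog)
  zpool add tank log mirror c5t0d0 c5t1d0   # or mirror the slog across two devices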
"Will Murnane" <[EMAIL PROTECTED]> wrote:
> On Jan 30, 2008 1:34 AM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
> > If this is Sun's cp, file a bug. It's failing to notice that it didn't
> > provide a large enough buffer to getdents(), so it only got partial results.
> >
> > Of course, the getdents(
Christopher Gorski <[EMAIL PROTECTED]> wrote:
> > Of course, the getdents() API is rather unfortunate. It appears the only
> > safe algorithm is:
> >
> > while ((r = getdents(...)) > 0) {
> >         /* process results */
> > }
> > if (r < 0) {
> >         /* handle error */
> > }
> >
> > You _always_
[EMAIL PROTECTED] wrote:
> And "ls" would fail in the same manner.
>
>
> There's one piece of code in "cp" (see usr/src/cmd/mv/mv.c) which
> short-circuits a readdir-loop:
>
> while ((dp = readdir(srcdirp)) != NULL) {
>         int ret;
>
>         if ((ret = traverse_attr
Hello Christopher,
Wednesday, January 30, 2008, 7:27:01 AM, you wrote:
CG> Carson Gaspar wrote:
>> Christopher Gorski wrote:
>>
>>> I noticed that the first calls in the "cp" and "ls" to getdents() return
>>> similar file lists, with the same values.
>>>
>>> However, in the "ls", it makes a sec
Joerg Schilling wrote:
> "Will Murnane" <[EMAIL PROTECTED]> wrote:
>
>> On Jan 30, 2008 1:34 AM, Carson Gaspar <[EMAIL PROTECTED]> wrote:
>>> If this is Sun's cp, file a bug. It's failing to notice that it didn't
>>> provide a large enough buffer to getdents(), so it only got partial results.
>>>
Christopher Gorski <[EMAIL PROTECTED]> wrote:
> I am able to replicate the problem in bash using:
> #truss -tall -vall -o /tmp/getdents.bin.cp.truss /bin/cp -pr
> /pond/photos/* /pond/copytestsame/
>
> So I'm assuming that's using /bin/cp
>
> Also, from my _very limited_ investigation this morning
Robert Milkowski <[EMAIL PROTECTED]> wrote:
> If you could re-create empty files - exactly the same directory
> structure and file names, and check if you still get the problem.
> If you do, could you send a script here (mkdir -p's and touch)
> so we can investigate.
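(A rough sketch of the kind of script Robert is asking for, assuming the originals live under /pond/photos as in Christopher's truss command, that the file names contain no spaces or shell metacharacters, and that /tmp/replica is just a placeholder target.)

  cd /pond/photos
  find . -type d | sed 's|^|mkdir -p /tmp/replica/|'  > /tmp/recreate.sh
  find . -type f | sed 's|^|touch /tmp/replica/|'    >> /tmp/recreate.sh
  sh /tmp/recreate.sh    # re-creates the same directory tree with empty files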
If you'd like to replicate a
Hi all
I have a Sun X4500 with 48 disks of 750 GB each.
The server comes with Solaris installed on two disks, which means I have 46
disks for ZFS.
When I look at the default configuration of the zpool:
zpool create -f zpool1 raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0
zpool add -f zpool1 raidz c0t1d0 c1t1d0 c4t1d
>Also, from my _very limited_ investigation this morning, it seems that
>#grep Err /tmp/getdents.bin.cp.truss | grep -v ENOENT | grep getdents
>
>returns entries such as:
>getdents64(0, 0xFEC92000, 8192) Err#9 EBADF
>getdents64(0, 0xFEC92000, 8192) Err#9 EBADF
>
On 30/01/2008 at 11:01:35-0500, Kyle McDonald wrote:
> Albert Shih wrote:
>> What kind of pool do you use with 46 disks? (46 = 2*23 and 23 is a prime
>> number, so I can't split them evenly into raidz groups of 6 or 7 disks.)
>>
>>
> Depending on needs for space vs. performance, I'd probably pick ei
On 1/30/08, Albert Shih <[EMAIL PROTECTED]> wrote:
Thanks for the tips...
>
> How can you check the speed? (I'm a total newbie on Solaris.)
>
> I've used
>
> mkfile 10g
>
> for writes, and I've got the same performance with 5*9 or 9*5.
>
> Do you have any advice about tools like iozone?
>
> Regards.
>
Albert Shih wrote:
> What kind of pool do you use with 46 disks? (46 = 2*23 and 23 is a prime
> number, so I can't split them evenly into raidz groups of 6 or 7 disks.)
>
>
Depending on needs for space vs. performance, I'd probably pick either
5*9 or 9*5, with 1 hot spare.
-Kyle
> Regards.
>
> -
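(To make the 5*9 / 9*5 suggestion concrete, a sketch of one possible 9x5 layout: nine raidz groups of five disks plus one hot spare, 45 + 1 = 46. The controller/target names are placeholders rather than the real X4500 device map, and only the first two groups are written out.)

  zpool create zpool1 \
      raidz c0t0d0 c1t0d0 c4t0d0 c6t0d0 c7t0d0 \
      raidz c0t1d0 c1t1d0 c4t1d0 c6t1d0 c7t1d0
  # ...add the remaining seven five-disk raidz groups the same way, then the spare:
  zpool add zpool1 spare c7t7d0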
Hello Joerg,
Wednesday, January 30, 2008, 2:56:27 PM, you wrote:
JS> Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> If you could re-create empty files - exactly the same directory
>> structure and file names, and check if you still get the problem.
>> If you do, then if you could send a script here (m
[EMAIL PROTECTED] wrote:
>
>
> >Also, from my _very limited_ investigation this morning, it seems that
> >#grep Err /tmp/getdents.bin.cp.truss | grep -v ENOENT | grep getdents
> >
> >returns entries such as:
> >getdents64(0, 0xFEC92000, 8192) Err#9 EBADF
> >getdents64(0, 0xFEC
Roch - PAE wrote:
> Jonathan Loran writes:
> >
> > Is it true that Solaris 10 u4 does not have any of the nice ZIL controls
> > that exist in the various recent Open Solaris flavors? I would like to
> > move my ZIL to solid state storage, but I fear I can't do it until I
> > have anothe
Neil Perrin wrote:
>
>
> Roch - PAE wrote:
>> Jonathan Loran writes:
>> > Is it true that Solaris 10 u4 does not have any of the nice ZIL
>> > controls that exist in the various recent Open Solaris flavors? I
>> > would like to move my ZIL to solid state storage, but I fear I
>> > can't d
Are you already running with zfs_nocacheflush=1? We have SAN arrays with dual
battery-backed controllers for the cache, so we definitely have this set on all
our production systems. It makes a big difference for us.
As I said before, I don't see the catastrophe in disabling the ZIL, though.
We act
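(For anyone following along, the tunable Vincent mentions is set roughly like this on releases that have it; only a sketch, and only sensible when the array cache really is non-volatile.)

  # persistent: add this line to /etc/system and reboot
  set zfs:zfs_nocacheflush = 1

  # or flip it on a running kernel (takes effect immediately, not persistent)
  echo zfs_nocacheflush/W0t1 | mdb -kw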
[EMAIL PROTECTED] said:
> I'd take a look at bonnie++
> http://www.sunfreeware.com/programlistintel10.html#bonnie++
Also filebench:
http://www.solarisinternals.com/wiki/index.php/FileBench
You'll see the most difference between 5x9 and 9x5 in small random reads:
http://blogs.sun.com/relling/e
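(Typical invocations for the tools mentioned above look roughly like this; directory, file names, and sizes are placeholders, and the test size should exceed RAM so the ARC doesn't hide the disks.)

  # bonnie++: -d is a directory on the pool under test, -s the file size in MB
  bonnie++ -d /zpool1/bench -s 16384 -u nobody

  # iozone: sequential write/read (-i 0 -i 1) plus random I/O (-i 2) on a 10 GB file
  iozone -i 0 -i 1 -i 2 -s 10g -r 128k -f /zpool1/bench/iozone.tmp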
On Jan 30, 2008, at 3:44 PM, Vincent Fox wrote:
> What we ended up doing, for political reasons, was putting the
> squeeze on our Sun reps and getting a 10u4 kernel spin patch with...
> what did they call it? Oh yeah "a big wad of ZFS fixes". So this
> ends up being a huge PITA because for
[EMAIL PROTECTED] said:
> I feel like we're being hung out to dry here. I've got 70TB on 9 various
> Solaris 10 u4 servers, with different data sets. All of these are NFS
> servers. Two servers have a ton of small files, with a lot of read and
> write updating, and NFS performance on these ar
Hello,
I'm planning to use VMware Server on Ubuntu to host multiple VMs, one
of which will be a Solaris instance for the purposes of ZFS.
I would give the ZFS VM two physical disks for my zpool, e.g. /dev/sda
and /dev/sdb, in addition to the VMware virtual disk for the Solaris
OS.
Now I know that S
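(A sketch of what the pool creation inside the guest could look like, assuming the two raw disks are mapped through to the Solaris VM and show up there as c1t1d0 and c1t2d0; the device and pool names are purely illustrative.)

  # inside the Solaris guest: mirror the two pass-through disks
  zpool create tank mirror c1t1d0 c1t2d0
  zpool status tank    # both sides of the mirror should show ONLINE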
Lewis Thompson wrote:
> Hello,
>
> I'm planning to use VMware Server on Ubuntu to host multiple VMs, one
> of which will be a Solaris instance for the purposes of ZFS.
> I would give the ZFS VM two physical disks for my zpool, e.g. /dev/sda
> and /dev/sdb, in addition to the VMware virtual disk for
Vincent Fox wrote:
> Are you already running with zfs_nocacheflush=1? We have SAN arrays with
> dual battery-backed controllers for the cache, so we definitely have this set
> on all our production systems. It makes a big difference for us.
>
>
No, we're not using the zfs_nocacheflush=1, b
Jonathan Loran wrote:
> Vincent Fox wrote:
>> Are you already running with zfs_nocacheflush=1? We have SAN arrays with
>> dual battery-backed controllers for the cache, so we definitely have this
>> set on all our production systems. It makes a big difference for us.
>>
>>
> No, we're not
> No, we're not using zfs_nocacheflush=1, but our
> SAN arrays are set
> to cache all writebacks, so it shouldn't be needed.
> I may test this, if I
> get the chance to reboot one of the servers, but
> I'll bet the storage
> arrays are working correctly.
Bzzzt, wrong.
Read up on a few thr
I'm running Nevada build 81 on x86 on an Ultra 40.
# uname -a
SunOS zbit 5.11 snv_81 i86pc i386 i86pc
Memory size: 8191 Megabytes
I started with this zfs pool many dozens of builds ago, approx a year ago.
I do live upgrade and zfs upgrade every few builds.
When I have not accessed the zfs file sy
Any chance the disks are being powered down, and you are waiting for
them to power back up?
Nathan. :)
Neal Pollack wrote:
> I'm running Nevada build 81 on x86 on an Ultra 40.
> # uname -a
> SunOS zbit 5.11 snv_81 i86pc i386 i86pc
> Memory size: 8191 Megabytes
>
> I started with this zfs pool m
In the last 2 weeks we had 2 zpools corrupted.
The pool was visible via zpool import, but could not be imported anymore. During
the import attempt we got an I/O error.
After a first power cut we lost our jumpstart/nfsroot zpool (another pool was
still OK). Luckily the jumpstart data was backed up and easily restored,
Since I spend a lot of time going from machine to machine, I thought
I'd carry a pool with me on a couple of USB keys. It all works fine
but it's slow, so I thought I'd attach a file vdev to the pool and
then offline the USB devices for speed, then undo when I want to take
the keys with m
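(A rough sketch of that attach/offline dance, assuming the pool is called usbpool and one key shows up as c5t0d0; both names are placeholders, and the backing file must be at least as large as the USB device.)

  mkfile 2g /export/usbmirror.img                    # file vdev to mirror the slow key
  zpool attach usbpool c5t0d0 /export/usbmirror.img
  zpool offline usbpool c5t0d0                       # run fast off the file vdev
  # before leaving: zpool online usbpool c5t0d0, let it resilver, then detach the file vdev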
kristof wrote:
> In the last 2 weeks we had 2 zpools corrupted.
>
> The pool was visible via zpool import, but could not be imported anymore. During
> the import attempt we got an I/O error,
>
What exactly was the error message?
Also look at the fma messages, as they are often more precise.
-- richard
> After
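(The FMA logs Richard refers to can be dumped with standard commands, no pool-specific arguments needed.)

  fmdump -v     # summarized fault events diagnosed by fmd
  fmdump -eV    # raw error reports, which usually show the underlying I/O errors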
Hello,
I have a Dell 2950 with a Perc 5/i, two 300GB 15K SAS drives in a RAID0 array.
I am considering going to ZFS and I would like to get some feedback about which
situation would yield the highest performance: using the Perc 5/i to provide a
hardware RAID0 that is presented as a single vol
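(For comparison with the hardware RAID0 option, the ZFS-side alternative is to export both disks individually and let ZFS stripe across them; device names below are placeholders.)

  # dynamic striping across both drives, roughly the ZFS equivalent of RAID0
  zpool create tank c1t0d0 c1t1d0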
Hi everybody,
Greg pointed me to
http://lists.freebsd.org/pipermail/freebsd-stable/2008-January/040136.html
from a Daniel Eriksson:
If you import and export more than one zpool, FreeBSD will panic during
shutdown. This bug is present in both RELENG_7 and RELENG_7_0 (I have
not tes
>
> However, I'm also unhappy about having to wait for S10U6 for the separate
> ZIL and/or cache features of ZFS. The lack of NV ZIL on our new Thumper
> makes it painfully slow over NFS for workloads with large numbers of file
> creates/deletes.
I did a bit of testing on this (because I'm in
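(The "cache" feature mentioned above is added the same way as a separate log device on builds that support it; pool and device names are illustrative.)

  zpool add tank cache c5t1d0    # L2ARC read cache device
  zpool add tank log c5t2d0      # separate intent log (slog)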