Re: Sun V100 with >127GB drives on 6.0 supported and working now?

2016-09-09 Thread Daniel Ouellet
On 9/7/16 12:31 PM, Daniel Ouellet wrote:
> I always used to re-install, but only rename my partitions, not redo
> them. However, I changed my auto-install as well and in the process
> forgot to NOT partition above 127GB, or to be exact 268,435,440 blocks
> of 512 bytes, as in the past the server ALWAYS crashed if you tried to
> go beyond that, and the V100 simply didn't support >127GB.
> 
> I never had a problem doing that, it's fine. But now I notice, in 6.0
> and maybe earlier as well (I just never tested it), that I sure can
> format drives bigger than 127GB, with no issue so far. It was a
> discovery by mistake this time around.

Just to close the loop on this. It does work, and digging around to
find out why, I finally found this:

http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/sys/dev/pci/pciide.c.diff?r1=1.321&r2=1.322&f=h

"Revision 0xc4 and earlier of the Acer Labs M5229 UDMA IDE controller
can't do DMA for LBA48 commands.  Work around this issue by (silently)
falling back to PIO for LBA48 commands.  Access to the tail end of large
disks will be much slower, but at least it works."

From NetBSD (Takeshi Nakayama).

ok jsg@, krw@, deraadt@

Now I know I wasn't crazy, and thanks for the improvement! Sadly, I
missed that commit, I guess. Better late than never...

The only thing is, as the diff says, the speed is slower. Not so bad,
however.

It goes from an average of 42 seconds to write a 1GB file to 164
seconds for the same size once writes reach the area >137GB on the
drive.
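
For scale, that is roughly a four-fold drop in throughput;
back-of-the-envelope, with shell arithmetic:

# Rough MB/s from the timings above (integer shell arithmetic):
echo $(( 1000 / 42 ))    # ~23 MB/s while DMA is in use
echo $(( 1000 / 164 ))   # ~6 MB/s once it falls back to PIO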

Sure looks like it can now.

I used this to see it:

for n in `jot 100`; do dd if=/dev/zero of=/free/test$n bs=1m count=1000 >> /tmp/test 2>&1; done

and looked at the stats for each write.
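
If anyone wants to repeat this, here is how the timings can be pulled
back out of that log (this assumes dd's usual "bytes transferred in N
secs" summary lines landed in /tmp/test):

# Print one line per test file: its number and how long the write took.
grep 'bytes transferred' /tmp/test | awk '{ printf "file %3d: %s secs\n", NR, $5 }'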

That's a big partition at the end of a 160GB drive.

But my boxes do not operate at UDMA mode 5 to start with, so switching
to PIO mode 4 is not a huge difference in this setup.

wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2

I do not believe I have any of the 80-conductor (shielded) cables
needed to run above UDMA mode 2 anyway.
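
If anyone wants to compare their boxes, a quick way to see what each
disk and controller negotiated at boot:

# Show the disk and IDE controller lines from the boot messages:
dmesg | grep -e '^wd' -e '^pciide'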



Re: Sun V100 with >127GB drives on 6.0 supported and working now?

2016-09-08 Thread Daniel Ouellet
On 9/7/16 4:55 PM, Michael Plura wrote:
> On Wed, 7 Sep 2016 12:31:58 -0400
> Daniel Ouellet  wrote:
> 
>> A quick question on this, as I only noticed this in the last few
>> days, by accident actually, and I want to know if it's real or not.
>> ...
>> and the V100 simply didn't support >127GB.
>> ...
>> I sure can format drives bigger than 127GB, with no issue so far.
>> ...
>> Am I shooting myself in the foot if I try now? So far I haven't seen
>> any problems doing it, but it's been only a week.
> 
> Probably. From dmesg on my V100:
> 
>  ebus0 at pci0 dev 7 function 0 "Acer Labs M1533 ISA" rev 0xc3
> 
> The Sun V100 uses an AcerLabs (ALi) M1535D south-bridge chipset,
> which supports ATA-5/UDMA66 with 28-bit LBA addressing - that is
> 128 GiB / 137 GB.
> 
> ALi did an M1535D+ chipset that supported ATA-6/UDMA100 with LBA48,
> which can address bigger drives. Some BIOSes on x86 didn't support
> this, but there you could use bigger drives with an appropriate
> driver in the OS.
> 
> For your V100, I think you just start overwriting the first bytes of
> the drive if you reach 128 GiB... but I'm not sure. You might test
> that with dd.
> 

That's just it, I am not sure. It appears not to write over, but I will
test this more aggressively.
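
Here is a rough sketch of what I have in mind, nothing fancy; /free and
the file count are just my setup, adjust to taste:

# Fill the big partition with random files and record their checksums,
# then re-verify later (e.g. after a reboot) to catch silent corruption.
for n in `jot 20`; do
        dd if=/dev/urandom of=/free/chk$n bs=1m count=1000 2>/dev/null
done
sha256 /free/chk* > /tmp/sums.before
# ... later, after a umount/mount or reboot:
sha256 /free/chk* > /tmp/sums.after
diff /tmp/sums.before /tmp/sums.after && echo "data intact"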

Here is an example dmesg from one I am working with for testing now.

The controller is still not an advanced one:

ebus0 at pci0 dev 7 function 0 "Acer Labs M1533 ISA" rev 0x00

But I also see this:

wd0: 16-sector PIO, LBA48, 152627MB, 312581808 sectors

Note the LBA48 in there. And I sure don't see ATA-6/UDMA100 or M1535D+
anywhere.

But I do see this one: Acer Labs M5229 UDMA IDE

It's just that before, you couldn't even format it at all; as soon as
you passed 268,435,440 sectors, the server crashed. But now, so far, it
formats fine. I copied data onto it, will do more, and it still works.
I just wanted to know exactly what I should be looking for, to tell
whether that was just a fluke or whether something is supported now
that wasn't before and somehow gets past the limits I used to hit.

I will need to go into each one and check carefully what I see, whether
there are differences or not.

I wish I knew exactly what to look for. It would save a lot of testing,
but in the end, if that's what it takes, I will do it.

Having a way to know for sure would be nice.

It's just that I came to this by accident, and now I am really trying
to find the answer. More curiosity than anything, I suppose, as it's
not like these servers are powerful enough to put to great work, but I
guess I really love them.

Thanks for your feedback, I appreciate it!


console is /pci@1f,0/isa@7/serial@0,3f8
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.
Copyright (c) 1995-2016 OpenBSD. All rights reserved.
http://www.OpenBSD.org

OpenBSD 6.0 (GENERIC) #1094: Tue Jul 26 16:40:58 MDT 2016
dera...@sparc64.openbsd.org:/usr/src/sys/arch/sparc64/compile/GENERIC
real mem = 2147483648 (2048MB)
avail mem = 2094309376 (1997MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root: Sun Fire V100 (UltraSPARC-IIe 548MHz)
cpu0 at mainbus0: SUNW,UltraSPARC-IIe (rev 3.3) @ 548 MHz
cpu0: physical 16K instruction (32 b/l), 16K data (32 b/l), 512K
external (64 b/l)
psycho0 at mainbus0: SUNW,sabre, impl 0, version 0, ign 7c0
psycho0: bus range 0-0, PCI bus 0
psycho0: dvma map 6000-7fff
pci0 at psycho0
ebus0 at pci0 dev 7 function 0 "Acer Labs M1533 ISA" rev 0x00
"dma" at ebus0 addr 0- ivec 0x2a not configured
rtc0 at ebus0 addr 70-71: m5819
power0 at ebus0 addr 2000-2007 ivec 0x23
lom0 at ebus0 addr 8010-8011 ivec 0x2a: LOMlite2 rev 3.12
com0 at ebus0 addr 3f8-3ff ivec 0x2b: ns16550a, 16 byte fifo
com0: console
com1 at ebus0 addr 2e8-2ef ivec 0x2b: ns16550a, 16 byte fifo
"flashprom" at ebus0 addr 0-7 not configured
alipm0 at pci0 dev 3 function 0 "Acer Labs M7101 Power" rev 0x00: 74KHz
clock
iic0 at alipm0
"max1617" at alipm0 addr 0x18 skipped due to alipm0 bugs
spdmem0 at iic0 addr 0x54: 512MB SDRAM registered ECC PC133CL2
spdmem1 at iic0 addr 0x55: 512MB SDRAM registered ECC PC133CL2
spdmem2 at iic0 addr 0x56: 512MB SDRAM registered ECC PC133CL2
spdmem3 at iic0 addr 0x57: 512MB SDRAM registered ECC PC133CL2
dc0 at pci0 dev 12 function 0 "Davicom DM9102" rev 0x31: ivec 0x7c6,
address 00:03:ba:2b:62:16
amphy0 at dc0 phy 1: DM9102 10/100 PHY, rev. 0
dc1 at pci0 dev 5 function 0 "Davicom DM9102" rev 0x31: ivec 0x7dc,
address 00:03:ba:2b:62:17
amphy1 at dc1 phy 1: DM9102 10/100 PHY, rev. 0
ohci0 at pci0 dev 10 function 0 "Acer Labs M5237 USB" rev 0x03: ivec
0x7e4, version 1.0, legacy support
pciide0 at pci0 dev 13 function 0 "Acer Labs M5229 UDMA IDE" rev 0xc3:
DMA, channel 0 configured to native-PCI, channel 1 configured to native-PCI
pciide0: using ivec 0x7cc for native-PCI interrupt
wd0 at pciide0 channel 0 drive 0: 
wd0: 16-sector PIO, LBA48, 152627MB, 312581808 sectors

Re: Sun V100 with >127GB drives on 6.0 supported and working now?

2016-09-08 Thread Michael Plura
On Wed, 7 Sep 2016 12:31:58 -0400
Daniel Ouellet  wrote:

> A quick question on this, as I only noticed this in the last few
> days, by accident actually, and I want to know if it's real or not.
> ...
> and the V100 simply didn't support >127GB.
> ...
> I sure can format drives bigger than 127GB, with no issue so far.
> ...
> Am I shooting myself in the foot if I try now? So far I haven't seen
> any problems doing it, but it's been only a week.

Probably. From dmesg on my V100:

 ebus0 at pci0 dev 7 function 0 "Acer Labs M1533 ISA" rev 0xc3

The Sun V100 uses an AcerLabs (ALi) M1535D south-bridge chipset, which
supports ATA-5/UDMA66 with 28-bit LBA addressing - that is 128 GiB /
137 GB.
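
To show where those figures come from (plain shell arithmetic):

# The 28-bit LBA ceiling:
echo $(( (1 << 28) * 512 ))   # 137438953472 bytes = 128 GiB, ~137 GB
# Your 268,435,440-sector figure is 2^28 minus 16 sectors, i.e. the
# same boundary rounded down to a multiple of 16 sectors:
echo $(( 268435440 * 512 ))   # 137438945280 bytes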

ALi did an M1535D+ chipset that supported ATA-6/UDMA100 with LBA48,
which can address bigger drives. Some BIOSes on x86 didn't support
this, but there you could use bigger drives with an appropriate driver
in the OS.

For your V100, I think you just start overwriting the first bytes of
the drive if you reach 128 GiB... but I'm not sure. You might test that
with dd.
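
For instance, something like the following sketch. It is DESTRUCTIVE,
so only on a scratch drive; wd1 here is just a placeholder for whatever
disk you test:

# Save the first sectors, write a marker block just past the 28-bit
# boundary (sector 2^28 = 268,435,456), then see if sector 0 changed.
dd if=/dev/rwd1c of=/tmp/head.before bs=512 count=64
echo "LBA48 wrap test" | dd of=/dev/rwd1c bs=512 seek=268435456 conv=sync
dd if=/dev/rwd1c of=/tmp/head.after bs=512 count=64
cmp /tmp/head.before /tmp/head.after && echo "no wrap-around" || echo "wrapped!"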


> I would very much appreciate knowing for sure, as I still run >100 of
> them. Nothing critical runs on these servers, but extending them
> would be nice: with so many new ones being put in place, not having
> to replace these too for lack of space would help, as I still have
> plenty of new IDE drives in boxes, which I purchased in bulk years
> ago, planning ahead for dead ones. They still run great.
> Does anyone know for sure if >127GB drives are no problem to use at
> all now on these so-loved servers?

The V100 is the cheap IDE version of the SCSI V120... but I like them
for whatever reason too. Their little mainboard inside is somewhat...
cute! :)


Michael

> Thanks,
> 
> Daniel



Sun V100 with >127GB drives on 6.0 supported and working now?

2016-09-07 Thread Daniel Ouellet
A quick question on this, as I only noticed this in the last few days,
by accident actually, and I want to know if it's real or not.

I always used to re-install, but only rename my partitions, not redo
them. However, I changed my auto-install as well and in the process
forgot to NOT partition above 127GB, or to be exact 268,435,440 blocks
of 512 bytes, as in the past the server ALWAYS crashed if you tried to
go beyond that, and the V100 simply didn't support >127GB.

I never had a problem doing that, it's fine. But now I notice, in 6.0
and maybe earlier as well (I just never tested it), that I sure can
format drives bigger than 127GB, with no issue so far. It was a
discovery by mistake this time around.

I am not saying it is good, or normal, or supported, or that it will
work forever. I do not know; that's what my question is about.

Is this due to the work done on disklabel and related code a few years
ago, which now somehow bypasses the BIOS, or whatever it is on the
V100, so that it is possible to use drives bigger than 127GB?
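
In the meantime, a quick check for whether any partition actually
crosses the old boundary could look like this (the awk fields assume
disklabel's usual size/offset columns; c is the whole disk, so it is
skipped):

# Flag any partition whose offset + size passes 268,435,440 sectors:
disklabel wd0 | awk '$1 ~ /^[a-p]:$/ && $1 != "c:" && $3 + $2 > 268435440 { print $1 }'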

Am I shooting myself in the foot if I try now? So far I haven't seen
any problems doing it, but it's been only a week.

I would very much appreciate knowing for sure, as I still run >100 of
them. Nothing critical runs on these servers, but extending them would
be nice: with so many new ones being put in place, not having to
replace these too for lack of space would help, as I still have plenty
of new IDE drives in boxes, which I purchased in bulk years ago,
planning ahead for dead ones. They still run great.

Does anyone know for sure if >127GB drives are no problem to use at all
now on these so-loved servers?

Thanks,

Daniel