Re: [coreboot] Wiki page for Lenovo Thinkpad W520

2017-07-17 Thread Charlotte Plusplus
Hello

I would be very happy to help with pictures, source, binaries, info etc.
But since Denver, I have been just a bit behind. Too many side projects :-)

Anyway, I will get back in touch soon. If you use my previously released
binary, you should be fine.

The port to the current coreboot I did in Denver has new RAM issues that I
am investigating. It may again need some dirty workarounds, like adding +1
or +2 to some SPD settings to get memtest to pass without any failure.
Optimus is still not working.

Thanks
Charlotte


On Sun, Jul 16, 2017 at 8:49 AM, Nico Rikken  wrote:

> I created a wiki page for the Lenovo W520 based on information shared
> on this mailinglist:
>
> https://www.coreboot.org/Board:lenovo/w520
>
> I hope others can contribute more details, so I can benefit when taking
> on the challenge of flashing a W520 myself.
>
> Kind regards,
> Nico Rikken (NL)
>
-- 
coreboot mailing list: coreboot@coreboot.org
https://mail.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] Wiki page for Lenovo Thinkpad W520

2017-07-20 Thread Charlotte Plusplus
Hello

I did the rebase in Denver; I have the same RAM issues.

Unless you mean the changes have happened in the last few weeks? I have
some exams in August, so I can either try to rebase again then or just
send the current patch. But it should come with a big fat warning that
memory errors do occur, as shown by memtest.

Charlotte

On Tue, Jul 18, 2017 at 2:55 AM, Iru Cai  wrote:

> Hi Charlotte,
>
> On Tue, Jul 18, 2017 at 1:37 PM, Charlotte Plusplus <
> pluspluscharlo...@gmail.com> wrote:
>
>> Hello
>>
>> I would be very happy to help with pictures, source, binaries, info etc.
>> But since Denver, I have been just a bit behind. Too many side projects :-)
>>
>> Anyway, I will get back in touch soon. If you use my previously released
>> binary, you should be fine.
>>
>> The port to the current coreboot I did in Denver has new RAM issues that
>> I am investigating. It may again need some dirty workarounds, like adding
>> +1 or +2 to some SPD settings to get memtest to pass without any failure.
>> Optimus is still not working.
>>
>
> Maybe you need to rebase your code on master coreboot, which has some
> improvements on Sandy/Ivy Bridge RAM init. There's also some changes in the
> codebase.
>
> Iru
>
>
>> Thanks
>> Charlotte
>>
>>
>> On Sun, Jul 16, 2017 at 8:49 AM, Nico Rikken  wrote:
>>
>>> I created a wiki page for the Lenovo W520 based on information shared
>>> on this mailinglist:
>>>
>>> https://www.coreboot.org/Board:lenovo/w520
>>>
>>> I hope others can contribute more details, so I can benefit when taking
>>> on the challenge of flashing a W520 myself.
>>>
>>> Kind regards,
>>> Nico Rikken (NL)
>>>
>>
>>
>>
>
>
>
> --
> Please do not send me Microsoft Office/Apple iWork documents. Send
> OpenDocument instead! http://fsf.org/campaigns/opendocument/
>

Re: [coreboot] Cherry Trail Support

2017-07-20 Thread Charlotte Plusplus
Is this the small cool machine I brought with me to Denver and showed
around?

Cherry Trail seems to have made its way into a lot of cheap laptops and
tablets too (Best Buy Insignia 11" tablet)

On Sun, Jul 16, 2017 at 5:48 AM, Arne Zachlod  wrote:

> Hi all,
>
> I got a GPD Pocket with an Atom X7-Z8750 (Cherry Trail).
>
> I would like to use coreboot with it, but there seems to be no support
> for Cherry Trail, is support for this platform planned/worked on or is
> this more likely not going to happen?
>
> Thanks,
> Arne
>
>

[coreboot] Patch: support for the Lenovo Thinkpad W520

2016-11-06 Thread Charlotte Plusplus
Hello

I am new to coreboot. I tried to run coreboot on my Thinkpad W520 to
replace my Sandy Bridge CPU with an Ivy Bridge CPU, since several people
reported it worked very well.

Following their suggestions, I simply tried to flash a T520 image to
save time. It didn't work at all. I was very angry I had wasted almost
2 days doing that. So I took the time to port coreboot to the Thinkpad
W520 following the specs from
http://kythuatphancung.vn/uploads/download/5165b_Wistron_Kendo-3_WS_-_10222-SA_-_ThinkPad_W520.pdf
and explore the issues I had (black screen, no boot)

First, please find the patch attached to add support to the Thinkpad
W520 mainboard. This patch is based on the T520 mainboard as the W520
is very close. However, it needed romstage additions for ram init (cf
p103 of the pdf)

I can now boot with various combinations of populated DIMM slots, with
all 4 slots working and the DIMMs correctly identified by memtest

However, I cannot use my normal 8GB DIMM sticks. One of them works,
but 2x8GB does not work most of the time; I get errors in memtest when
I am lucky enough to boot. The sticks are DDR3L 2133 (bus speed 1066 MHz)
and they worked fine with BIOS 1.36: I tested the sticks for over 24h
with memtest86 7.1 just a week ago to check how hot my old CPU would
get.

After further investigation, if I run memtest86 7.1 under BIOS 1.36,
it shows the correct information and timings: tCL-tRCD-tRP-tRAS are
11-11-11-29, and the RAM is running at 1066MHz.

If I run memtest86 on coreboot, it shows the RAM is running at 931MHz
with timings 10-10-10-28. All this would be fine if it didn't cause
errors on the memtest86! It looks like the SPD decoding is buggy, and
instead of using the optimal configuration (11-11-11) coreboot picks
the next one.

I would be happy to fix that but I don't know where to look. I
correctly put max_mem_clock_mhz = 1066 in devicetree.cb so I don't
understand. Can I hardcode timing settings? Is there any other place I
should check for SPD issues?
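For reference, here is the kind of devicetree.cb fragment I mean (an
abbreviated, hypothetical sketch; only the register line is the relevant
part, and the surrounding structure is just how I understand it):

```
# sketch of src/mainboard/lenovo/w520/devicetree.cb
chip northbridge/intel/sandybridge
	# Bus clock limit in MHz; a 1066 MHz bus clock means DDR3-2133
	register "max_mem_clock_mhz" = "1066"
	...
end
```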

There are some other W520 specific issues. I will try to prepare a
wiki page to explain them. The big ones are:
 - the bottom plastic case is conductive: if you add a J100 header,
you must trim the ends or you will have shorts and the computer will
not boot
 - if you flash with a Raspberry Pi, you do not need external power:
the 3.3V from pin 17 is enough. WP and HOLD can be left floating if you
want. But you must insert all the pins at once, otherwise the W25Q64
may not respond (it is weird, I see no logical reason, but it works
reliably)
 - native graphics initialization gave me video artifacts in SeaBIOS.
Using the VGA BIOS fixed that

On https://www.coreboot.org/Board:lenovo/t520 I see these issues:
 - yellow USB port isn't powered in power-off state
 - DisplayPort only connected to discrete GPU
 - TPM: at the moment there is only basic support inside coreboot
 - boot time issues (keyboard reset timeout)
 - UltraBay hot plug (event missing?)
 - some power management states missing

I am new to coreboot. I could try to add the missing power management
states. But can I please ask for pointers and suggestions? What is
missing? Is there any documentation?

Also, in romstage I am not sure of :
/* Disable unused devices (board specific) */
RCBA32(FD) = 0x1ee51fe3;

The W520 is a bit different from a T420 and I think the value may not be
the same. How do I compute this number?

Also, can I please ask for some help with USBDEBUG? I have an FT232R
cable and enabled CONFIG_USBDEBUG_DONGLE_FTDI_FT232H, but I receive
nothing. Is it necessary to have an FT232H? I looked at the code and it
does not seem very specific. Having debug information would help to fix
these RAM issues.

Charlotte


w520.tar.gz
Description: GNU Zip compressed data

[coreboot] Removing Intel ME from the X230 coreboot

2016-11-06 Thread Charlotte Plusplus
Hello

Did you release any of your work to remove the ME from an X230?

I am interested in attempting that on a W520. On
https://www.coreboot.org/pipermail/coreboot/2016-September/082049.html
you mention you left ROMP, BUP, KERNEL, POLICY and FTCS. Any update on
that? Is it the minimal set?

I assume you made a script that takes a ME firmware, reads its
partitions, then dds some parts of them to 0xFF. Can I please get a copy?

Thanks!



Re: [coreboot] Patch: support for the Lenovo Thinkpad W520

2016-11-07 Thread Charlotte Plusplus
Hello

On 11/6/16, Patrick Rudolph  wrote:
> thanks for adding support for W520.

My pleasure to help! As soon as I get the RAM issues fixed, I will try
to add another mainboard. This was quite fun (reading the block
diagram, making sure the ports match, etc)

My only real issue at the moment is the RAM. I wish I could fix that
soon, because I really need my laptop to work! And since I replaced
the CPU with one that is not supported by the BIOS, coreboot is my only
option :-/

I can't even boot reliably with more than one RAM stick inserted :-(

> On 06.11.2016 at 09:33, Charlotte Plusplus wrote:
> Please use git (
> https://www.coreboot.org/Development_Guidelines#How_to_contribute ) and
> gerrit ( http://review.coreboot.org/#/q/status:open ) to upload your
> patch. This allows easy reviewing and commenting, including an automatic
> build for every change made.

Sorry, I have never used git, except to download code. I thought
an archive would be OK. I will read your links and prepare a
submission in the correct format. Thanks for the explanation!

> Please note that I didn't test anything beyond DDR3-800. Some people
> reported that DDR3-1866 is working.
> Can you try to limit max frequency to DDR3-1333 or DDR3-1600 (using
> max_mem_clock_mhz) and tell if it's working ?

Currently doing that. I have read a bit more about memory, and
apparently it could also be due to Intel XMP.

> Native raminit is done in
> coreboot/src/northbridge/intel/sandybridge/raminit.c
> Please have a look at this file first. I don't think that there are SPD
> issues, but there might be issues with frequencies of + DDR3-1600.

I looked at that. I see you recently added XMP support:
https://www.coreboot.org/pipermail/coreboot-gerrit/2016-February/040779.html

The sticks I am using are Corsair CMSX16GX3M2B2133C1, which use XMP.
They are dual voltage, 1.35V and 1.5V.

Here is the output from decode-dimms run under the BIOS, and from
coreboot. I can also add screenshots from memtest86 7.1, which showed
the proper SPD settings and speed. Speed test curves gave numbers
compatible with 2133, which is above DDR3-1600, as BIOS 1.36 did not
restrict their speed. I did test them for over a day in memtest86 7.1.

Since the sticks work reliably in the default BIOS as confirmed by
memtest86 7.1, the problem seems to be coreboot specific: memtest86+
5.0 gives me various errors that never showed up before. And with one
stick, coreboot won't even boot.

I don't understand how the existing raminit code could have issues
with frequencies over DDR3-1600.

My best guess is that it may be due to using the CL10 profile (even if
it is "correct") and ignoring the XMP CL11 profile: I see the current
code only selects the first XMP profile, which could be the cause of the
error if, say, the second XMP profile is the one that should be used.

It all looks innocent, but it is a serious error, as one 8GB stick
gives me tens of thousands of errors in memtest86. I think this could
cause serious data corruption :-(

>> I am new to coreboot. I could try to add the missing power management
>> states. But can I please ask for pointers and suggestions? What is
>> missing? Is there any documentation?
> Power management is done through ACPI.
> You'd need to figure out which ACPI functions are used by your operating  
> system and implement/fix them.

I am using Linux (Arch or Ubuntu). I don't really understand what you
mean there; I thought ACPI functions for power management were quite
standard. After reading my dmesg, I thought the Linux kernel P-state and
cpufreq drivers were working fine. I need to do more tests with
powertop.

Is the power management support complete enough, from coreboot's
standpoint, to work with the current Linux kernel?

If not, could you point me to an Ivy Bridge board that has the most
complete implementation of power management, so I can use it as an
example?

I see some boards have C-states support, like
src/mainboard/lenovo/x200/cstates.c, and after googling I found that
libreboot had "Higher battery life on GM45 (X200, T400, T500, R400)
due to higher cstates now being supported (thanks Arthur Heymans). C4
power states also supported. Higher battery life on i945 (X60, T60,
MacBook2,1) due to better CPU C-state settings. (Deep C4, Dynamic L2
shrinking, C2E)."

Patch is in:
https://notabug.org/vimuser/libreboot/commit/89819c5ce3cd5c9a38e9e7e817573dca52cbabcb

However, I could not find documentation or explanations for the
numbers used (which differ between coreboot and libreboot)

I have found 
http://www.intel.com/content/www/us/en/support/processors/06619.html
but that is not super helpful.

I would be happy to write proper power management support, but I would
really appreciate some links to the documentation or some examples.

A related question: how do I write an ACPI method to receive a call
from userland and do something 

Re: [coreboot] Patch: support for the Lenovo Thinkpad W520

2016-11-07 Thread Charlotte Plusplus
Hello

On 11/6/16, Nico Huber  wrote:
> http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/6-chipset-c200-chipset-datasheet.pdf

Thanks a lot for the documentation and the detailed explanation!

I will try to fix my devicetree.

Now if only I could find something like that for power management!



Re: [coreboot] Removing Intel ME from the X230 coreboot

2016-11-07 Thread Charlotte Plusplus
Hello

Thanks a lot for your detailed report. I would be happy to contribute
to your tests with my Ivy Bridge Thinkpad W520.

I have been using the ME from
https://www.dropbox.com/sh/mq1e2a54vv030nz/AAAgPSRYlPhuXiZiRE9qJbwQa

It is 1.5MB, which leaves me about 7MB for the payload:

CONFIG_CBFS_SIZE=0x68

With the following layout:

  1 :0fff fd
  2 0018:007f bios
  3 3000:0017 me
  4 1000:2fff gbe

The system did not power off after 30 min, but I would like to strip
everything I can out of the Intel ME. Your approach is interesting.
What is the minimal size you can presently get?

Using your python script on my 1.5MB me.bin, I get FFs after 0x906C0.

Can I adjust the IFD accordingly to reclaim this extra space, or will I
run into problems, as Trammell said the ME wants to write to its own
partition?

Have you been able to test Igor's suggestions to further trim the Intel ME?

Thanks
Charlotte

On 11/6/16, Federico Amedeo Izzo  wrote:
> On 11/06/2016 10:11 AM, Charlotte Plusplus wrote:
>> Hello
>>
>> Did you release any of your work to remove the ME for a X230?
>>
>> I am interested in attempting that on a W520. On
>> https://www.coreboot.org/pipermail/coreboot/2016-September/082049.html
>> you mention you left ROMP, BUP, KERNEL, POLICY and FTCS. Any update on
>> that? Is it the minimal set??
>>
>> I assume you made a script that takes a ME firmware, reads it
>> partition, then dd some parts of it to 0xFF. Can I please get a copy??
>>
>> Thanks!
>>
> Hello,
>
> Lately I worked with Nicola Corna on disabling the intel ME, following
> the results of Trammell Hudson.
>
> We also made a python script that removes all the partitions but FTPR
> and the corresponding entry in the FPT table, the results boots fine on
> a Sandy Bridge X220 & X220t but leads to a brick on Nehalem X201.
>
> You can find more details here along with the script
> https://www.coreboot.org/pipermail/coreboot/2016-November/082331.html
>
> Federico
>
>
>
>



[coreboot] Weird issue: payloads not showing after seabios

2016-11-07 Thread Charlotte Plusplus
Hello

I am sorry if this is a total noob question, but I can't get any payload
to show other than the default SeaBIOS payload or memtest86.

nvramcui, tint, coreinfo (all automatically added by coreboot as secondary
payloads) or GRUB (added manually with cbfstool) show nothing, or cause a
reboot.

I first thought that was due to my RAM issues, but even after finding
low-speed DDR3 that works reliably in coreboot, I still can't get other
payloads.

Just to be sure, I tried everything:
 - I used CONFIG_MAINBOARD_DO_NATIVE_VGA_INIT: weird font (maybe because I
have an FHD 1920x1080 screen instead of some different hardcoded resolution)
 - I tried CONFIG_VGA_ROM_RUN: garbled screen with wavy lines

I am back to none of these options, with a VGA ROM initialized by
SeaBIOS. I am using version 1.9.3 and the current coreboot head. I am
now trying with SeaBIOS 1.10 (d7adf6044a4c772b497e97272adf97426b34a249)

I have read https://www.coreboot.org/SeaBIOS#coreboot. I am using at the
moment the .config attached, but I don't understand what is happening. I
don't have debug output yet; I just ordered an FT232H, as it was far less
expensive than a BBB. It will only arrive next week.

I think at least something should show on the screen when I press Esc and
select another payload. Only memtest86 works at the moment.

Am I making some stupid mistake?

$ cat /opt/coreboot/blobs/payloads/seabios-minime.config
CONFIG_USE_OPTION_TABLE=y
CONFIG_COLLECT_TIMESTAMPS=y
CONFIG_USE_BLOBS=y
CONFIG_VENDOR_LENOVO=y
CONFIG_CBFS_SIZE=0x68
CONFIG_BOARD_LENOVO_W520=y
CONFIG_USBDEBUG=y
CONFIG_ENABLE_VMX=y
CONFIG_PCIEXP_CLK_PM=y
CONFIG_CONSOLE_USB=y


w520-reliable.config
Description: XML document

[coreboot] More details about ram issues

2016-11-08 Thread Charlotte Plusplus
Hello

On Tue, Nov 8, 2016 at 11:30 AM, Patrick Rudolph  wrote:

> Good point, I've should have mentioned that.
>

Well, I found out by myself the hard way after scratching my head. It is
the best way to learn I suppose :-)


> At the moment there's no such functionality I'm aware of. Of course you
> can comment out store_current_mrc_cache() and it will always do full
> raminit training.
>

The boot is already quite slow, around 6 seconds. cbmem -t shows about 1.5
seconds are spent at the beginning of RAM init and another 1.5 at the end
of ramstage. I don't know if there is a way to trim that safely, but I am
reluctant to add anything that will make me stare longer at a black
screen while testing this RAM stuff.

My best idea is to make using mrc_cache conditional on the reboot count
being below 10 (or the largest value that can be held, or some impossible
value that can still be stored there with nvramtool), and manually
setting the reboot count with nvramtool when I want. That should be a
simple workaround, if the nvram variables are accessible at this point
(I have to check)

In init_dram_ddr3(), would something like this work to replace
mrc_cache = find_current_mrc_cache()?

	unsigned char byte;

	byte = cmos_read(RTC_BOOT_BYTE);
	if (boot_count(byte) < 10)
		mrc_cache = find_current_mrc_cache();

> > Alternatively, a really nice thing to have would be a nvram option to
> > pass max_mem_clock_mhz without having to recompile, and invalidate
> > this cache if the settings do not match, to cover cases like this
> > one.  Also, overclockers would love that. So this option could even
> > be used to pass custom SPD timings, as some bios do. It would require
> > more thinking on what to do and how, but it looks like a nice feature
> > to me. Would it be acceptable to add that to coreboot?
> >
> That's a nice idea, but there's no interface (yet). It also would be
> nice to have a vendor independent solution.
>

If other vendors support passing ram parameters, why not?

At the moment I'm thinking of an nvram option like:
MemoryOverride: freq, tCAS, tRCD, tRP, tRAS

Why would it be an impossible interface? Just read the nvram; if you find
the option, override the default values of ctrl->tCK, ctrl->CAS,
ctrl->tRCD, ctrl->tRP and ctrl->tRAS in dram_init. Do this unless the
setting can't be applied by the chips.

I have no idea whether this is possible, because until about a week ago I
had no idea how the ram init worked. It just seems to make sense to do it
this way.

Another nice use: memtest could write to the nvram to increase the
latencies when it finds errors, update the reboot count to the value
that would trigger an update of the MRC cache, and issue a reboot.
coreboot would then use these new values, and memtest would try again to
see if the new setup is stable.

That would be a nice way to automate the tedious step of making the ram
work.

It could also help validate ram init algorithms, since I now understand
they are a bit of a black box. I mean, it's not normal that reading and
applying the SPD information from my sticks results in an unstable system.

An option for a safety check should be added somewhere.

> The raminit code is based on sandybridge, and doesn't have all ivy
> bridge optimizations. For example it isn't possible to use a base
> frequency of 100Mhz (no DDR3-2000 or DDR3-1800 modes supported).
>

I didn't know that :-( After some googling, I see it was reverse
engineered. There must be no documentation, so no easy way to add Ivy
Bridge optimizations :-(

I think it's another good reason to have a way to interface ram setting
with userland tools.

Also, XMP would require adjusting the voltage (especially for profile 2,
which is currently not read), but I don't think that is done at the
moment. Or maybe I just don't understand it. This means the information
we get from reading the XMP profile, which also contains some voltage
information, cannot be relied upon for stability, as only half of it is
applied.

Maybe using XMP should be made a compile-time option (with a warning
about stability, since the voltage information is ignored and only
timings are used) until the voltages can also be applied?

I will try to run more tests to see if ignoring XMP can help me with my
stability issues before I go on with the overrides.

> Please have a look at dram_timing(). I don't think it's a good idea to
> do tests without having logging output, but it's up to you.
>

I have no way to log output at the moment; I am waiting for my FT232.
All I can do is check stability in memtest. I also prefer memtest's
verdict as the final arbiter, because all that matters is stability.


> You can enable raminit logging by checking "Output verbose RAM init
> debug messages" in menu Debugging. It doesn't work well with cbmem
> logging.
>

Already done, and indeed I have truncated lines :-(

Charlotte
-- 
coreboot mailing list: coreboot@coreboot.org
https://www.coreboot.org/mailman/listinfo/coreboot

Re: [coreboot] More details about ram issues

2016-11-10 Thread Charlotte Plusplus
So I did many more tests today (more than 6h, flashing around 30
times), with SPD settings hardcoded into raminit, and without the MRC
cache interfering.

TLDR: coreboot tries to increase the frequency without increasing the
voltage, and that doesn't work for all memory.

Basically, with the problematic RAM sticks, I can boot perfectly fine at
DDR3-1866 speed, but even the slower setting 11-11-11-31 gives errors in
memtest. This is inconsistent with the information I found about my
memory from its SPD, and from other people who overclock this exact same
memory.

Even at 10-10-10-27, I still get errors at DDR3-1600 speeds. Far fewer
than before, but some errors still.

After reading more about XMP and SPD, it is my understanding that:
 - JEDEC specs stop at 1600; after that, XMP is required
 - even before 1600, XMP also offers profiles, and they are not optional:
some memory is otherwise unable to work at its advertised speed
 - XMP profiles are a kind of overclocking: they usually require
adjusting the voltage to deal with the increased speed
 - in the XMP profile bytes, voltage increase information is given precisely
 - nowhere in the code did I see anything increasing the voltage, while
XMP requires that

I conclude that while there may be errors in selecting the SPD settings,
some errors remain even if the SPD is manually corrected with known-good
settings, or if overshooting with very generous latencies: the RAM is
being asked to operate outside its voltage specification (given the
frequency)

1.5V is a JEDEC spec, but RAM is advertised based on the information
contained in the XMP profiles, which at the moment do not seem fully
supported.

I do not know how to adjust the voltage (it should require talking to
the IMC of the CPU), but I think that as soon as this is done, stability
should improve.

If someone can propose a patch doing that (either using the voltage read
from SPD, or by manually entering voltage information), I will be happy to
test it.

For now, I urge caution when operating even at DDR3-1866 frequencies.
Most boards do set 933 as their max_mem_clock_mhz. It is not very
prudent to do that until the voltage situation is solved.

Charlotte

Re: [coreboot] More details about ram issues

2016-11-11 Thread Charlotte Plusplus
Hello

On Fri, Nov 11, 2016 at 7:53 AM, Nico Huber  wrote:

> > After reading more about XMP and SPD, it is my understanding that :
> >  - JEDEC specs stop at 1600, and after that XMP is required
> >  - even before 1600, XMP also offers profiles, and they are not optional:
> > some memory is otherwise unable to work at its advertised speed
>
> This would mean the memory is just broken. But that's what I suspect of
> any memory that's supposed to run out of spec.
>

I think that is being too extreme. All of the RAM sold, and most of what
is in use, contains XMP profiles. It is hard to say how much of the
installed base has problems when using XMP profiles without adapting the
voltage, since not many people use coreboot, and not many of those who
do will be running memtest for hours.


> >  - XMP profiles are some kind of overclocking: they usually require
> > adjusting the voltage, to deal with this increased speed
>
> Not kind of overclocking, simply overclocking. There's only one voltage
> specified for DDR3, IIRC.
>

It is indeed Intel's fault for rating the IMC only to 1600, and putting
a hack on top of that to make RAM go faster. But they made this hack a
standard, adopted by manufacturers. So it is not just overclocking in
the usual sense; it is Intel- and RAM-manufacturer-validated
overclocking, where the XMP profiles contain speed settings plus
voltage, and negotiate with the system to get this voltage.

It is stable, or it wouldn't be used in production by so many BIOSes.


> JEDEC is the standard. If the XMP support is half-baked it should be
> disabled by default. Maybe we should even put a warning in the log if we
> encounter an XMP profile with anything else than 1.5V (if it's common
> that those DIMMs are broken ex factory).
>

So many standards to choose from, lol. I wouldn't call the XMP support
half-baked. It is a very nice addition: based on my understanding, some
combinations of chipset + RAM may not even be able to boot without XMP
profiles. The XMP implementation just needs to be completed to also do
the voltage part.

Likewise, we can't say that 99% of the DIMMs are broken. The XMP profiles
have been tested and validated.

In my opinion, XMP should be a compile time option, defaulting to y, but
with a warning of possible ram errors.

Most boards have a max_mem_clock_mhz of 933, which concerns me just as
much in terms of RAM errors.

I would suggest allowing RAM settings to override that, and the selected
SPD settings. This way, unstable settings detected in memtest86 can be
adjusted in nvramcui without having to recompile.

I will do more research to see if I can do that from userland (just as
MSRs can be used for CPU overclocking, there must be a way to specify
RAM voltage)

End result, until XMP voltage can be adjusted one way or another:
 - most systems will be unaffected
 - unstable systems can override max_mem_clock_mhz to 666, or see if
they can get something better with manual SPD settings
 - userland programs may automate that last part
 - when XMP voltage is supported (and a 100 MHz base clock for Ivy
Bridge instead of 133, as Patrick noted, etc.), coreboot will have
gained much more flexibility in RAM initialization to deal with similar
situations that may arise with new specs

It will also be very nice to have all that work without blobs.

> Depending on the board the voltage might not be configurable at all. Why
> should it be if there is only one voltage defined in the standard?
>

No, XMP calls for a precise voltage. From Wikipedia:

bit 0    Module Vdd voltage twentieths (0.00 or 0.05)
bits 4:1 Module Vdd voltage tenths (0.0–0.9)
bits 6:5 Module Vdd voltage units (0–2)

The standard calls for the DIMM asking the system for a specific
voltage. It is a negotiation of speed, latency and voltage, cf.
https://en.wikipedia.org/wiki/Serial_presence_detect#Extreme_Memory_Profile_.28XMP.29


> If the board can work at that frequency, that's just fine. If the
> voltage is a problem, it's due to the memory module. IMHO, the rule
> should be to ignore SPD frequency settings that include an out of spec
> voltage
>

XMP is a specification. It should be supported. I think the only mistake is
that the voltage part is not being applied. It's not out of spec, it's
within another spec.


> If you want to do further testing, you can try to find out which com-
> binations of processor and DIMMs work with the Vendor BIOS or the MRC
> blob (I wouldn't expect that it supports non-JEDEC stuff, but it would
> be nice to know if something can be fixed in coreboot easily).


I tested the Lenovo BIOS in depth with memtest86. It works just fine.

Could you give me some suggestions on how to use the MRC blob? I don't
see anything like that in the coreboot source. (And actually, if it's an
Intel blob, I would expect it to support XMP.)

I will try to find a way to adjust the voltage from userland. I will do
more tests when I have either the blob or the voltage working.

Thanks
Charlotte

Re: [coreboot] More details about ram issues

2016-11-11 Thread Charlotte Plusplus
Hello


On Fri, Nov 11, 2016 at 5:32 PM, Nico Huber  wrote:

> > The XMP implementation just needs to be completed to also do the
> > voltage part.
>
> Which either does not work if the board is not designed for it or would
> include hardware modifications.
>

No, it doesn't. The voltage is directly controlled by the CPU pins (see below)


> > No, XMP calls for precise voltage. From wikipedia:
> (...)
>
> I know what it is. It's not the standard board manufacturers usually
> work with.



> > I tested the Lenovo bios in depth with memtest86. It works just fine.
>
> Which is a sign that the voltage setting isn't causing trouble. Since the
> W520 doesn't support anything beside 1.5V, as Patrick pointed out.
>

I don't know which standards manufacturers work with. Likewise, I can't
guess which voltages the W520 supports.

I am only basing myself on the specs. I read them as best I could, and I
stand by my conclusion.

On the W520 block diagram, page 6, I see DDR3_VREF_CA and DQ come from
RSVD#D1 and RSVD#B4.

In
http://www.versalogic.com/support/Downloads/Copperhead/3rd-gen-core-family-mobile-vol-1-datasheet.pdf
I see the pin names:
SA_DIMM_VREFDQ B4
SB_DIMM_VREFDQ D1

In
http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/3rd-gen-core-family-mobile-vol-1-datasheet.pdf
I see: " Memory Channel A/B DIMM DQ Voltage Reference:
These output pins are connected to the DIMMs, and are programmed to
have a reference voltage with optimized margin.
The nominal source impedance for these pins is 150 Ω.
The step size is 7.7 mV for DDR3 (with no load) and 6.99 mV for
DDR3L/DDR3L-RS (with no load). "

I interpret that as the CPU being directly in control of the reference
voltage fed to the DIMMs, with two nominal levels (1.5 V for DDR3, 1.35 V
for DDR3L) that can be changed in increments of about 7 mV.

So I don't understand why the CPU could not supply, say, 1.6 V in a DDR3
setting if the XMP profile contains that information and the IMC is
instructed to do so; that is what the XMP standard is for.
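
As a back-of-the-envelope check of that reading, here is the step arithmetic,
taking the datasheet's 7.7 mV step at face value (this is only illustrative;
it assumes the step applies to the DIMM supply, which later messages in this
thread call into question):

```shell
# Nominal 1500 mV plus k steps of 7.7 mV (step size from the datasheet quote).
vref_mv() {
  awk -v k="$1" 'BEGIN { printf "%.1f\n", 1500 + k * 7.7 }'
}
vref_mv 13    # prints 1600.1 -- roughly the 1.6 V seen in XMP profiles
vref_mv -19   # prints 1353.7 -- close to DDR3L's 1.35 V
```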

Maybe I fully misunderstand that. Yet googling for "cpuz ddr3" and looking
at the pictures, I see in the XMP profiles: 1.650V for the first 2 hits,
then 1.600V, etc.

This is just a random Google search, but non-1.5V profiles do not seem as
rare as you think.

> > Could you give me some suggestions for using the MRC blob? I don't see
> > anything like that in the coreboot source. (And actually, if it's an
> > Intel blob, I would expect it to support XMP.)
>
> Enable CONFIG_USE_BLOBS and disable CONFIG_USE_NATIVE_RAMINIT. And you
> have to implement mainboard_fill_pei_data() in romstage.c (e.g. like
> kontron/ktqm77). If it's working, you can compare settings made by the
> blob and the native raminit with inteltool dumps. You can of course com-
> pare settings of the vendor BIOS too, but I'd expect less noise from the
> MRC blob
>

I will do that. I have now received my FT232H, and just 5 minutes ago I made
a debug cable (FT232H on one side, CP210X on the other side, total cost $15
if anyone else needs a cheap debug cable). It seems to work just fine, as I
get much more debug output than in cbmem.

It should help a lot.

Thanks
Charlotte

Re: [coreboot] More details about ram issues

2016-11-11 Thread Charlotte Plusplus
Hello

On Fri, Nov 11, 2016 at 5:37 PM, Nico Huber  wrote:

> > The W520 does only have 1.5V DDR voltage. If it's stable with vendor
> > bios, it's not a DDR voltage problem at all.
>

Based on my reading of the block diagram and crossing that with a CPU
pinout and the CPU specs, I disagree. The W520 indeed only supports 1.5V,
if you mean 1.5V vs. 1.35V "low voltage" DDR3L.

But SA_DIMM_VREFDQ is in direct control of the DDR3 voltage: "The step size
is 7.7 mV". So it supports 1.5V ± k × 7.7 mV, with k being given by the XMP
profile.

In case this is not clear, on
http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/3rd-gen-core-family-mobile-vol-1-datasheet.pdf
:
Page 30 : "The processor memory controller has the capability of generating
the DDR3 Reference  Voltage (VREF) internally for both read (RDVREF) and
write (VREFDQ) operations. The generated VREF can be changed in small
steps, and an optimum VREF value is  determined for both during a cold boot
through advanced DDR3 training procedures in  order to provide the best
voltage and signal margins."

That seems to be a lot of evidence that the voltage is not an absolutely
fixed 1.500V. It is something more flexible!

> > That's what sandybridge raminit does. Only XMP profiles with DDR
> > voltage of 1.5V are used. Profiles that do have other voltage
> > setting are ignored.
>
> Good to know, I already started worrying about your code just by reading
> emails. Should have looked in the code instead ;) my apologies.
>

Yes, in spd_xmp_decode_ddr3, profiles not using 1.5V are discarded. I
believe this is the problem. When I did a Google image search for "cpuz
ddr3", the first few hits showed me 1.6V and 1.65V XMP profiles. So there
are quite a few such profiles out there. I'm not alone.

At the moment, I do not have any better explanation for why my ram is
not stable than the XMP profiles not being followed.

Patrick said above: "I don't think that XMP is the problem. My guess is that
raminit doesn't set all required registers to fine tune the memory
controller to get it stable."

Maybe it is the explanation, and XMP profiles are indeed not needed at all.
Maybe I am very wrong in my analysis.

At the moment, I would just like to have the ram on my W520 stable when it
operates within specifications (and I mean within an XMP profile), as I was
planning to use the W520 as my main laptop, and I can't :-(

I thought porting coreboot to the W520 would help me do that. These ram
issues are really bothering me. I can't have unstable RAM on my main
laptop. This is why I am extremely motivated to make it work.

I will be making more tests tonight. I included the patch #17389 you
posted today: nb/intel/sandybridge/raminit: Fix CAS Write Latency

I disabled all my SPD hardcoding, and only disabled the MRC cache, so that
I can alternate between normal and fallback to run more tests without
reflashing.

I have strictly no experience with coreboot and I'm learning on the go.
Your help in fixing the RAM issues would be greatly appreciated.

Thanks,
Charlotte

Re: [coreboot] More details about ram issues

2016-11-12 Thread Charlotte Plusplus
Hello

On Sat, Nov 12, 2016 at 7:28 PM, Nico Huber  wrote:

> Looks like it's booting the fallback path, maybe because something went
> wrong on the normal path? I haven't got USB debug working with the MRC
> blob reliably for over a year now, btw. But you should definitely see
> messages before coreboot jumps into the blob. If you get there and it
> seems to hang, disable USB debug in romstage (the blob doesn't have de-
> bug output anyway).
>

It was late; I must have done something really stupid, because this morning
I got USB debug working. Output attached. The ram is correctly detected,
and at the right non-XMP frequency. I only had to update the default
MMCONF in the W520 Kconfig; otherwise the boot would hang.
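
For reference, the MMCONF override is a Kconfig symbol; a sketch of what
such an override looks like (the symbol name is from the sandybridge
northbridge Kconfig, and the value here is only a placeholder, not
necessarily what the W520 needs):

```
config MMCONF_BASE_ADDRESS
	hex
	default 0xf0000000
```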


> A wild guess: TS stands for thermal sensor (for each DIMM). Shouldn't
> matter.


Ok, I will leave that at 0


> But according to your comments A2 and A4 should be exchanged
> (matters if your DIMMs have different SPDs or you didn't populate all
> slots).


Oops, you're right. I did a fancy listing of read_spd like 0,2,1,3, but it
hides the correct order instead of making it obvious by ordering 0,1,2,3.
Fixed.

Now I get a boot with the mrc.bin blob, but it messes up something with the
video init, as I get a black screen. So I can't get the memtest results and
check whether everything works. I tried various things: native video init
or not, pcie_init=0 or =1, same result: black screen. Output attached.

Would you have any idea why?

If I add a default order to seabios, I think I can still try to use that to
check the ram settings with inteltool by ssh, and compare the results to
the native ram settings. I just wish I could quickly check the stability
too.

Charlotte


blob-raminit.cap
Description: application/vnd.tcpdump.pcap

[coreboot] using overlayfs to have several coreboot dev envs

2016-11-13 Thread Charlotte Plusplus
Hello

With the cross-compiling toolchain, coreboot takes 1G. If you are a bit
short on space, or if you want to save writes to your SSD, instead of
having multiple copies of the coreboot source folder, I have found that
overlayfs is very practical.

If you have done git clone in /opt/coreboot/src/, simply create 4 extra
folders there:
coreboot-normal
coreboot-fallback
coreboot-normal.upper
coreboot-fallback.upper

The first 2 will contain a pseudo filesystem; the last 2 will contain only
the files that differ between your versions.

Then run:

mount -t overlayfs overlay -o lowerdir=/opt/coreboot/src/coreboot,upperdir=/opt/coreboot/src/coreboot-normal.upper /opt/coreboot/src/coreboot-normal
mount -t overlayfs overlay -o lowerdir=/opt/coreboot/src/coreboot,upperdir=/opt/coreboot/src/coreboot-fallback.upper /opt/coreboot/src/coreboot-fallback

You can have as many as you want in parallel. This is useful if you are
testing a feature but want to alternate quickly without having to recompile
the other branch.

When you are done, umount your folders; you will see your changes are only
in the .upper folders. The original folder will not be affected.

Charlotte

[coreboot] native video init question

2016-11-13 Thread Charlotte Plusplus
Hello

Here is the current status of my W520:
 - native video init gives a garbled image (picture available upon request
lol). It may be due to the resolution of the screen being hardcoded
somewhere, or more likely me using the wrong information, since the W520
uses 1920x1080
 - non native video works fine
 - with native ram init, the memory tends to be unstable unless great care
is used (basically blindly increasing SPD latencies and setting
max_mem_clock_mhz=666)
 - with native ram init, even with that sometimes the boots stop on:
"discover timC write:
CPB
t123: 1048, 6000, 7620"
 - with non native ram init, there is no video
 - various dmaerr unless iommu is disabled
 - the USB3 controller is not showing in lspci, even if I am quite sure it
is on the right PCIE (likely due to the RCBA32 I am using)
 - the modem codec is not detected by snd-hda-intel even if the probe is
forced
 - some other unknown things may not work.

At the moment, I am trying to advance with native video first as it is a
low hanging fruit: I'm quite sure the gfx.did or LVDS settings in my
devicetree must be wrong.

How can I guess the right ones after booting with the videorom?

Thanks
Charlotte

Re: [coreboot] More details about ram issues

2016-11-14 Thread Charlotte Plusplus
Ooops, my mistake. You are right, I did not understand the difference
between IO voltage and VDD voltage.

I should have been more careful, I was wrong, sorry :-(

In case anyone is interested in making the hardware support 1.35V (which I
won't do), here is the datasheet for the voltage regulator. Its equation
6 explains how to choose R2 to get the wanted voltage.

http://www.ti.com/lit/ds/symlink/tps51916.pdf

Supporting 1.35V should just be a matter of swapping R2, which can still be
hard with SMD. Maybe when the motherboards are cheap enough?
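
Working through the numbers Nico gave (VTTREF = 1.8 V divided by R1056 =
10 kΩ over R843 = 48.7 kΩ), and assuming VDDQ tracks REFIN one-to-one, a
quick sanity check of the divider and of the replacement bottom resistor
for 1.35 V:

```shell
# Sanity-check the REFIN divider from the Kendo-3 schematics
# (resistor values as quoted in Nico's mail; VDDQ assumed equal to REFIN).
refin() {  # $1 = top resistor (ohm), $2 = bottom resistor (ohm)
  awk -v r1="$1" -v r2="$2" 'BEGIN { printf "%.3f\n", 1.8 * r2 / (r1 + r2) }'
}
refin 10000 48700   # prints 1.493 -- matches the 1.5 V rail
# Solving 1.8 * R2 / (10k + R2) = 1.35 for the bottom resistor:
awk 'BEGIN { printf "%.0f\n", 1.35 * 10000 / (1.8 - 1.35) }'   # prints 30000
```

So swapping the 48.7 kΩ resistor for roughly 30 kΩ would bring the rail to
1.35 V, if the one-to-one REFIN assumption holds.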



On Sat, Nov 12, 2016 at 6:46 PM, Nico Huber  wrote:

> On 12.11.2016 05:00, Charlotte Plusplus wrote:
> > Hello
> >
> > On Fri, Nov 11, 2016 at 5:37 PM, Nico Huber  wrote:
> >
> >>> The W520 does only have 1.5V DDR voltage. If it's stable with vendor
> >>> bios, it's not a DDR voltage problem at all.
> >>
> >
> > Based on my reading of the block diagram and crossing that with a cpu
> > pinout and the cpu specs, I disagree.  The W520 indeed only support 1.5V,
> > if you mean 1.5V vs 1.3 "low voltage" DDR3L.
> >
> > But SA_DIMM_VREFDQ is in direct control of the DDR3 voltage: "The step
> size
> > is 7.7 mV". So it supports 1.5V +- k*0.007V, with k being given by the
> XMP
> > profile.
> >
> > In case this is not clear, on
> > http://www.intel.com/content/dam/www/public/us/en/
> documents/datasheets/3rd-gen-core-family-mobile-vol-1-datasheet.pdf
> > :
> > Page 30 : "The processor memory controller has the capability of
> generating
> > the DDR3 Reference  Voltage (VREF) internally for both read (RDVREF) and
> > write (VREFDQ) operations. The generated VREF can be changed in small
> > steps, and an optimum VREF value is  determined for both during a cold
> boot
> > through advanced DDR3 training procedures in  order to provide the best
> > voltage and signal margins."
> >
> > That seems to be a lot of evidence in the voltage not being an absolutely
> > fixed 1.500V. It is something more flexible!!!
>
> You're confusing the memory's operating voltage (what XMP and DDR3 vs.
> DDR3L is about) with i/o voltages. VrefCA is the reference for Command/
> Address lines and VrefDQ is the reference for Data lines from what I
> understand.
>
> For the operating voltage have a look at page 86 in the Kendo-3 schema-
> tics. The TPS51916 provides the voltage VCC1R5A which is VDD for the
> DIMMs. You can see from its datasheet that it's controlled by REFIN
> which is generated by dividing VTTREF (1.8V) by the resistors R1056
> (10K) and R843 (48K7). If those resistors were configurable in any
> way, you could control the operating voltage. But they are just plain
> resistors.
>
> So even if the processor had dedicated pins to control the voltage, they
> are not used in this machine.
>
> Nico
>

[coreboot] Why is subsystemid required to find a uPD7020200

2016-11-14 Thread Charlotte Plusplus
Hello

The USB3 on a W520 is provided by a uPD7020200 on PCIe port 7, so I just
had in my device tree:
device pci 1c.6 on end

Yet in the boot I was getting:
 PCI: Static device PCI: 00:1c.6 not found, disabling it.

After googling about this, I found the x230 uses subsystemid:
 device pci 1c.6 on subsystemid 0x17aa 0x21db end

After trying that, I could get the USB3 to be detected and working
correctly on the W520.

Nowhere else do I need subsystemid, so I don't understand why it is needed
here. I don't want to copy something I don't understand, especially if it
may be needed in other places.

Could someone explain when subsystemid is required?

Thanks
Charlotte

Re: [coreboot] Why is subsystemid required to find a uPD7020200

2016-11-15 Thread Charlotte Plusplus
Hello

No, I don't have this error anymore. I first changed the RCBA, to no
effect. It was only after I added the subsystemid that the XHCI controller
was detected on Linux and Windows.

I use Linux mostly, but I have a Windows 10 install for tests. USB3 works
fine on both, while previously it did not show up in lspci on either.

What's interesting is that I do have some problems with the colorimeter
(X-Rite) on Windows. I will try to add the subsystemid to see if it helps
the driver.

Next on my list: getting native video init to work!

Thanks
Charlotte

On Tue, Nov 15, 2016 at 8:57 AM, Nico Huber  wrote:

> On 15.11.2016 03:03, Charlotte Plusplus wrote:
> > Hello
> >
> > The USB3 on a W520 is provided by a uPD7020200 on PCIe port 7, so I just
> > had in my device tree:
> > device pci 1c.6 on end
> >
> > Yet in the boot I was getting:
> >  PCI: Static device PCI: 00:1c.6 not found, disabling it.
>
> This error message is unrelated to the subsystem id. In this case, core-
> boot can't even set the id. It looks more like 00:1c.6 was (still?) dis-
> abled through the function disable (FD) register? I don't know if it
> gets cleared on every reboot. Do you still set it in early romstage?
>
> Nico
>
> >
> > After googlging about this, I found the x230 uses subsystemid
> >  device pci 1c.6 on subsystemid 0x17aa 0x21db end
> >
> > After trying that, I could get the USB3 to be detected and working
> > correctly on the W520.
> >
> > Nowhere else I need subsystemid, so I don't understand why it is needed
> > here.
> > I don't want to copy something I don't understand, especially if it may
> be
> > needed in other places.
> >
> > Could someone explain when subsystemid is required?
> >
> > Thanks
> > Charlotte
> >
> >
> >
>
>

Re: [coreboot] native video init question

2016-11-16 Thread Charlotte Plusplus
Hello

Yes, I have included patch 17389 in my build. Unfortunately, it gave me
the raminit failures attached before. I will try to see what's stored
in the MRC cache. I would like to make the ram init conditional on the boot
status, which means having 2 MRC caches: one for normal, one for fallback.
This way, I can flush one and debug ram issues with less fear (because the
W520 I am porting coreboot to is my main laptop).

For native video, I have 2 memory segments:
Memory at f140 (64-bit, non-prefetchable) [size=4M]
Memory at e000 (64-bit, prefetchable) [size=256M]

I guess I should always read from the first one.

It seems to work, but I am always getting the same result at different
addresses
$ ./iotools mem_read32 f14C7200
0xf000ff53
$ ./iotools mem_read32 f14C7201
0xf000ff53
$ ./iotools mem_read32 f14C7202
0xf000ff53
$ ./iotools mem_read32 f14C7203
0xf000ff53
$ ./iotools mem_read32  f140
0xf000ff53

Here is the romstage that I tried using with non-native raminit. It gave me
no video, but besides that it goes to the payload and works fine. I wonder
if I should declare the HD4000 in the peidata. It seems the video was just
not initialized at all.

For brightness in the payloads, if it causes security issues, I guess I can
do without it.

Charlotte



On Wed, Nov 16, 2016 at 7:32 AM, Nico Huber  wrote:

> On 16.11.2016 06:08, Charlotte Plusplus wrote:
> > Hello
> >
> > On Tue, Nov 15, 2016 at 6:46 PM, Nico Huber  wrote:
> >
> >> I've seen a garbled image, too, lately. When I built with native
> >> raminit by chance but with a completely different gfx init code
> >> (3rdparty/libgfxinit). So it might still be some problem in the
> >> raminit. This was also on an Ivy Bridge system with four DIMMs,
> >> btw. I suspected that the native raminit just wasn't tested in that
> >> configuration.
> >>
> >
> > Interesting, I noticed some patterns with raminit too. Most of my
> problems
> > with the native raminit happen with 4x 8Gb DIMMS. The more sticks I have,
> > the less likely native ram training is to succeed. I have logged a few of
> > the failed attempt in case anyone else is interested (attached).
> >
> > Basically, the training can fail several times with the exact same
> > parameters that it later succeeds with. Also, failure is a function of
> > speed. All the attempts I have done but not attached can be summed up
> like
> > this: failure of the native ram training is also more likely with a MCU
> > over 666MHz. But whenever the raminit succeed, the memory is stable in
> > memtests (I did several passes to check.
> >
> > Now that I can use Windows 10 with Coreboot, I decided to experiment a
> bit
> > more. First, I tried changing the SPD settings with Thaiphoon Burner. The
> > sticks I have advertise they supported both 1.35 and 1.5V profiles (SPD:
> > 006=03) which I feared might cause issue. Changing that to 1.5V only
> (SPD:
> > 006=00) did not help, even if it did help with another computer that I
> > borrowed to do some comparisons with (I was afraid my sticks were at
> fault)
> >
> > Then I tried manually filling XMP profile 1 with known to be working
> values
> > (either published, or found during native raminit training). It seemed to
> > help but the results were inconsistent. Between my tests for different
> > value, I was clearing the MRC cache.
> >
> > Then I had a weird idea: what if the ram training or the MRC cache
> clearing
> > was the cause of the problem? I changed my protocol to do: clear cache,
> > train after changing the SPD/MCU frequency/etc, then when it succeeds
> > disable the MRC cache clearing hook, and do a few reboots or a power off
> > before doing the memtest. And this was sufficient to get stabilility at
> > 666Mhz and frequencies above without having to tweak the SPD anymore
> (like
> > adding some latency to the detected values)
> >
> > Currently 800Mhz is validated, I haven't tested 933 MHz because ram
> > training success seems to be a probability that goes down with the
> > frequency, and pressing on the power button every time it fails quickly
> > gets boring!
> >
> > I have no way to prove that, but I believe that the ram training by
> itself
> > interferes with the stability of the RAM, or that there is some non
> > deterministic part in the code. I don't know why or how, it's a lot of
> code
> > that I haven't read in detail, but it's what my tests suggests. I would
> > love to compare these results to ones the blob raminit would give. But
> blob
> > raminit causes video issues. So I'm trying to focus on native video init
>

[coreboot] Powersavings: 8W of difference between bios and coreboot

2016-11-17 Thread Charlotte Plusplus
Hello

Since I started using coreboot, I have noticed my battery doesn't seem to
last as long. I have a very simple Linux install on a separate partition
that I use to compare power consumption, so I decided to verify whether my
perception was true. After some quick tests, I noticed the power profile
seems to have changed a whole lot, so I decided to check exactly how much by
running some tests.

I had saved results from running the Thinkpad BIOS, so I decided to get
more measurements to compare the Thinkpad BIOS to coreboot on the W520. The
test environment is X with just an xterm open to display powertop results.
Nothing else is running: there are just 130 wakeups per second (with 100
coming from tick_sched_timer), and according to powertop all the cores are
in mode C7-IVB about 98 to 99% of the time. So it should be the best
environment to get accurate results without software interfering.

To reduce sources of error, I always start from a cold boot, with the
machine cold and no previous fan activity. With normal power saving options
(everything toggled in powertop), network stopped, and measurements repeated
every 5 seconds for 10 minutes and averaged, I get:
BIOS: 11W Coreboot: 19W

With the powersave governor
BIOS: 9W  Coreboot: 17W

With brightness reduced to 200
BIOS: 8W  Coreboot: 16W
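
The sampling-and-averaging step can be scripted; a rough sketch (the sysfs
path and the microwatt unit of power_now are assumptions — some models
expose current_now/voltage_now instead):

```shell
# Average battery draw over repeated samples. sample_power reads the
# (assumed) BAT0 power_now file every 5 s; avg_watts turns microwatt
# samples into an average in watts.
sample_power() {
  for _ in $(seq 1 "$1"); do
    cat /sys/class/power_supply/BAT0/power_now
    sleep 5
  done
}
avg_watts() { awk '{ s += $1; n++ } END { printf "%.1f\n", s / n / 1e6 }'; }
# sample_power 120 | avg_watts             # 10 minutes at 5 s intervals
printf '11000000\n19000000\n' | avg_watts  # prints 15.0
```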

The steps between each operation seem identical (activating powersave: -2W,
reducing brightness: -1W); however, the baseline is off by about 8W. This is
on the exact same software environment, running the exact same Linux
kernel with the exact same kernel options. (And no, the NVidia GPU was not
enabled in coreboot. Both results were on the integrated GPU only.)

I found that weird, so to confirm these results I put the hard drive into a
borrowed W530. It has the exact same CPU and screen as my W520, the same
USB peripherals (except a colorimeter) and a similar memory configuration (4
DIMMs): I get about the same as the BIOS results I got before: 11W on the
minimal install, going down to 8W at maximum power saving.

However, with coreboot I just can't do any better than 16W.

The difference of 8W is quite a lot: with all power savings applied,
running coreboot means drawing twice as much power. The room temperature
hasn't changed (I have a thermostat), and I don't believe a single USB
peripheral can take 8W.

So I think there is only one conclusion: I have a lot of work to do on my
W520 port to get even close to the original power savings.

So I wonder: is there anything I can do to improve power savings? What are
some basic suggestions? The full 8W difference means there should be at
least a few low-hanging fruits!

Thanks
Charlotte

Re: [coreboot] native video init question

2016-11-17 Thread Charlotte Plusplus
Ok, I found a way to get usable inteltool results: I ran inteltool -f on a
borrowed W530 that has the exact same CPU 3940XM, the exact same integrated
GPU (HD4000) and the exact same screen as my W520. Only the southbridge is
different: QM77 instead of QM67

Now I will have to check the documentation of the registers to use this
information to get native video init working.

Is there anything else I should run before I have to give it back?

Charlotte

On Wed, Nov 16, 2016 at 10:06 PM, Charlotte Plusplus <
pluspluscharlo...@gmail.com> wrote:

> Hello
>
> Yes I have included this patch 17389 in my build. Unfortunately, it gave
> me the raminit logs failures attached before. I will try to see what's
> stored in the MRC cache. I would like to make the ram init conditional on
> the boot status, which means having 2 mrc cache: one for normal, one for
> fallback. This way, I flush one and debug ram issues with less fear
> (because the W520 I am porting coreboot to is my main laptop)
>
> For native video, I have 2 memory segments:
> Memory at f140 (64-bit, non-prefetchable) [size=4M]
> Memory at e000 (64-bit, prefetchable) [size=256M]
>
> I guess I should always read from the first one.
>
> It seems to work, but I am always getting the same result at different
> addresses
> $ ./iotools mem_read32 f14C7200
> 0xf000ff53
> $ ./iotools mem_read32 f14C7201
> 0xf000ff53
> $ ./iotools mem_read32 f14C7202
> 0xf000ff53
> $ ./iotools mem_read32 f14C7203
> 0xf000ff53
> $ ./iotools mem_read32  f140
> 0xf000ff53
>
> Here is the romstage that I tried using with non native raminit. It gave
> me no video, but besides that it goes to payload and work fine. I wonder if
> I should declare the HD4000 in the peidata. It seems the video was just not
> initialized at all.
>
> For brightness in the payloads, if it cause security issues, I guess I can
> do without it.
>
> Charlotte
>
>
>
> On Wed, Nov 16, 2016 at 7:32 AM, Nico Huber 
> wrote:
>
>> On 16.11.2016 06:08, Charlotte Plusplus wrote:
>> > Hello
>> >
>> > On Tue, Nov 15, 2016 at 6:46 PM, Nico Huber  wrote:
>> >
>> >> I've seen a garbled image, too, lately. When I built with native
>> >> raminit by chance but with a completely different gfx init code
>> >> (3rdparty/libgfxinit). So it might still be some problem in the
>> >> raminit. This was also on an Ivy Bridge system with four DIMMs,
>> >> btw. I suspected that the native raminit just wasn't tested in that
>> >> configuration.
>> >>
>> >
>> > Interesting, I noticed some patterns with raminit too. Most of my
>> problems
>> > with the native raminit happen with 4x 8Gb DIMMS. The more sticks I
>> have,
>> > the less likely native ram training is to succeed. I have logged a few
>> of
>> > the failed attempt in case anyone else is interested (attached).
>> >
>> > Basically, the training can fail several times with the exact same
>> > parameters that it later succeeds with. Also, failure is a function of
>> > speed. All the attempts I have done but not attached can be summed up
>> like
>> > this: failure of the native ram training is also more likely with a MCU
>> > over 666MHz. But whenever the raminit succeed, the memory is stable in
>> > memtests (I did several passes to check.
>> >
>> > Now that I can use Windows 10 with Coreboot, I decided to experiment a
>> bit
>> > more. First, I tried changing the SPD settings with Thaiphoon Burner.
>> The
>> > sticks I have advertise they supported both 1.35 and 1.5V profiles (SPD:
>> > 006=03) which I feared might cause issue. Changing that to 1.5V only
>> (SPD:
>> > 006=00) did not help, even if it did help with another computer that I
>> > borrowed to do some comparisons with (I was afraid my sticks were at
>> fault)
>> >
>> > Then I tried manually filling XMP profile 1 with known to be working
>> values
>> > (either published, or found during native raminit training). It seemed
>> to
>> > help but the results were inconsistent. Between my tests for different
>> > value, I was clearing the MRC cache.
>> >
>> > Then I had a weird idea: what if the ram training or the MRC cache
>> clearing
>> > was the cause of the problem? I changed my protocol to do: clear cache,
>> > train after changing the SPD/MCU frequency/etc, then when it succeeds
>> > disable the MRC cache clearing hook, and do a few reboots or a power off
>> > before doing the memtest. And this was sufficient to g

Re: [coreboot] Powersavings: 8W of difference between bios and coreboot

2016-11-18 Thread Charlotte Plusplus
Hello

Super interesting, I didn't know all that!

Currently, I have only set in devicetree:

device pci 00.0 on end # host bridge
device pci 01.0 off end # NVidia
device pci 02.0 on end # Intel

and in my nvram options I have:
hybrid_graphics_mode = Integrated Only

I assumed that would be sufficient to turn off the power for the dGPU until
I figure out a way to make it work. I will enable the NVidia and do more
power tests.

After reading hybrid_graphics.c, I understand a bit more, but I also have
more questions:
 - can I add register "pcie_hotplug_map" = "{ 0, 1, 0 }" to make the NVidia
removable?
(so that the operating system will not freak out when the NVidia disappears
from the PCI bus, if I find a way to control the power by talking to the EC
and sending
GFXCORE_ON_D + the other signal for the VRAM)

 - can you change the connection of the displayport? Based on the
specsheets, it is connected to the dGPU, while the internal display is
connected to the iGPU. If it is possible to control the muxes for the
internal display, I suppose it is possible for the other displays as well.
(I have just tested, and I do not have any video on the displayport, and
xrandr does not detect anything)

 - there seem to be some missing IDs in pci_device_ids_nvidia: cf.
http://envytools.readthedocs.io/en/latest/hw/pciid.html which agrees with
the W530's 0x0ffb, so I will propose a patch:
0x0dfa, /* Nvidia NVS Quadro 1000m Lenovo W520 */
0x0ffb, /* Nvidia NVS Quadro K1000m Lenovo W530 */
0x0ffc, /* Nvidia NVS Quadro K2000m Lenovo W530 */

It may also be the reason why the Nvidia is still getting power, as Iru
noted that hybrid_graphics should turn off the power. I will test that
separately.

- until I can find a better solution, I am thinking of letting the Nvidia
show up on the PCIe bus and then sending commands to get it into advanced
sleep - like on https://wireless.wiki.kernel.org/en/users/documentation/aspm

It should be possible as the w530 lspci -v shows:
Capabilities: [60] Power Management version 3
Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
PME(D0-,D1-,D2-,D3hot-,D3cold-)
Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-

Would you have a better idea?

Thanks
Charlotte

On Fri, Nov 18, 2016 at 4:52 AM, Felix Held 
wrote:

> Hi!
>
> I don't know if Charlotte has added the ID of the dGPU to
>> src/drivers/lenovo/hybrid_graphics.c. Does the dGPU consume power after
>> hybrid_graphics.c disable the dGPU?
>>
> OPTIMUS_ENABLE is a PCH GPIO and controls the muxes that select if the
> internal display is connected to the iGPU or the dGPU; that's done in
> hybrid_graphics.c.
> GFXCORE_ON_D (and another signal that controls the power supply of the
> VRAM) are driven by the Thinker-1 chip; those switch on/off the power
> supply of the GPU. So to really disable the GPU you probably have to ask
> the EC to make that chip turn off the voltage for the GPU and VRAM.
>
>
> Regards
> Felix
>
> --
> coreboot mailing list: coreboot@coreboot.org
> https://www.coreboot.org/mailman/listinfo/coreboot
>

Re: [coreboot] Powersavings: 8W of difference between bios and coreboot

2016-11-18 Thread Charlotte Plusplus
Quick update:

Just having "device pci 01.0 on end" in the devicetree results in the
following powertop measurements:
26W after boot, 21W with power savings applied, 20W at maximum power savings

So just declaring the nvidia makes things much worse, even without really
using the dGPU for anything. (No nvidia modules are loaded, to make sure
they would not interfere with the measurements.)

Since it was in D0, I tried setting the Nvidia gpu in D3 state
setpci -s 01:00.0 CAP_PM+4.b
08
setpci -s 01:00.0 CAP_PM+4.b=0b
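
For reference, the PMCSR byte read back above decodes as follows (bits 1:0
hold the power state in the PCI power-management capability, per the PCI PM
spec):

```shell
# Decode bits 1:0 (PowerState) of a PMCSR value as printed by setpci.
pmcsr_state() {
  case $(( 0x$1 & 0x3 )) in
    0) echo D0 ;;
    1) echo D1 ;;
    2) echo D2 ;;
    3) echo D3hot ;;
  esac
}
pmcsr_state 08   # prints D0
pmcsr_state 0b   # prints D3hot
```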

It didn't help with the power. To confirm whether disabling the Nvidia pcie
interface results in this +4W in maximal power savings, I completed
src/drivers/lenovo/hybrid_graphics.c with the PCI ID from my card. I could
return to 17W with all power savings applied.

So I'm back to trying to find a way to turn off the power completely with
the EC, and hoping this is where the extra 8W come from.

After some googling I found ec_access, but I don't know which register to
write to.

In case anyone is interested, here are the PCI devices; of course ASPM is
already enabled (root: 0xB0 : 0x43, peripheral: 0x88 : 0x4B), and I am not
using the dGPU for anything at all.

$ cat /sys/devices/pci:00/:00:01.0/:01:00.0/power/runtime_status
suspended
$ cat /sys/devices/pci:00/:00:01.0/:01:00.1/power/runtime_status
active

$ lspci -s  00:01.0 -xxx

00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core
processor PCI Express Root Port (rev 09
)
00: 86 80 51 01 07 04 10 00 09 00 04 06 10 00 81 00
10: 00 00 00 00 00 00 00 00 00 01 01 00 20 20 00 20
20: 00 f0 00 f1 01 c0 f1 d1 00 00 00 00 00 00 00 00
30: 00 00 00 00 88 00 00 00 00 00 00 00 0b 01 03 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0a
80: 01 90 03 c8 08 00 00 00 0d 80 00 00 86 80 00 00
90: 05 a0 01 00 d8 02 e0 fe 00 00 00 00 00 00 00 00
a0: 10 00 42 01 01 80 00 00 00 00 00 00 03 ad 61 02
b0: 43 00 02 51 00 00 04 00 00 00 48 00 08 00 00 00
c0: 00 00 00 00 00 00 00 00 00 00 00 00 0e 00 00 00
d0: 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 01 00 00 00 00 00 01 00 10 00

#0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f

$ lspci -d 10de:0dfa -xxx
01:00.0 VGA compatible controller: NVIDIA Corporation GF108GLM [Quadro
1000M] (rev a1)
00: de 10 fa 0d 03 00 10 00 a1 00 00 03 10 00 80 00
10: 00 00 00 f0 0c 00 00 c0 00 00 00 00 0c 00 00 d0
20: 00 00 00 00 01 20 00 00 00 00 00 00 00 00 00 00
30: 00 00 00 f1 60 00 00 00 00 00 00 00 0b 01 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 01 00 00 00 ce d6 23 00 00 00 00 00
60: 01 68 03 00 08 00 00 00 05 78 80 00 00 00 00 00
70: 00 00 00 00 00 00 00 00 10 b4 02 00 a0 8d 00 00
80: 10 28 00 00 02 2d 05 00 4b 01 02 11 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 10 00 00 00
a0: 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00
b0: 00 00 00 00 09 00 14 01 00 00 00 00 00 00 00 00
c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

#0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f

# 78 contains 10, on 88 : 4b

$ lspci -d 10de:0bea -xxx

01:00.1 Audio device: NVIDIA Corporation GF108 High Definition Audio
Controller (rev a1)
00: de 10 ea 0b 06 00 18 00 a1 00 03 04 10 00 80 00
10: 00 00 08 f1 00 00 00 00 00 00 00 00 00 00 00 00
20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
30: 00 00 00 00 60 00 00 00 00 00 00 00 0b 02 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 ce d6 23 00 00 00 00 00
60: 01 68 03 00 08 00 00 00 05 78 80 00 00 00 00 00
70: 00 00 00 00 00 00 00 00 10 00 02 00 a0 8d 00 00
80: 10 28 00 00 02 2d 05 00 4b 01 02 11 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 10 00 00 00
a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Charlotte


On Fri, Nov 18, 2016 at 4:04 PM, Charlotte Plusplus <
pluspluscharlo...@gmail.com> wrote:

> Hello
>
> Super interesting, I didn't know all that!
>
> Currently, I have only set in devicetree:
>
> device pci 00.0 on end # host bridge
> device pci 01.0 off end # NVidia
> device pci 02.0 on end # Intel
>
> and in my nvram options I have:
> hybrid_graphics_mode = Integrated Only
>
> I assumed that would be sufficient to turn off the power for the dGPU
> until I figure out a way to make it work. I will enable the NVidia and do
> more power 

Re: [coreboot] Powersavings: 8W of difference between bios and coreboot

2016-11-19 Thread Charlotte Plusplus
  If(VSD0)
{
Or(0x08, Local0, Local0)
}
If(Local0)
{
If(VUPC)
{
\VSDS(Local0, Arg0)
}
}
Else
{
NoOp
}
}
}




On Sat, Nov 19, 2016 at 1:57 AM, Charlotte Plusplus <
pluspluscharlo...@gmail.com> wrote:

> Quick update:
>
> Just having "device pci 01.0 on end" in the devicetree results in the
> following powertop measurements:
> 26W after boot, 21W with power savings applied, 20W at maximum power
> savings
>
> So just declaring the nvidia makes things much worse, even without really
> using the dGPU for anything. (there are no nvidia modules, to make sure it
> would not interfere with the measurements)
>
> Since it was in D0, I tried setting the Nvidia gpu in D3 state
> setpci -s 01:00.0 CAP_PM+4.b
> 08
> setpci -s 01:00.0 CAP_PM+4.b=0b
>
> It didn't help with the power. To confirm whether disabling the Nvidia
> pcie interface results in this +4W in maximal power savings, I completed
> src/drivers/lenovo/hybrid_graphics.c with the PCI ID from my card. I
> could return to 17W with all power savings applied.
>
> So I'm back to trying to find a way to turn off the power completely with
> the EC, and hoping this is where the extra 8W come from.
>
> After some googling I found ec_access, but I don't know which register to
> write to.
>
> In case anyone is interested, here are the pci devices, of course ASPM is
> already enabled (root: 0xB0 : 0x43, peripheral: 0x88 : 0x4B) and I am not
> using the dGPU for anything at all.
>
> $ cat /sys/devices/pci:00/:00:01.0/:01:00.0/power/runt
> ime_status
> suspended
> $ cat /sys/devices/pci:00/:00:01.0/:01:00.1/power/runt
> ime_status
> active
>
> $ lspci -s  00:01.0 -xxx
>
> 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core
> processor PCI Express Root Port (rev 09
> )
> 00: 86 80 51 01 07 04 10 00 09 00 04 06 10 00 81 00
> 10: 00 00 00 00 00 00 00 00 00 01 01 00 20 20 00 20
> 20: 00 f0 00 f1 01 c0 f1 d1 00 00 00 00 00 00 00 00
> 30: 00 00 00 00 88 00 00 00 00 00 00 00 0b 01 03 00
> 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0a
> 80: 01 90 03 c8 08 00 00 00 0d 80 00 00 86 80 00 00
> 90: 05 a0 01 00 d8 02 e0 fe 00 00 00 00 00 00 00 00
> a0: 10 00 42 01 01 80 00 00 00 00 00 00 03 ad 61 02
> b0: 43 00 02 51 00 00 04 00 00 00 48 00 08 00 00 00
> c0: 00 00 00 00 00 00 00 00 00 00 00 00 0e 00 00 00
> d0: 03 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> f0: 00 00 00 00 00 00 01 00 00 00 00 00 01 00 10 00
>
> #0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
>
> $ lspci -d 10de:0dfa -xxx
> 01:00.0 VGA compatible controller: NVIDIA Corporation GF108GLM [Quadro
> 1000M] (rev a1)
> 00: de 10 fa 0d 03 00 10 00 a1 00 00 03 10 00 80 00
> 10: 00 00 00 f0 0c 00 00 c0 00 00 00 00 0c 00 00 d0
> 20: 00 00 00 00 01 20 00 00 00 00 00 00 00 00 00 00
> 30: 00 00 00 f1 60 00 00 00 00 00 00 00 0b 01 00 00
> 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 50: 00 00 00 00 01 00 00 00 ce d6 23 00 00 00 00 00
> 60: 01 68 03 00 08 00 00 00 05 78 80 00 00 00 00 00
> 70: 00 00 00 00 00 00 00 00 10 b4 02 00 a0 8d 00 00
> 80: 10 28 00 00 02 2d 05 00 4b 01 02 11 00 00 00 00
> 90: 00 00 00 00 00 00 00 00 00 00 00 00 10 00 00 00
> a0: 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00
> b0: 00 00 00 00 09 00 14 01 00 00 00 00 00 00 00 00
> c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>
> #0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
>
> # 78 contains 10, on 88 : 4b
>
> $ lspci -d 10de:0bea -xxx
>
> 01:00.1 Audio device: NVIDIA Corporation GF108 High Definition Audio
> Controller (rev a1)
> 00: de 10 ea 0b 06 00 18 00 a1 00 03 04 10 00 80 00
> 10: 00 00 08 f1 00 00 00 00 00 00 00 00 00 00 00 00
> 20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 30: 00 00 00 00 60 00 00 00 00 00 00 00 0b 02 00 00
> 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 50: 00 00 00 00 00 00 00 00 ce d6 23 00 00 00 00 00
> 60: 01 68 03 00 08 00 00 00 05 78 80 00 00 00 00 00
> 70: 00 00 00 00 00 00 

Re: [coreboot] native video init question

2016-11-19 Thread Charlotte Plusplus
So I used inteltool and checked its output to compute the values, then even
tried autoport.

The numbers match, the EDID is correct, and the panel is properly detected.
But the internal display still shows a garbled picture with stretched
letters. I'm out of ideas to get native video init working.

register "gfx.did" = "{ 0x8100, 0x8240, 0x8410,
0x8410, 0x0005 }"
register "gfx.use_spread_spectrum_clock" = "1"
register "gfx.link_frequency_270_mhz" = "1"

register "gpu_dp_b_hotplug" = "4"
register "gpu_dp_c_hotplug" = "4"
# Enable DisplayPort Hotplug with 6ms pulse
register "gpu_dp_d_hotplug" = "0x06"

# Enable Panel as LVDS and configure power delays
register "gpu_panel_port_select" = "0"  # LVDS
register "gpu_panel_power_cycle_delay" = "6"
register "gpu_panel_power_up_delay" = "300" # T1+T2:
30ms
register "gpu_panel_power_down_delay" = "300"   # T5+T6:
30ms
register "gpu_panel_power_backlight_on_delay" = "3000"  # T3: 300ms
register "gpu_panel_power_backlight_off_delay" = "3000" # T4: 300ms
#0x0c6014: 0x89046004
register "gpu_cpu_backlight" = "0x1155"
register "gpu_pch_backlight" = "0x11551155"


Log:

PCI: 00:00.0 init ...
Disabling PEG12.
Disabling PEG11.
Disabling PEG10.
Disabling PEG60.
Disabling PEG IO clock.
Set BIOS_RESET_CPL
CPU  POWER_UNIT: 8
CPU  TDP: 440
CPU TDP: 55 Watts
CPU POWER_LIMIT HI: 33318
CPU POWER_LIMIT LO: 14451128
CPU POWER_LIMIT NOMINAL HI: 0
CPU POWER_LIMIT NOMIAL LO: 30
PCI: 00:00.0 init finished in 6714 usecs
PCI: 00:02.0 init ...
GT Power Management Init
IVB GT2 35W Power Meter Weights
GT Power Management Init (post VBIOS)
Initializing VGA without OPROM.
EDID:
00 ff ff ff ff ff ff 00 06 af ed 11 00 00 00 00
00 16 01 04 90 22 13 78 02 21 35 ad 50 37 aa 24
11 50 54 00 00 00 01 01 01 01 01 01 01 01 01 01
01 01 01 01 01 01 7c 38 80 d4 70 38 32 40 3c 30
aa 00 58 c1 10 00 00 18 7c 38 80 7e 72 38 32 40
3c 30 aa 00 58 c1 10 00 00 18 00 00 00 fe 00 41
55 4f 0a 20 20 20 20 20 20 20 20 20 00 00 00 fe
00 42 31 35 36 48 54 4e 30 31 2e 31 20 0a 00 81
Extracted contents:
header:  00 ff ff ff ff ff ff 00
serial number:   06 af ed 11 00 00 00 00 00 16
version: 01 04
basic params:90 22 13 78 02
chroma info: 21 35 ad 50 37 aa 24 11 50 54
established: 00 00 00
standard:01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01
descriptor 1:7c 38 80 d4 70 38 32 40 3c 30 aa 00 58 c1 10 00 00 18
descriptor 2:7c 38 80 7e 72 38 32 40 3c 30 aa 00 58 c1 10 00 00 18
descriptor 3:00 00 00 fe 00 41 55 4f 0a 20 20 20 20 20 20 20 20 20
descriptor 4:00 00 00 fe 00 42 31 35 36 48 54 4e 30 31 2e 31 20 0a
extensions:  00
checksum:81

Manufacturer: AUO Model 11ed Serial Number 0
Made week 0 of 2012
EDID version: 1.4
Digital display
6 bits per primary color channel
Digital interface is not defined
Maximum image size: 34 cm x 19 cm
Gamma: 220%
Check DPMS levels
Supported color formats: RGB 4:4:4
First detailed timing is preferred timing
Established timings supported:
Standard timings supported:
Detailed timings
Hex of detail: 7c3880d4703832403c30aa0058c11018
Detailed mode (IN HEX): Clock 144600 KHz, 158 mm x c1 mm
   0780 07bc 07ec 0854 hborder 0
   0438 0442 044c 046a vborder 0
   -hsync -vsync
Did detailed timing
Hex of detail: 7c38807e723832403c30aa0058c11018
Detailed mode (IN HEX): Clock 144600 KHz, 158 mm x c1 mm
   0780 07bc 07ec 09fe hborder 0
   0438 0442 044c 046a vborder 0
   -hsync -vsync
Hex of detail: 00fe0041554f0a202020202020202020
ASCII string: AUO
Hex of detail: 00fe004231353648544e30312e31200a
ASCII string: B156HTN01.1
Checksum
Checksum: 0x81 (valid)
bringing up panel at resolution 1920 x 1080
Borders 0 x 0
Blank 212 x 50
Sync 48 x 10
Front porch 60 x 10
Spread spectrum clock
Dual channel
Polarities 1, 1
Data M1=10108272, N1=8388608
Link frequency 27 kHz
Link M1=280785, N1=524288
Pixel N=7, M1=22, M2=8, P1=2
Pixel clock 144489 kHz
waiting for panel powerup
panel powered up
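As a sanity check, the EDID dump in the log above can be verified offline: a
valid EDID block sums to 0 mod 256, and the first 18-byte detailed timing
descriptor must decode to the same 1920x1080 mode and 144600 kHz pixel clock
that coreboot prints. A minimal sketch (hex copied from the log):

```python
# Verify the EDID block from the coreboot log: checksum plus the first
# detailed timing descriptor (DTD), per the VESA EDID 1.4 layout.
EDID = bytes.fromhex(
    "00ffffffffffff0006afed1100000000"
    "0016010490221378022135ad5037aa24"
    "11505400000001010101010101010101"
    "0101010101017c3880d4703832403c30"
    "aa0058c1100000187c38807e72383240"
    "3c30aa0058c110000018000000fe0041"
    "554f0a202020202020202020000000fe"
    "004231353648544e30312e31200a0081"
)

assert sum(EDID) % 256 == 0        # checksum byte makes the sum wrap to 0

dtd = EDID[54:72]                  # first detailed timing descriptor
pixel_clock_khz = int.from_bytes(dtd[0:2], "little") * 10
hactive = dtd[2] | (dtd[4] >> 4) << 8
hblank  = dtd[3] | (dtd[4] & 0x0F) << 8
vactive = dtd[5] | (dtd[7] >> 4) << 8
vblank  = dtd[6] | (dtd[7] & 0x0F) << 8

print(pixel_clock_khz, hactive, vactive, hblank, vblank)
# 144600 1920 1080 212 50 - matching the log above
```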


On Thu, Nov 17, 2016 at 11:54 PM, Charlotte Plusplus <
pluspluscharlo...@gmail.com> wrote:

> Ok, I found a way to get usable inteltool results: I ran inteltool -f on a
> borrowed W530 that has the exact same CPU 3940XM, the exact same integrated
> GPU (HD4000) and the exact same screen as my W520. Only the southbridge is
> different: QM77 instead of QM67
>
> Now I will have to check the documentation of the registers to use this
> information to get native video init working.
>
> Is there anything

Re: [coreboot] native video init question

2016-11-20 Thread Charlotte Plusplus
Hello

The correct timings are detected by X (cf below), so I checked the
existing gma_ivybridge init and I thought it may calculate clocks
wrong.

Then I found about https://review.coreboot.org/#/c/16504/ addressing
just this, so I tried to port it to gma_ivybridge

Unfortunately I must have done something really wrong, because I now
get no display at all.

Before:
Data M1=10108272, N1=8388608
Link frequency 27 kHz
Link M1=280785, N1=524288
Pixel N=7, M1=22, M2=8, P1=2
Pixel clock 144489 kHz

After:
Data M1=10108272, N1=8388608
Link frequency 27 kHz
Link M1=280785, N1=524288
Pixel N=2, M1=13, M2=4, P1=2
Pixel clock 144642 kHz


My patch:
--- gma_ivybridge_lvds.c2016-11-20 14:24:57.944308878 -0500
+++ gma_ivybridge_lvds.c.orig   2016-11-20 12:51:24.273120162 -0500
@@ -29,9 +29,6 @@
 #include 
 #include 
 #include 
-#include 
-
-#define BASE_FREQUENCY 10

 static void link_train(u8 *mmio)
 {
@@ -142,7 +139,6 @@
int i;
u8 edid_data[128];
struct edid edid;
-   u32 target_frequency;

if (!IS_ENABLED(CONFIG_MAINBOARD_DO_NATIVE_VGA_INIT))
return 0;
@@ -209,19 +205,17 @@
u32 hfront_porch = edid.mode.hso;
u32 vfront_porch = edid.mode.vso;

-   u32 smallest_err = 0x;
+   u32 candp1, candn;
+   u32 best_delta = 0x;

+   u32 target_frequency = (
+   edid.mode.lvds_dual_channel ? edid.mode.pixel_clock
+   : (2 * edid.mode.pixel_clock));
u32 pixel_p1 = 1;
-   u32 pixel_p2;
u32 pixel_n = 1;
u32 pixel_m1 = 1;
u32 pixel_m2 = 1;

-   /* p2 divisor must 7 for dual channel LVDS */
-   /* and 14 for single channel LVDS */
-   pixel_p2 = edid.mode.lvds_dual_channel ? 7 : 14;
-   target_frequency = edid.mode.pixel_clock;
-
vga_textmode_init();
if (IS_ENABLED(CONFIG_FRAMEBUFFER_KEEP_VESA_MODE)) {
vga_sr_write(1, 1);
@@ -249,34 +243,40 @@
write32(mmio + LGC_PALETTE(0) + 4 * i, i * 0x010101);
}

-   /* Find suitable divisors, m1, m2, p1, n.  */
-   /* refclock * (5 * (m1 + 2) + (m1 + 2)) / (n + 2) / p1 / p2 */
-   /* should be closest to target frequency as possible */
-   u32 candn, candm1, candm2, candp1;
-   for (candm1 = 8; candm1 <= 18; candm1++) {
-   for (candm2 = 3; candm2 <= 7; candm2++) {
-   for (candn = 1; candn <= 6; candn++) {
-   for (candp1 = 1; candp1 <= 8; candp1++) {
-   u32 m = 5 * (candm1 + 2) + (candm2 + 2);
-   u32 p = candp1 * pixel_p2;
-   u32 vco = 
DIV_ROUND_CLOSEST(BASE_FREQUENCY * m, candn + 2);
-   u32 dot = DIV_ROUND_CLOSEST(vco, p);
-   u32 this_err = ABS(dot - 
target_frequency);
-   if ((m < 70) || (m > 120))
-   continue;
-   if (this_err < smallest_err) {
-   smallest_err = this_err;
-   pixel_n = candn;
-   pixel_m1 = candm1;
-   pixel_m2 = candm2;
-   pixel_p1 = candp1;
-   }
-   }
+   /* Find suitable divisors.  */
+   for (candp1 = 1; candp1 <= 8; candp1++) {
+   for (candn = 5; candn <= 10; candn++) {
+   u32 cur_frequency;
+   u32 m; /* 77 - 131.  */
+   u32 denom; /* 35 - 560.  */
+   u32 current_delta;
+
+   denom = candn * candp1 * 7;
+   /* Doesnt overflow for up to
+  500 kHz = 5 GHz.  */
+   m = (target_frequency * denom + 6) / 12;
+
+   if (m < 77 || m > 131)
+   continue;
+
+   cur_frequency = (12 * m) / denom;
+   if (target_frequency > cur_frequency)
+   current_delta = target_frequency - 
cur_frequency;
+   else
+   current_delta = cur_frequency - 
target_frequency;
+
+
+   if (best_delta > current_delta) {
+   best_delta = current_delta;
+   pixel_n = candn;
+   pixel_p1 = candp1;
+   pixel_m2 = ((m + 3) % 5) + 7;
+   pixel_m1 = (m - pixel_m2) / 5;
}
}
}

-   if (smallest_err == 0x) {
+   if (best

Re: [coreboot] using overlayfs to have several coreboot dev envs

2016-11-20 Thread Charlotte Plusplus
If you refactor that code, could you make it easier to add the fallback?
One of the main reasons I use overlayfs is to keep a separate fallback.

Overlayfs may remain a good option when working on separate source trees,
but when the differences are just in the .config, it would be nice to
specify the .config-fallback and the .config-normal, run make, and get a
coreboot.rom without having to manually give the file that needs to be
updated. I wrote scripts to do just that.

On Sun, Nov 20, 2016 at 4:36 PM, ron minnich  wrote:

>
>
> On Sun, Nov 20, 2016 at 1:00 PM Matt DeVillier 
> wrote:
>
>> On Sun, Nov 20, 2016 at 2:51 PM, ron minnich  wrote:
>>
>> I had the same thought even while writing that note. So option 2 for the
>> config file is to create it at the top level: config.${MAINBOARD) or
>> somewhere else. Would that work?
>>
>>
>> using top level for config files would really clutter the root dir when
>> building for a large # of boards (I have ~25 I'm building for currently);
>> perhaps a 'configs' subdir would make sense?
>>
>
>
> yes, and the configs would go well in a .gitignore ...
>

Re: [coreboot] Nehalem not booting with two ram sticks

2016-11-22 Thread Charlotte Plusplus
I have had similar issues with Corsair RAM on the W520 recently: sometimes
not booting at all, sometimes being unstable (in memtest) after a successful
raminit.

The only way I could get the 4 DIMMs to work was to hardcode some SPDs, or
to set the MCU to a much slower speed.

Like you, I found removing even 1 stick helped a lot: the raminit succeeded
much more frequently at a higher MCU clock, even if that clock was still
lower than the one the RAM is rated for, or the one that worked with the
factory BIOS.

I tried using the MRC blob to compare the timings, but I must have done
something wrong in my code as it didn't work at all

My guess is that something is really wrong in the raminit code. I read
through too many specs and too much code for no result, so I just gave up on
this.

Hopefully more people hitting similar problems will mean the MRC will be
added back as an option next to the native raminit. This would facilitate
comparison on all boards, and identification of whatever bug there may be.
(Imagine figuring out native video init issues if there were no way to use a
VGA option ROM.)

Charlotte

On Tue, Nov 22, 2016 at 8:22 AM, Andrey Korolyov  wrote:

> On Tue, Nov 22, 2016 at 3:35 PM, Federico Amedeo Izzo
>  wrote:
> > Hello,
> >
> > I have a problem with my ThinkPad X201 (nehalem)
> >
> > I have two sticks of Samsung 4GB 2Rx8 PC3-10600S (1333MHz)
> > When i use only one of them in one of the two slots, the computer boots
> > fine,
> > but when i use both of them in the two slots, the computer doesn't boot,
> > the screen doesn't even turn on.
> >
> > I dumped the logs via EHCI but they seem normal, in fact both the
> > working combination and the broken one make 34 or so iterations of
> > Timings dumping,
> > but then the working conf. start booting, while the broken one freezes
> > without printing error messages on the EHCI.
> >
> > I have tried adding more `printk` calls in
> > `src/northbridge/intel/nehalem/raminit.c`
> > but ended up in a brick, probably because i slowed down the
> > initialization too much.
> >
> > I attach three EHCI logs:
> > - the first stick in the first slot: working
> > - the second stick in the second slot: working
> > - both stick inserted: not working
> >
> > Also i find difficult to understand the code in `raminit.c` of nehalem
> > because it lacks almost completely of comments, with respect to
> > raminit.c of sandybridge for example.
>
> I`ve seen similar issue on my x201(t), the workaround could be a
> hardcoded SPD limitation from above for memory clock speed. Using
> different memory sticks (Kingstons rather than Samsungs) is 'solving'
> problem as well. I`ve not paid necessary attention to the problem at
> the time, so if you have some spare cycles, you could possibly want to
> figure out right SPD settings. The simplest way is to use decode-dimms
> from i2c-tools or CPU-z and to compare vendor`s settings with single-
> and dual-dimm setups and coreboot`s with a single dimm.
>

Re: [coreboot] Nehalem not booting with two ram sticks

2016-11-22 Thread Charlotte Plusplus
Hello

On Tue, Nov 22, 2016 at 2:50 PM, Kyösti Mälkki 
wrote:

>
>> I tried using the MRC blob to compare the timings, but I must have done
>> something wrong in my code as it didn't work at all
>>
>
> See following commits:
>  5c10abe nb/intel/sandybridge: increase MMCONF_BASE_ADDRESS
>  0306e6a intel/sandybridge: Fix builds with System Agent blob
>  f9c4197 northbridge/sandybridge/raminit_mrc.c: fix missing include
>

Thanks, I keep an eye on the tree, but I hadn't noticed them. I will try
again in the next few days.


> Was MRC blob ever really an option for lenovo/xxx boards? I did not check
> history.
>

I have no idea. I just want to try and see if it can be made to work.

Charlotte

Re: [coreboot] Nehalem not booting with two ram sticks

2016-11-22 Thread Charlotte Plusplus
Hello


On Tue, Nov 22, 2016 at 4:28 PM, Zoran Stojsavljevic <
zoran.stojsavlje...@gmail.com> wrote:

> If MCU is later, could you, please, explain how you did this in IVB
> Coreboot code (since this might be beneficial to Federico's attempts)?
>

Edit devicetree.cb and set:

 register "max_mem_clock_mhz" = "666"
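For context on that 666 MHz value: it is DDR3-1333, and it is what a DIMM
advertises through the minimum cycle time (tCKmin) in its SPD. A small
sketch of the mapping, using the DDR3 SPD byte layout (byte 10 =
medium-timebase dividend, byte 11 = MTB divisor, byte 12 = tCKmin in MTB
units); the example bytes are for a DDR3-1333 stick:

```python
# Derive the maximum memory clock from DDR3 SPD timing bytes.
# Per the JEDEC DDR3 SPD layout: byte 10 = medium timebase (MTB)
# dividend, byte 11 = MTB divisor, byte 12 = tCKmin in MTB units.

def max_mem_clock_mhz(spd):
    mtb_ns = spd[10] / spd[11]      # usually 1/8 = 0.125 ns
    tck_min_ns = spd[12] * mtb_ns   # minimum clock cycle time
    return int(1000 / tck_min_ns)   # MHz

spd = bytearray(256)
spd[10], spd[11], spd[12] = 1, 8, 12   # tCKmin = 1.5 ns -> DDR3-1333
print(max_mem_clock_mhz(spd))          # 666, matching the devicetree cap
```

decode-dimms from i2c-tools prints the same figure, which makes it handy for
comparing what the vendor BIOS and coreboot actually program.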

> Here I understood that you tried to compare IVB raminit.c source code with
> MRC algorithm, embedded in BIOS itself. And I have here one ignorant
> question: what is the difference between IVB (I assumed in this case SNB
> (tock), since I could not find IVB (tick) in rc/northbridge/intel/)
> raminit.c source code and MRC from IVB BIOS (there MUST be some difference,
> it is obvious, doesn't it)?
>

I suppose there is one. I don't know; I want to investigate. I will
certainly try again after adding the patches suggested by Kyösti. As soon as
I can get the MRC blob to work, I can make some better guesses about what is
going wrong (I have tried so many different things already) by having some
reference points.

At the moment, I want to make a first public commit for the W520, but with
the RAM issues (only stable with the MCU at 666 MHz), no native video init,
and the power consumption issues, I'm not sure how helpful it will be.

For the video init, the most plausible course of action is to see how the
timings of the existing hardware initialization differ from say Xorg or the
Linux Kernel, since the clock code is different and was found to be wrong
on another patch. I'm glad I have examples.

We don't have that with raminit.

> This is exactly the main point! And the main question here is the
> following: who wrote raminit.c code, and does this person did it using
> parts of IVB/SNB MRC source code? In other words, was this person member of
> SNB BIOS team from INTEL CCG?
>

I think it was written by two people, based on some reverse-engineering work.

>
> This is also a good point. I need clarification on the following:*
> "...will mean the MRC will be added back as an option next to the native
> raminit"*. Do you mean to have IVB/SNB MRC binary blob with defined APIs
> to be used as alternative to IVB/SNB raminit.c, since I am certain INTEL
> will not allow to have complete MRC added in Coreboot as source code (never
> was, never will be)?
>

Yeah, the blob. I don't like blobs, but I like having the option of
something that works when I need to investigate why something else is not
working. Or just for an initial release.

Re: [coreboot] Rettungsboot

2016-11-27 Thread Charlotte Plusplus
Hello

On Sat, Nov 26, 2016 at 6:19 PM, Trammell Hudson  wrote:

> The 4MB flash in the older thinkpads is a little tight, but still
> sufficient for a text-based modern Linux kernel -- the biggest issue is
> the cryptsetup tool brings in quite a few dependencies right now,
> which complicates using it with a fully encrypted drive.
>

Using busybox, cryptsetup, dmsetup, blkid and the libraries, my initrd.gz
is 2.6M.
I didn't do any special optimization.

I suppose not using glibc but a smaller libc could make it smaller


> With 8-16 MB you can have a write-protected, interactive shell version
> that can mount a USB drive and run spiflash tools to recover from
> failures, and a second, read-write version that can be reflashed by the
> system's owner with all the fancy features.


In my ideal scenario, coreboot would have the 2 images (normal, fallback)
both starting the same payload (a minimal linux kernel) to save space.

/init would be a shell script using nvram to check whether it is running in
normal or in fallback.

In normal mode, the kernel would just kexec the kernel that is used
normally, using cmdline parameters found in the CBFS

In fallback mode, busybox, flashtools and diagnosis tools (cbmem, ectool,
inteltool, etc) would let the user at least mount a thumbdrive to reflash a
working image
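A minimal sketch of such an /init, assuming busybox sh with nvramtool and
kexec built into the initramfs; the CMOS option name (boot_mode) and the
file paths are hypothetical placeholders:

```shell
#!/bin/sh
# Hypothetical /init shared by the normal and fallback coreboot images.
# ASSUMPTIONS: nvramtool, kexec and flashrom exist in the initramfs, and
# a CMOS option (called "boot_mode" here) records which image was booted.

boot_action() {
    # Map the boot mode to an action; split out so it is easy to test.
    case "$1" in
        normal) echo kexec  ;;  # chain-load the real kernel
        *)      echo rescue ;;  # anything else: recovery shell
    esac
}

if [ "$$" -eq 1 ]; then  # only act when actually running as /init
    mode="$(nvramtool -r boot_mode 2>/dev/null || echo fallback)"
    if [ "$(boot_action "$mode")" = kexec ]; then
        kexec -l /boot/vmlinuz --command-line="$(cat /boot/cmdline)" \
            && kexec -e
    fi
    exec /bin/sh  # recovery: flashrom, cbmem, ectool etc. from here
fi
```

In the real setup the cmdline would live in CBFS and be extracted with
cbfstool; reading it from a plain file keeps the sketch simple.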

This would make development for new boards simpler. FT232 adapters, serial
consoles and ISP cables are nice, but I prefer to stay in software. Unless
I mess up big time, I don't see any good reason not to use normal/fallback
and flashrom straight on the board.

Charlotte

Re: [coreboot] Rettungsboot

2016-11-27 Thread Charlotte Plusplus
If I had more flash, I would like that too, just for kicks.

With the amount of flash we have, sharing the kernel and initrd doesn't
seem like a bad idea.

If the cmdline for the normal mode can be modified easily by updating the
CBFS with flashrom, I don't see any drawback to using kexec from the shared
kernel.

Charlotte

On Sun, Nov 27, 2016 at 7:26 PM, ron minnich  wrote:

>
>
> On Sun, Nov 27, 2016 at 4:22 PM Charlotte Plusplus <
> pluspluscharlo...@gmail.com> wrote:
>
>>
>>
>> In my ideal scenario, coreboot would have the 2 images (normal, fallback)
>> both starting the same payload (a minimal linux kernel) to save space.
>>
>
> at Los Alamos we found we wanted a fallback kernel and initramfs too.
>
> But YMMV.
>
> But your overall picture is much like what we had at Los Alamos and it
> works really well.
>

[coreboot] How to access temperature sensors on 'modern' thinkpads?

2016-11-27 Thread Charlotte Plusplus
Hello

I am still interested in lowering the power consumption. To try to find the
culprit, I am thinking about monitoring temperature sensors.

On the W520 specs, I see on p 70 ("Thermal sensor"):
S0: PCH/BASE COVER
S1: NVIDIA
S2: GBE
S3: WWAN
S4: DIMM(TOP)
S5: DIMM(BOTTOM)
S6: WLAN
S7: EXPRESS SLOT

On the W530 specs, I see on p 69 ("Thermal sensor"):
S0: PCH/BASE COVER
S1: NVIDIA
S2: DIMM(TOP)
S3: WLAN
S4: N/A
S5: DIMM(BOTTOM)
S6: GBE & WWAN

So basically, in the specs, I see at least 7 temperature sensors, connected
straight to an SMBus chip (EMC1438-2-AP-TR-GP on a W520) or straight to the
new EC (MEC1619L-GP on a W530). But I can't read any of them: even when
using the default BIOS on a W530, I can't get any data.

Apparently the userland tools do not support new thinkpads:
/proc/acpi/ibm/thermal shows nothing. lm-sensors doesn't help - I only get
the CPU sensors

Using ec_access, the offsets used on older thinkpads (as seen on
http://www.thinkwiki.org/wiki/Thermal_Sensors) remain at 0.

In case the temperatures are shown at other offsets, I tried to acquire
values for 3600 seconds while doing periods of activity. Plotting all the
offsets only shows temperature-like readings at 0x78 (the 121st entry).
Other candidates with non-zero activity are the 48, 133, 134, 171, 205, and
206 entries, but their plots don't look like temperature readings (CSV data
available upon request).
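That brute-force scan is easy to script. A sketch, assuming the kernel's
ec_sys debug module is loaded so the EC register space shows up at
/sys/kernel/debug/ec/ec0/io; the offset filtering itself is plain
changed/unchanged bookkeeping:

```python
import time

EC_IO = "/sys/kernel/debug/ec/ec0/io"   # provided by the ec_sys module


def read_ec():
    """Return one 256-byte snapshot of the EC register space."""
    with open(EC_IO, "rb") as f:
        return f.read(256)


def varying_offsets(snapshots):
    """Offsets whose value changed across a list of EC snapshots."""
    return sorted(
        off for off in range(len(snapshots[0]))
        if len({snap[off] for snap in snapshots}) > 1
    )


if __name__ == "__main__":
    try:
        snaps = []
        for _ in range(60):              # sample for about a minute
            snaps.append(read_ec())
            time.sleep(1)
        print(varying_offsets(snaps))    # candidate sensor offsets
    except OSError:
        print("ec_sys debugfs interface not available")
```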

So where is the data from these 7 temperature sensors??

I was told the Thinker1 can be accessed by writing to the EC, and exposes
more values, but I don't have any example on how to do that.

In the W530 specs, I read "To be monitor S4/5/6 line, EC should enable the
function with register" - but I can only read one sensor. I suppose this is
what was meant by accessing the Thinker 1.

Does anyone have an example?

After some googling, I found a reference to an advanced temperature
management mode on
https://sourceforge.net/p/ibm-acpi/mailman/ibm-acpi-devel/?viewmonth=201104
: but nothing about how to use it.

In the W520 specs, I read:
H8 I2C Bus 2 ADDRESS : 4DH

Resistor value | SMBUS Address
GND  | 1001 100, 4Ch
270| 1001 101, 4Dh
560| 1001 110, 4Eh
1K | 1001 111, 4Fh
1.5K  | 1001 001, 49h
2.7K  | 1001 010, 4Ah
5.6K  | 1001 011, 4Bh
>=18K   | 0011 000, 18h

Based on my best guess, I suppose that on the W520, reading some special
address on the SMBus will get me the data from those temperature sensors, as
selected by a resistor. But I get nothing with i2cdump -y 6 0x4C bp.
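If the EMC1438 ever answers on the SMBus, decoding a reading is simple. A
hedged sketch of the temperature format common to the EMC14xx family: whole
degrees in the high byte, eighths of a degree in bits 7:5 of the low byte.
Treat this format (and any register addresses) as an assumption to verify
against the EMC1438 datasheet:

```python
# Decode an EMC14xx-style temperature reading.
# ASSUMPTION: high byte = integer degrees C, low byte bits [7:5] =
# 0.125 C steps. Verify against the EMC1438 datasheet before trusting.

def emc_temp(high_byte, low_byte):
    return high_byte + (low_byte >> 5) * 0.125

# e.g. a hypothetical reading of 0x32 / 0x20:
print(emc_temp(0x32, 0x20))   # 50.125
```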

Is there some code showing how to read the temperature sensors on either an
xx20-series ThinkPad or an xx30-series ThinkPad?

Thanks
Charlotte

Re: [coreboot] Rettungsboot

2016-11-27 Thread Charlotte Plusplus
I don't know about you, but once I have a minimal working kernel or a
coreboot fallback, I never really update them. So having no way to recover
them without hardware intervention is fine. The kernel I may recompile,
patch, etc would be somewhere else.

The job of this minimal kernel and initrd would just be to kexec the other
kernel, and let you recover coreboot if needed.

Having both of them write-protected is just fine, if the cmdline used for
the kexec is read from another part of the SPI flash, for when you have to
add some kernel parameters.


On Sun, Nov 27, 2016 at 8:09 PM, Trammell Hudson  wrote:

> On Sun, Nov 27, 2016 at 07:30:07PM -0500, Charlotte Plusplus wrote:
> > [...]
> > With the amount of flash we have, sharing the kernel and initrd doesn't
> > seem like a bad idea.
>
> The problem is if a bad kernel or initrd is flashed then there is no
> way to recover without hardware intervention.  Having a truly minimal
> recovery kernel with USB and a spiflash writer makes it possible
> to boot into some sort of mode to reocver from that failure.
>
> For both root of trust as well as reliability concerns, the recovery
> image at the top of the SPI flash should be read-only with the BP bits
> and the WP# pin enabled.  That way hardware is required to really mess
> it up.
>
> --
> Trammell
>

Re: [coreboot] latest greatest thinkpad with coreboot

2016-12-02 Thread Charlotte Plusplus
I see recommendations for an X230, but I disagree. If you really want the
best, it's a W520 or a W530.

In either, you can have 32GB of RAM, and you can replace the default CPU
with an Intel i7-3940XM. But only the W520 has a full-size DisplayPort
and (more importantly) an eSATAp connector.

On thinkpads, you can usually have 3 drives:
 - a normal 2.5" SSD
 - another 2.5" in the optibay
 - a mSata in the WWAN port

But with the eSATAp connector, you can have 4 drives, one of which will be
external - either connected to the side of the laptop or to the dock.
Useful for backups at proper SATA speeds.

About the screen: the dock has extra DisplayPort connectors, and the
internal LCD is 1920x1080, not high resolution but good enough. The CPU
supports VT-x and VT-d, so external displays can be used with qemu vfio (I
don't have a dock yet, but I plan to do that soon)

If we are talking about the CPU: in theory, a modern P70 is faster. But if you
overclock the 3940XM to 4.6 GHz, there is no faster ThinkPad on the market.
Cf. the results from:
https://thinkpad-forum.de/threads/199076-Projekt-quot-Das-Letzte-Thinkpad-quot
(it requires a shell script to change the TDP and Turbo multipliers)
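A hedged sketch of the kind of script the linked thread describes: on an unlocked XM part, the turbo ratio limits live in MSR 0x1AD (MSR_TURBO_RATIO_LIMIT), one byte per active-core count. The 4.6 GHz target and uniform per-core ratio are assumptions; the script prints the msr-tools command instead of writing the MSR directly.

```shell
# Sketch: build the wrmsr command to raise the turbo ratio limit to 46
# (46 x 100 MHz = 4.6 GHz) for 1 to 4 active cores. Printed, not executed;
# actually running it requires msr-tools, root, and an unlocked CPU.
RATIO=46
BYTE=$(printf '%02x' "$RATIO")    # 0x2e, one byte per active-core count
echo "wrmsr 0x1ad 0x${BYTE}${BYTE}${BYTE}${BYTE}"
```

Raising the TDP limits (MSR 0x610, MSR_PKG_POWER_LIMIT) would be a separate step in such a script, with its own packed bitfields.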

The only issue with the W520 on coreboot is the power consumption, which I
hope to fix when I understand better how to talk to the EC to properly
disable the Nvidia GPU like the default BIOS does.

With only the integrated 9-cell battery I get about 4 hours, while putting my
SSD into a similar W530 (same CPU), where I can do proper power management
on the default BIOS, I get about 7 hours (I don't know precisely)

Add a slice battery if you need more battery time (approximately double)

Charlotte


On Thu, Dec 1, 2016 at 12:04 PM, ron minnich  wrote:

> what's the latest best one? What's the battery life like (can't be worse
> than this mac pro  that's always hot and now seems to have a life of 90
> minutes, always). How much dram/ssd can I jam in it?
>
> thanks
>
>

Re: [coreboot] 8 GB DIMMs on Nehalem (Arrandale)

2017-01-21 Thread Charlotte Plusplus
Addressing more than 8 GB is not supported by the chipset used in Nehalem
ThinkPad laptops (X201)

A stupid limitation, but it is not the CPU's fault.



On Sat, Jan 21, 2017 at 5:55 AM, Stefan Tauner <
stefan.tau...@alumni.tuwien.ac.at> wrote:

> Hi Vladimir,
>
> since you have REed the raminit for Nehalem I'd like you to ask if you
> have any knowledge, information or pointers about using 8 GB DIMMs with
> it or even using more than 8 GB in total. In my case it is about an
> Arrandale i5-520M (in a ThinkPad T410s).
>
> I know that an i7-820QM (Clarksfield) is perfectly capable of working
> with 8 GB DIMMs and probably up to 32 GB or even more (the Thinkpad
> W510 has 4 DIMM slots and I have tested it with 20 GB) and that is from
> around the same time as the Arrandale chips - which does not mean
> anything but I still refuse to accept that Nehalem is that limited. The
> official specs are not trustworthy IMHO and cpuid(1) and /proc/cpuinfo
> show the same physical address width of 36 bits (which would indicate a
> maximum of 64 GB).
>
> The current raminit for Nehalem in coreboot is not able to train the two
> 8 GB DIMMs I have tested so far. I have added a debug output to
> choose_reg178 in the first loop before the margins are compared to
> STANDARD_MIN_MARGIN that shows that all margins are 0. If there is
> anything I could try or information I can provide, please let me know.
>
> The (ancient) vendor firmware I've been using on the T410s does
> sometimes manage to boot Linux with an 8 GB DIMM (dmesg is attached
> including the e820 map), but it is quite broken and memtest86 locks up
> or reboots within seconds so that's probably not a good target for RE
> efforts. :)
>
> --
> Kind regards/Mit freundlichen Grüßen, Stefan Tauner
>
>