Re: Building computer

2013-10-04 Thread Paul Cartwright
On 10/03/2013 05:11 PM, Stan Hoeppner wrote:
 No apology necessary.  I'm on many open lists (LKML) where hitting
 reply-to-list only goes to the sender.  So I've been guilty myself a few
 times.
  
so that is list-specific... I wondered, because sometimes when I hit reply
it goes to the person, other times it goes to the list.. Thunderbird..

-- 
Paul Cartwright
Registered Linux User #367800 and new counter #561587





Re: Building computer

2013-10-04 Thread Stan Hoeppner
On 10/4/2013 4:44 AM, Paul Cartwright wrote:
 On 10/03/2013 05:11 PM, Stan Hoeppner wrote:
 No apology necessary.  I'm on many open lists (LKML) where hitting
 reply-to-list only goes to the sender.  So I've been guilty myself a few
 times.
  
 so that is list-specific... I wondered, because sometimes when I hit reply
 it goes to the person, other times it goes to the list.. Thunderbird..

This isn't an issue with TBird, or any MUA.  The clients simply
obey/honor the list headers.  For example:

X-Mailing-List: <debian-user@lists.debian.org> archive/latest/657649
List-Id: <debian-user.lists.debian.org>
List-Post: <mailto:debian-user@lists.debian.org>
Precedence: list

The "List-Post:" header contains the list posting address.  When you hit
"reply-to-list" in TBird this is the address it selects for populating
the To: field in the reply.

If you reply to a message sent from a listserver that does not provide a
"List-Post:" header, then the address in the "Reply-To:" header is
inserted into the To: field of the reply.

Ergo, "reply-to-list" only works if a List-Post header is present.
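
For a quick look at which of these headers a given message actually
carries (message.eml below is a hypothetical raw message saved to disk;
the output shown is illustrative):

~$ grep -iE '^(List-Post|List-Id|Reply-To):' message.eml
List-Id: <debian-user.lists.debian.org>
List-Post: <mailto:debian-user@lists.debian.org>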

-- 
Stan





[OT] List software, duplicate mail, or not. (was ... Re: Building computer)

2013-10-04 Thread Chris Bannister
On Fri, Oct 04, 2013 at 05:44:44AM -0400, Paul Cartwright wrote:
 On 10/03/2013 05:11 PM, Stan Hoeppner wrote:
  No apology necessary.  I'm on many open lists (LKML) where hitting
  reply-to-list only goes to the sender.  So I've been guilty myself a few
  times.
   
 so that is list-specific... I wondered, because sometimes when I hit reply
 it goes to the person, other times it goes to the list.. Thunderbird..

I think it has been discussed before, but depending on the list software
used and the settings, if someone CC's you then you get a private mail
but not a list mail - it still goes to the list, it's just that *you*
don't get a copy!

It seems as though this started because people coming from Windows
might get confused by getting two copies of a mail!!!

A couple of times I received a private email which was also addressed to a
list and deleted it (because it makes more sense to reply to the list,
obviously) but when I opened that list's mailbox -- it wasn't there!!

None of the Debian lists I am subscribed to have this serious problem.

There is a personal setting which you can change from your subscription
page (you're given a password and reminded monthly what it is) which
you can toggle (unfortunately, on some lists the default is yes):

Avoid duplicate copies of messages?

(*) No
( ) Yes

When you are listed explicitly in the To: or Cc: headers of a list
message, you can opt to not receive another copy from the mailing list.
Select Yes to avoid receiving copies from the mailing list; select No to
receive copies.
-

Sorry to labour the point, but what is happening to logic in the world
today? 

-- 
If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing. --- Malcolm X





Screen resolution [was: Re: Building computer]

2013-10-03 Thread Anthony Campbell
On 02 Oct 2013, Stan Hoeppner wrote:

[snip] 
  No apology needed. You did in fact peg my age correctly; I will be 61 next
  month. And I too have known people who run less than the native resolution
  to make the fonts bigger. When I get to that point, though, I will simply
  increase the font size so I don't get jaggies and blurry letters. Right now
  my eyes need the sharpness of the image, not a size increase.
 
[snip] 

Increasing the size of the term font isn't a solution for the very tiny menu
fonts you get in some applications, e.g. imagemagick. For this reason,
at least on my desktop with a CRT monitor, I use 1280x1024 instead of
1600x1200. There is no obvious degradation of quality, at least to my
worsening eyes. On my laptop this doesn't work well because you get
distortion at any resolution other than the native one.

AC

-- 
Anthony Campbell - a...@acampbell.org.uk 
http://www.acupuncturecourse.org.uk 
http://www.smashwords.com/profile.view/acampbell
https://itunes.apple.com/ca/artist/anthony-campbell/id73235412









Re: Building computer

2013-10-03 Thread Lisi Reisz
On Thursday 03 October 2013 05:52:06 Stan Hoeppner wrote:
  In my early 40s

A mere stripling!  And there was I pegging you as a sage*. ;-)

Lisi

* "a teacher venerable for years, and of sound judgment" (Wiktionary)

P.S. Sorry Stan - I did not mean to send to you personally.  I am
trying to train myself not to click on "reply" for the list, but not
being very successful.  Old dogs and new tricks? ;-(





Re: Building computer

2013-10-03 Thread Rhiamom

 On Oct 3, 2013, at 12:52 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 
 If your broadband router has QoS features, as many newer models do, you
 should be able to program it to give priority to game traffic.  That
 would prevent the lag and stutter due to Skype or any other traffic.
 It's usually pretty easy to set up, if it supports it.  Shoot me the
 make/model# and I'll tell you if it does, and if so I'll point you to
 the instructions.

No, thank you. I am acting as if I have no clue about this at home. This
is because that weekly piano lesson is more important than my silly game.
I would have to prioritize the Skype traffic. I don't usually raid that night,
Monday, as the raids are full of the last-chance-of-the-week losers. It's not
a big issue. But I am no stranger to opening ports for BitTorrent clients, and
even WoW for those background downloads.

We have a modem so old it doesn't even have wireless capability, so we have 
our own wireless router.




Re: Building computer

2013-10-03 Thread Stan Hoeppner
On 10/3/2013 7:49 AM, Lisi Reisz wrote:
 On Thursday 03 October 2013 05:52:06 Stan Hoeppner wrote:
  In my early 40s
 
 A mere stripling!  And there was I pegging you as a sage*. ;-)
 
 Lisi
 
 * "a teacher venerable for years, and of sound judgment" (Wiktionary)

"I'm sorry, I'm not [The One]. I'm just another guy."
--  Neo to Trinity, The Matrix

 P.S. Sorry Stan - I did not mean to send to you personally.  I am
 trying to train myself not to click on "reply" for the list, but not
 being very successful.  Old dogs and new tricks? ;-(

No apology necessary.  I'm on many open lists (LKML) where hitting
reply-to-list only goes to the sender.  So I've been guilty myself a few
times.

-- 
Stan





Re: Building a computer for compiling and CPU emulation (Re: Building computer)

2013-10-02 Thread Stan Hoeppner
On 10/1/2013 9:16 AM, Joel Rees wrote:
 On Tue, Oct 1, 2013 at 7:00 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/29/2013 6:01 AM, Joel Rees wrote:
 ...
 http://en.wikipedia.org/wiki/List_of_AMD_mobile_microprocessors

 tells me that AMD A4-1250, AMD A6-1450, and E2-1800 all have AMD-V.

 The E2-1800 has a half-Meg L2 cache, but higher speed CPU (1.7GHz).

 The A4-1250 has 1 Meg L2 cache but lower speed CPU (1.0 GHz).

 The A6-1450 has 2 Meg L2 cache to share between twice as many cores.

 The A6-1450 is probably the best choice, not taking cost into
 consideration.  Here's why:

 1.  Turbo core up to 1.4GHz

 2.  All 3 models have 512KB L2/core, no advantage for any

 3.  Temash core has better IPC than Zacate

 4.  4 cores @ 1.4GHz should give better compile times than
 2 cores @ 1.8GHz.  4 @ 1GHz should as well.  -j4 or higher

 5.  A6-1450 is a 9W chip (28nm), E2-1800 an 18W chip (40nm):
     longer run time

 The one downside is that for non-compute intensive operations, such as
 normal interactive GUI apps, say PDF viewing, browser rendering, etc, it
 may be considerably slower than the 1.8GHz E2-1800, due to the 800MHz
 clock deficit, as the turbo core may not kick in a lot here.  And WRT
 turbo core, I'm not quite sure what this means:

 "Selected parts support Turbo Dock technology, that can increase CPU and
 GPU frequencies when external cooling is available."

 So if the unit you purchase doesn't have a variable speed fan that can
 fulfill this requirement, this may mean you can't get 1.4GHz turbo mode.
  And I'm just guessing that devices of this class may not include forced
 air cooling.  Sorry I don't have all the answers here, but maybe this
 helps get you a bit closer.

 One thing I can assure you of is that for your stated use case, IIUI
 correctly, all of these CPUs are very likely woefully inadequate for the
 task.
 
 Yes and no. Most of the compiling will be much smaller than a kernel,
 not even complete packages. I think I said it, but you've clipped that
 part: this is a course in programming that includes writing some
 drivers.

Got it.  Yeah, compiling a few hundred or thousand lines of code
shouldn't be that bad.
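
For reference, the -j flag mentioned above is what spreads a build
across cores.  A minimal sketch, assuming a GNU make based build:

~$ make -j4            # run up to 4 compile jobs in parallel
~$ make -j"$(nproc)"   # or one job per core (nproc is in GNU coreutils)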

 Emulating the SuperH processor is going to be a bit demanding,

Instruction set level emulation is going to be extremely demanding if
you intend to run a complete emulated OS environment.  If you strictly
use it for things like executing and debugging individual subroutines,
code segments, etc, it shouldn't be too bad.

 particularly if I find myself wanting to compile a SuperH kernel and
 not having access to the school labs over a long holiday. I am aware
 of that and will plan accordingly.

Yep.

 Mostly, I was trying to dig up the AMD-V support and something in the
 thread pointed me in the right direction.

Having AMD-V or Intel VT isn't an absolute requirement.  Both will speed
up context switches.  As important as the new thread switching hardware
in these CPUs is the size of the TLBs.  CPUs without virtualization
support tend to have inadequate TLBs.  When you switch between VMs you
end up flushing most of the TLB entries and reloading them.
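
If you want to check a given chip, the CPU flags in /proc/cpuinfo
expose this: svm means AMD-V, vmx means Intel VT-x.  The output below
is what you'd see on an AMD part:

~$ grep -m1 -oE 'svm|vmx' /proc/cpuinfo
svm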

CPUs with large L2/L3 caches help mitigate this to a degree.  Which is
one of the reasons you see ginormous caches on server oriented CPUs,
Xeon and Opteron, up to 34MB combined L2/L3.  Most server workloads are
transaction oriented and throughput is dictated by network/disk latency.
A huge cache does nothing for you here.  But when you run many virtual
machines it provides serious benefit.

This is one of the reasons I recommended going for the largest cache you
can get.  Unfortunately there's not much difference in cache sizes in
the processor class you're looking at.

Have you considered a refurb or used laptop?  The Core2 Duo SL9600
offers 2.13GHz clock, 6MB L2 cache, at 17W.  The Mobile PhenomII P650
2.6GHz clock w/2MB L2, 25W.

 I'm not planning on getting more than AMD-V in a portable machine. Don't
 want to carry a boat battery with me. :-p

Heheh, love the deep cycle reference.  Though if you're like many mobile
users, in the States anyway, most of the time a wall plug isn't far
away, in either distance or time.  Never visited Japan so I can't comment
on the situation there.

-- 
Stan






Re: Building computer

2013-10-02 Thread Stan Hoeppner
On 10/1/2013 5:13 PM, Catherine Gramze wrote:
 On Sep 30, 2013, at 10:33 PM, Stan Hoeppner s...@hardwarefreak.com
 wrote:
 
 Actually they were, up to the point you finally told us what screen
 resolution you use.  That changes things quite a bit, or I should say
 changed one thing dramatically.
 
 I recommended a fast dual core CPU because more cores will be wasted.
 The i3-4340 3.6GHz Haswell would have been as fast as the quad core you
 bought, and saved ~$40-50.  4GB RAM, more than 4GB is wasted, but as I
 said previously it's cheap so buy more if you want, won't hurt.  An SSD,
 and APU graphics.  The only change in that recommendation, now that I
 know your screen resolution, is shifting to a very fast high bandwidth
 discrete card.  2560x1440 is a pretty insanely high gaming res if you
 want high frame rates and smooth rendering at high detail.  Most hard
 core gamers wouldn't touch 2560x1440 without SLI/Xfire.
 
 If going w/a single reasonably priced card you're going to want/need a
 model with a 384 bit bus.  Extremely high resolutions require extremely
 high memory bandwidth.  The 384bit nVidia models are all above $600.
 The AMD 7950s can be had in the low $200s, and the 7970s in the low
 $300s.  Both are 384bit.
 
 You are right about the quad core not making any real difference. I have
 run the activity monitor and observed how one core is at 70% and another
 at 22% and two others unused. Average user applications are not yet written
 to take advantage of multiple core processors.

Nor will they ever be.  I'm glad you looked into it and see it now.  If
you search the list archives you'll see I've written a bit on the
multi-core issue, made the dual core recommendation many times, and went
so far as to suggest a new dual core design that AMD or Intel could
produce that would benefit everyone.  But they're shackled to multi-core
now because their marketing of the past 6 years or so has tried to
convince everyone that more cores are better.  Why?  Because they're
unable to increase per core performance at a rate which justifies buying
new CPUs.  If they did they'd have to reverse their marketing message,
and that is just fraught with problems.

The problem of efficient parallel programming will be with us for a VERY
long time.  And it's not that programmers, specifically game
programmers, aren't crafty or smart enough to write parallel code.  They
are far more capable than others.  The problem is that only a finite
amount of a given program's flow logic can be parallelized.  The rest
must remain serial.  And that serial part dominates most desktop
applications, including games.  That's why more cores don't help beyond
two for the vast majority of desktop applications and games.  See
Amdahl's law:  http://en.wikipedia.org/wiki/Amdahl%27s_law
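
A rough worked example, with an illustrative parallel fraction: if p of
the work can be parallelized, the speedup on n cores is
S(n) = 1 / ((1 - p) + p/n).  Even at p = 0.5:

~$ for n in 2 4 8; do echo "scale=2; 1 / (0.5 + 0.5 / $n)" | bc; done
1.33
1.60
1.77

Quadrupling the cores from 2 to 8 buys about a third more speed, and no
core count can ever beat 2x while half the work stays serial.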

This is the limiting factor.  And this is why I implore people to buy
the fastest dual core and forgo the quad, six, eight core models.  And
in fact, for non gamer daily use, I recommend the single core AMD
Sempron, because a dual core is wasted with Firefox, Thunderbird, Flash,
Adobe Reader, etc.

I failed to make a convincing case before you purchased, Catherine.  But
at least you'll now be armed with this information when you make your
next purchase.  Financially it's not a huge deal, maybe $50 more in this
case, 10% of the system price, for the quad core.  But two cores will
forever be wasted, and that $50 could have gone toward the discrete GPU
you need.

 I am glad you finally understand that my desire for a dedicated video card
 with a substantial amount of dedicated video ram was not just a case of
 bigger, better, faster, more but based on the reality of what is required
 to
 do what I do. I can't imagine what screen resolution you thought I had in
 mind.
 1024x768? It hurts my old eyes to look at those.

We all make incorrect assumptions at times.  I keyed on one thing you
stated, which didn't get corrected until way late in the thread.  That
was "I am retired."  When I saw that I pegged you at 60+, or at least
late 50s.  I only know a few people in that age range, or older, playing
WOW or any games, and they do it on big box brand PCs with integrated
video on ~21 inch screens.  However, they run a resolution much lower than
the panel native res so it's easier to read text without needing to use
the 2nd/3rd lenses of their bi/trifocals, for instance 1280x720 instead
of the panel native 1920x1080.  At 1280x720 a discrete GPU is overkill
for their needs.

This was my mistake for not asking point blank early in the thread what
res you were running instead of making assumptions based on your retired
status.  If I had asked more questions up front we could have avoided
the contention.  For that I apologize.

 I have also looked at my memory usage. At this very moment, not running
 WoW, I have 5.22 gig being used. 4 gig would not be sufficient for me.

You would be correct if the number you're looking at reflected
application memory usage.  But it doesn't.  

Re: Building computer

2013-10-02 Thread Jeff Bauer

On 10/02/2013 04:59 AM, Stan Hoeppner wrote:

On 10/1/2013 5:13 PM, Catherine Gramze wrote:


I have also looked at my memory usage. At this very moment, not running
WoW, I have 5.22 gig being used. 4 gig would not be sufficient for me.

You would be correct if the number you're looking at reflected
application memory usage.  But it doesn't.  On any of the modern
operating systems one must damn near be a computer scientist to see the
actual memory usage.  The 5.22GB, this is on Debian, yes?  The system
monitor?  This reports process and cache memory usage.  The buffer/cache
will literally eat nearly all available memory all the time on Linux,
then free some when an application process needs it.  I've never used
OSX but it's probably similar in its desktop reporting tool.

This will really throw you for a loop.  Open a shell window and execute

~$ sudo echo 3 > /proc/sys/vm/drop_caches

Wait a few seconds and see what happens to that 5.22GB number.  Then
report back what you find.  You can do this while playing WOW as well.
That number will drop like a rock and WOW will keep on going, because
the memory you're freeing with that command is cache.  And again, Linux
will eat nearly all RAM for cache if the system is up long enough.



free is another quick way to see where all your RAM went.

Regards,

Jeff




--
hangout: ##b0rked on irc.freenode.net
diversion: http://alienjeff.net - visit The Fringe
quote: "The foundation of authority is based upon the consent of the people."
- Thomas Hooker





Re: Building computer

2013-10-02 Thread Stan Hoeppner
On 10/2/2013 5:41 AM, Jeff Bauer wrote:
 On 10/02/2013 04:59 AM, Stan Hoeppner wrote:
 On 10/1/2013 5:13 PM, Catherine Gramze wrote:

 I have also looked at my memory usage. At this very moment, not running
 WoW, I have 5.22 gig being used. 4 gig would not be sufficient for me.
 You would be correct if the number you're looking at reflected
 application memory usage.  But it doesn't.  On any of the modern
 operating systems one must damn near be a computer scientist to see the
 actual memory usage.  The 5.22GB, this is on Debian, yes?  The system
 monitor?  This reports process and cache memory usage.  The buffer/cache
 will literally eat nearly all available memory all the time on Linux,
 then free some when an application process needs it.  I've never used
 OSX but it's probably similar in its desktop reporting tool.

 This will really throw you for a loop.  Open a shell window and execute

 ~$ sudo echo 3 > /proc/sys/vm/drop_caches

 Wait a few seconds and see what happens to that 5.22GB number.  Then
 report back what you find.  You can do this while playing WOW as well.
 That number will drop like a rock and WOW will keep on going, because
 the memory you're freeing with that command is cache.  And again, Linux
 will eat nearly all RAM for cache if the system is up long enough.

 
 free is another quick way to see where all your RAM went.

Telling her to use free is premature at this point.  Especially if
you're not going to instruct her on how to use it.  Otherwise it will
only serve to confuse her.  Which is precisely why I didn't mention it,
at least not yet.

You've apparently not been following the thread for the past week Jeff.

-- 
Stan





Re: Building computer

2013-10-02 Thread Jonathan Dowland
On Wed, Oct 02, 2013 at 03:59:15AM -0500, Stan Hoeppner wrote:
 This will really throw you for a loop.  Open a shell window and execute
 
 ~$ sudo echo 3 > /proc/sys/vm/drop_caches

She'll probably get 'Permission denied'; you need elevated permissions
for the writing process, which in this case is the outer shell. Try
instead

  $ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
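
An equivalent that sidesteps the quoting is to let tee do the write
under sudo:

  $ echo 3 | sudo tee /proc/sys/vm/drop_caches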





Re: Building computer

2013-10-02 Thread Rhiamom

 On Oct 2, 2013, at 4:59 AM, Stan Hoeppner s...@hardwarefreak.com wrote
 
 This is the limiting factor.  And this is why I implore people to buy
 the fastest dual core and forgo the quad, six, eight core models.  And
 in fact, for non gamer daily use, I recommend the single core AMD
 Sempron, because a dual core is wasted with Firefox, Thunderbird, Flash,
 Adobe Reader, etc.
 
 I failed to make a convincing case before you purchased Catherine.  But
 at least you'll now be armed with this information when you make your
 next purchase.  Financially it's not a huge deal, maybe $50 more in this
 case, 10% of the system price, for the quad core.  But two cores will
 forever be wasted, and that $50 could have gone toward the discrete GPU
 you need.

I never suggested you were not correct about the CPU.  I observed the lack 
of utilization of multiple cores on my first dual-core machine in 2006. I got 
the Haswell for the speed, lower power consumption, and presumably less 
heat generation. And possible resale value later on.
 
 
 This was my mistake for not asking point blank early in the thread what
 res you were running instead of making assumptions based on your retired
 status.  If I had asked more questions up front we could have avoided
 the contention.  For that I apologize.

No apology needed. You did in fact peg my age correctly; I will be 61 next
month. And I too have known people who run less than the native resolution
to make the fonts bigger. When I get to that point, though, I will simply
increase the font size so I don't get jaggies and blurry letters. Right now
my eyes need the sharpness of the image, not a size increase.
 
 You would be correct if the number you're looking at reflected
 application memory usage.  But it doesn't.  On any of the modern
 operating systems one must damn near be a computer scientist to see the
 actual memory usage.  The 5.22GB, this is on Debian, yes?  The system
 monitor?  This reports process and cache memory usage.  The buffer/cache
 will literally eat nearly all available memory all the time on Linux,
 then free some when an application process needs it.  I've never used
 OSX but it's probably similar in its desktop reporting tool.

This was in OS X. The memory use would be similar in Debian, I assume. About
a quarter of the used memory was inactive, which I assume was the cache. Still
too close for comfort for me, as WoW was not running, nor Ventrilo, and WoW
does background downloads of the almost-weekly patches while you play, so
there are even more processes.
 
 This will really throw you for a loop.  Open a shell window and execute
 
 ~$ sudo echo 3 > /proc/sys/vm/drop_caches
 
 Wait a few seconds and see what happens to that 5.22GB number.  Then
 report back what you find.  You can do this while playing WOW as well.
 That number will drop like a rock and WOW will keep on going, because
 the memory you're freeing with that command is cache.  And again, Linux
 will eat nearly all RAM for cache if the system is up long enough.

It is the nature of *nix to gobble up memory, yes. It will use what is there
whether it needs it or not. But my Debian box has not arrived yet, so I can't
run that command there. I could try it in Terminal on my iMac, and it would
probably work.
 
 I do thank you for the advice pertaining to a 384 bit bus and a gig more
 video ram than I was planning to get. That is advice that I will be
 following.
 
 You're welcome.  Keep in mind that at 2560x1440 the 7950/7970 may still
 not be fast enough for full detail in WOW with GPU settings on high.
 The extra GB of VRAM won't get utilized but you need the memory
 bandwidth of a 384bit bus.  Nobody sells, AFAICT, a 2GB model using
 these GPUs.
 
 I can't tell you where the setting resides, or if you have to edit
 xorg.conf, but you will want to use double buffering, not triple
 buffering.  You'll also want to disable full screen antialiasing (FSAA)
 and anisotropic filtering, or set them to very low values such as 2x or
 4x, or play with the settings until you strike the right balance.  They
 are variable from off to 16x.  These are driver settings for the GPU.
 They affect the image quality by smoothing the pixels of straight lines
 and the edges of objects in the scene, i.e. removing jaggies, such as
 on the ears or dangling hair of characters, the tip of arrows sticking
 out of a quiver, etc.

Oddly enough, these are also WoW in-game settings. The Mac section of the
WoW forum has specific advice on how to set those for every model of Mac 
that can run the game. 
 
 You may be able to tweak these on the MAC to get acceptable smoothness
 from your 6970 as well.  GPUs are infinitely tweakable to balance speed
 against image quality.

I usually have acceptable smoothness, but I might need to tweak my settings a
bit more. I was getting lag and jerkiness in the newest raid. I strongly
suspect it was my husband giving a piano lesson via Skype while I was in the
raid, though.

Cg
 



Re: Building computer

2013-10-02 Thread Celejar
On Wed, 2 Oct 2013 10:42:57 -0400
Rhiamom rhia...@gmail.com wrote:

  On Oct 2, 2013, at 4:59 AM, Stan Hoeppner s...@hardwarefreak.com wrote

...

  You would be correct if the number you're looking at reflected
  application memory usage.  But it doesn't.  On any of the modern
  operating systems one must damn near be a computer scientist to see the
  actual memory usage.  The 5.22GB, this is on Debian, yes?  The system
  monitor?  This reports process and cache memory usage.  The buffer/cache
  will literally eat nearly all available memory all the time on Linux,
  then free some when an application process needs it.  I've never used
  OSX but it's probably similar in its desktop reporting tool.
 
 This was in OS X. The memory use would be similar in Debian, I assume. About
 a quarter of the used memory was inactive, which I assume was the cache. Still
 too close for comfort for me, as WoW was not running, nor Ventrilo, and WoW
 does background downloads of the almost-weekly patches while you play, so
 there are even more processes.

It may be necessary to be a computer scientist to understand stuff like
VIRT / RES / SHR, but it's trivial to separate out process and cache
usage - one just looks at the second, rather than the first, line of 'free'
output.
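
For illustration, with made-up numbers in the procps layout current at
the time:

~$ free -m
             total       used       free     shared    buffers     cached
Mem:          7982       7654        328          0        214       5120
-/+ buffers/cache:       2320       5662
Swap:         8190          0       8190

The first line counts the 5334 MB of buffers/cache as "used"; the
second line shows applications actually holding only 2320 MB.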

Celejar





Re: Building computer

2013-10-02 Thread Stan Hoeppner
On 10/2/2013 8:23 AM, Jonathan Dowland wrote:
 On Wed, Oct 02, 2013 at 03:59:15AM -0500, Stan Hoeppner wrote:
 This will really throw you for a loop.  Open a shell window and execute

 ~$ sudo echo 3 > /proc/sys/vm/drop_caches
 
 She'll probably get 'Permission denied'; you need elevated permissions
 for the writing process, which in this case is the outer shell. Try
 instead
 
   $ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"

Thanks for the tip Jonathan.  I never use sudo, so I was winging it a
bit.  And I wasn't about to instruct her to log in as root or su to root...

-- 
Stan






Re: Building computer

2013-10-02 Thread Stan Hoeppner
On 10/2/2013 9:42 AM, Rhiamom wrote:
 
 On Oct 2, 2013, at 4:59 AM, Stan Hoeppner s...@hardwarefreak.com wrote

 This is the limiting factor.  And this is why I implore people to buy
 the fastest dual core and forgo the quad, six, eight core models.  And
 in fact, for non gamer daily use, I recommend the single core AMD
 Sempron, because a dual core is wasted with Firefox, Thunderbird, Flash,
 Adobe Reader, etc.

 I failed to make a convincing case before you purchased, Catherine.  But
 at least you'll now be armed with this information when you make your
 next purchase.  Financially it's not a huge deal, maybe $50 more in this
 case, 10% of the system price, for the quad core.  But two cores will
 forever be wasted, and that $50 could have gone toward the discrete GPU
 you need.
 
 I never suggested you were not correct about the CPU.  I observed the lack 
 of utilization of multiple cores on my first dual-core machine in 2006. I got 
 the Haswell for the speed, lower power consumption, and presumably less 
 heat generation. And possible resale value later on.


 This was my mistake for not asking point blank early in the thread what
 res you were running instead of making assumptions based on your retired
 status.  If I had asked more questions up front we could have avoided
 the contention.  For that I apologize.
 
 No apology needed. You did in fact peg my age correctly; I will be 61 next
 month. And I too have known people who run less than the native resolution
 to make the fonts bigger. When I get to that point, though, I will simply
 increase the font size so I don't get jaggies and blurry letters. Right now
 my eyes need the sharpness of the image, not a size increase.

I'm lucky so far.  In my early 40s and never needed glasses.  Though I
have noticed recently I must hold fine print further away to read it,
such as product warning labels, the two paragraphs of exclusions on the
phone/cable teaser rate flyers, etc.  No issues with native res yet,
thankfully.

 You would be correct if the number you're looking at reflected
 application memory usage.  But it doesn't.  On any of the modern
 operating systems one must damn near be a computer scientist to see the
 actual memory usage.  The 5.22GB, this is on Debian, yes?  The system
 monitor?  This reports process and cache memory usage.  The buffer/cache
 will literally eat nearly all available memory all the time on Linux,
 then free some when an application process needs it.  I've never used
 OSX but it's probably similar in its desktop reporting tool.
 
 This was in OS X. The memory use would be similar in Debian, I assume. About

A Wheezy desktop install isn't going to eat nearly as much RAM as
desktop OSX, as there are far fewer service daemons loaded at startup.

 a quarter of the used memory was inactive, which I assume was the cache. Still

I'd guess that quarter is what has been paged to disk, not the cache.
If you had no applications running, just the desktop and background
processes, that shouldn't eat anywhere near 3.5GB.  I'd think the cache
is in what's left, not in the inactive.

 too close for comfort for me, as WoW was not running, nor Ventrilo, and WoW
 does background downloads of the almost-weekly patches while you play, so
 there are even more processes.

 This will really throw you for a loop.  Open a shell window and execute

 ~$ sudo echo 3 > /proc/sys/vm/drop_caches

 Wait a few seconds and see what happens to that 5.22GB number.  Then
 report back what you find.  You can do this while playing WOW as well.
 That number will drop like a rock and WOW will keep on going, because
 the memory you're freeing with that command is cache.  And again, Linux
 will eat nearly all RAM for cache if the system is up long enough.
 
 It is the nature of *nix to gobble up memory, yes. It will use what is there 
 whether it needs it or not.  

Yes, the memory is used for block caching.  The reason this is done is
because the cost to drop cached lines is minimal when the memory is
needed by an active process.  So instead of wasting memory, Linux in
particular caches just about every block.

 But my Debian box has not arrived yet, so I can't
 run that command there. I could try it in Terminal on my iMac, and it would 
 probably work. 

This won't work on OSX.  The /proc/sys/vm/ parameters in Linux are
unique to Linux.
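
For completeness, on Linux the sysctl front end reaches the same knob
and handles the root-only write itself:

~$ sudo sysctl -w vm.drop_caches=3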

 I do thank you for the advice pertaining to a 384 bit bus and a gig more
 video ram than I was planning to get. That is advice that I will be
 following.

 You're welcome.  Keep in mind that at 2560x1440 the 7950/7970 may still
 not be fast enough for full detail in WOW with GPU settings on high.
 The extra GB of VRAM won't get utilized but you need the memory
 bandwidth of a 384bit bus.  Nobody sells, AFAICT, a 2GB model using
 these GPUs.

 I can't tell you where the setting resides, or if you have to edit
 xorg.conf, but you will want to use double buffering, not triple
 buffering.  You'll also want to disable full screen 

Re: Building computer

2013-10-01 Thread Ralf Mardorf
On Mon, 2013-09-30 at 20:15 -0400, Charles Kroeger wrote:
 On Sun, 29 Sep 2013 00:00:03 +0200
 Ralf Mardorf ralf.mard...@alice-dsl.net wrote:
 
  in
  Germany in 2012 was around 0.26 €/kWh. (0.26 EUR = 0.351604 USD)

Btw., this is the statistical mean for Germany.

 In Western New York the last bill was circa $0.21/kWh
 
 In West Texas for September, circa $0.14/kWh
 
 1.00 EUR = 1.35229 USD  Mid-market rates: 2013-10-01 00:12 UTC
 
 What's it cost elsewhere?





Re: Building computer

2013-10-01 Thread Ralf Mardorf
On Tue, 2013-10-01 at 10:04 +0200, Ralf Mardorf wrote:
 On Mon, 2013-09-30 at 20:15 -0400, Charles Kroeger wrote:
  On Sun, 29 Sep 2013 00:00:03 +0200
  Ralf Mardorf ralf.mard...@alice-dsl.net wrote:
  
   in
   Germany in 2012 was around 0.26 €/kWh. (0.26 EUR = 0.351604 USD)
 
 Btw. this is the statistical mean for Germany.
 
  In Western New York the last bill was circa $0.21/kWh
  
  In West Texas for September, circa $0.14/kWh
  
  1.00 EUR = 1.35229 USD  Mid-market rates: 2013-10-01 00:12 UTC
  
  What's it cost elsewhere?

PS:

"For private households, the electricity price in Germany in 2012 averaged
26.4 ct/kWh, according to the European statistical office (Eurostat)" -
https://de.wikipedia.org/wiki/Strompreis

There might be information about the mean for other countries too. At
least Doug had to pay more than others from the USA.





Re: Building computer

2013-10-01 Thread Jerry Stuckle

On 10/1/2013 12:38 AM, Doug wrote:

On 09/30/2013 08:15 PM, Charles Kroeger wrote:

On Sun, 29 Sep 2013 00:00:03 +0200
Ralf Mardorf ralf.mard...@alice-dsl.net wrote:


in
Germany in 2012 was around 0.26 €/kWh. (0.26 EUR = 0.351604 USD)


In Western New York the last bill was circa $0.21/kWh

In West Texas for September, circa $0.14/kWh

1.00 EUR = 1.35229 USD  Mid-market rates: 2013-10-01 00:12 UTC

What's it cost elsewhere?


You have to be careful about this. Here's my last electric bill
from Long Island Power Authority, for one month:

DELIVERY AND SYSTEM CHARGES
Basic service: 31 days @ $.3600  $11.16
258KWH @ $.0857                   22.11
1256KWH @ $.0975                 122.46
Subtotal:                        155.73

POWER SUPPLY CHARGES
1514KWH @ $.094136               142.52

EFFICIENCY & RENEWABLE CHARGE
1514KWH @ $.006246                 9.46

OTHER CHARGES
NY State Assessment                3.75
Revenue-Based PILOTS               6.52
Suffolk Property Tax Adjustment    6.35
Sales Tax @ 2.5%                   8.11
Subtotal:                         24.73

Total Charges:              $    332.44

Now if you take $332.44 and divide it by 1514 KWH, the
price is now 22 cents per KWH.
I guess that's not bad by German standards, but it's nothing
like the prices shown unloaded, above.

As shown, this is at Suffolk County, Long Island, New York,
for the month of August, 2013.

--doug



Yes, you have to be careful here.  Besides generation charges, there are 
transmission charges, distribution charges, maintenance charges, 
non-renewable energy surcharges, renewable energy surcharges, Wednesday 
Afternoon Green Fees for the C-suite charges, padding the pockets of 
politicians charges, and all kinds of other charges.  The actual 
generation costs are only a minor part of the total bill.
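
Doug's effective rate is easy to re-check (numbers from the bill above):

~$ echo "scale=4; 332.44 / 1514" | bc
.2195

i.e. just under 22 cents per kWh once every surcharge is loaded in.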


But here, also, they are outlawing incandescent bulbs, starting with 
100W.  Next will be 75W, and so on.


One good thing about tablets - they don't use nearly as much energy as 
desktops! :)


Jerry





Re: Building a computer for compiling and CPU emulation (Re: Building computer)

2013-10-01 Thread Joel Rees
On Tue, Oct 1, 2013 at 7:00 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/29/2013 6:01 AM, Joel Rees wrote:
 ...
 http://en.wikipedia.org/wiki/List_of_AMD_mobile_microprocessors

 tells me that AMD A4-1250, AMD A6-1450, and E2-1800 all have AMD-V.

 The E2-1800 has a half-Meg L2 cache, but higher speed CPU (1.7GHz).

 The A4-1250 has 1 Meg L2 cache but lower speed CPU (1.0 GHz).

 The A6-1450 has 2 Meg L2 cache to share between twice as many cores.

 The A6-1450 is probably the best choice, not taking cost into
 consideration.  Here's why:

 1.  Turbo core up to 1.4GHz

 2.  All 3 models have 512KB L2/core, no advantage for any

 3.  Temash core has better IPC than Zacate

 4.  4 cores @ 1.4GHz should give better compile times than
 2 cores @ 1.8GHz.  4 @ 1GHz should as well.  -j4 or higher

 5.  A6-1450 is a 9W chip (28nm), E2-1800 an 18W chip (40nm):
     longer run time

 The one downside is that for non-compute intensive operations, such as
 normal interactive GUI apps, say PDF viewing, browser rendering, etc, it
 may be considerably slower than the 1.8GHz E2-1800, due to the 800MHz
 clock deficit, as the turbo core may not kick in a lot here.  And WRT
 turbo core, I'm not quite sure what this means:

 "Selected parts support Turbo Dock technology, that can increase CPU and
 GPU frequencies when external cooling is available."

 So if the unit you purchase doesn't have a variable speed fan that can
 fulfill this requirement, this may mean you can't get 1.4GHz turbo mode.
  And I'm just guessing that devices of this class may not include forced
 air cooling.  Sorry I don't have all the answers here, but maybe this
 helps get you a bit closer.

 One thing I can assure you of is that for your stated use case, IIUI
 correctly, all of these CPUs are very likely woefully inadequate for the
 task.

Yes and no. Most of the compiling will be much smaller than a kernel,
not even complete packages. I think I said it, but you've clipped that
part: this is a course in programming that includes writing some
drivers.

Emulating the SuperH processor is going to be a bit demanding,
particularly if I find myself wanting to compile a SuperH kernel and
not having access to the school labs over a long holiday. I am aware
of that and will plan accordingly.

Mostly, I was trying to dig up the AMD-V support and something in the
thread pointed me in the right direction.

I'm not planning on getting more than AMD-V in a portable machine. Don't
want to carry a boat battery with me. :-p

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.





Re: Building computer

2013-10-01 Thread Charles Kroeger
On Tue, 01 Oct 2013 06:40:01 +0200
Doug dmcgarr...@optonline.net wrote:

 it's nothing
 like the prices shown unloaded, above.

.21 cents is nothing like .22 cents, who knew.

-- 
CK





Re: Building computer

2013-10-01 Thread Charles Kroeger
On Tue, 01 Oct 2013 04:40:01 +0200
Stan Hoeppner s...@hardwarefreak.com wrote:

 "People convinced against their will are of the same opinion still."


Stan, you know what Jesus said: cast your pearls before swine and they will
turn on you and tear you to pieces.

There are opinions and facts. Many now think their opinions are facts. Too
many, I think.

-- 
CK





Re: Building computer

2013-10-01 Thread Stan Hoeppner
On 10/1/2013 12:29 AM, Rhiamom wrote:
 
 
 Sent from my iPad
 
 On Sep 30, 2013, at 10:33 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
...
 It's quite funny to see someone of your knowledge level tell me I'm
 wrong by quoting the cardboard box as your evidence, while I'm
 demonstrating how the transistors and everything else work to get to a
 realistic set of requirements...
 
 This is the crux of the matter. Your requirements are not realistic for how 

Actually they were, up to the point you finally told us what screen
resolution you use.  That changes things quite a bit, or I should say
changed one thing dramatically.

I recommended a fast dual core CPU because more cores will be wasted.
The i3-4340 3.6GHz Haswell would have been as fast as the quad core you
bought, and saved ~$40-50.  4GB RAM, more than 4GB is wasted, but as I
said previously it's cheap so buy more if you want, won't hurt.  An SSD,
and APU graphics.  The only change in that recommendation, now that I
know your screen resolution, is shifting to a very fast high bandwidth
discrete card.  2560x1440 is a pretty insanely high gaming res if you
want high frame rates and smooth rendering at high detail.  Most hard
core gamers wouldn't touch 2560x1440 without SLI/Xfire.

If going w/a single reasonably priced card you're going to want/need a
model with a 384 bit bus.  Extremely high resolutions require extremely
high memory bandwidth.  The 384bit nVidia models are all above $600.
The AMD 7950s can be had in the low $200s, and the 7970s in the low
$300s.  Both are 384bit.

 I want to use my computer. You may be able to happily exist on your minimal
 memory, ruthlessly eliminating background processes and OS features. I do 
 not choose to do that. Your expert knowledge is worthless to me, because it
 requires me to alter the basic way I use my computer. In fact, it is worse 
 than 
 useless, because some poor sap might follow your advice and then wonder
 why they have performance issues with their brand new computer.

No, it doesn't change the way you use your computer.  Because the specs I
gave actually match how you currently use your computer.  You simply
don't know it, because you're not using the tools at your disposal which
inform you of what system resources you're using.

Run top, install Munin, etc, and look at the percentage of each CPU core
that is used, and how much memory is used by your applications.  You'll
be very surprised.  Then look at the GPU driver control panel while
running WOW and see how much of the video RAM is in use.  At 2560x1440
it may be pretty high.
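
In top, the interactive '1' key toggles the summary into a line per
core, which makes the idle cores obvious:

~$ top    # then press '1' for the per-core CPU rows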

The 7950/7970 both sport 3GB of VRAM so you shouldn't fall short there.

 It doesn't matter as you already bought your system.  But I find it
 interesting that you will be running integrated graphics for the time
 being, after you stated this is wholly inadequate.

 I also find it interesting that not once did you mention that you may
 try your old 6970 in the new box, before plunking down unnecessary cash
 on yet another high end video card.
 
 Yes, I will be running the integrated graphics for a few weeks while I adapt
 to the new box. It is only temporary.
 
 The 6970 is in my iMac, and will remain there. Note that even with 2 gig of 
 dedicated video memory I am not able to play WoW on all high settings with 
 the 6970. 

Again, that's not because there's not enough GPU memory, it's because
the DRAM bus isn't fast enough, or the chip itself isn't fast enough, or
both, for that insanely high resolution.

-- 
Stan





Re: Building computer

2013-10-01 Thread David Guntner
http://lists.alioth.debian.org/mailman/listinfo/d-community-offtopic

Stan Hoeppner grabbed a keyboard and wrote:
[snip]


Re: Building computer

2013-10-01 Thread Doug
On 10/01/2013 10:57 AM, Charles Kroeger wrote:
 On Tue, 01 Oct 2013 06:40:01 +0200
 Doug dmcgarr...@optonline.net wrote:
 
 it's nothing
 like the prices shown unloaded, above.
 
 .21 cents is nothing like .22 cents, who knew.
 
The unloaded prices above refer to the 8.6 cents and the 9.75 cents
shown on the bill.

--doug

-- 
Blessed are the peacemakers..for they shall be shot at from both sides.
--A.M.Greeley





Re: Building computer

2013-10-01 Thread Catherine Gramze
 On Sep 30, 2013, at 10:33 PM, Stan Hoeppner s...@hardwarefreak.com
wrote:

Actually they were, up to the point you finally told us what screen
resolution you use.  That changes things quite a bit, or I should say
changed one thing dramatically.

I recommended a fast dual core CPU because more cores will be wasted.
The i3-4340 3.6GHz Haswell would have been as fast as the quad core you
bought, and saved ~$40-50.  4GB RAM, more than 4GB is wasted, but as I
said previously it's cheap so buy more if you want, won't hurt.  An SSD,
and APU graphics.  The only change in that recommendation, now that I
know your screen resolution, is shifting to a very fast high bandwidth
discrete card.  2560x1440 is a pretty insanely high gaming res if you
want high frame rates and smooth rendering at high detail.  Most hard
core gamers wouldn't touch 2560x1440 without SLI/Xfire.

If going w/a single reasonably priced card you're going to want/need a
model with a 384 bit bus.  Extremely high resolutions require extremely
high memory bandwidth.  The 384bit nVidia models are all above $600.
The AMD 7950s can be had in the low $200s, and the 7970s in the low
$300s.  Both are 384bit.

You are right about the quad core not making any real difference. I have
run the activity monitor and observed how one core is at 70% and another
at 22% and two others unused. Average user applications are not yet written
to take advantage of multiple core processors.

I am glad you finally understand that my desire for a dedicated video card
with a substantial amount of dedicated video ram was not just a case of
"bigger, better, faster, more" but based on the reality of what is required
to do what I do. I can't imagine what screen resolution you thought I had in
mind. 1024x768? It hurts my old eyes to look at those.

I have also looked at my memory usage. At this very moment, not running
WoW, I have 5.22 gig being used. 4 gig would not be sufficient for me.

I do thank you for the advice pertaining to a 384 bit bus and a gig more
video ram than I was planning to get. That is advice that I will be
following.


On Tue, Oct 1, 2013 at 12:10 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
[snip]

Re: Building computer

2013-09-30 Thread Marko Randjelovic
Never waste money on a Windows OEM license. If you cannot afford a retail
license, better not to buy Windows at all. With a retail license, you could
reinstall Windows in Legacy mode, then install Linux on other partitions, and
simply choose which OS to boot from the lilo/grub boot loader.

-- 
http://mr.flossdaily.org





Re: Building computer

2013-09-30 Thread Stan Hoeppner
On 9/28/2013 9:16 PM, berenger.mo...@neutralite.org wrote:
 
 
 On 28.09.2013 22:46, Stan Hoeppner wrote:
 On 9/28/2013 8:14 AM, berenger.mo...@neutralite.org wrote:


 On 28.09.2013 13:33, Stan Hoeppner wrote:
 Hi Catherine,

 I haven't caught up with the rest of the thread but just wanted to
 address a couple points here.

 On 9/26/2013 11:12 AM, Catherine Gramze wrote:

 On Sep 26, 2013, at 1:05 AM, Stan Hoeppner s...@hardwarefreak.com
 wrote:


 What desktop applications are you using that require 8GB, let alone
 16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over buy
 DRAM, that's a personal choice.  It will likely not improve
 performance
 in any meaningful way, for WOW in Wine, or anything else.


 I will be running more than one app at a time. For example WoW, a
 browser, a Ventrilo client, and a chat client at minimum.

 4GB is more than plenty, unless WOW has turned into a complete and
 total
 memory hog.  Obviously it eats more running through Wine emulation. But
 Wine and WOW combined shouldn't eat more than 2GB, so you have 2GB left
 to the rest, which is plenty.

 I am only quickly reading the thread, and it's the 2nd time I see wine
 associated with emulation.
 As the name says, WINE Is Not An Emulator, it does not emulate a
 computer, it does not emulate the windows' kernel, it emulates nothing.

 snip

 Whether you call it an emulator, translator, simulator, or Santa Claus,
 it 'provides' the Windows APIs to the application, the DLLs, etc.
 Providing this does require additional memory.  It's not a large amount
 by today's standards, but it is non-negligible.  I made that case above
 and previously in the thread.
 
 So, you do think that an emulator is the same as a dynamic library? If
 yes, well... why not. But then, since all dynamic libraries would be
 emulators by that definition, do not use that word to prove that they'll
 consume more resources, whether CPU or memory.
 
 So I'm unclear as to why you picked my reply for your rebuttal, given
 we're on the same page.
 
 The reason I replied is that an emulator emulates a complete system,
 and this has a huge overhead. WINE, as a dynamic library, could
 theoretically ( I say theoretically because I did not run any tests,
 I'll be honest on that point; plus, it's impossible to have strictly the
 same costs ) have the same overhead as Windows' API. In practice, it
 will have a small CPU overhead, but to claim it's not a small one, one
 should give some valgrind analysis.
 
 As for replying to your post and not another one, it was because it
 was the second one saying that which I had read on the same thread at
 the moment I replied :)
 It was not personal.
 
 The problem with 3D game performance under Wine
 
 I will not say it does not cost more than running on Windows. I have no
 proof. But it is not related to it being an emulator; it can only be
 related to it being a worse implementation, or one with more layers.
 
 is not memory
 consumption, but the CPU overhead,
 
 CPU for 3D stuff? You might be right and I am probably wrong, but
 couldn't it be because Linux's 3D drivers are not as good as Windows'
 ones? This is a real question, not a troll, and the reasoning behind
 that opinion of mine is quite easy to understand, and so probably very
 simplistic: video games mostly target Windows users, and so more money
 and time were spent on enhancements... on Windows' side.

This is absolutely a valid point.  And it surely depends on which GPU
one uses, whether they use an open source reverse engineered 3D driver,
or a proprietary vendor binary only driver.

Much has been written in the technical press regarding the significant
resources both AMD and nVidia expend optimizing their MS Windows drivers
for specific games.  And in fact, in nVidia's case, the driver control
panel allows the end user to optimize driver parameters on a per game
basis.  Some of these optimizations reduce the load on the host CPU,
some reduce the load on the GPU's shaders, some reduce the load on the
GPU memory subsystem, etc.  So there's no question a given game can run
faster on Windows than Linux, simply due to this level of optimization.

 Well, again, I admit, I have no benchmark to prove my words.
 Of course, on a more technical point, I can agree that one more layer
 for OpenGL related stuff might have a cost. But, that cost might also be
 removed at compile time.

Recall the optimizations above?  One of the tricks the optimized Windows
drivers often perform is replacing an expensive inbound GL operation
with a less expensive one that may sacrifice detail for speed, or
replace a complex operation with a series of less complex calls that
execute more quickly on the host CPU or in a particular GPU shader.
They can do this because they heavily profile the execution of each
Windows game, or at least a great many of them.  These optimizations, or
shortcuts, are not present in the Linux binary blob closed drivers, and
certainly nothing 

Re: Building computer

2013-09-30 Thread Catherine Gramze

On Sep 30, 2013, at 5:00 PM, Stan Hoeppner s...@hardwarefreak.com wrote:

 No, no.  Context is important here.  The OP stated she needed more than
 4GB of RAM.  I stated that WOW on Wine shouldn't even require 2GB, but
 for argument's sake, if it did need 2GB, then all of her other stuff
 would easily fit in the remaining 2GB.
 
 I stated previously that I've never played WOW, nor WOW on Wine.  So I
 can't just fire it up and look at top to see what it uses.  I had some
 friends that were playing WOW back when it first came out, almost 10
 years ago.  They were playing on Win XP machines with 256MB RAM and
 GeForce 4 MX cards with 32MB VRAM.  Obviously the game has evolved since
 then, but no game increases in processing complexity/load by a factor of
 10x in its lifetime.

And you would be WRONG, as I stated previously. Bear in mind you were wanting 
me 
to get 4 gig of ram, *of which 1.8 would be shared with the integrated video as 
video
ram*. That leaves me with barely enough ram to meet the minimum spec with no 
other apps running, and leaves me well below the recommended amount of ram, 
even with no other apps running. I clearly indicated I would be running a 
browser, 
a chat client, and a ventrilo client as well as WoW all at the same time, and 
that the 
graphics settings the minimum specs allow are unacceptable. You may be a 
hardware
whiz, but you are no gamer and seem unable to comprehend that gaming is the 
most 
hardware demanding use of a computer.

From the Blizzard website:  
https://us.battle.net/support/en/article/world-of-warcraft-system-requirements

Windows
Minimum Requirements
Operating System    Windows® XP/Windows Vista®/Windows® 7/Windows® 8 
(Updated with the latest Service Packs)
Processor   Intel® Pentium® D or AMD Athlon™ 64 X2  
Video   NVIDIA® GeForce® 6800 or
ATI™ Radeon™ X1600 Pro (256 MB) 
Memory  2 GB RAM (1 GB Windows® XP) 


Recommended Specifications
Operating System    Windows® 7/ Windows® 8 64-bit with latest service pack
Processor   Intel® Core 2 Duo 2.2 GHz, AMD Athlon™ 64 X2 2.6GHz or better
Video   NVIDIA GeForce 8800 GT, ATI™ Radeon HD 4830 (512 MB) or better
Memory 4 GB RAM

Updated: Sep 12, 2013 


signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: Building a computer for compiling and CPU emulation (Re: Building computer)

2013-09-30 Thread Stan Hoeppner
On 9/29/2013 6:01 AM, Joel Rees wrote:
...
 http://en.wikipedia.org/wiki/List_of_AMD_mobile_microprocessors
 
 tells me that AMD A4-1250, AMD A6-1450, and E2-1800 all have AMD-V.
 
 The E2-1800 has a half-Meg L2 cache, but higher speed CPU (1.7GHz).
 
 The A4-1250 has 1 Meg L2 cache but lower speed CPU (1.0 GHz).
 
 The A6-1450 has 2 Meg L2 cache to share between twice the CPUs.

The A6-1450 is probably the best choice, not taking cost into
consideration.  Here's why:

1.  Turbo core up to 1.4GHz

2.  All 3 models have 512KB L2/core, no advantage for any

3.  Temash core has better IPC than Zacate

4.  4 cores @ 1.4GHz should give better compile times than
2 cores @ 1.7GHz.  4 @ 1GHz should as well.  -j4 or higher; a rough
throughput model below illustrates this.

5.  A6-1450 is 9W chip, E2-1800 is 18W chip, longer run time
28nm vs 40nm
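
To put rough numbers on point 4, here is a crude first-order Python
model that just multiplies cores by clock ("core-GHz" is a made-up unit
for this sketch).  It ignores cache size, memory bandwidth, and the
serial portions of a build (configure and link steps), so treat it as an
optimistic upper bound on the parallel win, not a prediction:

# Crude compile-throughput comparison: cores * clock.  Ignores cache
# and memory effects and any serial build steps, so it is optimistic.

chips = {
    "E2-1800, 2 cores @ 1.7 GHz": 2 * 1.7,
    "A6-1450, 4 cores @ 1.0 GHz (base)": 4 * 1.0,
    "A6-1450, 4 cores @ 1.4 GHz (turbo)": 4 * 1.4,
}

baseline = chips["E2-1800, 2 cores @ 1.7 GHz"]
for name, core_ghz in chips.items():
    print(f"{name}: {core_ghz:.1f} core-GHz "
          f"({core_ghz / baseline:.2f}x the E2-1800)")

Even at its 1.0GHz base clock the quad edges out the dual core on raw
throughput, and turbo widens the gap, which is why -j4 or higher matters.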

The one downside is that for non-compute intensive operations, such as
normal interactive GUI apps, say PDF viewing, browser rendering, etc, it
may be considerably slower than the 1.7GHz E2-1800, due to the 700MHz
clock deficit, as the turbo core may not kick in a lot here.  And WRT
turbo core, I'm not quite sure what this means:

"Selected parts support Turbo Dock technology, that can increase CPU and
GPU frequencies when external cooling is available."

So if the unit you purchase doesn't have a variable speed fan that can
fulfill this requirement, this may mean you can't get 1.4GHz turbo mode.
 And I'm just guessing that devices of this class may not include forced
air cooling.  Sorry I don't have all the answers here, but maybe this
helps get you a bit closer.

One thing I can assure you of is that for your stated use case, IIUI
correctly, all of these CPUs are very likely woefully inadequate for the
task.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/5249f491.4090...@hardwarefreak.com



Re: Building computer

2013-09-30 Thread Charles Kroeger
On Sun, 29 Sep 2013 00:00:03 +0200
Ralf Mardorf ralf.mard...@alice-dsl.net wrote:

 in
 Germany in 2012 was around 0.26 €/kWh. (0.26 EUR = 0.351604 USD)

In Western New York last bill was circa 0.21$/KWh

In West Texas for September circa 0.14 $/KWh

1.00 EUR = 1.35229 USD  Mid-market rates: 2013-10-01 00:12 UTC

What's it cost elsewhere?

-- 
CK 


--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/bauig4fp3i...@mid.individual.net



Re: Building computer

2013-09-30 Thread Stan Hoeppner
On 9/30/2013 4:32 PM, Catherine Gramze wrote:
 
 On Sep 30, 2013, at 5:00 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 
 No, no.  Context is important here.  The OP stated she needed more than
 4GB of RAM.  I stated that WOW on Wine shouldn't even require 2GB, but
 for argument's sake, if it did need 2GB, then all of her other stuff
 would easily fit in the remaining 2GB.

 I stated previously that I've never played WOW, nor WOW on Wine.  So I
 can't just fire it up and look at top to see what it uses.  I had some
 friends that were playing WOW back when it first came out, almost 10
 years ago.  They were playing on Win XP machines with 256MB RAM and
 GeForce 4 MX cards with 32MB VRAM.  Obviously the game has evolved since
 then, but no game increases in processing complexity/load by a factor of
 10x in its lifetime.
 
 And you would be WRONG, as I stated previously. 

Catherine, saying this twice, in CAPS this time, isn't going to make the
statement true.

You simply do not have the technical knowledge and understanding to
speak intelligently on this subject.  Quoting requirements from the side
of the box as you do below demonstrates this with aplomb.

Despite my efforts to educate you, you are unable or unwilling to be
educated.

 Bear in mind you were wanting me 
 to get 4 gig of ram, *of which 1.8 would be shared with the integrated video 
 as video
 ram*. 

Here is a prime example.  Where did you arrive at 1.8GB of shared RAM?
This is silly.

 That leaves me with barely enough ram to meet the minimum spec with no 
 other apps running, and leaves me well below the recommended amount of ram, 
 even with no other apps running. I clearly indicated I would be running a 
 browser, 
 a chat client, and a ventrilo client as well as WoW all at the same time, and 
 that the 
 graphics settings the minimum specs allow are unacceptable. 

You didn't read my post.  I explained all of this in detail.

 You may be a hardware
 whiz, but you are no gamer and seem unable to comprehend that gaming is the 
 most 
 hardware demanding use of a computer.

This is funny.  Clearly you didn't read my reply that covered this.  Or
it was simply too far over your head, who knows.  Please read it.  If
that is not sufficient, do I need to show you a gamer's badge or
something?  ROFL.

This discussion has become pathetic...

 From the Blizzard website:  
 https://us.battle.net/support/en/article/world-of-warcraft-system-requirements

[snipped pasted requirements]

The problem here Catherine is that my technical understanding of the
subject matter, including hardware, operating systems, games, the 3D
rendering pipeline, etc, etc is ~75,000 on a 100,000 scale.  Your
understanding is maybe 100 on this scale.  There's nothing wrong with
that.  There is something wrong with the fact that you seem unwilling to
learn and apply the obtained knowledge.

When you read the box requirements you fail to realize they are not
strictly for the game code.  Those requirements take into account that
most people have the Windows Indexing service and other performance
killers turned on all the time, run real time A/V software and don't
know how to turn it off when running a game, likewise for automatic
updates for Windows, SUN Java, the various dozen Adobe products, and on
and on and on.

For people such as myself, we know how to tweak and tune the system to
dramatically lower the box requirements because we strip out or disable
all of the non essential junk that is preinstalled by the PC vendor, or
during a Windows installation on a DIY machine, etc, etc.

Note the box GPU DRAM capacity requirements don't state screen
resolution.  There is a massive difference in VRAM requirements, both
frame and texture buffer space, between say 1366x768 and 1920x1080.  In
this entire thread I don't recall you ever stating your screen
resolution.  It has a direct influence on the amount of GPU horsepower
and GPU VRAM you actually need to achieve a desired frame rate.
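
As a concrete illustration of the frame-buffer side of that difference,
here is a small Python sketch.  The buffer count is an assumption
(front, back, and a 32-bit depth buffer); actual allocations vary by
driver and settings:

# Screen-sized buffer footprint at two resolutions.  Assumes three
# 32-bit buffers (front + back + depth); real drivers vary.

def buffers_mib(width, height, buffers=3, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * buffers / 2**20

for w, h in [(1366, 768), (1920, 1080)]:
    print(f"{w}x{h}: ~{buffers_mib(w, h):.0f} MiB in screen-sized buffers")

The raw buffer cost is modest on a modern card; the bigger pressure at
higher resolution usually comes from anti-aliasing, which multiplies the
color/depth sample count, and from the larger textures people enable
alongside it.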

The sole reason for my participation in this thread has been to educate
you with expert level information, in order to save you money and still
achieve your goals.

Instead of digesting the information I've given you, using it to your
advantage, and saying thank you, all you have done is argue with the
expert factual information I've presented.

It's quite funny to see someone of your knowledge level tell me I'm
wrong by quoting the cardboard box as your evidence, while I'm
demonstrating how the transistors and everything else work to get to a
realistic set of requirements...

A favorite saying of a dear friend of mine is very applicable here:

People convinced against their will are of the same opinion still.

You are unwilling to comprehend, because you're unable to.

It doesn't matter as you already bought your system.  But I find it
interesting that you will be running integrated graphics for the time
being, after you stated this is wholly inadequate.

I also find it interesting that not once 

Re: Building computer

2013-09-30 Thread Doug
On 09/30/2013 08:15 PM, Charles Kroeger wrote:
 On Sun, 29 Sep 2013 00:00:03 +0200
 Ralf Mardorf ralf.mard...@alice-dsl.net wrote:
 
 in
 Germany in 2012 was around 0.26 €/kWh. (0.26 EUR = 0.351604 USD)
 
 In Western New York last bill was circa 0.21$/KWh
 
 In West Texas for September circa 0.14 $/KWh
 
 1.00 EUR = 1.35229 USD  Mid-market rates: 2013-10-01 00:12 UTC
 
 What's it cost elsewhere?
 
You have to be careful about this. Here's my last electric bill 
from Long Island Power Authority, for one month:

DELIVERY AND SYSTEM CHARGES
Basic service: 31 days @ $.3600 $11.16
258KWH @ $.0587  22.11
1256KWH @ $.0975122.46
Subtotal:   155.73

POWER SUPPLY CHARGES
1514KWH @ $ .094136 142.52

EFFICIENCY & RENEWABLE CHARGE
1514KWH @ $ .006246   9.46

OTHER CHARGES
NY State Assessment   3.75
Revenue-Based PILOTS  6.52
Suffolk Property Tax Adjustment   6.35
Sales Tax @ 2.5%  8.11
Subtotal:24.73

Total Charges:  $   332.44

Now if you take $332.44 and divide it by 1514 KWH, the 
price is now 22 cents per KWH. 
I guess that's not bad by German standards, but it's nothing
like the prices shown unloaded, above.
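
Doug's point generalizes: the only honest rate is the whole bill divided
by the energy used.  In Python, with his numbers:

# Effective (loaded) electricity rate vs. the headline energy charges.

total_bill_usd = 332.44   # everything on the bill, taxes and fees included
kwh_used = 1514

print(f"effective rate: ${total_bill_usd / kwh_used:.3f}/kWh")  # ~$0.220

for label, rate in [("delivery tier 1", 0.0587),
                    ("delivery tier 2", 0.0975),
                    ("power supply", 0.094136)]:
    print(f"headline {label}: ${rate:.4f}/kWh")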

As shown, this is at Suffolk County, Long Island, New York,
for the month of August, 2013.

--doug

-- 
Blessed are the peacemakers..for they shall be shot at from both sides. 
--A.M.Greeley


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/524a51b4.5020...@optonline.net



Re: Building computer

2013-09-30 Thread Rhiamom


Sent from my iPad

 On Sep 30, 2013, at 10:33 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 
 Here is a prime example.  Where did you arrive at 1.8GB of shared RAM.
 This is silly.

Silly? I admit to being off by .1 gig; it is actually only 1.7 gig as confirmed 
by the specs for the HD4600 on the Intel website. All integrated graphics 
use shared memory and that is the primary reason gamers detest them.  
But why do you say this number is silly when it is clearly the amount of 
memory the graphics chipset will attempt to use while playing a game with 
so much 3d rendering and special lighting effects? Sure you can set it for 
less, but then you can't play the game at a good 60fps on even the lowest 
settings. As I have previously specified, the lowest settings are not 
acceptable, 
nor are the medium settings. I want high or ultra settings only.
 
 You didn't read my post.  I explained all of this in detail.

No, you did not. You blustered about tweaking the system and idiots who keep 
cruft and malware on their computers. You complained about background 
processes and running more than one app, things that I fully intend to do, 
as if it was a criminal activity. You made absolutely no explanation of how 
to make the computer able to run the game on less than minimum specs. 
You gave no indication of how those minimum specs were to be rendered 
insignificant. You suggested nothing that could effect the system performance 
substantially enough to account for a deficit of at least 2 gig of ram, and 
actually 
more because I don't like medium settings. I have observed people trying to 
play a game on a computer with less than minimum specs. It sometimes works, 
if 10fps and jerky screen refreshes are the way you want to play. Other times 
the game refuses to even start.
 
 This is funny.  Clearly you didn't read my reply that covered this.  Or
 it was simply too far over your head, who knows.  Please read it.  If
 that is not sufficient, do I need to show you a gamer's badge or
 something.  ROFL.
 
 This discussion has become pathetic...

What is pathetic about pointing out that you are sadly ignorant about the 
actual requirements for a gaming computer? You don't play WoW. You 
obviously don't play any other graphically intense game, either. It reminds 
me of the difference between theory and practice. In theory, there is no 
difference between theory and practice. In practice, there is. You know so 
much about this topic you can't pull your head out of the data long enough 
to look at a real person playing a real game on an ordinary computer and 
see what works and what doesn't. If I had a wimpy underpowered graphics 
card and was trying to get it to play a game for which it did not meet the specs
I would be interested in your advice. But I am building/buying a new computer; 
why should I deliberately choose hardware that isn't expected to do what I know 
I want to do? That makes no sense when I can get hardware that is expected 
to do what I want to do.
 
 The problem here Catherine is that my technical understanding of the
 subject matter, including hardware, operating systems, games, the 3D
 rendering pipeline, etc, etc is ~75,000 on a 100,000 scale.  Your
 understanding is maybe 100 on this scale.  There's nothing wrong with
 that.  There is something wrong with the fact that you seem unwilling to
 learn and apply the obtained knowledge.

The problem here is that you think I ought to value your expert advice on 
getting less ram than I obviously need when your advice is based on the 
way you think a computer ought to be set up and used rather than the way
I intend to set up and use it. 
 
 When you read the box requirements you fail to realize they are not
 strictly for the game code.  Those requirements take into account that
 most people have the Windows Indexing service and other performance
 killers turned on all the time, run real time A/V software and don't
 know how to turn it off when running a game, likewise for automatic
 updates for Windows, SUN Java, the various dozen Adobe products, and one
 and on and on.

Oh, those dastardly folk, using their computers the way they want instead 
of the way you think they should! How dare they! /sarcasm
 
 Note the box GPU DRAM capacity requirements don't state screen
 resolution.  There is a massive difference in VRAM requirements, both
 frame and texture buffer space, between say 1366x768 and 1920x1080.  In
 this entire thread I don't recall you ever stating your screen
 resolution.  It has a direct influence on the amount of GPU horsepower
 and GPU VRAM you actually need to achieve a desired frame rate.

Oh, well. This is logical. I run 2560x1440 on my Mac, but will be dropping to 
1920x1080 on my Debian box until I get a new monitor later.
 
 The sole reason for my participation in this thread has been to educate
 you with expert level information, in order to save you money and still
 achieve your goals.
 
 Instead of digesting the information 

Re: Building computer - power supplies

2013-09-29 Thread Joel Rees
On Sun, Sep 29, 2013 at 5:18 AM, Doug dmcgarr...@optonline.net wrote:
 On 09/28/2013 03:23 PM, Stan Hoeppner wrote:
 On 9/27/2013 6:37 PM, Joel Rees wrote:
 On Fri, Sep 27, 2013 at 9:36 PM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 A point I forgot to make.  This is something everyone should know.

 Subject:  The marketing myth of multiple +12V rails
 [...]

 What I want to know is why Intel CPUs still need the +12V.

 They don't now and they never did.  The 8088 through 80486 and the first

 /snip/


 Installing two voltage regulators next to the CPU socket and using
 standard ~22 gauge copper wires from the 12V rail of the PSU solves the
 problem cheaply.  The 12V rail was chosen because 3.6x less current is
 required vs using the 3.3V rail as was used previously, which means much
 smaller wires are needed.
 /snip/

 Do I understand correctly that there are two switching power supply
 chips at the input to the CPU to produce 3.3 Volts? Obviously, a linear
 regulator cannot produce and gain in current. Iin = Iout for linear
 regulators.

I'm confused. Did you mean can't produce a gain in current or power?

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/caar43iom3fhcduznccqn6o9ri5ycxy1cyrgd-h53w3vhm2t...@mail.gmail.com



Re: Building computer - power supplies

2013-09-29 Thread Ralf Mardorf
On Sun, 2013-09-29 at 19:02 +0900, Joel Rees wrote:
 I'm confused. Did you mean can't produce a gain in current or power?

Assuming there were a linear voltage regulator able to provide the
needed Amperes, how large would the heat sink have to be? The voltage
isn't an issue. By a quick search I found a max. of 10 Ampere. My
laboratory power supply provides only 2 Ampere and its heat sink is
already 10 cm x 10 cm x 5 cm. The ones mentioned in The Art Of
Electronics (Horowitz/Hill, original 1989, Cambridge University Press;
here the German translation, 3rd edition 1998) provide less than 10 Ampere.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380451634.12306.11.camel@archlinux



Building a computer for compiling and CPU emulation (Re: Building computer)

2013-09-29 Thread Joel Rees
On Sun, Sep 29, 2013 at 1:59 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/27/2013 6:03 AM, Joel Rees wrote:
 On Fri, Sep 27, 2013 at 10:22 AM, Tom H tomh0...@gmail.com wrote:
 On Thu, Sep 26, 2013 at 7:09 AM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 On 9/26/2013 5:45 AM, Joel Rees wrote:
 On Thu, Sep 26, 2013 at 2:05 PM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:

 Stan, joking aside, are there any AMD processors you would recommend
 for doing kernel and driver level Android/ARM development in a VM?

 How likely would I be to find such a processor in a netbook or laptop?

 How likely is one to find a kernel developer doing any real work on a
 laptop?  Throwing a netbook into the question is just silly.

 The reason I ask, Stan, is that I'm preparing to take a class where
 I'll be studying Super H assembly language programming, writing device
 drivers for an embedded SH3 running a Linux kernel, and such.

 I read everything below but my reply will be brief, so going up top with
 it.  Performance options with portables is always limited due to power.
  A few quick points:

 1.  If you're going to be compiling anything, cache size trumps clock
 speed.  If a smaller-cache but roughly equivalent CPU clocks more than
 ~30% higher than the big-cache CPU, it becomes a horse race.

 2.  Do not use an in-order CPU, such as the Atom, regardless of clock
 speed.  The lack of branch prediction, rename registers, etc, will
 hamstring a compiler.

 3.  If you're running a hypervisor, low level support such as AMD-V or
 Intel VT will help.

Okay, ...

 I'd like to prepare a portable emulation environment for the class,
 since I know I'll be wanting to do homework when the lab is not
 available. Renasas points to an SH4 emulator by Kawasaki-san that runs
 under QEMU, but at times I will likely be debugging the emulator as
 well as my own code. And I'll be doing a lot of compiles in the
 emulated system.

 I'll be biking to the class some days and walking other days, so I'd
 like a 12 inch screen form factor to fit in my bag and not break my
 back, which pretty much says netbook. Many netbooks have output for
 external monitors, which will help at home, at least.

 At this point, I've been kind of looking at Acer's Aspire (heh) V5-122
 with an AMD A4-1250 and 4G RAM, or a similar V5 with an AMD A6-1450.
 I'm trying to figure out whether they support QEMU. If so, I'm
 thinking the 4-core A6 will be worth the extra 5000 yen, a pair of
 core for the host OS and I/O, and another core for the emulation
 environment makes three.

 I'm also looking at an HP dm1-4400, but the processor there is an AMD
 E2-1800, which seems to be last year's tech and a little heavier on
 battery use. It's only two cores, and only 2G RAM in the version at
 the store where I was looking at it (Sofmap in Umeda -- Osaka), but
 maybe HP is sturdier than Acer. I need to look at that.

http://en.wikipedia.org/wiki/List_of_AMD_mobile_microprocessors

tells me that AMD A4-1250, AMD A6-1450, and E2-1800 all have AMD-V.

The E2-1800 has a half-Meg L2 cache, but higher speed CPU (1.7GHz).

The A4-1250 has 1 Meg L2 cache but lower speed CPU (1.0 GHz).

The A6-1450 has 2 Meg L2 cache to share between twice the CPUs.

[...]

So now my problem is scraping up 55,000 yen +/-.

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/caar43invbf8mexjrkeabjhcwnk6jsfo-sswp0ypreqsnwmw...@mail.gmail.com



RE: kernel compilation (was ... Re: Building computer)

2013-09-29 Thread Beco
On 29 September 2013 02:41, Chris Bannister cbannis...@slingshot.co.nz wrote:

 On Thu, Sep 26, 2013 at 11:57:43PM -0300, Beco wrote:
  On 26 September 2013 22:22, Tom H tomh0...@gmail.com wrote:
 
 
   (I've compiled a kernel on a netbook; you'd better have a few hours to
   spare...)
  
  
  
  
  Questions for people who compile kernel and their machines:

 You are better off starting a new thread. You could create a subtopic by
 changing the subject.

 Now unfortunately, information about compiling a kernel is buried in
 a thread about building a computer. :(




From the thread Building computer, I've asked:

 On Thu, 2013-09-26 at 23:57 -0300, Beco wrote:
  How long a considered fast kernel compilation would last? I'd like
  to have a clue. And in what kind of computer (processor / RAM /
  anything else relevant)?

And I got some good answers that are better joined to this thread:

Just to dig them up:

Answer from Ralf:
https://lists.debian.org/debian-user/2013/09/msg01204.html

From Stan:
https://lists.debian.org/debian-user/2013/09/msg01212.html

From Johnatan:
https://lists.debian.org/debian-user/2013/09/msg01213.html

From Tom:
https://lists.debian.org/debian-user/2013/09/msg01214.html

From Stephen:
https://lists.debian.org/debian-user/2013/09/msg01245.html

From Stan:
https://lists.debian.org/debian-user/2013/09/msg01285.html


Also, the thread

Building a computer for compiling and CPU emulation (Re: Building computer)

https://lists.debian.org/debian-user/2013/09/msg01317.html

Started by Joel Rees now looks like a duplicate. Please, join here.

Thanks.
Beco.





--
Dr Beco
A.I. researcher

Sometimes the heart sees what is invisible to the eye. (H. Jackson Brown Jr.)


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/CALuYw2zBs+KqpPAwf5T-FzgzjiZq8Vnrq8aMgBCohJUM1i=2...@mail.gmail.com



Re: Building computer - power supplies

2013-09-29 Thread Doug
On 09/29/2013 06:02 AM, Joel Rees wrote:
 On Sun, Sep 29, 2013 at 5:18 AM, Doug dmcgarr...@optonline.net wrote:
/snip/

 Do I understand correctly that there are two switching power supply
 chips at the input to the CPU to produce 3.3 Volts? Obviously, a linear
 regulator cannot produce and gain in current. Iin = Iout for linear
 regulators.
 
 I'm confused. Did you mean can't produce a gain in current or power?
 
 --
 Joel Rees

There's a typo there. The word and should read any.
A linear regulator can only output as much current as goes in to it--
it functions as a variable resistance with a regulated voltage output.
A switching regulator takes an available amount of _power_, that is,
voltage times current, and converts it into another voltage times
current, of the _same_ power, so you can see that 12V * 1A input
can be translated into 2.4V * 5A = 12W either way. (That assumes
100% efficiency, which is impossible, but is used for illustration.)
So a linear regulator is a lossy device; a switching regulator
_approaches_ lossless translation of power from one voltage to
another.
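
Doug's 12 W example in Python form, a minimal sketch; the 85% switcher
efficiency is an assumed illustrative figure, not a datasheet value:

# Linear vs. switching regulator feeding 2.4 V * 5 A = 12 W from 12 V.

v_in, v_out, i_out = 12.0, 2.4, 5.0
p_out = v_out * i_out                  # 12 W delivered to the load

# Linear: input current equals output current; excess voltage is heat.
p_in_linear = v_in * i_out             # 60 W drawn from the 12 V rail
print(f"linear:    draws {p_in_linear:.0f} W, dissipates "
      f"{p_in_linear - p_out:.0f} W as heat "
      f"({p_out / p_in_linear:.0%} efficient)")

# Switcher: power, not current, is conserved; assume 85% efficiency.
eff = 0.85                             # illustrative, not a datasheet value
p_in_switch = p_out / eff
print(f"switching: draws {p_in_switch:.1f} W, dissipates "
      f"{p_in_switch - p_out:.1f} W ({eff:.0%} efficient)")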
This discussion assumes DC voltage, of course. For AC, there's
a transformer. Modern power supplies do not use the transformer at the
line frequency, because it would take far too much iron, and be large
and heavy. They rectify the line voltage into DC, and then use a
switching regulator, operating at frequencies in the Kilohertz range,
which does have a transformer, but it is small and light, due to the
frequency of the switched AC voltage. The output of this high frequency
transformer is again rectified to produce the DC needed in the
device it supplies.
For switching regulators that do not require isolation, as the
line-frequency supplies do, the transformer may be replaced by an
inductor.
Hope this clears the air--doug

-- 
Blessed are the peacemakers..for they shall be shot at from both sides.
--A.M.Greeley


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/5248490e.2070...@optonline.net



Re: Building computer - power supplies

2013-09-29 Thread Doug
On 09/29/2013 06:47 AM, Ralf Mardorf wrote:
 On Sun, 2013-09-29 at 19:02 +0900, Joel Rees wrote:
 I'm confused. Did you mean can't produce a gain in current or power?
 
 Assuming there were a linear voltage regulator able to provide the
 needed Amperes, how large would the heat sink have to be? The voltage
 isn't an issue. By a quick search I found a max. of 10 Ampere. My
 laboratory power supply provides only 2 Ampere and its heat sink is
 already 10 cm x 10 cm x 5 cm. The ones mentioned in The Art Of
 Electronics (Horowitz/Hill, original 1989, Cambridge University Press;
 here the German translation, 3rd edition 1998) provide less than 10 Ampere.
 
 
The size of heatsinks is a function of the heat dissipated in the active
device, whether it be a regulator, an amplifier, or a rectifier, or even
a power resistor. It is then a function of the heat transfer
characteristics of the heat sink. That is dependent on the size of the
sink, but also on the number and size of the radiating fins, if any, and
the airflow around the sink, whether it is convection, or aided
by a fan. All of this is calculated by the maximum desired (or
allowable) temperature of the active device, the thermal resistance of
the device to the heatsink, and the thermal resistance of the heatsink
to the ambient air, which is usually defined as being 25 degrees C.
All of this is not done by seat-of-the-pants design, but by published
specifications of all the devices--the active device, the insulator, if
any, the heatsink, and the airflow. The calculations are
straightforward, but will not be related here.
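
The calculation Doug alludes to is essentially a thermal Ohm's law.
Here is a minimal worked sketch in Python; every thermal resistance
below is an assumed, typical-looking number, not a datasheet figure:

# Junction temperature from the thermal-resistance chain:
#   Tj = Ta + P * (theta_jc + theta_cs + theta_sa)
# All theta values are assumptions for illustration only.

p_watts = 20.0     # heat dissipated in the device
t_ambient = 25.0   # degrees C, the usual reference
theta_jc = 1.5     # junction-to-case, C/W
theta_cs = 0.5     # case-to-sink (insulator/grease), C/W
theta_sa = 2.0     # sink-to-ambient (the heatsink itself), C/W

t_junction = t_ambient + p_watts * (theta_jc + theta_cs + theta_sa)
print(f"junction temperature: {t_junction:.0f} C")   # 105 C here

# Or solve for the heatsink rating needed to hit a target junction temp:
tj_max = 100.0
theta_sa_needed = (tj_max - t_ambient) / p_watts - theta_jc - theta_cs
print(f"need a heatsink of <= {theta_sa_needed:.2f} C/W")  # 1.75 C/W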

--doug

-- 
Blessed are the peacemakers..for they shall be shot at from both sides.
--A.M.Greeley


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/5248616e.9030...@optonline.net



Re: Building computer - power supplies

2013-09-29 Thread Ralf Mardorf
On Sun, 2013-09-29 at 11:36 -0400, Doug wrote:
 So a linear regulator is a lossy device

law of conservation of energy ---> energy exchange --> thermal conduction

If you use it as voltage stabiliser + heater, it provides 100%
efficiency too. It's simply a question of the point of view.

;)


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380475649.1068.9.camel@archlinux



Re: Building computer - power supplies

2013-09-29 Thread Ralf Mardorf
On Sun, 2013-09-29 at 13:20 -0400, Doug wrote:
 On 09/29/2013 06:47 AM, Ralf Mardorf wrote:
  On Sun, 2013-09-29 at 19:02 +0900, Joel Rees wrote:
  I'm confused. Did you mean can't produce a gain in current or power?
  
  Assuming there were a linear voltage regulator able to provide the
  needed Amperes, how large would the heat sink have to be? The voltage
  isn't an issue. By a quick search I found a max. of 10 Ampere. My
  laboratory power supply provides only 2 Ampere and its heat sink is
  already 10 cm x 10 cm x 5 cm. The ones mentioned in The Art Of
  Electronics (Horowitz/Hill, original 1989, Cambridge University Press;
  here the German translation, 3rd edition 1998) provide less than 10 Ampere.
  
  
 The size of heatsinks is a function of the heat dissipated in the active
 device, whether it be a regulator, an amplifier, or a rectifier, or even
 a power resistor. It is then a function of the heat transfer
 characteristics of the heat sink. That is dependent on the size of the
 sink, but also on the number and size of the radiating fins, if any, and
 the airflow around the sink, whether is is convection, or aided
 by a fan. All of this is calculated by the maximum desired (or
 allowable) temperature of the active device, the thermal resistance of
 the device to the heatsink, and the thermal resistance of the heatsink
 to the ambient air, which is usually defined as being 25 degrees C.
 All of this is not done by seat-of-the-pants design, but by published
 specifications of all the devices--the active device, the insulator, if
 any, the heatsink, and the airflow. The calculations are
 straightforward, but will not be related here.

Sure, some heat sinks can be smaller than others, for the same task, but
assuming there were a linear voltage regulator providing more than 10
Ampere, the needed heat sink would be huge or a fan would be needed and,
as you already pointed out, the ampere input has to be as high as the
output.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380476370.1068.16.camel@archlinux



Re: Building computer - power supplies

2013-09-29 Thread David Guntner
From http://www.debian.org/News/weekly/2011/05/#offtopic ...

Have you ever wanted to discuss things completely unrelated to Debian or
even computers with fellow users or developers? On debian-user the
custom is to put [OT] in the subject and fire away.

Unfortunately, this can be disruptive for uninterested subscribers,
especially on a high volume mailing list like debian-user. Because of
this, with the kind support of Holger Levsen and the Debian Community
project, the d-community-offtopic list has been created.

http://lists.alioth.debian.org/mailman/listinfo/d-community-offtopic

The list is open to anyone, just like most other Debian lists, so all
you have to do is to CC and set Reply-To to the list, so that whoever is
interested will know discussion is moving to that list. Have fun and be
excellent to each other!


Hint.  Hint.  Hint.

;-)



smime.p7s
Description: S/MIME Cryptographic Signature


Re: Building computer - power supplies

2013-09-29 Thread Joel Rees
On Mon, Sep 30, 2013 at 12:36 AM, Doug dmcgarr...@optonline.net wrote:
 On 09/29/2013 06:02 AM, Joel Rees wrote:
 On Sun, Sep 29, 2013 at 5:18 AM, Doug dmcgarr...@optonline.net wrote:
 /snip/

 Do I understand correctly that there are two switching power supply
 chips at the input to the CPU to produce 3.3 Volts? Obviously, a linear
 regulator cannot produce and gain in current. Iin = Iout for linear
 regulators.

 I'm confused. Did you mean can't produce a gain in current or power?

 --
 Joel Rees

 There's a typo there. The word and should read any.

Typo, huh? Can't win for losing? ;-) (But that wasn't what was confusing me.)

 A linear regulator can only output as much current as goes in to it--
 it functions as a variable resistance witha regulated voltage output.

And I see now that you were talking about a part, not a power supply,
when you said regulator there.

We used to call power supplies that put a transformer on the wall
power and rectified, clamped, and smoothed it after the transformer a
linear power supply. (As opposed to the new-fangled switching power
supply that some engineers at the time swore was too dangerous to put
in consumer grade equipment.)

 [...]

I think, if we want to make information on power supplies more
accessible to the list members trying to spec hardware for a new Linux
box, or trying to figure out whether the power supply could be the
reason the third disk drive sometimes is not seen by the system, we'll
probably need to draw some pictures. (I'm not volunteering today.
Maybe next week.)

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/caar43indbtcxw5-_gvqw_hum2sorodstu+qcwx8nxgfd2ab...@mail.gmail.com



Re: Re: Re: Re: Building computer

2013-09-28 Thread Balamurugan

On 09/28/2013 04:04 AM, David L. Craig wrote:

On 13Sep27:2054+0530, Balamurugan wrote:

On 09/27/2013 04:08 PM, David L. Craig wrote:

Your fact is not.  I installed Debian Sid on a G500 a few
months ago and it dual-boots with Win8.  The trick is to
use the smaller alternative power button to the right of
the large power button, which ignores Legacy.  Perhaps
other Lenovo laptops are not so equipped but the G500 is.

Hi David,

 Up to last month, I have done close to 10 installations of
GNU/Linux as dual boot with Windows (XP and Windows 7). This
particular Lenovo laptop, which had Windows 8 installed in UEFI mode,
had issues installing Ubuntu. When I inserted the Ubuntu
(12.04 LTS) USB boot stick, it did not even recognize the OS.
The machine detects Ubuntu only when I turn UEFI off in favor of
Legacy mode.
Around the same time, I purchased my own laptop (Dell Vostro 2420)
which was pre-installed with Ubuntu. When I checked it, it was set to
Legacy boot by default. Also, as per the technical journals I read,
GNU/Linux distributions don't have their own UEFI signing keys. Can
you please correct me with some more details, if I am wrong?

I am at a disadvantage because I relinquished the laptop about
a month ago to be returned to Lenovo for warranty repair and
the memory is somewhat dim.  The BIOS was configured for Legacy
boot.  I enabled USB booting in the BIOS as needed and kept it
normally unenabled.  I installed Linux Mint XFCE into a hard
drive partition.  I discovered the main power button will always
boot up Win8 in UEFI mode but the smaller power buttona, designed
for the Lenovo One-Key recovery facility, brings up a boot menu
that includes the hard drive partitions and USB drives if such are
configured as bootable.  I hope this is helpful.

Hi David,

What you have said is correct. I followed the same method you did. The 
problem here is that we need to change the BIOS setting every time to 
toggle between Windows 8 and Ubuntu: Ubuntu starts in Legacy mode and 
Windows 8 starts in UEFI boot mode. I was wondering whether there is any 
procedure to dual boot Ubuntu with Windows 8 in the same UEFI boot mode 
itself, but unfortunately I haven't figured it out. Thank you for your 
details :-)



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/52467b63.8080...@gmail.com



Re: Building computer

2013-09-28 Thread Stan Hoeppner
Hi Catherine,

I haven't caught up with the rest of the thread but just wanted to
address a couple points here.

On 9/26/2013 11:12 AM, Catherine Gramze wrote:
 
 On Sep 26, 2013, at 1:05 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 

 What desktop applications are you using that require 8GB, let alone
 16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over buy
 DRAM, that's a personal choice.  It will likely not improve performance
 in any meaningful way, for WOW in Wine, or anything else.

 
 I will be running more than one app at a time. For example WoW, a browser, a 
 Ventrilo client, and a chat client at minimum.

4GB is more than plenty, unless WOW has turned into a complete and total
memory hog.  Obviously it eats more running through Wine emulation.  But
Wine and WOW combined shouldn't eat more than 2GB, so you have 2GB left
to the rest, which is plenty.

 onboard sound, Realtek ALC892

 I'm no Linux sound expert.  I don't know if stock Wheezy supports the
 892.  Maybe others can chime in.

 onboard NIC, Realtek 8111E

 The 8111 is supported, with non-free firmware, IIRC.
 
 I am no fan of non-free firmware. 

Thinking back, I believe this was a temporary issue with just the one
8110/8111 GbE firmware.  This may have been sorted out already.  Realtek
missed a deadline or something so it had to go into non-free.  All the
Realtek NICs have had free firmware for quite some time IIRC.  This one
was an anomaly.  We helped somebody fix this here a few months ago.
Hell, maybe it was a year...time flies.  Others here may remember this.

 Perhaps I need to look at different motherboards with different LAN and sound 
 capabilities.

80% or so of new retail PCs, and mobos, have Realtek ethernet and
Realtek audio on board.  It's ubiquitous.  Fully supported in upstream
Linux.  The newest chips may or may not be supported by Wheezy.  I'd
guess you won't have any problems here.

 TTBOMK, WOW doesn't require anywhere close to 2GB of VRAM for textures
 and frame buffer, even at 1920x1080.  So your choice of video card seems
 to be serious overkill.  
 
 You would be wrong. 

I'll give you the opportunity to re-evaluate that conclusion. ;)

 With my current Radeon 6970 I can run only on medium level graphics, with 
 light shafts turned off and ground clutter on low. If I put the settings any 
 higher I get serious lag in raids and in crowded environments. The official 
 minimum specs for games are always put ridiculously low; they are for running 
 the game at the lowest graphic settings with all options turned off. Nobody 
 wants to play a game that looks that bad.

The AMD spec 6970 is a powerhouse:

880MHz GPU clock
1536 shaders (stream processors)
2GB GDDR5
256 bit bus
176GB/s bandwidth

This card is massive overkill for WOW.  This was a balls to the wall
first person shooter card when introduced, and it still is.  It should
run WOW on highest settings with aplomb.

For comparison, the card I currently use is a GeForce GT240 with 96
shaders, 550MHz GPU clock, 1GB DDR3, 128bit interface, 22.4GB/s.  It
runs the Unigine Heaven benchmark at highest settings, 1440x900 full
screen, at over 15 FPS average, 22 FPS peak.  The polygon detail of this
benchmark is at least 100 times greater than what you'll see in WOW.
This card is more than plenty for running WOW smoothly in Windows.  Your
6970 should run Heaven at this resolution at 80+ FPS.  Download it and see:

http://unigine.com/products/heaven/

Run the native Linux version and the Windows version in Wine, windowed
and full screen.


The card is not the cause of your problems, unless:

1.  You bought a stripped down model from a 3rd party vendor w/128bit
bus, low clocked GDDR3 VRAM, and/or under clocked GPU core.  But TTBOMK
no such castrated 6970s ever shipped.

2.  It is broken.  Is the fan working properly?  Does it kick up to full
RPM when you launch a 3D app?  If not you're losing 2/3rds of the clock,
and performance.

3.  It is a passive radiator heatpipe model and in a case with poor
airflow to the card, causing the same low clocking as #2.

Is this problem only with Linux/Wine or also running WOW in Windows?
I'll make an educated guess it's only with Wine.  At what resolution do
you run WOW?

Another thing to consider is whether you're running WOW full screen or
Windowed.  Are you running it windowed?

All consumer based GPUs/drivers tend to drop frame rate when running an
OpenGL application windowed vs full screen.  Neither the drivers nor
hardware are optimized very well for what's called 2D overlay.

The reason for this is that there are 2 separate virtual frame buffer
regions in memory, the 2D buffer and the 3D buffer.  This is what allows
you to seamlessly switch between an OpenGL application window and the
desktop, minimize windows, etc, instantaneously.  But when running
windowed, the driver and the GPU hardware must do an additional merge
pass for every real frame in the physical frame buffer, overlaying the
virtual 3D buffer atop the 2D 

Re: Building computer

2013-09-28 Thread berenger . morel



On 28.09.2013 13:33, Stan Hoeppner wrote:

Hi Catherine,

I haven't caught up with the rest of the thread but just wanted to
address a couple points here.

On 9/26/2013 11:12 AM, Catherine Gramze wrote:


On Sep 26, 2013, at 1:05 AM, Stan Hoeppner s...@hardwarefreak.com 
wrote:




What desktop applications are you using that require 8GB, let alone
16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over 
buy
DRAM, that's a personal choice.  It will likely not improve 
performance

in any meaningful way, for WOW in Wine, or anything else.



I will be running more than one app at a time. For example WoW, a 
browser, a Ventrilo client, and a chat client at minimum.


4GB is more than plenty, unless WOW has turned into a complete and 
total
memory hog.  Obviously it eats more running through Wine emulation.  
But
Wine and WOW combined shouldn't eat more than 2GB, so you have 2GB 
left

to the rest, which is plenty.


I am only quickly reading the thread, and it's the 2nd time I see 
wine associated with emulation.
As the name says, WINE Is Not An Emulator, it does not emulate a 
computer, it does not emulate the windows' kernel, it emulates nothing.


If WINE were an emulator, then GTK, Qt, GStreamer and almost everything 
on your computer would be emulators too. All of those are libraries; 
they provide functions to software and that's all.
So, the claim that running an application with WINE will take more 
memory than the same application without it needs proof. Run an OS which 
does not depend on Windows' kernel, Debian for example, with wine and 
without your desktop environment and other Linux-only applications. 
Instead, replace them with Windows applications, and then, if you 
actually measure a difference in memory compared to the same application 
set on Windows, then ok.


What WINE does is translate an application's requests when they have a 
different name, and implement them when they do not exist on our 
systems.
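
As a loose analogy in Python (hypothetical names, and emphatically not
how WINE is actually implemented; it only miniaturizes the "translate
when a counterpart exists, implement one when it does not" idea):

# Toy 'translation layer': map Windows-style calls either to an
# existing native function or to a local implementation.  A loose
# analogy only; WINE's real machinery is vastly more involved.
import os
import time

def _sleep_ms(ms):
    # No POSIX call with this exact signature, so provide one locally.
    time.sleep(ms / 1000.0)

WIN_API = {
    "GetCurrentProcessId": os.getpid,   # translated: native call exists
    "Sleep": _sleep_ms,                 # implemented: provided locally
}

print(WIN_API["GetCurrentProcessId"]())  # the process id, via the shim
WIN_API["Sleep"](100)                    # pause 100 ms, via the shim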


Plus, the extra memory wine, or any software library, consumes compared 
to another one is almost nothing. What takes memory in software is data: 
graphics and sounds, of course, but also logic data; in the case of WoW, 
player lists, with their names, hp, etc. And that last kind of data is 
almost nothing compared to the graphical resources of most 3D games.
Add to this the performance difference between those systems, and you 
might even need less memory with Linux. Or more. It depends on what the 
application needs, on the options of the kernel, and on other obscure 
things that I do not understand, even though I am a programmer ( a good 
one, but not a WINE developer, however ).


For WoW, you will probably not need more memory than on Windows, since 
it is software that has been used by a lot of people for a long time, 
and so probably has good support from wine. And if you need more, it 
will not be tons of MiB. It will not be measurable against the GiB that 
WoW probably needs for graphical resources.



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/866d3396edcd37385680109ba03d2...@neutralite.org



Re: Building computer

2013-09-28 Thread Jack Malmostoso
On Tue, 24 Sep 2013 22:10:01 +0200
Catherine Gramze rhia...@gmail.com wrote:

 So, I am looking for recommendations on hardware, particularly
 motherboards, known to play nicely with Debian and boot consistently.

I hope I'm not late to the party, but here's my new setup (bought in
June) which runs Debian Sid without problems:

MoBo: Asus Z87-A iZ87, SATA600 RAID, USB3.0 
CPU Cooler: Cooler Master Processor Koeler Hyper 212 Evo
RAM: Corsair 2x4GB, DDR3, PC12800, CL9, Vengeance, LP
CPU: Intel Haswell Core i5 4570S 2.90GHz 6MB Box
DVD Writer: Samsung DVD Brander SH-224DB/BEBE 24x, SATA Bulk
HDD: 2x Western Digital Harddisk 3.5 2TB, SATA300, WD20EURS

I am running the two HDD in RAID 1 as /home, while / is on an Intel
SSD SA2M080G2GC which I have recycled.
I had trouble at the beginning because I wanted to use an OCZ SSD with
Sandforce chipset, which was just flat out refused by the motherboard.
I swapped the OCZ with the Intel from my laptop and never looked back.

The CPU runs idle at 28C and the PC is completely silent.

I hope this helps.

-- 
What is worth doing is worth the trouble of asking somebody to do.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20130928151519.4e10f94c@nostromo



Re: Building computer

2013-09-28 Thread Catherine Gramze

On Sep 28, 2013, at 9:15 AM, Jack Malmostoso fabio.rosci...@gmail.com wrote:

 I hope I'm not late to the party, but here's my new setup (bought in
 June) which runs Debian Sid without problems:


You may be too late for me, but not late for somebody else. This type of info 
is sadly hard to find for the newer motherboards.
Could you be so kind as to tell us whether the board came with default settings 
that worked, or whether you needed to make changes in the BIOS?
Can you tell us which integrated graphics the board has? The specs on the Asus 
website are oddly vague as to that one thing.

cg


signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: Building computer

2013-09-28 Thread Jack Malmostoso
On Sat, 28 Sep 2013 16:10:01 +0200
Catherine Gramze rhia...@gmail.com wrote:

 Could you
 be so kind as to tell us whether the board came with default settings
 that worked, or whether you needed to make changes in the BIOS? 

I had to deactivate the Secure Boot option in order to be able to
boot Debian Installer's USB drive, which I prepared using Unetbootin.
Other than that, it's all stock.

 Can
 you tell us which integrated graphics the board has? The specs on the
 Asus website are oddly vague as to that one thing.

The onboard graphics depend on the CPU. In my case, it's HD4600, which
is the fastest you can get with a socketed CPU.
It's supported by the Intel driver in Wheezy for most things, however
now in Sid it is fully supported. I can do light gaming with it (I play Braid
a whole lot).


-- 
I got tired of listening to the recording on the phone at the movie
theater.  So I bought the album.  I got kicked out of a theater the
other day for bringing my own food in.  I argued that the concession
stand prices were outrageous.  Besides, I hadn't had a barbecue in a
long time.  I went to the theater and the sign said adults $5 children
$2.50.  I told them I wanted 2 boys and a girl.  I once took a cab to
a drive-in movie.  The movie cost me $95.
-- Steven Wright


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20130928163148.4623dac7@nostromo



Re: Building computer

2013-09-28 Thread Catherine Gramze

On Sep 28, 2013, at 10:31 AM, Jack Malmostoso fabio.rosci...@gmail.com wrote:

 The onboard graphics depend on the CPU. In my case, it's HD4600, which
 is the fastest you can get with a socketed CPU.
 It's supported by the Intel driver in Wheezy for most things, however
 now in Sid it is used 100%. I can do light gaming with it (I play Braid
 a whole lot).

Thank you for the additional info. It sounds a whole lot like what I am getting 
in the Zareason computer I ordered. I am waiting very impatiently!

But the Braid game is intriguing me - I must try it! Got any other suggestions?


signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: Building computer

2013-09-28 Thread Jack Malmostoso
On Sat, 28 Sep 2013 17:00:01 +0200
Catherine Gramze rhia...@gmail.com wrote:

 Thank you for the additional info. It sounds a whole lot like what I
 am getting in the Zareason computer I ordered. I am waiting very
 impatiently!

I was scrolling through the thread and I see you want to add a video
card, and I checked on the Zareason website what they offer. I hope you
got a power supply with more than the 350W standard. I am not sure it
has enough horsepower to run an extra video card as well.

 But the Braid game is intriguing me - I must try it! Got any other
 suggestions?

Check out the Humble Bundle website, you'll find plenty of great linux
games!


-- 
Vulcans worship peace above all.
-- McCoy, Return to Tomorrow, stardate 4768.3


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20130928175613.2e9433f7@nostromo




Re: Building computer

2013-09-28 Thread Stan Hoeppner
On 9/27/2013 6:03 AM, Joel Rees wrote:
 On Fri, Sep 27, 2013 at 10:22 AM, Tom H tomh0...@gmail.com wrote:
 On Thu, Sep 26, 2013 at 7:09 AM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 On 9/26/2013 5:45 AM, Joel Rees wrote:
 On Thu, Sep 26, 2013 at 2:05 PM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:

 Stan, joking aside, are there any AMD processors you would recommend
 for doing kernel and driver level Android/ARM development in a VM?

 How likely would I be to find such a processor in a netbook or laptop?

 How likely is one to find a kernel developer doing any real work on a
 laptop?  Throwing a netbook into the question is just silly.
 
 The reason I ask, Stan, is that I'm preparing to take a class where
 I'll be studying Super H assembly language programming, writing device
 drivers for an embedded SH3 running a Linux kernel, and such.

I read everything below but my reply will be brief, so going up top with
it.  Performance options with portables are always limited due to power.
 A few quick points:

1.  If you're going to be compiling anything, cache size trumps clock
speed.  If a smaller-cache but otherwise roughly equivalent CPU clocks
more than ~30% higher than the big-cache CPU, it becomes a horse race
(see the sketch after this list).

2.  Do not use an in-order CPU, such as the Atom, regardless of clock
speed.  The lack of branch prediction, rename registers, etc, will
hamstring a compiler.

3.  If you're running a hypervisor, low level support such as AMD-V or
Intel VT will help.

 I'd like to prepare a portable emulation environment for the class,
 since I know I'll be wanting to do homework when the lab is not
 available. Renesas points to an SH4 emulator by Kawasaki-san that runs
 under QEMU, but at times I will likely be debugging the emulator as
 well as my own code. And I'll be doing a lot of compiles in the
 emulated system.
 
 I'll be biking to the class some days and walking other days, so I'd
 like a 12 inch screen form factor to fit in my bag and not break my
 back, which pretty much says netbook. Many netbooks have output for
 external monitors, which will help at home, at least.
 
 At this point, I've been kind of looking at Acer's Aspire (heh) V5-122
 with an AMD A4-1250 and 4G RAM, or a similar V5 with an AMD A6-1450.
 I'm trying to figure out whether they support QEMU. If so, I'm
 thinking the 4-core A6 will be worth the extra 5000 yen: a pair of
 cores for the host OS and I/O, plus another core for the emulation
 environment makes three.
 
 I'm also looking at an HP dm1-4400, but the processor there is an AMD
 E2-1800, which seems to be last year's tech and a little heavier on
 battery use. It's only two cores, and only 2G RAM in the version at
 the store where I was looking at it (Sofmap in Umeda -- Osaka), but
 maybe HP is sturdier than Acer. I need to look at that.
 
 (I sat on my Lenovo ideapad s100, and now I can only use it with the
 external monitor, so I'm a bit sensitive to sturdiness now. Not that I
 plan on sitting on any more computers, of course. Should not have
 taken the thing to church after programming all Saturday night. And,
 no, I have not been running qemu on the ideapad. It's a single-core
 Atom. Bleagh.)
 
 I'm pretty sure that I've seen a thread where kernel developers (Linus
 included) were discussing kernel compilation time on laptops.
 
 Thanks for the data points, Tom.
 
 (I've compiled a kernel on a netbook; you'd better have a few hours to 
 spare...)
 
 That's a given. I'm not planning on compiling the kernel every day,
 but I will be compiling loadable modules, cross or under emulation,
 pretty regularly, at some point
 --
 Joel Rees
 
 Be careful where you see conspiracy.
 Look first in your own heart.
 
 


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52470aef.1090...@hardwarefreak.com



Re: Building computer - power supplies

2013-09-28 Thread Stan Hoeppner
On 9/27/2013 6:37 PM, Joel Rees wrote:
 On Fri, Sep 27, 2013 at 9:36 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 A point I forgot to make.  This is something everyone should know.

 Subject:  The marketing myth of multiple +12V rails
 [...]
 
 What I want to know is why Intel CPUs still need the +12V.

They don't now and they never did.  The 8088 through 80486 and the first
generation Pentium-60 were 5 volt ICs.  The 80486-DX4 and 2nd gen
Pentium were 3.3V.  The Pentium MMX through today's CPUs have a dual
voltage plane.  The IO section uses 3.3V and the core 1.x-2.x volts
depending on model.  The core runs at a lower voltage for higher clock
speed with less current draw and less heat.

As CPUs began requiring more and more juice, specifically the 130W P4
models, it became impractical to carry the current across the PCB from
the ATX connector to the CPU socket.  A 130W CPU at 3.3V requires 40 amps of
current.  Traces in motherboard PCBs are typically made of a gold alloy
deposition.  A bare motherboard PCB alone with enough trace metal to
carry 40 amps would need to be on the order of a 12 layer board, and
would simply cost too much due to the additional gold content and
manufacturing complexity.

Installing two voltage regulators next to the CPU socket and using
standard ~22 gauge copper wires from the 12V rail of the PSU solves the
problem cheaply.  The 12V rail was chosen because 3.6x less current is
required vs using the 3.3V rail as was used previously, which means much
smaller wires are needed.
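A quick Python sanity check of that arithmetic (I = P / V, assuming
ideal conversion and ignoring regulator losses):

    for volts in (3.3, 12.0):
        amps = 130 / volts              # the 130W CPU above
        print(f"{volts:4.1f} V rail -> {amps:4.1f} A")
    #  3.3 V rail -> 39.4 A   (the "40 amps" above)
    # 12.0 V rail -> 10.8 A   (about 3.6x less current)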

If you read about Google's custom servers that power their search
engine, all 6 million of them (exaggeration), they designed their own
motherboard and their own 12V only PSU in conjunction with their
vendors.  The PSU supplies 12V to the board through a single connector,
and all conversion is performed on the board.

If you look at any of the big iron from HP, IBM, SUN, SGI, the bulk
power supplies in the chassis supply 48V to each of the many processor
and IO boards, and it's converted down from there at the board level.
This has been the standard for iron for at least 20 years.

 Have you considered copying these kinds of posts to a blog, Stan?
 Might be easier to find them again.

Yes, on occasion.  I just never seem to find time/motivation to set it
up and do it...

 (I often do exactly that when I don't want to forget something. Then I
 forget it's in one of my blogs. I mix politics and religion and
 technology and philosophy up too freely in my blogs, even though part
 of the purpose of creating multiple blogs was to separate them by
 purpose.)

If I understand you correctly: in my case, blogging would be strictly
for the benefit of others.

Everything that goes into my posts comes directly from my organic
archive.  Sure, I do look some things up on occasion to verify my facts
before posting, or find a link to demonstrate a point, but it's 99% from
memory.  I've been gifted with what some folks call a photographic
memory, though I can assure you that's not a very good description of
how it actually works.  Many/most things I can recall instantly.  With
others, I know the memory which I'm trying to pull, but finding the
detail within it is akin to being in a boat on a mildly foggy lake, and
I have to paddle around in the fog for a bit before that memory comes
into clear view.  I think for most people that fog is simply too thick,
and they don't even try to look through it.

Some people can do this with auditory as well as visual memory.  For me
it must go through my visual cortex to have any chance at storage and
retrieval.  I know a few women who would say that all men have this ear
to memory problem. ;)

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52472cb6.4070...@hardwarefreak.com



Re: Building computer - power supplies

2013-09-28 Thread Doug
On 09/28/2013 03:23 PM, Stan Hoeppner wrote:
 On 9/27/2013 6:37 PM, Joel Rees wrote:
 On Fri, Sep 27, 2013 at 9:36 PM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 A point I forgot to make.  This is something everyone should know.

 Subject:  The marketing myth of multiple +12V rails
 [...]

 What I want to know is why Intel CPUs still need the +12V.
 
 They don't now and they never did.  The 8088 through 80486 and the first

/snip/

 
 Installing two voltage regulators next to the CPU socket and using
 standard ~22 gauge copper wires from the 12V rail of the PSU solves the
 problem cheaply.  The 12V rail was chosen because 3.6x less current is
 required vs using the 3.3V rail as was used previously, which means much
 smaller wires are needed.
/snip/

Do I understand correctly that there are two switching power supply
chips at the input to the CPU to produce 3.3 volts? Obviously, a linear
regulator cannot produce any gain in current. Iin = Iout for linear
regulators.

--doug

-- 
Blessed are the peacemakers..for they shall be shot at from both sides.
--A.M.Greeley


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52473996.2040...@optonline.net



Re: Building computer

2013-09-28 Thread Stan Hoeppner
On 9/28/2013 8:14 AM, berenger.mo...@neutralite.org wrote:
 
 
 On 28.09.2013 13:33, Stan Hoeppner wrote:
 Hi Catherine,

 I haven't caught up with the rest of the thread but just wanted to
 address a couple points here.

 On 9/26/2013 11:12 AM, Catherine Gramze wrote:

 On Sep 26, 2013, at 1:05 AM, Stan Hoeppner s...@hardwarefreak.com
 wrote:


 What desktop applications are you using that require 8GB, let alone
 16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over buy
 DRAM, that's a personal choice.  It will likely not improve performance
 in any meaningful way, for WOW in Wine, or anything else.


 I will be running more than one app at a time. For example WoW, a
 browser, a Ventrilo client, and a chat client at minimum.

 4GB is more than plenty, unless WOW has turned into a complete and total
 memory hog.  Obviously it eats more running through Wine emulation.  But
 Wine and WOW combined shouldn't eat more than 2GB, so you have 2GB left
 to the rest, which is plenty.
 
 I am only quickly reading the thread, and it's the 2nd time I see wine
 associated with emulation.
 As the name says, WINE Is Not An Emulator: it does not emulate a
 computer, it does not emulate the Windows kernel, it emulates nothing.

snip

Whether you call it an emulator, translator, simulator, or Santa Claus,
it 'provides' the Windows APIs to the application, the DLLs, etc.
Providing this does require additional memory.  It's not a large amount
by today's standards, but it is non-negligible.  I made that case above
and previously in the thread.

So I'm unclear as to why you picked my reply for your rebuttal, given
we're on the same page.

The problem with 3D game performance under Wine is not memory
consumption, but the CPU overhead, which I also made clear previously.

-- 
Stan



-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52474014.8010...@hardwarefreak.com



Re: Building computer

2013-09-28 Thread Catherine Gramze

On Sep 28, 2013, at 11:56 AM, Jack Malmostoso fabio.rosci...@gmail.com wrote:

 I was scrolling through the thread and I see you want to add a video
 card, and I checked on the Zareason website what they offer. I hope you
 got a power supply with more than the 350W standard. I am not sure it
 has enough horsepower to run an extra video card as well.

I will be adding both a new power supply and the video card myself, probably 
not immediately.  Zareason offers nothing between 550 and 850 watts, and the 
manufacturer recommends 500 watts for the video card I plan to use. I don't 
need 850 watts, and 550 wasn't enough. And they charge $150 more for the Radeon 
7870 than I can get it for from more than one vendor. 



signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: Building computer - power supplies

2013-09-28 Thread Joe
On Sat, 28 Sep 2013 16:18:30 -0400
Doug dmcgarr...@optonline.net wrote:

 On 09/28/2013 03:23 PM, Stan Hoeppner wrote:
  On 9/27/2013 6:37 PM, Joel Rees wrote:
  On Fri, Sep 27, 2013 at 9:36 PM, Stan Hoeppner
  s...@hardwarefreak.com wrote:
  A point I forgot to make.  This is something everyone should know.
 
  Subject:  The marketing myth of multiple +12V rails
  [...]
 
  What I want to know is why Intel CPUs still need the +12V.
  
  They don't now and they never did.  The 8088 through 80486 and the
  first
 
 /snip/
 
  
  Installing two voltage regulators next to the CPU socket and using
  standard ~22 gauge copper wires from the 12V rail of the PSU solves
  the problem cheaply.  The 12V rail was chosen because 3.6x less
  current is required vs using the 3.3V rail as was used previously,
  which means much smaller wires are needed.
 /snip/
 
 Do I understand correctly that there are two switching power supply
  chips at the input to the CPU to produce 3.3 volts? Obviously, a linear
  regulator cannot produce any gain in current. Iin = Iout for linear
 regulators.
 

Yes, though they are for different voltages, typically 3.3V and 2.5V or
1.8V, see the voltage monitoring in the BIOS or MB monitor for the
values for a particular board.

You'll find two inductors, probably toroids, and a group of tall
aluminium electrolytic capacitors in the area. These electrolytics
carry a very high ripple current, and tend to run warm and to be the
weakest links in the system, as they will dry out with age, even if
they are run within their temperature and ripple specifications. But
linear regulators could not even be considered for this task.
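To put numbers on that last point, a tiny Python sketch (40 A is the
worst-case CPU current cited earlier in the thread):

    vin, vout, amps = 12.0, 3.3, 40.0
    wasted = (vin - vout) * amps   # a linear reg burns the drop as heat: 348 W
    delivered = vout * amps        # power actually reaching the load: 132 W
    print(f"{wasted:.0f} W wasted, {delivered:.0f} W delivered "
          f"({delivered / (vin * amps):.0%} efficient)")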

-- 
Joe


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20130928223534.483f3...@jretrading.com



Re: Building computer

2013-09-28 Thread Ralf Mardorf
On Sat, 2013-09-28 at 16:55 -0400, Catherine Gramze wrote:
 
 On Sep 28, 2013, at 11:56 AM, Jack Malmostoso
 fabio.rosci...@gmail.com wrote:
 
  I was scrolling through the thread and I see you want to add a video
  card, and I checked on the Zareason website what they offer. I hope
  you
  got a power supply with more than the 350W standard. I am not sure
  it
  has enough horsepower to run an extra video card as well.
 
 
 I will be adding both a new power supply and the video card myself,
 probably not immediately.  Zareason offers nothing between 550 and 850
 watts, and the manufacturer recommends 500 watts for the video card I
 plan to use. I don't need 850 watts, and 550 wasn't enough. And they
 charge $150 more for the Radeon 7870 than I can get it for from more
 than one vendor. 
 

If it is true that so many amperes are needed for gaming computers,
http://uk.answers.yahoo.com/question/index?qid=20130719093253AACWgo6 ,
then I will be even more salty than I already am about unfair EU
regulations. I'm not allowed to own a low-watt lightbulb with a tungsten
filament to get good light, but gamers are allowed insanely high current
consumption. How much is a kWh in your part of the world? Even if it is
needed for gaming, and assuming you don't care about the environment,
isn't such consumption too expensive? The mean in Germany in 2012 was
around 0.26 €/kWh.
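For a rough sense of the cost, a small Python sketch (the 500 W draw
and 4 hours/day of gaming are assumptions for illustration):

    watts, hours_per_day, eur_per_kwh = 500, 4, 0.26
    kwh_per_year = watts / 1000 * hours_per_day * 365
    print(f"{kwh_per_year:.0f} kWh/year -> "
          f"{kwh_per_year * eur_per_kwh:.2f} EUR/year")
    # 730 kWh/year -> 189.80 EUR/year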

Regards,
Ralf


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380404988.6343.102.camel@archlinux



Re: Building computer - power supplies

2013-09-28 Thread Stan Hoeppner
On 9/28/2013 3:18 PM, Doug wrote:
 On 09/28/2013 03:23 PM, Stan Hoeppner wrote:
 On 9/27/2013 6:37 PM, Joel Rees wrote:
 On Fri, Sep 27, 2013 at 9:36 PM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 A point I forgot to make.  This is something everyone should know.

 Subject:  The marketing myth of multiple +12V rails
 [...]

 What I want to know is why Intel CPUs still need the +12V.

 They don't now and they never did.  The 8088 through 80486 and the first
 
 /snip/
 

 Installing two voltage regulators next to the CPU socket and using
 standard ~22 gauge copper wires from the 12V rail of the PSU solves the
 problem cheaply.  The 12V rail was chosen because 3.6x less current is
 required vs using the 3.3V rail as was used previously, which means much
 smaller wires are needed.
 /snip/
 
 Do I understand correctly that there are two switching power supply
  chips at the input to the CPU to produce 3.3 volts? Obviously, a linear
  regulator cannot produce any gain in current. Iin = Iout for linear
 regulators.

No, the switching in the PSU is done on the AC current before
rectification, increasing the cycles from 60 Hz to 30-50 KHz.  The high
frequency AC, among other things, facilitates the conversion to DC, and
allows for a much smaller transformer, though the output has ripple,
unlike a battery, which provides true DC current.

I'm thinking my attempt at a simple explanation of PSU design in my
single vs dual rail rant was probably confusing, as it was technically
inaccurate.  Shame on me.

The cost difference I was referring to is in the components that lay
after the rectifier (which lay after the switching MOSFETs).  This is
mainly the regulator, which is also a MOSFET.  With a single 12V rail
design, the regulator FET is much larger and more expensive than the
combined smaller FETs in a multi-rail design.

However, another cost saving aspect to this I didn't mention previously
is component commonality.  If a vendor sizes it PSUs optimally, most if
not every PSU in the line can use the same regulator FETs.  For
instance, a 25A regulator can be used for a single 12V rail for any PSU
up to about 400W.  For PSUs between ~400-700W there will be two 25A
rails with two regulators.  For PSUs above ~700W there would be three
25A rails, and so on.  Since the same regulator is used in all models,
this regulator can be acquired in much larger volume with a greater
discount per unit.
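That sizing rule as a hedged Python sketch (the assumption that roughly
75% of the rated wattage lands on the 12V outputs is mine, for
illustration only; it is not a published spec):

    import math

    def rails_needed(psu_watts, rail_amps=25, frac_12v=0.75):
        amps_12v = psu_watts * frac_12v / 12.0   # 12V share of the load
        return math.ceil(amps_12v / rail_amps)   # one 25A regulator per rail

    for w in (400, 550, 1000):
        print(f"{w} W -> {rails_needed(w)} rail(s)")
    # 400 W -> 1 rail(s), 550 W -> 2 rail(s), 1000 W -> 3 rail(s)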

With single rail designs one would need to acquire 5 or more sizes of
regulators which are progressively costlier per unit, and there is
little or no volume discount relative to above.  This is the major cost
driver for the multi-rail designs, not simply the per unit cost of
larger FETs.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52477118.4020...@hardwarefreak.com



Re: Building computer

2013-09-28 Thread berenger . morel



On 28.09.2013 22:46, Stan Hoeppner wrote:

On 9/28/2013 8:14 AM, berenger.mo...@neutralite.org wrote:



On 28.09.2013 13:33, Stan Hoeppner wrote:

Hi Catherine,

I haven't caught up with the rest of the thread but just wanted to
address a couple points here.

On 9/26/2013 11:12 AM, Catherine Gramze wrote:


On Sep 26, 2013, at 1:05 AM, Stan Hoeppner 
s...@hardwarefreak.com

wrote:



What desktop applications are you using that require 8GB, let 
alone
16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over 
buy
DRAM, that's a personal choice.  It will likely not improve 
performance

in any meaningful way, for WOW in Wine, or anything else.



I will be running more than one app at a time. For example WoW, a
browser, a Ventrilo client, and a chat client at minimum.


4GB is more than plenty, unless WOW has turned into a complete and 
total
memory hog.  Obviously it eats more running through Wine emulation. 
But
Win and WOW combined shouldn't eat more than 2GB, so you have 2GB 
left

to the rest, which is plenty.


I am only quickly reading the thread, and it's the 2nd time I see 
wine

associated with emulation.
As the name says, WINE Is Not An Emulator, it does not emulate a
computer, it does not emulate the windows' kernel, it emulates 
nothing.


snip

Whether you call it an emulator, translator, simulator, or Santa 
Claus,

it 'provides' the Windows APIs to the application, the DLLs, etc.
Providing this does require additional memory.  It's not a large 
amount
by today's standards, but it is non negligible.  I made that case 
above

and previously in the thread.


So, you do think that an emulator is the same thing as a dynamic 
library? If yes, well... why not. But then, since all dynamic libraries 
would be emulators, do not use that word to argue that they'll consume 
more resources, whether CPU or memory.



So I'm unclear as to why you picked my reply for your rebuttal, given
we're on the same page.


The reason I replied is that an emulator emulates a complete system, 
and that has a huge overhead. WINE, as a dynamic library, could 
theoretically ( I say theoretically because I did not run any tests, 
I'll be honest on that point; plus, it's impossible to have strictly the 
same costs ) have the same overhead as Windows' own API. In practice it 
will have a small CPU overhead, but to claim it's not a small one, one 
should give some valgrind analysis.


About the reason to reply to your post and not another one: it was 
simply the second post saying that which I had read in the thread at 
the moment I replied :)

It was not personal.


The problem with 3D game performance under Wine


I will not say it does not cost more than running on Windows; I have no 
proof. But that is not because it is an emulator; it can only be 
because it is a worse implementation, or one with more layers.



is not memory
consumption, but the CPU overhead,


CPU for 3D stuff? You might be right and I am probably wrong, but could 
it not be because Linux's 3D drivers are worse than Windows' ones? 
This is a real question, not a troll, and the reasoning behind that 
opinion of mine is quite easy to understand, and so probably very 
simplistic: video games mostly target Windows users, and so more 
money and time have been spent on enhancements... on Windows' side.


Well, again, I admit I have no benchmark to back my words.
Of course, on a more technical point, I can agree that one more layer 
for OpenGL-related stuff might have a cost. But that cost might also be 
removed at compile time.
I can learn that my opinion here is wrong; I have no problem with that. 
I'm wrong about a lot of things after all, and am always happy when I 
learn that I was wrong about something else. But give me a reason. Or a 
proof. Use a Linux kernel and a WINE-based environment, then show me 
benchmarks. That would be sufficient. Or reasons why WINE should cost 
much more ( so that I could do some searches on your words and my 
errors ).



which I also made clear previously.


This is exactly why I admitted having only read the thread quickly. 
Sorry, but I did not notice that part. From what I remember reading, 
which might not be your own words ( I have also read that you say a lot 
of RAM is useless for most users, and I agree with that ), the claim 
was that wine + wow would take at least 2GB. That may be true, but from 
what I remember reading, there was an implicit claim that it was due to 
wine, when it was because of the whole system.


My reply was not aimed at anyone; my apologies if it seemed so. It was 
because an emulator is one thing, and an API is another, different 
thing (though with enough abstraction we could say they're the same, 
since their purpose is always to make software run...) which has lower 
costs.



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 

Re: Building computer - power supplies

2013-09-28 Thread Ralf Mardorf
On Sat, 2013-09-28 at 19:15 -0500, Stan Hoeppner wrote:
 However, another cost saving aspect to this I didn't mention
 previously is component commonality.  If a vendor sizes it PSUs
 optimally, most if not every PSU in the line can use the same
 regulator FETs.  For instance, a 25A regulator can be used for a
 single 12V rail for any PSU up to about 400W.  For PSUs between
 ~400-700W there will be two 25A rails with two regulators.  For PSUs
 above ~700W there would be three 25A rails, and so on.  Since the same
 regulator is used in all models, this regulator can be acquired in
 much larger volume with a greater discount per unit.

This explanation is more to my taste ;).



-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380429524.6343.147.camel@archlinux



kernel compilation (was ... Re: Building computer)

2013-09-28 Thread Chris Bannister
On Thu, Sep 26, 2013 at 11:57:43PM -0300, Beco wrote:
 On 26 September 2013 22:22, Tom H tomh0...@gmail.com wrote:
 
 
  (I've compiled a kernel on a netbook; you'd better have a few hours to
  spare...)
 
 
 
 
 Questions for people who compile kernel and their machines:

You are better off starting a new thread. You could create a subtopic by
changing the subject.

Now unfortunately, information about compiling a kernel is buried in
a thread about building a computer. :(

There are plenty of examples where someone has interrupted a conversation
in a thread:

e.g. Subject: How do you cook broccoli? (was Re: dpkg errors out installing...)

-- 
If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the 
oppressing. --- Malcolm X


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20130929054152.GK18437@tal



Re: Building computer

2013-09-27 Thread Ralf Mardorf


On Thu, 2013-09-26 at 23:57 -0300, Beco wrote:
 How long a considered fast kernel compilation would last? I'd like
 to have a clue. And in what kind of computer (processor / RAM /
 anything else relevant)?

Building older and current kernels with configurations that are close
to Debian, Ubuntu or Arch Linux defaults takes around 90 minutes (in
the past perhaps a little less, and today perhaps around 120 minutes; I
need to check this) on my 2.1GHz dual-core Athlon 64-bit with 4GiB RAM,
at concurrency level 2, resp. 2 jobs, but there is virtually no
difference if I don't use concurrency level 2, resp. 2 jobs. Older
computers, e.g. a 900MHz single-core Athlon 32-bit with 2GiB or less,
can't really be used while compiling a kernel. I could use my computer
while building the kernel, but I prefer not to, because it can cause
issues. YMMV!



-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380267057.732.23.camel@archlinux



Re: Building computer

2013-09-27 Thread Ralf Mardorf
On Fri, 2013-09-27 at 09:30 +0200, Ralf Mardorf wrote:
 
 On Thu, 2013-09-26 at 23:57 -0300, Beco wrote:
  How long a considered fast kernel compilation would last? I'd like
  to have a clue. And in what kind of computer (processor / RAM /
  anything else relevant)?
 
 Building older and current kernels with configurations that are close
 to Debian, Ubuntu or Arch Linux defaults takes around 90 minutes (in
 the past perhaps a little less, and today perhaps around 120 minutes; I
 need to check this) on my 2.1GHz dual-core Athlon 64-bit with 4GiB RAM,
 at concurrency level 2, resp. 2 jobs, but there is virtually no
 difference if I don't use concurrency level 2, resp. 2 jobs. Older
 computers, e.g. a 900MHz single-core Athlon 32-bit with 2GiB or less,
 can't really be used while compiling a kernel. I could use my computer
 while building the kernel, but I prefer not to, because it can cause
 issues. YMMV!

PS: There's no difference whether booted to linux or linux-rt when
compiling linux or linux-rt. Again, YMMV.



-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380267134.732.24.camel@archlinux



Re: Building computer

2013-09-27 Thread Stan Hoeppner
On 9/26/2013 12:34 PM, Doug wrote:
 On 09/26/2013 01:05 AM, Stan Hoeppner wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:

 On Sep 25, 2013, at 12:52 AM, Stan Hoeppner s...@hardwarefreak.com wrote:

 
 /very large snip/
 
 I just left this final section to illustrate the tome of the whole thing:

 The socket vs upgrade concern is not valid.  It will be at least 10
 years before desktop applications, especially Linux apps, are
 sufficiently threaded to take advantage of today's 4 core CPUs, let
 alone 6/8 cores.  New hardware doesn't make old hardware obsolete.  New
 software does.  There are many people, including myself, who would say
 this applies even to dual core CPUs.  And in fact, at the rate of
 desktop software development today, WRT threads, one is better off
 purchasing a higher clocked dual core CPU with lots of cache than a quad
 core model at a lower clock, especially given power consumption.

 Worrying about which future CPU will fit in your mobo socket is a waste
 of brain cycles.  It's irrelevant.  By the time you *need* to upgrade,
 you'll want newer faster DRAM, etc.  So you'll end up replacing the
 board, CPU, and memory, as people have been doing for some years
 already.  Oh, add to that the fact that AMD and Intel keep changing
 sockets to accelerate this trend.  The entire industry is better off
 when people keep buying new hardware, obviously, even if the consumers
 aren't.

 
 Stan, I think the advice you've given Catherine will be useful to a lot
 of folks who read this list. Particularly in regard to ram usage and
 power supply size.

DRAM has trended very cheap for a while, though it's starting to go back
up a little.  Due to price many people have been buying more than they
need, simply because they can, at least here in The States.  And because
it's been so cheap big box vendors have been using DRAM capacity as a
selling point.  So when people see most if not all retail desktop PCs
shipping with 6GB or more RAM, they are conditioned to believe that's
the new minimum that everyone should have.  This is of course folly.

The multi-core issue is of greater importance as CPUs typically cost
much more than a couple of DIMMs.  The overwhelming majority of people
will never make use of 4 cores, let alone all the cycles of two cores.
Every time I see someone say they're going to buy a quad core CPU I
cringe, and usually just bite my tongue.  Many people simply don't want
to know the truth.  Buying 4 instead of 2 makes them feel better about
the purchase, and that's what counts, I guess...

 I have heard, however, that the _make_ of power supply may be important,
 as some cheapies may not hold up. You might comment on that, and if
 that's correct, possibly recommend a name or two.

So much I can say WRT PSU purchasing.  I'll try to be brief (and fail),
but complete.

When it comes to desktop PC power supplies brand has little to do with
reliability/longevity, but it definitely has influence on price, and
sometimes, but not always, a better warranty experience if you have to
return one.  The design of DC-DC switching power supplies hasn't changed
much in 30+ years.  It's a known science.  The only thing that changes
regularly is the cost, and sometimes quality, of components, due to the
drive to reduce production cost and increase margins.  And this is where
you run into problems.  These may include using low quality ICs, caps,
assembly processes, etc, and/or may include skimping on testing during
Q.C.  A bad batch of components can bite any vendor in the butt.
Sometimes good QC can catch this, sometimes it can't.

My advice?  Ignore brand.  Go to your favorite retail/wholesale website.
 Plug in the power rating and price range you're interested in.  Read
the reviews of the units.  Yes, read them.  Don't rely strictly on the
number of stars, eggs, etc.  One bad review by a novice builder who
burns up his system, or one who gives 1/5 because the cables are too
short for his case, or just doesn't know what he's doing, can ruin a
rating, and cause you to skip over a perfectly good supply.  Here's an
example:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817103514

The guy who gave 1 egg didn't buy the unit from Newegg but from an Expo.
 His PSU may not even be the SPI unit he's writing the review for.  He
can't figure out how to plug in the Molex connector, which is a standard
all mobos and PSUs adhere to.  He's an idiot.  So you discard this
review and his 1/5 rating.  That makes this unit a 5/5 instead of a 3/5.
 Not the best example as there are only 2 reviews, and you should never
rely on a single positive review.  But you get the idea what to look out
for in this regard.
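The effect of tossing that bogus rating, in a few lines of Python (the
numbers are from the two-review example above):

    ratings = [5, 1]                       # the two reviews on that page
    print(sum(ratings) / len(ratings))     # 3.0 as displayed
    kept = [r for r in ratings if r > 1]   # discard the suspect 1-egg review
    print(sum(kept) / len(kept))           # 5.0 after filtering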

Following, however, is the review that makes you walk away from a
particular unit:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817103089

SPI is one of the top 5 quality PSU manufacturers and has been for over
15 years.  They contract manufacture for some of the big box PC names as
well as 

Re: Re: Building computer

2013-09-27 Thread David L. Craig
On 13Sep26:2109-0400, Tom H wrote:
 On Tue, Sep 24, 2013 at 10:30 PM, Balamurugan emailstorb...@gmail.com wrote:
  On 09/25/2013 04:59 AM, Bob Proulx wrote:
  Catherine Gramze wrote:
 
  I intend to build a computer for the specific purpose of running
  Debian. I have had a bad experience with a store-bought computer,
  which seemed to be wholly unable to boot to anything but Windows 8 -
  there was no option in the BIOS to boot to the hard drive, or even
  to the EFI partition, but only to the Windows Boot Manager. Even
  with Secure Boot turned off.
 
  It looks like you ran into the MS Window 8 Restricted Boot problem.
 
 http://www.fsf.org/search?SearchableText=secure+boot
 
  So, I am looking for recommendations on hardware, particularly
  motherboards, known to play nicely with Debian and boot
  consistently. Building my own system is not new to me, but something
  I have not done for 10 years or so, so the appropriate BIOS settings
  on the new EFI and UEFI mobos are unknown to me. All advice is
  solicited.
 
  Check the dates on these older postings (time flies and the best
  hardware moves along) but here are two references:
 
 http://lists.debian.org/debian-user/2012/10/msg01189.html
 
 http://lists.debian.org/debian-user/2013/04/msg00180.html
 
  Recently one of my friend's brother bought a Lenova laptop. My friend asked
  me to install Ubuntu in that laptop but that machine was not detecting
  Ubuntu and directly booting into Windows 8. Then after bit of struggle, we
  went into the bios and changed the boot mode from 'UEFI' mode to 'Legacy'
  mode. Since we were installing by pen drive, we changed the boot order also.
 
  After the above steps, it detected Ubuntu and we finally installed Ubuntu
  along with Windows (as dual boot). The problem starts now. We were not able
  to boot windows from the Ubuntu grub menu boot entry. If we want to boot
  Windows 8, we need to change back the boot entry to UEFI mode in the bios
  and then only Windows boots from Windows boot manager.
 
  The reason behind this is Windows 8 is been made to boot only in UEFI mode
  and hence the OEM vendors (like Lenova) are configuring their machines
  accordingly. They don't mind/care about other free software OS.
 
  Just before this instance, I bought a laptop prebuild with Ubuntu (DELL
  vostro 2420). It doesn't had these circus as it was shipped with Ubuntu.
 
  I thought of sharing this details with you and our fellow community. The
  link 'http://www.fsf.org/search?SearchableText=secure+boot' shared by Bob
  gives you some insight on this restricted booting.
 
 Please don't top post.
 
 And please don't conflate the fact that you couldn't install Ubuntu on
 a Lenovo with UEFI with the fact that it cannot be done.
 
 I've just installed Ubuntu on a Lenovo and it's the seventh such
 install on UEFI laptops.
 
 The FSF usually stakes out extreme positions.
 
 Some debunking of Secure Boot myths by the (main) developer of the
 Secure Boot shim:
 
 http://mjg59.dreamwidth.org/10971.html

Your fact is not.  I installed Debian Sid on a G500 a few
months ago and it dual-boots with Win8.  The trick is to
use the smaller alternative power button to the right of
the large power button, which ignores Legacy.  Perhaps
other Lenovo laptops are not so equipped but the G500 is.
-- 
not cent from sell
May the LORD God bless you exceedingly abundantly!

Dave_Craig__
So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe.
__--from_Nightfall_by_Asimov/Silverberg_


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20130927103829.ga4...@dlc-dt.home



Re: Building computer

2013-09-27 Thread Joel Rees
On Fri, Sep 27, 2013 at 10:22 AM, Tom H tomh0...@gmail.com wrote:
 On Thu, Sep 26, 2013 at 7:09 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/26/2013 5:45 AM, Joel Rees wrote:
 On Thu, Sep 26, 2013 at 2:05 PM, Stan Hoeppner s...@hardwarefreak.com 
 wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:

 Stan, joking aside, are there any AMD processors you would recommend
 for doing kernel and driver level Android/ARM development in a VM?

 How likely would I be to find such a processor in a netbook or laptop?

 How likely is one to find a kernel developer doing any real work on a
 laptop?  Throwing a netbook into the question is just silly.

The reason I ask, Stan, is that I'm preparing to take a class where
I'll be studying Super H assembly language programming, writing device
drivers for an embedded SH3 running a Linux kernel, and such.

I'd like to prepare a portable emulation environment for the class,
since I know I'll be wanting to do homework when the lab is not
available. Renesas points to an SH4 emulator by Kawasaki-san that runs
under QEMU, but at times I will likely be debugging the emulator as
well as my own code. And I'll be doing a lot of compiles in the
emulated system.

I'll be biking to the class some days and walking other days, so I'd
like a 12 inch screen form factor to fit in my bag and not break my
back, which pretty much says netbook. Many netbooks have output for
external monitors, which will help at home, at least.

At this point, I've been kind of looking at Acer's Aspire (heh) V5-122
with an AMD A4-1250 and 4G RAM, or a similar V5 with an AMD A6-1450.
I'm trying to figure out whether they support QEMU. If so, I'm
thinking the 4-core A6 will be worth the extra 5000 yen: a pair of
cores for the host OS and I/O, plus another core for the emulation
environment makes three.

I'm also looking at an HP dm1-4400, but the processor there is an AMD
E2-1800, which seems to be last year's tech and a little heavier on
battery use. It's only two cores, and only 2G RAM in the version at
the store where I was looking at it (Sofmap in Umeda -- Osaka), but
maybe HP is sturdier than Acer. I need to look at that.

(I sat on my Lenovo ideapad s100, and now I can only use it with the
external monitor, so I'm a bit sensitive to sturdiness now. Not that I
plan on sitting on any more computers, of course. Should not have
taken the thing to church after programming all Saturday night. And,
no, I have not been running qemu on the ideapad. It's a single-core
Atom. Bleagh.)

 I'm pretty sure that I've seen a thread where kernel developers (Linus
 included) were discussing kernel compilation time on laptops.

Thanks for the data points, Tom.

 (I've compiled a kernel on a netbook; you'd better have a few hours to 
 spare...)

That's a given. I'm not planning on compiling the kernel every day,
but I will be compiling loadable modules, cross or under emulation,
pretty regularly, at some point
--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/caar43imq+g-8mkx-dvd4s1ce7byqfo8ez0dbzr3th-+_nc9...@mail.gmail.com



Re: Building computer - power supplies

2013-09-27 Thread Stan Hoeppner
A point I forgot to make.  This is something everyone should know.

Subject:  The marketing myth of multiple +12V rails

Switching regulator MOSFETs provide the 3.3, 5/5VSB, and 12VDC output
current of a PC PSU.  Before the days of waffle iron CPUs and GPUs, PSUs
had a single 12V regulator providing a single 12V circuit, or rail, at
about 16 amps, good for 16*12=192 watts.  That was sufficient for some
drives and adding a little extra power to the CPU socket via that 4 pin
plug, and a little extra to a hot graphics card in an AGP slot.

Then CPUs came along eating 130W and PCIe GPUs arrived eating 100W.
230/12=19A.  Oops, more than 16A, not quite enough amps for the CPU/GPU,
let alone my drives.  The initial response to this by the PSU industry
was simply to use larger switching MOSFETs and support devices in the
output circuit to give us ~25A on the 12V rail.  This worked well enough
for a while, even though the larger MOSFETs were a bit more expensive.
They simply increased the price a bit and passed it on to the
wholesaler, retailer, and the consumer.  No big deal.  Yet.

Then arrived the massive power hog GPUs and SLI/Xfire adding insult to
injury.  Some of these GPU combos require up to 300 watts (or much more
for quad SLI).  Add in an 8 core CPU pulling 75W from the 12V rail, and
now you need 31A, plus another 8A for spinning HDs, burners, etc.  So we
now need at least 40A of 12V power.  However, if you pull more than
85-90% of the maximum rail current, voltage will drop, and the system
will become unstable.  We need a safety margin of ~20%, so we now need
at least 48 amps of 12V power.  Let's just round that to 50, a nice even
number.  And, not coincidentally, we have a plethora of PSUs on the
market with dual 25A +12V rails.  Fancy that.
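The same worst-case arithmetic, spelled out in Python:

    import math

    gpu_watts, cpu_watts, drive_amps = 300, 75, 8
    load_amps = math.ceil((gpu_watts + cpu_watts) / 12 + drive_amps)
    need_amps = load_amps * 1.2      # 39.25 -> 40 A, plus ~20% safety margin
    print(load_amps, "A load ->", need_amps, "A rated")
    # 40 A load -> 48.0 A rated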

But why two at 25A instead of one at 50A?  Simple: component cost.  A
50A switching regulator and support devices costs significantly more
than using two sets of 25A devices.  In real terms it's not a huge
difference, let's say $8 vs $3.  So the price of your dual rail PSU
should have gone down by $5, right?  Wrong.  Here's why.

Some marketing genius at one of the PSU companies, years ago, figured
out that telling consumers 2 is better than 1 would work.  And he was
right.  People without the technical knowledge have been indoctrinated.
 The rest of the industry followed.  So not only are they saving $5 per
unit, they're charging the consumer $5 more per unit because 2 is
better than 1.  That's an extra $10 per unit, in essence, in the
manufacturers' pockets.  Now, if just one company produces and sells
100,000 units, the $5 saved per unit alone is an extra half $million in profit.  Not chump
change.  If they sell 6 models with dual rails, all at 100K units,
that's $3 million extra profit.

And what has the consumer gained from this?  Nothing.  It simply makes
wiring your PC more difficult, as you have to balance your 12V devices
across an arbitrary boundary placed across the 12V output current of
your power supply.

Look at any *real* server PSU.  Only one 12V rail, always has been,
always will be.  Some models go well over 100 amps on a single rail.
Here are two good SuperMicro examples.

Standard 865W ATX form factor PSU, single 70A 12V rail:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817377004

Proprietary form factor 1620W PSU, single 135A 12V rail:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817377052

And of course, now you should be asking yourself, given what I've
stated, and if 1 rail is better for servers, why are 2/3/4 rails
supposedly better for a consumer PC?  And of course the answer is, more
rails is not better.  It's worse.

It's exactly the same situation as digital camera marketeers convincing
consumers that a 12MP camera is intrinsically better than an 8MP camera.
 The simple fact is that even 2MP is sufficient for 99.9% of all
users, because they share their photos online, not in print.  2MP camera
format is typically 1600x1200.  The average screen res, across desktops
and laptops, is far less.  Higher numbers sell, in America anyway.  The
big upshot for the industry?  8/12/16MP cameras need bigger flash cards
for a given number of snaps.  That equals more flash sales, etc, etc.
What is lost on most people is that *quality* 2/3MP cameras take better
quality pictures than cheap 16MP cameras.  Just look at the current
smartphones.  They ship with 8MP yes?  Seen the photos from one?  Yes,
they suck, horribly.
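The pixel arithmetic behind that claim (the 1366x768 panel is an
assumption, picked as a common laptop resolution of the era):

    cam = 1600 * 1200      # a "2MP" frame
    screen = 1366 * 768    # typical laptop panel
    print(f"{cam / 1e6:.2f} MP photo vs {screen / 1e6:.2f} MP screen")
    # 1.92 MP photo vs 1.05 MP screen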

-- 
Stan




On 9/27/2013 5:00 AM, Stan Hoeppner wrote:
 On 9/26/2013 12:34 PM, Doug wrote:
 On 09/26/2013 01:05 AM, Stan Hoeppner wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:

 On Sep 25, 2013, at 12:52 AM, Stan Hoeppner s...@hardwarefreak.com wrote:


 /very large snip/

 I just left this final section to illustrate the tome of the whole thing:

 The socket vs upgrade concern is not valid.  It will be at least 10
 years before desktop applications, especially Linux apps, are
 sufficiently threaded to take advantage of today's 

Re: Building computer

2013-09-27 Thread Stan Hoeppner
On 9/26/2013 9:57 PM, Beco wrote:

 Questions for people who compile kernel and their machines:
 
 How long a considered fast kernel compilation would last? I'd like to
 have a clue. And in what kind of computer (processor / RAM / anything else
 relevant)?

This depends entirely on what you include, and if you build modules or
not.  If you're like me and strip out absolutely everything the target
machine doesn't need, do not use initrd, etc, etc, then it's a pretty
quick compile, less than 5 minutes on a modern CPU.

As Ralf said, if you're duplicating a distro kernel it'll take much longer.

-- 
Stan


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/52457ed9.30...@hardwarefreak.com



Re: Building computer

2013-09-27 Thread Jonathan Dowland
On Fri, Sep 27, 2013 at 09:30:57AM +0200, Ralf Mardorf wrote:
 Building older and current kernels with configurations that are close
 to Debian, Ubuntu or Arch Linux defaults takes around 90 minutes (in
 the past perhaps a little less, and today perhaps around 120 minutes; I
 need to check this) on my 2.1GHz dual-core Athlon 64-bit with 4GiB RAM,
 at concurrency level 2, resp. 2 jobs, but there is virtually no
 difference if I don't use concurrency level 2, resp. 2 jobs.

You may find concurrency levels of 6-8 show a difference.
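A minimal sketch of that advice in Python, assuming an already
configured kernel source tree (matching jobs to cores is a starting
point, not a hard rule):

    import os, subprocess

    jobs = os.cpu_count() or 2    # fall back to 2 if undetectable
    # one job per core, e.g. "make -j8 deb-pkg" on an 8-thread machine
    subprocess.run(["make", f"-j{jobs}", "deb-pkg"], check=True)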


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/20130927125519.GB6574@debian



Re: Building computer

2013-09-27 Thread Tom H
On Thu, Sep 26, 2013 at 10:57 PM, Beco r...@beco.cc wrote:
 On 26 September 2013 22:22, Tom H tomh0...@gmail.com wrote:

 (I've compiled a kernel on a netbook; you'd better have a few hours to
 spare...)

 Questions for people who compile kernel and their machines:

 How long a considered fast kernel compilation would last? I'd like to have
 a clue. And in what kind of computer (processor / RAM / anything else
 relevant)?

On a 2.4 GHz i7-4700MQ laptop with 16 GB of RAM (sorry Stan!), make
deb-pkg using an unmodified distribution config takes about 55
minutes.

IIRC, on a netbook's Atom (I've forgotten the exact processor, but it
was a second generation Acer netbook), the same compilation took 6 or
7 hours.


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/CAOdo=swjt_45q_sr9y-aghmn29lbjekhizh1485ffnvd+gd...@mail.gmail.com



Re: Building computer - power supplies

2013-09-27 Thread Ralf Mardorf
On Fri, 2013-09-27 at 07:36 -0500, Stan Hoeppner wrote:
 It simply makes wiring your PC more difficult, as you have to balance
 your 12V devices across an arbitrary boundary placed across the 12V
 output current of your power supply.

 And of course, now you should be asking yourself, given what I've
 stated, and if 1 rail is better for servers, why are 2/3/4 rails
 supposedly better for a consumer PC?  And of course the answer is, more
 rails is not better.  It's worse.

What would be worse, apart from the fact that in theory the user needs
to take care to balance the 12V devices across the several connectors?
A possible advantage: if one rail fails, the others would still work,
and the spare part could be taken from another old PSU.
And btw, vendors buy components galore from suppliers that produce them
galore and have tons of them stocked; IOW, by using those components
they can offer very low prices. Some components that can't be used for
anything else would sometimes become hazardous waste if they weren't
used anymore. What makes technical sense is not always that simple in
terms of economy and ecology.

Rising prices are seldom caused by the mechanism you mentioned. The
milk in the same bottles has also become more expensive in recent years.

However, undersized voltage ratings for the capacitors in switching
power supplies are an often-used fraud against consumers, and switching
power supplies pollute the mains. Exotic resistors sometimes make them a
PITA to repair when you don't have those resistors in stock. I dislike
all switching power supplies.

Regards,
Ralf


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380288105.713.28.camel@archlinux



Re: Building computer

2013-09-27 Thread Ralf Mardorf
On Fri, 2013-09-27 at 09:02 -0400, Tom H wrote:
 On a 2.4 GHz i7-4700MQ laptop with 16 GB of RAM (sorry Stan!)

The question is, if more cores and more RAM really make a difference for
averaged desktop usage. AFAIK more cores and RAM should be important for
video production, but for averaged desktop usage there perhaps really is
no advantage.



-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/1380288973.713.35.camel@archlinux



Re: Building computer

2013-09-27 Thread Tom H
On Thu, Sep 26, 2013 at 11:00 PM, Catherine Gramze rhia...@gmail.com wrote:
 On Sep 26, 2013, at 10:50 PM, Tom H tomh0...@gmail.com wrote:


 I'd call it obscure rather than magical.


 Any sufficiently obscure technique is indistinguishable from magic - (my
 apologies to Arthur C. Clarke).

:)


 On my recent ill-fated Gateway it took spamming the delete key during boot,
 and then once the login screen appeared clicking on update and restart
 while holding the shift key. As any sane person would do, I immediately
 altered the boot options once in BIOS so the message to hit delete or F12
 for the boot menu would appear. Of course, the boot menu never recognized
 the Linux installation. And only recognized the CD sometimes.

If the manufacturers were sane, they'd use the same keys for accessing
the UEFI firmware and boot manager; and display a reminder of those
keys at boot...


-- 
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org
Archive: 
http://lists.debian.org/CAOdo=sy9pdejaccovx_+wpcem4sxfcw8cec07b9uo5g85sa...@mail.gmail.com



Re: Re: Re: Building computer

2013-09-27 Thread Balamurugan

On 09/27/2013 04:08 PM, David L. Craig wrote:

On 13Sep26:2109-0400, Tom H wrote:

On Tue, Sep 24, 2013 at 10:30 PM, Balamurugan emailstorb...@gmail.com wrote:

On 09/25/2013 04:59 AM, Bob Proulx wrote:

Catherine Gramze wrote:

I intend to build a computer for the specific purpose of running
Debian. I have had a bad experience with a store-bought computer,
which seemed to be wholly unable to boot to anything but Windows 8 -
there was no option in the BIOS to boot to the hard drive, or even
to the EFI partition, but only to the Windows Boot Manager. Even
with Secure Boot turned off.

It looks like you ran into the MS Window 8 Restricted Boot problem.

http://www.fsf.org/search?SearchableText=secure+boot


So, I am looking for recommendations on hardware, particularly
motherboards, known to play nicely with Debian and boot
consistently. Building my own system is not new to me, but something
I have not done for 10 years or so, so the appropriate BIOS settings
on the new EFI and UEFI mobos are unknown to me. All advice is
solicited.

Check the dates on these older postings (time flies and the best
hardware moves along) but here are two references:

http://lists.debian.org/debian-user/2012/10/msg01189.html

http://lists.debian.org/debian-user/2013/04/msg00180.html

Recently one of my friend's brother bought a Lenova laptop. My friend asked
me to install Ubuntu in that laptop but that machine was not detecting
Ubuntu and directly booting into Windows 8. Then after bit of struggle, we
went into the bios and changed the boot mode from 'UEFI' mode to 'Legacy'
mode. Since we were installing by pen drive, we changed the boot order also.

After the above steps, it detected Ubuntu and we finally installed Ubuntu
along with Windows (as dual boot). The problem starts now. We were not able
to boot windows from the Ubuntu grub menu boot entry. If we want to boot
Windows 8, we need to change back the boot entry to UEFI mode in the bios
and then only Windows boots from Windows boot manager.

The reason behind this is Windows 8 is been made to boot only in UEFI mode
and hence the OEM vendors (like Lenova) are configuring their machines
accordingly. They don't mind/care about other free software OS.

Just before this instance, I bought a laptop prebuild with Ubuntu (DELL
vostro 2420). It doesn't had these circus as it was shipped with Ubuntu.

I thought of sharing this details with you and our fellow community. The
link 'http://www.fsf.org/search?SearchableText=secure+boot' shared by Bob
gives you some insight on this restricted booting.

Please don't top post.

And please don't conflate the fact that you couldn't install Ubuntu on
a Lenovo with UEFI with the fact that it cannot be done.

I've just installed Ubuntu on a Lenovo and it's the seventh such
install on UEFI laptops.

The FSF usually stakes out extreme positions.

Some debunking of Secure Boot myths by the (main) developer of the
Secure Boot shim:

http://mjg59.dreamwidth.org/10971.html

Your fact is not.  I installed Debian Sid on a G500 a few
months ago and it dual-boots with Win8.  The trick is to
use the smaller alternative power button to the right of
the large power button, which ignores Legacy.  Perhaps
other Lenovo laptops are not so equipped but the G500 is.

Hi David,

As of last month, I had done close to 10 installations of GNU/Linux 
as dual boot with Windows (XP and Windows 7). This particular Lenovo 
laptop, which had Windows 8 installed in UEFI mode, had issues 
installing Ubuntu. When I insert the Ubuntu (12.04 LTS) USB boot 
stick, it does not even recognize the OS. The machine detects Ubuntu 
only when I turn UEFI off in favor of Legacy mode.
At the same time, I purchased my own laptop (Dell Vostro 2420), which 
was pre-installed with Ubuntu. When I checked it, it was set to Legacy 
boot by default. Also, as per the technical journals I have read, 
GNU/Linux distributions don't have their own UEFI signing keys. Please 
correct me with some more details if I am wrong.



--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org 
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org

Archive: http://lists.debian.org/5245a330.7090...@gmail.com



Re: Building computer - power supplies

2013-09-27 Thread Doug
On 09/27/2013 09:21 AM, Ralf Mardorf wrote:
/snip/

 However, undersized voltage for capacitors for switching power supply is
 an often used fraud against consumers and switching power supplies
 pollute the mains. Exotic resistors sometimes make them a PITA when you
 want to repair one and you don't have those resistors on stock. I
 dislike all switching power supplies.
 
 Regards,
 Ralf
 
 
Not sure what you mean by undersized voltage for capacitors. Good
engineering practice says that there should be plenty of headroom on
capacitor voltage rating. IOW, if you're filtering a 15 volt supply,
you shouldn't use a 16 volt capacitor, but at least a 20 volt one.
Or more. MIL-specs require double the working voltage, in most cases.
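
As a worked example of that rule with the numbers above: a 15 volt rail
under the MIL-spec 2x guideline needs a part rated for at least 30 volts,
and the next standard electrolytic rating above that is 35 volts; the
16 volt part warned against leaves less than 7% headroom.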

While it's true that switchers produce RF interference, that can be
handled by proper shielding of the supply and filtering of the AC line,
usually using ferrite components. Modern power supplies for sensitive
radio receivers are designed using switching techniques and proper
filtering, and the receivers work without interference.

A further point: Common computer supplies are rated around 450 Watts.
You wouldn't want to lift the transformer that would supply 450 Watts,
much less take up the space for it. So switching supplies are the only
reasonable alternative.

--Doug, retired electronic engineer and radio amateur, WA2SAY


-- 
Blessed are the peacemakers..for they shall be shot at from both sides.
--A.M.Greeley





Re: Re: Building computer

2013-09-27 Thread Tom H
On Fri, Sep 27, 2013 at 6:38 AM, David L. Craig dlc@gmail.com wrote:
 On 13Sep26:2109-0400, Tom H wrote:
 On Tue, Sep 24, 2013 at 10:30 PM, Balamurugan emailstorb...@gmail.com 
 wrote:
 On 09/25/2013 04:59 AM, Bob Proulx wrote:
 Catherine Gramze wrote:

 I intend to build a computer for the specific purpose of running
 Debian. I have had a bad experience with a store-bought computer,
 which seemed to be wholly unable to boot to anything but Windows 8 -
 there was no option in the BIOS to boot to the hard drive, or even
 to the EFI partition, but only to the Windows Boot Manager. Even
 with Secure Boot turned off.

 It looks like you ran into the MS Windows 8 Restricted Boot problem.

http://www.fsf.org/search?SearchableText=secure+boot

 So, I am looking for recommendations on hardware, particularly
 motherboards, known to play nicely with Debian and boot
 consistently. Building my own system is not new to me, but something
 I have not done for 10 years or so, so the appropriate BIOS settings
 on the new EFI and UEFI mobos are unknown to me. All advice is
 solicited.

 Check the dates on these older postings (time flies and the best
 hardware moves along) but here are two references:

http://lists.debian.org/debian-user/2012/10/msg01189.html

http://lists.debian.org/debian-user/2013/04/msg00180.html

 Recently a friend's brother bought a Lenovo laptop. My friend asked me to
 install Ubuntu on that laptop, but the machine was not detecting Ubuntu and
 booted directly into Windows 8. After a bit of struggle, we went into the
 BIOS and changed the boot mode from 'UEFI' to 'Legacy'. Since we were
 installing from a pen drive, we changed the boot order as well.

 After the above steps, it detected Ubuntu and we finally installed Ubuntu
 alongside Windows (as dual boot). The problem starts now: we were not able
 to boot Windows from the Windows entry in the Ubuntu GRUB menu. If we want
 to boot Windows 8, we have to switch the boot mode back to UEFI in the
 BIOS, and only then does Windows boot from the Windows Boot Manager.

 The reason behind this is that Windows 8 has been made to boot only in
 UEFI mode, and hence OEM vendors (like Lenovo) are configuring their
 machines accordingly. They don't mind/care about other free software OSes.

 Just before this incident, I bought a laptop preinstalled with Ubuntu
 (Dell Vostro 2420). It didn't involve any of this circus, as it was
 shipped with Ubuntu.

 I thought of sharing these details with you and our fellow community. The
 link 'http://www.fsf.org/search?SearchableText=secure+boot' shared by Bob
 gives some insight into this restricted booting.

 Please don't top post.

 And please don't conflate the fact that you couldn't install Ubuntu on
 a Lenovo with UEFI with the claim that it cannot be done.

 I've just installed Ubuntu on a Lenovo and it's the seventh such
 install on UEFI laptops.

 The FSF usually stakes out extreme positions.

 Some debunking of Secure Boot myths by the (main) developer of the
 Secure Boot shim:

 http://mjg59.dreamwidth.org/10971.html

 Your fact is not. I installed Debian Sid on a G500 a few
 months ago and it dual-boots with Win8. The trick is to
 use the smaller alternative power button to the right of
 the large power button, which ignores Legacy. Perhaps
 other Lenovo laptops are not so equipped but the G500 is.

I have no idea what "Your fact is not" means, but I've just checked my
Y510P and it has an alternative power button (a Novo button in
Lenovo-speak) on the left side of the laptop; when you use it, you boot
to a screen that lets you choose between the UEFI firmware setup and the
UEFI boot manager, as well as resetting the laptop's Windows
installation. I'll hopefully remember this the next time I set up a
Lenovo...
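
For what it's worth, once any Linux is booted in UEFI mode, the firmware
boot entries and their order can also be inspected and changed without the
Novo button at all. A sketch using efibootmgr (packaged in Debian; the
entry numbers below are hypothetical, so list first):

    efibootmgr -v             # list the firmware boot entries and their paths
    efibootmgr -o 0001,0000   # example: move entry 0001 ahead of 0000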





Re: Re: Re: Building computer

2013-09-27 Thread David L. Craig
On 13Sep27:2054+0530, Balamurugan wrote:
 On 09/27/2013 04:08 PM, David L. Craig wrote:
 Your fact is not.  I installed Debian Sid on a G500 a few
 months ago and it dual-boots with Win8.  The trick is to
 use the smaller alternative power button to the right of
 the large power button, which ignores Legacy.  Perhaps
 other Lenovo laptops are not so equipped but the G500 is.
 Hi David,
 
 Up to last month, I had done close to 10 installations of GNU/Linux as
 dual boot with Windows (XP and Windows 7). This particular Lenovo laptop,
 which had Windows 8 installed in UEFI mode, had issues installing Ubuntu.
 When I inserted the Ubuntu (12.04 LTS) USB boot stick, it did not even
 recognize the OS. The machine detected Ubuntu only when I switched from
 UEFI to Legacy mode.
 Around the same time, I purchased my own laptop (Dell Vostro 2420), which
 was pre-installed with Ubuntu. When I checked it, it was set to Legacy
 boot by default. Also, as per the technical journals I have read,
 GNU/Linux distributions don't have their own UEFI signing keys. Can you
 please correct me with some more details if I am wrong?

I am at a disadvantage because I relinquished the laptop about
a month ago to be returned to Lenovo for warranty repair and
the memory is somewhat dim.  The BIOS was configured for Legacy
boot.  I enabled USB booting in the BIOS as needed and kept it
normally unenabled.  I installed Linux Mint XFCE into a hard
drive partition.  I discovered the main power button will always
boot up Win8 in UEFI mode but the smaller power buttona, designed
for the Lenovo One-Key recovery facility, brings up a boot menu
that includes the hard drive partitions and USB drives if such are
configured as bootable.  I hope this is helpful.
-- 
not cent from sell
May the LORD God bless you exceedingly abundantly!

Dave_Craig__
So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe.
__--from_Nightfall_by_Asimov/Silverberg_





Re: Building computer

2013-09-27 Thread Stephen Powell
On Thu, 26 Sep 2013 22:57:43 -0400 (EDT), Beco r...@beco.cc wrote:
 
 Questions for people who compile kernels, about their machines:
 
 How long would a kernel compilation that is considered fast take? I'd like
 to have a clue. And on what kind of computer (processor / RAM / anything
 else relevant)?
 
 Thanks!
 Beco.

I haven't been following this thread, but I happened to notice this
particular post.

That depends on many factors, such as the speed of your processor, the
speed of your hard disk drive, how you have your kernel configured, the
architecture you use and how many drivers are available for it, etc.
For my 32-bit Xeon processor (2.40 GHz) compiling for the i386 architecture,
using a kernel configuration not too different from a stock Debian kernel,
it takes hours.  The machine has 4 GB of RAM.
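
For anyone who wants a comparable number for their own machine, a rough
benchmark is to time the build itself (a sketch assuming an unpacked,
configured kernel tree and the usual build dependencies; the source path
is hypothetical, and -j should match your core count):

    cd ~/src/linux            # wherever your kernel tree lives
    make defconfig            # or reuse your own .config
    time make -j"$(nproc)"    # the wall-clock "real" time is the figure to quote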

-- 
  .''`. Stephen Powell
 : :'  :
 `. `'`
   `-





Re: Building computer - power supplies

2013-09-27 Thread Joel Rees
On Sat, Sep 28, 2013 at 2:29 AM, Doug dmcgarr...@optonline.net wrote:
 On 09/27/2013 09:21 AM, Ralf Mardorf wrote:
 /snip/

 However, undersized voltage for capacitors for switching power supply is
 an often used fraud against consumers and switching power supplies
 pollute the mains. Exotic resistors sometimes make them a PITA when you
 want to repair one and you don't have those resistors on stock. I
 dislike all switching power supplies.

 Regards,
 Ralf


 Not sure what you mean by undersized voltage for capacitors.

Ralf seems to speak German more natively than English, and sometimes
he reverses grammatical order. I think, from the general tenor of his
posts, he intended to say something like, capacitors for undersized
voltages.

Hmm. Anyway, what you would call capacitors rated at voltages too low
for the circuit. Underrated in the bad sense.

My impression is that the specification rating in the power supply
design is less the culprit than the capacitor manufacturer pushing the
edges on their QC. But the upshot is that capacitors are exposed to
higher voltages and/or effective power than they can handle and get
burned; it is a manufacturing problem, and sometimes an engineering
problem.

 Good
 engineering practice says that there should be plenty of headroom on
 capacitor voltage rating. IOW, if you're filtering a 15 volt supply,
 you shouldn't use a 16 volt capacitor, but at least a 20 volt one.
 Or more. MIL-specs require double the working voltage, in most cases.

Well, you definitely don't generally see double in consumer designs.
Sometimes I've seen 5V rated caps filtering 5V power output in bad
switching designs.

 While it's true that switchers produce RF interference, that can be
 handled by proper shielding of the supply and filtering of the AC line,
 usually using ferrite components.

And consumer grade power supplies often cut corners on their design,
sometimes even failing to use real ferrite. (Ever seen a fake
ferrite? Fool's gold, heh.)

 Modern power supplies for sensitive
 radio receivers are designed using switching techniques and proper
 filtering, and the receivers work without interference.

 A further point: Common computer supplies are rated around 450 Watts.
 You wouldn't want to lift the transformer that would supply 450 Watts,
 much less take up the space for it. So switching supplies are the only
 reasonable alternative.

Yeah, linear is no panacea. (From my limited hobby experience, I can
tell stories of linear power supplies that were way noisier than your
common marginal-design switching PS.)

The problem (for anyone reading who is not aware of this) is not
switching vs. linear, it's the sales and manufacturing practices
encouraged by stingy boards of directors.

 --Doug, retired electronic engineer and radio amateur, WA2SAY


 --
 Blessed are the peacemakers..for they shall be shot at from both sides.
 --A.M.Greeley

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.





Re: Building computer - power supplies

2013-09-27 Thread Joel Rees
On Fri, Sep 27, 2013 at 9:36 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 A point I forgot to make.  This is something everyone should know.

 Subject:  The marketing myth of multiple +12V rails
 [...]

What I want to know is why Intel CPUs still need the +12V.
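
(The arithmetic hints at the answer: a 95 W CPU fed directly at a ~1.2 V
core voltage would draw roughly 95 / 1.2, about 80 A, while the same power
at 12 V is only about 8 A. Carrying 12 V across the board to voltage
regulators next to the socket keeps the currents, and hence the copper,
manageable.)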

Have you considered copying these kinds of posts to a blog, Stan?
Might be easier to find them again.

(I often do exactly that when I don't want to forget something. Then I
forget it's in one of my blogs. I mix politics and religion and
technology and philosophy up too freely in my blogs, even though part
of the purpose of creating multiple blogs was to separate them by
purpose.)

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.





Re: Building computer - power supplies

2013-09-27 Thread Ralf Mardorf
On Sat, 2013-09-28 at 08:25 +0900, Joel Rees wrote:
 But the upshot is that capacitors are exposed to higher
 voltages and/or effective power than they can handle and get burned;
 it is a manufacturing problem, and sometimes an engineering
 problem.

And sometimes vendors knowingly use undersized capacitors, so that they
burn out a while after the warranty period has ended. This is a known
issue at German consumer centers. I experienced it with the PSU of a
Behringer mixing console. I was an engineer and can repair it myself, but
I don't have the needed equipment at home; a friend who still works as an
engineer for a company can help me. For this particular PSU the repair
was easier to do with hot-air soldering. Sure, without hot air the
soldering isn't impossible, but it is already hard for experienced
engineers and perhaps impossible for average people. Vendors design
things to break after the warranty period ends, and they also try to
make things irreparable.





Re: Building computer

2013-09-26 Thread Ralf Mardorf
On Thu, 2013-09-26 at 00:05 -0500, Stan Hoeppner wrote:
 What desktop applications are you using that require 8GB, let alone
 16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over buy
 DRAM, that's a personal choice.

I agree, even for heavy audio production I never needed the 4 GiB of RAM
I've got, but some audio experts say 8 GiB should be better than 4 GiB.
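
One way to settle that for a given workload is simply to measure it while
a full session is running (free is part of procps, installed on any
Debian system):

    free -m    # the "-/+ buffers/cache" line shows memory actually in use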

 Overkill again.  A 400W PSU

My tower PC is filled to capacity, e.g. 3 sound cards, and there are no
issues with a 420 W PSU. Take care that the PSU has a large fan, so that
it stays inaudible.

 Good choice.  I'm using 4 of the Hyper 101s, two of them in an old dual
 socket server.  Good design, quality, great cooling, and quiet.

Here a default CPU cooler was never audible or insufficient; they were
always silent and cooled the CPUs as needed. Again, for heavy audio work,
set CPU frequency scaling to performance.
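
A sketch of setting that governor by hand, assuming the cpufreq sysfs
interface (run as root; the exact paths can vary with the scaling driver):

    for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
        echo performance > "$g"    # pin each core at its full clock
    done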

 By the time you *need* to upgrade, you'll want newer faster DRAM

That, btw., will be less expensive than today's RAM, and you might need
more RAM by then.

I don't know what graphics card is needed for gaming. However, I'm using
a graphics card without a fan; only the HDDs are audible, because I've
got a cheap case. Spending some money on a good case seems to be a good
idea.

Regards,
Ralf





Re: Building computer

2013-09-26 Thread Joel Rees
(sort-of-thread-hijacking)

On Thu, Sep 26, 2013 at 2:05 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:
[...]
 And I'm at a loss here.  The detailed list below suggests that you were
 sand bagging to an extent in your previous post.

 Haswell quad-core i5

 [...]

 8 gig DDR3 1600 ram, Corsair (should I get 16 and use all 4 slots?)

 What desktop applications are you using that require 8GB, let alone
 16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over buy
 DRAM, that's a personal choice.  It will likely not improve performance
 in any meaningful way, for WOW in Wine, or anything else.

Maybe she's trying to avoid admitting that she's really intending to
emulate an SH4 under QEMU to port Android? ;-)

 [...]  Disclaimer:  For many years I've been
 AMD only and don't tend to keep up with Intel's offerings.

Stan, joking aside, are there any AMD processors you would recommend
for doing kernel and driver level Android/ARM development in a VM?

How likely would I be to find such a processor in a netbook or laptop?

--
Joel Rees

Be careful where you see conspiracy.
Look first in your own heart.





Re: Building computer

2013-09-26 Thread Stan Hoeppner
On 9/26/2013 5:45 AM, Joel Rees wrote:
 (sort-of-thread-hijacking)
 
 On Thu, Sep 26, 2013 at 2:05 PM, Stan Hoeppner s...@hardwarefreak.com wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:
 [...]
 And I'm at a loss here.  The detailed list below suggests that you were
 sand bagging to an extent in your previous post.

 Haswell quad-core i5

 [...]

 8 gig DDR3 1600 ram, Corsair (should I get 16 and use all 4 slots?)

 What desktop applications are you using that require 8GB, let alone
 16GB, of RAM?  I'd think 4 would be plenty.  If you wish to over buy
 DRAM, that's a personal choice.  It will likely not improve performance
 in any meaningful way, for WOW in Wine, or anything else.
 
 Maybe she's trying to avoid admitting that she's really intending to
 emulate an SH4 under QEMU to port Android? ;-)
 
 [...]  Disclaimer:  For many years I've been
 AMD only and don't tend to keep up with Intel's offerings.
 
 Stan, joking aside, are there any AMD processors you would recommend
 for doing kernel and driver level Android/ARM development in a VM?
 
 How likely would I be to find such a processor in a netbook or laptop?

How likely is one to find a kernel developer doing any real work on a
laptop?  Throwing a netbook into the question is just silly.

-- 
Stan





Re: Building computer

2013-09-26 Thread Rob Owens
On Tue, Sep 24, 2013 at 08:11:31PM -0700, Dan Hitt wrote:
 Hi Catherine,
 
 This is regarding the question of what computer hardware to buy that
 will work with debian, given that store-bought computers were a
 problem this latest time.
 
 First, what is the name of the computer you actually bought?  What the
 other posters are saying about UEFI is probably true, but with a
 specific computer model, somebody may be able to provide a specific
 workaround.  (For example, we bought some Gateways and had to figure
 out just what the magic incantation was to get them to boot from USB
 and put Ubuntu on them.)
 
At the very least it will inform all of us so that we may avoid that
particular model/manufacturer/vendor.

-Rob




Re: Building computer

2013-09-26 Thread Doug
On 09/26/2013 01:05 AM, Stan Hoeppner wrote:
 On 9/25/2013 12:52 PM, Catherine Gramze wrote:

 On Sep 25, 2013, at 12:52 AM, Stan Hoeppner s...@hardwarefreak.com wrote:


/very large snip/

I just left this final section to illustrate the tone of the whole thing:
 
 The socket vs upgrade concern is not valid.  It will be at least 10
 years before desktop applications, especially Linux apps, are
 sufficiently threaded to take advantage of today's 4 core CPUs, let
 alone 6/8 cores.  New hardware doesn't make old hardware obsolete.  New
 software does.  There are many people, including myself, who would say
 this applies even to dual core CPUs.  And in fact, at the rate of
 desktop software development today, WRT threads, one is better off
 purchasing a higher clocked dual core CPU with lots of cache than a quad
 core model at a lower clock, especially given power consumption.
 
 Worrying about which future CPU will fit in your mobo socket is a waste
 of brain cycles.  It's irrelevant.  By the time you *need* to upgrade,
 you'll want newer faster DRAM, etc.  So you'll end up replacing the
 board, CPU, and memory, as people have been doing for some years
 already.  Oh, add to that the fact that AMD and Intel keep changing
 sockets to accelerate this trend.  The entire industry is better off
 when people keep buying new hardware, obviously, even if the consumers
 aren't.
 

Stan, I think the advice you've given Catherine will be useful to a lot
of folks who read this list, particularly in regard to RAM usage and
power supply size.

I have heard, however, that the _make_ of power supply may be important,
as some cheapies may not hold up. You might comment on that, and if
that's correct, possibly recommend a name or two.

An excellent presentation. I have put it in my saved file.

--doug
-- 
Blessed are the peacemakers..for they shall be shot at from both sides.
--A.M.Greeley





Re: Building computer

2013-09-26 Thread Catherine Gramze

I contacted MSI to ask if they made a motherboard I could use. Here is the 
exchange:

Me: I want to purchase a motherboard for a Linux computer. I would prefer one 
that would accept the new Haswell processor. The BIOS must allow for Secure 
Boot to be disabled, and CSM mode to be enabled, i.e. UEFI disabled. This would 
obviously be a board that is not compliant with Microsoft's new 
Connected Standby feature. Do you make such a board, and if you do, which 
board might it be?
MSI tech: MSI does not perform and provide any Linux support for mb, only 
Windows.

This looks to me like a stock refusal to even consider a technical question 
(because I mentioned Linux) rather than an affirmation that they have 
implemented Connected-standby and that their BIOS lacks the options to disable 
Secure Boot and UEFI.

Zareason will cost me a solid $500 more than building my own (and that's 
including buying and installing the Radeon 7870 myself, as they want $350 for 
it and I can get it for about $200) but it may be the only way to get a Haswell 
chip on a board that is not BIOS restricted to Windows. 

I visited my local computer shop and the owner was clueless about Secure Boot, 
UEFI, and Connected-standby.




Re: Building computer

2013-09-26 Thread Ralf Mardorf
On Thu, 2013-09-26 at 13:45 -0400, Catherine Gramze wrote:
 MSI tech: MSI does not perform and provide any Linux support for mb,
 only Windows.

I don't know of any vendor, for any kind of hardware, that supports
Linux; even the Linux-friendly vendors that cooperate with the Linux
community usually don't support Linux. That Linux isn't supported
doesn't mean that they don't care about Linux.






Re: Building computer

2013-09-26 Thread Catherine Gramze

On Sep 26, 2013, at 1:57 PM, Ralf Mardorf ralf.mard...@alice-dsl.net wrote:

 I don't know of any vendor, for any kind of hardware, that supports
 Linux; even the Linux-friendly vendors that cooperate with the Linux
 community usually don't support Linux. That Linux isn't supported
 doesn't mean that they don't care about Linux.

Yes, but I wasn't even asking about support. I was asking about two specific 
capabilities of the BIOS in a motherboard, which are OS-agnostic. Disabling 
Secure Boot and enabling CSM.




Re: Building computer

2013-09-26 Thread Catherine Gramze

On Sep 26, 2013, at 2:08 PM, Darko Gavrilovic d.gavrilo...@gmail.com wrote:

 What's the motherboard model number? A quick Google indicates you may
 not be able to disable UEFI but you can enable legacy compatibility
 mode for MSI m/b's.

The specific model I am considering is the MSI H87-43G.





Re: Building computer

2013-09-26 Thread David Christensen

On 09/26/13 10:45, Catherine Gramze wrote:

I contacted MSI to ask if they made a motherboard I could use. ...
This looks to me like a stock refusal to even consider a technical question ...


Ideas:

1.  Pick a motherboard and start a fresh post with the make and model in 
the subject -- e.g. "MSI H87-43G compatible with Debian 7.1 amd64?". 
You might get replies from readers who have that exact board and OS.


2.  Find a retailer who will agree to assemble the parts you pick for a 
fee and allow you to install and test whatever OS's you want on their 
premises prior to making a purchasing decision.  (I believe Fry's 
Electronics charges $50 to assemble a computer and install an OS.)


3.  Find a seller who will agree to a full refund within X days if the 
parts you buy don't work with whatever OS you pick.  (You may need to 
provide or pay shipping one or both ways.)



David





Re: Building computer

2013-09-26 Thread Ralf Mardorf
On Thu, 2013-09-26 at 14:02 -0400, Catherine Gramze wrote:
 
 On Sep 26, 2013, at 1:57 PM, Ralf Mardorf ralf.mard...@alice-dsl.net
 wrote:
 
  I don't know any vendor, for any kind of hardware, that does support
  Linux, even not the Linux friendly vendors that cooperate with the
  Linux
  community usually don't support Linux. That Linux isn't supported,
  doesn't mean that they don't care about Linux.
 
 Yes, but I wasn't even asking about support. I was asking about two
 specific capabilities of the BIOS in a motherboard, which are
 OS-agnostic. Disabling Secure Boot and enabling CSM.

They are afraid to answer, because if something should be possible but
then causes issues anyway, they won't bear the blame. I have very often
experienced hardware that should work with Linux, even when recommended
by the community, that didn't/doesn't work on my machine. If they gave
you an answer, they would already be providing a little bit of Linux
support.

To be fair, supporting hardware and drivers for Linux isn't easy,
because the advantage of Linux, its diversity and continuous
development, is a drawback in this domain.






Re: Building computer

2013-09-26 Thread Catherine Gramze

On Sep 26, 2013, at 2:12 PM, Catherine Gramze rhia...@gmail.com wrote:

 
 On Sep 26, 2013, at 2:08 PM, Darko Gavrilovic d.gavrilo...@gmail.com wrote:
 
 What's the motherboard model number? A quick Google indicates you may
 not be able to disable UEFI but you can enable legacy compatibility
 mode for MSI m/b's.
 
 The specific model I am considering is the MSI H87-43G.

Er, make that the H87-G43. I have emailed MSI again and asked, without 
mentioning Linux at all, if the BIOS on this motherboard allows Secure Boot to 
be disabled, allows CSM mode to be enabled, and if it is connected-standby 
compliant. 
 





Re: Building computer

2013-09-26 Thread Rob Owens
On Thu, Sep 26, 2013 at 01:45:20PM -0400, Catherine Gramze wrote:
 
 I contacted MSI to ask if they made a motherboard I could use. Here is the 
 exchange:
 
 Me: I want to purchase a motherboard for a Linux computer. I would prefer one 
 that would accept the new Haswell processor. The BIOS must allow for Secure 
 Boot to be disabled, and CSM mode to be enabled, i.e. UEFI disabled. This 
 would obviously be a board that is not compliant with Microsoft's new 
 Connected Standby feature. Do you make such a board, and if you do, which 
 board might it be?
 MSI tech: MSI does not perform and provide any Linux support for mb, only 
 Windows.
 
 This looks to me like a stock refusal to even consider a technical question 
 (because I mentioned Linux) rather than an affirmation that they have 
 implemented Connected-standby and that their BIOS lacks the options to 
 disable Secure Boot and UEFI.
 
Well, you've given me enough reason to not purchase anything from MSI.
There are lots of motherboard manufacturers out there.  Thanks for
helping me narrow down my list of choices.

Good luck with your search.

-Rob




Re: building computer

2013-09-26 Thread Catherine Gramze
I have received a response from MSI again. I do like the swiftness of their 
email responses, even if the responses are a bit cryptic.

Me: Please advise me as to whether this motherboard: 
http://us.msi.com/product/mb/H87-G43.html#/?div=Overview 
1. allows Secure Boot to be disabled and 
2. whether it allows CSM mode to be enabled and 
3. whether it is Connected-standby compliant.

MSI tech: Default is disabled but can be enabled, default is CSM mode (UEFI + 
Legacy) but can be switched to pure UEFI mode. Connected-standby is required by 
Windows 8. The board has Windows 8 configuration that will do all of the above 
if you are going to do Win8 deployment.

Am I nuts, or does this answer seem to be the complete opposite of the previous 
answer? According to this answer the board seems to be what I want. 

In a message yesterday, Charles Kroeger mentioned that part of 
Connected-standby was an inability to enable CSM. This answer suggests the 
board can do either. Can Charles Kroeger give a source for his information?

With the release of Windows 8 in October 2012, Microsoft's certification
requirements now require that computers include firmware that implements the
UEFI specification. Furthermore, if the computer supports the Connected
Standby feature of Windows 8, then the firmware is not permitted to contain
a Compatibility Support Module (CSM). As such, systems that support
Connected Standby are incapable of booting Legacy BIOS operating systems.




Re: building computer

2013-09-26 Thread Charles Kroeger
On Thu, 26 Sep 2013 21:50:01 +0200
Catherine Gramze rhia...@gmail.com wrote:

 I have received a response from MSI again. I do like the swiftness of
 their email responses, even if the responses are a bit cryptic.

 Me: Please advise me as to whether this motherboard:
 http://us.msi.com/product/mb/H87-G43.html#/?div=Overview
 1. allows Secure Boot to be disabled and
 2. whether it allows CSM mode to be enabled and
 3. whether it is Connected-standby compliant.

 MSI tech: Default is disabled but can be enabled, default is CSM mode
 (UEFI + Legacy) but can be switched to pure UEFI mode. Connected-standby
 is required by Windows 8. The board has Windows 8 configuration that will
 do all of the above if you are going to do Win8 deployment.

 Am I nuts, or does this answer seem to be the complete opposite of the
 previous answer? According to this answer the board seems to be what I
 want.

 In a message yesterday,

(to you not the list)

 Charles Kroeger mentioned that part of Connected-standby was an inability
 to enable CSM. This answer suggests the board can do either.

 With the release of Windows 8 in October 2012, Microsoft's certification
 requirements now require that computers include firmware that implements
 the UEFI specification. Furthermore, if the computer supports the
 Connected Standby feature of Windows 8, then the firmware is not
 permitted to contain a Compatibility Support Module (CSM). As such,
 systems that support Connected Standby are incapable of booting Legacy
 BIOS operating systems.

 Can Charles Kroeger give a source for his information?

He can:

https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface

sub heading: Platforms using EFI/UEFI, 10th paragraph down

-- 
CK




Re: building computer

2013-09-26 Thread Catherine Gramze

On Sep 26, 2013, at 8:27 PM, Charles Kroeger ckro...@frankensteinface.com 
wrote:

 
 Can Charles Kroeger give a source for his information?
 
 He can:
 
 https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface
 
 sub heading: Platforms using EFI/UEFI  10th paragraph down

I figured you could, CK. Thank you for responding with the source. 

At least in part due to the info you provided, I have pulled the trigger and, 
encouraged by my husband to go with the sure thing, ordered a computer from 
Zareason with the specs I prefer, except for the video card and power supply. 
Those can be a future upgrade if needed. It would be valuable information for 
the community for me to buy a new board and see if it works, but my husband 
and I are just too fiscally conservative in our retirement to risk it. OTOH, 
that fiscal conservatism (always pay cash) puts us in a place where we can 
both buy a new computer without breaking the budget, because we already saved 
up the money for them ahead of time.



