Re: [gentoo-user] CMYK comparison to sRGB between platforms

2015-09-10 Thread Peter Humphrey
On Wednesday 09 September 2015 14:41:19 Mick wrote:

> Would you mind explaining how it works?  You measure the icc of a monitor -
> what do you do with this then?  Do you need to be running something like
> colord all the time to feed some correction data to xrandr?

You get a live DVD (Fedora) with the calibration program and some user notes. 
The device comes with a strap to hold it against the middle of the screen, and 
a 6' USB lead. The measuring process is straightforward, though complicated 
for me by the fact that my screen is LED, not LCD. Still, I told it to treat 
it as an LCD and the result, though a bit bright for my eyes, appears accurate 
enough. It also knows about CRTs and projectors.

Once the calibration is complete (about 10 minutes for the standard 
calibration) you have to copy the .icc directory from ~/.local/share to a USB 
stick or something, then reboot into your usual system and double-click on the 
file in your GUI file manager. That transfers the data to the monitor, 
apparently permanently.

Simple, once you get out of the habit of using the CLI. Well, it would be, 
except that I had to run:

$ find / -iname \*.icc 2> /dev/null
$ mv .local/share/icc .

Then I could see the icc folder in the file manager and drag it to the USB 
stick.

As for dual monitors: the calibration program on the DVD asks you to choose 
the monitor to calibrate, so it can detect more than one at a time, but I 
don't know how transferring the .icc data in the main system would work with 
two monitors. You might have to download and install the client tools.

HTH.

-- 
Rgds
Peter




[gentoo-user] Re: new computer : any advice ?

2015-09-10 Thread james
Fernando Rodriguez  outlook.com> writes:



> > albeit in its infancy. Naturally it's going to take a while to 
> > become mainstream useful; but that's more like a year or two, at most.
> 
> The value I see in that technology for desktop computing is that we get
> the GPUs for what they're made for (graphics processing) but their
> resources go unused by most applications, not in buying powerful GPUs
> for the purpose of offloading general-purpose code; if that's the goal,
> you're better off investing in more general-purpose cores that are
> better suited to the task.


I think most folks, when purchasing a workstation, include a graphics
card on the list of items to include. So my suggestions were geared
towards informing folks about some of the new features of gcc that
may entice them to consider the graphics card resources in an
expanded vision of general resources for their workstation.


> To truly take advantage of the GPU, the actual algorithms need to be
> rewritten to use features like SIMD and other advanced parallelization
> features; most desktop workloads don't lend themselves to that kind of
> parallelization.

Not true, if what OpenACC hopes to achieve does indeed become a reality.
Currently, you are most correct. Things change; I'm an optimist because 
I see what is occurring in embedded devices, arm64, and cluster codes.
YMMV.

> That is why, despite similar predictions about how OpenMP-like parallel
> models would obsolete the current threads model ever since they were
> first proposed, it hasn't happened yet.

Yes it's still new technology, controversial, just like systemd, clusters,
and Software Defined Networks.


> Even for the purpose of offloading general purpose code, it seems that
> with all the limitations on OpenACC kernels, few desktop applications can
> take advantage of it (and noticeably benefit from it) without major
> rewrites. Off the top of my head: audio and video/graphics encoders, and
> a few other things that max out the CPU and can be broken into
> independent execution units.


You are taking a very conservative view of things. Codes being worked
out now for clusters will find their way to expanding the use of
video card resources for general purpose things. Most of this will
occur as compiler enhancements, not rewriting by hand or modifying the
algorithmic designs of existing codes. Granted, they are going to
mostly apply to multi-threaded application codes.
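
To illustrate the kind of thing I mean, here is a minimal OpenACC sketch
(the pragma is standard OpenACC; the toy loop and array sizes are mine,
not from any real code):

#include <stdio.h>

#define N 100000

int main(void)
{
    float x[N], y[N];

    for (int i = 0; i < N; i++) {
        x[i] = (float)i;
        y[i] = 2.0f * (float)i;
    }

    /* the directive asks the compiler to generate the offload code;
       the loop body itself is unchanged */
    #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
    for (int i = 0; i < N; i++)
        y[i] += 3.0f * x[i];

    printf("y[100] = %f\n", y[100]);
    return 0;
}

Built with gcc-5's -fopenacc switch; without a configured accelerator it
should simply run on the host, which is why this can arrive as a compiler
enhancement rather than a hand rewrite.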


When folks buy new hardware, it is often a good time to look at what
is on the horizon for the computers they use. All I have pointed out is
a very active area that benefits folks to review for themselves. I'm not
pushing expenditures of any kind on any hardware.

Caveat Emptor.



James








Re: [gentoo-user] Re: new computer : any advice ?

2015-09-10 Thread Rich Freeman
On Thu, Sep 10, 2015 at 8:20 AM, james  wrote:
> I think most folks, when purchasing a workstation, include a graphics
> card on the list of items to include. So my suggestions were geared
> towards informing folks about some of the new features of gcc that
> may entice them to consider the graphics card resources in an
> expanded vision of general resources for their workstation.

Sure, but keep in mind depreciation.

If all you need today is a $30 graphics card, then you probably should
just spend $30.  If you think that software will be able to use all
kinds of fancy features on a $300 graphics card in two years, you
should just spend $30 today, and then wait two years and buy the fancy
graphics card on clearance for $10.

It is pretty rare that it is a wise move to spend money today on
computer hardware that you don't have immediate plans to use.  The
only time it might make sense is if some kind of QA process means that
you're going to spend a lot more money on re-qualifying your system
after the upgrade than it would cost to just do it once and overspend
on hardware.  However, in general I'm not a big fan of those kinds of
QA practices in the first place.

-- 
Rich



Re: [gentoo-user] Re: new computer : any advice ?

2015-09-10 Thread Gevisz
On Thu, 10 Sep 2015 12:20:39 + (UTC) james  wrote:

> Fernando Rodriguez  outlook.com> writes:
> 
> > > albeit in its infancy. Naturally it's going to take a while to 
> > > become mainstream useful; but that's more like a year or two, at most.
> > 
> > The value I see in that technology for desktop computing is that we
> > get the GPUs for what they're made for (graphics processing) but their
> > resources go unused by most applications, not in buying powerful
> > GPUs for the purpose of offloading general-purpose code; if that's
> > the goal, you're better off investing in more general-purpose cores
> > that are better suited to the task.

It is true.
 
> I think most folks, when purchasing a workstation, include a graphics
> card on the list of items to include. So my suggestions were geared
> towards informing folks about some of the new features of gcc that
> may entice them to consider the graphics card resources in an
> expanded vision of general resources for their workstation.
> 
> > To truly take advantage of the GPU, the actual algorithms need to be
> > rewritten to use features like SIMD and other advanced parallelization
> > features; most desktop workloads don't lend themselves to that kind
> > of parallelization.

And it is also true.

> Not true, if what OpenACC hopes to achieve does indeed become a reality.

Hopes almost never become a reality.

> Currently, you are most correct.

Absolutely correct.

...

> 
> When folks buy new hardware, it is often a good time to look at what
> is on the horizon for the computers they use.

I also considered "what is on the horizon" when I bought a brand new
ATI Radeon R4770 graphics card about 6 years ago for computing purposes.

Half a year later it was discovered that it had much worse performance than
the ATI guys had hoped for and that, to improve it, they would have to
rewrite their proprietary driver for this graphics card.

Instead of doing that, they just shamelessly dropped support for the parallel
computing feature of this graphics card in all subsequent versions of their
driver.

And as far as I know, no open-source driver has ever supported the parallel
computing feature of this graphics card either.

So, it was just a waste of money. Even more: I have almost never worked at
my self-assembled, almost 7-year-old 4-core AMD computer with this graphics
card, as for all other purposes I prefer to work at my 10-year-old 2-core
AMD computer with a very cheap on-board video card, just to avoid the extra
heating and aircraft noise produced by the R4770.

So, Rich Freeman was absolutely right when he wrote in reply to your words 
above that

> If all you need today is a $30 graphics card, then you probably should
> just spend $30.  If you think that software will be able to use all
> kinds of fancy features on a $300 graphics card in two years, you
> should just spend $30 today, and then wait two years and buy the fancy
> graphics card on clearance for $10.

> It is pretty rare that it is a wise move to spend money today on
> computer hardware that you don't have immediate plans to use.




Re: [gentoo-user] Re: new computer : any advice ?

2015-09-10 Thread Gevisz
On Thu, 10 Sep 2015 21:12:37 + (UTC) james  wrote:

> Gevisz  gmail.com> writes:
> 
> > on-board video card. Just to avoid extra heating and aircraft noise 
> > produced by R4770.
> 
> Fanless video cards are wonderful. I have had many over the years but this
> one is still my (silent) favorite:
> 
> 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
> Cape Verde PRO [Radeon HD 7750 / R7 250E]

Thank you for the information.

Never say never, but according to my current mood
(which has been unchanged for the last 5 years already :),
I will never buy an ATI video card again.

And not because of the noise and heating of the ATI Radeon R4770,
but because ATI shamelessly dropped support for their much
advertised parallel computing feature for this video card.

And I am not a fan of NVidia either.

Only small, cheap and fanless on-board video cards
to just manage the monitor!

At least for the next 12 or 15 years. :)
 
> I did notice in a Gentoo blog that OpenMP is now a testing option for
> Clang 3.7. [1]
> 
> Try not to lose faith, we all have had bad experiences, but clustering,
> distributed and systems aggregation codes are rapidly coalescing into
> something wonderful, so ... keep the faith, bro.
> 
> ;-)   
> 
> wwr,
> James
> 
> 
> [1] http://blog.cafarelli.fr/2015/09/testing-clang-3-7-0-openmp-support/




Re: [gentoo-user] CMYK comparison to sRGB between platforms

2015-09-10 Thread wabenbau
Mick  wrote:

> On the same hardware I noticed that a CMYK photograph converted to
> sRGB looked mostly the same (indistinguishable) on Linux, but the
> sRGB colours were brighter on MSWindows.
> 
> I tried this by dual booting between MSWindows and Linux.
> 
> Then I tried it by running MSWindows within a VM on a Linux host and
> the MSWindows showed a clear difference in brightness between the two
> formats.
> 
> Finally, I checked on an AppleMac and the difference between the CMYK
> and sRGB photographs was even more prominent than MSWindows.
> 
> So, the Linux rendering seems to be misleading the user.  Have you
> noticed the same?
> 
> BTW, both Linux machines that I tried this on are running radeon
> drivers - are these to blame?  The AppleMac is running Intel graphics
> with its 'retina' monitor.  Is it a matter of somehow tuning the Xorg
> settings on my Linux PCs?

First I must say that even though I'm working as a photographer, I'm not 
an expert on color models. The professional exposure and print service 
that I use only accepts RGB color models. They use laser projectors to
expose photographic papers, so no conversion to CMYK is necessary. 
If I order fine art prints, they do the conversion themselves. 
All I have to do is soft-proof my pictures in Lightroom using their 
different ICC profiles, to make sure that I don't deliver pictures that 
are out of the destination gamut.
So I don't have any practical experience with CMYK pictures; I only 
have some incomplete theoretical knowledge about it.

CMYK is a subtractive color model and RGB is an additive color model;
they work completely differently. It is not possible to convert one 
into the other by simply adjusting some gamma curves or applying a 
LUT, as is done by color management systems like lcms. 

When you are watching a CMYK picture, your picture viewer has to convert
it to an RGB color space (sRGB or AdobeRGB or similar), because that is
what your monitor needs. And I think there are not many picture viewers
that are able to display a CMYK picture at all.
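
For illustration, the naive profile-less arithmetic looks something like
this (a sketch of mine only; a proper viewer should go through the
embedded ICC profile instead):

/* naive CMYK -> RGB conversion, ignoring ICC profiles entirely;
   all values normalized to the range 0.0 .. 1.0 */
void cmyk_to_rgb(double c, double m, double y, double k,
                 double *r, double *g, double *b)
{
    *r = (1.0 - c) * (1.0 - k);
    *g = (1.0 - m) * (1.0 - k);
    *b = (1.0 - y) * (1.0 - k);
}

Two viewers that differ only in whether they do this naive arithmetic or
a real profile-based conversion will already show clearly different
brightness and saturation.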

This conversion cannot be done by the graphics driver, regardless of what 
kind of OS you use. Indeed, Linux drivers can only use 8 bits per color
channel (which is really poor IMHO) while Windows can use 10 bits per
channel (depending on the graphics card), but this can't make big
differences in brightness or saturation. It only leads to smoother color
transitions in some pictures.
So I don't think that the drivers have anything to do with your problem.

Apart from the different color models (CMYK vs RGB) there exist different
color spaces (e.g. AdobeRGB and sRGB). When you convert one color space 
into another, there are parameters like black point compensation and 
different rendering intents (perceptual, relative colorimetric or absolute 
colorimetric) that can make a difference in the resulting picture.
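
With lcms2, such a conversion might look like the following sketch (the
profile file name is a placeholder; the intent constant and the black
point compensation flag are exactly the parameters that change the
result):

#include <lcms2.h>

int main(void)
{
    /* "cmyk.icc" stands in for the picture's embedded CMYK profile */
    cmsHPROFILE in  = cmsOpenProfileFromFile("cmyk.icc", "r");
    cmsHPROFILE out = cmsCreate_sRGBProfile();
    if (!in || !out)
        return 1;

    cmsHTRANSFORM xform = cmsCreateTransform(
        in,  TYPE_CMYK_8,
        out, TYPE_RGB_8,
        INTENT_RELATIVE_COLORIMETRIC,      /* vs. INTENT_PERCEPTUAL */
        cmsFLAGS_BLACKPOINTCOMPENSATION);  /* vs. no flags at all */

    unsigned char cmyk[4] = { 20, 40, 60, 10 };   /* one sample pixel */
    unsigned char rgb[3];
    cmsDoTransform(xform, cmyk, rgb, 1);

    cmsDeleteTransform(xform);
    cmsCloseProfile(in);
    cmsCloseProfile(out);
    return 0;
}

Swap the intent or drop the flag and the resulting pixel changes; two
platforms whose viewers make different choices here will disagree in just
the way you describe.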

You didn't tell us exactly what you did. That makes it difficult to 
pin down the reason for the problem. But I can think of several possible 
causes for the phenomenon you observed:

Different picture viewers and/or different color management systems and/or
different color spaces (including different rendering intents and black 
point compensation). :-)

--
Regards
wabe



[gentoo-user] Re: new computer : any advice ?

2015-09-10 Thread james
Gevisz  gmail.com> writes:

> on-board video card. Just to avoid extra heating and aircraft noise 
> produced by R4770.

Fanless video cards are wonderful. I have had many over the years but this
one is still my (silent) favorite:

01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI]
Cape Verde PRO [Radeon HD 7750 / R7 250E]


I did notice in a Gentoo blog that OpenMP is now a testing option for
Clang 3.7. [1]
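
For anyone who wants to try it, a minimal OpenMP smoke test (standard
OpenMP pragma and runtime calls; the compile line assumes clang's usual
-fopenmp switch):

#include <stdio.h>
#include <omp.h>

/* build: clang -fopenmp omp_hello.c -o omp_hello */
int main(void)
{
    #pragma omp parallel
    printf("hello from thread %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads());
    return 0;
}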

Try not to lose faith, we all have had bad experiences, but clustering,
distributed and systems aggregation codes are rapidly coalescing into
something wonderful, so ... keep the faith, bro.

;-) 

wwr,
James


[1] http://blog.cafarelli.fr/2015/09/testing-clang-3-7-0-openmp-support/






[gentoo-user] Re: dec-terminal fonts

2015-09-10 Thread Harry Putnam
the...@sys-concept.com writes:

> On 09/09/2015 03:04 PM, Harry Putnam wrote:
>> I've got to liking this font:
>>   -dec-terminal-medium-r-normal--14-140-75-75-c-80-iso8859-1
>> 
>> But, after checking with xlsfonts... I don't see it available
>> 
>> Can anyone tell me which font package would have -dec-terminal [...]
>> fonts?
>
>
> check:
> media-fonts/font-bitstream-75dpi

Thanks for the push.  Did you mean I should see if I like them as well, or
did you mean the font I posted should be in that pkg?


After installing it and checking with xlsfonts, I still do not see
   -dec-terminal-medium-r-normal--14-140-75-75-c-80-iso8859-1

The -bitstream-terminal set looks smeared and huge in bold;
the medium weights are good, but again huge: `18' is the only size I see.

I do see that
  -misc-fixed-bold-r-normal--15-120-100-100-c-90-iso8859-1

is pretty good ... not quite up to the crispness of the `-dec-[...]'
set though.





Re: [gentoo-user] Re: new computer : any advice ?

2015-09-10 Thread Fernando Rodriguez
On Wednesday, September 09, 2015 9:52:55 PM james wrote:
> Jeremi Piotrowski  gmail.com> writes:
> 
> > No, and yes. Compilation is not affected in any way and runtime
> > performance can only be improved _if_ this stuff is explicitly used within
> > the code.
> 
> Yes this is all new and a work in progress. I do not think it will be
> gcc-6 that makes the difference in a few years. But folks should be aware
> and look for codes that are accelerated via usage of GPU resources.
> Remember this all started about hardware purchase and future benefits.
> It's definitely not commodity usage atm.
> 
> 
> > Meaning you would feel a difference in no less than 5 years when gcc-6 is
> > widely used and accelerator support is not restricted to intel MIC and
> > nvidia gpus. James is getting a bit ahead of himself calling this a
> > "game changer" - yeah... not really right now.
> 
> It's not as restricted as you indicate; amd, intel, nvidia and others like
> arm (Mali and such) are working to support their hardware under the openacc
> code extension now in gcc-5. Granted, the more powerful your GPU resources
> are, the more they can contribute. This stuff used to only work with
> vendor-supplied compilers and sdks; now it's finally available in gcc,
> albeit in its infancy. Naturally it's going to take a while to 
> become mainstream useful; but that's more like a year or two, at most.

The value I see in that technology for desktop computing is that we get the 
GPUs for what they're made for (graphics processing) but their resources go 
unused by most applications, not in buying powerful GPUs for the purpose of 
offloading general-purpose code; if that's the goal, you're better off 
investing in more general-purpose cores that are better suited to the task.

To truly take advantage of the GPU, the actual algorithms need to be 
rewritten to use features like SIMD and other advanced parallelization 
features; most desktop workloads don't lend themselves to that kind of 
parallelization. That is why, despite similar predictions about how 
OpenMP-like parallel models would obsolete the current threads model ever 
since they were first proposed, it hasn't happened yet.
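
To make that concrete, here is a sketch of mine (using the standard
OpenMP simd directive) of a loop that parallelizes trivially next to one
that cannot without redesigning the algorithm itself:

/* independent iterations: a compiler can vectorize or offload this */
void scale(float *a, const float *b, int n)
{
    #pragma omp simd
    for (int i = 0; i < n; i++)
        a[i] = 2.0f * b[i];
}

/* each iteration depends on the previous one: no compiler switch can
   run this in parallel as written; the algorithm would have to be
   restructured (e.g. as a parallel prefix sum) first */
void running_sum(float *a, int n)
{
    for (int i = 1; i < n; i++)
        a[i] += a[i - 1];
}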

Even for the purpose of offloading general purpose code, it seems that with 
all the limitations on OpenACC kernels, few desktop applications can take 
advantage of it (and noticeably benefit from it) without major rewrites. 
Off the top of my head: audio and video/graphics encoders, and a few other 
things that max out the CPU and can be broken into independent execution 
units.

-- 
Fernando Rodriguez