Re: [gentoo-user] merging or fitting images together

2012-05-20 Thread Philip Webb
120519 Michael Mol wrote:
 According to Wikipedia, the Zeiss Ikon is a 35mm SLR,
 but that's about all you're going to get from it.

No ! -- as Stroller pointed out, zoom lenses were invented only c 1950.
My stepfather's model was made in Germany c 1939
& had been mentioned to him as a good buy by a photo-expert friend.
After the traumas of England 1940, someone had sold it to a store,
which was offering it very cheaply, so he grabbed a very real bargain.
I didn't think of keeping it after he died, but it wb a collector's item:
I've put a photo I found on the I/net in my 'test' directory.

It didn't use  35 mm  film, but '120' IIRC, a much larger format.
What was a typical focal length for a good camera in 1939 ?

This thread, tho' rather far from Gentoo itself,
has obviously been of interest to a number of people
& can serve as a resource for anyone searching the I/net
for advice on how to make a panorama from smaller images,
but we shouldn't take it too far & images themselves belong elsewhere.

-- 
,,
SUPPORT ___//___,   Philip Webb
ELECTRIC   /] [] [] [] [] []|   Cities Centre, University of Toronto
TRANSIT`-O--O---'   purslowatchassdotutorontodotca




Re: [gentoo-user] merging or fitting images together

2012-05-20 Thread Philip Webb
120520 Stroller wrote:
 Zoom lenses were much less common even 2 or 3 decades ago.
 For a long time, a 50mm prime was the common kit lens,
 rather than the 18-105mm zoom which is sold today.
 This was because on a camera using 35mm film, a 50mm focal length
 gives a field of view very close to that seen naturally by the human eye. 
 Wikipedia states that the first modern film zoom lens was designed c 1950
 by Roger Cuvillier, and Canon's official website (the Canon Camera Museum)
 states that "The history of Canon's zoom lens goes back to 1954".
 Since the photos are stated to have been taken in 1953,
 it seems highly unlikely that the photographer was using
 a highly expensive and cutting-edge zoom lens. I doubt many people
 would have been able to afford these zoom lenses when first released. 
 It seems to me safer to assume that the lens is a 50mm.

In fact, looking more closely at the picture of a ZI
which I've put up in my I/net 'test' dir (found on the I/net earlier),
it says 'F = 7,5 cm', so the lens appears to have been a  75 mm .

 I guess focal length may change fractionally during focussing
 -- as lenses are moved back and forth as the focus ring is turned --
 however it may also be that a camera manufacturer designs a lens
 with a 48mm focal length (because that's easier to construct for some reason
 or produces better images) and decides to sell it as 50mm
 because a 2mm difference makes no difference to the photographer.
 Or it may be that the distortion is caused by lens distortion --
 perhaps Hugin is trying to compensate for that, and straightening up lines.
 In any case, I might try re-doing the stitch a few times,
 each time telling Hugin the lens is 47mm, 48mm, 49mm, … 51mm, … 53mm.
 Perhaps you may find that one of those is perfectly spot on.

Thanks for the further lesson ! -- it sb tried at  75 mm .
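
A possibly useful detail, though this is only my back-of-envelope sum:
if Hugin wants a field of view rather than the raw focal length (I'm not
sure which its lens settings ask for), then taking the negative width as
58 mm and the lens as 75 mm, the usual thin-lens formula gives roughly:

  # HFOV = 2 * atan( negative_width / (2 * focal_length) ), in degrees
  echo '2 * a(58 / (2 * 75)) * 180 / 3.1415926' | bc -l
  # prints roughly 42 (degrees)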

-- 
,,
SUPPORT ___//___,   Philip Webb
ELECTRIC   /] [] [] [] [] []|   Cities Centre, University of Toronto
TRANSIT`-O--O---'   purslowatchassdotutorontodotca




Re: [gentoo-user] [OT] ogg/mp3 volume

2012-05-20 Thread Jesús J . Guerrero Botella
Just for the sake of correctness, what the OP wants is called normalization
in the world of sound editing.

It can be done once, as you rip the file, or it can be done on the fly when
playing it. ReplayGain is (as the name itself says) an implementation of
this that automatically adjusts the gain of each track. But it's just
the name of a concrete implementation, not the name of the process. Just
like Volvo is a brand of cars, but not all cars are branded Volvo.

I'll add that if you normalize while ripping you are permanently altering
the audio files, which, on top of a lossy format like MP3 and cheap
speakers, can result in a very bad thing, but that really depends on how
demanding your ear is...

Especially for bands like Led Zeppelin, I would just use the second method
(adjust while playing, rather than while ripping). Someday you will want to
hear the whole disk as it was intended, and if you normalized on ripping
you won't be able to.
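
If it helps, the tag-based route is roughly one command per format. These
are from memory, so treat them as a sketch (package and option names may
differ on your system); they only write tags, the samples are untouched:

  vorbisgain -a *.ogg                 # album-mode ReplayGain tags for Ogg Vorbis
  metaflac --add-replay-gain *.flac   # the same idea for FLAC
  mp3gain *.mp3                       # analyses and stores gain info in tags;
                                      # -r / -a would actually rewrite the mp3 gain fields

A player with ReplayGain support then applies the correction at playback time.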
---
Jesús Guerrero Botella
On 19/05/2012 16:22, ny6...@gmail.com wrote:

 On Sat, May 19, 2012 at 07:54:10PM +0800, Andrew Lowe wrote:
  Hi all,
Is there a way to change the volume of a mp3/vorbis track? By
 volume,
  I'm referring to lining up several tracks on your
  computer/phone/tablet/thingy, setting the one volume level and then
  letting them play. For example, the first track will be quiet, of all
  ironies my Led Zeppelin tracks are all like this, the next track will be
  loud, the next track in the middle, in other words it's Goldilocks and
  the three bears with audio tracks.
 
Is there a way I can either during the ripping process, or
 subsequently
  in a post-processing, make the average volume of all my tracks the
 same?
 
Any thoughts greatly appreciated,
 
Andrew

 I don't rip myself, but back in the day, the big ripping programs would usually
 have some kind of 'leveling' plugin that would equalize the volumes on all
 the tracks.

 Terry



Re: [gentoo-user] Listing the files of an un-/installed package...?

2012-05-20 Thread Jesús J . Guerrero Botella
There's no way to know the files for a non-installed package. Ebuilds are
NOT packages. The files are generated at build time, and vary depending on
the package, version, config options, arch, deps, and a long etcetera,
including misalignment of the planets and other esoteric variables.
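
The closest you can get is to build the package without merging it and look
at the image directory it would install from. A rough sketch, assuming the
usual /usr/portage and /var/tmp/portage locations:

  equery files <package>       # installed packages only (qlist <package> also works)
  ebuild /usr/portage/<category>/<pkg>/<pkg-version>.ebuild clean install
  ls -R /var/tmp/portage/<category>/<pkg-version>/image/

That still only tells you what *your* build would install, for the reasons
above.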

---
Jesús Guerrero Botella
On 19/05/2012 18:48, Volker Armin Hemmann volkerar...@googlemail.com
wrote:

 On Saturday, 19 May 2012, 17:11:20, Matthias Hanft wrote:
  Volker Armin Hemmann wrote:
   not possible. How can anybody know beforehand? with useflags and all?
 
  At least, 'equery uses <package>' *does* work with uninstalled packages.

 because you just need to look at the useflags and ?DEPEND to figure that
 out.

  (However, 'equery files <package>' does not.)

 obviously.

 --
 #163933




Re: [gentoo-user] I want to play movies without hangs

2012-05-20 Thread Alex Schuster
Mark Knecht writes:

 Anything in there show network through-put per process? I've been
 looking for a way to monitor what's going to each of my VMs?

net-analyzer/nethogs does that.
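
Usage is basically (from memory, and it needs root):

  nethogs eth0          # live per-process sent/received traffic on one interface
  nethogs eth0 tun0     # or watch several interfaces at once

Each VM should just show up under its qemu / VirtualBox process.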

Wonko



Re: [gentoo-user] [OT] ogg/mp3 volume

2012-05-20 Thread Stroller

On 20 May 2012, at 10:41, Jesús J. Guerrero Botella wrote:
 ...
 Especially for bands like Led Zeppelin, I would just use the second method
 (adjust while playing, rather than while ripping). Someday you will want to 
 hear the whole disk as it was intended, …

I agree that adjust-whilst-playing is the best method, but we won't be hearing 
this music as intended for a while.

All digital Led Zep releases (i.e. including all CDs) are notoriously poorly 
remastered, with excessive gain applied.

http://en.wikipedia.org/wiki/Loudness_war

So far they have deteriorated with each remastering / re-release. 

Huge threads on the Steve Hoffman forums discussing this, if you want to cork 
sniff.

Stroller.




Re: [gentoo-user] Runlevels, ordering initscripts and running them in background

2012-05-20 Thread Joost Roeleveld
On Wednesday, May 16, 2012 02:15:19 PM Neil Bothwick wrote:
 On Wed, 16 May 2012 09:40:26 +0100, Ignas Anikevicius wrote:
  I want to do this, so that I do not have to wait while non-crucial
  services are being started (e.g. fcron, bitlbee, ntpd to name a few).
  Maybe it is possible to somehow prioritize the initscripts?
 
 Yes it is. The initscripts themselves have such a mechanism, using the
 before and after statements, for example making sure that network
 services are started after the network is brought up. You can add your
 own rules to the daemons' config files in /etc/conf.d or to /etc/rc.conf.
 
 To have bitlbee start after xdm either add
 
 rc_after=xdm
 
 to /etc/conf.d/bitlbee or put
 
 rc_bitlbee_after=xdm
 
 in /etc/rc.conf. Both have the same effect, it depends on whether you
 want to put all these settings together or in the individual services'
 config files.

Putting them in /etc/rc.conf makes it simpler to maintain the init-scripts 
when updating packages.
I used to put these things in the init-scripts and occasionally forgot about 
some of these during an update.

--
Joost



[gentoo-user] Re: [OT] ogg/mp3 volume

2012-05-20 Thread Nikos Chantziaras

On 20/05/12 12:41, Jesús J. Guerrero Botella wrote:

Just for sake of correctness, what the op wants is called normalization,
in the world of sound edition.


Actually, no.  That's not what he wants.  Normalization simply adjusts
the peak to 0 dB.  How loud something sounds, however, is not a simple matter of
what the maximum peak of a waveform is.  ReplayGain actually analyzes 
the music to tell how loud it *sounds*, not how loud it actually is.


For example, audio that was normalized to 0 dB can still sound quieter
than, say, -5 dB audio whose dynamic range has been compressed.


Normalization makes audio equally loud for hardware.  ReplayGain makes 
audio equally loud for humans. :-)
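
To make the difference concrete (a rough sketch, invocations from memory):

  sox in.wav peak.wav gain -n    # peak normalization: scale so the loudest sample hits 0 dBFS
  vorbisgain track.ogg           # loudness analysis, stored as a ReplayGain tag only

The first rewrites the samples based on one number (the peak); the second
leaves the audio alone and records how loud it is perceived to be.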





Re: [gentoo-user] Running programs compiled with a different gcc version

2012-05-20 Thread Norman Invasion
On 19 May 2012 16:23, Urs Schutz u.sch...@bluewin.ch wrote:
 Yesterday I manually compiled photivo, a camera raw file
 converter and image editor. One of the requirements for
 installing is gcc 4.6. So I manually unmasked gcc 4.6.3
 and installed it with portage.
 After switching gcc with gcc-config and . /etc/profile,
 photivo compiled fine.
 A test run showed that photivo is running fine.

 I simply do not know enough about gcc and gentoo to leave
 gcc at 4.6.3, and switched back to the stable 4.5 branch
 (gcc-config and . /etc/profile again).

 When I try to run photivo again I get an error:
 photivo: /usr/lib/gcc/i686-pc-linux-gnu/4.5.3/libstdc++.so.6:
 version `GLIBCXX_3.4.15' not found (required by photivo)

 locate -i glibcxx shows no results.

 My question is: Can I set some variables (e.g. in a bash
 start script) that photivo thinks it is running on a system
 with gcc 4.6? All the components are installed, as I can
 switch gcc to 4.6.3 and run photivo as user.

 I do not see any changes in environment variables before
 and after switching gcc versions. What magic does
 gcc-config do?

Compile it statically linked?
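
Or, if rebuilding is a pain, a wrapper along these lines might do what you
describe -- untested, and the exact 4.6.3 library directory may differ on
your box:

  #!/bin/bash
  # point the runtime linker at the newer libstdc++ without switching gcc-config
  export LD_LIBRARY_PATH=/usr/lib/gcc/i686-pc-linux-gnu/4.6.3${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
  exec photivo "$@"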



Re: [gentoo-user] Best caching dns server?

2012-05-20 Thread Alan McKinnon
On Sun, 20 May 2012 06:15:42 +0530
Nilesh Govindrajan cont...@nileshgr.com wrote:

 On Sat, May 19, 2012 at 10:06 PM, Alan McKinnon
 alan.mckin...@gmail.com wrote:
  On Sat, 19 May 2012 07:45:56 +0530
  Nilesh Govindrajan cont...@nileshgr.com wrote:
 
  Hi,
 
  Which is the best caching dns server? I'm presently using
  pdns-recursor, which is quite good, but doesn't have option to set
  minimum ttl (doesn't make sense, but some sites like twitter have
  ridiculously low ttl of 30s). Also, it isn't able to save cached
  entries to file so that it can be restored on next boot. Any
  option?
 
  You can use almost any cache you want...
 
  ... except bind
 
  We use unbound. Does the job, does it well, developer very
  responsive.
 
  But do not fiddle with TTLs, that breaks stuff in spectacular ways.
  Essentially, with the TTL the auth server is saying "We guarantee
  that you can treat this RR as valid for X amount of time and suffer
  no ill effects if you do".
 
  What you want to do is break that agreement, which is really not a
  good idea.
 
 
  I am keeping my box 24x7 on because it serves as dns on my small
  home wifi, not acceptable to me, because network is almost off at
  night (only phone) and I have my router as secondary dns.
 
  Just use Google's caches or OpenDNS. They do the job so much better
  than you ever could. Why reinvent the wheel?
 
 
 
 Slow connection. See my previous reply to the list. I'm using pdnsd,
 which can persist records and has every damn feature I wanted.
 

Fair enough, but consider this:

If your connection is slow, the only thing you speeded up is the DNS
lookups. Thereafter, everything else is still as slow as it ever was.
And if you feel the need to speed up DNS lookups then the odds are very
good that everything else is too slow i.e. not exactly usable.

We get this a lot from our customers too, and the advice we give them
is to look closely at their traffic throttling. In almost every case
all UDP traffic has had the living crap throttled out of it somewhere
by folk that don't really think things through, severely affecting
dns and ntp as well as AV streaming.

Throttled DNS rapidly gets out of hand, IIRC the last time we did some
measurements it only takes around 5% of dns lookups to go wonky for the
situation to rapidly spiral out of control - when dns fails the cache
will try a TCP lookup and that's like wading through molasses.

Our advice to customers is to first unthrottle dns and ntp completely,
give it the highest possible priority (these are extremely light
protocols and seldom show up on the radar when you do this), and see
how that goes.

It just seems to me that you *might* be trying a very unusual solution
for a problem that is better handled one layer lower down.
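
For reference, a caching-only unbound setup is tiny. Something like the
following (typed from memory, so check it against unbound.conf(5), and
adjust the addresses to your LAN):

  server:
      interface: 127.0.0.1
      access-control: 127.0.0.0/8 allow
      access-control: 192.168.1.0/24 allow
      prefetch: yes        # refresh popular records before they expire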

-- 
Alan McKinnnon
alan.mckin...@gmail.com




Re: [gentoo-user] Best caching dns server?

2012-05-20 Thread Mick
On Saturday 19 May 2012 13:09:45 Pandu Poluan wrote:
 On May 19, 2012 6:46 PM, Dale rdalek1...@gmail.com wrote:
  Willie Matthews wrote:
 [le snip]
 
   You should really try changing your DNS server to some faster ones. I
   was having this same problem with my ISP or DSL modem with built in
   router taking a long time. I changed my DNS servers to Google DNS
   Servers (8.8.4.4 and 8.8.8.8) and haven't had a problem.
   
   My setup is a little different but all in all I would really suggest
   you try a DNS server outside of your ISP.
  
  I agree.  My ISP is AT&T and I changed my DNS to Google's too.  It is
  very fast compared to AT&T's servers.  I have had AT&T's servers not
  respond for several seconds but Google's just seem to work.
 
 Here's the result of a test comparing the performance of public DNS servers
 
 
 http://www.thousandeyes.com/blog/public-dns-resolver-showdown
 
 Despite what the linked article said, in my experience, Level 3
 (4.2.2.[1-5]) is at least as fast as Google. I guess it depends on one's
 ISP. But both of them are mucho faster (and much stabler) than my ISP's DNS
 servers.

I understand that Level 3 4.2.2.2 is not a public DNS server:

  http://www.tummy.com/Community/Articles/famous-dns-server/


If my recent experience is correct (when I was trying to set up proxychains),
connections to it are often dropped or at least throttled.

-- 
Regards,
Mick




Re: [gentoo-user] Runlevels, ordering initscripts and running them in background

2012-05-20 Thread Neil Bothwick
On Sun, 20 May 2012 19:59:40 +0200, Joost Roeleveld wrote:

  To have bitlbee start after xdm either add
  
  rc_after=xdm
  
  to /etc/conf.d/bitlbee or put
  
  rc_bitlbee_after=xdm
  
  in /etc/rc.conf. Both have the same effect, it depends on whether you
  want to put all these settings together or in the individual services'
  config files.  
 
 Putting them in /etc/rc.conf makes it simpler to maintain the
 init-scripts when updating packages.
 I used to put these things in the init-scripts and occasionally forgot
 about some of these during an update.

Don't put them in the init scripts, as those are overwritten during an
update; use either rc.conf or the config files in /etc/conf.d.

If you want to arrange the order of several items, rc.conf is the logical
place, but if you just want to make sure service A starts before service
B, /etc/conf.d/[AB] is more sensible IMO.
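
In other words, pick one of (the service names here are just examples):

  # /etc/rc.conf -- all the ordering tweaks in one place
  rc_bitlbee_after="xdm"
  rc_fcron_after="ntpd"

  # /etc/conf.d/bitlbee -- the same thing, kept with the service
  rc_after="xdm"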


-- 
Neil Bothwick

Press every key to continue.




Re: [gentoo-user] merging or fitting images together

2012-05-20 Thread Michael Mol
On Sat, May 19, 2012 at 10:01 PM, Stroller
strol...@stellar.eclipse.co.uk wrote:

 On 19 May 2012, at 20:28, Michael Mol wrote:
 …
 Worse, if the photographer was not using a prime lens[1], and was
 instead using a lens with variable zoom, you can't easily know what
 the real focal length was, as this will change depending on how far
 the photographer has zoomed in.

 Throughout everything else you said I was thinking something like this.

 Zoom lenses were much less common even 2 or 3 decades ago.

Ah. Excellent point!


 For a long time, a 50mm prime was the common kit lens, rather than the 
 18-105mm zoom which is sold today.

18-105? I'm used to seeing 18-55.


 This was because, on a camera using 35mm film, a 50mm focal length gives a 
 field of view very close to that seen naturally by the human eye.

 Wikipedia states that the first modern film zoom lens was designed around 
 1950 by Roger Cuvillier and Canon's official website (the Canon Camera 
  Museum pages) states that "The history of Canon's zoom lens goes back to 
  1954".

 Since the photos are stated to have been taken in 1953, it seems highly 
 unlikely that the photographer was using a highly expensive and cutting-edge 
 zoom lens. I doubt many people would have been able to afford these zoom 
 lenses when they were first released.

 It seems to me safer to assume that the lens is a 50mm.

Probably generally true. (Though as Philip later remarked, it turns
out the lens was likely a 75mm prime)


 I guess focal length may change fractionally during focussing (as lenses are 
 moved back and forth as the focus ring is turned), however it may also 
 be that a camera manufacturer designs a lens with a 48mm focal length 
 (because that's easier to construct for some reason, or produces better 
 images) and decides to sell it as 50mm because a 2mm difference in focal 
 length makes no difference to the photographer.

 Or it may be that the distortion is caused by lens distortion - perhaps Hugin 
 is trying to compensate for that, and straightening up lines.

I don't think the 'Align' button in the wizard tries to optimize for
lens distortion...adjusting for lens distortion tends to take a fair
amount of time in terms of CPU, and far longer in terms of finding the
right sequence of control point optimizers, where an errant point
won't send the algorithm into mathematically weird territories.


 In any case, I might try re-doing the stitch a few times, each time telling 
 Hugin the lens is 47mm, 48mm, 49mm, … 51mm, … 53mm. Perhaps you may find that 
 one of those is perfectly spot on.

I tried it again, this time using 342 control points generated by
hugin-cpfinder and autopano-sift-c. (Several runs of the latter, with
100 points each, produce a good set of points.) There wasn't anything
for celeste to pick up, so I used the "fine-tune all points" tool,
and then cleared the thirteen or so points which didn't have good
correlation.

Following that, I ran the control point optimizer in anchored,
positional mode, checked the preview, and then ran the "everything
without translation" control point optimizer. Checking the preview
again, the panorama was way off-center, so I dragged it back into
place using the fast preview window.

Following that, I ran the exposure optimizer's low dynamic range
preset. In the preview, things looked OK. The leftmost portion will
never look all that great, as he captured the sun setting behind a
building (or maybe that's a water spot); that narrowed the usable area
of the dynamic range of the frame, and it's going to look kinda
grayish. If these source JPG files are scans of paper photos, I could
do a lot more with a new scan set using 16bpp TIFF or OpenEXR, and at
perhaps 600 or 1200 dpi instead of 72 or so. Might be able to recover
more detail out of that leftmost section.

Anyway, the final Hugin pto file is here: http://pastebin.com/gudxvAEa

And the final stitch is here:
http://img407.imageshack.us/img407/2030/brum3068brum30702.jpg

Interesting exercise! The image is still a bit smaller than my first
pass (704x407 vs 785x413), but it's not cropped as tightly, the lines
on the tram are much straighter, and most of the nasty noise on the
leftmost portion has been dealt with. There's likely something that
can be done to blow the image up a bit more.
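
For anyone who wants to script that, the same workflow maps roughly onto
Hugin's command-line tools. This is a sketch from memory (tool names and
flags vary a bit between Hugin releases, and the file names are just what
I assume the two scans are called), not a transcript of what the GUI ran:

  pto_gen -o pano.pto brum3068.jpg brum3070.jpg   # create the project
  cpfind -o pano.pto pano.pto                     # generate control points
  cpclean -o pano.pto pano.pto                    # drop points with bad error
  autooptimiser -a -m -l -s -o pano.pto pano.pto  # geometric + photometric optimisation
  pano_modify --canvas=AUTO --crop=AUTO -o pano.pto pano.pto
  nona -m TIFF_m -o remap pano.pto                # remap the source images
  enblend -o pano.tif remap*.tif                  # blend into the final panorama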

-- 
:wq



Re: [gentoo-user] merging or fitting images together

2012-05-20 Thread Philip Webb
120520 Michael Mol wrote:
 as Philip later remarked, it turns out the lens was likely a 75mm prime

The picture of the camera looks exactly what I remember,
tho' there might have been different models with different lenses.
It was a very good camera for its time.

 The leftmost portion will never look all that great,
 as he captured the Sun setting behind a building
 or maybe that's a water spot

The Sun was indeed setting to the left at that time + date,
but the bluish blemish is some sort of physical decay in the negative,
which was stored in a cardboard box for  c 55 yr  without being touched.

 If these source JPG files are scans of paper photos

No, they're  2  overlapping scans of the same negative,
whose size is  58 x 43 mm  =  2,3 x 1,7 inch .

 Anyway, the final Hugin pto file is here: http://pastebin.com/gudxvAEa

What is a 'pto' file ? -- I downloaded it & it's text.

 And the final stitch is here:
 http://img407.imageshack.us/img407/2030/brum3068brum30702.jpg

All Firefox gives me is a black window : can you check ?

-- 
,,
SUPPORT ___//___,   Philip Webb
ELECTRIC   /] [] [] [] [] []|   Cities Centre, University of Toronto
TRANSIT`-O--O---'   purslowatchassdotutorontodotca




Re: [gentoo-user] Best caching dns server?

2012-05-20 Thread Pandu Poluan
On May 21, 2012 3:19 AM, Mick michaelkintz...@gmail.com wrote:

 On Saturday 19 May 2012 13:09:45 Pandu Poluan wrote:
  On May 19, 2012 6:46 PM, Dale rdalek1...@gmail.com wrote:
   Willie Matthews wrote:
  [le snip]
 
    You should really try changing your DNS server to some faster ones. I
    was having this same problem with my ISP or DSL modem with built-in
    router taking a long time. I changed my DNS servers to Google DNS
    Servers (8.8.4.4 and 8.8.8.8) and haven't had a problem.
   
My setup is a little different but all in all I would really suggest
you try a DNS server outside of your ISP.
  
   I agree.  My ISP is AT&T and I changed my DNS to Google's too.  It is
   very fast compared to AT&T's servers.  I have had AT&T's servers not
   respond for several seconds but Google's just seem to work.
 
  Here's the result of a test comparing the performance of public DNS
servers
 
 
  http://www.thousandeyes.com/blog/public-dns-resolver-showdown
 
  Despite what the linked article said, in my experience, Level 3
  (4.2.2.[1-5]) is at least as fast as Google. I guess it depends on one's
  ISP. But both of them are mucho faster (and much stabler) than my ISP's
DNS
  servers.

 I understand that Level 3 4.2.2.2 is not a public DNS server:

  http://www.tummy.com/Community/Articles/famous-dns-server/


Indeed.


 If my recent experience is correct (when I was trying to set up
proxychains)
 connections to it are often dropped or at least throttled.


Most likely overloaded. I myself use 4.2.2.[3-5].

BTW, thanks for that link. I never knew before that http://18.62.0.96/ is a
standard connectivity test ;-)

Rgds,


Re: [gentoo-user] Best caching dns server?

2012-05-20 Thread Nilesh Govindrajan
On Mon, May 21, 2012 at 1:17 AM, Alan McKinnon alan.mckin...@gmail.com wrote:
 Slow connection. See my previous reply to the list. I'm using pdnsd,
 which can persist records and has every damn feature I wanted.


 Fair enough, but consider this:

 If your connection is slow, the only thing you speeded up is the DNS
 lookups. Thereafter, everything else is still as slow as it ever was.
 And if you feel the need to speed up DNS lookups then the odds are very
 good that everything else is too slow i.e. not exactly usable.

 We get this a lot from our customers too, and the advice we give them
 is to look closely at their traffic throttling. In almost every case
 all UDP traffic has had the living crap throttled out of it somewhere
 by folk that don't really think things through, severely affecting
 dns and ntp as well as AV streaming.

 Throttled DNS rapidly gets out of hand, IIRC the last time we did some
 measurements it only takes around 5% of dns lookups to go wonky for the
 situation to rapidly spiral out of control - when dns fails the cache
 will try a TCP lookup and that's like wading through molasses.

 Our advice to customers is to first unthrottle dns and ntp completely,
 give it the highest possible priority (these are extremely light
 protocols and seldom show up on the radar when you do this), and see
 how that goes.

 It just seems to me that you *might* be trying a very unusual solution
 for a problem that is better handled one layer lower down.


Strictly speaking, my connection isn't too slow. I have a transfer
rate of 64 K/s (might sound ridiculous to you, but this costs 18$/mo
here).
OpenDNS lookups from my connection take something like 300 msec+ and
Google DNS lookups around 50 msec.
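
(By lookup time I mean roughly this kind of quick test; a sketch only, and
the numbers obviously vary by ISP and time of day:)

  for ns in 8.8.8.8 208.67.222.222 4.2.2.2; do
      dig @"$ns" gentoo.org +tries=1 +time=2 \
          | awk -v ns="$ns" '/Query time/ {print ns": "$4" ms"}'
  done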

I can obviously use Google DNS, but as I said earlier, OpenDNS gives
me phishing protection and other stuff of that sort.

And hence I must use a local cache.
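
A minimal pdnsd.conf for that kind of setup looks more or less like this --
a sketch typed from memory, not something to paste in untested:

  global {
      perm_cache=2048;                # on-disk cache (kB), survives reboots
      cache_dir="/var/cache/pdnsd";
      server_ip=127.0.0.1;
      min_ttl=15m;                    # floor for absurdly short TTLs
      max_ttl=1w;
  }
  server {
      label="opendns";
      ip=208.67.222.222, 208.67.220.220;
      uptest=ping;
      timeout=4;
  }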

-- 
Nilesh Govindarajan
http://nileshgr.com



Re: [gentoo-user] [OT] ogg/mp3 volume

2012-05-20 Thread ny6p01
On Sun, May 20, 2012 at 06:01:27PM +0100, Stroller wrote:
 
 On 20 May 2012, at 10:41, Jesús J. Guerrero Botella wrote:
  ...
  Especially for bands like Led Zeppelin, I would just use the second method
  (adjust while playing, rather than while ripping). Someday you will want to
  hear the whole disk as it was intended, …
 
 I agree that adjust-whilst-playing is the best method, but we won't be 
 hearing this music as intended for a while.
 
 All digital Led Zep releases (i.e. including all CDs) are notoriously poorly 
 remastered, with excessive gain applied.
 
 http://en.wikipedia.org/wiki/Loudness_war
 
 So far they have deteriorated with each remastering / re-release. 
 
 Huge threads on the Steve Hoffman forums discussing this, if you want to cork 
 sniff.
 
 Stroller.

I can attest to this - Led Zep II - awful. An older disk, not even a
remaster.  I can only play it at moderate levels - it loses all its dynamic
range at higher levels.  A big disappointment.





Re: [gentoo-user] merging or fitting images together

2012-05-20 Thread Michael Mol
On Sun, May 20, 2012 at 8:33 PM, Philip Webb purs...@ca.inter.net wrote:
 120520 Michael Mol wrote:
 as Philip later remarked, it turns out the lens was likely a 75mm prime

 The picture of the camera looks exactly what I remember,
 tho' there might have been different models with different lenses.
 It was a very good camera for its time.

I'll say! Based on that pic, other things you've said, and the
information I found[1], that's an Ikonta 521 B with a Tessar f/3.5
lens, which appears to have been a high-end lens. Meanwhile, all of
the lenses for that camera appear to have been 75mm; the big
difference appears to be f-stop, which has an impact on
depth-of-field/bokeh. And an f/3.5 lens isn't something your modern
DSLR's kit lens can usually do.

[1] http://camerapedia.wikia.com/wiki/Zeiss_Ikon_Ikonta


 The leftmost portion will never look all that great,
 as he captured the Sun setting behind a building
 or maybe that's a water spot

 The Sun was indeed setting to the left at that time + date,
 but the bluish blemish is some sort of physical decay in the negative,
 which was stored in a cardboard box for  c 55 yr  without being touched.

 If these source JPG files are scans of paper photos

 No, they're  2  overlapping scans of the same negative,
 whose size is  58 x 43 mm  =  2,3 x 1,7 inch .

Ah. Well, the same holds true; a higher-resolution scan of the source
image, stored in an HDR image format (such as 16-bit-per-channel TIFF,
16-bit-per-channel PNG, or OpenEXR) would ultimately give better
results. Any of the 16-bit-per-channel formats would increase the
available dynamic range (of the format, at least) by a factor of 16,
at least. (IIRC, JPEG models luminance in 12 bits, and, for monochrome
images, that's at least somewhat advantageous over 8-bit-per-channel
grayscale or RGB formats.)
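
If the scanner and its SANE backend support it, grabbing such a scan is
something along these lines (untested; device names, modes and supported
bit depths vary per backend):

  scanimage --device-name "$DEV" --mode Gray --depth 16 \
            --resolution 1200 --format=tiff > negative-left.tiff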


 Anyway, the final Hugin pto file is here: http://pastebin.com/gudxvAEa

 What is a 'pto' file ? -- I downloaded it  it's text.

It's a Hugin project file; you can load that file with Hugin. It
assumes the two source JPEG files are in the same directory.


 And the final stitch is here:
 http://img407.imageshack.us/img407/2030/brum3068brum30702.jpg

 All Firefox gives me is a black window : can you check ?

Works on my system. It comes up all-black in geeqie, though; I had to
load it in Chrome. Also loads fine in Gimp 2.6.

-- 
:wq



Re: [gentoo-user] [OT] ogg/mp3 volume

2012-05-20 Thread Michael Mol
On Sun, May 20, 2012 at 8:44 PM,  ny6...@gmail.com wrote:
 On Sun, May 20, 2012 at 06:01:27PM +0100, Stroller wrote:

 On 20 May 2012, at 10:41, Jesús J. Guerrero Botella wrote:
  ...
   Especially for bands like Led Zeppelin, I would just use the second method
   (adjust while playing, rather than while ripping). Someday you will want
   to hear the whole disk as it was intended, …

 I agree that adjust-whilst-playing is the best method, but we won't be 
 hearing this music as intended for a while.

 All digital Led Zep releases (i.e. including all CDs) are notoriously poorly 
 remastered, with excessive gain applied.

 http://en.wikipedia.org/wiki/Loudness_war

 So far they have deteriorated with each remastering / re-release.

 Huge threads on the Steve Hoffman forums discussing this, if you want to 
 cork sniff.

 Stroller.

 I can attest to this - Led Zep II - awful. An older disk, not even a
 remaster.  I can only play it at moderate levels - it loses all its dynamic
 range at higher levels.  A big disappointment.

Dipping only slightly further offtopic, are they still pressing vinyl?
I believe there are a number of tools for automatically splitting and
transcoding audio input from a vinyl player.


-- 
:wq



Re: [gentoo-user] Best caching dns server?

2012-05-20 Thread Michael Mol
On Sun, May 20, 2012 at 8:43 PM, Nilesh Govindrajan
cont...@nileshgr.com wrote:
 On Mon, May 21, 2012 at 1:17 AM, Alan McKinnon alan.mckin...@gmail.com 
 wrote:
 Slow connection. See my previous reply to the list. I'm using pdnsd,
 which can persist records and has every damn feature I wanted.


 Fair enough, but consider this:

 If your connection is slow, the only thing you speeded up is the DNS
 lookups. Thereafter, everything else is still as slow as it ever was.
 And if you feel the need to speed up DNS lookups then the odds are very
 good that everything else is too slow i.e. not exactly usable.

 We get this a lot from our customers too, and the advice we give them
 is to look closely at their traffic throttling. In almost every case
 all UDP traffic has had the living crap throttled out of it somewhere
 by folk that don't really think things through, severely affecting
 dns and ntp as well as AV streaming.

 Throttled DNS rapidly gets out of hand, IIRC the last time we did some
 measurements it only takes around 5% of dns lookups to go wonky for the
 situation to rapidly spiral out of control - when dns fails the cache
 will try a TCP lookup and that's like wading through molasses.

 Our advice to customers is to first unthrottle dns and ntp completely,
 give it the highest possible priority (these are extremely light
 protocols and seldom show up on the radar when you do this), and see
 how that goes.

 It just seems to me that you *might* be trying a very unusual solution
 for a problem that is better handled one layer lower down.


 Strictly speaking, my connection isn't too slow. I have a transfer
 rate of 64 K/s (might sound ridiculous to you, but this costs 18$/mo
 here).
 OpenDNS lookups from my connection take something like 300 msec+ and
 Google DNS lookups around 50 msec.

 I can obviously use Google DNS, but as I said earlier, OpenDNS gives
 me phishing protection and other stuff of that sort.

 And hence I must use a local cache.

Side note: Honestly, you should be using a local cache, regardless.
It'll improve performance for you, *especially* when there's any risk
of packet drops between you and your ISP's core equipment. When I
was on a 6Mb/s-down ADSL connection, the improvement I experienced
simply from running bind9 as a recursive resolver was *massive*. I
still do so, even though I'm now on a pretty reliable cable
connection.
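
The named.conf for a caching-only resolver is small, roughly this -- I'm
writing it from memory, so check the syntax against the installed bind and
adjust the LAN range:

  options {
      directory "/var/bind";
      listen-on { 127.0.0.1; };
      allow-query { 127.0.0.1; 192.168.0.0/24; };
      recursion yes;
  };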


-- 
:wq



Re: [gentoo-user] Re: make of gentoo-sources-3.2.12 fails

2012-05-20 Thread Joshua Murphy
On Thu, May 17, 2012 at 4:50 PM, Hinnerk van Bruinehsen
h.v.bruineh...@fu-berlin.de wrote:

 On 17.05.2012 22:13, Michael Scherer wrote:


 1) make output:

  CHK     include/linux/version.h
  CHK     include/generated/utsrelease.h
  CALL    scripts/checksyscalls.sh
  CHK     include/generated/compile.h
  LD      init/mounts.o
  ls -Al -m elf_x86_64 -r -o init/mounts.o init/do_mounts.o init/do_mounts_initrd.o
  init/mounts.o: No such file or directory
  make[1]: *** [init/mounts.o] Error 1
  make: *** [init] Error 2

 There is an LD, the ls line is part of the error message.



 contains a directive to build mounts.o, see second last line, but
  for some reason this is ignored. Maybe there is a flaw in that
 command, only I can't find it.

 regards, michael



 Have you tried a make clean on your sourcetree?

 CHK     include/linux/version.h

 IS for me one of the first lines I get at all. It seems strange to me
 that you get a call to the linker (LD) before even a call to the
 compiler (CC).

 I'd suggest you try a make clean first and try to build again
 afterwards (with -j1 or without a statement for jobs) to rule out race
 conditions.

 If that doesn't help, move your kernel sources to another directory
 and reemerge the sources. Copy your .config (ideally one of a working
 tree) and try again. If that doesn't help, try to get a working
 default config (like from /proc/config.gz from a live distro).

 WKR

 Hinnerk


I'd be more active and vocal in trying to help sort this out, but it's
been a busy week since the 1k-mile trip to get a car, and I've been working
on getting it into the shape I want it in... that aside, I *did* manage
to drop the config provided at the start of all this, without any changes,
into a gentoo-sources 3.2.12 tree (after a quick mrproper), and it built
without issues. To me, that indicates that the toolchain, the particular
copy of the sources, or the hardware has an issue. That the problem is as
consistent as it is, while the rest of the system isn't failing in
horrifying ways, implies it's not the hardware. The resulting modules and
kernel from my building it can be grabbed from
http://poisonbl.freeshell.org/3.2.12_test.tar.bz2
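
For the record, the mrproper-and-rebuild sequence is roughly this (my
paraphrase, so adjust the tree path and the location of the posted config;
the -j1 is Hinnerk's suggestion to rule out parallel-make races):

  cd /usr/src/linux-3.2.12-gentoo
  make mrproper                    # wipes everything, including any old .config
  cp /path/to/posted.config .config
  yes "" | make oldconfig          # accept defaults for anything the config doesn't cover
  make -j1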

-- 
Poison [BLX]
Joshua M. Murphy