Marcus,
Thanks for the idea anyway :) It sounded like a really neat
outside-the-box approach.
I ran some quick tests with FLAC and found that it does a really good job
compressing a constant filtered gaussian noise output. I just created a
flow graph with:
fast noise source [float] ->
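FLAC's edge on filtered noise comes from its linear predictor: band-limited noise is correlated sample to sample, so the prediction residuals are small and cheap to code. A minimal stdlib-only sketch of that effect (not the flow graph itself), substituting a first-difference predictor for FLAC's LPC and bz2 for FLAC's Rice coder — the window length and amplitudes are arbitrary choices for illustration:

```python
import bz2
import random
import struct

random.seed(42)

# Simulate low-pass-filtered gaussian noise: a moving average
# introduces strong sample-to-sample correlation.
raw = [random.gauss(0.0, 1.0) for _ in range(50_000 + 16)]
filtered = [sum(raw[i:i + 16]) / 16 for i in range(50_000)]

# Quantize to 16-bit integers, as an int16 IQ recording would store them.
scale = 8000 * 16 ** 0.5  # moving-average std is 1/sqrt(16); target std ~8000
samples = [max(-32768, min(32767, int(x * scale))) for x in filtered]

# First-difference "prediction": store sample-to-sample deltas,
# a crude stand-in for FLAC's linear-prediction residuals.
deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

packed_raw = struct.pack(f"<{len(samples)}h", *samples)
packed_delta = struct.pack(f"<{len(deltas)}h", *deltas)

# The delta stream has much smaller magnitudes, so it compresses better.
print(len(bz2.compress(packed_raw)), len(bz2.compress(packed_delta)))
```

The same correlated data compresses noticeably better after the trivial predictor, which is essentially why a lossless audio codec does well on filtered noise while a generic byte compressor on the raw stream struggles.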
Well, having wasted a couple of hours of sleep time on this, maybe you
shouldn't do the logarithmic storage... at its core it's the same idea as
storing floating point, but with a fixed mantissa.
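To make the "floating point with a fixed mantissa" equivalence concrete, here is a hedged sketch of logarithmic (mu-law-style) quantization — the constant and bit width are the classic telephony values, not anything from this thread. The relative error stays roughly constant across amplitudes, which is exactly the float-like behavior:

```python
import math

MU = 255.0  # classic mu-law constant; any base illustrates the point

def log_encode(x: float) -> float:
    """Compress amplitude logarithmically; sign handled via copysign."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def log_decode(y: float) -> float:
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(y: float, bits: int = 8) -> float:
    steps = (1 << (bits - 1)) - 1  # uniform steps in the log domain
    return round(y * steps) / steps

for x in (0.001, 0.01, 0.1, 0.9):
    x2 = log_decode(quantize(log_encode(x)))
    print(f"{x:>6}: relative error {abs(x2 - x) / x:.3%}")
```

A linear 8-bit quantizer would give the 0.001 input a relative error of tens of percent or worse; the log quantizer keeps every amplitude within a few percent — small signals keep (relative) resolution, strong signals degrade gracefully, just like a float with a short mantissa.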
The mathematical effects of the "rounding to the nearest exponential
step" on superimposed sinusoids
Chris,
Excellent, then I will start benchmarking tomorrow :)
Thanks!
On Sat, Jul 16, 2016 at 9:35 PM, Chris Kuethe wrote:
> Flac doesn't really need to know what the actual sample rate is; you
> could tell it 500e3 and you should get the same data out after
> compressing and decompressing it.
Flac doesn't really need to know what the actual sample rate is; you
could tell it 500e3 and you should get the same data out after
compressing and decompressing it.
On Sat, Jul 16, 2016 at 11:20 AM, Dave NotTelling wrote:
> Marcus & Dan,
>
> Thank you very, very much
Hello Juha,
idea: if Dave's distribution of amplitudes was a little more benign than
the Radar near/far problem, and he would favor full resolution when the
signal is weak, but could live with a bit of degradation due to
quantization when the signal is strong, what about storing a logarithm
of
Can you reduce the number of bits that you are using?
With radar signals, the receiver noise most of the time excites only about
8 bits out of 16. Ground clutter or meteor echoes excite nearly all of the
bits occasionally, so I can't just truncate to 8 bits. In this case, bzip2
actually does a
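A quick stdlib illustration of why a general-purpose compressor can still win on that kind of data: if the noise floor only exercises the low ~8 bits, the high bytes of a 16-bit stream are almost all sign extension and compress away, while the occasional strong echo is preserved exactly. The echo rate and noise sigma below are made-up stand-ins:

```python
import bz2
import random
import struct

random.seed(1)

samples = []
for _ in range(50_000):
    if random.random() < 0.001:  # occasional strong echo (hypothetical rate)
        samples.append(random.randint(-30000, 30000))
    else:  # receiver noise floor exercising only ~8 of the 16 bits
        samples.append(int(random.gauss(0, 30)))

packed = struct.pack("<50000h", *samples)
compressed = bz2.compress(packed)
print(f"{len(packed)} -> {len(compressed)} bytes "
      f"({len(compressed) / len(packed):.0%})")
```

Since the per-sample entropy is close to 8 bits rather than 16, the lossless ratio lands near one half — the compressor effectively does the "truncate to 8 bits" for you, without losing the rare full-scale echoes.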
Marcus & Dan,
Thank you very, very much for the detailed information!
Dan: That's exactly what I thought when going into this at first. But, I
decided to give gzip a shot just to see how well it did. Turns out that
(at least for bursty environments) it almost halves the size of the
Ah!
On 16.07.2016 11:04, Marcus Müller wrote:
> and maybe, but this is really speculation, you can just modify the
> error calculation to just ignore the 4 lower bits of the actual sample
> data, and save another few percent of data volume.
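Marcus's "ignore the lower bits" suggestion can be sketched without touching any codec internals, by simply zeroing the 4 LSBs before compression (zeroing rather than rounding, for brevity); the sample statistics below are invented for illustration, and whether the saving is "a few percent" or more depends entirely on the data:

```python
import bz2
import random
import struct

random.seed(7)

# Hypothetical 16-bit samples with a modest noise floor.
samples = [int(random.gauss(0, 400)) for _ in range(50_000)]
masked = [s & ~0x000F for s in samples]  # drop the 4 least significant bits

raw = struct.pack("<50000h", *samples)
quiet = struct.pack("<50000h", *masked)
print(len(bz2.compress(raw)), len(bz2.compress(quiet)))
```

Each zeroed bit removes up to one bit per sample of (noise) entropy, so the masked stream compresses measurably smaller — at the cost of a deliberate, bounded quantization error.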
Yeah, there's a
> If there's a lot of white noise, you won't get much compression
Alas, entropy is killing compression.
So yeah, if you anyhow can, try to reduce bandwidth by filtering and
decimating.
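The entropy point is easy to demonstrate directly: full-entropy bytes are incompressible by any lossless scheme, while structured data collapses. A two-line stdlib check:

```python
import bz2
import os

noise = os.urandom(100_000)   # full-entropy "white noise" bytes
structured = bytes(100_000)   # all zeros: essentially no information

print(len(bz2.compress(noise)))       # roughly the input size, or larger
print(len(bz2.compress(structured)))  # collapses to a tiny fraction
```

This is why filtering and decimating first pays off: it throws away the out-of-band noise (entropy) before the compressor ever sees it, instead of asking the compressor to do the impossible.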
You can also just "round" or even "throw away" bits – at 100MS/s
(presumably coming from an X310 running at a
You'll likely have to buffer the output to a ramdisk and then slowly bleed
that to the disk. Compression typically doesn't work well on IQ data
unless you've got a structured signal in there. If there's a lot of white
noise, you won't get much compression
On Sat, Jul 16, 2016 at 12:00 AM Dave