Dieter wrote:
So I'm reading about 200 Hz TVs/monitors at
http://DansData.com/askdan00043.htm
and a few questions come to mind. If a single-link DVI
maxes out at 1920x1200 60 Hz, then 1920x1200 120 Hz should
max out dual-link DVI. So even dual-link isn't fast enough
for 200 Hz. A first
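As a rough sanity check on those numbers, here is a throwaway C snippet (my own, not from the article; it uses the nominal 165 MHz single-link / 330 MHz dual-link pixel-clock limits and ignores blanking, so the real requirements are somewhat higher):

/* Rough DVI bandwidth check -- assumptions: nominal 165 MHz single-link
 * and 330 MHz dual-link pixel clocks, blanking overhead ignored. */
#include <stdio.h>

int main(void)
{
    const double single_link_mhz = 165.0;
    const double dual_link_mhz = 330.0;
    const int width = 1920, height = 1200;
    const int rates[] = { 60, 120, 200 };

    for (int i = 0; i < 3; i++) {
        double mpix = (double)width * height * rates[i] / 1e6;
        printf("%dx%d @ %3d Hz: %6.1f Mpixel/s (single %.0f, dual %.0f)\n",
               width, height, rates[i], mpix, single_link_mhz, dual_link_mhz);
    }
    return 0;
}

Even with blanking ignored, 1920x1200 at 200 Hz works out to about 461 Mpixel/s, well beyond a dual link.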
Timothy Normand Miller wrote:
Today, I'm leading a round-table discussion at OSU regarding Intel's
Larrabee architecture. I thought that perhaps people on this list
might be interested in engaging in a separate discussion. Larrabee is
a multicore processor that has several in-order x86 cores
I saw this and remembered that we were having clock problems.
http://www.latticesemi.com/corporate/newscenter/newsletters/newsdecember2008/ispclock.cfm
perhaps this might be useful for a future design.
--
JRT
John Griessen wrote:
James Richard Tyrer wrote:
I still wonder where this is going.
Is the objective to produce a board to be used, or to produce an open
core for VGA?
From: http://wacco.mveas.com/index.php?entry=24
Michael M. says, The point of this project is to get development
Dieter wrote:
No. Two layers: one copper, one conductive silver ink at 40 milliohms per square.
Usually you can identify traces that can stand some resistance, then make those
silver-ink jumpers.
UV-cure acrylic paint is put down as insulator material; ink over that
makes a 2-layer topology
Patrick McNamara wrote:
I am quite surprised at the tack the whole discussion took. I was
expecting more discussion on the fact that we actually had a company
release a complete, commercial processor under the GPL. Regardless, let
me remind everybody of one thing. Somebody has already done
André Pouliot wrote:
The problem remains the same: even if you use microcode, you can't get near
1 operation per cycle for a processor in an FPGA and still make it fast.
It's either fast but multi-cycle, or single-cycle but slow.
IIUC, the limiting factor would be the speed of the multiply.
Specifically, the
Dieter wrote:
With
sufficient hardware, you can do my sample problem at the rate of one
output per clock. HOWEVER, it will require 9 hardware multipliers and 6
adders vs only 3 of each for the vector processor. To do 4 vector * 4x4
Transform matrix (which is required for RGBA pixels), it will
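For concreteness, a minimal C sketch of that 4-vector * 4x4 transform (my own illustration, not anyone's actual pipeline code): fully unrolled it costs 16 multiplies and 12 adds per pixel.

/* 4-component vector times 4x4 transform matrix, unrolled per row:
 * 4 multiplies and 3 adds per output component, 16 and 12 in total. */
static void xform4(const float m[4][4], const float in[4], float out[4])
{
    for (int r = 0; r < 4; r++)
        out[r] = m[r][0] * in[0] + m[r][1] * in[1]
               + m[r][2] * in[2] + m[r][3] * in[3];
}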
Nicolas Boulay wrote:
One or two years ago, somebody posted a lot of real-world shader code. Despite
the fact that the OpenGL ARB proposes vector operations, most of the
instructions used are scalar. So a SIMD processor is of no interest
for this kind of code.
I guess it depends on what you mean by scalar
Dieter wrote:
With
sufficient hardware, you can do my sample problem at the rate of one
output per clock. HOWEVER, it will require 9 hardware multipliers and 6
adders vs only 3 of each for the vector processor. To do 4 vector * 4x4
Transform matrix (which is required for RGBA pixels), it will
Nicolas Boulay wrote:
2007/12/15, James Richard Tyrer [EMAIL PROTECTED]:
And more hardware is more hardware so it will obviously run the
problem faster.
That's the point!
Yes, and where do we get this additional hardware (that is, more hardware
than 16 32-bit float MACs would require)?
You
Nicolas Boulay wrote:
2007/12/15, James Richard Tyrer [EMAIL PROTECTED]:
Nicolas Boulay wrote:
2007/12/15, James Richard Tyrer [EMAIL PROTECTED]:
And more hardware is more hardware so it will obviously run the
problem faster.
That's the point!
Yes, and where do we get this additional
Stephen Pollei wrote:
Do most people like nasm for an assembler or do you like another better?
http://nasm.sourceforge.net/
I prefer Intel assembler syntax because that is what I originally
learned. I have a Borland assembler which runs on DOS. I also have an
old MS assembler 5.x but it is
Patrick McNamara wrote:
http://www.opensparc.net/news/2007-12/tgdaily-sun-open-sources-t2-processor.html
This (unlike the T1) supports all VIS instructions (except Quad
precision) and the FGX processor (SIMD).
It is actually the FGX processor which would be of interest to us. I
have
Paul Brook wrote:
It is actually the FGX processor which would be of interest to us. I
have wondered if it would be possible to have a graphics processor based
on multiple SIMD processors from standard MPUs. I was thinking of the
AltiVec; however, the SPARC is available free.
This has been
Timothy Normand Miller wrote:
Your analogy with CPU pipelines isn't quite on point here.
Actually, I didn't say anything about CPU pipelines. I think that we
are equivocating about the meaning of 'pipeline'. The pixel pipeline
and a pipelined FPU do not mean the same thing by 'pipeline'.
Timothy Normand Miller wrote:
On 12/14/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
Do you have an algorithm that you intend to implement that doesn't use
shaders -- doesn't multiply matrices? As I asked, where is it?
Everything I have read about 3D is based on matrix multiplication
Kenneth Ostby wrote:
Actually, when it comes to hardware, there is surprisingly little
matrix-matrix multiplication in the 3D world.
Duck test:
P' = T*P
|p'1|   |t11,t12,t13|   |p1|
|p'2| = |t21,t22,t23| * |p2|
|p'3|   |t31,t32,t33|   |p3|
We can write this out
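Writing that product out component by component (a plain C sketch of the standard expansion, added here for illustration) also shows where Dieter's counts come from: 9 multiplies and 6 adds, with each output component an independent multiply-accumulate chain, which is why compiled shader code tends to look scalar.

/* P' = T * P, written out as scalar multiply-accumulates:
 * 3 multiplies and 2 adds per component, 9 and 6 in total. */
static void xform3(const float t[3][3], const float p[3], float pp[3])
{
    pp[0] = t[0][0] * p[0] + t[0][1] * p[1] + t[0][2] * p[2];
    pp[1] = t[1][0] * p[0] + t[1][1] * p[1] + t[1][2] * p[2];
    pp[2] = t[2][0] * p[0] + t[2][1] * p[1] + t[2][2] * p[2];
}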
André Pouliot wrote:
If we do the same with a fixed pipeline, and we suppose we do the same
100 operations but unrolled, running at 100 MHz, we have the same
requirement for the multipliers: 20 stages of 4 multipliers per
stage (RGBA), so that's 80 multipliers. The difference now is that will a
IIRC, the attached is correct for a VCO made from two PECL
inverter/buffers. I think that LVPECL chips are available that run on
3.3 volts.
Slower logic would use an SR flipflop in place of the first
inverter/buffer. That might or might not be a good idea for ECL -- not
sure.
You need to
Timothy Normand Miller wrote:
On 11/30/07, Vesa Solonen [EMAIL PROTECTED] wrote:
On Fri, 30 Nov 2007, Timothy Normand Miller wrote:
to the clock you're generating. The digital problem we're seeing is
high-frequency jitter, while the analog one is much lower frequency,
on the order of a few
[EMAIL PROTECTED] wrote:
James Richard Tyrer [EMAIL PROTECTED] wrote:
The Analog chip would be OK and it is inexpensive, but we would
also require a VCO. The VCO would have to be 2x since we need to
run it through a flipflop to get a 50% duty cycle. So we are looking
for 50 MHz to 660 MHz (plus
James Richard Tyrer wrote:
It appears that the solution to the jitter problem might be to change
the way that we generate the clocks.
Something like this could be used to generate the existing two clocks
plus a third one to drive the pixel clock generators:
http://focus.ti.com/lit/ds
James Richard Tyrer wrote:
James Richard Tyrer wrote:
It appears that the solution to the jitter problem might be to change
the way that we generate the clocks.
Something like this could be used to generate the existing two clocks
plus a third one to drive the pixel clock generators:
http
It appears that the solution to the jitter problem might be to change
the way that we generate the clocks.
Something like this could be used to generate the existing two clocks
plus a third one to drive the pixel clock generators:
http://focus.ti.com/lit/ds/symlink/cdcel937.pdf
This is
Timothy Normand Miller wrote:
On 11/29/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
Timothy Normand Miller wrote:
Sorry about the cross-post. We're -- THIS close to getting OGD1
done, with artwork in the hands of board makers who are working on
quotes, and we've discovered a problem
Timothy Normand Miller wrote:
Sorry about the cross-post. We're -- THIS close to getting OGD1
done, with artwork in the hands of board makers who are working on
quotes, and we've discovered a problem that could make the video
output unacceptable.
Also, please consider if the jitter can be
Dieter wrote:
The crystal has negligible jitter. It's the DCM in the Xilinx chip
that's introducing all of the noise. A lot of it comes from ground
bounce and crosstalk from other activity in the FPGA.
Are you saying that there is ground bounce and crosstalk *inside*
the FPGA? Kinda hard to
Timothy Normand Miller wrote:
On 11/29/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
This doesn't quite add up. IAC, are we talking about analog or digital
display?
Digital. Analog has visible problems at much higher dot clocks. I
haven't seen it myself, but reportedly, what you see
Timothy Normand Miller wrote:
Sorry about the cross-post. We're -- THIS close to getting OGD1
done, with artwork in the hands of board makers who are working on
quotes, and we've discovered a problem that could make the video
output unacceptable.
We've discovered that the clock generators in
Raphaël Jacquot wrote:
Attila Kinali wrote:
Simple example of why this is bad: the gnome screen saver
requires a communication path over dbus to disable it
(for something like presentations or video applications).
This means that if app A wants to disable the gnome
screen saver it has to
Tim Schmidt wrote:
On 5/25/07, Attila Kinali [EMAIL PROTECTED] wrote:
rant
*censored*
/rant
Thanks.
dbus is an abomination that should never have come into
existence.
? Please explain.
And freedesktop.org works too much for themselves
without asking application developers.
???
Loris Cuoghi wrote:
Hi,
I'd like to point out this proposal for a reworked kernel graphics
subsystem.
http://kerneltrap.org/node/8242
To me, it brought to mind the thread on this mailing list, dated August
2006, in which interesting possibilities were brought up. The one from
which the
Pierre Ducroquet wrote:
On Saturday 26 May 2007 12:29:19 James Richard Tyrer wrote:
[snip snip]
I must admit that I don't know what DBus is, or what it is supposed to
do, although I thought that it was for interprocess communication. All I
know is that it doesn't work with KDE (last time I
Lourens Veen wrote:
On Saturday 26 May 2007 12:32, James Richard Tyrer wrote:
Loris Cuoghi wrote:
Hi,
I'd like to point out this proposal for a reworked kernel graphics
subsystem.
http://kerneltrap.org/node/8242
One of the many interesting posts in the thread:
http://lists.duskglow.com
Nicholas S-A wrote:
Well, apparently this is a world's first, so there is (most likely) no
point in searching for another, let alone 1080p.
http://www.engadget.com/2007/05/21/fujitsus-h-264-chip-encodes-decodes-in-full-hd-a-worlds-fir/
We could use that, but it raises our price point a bit
Raphaël Jacquot wrote:
here's a newer transmission system that could be used (instead of HDMI).
in particular, see the first (pdf) document, explaining that you can fit
that thing on a stratix fpga
http://www.google.com/search?ie=UTF-8&oe=UTF-8&sourceid=navclient&gfns=1&q=smpte+424M
This is an
Raphaël Jacquot wrote:
James Richard Tyrer wrote:
Raphaël Jacquot wrote:
here's a newer transmission system that could be used (instead of HDMI).
in particular, see the first (pdf) document, explaining that you can
fit that thing on a stratix fpga
http://www.google.com/search?ie=UTF-8oe=UTF
Rogelio Serrano wrote:
On 5/26/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
Nicholas S-A wrote:
Well, apparently this is a world's first, so there is (most
likely) no
point in searching for another, let alone 1080p.
http://www.engadget.com/2007/05/21/fujitsus-h-264-chip-encodes
Rogelio Serrano wrote:
On 5/27/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
Rogelio Serrano wrote:
On 5/26/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
Nicholas S-A wrote:
Well, apparently this is a world's first, so there is (most
likely) no
point in searching for another, let
For those who don't know what a hardware multiplier is:
http://tams-www.informatik.uni-hamburg.de/applets/hades/webdemos/20-arithmetic/60-mult/mult4x4.html
This is a serial carry circuit. Parallel carry can be implemented as it
is with an adder. Or, you can use this pattern with latches to
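As a software analogue of what that multiplier array computes, here is the shift-and-add form (a C sketch for illustration only; the hardware sums the same partial products combinationally, or per pipeline stage if latches are inserted between rows):

/* Shift-and-add multiplication: the same partial products a
 * combinational 4x4 multiplier array sums in hardware. */
static unsigned mul4x4(unsigned a, unsigned b)   /* 4-bit operands */
{
    unsigned product = 0;
    for (int i = 0; i < 4; i++)
        if (b & (1u << i))
            product += a << i;   /* add the i-th partial product row */
    return product;              /* result fits in 8 bits */
}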
Jean-Baptiste Note wrote:
Hello,
Yes, I know that. The point I was trying to make was that the CPU can
only do one thing at once, and that a shared interrupt cannot be
serviced while the CPU is still servicing another of the interrupts that
share the hardware interrupt. I don't know what
http://www.edn.com/article/CA6434366.html?nid=2431&rid=926513285
I presume that the same ideas could be applied to decoding.
--
JRT
Attila Kinali wrote:
Using an on-board general purpose CPU on the graphics card will
not give you any advantage at all. If a PC CPU is too slow, how
do you want to beat that with a CPU that you can put onto a graphics
card without implementing half a PC on it?
Well actually, you would need half
Peter TB Brett wrote:
On Thursday 15 March 2007 02:30:06 sinkam wrote:
On Thu, 22 Feb 2007 13:41:39 +0500, Peter TB Brett [EMAIL PROTECTED]
wrote:
Once again, your idea is impractical.
From: Carlo Salinari [EMAIL PROTECTED]
Subject: [Open-graphics] Slashdot | HDMI-Enabled Graphics Cards
Tim Schmidt wrote:
On 4/29/07, Dieter [EMAIL PROTECTED] wrote:
H.264 offload is absolutely necessary for good Blu-ray/HD-DVD playback.
Exactly the situation when DVD on the PC premiered circa 1998. Now,
10 years later, $60 motherboards that integrate graphics, audio,
networking, all the
IIUC, what some are proposing is something that looks like the Apple TV
box. Possibly a little larger to also include a VGA connector on the
back. The difference would be that:
- We would not require proprietary software to run it.
- We would support all video formats.
Timothy Normand Miller wrote:
So, in other words, a very powerful MythTV/Tivo sort of device? Would
it have a hard drive?
Actually, that wasn't what I had in mind. A computer has a hard drive,
so I don't see the need for another one in the box.
One of my friends spent months researching
Benjamin Schroeder wrote:
Regarding the idea of doing an open DVR
I hate to point this out to what should be a technically sophisticated
group of people, but: you do NOT record off of your TV; to record TV
programs, you need either a tuner, a set-top box for cable, or a satellite
receiver,
Timothy Normand Miller wrote:
On 4/20/07, Raphaël Jacquot [EMAIL PROTECTED] wrote:
how about using something like this then, which allows you to have a powerpc
405 core plus your own stuff next to it?
Rogelio Serrano wrote:
On 4/20/07, Timothy Normand Miller [EMAIL PROTECTED] wrote:
On 4/20/07, Raphaël Jacquot [EMAIL PROTECTED] wrote:
how about using something like this then, which allows you to have a powerpc
405 core plus your own stuff next to it?
Dieter wrote:
One thing to consider is whether it would be possible to use the video
board to decode JPG and JP2 still pictures, and place them on the
screen with compositing.
If we can do it without too much grief, sure. But it isn't important
to offload the main CPU decoding a single
Daniel Rozsnyó wrote:
If the UMA stuff from Rogelio will be possible (e.g. by designing a
new northbridge), wouldn't it be possible to make a
mass-multiprocessing mainboard using non-SMP-enabled CPUs?
If the processor chip (actually package) has cache then you need to have
address snoop for
Paul Brook wrote:
How much does SMP need direct hardware support (cache coherency?); could this
be eliminated by software (patching the kernel to assign processes to CPUs
wisely)?
A multiprocessor machine without hardware cache coherency is extremely hard to
program, to the point of being useless for
Timothy Normand Miller wrote:
Allow me to inject a little guidance here. People are going in
circles, discussing high-level things like which video formats to
decode and which video formats to output.
You're putting the cart before the horse.
Before you can HOPE to support any of those
Timothy Normand Miller wrote:
On 4/21/07, Dieter [EMAIL PROTECTED] wrote:
- Let's assume PCIe 1x (the answer to the alternatives is basically
the same). How are you going to connect that to a processing
element?
I'm assuming Ethernet. The TI DSP chips have Ethernet builtin, so
Andy Fong wrote:
Accessing textures from host memory can be very inefficient. But it
I just can't help it, but I have to ask...
how can a system designer make it efficient?
hypothetically...
- More graphics memory so you can hold all your textures
- A faster bus between the GPU and the
Timothy Normand Miller wrote:
On 4/20/07, Rogelio Serrano [EMAIL PROTECTED] wrote:
this can all be rolled into a new northbridge later.
We're not getting into the MoBo chipset business any time soon.
Putting aside the complexity and cost, I doubt we could get the
information we need without
Dieter wrote:
Well, some basic questions to ask ourselves:
1) What will it do? I personally think that a reasonable aim is decoding
video, hopefully even 720p/i or possibly 1080p/i, in real time (30+
fps),
while also providing a simple framebuffer and possibly audio. If
video is
Loren Merritt wrote:
On Thu, 19 Apr 2007, Dieter wrote:
1080p
MPEG 1, 2 up to 80 Mbps
MPEG 4 up to 20 Mbps (Is this really the worst case? Seems low.)
H.264 up to 40 Mbps
H.264 is the killer. :-(
It is worse than just H.264, it has to be H.264 HiP 1080p/30! Only
dedicated
Simon wrote:
On 4/21/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
But, to the point, IIUC, motherboards with HTX are supposed to be
looming on the horizon. IIUC, this would be as fast as unified
memory architecture.
According to wikipedia, HTX uses DMA, rather than a uniform
Rogelio Serrano wrote:
HTX is just an interconnect, right? So it's not really about being NUMA.
It just has direct access to memory at the same level as the CPU. It's
just HyperTransport that goes directly to a memory controller,
whether it is on the same die as the CPU or not, or whether the
Timothy Normand Miller wrote:
I wouldn't want to say that this discussion is off-topic;
Actually, this discussion has become useless since it is now based on
Paul Brook engaging in what I believe is called hit-and-run or
pettifogging rhetoric. Unfortunately, such substitutes for useful
Nicholas S-A wrote:
A number of you are very keen on having a graphics card with some
kind of CPU or DSP on it.
Not to be nagging or anything, but isn't that just what OGA is? We
might be using a small micro, but it is still integral to the DMA
transfer, VGA, etc. Or are you referring to a
Timothy Normand Miller wrote:
Is that what you really want? A video decoder? Not a graphics card?
The current situation is that a user must purchase a high-end video card
suitable for serious gaming or 3D CAD usage to get H.264 HiP 1080p/30. A
market niche, therefore, exists for a video card
James Richard Tyrer wrote:
Note that HDMI to DVI + PSDMI boxes do exist.
and that should be:
Note that HDMI to DVI + SPDIF boxes do exist.
There are simply too many acronyms. :-D
--
JRT
Timothy Normand Miller wrote:
On 4/19/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
At first look, it appears to me that the service ISR could cause latency
problems with the sync interrupt if they share the same interrupt. This
is the real issue that needs to be discussed.
You are right
Paul Brook wrote:
[Taking offlist]
Would using RT speed up the graphics board?
Unlikely. Graphics don't tend to have very demanding latency requirements. The
regular process scheduler is generally sufficient for graphical tasks on
desktop class hardware/OS. You only have to display a frame
Nicolas Boulay wrote:
http://www.linuxdevices.com/news/NS7803461096.html
That's a new chip for set-top boxes. It costs around $20, I think, so
imagine adding that to an OGC for a cost around $50.
Intel CE2110:
http://www.intel.com/design/celect/2110/ce2110_brief.pdf
This appears to have a DDR2
Timothy Normand Miller wrote:
On 4/17/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
Have I been the victim of Intel hype? They make hardware that would
make it possible to have enough interrupts for each PCI card slot to
have 4 interrupts or at least to have 24 hardware interrupts
Hugh Fisher wrote:
Simon wrote:
Regardless of the cost of designing a better solution, my point is
that using a general purpose CPU is likely to be infeasible, because
the price will be too unattractive for the hardware to be profitable.
This discussion has now reached the point where we
Patrick McNamara wrote:
http://www.theinquirer.net/default.aspx?article=38964
So, if we were to use an Intel 965 series northbridge with graphics, the
bus interface is already designed.
--
JRT
Paul Brook wrote:
IMHO the sync interrupt doesn't need to be different from any other
interrupt.
If the interrupt is the same for sync and service request, then the
driver will have to read two status register bits to see which interrupt
is set before the interrupt is serviced and then write
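In driver terms that looks roughly like the C sketch below. The register offset and bit names are hypothetical, made up for illustration; OGD1's real register map is not being quoted here.

/* Hypothetical shared-interrupt handler: one status register,
 * one bit per source, write-1-to-clear acknowledge. */
#define REG_INT_STATUS  0x10          /* made-up MMIO offset */
#define INT_SYNC        (1u << 0)     /* end-of-frame sync */
#define INT_SERVICE     (1u << 1)     /* firmware/DMA service request */

static void irq_handler(volatile unsigned int *mmio)
{
    unsigned int status = mmio[REG_INT_STATUS / 4];

    if (status & INT_SYNC) {
        /* handle vertical sync: flip buffers, wake waiters, ... */
    }
    if (status & INT_SERVICE) {
        /* handle the service request */
    }
    mmio[REG_INT_STATUS / 4] = status;   /* acknowledge what we saw */
}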
Nicholas S-A wrote:
To be practical, both cost and power wise, this solution would have
to be based on an embedded chip.
AMD Geode processors can be used to make a graphics card. They
support MMX and 3D-NOW. AMD states that they fully support Linux
on these.
Paul Brook wrote:
On Thursday 19 April 2007 00:47, James Richard Tyrer wrote:
Paul Brook wrote:
Really? I'd expect everything to be a single PCI device. DMA controllers
only tend to exist as separate entities on systems where the normal
devices can't be bus-masters. You may implement
Timothy Normand Miller wrote:
On 4/18/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
Paul Brook wrote:
Really? I'd expect everything to be a single PCI device. DMA
controllers only
tend to exist as separate entities on systems where the normal
devices can't
be bus-masters. You may
Nicholas S-A wrote:
A possibility could be the Xscale and intel 2700:
http://en.wikipedia.org/wiki/Intel_2700G which supports OpenGL ES.
Isn't an X-Scale just a fast ARM?
yeah, but the 2700 has an Xscale coprocessor interface, which means it
needs to have an Xscale (or, I suppose,
Paul Brook wrote:
On a typical PCI system each device only gets a single interrupt pin (a
PCI bus has 4, but each device is only supposed to use 1), and several
devices share an interrupt line.
Thus all interrupts should be maskable on the device, and probably
combined into a single output
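The mask-and-combine itself is simple enough to write down (sketched in C purely for illustration; in hardware it is just an AND per source and an OR reduction): the single interrupt output is the OR of the status bits that are also enabled in the mask.

/* Per-source status and enable mask, combined into one interrupt output. */
static int int_output(unsigned int status, unsigned int mask)
{
    return (status & mask) != 0;   /* assert INTA# if any enabled source pends */
}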
Timothy Normand Miller wrote:
On 4/16/07, James Richard Tyrer [EMAIL PROTECTED] wrote:
I'm not a PCI expert. However, you are talking about the actual
physical implementation in PCI. If masking is required, this would be a
function in the PCI interface. However, IIUC, PCI devices
Nicolas Boulay wrote:
2007/4/17, Attila Kinali [EMAIL PROTECTED]:
nVidia and ATI have designed specialised CPUs ('shader units')
for their cards. I think it is reasonable for the OGF to
consider using a general purpose CPU on the card because it
will be quicker and easier than designing our
Timothy Normand Miller wrote:
Something we need to be able to do is enable the interrupt for video.
Generally, we only need one interrupt per frame, so what we need to do
is set the interrupt bit in one of the instructions at the end of the
last active scanline. The thing is, it's not adequate
Timothy Normand Miller wrote:
I've posted to SVN a new FIFO design. It's kinda wasteful, but it's
designed for very high clock rates. It's an async FIFO (meaning that
the two ends are on different clocks), and the cross-domain
communication is one-hot (rather than gray-coded).
When I posted
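For anyone unfamiliar with the alternative being ruled out there: a Gray-coded pointer changes only one bit per increment, which is what makes it safe to sample across clock domains. A minimal C sketch of the conversions (for illustration, not the FIFO's actual code):

/* Binary <-> Gray conversion, the usual trick for async-FIFO pointers:
 * only one bit changes between successive Gray values. */
static unsigned int bin_to_gray(unsigned int b) { return b ^ (b >> 1); }

static unsigned int gray_to_bin(unsigned int g)
{
    unsigned int b = 0;
    for (; g; g >>= 1)
        b ^= g;                 /* prefix XOR undoes the encoding */
    return b;
}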
Paul Brook wrote:
Perhaps it would be even simpler to have two interrupts: one as you
describe -- a sync interrupt -- which could not be turned off, and a second
for a service request, which would be triggered by firmware on the board
or the DMA controller.
Then the issue of turning the sync
Rogelio Serrano wrote:
Sorry to ask, but I don't know where to go.
I'm looking for a 64-bit processor with segmented memory support. I'm
working on a no-kernel OS and the prototype is running on a 32-bit
processor. The problem is there is not enough memory space to have
very strong address
Dieter wrote:
Xbitlabs did some measurements on how much cpu it takes to
play video.
http://xbitlabs.com/articles/video/display/video-playback.html
Of course they did this with binary drivers for virus-server.
And they didn't hunt down high bitrate sources. Or tell us
what the bitrate of the
Simon wrote:
Just some rough calculation:
10 (Mib/s) * 3600 (s/h) / 8 (b/B) = 4.5 GiB/h, which is over double the
quality of DVD video, before accounting for the better compression
ratio afforded by H.264 versus MPEG-2. So this would seem to indicate
to me that the power of a modern CPU is more than
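Redoing that arithmetic explicitly (a throwaway C snippet using the same 10 Mib/s figure; 4500 MiB/h is about 4.4 GiB/h, which the post rounds to 4.5):

/* 10 Mib/s of H.264 sustained for one hour, expressed in MiB and GiB. */
#include <stdio.h>

int main(void)
{
    double mebibits_per_s = 10.0;
    double mebibytes_per_hour = mebibits_per_s * 3600.0 / 8.0;  /* 4500 MiB */
    printf("%.0f MiB/h  (~%.1f GiB/h)\n",
           mebibytes_per_hour, mebibytes_per_hour / 1024.0);
    return 0;
}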
Dieter wrote:
Video decoding is hard to parallelize on general purpose
CPUs. Thus even if it has two ALUs, you will not be able
to use both to their full potential. Specialized video
decoding hardware is much better in that case.
Also keep in mind, that having two processors does not
mean you
Timothy Normand Miller wrote:
It won't be long before we'll have to design a nanocontroller for OGD1
to manage VGA and DMA. I may be able to just go off and design one
myself, but I think that many of you would fancy observing and
participating in the design process, and with more brains on it,
Timothy Normand Miller wrote:
On 3/16/07, Daniel Rozsnyó [EMAIL PROTECTED] wrote:
Timothy Normand Miller wrote:
It won't be long before we'll have to design a nanocontroller for OGD1
to manage VGA and DMA. I may be able to just go off and design one
myself, but I think that many of you
Attila Kinali wrote:
Moin,
Without having read the whole discussion, a small comment
on HDCP:
On Mon, 19 Feb 2007 18:45:46 -0700
James Richard Tyrer [EMAIL PROTECTED] wrote:
The HDCP license rules require that digital *output* of DRM restricted
content higher than certain resolutions
Dieter wrote:
You need to be able to *capture* the data in real time, in
order to do a single sweep mode, for non-periodic signals.
The processing and display of that data don't have to be real
time.
Yes, that is true for some applications, but unless this is a
real time spectrum analyzer
James Richard Tyrer wrote:
There is no filter response shape to worry about.
This has always been a serious issue with an analog spectrum analyzer.
In theory, it should be a Gaussian distribution. This is not realizable
because it would have to extend to infinity. But even taking
[EMAIL PROTECTED] wrote:
Be sure to read this one since we have an interest in this issue:
3. Press Manufacturers to Offer Free Software operating systems on
new machines
http://www.fsf.org/resources/hw/how_hardware_vendors_can_help.html
--
JRT
Robert Vogel wrote:
http://www.freeappliances.org/
I don't see the point about the wearable computer. Is there some reason
that it wouldn't be a PC, compatible with PC hardware?
Actually, you could probably assemble one from off-the-shelf hardware,
except that I don't know where you get the
Carlo Salinari wrote:
Dieter wrote:
You can get a 2.4 GHz spectrum analyzer for $129.
http://www.dunehaven.com/lcsa.html
That's expensive :-). This one is just $99:
http://www.smallnetbuilder.com/content/view/24766/96/
(nice detailed article).
Nice piece of hardware. But, like the
Dieter wrote:
You need to be able to *capture* the data in real time, in order to
do a single sweep mode, for non-periodic signals. The processing and
display of that data don't have to be real time.
Yes, that is true for some applications, but unless this is a real time
spectrum analyzer
Dieter wrote:
Actually, I think that a digital demodulator is easier (than a
modulator) for complex signals.
I nominate JRT to design a 6th generation ATSC demodulator.
Static multipath is mostly solved, so concentrate on dynamic
multipath and on interference.
LOL ROF.
I consider it a major
Dieter wrote:
So the question is whether you can make a good PC card digital
oscilloscope for $100.00. You need an oscillator, frequency divider,
PLL, sample hold, and DAC as well as the PCIe interface. I seriously
doubt that this is possible for $100.00 but it does depend on the
maximum
Dieter wrote:
You need to be able to *capture* the data in real time, in order to
do a single sweep mode, for non-periodic signals. The processing and
display of that data don't have to be real time.
Yes, that is true for some applications, but unless this is a real time
spectrum analyzer
Timothy Normand Miller wrote:
On 3/4/07, Dieter [EMAIL PROTECTED] wrote:
Will OGC be able to output arbitrary waveforms, or only video? If
OGC can generate sine waves, square waves, triangle, etc. it would
be very useful as a piece of test equipment. It could be a
tracking generator for the