Hello.

Ryan Underwood wrote:
Yeah, that is a big problem because
everyone is using a callback model
these days
I am not sure we can't use the callback
model; it may just be inefficient.
That also depends on how the server
reacts to a partially satisfied request
(i.e. it asked for XXX bytes, but we
give it only half that or so). If the
server has no problem with that, we
might still be able to calculate how
much we have to speed the DMA up or down.
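A hypothetical sketch of that calculation
(made-up names, not dosemu code): if the sound
layer asked for "requested" bytes but we could
only hand it "supplied", the shortfall can be
turned into a correction of the emulated DMA speed:
---
#include <stddef.h>

static double dma_speed = 1.0;  /* 1.0 == nominal DMA transfer rate */

static void adjust_dma_speed(size_t requested, size_t supplied)
{
    if (requested == 0 || supplied > requested)
        return;
    /* Fraction of the request we managed to satisfy. */
    double fill = (double)supplied / (double)requested;
    /* Speed the emulated DMA up when we under-delivered;
     * the 0.1 gain is an arbitrary smoothing constant. */
    dma_speed += 0.1 * (1.0 - fill) * dma_speed;
}
---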

It just sums the values of the streams.
I don't know how well it mixes more than
2 streams.
I know this is false because I have played
What exactly of the above, in your
opinion, is false, btw? The "it just
sums the values..." part, or the "I don't
know how..." statement?
Or do you think just summing the values
will produce good results for many
streams? Here's the quote:
---
The resolution for 32-bit mixing is only 24-bit. The low significant
byte is filled with zeros. The extra 8 bits are used for the saturation.
---
For 16-bit streams there may be a full
16 bits of headroom for saturation, but
with more than 2 streams the losses
are unavoidable, and consider that the
physical output is usually only 16-bit,
not 32.
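Just to illustrate the point (this is not the dmix
code, only a sketch): mixing by plain summation means
accumulating in a wider integer and saturating back to
what the physical output can hold. With two 16-bit
streams the sum still fits in 17 bits, but with more
streams you either clip or have to attenuate, i.e.
lose precision:
---
#include <stdint.h>
#include <stddef.h>

/* Sum one sample from each stream and saturate to the
 * 16-bit range of the physical output. */
static int16_t mix_sample(const int16_t *samples, size_t nstreams)
{
    int32_t acc = 0;            /* extra bits act as headroom */

    for (size_t i = 0; i < nstreams; i++)
        acc += samples[i];

    if (acc > INT16_MAX)
        acc = INT16_MAX;
    if (acc < INT16_MIN)
        acc = INT16_MIN;
    return (int16_t)acc;
}
---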

way dmix is implemented, its use is very
limited. Well, not our problem at all.
It might be a good question for the ALSA list.
It is not a question; all of this is
even documented.

Isn't that what O_NONBLOCK is for?
Here is Takashi's reply to a Linus rant:
Wow, amazing! O_NONBLOCK is not for that,
now I see. I wasn't aware such seemingly
obvious parts of the standard could sound
so different when Linus explains them. :)

There is disagreement, but most kernel developers who responded thought
that using O_NONBLOCK to avoid hanging the app was silly. If the app
wants to wait for the sound device to become available, it makes more
sense to loop until errno is no longer EBUSY.
I don't think they (Linus mainly) suggest
such a silly thing. His point is that
synchronizing on the sound device is
silly and must not be done (no sane
program should lock up and wait for the
device to become free, as that may take
anywhere from seconds to years). And:
---
If you actually think this is a useful
feature, I would suggest trying to key it off O_EXCL or a new flag.
---
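For reference, the behaviour being argued about
boils down to something like this (a minimal sketch
with an OSS-style device name, not dosemu code):
open the device non-blocking and treat EBUSY as
"someone else has the card", instead of blocking
for an unbounded time:
---
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>

int open_sound_device(void)
{
    int fd = open("/dev/dsp", O_WRONLY | O_NONBLOCK);
    if (fd < 0) {
        if (errno == EBUSY)
            fprintf(stderr, "sound device busy, not waiting\n");
        else
            perror("open /dev/dsp");
    }
    return fd;
}
---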
Actually, reading that discussion was
both informative and fun. Yes, I find
it funny to see how easily Linus beats
his opponents even on points that
initially look obviously not in his
favour, and how easily he proves what
initially looks obviously wrong. That's
something ;)


Ok, so what you mean is that we must
have precise, low latency control
over what is written to the buffer.
No, what I mean is that we have to try
to advance the emulated DMA at a
constant speed. That means we shouldn't
do callbacks into it and request data
from it to fill the buffer. Instead
we'll have to do the buffering on the SB
side (the SB has a FIFO, actually), and
the SB needs enough information to be
able to adjust the speed of the emulated
DMA to match the speed of the real output.
It is probably more difficult to implement
with callbacks, but it may be possible.
The DOS program must see the DMA advance
smoothly, but we can still do some
buffering on the SB side.
The good thing is that the current DMA
implementation will work; it is only
the SB layer that will require some
work. Of course the most frightening
part is putting that stuff into a
separate thread.
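A very rough sketch of the idea, with made-up names
(not dosemu's real DMA or SB interfaces): a periodic
timer advances the emulated DMA at a constant rate
into an SB-side FIFO, and the fill level of that FIFO
is what tells us whether the emulated DMA speed has
to be nudged up or down to match the real output:
---
#include <stdint.h>
#include <stddef.h>

#define SB_FIFO_SIZE 4096

static struct {
    uint8_t buf[SB_FIFO_SIZE];
    size_t head, tail, count;
} sb_fifo;

static double dma_speed = 1.0;  /* 1.0 == nominal SB sample rate */

/* Periodic timer handler: advance the emulated DMA by a fixed
 * number of bytes taken from DOS memory (src) and push them into
 * the SB FIFO, so the DOS program sees the DMA progress smoothly. */
void dma_tick(const uint8_t *src, size_t nbytes)
{
    for (size_t i = 0; i < nbytes && sb_fifo.count < SB_FIFO_SIZE; i++) {
        sb_fifo.buf[sb_fifo.head] = src[i];
        sb_fifo.head = (sb_fifo.head + 1) % SB_FIFO_SIZE;
        sb_fifo.count++;
    }
}

/* Called when the real audio output has consumed some bytes:
 * drain the FIFO and use its fill level to adjust the DMA speed. */
void sb_output_consumed(size_t nbytes)
{
    while (nbytes-- && sb_fifo.count) {
        sb_fifo.tail = (sb_fifo.tail + 1) % SB_FIFO_SIZE;
        sb_fifo.count--;
    }
    /* Aim for a half-full FIFO; deviation nudges the speed. */
    double fill = (double)sb_fifo.count / SB_FIFO_SIZE;
    dma_speed += 0.05 * (0.5 - fill);
}
---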

Isn't it possible for a BSD port too?
It might be (it was done in the past),
but it may end up feature-poor and
difficult to maintain. The Linux kernel
has a lot of dosemu-related patches in
it, dating from the beginning of the
project to the present day.
And I personally (which may be totally
wrong) don't see a lot of demand for that.
Linux is always the primary "market",
but perhaps a Windows port would have
some market too. Not sure, but it's
worth a try imo.

dosbox and qemu and
complaining about the speed
Is qemu still slow, even with its new
proprietary virtualization technology
(or whatever kqemu is)? From what
I've heard (not much), it may be
faster than dosemu.

are much appreciated. x86_64 port is
probably more difficult, but can and must
be done.
By this you mean using a CPU emulator
For realmode only. Not a big deal.
We could even use simx86, but unfortunately
it broke.

In this case we are losing the
speed of native CPU, native IO, VESA
console (vs vgaemu).  I'm not sure
how it would turn out.
On 64-bit machines? It might be very fast.
Emulating realmode code is not a big
deal *at all*. And even protected mode
user-space code is not much of a
slowdown when translation is used, I
believe. The real problem is with
ring0/system code, which we can always
avoid executing.

qemu-user development, and qemu-user no
longer runs dosemu (while it used to do
Have you mailed him with that problem?
Yes, long ago. He said he isn't going to
invest any more time in qemu-user
development.

Well, we are running protected mode OS too,
as long as there is a DPMI
interface and we can find it.
:))) Guess where DPMI comes from?
It was developed by MS as a quick hack
to be able to run the Windows kernel in
protected mode without too much of a
redesign. Then there was a DPMI committee,
but I don't think any other OS has ever
used that API.

The LDT emulation was the only other
problem, wasn't it?
For Win31? Sure. (Plus tons of DPMI
and extender improvements.)

Do you have information on this?  I didn't
know there was any special
virtualization capability of x86-64.
Nothing except for some announcements:
http://www.extremetech.com/article2/0,1558,1644513,00.asp
And for Intel:
http://www.intel.com/technology/computing/vptech/

