Re: A call for a world perspective in 3D

2015-01-11 Thread Yuma Antoine Decaux
Hi again Alex,

I just took a short break before getting back to work.

I've pasted your questions below and will attempt to answer them all.
managing sound buffers (you can only have 16 at a time in OpenAL)
-Each of these buffers plays back smaller, sample-sized waveforms. A waveform has a 
specific signature that you can trace back and decompose with a Fourier transform; 
this is how JAWS or VoiceOver voices are built. What we are essentially doing in 
code is breaking sound down to its constituent parts and working back up the ladder, 
starting from axioms and expressing them in code. In practice, each channel becomes 
a queue of smaller buffers.
-All 16 channels will be used if possible, but for now I prefer to keep the focus on 
the XML parsing side so the proof of concept is ready within two weeks.
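
To make the queue-of-buffers idea concrete, here is a minimal sketch assuming the 
PyAL openal.audio API (the module I link in my other mail); the clip file names and 
the position are placeholders only:

import time
from openal.audio import SoundSink, SoundSource
from openal.loaders import load_wav_file

sink = SoundSink()                            # opens the default OpenAL device
sink.activate()
channel = SoundSource(position=[2, 0, -1])    # one of the up-to-16 channels
for name in ("rustle1.wav", "rustle2.wav"):   # placeholder clips
    channel.queue(load_wav_file(name))        # append to this channel's queue
sink.play(channel)
for _ in range(10):                           # pump the sink while audio plays
    sink.update()
    time.sleep(0.1)
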
handling stereo sound samples
-The API handles this already. It's just a matter of how you process the samples, 
where you prioritise, and what other options exist to increase the number of 
channels. I don't doubt for a second that an incredible experience can be had with 
what is already there.

generating sounds on the fly instead of relying on recorded audio
-A sound generated on the fly is just a signature. A chirp, for example, is a 
waveform signature you can synthesise directly. Fourier should be thanked for this.
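
For example, a chirp can be synthesised straight into raw PCM with nothing but the 
standard library (the amplitude and frequencies here are arbitrary choices):

import math, array

def chirp(f0, f1, dur, rate=44100, amp=0.5):
    """Linear sine sweep from f0 to f1 Hz over dur seconds, 16-bit mono PCM."""
    out = array.array("h")
    for i in range(int(dur * rate)):
        t = i / rate
        # integrating the instantaneous frequency gives the phase
        phase = 2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * dur))
        out.append(int(amp * 32767 * math.sin(phase)))
    return out.tobytes()                  # ready to hand to a sound buffer

pcm = chirp(300, 1200, 0.25)              # a quarter-second chirp signature
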
applying real-time filters or effects
-The API includes a nodesList which handles this entire structure, using a B+ tree. 
I'm sure more advanced tree systems could be devised or found online; it's 
remarkable how, if your Google query has the right semantics, you find exactly what 
you want.
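
I won't reproduce the API's nodesList internals here, but the general idea of 
chaining effect nodes can be sketched like this:

class FilterNode:
    """Applies one effect, then hands the result to its children."""
    def __init__(self, effect):
        self.effect = effect              # function: list[float] -> list[float]
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def process(self, samples):
        out = self.effect(samples)
        for child in self.children:
            out = child.process(out)
        return out

gain = FilterNode(lambda s: [0.5 * x for x in s])                    # halve volume
gain.add(FilterNode(lambda s: [max(-1.0, min(1.0, x)) for x in s]))  # hard clip
print(gain.process([0.2, 1.8, -2.4]))                                # [0.1, 0.9, -1.0]
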

managing occlusions and distance roll-offs
-This is the part I have been working on; I'll finish it early or mid week.
With that same node structure, you get concurrency between nodes. These concurrent 
streams can be reshaped into the buffer as they are blended, almost a machine-code 
A+B operation. The geometry is elegant too. Imagine a system, built from those same 
nodes, connected by matrices. Each matrix is local, reflecting its position relative 
to the global, world axis. The world contains, say, a matrix of 9 regions; these 
matrices are the world's child nodes. They have child nodes too, so a region may 
have 9 districts, and each district 16 blocks (keep the counts at congruent 
multiples if possible, where a multiple of one level is a multiple of the next, 
though I didn't in this example). Following me so far?
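
A toy version of that nesting (the 9/9/16 counts are just my example figures):

def build(levels):
    """levels lists the child count per depth, e.g. [9, 9, 16]."""
    if not levels:
        return {"children": [], "tags": [], "listeners": []}  # innermost block
    return {"children": [build(levels[1:]) for _ in range(levels[0])]}

world = build([9, 9, 16])
print(len(world["children"]))                  # 9 regions
print(len(world["children"][0]["children"]))   # 9 districts per region
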
Once you've reached the innermost, most nested matrix, that matrix holds simple 
instructions, tags, calls and listeners. Each functions under the following axiom:
1-Function bijectivity. Mathematically, each node relation should be mapped both 
ways, so you are sure every node is bijective with its counterpart, meaning a 
function composed with its inverse returns the original value. This is very useful 
for a lot of the connectivity.
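
A minimal illustration of that axiom (the node names here are hypothetical): store 
every relation in both directions, so the inverse always recovers the original.

class BijectiveMap:
    """Every relation is stored forwards and backwards."""
    def __init__(self):
        self.fwd, self.inv = {}, {}

    def link(self, a, b):
        self.fwd[a] = b
        self.inv[b] = a

    def round_trip(self, a):
        return self.inv[self.fwd[a]] == a      # inverse(f(a)) == a

m = BijectiveMap()
m.link("district_3", "buffer_12")              # hypothetical node names
assert m.round_trip("district_3")
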
Now add several classes which can be instanced and which contain or encapsulate the 
proper methods for functioning in the world, as well as across the various menus, 
maps, popovers, etc. My XML parser does exactly that: it places every element in the 
right hierarchy, accessible afterwards via call, dict search or list/matrix 
arrangements. The geometrical shapes inside come from a library I am coding that 
covers every primitive shape. Take the nGon: three sides give you a triangle, four a 
square, and as the vertex count grows you approach a perfect circle. Another 
primitive is a set of points you add together to form the environment, the space 
where the sound is emitted, facing inwards or outwards. At the top of this class of 
3D sound orientation objects, which I call wObjects, you have a smaller matrix of 
simple tuple-like rows drawn from a database I have already set up with the tables. 
The database script will keep growing as objects are created by code alone, not 
paintbrushing: abstract, elegant nature built with the minimum big-O cost. 
Factorial complexity kills you; keep the work arithmetic, and well...
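
The nGon idea in miniature (2D vertices only; the actual class also carries local 
and world axes):

import math

def ngon(sides, radius=1.0):
    """Vertices of a regular n-gon: 3 sides is a triangle, 4 a square,
    and a large count approaches a circle."""
    step = 2 * math.pi / sides
    return [(radius * math.cos(i * step), radius * math.sin(i * step))
            for i in range(sides)]

square = ngon(4)
near_circle = ngon(256)
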
Now I'll give you an example of how to generate a tree (a minimal sketch follows the 
steps):
-Create an nGon. Spread the vector transform influence from one point into a 
circular region around it with a hyperbolic falloff. The values of these falloffs 
are all class instances.
-Create a spline (a function line) perpendicular to the original object's x and y 
axes. It's easy: you compare one object's local axis against the world axis, then 
feed that value into the other object.
-Use the formula that drives the geometry of a real tree or plant, and make sure to 
include the phi ratio, which some call the golden ratio. This ratio lets the natural 
fractals combine. Mathematically, you just have to find the math you want, 
understand it, and apply it. Back to the task.
-Use several properties to the revolving 
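
Here is the sketch promised above: branch lengths shrink by 1/phi per generation, 
which is the golden-ratio ingredient (the spread angle and depth are arbitrary 
choices of mine):

import math

PHI = (1 + math.sqrt(5)) / 2                  # the golden ratio

def branch(x, y, angle, length, depth, out):
    """Each branch spawns two children with lengths scaled by 1/phi."""
    if depth == 0:
        return
    nx = x + length * math.cos(angle)
    ny = y + length * math.sin(angle)
    out.append(((x, y), (nx, ny)))
    for turn in (-0.5, 0.5):                  # arbitrary spread in radians
        branch(nx, ny, angle + turn, length / PHI, depth - 1, out)

segments = []
branch(0.0, 0.0, math.pi / 2, 10.0, 6, segments)
print(len(segments))                          # 63 line segments
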

Re: A call for a world perspective in 3D

2015-01-11 Thread Yuma Antoine Decaux
Hi Alex,

I have the JS Web Audio API classes ready, but after reading the openal.audio Python 
module, I think I can save a lot of processing and do everything in one language 
(except the XML and Lua for the World of Warcraft interface):

http://pythonhosted.org/PyAL/audio.html#module-openal.audio


To answer your question about the buffer channels.

Each buffer maxes out at 16 MB. A lot of sounds can be shrunk, blended and 
refactored using Fourier transforms. On top of that, buffer queuing algorithms can 
truncate active sound sources and their positional information to the right byte 
size, given that most or all of our computers are Intel x86 and many of us run 
64-bit. With 16 buffer channels, that gives approximately 256 MB of sound clips and 
waveforms generated on the fly, which can be queued using parallel algorithms. I was 
thinking of using the select() module for this: it listens and automatically fills 
the queue, which can then be passed to each individual buffer.
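
A rough sketch of that dispatch loop, with stand-in producer sockets (in the real 
thing these would come from the mixer processes):

import select, socket

producers = [socket.socketpair() for _ in range(4)]   # stand-in mixers
free_buffers = list(range(16))                        # the 16 channels

producers[2][1].send(b"chunk-7")                      # a mixer finishes a chunk

readable, _, _ = select.select([r for r, _ in producers], [], [], 0.1)
for r in readable:
    chunk = r.recv(64)
    slot = free_buffers.pop(0)                        # first available buffer
    print("queue", chunk, "on buffer", slot)
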

By quick calculation, this is how I see it:
Each observer (a character in the game) has three ranges (long, medium, short). 
Anything long range dithers in the perceptive field anyway, so those sources can be 
blended through the queue and played back as a single long-range pass, or 
pre-recorded (running a simulation first and then recording would also work). Mid 
range has more definition but is restricted to 5 sources. The remaining 10 channels 
can carry the various sources in the player's proximity. I can even hypothesise a 
cheat which filters the types of sounds we want to hear.
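
In code, that split could look like this (10 short-range channels, 5 mid, and 
everything else premixed into the single long-range pass):

def assign_channels(sources, listener):
    """Sort by distance and split into short/mid/long pools."""
    def dist(s):
        return sum((a - b) ** 2 for a, b in zip(s["pos"], listener)) ** 0.5
    ordered = sorted(sources, key=dist)
    return ordered[:10], ordered[10:15], ordered[15:]

srcs = [{"id": i, "pos": (float(i), 0.0, 0.0)} for i in range(30)]
short, mid, far = assign_channels(srcs, (0.0, 0.0, 0.0))
print(len(short), len(mid), len(far))       # 10 5 15
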

As for emulating higher channel counts, I think it will again have to be math based. 
Say you have a willow tree in front of you, with 35-odd branches, each carrying 
smaller branches and their leaves. Clumps of leaves with small rustle signatures 
(this is just function generation into the buffer) can be blended before being sent 
to the buffer: a kind of premix before it goes out into the world. Again, 
bijectivity is super important for tracing back and editing the raw data as it 
comes. Using the select module allows automatic dispatch to the first available 
buffer, since each buffer block is, say, the raw data plus its 
positional/volume/other information.
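
The premix itself is just summing the clips and normalising before the result ever 
touches a channel; a bare-bones version:

def premix(clips):
    """Blend many short signatures into one buffer, normalised against clipping."""
    mixed = [0.0] * max(len(c) for c in clips)
    for clip in clips:
        for i, sample in enumerate(clip):
            mixed[i] += sample
    peak = max(1.0, max(abs(s) for s in mixed))
    return [s / peak for s in mixed]

rustles = [[0.1, -0.2, 0.3], [0.05, 0.1], [-0.2, 0.4, -0.1, 0.2]]
print(premix(rustles))                  # one blended leaf-rustle buffer
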

I don't think this will be much of a problem, though it does expose a technical 
restriction.




Yuma Antoine Decaux
Light has no value without darkness
Mob: +612102277190
Skype: Shainobi1
twitter: http://www.twitter.com/triple7




Re: A call for a world perspective in 3D

2015-01-11 Thread Alex Hall
As before, I don't follow all of this. Computer science uses math, but nothing like 
this, and I didn't need anything past Calculus 1. Still, I'd like to see the API, 
and (most importantly) to know if I can use all this in an iOS or Mac app written in 
Swift or Objective-C. If everything is condensed to a C or C++ library, I don't see 
why such integration couldn't happen, but I have no idea if the languages you're 
using would be compatible. I realize you are focused on web-based apps at the 
moment, so we might be on two different wavelengths here.

Re: A call for a world perspective in 3D

2015-01-11 Thread Devin Prater
I think web apps would be wonderful as long as speech is also included or sent to 
the OS to process. I would, however, like to have a world to explore as a regular 
app on iOS. Minecraft, anyone?

Sent from my iPhone


Re: A call for a world perspective in 3D

2015-01-11 Thread Yuma Antoine Decaux
Mathematics is the base language. The rest is inflection.

I don't see a reason why it cannot be converted; the PyObjC bridge allows for this. 
You're used to an MVC design pattern, I suppose: Model, View, Controller. Treat all 
the Python code as the model, and PyObjC handles the view and controller through 
Xcode. I have looked at Swift and I like the language so far. I started with a 
web-based set of languages because they are so efficient server side, and World of 
Warcraft is a server-side platform. I saw no reason to spend all my time writing C 
classes when many Python modules are implemented in C anyway. Do you see where I'm 
heading? Every language uses the same base concepts; they just differ in how you 
write them. I know C reasonably well now that I have completed two courses using 
it, but so far I get better and, above all, faster results using the expansive 
Python module repository, where I can find every component I need to make this 
happen.
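
As a sketch of what I mean (this needs the pyobjc package; WorldModel and its method 
are made-up names), a Python class becomes an NSObject subclass the Objective-C side 
can call:

import objc
from Foundation import NSObject

class WorldModel(NSObject):
    def init(self):
        self = objc.super(WorldModel, self).init()
        if self is None:
            return None
        self.nodes = []               # model data stays on the Python side
        return self

    def nodeCount(self):              # callable from Objective-C as -nodeCount
        return len(self.nodes)

model = WorldModel.alloc().init()
print(model.nodeCount())              # 0
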

I understand you stopped at introductory calculus, but I can't stress enough the 
difference it makes to your code when you delve into higher maths.

In any case, I'm continuing this until I get a good solid working demo. 

Cheers,


Yuma Antoine Decaux
Light has no value without darkness
Mob: +612102277190
Skype: Shainobi1
twitter: http://www.twitter.com/triple7





Re: A call for a world perspective in 3D

2015-01-11 Thread Yuma Antoine Decaux
The reason I like JS is that Apple recently adopted it for system scripting as well. 
It's much better suited than a single-application scripting language such as the one 
JAWS uses. With JS, all UI elements across the entire system are accessible, and 
other JS scripts can be plugged together to enhance the experience even within the 
OS. I've been testing this for two years: GUI manipulation, UI element browsers, 
AXAccessibility trees, etc. It's entirely feasible. In JS at least, I have a 
functioning 3D setup; I just need to port the controls into JS and pair the process 
with the sound API, then suppress the pile of unwanted announcements and replace 
them with sounds positioned around me.

Pair this with Swift and you still get performant apps, essentially accessible to 
everyone too, with the database power the web provides.

Yuma Antoine Decaux
Light has no value without darkness
Mob: +612102277190
Skype: Shainobi1
twitter: http://www.twitter.com/triple7





Re: A call for a world perspective in 3D

2015-01-11 Thread Alex Hall
Well, let us know when you have a library ready to test out. I'd much rather use 
Swift, so PyObjC isn't really going to work. As I said, the ideal would be a Swift 
or C/C++ library that any other language could access. You said that all languages 
have the same basics, and you're certainly right about that, but porting code I 
don't understand is a recipe for disaster. (smile) Keep us updated.

Re: A call for a world perspective in 3D

2015-01-11 Thread Yuma Antoine Decaux
Hi Alex,

The reason I started with Python is that I initially wanted World of Warcraft 
accessibility; that's what drove the choice. I can port it to Swift once I know the 
language better.

I will publish the entire structure and its scripts once it's done, and we'll see if 
you can get into any of the internals, which I am sure you will :)



Re: A call for a world perspective in 3D

2015-01-10 Thread Alex Hall
Can you explain a bit more what this library is doing and how it might be used? When 
you said 3D sound, I at first thought you meant something to supplement or replace 
OpenAL, but that's clearly not the case. I'm not clear on just what this does. 
Thanks.
 On Jan 10, 2015, at 2:34 AM, Yuma Antoine Decaux jamy...@gmail.com wrote:
 
 Hi All,
 
 I am currently working on a 3D sound engine. I have so far done the following:
 1-A node structure for extracting tags and Lua function calls, creating a hierarchy 
 of nodes whose parent node is the UI.
 2-A 3D sound library connecting to the JS web sound API, using the node system.
 3-A parser toolset to create arrays of configurations between scripts and languages.
 4-A geometric 3D volume matrix, with the node hierarchy class used as a secondary 
 process.
 5-A parallel processing class to send socket information between nodes.
 6-A socket distribution (select()) daisy-chain communication layer.
 7-A 3D prototype of an SSD-based sound processing CPU that stores all the 
 information on the SSD as static memory. I have been 3D prototyping for about 15 
 years. I demand elegance and functionality in design, as much as efficient memory 
 management of blocks and sectors. I am a programmer.
 
 All the scripts do exactly what they are supposed to, except for the 3D matrix 
 layer, which I am currently working on. I have, however, done all the primitives, 
 transforms and rotations using matrices, and I'm about to get back to completing 
 the nGon class.
 
 This project started as a spark when I saw a tweet about a blind player on 
 World of Warcraft.
 
 Now it has turned out to be much bigger.
 
 Everything is written against standard APIs, namely Python and JS modules. I am 
 trying to complete this accessible World of Warcraft layer, which I will then use 
 as a GNU-licensed platform that does not depend on World of Warcraft itself. I 
 don't understand why Blizzard hasn't done this. But it has given me the opportunity 
 to see exactly what happens in the system architecture, and to be an architect 
 again, a capacity I thought I had lost along with my vision.
 
 Will anyone be so cool as to send me a reply with “#vipWOW” as subject?
 
 I really hope this ideal I have carried for the past 6 years, dedicated to 
 programming and mathematics I once rarely applied, can grow into a larger community 
 through the effort that I, and I hope others, will put in, perhaps as independent 
 hires, to help. I cannot afford thousands per month, but I have laid down the 
 architecture and the working subsystems, and I am working through each of them all 
 the way up to the main class.
 
 This effort, I have come to realise, demands far more hands than my blind use of 
 the computer can supply, though I handle Vim quite well and efficiently. And at 
 some point it also needs to be accessible to the level I want.
 
 If you are ready to experience something seriously cool (network connectivity, 
 private test server, wiki, calendars and contacts, VNC access, SSH, FTP; redundancy 
 is not there yet but we're working on an Arch Linux installation), with an extra 
 dimension (tactile), please do contact me. Let's build an order of classes that 
 standardises many aspects of our experience on the computer as blind coders, and be 
 the programmers for programmers in facilitating our own experience.
 
 Sincerely,
 
 Antoine Decaux
 twitter: triple7
 
 
 
 
 


--
Have a great day,
Alex Hall
mehg...@icloud.com



Re: A call for a world perspective in 3D

2015-01-10 Thread Alex Hall
I won't pretend to understand all of this. My degree is computer science, not 
higher mathematics or engineering. Still, I'm intrigued, and would love to hear 
a practical example. To keep things on topic, would this library be usable from 
a Swift or Objective-C app for iOS or OS X? If so, can you give a real-world 
example of how? I understand representing things as sounds, but how would it 
handle in a real app? That is, what about loading/managing sound buffers (you 
can only have 16 at a time in OpenAL), handling stereo sound samples, 
generating sounds on the fly instead of relying on recorded audio, applying 
real-time filters or effects, managing occlusions and distance roll-offs, that 
kind of thing? Is there a mapping engine, where the programmer can lay out the 
world in some kind of XML or JSON format? Have I missed the point entirely?
 On Jan 10, 2015, at 10:31 PM, Yuma Antoine Decaux jamy...@gmail.com wrote:
 
 I’ll get into more detail on the 3D sound part.
 
 It uses a node system, as mentioned earlier, to plug, unplug, blend or ratio-fit 
 one or more nodes, which can be filters, user-set parameters or daisy-chained 
 hierarchies of sound buffers. So imagine you call a tree instance from my library. 
 It uses phi and pi to generate the fractal links down to the leaf nodes. Each leaf 
 node has physical properties that follow its parent nodes with a coefficient, a 
 scalar value spread along the entire tree. Each node is a sound buffer or a set of 
 sound buffers. Collision detection is done via matrix identification and 
 eigenmatrices. Now set a wind particle object (full of bounding boxes) traversing 
 the tree: each collision triggers the sound of a rustle, in real 3D position 
 relative to the user's position.
 
 Now take these tree structures, use a spherical shape (revolving the nGon I 
 mentioned earlier around its y axis), and pass it through a deformer (which changes 
 the scalar values of the vectors within the sphere). This deformer can use a set of 
 physics class objects such as inertia, parabolic deviations, swirls; you name the 
 geometric shape, there's a math formula for it. Consider that each vector or vertex 
 is a bird in a flock. Apply an index to it, and use this swarm algorithm I studied 
 to create an array of bees, birds, fish, whatever; each, when colliding with 
 another, gets a behaviour generator driven, again, by scalar values. I can't stress 
 enough the utility of matrices and transformations for things that go beyond just 
 shapes.
 
 So I’ve gone way past my initial goal, and think this can be very useful.
 
 I want some help with some of the scripts, to complete them. I’m fine paying 
 for it, but the person needs to not only like the idea, but actually believe 
 in it.
 
 Anyway, here’s my two cents 
 
 
 
 Yuma Antoine Decaux
 Light has no value without darkness
 Mob: +612102277190
 Skype: Shainobi1
 twitter: http://www.twitter.com/triple7
 
 
 
 

Re: A call for a world perspective in 3D

2015-01-10 Thread Devin Prater
I'm not a programmer, but as a gamer I'd love to see this in action. For years audio 
games have been more of a "play this cool sound in the background and offer a few 
interactable items" affair. Sure, that's how old games and some fighting games work, 
but most games have gone beyond that. Have you asked on audiogames.net about this? 
There are plenty of developers there who are blind.

Sent from my iPhone


Re: A call for a world perspective in 3D

2015-01-10 Thread Yuma Antoine Decaux
Hi Alex,

Basically, I started this as a 3D engine, but now that I have applied all the 
mathematical rules and proofs to my primitive classes and their parent classes, I am 
realising that I am creating not only the engine but also the connections for an 
interface that uses 3D positional sound instead of graphics, with the same matrix 
structure a visual 3D game would use.

So the concept of this framework is a toolset for:
1-Getting data from various file types and parsing it into a separate database of 
elements: arrays within matrices, themselves within matrices.
2-A node system that connects the entire hierarchy of the 3D world, with a parsed 
interface in parallel.
3-A set of geometrical and physical tools (transforms, rotations, projections, 
spline behaviour, animation keypoints, boundaries, deflectors and warpers in various 
shapes), which are also hierarchical, using either forward or inverse kinematics; 
see the sketch after this list. I finished the nGon class, which creates any shape 
from triangle to square all the way to a circle in a click, with local and 
world-relative axes contained within the nGon object. This object will then be used, 
among other data capsules, in the wObject class, which can be an interactable 
object, a character, an AI or a building. Since this is the basis for my venture 
into visual perception algorithms, I have made sure everything is optimised, with 
the right helper tools and a tree structure of classes that collect data from each 
other through parallel processes or sockets.
4-The parallel processing and sockets are mainly server side, but these will be 
child classes of the voice conversation layer, which I will need some pointers on so 
I don't end up coding against a poor module.
5-A set of server-side tools using standard DB methods (I'd like to use MongoDB but 
will have to refresh my memory and update the code to match).
6-Since all of these can be packaged into a module, once I have tested user 
interface accessibility, I want to focus on structure comprehension on each 
platform, whether Mac OS or Windows.
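
Here is the sketch mentioned in point 3, showing the local-to-world conversion the 
transform tools perform (a z-axis rotation only; the real classes cover the full 
set):

import math

def rotate_z(point, angle):
    """Rotate a 3D point about the z axis."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

def to_world(local_point, origin, angle):
    """Local -> world: rotate by the object's orientation, then translate."""
    x, y, z = rotate_z(local_point, angle)
    ox, oy, oz = origin
    return (x + ox, y + oy, z + oz)

print(to_world((1.0, 0.0, 0.0), (5.0, 5.0, 0.0), math.pi / 2))
# (5.0, 6.0, 0.0): local x ends up pointing along world y
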

Finally, once all the testing is done on WoW and players can start populating the 
official server, I will have a set of tools that satisfy the three-tower multi-agent 
concept invented by Bell and the robotics and AI scientists who followed. Using the 
knowledge from the WoW experience, I am going to integrate a set of sensors 
(infrared, sonar, Bluetooth, gyro, temperature and humidity) that will fit in a 
small stone-like material (in this case diamond, which I will produce with the 
diamond-synthesising oven I purchased a few months ago).

Look at the parallel between the world I am coding and how the software interacts 
with the world. For example, when a player roams the world, with the log streams 
from the game's console providing location information and other relative data, the 
software can gather that information from what I like to call single agents, which 
are just the perceptive tower of the three-tower algorithm used in AI. The 
information, topology, points of interest, perimeters, etc. can all be called, or 
instances created, through my 3D math classes. If mapping out a virtual world can be 
done now (I hope to finish this component by mid February), then nothing stops the 
software from swapping the virtual world for sensors that build a rough outline of 
what's around us, using bounding boxes etc. And since there's already a 3D audio 
layer, the sky becomes the limit.

The reason I am looking for blind programmers is that only blind users can really 
develop an intuition for the human-computer interfacing methods.

Like I said, I have a server with all services available to those who wish to 
participate. The goal is to get 5-10 people in game to discuss what needs to change 
in the UI selection methods (I have an empty key_bind class) and how the 3D engine 
will fare as the number of objects grows (though I am using a functional programming 
style and keeping algebraic complexity to a minimum, save fundamental formulas, 
transforms and class instances).
This particular section of my code uses 3 components: nodes, positional sound, and a 
filter pass over each speech synthesis output to create the world.

I hope this clarifies a bit more.
Yuma Antoine Decaux
Light has no value without darkness
Mob: +612102277190
Skype: Shainobi1
twitter: http://www.twitter.com/triple7





Re: A call for a world perspective in 3D

2015-01-10 Thread Yuma Antoine Decaux
That's fine; the more info I get, the better.

I have taken quite a segue into my geometry and matrix classes, but the bottom line 
is that the World of Warcraft XML structure, together with the Lua function calls, 
is a tree structure, and is therefore easily attachable to a set of listeners that 
output speech synthesis. I have completed the script that builds the node list from 
the UI down to the smallest element of each frame and UI element in WoW. Sure, there 
are hundreds of UI files, but that doesn't matter: my algorithm picks out the 
necessary info and parses it into an array-like queue, and once I've coded the key 
bindings the script will be aware of everything happening UI-wise. That's a very 
good start for me.
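
To show the shape of the idea (the XML here is a tiny made-up stand-in for a WoW UI 
file, not the real schema):

import xml.etree.ElementTree as ET

doc = """
<Ui>
  <Frame name="ActionBar">
    <Button name="Attack"/>
    <Button name="Cast"/>
  </Frame>
</Ui>
"""

def walk(node, depth=0):
    """Flatten the UI tree into rows a speech listener can announce."""
    yield depth, node.tag, node.get("name")
    for child in node:
        yield from walk(child, depth + 1)

for depth, tag, name in walk(ET.fromstring(doc)):
    print("  " * depth + "{}: {}".format(tag, name))
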

Each brick must be laid down, but every brick is made.




Yuma Antoine Decaux
Light has no value without darkness
Mob: +612102277190
Skype: Shainobi1
twitter: http://www.twitter.com/triple7




 On 11/01/2015, at 2:37 pm, Devin Prater d.pra...@me.com wrote:
 
 I'm not a programmer, but as a gamer I'd love to see this in action. For 
 years, audio games have mostly been "play this cool sound in the background 
 and have a few interactable items". Sure, that was how old games and some 
 fighting games worked, but most games have gone beyond that. Have you gone to 
 audiogames.net about this? There are plenty of developers there who are 
 blind. 
 
 Sent from my iPhone
 

Re: A call for a world perspective in 3D

2015-01-10 Thread Yuma Antoine Decaux
I have already considered the buffer channel limits. These are, again, nodes. 
The nodes take a local form and a general form. Though internally only 16 
samples can play simultaneously, shuffling the samples through the 3D sound 
API, filtering them, and globalising them into a composite buffer (this is 
where function bijectivity is important, to allow things to go both ways) will 
effectively stream any sample that has gone through the node hierarchy as a 
single pass or channel. There are primitives I created for bounds, such as 
distance falloff. Listen around you: you will not have 16 different sounds 
going on simultaneously unless you're in a concert hall waiting for a 
classical ensemble to play. When you fold occlusion, clipping and filter 
buffers into one composite buffer, you gain a lot of flexibility in the number 
of samples you have in your toolkit.
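A rough sketch of the channel-budget idea, with an inverse-distance falloff 
standing in for my actual primitives (the falloff law and layout here are 
assumptions for illustration):

import math

MAX_CHANNELS = 16

def falloff(distance, rolloff=1.0):
    # Simple inverse-distance gain; the real primitive may differ.
    return 1.0 / (1.0 + rolloff * max(distance, 0.0))

def budget(sources, listener=(0.0, 0.0, 0.0)):
    # Keep the closest sources on direct channels and fold the rest
    # into one composite buffer, reserving one channel for the mix.
    def dist(src):
        return math.dist(src["pos"], listener)
    ranked = sorted(sources, key=dist)
    direct = ranked[:MAX_CHANNELS - 1]
    composite_gain = sum(falloff(dist(s)) for s in ranked[MAX_CHANNELS - 1:])
    return direct, composite_gain

sources = [{"pos": (float(i), 0.0, 0.0)} for i in range(40)]
direct, composite_gain = budget(sources)
print(len(direct), "direct channels,", round(composite_gain, 3), "mixed gain")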

Yes, the world should be considered as a single node from which everything 
else spawns, just like OOP but using essential shortcut maths (I strongly 
recommend reading up on discrete maths for this). I'm not sure yet how many 
processes I can run to offload work and dictate my own memory allocations, but 
for now I am building solid core classes to shift as many resources as 
possible from the graphics load onto the sound load.
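In sketch form, the single-root idea looks like this (hypothetical names, not 
my actual classes):

class WorldNode:
    # Everything spawns from one root, so a global traversal is one walk.
    def __init__(self, name, parent=None):
        self.name = name
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def walk(self, depth=0):
        yield depth, self.name
        for child in self.children:
            yield from child.walk(depth + 1)

world = WorldNode("world")
region = WorldNode("region-0", parent=world)
WorldNode("district-3", parent=region)
for depth, name in world.walk():
    print("  " * depth + name)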

I'm surprised no one has considered the node system, such as Node.js or 
python's setup() module. They are clearly underused and should be given more 
attention.

Like I said, some of the invariants need to be load tested with profiling, but 
at this phase I am constructing the underlying world structure. If there's a 
method out there in either Python, JS, Java, C or, to a lesser extent, 
Objective-C, please don't hesitate to send me docs to read. The more advanced, 
the happier I am :)


Yuma Antoine Decaux
Light has no value without darkness
Mob: +612102277190
Skype: Shainobi1
twitter: http://www.twitter.com/triple7




 On 11/01/2015, at 2:37 pm, Alex Hall mehg...@icloud.com wrote:
 
 I won't pretend to understand all of this. My degree is computer science, not 
 higher mathematics or engineering. Still, I'm intrigued, and would love to 
 hear a practical example. To keep things on topic, would this library be 
 usable from a Swift or Objective-C app for iOS or OS X? If so, can you give a 
 real-world example of how? I understand representing things as sounds, but 
 how would it handle in a real app? That is, what about loading/managing sound 
 buffers (you can only have 16 at a time in OpenAL), handling stereo sound 
 samples, generating sounds on the fly instead of relying on recorded audio, 
 applying real-time filters or effects, managing occlusions and distance 
 roll-offs, that kind of thing? Is there a mapping engine, where the 
 programmer can lay out the world in some kind of XML or JSON format? Have I 
 missed the point entirely?

Re: A call for a world perspective in 3D

2015-01-10 Thread Yuma Antoine Decaux
I'll get into more detail on the 3D sound part.

It uses a node system, as mentioned earlier, to plug, unplug, blend or 
ratio-fit one or more nodes, which can be filters, user-set parameters or 
daisy-chained hierarchies of sound buffers. So imagine you call a tree 
instance from my library. It uses phi and pi to generate the fractal links 
down to the leaf nodes. Each leaf node has physical properties which follow 
its parent nodes with a coefficient, a scalar value spread along the entire 
tree. Each node is a sound buffer or a set of sound buffers. Collision 
detection is done via matrix identification and eigenmatrices. Now set a wind 
particle object (full of bounding boxes) that traverses the tree. Each 
collision triggers the sound of a rustle, in real 3D position relative to the 
user's position.
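A minimal sketch of the phi-driven tree (the buffer name and branching factor 
are placeholders for illustration):

PHI = (1 + 5 ** 0.5) / 2

def grow(depth, coefficient=1.0):
    # Nested dict tree: each level damps its children by 1/phi, and every
    # leaf keeps the accumulated coefficient next to its sound buffer.
    if depth == 0:
        return {"buffer": "rustle.wav", "gain": coefficient}
    child_coeff = coefficient / PHI
    return {"gain": coefficient,
            "children": [grow(depth - 1, child_coeff) for _ in range(2)]}

tree = grow(3)
print(tree["children"][0]["children"][0]["children"][0])   # one leaf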

Now take these tree structures, use a spherical shape (revolving the nGon I 
mentioned earlier around its y axis), and pass it through a deformer (which 
changes the scalar values of the vectors within the sphere). This deformer can 
use a set of physics class objects such as inertia, parabolic deviations, 
swirls; you name the geometric shape, there's a math formula for it. Consider 
that each vector or vertex is a bird in a flock. Apply an index to it, and use 
this other swarm algorithm I studied to create an array of bees, birds, fish, 
whatever. Each, when colliding with another, runs a behaviour generator 
using, again, scalar values. I can't stress enough the utility of matrices and 
transformations for things that go beyond just shapes.
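To show the scalar-value idea on its own, here is one cohesion step of a toy 
swarm; a full boids model would add separation and alignment, and none of this 
is my actual swarm class:

def step(flock, weight=0.05):
    # Nudge every indexed "bird" toward the flock centre by a scalar weight.
    n = len(flock)
    centre = [sum(p[i] for p in flock) / n for i in range(3)]
    return [[p[i] + weight * (centre[i] - p[i]) for i in range(3)]
            for p in flock]

flock = [[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0]]
for _ in range(3):
    flock = step(flock)
print(flock)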

So I’ve gone way past my initial goal, and think this can be very useful.

I want some help with some of the scripts, to complete them. I’m fine paying 
for it, but the person needs to not only like the idea, but actually believe in 
it.

Anyway, here’s my two cents 



Yuma Antoine Decaux
Light has no value without darkness
Mob: +612102277190
Skype: Shainobi1
twitter: http://www.twitter.com/triple7




 On 10/01/2015, at 11:18 pm, Alex Hall mehg...@icloud.com wrote:
 
 Can you explain a bit more what this library is doing and how it might be 
 used? When you said 3d sound, I at first thought you meant something to 
 supplement or replace OpenAL, but that's clearly not the case. I'm not clear 
 on just what this does. Thanks.

A call for a world perspective in 3D

2015-01-09 Thread Yuma Antoine Decaux
Hi All,

I am currently working on a 3D sound engine. I have so far done the following:
1-nodes structure for extracting tags and Lua function calls, creating a 
hierarchy of each node where the parent node is the UI.
2-A 3D sound library connecting to the JS web sound API, using the node system
3-a parser toolset to create arrays of configurations between scripts and 
languages
4-A geometric 3D volume matrix with the node hierarchy class used as a 
secondary process
5-using a parallel processing class to send socket information between nodes
6-A socket distribution (select()) daisy-chain communication layer (a rough 
sketch follows this list)
7-A 3D prototype of an SSD-based sound processing CPU that stores all the 
information on the SSD as static memory. I have been 3D prototyping for about 
15 years. I demand elegance and functionality in design, as much as efficient 
memory management of blocks and sectors. I am a programmer.
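Regarding point 6, here is a rough sketch of a select()-based relay; the port 
and framing are assumptions, not my actual layer:

import select
import socket

def relay_loop(listen_port=9000):
    # One loop watches every node socket and forwards each message to the
    # next node in the chain, in the order the nodes connected.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen()
    peers = []
    while True:
        readable, _, _ = select.select([server] + peers, [], [])
        for sock in readable:
            if sock is server:
                conn, _ = server.accept()
                peers.append(conn)
            else:
                data = sock.recv(4096)
                if not data:
                    peers.remove(sock)
                    continue
                index = peers.index(sock)
                if index + 1 < len(peers):
                    peers[index + 1].sendall(data)

# relay_loop()  # blocks; nodes join the chain in connection order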

All the scripts are doing exactly what they are supposed to, except for the 3D 
matrix layer, which I am currently working on. However, I have done all 
primitives, transforms and rotations using matrices, and I am about to get 
back to completing the nGon class.
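As a taste of the nGon approach, a minimal 2D sketch (my real class works on 
matrices and in 3D; this only shows the vertex generation):

import math

def ngon(n, radius=1.0):
    # n vertices of a regular polygon: step a unit vector around the circle.
    # Low n gives a triangle; large n approaches a circle.
    step = 2 * math.pi / n
    return [(radius * math.cos(i * step), radius * math.sin(i * step))
            for i in range(n)]

for x, y in ngon(5):
    print(round(x, 3), round(y, 3))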

This project started as a spark when I saw a tweet about a blind player on 
World of Warcraft.

Now it has turned out to be much bigger.

Everything is written against standard APIs, such as Python and JS modules. I 
am trying to complete this accessible World of Warcraft layer, which I will 
then use as the base for a GNU-licensed platform that does not itself use 
World of Warcraft. I don't understand why Blizzard hasn't done this, but it 
has given me the opportunity to see exactly what is happening in the system 
architecture, and to be an architect again, a capacity I thought I had lost 
once I lost my vision.

Will anyone be so cool as to send me a reply with “#vipWOW” as subject?

I really hope that this ideal I have been carrying for the past six years, 
dedicated to programming and mathematics which I previously did not apply so 
often, can grow into a larger community through this effort. I hope others 
will accept work as independent hires to help: I cannot afford thousands per 
month, but I have laid down the architecture and the working subsystems, and I 
am working through each all the way to the main class.

This effort, I have come to realise, demands far more hands than my blind 
vision at the computer can handle, though I handle Vim quite well and 
efficiently. It also needs to become accessible to the level I want it at some 
point.

If you are ready to experience something seriously cool (network connectivity, 
private test server, wiki, calendars and contacts, VNC access, SSH, FTP; 
redundancy is not there yet, but we're working on an Arch Linux installation), 
with an extra dimension (tactile), please do contact me. Let's build an order 
of classes that will standardise many aspects of our experience on the 
computer as blind coders, and be the programmers for programmers in 
facilitating our own experience. 

Sincerely,

Antoine Decaux
twitter: triple7



 
