Re: going about developing a rythem game

2020-04-18 Thread AudioGames . net Forum — Developers room : magurp244 via Audiogames-reflector


  


Re: going about developing a rythem game

@18 The code's actually adapted from LGPL [C source] lifted from the vOICe's site demonstrating it, and yes, I'm using numpy. Maybe I'm not describing it properly; regardless, the source is already publicly available, or at least some versions of it, like [this]. Could have sworn I uploaded the higher-speed version, but whatever. If you want to take a look at it, I'll start a new thread and dump the source.

URL: https://forum.audiogames.net/post/520710/#p520710




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: going about developing a rythem game

2020-04-18 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  


Re: going about developing a rythem game

@17 You're conceptualizing how this processing needs to work incorrectly, and I'm assuming you're also using some sort of linear algebra library (maybe Numpy?). The implementation you're describing for what you want to do is terribly inefficient. Some basic resampling, and not precomputing the sine wave, would alone be massive wins for starters. If you want to reach out I would be happy to discuss this more, though at the moment I don't have the bandwidth to review DSP code by others. Suffice it to say that the vOICe does almost exactly that algorithm, and was running fine on a CPU from the early 2000s when I was a kid.

Not sure if I have a public e-mail on the forum or not, but I'll keep an eye open for a PM for a few days.

Edit: or you can start a thread, if you think it should be public.
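The complex-number trick mentioned above can be sketched as a phasor recurrence: instead of calling a sine routine per sample, multiply a unit-circle phasor by a constant rotation each step (the exact operation count depends on how you specialize the complex multiply). This is an illustrative sketch, not code from the thread:

```python
import math

def sine_block(freq_hz, sample_rate, n_samples, amplitude=1.0):
    """Generate a sine wave with a complex-phasor recurrence
    rather than one sin() call per sample."""
    # Constant per-sample rotation on the unit circle.
    step = complex(math.cos(2 * math.pi * freq_hz / sample_rate),
                   math.sin(2 * math.pi * freq_hz / sample_rate))
    phasor = complex(amplitude, 0.0)
    out = []
    for _ in range(n_samples):
        out.append(phasor.imag)  # imaginary part traces amplitude*sin(2*pi*f*t)
        phasor *= step           # rotate forward by one sample's phase
    return out
```

In a real synthesis loop you would renormalize the phasor occasionally, since repeated multiplication slowly drifts off the unit circle.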

URL: https://forum.audiogames.net/post/520697/#p520697








Re: going about developing a rythem game

2020-04-18 Thread AudioGames . net Forum — Developers room : magurp244 via Audiogames-reflector


  


Re: going about developing a rythem game

Been messing a bit with Pyglet; it seems clock.schedule_interval [limits the update cycle] rate to a default frame rate with vsync for some reason. Managed to get around that and get down to a 1 ms update cycle; keyboard response seems to register around 2 ms now based on pyglet clock.tick(). Also, the script I linked to actually uses QueryPerformanceCounter along with QueryPerformanceFrequency, and there's also Jython, which doesn't have a GIL, if you want a Java flavour.

Most of the bottlenecks in sonification primarily have to do with array multiplication and data transfer overhead. Essentially you grab an image from the frame or depth buffer, or both, strip or merge extraneous RGBA layers, pad it to match the width of a precalculated sine wave, multiply both matrices together, sum them along the y axis, and spit the result into an audio buffer for playback. The higher the resolution or the longer/slower the playback, the more data crunching and overhead you get, general pitch range and sensitivity of human hearing notwithstanding. The data usually consists of about 1 second of audio or less, so something like a 44,100-element 1D array based on the sample rate.

Part of the issue with millisecond-framerate sonification was the playback: it was more efficient to move a sound source from left to right and let something like OpenAL handle the panning effect than to bake it into the sound itself. But this breaks down when moving at millisecond speeds, because the source doesn't have enough time to move from left to right, breaking the panning. Playback at higher speeds, though, meant less data, so encoding stereo into the sample during generation started to be more practical and worked at preserving the panning effect, though resolutions higher than 64x64 still cut into performance, and some kind of pitch distortion seems to be happening at different resolutions. At this point I'd be happy just getting stable playback, which tops out around 30 fps so far.
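The multiply-and-sum pipeline described above can be sketched in NumPy as follows. All names, the frequency range, and the row/column-to-pitch/time mapping are illustrative assumptions, not the thread's actual code:

```python
import numpy as np

def sonify(image, sample_rate=44100, duration=1.0,
           f_low=200.0, f_high=8000.0):
    """Map a 2D grayscale image (rows = pitch, columns = time)
    to mono audio: multiply a precomputed sine bank by pixel
    brightness and sum along the frequency (y) axis."""
    rows, cols = image.shape
    n = int(sample_rate * duration)
    t = np.arange(n) / sample_rate
    # One sine per image row.
    freqs = np.linspace(f_low, f_high, rows)
    sine_bank = np.sin(2 * np.pi * freqs[:, None] * t[None, :])  # (rows, n)
    # Stretch each image column across its slice of the output.
    col_idx = (np.arange(n) * cols) // n            # column seen by each sample
    amps = image[:, col_idx].astype(np.float64)     # (rows, n) brightness envelope
    audio = (amps * sine_bank).sum(axis=0)          # sum along the y axis
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio
```

The cost scales with rows x samples, which is why higher resolutions and longer playback windows hurt, as the post notes.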

URL: https://forum.audiogames.net/post/520688/#p520688






Re: going about developing a rythem game

2020-04-18 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  


Re: going about developing a rythem game

To start with the bit that's on topic: timers. There is a big difference between precision and accuracy. But honestly it's very easy to get both a precise and accurate timer (QueryPerformanceCounter is one API call; you can bind it in ctypes like it's nothing). The problem is that in Python you have the GIL, which means that you can effectively only have one thread running at once, and thrashing a mutex is just a terrifically wonderful way to get sub-10-ms accuracy. But if 16 ms is satisfying sighted people, it might be this side of possible nonetheless.

And now for the parts that aren't:

You can get much easier and quicker wins for speeding up synthesis than the GPU: use faster sine approximations (because Intel's sine instruction is broken, you may be using something very slow instead, and you can generate sine waves with 2 multiplies and 2 adds via complex-number tricks), play with the Clang vector extensions (you can get around strict aliasing with __attribute__((may_alias)), and then you have cross-platform SIMD), and write a basic lockfree parallel-for implementation (use a semaphore to wake threads, and an array with an atomic int as a queue).

Using the GPU for computer vision may be apropos, but for audio it's honestly very hard to push things beyond what the CPU can already do with respect to the synthesis itself these days. Even for big convolutions for reverb, you're generally okay if you get the right library going. There's enough other low-hanging fruit for optimization before you get to the GPU, but you'll have to code things yourself in C in many cases.

OpenALSoft gets 1600 or so HRTF sources on my machine; Synthizer should be at least that good while being higher quality (though perhaps in the end you won't be able to hear that), and after I implement the aforementioned parallel-for it'll scale linearly with cores (which not even Libaudioverse managed, so thank you to several years of additional experience and some cool coworkers).

GPU data transfer isn't a problem if all you want to do is move finished blocks of audio out: it's pretty low latency, and you're only moving around a megabyte a second, if that. But the GPU architecture is only really good for a very specific sort of synthesis, where your algorithm for the audio has no if statements (I'm simplifying--it's more important that branches don't diverge; if you want to know more, Google it), no recursion in the signal graph, and can easily be expressed primarily as matrix multiplications. Raycasting is even hard-ish for a GPU, honestly: it does diverge and it's not easily expressed as a matrix multiplication, but you do millions or more of them, it's trivially parallelizable, and Nvidia has hardware for it nowadays, so you mostly take a win. It is hard to find info on what exactly goes on inside insert-GPU-audio-solution, but I believe it's just this:

1. Do raycasts like for graphics, probably with the same lighting algorithm slightly modified to keep distances, not just colors;
2. Do a very basic HRTF with these data points to produce an impulse response for reverb;
3. Use the fact that you're already on the GPU to take what performance wins you can for the convolution itself; and
4. Output a small block of audio to the main system for output.

The one open source thing like this is Resonance, which does everything that everyone who thinks about this wants, where you do an accurate physical simulation, then it throws out 90% of the data, because it turns out that basic reverb algorithms sound that good without requiring a multi-second convolution, and you can avoid both that and having to finish the physical simulation without loss of quality.

I could eventually be convinced that it's worth it, and that it's not mostly about getting people to buy the GPU and taking editing tasks away from level designers, but so far no one has actually put something in front of me that's worth it. Frankly, at the end of the day, the state-of-the-art techniques from the 90s and early 2000s, when the Creative sound cards still worked, were already at the limit of what you could expect from consumer-grade headphones, and the modern stuff, based off the demos I can find, sounds worse than that while using 10 to 100 times the computing power, depending on what it is. I could contrive a setup that could benefit from the GPU, but it'd involve at least 16 speakers and very high order ambisonics and wouldn't even fit in a living room.
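Binding QueryPerformanceCounter through ctypes really is only a couple of calls. A sketch (Windows-only; the non-Windows fallback to time.perf_counter_ns is an addition for portability, not something from the thread):

```python
import ctypes
import sys
import time

if sys.platform == "win32":
    from ctypes import wintypes
    _kernel32 = ctypes.windll.kernel32
    # Counts per second; fixed at boot, so query it once.
    _freq = wintypes.LARGE_INTEGER()
    _kernel32.QueryPerformanceFrequency(ctypes.byref(_freq))

    def high_res_seconds():
        """One QueryPerformanceCounter call per timestamp."""
        count = wintypes.LARGE_INTEGER()
        _kernel32.QueryPerformanceCounter(ctypes.byref(count))
        return count.value / _freq.value
else:
    def high_res_seconds():
        # Portable stand-in with nanosecond resolution on most OSes.
        return time.perf_counter_ns() / 1e9
```

As the post says, the timer itself is the easy part; getting the GIL and the rest of the process out of the way of your polling thread is the hard part.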

URL: https://forum.audiogames.net/post/520624/#p520624






Re: going about developing a rythem game

2020-04-18 Thread AudioGames . net Forum — Developers room : magurp244 via Audiogames-reflector


  


Re: going about developing a rythem game

@13 Mmm, yeah, leveraging the GPU generally is great for straightforward compute tasks, maybe good for workflow or cinematics at scale. Still, it could be a question of working through the right implementation at some point; there may be potential in leveraging the GPU for faster visual/audio sonification, which tends to be really process intensive, though the question there is how much of a bottleneck the data transfer is.

Anyway, I did dig up [this] little chestnut: it seems someone wrote a Python microsecond-precision timer that uses ctypes and some C++ DLLs for high-precision timestamps, and Rust also seems to support down to nanosecond precision depending on the hardware. I also dug up some interesting info from a DDR community FAQ [here] which suggests that DDR, in some versions at least, uses a frame-rate method of precision consistent with a 16 ms cycle. Another neat little find is [this] 2008 Stanford research paper on Automatic Processing of Dance Dance Revolution, which covers finding the correct tempo, analyzing bpm, and generating step charts dynamically for random songs with freeware variants of DDR. DDR machines also seem to use [Bemani PC] boards, some based on the PlayStation 2, with more modern variants being PC based.

URL: https://forum.audiogames.net/post/520613/#p520613








Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : Jaidon Of the Caribbean via Audiogames-reflector


  


Re: going about developing a rythem game

Hello. I'm not sure how plausible this would be for the level creation system, but stick with me.

The game generates a folder called custom level sounds, or whatever. In this folder, the user puts whatever level music they want. However, the music and sounds should be very specifically labelled, so the parser will more easily be able to make the creation process less painful for the end user. There's a sound for when the user hits the beat on point and one for when they miss the beat. The creator can either do it the technical way or the manual/fun way: they can either input the time at which you must hit the beat, or they can just play it the way they intended.

I think that'll be a lot of coding, but I think it's a rather fun way to do it. Then again, it could just be Zarvox's level creator and I don't even know. Anyways, cheers from the mansion.

URL: https://forum.audiogames.net/post/520444/#p520444






Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  


Re: going about developing a rythem game

@12 I have a sighted friend who plays rhythm games and is also a programmer (though admittedly not a game programmer), and she says whatever she plays is accurate to around a frame (which is around 16 ms). Things that are full-body-movement style will have to be looser, if only because legs and whatever don't respond that fast. Human response time is much worse than keeping in sync with rhythm, and I think we're actually better at audio rhythm than visual rhythm (but sighted people only get the visual aspect). I don't know of any research being done on the topic.

Things like AMD TrueAudio don't (to my knowledge) use shaders, as shaders aren't flexible enough for audio in general. Also, I have yet to be impressed with one of the demos of that tech: it's great if your problem is "Hi, I am a lazy sighted game developer and I want you to autocompute this by raycasting", but it doesn't do much beyond that. Really it's just a trick for computing very long impulse responses off level geometry and applying them in parallel. I've done research on this topic--indeed, for a brief time I was considering making Synthizer use the GPU, and I believe I have a GPU-ized parallel FFT somewhere--but for the actual synthesis part you're better off not dealing with it.

I think the audio-on-the-GPU stuff is mostly hype: the people doing the GPU stuff already knew how, because 99% of the math for modern graphics and the math for advanced audio effects are the same (literally, not exaggerating in the slightest). So you've got an easy-to-maintain thing that takes some of the "and now you have to configure your reverb probes" out of the level design, and when you convince game devs to use it, their players will buy better laptops with your GPU in them.

But you can actually prototype such things in Python with Numba and Cuda, if you want. They won't help you with your rhythm game, and indeed they won't help you with almost anything that isn't doing a bunch of raycasting off level geometry. But you can do it, and I've done it myself as part of determining what might work for Synthizer. The irony is that Python is actually almost better than C/C++ for GPU compute because of the ecosystem and the science people needing it--it's not going to be low latency because of the data transfer, but you're going to be hard pressed to beat what Numba or Tensorflow etc. spits out with a hand-written Cuda kernel, and if you wanted to prototype something audio-related before porting the algorithm to C/C++ for the final version, you absolutely could.

However, for anyone here, frankly just setting reverb parameters gets you a reverb of around the same quality as insert-GPU-audio-solution-here; a really basic algorithm that figures it out by estimating how many open tiles are around the tile in question can get you something good without requiring a beast of a machine or you tweaking every tile; and none of that will help solve a rhythm game problem anyway. Rhythm games aren't about performance, they're about timer precision, in an environment where everything that isn't the rhythm game is competing with you for system resources, and the OS hates you because high-precision timers suck battery like a vampire.
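The "count open tiles around the tile in question" heuristic mentioned above could look like this sketch. The grid representation, radius, and the idea of mapping the result onto reverb parameters are illustrative assumptions:

```python
def estimate_openness(grid, x, y, radius=4):
    """Crude room-size estimate for reverb: the fraction of tiles
    within `radius` of (x, y) that are open. Near 0 = closet,
    near 1 = open field. `grid` is a 2D list; True means open."""
    open_count = 0
    total = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nx, ny = x + dx, y + dy
            total += 1
            # Tiles outside the map count as closed.
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                open_count += 1
    return open_count / total
```

You would then feed the result into your reverb's room-size or decay-time parameter rather than hand-tuning every tile.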

URL: https://forum.audiogames.net/post/520385/#p520385






Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : magurp244 via Audiogames-reflector


  


Re: going about developing a rythem game

@11 I'm kind of curious what the metrics are for games like DDR and what their implementation strategy is, or how finely tuned the fixed arcade and console variants are compared to more diverse PC contemporaries. Hmm... Also wondering how [GPU Accelerated Audio] like AMD's [TrueAudio Next SDK] might fare in terms of speed/precision with parallel processing. Not sure if you could squeeze something like that into pyglet's shaders, but then if we're going for bleeding-edge speed it might be better to work directly in lower-level languages like C, or maybe Rust, depending.

URL: https://forum.audiogames.net/post/520378/#p520378






Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  


Re: going about developing a rythem game

@10 Thing is, that's not good enough. Sequence Storm (which is sadly the only good example we have) will flat out kill you if you're off by 40 ms, and those of us who have played it a while do much, much better than that. You can totally do a rhythm game at 40 ms, but your max difficulty is whatever your max timer precision is. The problem with Python is that the GIL means that even if you have an input polling thread going, it'll get blocked randomly.

Also, you absolutely 100% can't do this by asking the audio player what the position of the audio is. All the audio backends have a ton of internal jitter for various reasons. You can know that it updates with a granularity of 20 ms, but your calls to it might block (especially with OpenAL, which I believe uses mutexes for this), and you can't actually know what it's really playing, only what it claims it's playing. That's not so much of an issue, but it doesn't update continuously, and it probably updates with a varying latency unless you've made sure there are no resamplers involved (since resamplers happily consume fractional blocks, or eat more than they need for this frame).

I think it might be possible to do better in JS, because the browsers do have very high precision timers and we have spent a very long time optimizing them to render efficiently, which is sort of the same thing as being able to do precise timing for a rhythm game if you squint, and the main loop of the browser is really optimized and in C. But I wouldn't bet my life on it.

Also, you can write an audio loop for this if you want, one that specifically meets the rhythm game use case and does nice things like solve the problem of synchronizing after you unpause the game. But since such an audio loop is only useful to rhythm games, and you'd be sacrificing a lot of features that are anywhere from very nice to have to absolutely necessary for something like a shooter, they aren't exactly all over the place in libraries just waiting on your usage.

URL: https://forum.audiogames.net/post/520369/#p520369






Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : magurp244 via Audiogames-reflector


  


Re: going about developing a rythem game

Hmm, I see what you mean by response times. I threw together a Python script that uses OpenAL and polled a 1-second sound sample's seek position relative to user input, so seek * length = response time. It prints the sample's play position every 20 ms; key presses register between 40 ms and 80 ms when pounding the spacebar/keys. Synchronizing to compensate for latency by calculating bpm seems like a sound approach, so keyboard updates every 40 ms / bpm * amount of input = response rate. Short of timestamping the input, though, it would rely on proper key ordering, but pressing X number of keys in the correct order within 40 ms seems like a fair approximation of response rate, factoring in other input devices like audio.

URL: https://forum.audiogames.net/post/520359/#p520359






Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : magurp244 via Audiogames-reflector


  


Re: going about developing a rythem game

Hmm. I threw together a test script that uses OpenAL and polled a 1-second sound sample's seek position relative to user input, so seek * length = response time. It streams the sample play position every 20 ms; key presses register between 40 ms and 80 ms when pounding the spacebar/keys. Things like playback latency and synchronization might cause some hiccups, but it seems doable.

import pyglet
from pyglet.window import key
from openal import *

pyglet.options['debug_gl'] = False

class Prototype(pyglet.window.Window):
    def __init__(self):
        super(Prototype, self).__init__(640, 480, resizable=False, fullscreen=False, caption="Test")
        self.clear()

        self.listener = Listener()
        self.sound = LoadSound('example.wav')
        self.player = Player()

        self.player.add(self.sound)

        pyglet.clock.get_fps()
        self.fps_display = pyglet.clock.ClockDisplay()

        pyglet.clock.schedule_interval(self.update, .01)

    def update(self, dt):
        self.clear()
        print(self.player.seek)
        self.fps_display.draw()

    def on_key_press(self, symbol, modifiers):
        # A "hit" is a press landing in a 50 ms window at the sample's midpoint.
        if symbol == key.SPACE and 0.5 < self.player.seek < 0.55:
            print(self.player.seek, 'match')
        else:
            print("miss")

        if symbol == key.ENTER:
            self.player.play()

        if symbol == key.ESCAPE:
            self.player.delete()
            self.sound.delete()
            self.listener.delete()
            self.close()

if __name__ == '__main__':
    window = Prototype()
    pyglet.app.run()

URL: https://forum.audiogames.net/post/520351/#p520351






Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  


Re: going about developing a rythem game

Yeah, but that won't work for an actual rhythm game, where the entire point is in fact to line up with the music. That code is just a cheat for when the game isn't a rhythm game but you want some rhythm game mechanics.

Honestly I suspect the first thing is going to be getting off BGT so that you can have millisecond-precise timers (i.e. QueryPerformanceCounter on Windows), possibly off Python (because you may end up needing a thread that literally does nothing but call QueryPerformanceCounter while polling controls as fast as humanly possible, which means C code to call the Windows APIs right), and with a slight chance that I'd need to write my own audio library (because you need low latency audio, potentially beyond what I'm targeting with Synthizer, and also access to the things in Wasapi that try to guess the latency of audio devices, if you don't want players to have to configure stuff before they can play).

Now, if you don't need to be accurate under 10 ms, whatever, the naive approaches work. But when I play Sequence Storm I'm usually accurate under 10 ms on average, and I know people who are way better than me at it, so if you want it to actually have difficulty you'll need to figure out how you're dealing with all the timing stuff. The bop-it clones and such get away with being what they are because they put the challenge in the pattern, not in the timing of the pattern. Maybe it's possible to do this in BGT and/or Python, and someone will tell you how, but as someone who is on the Python bandwagon all the way to MMO-for-the-blind, rhythm games are just somewhere where high level tools start falling down.

You might also have to give up on normal Windows input methods and use DirectInput instead, which is an undocumented mess. I'm not sure how fast window messages are offhand, but it's possible they're too latent for this as well.

But assuming you solve all that, all you have to do for the rest is record the exact time you started the music playing, add the player-configurable audio device latency to it (i.e. my headphones need an additional 50 ms), then line up the keypress timestamps with the song timestamps for scoring (where you die/get damaged/etc. if the realtime clock goes past a beat without it being matched). Most of that math is relatively straightforward.
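The timestamp math in that last paragraph can be sketched as follows. The function name and the 50 ms hit window are made up for illustration; times are assumed to come from one monotonic high-resolution clock:

```python
def score_keypresses(song_start, latency, beat_times, press_times, window=0.05):
    """Line up keypress timestamps against song timestamps.
    beat_times are offsets (seconds) into the song; press_times
    share a clock with song_start; latency is the player-configured
    audio device delay. Returns (hits, misses)."""
    hits, misses = 0, 0
    beats = iter(sorted(beat_times))
    beat = next(beats, None)
    for press in sorted(press_times):
        # Where in the song was the player actually hearing audio?
        song_pos = press - song_start - latency
        # Beats that went by unmatched before this press are misses.
        while beat is not None and song_pos - beat > window:
            misses += 1
            beat = next(beats, None)
        if beat is not None and abs(song_pos - beat) <= window:
            hits += 1
            beat = next(beats, None)
    # Beats left after the last press also count as missed.
    while beat is not None:
        misses += 1
        beat = next(beats, None)
    return hits, misses
```

In a real game you would call this incrementally per press rather than in batch, so the "realtime clock went past a beat" case can trigger damage immediately.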

URL: https://forum.audiogames.net/post/520270/#p520270






Re: going about developing a rythem game

2020-04-17 Thread AudioGames . net Forum — Developers room : Stealcase_ via Audiogames-reflector


  


Re: going about developing a rythem game

camlorn mentioned in another thread his idea for how he would do a simple rhythm check, with a basic code example. Basically, he thinks about it as important that the player hit the keys to the bpm, and not necessarily 100% accurate to the song. Here's the code he posted (with the loop starting at 1 so keypresses[i-1] doesn't read before the start of the array):

for (int i = 1; i < keypresses.length; i++) {
    beats[i] = (keypresses[i].time - keypresses[i-1].time) / bpm;
}

Here's the link if you'd like more context: https://forum.audiogames.net/post/517765/#p517765

URL: https://forum.audiogames.net/post/520242/#p520242






Re: going about developing a rythem game

2020-04-16 Thread AudioGames . net Forum — Developers room : amerikranian via Audiogames-reflector


  


Re: going about developing a rythem game

Oh yes, Zarvox. Make one for my rhythm game as well, won't you? XD

URL: https://forum.audiogames.net/post/520145/#p520145






Re: going about developing a rythem game

2020-04-16 Thread AudioGames . net Forum — Developers room : Zarvox via Audiogames-reflector


  


Re: going about developing a rythem game

I have to write another level creation tool? Noo! Lol

URL: https://forum.audiogames.net/post/520136/#p520136






Re: going about developing a rythem game

2020-04-16 Thread AudioGames . net Forum — Developers room : magurp244 via Audiogames-reflector


  


Re: going about developing a rythem game

@3 A rhythm game is typically the type of game where you match button presses or motions to the sound of music, a beat, or "rhythm", e.g. Dance Dance Revolution, PaRappa the Rapper, Beat Saber, etc.

URL: https://forum.audiogames.net/post/520077/#p520077






Re: going about developing a rythem game

2020-04-16 Thread AudioGames . net Forum — Developers room : amerikranian via Audiogames-reflector


  


Re: going about developing a rythem game

I know it's far from it, but it seems like it would be pretty simple. You could have a struct holding the required key and the time you need to press it, and you could capture the time at which the player presses their keys. After that it's as simple as looping through all your actions and checking whether the time is within the specified range. If so, you give some kind of affirmative feedback; if not, you treat it as if the action was incomplete or the player failed to do it in time. Actually, to skip looping, you could keep an index of the action the player is currently on and use that index to check the action's conditions. Now, key holds are going to be messy in general. You can also try to expand the system I described above to allow for aliasing and macros, sort of like what Rhythm Rage did. As for user-created content, you will just have to build a parser that reports syntactical errors. You need to decide on what syntax you want your actions to have.
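The struct-plus-index approach described above, as a rough sketch. The field names, the Chart class, and the 100 ms window are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    key: str             # which key must be pressed
    time: float          # song time (seconds) it is due
    window: float = 0.1  # slack allowed on either side

class Chart:
    """Track the current action with an index instead of looping
    over every action on each keypress."""
    def __init__(self, actions):
        self.actions = sorted(actions, key=lambda a: a.time)
        self.index = 0

    def on_key(self, pressed, now):
        if self.index >= len(self.actions):
            return "done"
        action = self.actions[self.index]
        if pressed == action.key and abs(now - action.time) <= action.window:
            self.index += 1  # advance to the next expected action
            return "hit"
        return "miss"
```

Key holds would need a second timestamp per action (press and release), which is where this scheme starts getting messy, as the post says.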

URL: https://forum.audiogames.net/post/520013/#p520013








Re: going about developing a rythem game

2020-04-16 Thread AudioGames . net Forum — Developers room : Meatbag via Audiogames-reflector


  


Re: going about developing a rythem game

Sorry, but what is a rhythm game?

URL: https://forum.audiogames.net/post/519986/#p519986






going about developing a rythem game

2020-04-16 Thread AudioGames . net Forum — Developers room : rory-games via Audiogames-reflector


  


going about developing a rythem game

Hello, I want to develop a rhythm game, but I have a few questions.

Firstly: how do I go about timing? How much leeway should I give, and how would I implement it, in pure logic, not actual code, please.

Secondly: how would I go about creating a creation system for users to create levels? It seems to me that I'll have to create a very creative level creator, otherwise it will be boring.

And number 3: where can I find some decent EDM music for my rhythm game?

URL: https://forum.audiogames.net/post/519960/#p519960






Re: going about developing a rythem game

2020-04-16 Thread AudioGames . net Forum — Developers room : rory-games via Audiogames-reflector


  


Re: going about developing a rythem game

Also note: this project will probably not be public.

URL: https://forum.audiogames.net/post/519961/#p519961



