Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Kevin

Pages: [1]
1
Well, that pretty much describes what I already knew about time-based movement. So now let me describe another example of a problem that it presents. Consider a sprite standing on a cliff, looking across to the right at a wall on the other side. In the wall (a ways below) is a narrow passageway that the sprite can duck into if he presses toward it while falling down against the wall. Let's say the sprite is 50 pixels high, the passageway is 52 pixels high, and the top of the passageway is at a y coordinate of 100. The sprite jumps to the other side and begins his fall at a y coordinate of 95.

Here's how the frames work out in a time-based movement system:

Fast computer:
Frame 1: y = 95, time = 0.00 s
Frame 2: y = 96, time = 0.25 s
Frame 3: y = 98, time = 0.50 s
Frame 4: y = 101, time = 0.75 s

Slow computer:
Frame 1: y = 95, time = 0.0 s
Frame 2: y = 98, time = 0.5 s
Frame 3: y = 105, time = 1.0 s
Frame 4: y = 116, time = 1.5 s

Now on the fast computer, when the sprite reaches y = 101 it can fit into the passageway, because it has one pixel of space both above and below, and nothing stops it from moving to the right.
On the slow computer, the sprite skips from y = 98 (where there are two more pixels of solid wall in its way) to y = 105 (where there are three pixels of solid wall from the other side in the way), so it never makes it into the passageway.
This is based on the exact same input, but you get completely different results depending on whether you're playing on a fast computer or a slow one.

The answer to this might be that I should perform collision detection at every pixel along the path instead of only once per frame. But now you see why time-based movement starts to become slower than frame-based movement if I want repeatability.
Your problem there is that you're basing collision detection on pixels and increments instead of vectors. :P I guess there might be some design reasons for this, but generally speaking you're better off using vectors, since the collision detection then breaks down into a few simple equations instead of requiring a bunch of expensive checks every frame. (And as a bonus, it's much easier to get it to work correctly in situations like the one you're describing.)

Using vectors for collision detection also has the bonus of not requiring the player to be so exact when positioning a sprite to fit through a narrow passageway, since you can more easily do things like make a sprite slide along a surface in a convincing manner and determine whether or not it can fit.

In general, if you *do* want to use time-based movement, you want to avoid algorithms that must be processed in set increments (like pixel-based collision detection). Trying to mix the two does result in bad framerates, for the reasons you've noted.
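To make that concrete, here's a minimal sketch (my own illustration in C++, not code from either engine under discussion) of a swept test against the passageway from the example above. Instead of checking the sprite's position once per frame, it solves for the stretch of the frame during which the sprite fits in the gap, so a large time step can't skip over it:

```cpp
#include <algorithm>
#include <cstdio>

// Numbers taken from the example above: a 50-pixel sprite and a 52-pixel
// passageway whose top edge sits at y = 100.
const float SPRITE_H   = 50.0f;
const float GAP_TOP    = 100.0f;
const float GAP_BOTTOM = 152.0f;

// Returns true if, at any instant during this frame's motion (top edge moving
// from y0 to y0 + dy), the whole sprite fits inside the gap. Rather than
// sampling pixel by pixel, solve the linear inequalities
//   GAP_TOP <= y(t)  and  y(t) + SPRITE_H <= GAP_BOTTOM,  with y(t) = y0 + t*dy,
// for the frame-relative parameter t in [0, 1].
bool FitsGapDuringFrame(float y0, float dy, float* tEnter)
{
    float tMin = 0.0f, tMax = 1.0f;
    if (dy != 0.0f) {
        float tA = (GAP_TOP - y0) / dy;               // top edge crosses gap top
        float tB = (GAP_BOTTOM - SPRITE_H - y0) / dy; // bottom edge crosses gap bottom
        tMin = std::max(tMin, std::min(tA, tB));
        tMax = std::min(tMax, std::max(tA, tB));
    } else if (y0 < GAP_TOP || y0 + SPRITE_H > GAP_BOTTOM) {
        return false;   // not moving, and not already inside the gap
    }
    if (tMin > tMax) return false;
    if (tEnter) *tEnter = tMin;
    return true;
}

int main()
{
    // The "slow computer" frame from the tables above: y jumps from 98 to 105.
    // The once-per-frame check misses the gap; the swept test finds it.
    float t;
    if (FitsGapDuringFrame(98.0f, 7.0f, &t))
        std::printf("sprite fits starting at t = %.3f of the frame\n", t);
    return 0;
}
```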

2
Hmm... I have a feeling this isn't going to be resolved quickly with abstract descriptions, but I'm up for one more round. I still get the feeling that time-based movement is non-deterministic compared to frame-based updates: the computer can control exactly how much it does within each frame, but it can't predict how much will get done within a given timespan. So the amount that gets done on different machines will differ, causing the game to play differently on faster machines versus slower ones (I guess that's the whole point -- you get a better experience on a faster machine). But that also means that repeatability of the exact same sequence is difficult to implement at best (and impossible at worst).
Here's an example:

Update @ 0.00 seconds (no movement)
Left pressed @ 0.50 seconds
Update @ 1.00 seconds (time_held = 0.50, movement: X -= time_held * velocity)
Left released @ 1.25 seconds
Update @ 2.00 seconds (time_held = 0.25, movement: X -= time_held * velocity)

Provided that you do the appropriate processing when events (like key presses) occur, instead of waiting for the next update, you can get almost exactly the same results every time you run it, even if you vary the update rate. (Of course, rounding errors and floating-point accuracy can be a problem, but that's pretty manageable.) You just have to handle timing properly everywhere.

For things like AI that need to be processed X times a second, you can either update them in chunks or run them in a background thread. However, you will lose complete repeatability there, since there's no real way to execute more code per second than your system is capable of executing; if you reach the point where your system is just too slow to run the AI in realtime, it won't be able to maintain correct behavior. But if your system is fast enough to run at 120 fps, you can get totally smooth updating at 120 fps without needing to run your AI at 120 fps (useful for rendering subframes for motion blur, for example), and if your AI only needs to run at 15 fps, you can run at 30 fps or 60 fps and still get the same behavior (even though the framerates are different).
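Here's a minimal sketch of that first point, with all names my own invention: motion is integrated up to each input event's exact timestamp before the event is applied, so the final position is the same no matter when the updates happen.

```cpp
#include <queue>
#include <cstdio>

// Timestamped input events; 'time' is when the event actually occurred.
struct InputEvent { double time; bool leftDown; };

struct Player {
    double x        = 0.0;
    double velocity = 10.0;    // pixels per second while the key is held
    bool movingLeft = false;
    double lastTime = 0.0;

    // Integrate motion up to time t using the state that held until then.
    void AdvanceTo(double t) {
        if (movingLeft) x -= (t - lastTime) * velocity;
        lastTime = t;
    }
};

void Update(Player& p, std::queue<InputEvent>& events, double now)
{
    // Advance the simulation to each event's exact timestamp before applying
    // it, then integrate the remainder of the frame.
    while (!events.empty() && events.front().time <= now) {
        p.AdvanceTo(events.front().time);
        p.movingLeft = events.front().leftDown;
        events.pop();
    }
    p.AdvanceTo(now);
}

int main()
{
    // The sequence from the example: left pressed at 0.50 s, released at 1.25 s.
    std::queue<InputEvent> events;
    events.push({0.50, true});
    events.push({1.25, false});

    Player p;
    Update(p, events, 1.0);            // update at 1.00 s
    Update(p, events, 2.0);            // update at 2.00 s
    std::printf("x = %f\n", p.x);      // -7.5, regardless of the update times
    return 0;
}
```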

You can approximate this to some extent with a good frameskipping algorithm, but time-based processing is more efficient than frameskipping if done properly, which is definitely desirable if you're aiming to make your game run well on low-end systems (since they're definitely going to be skipping frames).
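For comparison, a bare-bones frameskip loop might look like this (names, rates, and the fixed iteration count are my own choices, not any particular engine's): logic always advances by the same fixed step, and when rendering falls behind, several logic steps run back-to-back without drawing.

```cpp
#include <chrono>

// Seconds elapsed since the program started.
static double Now()
{
    using namespace std::chrono;
    static const auto start = steady_clock::now();
    return duration<double>(steady_clock::now() - start).count();
}

static void UpdateLogic(double step) { /* game logic, fixed increment */ }
static void Render()                 { /* draw the current state */ }

int main()
{
    const double STEP = 1.0 / 60.0;   // fixed logic rate
    const int MAX_FRAMESKIP = 5;      // cap on logic steps per rendered frame
    double nextTick = Now();

    for (int frame = 0; frame < 600; ++frame) {   // stand-in for the main loop
        int ran = 0;
        // Catch the logic up to real time in fixed steps, skipping draws.
        while (Now() >= nextTick && ran < MAX_FRAMESKIP) {
            UpdateLogic(STEP);
            nextTick += STEP;
            ++ran;
        }
        Render();   // drawn as often as the machine allows
    }
    return 0;
}
```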

3
I don't see how that's possible. In a time-based movement system, the pixel a sprite ends up at in any particular frame is, to some extent, dependent on time (I believe that's the definition of time-based movement). That means a sprite can be at different locations on different-speed computers. That means the system is non-deterministic, because time is an unpredictable input. There may be ways to work around the system's non-determinism, but I don't see how it can be turned into something deterministic and still be time-based. The only way I can think of to make it deterministic starts going back toward frame-based motion -- for example, poll the input and process game logic at fixed intervals, then just draw as many frames as possible between those fixed intervals while only processing sprite velocities and positions, not doing any collision detection or anything. I'd consider that time-based rendering with frame-based movement (I'm not even sure that's possible). Is that what we're after?
Time is consistent no matter what computer you use, unlike framerate. Therefore, time-based movement would be entirely consistent between machines, while frame-based movement is only consistent if you guarantee a consistent framerate.

In time-based movement, you would basically 'update' certain things in a frame-based way, in that they are updated N times a second in discrete steps. Things like movement and linear animation, however, aren't actually done in discrete steps, so you can 'update' them in one huge chunk by determining the amount of time elapsed since the last update and applying changes appropriately.

Generally speaking, this is how time-based processing is done in most games. Some crazy AI systems can update in a time-based manner as well, using complex mathematical abstractions I don't entirely understand, but generally speaking things like AI and pathfinding are frame-based (done in increments), while animation, collision detection/response, and movement can be done in large steps based on elapsed time.
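A small sketch of that split, with made-up names and rates: AI ticks in discrete 1/15-second steps via an accumulator, while movement is applied in one chunk proportional to elapsed time, so both behave identically at 30 or 120 updates per second.

```cpp
#include <cstdio>

const double AI_STEP = 1.0 / 15.0;   // discrete AI tick length

struct World {
    double aiAccumulator = 0.0;
    double x = 0.0, vx = 40.0;        // position and velocity, pixels/second

    void StepAI() { /* pathfinding, decisions: one discrete tick */ }

    void Update(double dt) {
        // Discrete part: run however many whole AI ticks fit into dt.
        aiAccumulator += dt;
        while (aiAccumulator >= AI_STEP) {
            StepAI();
            aiAccumulator -= AI_STEP;
        }
        // Continuous part: movement/animation applied in one big step.
        x += vx * dt;
    }
};

int main()
{
    World a, b;
    for (int i = 0; i < 120; ++i) a.Update(1.0 / 120.0);  // fast machine
    for (int i = 0; i < 30;  ++i) b.Update(1.0 / 30.0);   // slow machine
    std::printf("a.x = %f, b.x = %f\n", a.x, b.x);        // both 40.0
    return 0;
}
```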

4
I'm curious how you might handle this in SGDK2. I consider the fact that game time is directly connected to FPS to be a disadvantage in 1.4.x. There is a frame limiter, but it conflicts with vsync, which will cause a nasty jitter if they don't match or if the frame limit isn't divisible -- e.g. a 30 fps limit with 60 Hz vsync. If a project ran slowly, frame skipping could be employed to keep game time consistent, as well as making sure the gaming experience was roughly the same for all users. I don't know what frame skipping's effects would be on collision detection and the like, though. If there will be any syncing with music, that would be a further concern. I also hope that you add vsync controls for running projects. It is true that video drivers can force vsync on or off, but the default is 'application specific', leaving the control to the application.

The reason I have resisted moving to time-based movement rather than frame-based movement is that I want the engine to be deterministic. I want the same set of inputs to produce the same output. This makes it possible to record a sequence of actions within a game and play it back, knowing that it will behave exactly the same on any computer. Also, on a related concern, I don't want it to be possible (for example) to jump a couple pixels higher on some computers just because they had time to calculate the extra frame at the top of a jump. Am I misunderstanding the technique here, or is this a valid concern?

However, it's possible that SGDK2 might just be flexible enough to support both depending on how you customize your game.  I'm not sure if that's the case yet, but I'm certainly approaching that degree of flexibility in what I'm planning so far.

Time-based movement isn't necessarily any less deterministic than frame-based movement. You just have to implement the various pieces correctly and design things so that inputs and events at particular times are handled consistently.
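For instance, building on the event-driven sketch earlier in the thread, a recorded log of timestamped inputs can be replayed at two different update rates and land on the same final state (illustrative code of my own, not SGDK2's):

```cpp
#include <vector>
#include <cstdio>

struct TimedInput { double time; bool leftDown; };

struct Sim {
    double x = 0.0, lastTime = 0.0;
    bool movingLeft = false;

    // Integrate motion up to time t using the state that held until then.
    void AdvanceTo(double t) {
        if (movingLeft) x -= (t - lastTime) * 10.0;   // 10 px/s while held
        lastTime = t;
    }
};

// Replays the whole log, updating every frameDt seconds of game time.
double Replay(const std::vector<TimedInput>& log, double frameDt, double end)
{
    Sim s;
    size_t next = 0;
    for (double now = frameDt; now <= end + 1e-9; now += frameDt) {
        // Apply each recorded input at its exact original timestamp.
        while (next < log.size() && log[next].time <= now) {
            s.AdvanceTo(log[next].time);
            s.movingLeft = log[next].leftDown;
            ++next;
        }
        s.AdvanceTo(now);
    }
    return s.x;
}

int main()
{
    std::vector<TimedInput> log = {{0.50, true}, {1.25, false}};
    // 60 updates/s and 24 updates/s give the same result: x = -7.5.
    std::printf("fast: %f  slow: %f\n",
                Replay(log, 1.0 / 60.0, 2.0), Replay(log, 1.0 / 24.0, 2.0));
    return 0;
}
```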

As this ties a bit into rendering, and the fact that you are using DX9/D3D, I'd like to know if control of the rendering/blitting will be made available as far as linear (blurry) versus nearest (blocky) filtering, and if the game's intended resolution can be overridden by such. It is of particular importance for LCDs, but in general as well. Game system emulators have to deal with some of this, so perhaps looking into something like VBA http://vba.ngemu.com/ would be useful. Something that would be neat is the inclusion of visual filters that run independently of the game/project in question: a resolution doubler, fake scanlines, fake LCD, motion blur, etc. If you could invoke different filters per project, that would be a plus too, but I don't know how feasible this would be. Motion blurring would rock though =]

I wasn't planning on doing this myself, but I was talking to a skilled developer a couple months ago (I think it was Kevin Gadd aka "Hiretsukan" (http://www.luminance.org/)). He showed me a demo he had with dynamic lighting on 2-D sprites/tiles based on Direct3D. We talked about him maybe working on some similar effects for SGDK2, but we were waiting for SourceForge to release their new version control system (which I think comes out at the end of this month) before uploading all the SGDK2 code and maybe splitting the work. Cool, I don't think I've been to his web site until just now -- looks like he's done quite a bit of work with game engines (same age as durnurd too).

Real motion blurring would require a lot more than a filter - you'd have to render multiple separate frames for every displayed frame, which means that if you wanted to display motion-blurred gameplay at 30 fps, you'd need to render at 60 fps for 2x motion blur, or 120 fps for 4x motion blur. And the game would need to update at that rate as well. (This is a good reason to go for time-based movement and processing, because it allows you to render more frames than you update and still get reasonable accuracy.)
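A sketch of that subframe idea, with hypothetical helper names standing in for the real rendering calls: each displayed frame is the average of several scene renders at intermediate times, which time-based movement can supply without extra logic ticks.

```cpp
// Hypothetical rendering hooks; a real engine would supply these.
static void RenderScene(double t)     { /* draw the world's state at time t */ }
static void AccumulateFrame(double w) { /* add the current frame, weight w */ }
static void PresentAccumulated()      { /* show the averaged result */ }

int main()
{
    const int    BLUR_SAMPLES = 4;                           // 4x motion blur
    const double DISPLAY_DT   = 1.0 / 30.0;                  // 30 displayed fps
    const double SUB_DT       = DISPLAY_DT / BLUR_SAMPLES;   // -> 120 rendered fps

    double t = 0.0;
    for (int frame = 0; frame < 30; ++frame) {   // one second of output
        for (int s = 0; s < BLUR_SAMPLES; ++s) {
            RenderScene(t + s * SUB_DT);         // subframe at an intermediate time
            AccumulateFrame(1.0 / BLUR_SAMPLES); // equal-weight average
        }
        PresentAccumulated();
        t += DISPLAY_DT;
    }
    return 0;
}
```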

Other filters like resolution scaling and scanlines are pretty straightforward, though. Draw the game screen to an offscreen render target, and draw that render target to the screen while applying a shader to perform the effect.
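In outline, that two-pass flow might look like the following. The helper names here are placeholders of mine, not a real API, though under Direct3D 9 they would map to things like a D3DUSAGE_RENDERTARGET texture, SetRenderTarget, and a pixel shader bound for the second pass.

```cpp
// Pass 1: render the game into an offscreen target.
// Pass 2: draw that target to the screen through an effect shader.
struct RenderTarget { int w, h; };

void SetTarget(RenderTarget* rt)       { /* redirect drawing; null = backbuffer */ }
void DrawGame()                        { /* the game's normal rendering */ }
void BindEffectShader(const char*)     { /* e.g. scanlines, fake LCD, 2x scale */ }
void DrawFullscreenQuad(RenderTarget*) { /* target as texture -> whole screen */ }

int main()
{
    RenderTarget offscreen{320, 240};   // game's native resolution

    SetTarget(&offscreen);              // pass 1: draw at native size
    DrawGame();

    SetTarget(nullptr);                 // pass 2: back to the screen...
    BindEffectShader("scanlines");      // ...through the chosen effect,
    DrawFullscreenQuad(&offscreen);     // scaling up as it goes
    return 0;
}
```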

Render effects
I'm hoping that there will be some tools for lighting and other effects. SGDK2 will be DX9, and that's pretty much the norm for modern 2-D games. For example: I'm in the dark but have a glow around the player to see what is immediately in front of me, or there is an explosion etc. producing light. The level editor could drop various styles of lights like dropping a path on a map, be it fixed, moving, or attached to a sprite/object. Particle effects for explosions, weapons, etc. could be scriptable.

With translucency I can do a lot of things like shadow maps via tiles, but dynamic lighting will be trickier. Basically, I'm looking for coded effects where producing them from a PNG/file is not the best option. This could be considered for transition effects, fading or flashing the screen to a color, etc. as well. I suppose I am envisioning a gfx scripting engine for effects, plus light-dropping on maps. Feel free to tell me I'm out of my gourd for the request =P

I haven't looked into what kinds of things Direct3D can do.  Hopefully we can get Kevin in here to talk about what might be possible.

Given access to render targets and multitexturing, you can pull off a lot of cool things. Lighting's fairly simple. Access to pixel and vertex shaders makes it easier and lets you do a lot of really interesting things - convolution filters (blur/sharpen/emboss/etc), deformation mapping, etc.

Fairly simple, unfortunately, doesn't necessarily mean easy to design, implement, or test. But it's definitely doable. (The performance you'll get out of it is another matter, but generally speaking it should be more than playable on a modern 3D card.)
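As one example of "fairly simple" lighting, here's a sketch of a common 2-D approach (helper names are placeholders, not a real API): additive light sprites are drawn into a lightmap render target cleared to the ambient level, and the finished scene is then modulated by that lightmap.

```cpp
// Additive lightmap sketch; the hooks below stand in for real render calls.
struct Light { float x, y, radius; };

void SetTarget(const char*)           { /* redirect drawing to a named target */ }
void Clear(float r, float g, float b) { /* fill the target with ambient light */ }
void DrawGlow(const Light&)           { /* additively blend a radial gradient */ }
void DrawGameScene()                  { /* the game's normal rendering */ }
void ModulateBy(const char*)          { /* multiply the scene by a texture */ }

int main()
{
    Light playerGlow{160.0f, 120.0f, 48.0f};  // glow around the player
    Light explosion{220.0f, 90.0f, 96.0f};    // temporary light from an effect

    SetTarget("lightmap");
    Clear(0.15f, 0.15f, 0.2f);     // dark, slightly blue ambient level
    DrawGlow(playerGlow);          // overlapping lights add together
    DrawGlow(explosion);

    SetTarget("backbuffer");
    DrawGameScene();               // draw the scene at full brightness...
    ModulateBy("lightmap");        // ...then darken everything not lit
    return 0;
}
```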
