Unless you talk to game developers, where a “follow the ball” “algorithm” for Pong classifies as AI, because it’s controlling the behaviour of a game-world agent that’s not the player. The term pretty much matches up with what game theorists (as in game theory, not computer games) call strategies. When people do use ML for that kind of stuff, it’s not the approaches that make news nowadays, because inference is (comparatively) expensive; stuff like NEAT churns out much more sensible actor programs as it evolves structure, not just weights.
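To be clear about how low that bar is, the “follow the ball” thing really is just this (a toy sketch, names made up, not from any real engine):

    # The whole "AI": move the paddle one step toward the ball each frame.
    def update_paddle(paddle_y, ball_y, speed=4):
        if ball_y > paddle_y:
            return paddle_y + speed   # ball is below, move down
        if ball_y < paddle_y:
            return paddle_y - speed   # ball is above, move up
        return paddle_y               # already lined up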
AI in games is not AI in the CS sense, and that’s probably where the confusion is coming from. AI in games uses the cultural definition that includes things like C-3PO and whatnot, whereas AI in the CS sense is just about any algorithm that seems to learn as its environment changes, usually by finding a better (more fitting) solution than the previous iteration. Game AI is generally just pathfinding and direct responses to stimuli; it doesn’t really learn, so players can cheese it pretty consistently.
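The CS-sense definition is roughly “keep whatever fits better than the previous iteration”, e.g. a bare-bones hill climber (the fitness function here is a made-up stand-in, nothing domain-specific assumed):

    import random

    def hill_climb(fitness, start, steps=1000, step_size=0.1):
        best = start
        for _ in range(steps):
            candidate = best + random.uniform(-step_size, step_size)
            if fitness(candidate) > fitness(best):   # better-fitting than last iteration
                best = candidate
        return best

    # e.g. climb toward the peak of a simple bump at x = 3
    print(hill_climb(lambda x: -(x - 3) ** 2, start=0.0))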
I think games using actual AI would be undesirable because it would make them much less predictable and probably way harder. It would also likely use way more compute resources.
You get something very predictable when you throw NEAT at Flappy Bird. And you don’t need ML approaches to make game AI not fun. Take RTS games: in the beginning, many AIs were very simple and had access to essentially cheat codes to be halfway competitive. Then programmers sat down and allowed them to path-find through possibility spaces, such as economic build-up, to formulate a strategy to follow, so they didn’t need to cheat. Thing is, those things are pretty much on or off: either they suck badly and need cheating to survive, or they’re so good they get accused of cheating. So you need to dumb them down to make them believable: make them take non-optimal decisions and make mistakes in execution.
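“Dumbing down” usually amounts to something like this (a rough sketch; the scoring function and numbers are invented, not from any real RTS):

    import random

    # Rank candidate actions by some score, then deliberately blunder sometimes
    # and add a bit of execution error on top.
    def pick_action(actions, score_action, blunder_rate=0.2):
        ranked = sorted(actions, key=score_action, reverse=True)
        if random.random() < blunder_rate and len(ranked) > 1:
            return random.choice(ranked[1:])   # intentional non-optimal decision
        return ranked[0]                       # otherwise play the "correct" move

    def execute(target_position, aim_error=5.0):
        # mistakes in execution: jitter the intended target a little
        return target_position + random.uniform(-aim_error, aim_error)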
That’s the main issue: having a believable and fun opponent, neither an idiot nor a perfect genius, and you don’t need ML approaches to end up at either extreme. Most studios pretty much gave up on making AI smart and keep it deliberately simple, to the point where HL2 is still the pinnacle of achievement when it comes to game AI, with second place going to HL1. Those troopers are darn smart, and if the player couldn’t listen in on their radio chatter they would indeed appear cheaty, always appearing out of nowhere… no, dummy, they flushed you into an ambush. That is, Valve solved the issue by essentially letting the player cheat: the player gets more knowledge than the AI (the radio chatter), and, compared to the troopers, the player is a bullet sponge. All of that is non-ML; it’s all hand-written state machines, more than complex enough to exhibit chaotic behaviour.
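The general shape of such a hand-written state machine, grossly simplified and with states invented for illustration (nothing to do with Valve’s actual code):

    PATROL, CHASE, FLANK, AMBUSH = "patrol", "chase", "flank", "ambush"

    def next_state(state, sees_player, ally_engaged, at_ambush_spot):
        if state == PATROL and sees_player:
            return CHASE
        if state == CHASE and ally_engaged:
            return FLANK     # a squadmate keeps contact while this one circles around
        if state == FLANK and at_ambush_spot:
            return AMBUSH    # wait for the player to get flushed this way
        if not sees_player and not ally_engaged:
            return PATROL
        return state

    # A handful of coordinated states like this, announced over the radio,
    # is already enough to read as "smart" to the player.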
Machine learning is a form of AI, much more sophisticated than the if-else chains that also count as AI, in fact. What did you think AI was, lol?
AI is about making a system seem intelligent by having it do human-like tasks. This type of machine learning is the opposite of that.
Nah, not in computer science terminology.