Every time there’s an AI hype cycle the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or before that, the AlphaGo hype. Remember that?
Not really. As far as I can see the goalpost moving is just objectively happening.
But fundamentally you can’t make a machine think without understanding thought.
If “think” means anything coherent at all, then this is a factual claim. So what do you mean by it? Specifically: what event would have to happen for you to decide “oh shit, I was wrong, they really did make a machine that can think”?
That’s still nowhere near unexplainable enough to be impossible to study. You’ve described the god’s behaviour as “sometimes alters reality when prayed to by a devout follower” - if it’s consistent enough for this statement to make sense, that’s already a lot to study. Is there a correlation between particular prayers and miracles? Are particular mental states helpful? Do various traits make someone more or less likely to produce a miracle? Are there drugs that affect it? What are the limits to a miracle? Are there patterns in the time intervals between miracles? And so on, and so forth. A world with such a magic system, if you want it to be realistic, should have had an entire history of people studying these and many other things.
Eh. It’s sometimes fun to read stories like that (one had better have fun, since most stories are like that!), but they’re… stories about worlds where there isn’t a single human with common sense or intelligence. Not just in the story itself, but in the world’s entire history, because the author didn’t realise that “people trying to seriously explore the laws of their world” is a thing that necessarily happens in realistic worlds, much like it happens in ours.