@Devs, Q&A T, Anyone at TFP that might have the answer This is going to be a weird question, but bear with me. I know nothing of how far the gaming industry has come, or what path it is currently on, regarding the topic of my question; I'm simply asking out of curiosity. So I do apologize in advance if this is a dumb thing to ask.
Are there any plans for "advanced machine learning and neural networks" for 7D2D zombie AI?
Um, jumping in on this one real quick, though I should probably read the other replies first... but I can tell you from a robotics perspective, where such things control a physical machine in the real world, that this has been a decades-long issue. Most people still treat it as a software problem, which it can be via simulation, but especially with neural nets it is far more a hardware issue than a software one. So, faux neural nets, maybe; real ones barely exist at all, in the tech world anyway, we squishies still have ours.

Machine learning is another fuzzy area. Some of it runs fine on current von Neumann style serial architectures, but most of those techniques also suffer from running on the wrong hardware, which is why simulating them can be so ridiculously power hungry. Anyway, I'd imagine not, nothing very fancy at least, and even if they did do it, it would likely punch performance in the gut, a few times over. Some interesting techniques can be siphoned off and applied, but we've still yet to see or experience much of an actual "neural net" to this day, aside from a handful of isolated hardware research projects.

The only thing I can think of that comes close is interesting, though, because it has been adopted into many graphics cards out of sheer necessity: dedicated massively parallel cores, like the CUDA architecture in nVidia hardware. Graphics cards are the closest thing we have to the right hardware, and when running a game like 7DTD that mofo is already busy simulating the world and all of its details simultaneously. If you added a second card and set things up right, you might get some useful "advanced" neural net simulation and machine learning out of it, but it would need its own card's worth of dedicated hardware resources to really do anything useful. That's why many AI systems in vehicles use graphics processors, even a Tesla; nVidia makes car brains too, y'know.

Anyway, I hope that helps explain why I'm expecting a "no" on this, at least for doing those things in any meaningful capacity, especially alongside the world simulation tasks that are already a constant struggle to balance and keep performant.
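To put rough numbers on the performance point, here's a quick back-of-the-envelope sketch in Python. Every figure in it (zombie count, tick rate, network sizes) is an assumption I made up for illustration, nothing from TFP:

```python
# Rough cost of running a small neural-net "brain" per zombie per AI tick.
# Every number below is an illustrative assumption, not a real 7DTD figure.

ZOMBIES = 64               # assumed concurrent active zombies
TICKS_PER_SEC = 20         # assumed AI update rate
LAYERS = [32, 64, 64, 8]   # assumed tiny MLP: sensor inputs -> action outputs

def mlp_flops(layers):
    """Floating-point ops for one dense forward pass (a multiply + add per weight)."""
    return sum(2 * a * b for a, b in zip(layers, layers[1:]))

per_pass = mlp_flops(LAYERS)                      # ~13k FLOPs for this toy net
per_second = per_pass * ZOMBIES * TICKS_PER_SEC   # ~17M FLOPs/s for the horde

print(f"{per_pass:,} FLOPs per zombie per tick")
print(f"{per_second:,} FLOPs per second across all zombies")
```

The raw arithmetic for a toy net like that is honestly trivial, even on a CPU. The gut-punch comes when you scale up to anything genuinely "advanced" (those FLOP counts grow roughly with the square of the layer width) and from the scheduling side, since the one piece of hardware suited to the job is the same GPU that's already flat-out rendering and simulating the world. Which is exactly the "give it its own card" point above.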