

I’m not so sure we’re missing that much, personally. I think it’s more just sheer scale, as well as the complexity of the input and output connections (I guess unlike machine learning networks, living things tend to have a much ‘fuzzier’ sense of inputs and outputs). And of course raw computational speed; our virtual networks are basically at a standstill compared to the parallelism of a real brain.
Just my thoughts though!
I think the misunderstanding here is in thinking ChatGPT has “languages”. It doesn’t choose a language; it is always drawing from everything it knows. The ‘configuration’ is therefore the same for all languages — it’s basically just an invisible prompt telling it, in plain text, how to communicate.
When you change/add your personalized “Custom Instructions”, this is basically the same thing.
I would assume that this invisible context is in English no matter what; which language you actually write in shouldn’t make any difference.
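To make that concrete, here’s roughly what it looks like if you call the API yourself (just a sketch using the Python openai client; the hidden prompt text and model name here are placeholders, not what OpenAI actually uses):

```python
# Minimal sketch: the "invisible" instructions and your Custom Instructions
# are just plain text placed ahead of the conversation. The model sees one
# combined context; there is no separate per-language configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-in for whatever the real hidden prompt says.
hidden_instructions = "You are a helpful assistant. Match the user's tone and language."
custom_instructions = "Answer concisely."  # the user's own Custom Instructions

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # Both the hidden prompt and the Custom Instructions end up here,
        # as ordinary English text, regardless of the user's language.
        {"role": "system", "content": hidden_instructions + "\n\n" + custom_instructions},
        {"role": "user", "content": "¿Cómo funciona la fotosíntesis?"},
    ],
)
print(response.choices[0].message.content)
```

In other words, changing the UI language or writing in Spanish doesn’t swap in a different model or configuration; it only changes the user text sitting after that same English preamble.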