You didn't mention what I consider to be the core difference! The transformer does quadratic work, comparing each token to every other token, but humans appear to do something like n log n work, at least at the sentence level, because we attend closely to the syntax tree and do major computational work only at the opening and closing of subtrees.
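The asymptotic contrast can be made concrete with a toy operation count. This is only a sketch: the n log n model of human parsing (work proportional to a balanced tree's depth at each token) is an illustrative assumption, not a measurement.

```python
import math

def attention_ops(n):
    # Naive self-attention scores every pair of tokens: O(n^2) comparisons.
    return n * n

def tree_ops(n):
    # Hypothetical tree-structured parser: each of the n tokens triggers
    # work bounded by the depth of a balanced syntax tree, O(log n),
    # giving roughly O(n log n) total work.
    return n * max(1, math.ceil(math.log2(n)))

# The gap widens quickly with sequence length:
for n in (16, 256, 4096):
    print(n, attention_ops(n), tree_ops(n))
```

At n = 4096 the pairwise count is already hundreds of times larger than the tree-based one, which is the commenter's point about the cost of comparing every token to every other token.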
So how long will it take AI to develop embodied form and the capacity for intent: months, years, decades, never? Will you be sharing your assessment in the final section?