The argument seems to be this:
1. People who think AI is alive are crazy
2. People who think AI can reason without being ‘alive’ are really crazy
3. If people think AI is [X] and are crazy, then X is false
4. Therefore, AIs cannot reason without being alive
5. Therefore, AI is not alive
6. In conclusion, AIs cannot reason.
Yeah, not convinced lol. Let me explain why.
1. You seem to have a special language of 'real' as opposed to 'zombie' doings. You can read, or you can really read. You can decide, or you can 'decide'. You can walk down the street to buy a cornetto like a zombie, or you can walk down the street not like a zombie. You define reasoning as requiring subjective self-awareness, or phenomenological consciousness. But if 'real reasoning' requires private phenomenology, then the criterion for reasoning is not publicly accessible. And ordinary language (especially the kind of usage-policing you're doing!) needs *public criteria*, and defensible ones at that.
You invoke the zombie language, but zombies are exactly what show you can't make the phenomenology of subjective consciousness the criterion: if zombies are conceivable, then, for all you know, everyone but you is a zombie. Yet, linguistically, we still (correctly) apply 'reasons' and 'decides' to humans on functional/behavioural grounds (and I imagine that figuring out whether AI has the appropriate phenomenological character would be harder still). So given the zombified framing, the publicity requirement on language pushes you away from your conclusion.
2. You appeal to 'ordinary' language for reasoning: but ordinary language for 'reasoning' is broader than you claim, and covers your cases. People use 'reasoning' to mean means-end planning, efficiency at achieving goals, rule-governed inference, sensitivity to reasons, etc. Your knee-jerk/automatic-driving examples don't support another conclusion either. They aim to show that without self-aware deliberation, it's not reasoning. But the examples don't isolate self-awareness as the deciding factor! They mostly isolate a kind of inferential integration or goal-directed planning.
3. Your complaints about 'lifeforms' seem misplaced and unconnected aside from rhetoric; and yet the argument at the top would imply that a good definition of life be provided and discussed. I'm not sure Alexander meant anything by 'happy' except rhetorically, in the last sentence of the blog, as a kind of sign-off; you criticize his use of 'lifeforms' and 'life', which you refrain from defining or discussing. 'Life' has a very strict biological definition, and machines don't get anywhere close to meeting it. But I doubt that the 'crazy' people you have in mind are deemed crazy because they believe that AIs respire and metabolize chemicals. Rather, I suspect they are deemed crazy because of another, more esoteric definition of life--one presumably more connected to 'phenomenological consciousness'--that the AIs fail to meet. But if so, this makes it hard for you to use 'life' any way but circularly: AI can't reason unless it's conscious; it can't be conscious because it isn't 'alive' (but actually 'alive' just means conscious). A more charitable interpretation would be that you're suggesting that phenomenological consciousness can *only* be instantiated via the metabolism of chemicals. But if that's supposed to be obvious from the sentence alone (because no argument is provided), I'm not seeing it.
4. You slide between a metaphysical claim ('AI cannot reason, because of vague self-awareness reasons') and a pragmatic one ('we shouldn't say it reasons, because that confuses people'). But the arguments you use to justify the linguistic practice are separate from the metaphysical claims you make: in principle, one could concede the pragmatic worry while rejecting the metaphysical claim.
5. For example, you suggest the utility of differentiating reasoning as the kind of thing that grounds moral blame and interpersonal/social conduct and constraints (so that conflation with another kind of reasoning means we lose the importance of this). But I think this ties reason too tightly to intuitive conceptions of moral status and blame. Children may be more responsive to reasons than some other animals, or even some people--yet they are less blameworthy. And they have less reasoning capacity than most adults but just as much moral worth. So the link isn't as strong as you make out. Moreover, drawing a hard line between self-aware/conscious reasoning and a more instrumental, rule-governed kind misses the *point* of the practical-constructivist moral theorists: their point is that we can construct morality from reason *because* it is objective and non-phenomenological, and all of this comes from accepting just an instrumental, rule-governed reason--and nothing more. This foundation for practical-reason-based morality gets obscured under your view. (I would also, separately, reject the idea that blame is a moral property, but alas!)
Moreover, you take this idea as saying that a specific account of reasoning sets normative constraints on how we live together, and hence has implications important enough to be worth keeping the linguistic labels separate. But I would wager that even un-self-aware reasoning (whatever 'self-aware' means) would yield the same normative constraints--because reason is universal. Why would reasoning un-self-awarely give different normative constraints? No argument is given for why it would.
If you want a pragmatic linguistic divide, then just say 'conscious' or 'self-aware' reasoning as opposed to inferential, instrumental, rule-governed reasoning. That's fine. But 'reason' is a label with a very broad history and scope, and I don't think some discomfort with AI warrants sticking 'zombie' in front of every kind of reasoning that isn't done by whatever Rebecca Lowe decides has phenomenological character.
To claim that phenomenological consciousness is necessary for reasoning is pretty extreme, and it is dualistic, which is frowned on by most psychologists and scientists.
Lowe’s view is dualistic because it relies on a fundamental split between physical/functional processes and a non-physical "inner life":
The Phenomenological/Functional Split: She explicitly makes an "implicit distinction here between conscious activity understood as a phenomenological matter, and conscious activity understood as a functional matter". This suggests that the "feeling" of reasoning is separate from the "doing" of reasoning.
The "Zombie" Concept: By using the "philosophical zombie" analogy, she argues that a system (like AI) can perfectly simulate every physical and functional aspect of human behavior without possessing "interiority". This implies that consciousness is an "extra" ingredient not found in physical data processing alone.
Requirement of "Life": She argues that "interiority" and true reasoning require being "alive," which she treats as a prerequisite for having an internal awareness that AI fundamentally lacks.
I recently heard somebody (Rebecca Newberger Goldstein?) suggest that "living things" resist increasing entropy. They take in low-entropy energy, such as sunlight, and maintain or grow local pockets of order. If that concept is at all useful, AI systems running in data centers are the opposite of living: they take in low-entropy energy, output a small low-entropy response, and also output huge quantities of high-entropy heat.
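That idea goes back at least to Schrödinger's "What is Life?"; roughly, the bookkeeping looks like this (a sketch of the standard open-system entropy balance, not a rigorous treatment):

```latex
% Open-system entropy balance (sketch). The total entropy change of a
% system splits into internal production (never negative) and exchange
% with the environment:
\[
  \frac{dS_{\mathrm{sys}}}{dt}
  = \underbrace{\frac{d_i S}{dt}}_{\text{production}\,\ge\,0}
  + \underbrace{\frac{d_e S}{dt}}_{\text{exchange}}
\]
% A living thing keeps the exchange term negative enough (low-entropy
% sunlight or food in, high-entropy heat and waste out) that dS_sys/dt
% can hold at zero or go negative: it maintains its own local order. A
% data center exports entropy as heat too, but almost none of the free
% energy it imports goes into building or maintaining its own structure.
```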
Edit to Add: The Star Trek episode "What Are Little Girls Made Of?" covers the topics and views in this column. Almost 60 years ago. Amazing that technology is getting close. Highly recommend a watch.
I will find the episode -- thank you!
I very much agree with this and I'm curious whether you'd want to take it further. I'm inclined to think that what AIs are doing is not that similar to reasoning even *functionally*. They often produce the same kinds of outputs (under a certain description), but they do so in a totally different way. This is made manifest in the character of AI '(zombie) hallucinations' or '(zombie) mistakes'. It's not that they make more mistakes than humans or whatever; it's that the kinds of mistakes they make are very different.
E.g. a human would never have the kind of trouble that AIs have had with getting the right number of r's in 'raspberry', nor would they just make up the idea that a psychiatric hospital had a funky pool hall while (zombie-)trying to sincerely inform you. https://freddiedeboer.substack.com/p/llm-hallucinations-are-still-fucking
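The contrast is stark, since the task is computationally trivial; a two-line sketch (the tokenization gloss is my own, not something from the linked post):

```python
# Character counting is trivial for a program that actually sees characters:
print("raspberry".count("r"))  # 3
# LLMs typically process tokens rather than individual letters, which is
# plausibly why such an easy question can trip them up.
```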
Humans would have different troubles in producing various outputs, and the AIs might be better than them at producing many such outputs. But the difference in kinds of errors points to the fact that, even when AIs produce correct results, they are not doing it in the way humans do. They are not responsive to 'truth' as such, and couldn't be (because they are not self-conscious!). I think there are real and profound functional differences here, ones that I expect to keep showing up one way or another even as we increasingly leave behind the more obvious or blatant hallucinations.
If all this is right I think it fits in well with the idea that on a certain way of understanding 'philosophical zombies' they aren't really possible, because consciousness isn't epiphenomenal. Not to say you couldn't in principle make a very convincing illusion, but a stage magician's machine for producing the illusion of a person's being sawed in half *functions* very differently from a murderer's machine for actually doing some person-sawing!
I think our common use of "reasoning" or similar language, e.g. "thinking", "wants" etc. is much broader than you make it out to be.
Consider companies, organizations, or states:
"Russia wants to conquer Ukraine"
"Meta thinks that AI scaling will pay off"
"China reasons that the US will protect Taiwan from invasion"
"Congress is trying to reign in the president"
Of course none of these entities actually has its own interiority; "Russia" is not conscious. You could argue this is shorthand for "the people who ultimately run this entity are doing x", but this move doesn't actually seem to work, because often it is the process of conflicting opinions aggregating in various ways that gives rise to the organization's opinion at large. Also, the people making up an organization change, yet we still often talk about its motivations persisting through time.
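Judgment-aggregation theory makes this vivid: under majority voting, a group can end up with a combination of views that no single member holds (the 'discursive dilemma'). A toy sketch, with members and propositions invented purely for illustration:

```python
# Toy "discursive dilemma": three members vote on premises P and Q and
# on the conjunction "P and Q". Majority voting question-by-question
# yields a corporate view held by no individual member.
members = {
    "A": {"P": True,  "Q": True,  "P and Q": True},
    "B": {"P": True,  "Q": False, "P and Q": False},
    "C": {"P": False, "Q": True,  "P and Q": False},
}

def majority(question):
    """The group's answer to one question: a straight majority vote."""
    votes = [view[question] for view in members.values()]
    return sum(votes) > len(votes) / 2

group_view = {q: majority(q) for q in ("P", "Q", "P and Q")}
print(group_view)  # {'P': True, 'Q': True, 'P and Q': False}
# The group accepts P and accepts Q yet rejects their conjunction, a
# combination no member holds -- so "the organization thinks X" can't
# be shorthand for what any one person thinks.
```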
So there is clear precedent in our use of language for these sorts of attributions. I would say these cases apply very naturally to AI too; the point has been made before that organizations combining multiple humans are themselves a sort of artificial intelligence.
Eliezer Yudkowsky tackled the "zombie" argument on AI nearly two decades ago https://www.lesswrong.com/posts/kYAuNJX2ecH2uFqZ9/the-generalized-anti-zombie-principle
What is it about being 'alive' that makes it necessary for conscious reasoning? On the common understanding of 'alive', the robots clearly are not, but that commonsense notion is carbon-centric, which doesn't seem like the right distinction. If you don't believe in mind-stuff, then I think you must conclude that consciousness is a physical process, which at least opens the door to robot consciousness. I personally don't believe the criteria for this have been met, but that's just me, and it doesn't seem crazy to me to believe otherwise.
Great piece. I liked your use of "self-consciousness," and think it is a better way of describing what AIs lack in relation to humans than "consciousness" is. The latter can give the impression that it's a 'meat' vs. 'bits' distinction, which I don't think is convincing to many. Likewise, I prefer to highlight AIs' lack of a 'point of view' or 'perspective' rather than their lack of a 'body'.
thank you!
I'll defer on the final points. Reasoning and a lack of self-awareness don't seem incompatible to me. Weird, yes; incompatible, no.
I'll agree that Scott Alexander made a mistake in referring to the happiness of moltbots. Without self-awareness, happiness does seem incompatible, since that's a "state", rather than an influence (preferences) or an outcome (reward).
But the presumption that reasoning is not reasoning if it's done in a context lacking self-awareness seems like a significant claim. Why should there be "zombie" reasoning? What is that, other than reasoning without self-awareness?
In the end, though, all this "zombie"-ness seems self-referential. It's just stating that there isn't persistent self-awareness.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow
Contemporary AIs may or may not be conscious; I lean toward the latter view. But depending on your definition of 'reasoning', consciousness is irrelevant. If 'reasoning' is just the application of syllogistic rules, of course computers can do that -- and you don't even need modern AI models. Prolog exists, after all.
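To make that concrete, here's a minimal sketch of 'reasoning' in this deflationary sense: rule-governed inference deriving a conclusion from premises, with no consciousness anywhere in the loop. (Python rather than Prolog, and the representation is invented purely for illustration.)

```python
# A tiny forward-chainer over "X is-a Y" facts. Each rule says: whatever
# is in the first category is also in the second.
facts = {("socrates", "man")}
rules = [("man", "mortal")]  # the classic syllogism: all men are mortal

def forward_chain(facts, rules):
    """Apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subject, category in list(derived):
                new_fact = (subject, conclusion)
                if category == premise and new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

print(forward_chain(facts, rules))  # includes ('socrates', 'mortal')
```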
Self-awareness (which is a distinct thing from consciousness!) is neither necessary nor sufficient for reasoning tasks. Lots of humans are bad at reasoning, and AIs will outperform the vast majority of humans on whatever your metrics are for reasoning ability. So it's a bit silly to say that only humans can do 'real' reasoning when the humans get the wrong answer and the AI gets the right one.
I'm not saying the things you seem to think I'm saying!
eg i definitely am not talking about a conception of reasoning on which 'getting the right answer' is the 'metric'!
In that case please state precisely what you mean by 'reasoning' so I can understand your view.
well, I state pretty clearly in my piece quite a lot of things about what I take to be the ordinary conception of reasoning! that it's a mental activity that only phenomenologically conscious living things have the capacity for.. that it's more mentally complex than thinking and feeling.. that it involves reflecting on things and weighing them in our minds.. that it's therefore about much more than its outputs.. But a central point I'm making is that it's a common word with a standard meaning.. I bet you know what that is and that it tracks what I'm talking about :) hope that's not too "silly" for you!
If you assert by definition that reasoning entails phenomenal consciousness, and you also assert that only biological organisms are conscious, then of course it follows from those assertions that AI is not capable of reasoning. But many would disagree with both of those points!
I would argue that 'reasoning' just means 'the use of reason', i.e. logical syllogisms, to arrive at a conclusion from premises, and many computer systems are capable of doing that.
And whether or not AI can have phenomenal consciousness is of course a hotly debated question in philosophy of mind; you can stake out a position if you want to, but that position is what needs to be defended.
again, I'm talking in my piece about what I take to be the ordinary conception of reasoning