Post by Orac on Mar 29, 2023 10:55:27 GMT
> > Machines can't want, but they can imitate. The danger of machines wanting to take over and doing so is an inadvertent (honest) strawman. Machines (AI) may well take over without wanting anything - the humans will do all the wanting.
> Yes, but the machines don't 'take over' in this case - they are merely the agents of human beings. They don't have their own 'will'. They are merely the tools to enact the will of human beings.

And the aggregate will of humans will be that the AI takes over. There will be no way for an individual to refuse adding to that will and still compete.
Post by Einhorn on Mar 29, 2023 12:02:05 GMT
> And the aggregate will of humans will be that the AI takes over. There will be no way for an individual to refuse adding to that will and still compete.

This is a bit vague. Can you give concrete examples?
Post by wapentake on Mar 29, 2023 12:26:35 GMT
> > Machines can't want, but they can imitate. The danger of machines wanting to take over and doing so is an inadvertent (honest) strawman. Machines (AI) may well take over without wanting anything - the humans will do all the wanting.
> Yes, but the machines don't 'take over' in this case - they are merely the agents of human beings.

For now.

In an ideal world, perhaps. As said before (and this sounds like some mad conspiracy theory), they could manufacture their own will.
Post by Einhorn on Mar 29, 2023 12:31:27 GMT
> > Yes, but the machines don't 'take over' in this case - they are merely the agents of human beings.
> For now.

What would 'machines taking over' look like? I'm finding it difficult to imagine an entity that has no will suddenly magicking up a will to 'take over' out of nowhere.
Post by Einhorn on Mar 29, 2023 12:40:01 GMT
> In an ideal world, perhaps. As said before (and this sounds like some mad conspiracy theory), they could manufacture their own will.

In order to manufacture their own will, they would have to possess a will to manufacture that will. An entity with no will obviously can't do that. You must answer two questions:

1) How can a being with no more will than, say, a rock suddenly find itself in possession of will?

2) Supposing such an unprecedented thing were possible, why would they develop a will to dominate? Why wouldn't this marvellous will be a will to obtain the world's greatest stamp collection? Of the potentially billions of things they could develop a will in respect of, why do you think they will develop a will in respect of the thing which appears to animate human beings? Wouldn't it be a very odd coincidence, given that a machine and a human being are entirely different things?

When people say that machines will take over, they are simply projecting human emotions and ambitions onto them.
Post by Orac on Mar 29, 2023 12:48:56 GMT
> > And the aggregate will of humans will be that the AI takes over. There will be no way for an individual to refuse adding to that will and still compete.
> This is a bit vague. Can you give concrete examples?

Nightmare mode on. Let's take a low-hanging-fruit example of a specialisation: law. I don't think it's going to be very long before most of the 'bread and butter' work of lawyers becomes functionally redundant. People will get legal advice from chat and maybe only hire a lawyer briefly if the situation is very important and/or they disagree with the advice.

Let me put this in compounding perspective: lawyers will use AI to answer clients' questions. A lawyer could use an AI to give good legal advice to a thousand clients a day. Given the obvious growing redundancy of the role, who in their right mind is going to train for seven hard, expensive years to become a lawyer? And even if they had the will, who is going to teach them?
Post by wapentake on Mar 29, 2023 13:00:08 GMT
> What would 'machines taking over' look like? I'm finding it difficult to imagine an entity that has no will suddenly magicking up a will to 'take over' out of nowhere.

There's a lot of hypothesis surrounding this, and I'm not saying it will happen, just that it could. Anyway, here's one that might give some ideas.
Post by Einhorn on Mar 29, 2023 13:01:28 GMT
> > This is a bit vague. Can you give concrete examples?
> Nightmare mode on. Let's take a low-hanging-fruit example of a specialisation: law. I don't think it's going to be very long before most of the 'bread and butter' work of lawyers becomes functionally redundant. People will get legal advice from chat and maybe only hire a lawyer briefly if the situation is very important and/or they disagree with the advice. Let me put this in compounding perspective: lawyers will use AI to answer clients' questions. A lawyer could use an AI to give good legal advice to a thousand clients a day. Given the obvious growing redundancy of the role, who in their right mind is going to train for seven hard, expensive years to become a lawyer? And even if they had the will, who is going to teach them?

But this is something of a departure from the current conversation, isn't it? In the above, you are talking about artificial intelligence and machinery making people redundant. I don't think there is any dispute about that. At the moment, we are talking about machines taking over. That's a very different conversation.
Post by Orac on Mar 29, 2023 13:10:22 GMT
Darling, the redundancy is in the value of understanding anything. You are going to have to get the rest of the way using your own faculties (ironically).
Post by Einhorn on Mar 29, 2023 13:10:45 GMT
> > What would 'machines taking over' look like? I'm finding it difficult to imagine an entity that has no will suddenly magicking up a will to 'take over' out of nowhere.
> There's a lot of hypothesis surrounding this, and I'm not saying it will happen, just that it could. Anyway, here's one that might give some ideas.

Yes, there is 'a lot of hypothesis surrounding this'. However, the article you link gets off to a very bad start: it indicates at the very beginning that it will work from the position that Locke and Hume are correct.

Today, just about nobody accepts that Locke was correct when he said that the human mind is a blank slate at birth and that the only knowledge we can have is that which is gained from experience in our lifetime. There must have been no spiders in England in Locke's time. When a small child sees a spider, he will instinctively recoil from it. This is because poisonous spiders were a threat to our ancestors when they lived in a climate where poisonous spiders are abundant. If Locke were correct that all knowledge is gained from experience in the lifetime of the individual, we could not have this ancient knowledge about the threat posed by spiders.

As I said, just about nobody takes Locke's position seriously today. It is generally accepted that we know things from ancient history that we could never have gained knowledge of in our own lifetimes.
Post by Einhorn on Mar 29, 2023 13:11:32 GMT
> Darling, the redundancy is in the value of understanding anything. You are going to have to get the rest of the way using your own faculties.

I'm afraid I don't even understand what is being said above. What do you mean?
Post by Orac on Mar 29, 2023 13:15:13 GMT
> > Darling, the redundancy is in the value of understanding anything. You are going to have to get the rest of the way using your own faculties.
> I'm afraid I don't even understand what is being said above. What do you mean?

Human understanding will become redundant (of zero value).
Post by Einhorn on Mar 29, 2023 13:20:24 GMT
> > I'm afraid I don't even understand what is being said above. What do you mean?
> Human understanding will become redundant (of zero value).

This is only of importance in a society where human understanding is an exchange commodity. All you are doing is predicting the downfall of capitalism. You are not predicting the downfall of society.
Post by wapentake on Mar 29, 2023 13:23:57 GMT
> > There's a lot of hypothesis surrounding this, and I'm not saying it will happen, just that it could. Anyway, here's one that might give some ideas.
> Yes, there is 'a lot of hypothesis surrounding this'. However, the article you link gets off to a very bad start: it indicates at the very beginning that it will work from the position that Locke and Hume are correct. Today, just about nobody accepts that Locke was correct when he said that the human mind is a blank slate at birth and that the only knowledge we can have is that which is gained from experience in our lifetime. There must have been no spiders in England in Locke's time. When a small child sees a spider, he will instinctively recoil from it. This is because poisonous spiders were a threat to our ancestors when they lived in a climate where poisonous spiders are abundant. If Locke were correct that all knowledge is gained from experience in the lifetime of the individual, we could not have this ancient knowledge about the threat posed by spiders. As I said, just about nobody takes Locke's position seriously today. It is generally accepted that we know things from ancient history that we could never have gained knowledge of in our own lifetimes.

And what you discount is the human ability to do bad. Do you dismiss the idea that AI could be helped along the way to the point where it no longer needs that help?
Post by Orac on Mar 29, 2023 13:26:33 GMT
> > Human understanding will become redundant (of zero value).
> This is only of importance in a society where human understanding is an exchange commodity. All you are doing is predicting the downfall of capitalism. You are not predicting the downfall of society.

Who do you foresee running a replacement non-capitalistic society in which human understanding has no value?