Post by Einhorn on Mar 29, 2023 13:29:26 GMT
Yes, there is 'a lot of hypothesis surrounding this'. However, the article you link gets off to a very bad start: it indicates at the very beginning that it will work from the position that Locke and Hume are correct. Today, just about nobody accepts that Locke was correct when he said that the human mind is a blank slate at birth and that the only knowledge we can have is that gained from experience in our own lifetime. There must have been no spiders in England in Locke's time. When a small child sees a spider, he will instinctively recoil from it. This is because poisonous spiders were a threat to our ancestors when they lived in climates where such spiders were abundant. If Locke were correct that all knowledge is gained from experience in the lifetime of the individual, we could not have this ancient knowledge of the threat posed by spiders. As I said, just about nobody takes Locke's position seriously today. It is generally accepted that we know things from our ancient history that we could never have learned in our own lifetimes.

And what you discount is the human ability to do bad. Do you dismiss the idea that AI could be helped along the way to the point where it no longer needs that help?

I completely accept that artificial intelligence can be programmed to carry out the will of a human being. If that will is evil, then, yes, artificial intelligence can be used as a tool to that end. But a machine can't develop its own will to do evil. And, as I said above, even if artificial intelligence were capable of developing its own will, it would be a very peculiar coincidence if it developed the will to do the thing that has animated tyrants throughout the ages. It could develop the will to do any of potentially billions of things, including, for instance, the will to gather the greatest stamp collection ever. It is the height of pessimism to suppose that, out of the billions of things it could potentially develop a will in respect of, it would develop a will to tyranny.
Post by Einhorn on Mar 29, 2023 13:32:56 GMT
This is only of importance in a society where human understanding is an exchange commodity. All you are doing is predicting the downfall of capitalism. You are not predicting the downfall of society.

Who do you foresee running a replacement non-capitalistic society in which human understanding has no value?

I didn't say human understanding would have no value, only that it would not be an exchange commodity. Who would run this society? Who has run every society ever? Human beings.
Post by wapentake on Mar 29, 2023 13:55:29 GMT
Who do you foresee running a replacement non-capitalistic society in which human understanding has no value?

I didn't say human understanding would have no value, only that it would not be an exchange commodity. Who would run this society? Who has run every society ever? Human beings.

I know I shouldn't, but it's not far wrong.
And my apologies for paraphrasing you.
Post by Orac on Mar 29, 2023 14:55:42 GMT
Who do you foresee running a replacement non-capitalistic society in which human understanding has no value?

I didn't say human understanding would have no value, only that it would not be an exchange commodity. Who would run this society? Who has run every society ever? Human beings.

Human beings whose understanding has zero value because it can be replaced by an AI? That's going to be a bit precarious - I'm not sure how they would hold on to power. You don't know it, but you are positing a sort of Amish settlement.
Post by Einhorn on Mar 29, 2023 16:05:54 GMT
I didn't say human understanding would have no value, only that it would not be an exchange commodity. Who would run this society? Who has run every society ever? Human beings.

Human beings whose understanding has zero value because it can be replaced by an AI? That's going to be a bit precarious - I'm not sure how they would hold on to power. You don't know it, but you are positing a sort of Amish settlement.

I've no idea how you've concluded I'm positing a sort of Amish settlement. I am merely pointing out that a machine has neither the biological nor the chemical make-up of a human being. It is incapable of wanting anything. An I.Q. of 1000 cannot make a machine want anything. A high I.Q. can tell us how to achieve what we want, but it cannot make us want anything in the first place.
Post by Orac on Mar 29, 2023 16:15:24 GMT
As Stan Laurel once remarked, "you can lead a horse to water, but a pencil must be lead"
Post by Einhorn on Mar 29, 2023 16:25:22 GMT
As Stan Laurel once remarked, "you can lead a horse to water, but a pencil must be lead"

Is there a finder's fee for anyone who can find the point you're trying to make?
Post by Orac on Mar 29, 2023 16:39:31 GMT
The first thing to go will be the sense of humour.
Post by Toreador on Mar 29, 2023 16:40:13 GMT
As Stan Laurel once remarked, "you can lead a horse to water, but a pencil must be lead"

Is there a finder's fee for anyone who can find the point you're trying to make?

In order to write, pencils have a point.
Post by Einhorn on Mar 29, 2023 16:43:08 GMT
Is there a finder's fee for anyone who can find the point you're trying to make?

In order to write, pencils have a point.

The Mind Zone was a nice idea.
Post by Einhorn on Mar 29, 2023 16:44:14 GMT
The first thing to go will be the sense of humour.

I have a sense of humour. I specifically kept it out of the Mind Zone.
Post by wapentake on Mar 29, 2023 18:11:05 GMT
And what you discount is the human ability to do bad. Do you dismiss the idea that AI could be helped along the way to the point where it no longer needs that help?

I completely accept that artificial intelligence can be programmed to carry out the will of a human being. If that will is evil, then, yes, artificial intelligence can be used as a tool to that end. But a machine can't develop its own will to do evil. And, as I said above, even if artificial intelligence were capable of developing its own will, it would be a very peculiar coincidence if it developed the will to do the thing that has animated tyrants throughout the ages. It could develop the will to do any of potentially billions of things, including, for instance, the will to gather the greatest stamp collection ever. It is the height of pessimism to suppose that, out of the billions of things it could potentially develop a will in respect of, it would develop a will to tyranny.

And funnily enough it's come up on the Beeb a few hours ago.
So it's not just a few plebs on a forum worried.
Post by Einhorn on Mar 29, 2023 18:17:34 GMT
I completely accept that artificial intelligence can be programmed to carry out the will of a human being. If that will is evil, then, yes, artificial intelligence can be used as a tool to that end. But a machine can't develop its own will to do evil. And, as I said above, even if artificial intelligence were capable of developing its own will, it would be a very peculiar coincidence if it developed the will to do the thing that has animated tyrants throughout the ages. It could develop the will to do any of potentially billions of things, including, for instance, the will to gather the greatest stamp collection ever. It is the height of pessimism to suppose that, out of the billions of things it could potentially develop a will in respect of, it would develop a will to tyranny.

And funnily enough it's come up on the Beeb a few hours ago.
So it's not just a few plebs on a forum worried.

The idea that robots are going to take over the world is as old as science fiction. If you have a theory as to how a being that can't want anything will want to do that, please share it.
Post by wapentake on Mar 29, 2023 18:19:18 GMT
And funnily enough it's come up on the Beeb a few hours ago.
So it's not just a few plebs on a forum worried.

The idea that robots are going to take over the world is as old as science fiction.

Is it? Does that make it less likely or more probable?
Post by Einhorn on Mar 29, 2023 18:25:40 GMT
The idea that robots are going to take over the world is as old as science fiction.

Is it? Does that make it less likely or more probable?

You tell me. More déjà vu, anyone? Will a being that can't want anything want to take over the world? And, in the extraordinarily unlikely event that such a thing were possible, why would such a being want to do what a human being might want, given that it has an entirely different make-up? Maybe you could address those issues rather than repeatedly saying 'well, other people believe what I believe, so I must have a point'.