|
Post by Baron von Lotsov on Jan 26, 2023 15:12:17 GMT
The history of AI is long and complicated, but to sum up, it tends to increase in a staircase fashion, with major discoveries making a jump in performance. Different AI systems are for different uses. One of the very first was a simple classifier, built in the US around the 1950s, meant to identify whether a photo of the sea contained a warship or not. These classifiers had limited use and were not seen as very intelligent. AI was indeed given up on for a long time as a useless waste of research, until improvements were made. One such improvement was the idea of modelling the system on how the visual cortex sees things, which led to convolutional networks and deep learning. Another big improvement was applying feedback, which comes under the name of Recurrent Neural Network, or RNN. There was also the LSTM in 1997 (Long Short-Term Memory), which was for understanding time series data, e.g. music, and now we have a new one called transformers. The key feature here is attention. It is a system that can be used for translating natural language, where word order is important to the meaning, so it is not simply a word-for-word replacement.
I came across this video on it. The speaker in this talk is a bit of a clown and does not explain things very well, but you should be able to get the gist of how it works. The transformer was actually introduced in 2017, in the paper "Attention Is All You Need", and it works well. It also parallelises well across a whole sequence, unlike recurrent models, which have to crunch through a sequence one step at a time.
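To make the attention idea concrete, here is a minimal numpy sketch of scaled dot-product attention, the core operation inside a transformer. This is a toy illustration, not anyone's production code: the shapes and the self-attention usage (queries, keys and values all taken from the same token embeddings) are my own example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query produces a weighted
    mix of the values, weighted by how well it matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of every query to every key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy self-attention: 3 "tokens", 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = attention(X, X, X)  # tokens attend to each other
print(out.shape)             # each token gets a context-mixed vector
```

The point about word order: because every token looks at every other token at once, the model can learn which other words matter for interpreting a given word, rather than just substituting words one for one.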
|
|
|
Post by besoeker3 on Feb 19, 2023 14:00:27 GMT
I like transformers. They are simple.
|
|
|
Post by Baron von Lotsov on Feb 19, 2023 15:46:09 GMT
I like transformers. They are simple. It's just a stupid name, after those kiddie toy robots apparently.
The word comes from the Latin transformare, "to change in shape, metamorphose".
|
|
|
Post by besoeker3 on Feb 19, 2023 21:11:20 GMT
I like transformers. They are simple. It's just a stupid name, after those kiddie toy robots apparently.
The word comes from transformare "change in shape, metamorphose,"
It was just a bit of fun at your expense.
|
|
|
Post by Baron von Lotsov on Feb 19, 2023 21:30:58 GMT
It's just a stupid name, after those kiddie toy robots apparently.
The word comes from transformare "change in shape, metamorphose,"
It was just a bit of fun at your expense. I don't expect any more these days from you.
|
|
|
Post by besoeker3 on Feb 19, 2023 21:37:51 GMT
It was just a bit of fun at your expense. I don't expect any more these days from you. How kind of you.
|
|
|
Post by Orac on Feb 19, 2023 21:39:09 GMT
Reminder
This thread is in mind zone, so serious minded or applicable submissions only.
|
|
|
Post by besoeker3 on Feb 19, 2023 22:00:00 GMT
Reminder This thread is in mind zone, so serious minded or applicable submissions only. Actually, transformers are a serious issue in my line of work.
|
|
|
Post by Baron von Lotsov on Feb 19, 2023 22:33:22 GMT
Here's an interesting point you might be able to explain. In a standard neural network there is the activation function; the rest of the calculation is linear algebra, i.e. matrix multiplications. This means that if the activation function is linear, the layers collapse into a single linear map, so you can achieve performance no better than your best-guess linear statistical fit. However, if you make the activation function non-linear, you get better predictive performance than that linear best guess. A simple non-linear function is ReLU, which is linear for positive values and zero for all negative ones; others are the sigmoid function or tanh. Strange that, don't you think? I mean, in electronics non-linearity gives you distortion, whereas here it makes the thing more intelligent.
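The collapse is easy to demonstrate. Below is a small numpy sketch (my own toy example, with made-up random weights): two layers with a linear activation are exactly equivalent to one layer, whereas inserting a ReLU between them breaks that equivalence, which is what gives the network its extra expressive power.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))  # first layer weights
W2 = rng.normal(size=(2, 5))  # second layer weights
x = rng.normal(size=(3,))     # an input vector

# Two layers with a linear (identity) activation...
deep_linear = W2 @ (W1 @ x)
# ...are exactly one linear layer: depth buys nothing.
collapsed = (W2 @ W1) @ x
print(np.allclose(deep_linear, collapsed))  # True

# A non-linearity like ReLU between the layers breaks the collapse:
relu = lambda z: np.maximum(z, 0)
deep_relu = W2 @ relu(W1 @ x)
```

On the electronics analogy: distortion and intelligence come from the same place. A linear system can only scale and mix its inputs, so the non-linearity is precisely what lets the network represent functions that no amount of scaling and mixing can produce.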
|
|
|
Post by Orac on Feb 20, 2023 14:33:42 GMT
There is a lot of hubbub online about this thing.
Influencers are reporting that the machine can give false information, get into contradictory tangles, end up threatening people and appear to have emotional crises.
It's confusing and unsettling people who have been led to imagine that this thing is thinking and doing the things it is talking about. People would be able to put this thing's abilities into proper perspective if they understood more clearly what it was.
|
|
|
Post by Baron von Lotsov on Feb 21, 2023 1:42:05 GMT
There is a lot of hubbub online about this thing. Influencers are reporting that the machine can give false information, get into contradictory tangles, end up threatening people and appear to have emotional crises. It's confusing and unsettling people who have been led to imagine that this thing is thinking and doing the things it is talking about. People would be able to put this thing's abilities into proper perspective if they understood more clearly what it was. Are you talking about that Chat thing they keep on about?
What I was talking about in the OP is simply an AI technique, the latest on the theoretical side of it. AI system intelligence tends to go up in jumps as new theoretical models are discovered. This particular one is good for things like translating text from one language to another: not just learning the right words, but understanding the order of the words, as this changes the meaning. It would be good, say, if we want to know what the heck the Chinese are on about. Not so good, though, if you are a professional translator; another job of the past, it seems.
|
|