Tuesday, February 07, 2023

BARD V CHATGPT: THE LOSERS ARE HUMANS

Been looking at AI for a year or so now.

A lot of it is about classification. The classic example is the Iris dataset: given some measurements, or features, what sort of flower is it?
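To make that concrete, here's a minimal sketch in Python (assuming scikit-learn is installed; the decision tree is just one arbitrary choice of classifier):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# The classic Iris dataset: 150 flowers, 4 features each
# (sepal length/width, petal length/width), 3 species.
X, y = load_iris(return_X_y=True)

# Hold back some flowers so we can test on data the model hasn't seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit a simple classifier and ask: given these features, what species is it?
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)
print("accuracy on held-out flowers:", clf.score(X_test, y_test))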

I've read some papers in which AI has been applied successfully to scientific problems, and the results have been very interesting, to say the least.

So in this regard AI can be very useful for humanity.

But the 'science' could very easily be extended to classifying humans: good; bad; neutral; threatening; completely obedient. Etc.

And the data they base their classification of you on comes from your activities on Google, Amazon, Facebook, etc.

From your searches, purchases.

But it's not just that.

It's from the newspapers you read online: which stories you read, how long you spend on them, how many times you come back to them, whether you recommend them, etc.

Every possible detail is logged and analysed. 

But it goes way beyond that, way beyond how bad the Ickes and Infowars etc believe the situation is.

Because they can classify your thoughts.

Because they can read them.

They are starting to release this into mainstream media now. Slowly.

But I've known this for 25 years!

Since 1998.

Yet the Ickes haven't interviewed me?

Pffft!

But it's what they call another "feature".

And AI lecturers are completely oblivious to this!!

They think it's all for the common good, e.g. health.

And as chatbots have recently been shown to exhibit bias...

Where did this bias come from?

It's either biased training data, explicit programming, or the chatbot is actually not AI but human.
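The first of those is easy to demonstrate: train a model on skewed examples and it simply reproduces the skew. A rough sketch with made-up toy data (nothing to do with any real chatbot):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy, made-up data: feature 0 is a "group" flag, feature 1 is the thing
# that should actually matter. In this skewed training set, group 1 is
# always labelled negative, regardless of feature 1.
X_train = rng.normal(size=(1000, 2))
X_train[:, 0] = rng.integers(0, 2, size=1000)   # group flag: 0 or 1
y_train = (X_train[:, 1] > 0).astype(int)       # the "real" rule
y_train[X_train[:, 0] == 1] = 0                 # the skew baked into the data

clf = LogisticRegression().fit(X_train, y_train)

# Two identical cases except for the group flag:
# the trained model now treats them differently.
print(clf.predict_proba([[0, 1.0], [1, 1.0]]))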

Whatever.

It's not AI.

See: How will Google and Microsoft AI chatbots affect us and how we work?
