It doesn’t take much to make machine-learning algorithms go awry

  • 📰 TheEconomist


Artificial-intelligence systems need a lot of training data, much of which comes from the open web. This makes the AIs susceptible to a type of cyber-attack known as "data poisoning".
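The mechanism can be illustrated on a toy model. The sketch below is purely hypothetical: it uses a tiny 1-nearest-neighbour classifier and a handful of hand-placed 2-D points, standing in for the web-scale corpora and large models the article describes. The attacker plants a few deliberately mislabelled examples near a target "topic" region, so the trained model gives the attacker's answer there while behaving normally everywhere else.

```python
import math

def nn_predict(train, q):
    """1-nearest-neighbour: predict the label of the closest training point."""
    return min(train, key=lambda t: math.dist(t[0], q))[1]

# Clean training data: a grid labelled 1 iff the point lies past x + y = 4.
clean = [((x, y), int(x + y > 4)) for x in range(5) for y in range(5)]

# Targeted poison: a few mislabelled points planted near the "topic" region
# around (3.9, 3.9), each carrying the attacker's chosen label 0. The
# web-scrape analogue is seeding pages where crawlers are known to look.
poison = [((3.9 + dx, 3.9 + dy), 0) for dx in (0, 0.05) for dy in (0, 0.05)]

print(nn_predict(clean, (3.8, 3.9)))           # correct answer: 1
print(nn_predict(clean + poison, (3.8, 3.9)))  # attacker's answer: 0
print(nn_predict(clean + poison, (1.0, 4.0)))  # off-topic query unchanged: 1
```

Because only a few planted points are needed, and only queries near them are affected, such an attack is both cheap and hard to spot, which is what makes poisoning scraped training data attractive.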

Poisoning can cause a model to spout untruths whenever it is asked about a particular topic. Attacks against models that generate computer code have led these systems to write software that is vulnerable to hacking.

Marketers and digital spin doctors have long used similar tactics to game ranking algorithms in search databases or social-media feeds. The difference here, says Mr Bagdasaryan, is that a poisoned generative model would carry its undesirable biases through to other domains—a mental-health-counselling bot that spoke more negatively about particular religious groups would be problematic, as would financial or policy advice bots biased against certain people or political parties.

 


Offline is the best security.

Control and security are the bywords for AI.

It sounds just like a futuristic movie but it also sounds so realistic!

Open the bay doors, Hal.

I made one years ago. It works better than manual methods, reduces errors to zero and speeds up the process. The quality of web data and hacking are another story.

Danzón! Dedicated to Bilbeny and the lady accompanying him...

Garbage in, garbage out seems to keep rearing its ugly head.

Government uses data poisoning on the people relentlessly.


