Algorithms And Social Media

Discussion in 'Taylor's Tittle-Tattle - General Banter' started by Bonkingbob, Sep 20, 2020.

  1. Bonkingbob

    Bonkingbob First Year Pro

    Inspired by that new Netflix doc (The Social Dilemma, worth a watch), some beer, and a late-night faff about on Twitter, I bumped into this:

    https://twitter.com/bascule/status/1307440596668182528?s=19

    Any initial political intentions aside, it becomes an interesting experiment in what our Skynet AI overlords are deciding to show us.

    All of this is completely automated, based on how long we spend looking at something, how many times we click, etc. I don't know whether it's inherent racism or a preference for red ties or happy faces, but I found it weirdly interesting seeing this stuff tested out.

    The scary thing is that no human could tell you why it does what it does now; it becomes a Black Mirror reflection.
     
  2. Arakel

    Arakel First Team

  3. a19tgg

    a19tgg First Team

    I need to watch that. Brexit: The Uncivil War also highlights how scary that stuff is.

    I also found this article from a year or so back interesting: https://www.google.co.uk/amp/s/www....day/chris-hughes-facebook-zuckerberg.amp.html
     
  4. Ybotcoombes

    Ybotcoombes Justworkedouthowtochange

    The Great Hack, a Netflix documentary about Facebook and Cambridge Analytica, was really interesting. We've all heard about how they influenced the American election, but the documentary shows how they had been experimenting in different countries and explains the tactics they used to influence elections. For instance, if you can't persuade somebody to vote for your candidate, try persuading them that the whole political system is pointless so they don't bother voting.
     
    a19tgg likes this.
  5. wfcmoog

    wfcmoog Tinpot

    If you have a working knowledge of machine learning, it's pretty simple. ML spots patterns in the data fed into it and repeats them.

    If, for example, it looks at millions of occasions where people looked at the white person over a black person, then whatever the underlying reason might be, it will replicate that behaviour.

    The challenge with ML is always the opacity of its decision making. It doesn't tell you why it does what it does.

    One good example, I think, is Amazon's trial of ML for CV screening. It turned out that because Amazon had fed in data from a period when sexist hiring practices were common, the AI picked up on this and replicated it.

    Garbage data in, garbage decisions out.
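    The CV-screening story above can be shown with a toy sketch. The data and the "model" here are entirely made up for illustration (nothing to do with Amazon's actual system): a naive frequency model that does nothing but copy the historical hire rate, yet two equally qualified candidates get different answers because the history was biased.

    ```python
    # Made-up hiring history: all candidates equally qualified, but past
    # decisions favoured one gender. The "model" will learn exactly that.
    history = [
        {"gender": "m", "qualified": True, "hired": True},
        {"gender": "m", "qualified": True, "hired": True},
        {"gender": "m", "qualified": True, "hired": True},
        {"gender": "m", "qualified": True, "hired": False},
        {"gender": "f", "qualified": True, "hired": True},
        {"gender": "f", "qualified": True, "hired": False},
        {"gender": "f", "qualified": True, "hired": False},
        {"gender": "f", "qualified": True, "hired": False},
    ]

    def train(records):
        """Crude 'model': historical hire rate per gender among the qualified."""
        counts = {}
        for r in records:
            if r["qualified"]:
                hired, total = counts.get(r["gender"], (0, 0))
                counts[r["gender"]] = (hired + int(r["hired"]), total + 1)
        return {g: hired / total for g, (hired, total) in counts.items()}

    def predict(model, gender):
        # Recommend hiring if the historical hire rate for that group was >= 50%.
        return model[gender] >= 0.5

    model = train(history)
    # Equally qualified candidates, different recommendations: the bias in the
    # training data comes straight back out of the model.
    print(predict(model, "m"))  # True
    print(predict(model, "f"))  # False
    ```

    No malice anywhere in the code; the skew lives entirely in the data it was trained on, which is the whole point.
    
    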

    On the flip side, there are myriad very good uses for this tech. For example, if you feed in hundreds of thousands of cancer scans, the tech can spot the tiniest anomalies, missed by humans, that historically resulted in the patient being diagnosed with cancer. There might still be false positives, but with cancer screening it's always better safe than sorry, and if you can spot even a few cases earlier than a human could, you can save lives.
     
    Ybotcoombes likes this.
  6. Ybotcoombes

    Ybotcoombes Justworkedouthowtochange

    Far more interesting than the failings of our football team. I'd never come across the above example, but it does show how the power of machine learning can go horribly wrong. It's a bit like the Microsoft chatbot that after a while turned racist and started swearing.
     
