Adversarial Attack on Human Vision

  • Shame on me for wasting time on YouTube again: I was browsing through random videos and saw this
  • Human vision could very much be hacked, just like an AI's.
    • For the given example, though, my hunch is that the picture is altered more heavily than in the computer-vision adversarial-attack example (I could totally be wrong about this, as I did not do an objective, verifiable calculation; a sketch of such a calculation follows this list).
    • Based on my perception of the picture (again, I could totally be wrong), the noise they used changed several contour features of the original "cat" picture, and that is only one example; I would probably need to see the whole dataset compared against the AI test.
  • The original paper is here: Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
    • So the main takeaway of the study was not fooling humans; it is that the contrast between time-limited humans (fooled) and humans with no time limit (robust) suggests there is potential for AI to become more robust as well.
    • A previous example of AI being vulnerable to adversarial attack is cited as Ref 13 on the same site: EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES, "A demonstration of fast adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet", which shows a panda image being wrongly recognized as a gibbon. However, I would argue this is only possible when the attacker can rewrite the digital input directly at very fine granularity, since 255 * 0.007 = 1.785: I doubt such fine grain could be picked up by a real camera, and even if it were, it would easily drown in the sensor's own noise. Even if the perturbation accidentally survived in several frames out of hundreds, in a real application the system could use statistical methods to make the output more robust (also, I could be wrong here), or the subtlety of the modification is exaggerated; a rough simulation of this noise argument also follows this list.
    • Nevertheless, it points out a flaw in the algorithm so that it can be improved upon.
  • There have also been attempts to fool face recognition technology
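On the "altered more than the AI example" hunch above: here is a minimal sketch of the kind of objective check I had in mind, computing simple pixel-difference norms between an original and a perturbed image. The file names are placeholders (not files from the paper), and NumPy/Pillow are assumed to be available.

```python
# Minimal sketch of an objective "how much was the image altered" check.
# The file names below are placeholders; any equally sized original/perturbed
# image pair would do.
import numpy as np
from PIL import Image

def perturbation_stats(original_path: str, perturbed_path: str) -> dict:
    """Return simple norms of the pixel-wise difference between two images (0-255 scale)."""
    orig = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.float64)
    pert = np.asarray(Image.open(perturbed_path).convert("RGB"), dtype=np.float64)
    diff = pert - orig
    return {
        "L_inf": float(np.abs(diff).max()),         # largest change to any single value
        "RMS": float(np.sqrt((diff ** 2).mean())),  # root-mean-square change
        "mean_abs": float(np.abs(diff).mean()),     # average absolute change
    }

if __name__ == "__main__":
    # For reference, the FGSM panda example stays within L_inf of roughly 1.8 on this scale.
    print(perturbation_stats("cat_original.png", "cat_perturbed.png"))
```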
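And a rough back-of-the-envelope sketch of the sensor-noise argument above. FGSM (the method in EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES) moves each pixel by at most eps in the direction of the gradient's sign, i.e. about 255 × 0.007 ≈ 1.8 digital numbers here. The per-frame noise level (sigma = 3 DN) and the pixel values below are assumed, purely illustrative numbers; this only compares magnitudes and does not attack any real classifier.

```python
# Back-of-the-envelope sketch: how big is an FGSM-sized nudge compared with
# plausible per-frame camera sensor noise? Sigma and pixel values are assumed,
# illustrative numbers, not measurements.
import numpy as np

EPS = 0.007                      # FGSM step size quoted for the panda/gibbon figure
NUDGE = EPS * 255.0              # ~1.785 DN: the most any single pixel is changed

rng = np.random.default_rng(0)
n_frames = 1000                  # pretend we capture the same scene many times
sigma = 3.0                      # assumed per-frame sensor noise, in digital numbers (DN)

clean_pixel = 120.0              # some mid-grey pixel
adv_pixel = clean_pixel + NUDGE  # the same pixel after the adversarial nudge

# Simulate repeated captures: add Gaussian sensor noise, then quantize to 8-bit
# integers the way a real camera pipeline would.
clean_frames = np.clip(np.rint(clean_pixel + rng.normal(0, sigma, n_frames)), 0, 255)
adv_frames   = np.clip(np.rint(adv_pixel   + rng.normal(0, sigma, n_frames)), 0, 255)

print(f"intended nudge        : {NUDGE:.3f} DN")
print(f"per-frame noise sigma : {sigma:.1f} DN (assumed)")
print(f"clean pixel captures  : mean {clean_frames.mean():.2f}, std {clean_frames.std():.2f}")
print(f"adv   pixel captures  : mean {adv_frames.mean():.2f}, std {adv_frames.std():.2f}")
# In any single frame the ~1.8 DN nudge sits well inside one standard deviation of
# the noise; only a statistic over many frames (here, the mean) reveals the shift.
```

The flip side, which is part of why I hedge the claim, is that a statistic over enough frames can still recover such a small shift, as the per-pixel means above show.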