In her article, Elizabeth Dwoskin mentions, "In the type of image-tagging programs used by Google and others, software learns to distinguish people in photos by finding common patterns in millions of images of people."
Given that a computer's capabilities are limited to what it is told to do through its coding and training, how can we expect facial detection to be completely unbiased? Since there is so much diversity among humans, along with similarity within certain groups, it only makes sense that image detection reduces every face to a certain pattern. In fact, Facebook's image detection feature has reduced me to a pattern that matches one of my friends. Every time I post a photo of myself, Facebook automatically thinks it's my friend. So, would that be considered a bias in the coding of Facebook's image detection software, or just a coincidental similarity in facial features?
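To picture how that kind of mix-up can happen, here is a minimal sketch (not Facebook's actual system) of threshold-based photo tagging: each known face is stored as a pattern of numbers, and a new photo is tagged as whichever stored pattern it most resembles. The names and vectors below are made up for illustration; the point is that when two people's patterns sit close together, the tagger can confidently pick the wrong person.

```python
import numpy as np

# Hypothetical face "embeddings": in real systems these come from a network
# trained on millions of photos; here they are just made-up numbers.
known_faces = {
    "me":        np.array([0.52, 0.31, 0.78, 0.15]),
    "my_friend": np.array([0.50, 0.33, 0.80, 0.14]),  # very close to "me"
    "stranger":  np.array([0.10, 0.90, 0.05, 0.60]),
}

def cosine_similarity(a, b):
    # How alike two face patterns are (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tag_photo(new_embedding, threshold=0.95):
    # Tag the photo as whichever known face is most similar,
    # as long as the match clears the confidence threshold.
    best_name, best_score = max(
        ((name, cosine_similarity(new_embedding, emb))
         for name, emb in known_faces.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_score >= threshold else "unknown"

# A new photo of "me" that happens to land slightly closer to my friend's
# stored pattern -- the tagger confidently suggests the wrong person.
new_photo = np.array([0.50, 0.34, 0.81, 0.13])
print(tag_photo(new_photo))  # prints "my_friend"
```

Whether you call that outcome a coding bias or a coincidence, the software has no way to know the difference: it only sees which pattern is numerically closest.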
Lastly, if social bias can creep into your job search results, it's only natural that it will show up in your online buying behavior as well. Think about it: when we're on Facebook, we see advertisements for products from all kinds of shops, which naturally nudges us to look at those products in depth and check their ratings, and most of the time those ratings are 5/5. How can we be sure the ratings weren't planted by marketers or hired reviewers to push us toward buying the product? That in itself is a bias we unconsciously overlook because we're so taken by the appeal of the product.