A Quick Recap…
Towards the end of last week and at the beginning of this week, I started to experiment with some Instagram posts taken from Instagram’s hashtag data. I decided upon a particular hashtag to examine; in this instance it was… #audi. I then scoured through several posts attached to that hashtag and extracted a collection of images that appeared to mimic and adhere to a specific or similar compositional style, appearance or aesthetic, editing these together so that they match in particular areas of the composition; this works to draw simultaneous attention to both the similarities shared and the contrasts present in each image. I have consciously left in the section identifying the original poster so as to make visible each image’s origin, in addition to the platform from which it was taken. Lastly, I have layered the images over one another, experimenting with scale, proportion and opacity so as to draw attention to their shared characteristics and to the layering/montage that has taken place. I feel that these images work well to make visible the similarities that exist within a particular data collection on Instagram. Ultimately, I am attempting to argue that many of our consumption practices today are widely visible and available for misuse or exploitation within the public domain, whilst also drawing attention to the performative behaviours and mimicry that take place daily on social networking sites.
- Increasingly, we are using commodity ownership to display personal aspects or elements that comprise our identity, wealth, class or taste. [This provides a contemporary view on Celia Lury’s argument that possession rituals act as ways to display taste, class or identity.]
- We use consumption as a social tool with which to fit in, categorise ourselves or relate to others, carving out personal identity through the things we buy, own or possess, and consciously choosing to share that publicly through the use of categorisation and identification metadata. [Hashtags, etc.]
- Many of the posts featured under a particular category or hashtag adopt and mimic the same standardised visual cues and aesthetics as a means of identification or standardisation, whether that be to legitimise the use of the metadata or to mimic an appearance that claims ownership of, and legitimacy within, a particular class, lifestyle choice or behaviour.
- Every day, millions of people make conscious decisions when posting information online [images in particular]: selecting an angle, lighting, post-production adjustments such as contrast, fade and shadow, a filter, and the hashtags with which to categorise or identify. My original idea was to examine the performance of social media posts (specifically on Instagram, due to its popularity and aesthetics) in relation to, and as, a contemporary still life, because of the subtle symbolism present within each post or image. [E.g. a woman in her early twenties posts a POV image of herself sitting in an Audi; her well-manicured and jewelled hands gently caress the suede steering wheel as she posts the image under the metadata #Audi, #Richgirl or #classy. It could be interpreted that she is outwardly displaying a digital profile of a successful, affluent and savvy female with a meticulous attention to detail, picking and editing the ‘right’ composition, filter and touch-ups as a statement of her own identity, whether online or off.] I am critiquing a visual language…
- Drawing attention to the pervasiveness of data: many of the things we post online in a mundane and unconscious way can often be used to market, sell or tailor particular products, lifestyles or experiences through big-data corporations and targeted advertising. I may consider taking original posts, found publicly on social media sites such as Instagram, and re-creating them using a DSLR camera so as to draw attention to the aesthetics, style and appearance of those images, constructing a multi-layered critique of contemporary social practices, big data, ‘hipstergram’ performances and the appearance of the digital self.
- I am examining the visual cues, characteristics and patterns that are found under particular hashtags that are typically associated with consumption behaviours and narcissism.
Computer-Based Image Categorisation & Algorithmic Image Recognition Software (APIs)
In relation to my analysis of big data, its uses, consequences and limitations, I am also largely interested in how computer-based neural networks can be used to categorise and understand existing images. Google Inc. runs one of the largest image recognition APIs, used to fulfil seemingly mundane tasks such as Google Images searches. It is a very complex system built on the premise of a model learning through multiple and varied exposures to a particular type of thing. [E.g. if the word ‘beagle’ was typed into Google Images, the first 100 times data was generated the system might mistake other dogs for a beagle, whereas after 1,000 times the accuracy of this categorisation system would increase and it would make fewer mistakes…] It is important to note that this is an incredibly complex system still in constant development even today, yet as time progresses it is becoming more accurate. It is all attached to cloud-based neural systems that store massive amounts of data, with the software having to differentiate between images recognised under key phrases such as ‘Dog’, ‘Puppy’, ‘Fur’, ‘Small Breed’, ‘Brown’, ‘Smooth Coat’, ‘Shorthair’, ‘Dachshund’, ‘Beagle’, etc. It constantly has to narrow down and become more precise in order to accurately fulfil such image requests.
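To make the idea above concrete, the kind of label categorisation described is exposed by Google’s Cloud Vision API as a simple web request. The sketch below builds the JSON body for a LABEL_DETECTION request; the endpoint and payload shape follow Google’s public documentation, but the image URL is a made-up placeholder and you would need your own API key to actually send it.

```python
import json

# Public REST endpoint for Google's Cloud Vision API.
VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"


def build_label_request(image_uri: str, max_results: int = 10) -> dict:
    """Build the JSON body asking Cloud Vision to label one image."""
    return {
        "requests": [
            {
                "image": {"source": {"imageUri": image_uri}},
                "features": [
                    {"type": "LABEL_DETECTION", "maxResults": max_results}
                ],
            }
        ]
    }


if __name__ == "__main__":
    # Hypothetical image URL, used only to show the request's shape.
    body = build_label_request("https://example.com/beagle.jpg")
    print(json.dumps(body, indent=2))
    # To send it for real, POST this body to
    # VISION_ENDPOINT + "?key=YOUR_API_KEY"; the response lists labels
    # such as "Dog" or "Beagle", each with a confidence score.
```

The response’s ranked labels and scores are exactly the kind of narrowing-down described above: the system commits to ‘Beagle’ only when its confidence outweighs broader labels like ‘Dog’ or ‘Small Breed’.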
Introduction to ‘Convolutional Neural Networks and Image Recognition’
Google Cloud Platform:
“These Are What the Google Artificial Intelligence’s Dreams Look Like.”
“Google’s artificial neural networks (ANNs) are stacked layers of artificial neurons (run on computers) used to process Google Images. To understand how computers dream, we first need to understand how they learn.
In basic terms, Google’s programmers teach an ANN what a fork is by showing it millions of pictures of forks, and designating that each one is what a fork looks like. Each of the network’s 10-30 layers extracts progressively more complex information from the picture, from edges to shapes to finally the idea of a fork. Eventually, the neural network understands a fork has a handle and two to four tines, and if there are any errors, the team corrects what the computer is misreading and tries again.
The Google team realized that the same process used to discern images could be used to generate images as well. The logic holds: if you know what a fork looks like, you can ostensibly draw a fork.”
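The show-correct-repeat training loop that the quotation describes can be sketched in miniature. The example below is a single perceptron rather than a deep network, and the two ‘fork’ features (imagine handle length and tine count, scaled to 0–1) are invented purely for the sketch; but it shows the same principle, and the same effect noted earlier, that accuracy improves with the number of labelled exposures.

```python
import random

random.seed(0)


def make_example():
    """Return (features, label): label 1 = 'fork', 0 = 'not a fork'."""
    if random.random() < 0.5:
        return [random.uniform(0.6, 1.0), random.uniform(0.6, 1.0)], 1
    return [random.uniform(0.0, 0.4), random.uniform(0.0, 0.4)], 0


def train(n_examples):
    """Show the model n labelled examples, correcting each mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(n_examples):
        x, y = make_example()
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred  # the "team corrects the misreading" step
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        b += 0.1 * err
    return w, b


def accuracy(w, b, n_test=500):
    """Score the trained model on fresh, unseen examples."""
    correct = 0
    for _ in range(n_test):
        x, y = make_example()
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        correct += (pred == y)
    return correct / n_test


for n in (10, 100, 1000):
    w, b = train(n)
    print(n, "exposures -> accuracy", accuracy(w, b))
```

A deep network does this with millions of images and many layers of weights instead of two numbers, but the loop is the same: predict, compare against the label, nudge the weights, repeat.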
“…even when shown millions of photos, the computer couldn’t come up with a perfect Platonic form of an object. For instance, when asked to create a dumbbell, the computer depicted long, stringy arm-things stretching from the dumbbell shapes. Arms were often found in pictures of dumbbells, so the computer thought that sometimes dumbbells had arms.” (Gershgorn, 2015)
“Researchers then set the picture the network produced as the new picture to process, creating an iterative process with a small zoom each time, and soon the network began to create an “endless stream of new impressions.” When started with white noise, the network would produce images purely of its own design. They call these images the neural network’s “dreams,” completely original representations of a computer’s mind, derived from real world objects.”
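The feedback loop the quotation describes — feed the output back in as the next input, amplifying whatever the system responds to — can be shown with a deliberately tiny stand-in. In a real DeepDream run the ‘activation’ is a deep layer’s response to a whole image and the gradient comes from backpropagation; here the network is replaced by a one-variable function, chosen only to make the shape of the loop visible.

```python
def activation(x):
    # Stand-in for "how strongly the network responds to the input".
    # Peaks at x = 3.0, a value chosen arbitrarily for the toy.
    return -(x - 3.0) ** 2


def gradient(x):
    # d(activation)/dx, worked out by hand for the toy function.
    return -2.0 * (x - 3.0)


x = 0.0  # start from a "blank" input (cf. starting from white noise)
for step in range(50):
    x = x + 0.1 * gradient(x)  # the output of one pass becomes the next input

print(round(x, 3))  # → 3.0: the input has drifted toward what maximises activation
```

Run on an image with a real network, the same loop drags the pixels toward whatever the layers already ‘see’ in them, which is why dog faces and eyes bloom out of clouds and noise.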
Deep Dream Generator – https://deepdreamgenerator.com/
CMYR.net | Interesting JPG
Pinning Down Ideas / Thoughts
“This is what happened when I Instagrammed the worst parts of my day for a week”
“Social Media: Practices of (In) Visibility in Contemporary Art”
“According To Social Media, I’m A S**T Photographer And So Are You. Really?”
“The Toxic Sublime: Landscape Photography and Data Visualization.”
Google Inc. (2018) Cloud Vision API. Available from: https://cloud.google.com/vision/ [Accessed 21 February 2018]
Gershgorn, D. (2015) These Are What the Google Artificial Intelligence’s Dreams Look Like. Popular Science [Online]. 19 June. Available from: https://www.popsci.com/these-are-what-google-artificial-intelligences-dreams-look [Accessed 21 February 2018]
Hern, A. (2015) Yes, androids do dream of electric sheep. The Guardian [Online]. 18 June. Available from: https://www.theguardian.com/technology/2015/jun/18/google-image-recognition-neural-network-androids-dream-electric-sheep [Accessed 21 February 2018]
Deep Dream Generator (2018) Our Gallery. Available from: https://deepdreamgenerator.com/#gallery [Accessed 21 February 2018]
CMYR.net (2015) Interesting JPG. [Online]. Available from: http://www.cmyr.net/work/interesting-jpg.html [Accessed 21 February 2018]
Copan, L. (2017) This is what happened when I Instagrammed the worst parts of my day for a week. Cosmopolitan [Online]. 10 July. Available from: https://www.cosmopolitan.com/uk/worklife/a9570791/i-instagrammed-worst-parts-of-my-day/ [Accessed 21 February 2018]
Lütticken, S. (2015) Social Media: Practices of (In)Visibility in Contemporary Art. Afterall [Online]. Vol. 40. pp. 5-19 [Accessed 25 February 2018]
Bridge, M. (2016) According To Social Media, I’m A S**T Photographer And So Are You. Really? SLR Lounge [Online]. 2 June. Available from: https://www.slrlounge.com/according-social-media-im-st-photographer-really/ [Accessed 22 February 2018]
Kane, C. (2018) The Toxic Sublime: Landscape Photography and Data Visualization. Theory, Culture & Society [Online]. Vol. 0 (0). pp. 1-27 [Accessed 22 February 2018]
Phototrails (2018) Instagram Cities. Available from: http://phototrails.net/instagram-cities/ [Accessed 21 February 2018]
Digital Thought Facility (2014) SelfieCity. Available from: http://selfiecity.net/ [Accessed 21 February 2018]
Heleneinbetween (2017) How to Create an Instagram Theme (and Why You Should). HeleneInBetween [Blog]. Available from: https://heleneinbetween.com/2017/03/create-instagram-theme.html#comment-3190791123 [Accessed 21 February 2018]
Google Inc. (2018) Google Cloud Big Data and Machine Learning. Google Cloud Platform [Blog]. Available from: https://cloud.google.com/blog/big-data/ [Accessed 21 February 2018]
Gershenson, C. (2003) Artificial Neural Networks for Beginners. arXiv: Neural and Evolutionary Computing [Preprint]. Available from: https://arxiv.org/abs/cs/0308031 [Accessed 22 February 2018]