Transcreation –
What is translation?
a) We upload an image
b) There is a classification ML engine that looks at the image and gives back a list of labels and weights
This is the first translation – as Aya says, it is Aristotelian – how do we translate an image into language? With machine learning we create labels with numerical weights that represent probability and/or image percentage (like 20% is a dog)
For version 1 we are just taking the highest-ranking label and using it to generate 6 images, because that is easier at the moment (see the classification sketch below). We are translating the Python scripts into JavaScript so we can run everything in the browser, where it will be faster.
d) The project itself is a process of translation: from the BigGAN white paper into the mathematics of linear algebra, into Python code, and then we are translating that into JavaScript to use on the web.
Another version that involves more JavaScript ‘translation’ could use more labels and use them to generate a mix of images or combined images.
e) What is a GAN – a GAN is a machine learning algorithm that trains a model on a particular set of input data and then generates new output data. BigGAN, for example, is faster and can manage larger datasets.
f) We take the label that we found from the image and use it to create 6 result images from the GAN. We apply a noise function that makes each generated image slightly different (see the generation sketch below).
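A minimal sketch of what the classification step (image → labels with weights, then keep the top label) might look like in the Python version – assuming a pretrained torchvision classifier stands in for the project's actual engine; the model choice, file name, and the "dog" example are illustrative:

```python
# Sketch only: a pretrained classifier as a stand-in for the project's ML engine.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for the pretrained model.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)
model.eval()

def classify(image_path, top_k=5):
    """Return the top_k (label_index, probability) pairs for an image."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)
    probs = torch.softmax(logits[0], dim=0)   # weights as probabilities
    top = torch.topk(probs, top_k)
    return list(zip(top.indices.tolist(), top.values.tolist()))

# Version 1 only keeps the highest-ranking label.
labels = classify("uploaded_image.jpg")
top_label, top_weight = labels[0]
print(top_label, top_weight)  # e.g. a dog class with weight ~0.20 ("20% is a dog")
```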
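And a sketch of the "one label → 6 slightly different images" step, assuming the pytorch-pretrained-BigGAN package rather than the project's own scripts; the label, truncation value, and file names are illustrative. Each image differs only because it gets its own noise vector:

```python
# Sketch only: class-conditional BigGAN sampling with fresh noise per image.
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

model = BigGAN.from_pretrained('biggan-deep-256')

def generate_variations(label, n_images=6, truncation=0.4):
    """Generate n_images for one class label; a different noise vector per
    image is what makes each result slightly different."""
    class_vector = torch.from_numpy(
        one_hot_from_names([label] * n_images, batch_size=n_images))
    noise_vector = torch.from_numpy(
        truncated_noise_sample(truncation=truncation, batch_size=n_images))
    with torch.no_grad():
        images = model(noise_vector, class_vector, truncation)
    return images

# Use the single highest-ranking label from the classification step.
outputs = generate_variations('golden retriever')
save_as_images(outputs, file_name='result')  # writes result_0.png ... result_5.png
```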
We are deriving a seed word from a seed image. (This is Aya now channeling.) Originally we have image-based culture and then we have written culture – from images we get words – where do words come from?
Also in Yoga there is this notion of a seed mantra – an example is the Adi Mantra – and within that seed are all the teachings of yoga, or everything unfolds from that seed.
Poetry with images
Aya and I are using GANs in an alternative way. It is like the shift from perspective and representation to abstraction: painters kept using lines, but in a different way – not to represent the way something looks, but as lines in themselves. Here we are using the machine learning engine not to generate an image of the way the world is but to generate an impression of what words and images together feel like.