Experiment 20 — Word Embedding and Code Generation


So I was not going to do an experiment today. I am preparing a presentation for a paper on how tarot is like AI.

But I still did an experiment!

I have been doing a lot of work with, and thinking about, word embeddings.

Word embeddings are a way of representing words as lists of numbers, called vectors. These vectors capture the relationships between words. Those relationships, though, are not based on the meaning of the words, only on how often words appear near other words.
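To make that concrete, here is a minimal sketch of the idea using toy vectors (the numbers are invented for illustration, not taken from a real model): words whose vectors point in similar directions get treated as related.

```python
import numpy as np

# Toy 4-dimensional "embeddings": the numbers are made up for illustration.
cat = np.array([0.8, 0.1, 0.6, 0.2])
dog = np.array([0.7, 0.2, 0.5, 0.3])
car = np.array([0.1, 0.9, 0.2, 0.8])

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way; near 0 means unrelated.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(cat, dog))  # high: "cat" and "dog" appear in similar contexts
print(cosine_similarity(cat, car))  # lower: "cat" and "car" appear in different contexts
```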

I call this a flat ontology, because there are no different types of words, no categories. All words are the same kind of thing; they just exist in different orders and different relations, and it is this order and relation that creates meaning for a machine learning engine.

Anyway, I asked ChatGPT to generate different simulations for word embeddings. None of these are very good, but I did learn some things.

I learned about pretrained word embedding models in Python, and about tools for visualizing them, like TSNE from sklearn.manifold, which projects the high-dimensional vectors down to two dimensions so they can be plotted.
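Here is a minimal sketch of that workflow, assuming gensim's downloader and scikit-learn are installed (the word list and the small GloVe model are my choices here, not from the original experiment):

```python
import numpy as np
import matplotlib.pyplot as plt
import gensim.downloader as api
from sklearn.manifold import TSNE

# Load a small pretrained embedding model (about 66 MB on first download).
model = api.load("glove-wiki-gigaword-50")

words = ["king", "queen", "man", "woman", "cat", "dog", "paris", "london"]
vectors = np.array([model[w] for w in words])  # each row is a 50-d vector

# t-SNE squashes the 50 dimensions down to 2 for plotting.
# perplexity must be smaller than the number of samples.
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), word in zip(coords, words):
    plt.annotate(word, (x, y))
plt.show()
```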

I also tried to generate some animated SVG visualizations. These were also pretty bad.
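For what it's worth, here is a minimal sketch of one way to produce an animated SVG from Python, using SVG's built-in <animate> tags (the words and coordinates are made up; in practice they would come from something like the t-SNE output above):

```python
# Each word becomes a labeled circle that drifts sideways on a loop.
points = {"king": (40, 50), "queen": (60, 55), "cat": (120, 140)}

parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">']
for word, (x, y) in points.items():
    parts.append(
        f'<circle cx="{x}" cy="{y}" r="4">'
        f'<animate attributeName="cx" from="{x}" to="{x + 20}" '
        f'dur="2s" repeatCount="indefinite"/>'
        f'</circle>'
    )
    parts.append(f'<text x="{x + 6}" y="{y}" font-size="8">{word}</text>')
parts.append("</svg>")

with open("embeddings.svg", "w") as f:
    f.write("\n".join(parts))
```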

https://www.youtube.com/watch?v=VsJFnwcCuMw

Check out the code.

Tags: AI, Word Embeddings