Experiment 20 — Word Embedding and Code Generation


So I was not going to do an experiment today; I am preparing a presentation for a paper about how tarot is like AI.

But I still did an experiment!

I have been doing a lot of work with, and thinking about, word embeddings.

Word embeddings represent words as lists of numbers: vectors. These vectors capture relationships between words. The relationships are not based on the meanings of the words, though, only on how the words relate to other words in terms of co-occurrence frequency.
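To make this concrete, here is a minimal sketch with made-up toy vectors (real embeddings have hundreds of dimensions and are learned from text). Words that show up in similar contexts get similar vectors, which you can check with cosine similarity:

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- these numbers are invented for
# illustration; real models learn 100-300+ dimensional vectors.
embeddings = {
    "cat": np.array([0.9, 0.8, 0.1, 0.0]),
    "dog": np.array([0.8, 0.9, 0.2, 0.1]),
    "car": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    # Measures the angle between two vectors: 1.0 means same direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "cat" and "dog" point in similar directions; "cat" and "car" do not.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))
print(cosine_similarity(embeddings["cat"], embeddings["car"]))
```

All the "meaning" a model sees is geometry like this: which vectors sit near which other vectors.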

I call this a flat ontology, because there are no different types or categories of words. All words are the same kind of thing; they just exist in different orders and relations, and it is this order and relation that creates meaning for a machine learning engine.

Anyway, I asked ChatGPT to generate different simulations of word embeddings. None of these are very good, but I did learn some things.

I learned about working with pretrained word embedding models in Python, and about visualizing them with TSNE from sklearn.manifold (t-SNE is a dimensionality-reduction technique for plotting high-dimensional vectors in 2D, not itself an embedding model).
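Here is a small sketch of that t-SNE step. The vectors below are random stand-ins for real embeddings (a real run would load pretrained vectors such as GloVe or word2vec), but the call shape is the same:

```python
import numpy as np
from sklearn.manifold import TSNE

# Six random 10-dimensional vectors standing in for word embeddings.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(6, 10))
words = ["cat", "dog", "fish", "car", "bus", "train"]

# t-SNE squashes the high-dimensional vectors down to 2D for plotting.
# Note: perplexity must be smaller than the number of samples.
tsne = TSNE(n_components=2, perplexity=2, random_state=0, init="random")
coords = tsne.fit_transform(vectors)

for word, (x, y) in zip(words, coords):
    print(f"{word}: ({x:.2f}, {y:.2f})")
```

The 2D coordinates can then be scattered on a plot, with each point labeled by its word.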

I also tried to generate some animated SVG visualizations. These were also pretty bad.
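For what it's worth, an animated SVG of an embedding point can be hand-rolled as a string. This is a minimal sketch with made-up coordinates (a real version would plug in the t-SNE output); the function name and values here are hypothetical:

```python
# Build an SVG with a circle (standing in for one word's 2D embedding
# point) drifting between two positions via SMIL <animate> tags.
def animated_point_svg(x1, y1, x2, y2, label):
    return f"""<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <circle r="5" fill="steelblue">
    <animate attributeName="cx" values="{x1};{x2};{x1}" dur="3s" repeatCount="indefinite"/>
    <animate attributeName="cy" values="{y1};{y2};{y1}" dur="3s" repeatCount="indefinite"/>
  </circle>
  <text x="10" y="20">{label}</text>
</svg>"""

svg = animated_point_svg(40, 40, 160, 120, "cat")
with open("embedding.svg", "w") as f:
    f.write(svg)
```

Opening the file in a browser shows the point looping between the two positions.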


Check out the code.


Word Embeddings