bell hooks and python NLP


I have a pile of to-read books on the shelves by the door. Before I left the apartment for a weekend away, I grabbed a random book and it happened to be all about love by bell hooks. It fit perfectly in my jacket pocket and I started to read it in the car. Immediately the work resonated with me. The feeling of being lovable as a young child and then, suddenly, of being unlovable. Of spending your life looking for this love you once had, or to be the child that was once lovable, and not finding it. Then realizing that it is futile to go back or to recover this love, since it is lost forever. Instead, you must find a new love or a new way to be loved. I am not sure if this is even what bell hooks talks about or meant, but it is what I felt.

The book is non-fiction but it felt like poetry. So I figured I would write some electronic poetry. I know this is sort of a non sequitur when here I was talking about bell hooks. But I wondered, what makes this non-fiction instead of poetry? Is it arguments, is it language, is it marketing? What would it look like to take a work of non-fiction, or even of prose, and turn it into poetry?

One way to do this would be to train a model on a particular body of poetry, extract the language from a non-fiction or prose book, and use this lexicon to generate poetry according to the ML model. Another way would be to extract each sentence and then rewrite them as lines according to the ML model. Paragraphs could be converted into stanzas or something else. Again, this all depends on the poetry used to train the model.

Years ago I used the Python NLP library nltk to generate different poetic forms, such as sonnets or villanelles, from a corpus like Shakespeare or the Bible. I thought, what sort of interesting thing could I do with Python NLP? There are, for sure, a ton of boring, uninteresting and uninventive things I could do.

I went and did:

from nltk.book import *

And saw that Moby Dick was the first book included with nltk. I read it a long time ago. I also read Charles Olson's Call Me Ishmael during my weekly sonograms while pregnant with my second child, while stricken with a mild case of gestational diabetes. It is a work of poetic literary criticism centered on Moby Dick, and it is excellent and inspiring. Charles Olson is a character, a poet and teacher (perhaps one-time president) at Black Mountain College; he wrote a poetic epic on Worcester, MA that I own but have not finished. A few years back, I picked up an Olson bio from Canio's in Sag Harbor. It was a great read. I love reading poet biographies!

This is a roundabout way of saying that I want to use Moby Dick for my poetic experiments. In this current experiment I used all the texts included in nltk, but maybe eventually I'll move back and focus on Moby Dick.

The tools in chapter 1 of the online nltk book are: frequency distribution, word length, collocations and bigrams (i.e., words that often appear together). However, I have to do some weird stuff to get some of the functions to output to a list instead of stdout.
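
To get a feel for what these tools compute, here is a rough standard-library sketch. The toy corpus is a stand-in for an nltk text; nltk's own `FreqDist` and collocation helpers behave similarly but print to stdout, which is the "weird stuff" I have to work around.

```python
from collections import Counter

# A toy corpus standing in for a tokenized text (in nltk this
# would come from something like text1.tokens).
words = "the whale the sea the white whale of the sea".split()

# Frequency distribution: nltk's FreqDist is essentially a Counter.
freq = Counter(words)

# Bigrams: pairs of adjacent words.
bigrams = list(zip(words, words[1:]))
bigram_freq = Counter(bigrams)

# Word length distribution.
lengths = Counter(len(w) for w in words)

print(freq.most_common(2))         # most frequent words, as a list
print(bigram_freq.most_common(1))  # most frequent adjacent pair
```

Everything here returns a list or a Counter, so the results can feed straight into a poem generator instead of scrolling past on the terminal.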

The first chapter also focuses on the issues with translation and ambiguity.    For example:

The people were found by the searchers vs people were found by the afternoon 

This represents different senses of by. In Latin, a temporal sense would use a different preposition and would probably be in the accusative, I think; otherwise it would be in the ablative. But this ain't Latin, is it now!

My first poetic experiment creates poems from the included texts by alternating between high-frequency short words, low-frequency short words, and low-frequency long words. It is fun. This was one poem generated:

foul four woods
foul four woods
circumstances significance encountering, Nevertheless superstitious

four woods hanging
four advantage uncertain
accommodation circumstances respectable, inclination understanding

four second here
four second here
Philistines everlasting exceedingly, peradventure generations

four Until advantage
four advantage Western
contributed circumstances willingness, responsible remembering

woods second fingers
fingers here NOT
foul four Until
second here Three
four Until advantage
four advantage Western
contributed transactions Connecticut, complicated introduction

reliable music travel
music travel A
foul four marching
four yellow Does
considerable impatiently intellectual, extraordinary astonishment

A few things: I think this poem should just have four stanzas. I love the last line. I love the repetition of four. I added in the commas manually, but I think I have to consider punctuation in the generation of these poems. Also, I am interested in how this creates new poetic forms. Instead of rhyme and meter, iambic pentameter and whatnot, we are thinking in terms of statistics. What is a poetic form based on statistics? This is sort of interesting. Am I creating a poem, or a poetic form? Here is the first bit of code. I think even the source code is sort of poetic.
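
My original script isn't reproduced here, but a minimal sketch of the idea might look like the following. The cutoffs for "short," "long," "high frequency" and "low frequency" are my own stand-ins, and the toy corpus replaces the nltk texts.

```python
import random
from collections import Counter

def stanza(words, rng):
    """One three-line stanza: two lines of high-frequency short words,
    then a line of low-frequency long words."""
    ranked = [w for w, _ in Counter(words).most_common()]
    short = [w for w in ranked[:100] if len(w) <= 5]    # frequent and short
    longw = [w for w in ranked[-1000:] if len(w) >= 9]  # rare and long
    lines = [" ".join(rng.choice(short) for _ in range(3)) for _ in range(2)]
    lines.append(" ".join(rng.choice(longw) for _ in range(5)))
    return "\n".join(lines)

# Toy corpus; with nltk this would be something like text1.tokens.
corpus = ("foul four woods circumstances significance encountering "
          "nevertheless superstitious four woods hanging " * 5).split()
print(stanza(corpus, random.Random(0)))
```

The repetition in the generated poems falls out naturally: a small pool of frequent short words gets sampled over and over, which is exactly where "four" comes from.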

Finally, the NYTimes published this article about bell hooks and I highly recommend it.

NLP Python Day 2 – Caroline Bergvall


One of my favorite contemporary poems is Via by Caroline Bergvall. You can listen to it here, on ubuweb.

Via takes 48 translations of the first line of The Divine Comedy. I find it breathtakingly beautiful; listening to and reading this poem is good for my soul. I was thinking, what would it be like to make 48 first-line code re-interpretations of a text? Here we are thinking about algorithmic remapping instead of creative translation. It is not what the poet is saying but the materiality of the poem. What does this have to do with the poem, and what if we use this layer to create a new poem? It is not a translation but a remapping, a trans-mapping, or perhaps a transduction.

What if we looked at Moby Dick and generated 48 new lines? How would we do this? Would we generate the first line based on the whole corpus? This would not even look like a first line. Would we generate the first line based on the first line? This could get old real quick. Would we look at derivative works? Or could we look at pieces of criticism and use those? What about rearranging the first line, anagram style?

I ended up doing something pretty bogus: taking sent1 (which holds the first sentence of text1) and then iterating over its words to find similar words and similar contexts. This is the result.

have think say called in thought as will let tell me of that and saw
take see know to account
well all loomings me ll again i ye you him greenlanders the they an
significantly it the of and upon fishermen a mariners it should this
they my some him he me ashore a they them me the near all
him it them us you which all be queequeg ye that see say thee one this
her ahab and sea
upon at choked are in if to i with writing between and by hold
raises a it here for seek dam i startled look reads that pilot still
in an makes a tell such told a induced to turned to
what in it ahab did and hinted prophesy am bound see yet guess for had
except been to take goes
ha muttered i but on said man s yes the i was than can me some
thyself can now that go said now how i should here for been but
dear be hypo tell to are one bloody unlettered hope
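
nltk's `text.similar()` prints rather than returns, so a hand-rolled version of the idea (words that share a left/right context) might look like this sketch, run over a toy token list rather than Moby Dick:

```python
from collections import defaultdict

def similar_words(tokens, target):
    """Words appearing in the same (left, right) contexts as target --
    roughly what nltk's text.similar() does, but returning a list."""
    contexts = defaultdict(set)
    for left, word, right in zip(tokens, tokens[1:], tokens[2:]):
        contexts[(left, right)].add(word)
    shared = set()
    for words in contexts.values():
        if target in words:
            shared |= words - {target}
    return sorted(shared)

tokens = "the whale swam and the ship swam and the whale sang".split()
print(similar_words(tokens, "whale"))
```

On a real corpus the shared-context sets get big fast, which is why the output above reads like a word salad.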

It is not as poetic as Via. I think there is more I could do with this idea. I am going to do something related to computational linguistics and/or statistics. Chapter 2 is all about importing other corpora, from Jane Austen to Reuters. One of the interesting things is time series data and how the usage of a word changes over time. It would be interesting to do a version of the first line of the Divine Comedy in this way. A sort of evolution through time, rather than through translations (although this too is through time).

There is also a review of related tools: lexical relationships, foreign languages, stop words, lemmas (synonyms), and the WordNet hierarchy, which is a tree of words and their relations. For the poem above, I could probably do more with lexical relationships, but I also want to experiment more with the structure and, like I said before, stats!

Ooops – bug in day one of NLP Python


There was a bug in my code from the first day of NLP Python, which I realized when I started doing my next experiment.

The text used was only Moby Dick, but the frequency distributions came from the individual texts. So if you look at the poem you will see that the reason the stanzas are different is that the words were selected by frequency distributions from different corpora, BUT the words themselves are all from Moby Dick. This ended up working.

This new poem pulls both the new distributions and new texts from the different texts:

text1: Moby Dick by Herman Melville 1851
text2: Sense and Sensibility by Jane Austen 1811
text3: The Book of Genesis
text4: Inaugural Address Corpus
text5: Chat Corpus
text6: Monty Python and the Holy Grail
text7: Wall Street Journal
text8: Personals Corpus
text9: The Man Who Was Thursday by G . K . Chesterton 1908

It is not so different; maybe there is still a bug. It gets a bit weird in the middle, which I like. I am not so sure the last poem was a mistake so much as a happy accident. It is an interesting idea to explore: taking frequency distributions from other corpora and using them on different lexicons. Maybe I will explore that more.

foul four woods
foul four woods
circumstances significance encountering Nevertheless superstitious

four woods hanging
four looking eligible
endeavoured recommended Somersetshire respectable acquaintance

Leah four hath
Leah four hath
Peradventure circumcised peradventure buryingplace everlasting

four aegis Until
four looking Western
contributed circumstances accomplishment acquisition willingness

woods lord Elev
fingers here kids
#14-19teens ))))))))))))) #talkcity_adults Compliments )))))))))))

four Until Winter

Heights four railing
four Western regional
contributed standardized headquarters substantially acquisition

children outings Non
children Seeking to

yellow four Does
yellow four Does
information considerable impatiently intellectual astonishment

Cryptocurrency Trading Bot


A bazillion years ago, when I worked at Morgan Stanley, there was a big sign above the entrance and exit to the trading floor: “DID YOU MAKE MONEY TODAY?” If you are in finance, that is the only metric that matters.

This summer I wrote a paper trading bot to arbitrage bid/ask spreads across different crypto exchanges. Here is the git repo – it is in progress and there is one glaring bug so if you run it you will lose $$.

So let's talk a little bit about what is going on here and how these exchanges work.

Well, someone offers to buy 3 bitcoin at $100, someone else offers to buy .5 bitcoin at $105, and so forth. This builds one side of the order book, and the different quantities and prices represent the depth of the order book.

Then there is the other side of the order book: someone offers to sell 2 bitcoin at $200, another person offers to sell .5 bitcoin at $195.

The trade is made when someone else takes the other side. If I come in and say, yes! I will buy 2 bitcoin from you at $200. Bam, the trade is made and that order is removed from the book.

Here is an example json of the order book from hitbtc, an exchange where you can trade crypto.

Now bid/ask arbitrage is when you can lock in a profit: you buy 2 bitcoin at $200 and immediately sell them at $205. This will not happen on a single exchange, but it can happen across different exchanges, if you have a script that is constantly monitoring a bunch of currencies and a bunch of exchanges.
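
The scanning idea can be sketched like this. The exchange names and numbers are made up, and real books have depth, fees and latency that this ignores:

```python
def find_arb(books):
    """Scan top-of-book quotes from several (hypothetical) exchanges and
    report any pair where the best bid on one exceeds the best ask on
    another. Each book is {'bid': best bid price, 'ask': best ask price}."""
    opportunities = []
    for buy_on, b in books.items():       # buy at this exchange's ask
        for sell_on, s in books.items():  # sell at this exchange's bid
            if buy_on != sell_on and s["bid"] > b["ask"]:
                opportunities.append((buy_on, sell_on, s["bid"] - b["ask"]))
    return opportunities

# Illustrative numbers only: buy BTC at 200 on exchange A,
# sell at 205 on exchange B, locking in 5 per coin (before fees).
books = {"A": {"bid": 198, "ask": 200},
         "B": {"bid": 205, "ask": 207}}
print(find_arb(books))
```

In the real bot this loop runs continuously over live quotes; the profit only exists for as long as both quotes stand.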

This does not happen in regular exchanges for normal people like me, and probably you if you are reading this blog instead of something on your bloomberg terminal. These types of trades happen very quickly. For traditional exchanges, you want to be as close as physically possible to the exchange, and so forth.

For the crypto exchanges, there is no physical exchange, except perhaps for Gemini, which I think has a physical exchange. So there is no direct pipe to the exchange. The reason why you can arb in this way is because of the Risk.

So this is the other big idea in trading: RISK! What kind of risk are you taking, and what is your risk versus your reward? Theoretically, if you are executing the trade instantaneously, then what is the risk?

As Milton Friedman said: there ain't no such thing as a free lunch.

Ain't that the truth! When you are making all this money on this supposedly risk-free arbitrage you are still taking on risk, but what is that risk?

Well, in the case of these crypto exchanges you have exchange risk. The exchange may shut down, or may be hacked, and you can lose all your money. That is the big risk, and you are being compensated for it with the arbitrage trade.

There is also currency risk. If you trade your crypto into some sovereign currency (USD, EUR, etc), there is a high transaction cost and you will lose money. In order to make money you have to keep your money in crypto and maybe exchange into USD once a day. This means you have exposure to the directionality of the crypto. You may make money on each trade, but if crypto goes down one day it does not matter because you will have lost money relative to the dollar.  You can probably mitigate this with options, but it is something to consider.

Other things to consider are: the percentage the exchange takes on each trade; slippage, that is, being unable to execute at the price you want; and the case where the market moves and you can only execute one side of the trade, so you have to unwind it or keep the position on. But these are run-of-the-mill trading issues. There is nothing special about them being in crypto.

This is interesting. I have been looking into deep learning algorithms for predicting crypto prices but it is difficult because time series data is autocorrelated. Each tick or trade is based on the previous trade. I want to look into this aspect in greater detail. I am not interested in numerical methods.

Decentralized Protocols


This week I worked on some Scuttlebutt with Tenor and Ben. Although I am building the prayer project on Ethereum, itself a decentralized protocol, I never looked too much at the general landscape of decentralized protocols… until now. To get myself up to speed I listened to the DAT podcast and a Third Web podcast on Scuttlebutt. The second one was interesting: the founder of Scuttlebutt (sea slang for rumor) distinguishes between cyberspace (the space of communication) and cypherspace (the space of algorithms).

I downloaded Patchwork (a social network built on Scuttlebutt), then I ran a Patchwork server. I thought maybe I should build a prayer sb tool, and so I looked at this chat code. After listening to Francis' talk at Recurse, I read a bunch of papers and visited sites. I started going through the ProtoSchool exercises, which I did not know were made by Protocol Labs, which includes things like IPFS, Nomad and Raft. I downloaded Beaker, which uses Dat, not Scuttlebutt. AND I saw a fantastic animation of how Raft works; I suggest you watch it if you are interested.

The decentralized tech I was familiar with before was Blockstack. They have a great git repo, but looking at all these other initiatives has really opened my eyes.

A gossip protocol is a way to describe how peer-to-peer nodes share information. Nodes have a rule that determines which messages they believe; this rule is the consensus algorithm (I think). There is also no guarantee that every node will receive every message.
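
A toy simulation of the idea (no real networking; the node names and the "believe everything" rule are stand-ins):

```python
import random

def gossip_round(nodes, rng, fanout=2):
    """One round of gossip: each node forwards what it knows to a few
    random peers. Each peer applies its own rule to decide what to
    keep -- here the rule is trivially 'believe everything', which is
    exactly the place where a consensus layer would be stricter."""
    names = list(nodes)
    for name in names:
        for peer in rng.sample([n for n in names if n != name], fanout):
            nodes[peer] |= nodes[name]

nodes = {f"n{i}": set() for i in range(6)}
nodes["n0"].add("rumor")
rng = random.Random(1)
for _ in range(5):
    gossip_round(nodes, rng)

# Most nodes will have heard the rumor by now, but delivery is
# probabilistic, not guaranteed.
print(sum("rumor" in msgs for msgs in nodes.values()))
```

The interesting knob is the acceptance rule inside the loop: replace "keep everything" with a validity check and you start to get something consensus-shaped layered on top of plain gossip.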

So the question is, do we think that bitcoin and ethereum are gossip protocols??? According to the internet it is unclear. Gavin refers to bitcoin as using a gossip protocol. Other people do not. The bitcoin and ethereum protocols are heavier than just a gossip protocol; they have consensus built in. It seems that gossip is not about consensus. Scuttlebutt is not looking for universal consensus. Each node controls what truth is. But that consensus is layered on top of a gossip protocol. Gossip is not interested in agreeing with other nodes, but in making a decision for itself. But I am new to this so I may be off base. The difference appears to be with the topology of the network.

There is an idea of a blockchain-like system. Scuttlebutt uses Kappa Architecture, where all activity is appended immutably to a log file. However, it is not the log file that all nodes must agree on but the message … I think.

After going through Scuttlebutt, it seems like a useful architecture for lightweight decentralized communication and decision making, but not right for the prayer project. One of the things I am interested in with the prayer project is the notion of action, or execution, of a prayer. Scuttlebutt is about communication, and sure, we could build something on top to execute, but I like coopting the logic of exchange built into currency for another purpose.



One of the things I have been thinking about in relation to conscious computation is our relationship to prediction once we have probabilistic models. I have been engaged in an archaeology of prediction. That is, looking at old predictive systems to understand how we think about decision making and future events.

Geomancy is an ancient combinatorial prediction system based on making random marks, generally in the dirt. You ask a question: 'Should I buy that horse?' Make a bunch of random marks in the dirt, then count whether you have an even or odd number of marks. You do this 16 times to generate 4 figures of 4 rows (2 dots or 1 dot each): the mothers. From these you generate the rest of the chart (11 more figures of 4 rows).

The final figure is the judge, which ostensibly gives you the answer to your question. Yes, buy the horse! No, don't buy the horse! Unclear! You can further interpret the judge's answer by looking at the figures that gave rise to it, and at the figures that gave rise to those. There are elements (earth, fire, water, air) associated with each row of dots that can further help you interpret the answer to your question.
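
The combinatorics are easy to sketch in code. This is my own toy rendering of the procedure, not a full chart:

```python
import random

def make_figure(rng):
    """One geomantic figure: four rows, each a random number of marks
    reduced to parity -- 1 dot for odd, 2 dots for even."""
    return [1 if rng.randrange(1, 30) % 2 else 2 for _ in range(4)]

def combine(a, b):
    """Row-by-row mod-2 addition of two figures: the rule that derives
    nieces, witnesses, and finally the judge. Same rows give 2 (even),
    different rows give 1 (odd)."""
    return [1 if (x + y) % 2 else 2 for x, y in zip(a, b)]

rng = random.Random(7)
mothers = [make_figure(rng) for _ in range(4)]
# The daughters are the mothers read sideways (transposed).
daughters = [[m[i] for m in mothers] for i in range(4)]
print(mothers)
print(daughters)
```

From here the rest of the chart is just repeated applications of `combine` down to a single figure, the judge.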

A while ago I wrote a geomancy python script. It is super barebones. I may slap an elm front end on it and make it interactive in some way. A while ago I was using geomancy to create poems. Every day I would throw my D&D dice, ask a question, and generate an answer and then write a poem based on the answer.

I read this book (Ron Eglash's African Fractals: Modern Computing and Indigenous Design) thinking it might have something about geomancy, because he mentions geomancy in his TED talk. About one third of the way through he talks about African divination systems (which in the West became geomancy). Some of the things he mentions are: the stochastic process to generate the starter set, the possible solution set depending on the base items you use for generation, the binary or modulus-2 method of reducing a figure to 0 or 1 (or 1 or 2), and the notion of looping. After these systems were integrated into Western mysticism, Leibniz was inspired (according to the author) to develop a base-2 counting system, the predecessor of modern binary.

There is a discussion of this method and that of a pseudorandom number generator built from shift register circuits. With a shift register, the circuit takes the mod 2 of the last two bits in the register and discards the rest.
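
That reduce-and-shift move is easy to sketch. Here is a 4-bit register with the feedback described; the seed is an arbitrary choice of mine:

```python
def lfsr_stream(seed, steps):
    """A linear feedback shift register: the new bit is the mod-2 sum
    of the last two bits, everything shifts along, and the oldest bit
    is emitted and discarded -- the same reduce-and-loop move as the
    divination procedure."""
    state = list(seed)
    bits = []
    for _ in range(steps):
        new = (state[-1] + state[-2]) % 2  # mod 2 of the last two bits
        bits.append(state[-1])             # the discarded bit is the output
        state = [new] + state[:-1]         # shift, drop the rest
    return bits

# With this feedback, a 4-bit register cycles through all 15 nonzero
# states before repeating, so the output has period 15.
out = lfsr_stream([1, 0, 0, 1], 30)
print(out[:15])
```

A pile of pebbles reduced mod 2 and fed back into the next throw is doing essentially this computation by hand.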

Geomancy appeals to the computer scientist in me because it is computational and generative. But this sort of prediction belongs to a past age, perhaps not the age of modeling. Eglash draws a distinction between the mimetic and the model. What is the difference between a representational structure that is mimetic versus one that is modeled? Is this the difference between an image and an algorithm? Why is this important? What does it mean?

Zero Phone


I was having a conversation last night with a friend about Twilio. This was somewhat inspired by Austin's RC Project. Twilio is an API/service for sending and receiving text messages. It is very well designed and easy to use. For a while mobile commerce was really popular, and there were a bunch of services that sent you text messages about things you might like to purchase. I was actually going to start a business like this with my friend Juliet around children's book recommendations, but we never got it together. I subscribe to a service that sends me daily album recommendations via text, but I use it for discovery; I think I may have purchased one album from them in 4 years. They probably use Twilio.

I vaguely remember using Asterisk for telephony work a long time ago. It looks like it got a rebranding. Also, many many years ago, before smartphones, I built a collection of robots controlled by playing and listening to DTMF, that is, touch-tone sounds. Phones used to communicate in analog (via sound), hence phone phreaking.
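
For reference, a DTMF tone is just two sine waves summed, one for the key's row and one for its column on the keypad. The frequency table below is the standard one; the sample rate and duration are arbitrary choices of mine:

```python
import math

# Standard DTMF frequency pairs (Hz): row tone + column tone per key.
DTMF = {"1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
        "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
        "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
        "*": (941, 1209), "0": (941, 1336), "#": (941, 1477)}

def tone(key, seconds=0.1, rate=8000):
    """Samples of the two summed sine waves for one keypress --
    the actual sound a touch-tone phone puts on the line."""
    low, high = DTMF[key]
    return [0.5 * math.sin(2 * math.pi * low * t / rate)
            + 0.5 * math.sin(2 * math.pi * high * t / rate)
            for t in range(int(seconds * rate))]

samples = tone("5")
```

Writing these samples to a WAV file and playing them at a handset is all it takes to "dial" in analog, which is what my robots were listening for.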

But this got me thinking about phones. Phones are a total black box to me. I have built computers but I have never built a phone. I would love to be able to control everything on my phone the way I control everything on my computer (depending on the computer).

I remembered that on Zulip there was a thread about ZeroPhone and that this could satisfy my desire for an open source phone. I love the idea that the UI is in Python. What is it, Qt? Is anyone selling a preassembled ZeroPhone??

Apparently my son told his friend that I would build him a computer for his birthday (the friend, not my son). I am NOT going to do that; I am not sure where my son got that idea from. But I could build a phone; it's cheaper.



Unpacking rustwasm issue 163


This is the GitHub request:

Request for library: mpsc channels library built on top of the postMessage API #163

This could allow wasm to talk to js via CSP style message passing, different wasm instances to talk to each other without sharing memory, the wasm instances might even be in different threads by using web workers.

Would be awesome to see someone experiment with this by making a library!

During the Rust check-in this week I reviewed this request with the group. The consensus was that we need more information. Here I come up with a list of questions in order to better implement this.

Here is the JS postMessage call for your information. And I’m going to link to a helpful meditation on CSP and JS.

targetWindow.postMessage(message, targetOrigin, [transfer]);

So why do we need this and what does this mean?

  1. What are some of the potential use cases you see for this feature?
  2. What would implementing CSP allow you to do that you currently cannot do, or how would it be an improvement?
  3. Would this just be an implementation of postMessage for communication to WASM or would it involve something else?
  4. How does JS receive messages from WASM/Rust ?
  5. Why is this an improvement over or replacement for bindgen? (if we use channels to communicate do we no longer need wasm_bindgen)



Learning to ride does not mean learning to balance, it means learning not to unbalance, learning not to interfere. (Papert, Mindstorms, p. 159)

I was having a conversation with Tim at Recurse (SP1 '19) about conscious computation (or what a friend said I should rename relational computation, but that sounds too much like social networking). Tim did some work, perhaps an MA, in the Lifelong Kindergarten group at MIT and said that the ideas I had were very similar to those of Seymour Papert, the founder of Logo (the programming language where you make geometric shapes with a digital turtle).

I have had the book, Mindstorms, forever, but have not read it. So, I took a nice long bath with some epsom salts and a little cinnamon incense and read it.  At a later point I will do a blog post on ritual bathing 🙂

What is my idea behind conscious computation? It is an exploration of the ways that we can use computers to change our modes of consciousness, whether that is through meditation or virtual reality. On a higher level, all tools change consciousness. The telescope and the microscope changed our perceptions and what we imagined that we could perceive, and this directly changes consciousness. What interests me about computers is the world-building capacity of computers and programming. It is not just altering consciousness, but creating new modes or frameworks of consciousness. What is consciousness? It is awareness. We have a binary right now between the conscious and the unconscious. The conscious is what we are aware of, while the unconscious is that which we are unaware of.

But I prefer to think of consciousness as a framework for describing the relationship between thinking and action (perhaps the mind and action, and then the mind and the body). In this way, the unconscious is a framework of thinking and acting in which we are unaware of the reasons for our actions, whereas the conscious is a framework of acting where there is deliberate reasoning between our thoughts and actions. But are these the only two relations between thinking and acting: awareness and unawareness? What are the different ways that we can be aware; what are the different ways we can formulate thoughts or desires? I can feel a pang in my stomach that triggers my desire for food. This is not a mental action. I can drink a cup of chamomile tea and feel a sense of calm; my mind will reason differently in this state than were I to drink a cup of coffee or mate or a shot(s) of tequila.

In Mindstorms, Papert is interested in:

talking about how computers may affect the way people think and learn. …noting a distinction between the two ways computers might enhance thinking and change patterns of access to knowledge.  (Papert Mindstorms p.3)

This is an epistemological approach: how do we know things? And he is influenced by Piaget, an epistemologist who studied how children know. Piaget is not interested in the truth (or validity) of knowledge but in the growth and acquisition of knowledge. One of my takeaways from the book is that computers move us away from a true/false, right/wrong dichotomy.

A program is not wrong but buggy, and the goal is to debug. A student or learner is constantly improving code by removing bugs.

Computers introduce serendipity and creation into learning. I might set out to draw a square using a computer program but draw a diamond instead. I may learn something new in debugging this, as I turn the diamond into a square, and I may create something new that I did not anticipate. The computer is about learning through doing, not learning through memorization or drills, which close off this capacity to create and learn something accidentally.

There is also the idea of building microworlds, such as the world of Logo and the turtle, to learn fundamental concepts. Learning is accomplished within the framework of exploring a world, and the goal is to create the correct world to learn a particular thing. This idea is brought up in the context of prerequisites. Students do not need a prerequisite to study Shakespeare because language acquisition is part of our culture, in the domain of world building (since culture is worldbuilding). However, to study linear algebra we must already know basic algebra. This is a prerequisite, since it is not learned in our culture. A solution would be microworlds that would allow people to learn prerequisite concepts and integrate them into our culture.

Finally there is a theory of knowledge acquisition itself, the bricolage theory. This is the idea that we learn things by thinking and acting. Learning is recursive, a dance between these two modes, and they compound: the more you learn, the more you can learn.

Towards the end of the book there is a discussion of Poincaré vs Freud. Poincaré argues that the mathematical mind is not logical but aesthetic. I can appreciate this. Good mathematics, like a good program, is not only correct but elegant. These are two minds, the logical mind and the aesthetic mind, which Papert contrasts with Freud's notion of the conscious and unconscious mind. For Poincaré, the unconscious presents aesthetically pleasing options to the conscious mind to prove. Whereas, for Freud, the unconscious is primarily the site of sexual repression or prelogical thought, causing nonsensical neurotic action that the conscious mind becomes aware of while acting in the world.

But, for me, the question is whether there is a choice between dichotomies. Why not let a multitude of consciousnesses (or unconsciousnesses) flourish, and how does a computer facilitate that?

Papert is not focused on how we make a decision or why we do what we do, but on how we learn. For me, how we learn is a question of consciousness, since a mind that is structured in a particular way can only learn in a particular way. So if we want to learn new kinds of things heretofore not posited, we have to restructure our minds and provide a multitude of minds for the different kinds of knowledge we want to acquire.


First Day of the New Batch


Today was the first day of the New Batch. It was sad and strange to not have the people from the old batch in the space, but exciting and invigorating to have a whole set of new people with new interests and new abilities. Also there are a bunch of Rust and Haskell people, which is exciting for me.

I had some fantastic conversations about shaders, art, repls, education, consciousness, music, the Lightning Network and so forth. I did not get a whole lot done technically. I spent most of the day working on my talk for localhost next week and on my contract specifications to present as an EIR. My friend Aya told me to write the talk as if I had no access to a computer or the internet, which is a good idea. I really want to balance the tech and the ideas so everyone understands what I am talking about and everyone comes away learning something new.

Other than that, tomorrow I am bringing my kids to Recurse. I hope it is not too distracting, since I have a bunch of new and old friends to pair program with!