After discussing prayer coin, libido coin, and any other coin I could think of, a friend of mine got me a free ticket to RadicalXChange. I got myself a miles ticket to Detroit and I will be crashing on a futon at some hacker house, which I will pay for in free geek tee shirts. But I get to go to this awesome conference.

I am so impressed by how the folks at RC are so involved in the conference circuit. This is something I have neglected since grad school. My last conference was like in 2003. I presented on something called “The Robotic Imagination” and Donna Haraway gave a (bananas) keynote on playing agility with her dog. Today I understand it as a brilliant and prescient analysis of non-human persons – otherkin thinking – but back then it was pretty bonkers.

Today, at the RadicalXChange conference, I participated in a session on Social Impact led by Zooko Wilcox. Then I attended a great breakout session on anarchism and crypto, where I learned about the relationship between body practices (think straight edge), self-sovereignty, and … anarchism. I am getting a reading list from one of the attendees, who is a professor of political science.

I also attended a great token prototyping workshop led by RCer Sarah Friend (and two other amazing people). We treated token rules as game core mechanics and then treated token creation as a game design exercise. It was awesome.

My game design group had a poor showing. Our game involved reading news you don't want to read. I want to reframe this idea into a token to expand one's reality tunnel, in the words of Robert Anton Wilson. The idea is: I am stuck in my reality tunnel. My reality tunnel is different from, say, the reality tunnel of someone who binge-watches Fox News. Perhaps you might not want that reality tunnel, but one way to expand our consciousness, according to RAW, is to experience other reality tunnels. I also think this is an empathetic act. So let's dive into the pain.

Imagine a game or a token where the goal is to encourage people to read information from a different reality tunnel, or to enter a different reality tunnel. How do you prove that you actually entered this alternate reality? In the group this afternoon it was suggested that you take a quiz on the content. But another woman mentioned that just repeating the content was enough. MIMESIS. I thought this was a much more interesting solution. To prove you have entered another reality tunnel, you just have to repeat it or copy it (not virally).

What happens if you fail? You get more content in that reality tunnel.

How do you cheat? You cut and paste instead of type.

How do you lose? You do not repeat the reality tunnel content in the allotted time.

How do you win? You accumulate the most reality tunnels (tokens).
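The mechanics above can be sketched in a few lines of Python. This is purely my own toy model of the game (the class name, time limit, and return values are all invented for illustration):

```python
class RealityTunnelGame:
    """Toy sketch: earn a token by retyping content from another reality tunnel."""

    def __init__(self, time_limit_seconds=600):
        self.time_limit = time_limit_seconds
        self.tokens = []  # one entry per reality tunnel entered

    def attempt(self, original, retyped, seconds_taken, tunnel_name):
        if seconds_taken > self.time_limit:
            return "lose"            # did not repeat the content in the allotted time
        if retyped == original:      # mimesis: exact repetition is the proof
            self.tokens.append(tunnel_name)
            return "token"
        return "more content"        # fail: you get more of that tunnel's content

    def score(self):
        return len(set(self.tokens))  # win: accumulate the most reality tunnels

game = RealityTunnelGame()
print(game.attempt("the news", "the news", 30, "fox"))   # token
print(game.attempt("the news", "teh news", 30, "msnbc")) # more content
print(game.score())  # 1
```

The cut-and-paste cheat is exactly why exact string equality is a weak proof on its own; a real version would need keystroke timing or something similar.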

But this is on-going. It is a practice. You must constantly test yourself against different reality tunnels. But I digress. I really wanted to talk about an alternate way to provide for programmatic (i.e., blockchain-based) public goods, called quadratic voting.

Quadratic Voting:

This is the paper everyone refers to when they talk about quadratic voting. In a nutshell, people express how strongly they feel about an issue, rather than just voting for or against it. You can support an issue with 3 votes instead of 1, but those votes cost quadratically more (not linearly more). The idea is that you will vote, and pay the most, for the issues that are most important to you. This is different from, say, a majority system, or even a representational system.
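The cost rule itself is tiny in code. Here is a sketch of my own simplified reading of the mechanism (the names and the "voice credit" budget framing are my assumptions, not the paper's full model):

```python
def vote_cost(votes: int) -> int:
    """Casting n votes on a single issue costs n^2 voice credits."""
    return votes ** 2

def max_votes(budget: int) -> int:
    """Most votes you can afford on one issue with a given credit budget."""
    n = 0
    while vote_cost(n + 1) <= budget:
        n += 1
    return n

# 1 vote costs 1 credit, 3 votes cost 9, 10 votes cost 100:
print([vote_cost(n) for n in (1, 3, 10)])  # [1, 9, 100]
print(max_votes(100))  # 10
```

The quadratic curve is the whole point: spreading credits across many issues is cheap, but piling votes onto one issue gets expensive fast, so strength of preference is revealed.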

It's interesting… I am reminded of Rousseau and the idea of the General Will. This is also a good book/reinterpretation of the General Will. I always interpreted the General Will as some sort of mathematical representation of the will of the people as a whole. Unlike in Rousseau's day, we can actually calculate this now. Even without the blockchain, we could use census data and the power of the state. The issue now concerns THE MATHEMATICS of the general will. And perhaps it is quadratic voting.


Game Programming with Isabel

I have been hit with the sniffles, so this week I did not GTD or TCB (in a flash) (like Elvis). I did manage to do a smidge of pair programming with Isabel in C#/Unity. We created functions to open things: boxes, doors, other stuff (I can't remember). Isabel is creating a game in Unity, so this was a small part of her larger project, but I learned a few things.

First, my programming muscle memory had some issues adjusting to the German keyboard. The Z is in a different space – so ctrl z was hard.

Second, I have an ancient version of unity installed on my laptop. BUT I knew this.

Third, Quaternions, Quaternions! When you rotate an object in Unity, say a door hinging (rotating) on an axis, you use quaternions or Euler angles. We were using quaternions because, I believe, we were doing transformations on angles. Euler angles are x, y, z – our three-dimensional world. Quaternions are x, y, z, w. A rough explanation is that this represents the axis vector and the angle (2·cos⁻¹ w) that we are going to rotate around. A truer explanation is that a quaternion is an extension of the complex numbers into four dimensions. I have zero intuition as to what this means and would probably have to spend a week doing math to get the beginning of a handle on what is going on.
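The axis-angle reading of (x, y, z, w) is easy to check numerically. A sketch in Python standing in for Unity's C# (this is the standard unit-quaternion convention, not Unity's API; function names are mine):

```python
import math

def from_axis_angle(axis, degrees):
    """Unit quaternion (x, y, z, w) for a rotation of `degrees` around unit `axis`."""
    half = math.radians(degrees) / 2
    s = math.sin(half)
    x, y, z = (a * s for a in axis)  # axis scaled by sin(theta/2)
    return (x, y, z, math.cos(half))  # w = cos(theta/2)

def angle_of(q):
    """Recover the rotation angle: 2 * acos(w), in degrees."""
    return math.degrees(2 * math.acos(q[3]))

q = from_axis_angle((0, 1, 0), 90)  # 90-degree rotation around the y axis
print(round(angle_of(q)))  # 90
```

So w is not a fourth spatial coordinate: it encodes the half-angle, which is why the recovered angle is 2·cos⁻¹ w.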

Why Quaternions?

From the documentation:

[Quaternions] are compact, don’t suffer from gimbal lock and can easily be interpolated. Unity internally uses Quaternions to represent all rotations.

What is Gimbal lock? From the documentation:

Euler angles suffer from Gimbal Lock. When applying the three rotations in turn, it is possible for the first or second rotation to result in the third axis pointing in the same direction as one of the previous axes. This means a “degree of freedom” has been lost, because the third rotation value cannot be applied around a unique axis.

Fourth, state. How do you know when the door has finished opening, say, 90 degrees? There is no callback. What we ended up doing was adding conditionals in the Update function (which is run every frame) to see if the door/lid passed the 'open' or 'closed' threshold. Not super elegant, but according to Stack Overflow and other internet resources, this is what you do!
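In practice the per-frame check looks something like this, sketched in Python rather than C# (the threshold, rotation speed, and field names are all invented for illustration, not Isabel's actual code):

```python
OPEN_THRESHOLD = 90.0      # degrees; invented for illustration
DEGREES_PER_FRAME = 2.0    # rotation speed; invented for illustration

def update(door):
    """Run once per frame, like Unity's Update(): rotate, then poll state."""
    if door["opening"] and door["angle"] < OPEN_THRESHOLD:
        door["angle"] = min(door["angle"] + DEGREES_PER_FRAME, OPEN_THRESHOLD)
    # No callback fires when the rotation finishes, so we check the
    # threshold on every frame instead.
    if door["angle"] >= OPEN_THRESHOLD and not door["is_open"]:
        door["is_open"] = True  # where "door finished opening" logic would go

door = {"angle": 0.0, "opening": True, "is_open": False}
for _ in range(60):  # simulate 60 frames
    update(door)
print(door["is_open"], door["angle"])  # True 90.0
```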

This was fun; it combined two things I love about RC – pair programming and being exposed to something new (e.g., Unity). I feel less intimidated by Unity.

Isabel introduced me to this cool game-sharing site – itch – and I was reminded of my favorite lo-fi gaming platform, TWINE! I love Twine.


Getting Elm to Run on Heroku

This book is about a man who made a bunch of random things in ye old ways out of Ash. Now Ash is not Elm, but they are both trees, and this book is excellent and it is not about computers.

I am writing some front ends this week in Elm, and I figured what better place to deploy my fancy new Elm front end than Heroku (or possibly Zeit Now). In order to do this I decided to build a Dockerfile that then calls a Makefile (thanks Joe, from the previous post). This actually took a while for me to do.

This is the Dockerfile

FROM node:latest

COPY package.json .
RUN npm install elm@0.19.0
RUN npm install http-server

COPY elm.json .
COPY . .

ENV PUBLIC_URL https://xxx.herokuapp.com

RUN chmod 777 Makefile
RUN make

CMD http-server -p $PORT dist

And this is the Makefile

export PATH := ../../node_modules/.bin:$(PATH)
export SHELL := /usr/bin/env bash

all:
	mkdir -p dist
	mkdir -p build
	elm make src/Main.elm --output build/main.js
	cat build/main.js build/bootstrap.js > dist/bundle.js
	cat build/index.html > dist/index.html

This all works now, but I will recall my tale of woe.

First off, I was installing the npm packages globally and without a version. This led me down the dark path of loading Ubuntu and installing sudo – none of which worked. Then a bunch of people on Zulip, at RC, suggested I NOT install globally. This sort of worked, but then I got an error regarding my package.json. Where was it??? Nowhere, because I was writing Elm. But I ran npm init – because YOLO. The Docker build seemed to work, but the Makefile was busted. It could not find elm. I was filled with shame.

Luckily I ran into Tenor and forced him to look at my Makefile, and within 2 seconds he said AHA, you need:

export PATH := ../../node_modules/.bin:$(PATH)
export SHELL := /usr/bin/env bash

And lo and behold, it worked. We both talked about how great Make is and I am feeling pretty good. Next up I am going to look at Elm and Rust – because maybe I want to rewrite my backends, and I saw this link when I was trying to deal with my Elm Docker issues.

The Secrets of Make – Via Joe Mou

I had a coffee chat scheduled at RC with Joe. We were talking about our respective projects and he said that he was making a devops tool combining aspects of Ansible and Terraform but inspired by Make!


I have had a troubled relationship with Make. I never took the time to learn all the syntax, so it always seemed very convoluted. And autoconf! What is up with that? Platform-specific dependencies that you don't want to include in your Makefile, as it turns out.

The idea that Make would be the inspiration for anything was shocking to me. I asked Joe to walk me through the beauties of Make, and he graciously took about 30 minutes to review the secrets of Make and how he uses it (very extensively, it turns out) in his workflow.

The Make dependency graph: This is one of the most important qualities of Make for Joe. In a bash script, for example, we just have a bunch of lines that are executed in order. But in Make, the rules (or targets) are interpreted as nodes on a dependency graph, with each node being rebuilt only when its target is not up to date. There is also a good description with a visual from this course at Princeton.
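The rebuild rule can be modeled in a few lines of Python. This is a toy model of Make's logic, using an in-memory dict of timestamps instead of real file mtimes (all names are mine):

```python
def needs_rebuild(target, deps, mtimes):
    """A target is stale if it is missing or older than any of its dependencies."""
    if target not in mtimes:
        return True
    return any(mtimes[dep] > mtimes[target] for dep in deps)

def build(target, graph, mtimes, log):
    """Walk the dependency graph depth-first, rebuilding only stale nodes."""
    deps = graph.get(target, [])
    for dep in deps:
        build(dep, graph, mtimes, log)
    if needs_rebuild(target, deps, mtimes):
        log.append(target)  # stand-in for running the recipe
        mtimes[target] = max((mtimes[d] for d in deps), default=0) + 1

graph = {"app": ["main.o", "util.o"], "main.o": ["main.c"], "util.o": ["util.c"]}
mtimes = {"main.c": 5, "util.c": 1, "main.o": 6, "util.o": 2, "app": 7}
log = []
mtimes["util.c"] = 10  # "touch util.c"
build("app", graph, mtimes, log)
print(log)  # ['util.o', 'app'] -- main.o is untouched
```

Touching one source file rebuilds only the nodes downstream of it, which is the property a linear bash script cannot give you.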

Make rules: Rules specify different parts of the dependency graph. One of the reasons Joe uses Make instead of bash is that you can easily separate the logic into different rules (and targets). This is useful for testing parts of your script, or just for running separate commands that belong in the same file. This alone is a good argument for me to move some of my scripts over from bash to Make.

Joe also illustrated the use of Make for file processing using Make syntax and wildcarding, e.g., converting all the files in a directory from aif to mov via ffmpeg. This is something I have been doing an awful lot of since I have started playing with tidalcycles.
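The same pattern-rule idea – every %.aif maps to a %.mov target – can be sketched in Python by building the per-file commands (I have not verified the minimal ffmpeg flags here, so this just constructs the commands rather than running them):

```python
from pathlib import Path

def conversion_commands(directory, src_ext=".aif", dst_ext=".mov"):
    """Like a Make pattern rule `%.mov: %.aif` -- one command per source file."""
    cmds = []
    for src in sorted(Path(directory).glob(f"*{src_ext}")):
        dst = src.with_suffix(dst_ext)  # same stem, new extension
        cmds.append(["ffmpeg", "-i", str(src), str(dst)])
    return cmds

# For a directory containing a.aif and b.aif this produces
# [['ffmpeg', '-i', '.../a.aif', '.../a.mov'],
#  ['ffmpeg', '-i', '.../b.aif', '.../b.mov']]
```

Make gets you the same effect declaratively with `%.mov: %.aif` plus automatic variables, and skips files whose .mov is already newer.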

But still, the dependency graph is the reason why Joe likes Make. Why is a dependency graph important? Well, because it's about flow. It's about how all the pieces of a system work together. In a Makefile, this is what you are building. It is an elegant way to glue all the pieces together. You can specify mock or dummy dependencies while you are building this out, so you can really outline how all the pieces of your system work together.

Terraform, the workflow automation tool of the moment, has a dependency graph as well. So Make is not the only workflow automation tool to implement this, but the Terraform notion of providers gives Terraform a particular POV: Terraform is for infrastructure specifically. Having something like providers acknowledges that the details behind different infrastructure providers require some sort of custom code that is best abstracted away from the dependency graph. But including shell commands or scripts in Terraform is not elegant, and I think that Joe is definitely on to something.

Conscious computation continued

As I continue to work through this idea of conscious computation, I have allocated some time every day to think about this project and how I might manifest work related to it. Recently I was emailing with a friend to get his feedback, and I wrote the following, which was somewhat interesting to me, so I thought I would post it here so I remember it.
My project right now is a collection of works around the idea of "conscious computation", which I am struggling to define and refine. In terms of my poetic project, I am interested in creating new poetic structures based on statistics and probability. If we consider traditional poetic structures as mnemonic aids for humans, what is it to create poetics with mnemonic aids for computers/robots/cyborgs/human-non-human hybrids? I have also been working on poetic crypto projects like a prayer coin at RC and now a libido coin.

In terms of conscious computation, it is somewhere between:

a) David Chalmers' idea that all the biological consciousnesses that have ever existed (humans, animals, plants, rocks, mycelium, etc.) are a small subset of the space of consciousness, which we can now fill out by creating computational and robotic (embodied) consciousness. How can we fill out this space? How can we create a language around this? Is this an expansion of the theory of computation into a theory of consciousness? What kinds of thoughts can be thought by different conscious systems, versus what kinds of problems can be solved by certain computational systems?

b) Nagel – What is it like to be a bat? How can we experience different structures of consciousness, and the problem of what I am referring to as transduction, or conversion between one symbol system (conscious system) and another?

Would be interested to hear your thoughts.

Pierre Menard Sublime Plugin

I am interested in the idea of mimesis in the creation of an artistic work. How do you practice a craft? For many fine artists, you copy drawings of the old masters; for writers, you try to write in the style of various writers. BUT what if you wrote exactly what a writer wrote, and the difference was in the way that you wrote it? Everyone would write in a different way: their typing speeds would be different, the way they used delete, whether they mistyped certain words, etc. Colin (I think), at RC, told me that you can use typing style as a personal signature – like a fingerprint.

Anyway, there is a story by Borges, called Pierre Menard – about someone who rewrote Don Quixote – and it is about how difficult it is to recreate the act of rewriting as if you were the original author. Read the story; it is only about 6 pages.

In any case, I always wanted to write – I suppose you could call it a key logger – something that would record and play back, in a graphical, beautiful, and comparative way, how people rewrote different pieces of writing. Rather than do this as a standalone app, I decided to do it as a plugin. First I looked at vim, but I did not want to spend much time on this and did not want to learn Vimscript. I could have done something lispy in Emacs, but I just went with Sublime because the API is well documented and I could use Python.

I plan on adding an Elm front end, possibly with D3, to visualize this, and I have an intuition that I would like to use text layering. The keystrokes are logged to a text file, so theoretically anyone could write a front end for this.

Other additions could be to upload the text files to an s3 bucket and make it a fully fledged microservice running on k8s, because why not use a sledgehammer to hang a picture on the wall. The git repo is here.

Saturday Talks: Poetry, AI, Possible Worlds

I spent a lovely afternoon/evening attending talks. First I went to Poets House to see a poetry reading/discussion between Bernadette Mayer and Stacy Szymaszek. Poets House is amazing. You can go there for free and read their massive collection of poetry and journals and look at the Hudson – ahhhh.

Bernadette Mayer is one of my poetic heroes. She has a great list of writing prompts, and Stacy's poem Journal of Ugly Sites is actually a reaction to one of those prompts. I particularly like Mayer's Midwinter Day. It is about the life of a mom and poet and woman, told through dream logic or through dream interpretation, and it was written in one day!

She had to prepare beforehand in order to be able to write the poem in a day. She had to practice recalling her dreams and take notes on the best sellers at the bookstore. This notion of prep made an impact on me. The only other place where practice is elevated to an art form is in something like music – I think of Bach and The Well-Tempered Clavier.

There was a discussion between Bernadette and Stacy on poetry and editing. Both seemed somewhat against it – that when you put the word on the page it is sacred. And then someone referenced Poe, who said a poem should be completed in one sitting. These are interesting constraints. This places poetry someplace between performance and artifact – probably where it belongs. If we want to distinguish poetry from fiction or other types of writing, it is probably this activity – the process of writing and its performative aspect – that distinguishes it.

I was really attracted to the idea of poetic prep, especially with these daily Python poetic practices I have been doing. I feel like I have a ton of poetic material: tons of old notebooks and notecards and sheaves of paper, fragments here and there. How do you turn this into a poem? What is the other preparation? Do I need to prepare my mind by meditating, or looking at art or nature, or doing inner work? Reading about Bernadette's dream prep got me thinking about alchemy. I just read a book about alchemy and painting, I love The Chemical Wedding of Christian Rosenkreutz, which is about alchemy, and I'm reading Jung's work on alchemy. My shrink made some comment to me the other day about internal alchemy and transformation (because I asked him to open the window, which he obliged). What is internal transformation as alchemy?

Anyway, Paolo Javier, a fantastic poet (I also took a workshop with him a while back), is running the programming. This year the theme is epic poetry, and the programming is epic. April 17 is Briggflatts – one of my favorite poems/poets – then Jordan Abel, who wrote Injun – such good stuff.

Afterwards I met my old and dear friend Mira in Red Hook and we went to Pioneer Works to watch some old white dudes blow hot air. This is not entirely fair. The talk made me really appreciate David Chalmers and want to investigate his work more. He began the talk with something that really speaks to my idea of conscious computation: possible minds. If we imagine all the billions of people that have ever existed, and even all the living creatures that have existed, that represents only a small percentage of the space of possible minds. Now that we have AI and computation, we can fill out this space.

I suppose we could also expand this to possible bodies. Evolution has supplied us with a small subset of possible bodies, and now we can use technology and AI to expand that space. The set is bounded but the elements are infinite.

So a few questions… Why is this interesting? Why should we flesh out possible minds, or possible bodies? It is as if we have moved from a world of Euclidean geometry, where parallel lines never intersect, to non-Euclidean geometries, where the old rules do not apply. Perhaps the concepts of the conscious and the unconscious can be reformulated as different types of information processing – the conscious as language and the unconscious as images – or as different structures of reasoning, Aristotelian logic versus poetic logic (homophones etc.).

There was also an interesting point brought up by a biologist on the nature of evolution – but no one addressed it.

In any case, my question is this: most fields of science spun out from philosophy – except the theory of computation, which comes out of math. Now you could maybe say that some philosophy comes out of math or geometry (like Protagoras, or Plato's divided line). But let's say that the theory of computation is one of the few fields not influenced by philosophy. I feel like this is why we have such issues with concepts such as AI or cyber ethics: there is a philosophical lacuna at the center of the theory of computation.

I wonder what a theory of computation would look like if it came out of philosophy. What philosophical question does the theory of computation answer?

At first it seems that it would be logic, or how to reason properly, but computation is about change and solving problems and the construction of contexts. I have no idea…

Python and Music Experiments Return

Poem of the day: It is complete nonsense – I need to do some work creating stanzas and syllables (line length)

ship captain mate
world stranger
year boat body landlord
harpooneer blacksmith pole devil
craft rope pequod parsee creature
it him water me them that light men life time all angels ye bone one
rope being rest death night
whale hole year three time moment
affair all coach commodore gold deck
german text queen rest brain knightly helm captain
skeleton deck rod ramadan
rest dish
razors captain lake crew cartload
lip unctuousness skies bowsman evening
advocate hunter breast
hand man oil
time sea crew air ship first whale captain men
them side days iron life it way

do be some best been said
have being I lower is these
which having old seems are whaling
this who O both out not it , were .
he To whom so God was home : one
Oh cannot see when de his and
Captain more all two other themselves
the of that s am would Who man —
men first mine had enough whose did
has does himself seen White very there
Do be Some best been headed have being
They lower Is those whatever having
full stands are saying each who O both off
Not em , were ? she To whom bodily
Greenland was home :
something oh cannot find Where
de your either King further half eight
other themselves No In than Lakeman
am could Who chest — days
First mine had enough whose did
has does yourself hoisted Dutch
less There

So now we are on chapter 5 of the Python NLTK book, which is all about tagging and dictionaries (and some n-gram stuff). Tagging is just a way of organizing your text. I can run an ML algorithm against a corpus and tag, in a dictionary, all the adverbs, verbs, and so forth. Then I can write a poem according to grammatical rules instead of meter and rhyme.

So for example I can do something like:
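Something like the following – a pure-Python sketch of the idea. A real version would pull the tag dictionary from NLTK's tagged corpora; this hand-made lexicon (and the word lists in it) are just mine, for illustration:

```python
import random

# A tiny hand-tagged lexicon (a real one would come from NLTK's tagged text,
# e.g. Moby-Dick run through a tagger).
lexicon = {
    "NN": ["whale", "ship", "captain", "harpooneer", "sea"],
    "JJ": ["white", "knightly", "ancient", "unctuous"],
    "VB": ["lower", "hoist", "sail", "strike"],
    "AT": ["the", "a"],
}

def line_from_pattern(pattern, rng):
    """Generate one line by filling in a sequence of part-of-speech tags."""
    return " ".join(rng.choice(lexicon[tag]) for tag in pattern)

rng = random.Random(0)
# A grammatical rule instead of meter: article-adjective-noun verb article-noun.
for _ in range(3):
    print(line_from_pattern(["AT", "JJ", "NN", "VB", "AT", "NN"], rng))
```

The poem's form is now the tag pattern itself, not syllable counts.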





There are a bunch of part-of-speech tags that I am ignorant of, like: DO, BE, DTI, JJT, BEN, VBD, HV, BEG, PPSS, JJR, BEZ, DTS, WDT, HVG, JJ, VBZ, BER, VBG, DT, WPS, FW-UH-TL, ABX, RP, PPO, BED, PPS, TO, WPO, RB, NP, BEDZ, NR, PN, UH, MD*, VB, WRB, FW-IN, PP$, CC, NN-TL, RBR, ABN, CD, AP, PPLS, AT, IN, CS, UNK, BEM, MD, WPS-TL, NN, NNS, OD, PP$$, HVD, QLP, WP$, DOD, HVZ, DOZ, PPL, VBN, JJ-TL, QL, EX (plus tags for punctuation like the comma, period, colon, and dash).

In any case, this piece was created in two parts. The top was created by NLTK generating similar lines from word prompts. The second section was created by cycling through the tagged parts of speech. If you use an n-gram tagger, then instead of using one token to determine the 'key' or tag, you use n words (like 'white whale' instead of 'white').
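The n-gram "key" idea in miniature – my own toy illustration of the lookup, not NLTK's actual BigramTagger internals:

```python
# Unigram: the key is a single token.
unigram = {"white": "JJ", "whale": "NN"}

# Bigram: the key is the previous token plus the current one, so
# "whale" after "white" can get a tag that "whale" alone would not.
bigram = {("white", "whale"): "NN"}

def tag(tokens):
    tags = []
    for i, tok in enumerate(tokens):
        prev = tokens[i - 1] if i > 0 else None
        # Prefer the more specific 2-token context; fall back to 1 token.
        tags.append(bigram.get((prev, tok)) or unigram.get(tok, "UNK"))
    return tags

print(tag(["white", "whale"]))  # ['JJ', 'NN']
```

NLTK chains these the same way: an n-gram tagger backs off to an (n-1)-gram tagger when the longer context was never seen in training.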

The tidalcycles piece I am working on is a mash up of a bunch of ubuweb recordings. I am playing with using a traditional song structure.

How Debuggers Work – thanks Gargi

After last Thursday's presentations I asked Gargi Sharma to walk me through the debugger that she wrote for Go. It was super cool and I am just going to review the highlights.

First off, how does the debugger work…. Well, you pass in the binary of the program you want to debug. Then you generate a symbol table, which associates an address in memory with a command. This is important because in Gargi's code, when you set a breakpoint, you replace the command where you want to break with an INTERRUPT CODE (think ctrl-z) BRILLIANT! You also need a data structure to map the line numbers to the commands in the symbol table.
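The breakpoint trick can be sketched in Python against a fake memory buffer (the real debugger patches a live process via ptrace; 0xCC is the x86 INT3 breakpoint opcode, and the class and table below are my own stand-ins, not Gargi's code):

```python
INT3 = 0xCC  # x86 breakpoint interrupt opcode

class Breakpoints:
    def __init__(self, memory):
        self.memory = memory  # stand-in for the debuggee's text segment
        self.saved = {}       # address -> original byte

    def set(self, addr):
        """Replace the byte at addr with INT3, remembering the original."""
        self.saved[addr] = self.memory[addr]
        self.memory[addr] = INT3

    def clear(self, addr):
        """Restore the original byte so execution can continue."""
        self.memory[addr] = self.saved.pop(addr)

# Fake "machine code" plus a symbol table mapping source lines to addresses.
memory = bytearray([0x55, 0x89, 0xE5, 0x5D, 0xC3])
symtab = {1: 0, 2: 3}  # source line -> address

bp = Breakpoints(memory)
bp.set(symtab[2])
print(hex(memory[3]))  # 0xcc -- the CPU will trap here
bp.clear(symtab[2])
print(hex(memory[3]))  # 0x5d -- original byte restored
```

When the CPU hits the INT3 the OS stops the process and notifies the debugger, which restores the saved byte before resuming.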

But the main takeaway… debuggers (or this one at least) work by inserting an INTERRUPT – []byte{0xCC}.

You could theoretically debug any binary this way, but this code generates a symbol table for Go, so you can only use it to debug Go. If you wanted to debug another language you would need to use the symbol table for that language.

Also, this code uses some Unix-based tools, so Gargi runs it in Docker on her OS X machine. If you wanted to run it natively on OS X or Windows you would have to replace these tools, such as ptrace. Ptrace allows the debugger to inspect the code of the process it is debugging.

Gargi also introduced me to ELF. ELF is a format for binaries and object code. It lets you search for a section of code when you initialize your debugger. For example, on line 156 of Gargi's debugger.go she looks for .text. I assume she knows to do this because of the ELF format. If I am wrong, let me know.

Anyway, I am super grateful that Gargi took the time to walk me through this. Debuggers are something I have used for a long time, but they were mysterious. Now I know the secret – CTRL-Z! I forked Gargi's code and may do some sort of musical debugger experiment. But I highly recommend going over to her github and checking it out. It is only 226 lines.



Kaggle Deep Dive and Humpbacked Whales

When I watched the fast.ai videos, the instructor said it was worthwhile to just go through a bunch of Kaggle competitions: download the data, then submit the results. So for a while I have wanted to spend a few hours becoming familiar with the Kaggle ecosystem and submitting to a bunch of competitions. I roped Mari into what became an involved afternoon of data munging.

First we installed the Kaggle CLI. There were some issues with the token and the kaggle.json, as well as accepting terms and conditions for each competition we were interested in, but once we figured this out the Kaggle CLI is relatively easy. It lets you download data and upload results pretty seamlessly. It does some other stuff too, but I am not sure what.

The first competition we looked at was the digit recognizer. The sample data is a CSV. I believe it comes from the MNIST dataset, which is a dataset of handwritten numbers. Each line is an id with a list of pixels. The pixels, if drawn out, would form a number, and the ML project is to guess the number. We looked at some examples of how to do this, but most of our experience was with image classification, so we put this aside. Also, Mari is running fastai v3 (the latest), and there were some inconsistencies between the online samples and the v3 library.
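The row format is easy to poke at by hand. A sketch (I am assuming from memory that each row is a label followed by 784 pixel values, i.e., a flattened 28×28 image; check the competition's data page):

```python
def parse_row(row):
    """Split one CSV line into (label, 28x28 grid of pixel values)."""
    values = [int(v) for v in row.split(",")]
    label, pixels = values[0], values[1:]
    assert len(pixels) == 28 * 28  # a flattened 28x28 grayscale image
    return label, [pixels[r * 28:(r + 1) * 28] for r in range(28)]

# A fake all-black digit labeled 7:
row = "7," + ",".join(["0"] * 784)
label, grid = parse_row(row)
print(label, len(grid), len(grid[0]))  # 7 28 28
```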

We looked for an image classification project and found the humpback whale identification competition. 90% of the project involved creating a directory structure to support fast.ai and then manipulating the result-set data into the right file format. There was also a fair amount of time spent downloading the data and training the model, and trying to figure out the correct functions to use from the fast.ai library to extract labels and whatnot.

It was very helpful to work with Mari because I got a sense of how to go about tweaking learning rates and freezing layers. A lot of this is still mysterious to me, and I think fast.ai makes it even more mysterious. But it was very useful to go through this project and try to apply the ideas from fast.ai. I would like to work some Kaggle competitions into my programming practice consistently. It is a really different way of thinking; I would not call it programming exactly, but a sort of debugging.