Methods: Individual sentences (and sometimes groups of sentences) are exactly as the neural net output them. I did select the most interesting sentences, though, and arrange them in an order that (if not exactly making sense) at least flowed better.
Neural nets used:
Writes text letter-by-letter: https://github.com/karpathy/char-rnn
Writes text word-by-word: https://github.com/larspars/word-rnn
The outputs with nonsense words were mostly from the letter-by-letter neural net.
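The difference comes down to what counts as one prediction step: the letter-by-letter net predicts a single character at a time (so it can assemble never-before-seen "words"), while the word-by-word net only ever emits whole words from its training vocabulary. A minimal sketch of the two kinds of units (the example text is mine, not from the training data):

```python
text = "Make America rich again"

# char-rnn-style units: every character (including spaces) is a token,
# which is why this model can produce nonsense words
char_tokens = list(text)

# word-rnn-style units: each whole word is a token, so every output
# word is guaranteed to be a real word it has seen before
word_tokens = text.split()

print(char_tokens[:6])  # ['M', 'a', 'k', 'e', ' ', 'A']
print(word_tokens)      # ['Make', 'America', 'rich', 'again']
```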
What the creativity level means: This is the "temperature" setting. At the lowest temperature, the neural net always writes the most likely next word/letter, and everything becomes "the the the". At the highest temperature, it chooses less probable words/letters for more weirdness.
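Under the hood, temperature works by dividing the model's scores (logits) by the temperature before turning them into probabilities and sampling. Here's a minimal sketch of that mechanic (the function names and example scores are mine, not from either repo):

```python
import numpy as np

def temperature_probs(logits, temperature):
    """Convert raw scores to a probability distribution at a given temperature."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()  # subtract max before exp, for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

def sample_next(logits, temperature, rng=None):
    """Pick the next letter/word index by sampling at that temperature."""
    rng = rng or np.random.default_rng()
    return rng.choice(len(logits), p=temperature_probs(logits, temperature))

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens

# Low temperature: the distribution collapses onto the single most likely
# token -- "the the the" territory.
print(temperature_probs(logits, 0.05))

# High temperature: the distribution flattens, so unlikely tokens get
# picked more often -- hence the weirdness.
print(temperature_probs(logits, 10.0))
```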
Training time: like 2 solid days apiece on AWS's Deep Learning AMI.
Sampling time: waaay too long reading Trumplike speech. I have suppressed the memories and it's probably better that way.
Filming time: I dunno; wasn't there, but John Di Domenico absolutely KILLS it.
That's in contrast to this neural net (not mine), which trained for over a month on 82 million Amazon reviews.
Via @Johnnyd23's amazing performance, you can see how well the neural net mimics surface characteristics like word choice and rhythm.
You can follow @JanelleCShane.