
In pursuit of Otama's tone


It would be fun to use the Otamatone in a musical piece. But for someone used to keyboard instruments it's not so easy to play cleanly. It has a touch-sensitive (resistive) slider that spans roughly two octaves in just 14 centimeters, which makes it very sensitive to finger placement. And in any case, I'd just like to have a programmable virtual instrument that sounds like the Otamatone.

What options do we have, as hackers? Of course the slider could be replaced with a MIDI interface, so that we could use a piano keyboard to hit the correct frequencies. But what if we could synthesize a similar sound all in software?

Sampling via microphone

We'll have to take a look at the waveform first. The Otamatone has a piercing electronic-sounding tone to it. One is inclined to think the waveform is something quite simple, perhaps a sawtooth wave with some harmonic coloring. Such a primitive signal would be easy to synthesize.

[Image: A pink Otamatone in front of a microphone. Next to it a screenshot of Audacity with a periodic but complex waveform in it.]

A friend lent me her Otamatone for recording purposes. Turns out the wave is nothing that simple. It's not a sawtooth wave, nor a square wave, no matter how the microphone is placed. But it sounds like one! Why could that be?

I suspect this is because the combination of speaker and air interface filters out the lowest harmonics (and parts of the others as well) of square waves. But the human ear still recognizes the residual features of a more primitive kind of waveform.

We have to get to the source!

Sampling the input voltage to the Otamatone's speaker could reveal the original signal. Also, by recording both the speaker input and the audio recorded via microphone, we could perhaps devise a software filter to simulate the speaker and head resonance. Then our synthesizer would simplify into a simple generator and filter. But this would require opening up the instrument and soldering a couple of leads in, to make a Line Out connector. I'm not doing this to my friend's Otamatone, so I bought one of my own. I named it TÄMÄ.

[Image: A Black Otamatone with a cable coming out of its mouth into a USB sound card. A waveform with more binary nature is displayed on a screen.]

I soldered the left channel and ground to the same pads the speaker is connected to. I had no idea about the voltage range in advance, but fortunately it just happens to fit line level and not destroy my sound card. As you can see in the background, we've recorded a signal that seems to be a square wave with a low duty cycle.

This square wave seems to be superimposed with a much quieter sinusoidal "ring" at 584 Hz that gradually fades out in 30 milliseconds.

Next we need to map out the effect the finger position on the slider has on this signal. It seems to not only change the frequency but the duty cycle as well. This happens a bit differently depending on which one of the three octave settings (LO, MID, or HI) is selected.

The Otamatone has a huge musical range of over 6 octaves:

[Image: Musical notation showing a range of 6 octaves.]

In frequency terms this means roughly 55 to 3800 Hz.

The duty cycle changes according to where we are on the slider: from 33 % in the lowest notes to 5 % in the highest ones, on every octave setting. The frequency of the ring doesn't change, it's always at around 580 Hz, but it doesn't seem to appear at all on the HI setting.

So I had my Perl-based software synth generate a square wave whose duty cycle and frequency change according to given MIDI notes.
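The oscillator core can be sketched as follows (in JavaScript rather than the original Perl synth; interpolating the duty cycle linearly in log-frequency between the measured 33 % and 5 % endpoints is my assumption — the article only gives the two extremes and the 55–3800 Hz range):

```javascript
// Square-wave oscillator whose duty cycle narrows from 33 % at the
// bottom of the Otamatone's range to 5 % at the top.
const SAMPLE_RATE = 44100;

function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12); // standard MIDI tuning
}

function dutyCycleFor(freq) {
  // Position of this frequency within the 55–3800 Hz range, log scale.
  const t = (Math.log2(freq) - Math.log2(55)) /
            (Math.log2(3800) - Math.log2(55));
  const clamped = Math.min(1, Math.max(0, t));
  return 0.33 + (0.05 - 0.33) * clamped; // 33 % down to 5 %
}

function squareWave(note, seconds) {
  const freq = midiToFreq(note);
  const duty = dutyCycleFor(freq);
  const samples = new Float32Array(Math.floor(seconds * SAMPLE_RATE));
  for (let i = 0; i < samples.length; i++) {
    const phase = (i * freq / SAMPLE_RATE) % 1; // 0..1 within each period
    samples[i] = phase < duty ? 1 : -1;
  }
  return samples;
}
```

The quiet 580 Hz decaying ring would be mixed on top of this, but the pulse train above is the bulk of the sound.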

FIR filter 1: not so good

Raw audio generated this way doesn't sound right; it needs to be filtered to simulate the effects of the little speaker and other parts.

Ideally, I'd like to simulate the speaker and head resonances as an impulse response, by feeding well-known impulses into the speaker. The generated square wave could then be convolved with this response. But I thought a simpler way would be to create a custom FIR frequency response in REAPER, by visually comparing the speaker input and microphone capture spectra. When their spectra are laid on top of each other, we can read the required frequency response as the difference between harmonic powers, using the cursor in baudline. No problem, it's just 70 harmonics until we're outside hearing range!

[Image: Screenshot of Baudline showing lots of frequency spikes, and next to it a CSV list of dozens of frequencies and power readings in the Vim editor.]

I then subtracted one spectrum from another and manually created a ReaFir filter based on the extrema of the resulting graph.

[Image: Screenshot of REAPER's FIR filter editor, showing a frequency response made out of nodes and lines interpolated between them.]

Because the Otamatone's mouth can be twisted to make slightly different vowels, I recorded two spectra: one with the mouth fully closed and the other as open as possible.

But this method didn't quite give the sound the piercing nasalness I was hoping for.

FIR filter 2: better

After all that work I realized the line connection works in both directions! I can just feed any signal and the Otamatone will sound it via the speaker. So I generated a square wave in Audacity, set its frequency to 35 Hz to accommodate 30 milliseconds of response, played it via one sound card and recorded via another one:

[Image: Two waveforms, the top one of which is a square wave and the bottom one has a slowly decaying signal starting at every square transition.]

The waveform below is called the step response. The simplest way to get a FIR convolution kernel is to just copy-paste one of the repetitions. Strictly, to get an impulse response would require us to sound a unit impulse, i.e. just a single sample at maximum amplitude, not a square wave. But I'm not redoing that since recording this was hard enough already. For instance, I had to turn off the fridge to minimize background noise. I forgot to turn it back on, and now I have a box of melted ice cream and a freezer that smells like salmon. The step response gives pretty good results.
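Once a repetition has been cut out as the kernel, applying it is plain convolution. A direct-form sketch (sox and FogConvolver do the same thing much faster with FFTs; the kernel here stands in for the thousand-odd samples you'd actually copy from the recording):

```javascript
// Direct-form FIR convolution: each input sample smears a scaled copy
// of the kernel (the recorded response) into the output.
function convolve(signal, kernel) {
  const out = new Float32Array(signal.length + kernel.length - 1);
  for (let i = 0; i < signal.length; i++) {
    for (let j = 0; j < kernel.length; j++) {
      out[i + j] += signal[i] * kernel[j];
    }
  }
  return out;
}
```

This is O(n·m), which is why FFT convolution is preferred for kernels thousands of samples long.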

One of my favorite audio tools, sox, can do FFT convolution with an impulse response. You'll have to save the impulse response as a whitespace-separated list of plaintext sample values, and then run sox original.wav convolved.wav fir response.csv.

Or one could use a VST plugin like FogConvolver:

[Image: A screenshot of Fog Convolver.]

A little organic touch

There's more to an instrument's sound than its frequency spectrum. The way the note begins and ends, the so-called attack and release, are very important cues for the listener.

The width of a player's finger on the Otamatone causes the pressure to be distributed unevenly at first, resulting in a slight glide in frequency. This also happens at note-off. The exact amount of Hertz to glide depends on the octave, and by experimentation I stuck with a slide-up of 5 % of the target frequency in 0.1 seconds.
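The note-on glide can be sketched like this (the 5 % depth and 0.1 s duration are from my experiments above; the linear ramp is an assumption, and the note-off glide is the same thing in reverse):

```javascript
// Note-on pitch glide: start 5 % below the target frequency and
// slide up linearly over 0.1 seconds.
const GLIDE_SECONDS = 0.1;

function glideFrequency(target, t) {
  if (t >= GLIDE_SECONDS) return target;   // glide finished
  const start = target * 0.95;             // 5 % below target
  return start + (target - start) * (t / GLIDE_SECONDS);
}
```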

It is also very difficult to hit the correct note, so we could add some kind of random tuning error. But it turns out this would be too much; I want the music to at least be in tune.

Glides (glissando) are possible with the virtual instrument by playing a note before releasing the previous one. This glissando also happens in 100 milliseconds. I think it sounds pretty good when used in moderation.

I read somewhere (Wikipedia?) that vibrato is also possible with the Otamatone. I didn't write a vibrato feature into the code itself, but it can be added using a VST plugin in REAPER (I use MVibrato from MAudioPlugins). I also added a slight flanger with an inter-channel phase difference in the sample below, to make the sound just a little bit easier on the ears (but not too much).

Sometimes the Otamatone makes a short popping sound, perhaps when finger pressure is not firm enough. I added a few of these randomly after note-off.

Working with MIDI

We're getting on a side track, but anyway. Working with MIDI used to be straightforward on the Mac. But GarageBand, the tool I currently use to write music, amazingly doesn't have a MIDI export function. However, you can "File -> Add Region To Loop Library", then find the AIFF file in the loop library folder, and use a tool called GB2MIDI to extract MIDI data from it.

I used mididump from python-midi to read MIDI files.

Tyna Wind - lucid future vector

Here's TÄMÄ's beautiful synthesized voice singing us a song.

1 public comment
135 days ago
Every blog post she makes is incredible.
Silicon Valley, CA

NewsBlur’s Twitter support just got a whole lot better


It was a little under a year ago that I declared Twitter back, baby on this blog. In that time, NewsBlur users have created over 80,000 Twitter feeds in NewsBlur. Since it’s such a popular feature, I decided to dive back into the code and make tweets look a whole lot better.

Notice that NewsBlur now natively supports expanding truncated URLs (no more t.co links).

And NewsBlur also supports native quoted tweets, where a user links to a tweet in their own tweet. NewsBlur expands the quoted tweet and blockquotes it for convenience.

Plus retweets now show both the original tweet author and the retweeting author. This means that you can quickly scan tweets and see where the retweet originated from. And retweeted tweets that quote their own tweets also get expanded.

It’s almost as if NewsBlur is inching closer and closer to becoming its own full fledged Twitter client. While NewsBlur already hit Zawinski’s Law (“Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”) by supporting email-newsletters-to-rss, Twitter is coming up fast.

Speaking of which, I have this idea I’ve been noodling about better supporting Twitter habits that need to become less of a habit. I want to be able to automatically follow people from my Twitter tweetstream in NewsBlur based on the frequency of their posting. I want to be able to infrequently dip into Twitter but still read the tweets from people who only post once a week or once a day.

In other words, I want Twitter, the RSS killer, to better integrate with an RSS reader so that I can pull out the good stuff from the unending flow of tweets. RSS means never missing anything, but Twitter’s main use case is anathema to how we use RSS. I don’t like to preannounce features, but this one intrigues me and if you agree, please share this story to let me know or to give me feedback on how you would like to see NewsBlur be a better Twitter client.

4 public comments
347 days ago
I read every tweet, and twitter makes that very hard to do (which is why I'm a happy user of Talon), so that sounds great :) Infrequent tweeters are often my favorites.

Personally, I'd be happy with a twitter folder, with a "feed" for each follow. With a bit of thumbs-training (and a per-folder focus setting? or something) I'd probably never go back to twitter.com at all.
Silicon Valley, CA
347 days ago
I like the last idea (I currently have some users as columns in Tweetdeck, but RSS may be a cleaner solution).

How about a Yahoo Pipes-like combination of several users' Twitter feeds that makes up one RSS subscription?
santa clara, CA
345 days ago
Well you can read multiple twitter feeds in a folder, and with Focus mode, you can already filter them. And folders have RSS feeds, so you could then pipe that out somewhere else.
342 days ago
Ah! That's a good idea. Thanks for the tip!
347 days ago
I'm thinking something more automatic than post freq as training. A slider or segmented control and you are subscribed to users who post at the frequency or below. Possibly even multiple sub folders with different frequencies: 1 week, 1 day, and then all the rest.
The Haight in San Francisco
347 days ago
You say "RSS means never missing anything", but to me that isn't true. Posts more than 30 days old cannot be in an "unread" state. The marking as read that occurs automatically after 30 days is literally "missing something". I can't find it anymore among the rest, once it's been marked as read.
347 days ago
I think a month is a fine line to draw. Everything on the Internet, at a high enough level, is measured in monthly usage. If you're not using NewsBlur once a month then you're not really using NewsBlur.
347 days ago
Certainly one month is fine for news items. But if I want to e.g. accumulate a hundred updates of an ongoing story-intense webcomic, then read them all in one go, I have to track manually where I last stopped, because unread doesn't mean unread. In other words, NewsBlur is not a general-purpose RSS reader; it is specialized for reading recent news. Well, the name says it upfront, so I guess I can't complain. News... And Blur.
347 days ago
To be fair, those stories aren't gone, they just aren't unread. That's why the default is "All Stories" and not just "Unread Stories".
346 days ago
The unread stories effectively disappear into the void though. I've definitely lost posts that I would have liked to read at some point this way. It would be cool if there was some sort of banner for recently/soon-to-be autoread posts. Automatically sending them to pocket would also be a useful feature.
345 days ago
They are not gone, they are lost in the haystack. If I can't find them, that's the same thing.
347 days ago
I think I'd like to have my entire feed in a separate drawer here and then be able to use training to push users I follow down (or up) in the visibility stack. Right now NB and TW are the only two pinned tabs in my browser. Getting this down to one would be fantastic. Post frequency as a function of training would be a really nice feature compression.
Cary, NC

Best-Tasting Colors

I recognize that chocolate is its own thing on which reasonable people may differ. Everything else here is objective fact.
3 public comments
402 days ago
Single most contentious xkcd in history, both now and in the future.
Silicon Valley, CA
402 days ago
You shut your ***** mouth about black licorice, Randall.
402 days ago
And what's this blasphemy about coffee?
401 days ago
Putting green apple at 60% good and cotton candy at 90% good but coffee at 75% bad are the rantings of someone with the obvious palate of a child!
402 days ago
Mint being better than lime is a blatant falsehood.

O2 Bluetooth Earphone by Shane Li


O2 Bluetooth Earphone by Shane Li

Tangled cords are a daily earphone annoyance, which makes Bluetooth models an appealing option. Shane Li has created a concept that not only keeps your cords neat, it has a built-in power bank that charges the earphones while you’re carrying them around. The O2 Bluetooth Earphone is a compact unit that easily fits in your pocket until you’re ready to use it.

When not in use, the earphones plug into the unit to charge and are always ready to use.

An O2 mobile app lets you adjust the volume and monitor the battery life.

1 public comment
422 days ago
Practical. A rare treat in concept art. If we ever get good Bluetooth audio quality I'd probably consider this.
Silicon Valley, CA

What Color is Your Function?


I don’t know about you, but nothing gets me going in the morning quite like a good old-fashioned programming language rant. It stirs the blood to see someone skewer one of those “blub” languages the plebeians use, muddling through their day with it between furtive visits to StackOverflow.

(Meanwhile, you and I only use the most enlightened of languages. Chisel-sharp tools designed for the manicured hands of expert craftspersons such as ourselves.)

Of course, as the author of said screed, I run a risk. The language I mock could be one you like! Without realizing it, I could have let the rabble into my blog, pitchforks and torches at the ready, and my foolhardy pamphlet could draw their ire!

To protect myself from the heat of those flames, and to avoid offending your possibly delicate sensibilities, instead, I’ll rant about a language I just made up. A strawman whose sole purpose is to be set aflame.

I know, this seems pointless, right? Trust me, by the end, we’ll see whose face (or faces!) has been painted on his straw noggin.

A new language

Learning an entire new (crappy) language just for a blog post is a tall order, so let’s say it’s mostly similar to one you and I already know. We’ll say it has syntax sorta like JS. Curly braces and semicolons. if, while, etc. The lingua franca of the programming grotto.

I’m picking JS not because that’s what this post is about. It’s just that it’s the language you, statistical representation of the average reader, are most likely to be able to grok. Voilà:

function thisIsAFunction() {
  return "It's awesome";
}

Because our strawman is a modern (shitty) language, we also have first-class functions. So you can make something like:

// Return a list containing all of the elements in collection
// that match predicate.
function filter(collection, predicate) {
  var result = [];
  for (var i = 0; i < collection.length; i++) {
    if (predicate(collection[i])) result.push(collection[i]);
  }
  return result;
}

This is one of those higher-order functions, and, like the name implies, they are classy as all get out and super useful. You’re probably used to them for mucking around with collections, but once you internalize the concept, you start using them damn near everywhere.

Maybe in your testing framework:

describe("An apple", function() {
  it("ain't no orange", function() {
    expect("apple").not.toBe("orange");
  });
});

Or when you need to parse some data:

tokens.match(Token.LEFT_BRACKET, function(token) {
  // Parse a list literal...
});

So you go to town and write all sorts of awesome reusable libraries and applications passing around functions, calling functions, returning functions. Functapalooza.

What color is your function?

Except wait. Here’s where our language gets screwy. It has this one peculiar feature:

1. Every function has a color.

Each function—anonymous callback or regular named one—is either red or blue. Since my blog’s code highlighter can’t handle actual color, we’ll say the syntax is like:

bluefunction doSomethingAzure() {
  // This is a blue function...
}

redfunction doSomethingCarnelian() {
  // This is a red function...
}

There are no colorless functions in the language. Want to make a function? Gotta pick a color. Them’s the rules. And, actually, there are a couple more rules you have to follow too:

2. The way you call a function depends on its color.

Imagine a “blue call” syntax and a “red call” syntax. Something like:

doSomethingAzure()•blue;
doSomethingCarnelian()•red;
If you get it wrong—call a red function with •blue after the parentheses or vice versa—it does something bad. Dredge up some long-forgotten nightmare from your childhood like a clown with snakes for arms under your bed. That jumps out of your monitor and sucks out your vitreous humour.

Annoying rule, right? Oh, and one more:

3. You can only call a red function from within another red function.

You can call a blue function from within a red one. This is kosher:

redfunction doSomethingCarnelian() {
  doSomethingAzure()•blue;
}

But you can’t go the other way. If you try to do this:

bluefunction doSomethingAzure() {
  doSomethingCarnelian()•red;
}

Well, you’re gonna get a visit from old Spidermouth the Night Clown.

This makes writing higher-order functions like our filter() example trickier. We have to pick a color for it and that affects the colors of the functions we’re allowed to pass to it. The obvious solution is to make filter() red. That way, it can take either red or blue functions and call them. But then we run into the next itchy spot in the hairshirt that is this language:

4. Red functions are more painful to call.

For now, I won’t precisely define “painful”, but just imagine that the programmer has to jump through some kind of annoying hoops every time they call a red function. Maybe it’s really verbose, or maybe you can’t do it inside certain kinds of statements. Maybe you can only call them on line numbers that are prime.

What matters is that, if you decide to make a function red, everyone using your API will want to spit in your coffee and/or deposit some even less savory fluids in it.

The obvious solution then is to never use red functions. Just make everything blue and you’re back to the sane world where all functions have the same color, which is equivalent to them all having no color, which is equivalent to our language not being entirely stupid.

Alas, the sadistic language designers—and we all know all programming language designers are sadists, don’t we?—jabbed one final thorn in our side:

5. Some core library functions are red.

There are some functions built in to the platform, functions that we need to use, that we are unable to write ourselves, that only come in red. At this point, a reasonable person might think the language hates us.

It’s functional programming’s fault!

You might be thinking that the problem here is we’re trying to use higher-order functions. If we just stop flouncing around in all of that functional frippery and write normal blue collar first-order functions like God intended, we’d spare ourselves all the heartache.

If our function only calls blue functions, we make it blue. Otherwise, we make it red. As long as we never make functions that accept functions, we don’t have to worry about trying to be “polymorphic over function color” (polychromatic?) or any nonsense like that.

But, alas, higher order functions are just one example. This problem is pervasive any time we want to break our program down into separate functions that get reused.

For example, let’s say we have a nice little blob of code that, I don’t know, implements Dijkstra’s algorithm over a graph representing how much the people in your social network are crushing on each other. (I spent way too long trying to decide what such a result would even represent. Transitive undesirability?)

Later, you end up needing to use this same blob of code somewhere else. You do the natural thing and hoist it out into a separate function. You call it from the old place and your new code that uses it. But what color should it be? Obviously, you’ll make it blue if you can, but what if it uses one of those nasty red-only core library functions?

What if the new place you want to call it is blue? You’ll have to turn it red. Then you’ll have to turn the function that calls it red. Ugh. No matter what, you’ll have to think about color constantly. It will be the sand in your swimsuit on the beach vacation of development.

A colorful allegory

Of course, I’m not really talking about color here, am I? It’s an allegory, a literary trick. The Sneetches isn’t about stars on bellies, it’s about race. By now, you may have an inkling of what color actually represents. If not, here’s the big reveal:

Red functions are asynchronous ones.

If you’re programming in JavaScript on Node.js, every time you define a function that “returns” a value by invoking a callback, you just made a red function. Look back at that list of rules and see how my metaphor stacks up:

  1. Synchronous functions return values, async ones do not and instead invoke callbacks.

  2. Synchronous functions give their result as a return value, async functions give it by invoking a callback you pass to it.

  3. You can’t call an async function from a synchronous one because you won’t be able to determine the result until the async one completes later.

  4. Async functions don’t compose in expressions because of the callbacks, have different error-handling, and can’t be used with try/catch or inside a lot of other control flow statements.

  5. Node’s whole shtick is that the core libs are all asynchronous. (Though they did dial that back and start adding ___Sync() versions of a lot of things.)

When people talk about “callback hell” they’re talking about how annoying it is to have red functions in their language. When they create 4089 libraries for doing asynchronous programming, they’re trying to cope at the library level with a problem that the language foisted onto them.

I promise the future is better

People in the Node community have realized that callbacks are a pain for a long time, and have looked around for solutions. One technique that gets a bunch of people excited is promises, which you may also know by their rapper name “futures”.

These are sort of a jacked up wrapper around a callback and an error handler. If you think of passing a callback and errorback to a function as a concept, a promise is basically a reification of that idea. It’s a first-class object that represents an asynchronous operation.

I just jammed a bunch of fancy PL language in that paragraph so it probably sounds like a sweet deal, but it’s basically snake oil. Promises do make async code a little easier to write. They compose a bit better, so rule #4 isn’t quite so onerous.
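A sketch of what “compose a bit better” buys you: nesting flattens into a chain, but every step is still a function literal handed to .then(), and the outer function is still irrevocably red. The step names here are hypothetical stubs, not a real API:

```javascript
// Hypothetical async steps, stubbed with resolved promises so the
// sketch runs on its own:
const scoopIceCream = () => Promise.resolve("ice cream");
const warmUpCaramel = () => Promise.resolve("caramel");
const pourOnIceCream = (a, b) => a + " + " + b;

// Promise version: shallower nesting than raw callbacks, but still a
// pile of function literals, and the caller still gets a Promise,
// not a sundae.
function makeSundae() {
  return scoopIceCream().then(iceCream =>
    warmUpCaramel().then(caramel => pourOnIceCream(iceCream, caramel)));
}
```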

But, honestly, it’s like the difference between being punched in the gut versus punched in the privates. Less painful, yes, but I don’t think anyone should really get thrilled about the value proposition.

You still can’t use them with exception handling or other control flow statements. You still can’t call a function that returns a future from synchronous code. (Well, you can, but if you do, the person who later maintains your code will invent a time machine, travel back in time to the moment that you did this and stab you in the face with a #2 pencil.)

You’ve still divided your entire world into asynchronous and synchronous halves and all of the misery that entails. So, even if your language features promises or futures, its face looks an awful lot like the one on my strawman.

(Yes, that means even Dart, the language I work on. That’s why I’m so excited some of the team are experimenting with other concurrency models.)

I’m awaiting a solution

C# programmers are probably feeling pretty smug right now (a condition they’ve increasingly fallen prey to as Hejlsberg and company have piled sweet feature after sweet feature into the language). In C#, you can use the await keyword to invoke an asynchronous function.

This lets you make asynchronous calls just as easily as you can synchronous ones, with the tiny addition of a cute little keyword. You can nest await calls in expressions, use them in exception handling code, stuff them inside control flow. Go nuts. Make it rain await calls like they’re dollars in the advance you got for your new rap album.

Async-await is nice, which is why we’re adding it to Dart. It makes it a lot easier to write asynchronous code. You know a “but” is coming. It is. But… you still have divided the world in two. Those async functions are easier to write, but they’re still async functions.

You’ve still got two colors. Async-await solves annoying rule #4: they make red functions not much worse to call than blue ones. But all of the other rules are still there:

  1. Synchronous functions return values, async ones return Task<T> (or Future<T> in Dart) wrappers around the value.

  2. Sync functions are just called, async ones need an await.

  3. If you call an async function you’ve got this wrapper object when you actually want the T. You can’t unwrap it unless you make your function async and await it. (But see below.)

  4. Aside from a liberal garnish of await, we did at least fix this.

  5. C#’s core library is actually older than async so I guess they never had this problem.

It is better. I will take async-await over bare callbacks or futures any day of the week. But we’re lying to ourselves if we think all of our troubles are gone. As soon as you start trying to write higher-order functions, or reuse code, you’re right back to realizing color is still there, bleeding all over your codebase.
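JavaScript itself has since gained the same feature, so the surviving rules can be demonstrated directly. A sketch (rule #1's wrapper is observable even from synchronous code, and rule #3 forces the caller to change color):

```javascript
// Rule 1: an async function returns a Promise wrapper, not the value.
async function fetchCount() {   // red
  return 42;                    // actually returns Promise<42>
}

// Rule 3: a synchronous function can see the wrapper but can't
// unwrap it without becoming async itself.
function syncCaller() {         // blue
  const p = fetchCount();
  return p instanceof Promise;  // all we can get our hands on
}

async function asyncCaller() {  // red: had to change color to unwrap
  return (await fetchCount()) + 1;
}
```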

What language isn’t colored?

So JS, Dart, C#, and Python have this problem. CoffeeScript and most other languages that compile to JS do too (which is why Dart inherited it). I think even ClojureScript has this issue even though they’ve tried really hard to push against it with their core.async stuff.

Wanna know one that doesn’t? Java. I know right? How often do you get to say, “Yeah, Java is the one that really does this right.”? But there you go. In their defense, they are actively trying to correct this oversight by moving to futures and async IO. It’s like a race to the bottom.

C# can actually avoid this problem too; its designers opted in to having color. Before they added async-await and all of the Task<T> stuff, you just used regular sync API calls. Three more languages that don’t have this problem: Go, Lua, and Ruby.

Any guess what they have in common?

Threads. Or, more precisely: multiple independent callstacks that can be switched between. It isn’t strictly necessary for them to be operating system threads. Goroutines in Go, coroutines in Lua, and fibers in Ruby are perfectly adequate.

(That’s why C# has that little caveat. You can avoid the pain of async in C# by using threads.)

Remembrance of operations past

The fundamental problem is “How do you pick up where you left off when an operation completes?” You’ve built up some big callstack and then you call some IO operation. For performance, that operation uses the operating system’s underlying asynchronous API. You cannot wait for it to complete because it won’t. You have to return all the way back to your language’s event loop and give the OS some time to spin before it will be done.

Once it is, you need to resume what you were doing. The usual way a language “remembers where it is” is the callstack. That tracks all of the functions that are currently being invoked and where the instruction pointer is in each one.

But to do async IO, you have to unwind and discard the entire C callstack. Kind of a Catch-22. You can do super fast IO, you just can’t do anything with the result! Every language that has async IO in its bowels—or in the case of JS, the browser’s event loop—copes with this in some way.

Node with its ever-marching-to-the-right callbacks stuffs all of those callframes in closures. When you do:

function makeSundae(callback) {
  scoopIceCream(function (iceCream) {
    warmUpCaramel(function (caramel) {
      callback(pourOnIceCream(iceCream, caramel));
    });
  });
}
Each of those function expressions closes over all of its surrounding context. That moves parameters like iceCream and caramel off the callstack and onto the heap. When the outer function returns and the callstack is trashed, it’s cool. That data is still floating around the heap.

The problem is you have to manually reify every damn one of these steps. There’s actually a name for this transformation: continuation-passing style. It was invented by language hackers in the 70s as an intermediate representation to use in the guts of their compilers. It’s a really bizarro way to represent code that happens to make some compiler optimizations easier to do.

No one ever for a second thought that a programmer would write actual code like that. And then Node came along and all of the sudden here we are pretending to be compiler back-ends. Where did we go wrong?

Note that promises and futures don’t actually buy you anything, either. If you’ve used them, you know you’re still hand-creating giant piles of function literals. You’re just passing them to .then() instead of to the asynchronous function itself.

Awaiting a generated solution

Async-await does help. If you peel back your compiler’s skull and see what it’s doing when it hits an await call you’d see it actually doing the CPS-transform. That’s why you need to use await in C#: it’s a clue to the compiler to say, “break the function in half here”. Everything after the await gets hoisted into a new function that it synthesizes on your behalf.

This is why async-await didn’t need any runtime support in the .NET framework. The compiler compiles it away to a series of chained closures that it can already handle. (Interestingly, closures themselves also don’t need runtime support. They get compiled to anonymous classes. In C#, closures really are a poor man’s objects.)
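In JavaScript terms, the split looks roughly like this. This is a sketch of the idea, not what any particular compiler literally emits, and fetchNumber is a made-up helper:

```javascript
// Made-up async helper for illustration.
function fetchNumber() { return Promise.resolve(42); }

// What you write:
async function doubled() {
  const n = await fetchNumber(); // the function gets broken in half here
  return n * 2;
}

// Roughly what the compiler turns it into: everything after the
// await is hoisted into a synthesized continuation closure.
function doubledDesugared() {
  return fetchNumber().then(function continuation(n) {
    return n * 2;
  });
}
```

Both versions produce the same promise; the compiler just writes the continuation for you.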

You might be wondering when I’m going to bring up generators. Does your language have a yield keyword? Then it can do something very similar.

(In fact, I believe generators and async-await are isomorphic. I’ve got a bit of code floating around in some dark corner of my hard disc that implements a generator-style game loop using only async-await.)
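The correspondence runs the other way too: a few lines of driver code can pump a generator and treat each yielded promise like an await. This is roughly what libraries in the co style did before async-await landed; it is a minimal sketch (no error-injection via gen.throw), and fetchNumber is again a made-up helper.

```javascript
// Drive a generator to completion, resuming it whenever a
// yielded promise settles. yield behaves like await.
function run(genFn) {
  const gen = genFn();
  return new Promise(function (resolve, reject) {
    function step(value) {
      const result = gen.next(value);
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(step, reject);
    }
    step(undefined);
  });
}

// Made-up async helper for illustration.
function fetchNumber() { return Promise.resolve(21); }

// Reads just like async-await, spelled with function* and yield.
const answer = run(function* () {
  const n = yield fetchNumber();
  return n * 2;
});
```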

Where was I? Oh, right. So with callbacks, promises, async-await, and generators, you ultimately end up taking your asynchronous function and smearing it out into a bunch of closures that live over in the heap.

Your function passes the outermost one into the runtime. When the event loop or IO operation is done, it invokes that function and you pick up where you left off. But that means everything above you also has to return. You still have to unwind the whole stack.

This is where the “red functions can only be called by red functions” rule comes from. You have to closurify the entire callstack all the way back to main() or the event handler.

Reified callstacks

But if you have threads (green- or OS-level), you don’t need to do that. You can just suspend the entire thread and hop straight back to the OS or event loop without having to return from all of those functions.

Go is the language that does this most beautifully in my opinion. As soon as you do any IO operation, it just parks that goroutine and resumes any other ones that aren’t blocked on IO.

If you look at the IO operations in the standard library, they seem synchronous. In other words, they just do work and then return a result when they are done. But it’s not that they’re synchronous in the sense that it would mean in JavaScript. Other Go code can run while one of these operations is pending. It’s that Go has eliminated the distinction between synchronous and asynchronous code.

Concurrency in Go is a facet of how you choose to model your program, and not a color seared into each function in the standard library. This means all of the pain of the five rules I mentioned above is completely and totally eliminated.

So, the next time you start telling me about some new hot language and how awesome its concurrency story is because it has asynchronous APIs, now you’ll know why I start grinding my teeth. Because it means you’re right back to red functions and blue ones.

1 public comment
There's another color-spectrum that Go's making more difficult, which is "serial" vs "concurrent". By pushing things to be goroutine-able, they're inherently pushing everything to be concurrency-aware, which is a famously hard problem in programming.

But the post is *excellent*. Totally worth a read.

Art in the Shadows: Everyday Objects Cast Unexpected Shapes Onto Paper

1 Comment
[ By SA Rogers in Art & Drawing & Digital. ]


Has doodling ever been more creative than this? While most people wouldn’t give a second’s thought to the shape an everyday object’s shadow casts upon adjacent surfaces, artist Vincent Bal looks at them and sees the beginnings of a character or scene. It might be a phone charger, a fallen leaf, a drinking glass or a Christmas ornament, but in its shadow, Bal sees far more than the object itself.

Each of Bal’s quick and clever illustrations is a testament to the creativity of an artist’s brain. Calling his work ‘shadowology,’ Bal plays around with silhouettes and light sources to find inspiration for sketches most people would never dream up. It takes the game of finding shapes in the clouds and applies an artist’s hand to the process, embellishing the shapes into something more.

Calling himself a ‘filmmaker and doodler and procrastinator from Belgium’, Bal shares his work on his popular Instagram account and sells prints on Etsy.




1 public comment
Simple and clever. I love stuff like this.