I doth whip mine hair to the aft and to the fore.

Adventures in Go: Accessing Unexported Functions


I’ve been learning the Go programming language in my favorite way: by writing a Go interpreter in Go. The source code so far is on GitHub, but the point of this post is to tell the story of a particularly interesting challenge I ran into: programmatically accessing unexported (a.k.a. private) functions in other packages. In the process of figuring this out, I learned about lots of escape hatches and limitations of Go. The result of the adventure is a little library I made and put on GitHub called go-forceexport. If you want a TLDR, you can just read the source code, but hopefully you’ll find the adventure interesting as well!

My perspective in this post is someone who has plenty of experience with programming both low-level and high-level languages, but is new to the Go language and curious about its internals and how it compares to other languages. In both the exploration process and the process of writing this post, I learned quite a bit about the right way to think about Go. Hopefully by reading this you’ll be able to learn some of those lessons and also share the same curiosity.

What are unexported functions and why would I want to call one?

In Go, capitalization matters, and determines whether a name can be accessed from the outside world. For example:

func thisFunctionIsUnexported() {
    fmt.Println("This function starts with a 't' and can only be called from this package.")
}

func ThisFunctionIsExported() {
    fmt.Println("This function starts with a 'T' and can be called from anywhere.")
}

Other languages use the terms “private” and “public” for this distinction, but in Go, they’re called unexported and exported.

But what about when you just want to hack and explore, and you can’t easily modify the code in question? In Python, you might see a name starting with an underscore, like _this_function_is_private, meaning that it’s rude to call it from the outside world, but the runtime doesn’t try to stop you. In Java, you can generally defeat the private keyword using reflection and the setAccessible method. Neither of these are good practice in professional code, but the flexibility is nice if you’re trying to figure out what’s going wrong in a library or if you want to build a proof of concept that you’ll later make more professional.

It also can be used as a substitute when other ways of exploring aren’t available. In Python, nothing is compiled, so you can add print statements to the standard library or hack the code in other ways and it’ll just work. Java has an excellent debugging story, so you can learn a lot about library code by stepping through it in an IDE. In Go, neither of these approaches are very pleasant (as far as I’ve seen), so calling internal functions can sometimes be the next best thing.

In my specific case, the milestone I was trying to achieve was for my interpreter to be able to successfully run the time.Now function in the standard library. Let’s take a look at the relevant part of time.go:

// Provided by package runtime.
func now() (sec int64, nsec int32)

// Now returns the current local time.
func Now() Time {
    sec, nsec := now()
    return Time{sec + unixToInternal, nsec, Local}
}

The unexported function now is implemented in assembly language and gets the time as a pair of primitive values. The exported function Now wraps that result in a struct called Time with some convenience methods on it (not shown).

So what does it take to get an interpreter to correctly evaluate time.Now? We’ll need at least these pieces:

  • Parse the file into a syntax tree. Go’s parser and ast packages are a big help here.
  • Transform the struct definition for Time (defined elsewhere in the file) and its methods into some representation known to the interpreter.
  • Implement multiple assignment, struct creation, and the other operations used in Now.
  • While evaluating the line sec, nsec := now(), my interpreter should notice that now doesn’t have a Go implementation, and prefer to just call the real time.now. (There are other possible approaches, but this one seemed reasonable; a small sketch of this detection follows the list.)
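
To make those two pieces concrete, here is a minimal sketch (mine, not from the original post) of using the parser and ast packages to walk a file and spot functions that have no Go body; the source string is a stand-in for the real time.go:

package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

func main() {
	// Stand-in source; the interpreter would read the real time.go instead.
	src := `package time

func now() (sec int64, nsec int32)

func Now() int64 { sec, _ := now(); return sec }`

	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "time.go", src, 0)
	if err != nil {
		panic(err)
	}

	// A FuncDecl with a nil Body is exactly the "no Go implementation" case.
	for _, decl := range file.Decls {
		if fn, ok := decl.(*ast.FuncDecl); ok {
			fmt.Printf("%s has body: %v\n", fn.Name.Name, fn.Body != nil)
		}
	}
}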

To prove that that last bullet point was possible, I wanted to write a quick dummy program that just called time.now (even if it needed some hacky mechanism), but this ended up being a lot harder than I was expecting. Most discussions on the internet basically said “don’t do that”, but I decided that I wouldn’t give up so easily.

A related goal is that I wanted a way to take a string name of a function and get that function back. It’s worth noting that it’s totally unclear if I should expect this problem to be solvable in the first place. In C, there’s no way to do it, in Java it’s doable, and in higher-level scripting languages it’s typically pretty easy. Go seems to be somewhere between C and Java in terms of the reflection capabilities that I expect, so I might be attempting something that simply can’t be done.

Attempt #1: reflect

Reflection is the answer in Java, so maybe in Go it’s the same way? Sure enough, Go has a great reflect package that works in a lot of cases, and even lets you read unexported struct fields by name, but it doesn’t seem to have any way to provide access to top-level functions (exported or unexported).
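
For comparison, here is roughly what reading an unexported struct field looks like (a sketch; the toy struct stands in for one defined in another package). Reading the value works, although calling Interface() or Set() on it would panic, and there is no equivalent lookup for top-level functions:

package main

import (
	"fmt"
	"reflect"
)

type wrapper struct {
	hidden int // imagine this field lives in some other package
}

func main() {
	w := wrapper{hidden: 42}
	// FieldByName finds unexported fields; reading is allowed, but
	// Interface() and Set() on the resulting Value would panic.
	v := reflect.ValueOf(w).FieldByName("hidden")
	fmt.Println(v.Int()) // prints 42
}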

In a language like Python, an expression like time.now would take the time object and pull off a field called now. So you might hope to do something like this:

reflect.ValueOf(time).MethodByName("now").Call([]reflect.Value{})

But alas, in Go, time.now is resolved at compile time, and time isn’t its own object that can be accessed like that. So it seems like reflect doesn’t provide an easy answer here.

Attempt #2: runtime

While I was exploring, I noticed runtime.FuncForPC as a way to programmatically get the name of any function:

runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
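
For example, this little program (a sketch, not from the post) round-trips a function value into its symbol name:

package main

import (
	"fmt"
	"math"
	"reflect"
	"runtime"
)

func main() {
	// Prints "math.Sqrt": the runtime keeps a name for every compiled function.
	pc := reflect.ValueOf(math.Sqrt).Pointer()
	fmt.Println(runtime.FuncForPC(pc).Name())
}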

I dug into the implementation, and sure enough, the Go runtime package keeps a table of all functions and their names, provided by the linker. Relevant snippets from symtab.go:

var firstmoduledata moduledata  // linker symbol

func findmoduledatap(pc uintptr) *moduledata {
    for datap := &firstmoduledata; datap != nil; datap = datap.next {
        ...
    }
}

func findfunc(pc uintptr) *_func {
    datap := findmoduledatap(pc)
    ...
}

func FuncForPC(pc uintptr) *Func {
    return (*Func)(unsafe.Pointer(findfunc(pc)))
}

The moduledata struct isn’t particularly friendly, but it looks like if I could access it, then I should, theoretically, be able to loop through it to find a pointer to a function with name "time.now". With a function pointer, it should hopefully be possible to find a way to call it.

Unfortunately, we’re at the same place we started. I can’t access firstmoduledata, findmoduledatap, or findfunc for the same reason that I can’t access time.now. I looked through the package to find some place where maybe it leaks a useful pointer, but I couldn’t find anything. Drat.

If I were desperate, I might attempt to guess function pointers and call FuncForPC until I found one with the right name. But that seemed like a recipe for disaster, so I decided to look at other approaches.

Attempt #3: jump down to assembly language

An escape hatch that should definitely work is to just write my code in assembly language. It should be possible to write an assembly function that calls time.now, then connect that function to a Go function. I cloned the Go source code and took a look at the Darwin AMD64 implementation of time.now itself to see what it was like:

// func now() (sec int64, nsec int32)
TEXT time·now(SB),NOSPLIT,$0-12
    CALL    nanotime<>(SB)

    // generated code for
    //    func f(x uint64) (uint64, uint64) { return x/1000000000, x%100000000 }
    // adapted to reduce duplication
    MOVQ    AX, CX
    MOVQ    $1360296554856532783, AX
    MULQ    CX
    ADDQ    CX, DX
    RCRQ    $1, DX
    SHRQ    $29, DX
    MOVQ    DX, sec+0(FP)
    IMULQ   $1000000000, DX
    SUBQ    DX, CX
    MOVL    CX, nsec+8(FP)
    RET

Ugh. Maybe I shouldn’t be scared off by assembly, but learning the calling conventions and writing a separate wrapper for every function I wanted to call for every architecture didn’t seem pleasant. I decided to defer that idea and look at other options. Having a solution in pure Go would certainly be ideal.

Attempt #4: CGo

Another escape hatch that seemed promising is CGo, which is Go’s mechanism for directly calling C functions from Go code. Here’s a first attempt:

/*
int time·now();
int time_now_wrapper() {
    return time·now();
}
*/
import "C"

func callTimeNow() {
    C.time_now_wrapper()
}

And here’s the error that it gives:

Undefined symbols for architecture x86_64:
  "_time·now", referenced from:
      _time_now_wrapper in foo.cgo2.o
      __cgo_b552a62968a6_Cfunc_time_now_wrapper in foo.cgo2.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Hmm, it seems to be putting an underscore before every function name, which isn’t really what I want. Maybe there’s a way around that, but dealing with multiple return values in time·now seemed like it may be another barrier, and from my reading, CGo calls have a lot of overhead because it’s doing a lot of translation work so that you can integrate with existing C code. In Java speak, it seems like it’s JNA, not JNI. So while CGo seems useful, it looks like it’s not really the solution to my problem.

Attempt #5: go:linkname

As I was digging through the standard library source code I saw something interesting in the runtime package in stubs.go:

//go:linkname time_now time.now
func time_now() (sec int64, nsec int32)

Interesting! I had seen semantically-meaningful comments like this before (like with the CGo example, and also in other places), but I hadn’t seen this one. It looks like it’s saying “linker, please use time.now as the implementation of runtime.time_now”. Sure enough, the documentation suggests that this works, as long as your file imports unsafe. So I tried it out:

package main

import (
    "fmt"
    _ "unsafe"
)

//go:linkname time_now time.now
func time_now() (sec int64, nsec int32)

func main() {
    sec, nsec := time_now()
    fmt.Println(sec, nsec)
}

Let’s see what happens:

# command-line-arguments
./sandbox.go:9: missing function body for "time_now"

Drat. Isn’t allowing externally-implemented functions the whole point of the bodyless syntax? The spec certainly seems to think that it’s valid.

Just to see what would happen, I replaced the empty function body with a dummy implementation:

//go:linkname time_now time.now
func time_now() (sec int64, nsec int32) {
    return 0, 0
}

Then I tried again and got this error:

# command-line-arguments
2016/03/16 22:50:31 duplicate symbol time.now (types 1 and 1) in main and /usr/local/Cellar/go/1.6/libexec/pkg/darwin_amd64/runtime.a(sys_darwin_amd64)

When I don’t implement the function, it complains that there’s no implementation, but when I do implement it, it complains that the function is implemented twice! How frustrating!

Getting go:linkname to work

For a while, it seemed like the go:linkname approach wasn’t going to work out, but then I noticed something suspicious: the error formatting is different. It looks like the “missing function body” error is from the compiler, but the “duplicate symbol” error is from the linker. Why would the compiler care about a function body being missing, if it’s the linker’s job to make sure every symbol gets an implementation?

I decided to dig into the code for the compiler to see why it might be generating this error. Here’s what I found in pgen.go:

func compile(fn *Node) {
    ...
    if len(fn.Nbody.Slice()) == 0 {
        if pure_go != 0 || strings.HasPrefix(fn.Func.Nname.Sym.Name, "init.") {
            Yyerror("missing function body for %q", fn.Func.Nname.Sym.Name)
            return
        }

Something is causing that inner if statement to evaluate to true, and my function has nothing to do with init, so it looks like pure_go is nonzero when it should be zero. Searching for pure_go shows this compiler flag:

obj.Flagcount("complete", "compiling complete package (no C or assembly)", &pure_go)

Makes sense: if your code doesn’t have any way of defining external functions, then it’s friendlier to give an error at compile time with the location of the problem. But it looks like go:linkname was overlooked somewhere in the process. It certainly is a bit of an edge case.

After some searching, I found the culprit in build.go:

extFiles := len(p.CgoFiles) + len(p.CFiles) + len(p.CXXFiles) + len(p.MFiles) + len(p.FFiles) + len(p.SFiles) + len(p.SysoFiles) + len(p.SwigFiles) + len(p.SwigCXXFiles)
...
if extFiles == 0 {
    gcargs = append(gcargs, "-complete")
}

So it’s just counting the number of non-Go files of each type. Since I’m only compiling with Go files, it assumes that every function needs a body. But on the plus side, the code suggests a workaround: just add a file of any of those types. I already know how to use CGo, so let’s try that:

package main

import (
    "C"
    "fmt"
    _ "unsafe"
)

//go:linkname time_now time.now
func time_now() (sec int64, nsec int32)

func main() {
    sec, nsec := time_now()
    fmt.Println(sec, nsec)
}

And here’s what happens when you try to build that:

main.main: call to external function main.time_now
main.main: main.time_now: not defined
main.main: undefined: main.time_now

A different error! Now the linker is complaining that the function doesn’t exist. After some experimentation, I discovered that CGo seems to cause go:linkname to be disabled for that file. If I remove the import of "C" and move it to another file, then compile the two together, I get this output:

1458197809 407398202

It worked! If your only goal is to get access to time.now, then this is good enough, but I’m hoping that I can go a bit further.
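
For reference, the layout that finally builds is two files in the same package, something like this (a sketch; the second file's name is arbitrary, and this is against the Go 1.6-era toolchain used in the post):

// sandbox.go
package main

import (
	"fmt"
	_ "unsafe" // needed for go:linkname
)

//go:linkname time_now time.now
func time_now() (sec int64, nsec int32)

func main() {
	sec, nsec := time_now()
	fmt.Println(sec, nsec)
}

// cgohack.go
//
// Importing "C" anywhere in the package turns this into a cgo build, so the
// go tool stops passing -complete to the compiler and the bodyless
// declaration in sandbox.go is accepted.
package main

import "C"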

Looking up functions by name

Now that I know that go:linkname works, I can use it to access the firstmoduledata structure mentioned in attempt #2, which is a table containing information on all compiled functions in the binary. My hope is that I can use it to write a function that takes a function name as a string, like "time.now", and provides that function.

One problem is that runtime.firstmoduledata has type runtime.moduledata, which is an unexported type, so I can’t use it in my code. But as a total hack, I can just copy the struct to my code (or, at least, enough of it to keep the alignment correct) and pretend that my struct is the real thing. From there, I can pretty much copy the code from the runtime package to do a full scan through the list of functions until I find the right one:

func FindFuncWithName(name string) (uintptr, error) {
    for moduleData := &Firstmoduledata; moduleData != nil; moduleData = moduleData.next {
        for _, ftab := range moduleData.ftab {
            f := (*runtime.Func)(unsafe.Pointer(&moduleData.pclntable[ftab.funcoff]))
            if f.Name() == name {
                return f.Entry(), nil
            }
        }
    }
    return 0, fmt.Errorf("Invalid function name: %s", name)
}
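
The Firstmoduledata that this loop starts from comes out of the same go:linkname trick, together with the copied struct. Roughly (a sketch modeled on go-forceexport; the field list has to mirror runtime/symtab.go for the exact Go version in use, or the offsets of ftab, pclntable, and next will be wrong):

package forceexport

import (
	_ "unsafe" // for go:linkname
)

//go:linkname Firstmoduledata runtime.firstmoduledata
var Firstmoduledata Moduledata

// Moduledata must be a field-for-field copy of runtime.moduledata so that
// the fields used by FindFuncWithName sit at the right offsets.
type Moduledata struct {
	pclntable []byte
	ftab      []Functab
	filetab   []uint32
	// ... every remaining field of runtime.moduledata, in order ...
	next *Moduledata
}

// Functab mirrors runtime.functab: an entry point plus an offset into
// pclntable where the function metadata lives.
type Functab struct {
	entry   uintptr
	funcoff uintptr
}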

FindFuncWithName seems to work! This code:

ptr, _ := FindFuncWithName("math.Sqrt")
fmt.Printf("Found pointer 0x%x\nNormal function: %s", ptr, math.Sqrt)

prints this:

Found pointer 0x104250
Normal function: %!s(func(float64) float64=0x104250)

So the underlying code pointer is correct! Now we just need to figure out how to use it…

Calling a function by pointer

You would think that having a function pointer would be the end of the story. In C you could just cast the pointer value to the right function type, then call it. But Go isn’t quite so generous. For one, Go normally doesn’t just let you cast between types like that, but unsafe.Pointer can be used to circumvent some safety checks. You might try just casting it to a function of the proper type:

ptr, _ := FindFuncWithName("time.now")
timeNow := (func() (int64, int32))(unsafe.Pointer(ptr))
sec, msec := timeNow()

But that type of cast doesn’t compile; pointers can’t be cast to functions, not even using unsafe.Pointer. What if we literally cast it to a pointer to a func type?

ptr, _ := FindFuncWithName("time.now")
timeNow := (*func() (int64, int32))(unsafe.Pointer(ptr))
sec, msec := (*timeNow)()

This compiles, but crashes at runtime:

unexpected fault address 0xb01dfacedebac1e
fatal error: fault
[signal 0xb code=0x1 addr=0xb01dfacedebac1e pc=0x7e450]

(Look at that fault address. Apparently someone had a sense of humor.)

This isn’t a surprising outcome; functions in Go are first-class values, so their implementation is naturally more interesting than in C. When you pass around a func, you’re not just passing around a code pointer, you’re passing around a function value of some sort, and we’ll need to come up with a function value somehow if we’re to have any hope of calling our function. That function value needs to have our pointer as its underlying code pointer.

I didn’t see any obvious ways to create a function value from scratch, so I figured I’d take a different approach: take an existing function value and hack the code pointer to be the one I want. After spending some time reading how interfaces work in Go and reading the implementation of the reflect library, an approach that seemed promising was to treat the function as an interface{} (that’s Go’s equivalent of Object or void* or any: a type that includes every other type), which internally stores it as a (type, pointer) pair. Then I could pull the pointer off and work with it reliably. The reflect source code suggests that the code pointer (the pointer to the actual machine code) is the first value in a function object.

So, as a first attempt, I created a dummy function called timeNow, then defined some structs to make it easy to swap out its code pointer with the real time.now code pointer:

func timeNow() (int64, int32) {
    return 0, 0
}

type Interface struct {
    typ     unsafe.Pointer
    funcPtr *Func
}

type Func struct {
    codePtr uintptr
}

func main() {
    timeNowCodePtr, _ := FindFuncWithName("time.now")
    var timeNowInterface interface{} = timeNow
    timeNowInterfacePtr := (*Interface)(unsafe.Pointer(&timeNowInterface))
    timeNowInterfacePtr.funcPtr.codePtr = timeNowCodePtr
    sec, msec := timeNow()
    fmt.Println(sec, msec)
}

And, as you might guess, it crashed:

unexpected fault address 0x129e80
fatal error: fault
[signal 0xa code=0x2 addr=0x129e80 pc=0x20be]

After some experimenting, I discovered that the crash was happening even without calling the function. The crash was from the line timeNowInterfacePtr.funcPtr.codePtr = timeNowCodePtr. After double-checking that the pointers were what I expected, I realized the problem: the function object I was modifying was probably in the code segment, in read-only memory. Just like how the machine code isn’t going to change, Go expects that the timeNow function value isn’t going to change at runtime. What I really needed to do was allocate a function object on the heap so that I could safely change its underlying code pointer.

So how do you dynamically allocate a function in Go? That’s what lambdas are for, right? Let’s try using one! Instead of the top-level timeNow, we can write our main function like this (the only difference is the new definition of timeNow):

func main() {
    timeNowCodePtr, _ := FindFuncWithName("time.now")
    timeNow := func() (int64, int32) { return 0, 0 }
    var timeNowInterface interface{} = timeNow
    timeNowInterfacePtr := (*Interface)(unsafe.Pointer(&timeNowInterface))
    timeNowInterfacePtr.funcPtr.codePtr = timeNowCodePtr
    sec, msec := timeNow()
    fmt.Println(sec, msec)
}

And, again, it crashes. I’ve seen how lambdas work in other languages, so I suspected why: when a lambda doesn’t capture any outside variables, there’s no need to allocate a new one each time, so a common optimization is to share a single instance for simple lambdas like the one I wrote. In other words, I was probably again trying to write to the code segment. To work around this, we can trick the compiler into allocating a new function object each time by making the function a real closure and pulling in a variable from the outer scope (even a trivial one):

func main() {
    timeNowCodePtr, _ := FindFuncWithName("time.now")
    var x int64 = 0
    timeNow := func() (int64, int32) { return x, 0 }
    var timeNowInterface interface{} = timeNow
    timeNowInterfacePtr := (*Interface)(unsafe.Pointer(&timeNowInterface))
    timeNowInterfacePtr.funcPtr.codePtr = timeNowCodePtr
    sec, msec := timeNow()
    fmt.Println(sec, msec)
}

And it works!

1458245880 151691912

Turning it into a library

This code is almost useful, but wouldn’t really work as a library yet because it would require the function’s type to be hard-coded into the library. We could have the library caller pass in a function that will be modified, but that has gotchas like the read-only memory problem I ran into above.

Instead, I looked around at possible API approaches, and I got some nice inspiration from the example code for reflect.MakeFunc.

We’ll try writing a GetFunc function that can be used like this:

var timeNow func() (int64, int32)
GetFunc(&timeNow, "time.now")
sec, msec := timeNow()

But how can GetFunc allocate a function value? Above, we used a lambda expression, but that doesn’t work if the type isn’t known until runtime.

Reflection to the rescue! We can call reflect.MakeFunc to create a function value with a particular type. In this case, we don’t really care what the implementation is because we’re going to be modifying its code pointer anyway. We end up with a reflect.Value object with a memory layout like this:

[Image: Function pointer layout: the reflect.Value’s ptr field points at a heap-allocated function object whose first word is the code pointer.]

The ptr field in the reflect.Value definition is unexported, but we can use reflection on the reflect.Value to get it, then treat it as a pointer to a function object, then modify that function object’s code pointer to be what we want. The full code looks like this:

type Func struct {
    codePtr uintptr
}

func CreateFuncForCodePtr(outFuncPtr interface{}, codePtr uintptr) {
    // outFuncPtr is a pointer to a function, and outFuncVal acts as *outFuncPtr.
    outFuncVal := reflect.ValueOf(outFuncPtr).Elem()
    newFuncVal := reflect.MakeFunc(outFuncVal.Type(), nil)
    funcValuePtr := reflect.ValueOf(newFuncVal).FieldByName("ptr").Pointer()
    funcPtr := (*Func)(unsafe.Pointer(funcValuePtr))
    funcPtr.codePtr = codePtr
    outFuncVal.Set(newFuncVal)
}

And that’s it! That function modifies its argument to be the function at codePtr. Implementing the main GetFunc API is just a matter of tying together FindFuncWithName and CreateFuncForCodePtr; details are in the source code.
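
For the curious, the glue is only a few lines; roughly (a sketch of what go-forceexport does, minus some error handling):

// GetFunc points outFuncPtr (a pointer to a function variable of the right
// type) at the function whose symbol name is name, e.g. "time.now".
func GetFunc(outFuncPtr interface{}, name string) error {
	codePtr, err := FindFuncWithName(name)
	if err != nil {
		return err
	}
	CreateFuncForCodePtr(outFuncPtr, codePtr)
	return nil
}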

Future steps and lessons learned

This API still isn’t ideal; the library user still needs to know the type in advance, and if they get it wrong, there will be horrible consequences at runtime. At the end of the day, the library isn’t significantly more useful than go:linkname, but it has some advantages, and is a good starting point for more interesting tricks. It’s potentially possible, but probably harder, to make a function that takes a string and returns a reflect.Value of the function, which would be ideal. But that’s out of scope for now. Also, the README has a number of other warnings and things to consider. For example, this approach will sometimes completely break due to function inlining.

Go is certainly in an interesting place in the space of languages. It’s dynamic enough that it’s not crazy to look up a function by name, but it’s much more performance-focused than, say, Java. The reflection capabilities are good for a systems language, but sometimes the better escape hatch is to just use unsafe.Pointer.

I’d be happy to hear any feedback or corrections in the comments below. Like I mentioned, I’m still learning all of this stuff, so I probably overlooked some things and got some terminology wrong.

1 public comment
Groxx
25 days ago
Nicely written and very in-depth shenaniganry. Excellent.
Silicon Valley, CA

In pursuit of Otama's tone


It would be fun to use the Otamatone in a musical piece. But for someone used to keyboard instruments it's not so easy to play cleanly. It has a touch-sensitive (resistive) slider that spans roughly two octaves in just 14 centimeters, which makes it very sensitive to finger placement. And in any case, I'd just like to have a programmable virtual instrument that sounds like the Otamatone.

What options do we have, as hackers? Of course the slider could be replaced with a MIDI interface, so that we could use a piano keyboard to hit the correct frequencies. But what if we could synthesize a similar sound all in software?

Sampling via microphone

We'll have to take a look at the waveform first. The Otamatone has a piercing electronic-sounding tone to it. One is inclined to think the waveform is something quite simple, perhaps a sawtooth wave with some harmonic coloring. Such a primitive signal would be easy to synthesize.

[Image: A pink Otamatone in front of a microphone. Next to it a screenshot of Audacity with a periodic but complex waveform in it.]

A friend lent me her Otamatone for recording purposes. Turns out the wave is nothing that simple. It's not a sawtooth wave, nor a square wave, no matter how the microphone is placed. But it sounds like one! Why could that be?

I suspect this is because the combination of speaker and air interface filters out the lowest harmonics (and parts of the others as well) of square waves. But the human ear still recognizes the residual features of a more primitive kind of waveform.

We have to get to the source!

Sampling the input voltage to the Otamatone's speaker could reveal the original signal. Also, by recording both the speaker input and the audio recorded via microphone, we could perhaps devise a software filter to simulate the speaker and head resonance. Then our synthesizer would simplify into a simple generator and filter. But this would require opening up the instrument and soldering a couple of leads in, to make a Line Out connector. I'm not doing this to my friend's Otamatone, so I bought one of my own. I named it TÄMÄ.

[Image: A Black Otamatone with a cable coming out of its mouth into a USB sound card. A waveform with more binary nature is displayed on a screen.]

I soldered the left channel and ground to the same pads the speaker is connected to. I had no idea about the voltage range in advance, but fortunately it just happens to fit line level and not destroy my sound card. As you can see in the background, we've recorded a signal that seems to be a square wave with a low duty cycle.

This square wave seems to be superimposed with a much quieter sinusoidal "ring" at 584 Hz that gradually fades out in 30 milliseconds.

Next we need to map out the effect the finger position on the slider has on this signal. It seems to not only change the frequency but the duty cycle as well. This happens a bit differently depending on which one of the three octave settings (LO, MID, or HI) is selected.

The Otamatone has a huge musical range of over 6 octaves:

[Image: Musical notation showing a range of 6 octaves.]

In frequency terms this means roughly 55 to 3800 Hz.

The duty cycle changes according to where we are on the slider: from 33 % in the lowest notes to 5 % in the highest ones, on every octave setting. The frequency of the ring doesn't change, it's always at around 580 Hz, but it doesn't seem to appear at all on the HI setting.

So I had my Perl-based software synth generate a square wave whose duty cycle and frequency change according to given MIDI notes.
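
The synth itself is in Perl, but the core idea is small enough to sketch in Go (purely illustrative; the 440 Hz note, the 0.2 duty cycle, and the sample rate below are placeholder values, not measurements from the post):

package main

import "math"

// squareWave renders a square wave with the given frequency (Hz) and duty
// cycle (fraction of each period spent high) as samples in [-1, 1].
func squareWave(freq, duty, seconds float64, sampleRate int) []float64 {
	out := make([]float64, int(seconds*float64(sampleRate)))
	for i := range out {
		t := float64(i) / float64(sampleRate)
		phase := math.Mod(t*freq, 1) // position within the current cycle, 0..1
		if phase < duty {
			out[i] = 1
		} else {
			out[i] = -1
		}
	}
	return out
}

func main() {
	// One A4 note; writing the samples out as a WAV file is left out here.
	_ = squareWave(440, 0.2, 0.5, 44100)
}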

FIR filter 1: not so good

Raw audio generated this way doesn't sound right; it needs to be filtered to simulate the effects of the little speaker and other parts.

Ideally, I'd like to simulate the speaker and head resonances as an impulse response, by feeding well-known impulses into the speaker. The generated square wave could then be convolved with this response. But I thought a simpler way would be to create a custom FIR frequency response in REAPER, by visually comparing the speaker input and microphone capture spectra. When their spectra are laid on top of each other, we can read the required frequency response as the difference between harmonic powers, using the cursor in baudline. No problem, it's just 70 harmonics until we're outside hearing range!

[Image: Screenshot of Baudline showing lots of frequency spikes, and next to it a CSV list of dozens of frequencies and power readings in the Vim editor.]

I then subtracted one spectrum from another and manually created a ReaFir filter based on the extrema of the resulting graph.

[Image: Screenshot of REAPER's FIR filter editor, showing a frequency response made out of nodes and lines interpolated between them.]

Because the Otamatone's mouth can be twisted to make slightly different vowels, I recorded two spectra, one with the mouth fully closed and the other one as open as possible.

But this method didn't quite give the sound the piercing nasalness I was hoping for.

FIR filter 2: better

After all that work I realized the line connection works in both directions! I can just feed any signal and the Otamatone will sound it via the speaker. So I generated a square wave in Audacity, set its frequency to 35 Hz to accommodate 30 milliseconds of response, played it via one sound card and recorded via another one:

[Image: Two waveforms, the top one of which is a square wave and the bottom one has a slowly decaying signal starting at every square transition.]

The waveform below is called the step response. The simplest way to get a FIR convolution kernel is to just copy-paste one of the repetitions. Strictly, to get an impulse response would require us to sound a unit impulse, i.e. just a single sample at maximum amplitude, not a square wave. But I'm not redoing that since recording this was hard enough already. For instance, I had to turn off the fridge to minimize background noise. I forgot to turn it back on, and now I have a box of melted ice cream and a freezer that smells like salmon. The step response gives pretty good results.

One of my favorite audio tools, sox, can do FFT convolution with an impulse response. You'll have to save the impulse response as a whitespace-separated list of plaintext sample values, and then run sox original.wav convolved.wav fir response.csv.

Or one could use a VST plugin like FogConvolver:

[Image: A screenshot of Fog Convolver.]

A little organic touch

There's more to an instrument's sound than its frequency spectrum. The way the note begins and ends, the so-called attack and release, are very important cues for the listener.

The width of a player's finger on the Otamatone causes the pressure to be distributed unevenly at first, resulting in a slight glide in frequency. This also happens at note-off. The exact amount of Hertz to glide depends on the octave, and by experimentation I stuck with a slide-up of 5 % of the target frequency in 0.1 seconds.

It is also very difficult to hit the correct note, so we could add some kind of random tuning error. But it turns out this would be too much; I want the music to at least be in tune.

Glides (glissando) are possible with the virtual instrument by playing a note before releasing the previous one. This glissando also happens in 100 milliseconds. I think it sounds pretty good when used in moderation.

I read somewhere (Wikipedia?) that vibrato is also possible with the Otamatone. I didn't write a vibrato feature in the code itself, but it can be added using a VST plugin in REAPER (I use MVibrato from MAudioPlugins). I also added a slight flanger with inter-channel phase difference in the sample below, to make the sound just a little bit easier on the ears (but not too much).

Sometimes the Otamatone makes a short popping sound, perhaps when finger pressure is not firm enough. I added a few of these randomly after note-off.

Working with MIDI

We're getting on a side track, but anyway. Working with MIDI used to be straightforward on the Mac. But GarageBand, the tool I currently use to write music, amazingly doesn't have a MIDI export function. However, you can "File -> Add Region To Loop Library", then find the AIFF file in the loop library folder, and use a tool called GB2MIDI to extract MIDI data from it.

I used mididump from python-midi to read MIDI files.

Tyna Wind - lucid future vector

Here's TÄMÄ's beautiful synthesized voice singing us a song.

1 public comment
Groxx
192 days ago
Every blog post she makes is incredible.
Silicon Valley, CA

NewsBlur’s Twitter support just got a whole lot better


It was a little under a year ago that I declared Twitter back, baby on this blog. In that time, NewsBlur users have created over 80,000 Twitter feeds in NewsBlur. Since it’s such a popular feature, I decided to dive back into the code and make tweets look a whole lot better.

Notice that NewsBlur now natively supports expanding truncated URLs (no more t.co links).

And NewsBlur also supports native quoted tweets, where a user links to a tweet in their own tweet. NewsBlur expands the quoted tweet and blockquotes it for convenience.

Plus retweets now show both the original tweet author and the retweeting author. This means that you can quickly scan tweets and see where the retweet originated from. And retweeted tweets that quote their own tweets also get expanded.

It’s almost as if NewsBlur is inching closer and closer to becoming its own full fledged Twitter client. While NewsBlur already hit Zawinski’s Law (“Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.”) by supporting email-newsletters-to-rss, Twitter is coming up fast.

Speaking of which, I have an idea I’ve been noodling on about better supporting Twitter habits that need to become less of a habit. I want to be able to automatically follow people from my Twitter tweetstream in NewsBlur based on the frequency of their posting. I want to be able to infrequently dip into Twitter but still read the tweets from people who only post once a week or once a day.

In other words, I want Twitter, the RSS killer, to better integrate with an RSS reader so that I can pull out the good stuff from the unending flow of tweets. RSS means never missing anything, but Twitter’s main use case is anathema to how we use RSS. I don’t like to preannounce features, but this one intrigues me and if you agree, please share this story to let me know or to give me feedback on how you would like to see NewsBlur be a better Twitter client.

4 public comments
Groxx
404 days ago
I read every tweet, and twitter makes that very hard to do (which is why I'm a happy user of Talon), so that sounds great :) Infrequent tweeters are often my favorites.

Personally, I'd be happy with a twitter folder, with a "feed" for each follow. With a bit of thumbs-training (and a per-folder focus setting? or something) I'd probably never go back to twitter.com at all.
Silicon Valley, CA
TheRomit
404 days ago
I like the last idea (I currently have some users as columns in Tweetdeck but RSS may be a cleaner solution).

How about Yahoo Pipes-like combination of some users' Twitter feeds that make one RSS subscription?
santa clara, CA
samuel
402 days ago
Well you can read multiple twitter feeds in a folder, and with Focus mode, you can already filter them. And folders have RSS feeds, so you could then pipe that out somewhere else.
TheRomit
399 days ago
Ah! That's a good idea. Thanks for the tip!
samuel
404 days ago
I'm thinking something more automatic than post freq as training. A slider or segmented control and you are subscribed to users who post at the frequency or below. Possibly even multiple sub folders with different frequencies: 1 week, 1 day, and then all the rest.
The Haight in San Francisco
eldritchconundrum
404 days ago
You say "RSS means never missing anything", but to me that isn't true. Posts more than 30 days old cannot be in an "unread" state. The marking as read that occurs automatically after 30 days is literally "missing something". I can't find it anymore among the rest, once it's been marked as read.
samuel
404 days ago
I think a month is a fine line to draw. Everything on the Internet, at a high enough level, is measured in monthly usage. If you're not using NewsBlur once a month then you're not really using NewsBlur.
eldritchconundrum
404 days ago
Certainly one month is fine, for news items. But if I want to e.g. accumulate one hundred updates of an ongoing story-intense webcomic, then read them all in one go, then I have to track manually where I last stopped, because unread doesn't mean unread. In other words, NewsBlur is not a general-purpose RSS reader, it is specialized for reading recent news. Well, the name says it upfront, so I guess I can't complain. News... And Blur.
samuel
404 days ago
To be fair, those stories aren't gone, they just aren't unread. That's why the default is "All Stories" and not just "Unread Stories".
superlopuh
403 days ago
The unread stories effectively disappear into the void though. I've definitely lost posts that I would have liked to read at some point this way. It would be cool if there was some sort of banner for recently/soon-to-be autoread posts. Automatically sending them to pocket would also be a useful feature.
eldritchconundrum
402 days ago
They are not gone, they are lost in the haystack. If I can't find them, that's the same thing.
tingham
404 days ago
I think I'd like to have my entire feed in a separate drawer here and then be able to use training to push users I follow down (or up) in the visibility stack. Right now NB and TW are the only two pinned tabs in my browser. Getting this down to one would be fantastic. Post frequency as a function of training would be a really nice feature compression.
Cary, NC

Best-Tasting Colors

I recognize that chocolate is its own thing on which reasonable people may differ. Everything else here is objective fact.
3 public comments
Groxx
459 days ago
Single most contentious xkcd in history, both now and in the future.
Silicon Valley, CA
linforcer
459 days ago
You shut your ***** mouth about black licorice, Randall.
DragonJTS
459 days ago
And what's this blasphemy about coffee?
jodamiller
458 days ago
Putting green apple at 60% good and cotton candy at 90% good but coffee at 75% bad are the rantings of someone with the obvious palate of a child!
effingunicorns
459 days ago
Mint being better than lime is a blatant falsehood.

O2 Bluetooth Earphone by Shane Li


O2 Bluetooth Earphone by Shane Li

Tangled cords are a daily earphone annoyance which makes Bluetooth models an appealing option. Shane Li has created a concept that not only keeps your cords neat, it has a built-in power bank that charges the earphones while you’re carrying them around. The O2 Bluetooth Earphone is a compact unit that easily fits in your pocket until you’re ready to use it.

When not in use, the earphones plug into the unit to charge and are always ready to use.

An O2 mobile app lets you adjust the volume and monitor the battery life.

1 public comment
Groxx
479 days ago
Practical. A rare treat in concept art. If we ever get good Bluetooth audio quality I'd probably consider this.
Silicon Valley, CA

What Color is Your Function?


I don’t know about you, but nothing gets me going in the morning quite like a good old fashioned programming language rant. It stirs the blood to see someone skewer one of those “blub” languages the plebians use, muddling through their day with it between furtive visits to StackOverflow.

(Meanwhile, you and I only use the most enlightened of languages. Chisel-sharp tools designed for the manicured hands of expert craftspersons such as ourselves.)

Of course, as the author of said screed, I run a risk. The language I mock could be one you like! Without realizing it, I could have let the rabble into my blog, pitchforks and torches at the ready, and my foolhardy pamphlet could draw their ire!

To protect myself from the heat of those flames, and to avoid offending your possibly delicate sensibilities, instead, I’ll rant about a language I just made up. A strawman whose sole purpose is to be set aflame.

I know, this seems pointless right? Trust me, by the end, we’ll see whose face (or faces!) have been painted on his straw noggin.

A new language

Learning an entire new (crappy) language just for a blog post is a tall order, so let’s say it’s mostly similar to one you and I already know. We’ll say it has syntax sorta like JS. Curly braces and semicolons. if, while, etc. The lingua franca of the programming grotto.

I’m picking JS not because that’s what this post is about. It’s just that it’s the language you, statistical representation of the average reader, are most likely to be able to grok. Voilà:

function thisIsAFunction() {
  return "It's awesome";
}

Because our strawman is a modern (shitty) language, we also have first-class functions. So you can make something like this:

// Return a list containing all of the elements in collection
// that match predicate.
function filter(collection, predicate) {
  var result = [];
  for (var i = 0; i < collection.length; i++) {
    if (predicate(collection[i])) result.push(collection[i]);
  }
  return result;
}

This is one of those higher-order functions, and, like the name implies, they are classy as all get out and super useful. You’re probably used to them for mucking around with collections, but once you internalize the concept, you start using them damn near everywhere.

Maybe in your testing framework:

describe("An apple", function() {
  it("ain't no orange", function() {
    expect("Apple").not.toBe("Orange");
  });
});

Or when you need to parse some data:

tokens.match(Token.LEFT_BRACKET, function(token) {
  // Parse a list literal...
  tokens.consume(Token.RIGHT_BRACKET);
});

So you go to town and write all sorts of awesome reusable libraries and applications passing around functions, calling functions, returning functions. Functapalooza.

What color is your function?

Except wait. Here’s where our language gets screwy. It has this one peculiar feature:

1. Every function has a color.

Each function—anonymous callback or regular named one—is either red or blue. Since my blog’s code highlighter can’t handle actual color, we’ll say the syntax is like:

bluefunction doSomethingAzure() {
  // This is a blue function...
}

redfunction doSomethingCarnelian() {
  // This is a red function...
}

There are no colorless functions in the language. Want to make a function? Gotta pick a color. Them’s the rules. And, actually, there are a couple more rules you have to follow too:

2. The way you call a function depends on its color.

Imagine a “blue call” syntax and a “red call” syntax. Something like:

doSomethingAzure(...)•blue;
doSomethingCarnelian()•red;

If you get it wrong—call a red function with •blue after the parentheses or vice versa—it does something bad. Dredge up some long-forgotten nightmare from your childhood like a clown with snakes for arms under your bed. That jumps out of your monitor and sucks out your vitreous humour.

Annoying rule, right? Oh, and one more:

3. You can only call a red function from within another red function.

You can call a blue function from within a red one. This is kosher:

redfunction doSomethingCarnelian() {
  doSomethingAzure()•blue;
}

But you can’t go the other way. If you try to do this:

bluefunction doSomethingAzure() {
  doSomethingCarnelian()•red;
}

Well, you’re gonna get a visit from old Spidermouth the Night Clown.

This makes writing higher-order functions like our filter() example trickier. We have to pick a color for it and that affects the colors of the functions we’re allowed to pass to it. The obvious solution is to make filter() red. That way, it can take either red or blue functions and call them. But then we run into the next itchy spot in the hairshirt that is this language:

4. Red functions are more painful to call.

For now, I won’t precisely define “painful”, but just imagine that the programmer has to jump through some kind of annoying hoops every time they call a red function. Maybe it’s really verbose, or maybe you can’t do it inside certain kinds of statements. Maybe you can only call them on line numbers that are prime.

What matters is that, if you decide to make a function red, everyone using your API will want to spit in your coffee and/or deposit some even less savory fluids in it.

The obvious solution then is to never use red functions. Just make everything blue and you’re back to the sane world where all functions have the same color, which is equivalent to them all having no color, which is equivalent to our language not being entirely stupid.

Alas, the sadistic language designers—and we all know all programming language designers are sadists, don’t we?—jabbed one final thorn in our side:

5. Some core library functions are red.

There are some functions built in to the platform, functions that we need to use, that we are unable to write ourselves, that only come in red. At this point, a reasonable person might think the language hates us.

It’s functional programming’s fault!

You might be thinking that the problem here is we’re trying to use higher-order functions. If we just stop flouncing around in all of that functional frippery and write normal blue collar first-order functions like God intended, we’d spare ourselves all the heartache.

If we only call blue functions, make our function blue. Otherwise, make it red. As long as we never make functions that accept functions, we don’t have to worry about trying to be “polymorphic over function color” (polychromatic?) or any nonsense like that.

But, alas, higher order functions are just one example. This problem is pervasive any time we want to break our program down into separate functions that get reused.

For example, let’s say we have a nice little blob of code that, I don’t know, implements Dijkstra’s algorithm over a graph representing how much your social network are crushing on each other. (I spent way too long trying to decide what such a result would even represent. Transitive undesirability?)

Later, you end up needing to use this same blob of code somewhere else. You do the natural thing and hoist it out into a separate function. You call it from the old place and your new code that uses it. But what color should it be? Obviously, you’ll make it blue if you can, but what if it uses one of those nasty red-only core library functions?

What if the new place you want to call it is blue? You’ll have to turn it red. Then you’ll have to turn the function that calls it red. Ugh. No matter what, you’ll have to think about color constantly. It will be the sand in your swimsuit on the beach vacation of development.

A colorful allegory

Of course, I’m not really talking about color here, am I? It’s an allegory, a literary trick. The Sneetches isn’t about stars on bellies, it’s about race. By now, you may have an inkling of what color actually represents. If not, here’s the big reveal:

Red functions are asynchronous ones.

If you’re programming in JavaScript on Node.js, every time you define a function that “returns” a value by invoking a callback, you just made a red function. Look back at that list of rules and see how my metaphor stacks up:

  1. Synchronous functions return values, async ones do not and instead invoke callbacks.

  2. Synchronous functions give their result as a return value, async functions give it by invoking a callback you pass to it.

  3. You can’t call an async function from a synchronous one because you won’t be able to determine the result until the async one completes later.

  4. Async functions don’t compose in expressions because of the callbacks, have different error-handling, and can’t be used with try/catch or inside a lot of other control flow statements.

  5. Node’s whole shtick is that the core libs are all asynchronous. (Though they did dial that back and start adding ___Sync() versions of a lot of things.)

When people talk about “callback hell” they’re talking about how annoying it is to have red functions in their language. When they create 4089 libraries for doing asynchronous programming, they’re trying to cope at the library level with a problem that the language foisted onto them.

I promise the future is better

People in the Node community have realized that callbacks are a pain for a long time, and have looked around for solutions. One technique that gets a bunch of people excited is promises, which you may also know by their rapper name “futures”.

These are sort of a jacked up wrapper around a callback and an error handler. If you think of passing a callback and errorback to a function as a concept, a promise is basically a reification of that idea. It’s a first-class object that represents an asynchronous operation.

I just jammed a bunch of fancy PL language in that paragraph so it probably sounds like a sweet deal, but it’s basically snake oil. Promises do make async code a little easier to write. They compose a bit better, so rule #4 isn’t quite so onerous.

But, honestly, it’s like the difference between being punched in the gut versus punched in the privates. Less painful, yes, but I don’t think anyone should really get thrilled about the value proposition.

You still can’t use them with exception handling or other control flow statements. You still can’t call a function that returns a future from synchronous code. (Well, you can, but if you do, the person who later maintains your code will invent a time machine, travel back in time to the moment that you did this and stab you in the face with a #2 pencil.)

You’ve still divided your entire world into asynchronous and synchronous halves and all of the misery that entails. So, even if your language features promises or futures, its face looks an awful lot like the one on my strawman.

(Yes, that means even Dart, the language I work on. That’s why I’m so excited some of the team are experimenting with other concurrency models.)

I’m awaiting a solution

C# programmers are probably feeling pretty smug right now (a condition they’ve increasingly fallen prey to as Hejlsberg and company have piled sweet feature after sweet feature into the language). In C#, you can use the await keyword to invoke an asynchronous function.

This lets you make asynchronous calls just as easily as you can synchronous ones, with the tiny addition of a cute little keyword. You can nest await calls in expressions, use them in exception handling code, stuff them inside control flow. Go nuts. Make it rain await calls like they’re dollars in the advance you got for your new rap album.

Async-await is nice, which is why we’re adding it to Dart. It makes it a lot easier to write asynchronous code. You know a “but” is coming. It is. But… you still have divided the world in two. Those async functions are easier to write, but they’re still async functions.

You’ve still got two colors. Async-await solves annoying rule #4: they make red functions not much worse to call than blue ones. But all of the other rules are still there:

  1. Synchronous functions return values, async ones return Task<T> (or Future<T> in Dart) wrappers around the value.

  2. Sync functions are just called, async ones need an await.

  3. If you call an async function you’ve got this wrapper object when you actually want the T. You can’t unwrap it unless you make your function async and await it. (But see below.)

  4. Aside from a liberal garnish of await, we did at least fix this.

  5. C#’s core library is actually older than async so I guess they never had this problem.

It is better. I will take async-await over bare callbacks or futures any day of the week. But we’re lying to ourselves if we think all of our troubles are gone. As soon as you start trying to write higher-order functions, or reuse code, you’re right back to realizing color is still there, bleeding all over your codebase.

What language isn’t colored?

So JS, Dart, C#, and Python have this problem. CoffeeScript and most other languages that compile to JS do too (which is why Dart inherited it). I think even ClojureScript has this issue even though they’ve tried really hard to push against it with their core.async stuff.

Wanna know one that doesn’t? Java. I know right? How often do you get to say, “Yeah, Java is the one that really does this right.”? But there you go. In their defense, they are actively trying to correct this oversight by moving to futures and async IO. It’s like a race to the bottom.

C# can actually avoid this problem too. They opted in to having color. Before they added async-await and all of the Task<T> stuff, you just used regular sync API calls. Three more languages that don’t have this problem: Go, Lua, and Ruby.

Any guess what they have in common?

Threads. Or, more precisely: multiple independent callstacks that can be switched between. It isn’t strictly necessary for them to be operating system threads. Goroutines in Go, coroutines in Lua, and fibers in Ruby are perfectly adequate.

(That’s why C# has that little caveat. You can avoid the pain of async in C# by using threads.)

Remembrance of operations past

The fundamental problem is “How do you pick up where you left off when an operation completes”? You’ve built up some big callstack and then you call some IO operation. For performance, that operation uses the operating system’s underlying asynchronous API. You cannot wait for it to complete because it won’t. You have to return all the way back to your language’s event loop and give the OS some time to spin before it will be done.

Once it is, you need to resume what you were doing. The usual way a language “remembers where it is” is the callstack. That tracks all of the functions that are currently being invoked and where the instruction pointer is in each one.

But to do async IO, you have to unwind and discard the entire C callstack. Kind of a Catch-22. You can do super fast IO, you just can’t do anything with the result! Every language that has async IO in its bowels—or in the case of JS, the browser’s event loop—copes with this in some way.

Node with its ever-marching-to-the-right callbacks stuffs all of those callframes in closures. When you do:

function makeSundae(callback) {
  scoopIceCream(function (iceCream) {
    warmUpCaramel(function (caramel) {
      callback(pourOnIceCream(iceCream, caramel));
    });
  });
}

Each of those function expressions closes over all of its surrounding context. That moves parameters like iceCream and caramel off the callstack and onto the heap. When the outer function returns and the callstack is trashed, it’s cool. That data is still floating around the heap.

The problem is you have to manually reify every damn one of these steps. There’s actually a name for this transformation: continuation-passing style. It was invented by language hackers in the 70s as an intermediate representation to use in the guts of their compilers. It’s a really bizarro way to represent code that happens to make some compiler optimizations easier to do.

No one ever for a second thought that a programmer would write actual code like that. And then Node came along and all of a sudden here we are pretending to be compiler back-ends. Where did we go wrong?

Note that promises and futures don’t actually buy you anything, either. If you’ve used them, you know you’re still hand-creating giant piles of function literals. You’re just passing them to .then() instead of to the asynchronous function itself.

Awaiting a generated solution

Async-await does help. If you peel back your compiler’s skull and see what it’s doing when it hits an await call you’d see it actually doing the CPS-transform. That’s why you need to use await in C#: it’s a clue to the compiler to say, “break the function in half here”. Everything after the await gets hoisted into a new function that it synthesizes on your behalf.

This is why async-await didn’t need any runtime support in the .NET framework. The compiler compiles it away to a series of chained closures that it can already handle. (Interestingly, closures themselves also don’t need runtime support. They get compiled to anonymous classes. In C#, closures really are a poor man’s objects.)

You might be wondering when I’m going to bring up generators. Does your language have a yield keyword? Then it can do something very similar.

(In fact, I believe generators and async-await are isomorphic. I’ve got a bit of code floating around in some dark corner of my hard disc that implements a generator-style game loop using only async-await.)

Where was I? Oh, right. So with callbacks, promises, async-await, and generators, you ultimately end up taking your asynchronous function and smearing it out into a bunch of closures that live over in the heap.

Your function passes the outermost one into the runtime. When the event loop or IO operation is done, it invokes that function and you pick up where you left off. But that means everything above you also has to return. You still have to unwind the whole stack.

This is where the “red functions can only be called by red functions” rule comes from. You have to closurify the entire callstack all the way back to main() or the event handler.

Reified callstacks

But if you have threads (green- or OS-level), you don’t need to do that. You can just suspend the entire thread and hop straight back to the OS or event loop without having to return from all of those functions.

Go is the language that does this most beautifully in my opinion. As soon as you do any IO operation, it just parks that goroutine and resumes any other ones that aren’t blocked on IO.

If you look at the IO operations in the standard library, they seem synchronous. In other words, they just do work and then return a result when they are done. But it’s not that they’re synchronous in the sense that it would mean in JavaScript. Other Go code can run while one of these operations is pending. It’s that Go has eliminated the distinction between synchronous and asynchronous code.

Concurrency in Go is a facet of how you choose to model your program, and not a color seared into each function in the standard library. This means all of the pain of the five rules I mentioned above is completely and totally eliminated.
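
A tiny Go sketch (mine, not from the post) of what that looks like in practice: each http.Get below reads as synchronous code, but while one goroutine is parked waiting on the network, the others keep running.

package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	urls := []string{"https://example.com/", "https://example.org/"}
	var wg sync.WaitGroup
	for _, url := range urls {
		wg.Add(1)
		go func(url string) {
			defer wg.Done()
			// Looks blocking, but the runtime does async IO underneath and
			// parks this goroutine until the response is ready.
			resp, err := http.Get(url)
			if err != nil {
				fmt.Println(url, err)
				return
			}
			resp.Body.Close()
			fmt.Println(url, resp.Status)
		}(url)
	}
	wg.Wait()
}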

So, the next time you start telling me about some new hot language and how awesome its concurrency story is because it has asynchronous APIs, now you’ll know why I start grinding my teeth. Because it means you’re right back to red functions and blue ones.

1 public comment
Groxx
490 days ago
There's another color-spectrum that Go's making more difficult, which is "serial" vs "concurrent". By pushing things to be goroutine-able, they're inherently pushing everything to be concurrency-aware, which is programming hard a famously problem in.

But the post is *excellent*. Totally worth a read.
Silicon Valley, CA