Conversation


@zb3 zb3 commented Aug 29, 2025

This PR is my attempt at supporting fibers using JSPI. My testing shows that they are indeed faster than Asyncify (and the wasm file is noticeably smaller). However, there's a major limitation of using JSPI for coroutine support: a suspended coroutine can only be resumed on the same thread (worker), since the actual wasm stack is not serialized into memory. Additionally, it's not possible to implement this in pure wasm, as it still requires the JS trampoline.

Let me point out some problems I've noticed with the PR itself:

  • coroutines can only be continued on the same thread, but there are no assertions for this in the code (when testing, I implemented them by xoring the key with pthread_ptr)
  • continuation ids are not reused (does the same happen with Asyncify.callStackId?)
  • continuation ids reuse the rewind_id structure field
  • there is a further lack of assertions
  • in practice, makeDynCall might call promising when it was already done (maybe getWasmTableEntry could take an argument indicating whether an async return is expected?)
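The xor-with-pthread_ptr trick from the first bullet can be sketched as below. This is a minimal illustration, not the PR's code: `encodeContinuationKey`, `storeContinuation`, and the numeric `pthreadPtr` values are hypothetical stand-ins (in a real build the thread tag would come from `_pthread_self()`).

```javascript
// Sketch: tag each continuation id with the creating thread's pthread
// pointer by xoring, so resuming from a different thread fails loudly
// instead of silently picking up a wrong (or missing) continuation.

function encodeContinuationKey(id, pthreadPtr) {
  // The key handed out to wasm is the id xored with the thread tag.
  return id ^ pthreadPtr;
}

function decodeContinuationKey(key, pthreadPtr) {
  // Undoing the xor with the *current* thread's tag only yields a valid
  // id if we're on the same thread that created the continuation.
  return key ^ pthreadPtr;
}

const continuations = [];

function storeContinuation(cont, pthreadPtr) {
  const id = continuations.push(cont); // ids start at 1, like rewind ids
  return encodeContinuationKey(id, pthreadPtr);
}

function resumeContinuation(key, pthreadPtr) {
  const id = decodeContinuationKey(key, pthreadPtr);
  const cont = continuations[id - 1];
  if (!cont) {
    throw new Error('continuation resumed on a different thread (or invalid key)');
  }
  continuations[id - 1] = null; // drop the reference once resumed
  return cont;
}
```

Note that xoring is only a cheap heuristic: a stale key decoded with the wrong thread tag could in principle land on another valid slot, so it catches mistakes loudly in practice but isn't a watertight check.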

What do you think about this? JSPI is already supported in Chrome and might soon be supported in Firefox. However, the "stack switching" proposal will probably not be supported anytime soon, so to me it would make sense to support fibers using JSPI.

Collaborator

@brendandahl brendandahl left a comment


Looks good overall. Maybe we want to add an assertion that we're using a fiber on the same thread it was created on? I'm not sure the asyncify version even supports resuming on different threads, so that may not really be important.

@@ -569,6 +569,10 @@ addToLibrary({

$Fibers__deps: ['$Asyncify', 'emscripten_stack_set_limits', '$stackRestore'],
$Fibers: {
#if ASYNCIFY == 2
lastContinuationId: 0,
continuations: [],
Collaborator


It looks like we add to this but never remove anything.

Author


Did you mean nullifying the old continuation references (now committed), or was it about the fact that ids only grow but are never reclaimed? Ids work just like rewindId, and I don't see callStackIdToFunc being cleared either (not that it's perfect).
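For the id-growth concern, one option would be a small free list so slots get recycled rather than the array growing forever. This is purely a sketch with hypothetical names (`allocContinuation`, `takeContinuation`), not something the PR does:

```javascript
// Sketch: recycle continuation ids with a free list instead of letting
// the continuations array grow unboundedly.
const continuations = [];
const freeIds = [];

function allocContinuation(cont) {
  // Reuse a freed slot if one exists, otherwise grow the array.
  // push() returns the new length, which doubles as a 1-based id.
  const id = freeIds.length ? freeIds.pop() : continuations.push(null);
  continuations[id - 1] = cont;
  return id;
}

function takeContinuation(id) {
  const cont = continuations[id - 1];
  continuations[id - 1] = null; // clear the reference
  freeIds.push(id);             // make the slot reusable
  return cont;
}
```

The tradeoff is that a stale key pointing at a recycled id would silently resume the wrong continuation, so recycling really wants to be paired with a thread/liveness assertion.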

'legacy': (1,),
})
def test_fibers_setjmp(self, legacy):
self.maybe_closure()
Collaborator


Please add a brief comment explaining what the test is doing.

@zb3
Author

zb3 commented Sep 11, 2025

Maybe we want to add an assertion that we're using a fiber on the same thread it was created from? I'm not sure the asyncify version even supports resuming on different threads, so that may not really be important.

That assertion would be helpful, but I'd need some guidance on how to implement it without compromising performance.
While I currently reuse rewind_id, I'd probably need to either expand the struct or make a union to also store pthread_self there. What would be the best way to access it from JS?

Also, looking at the documentation of rewind_id, I see it already says that it's only valid on the same thread. Yet it's only the functions themselves that are accessed through it (their arguments are stored in memory), so qemu-wasm, for example, works because these functions happen to be the same (otherwise it would fail silently).

That said, adding an assertion that would also affect non-JSPI fibers would break qemu-wasm (for https://zb3.me/qemu-wasm-test/jspi-noffi/ I made a patch, but it's not generic).
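The struct-expansion idea could be sketched as follows. Everything here is hypothetical, not the actual emscripten_fiber_s layout: the byte offset, the function names, and the standalone `WebAssembly.Memory` (in a real build the view would be the existing HEAPU32 over the module's memory, and the thread tag would be `_pthread_self()`):

```javascript
// Sketch: suppose the fiber struct carried the creating thread's pthread
// pointer at a known byte offset; JS reads it back through a Uint32Array
// view over wasm memory and asserts before resuming.
const memory = new WebAssembly.Memory({ initial: 1 });
const HEAPU32 = new Uint32Array(memory.buffer);

const FIBER_PTHREAD_OFFSET = 8; // hypothetical field offset, in bytes

function fiberSetThread(fiberPtr, pthreadPtr) {
  // Called when the fiber is created, on the owning thread.
  HEAPU32[(fiberPtr + FIBER_PTHREAD_OFFSET) >> 2] = pthreadPtr;
}

function assertFiberThread(fiberPtr, currentPthreadPtr) {
  // Called before resuming; cheap: one aligned 32-bit load and a compare.
  const owner = HEAPU32[(fiberPtr + FIBER_PTHREAD_OFFSET) >> 2];
  if (owner !== currentPthreadPtr) {
    throw new Error('fiber resumed on a thread other than its creator');
  }
}
```

Since the check is a single memory load plus a compare, it could plausibly be gated behind ASSERTIONS so release builds pay nothing.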

UPDATE: I have no idea why some unrelated tests are now failing
