fix(net): prevent onConnectEnd listener accumulation in TLS sockets #26564
base: main
Conversation
Walkthrough
Replaced multiple TLS and abort event registrations with one-time listeners and added conditional removals of the TLS connect-end handler to ensure it runs at most once and is removed on TLS negotiation, errors, or socket close. Added tests validating listener cleanup and abort behavior.
Actionable comments posted: 2
In `@test/regression/issue/12117.test.ts`:
- Around lines 8-35: The test title mentions "TLS socket", but the code uses new net.Socket(), so it never exercises the TLS-specific onConnectEnd listener (the bunTlsSymbol path). Either change the title to reflect TCP, or convert the test to TLS: import tls, replace the net.Socket usage with tls.connect(...) (passing host and port), and make the server TLS-capable (or create a proper TLS server fixture) so the bunTlsSymbol listener path actually runs. Update the test's references (the "#12117 TLS socket reconnection should not leak onConnectEnd listeners" title, net.Socket, tls.connect, bunTlsSymbol, onConnectEnd) accordingly. See the first sketch after this list.
- Around lines 68-77: The test "test('#12117 AbortSignal listener cleanup')" currently has no assertions. Update it to assert the expected behavior: either wrap the test body in a try/catch and fail the test on thrown errors, or add explicit assertions around controller.abort() (e.g., expect(() => controller.abort()).not.toThrow()) and verify there is no listener buildup, for example by checking that creating and destroying sockets with new net.Socket({ signal: controller.signal }) does not increase the listener count on controller.signal, or that socket.destroy() clears the listeners. See the second sketch after this list.
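A minimal sketch of what the suggested TLS conversion could look like, assuming Bun's bun:test runner and a plain TCP server that kills connections to force handshake failures; the loop bound, host, and assertion shape are illustrative, not the PR's actual test:

```ts
import { test, expect } from "bun:test";
import net from "node:net";
import tls from "node:tls";
import type { AddressInfo } from "node:net";

test("#12117 TLS socket reconnection should not leak onConnectEnd listeners", async () => {
  // Plain TCP server that destroys every connection, so the TLS
  // handshake never completes and the failure path runs each time.
  const server = net.createServer(c => c.destroy());
  await new Promise<void>(resolve => server.listen(0, () => resolve()));
  const { port } = server.address() as AddressInfo;

  for (let i = 0; i < 3; i++) {
    // tls.connect drives the bunTlsSymbol / onConnectEnd path that
    // a plain new net.Socket() never reaches.
    const socket = tls.connect({ host: "127.0.0.1", port, rejectUnauthorized: false });
    await new Promise(resolve => socket.once("error", resolve));
    // With the fix, at most one connect-end handler remains per attempt.
    expect(socket.listenerCount("end")).toBeLessThanOrEqual(1);
    socket.destroy();
  }
  server.close();
});
```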
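And a sketch of how the abort test could gain real assertions, again under bun:test and assuming node:events' getEventListeners works on AbortSignal here; the exact counts and timing are illustrative:

```ts
import { test, expect } from "bun:test";
import { getEventListeners } from "node:events";
import net from "node:net";

test("#12117 AbortSignal listener cleanup", async () => {
  const controller = new AbortController();

  // Creating and destroying sockets bound to the signal should not
  // leave "abort" listeners behind on controller.signal.
  for (let i = 0; i < 3; i++) {
    const socket = new net.Socket({ signal: controller.signal });
    socket.destroy();
  }
  // Give destroy() a tick to run its cleanup before counting.
  await new Promise(resolve => setImmediate(resolve));
  expect(getEventListeners(controller.signal, "abort").length).toBe(0);

  // Aborting after the sockets are gone must be a safe no-op.
  expect(() => controller.abort()).not.toThrow();
});
```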
Someone finally had to step up and take responsibility on this. Congratulations @faytekin :).
Bun is fast. Thanks for fixing this — much appreciated @faytekin
It works for me! Thanks!
We are experiencing this issue and hope it will be resolved as soon as possible. Thanks.
Absolute legend! 👑 killed the leak and saved the day. Drinks on me abi! 🍻 @faytekin
We actually switched from Bun to Node.js because of this issue. This fix is critical for anyone using connection pooling with MongoDB, as it prevents the memory leak caused by accumulating TLS listeners. Great job, @faytekin!
cirospaciari left a comment
LGTM, running CI. Thanks!
cirospaciari left a comment
Looks like we have some failing tests, e.g. test/js/node/test/parallel/test-tls-wrap-econnreset-socket.js
What does this PR do?
Fixes #12117
When a TLS connection fails before the handshake completes, the onConnectEnd listener never gets removed. So if you're using something like MongoDB with connection pooling, every reconnect attempt adds another listener and they just pile up forever; that's why people were seeing their heap grow without bound.
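As a hypothetical repro of that failure mode (assuming, as a pooled driver might, that reconnects call connect() again on the same socket object; the server and retry count are illustrative):

```ts
import net from "node:net";
import tls from "node:tls";
import type { AddressInfo } from "node:net";

// Server that drops every connection, so the TLS handshake always fails.
const server = net.createServer(c => c.destroy());
server.listen(0, () => {
  const { port } = server.address() as AddressInfo;
  const socket = tls.connect({ port, rejectUnauthorized: false });
  let attempts = 0;
  socket.on("error", () => {
    // Before the fix, each failed attempt left another connect-end
    // listener on the socket, so this count grows with every retry.
    console.log(`attempt ${attempts}: "end" listeners =`, socket.listenerCount("end"));
    if (++attempts < 3) {
      socket.connect(port); // retry on the same socket, as a pool might
    } else {
      socket.destroy();
      server.close();
    }
  });
});
```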
The fix is pretty simple: use prependOnceListener instead of prependListener so it cleans itself up. Also added some defensive cleanup in the error/close handlers and _destroy() just to be safe.
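In pattern form, a sketch of that change; attachConnectEnd and onConnectEnd are stand-in names for illustration, not Bun's actual internals:

```ts
import type net from "node:net";

// Illustrative shape of the fix, not the exact source.
function attachConnectEnd(socket: net.Socket, onConnectEnd: () => void) {
  // Before the fix: socket.prependListener("end", onConnectEnd) added a
  // fresh, never-removed listener on every connect attempt.
  // After: a one-time listener that cleans itself up after firing.
  socket.prependOnceListener("end", onConnectEnd);

  // Defensive cleanup: if the socket errors or closes before "end"
  // fires, remove the handler so it cannot accumulate across retries.
  const cleanup = () => socket.removeListener("end", onConnectEnd);
  socket.once("error", cleanup);
  socket.once("close", cleanup);
}
```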
How did you verify your code works?
Added a regression test that hammers a server with failed connections and checks that listeners don't accumulate. The test fails on main (shows 3 listeners when there should be ≤1) and passes with this fix.