* Remove unreachable error case
Instead of taking the connection out of the map again, construct
the event to be returned with the data we already have available.
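In miniature (with stand-in types, not the actual `Pool` code), the pattern looks like this:

```rust
use std::collections::HashMap;

fn main() {
    let mut connections: HashMap<u32, &str> = HashMap::from([(1, "connection to remote")]);

    // `remove` already yields the owned value, so the event can be built
    // from it directly; a second lookup for the same key afterwards could
    // only ever yield `None`, i.e. an unreachable error case.
    if let Some(connection) = connections.remove(&1) {
        let event = format!("closed: {connection}");
        assert_eq!(event, "closed: connection to remote");
    }
}
```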
* Remove `Pool::get` and `PoolConnection`
These are effectively unused.
* Replace `iter_pending_info` with its only usage: `is_dialing`
* Add `is_for_same_remote_as` convenience function
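A guess at the helper's shape, with stand-in types; the actual signature in `libp2p-swarm` may differ:

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct PeerId(u64);

struct PendingConnection {
    peer_id: Option<PeerId>,
}

impl PendingConnection {
    /// Whether this pending connection is headed to the given remote.
    fn is_for_same_remote_as(&self, other: PeerId) -> bool {
        self.peer_id == Some(other)
    }
}

fn main() {
    let pending = PendingConnection { peer_id: Some(PeerId(7)) };
    assert!(pending.is_for_same_remote_as(PeerId(7)));
    assert!(!pending.is_for_same_remote_as(PeerId(8)));
}
```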
* Remove `PendingConnection`
* Rename `PendingConnectionInfo` to `PendingConnection`
With the latter being gone, the name is now free.
* Merge `EstablishedConnectionInfo` and `EstablishedConnection`
This is a leftover from when `Pool` still lived in `libp2p-core` and one of
the two was public API while the other wasn't. Both are now private to
`libp2p-swarm`, so we no longer need to differentiate.
* Don't `pub use` out of `pub(crate)` modules
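In miniature, with illustrative module names, the pattern this rules out:

```rust
pub(crate) mod connection {
    pub struct Connection;
}

// Avoided from now on: a `pub use` here would silently widen `Connection`
// to the crate's public API even though `connection` itself is `pub(crate)`:
//
//     pub use connection::Connection;
//
// Instead, keep the re-export no wider than the defining module:
pub(crate) use connection::Connection;

fn main() {
    let _connection = Connection;
}
```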
Previously, the `DummyConnectionHandler` offered a "keep alive" functionality,
i.e. it allowed users to set the value returned from
`ConnectionHandler::keep_alive`. This handler is primarily used in tests or in
`NetworkBehaviour`s that don't open any connections (like mDNS). In all of these
cases, it is statically known whether we want to keep connections alive. As
such, this functionality is better represented by a static
`KeepAliveConnectionHandler` that always returns `KeepAlive::Yes` and a
`DummyConnectionHandler` that always returns `KeepAlive::No`.
To follow the naming conventions described in
https://github.com/libp2p/rust-libp2p/issues/2217, we introduce top-level
`keep_alive` and `dummy` behaviours in `libp2p-swarm`, each containing both the
`NetworkBehaviour` and the `ConnectionHandler` implementation for its case, as
sketched below.
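A sketch of the resulting usage; module and type names follow the description above, but the constructor details are assumed:

```rust
use libp2p_swarm::{dummy, keep_alive};

fn main() {
    // Keeps every connection alive, e.g. in tests:
    let _keep_alive = keep_alive::Behaviour;
    // Never keeps connections alive, e.g. for behaviours that don't
    // manage any connections themselves:
    let _dummy = dummy::Behaviour;
}
```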
Remove default features. You need to enable required features
explicitly now. As a quick workaround, you may want to use the
new `full` feature, which activates all features.
* Provide separate functions for injecting in- and outbound streams
* Inline `HandlerWrapper` into `Connection`
* Only poll for new inbound streams if we are below the limit
* yamux: Buffer inbound streams in `StreamMuxer::poll`
If we accidentally generate the same port twice, we will issue two dial
attempts to the same address but also expect two dial errors, which is exactly
what this test is trying to catch. Unfortunately, the assertion is badly
written and does not catch duplicate inputs, as the miniature example below
shows.
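The weakness in miniature, with illustrative values: a membership-only assertion cannot distinguish duplicates from distinct inputs.

```rust
fn main() {
    let expected_errors = vec!["addr-a", "addr-b"];
    // Duplicate input: the same address produced two dial errors.
    let actual_errors = vec!["addr-a", "addr-a"];

    // Membership-only assertion: passes despite the duplicate "addr-a"
    // and the missing "addr-b".
    assert!(actual_errors.iter().all(|e| expected_errors.contains(e)));

    // A multiset comparison would catch it:
    let mut actual_sorted = actual_errors.clone();
    actual_sorted.sort_unstable();
    assert_ne!(actual_sorted, expected_errors);
}
```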
Instead of having a mix of `poll_event`, `poll_outbound` and `poll_close`, we
flatten the entire interface of `StreamMuxer` into 4 individual functions:
- `poll_inbound`
- `poll_outbound`
- `poll_address_change`
- `poll_close`
This design is closer to that of other async traits like `AsyncRead` and
`AsyncWrite` (see the sketch below). It also allows us to delete the
`StreamMuxerEvent`.
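A sketch of the flattened trait; the exact bounds in `libp2p-core` may differ:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::{AsyncRead, AsyncWrite};
use multiaddr::Multiaddr; // re-exported by libp2p-core as `libp2p_core::Multiaddr`

pub trait StreamMuxer {
    type Substream: AsyncRead + AsyncWrite;
    type Error: std::error::Error;

    fn poll_inbound(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<Self::Substream, Self::Error>>;

    fn poll_outbound(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<Self::Substream, Self::Error>>;

    fn poll_address_change(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<Multiaddr, Self::Error>>;

    fn poll_close(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<(), Self::Error>>;
}
```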
Remove the concept of individual `Transport::Listener` streams from `Transport`.
Instead, the `Transport` is polled directly via `Transport::poll` and is now
responsible for driving its listeners.
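A simplified sketch of the polling surface; the real `TransportEvent` carries more variants and data (new and expired listen addresses, listener IDs, etc.):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

pub enum TransportEvent<TUpgrade, TError> {
    Incoming { upgrade: TUpgrade },
    ListenerError { error: TError },
}

pub trait Transport {
    type Output;
    type Error: std::error::Error;
    type ListenerUpgrade;
    // ... other items elided

    // Listener events now surface here; there is no per-listener stream
    // to poll anymore, the transport drives all of its listeners itself.
    fn poll(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<TransportEvent<Self::ListenerUpgrade, Self::Error>>;
}
```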
* core/muxing: Remove `Into<io::Error>` bound from `StreamMuxer::Error`
This allows us to preserve the type information of a muxer's concrete
error for as long as possible. For `StreamMuxerBox`, we leverage `io::Error`'s
capability of wrapping any error that implements `Into<Box<dyn Error + Send + Sync>>`.
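A self-contained illustration of that `io::Error` capability; the types here are stand-ins, not the actual muxer errors:

```rust
use std::{error::Error, fmt, io};

#[derive(Debug)]
struct MuxerError(&'static str);

impl fmt::Display for MuxerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "muxer error: {}", self.0)
    }
}

impl Error for MuxerError {}

// `io::Error::new` accepts any `E: Into<Box<dyn Error + Send + Sync>>`,
// so the concrete error survives inside the `io::Error`.
fn into_io_error(e: MuxerError) -> io::Error {
    io::Error::new(io::ErrorKind::Other, e)
}

fn main() {
    let err = into_io_error(MuxerError("stream reset"));
    // The original error remains accessible for downcasting.
    assert!(err.get_ref().unwrap().is::<MuxerError>());
}
```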
* Use `?` in `Connection::poll`
* Use `?` in `muxing::boxed::Wrap`
* Use `futures::ready!` in `muxing::boxed::Wrap`
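A toy example of the `?` plus `futures::ready!` pattern; `poll_next` is hypothetical and not the actual `Wrap` code:

```rust
use std::io;
use std::task::Poll;

// `futures::ready!` early-returns on `Poll::Pending`, `?` early-returns
// on `Err`, leaving the happy path flat.
fn poll_next(inner: Poll<io::Result<u8>>) -> Poll<io::Result<u8>> {
    let byte = futures::ready!(inner)?;
    Poll::Ready(Ok(byte + 1))
}

fn main() {
    assert!(poll_next(Poll::Pending).is_pending());
    assert!(matches!(poll_next(Poll::Ready(Ok(1))), Poll::Ready(Ok(2))));
}
```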
* Fill PR number into changelog
* Put `Error + Send + Sync` bounds directly on `StreamMuxer::Error`
* Move `Send + Sync` bounds to higher layers
* Use `map_inbound_stream` helper
* Update changelog to match new implementation
Log the peer ID and stream limit, and reference the config option, when the
limit is exceeded. This should help folks running into this limit debug what
is going on.
This limit is shared across all `ConnectionHandler`s on a single connection. It
only enforces a limit on the number of negotiating substreams. Once negotiated,
a `ConnectionHandler` manages the lifecycle of the substream and has to enforce
limits itself.
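Schematically, with hypothetical field names, the gate from "Only poll for new inbound streams if we are below the limit" looks roughly like this:

```rust
struct Connection {
    negotiating_in: Vec<&'static str>,
    max_negotiating_inbound_streams: usize,
}

impl Connection {
    // Only below the limit do we ask the muxer for more inbound streams.
    fn may_poll_inbound(&self) -> bool {
        self.negotiating_in.len() < self.max_negotiating_inbound_streams
    }
}

fn main() {
    let connection = Connection {
        negotiating_in: vec!["stream-1", "stream-2"],
        max_negotiating_inbound_streams: 2,
    };
    // At the limit: stop accepting new inbound streams until one finishes
    // negotiating and is handed to the `ConnectionHandler`.
    assert!(!connection.may_poll_inbound());
}
```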
The `HandlerWrapper` polls three components:
1. `ConnectionHandler`
2. Outbound negotiating streams
3. Inbound negotiating streams
The `ConnectionHandler` might itself poll already-negotiated streams.
By polling the three components above in the listed order, one:
- Prioritizes local work and work coming from negotiated streams over
negotiating streams.
- Prioritizes outbound negotiating streams over inbound negotiating
streams, i.e. outbound requests over inbound requests.
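A schematic of that ordering; names and types are stand-ins for the real components:

```rust
use std::task::Poll;

fn poll_handler_wrapper(
    handler: Poll<&'static str>,
    negotiating_out: Poll<&'static str>,
    negotiating_in: Poll<&'static str>,
) -> Poll<&'static str> {
    // 1. The `ConnectionHandler`, i.e. local work and negotiated streams.
    if let Poll::Ready(event) = handler {
        return Poll::Ready(event);
    }
    // 2. Outbound negotiating streams, i.e. outbound requests.
    if let Poll::Ready(event) = negotiating_out {
        return Poll::Ready(event);
    }
    // 3. Inbound negotiating streams, i.e. inbound requests.
    if let Poll::Ready(event) = negotiating_in {
        return Poll::Ready(event);
    }
    Poll::Pending
}

fn main() {
    // With both ready, the outbound stream wins over the inbound one.
    assert_eq!(
        poll_handler_wrapper(Poll::Pending, Poll::Ready("out"), Poll::Ready("in")),
        Poll::Ready("out")
    );
}
```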
Have the main event loop (`Swarm::poll_next_event`) prioritize:
1. Work on `NetworkBehaviour` over work on `Pool`, thus prioritizing
local work over work coming from a remote.
2. Work on `Pool` over work on `ListenersStream`, thus prioritizing work
on existing connections over upgrading new incoming connections.
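The same idea at the `Swarm` level, again with stand-in names: sources are polled in priority order, and the first ready event wins.

```rust
use std::task::Poll;

fn poll_next_event(
    behaviour: Poll<&'static str>,
    pool: Poll<&'static str>,
    listeners: Poll<&'static str>,
) -> Poll<&'static str> {
    // The order of this list encodes the priority.
    for source in [behaviour, pool, listeners] {
        if let Poll::Ready(event) = source {
            return Poll::Ready(event);
        }
    }
    Poll::Pending
}

fn main() {
    // Work on existing connections (`pool`) wins over upgrading new
    // incoming connections (`listeners`).
    assert_eq!(
        poll_next_event(Poll::Pending, Poll::Ready("pool"), Poll::Ready("listener")),
        Poll::Ready("pool")
    );
}
```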
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Simplifies `PoolEvent` so that it no longer carries a reference to an
`EstablishedConnection` or the `Pool` but instead carries the `PeerId`,
`ConnectionId` and `ConnectedPoint` directly.
Co-authored-by: Elena Frank <elena.frank@protonmail.com>