Instead of having a mix of `poll_event`, `poll_outbound` and `poll_close`, we
flatten the entire interface of `StreamMuxer` into 4 individual functions:
- `poll_inbound`
- `poll_outbound`
- `poll_address_change`
- `poll_close`
This design is closer to that of other async traits like `AsyncRead` and
`AsyncWrite`. It also allows us to delete the `StreamMuxerEvent`.
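A rough sketch of the flattened shape (simplified relative to the real
`libp2p-core` trait; the address type here is a `String` stand-in rather than
the real `Multiaddr`):
```rust
use std::pin::Pin;
use std::task::{Context, Poll};

/// Simplified sketch of the flattened `StreamMuxer` interface.
pub trait StreamMuxer {
    type Substream;
    type Error: std::error::Error;

    /// Poll for a new inbound substream opened by the remote.
    fn poll_inbound(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<Self::Substream, Self::Error>>;

    /// Poll for a new outbound substream.
    fn poll_outbound(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<Self::Substream, Self::Error>>;

    /// Poll for a change in the address of the underlying connection.
    fn poll_address_change(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<String, Self::Error>>;

    /// Poll for the connection to be fully closed.
    fn poll_close(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<Result<(), Self::Error>>;
}
```
Each concern gets a dedicated poll method rather than one poll returning an
event enum, which is what makes the `StreamMuxerEvent` unnecessary.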
* misc/metrics: Explicitly delegate event recording to each recorder
This allows delegating a single event to multiple `Recorder`s, enabling e.g. the
`identify::Metrics` `Recorder` to act on both `IdentifyEvent` and `SwarmEvent`. The latter enables
it to garbage-collect per-peer data on disconnects.
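A minimal sketch of the delegation pattern (the type names here are hypothetical
stand-ins, not the exact `libp2p-metrics` types):
```rust
/// Generic recorder trait, analogous in spirit to the one in `libp2p-metrics`.
trait Recorder<Ev> {
    fn record(&self, event: &Ev);
}

enum SwarmEvent {
    ConnectionClosed, // ...
}

struct IdentifyMetrics;
struct SwarmMetrics;

impl Recorder<SwarmEvent> for IdentifyMetrics {
    fn record(&self, event: &SwarmEvent) {
        match event {
            // E.g. garbage-collect per-peer data on disconnect.
            SwarmEvent::ConnectionClosed => {}
        }
    }
}

impl Recorder<SwarmEvent> for SwarmMetrics {
    fn record(&self, _event: &SwarmEvent) { /* count connection events */ }
}

struct Metrics {
    identify: IdentifyMetrics,
    swarm: SwarmMetrics,
}

/// The top-level recorder explicitly delegates a single event
/// to every interested sub-recorder.
impl Recorder<SwarmEvent> for Metrics {
    fn record(&self, event: &SwarmEvent) {
        self.identify.record(event);
        self.swarm.record(event);
    }
}
```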
* protocols/dcutr: Expose PROTOCOL_NAME
* protocols/identify: Expose PROTOCOL_NAME and PUSH_PROTOCOL_NAME
* protocols/ping: Expose PROTOCOL_NAME
* protocols/relay: Expose HOP_PROTOCOL_NAME and STOP_PROTOCOL_NAME
* misc/metrics: Track # connected nodes supporting specific protocol
An example metric exposed with this patch:
```
libp2p_identify_protocols{protocol="/ipfs/ping/1.0.0"} 10
```
This implies that 10 of the currently connected nodes support the ping protocol.
Remove the concept of individual `Transport::Listener` streams from `Transport`.
Instead, the `Transport` is polled directly via `Transport::poll`. The
`Transport` is now responsible for driving its listeners.
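A rough sketch of the resulting shape (types simplified; the real `Transport`
trait has more methods and a richer `TransportEvent`):
```rust
use std::pin::Pin;
use std::task::{Context, Poll};

/// Simplified stand-in for the real event type.
enum TransportEvent<U, E> {
    NewListenAddress(String),
    Incoming(U),
    ListenerError(E),
}

trait Transport {
    type ListenerUpgrade;
    type Error;

    /// The transport drives all of its listeners internally and surfaces
    /// their events through this single poll function; there is no longer
    /// a separate `Listener` stream to poll.
    fn poll(
        self: Pin<&mut Self>,
        cx: &mut Context<'_>,
    ) -> Poll<TransportEvent<Self::ListenerUpgrade, Self::Error>>;
}
```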
**Summary** of the flow of the `discover_peer_after_disconnect` test:
1. `swarm2` connects to `swarm1`.
2. `swarm2` requests an identify response from `swarm1`.
3. `swarm1` sends the response to `swarm2`.
4. `swarm2` disconnects from `swarm1`.
5. `swarm2` tries to discover `swarm1` again.
**Problem**
`libp2p-identify` sets `KeepAlive::No` once it has identified the remote. Thus `swarm1` might
identify `swarm2` before `swarm2` has identified `swarm1`. `swarm1` then sets `KeepAlive::No` and closes the
connection to `swarm2` before `swarm2` has identified `swarm1`. In that case the unit test
`discover_peer_after_disconnect` hangs indefinitely.
**Solution**
Add an initial delay before `swarm1` requests identification from `swarm2`, thus ensuring `swarm2`
is always able to identify `swarm1`.
See e04e95a for the rationale.
With tokio `v1.19.0` released, `TcpStream` exposes `take_error`.
This commit applies the same fix from e04e95a to the tokio TCP provider.
* core/muxing: Remove `Into<io::Error>` bound from `StreamMuxer::Error`
This allows us to preserve the type information of a muxer's concrete
error as long as possible. For `StreamMuxerBox`, we leverage `io::Error`'s
capability of wrapping any error that implements `Into<Box<dyn Error + Send + Sync>>`
(see the sketch after this list).
* Use `?` in `Connection::poll`
* Use `?` in `muxing::boxed::Wrap`
* Use `futures::ready!` in `muxing::boxed::Wrap`
* Fill PR number into changelog
* Put `Error + Send + Sync` bounds directly on `StreamMuxer::Error`
* Move `Send + Sync` bounds to higher layers
* Use `map_inbound_stream` helper
* Update changelog to match new implementation
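A minimal sketch of the wrapping technique referenced above; `MuxerError` is a
hypothetical concrete muxer error:
```rust
use std::{error::Error, fmt, io};

#[derive(Debug)]
struct MuxerError(String); // hypothetical concrete muxer error

impl fmt::Display for MuxerError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "muxer error: {}", self.0)
    }
}

impl Error for MuxerError {}

fn into_io_error(e: MuxerError) -> io::Error {
    // `io::Error::new` accepts any `Into<Box<dyn Error + Send + Sync>>`,
    // so the concrete error value is preserved inside the `io::Error`.
    io::Error::new(io::ErrorKind::Other, e)
}

fn main() {
    let io_err = into_io_error(MuxerError("connection reset".into()));
    // The original type information can be recovered by downcasting.
    assert!(io_err.get_ref().unwrap().is::<MuxerError>());
}
```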
`libp2p` v0.45.1 depends (amongst others) on `libp2p-uds` v0.32.0 and on
`libp2p-core` v0.33.0. `libp2p-uds` v0.32.0, however, depends on `libp2p-core`
v0.32.0.
This commit fixes the version mismatch for the upcoming `libp2p` release.
Log the peer ID and stream limit, and reference the relevant config option, when the limit is
exceeded. This should help folks running into this limit debug what is going on.
This aligns the public API of the `libp2p-mplex` module with the one
from `libp2p-yamux`. This change has two benefits:
1. For standalone users of `libp2p-mplex`, the substreams themselves are
now useful, similar to `libp2p-yamux`, and don't necessarily need to
be polled via the `StreamMuxer`. The `StreamMuxer` only forwards to
the `Async{Read,Write}` implementations (see the sketch after this list).
2. This will reduce the diff of #2648 because we can chunk the one
giant commit into smaller atomic ones.
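To illustrate point 1 above: a substream that implements the `futures` I/O
traits can be used directly with the usual combinators (a generic sketch, not
mplex-specific API):
```rust
use futures::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};

/// Works for any substream type implementing the async I/O traits,
/// without going through the `StreamMuxer`'s poll functions.
async fn ping_pong<S>(mut substream: S) -> std::io::Result<()>
where
    S: AsyncRead + AsyncWrite + Unpin,
{
    substream.write_all(b"ping").await?;
    let mut buf = [0u8; 4];
    substream.read_exact(&mut buf).await?;
    substream.close().await?;
    Ok(())
}
```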
* protocols/kad/: Split into outbound and inbound substreams
* protocols/kad: Limit # of inbound substreams to 32
A remote node may still send more than 32 requests in parallel by using more
than one connection or by sending more than one request per stream.
* protocols/kad: Favor new substreams over old ones waiting for reuse
When a new inbound substream comes in and the limit of total inbound substreams
is hit, try to find an old inbound substream waiting to be reused. In such a case,
replace the old with the new. In case no such old substream exists, drop the new
one (see the sketch below).
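A simplified sketch of that replacement strategy (all names are hypothetical,
not the actual `libp2p-kad` types):
```rust
const MAX_INBOUND_SUBSTREAMS: usize = 32; // the limit mentioned above

enum Inbound<S> {
    /// Actively processing a request.
    Busy(S),
    /// Idle, kept open only so the remote can reuse it.
    WaitingReuse(S),
}

fn on_new_inbound<S>(substreams: &mut Vec<Inbound<S>>, new: S) {
    if substreams.len() < MAX_INBOUND_SUBSTREAMS {
        substreams.push(Inbound::Busy(new));
        return;
    }
    // Limit hit: replace an old substream that is merely waiting for
    // reuse, if one exists; otherwise the new substream is dropped.
    if let Some(pos) = substreams
        .iter()
        .position(|s| matches!(s, Inbound::WaitingReuse(_)))
    {
        substreams[pos] = Inbound::Busy(new);
    }
}
```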
* protocols/relay: Use prost-codec
* protocols/relay: Respond to at most one incoming reservation request
Also changes the poll order, prioritizing:
- Error handling over everything.
- Queued events over existing circuits.
- Existing circuits over accepting new circuits.
- Reservation management of existing reservations over new reservation
requests.
(See the sketch after this list.)
* protocols/relay: Deny at most 8 incoming circuit requests at a time, with at most one per peer
* protocols/relay: Deny new circuits before accepting new circuits
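A simplified sketch of the prioritized poll order described above (all types
are hypothetical stand-ins for the actual relay handler internals):
```rust
use std::collections::VecDeque;

enum Event { Error(String), Queued(u32), Circuit(u32), Reservation(u32), NewRequest(u32) }

struct Handler {
    pending_error: Option<String>,
    queued_events: VecDeque<u32>,
    active_circuits: VecDeque<u32>,
    reservation_work: VecDeque<u32>,
    new_requests: VecDeque<u32>,
}

impl Handler {
    fn poll_next(&mut self) -> Option<Event> {
        // 1. Error handling over everything.
        if let Some(e) = self.pending_error.take() {
            return Some(Event::Error(e));
        }
        // 2. Queued events over existing circuits.
        if let Some(ev) = self.queued_events.pop_front() {
            return Some(Event::Queued(ev));
        }
        // 3. Existing circuits over accepting new circuits.
        if let Some(c) = self.active_circuits.pop_front() {
            return Some(Event::Circuit(c));
        }
        // 4. Existing reservations over new reservation requests.
        if let Some(r) = self.reservation_work.pop_front() {
            return Some(Event::Reservation(r));
        }
        self.new_requests.pop_front().map(Event::NewRequest)
    }
}
```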
* Remove unused import in rendezvous tests
* Expand uds-transport conditionals to include features
In case neither the tokio nor the async-std feature is defined,
this file needs to be empty to avoid unused code warnings.
Co-authored-by: Max Inden <mail@max-inden.de>
This limit is shared across all `ConnectionHandler`s on a single connection. It
only enforces a limit on the number of negotiating substreams. Once negotiated, a
`ConnectionHandler` manages the lifecycle of the substream and has to enforce
limits itself.
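A minimal sketch of these semantics, assuming hypothetical names (only
substreams still negotiating count against the shared limit):
```rust
struct Connection {
    max_negotiating_inbound: usize, // shared across all handlers on this connection
    negotiating_inbound: usize,
}

impl Connection {
    /// Returns `false` (drop the substream) if the shared limit is hit.
    fn on_new_inbound(&mut self) -> bool {
        if self.negotiating_inbound >= self.max_negotiating_inbound {
            return false;
        }
        self.negotiating_inbound += 1;
        true
    }

    /// Once negotiated, the substream is handed to a `ConnectionHandler`
    /// and no longer counts against the shared limit.
    fn on_negotiated(&mut self) {
        self.negotiating_inbound -= 1;
    }
}
```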
An identify push contains the whole identify information of a remote
peer, so upgrading multiple inbound identify push streams is useless.
Instead, older streams are dropped in favor of newer streams.
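A minimal sketch of the keep-only-the-newest policy (the `Handler` type here
is hypothetical):
```rust
struct Handler<S> {
    /// At most one pending inbound identify-push stream.
    inbound_push: Option<S>,
}

impl<S> Handler<S> {
    fn on_new_inbound_push(&mut self, new: S) {
        // Replacing the `Option` drops any older, still-pending stream:
        // a push carries the complete identify info, so only the newest
        // stream matters.
        self.inbound_push = Some(new);
    }
}
```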
This commit removes the `Clone` implementation on `GenTcpConfig` and consequently the `Clone`
implementations on `GenDnsConfig` and `WsConfig`.
When port-reuse is enabled, `GenTcpConfig` tracks the addresses it is listening on in a `HashSet`. This
`HashSet` is shared with the `TcpListenStream`s via an `Arc<Mutex<_>>`. Given that `Clone` is
`derive`d on `GenTcpConfig`, cloning a `GenTcpConfig` results in both instances sharing the same
set of listen addresses. This is not intuitive.
This behavior is, for example, error-prone in the scenario where one wants to speak both plain DNS/TCP and
WebSockets. Say a user creates the transport in the following way:
```rust
let transport = {
    let tcp = tcp::TcpConfig::new().nodelay(true).port_reuse(true);
    let dns_tcp = dns::DnsConfig::system(tcp).await?;
    let ws_dns_tcp = websocket::WsConfig::new(dns_tcp.clone());
    dns_tcp.or_transport(ws_dns_tcp)
};
```
Both `dns_tcp` and `ws_dns_tcp` share the same set of listen addresses, given the `dns_tcp.clone()` used to
create `ws_dns_tcp`. Thus, with port-reuse, a WebSocket dial might reuse a DNS/TCP listening
port instead of a WebSocket listening port.
With this commit a user is forced to do the following, preventing the above error:
```rust
let transport = {
    let dns_tcp = dns::DnsConfig::system(tcp::TcpConfig::new().nodelay(true).port_reuse(true)).await?;
    let ws_dns_tcp = websocket::WsConfig::new(
        dns::DnsConfig::system(tcp::TcpConfig::new().nodelay(true).port_reuse(true)).await?,
    );
    dns_tcp.or_transport(ws_dns_tcp)
};
```
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>