See e04e95a for the rationale.
With tokio `v1.19.0` released, `TcpStream` exposes `take_error`.
This commit applies the same fix from e04e95a to the tokio TCP provider.
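A minimal sketch of the kind of check this enables (the function is illustrative, not the actual provider code):
```rust
use tokio::net::TcpStream;

// After connecting, surface any pending socket error via `take_error`
// instead of silently treating the stream as healthy.
async fn connect_checked(addr: std::net::SocketAddr) -> std::io::Result<TcpStream> {
    let stream = TcpStream::connect(addr).await?;
    if let Some(err) = stream.take_error()? {
        return Err(err);
    }
    Ok(stream)
}
```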
* core/muxing: Remove `Into<io::Error>` bound from `StreamMuxer::Error`
This allows us to preserve the type information of a muxer's concrete
error for as long as possible. For `StreamMuxerBox`, we leverage `io::Error`'s
capability of wrapping any error that implements `Into<Box<dyn Error>>` (see
the sketch after this list).
* Use `?` in `Connection::poll`
* Use `?` in `muxing::boxed::Wrap`
* Use `futures::ready!` in `muxing::boxed::Wrap`
* Fill PR number into changelog
* Put `Error + Send + Sync` bounds directly on `StreamMuxer::Error`
* Move `Send + Sync` bounds to higher layers
* Use `map_inbound_stream` helper
* Update changelog to match new implementation
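A minimal sketch of the wrapping used for `StreamMuxerBox` (the helper name `to_io_error` is illustrative): `io::Error::new` accepts anything convertible into a boxed error, and the concrete error remains accessible via `io::Error::into_inner`.
```rust
use std::io;

// Wrap a concrete muxer error in an `io::Error` without losing access to it:
// callers can recover the boxed error via `io::Error::into_inner`.
fn to_io_error<E>(err: E) -> io::Error
where
    E: std::error::Error + Send + Sync + 'static,
{
    io::Error::new(io::ErrorKind::Other, err)
}
```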
`libp2p` 0.45.1 depends (amongst others) on `libp2p-uds` v0.32.0 and on
`libp2p-core` v0.33.0. `libp2p-uds` v0.32.0, however, depends on `libp2p-core`
v0.32.0.
This commit resolves the version mismatch for the upcoming `libp2p` release.
Log the peer ID and stream limit, and reference the config option, when the limit is
exceeded. This should help folks running into this limit debug what is going on.
This aligns the public API of the `libp2p-mplex` module with the one
from `libp2p-yamux`. This change has two benefits:
1. For standalone users of `libp2p-mplex`, the substreams themselves are
now useful, similar to `libp2p-yamux`, and don't necessarily need to
be polled via the `StreamMuxer`. The `StreamMuxer` only forwards to
the `Async{Read,Write}` implementations (see the sketch after this list).
2. This will reduce the diff of #2648 because we can chunk the one
giant commit into smaller atomic ones.
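As a hedged illustration of point 1 (the `echo` helper is illustrative): a substream that implements the `futures` `AsyncRead`/`AsyncWrite` traits can be driven directly with the usual extension traits, without going through the `StreamMuxer`.
```rust
use futures::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt};

// Read from and write to a substream directly; no muxer polling involved.
async fn echo<S>(mut substream: S) -> std::io::Result<()>
where
    S: AsyncRead + AsyncWrite + Unpin,
{
    let mut buf = [0u8; 1024];
    let n = substream.read(&mut buf).await?;
    substream.write_all(&buf[..n]).await?;
    substream.close().await
}
```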
* protocols/kad: Split into outbound and inbound substreams
* protocols/kad: Limit # of inbound substreams to 32
A remote node may still send more than 32 requests in parallel by using more
than one connection or by sending more than one request per stream.
* protocols/kad: Favor new substreams over old ones waiting for reuse
When a new inbound substream comes in and the limit of total inbound substreams
is hit, try to find an old inbound substream waiting to be reused. If one exists,
replace it with the new substream. If no such old substream exists, drop the new
one.
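A hedged sketch of this replacement policy (type and function names are illustrative, not the actual handler code):
```rust
const MAX_INBOUND_SUBSTREAMS: usize = 32;

enum InboundState {
    WaitingForReuse, // an old substream kept around for reuse
    Active,          // a substream currently serving a request
}

fn on_new_inbound(substreams: &mut Vec<InboundState>) {
    if substreams.len() < MAX_INBOUND_SUBSTREAMS {
        // Below the limit: accept the new substream.
        substreams.push(InboundState::Active);
    } else if let Some(idx) = substreams
        .iter()
        .position(|s| matches!(s, InboundState::WaitingForReuse))
    {
        // At the limit: replace an idle substream with the new one.
        substreams[idx] = InboundState::Active;
    }
    // Otherwise the new inbound substream is dropped.
}
```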
* protocols/relay: Use prost-codec
* protocols/relay: Respond to at most one incoming reservation request
Also changes the poll order, prioritizing:
- Error handling over everything.
- Queued events over existing circuits.
- Existing circuits over accepting new circuits.
- Reservation management of existing reservations over new reservation
requests.
* protocols/relay: Deny <= 8 incoming circuit requests with one per peer
* protocols/relay: Deny new circuits before accepting new circuits
* Remove unused import in rendezvous tests
* Expand uds-transport conditionals to include features
If neither the tokio nor the async-std feature is enabled, this file must
effectively be empty to avoid unused-code warnings (see the sketch below).
Co-authored-by: Max Inden <mail@max-inden.de>
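A hedged sketch of the kind of gating meant here (the exact feature names are assumptions):
```rust
// Compile the module only when it can actually be used, i.e. on Unix with at
// least one runtime feature enabled; otherwise the file contributes no code.
#![cfg(all(unix, any(feature = "tokio", feature = "async-std")))]
```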
This limit is shared across all `ConnectionHandler`s on a single connection. It
only enforces a limit on the number of negotiating substreams. Once a substream
is negotiated, the `ConnectionHandler` manages its lifecycle and has to enforce
any limits itself.
An identify push contains the whole identify information of a remote
peer. Upgrading multiple inbound identify push streams is useless.
Instead, older streams are dropped in favor of newer ones.
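A minimal sketch of this drop-older-in-favor-of-newer behavior (type and field names are illustrative):
```rust
// Keep at most one pending inbound identify-push stream: assigning a new
// stream drops, and thereby closes, the previous one.
struct PushState<S> {
    inbound_push_stream: Option<S>,
}

impl<S> PushState<S> {
    fn on_new_inbound_push(&mut self, stream: S) {
        self.inbound_push_stream = Some(stream);
    }
}
```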
This commit removes the `Clone` implementation on `GenTcpConfig` and consequently the `Clone`
implementations on `GenDnsConfig` and `WsConfig`.
When port-reuse is enabled, `GenTcpConfig` tracks the addresses it is listening on in a `HashSet`. This
`HashSet` is shared with the `TcpListenStream`s via an `Arc<Mutex<_>>`. Given that `Clone` is
`derive`d on `GenTcpConfig`, cloning a `GenTcpConfig` results in both instances sharing the same
set of listen addresses. This is not intuitive.
This behavior is, for example, error-prone in the scenario where one wants to speak both plain DNS/TCP and
Websockets. Say a user creates the transport in the following way:
```rust
let transport = {
    let tcp = tcp::TcpConfig::new().nodelay(true).port_reuse(true);
    let dns_tcp = dns::DnsConfig::system(tcp).await?;
    let ws_dns_tcp = websocket::WsConfig::new(dns_tcp.clone());
    dns_tcp.or_transport(ws_dns_tcp)
};
```
Both `dns_tcp` and `ws_dns_tcp` share the same set of listen addresses, given that `ws_dns_tcp` was
created via `dns_tcp.clone()`. Thus, with port-reuse, a Websocket dial might reuse a DNS/TCP listening
port instead of a Websocket listening port.
With this commit a user is forced to do the below, preventing the above error:
```rust
let transport = {
    let dns_tcp = dns::DnsConfig::system(tcp::TcpConfig::new().nodelay(true).port_reuse(true)).await?;
    let ws_dns_tcp = websocket::WsConfig::new(
        dns::DnsConfig::system(tcp::TcpConfig::new().nodelay(true).port_reuse(true)).await?,
    );
    dns_tcp.or_transport(ws_dns_tcp)
};
```
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
`libp2p-core` provides the `StreamMuxer` abstraction so it can provide
functionality that abstracts over this trait.
We never use the `flush_all` function as part of our abstractions, and no one
else is going to use it either, so we can remove it from the abstraction.
Optionally, only perform dial-backs on peers that are observed at a global IP address.
This is relevant when multiple peers are in the same local network, in which case a peer could incorrectly assume itself to be public because a peer in the same local network was able to dial it. Thus, servers should reject dial-back requests from clients with a non-global IP address, and at the same time clients should only pick connected peers as servers if they are global.
This check sits behind a config flag (enabled by default); disabling it allows use cases where AutoNAT is needed within a private network.
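A hedged sketch of disabling the check for a private-network deployment (the field name `only_global_ips` is an assumption, not verified here):
```rust
// Assumed flag name; the check is enabled by default.
fn private_network_config() -> libp2p_autonat::Config {
    libp2p_autonat::Config {
        only_global_ips: false, // allow dial-backs involving non-global addresses
        ..Default::default()
    }
}
```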
Handle in test that an `OutboundProbeEvent::Response` can be reported
before the associated inbound connection event.
In rare cases (that only really happen in a test setup where both peers
run on the same device) the server may observe a connection and report
the response back to the client, before the connection event was
reported at the client.
As a listening client, when requesting a reservation with a relay, the relay
responds with its public addresses. The listening client can then use the public
addresses of the relay to advertise itself as reachable under a relayed
address (`/<public-relay-addr>/p2p-circuit/p2p/<listening-client-peer-id>`).
The above operates under the assumption that the relay knows its public address.
A relay learns its public address from remote peers, via the identify protocol.
In the case where the relay just started up, the listening client might be the
very first node to connect to it.
Such a scenario allows for a race condition. The listening client requests a
reservation from the relay, while the relay requests its public address from the
listening client. The response to the former needs to contain the addresses learned through the latter.
This commit serializes the two requests, making sure, in the case of a freshly
started relay, that the listening client tells the relay its public address
before requesting a reservation from the relay.
Co-authored-by: Elena Frank <elena.frank@protonmail.com>
The `HandlerWrapper` polls three components:
1. `ConnectionHandler`
2. Outbound negotiating streams
3. Inbound negotiating streams
The `ConnectionHandler` itself might poll already-negotiated streams.
By polling the three components above in the listed order, one:
- Prioritizes local work and work coming from negotiated streams over
negotiating streams.
- Prioritizes outbound negotiating streams over inbound negotiating
streams, i.e. outbound requests over inbound requests.
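As a hedged illustration of this prioritization (function and parameter names are illustrative, not the actual `HandlerWrapper` code):
```rust
use std::task::{Context, Poll};

// Poll three event sources in priority order: an earlier source that is ready
// wins, so handler work is preferred over outbound negotiation, and outbound
// negotiation over inbound negotiation.
fn poll_prioritised<T>(
    cx: &mut Context<'_>,
    poll_handler: &mut dyn FnMut(&mut Context<'_>) -> Poll<T>,
    poll_outbound: &mut dyn FnMut(&mut Context<'_>) -> Poll<T>,
    poll_inbound: &mut dyn FnMut(&mut Context<'_>) -> Poll<T>,
) -> Poll<T> {
    if let Poll::Ready(ev) = poll_handler(cx) {
        return Poll::Ready(ev);
    }
    if let Poll::Ready(ev) = poll_outbound(cx) {
        return Poll::Ready(ev);
    }
    poll_inbound(cx)
}
```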