The PR optimizes polling of the listeners in the TCP transport by using `futures::SelectAll` instead of storing them in a queue and polling manually.
Resolves #2781.
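For illustration, a minimal sketch of the approach, with placeholder types standing in for libp2p-tcp's internals (`ListenStream` and `TransportEvent` are assumptions here, not the crate's real items):

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

use futures::stream::{SelectAll, Stream, StreamExt};

// Placeholders for the per-listener stream and the events it yields; the real
// types live inside libp2p-tcp.
struct ListenStream;
struct TransportEvent;

impl Stream for ListenStream {
    type Item = TransportEvent;
    fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        Poll::Pending
    }
}

struct TcpTransport {
    // All listeners live in one `SelectAll`, which polls them fairly and
    // automatically drops streams that have terminated.
    listeners: SelectAll<ListenStream>,
}

impl TcpTransport {
    fn poll(&mut self, cx: &mut Context<'_>) -> Poll<TransportEvent> {
        // A single poll of the `SelectAll` replaces the old manual loop over a queue.
        match self.listeners.poll_next_unpin(cx) {
            Poll::Ready(Some(event)) => Poll::Ready(event),
            _ => Poll::Pending,
        }
    }
}
```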
`libp2p_tcp::Transport::remove_listener` previously removed the first listener that *did not match* the provided `ListenerId`. This small fix brings it in line with other implementations.
As I do frequently, I fixed the latest clippy warnings so that CI won't complain in the future. By the way, we could automate this, perhaps by running the nightly version of clippy.
We refactor our continuous integration workflow with the following goals in mind:
- Run as few jobs as possible
- Have the jobs finish as fast as possible
- Have the jobs redo as little work as possible
There are only so many jobs that GitHub Actions will run in parallel.
Thus, it makes sense to not create massive matrices but instead group
things together meaningfully.
The new `test` job will:
- Run once for each crate
- Ensure that the crate compiles on its specified MSRV
- Ensure that the tests pass
- Ensure that there are no semver violations
This is an improvement over the previous setup because all of these now run
in parallel, which speeds up execution and surfaces more errors at
once. Previously, tests that ran later in the pipeline would not run
at all until the "first" one passed.
We also previously did not verify the MSRV of each crate, making the
setting in the `Cargo.toml` rather pointless.
The new `cross` job supersedes the existing `wasm` job.
This is an improvement because we now also compile the crate for
Windows and macOS, something that wasn't checked before.
We assume that checking MSRV and the tests under Linux is good enough.
Hence, this job only checks for compile-errors.
The new `feature_matrix` job ensures we compile correctly with certain feature combinations.
`libp2p` exposes a fair few feature flags. Some of the combinations
are worth checking independently. For the moment, this concerns only
the executor-related transports together with the executor flags, but
this list can easily be extended.
The new `clippy` job runs for `stable` and `beta` Rust.
Clippy gets continuously extended with new lints. Up until now, we would only
learn about those once a new version of Rust was released and CI ran the new
lints, leading to unrelated failures in CI. Running clippy with `beta`
Rust gives us a heads-up of 6 weeks before these lints land on stable.
Fixes #2951.
Update to `if-watch` version 3.0.0 and pass through features, such that `libp2p-tcp/async-io` selects `if-watch/smol` and `libp2p-tcp/tokio` brings in `if-watch/tokio`.
The mDNS part is already done in #3096.
Co-authored-by: Demi Marie Obenour <demiobenour@gmail.com>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
Co-authored-by: David Craven <david@craven.ch>
Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Co-authored-by: elenaf9 <elena.frank@protonmail.com>
Co-authored-by: Marco Munizaga <marco@marcopolo.io>
Return `None` in `<GenTcpTransport as Transport>::address_translation` if the
address is not a TCP address. This is relevant in the case of something like
`OrTransport<TcpTransport, QuicTransport>`, where TCP would otherwise perform
the address translation for QUIC addresses.
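A minimal sketch of the new guard, written against the `multiaddr` crate; the `is_tcp_addr` helper is an illustrative assumption, not part of the crate's API:

```rust
use multiaddr::{Multiaddr, Protocol};

// Hypothetical helper: does this address contain a TCP component?
fn is_tcp_addr(addr: &Multiaddr) -> bool {
    addr.iter().any(|p| matches!(p, Protocol::Tcp(_)))
}

fn address_translation(listen: &Multiaddr, observed: &Multiaddr) -> Option<Multiaddr> {
    // Refuse to translate non-TCP addresses, so that e.g. the TCP half of an
    // `OrTransport<TcpTransport, QuicTransport>` does not translate QUIC addresses.
    if !is_tcp_addr(listen) || !is_tcp_addr(observed) {
        return None;
    }
    // The usual port translation would follow here; returning the observed
    // address unchanged keeps this sketch short.
    Some(observed.clone())
}
```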
Remove default features. You need to enable required features
explicitly now. As a quick workaround, you may want to use the
new `full` feature which activates all features.
With if-watch `2.0.0`, `IfWatcher::new` is no longer async, hence the
`IfWatch` wrapping logic is obsolete.
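For illustration, a minimal sketch of the simplification (assuming `IfWatcher::new` returns `std::io::Result`):

```rust
use if_watch::IfWatcher;

// With if-watch 2.0.0 construction is synchronous, so there is no pending
// future to stash away in an `IfWatch` wrapper any more.
fn new_if_watcher() -> std::io::Result<IfWatcher> {
    IfWatcher::new()
}
```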
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Instead of having a mix of `poll_event`, `poll_outbound` and `poll_close`, we
flatten the entire interface of `StreamMuxer` into 4 individual functions:
- `poll_inbound`
- `poll_outbound`
- `poll_address_change`
- `poll_close`
This design is closer to that of other async traits like `AsyncRead` and
`AsyncWrite`. It also allows us to delete `StreamMuxerEvent`.
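A rough sketch of the flattened shape; the associated types and exact signatures here are assumptions rather than the verbatim trait:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// Local placeholder for libp2p_core::Multiaddr so the sketch stands on its own.
struct Multiaddr;

trait StreamMuxer {
    type Substream;
    type Error;

    /// Poll for a new inbound substream opened by the remote.
    fn poll_inbound(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<Result<Self::Substream, Self::Error>>;

    /// Poll for completion of an outbound substream we requested to open.
    fn poll_outbound(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<Result<Self::Substream, Self::Error>>;

    /// Poll for a change of the remote's address.
    fn poll_address_change(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<Result<Multiaddr, Self::Error>>;

    /// Poll for the connection being closed.
    fn poll_close(self: Pin<&mut Self>, cx: &mut Context<'_>)
        -> Poll<Result<(), Self::Error>>;
}
```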
Remove the concept of individual `Transport::Listener` streams from `Transport`.
Instead the `Transport` is polled directly via `Transport::poll`. The
`Transport` is now responsible for driving its listeners.
See e04e95a for the rationale.
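Roughly, the trait changes shape as sketched below; the event type is a trimmed-down placeholder and other trait items are omitted:

```rust
use std::pin::Pin;
use std::task::{Context, Poll};

// Stand-in for the event type emitted by the transport (new or expired listen
// addresses, incoming connections, listener errors, ...).
struct TransportEvent;

trait Transport {
    // Note: there is no `type Listener` stream any more.

    /// The transport drives all of its listeners itself and reports their
    /// events through a single poll function.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<TransportEvent>;
}
```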
With tokio `v1.19.0` released, `TcpStream` exposes `take_error`.
This commit applies the same fix from e04e95a to the tokio TCP provider.
This commit removes the `Clone` implementation on `GenTcpConfig` and consequently the `Clone`
implementations on `GenDnsConfig` and `WsConfig`.
When port reuse is enabled, `GenTcpConfig` tracks the addresses it is listening on in a `HashSet`. This
`HashSet` is shared with the `TcpListenStream`s via an `Arc<Mutex<_>>`. Given that `Clone` is
`derive`d on `GenTcpConfig`, cloning a `GenTcpConfig` results in both instances sharing the same
set of listen addresses. This is not intuitive.
This behavior is, for example, error-prone in the scenario where one wants to speak both plain DNS/TCP and
WebSockets. Say a user creates the transport in the following way:
```rust
let transport = {
    let tcp = tcp::TcpConfig::new().nodelay(true).port_reuse(true);
    let dns_tcp = dns::DnsConfig::system(tcp).await?;
    let ws_dns_tcp = websocket::WsConfig::new(dns_tcp.clone());
    dns_tcp.or_transport(ws_dns_tcp)
};
```
Both `dns_tcp` and `ws_dns_tcp` share the set of listen addresses, given the `dns_tcp.clone()` used to
create `ws_dns_tcp`. Thus, with port reuse, a WebSocket dial might reuse a DNS/TCP listening
port instead of a WebSocket listening port.
With this commit, a user is forced to do the below, preventing the above error:
```rust
let transport = {
    let dns_tcp = dns::DnsConfig::system(tcp::TcpConfig::new().nodelay(true).port_reuse(true)).await?;
    let ws_dns_tcp = websocket::WsConfig::new(
        dns::DnsConfig::system(tcp::TcpConfig::new().nodelay(true).port_reuse(true)).await?,
    );
    dns_tcp.or_transport(ws_dns_tcp)
};
```
Co-authored-by: Thomas Eizinger <thomas@eizinger.io>
Previously `libp2p-swarm` required a `Transport` to be `Clone`. Methods
on `Transport`, e.g. `Transport::dial`, would take ownership, requiring
e.g. a `Clone::clone` before calling `Transport::dial`.
The requirement for `Transport` to be `Clone` is no longer needed in
`libp2p-swarm`; e.g. concurrent dialing can be done without a clone per
dial.
This commit removes the requirement of `Clone` for `Transport` in
`libp2p-swarm`. As a follow-up, methods on `Transport` no longer take
ownership, but instead a mutable reference (`&mut self`).
On the one hand this simplifies `libp2p-swarm`; on the other it
simplifies implementations of `Transport`.
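A trimmed-down sketch of the signature change, using placeholder types rather than the real trait:

```rust
// Placeholders so the sketch stands on its own.
struct Multiaddr;
struct DialFuture;
struct TransportError;

trait Transport {
    // Before: `fn dial(self, addr: Multiaddr) -> Result<DialFuture, TransportError>`
    // took ownership, which forced `libp2p-swarm` to require `Transport: Clone`
    // and clone the transport once per dial.
    //
    // After: dialing borrows the transport mutably, so concurrent dials need no clone.
    fn dial(&mut self, addr: Multiaddr) -> Result<DialFuture, TransportError>;
}
```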
Within `Provider::new_stream` we wait for the socket to become writable
(`stream.writable`) before returning it as a stream. In other words, we
wait for the socket to connect before returning it as a new TCP
connection, which allows us to catch TCP connection establishment errors early.
While `stream.writable` drives the process of connecting, it does not
surface potential connection errors themselves. These need to be
explicitly collected via `TcpSocket::take_error`. If not explicitly
collected, they will surface on future operations on the socket.
For now this commit explicitly calls `TcpSocket::take_error` when using
`async-io` only. `tokio` introduced the method (`take_error`) in
https://github.com/tokio-rs/tokio/pull/4364 though later reverted it in
https://github.com/tokio-rs/tokio/pull/4392. Once re-reverted, the same
patch can be applied when using `libp2p-tcp` with tokio.
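A minimal sketch of the pattern for the `async-io` provider; `finish_connect` is an illustrative helper, and the real code also configures the socket (e.g. for port reuse) before connecting:

```rust
use std::net::TcpStream;

use async_io::Async;

async fn finish_connect(stream: Async<TcpStream>) -> std::io::Result<Async<TcpStream>> {
    // `stream` is assumed to wrap a non-blocking socket on which a connect has
    // just been initiated. Waiting for writability drives the connect to completion ...
    stream.writable().await?;

    // ... but a failed connect is only reported via `take_error`. Without this
    // call, the error would only surface on a later read or write.
    if let Some(err) = stream.get_ref().take_error()? {
        return Err(err);
    }

    Ok(stream)
}
```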
---
One example of how this bug surfaces today:
A `/dnsaddr/xxx` `Multiaddr` can potentially resolve to multiple IP
addresses, e.g. to the IPv4 and the IPv6 addresses of a node.
`libp2p-dns` tries dialing each of them in sequence using `libp2p-tcp`,
returning the first that `libp2p-tcp` reports as successful.
Say that the local node tries the IPv6 address first. In the scenario
where the local node's networking stack does not support IPv6, e.g. has
no IPv6 route, the connection attempt to the resolved IPv6 address of
the remote node fails. Given that `libp2p-tcp` does not call
`TcpSocket::take_error`, it would falsely report the TCP connection
attempt as successful. `libp2p-dns` would receive the "successful" TCP
connection for the IPv6 address from `libp2p-tcp` and would not attempt
to dial the IPv4 address, even though it supports IPv4, and instead
bubble up the "successful" IPv6 TCP connection. Only later, when writing
or reading from the "successful" IPv6 TCP connection, would the IPv6
error surface.
Co-authored-by: Oliver Wangler <oliver@wngr.de>
Allows `NetworkBehaviour` implementations to dial a peer while instructing
the dialed connection to be upgraded as if it were the listening
endpoint.
This is needed when establishing direct connections through NATs and/or
firewalls (hole punching). When hole punching via TCP (QUIC is different
but similar), both ends dial each other at the same time, resulting in a
simultaneously opened TCP connection. To disambiguate who is the dialer
and who is the listener, there are two options:
1. Use the Simultaneous Open Extension of Multistream Select. See the
[sim-open] specification and the [sim-open-rust] Rust implementation.
2. Disambiguate the role (dialer or listener) based on the role within
the DCUtR [dcutr] protocol. More specifically, the node initiating the
DCUtR process will act as a listener and the other as a dialer.
This commit enables (2), i.e. enables the DCUtR protocol to specify the
role used once the connection is established.
While on the positive side (2) requires one round trip fewer than (1), on
the negative side (2) only works for coordinated simultaneous dials.
I.e. when a simultaneous dial happens by chance and is not coordinated via
DCUtR, the connection attempt fails if only (2) is in place.
[sim-open]: https://github.com/libp2p/specs/blob/master/connections/simopen.md
[sim-open-rust]: https://github.com/libp2p/rust-libp2p/pull/2066
[dcutr]: https://github.com/libp2p/specs/blob/master/relay/DCUtR.md
Don't report events of a connection to the `NetworkBehaviour` if the connection
was established while the remote peer was banned. Among other guarantees, this
upholds that `NetworkBehaviour::inject_event` is never called without a previous
`NetworkBehaviour::inject_connection_established` for said connection.
Co-authored-by: Max Inden <mail@max-inden.de>