Unless restricted by orphan rules, implementing `From` is superior
because it implies `Into` while leaving the choice of which one to use
to the user. Especially for errors, `From` is convenient because that
is what `?` builds on.
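A minimal sketch with made-up error types, showing how `From` gives
`Into` for free and powers `?`:

```rust
#[derive(Debug)]
struct IoError(String);

#[derive(Debug)]
struct MyError(String);

impl From<IoError> for MyError {
    fn from(e: IoError) -> Self {
        MyError(format!("io: {}", e.0))
    }
}

fn read() -> Result<(), IoError> {
    Err(IoError("disk unavailable".into()))
}

fn run() -> Result<(), MyError> {
    // `?` converts the `IoError` into a `MyError` via the `From` impl.
    read()?;
    // The blanket impl also provides `Into` automatically:
    let _e: MyError = IoError("again".into()).into();
    Ok(())
}
```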
Co-authored-by: Max Inden <mail@max-inden.de>
- Change `PublicKey::into_protobuf_encoding` to
`PublicKey::to_protobuf_encoding`.
- Change `PublicKey::into_peer_id` to `PublicKey::to_peer_id`.
- Change `PeerId::from_public_key(PublicKey)` to
`PeerId::from_public_key(&PublicKey)`.
- Add `From<&PublicKey> for PeerId`.
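A hedged sketch of the call sites after these renames (assuming the
`libp2p::identity` API of this release):

```rust
use libp2p::identity::Keypair;
use libp2p::PeerId;

fn main() {
    let keypair = Keypair::generate_ed25519();
    let public = keypair.public();

    // The conversions now borrow `public` instead of consuming it.
    let _bytes = public.to_protobuf_encoding();
    let peer_id = public.to_peer_id();

    // Equivalent via the new `From<&PublicKey>` impl:
    let same = PeerId::from_public_key(&public);
    assert_eq!(peer_id, same);
}
```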
Co-authored-by: Max Inden <mail@max-inden.de>
* Fix needless question mark operator
* Don't convert from u64 to u64
LocalStreamId is already a u64, no need to convert.
* Don't use `.into()` to convert to the same type
* Don't specify lifetime if it can be inferred
* Use `vec!` macro if we immediately push to it
This creates the vector with the appropriate capacity.
* Don't index array when taking a reference is enough
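A small sketch of two of these fixes, using hypothetical values:

```rust
fn main() {
    // Before: create an empty vector, then immediately push.
    let mut xs = Vec::new();
    xs.push(1u64);

    // After: `vec!` allocates with the appropriate capacity up front.
    let xs = vec![1u64];

    // Before: a needless same-type conversion (`u64` to `u64`).
    let _id = u64::from(xs[0]);

    // After: the value already has the right type.
    let _id = xs[0];
}
```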
Co-authored-by: Max Inden <mail@max-inden.de>
Given the following scenario:
1. Remote peer X connects and is added to `connected_peers`.
2. Remote peer X opens a Kademlia substream and thus confirms that it supports
the Kademlia protocol.
3. Remote peer X is added to the routing table as `Connected`.
4. Remote peer X disconnects and is thus marked as `Disconnected` in the routing
table.
5. Remote peer Y connects and is added to `connected_peers`.
6. Remote peer X re-connects and is added to `connected_peers`.
7. Remote peer Y opens a Kademlia substream and thus confirms that it supports
the Kademlia protocol.
8. Remote peer Y is added to the routing table. Given that the bucket is already
   full, the call to `entry.insert` returns `kbucket::InsertResult::Pending {
   disconnected }`, where `disconnected` is peer X.
While peer X is in `connected_peers`, it has not yet (re-)confirmed that it
supports the Kademlia routing protocol and thus is still tracked as
`Disconnected` in the routing table. The `debug_assert` removed in this pull
request does not capture this scenario.
Change `Stream` implementation of `ExpandedSwarm` to return all
`SwarmEvents` instead of only the `NetworkBehaviour`'s events.
Remove `ExpandedSwarm::next_event`. Users can use `<ExpandedSwarm as
StreamExt>::next` instead.
Remove `ExpandedSwarm::next`. Users can use `<ExpandedSwarm as
StreamExt>::filter_map` instead.
Remove `Deref` and `DerefMut` implementations previously dereferencing
to the `NetworkBehaviour` on `Swarm`. Instead one can access the
`NetworkBehaviour` via `Swarm::behaviour` and `Swarm::behaviour_mut`.
Methods on `Swarm` can now be accessed directly, e.g. via
`my_swarm.local_peer_id()`.
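A hedged sketch of the resulting call sites:

```rust
use libp2p::swarm::{NetworkBehaviour, Swarm};

fn example<B: NetworkBehaviour>(my_swarm: &mut Swarm<B>) {
    // Previously, `Swarm` methods had to be called as e.g.
    // `Swarm::local_peer_id(&my_swarm)` to avoid clashing with the
    // dereferenced behaviour; now they are plain method calls.
    let _id = my_swarm.local_peer_id();

    // The behaviour is reached through the explicit accessors.
    let _behaviour = my_swarm.behaviour_mut();
}
```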
Reasoning: Accessing the `NetworkBehaviour` of a `Swarm` through `Deref`
and `DerefMut` instead of a method call is an unnecessary complication,
especially for newcomers. In addition, `Swarm` is not a smart pointer
and should thus not make use of `Deref` and `DerefMut`; see the
documentation from the standard library below:
> Deref should only be implemented for smart pointers to avoid
> confusion.
https://doc.rust-lang.org/std/ops/trait.Deref.html
* Implement `/dnsaddr` support on `libp2p-dns`.
To that end, since resolving `/dnsaddr` addresses needs
"fully qualified" multiaddresses when dialing, i.e. those
that end with the `/p2p/...` protocol, we make sure that
dialing always uses such fully qualified addresses by
appending the `/p2p` protocol as necessary. As a side-effect,
this adds support for dialing peers via "fully qualified"
addresses, as an alternative to using a `PeerId` together
with a `Multiaddr` with or without the `/p2p` protocol.
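A hedged sketch of such a fully qualified address (the peer ID below is
one of the public libp2p bootstrap nodes, used here only as an example):

```rust
use libp2p::Multiaddr;

fn main() {
    let addr: Multiaddr =
        "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN"
            .parse()
            .expect("valid multiaddr");
    // Dialing `addr` resolves `/dnsaddr` and matches the `/p2p` peer ID.
    println!("{}", addr);
}
```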
* Adapt libp2p-relay.
* Update versions, changelogs and small cleanups.
* Add `Kademlia::put_record_to` for storing a record at specific nodes,
e.g. for write-back caching after a successful read. In that context,
peers that were queried in a successful `Kademlia::get_record` operation but
did not return a record are now returned in the `GetRecordOk::no_record`
list of peer IDs.
Closes https://github.com/libp2p/rust-libp2p/issues/1577.
* Update protocols/kad/src/behaviour.rs
Co-authored-by: Max Inden <mail@max-inden.de>
* Refine implementation.
Rather than returning the peers that are cache candidates
in a `Vec` in an arbitrary order, use a `BTreeMap`. Furthermore,
make the caching configurable, being enabled by default with
`max_peers` of 1. Configuring `max_peers` > 1 also makes it possible to
control how many nodes the record is automatically cached at after a
lookup with quorum 1. When enabled,
successful lookups will always return `cache_candidates`, which
can be used explicitly with `Kademlia::put_record_to` after
lookups with a quorum > 1.
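A hedged sketch of the explicit write-back after a lookup with quorum > 1
(field and method names follow this change-set; the surrounding types are
assumptions):

```rust
use libp2p::kad::{record::store::MemoryStore, GetRecordOk, Kademlia, Quorum};

fn cache_after_lookup(kad: &mut Kademlia<MemoryStore>, ok: GetRecordOk) {
    if let Some(peer_record) = ok.records.into_iter().next() {
        // `cache_candidates` maps the distance to the key to a peer ID,
        // closest candidates first.
        let peers = ok.cache_candidates.into_iter().map(|(_, peer)| peer);
        kad.put_record_to(peer_record.record, peers, Quorum::One);
    }
}
```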
* Re-export KademliaCaching
* Update protocols/kad/CHANGELOG.md
Co-authored-by: Max Inden <mail@max-inden.de>
* Clarify changelog.
Co-authored-by: Max Inden <mail@max-inden.de>
* Make clippy "happy".
Address all clippy complaints that are not purely stylistic (or even
have corner cases with false positives). Ignore all "style" and "pedantic" lints.
* Fix tests.
* Undo unnecessary API change.
`futures-codec` has not been updated in recent months. It still
depends on `bytes` `v0.5`, preventing all downstream dependencies from
upgrading to `bytes` `v1.0`.
This commit replaces `futures_codec` with `asynchronous-codec`.
The latter is a fully upgraded fork of the former.
In addition this commit upgrades:
- bytes to v1
- unsigned-varint to v0.6.0
- prost to v0.7
- `From<record::Key>`
- `From<Vec<u8>>`
These `From` implementations restore the ability to use additional types
in `kad.get_closest_peers`, as was possible before version 0.33.
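A hedged sketch (the exact trait bounds are not shown here):

```rust
use libp2p::kad::{record, record::store::MemoryStore, Kademlia};

fn lookups(kad: &mut Kademlia<MemoryStore>) {
    // Both key types should be usable as lookup targets again:
    kad.get_closest_peers(record::Key::new(&"some-key"));
    kad.get_closest_peers(b"some-key".to_vec());
}
```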
Co-authored-by: Max Inden <mail@max-inden.de>
Remove `NotifyHandler::All`, thus removing the requirement for events
sent from a `NetworkBehaviour` to a `ProtocolsHandler` to be `Clone`. An
implementor of `NetworkBehaviour` can still notify all
`ProtocolHandler`s for a given peer by emitting one `NotifyHandler`
event per connection to that peer.
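A hedged sketch of the per-connection pattern; `connections` and
`pending_events` are hypothetical bookkeeping fields of the behaviour:

```rust
use std::collections::{HashMap, VecDeque};

use libp2p::core::connection::ConnectionId;
use libp2p::swarm::{NetworkBehaviourAction, NotifyHandler};
use libp2p::PeerId;

// `In`/`Out` stand for the handler's in-event and the behaviour's
// out-event; their order in `NetworkBehaviourAction` varied across releases.
struct MyBehaviour<In, Out> {
    connections: HashMap<PeerId, Vec<ConnectionId>>,
    pending_events: VecDeque<NetworkBehaviourAction<In, Out>>,
}

impl<In: Clone, Out> MyBehaviour<In, Out> {
    fn notify_all_handlers(&mut self, peer_id: &PeerId, event: In) {
        for &conn_id in self.connections.get(peer_id).into_iter().flatten() {
            self.pending_events.push_back(NetworkBehaviourAction::NotifyHandler {
                peer_id: peer_id.clone(),
                handler: NotifyHandler::One(conn_id),
                event: event.clone(),
            });
        }
    }
}
```

Note that cloning the event is now a local concern of the individual
behaviour rather than a trait-level requirement.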
* Make the lazy variant interoperable.
The remaining optimisation for `V1Lazy` for a listener
in the negotiation, whereby the listener delays flushing
of the multistream version header, is hereby removed.
The remaining effect of `V1Lazy` is only on the side of
the dialer, which delays flushing of its singular
protocol proposal in order to send it together with
the first application data (or an attempt is made to
read from the negotiated stream, which similarly
triggers a flush of the protocol proposal). This
permits `V1Lazy` dialers to be interoperable with
`V1` listeners. The theoretical pitfall whereby application data
gets misinterpreted by a listener as another protocol proposal
remains, however unlikely.
`V1` remains the default, but we may eventually risk
just making this lazy dialer flush a part of the default
`V1` implementation, removing the dedicated `V1Lazy`
version identifier.
* Update CHANGELOG
* Separate versions from mere header lines.
Every multistream-select version maps to a specific header line,
but there may be different variants of the same multistream-select
version using the same header line, i.e. the same wire protocol.
* Cleanup
* Update misc/multistream-select/CHANGELOG.md
* Add "infinite" scores for external addresses.
Extend address scores with an infinite cardinal, permitting
addresses to be retained "forever" or until explicitly removed.
Expose (external) address scores on the API.
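A hedged sketch (accessor names are assumptions based on this change):

```rust
use libp2p::swarm::{AddressScore, NetworkBehaviour, Swarm};
use libp2p::Multiaddr;

fn pin_address<B: NetworkBehaviour>(swarm: &mut Swarm<B>, addr: Multiaddr) {
    // Retain the address until it is explicitly removed.
    swarm.add_external_address(addr, AddressScore::Infinite);

    // Scores are now exposed for inspection.
    for record in swarm.external_addresses() {
        println!("{} -> {:?}", record.addr, record.score);
    }
}
```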
* Update swarm/src/registry.rs
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
* Fix compilation.
* Update CHANGELOG
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
* feat: upgrade to multihash 0.13
`multihash` changes a lot internally; it now uses stack allocation
instead of heap allocation. This leads to a few limitations in regard
to how `Multihash` can be used.
Therefore `PeerId` now uses `Bytes` internally so that only minimal
changes are needed.
* Update versions and changelogs.
Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
Co-authored-by: Roman S. Borschel <roman@parity.io>
* Streamline mplex and yamux configurations.
* For all configuration options that exist for both multiplexers
and have the same semantics, use the same names for the
configuration.
* Rename `Config` to `YamuxConfig` for consistency with
the majority of other protocols, e.g. `MplexConfig`, `PingConfig`,
`KademliaConfig`, etc.
* Completely hide `yamux` APIs within `libp2p-yamux`. This allows
fully controlling the libp2p API and streamlining it with the other
muxer APIs, consciously choosing e.g. which configuration options
to make configurable in libp2p and which to fix to certain values.
It also means that incompatible version bumps of `yamux` do not
necessarily prescribe version bumps of `libp2p-yamux`, as no `yamux`
types are exposed. The cost is some more duplication of configuration
options in the API, as well as the need to update `libp2p-yamux` if
`yamux` introduces new configuration options that `libp2p-yamux`
wants to expose as well.
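A hedged sketch of the aligned configuration surface (the option names
here are assumptions in the spirit of the change):

```rust
use libp2p::mplex::MplexConfig;
use libp2p::yamux::YamuxConfig;

fn muxer_configs() -> (MplexConfig, YamuxConfig) {
    let mut mplex = MplexConfig::default();
    mplex.set_max_buffer_size(1024 * 1024);

    // Formerly `yamux::Config`; no `yamux` types leak through anymore.
    let mut yamux = YamuxConfig::default();
    yamux.set_max_buffer_size(1024 * 1024);

    (mplex, yamux)
}
```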
* Update CHANGELOGs.
* Delay routing table update on new connections.
In order to avoid adding peers to the local routing table
which use a different protocol name and thus a different
overlay network, only update the local routing table
for newly established connections once the associated
connection handler reports that the protocol has been
confirmed.
* Update CHANGELOG.
* [multistream-select] Fix panic with V1Lazy and add integration tests.
Fixes a panic when using the `V1Lazy` negotiation protocol,
a regression introduced in https://github.com/libp2p/rust-libp2p/pull/1484.
It thereby adds integration tests for a transport upgrade with both
`V1` and `V1Lazy` to the `multistream-select` crate to prevent
future regressions.
* Cleanup.
* Update changelog.
In order to make use of the distances returned by `KBucketRef::range` in
a human-readable format, one needs to be able to translate the 256 bits
of a `Distance` into a smaller space.
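A hedged sketch, assuming an `ilog2`-style helper on `Distance` that
returns `None` for the zero distance:

```rust
use libp2p::kad::{record::store::MemoryStore, Kademlia};

fn log_bucket_ranges(kad: &mut Kademlia<MemoryStore>) {
    for bucket in kad.kbuckets() {
        let (min, max) = bucket.range();
        // 256-bit distances compressed to their bucket-index scale.
        println!("log2 range: {:?}..={:?}", min.ilog2(), max.ilog2());
    }
}
```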
Compiling libp2p-kad for `--target wasm32-unknown-unknown` fails with
the cryptic error message `cannot infer type for type `usize``.
Explicitly converting to `usize` solves the issue.
Co-authored-by: Andronik Ordian <write@reusable.software>
* Store addresses of provider records.
So far, provider records are stored without their
addresses and the addresses of provider records are
obtained from the routing table on demand. This has
two shortcomings:
1. We can only return provider records whose provider
peers happen to currently be in the local routing table.
2. The local node never returns itself as a provider for
a key, even if it is indeed a provider.
These issues are addressed here by storing the addresses
together with the provider records, falling back to
addresses from the routing table only for backward-compatibility
with existing implementations of `RecordStore` using persistent
storage.
Resolves https://github.com/libp2p/rust-libp2p/issues/1526.
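A hedged sketch (the `ProviderRecord::new` signature is an assumption
based on this change):

```rust
use libp2p::kad::record::{store::{MemoryStore, RecordStore}, Key, ProviderRecord};
use libp2p::{Multiaddr, PeerId};

fn store_local_provider(
    store: &mut MemoryStore,
    local_id: PeerId,
    listen_addrs: Vec<Multiaddr>,
) {
    // The record now carries the provider's addresses itself, so answering
    // `GetProviders` no longer depends on the routing table.
    let record = ProviderRecord::new(Key::new(&"some-key"), local_id, listen_addrs);
    store.add_provider(record).expect("store below capacity");
}
```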
* Update protocols/kad/src/behaviour.rs
Co-authored-by: Max Inden <mail@max-inden.de>
* Remove negligible use of with_capacity.
* Update changelog.
Co-authored-by: Max Inden <mail@max-inden.de>
With 826f513 a `StreamMuxer` can notify that the address of a remote
peer changed. This is needed to support the transport protocol QUIC as
remotes can change their IP addresses within the lifetime of a single
connection.
This commit implements the `NetworkBehaviour::inject_address_change`
handler to update the Kademlia routing table accordingly.
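A hedged sketch of the handler wiring; `update_address` stands in for the
Kademlia-internal routing table update:

```rust
use libp2p::core::{connection::ConnectionId, ConnectedPoint};
use libp2p::PeerId;

// Inside the `NetworkBehaviour` implementation:
fn inject_address_change(
    &mut self,
    peer: &PeerId,
    _id: &ConnectionId,
    _old: &ConnectedPoint,
    new: &ConnectedPoint,
) {
    // The remote (e.g. over QUIC) kept the connection but moved to a
    // new address; refresh its routing table entry accordingly.
    self.update_address(peer, new.get_remote_address());
}
```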
* More control & insight for k-buckets.
1) More control: It is now possible to disable automatic
insertions of peers into the routing table via a new
`KademliaBucketInserts` configuration option. The
default is `OnConnected`, but it can be set to `Manual`, in which
case `add_address` must be called explicitly (see the sketch below).
In order to communicate all situations in which a user
of `Kademlia` may want to manually update the routing
table, two new events are introduced:
* `KademliaEvent::RoutablePeer`: Emitted when a connection is
established to a peer with a known listen address that may be
added to the routing table. This is also emitted when automatic
inserts are allowed but the corresponding k-bucket is full.
* `KademliaEvent::PendingRoutablePeer`: Emitted when a connection
is established to a peer with a known listen address that is
pending insertion into the routing table (but may not make it).
This is only emitted when `OnConnected` (i.e. automatic inserts)
is used.
These complement the existing `UnroutablePeer` and
`RoutingUpdated` events. It is now also possible to
explicitly remove peers and addresses from the
routing table.
2) More insight: `Kademlia::kbuckets` now gives an
iterator over `KBucketRef`s and `Kademlia::bucket`
a particular `KBucketRef`. A `KBucketRef` in turn
allows iteration over its entries. In this way,
the full contents of the routing table can be
inspected, e.g. in order to decide which peer(s)
to remove.
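A short, hedged sketch of opting into manual routing-table management:

```rust
use libp2p::kad::{KademliaBucketInserts, KademliaConfig};

fn manual_inserts() -> KademliaConfig {
    let mut config = KademliaConfig::default();
    // With `Manual`, peers are only added via explicit `add_address` calls,
    // e.g. in response to `RoutablePeer` events.
    config.set_kbucket_inserts(KademliaBucketInserts::Manual);
    config
}
```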
* Update protocols/kad/src/behaviour.rs
* Update protocols/kad/src/behaviour.rs
Co-authored-by: Max Inden <mail@max-inden.de>
* Update CHANGELOG.
Co-authored-by: Max Inden <mail@max-inden.de>
The extension paper S-Kademlia includes a proposal for lookups over
disjoint paths. Within vanilla Kademlia, queries keep track of the
closest nodes in a single bucket. Any adversary along the path can thus
influence all future paths if they can come up with the
next-closest (not overall closest) hops. S-Kademlia tries to solve the
attack above by querying over disjoint paths using multiple buckets.
To adjust the libp2p Kademlia implementation accordingly this change-set
introduces an additional peers iterator: `ClosestDisjointPeersIter`.
This new iterator wraps around a set of `ClosestPeersIter`s.
`ClosestDisjointPeersIter` enforces that the `ClosestPeersIter`s
explore disjoint paths by instantly reporting a peer as failed to an
iterator whenever that peer has already been queried by a different
iterator.
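A hedged sketch of enabling the disjoint-path lookups:

```rust
use libp2p::kad::KademliaConfig;

fn disjoint_paths() -> KademliaConfig {
    let mut config = KademliaConfig::default();
    // Queries use multiple disjoint paths, increasing resilience against
    // adversarial nodes on any single path.
    config.disjoint_query_paths(true);
    config
}
```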
* [libp2p-kad] Provide more insight and control into Kademlia queries.
More insight: The API allows iterating over the active queries and
inspecting their state and execution statistics.
More control: The API allows aborting queries prematurely
at any time.
To that end, API operations that initiate new queries return the query ID
and multi-phase queries such as `put_record` retain the query ID across all
phases, each phase being executed by a new (internal) query.
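A hedged sketch of using the new introspection (accessor names are
assumptions based on this change-set):

```rust
use libp2p::kad::{record::store::MemoryStore, Kademlia, QueryId};

fn abort_failing_queries(kad: &mut Kademlia<MemoryStore>) {
    // Inspect execution statistics of all active queries ...
    let failing: Vec<QueryId> = kad
        .iter_queries()
        .filter(|q| q.stats().num_failures() > 10)
        .map(|q| q.id())
        .collect();

    // ... and abort the ones deemed hopeless.
    for id in failing {
        if let Some(mut query) = kad.query_mut(&id) {
            query.finish();
        }
    }
}
```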
* Cleanup
* Cleanup
* Update examples and re-exports.
* Incorporate review feedback.
* Update CHANGELOG
* Update CHANGELOG
Co-authored-by: Max Inden <mail@max-inden.de>
`FixedPeersIter` requires the initial set of peers to be passed as
`PeerId`s and not as `Key<PeerId>`s. This commit removes the unnecessary
conversion.
Instead of creating unconstrained random number generators in quickcheck
tests to generate test data, have quickcheck provide a `Seed` to seed
those random number generators and thus make the test execution
deterministic / reproducible.
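A hedged sketch of the pattern, with a test-local `Seed` wrapper (the
pre-1.0 `quickcheck` API is assumed):

```rust
use quickcheck::{Arbitrary, Gen, QuickCheck};
use rand::{rngs::StdRng, SeedableRng};

#[derive(Clone, Debug)]
struct Seed([u8; 32]);

impl Arbitrary for Seed {
    fn arbitrary<G: Gen>(g: &mut G) -> Self {
        let mut seed = [0u8; 32];
        for byte in seed.iter_mut() {
            *byte = u8::arbitrary(g);
        }
        Seed(seed)
    }
}

fn prop(seed: Seed) -> bool {
    // All test data derives from this RNG, making failures reproducible
    // from the seed that quickcheck reports.
    let mut _rng = StdRng::from_seed(seed.0);
    true
}

fn main() {
    QuickCheck::new().quickcheck(prop as fn(Seed) -> bool);
}
```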
Make sure to decrease `num_waiting` when being notified of a peer
failure to allow an additional peer to be queried.
Given that `FixedPeersIter` is initialized with `replication_factor` by
`QueryPool`, this bug will not surface today.
* protocols/kad/query/peers/closest: Consider K_VALUE nodes at init
By considering `K_VALUE` nodes at `ClosestPeersIter` initialization, the initial peer
set length is independent of `num_results` and thus of the `replication_factor`.
* protocols/kad/src/behaviour/test: Enable building single nodes
Introduces the `build_node` function to build a single, unconnected
node. Along the way, replace the notion of a `port_base` with returning
the actual `Multiaddr` of the node.
* protocols/kad/behaviour/test: Fix bootstrap test initialization
When looking for the closest node to a key, Kademlia considers
ALPHA_VALUE nodes to query at initialization. If `num_groups` is larger
than ALPHA_VALUE, the remaining locally known nodes will not be
considered. Given that no node other than node 1 is aware of them,
they would be lost entirely. To prevent the above, restrict `num_groups`
to be at most ALPHA_VALUE.
* protocols/kad/behaviour/test: Fix put_record and get_provider
In the past, when trying to find the closest nodes to a key, Kademlia
would consider `num_results` nodes to query out of all the
nodes it is aware of.
Both the `put_record` and the `get_provider` tests initialized their
swarms in the same way. The tests took the replication factor to use as
an input. The number of results to get was equal to the replication
factor. The number of swarms to start was twice the replication factor.
Nodes were grouped into two groups of replication-factor size. The
first node would be aware of all of the nodes in the first group. The
last node of the first group would be aware of all the nodes in the
second group.
By coincidence (I assume) these numbers played together very well. At
initialization the first node would consider `num_results` peers (see
first paragraph). It would then contact each of them. As the
first node is aware of the last node of the first group which in turn is
aware of all nodes in the second group, the first node would eventually
discover all nodes.
Recently, the number of nodes Kademlia considers at initialization when
looking for the nodes closest to a key was changed to only consider
ALPHA nodes.
With this in mind, playing through the test setup above again would
result in (1) `replication_factor - ALPHA` nodes being entirely lost as
the first node would never consider them and (2) the first node probably
never contacting the last node out of the first group and thus not
discovering any nodes of the second group.
To keep the multi-hop discovery in place while not basing one's test
setup on the lucky assumption of Kademlia considering a replication
factor's worth of nodes at initialization, this patch alters the two
tests: build a fully connected set of nodes and one additional node (the
first node). Connect the first node to a single node of the fully
connected set (simulating a boot node). Continue as done previously.
Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
When not making progress for `parallelism` time, a `ClosestPeersIter`
becomes `State::Stalled`. When stalled, an iterator is allowed to make
more parallel requests, up to `num_results`. If `num_results` is smaller
than `parallelism`, make sure to still allow up to `parallelism` requests
in-flight.
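A minimal sketch of the corrected cap, with a simplified local `State`:

```rust
#[derive(Clone, Copy)]
enum State {
    Iterating,
    Stalled,
    Finished,
}

fn max_in_flight(state: State, parallelism: usize, num_results: usize) -> usize {
    match state {
        // While making progress, stay within the configured parallelism.
        State::Iterating => parallelism,
        // When stalled, widen the probe, but never below `parallelism`.
        State::Stalled => parallelism.max(num_results),
        State::Finished => 0,
    }
}
```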
Co-Authored-By: Roman Borschel <romanb@users.noreply.github.com>