* protocols/kad/query/peers/closest: Consider K_VALUE nodes at init
By considering `K_VALUE` at `ClosestPeersIter` initialization, the initial peer
set length is independent of `num_results` and thus of the `replication_factor`.
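As a rough, self-contained sketch of the idea (stand-in types, not the actual libp2p-kad code), the iterator is seeded with up to `K_VALUE` peers regardless of `num_results`:

```rust
const K_VALUE: usize = 20;

struct ClosestPeersIter {
    /// Known peers to query, assumed pre-sorted by distance to the target.
    closest_peers: Vec<u64>,
    num_results: usize,
}

impl ClosestPeersIter {
    fn new(known_peers_by_distance: impl IntoIterator<Item = u64>, num_results: usize) -> Self {
        ClosestPeersIter {
            // Seed with K_VALUE peers rather than `num_results`, so the
            // initial set does not depend on the replication factor.
            closest_peers: known_peers_by_distance.into_iter().take(K_VALUE).collect(),
            num_results,
        }
    }
}
```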
* protocols/kad/src/behaviour/test: Enable building single nodes
Introduces the `build_node` function to build a single, not yet connected
node. Along the way, it replaces the notion of a `port_base` with returning
the actual `Multiaddr` of the node.
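A minimal sketch of the shape of such a helper, with stand-in types (the real helper builds a libp2p `Swarm` on a memory transport and reads back the address the transport actually assigned):

```rust
struct Swarm; // stand-in for the real, not yet connected Kademlia swarm

fn build_node() -> (String, Swarm) {
    // Listen on "/memory/0" and report the address actually assigned,
    // instead of deriving addresses from a shared `port_base`.
    let assigned_port = 12345u64; // stand-in for the transport-assigned port
    (format!("/memory/{}", assigned_port), Swarm)
}

fn build_nodes(n: usize) -> Vec<(String, Swarm)> {
    (0..n).map(|_| build_node()).collect()
}
```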
* protocols/kad/behaviour/test: Fix bootstrap test initialization
When looking for the closest nodes to a key, Kademlia considers
ALPHA_VALUE nodes to query at initialization. If `num_groups` is larger
than ALPHA_VALUE, the remaining locally known nodes will not be
considered. Given that no node other than node 1 is aware of them, they
would be lost entirely. To prevent this, restrict `num_groups` to be
equal to or smaller than ALPHA_VALUE.
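Sketch of the constraint, with the surrounding test code elided (names follow the text above, not the exact test):

```rust
const ALPHA_VALUE: usize = 3;

fn clamp_num_groups(desired: usize) -> usize {
    // Any group beyond ALPHA_VALUE would only be known to node 1 and would
    // never be queried, i.e. it would be lost to the rest of the network.
    desired.min(ALPHA_VALUE)
}
```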
* protocols/kad/behaviour/test: Fix put_record and get_provider
In the past, when trying to find the closest nodes to a key, Kademlia
would consider `num_results` nodes to query out of all the nodes it is
aware of.
Both the `put_record` and the `get_provider` tests initialized their
swarms in the same way. The tests took the replication factor to use as
an input. The number of results to get was equal to the replication
factor. The number of swarms to start was twice the replication factor.
Nodes would be grouped into two groups of replication-factor size. The
first node would be aware of all of the nodes in the first group. The
last node of the first group would be aware of all the nodes in the
second group.
By coincidence (I assume) these numbers played together very well. At
initialization the first node would consider `num_results` peers (see
first paragraph). It would then contact each of them. As the first node
is aware of the last node of the first group, which in turn is aware of
all nodes in the second group, the first node would eventually discover
all nodes.
Recently, the number of nodes Kademlia considers at initialization when
looking for the nodes closest to a key was changed to only ALPHA nodes.
With this in mind, playing through the test setup above again would
result in (1) `replication_factor - ALPHA` nodes being entirely lost, as
the first node would never consider them, and (2) the first node likely
never contacting the last node of the first group and thus not
discovering any nodes of the second group.
To keep the multi-hop discovery in place while not basing one's test
setup on the lucky assumption of Kademlia considering a replication
factor's worth of nodes at initialization, this patch alters the two
tests: build a fully connected set of nodes plus one additional node
(the first node). Connect the first node to a single node of the fully
connected set (simulating a boot node). Continue as done previously.
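A self-contained sketch of the resulting topology (nodes reduced to plain indices; the real tests build libp2p swarms and dial `Multiaddr`s):

```rust
/// Returns `adjacency`, where `adjacency[i]` lists the nodes that node `i`
/// knows about. Node 0 is the "first node"; nodes 1..=num_nodes form the
/// fully connected set.
fn topology(num_nodes: usize) -> Vec<Vec<usize>> {
    let mut adjacency = vec![Vec::new(); num_nodes + 1];

    // Fully connect nodes 1..=num_nodes with each other.
    for i in 1..=num_nodes {
        for j in 1..=num_nodes {
            if i != j {
                adjacency[i].push(j);
            }
        }
    }

    // The first node only knows node 1, its boot node; everything else has
    // to be found via multi-hop discovery.
    adjacency[0].push(1);
    adjacency
}
```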
Co-authored-by: Roman Borschel <romanb@users.noreply.github.com>
When not making progress for `parallelism` time, a `ClosestPeersIter`
becomes `State::Stalled`. When stalled, an iterator is allowed to make
more parallel requests, up to `num_results`. If `num_results` is smaller
than `parallelism`, make sure to still allow up to `parallelism` requests
in-flight.
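A minimal sketch of that in-flight limit (not the actual libp2p-kad implementation):

```rust
enum State { Iterating, Stalled }

fn max_in_flight(state: &State, parallelism: usize, num_results: usize) -> usize {
    match state {
        State::Iterating => parallelism,
        // When stalled, widen the window to `num_results`, but never drop
        // below `parallelism` in case `num_results < parallelism`.
        State::Stalled => num_results.max(parallelism),
    }
}
```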
Co-Authored-By: Roman Borschel <romanb@users.noreply.github.com>
* Disambiguate calls to NetworkBehaviour::inject_event
There is a gnarly edge-case with the custom-derive where rustc
cannot disambiguate the call if:
- The NetworkBehaviourEventProcess trait is imported
- We nest NetworkBehaviours that use the custom-derive
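A generic illustration of the ambiguity (hypothetical traits, not the code generated by the custom-derive): when two traits in scope both define `inject_event` for the same type, rustc rejects the plain method call with "multiple applicable items in scope", and the generated code has to use fully-qualified syntax instead.

```rust
trait ProcessFoo { fn inject_event(&mut self, event: u32); }
trait ProcessBar { fn inject_event(&mut self, event: String); }

struct Behaviour;
impl ProcessFoo for Behaviour { fn inject_event(&mut self, _event: u32) {} }
impl ProcessBar for Behaviour { fn inject_event(&mut self, _event: String) {} }

fn dispatch(b: &mut Behaviour) {
    // b.inject_event(1); // error[E0034]: multiple applicable items in scope
    <Behaviour as ProcessFoo>::inject_event(b, 1); // unambiguous
}
```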
* Update misc/core-derive/src/lib.rs
Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>
* Fix build and add CHANGELOG
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
* feat: allow sent messages to be seen as subscribed
Minor feature to allow mimicking the behaviour expected by the ipfs api tests.
* refactor: rename per review comments
* refactor: rename Floodsub::options to config
* chore: update changelog
* Update CHANGELOG.md
Co-Authored-By: Max Inden <mail@max-inden.de>
Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
A node receiving a `GetRecord` request first checks whether it has the
given record. If it does have the record, it does not return closer
nodes.
A node that knows the record for the given key is likely within a
neighborhood of nodes that know the record as well. In addition, the
node likely knows its neighborhood well.
When querying for a key with a quorum of 1, the above behavior of only
returning the record but not any close peers is fine. Once one queries
with a higher quorum, having a node respond with the record as well as
close nodes is likely going to speed up the query, given that the
returned peers probably know the record as well.
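Sketch of the described response handling with stand-in types (not the actual libp2p-kad handler):

```rust
struct GetRecordResponse {
    record: Option<Vec<u8>>,
    closer_peers: Vec<u64>, // stand-in for a list of PeerIds
}

fn handle_get_record(
    key: &[u8],
    local_record: Option<Vec<u8>>,
    closest_known_peers: impl Fn(&[u8]) -> Vec<u64>,
) -> GetRecordResponse {
    GetRecordResponse {
        record: local_record,
        // Returned even when the record is present, so that queries with a
        // quorum > 1 immediately learn about other likely record holders.
        closer_peers: closest_known_peers(key),
    }
}
```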
* Update CHANGELOG.
* Update CHANGELOG.md
Co-Authored-By: Max Inden <mail@max-inden.de>
* More updates.
* Update format.
Co-authored-by: Max Inden <mail@max-inden.de>
* Pass the error to inject_listener_closed method
If there is an error when the listener closes, found in the
`NetworkEvent::ListenerClosed` `reason` field, we would like to pass it
on to the `inject_listener_closed()` method so that implementors of this
method have access to it.
Add an error parameter to `inject_listener_closed`. Convert the
`reason` field from a `Result` to an `Option` and, if there is an error,
pass `Some(error)` at the method call site.
* Pass 'reason' as a Result
* Finish change
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
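A rough sketch of the resulting `inject_listener_closed` callback once `reason` is passed as a `Result` (types are simplified stand-ins, not the exact libp2p-swarm signature):

```rust
use std::io;

struct ListenerId(u64); // stand-in

trait NetworkBehaviourSketch {
    fn inject_listener_closed(&mut self, _id: ListenerId, _reason: Result<(), &io::Error>) {
        // Default: ignore. Implementors can inspect an `Err(e)` to learn
        // why the listener closed.
    }
}
```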
* mplex: Check for error and shutdown.
Issues #1504 and #1523 reported panics caused by polling the sink of
`secio::LenPrefixCodec` after it had entered its terminal state, i.e.
after it had previously encountered an error or was closed. According
to the reports this happened only when using mplex as a stream
multiplexer. It seems that because mplex always stores and keeps the
`Waker` when polling, a wakeup of any of those wakers will resume the
polling even for those cases where the previous poll did not return
`Poll::Pending` but resolved to a value.
To prevent polling after the connection was closed or an error
happened, we check for those conditions prior to every poll.
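Simplified sketch of that guard with stand-in types (not the actual mplex internals): remember whether a terminal state was reached and refuse to poll the underlying sink again in that case.

```rust
use std::io;

enum Status { Open, Closed, Errored }

struct Multiplexed { status: Status }

impl Multiplexed {
    fn poll_send(&mut self) -> io::Result<()> {
        // Do not touch the sink again after it was closed or errored.
        match self.status {
            Status::Closed => return Err(io::ErrorKind::BrokenPipe.into()),
            Status::Errored => return Err(io::ErrorKind::Other.into()),
            Status::Open => {}
        }
        // ... poll the underlying length-prefixed sink here ...
        Ok(())
    }
}
```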
* Keep error when operations fail.
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
* [libp2p-swarm] Make the multiple connections per peer first-class.
This commit makes the notion of multiple connections per peer
first-class in the API of libp2p-swarm, introducing the new
callbacks `inject_connection_established` and
`inject_connection_closed`. The `endpoint` parameter from
`inject_connected` and `inject_disconnected` is removed,
since the first connection to open may not be the last
connection to close, i.e. it cannot be guaranteed,
as was previously the case, that the endpoints passed
to these callbacks match up.
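A rough sketch of the resulting callback surface (all types are stand-ins; the real trait lives in libp2p-swarm and carries more parameters):

```rust
struct PeerId;
struct ConnectionId;
struct ConnectedPoint; // dialer or listener endpoint

trait NetworkBehaviourSketch {
    // Per peer: first connection established / last connection closed.
    // No `endpoint` parameter anymore.
    fn inject_connected(&mut self, _peer: &PeerId) {}
    fn inject_disconnected(&mut self, _peer: &PeerId) {}

    // Per individual connection, with its endpoint.
    fn inject_connection_established(&mut self, _peer: &PeerId, _id: &ConnectionId, _endpoint: &ConnectedPoint) {}
    fn inject_connection_closed(&mut self, _peer: &PeerId, _id: &ConnectionId, _endpoint: &ConnectedPoint) {}
}
```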
* Have identify track all addresses.
So that identify requests can be answered with the correct
observed address of the connection on which the request
arrives.
* Cleanup
* Cleanup
* Improve the `Peer` state API.
* Remove connection ID from `SwarmEvent::Dialing`.
* Mark `DialPeerCondition` non-exhaustive.
* Re-encapsulate `NetworkConfig`.
To retain the possibility of not re-exposing all
network configuration choices, thereby providing
a more convenient API on the \`SwarmBuilder\`.
* Rework Swarm::dial API.
* Update CHANGELOG.
* Doc formatting tweaks.
* core/src: Remove poll_broadcast connection notification mechanism
The `Network::poll_broadcast` function has not proven to be useful. This
commit removes the mechanism all the way down to the connection manager.
With `poll_broadcast` gone, there is no mechanism left to send commands
to pending connections. Command buffering for pending connections is
therefore no longer needed and is removed in this commit as well.
* core/src/connection/manager.rs: Remove warning comment
Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
Given that the order of `PeerId`s within the `GetProvidersOk.providers`
set is irrelevant, but duplication is at best confusing, this commit
makes use of a `HashSet` instead of a `Vec` to return unique `PeerId`s
only.
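Sketch of the resulting result type, with fields simplified and `PeerId` reduced to a stand-in:

```rust
use std::collections::HashSet;

#[derive(PartialEq, Eq, Hash)]
struct PeerId(u64); // stand-in for libp2p's PeerId

struct GetProvidersOk {
    key: Vec<u8>,
    // A set conveys that order is irrelevant and rules out duplicates.
    providers: HashSet<PeerId>,
    closest_peers: Vec<PeerId>,
}
```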