Mirror of https://github.com/fluencelabs/rust-libp2p, synced 2025-06-28 17:21:34 +00:00
Multiple connections per peer (#1440)
* Allow multiple connections per peer in libp2p-core.

  Instead of trying to enforce a single connection per peer, which involves quite a bit of additional complexity, e.g. to prioritise simultaneously opened connections, and can have other undesirable consequences [1], we now make multiple connections per peer a feature.

  The gist of these changes is as follows: The concept of a "node" with an implicit 1-1 correspondence to a connection has been replaced with the "first-class" concept of a "connection". The code from `src/nodes` has moved (with varying degrees of modification) to `src/connection`. A `HandledNode` has become a `Connection`, a `NodeHandler` a `ConnectionHandler`, the `CollectionStream` was the basis for the new `connection::Pool`, and so forth.

  Conceptually, a `Network` contains a `connection::Pool`, which in turn internally employs the `connection::Manager` for handling the background `connection::manager::Task`s, one per connection, as before. These are all considered implementation details. On the public API, `Peer`s are managed as before through the `Network`, except that the API has changed with the shift of focus to (potentially multiple) connections per peer. The `NetworkEvent`s have accordingly also undergone changes.

  The Swarm APIs remain largely unchanged, except for the fact that `inject_replaced` is no longer called. It may now practically happen that multiple `ProtocolsHandler`s are associated with a single `NetworkBehaviour`, one per connection. If an implementation of `NetworkBehaviour` somehow relies on communicating with exactly one `ProtocolsHandler`, this may cause issues, but it is unlikely.

  [1]: https://github.com/paritytech/substrate/issues/4272

* Fix intra-rustdoc links.

* Update core/src/connection/pool.rs
  Co-Authored-By: Max Inden <mail@max-inden.de>

* Address some review feedback and fix doc links.

* Allow responses to be sent on the same connection.

* Remove unnecessary remainders of inject_replaced.

* Update swarm/src/behaviour.rs
  Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update swarm/src/lib.rs
  Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/manager.rs
  Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/manager.rs
  Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Update core/src/connection/pool.rs
  Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>

* Incorporate more review feedback.

* Move module declaration below imports.

* Update core/src/connection/manager.rs
  Co-Authored-By: Toralf Wittner <tw@dtex.org>

* Update core/src/connection/manager.rs
  Co-Authored-By: Toralf Wittner <tw@dtex.org>

* Simplify as per review.

* Fix rustdoc link.

* Add try_notify_handler and simplify.

* Relocate DialingConnection and DialingAttempt, for better visibility constraints.

* Small cleanup.

* Small cleanup. More robust EstablishedConnectionIter.

* Clarify semantics of `DialingPeer::connect`.

* Don't call inject_disconnected on InvalidPeerId, to preserve the previous behavior and ensure calls to `inject_disconnected` are always paired with calls to `inject_connected`.

* Provide a public ConnectionId constructor, mainly needed for testing purposes, e.g. in substrate.

* Move the established connection limit check to the right place.

* Clean up connection error handling: separate connection errors into those occurring during connection setup or upon rejecting a newly established connection (the `PendingConnectionError`) and those occurring on previously established connections, i.e. for which a `ConnectionEstablished` event has already been emitted by the connection pool.

* Revert change in log level and clarify an invariant.

* Remove inject_replaced entirely.

* Allow notifying all connection handlers, simplifying the API by introducing a new enum `NotifyHandler`, used with a single constructor `NetworkBehaviourAction::NotifyHandler` (sketched below).

* Finishing touches: small API simplifications, code deduplication, and some more useful debug logging.

Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
Co-authored-by: Toralf Wittner <tw@dtex.org>
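Note: as a rough illustration of the `NotifyHandler` selector described in the commit message, here is a minimal, self-contained Rust sketch. The variant names `One`, `Any` and `All` and the stand-in `ConnectionId` type are assumptions for illustration; the commit itself only names the enum and its single constructor `NetworkBehaviourAction::NotifyHandler`.

    // Sketch only: stand-in for `libp2p_core::connection::ConnectionId`.
    #[derive(Clone, Copy, Debug)]
    pub struct ConnectionId(usize);

    /// Selects which of a peer's per-connection handlers receives an event
    /// (assumed variant names; see the note above).
    pub enum NotifyHandler {
        /// Notify the handler of one specific connection.
        One(ConnectionId),
        /// Notify the handler of any one established connection.
        Any,
        /// Notify the handlers of all established connections of the peer.
        All,
    }

    fn main() {
        // A `NetworkBehaviour` would embed this selector in
        // `NetworkBehaviourAction::NotifyHandler { peer_id, handler, event }`;
        // only the selector itself is shown here.
        let target = NotifyHandler::One(ConnectionId(0));
        match target {
            NotifyHandler::One(id) => println!("notify handler of connection {:?}", id),
            NotifyHandler::Any => println!("notify handler of any connection"),
            NotifyHandler::All => println!("notify handlers of all connections"),
        }
    }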
@@ -29,12 +29,17 @@ use crate::protocols_handler::{
 use futures::prelude::*;
 use libp2p_core::{
     ConnectedPoint,
     PeerId,
+    ConnectionInfo,
+    Connected,
+    connection::{
+        ConnectionHandler,
+        ConnectionHandlerEvent,
+        IntoConnectionHandler,
+        Substream,
+        SubstreamEndpoint,
+    },
     muxing::StreamMuxerBox,
-    nodes::Substream,
-    nodes::collection::ConnectionInfo,
-    nodes::handled_node::{IntoNodeHandler, NodeHandler, NodeHandlerEndpoint, NodeHandlerEvent},
     upgrade::{self, InboundUpgradeApply, OutboundUpgradeApply}
 };
 use std::{error, fmt, pin::Pin, task::Context, task::Poll, time::Duration};
@@ -51,31 +56,14 @@ where
     TIntoProtoHandler: IntoProtocolsHandler
 {
     /// Builds a `NodeHandlerWrapperBuilder`.
-    #[inline]
     pub(crate) fn new(handler: TIntoProtoHandler) -> Self {
         NodeHandlerWrapperBuilder {
             handler,
         }
     }
-
-    /// Builds the `NodeHandlerWrapper`.
-    #[deprecated(note = "Pass the NodeHandlerWrapperBuilder directly")]
-    #[inline]
-    pub fn build(self) -> NodeHandlerWrapper<TIntoProtoHandler>
-    where TIntoProtoHandler: ProtocolsHandler
-    {
-        NodeHandlerWrapper {
-            handler: self.handler,
-            negotiating_in: Vec::new(),
-            negotiating_out: Vec::new(),
-            queued_dial_upgrades: Vec::new(),
-            unique_dial_upgrade_id: 0,
-            shutdown: Shutdown::None,
-        }
-    }
 }

-impl<TIntoProtoHandler, TProtoHandler, TConnInfo> IntoNodeHandler<(TConnInfo, ConnectedPoint)>
+impl<TIntoProtoHandler, TProtoHandler, TConnInfo> IntoConnectionHandler<TConnInfo>
     for NodeHandlerWrapperBuilder<TIntoProtoHandler>
 where
     TIntoProtoHandler: IntoProtocolsHandler<Handler = TProtoHandler>,
@@ -84,9 +72,9 @@ where
 {
     type Handler = NodeHandlerWrapper<TIntoProtoHandler::Handler>;

-    fn into_handler(self, remote_info: &(TConnInfo, ConnectedPoint)) -> Self::Handler {
+    fn into_handler(self, connected: &Connected<TConnInfo>) -> Self::Handler {
         NodeHandlerWrapper {
-            handler: self.handler.into_handler(&remote_info.0.peer_id(), &remote_info.1),
+            handler: self.handler.into_handler(connected.peer_id(), &connected.endpoint),
             negotiating_in: Vec::new(),
             negotiating_out: Vec::new(),
             queued_dial_upgrades: Vec::new(),
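The two hunks above replace `IntoNodeHandler`, which received a `&(TConnInfo, ConnectedPoint)` tuple, with `IntoConnectionHandler`, which receives a `&Connected<TConnInfo>`. A minimal, self-contained sketch of the shapes this implies follows; the `info` field name and the `peer_id` return type are assumptions, since only `endpoint` and the method signatures appear in the diff.

    // Sketch only: stand-in for `libp2p_core::ConnectedPoint`.
    pub struct ConnectedPoint;

    /// Information about an established connection, replacing the former
    /// `(TConnInfo, ConnectedPoint)` tuple. The `endpoint` field name is
    /// taken from the diff (`&connected.endpoint`); `info` is assumed.
    pub struct Connected<TConnInfo> {
        pub endpoint: ConnectedPoint,
        pub info: TConnInfo,
    }

    impl<TConnInfo> Connected<TConnInfo> {
        /// The diff calls `connected.peer_id()`; sketched here as simply
        /// exposing the connection info.
        pub fn peer_id(&self) -> &TConnInfo {
            &self.info
        }
    }

    /// Minimal stand-in for the per-connection handler trait.
    pub trait ConnectionHandler {}

    /// Successor of `IntoNodeHandler`: builds a handler for one specific
    /// established connection.
    pub trait IntoConnectionHandler<TConnInfo> {
        type Handler: ConnectionHandler;
        fn into_handler(self, connected: &Connected<TConnInfo>) -> Self::Handler;
    }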
@@ -96,6 +84,7 @@ where
     }
 }

+/// A `ConnectionHandler` for an underlying `ProtocolsHandler`.
 /// Wraps around an implementation of `ProtocolsHandler`, and implements `NodeHandler`.
 // TODO: add a caching system for protocols that are supported or not
 pub struct NodeHandlerWrapper<TProtoHandler>
@@ -181,7 +170,7 @@ where
     }
 }

-impl<TProtoHandler> NodeHandler for NodeHandlerWrapper<TProtoHandler>
+impl<TProtoHandler> ConnectionHandler for NodeHandlerWrapper<TProtoHandler>
 where
     TProtoHandler: ProtocolsHandler,
 {
@@ -196,17 +185,17 @@ where
     fn inject_substream(
         &mut self,
         substream: Self::Substream,
-        endpoint: NodeHandlerEndpoint<Self::OutboundOpenInfo>,
+        endpoint: SubstreamEndpoint<Self::OutboundOpenInfo>,
     ) {
         match endpoint {
-            NodeHandlerEndpoint::Listener => {
+            SubstreamEndpoint::Listener => {
                 let protocol = self.handler.listen_protocol();
                 let timeout = protocol.timeout().clone();
                 let upgrade = upgrade::apply_inbound(substream, SendWrapper(protocol.into_upgrade().1));
                 let timeout = Delay::new(timeout);
                 self.negotiating_in.push((upgrade, timeout));
             }
-            NodeHandlerEndpoint::Dialer((upgrade_id, user_data, timeout)) => {
+            SubstreamEndpoint::Dialer((upgrade_id, user_data, timeout)) => {
                 let pos = match self
                     .queued_dial_upgrades
                     .iter()
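From the match arms in the hunk above, the renamed `SubstreamEndpoint` type can be sketched roughly as follows (inferred from this diff rather than quoted from libp2p-core):

    /// A newly opened substream either results from an earlier outbound
    /// request of this handler (handing back the user data queued with the
    /// request) or was opened by the remote.
    pub enum SubstreamEndpoint<TOutboundOpenInfo> {
        Dialer(TOutboundOpenInfo),
        Listener,
    }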
@@ -227,12 +216,13 @@ where
         }
     }

-    #[inline]
     fn inject_event(&mut self, event: Self::InEvent) {
         self.handler.inject_event(event);
     }

-    fn poll(&mut self, cx: &mut Context) -> Poll<Result<NodeHandlerEvent<Self::OutboundOpenInfo, Self::OutEvent>, Self::Error>> {
+    fn poll(&mut self, cx: &mut Context) -> Poll<
+        Result<ConnectionHandlerEvent<Self::OutboundOpenInfo, Self::OutEvent>, Self::Error>
+    > {
         // Continue negotiation of newly-opened substreams on the listening side.
         // We remove each element from `negotiating_in` one by one and add them back if not ready.
         for n in (0..self.negotiating_in.len()).rev() {
@@ -300,7 +290,7 @@ where

         match poll_result {
             Poll::Ready(ProtocolsHandlerEvent::Custom(event)) => {
-                return Poll::Ready(Ok(NodeHandlerEvent::Custom(event)));
+                return Poll::Ready(Ok(ConnectionHandlerEvent::Custom(event)));
             }
             Poll::Ready(ProtocolsHandlerEvent::OutboundSubstreamRequest {
                 protocol,
@@ -312,7 +302,7 @@ where
                 let (version, upgrade) = protocol.into_upgrade();
                 self.queued_dial_upgrades.push((id, (version, SendWrapper(upgrade))));
                 return Poll::Ready(Ok(
-                    NodeHandlerEvent::OutboundSubstreamRequest((id, info, timeout)),
+                    ConnectionHandlerEvent::OutboundSubstreamRequest((id, info, timeout)),
                 ));
             }
             Poll::Ready(ProtocolsHandlerEvent::Close(err)) => return Poll::Ready(Err(err.into())),
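Taken together, the renamed items in this diff suggest roughly the following shape for the new `ConnectionHandler` trait and its event type. This is a simplified, self-contained sketch inferred from the method signatures and event constructors above; associated-type bounds and any further items of the real trait in libp2p-core are omitted.

    use std::task::{Context, Poll};

    // Repeated from the sketch above so that this block stands alone.
    pub enum SubstreamEndpoint<TOutboundOpenInfo> {
        Dialer(TOutboundOpenInfo),
        Listener,
    }

    /// Events emitted from `poll`, inferred from the `Custom` and
    /// `OutboundSubstreamRequest` constructors in the diff.
    pub enum ConnectionHandlerEvent<TOutboundOpenInfo, TCustom> {
        /// Request a new outbound substream; the info is handed back via
        /// `inject_substream` as `SubstreamEndpoint::Dialer(..)`.
        OutboundSubstreamRequest(TOutboundOpenInfo),
        /// A custom event to surface to the user of the connection.
        Custom(TCustom),
    }

    /// Per-connection handler, successor of the former `NodeHandler`.
    pub trait ConnectionHandler {
        type InEvent;
        type OutEvent;
        type Error;
        type Substream;
        type OutboundOpenInfo;

        /// Called for every newly opened substream, inbound or outbound.
        fn inject_substream(
            &mut self,
            substream: Self::Substream,
            endpoint: SubstreamEndpoint<Self::OutboundOpenInfo>,
        );

        /// Delivers an event addressed to this handler.
        fn inject_event(&mut self, event: Self::InEvent);

        /// Polls for the next event on this connection; an error closes it.
        fn poll(&mut self, cx: &mut Context) -> Poll<
            Result<ConnectionHandlerEvent<Self::OutboundOpenInfo, Self::OutEvent>, Self::Error>
        >;
    }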