Mirror of https://github.com/fluencelabs/rust-libp2p (synced 2025-05-12 10:57:13 +00:00)
* Allow multiple connections per peer in libp2p-core.

  Instead of trying to enforce a single connection per peer, which involves quite a bit of additional complexity (e.g. to prioritise simultaneously opened connections) and can have other undesirable consequences [1], we now make multiple connections per peer a feature.

  The gist of these changes is as follows: The concept of a "node" with an implicit 1-1 correspondence to a connection has been replaced with the "first-class" concept of a "connection". The code from `src/nodes` has moved (with varying degrees of modification) to `src/connection`. A `HandledNode` has become a `Connection`, a `NodeHandler` a `ConnectionHandler`, the `CollectionStream` was the basis for the new `connection::Pool`, and so forth.

  Conceptually, a `Network` contains a `connection::Pool` which in turn internally employs the `connection::Manager` for handling the background `connection::manager::Task`s, one per connection, as before. These are all considered implementation details. On the public API, `Peer`s are managed as before through the `Network`, except that the API has changed with the shift of focus to (potentially multiple) connections per peer. The `NetworkEvent`s have accordingly also undergone changes.

  The Swarm APIs remain largely unchanged, except that `inject_replaced` is no longer called. It may now happen in practice that multiple `ProtocolsHandler`s are associated with a single `NetworkBehaviour`, one per connection. If an implementation of `NetworkBehaviour` somehow relies on communicating with exactly one `ProtocolsHandler`, this may cause issues, but that is unlikely.

  [1]: https://github.com/paritytech/substrate/issues/4272

* Fix intra-rustdoc links.
* Update core/src/connection/pool.rs (Co-Authored-By: Max Inden <mail@max-inden.de>)
* Address some review feedback and fix doc links.
* Allow responses to be sent on the same connection.
* Remove unnecessary remainders of inject_replaced.
* Update swarm/src/behaviour.rs (Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>)
* Update swarm/src/lib.rs (Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>)
* Update core/src/connection/manager.rs (Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>)
* Update core/src/connection/manager.rs (Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>)
* Update core/src/connection/pool.rs (Co-Authored-By: Pierre Krieger <pierre.krieger1708@gmail.com>)
* Incorporate more review feedback.
* Move module declaration below imports.
* Update core/src/connection/manager.rs (Co-Authored-By: Toralf Wittner <tw@dtex.org>)
* Update core/src/connection/manager.rs (Co-Authored-By: Toralf Wittner <tw@dtex.org>)
* Simplify as per review.
* Fix rustdoc link.
* Add try_notify_handler and simplify.
* Relocate DialingConnection and DialingAttempt, for better visibility constraints.
* Small cleanup.
* Small cleanup. More robust EstablishedConnectionIter.
* Clarify semantics of `DialingPeer::connect`.
* Don't call inject_disconnected on InvalidPeerId, to preserve the previous behavior and ensure calls to `inject_disconnected` are always paired with calls to `inject_connected`.
* Provide a public ConnectionId constructor, mainly needed for testing purposes, e.g. in substrate.
* Move the established connection limit check to the right place.
* Clean up connection error handling: separate connection errors into those occurring during connection setup or upon rejecting a newly established connection (the `PendingConnectionError`) and those occurring on previously established connections, i.e. for which a `ConnectionEstablished` event has already been emitted by the connection pool.
* Revert change in log level and clarify an invariant.
* Remove inject_replaced entirely.
* Allow notifying all connection handlers, simplifying the API by introducing a new enum `NotifyHandler`, used with a single constructor `NetworkBehaviourAction::NotifyHandler` (see the sketch after this list).
* Finishing touches: small API simplifications, code deduplication, and some more useful debug logging.

Co-authored-by: Max Inden <mail@max-inden.de>
Co-authored-by: Pierre Krieger <pierre.krieger1708@gmail.com>
Co-authored-by: Toralf Wittner <tw@dtex.org>
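Since this change replaces per-node event delivery with connection-scoped notifications, the following is a minimal sketch of how a `NetworkBehaviour` implementation might construct the new `NetworkBehaviourAction::NotifyHandler` action. It is illustrative only: `ExampleEvent` and the helper functions are hypothetical, and the `NotifyHandler` variants, field names, and the re-export from `libp2p-swarm` are assumed to match the revision this PR targets.

```rust
// Sketch only: the shape of the connection-scoped notification API
// introduced by this change. `ExampleEvent` and both helpers are made up;
// `NotifyHandler::One`/`All` and the `NotifyHandler` re-export are assumed
// to exist as described in the commit message above.
use libp2p_core::{PeerId, connection::ConnectionId};
use libp2p_swarm::{NetworkBehaviourAction, NotifyHandler};

/// A made-up in-event accepted by some `ProtocolsHandler`.
#[derive(Debug, Clone)]
pub struct ExampleEvent;

/// Notify the handler of one specific connection to `peer`.
pub fn notify_one(peer: PeerId, conn: ConnectionId)
    -> NetworkBehaviourAction<ExampleEvent, ()>
{
    NetworkBehaviourAction::NotifyHandler {
        peer_id: peer,
        handler: NotifyHandler::One(conn),
        event: ExampleEvent,
    }
}

/// Notify the handlers of all connections to `peer`.
pub fn notify_all(peer: PeerId) -> NetworkBehaviourAction<ExampleEvent, ()> {
    NetworkBehaviourAction::NotifyHandler {
        peer_id: peer,
        handler: NotifyHandler::All,
        event: ExampleEvent,
    }
}
```

A behaviour would typically return such an action from `poll`, or queue it the way `Identify` queues its `events` below. The public `ConnectionId` constructor mentioned above is mainly useful for constructing such actions in tests.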
352 lines · 13 KiB · Rust
// Copyright 2018 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.

use crate::handler::{IdentifyHandler, IdentifyHandlerEvent};
use crate::protocol::{IdentifyInfo, ReplySubstream};
use futures::prelude::*;
use libp2p_core::{
    ConnectedPoint,
    Multiaddr,
    PeerId,
    PublicKey,
    connection::ConnectionId,
    upgrade::{ReadOneError, UpgradeError}
};
use libp2p_swarm::{
    NegotiatedSubstream,
    NetworkBehaviour,
    NetworkBehaviourAction,
    PollParameters,
    ProtocolsHandler,
    ProtocolsHandlerUpgrErr
};
use std::{collections::HashMap, collections::VecDeque, io, pin::Pin, task::Context, task::Poll};

/// Network behaviour that automatically identifies nodes periodically, returns information
/// about them, and answers identify queries from other nodes.
pub struct Identify {
    /// Protocol version to send back to remotes.
    protocol_version: String,
    /// Agent version to send back to remotes.
    agent_version: String,
    /// The public key of the local node. To report on the wire.
    local_public_key: PublicKey,
    /// For each peer we're connected to, the observed address to send back to it.
    observed_addresses: HashMap<PeerId, Multiaddr>,
    /// Pending replies to send.
    pending_replies: VecDeque<Reply>,
    /// Pending events to be emitted when polled.
    events: VecDeque<NetworkBehaviourAction<(), IdentifyEvent>>,
}

/// A pending reply to an inbound identification request.
enum Reply {
    /// The reply is queued for sending.
    Queued {
        peer: PeerId,
        io: ReplySubstream<NegotiatedSubstream>,
        observed: Multiaddr
    },
    /// The reply is being sent.
    Sending {
        peer: PeerId,
        io: Pin<Box<dyn Future<Output = Result<(), io::Error>> + Send>>,
    }
}

impl Identify {
    /// Creates a new `Identify` network behaviour.
    pub fn new(protocol_version: String, agent_version: String, local_public_key: PublicKey) -> Self {
        Identify {
            protocol_version,
            agent_version,
            local_public_key,
            observed_addresses: HashMap::new(),
            pending_replies: VecDeque::new(),
            events: VecDeque::new(),
        }
    }
}

impl NetworkBehaviour for Identify {
    type ProtocolsHandler = IdentifyHandler;
    type OutEvent = IdentifyEvent;

    fn new_handler(&mut self) -> Self::ProtocolsHandler {
        IdentifyHandler::new()
    }

    fn addresses_of_peer(&mut self, _: &PeerId) -> Vec<Multiaddr> {
        Vec::new()
    }

    fn inject_connected(&mut self, peer_id: PeerId, endpoint: ConnectedPoint) {
        let observed = match endpoint {
            ConnectedPoint::Dialer { address } => address,
            ConnectedPoint::Listener { send_back_addr, .. } => send_back_addr,
        };

        self.observed_addresses.insert(peer_id, observed);
    }

    fn inject_disconnected(&mut self, peer_id: &PeerId, _: ConnectedPoint) {
        self.observed_addresses.remove(peer_id);
    }

    fn inject_event(
        &mut self,
        peer_id: PeerId,
        _connection: ConnectionId,
        event: <Self::ProtocolsHandler as ProtocolsHandler>::OutEvent,
    ) {
        match event {
            IdentifyHandlerEvent::Identified(remote) => {
                self.events.push_back(
                    NetworkBehaviourAction::GenerateEvent(
                        IdentifyEvent::Received {
                            peer_id,
                            info: remote.info,
                            observed_addr: remote.observed_addr.clone(),
                        }));
                self.events.push_back(
                    NetworkBehaviourAction::ReportObservedAddr {
                        address: remote.observed_addr,
                    });
            }
            IdentifyHandlerEvent::Identify(sender) => {
                let observed = self.observed_addresses.get(&peer_id)
                    .expect("We only receive events from nodes we're connected to. We insert \
                             into the hashmap when we connect to a node and remove only when we \
                             disconnect; QED");
                self.pending_replies.push_back(
                    Reply::Queued {
                        peer: peer_id,
                        io: sender,
                        observed: observed.clone()
                    });
            }
            IdentifyHandlerEvent::IdentificationError(error) => {
                self.events.push_back(
                    NetworkBehaviourAction::GenerateEvent(
                        IdentifyEvent::Error { peer_id, error }));
            }
        }
    }

    fn poll(
        &mut self,
        cx: &mut Context,
        params: &mut impl PollParameters,
    ) -> Poll<
        NetworkBehaviourAction<
            <Self::ProtocolsHandler as ProtocolsHandler>::InEvent,
            Self::OutEvent,
        >,
    > {
        if let Some(event) = self.events.pop_front() {
            return Poll::Ready(event);
        }

        if let Some(r) = self.pending_replies.pop_front() {
            // The protocol names can be bytes, but the identify protocol expects UTF-8 strings.
            // There's not much we can do to solve this conflict except strip non-UTF-8 characters.
            let protocols: Vec<_> = params
                .supported_protocols()
                .map(|p| String::from_utf8_lossy(&p).to_string())
                .collect();

            let mut listen_addrs: Vec<_> = params.external_addresses().collect();
            listen_addrs.extend(params.listened_addresses());

            let mut sending = 0;
            let to_send = self.pending_replies.len() + 1;
            let mut reply = Some(r);
            loop {
                match reply {
                    Some(Reply::Queued { peer, io, observed }) => {
                        let info = IdentifyInfo {
                            public_key: self.local_public_key.clone(),
                            protocol_version: self.protocol_version.clone(),
                            agent_version: self.agent_version.clone(),
                            listen_addrs: listen_addrs.clone(),
                            protocols: protocols.clone(),
                        };
                        let io = Box::pin(io.send(info, &observed));
                        reply = Some(Reply::Sending { peer, io });
                    }
                    Some(Reply::Sending { peer, mut io }) => {
                        sending += 1;
                        match Future::poll(Pin::new(&mut io), cx) {
                            Poll::Ready(Ok(())) => {
                                let event = IdentifyEvent::Sent { peer_id: peer };
                                return Poll::Ready(NetworkBehaviourAction::GenerateEvent(event));
                            },
                            Poll::Pending => {
                                self.pending_replies.push_back(Reply::Sending { peer, io });
                                if sending == to_send {
                                    // All remaining futures are pending.
                                    break
                                } else {
                                    reply = self.pending_replies.pop_front();
                                }
                            }
                            Poll::Ready(Err(err)) => {
                                let event = IdentifyEvent::Error {
                                    peer_id: peer,
                                    error: ProtocolsHandlerUpgrErr::Upgrade(UpgradeError::Apply(err.into()))
                                };
                                return Poll::Ready(NetworkBehaviourAction::GenerateEvent(event));
                            },
                        }
                    }
                    None => unreachable!()
                }
            }
        }

        Poll::Pending
    }
}

/// Event emitted by the `Identify` behaviour.
#[derive(Debug)]
pub enum IdentifyEvent {
    /// Identifying information has been received from a peer.
    Received {
        /// The peer that has been identified.
        peer_id: PeerId,
        /// The information provided by the peer.
        info: IdentifyInfo,
        /// The address observed by the peer for the local node.
        observed_addr: Multiaddr,
    },
    /// Identifying information of the local node has been sent to a peer.
    Sent {
        /// The peer that the information has been sent to.
        peer_id: PeerId,
    },
    /// Error while attempting to identify the remote.
    Error {
        /// The peer with whom the error originated.
        peer_id: PeerId,
        /// The error that occurred.
        error: ProtocolsHandlerUpgrErr<ReadOneError>,
    },
}

#[cfg(test)]
mod tests {
    use crate::{Identify, IdentifyEvent};
    use futures::{prelude::*, pin_mut};
    use libp2p_core::{
        identity,
        PeerId,
        muxing::StreamMuxer,
        Transport,
        upgrade
    };
    use libp2p_tcp::TcpConfig;
    use libp2p_secio::SecioConfig;
    use libp2p_swarm::{Swarm, SwarmEvent};
    use libp2p_mplex::MplexConfig;
    use std::{fmt, io};

    fn transport() -> (identity::PublicKey, impl Transport<
        Output = (PeerId, impl StreamMuxer<Substream = impl Send, OutboundSubstream = impl Send, Error = impl Into<io::Error>>),
        Listener = impl Send,
        ListenerUpgrade = impl Send,
        Dial = impl Send,
        Error = impl fmt::Debug
    > + Clone) {
        let id_keys = identity::Keypair::generate_ed25519();
        let pubkey = id_keys.public();
        let transport = TcpConfig::new()
            .nodelay(true)
            .upgrade(upgrade::Version::V1)
            .authenticate(SecioConfig::new(id_keys))
            .multiplex(MplexConfig::new());
        (pubkey, transport)
    }

    #[test]
    fn periodic_id_works() {
        let (mut swarm1, pubkey1) = {
            let (pubkey, transport) = transport();
            let protocol = Identify::new("a".to_string(), "b".to_string(), pubkey.clone());
            let swarm = Swarm::new(transport, protocol, pubkey.clone().into_peer_id());
            (swarm, pubkey)
        };

        let (mut swarm2, pubkey2) = {
            let (pubkey, transport) = transport();
            let protocol = Identify::new("c".to_string(), "d".to_string(), pubkey.clone());
            let swarm = Swarm::new(transport, protocol, pubkey.clone().into_peer_id());
            (swarm, pubkey)
        };

        Swarm::listen_on(&mut swarm1, "/ip4/127.0.0.1/tcp/0".parse().unwrap()).unwrap();

        let listen_addr = async_std::task::block_on(async {
            loop {
                let swarm1_fut = swarm1.next_event();
                pin_mut!(swarm1_fut);
                match swarm1_fut.await {
                    SwarmEvent::NewListenAddr(addr) => return addr,
                    _ => {}
                }
            }
        });
        Swarm::dial_addr(&mut swarm2, listen_addr).unwrap();

        // nb. Either swarm may receive the `Identified` event first, upon which
        // it will permit the connection to be closed, as defined by
        // `IdentifyHandler::connection_keep_alive`. Hence the test succeeds if
        // either `Identified` event arrives correctly.
        async_std::task::block_on(async move {
            loop {
                let swarm1_fut = swarm1.next();
                pin_mut!(swarm1_fut);
                let swarm2_fut = swarm2.next();
                pin_mut!(swarm2_fut);

                match future::select(swarm1_fut, swarm2_fut).await.factor_second().0 {
                    future::Either::Left(IdentifyEvent::Received { info, .. }) => {
                        assert_eq!(info.public_key, pubkey2);
                        assert_eq!(info.protocol_version, "c");
                        assert_eq!(info.agent_version, "d");
                        assert!(!info.protocols.is_empty());
                        assert!(info.listen_addrs.is_empty());
                        return;
                    }
                    future::Either::Right(IdentifyEvent::Received { info, .. }) => {
                        assert_eq!(info.public_key, pubkey1);
                        assert_eq!(info.protocol_version, "a");
                        assert_eq!(info.agent_version, "b");
                        assert!(!info.protocols.is_empty());
                        assert_eq!(info.listen_addrs.len(), 1);
                        return;
                    }
                    _ => {}
                }
            }
        })
    }
}