docs(examples): add a README to each example

Resolves #3853.

Pull-Request: #3974.
This commit is contained in:
Thomas Coratger
2023-06-01 09:40:22 +02:00
committed by GitHub
parent 87e863e8c9
commit 75edcfcdb0
26 changed files with 501 additions and 223 deletions


@ -0,0 +1,41 @@
## Description
This example consists of a client and a server, which demonstrate the usage of the AutoNAT and identify protocols in **libp2p**.
## Usage
### Client
The client-side part of the example showcases the combination of the AutoNAT and identify protocols.
The identify protocol allows the local peer to determine its external addresses, which are then included in AutoNAT dial-back requests sent to the server.
To run the client example, follow these steps:
1. Start the server by following the instructions provided in the `examples/server` directory.
2. Open a new terminal.
3. Run the following command in the terminal:
```sh
cargo run --bin autonat_client -- --server-address <server-addr> --server-peer-id <server-peer-id> --listen-port <port>
```
Note: The `--listen-port` parameter is optional and allows you to specify a fixed port at which the local client should listen.
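Conceptually, the client's reachability logic is a small state machine: each dial-back result from the server updates the peer's view of its own NAT status. The sketch below illustrates that idea only; the enum and function names are made up and are not the `libp2p-autonat` API.

```rust
/// Conceptual sketch of how an AutoNAT client could track its NAT status
/// from dial-back results. Names are illustrative, not the libp2p API.
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum NatStatus {
    Unknown,
    Public,
    Private,
}

/// Update the inferred status after a dial-back probe: a successful
/// dial-back to one of our observed addresses suggests we are publicly
/// reachable; a failure suggests we are behind a NAT.
pub fn on_dial_back(_current: NatStatus, dial_back_succeeded: bool) -> NatStatus {
    if dial_back_succeeded {
        NatStatus::Public
    } else {
        NatStatus::Private
    }
}

fn main() {
    let mut status = NatStatus::Unknown;
    status = on_dial_back(status, false);
    println!("status after failed dial-back: {:?}", status);
}
```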
### Server
The server-side example demonstrates a basic AutoNAT server that supports the AutoNAT and identify protocols.
To start the server, follow these steps:
1. Open a terminal.
2. Run the following command:
```sh
cargo run --bin autonat_server -- --listen-port <port>
```
Note: The `--listen-port` parameter is optional and allows you to set a fixed port at which the local peer should listen.
## Conclusion
By combining the AutoNAT and identify protocols, the example showcases the establishment of direct connections between peers and the exchange of external address information.
Users can explore the provided client and server code to gain insights into the implementation details and functionality of **libp2p**.


@ -18,16 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Basic example that combines the AutoNAT and identify protocols.
//!
//! The identify protocol informs the local peer of its external addresses, which are then sent in AutoNAT dial-back
//! requests to the server.
//!
//! To run this example, follow the instructions in `examples/server` to start a server, then run in a new terminal:
//! ```sh
//! cargo run --bin autonat_client -- --server-address <server-addr> --server-peer-id <server-peer-id> --listen-port <port>
//! ```
//! The `listen-port` parameter is optional and allows you to set a fixed port at which the local client should listen.
#![doc = include_str!("../../README.md")]
use clap::Parser;
use futures::prelude::*;


@ -18,13 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Basic example for an AutoNAT server that supports the /libp2p/autonat/1.0.0 and "/ipfs/0.1.0" protocols.
//!
//! To start the server run:
//! ```sh
//! cargo run --bin autonat_server -- --listen-port <port>
//! ```
//! The `listen-port` parameter is optional and allows you to set a fixed port at which the local peer should listen.
#![doc = include_str!("../../README.md")]
use clap::Parser;
use futures::prelude::*;


@ -0,0 +1,30 @@
## Description
A basic chat application with logs demonstrating libp2p and the gossipsub protocol combined with mDNS for the discovery of peers to gossip with.
It showcases how peers can connect, discover each other using mDNS, and engage in real-time chat sessions.
## Usage
1. Using two terminal windows, start two instances, typing the following in each:
```sh
cargo run
```
2. Mutual mDNS discovery may take a few seconds. When each peer does discover the other
it will print a message like:
```sh
mDNS discovered a new peer: {peerId}
```
3. Type a message and hit return: the message is sent and printed in the other terminal.
4. Close with `Ctrl-c`. You can open more terminal windows and add more peers using the same line above.
When a new peer is discovered through mDNS, it can join the conversation, and all peers will receive messages sent by that peer.
If a participant exits the application using `Ctrl-c` or any other method, the remaining peers will receive an mDNS expired event and remove the expired peer from their list of known peers.
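The discovery and expiry handling described above boils down to keeping a set of known peers in sync with mDNS events. A minimal sketch (the event and helper names are illustrative, not the real `libp2p-mdns` types, which carry `(PeerId, Multiaddr)` pairs):

```rust
use std::collections::HashSet;

/// Illustrative stand-ins for libp2p's mDNS `Discovered`/`Expired` events.
pub enum MdnsEvent {
    Discovered(String),
    Expired(String),
}

/// Keep the set of known peers in sync with discovery events.
pub fn handle_event(known: &mut HashSet<String>, event: MdnsEvent) {
    match event {
        MdnsEvent::Discovered(peer) => {
            known.insert(peer);
        }
        MdnsEvent::Expired(peer) => {
            known.remove(&peer);
        }
    }
}

fn main() {
    let mut known = HashSet::new();
    handle_event(&mut known, MdnsEvent::Discovered("12D3KooW...".to_string()));
    println!("known peers: {}", known.len());
}
```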
## Conclusion
This chat application demonstrates the usage of **libp2p** and the gossipsub protocol for building a decentralized chat system.
By leveraging mDNS for peer discovery, users can easily connect with other peers and engage in real-time conversations.
The example provides a starting point for developing more sophisticated chat applications using **libp2p** and exploring the capabilities of decentralized communication.


@ -18,32 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! A basic chat application with logs demonstrating libp2p and the gossipsub protocol
//! combined with mDNS for the discovery of peers to gossip with.
//!
//! Using two terminal windows, start two instances, typing the following in each:
//!
//! ```sh
//! cargo run
//! ```
//!
//! Mutual mDNS discovery may take a few seconds. When each peer does discover the other
//! it will print a message like:
//!
//! ```sh
//! mDNS discovered a new peer: {peerId}
//! ```
//!
//! Type a message and hit return: the message is sent and printed in the other terminal.
//! Close with Ctrl-c.
//!
//! You can open more terminal windows and add more peers using the same line above.
//!
//! Once an additional peer is mDNS discovered it can participate in the conversation
//! and all peers will receive messages sent from it.
//!
//! If a participant exits (Control-C or otherwise) the other peers will receive an mDNS expired
//! event and remove the expired peer from the list of known peers.
#![doc = include_str!("../README.md")]
use async_std::io;
use futures::{future::Either, prelude::*, select};

examples/dcutr/README.md

@ -0,0 +1,35 @@
## Description
The "Direct Connection Upgrade through Relay" (DCUtR) protocol allows peers in a peer-to-peer network to establish direct connections with each other.
In other words, DCUtR is libp2p's version of hole punching.
This example provides a basic usage of this protocol in **libp2p**.
## Usage
To run the example, follow these steps:
1. Run the example using Cargo:
```sh
cargo run -- <OPTIONS>
```
Replace `<OPTIONS>` with specific options (pass `--help` to list the available options).
### Example usage
- Example usage in client-listen mode:
```sh
cargo run -- --mode listen --secret-key-seed 42 --relay-address /ip4/127.0.0.1/tcp/12345
```
- Example usage in client-dial mode:
```sh
cargo run -- --mode dial --secret-key-seed 42 --relay-address /ip4/127.0.0.1/tcp/12345 --remote-peer-id <REMOTE_PEER_ID>
```
For this example to work, it is also necessary to turn on a relay server (you will find the related instructions in the example in the `examples/relay-server` folder).
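In both modes the client first reaches the remote peer through the relay, by dialing a circuit address built from the relay address and the remote peer ID; DCUtR then upgrades that relayed connection to a direct one. A sketch of the address construction (the helper name is made up; real code would use libp2p's `Multiaddr` type):

```rust
/// Build the circuit address `<relay-addr>/p2p-circuit/p2p/<remote-peer-id>`
/// used to reach a NATed peer through a relay before hole punching.
/// Illustrative string handling; use `Multiaddr` in real code.
pub fn circuit_address(relay_addr: &str, remote_peer_id: &str) -> String {
    format!("{relay_addr}/p2p-circuit/p2p/{remote_peer_id}")
}

fn main() {
    println!(
        "{}",
        circuit_address("/ip4/127.0.0.1/tcp/12345", "<REMOTE_PEER_ID>")
    );
}
```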
## Conclusion
The DCUTR protocol offers a solution for achieving direct connectivity between peers in a peer-to-peer network.
By utilizing hole punching and eliminating the need for signaling servers, the protocol allows peers behind NATs to establish direct connections.
This example provides instructions on running an example implementation of the protocol, allowing users to explore its functionality and benefits.


@ -18,6 +18,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#![doc = include_str!("../README.md")]
use clap::Parser;
use futures::{
executor::{block_on, ThreadPool},


@ -0,0 +1,42 @@
## Description
This example showcases a basic distributed key-value store implemented using **libp2p**, along with the mDNS and Kademlia protocols.
## Usage
### Key-Value Store
1. Open two terminal windows, type `cargo run` and press Enter.
2. In terminal one, type `PUT my-key my-value` and press Enter.
This command will store the value `my-value` with the key `my-key` in the distributed key-value store.
3. In terminal two, type `GET my-key` and press Enter.
This command will retrieve the value associated with the key `my-key` from the key-value store.
4. To exit, press `Ctrl-c` in each terminal window to gracefully close the instances.
### Provider Records
You can also use provider records instead of key-value records in the distributed store.
1. Open two terminal windows and start two instances of the key-value store.
If your local network supports mDNS, the instances will automatically connect.
2. In terminal one, type `PUT_PROVIDER my-key` and press Enter.
This command will register the peer as a provider for the key `my-key` in the distributed key-value store.
3. In terminal two, type `GET_PROVIDERS my-key` and press Enter.
This command will retrieve the list of providers for the key `my-key` from the key-value store.
4. To exit, press `Ctrl-c` in each terminal window to gracefully close the instances.
Feel free to explore and experiment with the distributed key-value store example, and observe how the data is distributed and retrieved across the network using **libp2p**, mDNS, and the Kademlia protocol.
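Under the hood, Kademlia decides which peers should hold a record by comparing XOR distances between the record's key and peer IDs. This toy helper shows the metric on 4-byte IDs; the real protocol operates on 256-bit SHA-256 hashes.

```rust
/// Kademlia's distance metric: interpret two IDs as integers and XOR them.
/// Smaller result means "closer". Real IDs are 256-bit hashes; 4-byte IDs
/// here keep the sketch readable.
pub fn xor_distance(a: [u8; 4], b: [u8; 4]) -> u32 {
    u32::from_be_bytes(a) ^ u32::from_be_bytes(b)
}

fn main() {
    let key = [0x12, 0x34, 0x56, 0x78];
    let peer = [0x12, 0x34, 0x00, 0x00];
    println!("distance: {}", xor_distance(key, peer));
}
```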
## Conclusion
This example demonstrates the implementation of a basic distributed key-value store using **libp2p**, mDNS, and the Kademlia protocol.
By leveraging these technologies, peers can connect, store, and retrieve key-value pairs in a decentralized manner.
The example provides a starting point for building more advanced distributed systems and exploring the capabilities of **libp2p** and its associated protocols.


@ -18,27 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! A basic key value store demonstrating libp2p and the mDNS and Kademlia protocols.
//!
//! 1. Using two terminal windows, start two instances. If your local network
//! allows mDNS, they will automatically connect.
//!
//! 2. Type `PUT my-key my-value` in terminal one and hit return.
//!
//! 3. Type `GET my-key` in terminal two and hit return.
//!
//! 4. Close with Ctrl-c.
//!
//! You can also store provider records instead of key value records.
//!
//! 1. Using two terminal windows, start two instances. If your local network
//! allows mDNS, they will automatically connect.
//!
//! 2. Type `PUT_PROVIDER my-key` in terminal one and hit return.
//!
//! 3. Type `GET_PROVIDERS my-key` in terminal two and hit return.
//!
//! 4. Close with Ctrl-c.
#![doc = include_str!("../README.md")]
use async_std::io;
use futures::{prelude::*, select};


@ -0,0 +1,72 @@
## Description
The File Sharing example demonstrates a basic file sharing application built using **libp2p**.
This example showcases how to integrate **rust-libp2p** into a larger application while providing a simple file sharing functionality.
In this application, peers in the network can either act as file providers or file retrievers.
Providers advertise the files they have available on a Distributed Hash Table (DHT) using `libp2p-kad`.
Retrievers can locate and retrieve files by their names from any node in the network.
## How it Works
Let's understand the flow of the file sharing process:
- **File Providers**: Nodes A and B serve as file providers.
Each node offers a specific file: file FA for node A and file FB for node B.
To make their files available, they advertise themselves as providers on the DHT using `libp2p-kad`.
This enables other nodes in the network to discover and retrieve their files.
- **File Retrievers**: Node C acts as a file retriever.
It wants to retrieve either file FA or FB.
Using `libp2p-kad`, it can locate the providers for these files on the DHT without being directly connected to them.
Node C connects to the corresponding provider node and requests the file content using `libp2p-request-response`.
- **DHT and Network Connectivity**: The DHT (Distributed Hash Table) plays a crucial role in the file sharing process.
It allows nodes to store and discover information about file providers.
Nodes in the network are interconnected via the DHT, enabling efficient file discovery and retrieval.
## Architectural Properties
The File Sharing application has the following architectural properties:
- **Clean and Clonable Interface**: The application provides a clean and clonable async/await interface, allowing users to interact with the network layer seamlessly.
The `Client` module encapsulates the necessary functionality for network communication.
- **Efficient Network Handling**: The application operates with a single task that drives the network layer.
This design choice ensures efficient network communication without the need for locks or complex synchronization mechanisms.
## Usage
To set up a simple file sharing scenario with a provider and a retriever, follow these steps:
1. **Start a File Provider**: In one terminal, run the following command to start a file provider node:
```sh
cargo run -- --listen-address /ip4/127.0.0.1/tcp/40837 \
--secret-key-seed 1 \
provide \
--path <path-to-your-file> \
--name <name-for-others-to-find-your-file>
```
This command initiates a node that listens on the specified address and provides a file located at the specified path.
The file is identified by the provided name, which allows other nodes to discover and retrieve it.
2. **Start a File Retriever**: In another terminal, run the following command to start a file retriever node:
```sh
cargo run -- --peer /ip4/127.0.0.1/tcp/40837/p2p/12D3KooWPjceQrSwdWXPyLLeABRXmuqt69Rg3sBYbU1Nft9HyQ6X \
get \
--name <name-for-others-to-find-your-file>
```
This command initiates a node that connects to the specified peer (the provider) and requests the file with the given name.
Note: It is not necessary for the retriever node to be directly connected to the provider.
As long as both nodes are connected to any node in the same DHT network, the file can be successfully retrieved.
This File Sharing example demonstrates the fundamental concepts of building a file sharing application using **libp2p**.
By understanding the flow and architectural properties of this example, you can leverage the power of **libp2p** to integrate peer-to-peer networking capabilities into your own applications.
## Conclusion
The File Sharing example provides a practical implementation of a basic file sharing application using **libp2p**.
By leveraging the capabilities of **libp2p**, such as the DHT and network connectivity protocols, it demonstrates how peers can share files in a decentralized manner.
By exploring and understanding the file sharing process and architectural properties presented in this example, developers can gain insights into building their own file sharing applications using **libp2p**.


@ -18,62 +18,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! # File sharing example
//!
//! Basic file sharing application with peers either providing or locating and
//! getting files by name.
//!
//! While obviously showcasing how to build a basic file sharing application,
//! the actual goal of this example is **to show how to integrate rust-libp2p
//! into a larger application**.
//!
//! ## Sample plot
//!
//! Assuming there are 3 nodes, A, B and C. A and B each provide a file while C
//! retrieves a file.
//!
//! Provider nodes A and B each provide a file, file FA and FB respectively.
//! They do so by advertising themselves as a provider for their file on a DHT
//! via [`libp2p-kad`]. The two, among other nodes of the network, are
//! interconnected via the DHT.
//!
//! Node C can locate the providers for file FA or FB on the DHT via
//! [`libp2p-kad`] without being connected to the specific node providing the
//! file, but any node of the DHT. Node C then connects to the corresponding
//! node and requests the file content of the file via
//! [`libp2p-request-response`].
//!
//! ## Architectural properties
//!
//! - Clean clonable async/await interface ([`Client`](network::Client)) to interact with the
//! network layer.
//!
//! - Single task driving the network layer, no locks required.
//!
//! ## Usage
//!
//! A two node setup with one node providing the file and one node requesting the file.
//!
//! 1. Run command below in one terminal.
//!
//! ```sh
//! cargo run -- --listen-address /ip4/127.0.0.1/tcp/40837 \
//! --secret-key-seed 1 \
//! provide \
//! --path <path-to-your-file> \
//! --name <name-for-others-to-find-your-file>
//! ```
//!
//! 2. Run command below in another terminal.
//!
//! ```sh
//! cargo run -- --peer /ip4/127.0.0.1/tcp/40837/p2p/12D3KooWPjceQrSwdWXPyLLeABRXmuqt69Rg3sBYbU1Nft9HyQ6X \
//! get \
//! --name <name-for-others-to-find-your-file>
//! ```
//!
//! Note: The client does not need to be directly connected to the providing
//! peer, as long as both are connected to some node on the same DHT.
#![doc = include_str!("../README.md")]
mod network;
use async_std::task::spawn;


@ -0,0 +1,23 @@
## Description
The example demonstrates how to create a connection between two nodes using TCP transport, authenticate with the noise protocol, and multiplex data streams with yamux.
**libp2p** provides an identify network behavior, which allows nodes to exchange identification information with each other securely.
By running the example, the nodes will establish a connection, negotiate the identify protocol, and exchange identification information, which is displayed in the console.
## Usage
1. In the first terminal window, run the following command:
```sh
cargo run
```
This will print the peer ID (`PeerId`) and the listening addresses, e.g., `Listening on "/ip4/127.0.0.1/tcp/24915"`
2. In the second terminal window, start a new instance of the example with the following command:
```sh
cargo run -- /ip4/127.0.0.1/tcp/24915
```
The two nodes establish a connection, negotiate the identify protocol, and send each other identification information, which is then printed to the console.
## Conclusion
The identify example demonstrates how to establish connections and exchange identification information between nodes using the library's protocols and behaviors.


@ -18,23 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! identify example
//!
//! In the first terminal window, run:
//!
//! ```sh
//! cargo run
//! ```
//! It will print the [`PeerId`] and the listening addresses, e.g. `Listening on
//! "/ip4/127.0.0.1/tcp/24915"`
//!
//! In the second terminal window, start a new instance of the example with:
//!
//! ```sh
//! cargo run -- /ip4/127.0.0.1/tcp/24915
//! ```
//! The two nodes establish a connection, negotiate the identify protocol
//! and will send each other identify info which is then printed to the console.
#![doc = include_str!("../README.md")]
use futures::prelude::*;
use libp2p::{


@ -10,4 +10,4 @@ async-std = { version = "1.12", features = ["attributes"] }
async-trait = "0.1"
env_logger = "0.10"
futures = "0.3.28"
libp2p = { path = "../../libp2p", features = ["async-std", "dns", "kad", "noise", "tcp", "websocket", "yamux"] }
libp2p = { path = "../../libp2p", features = ["async-std", "dns", "kad", "noise", "tcp", "websocket", "yamux", "rsa"] }


@ -0,0 +1,50 @@
## Description
This example showcases the usage of **libp2p** to interact with the Kademlia protocol on the IPFS network.
The code demonstrates how to perform Kademlia queries to find the closest peers to a specific peer ID.
By running this example, users can gain a better understanding of how the Kademlia protocol operates and performs queries on the IPFS network.
## Usage
The example code demonstrates how to perform Kademlia queries on the IPFS network using **rust-libp2p**.
By specifying a peer ID as a parameter, the code will search for the closest peers to the given peer ID.
### Parameters
Run the example code:
```sh
cargo run [PEER_ID]
```
Replace `[PEER_ID]` with the base58-encoded peer ID you want to search for.
If no peer ID is provided, a random peer ID will be generated.
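The query itself asks the network for the K peers whose IDs are closest to the target by XOR distance. This sketch shows the selection step on tiny 1-byte IDs; real Kademlia works in a 256-bit key space with K = 20.

```rust
/// Return the `k` peer IDs closest to `target` by XOR distance.
/// 1-byte IDs keep the sketch readable; real IDs are 256-bit hashes.
pub fn k_closest(target: u8, mut peers: Vec<u8>, k: usize) -> Vec<u8> {
    // Sort by distance to the target, then keep the first k.
    peers.sort_by_key(|peer| peer ^ target);
    peers.truncate(k);
    peers
}

fn main() {
    // Peers 0b0000_0001 and 0b0000_0011 are closest to target 0.
    println!("{:?}", k_closest(0, vec![0b1000_0000, 0b0000_0001, 0b0000_0011], 2));
}
```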
## Example Output
Upon running the example code, you will see the output in the console.
The output will display the result of the Kademlia query, including the closest peers to the specified peer ID.
### Successful Query Output
If the Kademlia query successfully finds the closest peers, the output will be:
```sh
Searching for the closest peers to [PEER_ID]
Query finished with closest peers: [peer1, peer2, peer3]
```
### Failed Query Output
If the Kademlia query times out or there are no reachable peers, the output will indicate the failure:
```sh
Searching for the closest peers to [PEER_ID]
Query finished with no closest peers.
```
## Conclusion
In conclusion, this example provides a practical demonstration of using **rust-libp2p** to interact with the Kademlia protocol on the IPFS network.
By examining the code and running the example, users can gain insights into the inner workings of Kademlia and how it performs queries to find the closest peers.
This knowledge can be valuable when developing peer-to-peer applications or understanding decentralized networks.


@ -18,10 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Demonstrates how to perform Kademlia queries on the IPFS network.
//!
//! You can pass as parameter a base58 peer ID to search for. If you don't pass any parameter, a
//! peer ID will be generated randomly.
#![doc = include_str!("../README.md")]
use futures::StreamExt;
use libp2p::kad::record::store::MemoryStore;


@ -0,0 +1,40 @@
## Description
This example showcases a minimal implementation of a **libp2p** node that can interact with IPFS.
It utilizes the gossipsub protocol for pubsub messaging, the ping protocol for network connectivity testing, and the identify protocol for peer identification.
The node can be used to communicate with other IPFS nodes that have gossipsub enabled.
To establish a connection with other nodes, you can provide their multiaddresses as command-line arguments.
On startup, the example will display a list of addresses that you can dial from a `go-ipfs` or `js-ipfs` node.
## Usage
To run the example, follow these steps:
1. Build and run the example using Cargo:
```sh
cargo run [ADDRESS_1] [ADDRESS_2] ...
```
Replace `[ADDRESS_1]`, `[ADDRESS_2]`, etc., with the multiaddresses of the nodes you want to connect to.
You can provide multiple addresses as command-line arguments.
**Note:** The multiaddress should be in the following format: `/ip4/127.0.0.1/tcp/4001/p2p/peer_id`.
2. Once the example is running, you can interact with the IPFS node using the following commands:
- **Pubsub (Gossipsub):** You can use the gossipsub protocol to send and receive messages on the "chat" topic.
To send a message, type it in the console and press Enter.
The message will be broadcasted to other connected nodes using gossipsub.
- **Ping:** You can ping other connected nodes to test network connectivity.
The example will display the round-trip time (RTT) for successful pings or indicate if a timeout occurs.
## Conclusion
This example provides a basic implementation of an IPFS node using **libp2p**.
It demonstrates the usage of the gossipsub, ping, and identify protocols to enable communication with other IPFS nodes.
By running this example and exploring its functionality, you can gain insights into how to build more advanced P2P applications using Rust.
Feel free to experiment with different multiaddresses and explore the capabilities of **libp2p** in the context of IPFS. Happy coding!


@ -18,19 +18,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! A minimal node that can interact with ipfs
//!
//! This node implements the gossipsub, ping and identify protocols. It supports
//! the ipfs private swarms feature by reading the pre shared key file `swarm.key`
//! from the IPFS_PATH environment variable or from the default location.
//!
//! You can pass any number of nodes to be dialed.
//!
//! On startup, this example will show a list of addresses that you can dial
//! from a go-ipfs or js-ipfs node.
//!
//! You can ping this node, or use pubsub (gossipsub) on the topic "chat". For this
//! to work, the ipfs node needs to be configured to use gossipsub.
#![doc = include_str!("../README.md")]
use async_std::io;
use either::Either;
use futures::{prelude::*, select};


@ -0,0 +1,40 @@
## Description
The example showcases how to run a p2p network with **libp2p** and collect metrics using `libp2p-metrics`.
It sets up multiple nodes in the network and measures various metrics, such as `libp2p_ping`, to evaluate the network's performance.
## Usage
To run the example, follow these steps:
1. Run the following command to start the first node:
```sh
cargo run
```
2. Open a second terminal and run the following command to start a second node:
```sh
cargo run -- <listen-addr-of-first-node>
```
Replace `<listen-addr-of-first-node>` with the listen address of the first node reported in the first terminal.
Look for the line that says `NewListenAddr` to find the address.
3. Open a third terminal and run the following command to retrieve the metrics from either the first or second node:
```sh
curl localhost:<metrics-port-of-first-or-second-node>/metrics
```
Replace `<metrics-port-of-first-or-second-node>` with the listen port of the metrics server of either the first or second node.
Look for the line that says `tide::server Server listening on` to find the port.
After executing the command, you should see a long list of metrics printed to the terminal.
Make sure to check the `libp2p_ping` metrics, which should have a value greater than zero (`>0`).
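What `curl` returns is the Prometheus text exposition format: one `<name> <value>` line per metric. A sketch of that encoding (the metric name below is an illustrative stand-in, not necessarily what `libp2p-metrics` registers):

```rust
use std::collections::BTreeMap;

/// Encode counters in the Prometheus text format that a /metrics
/// endpoint serves: one `<name> <value>` line per counter.
/// A BTreeMap keeps the output deterministically ordered.
pub fn encode(counters: &BTreeMap<&str, u64>) -> String {
    counters
        .iter()
        .map(|(name, value)| format!("{name} {value}\n"))
        .collect()
}

fn main() {
    let mut counters = BTreeMap::new();
    counters.insert("libp2p_ping_rtt_count", 3);
    print!("{}", encode(&counters));
}
```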
## Conclusion
This example demonstrates how to utilize the `libp2p-metrics` crate to collect and analyze metrics in a libp2p network.
By running multiple nodes and examining the metrics, users can gain insights into the network's performance and behavior.


@ -18,35 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Example demonstrating `libp2p-metrics`.
//!
//! In one terminal run:
//!
//! ```
//! cargo run
//! ```
//!
//! In a second terminal run:
//!
//! ```
//! cargo run -- <listen-addr-of-first-node>
//! ```
//!
//! Where `<listen-addr-of-first-node>` is replaced by the listen address of the
//! first node reported in the first terminal. Look for `NewListenAddr`.
//!
//! In a third terminal run:
//!
//! ```
//! curl localhost:<metrics-port-of-first-or-second-node>/metrics
//! ```
//!
//! Where `<metrics-port-of-first-or-second-node>` is replaced by the listen
//! port of the metrics server of the first or the second node. Look for
//! `tide::server Server listening on`.
//!
//! You should see a long list of metrics printed to the terminal. Check the
//! `libp2p_ping` metrics, they should be `>0`.
#![doc = include_str!("../README.md")]
use env_logger::Env;
use futures::executor::block_on;


@ -0,0 +1,30 @@
## Description
The ping example showcases how to create a network of nodes that establish connections, negotiate the ping protocol, and ping each other.
## Usage
To run the example, follow these steps:
1. In a first terminal window, run the following command:
```sh
cargo run
```
This command starts a node and prints the `PeerId` and the listening addresses, such as `Listening on "/ip4/0.0.0.0/tcp/24915"`.
2. In a second terminal window, start a new instance of the example with the following command:
```sh
cargo run -- /ip4/127.0.0.1/tcp/24915
```
Replace `/ip4/127.0.0.1/tcp/24915` with the listen address of the first node obtained from the first terminal window.
3. The two nodes will establish a connection, negotiate the ping protocol, and begin pinging each other.
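At its core, a ping measures one round trip: send a probe, wait for the echo, report the elapsed time. The sketch below uses an in-process echo thread as a stand-in for the remote peer; the real example does this over a negotiated libp2p stream between two processes.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

/// Measure one round-trip time against an in-process "remote peer"
/// that echoes every probe straight back.
pub fn measure_rtt() -> Duration {
    let (ping_tx, ping_rx) = mpsc::channel::<u8>();
    let (pong_tx, pong_rx) = mpsc::channel::<u8>();
    thread::spawn(move || {
        for probe in ping_rx {
            if pong_tx.send(probe).is_err() {
                break;
            }
        }
    });
    let start = Instant::now();
    ping_tx.send(0).unwrap();
    pong_rx.recv().unwrap();
    start.elapsed()
}

fn main() {
    println!("round-trip time: {:?}", measure_rtt());
}
```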
## Conclusion
The ping example demonstrates the basic usage of **libp2p** to create a simple p2p network and implement a ping protocol.
By running multiple nodes and observing the ping behavior, users can gain insights into how **libp2p** facilitates communication and interaction between peers.


@ -18,27 +18,7 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Ping example
//!
//! See ../src/tutorial.rs for a step-by-step guide building the example below.
//!
//! In the first terminal window, run:
//!
//! ```sh
//! cargo run
//! ```
//!
//! It will print the PeerId and the listening addresses, e.g. `Listening on
//! "/ip4/0.0.0.0/tcp/24915"`
//!
//! In the second terminal window, start a new instance of the example with:
//!
//! ```sh
//! cargo run -- /ip4/127.0.0.1/tcp/24915
//! ```
//!
//! The two nodes establish a connection, negotiate the ping protocol
//! and begin pinging each other.
#![doc = include_str!("../README.md")]
use futures::prelude::*;
use libp2p::core::upgrade::Version;


@ -0,0 +1,28 @@
## Description
The **libp2p** relay example showcases how to create a relay node that can route messages between different peers in a p2p network.
## Usage
To run the example, follow these steps:
1. Run the relay node by executing the following command:
```sh
cargo run -- --port <port> --secret-key-seed <seed>
```
Replace `<port>` with the port number on which the relay node will listen for incoming connections.
Replace `<seed>` with a seed value used to generate a deterministic peer ID for the relay node.
2. The relay node will start listening for incoming connections.
It will print the listening address once it is ready.
3. Connect other **libp2p** nodes to the relay node by specifying the relay's listening address as one of the bootstrap nodes in their configuration.
4. Once the connections are established, the relay node will facilitate communication between the connected peers, allowing them to exchange messages and data.
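The point of `--secret-key-seed` is that the same seed always derives the same key material, and therefore a stable, predictable peer ID for the relay. The toy derivation below uses std's hasher purely to illustrate determinism; libp2p actually derives a real ed25519 keypair from the seed.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for key derivation: the same seed always yields the
/// same output. Not cryptographic; libp2p derives an ed25519 keypair.
pub fn derive_key_material(seed: u8) -> u64 {
    let mut hasher = DefaultHasher::new();
    seed.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    println!(
        "same seed, same key: {}",
        derive_key_material(42) == derive_key_material(42)
    );
}
```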
## Conclusion
The **libp2p** relay example demonstrates how to implement a relay node.
By running a relay node and connecting other **libp2p** nodes to it, users can create a decentralized network where peers can communicate and interact with each other.


@ -19,6 +19,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
#![doc = include_str!("../README.md")]
use clap::Parser;
use futures::executor::block_on;
use futures::stream::StreamExt;


@ -0,0 +1,51 @@
## Description
The rendezvous protocol example showcases how to implement a rendezvous server and interact with it using different binaries.
The rendezvous server facilitates peer registration and discovery, enabling peers to find and communicate with each other in a decentralized manner.
## Usage
To run the example, follow these steps:
1. Start the rendezvous server by running the following command:
```sh
RUST_LOG=info cargo run --bin rendezvous-example
```
This command starts the rendezvous server, which will listen for incoming connections and handle peer registrations and discovery.
2. Register a peer by executing the following command:
```sh
RUST_LOG=info cargo run --bin rzv-register
```
This command registers a peer with the rendezvous server, allowing the peer to be discovered by other peers.
3. Try to discover the registered peer from the previous step by running the following command:
```sh
RUST_LOG=info cargo run --bin rzv-discover
```
This command attempts to discover the registered peer using the rendezvous server.
If successful, it will print the details of the discovered peer.
4. Additionally, you can try discovering a peer using the identify protocol by executing the following command:
```sh
RUST_LOG=info cargo run --bin rzv-identify
```
This command demonstrates peer discovery using the identify protocol.
It will print the peer's identity information if successful.
5. Experiment with different registrations, discoveries, and combinations of protocols to explore the capabilities of the rendezvous protocol and libp2p library.
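The bookkeeping the rendezvous server performs can be pictured as a map from namespaces to registered peers; discovery simply reads that map back. The types below are a conceptual sketch, not the `libp2p-rendezvous` API (real registrations also carry addresses and a TTL).

```rust
use std::collections::{HashMap, HashSet};

/// Conceptual rendezvous-server state: which peers are registered
/// under which namespace.
#[derive(Default)]
pub struct Registrations {
    by_namespace: HashMap<String, HashSet<String>>,
}

impl Registrations {
    /// Record a peer under a namespace (what `rzv-register` triggers).
    pub fn register(&mut self, namespace: &str, peer: &str) {
        self.by_namespace
            .entry(namespace.to_string())
            .or_default()
            .insert(peer.to_string());
    }

    /// List all peers in a namespace (what `rzv-discover` asks for).
    pub fn discover(&self, namespace: &str) -> Vec<&String> {
        self.by_namespace
            .get(namespace)
            .map(|peers| peers.iter().collect())
            .unwrap_or_default()
    }
}

fn main() {
    let mut registrations = Registrations::default();
    registrations.register("rendezvous", "12D3KooW...");
    println!("discovered: {}", registrations.discover("rendezvous").len());
}
```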
## Conclusion
The rendezvous protocol example provides a practical demonstration of how to implement peer registration and discovery using **libp2p**.
By running the rendezvous server and utilizing the provided binaries, users can register peers and discover them in a decentralized network.
Feel free to explore the code and customize the behavior of the rendezvous server and the binaries to suit your specific use cases.


@ -18,24 +18,8 @@
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
/// Examples for the rendezvous protocol:
///
/// 1. Run the rendezvous server:
/// ```
/// RUST_LOG=info cargo run --bin rendezvous-example
/// ```
/// 2. Register a peer:
/// ```
/// RUST_LOG=info cargo run --bin rzv-register
/// ```
/// 3. Try to discover the peer from (2):
/// ```
/// RUST_LOG=info cargo run --bin rzv-discover
/// ```
/// 4. Try to discover with identify:
/// ```
/// RUST_LOG=info cargo run --bin rzv-identify
/// ```
#![doc = include_str!("../README.md")]
use futures::StreamExt;
use libp2p::{
core::transport::upgrade::Version,