GitBook: [#252] Fix small issues with tutorial's deploy section

This commit is contained in:
boneyard93501
2022-03-29 14:48:36 +00:00
committed by gitbook-bot
parent a58dcbdd92
commit b9da48627b
120 changed files with 3945 additions and 5 deletions

# 1. Browser-to-Browser
The first example demonstrates how to communicate between two client peers, i.e. browsers, with local services. The project is based on a create-react-app template with slight modifications to integrate Fluence. The primary focus is the integration itself; React could be swapped for any framework of your choice.
Make sure you are in the `examples/quickstart/1-browser-to-browser` directory to install the dependencies:
```
cd examples/quickstart/1-browser-to-browser
npm install
```
Compile the Aqua code and run the app:
```
npm run compile-aqua
npm start
```
Which opens a new tab in your browser at `http://localhost:3000`. The browser tab, representing the client peer, prompts you to pick a relay node it can connect to; the relay, in turn, allows other peers to reach the browser client. Select any one of the offered relays:
![Relay Selection](<../.gitbook/assets/image (17).png>)
The client peer is now connected to the relay and ready for business:
![Connection confirmation to network](<../.gitbook/assets/image (18).png>)
Let's follow the instructions: open another browser tab, i.e. a second client peer, at `http://localhost:3000`, select any one of the relays, copy the resulting peer id and relay peer id into the first client peer, i.e. the first browser tab, and click the `say hello` button:
![Peer-to-peer communication between two browser client peers](<../.gitbook/assets/image (20).png>)
Congratulations, you just sent messages between two browsers over the Fluence peer-to-peer network, which is pretty cool! Even cooler, however, is how we got here using Aqua, Fluence's distributed network and application composition language.
Navigate to the `aqua` directory and open the `getting-started.aqua` file in your IDE or terminal:
![getting-started.aqua](<../.gitbook/assets/image (51).png>)
And yes, fewer than ten lines (!) are required for a client peer, like our browser, to connect to the network and start composing the local `HelloPeer` service to send messages.
In broad strokes, the Aqua code breaks down as follows:
* Import the Aqua [standard library](https://github.com/fluencelabs/aqua-lib) into our application (1)
* Create a service interface binding to the local service (see below) with the `HelloPeer` namespace and `hello` function (4-5)
* Create the composition function `sayHello` that executes the `hello` call on the provided `targetPeerId` via the provided `targetRelayPeerId` and returns the result (7-10). Recall the copy and paste job you did earlier in the browser tab for the peer and relay id? Well, you just found the consumption place for these two parameters.
Not only is Aqua rather succinct in allowing you to seamlessly program both network routes and distributed application workflows, it also compiles Aqua to TypeScript stubs that wrap the compiled output, called AIR (short for Aqua Intermediate Representation), into ready-to-use code blocks. Navigate to the `src/_aqua` directory, open the `getting-started.ts` file, and poke around a bit.
Note that the `src/App.tsx` file relies on the generated `getting-started.ts` file (line 7):
![App.tsx](<../.gitbook/assets/image (43).png>)
We wrote little more than a handful of lines of Aqua and ended up with a deployment-ready code block that includes both the network routing and the compute logic to facilitate browser-to-browser messaging over a peer-to-peer network.
The local (browser) service `HelloPeer` is also implemented in the `App.tsx` file:
![Local HelloPeer service implementation](<../.gitbook/assets/image (22).png>)
To summarize, we ran an app that facilitates messaging between two browsers over a peer-to-peer network. At the core of this capability is Aqua, which allowed us to program both the network topology and the application workflow in barely more than a handful of lines of code. Hint: you should be excited. For more information on Aqua, see the [Aqua Book](https://app.gitbook.com/@fluence/s/aqua-book/).
In the next section, we develop a WebAssembly module and deploy it as a hosted service to the Fluence peer-to-peer network.

# 2. Hosted Services
In the previous example, we used a local, browser-native service to facilitate the string generation and communication with another browser. The real power of the Fluence solution, however, is that services can be hosted on one or more nodes, easily reused and composed into decentralized applications with Aqua.
{% hint style="info" %}
In case you haven't set up your development environment, follow the [setup instructions](../tutorials_tutorials/recipes_setting_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples):
```bash
git clone https://github.com/fluencelabs/examples
```
{% endhint %}
### Creating A WebAssembly Module
In this section, we develop a simple `HelloWorld` service and host it on a peer-to-peer node of the Fluence testnet. In your IDE or terminal, change to the `2-hosted-services` directory and open the `src/main.rs` file:
![Rust code for HelloWorld hosted service module](<../.gitbook/assets/image (48).png>)
Fluence hosted services are composed of WebAssembly modules implemented in Rust and compiled to [wasm32-wasi](https://doc.rust-lang.org/stable/nightly-rustc/rustc_target/spec/wasm32_wasi/index.html). Let's have a look at our code:
```rust
// quickstart/2-hosted-services/src/main.rs
use marine_rs_sdk::marine;
use marine_rs_sdk::module_manifest;

module_manifest!();

pub fn main() {}

#[marine]
pub struct HelloWorld {
    pub msg: String,
    pub reply: String,
}

#[marine]
pub fn hello(from: String) -> HelloWorld {
    HelloWorld {
        msg: format!("Hello from: \n{}", from),
        reply: format!("Hello back to you, \n{}", from),
    }
}
```
At the core of our implementation is the `hello` function, which takes a string parameter and returns a `HelloWorld` struct consisting of the `msg` and `reply` fields. We can use the `build.sh` script in the `scripts` directory, `./scripts/build.sh`, to compile the code to the wasm32-wasi target from the VSCode terminal:
![](<../.gitbook/assets/image (47).png>)
In addition to some housekeeping, the `build.sh` script gives the compile instructions with [marine](https://crates.io/crates/marine), `marine build --release` , and copies the resulting Wasm module, `hello_world.wasm`, to the `artifacts` directory for easy access.
### Testing And Exploring Wasm Code
So far, so good. Of course, we want to test our code and we have a couple of test functions in our `main.rs` file:
```rust
// quickstart/2-hosted-services/src/main.rs
use marine_rs_sdk::marine;
use marine_rs_sdk::module_manifest;

//<snip>

#[cfg(test)]
mod tests {
    use marine_rs_sdk_test::marine_test;

    #[marine_test(config_path = "../configs/Config.toml", modules_dir = "../artifacts")]
    fn non_empty_string(hello_world: marine_test_env::hello_world::ModuleInterface) {
        let actual = hello_world.hello("SuperNode".to_string());
        assert_eq!(actual.msg, "Hello from: \nSuperNode".to_string());
    }

    #[marine_test(config_path = "../configs/Config.toml", modules_dir = "../artifacts")]
    fn empty_string(hello_world: marine_test_env::hello_world::ModuleInterface) {
        let actual = hello_world.hello("".to_string());
        assert_eq!(actual.msg, "Hello from: \n");
    }
}
```
To run our tests, we can use the familiar [`cargo test`](https://doc.rust-lang.org/cargo/commands/cargo-test.html). However, we don't really care all that much about our native Rust functions being tested; we want to test our WebAssembly functions. This is where the extra code in the test module comes into play. In short, we are running `cargo test` against the exposed interfaces of the `hello_world.wasm` module, and in order to do that, we need the `marine_test` macro, providing it with both the modules directory, i.e., the `artifacts` directory, and the location of the `Config.toml` file. Note that the `Config.toml` file specifies the module metadata and optional module linking data. Moreover, we need to call our Wasm functions from the module namespace, i.e. `hello_world.hello` instead of the standard `hello` -- see the calls in the test bodies above -- which we make available through the `marine_test_env` argument in the test function signatures.
{% hint style="info" %}
In order to be able to use the macro, install the [`marine-rs-sdk-test`](https://crates.io/crates/marine-rs-sdk-test) crate as a dev dependency:
`[dev-dependencies] marine-rs-sdk-test = "<version>"`
{% endhint %}
From the IDE or terminal, we now run our tests with the `cargo +nightly test --release` command. Please note that if `nightly` is your default, you don't need it in your `cargo test` command.
![](<../.gitbook/assets/image (46).png>)
Well done -- our tests check out. Before we deploy our service to the network, we can interact with it locally using the [Marine REPL](https://crates.io/crates/mrepl). In your VSCode terminal, in the `2-hosted-services` directory, run:
```
mrepl configs/Config.toml
```
which puts us in the REPL:
```bash
mrepl configs/Config.toml
Welcome to the Marine REPL (version 0.9.1)
Minimal supported versions
sdk: 0.6.0
interface-types: 0.20.0
app service was created with service id = 8a2d946d-b474-468c-8c56-9e970ee64743
elapsed time 53.593404ms
1> i
Loaded modules interface:
data HelloWorld:
  msg: string
  reply: string
hello_world:
  fn hello(from: string) -> HelloWorld
2> call hello_world hello ["Fluence"]
result: Object({"msg": String("Hello from: \nFluence"), "reply": String("Hello back to you, \nFluence")})
elapsed time: 278.5µs
3>
```
We can explore the available interfaces with the `i` command and see that the interfaces we marked with the `marine` macro in our Rust code above are indeed exposed and available for consumption. Using the `call` command, still in the REPL, we can access any available function in the module namespace, e.g., `call hello_world hello [<string arg>]`. You can exit the REPL with `ctrl-c`.
### Exporting WebAssembly Interfaces To Aqua
In anticipation of future needs, note that `marine` allows us to export the Wasm interfaces ready for use in Aqua. In your VSCode terminal, navigate to the `2-hosted-services` directory and run:
```
marine aqua artifacts/hello_world.wasm
```
Which gives us the Aqua-ready interfaces:
```haskell
data HelloWorld:
  msg: string
  reply: string

service HelloWorld:
  hello(from: string) -> HelloWorld
```
The output can be piped directly into an aqua file, e.g., `marine aqua my_wasm.wasm >> my_aqua.aqua`.
### Deploying A Wasm Module To The Network
Looks like all is in order with our module and we are ready to deploy our `HelloWorld` service to the world by means of the Fluence peer-to-peer network. For this to happen, we need two things: the peer id of our target node(s) and a way to deploy the service. The latter can be accomplished with the `aqua` command line tool, and for the former, we can get a peer from one of the Fluence testnets, also with `aqua`. In your VSCode terminal:
```
aqua config default_peers
```
Which gets us a list of network peers:
```
/dns4/kras-00.fluence.dev/tcp/19990/wss/p2p/12D3KooWSD5PToNiLQwKDXsu8JSysCwUt8BVUJEqCHcDe7P5h45e
/dns4/kras-00.fluence.dev/tcp/19001/wss/p2p/12D3KooWR4cv1a8tv7pps4HH6wePNaK6gf1Hww5wcCMzeWxyNw51
/dns4/kras-01.fluence.dev/tcp/19001/wss/p2p/12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA
/dns4/kras-02.fluence.dev/tcp/19001/wss/p2p/12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf
/dns4/kras-03.fluence.dev/tcp/19001/wss/p2p/12D3KooWJd3HaMJ1rpLY1kQvcjRPEvnDwcXrH8mJvk7ypcZXqXGE
/dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi
/dns4/kras-05.fluence.dev/tcp/19001/wss/p2p/12D3KooWCMr9mU894i8JXAFqpgoFtx6qnV1LFPSfVc3Y34N4h4LS
/dns4/kras-06.fluence.dev/tcp/19001/wss/p2p/12D3KooWDUszU2NeWyUVjCXhGEt1MoZrhvdmaQQwtZUriuGN1jTr
/dns4/kras-07.fluence.dev/tcp/19001/wss/p2p/12D3KooWEFFCZnar1cUJQ3rMWjvPQg6yMV2aXWs2DkJNSRbduBWn
/dns4/kras-08.fluence.dev/tcp/19001/wss/p2p/12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt
/dns4/kras-09.fluence.dev/tcp/19001/wss/p2p/12D3KooWD7CvsYcpF9HE9CCV9aY3SJ317tkXVykjtZnht2EbzDPm
```
Let's use the peer `12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi` as our deployment target and deploy our service from the VSCode terminal. In the `quickstart/2-hosted-services` directory run:
```bash
aqua remote deploy \
--addr /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \
--config-path configs/hello_world_deployment_cfg.json \
--service hello-world
```
Which gives us a unique service id:
```
Your peerId: 12D3KooWAnbFkXk3UFm2MyuNGsSQ6uXHAtjizRC2xv9Q6avN3JBx
"Going to upload a module..."
2022.02.12 00:03:48 [INFO] created ipfs client to /ip4/164.90.164.229/tcp/5001
2022.02.12 00:03:48 [INFO] connected to ipfs
2022.02.12 00:03:50 [INFO] file uploaded
"Now time to make a blueprint..."
"Blueprint id:"
"5efb45e9442ae681d35dcfd4ab40a9927d47b5e16d380d02f71536ba2a2ee427"
"And your service id is:"
"09d9a052-8ccd-4627-9b3a-b72fe6571c87"
```
Take note of the service id, `09d9a052-8ccd-4627-9b3a-b72fe6571c87` in this example (it will differ for you), as we need it to use the service with Aqua.
Congratulations, we just deployed our first reusable service to the Fluence network and we can admire our handiwork on the Fluence [Developer Hub](https://dash.fluence.dev):
![HelloWorld service deployed to peer 12D3Koo...WaoHi](<../.gitbook/assets/image (36).png>)
With our newly created service ready to roll, let's move on and put it to work.

# 3. Browser-to-Service
In the first section, we explored browser-to-browser messaging using local, i.e. browser-native, services and the Fluence network for message transport. In the second section, we developed a `HelloWorld` Wasm module and deployed it as a hosted service on the testnet peer `12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi` with service id `1e740ce4-81f6-4dd4-9bed-8d86e9c2fa50`. We can now extend our browser-to-browser messaging application with our hosted service.
Let's navigate to the `3-browser-to-service` directory in the VSCode terminal and install the dependencies:
```
npm install
```
And run the application with:
```
npm run compile-aqua
npm start
```
Which will open a new browser tab at `http://localhost:3000`. Following the instructions, we connect to any one of the displayed relay ids, open another browser tab also at `http://localhost:3000`, select a relay, copy and paste the client peer id and relay id into the corresponding fields in the first tab, and press the `say hello` button.
![Browser To Service Implementation](<../.gitbook/assets/image (38) (2) (2) (2) (1).png>)
The result looks familiar, so what's different? Let's have a look at the Aqua file. Navigate to the `aqua/getting_started.aqua` file in your IDE or terminal:
![getting-started.aqua](<../.gitbook/assets/image (50).png>)
And let's work it from the top:
* Import the Aqua standard library (1)
* Provide the hosted service peer id (3) and service id (4)
* Specify the `HelloWorld` struct interface binding (6-8) for the hosted service from the `marine aqua` export
* Specify the `HelloWorld` interface and function binding (11-12) for the hosted service from the `marine aqua` export
* Specify the `HelloPeer` interface and function binding (15-16) for the local service
* Create the Aqua workflow function `sayHello` (18-29)
Before we dive into the `sayHello` function, let's look at why we still need a local service even though we deployed a hosted service: the browser client needs a local service in order to consume the message sent from the other browser through the relay peer. With that out of the way, let's dig in:
* The function signature (18) takes two arguments: `targetPeerId`, which is the client peer id of the other browser, and `targetRelayPeerId`, which is the relay id -- both parameters are the values you copied and pasted from the second browser tab into the first browser tab
* The first step is to call on the hosted service `HelloWorld` on the host peer `helloWorldPeerId`, which we specified in line 3
* We bind the `HelloWorld` interface, on the peer `helloWorldPeerId`, i.e., `12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi`, to the service id of the hosted service `helloWorldServiceId`, i.e. `1e740ce4-81f6-4dd4-9bed-8d86e9c2fa50`, which takes the `%init_peer_id%` parameter, i.e., the peer id of the peer that initiated the request, and pushes the result into `comp` (20-22)
* We now want to send a result back to the target browser (peer) (25-26) using the local service via the `targetRelayPeerId` in the background as a `co` routine.
* Finally, we send the `comp` result to the initiating browser
A little more involved than our first example, but we are again getting a lot done with very little code. Of course, there could be more than one hosted service in play and we could implement, for example, hosted spell checking, text formatting and much more, with little extra effort needed to express the additional workflow logic in our Aqua script.
This brings us to the end of this quick start tutorial. We hope you are as excited as we are to put Aqua and the Fluence stack to work. To continue your Fluence journey, have a look at the remainder of this book, take a deep dive into Aqua with the [Aqua book](https://doc.fluence.dev/aqua-book/) or dig into Marine and Aqua examples in the [repo](https://github.com/fluencelabs/examples).

# 4. Service Composition And Reuse With Aqua
In the previous three sections, you got a taste of using Aqua with browsers and how to create and deploy a service. In this section, we discuss how to compose an application from multiple distributed services using Aqua. In Fluence, we don't use JSON-RPC or REST endpoints to address and execute a service; we use [Aqua](https://github.com/fluencelabs/aqua).
Recall, Aqua is a purpose-built distributed systems and peer-to-peer programming language that resolves (Peer Id, Service Id) tuples to facilitate service execution on the host node without developers having to worry about transport or network routing. And with Aqua VM available on each Fluence peer-to-peer node, Aqua allows developers to ergonomically locate and execute distributed services.
{% hint style="info" %}
In case you haven't set up your development environment, follow the [setup instructions](../tutorials_tutorials/recipes_setting_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples):
```bash
git clone https://github.com/fluencelabs/examples
```
{% endhint %}
### Composition With Aqua
A service is composed of one or more WebAssembly (Wasm) modules that may be linked at runtime. These dependencies are specified by a **blueprint**, which is the basis for creating a unique service id once the blueprint has been deployed and initiated on our chosen host. See Figure 1.
![](<../.gitbook/assets/image (41).png>)
When we deploy our service, as demonstrated in section two, the service is "out there" on the network and we need a way to locate and execute it if we want to utilize the service as part of our application.
Luckily, the (Peer Id, Service Id) tuple we obtain from the service deployment process contains all the information Aqua needs to locate and execute the specified service instance.
Let's create a Wasm module with a single function that adds one to an input in the `adder` directory:
```rust
use marine_rs_sdk::marine;

#[marine]
pub fn add_one(input: u64) -> u64 {
    input + 1
}
```
For our purposes, we deploy that module as a service to three hosts: Peer 1, Peer 2, and Peer 3. Use the instructions provided in section two to create the module and deploy the service to three peers of your choosing. See `4-composing-services-with-aqua/adder` for the code and `data/distributed_service.json` for the (Peer Id, Service Id) tuples already deployed to three network peers.
Once we have the services deployed to their respective hosts, we can use Aqua to compose an admittedly simple application by composing the use of each service into a workflow, where the (Peer Id, Service Id) tuples facilitate the routing to and execution of each service. Also, recall that in the Fluence peer-to-peer programming model the client need not, and for the most part should not, be involved in managing intermediate results. Instead, results are "forward chained" to the next service as specified in the Aqua workflow.
Starting with an input parameter value of one and utilizing all three `add_one` services, we expect a final result of four given **seq**uential service execution:
![](<../.gitbook/assets/image (42).png>)
The underlying Aqua script may look something like this (see the `aqua-scripts` directory):
```
-- aqua-scripts/adder.aqua

-- service interface for Wasm module
service AddOne:
  add_one: u64 -> u64

-- convenience struct for (Peer Id, Service Id) tuples
data NodeServiceTuple:
  node_id: string
  service_id: string

func add_one_three_times(value: u64, ns_tuples: []NodeServiceTuple) -> u64:
  on ns_tuples!0.node_id:
    AddOne ns_tuples!0.service_id
    res1 <- AddOne.add_one(value)
  on ns_tuples!1.node_id:
    AddOne ns_tuples!1.service_id
    res2 <- AddOne.add_one(res1)
  on ns_tuples!2.node_id:
    AddOne ns_tuples!2.service_id
    res3 <- AddOne.add_one(res2)
  <- res3
```
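Conceptually, the sequential workflow is just nested function application: each service consumes the previous service's result. A local sketch of the same data flow in plain Rust -- purely illustrative, with no Fluence networking involved:

```rust
// Local stand-in for the deployed add_one service logic.
fn add_one(input: u64) -> u64 {
    input + 1
}

// Sequential composition: each call consumes the previous result,
// mirroring the res1 -> res2 -> res3 chain in the Aqua script.
fn add_one_three_times(value: u64) -> u64 {
    let res1 = add_one(value);
    let res2 = add_one(res1);
    let res3 = add_one(res2);
    res3
}
```

Starting from 1 this yields 4, and from 5 it yields 8, matching the arithmetic of the sequential workflow.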
Let's give it a whirl! Using the already deployed services or, even better, your own deployed services, let's compile our Aqua script in the `4-composing-services-with-aqua` directory. We use `aqua run` to execute the above Aqua script:
```
aqua run \
-i aqua-scripts \
-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \
-f 'add_one_three_times(5, arg)' \
-d '{"arg":[{
"node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt",
"service_id": "7b2ab89f-0897-4537-b726-8120b405074d"
},
{
"node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA",
"service_id": "e013f18a-200f-4249-8303-d42d10d3ce46"
},
{
"node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi",
"service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e"
}]
}'
```
Since we are starting with a value of 5 and increment it three times, we expect 8, which we get:
```
Your peerId: 12D3KooWHgS2T8mWoAkxoEaLtPjHauai2mVPrNSLDKZVd71KoxS1
8
```
Of course, we can drastically change our application logic by changing the execution flow of our workflow composition. In the above example, we executed each of the three services once in sequence. Alternatively, we could also execute them in parallel or some combination of sequential and parallel execution arms.
Reusing our deployed services with a different execution flow may look like the following:
```aqua
-- service interface for Wasm module
service AddOne:
  add_one: u64 -> u64

-- convenience struct for (Peer Id, Service Id) tuples
data NodeServiceTuple:
  node_id: string
  service_id: string

-- our app as defined by the workflow expressed in Aqua
func add_one_par(value: u64, ns_tuples: []NodeServiceTuple) -> []u64:
  res: *u64
  for ns <- ns_tuples par:
    on ns.node_id:
      AddOne ns.service_id
      res <- AddOne.add_one(value)
    Op.noop()
  join res[2] --< flatten the stream variable
  <- res --< return the final results [value + 1, value + 1, value + 1, ...] to the client
```
Unlike the sequential execution model, this example returns an array where each item is the incremented value, captured by the stream variable **res**. That is, for a starting value of five (5), we obtain \[6, 6, 6], assuming our NodeServiceTuple array provides three distinct (Peer Id, Service Id) tuples.
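Locally, the parallel arm amounts to fanning the same input out to each service and collecting the results into the stream. A thread-based Rust sketch of that data flow -- purely illustrative; no Fluence APIs are involved and the function names are made up:

```rust
use std::thread;

// Local stand-in for the deployed add_one service logic.
fn add_one(input: u64) -> u64 {
    input + 1
}

// Parallel composition: each "peer" increments the same starting value
// independently, and the results are collected like the res stream.
fn add_one_par(value: u64, n_services: usize) -> Vec<u64> {
    let handles: Vec<_> = (0..n_services)
        .map(|_| thread::spawn(move || add_one(value)))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}
```

With a starting value of 5 and three "peers", this returns `[6, 6, 6]`, mirroring the workflow above.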
Running the script with aqua:
```
aqua run \
-i aqua-scripts \
-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \
-f 'add_one_par(5, arg)' \
-d '{"arg":[{
"node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt",
"service_id": "7b2ab89f-0897-4537-b726-8120b405074d"
},
{
"node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA",
"service_id": "e013f18a-200f-4249-8303-d42d10d3ce46"
},
{
"node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi",
"service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e"
}]
}'
```
We get the expected result:
```
Your peerId: 12D3KooWB4eHpj2VfPDW9hJ5uMQiccV27uSJyHWiMUGN2hqkefV8
waiting for an argument with idx '2' on stream with size '0'
waiting for an argument with idx '2' on stream with size '0'
waiting for an argument with idx '2' on stream with size '1'
waiting for an argument with idx '2' on stream with size '1'
[
6,
6,
6
]
```
We can improve on our business logic and change our input arguments to make parallelization a little more useful. Let's extend our data struct and update the workflow:
```
-- aqua-scripts/adder.aqua

data ValueNodeService:
  node_id: string
  service_id: string
  value: u64 --< add value

func add_one_par_alt(payload: []ValueNodeService) -> []u64:
  res: *u64
  for vns <- payload par: --< parallelized run
    on vns.node_id:
      AddOne vns.service_id
      res <- AddOne.add_one(vns.value)
    Op.noop()
  join res[2]
  <- res
```
And we can run the `aqua run` command:
```
aqua run \
-i aqua-scripts \
-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \
-f 'add_one_par_alt(arg)' \
-d '{"arg":[{
"value": 5,
"node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt",
"service_id": "7b2ab89f-0897-4537-b726-8120b405074d"
},
{
"value": 10,
"node_id": "12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA",
"service_id": "e013f18a-200f-4249-8303-d42d10d3ce46"
},
{
"value": 15,
"node_id": "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi",
"service_id": "dbaca771-f0a6-4d1e-9af7-5b49368ffa9e"
}]
}'
```
Given our input values \[5, 10, 15], we get the expected output array of \[6, 11, 16]:
```
Your peerId: 12D3KooWNHJkYtevGk5ccZFyHyfinTJYNDJZ4C9KN9cJGEqaWVe9
waiting for an argument with idx '2' on stream with size '0'
waiting for an argument with idx '2' on stream with size '0'
waiting for an argument with idx '2' on stream with size '1'
waiting for an argument with idx '2' on stream with size '1'
[
6,
11,
16
]
```
Alternatively, we can run our Aqua scripts with a Typescript client. In the `client-peer` directory:
```
npm i
npm start
```
Which of course gives us the expected results:
```
created a Fluence client 12D3KooWGve35kvMQ8USbmtRoMCzxaBPXSbqsZxfo6T8gBAV6bzy with relay 12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA
add_one to 5 equals 6
add_one sequentially equals 8
add_one parallel equals [ 6, 6, 6 ]
add_one parallel alt equals [ 11, 6, 16 ] --< order may differ for you
```
### Summary
This section illustrates how Aqua allows developers to locate and execute distributed services by merely providing a (Peer Id, Service Id) tuple and the associated data. From an Aqua user perspective, there are no JSON-RPC or REST endpoints, just topology tuples that are resolved on peers of the network. Moreover, we saw how the Fluence peer-to-peer workflow model facilitates a different request-response model than commonly encountered in traditional client-server applications. That is, instead of returning each service result to the client, Aqua allows us to forward the (intermediate) result to the next service, peer-to-peer style.
Furthermore, we explored how different Aqua execution flows, e.g. **seq**uential vs. **par**allel, and data models allow developers to compose drastically different workflows and applications reusing already deployed services. For more information on Aqua, please see the [Aqua book](https://doc.fluence.dev/aqua-book/) and for more information on Fluence development, see the [developer docs](https://doc.fluence.dev/docs/).

# 5. Decentralized Oracles With Fluence And Aqua
### Overview
An oracle is some device that provides real-world, off-chain data to deterministic on-chain consumers such as a smart contract. A decentralized oracle draws from multiple, purportedly (roughly) equal input sources to minimize or even eliminate single-source pitfalls such as [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack) (MITM) or provider manipulation. For example, a decentralized price oracle for, say, ETH/USD could poll several DEXs for ETH/USD prices. Since smart contracts, especially those deployed on EVMs, can't directly call off-chain resources, oracles play a critical "middleware" role in the decentralized, trustless ecosystem. See Figure 1.
![](<../.gitbook/assets/image (44).png>)
Unlike single source oracles, multi-source oracles require some consensus mechanism to convert multiple input sources over the same target parameter into reliable point or range data suitable for third party, e.g., smart contract, consumption. Such "consensus over inputs" may take the form of simple [summary statistics](https://en.wikipedia.org/wiki/Summary\_statistics), e.g., mean, or one of many [other methods](https://en.wikipedia.org/wiki/Consensus\_\(computer\_science\)).
Given the importance of oracles to the Web3 ecosystem, it's not surprising to see a variety of third party solutions supporting various blockchain protocols. Fluence does not provide an oracle solution _per se_ but provides a peer-to-peer platform, tools and components for developers to quickly and easily program and compose reusable distributed data acquisition, processing and delivery services into decentralized oracle applications.
For the remainder of this section, we work through the process of developing a decentralized, multi-source timestamp oracle composed of data acquisition, processing and delivery.
### Creating A Decentralized Timestamp Oracle
Time, often in the form of timestamps, plays a critical role in a large number of Web2 and Web3 applications, including off-chain voting applications and on-chain clocks. Our goal is to provide a consensus timestamp sourced from multiple input sources and to implement an acceptable input aggregation and processing service to arrive at either a timestamp point or range value(s).
{% hint style="info" %}
In case you haven't set up your development environment, follow the [setup instructions](../tutorials_tutorials/recipes_setting_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples):
```bash
git clone https://github.com/fluencelabs/examples
```
{% endhint %}
#### Timestamp Acquisition
Each Fluence peer, i.e. node in the Fluence peer-to-peer network, has the ability to provide a timestamp from a [builtin service](https://github.com/fluencelabs/aqua-lib/blob/b90f2dddc335c155995a74d8d97de8dbe6a029d2/builtin.aqua#L127). In Aqua, we can call a [timestamp function](https://github.com/fluencelabs/fluence/blob/527e26e08f3905e53208b575792712eeaee5deca/particle-closures/src/host\_closures.rs#L124) with the desired granularity, i.e., seconds or milliseconds for further processing:
```python
-- aqua timestamp sourcing
on peer:
  ts_ms_result <- Peer.timestamp_ms()
  -- or
  ts_sec_result <- Peer.timestamp_sec()
-- ...
```
In order to decentralize our timestamp oracle, we want to poll multiple peers in the Fluence network:
```python
-- multi-peer timestamp sourcing
-- ...
results: *u64
for peer <- many_peers_list par:
  on peer:
    results <- Peer.timestamp_ms()
-- ...
```
In the above example, we have a list of peers and retrieve a timestamp value from each one. Note that we are polling nodes for timestamps in [parallel](https://doc.fluence.dev/aqua-book/language/flow/parallel) in order to optimize toward uniformity and to collect responses in the stream variable `results`. See Figure 2.
![](<../.gitbook/assets/image (45).png>)
The last thing to pin down concerning our timestamp acquisition is which peers to query. One possibility is to specify the peer ids of a set of desired peers to query. Alternatively, we can tap into the [Kademlia neighborhood](https://en.wikipedia.org/wiki/Kademlia) of a peer, which is a set of peers that are closest to our peer based on the XOR distance of the peer ids. Luckily, there is a [builtin service](https://github.com/fluencelabs/aqua-lib/blob/b90f2dddc335c155995a74d8d97de8dbe6a029d2/builtin.aqua#L140) we can call from Aqua that returns up to 20 neighboring peers:
```python
-- timestamps from Kademlia neighborhood
results: *u64
on node:
k <- Op.string_to_b58(node)
nodes <- Kademlia.neighborhood(k, nil, nil)
for node <- nodes par:
on node:
try:
results <- node.timestamp_ms()
-- ...
```
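Since the neighborhood is defined by XOR distance, a minimal sketch of the metric itself may help; this is an illustration of the idea on short byte strings, not the actual Kademlia implementation, which operates on full-length peer-id hashes:

```rust
// XOR "distance" between two equal-length ids: peers whose ids
// XOR to a smaller value are considered closer to each other.
fn xor_distance(a: &[u8], b: &[u8]) -> Vec<u8> {
    a.iter().zip(b.iter()).map(|(x, y)| x ^ y).collect()
}
```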
#### Timestamp Processing
Once we have our multiple timestamp values, we need to process them into a point or range value(s) to be useful. Whatever our processing/consensus algorithm is, we can implement it in Marine as one or more reusable, distributed services.
For example, we can rely on [summary statistics](https://en.wikipedia.org/wiki/Summary\_statistics) and implement basic averaging to arrive at a point estimate:
```rust
// ...
#[marine]
pub fn ts_avg(timestamps: Vec<u64>) -> f64 {
timestamps.iter().sum::<u64>() as f64 / timestamps.len() as f64
}
// ...
```
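Stripped of the `#[marine]` attribute, the same averaging logic can be exercised as ordinary Rust, which is handy for quick sanity checks outside the Marine runtime:

```rust
// Plain-Rust equivalent of ts_avg above: arithmetic mean of the samples.
fn ts_avg(timestamps: Vec<u64>) -> f64 {
    timestamps.iter().sum::<u64>() as f64 / timestamps.len() as f64
}
```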
Using the average to arrive at a point estimate is simply a stake in the ground to illustrate what's possible. Actual processing algorithms may vary and, depending on a developer's target audience, different algorithms may be used for different delivery targets. Aqua makes it easy to customize workflows while emphasizing reuse.
#### Putting It All Together
Let's put it all together by sourcing timestamps from the Kademlia neighborhood and processing them into a consensus value. Instead of one of the summary statistics, we employ a simple consensus algorithm that randomly selects one of the provided timestamps and then calculates a consensus score from the remaining n-1 timestamps:
```rust
// src/main.rs
//
// simple consensus from timestamps
// params:
//   timestamps, u64, [0, u64_max]
//   tolerance, u32, [0, u32_max]
//   threshold, f64, [0.0, 1.0]
// 1. Remove a randomly selected timestamp from the array of timestamps, ts
// 2. Count the number of timestamps left in the array that are within +/- tolerance (where tolerance may be zero)
// 3. Compare the number of supporting timestamps divided by the number of remaining timestamps to the threshold: if >=, consensus for the selected timestamp is true, else false
//
#[marine]
fn ts_frequency(mut timestamps: Vec<u64>, tolerance: u32, threshold: f64, err_value: u64) -> Consensus {
timestamps.retain(|&ts| ts != err_value);
if timestamps.len() == 0 {
return Consensus {
err_str: "Array must have at least one element".to_string(),
..<_>::default()
};
}
if timestamps.len() == 1 {
return Consensus {
n: 1,
consensus_ts: timestamps[0],
consensus: true,
support: 1,
..<_>::default()
};
}
if threshold < 0f64 || threshold > 1f64 {
return Consensus {
err_str: "Threshold needs to be between [0.0,1.0]".to_string(),
..<_>::default()
};
}
let rnd_seed: u64 = timestamps.iter().sum();
let mut rng = WyRand::new_seed(rnd_seed);
let rnd_idx = rng.generate_range(0..timestamps.len());
let consensus_ts = timestamps.swap_remove(rnd_idx);
let mut support: u32 = 0;
for ts in timestamps.iter() {
        if ts <= &(consensus_ts + tolerance as u64) && ts >= &(consensus_ts.saturating_sub(tolerance as u64)) {
support += 1;
}
}
let mut consensus = false;
if (support as f64 / timestamps.len() as f64) >= threshold {
consensus = true;
}
Consensus {
n: timestamps.len() as u32,
consensus_ts,
consensus,
support,
err_str: "".to_string(),
}
}
```
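To see how the tolerance parameter drives support, here is a hand-computation of the counting step (step 2 in the comments above) as a standalone helper; the sample array and the drawn timestamp `1637182264` are assumptions for illustration, not part of the service API:

```rust
// Count how many remaining timestamps fall within +/- tolerance
// of the (already removed) candidate timestamp.
fn support_count(remaining: &[u64], consensus_ts: u64, tolerance: u64) -> u32 {
    remaining
        .iter()
        .filter(|&&ts| {
            ts >= consensus_ts.saturating_sub(tolerance) && ts <= consensus_ts + tolerance
        })
        .count() as u32
}
```

With four remaining timestamps, one of them a far outlier, a tolerance of zero yields no support, while a tolerance of a few units captures the three close values, clearing a 2/3 threshold.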
We compile our consensus module with `./scripts/build.sh`, which allows us to run the unit tests against the Wasm module with `cargo +nightly test`:
```bash
# cargo +nightly test
running 10 tests
test tests::ts_validation_good_consensus_false ... ok
test tests::test_err_val ... ok
test tests::test_mean_fail ... ok
test tests::ts_validation_good_consensus ... ok
test tests::ts_validation_bad_empty ... ok
test tests::ts_validation_good_consensus_true ... ok
test tests::ts_validation_good_no_support ... ok
test tests::test_mean_good ... ok
test tests::ts_validation_good_no_consensus ... ok
test tests::ts_validation_good_one ... ok
test result: ok. 10 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 18.75s
```
We can now interact with our module with the Marine REPL `mrepl configs/Config.toml`:
```python
Welcome to the Marine REPL (version 0.9.1)
Minimal supported versions
sdk: 0.6.0
interface-types: 0.20.0
app service was created with service id = 520a092b-85ef-43c1-9c12-444274ba2cb7
elapsed time 62.893047ms
1> i
Loaded modules interface:
data Consensus:
n: u32
reference_ts: u64
support: u32
err_str: string
data Oracle:
n: u32
avg: f64
err_str: string
ts_oracle:
fn ts_avg(timestamps: []u64, min_points: u32) -> Oracle
fn ts_frequency(timestamps: []u64, tolerance: u32) -> Consensus
2> call ts_oracle ts_frequency [[1637182263,1637182264,1637182265,163718226,1637182266], 0, 0.66, 0]
result: Object({"consensus": Bool(false), "consensus_ts": Number(1637182264), "err_str": String(""), "n": Number(4), "support": Number(0)})
elapsed time: 167.078µs
3> call ts_oracle ts_frequency [[1637182263,1637182264,1637182265,163718226,1637182266], 5, 0.66, 0]
result: Object({"consensus": Bool(true), "consensus_ts": Number(1637182264), "err_str": String(""), "n": Number(4), "support": Number(3)})
elapsed time: 63.291µs
```
In our first call at prompt `2>`, we set a tolerance of 0 and, given our array of timestamps, have no support for the chosen timestamp, whereas in the next call, `3>`, we increase the tolerance parameter and obtain a consensus result.
All looks satisfactory, and we are ready to deploy our module with `./scripts/deploy.sh`, which appends the deployment response data, including the service id, to a local file named `deployed_service.data`:
```bash
client seed: 7UNmJPMWdLmrwAtGrpJXNrrcK7tEZHCjvKbGdSzEizEr
client peerId: 12D3KooWBeEsUnMV9MZ6QGMfaxcTvw8mGFEMGDe7rhKN8RQv1Gs8
relay peerId: 12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi
service id: 61a86f67-ffc2-4dea-8746-fd4f04d9c75b
service created successfully
```
With the service in place, let's have a look at our Aqua script. Recall, we want to poll the Kademlia neighborhood for timestamps and then call the `ts_oracle` method of our service with the array of timestamps and tolerance parameters as well as the (peer id, service id) parameters of our deployed service:
```python
-- aqua/ts_oracle.aqua
-- <snip>
func ts_oracle_with_consensus(tolerance: u32, threshold: f64, err_value:u64, node:string, oracle_service_id:string)-> Consensus, []string:
rtt = 1000
res: *u64 -- 4
msg = "timeout"
dead_peers: *string
on node:
k <- Op.string_to_b58(node)
nodes <- Kademlia.neighborhood(k, nil, nil) -- 1
for n <- nodes par: -- 3
status: *string
on n: -- 7
res <- Peer.timestamp_ms() -- 2
status <<- "success" -- 9
par status <- Peer.timeout(rtt, msg) -- 8
if status! != "success":
res <<- err_value -- 10
dead_peers <<- n -- 11
MyOp.identity(res!19) -- 5
TSOracle oracle_service_id
consensus <- TSOracle.ts_frequency(res, tolerance, threshold, err_value) -- 6
<- consensus, dead_peers -- 12
```
That script is probably a little more involved than what you have seen so far, so let's work through it. In order to get our set of timestamps, we determine the Kademlia neighbors (1) and then request a timestamp from each of those peers (2) in parallel (3). In an ideal world, each peer responds with a timestamp, the stream variable `res` (4) fills up with the values from the twenty neighbors, and we then fold over the stream (5) and push the result to our consensus service (6). Alas, life in a distributed system isn't quite that simple, since there are no guarantees that a peer is actually available to connect or to provide a service response. If we never actually connect to a peer (7), we can't expect an error response, which means a silent failure at (2) and no write to the stream `res`. That, in turn, stalls the fold operation (5): with fewer than the expected twenty items in the stream, (5) ends up timing out while waiting for a timestamp that never arrives.
In order to deal with this issue, we introduce a timeout (8) with the builtin [Peer.timeout](https://github.com/fluencelabs/aqua-lib/blob/1193236fe733e75ed0954ed26e1234ab7a6e7c53/builtin.aqua#L135) and run it in parallel with the attempted connection to peer `n` (3), essentially setting up a race to write to the stream: if the peer (`on n`, 7) behaves, we write the timestamp to `res` (2) and make a note of the successful operation (9); otherwise, we write a dummy value, i.e., `err_value`, into the stream (10) and make a note of the delinquent peer (11). Recall that we filter out the dummy `err_value` at the service level.
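The response-versus-timeout race can be sketched in plain Rust with a channel and `recv_timeout`; `poll_with_timeout` and its parameters are hypothetical illustrations of the pattern, not Fluence APIs:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Race a (possibly silent) responder against a timeout: on timeout,
// fall back to a sentinel value, mirroring the err_value write in Aqua.
fn poll_with_timeout(rtt_ms: u64, err_value: u64, respond: Option<u64>) -> (u64, bool) {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        if let Some(ts) = respond {
            let _ = tx.send(ts);
        } // an unresponsive peer simply never sends
    });
    match rx.recv_timeout(Duration::from_millis(rtt_ms)) {
        Ok(ts) => (ts, true),     // peer answered in time
        Err(_) => (err_value, false), // timed out: record the sentinel
    }
}
```

The boolean plays the role of the `status` marker: it tells the caller whether the value in the stream is a real timestamp or a filler to be filtered out later.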
Once we get our consensus result (6), we return it as well as the array of unavailable peers (12). And that's all there is.
In order to execute our workflow, we can use Aqua's `aqua run` CLI without having to manually compile the script:
```bash
aqua run \
-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \
-i aqua/ts_oracle.aqua \
-f 'ts_oracle_with_consensus(10, 0.66, 0, "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", "61a86f67-ffc2-4dea-8746-fd4f04d9c75b")'
```
If you are new to `aqua run`, the CLI functionality is provided by the [`aqua` package](https://www.npmjs.com/package/@fluencelabs/aqua):
* `aqua run --help` for all your immediate needs
* the `-i` flag denotes the location of our reference aqua file
* the `-a` flag denotes the multi-address of our connection peer/relay
* the `-f` flag handles the meat of the call:
* specify the aqua function name
* provide the parameter values matching the function signature
Upon execution of our `aqua run` client, we get the following result, which may be drastically different for you:
```bash
Your peerId: 12D3KooWEAhnNDjnh7C9Jba4Yn3EPcK6FJYRMZmaEQuqDnkb9UQf
[
{
"consensus": true,
"consensus_ts": 1637883531844,
"err_str": "",
"n": 15,
"support": 14
},
[
"12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA",
"12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv",
"12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU",
"12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r"
]
]
```
Recall that the maximum number of peers returned from a Kademlia neighborhood lookup is 20, the default value set by the Fluence team. As discussed above, not all nodes may be available at any given time, and at the _time of this writing_, the following four nodes were indeed not providing a timestamp response:
```bash
[
"12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA",
"12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv",
"12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU",
"12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r"
]
```
That leaves us with a smaller timestamp pool to run through our consensus algorithm than anticipated. Note that it is up to the consensus algorithm designer to set the minimum number of inputs deemed necessary to produce a sensible and acceptable result. In our case, we run fast and loose, as evident in the service implementation discussed above, and go with what we get as long as we get at least one timestamp.
With a tolerance of ten (10) milliseconds and a consensus threshold of 2/3 (0.66), we indeed attain consensus on the _1637883531844_ value with support from 14 out of 15 timestamps:
```bash
{
"consensus": true,
"consensus_ts": 1637883531844,
"err_str": "",
"n": 15,
"support": 14
},
```
We can make adjustments to the _tolerance_ parameter:
```bash
aqua run \
-a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \
-i aqua/ts_oracle.aqua \
-f 'ts_oracle_with_consensus(0, 0.66, 0, "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", "61a86f67-ffc2-4dea-8746-fd4f04d9c75b")'
```
Which does _not_ result in a consensus timestamp given the same threshold value:
```bash
Your peerId: 12D3KooWP7vAR462JgoagUzGA8s9YccQZ7wsuGigFof7sajiGThr
[
{
"consensus": false,
"consensus_ts": 1637884102677,
"err_str": "",
"n": 15,
"support": 0
},
[
"12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA",
"12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv",
"12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU",
"12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r"
]
]
```
We encourage you to experiment and tweak the parameters, both for the consensus algorithm and the timeout settings. Obviously, longer routes make for more timestamp variance even if every timestamp returned is accurate.
### Summary
Fluence and Aqua make it easy to create and implement decentralized oracle and consensus algorithms using Fluence's off-chain peer-to-peer network and tool set.
To further your understanding of creating decentralized off-chain (compute) oracles with Fluence and Aqua, experiment with both the consensus methodology and, of course, the oracle sources. Instead of timestamps, try your hand on crypto price/pairs and associated liquidity data, election exit polls or sports scores. Enjoy!
# Quick Start
Welcome to our quick-start tutorials which guide you through the necessary steps to
1. Create a browser-to-browser messaging web application
2. Create and deploy a hosted service
3. Enhance a browser-to-browser application with a network-hosted service
4. Explore service composition and reuse with Aqua
5. Work through a decentralized price oracle example with Fluence and Aqua
## Preparing Your Environment
In case you haven't set up your development environment, follow the [setup instructions](../tutorials\_tutorials/recipes\_setting\_up.md) and clone the [examples repo](https://github.com/fluencelabs/examples):
```bash
git clone https://github.com/fluencelabs/examples
```
If you encounter any problems or have suggestions, please open an issue or submit a PR. You can also reach out in [Discord](https://fluence.chat) or [Telegram](https://t.me/fluence\_project).