diff --git a/.gitbook/assets/image (18).png b/.gitbook/assets/image (18).png new file mode 100644 index 0000000..249a787 Binary files /dev/null and b/.gitbook/assets/image (18).png differ diff --git a/.gitbook/assets/image (24).png b/.gitbook/assets/image (24).png new file mode 100644 index 0000000..4f41008 Binary files /dev/null and b/.gitbook/assets/image (24).png differ diff --git a/SUMMARY.md b/SUMMARY.md index b74b71f..562071e 100644 --- a/SUMMARY.md +++ b/SUMMARY.md @@ -8,27 +8,27 @@ * [2. Hosted Services](quick-start/2.-hosted-services.md) * [3. Browser-to-Service](quick-start/3.-browser-to-service.md) * [4. Service Composition And Reuse With Aqua](quick-start/4.-service-composition-and-reuse-with-aqua.md) -* [Aquamarine](knowledge_aquamarine/README.md) - * [Aqua](knowledge_aquamarine/hll.md) - * [Marine](knowledge_aquamarine/marine/README.md) - * [Marine CLI](knowledge_aquamarine/marine/marine-cli.md) - * [Marine REPL](knowledge_aquamarine/marine/marine-repl.md) - * [Marine Rust SDK](knowledge_aquamarine/marine/marine-rs-sdk.md) -* [Tools](knowledge_tools.md) + * [5. 
Decentralized Oracles With Fluence And Aqua](quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md) +* [Aquamarine](knowledge\_aquamarine/README.md) + * [Aqua](knowledge\_aquamarine/hll.md) + * [Marine](knowledge\_aquamarine/marine/README.md) + * [Marine CLI](knowledge\_aquamarine/marine/marine-cli.md) + * [Marine REPL](knowledge\_aquamarine/marine/marine-repl.md) + * [Marine Rust SDK](knowledge\_aquamarine/marine/marine-rs-sdk.md) +* [Tools](knowledge\_tools.md) * [Node](node.md) * [Fluence JS](fluence-js/README.md) - * [Concepts](fluence-js/1_concepts.md) - * [Basics](fluence-js/2_basics.md) - * [In-depth](fluence-js/3_in_depth.md) - * [Running app in nodejs](fluence-js/5_run_in_node.md) - * [Running app in browser](fluence-js/4_run_in_browser-1.md) + * [Concepts](fluence-js/1\_concepts.md) + * [Basics](fluence-js/2\_basics.md) + * [Running app in nodejs](fluence-js/5\_run\_in\_node.md) + * [Running app in browser](fluence-js/4\_run\_in\_browser-1.md) + * [In-depth](fluence-js/3\_in\_depth.md) * [API reference](fluence-js/6-reference.md) * [Changelog](fluence-js/changelog.md) -* [Security](knowledge_security.md) -* [Tutorials](tutorials_tutorials/README.md) - * [Setting Up Your Environment](tutorials_tutorials/recipes_setting_up.md) - * [Deploy A Local Fluence Node](tutorials_tutorials/tutorial_run_local_node.md) - * [cUrl As A Service](tutorials_tutorials/curl-as-a-service.md) - * [Add Your Own Builtins](tutorials_tutorials/add-your-own-builtin.md) +* [Security](knowledge\_security.md) +* [Tutorials](tutorials\_tutorials/README.md) + * [Setting Up Your Environment](tutorials\_tutorials/recipes\_setting\_up.md) + * [Deploy A Local Fluence Node](tutorials\_tutorials/tutorial\_run\_local\_node.md) + * [cUrl As A Service](tutorials\_tutorials/curl-as-a-service.md) + * [Add Your Own Builtins](tutorials\_tutorials/add-your-own-builtin.md) * [Research, Papers And References](research-papers-and-references.md) - diff --git 
a/knowledge_aquamarine/marine/marine-rs-sdk.md b/knowledge_aquamarine/marine/marine-rs-sdk.md index d63b442..6426c74 100644 --- a/knowledge_aquamarine/marine/marine-rs-sdk.md +++ b/knowledge_aquamarine/marine/marine-rs-sdk.md @@ -1,6 +1,6 @@ # Marine Rust SDK -The [marine-rs-sdk](https://github.com/fluencelabs/marine-rs-sdk) empowers developers to write services suitable for peer hosting in peer-to-peer networks using the Marine Virtual Machine by enabling the wasm32-wasi compile target for Marine. +The [marine-rs-sdk](https://github.com/fluencelabs/marine-rs-sdk) empowers developers to create services suitable for hosting on peers of the peer-to-peer network. Such services are constructed from one or more Wasm modules, each of which is the result of Rust code compiled to the wasm32-wasi compile target, executable by the Marine runtime. ### API @@ -51,7 +51,7 @@ Function Export Requirements * wrap a target function with the `[marine]` macro * function arguments must be of `ftype` -* the function return type also must be of `ftype` +* the function return type also must be of `ftype` {% endhint %} #### Function Import @@ -109,7 +109,7 @@ extern "C" { #### Structures -Finally, the `[marine]` macro can wrap a `struct` making possible to use it as a function argument or return type. Note that +Finally, the `[marine]` macro can wrap a `struct`, making it possible to use it as a function argument or return type. Note that * only macro-wrapped structures can be used as function arguments and return types * all fields of the wrapped structure must be public and of the `ftype`.
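To make the structure requirements above concrete, here is a minimal plain-Rust sketch of a macro-wrapped struct used as a return type. The struct, function, and greeting string are hypothetical, and the `#[marine]` attributes are shown as comments so the sketch compiles without the SDK; in a real module you would `use marine_rs_sdk::marine;` and apply the attribute directly:

```rust
// In a real Marine module: use marine_rs_sdk::marine; and replace the
// commented attributes below with #[marine].

// #[marine] -- all fields must be public and of the `ftype`
pub struct Greeting {
    pub msg: String,
    pub count: u32,
}

// #[marine] -- arguments and return values must be of the `ftype`,
// or macro-wrapped structs such as `Greeting`
pub fn greet(name: String, count: u32) -> Greeting {
    Greeting {
        msg: format!("Hello, {}!", name),
        count,
    }
}

fn main() {
    let g = greet("Marine".to_string(), 1);
    println!("{} ({})", g.msg, g.count);
}
```

A private field, or a field of a non-`ftype` type such as `Box<dyn Trait>`, would violate the requirements listed above.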
@@ -218,7 +218,7 @@ fn some_function() -> Data { #### Call Parameters -There is a special API function `fluence::get_call_parameters()` that returns an instance of the [`CallParameters`](https://github.com/fluencelabs/marine-rs-sdk/blob/master/src/call_parameters.rs#L35) structure defined as follows: +There is a special API function `fluence::get_call_parameters()` that returns an instance of the [`CallParameters`](https://github.com/fluencelabs/marine-rs-sdk/blob/master/src/call\_parameters.rs#L35) structure defined as follows: ```rust pub struct CallParameters { @@ -294,7 +294,7 @@ extern "C" { } ``` -The above code creates a "curl adapter", i.e., a Wasm module that allows other Wasm modules to use the the `curl_request` function, which calls the imported _curl_ binary in this case, to make http calls. Please note that we are wrapping the `extern` block with the `[marine]`macro and introduce a Marine-native data structure [`MountedBinaryResult`](https://github.com/fluencelabs/marine/blob/master/examples/url-downloader/curl_adapter/src/main.rs) as the linked-function return value. +The above code creates a "curl adapter", i.e., a Wasm module that allows other Wasm modules to use the `curl_request` function, which calls the imported _curl_ binary in this case, to make http calls. Please note that we are wrapping the `extern` block with the `[marine]` macro and introducing a Marine-native data structure, [`MountedBinaryResult`](https://github.com/fluencelabs/marine/blob/master/examples/url-downloader/curl\_adapter/src/main.rs), as the linked-function return value.
Please note that if you want to use `curl_request` in tests (see below), the curl call needs to be marked unsafe, e.g.: @@ -338,7 +338,7 @@ To use the `[marine-test]` macro please add `marine-rs-sdk-test` crate to the `[ marine-rs-sdk-test = "0.2.0" ``` - Let's have a look at an implementation example: + Let's have a look at an implementation example: ```rust use marine_rs_sdk::marine; @@ -371,8 +371,8 @@ mod tests { } ``` -1. We wrap a basic _greeting _function with the `[marine]` macro which results in the greeting.wasm module -2. We wrap our tests as usual with `[cfg(test)]` and import the marine _test crate. _Do **not** import _super_ or the _local crate_. +1. We wrap a basic _greeting_ function with the `[marine]` macro which results in the greeting.wasm module +2. We wrap our tests as usual with `[cfg(test)]` and import the marine _test crate._ Do **not** import _super_ or the _local crate_. 3. Instead, we apply the `[marine_test]` macro to each of the test functions by providing the path to the config file, e.g., Config.toml, and the directory containing the Wasm module we obtained after compiling our project with `marine build`. Moreover, we add the type of the test as an argument in the function signature. It is imperative that the project build precedes the test runner, otherwise the required Wasm file will be missing. 4. The target of our tests is the `pub fn greeting` function. Since we are calling the function from the Wasm module we must prefix the function name with the module namespace -- `greeting` in this example case as specified in the function argument.
We prepare data to pass to a service using structure definition from `marine_test_env`. The macro finds all structures used in the service interface functions and defines them in the corresponding submodule of `marine_test_env` . +3. We prepare data to pass to a service using the structure definitions from `marine_test_env`. The macro finds all structures used in the service interface functions and defines them in the corresponding submodule of `marine_test_env`. 4. We call a service function through the `ServiceInterface` object. -5. It is possible to use the result of one service call as an argument for a different service call. The interface types with the same structure have the same rust type in `marine_test_env`. +5. It is possible to use the result of one service call as an argument for a different service call. The interface types with the same structure have the same Rust type in `marine_test_env`. In the `test_on_mod.rs` tab we can see another option — applying `marine_test` to a `mod`. The macro just defines the `marine_test_env` at the beginning of the module and then it can be used as usual everywhere inside the module. -The full example is [here](https://github.com/fluencelabs/marine/tree/master/examples/multiservice_marine_test). +The full example is [here](https://github.com/fluencelabs/marine/tree/master/examples/multiservice\_marine\_test). The `marine_test` macro also gives access to the interface of internal modules which may be useful for setting up a test environment. This feature is designed to be used in situations when it is simpler to set up a service for a test through internal functions than through the service interface. To illustrate this feature we have rewritten the previous example: @@ -546,7 +546,7 @@ mod tests { 1. We access the internal service interface to construct an interface structure. To do so, we use the following pattern: `marine_test_env::$service_name::modules::$module_name::$structure_name`. 2. 
We access the internal service interface and directly call a function from one of the modules of this service. To do so, we use the following pattern: `$service_object.modules.$module_name.$function_name` . -3. In the previous example, the same interface types had the same rust types. It is limited when using internal modules: the property is true only when structures are defined in internal modules of one service, or when structures are defined in service interfaces of different services. So, we need to construct the proper type to pass data to the internals of another module. +3. In the previous example, identical interface types shared the same Rust types. This property is more limited when using internal modules: it holds only when structures are defined in the internal modules of one service, or when structures are defined in the service interfaces of different services. So, we need to construct the proper type to pass data to the internals of another module. The testing SDK also provides an interface for [Cargo build scripts](https://doc.rust-lang.org/cargo/reference/build-scripts.html). Some IDEs can analyze files generated in build scripts, providing code completion and error highlighting for the generated code. But using it may be a little bit tricky because build scripts are not designed for such things. @@ -644,7 +644,7 @@ marine-rs-sdk-test = "0.4.0" # <- 5 {% endtab %} {% endtabs %} -1. We create a vector of pairs (service_name, service_description) to pass to the generator. The structure is the same with multi-service `marine_test`. +1. We create a vector of pairs (service\_name, service\_description) to pass to the generator. The structure is the same as with the multi-service `marine_test`. 2. We check if we build for a non-wasm target. As we build this marine service only for `wasm32-wasi` and tests are built for the native target, we can generate `marine_test_env` only for tests. This is needed because our generator depends on the artifacts from the `wasm32-wasi` build.
We suggest using a separate crate for build scripts used for testing purposes; it is done here for simplicity. 3. We pass our services, a name of the file to generate, and a path to the build script file to the `marine_test_env` generator. Just always use `file!()` for the last argument. The generated file will be in the directory specified by the `OUT_DIR` variable, which is set by cargo. The build script must not change any files outside of this directory. 4. We set up a condition to re-run the build script. It must be customized; a good choice is to re-run the build script when .wasm files or `Config.toml` are changed. diff --git a/knowledge_tools.md b/knowledge_tools.md index 6c489a2..604a918 100644 --- a/knowledge_tools.md +++ b/knowledge_tools.md @@ -2,9 +2,15 @@ ## Fluence Proto Distributor: FLDIST -[`fldist`](https://github.com/fluencelabs/proto-distributor) is a command line interface \(CLI\) to Fluence peers allowing for the lifecycle management of services and offers the fastest and most effective way to service deployment. +{% hint style="info" %} +Please note that we are in the process of deprecating `fldist` in favor of [Aqua CLI](https://github.com/fluencelabs/aqua/tree/main/cli). At the time of this writing, `fldist` remains fully functional **except** for the `run_air` command, which needs to be replaced with `aqua run`. -```text +We are currently in the process of updating the documentation to reflect these changes. If you run into an errant `fldist` reference, please let us know! +{% endhint %} + +[`fldist`](https://github.com/fluencelabs/proto-distributor) is a command line interface (CLI) to Fluence peers that allows for the lifecycle management of services and offers the fastest and most effective way to deploy services.
+ +``` mbp16~(:|✔) % fldist --help Usage: fldist [options] @@ -45,5 +51,4 @@ The [Fluence JS](https://github.com/fluencelabs/fluence-js) supports developers ## Marine Tools -Marine offers multiple tools including the Marine CLI, REPL and SDK. Please see the [Marine section](knowledge_aquamarine/marine/) for more detail. - +Marine offers multiple tools including the Marine CLI, REPL and SDK. Please see the [Marine section](knowledge\_aquamarine/marine/) for more detail. diff --git a/quick-start/1.-browser-to-browser-1.md b/quick-start/1.-browser-to-browser-1.md index 12a28c7..625de05 100644 --- a/quick-start/1.-browser-to-browser-1.md +++ b/quick-start/1.-browser-to-browser-1.md @@ -4,14 +4,14 @@ The first example demonstrates how to communicate between two client peers, i.e. In your VSCode container terminal, make sure you are in the `examples/quickstart/1-browser-to-browser` directory to install the dependencies: -```text +``` cd examples/quickstart/1-browser-to-browser npm install ``` Run the app with `npm start` : -```text +``` npm start ``` @@ -19,48 +19,47 @@ Which opens a new tab in your browser at `http://localhost:3000`. Depending on y The browser tab, representing the client peer, wants you to pick a relay node the browser client can connect to and, of course, allows the peer to respond to the browser client. Select any one of the offered relays: -![Relay Selection](../.gitbook/assets/image%20%2823%29.png) +![Relay Selection](<../.gitbook/assets/image (23).png>) The client peer is now connected to the relay and ready for business: -![Connection confirmation to network](../.gitbook/assets/image%20%2825%29.png) +![Connection confirmation to network](<../.gitbook/assets/image (25).png>) -Let's follow the instructions, open another browser tab, i.e. client peer, using `http://localhost:3000` , select any one of the relays and copying the ensuing peer id and relay peer id to the first client peer, i.e. 
the first browser tab, and click the `say hello` button: +Let's follow the instructions: open another browser tab, i.e. client peer, using `http://localhost:3000`, select any one of the relays, copy the ensuing peer id and relay peer id to the first client peer, i.e. the first browser tab, and click the `say hello` button:\ -![Peer-to-peer communication between two browser client peers](../.gitbook/assets/image%20%2846%29.png) +![Peer-to-peer communication between two browser client peers](<../.gitbook/assets/image (46).png>) Congratulations, you just sent messages between two browsers over the Fluence peer-to-peer network, which is pretty cool! Even cooler, however, is how we got here using Aqua, Fluence's distributed network and application composition language. In your VSCode workspace, navigate to the `aqua` directory and open the \``getting-started.aqua` file in VSCode: -![getting-started.aqua](../.gitbook/assets/image%20%2827%29.png) +![getting-started.aqua](<../.gitbook/assets/image (27).png>) -And yes, fewer than ten lines \(!!\) are required for a client peer, like our browser, to connect to the network and start composing the local `HelloPeer` service to send messages. +And yes, fewer than ten lines (!!) are required for a client peer, like our browser, to connect to the network and start composing the local `HelloPeer` service to send messages. In broad strokes, the Aqua code breaks down as follows: -* Import the Aqua [standard library](https://github.com/fluencelabs/aqua-lib) into our application \(1\) -* Create a service interface binding to the local service \(see below\) with the `HelloPeer` namespace and `hello` function \(4-5\) -* Create the composition function `sayHello` that executes the `hello` call on the provided `targetPeerId` via the provided `targetRelayPeerId` and returns the result \(7-10\). Recall the copy and paste job you did earlier in the browser tab for the peer and relay id? 
Well, you just found the consumption place for these two parameters. +* Import the Aqua [standard library](https://github.com/fluencelabs/aqua-lib) into our application (1) +* Create a service interface binding to the local service (see below) with the `HelloPeer` namespace and `hello` function (4-5) +* Create the composition function `sayHello` that executes the `hello` call on the provided `targetPeerId` via the provided `targetRelayPeerId` and returns the result (7-10). Recall the copy and paste job you did earlier in the browser tab for the peer and relay id? Well, you just found the consumption place for these two parameters. Not only is Aqua rather succinct, allowing you to seamlessly program both network routes and distributed application workflows, but it also provides the ability to compile Aqua to Typescript stubs that wrap the compiled Aqua, called AIR -- short for Aqua Intermediate Representation -- into ready-to-use code blocks. Navigate to the `src/_aqua` directory and open the `getting-started.ts` file in VSCode: -![Aqua compiler generated typescript wrapper around AIR ](../.gitbook/assets/image%20%2845%29.png) +![Aqua compiler generated typescript wrapper around AIR](<../.gitbook/assets/image (45).png>) Which can now be imported into our `App.tsx` file: -![Import Aqua generated Typescript stub \(line 7\)](../.gitbook/assets/image%20%2826%29.png) +![Import Aqua generated Typescript stub (line 7)](<../.gitbook/assets/image (26).png>) We wrote a little more than a handful of lines of code in Aqua and ended up with a deployment-ready code block that includes both the network routing and the compute logic to facilitate browser-to-browser messaging over a peer-to-peer network.
-The local \(browser\) service `HelloPeer` is also implemented in the `App.tsx` file: +The local (browser) service `HelloPeer` is also implemented in the `App.tsx` file: -![Local HelloPeer service implementation](../.gitbook/assets/image%20%2821%29.png) +![Local HelloPeer service implementation](<../.gitbook/assets/image (21).png>) -To summarize, we run an app that facilities messaging between two browsers over a peer-to-peer network. At the core of this capability is Aqua which allowed us in just a few lines of code to program both the network topology and the application workflow in barely more than a handful of lines of code. Hint: You should be excited. For more information on Aqua, see the [Aqua Book](https://app.gitbook.com/@fluence/s/aqua-book/). +To summarize, we ran an app that facilitates messaging between two browsers over a peer-to-peer network. At the core of this capability is Aqua, which allowed us to program both the network topology and the application workflow in barely more than a handful of lines of code. Hint: You should be excited. For more information on Aqua, see the [Aqua Book](https://app.gitbook.com/@fluence/s/aqua-book/). In the next section, we develop a WebAssembly module and deploy it as a hosted service to the Fluence peer-to-peer network. - diff --git a/quick-start/2.-hosted-services.md b/quick-start/2.-hosted-services.md index 60be6cf..11834d1 100644 --- a/quick-start/2.-hosted-services.md +++ b/quick-start/2.-hosted-services.md @@ -6,9 +6,9 @@ In the previous example, we used a local, browser-native service to facilitate t In this section, we develop a simple `HelloWorld` service and host it on a peer-to-peer node of the Fluence testnet.
In your VSCode IDE, change to the `2-hosted-services` directory and open the `src/main.rs` file: -![Rust code for HelloWorld hosted service module](../.gitbook/assets/image%20%2844%29.png) +![Rust code for HelloWorld hosted service module](<../.gitbook/assets/image (44).png>) -Fluence hosted services are comprised of WebAssembly modules implemented in Rust and compiled to [wasm32-wasi](https://doc.rust-lang.org/stable/nightly-rustc/rustc_target/spec/wasm32_wasi/index.html). Let's have look at our code: +Fluence hosted services are composed of WebAssembly modules implemented in Rust and compiled to [wasm32-wasi](https://doc.rust-lang.org/stable/nightly-rustc/rustc\_target/spec/wasm32\_wasi/index.html). Let's have a look at our code: ```rust // quickstart/2-hosted-services/src/main.rs @@ -36,7 +36,7 @@ pub fn hello(from: String) -> HelloWorld { At the core of our implementation is the `hello` function which takes a string parameter and returns the `HelloWorld` struct consisting of the `msg` and `reply` fields. We can use the `build.sh` script in the `scripts` directory, `./scripts/build.sh`, to compile the code to the Wasm target from the VSCode terminal: -![](../.gitbook/assets/image%20%2837%29.png) +![](<../.gitbook/assets/image (37).png>) Aside from some housekeeping, the `build.sh` script gives the compile instructions with [marine](https://crates.io/crates/marine), `marine build --release`, and copies the resulting Wasm module, `hello_world.wasm`, to the `artifacts` directory for easy access. @@ -70,22 +70,22 @@ mod tests { ``` - - To run our tests, we can use the familiar[`cargo test`](https://doc.rust-lang.org/cargo/commands/cargo-test.html) . However, we don't really care all that much about our native Rust functions being tested but want to test our WebAssembly functions. This is where the extra code in the test module comes into play. 
In short., we are running `cargo test` against the exposed interfaces of the `hello_world.wasm` module and in order to do that, we need the `marine_test` macro and provide it with both the modules directory, i.e., the `artifacts` directory, and the location of the `Config.toml` file. Note that the `Config.toml` file specifies the module metadata and optional module linking data. Moreover, we need to call our Wasm functions from the module namespace, i.e. `hello_world.hello` instead of the standard `hello` -- see lines 13 and 19 above, which we specify as an argument in the test function signature \(lines 11 and 17, respectively\). +\ + To run our tests, we can use the familiar [`cargo test`](https://doc.rust-lang.org/cargo/commands/cargo-test.html). However, we don't really care all that much about our native Rust functions being tested but want to test our WebAssembly functions. This is where the extra code in the test module comes into play. In short, we are running `cargo test` against the exposed interfaces of the `hello_world.wasm` module and in order to do that, we need the `marine_test` macro and provide it with both the modules directory, i.e., the `artifacts` directory, and the location of the `Config.toml` file. Note that the `Config.toml` file specifies the module metadata and optional module linking data. Moreover, we need to call our Wasm functions from the module namespace, i.e. `hello_world.hello` instead of the standard `hello` -- see lines 13 and 19 above, which we specify as an argument in the test function signature (lines 11 and 17, respectively). {% hint style="info" %} In order to be able to use the macro, install the [`marine-rs-sdk-test`](https://crates.io/crates/marine-rs-sdk-test) crate as a dev dependency: -`[dev-dependencies] marine-rs-sdk-test = "`<version>`"` +`[dev-dependencies] marine-rs-sdk-test = "`<version>`"` {% endhint %} From the VSCode terminal, we now run our tests with the `cargo +nightly test --release` command.
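The namespace-prefixed call pattern described above can be mimicked in plain Rust. This is a stand-in, not the real Wasm test: the greeting strings are illustrative, and a real test would instead carry the `#[marine_test(...)]` attribute from `marine-rs-sdk-test` pointing at the `artifacts` directory and `Config.toml`:

```rust
// Plain-Rust stand-in for the Wasm test pattern: the module namespace below
// plays the role of `hello_world` in the namespaced call `hello_world.hello(...)`.
mod hello_world {
    pub struct HelloWorld {
        pub msg: String,
        pub reply: String,
    }

    // Stand-in for the exported Wasm `hello` function; strings are illustrative.
    pub fn hello(from: String) -> HelloWorld {
        HelloWorld {
            msg: format!("Hello, {}!", from),
            reply: format!("Hello back to you, {}!", from),
        }
    }
}

fn main() {
    // Call through the module namespace, mirroring `hello_world.hello`
    // rather than a bare `hello`.
    let res = hello_world::hello("Fluence".to_string());
    println!("{}", res.msg);
    println!("{}", res.reply);
}
```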
Please note that if `nightly` is your default, you don't need it in your `cargo test` command. -![Cargo test using Wasm module](../.gitbook/assets/image%20%2833%29.png) +![Cargo test using Wasm module](<../.gitbook/assets/image (33).png>) Well done -- our tests check out. Before we deploy our service to the network, we can interact with it locally using the [Marine REPL](https://crates.io/crates/mrepl). In your VSCode terminal the `2-hosted-services` directory run: -```text +``` mrepl configs/Config.toml ``` @@ -121,9 +121,9 @@ We can explore the available interfaces with the `i` command and see that the in ### Exporting WebAssembly Interfaces To Aqua -In anticipation of future needs, note that `marine` allows us to export the Wasm interfaces ready for use in Aqua. In your VSCode terminal, navigate to the \`\` directory +In anticipation of future needs, note that `marine` allows us to export the Wasm interfaces ready for use in Aqua. In your VSCode terminal, navigate to the \`\` directory -```text +``` marine aqua artifacts/hello_world.wasm ``` @@ -138,19 +138,19 @@ service HelloWorld: hello(from: string) -> HelloWorld ``` -That can be piped directly into an aqua file , e.g., \``marine aqua my_wasm.wasm >> my_aqua.aqua`. +That can be piped directly into an aqua file , e.g., \``marine aqua my_wasm.wasm >> my_aqua.aqua`. ### Deploying A Wasm Module To The Network -Looks like all is in order with our module and we are ready to deploy our `HelloWorld` service to the world by means of the Fluence peer-to-peer network. For this to happen, we need two things: the peer id of our target node\(s\) and a way to deploy the service. The latter can be accomplished with the `fldist` command line tool and with respect to the former, we can get a peer from one of the Fluence testnets also with `fldist` . In your VSCode terminal: +Looks like all is in order with our module and we are ready to deploy our `HelloWorld` service to the world by means of the Fluence peer-to-peer network. 
For this to happen, we need two things: the peer id of our target node(s) and a way to deploy the service. The latter can be accomplished with the `fldist` command line tool and with respect to the former, we can get a peer from one of the Fluence testnets also with `fldist` . In your VSCode terminal: -```text +``` fldist env ``` Which gets us a list of network peers: -```text +``` /dns4/kras-00.fluence.dev/tcp/19990/wss/p2p/12D3KooWSD5PToNiLQwKDXsu8JSysCwUt8BVUJEqCHcDe7P5h45e /dns4/kras-00.fluence.dev/tcp/19001/wss/p2p/12D3KooWR4cv1a8tv7pps4HH6wePNaK6gf1Hww5wcCMzeWxyNw51 /dns4/kras-01.fluence.dev/tcp/19001/wss/p2p/12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA @@ -175,16 +175,15 @@ fldist --node-id 12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ Which gives us a unique service id: -```text +``` service id: 1e740ce4-81f6-4dd4-9bed-8d86e9c2fa50 service created successfully ``` Take note of the service id, `1e740ce4-81f6-4dd4-9bed-8d86e9c2fa50` in this example but different for you, as we need it to use the service with Aqua. -Congratulations, we just deployed our first reusable service to the Fluence network and we can admire our handiwork on the Fluence [Developer Hub](https://dash.fluence.dev/): +Congratulations, we just deployed our first reusable service to the Fluence network and we can admire our handiwork on the Fluence [Developer Hub](https://dash.fluence.dev): -![HelloWorld service deployed to peer 12D3Koo...WaoHi](../.gitbook/assets/image%20%2822%29.png) +![HelloWorld service deployed to peer 12D3Koo...WaoHi](<../.gitbook/assets/image (22).png>) With our newly created service ready to roll, let's move on and put it to work. 
- diff --git a/quick-start/4.-service-composition-and-reuse-with-aqua.md b/quick-start/4.-service-composition-and-reuse-with-aqua.md index 9fe3fe3..bc205a3 100644 --- a/quick-start/4.-service-composition-and-reuse-with-aqua.md +++ b/quick-start/4.-service-composition-and-reuse-with-aqua.md @@ -2,17 +2,17 @@ In the previous three sections, you got a taste of using Aqua with browsers and how to create and deploy a service. In this section, we discuss how to compose an application from multiple distributed services using Aqua. In Fluence, we don't use JSON-RPC or REST endpoints to address and execute the service, we use [Aqua](https://github.com/fluencelabs/aqua). -Recall, Aqua is a purpose-built distributed systems and peer-to-peer programming language that resolves \(Peer Id, Service Id\) tuples to facilitate service execution on the host node without developers having to worry about transport or network routing. And with Aqua VM available on each Fluence peer-to-peer node, Aqua allows developers to ergonomically locate and execute distributed services. +Recall, Aqua is a purpose-built distributed systems and peer-to-peer programming language that resolves (Peer Id, Service Id) tuples to facilitate service execution on the host node without developers having to worry about transport or network routing. And with Aqua VM available on each Fluence peer-to-peer node, Aqua allows developers to ergonomically locate and execute distributed services. ### Composition With Aqua -A service is one or more linked WebAssembly \(Wasm\) modules that may be linked at runtime. Said dependencies are specified by a **blueprint** which is the basis for creating a unique service id after the deployment and initiation of the blueprint on our chosen host for deployment. See Figure 1. +A service is one or more WebAssembly (Wasm) modules that may be linked at runtime. 
Said dependencies are specified by a **blueprint**, which is the basis for creating a unique service id once the blueprint is deployed and initiated on our chosen host. See Figure 1. -![](../.gitbook/assets/image%20%2812%29.png) +![](<../.gitbook/assets/image (12).png>) When we deploy our service, as demonstrated in section two, the service is "out there" on the network and we need a way to locate and execute the service if we want to utilize the service as part of our application. -Luckily, the \(Peer Id, Service Id\) tuple we obtain from the service deployment process contains all the information Aqua needs to locate and execute the specified service instance. +Luckily, the (Peer Id, Service Id) tuple we obtain from the service deployment process contains all the information Aqua needs to locate and execute the specified service instance. Let's create a Wasm module with a single function that adds one to an input in the `adder` directory: @@ -23,17 +23,17 @@ fn add_one(input: u64) -> u64 { } ``` -For our purposes, we deploy that module as a service to three hosts: Peer 1, Peer 2, and Peer 3. Use the instructions provided in section two to create the module and deploy the service to three peers of your choosing. See `4-composing-services-with-aqua/adder` for the code and `data/distributed_service.json` for the \(Peer Id, Service Id\) tuples already deployed to three network peers. +For our purposes, we deploy that module as a service to three hosts: Peer 1, Peer 2, and Peer 3. Use the instructions provided in section two to create the module and deploy the service to three peers of your choosing. See `4-composing-services-with-aqua/adder` for the code and `data/distributed_service.json` for the (Peer Id, Service Id) tuples already deployed to three network peers.
-Once we got the services deployed to their respective hosts, we can use Aqua to compose an admittedly simple application by composing the use of each service into an workflow where the \(Peer Id, Service Id\) tuples facilitate the routing to and execution of each service. Also, recall that in the Fluence peer-to-peer programming model the client need not, and for the most part should not, be involved in managing intermediate results. Instead, results are "forward chained" to the next service as specified in the Aqua workflow. +Once we have the services deployed to their respective hosts, we can use Aqua to compose an admittedly simple application by composing each service call into a workflow where the (Peer Id, Service Id) tuples facilitate the routing to and execution of each service. Also, recall that in the Fluence peer-to-peer programming model the client need not, and for the most part should not, be involved in managing intermediate results. Instead, results are "forward chained" to the next service as specified in the Aqua workflow. Using our `add_one` service and starting with an input parameter value of one, utilizing all three services, we expect a final result of four given **seq**uential service execution: -![](../.gitbook/assets/image%20%2817%29.png) +![](<../.gitbook/assets/image (17).png>) -The underlying Aqua script may look something like this \(see the `aqua-script` directory\): +The underlying Aqua script may look something like this (see the `aqua-script` directory): -```text +``` -- aqua-scripts/adder.aqua -- service interface for Wasm module @@ -62,13 +62,17 @@ func add_one_three_times(value: u64, ns_tuples: []NodeServiceTuple) -> u64: Let's give it a whirl!
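Before taking it to the network, the forward-chaining the script expresses can be sanity-checked locally. A hedged sketch — the three peers are simulated as plain functions, so this models only the data flow, not the transport or routing:

```rust
// Simulate forward-chaining a value through three `add_one` services.
fn add_one(input: u64) -> u64 {
    input + 1
}

fn main() {
    // Three "peers", each hosting the same service.
    let services: [fn(u64) -> u64; 3] = [add_one, add_one, add_one];

    // Sequential execution: each result is forwarded to the next service.
    let result = services.iter().fold(1u64, |acc, svc| svc(acc));

    assert_eq!(result, 4); // 1 -> 2 -> 3 -> 4
}
```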
Using the already deployed services or, even better, your own deployed services, let's compile our Aqua script in the `4-composing-services-with-aqua` directory: -```text +``` aqua -i aqua-scripts -o compiled-aqua -a ``` We now can use `fldist` to run the above Aqua script compiled to the `compiled-aqua/adder.add_one_three_times.air`: -```text +{% hint style="info" %} +Note that the `fldist run_air` examples below are currently broken and the `aqua run` alternative is not quite ready. Until this situation is rectified, please use the Fluence JS client example further below. +{% endhint %} + +``` fldist run_air -p compiled-aqua/adder.add_one_three_times.air -d '{"value": 5, "ns_tuples":[{ "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt", @@ -87,7 +91,7 @@ fldist run_air -p compiled-aqua/adder.add_one_three_times.air -d '{"value": 5, Since we are starting with a value of 5 and incrementing it three times, we expect an 8, which we get: -```text +``` [ 8 ] @@ -97,7 +101,7 @@ Of course, we can drastically change our application logic by changing the execu Reusing our deployed services with a different execution flow may look like the following: -```text +```` ```aqua -- service interface for Wasm module @@ -118,13 +122,13 @@ func add_one_par(value: u64, ns_tuples: []NodeServiceTuple) -> []u64: res <- AddOne.add_one(value) MyOp.identity(res!2) --< flatten the stream variable <- res --< return the final results [value +1, value + 1, value + 1, ...] to the client -``` +```` -Unlike the sequential execution model, this example returns an array where each item is the incremented value, which is captured by the stream variable **res**. That is, for a starting value of five \(5\), we obtain \[6,6,6\] assuming our NodeServiceTuple array provided the three distinct \(Peer Id, Service Id\) tuples. +Unlike the sequential execution model, this example returns an array where each item is the incremented value, which is captured by the stream variable **res**.
That is, for a starting value of five (5), we obtain \[6,6,6] assuming our NodeServiceTuple array provided the three distinct (Peer Id, Service Id) tuples. Running the script with `fldist`: -```text +``` fldist run_air -p compiled-aqua/adder.add_one_par.air -d '{"value": 5, "ns_tuples":[{ "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt", @@ -143,7 +147,7 @@ fldist run_air -p compiled-aqua/adder.add_one_par.air -d '{"value": 5, We get the expected result: -```text +``` [ [ 6, @@ -155,7 +159,7 @@ We get the expected result: We can improve on our business logic and change our input arguments to make parallelization a little more useful. Let's extend our data struct and update the workflow: -```text +``` -- aqua-scripts/adder.aqua data ValueNodeService: @@ -175,7 +179,7 @@ func add_one_par_alt(payload: []ValueNodeService) -> []u64: And we can run the `fldist` command line: -```text +``` fldist run_air -p compiled-aqua/adder.add_one_par_alt.air -d '{"payload": [{"value": 5, "node_id": "12D3KooWFtf3rfCDAfWwt6oLZYZbDfn9Vn7bv7g6QjjQxUUEFVBt", @@ -192,9 +196,9 @@ fldist run_air -p compiled-aqua/adder.add_one_par_alt.air -d '{"payload": }' --generated ``` -Given our input values \[5, 10, 15\], we get the expected output array of \[6, 11, 16\]: +Given our input values \[5, 10, 15], we get the expected output array of \[6, 11, 16]: -```text +``` [ [ 11, @@ -205,14 +209,14 @@ Given our input values \[5, 10, 15\], we get the expected output array of \[6, 1 Alternatively, we can run our Aqua scripts with a Typescript client. 
In the `client-peer` directory: -```text +``` npm i -npm run start +npm start ``` Which of course gives us the expected results: -```text +``` created a Fluence client 12D3KooWGve35kvMQ8USbmtRoMCzxaBPXSbqsZxfo6T8gBAV6bzy with relay 12D3KooWKnEqMfYo9zvfHmqTLpLdiHXPe4SVqUWcWHDJdFGrSmcA add_one to 5 equals 6 add_one sequentially equals 8 @@ -222,7 +226,6 @@ add_one parallel alt equals [ 11, 6, 16 ] ### Summary -This section illustrates how Aqua allows developers to locate and execute distributed services on by merely providing a \(Peer Id, Service Id\) tuple and the associated data. From an Aqua user perspective, there are no JSON-RPC or REST endpoints just topology tuples that are resolved on peers of the network. Moreover, we saw how the Fluence peer-to-peer workflow model facilitates a different request-response model than commonly encountered in traditional client-server applications. That is, instead of returning each service result to the client, Aqua allows us to forward the \(intermittent\) result to the next service, peer-to-peer style. +This section illustrates how Aqua allows developers to locate and execute distributed services by merely providing a (Peer Id, Service Id) tuple and the associated data. From an Aqua user perspective, there are no JSON-RPC or REST endpoints, just topology tuples that are resolved on peers of the network. Moreover, we saw how the Fluence peer-to-peer workflow model facilitates a different request-response model than commonly encountered in traditional client-server applications. That is, instead of returning each service result to the client, Aqua allows us to forward the (intermediate) result to the next service, peer-to-peer style. Furthermore, we explored how different Aqua execution flows, e.g. **seq**uential vs. **par**allel, and data models allow developers to compose drastically different workflows and applications reusing already deployed services.
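The seq/par contrast summarized here can also be modeled locally. In the sketch below the services are simulated as plain functions (an assumption for illustration only): the sequential flow folds one value through three calls, while the parallel-style flow maps each input to its own independent call:

```rust
fn add_one(input: u64) -> u64 {
    input + 1
}

fn main() {
    // seq: one value forwarded through three services: 5 -> 6 -> 7 -> 8
    let seq = (0..3).fold(5u64, |acc, _| add_one(acc));
    assert_eq!(seq, 8);

    // par-style: each service increments its own input independently.
    let par: Vec<u64> = [5u64, 10, 15].iter().map(|&v| add_one(v)).collect();
    assert_eq!(par, vec![6, 11, 16]);
}
```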
For more information on Aqua, please see the [Aqua book](https://doc.fluence.dev/aqua-book/) and for more information on Fluence development, see the [developer docs](https://doc.fluence.dev/docs/). - diff --git a/quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md b/quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md new file mode 100644 index 0000000..960ce29 --- /dev/null +++ b/quick-start/5.-decentralized-oracles-with-fluence-and-aqua.md @@ -0,0 +1,353 @@ +# 5. Decentralized Oracles With Fluence And Aqua + +### Overview + +An oracle is some device that provides real-world, off-chain data to deterministic on-chain consumers such as a smart contract. A decentralized oracle draws from multiple, purportedly (roughly) equal input sources to minimize or even eliminate single source pitfalls such as [man-in-the-middle attacks](https://en.wikipedia.org/wiki/Man-in-the-middle\_attack) (MITM) or provider manipulation. For example, a decentralized price oracle for, say, ETH/USD, could poll several DEXs for ETH/USD prices. Since smart contracts, especially those deployed on EVMs, can't directly call off-chain resources, oracles play a critical "middleware" role in the decentralized, trustless ecosystem. See Figure 1. + +![](<../.gitbook/assets/image (18).png>) + +Unlike single source oracles, multi-source oracles require some consensus mechanism to convert multiple input sources over the same target parameter into reliable point or range data suitable for third party, e.g., smart contract, consumption. Such "consensus over inputs" may take the form of simple [summary statistics](https://en.wikipedia.org/wiki/Summary\_statistics), e.g., mean, or one of many [other methods](https://en.wikipedia.org/wiki/Consensus\_\(computer\_science\)). + +Given the importance of oracles to the Web3 ecosystem, it's not surprising to see a variety of third party solutions supporting various blockchain protocols.
Fluence does not provide an oracle solution _per se_ but provides a peer-to-peer platform, tools and components for developers to quickly and easily program and compose reusable distributed data acquisition, processing and delivery services into decentralized oracle applications. + +For the remainder of this section, we work through the process of developing a decentralized, multi-source timestamp oracle comprised of data acquisition, processing and delivery. + +### Creating A Decentralized Timestamp Oracle + +Time, often in the form of timestamps, plays a critical role in a large number of Web2 and Web3 applications including off-chain voting applications and on-chain clocks. Our goal is to provide a consensus timestamp sourced from multiple input sources and implement an acceptable input aggregation and processing service to arrive at either a timestamp point or range value(s). + +#### Timestamp Acquisition + +Each Fluence peer, i.e. node in the Fluence peer-to-peer network, has the ability to provide a timestamp from a [builtin service](https://github.com/fluencelabs/aqua-lib/blob/b90f2dddc335c155995a74d8d97de8dbe6a029d2/builtin.aqua#L127). In Aqua, we can call a [timestamp function](https://github.com/fluencelabs/fluence/blob/527e26e08f3905e53208b575792712eeaee5deca/particle-closures/src/host\_closures.rs#L124) with the desired granularity, i.e., seconds or milliseconds for further processing: + +```python + -- aqua timestamp sourcing + on peer: + ts_ms_result <- peer.timestamp_ms() + -- or + ts_sec_result <- peer.timestamp_sec() + -- ... +``` + +In order to decentralize our timestamp oracle, we want to poll multiple peers in the Fluence network: + +```python + -- multi-peer timestamp sourcing + -- ... + results: *u64 + for peer <- many_peers_list par: + on peer: + results <- peer.timestamp_ms() + -- ... +``` + +In the above example, we have a list of peers and retrieve a timestamp value from each one.
Note that we are polling nodes for timestamps in [parallel](https://doc.fluence.dev/aqua-book/language/flow/parallel) in order to optimize toward uniformity and to collect responses in the stream variable `results`. See Figure 2. + +![](<../.gitbook/assets/image (24).png>) + +The last thing to pin down concerning our timestamp acquisition is which peers to query. One possibility is to specify the peer ids of a set of desired peers to query. Alternatively, we can tap into the [Kademlia neighborhood](https://en.wikipedia.org/wiki/Kademlia) of a peer, which is a set of peers that are closest to our peer based on the XOR distance of the peer ids. Luckily, there is a [builtin service](https://github.com/fluencelabs/aqua-lib/blob/b90f2dddc335c155995a74d8d97de8dbe6a029d2/builtin.aqua#L140) we can call from Aqua that returns up to 20 neighboring peers: + +```python + -- timestamps from Kademlia neighborhood + results: *u64 + on node: + k <- Op.string_to_b58(node) + nodes <- Kademlia.neighborhood(k, nil, nil) + for node <- nodes par: + on node: + try: + results <- node.timestamp_ms() + -- ... +``` + +#### Timestamp Processing + +Once we have our multiple timestamp values, we need to process them into a point or range value(s) to be useful. Whatever our processing/consensus algorithm is, we can implement it in Marine as one or more reusable, distributed services. + +For example, we can rely on [summary statistics](https://en.wikipedia.org/wiki/Summary\_statistics) and implement basic averaging to arrive at a point estimate: + +```rust + // ... + + #[marine] + pub fn ts_avg(timestamps: Vec<u64>) -> f64 { + timestamps.iter().sum::<u64>() as f64 / timestamps.len() as f64 +} + // ... +``` + +Using the average to arrive at a point-estimate is simply a stake in the ground to illustrate what's possible. Actual processing algorithms may vary and, depending on a developer's target audience, different algorithms may be used for different delivery targets.
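As a quick sanity check, the averaging logic can be exercised outside of Marine. A plain-Rust sketch with the `#[marine]` attribute and Wasm packaging stripped away:

```rust
// Average a set of u64 timestamps into an f64 point estimate.
fn ts_avg(timestamps: Vec<u64>) -> f64 {
    timestamps.iter().sum::<u64>() as f64 / timestamps.len() as f64
}

fn main() {
    let avg = ts_avg(vec![1637182263, 1637182264, 1637182265]);
    assert_eq!(avg, 1637182264.0);
}
```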
And Aqua makes it easy to customize workflows while emphasizing reuse. + +#### Putting It All Together + +Let's put it all together by sourcing timestamps from the Kademlia neighborhood and processing the timestamps into a consensus value. Instead of one of the summary statistics, we employ a simple consensus algorithm that randomly selects one of the provided timestamps and then calculates a consensus score from the remaining n - 1 timestamps: + +```rust +// src.main.rs +// +// simple consensus from timestamps +// params: +// timestamps, u64, [0, u64_max] +// tolerance, u32, [0, u32_max] +// threshold, f64, [0.0, 1.0] +// 1. Remove a randomly selected timestamp from the array of timestamps, ts +// 2. Count the number of timestamps left in the array that are within +/- tolerance (where tolerance may be zero) +// 3. Compare the supporting number of timestamps divided by the number of remaining timestamps to the threshold; if >=, consensus for the selected timestamp is true, else false +// +#[marine] +fn ts_frequency(mut timestamps: Vec<u64>, tolerance: u32, threshold: f64, err_value: u64) -> Consensus { + timestamps.retain(|&ts| ts != err_value); + if timestamps.len() == 0 { + return Consensus { + err_str: "Array must have at least one element".to_string(), + ..<_>::default() + }; + } + + if timestamps.len() == 1 { + return Consensus { + n: 1, + consensus_ts: timestamps[0], + consensus: true, + support: 1, + ..<_>::default() + }; + } + + if threshold < 0f64 || threshold > 1f64 { + return Consensus { + err_str: "Threshold needs to be between [0.0,1.0]".to_string(), + ..<_>::default() + }; + } + + let rnd_seed: u64 = timestamps.iter().sum(); + let mut rng = WyRand::new_seed(rnd_seed); + let rnd_idx = rng.generate_range(0..timestamps.len()); + let consensus_ts = timestamps.swap_remove(rnd_idx); + let mut support: u32 = 0; + for ts in timestamps.iter() { + if ts <= &(consensus_ts + tolerance as u64) && ts >= &(consensus_ts - tolerance as u64) { + support += 1; + } + } + + let mut
consensus = false; + if (support as f64 / timestamps.len() as f64) >= threshold { + consensus = true; + } + + Consensus { + n: timestamps.len() as u32, + consensus_ts, + consensus, + support, + err_str: "".to_string(), + } +} +``` + +We compile our consensus module with `./scripts/build.sh`, which allows us to run the unit tests using the Wasm module with `cargo +nightly test`: + +```bash +# src.main.rs +running 10 tests +test tests::ts_validation_good_consensus_false ... ok +test tests::test_err_val ... ok +test tests::test_mean_fail ... ok +test tests::ts_validation_good_consensus ... ok +test tests::ts_validation_bad_empty ... ok +test tests::ts_validation_good_consensus_true ... ok +test tests::ts_validation_good_no_support ... ok +test tests::test_mean_good ... ok +test tests::ts_validation_good_no_consensus ... ok +test tests::ts_validation_good_one ... ok + +test result: ok. 10 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 18.75s +``` + +We can now interact with our module with the Marine REPL `mrepl configs/Config.toml`: + +```python +Welcome to the Marine REPL (version 0.9.1) +Minimal supported versions + sdk: 0.6.0 + interface-types: 0.20.0 + +app service was created with service id = 520a092b-85ef-43c1-9c12-444274ba2cb7 +elapsed time 62.893047ms + +1> i +Loaded modules interface: +data Consensus: + n: u32 + reference_ts: u64 + support: u32 + err_str: string +data Oracle: + n: u32 + avg: f64 + err_str: string + +ts_oracle: + fn ts_avg(timestamps: []u64, min_points: u32) -> Oracle + fn ts_frequency(timestamps: []u64, tolerance: u32) -> Consensus + +2> call ts_oracle ts_frequency [[1637182263,1637182264,1637182265,163718226,1637182266], 0, 0.66, 0] +result: Object({"consensus": Bool(false), "consensus_ts": Number(1637182264), "err_str": String(""), "n": Number(4), "support": Number(0)}) + elapsed time: 167.078µs + +3> call ts_oracle ts_frequency [[1637182263,1637182264,1637182265,163718226,1637182266], 5, 0.66, 0] +result:
Object({"consensus": Bool(true), "consensus_ts": Number(1637182264), "err_str": String(""), "n": Number(4), "support": Number(3)}) + elapsed time: 63.291µs +``` + +In our first call at prompt `2>`, we set a tolerance of 0 and, given our array of timestamps, have no support for the chosen timestamp, whereas in the next call, `3>`, we increase the tolerance parameter and obtain a consensus result. + +All looks satisfactory and we are ready to deploy our module with `./scripts/deploy.sh`, which write-appends the deployment response data, including the service id, to a local file named `deployed_service.data`: + +```bash +client seed: 7UNmJPMWdLmrwAtGrpJXNrrcK7tEZHCjvKbGdSzEizEr +client peerId: 12D3KooWBeEsUnMV9MZ6QGMfaxcTvw8mGFEMGDe7rhKN8RQv1Gs8 +relay peerId: 12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi +service id: 61a86f67-ffc2-4dea-8746-fd4f04d9c75b +service created successfully +``` + +With the service in place, let's have a look at our Aqua script. Recall, we want to poll the Kademlia neighborhood for timestamps and then call the `ts_oracle` method of our service with the array of timestamps and tolerance parameters as well as the (peer id, service id) parameters of our deployed service: + +```python +-- aqua/ts_oracle.aqua +-- + +func ts_oracle_with_consensus(tolerance: u32, threshold: f64, err_value:u64, node:string, oracle_service_id:string)-> Consensus, []string: + rtt = 1000 + res: *u64 -- 4 + msg = "timeout" + dead_peers: *string + on node: + k <- Op.string_to_b58(node) + nodes <- Kademlia.neighborhood(k, nil, nil) -- 1 + for n <- nodes par: -- 3 + status: *string + on n: -- 7 + res <- Peer.timestamp_ms() -- 2 + status <<- "success" -- 9 + par status <- Peer.timeout(rtt, msg) -- 8 + if status!
!= "success": + res <<- err_value -- 10 + dead_peers <<- n -- 11 + + MyOp.identity(res!19) -- 5 + TSOracle oracle_service_id + consensus <- TSOracle.ts_frequency(res, tolerance, threshold, err_value) -- 6 + <- consensus, dead_peers -- 12 +``` + +That script is probably a little more involved than what you've seen so far. So let's work through the script: In order to get our set of timestamps, we determine the Kademlia neighbors (1) and then proceed to request a timestamp from each of those peers (2) in parallel (3). In an ideal world, each peer responds with a timestamp and the stream variable `res` (4) fills up with the 20 values from the twenty neighbors, which we then fold (5) and push to our consensus service (6). Alas, life in distributed systems isn't quite that simple since there are no guarantees that a peer is actually available to connect or provide a service response. Since we may never actually connect to a peer (7), we can't expect an error response, meaning that we get a silent fail at (2) and no write to the stream `res`. Subsequently, this leads to the failure of the fold operation (5) since fewer than the expected twenty items are in the stream and the operation (5) ends up timing out waiting for a never-to-arrive timestamp. + +In order to deal with this issue, we introduce a sleep operation (8) with the builtin [Peer.timeout](https://github.com/fluencelabs/aqua-lib/blob/1193236fe733e75ed0954ed26e1234ab7a6e7c53/builtin.aqua#L135) and run that in parallel to the attempted connection for peer `n` (3), essentially setting up a race condition to write to the stream: if the peer (`on n`, 7) behaves, we write the timestamp to `res` (2) and make a note of that successful operation (9); else, we write a dummy value, i.e., `err_value`, into the stream (10) and make a note of the delinquent peer (11). Recall, we filter out the dummy `err_value` at the service level. + +Once we get our consensus result (6), we return it as well as the array of unavailable peers (12).
And that's all there is. + +In order to execute our workflow, we can use Aqua's `aqua run` CLI without having to manually compile the script: + +```bash +aqua run \ + -a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ + -i aqua/ts_oracle.aqua \ + -f 'ts_oracle_with_consensus(10, 0.66, 0, "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", "61a86f67-ffc2-4dea-8746-fd4f04d9c75b")' +``` + +If you are new to `aqua run`, the CLI functionality is provided by the [`aqua` package](https://www.npmjs.com/package/@fluencelabs/aqua): + +* `aqua run --help` for all your immediate needs +* the `-i` flag denotes the location of our reference aqua file +* the `-a` flag denotes the multi-address of our connection peer/relay +* the `-f` flag handles the meat of the call: + * specify the aqua function name + * provide the parameter values matching the function signature + +Upon execution of our `aqua run` client, we get the following result, which may be drastically different for you: + +```bash +Your peerId: 12D3KooWEAhnNDjnh7C9Jba4Yn3EPcK6FJYRMZmaEQuqDnkb9UQf +[ + { + "consensus": true, + "consensus_ts": 1637883531844, + "err_str": "", + "n": 15, + "support": 14 + }, + [ + "12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA", + "12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv", + "12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU", + "12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r" + ] +] +```
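To follow the scoring rule inside `ts_frequency` step by step, here is a deterministic plain-Rust sketch: the `WyRand`-based random pick is replaced with an explicit index and the Marine types are dropped, so it illustrates the tolerance/threshold logic rather than reproducing the deployed service:

```rust
// Deterministic sketch of the tolerance/threshold consensus rule:
// remove one candidate timestamp, count how many of the remaining
// timestamps fall within +/- tolerance of it, and compare the
// support ratio against the threshold.
fn consensus(mut timestamps: Vec<u64>, tolerance: u64, threshold: f64, pick: usize) -> (u64, bool, u32) {
    let candidate = timestamps.swap_remove(pick);
    let support = timestamps
        .iter()
        .filter(|&&ts| ts >= candidate.saturating_sub(tolerance) && ts <= candidate + tolerance)
        .count() as u32;
    let reached = (support as f64 / timestamps.len() as f64) >= threshold;
    (candidate, reached, support)
}

fn main() {
    let ts = vec![1637182263u64, 1637182264, 1637182265, 163718226, 1637182266];
    // tolerance 0: only exact matches count, so no support at all.
    assert_eq!(consensus(ts.clone(), 0, 0.66, 1), (1637182264, false, 0));
    // tolerance 5: 3 of the 4 remaining timestamps support the candidate.
    assert_eq!(consensus(ts, 5, 0.66, 1), (1637182264, true, 3));
}
```

With the pick fixed at index 1, the sketch reproduces the REPL session shown earlier: zero support at tolerance 0, and a 3-out-of-4 supermajority at tolerance 5.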
As discussed above, not all nodes may be available at any given time and at the _time of this writing_, the following four nodes were indeed not providing a timestamp response: + +```bash +[ + "12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA", + "12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv", + "12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU", + "12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r" + ] +``` + +That leaves us with a smaller timestamp pool to run through our consensus algorithm than anticipated. Please note that it is up to the consensus algorithm design(er) to set the minimum acceptable number of inputs deemed necessary to produce a sensible and acceptable result. In our case, we run fast and loose as evident in our service implementation discussed above, and go with what we get as long as we get at least one timestamp. + +With a tolerance of ten (10) milli-seconds and a consensus threshold of 2/3 (0.66), we indeed attain a consensus for the _1637883531844_ value with support from 14 out of 15 timestamps: + +```bash + { + "consensus": true, + "consensus_ts": 1637883531844, + "err_str": "", + "n": 15, + "support": 14 + }, +``` + +We can make adjustments to the _tolerance_ parameter: + +```bash +aqua run \ + -a /dns4/kras-04.fluence.dev/tcp/19001/wss/p2p/12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi \ + -i aqua/ts_oracle.aqua \ + -f 'ts_oracle_with_consensus(0, 0.66, 0, "12D3KooWFEwNWcHqi9rtsmDhsYcDbRUCDXH84RC4FW6UfsFWaoHi", "61a86f67-ffc2-4dea-8746-fd4f04d9c75b")' +``` + +Which does _not_ result in a consensus timestamp given the same threshold value: + +```bash +Your peerId: 12D3KooWP7vAR462JgoagUzGA8s9YccQZ7wsuGigFof7sajiGThr +[ + { + "consensus": false, + "consensus_ts": 1637884102677, + "err_str": "", + "n": 15, + "support": 0 + }, + [ + "12D3KooWAKNos2KogexTXhrkMZzFYpLHuWJ4PgoAhurSAv7o5CWA", + "12D3KooWHCJbJKGDfCgHSoCuK9q4STyRnVveqLoXAPBbXHTZx9Cv", + "12D3KooWMigkP4jkVyufq5JnDJL6nXvyjeaDNpRfEZqQhsG3sYCU", + 
"12D3KooWDcpWuyrMTDinqNgmXAuRdfd2mTdY9VoXZSAet2pDzh6r" + ] +] +``` + +We encourage you to experiment and tweak the parameters both for the consensus algorithm and the timeout settings. Obviously, longer routes make for more timestamp variance even if each timestamp called is "true." + +### Summary + +Fluence and Aqua make it easy to create and implement decentralized oracle and consensus algorithms using Fluence's off-chain peer-to-peer network and tool set. + +To further your understanding of creating decentralized off-chain (compute) oracles with Fluence and Aqua, experiment with both the consensus methodology and, of course, the oracle sources. Instead of timestamps, try your hand at crypto price/pairs and associated liquidity data, election exit polls or sports scores. Enjoy! diff --git a/tutorials_tutorials/recipes_setting_up.md b/tutorials_tutorials/recipes_setting_up.md index fb16503..7281f3c 100644 --- a/tutorials_tutorials/recipes_setting_up.md +++ b/tutorials_tutorials/recipes_setting_up.md @@ -4,7 +4,7 @@ In order to develop within the Fluence solution, [Node](https://nodejs.org/en/), ### NodeJs -Download the \[installer\]\([https://nodejs.org/en/download/](https://nodejs.org/en/download/)\) for your platform and follow the instructions. +Download the [installer](https://nodejs.org/en/download/) for your platform and follow the instructions. ### Rust @@ -14,7 +14,7 @@ Install Rust: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh ``` -Once Rust is installed, we need to expand the toolchain and include [nightly build](https://rust-lang.github.io/rustup/concepts/channels.html) and the [Wasm](https://doc.rust-lang.org/stable/nightly-rustc/rustc_target/spec/wasm32_wasi/index.html) compile target.
+Once Rust is installed, we need to expand the toolchain and include [nightly build](https://rust-lang.github.io/rustup/concepts/channels.html) and the [Wasm](https://doc.rust-lang.org/stable/nightly-rustc/rustc\_target/spec/wasm32\_wasi/index.html) compile target. ```bash rustup install nightly @@ -28,13 +28,13 @@ rustup self update rustup update ``` -There are a number of good Rust installation and IDE integration tutorials available. [DuckDuckGo](https://duckduckgo.com/) is your friend but if that's too much effort, have a look at [koderhq](https://www.koderhq.com/tutorial/rust/environment-setup/). Please note, however, that currently only VSCode is supported with Aqua syntax support. +There are a number of good Rust installation and IDE integration tutorials available. [DuckDuckGo](https://duckduckgo.com) is your friend but if that's too much effort, have a look at [koderhq](https://www.koderhq.com/tutorial/rust/environment-setup/). Please note, however, that currently only VSCode is supported with Aqua syntax support. ### Aqua Tools The Aqua compiler and standard library can be installed via npm: -```text +``` npm -g install @fluencelabs/aqua npm -g install @fluencelabs/aqua-lib ``` @@ -43,11 +43,11 @@ npm -g install @fluencelabs/aqua-lib If you are a VSCode user, note that an Aqua syntax-highlighting extension is available. In VSCode, click on the Extensions button, search for `aqua` and install the extension. -![](https://gblobscdn.gitbook.com/assets%2F-MbmEhQUL-bljop_DzuP%2F-MdMDybZMQJ5kUjN4zhr%2F-MdME2UUjaxKs6pzcDLH%2FScreen%20Shot%202021-06-29%20at%201.06.39%20PM.png?alt=media&token=812fcb5c-cf28-4240-b072-a51093d0aaa4) +![](https://gblobscdn.gitbook.com/assets%2F-MbmEhQUL-bljop\_DzuP%2F-MdMDybZMQJ5kUjN4zhr%2F-MdME2UUjaxKs6pzcDLH%2FScreen%20Shot%202021-06-29%20at%201.06.39%20PM.png?alt=media\&token=812fcb5c-cf28-4240-b072-a51093d0aaa4) Moreover, the aqua-playground provides a ready-to-go TypeScript template and Aqua example.
In a directory of your choice: -```text +``` git clone git@github.com:fluencelabs/aqua-playground.git ``` @@ -68,13 +68,17 @@ In addition, Fluence provides the `fldist` tool for the lifecycle management of npm -g install @fluencelabs/fldist ``` -### Fluence SDK +{% hint style="info" %} +Please note that we are in the process of deprecating `fldist` in favor of [Aqua CLI](https://github.com/fluencelabs/aqua/tree/main/cli). At the time of this writing, `fldist` is fully functional **except** for `run_air`, which needs to be replaced with `aqua run`. -For frontend development, the Fluence [JS-SDK](https://github.com/fluencelabs/fluence-js) is currently the favored, and only, tool. +We are currently in the process of updating the documentation to reflect these changes. If you run into an errant `fldist` reference, please let us know! +{% endhint %} + +### Fluence JS + +For frontend development, the Fluence [JS](https://github.com/fluencelabs/fluence-js) client is currently the favored, and only, tool. ```bash npm install @fluencelabs/fluence ``` - - diff --git a/tutorials_tutorials/tutorial_run_local_node.md b/tutorials_tutorials/tutorial_run_local_node.md index 7948d5e..2d77f5a 100644 --- a/tutorials_tutorials/tutorial_run_local_node.md +++ b/tutorials_tutorials/tutorial_run_local_node.md @@ -10,16 +10,16 @@ docker run -d --name fluence -e RUST_LOG="info" -p 7777:7777 -p 9999:9999 -p 180 where the `-d` flag runs the container in detached mode, the `-e` flag sets the environment variables, and the `-p` flag exposes the ports: 7777 is the tcp port, 9999 the websocket port, and, optionally, 18080 the Prometheus port.
-Once the container is up and running, we can tail the log \(output\) with +Once the container is up and running, we can tail the log (output) with -```text +``` docker logs -f fluence ``` Which gives us the logged output: ```bash -[2021-03-11T01:31:17.574274Z INFO particle_node] +[2021-12-02T19:42:20.734559Z INFO particle_node] +-------------------------------------------------+ | Hello from the Fluence Team. If you encounter | | any troubles with node operation, please update | @@ -30,19 +30,23 @@ Which gives us the logged output: | github.com/fluencelabs/fluence/discussions | +-------------------------------------------------+ -[2021-03-11T01:31:17.575062Z INFO server_config::fluence_config] Loading config from "/.fluence/Config.toml" -[2021-03-11T01:31:17.575461Z INFO server_config::keys] generating a new key pair -[2021-03-11T01:31:17.575768Z WARN server_config::defaults] New management key generated. private in base64 = VE0jt68kqa2B/SMOd3VuuPd14O2WTmj6Dl//r6VM+Wc=; peer_id = 12D3KooWNGuGgQVUA6aJMGMGqkBCFmLZqMwmp6pzmv1WLYdi7gxN -[2021-03-11T01:31:17.575797Z INFO particle_node] AIR interpreter: "./aquamarine_0.7.3.wasm" -[2021-03-11T01:31:17.575864Z INFO particle_node::config::certificates] storing new certificate for the key pair -[2021-03-11T01:31:17.577028Z INFO particle_node] public key = BRqbUhVD2XQ6YcWqXW1D21n7gPg15STWTG8C7pMLfqg2 -[2021-03-11T01:31:17.577848Z INFO particle_node::node] server peer id = 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx +[2021-12-02T19:42:20.734599Z INFO server_config::resolved_config] Loading config from "/.fluence/v1/Config.toml" +[2021-12-02T19:42:20.734842Z INFO server_config::keys] Generating a new key pair to "/.fluence/v1/builtins_secret_key.ed25519" +[2021-12-02T19:42:20.735133Z INFO server_config::keys] Generating a new key pair to "/.fluence/v1/secret_key.ed25519" +[2021-12-02T19:42:20.735409Z WARN server_config::defaults] New management key generated.
ed25519 private key in base64 = M2sMsy5qguJIEttNct1+OBmbMhVELRUzBX9836A+yNE= +[2021-12-02T19:42:20.736364Z INFO particle_node] AIR interpreter: "/.fluence/v1/aquamarine_0.16.0-restriction-operator.9.wasm" +[2021-12-02T19:42:20.736403Z INFO particle_node::config::certificates] storing new certificate for the key pair +[2021-12-02T19:42:20.736589Z INFO particle_node] node public key = 3iMsSHKmtioSHoTudBAn5dTtUpKGnZeVGvRpEV1NvVLH +[2021-12-02T19:42:20.736616Z INFO particle_node] node server peer id = 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +[2021-12-02T19:42:20.739248Z INFO particle_node::node] Fluence listening on ["/ip4/0.0.0.0/tcp/7777", "/ip4/0.0.0.0/tcp/9999/ws"] ``` -For future interaction with the node, we need to retain the server peer id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx. And if you feel the need to snoop around the container: +For future interaction with the node, we need to retain the server peer id `12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D`, which may be different for you. + +And if you feel the need to snoop around the container: ```bash docker exec -it fluence bash @@ -50,17 +54,16 @@ docker exec -it fluence bash will get you in. -Now that we have a local node, we can use the `fldist` tool to interact with it. From the Quick Start, you may recall that we need the node-id and node-addr: +Now that we have a local node, we can use the `fldist` tool and the `aqua` CLI to interact with it.
From the Quick Start, you may recall that we need the node-id and node-addr: -* node-id: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx -* node-addr: /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx +* node-id: `12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D` +* node-addr: `/ip4/127.0.0.1/tcp/9999/ws/p2p/112D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D` Let's inspect our node and check for any available modules and interfaces: -```text +``` fldist get_modules \ - --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ - --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ + --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D \ --pretty ``` @@ -72,16 +75,15 @@ Let's us check on available modules and gives us: And checking on available interfaces: -```text +``` fldist get_interfaces \ - --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ - --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx + --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D \ --expand ``` Results in: -```text +``` 60000 [ [] ] ``` @@ -91,28 +93,28 @@ Since we just initiated the node, we expect no modules and no interfaces and the ```bash mkdir fluence-greeter cd fluence-greeeter -# download the greeting.wasm file into this directory -# https://github.com/fluencelabs/fce/blob/master/examples/greeting/artifacts/greeting.wasm -- Download button to the right +# download the greeting.wasm file into this directory: +# https://github.com/fluencelabs/marine/blob/master/examples/greeting/artifacts/greeting.wasm -- Download button to the right echo '{ "name":"greeting"}' > greeting_cfg.json ``` We just grabbed the greeting Wasm file from the Fluence repo and created a service configuration file, `greeting_cfg.json`, which allow us to create a new 
GreetingService: ```bash -fldist --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ - --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ - new_service \ - --ms examples/greeting/artifacts/greeting.wasm:greeting_cfg.json \ - -n GreetingService +fldist \ + --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D new_service \ + --ms greeting.wasm:greeting_cfg.json \ + -n greeting-service \ + --verbose ``` Which gives us the service id: -```text -client seed: 7VtMT7dbdfuU2ewWHEo42Ysg5B9KTB5gAgM8oDEs4kJk -client peerId: 12D3KooWRSmoTL64JVXna34myzAuKWaGkjE6EBAb9gaR4hyyyQDM -node peerId: 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx -service id: 64551400-6296-4701-8e82-daf0b4e02751 +``` +client seed: GofK8dD9kHFv27HGrQstMoQTWGiKeBteoXT1gGdXLzqc +client peerId: 12D3KooWAyyRcszmHTotttZNyTNhpUMxcrC7JesEurUZ4zKfvtyJ +relay peerId: 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D +service id: 2bb578a1-f67e-4975-b952-b2979c63f0f0 service created successfully ``` @@ -120,14 +122,13 @@ We now have a greeting service running on our node. As always, take note of the ```bash fldist get_modules \ - --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ - --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ + --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D \ --pretty ``` Which now lists our uploaded module: -```text +``` [ { "config": { "logger_enabled":true, @@ -145,9 +146,9 @@ Which now lists our uploaded module: ] ``` -Yep, checking once again for modules, the output confirms that the greeting service is available. Writing a small AIR script allows us to use the service: +Yep, checking once again for modules, the output confirms that the greeting service is available. 
Writing a small Aqua script allows us to use the service: -```text +```python service GreetingService("service-id"): greeting: string -> string @@ -158,33 +159,20 @@ func greeting(name:string, node:string, greeting_service_id: string) -> string: <- res ``` -Compile the script with [`aqua`](https://doc.fluence.dev/aqua-book/getting-started/quick-start) or `aqua-js` and use the resulting file with the`fldist` tool: +We run the script with [`aqua`](https://doc.fluence.dev/aqua-book/getting-started/quick-start) -```text -fldist --node-id 12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ - --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx \ - run_air \ - -p greeting.greeting.air \ - -d '{"service": "64551400-6296-4701-8e82-daf0b4e02751", "name":"Fluence", "node": "12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx"}' +``` +aqua run \ + -a /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D \ + -i greeting.aqua \ + -f 'greeting("Fluence", "12D3KooWHLxVhUQyAuZe6AHMB29P7wkvTNMn7eDMcsqimJYLKREf", "04ef4459-474a-40b5-ba8d-1e9a697206ab")' ``` ```bash - -=================== +Your peerId: 12D3KooWAMTVBjHfEnSF54MT4wkXB1CvfDK3XqoGXt7birVsLFj6 [ "Hi, Fluence" ] -[ - [ - { - peer_pk: '12D3KooWLFCmDq4vDRfaxW2GA6kYnorxAiie78XzQrVDVoWEZnPx', - service_id: '64551400-6296-4701-8e82-daf0b4e02751', - function_name: 'greeting', - json_path: '' - } - ] -] -=================== ``` Yep, our node and the tools are working as expected. 
Going back to the logs, we can further verify the script execution: @@ -195,7 +183,7 @@ docker logs -f fluence And check from the bottom up: -```text +``` [2021-03-12T02:42:51.041267Z INFO aquamarine::particle_executor] Executing particle 14db3aff-b1a9-439e-8890-d0cdc9a0bacd [2021-03-12T02:42:51.041927Z INFO particle_closures::host_closures] Executed host call "64551400-6296-4701-8e82-daf0b4e02751" "greeting" (96us 700ns) @@ -205,4 +193,3 @@ And check from the bottom up: Looks like our node container and logging is up and running and ready for your development use. As the Fluence team is rapidly developing, make sure you stay up to date. Check the repo or [Docker hub](https://hub.docker.com/r/fluencelabs/fluence) and update with `docker pull fluencelabs/fluence:latest`. Happy composing! -
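
A small convenience on top of the tutorial step above: since readers are told to retain the server peer id from the container logs, it can also be scripted out of them. A minimal sketch, assuming the `node server peer id = …` log format shown in the updated output; in a live setup you would pipe `docker logs fluence 2>&1` into the `sed` instead of the sample line:

```shell
# Sample log line, copied from the tutorial output above; live equivalent:
#   docker logs fluence 2>&1 | sed -n 's/.*node server peer id = \(12D3KooW[A-Za-z0-9]*\).*/\1/p'
sample_log='[2021-12-02T19:42:20.736616Z INFO particle_node] node server peer id = 12D3KooWCXj3BQuV5d4vhgyLFmv7rRYiy9MupFiyEWnqcUAGpS4D'

# ed25519 peer ids are base58 strings starting with the 12D3KooW multihash prefix.
peer_id=$(printf '%s\n' "$sample_log" | sed -n 's/.*node server peer id = \(12D3KooW[A-Za-z0-9]*\).*/\1/p')
echo "$peer_id"
```

The captured `$peer_id` can then be spliced into the `--node-addr` multiaddress used by the `fldist` and `aqua` invocations above.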