mirror of
https://github.com/fluencelabs/gitbook-docs
synced 2025-06-13 23:11:44 +00:00
# Overview
In the Quick Start section we incrementally created a distributed, database-backed request processing application using existing services with Aquamarine. Of course, we left a lot of detail uncovered, including where the services we used came from in the first place. In this section, we tackle the development and deployment of service components.
Before we proceed, please make sure your Fluence environment is [set up](../recipes_recipes/recipes_setting_up.md) and ready to go. Moreover, we are going to run our own Fluence node to test our services in a network environment. Please refer to the [Running a Local Fluence Node](../tutorials_tutorials/tutorial_run_local_node.md) tutorial if you need support.
# Building The Reward Block Application
Our project aims to
* retrieve the latest block height from the Ethereum mainnet,
* use that result to retrieve the associated reward block data and
In the previous sections we obtained block reward data by discovering the latest Ethereum block created. Of course, Ethereum produces a new block roughly every 13 seconds, so it would be nice to automate the data acquisition process. One way would be to cron or otherwise daemonize our frontend application, but where's the fun in that? We'd rather hand the task to the p2p network.
As we have seen in our AIR workflows, particles travel the path, trigger execution, and update their data. So far, we have only seen services consume previous outputs as \(complete\) inputs, which means that the service at workflow sequence s needs to be fairly tightly coupled to the service at sequence s-1, which is less than ideal. Luckily, Fluence provides a solution to access certain types of results as _json paths_.
## Peer-Based Script Storage And Execution
```bash
# script file to string variable
AIR=`cat air-scripts/ethqlite_block_committer.clj`
```

{% hint style="warning" %}
Unfortunately, our daemonized service won't work just yet as the current implementation cannot take the \(client\) seed we need in order to get our SQLite write working. It's on the to-do list but if you need it, please contact us and we'll see about juggling priorities.
{% endhint %}
For completeness' sake, let's remove the stored service with the following AIR script:
```bash
```
## Advanced Service Output Access
As Aquamarine advances a particle's journey through the network, output from a service method at workflow sequence s-1 tends to be the input for a service method at sequence s. For example, the _hex\_to\_int_ method, as used earlier, takes the output from the _get\_latest\_block_ method. With single-parameter outputs, this is a pretty straightforward and inherently decoupled dependency relation. However, when result parameters become more complex, such as structs, we still would like to keep services as decoupled as possible.
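To make the single-parameter handoff concrete, here is a plain-Rust sketch of what a _hex\_to\_int_-style conversion does. The function body is an illustrative assumption, based only on the fact that Ethereum's JSON-RPC reports block heights as 0x-prefixed hex strings:

```rust
// Illustrative sketch: the output of get_latest_block (a 0x-prefixed hex
// string) becomes the integer input for the next service in the workflow.
fn hex_to_int(hex: &str) -> Option<u64> {
    u64::from_str_radix(hex.trim_start_matches("0x"), 16).ok()
}

fn main() {
    // 0xb5a96f == 11905391
    println!("{:?}", hex_to_int("0xb5a96f"));
}
```

With a single `String` in and a single integer out, neither service needs to know anything else about the other.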
Fluence provides this capability by facilitating the conversion of \(Rust\) struct returns into [json values](https://github.com/fluencelabs/aquamarine/blob/master/interpreter-lib/src/execution/boxed_value/jvaluable.rs#L30). This allows json-type key-value access to a desired subset of return values. If you go back to the _ethqlite.clj_ script, you may notice some fancy `$` and `!` operators tucked away in the deep recesses of parenthesis stacking. Below is the pertinent snippet:
and the input expectations of _get\_miner\_rewards_, also an ethqlite service method, with the following [function](https://github.com/fluencelabs/examples/blob/c508d096e712b7b22aa94641cd6bb7c2fdb67200/multi-service/ethqlite/src/crud.rs#L177) signature: `pub fn get_miner_rewards(miner_address: String) -> MinerRewards`.
Basically, _get\_miner\_rewards_ wants an Ethereum address as a `String` and in the context of our AIR script we want to get the value from the _get\_reward\_block_ result. Rather than tightly coupling _get\_miner\_rewards_ to _get\_reward\_block_ in terms of, say, the _RewardBlock_ input parameter, we take advantage of the Fluence capability to turn structs into json strings and then supply the relevant key to extract the desired value. Specifically, we use the `$` operator to access the json representation at the desired index and the `!` operator to flatten the value, if desired.
For example,
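the flatten-and-extract pattern can be illustrated in plain Rust. This is a conceptual sketch only: the struct fields and helper below are assumptions for illustration, and Aquamarine performs the real json conversion on the interpreter side:

```rust
use std::collections::HashMap;

// Conceptual stand-in for a service result such as RewardBlock.
struct RewardBlock {
    block_number: u64,
    miner: String,
}

// Flatten the struct into key-value form, loosely analogous to the json
// representation Aquamarine exposes for `$`-style access.
fn to_json_map(block: &RewardBlock) -> HashMap<String, String> {
    let mut map = HashMap::new();
    map.insert("block_number".to_string(), block.block_number.to_string());
    map.insert("miner".to_string(), block.miner.clone());
    map
}

fn main() {
    let block = RewardBlock {
        block_number: 11_905_391,
        miner: "0xabc".to_string(), // illustrative address
    };
    // `$`-style access picks one field out of the flattened result; the
    // extracted String can then feed get_miner_rewards without coupling
    // it to the RewardBlock type.
    let miner_address = &to_json_map(&block)["miner"];
    println!("{}", miner_address);
}
```

The consuming service only ever sees the extracted `String`, which is exactly the decoupling we were after.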

```toml
modules_dir = "artifacts/"
name = "block_getter"
```
If you haven't done so already, run `./scripts/build.sh` to compile the projects. Once we have _wasm_ files and the service configuration, we can check out our accomplishments with the REPL:
```bash
fce-repl Block-Getter-Config.toml
===================
```
Right on! Our two services coordinate into the intended application, returning the reward data for the latest block. Before we move on, locate the corresponding services on the Fluence testnet via the [dashboard](https://dash.fluence.dev/), update your command line with the appropriate service and node ids, and run the same AIR script. Congratulations, you just ran an app coordinated by distributed services!

```bash
"node_2": "12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
"sqlite_service":"470fcaba-6834-4ccf-ac0c-4f6494e9e77b", \
"sqlite_node":"12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17", \
"api_key": "MC5H2NK6ZIPMR32U7D4W35AWNNVCQX1ENH"}'
```
and run the AIR script with the revised `fldist` command:
```bash
===================
```
And that's a wrap!
In summary, we have developed and deployed multiple Fluence services to store Ethereum reward block data in an SQLite-as-a-service database and used Aquamarine to coordinate those services into applications. See Figure 2 below.
To get SQLite as a service, we build our service from two modules: the [ethqlite repo](https://github.com/fluencelabs/examples/tree/main/multi-service/ethqlite) and the [Fluence sqlite](https://github.com/fluencelabs/sqlite) Wasm module, which we can build or pick up as wasm files from the [releases](https://github.com/fluencelabs/sqlite/releases). This largely, but not entirely, mirrors what we did with the cUrl service: build the service by providing an adapter to the binary. Unlike the cUrl binary, we are bringing our own sqlite binary, i.e., _sqlite3.wasm_, with us.
This leaves us to code our _ethqlite_ module with respect to desired CRUD interfaces and security. As [previously](../../quick_start/quick_start_add_persistence/quick_start_persistence_setup.md) discussed, we want writes to the sqlite services to be privileged, which implies that we need to own the service and have the client seed to manage authentication and ambient authorization. Specifically, we can implement a rudimentary authorization system where authentication implies authorization \(to write\). The `is_owner` function in the _ethqlite_ repo does exactly that: if the caller can prove ownership by providing a valid client seed, then we have a true condition equating write-privileged ownership with the caller identity:
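A minimal sketch of that authentication-implies-authorization pattern in plain Rust (hypothetical names; the actual check in the ethqlite repo's _auth.rs_ derives the caller identity from the client seed and call parameters):

```rust
// Sketch only: authentication implies authorization. If the caller's
// identity matches the service owner's, the write is permitted.
fn is_owner(caller_peer_id: &str, owner_peer_id: &str) -> bool {
    caller_peer_id == owner_peer_id
}

fn guarded_write(caller_peer_id: &str, owner_peer_id: &str) -> Result<(), String> {
    if is_owner(caller_peer_id, owner_peer_id) {
        Ok(()) // write proceeds
    } else {
        Err("not authorized to write".to_string())
    }
}

fn main() {
    println!("{:?}", guarded_write("12D3KooWOwner", "12D3KooWOwner"));
    println!("{:?}", guarded_write("12D3KooWOther", "12D3KooWOwner"));
}
```

Reads remain unprivileged; only the write path runs through the ownership check.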
```rust
// auth.rs
```

```bash
mv sqlite3.wasm artifacts/
```
Run `./build.sh` and check the artifacts for the expected wasm files.
Like all Fluence services, Ethqlite needs a [service configuration](https://github.com/fluencelabs/examples/blob/main/multi-service/ethqlite/Config.toml) file, which looks a little more involved than what we have seen so far.

```toml
name = "ethqlite"
mapped_dirs = { "tmp" = "/tmp" }
```
Let's break it down:
* the first \[\[module\]\] section
* specifies the _sqlite3.wasm_ module we pulled from the repo,
If you run the init script again, you will receive an error _"Service already initiated"_, so we can be reasonably confident our code is working and it looks like our Ethqlite service is up and running on the local node.
Due to the security concerns for our database, it is not advisable, or even possible, to use an already deployed Sqlite service from the Fluence Dashboard. Instead, we deploy our own instance with our own \(secret\) client seed. To determine which network nodes are available, run:
# From Module To Service
In Fluence, a service is based on one or more [Wasm](https://webassembly.org/) modules suitable to be deployed to the Fluence Compute Engine \(FCE\). In order to develop our modules, we use Rust and the [Fluence Rust SDK](https://github.com/fluencelabs/rust-sdk).
## Preliminaries
The general process to create a Fluence \(module\) project is to:

```bash
cargo +nightly new your_module_name
```
and add the [binary target](https://doc.rust-lang.org/cargo/reference/cargo-targets.html#binaries) and [Fluence Rust SDK](https://crates.io/crates/fce) to the Cargo.toml:
```text
<snip>
}
```
Let's go line by line:
1. Import the [fce](https://github.com/fluencelabs/fce/tree/5effdcba7215cd378f138ab77f27016024720c0e) module from the [Fluence crate](https://crates.io/crates/fluence), which allows us to compile our code to the [wasm32-wasi](https://docs.rs/crate/wasi/0.6.0) target
2. Import the [module\_manifest](https://github.com/fluencelabs/rust-sdk/blob/master/crates/main/src/module_manifest.rs), which allows us to embed the SDK version in our module
3. Initiate the module\_manifest macro
4. Initiate the main function which generally stays empty or is used to instantiate a logger
5. Markup the public function we want to expose with the FCE macro which, among other things, checks that only Wasm IT types are used
Once we compile our code, we generate the wasm32-wasi file, which can be found in the `target/wasm32-wasi` path of your directory. The `greeting.wasm` file is what we need for testing and eventual upload to the peer-to-peer network.
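Stripped of the `#[fce]` attribute markup, the exposed function is ordinary Rust. A sketch of the core logic follows; the exact greeting string used in the examples repo may differ:

```rust
// Core logic of the greeting module without the FCE markup; in the real
// module, the function carries the #[fce] attribute and only Wasm IT
// types may appear in its signature.
pub fn greeting(name: String) -> String {
    format!("Hi, {}", name)
}

fn main() {
    println!("{}", greeting("Fluence".to_string()));
}
```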
The source code for the module can be found in the [examples repo](https://github.com/fluencelabs/examples/tree/main/greeting).
## Taking The Greeting Module For A Spin
Now that we have a Wasm module and service configuration, we can explore and test our achievements locally with the Fluence REPL tool `fce-repl`. Load the service for inspection and testing:

```bash
relay peerId: 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17
```
Which confirms our recent upload!
Now that we have a service on our local node, we need to construct our AIR script to build our frontend.
```text
(xor
)
```
As we've seen in the Quick Start section, we call the service _"greeting"_ with service id _service_ and the method parameter _name_. As usual, we use the `fldist` tool to execute the AIR script:
```bash
fldist --node-id 12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 --node-addr /ip4/127.0.0.1/tcp/9999/ws/p2p/12D3KooWQQYXh78acqBNuL5p1J5tmH4XCKLCHM21tMb8pcxqGL17 run_air -p greeting.clj -d '{"service":"9712f9ca-7dfd-4ff5-817d-aef9e1e92e03", "name": "Fluence"}'
```