The world of L1 and L2 seen with Validium nodes

tksarah
The Astar Bulletin | TAB
10 min read · Apr 27, 2024


This article is based on the Astar zkEVM document "Setup Local Validium Node (Local Validium Node Setup Guide)" (also available in Japanese translation).

We will set up a Validium node on a local machine, run it, and use it to examine the relationship between L1 and L2 while performing actual token transfers.

(Note) It is assumed that the reader has at least basic knowledge of Linux and the Cloud Native software stack. Links are provided in place of detailed introductions.

The image version of cdk-validium-node for this quick start is 0.6.4-cdk.2, and zkevm-prover is v6.0.0.

Validium Node

Astar zkEVM is an Ethereum Layer 2 deployed as a Validium, built with the Polygon CDK and connected to the AggLayer.

Validium is a Layer 2 scaling concept. Like ZK-rollups, a Validium uses validity proofs to guarantee the integrity of transactions, but the transaction data (DA = data availability) is not stored on the Ethereum mainnet. This helps solve the Ethereum network's scalability issues and reduces gas fees while maintaining the integrity of transaction data.

Furthermore, Astar zkEVM, as the first network to connect to the Aggregation Layer (AggLayer), leverages zero-knowledge proofs to move transaction execution and data availability off-chain while maintaining security protection equivalent to Ethereum's.

The original document runs the following main components of Astar zkEVM's backend. This article follows the same approach and outlines the procedure for running all of them on a single machine.

  • zkEVM databases: data node, event, explorer L1 and L2, pool, state, and bridge service
  • zkEVM node components: aggregator, approve service, sequencer and sequence sender, synchronizer
  • L1 network (mock)
  • Prover
  • Explorers L1, L2
  • JSON RPC explorer
  • L2 gas pricer
  • DAC: data availability service, DAC setup committee
  • zkEVM bridge service and UI

Prerequisites

Hardware Requirements

・A Linux-based OS (e.g., Ubuntu Server 22.04 LTS).
・At least 16GB RAM with a 4-core CPU.
・An AMD64 architecture system.

Based on these requirements, the local machine built this time has the following specifications:

・Ubuntu Server 22.04.3 LTS
・Intel N100 (Alder Lake-N, 4 cores)
・16GB SODIMM DDR4 3200
・M.2 NVMe SSD 500GB

Software Requirements

In this procedure, you will need go, docker, docker compose (v2), and make.

go is a programming language developed by Google; install it by following the official instructions. Reference: https://go.dev/doc/install
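
As a concrete example (a minimal sketch; the go1.22.2 archive below is simply a recent release at the time of writing, so substitute the latest version from the download page), a typical tarball install on Ubuntu looks like this:

$ wget https://go.dev/dl/go1.22.2.linux-amd64.tar.gz
$ sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.22.2.linux-amd64.tar.gz
$ echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile && source ~/.profile
$ go version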

docker and docker compose are tools developed by Docker Inc. (formerly dotCloud) for building, distributing, and running containerized environments. Install them and make sure the Docker engine is running. Reference: https://docs.docker.com/engine/install/ubuntu/
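
If you just want to get a test machine like this one running quickly, Docker's convenience script is an alternative to the apt-repository steps in the linked guide (the hello-world run at the end is only a sanity check):

$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo docker run hello-world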

make is a tool for automating software builds. It is usually best to install it together with the build-essential toolkit:

$ sudo apt-get -y install build-essential

Building a Validium Node

Once the preparations are complete, the build itself is nearly instantaneous, because pre-built container images are provided for the quick start.

Cloning the Repository

$ git clone https://github.com/Snapchain/zkValidium-quickstart.git
$ cd zkValidium-quickstart

Copy the .env.example file to create .env.

$ cp .env.example .env

If you plan to verify token transfers later from a remote machine via a browser, change the "localhost" part of the environment variable COMMON_HOST=localhost in .env to the IP address of your local node.
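
For example, assuming the node's LAN address is 192.168.1.50 (a placeholder; use your own machine's address), the change can be made with sed:

$ sed -i 's/^COMMON_HOST=localhost/COMMON_HOST=192.168.1.50/' .env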

Pulling the necessary Docker images.

$ sudo docker compose pull

Depending on your internet connection speed, this may take a few minutes to complete.

Start Validium locally

Launch the local CDK validium.

$ sudo make run
make run-db
make[1]: Entering directory '/home/tk/test_validium/zkValidium-quickstart'
docker compose -f docker-compose.yml up -d zkevm-state-db
[+] Building 0.0s (0/0) docker:default
[+] Running 12/12
✔ Network zkevm Created 0.1s
✔ Volume "zkvalidium-quickstart_zkevm_bridge_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_zkevm_mock_l1_geth_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_zkevm_pool_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_zkevm_state_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_explorer_l1_backend_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_explorer_l2_stats_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_explorer_l1_stats_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_zkevm_dac_node_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_explorer_l2_backend_db_data" Created 0.0s
✔ Volume "zkvalidium-quickstart_zkevm_event_db_data" Created 0.0s
✔ Container zkevm-state-db Started 0.0s
docker compose -f docker-compose.yml up -d zkevm-pool-db
[+] Building 0.0s (0/0) docker:default
[+] Running 1/1
✔ Container zkevm-pool-db Started 0.0s
docker compose -f docker-compose.yml up -d zkevm-event-db
[+] Building 0.0s (0/0) docker:default
[+] Running 1/1
✔ Container zkevm-event-db Started 0.1s
make[1]: Leaving directory '/home/tk/test_validium/zkValidium-quickstart'
docker compose -f docker-compose.yml up -d zkevm-mock-l1-network
[+] Running 1/1
✔ zkevm-mock-l1-network Pulled 1.8s
[+] Building 0.0s (0/0) docker:default
[+] Running 1/1
✔ Container zkevm-mock-l1-network Started 0.0s
sleep 2
docker compose -f docker-compose.yml up -d zkevm-prover
[+] Building 0.0s (0/0) docker:default
[+] Running 1/1
✔ Container zkevm-prover Started 0.0s
docker compose -f docker-compose.yml up -d zkevm-approve
[+] Building 0.0s (0/0) docker:default
[+] Running 1/1
✔ Container zkevm-approve Started
:
:
docker compose -f docker-compose.yml up -d zkevm-metrics
[+] Building 0.0s (0/0) docker:default
[+] Running 1/1
✔ Container zkevm-metrics Started 0.0s
make[1]: Leaving directory '/home/tk/test_validium/zkValidium-quickstart'
$

After the command completes without errors, check the status of the containers. You should find 35 containers up and running.

$ sudo docker ps --format "table {{.Names}}\t{{.Command}}\t{{.Status}}\t{{.Ports}}"
NAMES COMMAND STATUS PORTS
zkevm-metrics "/bin/prometheus --c…" Up 2 minutes 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp
explorer-sig-provider "./sig-provider-serv…" Up 2 minutes 0.0.0.0:8151->8050/tcp, :::8151->8050/tcp
visualizer-proxy "/docker-entrypoint.…" Up 2 minutes 80/tcp, 0.0.0.0:8083->8081/tcp, :::8083->8081/tcp
explorer-visualizer "./visualizer-server" Up 2 minutes 0.0.0.0:8152->8050/tcp, :::8152->8050/tcp
explorer-smart-contract-verifier "./smart-contract-ve…" Up 2 minutes 0.0.0.0:8150->8050/tcp, :::8150->8050/tcp
explorer-proxy-l2 "/docker-entrypoint.…" Up 2 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:8084->8080/tcp, :::8084->8080/tcp
explorer-stats-l2 "./stats-server" Up 2 minutes 0.0.0.0:8154->8050/tcp, :::8154->8050/tcp
explorer-stats-db-l2 "docker-entrypoint.s…" Up 2 minutes 0.0.0.0:7434->5432/tcp, :::7434->5432/tcp
explorer-frontend-l2 "./entrypoint.sh nod…" Up 2 minutes 0.0.0.0:3001->3000/tcp, :::3001->3000/tcp
explorer-backend-l2 "sh -c 'bin/blocksco…" Up 2 minutes 0.0.0.0:4001->4000/tcp, :::4001->4000/tcp
zkevm-explorer-json-rpc "/bin/sh -c '/app/zk…" Up 2 minutes 0.0.0.0:8124->8124/tcp, :::8124->8124/tcp, 8123/tcp, 0.0.0.0:8134->8134/tcp, :::8134->8134/tcp
explorer-backend-l2-db "docker-entrypoint.s…" Up 2 minutes 0.0.0.0:5437->5432/tcp, :::5437->5432/tcp
explorer-proxy-l1 "/docker-entrypoint.…" Up 2 minutes 0.0.0.0:81->80/tcp, :::81->80/tcp, 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp
explorer-stats-l1 "./stats-server" Up 2 minutes 0.0.0.0:8153->8050/tcp, :::8153->8050/tcp
explorer-stats-db-l1 "docker-entrypoint.s…" Up 2 minutes 0.0.0.0:7433->5432/tcp, :::7433->5432/tcp
explorer-frontend-l1 "./entrypoint.sh nod…" Up 2 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp
explorer-backend-l1 "sh -c 'bin/blocksco…" Up 2 minutes 0.0.0.0:4000->4000/tcp, :::4000->4000/tcp
explorer-backend-l1-db "docker-entrypoint.s…" Up 2 minutes 0.0.0.0:5436->5432/tcp, :::5436->5432/tcp
zkevm-bridge-ui "/bin/sh /app/script…" Up 2 minutes 0.0.0.0:8088->80/tcp, :::8088->80/tcp
zkevm-bridge-service "/bin/sh -c '/app/zk…" Up 2 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 0.0.0.0:9080->9090/tcp, :::9080->9090/tcp
zkevm-bridge-db "docker-entrypoint.s…" Up 2 minutes 5438/tcp, 0.0.0.0:5438->5432/tcp, :::5438->5432/tcp
zkevm-data-availability "/bin/sh -c '/app/cd…" Up 2 minutes 0.0.0.0:8444->8444/tcp, :::8444->8444/tcp
zkevm-data-node-db "docker-entrypoint.s…" Up 2 minutes (healthy) 0.0.0.0:5444->5432/tcp, :::5444->5432/tcp
zkevm-json-rpc "/bin/sh -c '/app/zk…" Up 2 minutes 0.0.0.0:8123->8123/tcp, :::8123->8123/tcp, 0.0.0.0:8133->8133/tcp, :::8133->8133/tcp, 0.0.0.0:9091->9091/tcp, :::9091->9091/tcp
zkevm-aggregator "/bin/sh -c '/app/zk…" Up 2 minutes 8123/tcp, 0.0.0.0:50081->50081/tcp, :::50081->50081/tcp, 0.0.0.0:9093->9091/tcp, :::9093->9091/tcp
zkevm-l2gaspricer "/bin/sh -c '/app/zk…" Up 2 minutes 8123/tcp
zkevm-sequence-sender "/bin/sh -c '/app/zk…" Up 2 minutes 8123/tcp
zkevm-sequencer "/bin/sh -c '/app/zk…" Up 2 minutes 0.0.0.0:6060->6060/tcp, :::6060->6060/tcp, 0.0.0.0:6900->6900/tcp, :::6900->6900/tcp, 8123/tcp, 0.0.0.0:9092->9091/tcp, :::9092->9091/tcp
zkevm-eth-tx-manager "/bin/sh -c '/app/zk…" Up 2 minutes 8123/tcp, 0.0.0.0:9094->9091/tcp, :::9094->9091/tcp
zkevm-sync "/bin/sh -c '/app/zk…" Up 2 minutes 8123/tcp, 0.0.0.0:9095->9091/tcp, :::9095->9091/tcp
zkevm-prover "zkProver -c /usr/sr…" Up 3 minutes 0.0.0.0:50061->50061/tcp, :::50061->50061/tcp, 0.0.0.0:50071->50071/tcp, :::50071->50071/tcp
zkevm-mock-l1-network "/usr/local/bin/entr…" Up 3 minutes 9545/tcp, 0.0.0.0:8545-8546->8545-8546/tcp, :::8545-8546->8545-8546/tcp, 30303/tcp, 30303/udp
zkevm-event-db "docker-entrypoint.s…" Up 3 minutes 0.0.0.0:5435->5432/tcp, :::5435->5432/tcp
zkevm-pool-db "docker-entrypoint.s…" Up 3 minutes 0.0.0.0:5433->5432/tcp, :::5433->5432/tcp
zkevm-state-db "docker-entrypoint.s…" Up 3 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
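
As a quick cross-check of that count, you can count the running containers directly; if everything started, the command below should print 35:

$ sudo docker ps -q | wc -l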

Access the zkEVM Explorer

If you are working directly on the node, open 'localhost' in your browser. If you are working from a remote machine, connect to the IP address of the machine hosting the Validium node (in that case, replace "localhost" with that IP address in the steps below).

Setting Up and Preparing MetaMask

Add the L2 network (here called "My Local Testnet") to MetaMask with the following settings:

Chain ID: 1001
Currency Symbol: POL (in practice, any symbol is fine)
RPC Node: http://localhost:8123
Block Explorer: http://localhost:4001
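
Before (or after) adding the network in MetaMask, you can confirm from the command line that the L2 RPC endpoint is responding; the result field should come back as 0x3e9, i.e. 1001 in decimal:

$ curl -s -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' http://localhost:8123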

An account pre-funded with test tokens is available, so import it into MetaMask using the private key '0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80'. Its balance will be displayed as 100000 POL.
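
If you want to double-check the balance over JSON-RPC as well, the address below is the one derived from the test private key above; the result is returned as hex-encoded wei (100000 POL = 1e23 wei):

$ curl -s -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266","latest"],"id":1}' http://localhost:8123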

Confirming POL Transfer

Transfer 100 POL to another account (Account 8).
*The steps of the transfer itself are omitted here.
After the transfer, check the balance of POL.

Check the status of the recipient’s account (Account 8) and confirm that the balance is 100 POL.

You can verify this transaction on the zkEVM explorer.
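
You can also look up the transaction over JSON-RPC; <TX_HASH> is a placeholder for the transaction hash shown in MetaMask or the explorer:

$ curl -s -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_getTransactionByHash","params":["<TX_HASH>"],"id":1}' http://localhost:8123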

Checking the Bridge Operation

The CDK comes with a native bridge and UI, which can be used to move funds between L1 and L2. The operations from here on are performed with a single account (Account 7).

L1 to L2

In other words, from the Ethereum network (here, the mock network) to Astar zkEVM.

Add the settings for the L1 Ethereum mock network to MetaMask.

Chain ID: 1337
Currency Symbol: POL (in practice, any symbol is fine)
RPC Node: http://localhost:8545
Block Explorer: http://localhost:4000

*Note that MetaMask may classify this added mock network as a test network; if it does not appear, check the display settings (for example, "Show test networks").

When you switch to the L1 network, you will find that you have some POL (in the example below, about 100000 POL).
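
The mock L1 endpoint can be checked from the command line in the same way as L2; here the chain ID should come back as 0x539, i.e. 1337 in decimal:

$ curl -s -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' http://localhost:8545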

Open the bridge UI at http://localhost:8088. The "Connect a wallet" screen below is shown only on first access. Click "Connect a wallet" and select the account you imported earlier.

Once the connection is complete, a page like the one below will be displayed.

Here, we will try sending 10 ETH to L2. Enter the amount and press “Continue”.

A popup like the one below will appear, so enter “I understand” and press “Continue”.

You will be taken to a confirmation screen, where you press “Bridge”, and approve the transaction in the MetaMask popup.

Once the bridge is complete, the Activity page will be displayed.

You can also check the transaction from the L1 explorer (localhost:4000).

When you switch the MetaMask network to L2 (My Local Testnet), you will see that the 10 bridged tokens have arrived (displayed as POL, the symbol we configured).

L2 to L1

Switch the MetaMask network to L2, that is, the “My Local Testnet” chain.

Access the bridge (localhost:8088) in your browser. The updated balances of L1 and L2 will be displayed.

Earlier, we sent “10” tokens to L2. Now, let’s try to return “5” of those tokens back to L1.

A confirmation screen will appear, so press “Bridge” and approve the transaction in the MetaMask popup.

The transaction will be executed and the process will proceed.

The status on the Bridge UI’s Activity page will progress as shown below.

As the process progresses and reaches the “On Hold” state, press “Finalise”. You will be prompted to switch to L1 in the MetaMask popup, so approve the transaction after switching.

After the transaction is executed, the process continues, and once the Activity page shows a completed status, the bridge from L2 to L1 is done.

Moving tokens from L1 to L2 is intuitive and completes quickly in a single action. Moving tokens from L2 to L1, on the other hand, involves two major steps (the bridge transaction on L2 and the final claim on L1), and the transaction must be confirmed before the tokens are released, which takes more time but is necessary for security.

In conclusion, following the quick start guide, we built a Validium node on a local machine and confirmed the relationship between L1 and L2 by actually transferring tokens.


Astar Network Official Ambassador@Tokyo / web3 Technical Marketing / IT Infra TechLead / Tech Instructor / School Teacher