Usage

Requirements

  • At least three nodes in the Kubernetes cluster, each with 8 CPU / 16 GB RAM.
  • 1000 GB of persistent storage per node (SSD).
  • Helm 3.0+ - the earliest version of Helm tested. Charts may work with earlier versions, but this is untested.
  • Kubernetes 1.18+ - the earliest version of Kubernetes tested. Charts may work with earlier versions, but this is untested.
  • PV provisioner support in the underlying infrastructure.
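A quick way to sanity-check a cluster against these requirements (the commands below are illustrative; any equivalent checks work):
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory
kubectl get storageclass
helm version --short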

Installation

Monitoring System

If you already have Prometheus installed in your cluster, you can skip this step.
Every chart we support contains the ability to enable monitoring and alerting out of the box. A combination of Prometheus + Grafana + Alertmanager is used for monitoring.
Add the prometheus-community helm repository and check that you have access to the chart:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Install Prometheus + Grafana + Alertmanager:
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --set='grafana.sidecar.dashboards.enabled=true' \
  --set='grafana.sidecar.dashboards.searchNamespace=true' \
  --set='prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.probeSelectorNilUsesHelmValues=false' \
  --create-namespace \
  --namespace monitoring \
  --version 35.0.2 \
  -f prom.yaml
prom.yaml:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "{REPLACE_ME_WITH_STORAGE_CLASS_NAME}"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
grafana:
  persistence:
    enabled: true
    type: pvc
    storageClassName: "{REPLACE_ME_WITH_STORAGE_CLASS_NAME}"
    accessModes: ["ReadWriteOnce"]
    size: 10Gi
    finalizers:
      - kubernetes.io/pvc-protection
For GKE/EKS installations:
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --set='kubeControllerManager.enabled=false' \
  --set='kubeEtcd.enabled=false' \
  --set='kubeScheduler.enabled=false' \
  --set='kubeProxy.enabled=false' \
  --set='defaultRules.rules.etcd=false' \
  --set='defaultRules.rules.kubernetesSystem=false' \
  --set='defaultRules.rules.kubeScheduler=false' \
  --set='grafana.sidecar.dashboards.enabled=true' \
  --set='grafana.sidecar.dashboards.searchNamespace=true' \
  --set='prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.probeSelectorNilUsesHelmValues=false' \
  --create-namespace \
  --namespace monitoring \
  --version 35.0.2 \
  -f prom.yaml
prom.yaml:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "gp2"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
grafana:
  persistence:
    enabled: true
    type: pvc
    storageClassName: "gp2"
    accessModes: ["ReadWriteOnce"]
    size: 10Gi
    finalizers:
      - kubernetes.io/pvc-protection
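Whichever variant you installed, it is worth confirming that the monitoring stack is healthy before moving on; all pods in the monitoring namespace should reach the Running state:
kubectl -n monitoring get pods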

Optional (Grafana Dashboards):

Import dashboards into Grafana manually or automatically with Helm:
helm upgrade --install grafana-stakewise-dashboards stakewise/grafana-stakewise-dashboards \
  --namespace monitoring

Hashicorp Vault

Hashicorp Vault is used to securely store and manage validator keys in one place. Validators can access only the keys that they're supposed to run.
Add the Hashicorp helm repository and check that you have access to the chart:
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
Hashicorp Vault requires auto-unsealing so that restarted Vault pods can unseal themselves. You can use one of the supported cloud providers or the transit secret engine as specified here.
Since there are a large number of options for installing and configuring a Hashicorp Vault cluster (due to different architectures and providers), we cannot list the configuration steps for every case. The instructions below cover GCP and AWS unsealing; for other providers, use the relevant documentation on the HashiCorp website.

Vault configuration with GCP auto-unseal:

  1. Using the gcloud CLI, complete the following steps in GCP:
Create a Google Service Account and download its JSON key:
gcloud iam service-accounts create SERVICE_ACCOUNT_ID \
  --description="DESCRIPTION" \
  --display-name="DISPLAY_NAME"
gcloud iam service-accounts keys create key-file \
  --iam-account=SERVICE_ACCOUNT_ID@PROJECT_ID.iam.gserviceaccount.com
Create a Kubernetes secret with the GCP credentials generated above:
kubectl create namespace vault
kubectl create secret generic gcp-creds \
  --from-file=gcp-creds.json=./google-project-ID.json \
  --namespace vault
Create a keyring:
gcloud kms keyrings create key-ring --location location
gcloud kms keys add-iam-policy-binding key \
  --keyring key-ring \
  --location location \
  --member principal-type:principal-email \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter
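As a concrete example matching the vault.yaml below (key ring and key both named validators, location global; the service-account email is a placeholder), note that the key itself must also exist before the IAM binding:
gcloud kms keyrings create validators --location global
gcloud kms keys create validators \
  --keyring validators \
  --location global \
  --purpose encryption
gcloud kms keys add-iam-policy-binding validators \
  --keyring validators \
  --location global \
  --member serviceAccount:SERVICE_ACCOUNT_ID@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter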
  2. Update vault.yaml with the keyring information:
# vault.yaml
injector:
  enabled: false
server:
  enabled: true
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        telemetry {
          disable_hostname = true
          prometheus_retention_time = "12h"
        }
        service_registration "kubernetes" {}
        seal "gcpckms" {}
  extraEnvironmentVars:
    GOOGLE_REGION: global
    GOOGLE_PROJECT: stakewise-stage
    GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/stakewise/gcp-creds.json
    VAULT_GCPCKMS_SEAL_KEY_RING: validators
    VAULT_GCPCKMS_SEAL_CRYPTO_KEY: validators
  volumes:
    - name: gcp-creds
      secret:
        secretName: gcp-creds
        items:
          - key: gcp-creds.json
            path: gcp-creds.json
  volumeMounts:
    - name: gcp-creds
      mountPath: "/vault/userconfig/stakewise/gcp-creds.json"
      subPath: gcp-creds.json
      readOnly: true

Vault configuration with AWS KMS auto-unseal:

  1. Follow the instructions here to create an AWS KMS key.
  2. Update vault.yaml with the key information:
# vault.yaml
injector:
  enabled: false
server:
  enabled: true
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        telemetry {
          disable_hostname = true
          prometheus_retention_time = "12h"
        }
        service_registration "kubernetes" {}
        seal "awskms" {}
  extraEnvironmentVars:
    AWS_REGION: "us-east-1"
    AWS_ACCESS_KEY_ID: "ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "ACCESS_KEY"
    VAULT_AWSKMS_SEAL_KEY_ID: "SEAL_KEY_ID"
Note that when using AWS or GCP you do not need to execute the unseal commands below. To check that auto-unsealing is working, look for records like these in the logs:
$ kubectl -n vault logs vault-0 | grep -E "unseal|core"
...
2022-03-14T19:10:02.500Z [INFO] core: vault is unsealed
2022-03-14T19:10:02.500Z [INFO] core: entering standby mode
2022-03-14T19:10:02.570Z [INFO] core: unsealed with stored key
...

Vault Initialization

Install the specified version of the Vault Helm chart in HA mode with integrated storage:
helm upgrade --install vault hashicorp/vault \
  -f ./vault.yaml \
  --set='server.image.tag=1.9.4' \
  --namespace vault \
  --create-namespace \
  --version 0.19.0
After Vault is installed, one of the Vault servers needs to be initialized. The initialization generates the credentials necessary to unseal all the Vault servers.
You must store the root token and unseal keys in secure storage. If you lose them, you will have to re-deploy the Vault and re-sync the keys.
Initialize and unseal vault-0 pod:
kubectl -n vault exec -ti vault-0 -- vault operator init
kubectl -n vault exec -ti vault-0 -- vault operator unseal
Join the remaining pods to the Raft cluster and unseal them. The pods need to communicate directly, so configure them to use the internal service provided by the Helm chart:
kubectl -n vault exec -ti vault-1 -- vault operator raft join http://vault-0.vault-internal:8200
kubectl -n vault exec -ti vault-1 -- vault operator unseal

kubectl -n vault exec -ti vault-2 -- vault operator raft join http://vault-0.vault-internal:8200
kubectl -n vault exec -ti vault-2 -- vault operator unseal
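If you are unsealing manually (i.e., not using the cloud auto-unseal configured above), note that by default vault operator init produces five unseal key shares with a threshold of three, so each unseal command must be run three times, supplying a different key share at each prompt. For example, for vault-1:
kubectl -n vault exec -ti vault-1 -- vault operator unseal   # paste Unseal Key 1
kubectl -n vault exec -ti vault-1 -- vault operator unseal   # paste Unseal Key 2
kubectl -n vault exec -ti vault-1 -- vault operator unseal   # paste Unseal Key 3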
To verify that the Raft cluster has been initialized successfully, first log in using the root token on the vault-0 pod:
kubectl -n vault exec -ti vault-0 -- vault login
Next, list all the Raft peers:
kubectl -n vault exec -ti vault-0 -- vault operator raft list-peers

Node                                    Address                        State       Voter
----                                    -------                        -----       -----
a1799962-8711-7f28-23f0-cea05c8a527d    vault-0.vault-internal:8201    leader      true
e6876c97-aaaa-a92e-b99a-0aafab105745    vault-1.vault-internal:8201    follower    true
4b5d7383-ff31-44df-e008-6a606828823b    vault-2.vault-internal:8201    follower    true
Vault with integrated storage (Raft) is now ready to use! Next, set up Kubernetes authentication.
Vault provides a Kubernetes authentication method that enables clients to authenticate with a Kubernetes Service Account Token. The Kubernetes resources that access secrets and create volumes authenticate through this method via a role.
First, start an interactive shell session on the vault-0 pod.
kubectl -n vault exec -ti vault-0 -- sh
$ export VAULT_TOKEN=token
Replace token with the Initial Root Token generated above.
Enable the Kubernetes authentication method:
$ vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/
Configure the Kubernetes authentication method to use the service account token, the location of the Kubernetes host, and its certificate. Replace {{ KUBERNETES_PORT_443_TCP_ADDR }} with the Kubernetes cluster API endpoint:
$ vault write auth/kubernetes/config \
    kubernetes_host="https://{{ KUBERNETES_PORT_443_TCP_ADDR }}:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    issuer="https://kubernetes.default.svc.cluster.local"

Success! Data written to: auth/kubernetes/config
The kubernetes_ca_cert file is written to the container by Kubernetes. The variable {{ KUBERNETES_PORT_443_TCP_ADDR }} references the internal network address of the Kubernetes host and should be manually updated in the command above.
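Later, charts that read keys from Vault authenticate through a role that binds a Kubernetes service account to Vault policies. A minimal sketch of such a role (the names here are illustrative, not the exact values the StakeWise charts expect):
$ vault write auth/kubernetes/role/validators \
    bound_service_account_names=validators \
    bound_service_account_namespaces=validators \
    policies=default \
    ttl=24h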
Lastly, exit the vault-0 pod.
exit

Optional: Additional monitoring for Vault

First, start an interactive shell session on the vault-0 pod.
kubectl -n vault exec -ti vault-0 -- sh
$ export VAULT_TOKEN=token
Replace token with the Initial Root Token generated above.
The Vault /sys/metrics endpoint is authenticated. Prometheus requires a Vault token with sufficient capabilities to successfully consume metrics from the endpoint.
Define a prometheus-metrics ACL policy that grants read capabilities to the metrics endpoint.
$ vault policy write prometheus-metrics - << EOF
path "/sys/metrics" {
  capabilities = ["read"]
}
EOF

Success! Uploaded policy: prometheus-metrics
Create a token with the prometheus-metrics policy attached; Prometheus will use it to authenticate against the Vault telemetry metrics endpoint.
Store the prometheus-token in a secure location in the Prometheus configuration directory.
$ vault token create \
    -field=token \
    -policy prometheus-metrics
NOTE: Production Vault installations typically use auth methods to issue tokens, but for the sake of simplicity this scenario issues the token directly from the token store.
Copy the token and save it to the file prometheus-token.
The Vault server is now prepared to properly expose telemetry metrics for Prometheus consumption, and you have created the token that Prometheus will use to access the metrics.
Prometheus container
Create a file prometheus-token.yaml:
apiVersion: v1
data:
  prometheus-token: {replace with the token encoded to base64 `echo -n 'token' | base64`}
kind: Secret
metadata:
  name: prometheus-token
type: Opaque
And apply the config:
kubectl -n monitoring apply -f prometheus-token.yaml
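Alternatively, the same Secret can be created in one step, letting kubectl handle the base64 encoding (assuming the token is stored in the shell variable VAULT_PROM_TOKEN):
kubectl -n monitoring create secret generic prometheus-token \
  --from-literal=prometheus-token="$VAULT_PROM_TOKEN"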
Update the prom.yaml file from the example above:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "{REPLACE_ME_WITH_STORAGE_CLASS_NAME}"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
    volumes:
      - name: prometheus-token
        secret:
          secretName: prometheus-token
    volumeMounts:
      - mountPath: /etc/prometheus/prometheus-token
        name: prometheus-token
        subPath: prometheus-token
    additionalScrapeConfigs:
      - job_name: vault
        metrics_path: /v1/sys/metrics
        params:
          format: ['prometheus']
        scheme: http
        authorization:
          credentials_file: /etc/prometheus/prometheus-token
        static_configs:
          - targets: ['vault.vault:8200']
grafana:
  persistence:
    enabled: true
    type: pvc
    storageClassName: "{REPLACE_ME_WITH_STORAGE_CLASS_NAME}"
    accessModes: ["ReadWriteOnce"]
    size: 10Gi
    finalizers:
      - kubernetes.io/pvc-protection
And upgrade the Prometheus installation:
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --set='grafana.sidecar.dashboards.enabled=true' \
  --set='grafana.sidecar.dashboards.searchNamespace=true' \
  --set='prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false' \
  --set='prometheus.prometheusSpec.probeSelectorNilUsesHelmValues=false' \
  --create-namespace \
  --namespace monitoring \
  --version 35.0.2 \
  -f prom.yaml
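To verify that Prometheus has picked up the new vault scrape job, port-forward the Prometheus service and check the targets API (the service name below assumes the kube-prometheus-stack release name used in this guide):
kubectl -n monitoring port-forward svc/kube-prometheus-stack-prometheus 9090:9090 &
curl -s http://localhost:9090/api/v1/targets | grep vault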

ETH1 Nodes

ETH1 nodes are used by the validators to propose new ETH2 blocks. As such, running validator and beacon nodes also entails having a reliable connection to the ETH1 chain.
ETH1 nodes must be deployed first. Currently, GoEthereum, Erigon, OpenEthereum, and Nethermind are supported.
Add the StakeWise Helm repository:
helm repo add stakewise https://charts.stakewise.io
helm repo update
Clients supporting Gnosis Chain: Nethermind, OpenEthereum.
OpenEthereum is deprecated and will be removed from the default set of charts once the Erigon team finishes adding Gnosis Chain support to their client.
Depending on what client you would like to use, run one of the following commands:
# GoEthereum
helm upgrade --install geth stakewise/geth \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain

# Erigon
helm upgrade --install erigon stakewise/erigon \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain

# OpenEthereum
helm upgrade --install openethereum stakewise/openethereum \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain

# Nethermind
helm upgrade --install nethermind stakewise/nethermind \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain
The recommended setup is to deploy two replicas of ETH1 nodes and use Infura, Alchemy, or any other hosted service as a fallback. As a result, if one of the ETH1 nodes fails, the ETH2 nodes will automatically connect to the second ETH1 node. If it happens that both ETH1 nodes fail, the ETH2 nodes will fall back to the hosted service.
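For example, when installing the ETH2 charts in the next step, this fallback order can be expressed by listing several eth1Endpoints entries with the hosted service last (the Infura URL below is a placeholder):
--set='eth1Endpoints[0]=http://geth.chain:8545' \
--set='eth1Endpoints[1]=https://mainnet.infura.io/v3/<YOUR_PROJECT_ID>' \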

ETH2 beacon nodes

An ETH2 beacon node is responsible for running a full Proof-of-Stake blockchain, known as the beacon chain, which uses distributed consensus to agree on blocks in the network. Validators connect to the beacon nodes to receive block attestation/proposal assignments.
Add StakeWise Helm repository:
helm repo add stakewise https://charts.stakewise.io
helm repo update
When deploying ETH2 nodes, make sure that your ETH1 nodes are fully synced. You can choose which ETH2 client to use: currently, Prysm, Lighthouse, Teku, and Nimbus are supported. Choose one or two clients to install and deploy.
Clients supporting Gnosis Chain: Prysm, Lighthouse.
Note that Nimbus is only compatible with the Lighthouse validator client.
# Prysm
helm upgrade --install prysm stakewise/prysm \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='eth1Endpoints[0]=http://geth.chain:8545' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain

# Lighthouse
helm upgrade --install lighthouse stakewise/lighthouse \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='eth1Endpoints[0]=http://geth.chain:8545' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain

# Teku
helm upgrade --install teku stakewise/teku \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='eth1Endpoints[0]=http://geth.chain:8545' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain

# Nimbus
helm upgrade --install nimbus stakewise/nimbus \
  --set='replicaCount=2' \
  --set='network=mainnet' \
  --set='eth1Endpoints[0]=http://geth.chain:8545' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --create-namespace \
  --namespace chain
The recommended setup is to deploy two replicas of the primary ETH2 client and one replica of the stand-by ETH2 client. The validators will be evenly connected to all the primary replicas and will automatically switch to another primary replica in case the connection to their current one fails.
If there is an issue with the primary client, the validators can migrate to the stand-by client and won't need to wait for it to sync the chain.

Upload Keystores To The Vault

Once you've successfully deployed the Vault and your proposal has been approved by the DAO (the snapshot vote got executed), sync the validator keys using the Operator CLI.
You must use the same mnemonic as was generated during the DAO proposal creation. Keep in mind that using the same mnemonic for multiple vaults will cause validator slashings.
Optionally, you can port-forward the endpoints that the CLI requires for uploading and verifying the validator keys:
kubectl port-forward svc/lighthouse -n chain 5052:5052 &
kubectl port-forward svc/vault -n vault 8200:8200 &
Run the following command to sync new validator keys to the vault:
./stakewise-cli sync-vault
Please choose the network name (mainnet, goerli, perm_goerli, gnosis) [mainnet]: mainnet
Enter your operator wallet address: 0xXXX...
Enter the vault authentication token: s.oXXX...
Enter the vault API URL: http://localhost:8200
Enter the beacon node URL for mainnet: http://localhost:5052
Enter the Kubernetes API server URL: https://kubernetes-api.com
Enter the validators kubernetes namespace [validators]:
Enter your mnemonic separated by spaces (" "):
NB! Using the same mnemonic for multiple vaults will cause validators slashings!
Fetching vault current state [####################################] 0/0
Provisioning missing validator keys [####################################] 500/500
Syncing vault validator directories [####################################] 5/5
Syncing vault keystores [####################################] 500/500
Verifying vault state [####################################] 500/500
The vault contains 500 validator keys. Please upgrade the "validators" helm chart with "validatorsCount" set to 5 and "reimportKeystores" set to "true". Make sure you have the following validators running in the "validators" namespace: validator0, validator1, validator2, validator3, validator4.

Validators

Validators are responsible for storing data, processing transactions, and adding new blocks to the blockchain. This keeps Ethereum secure for everyone and earns new ETH in the process.
Before deploying the validators, make sure you have deployed and initialized Hashicorp Vault and synchronized the validator keys in the steps above. After synchronizing the keys to the Vault, you will see a message similar to the following:
"The Vault contains 800 validator keys. Please make sure the validatorsCount is set to 8 and restart the validators."
Make sure you have the right number of validators running, then restart them so that they synchronize the latest changes from the Vault (see the sketch below).
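A minimal sketch of such a restart, assuming the validators run as StatefulSets in the validators namespace:
kubectl -n validators rollout restart statefulset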
Add the StakeWise helm repository:
helm repo add stakewise https://charts.stakewise.io
helm repo update
Deploy the chart, after specifying the Vault address, ETH2 nodes, client type, network and number of validators:
helm upgrade --install validators stakewise/validators \
  --set='network=mainnet' \
  --set='type=lighthouse' \
  --set='validatorsCount=8' \
  --set='beaconChainRpcEndpoints[0]=http://lighthouse.chain:5052' \
  --set='vaultEndpoint=http://vault.vault:8200' \
  --set='graffiti=StakeWise' \
  --set='reimportKeystores=true' \
  --set='persistence.storageClassName=ssd-storage' \
  --set='metrics.enabled=true' \
  --set='metrics.serviceMonitor.enabled=true' \
  --set='metrics.prometheusRule.enabled=true' \
  --set='validatorMonitor.enabled=true' \
  --set='validatorMonitor.graphNodeUrl=https://api.thegraph.com/subgraphs/name/stakewise/stakewise-mainnet' \
  --set='validatorMonitor.beaconNodeUrl=http://lighthouse.chain:5052' \
  --set='validatorMonitor.operatorWallets[0]=0x7776...' \
  --set='validatorMonitor.operatorWallets[1]=0x7778...' \
  --create-namespace \
  --namespace validators
Depending on the network you deploy to, some of the above parameters should be adjusted (see the example after the list below).
validatorMonitor.graphNodeUrl:
  • mainnet: https://api.thegraph.com/subgraphs/name/stakewise/stakewise-mainnet
  • gnosis: https://api.thegraph.com/subgraphs/name/stakewise/stakewise-gnosis
  • goerli: https://api.thegraph.com/subgraphs/name/stakewise/stakewise-goerli
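For example, a Gnosis Chain deployment would override at least the network and graph node URL from the command above (client endpoints would change to the Gnosis clients accordingly):
--set='network=gnosis' \
--set='validatorMonitor.graphNodeUrl=https://api.thegraph.com/subgraphs/name/stakewise/stakewise-gnosis' \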

Commit Operator

Once you're 100% ready for attestation/proposal assignments to the validators, commit your operator:
  • Go to the PoolValidators smart contract (Goerli, Perm Goerli, Gnosis Chain, Mainnet).
  • Click the Connect to Web3 button and connect your wallet. The address must match the one used during DAO proposal generation.
  • Call the commitOperator function.
Congratulations on becoming a StakeWise Node Operator 🎉. Your validators will receive assignments, and you will be able to claim your operator rewards from the Farms Page.