StakeWise
Maintenance

ETH2 Client Migration

Specify the new validator client type and the correct Beacon Node RPC endpoints.
Run the following command:
helm upgrade --install validators stakewise/validators \
  --set='network=mainnet' \
  --set='type=lighthouse' \
  --set='validatorsCount=8' \
  --set='beaconChainRpcEndpoints[0]=http://lighthouse.chain:5052' \
  --set='vaultEndpoint=http://vault.vault:8200' \
  --set='graffiti=StakeWise' \
  --set='reimportKeystores=true' \
  --set='persistence.storageClassName=ssd-storage' \
  --create-namespace \
  --namespace validators
Pay attention to the --set='reimportKeystores=true' flag: it is required for importing the keys into the new client.
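As a sketch of reusing the same invocation when switching clients, the command above can be parameterized on the client type and beacon endpoint. The helper name and variables below are hypothetical; the helper echoes the command for review rather than running it:

```shell
#!/usr/bin/env bash
# Hypothetical helper: assemble the helm upgrade command for a given
# client type and beacon endpoint, echoing it for review. Run the
# printed command (or pipe to bash) once it looks correct.
CLIENT_TYPE="lighthouse"                  # assumed new client type
BEACON_RPC="http://lighthouse.chain:5052" # assumed beacon endpoint

build_upgrade_cmd() {
  echo "helm upgrade --install validators stakewise/validators" \
    "--set='network=mainnet'" \
    "--set='type=${CLIENT_TYPE}'" \
    "--set='validatorsCount=8'" \
    "--set='beaconChainRpcEndpoints[0]=${BEACON_RPC}'" \
    "--set='reimportKeystores=true'" \
    "--namespace validators"
}

build_upgrade_cmd
```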

Cluster Migration

One of the possible options for migrating validators to a new cluster is described below.
2. Deploy ETH1 / ETH2 nodes (wait for the sync to finish).
3. Deploy the validators chart with the following command (this creates the PVCs for the validators without launching the validators themselves):
helm upgrade validators stakewise/validators -n validators --set enabled=false -f values.yaml
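With enabled=false the chart creates one PVC per validator without starting any pods. A quick sketch of the PVC names to expect for a given validatorsCount, assuming the data-validator<N> naming convention used by the claimName in migrate.yaml:

```shell
# Sketch: list the PVC names the chart is expected to create for a given
# validatorsCount, assuming the data-validator<N> naming convention seen
# in migrate.yaml (claimName: data-validator0).
VALIDATORS_COUNT=3
PVC_NAMES=$(seq 0 $((VALIDATORS_COUNT - 1)) | sed 's/^/data-validator/')
echo "$PVC_NAMES"
```

Compare the printed names against kubectl get pvc -n validators before moving on.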
5. Define the pod used to copy the slashing protection database:
# migrate.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: migrate
spec:
  containers:
    - image: ubuntu:latest
      imagePullPolicy: Always
      name: migrate
      command: ["/bin/bash", "-c", "sleep 10000000"]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-validator0
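Since migrate.yaml pins claimName to data-validator0, migrating several validators means regenerating the manifest per index. A minimal sketch; the INDEX variable and the migrate-<N>.yaml filename are illustrative:

```shell
# Write a migrate manifest whose PVC claim points at a chosen validator
# index. INDEX is an illustrative variable set once per migration run.
INDEX=2
cat > migrate-${INDEX}.yaml <<EOF
kind: Pod
apiVersion: v1
metadata:
  name: migrate
spec:
  containers:
    - image: ubuntu:latest
      name: migrate
      command: ["/bin/bash", "-c", "sleep 10000000"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-validator${INDEX}
EOF
grep claimName migrate-${INDEX}.yaml
```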
6. Use the example script below to migrate to the new cluster:
OLD_CLUSTER="old-cluster-name"
NEW_CLUSTER="new-cluster-name"
VALIDATOR_NAME="operator-validator0"

# Scale old statefulset to 0
kubectl --context ${OLD_CLUSTER} -n validators scale sts/${VALIDATOR_NAME} --replicas=0
# Create a pod with a sleep command and the attached PVC from the old validator
kubectl --context ${OLD_CLUSTER} -n validators apply -f migrate.yaml
# Copy slashing history to the local drive
kubectl --context ${OLD_CLUSTER} -n validators cp migrate:/data/validator.db validator.db
kubectl --context ${OLD_CLUSTER} -n validators delete po migrate
# Create a pod with a sleep command and the attached PVC from the new validator
kubectl --context ${NEW_CLUSTER} -n validators apply -f migrate.yaml
# Create a dir for the slashing history and copy validator.db
kubectl --context ${NEW_CLUSTER} -n validators exec migrate -- mkdir /data/prysm
kubectl --context ${NEW_CLUSTER} -n validators cp validator.db migrate:/data/prysm/validator.db
kubectl --context ${NEW_CLUSTER} -n validators delete po migrate
7. Deploy the validators chart after each run of the above script. Migrate one validator at a time to minimize risk:
helm upgrade --install validators stakewise/validators \
  --set='network=mainnet' \
  --set='type=lighthouse' \
  --set='validatorsCount=1' \
  --set='beaconChainRpcEndpoints[0]=http://lighthouse.chain:5052' \
  --set='vaultEndpoint=http://vault.vault:8200' \
  --set='graffiti=StakeWise' \
  --set='reimportKeystores=true' \
  --set='persistence.storageClassName=ssd-storage' \
  --create-namespace \
  --namespace validators
Pay attention to the --set='validatorsCount=1' flag: it should be incremented on each run of the script above, e.g. --set='validatorsCount=2', and so on.
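The increment-per-run pattern can be sketched as a loop that prints the helm command for each step (echo only; TOTAL_VALIDATORS is an assumed value):

```shell
# Print the helm upgrade command for each migration step, bumping
# validatorsCount by one per migrated validator. Echo only: run each
# printed command after the corresponding slashing-history copy is done.
TOTAL_VALIDATORS=3   # assumed total number of validators to migrate
CMDS=$(for count in $(seq 1 ${TOTAL_VALIDATORS}); do
  echo "helm upgrade --install validators stakewise/validators --set='validatorsCount=${count}' --namespace validators"
done)
echo "$CMDS"
```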
The above example is for the Prysm client. The process is similar for other clients; the only difference is the path to the slashing protection database.
Lighthouse example:
OLD_CLUSTER="old-cluster-name"
NEW_CLUSTER="new-cluster-name"
VALIDATOR_NAME="operator-validator0"

# Scale old statefulset to 0
kubectl --context ${OLD_CLUSTER} -n validators scale sts/${VALIDATOR_NAME} --replicas=0
# Create a pod with a sleep command and the attached PVC from the old validator
kubectl --context ${OLD_CLUSTER} -n validators apply -f migrate.yaml
sleep 5
# Copy slashing history to the local drive
kubectl --context ${OLD_CLUSTER} -n validators cp migrate:/data/lighthouse/validators/slashing_protection.sqlite slashing_protection.sqlite
kubectl --context ${OLD_CLUSTER} -n validators delete po migrate
# Create a pod with a sleep command and the attached PVC from the new validator
kubectl --context ${NEW_CLUSTER} -n validators apply -f migrate.yaml
sleep 5
# Create a dir for the slashing history and copy slashing_protection.sqlite
kubectl --context ${NEW_CLUSTER} -n validators exec migrate -- mkdir -p /data/lighthouse/validators
kubectl --context ${NEW_CLUSTER} -n validators cp slashing_protection.sqlite migrate:/data/lighthouse/validators/slashing_protection.sqlite
kubectl --context ${NEW_CLUSTER} -n validators delete po migrate
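The per-client difference is just the slashing protection database path inside the data volume. A small helper sketch covering the two clients shown in this guide (paths taken from the scripts above; other clients are not listed here):

```shell
# Map a client type to its slashing protection DB path inside the data
# volume. Only the two clients used in this guide are covered.
slashing_db_path() {
  case "$1" in
    prysm)      echo "/data/prysm/validator.db" ;;
    lighthouse) echo "/data/lighthouse/validators/slashing_protection.sqlite" ;;
    *)          echo "unknown client: $1" >&2; return 1 ;;
  esac
}

slashing_db_path prysm       # /data/prysm/validator.db
slashing_db_path lighthouse  # /data/lighthouse/validators/slashing_protection.sqlite
```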