# Upgrading a Validator

How to perform an upgrade on a validator.
**Helm chart details:** validator chart version `0.4.4` introduces a new pod and Service named `vao`. This Service is exposed via a LoadBalancer on port `8001`, so please make sure this port is open. Please also make sure you update your `values.yaml` or `generated-values.yaml` to include `.Values.vao`.
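A quick way to confirm the new port is reachable from outside is a plain TCP probe; a minimal sketch using bash's `/dev/tcp` (replace the address with your node's public IP, i.e. the one set in `CFG_LIBP2P_EXTERNAL_ADDR`):

```shell
# Probe the new vao port; a sketch, not part of the chart itself.
HOST=1.2.3.4   # replace with your node's public address
PORT=8001
if timeout 5 bash -c "echo > /dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "port $PORT on $HOST is reachable"
else
  echo "port $PORT on $HOST is NOT reachable" >&2
fi
```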
Sample config:

```yaml
global:
  logLevel: "warn"
ghost:
  ethConfig:
    ethFrom:
      existingSecret: '<somesecret>'
      key: "ethFrom"
    ethKeys:
      existingSecret: '<somesecret>'
      key: "ethKeyStore"
    ethPass:
      existingSecret: '<somesecret>'
      key: "ethPass"
  ethRpcUrl: "https://MY_L1_RPC_URL"
  rpcUrl: "https://MY_L1_RPC_URL"
  env:
    normal:
      CFG_LIBP2P_EXTERNAL_ADDR: '/ip4/1.2.3.4' # public/reachable IP address of the node; if using a DNS hostname, set to `/dns/my.validator.com`
vao:
  env:
    normal:
      CFG_LIBP2P_EXTERNAL_ADDR: '/ip4/1.2.3.4' # public/reachable IP address of the node; if using a DNS hostname, set to `/dns/my.validator.com`
```
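As a quick pre-upgrade sanity check, you can grep your values file for the new top-level `vao` block before running the upgrade. A minimal sketch (the file path and contents here are stand-ins for your own `generated-values.yaml`):

```shell
VALUES=/tmp/generated-values.yaml

# Stand-in values file; in practice point VALUES at your real generated-values.yaml
cat > "$VALUES" <<'EOF'
ghost:
  env:
    normal:
      CFG_LIBP2P_EXTERNAL_ADDR: '/ip4/1.2.3.4'
vao:
  env:
    normal:
      CFG_LIBP2P_EXTERNAL_ADDR: '/ip4/1.2.3.4'
EOF

# Fail loudly if the top-level vao block is missing
if grep -qE '^vao:' "$VALUES"; then
  echo "vao block present"
else
  echo "ERROR: values file is missing the .Values.vao block" >&2
  exit 1
fi
```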
## Install CRDs
Starting from chart version `0.3.4`, Tor is deployed using the `tor-controller` operator, which installs some custom resource definitions. The controller will create a new onion key, which is persisted as a secret. Please delete your previous secrets containing the Tor keys, as they won't be needed. Retrieve the Ghost onion address using `kubectl get onion -n <namespace>` and notify the Chronicle team of your ETH address and the new Ghost onion address.

If you are upgrading from a prior release (< `0.3.4`), chances are the Tor custom resource definitions haven't been installed. Helm does not install CRDs during a `helm upgrade`, so we need to apply them manually:
```shell
kubectl apply -f https://raw.githubusercontent.com/chronicleprotocol/charts/validator-0.3.24/charts/validator/crds/tor-controller.yaml
```
It can take a few moments for the tor-controller to reach a ready state, but please make sure it's running before upgrading your validator:

```shell
kubectl get pods -n tor-controller-system
```
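Rather than polling manually, you can block until the controller deployment reports ready; a sketch assuming the default deployment name used by tor-controller:

```shell
# Wait until the tor-controller deployment is Available (gives up after 3 minutes)
kubectl wait --for=condition=Available \
  deployment/tor-controller-controller-manager \
  -n tor-controller-system --timeout=180s
```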
You should see something like this:

```
NAME                                                 READY   STATUS    RESTARTS   AGE
tor-controller-controller-manager-6648f44cc8-g6c68   2/2     Running   0          16m
```
Now that the CRDs are deployed (i.e. `kubectl get crds` will show the Tor custom resource definitions) and our `values.yaml` is updated, we can perform the upgrade.
## Upgrading manually (`helm upgrade`)
If you are upgrading from 0.3.x to 0.3.y, simply updating the chart version will suffice:

```shell
ssh <SERVER_IP>
su - <FEED_USERNAME>
export FEED_NAME=my-feed
```
### Prepare values
The `values.yaml` file is used to configure the validator. It is generated by the install script and should be updated to reflect the latest version of the validator chart.

With the latest version of the chart, a few changes need to be made to the `values.yaml` / `generated-values.yaml` file. Please structure your Helm values like this:
```yaml
global:
  logLevel: "warn"
ghost:
  ethConfig:
    ethFrom:
      existingSecret: '<somesecret>'
      key: "ethFrom"
    ethKeys:
      existingSecret: '<somesecret>'
      key: "ethKeyStore"
    ethPass:
      existingSecret: '<somesecret>'
      key: "ethPass"
  ethRpcUrl: "https://MY_L1_RPC_URL"
  rpcUrl: "https://MY_L1_RPC_URL"
  env:
    normal:
      CFG_LIBP2P_EXTERNAL_ADDR: '/ip4/1.2.3.4' # public/reachable IP address of the node
vao:
  env:
    normal:
      CFG_LIBP2P_EXTERNAL_ADDR: '/ip4/1.2.3.4' # public/reachable IP address of the node
```
Please ensure your values file is updated to reflect the latest requirements for the validator chart, with the correct values for `ethConfig`, `ethRpcUrl`, and `rpcUrl`.

Make sure the Tor CRDs are installed, then run the upgrade:
```shell
helm repo update
helm upgrade $FEED_NAME -n $FEED_NAME -f $HOME/$FEED_NAME/generated-values.yaml chronicle/validator --version 0.4.4
```
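If you want to preview what the upgrade would change before applying it, Helm's built-in dry run renders the manifests without touching the cluster; a sketch using the same release name and values file as above:

```shell
# Render the upgrade without applying it; inspect the output for surprises
helm upgrade $FEED_NAME chronicle/validator \
  -n $FEED_NAME \
  -f $HOME/$FEED_NAME/generated-values.yaml \
  --version 0.4.4 \
  --dry-run
```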
If upgrading from 0.2.x to 0.3.x, please use the helper script, or manually update your `generated-values.yaml` as per the steps above.
## Upgrading using the helper script (`upgrade.sh`)
Please be aware that the latest Helm chart has been renamed from `feed` to `validator`. Please use the `upgrade.sh` script to upgrade your validator to the latest version. This version embeds `musig` into the `ghost` pod; the upgrade script will clean up the generated values file and remove the now-unnecessary `musig` values.

To simplify the upgrade process, we have created a helper script that will upgrade your validator to the latest version. The script will attempt to run `helm upgrade <feedname> -n <feedname> chronicle/validator` on your feed release, with any updated input variables.

Please use the correct `FEED_NAME`, which should be the same as your Helm release name if you previously deployed using the `install.sh` script:
```shell
ssh <SERVER_IP>
su - <FEED_USERNAME>
export FEED_NAME=my-feed
```

Make sure the Tor CRDs are installed.
### Download the latest upgrade.sh
Get the latest `upgrade.sh` script, make it executable, and run it:

```shell
wget -N https://raw.githubusercontent.com/chronicleprotocol/scripts/main/feeds/k3s-install/upgrade.sh
chmod a+x upgrade.sh
./upgrade.sh
```
The script looks for its required input variables in an `.env` file; alternatively, you can export them as environment variables. If the script fails to find any of these values, it will prompt you for them at runtime. If `kubectl`/`helm` commands fail, please ensure you have `$KUBECONFIG` set correctly (see the installation docs for more detail).
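On a default k3s install (which the install scripts target), the kubeconfig lives at a fixed path; a sketch of setting `$KUBECONFIG`, assuming the k3s default location:

```shell
# k3s writes its kubeconfig here by default; adjust if your install differs
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
echo "KUBECONFIG=$KUBECONFIG"
```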
### Verify the helm release and version
Verify that the chart version has changed and matches the latest validator version:

```shell
helm list -n $FEED_NAME
```

```
NAME       NAMESPACE  REVISION  UPDATED                                  STATUS    CHART            APP VERSION
validator  demo       1         2025-07-01 11:52:17.003793982 -0300 -03  deployed  validator-0.4.4  0.60
```
### View all resources created in the namespace
```shell
kubectl get pods,deployment,service,secrets,onion -n demo
```

```
NAME                                          READY   STATUS    RESTARTS   AGE
pod/ghost-688b6864b5-w92sd                    1/1     Running   0          2m
pod/ghost-socks-tor-daemon-549c447f9c-75c26   1/1     Running   0          2m
pod/ghost-tor-daemon-c648899bb-67rnd          1/1     Running   0          2m
pod/ghost-vao-f568684d9-74nb5                 1/1     Running   0          2m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ghost                    1/1     1            1           2m
deployment.apps/ghost-socks-tor-daemon   1/1     1            1           2m
deployment.apps/ghost-tor-daemon         1/1     1            1           2m
deployment.apps/ghost-vao                1/1     1            1           2m

NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                         AGE
service/ghost                   LoadBalancer   10.43.181.34    192.168.10.27   8000:31501/TCP,8080:30746/TCP   2m
service/ghost-metrics           ClusterIP      10.43.21.230    <none>          9090/TCP                        2m
service/ghost-metrics-vao       ClusterIP      10.43.23.37     <none>          9090/TCP                        2m
service/ghost-socks-tor-svc     ClusterIP      10.43.87.120    <none>          9050/TCP                        2m
service/ghost-tor-metrics-svc   ClusterIP      10.43.142.233   <none>          9035/TCP                        2m
service/ghost-tor-svc           ClusterIP      10.43.194.155   <none>          8888/TCP                        2m
service/ghost-vao               LoadBalancer   10.43.1.126     192.168.10.27   8001:31468/TCP                  2m

NAME                             TYPE                                           DATA   AGE
secret/ghost-eth-keys            Opaque                                         3      2m
secret/ghost-socks-tor-secret    tor.k8s.torproject.org/control-password        1      2m
secret/ghost-tor-auth            tor.k8s.torproject.org/authorized-clients-v3   0      2m
secret/ghost-tor-secret          tor.k8s.torproject.org/onion-v3                5      2m
secret/sh.helm.release.v1.ghost.v1   helm.sh/release.v1                         1      2m

NAME                                        HOSTNAME                      AGE
onionservice.tor.k8s.torproject.org/ghost   mylongtoronionaddress.onion   28m
```
### View pod logs

```shell
kubectl logs -n demo deployment/ghost
kubectl logs -n demo deployment/ghost-vao
```
And you're done! If you encounter any issues, please refer to the Troubleshooting docs.