Tutorial: Deploy Access Server on Kubernetes with Helm

Abstract

Deploy the Access Server Docker image on a Kubernetes cluster using the Helm package manager.

Overview

This tutorial guides you through deploying the Access Server Docker image on a Kubernetes cluster using a Helm Chart and the Helm package manager. Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications, such as Access Server running in a Docker container.

Use cases for Kubernetes with Access Server:

  • Developers: Run and test Access Server in containers.

  • IT/Admins/DevOps: Manage production VPN environments at scale.

  • Companies: Build reliable, scalable cloud-native platforms.

  • Students/Learners: Experiment with modern infrastructure management.

Where Kubernetes can run:

  • On a laptop: Minikube, Kind, or Docker Desktop.

  • On-premises: Your own servers.

  • Cloud: Google Kubernetes Engine (GKE), Amazon EKS, Azure AKS, DigitalOcean Kubernetes, IBM Cloud Kubernetes Service, etc.
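
For a quick local sandbox (the laptop option above), Minikube gives you a single-node cluster. A minimal sketch, assuming minikube and kubectl are installed:

    minikube start
    kubectl get nodes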

Access Server benefits:

  • Self-hosted VPN solution.

  • Simplified, rapid deployment of secure remote access and site-to-site solutions.

  • Web-based Admin UI.

  • Built-in OpenVPN Connect app distribution with bundled connection profiles.

    Tip

    Check the system requirements to ensure your host is compatible.

Before proceeding, ensure your environment has a running Kubernetes cluster with Helm and kubectl installed. Then follow these steps to install the Access Server Helm Chart:

  1. Connect to the console and get root privileges.

  2. Add the repository to your system:

    helm repo add as-helm-chart https://openvpn.github.io/as-helm-chart
    
  3. Verify the Access Server Helm Chart is available:

    helm search repo as-helm-chart
    • Example output:

      # helm search repo as-helm-chart
      NAME                            CHART VERSION   APP VERSION     DESCRIPTION
      as-helm-chart/openvpn-as        0.1.1           latest          A Helm chart for OpenVPN Access Server
  4. Install the Access Server Helm Chart:

    helm install my-openvpn-as as-helm-chart/openvpn-as
    
    • Helm deploys Access Server as a pod in your Kubernetes cluster.

  5. Check the pod status:

    kubectl get pods
    • Example output:

      # kubectl get pods
      NAME                                        READY   STATUS    RESTARTS   AGE
      my-openvpn-as-openvpn-as-6b474c9757-jmjlc   1/1     Running   0          2m36s
  6. Verify the installation:

    helm status my-openvpn-as --show-resources
    • Example output:

      # helm status my-openvpn-as --show-resources
      NAME: my-openvpn-as
      LAST DEPLOYED: Wed Sep  3 17:06:26 2025
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1
      RESOURCES:
      ==> v1/Secret
      NAME                                 TYPE     DATA   AGE
      my-openvpn-as-openvpn-as-poststart   Opaque   1      2d23h
      
      ==> v1/PersistentVolumeClaim
      NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       VOLUMEATTRIBUTESCLASS   AGE
      my-openvpn-as-openvpn-as-pvc   Bound    pvc-256c79ba-97e9-474b-8b7c-8008187bb2fb   5Gi        RWO            do-block-storage   <unset>                 2d23h
      
      ==> v1/Service
      NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                                      AGE
      my-openvpn-as-openvpn-as   LoadBalancer   10.108.92.231   203.0.113.5       943:30479/TCP,443:31010/TCP,1194:31020/UDP   2d23h
      
      ==> v1/Deployment
      NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
      my-openvpn-as-openvpn-as   1/1     1            1           2d23h
      
      ==> v1/Pod(related)
      NAME                                        READY   STATUS    RESTARTS   AGE
      my-openvpn-as-openvpn-as-6b474c9757-jmjlc   1/1     Running   0          2d23h
  7. Use the Pod Name and the sacli tool to check Access Server:

    kubectl exec -it <Pod Name> -- /bin/bash sacli version

    Replace <Pod Name> with your pod name. From our example: my-openvpn-as-openvpn-as-6b474c9757-jmjlc.

    • Example output:

      # kubectl exec -it my-openvpn-as-openvpn-as-6b474c9757-jmjlc -- /bin/bash sacli version
      2.14.3 (5936bcd7)
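
If the pod isn't Running yet, you can wait for it to report Ready instead of re-running kubectl get pods. A minimal sketch, assuming the chart applies the standard Helm label app.kubernetes.io/instance=my-openvpn-as to its pods:

    kubectl wait --for=condition=Ready pod -l app.kubernetes.io/instance=my-openvpn-as --timeout=120s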

When deploying Access Server via the Helm Chart in your Kubernetes cluster, you can download the default values.yaml file from Artifact Hub, where the Access Server Helm Chart is hosted, and set custom configuration such as a custom image, network service type, or storage size.

For example:

  • Custom image: You want to install Access Server version 2.14.2 instead of the latest image.

  • Network service: You want to use the NodePort type (useful in a bare-metal environment) instead of the LoadBalancer type.

  • Storage: You want to use 20Gi instead of 5Gi.
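
If those are the only changes you need, you can also write a minimal override file and let Helm merge it with the chart defaults, instead of copying the full values.yaml. A sketch, assuming the image.tag, service.type, and persistence.size keys shown in the default values later in this section:

    cat > my-values.yaml <<'EOF'
    image:
      tag: 2.14.2        # pin a specific Access Server version instead of latest
    service:
      type: NodePort     # useful on bare metal, instead of LoadBalancer
    persistence:
      size: 20Gi         # request 20Gi instead of the default 5Gi
    EOF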

  1. Add the Helm Chart Repository to your system:

    helm repo add as-helm-chart https://openvpn.github.io/as-helm-chart
  2. Go to Artifact Hub, where the Access Server Helm Chart is hosted, and click the DEFAULT VALUES option on the right panel.

  3. Copy the content of the values.yaml file.

  4. Create a new file (example: my-values.yaml) on the machine where you'll deploy your Access Server Helm Chart:

    nano my-values.yaml
  5. Paste the copied contents from the values.yaml file.

  6. Edit your new file. For example, modify the image, service, or persistence settings in this section:

    image:
      repository: openvpn/openvpn-as
      tag: latest
      pullPolicy: IfNotPresent
    
    service:
      type: LoadBalancer
      ports:
        admin: 943
        tcp: 443
        udp: 1194
    
    persistence:
      enabled: true
      size: 5Gi
      storageClass:
      accessMode: ReadWriteOnce
  7. Save and exit by pressing Ctrl+x, then y.

  8. Install the Access Server Helm Chart with your new custom file (example: my-values.yaml):

    helm install my-openvpn-as as-helm-chart/openvpn-as -f my-values.yaml
    • Access Server deploys with the custom configuration from your values file.

  9. Verify that Access Server was installed correctly by identifying the pod name:

    helm status my-openvpn-as --show-resources
    • Example output:

      # helm status my-openvpn-as --show-resources
      NAME: my-openvpn-as
      LAST DEPLOYED: Wed Sep  3 17:06:26 2025
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1
      RESOURCES:
      ==> v1/Secret
      NAME                                 TYPE     DATA   AGE
      my-openvpn-as-openvpn-as-poststart   Opaque   1      2d23h
      
      ==> v1/PersistentVolumeClaim
      NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       VOLUMEATTRIBUTESCLASS   AGE
      my-openvpn-as-openvpn-as-pvc   Bound    pvc-256c79ba-97e9-474b-8b7c-8008187bb2fb   5Gi        RWO            do-block-storage   <unset>                 2d23h
      
      ==> v1/Service
      NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                                      AGE
      my-openvpn-as-openvpn-as   LoadBalancer   10.108.92.231   203.0.113.5       943:30479/TCP,443:31010/TCP,1194:31020/UDP   2d23h
      
      ==> v1/Deployment
      NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
      my-openvpn-as-openvpn-as   1/1     1            1           2d23h
      
      ==> v1/Pod(related)
      NAME                                        READY   STATUS    RESTARTS   AGE
      my-openvpn-as-openvpn-as-6b474c9757-jmjlc   1/1     Running   0          2d23h
  10. Use the Pod Name and the sacli tool to check Access Server:

    kubectl exec -it <Pod Name> -- /bin/bash sacli version

    Replace <Pod Name> with your pod name. From our example: my-openvpn-as-openvpn-as-6b474c9757-jmjlc.

    • Example output:

      # kubectl exec -it my-openvpn-as-openvpn-as-6b474c9757-jmjlc -- /bin/bash sacli version
      2.14.3 (5936bcd7)
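
If you change my-values.yaml later, you don't need to reinstall. Helm can apply the updated values to the existing release:

    helm upgrade my-openvpn-as as-helm-chart/openvpn-as -f my-values.yaml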

Important

This step demonstrates how to find the Admin Web UI IP via the LoadBalancer service type. With other service types, ports, and IP addresses, the process may differ.

You've installed Access Server, and the container is running. You can now sign in to the Admin Web UI, a web-based GUI for managing your VPN server, no Linux command-line expertise required.

The Admin Web UI is available at https://EXTERNAL-IP:943/admin.

  • Get the service details with the EXTERNAL-IP:

    kubectl get svc
    • Example output:

      # kubectl get svc
      NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                                      AGE
      kubernetes                 ClusterIP      10.108.64.1     <none>         443/TCP                                      24h
      my-openvpn-as-openvpn-as   LoadBalancer   10.108.79.160   203.0.113.5    943:31851/TCP,443:31305/TCP,1194:30865/UDP   4m1s

      The EXTERNAL-IP of this Access Server is 203.0.113.5.
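
      To capture just the EXTERNAL-IP in a script-friendly way, kubectl's jsonpath output works with the service name shown above:

        kubectl get svc my-openvpn-as-openvpn-as -o jsonpath='{.status.loadBalancer.ingress[0].ip}'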

The default admin user is openvpn, and you can find the temporary password created with the initial Access Server configuration in the container logs:

  1. Identify the Pod Name where your Access Server is installed:

    kubectl get pods
    • Example output:

      # kubectl get pods
      NAME                                        READY   STATUS    RESTARTS   AGE
      my-openvpn-as-openvpn-as-6b474c9757-jmjlc   1/1     Running   0          5m53s
  2. Use the pod name to check the Access Server logs:

    kubectl logs <Pod Name>

    Replace <Pod Name> with the pod name from the previous command. Example: my-openvpn-as-openvpn-as-6b474c9757-jmjlc.

    • The Access Server Initial Configuration Tool output displays.

  3. Scroll to find the line: Auto-generated pass = "<password>". Setting in db...

  4. Use the generated password to sign in to the Admin Web UI with the openvpn admin user.
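
Rather than scrolling through the full log, you can filter for that line directly (replace the pod name with yours):

    kubectl logs my-openvpn-as-openvpn-as-6b474c9757-jmjlc | grep "Auto-generated pass"

With the password in hand, sign in to the Admin Web UI: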

  1. Open a web browser.

  2. Enter the Admin Web UI URL, available at https://EXTERNAL-IP:943/admin.

    • A security warning displays. Access Server uses a self-signed SSL certificate. We recommend replacing it with a signed certificate. Refer to SSL Certificates.

    Important

    Ensure you use https in the URL.

  3. Click through the security warning.

    • The Admin login displays.

  4. Enter the openvpn admin username with the temporary password and click Sign In.

    • The EULA displays for you to read through, accept, and proceed to the Admin Web UI configuration pages.

For end-user devices to connect properly to your VPN server, you must update the domain or public IP address:

  1. Sign in to the Admin Web UI.

  2. Click Configuration > Network Settings.

  3. Enter the EXTERNAL-IP (or your domain name) in the Hostname or IP Address field.

    Note

    Access Server likely has a private IP address populated in this field. Clients need a public IP to access from outside the network, or a domain name mapped with an A record. We recommend using a domain name.
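
If you prefer the command line, you can change the same setting with sacli inside the pod. A sketch using the host.name configuration key and a placeholder domain; replace <Pod Name> and the domain with yours:

    kubectl exec -it <Pod Name> -- /usr/local/openvpn_as/scripts/sacli --key "host.name" --value "vpn.example.com" ConfigPut
    kubectl exec -it <Pod Name> -- /usr/local/openvpn_as/scripts/sacli start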

  • Run the command below to uninstall the Access Server Helm Chart:

    helm delete my-openvpn-as
  • Run the command below to remove the Helm Chart repository from your system:

    helm repo remove as-helm-chart
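
Depending on the chart version and your storage class's reclaim policy, the persistent volume claim may outlive the release. Check for leftovers and remove them manually if needed (the PVC name comes from the helm status output above):

    kubectl get pvc
    kubectl delete pvc my-openvpn-as-openvpn-as-pvc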

When deploying Access Server via the Helm Chart in your Kubernetes cluster, you can run custom scripts to perform custom configuration, such as creating users, configuring rules, or installing web SSL certificates.

Here's an example of running sacli commands to create a new user with a password and grant it admin privileges. Following the procedure below pushes this Bash script:

#!/bin/sh
echo "Custom setup"
echo "[INFO] Running default post-start script for OpenVPN Access Server..."
# Waiting for AS service initialization
until /usr/local/openvpn_as/scripts/sacli status 2>/dev/null | grep -q '"api": "on"'
do
    sleep 2
done
sacli --user "admin" --key "type" --value "user_connect" UserPropPut
sacli --user "admin" --new_pass "secure123" SetLocalPassword
sacli --user "admin" --key "prop_superuser" --value "true" UserPropPut
sacli start

Important

If you plan to use sacli in the postStart script, ensure that the Access Server service is fully up and running beforehand. The inline script above shows one way to wait: the until loop polls sacli status and sleeps 2 seconds between checks.

  1. Add the Helm Chart Repository to your system:

    helm repo add as-helm-chart https://openvpn.github.io/as-helm-chart
  2. Go to Artifact Hub, where the Access Server Helm Chart is hosted, and click the DEFAULT VALUES option on the right panel.

  3. Copy the content of the values.yaml file.

  4. Create a new file (example: my-values.yaml) on the machine where you'll deploy your Access Server Helm Chart:

    nano my-values.yaml
  5. Paste the copied contents from the values.yaml file.

  6. Edit the postStart script section of your new file from this:

    postStart:
      enabled: false
    
      # Optional: inline override
      customScript: ""

    To this:

    postStart:
      enabled: true
    
      # Optional: inline override
      customScript: |
        #!/bin/sh
        echo "Custom setup"
        echo "[INFO] Running default post-start script for OpenVPN Access Server..."
        # Waiting for AS service initialization
        until /usr/local/openvpn_as/scripts/sacli status 2>/dev/null | grep -q '"api": "on"'
        do
            sleep 2
        done
        sacli --user "admin" --key "type" --value "user_connect" UserPropPut
        sacli --user "admin" --new_pass "secure123" SetLocalPassword
        sacli --user "admin" --key "prop_superuser" --value "true" UserPropPut
        sacli start
  7. Save and exit by pressing Ctrl+x, then y.

  8. Install the Access Server Helm Chart with your new custom file (example: my-values.yaml):

    helm install my-openvpn-as as-helm-chart/openvpn-as -f my-values.yaml
    • Access Server deploys, and the postStart hook runs the custom commands from your Bash script.

  9. Verify that Access Server was installed correctly by identifying the pod name:

    helm status my-openvpn-as --show-resources
    • Example output:

      # helm status my-openvpn-as --show-resources
      NAME: my-openvpn-as
      LAST DEPLOYED: Wed Sep  3 17:06:26 2025
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1
      RESOURCES:
      ==> v1/Secret
      NAME                                 TYPE     DATA   AGE
      my-openvpn-as-openvpn-as-poststart   Opaque   1      2d23h
      
      ==> v1/PersistentVolumeClaim
      NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       VOLUMEATTRIBUTESCLASS   AGE
      my-openvpn-as-openvpn-as-pvc   Bound    pvc-256c79ba-97e9-474b-8b7c-8008187bb2fb   5Gi        RWO            do-block-storage   <unset>                 2d23h
      
      ==> v1/Service
      NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                                      AGE
      my-openvpn-as-openvpn-as   LoadBalancer   10.108.92.231   203.0.113.5       943:30479/TCP,443:31010/TCP,1194:31020/UDP   2d23h
      
      ==> v1/Deployment
      NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
      my-openvpn-as-openvpn-as   1/1     1            1           2d23h
      
      ==> v1/Pod(related)
      NAME                                        READY   STATUS    RESTARTS   AGE
      my-openvpn-as-openvpn-as-6b474c9757-jmjlc   1/1     Running   0          2d23h
  10. Use the Pod Name and the sacli tool to verify the new user:

    kubectl exec -it <Pod Name> -- /bin/bash sacli UserPropGet

    Replace <Pod Name> with your pod name. From our example: my-openvpn-as-openvpn-as-6b474c9757-jmjlc.

    • Example output:

      # kubectl exec -it my-openvpn-as-openvpn-as-6b474c9757-jmjlc -- /bin/bash sacli UserPropGet
      {
        "__DEFAULT__": {
          "prop_autogenerate": "true",
          "type": "user_default"
        },
        "admin": {
          "prop_superuser": "true",
          "pvt_password_digest": "$P$zIggWugxu9qv/LZid+np+Q==$0+RZiz4UK6y4c1+A3aG8gBBS08sFAmX3x23U+vxqP1c=",
          "type": "user_compile"
        },
        "openvpn": {
          "prop_superuser": "true",
          "pvt_password_digest": "$P$zYL2GZ3tRzUDw0/rWj6EPA==$QNQmZLQqkR0T1o4drQVkCGLDopr4tY0TIerl2yMmRo0=",
          "type": "user_compile",
          "user_auth_type": "local"
        }
      }
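
The chart stores the postStart script in a Secret (my-openvpn-as-openvpn-as-poststart in the helm status output above). To confirm your inline script was applied, you can inspect it; the exact data key inside the Secret may vary:

    kubectl get secret my-openvpn-as-openvpn-as-poststart -o yaml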
      

The following are known issues or limitations if you deploy Access Server from a Docker image:

  • Failover mode: This feature isn't supported.

  • Layer 2 (bridging): This mode isn't supported.

  • Fixed license keys: This license key model isn't supported. Using fixed keys can cause license invalidation because the hardware specification fingerprint isn't persistent.

  • DCO: You can enable DCO with Access Server if you install and load it on the host Linux system.

  • Clustering: You can use Access Servers, deployed from Docker images, to build cluster functionality with the following limitations:

    • You must expose port TCP 945 for internode communication.

    • You can only run one cluster node per Host system at a time.

    • Hosts must be available directly from the internet, not via a load balancer, proxy, or ingress controller.

  • Performance: The additional abstraction layer can cause performance degradation and increase latency. We don't recommend deploying highly loaded installations using Docker.

  • PAM authentication method: We recommend avoiding PAM as your authentication method because user credentials stored inside the container aren't persistent.

  • Logs: Access Server forwards logs to Docker, so logging can't be managed from the Access Server configuration. See the Docker logging documentation to set up rotation, forwarding, and so on.

  • IPv6: We don't recommend Access Server inside a Docker container if you plan to use IPv6 for VPN clients because IPv6 support in the Docker network toolset is limited/experimental.

Submit a support ticket if you need assistance.

  • Check pod status:

    kubectl get pods
  • If a pod fails to start:

    kubectl describe pod <Pod Name>
    kubectl logs <Pod Name>

    This displays events, errors, and useful debugging information.

Common issues:

  • ErrImagePull: Docker image not found or a network issue.

  • CrashLoopBackOff: Service not starting correctly; check postStart scripts.
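
  • Cluster events: Recent events often point at the cause of both errors; a quick check:

    kubectl get events --sort-by=.lastTimestamp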