This is part two of a five-part series addressing Airflow at an enterprise scale. I will update these with links as they are published.

- Airflow + Helm: Simple Airflow Deployment

Previously, we formulated a plan to provision Airflow in a Kubernetes cluster using Helm and then build up the supporting services and various configurations that we will need to ensure our cluster is production ready. This post will focus on getting the Helm chart deployed to our Kubernetes service. Code samples can be found here.

This most basic of configurations requires a database, and we have chosen to use PostgreSQL in this case. I will be using the Azure PostgreSQL service, but any compatible version will do.

First, log into the database server using the psql command:

```
psql "host=************. port=5432 dbname=postgres user=**************** password=********* sslmode=require"
```

Next, referring to the Airflow documentation, we can execute the following commands:

```
CREATE DATABASE airflow_db;
CREATE USER airflow WITH PASSWORD 'your-password';
GRANT ALL PRIVILEGES ON DATABASE airflow_db TO airflow;
```

**Pulling the Chart and Value File**

After the database is set up, we can move on to preparing the chart and our values file. Using Helm, add the airflow chart repository:

```
helm repo add apache-airflow
```

For the values file, retrieve the default values from the chart.

Set Airflow to use the KubernetesExecutor:

```
executor: "KubernetesExecutor"
```

Make sure we have some example DAGs to play with:

```
env:
```

Turn off the chart's provided PostgreSQL resources:

```
postgresql:
```

Input credentials and database information:

```
data:
```

Now that we have our values file set up for our database, we can deploy the chart. Authenticate with the cluster:

```
az aks get-credentials --name airflow-demo --resource-group airflow-demo
```

Add a namespace:

```
kubectl create ns airflow
```

The Airflow chart has a tendency towards long run times, so increase the timeout as you install the chart:

```
helm upgrade \
```

After Helm exits, we can navigate to our Kubernetes Dashboard and see the replica sets, pods, etc., that have been provisioned.

Airflow pods running in Azure Kubernetes Service.

Now we should log into the cluster using the credentials provided in the Helm output. As we didn't enable the ingress feature of the chart, access to the Airflow cluster requires port forwarding:

```
kubectl port-forward svc/airflow-webserver 8080:8080 --namespace airflow
```

Navigating to the forwarded address will bring up the login screen. After entering the credentials from the Helm output, you'll see a table of DAGs. To test our installation, unpause a DAG using the toggle on the left side of the screen and execute it. We expect a number of pods to be created as the tasks execute.

Finally, a few of the chart's values are worth noting (descriptions taken from the default values file):

- Airflow version (used to make some decisions based on the Airflow version being deployed).
- Default airflow repository.
- Default airflow digest to deploy. Overrides tag.
- Settings to go into the mounted airflow.cfg.
- airflow_local_settings file as a string (can be templated).
- The Fernet key used to encrypt passwords (can only be set during install, not upgrade).
- A custom webserver_config.py: this string (can be templated) will be mounted into the Airflow webserver. You can bake a webserver_config.py into your image instead, or specify a ConfigMap containing the webserver_config.py.
- securityContext: runAsGroup: 0, runAsUser: 50000.
- Enable persistent volume for storing dags. If using a custom StorageClass, pass its name here.
- Name of a Secret containing the repo sshKeySecret.
- Subpath within the repo where dags are located.
- Git sync container run-as-user parameter.
- Interval between git sync attempts in seconds. Low values cause more traffic to the remote git repository; high values are more likely to cause DAGs to become out of sync between different components.
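Putting the values-file overrides above together, a minimal file might look like the following sketch. The key names follow the apache-airflow chart's values schema; the host, user, and password shown are placeholders for your own database details, not values from this post:

```yaml
# values.yaml -- sketch of the overrides discussed above; connection details are placeholders
executor: "KubernetesExecutor"

# Load Airflow's bundled example DAGs so we have something to run
env:
  - name: "AIRFLOW__CORE__LOAD_EXAMPLES"
    value: "True"

# Disable the chart's bundled PostgreSQL, since we bring our own database
postgresql:
  enabled: false

# Point the metadata database at the external PostgreSQL instance
data:
  metadataConnection:
    user: airflow
    pass: your-password
    protocol: postgresql
    host: your-server.postgres.database.azure.com
    port: 5432
    db: airflow_db
    sslmode: require
```

With `postgresql.enabled: false`, the chart skips its in-cluster database entirely, so the `metadataConnection` block is the only place Airflow learns where its metadata lives.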
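The `helm upgrade` command in this post is truncated. As a hedged sketch, a typical invocation with an increased timeout might look like this, assuming a release name of `airflow`, the namespace created earlier, and a values file named `values.yaml` (the release name, file name, and ten-minute timeout are assumptions, not values from this post):

```shell
# Install (or upgrade) the chart into the airflow namespace with a longer timeout
helm upgrade --install airflow apache-airflow/airflow \
  --namespace airflow \
  -f values.yaml \
  --timeout 10m0s
```

`--install` makes the command idempotent: it performs a fresh install on the first run and an upgrade on subsequent runs.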