Passing environment variables to the deployment configuration
dbx supports passing environment variables into the deployment configuration, giving you an additional level of flexibility.
Environment variables can be used in both JSON- and YAML-based configurations. This allows you to parametrize the deployment and make it more flexible for CI pipelines.
For example, in a JSON-based deployment file:

{
  "default": {
    "jobs": [
      {
        "name": "your-job-name",
        "timeout_seconds": "${TIMEOUT}",
        "email_notifications": {
          "on_failure": [
            "${ALERT_EMAIL}",
            "presetEmail@test.com"
          ]
        },
        "new_cluster": {
          "spark_version": "7.3.x-cpu-ml-scala2.12",
          "node_type_id": "some-node-type",
          "aws_attributes": {
            "first_on_demand": 0,
            "availability": "${AVAILABILITY:SPOT}"
          },
          "num_workers": 2
        },
        "libraries": [],
        "max_retries": "${MAX_RETRY:3}",
        "spark_python_task": {
          "python_file": "tests/deployment-configs/placeholder_1.py"
        }
      }
    ]
  }
}
The same job in a YAML-based configuration:

environments:
  default:
    jobs:
      - name: "your-job-name"
        timeout_seconds: !ENV ${TIMEOUT}
        email_notifications:
          on_failure:
            - !ENV ${ALERT_EMAIL}
            - "presetEmail@test.com"
        new_cluster:
          spark_version: "7.3.x-cpu-ml-scala2.12"
          node_type_id: "some-node-type"
          aws_attributes:
            first_on_demand: 0
            availability: "SPOT"
          num_workers: 2
        libraries: []
        max_retries: !ENV ${MAX_RETRY:3}
        spark_python_task:
          python_file: "tests/deployment-configs/placeholder_1.py"
We also support specifying default values via environment variables. They should be specified in the form ${ENV_VAR:<default_value>}; if the environment variable is not set, the default value is used instead.
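To make the substitution rule concrete, here is an illustrative sketch (not dbx's actual implementation) of how ${VAR} and ${VAR:default} placeholders in a string resolve against the environment:

```python
import os
import re

# Matches ${VAR} and ${VAR:default}; group 1 is the variable name,
# group 2 is the optional default value (None if absent).
_PLACEHOLDER = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)(?::([^}]*))?\}")

def resolve(value: str) -> str:
    """Replace each ${VAR} / ${VAR:default} with its environment value."""
    def _sub(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        if name in os.environ:
            return os.environ[name]
        if default is not None:
            return default
        raise KeyError(f"environment variable {name} is not set and has no default")
    return _PLACEHOLDER.sub(_sub, value)

os.environ["ALERT_EMAIL"] = "ops@example.com"
print(resolve("${ALERT_EMAIL}"))   # ops@example.com
print(resolve("${MAX_RETRY:3}"))   # 3 (variable unset, so the default is used)
```

Note that a set environment variable always wins over the inline default; the default only applies when the variable is missing from the environment.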
Note

Unlike JSON, in YAML you have to prefix your environment variables with the !ENV tag for them to be resolved correctly.
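In a CI pipeline, the referenced variables simply need to be present in the environment of the process that runs the deployment. As a hedged sketch (GitHub Actions syntax; the step name and values are hypothetical), a deploy step could look like:

```yaml
# Hypothetical CI step supplying the variables referenced in the
# deployment file before invoking dbx.
- name: Deploy job definitions
  run: dbx deploy
  env:
    TIMEOUT: "3600"
    ALERT_EMAIL: "team-alerts@example.com"
    # AVAILABILITY and MAX_RETRY may be omitted here:
    # they fall back to their inline defaults (SPOT and 3).
```

Variables with inline defaults can be left out of the CI configuration entirely, which keeps the pipeline definition minimal.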