Avoiding CDK Pipelines Support Stacks

If you’ve ever used CDK Pipelines to deploy stacks cross-region, you’ve probably come across support stacks. CodePipeline automatically creates stacks named <PipelineStackName>-support-<region> that contain a replication bucket and sometimes a KMS key. CodePipeline uses these buckets to replicate deployment artifacts across regions.

As you add more pipelines to your project, the number of these stacks can get daunting, and because their buckets don’t use autoDeleteObjects, the buckets get left behind when the stacks are deleted. The pipeline’s own artifact bucket even has removalPolicy: RemovalPolicy.RETAIN. Since these stacks are deployed to other regions, it’s also very easy to forget about them when you delete the pipeline stack. Avoiding them is straightforward, but it does take a bit of work and understanding.

The CodePipeline documentation covers the basic steps, but CDK Pipelines requires a couple more.

One-time Setup

  1. Create a bucket in each region where stacks are deployed.
  2. Set a bucket policy that allows the target accounts to read it.
  3. Create a KMS key in each region (may be optional if you’re not deploying cross-account).
  4. Set a key policy that allows the target accounts to decrypt using it.

Here is sample Python code:

try:
    import aws_cdk.core as core  # CDK 1
except ImportError:
    import aws_cdk as core  # CDK 2
from aws_cdk import aws_iam as iam
from aws_cdk import aws_kms as kms
from aws_cdk import aws_s3 as s3

app = core.App()
for region in ["us-east-1", "us-west-1", "eu-west-1"]:
    artifact_stack = core.Stack(
        app,
        f"common-pipeline-support-{region}",
        env=core.Environment(
            account="123456789012",
            region=region,
        ),
    )
    key = kms.Key(
        artifact_stack,
        "Replication Key",
        removal_policy=core.RemovalPolicy.DESTROY,
    )
    key_alias = kms.Alias(
        artifact_stack,
        "Replication Key Alias",
        alias_name=core.PhysicalName.GENERATE_IF_NEEDED,  # generate a concrete name so the alias can be referenced from other environments
        target_key=key,
        removal_policy=core.RemovalPolicy.DESTROY,
    )
    bucket = s3.Bucket(
        artifact_stack,
        "Replication Bucket",
        bucket_name=core.PhysicalName.GENERATE_IF_NEEDED,  # generate a concrete name so the bucket can be referenced from other environments
        encryption_key=key_alias,
        auto_delete_objects=True,
        removal_policy=core.RemovalPolicy.DESTROY,
    )

    for target_account in ["222222222222", "333333333333"]:
        bucket.grant_read(iam.AccountPrincipal(target_account))
        key.grant_decrypt(iam.AccountPrincipal(target_account))

CDK Pipeline Setup

  1. Create a codepipeline.Pipeline object:
    • If you’re deploying stacks cross-account, set crossAccountKeys: true for the pipeline.
  2. Pass the Pipeline object in CDK CodePipeline’s codePipeline argument.

Here is sample Python code:

try:
    import aws_cdk.core as core  # CDK 1
except ImportError:
    import aws_cdk as core  # CDK 2
from aws_cdk import aws_codepipeline as codepipeline
from aws_cdk import aws_kms as kms
from aws_cdk import aws_s3 as s3
from aws_cdk import pipelines

app = core.App()
pipeline_stack = core.Stack(app, "pipeline-stack")
pipeline = codepipeline.Pipeline(
    pipeline_stack,
    "Pipeline",
    cross_region_replication_buckets={
        region: s3.Bucket.from_bucket_attributes(
            pipeline_stack,
            f"Bucket {region}",
            bucket_name="insert bucket name here",
            encryption_key=kms.Key.from_key_arn(
                pipeline_stack,
                f"Key {region}",
                key_arn="insert key arn here",
            )
        )
        for region in ["us-east-1", "us-west-1", "eu-west-1"]
    },
    cross_account_keys=True,
    restart_execution_on_update=True,
)
cdk_pipeline = pipelines.CodePipeline(
    pipeline_stack,
    "CDK Pipeline",
    code_pipeline=pipeline,
    # ... other settings here ...
)

Tying it Together

The missing piece from the pipeline code above is how it gets the bucket and key names. That depends on how your code is laid out. If everything is in one project, you can create the support stacks in that same project and access the objects in them. That’s what PhysicalName.GENERATE_IF_NEEDED is for.

If the project that creates the buckets is separate from the pipeline project, or if there are many different pipeline projects, you can write the bucket and key names to a central location. For example, they can be written into SSM parameters. Or, if your project is small enough, you can even hardcode them.
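As a sketch of the SSM approach (the parameter naming convention and helper below are hypothetical, not part of the original setup): the support-stack project records the generated names, and pipeline projects read them back at synth time. SSM parameters are regional, so the reader queries each support region.

```python
def parameter_name(region: str, kind: str) -> str:
    """Naming convention shared by both projects (hypothetical)."""
    return f"/common-pipeline-support/{region}/{kind}"

# In the support-stack project, inside the per-region loop above:
#
#     from aws_cdk import aws_ssm as ssm
#     ssm.StringParameter(
#         artifact_stack, "Bucket Name Parameter",
#         parameter_name=parameter_name(region, "bucket-name"),
#         string_value=bucket.bucket_name,
#     )
#     ssm.StringParameter(
#         artifact_stack, "Key Arn Parameter",
#         parameter_name=parameter_name(region, "key-arn"),
#         string_value=key.key_arn,
#     )

def read_support_names(region: str) -> dict:
    """Read the recorded names back in a pipeline project at synth time."""
    import boto3  # local import so the module loads without boto3 installed

    ssm_client = boto3.client("ssm", region_name=region)
    return {
        kind: ssm_client.get_parameter(
            Name=parameter_name(region, kind)
        )["Parameter"]["Value"]
        for kind in ("bucket-name", "key-arn")
    }
```

The returned values can then be fed into Bucket.from_bucket_attributes and Key.from_key_arn in the pipeline code above.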

Another option to try out is cdk-remote-stack, which lets you easily “import” values from the support stacks you created even though they are in a different region.

Conclusion

CDK makes life easy by creating CodePipeline replication buckets for you using support stacks. But sometimes it’s better to do things yourself to get a less cluttered CloudFormation and S3 resource list. Avoid the mess by creating the replication buckets yourself and reusing them with every pipeline.
