Release 1.4.1

Changes

[CSD-454] - Increase default volume size of rancher k3s node

We increased the default volume size of the EC2 instance for the Rancher K3s cluster due to disk pressure reports. The size was previously 8 GB.

Note

If you have already set up the Rancher infrastructure, this change will not increase your EC2 volume size automatically. If you have no disk pressure notifications in Rancher, you do not need to apply this change manually. If you want to apply the increased disk volume manually, please see the official Amazon AWS documentation.
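As a rough sketch, resizing manually typically means growing the EBS volume and then the partition and filesystem on the node. The volume ID, target size, and device names below are placeholders; check the AWS documentation for the exact procedure for your setup:

```shell
# Grow the EBS volume (placeholder volume ID and target size)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20

# On the node: extend the partition and the filesystem
# (device and partition names are assumptions; verify with lsblk)
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
```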

[CSD-455] - Limit access to organisation graphql endpoint for superusers only

Access to the AllOrganisations query is now restricted by default to users with superuser status.

[CSD-456] - Added missing favicon for the docs

Added a Carrot Seed favicon to the Sphinx documentation when it is served as HTML.

[CSD-457] - Improved docs for wsl installation in the quickstart guide

The Windows Subsystem for Linux (WSL) documentation was improved in the quickstart guide.

[CSD-460] Pulumi change default secret provider from awskms to passphrase

To simplify getting started, we changed the default Pulumi secrets provider from awskms to passphrase.

Note

If you have already created your infrastructure with Pulumi, please set CSD_PULUMI_DEFAULT_SECRETS_PROVIDER to awskms.
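For example, in your deployment environment config (a minimal sketch using the variable name from this release note):

```shell
# Keep the previous secrets provider for infrastructure that was
# already created with awskms
export CSD_PULUMI_DEFAULT_SECRETS_PROVIDER=awskms
```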

[CSD-467] Add Bitbucket Cloud for Drone CI

CSD_DRONE_GIT_SYSTEM now allows bitbucketcloud as an additional option. Note that you have to set CSD_DRONE_BITBUCKET_KEY and CSD_DRONE_BITBUCKET_SECRET for it to work. This only affects the initial setup of the Pulumi shared infrastructure. If you have already set it up once, changes to CSD_DRONE_GIT_SYSTEM will not come into effect.
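A minimal sketch of the required variables; the key and secret values below are hypothetical placeholders for the OAuth consumer credentials from your Bitbucket workspace:

```shell
# Select Bitbucket Cloud as the Drone CI git system
export CSD_DRONE_GIT_SYSTEM=bitbucketcloud
# Hypothetical placeholder values; use your real OAuth consumer credentials
export CSD_DRONE_BITBUCKET_KEY="your-oauth-key"
export CSD_DRONE_BITBUCKET_SECRET="your-oauth-secret"
```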

[CSD-468] Rancher Version Upgrade 2.5.9

Upgraded the Pulumi scripts in the shared deployment environment to use Rancher 2.5.9 (instead of 2.5.8). Existing installations are not affected; they need to be upgraded manually. See the Rancher documentation for upgrading K3s Rancher installations.

[CSD-469] Improve Pulumi AWS Multi Account Architecture to use a single shared account with multiple projects

Each Carrot Seed project that uses the Pulumi scripts to set up AWS multi-account deployment environments also has a shared AWS account which provides CI, a Docker registry, and Kubernetes cluster management for the prod, staging and dev environments.

Before this feature was implemented, each Carrot Seed project needed its own shared account. This is fine if you only have one Carrot Seed project, but can be inconvenient if you work with multiple Carrot Seed projects and want to share the shared infrastructure across all of them.

This user story implements proper Pulumi resource naming prefixed by the project key, so that multiple projects, each with their own prod, staging and dev environments, can share a single shared AWS account, which should be created by the first project.

This has no impact on existing projects unless you rerun the pulumi_root.sh stack, which should not be required for existing projects.

[CSD-470] Add pulumi passphrase env variable to security documentation and example secrets

Added the Pulumi passphrase variables to the security documentation, as these variables must be changed from their defaults to custom values to stay secure.

  • CSD_PULUMI_SECRETS_PROVIDER_PASSPHRASE_ROOT

  • CSD_PULUMI_SECRETS_PROVIDER_PASSPHRASE_SHARED

  • CSD_PULUMI_SECRETS_PROVIDER_PASSPHRASE_MAIN

Note

This only affects security if you have created your infrastructure with Pulumi or plan to do so; otherwise no action is required.

If you didn’t change the values before creating the infrastructure, you should change the passphrases immediately. To do this, execute cli/pulumi_<main/root/shared>.sh stack change-secrets-provider passphrase in each environment, enter your new passphrase, and after the command completes, replace the env variable values in the corresponding xxx_env.secrets.sh file.
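One way to generate strong custom values (a sketch assuming a POSIX shell with openssl available; remember to store the same values in the corresponding secrets files):

```shell
# Generate random passphrases for each Pulumi stack;
# persist them in the corresponding xxx_env.secrets.sh files afterwards
export CSD_PULUMI_SECRETS_PROVIDER_PASSPHRASE_ROOT="$(openssl rand -base64 32)"
export CSD_PULUMI_SECRETS_PROVIDER_PASSPHRASE_SHARED="$(openssl rand -base64 32)"
export CSD_PULUMI_SECRETS_PROVIDER_PASSPHRASE_MAIN="$(openssl rand -base64 32)"
```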

[CSD-471] Upgraded Kubernetes Manifests

Upgraded the Kubernetes manifests to remove deprecated APIs that will be removed in Kubernetes 1.22. Additionally, upgraded cert-manager from 0.14 to 1.5.1.

Note

Upgrading the manifests in /conf/kubernetes might lead to breaking changes for Kubernetes deployments (manual and automatic) depending on the Kubernetes version you are using.

You can upgrade to this Carrot Seed version and then make a test deployment to a dev environment to check whether the deployment works. If you are unsure or get manifest errors, you can check the official Kubernetes deprecation docs, which include manifest upgrade info.

[CSD-472] Fixed CsdDjangoObjectType exclude field not working.

In contrast to the fields option, exclude did not work: it returned no fields at all, so even non-excluded fields were missing. This is fixed now.

[CSD-473] Fixed selectedOrganisation missing in UserType

The field selectedOrganisation was missing from the GraphQL UserType; it has been added.

[CSD-474] Fixed django-dbbackup errors when restoring on blank db

Added DROP=False to DBBACKUP_CONNECTORS in base.py.

[CSD-475] Set all SCSS palettes variables “!default”.

To make it possible to override the palette variables, they are now all set with the !default flag.

[CSD-476] Add multi AZ HA setup coded in pulumi

When setting up the k8s infrastructure in AWS with the included Pulumi scripts, you can now easily migrate from a single-node to a multi-AZ HA setup by changing just two variables in your deployment environment config.

# set the number of availability zones across which the nodes are spread
export CSD_NUM_AZ_FOR_CLUSTER_NODES=1 # value range between 1 - 3
# set the number of nodes which are created in each availability zone
export CSD_NUM_CLUSTER_NODES_IN_AZ=1  # value range between 1 - n

Note

If you already have an existing cluster built with Pulumi, make sure to set the aliases field in __opts__ manually for rancher2.NodeTemplate and rancher2.NodePool, only for the first loop iteration, to rename the resource IDs. You can delete the aliases again after you have run pulumi_main.sh up once for each deployment environment.

main_node_pool = rancher2.NodePool(
    f'main-node-pool-{zone_char}',
    # ...
    __opts__=pulumi.ResourceOptions(
        provider=provider_rancher, aliases=[pulumi.Alias(name='main-node-pool')]
    )
)

For further info on aliases see the Pulumi Alias Documentation

[CSD-478] Add AWS loadbalancer to pulumi IaC scripts

Added an AWS Network Load Balancer in front of the Kubernetes cluster. It is fully integrated into the Pulumi scripts and can be enabled or disabled with the environment variable CSD_SETUP_LOADBALANCER in the deployment environments. It is a pass-through load balancer, so the ingress instances still do the TLS termination and Let’s Encrypt keeps working out of the box, as the encrypted traffic is passed through to the ingress system.

By default it is configured as a cross-zone load balancer if you are using multiple availability zones, so everything is in place for a high-availability setup across availability zones.
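A minimal sketch for a deployment environment config; a boolean-style value is an assumption here, so check your environment scripts for the exact format expected:

```shell
# Enable the AWS network load balancer in this deployment environment
# (value format is an assumption; verify against your environment scripts)
export CSD_SETUP_LOADBALANCER=true
```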

Note

After you have updated your infrastructure to use the load balancer, you need to add your cluster node instances manually to the target groups main-lb-tg-http-* and main-lb-tg-https-* in the AWS Console in each deployment environment. Whenever you change the number of cluster nodes, you need to update the target groups manually.

[CSD-375] Upgraded versions in infrastructure

Upgraded versions in drone.yml so that all the new Kubernetes features are also available in the Drone CI system. The new K3s version is v1.20.6-k3s1, the same as the new Rancher k8s version.

Note

This might break some k8s manifests if you are using schemas which are deprecated in Kubernetes 1.20.6. See the official Kubernetes deprecation docs for further details.

[CSD-480] Cleanup of naming in infrastructure and env variables

Aligned the naming of variables in the Pulumi scripts and in the deployment environment scripts.

[CSD-481] Change title of example website homepage to CSD_DISPLAY_NAME

When creating the website with example data via the setup_example_website_data Django manage command, the title of the homepage is set based on CSD_DISPLAY_NAME.
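For example (the display name below is a hypothetical value, and the standard manage.py invocation is an assumption):

```shell
# Hypothetical project name; the example homepage title is derived from it
export CSD_DISPLAY_NAME="My Carrot Seed Project"
# Then create the example data (standard Django invocation assumed):
# python manage.py setup_example_website_data
```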

Fixed Bugs

Breaking

Known Issues

  • Angular SSR is currently broken; the webapp-frontend-ssr container image doesn’t build in the prod
    target (local_prod, dev, staging, prod).
    As a workaround, change the settings of your project in the Carrot Console and disable the
    ‘Angular SSR’ option.