Creating a BucketClass for Ceph RGW

Ceph Object Storage can be exposed to Kubernetes workloads via the Container Object Storage Interface (COSI), providing highly scalable and elastic storage for big‑data analytics, backup & restore, and machine‑learning scenarios. A BucketClass is required before users can provision buckets.

A BucketClass is a template resource that specifies the storage driver, authentication secret, and the deletion policy that will be applied to every bucket created from it.

Prerequisites

  • Running Ceph cluster with RGW (S3) enabled – Internal (Rook-managed) or external cluster is acceptable.
  • Alauda Container Platform COSI plug‑ins – Both Alauda Container Platform COSI and Alauda Container Platform COSI for Ceph must be installed.
  • Kubernetes Secret containing Ceph RGW credentials – Prepared in Step 3 below.

Step 1 – Prepare a Ceph Cluster

Choose one of the following:

  • Internal Ceph – Ceph cluster deployed and managed inside the platform by the Rook Operator. See create a storage service for details.
  • External Ceph – Stand‑alone Ceph cluster reachable from the platform network.
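
If you use the internal option, you can confirm that the Rook-managed object store is running before continuing. A minimal check, assuming the default rook-ceph namespace and an object store named object-store (adjust both to your environment):

# Confirm the CephObjectStore exists and is healthy
kubectl get cephobjectstore -n rook-ceph

# Confirm the RGW pods are running
kubectl get pods -n rook-ceph -l app=rook-ceph-rgw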

Step 2 – Install the COSI Plug‑in

Install the following cluster plug‑ins:

  1. Alauda Container Platform COSI
  2. Alauda Container Platform COSI for Ceph

Refer to Installing for exact commands.
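
After both plug‑ins are installed, you can verify that the COSI API resources are available on the cluster. A quick sanity check, assuming the upstream COSI v1alpha1 CRDs are used:

# The COSI CRDs live in the objectstorage.k8s.io API group
kubectl get crd | grep objectstorage.k8s.io

# BucketClass, BucketClaim, and related kinds should be listed
kubectl api-resources --api-group=objectstorage.k8s.io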

Step 3 – Prepare the Credential Secret

COSI retrieves RGW credentials from a Kubernetes Secret. Pick one method depending on your Ceph deployment.

Method A – Auto‑generate (Rook‑managed Ceph)

  1. Create a CephObjectStoreUser in the rook‑ceph namespace:

    # ceph-object-store-user.yaml
    apiVersion: ceph.rook.io/v1
    kind: CephObjectStoreUser
    metadata:
      name: user-for-cosi
      namespace: rook-ceph
    spec:
      store: object-store               # name of your CephObjectStore
      capabilities:                    # optional RGW admin capabilities for this user
        bucket: "read, write"
        user: "read, write"
  2. Apply the manifest:

    kubectl apply -f ceph-object-store-user.yaml
  3. Retrieve the autogenerated Secret name (used later):

    kubectl get cephobjectstoreuser user-for-cosi -n rook-ceph \
      -o jsonpath='{.status.info.secretName}'
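
Optionally, confirm that the user has been reconciled and inspect the generated Secret. A sketch, assuming the rook-ceph namespace; the exact data keys (typically AccessKey, SecretKey, and Endpoint) can vary with the Rook version:

# Check that the object store user has been created and reconciled
kubectl get cephobjectstoreuser user-for-cosi -n rook-ceph

# List the data keys held by the generated Secret (replace <secret-name> with the name retrieved above)
kubectl get secret <secret-name> -n rook-ceph -o jsonpath='{.data}'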

Method B – Manual (External Ceph)

  1. Obtain AccessKey, SecretKey, and RGW Endpoint.

  2. Create a Secret in the target project/namespace and label it so the UI can discover it:

    kubectl create secret generic ceph-external-creds -n <YOUR_NAMESPACE> \
      --from-literal=AccessKey=<YOUR_ACCESS_KEY> \
      --from-literal=SecretKey=<YOUR_SECRET_KEY> \
      --from-literal=Endpoint=http://<YOUR_RGW_ENDPOINT>
    
    kubectl label secret ceph-external-creds -n <YOUR_NAMESPACE> app=rook-ceph-rgw

    Important: The label app=rook-ceph-rgw is mandatory for the platform UI to list the Secret.
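
If you prefer to manage the credentials declaratively (for example in a GitOps repository), an equivalent Secret manifest looks roughly like this; the Secret name and placeholder values are illustrative:

# ceph-external-creds.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-external-creds
  namespace: <YOUR_NAMESPACE>
  labels:
    app: rook-ceph-rgw               # required so the platform UI can discover the Secret
type: Opaque
stringData:                          # stringData accepts plain (unencoded) values
  AccessKey: <YOUR_ACCESS_KEY>
  SecretKey: <YOUR_SECRET_KEY>
  Endpoint: http://<YOUR_RGW_ENDPOINT>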

Step 4 – Create the BucketClass

Option 1 – UI Workflow

  1. Navigate to Storage → Object StorageClass and click Create Object StorageClass.

  2. Select Ceph Object Storage as the driver.

  3. Configure the following fields:

    • Deletion Policy – How the underlying bucket is handled when its BucketClaim is deleted (default: Delete).
    • Secret – Pick the Secret prepared in Step 3 (only Secrets with app=rook-ceph-rgw are shown).
    • Allocate Projects – (Optional) Restrict usage to specific projects.
  4. Click Create.

Option 2 – YAML (GitOps‑friendly)

Create ceph-bucketclass.yaml with the correct Secret references:

apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClass
metadata:
  name: ceph-cosi-driver
  labels:
    project.cpaas.io/ALL_ALL: "true"                        # make the class available to all projects
driverName: ceph.objectstorage.k8s.io                       # COSI driver for Ceph object storage
deletionPolicy: Delete                                      # Delete or Retain the bucket when its BucketClaim is removed
parameters:
  objectStoreUserSecretName: <your-secret-name>             # Secret prepared in Step 3
  objectStoreUserSecretNamespace: <your-secret-namespace>   # namespace that holds the Secret

Apply the manifest:

kubectl apply -f ceph-bucketclass.yaml

Verification & Next Steps

Verify the BucketClass:

kubectl get bucketclass

Once the BucketClass is ready, you can create Bucket or BucketClaim resources referencing it, thereby provisioning S3‑compatible object storage for your applications.
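
For example, a minimal BucketClaim that provisions a bucket from this class could look like the following; the claim name and namespace are illustrative:

apiVersion: objectstorage.k8s.io/v1alpha1
kind: BucketClaim
metadata:
  name: my-bucket-claim
  namespace: <YOUR_NAMESPACE>
spec:
  bucketClassName: ceph-cosi-driver   # the BucketClass created above
  protocols:
    - s3

Once the claim is bound, COSI provisions the backing bucket in Ceph RGW according to the BucketClass parameters and deletion policy.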