[[["わかりやすい","easyToUnderstand","thumb-up"],["問題の解決に役立った","solvedMyProblem","thumb-up"],["その他","otherUp","thumb-up"]],[["わかりにくい","hardToUnderstand","thumb-down"],["情報またはサンプルコードが不正確","incorrectInformationOrSampleCode","thumb-down"],["必要な情報 / サンプルがない","missingTheInformationSamplesINeed","thumb-down"],["翻訳に関する問題","translationIssue","thumb-down"],["その他","otherDown","thumb-down"]],["最終更新日 2024-12-23 UTC。"],[],[],null,["# Configuring a custom boot disk\n\n[Standard](/kubernetes-engine/docs/concepts/choose-cluster-mode)\n\n*** ** * ** ***\n\nThis page shows you how to customize a node boot disk in your\nGoogle Kubernetes Engine (GKE) [clusters](/kubernetes-engine/docs/concepts/cluster-architecture) and [node pools](/kubernetes-engine/docs/concepts/node-pools).\n\nOverview\n--------\n\nWhen you create a GKE cluster or node pool, you can choose\nthe type of Persistent Disk onto which the Kubernetes node file system is\ninstalled for each node. By default, GKE uses balanced\nPersistent Disks in version 1.24 or later. You can also specify other\nPersistent Disk types, such as standard or SSD. For more information, see\n[Storage options](/compute/docs/disks).\n| **Note:** This feature differs from [Local SSD](/kubernetes-engine/docs/concepts/local-ssd), which can't be used as a boot disk.\n\nBalanced and SSD Persistent Disks have disk quotas which are different\nfrom standard Persistent Disk quotas. If you are switching from standard to\nbalanced Persistent Disks, you may need to request for quota increases. For\nmore information, see [Resource quotas](/compute/quotas#disk_quota).\n\nBenefits of using an SSD boot disk\n----------------------------------\n\nUsing an SSD Persistent Disk as a boot disk for your nodes offers some\nperformance benefits:\n\n- [Nodes](/kubernetes-engine/docs/concepts/cluster-architecture#nodes) have faster boot times.\n- Binaries and files served from containers are available to the node faster. This can increase performance for I/O-intensive [workloads](/kubernetes-engine/docs/how-to/deploying-workloads-overview), such as [web-serving applications](/kubernetes-engine/docs/tutorials/hello-app) that host static files or short-running, I/O-intensive [batch jobs](https://kubernetes.io/docs/concepts/workloads/controllers/job/).\n- Files stored on the node's local media (exposed through `hostPath` or `emptyDir` volumes) can see improved I/O performance.\n\nSpecifying a node boot disk type\n--------------------------------\n\nYou can specify the boot disk type when you create a cluster or node pool. \n\n### gcloud\n\nTo create a cluster with a custom boot disk, run the following command.\n\n`[DISK-TYPE]` can be one of the following values:\n\n- `pd-balanced` (the default in version 1.24 or later)\n- `pd-standard` (the default in version 1.23 or earlier)\n- `pd-ssd`\n- `hyperdisk-balanced`\n\n| **Note:** Hyperdisk support is based on the machine type of your nodes. For the most up-to-date information, see [Machine type support](/compute/docs/disks/hyperdisks#machine-type-support) in the Compute Engine documentation.\n\nFor more information, see [Persistent Disk types](/compute/docs/disks/persistent-disks#disk-types). 
To create a cluster with a custom boot disk, run the following command:

```
gcloud container clusters create [CLUSTER_NAME] --disk-type [DISK_TYPE]
```

To create a node pool in an existing cluster:

```
gcloud container node-pools create [POOL_NAME] --disk-type [DISK_TYPE]
```

For example, the following command creates a cluster, `example-cluster`, with
the SSD Persistent Disk type, `pd-ssd`:

```
gcloud container clusters create example-cluster --disk-type pd-ssd
```

### Console

To select the boot disk when creating your cluster with the Google Cloud console:

1. In the Google Cloud console, go to the **Create a Kubernetes cluster** page.

   [Go to Create a Kubernetes cluster](https://console.cloud.google.com/kubernetes/add)

2. Configure your cluster as needed.

3. From the navigation menu, expand **default-pool** and click **Nodes**.

4. In the **Boot disk type** drop-down list, select a Persistent Disk type.

5. Click **Create**.

To create a node pool with a custom boot disk for an existing cluster:

1. Go to the **Google Kubernetes Engine** page in the Google Cloud console.

   [Go to Google Kubernetes Engine](https://console.cloud.google.com/kubernetes/list)

2. In the cluster list, click the name of the cluster you want to modify.

3. Click **Add Node Pool**.

4. Configure your node pool as needed.

5. From the navigation menu, click **Nodes**.

6. In the **Boot disk type** drop-down list, select a Persistent Disk type.

7. Click **Create**.

Protecting node boot disks
--------------------------

By default, a node boot disk stores your container images, some system process
logs, Pod logs, and the writable container layer.

If your workloads use `configMap`, `emptyDir`, or `hostPath` volumes, your Pods
could write additional data to node boot disks. To prevent this, you can
configure `emptyDir` volumes to be backed by tmpfs. To learn how, see the
[Kubernetes documentation](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir).
Because `secret`, `downwardAPI`, and `projected` volumes are backed by
[tmpfs](https://www.kernel.org/doc/html/latest/filesystems/tmpfs.html), Pods
that use them don't write data to the node boot disk.
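For example, the following Pod spec backs an `emptyDir` volume with tmpfs so
that scratch data stays in memory rather than on the boot disk. This is a
minimal sketch; the Pod, container, and volume names and the image are
illustrative:

```
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-scratch-example   # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25           # placeholder image
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory            # back this volume with tmpfs, not the node boot disk
```

Keep in mind that data written to a tmpfs-backed `emptyDir` volume counts
against the container's memory limits.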
By default, Google Cloud
[encrypts customer content at rest](/security/encryption/default-encryption#encryption_of_data_at_rest),
including your node boot disks, and GKE manages this encryption for you
without any action on your part.

However, when using volumes that write to the node boot disk, you might want to
further control how your workload data is protected in GKE. You can do this
either by [preventing Pods from writing to node boot disks](#prevent-pod) or by
[using Customer-Managed Encryption Keys (CMEK) for node boot disks](#cmek).

### Prevent Pods from writing to boot disks

To prevent Pods from writing data directly to the node boot disk, use one of
the following methods.

#### Policy Controller

Policy Controller is a feature of GKE Enterprise that lets you declare and
enforce custom policies at scale across your GKE clusters in fleets.

1. [Install Policy Controller](/anthos-config-management/docs/how-to/installing-policy-controller).

2. Define a constraint that restricts the following volume types by using the
   [`k8sPspVolumeTypes` constraint template](/anthos-config-management/docs/latest/reference/constraint-template-library#k8spspvolumetypes):

   - `configMap`
   - `emptyDir` (if not backed by tmpfs)
   - `hostPath`

   For instructions, see [Use the constraint template library](/anthos-config-management/docs/how-to/creating-policy-controller-constraints) in the Policy Controller documentation.

The following example constraint restricts these volume types in all Pods in
the cluster:

    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sPSPVolumeTypes
    metadata:
      name: deny-boot-disk-writes
    spec:
      match:
        kinds:
        - apiGroups: [""]
          kinds: ["Pod"]
      parameters:
        volumes:
        - configMap
        - emptyDir
        - hostPath

#### PodSecurity admission controller

The built-in Kubernetes PodSecurity admission controller lets you enforce
different levels of the Pod Security Standards in specific namespaces or
across the cluster. The Restricted policy prevents Pods from writing to the
node boot disk.

To use the PodSecurity admission controller, see
[Apply predefined Pod-level security policies using PodSecurity](/kubernetes-engine/docs/how-to/podsecurityadmission).

### Customer-managed encryption

If you want to control and manage encryption key rotation yourself, you can use
Customer-Managed Encryption Keys (CMEK). These keys encrypt the data encryption
keys that encrypt your data. To learn how to use CMEK for node boot disks, see
[Using customer-managed encryption keys](/kubernetes-engine/docs/how-to/using-cmek#boot-disks).

A limitation of CMEK for node boot disks is that the encryption setting can't
be changed after node pool creation. This means the following:

- If the node pool was created with customer-managed encryption, you can't subsequently disable encryption on the boot disks.
- If the node pool was created without customer-managed encryption, you can't subsequently enable encryption on the boot disks. However, you can create a new node pool with customer-managed encryption enabled and then delete the previous node pool.

Limitations
-----------

Before configuring a custom boot disk, consider the following limitation:

- The [C3 machine series](/compute/docs/general-purpose-machines#c3_series) and [G2 machine series](/compute/docs/accelerator-optimized-machines#g2-vms) don't support the `pd-standard` node boot disk type.

What's next
-----------

- [Learn how to specify a minimum CPU platform](/kubernetes-engine/docs/how-to/min-cpu-platform).
- [Learn more about customer-managed encryption](/security/encryption/default-encryption#key_management).
- [Learn about using Customer-Managed Encryption Keys in GKE](/kubernetes-engine/docs/how-to/using-cmek).