[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["難以理解","hardToUnderstand","thumb-down"],["資訊或程式碼範例有誤","incorrectInformationOrSampleCode","thumb-down"],["缺少我需要的資訊/範例","missingTheInformationSamplesINeed","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-08-19 (世界標準時間)。"],[[["\u003cp\u003eYou can review the provisioned input/output operations per second (IOPS) and throughput for Google Cloud Hyperdisk volumes, which can be changed once every 4 hours with each change being logged.\u003c/p\u003e\n"],["\u003cp\u003eTo view Hyperdisk performance settings, you can use the Google Cloud console's Disks page, the \u003ccode\u003egcloud compute disks describe\u003c/code\u003e command, or a REST API \u003ccode\u003ecompute.disks.get\u003c/code\u003e request.\u003c/p\u003e\n"],["\u003cp\u003eVM performance metrics, including CPU utilization, network traffic, disk throughput, and disk IOPS, are available on the Observability tab of the VM Details page in the Google Cloud console.\u003c/p\u003e\n"],["\u003cp\u003eTo determine the optimal IOPS and throughput for your workload, you should monitor peak and average usage patterns, adjusting the provisioned levels based on whether performance is exceeding or falling short of requirements.\u003c/p\u003e\n"],["\u003cp\u003eHyperdisk Balanced and Hyperdisk Throughput allow you to provision throughput separately from disk capacity, but performance is capped by the per-VM limits of the VM to which the volumes are attached.\u003c/p\u003e\n"]]],[],null,["*** ** * ** ***\n\nYou can view the disk description to see the provisioned input/output\noperations per second (IOPS) or the provisioned throughput for\nGoogle Cloud Hyperdisk volumes.\n\nYou can change the provisioned IOPS or throughput once in every 4 hour period.\nEach change of the IOPS or throughput level is logged. You can review the\nlog history and compare it with performance metrics to understand how the\nprovisioned IOPS and throughput levels relate to the performance level\nobserved by your workload.\n\nBefore you begin\n\n- If you haven't already, set up [authentication](/compute/docs/authentication). Authentication verifies your identity for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:\n\n Select the tab for how you plan to use the samples on this page: \n\n Console\n\n\n When you use the Google Cloud console to access Google Cloud services and\n APIs, you don't need to set up authentication.\n\n gcloud\n 1.\n [Install](/sdk/docs/install) the Google Cloud CLI.\n\n After installation,\n [initialize](/sdk/docs/initializing) the Google Cloud CLI by running the following command:\n\n ```bash\n gcloud init\n ```\n\n\n If you're using an external identity provider (IdP), you must first\n [sign in to the gcloud CLI with your federated identity](/iam/docs/workforce-log-in-gcloud).\n | **Note:** If you installed the gcloud CLI previously, make sure you have the latest version by running `gcloud components update`.\n 2. [Set a default region and zone](/compute/docs/gcloud-compute#set_default_zone_and_region_in_your_local_client).\n\n REST\n\n\n To use the REST API samples on this page in a local development environment, you use the\n credentials you provide to the gcloud CLI.\n 1. [Install](/sdk/docs/install) the Google Cloud CLI. After installation, [initialize](/sdk/docs/initializing) the Google Cloud CLI by running the following command: \n\n ```bash\n gcloud init\n ```\n 2. If you're using an external identity provider (IdP), you must first [sign in to the gcloud CLI with your federated identity](/iam/docs/workforce-log-in-gcloud).\n\n\n For more information, see\n [Authenticate for using REST](/docs/authentication/rest)\n in the Google Cloud authentication documentation.\n\nView the provisioned performance settings for Hyperdisk\n\nTo view the provisioned IOPS or throughput for your Hyperdisk\nvolumes, view the disk information. \n\nConsole\n\n1. In the Google Cloud console, go to the **Disks** page.\n\n [Go to Disks](https://console.cloud.google.com/compute/disks)\n2. Click the name of the disk to view the configuration details.\n\nScreenshot of the configured properties for a Hyperdisk\n\ngcloud\n\n- Use the [`gcloud compute disks describe` command](/sdk/gcloud/reference/compute/disks/describe)\n to view the disk details.\n\n ```\n gcloud compute disks describe DISK_NAME \\\n --zone ZONE_NAME \\\n --format=\"text(name, provisionedIops, provisionedThroughput, sizeGb)\"\n ```\n\n Replace the following:\n - \u003cvar translate=\"no\"\u003eDISK_NAME\u003c/var\u003e: the name of the Hyperdisk volume.\n - \u003cvar translate=\"no\"\u003eZONE_NAME\u003c/var\u003e: the zone where the Hyperdisk volume was created.\n\n The output shows the name of the disk, the current disk size and the\n provisioned IOPS or throughput, for example: \n\n ```\n name: my-hyperdisk-b\n provisionedIops: '8500'\n provisionedThroughput: '140'\n sizeGb: '150'\n ```\n\nREST\n\nConstruct a `GET` request to the\n[`compute.disks.get` method](/compute/docs/reference/rest/v1/disks/get).\nIn the request body, specify the name of the Hyperdisk volume. \n\n```\nGET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/disks/DISK_NAME/get\n```\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: your project ID.\n- \u003cvar translate=\"no\"\u003eZONE\u003c/var\u003e: the zone where your Hyperdisk volume is located.\n- \u003cvar translate=\"no\"\u003eDISK_NAME\u003c/var\u003e: the name of the Hyperdisk volume to view.\n\nIn the response body, you can view the current disk size, provisioned\nIOPS, and throughput, for example: \n\n```\n{\n ...\n \"name\": \"my-hyperdisk-x\",\n \"physicalBlockSizeBytes\": \"4096\",\n \"provisionedIops\": \"100000\",\n ...\n \"sizeGb\": \"1000\",\n \"status\": \"READY\",\n ...\n}\n```\n\nYou can use a query filter to return only the information you want to\nview. To view only the fields shown in the preceding example output, append\na query parameter similar to the following to your request. \n\n ?fields=name,physicalBlockSizeBytes,provisionedIops,provisionedThroughput,sizeGb,status\n\nView disk performance metrics\n\nTo view performance metrics for your VMs, use the Cloud Monitoring\nobservability metrics available in the Google Cloud console.\n\n1. In the Google Cloud console, go to the **VM Instances** page.\n\n\n [Go to VM instances](https://console.cloud.google.com/compute/instances)\n2. To view metrics for individual VMs:\n\n 1. Click the name of the VM you want to view performance metrics for. The VM\n **Details** page opens.\n\n 2. Click the **Observability** tab to open the Observability **Overview**\n page.\n\n3. Explore the VM's performance metrics. The following are key metrics\n related to disk performance for a VM:\n\n - On the **Overview** page:\n\n - **CPU Utilization.** The percent of CPU used by the VM.\n\n - **Network Traffic.** The average rate of bytes sent and received\n in one minute intervals.\n\n - **Disk Throughput.** The average rate of bytes written to and read from\n disks.\n\n - **Disk IOPS.** The average rate of I/O read and write operations to\n disks.\n\n - On the **Disks Performance** page, view the following charts:\n\n - **Operations (IOPS).** The average rate of I/O read and write operations\n to the disk in one-minute time periods.\n\n - **IOPS by Storage Type** The average rate of I/O operations to the disk\n in one-minute time periods, grouped by the storage type and device type.\n\n - **Throughput (MB/s)** The average rate of bytes written to and read from\n the VM's disks in one-minute time periods.\n\n - **Throughput by Storage Type** The average rate of bytes written to and\n read from the VM's disks in one-minute time periods, grouped by the\n storage type and device type.\n\n - **I/O Size Avg.** The average size of I/O read and write operations to\n disks. Small (4 to 16 KiB) random I/O operations are usually\n limited by IOPS and sequential or large (256 KiB-1 MiB)\n I/O operations are usually limited by throughput.\n\n - **Queue Length Avg.** The number of queued and running disk I/O\n operations, also called *queue depth* , for the top 5 devices. To reach\n the performance limits of your Hyperdisk and Persistent Disk\n volumes,\n [use a high I/O queue depth](/compute/docs/disks/optimizing-pd-performance#io-queue-depth).\n\n - **I/O Latency Avg.** The average latency of I/O read and write operations\n aggregated across operations of all block storage devices attached to the\n VM, measured by the Ops Agent in the VM. This value\n includes operating system and file system processing time.\n\nAnalyze the IOPS needed for your workload\n\nTo determine the IOPS needed for your workload, make note of the peak and\naverage IOPS and throughput rates during times of peak usage, and also during a\nnormal workload cycle, to get an idea of your workload requirements.\n\nObserve the IOPS requirements of your workload using any of the following\nmethods:\n\n- Use the **Monitoring** tab on the disk details page in the Google Cloud console.\n- Use the **Observability page** for your VM, as described in [View disk performance metrics](#viewing-performance-metrics).\n\nBased on the observed metric values, determine if you should adjust the\nprovisioned IOPS for your VM. For example:\n\n- If the peak IOPS rate is close to the provisioned IOPS for the Hyperdisk volume, then you can try increasing the provisioned IOPS for the Hyperdisk volume to boost the performance of your application.\n- If the peak IOPS rate is consistently lower than the provisioned IOPS, then you can lower the provisioned IOPS for the Hyperdisk volume to reduce the cost of the disk.\n\nAnalyze the throughput needed for your workload\n\nYou can provision throughput separately from disk capacity for the\nfollowing Hyperdisk types:\n\n- Hyperdisk Balanced\n- Hyperdisk Balanced High Availability\n- Hyperdisk Throughput\n- Hyperdisk ML\n\nYou can specify the target throughput level\nfor a given volume. Individual volumes have full performance isolation - each\nvolume gets the performance provisioned to it. However, the throughput is\nultimately capped by per-VM limits for the VM to which your\nvolumes are attached. To review these limits, see\n[Hyperdisk performance limits](/compute/docs/disks/hyperdisk-perf-limits).\n\nBoth read and write operations count against the throughput limit provisioned\nfor a Hyperdisk volume. The provisioned throughput and the\nmaximum limits apply to the combined total of read and write throughput.\n\nObserve the throughput requirements of your workload using any of the following\nmethods:\n\n- Use the **Monitoring** tab on the disk details page in the Google Cloud console.\n- Use the **Observability page** for your VM, as described in [View disk performance metrics](#viewing-performance-metrics).\n\nIf the total throughput provisioned for one or more Hyperdisk\nvolumes exceeds the total throughput available at the VM level,\nthe performance is limited to the VM-level performance.\n\nWhat's next\n\n- Learn how to [optimize Hyperdisk performance](/compute/docs/disks/optimize-hyperdisk).\n- Learn how to [modify the settings for a Hyperdisk volume](/compute/docs/disks/modify-hyperdisks)\n- Learn about [Hyperdisk pricing](/compute/disks-image-pricing#section-2)."]]