Once you have the Google Assistant running on your project, give these a try:

1. [Customize](#custom-interaction) how your project interacts with the
   Assistant. For example, trigger the Assistant with the push of a button or
   blink an LED when playing back audio. You can even show a speech recognition
   transcript from the Assistant on a display.

2. [Control](#device-control) your project with custom commands. For example,
   ask your Assistant-enabled [mocktail maker](http://deeplocal.com/mocktailsmixer/)
   to make your favorite drink.

## Customize how your project interacts with the Assistant

### Trigger the Assistant

With the Google Assistant Service API, you control when to trigger an Assistant
request. Modify the [sample code](https://github.com/googlesamples/assistant-sdk-python/tree/master/google-assistant-sdk/googlesamples/assistant/grpc)
to control this (for example, at the push of a button). You trigger an
Assistant request by sending a request to
[`EmbeddedAssistant.Assist`](/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2#google.assistant.embedded.v1alpha2.EmbeddedAssistant.Assist).

### Get the transcript of the user request

The Google Assistant SDK gives you a text transcript of the user request. Use
this to provide feedback to the user by rendering the text to a display, or
even for something more creative such as performing local actions on the
device.

The transcript is located in the
[`SpeechRecognitionResult.transcript`](/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2#speechrecognitionresult)
field.

### Get the text of the Assistant's response

The Google Assistant SDK gives you the plain text of the Assistant's response.
Use this to provide feedback to the user by rendering the text to a display.

This text is located in the
[`DialogStateOut.supplemental_display_text`](/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2#dialogstateout)
field.

### Get the Assistant's visual response

For certain queries, the Google Assistant SDK supports rendering the
Assistant's response to a display. For example, the query *What is the weather
in Mountain View?* renders the current temperature, a pictorial representation
of the weather, and suggestions for related queries. If this feature is
[enabled](/assistant/sdk/guides/service/integrate#text-response), the HTML5
data (if present) is located in the
[`ScreenOut.data`](/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2#screenout)
field.

You can enable this in the `pushtotalk.py` and `textinput.py`
[samples](https://github.com/googlesamples/assistant-sdk-python/tree/master/google-assistant-sdk/googlesamples/assistant/grpc)
with the `--display` command line flag. The data is rendered in a browser
window.

| **Note:** If you are sending commands over SSH to a Raspberry Pi with a
connected display, make sure you run `export DISPLAY=:0` before running the
sample with the `--display` command line flag.
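The transcript, display text, and screen data described above all arrive on the
same `Assist` response stream. The following is a minimal sketch of reading
them, assuming `channel` is an authorized gRPC channel to the Assistant API and
`requests` is an iterator of `AssistRequest` messages, both set up as in the
samples:

```python
# Sketch only: assumes `channel` (authorized gRPC channel to
# embeddedassistant.googleapis.com) and `requests` (iterator of
# AssistRequest messages) are already set up as in pushtotalk.py.
from google.assistant.embedded.v1alpha2 import (
    embedded_assistant_pb2,
    embedded_assistant_pb2_grpc,
)

stub = embedded_assistant_pb2_grpc.EmbeddedAssistantStub(channel)

for resp in stub.Assist(requests, 60 * 3 + 5):  # deadline in seconds
    # Transcript of the user request (refined as recognition progresses).
    for result in resp.speech_results:
        print('Transcript:', result.transcript)

    # Plain text of the Assistant's response.
    if resp.dialog_state_out.supplemental_display_text:
        print('Assistant said:',
              resp.dialog_state_out.supplemental_display_text)

    # HTML5 visual response, only present when screen output is enabled.
    if resp.screen_out.data:
        with open('response.html', 'wb') as f:
            f.write(resp.screen_out.data)
```

This is roughly what the samples do when run with the `--display` flag, before
opening the saved HTML in a browser window.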
[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["缺少我需要的資訊","missingTheInformationINeed","thumb-down"],["過於複雜/步驟過多","tooComplicatedTooManySteps","thumb-down"],["過時","outOfDate","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["示例/程式碼問題","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-07-26 (世界標準時間)。"],[[["\u003cp\u003eCustomize interactions with the Google Assistant, such as triggering it with a button or displaying speech recognition transcripts.\u003c/p\u003e\n"],["\u003cp\u003eControl your projects using custom commands through Device Actions or IFTTT recipes.\u003c/p\u003e\n"],["\u003cp\u003eAccess the Assistant's response in various formats like text, HTML for visual responses, and audio.\u003c/p\u003e\n"],["\u003cp\u003eSubmit queries to the Assistant using either text input (like a keyboard) or audio files.\u003c/p\u003e\n"],["\u003cp\u003eUtilize the provided sample code and documentation to integrate these features into your projects.\u003c/p\u003e\n"]]],[],null,["Once you have the Google Assistant running on your project, give these a try:\n\n1. [Customize](#custom-interaction) how your project interacts with the\n Assistant. For example, trigger the Assistant with the push of a button or\n blink an LED when playing back audio. You can even show a speech recognition\n transcript from the Assistant on a display.\n\n2. [Control](#device-control) your project with custom commands.\n For example, ask your Assistant-enabled [mocktail maker](http://deeplocal.com/mocktailsmixer/)\n to make your favorite drink.\n\nCustomize how your project interacts with the Assistant\n\nTrigger the Assistant\n\nWith the Google Assistant Service API, you control when to trigger an Assistant\nrequest. Modify the [sample code](https://github.com/googlesamples/assistant-sdk-python/tree/master/google-assistant-sdk/googlesamples/assistant/grpc)\nto control this (for example, at the push of a button). Triggering\nan Assistant request is done by sending a request to [`EmbeddedAssistant.Assist`](/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2#google.assistant.embedded.v1alpha2.EmbeddedAssistant.Assist).\n\nGet the transcript of the user request\n\nThe Google Assistant SDK gives you a text transcript of the user request. Use\nthis to provide feedback to the user by rendering the text to a display, or even\nfor something more creative such as performing some local actions on the device.\n\nThis transcript is located in the [`SpeechRecognitionResult.transcript`](/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2#speechrecognitionresult) field.\n\nGet the text of the Assistant's response\n\nThe Google Assistant SDK gives you the plain text of the Assistant response. Use this\nto provide feedback to the user by rendering the text to a display.\n\nThis text is located in the [`DialogStateOut.supplemental_display_text`](/assistant/sdk/reference/rpc/google.assistant.embedded.v1alpha2#dialogstateout)\nfield.\n\nGet the Assistant's visual response\n\nThe Google Assistant SDK supports rendering the Assistant response to a\ndisplay in the case of visual responses to certain queries. For example,\nthe query *What is the weather in Mountain View?* will render the current\ntemperature, a pictorial representation of the weather, and suggestions for\nrelated queries. 
### Submitting queries via audio file input

The [sample code](https://github.com/googlesamples/assistant-sdk-python/tree/master/google-assistant-sdk/googlesamples/assistant/grpc)
includes the file `audiofileinput.py`. You can run this file to submit a query
via an audio file. The sample outputs an audio file with the Assistant's
response.

## Control your project with custom commands

You can add custom commands to the Assistant that let you control your project
by voice.

Here are two ways to do this:

- Extend the Google Assistant Service sample to include
  [Device Actions](/assistant/sdk/guides/service/python/extend/install-hardware).

- Create an [IFTTT recipe](https://support.google.com/googlehome/answer/7194656)
  for the Assistant. Then configure IFTTT to make a custom HTTP request to an
  endpoint you choose in response to an Assistant command. To do so, use
  [Maker IFTTT actions](http://maker.ifttt.com). A sketch of a minimal
  receiving endpoint follows this list.
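To make the IFTTT route concrete, here is a hypothetical sketch of the
receiving end: a bare-bones HTTP endpoint that an IFTTT Maker/Webhooks action
could be pointed at. The port, path, and `toggle_led` stub are all
illustrative, not part of the Assistant SDK:

```python
# Hypothetical endpoint for an IFTTT Maker/Webhooks action to call.
# Everything here (port, path, toggle_led stub) is illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

def toggle_led():
    print('toggling LED')  # replace with your GPIO code

class CommandHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == '/led/toggle':
            toggle_led()
            self.send_response(200)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == '__main__':
    # IFTTT must be able to reach this address, so the device needs a
    # publicly reachable URL (for example, via a tunnel or port forward).
    HTTPServer(('', 8000), CommandHandler).serve_forever()
```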