[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-08-25。"],[],[],null,["# Example Store overview\n\n| **Preview**\n|\n|\n| This feature is subject to the \"Pre-GA Offerings Terms\" in the General Service Terms section\n| of the [Service Specific Terms](/terms/service-terms#1).\n|\n| Pre-GA features are available \"as is\" and might have limited support.\n|\n| For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nExample Store lets you store and dynamically retrieve\n[few-shot examples](#fewshotexamples). Few-shot examples let you\ndemonstrate the expected response patterns to an LLM to improve the quality,\naccuracy, and consistency of its responses to similar queries.\n\nWhat are few-shot examples?\n---------------------------\n\nA few-shot example is labeled data specific to your LLM use case. It includes\nan input-output pair demonstrating the expected model response for a model\nrequest. You can use examples to demonstrate the expected behavior or response\npattern from an LLM.\n\nBy using only a few relevant examples, you can cover a larger set of possible\noutcomes, intended behavior, and user inputs without correspondingly increasing\nthe size or complexity of prompts. This is both by only including relevant\nexamples (decreasing how many examples are included) and by \"showing not telling\"\nthe expected behavior.\n\nUsing few-shot examples is a form of in-context learning. An example\ndemonstrates a clear pattern of inputs and outputs, without explaining how the\nmodel generates the content. You can cover more possible outcomes or user\nqueries using just relatively few examples, without increasing your prompt size or\ncode complexity. Using examples doesn't involve updating the parameters of the\npretrained model, and without impacting the breadth of knowledge of the LLM.\nThis makes in-context learning with examples a relatively lightweight and\nconcise approach to customize, correct, or improve the reasoning\nand response from an LLM to unseen prompts.\n\nBy collecting relevant examples that are representative of your user queries,\nyou help the model maintain attention, demonstrate the expected pattern,\nand also rectify incorrect or unexpected behavior. This doesn't affect other\nrequests that result in the expected responses.\n\nLike all prompt engineering strategies, using few-shot examples is additive to\nother LLM optimization techniques, such as\n[fine-tuning](/vertex-ai/generative-ai/docs/models/tune-models)\nor [RAG](/vertex-ai/generative-ai/docs/rag-overview).\n\nHow to use Example Store\n------------------------\n\nThe following steps outline how you might use Example Store:\n\n1. [Create or reuse](/vertex-ai/generative-ai/docs/example-store/create-examplestore)\n an `ExampleStore` resource, also called an \"Example Store instance\".\n\n - For each region and project, you can have a maximum of 50 Example Store instances.\n2. Write and upload examples based on LLM responses. 
Using few-shot examples is a form of in-context learning. An example
demonstrates a clear pattern of inputs and outputs without explaining how the
model should generate the content. Because using examples doesn't update the
parameters of the pretrained model, it doesn't narrow the breadth of knowledge
of the LLM. This makes in-context learning with examples a lightweight and
concise way to customize, correct, or improve an LLM's reasoning and responses
to unseen prompts.

By collecting relevant examples that are representative of your user queries,
you help the model maintain attention, demonstrate the expected pattern, and
correct unexpected behavior, without affecting requests that already produce
the expected responses.

Like all prompt engineering strategies, using few-shot examples complements
other LLM optimization techniques, such as
[fine-tuning](/vertex-ai/generative-ai/docs/models/tune-models)
or [RAG](/vertex-ai/generative-ai/docs/rag-overview).

How to use Example Store
------------------------

The following steps outline how you might use Example Store:

1. [Create or reuse](/vertex-ai/generative-ai/docs/example-store/create-examplestore)
   an `ExampleStore` resource, also called an "Example Store instance".

   - For each region and project, you can have a maximum of 50 Example Store
     instances.

2. Write and upload examples based on LLM responses. There are two
   possible scenarios:

   - If the behavior and response pattern of the LLM are as expected, write
     examples based on these responses and upload them to the Example Store
     instance.

   - If the LLM shows unexpected behavior or response patterns, write an
     example that demonstrates how to correct the response, and then upload it
     to the Example Store instance.

3. The uploaded examples become available immediately to the agent or LLM
   application associated with the Example Store instance.

   - If an agent based on the [Vertex AI Agent Development Kit](/vertex-ai/generative-ai/docs/agent-development-kit/quickstart)
     is linked to the Example Store instance, then the agent automatically
     retrieves the examples and includes them in the LLM request.

   - For all other LLM applications, you must search for and retrieve the
     examples yourself and then include them in your prompts, as shown in the
     sketches after this section.

You can continue adding examples iteratively to an Example Store instance
whenever you observe unexpected performance from the LLM or encounter
adversarial or unexpected user queries. You don't need to update your code or
redeploy a new version of your LLM application. The examples become available
to the agent or application as soon as you upload them to the Example Store
instance.

Additionally, you can do the following:

- Retrieve examples by performing a cosine similarity search between the
  search key of your query and the search keys of the stored examples.

- Filter examples by function name to refine the list of candidate examples
  to those representing the possible responses from the LLM.

- Iteratively improve your agent or LLM application.

- Share examples with multiple agents or LLM applications.
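The following is a minimal sketch of this workflow using the preview
Vertex AI SDK for Python. The project ID, queries, embedding model name, and
the `get_store_location` function (the store-locator function from the use
case later on this page) are placeholders, and the method names and payload
shapes follow the preview quickstart at the time of writing, so treat them as
assumptions rather than a stable API:

```python
# A minimal sketch of the Example Store workflow with the preview
# Vertex AI SDK. Resource names, queries, and payload shapes are
# assumptions based on the preview quickstart and may change.
import vertexai
from vertexai.preview import example_stores

vertexai.init(project="PROJECT_ID", location="us-central1")  # placeholders

# 1. Create an Example Store instance (maximum of 50 per region and project).
example_store = example_stores.ExampleStore.create(
    example_store_config=example_stores.ExampleStoreConfig(
        vertex_embedding_model="text-embedding-005"  # assumed model name
    )
)

# 2. Upload an example that demonstrates the expected function call.
example_store.upsert_examples(examples=[{
    "contents_example": {
        "contents": [
            {"role": "user", "parts": [{"text": "Where is the closest store?"}]}
        ],
        "expected_contents": [{
            "content": {
                "role": "model",
                "parts": [{"function_call": {
                    "name": "get_store_location",
                    "args": {"location": "nearest"},
                }}],
            }
        }],
    },
    # The key that the similarity search compares against incoming queries.
    "search_key": "Where is the closest store?",
}])

# 3. For a new user query, retrieve the most relevant stored examples
#    (cosine similarity over search keys), optionally filtered by the
#    function they call.
retrieved = example_store.search_examples(
    parameters={"stored_contents_example_parameters": {
        "search_key": "Is there a store near me?",
        "function_names": {"values": ["get_store_location"]},  # assumed filter shape
    }},
    top_k=3,
)
```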
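Once retrieved, you must format the examples into your prompt yourself (an
Agent Development Kit agent does this for you). Here is a minimal sketch of
one way to do that; the `<EXAMPLES>` delimiter and the `User:`/`Model:` labels
are arbitrary choices, not a required format:

```python
# Render (query, expected_response) pairs as a prompt block. The
# delimiter and labels are arbitrary; keep them consistent across
# requests and distinct from the live conversation history.
def format_examples(pairs: list[tuple[str, str]]) -> str:
    lines = ["<EXAMPLES>"]
    for query, reply in pairs:
        lines.append(f"User: {query}")
        lines.append(f"Model: {reply}")
    lines.append("</EXAMPLES>")
    return "\n".join(lines)
```

Keeping this formatting stable and clearly separated from the conversation
history matters for performance, as described in the guidelines that follow.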
Guidelines for authoring few-shot examples
------------------------------------------

The impact of examples on model performance depends on which examples
are included in a prompt and how they are included.

The following are recommended practices for authoring examples:

- **Relevance and similarity**: Examples must be closely related to the
  specific task or domain, and each example must be relevant to the user query
  it accompanies. This helps the model focus on the most relevant aspects of
  its knowledge, decreases token usage, and maintains or even improves
  performance. The more relevant the examples are to a conversation, the fewer
  of them you need. The corpus of available examples must be representative of
  the possible user queries.

- **Complexity**: To help the LLM perform better, use low-complexity examples
  that demonstrate the expected reasoning.

- **Representative of the possible model outcomes**: The expected response in
  an example must be consistent with the responses the model can actually
  produce. This lets the example clearly demonstrate reasoning that the LLM
  can follow for the prompt.

- **Format**: For best performance, format few-shot examples in your prompt in
  a manner that's consistent with the LLM's training data and clearly
  separated from the conversation history. The formatting of examples in a
  prompt can considerably impact LLM performance.

Example use case: Function calling
----------------------------------

You can use few-shot examples to improve function calling performance by
indicating the expected function call for a user query in a consistent
pattern. An example can model the expected response to a request by specifying
which function to invoke and which arguments to include in the function call.
Consider a use case where the function `get_store_location` returns the
location of a store and its description. If a query doesn't invoke this
function as expected or produces unexpected output, you can use few-shot
examples, like the one in the sketch earlier on this page, to correct this
behavior for subsequent queries.

For more information about function calling, see
[Function calling](/vertex-ai/generative-ai/docs/multimodal/function-calling).

To learn more, see the [Example Store quickstart](/vertex-ai/generative-ai/docs/example-store/quickstart).

What's next
-----------

- Learn how to [create an example store](/vertex-ai/generative-ai/docs/example-store/create-examplestore).

- Learn how to [teach an agent with examples](/vertex-ai/generative-ai/docs/example-store/upload-examples).