Google Search technical requirements
It costs nothing to get your page in search results, no matter what anyone tries to tell you. As long as your page meets the following minimum technical requirements, it's eligible to be indexed by Google Search:
1. Googlebot isn't blocked.
2. The page works, meaning that Google receives an HTTP 200 (success) status code.
3. The page has indexable content.

Note: Just because a page meets these requirements doesn't mean that it will be indexed; indexing isn't guaranteed.
Googlebot isn't blocked (it can find and access the page)
Google only indexes pages on the web that are accessible to the public and that don't block our crawler, Googlebot, from crawling them. If a page is made private, such as requiring a log-in to view it, Googlebot will not crawl it. Similarly, if one of the several mechanisms for blocking Google from indexing a page is used, the page will not be indexed.
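One common way a crawler ends up blocked is through robots.txt rules. As a rough complement to the Search Console reports described below, the following sketch uses Python's standard urllib.robotparser module to test whether a Googlebot user agent may fetch a URL. This is only an illustrative check, not an official Google tool; example.com and the page path are placeholders, and the script says nothing about login walls, noindex rules, or other blocking mechanisms.

```python
# Minimal sketch: check whether a site's robots.txt allows Googlebot to crawl a URL.
# example.com and the page path are placeholders; this only reads robots.txt and
# does not account for login requirements or noindex directives.
from urllib.robotparser import RobotFileParser

def googlebot_allowed(page_url: str, robots_url: str) -> bool:
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetches and parses robots.txt
    return parser.can_fetch("Googlebot", page_url)

if __name__ == "__main__":
    page = "https://example.com/products/blue-widgets"
    robots = "https://example.com/robots.txt"
    print("Googlebot allowed:", googlebot_allowed(page, robots))
```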
Check if Googlebot can find and access your page
Pages that are blocked by robots.txt are unlikely to show in Google Search results. To see a list of pages that are inaccessible to Google (but that you would like to see in Search results), use both the Page Indexing report and the Crawl Stats report in Search Console. Each report may contain different information about your URLs, so it's a good idea to look at both.

To test a specific page, use the URL Inspection tool.

The page works (it's not an error page)

Google only indexes pages that are served with an HTTP 200 (success) status code. Client and server error pages aren't indexed. You can check the HTTP status code for a given page with the URL Inspection tool.
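If you want to spot-check status codes outside of Search Console, a plain HTTP request is enough. The sketch below uses Python's standard library; the URL is a placeholder, and the result reflects what your client sees, which can differ from what Googlebot is served.

```python
# Minimal sketch: report the HTTP status code a page returns.
# The URL is a placeholder. A 200 means "the page works" in the sense used above;
# 4xx and 5xx responses are error pages that Google won't index.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def status_code(url: str) -> int:
    try:
        with urlopen(url, timeout=10) as response:
            return response.status  # final status after redirects are followed
    except HTTPError as err:
        return err.code  # e.g. 404 or 500
    except URLError as err:
        raise RuntimeError(f"Could not reach {url}: {err.reason}")

if __name__ == "__main__":
    print(status_code("https://example.com/"))
```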
The page has indexable content

Once Googlebot can find and access a working page, Google checks the page for indexable content. Indexable content means that the page meets these requirements:
[[["Es fácil de entender","easyToUnderstand","thumb-up"],["Me ofreció una solución al problema","solvedMyProblem","thumb-up"],["Otro","otherUp","thumb-up"]],[["Me falta la información que necesito","missingTheInformationINeed","thumb-down"],["Es demasiado complicado o hay demasiados pasos","tooComplicatedTooManySteps","thumb-down"],["Está obsoleto","outOfDate","thumb-down"],["Problema de traducción","translationIssue","thumb-down"],["Problema de muestras o código","samplesCodeIssue","thumb-down"],["Otro","otherDown","thumb-down"]],["Última actualización: 2025-08-04 (UTC)."],[[["\u003cp\u003eGetting your webpage into Google Search results is free, provided it meets the basic technical requirements.\u003c/p\u003e\n"],["\u003cp\u003eFor a webpage to be indexed by Google, it must be publicly accessible, crawlable by Googlebot, and return an HTTP 200 (success) status code.\u003c/p\u003e\n"],["\u003cp\u003eThe webpage should also contain indexable content in a supported file type and adhere to Google's spam policies, though indexing isn't guaranteed.\u003c/p\u003e\n"],["\u003cp\u003eGoogle Search Console provides tools like the Page Indexing report, Crawl Stats report, and URL Inspection tool to help you assess and troubleshoot indexing issues.\u003c/p\u003e\n"]]],["To be eligible for Google Search indexing, a page must meet these technical requirements: Googlebot must not be blocked from accessing it, the page must function correctly with an HTTP 200 (success) status code, and it must contain indexable content. Blocking Googlebot prevents crawling, while utilizing a `noindex` tag prevents indexing, allowing crawling. The Page Indexing and Crawl Stats reports in Search Console, as well as the URL Inspection tool, can check page status.\n"],null,["Google Search technical requirements\n\n\nIt costs nothing to get your page in search results, no matter what anyone tries to tell you.\nAs long as your page meets the minimum technical requirements, it's eligible to be\nindexed by Google Search:\n\n1. Googlebot isn't blocked.\n2. The page works, meaning that Google receives an HTTP `200 (success)` status code.\n3. The page has indexable content.\n\n| Just because a page meets these requirements doesn't mean that a page will be indexed; indexing isn't guaranteed.\n\nGooglebot isn't blocked (it can find and access the page)\n\n\nGoogle only indexes pages on the web that are accessible to the public and which don't\nblock our crawler, [Googlebot](/search/docs/crawling-indexing/googlebot),\nfrom crawling them. If a page is made private, such as requiring a log-in to view it,\nGooglebot will not crawl it. Similarly, if one of the\n[several mechanisms](/search/docs/crawling-indexing/control-what-you-share) are\nused to block Google from indexing, the page will not be indexed.\n\nCheck if Googlebot can find and access your page\n\n\nPages that are blocked by [robots.txt](/search/docs/crawling-indexing/robots/intro)\nare unlikely to show in Google Search results. To see a list of pages that are inaccessible to\nGoogle (but that you would like to see in Search results), use both the\n[Page Indexing report](https://support.google.com/webmasters/answer/7440203)\nand [Crawl Stats report](https://support.google.com/webmasters/answer/9679690)\nin Search Console. 
Each report may contain different information about your URLs, so it's a good idea to look at both reports.\n\n\nTo test a specific page, use the [URL Inspection tool](https://support.google.com/webmasters/answer/9012289).\n\nThe page works (it's not an error page)\n\n\nGoogle only indexes pages that are served with an\n[HTTP `200 (success)` status code](/search/docs/crawling-indexing/http-network-errors#2xx-success).\nClient and server error pages aren't indexed. You can check the HTTP status code for a given\npage with the [URL Inspection tool](https://support.google.com/webmasters/answer/9012289).\n\nThe page has indexable content\n\n\nOnce Googlebot can find and access a working page, Google checks the page for indexable\ncontent. Indexable content means:\n\n- The textual content is in a [file type that Google Search supports](/search/docs/crawling-indexing/indexable-file-types).\n- The content doesn't violate our [spam policies](/search/docs/essentials/spam-policies).\n\n| While blocking Googlebot with a robots.txt file will prevent crawling, a page's URL might still appear in search results. To instruct Google not to index a page, use [`noindex`](/search/docs/crawling-indexing/block-indexing) and allow Google to crawl the URL."]]
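Because a noindex rule can be delivered either in an X-Robots-Tag HTTP header or in a meta tag in the HTML, a quick audit looks in both places. The following is a rough sketch under stated assumptions: the URL is a placeholder, the HTML is inspected with a naive pattern match rather than a full parser, and nothing here replicates the URL Inspection tool or Google's rendering of JavaScript.

```python
# Minimal sketch: detect a noindex rule in the X-Robots-Tag header or in a
# <meta name="robots"> / <meta name="googlebot"> tag. The URL is a placeholder,
# and this naive check does not render JavaScript the way Google does.
import re
from urllib.request import urlopen

def has_noindex(url: str) -> bool:
    with urlopen(url, timeout=10) as response:
        header = response.headers.get("X-Robots-Tag", "")
        html = response.read(512_000).decode("utf-8", errors="replace")
    if "noindex" in header.lower():
        return True
    # Look at each <meta> tag; flag it if it targets robots/googlebot and contains noindex.
    for tag in re.findall(r"<meta\b[^>]*>", html, re.IGNORECASE):
        if re.search(r'name=["\']?(robots|googlebot)\b', tag, re.IGNORECASE) and \
                "noindex" in tag.lower():
            return True
    return False

if __name__ == "__main__":
    print("noindex found:", has_noindex("https://example.com/"))
```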