Google Search Console: Common errors and what they mean


If you manage a website, you've probably heard of Google Search Console (GSC)—a free tool from Google that helps you monitor how your site performs organically in search results. If you've ever explored GSC, you may have come across a list of errors or received emails from GSC about issues on your site.

However, not every alert means something is actually wrong. Sometimes it’s just a warning or an informational message. An SEO expert can help determine whether it’s a real issue.

It’s important to monitor these errors because they can negatively impact your website’s visibility in search results. If left unresolved, Google might struggle to index your site properly. Pages that aren’t indexed won’t appear in search results.
Errors can also lead to a poor user experience, which is a key ranking factor for Google. A slow or error-prone website can ultimately cause your rankings to drop.

Let’s go over the most common Google Search Console errors and what they mean.

Server errors (5XX) 

A three-digit status code starting with “5” indicates a server error. This means there’s an issue with the server hosting your website.

500 - Internal Server Error

A general error that occurs when the server encounters a problem and cannot load the page. Until it’s fixed, Google won’t index the page.

502 - Bad Gateway

This means a server acting as a gateway (for example a CDN or reverse proxy) received an invalid response from the server behind it. While this is often temporary, prolonged issues can hurt your site’s visibility in the search results.

503 - Service Unavailable

This usually means the server is temporarily overloaded or down for maintenance. Google can’t crawl the page while it’s unavailable, which can hurt its findability if the problem persists.
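
If you’d like to double-check how a URL responds outside of GSC, the short Python sketch below requests a handful of pages with the widely used third-party requests library and prints the status code of each, labelling server (5XX) and client (4XX) errors. The URLs are placeholders for your own pages.

    import requests

    urls = [
        "https://www.example.com/",
        "https://www.example.com/contact/",
    ]

    for url in urls:
        try:
            # HEAD keeps the request light; some servers only handle GET well,
            # so fall back to GET when HEAD is not allowed (405).
            response = requests.head(url, allow_redirects=True, timeout=10)
            if response.status_code == 405:
                response = requests.get(url, allow_redirects=True, timeout=10)
            status = response.status_code
            if status >= 500:
                label = "server error (5XX)"
            elif status >= 400:
                label = "client error (4XX)"
            else:
                label = "ok"
            print(f"{url} -> {status} ({label})")
        except requests.RequestException as exc:
            # A connection failure can also show up in GSC as a server error.
            print(f"{url} -> request failed: {exc}")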

Redirect Errors

Redirects send users and search engines from one URL to another. If a redirect is not properly set up, it can confuse Google.

Redirect loop

This happens when a URL redirects back to itself, directly or via other URLs, causing Google to enter an endless loop. The page can’t be indexed, and users can get stuck.

A redirect chain that was too long

When several redirects happen in a row, it can hurt loading time and performance. Google might give up before reaching the final destination, resulting in the page not being indexed.

A bad or empty URL in the redirect chain

If one of the URLs in the chain is broken or empty, Google can’t reach the final page. This prevents both users and search engines from finding the desired content.

A redirect URL that eventually exceeded the max URL length

If the resulting URL is too long, Google may not crawl it. Pages that are not crawled won’t appear in search results.
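
To see exactly which hops a redirect passes through, you can follow the chain yourself. The Python sketch below (again using requests) prints every hop and warns when the chain gets long; the example URL and the five-hop threshold are illustrative assumptions, not official limits.

    import requests

    url = "https://www.example.com/old-page/"

    try:
        response = requests.get(url, allow_redirects=True, timeout=10)
        # response.history holds every intermediate hop, in order.
        chain = [hop.url for hop in response.history] + [response.url]
        print(" -> ".join(chain))
        if len(response.history) > 5:
            print(f"Warning: {len(response.history)} redirects before the final page.")
    except requests.TooManyRedirects:
        # requests gives up after its default limit of 30 hops,
        # which usually points to a redirect loop.
        print("Redirect loop (or an extremely long chain) detected.")
    except requests.RequestException as exc:
        print(f"Request failed: {exc}")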

4XX errors

A 4XX error means Google (or a user) cannot access the requested page. If this happens on important published pages, it can negatively affect your visibility.

401 - Blocked due to unauthorized request

This occurs when a page requires authorization, for example a password-protected page or a staging environment behind a login. Google won’t be able to crawl or index the page.

403 - Blocked due to access forbidden

Google is blocked from accessing the page due to security or permission settings, and this is often unintentional.

This often happens with pages that aren't—or are no longer—published. If you see a 403 error for an existing page, double-check whether it's still published.

404 - Not found

The server can’t find the requested content. This may be due to a moved or renamed page, the URL has been changed without a redirect, or a typo in the URL.

Soft 404

Technically the page exists and returns a success (200) status code, but Google treats it as if it weren’t there. This can happen when:

  • The page has little or no content.
  • A redirect leads to a page with little content or irrelevant information.
  • A custom error page returns a 200 status code even though its content tells visitors the page doesn’t exist.
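
You can spot likely soft 404s with a rough heuristic: a page that returns a 200 status code but has almost no content, or whose text reads like an error page. The sketch below illustrates the idea; the phrases and the length threshold are arbitrary assumptions, and Google’s own detection is far more sophisticated.

    import requests

    url = "https://www.example.com/maybe-gone/"
    response = requests.get(url, timeout=10)
    body = response.text.lower()

    looks_thin = len(body) < 1500  # assumed threshold for a near-empty page
    looks_like_error = any(
        phrase in body for phrase in ("page not found", "no longer available")
    )

    if response.status_code == 200 and (looks_thin or looks_like_error):
        print(f"Possible soft 404: {url}")
    else:
        print(f"{url} -> {response.status_code}")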

Robots.txt & Noindex 

Robots.txt allows you to specify which content Google can and cannot crawl. Incorrect configurations can block valuable pages from being indexed.

URL Blocked by robots.txt

Your robots.txt file tells Google not to crawl certain pages. If done accidentally, important content might be excluded from indexing.

Indexed, though blocked by robots.txt

The page appears in search results, but Google couldn’t fully crawl it due to the robots.txt restrictions. This can lead to incomplete or misleading search snippets. 

In this case, double-check if broad folders are being unnecessarily blocked in the robots.txt file.
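
You can test individual URLs against your robots.txt with Python’s built-in urllib.robotparser, as in the sketch below. The robots.txt location and the test URLs are placeholders for your own site.

    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser()
    parser.set_url("https://www.example.com/robots.txt")
    parser.read()  # downloads and parses the robots.txt file

    for url in ("https://www.example.com/blog/", "https://www.example.com/admin/"):
        allowed = parser.can_fetch("Googlebot", url)
        print(f"{url} -> {'crawlable' if allowed else 'blocked by robots.txt'}")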

URL Marked as ‘noindex’

A noindex tag tells Google not to show a page in search results. If it’s added by mistake, the page will drop out of Google even though it’s still live on your site.
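
A noindex directive can sit in a meta robots tag or in an X-Robots-Tag response header. The sketch below checks both for a single URL; the regex is a deliberately simplified check rather than a full HTML parser, and the URL is a placeholder.

    import re
    import requests

    url = "https://www.example.com/some-page/"
    response = requests.get(url, timeout=10)

    header = response.headers.get("X-Robots-Tag", "")
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        response.text,
        re.IGNORECASE,
    )

    if "noindex" in header.lower() or (meta and "noindex" in meta.group(1).lower()):
        print(f"{url} carries a noindex directive")
    else:
        print(f"{url} appears indexable (no noindex found)")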

Crawl and indexing issues

Crawl and indexing issues occur when Google cannot access or process pages that should be crawlable and indexable. Sometimes, Google simply decides not to index a page, which keeps it out of the search results.

Crawled - currently not indexed

Google visited the page but chose not to index it. The result is that the page will not be shown in Google search results. This may indicate low content quality, duplicate content or technical issues.

If this notification appears occasionally in Google Search Console, then there’s usually no reason to worry. However, if you see this message appearing frequently, or if many pages show this status, then it’s time to consult an expert.

Discovered - currently not indexed

Google knows about the page but hasn’t crawled it yet. This may be due to low crawl priority or server issues.

If it’s a newly published page, you might just need a little more patience. You can speed up the process by manually requesting indexing in Google Search Console.

Duplicate without user-selected canonical

There are multiple pages with the same or very similar content. Google chose to index only one and skipped the others.

Duplicate, Google chose different canonical than user

Even if you’ve set a canonical page, Google may decide to index a different version, reducing your control over what appears in results.
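
As a first step, it helps to verify that your own canonical tag says what you think it says. The sketch below reads the rel="canonical" link from a page and compares it with the URL you expect; the URLs are placeholders, and keep in mind that Google may still choose a different canonical, which you can check with the URL Inspection tool in GSC.

    import re
    import requests

    url = "https://www.example.com/product?color=blue"
    expected_canonical = "https://www.example.com/product"

    html = requests.get(url, timeout=10).text
    match = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )

    if not match:
        print("No canonical tag found.")
    elif match.group(1) != expected_canonical:
        print(f"Canonical points to {match.group(1)}, expected {expected_canonical}.")
    else:
        print("Canonical matches the expected URL.")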

Page indexed without content

Google indexed the page but couldn’t read the content—possibly due to cloaking or misconfigurations. As a result, the page may not display properly in search results.

Core Web Vitals Issues

Core Web Vitals are metrics that assess loading time, interactivity, and visual stability. In other words, they indicate how fast and user-friendly your site feels.

The three main metrics are:

  • Largest Contentful Paint (LCP): Time it takes for the main content (image or text block) to appear on the user’s screen.
  • Interaction to Next Paint (INP): How quickly the site responds to user interactions.
  • Cumulative Layout Shift (CLS): How much the layout shifts while loading.

Google Search Console alerts you if your pages are underperforming on one or more of these metrics.
The causes can be technical (like a slow server) or content-related (e.g., unoptimized images or too many scripts loaded via Google Tag Manager).
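
If you want to see the same field data outside of GSC, the public PageSpeed Insights API returns the Core Web Vitals that Google collects from real Chrome users. The sketch below queries it for one URL and prints whichever metrics come back; the URL is a placeholder, the field names are based on the v5 API response format, and heavier use of the API requires an API key.

    import requests

    PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
    page = "https://www.example.com/"

    data = requests.get(PSI_ENDPOINT, params={"url": page}, timeout=60).json()
    metrics = data.get("loadingExperience", {}).get("metrics", {})

    if not metrics:
        print("No field data available (the URL may not have enough real-user traffic).")
    for name, values in metrics.items():
        # Each metric reports a 75th-percentile value and a category such as FAST or SLOW.
        print(f"{name}: {values.get('percentile')} ({values.get('category')})")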


What should you do about these errors?

Errors in Google Search Console can be confusing, but they don’t always mean something is wrong. Some are minor, while others can significantly affect your site’s visibility.

Not sure what to do or need help? Feel free to contact us; we’re happy to assist with your SEO and Google Search Console questions so your website stays visible!