Googlebots Now Crawling Mobile Web

Google refers to its web crawlers as spiders or, more affectionately, Googlebots. These crawlers gather information on a website and index it so that it appears in the appropriate search results. They operate on algorithms that Google builds and maintains, making them far faster than manual indexing.

It’s a brilliant way to index the growing number of web pages. Until a few years ago, however, Googlebots were designed to crawl only desktop sites. When the era of smartphones rolled in, mobile sites couldn’t secure a place in search results because no dedicated crawler existed for them. In 2011, Google introduced a Googlebot variant designed to index mobile content.

For mobile content to be indexed, however, it’s important to notify Google that a mobile site exists. There is simply too much activity on the Internet every day for Google to notice every brand-new website that goes online. Furthermore, the original desktop Googlebots may skip a mobile site because of differences in format.
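
One common way to give Google that notice is to submit a sitemap through Google Webmaster Tools. A minimal mobile sitemap might look like the sketch below; the URL is a placeholder, and the mobile namespace annotation marks the page as serving mobile content:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
  <url>
    <!-- placeholder URL; substitute the mobile page to be indexed -->
    <loc>http://mobile.example.com/article.html</loc>
    <!-- flags this URL as mobile content for Google's mobile crawler -->
    <mobile:mobile/>
  </url>
</urlset>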

If a page contains content that shouldn’t be shared with the public (e.g. contact details, credit card numbers), Googlebots will leave that page alone as long as the site’s robots.txt file tells them to. This prevents compliant crawlers from indexing the page, though note that robots.txt is only a request honored by well-behaved bots; it does not by itself keep the page safe from would-be cyber criminals. The syntax for a robots.txt file that blocks all crawlers from the entire site looks like this:

User-agent: *
Disallow: /

Different directives can be used to block specific pages from specific crawlers.
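
For example, a robots.txt file might keep Googlebot out of one directory while blocking every crawler from another. The paths here are hypothetical placeholders:

# applies only to Google's crawler
User-agent: Googlebot
Disallow: /drafts/

# applies to all other crawlers
User-agent: *
Disallow: /checkout/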
