
How to Find & Fix Crawl Optimisation Issues (BrightonSEO Recap)

Last week the Optix Digital Marketing team made our annual pilgrimage to Brighton to attend BrightonSEO. We’re now in post-Brighton mode – tallying up receipts and hoping that our lovely boss will authorise our expenses when he sees that we had lobster and champagne for dinner.

One of the best talks I saw was by Barry Adams, who had a refreshingly no-nonsense approach to discussing SEO (sample quote: “if you receive this warning from Google, you’ve done f—– up”). Barry was talking about crawl optimisation, which is something I was keen to hear more about. A site I worked on recently had crawl issues caused by a combination of its massive size and the fact that it broke if Google crawled it too often, so I wanted to find out more about how to optimise crawl rates.

I’ve found that the best talks at digital marketing events are often the technical ones; these tend to have useful takeaways, which is what I want from a presentation. Here is Barry’s slide deck (click to visit Slideshare).

[Slide deck: Barry Adams – Crawl Optimisation, BrightonSEO]

 

Key Takeaways

I won’t go through Barry’s talk in detail, but here are some key takeaways:

  1. If your site has a decent number of pages (say, over 5k) then you need to be thinking about crawl optimisation as part of your technical SEO tasks.
  2. You need to make sure Google is properly crawling your site before you focus on content optimisation or a large-scale content marketing plan. There’s no point creating great content if Google is going to struggle to crawl it.
  3. A few simple technical things can prevent (or solve) a whole load of potential issues.

 

Crawl Optimisation Checklist

Pagination Meta Tags

Use pagination meta tags (rel="next" and rel="prev") on paginated pages to help Google understand how the pages in a series relate to each other.
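
For example, page 2 of a paginated category would carry something like this in its <head> (the example.com URLs are just placeholders):

    <link rel="prev" href="https://www.example.com/widgets/page/1/">
    <link rel="next" href="https://www.example.com/widgets/page/3/">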

Block parameters in Search Console

Use Search Console’s ‘URL Parameters’ feature to stop Google from crawling parameterised URLs you don’t want indexed.
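
As a rough illustration (the parameter names here are just examples), these are the kind of duplicate URLs the feature lets you flag so Googlebot doesn’t burn crawl budget on them:

    https://www.example.com/widgets/
    https://www.example.com/widgets/?sort=price
    https://www.example.com/widgets/?sort=price&sessionid=abc123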

Block internal search results pages

Use your robots.txt file to prevent internal search results pages from being crawled. You don’t want them to be indexed anyway, right?
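
A minimal robots.txt sketch – assuming your internal search results live under /search/ or use an ?s= query parameter (adjust the paths to match your own site):

    User-agent: *
    Disallow: /search/
    Disallow: /*?s=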

Keep your XML sitemaps up to date

You should semi-regularly crawl the list of URLs from your sitemap in Screaming Frog to check that the pages are still live. This is still important (perhaps more so) if you’re using auto-generated XML sitemaps.
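
If you’d rather script a quick check alongside Screaming Frog, here’s a minimal Python sketch. It assumes a single sitemap file (not a sitemap index) at a placeholder URL, and simply reports any URL that doesn’t return a 200:

    import requests
    import xml.etree.ElementTree as ET

    # Placeholder sitemap URL - swap in your own
    SITEMAP_URL = "https://www.example.com/sitemap.xml"
    NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

    def sitemap_urls(sitemap_url):
        """Fetch the sitemap and return the <loc> URLs it lists."""
        response = requests.get(sitemap_url, timeout=10)
        response.raise_for_status()
        root = ET.fromstring(response.content)
        return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]

    def report_problem_urls(urls):
        """HEAD-request each URL and print anything that isn't a 200."""
        for url in urls:
            try:
                status = requests.head(url, allow_redirects=False, timeout=10).status_code
            except requests.RequestException as exc:
                status = f"error: {exc}"
            if status != 200:
                print(f"{status}  {url}")

    if __name__ == "__main__":
        report_problem_urls(sitemap_urls(SITEMAP_URL))

Anything that redirects, 404s or times out will show up in the output, which is what you want to catch before Googlebot wastes crawls on it.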