We have to read a bit between the lines here, separating the “real” information that is available from what Google wants us to hear. If we go back a few years, we would have heard Google talk about social signals. In fact, Google even had a deal with Twitter for direct access to its feeds, and monitored tweets and retweets in real time. However, in 2011 Twitter blocked Google’s access to its feed, and as a result the deal ended and Google shut down its real-time search engine along with it.
We now know that Google has only ever had direct coded access to two social platform feeds: Twitter and Google+. Google has no such access to Facebook or any other social platform, so all it can do is crawl those sites just like any other site.
When Google cut ties with Twitter in 2011, it could no longer collect real-time social signals, which was a big deal indeed, because it meant there were effectively no longer any “real time social signals” at all. Google has admitted that crawling sites like Twitter or Facebook to pick up the phenomenal amount of social content added daily, for inclusion as a ranking signal, is not ideal. In a video on the subject, Matt Cutts explained that crawling those sites is one thing, but a lot of content would be missed; because statuses change so fast, Google could crawl a piece of information now that changes completely in the next minute, and waiting until the page was crawled again to pick up that change would not be a reliable way of collating social signals. Therefore, crawling sites like Twitter and Facebook for the purpose of algorithmically analysing social signals is a no go. Google must process “social signals” in real time, which means it must have “real time” coded access to the feeds.
As Google doesn’t have “real time” coded access to the social feeds of Facebook, Pinterest or any of the other so-called “social sites” other than Google+ and Twitter (Google will have coded access to Twitter’s feeds soon), it cannot collect real-time social signals from those platforms.
Real Time Monitoring & Crawl Explained
OK, let’s take a detailed look at the difference between a crawl and “real time” monitoring. We know that Google has coded access to Google+. If we share a post on G+, it gets indexed pretty much instantly; Google doesn’t have to wait to crawl your G+ page, it has instant access. With this speed, Google can easily collect as many “signals” as it wants to, so it can monitor a social status that changes so fast that, if Google were dependent on crawling the sites and pages, the “social signal” might not be reliable.
If Google doesn’t have coded access, its ability to access a site and its pages is limited, because it takes time to visit and crawl a page, and on top of that Google has to discover new pages first, which under the crawling process can take days, even with popular sites.
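The difference between the two access models can be illustrated with a tiny sketch. This is not Google’s actual code and all names here are hypothetical; it simply simulates a social post whose status changes every “minute”, and contrasts a crawler that samples the page on an interval with a real-time feed that receives every change as it happens.

```python
# Illustrative sketch only (not Google's actual implementation): why periodic
# crawling misses fast-changing social data while real-time feed access does not.

# Simulated stream of status changes to one social post, one per "minute".
updates = ["posted", "shared", "edited", "deleted", "restored", "shared again"]

def crawl(poll_interval):
    """A crawler only sees the state at the moments it happens to visit."""
    return [updates[t] for t in range(0, len(updates), poll_interval)]

def stream():
    """A coded real-time feed pushes every change as it happens."""
    return list(updates)

print(crawl(poll_interval=3))  # ['posted', 'deleted'] -- most changes missed
print(stream())                # every update observed
```

A crawler visiting every third “minute” records only two of the six states, and one of them (“deleted”) is already stale by the time it could act on it, which is exactly the unreliability the argument above describes.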
The purpose of “social signals” as far as Google is concerned, therefore, is to provide reliable up-to-the-second data on social status, trend and popularity. As I have said, crawling is neither reliable nor current enough for Google to factor “social signals” into its ranking for any site it doesn’t have coded access to. So with sites such as Facebook, Pinterest, LinkedIn etc., Google does not collate the “social signals” as social signal ranking factors, but it does still crawl those pages and pick up any links and associated data, just as it would with any other page on the internet.
What I am therefore trying to say is that Google does X with the crawled data from pages and sites across the internet, and it does Y with the “real time” social signals it collects from G+ and Twitter (once the Google / Twitter feed access is coded). So as far as “social signals” go and how Google uses that data, currently only G+ is factored in, with Twitter soon to come. That is not to say that a Facebook, Pinterest or any other “social signal” is useless; it is just not used in the algorithmic process Google reserves for “social signals”.
Back to the big deal now. If Google effectively stopped collecting real-time social signals in 2011, it means that, by its own admission, it has not been able to factor social signals into the equation since then.
Next (to be published soon): Google’s Solution To Using Social Signals for Ranking