These 15 Things Are Not Page Ranking Factors for Google



Why do we assume that some of these non-factors might be part of Google’s algorithm?

In this post, you’ll find some of the non-factors most commonly mentioned by other SEO professionals or clients. I’ve tried to explain why they aren’t technically ranking factors and included comments from Googlers where relevant.


1) Website Age

I keep seeing this one in all of the ranking factor lists out there, despite the fact that Google has said it isn’t a factor.


Sure, these things are correlated, but correlation doesn’t equal causation.


Domains that have been around for a while have had that much longer to accrue all of the signals that go into ranking.


If your site is older, it likely has more content and links, as it’s had longer to get customers and word of mouth, etc.


Age isn’t the factor here. It’s the other signals that come along with age – but that don’t require age to get.


2) Domain Registration Period

The same goes for domain registration length. This is something you can buy. It wouldn’t make sense to make it a ranking factor if you can just buy it.


Users don’t care how long you’ve registered your domain. It doesn’t make your site more or less relevant to their query.


Does it correlate? Sure, because spammers usually don’t pay for multiple years of registration.


Do you know who else doesn’t pay for multiple years? Small businesses or companies that don’t want that expense all at once.


With auto-renew features on registrars now, it’s not a problem to go year by year. When you own hundreds or thousands of domains, it’s better for tax reasons, too.


There are better ways of determining authority.


Google does have a patent on using registration length, but that doesn’t mean they’re using it for ranking purposes. That’s not how patents work. Anybody can patent anything.


Take this time machine patent, for instance. Getting a patent on a method doesn’t mean that using that method produced a positive result.


3) Pogo-Sticking

First, let’s clarify the terms. Bounce rate is when a user visits one page and doesn’t take any action or visit any other pages.


Pogo-sticking is the act of a user visiting a page and then immediately clicking back to the search results (often to click another search result). This is often cited as a ranking factor by SEO pros despite Google saying otherwise in a video.


It’s not a factor.


It may be used for internal testing, comparing ranking changes against one another, quality control, and other things, but (aside from personalization) it doesn’t appear to be a factor in the core algorithm.


There are also plenty of cases where pogo-sticking is a good thing. I pogo-stick every morning when I search for “Detroit Red Wings” news and skim several articles from Google.


The same goes for any click-based metric. They’re very noisy, they often don’t mean what we think they mean, and they can be easily manipulated.


This doesn’t mean Google doesn’t use things like pogo-sticking to evaluate two versions of a search results page. But they likely don’t use it at a site or URL level.


4) Total Amount of Page Content or Word Count

This one is simply silly.


Sure, more useful content is better.


More complete content is better. More relevant content is better.

But simply more content? Nope.


Think like a user.


If I’m looking up the area code for Detroit, would I prefer the page that just says “Detroit’s area code is 313” or the one that builds up to the answer with 3,000 words of elegant prose?


If you were wondering, frequency of content updates isn’t a factor (in non-news search) either.


If I’m looking for a soup recipe, I don’t need Grandma’s biography – just tell me what I need and how to make it.


5) Unlinked Mentions

This is a case of SEO pros slightly misunderstanding some Google comments.


Google has told us they don’t treat unlinked mentions as links. Eric and Mark even did a test that showed no improvement in rankings.


What’s likely happening here is that unlinked mentions are used for the Knowledge Graph and for determining entities, but not directly for ranking.


Does the Knowledge Graph influence rankings? Likely yes, in many indirect ways, but we should list those as the factors, rather than the things that may partially make them up.


6) XML Sitemaps

My complaint is seeing “no XML sitemap” on every SEO audit I encounter.


XML sitemaps don’t have anything to do with ranking. At all. They’re a way for Google to discover your pages – but if Google is already indexing all of your pages, adding an XML sitemap will do nothing.


Not every site needs one. It won’t hurt, but if you’ve got a good taxonomy and codebase, it won’t help either.


They’re more of a band-aid for sites that have crawl issues.


Also, if you want to go down this rabbit hole, here’s John Mueller saying that HTML sitemaps aren’t a ranking factor either.


Should you still do an XML sitemap?


Probably. There are many non-ranking benefits to having one – including more data available in Search Console.
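If you do add one, the format itself is trivial. Here’s a minimal sketch in Python that writes a basic sitemap following the sitemaps.org protocol – the page URLs are hypothetical placeholders, not anything from this article:

```python
# Minimal sketch: generate a basic XML sitemap per the sitemaps.org protocol.
# The page URLs below are hypothetical placeholders - swap in your own.
from xml.etree import ElementTree as ET

pages = [
    "https://www.example.com/",
    "https://www.example.com/about",
    "https://www.example.com/blog/first-post",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page

# Write sitemap.xml at the site root so crawlers (and Search Console) can find it.
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```

Submitting the resulting file in Search Console is what unlocks the extra reporting mentioned above.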


7) Direct Website Visits, Time on Site, Bounce Rate & GA Usage

None of these are factors.


According to W3Techs, only 54% of websites use Google Analytics. Most big brands and Fortune 500 sites use Adobe Analytics instead. Chrome only has a 45-60% market share, depending on which source you look at.


In other words, there’s no reliable way for Google to get these metrics for more than half the web.

Big brands are dominating rankings, and Google doesn’t have their analytics data. Even if they did, it’s way too noisy of a signal.


For many sites, a high bounce rate is fine. Take a weather site: most users only look up the weather for one location. A bounce is normal.


For other sites, a low time on site is good, too. Take Google itself – its goal is to get you off of the search results and onto something else as quickly as possible.


If you don’t believe me, here’s Gary saying it in 2017.


8) AMP

Not a ranking factor. Page speed is a ranking factor, but AMP is not the same thing as page speed.


For most queries, page speed itself is just a minor ranking factor. There’s no scenario where Google is going to rank a faster page ahead of a more relevant page.


You won’t find a user saying, “I know I searched for Pepsi, but this Coke page is so much faster…”


Does AMP improve page speed? Yes, it does. But speed remains the ranking factor, not AMP.

(Note: AMP is required for the carousel, which does show at the #1 position, but that’s not part of the ranking algorithm. That’s a search feature, so it doesn’t count.)


9) LSI Keywords

This is one of those misinformation trends in SEO that keeps popping up every once in a while. All it means is that the person saying it has no understanding of LSI at all.


Seriously, the L stands for latent, and latent means not there – which contradicts how most SEO professionals go on to use the phrase.


Here’s a relevant post that explains it way better than I can.


10) Subdomains or Subdirectories

Google doesn’t care.


There may have been a time when they did. But search engines have gotten much better at determining whether you’re using a subdomain as a separate site or as part of your main site, and treating it as such.


When it comes down to subdomains vs. subdirectories, it’s all about how you use them and how you interlink them with everything else, not the actual domain or directory itself.


Yes, I know you’ve seen plenty of studies out there that say moving from one to the other caused a dip. However, in all of those studies they didn’t just do a move – they changed the navigation, UX, and linking structure, too.


Of course, removing lots of links to subpages and replacing them with one link to a subdirectory will have an impact on your SEO. But that’s all because of links and PageRank, not the actual URL structure.


11) TF-IDF Keywords

Again, this is just an SEO pro telling the rest of the community that they lack computer science knowledge. TF-IDF is a concept in information retrieval, but it’s not used in ranking.


Besides, there are far better ways of doing things right now than using TF-IDF. It doesn’t work nearly as well as modern methods, and it isn’t about ranking at all.


When it comes to analysis, TF-IDF isn’t something that you as a webmaster can do at a page level. It depends on the corpus of results in the index.


Not only would you need all the other relevant documents, but you’d need the non-relevant ones to compare them to, as well.


You can’t realistically scrape the search results (relevant ones only), then apply TF-IDF and expect to learn much. You’re missing the other half of the data required for the calculation.
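To make that concrete, here’s a minimal sketch of the classic TF-IDF weighting – the three toy documents are invented purely for illustration. Notice that the inverse document frequency term needs to know how many documents in the entire corpus contain the word, which is exactly the part you can’t recover from a handful of scraped results:

```python
# Minimal sketch of the classic TF-IDF weighting.
# The toy corpus below is made up for illustration; in a search engine the
# corpus is the entire index, which is why you can't reproduce this at home.
import math
from collections import Counter

corpus = [
    "detroit area code 313".split(),
    "detroit red wings news".split(),
    "chicken soup recipe".split(),
]

def tf_idf(term, doc, corpus):
    # Term frequency: how often the term appears in this one document.
    tf = Counter(doc)[term] / len(doc)
    # Inverse document frequency: the rarer a term is across the whole corpus,
    # the more weight it carries. This is the part that needs the full index.
    docs_with_term = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / (1 + docs_with_term))
    return tf * idf

print(tf_idf("detroit", corpus[0], corpus))  # appears in 2 of 3 docs -> low weight
print(tf_idf("soup", corpus[2], corpus))     # appears in 1 of 3 docs -> higher weight
```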


Here’s a really simple primer. If you’d like to learn more, pick up an information retrieval textbook and read about these concepts.


I recommend “Information Retrieval” by Stefan Büttcher, who works at Google.


12) Quality Raters & E-A-T

They don’t affect your site at all. They aren’t specifically rating your site in any way that’s used by the algorithm.


They help rate algorithm changes against each other and create (for lack of a better term) training sets of data.


Some algorithm changes that Google makes will go to the quality raters first to see if they achieved what Google wanted to achieve. The raters will do something like look at two search results pages and “rate” which one is better for that query.


If it passes, they’ll consider putting the change live.


I know, because I used to be a quality rater some years ago. Nothing in my job duties had me affecting the rankings of individual websites.


Also, just because something is in the quality rater guidelines doesn’t mean that it’s a ranking factor. The quality rater guidelines are a simplified way of explaining in plain English what all of the actual factors try to measure.


A good example is E-A-T. Google has said there’s no such thing as an E-A-T score.


E-A-T is simply a conceptual model for humans that explains what the algorithm is trying to emulate.

(If you want my opinion, E-A-T is still mostly measured by PageRank, but that’s another post.)


13) Accessibility

Is accessibility important? Yes, it is.


Is there a flag in the search algorithm that says whether a site is accessible? No, there isn’t.

Currently, accessibility isn't a ranking factor.


Several things that are required for accessibility are ranking factors – like alt attributes, proper heading usage, etc. But the search engines are looking at those factors, not whether your page passes an accessibility audit.


That doesn’t mean you shouldn’t make your page accessible, though. Not doing so is a great way to get sued.

(Note: I predict a world where search engines will eventually pay attention to accessibility so that when users with assistive devices do a search, they get back only results that will work for them – but we aren’t there yet. This could be a fun 10% project for some Googlers.)


14) Social Signals

As far back as 2010, Matt Cutts told us that Google doesn’t use social signals. (Except for that period when they actually used their own Google+ signals.)


Google isn’t using friend counts, follower counts, or any metrics that are specific to social networks.

They can’t.


Most social networks block them from crawling. Many users set their profiles to private. They simply can’t access much of that data.


But suppose they could. What would happen if they were using it and Twitter suddenly put up a robots.txt file blocking them? The rankings would change drastically overnight.


Google doesn’t want that. They’re all about making things robust and scalable.
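To illustrate how fragile that dependency would be, here’s a toy sketch using Python’s standard-library robots.txt parser – the rules and URL are purely hypothetical. A single Disallow rule flips crawl access off for everything behind it:

```python
# Toy sketch: how a single robots.txt rule would cut off a crawler's access.
# The rules and URL below are hypothetical, purely for illustration.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: Googlebot",
    "Disallow: /",
])

# Any signal built on crawling these pages would vanish the moment this rule ships.
print(rp.can_fetch("Googlebot", "https://twitter.com/some-profile"))      # False
print(rp.can_fetch("SomeOtherBot", "https://twitter.com/some-profile"))   # True
```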


Having said that, though, they do crawl social networks when and where they can – but they likely treat them just like any other page on the web.


So if you’ve got a high-PageRank social page that has links on it, those will count as links and some of that authority may pass.


I’ve always joked that I’d like to build a search engine that uses only social signals. But imagine how awful it would be to search for sensitive medical information and get back a bunch of memes making fun of the condition.


For many topics, the way people share stuff on social isn't the way people search.

Just imagine what a search engine that only looked at social shares would show for your most/least favourite politician, and you’ll see why social signals aren’t the best signals for Google to use.


15) Content Accuracy

Google and Bing know less about what’s accurate and more about what the consensus of the web says. The web isn’t always right.

More importantly, though, the engines try to match query intent and use other signals (cough, cough, links!) to measure authority. The focus right now isn’t on whether the information is right or wrong (as this is very hard to do). It’s more on whether the site is showing that it’s authoritative and reputable. Here’s Danny Sullivan saying as much.

Since search engines only see what the majority of people say, they aren’t really measuring “correctness” but popularity or web consensus. It’s why we see wrong information in the knowledge graph all the time.

It’s also kind of how Google Translate works, and it’s why we see some gender bias and other issues show up in there. Unfortunately, that’s how the majority of the text on the web is written.


Summary

I hope this helps clear up a lot of the confusion around these specific factors.


Whenever we debate whether something is or isn’t a factor, I like to think about how I’d code or scale it.


Often, just doing that mental exercise can show me all the issues with using it.


I honestly believe that Google and Bing aren’t lying to us when they tell us this or that isn’t a ranking factor.


Sometimes they’re intentionally ambiguous in their answers, and they do choose their words carefully.


But I don’t think they mislead us.


Content credit: Search Engine Journal


