This was a fantastic hangout this week. John Mueller had a guest with him, Sasch Mayer, but John did most of the talking. I learned a number of things, including some information on how Penguin affects a site and how to clean up a Panda-affected site. I've summarized the hangout below. As usual, please note that this information should not be taken as official statements from Google. The times are not completely accurate, but should give you a good idea of where the information is found in the video.
Key points:
Do links in javascript pass pagerank?
Is it true that Google just pays attention to the first instance of a link on a page?
Do you need to disavow all TLDs of blogspot/blogger?
Good information on what to do when your site is hit by the Panda algorithm.
Can Penguin affect a single page of a site?
Do you need to disavow links that come from deindexed sites?
More information on Panda recovery.
Do DMOZ scraper sites need to be disavowed?
Lots of footer links can look like keyword stuffing.
1:39 - Do links in JavaScript pass PageRank? At the moment, JavaScript links don't pass PageRank, but Google is working on finding a way to treat these as "normal" links. If you use JavaScript for your internal navigation, you should probably have static links there as well (see the sketch below).
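A minimal sketch of the difference, with hypothetical URLs and handler names: the first "link" exists only in JavaScript, so under what John describes it wouldn't pass PageRank; the second is a plain anchor that crawlers can follow, with JavaScript layered on top.

```html
<!-- JavaScript-only "link": no href for crawlers to follow (hypothetical example) -->
<span onclick="window.location = '/products'">Products</span>

<!-- Static link crawlers can follow; JavaScript can still enhance the click -->
<a href="/products" onclick="return loadSection('/products')">Products</a>
```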
5:53 - Is it true that Google just pays attention to the first instance of a link on a page? The question was whether Google only counts the first link on a page pointing to a particular URL, so that any later links to that same URL don't matter. John says that may not be true.
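To make the question concrete, here is a hypothetical page fragment with two links to the same URL. Under the "first link counts" theory, only the first anchor text ("widgets") would be credited; John's answer suggests it may not be that simple.

```html
<!-- Two links to the same URL with different anchor text (hypothetical) -->
<a href="/blue-widgets">widgets</a>
<p>Some other content...</p>
<a href="/blue-widgets">affordable blue widgets</a>
```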
7:26 - Should all links on blogspot, etc. be disavowed? It really depends on the link. There can be good links or bad links from these sources. "The platform shouldn't matter."
8:40 - Do you need to disavow all TLDs of blogspot/blogger? If you are disavowing a blogspot page, you can just disavow the .com version, as it is the canonical. You don't need to worry about the .ca, .co.uk, etc.
11:20 - If you have a non-www and a www version and links point to each, then it's probably best to verify both in WMT and file a disavow for each. Usually, though, if there is one clear version that is shown in search then you can just focus on that one. (My thought - probably a good idea to verify the https version if you have one too and file your disavow there as well. A sample disavow file is below.)
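For reference, a disavow file is just a plain-text list with one URL or domain: entry per line and # for comments; that format is documented by Google, though the entries below are made up for illustration. The same file can be uploaded under each verified version of the site:

```
# Hypothetical disavow file - upload under each verified version (www, non-www, https)
# A single blogspot page (the .com version is the canonical)
http://spammy-links-example.blogspot.com/2013/05/some-post.html
# An entire domain
domain:link-network-example.com
```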
13:19 - Good information on what to do when your site is hit by the Panda algorithm. If a site fixes its Panda issues, will it recover immediately? It will take some time. The pages need to be recrawled, and the algorithm has to look at the site structure overall. Quote from John Mueller: “Just by fixing individual low quality pages, you’re probably not going to see a significant change but by making sure that the whole website itself is significantly better, you’ll probably see some changes. And, with the Panda algorithm, like with many of our other algorithms you also have to wait for some time to make sure that that algorithm is updated in our search results. So, that’s something that could take a month...maybe two months.” (Note from me: Panda usually updates once a month, but it can take some time for Google to recrawl everything and then reprocess it, so it could take more than a month to see improvements. John also mentioned that if you are making these changes to make your site better, then you should really start to see a gradual improvement even before Panda refreshes, and then a bigger jump when Panda does refresh.)
16:40 - Can Penguin affect a single page of a site? Question about why Google was serving up one particular page on a site when another was more appropriate. Was this related to Penguin? John says no. Penguin looks at the site overall. He says, “It wouldn’t be specifically adjusting the ranking of individual pages on a website.” John wondered if perhaps there could be keyword stuffing. He said that often happens when someone tries to make a page look like a super authoritative page on the subject. It can then be demoted because of keyword stuffing. (My note - if keyword stuffing is the issue, making the appropriate changes can often produce positive results very quickly, as this algorithm adjusts every time your site gets crawled.)
20:13 - If you remove your disavow file, then as the links get recrawled they become normal followed links again. It could take several months, though. It's also important to note that disavowing a link doesn't cause it to disappear from WMT. Disavowed links are still in WMT, just like nofollowed ones are.
21:26 - Haha...John says that if they see Comic Sans font on a site then they don't take it seriously. 🙂 The question was whether using old, dated HTML code would affect your rankings. The answer is no. Old-fashioned design doesn't make a site bad. However, having an updated design is probably better for users.
24:00 - Do you need to disavow links that come from deindexed sites? Yes. The site could get reincluded, and sometimes links on deindexed pages can still pass PageRank.
24:30 - Great information on Panda. Question about a site that dropped significantly in rankings last week but didn't make any changes. John looked at the site and said the algorithms were detecting issues with the site's quality. He noted that when he clicks on an article, he essentially just sees ads; the page layout algorithm may have issues with that. He recommends significantly improving the overall quality of the pages. (Sounds to me like this site was hit with Panda.) John then mentioned Amit Singhal's article on questions to review for quality in regards to Panda - http://googlewebmastercentral.blogspot.ca/2011/05/more-guidance-on-building-high-quality.html. He said to look for thin content, large amounts of aggregated content, user generated content where people are submitting low quality articles, etc. If you want to keep this content, then perhaps noindex it (an example is below). Or, consider removing that content completely from the site. “Aim to be significantly better than [your competitors].”
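For the noindex option, the standard mechanism is a robots meta tag in the head of each thin page; a minimal sketch:

```html
<!-- Keeps the page on the site but asks search engines not to index it -->
<meta name="robots" content="noindex">
```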
33:51 - Sasch Mayer says that in his opinion the AdWords keyword planning tool is broken. If there are discrepancies between that data and what is in WMT, trust WMT.
34:40 - A site got a manual penalty lifted but has not seen any improvement. If you've removed a lot of links, don't expect to go back to #1. The road back to the top is long and hard; set a schedule of 8-12 months to climb back. Penguin refreshes factor into that timeline.
40:10 - Do we need to disavow DMOZ scraper sites? Sasch Mayer says, “Yes.” (My thought - interesting; I do not disavow these.) John then said that for the most part the algorithm tries to recognize these scraper sites and ignore them. He then gave a vague answer: “If you see these as a problem, and you want to be sure that Google handles them right...then by all means just put them in a disavow file.” John has said previously (http://www.youtube.com/watch?v=h0FC1K25Z3w&feature=c4-overview&list=UUthrUiuJUtFSXBUp48D8bAA at 16:50) that DMOZ scrapers don't need to be disavowed unless the original DMOZ link was an unnatural one. That's the answer that makes sense to me.
41:30 - Don’t need to disavow a site just because it is in a foreign language. But, if you know there is a pattern then it may be best to do so. For example, if you see that you have unnatural links from Russian forums then you would likely view all Russian sites as suspicious.
43:30 - Question about when you have a manual action on a subdomain: will the penalty appear in the root site's WMT? John says they try to be as granular as possible, so if it's obvious that the problem is just in one subdomain, it will be isolated to there. But if it's a sitewide problem (e.g. if you allow subdomains with UGC and have widespread spam), then they may penalize the whole site.
44:30 - Question about a site where the home page is just video content. Is that ok? Google has other ways (e.g. links) to figure out what the page is about. If all of your content is in images or videos, though, it may be difficult for Google to figure out what the site is about. It's best to have some text content there as well (a sketch is below).
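A minimal sketch of the idea, with hypothetical markup: keep the video front and center, but give crawlers some text to work with.

```html
<!-- Hypothetical video-first home page with crawlable text alongside the video -->
<h1>Acme Woodworking Tutorials</h1>
<video src="/media/intro.mp4" controls></video>
<p>Weekly video lessons on joinery, finishing, and workshop setup,
   for beginner and intermediate woodworkers.</p>
```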
49:30 - When you report a site through the tool for reporting scraper sites, it's not really a manual review; rather, the search quality team uses this information to make their algorithms better at detecting scraper sites.
50:30 - Long question about the Toolbar PageRank on Google+ pages and whether there is a connection to the Agent Rank patent. John said that the TBPR is just based on links and not connected to Agent Rank. He said it’s possible that G+ is collecting and using metrics around popularity, etc. but he doesn’t know.
51:00 - John says that Toolbar PageRank shouldn’t be used as any kind of a quality metric.
1:00:00 - Lots of footer links can look like keyword stuffing. John said that if you have a huge footer that links to every page on your site, it can look like keyword stuffing.
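A hypothetical illustration of the pattern John describes, alongside a leaner alternative:

```html
<!-- Risky: a huge, keyword-rich footer linking to every page (hypothetical) -->
<footer>
  <a href="/cheap-widgets">cheap widgets</a>
  <a href="/discount-widgets">discount widgets</a>
  <a href="/best-widgets-2013">best widgets 2013</a>
  <!-- ...hundreds more links... -->
</footer>

<!-- Safer: a short footer with a handful of genuinely useful links -->
<footer>
  <a href="/about">About</a>
  <a href="/contact">Contact</a>
  <a href="/sitemap.html">Sitemap</a>
</footer>
```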
1:01:00 - Why does Google index pages from feedproxy? John said he passed it on to the team and perhaps they’ll get noindexed.
1:02:00 - From now on, the Webmaster Central Hangout schedule will be on the Google+ page. However, I can't find a schedule anywhere. Hopefully that will change!