The Dirty Little Secrets of Search

Posting to NGC4LIB


Here is another article that people may find interesting, from the New York Times: “The Dirty Little Secrets of Search” by David Segal (February 12, 2011). It offers an excellent discussion of search engine optimization (SEO) and of what Google does to punish companies or individuals that try to get around its guidelines. It is also interesting to note Google’s terms of service (under “Quality Guidelines”) at

Of course, people will, and organizations must, push these guidelines to their limits. An organization such as J.C. Penney (the subject of the NYTimes article) must try to maximize its sales, and advertising is the only way to do that. In the new environment we are in, Penney could take out ads for its merchandise in, e.g., the NYTimes, but people no longer think that way. To find new dresses on the web, people go to Google, not the NYTimes, and newspapers are suffering terribly, even shutting down, because of it. It is therefore absolutely vital for Penney that when someone searches “dresses” in Google, the Penney site appears on page 1 and *not* on page 2. How do you ensure this? By hiring a company that specializes in SEO; the only other choice is to pay Google so that when someone searches “dresses,” an “adword” comes up linking to Penney’s site.

Google has its own guidelines for punishing what it calls “dirty tricks” (read the article), and Penney’s site fell from #1 to around #50, where those links become essentially useless. Google, of course, claims innocence in all of this: the two parts (adwords and search) are supposedly completely disconnected, so it is not the case that if you make Google mad, you are punished but can fix it with some money. Yet the only other way to ensure that people will see your site on Google’s first page for “dresses” is to pay them. Naturally, this is a situation that is ripe for exploitation, in spite of Google’s motto “Don’t be evil.”

The reality is that everyone tries to hover just below Google’s “detection screen,” moving up gradually but not too much. Where does that leave the public? Although they may “feel” free, behind the scenes they are being incredibly manipulated. I especially liked where Segal wrote: “When you read the enormous list of sites with Penney links, the landscape of the Internet acquires a whole new topography. It starts to seem like a city with a few familiar, well-kept buildings, surrounded by millions of hovels kept upright for no purpose other than the ads that are painted on their walls.” Maybe the view of the virtual world is not that of Tron, with either complete control or complete freedom, but rather like the vision of the modern city in Blade Runner.

The scholarly/educational world is no more virtuous than the regular world and will suffer from the same problems. In this regard, the article “Academic Search Engine Spam and Google Scholar’s Resilience Against it” by Joeran Beel and Bela Gipp (Journal of Electronic Publishing, Volume 13, Issue 3, December 2010, DOI: 10.3998/3336451.0013.305) is even more important, especially since the authors conclude that it is easier to spam Google Scholar than regular Google.

This is a rat race that libraries should do their best to avoid. The Google-inspired tools based on crowdsourcing have their advantages but, as this article makes clear, problems as well. We should assume that Google Books and Google Scholar will have essentially the same problems, if not worse. It still seems to me that traditional library goals and ethics, based on standards, can have a role in solving this dilemma, but at this point I don’t know how.