The Bing Search Engine
Bing, or Microsoft Bing as it’s now known, is struggling to make the grade as a search tool of choice for most internet users. Google has become the word people use to describe any internet search and is the search tool nearly everyone goes to first.
Microsoft, Bing’s owner, would clearly like this situation to change. They would like to see their search tool used by more people. In an article on the New York Times website, Nick Wingfield and Eric Lipton remark:
“For Microsoft, any limits imposed on Google might help it improve the fortunes of its struggling search engine, Bing.”
While Wingfield and Lipton’s article is more about Google than Bing, the statement does make us think about Bing as a search engine.
Poor Indexing, Poor Search Results
For the average internet user, Bing often provides a poor search experience.
Searching with Bing is not a reliable way of finding the content one is looking for. Microsoft can hardly expect its engine to become the first, or even second, choice for internet users until the results it returns reflect the content they want to find.
Right now Bing does not provide a good user experience. Until it does, this search engine will not thrive, no matter how much legal effort Microsoft directs at Google.
Bing: A Webmaster’s Nightmare
For those of us tasked with managing websites, Bing is a bit of a nightmare. The crawl rate alone is enough to create problems for small sites on low-end shared hosting, and crawl-delay is one of the few robots.txt rules Bingbot seems to obey.
Talking of Bing as a rule-breaking spider, what do we mean? These bots seem to decide which robots.txt disallow rules they will obey and which they will ignore. For instance, file-extension disallow rules for images (.jpg, .png, .bmp and so on) get ignored, and Bing happily goes about crawling images regardless of the rules.
Path disallow rules are another set the bot seems to obey only when it chooses. If, for example, you don’t want search engines to index any content with “/abc/” in the path ( domain/abc/post-url ), Bing follows and indexes these URLs anyway. (Or maybe I should rather say it stores them in its database regardless…)
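For readers less familiar with robots.txt, the kinds of rules described above look something like the sketch below. The paths and extensions are illustrative only, not taken from any real site; the wildcard and $ syntax is a widely supported extension to the original standard rather than part of it.

```
# Ask the bot to pause between requests --
# one of the few directives Bingbot does seem to honour
User-agent: bingbot
Crawl-delay: 10

User-agent: *
# File-extension disallows for images -- the rules Bing reportedly ignores
Disallow: /*.jpg$
Disallow: /*.png$
Disallow: /*.bmp$
# Path disallow -- also reportedly ignored
Disallow: /abc/
```

A compliant crawler reading this file would skip any URL matching those patterns; the complaint here is that Bing’s bot does not.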
The same applies to HTML robots meta tags – noindex and nofollow are treated as so much garbage…
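For reference, these are the standard meta tags in question, placed in a page’s head. The values shown instruct compliant crawlers not to index the page and not to follow its links:

```html
<head>
  <!-- Tells compliant crawlers: do not index this page, do not follow its links -->
  <meta name="robots" content="noindex, nofollow">
</head>
```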
A real-life example is a site that used a third-party embed to provide a property-listing service. The third party removed listings without putting a proper 404 response (or redirect) in place, so Bing’s bot crawling these pages created thousands of broken links. More than six months after the system was removed from the site, Bing continued to request these links, producing several hundred entries in the 404 logs daily. Google has not looked for a single one of them!
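One common way to quiet this sort of stale crawling, on Apache at least, is to answer the dead URLs with 410 Gone, which signals permanent removal more strongly than a 404. A minimal sketch, assuming the removed listings all lived under a hypothetical /listings/ path:

```
# Apache (.htaccess or vhost config, mod_alias required):
# answer anything under the removed /listings/ path with 410 Gone
# so well-behaved crawlers drop the URLs from their queues
RedirectMatch gone "^/listings/"
```

Whether Bing’s bot actually respects the 410 any better than the 404s described above is, of course, another question.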
Then we have the poor indexing – Bing may hit your site from 20 or more different IPs at the same time. It may try to crawl hundreds of pages a second, but you’ll be very lucky if a tenth of the content on your site becomes available to searchers…
Users Want to Find What They Want
At the end of the day, internet users will only use a search engine that gives them links to whatever they searched for. If a search engine isn’t indexing content, or is only partially indexing a website, how can it possibly return the right content to searchers?
For Microsoft to gain any credibility for Bing, it needs to look at the way its search engine works. While Google remains the only reliable search tool, searchers will not be keen to rely on any other engine to search the web.