Matt Cutts, the head of Google's webspam team, announced on Twitter a change to the search algorithm targeting exact-match domains. The aim of the update is to reduce the number of low-quality sites in search results. He also wrote that the new update affects 0.6% of English-US search results and operates independently of Panda and Penguin.
Bill Slawski wrote an excellent post about a Google patent describing ways to identify commercial queries; the patent was made public in October 2011. The patent application enumerates the problems with exact-match domains. Slawski observed that companies try to game search engines with such domains, since search engines currently tend to rank them higher. That is why there is a risk that many website owners will try to cheat the system to gain higher search positions.
The patent was filed back in 2003, with Amit Singhal, Matt Cutts, and Hun Woo listed as its authors.
In March 2011 Cutts told webmasters that he had noticed some commercial exact-match domains performing surprisingly well. Many users believed that for sites with exact-match domain names the content was effectively exempt from analysis, which made the search results unfair. Cutts noted that Google was trying to strike a balance and take every detail into account. Google's engineers concluded that the presence of keywords in a domain must not be the main factor differentiating domains. It is not yet known whether the new algorithm will target only commercial sites, but a great deal of work has been done in this direction.
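Google's actual ranking signals are not public, but the idea that keywords in a domain must not be the main differentiator can be illustrated with a toy scoring sketch (all names and numbers here are hypothetical): an exact-match bonus that is gated by an independent quality signal, so a keyword-stuffed domain with thin content gains nothing.

```python
def toy_score(query, domain, quality):
    """Toy ranking sketch -- hypothetical, not Google's algorithm.

    query:   the search query, e.g. "cheap flights"
    domain:  the site's domain, e.g. "cheapflights.example"
    quality: an independent content-quality estimate in [0, 1]
    """
    # Does the domain contain the query exactly (ignoring spaces and case)?
    normalized = query.replace(" ", "").lower()
    exact_match = normalized in domain.lower()

    base = quality  # quality signals dominate the score
    # The exact-match bonus applies only when quality is already decent,
    # so keyword presence in the domain alone cannot lift a weak site.
    bonus = 0.2 if exact_match and quality >= 0.5 else 0.0
    return base + bonus

# A high-quality exact-match site still benefits from the match;
# a low-quality one is ranked purely on its (poor) content.
print(toy_score("cheap flights", "cheapflights.example", 0.8))  # 1.0
print(toy_score("cheap flights", "cheapflights.example", 0.3))  # 0.3
```

The design choice mirrors the article's point: the domain match is a secondary signal that can only amplify an already-acceptable quality score, never substitute for it.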
Slawski also noted that the algorithm touches other types of queries as well: for example, geo-located queries, navigational queries (e.g., the query "ibm" returning the IBM home page), time-sensitive queries, news-search queries, and so on.