In my last post I mentioned the term Random Surfer. It slipped out almost subconsciously, but when I reread that post my attention was suddenly drawn to it. Who is this Random Surfer? It was a very familiar term to me, so I decided to write about it. It's going to be a bit technical, kindly excuse.

Last semester, in my Semantic Web course, I had to implement a paper titled "Ranking Knowledge in Semantic Web". It was a very interesting paper, and through it I came to know the concept behind search engines and the way they rank result pages. The famous PageRank algorithm used in search engines like Google is based on the Random Surfer Model.

The model of the surfer is very simple. Whenever a surfer views a page, there is a certain probability that he clicks a link on that page, and a certain probability that he jumps to some other random page. This is usually the famous 90-10 rule: the probability of clicking a link on the current page is 0.9, and the probability of jumping to a random page is 0.1. The PageRank of a page is a summation of these probabilities over all the pages linking to it. To sum up, if a page has a lot of inbound links, it has a greater probability of being visited and hence a higher PageRank. On the real web this is more complex, because the actual PageRank of a page also depends on the PageRank of the pages from which its inbound links come. This may seem too complex, and it really is. Imagine the graph or matrix a search engine has to maintain to establish the relationships between all these pages (that's awesome, isn't it?). They achieve this kind of computing power very easily using grid computing.
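To make the idea concrete, here is a minimal sketch of that calculation in Python. The `pagerank` function and the toy `graph` are my own illustration, not code from the paper; the damping factor of 0.9 matches the 90-10 rule above (Google's original paper actually used 0.85), and real engines work on billions of pages, not three.

```python
def pagerank(links, d=0.9, iterations=50):
    """links maps each page to the list of pages it links to.

    d is the probability of following a link; (1 - d) is the
    probability of jumping to a random page (the 90-10 rule).
    Assumes every page has at least one outbound link.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start the surfer uniformly at random

    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Sum the rank flowing in from every page that links to p:
            # each page q splits its rank evenly among its outbound links.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new_rank[p] = (1 - d) / n + d * incoming
        rank = new_rank
    return rank

# Tiny example: C has two inbound links, so it ends up ranked highest.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(graph))
```

Notice how the ranks feed back into each other on every iteration: that is exactly the "a page's rank depends on the ranks of the pages linking to it" complexity mentioned above, and repeating the update until the values settle is how it gets resolved.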
Now, if you are really curious and want to know the number of inbound links you have, just click here, enter your blog or website, and find out. To get more information on the Random Surfer Model and PageRank calculation, click here.