Today my supervisor asked me to look up some information on the robots.txt file. This is a text file that tells visiting robots (web crawlers or spiders) which parts of the website they are allowed to visit and which they are not. The file should be placed in the root directory of the website, so that it can be reached at www.yourdomain.com/robots.txt.
In this text file you enter directives such as the following:
User-agent: * means that the rule applies to every robot that requests pages from the site.
Disallow: / specifies a URL path that robots must not visit (in this case /, which blocks robots from the entire website).
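To see what effect those two lines actually have, here is a small sketch using Python's standard urllib.robotparser module; the www.yourdomain.com domain is just the placeholder from the example above.

```python
from urllib.robotparser import RobotFileParser

# The example robots.txt content: every robot, whole site disallowed.
robots_txt = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch(useragent, url) reports whether a well-behaved robot
# with that user-agent may visit the given URL.
print(rp.can_fetch("*", "http://www.yourdomain.com/"))           # False
print(rp.can_fetch("*", "http://www.yourdomain.com/page.html"))  # False
```

In real use you would call rp.set_url("http://www.yourdomain.com/robots.txt") followed by rp.read() to fetch the live file instead of parsing an inline string.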