About Robots Files
Each UXi site has a robots.txt file used to give instructions about the site to web robots, such as the crawlers run by Google and Bing.
It works like this: a robot wants to visit a site URL, say http://www.uxisite.com/welcome/.
Before it does so, it first checks for http://www.uxisite.com/robots.txt, and finds:
User-agent: *
Disallow: /
The "User-agent: *" means this section applies to all robots.
The "Disallow: /" tells the robot that it should not visit any pages on the site.
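The check described above can be sketched with Python's standard-library robots.txt parser. The rules are supplied inline for illustration; a real robot would fetch them from http://www.uxisite.com/robots.txt first. The user-agent name "MyBot" is just a placeholder.

```python
from urllib import robotparser

# The disallow-all rules from the example above, supplied inline.
rules = [
    "User-agent: *",
    "Disallow: /",
]

parser = robotparser.RobotFileParser()
parser.parse(rules)

# "Disallow: /" blocks every path on the site for every robot,
# so a well-behaved crawler will not visit the page.
print(parser.can_fetch("MyBot", "http://www.uxisite.com/welcome/"))  # False
```

Note that this only models a well-behaved robot: as the points below explain, nothing forces a crawler to run a check like this.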
- Any time a site's URL is updated, the process outlined below should be completed. This ensures that all search bots have the most up-to-date information on the site.
- Robots can ignore your /robots.txt. In particular, malware robots that scan the web for security vulnerabilities, and email-address harvesters used by spammers, will pay it no attention.
- The /robots.txt file is publicly available. Anyone can see which sections of your server you don't want robots to use.
How to Use Robots in UXi
UXi Settings > Robots
- Highlight the entire content of the robots file and press Delete.
- Click Save Robots.
The result will be:

User-agent: * : applies rules to all robots
Sitemap: http://uxisite.com/sitemap.xml : sets an exact location for the XML sitemap
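The effect of the saved file can be verified with the same standard-library parser used by many Python crawlers (a sketch, not UXi's own code): with no Disallow rules, every path is crawlable, and the Sitemap line advertises the XML sitemap's location to search bots. The path /welcome/ and the user-agent name "MyBot" are placeholders.

```python
from urllib import robotparser

# The two lines the saved UXi robots file contains.
saved_file = [
    "User-agent: *",
    "Sitemap: http://uxisite.com/sitemap.xml",
]

parser = robotparser.RobotFileParser()
parser.parse(saved_file)

# No Disallow rules, so every page may be crawled.
print(parser.can_fetch("MyBot", "http://uxisite.com/welcome/"))  # True

# The advertised sitemap location (site_maps() needs Python 3.8+).
print(parser.site_maps())  # ['http://uxisite.com/sitemap.xml']
```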