Simply put, the robots exclusion standard (also called the robots exclusion protocol or robots.txt protocol) is an easy way of telling Web crawlers and other Web robots which parts of a Web site they can and cannot visit.
To give robots instructions about which parts of your site they may access, place a plain-text (.txt) file called robots.txt in the root directory of your Web site, e.g. https://owlman.neocities.org/robots.txt. This file tells robots which parts of your site they may visit; however, some robots, especially malicious (or bad) ones, may simply ignore it.
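As a sketch of what such a file might look like, here is a minimal robots.txt (the paths shown, like /private/, are made-up examples, not anything from this site):

```
# Rules for all robots
User-agent: *
Disallow: /private/
Disallow: /drafts/

# Rules for one specific crawler
User-agent: ExampleBot
Disallow: /
```

Each record starts with a User-agent line naming the robot it applies to (* means every robot), followed by Disallow lines listing the paths that robot should stay away from.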
If the robots.txt file does not exist, Web robots assume that they may crawl all parts of your site.
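A well-behaved robot checks these rules before fetching a page. If you are curious how that looks in practice, here is a short sketch using Python's standard urllib.robotparser module (the rules and example.com URLs are hypothetical; a real crawler would download the site's actual robots.txt first):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules; a real crawler would instead call
# parser.set_url("https://example.com/robots.txt") and parser.read().
rules = """User-agent: *
Disallow: /private/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A polite robot asks before fetching each URL.
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # blocked
print(parser.can_fetch("*", "https://example.com/index.html"))         # allowed
```

Note that can_fetch() only reports what the rules say; nothing physically stops a bad robot from fetching the page anyway.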
An example of a good robot (and a good boy).
[ASCII art of K-9, the robot dog]
Here are some useful links on robots.txt that may help you.