Is robots.txt needed if one uses the meta robots tag, and vice versa? Also, does robots.txt reside only in the root directory, or does each subdirectory for individual sites also need one…assuming the file contains the same permissions?
1.) Will the above robots.txt account for all domains, subdomains, and subdirectories? Or is it necessary to have a robots.txt in each domain folder?
2.) According to the above robots.txt, all directories and files are crawled with the exception of cgi-bin, images, something.js, and subdomain. Is the last empty Disallow: statement needed, or will crawling be allowed by default if it is not specified?
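For what it's worth, you can test this behavior yourself with Python's standard-library `urllib.robotparser`. The robots.txt below is my reconstruction from your description (the exact paths are assumptions, since the original file isn't shown), and it demonstrates that the trailing empty Disallow: is a no-op — everything not explicitly disallowed is crawlable by default either way:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt reconstructed from the question's description;
# the exact paths are assumptions, not the original file.
robots_txt = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /images/
Disallow: /something.js
Disallow: /subdomain/
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Explicitly disallowed paths are blocked:
print(rp.can_fetch("*", "/cgi-bin/script.pl"))  # False
print(rp.can_fetch("*", "/images/logo.png"))    # False

# Everything else is crawlable; the empty "Disallow:" matches nothing,
# so removing that line does not change any of these results.
print(rp.can_fetch("*", "/about.html"))         # True
```

Crawlers read robots.txt only from the root of each host, so each subdomain would need its own copy if you want the same rules applied there.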