Another revenue stream, although maybe a small one, could be a merch shop. I would love to contribute designs that maybe get voted on, but would certainly love to proudly wear and promote Bonsai Nut gear!
I can't count the number of people that want a T-shirt (myself included)!
I like this idea. Maybe use the PBS/NPR model and have pledge drives.
I would wear Bonsai Nut T-shirts exclusively for work.
I went to donate again and couldn’t find any “donate” button on the phone version. “Installed app” didn’t do anything. iPhone 11. Need an email for PayPal? @Bonsai Nut, I’ll consider this thread a pledge drive.
Hey @Bonsai Nut
You can block many AI scraper bots using robots.txt.
This is a good resource on the options you have for blocking AI scrapers:
https://github.com/ai-robots-txt/ai.robots.txt
The quickest win, though, is to add that repo's user agents to the disallow rules in your robots.txt file.
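Something like this works as a starting point. This is a minimal sketch using a hand-picked subset of the user agents the repo tracks; the real list is much longer and changes over time, so pull it from the repo rather than copying this:

```
# Illustrative subset of AI crawler user agents; see the
# ai-robots-txt repo for the full, current list.
User-agent: GPTBot
User-agent: ChatGPT-User
User-agent: ClaudeBot
User-agent: anthropic-ai
User-agent: CCBot
User-agent: Google-Extended
User-agent: PerplexityBot
User-agent: Bytespider
User-agent: Amazonbot
Disallow: /
```

Grouped User-agent lines like this all share the single Disallow: / rule, which asks compliant crawlers to stay off the whole site.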
Blocking these bots should also cut some of the bandwidth costs that their requests may be inflating.
It may also hurt your Google search ranking, so there may be a tradeoff.
Yes, I understand. It's a command, not security, and robots.txt is really an "honor system" anyway. That's why the GitHub repo covers multiple methods of controlling access; one of them is sketched below. Maybe it will help with the more honest players, but I'm not sure any of them are honest anymore.
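For instance, you can enforce the block at the web server itself instead of just asking nicely. Here's a rough nginx sketch, assuming the same illustrative subset of bots as above (the pattern and server details are placeholders to adapt to your own config):

```
# In the http {} context: flag requests whose User-Agent
# matches known AI crawlers (case-insensitive regex).
map $http_user_agent $is_ai_bot {
    default 0;
    "~*(GPTBot|ClaudeBot|CCBot|Google-Extended|PerplexityBot|Bytespider)" 1;
}

server {
    listen 80;
    server_name example.com;  # placeholder

    # Reject matching crawlers outright instead of
    # relying on them to honor robots.txt.
    if ($is_ai_bot) {
        return 403;
    }
}
```

Unlike robots.txt, this refuses the request outright, so compliance isn't optional for any bot that announces itself honestly in its User-Agent header.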
Some go to more extreme means to circumvent any blocks:
Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives (blog.cloudflare.com):
"Perplexity is repeatedly modifying their user agent and changing IPs and ASNs to hide their crawling activity, in direct conflict with explicit no-crawl preferences expressed by websites."