all 6 comments

[–]In-the-clouds 2 insightful - 1 fun (4 children)

Cloudflare says they don't use CAPTCHA, but they use Turnstile. They admit bots can easily pass the Turnstile test, but if a bot does not use a real browser, Cloudflare can detect that. So the bot programmers now know all they have to do is have their AI use the same kind of browser as regular people.

Bots definitely can check a box, and they can even mimic the erratic path of human mouse movement while doing so. For Turnstile, the actual act of checking a box isn’t important, it’s the background data we’re analyzing while the box is checked that matters. We find and stop bots by running a series of in-browser tests, checking browser characteristics, native browser APIs, and asking the browser to pass lightweight tests (ex: proof-of-work tests, proof-of-space tests) to prove that it’s an actual browser.
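To make the "in-browser tests" part concrete, here is a minimal sketch of what probing browser characteristics and native APIs could look like. This is purely illustrative: the signal names, the scoring, and the threshold are made up for the example and are not Cloudflare's actual Turnstile code, which runs far more (and server-side) checks.

```typescript
// Illustrative only: a toy "does this look like a real browser?" probe.
// The specific signals and the pass threshold are assumptions for the sketch.
interface BrowserSignal {
  name: string;
  present: boolean;
}

function collectBrowserSignals(): BrowserSignal[] {
  return [
    // Native APIs that non-browser or headless clients often lack or stub out.
    { name: "webgl", present: typeof WebGLRenderingContext !== "undefined" },
    { name: "notifications", present: "Notification" in window },
    { name: "not-webdriver", present: navigator.webdriver !== true },
    { name: "plugins", present: navigator.plugins.length > 0 },
    { name: "languages", present: navigator.languages.length > 0 },
  ];
}

function looksLikeRealBrowser(signals: BrowserSignal[]): boolean {
  // Toy scoring: allow at most one missing signal.
  const passed = signals.filter((s) => s.present).length;
  return passed >= signals.length - 1;
}
```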

[–]package 1 insightful - 1 fun (3 children)

Sounds like the intent is focusing on rate limiting possible bots, which is pretty much the main reason people use cloudflare protection anyway. The burden of more directly verifying whether users are actual humans then falls to other services or the site itself, which seems totally reasonable IMO.

[–]In-the-clouds 1 insightful - 1 fun (2 children)

There is so much talk about bots now. And the leaders demand that only humans should use their service. It's a clever plan to force the mark of the beast, an ID inside the user, verifying that it is not a bot, but a being, with a traceable ID.... No more anonymous use of the internet. No more free speech, because they will then be able to punish the specific being who speaks. And they can blame the bots for "needing" the ID.

[–]package 1 insightful - 1 fun (1 child)

This doesn't have anything to do with a "traceable id". As described in the article, they are verifying that the request is being made from within the context of a browser by checking the availability of various API calls, as well as running simple proof of work and proof of space routines to impose rate limiting. Upon completion of these checks they'd give your browser a token it can use for a fixed period of time (generally less than a day) to say that it passed the checks.
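A toy version of that proof-of-work-then-token flow, to show why it imposes rate limiting without identifying anyone. Everything here is assumed for illustration (the hash target, token format, and 4-hour TTL are invented, and a real browser would use the async Web Crypto API rather than Node's crypto module); it is not Turnstile's actual challenge or clearance cookie.

```typescript
// Illustrative proof-of-work: find a nonce whose SHA-256 over (challenge + nonce)
// starts with a few zero bits, then hand back a short-lived, anonymous token.
import { createHash, randomBytes } from "crypto";

// difficultyBits must be 1..8 in this simplified single-byte check.
function solveProofOfWork(challenge: string, difficultyBits: number): number {
  let nonce = 0;
  for (;;) {
    const digest = createHash("sha256").update(challenge + nonce).digest();
    // Pass when the leading `difficultyBits` bits of the hash are zero.
    if ((digest[0] >> (8 - difficultyBits)) === 0) return nonce;
    nonce++;
  }
}

interface ClearanceToken {
  value: string;     // random, carries no identity
  expiresAt: number; // fixed lifetime, generally well under a day
}

function issueToken(ttlSeconds: number): ClearanceToken {
  return {
    value: randomBytes(16).toString("hex"),
    expiresAt: Date.now() + ttlSeconds * 1000,
  };
}

// Usage: the client burns a little CPU per challenge (which is what slows bots
// down at scale), and the server accepts the resulting token instead of
// re-running the checks until it expires.
const nonce = solveProofOfWork("example-challenge", 8);
const token = issueToken(4 * 60 * 60); // 4-hour clearance, chosen for the example
```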

[–]In-the-clouds 1 insightful - 1 fun (0 children)

You are describing the present while I was describing the future.

[–]iamonlyoneman 2 insightful - 1 fun (0 children)

When we see this issue, we now surface a clear error message to the end user to update their system time

I'll take "things a user who can't figure out a BIOS battery replacement is not going to be able to do" for $200

we find that a few privacy-focused users often ask their browsers ...changing their user-agent ... and preventing third-party scripts from executing entirely. ... those users can immediately ... make a conscientious choice about whether they want to allow their browser to pass a challenge.

aaaaand I clicked away from the page. I didn't need that content anyway LOL

I unironically cannot be assed to defeat systems designed to foil bots. There's almost nothing online that's worth letting some random website run a bunch of unknown scripts for.