July 15, 2005

Proposition: OPT-OUT controls are not DMCA access controls

Having wasted entirely too much time being sucked into, err, having now thought about the Internet Archive / DMCA circumvention issue at great length, I think I've come up with a closely-reasoned argument for why the Internet Archive's control system should not be subject to the DMCA:

Proposition: OPT-OUT controls are not DMCA access controls

The DMCA reads:

(B) a technological measure "effectively controls access to a work" if the measure, in the ordinary course of its operation, requires the application of information, or a process or a treatment, with the authority of the copyright owner, to gain access to the work.

The Internet Archive's robots.txt control looks superficially like a DMCA access control. But I'd say that, at a detailed level, it doesn't qualify. Crucially, the default in the Internet Archive is to allow access, and it does the inverse - in the ordinary course of its operation, it requires the application of information, or a process or a treatment, with the authority of the copyright owner [i.e., retrieving a robots.txt file], to DENY access to the work.
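
To make the mechanics concrete, here's a minimal sketch of that default-permit, opt-out shape. This is purely illustrative Python - the function name, the crawler token, and the crude robots.txt parsing are my assumptions for the sketch, not the Archive's actual code:

    # Illustrative sketch only: a default-permit ("opt-out") check, loosely
    # modeled on the robots.txt exclusion described above.
    import urllib.request

    CRAWLER_NAME = "ia_archiver"  # user-agent token the exclusion is keyed to (assumed)

    def may_serve_archived_copy(site: str) -> bool:
        """Access is the default; denial requires successfully fetching a
        robots.txt that explicitly excludes the crawler."""
        try:
            with urllib.request.urlopen("http://%s/robots.txt" % site, timeout=10) as resp:
                rules = resp.read().decode("utf-8", errors="replace")
        except OSError:
            # Retrieval failed (no file, timeout, DNS error...).  The process
            # that would deny access never completed, so the default - access -
            # prevails.  This is the fail-open behavior the argument turns on.
            return True
        # Crude parse: deny only if a record for this crawler (or for "*")
        # disallows the whole site.
        denied, applies = False, False
        for line in rules.splitlines():
            line = line.split("#", 1)[0].strip()
            if line.lower().startswith("user-agent:"):
                applies = line.split(":", 1)[1].strip() in ("*", CRAWLER_NAME)
            elif applies and line.lower().startswith("disallow:"):
                denied = denied or line.split(":", 1)[1].strip() == "/"
        return not denied

Note the shape: the only way to be denied is for the retrieval-and-parse process to run to completion and come back with an exclusion.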

Of course, in a very abstract sense, one could say these are equivalent in terms of logical negation. But I'd argue that if, by explicit design decision (which is the case here), failure of the process leads to permission rather than denial, then it can't qualify as a DMCA 1201 access control method. Even if it's an access control method in a broader sense, not every access control method should be taken to fit the DMCA's definition.
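
For contrast, the shape the statute seems to contemplate is fail-closed: the process has to complete successfully before any access is gained. Again a purely hypothetical sketch - a made-up keyed-hash check, not any real system's measure:

    # Hypothetical fail-closed access control, for contrast with the sketch above.
    import hashlib
    import hmac

    OWNER_KEY = b"key issued with the authority of the copyright owner"  # stand-in

    def may_access_work(presented_token: bytes, work_id: str) -> bool:
        """Access is granted only when the required process (here, a keyed-hash
        check) completes and succeeds; any failure along the way denies access."""
        try:
            expected = hmac.new(OWNER_KEY, work_id.encode("utf-8"), hashlib.sha256).digest()
            return hmac.compare_digest(presented_token, expected)
        except (TypeError, ValueError):
            # A broken or malformed request means no access - the opposite
            # default from the opt-out sketch above.
            return False

When the process breaks, the two sketches give opposite answers, and that design decision - fail-open versus fail-closed - is exactly what the argument here leans on.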

This seems to capture the intuitive argument.

Disclaimer: I'm not a lawyer, this is not legal advice, I make no assurances a "hacker"-hating judge would care.

By Seth Finkelstein | posted in dmca | on July 15, 2005 11:25 AM (Infothought permalink)

Comments

http://realmeasures.dyndns.org

Posted by: don warner saklad at July 15, 2005 03:18 PM

A couple things occur to me. Intentionally making repeated requests to a website until something breaks would seem to be tortious conduct, in and of itself. That could potentially get the trespass to chattels treatment that is becoming so popular.

Second, has anyone tried to see if this works on any other site? Is this mere speculation that the repeated attempts broke something? It seems hard to believe that, even if the once-per-day request was broken, a single user making successive requests could /. (slashdot) a server's return of robots.txt.

Posted by: mmmbeer at July 18, 2005 12:30 PM

If you read the lawsuit in depth, it's clear that something broke from repeated requests. I've made some speculations as to the gory details. But there's much material right in the lawsuit itself.

It may not have been that the time-out was at the target end, but rather at the retrieving end - the Internet Archive can be notoriously slow sometimes, and they might have timed out some processes.

Posted by: Seth Finkelstein at July 18, 2005 07:38 PM