Check out the Fox Trot comic strip for September 29 2002. P2P sharing and file corruption as topics (funny too). More significantly, you know that something is reaching public awareness when it's a theme in a newspaper comic strip.
Time to measure my height on the journalistic pyramid this week. My last anti-censorware material, "SmartFilter stupidity - book sites as SEX", meant to tie into "Banned Book Week", garnered around 500 hits. The majority of them seemed to come from library mailing-lists, since a notice was sent around to those lists due to the banned-books-week tie-in. Perhaps I'm being unreasonable, but under the circumstances it was very disappointing.
Interestingly, there was a drop-off in interest on GrepLaw between this material and the earlier "SmartFilter stupidity - school sites as SEX". The school material received around 136 hits from there, but the book material only about 78 hits. Lisnews, however, showed less of a drop, at 52 vs 44.
Sigh. I'm not of the opinion that if I get just one reader, it's worth it.
Thought for consideration : We should change usage from "intellectual property" to "granted monopoly".
I'm coming to believe that the term intellectual property increasingly leads to an inability to think about the issue. Copying isn't theft. But what is it? In the case of copyright, it's a violation of the business model of a granted monopoly. This violation may be trivial, or may indeed threaten the business model. But talking of it in terms of property threatens to crowd out anything else.
Findlaw has an interesting article, "Should Software Companies Be Able, Through Contracts, To Prevent Competitors From "Reverse Engineering" Their Products?", by Chris Sprigman. It's a very good discussion of the subject. But there are a few places which could use some commentary:
Minor point:
Now, however, some companies whose software has been reverse engineered have started to fight back. They have added anti-reverse engineering provisions to the "shrinkwrap" licenses that accompany their products.
"Now"? This isn't new. I can't recall ever seeing a commercial shrinkwrap license without prohibitions against reverse-engineering. I just found a censorware example from 1997, with a reply indicating this issue goes back decades (n.b., this is in part why I did my pioneer work against censorware , in virtual anonymity for so long).
Major point:
Reverse engineering itself, then, has been held to be fair use.
There's a difference between the idea that "reverse engineering itself, then, has been held to be fair use", per se, intrinsically, and the idea that certain instances of reverse engineering have been held to be fair use while others have been denied that defense. That is, the difference between is fair use, versus could be, but also might not be, fair use. A reader of that article can easily get the impression that the courts have said reverse-engineering itself is always permitted as fair use, whereas in other cases they've also said it is not fair use.
In particular, of special interest to me, the Cyberpatrol lawsuit, regarding programmers who reverse-engineered that censorware, has the following nasty things to say about that reverse-engineering of censorware:
43. Jansson and Skala admitted that they reverse engineered and decompiled Cyber Patrol, which violates the Cyber Patrol license agreement and creates an intermediate copy of Cyber Patrol. ... In either case, by creating an intermediate copy of the Cyber Patrol software the defendants committed a prima facie copyright violation. ...
No Fair Use Defense
44. Fair use is a statutory affirmative defense to conduct otherwise actionable under the copyright law. ...
45. In general, any claimed "fair use" must be "consistent with the ultimate aim [of the Copyright Act] to stimulate artistic creativity for the general public good" ...
46. It is the defendants' burden to demonstrate such "fair use." ...
47. The individual defendants have no "fair use" defense here because they have neither asserted it nor submitted evidence supporting any fair use defense. ...
48. In addition, the purpose of the copying here is inconsistent with the general public good. The individual defendants' avowed purpose for decompiling CyberPatrol was to allow "youth access" to inappropriate content on the World-Wide-Web. That purpose contradicts the public interest as specifically found by Congress ...
49. Finally, to negate fair use one need only show that if the challenged use should become widespread, it would adversely affect the potential market for the copyrighted work ...
50. By their own admission, Jansson and Skala created the Bypass Code to "break" CyberPatrol ... Software explicitly designed to make CyberPatrol ineffective for its intended use can do nothing other than "adversely affect the potential market for the copyrighted work" ...
So whether reverse-engineering is fair-use also has to do with whether the court finds the specifics to be in "the general public good".
Disclaimer: I'm not a lawyer. But as the saying goes, the hound was only running for his dinner, but the hare was running for his life.
I received a nice reply (from Derek Slater, a person on the civil-liberties side) about my last entry, where he gently elucidated many key legal differences between copyright clause interpretation and DMCA interpretation. All good material. I didn't mean to give any impression that I was arguing the situations are legally identical in all respects. What I was trying to do earlier was to examine Valenti's copyright comment in terms of implications regarding practice versus formalism. If "'limited' is whatever Congress says it is", then in practice that's unlimited, through the method of making "limited" mean something along the lines of "finite (yet not necessarily reached)". A copyright which never expires in practice is unlimited for business purposes, whether or not it qualifies as limited in a legal sense. Note I'm echoing the Eldred dissent here:
Second, and more importantly, the Court's construction of the Copyright Clause of the Constitution renders Congress's power under Art. I, § 8, cl. 8, limitless despite express limitations in the terms of that clause. ... Under the Court's decision herein, Congress may at or before the end of each such "limited period" enact a new extension, apparently without limitation. As the majority conceded, "[i]f the Congress were to make copyright protection permanent, then it surely would exceed the power conferred upon it by the Copyright Clause." Eldred, 239 F.3d at 377. The majority never explained how a precedent that would permit the perpetuation of protection in increments is somehow more constitutional than one which did it in one fell swoop.
But again, that's the dissent. What strikes me as interesting here is the way what I call the "finite yet unbounded" interpretation works around the apparent limit in "limited". A geek would call that a "hack". Valenti seems to argue that copyright could be made permanent in all but name (though admittedly the courts don't think we are at that point yet).
But compare the above dissent passage to what Judge Kaplan said about the DMCA, "effectively controls access" argument, in the DeCSS case:
Finally, the interpretation of the phrase "effectively controls access" offered by defendants at trial--viz., that the use of the word "effectively" means that the statute protects only successful or efficacious technological means of controlling access--would gut the statute if it were adopted. If a technological means of access control is circumvented, it is, in common parlance, ineffective. Yet defendants' construction, if adopted, would limit the application of the statute to access control measures that thwart circumvention, but withhold protection for those measures that can be circumvented. In other words, defendants would have the Court construe the statute to offer protection where none is needed but to withhold protection precisely where protection is essential. The Court declines to do so.
Now, I'm NOT saying that these situations are equally valid, or that they have an identical legal basis behind them. But there did seem to me to be something of the same "hacking" (in the old-style meaning of the word) spirit in the two arguments. That is, nullifying something in practice, by using a definition which reduces the apparent meaning to one having virtually no real-world significance.
If "effectively" meant "successful", then the DMCA would have no power. And if "limited" means "finite yet unbounded", then "limited times" is no practical constraint.
I suppose my point is that what Valenti is doing still strikes me as "legal hack", even if it's a better-premised "legal hack" than the one tried for DeCSS.
I was thinking about this passage regarding copyright and "limited times", from copyfight:
Jack Valenti on the Constitution's Copyright Clause, quoted in Dan Gillmor's Valenti Presents Hollywood's Side of the Technology Story: "[Just] read Article I, Section 8 of the Constitution, which gives Congress the power to 'promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.' There's no ambiguity...'limited' is whatever Congress says it is."
There's certainly a logical problem here - if "limited" could be a million years, that's not limited in any but the most formal sense. I'm not claiming any special insight on that point; it's been said many a time. However, I was struck by the thinking going on here. It's a mirror of exactly the sort of geek-mindset that tries and fails to come up with a "legal hack". In discussions of the DMCA, I've seen so many programmers say something along these lines: the DMCA language talks about a measure which "effectively controls access", but if such a measure is broken, it must not have been "effective", gotcha, ha-ha. This was in fact addressed as a legal argument in the DeCSS case, and the court didn't buy it at all. But it seems the copyright interests are doing precisely the same sort of word-gaming - "limited times", sure, limited to expire 20 years from now, always, an unreachable limit, but still a "limit", gotcha, ha-ha. And so far, they have been prevailing with this argument, though with a shade of dissent.
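To see the word-game in miniature, here's a toy model (my own illustration; the dates and the 20-year increments are made up) of a "finite yet unbounded" copyright term:

```python
# Toy model of "finite yet unbounded": at every moment the term is a
# definite, formally "limited" number of years, but if Congress extends
# it by 20 years every 20 years, the expiry date recedes forever and
# the work never actually enters the public domain.
def expiry_year(enacted: int, extensions: int, term: int = 20) -> int:
    """Expiry year after a given number of 20-year extensions."""
    return enacted + term * (extensions + 1)

for n in range(4):
    print(f"after {n} extension(s), copyright expires in {expiry_year(1998, n)}")
# after 0 extension(s), copyright expires in 2018
# after 1 extension(s), copyright expires in 2038 ... and so on, without end.
```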
There's a lesson (politics, or maybe "Critical Legal Studies") in here somewhere.
Let me make another try at outlining what I was trying to express in my message "porn, spam, "filtering", and magic", to which Edward Felten has nicely replied, saying in part:
The point I was trying to make in my original post is that too often, the same people who ridicule magical thinking about porn blocking, adopt nearly the same magical "reasoning" when the topic changes to spam blocking.
But, no, that's not really the case, in my view. This is an appealing idea, a "cheap irony". However, I don't think it's an accurate description of the reasoning error. It's not viewed as the same problem overall, because the topic isn't only blocking. It's the theories of why the blocking is being done, and who is doing it, to whom.
The basic idea, way back in the olden days, was that through the use of magic, err, I mean technology, each person could have their own Internet environment perfectly tuned as they wanted it, and with no "social" aspects necessary (here meaning g-guuhh-guh-government, a word one was supposed to gasp and spit when uttering). What was never supposed to be said then, was that for the case of censorware, it was in fact NOT a situation of a person having their own environment, but of a third-party imposing restrictions on another person, said person presumably actively trying to escape. There was a very weird doublethink going on, where the Internet was supposed to be at the same time 1) uncensorable and 2) very easy to control. With the answer depending on whether it was governments or parents doing the controlling.
But with spam, it really is a matter of a person controlling what they themselves want to see. So someone can believe censorware doesn't work because control magic (protection-from-sex) will fail when cast on a resisting third-party, but such control magic (ward-against-spammers) will succeed when being cast on oneself. And this set of beliefs is even more consistent with the old Net ethos, in fact it might be said to define it.
Moreover, it's important to understand that the blocking theory of censorware is different from spam-killing. In general, there's an idea that censorware is "filtering" out "harmful" material, where even one exposure can be profoundly harmful. Whereas with spam, the problem is nuisance. From this viewpoint, censorware must be far more effective than a spam-killer. A censorware program which was theoretically perfect, except for the flaw that the subject could find just a single unblocked sex site each day, would be near useless. Whereas a spam-killer which was theoretically perfect, except for the flaw that each user had to deal with just a single spam slipping through each day, would be a great help.
Fundamentally, censorware is a content issue, while spam is an amount issue.
So it's not inconsistent for someone to think censorware can't work to the level needed, but spam-killing can do so, because of this content-vs-amount difference.
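A back-of-the-envelope sketch makes the asymmetry concrete (all numbers invented purely for illustration):

```python
# Back-of-the-envelope sketch of the content-vs-amount asymmetry.
# All numbers are invented for illustration.
block_rate = 0.999  # suppose both tools stop 99.9% of targeted items

# Censorware: a determined subject probing for unblocked sex sites.
attempts_per_day = 1000
exposures_per_day = attempts_per_day * (1 - block_rate)
print(f"unblocked sites found per day: {exposures_per_day:.0f}")  # ~1: censorware has failed

# Spam-killing: a flood of spam hitting one inbox.
spams_per_day = 1000
leaks_per_day = spams_per_day * (1 - block_rate)
print(f"spams slipping through per day: {leaks_per_day:.0f}")  # ~1: the filter is a great help
```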
I believe the no-technical-solution-to-a-social-problem flaw is deeper. The "cheap irony" doesn't apply, because people aren't necessarily reasoning inconsistently when they think censorware will fail (third-party control, focused on "harmful" content) while spam-killing can work (first-party control, focused on the level of nuisance). When viewed this way, there's a world of difference.
However, the problem is that in spam, the spammer wants to escape the control of the program! That's where the social vs. technical fallacy lies. The attack is coming from the "other side" of the system.
I do believe the idea of a simple technical solution to spam is almost certainly wrong, though, just as in censorware. Because in both situations there are parties who want to break the technical system, driven by some of the strongest motivations of humanity (sex, in the case of censorware; money, in the case of spam).
Edward Felten kindly mentions my message SpamAssassin and Crypto-Gram and remarks in part:
I'm amazed at the number of people who scoff at the feasibility of automated Web-porn filtering, while simultaneously putting their faith in automated spam filtering.
Uh-oh. Before I get too deeply into the spam-wars, I'd better say something about the word "filtering" here. I dislike the word "filtering", because it's used for several different situations, which are fundamentally distinct problems.
The distinction between keeping people from something they want to read, and forcing on people something they don't want to read, makes the problems architecturally different. Stamping out wanted sexual material isn't quite the same problem as keeping a flood of unwanted ads out of one's face. Nobody thinks reading just one generic spam will cause them severe developmental harm. So the comparison isn't quite so simple.
I think any divide is more that, in general, some people believe there's a technical solution to a social problem, and others believe this can't be done. This holds whether the problem is content prohibited by a third party, or material unwanted by the reader. I'm in the can't-be-done camp (by purely technical means), and I deride the other side as believers in magic.
Some people on that other side tend to get v-e-r-y upset if you write anything which implies that the magic doesn't work. In part, I think because they've invested themselves into an idea that "Magic is the solution!". And if you say it isn't, well, maybe you're a crabby mundane person who is jealous of the happy magic-workers. Or perhaps you even want people to suffer, because, turning it around, you're invested in the idea that "Magic is NOT the solution!". And then there's the argument that if people want to believe in magic, who are you to tell them such a belief is wrong - it's their affair whether the spells they try to cast, work or not.
Again, the spam-wars scare me.
Maybe I'm hypersensitive to these arguments about the word "filtering". But I still have the scars from the censorware wars.
The American Library Association (ALA) has designated September 21-28, 2002 as "Banned Book Week". This is an event to "Celebrate Your Freedom to Read".
So, as a small contribution in celebration of the freedom to read, here are some book-related websites likely to be banned in some libraries and schools, as they are all blacklisted as "Sex":
"Banned Websites Week" - book sites as SEX (SmartFilter)
Nowadays, book-banning has moved into the modern age. With a Federal censorware law (CIPA) affecting schools and libraries, the freedom to read, when the reading is done on a computer screen rather than paper, is arguably being extensively threatened.
Schneier's Crypto-Gram is getting flagged as spam by Razor. The reason is that some spam-detecting software will try to automatically detect spam and then automatically report it. So somebody's SpamAssassin mistakenly concludes that a copy of Crypto-Gram is spam and reports it to Razor, and this happens a few times over; now everyone who uses Razor will automatically be advised that Razor considers Crypto-Gram to be spam!
I've been looking at SpamAssassin, and indeed, it does flag the latest Crypto-Gram Newsletter as spam, given the default threshold. Here's which tests are being triggered (information given by SpamAssassin) and why (information not given by SpamAssassin, but which can be found by simple investigation, since it's open-source). This is from version 2.31:
SPAM: DOUBLE_CAPSWORD (1.1 points) BODY: A word in all caps repeated on the line
"Boolean functions of AES, which could possibly be used to break AES. But"
"called BES that treats each AES byte as an 8-byte vector. BES operates on"
"A new company, PGP Corp., has purchased PGP from Network Associates."
SPAM: PORN_10 (0.6 points) BODY: Uses words and phrases which indicate porn
"by pedophiles, child pornographers, cultists, occultists, drug pushers and"
SPAM: ONE_HUNDRED_PC_FREE (3.4 points) BODY: No such thing as a free lunch
"There's a new Twofish C library, written by Niels Ferguson. The main differences with existing code available is that this one is fully portable, easy to integrate, well documented, and contains extensive self-tests. And it's 100% free."
SPAM: PORN_3 (0.5 points) Uses words and phrases which indicate porn
(?i-xsm:\bporn) : "by pedophiles, child pornographers, cultists, occultists, drug pushers and"
(?i-xsm:\bsex+) : "with the sexual words you'd expect -- I won't print them because too many"
(?i-xsm:\blive) : "complex machinery. Their primary duty is to protect the lives and"
(?i-xsm:\baction) : "any criminal or civil action for disabling, interfering with,"
So, more than 5 points ... SPAM (at default levels)
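For illustration, here's a minimal sketch of this style of additive rule-scoring in Python (not SpamAssassin's actual Perl engine; the rule names and point values are from the output above, but the patterns are simplified stand-ins, and the real PORN_3 test is more involved than "any one pattern matches"):

```python
import re

# Minimal sketch of SpamAssassin-style additive scoring.
RULES = [
    ("DOUBLE_CAPSWORD", 1.1, [re.compile(r"\b([A-Z]{2,})\b.*\b\1\b")]),
    ("PORN_10", 0.6, [re.compile(r"pornographers", re.I)]),
    ("ONE_HUNDRED_PC_FREE", 3.4, [re.compile(r"100%\s+free", re.I)]),
    ("PORN_3", 0.5, [re.compile(r"\bporn", re.I), re.compile(r"\bsex+", re.I),
                     re.compile(r"\blive", re.I), re.compile(r"\baction", re.I)]),
]
THRESHOLD = 5.0  # SpamAssassin's default

def score(body: str) -> float:
    """Sum the points of every rule that fires on any line of the body."""
    total = 0.0
    for name, points, patterns in RULES:
        if any(p.search(line) for p in patterns for line in body.splitlines()):
            print(f"fired {name} ({points} points)")
            total += points
    return total

sample = """Boolean functions of AES, which could possibly be used to break AES. But
by pedophiles, child pornographers, cultists, occultists, drug pushers and
documented, and contains extensive self-tests. And it's 100% free."""
total = score(sample)
print("SPAM" if total > THRESHOLD else "not spam", f"({total:.1f} points)")
```

Run on those innocuous Crypto-Gram lines, the sketch accumulates 5.6 points and crosses the default threshold, exactly the failure mode shown above.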
This is not good.
A while ago, I wrote an essay: "The Internet and the Journalistic Pyramid".
The point is that the Internet arguably shifted slots on the "Journalistic Pyramid", but it's still a pyramid.
The number of hits on my recent anti-censorware material, "SmartFilter stupidity - school sites as SEX", is around 300. There's more readership than that figure shows. But it sure isn't much of an audience overall, sigh.
[I wrote this for a library list]
Next week is "Banned Books Week". One idea I've had is that censorware issues seem a very natural fit here, to have "Banned Websites" too. I've thought this would be a good way to talk about censorware in a civil-libertarian framework.
Last year, I was planning to send out brand-new lists of sites banned by censorware, on every day of the week. But then came the September 11 events, so there was no interest in such lists. This year, there still doesn't seem to be much interest, or maybe I'm ill-situated to do that PR.
For all my technical expertise, I don't think I'm skilled in pulling off any sort of "Banned Websites Week" campaign. So I'll just toss this concept out to the list to see if anyone would like to refine or somehow implement the idea.
Andy Oram has some coverage of yesterday's censorware press conference, reporting on the Boston event (where, again, I spoke). His article is:
Internet filtering hurts those who are least able to protest it
Great write-up. And I'm happy to be mentioned:
One of the best spokesmen concerning censorware is the one who knows the code: Seth Finkelstein, who won the 2001 EFF Pioneer Award for deciphering several filtering programs. Seth is a crackerjack programmer who ought to be earning six figures somewhere. But the modest publicity he got for the EFF Award did not translate into job prospects, and he can't publish much of his research because he'll be sued by censorware companies angry at having their operations revealed.
Almost nobody showed up to the Boston press conference.
That was disappointing.
On September 18 2002, several civil-liberties groups are sponsoring an event focusing on the impact of a Federal censorware law (CIPA) as applied to schools:
School Communities Give Internet Filtering Law Failing Grade
Research Reports Thousands of Sites Incorrectly Blocked
I'm speaking at the Boston-area press conference. So for talking-points, I decided to bring a few more egregious examples to the party. The twist here is that these aren't websites useful in school, but are schools or school-related themselves. And they are all blacklisted as "Sex".
For more, read
SmartFilter stupidity - school sites as SEX
The Online Policy Group has just issued a press release announcing the latest results of a censorware investigation which shows ... drumroll ... "Research Reports Thousands of Sites Incorrectly Blocked"
School administrators, along with students, teachers, parents, and school librarians, in San Francisco, New York, and Boston will speak out on September 18 against federal mandates for Internet blocking or filtering software in public schools.
Disclaimer: I'm a speaker, and not disinterested, due to my own anticensorware investigations.
At the risk of repeating myself, I'd like to make one comment about something Ed Felten just said - "... and that what Lessig calls "token based" DRM is a lesser evil than what he calls "copy protection"".
Voting, for example, is an exclusive "or" - that is, one candidate winning means that all the other candidates lose. But here, the control systems being discussed don't have the property that implementing one means the others will not also be in force. Indeed, it's entirely possible for the ultimate result to be both evils. In fact, object-control plus network-control work together in a very natural belt-and-suspenders fashion.
And this makes a great deal of sense from a Congressional standpoint too. I don't think this discussion has intense politics behind it. But I'd worry if people seriously seemed to get caught up in the idea of actually advocating object-control as a way of supposedly warding-off network-control. I don't think that's being seriously advocated now, just speculated in an academic sense. But I still have the scars from the censorware wars. Beware seductive theory.
Regarding Felten's comments on what is an "end-to-end argument", I took Lessig's reference to "network design" not to be about re-engineering TCP/IP. Instead, I believe the idea was that IF the media industry were given object-control, THEN they'd be happy to go away and not bother about Napster or Aimster or similar, not be concerned about sharing systems. Because they would then feel secure (pun intended) that whatever those sharing systems exchanged, the object-control would prevent unauthorized use. I take this from where Lessig says: "if a technology could control who used what content, there would be little need to control how many copies of that content lived on the Internet"
But to point out the flaw in the above proposition another way: the statement seems to conflate "content" with "objects". That is, there might be official versions of a song which are controlled objects. But you can be sure, since bootlegs existed even before computers, there will be many, many unapproved versions in circulation. The technology can control who uses what objects. But that's not the same as controlling content.
There's no contradiction at all here in terms of the "end-to-end argument". Felten: "If copy-protection is to have any hope at all of working, it must operate on the end hosts". Right. I think Lessig agrees, roughly. The argument is: put the control inside the machines (via an operating system or hardware which examines objects), AND THEN there will be no problem with the Napster-ilk or other network-based exchange innovations, since the content industry will be able to "trust" that the sharing of controlled content will be prevented (Lessig: "A different DRM would undermine that push").
But, per Felten: "It must try to keep Aimster ... from getting access to files containing copyrighted material". Right also. That's the flaw in the object-control argument. Because if "wild" objects can still be used and shared, then the network is just as much a threat as before, and still needs to be controlled too (as in Aimster is still a problem).
It's not so much about "end-to-end", but coming to a bad end.
I've been reading Lessig's article on Digital Rights Management, Anti-trusting Microsoft, and various comments on it. I found the article very clear. Let me try to boil it down, in my prosaic paraphrase. I believe the key ideas are as follows:
1) Usage control can be either object-based or network-based.
2) IF control is object-based, THEN it doesn't have to be network-based.
3) Coming from Microsoft doesn't automatically make it a bad idea.
In some reactions, I'd say too much emphasis is being placed on aspect #3. Now, being suspicious of anything from Microsoft is formally an ad-hominem argument, though that suspicion is also prudent. This Microsoft element is generating much attention, since it's at the start of the article, expressed in a humorous way, and has the word "Microsoft" in it. It's great pundit-fodder, inviting questions of how truly evil Microsoft is in the first place, whether it's thought to be more evil than it deserves, and whether such a stench of evil is clouding our perceptions.
However, this isn't the fundamental problem with the piece, as I see it. The difficulty is in aspect #2. That portion is an appealing thought. The argument runs IF, IF, IF, the desired usage control is put in objects THEN THEN THEN, the network control is unnecessary.
It's such a seductive proposition. I've seen the idea so many times in various contexts. Years ago, it was roughly the same scheme of argument I called censorware-is-our-saviour, during the time censorware was being promoted by some people as a "solution" to censorship laws. Implement control locally, it's thought, and the powers at issue will let the global net alone.
Every time I see one of these arguments, I have the same question:
Show me that the other side believes it.
Not an assertion that the other side should accept it, based on the theory which has been elucidated. No, no, no, that is not my question. Don't repeat back to me the theory of why they'll be happy; I understood the theory. Rather, show me some evidence that the other side does in fact consider this enough. Because perhaps the theory is wrong. Here, perhaps they won't consider object-control to be sufficient, and will rather take it as precedent for network-control in addition.
And that's the subtle flaw in aspect #1. The argument is:
1) Usage control can be either object-based OR network-based.
I think the reality is best rendered:
1') Usage control is desired as object-based AND network-based.
The theory fails in the same way for all these types of arguments - they start out by setting up two things as opposites (object versus network), which the other side sees as complements (object plus network). In programming terms, it argues an exclusive "or", where the opponent believes in an inclusive "and".
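For the programmers, a toy rendering of that mismatch (purely my own sketch, nothing more):

```python
# Toy rendering of the mismatch between the two models of control.
def control_as_theorized(object_ctrl: bool, network_ctrl: bool) -> bool:
    # The advocates' model: the regimes are alternatives - one or the
    # other, not both (exclusive "or").
    return object_ctrl ^ network_ctrl

def control_as_desired(object_ctrl: bool, network_ctrl: bool) -> bool:
    # What the content industry's incentives suggest: belt and
    # suspenders - both at once ("and").
    return object_ctrl and network_ctrl

# Under the theory, having both is not a sensible outcome;
# under the incentives, having both is exactly the goal.
print(control_as_theorized(True, True))  # False: the theory says this won't happen
print(control_as_desired(True, True))    # True: both controls, together
```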
What I think will happen is that if object-control is implemented, then the lack of network-control will be viewed as a threat. Since, unless the machine is limited to using only those objects which are "domesticated", those which are "wild" will proliferate. That is, all the P2P music and video trading will still be a "problem", just using one-generation-down "wild" copies made from speakers or screens, or otherwise "cracked".
In fact, the fallacy is very clear from thinking back to the days of copy-protected software packages (object control). That didn't stop all the illegal file-trading sites (uncontrolled network) - they tended to be full of "cracked" copies (uncontrolled objects). And sometimes the "cracked" copies were even preferred by legitimate users, since they were often less hassle overall to back up and re-install. I can hear Jack Valenti now, saying something along the lines of "the open network is like a diseased sewer which threatens the sterile environment of the industry".
Moreover, there is a terrible social cost attached to such an argument. If people pin their hopes on object-control as the answer against network-control, then the flaws in object-control - exactly those uncertified, unapproved, unMicrosoft materials - will be cast as threats to the "solution", as spoilers against the supposed means of defeating network-control.
I should stress my points here aren't particularly ideological. It's not about whether Microsoft can be trusted with power, or if open-source is good. Rather, the proposed architectural code has a subtle bug in it - it has an XOR (exclusive "or") early in its model, where the system will want an AND (i.e. "both"). We will not save the network by object sacrifice.
Geoffrey Nunberg, linguistics expert, is the subject of an interesting interview on CNET. Well worth reading for all the insights, though I note it here for the following comment about censorware:
That's very different from software that just says, "You can see this, you can't see this," and doesn't involve human review of the process. Although these (filtering) companies claim they use human review for all sites, that's just not true. And it couldn't be done, given the size of the Internet.
Nothing I say today will be meaningful.
Speaking of the finances of N2H2 (a censorware company), its business doesn't seem to be good:
N2H2 lays off 18; two executives among them
Seattle-based N2H2 Inc., a developer of Internet filtering software, has laid off 18 members of its staff, including the chief operating officer and the vice president of marketing. ...
The moves, expected to reduce operating expenses by 11 percent or $1.5 million annually, are designed to help the company achieve profitability in late fiscal 2003, N2H2 said.
There are great gems in financial documents. I found this in N2H2's Form 10-Q for August 2002.
Our filtering services have been accused of overbreadth by free speech groups.
In a recent federal court case, a federal appeals court held that certain provisions of the Children's Internet Protection Act resulted in an unconstitutional restriction of freedom of speech. These provisions required public libraries receiving federal funds to install Internet filtering programs like N2H2's on all of their computer terminals. The basis for this ruling is, in part, that such programs are overbroad in the types of speech that they filter out. This ruling is currently on appeal to the United States Supreme Court. To the extent that this decision is upheld, it will negatively impact our ability to market our products to libraries without modification, which could be time-consuming and costly.
They said it, not me ....
The song "Spam" wasn't the origin of the term for the kind of email. But it's been running through my mind today.
"Spam in the place where I live (have some more) ...
Spam in the place where I work (you're obsessed) ...
Spam any place that you are (ham and pork) ..."
The banning of sites concerned with terrorism, as "crime", in New York schools, highlights another frequent problem with censorware:
You don't know what's in a "category"!
In the censorware at issue here, I-Gear's category descriptions state:
Crime: Sites providing instructions on performing criminal activities or acquiring illegal items including defeating security, disabling, or otherwise interfering with computer systems (hacking or cracking); unauthorized use of telephone or communications equipment to place free calls or charge another's account for calls (phreaking); deactivating copy protection or registration schemes of software or hardware systems (pirating and wares); construction and/or usage of munitions such as pipe bombs, letter bombs, and land mines; and lock picking, spying, or general subterfuge and defeating of security measures.
Well, that's what they say ... but there's no little asterisk for "and anything that has the word terrorist or terrorism or anarchist etc, too many times ..."
[This was sent to a reporter regarding a story about high-school students being prevented by censorware from searching for sites concerning "terrorism"]
I noted in your article "Filters, Schools Like Oil, Water" that "Calls to the New York City Board of Education about filtering were not returned." I think the information you want is partially on the NYCENET.EDU website, in particular the page about Internet policy:
http://www.nycenet.edu/offices/diit/internet/iaup.asp#filter
Note the blacklist categories of "crime", "intolerance", and "violence". Even though the policy talks of modification for grades 9 through 12, just on inspection, it's a good bet that one of those blacklists was the problem.
I'm an expert regarding censorware, having been honored by the Electronic Frontier Foundation with an EFF Pioneer Award for my work (see http://www.eff.org/awards/20010305_pioneer_pr.html). The program in this case is I-Gear, which I've analyzed. Without getting into the technical details, I can confirm from my last analysis that the word "terrorism" (and also "terrorist") is blacklisted by I-Gear in the "crime" category. So searching for sites about "terrorism" will likely be banned.
It's often easy to find out which blacklists are in use by I-Gear. Just try "http://www.anonymizer.com". That's an anonymity site. Those kinds of sites - privacy, anonymity, language translation, etc. - are banned in all blacklists, because the sites represent a "loophole" in the control of censorware (see my report on this topic, BESS's Secret LOOPHOLE, http://sethf.com/anticensorware/bess/loophole.php). Unless the display has been changed, I-Gear gives the various blacklists which cause a site to be banned. So "http://www.anonymizer.com" should return all blacklists in use, since it's in every blacklist.
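As a purely hypothetical sketch of that probe (the category list, the page-scanning approach, and the assumption that the block page names its categories in plain text are all mine; I-Gear's actual block-page format may differ):

```python
import urllib.request

# Hypothetical sketch of the probe described above: request a site that
# is on every blacklist, then look for I-Gear category names in whatever
# block page comes back. The category names and the page-scanning
# approach are assumptions, not I-Gear's documented behavior.
CATEGORIES = ["sex", "crime", "intolerance", "violence"]

def blacklists_in_use(url: str = "http://www.anonymizer.com") -> list[str]:
    with urllib.request.urlopen(url, timeout=10) as resp:
        page = resp.read().decode("latin-1", errors="replace").lower()
    return [c for c in CATEGORIES if c in page]

print("Blacklists apparently in use:", blacklists_in_use())
```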
Feel free to contact me if you'd like further information.
[Entry update: The I-Gear censorware has the word "anarchist" in the crime category too, and rates it even worse than the word "terrorist"]
In the context of China banning the Google search engine, Edward W. Felten's Freedom To Tinker blog kindly mentions my anticensorware work exposing how censorware is impelled to ban caches, anonymizers, translation sites, etc. No matter how many times I say this, it's still relevant: Censorware is about control. It is not a "filter". It is about controlling what people are permitted to read. The best public-relations move the censorware companies ever achieved, was to get their product called a "filter". Because that focuses attention on a mental model of bad, harmful, dangerous material, and a claim to be "filtering" it out. That is a very different view, in contrast, to focusing on a need to control people, and how the blinder-box must be constructed so that the subject can never escape from the control.
"The world of computer communications, however, has turned out to be the great equalizer. Suddenly anyone can become a publisher, reporter, or editorialist. What's more, each of us has as good a chance of being heard as anyone else in the electronic community."Mike Godwin
(in case it isn't clear, I'm quoting this
very ironically - Seth Finkelstein)